Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time

Chi-Yeh Chen

arXiv: http://arxiv.org/abs/2307.04107v1 (cs.DS)
This paper focuses on the problem of coflow scheduling with precedence constraints in identical parallel networks, which is a well-known 𝒩𝒫-hard problem. Coflow is a relatively new network abstraction used to characterize communication patterns in data centers. Both flow-level scheduling and coflow-level scheduling problems are examined, with the key distinction being the scheduling granularity. The proposed algorithm effectively determines the scheduling order of coflows by employing the primal-dual method. When considering workload sizes and weights that are dependent on the network topology in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG). Additionally, when taking into account workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight. For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent. Moreover, when considering workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Scheduling algorithms, approximation algorithms, coflow, precedence constraints, datacenter network, identical parallel network.
§ INTRODUCTION
With the evolution of technology, a large volume of computational demands has become the norm. As personal computing resources are no longer sufficient, cloud computing has emerged as a solution for accessing significant computational resources. With the increasing demand, large-scale data centers have become essential components of cloud computing. In these data centers, the benefits of application-aware network scheduling have been proven, particularly for distributed applications with structured traffic patterns <cit.>. The widespread use of data-parallel computing applications such as MapReduce <cit.>, Hadoop <cit.>, Dryad <cit.>, and Spark <cit.> has led to a proliferation of related applications <cit.>.
In these data-parallel applications, tasks can be divided into multiple computational stages and communication stages, which are executed alternately. The computational stages generate a substantial amount of intermediate data (flows) that needs to be transmitted across various machines for further processing during the communication stages. Due to the large number of applications generating significant data transmission requirements, robust data transmission and scheduling capabilities are crucial for data centers. The overall communication pattern within the data center can be abstracted by coflow traffic, representing the interaction of flows between two sets of machines <cit.>.
A coflow refers to a set of interconnected flows, where the completion time of the entire group depends on the completion time of the last flow within the set <cit.>. Previous studies related to coflows <cit.> have primarily focused on the single-core model <cit.>. However, technological advancements have led to the emergence of data centers that operate on multiple parallel networks in order to improve efficiency <cit.>. One such architecture is the identical or heterogeneous parallel network, where multiple network cores function in parallel, providing combined bandwidth by simultaneously serving traffic.
This study addresses the problem of coflow scheduling with precedence constraints in identical parallel networks. The objective is to schedule these coflows in the parallel networks in a way that minimizes the weighted total completion time of coflows. We consider both flow-level scheduling and coflow-level scheduling. In the flow-level scheduling problem, flows within a coflow can be distributed across different network cores. Conversely, in the coflow-level scheduling problem, all flows within a coflow are required to be transmitted in the same network core. The key difference between these two problems lies in their scheduling granularity. The coflow-level scheduling problem, being a coarse-grained scheduling, can be quickly solved but yields relatively poorer results. On the other hand, the flow-level scheduling problem, being a fine-grained scheduling, takes more time to solve but produces superior scheduling results. It is worth noting that, although these two problems exhibit differences in time complexity when solved using linear programming, in the case of the flow-level scheduling problem using the primal-dual method, the decision of scheduling flows is transformed into the decision of scheduling coflows. This transformation leads to the solving time being equivalent to that of the coflow-level scheduling problem.
§.§ Related Work
The concept of coflow abstraction was initially introduced by Chowdhury and Stoica <cit.> to characterize communication patterns within data centers. The scheduling problem for coflows has been proven to be strongly 𝒩𝒫-hard, indicating the need for efficient approximation algorithms rather than exact solutions. Since the concurrent open shop problem reduces easily to coflow scheduling (it corresponds to instances in which only the diagonal entries of the demand matrix are nonzero), and since approximating concurrent open shop within a factor better than 2-ϵ is 𝒩𝒫-hard <cit.>, the same hardness carries over to the coflow scheduling problem.
Since the proposal of the coflow abstraction, extensive research has been conducted on coflow scheduling <cit.>. Qiu et al. <cit.> presented the first deterministic polynomial-time approximation algorithm, with a ratio of 67/3. Subsequently, Ahmadi et al. <cit.> proved that the technique proposed by Qiu et al. <cit.> actually yields only a deterministic 76/3-approximation algorithm for coflow scheduling with release times.
Khuller et al. <cit.> also proposed an approximation algorithm for coflow scheduling with arbitrary release times, achieving a ratio of 12.
Recent research by Shafiee and Ghaderi <cit.> has resulted in an impressive approximation algorithm for the coflow scheduling problem, achieving an approximation ratio of 5. Additionally, Ahmadi et al. <cit.> have made significant contributions to this field by proposing a primal-dual algorithm that enhances the computational efficiency of coflow scheduling.
In the coflow scheduling problem within a heterogeneous parallel network, Huang et al. <cit.> introduced an O(m)-approximation algorithm, where m represents the number of network cores. On the other hand, Tian et al. <cit.> were the first to propose the problem of scheduling coflows of multi-stage jobs, and they provided an O(N)-approximation algorithm, where N represents the number of servers in the network. Furthermore, Shafiee and Ghaderi <cit.> proposed a polynomial-time algorithm that achieves an approximation ratio of O(χ̃log(N)/log(log(N))), where χ̃ denotes the maximum number of coflows in a job.
§.§ Our Contributions
This paper focuses on addressing the problem of coflow scheduling with precedence constraints in identical parallel networks and presents a range of algorithms and corresponding results. The specific contributions of this study are outlined below:
* When considering workload sizes and weights that are dependent on the network topology in the input instances, the proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG).
* When taking into account workload sizes that are topology-dependent, the proposed algorithm for flow-level scheduling problem achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight.
* For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent.
* When considering workload sizes that are topology-dependent, the algorithm for the coflow-level scheduling problem achieves an approximation ratio of O(Rmχ).
* In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ).
A summary of our theoretical findings is provided in Table <ref> where TDWS stands for topology-dependent workload sizes, while TDW stands for topology-dependent weights.
§.§ Organization
The structure of this paper is outlined as follows. In Section <ref>, an introduction is provided, covering fundamental notations and preliminary concepts that will be referenced in subsequent sections. Following that, the primary algorithms are presented in the following sections: Section <ref> provides an overview of the algorithm addressing the flow-level scheduling problem, while Section <ref> elaborates on the algorithm designed for the coflow-level scheduling problem. To address the scheduling problem for the coflows of multi-stage jobs, our algorithm is discussed in Section <ref>. In Section <ref>, a comparative analysis is conducted to evaluate the performance of our proposed algorithms in comparison to the previous algorithm. Lastly, in Section <ref>, our findings are summarized and meaningful conclusions are drawn.
§ NOTATION AND PRELIMINARIES
The identical parallel network consists of a collection of m non-blocking switches, each with dimensions of N × N. These switches form the infrastructure of the network, where N input links are connected to N source servers, and N output links are connected to N destination servers. These switches serve as practical and intuitive models for the network core. Network architectures such as Fat-tree or Clos <cit.> can be employed to construct networks that provide complete bisection bandwidth. In this configuration, each switch's i-th input port is connected to the i-th source server, and the j-th output port is connected to the j-th destination server. Consequently, each source server (or destination server) has m simultaneous uplinks (or downlinks), where each link may consist of multiple physical connections in the actual network topology <cit.>. Let ℐ denote the set of source servers, and 𝒥 denote the set of destination servers. The network core can be visualized as a bipartite graph, with ℐ on one side and 𝒥 on the other. For simplicity, we assume that all network cores are identical, and the links within each core have the same capacity or speed.
A coflow is a collection of independent flows, and the completion time of a coflow is determined by the completion time of the last flow in the set, making it a critical metric for evaluating the efficiency of data transfers. The demand matrix D^(k)=(d_i,j,k)_i,j=1^N represents the specific data transfer requirements within coflow k. Each entry d_i,j,k in the matrix corresponds to the size of the flow that needs to be transmitted from input i to output j within the coflow. In the context of identical network cores, the flow size can be interpreted as the transmission time, as all cores possess the same capacity or speed. This simplification allows for easier analysis and optimization of coflow scheduling algorithms. To facilitate efficient management and routing of flows, each flow is identified by a triple (i, j, k), where i represents the source node, j represents the destination node, and k corresponds to the coflow. This identification scheme enables precise tracking and control of individual flows within the parallel network.
Furthermore, we assume that flows are composed of discrete data units, resulting in integer sizes. For simplicity, we assume that all flows within a coflow are simultaneously initiated, as demonstrated in <cit.>.
This paper investigates the problem of coflow scheduling with release times and precedence constraints. The problem involves a set of coflows denoted by 𝒦, where coflow k is released into the system at time r_k. The completion time of coflow k, denoted as C_k, represents the time required for all its flows to finish processing. Each coflow k∈𝒦 is assigned a positive weight w_k. Let R be the ratio between the maximum weight and the minimum weight. The relationships between coflows can be modeled using a directed acyclic graph (DAG) G=(𝒦, E), where an arc (k', k)∈ E and k', k∈𝒦 indicate that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. This relationship is denoted as k'≺ k. The DAG has a coflow number of χ, which represents the length of the longest path in the DAG. The objective is to schedule coflows in an identical parallel network, considering the precedence constraints, in order to minimize the total weighted completion time of the coflows, denoted as ∑_k∈𝒦 w_kC_k. For clarity, different subscript symbols are used to represent different meanings of the same variables. Subscript i represents the index of the source (or input port), subscript j represents the index of the destination (or output port), and subscript k represents the index of the coflow. For instance, ℱ_i denotes the set of flows with source i, and ℱ_j represents the set of flows with destination j. The symbols and terminology used in this paper are summarized in Table <ref>.
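For illustration, the coflow number χ can be computed by a longest-path dynamic program over a topological order of G. The sketch below uses hypothetical names and is not part of the proposed algorithms; it assumes coflows are indexed 0, …, n-1.

```python
from collections import defaultdict, deque

def coflow_number(num_coflows, edges):
    """Length (in coflows) of the longest path in the precedence DAG G=(K, E).

    `edges` contains pairs (k_prime, k) meaning coflow k_prime must finish
    before coflow k can start.  Returns chi, counting coflows on the path.
    """
    succ = defaultdict(list)
    indeg = [0] * num_coflows
    for kp, k in edges:
        succ[kp].append(k)
        indeg[k] += 1

    # Kahn's algorithm: process coflows in topological order.
    queue = deque(k for k in range(num_coflows) if indeg[k] == 0)
    longest = [1] * num_coflows          # longest path (in coflows) ending at k
    while queue:
        kp = queue.popleft()
        for k in succ[kp]:
            longest[k] = max(longest[k], longest[kp] + 1)
            indeg[k] -= 1
            if indeg[k] == 0:
                queue.append(k)
    return max(longest) if num_coflows else 0

# Example: precedence 0 -> 1 -> 3 and 0 -> 2 gives chi = 3.
print(coflow_number(4, [(0, 1), (1, 3), (0, 2)]))
```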
§ APPROXIMATION ALGORITHM FOR THE FLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the flow-level scheduling problem, which allows for the transmission of different flows within a coflow through distinct network cores. We assume that coflows are transmitted at the flow level, ensuring that the data within a flow is allocated to the same core. We define ℱ_i as the collection of flows with source i, represented by ℱ_i={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ j∈𝒥}, and ℱ_j as the set of flows with destination j, given by ℱ_j={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ i∈ℐ}. For any subset S⊆ℱ_i (or S⊆ℱ_j), we define d(S)=∑_(i, j, k)∈ S d_i,j,k as the sum of data size over all flows in S and d^2(S)=∑_(i, j, k)∈ S d_i,j,k^2 as the sum of squares of data size over all flows in S. Additionally, we introduce the function f(S) as follows:
f(S) = (d(S)^2+ d^2(S))/(2m).
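For illustration, f(S) can be evaluated directly from the flow sizes in S; the helper below is a hypothetical sketch, not part of the proposed algorithms.

```python
def f(sizes, m):
    """Lower-bound function f(S) = (d(S)^2 + d^2(S)) / (2m) for a set of flow sizes."""
    d_S = sum(sizes)                    # d(S): total size of the flows in S
    d2_S = sum(x * x for x in sizes)    # d^2(S): sum of squared flow sizes
    return (d_S ** 2 + d2_S) / (2 * m)

# Example: two unit-size flows with m = 2 cores gives f = (4 + 2) / 4 = 1.5.
print(f([1, 1], m=2))
```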
The flow-level scheduling problem can be formulated as a linear programming relaxation, which is expressed as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ C_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ r_k+d_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ C_k'+d_i,j,k, ∀ k, k'∈𝒦:k'≺ k,
∀ i∈ℐ, ∀ j∈𝒥
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ i∈ℐ, ∀ S⊆ℱ_i
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ j∈𝒥, ∀ S⊆ℱ_j
In the linear program (<ref>), the variable C_k represents the completion time of coflow k in the schedule, and C_i,j,k denotes the completion time of flow (i, j, k). Constraint (<ref>) specifies that the completion time of coflow k is bounded by the completion times of all its flows, ensuring that no flow finishes after the coflow. Constraint (<ref>) guarantees that the completion time of any flow (i, j, k) is at least its release time r_k plus the time required for its transmission. To capture the precedence constraints among coflows, constraint (<ref>) indicates that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. Constraints (<ref>) and (<ref>) introduce lower bounds on the completion time variables at the input and output ports, respectively.
We define L_i,S,k as the sum of the loads on input port i for coflow k in the set S. Similarly, L_j,S,k represents the sum of the loads on output port j for coflow k in the set S. To formulate the dual linear program, we have the following expressions:
L_i,S,k =∑_(i',j',k')∈ S|i'=i,k'=kd_i',j',k',
L_j,S,k =∑_(i',j',k')∈ S|j'=j,k'=kd_i',j',k'.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_k, ∀ k∈𝒦
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ,
∀ j∈𝒥
It is important to note that each flow (i, j, k) is associated with a dual variable α_i, j, k, and for every coflow k, there exists a corresponding constraint. Additionally, for any subset S ⊆ℱ_i (or S ⊆ℱ_j) of flows, there exists a dual variable β_i, S (or β_j, S). To facilitate the analysis and design of algorithms, we define γ_k', k as the sum of γ_k', i, j, k over all input ports i and output ports j in their respective sets ℐ and 𝒥:
γ_k', k=∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k.
Significantly, the cost of any feasible dual solution provides a lower bound on OPT, the cost of an optimal solution: whatever objective value a feasible dual solution attains, the optimum cannot be lower.
The primal-dual algorithm, as depicted in Algorithm <ref> in Appendix <ref>, is inspired by the work of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left to determine the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration decides which of the dual variables α, β, or γ to increase, guided by the dual linear programming (LP) formulation. The algorithm has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is sufficiently large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if L_μ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
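For illustration, the sketch below conveys the ordering logic just described in a simplified form: it drops the γ adjustments for precedence chains and the per-flow α bookkeeping, and keeps only the choice between raising α (when the latest release time dominates) and raising β on the most loaded port. All names are hypothetical; Algorithm <ref> in Appendix <ref> remains the authoritative procedure.

```python
def primal_dual_order(release, weight, load_in, load_out, m, kappa=0.5):
    """Simplified primal-dual ordering sketch (no precedence handling).

    release[k], weight[k]          : release time and weight of coflow k
    load_in[i][k], load_out[j][k]  : load of coflow k on input port i / output port j
    Returns a permutation of coflow indices, first-scheduled first.
    """
    n = len(weight)
    unscheduled = set(range(n))
    residual = list(map(float, weight))      # remaining dual slack of each coflow
    order = []
    while unscheduled:
        k_late = max(unscheduled, key=lambda k: release[k])
        # Bottleneck port: largest total remaining load over unscheduled coflows.
        candidates = [(sum(load_in[i][k] for k in unscheduled), 'in', i)
                      for i in range(len(load_in))]
        candidates += [(sum(load_out[j][k] for k in unscheduled), 'out', j)
                       for j in range(len(load_out))]
        L_mu, side, mu = max(candidates)
        if release[k_late] > kappa * L_mu / m:
            chosen = k_late                  # "raise alpha": the late coflow goes last
        else:
            loads = load_in[mu] if side == 'in' else load_out[mu]
            active = [k for k in unscheduled if loads[k] > 0]
            # "raise beta" on port mu: the coflow whose constraint gets tight first goes last
            chosen = min(active, key=lambda k: residual[k] / loads[k]) if active else k_late
            if active:
                delta = residual[chosen] / loads[chosen]
                for k in unscheduled:
                    residual[k] -= delta * loads[k]
        order.append(chosen)                 # placed last among the remaining coflows
        unscheduled.remove(chosen)
    order.reverse()
    return order
```

With precedence constraints, Algorithm <ref> additionally walks down from the tight candidate to a successor-free coflow t_1 and charges the difference to the γ variables, as described above.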
The flow-driven-list-scheduling algorithm, as depicted in Algorithm <ref>, leverages a list scheduling rule to determine the order of coflows to be scheduled. In order to provide a clear and consistent framework, we assume that the coflows have been pre-ordered based on the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. Thus, the coflows are scheduled sequentially in this predetermined order.
Within each coflow, the flows are scheduled based on a non-increasing order of their sizes, breaking ties arbitrarily. Specifically, for every flow (i, j, k), the algorithm identifies the least loaded network core, denoted as h^*, and assigns the flow (i, j, k) to this core.
The algorithm's steps involved in this assignment process are outlined in lines <ref>-<ref>.
A flow is deemed "ready" for scheduling only when all of its predecessors have been fully transmitted. The algorithm then proceeds to schedule all the flows that are both ready and have been released but remain incomplete. These scheduling steps, encapsulated in lines <ref>-<ref>, have been adapted from the work of Shafiee and Ghaderi <cit.>.
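For illustration, a static sketch of this assignment rule is given below. It ignores release times and the precedence-induced waiting handled by the adapted steps above, uses accumulated port loads as a rough proxy for completion times, and all names are hypothetical.

```python
def flow_driven_assignment(order, flows, m, N):
    """Static sketch of flow-level assignment: follow the coflow order and send
    each flow, largest first, to the core whose two relevant ports are least loaded.

    flows[k] is a list of (i, j, size) triples for coflow k.
    """
    in_load = [[0.0] * N for _ in range(m)]     # in_load[h][i]: load on input i of core h
    out_load = [[0.0] * N for _ in range(m)]    # out_load[h][j]: load on output j of core h
    completion = {}
    for k in order:
        finish = 0.0
        for i, j, size in sorted(flows[k], key=lambda t: -t[2]):
            # least-loaded core for this (i, j) pair
            h = min(range(m), key=lambda c: max(in_load[c][i], out_load[c][j]))
            in_load[h][i] += size
            out_load[h][j] += size
            finish = max(finish, in_load[h][i], out_load[h][j])
        completion[k] = finish                   # proxy for the coflow's completion time
    return completion
```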
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(χ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
Let S_k={1, 2, …, k} denote the set of the first k coflows. Furthermore, we define S_i,k as the set of flows from the first k coflows at input port i. Formally, S_i,k is defined as follows:
S_i,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ j∈𝒥}.
Similarly, S_j,k represents the set of flows from the first k coflows at output port j, defined as:
S_j,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ i∈ℐ}.
Let β_i,k=β_i,S_i,k and β_j,k=β_j,S_j,k. These variables capture the dual variables associated with the sets S_i,k and S_j,k.
Moreover, we introduce the notation μ_1(k) to denote the input port with the highest load in S_k, and μ_2(k) to represent the output port with the highest load in S_k. Recall that d(S) represents the sum of loads for all flows in a subset S. Therefore, d(S_i,k) corresponds to the total load of flows from the first k coflows at input port i, and d(S_j,k) corresponds to the total load of flows from the first k coflows at output port j.
Finally, let L_i,k=∑_j∈𝒥 d_i,j,k denote the total load of flows from coflow k at input port i, and L_j,k=∑_i∈ℐ d_i,j,k denote the total load of flows from coflow k at output port j.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_μ_1(k),k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_1(k),k)/m.
* For every set S_μ_2(k),k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k, r_k>κ· d(S_μ_1(k),k)/m.
* For every coflow k that has a nonzero α_1, μ_2(k), k, r_k>κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k or a nonzero α_1, μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that d(S)^2≤ 2m· f(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m+(1-2/m)C_k^*, where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ 1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k)+(1-2/m) max_i, j d_i,j,k
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f1/md(S_μ_1(q), q) + 1/md(S_μ_2(q),q) +(1-2/m) max_i, j d_i,j,q
≤ ∑_q=1^f1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k) +(1-2/m) max_i, j d_i,j,q
= f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k) +∑_q=1^f(1-2/m) max_i, j d_i,j,q
≤ f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k)+ (1-2/m) C_k^*.
When considering the release time, coflow k is transmitted starting at max_k'≤ kr_k' at the latest. This proof confirms the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to reorder coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n w_k· A +(1-2/m) ∑_k=1^n w_kC_k^*
where A=a·max_k'≤ kr_k'+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m. We have ∑_k=1^n w_k C_k^*=OPT. Now we focus on the first term ∑_k=1^n w_k· A. By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_k· A = ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k(a·r_k+2χ·r_k/κ)
≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_ℓ≤kr_ℓ+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m)
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·d(S_μ_1(k'),k')/m + 2χ·d(S_μ_1(k),k)/m)
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kd(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kd(S_μ_1(k'),k')/m
= (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'd(S_i,k')d(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'(d(S_μ_1(k'),k'))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ)∑_i ∈ℐ∑_k=1^nβ_i,kf(S_μ_1(k),k)
= 2(a·κ+2χ)∑_k=1^nβ_μ_1(k),kf(S_μ_1(k),k)
≤ 2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+2-2/m for the flow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ+1)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+(4χ+1)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+(4χ+1)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+2-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+1-2/m for the flow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2· 2χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2· 2χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ 4χ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+4χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+4χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+1-2/m)· OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
We demonstrate the case of L_μ_1(r)>L_μ_2(r), while the other case of L_μ_1(r)≤ L_μ_2(r) can be obtained using the same approach, yielding the same result. If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0.
Suppose coflow p is replaced by coflow k through the adjustment of γ_k',k.
Let
B=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k,
B_p=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,p+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,p,
H=∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k',
H_p=∑_k'∈𝒦|(k',p)∈ Eγ_k',p-∑_k'∈𝒦|(p,k')∈ Eγ_p,k',
R'=w_k/w_p.
If coflow k undergoes the adjustment of the order by setting γ_k',k, then
H = w_k-B-(L_i,k/L_i,p)(w_p-B_p-H_p)
≤ w_k-B-w_p+B_p+H_p
≤ w_k-w_p+H_p
≤ w_k-w_p
= ((R'-1)/R')w_k
≤ ((R-1)/R)w_k
The inequalities (<ref>) and (<ref>) are due to L_i,p≤ L_i,k for all i ∈ℐ. The inequality (<ref>) is due to
H_p≤ 0. Based on Lemma <ref>, we know that ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
H ≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
According to Lemma <ref>, we have
∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to Lemma <ref>, we can derive the result
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By employing proof techniques analogous to those of Theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+R+1-2/m for the flow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+1-2/m for the flow-level scheduling problem without release times.
§ APPROXIMATION ALGORITHM FOR THE COFLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the coflow-level scheduling problem, in which all flows within a coflow must be transmitted through a single core. It is important to remember that L_i,k=∑_j=1^Nd_i,j,k and L_j,k=∑_i=1^Nd_i,j,k, where L_i,k denotes the overall load at source i for coflow k, and L_j,k denotes the overall load at destination j for coflow k.
Let
f_i(S) = (∑_k∈ S L_i,k^2+(∑_k∈ S L_i,k)^2)/(2m)
and
f_j(S) = (∑_k∈ S L_j,k^2+(∑_k∈ S L_j,k)^2)/(2m)
for any subset S⊆𝒦.
To address this problem, we propose a linear programming relaxation formulation as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ r_k+L_i,k, ∀ k∈𝒦, ∀ i∈ℐ
C_k≥ r_k+L_j,k, ∀ k∈𝒦, ∀ j∈𝒥
C_k≥ C_k'+L_i,k, ∀ k, k'∈𝒦, ∀ i∈ℐ: k'≺ k
C_k≥ C_k'+L_j,k, ∀ k, k'∈𝒦, ∀ j∈𝒥: k'≺ k
∑_k∈ SL_i,kC_k≥ f_i(S) ∀ i∈ℐ, ∀ S⊆𝒦
∑_k∈ SL_j,kC_k≥ f_j(S) ∀ j∈𝒥, ∀ S⊆𝒦
In the linear program (<ref>), the completion time C_k is defined for each coflow k in the schedule. Constraints (<ref>) and (<ref>) ensure that the completion time of any coflow k is greater than or equal to its release time r_k plus its load. To account for the precedence constraints among coflows, constraints (<ref>) and (<ref>) indicate that all flows of coflow k' must be completed before coflow k can be scheduled. Additionally, constraints (<ref>) and (<ref>) establish lower bounds for the completion time variable at the input and output ports, respectively.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k+L_i,k)
+∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k+L_j,k)
+∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐγ_k', i, k L_i,k
+ ∑_(k', k) ∈ E∑_j ∈𝒥γ_k', j, k L_j,k <ref>
s.t. ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k
+∑_i ∈ℐ∑_S⊆𝒦: k∈ Sβ_i,SL_i,k
+∑_j ∈𝒥∑_S⊆𝒦: k∈ Sβ_j,SL_j,k
+∑_(k',k)∈ E∑_i ∈ℐγ_k', i, k
+∑_(k',k)∈ E∑_j ∈𝒥γ_k', j, k
-∑_(k,k')∈ E∑_i ∈ℐγ_k, i, k'
-∑_(k,k')∈ E∑_j ∈𝒥γ_k, j, k'≤ w_k, ∀ k∈𝒦
α_i, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ
α_j, k≥ 0, ∀ k∈𝒦, ∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆𝒦
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆𝒦
γ_k', i, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ
γ_k', j, k≥ 0, ∀ (k', k)∈ E, ∀ j∈𝒥
Let γ_k', k=∑_i ∈ℐγ_k', i, k+∑_j ∈𝒥γ_k', j, k. Notice that for every coflow k, there exist two dual variables α_i, k and α_j, k, and there is a corresponding constraint. Additionally, for every subset of coflows S, there are two dual variables β_i, S and β_j, S. For the precedence constraints, there are two dual variables γ_k', k and γ_k, k'. Algorithm <ref> in Appendix <ref> presents the primal-dual algorithm, which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
The coflow-driven-list-scheduling, as outlined in Algorithm <ref>, operates as follows. To ensure clarity and generality, we assume that the coflows are arranged in an order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. We schedule all the flows within each coflow iteratively, following the sequence provided by this list.
For each coflow k, we identify the network core h^* that can transmit coflow k in a manner that minimizes its completion time (lines <ref>-<ref>). Subsequently, we transmit all the flows allocated to network core h (lines <ref>-<ref>).
In summary, the coflow-driven-list-scheduling algorithm works by iteratively scheduling the flows within each coflow, following a predetermined order. It determines the optimal network core for transmitting each coflow to minimize their completion times, and then transmits the allocated flows for each core accordingly.
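For illustration, a static sketch of this rule is given below. It ignores release times and precedence-induced idling, uses accumulated port loads as a rough proxy for completion times, and all names are hypothetical.

```python
def coflow_driven_assignment(order, flows, m, N):
    """Static sketch of coflow-level assignment: each coflow is placed, in list
    order, entirely on the single core h* that minimizes its completion-time proxy.

    flows[k] is a list of (i, j, size) triples for coflow k.
    """
    in_load = [[0.0] * N for _ in range(m)]
    out_load = [[0.0] * N for _ in range(m)]
    completion = {}
    for k in order:
        add_in, add_out = [0.0] * N, [0.0] * N   # load that coflow k adds per port
        for i, j, size in flows[k]:
            add_in[i] += size
            add_out[j] += size

        def finish_on(h):
            # completion proxy of coflow k if placed on core h
            fin_in = max((in_load[h][i] + add_in[i]
                          for i in range(N) if add_in[i] > 0), default=0.0)
            fin_out = max((out_load[h][j] + add_out[j]
                           for j in range(N) if add_out[j] > 0), default=0.0)
            return max(fin_in, fin_out)

        h_star = min(range(m), key=finish_on)
        completion[k] = finish_on(h_star)
        for port in range(N):
            in_load[h_star][port] += add_in[port]
            out_load[h_star][port] += add_out[port]
    return completion
```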
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
We would like to emphasize that S_k={1, 2, …, k} represents the set of the first k coflows. We define β_i,k=β_i,S_k and β_j,k=β_j,S_k for convenience. Moreover, we define L_i(S_k)=∑_k'≤ k L_i, k' and L_j(S_k)=∑_k'≤ k L_j, k' to simplify the notation. Furthermore, let μ_1(k) denote the input port with the highest load among the coflows in S_k, and μ_2(k) denote the output port with the highest load among the coflows in S_k. Hence, we have L_μ_1(k)(S_k)=∑_k'≤ k L_μ_1(k), k' and L_μ_2(k)(S_k)=∑_k'≤ k L_μ_2(k), k'.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_1(k)(S_k)/m.
* For every set S_k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k, r_k>κ· L_μ_1(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_2(k), k, r_k>κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k or a nonzero α_μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that (∑_k∈ S L_i,k)^2≤ 2m· f_i(S) and (∑_k∈ S L_j,k)^2≤ 2m· f_j(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)), where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f L_μ_1(q)(S_q)+L_μ_2(q)(S_q)
≤ ∑_q=1^f L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
= f(L_μ_1(k)(S_k)+L_μ_2(k)(S_k))
When considering the release time, coflow k is transmitted starting at max_k'≤ kr_k' at the latest. This proof confirms the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to reorder coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k +∑_i ∈ℐ∑_S⊆𝒦: k∈ Sβ_i,SL_i,k +∑_j ∈𝒥∑_S⊆𝒦: k∈ Sβ_j,SL_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k
≤∑_k=1^n w_k·(a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
Let A=a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)). By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n(∑_i ∈ℐα_i, k+∑_k=1^n∑_j ∈𝒥α_j, k)· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐα_i, k· A+∑_k=1^n∑_j ∈𝒥α_j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·A
≤ ∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)(a·r_k+2χ·m·r_k/κ)
≤ (a+2χ·m/κ)∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k ·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_k'≤kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·L_μ_1(k)(S_k)/m + 2χ·L_μ_1(k)(S_k))
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kL_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kL_μ_1(k)(S_k)/m
= (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'L_i(S_k)L_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'(L_μ_1(k)(S_k))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ·m)∑_i ∈ℐ∑_k=1^nβ_i,kf_i(S_μ_1(k),k)
= 2(a·κ+2χ·m)∑_k=1^nβ_μ_1(k),kf_i(S_μ_1(k),k)
≤ 2(a·κ+2χ·m)∑_i ∈ℐ∑_S⊆𝒦β_i,Sf_i(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ·m)∑_j ∈𝒥∑_S⊆𝒦β_j,Sf_j(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m+1 for the coflow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(1+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m+1)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m+1)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m+1)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m+1)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ (4χ· m+1) · OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m for the coflow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ 4χ· m · OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0. If coflow k undergoes the adjustment of the order by setting γ_k',k, then we have ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ ((R-1)/R)w_k. Based on Lemma <ref>, we know that ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
According to Lemma <ref>, we have
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to Lemma <ref>, we can derive the result
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By employing proof techniques analogous to those of Theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m+R for the coflow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m for the coflow-level scheduling problem without release times.
§ COFLOWS OF MULTI-STAGE JOBS SCHEDULING PROBLEM
In this section, we will focus on addressing the coflows of multi-stage job scheduling problem. We will modify the linear programs (<ref>) by introducing a set 𝒯 to represent the jobs and a set 𝒯_t to represent the coflows that belong to job t. We will also incorporate an additional constraint (<ref>), which will ensure that the completion time of any job is limited by its coflows. Our objective is to minimize the total weighted completion time for a given set of multi-stage jobs. We assume that all coflows within the same job have the same release time. The resulting problem can be expressed as a linear programming relaxation, which is as follows:
min ∑_t ∈𝒯 w_t C_t <ref>
s.t. (<ref>)-(<ref>)
C_t≥ C_k, ∀ t∈𝒯, ∀ k∈𝒯_t
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_k∈𝒯_t∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_k∈𝒯_t∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_k∈𝒯_t∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_k∈𝒯_t∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_k∈𝒯_t∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_t, ∀ t∈𝒯
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E,
∀ i∈ℐ, ∀ j∈𝒥
Let α_i, j, t = ∑_k∈𝒯_tα_i, j, k, L_i,S,t=∑_k∈𝒯_t L_i,S,k and L_j,S,t=∑_k∈𝒯_t L_j,S,k for all t∈𝒯.
Algorithm <ref> in Appendix <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints. We transmit the jobs sequentially, and within each job, the coflows are transmitted in topological-sorting order. As the values of γ are all zero, similar to the proof of Theorem <ref>, we can obtain the following theorem. Unlike Theorem <ref>, this result is not restricted to input instances with topology-dependent workload sizes and weights.
The proposed algorithm achieves an approximation ratio of O(χ) for minimizing the total weighted completion time of a given set of multi-stage jobs.
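For illustration, the sketch below expands a job permutation into a coflow transmission order in the manner just described: jobs are served sequentially and the coflows of each job are emitted in topological order. It assumes that every precedence edge connects two coflows of the same job; all names are hypothetical rather than taken from Algorithm <ref>.

```python
from collections import defaultdict, deque

def multi_stage_order(job_order, job_coflows, edges):
    """Expand a job permutation into a coflow transmission order.

    job_order    : permutation of job identifiers
    job_coflows  : mapping job -> list of its coflow indices
    edges        : precedence pairs (k_prime, k), assumed to stay within one job
    """
    succ, indeg = defaultdict(list), defaultdict(int)
    for kp, k in edges:
        succ[kp].append(k)
        indeg[k] += 1
    order = []
    for t in job_order:
        # topological order (Kahn's algorithm) restricted to job t
        ready = deque(k for k in job_coflows[t] if indeg[k] == 0)
        while ready:
            k = ready.popleft()
            order.append(k)
            for nxt in succ[k]:
                indeg[nxt] -= 1
                if indeg[nxt] == 0:
                    ready.append(nxt)
    return order
```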
§ EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of the proposed algorithm, this section conducts simulations comparing its performance to that of a previous algorithm. Both synthetic and real traffic traces are used for these simulations, without considering release time. The subsequent sections present and analyze the results obtained from these simulations.
§.§ Comparison Metrics
Since the cost of the feasible dual solution provides a lower bound on the optimal value of the coflow scheduling problem, we calculate the approximation ratio by dividing the total weighted completion time achieved by the algorithms by the cost of the feasible dual solution.
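In code form, the reported metric is simply the following (hypothetical names):

```python
def empirical_ratio(weights, completion_times, dual_cost):
    """Empirical approximation ratio: total weighted completion time achieved by a
    schedule, divided by the cost of the feasible dual solution, which lower-bounds
    the optimum."""
    return sum(w * c for w, c in zip(weights, completion_times)) / dual_cost
```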
§.§ Randomly Generated Graphs
In this section, we examine a collection of randomly generated graphs that are created based on a predefined set of fundamental characteristics.
* DAG size, n: The number of coflows in the DAG.
* Out degree, deg: Out degree of a node.
* Parallelism factor, (p) <cit.>: The calculation of the levels in the DAG involves randomly generating a number from a uniform distribution. The mean value of this distribution is √(n)/p. The generated number is then rounded up to the nearest integer, determining the number of levels. Additionally, the width of each level is calculated by randomly generating a number from a uniform distribution. The mean value for this distribution is p ×√(n), and it is also rounded up to the nearest integer <cit.>. Graphs with a larger value of p tend to have a smaller χ, while those with a smaller value of p have a larger χ.
* Workload, (W_min, W_max, L_min, L_max) <cit.>:
Each coflow is accompanied by a description (W_min, W_max, L_min, L_max) that provides information about its characteristics. To determine the number of non-zero flows within a coflow, two values, w_1 and w_2, are randomly selected from the interval [W_min, W_max]. These values are then assigned to the input and output links of the coflow in a random manner. The size of each flow is randomly chosen from the interval [L_min, L_max]. The construction of all coflows by default follows a predefined distribution based on the coflow descriptions. This distribution consists of four configurations: (1, 4, 1, 10), (1, 4, 10, 1000), (4, N, 1, 10), and (4, N, 10, 1000), with proportions of 41%, 29%, 9%, and 21%, respectively. Here, N represents the number of ports in the core.
Let level_k denote the level of coflow k, and let Lv(k)={k'∈𝒦 | level_k < level_k'} represent the set of coflows that have a higher level than k. When constructing a DAG, only a subset of Lv(k) can be selected as successors for each coflow k. For coflow k, a set of successors is randomly chosen with a probability of deg/|Lv(k)|. To assign weights to each coflow, positive integers are randomly and uniformly selected from the interval [1, 100].
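For illustration, one possible realization of the generator described above is sketched below. The exact distributions (uniform draws with the stated means) and the level-assignment details are assumptions and are not claimed to reproduce the experimental setup exactly.

```python
import math
import random

def random_precedence_dag(n, p, deg, seed=0):
    """Sketch of the random-DAG generator: random level widths with mean p*sqrt(n),
    successors drawn from higher levels with probability deg/|Lv(k)|, and integer
    weights uniform in [1, 100].  Returns (levels, edges, weights)."""
    rng = random.Random(seed)
    levels, lvl = [], 0
    while len(levels) < n:
        # width of each level drawn with mean p*sqrt(n); the number of levels then
        # averages out to about sqrt(n)/p, matching the description above
        width = max(1, math.ceil(rng.uniform(0, 2 * p * math.sqrt(n))))
        levels += [lvl] * min(width, n - len(levels))
        lvl += 1
    weights = [rng.randint(1, 100) for _ in range(n)]
    edges = []
    for k in range(n):
        higher = [k2 for k2 in range(n) if levels[k2] > levels[k]]   # Lv(k)
        for k2 in higher:
            if rng.random() < deg / len(higher):                     # prob deg/|Lv(k)|
                edges.append((k, k2))
    return levels, edges, weights
```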
§.§ Results
Figure <ref> illustrates the approximation ratio of the proposed algorithm compared to the previous algorithm for synthetic traces. The problem size ranges from 5 to 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. The proposed algorithms demonstrate significantly smaller approximation ratios than 4χ+2-2/m. Furthermore, FDLS (the proposed flow-driven-list-scheduling algorithm) outperforms Weaver (the previous algorithm) by approximately 4.7% to 7.5% within this problem size range. Although the workload sizes and weights of these instances are not restricted to be topology-dependent, we still obtain results lower than 4χ+2-2/m. This demonstrates the excellent performance of the algorithm in general scenarios.
The effects of flow density were compared by categorizing the coflows into three instances: dense, sparse, and combined. For dense instances, the number of flows was randomly selected from the range [N, N^2]; for sparse instances, from [1, N]. In the combined instance, each coflow has a 50% probability of being set to sparse and a 50% probability of being set to dense. Figure <ref> illustrates the approximation ratio of synthetic traces for 100 randomly chosen dense and combined instances, comparing the previous algorithm with the proposed algorithm. The problem size consisted of 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. In the dense case, Weaver achieved an approximation ratio of 2.80, while FDLS achieved an approximation ratio of 2.66, an improvement of 5.12% over Weaver. In the combined case, FDLS outperformed Weaver by 2.52%. Importantly, the proposed algorithm demonstrated a greater improvement in the dense case compared to the combined case.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying numbers of network cores, comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 to 25 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. Remarkably, the proposed algorithm consistently achieves significantly smaller approximation ratios compared to the theoretical bound of 4χ+2-2/m. As the number of network cores increases, the approximation ratio also tends to increase. This observation can be attributed to the widening gap between the cost of the feasible dual solution and the cost of the optimal integer solution as the number of network cores grows. Consequently, this leads to a notable discrepancy between the experimental approximation ratio and the actual approximation ratio. Importantly, across different numbers of network cores, FDLS outperforms Weaver by approximately 1.79% to 5.30%.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying parallelism factor (p), comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. According to our settings, the coflow number of the longest path in the DAG (χ) exhibits an increasing trend as the parallelism factor p decreases. Correspondingly, the approximation ratio also shows an upward trend with a decrease in the parallelism factor p. This empirical finding aligns with the theoretical analysis, demonstrating a linear relationship between the approximation ratio and χ.
We present the simulation results of the real traffic trace obtained from Hive/MapReduce traces captured from Facebook's 3000-machine cluster, consisting of 150 racks. This real traffic trace has been widely used in previous research simulations <cit.>. The trace dataset comprises a total of 526 coflows. In Figure <ref>, we depict the approximation ratio of the real traces for different thresholds of the number of flows. That is, we apply a filter to the set of coflows based on the condition that the number of flows is equal to or greater than the threshold value. For each instance, we set deg=3, p=1, and χ≥ 2. Notably, the proposed FDLS algorithm outperforms the Weaver algorithm by approximately 4.84% to 3.11% across various thresholds. Furthermore, as the number of flows increases, the approximation ratio decreases. This observation is consistent with our previous findings, suggesting a decreasing trend in the approximation ratio as the number of coflows increases.
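For readers who wish to reproduce this type of comparison, the sketch below shows how an empirical approximation ratio can be computed from a schedule and a lower bound, and how it can be compared against the theoretical bound 4χ+2-2/m discussed above. It is a minimal illustration in Python rather than the evaluation code used in these experiments; the weights, completion times, and lower-bound value are hypothetical placeholders (in practice, the cost of the feasible dual solution serves as the lower bound).

# Minimal sketch of the empirical approximation-ratio computation.
# All numerical inputs are hypothetical placeholders.

def weighted_completion_time(weights, completion_times):
    # Objective value sum_k w_k * C_k of a schedule.
    return sum(w * c for w, c in zip(weights, completion_times))

def empirical_ratio(weights, completion_times, lower_bound):
    # Ratio of the achieved objective to a lower bound (e.g. the dual cost).
    return weighted_completion_time(weights, completion_times) / lower_bound

def theoretical_bound(chi, m):
    # The bound 4*chi + 2 - 2/m used as the reference line in the figures.
    return 4 * chi + 2 - 2 / m

if __name__ == "__main__":
    w = [1.0, 2.0, 1.5]    # coflow weights (placeholders)
    C = [3.0, 5.0, 8.0]    # completion times returned by a scheduler
    lb = 14.0              # lower bound, e.g. the feasible dual objective
    print(empirical_ratio(w, C, lb))      # about 1.79
    print(theoretical_bound(chi=2, m=5))  # 9.6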
§ CONCLUDING REMARKS
This paper studies the problem of coflow scheduling with release times and precedence constraints in identical parallel networks. The algorithm we propose effectively determines the scheduling order of coflows using the primal-dual method. The primal-dual algorithm has a space complexity of O(Nn) and a time complexity of O(n^2). When considering workload sizes and weights that are topology-dependent in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ). Furthermore, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ). For the coflow-level scheduling problem, the proposed algorithm attains an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Moreover, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Agarwal2018
S. Agarwal, S. Rajakrishnan, A. Narayan, R. Agarwal, D. Shmoys, and A. Vahdat,
“Sincronia: Near-optimal network design for coflows,” in Proceedings
of the 2018 ACM Conference on SIGCOMM, ser. SIGCOMM '18. New York, NY, USA: Association for Computing Machinery, 2018, pp. 16–29.
ahmadi2020scheduling
S. Ahmadi, S. Khuller, M. Purohit, and S. Yang, “On scheduling coflows,”
Algorithmica, vol. 82, no. 12, pp. 3604–3629, 2020.
al2008scalable
M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center
network architecture,” ACM SIGCOMM computer communication review,
vol. 38, no. 4, pp. 63–74, 2008.
Bansal2010
N. Bansal and S. Khot, “Inapproximability of hypergraph vertex cover and
applications to scheduling problems,” in Automata, Languages and
Programming, S. Abramsky, C. Gavoille, C. Kirchner, F. Meyer auf der Heide,
and P. G. Spirakis, Eds. Berlin,
Heidelberg: Springer Berlin Heidelberg, 2010, pp. 250–261.
borthakur2007hadoop
D. Borthakur, “The hadoop distributed file system: Architecture and design,”
Hadoop Project Website, vol. 11, no. 2007, p. 21, 2007.
Chowdhury2012
M. Chowdhury and I. Stoica, “Coflow: A networking abstraction for cluster
applications,” in Proceedings of the 11th ACM Workshop on Hot Topics
in Networks, ser. HotNets-XI. New York, NY, USA: Association for Computing Machinery, 2012, pp. 31–36.
Chowdhury2015
——, “Efficient coflow scheduling without prior knowledge,” in
Proceedings of the 2015 ACM Conference on SIGCOMM, ser. SIGCOMM
'15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 393–406.
chowdhury2011managing
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, “Managing data
transfers in computer clusters with orchestra,” ACM SIGCOMM computer
communication review, vol. 41, no. 4, pp. 98–109, 2011.
Chowdhury2014
M. Chowdhury, Y. Zhong, and I. Stoica, “Efficient coflow scheduling with
varys,” in Proceedings of the 2014 ACM Conference on SIGCOMM, ser.
SIGCOMM '14. New York, NY, USA: Association for Computing Machinery, 2014, pp. 443–454.
Daoud08
M. I. Daoud and N. Kharma, “A high performance algorithm for static task
scheduling in heterogeneous distributed computing systems,” Journal of
Parallel and Distributed Computing, vol. 68, no. 4, pp. 399 – 409, 2008.
DAVIS2013121
J. M. Davis, R. Gandhi, and V. H. Kothari, “Combinatorial algorithms for
minimizing the weighted sum of completion times on a single machine,”
Operations Research Letters, vol. 41, no. 2, pp. 121–125, 2013.
Dean2008
J. Dean and S. Ghemawat, “Mapreduce: Simplified data processing on large
clusters,” Communications of the ACM, vol. 51, no. 1, pp. 107–113, Jan. 2008.
dogar2014decentralized
F. R. Dogar, T. Karagiannis, H. Ballani, and A. Rowstron, “Decentralized
task-aware scheduling for data center networks,” ACM SIGCOMM Computer
Communication Review, vol. 44, no. 4, pp. 431–442, 2014.
greenberg2009vl2
A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A.
Maltz, P. Patel, and S. Sengupta, “Vl2: A scalable and flexible data center
network,” in Proceedings of the ACM SIGCOMM 2009 conference on Data
communication, 2009, pp. 51–62.
huang2016
X. S. Huang, X. S. Sun, and T. E. Ng, “Sunflow: Efficient optical circuit
scheduling for coflows,” in Proceedings of the 12th International on
Conference on emerging Networking EXperiments and Technologies, 2016, pp.
297–311.
Huang2020
X. S. Huang, Y. Xia, and T. S. E. Ng, “Weaver: Efficient coflow scheduling in
heterogeneous parallel networks,” in 2020 IEEE International Parallel
and Distributed Processing Symposium (IPDPS), 2020, pp. 1071–1081.
isard2007dryad
M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly, “Dryad: distributed
data-parallel programs from sequential building blocks,” in
Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on
Computer Systems 2007, 2007, pp. 59–72.
khuller2016brief
S. Khuller and M. Purohit, “Brief announcement: Improved approximation
algorithms for scheduling co-flows,” in Proceedings of the 28th ACM
Symposium on Parallelism in Algorithms and Architectures, 2016, pp.
239–240.
Qiu2015
Z. Qiu, C. Stein, and Y. Zhong, “Minimizing the total weighted completion time
of coflows in datacenter networks,” in Proceedings of the 27th ACM
Symposium on Parallelism in Algorithms and Architectures, ser. SPAA
'15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 294–303.
Sachdeva2013
S. Sachdeva and R. Saket, “Optimal inapproximability for scheduling problems
via structural hardness for hypergraph vertex cover,” in 2013 IEEE
Conference on Computational Complexity, 2013, pp. 219–229.
shafiee2018improved
M. Shafiee and J. Ghaderi, “An improved bound for minimizing the total
weighted completion time of coflows in datacenters,” IEEE/ACM
Transactions on Networking, vol. 26, no. 4, pp. 1674–1687, 2018.
shafiee2021scheduling
——, “Scheduling coflows with dependency graph,” IEEE/ACM
Transactions on Networking, 2021.
Shvachko2010
K. Shvachko, H. Kuang, S. Radia, and R. Chansler, “The hadoop distributed file
system,” in 2010 IEEE 26th Symposium on Mass Storage Systems and
Technologies (MSST), 2010, pp. 1–10.
Singh2015
A. Singh, J. Ong, A. Agarwal, G. Anderson, A. Armistead, R. Bannon, S. Boving,
G. Desai, B. Felderman, P. Germano, A. Kanagala, J. Provost, J. Simmons,
E. Tanda, J. Wanderer, U. Hölzle, S. Stuart, and A. Vahdat, “Jupiter
rising: A decade of clos topologies and centralized control in google's
datacenter network,” in Proceedings of the 2015 ACM Conference on SIGCOMM, ser. SIGCOMM '15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 183–197.
Tian18
B. Tian, C. Tian, H. Dai, and B. Wang, “Scheduling coflows of multi-stage jobs
to minimize the total weighted job completion time,” in IEEE INFOCOM
2018 - IEEE Conference on Computer Communications, 2018, pp. 864–872.
Topcuoglu02
H. Topcuoglu, S. Hariri, and M.-Y. Wu, “Performance-effective and
low-complexity task scheduling for heterogeneous computing,” IEEE
Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp.
260–274, Mar 2002.
zaharia2010spark
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark:
Cluster computing with working sets,” in 2nd USENIX Workshop on Hot
Topics in Cloud Computing (HotCloud 10), 2010.
Zhang2016
H. Zhang, L. Chen, B. Yi, K. Chen, M. Chowdhury, and Y. Geng, “Coda: Toward
automatically identifying and scheduling coflows in the dark,” in
Proceedings of the 2016 ACM Conference on SIGCOMM, ser. SIGCOMM
'16. New York, NY, USA: Association for Computing Machinery, 2016, pp. 160–173.
zhao2015rapier
Y. Zhao, K. Chen, W. Bai, M. Yu, C. Tian, Y. Geng, Y. Zhang, D. Li, and
S. Wang, “Rapier: Integrating routing and scheduling for coflow-aware data
center networks,” in 2015 IEEE Conference on Computer Communications
(INFOCOM). IEEE, 2015, pp. 424–432.
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
The primal-dual algorithm, presented in Algorithm <ref>, draws inspiration from the works of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left, determining the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration makes crucial decisions in terms of increasing dual variables α, β or γ. The guidance for these decisions is provided by the dual linear programming (LP) formulation. The algorithm offers a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports, and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if L_μ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
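To make this right-to-left construction more concrete, the sketch below implements a simplified version of the ordering rule in Python. It is only an illustration under assumptions: the dual-variable bookkeeping (α, β, γ) of Algorithm <ref> is reduced to the ordering decision it induces, port loads are recomputed from scratch at every iteration, the argument kappa plays the role of the constant κ, and the data structures and names (order_coflows, load, succ) are hypothetical. It is a sketch of the decision rule, not a faithful reimplementation of the full primal-dual algorithm.

# Simplified sketch of the right-to-left ordering rule described above.
# Each coflow is a dict with a release time, per-port loads, and successors.
# Assumes every coflow loads at least one port. Names are illustrative only.

def order_coflows(coflows, m, kappa=1.0):
    # Return a processing order (first to last) of coflow indices.
    unscheduled = set(coflows)
    order_back = []  # built from the last coflow to the first

    def load_on(port, subset):
        return sum(coflows[k]["load"].get(port, 0.0) for k in subset)

    def sink_of(c):
        # Walk down unscheduled successors until none remain, so that the
        # chosen coflow can indeed be placed last (precedence-feasible).
        while True:
            succ = [s for s in coflows[c]["succ"] if s in unscheduled]
            if not succ:
                return c
            c = succ[0]

    while unscheduled:
        ports = {p for k in unscheduled for p in coflows[k]["load"]}
        mu, L_mu = max(((p, load_on(p, unscheduled)) for p in ports),
                       key=lambda x: x[1])
        k = max(unscheduled, key=lambda c: coflows[c]["release"])
        if coflows[k]["release"] > kappa * L_mu / m:
            candidate = k          # release time dominates (alpha is raised)
        else:
            # load dominates (beta on the bottleneck port is raised)
            candidate = max(unscheduled,
                            key=lambda c: coflows[c]["load"].get(mu, 0.0))
        chosen = sink_of(candidate)
        order_back.append(chosen)
        unscheduled.remove(chosen)

    return list(reversed(order_back))

# Tiny illustrative instance: coflow 1 precedes coflow 2.
example = {
    1: {"release": 0.0, "load": {"in0": 4.0}, "succ": [2]},
    2: {"release": 1.0, "load": {"in0": 2.0}, "succ": []},
    3: {"release": 5.0, "load": {"in1": 3.0}, "succ": []},
}
print(order_coflows(example, m=2))  # [1, 2, 3]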
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> presents the primal-dual algorithm which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints.
Permuting Jobs
|
http://arxiv.org/abs/2307.04669v1 | 20230710161213 | Reversal of the skyrmion topological deflection across ferrimagnetic angular momentum compensation | ["L. Berges", "R. Weil", "A. Mougin", "J. Sampaio"] | cond-mat.mtrl-sci | ["cond-mat.mtrl-sci", "cond-mat.mes-hall"] |
Reversal of the skyrmion topological deflection across ferrimagnetic angular momentum compensation
[email protected]
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France
Due to their non-trivial topology, skyrmions describe deflected trajectories, which hinders their straight propagation in nanotracks and can lead to their annihilation at the track edges. This deflection is caused by a gyrotropic force proportional to the topological charge and the angular momentum density of the host film. In this article we present clear evidence of the reversal of the topological deflection angle of skyrmions with the sign of the angular momentum density. We measured the skyrmion trajectories across the angular momentum compensation temperature (T_A) in GdCo thin films, a rare-earth/transition-metal ferrimagnetic alloy. The sample composition was used to engineer the skyrmion stability below and above T_A. A refined comparison of their dynamical properties evidenced a reversal of the skyrmion deflection angle with the total angular momentum density. This reversal is a clear demonstration of the possibility of tuning the skyrmion deflection angle in ferrimagnetic materials and paves the way for deflection-free skyrmion devices.
L. Berges, R. Weil, A. Mougin, and J. Sampaio
August 12, 2023
The discovery of efficient driving of chiral magnetic textures by current-induced spin-orbit torques <cit.> has opened the possibility of energy-efficient and high-performance spintronic devices <cit.>, with applications in
digital <cit.> or
neuromorphic <cit.> computation,
ultra-dense data-storage <cit.>,
and signal processing <cit.>. Chiral textures are stable in magnetic thin films with a significant Dzyaloshinskii-Moriya interaction (DMI), typically induced with an adjacent heavy-metal layer (e.g. Pt/Co). Additionally, the heavy-metal layer, through the spin Hall effect, converts an applied charge current into a spin current that drives the magnetic textures by spin orbit torque (SOT). Very promising mobility of chiral magnetic domain walls (DW) has been observed <cit.>, with nonetheless a saturating mobility at large current densities <cit.>.
Another archetypal chiral magnetic texture is the skyrmion, a small (down to a few tens of nm) radially symmetric whirling texture. Although skyrmions are highly mobile <cit.>, their non-trivial topology induces a transverse deflection of their trajectory, a phenomenon known as gyrotropic deflection or skyrmion Hall effect <cit.>. This reduces the velocity in the forward direction and can lead to the annihilation of the skyrmion at the edges of the hosting magnetic track, and is thus highly undesired.
The gyrotropic deflection can be mitigated in magnetic systems with anti-parallel lattices <cit.>, such as antiferromagnets or ferrimagnets, where the overall angular momentum density of the double skyrmion can be suppressed.
In particular, ferrimagnetic alloys of the rare-earth/transition-metal (RETM) family, where the RE and TM moments are antiferromagnetically coupled <cit.>, are a promising example. In a previous work by our team, it was shown that skyrmions in GdCo thin films attained the high-mobility linear regime beyond pinning, and that their velocity and deflection followed the predictions of the Thiele model <cit.>.
However, there is still only little experimental evidence of the advantages of these systems <cit.>, especially regarding the control of the gyrotropic deflection.
In RETMs, the balance between the moments of different nature can be changed with alloy composition or temperature, which leads to two points of interest for skyrmions. At the first one, the magnetic compensation temperature T_M, the magnetizations of the two sub-lattices are equal, the total magnetization (M_s = M_TM - M_RE) vanishes, and the size of the skyrmions is minimal due to the absence of dipolar fields <cit.>.
As RE and TM have different gyromagnetic ratios (γ_RE and γ_TM), the total angular momentum density (L_s = M_TM/γ_TM - M_RE/γ_RE) will vanish at a different temperature, the angular compensation temperature T_A. Both T_M and T_A depend on composition. The reduction and reversal of the total angular momentum, which is the root cause of magnetic precession, leads to interesting dynamical properties near T_A, such as e.g. the reversal of the deflection angle of chiral domain wall fingers <cit.> or the precessionless motion of magnetic domain walls <cit.>. However, the reversal of the skyrmion gyrotropic deflection at T_A has not yet been demonstrated.
In this letter, we measure the velocity and deflection angle of skyrmions driven by spin-orbit torques in two Pt/GdCo/Ta films of different composition, above and below their T_A.
We show the dependence of the deflection angle on the angular momentum density, and in particular its reversal upon changing the sample composition or the temperature. A quantitative analysis with a rigid-texture model based on the Thiele equation is used to characterize the role of the material parameters in the skyrmion dynamics.
The skyrmion dynamics were measured in two samples. Sample 1 is composed of a film of (Si/SiOx(100))/ Ta(1)/ Pt(5)/ Gd_0.32Co_0.68(5)/ Ta(3) and sample 2 of (Si/SiOx(300))/ Ta(3)/ Pt(5)/ Gd_0.3Co_0.7(8)/ Ta(5)/ Pt(1) (thicknesses in nm) as presented in the insets in Fig. <ref>a.
The samples were patterned into 10 μm- or 20 μm-wide tracks in order to apply current pulses (Fig.<ref>b).
The magnetization as a function of temperature was measured by SQUID magnetometry on unpatterned samples and is presented in Fig. <ref>(a). Sample 1 presents a T_M around 360 K whereas sample 2 presents a T_M around 200 K. Therefore, at room temperature, sample 1 is RE-dominated whereas sample 2 is TM-dominated, where RE or TM domination refers to which sublattice has the higher magnetic moment and therefore aligns with an external magnetic field.
It is useful to use the effective ferromagnet model of ferrimagnets <cit.>, which assumes a signed magnetization and angular momentum density that are positive, by convention, when TM-dominated: M_s = |M_Co| - |M_Gd| and L_s = |L_Co| - |L_Gd|.
The exact determination of T_A is not straightforward. It was therefore deduced for both samples using the mean-field model described in ref. <cit.>. The calculated L_S(T) are shown by the dashed lines in Fig. <ref>(a), and yield T_A = 416 K for sample 1 and T_A = 260 K for sample 2. These results are consistent with the empirical law described in ref. <cit.>, which gives, for GdCo, a T_A between 40 to 60 K above T_M.
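As an illustration of this bookkeeping, the sketch below evaluates M_s and L_s within the effective ferromagnet model and locates the temperatures at which they change sign. The sublattice M(T) curves are synthetic placeholders and the g-factors (g_Co ≈ 2.2, g_Gd ≈ 2.0) are commonly quoted values rather than fitted ones, so the printed temperatures are not those of samples 1 and 2; the sketch only illustrates why T_A lies above T_M when g_Gd < g_Co.

import numpy as np

# Effective ferromagnet model: M_s = |M_Co| - |M_Gd| and
# L_s = M_Co/gamma_Co - M_Gd/gamma_Gd, with gamma_i = g_i * mu_B / hbar.
# The sublattice magnetizations below are synthetic placeholders.

MU_B = 9.274e-24       # J/T
HBAR = 1.055e-34       # J*s
G_CO, G_GD = 2.2, 2.0  # commonly used g-factors (assumed values)

T = np.linspace(10, 450, 400)                   # K
M_Co = 8.0e5 * (1 - (T / 600) ** 2)             # A/m, placeholder
M_Gd = np.clip(1.1e6 * (1 - T / 310), 0, None)  # A/m, placeholder

gamma_Co = G_CO * MU_B / HBAR
gamma_Gd = G_GD * MU_B / HBAR

M_s = np.abs(M_Co) - np.abs(M_Gd)               # net magnetization
L_s = M_Co / gamma_Co - M_Gd / gamma_Gd         # net angular momentum density

def zero_crossing(x, y):
    # First temperature at which y changes sign (linear interpolation).
    idx = np.where(np.diff(np.sign(y)) != 0)[0]
    if len(idx) == 0:
        return None
    i = idx[0]
    return x[i] - y[i] * (x[i + 1] - x[i]) / (y[i + 1] - y[i])

print("T_M ~", zero_crossing(T, M_s), "K")  # magnetic compensation
print("T_A ~", zero_crossing(T, L_s), "K")  # angular compensation (here T_A > T_M since g_Gd < g_Co)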
The magnetic textures are observed in each sample as a function of temperature by magneto-optical-Kerr-effect (MOKE) microscopy.
A typical differential MOKE image is presented in Fig. <ref>(c).
Skyrmions are observed in the temperature ranges indicated by the color bands in Fig. <ref>(a). In these ranges, starting from a saturated state and lowering the applied external magnetic field, skyrmions with a core of opposing magnetization will naturally nucleate at small enough field (-30 to 0 mT for an initial saturation at large negative magnetic field). Skyrmions can also be nucleated by applying electrical pulses. A typical phase diagram (versus temperature and field) of these samples is presented in a previous work <cit.>. In the studied temperature range, sample 1 only presents one skyrmion stability range around 290 K, whereas sample 2 presents two skyrmion stability ranges, one around 90 K and a second around 350 K.
In sample 1, the skyrmion stability range is below T_M (and T_A), where the film has L_S<0, and so these are dubbed RE-dominated skyrmions. In sample 2, the skyrmions at 90 K are RE-dominated as well, while the skyrmions at 350 K are TM-dominated (above T_M and T_A, and therefore with L_S>0).
Note that in the MOKE images, the signal is proportional to the Co sublattice, independently of the temperature <cit.>. Thus, skyrmions with a core Co moment pointing along the same direction will appear with the same color (black for -z with our experimental conditions), whether they are RE- or TM-dominated (Fig. <ref>c).
Once skyrmions are nucleated, electrical pulses of 3 to 10 ns are applied and MOKE images are acquired in order to study the skyrmions dynamics.
The skyrmion motion is tracked over several pulses using a partially-automated process described in ref. <cit.>, and the skyrmion velocity and deflection are calculated considering the pulse duration and the traveled distance. Typical images of skyrmion displacements are shown in Fig. <ref>, in the case of sample 2 at low temperature with L_s<0 (a) and at high temperature with L_s>0 (b).
The average skyrmion diameter was similar for the three studied cases, 0.86±0.28 μm.
An example of the observed skyrmion dynamics in sample 1 is presented in Fig. <ref>(c) with a superposition of successive MOKE images where the skyrmion color refers to the MOKE image number.
The skyrmion deflection angle (hereafter denoted θ_sk) and velocity (v) versus applied current density (j) are presented in Fig. <ref>(a,b) for the three cases: RE-dominated skyrmions in sample 1, and RE- and TM-dominated skyrmions in sample 2. Videos of successive displacements in both samples are shown in S.I.
In the three cases, the velocity shows a clear depinning transition above a current threshold (different for each case), and then follows a linear regime.
The mobility in the linear regime (i.e. Δ v/Δ j) is much higher in sample 2 than in sample 1.
In sample 2, the mobility of TM-dominated skyrmions is slightly higher than RE-dominated skyrmions. These differences in mobility will be discussed later.
The linear regime extends up to 190 m/s in sample 1 and to 450 m/s in sample 2. At highest j, skyrmions are nucleated by the pulse, which hinders the tracking analysis and thus limits the maximum j that can be examined.
In the linear regime, the deflection angle is approximately constant with the current density, and its absolute value is about 40^∘ for the three cases. The deflection angle is clearly reversed between the TM- and RE-dominated skyrmions: it is positive for TM-dominated skyrmions (in sample 2) and negative for RE-dominated skyrmions (in both samples).
The deflection also reverses with core polarity, i.e. with the Co moment pointing along +z (which appear as white skyrmions in the MOKE images; see SI).
The deflection angle θ_sk in the pinning regime is measured to be larger than in the flow regime in sample 1, whereas it is lower in sample 2. This is perhaps a bias induced by the different nucleation protocols used in these measurements. For sample 1, skyrmions were only nucleated by current pulses, mostly near one of the edges, whereas for sample 2 they were first nucleated homogeneously by magnetic field. As skyrmions can be annihilated at the edges, only the skyrmions that deviate towards the center are accounted for, which biases the measurement of the mean θ_sk.
The skyrmion dynamics in the linear regime can be quantitatively analyzed using a rigid-texture formalism based on the Thiele equation <cit.>. It expresses the equilibrium of all forces applied on the magnetic texture, which reads in our case as: F⃗_G + F⃗_SOT + α D v⃗ = 0⃗, where F⃗_SOT is the SOT force, F⃗_G the gyrotropic force, and α D is, in general, a tensor describing the dissipation. This formalism can be applied to skyrmions in double-lattice systems as presented in refs. <cit.>.
These forces are depicted in Fig. <ref>(c), on a black dot representing a skyrmion in the case of L_s<0. The norm of the skyrmion velocity |v| and the deflection angle θ_sk can be deduced to be:
|v| = v_0/√(1+ρ^2),
θ_sk = arctan(ρ).
In the limit of skyrmions larger than the domain wall width parameter Δ, the parameters v_0 and ρ are:
v_0 ≈ -[πΔ/(2 L_α)] · [ħ j θ_SHE/(2 e t)],
ρ ≈ [Δ/(2π R)] · (L_S/L_α) · n,
where ħ is the Planck constant, e the fundamental charge, t the magnetic film thickness, θ_SHE is the effective SHE angle in the Pt layer, L_α = α_Co|L_s^Co| + α_Gd|L_s^Gd| the energy dissipation rate, n = 4π p_Co = ±4π the topological charge of the skyrmion, R its radius, and p_Co = ±1 is the orientation along z of the core Co moment.
Because L_α is always positive, the sign of the deflection angle is given by the sign of the product of L_s (positive for T>T_A) and p_Co. This sign is presented in Table <ref> as a function of temperature for p_Co = -1, which is the case shown here (black skyrmions).
The parameters needed for the model were measured on both samples (see Table <ref>). M_s(T) were measured by SQUID magnetometry (Fig. <ref>a), and L_s(T) was deduced from a mean-field model as described in ref. <cit.>.
The dissipation rate L_α (7.4 and 4.3 × 10^-7 kg m^-1 s^-1 for sample 1 at 290 K and sample 2 at 350 K, respectively) is calculated using the gyromagnetic ratio γ and the effective Gilbert dissipation parameter α. The L_α at 90 K in sample 2 (5.2 × 10^-7 kg m^-1 s^-1) was estimated using the calculated sub-lattice angular momenta from the mean-field model and assuming constant sub-lattice Gilbert damping parameters. The domain wall width parameter Δ is calculated from K_u, the exchange stiffness A, and M_S.
The skyrmion diameter was taken from the average diameter observed in the images, which is very similar for the three studied cases.
These measured parameters allowed to fully constrain the model and obtain curves for the velocity and deflection angle with no fitting parameters, shown by the dashed lines in Fig. <ref>, which reproduce accurately the experimental data.
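For completeness, the sketch below evaluates the Thiele expressions above numerically and illustrates the reversal of θ_sk with the sign of L_S. The parameter values are only representative of the orders of magnitude quoted in the text; in particular Δ and L_S are assumed values rather than measured ones, and the absolute sign of the deflection also depends on the axis convention and on the core polarity p_Co, so the output should not be read as a fit to Fig. <ref>.

import numpy as np

# Rigid-skyrmion (Thiele) estimate of the velocity and deflection angle.
# Parameter values are representative orders of magnitude only; Delta and
# L_S in particular are assumed, not the measured ones.

HBAR = 1.055e-34  # J*s
E = 1.602e-19     # C

def skyrmion_dynamics(j, Delta, L_alpha, L_S, theta_SHE, t, R, p_Co=-1):
    n = 4 * np.pi * p_Co                                   # topological charge
    v0 = -(np.pi * Delta / (2 * L_alpha)) * HBAR * j * theta_SHE / (2 * E * t)
    rho = (Delta / (2 * np.pi * R)) * (L_S / L_alpha) * n
    v = v0 / np.sqrt(1 + rho ** 2)                         # velocity norm
    theta_sk = np.degrees(np.arctan(rho))                  # deflection angle
    return v, theta_sk

# Sign reversal of the deflection with L_S (RE- vs TM-dominated film).
for L_S in (-1.5e-6, +1.5e-6):                             # kg m^-1 s^-1, assumed
    v, theta = skyrmion_dynamics(j=2e11, Delta=5e-8, L_alpha=4.3e-7,
                                 L_S=L_S, theta_SHE=0.09, t=8e-9, R=0.43e-6)
    print(f"L_S = {L_S:+.1e}: |v| ~ {abs(v):.0f} m/s, theta_sk ~ {theta:+.0f} deg")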
The sign of the deflection angle observed in the experiments agrees with Eq (<ref>b) taking into account the L_S of the film (L_S<0 for RE-dominated skyrmions and L_S>0 for TM-dominated skyrmions).
The skyrmion mobility, given by the slope of the linear model shown in Fig. <ref>(b), is much higher in sample 2 than in sample 1 (1.80 at 350 K vs 0.6 m·s^-1/GA·m^-2, respectively).
This difference in mobility cannot be ascribed to a difference in skyrmion diameter (see eq. <ref>), as the two conditions present very similar average sizes (0.85 ± 0.28 μm and 0.86 ± 0.28 μm, respectively).
This large difference has multiple origins. First, the L_α of sample 2 is lower (L_α(sample 1; 290 K)/L_α(sample 2; 350 K) ≈ 1.5).
The second major cause is the difference of the film stacks, in particular the thickness of the Ta capping layer. The measured θ_SHE is more than 2 times higher in sample 2 (0.09) than in sample 1 (0.04). This can be expected to be due to a better passivation of the Ta layer in sample 2, which can therefore contribute more to the SOT than the thinner (3 nm) Ta cap of sample 1, which is probably fully oxidized.
Finally, comparing the skyrmion velocity curves for the two conditions in sample 2 (at 90 and 350 K), it can be seen that both the depinning current and the mobility in the linear regime are significantly different.
The depinning current is higher at 90 K, which can be attributed to the thermal nature of the depinning process <cit.>.
The difference in mobility is not due to a difference in skyrmion diameter (which again is very similar in all three studied conditions).
It can be expected that several magnetic parameters vary between 90 and 350 K, but the experimental mobility can be understood by considering only the variation of L_α (L_α(90 K)/L_α(350 K) ≈ 1.2, with L_α(90 K) calculated assuming constant sublattice Gilbert damping parameters).
This result and the Thiele model suggest that L_α is a more pertinent parameter than α to characterize the role of dissipation in the skyrmion mobility. Interestingly, L_α can be more easily optimized than α to increase the mobility, by increasing the sample temperature (as was the case here) or by decreasing the material's Curie temperature (all other parameters remaining equal).
A recent work <cit.> on skyrmions measured at relatively high temperature also points toward such an effect, which appears to be an interesting path to increase the skyrmion mobility.
In conclusion, we observed the propagation of skyrmions in the flow regime, i.e., beyond the effects of pinning in two GdCo samples, below and above the angular compensation temperature. The observed mobilities were very large, with a velocity up to 450 m/s.
The skyrmion dynamics was studied in three cases, two in RE-dominated films and one in a TM-dominated film.
The deflection angle was constant with driving current and its sign was opposite between RE- and TM-dominated cases, both when comparing two samples of different composition and when comparing two temperatures (above and below T_A) in the same sample. This confirms the modulation of the deflection angle with L_S.
These experiments demonstrate the effects of the angular momentum density L_S of the host material on the deflection of skyrmions. They show that can be reversed in GdCo ferrimagnetic thin films across their angular compensations, either by changing the alloy stoichiometry or simply its temperature.
In particular, the reversal of the sign of θ_sk across compensation strongly supports that θ_sk should be zero at angular momentum compensation. The engineering of magnetic parameters that was done to produce the two presented skyrmion-hosting samples could be repeated rather straightforwardly to engineer a film with stable skyrmions at T_A with no deflection.
The authors thank Stanislas Rohart for fruitful discussions, and André Thiaville for the study of the sample properties by BLS.
This work was supported by a public grant overseen by the French National Research Agency (ANR) as part of the “Investissements d’Avenir” program (Labex NanoSaclay, reference: ANR-10-LABX-0035, project SPICY). Magnetometry and Anomalous Hall effect measurements were performed at the LPS Physical Measurements Platform.
§ SUPPLEMENTARY INFORMATION
Videos of successive MOKE images showing the skyrmion motion can be found in .... for the three temperature regions discussed in the text. Motion of skyrmions of opposite polarity (i.e., p_ Co=+1; white in the MOKE images) is also shown for sample 1.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Moore2008
T. A. Moore, I. M. Miron, G. Gaudin, G. Serret, S. Auffret, B. Rodmacq, A. Schuhl, S. Pizzini, J. Vogel, and M. Bonfim, “High domain wall velocities induced by current in ultrathin Pt/Co/AlOx wires with perpendicular magnetic anisotropy,” Applied Physics Letters, vol. 93, p. 262504, 2008.
Thiaville2012a
A. Thiaville, S. Rohart, É. Jué, V. Cros, and A. Fert, “Dynamics of Dzyaloshinskii domain walls in ultrathin magnetic films,” EPL (Europhysics Letters), vol. 100, p. 57002, 2012.
Manchon2019
A. Manchon, J. Železný, I. M. Miron, T. Jungwirth, J. Sinova, A. Thiaville, K. Garello, and P. Gambardella, “Current-induced spin-orbit torques in ferromagnetic and antiferromagnetic systems,” Rev. Mod. Phys., vol. 91, p. 035004, 2019.
Fert2017b
A. Fert, N. Reyren, and V. Cros, “Magnetic skyrmions: advances in physics and potential applications,” Nature Reviews Materials, vol. 2, p. 17031, 2017.
Sampaio2013
J. Sampaio, V. Cros, S. Rohart, A. Thiaville, and A. Fert, “Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures,” Nature Nanotechnology, vol. 8, pp. 839–844, 2013.
Zhang2015a
S. Zhang, A. A. Baker, S. Komineas, and T. Hesjedal, “Topological computation based on direct magnetic logic communication,” Scientific Reports, vol. 5, p. 15773, 2015.
Huang2017
Y. Huang, W. Kang, X. Zhang, Y. Zhou, and W. Zhao, “Magnetic skyrmion-based synaptic devices,” Nanotechnology, vol. 28, p. 08LT02, 2017.
Zazvorka2019
J. Zázvorka, F. Jakobs, D. Heinze, N. Keil, S. Kromin, S. Jaiswal, K. Litzius, G. Jakob, P. Virnau, D. Pinna, K. Everschor-Sitte, L. Rózsa, A. Donges, U. Nowak, and M. Kläui, “Thermal skyrmion diffusion used in a reshuffler device,” Nature Nanotechnology, vol. 14, pp. 658–661, 2019.
Sai2017
S. Li, W. Kang, Y. Huang, X. Zhang, Y. Zhou, and W. Zhao, “Magnetic skyrmion-based artificial neuron device,” Nanotechnology, vol. 28, p. 31LT01, 2017.
Song2020a
K. M. Song, J.-S. Jeong, B. Pan, X. Zhang, J. Xia, S. Cha, T.-E. Park, K. Kim, S. Finizio, J. Raabe, J. Chang, Y. Zhou, W. Zhao, W. Kang, H. Ju, and S. Woo, “Skyrmion-based artificial synapses for neuromorphic computing,” Nature Electronics, vol. 3, pp. 148–155, 2020.
Fert2013c
A. Fert, V. Cros, and J. Sampaio, “Skyrmions on the track,” Nature Nanotechnology, vol. 8, pp. 152–156, 2013.
Brataas2012
A. Brataas, A. D. Kent, and H. Ohno, “Current-induced torques in magnetic materials,” Nature Materials, vol. 11, pp. 372–381, 2012.
Carpentieri2015
M. Carpentieri, R. Tomasello, R. Zivieri, and G. Finocchio, “Topological, non-topological and instanton droplets driven by spin-transfer torque in materials with perpendicular magnetic anisotropy and Dzyaloshinskii-Moriya interaction,” Scientific Reports, vol. 5, pp. 1–8, 2015.
Finocchio2015
G. Finocchio, M. Ricci, R. Tomasello, A. Giordano, M. Lanuzza, V. Puliafito, P. Burrascano, B. Azzerboni, and M. Carpentieri, “Skyrmion based microwave detectors and harvesting,” Applied Physics Letters, vol. 107, pp. 3–8, 2015.
Kim2017
K.-J. Kim, S. K. Kim, Y. Hirata, S.-H. Oh, T. Tono, D.-H. Kim, T. Okuno, W. S. Ham, S. Kim, G. Go, Y. Tserkovnyak, A. Tsukamoto, T. Moriyama, K.-J. Lee, and T. Ono, “Fast domain wall motion in the vicinity of the angular momentum compensation temperature of ferrimagnets,” Nature Materials, vol. 16, pp. 1187–1192, 2017.
Boulle2016
O. Boulle, J. Vogel, H. Yang, S. Pizzini, D. de Souza Chaves, A. Locatelli, T. O. Menteş, A. Sala, L. D. Buda-Prejbeanu, O. Klein, M. Belmeguenai, Y. Roussigné, A. Stashkevich, S. M. Chérif, L. Aballe, M. Foerster, M. Chshiev, S. Auffret, I. M. Miron, and G. Gaudin, “Room-temperature chiral magnetic skyrmions in ultrathin magnetic nanostructures,” Nature Nanotechnology, vol. 11, pp. 449–454, 2016.
Hrabec2017c
A. Hrabec, J. Sampaio, M. Belmeguenai, I. Gross, R. Weil, S. M. Chérif, A. Stashkevich, V. Jacques, A. Thiaville, and S. Rohart, “Current-induced skyrmion generation and dynamics in symmetric bilayers,” Nature Communications, vol. 8, p. 15765, 2017.
Jiang2017a
W. Jiang, X. Zhang, G. Yu, W. Zhang, X. Wang, M. Benjamin Jungfleisch, J. E. Pearson, X. Cheng, O. Heinonen, K. L. Wang, Y. Zhou, A. Hoffmann, and S. G. E. te Velthuis, “Direct observation of the skyrmion Hall effect,” Nature Physics, vol. 13, pp. 162–169, 2017.
Reichhardt2022
C. Reichhardt, C. J. O. Reichhardt, and M. V. Milošević, “Statics and dynamics of skyrmions interacting with disorder and nanostructures,” Rev. Mod. Phys., vol. 94, p. 035005, 2022.
Zang2011
J. Zang, M. Mostovoy, J. H. Han, and N. Nagaosa, “Dynamics of skyrmion crystals in metallic thin films,” Phys. Rev. Lett., vol. 107, p. 136804, 2011.
Chen2017
“Skyrmion Hall effect,” Nature Physics, vol. 13, pp. 112–113, 2017.
Dohi2019
T. Dohi, S. DuttaGupta, S. Fukami, and H. Ohno, “Formation and current-induced motion of synthetic antiferromagnetic skyrmion bubbles,” Nature Communications, vol. 10, p. 5153, 2019.
Hansen1989
P. Hansen, C. Clausen, G. Much, M. Rosenkranz, and K. Witter, “Magnetic and magneto-optical properties of rare-earth transition-metal alloys containing Gd, Tb, Fe, Co,” Journal of Applied Physics, vol. 66, pp. 756–767, 1989.
Sala2022
G. Sala and P. Gambardella, “Ferrimagnetic dynamics induced by spin-orbit torques,” Advanced Materials Interfaces, p. 2201622, 2022.
Berges2022
L. Berges, E. Haltz, S. Panigrahy, S. Mallick, R. Weil, S. Rohart, A. Mougin, and J. Sampaio, “Size-dependent mobility of skyrmions beyond pinning in ferrimagnetic GdCo thin films,” Physical Review B, vol. 106, p. 144408, 2022.
Woo2018b
S. Woo, K. M. Song, X. Zhang, Y. Zhou, M. Ezawa, X. Liu, S. Finizio, J. Raabe, N. J. Lee, S.-I. Kim, S.-Y. Park, Y. Kim, J.-Y. Kim, D. Lee, O. Lee, J. W. Choi, B.-C. Min, H. C. Koo, and J. Chang, “Current-driven dynamics and inhibition of the skyrmion Hall effect of ferrimagnetic skyrmions in GdFeCo films,” Nature Communications, vol. 9, p. 959, 2018.
Caretta2018b
L. Caretta, M. Mann, F. Büttner, K. Ueda, B. Pfau, C. M. Günther, P. Hessing, A. Churikova, C. Klose, M. Schneider, D. Engel, C. Marcus, D. Bono, K. Bagschik, S. Eisebitt, and G. S. D. Beach, “Fast current-driven domain walls and small skyrmions in a compensated ferrimagnet,” Nature Nanotechnology, vol. 13, pp. 1154–1160, 2018.
Hirata2019
Y. Hirata, D.-H. Kim, S. K. Kim, D.-K. Lee, S.-H. Oh, D.-Y. Kim, T. Nishimura, T. Okuno, Y. Futakawa, H. Yoshikawa, A. Tsukamoto, Y. Tserkovnyak, Y. Shiota, T. Moriyama, S.-B. Choe, K.-J. Lee, and T. Ono, “Vanishing skyrmion Hall effect at the angular momentum compensation temperature of a ferrimagnet,” Nature Nanotechnology, vol. 14, pp. 232–236, 2019.
Haltz2020
E. Haltz, J. Sampaio, S. Krishnia, L. Berges, R. Weil, and A. Mougin, “Measurement of the tilt of a moving domain wall shows precession-free dynamics in compensated ferrimagnets,” Scientific Reports, vol. 10, p. 16292, 2020.
Wangsness1953
R. K. Wangsness, “Sublattice effects in magnetic resonance,” Physical Review, vol. 91, pp. 1085–1091, 1953.
Hirata2018
Y. Hirata, D.-H. Kim, T. Okuno, T. Nishimura, D.-Y. Kim, Y. Futakawa, H. Yoshikawa, A. Tsukamoto, K.-J. Kim, S.-B. Choe, and T. Ono, “Correlation between compensation temperatures of magnetization and angular momentum in GdFeCo ferrimagnets,” Physical Review B, vol. 97, p. 220403, 2018.
bergesphd2022
L. Berges, “Magnetic skyrmions in GdCo ferrimagnetic thin-films,” PhD thesis, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASP161
Thiele1974
A. A. Thiele, “Applications of the gyrocoupling vector and dissipation dyadic in the dynamics of magnetic domains,” Journal of Applied Physics, vol. 45, pp. 377–393, 1974.
Panigrahy2022
S. Panigrahy, S. Mallick, J. Sampaio, and S. Rohart, “Skyrmion inertia in synthetic antiferromagnets,” Physical Review B, vol. 106, p. 144405, 2022.
Hayashi2014
M. Hayashi, J. Kim, M. Yamanouchi, and H. Ohno, “Quantitative characterization of the spin-orbit torque using harmonic Hall voltage measurements,” Physical Review B, vol. 89, p. 144425, 2014.
Litzius2020
K. Litzius, J. Leliaert, P. Bassirian, D. Rodrigues, S. Kromin, I. Lemesh, J. Zazvorka, K.-J. Lee, J. Mulkers, N. Kerber, D. Heinze, N. Keil, R. M. Reeve, M. Weigand, B. Van Waeyenberge, G. Schütz, K. Everschor-Sitte, G. S. D. Beach, and M. Kläui, “The role of temperature and drive current in skyrmion dynamics,” Nature Electronics, vol. 3, pp. 30–36, 2020.
Thiele1973
A. A. Thiele, “Steady-state motion of magnetic domains,” Physical Review Letters, vol. 30, pp. 230–233, 1973.
Manchon2015
A. Manchon, H. C. Koo, J. Nitta, S. M. Frolov, and R. A. Duine, “New perspectives for Rashba spin-orbit coupling,” Nature Materials, vol. 14, pp. 871–882, 2015.
|
http://arxiv.org/abs/2307.05991v1 | 20230712081300 | Unsteady drag force on an immersed sphere oscillating near a wall | ["Zaicheng Zhang", "Vincent Bertin", "Martin Essink", "Hao Zhang", "Nicolas Fares", "Zaiyi Shen", "Thomas Bickel", "Thomas Salez", "Abdelhamid Maali"] | cond-mat.soft | ["cond-mat.soft", "physics.class-ph", "physics.flu-dyn"] |
The unsteady hydrodynamic drag exerted on an oscillating sphere near a planar wall is addressed experimentally, theoretically, and numerically. The experiments are performed by using colloidal-probe Atomic Force Microscopy (AFM) in thermal noise mode. The natural resonance frequencies and quality factors are extracted from the measurement of the power spectral density of the probe oscillation for a broad range of gap distances and Womersley numbers.
The shift in the natural resonance frequency of the colloidal probe as it approaches a solid wall reveals the wall-induced variations of the effective mass of the probe. Interestingly, a crossover from a positive to a negative shift is observed as the Womersley number increases. In order to rationalize the results, the confined unsteady Stokes equation is solved numerically using a finite-element method, complemented by asymptotic calculations.
The in-phase and out-of-phase terms of the hydrodynamic drag acting on the sphere are obtained and agree well with the experimental results. All together, the experimental, theoretical, and numerical results show that the hydrodynamic force felt by an immersed sphere oscillating near a wall is highly dependent on the Womersley number.
Fluid mechanics, nanofluidics, colloidal-probe Atomic Force Microscopy (AFM).
§ INTRODUCTION
The motion of particles in a fluid is one of the central problems in fluid mechanics, across many scales. The hydrodynamic drag force exerted by the fluid on the particles is the fundamental quantity that dictates the motion. Applications include the sedimentation of synthetic entities, the swimming of biological microorganisms <cit.>, blood flows <cit.>, peristaltic pumping <cit.>, microfluidic flows <cit.>, Brownian motion at short times <cit.>, etc...
At small Reynolds number, while the steady, bulk, Stokes’ drag force exerted on a translating sphere is well known, addressing further the transient contributions is more intricate – even though the implications of such effects are potentially numerous.
For an isolated spherical particle with radius R translating in a viscous liquid at velocity V, the bulk drag force F at small Reynolds number is given by the Basset-Boussinesq-Oseen (BBO) expression <cit.>:
F = -6πη R V - 6R^2√(πρη)∫_-∞^t (1/√(t-τ)) (dV/dτ) dτ - (2πρ R^3/3) (dV/dt),
where ρ and η are the density and dynamic viscosity of the viscous liquid, respectively.
The right-hand side of the latter equation includes three terms successively: a Stokes viscous force, a Basset memory term, and an added-mass term. The Basset force originates from the diffusive nature of vorticity within the unsteady Stokes equation, and the added-mass force can be interpreted as an inertial effect due to the displaced fluid mass. Equation (<ref>) provides a good description of particle dynamics in a large variety of particle-laden and multi-phase flows, as long as the particle Reynolds number is small <cit.>.
Nevertheless, the effect of nearby solid boundaries on the unsteady drag is still an open question. The canonical situation is that of an immersed sphere oscillating near a planar rigid surface. Some asymptotic expressions of the drag in the large-distance limit have been derived recently, by using a point-particle approximation together with the method of images <cit.>, or by using low or high frequency expansions of the unsteady Stokes equations <cit.>. However, theoretical descriptions of the confined limit, i.e. where the sphere is in close proximity to the surface, are scarce. We thus aim here at investigating the unsteady drag, in the full spatial range from bulk to confinement, by combining numerical simulations, asymptotic calculations and colloidal-probe Atomic Force Microscopy (AFM) experiments.
AFM colloidal-probe methods and their Surface Force Apparatus (SFA) analogues were first introduced in the 1990s in order to measure molecular interactions (e.g. electrostatic, van der Waals, ...) between surfaces <cit.>. Recently, these methods have been extended and used to study flow under micro-to-nanometric confinement, e.g. near soft <cit.> or capillary interfaces <cit.>,
using complex fluids <cit.>, or to measure the friction at solid-liquid interfaces <cit.>, and electrohydrodynamic effects <cit.>, etc... More specifically, for dynamic colloidal AFM measurements, a micron-size spherical colloidal probe is placed in a viscous fluid, in the vicinity of a surface, with a probe-surface distance D. Then, the probe is driven to oscillate without direct contact, via either acoustic excitation or thermal noise. The force exerted on the sphere is inferred from the colloidal motion, trough the cantilever's deflection, which allows to extract specific information on the confined surfaces or fluid properties. We point out that other experimental techniques were used to probe the bulk streaming flow around an oscillating sphere at finite Reynolds numbers, like particle visualization techniques <cit.>, and optical tweezers <cit.>.
If the typical angular frequency of the flow is ω, then the vorticity diffuses over a typical distance δ∼√(η/(ρω)), called the viscous penetration length. The dynamic force measurements are usually restricted to low Reynolds numbers, low probing frequencies, and to the confined regime where D≪ R. In such a case, the penetration length is large, the flow is mainly located in the confined fluid layer, it is purely viscous and quasi-steady, and the lubrication theory holds <cit.>. Consequently, in all the above examples, the fluid inertial effects are disregarded in the analysis of the measured hydrodynamic force. However, when the colloidal probe oscillates at high frequencies, the penetration depth δ becomes smaller and comparable to the characteristic length scale of the flow. Thus, unsteady effects become important <cit.>. The relevant dimensionless number to characterize the crossover to such a regime is the Womersley number Wo = R√(ω/ν), the square of which corresponds to the ratio between the typical diffusion time scale R^2/ν and the period of the oscillation. Inertial effects should be predominant when it takes more time for the velocity field to diffuse than for the sphere to oscillate, i.e. Wo > 1. In such a situation, the hydrodynamic force exerted
on the sphere is not only a viscous lubrication drag, but it also contains contributions due to the fluid inertia, which were partly studied in previous works <cit.>.
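To give a feeling for these orders of magnitude, the sketch below compares the amplitudes of the three terms of the BBO expression above for a sinusoidal velocity V(t) = V_0 cos(ω t). For such a motion, the Basset and added-mass amplitudes are respectively Wo and Wo^2/9 times the Stokes amplitude, which is what the code evaluates; the numerical values of R, ν and f are illustrative choices inspired by the experiments described below.

import numpy as np

# Relative amplitudes of the three terms of the BBO force for a sinusoidal
# velocity V(t) = V0*cos(omega*t): the Basset and added-mass amplitudes are
# Wo and Wo^2/9 times the Stokes amplitude, respectively, with
# Wo = R*sqrt(omega/nu). Numerical values are illustrative only.

def bbo_amplitude_ratios(R, nu, f):
    omega = 2 * np.pi * f
    Wo = R * np.sqrt(omega / nu)
    return Wo, Wo, Wo ** 2 / 9  # (Wo, Basset/Stokes, added-mass/Stokes)

# Example: a 27-um-radius sphere in water (nu ~ 1e-6 m^2/s) driven at 5 kHz.
Wo, basset, added = bbo_amplitude_ratios(R=27e-6, nu=1e-6, f=5e3)
print(f"Wo = {Wo:.1f}, Basset/Stokes = {basset:.1f}, added-mass/Stokes = {added:.1f}")

For these illustrative values, both unsteady terms are comparable to, or larger than, the quasi-steady Stokes drag, which motivates the analysis developed in the following sections.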
The article is organized as follows. In Sec. <ref>, we introduce the experimental method of thermal noise AFM and present the typical experimental results.
We show that, as the distance to the wall is reduced, the natural frequency increases for low Womersley numbers but decreases for high Womersley numbers. In contrast, the dissipation monotonically increases with decreasing distance for all Womersley numbers. In order to rationalize the results, in Sec. <ref>, we compute the hydrodynamic drag force in terms of added mass and dissipation in the asymptotic limit of large distance, and we perform a detailed calculation in the low-Womersley-number limit using the Lorentz reciprocal theorem. Furthermore, a finite-element method is employed to obtain the full numerical solution in all regimes. Finally, the experimental, theoretical and numerical results are summarized and compared in Sec. <ref>. Mainly, the variation of the resonance frequency is rationalized by the change of the effective mass with distance and Womersley number.
§ EXPERIMENTS
§.§ Colloidal-probe AFM setup
A schematic of the experimental system is shown in <ref>(a). A borosilicate sphere (MOSci Corporation, radius R = 27 ± 0.5 μm) is glued (Epoxy glue, Araldite) to the end of an AFM cantilever (SNL-10, Brukerprobes), and located near a planar mica surface. The cantilever stiffness k_c = 0.68 ± 0.05 N/m is calibrated using the drainage method proposed by <cit.>. The experiments were performed using an AFM (Bruker, Dimension3100) in three different liquids, i.e. water, dodecane, and silicone oil, whose densities and dynamic viscosities are 1000 kg/m^3 and 1 mPa·s, 750 kg/m^3 and 1.34 mPa·s, and 930 kg/m^3 and 9.3 mPa·s, respectively, at room temperature. The probe-surface distance D was controlled by an integrated stage step motor. Each separation distance was adjusted by displacing the cantilever vertically using the step motor, with a precision in position better than 0.1 μm. The probe's deflection was directly acquired using an analog-to-digital (A/D) acquisition card (PCI-4462, NI, USA) with a sampling frequency of 200 kHz. The vertical position of the probe was observed to fluctuate due to thermal noise, as discussed in the following section. The amplitude of the sphere's fluctuations remains smaller than ∼1 nm in all the experiments.
§.§ Confined thermal dynamics
The time-dependent position of the probe is denoted Z(t). We suppose that the probe dynamics can be modelled by a forced harmonic oscillator, as:
m_∞Z̈+ γ_∞Ż + k_c Z = F_th + F_int,
where m_∞ is the effective mass of the probe in the bulk, and where γ_∞ is the bulk damping coefficient. These two coefficients correspond to the free dynamics of the probe far from the surface, and can thus be obtained by measuring the resonance properties of the AFM probe in the far field, as shown below. Besides the elastic restoring force by the cantilever of stiffness k_c, and in the absence of conservative forces (e.g. van der Waals or electrostatic forces), the two main forces acting on the sphere along the z direction are the random thermal force F_th and the hydrodynamic interaction force with the wall F_int. The latter corresponds to the deviation of the hydrodynamic drag with respect to the bulk drag force.
Taking the Fourier transform of <ref>, we find:
- m_∞ω^2 Z̃ + iωγ_∞Z̃ + k_cZ̃ = F̃_th + F̃_int,
where f̃(ω) = 1/2π∫_-∞^∞ dt f(t) e^-i ω t is the Fourier transform of the function f(t). The real and imaginary parts of F̃_int correspond to an inertial force and a dissipative force, respectively, that can be recast into:
F̃_int = m_intω^2 Z̃ - iωγ_intZ̃,
where m_int and γ_int are the wall-induced variations of the effective mass and dissipation coefficient. For the sake of simplicity, we neglect in the following the possible frequency dependencies of m_int and γ_int. With this assumption, and injecting <ref> into <ref>, the probe's motion follows a thermally-forced harmonic oscillator dynamics with a spring constant k_c, an effective damping coefficient γ≡γ_∞ + γ_int and an effective mass m ≡ m_∞ + m_int. For the latter problem, one can then derive the one-sided power spectral density S(ω) ≡ 2 ⟨|Z(ω) | ^2 ⟩, as:
S(ω) = [2 ⟨| F_th|^2 ⟩/(m^2ω_0^4)] / {[1-(ω/ω_0)^2 ]^2 + [ω/(ω_0 Q)]^2}
= [2k_BT/(π Q m ω_0^3)] / {[1-(ω/ω_0)^2 ]^2 + [ω/(ω_0 Q)]^2} ,
where ⟨·⟩ denotes the ensemble average, k_BT is the thermal energy, ω_0 = √(k_c/(m_∞ + m_int)) is the natural angular frequency, and Q=(m_∞ + m_int)ω_0/γ is the quality factor. The second equality in <ref> is obtained by using the correlator of the noise ⟨ F_th(t)F_th(t') ⟩ = 2 γ k_BT δ_D(t-t'), where we assumed a white noise through the Dirac distribution δ_D , and where we invoked the fluctuation-dissipation theorem to set the amplitude of the noise. The experimental power spectral densities are fitted by the function <cit.>:
S(ω) = c_1 / {[1-(ω/ω_0)^2 ]^2 + [ω/(ω_0 Q)]^2} + c_2,
where ω_0 and Q are the key adjustable parameters indicating the position and the width of the resonance, and where c_1 and c_2 are unimportant extra parameters that accommodate a potential spurious experimental offset and/or prefactor.
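In practice, such a fit can be carried out with any standard least-squares routine. The short Python sketch below illustrates one possible implementation, using scipy's curve_fit; the synthetic spectrum and its parameter values are placeholders standing in for a measured data set.

import numpy as np
from scipy.optimize import curve_fit

def psd_model(omega, omega0, Q, c1, c2):
    # One-sided PSD of a thermally driven damped harmonic oscillator (fit function above).
    x = omega / omega0
    return c1 / ((1.0 - x**2)**2 + (x / Q)**2) + c2

# Synthetic spectrum standing in for a measured one (placeholder values, arbitrary units).
rng = np.random.default_rng(0)
omega = 2.0 * np.pi * np.linspace(2e3, 15e3, 2000)           # angular frequency (rad/s)
psd = psd_model(omega, 2.0 * np.pi * 7070.0, 3.3, 1.0, 1e-2)
psd *= 1.0 + 0.05 * rng.standard_normal(omega.size)          # mimic measurement noise

p0 = (2.0 * np.pi * 7e3, 3.0, psd.max(), psd.min())          # rough initial guesses
popt, _ = curve_fit(psd_model, omega, psd, p0=p0)
print(f"f_0 = {popt[0] / (2.0 * np.pi):.0f} Hz, Q = {popt[1]:.2f}")

The fitted ω_0 and Q obtained in this way play the same role as the values discussed in the following subsection.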
§.§ Power spectral density
<ref> displays the power spectral densities for probes immersed in dodecane or silicone oil (water was employed as well, but the similar results are not shown here), and for a variety of probe-wall distances. A well-defined peak can be observed for each spectrum, indicating the fundamental resonance.
The resonance properties are well described by the damped harmonic oscillator model above. The largest probe-wall distance (D=100 μm) corresponds to nearly 4 times the sphere radius, so that the hydrodynamic interactions between the probe and the wall can be neglected. At such distances, the bulk resonance frequency ω_0^∞ = √(k_c/m_∞) and bulk quality factor Q_∞ = m_∞ω_0^∞/γ_∞ are extracted from the fitting procedure, giving respective values of 7070± 5 Hz and 3.3± 0.1 in dodecane and 5320± 5 Hz and 1.3± 0.1 in silicone oil. In the more viscous fluid (silicone oil), the resonance is broader since the dissipation is larger, as expected. Also, in both liquids, we observe that the resonance is broader as the sphere gets closer to the hard wall, which indicates that the near-wall dissipation is larger as compared to the bulk situation, as expected too. Besides, and interestingly, the natural frequency appears to depend on the viscosity of the ambient fluid, highlighting the fact that the effective mass is not trivial. Moreover, the natural frequency depends on the probe-wall distance.
To be quantitative, the fitted values of the natural frequency ω_0 and the quality factor Q are shown in <ref> as functions of the normalized separation distance D/R, for the three liquids studied. Intriguingly, we observe an increase of the natural frequency in silicone oil near the wall as compared to the bulk resonance frequency (<ref>(a)), and a corresponding decrease in dodecane (<ref>(c)) and water (<ref>(e)).
We point out that the probe-wall distances in the present experiments are large enough (D>0.5 μ m), so that molecular interactions (e.g. electrostatic or van der Waals forces) can be safely neglected. Therefore, the changes in natural frequency observed here should only result from hydrodynamic contributions. The following section aims at modeling this intricate behaviour.
§ THEORY
§.§ Governing equations
We aim here at calculating the hydrodynamic force exerted on an immersed sphere moving normally near a rigid, flat and immobile wall. The amplitude of thermal oscillations in the experiments is nanometric, which implies a relatively small Reynolds number for all accessible frequencies. Therefore, we can neglect the convective term of the incompressible Navier-Stokes equations. Nonetheless, the typical resonance frequency is in the kHz range, such that the squared Womersley number Wo^2 = R^2 ω / ν is in the 1-50 range. As a consequence, we expect inertial effects to be important. The fluid velocity field v thus satisfies the unsteady incompressible Stokes equations:
ρ∂_t v = -∇ p + η∇^2 v, ∇·v = 0 ,
where p is the hydrodynamic pressure field. Without loss of generality, the sphere's position is supposed to oscillate normally to the substrate at a frequency ω, and with an amplitude A, which correspond to a given Fourier mode of the full fluctuation spectrum. Applying the Fourier transform to the unsteady incompressible Stokes equations, we get:
iρωṽ = -∇p̃ + η∇^2 ṽ, ∇·ṽ = 0.
A no-slip condition is assumed at both the wall and the sphere surfaces, denoted by 𝒮_w and 𝒮_0 respectively (see <ref> (b)), leading to the following boundary conditions for the fluid velocity field:
ṽ(r∈𝒮_0) = iω A e_z ,
ṽ(r∈𝒮_w) = 0 ,
with e_z the unit vector in the z-direction. The hydrodynamic drag force applied on the sphere is given by:
F̃ = ∫_𝒮_0 n·σ̃ d𝒮_0,
where σ̃ = -p̃𝐈 + η[∇ṽ + (∇ṽ)^T] is the fluid stress tensor, and n denotes the unit vector normal to 𝒮_0 oriented towards the fluid. To the best of our knowledge, there is no closed-form solution of the problem, in contrast with the steady case (see <cit.>).
By symmetry, the drag force is directed along the z direction, i.e. F̃=F̃_ze_z. Using dimensional analysis, and assuming that the oscillation amplitude A is much smaller than D, one can show that the drag force F̃_z normalized by the bulk Stokes reference -6iπη R A ω, to form the dimensionless drag force f̃_z=F̃_z/(-6iπη R A ω), depends only on two dimensionless parameters: i) the Womersley number Wo, and ii) the sphere-wall distance relative to the sphere radius D/R. As a consequence, the dimensionless hydrodynamic interaction force (see <ref> and <ref>) reads:
F̃_int/(6iπη R A ω) = f̃_z(D/R →∞, Wo) - f̃_z(D/R, Wo) = (m_intω^2 - iωγ_int)Z̃/(6iπη R A ω).
Although there is no general analytical solution of <ref> with the boundary conditions of <ref>, the hydrodynamic drag force has known asymptotic expressions in certain limits, some of which are given in the next two subsections.
§.§ Large-distance regime
In the infinite-distance limit, the force expression reduces to the BBO equation (see <ref>) for a sphere in an unbounded space, which gives in Fourier space:
F̃_z = -6iπη R Aω(1 + √(-i) Wo - i Wo^2/9), for D/R →∞.
The last term of <ref> corresponds to an inertial force of added mass 2πρ R^3/3 and the √(-i) Wo term corresponds to the Basset force. The large-distance asymptotic correction to the added-mass contribution due to a rigid wall has been computed using the potential-flow theory, and gives 2πρ R^3 {1 + 3R^3/[8(R+D)^3]}/3 (see <cit.>). By using a boundary-integral formulation of the unsteady incompressible Stokes equations, <cit.> have generalized the latter result by including the Basset force, to obtain the large-distance asymptotic drag force, which reads:
F̃_z = -6iπη R Aω(1 + √(-i) Wo - i Wo^2/9 + B R^3/(D+R)^3), for D/R ≫ 1,
where the numerical prefactor B depends on Wo and reads:
B = (1/4)(1+√(-i) Wo - i Wo^2/3) [1/3 + 3i/(2 Wo^2) (1+√(-i) Wo - i Wo^2/9) ].
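For orientation, the relative size of the three bulk contributions can be read off by evaluating them directly. The short sketch below does this for a few arbitrarily chosen Womersley numbers; it simply evaluates the terms of the equation above and makes no further assumptions.

import numpy as np

def bbo_terms(Wo):
    # Stokes, Basset and added-mass terms of the bulk factor 1 + sqrt(-i) Wo - i Wo^2/9.
    return 1.0, np.sqrt(-1j) * Wo, -1j * Wo**2 / 9.0

for Wo2 in (0.1, 1.0, 10.0, 50.0):
    stokes, basset, added = bbo_terms(np.sqrt(Wo2))
    print(f"Wo^2 = {Wo2:5.1f}: |Stokes| = {abs(stokes):.2f}, "
          f"|Basset| = {abs(basset):.2f}, |added mass| = {abs(added):.2f}")
# The Basset term grows as Wo and the added-mass term as Wo^2, consistent with
# the scalings discussed in the Results section below.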
§.§ Small-distance regime
In the limit of small sphere-wall distance, which is of importance for colloidal-probe experiments, the drag force is usually dominated by viscous effects. The out-of-phase component of the force can be described by lubrication theory (see <cit.>), in which the main contribution to the drag comes from the confined region between the sphere and the wall, which leads to the expression:
F̃_z = -6iπη R^2 Aω/D.
We stress that the in-phase correction to the latter is still unknown in the lubricated limit. It would be interesting to perform asymptotic-matching calculations on the unsteady Stokes equations (see <cit.>) to obtain a self-consistent expression of the effective added-mass in this limit.
§.§ Low-Womersley-number regime
As pointed out by <cit.>, in the small-frequency limit, which corresponds to a small Womersley number, the drag force can be expressed in terms of known integrals, by using the Lorentz reciprocal theorem (see <cit.>). We provide here an alternative derivation of this result.
We introduce the model steady problem of a sphere moving normally to a surface in a viscous fluid, which corresponds to the problem of <ref>, at zero frequency, i.e.:
∇·σ̂ = 0 ,
∇·v̂ = 0 ,
with the same boundary conditions:
v̂(r∈𝒮_0) = iω Ae_z ,
v̂(r∈𝒮_w) = 0 ,
where σ̂ and v̂ are the fluid stress and velocity fields of the model problem, respectively.
Integrating the Lorentz identity ∇· (σ̃·v̂ - σ̂·ṽ) = iωρṽ·v̂ on the total fluid volume, we obtain:
(iω A e_z) ·[∫_𝒮_0 σ̂·n d𝒮_0 - ∫_𝒮_0 σ̃·n d𝒮_0] = iωρ∫_𝒱ṽ·v̂ d𝒱,
where the divergence theorem has been used. Recalling <ref>, we get:
F̃_z = F̂_z -ρ/A∫_𝒱ṽ·v̂ d𝒱.
The force F̂_z and velocity field v̂ of the model problem correspond to the ones derived analytically by <cit.>, using a modal decomposition.
The force of the model problem thus reads:
F̂_z/(6iπη R ω A) = (4/3) sinh(α) ∑_n=1^∞ [n(n+1)/((2n-1)(2n+3))] {1 - [2sinh((2n+1)α) + (2n+1)sinh(2α)]/([2sinh((n+1/2)α)]^2 - [(2n+1)sinh(α)]^2)},
with cosh(α) = 1 + D/R. Nevertheless, the unsteady velocity field in <ref> is still unknown, so that the drag force F̃_z cannot be found exactly.
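The series is straightforward to evaluate numerically. The sketch below is only an illustration (not the implementation used for the figures); it truncates the sum once the terms become exponentially small and recovers the expected limits: the dimensionless steady drag factor tends to unity far from the wall and grows as ∼R/D close to it, in line with the lubrication expression above.

import numpy as np

def steady_drag_factor(D_over_R, n_max=5000):
    # Dimensionless steady drag -F_hat_z/(6 i pi eta R omega A) from the series above,
    # with cosh(alpha) = 1 + D/R.
    alpha = np.arccosh(1.0 + D_over_R)
    total = 0.0
    for n in range(1, n_max + 1):
        if (2 * n + 1) * alpha > 600.0:      # remaining terms are exponentially small
            break
        num = 2.0 * np.sinh((2 * n + 1) * alpha) + (2 * n + 1) * np.sinh(2.0 * alpha)
        den = 4.0 * np.sinh((n + 0.5) * alpha)**2 - ((2 * n + 1) * np.sinh(alpha))**2
        total += n * (n + 1) / ((2 * n - 1) * (2 * n + 3)) * (num / den - 1.0)
    return 4.0 / 3.0 * np.sinh(alpha) * total

for d in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"D/R = {d:6.2f}:  drag factor = {steady_drag_factor(d):8.2f},  R/D = {1.0/d:8.2f}")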
Analytical progress can be made in the low-Wo regime, where the unsteady velocity field can be approximated by the steady solution with 𝒪(Wo^2) corrections, as ṽ = v̂[1 + 𝒪(Wo^2)]. In this limit, at leading order in inertial contributions, the drag force reduces to:
F̃_z = F̂_z -ρ/A∫_𝒱v̂^2 d𝒱.
The volume integral in <ref> can then be evaluated numerically using the model velocity field provided by <cit.>.
§.§ Finite-element method
We complement the previous asymptotic expressions of the drag force with full numerical solutions. Using the open-source finite-element library Nutils (see <cit.>), we solve <ref>. The axisymmetric velocity and pressure fields are defined on a 320 × 320-element mesh, uniformly spaced on a rectangular domain [0≤τ≤α, 0≤σ≤π].
We then use the bipolar coordinate transform:
r = a sin(σ)/[cosh(τ) - cos(σ)], z = a sinh(τ)/[cosh(τ) - cos(σ)],
with a = Rsinhα. The resulting mesh, when axisymmetry is considered, spans the entire domain where r>0 and z>0, with the exception of a circular region corresponding to the sphere, as shown in <ref>. On the symmetry axis (r=0), the flow in the radial direction is constrained and the vertical flow is required to be shear-free. At the wall surface (z=0), the velocity field is set to zero. Finally, on the surface of the sphere, the radial and vertical velocity components are set to zero and unity (imaginary part) respectively, following <ref>. From the calculated velocity and pressure fields, the total force exerted on the particle can be directly computed using <ref>. Typical flow fields are shown in <ref>(b) and (c).
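The coordinate mapping itself can be reproduced with a few lines of Python. The sketch below uses placeholder values for R and D and a much coarser grid than the 320 × 320 mesh; it merely checks that τ = 0 maps onto the wall and τ = α onto the sphere surface.

import numpy as np

R, D = 27e-6, 27e-6                       # sphere radius and gap (placeholder values)
alpha = np.arccosh(1.0 + D / R)
a = R * np.sinh(alpha)

# sigma is started slightly above 0 to keep the far-field corner finite.
tau, sigma = np.meshgrid(np.linspace(0.0, alpha, 81),
                         np.linspace(0.02, np.pi, 81), indexing="ij")
den = np.cosh(tau) - np.cos(sigma)
r = a * np.sin(sigma) / den
z = a * np.sinh(tau) / den

print(np.abs(z[0]).max())                                      # tau = 0: the wall, z = 0
print(np.abs(r[-1]**2 + (z[-1] - (R + D))**2 - R**2).max())    # tau = alpha: sphere surface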
§ RESULTS
§.§ Drag force
The total hydrodynamic force is decomposed into its in-phase and out-of-phase parts, as F̃_z = m A ω^2 - i γ Aω, and shown in <ref> and <ref> versus the dimensionless sphere-wall distance.
First, the Basset-Boussinesq-Oseen force of <ref> agrees well with the simulation results at large distance, for all Wo. The infinite-distance rescaled effective mass is found to increase with decreasing Womersley number as ∼1/Wo, for Wo^2 ≪ 1. This effect arises from the Basset term in <ref>. Indeed, invoking the velocity scale Aω, one finds a Basset force that scales as R^2√(ρηω) Aω ∼ ρ R^3 Aω^2/Wo. This could rationalize the experimental observations made in <ref>, where the large-distance natural frequency of the colloidal probe changes in liquids of different viscosities. Conversely, the rescaled damping coefficient increases with increasing Womersley number as Wo, for Wo^2 ≫ 1 (see <ref>). Here again, this effect originates from the Basset force that also scales as Wo η R Aω.
Interestingly, the behaviour of the rescaled effective mass with dimensionless distance is not universal. For large Wo, the rescaled effective mass decreases with increasing normalized distance. Furthermore, the large-distance asymptotic expression of <ref> accurately describes the rescaled effective mass in the Wo^2 ≫ 1 regime. Indeed, <ref> is valid as long as the sphere-wall distance exceeds the viscous penetration length, i.e. D ≫ δ = R/Wo. Near the wall, deviations of the rescaled effective mass from the large-distance asymptotic expression are systematically observed (see insets in <ref> (c) and (d)), and are comparable to ∼1 in magnitude. In sharp contrast, for small Wo, the rescaled effective mass decreases with decreasing dimensionless distance. The typical Wo value at which the effective-mass variation with distance changes sign is Wo^2 ≈ 5. In addition, in the small-Wo regime, the numerical solution agrees well with the asymptotic expression of <ref> (see <ref>(a)) at small dimensionless distances. Eventually, at vanishing sphere-wall distances, the effective mass tends towards a constant value, found numerically to be:
m ≈ 11.45 ρ R^3, for D ≪ R ≪δ.
Furthermore, an intermediate regime where the rescaled effective mass increases in an affine manner with the dimensionless distance is observed in Fig. <ref>(a), as predicted by <cit.>, as:
m = (9π/4) ρ R^2 (R+D), for R ≪ D ≪ δ.
The latter asymptotic expression has been obtained by considering the Lorentz correction to the Stokes drag at large distance (see <cit.>).
The rescaled damping coefficient decreases with increasing dimensionless distance (see <ref>). At low Wo, which corresponds to the low-frequency regime, the rescaled damping coefficient is well described at all distances by the steady drag force of <ref>. However, at large Wo, we observe a transition from the BBO expression at large distance to the steady drag force at small distance. The typical distance at which the transition occurs is D ≃δ, which is smaller than R. In this regime, the rescaled damping coefficient diverges as ∼1/D, as predicted by lubrication theory (see <ref>).
§.§ Comparison of the model with experiments
We now turn to a comparison of the model with experiments. The resonance properties of the colloidal probe are quantified by the natural frequency ω_0/(2π) and quality factor Q, as measured by fitting the power spectral density to the harmonic-oscillator model (see <ref>). The resulting values of these two quantities were already shown in <ref>, as functions of the probe-wall distance, for three different liquids of various kinematic viscosities.
Since the natural frequency variations are small, typically on the order of 5% or less of the bulk natural frequency, we perform a Taylor expansion of the natural frequency at first order in m_int/m_∞:
ω_0 = √(k_c/(m_∞ + m_int)) ≃ ω_0^∞(1 - m_int/(2m_∞)).
We then compute the natural frequency at all distances from the numerical simulations, by using <ref>. The Womersley number is set by using the bulk natural frequency, through Wo^2 = R^2 ω_0^∞/ν. The resulting Wo^2 values are 2.4, 18.1 and 31.7 for silicone oil, dodecane and water, respectively. As shown in <ref>(a)-(c)-(e), the experimental results agree with the numerical simulation, which confirms that the modification of the natural frequency of the oscillator originates from the hydrodynamic interactions between the sphere and the wall.
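These values follow directly from the probe radius, the material parameters and the bulk resonance frequencies quoted in the experimental section, as the short sketch below illustrates for the two liquids whose bulk frequencies are listed there; the value for water follows in the same way from its own bulk resonance frequency.

import numpy as np

R = 27e-6                                  # probe radius (m)
liquids = {                                # density (kg/m^3), viscosity (Pa s), bulk f_0 (Hz)
    "silicone oil": (930.0, 9.3e-3, 5320.0),
    "dodecane":     (750.0, 1.34e-3, 7070.0),
}
for name, (rho, eta, f0) in liquids.items():
    nu = eta / rho
    Wo2 = R**2 * 2.0 * np.pi * f0 / nu
    print(f"{name:12s}: Wo^2 = {Wo2:5.1f}")
# Output: about 2.4 and 18.1, as quoted in the text.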
Similarly, we invoke an approximate expression of the quality factor:
Q = Q_∞/[(ω_0/ω_0^∞)(1+γ_int Q_∞ω_0^∞/k_c)] ≃ Q_∞/(1+γ_int Q_∞ω_0^∞/k_c).
We then compute the quality factor at all distances from the numerical simulations, by using <ref>, and setting the same values as given above. As shown in <ref>(b)-(d)-(f), the experimental results agree with the numerical simulation, confirming that the decrease of the quality factor is essentially due to the increase of the viscous Stokes drag as the sphere-wall distance is reduced.
§ CONCLUSION
We investigated the hydrodynamic force exerted on an immersed sphere oscillating normally to a rigid planar wall, by using a combination of colloidal-probe AFM experiments, finite-element simulations and asymptotic calculations.
The in-phase and out-of-phase components of the hydrodynamic force are obtained from the measurements of the natural frequency and damping of the thermal motion of the probe for various probe-wall distances.
A shift in the natural frequency of the probe was observed with decreasing probe-wall distance, revealing a striking wall-induced unsteady effect: the natural frequency was found to increase with decreasing probe-wall distance in viscous liquids, whereas the opposite trend was observed in low viscosity liquids such as water. By solving the unsteady incompressible Stokes equations numerically, the hydrodynamic force was computed at all distances. The wall-induced increases in added mass and dissipation were then extracted and compared to their experimental counterparts – with excellent agreement. In addition, at large distance, we recovered the analytical expression derived by <cit.>.
Besides, in the low-Womersley-number limit, the hydrodynamic force could be expressed in a simple integral form using the Lorentz reciprocal theorem, which was validated by the numerical simulations. Beyond the fundamental interest for confined or interfacial fluid dynamics, the present results might be of practical importance for colloidal experiments, because they clarify the hydrodynamic drag acting on a spherical particle near a wall. Essentially, our findings highlight the crucial but overlooked role played by fluid inertia, despite the typically low Reynolds numbers.
§ ACKNOWLEDGMENTS
The authors thank Elie Raphaël, Yacine Amarouchene and Jacco Snoeijer for interesting discussions, as well as Arno Goudeau for preliminary experiments.
§ FUNDING
The authors acknowledge financial support from the European Union through the European Research Council under EMetBrown (ERC-CoG-101039103) grant. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The authors also acknowledge financial support from the Agence Nationale de la Recherche under EMetBrown (ANR-21-ERCC-0010-01), Softer (ANR-21-CE06-0029), Fricolas (ANR-21-CE06-0039) and EDDL (ANR-19-CE30-0012) grants, and from the NWO through the VICI Grant No. 680-47-632. They also acknowledge the support from the LIGHT S&T Graduate Program (PIA3 Investment for the Future Program, ANR-17-EURE-0027). Finally, they thank the Soft Matter Collaborative Research Unit, Frontier Research Center for Advanced Material and Life Science, Faculty of Advanced Life Science at Hokkaido University, Sapporo, Japan.
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
|
http://arxiv.org/abs/2307.06315v1 | 20230712172507 | Why and How to Implement Worked Examples in Upper Division Theoretical Physics | [
"Philipp Scheiger",
"Ronny Nawrodt",
"Holger Cartarius"
] | physics.ed-ph | [
"physics.ed-ph"
] |
[email protected]
Research Group Teaching Methodology in Physics and Astronomy, Friedrich-Schiller-University Jena, 07743 Jena
Physics Didactics Research, 5. Physikalisches Institut and Center for Integrated Quantum Science and Technology, Universität Stuttgart, Pfaffenwaldring 57, Stuttgart 70569, Germany
Studying worked examples has been shown by extensive research to be an effective method for learning to solve well-structured problems in physics and mathematics. The effectiveness of learning with worked examples has been demonstrated and documented in many research projects. In this work, we propose a new four-step approach for teaching with worked examples that includes writing explanations and finding and correcting errors. This teaching method can even be implemented in courses in which homework performance constitutes part of the grading system. This four-step approach is illustrated in the context of Lagrangian mechanics, which is ideal for the application of worked examples due to its universal approach to solve problems.
Why and how to implement worked examples in upper division theoretical physics
Ronny Nawrodt
August 12, 2023
§ INTRODUCTION
Worked examples are step-by-step solutions of exercises or tasks.
When well-structured worked examples are included in the learning process of novices, they improve the initial acquisition of cognitive skills as compared to conventional problem-solving activities (the so-called worked-example effect) <cit.>. Effective learning with worked examples can consist, for example, of studying several solutions and explaining them oneself before solving a similar problem without additional support (example-problem pairs <cit.>).
The advantages of this method are often understood through cognitive load theory <cit.>. This theory assumes that learning is associated with cognitive load and describes what can facilitate or impede learning. For example, the removal of extraneous tasks such as remembering how to complete mathematical operations allows the student to concentrate on the main learning task. In addition, worked examples are considered cognitively activating, meaning that they cause the student to focus attention on the most important learning tasks (for an overview, see Refs. <cit.>).
Learning with worked examples can be more effective than conventional problem solving because, when learners are presented with a new topic, they often concentrate cognitive resources on the technical aspects of solving the problem. In that case, fewer resources are available for the construction of abstract schemata, which lead to transfer and can help to solve related problems (cf. Ref. <cit.>).
The worked-example effect is well examined and documented in well-structured areas of mathematics and physics <cit.>, as well as in text comprehension <cit.> and essay writing <cit.>.
However, worked examples are not always superior to or more efficient than classical problem solving <cit.>. They need to be structured and designed so that extraneous load is decreased and germane load is increased; only then do learners profit from them. For example, self-explanations by students turn out to be important <cit.>. The request for self-explanations (and also for finding and fixing errors <cit.>) in our approach is an important difference from the simple presentation of solved problems in lectures.
The added value of worked examples has also been demonstrated for double integrals in calculus at university level <cit.>, i.e., in a topic which is needed regularly in theoretical physics. Therefore, worked examples are a promising tool for improving teaching in upper-division theoretical physics. They can be used as further methodical variation (in addition to, e.g., peer instruction and small group tutorials) for the transformation of upper-division physics courses <cit.> or to help students to overcome typical difficulties with mathematical tools <cit.>.
In addition to example-problem pairs (the simplest and most often recommended scheme of worked examples <cit.>), other approaches with additional intermediate tasks can be beneficial <cit.> such as including self-explanation prompts and successively removing more and more worked-out tasks in solutions. It is the purpose of this work to show that a four-step approach based on worked examples can indeed be introduced in theoretical physics. In Sec. <ref> we present the concept and explain its
introduction in exercise courses. Short notes on our experiences from several courses are given in Sec. <ref>, and conclusions are drawn in
Sec. <ref>. More details on the effects from cognitive load theory (cf. Refs. <cit.>) that are relevant for the concept, together with our conclusions for worked examples in theoretical physics, can be found in the supplementary material below this paper.
§ CONCEPT OF A FOUR-STEP APPROACH APPLIED IN THEORETICAL PHYSICS
The goal of the four-step approach is to strengthen the worked example effect. It ensures that extended self-explanations are taken seriously by all students due to two additional steps embedded between completely elaborated worked examples and problem tasks without solutions. While the individual steps are known and tested, we propose an arrangement ideally suited for courses that assign problem sets. A lecturer can implement the method with very little additional preparation time because it can be based on existing textbook problems. It offers the students a broader variety of examples without investing more time, and the opportunity to acquire a deeper understanding of the underlying principles used in the exercises. The concept is structured as follows.
The first step in our scheme is a maximally elaborated worked example with a detailed step-by-step solution. The problem chosen for this step should be paradigmatic for this type of physics task. To reduce extraneous load generated by the necessary collection of information (split attention effect <cit.>), the important aspects are highlighted in the problem description and the structure of the solution is explicitly shown. Ideally, the calculation path and the explanation of the different solution steps are close together on the worksheet. The students' mathematical capabilities should determine the amount of mathematical detail that is included.
After this preparation, we foster self-explanations in step two by demanding written explanations from the students for a solution that is presented without explanations. This example can be simpler than the first one to help students get started. Since most learners are passive or superficial self-explainers <cit.>, we recommend either an obligatory written explanation for assignments that are submitted for grading or peer discussions for assignments that are completed in tutorials.
In step three students are asked not only to explain the work, but also to identify and correct errors that are purposefully included in the solution. This task should be more challenging than step 2; therefore, the chosen example should be more difficult again. We recommend 2-4 errors, such that the students continue to search after finding the first error but the wrong solution does not become too confusing. The focus should be on errors in the translation of the physics problem into its mathematical description. One calculation error should be included for a quick boost of accomplishment (see supplementary material).
Step four consists of a problem without any solution and it is the student's task to develop and provide a full solution, which is the typical last step of worked examples.
A detailed, fully elaborated example for Lagrangian mechanics can be found in supplementary material 2 (below this text), which fits in a 90-min exercise class or can be completed by students at home. All examples are typical textbook problems <cit.> (an English version of the first book is also available <cit.>) and can be adopted easily. In addition, we provide a discussion of the lessons we have learned using worked examples, how they can be embedded in instruction, and their applicability to other topics.
§ EXPERIENCES WITH THE FOUR-STEP APPROACH
We used this or a similar structure of worked examples in a set of different courses from mathematics refresher courses to seminars accompanying classical mechanics, electrodynamics, quantum theory, and thermodynamics/statistics lectures. In all these setups we observed strong interest and active participation of the students. In each of our applications the majority of the students was able to solve the last problem correctly. Thus, we found that worked examples can be productive for several teaching situations in theoretical physics. The example presented in supplementary material 2 (see below this text) on Lagrangian mechanics was tested in a 90-min online seminar with a very satisfying performance of the students. Some students were able to complete the whole task within the seminar; a short homework (completing the last example) remained for the others.
While our four-step approach is, in principle, very simple, we want to emphasize the challenges. A great challenge is finding the correct level of difficulty for the learners. Novices with less prior knowledge need more explanations, and error finding can overstrain them. However, redundant information can reduce the learning effect for students with adequate prior knowledge. These two ends of the scale should be weighed for every lecture and course.
§ CONCLUSIONS
Due to our positive experiences, we recommend the application of worked examples in theoretical physics and invite other lecturers to adopt and enhance the approach presented here.
A nice side effect of worked examples with additional tasks (written self-explanations and error finding) is that they can also be used to assess the students' performances. Suggestions on how this can be implemented can be found in supplementary material 2.
Our presented scheme can be applied to every topic for which a universal solution structure to problems exists and worked examples are usable in general. Very typical examples are Lagrangian mechanics of the second kind and solutions to the time-independent Schrödinger equation (see Table <ref>).
A systematic evaluation of the long-term performance of the students after attending the worked-example problem courses is left to future research. It was not a topic of this work; however, an actual demonstration of superiority over bare problem solving is desirable. Nevertheless, previous research indicates a benefit of worked examples in university environments <cit.>.
§ AUTHOR DECLARATIONS
The authors have no conflicts to disclose.
Sweller.1985 J. Sweller and G. A. Cooper, “The Use of Worked Examples as a Substitute for Problem Solving in Learning Algebra,” Cogn. Instr. 2 (1), 59–89 (1985).
Paas.2006 F. Paas and T. van Gog, “Optimising worked example instruction: Different ways to increase germane cognitive load,” Learn. Instr. 16 (2), 87–91 (2006).
Sweller.2011 John Sweller, Paul Ayres and Slava Kalyuga, Cognitive Load Theory, (Springer New York, New York, 2011).
Sweller.1998J. Sweller, J. J. G. van Merriënboer and F. G. W. C. Paas, “Cognitive Architecture and Instructional Design,” Educ. Psychol. Rev. 10 (3), 251–296 (1998).
Renkl.2002 A. Renkl, “Worked-out examples: instructional explanations support learning by self-explanations,” Learn. Instr. 12 (5), 529–556 (2002).
Renkl.2003
A. Renkl et al., “Cognitive Load beim Lernen aus Lösungsbeispielen,” Zeitschrift für Pädagogische Psychologie 17 (2), 93–101 (2003).
Cooper.1987 G. Cooper and J. Sweller, “Effects of schema acquisition and rule automation on mathematical problem-solving transfer,” J. Educ. Psychol. 79 (4), 347–362 (1987).
Paas.1992 Fred G. W. C. Paas, “Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach,” J. Educ. Psychol. 84 (4), 429–434 (1992).
Paas.1994 F. G. W. C. Paas and J. J. G. van Merriënboer, “Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach,” J. Educ. Psychol. 86 (1), 122–133 (1994).
Ward.1990 M. Ward and J. Sweller, “Structuring Effective Worked Examples,” Cogn Instr. 7 (1), 1–39 (1990).
vanGog.2011 T. van Gog, L. Kester and F. Paas, “Effects of worked examples, example-problem, and problem-example pairs on novices' learning,” Contemp. Educ. Psychol. 36 (3), 212–218 (2011).
Elby.2000
A. Elby, “Helping physics students learn how to learn,” Phys. Educ. Res., Am. J. Phys. Suppl. 69 (7), S54 (2000).
Oksa.2010 A. Oksa, S. Kalyuga and P. Chandler, “Expertise reversal effect in using explanatory notes for readers of Shakespearean text,” Instr. Sci. 38 (3), 217–236 (2010).
Kyun.2013 S. Kyun, S. Kalyuga and J. Sweller, “The Effect of Worked Examples When Learning to Write Essays in English Literature,” J. Exp. Educ. 81 (3), 385–408 (2013).
Kalyuga.2001
S. Kalyuga, P. Chandler, J. Tuovinen and J. Sweller, “When problem solving is superior to studying worked examples,” J. Educ. Psychol. 93 (3), 579–588 (2001).
Renkl.1997
Alexander Renkl, “Learning from worked-out examples: A study on individual differences,” Cogn. Sci. 21 (1), 1–29 (1997).
Groe.2007
C. S. Große and A. Renkl, “Finding and fixing errors in worked examples: Can this foster learning outcomes?,”Learn. Instr. 17 (6), 612–634 (2007).
Brown.2016
B. R. Brown, A. Mason and C. Singh, “Improving performance in quantum mechanics with explicit incentives to correct mistakes,” Phys. Rev. Educ. Res. 12 (1), 010121 (2016).
Santosa.2019
C. A. H. F. Santosa, D. Suryadi, S. Prabawanto and S. Syamsuri, “The role of worked-example in enhancing students' self-explanation and cognitive efficiency in calculus instruction,” J. Ris. Pendidik. Mat. Jkt. 5 (2), 168–180 (2019).
Chasteen.2015
S. V. Chasteen et al., “Educational transformation in upper-division physics: The Science Education Initiative model, outcomes, and lessons learned,” Phys. Rev. ST Phys. Educ. Res. 11 (2), 020110 (2015).
Chasteen.2008
S. V. Chasteen and S. J. Pollock, “Transforming Upper-Division Electricity and Magnetism,” in AIP Conference Proceedings, edited by C. Henderson, M. Sabella, and L. Hsu (American Institute of Physics) 1064 (1), p. 91 (2008).
Goldhaber.2009
S. Goldhaber et al., “Transforming upper-division quantum mechanics: Learning goals and assessment,” in AIP Conference Proceedings, edited by M. Sabella, C. Henderson, and Ch. Singh (American Institute of Physics) 1179 (1), p. 145 (2009).
Pollock.2012
S. J. Pollock, R. E. Pepper and A. D. Marino, “Issues and progress in transforming a middle-division classical mechanics/math methods course,” in AIP Conference Proceedings, (American Institute of Physics) 1413 (1), p. 303.
Wilcox.2013
B. R. Wilcox, M. D. Caballero, D. A. Rehn and S. J. Pollock, “Analytic framework for students' use of mathematics in upper-division physics,” Phys. Rev. ST Phys. Educ. Res. 9 (2), 020119 (2013).
Caballero.2015
M. D. Caballero, B. R. Wilcox, L. Doughty and S. J. Pollock, “Unpacking students' use of mathematics in upper-division physics: where do we go from here?,” Eur. J. Phys. 36 (6), 065004 (2015).
Pepper.2012
R. E. Pepper, S. V. Chasteen, S. J. Pollock and K. K. Perkins, “Observations on student difficulties with mathematics in upper-division electricity and magnetism,” Phys. Rev. ST Phys. Educ. Res. 8 (1), 010111 (2012).
Atkinson.2003
R. K. Atkinson, A. Renkl and M. M. Merrill, “Transitioning From Studying Examples to Solving Problems: Effects of Self-Explanation Prompts and Fading Worked-Out Steps,” J. Educ. Psychol. 95 (4), 774–783 (2003).
Paas.2003
F. Paas, A. Renkl and J. Sweller, “Cognitive Load Theory and Instructional Design: Recent Developments,” Educ. Psychol. 38 (1), 1–4 (2003).
Trafton.1993
J. G. Trafton and B. J. Reiser, “Studying examples and solving problems: Contributions to skill acquisition,” in Proceedings of the 15th conference of the Cognitive Science Society, (Lawrence Erlbaum Associates, Publishers, Hillsdale, New Jersey, 1993), 1017–1022 (1993).
Rodiawati.2019
A. Rodiawati and E. Retnowati, “How to Design Worked Examples for Learning Patterns in Mathematics,” J. Phys. Conf. Ser. 1320 (1), 012045.
Chi.1989
M. T. H. Chi et al., “Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems,” Cogn. Sci. 13 (2), 145–182 (1989).
Hausmann.2002
R. G. M. Hausmann and M. T. H. Chi, “Can a computer interface support self-explaining?,” Cogn. Technol. 7 (1), 4–14 (2002).
Schworm.2006
S. Schworm and A. Renkl, “Computer-supported example-based learning: When instructional explanations reduce self-explanations,” Comput. Educ. 46 (4), 426–445 (2006).
RobertS.Siegler.2008
R. S. Siegler and Z. Chen, “Differentiation and integration: guiding principles for analyzing cognitive change,” Dev. Sci. 11 (4), 433–448 (2008).
Durkin.2012
K. Durkin and B. Rittle-Johnson, “The effectiveness of using incorrect examples to support learning about decimal magnitude,” Learn. Instr. 22 (3), 206–214 (2012).
Stark.2011
R. Stark, V. Kopp and M. R. Fischer, “Case-based learning with worked examples in complex domains: Two experimental studies in undergraduate medical education,” Learn. Instr. 21 (1), 22–33 (2011).
Nolting.2011
Wolfgang Nolting, Grundkurs Theoretische Physik 2, 8th edition (Springer, Berlin, Heidelberg, 2011).
Bartelmann.2018
M. Bartelmann, B. Feuerbacher, T. Krüger, D. Lüst, A. Rebhan and A. Wipf, Theoretische Physik 1 | Mechanik, 1st edition (Springer, Berlin, Heidelberg, 2018).
Nolting.2016
Wolfgang Nolting, Theoretical Physics 2 - Analytical Mechanics, 1st edition (Springer International Publishing, 2016).
The authors are grateful for financial support by the Academy of Teaching Support of Friedrich Schiller University Jena. This research was also funded by the Bundesministerium für Bildung und Forschung (Federal Ministry of Education and Research), at the Professional School of Education Stuttgart Ludwigsburg in the project “Lehrerbildung PLUS,” grant No. 01JA1907A. This project is part of the “Qualitätsoffensive Lehrerbildung,” a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training.
The authors also thank Thomas Rubitzko for stimulating discussions at the beginning of this project.
§ THEORETICAL FRAMEWORK
As highlighted in the main text, worked examples need to be well structured and embedded in the learning process of novices to improve the initial skill acquisition <cit.>.
The superiority of learning with worked examples as compared to conventional problem solving is often explained by the cognitive load theory (e.g., <cit.>). Cognitive load can be described as the amount of working memory resources a person needs to fulfill a task. The cognitive load theory describes three types of cognitive load that are relevant during learning. The first type, the intrinsic load, refers to the complexity of the learning contents in relation to a learner's prior knowledge. The second type, the extraneous load, refers to activities that are irrelevant (in general or under certain circumstances) for learning. For example, the search for new or forgotten information, such as the definition of a formula or a calculation rule, generates extraneous load if it is not an explicitly requested learning goal. In well-designed worked examples this type of cognitive load is reduced as much as possible. The third type, called germane load, refers to cognitive resources that are bound by learning-relevant activities. Unlike intrinsic cognitive load, which is generally considered invariant, teachers can also manipulate germane load. Increasing germane load aims not only at solving the task (intrinsic load) but at maximizing the learning effect by automating schemata and building understanding. Self-explanations of the step-by-step solution are an example of such cognitive load.
The worked-example effect arises because, when learners solve problems at the beginning of an unknown topic, significant cognitive resources are invested in technical aspects of the solution rather than dedicated to the construction of abstract schemata. It is this construction that leads to transfer and can help to solve related problems. Solving problems at the beginning is usually accomplished by a means-ends-analysis strategy. Here a substantial portion of cognitive load is needed to keep many aspects in mind, such as the current problem state, the goal state, the differences between them, etc. The remaining working memory capacities are therefore limited and no longer available for learning processes like the construction of abstract schemata, which are crucial in theoretical physics. In contrast, learners can concentrate on gaining understanding when studying worked examples since they are freed from performance demands (cf. Ref. <cit.>).
Cognitive load theory explains why learning with worked examples is superior to common problem solving in a broad variety of learning situations. So far there are few examples of applications in upper-division physics. However, there is no reason why there should not be a positive effect. In particular, examples from mathematics show that even complex tasks can successfully be addressed with worked examples <cit.>.
This makes them a promising tool for improving teaching in upper-division theoretical physics. Thus, worked examples should be useful in the transformation of upper-division physics courses as pursued, e.g., in the Science Education Initiative (for an overview see <cit.>). In addition, worked examples might help students overcome typical difficulties with certain mathematical tools (see, e.g., <cit.>) when they have to solve problems with long and complex mathematical transformations.
§.§ DESIGNS FOR WORKED EXAMPLES
Worked examples are not always superior or more efficient in learning as compared to problem solving <cit.>. However when worked examples are structured and designed in a way that extraneous load is decreased and germane load is increased, learners do profit from them.
A good design to reduce extraneous load takes several effects of the cognitive load theory into account (cf. <cit.>). These are the split-attention effect, the redundancy effect and the expertise reversal effect. When learners have to split their attention between at least two sources of information that have been separated either spatially or temporally, extraneous cognitive load is increased and the learning efficiency is decreased (split attention effect). Information that is redundant or unnecessary has the same effect on learners' cognitive load (redundancy effect), as does information that is already known by learners and stored in their long-term memory (expertise reversal effect). In order to motivate learners to study the examples and to process the information in them, it is recommended to ask them to solve a similar problem after studying the worked example.
Besides decreasing extraneous cognitive load, we want to increase germane load. Therefore we use the self-explanation effect and the principle of studying errors. We want to foster self-explanation because learners who study examples longer and explain them more actively to themselves are more successful <cit.>. But most learners are passive or superficial self-explainers according to Renkl <cit.>. This leads to the conclusion that demanding self-explanations through instructional procedures is crucial in learning from worked examples. Self-explanation does not necessarily mean a talking-aloud procedure. There is evidence that prompting written self-explanations fosters learning outcomes <cit.>. The self-explanation effect can be increased by explaining correct and incorrect solutions <cit.>; especially explaining why incorrect solutions are wrong helps to avoid these errors later <cit.>. The task of correcting their own mistakes in mid-term exams can help students to increase their performance in the final exam <cit.>. However, finding and explaining errors in worked examples only has a positive effect for learners with adequate prior knowledge <cit.>, so implementing errors in worked examples too early can overwhelm weaker learners with little prior knowledge. To prevent such an overload, weaker learners need additional support by explicitly marking errors <cit.> or by expert explanations and feedback on why certain steps in the solution are correct or incorrect <cit.>.
Besides the most recommended scheme of worked examples, viz. example-problem pairs <cit.>, other approaches with more intermediate steps are possible. In many tasks it can be beneficial to combine self-explanation prompts with fading, i.e., successively removing more and more worked-out steps in the solutions, as Atkinson et al. <cit.> have shown. They found that this combination even positively influences the quality of example processing for far-transfer tasks, which is exactly the scenario in which we wish to apply worked examples.
§.§ CONCLUSIONS FOR THE DESIGN IN THEORETICAL PHYSICS
To be most efficient in learning scenarios, worked examples must be designed according to the effects and principles described above. However, the more complex the problem is, the harder it is to take account of all these effects in the given solution. This leads to a special challenge in theoretical physics, since it requires a broad spectrum of prior knowledge and skills (mathematical and physical). However, this does not mean that worked examples are not applicable. Santosa <cit.> indicated that worked examples can increase the learning efficiency in complex calculus tasks such as higher-dimensional integrals, exactly the type of mathematics required to solve many physics problems. This result provides a strong motivation to extend this approach to exercises in theoretical physics.
In addition to physical principles (e.g. laws, boundary conditions, etc.), mathematical challenges (such as differential equations, differential & integral calculus, linear algebra and so on) also increase the difficulty of physics tasks. When students are not well versed in mathematical techniques, they have to interrupt their thoughts on the intended problem with a search for (mathematical) solution methods. Such a search, which can lead to other textbooks or lectures as sources of information, creates extraneous load due to the split-attention effect.
Thus, in the applications we have in mind, addressing the split-attention effect is the most important lever for reducing extraneous cognitive load. Reducing the spatial split-attention effect in such complex exercises is challenging due to the amount of information. Nevertheless, it can be minimized to a certain degree by highlighting important information, correlations or dependencies in the problem. In addition, offering a step-by-step solution of a worked example should reduce the temporal split-attention effect.
Since students of theoretical physics are not total novices in problem solving, our conclusion is mainly to foster self-explanations to increase germane cognitive load in the subsequent examples. One possibility is to request written explanations from students for some or all steps of the solution <cit.>. Another possibility is to extend the examples by comprehension questions that students are requested to answer. When students have shown adequate problem-solving skills or broad prior knowledge, finding and correcting errors should improve the effects of self-explanation even more.
§ FOUR-STEP APPROACH IN LAGRANGIAN MECHANICS
This supplement provides a paradigm of how worked examples could be implemented in theoretical physics. Worked examples are useful in every topic in which there is a universal solution structure. To illustrate this approach, we have chosen the topic of Lagrangian mechanics of the second kind. Its solution structure, shared by a wide variety of application examples, is itself a prime example of a solution scheme and therefore ideally suited to illustrate worked examples.
* Define the holonomic constraints and calculate the degrees of freedom S.
* Define generalized coordinates q_i according to the holonomic constraints.
* Write expressions for the kinetic T and potential V energy.
* Set up the Lagrangian.
* Determine the equations of motion for every generalized coordinate.
* Reduce the equations if possible.
In addition, Lagrangian mechanics is usually the university students' first exposure to a formalism that is more conceptual than Newtonian mechanics, and thus additional support is appropriate here. The examples used in this paradigm can be found in modified form in textbooks like <cit.>. An English version of the first book is also available <cit.>.
The first step in our scheme is a fully worked example with a detailed step-by-step solution. After this preparation we foster self-explanations in step two by demanding written explanations from the students to accompany a solution that is provided. In step three this is extended by finding and fixing errors with an explanation. Step four consists of a problem without any solution and it is the student's task to develop and provide a full solution.
The whole program fits into a 90-minute in-person exercise course or can be completed by students at home. All examples are typical textbook problems and can be adopted easily.
§.§ Step 1: Maximally elaborated solution - Pendulum on springs
As mentioned in the main text, the first step in our scheme is a fully worked example with a detailed step-by-step solution. The split attention effect is reduced by highlighting the important aspects in the problem description. Explanations are given right beside each task of the solution procedure. Mathematical explanations should be given where necessary depending on the mathematical capabilities of the learners.
In the context of Lagrangian mechanics, the pendulum on springs (see Fig. <ref>) was chosen as the first example since it contains many typical features of textbook exercises in Lagrangian mechanics. Thus, the detailed solution of this problem offers the learners a broad overview. Students can learn how to deal with two coupled masses and two types of potential energy. For novices in Lagrangian mechanics we recommend a very detailed solution, for example by explicitly naming all dimensions and arguing via the holonomic constraints that the z component is irrelevant. If the students are already familiar with holonomic constraints, this part is redundant and should be excluded from the worked example.
Exercise: Set up the Lagrangian (or Lagrangian function) and determine the equations of motion. Reduce them as much as possible.
A point mass m_1 is attached via two springs with the same spring constant k between two walls. The equilibrium point of the springs corresponds to the center position of the point mass between those walls. The restoring force is |𝐅| = k |𝐱|. Mass m_1 is only able to move horizontally along the x-axis. A second point mass m_2 is attached to m_1 via a massless rod with length l. The second point mass can oscillate in the x-y-plane under influence of a homogeneous gravitational field with force 𝐅_𝐆 = -m_2 g 𝐞_𝐲. The angle of deflection is given by φ (see Fig. <ref>). The small-angle approximation applies.
Table <ref> shows the solution path alongside detailed explanations for each calculation step.
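A computer-algebra system can be used to generate or cross-check such a step-by-step solution. The following Python/sympy sketch is one possible instructor-side check (not part of the worksheet); it assumes that each of the two springs contributes a potential energy k x^2/2, so that the combined spring potential is k x^2.

import sympy as sp

t = sp.symbols('t')
m1, m2, k, l, g = sp.symbols('m_1 m_2 k l g', positive=True)
x = sp.Function('x')(t)      # horizontal position of m_1 (zero at the spring equilibrium)
phi = sp.Function('phi')(t)  # deflection angle of the rod

x2 = x + l * sp.sin(phi)     # Cartesian coordinates of m_2
y2 = -l * sp.cos(phi)

T = sp.Rational(1, 2) * m1 * x.diff(t)**2 \
    + sp.Rational(1, 2) * m2 * (x2.diff(t)**2 + y2.diff(t)**2)
V = k * x**2 + m2 * g * y2   # two springs (assumed k x^2/2 each) plus gravity on m_2
L = sp.simplify(T - V)

def euler_lagrange(L, q):
    """d/dt(dL/dq') - dL/dq for a generalized coordinate q(t)."""
    return sp.simplify(sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q))

print(sp.Eq(euler_lagrange(L, x), 0))    # (m1+m2) x'' + m2 l (phi'' cos(phi) - phi'^2 sin(phi)) + 2 k x = 0
print(sp.Eq(euler_lagrange(L, phi), 0))  # m2 l (l phi'' + x'' cos(phi) + g sin(phi)) = 0

Linearizing the printed output for small φ reproduces the coupled oscillator equations expected from the small-angle approximation requested in the exercise.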
§.§ Step 2: Fostering self-explanations - Mass in a cone in a gravitational field
In this example, students are encouraged to formulate their own explanations. A simpler example is sufficient for this task since the self-explanation is the focus. This helps the students to get started.
In Lagrangian mechanics, the problem "Mass in a cone in a gravitational field" (see Fig. <ref>) was chosen as the second example. The problem is not as difficult as example 1, since there is only one mass and one potential energy.
Exercise: Set up the Lagrangian and determine the equations of motion. Reduce them as much as possible.
A pellet of mass m sits on the inner face of an upward-opening cone. The cone has an aperture angle of 2Θ and the pellet can move without friction on its inner surface. It is under the influence of the gravitational force 𝐅_𝐆 = - mg 𝐞_𝐳. The axis of the cone coincides with the z-axis and its apex is located at the origin, see Fig. <ref>.
Table <ref> shows the mathematical solution.
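The same kind of symbolic cross-check works here. The sketch below (again only an instructor-side aid, not part of the worksheet) uses cylindrical coordinates and the cone constraint z = r/tan Θ, with Θ the half-aperture angle.

import sympy as sp

t = sp.symbols('t')
m, g, Theta = sp.symbols('m g Theta', positive=True)
r = sp.Function('r')(t)      # cylindrical radius of the pellet
phi = sp.Function('phi')(t)  # azimuthal angle

z = r / sp.tan(Theta)        # constraint of the cone surface (half-aperture angle Theta)
T = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * phi.diff(t)**2 + z.diff(t)**2)
V = m * g * z
L = sp.simplify(T - V)

def euler_lagrange(L, q):
    return sp.simplify(sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q))

print(sp.Eq(euler_lagrange(L, r), 0))    # m r''/sin(Theta)^2 - m r phi'^2 + m g/tan(Theta) = 0
print(sp.Eq(euler_lagrange(L, phi), 0))  # d/dt(m r^2 phi') = 0: the angular momentum is conserved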
§.§ Step 3: Finding and fixing errors - Two masses on a wedge coupled by a spring
To foster self-explanation, the third example has errors intentionally included. This task should be more challenging than step 2; therefore the chosen example should be more difficult. For Lagrangian mechanics, an example with two generalized coordinates (Fig. <ref>) was chosen such that the errors cannot be found easily. In the incorrect solution a holonomic constraint is wrong. This error was chosen because students often have problems with the transition between mathematics and physics. The second error is located in the potential energy of the spring. Only the extension or compression compared to the stress-free length is relevant for the potential energy. The missing stress-free length in the potential is a typical student mistake and is easily overlooked in a given solution.
Exercise: Set up the Lagrangian and determine the equations of motion of the problem given in Fig. <ref>.
Two masses m_1 and m_2 move on a wedge. There is no friction, but a gravitational field acts on each mass m_i with the force 𝐅_𝐆 = -m_i g 𝐞_𝐲. The masses are connected via a massless spring with a spring constant k and a stress-free length l.
Set up the holonomic constraints and determine the number S of degrees of freedom
z_1 = 0, z_2 = 0, y_1/x_1 = tanα , y_2/x_2 = tanβ
S = 6 - 4 = 2
Define generalized coordinates, which conform to the holonomic constraints
q_1 = r_1, q_2=r_2
Transformations
x_1 = -r_1 cosα, x_2 = r_2cosβ
y_1 = -r_1 sinα, y_2 = -r_2 sinβ
z_1 = 0, z_2 = 0
Set up the kinetic T and potential V energy
T = m_1/2ṙ_1^2 (cos^2 α +sin^2 α) + m_2/2ṙ_2^2 (cos^2 β +sin^2 β)
= 1/2 (m_1 ṙ_1^2 + m_2ṙ_2^2)
V = -m_1 g r_1 sinα - m_2 g r_2 sinβ + k/2 (r_1+r_2)^2
Set up the Lagrangian
L = T-V = 1/2 (m_1 ṙ_1^2 + m_2ṙ_2^2) + m_1 g r_1 sinα
+ m_2 g r_2 sinβ - k/2 (r_1+r_2)^2
Determine the equations of motion for every variable
d/dt∂ L/∂ṙ_1 - ∂ L/∂ r_1 = 0
m_1 r̈_1 - m_1 g sinα + k(r_1 + r_2) = 0
d/dt∂ L/∂ṙ_2 - ∂ L/∂ r_2 = 0
m_2 r̈_2 - m_2 g sinβ + k(r_1 + r_2) = 0
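When preparing such an error-finding task, it can be convenient to derive the correct solution symbolically first and plant the errors afterwards. The sketch below does this for the wedge problem with the stress-free length l included in the spring potential; comparing its output with the solution printed above exposes the planted error in the spring term.

import sympy as sp

t = sp.symbols('t')
m1, m2, k, l, g, alpha, beta = sp.symbols('m_1 m_2 k l g alpha beta', positive=True)
r1 = sp.Function('r_1')(t)   # distance of m_1 from the apex along its face
r2 = sp.Function('r_2')(t)   # distance of m_2 from the apex along its face

T = sp.Rational(1, 2) * (m1 * r1.diff(t)**2 + m2 * r2.diff(t)**2)
V = -m1 * g * r1 * sp.sin(alpha) - m2 * g * r2 * sp.sin(beta) \
    + sp.Rational(1, 2) * k * (r1 + r2 - l)**2      # stress-free length l included
L = T - V

def euler_lagrange(L, q):
    return sp.simplify(sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q))

print(sp.Eq(euler_lagrange(L, r1), 0))   # m1 r1'' - m1 g sin(alpha) + k (r1 + r2 - l) = 0
print(sp.Eq(euler_lagrange(L, r2), 0))   # m2 r2'' - m2 g sin(beta)  + k (r1 + r2 - l) = 0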
§.§ Step 4: Student's task - Rotating mass on a tabletop connected to a second hanging mass
Since learners take the self-explanation more seriously when they have to solve a familiar problem on their own, the last step is the most important. A suitable task again contains all the typical elements but is not too easy, or it is a transfer task.
In Lagrangian mechanics we chose a rotating mass on a tabletop connected to a second hanging mass (Fig. <ref>) as the fourth example, since many elements of the first three examples can be used in this solution. The coupling of two masses can be found in examples 1 & 3, and the rotary motion in example 2. However, the combination of both has not appeared before. Thus, there is a transfer that students have to master when solving this problem.
Exercise: Set up the Lagrangian and determine the equations of motion of the following setup. Reduce them as much as possible.
A mass m rotates without friction on a tabletop. The mass is connected to a second mass M by a string of the length l=r+s. The second mass is below the tabletop and is pulled downwards by the influence of the gravitational force 𝐅_𝐆=-Mg𝐞_𝐳.
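As for the previous examples, the reference solution can be generated symbolically. The sketch below assumes that the string stays taut and passes through the center of rotation, so that the hanging length is s = l - r.

import sympy as sp

t = sp.symbols('t')
m, M, l, g = sp.symbols('m M l g', positive=True)
r = sp.Function('r')(t)      # radius of the rotating mass on the tabletop
phi = sp.Function('phi')(t)  # its azimuthal angle

s = l - r                    # hanging length below the table (taut string assumed)
T = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * phi.diff(t)**2) \
    + sp.Rational(1, 2) * M * s.diff(t)**2
V = -M * g * s               # gravitational potential of the hanging mass
L = T - V

def euler_lagrange(L, q):
    return sp.simplify(sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q))

print(sp.Eq(euler_lagrange(L, r), 0))    # (m + M) r'' - m r phi'^2 + M g = 0
print(sp.Eq(euler_lagrange(L, phi), 0))  # d/dt(m r^2 phi') = 0: angular momentum is conserved

The conserved angular momentum can then be used to eliminate φ̇ and reduce the problem to a single differential equation for r(t), which is the reduction asked for in the exercise.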
§.§ Discussion
§.§.§ Lessons learned
As mentioned in the main text, we used this or a similar structure of worked examples in two mathematics refresher courses prior to theoretical physics lectures in classical mechanics and electrodynamics, in a further topic of classical mechanics (Noether theorem), in electrodynamics (profiles for Maxwell's equations and derivation of the wave equation), in quantum theory (verbalization of formulas) and thermodynamics/statistical physics (idealized cycles and partition functions). In all these setups we observed active participation of our students. The worked example approach was preferred by students in initial test phases to the presentation of solutions by the lecturer and was rated as more useful. The performance of our students in the last problem was satisfying. We were also able to show our students more problems in each topic this way. Thus, we experienced that worked examples can be productive for several teaching situations in theoretical physics.
While the scheme presented here is in principle very simple, we want to mention two aspects which should be kept in mind. The first addresses the prior knowledge of the learners. The great challenge for lecturers who want to use worked examples is to adapt the level of difficulty to the learners. Novices need more solution steps and explanations. Error finding can also overstrain them. On the other hand, redundant information can reduce the learning effect for students with adequate prior knowledge.
For example, in previous tests of worked examples in Lagrangian mechanics of the first kind, we delivered only a mathematical solution without explanations. Since our students were not total novices, we thought additional explanations were redundant. However, this approach overstrained them. Thus, our recommendation is to deliver more rather than fewer explanations in the beginning.
For step 3 (finding and fixing errors) we recommend 2 to 4 errors, so that the students continue to search after the first find but the incorrect solution does not become too confusing. The type of errors should depend on the learning objectives of the instructor. We typically focus on physically incorrect terms, but miscalculations can also be included to increase the number of errors. In general we recommend errors in the translation of the physics problem into its mathematical description, one calculation error for a quick boost of accomplishment, and errors that are typical for the kind of problem used. Here it can be helpful to skip the third step in one year and to search students' solutions for patterns in their mistakes.
In addition, when existing problem sets are used for the worked examples, the notation should be the same in every example; otherwise extraneous cognitive load can increase and the worked examples lose their efficiency. If these two aspects are considered, we are convinced that worked examples improve lectures and exercises in theoretical physics, as upper-division level research in mathematics <cit.> and our own experience indicate.
§.§.§ Embedding in instruction
A beneficial side effect of the self-explanation fostering variants is their use for lecture certificates or ungraded semester performances. In some physics study programs students have to solve a certain amount of problems every week to obtain admission to the exam. This practice is justified by the training students get from solving problems as preparation for the exam, and it is meant to protect students from taking an exam ill-prepared.
As mentioned above, worked examples can be superior to problem solving. This is why we suggest replacing the often-used problem-solving exercises with worked examples that demand self-explanations or finding and fixing errors.
However, if the students do not solve all problems by themselves and a rating of their work is required, the rating system has to be adapted. Instead of taking just the number of correct solution steps or solved problems into account, it is possible to base the rating on the steps of the worked example scheme presented above. The quality of the students' explanations in the self-explanation tasks as well as the number of detected errors in the error-finding problem can justify the lecture certificate. In particular, the number of problems that can be utilized for the rating does not change much. We have had good experience with replacing two unassisted problems with four worked examples of the types described above.
The scheme presented does not need to be rigidly adopted by other instructors. Most important are the presentation of the solution structure (step 1) and the student's task (step 4). The steps in between can help to ease the transition from understanding a solution to producing a solution oneself. Therefore, this four-step approach can also be split or modified and adapted to time and structural constraints. While the four-step approach filled 90 minutes in our case, the steps can be split up, due to time restrictions, into consecutive seminars (down to 45-minute seminars). The time constraints and the level of the students must be weighed here. It is conceivable to outsource steps 1 and 2 to the homework or to omit one step. For teacher-training students we once skipped step 2 in favor of an educational introduction to worked examples and cognitive load theory.
§.§.§ Applicability of the structure in other topics
In every topic where there is a universal solution structure to problems, similar to that in Lagrangian mechanics, our presented scheme and worked examples in general are applicable. For example, solving the time-independent Schrödinger equation
Ĥ|Ψ⟩ = E |Ψ⟩
with a solution structure like:
* Determine the dimension of the Hilbert space and set up the Hamiltonian according to the physics problem.
* Set up the eigenvalue equation.
* Determine the eigenvalues.
* Determine the eigenvectors according to their eigenvalues.
* Check for boundary conditions.
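As a minimal illustration of this solution structure (not part of the original example set), the following NumPy sketch walks through the listed steps for a hypothetical two-level Hamiltonian; the matrix entries are arbitrary values chosen only for demonstration.

```python
import numpy as np

# Step 1: dimension of the Hilbert space and a Hamiltonian for a hypothetical
# two-level system (numerical values are arbitrary demo choices).
E0, Delta = 1.0, 0.25
H = np.array([[E0, Delta],
              [Delta, E0]])

# Steps 2-4: eigenvalue equation H|psi> = E|psi>; eigenvalues and eigenvectors.
energies, states = np.linalg.eigh(H)        # Hermitian eigenproblem
for E, psi in zip(energies, states.T):
    print(f"E = {E:+.3f},  |psi> = {np.round(psi, 3)}")

# Step 5: for a finite-dimensional model the 'boundary conditions' reduce to
# checking normalization of the eigenvectors.
print(np.allclose(np.linalg.norm(states, axis=0), 1.0))
```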
Since physics problems occur in well-structured domains, there are many topics suitable for worked examples. Just to inspire the reader's mind we want to mention a few examples among many others. In classical mechanics there are calculations with work integrals or inertia tensors, Noether's theorem, and Hamiltonian mechanics. In quantum physics, besides the time-independent Schrödinger equation, there are perturbation theory and the determination of Clebsch-Gordan coefficients. In classical electromagnetism many solution methods for Poisson's equation, like the multipole expansion, are suitable for worked examples, as are calculations for thermodynamic cycles or the determination of partition functions in statistical mechanics.
|
http://arxiv.org/abs/2307.04229v1 | 20230709170952 | Frequency-Domain Model of Microfluidic Molecular Communication Channels with Graphene BioFET-based Receivers | [
"Ali Abdali",
"Murat Kuscu"
] | cs.ET | [
"cs.ET"
] |
Frequency-Domain Model of Microfluidic Molecular Communication Channels with Graphene BioFET-based Receivers
Ali Abdali, Student Member, IEEE,
and Murat Kuscu, Member, IEEE
The authors are with the Nano/Bio/Physical Information and Communications Laboratory (CALICO Lab), Department of Electrical and Electronics Engineering, Koç University, Istanbul, Turkey (e-mail: {aabdali21, mkuscu}@ku.edu.tr).
This work was supported in part by EU Horizon 2020 MSCA-IF under Grant #101028935, and by The Scientific and Technological Research Council of Turkey (TUBITAK) under Grant #120E301.
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Molecular Communication (MC) is a bio-inspired communication paradigm utilizing molecules for information transfer. Research on this unconventional communication technique has recently started to transition from theoretical investigations to practical testbed implementations, primarily harnessing microfluidics and sensor technologies. Developing accurate models for input-output relationships on these platforms, which mirror real-world scenarios, is crucial for assessing modulation and detection techniques, devising optimized MC methods, and understanding the impact of physical parameters on performance. In this study, we consider a practical microfluidic MC system equipped with a graphene field effect transistor biosensor (bioFET)-based MC receiver as the model system, and develop an analytical end-to-end frequency-domain model. The model provides practical insights into the dispersion and distortion of received signals, thus potentially informing the design of new frequency-domain MC techniques, such as modulation and detection methods. The accuracy of the developed model is verified through particle-based spatial stochastic simulations of pulse transmission in microfluidic channels and ligand-receptor binding reactions on the receiver surface.
Molecular communications, receiver, frequency-domain model, graphene bioFETs, microfluidics, ligand-receptor interactions
§ INTRODUCTION
Molecular Communications (MC) is a bio-inspired communication paradigm that uses molecules as information carriers <cit.>. The unique properties of MC, such as biocompatibility, energy efficiency, and reliability under complex and dynamic physiological conditions, are promising for enabling seamless interactions among natural/synthetic cells and micro/nanoscale devices, so-called bio-nano things. Through the emerging Internet of Bio-Nano Things (IoBNT) framework, MC is expected to usher in a new era of unparalleled healthcare and environmental applications at the intersection of information communication technologies, biotechnology, and nanotechnology <cit.>.
MC research has predominantly focused on the development of theoretical channel models, modulation, detection and coding schemes, as well as the design of transmitter and receiver architectures <cit.>. Recent progress in the field has facilitated the integration of experimental validations with theoretical studies, utilizing MC testbeds of varying scales and sophistication. Notably, some of these testbeds, due to their scalability to micro/nanoscales, have the potential to serve as an ideal link between theoretical frameworks and practical applications of MC. Microfluidics technology plays a pivotal role in these practical investigations, as it enables the testing of diverse MC channels while offering comprehensive control over system parameters, such as flow conditions and channel geometry. Moreover, microfluidic channels closely mimic blood vessels and other biological microenvironments, characterized by convection-diffusion-based molecular transport processes <cit.>.
Integrating chemical sensors into microfluidic chips has augmented the utility of these testbeds, with sensors acting as MC receivers that vary in material, geometry, and transduction processes. Among these, affinity-based field-effect transistor biosensors (bioFETs) have emerged as compelling MC receiver architectures due to their inherent signal amplification, miniaturization capabilities, and ligand receptor-based interfaces that provide control over selectivity and sensitivity, resembling biological cells performing molecular sensing. Graphene bioFETs have garnered particular attention owing to the graphene's flexibility, two-dimensional (2D) geometry, and capacity to be functionalized with various bioreceptors, including DNA aptamers and proteins <cit.>. Initial investigations involving practical microfluidic MC systems equipped with graphene bioFET-based MC receivers have already unveiled crucial insights into the effects of convection, diffusion, ligand-receptor (LR) binding reactions, and receiver material properties on the MC performance <cit.>.
Despite these developments, the majority of theoretical and practical studies still primarily focus on the time-domain aspects of MC systems. This can be attributed to the fundamentally distinct nature of the information carriers, i.e., discrete molecules, which lead to ambiguities and complications in defining carrier waves and frequencies for this unconventional communication technique. Additionally, the innate nonlinearity and time-variance of MC communication systems pose challenges to the adoption of frequency-domain techniques as widely-utilized tools in MC research. Nevertheless, when operating regimes can be characterized or approximated as linear and time-invariant (LTI), exploring the frequency-domain features of MC systems can yield crucial insights regarding channel characteristics, such as bandwidth, as well as dispersion and distortion of the transmitted signals. This approach also offers a deeper understanding of the impact of various system parameters on communication performance, including channel geometry, LR binding kinetics at the channel/receiver interface, and the electrical characteristics of the transducer channel within the receiver. Moreover, frequency-domain models can enable the adoption of sophisticated communication tools and methods from conventional EM, including transfer functions and filters, to optimize MC systems and develop new communication techniques, such as frequency-domain pulse-shaping, modulation and detection techniques.
There has been a limited focus on frequency-domain analysis in MC. A notable contribution is the frequency-domain model for diffusion-based MC systems developed in <cit.>, which allows the determination of the end-to-end normalized gain and delay of the MC system as a function of frequency. In <cit.>, a frequency-domain equalizer (FDE) was proposed to address the inter-symbol interference (ISI) problem in MC. The transfer function of the MC channel, considering only diffusion-based transport, was derived in <cit.>.
In our recent study <cit.>, we introduced a frequency-domain detection technique for MC to estimate the concentration of information molecules in the presence of interfering molecules, leveraging LR binding kinetics. This method employs the power spectral density (PSD) of binding noise, which exhibits unique properties for each molecule type, enabling the differentiation of information molecules from interferers in the frequency-domain.
In this study, we present an end-to-end frequency-domain system model for a microfluidic MC channel employing a graphene bioFET-based MC receiver with ligand receptors on its surface for detecting molecular messages carried by information molecules (i.e., ligands). We consider transmitted signals as finite-duration molecular concentration pulses. We partition the end-to-end MC system into three subsystems: (i) the microfluidic propagation channel, where ligand propagation is governed by convection and diffusion; (ii) the channel/receiver interface, where the receiver's surface receptors interact with propagating ligands; and (iii) the graphene bioFET-based receiver, which transduces the number of bound receptors into an output electrical current. By employing LTI approximations, we analyze each subsystem independently and derive their transfer functions. We then combine these to obtain the end-to-end MC system's transfer function. The developed frequency-domain model is validated through particle-based spatial stochastic simulations using Smoldyn, an open-source simulation framework <cit.>. The simulation results show a strong agreement with the developed analytical frequency-domain model. We also examine the impact of various system parameters, such as pulse width of input signals, diffusion coefficient of ligands, and binding and unbinding rates of LR pairs, on the transfer function. Additionally, we leverage the developed model to determine the minimum sampling frequency for digitizing the output current by identifying the cutoff frequency and applying the Nyquist–Shannon theorem.
The remainder of this paper is organized as follows. Section <ref> offers an in-depth analysis of the three key components of the microfluidic MC system, followed by the development of the end-to-end frequency-domain model, which is then utilized to obtain the output signal. Section <ref> presents the simulation results intended to validate the developed model. Lastly, Section <ref> delivers concluding remarks.
§ END-TO-END FREQUENCY-DOMAIN MODEL
In this section, we present the derivation of the end-to-end frequency-domain model for the microfluidic MC system, depicted in Fig. <ref>(a). The system comprises a rectangular cross-section microfluidic channel in which molecular signals are uniformly transmitted across the cross-section of the channel inlet. The microfluidic channel is assumed to be open-ended, with a two-dimensional graphene bioFET-based biosensor serving as the receiver, positioned at the bottom of the channel without obstructing molecular propagation.
To establish the end-to-end model, transfer functions are derived for three subsystems: propagation of ligands within the microfluidic MC channel, LR binding interactions at the receiver surface, and molecular-to-electrical transduction process within the graphene bioFET-based MC receiver. The block diagram illustrating the end-to-end microfluidic MC system is provided in Fig. <ref>(b).
§.§ Transfer Function of Microfluidic Propagation Channel
We consider a straight microfluidic channel with a rectangular cross-section which is filled up with an electrolyte as the medium of propagation for molecular signals. The transmitter is located at the entrance of the microfluidic channel, while the receiver, a graphene bioFET, is situated at the base of the channel at position x=x_r as shown in Fig. <ref>(a). The surface of the graphene transduction channel of the receiver is functionalized with selective receptors, which are exposed to ligands of time-varying concentration. The receiver senses the concentration of ligands, flowing over its surface, through LR binding reactions. The flow is unidirectional from the inlet to the open outlet of the microfluidic channel.
The convection-diffusion equation, which is a linear partial differential equation, describes the behavior of mass transport of ligands within the microfluidic channel as <cit.>
∂ϕ/∂ t= -u ∇ϕ + D ∇ ^2ϕ,
where ϕ is the concentration of the released ligands, D is the diffusion coefficient of the ligands, u is the fluid flow velocity. The convection-diffusion equation describes the spatiotemporal evolution of the ligand concentration profile, ϕ, through the convective term (-u ∇ϕ) and the diffusive term (D ∇ ^2ϕ).
In this study, we consider a uniform and unidirectional fluid flow solely along the x-axis, i.e., u = u_x. To simplify the analysis of ligand transport, we adopt a one-dimensional (1D) approximation, focusing primarily on the x-axis. This assumption is justified when the ligand propagation is predominantly unidirectional, and lateral dispersion is much smaller than longitudinal transport. Such conditions arise due to the design and flow characteristics of the microfluidic channel, as discussed in <cit.>. The validity of this approximation can be quantified using the Péclet number, defined as Pe = u_x l / D. In this definition, l denotes the characteristics length, and for our case, corresponds to the distance between the transmitter and the receiver, i.e., l = x_r. When Pe ≫ 1, the 1D approximation is valid, which is consistent with the model parameter values considered in this study. Accordingly, the 1D solution of (<ref>) for an input concentration signal in the form of an impulse at the origin (i.e., ϕ_in(x, t) = δ(x - x_0, t - t_0) with x_0 = 0, t_0 = 0), gives the impulse response of the microfluidic propagation channel:
h_p(x,t)= 1/√(4π D t)exp(-(x-u_xt)^2/4Dt).
The propagation delay, τ, is the time it takes for the peak ligand concentration to travel a distance of x from the channel inlet, given by τ = x/u_x for Pe ≫ 1. The received concentration, ϕ_r, in the time-domain for an input concentration of ϕ_in in a straight microfluidic MC channel can be calculated through the convolution of the input concentration and the impulse response as follows:
ϕ_r (l)=(h∗ϕ_in)(l)= ∫^+∞_-∞ h_p(x)ϕ_in(l-x)dx.
For a rectangular finite-duration
concentration pulse input with a pulse width of T_p and amplitude of C_m, the received concentration at x=l can be calculated using (<ref>) as follows:
ϕ_r(l)= C_m/2(erf(tu-l+T_pu/2√(Dt)) -erf(tu-l/2√(Dt))).
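For readers who want to reproduce this time-domain pulse response, the following short sketch evaluates ϕ_r(l,t). It assumes NumPy/SciPy, takes the bracketed function in the expression above to be the error function erf, and uses purely illustrative parameter values (not those of the paper's table).

```python
import numpy as np
from scipy.special import erf

def phi_r(t, l, u, D, C_m, T_p):
    """Received concentration at x = l for a rectangular pulse input (erf form above)."""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * C_m * (erf((t*u - l + T_p*u) / s) - erf((t*u - l) / s))

# Illustrative (assumed) values: flow velocity, diffusion coefficient, distance, pulse.
u, D, l = 1e-3, 1e-10, 100e-6          # m/s, m^2/s, m  (Pe = u*l/D = 1000 >> 1)
C_m, T_p = 1.0, 0.1                    # a.u., s

t = np.linspace(1e-3, 0.5, 1000)
print("peak of phi_r:", phi_r(t, l, u, D, C_m, T_p).max())
```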
The transfer function of the propagation channel, which represents the frequency response to an impulse signal, can be derived by solving the frequency-domain counterpart of the 1D convection-diffusion equation (<ref>) obtained using the Fourier Transform (FT):
j 2π f Φ(x,f) = - u ∂Φ(x,f)/∂ x+ D ∂^2Φ(x,f)/∂ x^2,
where, Φ(x,f), the spectral density of ligand concentration, is obtained by taking the FT of the time-domain ligand concentration signal, i.e., Φ(x,f)= ℱ(ϕ(x,t)) <cit.>.
Assuming |(8π fD)/u^2|<1 to have a converging series expansion in solution of (<ref>), and fixing x = x_r, the receiver's central position, we can analytically approximate the transfer function of the microfluidic propagation channel, i.e., H_p(f), as follows <cit.>
H_p(f) = H_p(x = x_r,f)
≈exp(-((2π f)^2D/u^3+j2π f/u) x_r).
Spectral density of the received concentration, Φ_r(f) = Φ(x=x_r, f), can be obtained via multiplication of the transfer function, H_p(f), and the spectral density of the input ligand concentration signal, Φ_in(f) = Φ_in(x = 0, f), as follows
Φ_r(f) ≈ H_p(f)Φ_in(f).
Note that the spectral density of a rectangular finite-duration concentration pulse input signal with amplitude C_m and pulse width T_p can be obtained by taking FT as
Φ_in(f)= ℱ{C_m rect(t/T_p - 0.5)} = C_m T_p sinc(f T_p),
where rect(t) = 1 for -0.5 < t < 0.5 is the rectangular function. Therefore, combining (<ref>), (<ref>), and (<ref>), spectral density of the received ligand concentration signal can be approximated as follows:
Φ_r(f) ≈ C_m T_p sinc(f T_p)
×exp(-((2π f)^2D/u^3+j2π f/u) x_r).
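A corresponding sketch for the frequency domain is given below; it assumes the windowing function in Φ_in(f) is the normalized sinc and reuses the illustrative (assumed) parameters of the previous sketch, for which the convergence condition |8π f D/u^2| < 1 holds over the plotted band.

```python
import numpy as np

def H_p(f, x_r, u, D):
    """Approximate propagation-channel transfer function (valid for Pe >> 1)."""
    return np.exp(-(((2*np.pi*f)**2 * D) / u**3 + 1j*2*np.pi*f/u) * x_r)

def Phi_in(f, C_m, T_p):
    """Spectrum of a rectangular pulse: C_m*T_p*sinc(f*T_p); np.sinc(x) = sin(pi x)/(pi x)."""
    return C_m * T_p * np.sinc(f * T_p)

u, D, x_r = 1e-3, 1e-10, 100e-6        # assumed illustrative values
C_m, T_p  = 1.0, 0.1

f = np.linspace(0.0, 50.0, 2001)       # Hz; |8*pi*f*D/u**2| < 1 for all f shown
Phi_r = H_p(f, x_r, u, D) * Phi_in(f, C_m, T_p)
print("|Phi_r(0)| =", abs(Phi_r[0]))
```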
§.§ Transfer Function of the Ligand-Receptor Binding Process
Propagating ligands encoding information bind to the receptors on the graphene bioFET-based MC receiver surface randomly and reversibly such that a formed LR complex dissociates after a random time duration. In the case of a monovalent reaction, where receptors have only one binding site and can be in one of the two states, unbound (U) or bound (B), the reversible LR binding interactions can be described in terms of reaction rates as follows
U <=>[ϕ_r(t) k_+][k_-] B,
where k_+ and k_- are the binding and unbinding rates of the LR pair, and ϕ_r(t) is the time-varying ligand concentration in the vicinity of receptors <cit.>, assuming that receptors are exposed to the same ligand concentration at all times, and the number of ligands bound to receptors is much lower than the number of ligands in the vicinity of the receptors such that the ligand concentration ϕ_r(t) can be assumed to remain unchanged during the LR binding interactions. The number of bound receptors as a function of time, i.e., N_b(t), can then be written as
dN_b(t)/dt= k_+ (N_r - N_b(t)) ϕ_r(t) - k_- N_b(t),
where N_r is the total number of receptors on the receiver surface. The second-order reaction represented by the above nonlinear equation can be simplified as a first-order reaction, if the total number of receptors is much higher than the number of bound receptors at all times <cit.>, yielding a linear equation:
dN_b(t)/dt= k_+ N_rϕ_r(t) - k_- N_b(t).
The condition of first-order reaction can be quantitatively formulated as k_+ϕ_r(t)≪ k_- <cit.>, ensuring the number of bound receptors is comparatively low with a high unbinding rate. The transfer function of the LR binding process can then be obtained by solving the frequency-domain equivalent of (<ref>):
j 2 π f N_b(f) = k_+ N_rΦ_r(f) - k_- N_b(f),
considering Φ_r(f) as the input signal and N_b(f) as the output signal:
H_lr(f) = k_+ N_r/k_- + j 2 π f.
The transfer function of the LR binding process corresponds to that of a low-pass filter, characterized by a cutoff frequency of f_c,lr = k_-/2π.
Utilizing (<ref>), the spectral density of the output signal, i.e., time-varying number of bound receptors, can be obtained as follows:
N_b(f) = H_lr(f) Φ_r(f) = k_+ N_r/k_- + j 2 π fΦ_r(f).
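The low-pass character of this stage can be made concrete with a few lines of NumPy; the kinetic rates and receptor count below are assumptions for illustration only.

```python
import numpy as np

def H_lr(f, k_on, k_off, N_r):
    """First-order (low-pass) transfer function of the ligand-receptor binding stage."""
    return k_on * N_r / (k_off + 1j * 2 * np.pi * f)

k_on, k_off, N_r = 2e-19, 8.0, 100      # assumed illustrative binding/unbinding rates and receptor count
f_c = k_off / (2 * np.pi)               # cutoff frequency f_c,lr = k_- / (2*pi)

f = np.array([0.0, f_c, 10 * f_c])
gain = np.abs(H_lr(f, k_on, k_off, N_r)) / np.abs(H_lr(0.0, k_on, k_off, N_r))
print(f"f_c = {f_c:.2f} Hz; relative gain at [0, f_c, 10*f_c] =", np.round(gain, 3))
```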
§.§ Transfer Function of Graphene BioFET-based MC Receiver
We consider a graphene bioFET-based MC receiver fabricated on a Si/SiO_2 substrate with a monolayer graphene, which is connected to power sources via deposited metal (Cr/Au) drain and source contacts insulated from the electrolyte MC channel through an insulator layer (e.g., thin Al_2O_3 film), as shown in Fig. <ref>(a). A bio-recognition layer is incorporated onto the surface of graphene, comprising receptors that interact with ligands through the LR binding process. A DC potential, denoted as V_ref, is applied to the electrolyte to determine the operating point. This receiver architecture has been previously implemented by our group and its fabrication methodology was detailed in <cit.>. In this architecture, binding of charged ligands to the receptors attached uniformly to the graphene surface results in the modulation of the charge carrier density of the transducer channel, i.e., graphene, through electric field effect. This change in charge carrier density modulates the conductance of the channel, and hence the drain-to-source current (Δ I_ds) of the receiver. Therefore, the alteration in Δ I_ds under constant drain-to-source bias (V_ds) becomes a function of the number of bound ligands (which is equal to the number of bound receptors N_b(t)) and the electrical charge of the bound ligands. Therefore, the bound ligands can be considered functionally equivalent to the gate of the transistor.
In conventional FETs, the effect of gate potential on Δ I_ds is quantified through the transconductance of the transistor, denoted by G_m. Likewise, the impact of ligands bound to receptors on Δ I_ds can be measured through transconductance. Therefore, transconductance plays a vital role in shaping the input-output relationship of the MC receiver, and the frequency-domain representation of the transconductance, denoted as G_m(f), becomes a key part of the transfer function of the MC receiver, referred to as H_t(f). Nevertheless, there is an additional component that contributes to H_t(f), which will be further explained.
To obtain the G_m(f), we can use a small-signal model, which is an AC equivalent circuit that approximates the nonlinear behavior of the device with linear elements. In this study, we build on the small-signal model developed in <cit.> for graphene solution-gated FETs, used as a neural interface, to obtain the input-output relation in frequency-domain with the input being the time-varying number of bound receptors and the output being the drain-to-source current. Fig. <ref>(a) presents the schematic of the MC receiver combined with the equivalent circuit to depict physical origin of each element, and Fig. <ref>(b) demonstrates the small-signal model of the MC receiver.
We start modeling the MC receiver by investigating solid-liquid interface behavior. The interface of a charged surface and an electrolyte is commonly referred to as an electrical double layer (EDL) <cit.>. The electrons on the charged surface and the ions on the electrolyte are separated by a single layer of solvent molecules that stick to the charged surface and act as a dielectric in a conventional capacitor. Hence, EDL properties are generally modeled as a capacitor in the literature <cit.>. In this model, however, the graphene-electrolyte interface is described as a constant phase element (CPE) (i.e., CPE_g-e) rather than an ideal capacitor to precisely characterize the response of the EDL in graphene bioFETs. The CPE behavior of the graphene-electrolyte interface stems from the presence of charged impurities on the substrate and structural imperfections within the graphene lattice, potentially resulting in a non-uniform local density of states (DOS) <cit.>.
The impedance of a CPE can be written as follows <cit.>:
Z=1/Q_0(j2π f)^α,
where Q_0 is the admittance at f=1/2π Hz and α is a parameter that determines the phase angle. The values of both Q_0 and α depend on the applied voltage (which results from the bound charged ligands) and reflect the properties of the graphene-electrolyte interface. A CPE with α=1 behaves like an ideal capacitor, while a CPE with α=0 behaves like a pure resistor. A CPE with 0<α<1 represents an imperfect capacitor that has a non-constant capacitance value. The capacitance of a CPE (i.e., C_CPE) can be calculated by equating the imaginary part of the impedance of CPE to the impedance of an ideal capacitor, as proposed by Hsu et al. <cit.>. This approach yields the following expression for the capacitance of a CPE:
C_CPE=Q_0/(2π f) ^1-αexp(jπ/2(α -1)).
The bio-recognition layer can be modeled as a charged capacitor according to Xu et al. <cit.>. This represents the double-layer capacitance between a single ligand and electrolyte. In this model, the ligand-electrolyte interface is described as a CPE, which is denoted as CPE_l-e, with α=1 to mimic the behavior of an ideal capacitor and ensure notational consistency within the overall model.
When charged ligands bind to receptors, they generate a small signal variation in the gate potential. This variation is transduced into a voltage at the graphene-electrolyte interface (V_int).
A current source element V_intG_m(f) is used to model the conversion of AC signals at the gate into AC signals in the drain current (I_ds), where G_m(f) is the transconductance of the bioFET. To account for the DC current flowing through the graphene bioFET caused by the reference voltage (V_ref), a resistive element R_ds–DC is employed in the model. However, during small signal analysis, when V_ref is set to zero, the R_ds-DC is removed from the small signal model depicted in Fig <ref>(b). To account for parasitic capacitances in the device that arise as a result of the coupling between electrolyte and the contact metals through the insulating layer, another CPE, CPE_par, is included in parallel with CPE_g–e. As it will be revealed, this CPE affects the high-frequency response of the bioFET. Using this equivalent circuit, we can obtain the frequency-domain representation of transconductance as follows <cit.>:
G_m(f)=dI_ds/dV_int|_v_ds + G_m,eff.
The derivative term on the RHS of (<ref>) represents the intrinsic transconductance, which is the change in the drain current with respect to the interface potential. The intrinsic transconductance depends on the interface capacitance between the graphene and the electrolyte (CPE_g-e), which reflects the charge accumulation at the interface. This relationship is given by
dI_ds/dV_int|_v_ds = V_dsw_g/l_gμ_g C_CPE_g-e,
where w_g and l_g represent the width and length of the graphene transduction channel, respectively, and μ_g denotes the charge carrier mobility of graphene <cit.>. The interface capacitance (i.e., C_CPE_g-e) can be obtained using (<ref>) to have a frequency-dependent relation for the intrinsic transconductance.
An additional term, G_m,eff, contributes positively to the gain of the transduction process at high frequencies. The interface capacitances C_CPE_g-e and C_CPE_par lead to a direct capacitive current between the gate and the graphene bioFET contacts, which is evenly distributed to the drain and source. This contribution, independent of field-effect coupling, can be regarded as an effective transconductance term. As shown in Fig. <ref>, which plots the magnitude of | G_m(f) | over a range of frequencies for MC receiver, the capacitive contribution to the drain current dominates the frequency response beyond a certain frequency threshold. This contribution can be expressed as <cit.>:
G_m,eff(f)=1/(2Z_CPE_g-e)+1/(2Z_CPE_par).
By explicitly incorporating the frequency dependence in (<ref>) for (<ref>), the second term in (<ref>) can be derived as follows:
G_m,eff(f) = Q_g-e(2π f)^α _g-ee^jπ/2α _g-e
+Q_par(2π f)^α _pare^jπ/2α _par.
By combining (<ref>) with (<ref>), and (<ref>), the frequency-dependent transconductance of a MC receiver can be expressed as:
G_m(f) = ± V_dsw_g/l_gμ_gQ_g-e/(2π f)^1-α_g-ee^jπ/2(α_g-e-1)
+Q_g-e(2π f)^α _g-ee^jπ/2α _g-e+Q_par(2π f)^α _pare^jπ/2α _par.
The equation above uses the ± sign, which is positive in the electron conduction regime, and negative in the hole conduction regime of the bioFET. In this study, we focused on the hole conduction regime when plotting | G_m(f) | and conducting the simulations. In Fig. <ref>, two distinct response regimes can be identified: (i) a CPE dominant regime (up to 1 kHz), and (ii) a Z_CPE current regime where G_m increases due to capacitive currents (above 1 kHz).
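A numerical sketch of |G_m(f)| following the expression above is given below; the CPE admittances, phase exponents, bias and geometry are assumed placeholder values, since the experimentally fitted numbers are not reproduced here.

```python
import numpy as np

def C_cpe(f, Q, alpha):
    """Effective CPE capacitance, C = Q/(2*pi*f)**(1-alpha) * exp(j*pi/2*(alpha-1))."""
    return Q / (2*np.pi*f)**(1 - alpha) * np.exp(1j * np.pi/2 * (alpha - 1))

def G_m(f, V_ds, wl_ratio, mu_g, Q_ge, a_ge, Q_par, a_par, sign=-1):
    """Transconductance: intrinsic field-effect term plus the capacitive G_m,eff;
    sign=-1 corresponds to the hole-conduction regime considered in the text."""
    intrinsic = sign * V_ds * wl_ratio * mu_g * C_cpe(f, Q_ge, a_ge)
    g_eff = (Q_ge * (2*np.pi*f)**a_ge * np.exp(1j*np.pi/2*a_ge)
             + Q_par * (2*np.pi*f)**a_par * np.exp(1j*np.pi/2*a_par))
    return intrinsic + g_eff

# Assumed placeholder parameters: V_ds [V], w_g/l_g [-], mu_g [m^2/Vs], CPE fits.
V_ds, wl_ratio, mu_g = 50e-3, 1.0, 200e-4
Q_ge, a_ge, Q_par, a_par = 1e-6, 0.9, 1e-9, 1.0

f = np.logspace(0, 5, 6)                # 1 Hz ... 100 kHz
print(np.abs(G_m(f, V_ds, wl_ratio, mu_g, Q_ge, a_ge, Q_par, a_par)))
```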
To obtain the transfer function of the bioFET-based MC receiver, i.e., H_t(f), in addition to G_m(f), we need to derive the potential created by a single ligand on the graphene surface (V_int). As it will be revealed, by including V_int(f) in the H_t(f), we will be able to derive the spectral density of the output current, i.e., I_m(f), through end-to-end transfer function. The effective charge on the graphene surface resulting from binding of each ligand to the receptor is determined by the expression Q_m=q_effN_e^-, where N_e^- denotes the average number of free electrons per ligand. The mean effective charge, q_eff, represents the charge that a single electron of a ligand can generate on the graphene surface in the presence of ionic screening in the medium. The relationship is given by q_eff= q ×exp(-r/λ_D) where q is the elementary charge and r represents the average distance between the ligand electrons in the bound state and the surface of the transducer. It is assumed that this average distance is equivalent to the average length of the surface receptor in the bound state <cit.>. The Debye length, λ_D, characterizes the ionic strength of the medium, and is given by λ_D=√((ϵ_Mk_BT)/(2 N_Aq^2c_ion)), where ϵ_M is the dielectric permittivity of the medium, k_B is Boltzmann’s constant, T is the temperature, N_A is Avogadro’s number, and c_ion is the ionic concentration of the medium <cit.>. Finally, the interface potential generated by the charge accumulated on the surface by a single ligand is as follows <cit.>:
V_int(f)= Q_m/C_CPE_eq,
where C_CPE_eq is the equivalent capacitance of the transducer, which is comprised of a parallel combination of CPE_l-e, CPE_g-e, and CPE_par connected in series with another parallel pair of CPE_g-e and CPE_par as shown in Fig. <ref>(b). This can be expressed as:
C_CPE_eq= ( 1/(C_CPE_l-e+C_CPE_g-e+C_CPE_par)
+ 1/(C_CPE_g-e+C_CPE_par) )^-1,
where frequency-dependent relation of all C_CPE terms can be obtained by utilizing (<ref>). Therefore, the transfer function of the transduction process in a graphene bioFET-based MC receiver, i.e., H_t(f), can be written by using (<ref>) and (<ref>) as
H_t(f) = V_int(f) G_m(f).
§.§ End-to-End Transfer Function and Output Current Spectral density
The end-to-end transfer function of a microfluidic MC channel with graphene bioFET-based receiver can be expressed using Equations (<ref>), (<ref>), and (<ref>) as follows
H(f) = H_p(f) × H_lr(f) × H_t(f)
= V_int(f) G_m(f) (k_+ N_r/k_- + j 2 π f) e^-((2π f)^2D/u^3+j2π f/u)x_r.
Spectral density of the output current can be obtained by using the end-to-end transfer function and the spectral density of the input concentration signal as:
I_m(f) = H(f) ×Φ_in(f).
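Putting the three transfer functions together, the following self-contained sketch evaluates I_m(f) for a rectangular input pulse. All parameter values are assumptions for illustration, and the receiver term V_int(f)G_m(f) is collapsed into a single frequency-flat factor here to keep the example short.

```python
import numpy as np

# Assumed illustrative parameters (not the fitted values of the paper's table).
u, D, x_r        = 1e-3, 1e-10, 100e-6     # flow, diffusion, receiver position
k_on, k_off, N_r = 2e-19, 8.0, 100         # LR kinetics and receptor count
C_m, T_p         = 1.0, 0.1                # input pulse amplitude and width
Ht0              = 1e-7                    # V_int*G_m collapsed to a flat gain [A per bound receptor]

H_p    = lambda f: np.exp(-(((2*np.pi*f)**2 * D)/u**3 + 1j*2*np.pi*f/u) * x_r)
H_lr   = lambda f: k_on*N_r / (k_off + 1j*2*np.pi*f)
Phi_in = lambda f: C_m*T_p*np.sinc(f*T_p)

f   = np.linspace(0.0, 20.0, 2001)
I_m = H_p(f) * H_lr(f) * Ht0 * Phi_in(f)   # spectral density of the output current
print("|I_m| at DC:", abs(I_m[0]))
```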
§ NUMERICAL RESULTS
In this section, we present the numerical results obtained using the developed analytical frequency-domain model, which is validated through particle-based simulations under various settings. The default values for the parameters used in the analyses are provided in Table <ref>. The admittance and phase angle values for CPE_g-e and CPE_par are extracted from the experimentally fitted data in <cit.>, conducted in an electrolyte medium with an ionic strength of 0.5 mM. We considered the same ionic strength (c_ion= 0.5 mM). As for the admittance of CPE_l-e, it is assigned considering the fact that the area of the double-layer interface between ligands and electrolyte is significantly smaller compared to the double-layer surface at the graphene-electrolyte interface. Consequently, based on the parallel plate capacitor formula C=εA/d, where ε is the permittivity and d is the distance between the surface layers (a single layer of molecules in this case), it is evident that the capacitive behavior of CPE_l-e is significantly lower compared to CPE_g-e since the values of ε and d remain the same for both interfaces. We set μ_g=200 cm^2/Vs as reported in <cit.>. Aptamers are utilized as the receptors, and their default length is defined as 2 nm <cit.>. The binding and unbinding rates, k_+ and k_-, are set considering the assumption of (<ref>) and accepted values in the MC literature <cit.>. We consider the microfluidic channel with a cross-sectional height of h_ch= 3 μm, a width of w_ch= 3 μm, and a length of l_ch= 200 μm, resulting in a laminar and steady flow. The simulations were performed using Smoldyn, a particle-based spatial stochastic simulation framework that offers high spatiotemporal resolution by simulating each molecule of interest individually <cit.>. This approach captures the inherent stochasticity of molecular transport and reactions and provides nanometer-scale spatial resolution.
The simulation setup consisted of a straight microfluidic channel with a rectangular cross-section, as shown in Fig. <ref>. The receptor molecules were immobilized at the channel bed, representing the 2d MC receiver. An input rectangular pulse signal composed of ligands was introduced at the inlet of the channel as shown in Fig. <ref>(a). These ligands propagated towards the receptors through convection and diffusion as depicted in Fig. <ref>(b). A fraction of the ligands randomly bound to the receptor molecules for varying durations, depending on their kinetic interaction rates. Subsequently, they unbound and continued their propagation until they exited the channel at the outlet, as demonstrated in Fig. <ref>(c).
To validate the model in both time and frequency domains, we evaluated the transfer function of the propagation channel, the transfer function of the LR binding process, and the end-to-end frequency-domain model under varying system parameters. This evaluation was conducted using both analytical expressions and simulation results. The particle-based simulation does not incorporate the transfer function of the MC receiver. Therefore, the numerical results for H_t(f) are solely obtained using analytical expressions. Moreover, we calculated the sampling frequency utilizing a numerical method for varying system parameters.
§.§ Propagation Channel
§.§.§ Effect of Varying Pulse Width
The first analysis investigates the impact of varying pulse width, T_p, a critical parameter commonly employed in signal generation and modulation schemes, such as pulse width modulation (PWM) <cit.>. The results of this analysis are presented in Fig. <ref>. As expected, an increase in pulse width leads to a higher concentration value in time domain, as shown in Fig. <ref>(a). In the frequency domain (Fig. <ref>(b)), a higher amplitude is observed in the spectral density of the received concentration, Φ_r(f), as the pulse width increases. Moreover, the cutoff frequency decreases with the increasing pulse width, which is consistent with the expectations based on Equations (<ref>) and (<ref>). The analytical model exhibits a high degree of agreement with the simulation results. It is important to note that as pulse width increases, the likelihood of inter-symbol interference also rises, leading to potential challenges in the signal recovery process.
§.§.§ Effect of Varying Diffusion Coefficient
We also analyze the impact of varying diffusion coefficient of ligands, D, on the response of the MC system. The diffusion coefficient is a fundamental parameter in MC systems, as it determines the rate at which molecules disperse through the medium. Molecules with a higher diffusion coefficient disperse more, resulting in a broader received signal width. Additionally, the peak received concentration decreases due to the higher dispersion, as shown in Fig. <ref>(a). On the other hand, in the frequency domain, increasing the diffusion coefficient is reflected in a slight decrease in the cutoff frequency of the spectral density of the received concentration, Φ_r(f), for both analytical and simulation results, as demonstrated in Fig. <ref>(b).
§.§ Ligand-Receptor Binding Process
§.§.§ Effect of Varying Binding Rate
We investigate the effect of varying binding rates, k_+, on the time-varying number of bound receptors in both time and frequency domains. Fig. <ref>(a) demonstrates that an increase in the binding rate directly corresponds to an increased number of bound receptors, N_b(t), as molecules with higher binding rate exhibit a higher propensity to bind to the receptors when they are in close proximity of each other. Similarly, as expected from Equation (<ref>), the frequency domain analysis reveals a higher amplitude in the spectral density of the number of bound receptors, N_b(f), when binding rates are increased, as shown in Fig. <ref>(b).
§.§.§ Effect of Varying Unbinding Rate
We also investigate the impact of varying unbinding rates, k_-, on the number of bound receptors. Contrary to the effect of binding rates, increasing the unbinding rate leads to a decrease in the number of bound receptors in the time-domain, N_b(t), as shown in Fig. <ref>(a). Molecules with a higher unbinding rate have shorter bound state durations. In the frequency domain, as shown in Fig. <ref>(b), the unbinding rate exhibits an inverse relationship with the spectral density, as described by (<ref>). Consequently, a higher unbinding rate results in a lower amplitude in the spectral density of bound receptors, N_b(f), a finding supported by both simulation and analytical results.
§.§ End-to-End Model
§.§.§ Effect of Varying Pulse Width
To evaluate the end-to-end model's accuracy and investigate the impact of varying pulse widths, T_p, we analyze the spectral density of output current, I_m(f), for three pulse signals with different pulse widths but identical amplitudes, i.e., concentrations, as shown in Fig. <ref>(a). As predicted in Section <ref>, an increase in pulse width corresponds to a higher amplitude in I_m(f). This phenomenon occurs because a wider ligand pulse results in a higher concentration of ligands in the vicinity of the receiver's receptors. This, in turn, increases the probability of binding to a receptor before the already bound ones dissociate, resulting in a higher number of observed bound receptors, i.e., amplitude. The analytical expression for the output current spectrum, represented by (<ref>) and incorporating the transfer function of the three main processes and the input signal concentration, demonstrates high accuracy when compared to the simulation results.
§.§.§ Effect of Varying Ligand Concentration
We also evaluate the impact of varying ligand concentrations, C_m, on the end-to-end model by performing simulations with input concentration pulses with different concentrations but identical pulse widths. By analyzing the spectral density of the resulting output current, I_m(f), we observe that the amplitude of I_m(f) increases as concentration increases, as depicted in Fig. <ref>(b). The simulation results are strongly aligned with the analytical results obtained from (<ref>).
§.§ Sampling Frequency
To reconstruct the input concentration signal, ϕ_in(t), from the sampled sequence of the number of bound receptors, it is essential to employ an appropriate sampling frequency. Considering that both the input concentration spectral density and the end-to-end transfer function and consequently the resulting output current spectral density, display a Lorentzian-shaped profile, it is essential to determine the cutoff frequency that contains most of the spectrum energy. The energy of the output current spectral density within a bandwidth ranging from 0 Hz to the cutoff frequency can be quantified as follows <cit.>:
∫^f_c_0 |H(f) Φ_in(f)|^2 df =η∫^+ ∞_0 |H(f) Φ_in(f)|^2 df,
where f_c is the cutoff frequency and η is the fraction of the total spectrum energy contained within the interval (0,f_c). In this study, we consider η = 0.99, which indicates that 99% of the spectral power is contained within the specified bandwidth. Once the cutoff frequency has been determined, the sampling frequency can be obtained using the Nyquist–Shannon theorem, which states that in order to achieve a reconstruction that captures all the information, the sampling frequency should be greater than twice the bandwidth:
2f_c≤ f_s≤∞.
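The cutoff and sampling frequencies can be obtained numerically from the cumulative spectral energy; the sketch below does this for the simplified end-to-end spectrum of the previous example, with η = 0.99 and the same assumed parameters.

```python
import numpy as np

u, D, x_r        = 1e-3, 1e-10, 100e-6     # assumed illustrative parameters
k_on, k_off, N_r = 2e-19, 8.0, 100
C_m, T_p         = 1.0, 0.1

def spectrum(f):
    H_p  = np.exp(-(((2*np.pi*f)**2 * D)/u**3 + 1j*2*np.pi*f/u) * x_r)
    H_lr = k_on*N_r / (k_off + 1j*2*np.pi*f)
    return np.abs(H_p * H_lr * C_m*T_p*np.sinc(f*T_p))

f   = np.linspace(0.0, 200.0, 200001)
E   = np.cumsum(spectrum(f)**2)            # cumulative energy (uniform grid, so df cancels in the ratio)
f_c = f[np.searchsorted(E, 0.99 * E[-1])]  # smallest f containing 99% of the spectrum energy
print(f"f_c ~ {f_c:.2f} Hz  =>  f_s >= {2*f_c:.2f} Hz (Nyquist)")
```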
Fig. <ref> shows the sampling frequency obtained from (<ref>), which is a function of pulse width T_p, diffusion coefficient D, and flow velocity u. Fig. <ref>(a) demonstrates that increasing the pulse width results in a lower sampling frequency required to reconstruct the original continuous signal. As shown in Fig. <ref>(b), the spectral density of a wider pulse signal has a lower cutoff frequency. Therefore, it is expected that increasing the pulse width would allow a lower sampling frequency.
Fig. <ref>(b) indicates that the sampling frequency decreases as the diffusion coefficient increases. This can be attributed to the reduction in the cutoff frequency while increasing the diffusion coefficient, as shown in Fig. <ref>(b). Therefore, the decrease in sampling frequency aligns with the expectations set by the Nyquist-Shannon theorem.
Finally, Fig. <ref>(c) shows the impact of increasing flow velocity on the sampling frequency. As the flow velocity increases, the signals traverse the receiver position more quickly, which reduces the time window available for capturing an adequate number of samples from the propagating signal. Consequently, to guarantee the collection of a sufficient number of samples, it is necessary to raise the sampling frequency in response to an increase in flow velocity.
§ CONCLUSION
In this study, we introduced a comprehensive end-to-end frequency-domain model for a practical microfluidic MC system with a graphene bioFET-based receiver. The model provides valuable insights into the dispersion and distortion of received signals, and has the potential to inform the design of new frequency-domain MC techniques, such as modulation and detection, matched filters, and interference-free receiver architectures. The end-to-end transfer function, denoted as H(f), incorporates the input-output relationships of three sequential modules: the microfluidic propagation channel, the LR binding process, and the graphene bioFET-based receiver. The accuracy and reliability of the developed model were verified through particle-based spatial stochastic simulations, which demonstrated a high degree of agreement with the analytical expressions.
|
http://arxiv.org/abs/2307.06164v1 | 20230712134525 | A Fermi Model of Quantum Black Hole | [
"Chong-Sun Chu",
"Rong-Xin Miao"
] | hep-th | [
"hep-th",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.04197v1 | 20230709150552 | Vacuum Integration: UV- and IR-divergencies | [
"I. V. Anikin"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
Vacuum Integration: UV- and IR-divergencies
I. V. Anikin
August 12, 2023
========================================================================================================
§ INTRODUCTION
In different QFT-models, at the classical level,
the effects of spontaneous symmetry breaking are very important in the context of
the geometrical analysis of the Goldstone theorem.
In this connection, the study of a vacuum state as
the potential minimum plays a significant
role <cit.>.
Meanwhile, the quantum corrections that usually tend to
distort the geometrical picture, when computed within the effective potential (EP) methods, allow one to
return to the classical geometrical analysis of the models with spontaneous symmetry breaking.
In the standard EP-approaches, the quantum corrections are given by
the vacuum integrations with the massive propagators.
However, the special interest is related to the vacuum integrations with the massless propagators.
It is mostly dictated by the use of conformal symmetry (see for example <cit.>).
On the other hand, working with massless vacuum integrations demands some careful consideration.
Indeed, the general dimensional analysis suggests that all vacuum integrations with the massless propagators
lead to zero <cit.>. This is true except for the particular case of a dimensionless integrand,
where only the ultraviolet (UV) or infrared (IR) momentum region is under consideration.
In this case, the arguments of dimensional analysis cannot be applied.
In <cit.>, it has been shown that the vacuum integration of dimensionless and massless integrand
is proportional to δ(n-D/2) where the space dimension is defined as D=d-2ε
(d=2, 4, 6 etc.) and n implies the propagator index.
The delta-function, as a singular generalized function (distribution), is a well-defined linear functional on the
suitable finite function space. In the case of dimensional regularization, this space should be realized with the
integration measure dε φ(ε), where φ has a localized support.
However, it is not always convenient, or even possible, to deal with the measure dε
<cit.>. Moreover,
owing to the symmetry properties, the delta-function usually hides the information on
the UV (or IR) divergency.
Following Gorishni-Isaev's method <cit.>,
we present all necessary details on the vacuum integration, where the delta-function
has been treated within the framework of the sequential approach <cit.>.
We also demonstrate how the delta-function represents the UV (IR) regimes.
§ Δ_F(0)-SINGULARITY
Let us consider the simplest case of scalar massless propagator Δ_F(0) giving the tad-pole diagram.
Using the Fourier transform, the propagator Δ_F(0) can be write as
[For the sake of shortness, here and in what follows the momentum loop normalization is hidden in (d^D k).
Moreover, the Euclidian measure of momentum integrations has been implies.]
Δ_F(0) = ∫(d^D k)/k^2=
∫ (d^D k) { C^-1(D,1) ∫ d^Dz e^-i k z/(z^2)^D/2-1}
=C^-1(D,1) ∫ d^Dz δ(z)/(z^2)^D/2-1≡Γ(D/2-1) ∫ (d^Dz) δ(z)/(z^2)^D/2-1,
where the integration measure (d^D z) absorbs the normalization constant
i(-π)^D/2 arising from
C^-1(D,n)=i(-π)^D/2Γ(D/2-n)/Γ(n).
If we assume that D/2-1=0,
then the propagator in Eqn. (<ref>) takes the form
Δ_F(0) = Γ(0) ∫ (d^Dz) δ(z) ⇒Γ(0),
where, as is well known, the singularity of the Γ-function can be presented as
Γ(0) = lim_ϵ→ 0Γ(ϵ)= lim_ϵ→ 0{1/ϵ + ....}.
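The pole structure quoted above can be checked symbolically; a minimal SymPy sketch (assuming SymPy is available) reproduces the Laurent expansion of Γ(ϵ) around ϵ = 0.

```python
import sympy as sp

eps = sp.symbols('epsilon')

# Laurent expansion of Gamma(eps) about eps = 0: the 1/eps pole quoted above.
print(sp.gamma(eps).series(eps, 0, 2))       # 1/epsilon - EulerGamma + ...

# Numerical illustration of the pole: Gamma(eps) ~ 1/eps for small eps.
for e in (1e-1, 1e-2, 1e-3):
    print(e, float(sp.gamma(e)), 1/e)
```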
It is worth noticing that
the condition given by D/2-1=0 should
be applied before the integration over (d^D k) in Eqn. (<ref>) in order to avoid the uncertainty, see also Sec. <ref>.
On the other hand, according to <cit.>, the vacuum integration method applied to
the Feynman propagator results in the delta-function. Let us remind a key moment of Gorishni-Isaev's method.
Using the spherical system (in the momentum Euclidian space), Δ_F(0) can be represented as
Δ_F(0) = ∫(d^D k)/k^2=
1/2∫ dΩ∫_0^∞ dβ β^D/2-2,
where dΩ gives the finite angle measure of integration.
The replacement β = e^y leads to the following expression
Δ_F(0) =
1/2∫ dΩ∫_-∞^∞ (dy) e^iy [(-i)(D/2-1)]=
1/2 | i |δ( D/2 -1 ) ∫ dΩ
or, restoring all coefficients, it reads
Δ_F(0)= - 2i π^1+D/2 δ(1-D/2) |_D=2 = - 2i π^2 δ(0).
So, for the case of D=2, the matching of Eqns. (<ref>) and (<ref>) gives the following
representation
(-i) Δ_F(0)=Γ(0) = - 2 π^2 δ(0).
With this, we may conclude that δ(0)-singularity can be treated as the singularity of Γ(0), see Eqn. (<ref>).
The same inference has been reached by the different method, see <cit.>.
Notice that the physical (UV or IR) nature of the mentioned singularity has been somewhat hidden.
In the dimensional regularization, the UV- and IR-divergencies are associated with the small positive (ε >0)
and negative (ε < 0) regularized parameter ε, respectively.
In this connection, using the α-parametrization, we rewrite Eqn. (<ref>) as
Δ_F(0) = ∫(d^D k)/k^2=
Γ(D/2-1) ∫ (d^Dz) δ(z)/(z^2)^D/2-1
=
∫ (d^Dz) δ(z) {∫_0^∞ dα α^D/2-2 e^-α z^2}=∫_0^∞ (dα) α^D/2-2.
Hence, one gets (modulo the normalization factor which is now irrelevant)
Δ_F(0) = ∫(d^D k)/k^2=
∫_0^∞ (dα) α^D/2-2⇒1/D/2-1{lim_α→∞α^D/2-1 - lim_α→ 0 α^D/2-1}.
From Eqn. (<ref>), one can see that the first term corresponds to the UV-divergency, while the second term –
to the IR-divergency. That is, we have
lim_α→∞α^D/2-1 = [∞]_UV if D >2,
lim_α→ 0 α^D/2-1 = [∞]_IR if D < 2.
In other words, if the dimensional parameter ϵ in D= d - 2ϵ is small, | ϵ | < 1,
and it varies from negative to positive values,
we have the following representation for Δ_F(0)
Δ_F(0) |_d=2 ⇒1/D/2-1{Θ(D>2 | ϵ < 0)
lim_α→∞α^D/2-1 -
Θ(D<2 | ϵ > 0)
lim_α→ 0 α^D/2-1} = 0
⇒δ( 1- D/2 ) |_D≠ 2=0,
where ϵ should be considered as an external independent parameter.
From Eqns. (<ref>) and (<ref>), in the dimensional regularization, one can see that the
small positive ϵ regularizes the UV-divergency but not the IR-divergency.
Thus, each of the methods gives the same final conclusion.
To conclude this section, we recall another useful representation given by
Δ_F(0) = lim_z^2→ 0Δ_F(z^2) =
lim_z^2→ 01/4πδ_+(z^2)= δ(0), z∈𝔼^4
which is in agreement with Eqns. (<ref>) and (<ref>).
§ VACUUM INTEGRATION AS A LIMIT OF NON-VACUUM INTEGRATION
We now address the relation between vacuum and non-vacuum integrations.
In the dimensional regularization procedure, we begin with the consideration of
two-point 1PI massless Green function given by
ℐ(p^2)= ∫(d^D k)/k^2(k^2+p^2)=
(c.c.) (p^2)^D/2-2 G(1,1),
where (c.c.) denotes a constant coefficient and
G(1,1)=Γ(-D/2+2) Γ^2(D/2-1)/Γ(D-2).
Using D=4-2ϵ, we get
ℐ(p^2)= ∫(d^D k)/k^2(k^2+p^2)=(c.c.) (p^2)^-ϵ Γ(ϵ) Γ^2(1-ϵ)/Γ(2-2ϵ).
In Eqns. (<ref>) and (<ref>), the scale dependence on μ^2 is hidden as irrelevant.
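For completeness, the Γ-function combination G(1,1) and its ϵ-expansion can be checked with SymPy (a sketch under the same conventions as above):

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
D = 4 - 2*eps

# G(1,1) = Gamma(2 - D/2) * Gamma(D/2 - 1)**2 / Gamma(D - 2)
G11 = sp.gamma(2 - D/2) * sp.gamma(D/2 - 1)**2 / sp.gamma(D - 2)
print(sp.simplify(G11))                 # Gamma(eps)*Gamma(1 - eps)**2/Gamma(2 - 2*eps)

# The expansion around eps = 0 exhibits the single 1/eps pole of the two-point function.
print(sp.series(G11, eps, 0, 1))
```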
The vacuum integration can be obtained from Eqn. (<ref>) with the help of the corresponding limit as
𝒱_2≡∫(d^D k)/(k^2)^2=
lim_p^2→ 0ℐ(p^2).
There are, however, some subtleties of this limit which are now under our consideration.
Indeed, having used the α-representation, let us calculate the integral of Eqn. (<ref>).
We have the following
ℐ(p^2)= (c.c.)
∫_0^∞ dα dβe^-p^2 αβ/α + β/[α+β]^D/2
=(c.c.) ∫_0^∞ dλ λ^1-D/2∫_0^1 dx e^-p^2λ x x̅,
where
α =λ x_1, β= λ x_2, λ∈ [0, ∞].
The next stage of calculations is to make a replacement as
λ̃= p^2 λ x x̅, dλ̃= p^2 x x̅ dλ
in the exponential function. This replacement simplifies the integrals and it leads to the corresponding combination
of Γ-functions denoted as G(1,1) <cit.>. Ultimately, we reproduce the result
presented by Eqns. (<ref>) and (<ref>).
Now, the first mathematical subtlety is that if we suppose the limits p^2→ 0 and ϵ→ 0
are taken consecutively, not simultaneously, it is clear that these limits do not commute, i.e.
[ lim_p^2→ 0, lim_ϵ→ 0] ≠ 0.
On the other hand, if the limits are simultaneous ones we deal with the uncertainty of [0]^0 which should be somehow resolved.
The second subtlety is related to the limit p^2→ 0 and the replacement of Eqn. (<ref>).
Namely, in order to avoid the mentioned uncertainty, we have to implement the limit p^2→ 0 before the possible replacement.
In this case, the limit p^2→ 0 is a well-defined operation and we finally obtain that
lim_p^2→ 0ℐ(p^2) = (c.c.) ∫_0^∞ dλλ^1-D/2 =
1/2- D/2{lim_λ→∞λ^2-D/2 - lim_λ→ 0 λ^2- D/2}
≡∫(d^D k)/(k^2)^2 =𝒱_2.
§ Δ(0)-SINGULARITY
We are now in a position to discuss the treatment of the δ(0)-singularity (or δ(0)-uncertainty).
To this aim, we follow the sequential approach to
singular generalized functions (distributions).
On one hand, based on dimensional analysis, we may conclude that all massless vacuum integrations vanish, i.e.
𝒱_n=∫(d^D k)/[k^2]^n=0 for n≠ D/2.
However, the case of n=D/2 (or n=2 if ε→ 0) requires special consideration because
the dimensional-analysis argument no longer works.
Nevertheless, the nullification of 𝒱_D/2 still takes place, but for a different reason.
It turns out that the ultraviolet and infrared divergencies cancel each other.
Hence, if only the ultraviolet divergencies are under consideration, 𝒱_D/2 is not equal to zero.
To demonstrate this, we dwell on the vacuum integration which is externally IR-regularized.
It is necessary to recall that, in the space with D=d-2ϵ, a positive value of ϵ allows one to avoid the UV-divergency.
In the
spherical co-ordinate system, we write the following
representation
𝒱_2=∫_UV(d^D k)/[k^2]^2≡π^D/2/Γ(D/2)∫_μ^2^∞ dββ^D/2-3 with β=|k|^2,
where μ^2 plays a role of IR-regularization and the angular integration given by the measure dΩ is calculated explicitly.
Next, calculating β-integration, we reach the representation as
𝒱_2=
π^2-εμ^-2ε/Γ(2-ε) 1/ε|_ε→ 0,
where it is shown that the ϵ-pole corresponds to the UV-divergency only because the IR-divergency is absent
by construction thanks to μ^2.
This is a very well-known representation used, for example, in <cit.>.
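This IR-regularized form is easy to verify symbolically; the following SymPy sketch evaluates the radial integral and restores the angular factor (ε is taken positive, i.e. the UV-regularized case):

```python
import sympy as sp

eps, mu2, beta = sp.symbols('epsilon mu2 beta', positive=True)
D = 4 - 2*eps

# Radial part of the IR-regularized vacuum integral: int_{mu^2}^oo dbeta beta^(D/2 - 3).
radial = sp.integrate(beta**(D/2 - 3), (beta, mu2, sp.oo), conds='none')
print(sp.simplify(radial))                       # -> mu2**(-eps)/eps, the UV pole as eps -> 0

# Restoring the angular factor pi^(D/2)/Gamma(D/2) gives V_2 as quoted above.
V2 = sp.pi**(D/2) / sp.gamma(D/2) * radial
print(sp.simplify(V2))
```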
On the other hand, we are able to calculate the vacuum integration by
Gorishni-Isaev's method <cit.>. In this case, 𝒱_n reads
𝒱_n=∫(d^D k)/[k^2]^n =
2i π^1+D/2/(-1)^D/2 Γ(D/2)δ(n-D/2).
Supposing D=4-2ε, the only contribution is given by
𝒱_2=∫(d^D k)/[k^2]^2 =
2i π^3-ε/Γ(2-ε)δ(ε) ≠ 0.
Hence, the delta-function of argument ϵ reflects the UV-divergency.
We specially stress that the representations of 𝒱_2 given by Eqns. (<ref>) and (<ref>)
are equivalent.
The delta-function as a generalized function (distribution) is a linear singular functional (which cannot be generated by any locally integrable function) defined on a suitable space of test (finite) functions.
Such a definition is perfectly sound, but it is not the unique one. Namely, the delta-function can also be understood with the help of fundamental sequences of regular functionals, provided the corresponding weak limit exists,
see for example <cit.>.
Besides, one of the delta-function representations is related to the following realization
δ(t)=lim_ε→ 0δ_ε(t)≡lim_ε→ 0 St.F. (-ε≤ t ≤ 0)/ε,
where St.F. (-ε≤ t ≤ 0) denotes the well-known step (characteristic) function of the interval -ε≤ t ≤ 0, which involves no uncertainties.
Going back to Eqn. (<ref>), one can see that the treatment of δ(ε) as a linear (singular)
functional on the space of test (finite) functions with the measure dμ(ε)=dεϕ(ε) meets some difficulties
within the dimensional-regularization approach. Indeed, for practical use,
ε is not a convenient variable for the construction of such a test-function space because we ultimately need
to focus on the limit ϵ→ 0.
Meanwhile, within the sequential approach <cit.>, the delta-function may be considered as a usual singular (meromorphic)
function, and the δ(0)-singularity/uncertainty can be treated as
a first-order pole <cit.>,
δ(0)=lim_ε→ 0δ_ε(0)≡lim_ε→ 01/ε.
For the demanding mathematician, the representation of Eqn. (<ref>) should be understood merely as a symbol.
That is, δ(0) denotes alternatively the limit of 1/ϵ.
This representation is also backed by the obvious fact that Eqns. (<ref>) and (<ref>)
are equivalent ones.
It is worth noticing that the representation of δ(0) through the pole of an arbitrary meromorphic function
should be used very carefully.
For example, if we suppose that (here, z∈𝔼^4 and the delta-function is assumed to be a functional on
the finite function space)
[ δ(z)]^2 = δ(0) δ(z),
the representation given by
δ(z) = lim_ϵ→ 0δ_ϵ(z), δ_ϵ(z) = 1/(π^2 ϵ^4) e^- z^2/ϵ^2⇒δ(0) ∼δ_ϵ(0)=1/(π^2 ϵ^4)
does not satisfy the condition of Eqn. (<ref>).
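Indeed, a direct check gives [δ_ϵ(z)]^2 = 1/(π^4ϵ^8) e^-2z^2/ϵ^2 while δ_ϵ(0) δ_ϵ(z) = 1/(π^4ϵ^8) e^-z^2/ϵ^2; moreover, in the weak sense ∫ d^4z [δ_ϵ(z)]^2 ϕ(z) →ϕ(0)/(4π^2ϵ^4), so that [δ(z)]^2 ∼ (1/4) δ(0) δ(z) rather than δ(0) δ(z).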
Another informative example can be found in <cit.>.
§ CONCLUSION
To conclude, we have presented important clarifications regarding massless vacuum integrals.
In this note, we have demonstrated the advantage of the sequential approach, in which
singular generalized functions (distributions) are treated as fundamental
sequences of regular functionals. Owing to this treatment, the δ(0)-uncertainty can be resolved
via a meromorphic function with a first-order pole.
It has also been shown in detail how the delta-function represents either the UV or the IR regime.
§ ACKNOWLEDGEMENTS
Our special thanks go to S.V. Mikhailov and L. Szymanowski for very useful and stimulating discussions.
Vasilev:2004yr
A. N. Vasilev,
“The field theoretic renormalization group in critical behavior theory and stochastic dynamics,”
Boca Raton, USA: Chapman & Hall/CRC (2004) 681 p
Anikin:2020dlh
I. V. Anikin,
Phys. Part. Nucl. Lett. 18, 290 (2021)
Anikin:2023wkk
I. V. Anikin,
“Conformal Symmetry and Effective Potential: I. Vacuum V_z,x-operation for the Green functions,”
arXiv:2306.15373 [hep-ph]
Anikin:2023ogb
I. V. Anikin,
“Conformal Symmetry and Effective Potential: II. Evolution,”
arXiv:2306.17018 [hep-ph]
Grozin:2005yg
A. Grozin,
“Lectures on QED and QCD,” arXiv:hep-ph/0508242 [hep-ph].
Grozin:2007zz
A. Grozin,
“Lectures on QED and QCD: Practical calculation and renormalization of one- and multi-loop Feynman diagrams,”
World Scientific, 2007,
ISBN 978-981-256-914-1, 978-981-256-914-1
Gorishnii:1984te
S. G. Gorishnii and A. P. Isaev,
Theor. Math. Phys. 62, 232 (1985)
[Teor. Mat. Fiz. 62, 345 (1985)]
Antosik:1973
P. Antosik, J. Mikusiński and R. Sikorski, “Theory of Generalized Functions: A Sequential Approach,”
(PWN – Polish Scientific Publishers, 1973)
Gelfand:1964
I. M. Gelfand and G. E. Shilov,
“Generalized Functions Vol 1 Properties And Operations,”
Academic Press, 1964, ISBN-0-12-279501-6
Efimov:1973pjo
G. V. Efimov,
Int. J. Theor. Phys. 10, 19 (1974)
|
http://arxiv.org/abs/2307.05243v1 | 20230709184055 | Inconsistency with De Sitter Spacetime of "Gravitational Pair Production and Black Hole Evaporation" | [
"Mark P. Hertzberg",
"Abraham Loeb"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.04573v1 | 20230710140728 | A Semi-Automated Solution Approach Selection Tool for Any Use Case via Scopus and OpenAI: a Case Study for AI/ML in Oncology | [
"Deniz Kenan Kılıç",
"Alex Elkjær Vasegaard",
"Aurélien Desoeuvres",
"Peter Nielsen"
] | cs.AI | [
"cs.AI",
"cs.IR",
"cs.LG"
] |
Deniz Kenan Kılıç (corresponding author), [email protected]
Alex Elkjær Vasegaard, [email protected]
Aurélien Desoeuvres, [email protected]
Peter Nielsen, [email protected]
Department of Materials and Production, Aalborg University, Fibigerstræde 16, 9220 Aalborg, Denmark
In today's vast literature landscape, a manual review is very time-consuming. To address this challenge, this paper proposes a semi-automated tool for solution method review and selection. It caters to researchers, practitioners, and decision-makers while serving as a benchmark for future work. The tool comprises three modules: (1) paper selection and scoring, using a keyword selection scheme to query the Scopus API and compute relevancy; (2) solution method extraction from papers utilizing the OpenAI API; (3) sensitivity analyses and post-analyses. It reveals trends, relevant papers, and methods. A case study of AI in oncology and several other use cases are presented with promising results, comparing the tool to a manual ground truth.
* Automated support for literature choice and solution selection for any use case.
* A generalized keyword selection scheme for literature database queries.
* Trends in literature: detecting AI methods for a case study using Scopus and OpenAI.
* A better understanding of the tool by sensitivity analyses for Scopus and OpenAI.
* Robust tool for different domains with promising OpenAI performance results.
Artificial intelligence (AI) Machine learning (ML) OpenAI Generative pre-trained transformers (GPT) Scopus Solution approach selection
§ INTRODUCTION
Over the past decade, artificial intelligence (AI) and machine learning (ML) have gained significant attention in the fields of information technology and computer science, accompanying significant advancements and benefits across diverse industries and sectors <cit.>. There are numerous AI/ML taxonomies presented in the literature that can be used to select a collection of AI strategies to address a specific challenge[<https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html>] <cit.>.
Figure <ref> illustrates an example taxonomy of the extensive AI/ML domain, encompassing multiple problem types and branches. However, to search for AI methods specific to a given use case, it is not only necessary to select a fitting branch in the taxonomy, but one also has to refine the search by comparing it to the standing knowledge base of the literature on the use case.
The increasing amount of literature presents a challenge for decision-makers seeking to employ AI/ML methodology in their specific problem domains. Manual review is time-consuming <cit.>, often resulting in incomplete information without targeted searches. A tool that rapidly generates trend findings and examines solution methods for any use case would be extremely beneficial in various situations.
This research proposes a semi-automatic tool developed to generate results on solution approaches for any use case. The study presents results on multiple problem domains on AI with a focus on the case study for AI/ML in oncology.
The proposed scheme
contains the following steps:
* Determining keywords systematically from the use case by a two-domain, three-level setup.
* Automated literature extraction using selected keywords via Scopus Search API <cit.>.
* Extracting AI methods automatically from Scopus search results by using OpenAI API.
* Sensitivity analyses for both Scopus and OpenAI.
* Post-analyses based on the results.
The proposed scheme can be used iteratively for the decision makers to augment their understanding of the problem and similarly align the keywords better with the desired use case and specificity level, consequently obtaining better results.
The remainder of this paper is structured as follows: Sec. <ref> reviews the use of AI methods and the literature on model selection approaches. Sec. <ref> presents the proposed AI method selection tool, and Sec. <ref> showcases the performance, sensitivity, and post-analysis of the method. In Sec. <ref>, discussion, conclusion, and suggestions for future works are given.
§ LITERATURE REVIEW
In the literature, there are reviews and surveys on which AI approaches or applications are used for different problem domains such as building and construction 4.0 <cit.>, architecture, engineering and construction (AEC) <cit.>, agriculture <cit.>, watermarking <cit.>, healthcare <cit.>, oil and gas sector <cit.>, supply chain management <cit.>, pathology <cit.>, banking <cit.>, finance <cit.>, food adulteration detection <cit.>, engineering and manufacturing <cit.>, renewable energy-driven desalination system <cit.>, path planning in UAV swarms <cit.>, military <cit.>, cybersecurity management <cit.>, engineering design <cit.>, vehicular ad-hoc networks <cit.>, dentistry <cit.>, green building <cit.>, e-commerce <cit.>, drug discovery <cit.>, marketing <cit.>, electricity supply chain automation <cit.>, monitoring fetus via ultrasound images <cit.>, IoT security <cit.>.
As can be seen, some of the problem domains in the example reviews and surveys are low-level, while some are high-level. This abstraction level is difficult to match to the solution domain when relying on such reviews and surveys. Even if the same problem domain is considered, depending on reviews or surveys in the literature is problematic, as there may be an unlimited number of use-case scenarios and levels of specificity.
In addition, the AI approaches specified in reviews or surveys can sometimes be very general. In this case, it may be necessary to review articles manually, which costs labor and time <cit.>. Based on this idea, one can look for an automated way to minimize the time spent on manual review in order to identify an AI method applicable to a given use case.
The last decade saw significant steps toward a fully automatic model selection scheme with tools that select models for specialized use cases, generally referred to as model determination, parameter estimation, or hyper-parameter selection tools. For forecasting time series in R, the popular forecast package by R. Hyndman et al. was presented, showcasing great initial results <cit.>. For regression models, the investigated selection procedures are generally based on the evaluation of smaller pre-defined sets of alternative methods, e.g., by information criteria (AIC, BIC), shrinkage methods (Lasso), stepwise regression, and or cross-validation schemes <cit.>. For ML-based model schemes, the methods proposed by B. Komer et al. <cit.> introduce the hyperopt package for hyper-parameter selection accompanying the Scikit-learn ML library, J. Snoek et al. <cit.> presents a bayesian optimization scheme to identify the hyper-parameter configuration efficiently, and J. Bergstra et al. <cit.> identifies hyper-parameter configurations for training neural networks and deep belief networks by using a random search algorithm and two greedy sequential methods based on the expected improvement criterion. There also exist smaller frameworks, e.g., that of hyper-parameter tuning based on problem features with MATE <cit.>, to model and fit autoregressive-to-anything processes in Java <cit.>, or extensions to general purpose optimization frameworks <cit.>.
On the other hand, Dinter et al. <cit.> presents a systematic literature review on the automation of systematic literature reviews with a concentration on all systematic literature review procedures as well as natural language processing (NLP) and ML approaches. They stated that the main objective of automating a systematic literature review is to reduce time because human execution is costly, time-consuming, and prone to mistakes. Furthermore, the title and abstract are mostly used as features for several steps in the systematic review process proposed by Kitchenham et al. <cit.>. Even though our research does not stick to these procedures since our study was not a pure systematic literature review, the title and abstract are included for the OpenAI part. Additionally, they found the majority of systematic literature reviews to be automated using support vector machine (SVM) and Bayesian networks, such as Naive Bayes classifiers, and there appears to be a distinct lack of evidence regarding the effectiveness of deep learning approaches in this regard.
The work of H. Chen et al. <cit.> produces a written section of background material relevant to a solution approach, in the form of a research paper, through a bidirectional encoder representation from transformers (BERT)-based semantic classification model.
Similarly, K. Heffernan et al. <cit.> utilizes a series of machine learning algorithms as automatic classifiers to distinguish solutions and problems from non-solutions and non-problems in scientific sentences, with good results.
These findings suggest that ML-based language models can be utilized in the automation of literature review with success.
Consequently, we have identified literature that explains the procedure of manually and automatically reviewing the literature. We have also identified automated tuning frameworks for different modeling schemes.
However, there is a gap in the automatic selection of a solution approach. Our paper aims to investigate and address this gap.
§ METHODOLOGY
The proposed methodology has three main modules; see the flowchart in Fig. <ref>. The first module covers selecting keywords and getting results via Scopus Search[<https://dev.elsevier.com/sc_search_tips.html>]. Then the advanced search query returns the results where the fields are explained by Scopus Search Views [<https://dev.elsevier.com/sc_search_views.html>]. In the second module, solution methods that are used for each article are searched using the OpenAI API. In the third module, sensitivity and post-analyzes are performed. The flow indicated by the red dashed line is performed automatically.
This scheme is appropriate for any problem and solution domain. It can be used for use cases in many different fields. Although the second block of this study focuses on AI methods, this block can also evolve into other topics, such as which hardware to be used and which scientific applications to be employed. However, as the tool relies on the OpenAI framework, ground truth data is created manually to check the performance.
In Tab. <ref>, the benefits and functions of the methods used in the proposed methodology are shown.
§.§ Module 1: Scopus search
The goal of the first module is to search for a relevant pool of paper w.r.t. the given problem a user is dealing with. To do so, a keyword selection scheme has been made in order to facilitate the user's work. This scheme is then used to make a Scopus query, but also to score each paper.
To determine keywords, three specification levels (a general, an expended, and a detailed one) are applied to the given problem and the searched solutions. This work is done manually as it involves eliciting user information on the use case. That means both classification and order are specified by the user. However, this stage is critical in recommending more appropriate solution approaches because these keywords are the first inputs to the proposed methodology and determine the pool of papers used in module 2.
Fig. <ref> gives an example of the proposed keyword selection scheme.
Notice that it is possible, but not necessary, to add keywords in each field, where a field refers to the specific level in the block. Leaving some fields empty will lead to a less specified pool of solution approaches, which consequently risks not fitting the use case. At the same time, adding too many keywords can lead either to a too restricted pool of papers (e.g., if one uses too many general keywords and fills every field) or, if too many expanding keywords are given, to a less specific pool of papers, as if the field had been left empty.
The different levels showcase:
Level 1 The general and necessary keywords. The keyword must be a part of the research paper for the paper to be in the selected pool of papers.
Level 2 The expanding keywords. Here only one of the keywords in the field is necessary for the paper to be selected.
Level 3 A further specification. It is only used in the later stage to rank the identified solution methods with the relevancy metric.
After keyword selection, a query is created for the Scopus Search API. Information is searched in the titles, abstracts, and keywords of recent articles or conference papers for the words defined in levels 1 and 2; an example query and the corresponding API call are sketched at the end of this subsection.
Note that an expert can directly enter a query instead of using the keyword selection scheme. It is useful in some cases, for example: when it is difficult to find a good pool of papers using the query built by the keyword selection scheme, or when one wants to search in a specific field or a specific range of years, or for a first try if one wants to search only for reviews in order to get more appropriate academic keywords. However, it is still advantageous to follow this scheme as it helps to find, classify, and order the use case keywords, but also to specify what is important for scoring the paper.
The publication year, the number of citations, the title, and the abstract information of all articles returned by the Scopus query are saved. After all the results are obtained, the title and abstract information of all the articles are examined manually, and articles that are irrelevant and have not applied/mentioned any AI method are eliminated.
§.§ Module 2: Scoring and method extraction
In this module, the relevancy and popularity metrics for the Scopus search results are computed, and solution methods are extracted from the title and abstract of each paper.
The relevancy metric counts the number of unique level 2 and 3 keywords appearing at least once in the title, abstract, or keywords. Ultimately, the metric represents how well the methods fit the specificity of the use case. For example, a paper named “Hybrid learning method for melanoma detection" yields in the abstract “image recognition (5 times), deep learning (2 times), real-time"; it will therefore have a relevancy metric of 3, taking into account Fig. <ref>.
The popularity metric is used to gauge the research interest in a paper and its methods. It is computed as citation number/(publication age in whole years + 1), where 1 is added to the denominator to avoid division by zero.
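Both metrics are simple functions of the fields stored in Module 1; the following minimal sketch (variable and field names are illustrative) computes them for one paper.
def relevancy(paper, level2_keywords, level3_keywords):
    # Count the unique level 2 and level 3 keywords that appear at least once
    # in the title, abstract, or keywords of the paper.
    text = " ".join([paper["title"], paper["abstract"], paper.get("keywords", "")]).lower()
    return sum(1 for kw in set(level2_keywords + level3_keywords) if kw.lower() in text)

def popularity(paper, current_year=2023):
    # Citations divided by the paper's age in whole years, with 1 added to the
    # denominator to avoid division by zero for papers published in the current year.
    return paper["citations"] / (current_year - paper["year"] + 1)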
After calculating the relevancy and popularity metrics, the tool inputs the title and abstract information to OpenAI and outputs the AI approaches used in each article.
When a text prompt is provided to the OpenAI API, the model produces a text completion that attempts to match the given context or pattern. Essential GPT-3 models, which generate natural language, are Davinci, Curie, Babbage, and Ada. In this paper, “text-davinci-003" is used, which is the most potent GPT-3 model and one of the models that are referred to as “GPT 3.5"[<https://beta.openai.com/docs/model-index-for-researchers>].
Some issues to consider when preparing prompts are as follows[<https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api>]:
* It is advised to place instructions at the start of the prompt and to use ### or """ to demarcate the context from the instruction.
* Speaking of what to do is preferable to speaking about what not to do.
The prompt then instructs the model to extract the AI methods used in the given text,
where `document_text' includes the title and abstract information of a paper; an illustrative API call is sketched below.
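A minimal sketch of this step is given below. It uses the (now legacy) completions interface of the openai Python package that accompanied text-davinci-003; the prompt wording shown is an illustrative stand-in that follows the guidelines above, not the exact prompt used in the experiments.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

def extract_methods(document_text):
    # Instruction first, then the paper text delimited with ###, as recommended above.
    prompt = ("Extract the artificial intelligence and machine learning methods "
              "used in the following text and return them as a comma-separated list.\n"
              "###\n" + document_text + "\n###")
    response = openai.Completion.create(
        model="text-davinci-003",  # legacy GPT-3.5 completions model
        prompt=prompt,
        temperature=0,
        max_tokens=128,
    )
    answer = response["choices"][0]["text"]
    return [m.strip() for m in answer.split(",") if m.strip()]

methods = extract_methods(paper["title"] + ". " + paper["abstract"])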
To evaluate OpenAI's performance, the ground truth AI methods are manually produced for non-filtered papers, regarding the title and abstract information of each paper. Some high-level tags, such as “artificial intelligence" and “machine learning" are not included. In other words, the keywords used in Scopus search as a method are not involved. Precision, recall, and F1-measure are calculated for performance analysis.
§.§ Module 3: Analyses
In this module, sensitivity analyses are performed for Scopus and OpenAI. Different combinations of level 1 and 2 keywords in the Scopus query are tried, and the initial prompt is compared with other prompts for OpenAI.
For the selected use case, post-analyses are performed by investigating which AI methods are used more often and which have higher relevancy or popularity metrics, and by comparing the results over different periods.
This can be done manually or, if there are too many methods listed, a clustering algorithm can first be used to support this investigation. Currently, density-based spatial clustering of applications with noise (DBSCAN) <cit.>, used with (1 - the normalized Indel similarity) as the distance, performs well enough to support the post-analysis.
§ EXPERIMENTS
§.§ Use case definition
The use case example given in Fig. <ref> is tackled for our initial experiment. Here, AI is employed on the dataset of images to detect cancer.
§.§ Keywords from the use case scenario
Using Fig. <ref>, the following keywords are defined:
“oncology" as problem level 1, “artificial intelligence" and “AI" as solution level 1. Only “image processing" is used as solution level 2.
By using only one level 2 keyword, the experiment stays rather general in the expected results.
For simplicity, level 3 keywords are not used in this example. Level 3 keywords do not affect the pool of papers but enable the user to assign higher relevancy to papers that match their use case better. Because the computation of the relevancy metric is trivial, it is omitted in this example.
§.§ Scopus API search and manual article cleaning
According to the selected keywords, our initial query of Scopus API[<https://dev.elsevier.com/sc_search_tips.html>] is given below.
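Up to the exact advanced-search field syntax (the PUBYEAR and DOCTYPE clauses are written here as assumed from the Scopus documentation), the query reads:
TITLE-ABS-KEY ( "oncology" AND ( "artificial intelligence" OR "AI" ) AND "image processing" ) AND PUBYEAR > 2013 AND ( DOCTYPE ( ar ) OR DOCTYPE ( cp ) )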
That means the keywords are searched in the title, abstract, and keyword parts. In addition, to limit the size of the results, only publications published after 2013 are selected and, to be more specific, the document type is restricted to “Article" or “Conference Paper".
Then the DOI, eid, year, and citation number results that the Scopus API returns are given in Tab. <ref>. The relevancy and popularity values are calculated as stated in Sec. <ref>. Currently, some papers can have a relevancy of 0, although manual checking shows that they are still relevant. This happens when keywords only appear in the “INDEXTERMS" provided by Scopus but are absent from the title, abstract, and author keywords; it is also due to the absence of level 3 keywords. It can be fixed by including these automatic index terms in the OpenAI analysis.
The query returns 92 results. Among them, 25 publications (irrelevant, not technical, just survey, etc.) indicated in red in Tab. <ref> are manually filtered.
The remaining 67 articles are the results related to the domains and keywords of the use case.
However, there are among them 12 papers, highlighted in orange, that apply an AI method successfully but do not mention particular methods in the title and abstract (they mention only highly general, level 1 and 2 ones); they will therefore be missed by the OpenAI extraction part described in Sec. <ref>. However, this is not critical, as trends are explored.
Still, 55 papers remain to be analyzed.
Note that the 37 articles that were set aside could have been marked as such automatically if the level 3 keywords had been implemented.
§.§ OpenAI
The initial prompt for the OpenAI API is built around `document_text',
which includes the title and abstract information of a paper.
After finding methods using OpenAI and manual work, the precision value is calculated. Here it is assumed that manual findings are the actual methods. On the other hand, the results coming from OpenAI are the predicted ones.
§.§.§ OpenAI performance
To analyze the results, the methods found by OpenAI are compared to the ones found by manual investigation (considered ground truths) for each paper. The following performance determinants are used:
* “true found" the number of methods found both by OpenAI that belong to the ground truths,
* “false found" the number of methods found by OpenAI that do not belong to the ground truths,
* “true general found" the number of methods found by OpenAI and the manual search but belonging to level 1 or 2 keywords or high-level keywords like “machine learning",
* “total manual" the number of ground truths,
* “missing" = “total manual" - “true found".
With these data, precision, recall (or sensitivity or true positive rate), and F1-score can be calculated for performance analysis.
To do that, the following metrics are employed:
* True Positive (TP) =“true found",
* False Positive (FP) =“false found" + “true general found",
* False Negative (FN) =“missing".
The “true general found" results are counted as False Positive since they are terms that are entered into the Scopus search or they are high-level keywords for our solution domain interest like “machine learning, artificial intelligence-based approach" as mentioned above.
For each paper that is not filtered, the performance metrics are calculated as follows.
* Precision = TP / (TP + FP)
* Recall = TP / (TP + FN)
* F1-score = 2 ×Precision × Recall/(Precision + Recall)
The F1-score assesses the trade-off between precision and recall <cit.>. When F1-score is high, it indicates that both precision and recall are high. A lower F1-score indicates a larger imbalance in precision and recall.
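These scores can be computed per paper and then averaged, or once over the pooled counts; a minimal sketch that also reproduces the numbers reported below:
def prf1(tp, fp, fn):
    # Precision, recall, and F1-score from (per-paper or pooled) counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(prf1(6, 2, 1))       # worked example below: (0.75, 0.857..., 0.8)
print(prf1(108, 51, 12))   # pooled counts over the 55 papers: (0.679..., 0.9, 0.774...)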
Let's check the following example, coming from <cit.>:
“Transfer learning with different modified convolutional neural network models for classifying digital mammograms utilizing Local Dataset"
“ [...] accuracy of different machine learning algorithms in diagnostic mammograms [...] Image processing included filtering, contrast limited adaptive histogram equalization (CLAHE), then [...] Data augmentation was also applied [...] Transfer learning of many models trained on the Imagenet dataset was used with fine-tuning. [...] NASNetLarge model achieved the highest accuracy [...] The least performance was achieved using DenseNet169 and InceptionResNetV2. [...]"
Manually, “transfer learning", “convolutional neural network", “NASNetLarge", “DenseNet169", “InceptionResNetV2", “data augmentation", and “fine-tuning" are found as AI methods. What OpenAI has found is highlighted as well.
Highlighted in green, “transfer learning", “convolutional neural network", “data augmentation", `NASNetLarge", “DenseNet169" and “InceptionResNetV2" are “true found"; so TP=6. Highlighted in orange, “machine learning algorithms" is a “true general found", and highlighted in red, “contrast limited adaptive histogram equalization (CLAHE)" is “false found", then FP=2. Finally, highlighted in blue “fine-tuning" is a “missing" and so FN=1. With these, data can compute Precision=6/(6+2)=0.75, Recall=6/(6+1)=0.86 and F1-score=(2× 0.75× 0.86)/(0.75+0.86)=0.8.
In our studied case (see <ref>), the average scores are good, with an average precision of 0.7111, recall of 0.9226, and F1-score of 0.7775. There are 108 TPs, 51 FPs, and 12 FNs if all 55 results are grouped into a single result pool. The values of the precision, recall, and F1-score are then 0.6793, 0.9, and 0.7742, respectively. All ground truths and OpenAI findings are presented in Tab. <ref>.
§.§ Sensitivity analyses
§.§.§ Scopus API sensitivity
For the Scopus sensitivity analysis, different combinations of level 1 keywords are tried in the query. The initial query can be seen in Sec. <ref>.
Tab. <ref> shows the impact of changing keywords in level 1. Replacing a problem domain keyword with a synonym can greatly affect the papers found. Using the more specific keyword “machine learning" in the solution domain instead of “artificial intelligence" changes the publications found, and similarly, using “cancer" instead of “oncology" in the problem domain has a great impact on the number of papers found. On the other hand, changing double quotes to braces does not have much effect. Moreover, using only an abbreviation instead of the full form can change the number of results found; using only the abbreviation resulted in a poor paper pool.
However, despite the different pools of papers, the methods found by OpenAI are pretty much the same for the second and the third query. This means that using synonyms changes the pool of papers but not the methods used to solve the same kind of problem; hence, the approach is robust to the keyword selection scheme.
§.§.§ OpenAI sensitivity
To analyze the sensitivity of OpenAI, different prompts are tested, and the differences in the proposed AI methods are checked.
Results are summarized in Tab. <ref>, and details are provided in <ref>. The number in the last column is an enrichment ratio: if two prompts return identical results, the ratio becomes infinite, whereas differences between the outputs of two prompts decrease the ratio, taking into account both whether the same set of words is returned for a paper and how many of the returned words differ.
The prompts below are used for the analysis.
Prompt 1
Prompt 2
Prompt 3
Prompt 4
Prompt 5
Prompt 6
The original prompt has a higher F1-score value than the other six prompts. With these few prompts, it can already be said that OpenAI is sensitive to the sentence used. However, it generally adds words with respect to the manual search, and extracting the most common words belonging to these results should be enough to find what the user is searching for. Moreover, it is observed that changing a word's position has less impact than changing a word; the more words the user changes, the more differences appear. It also seems that using more common/usual words will give more generic results, closer to the ones that are being searched for; when using very specific instructions, notably in the action verbs, the results will generally be more irrelevant.
§.§ Post-analyses
The extracted AI methods for the use case described in Sec. <ref> are presented in <ref>. The total number of appearances of the methods and their relevancy and popularity metrics are showcased in Tab. <ref> by year. Methods selected from articles that are not highlighted in Tab. <ref> and that appear in at least two papers are discussed.
Fig. <ref> illustrates the summary chart of Tab. <ref>.
It is seen from the figure that many different methods have been investigated to solve our example use case, but some are much more used or popular than others. These methods (e.g., class 2 (deep learning methods) and class 1 (artificial neural networks)) are the ones that the user should investigate in the first place to solve the given use case. To be more specific, until 2018 different types of neural networks, logistic regression, SVM, and random forest are popular methods. After 2018, SVM and neural networks are still utilized, and the extra trees classifier seems popular in 2022. However, the trend is being dominated by deep learning methods. Among the deep learning algorithms, CNN, U-Net, and AlexNet can be counted as the three most used and popular methods.
AI methods can be examined without making any classification, but in this case, there will be too many methods. To simplify this situation, the methods are divided into classes. In <ref>, specifics on the method classification and detailed information for the AI methods in these classes are provided. Moreover, a more detailed decision-making process can be carried out by using the relevancy and popularity metrics. For example, these metrics support decision-making when one is uncertain between two AI methods.
§.§ Experiments for different problem domains
In order to check the robustness of the tool, different problem domains and solution approaches are also considered for the Scopus search. The same initial prompt given in Sec. <ref> is used for all use cases to extract AI methods by utilizing OpenAI API.
First, the same problem domain is kept, and the level 2 solution approach is changed as given in the below query.
The aforementioned search yields 35 documents. Although 5 of them effectively use an AI approach, they do not mention any particular methods in the title or abstract, and 15 of them are irrelevant or merely surveys. Consequently, 15 of them are selected in the manner described in Sec. <ref>. Fig. <ref> shows AI methods employed in selected papers. Until 2019, SVM seems to be a popular method, and from 2019 the trend is shifting to deep learning algorithms. Recurrent neural network (RNN), convolutional neural network (CNN), and BERT are among the deep learning methods that are more used after 2019. In addition, some of the most popular methods are BERT, long short-term memory (LSTM), and generative pre-trained transformers (GPT).
Secondly, the solution approach components are kept the same while the problem domain is changed. The query for the ”traffic control" problem domain is presented below.
The query returns 52 results, where nine are irrelevant or just surveys, and 20 use an AI method successfully, but they do not mention specific methods in the title and abstract. Therefore, 23 of them are selected. In Fig. <ref>, it is seen that until 2020, classical methods like scale-invariant feature transform (SIFT), speeded up robust features (SURF), k-nearest neighbors (KNN), and decision trees are popular methods. After 2020, deep learning methods class (that contains region-based CNN (R-CNN), Fast R-CNN, Faster R-CNN, you only look once (YOLO), deep simple online real-time tracking (DeepSORT), CNN, U-Net, etc.) is on the rise in terms of the number of uses and popularity.
Another query is for the “satellite imagery” problem domain, given below. It returns 66 results, and 37 of them are selected to be used in the analyses.
Fig. <ref> illustrates the summary of extracted AI methods. Class 1 includes CNN, deep neural network (DNN), DeepLabv3+, Fully Convolution Networks (FCN), U-Net, U-Net++, encoder-decoder, attention mechanism, Res2Net, ResNet, LSTM, SegNet, V-Net, U2Net, AttuNet, LinkNet, mask R-CNN, and cloud attention intelligent network (CAI-Net). On the other hand, class 2 covers ant colony optimization (ACO), genetic algorithm, particle swarm optimization (PSO), bat algorithm, and artificial bee colony (ABC). Until 2020, SVM, artificial neural network (ANN), and ACO were frequently used and popular methods. After 2020, the use and popularity of class 1 and PSO appear to be increasing. In class 1, the top three most used and most popular methods are CNN, U-Net, and DNN. As can be seen from the trend, the first methods to be considered in this problem domain may be the deep learning methods given above.
In Tab. <ref>, OpenAI performance results for all experiments are given, where TP, FP, and FN values are considered as a single pool, i.e., the performance metrics are not average values over the individual article results. It should also be taken into account that if the “true general found" words (i.e., machine learning, artificial intelligence, image processing) were not included in the FP, higher precision and F1-score values would have been obtained. Although the problem domain and solution approach change, similar performance results are attained, which is promising for the robustness of the tool.
§ DISCUSSION AND CONCLUSION
A big issue when utilizing automatic solution method selection schemes is the trust in the fit, relevancy, and popularity of the suggested methods. The fit to the actual use case depends on the ability of the human operator to interact with the tool and whether or not they understand the intricacies of the approach. With the proposed method, the human operator has the ability to validate the suggested methods from the accompanying pool of research papers, and due to the simplicity, responsiveness, and intuitiveness, it is relatively straightforward for the human operator to modify and align the usage of the tool with the overall goal of solving a problem. Additionally, to increase the tool's performance in terms of operation requirements (e.g., explainability, trustworthiness) and resources (e.g., hardware), the necessary features or extra resources for AI methods can be added and expanded later if the detailed requirements and current resources are stated clearly.
For example, if explainability is required, many different methods exist for obtaining explainable AI (XAI) methods <cit.>. On the other hand, if trustworthiness is required, then according to the system, environment, goals, and other parameters where AI will be used, several alternative criteria for trustworthiness may be specified <cit.>.
Details or requirements such as explainability and trustworthiness can be retrieved in the keyword selection scheme in Fig. <ref>.
Alternatively, after AI methods are found by the proposed tool, post hoc analyses can be made with the requirements not used in the proposed method. In some use cases, such requirements or details may not be specified at the beginning of the AI system life cycle and, therefore, may not be included in the keyword selection phase.
Due to the specificity of certain use cases, there is a considerable risk that no research has been conducted on the specifics of the use case. Consequently, the proposed methods will likely not showcase a high score in the relevancy metric. Therefore, the literature pool must be investigated after the results are identified.
Ultimately, the tool's applicability comes down to the objective of the application. It will comfortably propose methods already explained in the literature as to why it is very useful when identifying trends in the research communities. However, as the method identification is based on historical data that train the tool to determine what words within a research paper can be classified as a method, the tool will not fare well when dealing with entirely new solution approach schemes.
It is noteworthy that the relevancy explained in Sec. <ref> is computed and saved at the same time as the other data. It could be useful in the future if one wants an automatic filter. On the other hand, if the pool of papers is too big to be manually filtered, it is possible to filter at the end of the process, when one is checking for the methods to be used. The main disadvantage of filtering after the whole process is that it can allow a lot of irrelevant papers to be analyzed by OpenAI, and this will modify the perception of the trends of research for the studied use case. However, note that our tool is used to get trends in research about a given use case to support the selection of solution methods, and does not directly select a method for the user. It means that having some irrelevant papers analyzed in the whole process will not lead to a completely different result. Moreover, no information is lost, so the trends can be recomputed after filtering if necessary.
On the other hand, when the experiments are examined, the tool produces robust results concerning OpenAI performance for different problem and solution domains in its current state. In terms of the trend, up-to-date usage, and popularity of solution methods, our proposed approach quickly produces rich and advantageous information for the user. In addition, the recommended keyword selection scheme offers a very flexible structure in choosing the problem domain and solution approach for any use case.
§.§ Future work
Due to the nature of the underlying problem, certain processes are technically more difficult to automate than others <cit.>. In its current form, the proposed method still needs a human to perform the keyword selection, check the results given by the query, classify the found methods, and validate the robustness of the solution.
For future work, it would be of high value to remove the need for human intervention while presenting results that signify the trade-off for the different automated decisions. Our study towards automating these tasks is currently underway.
Simultaneously, employing versions from the updated suite of large language models, such as OpenAI's GPT-4[<https://openai.com/gpt-4>], and exploring other databases (like Web of Science, PubMed, IEEE Xplore, etc.) are also future works. Besides, open-source alternatives to GPT-3 or GPT-4, such as GPT-NeoX-20B <cit.> and GPT-J <cit.>, will be implemented to help in cutting costs.
The sensitivity analysis is split into two parts: queries and prompts. Queries highly depend on the keyword selection scheme and should be studied together with it. However, an automatic sensitivity analysis can reasonably be made using some variants of the initial query, such as using quotation marks instead of brackets or using several forms of the same words.
Later, it could be interesting to study the sensitivity concerning synonyms.
Prompts, on the other hand, can be analyzed more easily. Indeed, several sentences could be automatically generated from the initial one and then tested. Taking the common pool of solutions, or using a score such as the number of occurrences, could provide a robust solution.
Classifying methods is not easy, as we want to keep a stratification level from general methods to specific ones. However, as deep learning is already used to classify images, e.g., gaining attention in cancer research <cit.>, a deep learning method could pool variants such as YOLO-v2, YOLOv4-tiny, etc., together and reduce the number of distinct methods.
Without any logical pooling, a simple clustering approach based on the text, such as DBSCAN, can be used to make an automatic pooling for a sufficiently big set of methods extracted. However, if we want to automatically match a specific taxonomy, another method will be needed.
Currently, the tool only checks the title, abstract, and keywords for the method determination. For certain papers, the specifics of the method are only introduced later in the paper. E.g., for hybrid methods. Consequentially, an important extension will be to determine the applied method of a paper from the entirety of a paper.
Finally, the tool can essentially investigate any arbitrary characteristic of the literature rather than only the solution approaches, e.g., identifying problem formulations and varieties therein. Therefore, exploring how to do this manually will greatly benefit the research community.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Deniz Kenan Kılıç: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Alex Elkjær Vasegaard: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Aurélien Desoeuvres: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing. Peter Nielsen: Conceptualization, Methodology, Validation, Investigation, Data curation, Writing – review & editing, Supervision.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENT
All authors read and agreed to the published version of the manuscript.
§ SCOPUS AND OPENAI RESULTS
In Tab. <ref>, Scopus results are shown for the initial query stated in Sec. <ref>. As it is mentioned, articles highlighted in red are manually deleted, and the orange ones that use the AI method are related to the use case but do not specify it in the title and abstract.
In Tab. <ref>, OpenAI results for the initial prompt and ground truth methods extracted manually are shown with performance determinants. These performance determinants are utilized to calculate performance metrics stated in <ref>.
§ OPENAI PERFORMANCE RESULTS
Below, OpenAI performance results for 55 articles are listed in the same order as Tab. <ref>.
* TP = [1, 3, 2, 3, 1, 2, 1, 2, 2, 2, 3, 1, 1, 3, 1, 1, 2, 1, 4, 2, 0, 2, 1, 3, 5, 1, 1, 3, 1, 1, 1, 2, 0, 6, 2, 1, 2, 3, 1, 1, 1 ,2, 1, 2, 2, 3, 2, 6, 2, 2, 2, 4, 2, 1, 1]
* FP = [0, 1, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 2, 2, 0, 1, 3, 1, 0, 0, 3, 0, 1, 2, 3, 1, 0, 0, 0, 0, 1, 2, 0, 0, 1, 2, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 2, 0, 1, 1, 1, 1, 5, 2]
* FN = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0 ,0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0]
* Precisions = [1, 0.75, 1, 1, 1, 0.5 ,1, 0.5, 0.6667, 1, 1, 0.5, 0.3334, 0.6, 1, 0.5, 0.4, 0.5, 1, 1, 0, 1, 0.5, 0.6, 0.625, 0.5, 1, 1, 1, 1, 0.5, 0.5, 0, 1, 0.6667, 0.3334, 1, 1, 0.5, 0.5, 1, 1, 0.5, 1, 0.6667, 0.75, 0.6667, 0.75, 1, 0.6667, 0.6667, 0.8, 0.6667, 0.1667, 0.3334] and Average(Precisions) = 0.7111
* Recalls = [1, 1, 1, 1, 1, 1, 1, 0.6667, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 0.8, 1, 0, 1, 1, 1, 0.8334, 1, 1, 1, 1, 1, 1, 1, 0, 0.75, 1, 1, 1, 1, 1, 1, 1, 0.6667, 1, 1, 1, 1, 1, 0.8571, 1, 1, 1, 0.6667, 1, 1, 1] and Average(Recalls) = 0.9226
* F1-score = [1, 0.8571, 1, 1, 1, 0.6667, 1, 0.5714, 0.8, 1, 1, 0.6667, 0.4, 0.75, 1, 0.6667, 0.5714, 0.6667, 0.8889, 1, 0, 1, 0.6667, 0.75, 0.7143, 0.6667, 1, 1, 1, 1, 0.6667, 0.6667, 0, 0.8571, 0.8, 0.5, 1, 1, 0.6667, 0.6667, 1, 0.8, 0.6667, 1, 0.8, 0.8571, 0.8, 0.8, 1, 0.8, 0.8, 0.7273, 0.8, 0.2857, 0.5] and Average(F1-score) = 0.7775
If all 55 results are considered as a single result pool, then there are 108 TPs, 51 FPs, and 12 FNs. Then precision, recall and F1-score values are 0.6793, 0.9, and 0.7742, respectively.
When the performance metrics are examined, the OpenAI presents good performance for the manually generated ground truths.
§ OPENAI SENSITIVITY RESULTS
In Tab. <ref>, Tab. <ref> and Tab. <ref>, missing and extra/different methods are given with respect to the initial prompt. If there is no missing or extra/different method name, it is expressed by “X".
§ EXTRACTED AI METHODS AND POST-ANALYSES
In Tab. <ref>, the number of times each method is mentioned in the articles is given by year, and the relevancy and popularity sums are written next to it. The total number of articles used is 55, i.e., those in Tab. <ref> that are neither filtered nor general. Methods are classified by their occurrence number and by grouping similar ones, as described below. Of course, the classification of methods can be done in different ways and at different levels; they are classified here to obtain a more compact overview of the results. The “true general found" results are not included. The methods that are “true found" and mentioned in at least 2 articles are shown.
In the classes listed below, after each method, it is written that it is employed in how many papers total, how many times it is used in which years, and the total relevancy and popularity metrics according to these years.
Class 1 (Artificial neural networks): Paraconsistent Artificial Neural Network (PANN) (x1; 2014, 0, 0.6), Artificial Neural Network (ANN) (x6; 2014, 1, 2.7; 2015, 1, 1; 2016, 0, 6; 2017, 1, 0.7143; 2021, 0, 4.6667; 2023, 0, 2), Probabilistic Neural Network (PNN) (x2; 2015, 0, 0.4444; 2017, 0, 3.2857), Multi-Layer Feed-forward Neural Network (MFFNN) (x1; 2016, 0, 1.125), Neural Networks (x6; 2017x2, 1, 3.8572; 2018, 0, 4; 2019, 0, 0.2; 2020, 1, 0.25; 2023, 0, 0), Perceptron (x1; 2020, 1, 3.75), Back-Propagation Perceptron (x1; 2020, 1, 3.75), Fully Connected Network (FCN) (x1; 2022, 0, 1.5)
Class 2 (Deep learning methods): Deep learning (x15; 2019, 0, 0.2; 2020x3, 1, 27.75; 2021x3, 1, 4.3334; 2022x3, 1, 3.5; 2023x5, 1, 2), Generative Adversarial Network (GAN) (x2; 2019, 0, 0.2; 2020, 1, 3.75), ResNet (x1; 2020, 1, 3.75), ResNet50 (x1; 2021, 0, 4.6667), AlexNet (x2; 2020, 1, 3.75; 2021, 0, 4.6667), U-Net (x2; 2021, 1, 0; 2022, 0, 1.5), Convolutional Neural Network (CNN) (x4; 2021, 0, 4.6667; 2022, 0, 2; 2023x2, 1, 0), 2D U-Net (x1; 2021, 0, 2.3333), 3D U-Net (x1; 2021, 0, 2.3333), Deep Reinforcement Learning (DRL) (x1; 2022, 0, 1), Convolutional Encoder-Decoder Architecture (x1; 2022, 0, 1), Convolution algorithm (x1; 2022, 0, 0), Deep Convolutional Neural Network (DCNN) (x1; 2023, 0, 0), NASNetLarge (x1; 2023, 1, 0), DenseNet169 (x1; 2023, 1, 0), InceptionResNetV2 (x1; 2023, 1, 0), EfficientNets (x1; 2023, 0, 2), Conditional Generative Adversarial Network (cGAN) (x1; 2023, 0, 0)
Class 3 (Tree-based methods): Random Forest (x2; 2016, 1, 2.125; 2018, 0, 4), Decision Trees (x1; 2016, 0, 4.125), Extra Trees Classifier (x1; 2022, 0, 8.5)
Class 4 (Optimization methods): Genetic Algorithm (x1; 2014, 1, 2.7), Sequential Minimal Optimization (SMO) (x1; 2016, 0, 1.75), Ant Colony Optimization (ACO) (x1; 2023, 1, 1)
The number of papers using each method is counted per year between 2014 and 2023, and over all time. Relevancy and popularity sums are calculated for a specific method with regard to the related articles. In other words, the first column (“Papers") states how many articles use the method in total. The second and third columns show the sum of the relevancy and popularity values for these articles, respectively.
If all the time is considered, class 1, class 2, class 3, class 4, “K-nearest neighbors (KNN)", “support vector machine (SVM)", “K-means", “grey level co-occurrence matrix (GLCM)" and “logistic regression" are the ones that are mentioned in at least 2 articles. Sorting the total number of papers using these methods from largest to smallest is as follows:
Papers: class 2 > class 1 > “SVM" > class 3 > class 4 = “KNN" > “K-means" = “logistic regression" = “GLCM"
The relevancy values for all times are sorted as:
Relevancy: class 2 > class 1 > class 4 = “SVM" = “KNN" > class 3 = “GLCM" > “K-means" = “logistic regression"
On the other hand, the sorting of popularity values for all time is given below and it indicates the highest value belongs to class 2.
Popularity: class 2 > class 1 > class 3 > “logistic regression" > “SVM" > class 4 > “GLCM" > “KNN" > “K-means"
From the above methods, it is seen that both the number of implementations and the popularity of class 1 and class 2 have been increasing over the years. For this reason, tests can be started with AI methods in these classes in a similar problem domain.
|
http://arxiv.org/abs/2307.06026v1 | 20230712091435 | Learning from Exemplary Explanations | [
"Misgina Tsighe Hagos",
"Kathleen M. Curran",
"Brian Mac Namee"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
Misgina Tsighe Hagos^1,2, Kathleen M. Curran^1,3, Brian Mac Namee^1,2
^1Science Foundation Ireland Centre for Research Training in Machine Learning
^2School of Computer Science, University College Dublin
^3School of Medicine, University College Dublin
Learning from Exemplary Explanations
====================================
eXplanation Based Learning (XBL) is a form of Interactive Machine Learning (IML) that provides a model refining approach via user feedback collected on model explanations. Although the interactivity of XBL promotes model transparency, XBL requires a huge amount of user interaction and can become expensive as feedback is in the form of detailed annotation rather than simple category labelling which is more common in IML. This expense is exacerbated in high stakes domains such as medical image classification. To reduce the effort and expense of XBL we introduce a new approach that uses two input instances and their corresponding Gradient Weighted Class Activation Mapping (GradCAM) model explanations as exemplary explanations to implement XBL. Using a medical image classification task, we demonstrate that, using minimal human input, our approach produces improved explanations (+0.02, +3%) and achieves reduced classification performance (-0.04, -4%) when compared against a model trained without interactions.
Keywords: Explanation based Learning, Interactive Learning, Medical Image Classification.
§ INTRODUCTION
Interactive Machine Learning (IML) is an approach that aims to provide a platform for user involvement in the model training or retraining process <cit.>. The literature on IML is dominated by active learning which reduces the manual effort associated with creating labelled training datasets by interactively selecting a sub-sample of an unlabelled dataset for manual labelling <cit.>. However, eXplanation Based Learning (XBL) has recently begun to gain traction as it allows deeper interaction with users by providing an opportunity to collect feedback on model explanations <cit.>. This form of interaction allows a more transparent form of model training than other IML approaches as users get a chance to refine a model by interacting-with and correcting its explanations.
XBL starts off with a learner model, f, that was initially trained using a simple classification loss, categorical cross entropy for example, which is calculated based on the error between the model's prediction and ground-truth label. Then, XBL typically refines f by augmenting its classification loss with an explanation loss,
L = L_CE + L_expl + λ∑_i=0θ_i
In Equation (<ref>), L_CE is the traditional categorical cross entropy which is calculated based on the error between the model's predictions and ground-truth labels; L_expl is an explanation loss that is computed between the explanation produced from a model and a manual annotation of input instances, M; λ is a regularisation term used to avoid overfitting that could be caused by the introduction of the new loss term, L_expl; and θ refers to network parameters. M can be a mask showing the important image regions that a learner should focus on or a mask of confounding or non-salient regions that a model should ignore. Saliency based feature attributions are usually used to generate model explanations. One example, from <cit.> formulates the explanation loss for training instances x ∈ X of size N and Gradient Weighted Class Activation Mapping (GradCAM) model explanations generated using a trained model f as shown in Equation (<ref>). GradCAM is a saliency based local model explanation technique <cit.>.
L_expl = ∑_i=0^N M_iGradCAM(x_i)
As is seen in the inner circle of Figure <ref>, in XBL, the most common mode of user interaction is image feature annotation. This requires user engagement that is considerably much more demanding than the simple instance labelling that most IML techniques require <cit.> and increases the time and cost of feedback collection in XBL. As can be seen in the outer circle of Figure <ref>, we are interested in lifting this pressure from users (or feedback providers) and simplifying the interaction to ask for identification of two explanations as exemplary explanations and ranking them as good and bad explanations, and so make feedback collection cheaper and faster. This kind of user interaction where users are asked for a ranking instead of category labels has also been found to increase inter-rater reliability and data collection efficiency <cit.>. We incorporate this feedback into model training through a contrastive loss; specifically, triplet loss <cit.>.
The main goal of this paper is to demonstrate the effectiveness this loss based on just two exemplars. Therefore, we use an existing feature annotated dataset to identify good and bad explanations to demonstrate suitability of our proposal. In a real-world interactive learning scenario where end users have to choose the good and bad explanations, active learning approaches can be used to reduce the pool of explanations users have to choose the explanations from.
The main contributions of this paper are:
* We propose the first type of eXplanation Based Learning (XBL) that can learn from only two exemplary explanations of two training images;
* We adopt triplet loss for XBL to incorporate the two exemplary explanations into an explanation loss;
* In addition to showing that XBL can be implemented with just two instances, our experiments demonstrate that our proposed method achieves improved explanations and comparable classification performance when compared against a baseline model.
§ RELATED WORK
Based on the approach utilised to incorporate user feedback into model training, XBL methods can be generally categorised into two: (1) augmenting loss functions; and (2) augmenting training datasets using user feedback by removing confounding or spurious regions identified by users.
Augmenting Loss Functions.
XBL methods that fall under this category follow the approach introduced in Equation <ref> by adding an explanation loss to a model's training to refine it to focus on image regions that are considered relevant by user(s) or to ignore confounding regions. One example of this category is Right for the Right Reasons (RRR) <cit.> that penalises a model with high input gradient model explanations on the wrong image regions based on user annotation. It uses,
L_expl = ∑_n^N [ M_n ∂/∂ x_n∑_k=1^K logŷ_nk] ^2
for a function f(X|θ)=ŷ∈ℝ^N× K trained on images x_n of size N with K categories, where M_n ∈ {0, 1} is user annotation of image regions that should be avoided by the model.
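As a rough illustration of the penalty in Equation (<ref>), an input-gradient explanation loss can be sketched as follows. This is a hedged approximation rather than the reference RRR code; in particular, summing over colour channels and the numerical stabiliser are assumptions.

import tensorflow as tf

def rrr_explanation_loss(model, x, masks):
    # Penalise input gradients of the summed log-probabilities inside the
    # annotated regions M_n (regions the model should *not* rely on).
    with tf.GradientTape() as tape:
        tape.watch(x)
        log_probs = tf.math.log(model(x, training=True) + 1e-8)
        score = tf.reduce_sum(log_probs, axis=-1)
    grads = tape.gradient(score, x)            # shape (batch, H, W, C)
    grads = tf.reduce_sum(grads, axis=-1)      # collapse colour channels (assumption)
    return tf.reduce_sum((masks * grads) ** 2, axis=[1, 2])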
Similarly, Right for Better Reasons (RBR) <cit.> uses Influence Functions (IF) in place of input gradients to correct a model's behaviour. Contextual Decomposition Explanation Penalisation (CDEP) <cit.> penalises features and feature interactions.
User feedback in XBL experiments can be either: (1) telling the model to ignore non-salient image regions; or (2) instructing the model to focus on important image regions in a training dataset <cit.>. While the XBL methods presented above refine a model by using the first feedback type, Human Importance-aware Network Tuning (HINT) does the opposite by teaching a model to focus on important image parts using GradCAM model explanations <cit.>.
Augmenting Training Dataset.
In addition to augmenting loss functions, XBL can also be implemented by augmenting a training dataset based on user feedback. Instance relabelling <cit.>, counterexamples generation <cit.>, and using user feedback as new training instances <cit.> are some of the methods that augment a dataset to incorporate user feedback into XBL.
While XBL approaches show promise in unlearning spurious correlations that a model might have learned by giving attention to non-relevant or confounding image regions <cit.>, they all need a lot of effort from users. In order to unlearn spurious correlations from a classifier, <cit.> collected feature annotation on 3000 chest x-ray images. This kind of demanding task hinders practical deployment and domain transferability of XBL. For this reason, it is of paramount importance to build an XBL method that can refine a trained model using a limited amount of user interaction in order to achieve a plausible and domain transferable implementation. To the best of our knowledge, this area of XBL is completely unexplored.
§ EXEMPLARY EXPLANATION BASED LEARNING
As is illustrated by Equations <ref> and <ref>, for typical XBL approaches, user annotation of image features, or M, is an important prerequisite. We introduce Exemplary eXplanation Based Learning (eXBL) to mitigate the time and resource complexity caused by the feature annotation process. In eXBL, we propose to simplify the expensive feature annotation requirement and replace it with two exemplary explanations: a good GradCAM explanation (C_good) and a bad GradCAM explanation (C_bad). However, even if this replaces feature annotation with two labels, categorising explanations would still be expensive if it had to be performed for all training instances, the number of which could be in the thousands. For this reason, we only use one C_good and one C_bad.
We choose to use GradCAM model explanations because they have been found to be more sensitive to training label reshuffling and model parameter randomisation than other saliency based explanations <cit.>. To select the good and bad explanations from a list of generated GradCAM explanations, we use an objective explanation metric, called Activation Recall (AR). AR measures how much of the actual relevant parts of test images, M, are considered relevant by a model. While a larger AR value means a model is giving higher attention to relevant image regions, a smaller AR would mean the model is not focusing on relevant image parts for its prediction. AR is formulated as follows,
AR_x ∈ X = ∑( GradCAM(x) * M )/∑ M
We then assign the products of the input instances and their GradCAM explanations to C_good and C_bad using the instances with the maximum and minimum AR values, respectively, as follows,
C_good := i· GradCAM(i) , where AR_i = max_x∈ X AR(x)
C_bad := j· GradCAM(j) , where AR_j = min_x∈ X AR(x)
The product of the input instance and the GradCAM explanation is used instead of the GradCAM explanation alone because taking only the GradCAM outputs as the good/bad explanations could lead to biased exemplary explanations, as it would mean we are only taking the model's focus or attention into consideration.
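A minimal NumPy sketch of the AR-based selection described above is given below; the normalisation of the GradCAM maps and the mask format are assumptions for illustration, not the authors' released code.

import numpy as np

def activation_recall(cam, mask):
    # Fraction of the annotated relevant region activated by the model
    # (assumes cam is normalised to [0, 1] and mask is binary).
    return float(np.sum(cam * mask) / np.sum(mask))

def pick_exemplars(images, cams, masks):
    ar = np.array([activation_recall(c, m) for c, m in zip(cams, masks)])
    i, j = int(np.argmax(ar)), int(np.argmin(ar))
    c_good = images[i] * cams[i][..., None]   # i * GradCAM(i), highest AR
    c_bad = images[j] * cams[j][..., None]    # j * GradCAM(j), lowest AR
    return c_good, c_bad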
We then take inspiration from triplet loss to incorporate C_good and C_bad into our explanation loss. The main purpose of our explanation loss is to penalise the learner according to its distance from C_good and C_bad: the closer to C_good and the further from C_bad, the lower the loss.
For the product of the training instances x∈ X and their corresponding GradCAM outputs, x · GradCAM(x), we compute the Euclidean distances d_xg and d_xb, which represent the distances from C_good and C_bad, as follows,
d_xg := d(x · GradCAM(x), C_good)
d_xb := d(x · GradCAM(x), C_bad)
We train the model f to achieve d_xg≪ d_xb for all x. We do this by enforcing d_xg - d_xb + margin < 0, with margin = 1.0.
We then compute the explanation loss as follows,
L_expl = ∑_i^N max(d_x_ig - d_x_ib + margin, 0)
In addition to correctly classifying the training images, which is achieved through L_CE, this L_expl (Equation <ref>) would train f to output GradCAM values that resemble the good explanations and that differ from the bad explanations.
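The hinge-style loss of Equation (<ref>) can be sketched as follows; this is a simplified illustration, and the batching and flattening of the distances are assumptions.

import tensorflow as tf

def exbl_explanation_loss(expl, c_good, c_bad, margin=1.0):
    # expl: batch of x * GradCAM(x) products; c_good / c_bad: the two exemplars.
    batch = tf.shape(expl)[0]
    flat = lambda t: tf.reshape(t, [batch, -1])
    d_good = tf.norm(flat(expl - c_good), axis=1)   # distance to the good exemplar
    d_bad = tf.norm(flat(expl - c_bad), axis=1)     # distance to the bad exemplar
    return tf.reduce_sum(tf.maximum(d_good - d_bad + margin, 0.0))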
§ EXPERIMENTS
§.§ Data Collection and Preparation
We use the Covid-19 Radiography Database <cit.>[<https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database>], which contains chest x-ray images of four categories: covid, normal, lung opacity, and viral pneumonia. We downsample the dataset to circumnavigate class imbalance. For model training we used 800 x-ray images per category, totalling 3,200 images. For validation and testing, we used 1,200 and 800 images, respectively. We resize all images to 224 × 224 pixels. The dataset is also accompanied by feature annotation masks, collected from radiologists, that show the relevant regions of each x-ray image <cit.>.
Even though the exact number of affected images is unknown, the dataset contains confounding regions, such as marks, text, and timestamps, in many of the images.
§.§ Model Training
We followed a transfer learning approach using a pre-trained MobileNetV2 model <cit.>. We chose MobileNetV2 because it achieved better performance on the chest x-ray image classification task at a reduced computational cost, after comparison against the pre-trained models available on the Keras website[<https://keras.io/api/applications/>]. In order for the training process to affect the GradCAM explanation outputs, we only freeze and reuse the first 50 layers of MobileNetV2 and retrain the rest of the convolutional layers, together with a classifier head that we added (256 nodes with a ReLU activation and 50% dropout, followed by a Softmax layer with 4 nodes).
We first trained the MobileNetV2 to categorise the training set into the four classes using categorical cross entropy. It was trained for 60 epochs[The model was trained with an early stop monitoring the validation loss at a patience of five epochs and a decaying learning rate = 1e-04.] using Adam optimiser. We refer to this model as the Unrefined model. We use the Unrefined model to extract good and bad GradCAM explanations. Next, we employ our eXBL algorithm using the good and bad explanations to teach the Unrefined model to focus on relevant image regions by tuning its explanations to look like the good explanations and differ from the bad explanations as much as possible. We refer to this model as the eXBL model and it was trained for 100 epochs using the same early stopping, learning rate, and optimiser as the Unrefined model.
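A sketch of the transfer-learning setup described above is shown below; the use of global average pooling before the dense head and the exact optimiser configuration are assumptions where the text leaves details open.

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
for layer in base.layers[:50]:          # freeze and reuse only the first 50 layers
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),   # assumed pooling before the head
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])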
§ RESULTS
Tables <ref> and <ref> show classification performance of the Unrefined and eXBL refined models. While the average AR score of GradCAM explanations produced using the eXBL model is 0.705, the explanations of the Unrefined model score an average AR of 0.685. Sample test images, masks, GradCAM outputs, and overlaid GradCAM visualisations of both the Unrefined and eXBL models are displayed in Figure <ref>. From the sample outputs, we observe that the eXBL model was able to produce more accurate explanations that capture the relevant image regions presented with annotation masks. However, the superior explanations of the eXBL model come with a classification performance loss on half of the categories as is summarised in Table <ref>.
§ CONCLUSION
In this work, we have presented an approach to simplify the demanding task of feature annotation in XBL to the identification of only two model explanations. Our approach, Exemplary eXplanation Based Learning (eXBL), can tune a model's attention to focus on relevant image regions, thereby improving the saliency-based model explanations. We believe our approach is domain transferable and shows potential for real-world implementation of interactive learning using XBL.
Even though the eXBL model achieved comparable classification performance when compared against the Unrefined model (especially in categorising the Normal and Lung opacity categories, in which it scored better and equal to the Unrefined model, respectively), as is presented in Tables <ref> and <ref>, we observed that there is a classification performance loss when retraining the Unrefined model with eXBL to produce good explanations. We attribute this to the accuracy-interpretability trade-off. Although the existence of this trade-off is debated <cit.>, performance loss after retraining a model could mean that the initial model was exploiting confounding regions in the training instances. It could also mean that our selection of good and bad explanations may not have been optimal and that the two exemplary explanations may be degrading model performance.
The two exemplary explanations are selected using an objective evaluation metric, AR, and an existing dataset of annotation masks. For system development and experimental purposes, we use the masks as base knowledge. Although we believe our work presents a simple approach to implementing XBL in other domains, future work should involve domain experts when picking the good and bad explanations. However, when involving end users, since the pool of explanations from which to choose the exemplars could be large, active learning approaches should be explored to select a subset of model explanations with which to prompt domain experts for feedback.
§ ACKNOWLEDGEMENTS
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
http://arxiv.org/abs/2307.06273v1 | 20230712161817 | Spintronics in 2D graphene-based van der Waals heterostructures | [
"David T. S. Perkins",
"Aires Ferreira"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci"
] |
School of Physics, Engineering and Technology and York Centre for Quantum Technologies, University of York, York, YO10 5DD, UK
Spintronics in 2D graphene-based van der Waals heterostructures
David T. S. Perkins and Aires Ferreira
===============================================================
Spintronics has become a broad and important research field that intersects with magnetism, nano-electronics, and materials science. Its overarching aim is to provide a fundamental understanding of spin-dependent phenomena in solid-state systems that can enable a new generation of spin-based logic devices. Over the past decade, graphene and related 2D van der Waals crystals have taken center stage in expanding the scope and potential of spintronic materials. Their distinctive electronic properties and atomically thin nature have opened new opportunities to probe and manipulate internal electronic degrees of freedom. Purely electrical control over conduction-electron spins can be attained in graphene-transition metal dichalcogenide heterostructures, due to proximity effects combined with graphene's high electronic mobility. Specifically, graphene experiences a proximity-induced spin-orbit coupling that enables efficient spin-charge interconversion processes; the two most well-known and at the forefront of current research are the spin Hall and inverse spin galvanic effects, wherein an electrical current yields a spin current and non-equilibrium spin polarization, respectively. This article provides an overview of the basic principles, theory, and experimental methods underpinning the nascent field of 2D material-based spintronics.
§ ACRONYM GLOSSARY
2DEG: two-dimensional electron gas
BR: Bychkov-Rashba
CEE: collinear Edelstein effect
DOF: degree of freedom
FM: ferromagnet or ferromagnetic
ISGE: inverse spin galvanic effect
ISHE: inverse spin Hall effect
hBN: hexagonal boron nitride
KM: Kane-Mele
SdH: Shubnikov-de Hass
SGE: spin galvanic effect
SHE: spin Hall effect
SOC: spin-orbit coupling
SOT: spin-orbit torque
SV: spin-valley
TB: tight binding
TMD: transition metal dichalcogenide
vdW: van der Waals
WAL: weak localization
WL: weak anti-localization
§ INTRODUCTION
Spintronics can be considered the magnetic counterpart to electronics whereby the transfer and processing of information can be conducted through the electron's spin degree of freedom, rather than or in addition to its electronic property of charge. Specifically, spintronics concerns itself with the electrical manipulation of the electron's spin and one of its first milestones was the observation of a spin-polarized current by Tedrow and Meservey in 1973 Tedrow1973. Despite this and the first controlled injection of a spin-polarized current into a non-magnetic material being achieved in 1985 Johnson1985, it was only in 1988 that spintronics began to bloom and flourish into the field we know today. What sparked this scientific boom was the independent observation of a giant magnetoresistance attributed to the relative orientation of the ferromagnetic (FM) layers in Fe-Cr-based systems by Baibich1988,Binasch1989.
Since the Nobel prize winning discovery of giant magnetoresistance, initial efforts in the field focused on spin valve setups to study how the electron's spin would diffuse and relax through a normal metal when injected via a spin polarised current using an FM contact. However, a more exotic method of spin current generation had already been proposed in 1971 by Dyakonov and Perel Dyakonov1971, the spin Hall effect (SHE), where the application of an electrical current would yield a perpendicular pure spin current without the need for magnetic materials. The origin of this effect can be found in the asymmetric scattering of electrons based upon their spin due to the presence of spin-orbit coupling. Initially observed in 2004 via optical methods Kato2004, the SHE and its inverse (ISHE) have become paradigmatic phenomena in spintronics due to their lack of reliance upon magnetic components. A major focus for the application of the SHE is in low-power magnetic memory devices, where the manifestation of a spin current results in a spin accumulation, and hence a net spin polarization, which can exert a torque on the magnetization of a nearby FM. With a large enough spin accumulation, dramatic effects like magnetization switching can be induced. Clearly, with the use of a simple electrical current, we can manipulate the spin of charge carriers in a material and induce interesting nonequilibrium phenomena at interfaces between materials, thus yielding important applications in modern information technology.
In addition to spin current, another major concept in spintronics is spin texture: the momentum dependence of the transport electrons' spin within a solid. In many low-dimensional systems and at heterostructure interfaces, the enhancement of the relativistic spin-orbit interaction is key to generating non-trivial spin textures that give rise to a plethora of phenomena, ranging from the spin-momentum-locked surface states in topological insulators to the topologically protected real-space spin textures seen in chiral magnets, such as skyrmions and spin spirals. Of particular interest is the emergence of a spin polarization, as in the SHE, but without a resulting spin current. This effect, the spontaneous generation of a spin polarization solely via application of an electric current, is known as the inverse spin galvanic effect (ISGE), though in some literature it may also be referred to as the Rashba-Edelstein or Edelstein effect. Based upon Bychkov-Rashba-type spin-orbit coupling (SOC) Bychkov1984, arising when the mirror symmetry about a given plane is broken, this effect was initially predicted in two-dimensional electron gases (2DEGs) in semiconductor heterostructures in 1989 Aronov1989,Edelstein1990,Aronov1991 but remained unobserved in experiment until 2002, when its reciprocal effect (i.e. spin-to-charge conversion) was observed Ganichev2002. Combining both the SHE and ISGE, we find ourselves in a position with great control over the motion and net orientation of the electron spins in materials. However, to truly construct realistic devices using a spin-focused infrastructure, we must understand what makes an ideal material for providing such intricate control over spin. The two most important factors governing the effectiveness of spintronic devices are disorder and SOC: both mechanisms have significant consequences for the transport range of electron spins and our control over them. While a larger SOC might ensure more efficient generation of spin currents and polarizations, it comes at the cost of faster spin dephasing, and hence spin information is lost over a much shorter distance. In contrast, a weak SOC might allow for long-range transport, but results in less efficient charge-to-spin conversion. Similarly, disorder can also change the efficiency of spin-charge interconversion. In a pristine system, the only processes present will be those intrinsic to the system. However, upon the inclusion of disorder, some intrinsic mechanisms are completely suppressed in favor of extrinsic mechanisms. Furthermore, spin-orbit active impurities can have similar effects to including SOC in the system, due to their ability to change an electron's spin orientation. Clearly, there is a balancing act to be handled in constructing ideal spintronic devices, with both disorder and SOC acting as the tuning knobs.
Prior to 2004, experiments studying spin transport focused primarily on 2DEGs realized in a multitude of systems including thin metallic films Jedema2001, permalloys Steenwyk1997,Dubois1999, and semiconductor heterostructures Yang1994,Kikkawa1999,Ohno1999,Malajovich2000. In most cases, spin information was passed into the system by either a spin polarized charge current, or via optical excitation of a semiconductor resulting in a spin imbalance of the conduction band. Despite these initial endeavors, the electrical processing of spin-encoded information was hindered by the difficulty of combining effective spin control with large enough spin lifetimes. However, with the discovery of graphene in 2004 as the first truly 2D, i.e. atomically thin, solid state system, a new epoch dawned in spintronics. It was quickly established that bare graphene offered the largest spin diffusion lengths of any material to date, with many reports finding l_s∼ 1 - 20 μm, courtesy of the weak intrinsic spin-orbit and hyperfine interactions of sp^2 hybridized carbon.
Naturally, graphene's extremely small intrinsic SOC makes it difficult to have precise electrical control over the net spin orientation, but does allow for long-distance spin-information transfer. However, with the many advances in nanofabrication over the past two decades and the isolation of other 2D materials, such as transition metal dichalcogenides (TMDs), layer-by-layer assemblies of 2D materials have been imagined and constructed with the ability to alter the net behavior of the composite system based purely on the individual layers used. Thus the concept of bespoke devices combining the desirable properties of different materials has become possible in the form of van der Waals (vdW) heterostructures.
What single 2D crystals lack, vdW heterostructures may offer a way to access by enhancing certain properties by matching an appropriate set of materials. Through simple proximity effects, the properties absent or deemed too weak in the original isolated crystal can be enhanced. Semiconducting TMDs, such as MoS_2 and WTe_2, are a classic example of this; due to their composition involving transition metal atoms, they naturally possess a large SOC. By stacking this with a graphene monolayer, the electrons of the graphene sheet will experience a proximity-induced SOC effect, due to their ability to now hop between the two layers. Furthermore, graphene's transport nature is not jeopardised by the proximity of the TMD since the low-energy states of graphene lie well within the TMD band gap. Consequently, transport is still dominated by the more conductive graphene layer, though the electronic states are now endowed with a substantial SOC. The typical size of these induced SOCs is up to order 10 meV Wang2019, which is three orders of magnitude larger than the intrinsic SOC present in regular graphene (Kane-Mele-type of order 10 μeV Sichau_19). The specific values of the SOCs depends on the choice of TMD partner and can be further tuned by means of external pressure Fulop2021. This dramatic change from an isolated crystal's properties with the introduction of material partners is what makes vdW heterostructures the modern candidates par excellence for many condensed matter experiments.
§ GRAPHENE AND VAN DER WAALS HETEROSTRUCTURES
Formed by stacking atomically thin layers of hexagonally packed atoms with weak van der Waals forces binding neighboring layers together, Fig. <ref>a Geim2013, vdW heterostructures constitute a specific class of 2D materials. Given the breadth of materials with 2D behavior – encompassing semiconductors, insulators, and semimetals – which can be exfoliated down to a monolayer, it is no surprise that the variety of vdW heterostructures is equally diverse. Furthermore, the electronic properties of such 2D compounds are sensitive to the number of layers, stacking sequence, and atomic coordination, while also being tunable “on-demand” through the controlled application of strain and electric fields Castro_Neto2009,Wang2012,Yun2012.
Monolayer graphene presents itself as a zero-gap semiconductor with a linear dispersion relation for both electrons and holes, whose unit cell consists of two distinct lattice sites which are individually referred to as the A and B sublattices (Fig. <ref>b). The band structure of graphene is characteristic of massless chiral Dirac fermions, which derives from the sp^2 network of carbon atoms forming a honeycomb lattice with preserved inversion symmetry (Fig. <ref>b), whose presence give rise to graphene's notable electronic properties Castro_Neto2009. Other 2D materials relevant to vdW heterostructures include: (i) hexagonal boron nitride “hBN”, an insulator, (ii) bilayer graphene, a zero-gap semiconductor with parabolic band dispersion; and (iii) group-VI dichalcogenides, direct band-gap semiconductors with strong SOC.
What makes a material useful in vdW heterostructures varies from material to material. The band gap opening in polyatomic compounds (e.g. hBN) is a direct consequence of broken inversion symmetry. The electronic band gap in monolayer hBN is about 7 eV, making it an insulating analog of graphene. The small lattice mismatch between hBN and graphene (about 1.8 %) allows for easy integration in graphene-based devices using dry transfer techniques. hBN encapsulation yields major improvements in the electronic mobility of graphene-based devices, and is currently the gold standard for the fabrication of high-performance vdW heterostructures. In lateral spin transport experiments, the hBN encapsulation of graphene has enabled a 10-fold increase in room-temperature spin lifetimes compared to devices using the now obsolete silicon oxide substrates Drogeler2014. The hBN induces an orbital gap in graphene (Fig. <ref> (a)), which results from virtual interlayer hopping processes between carbon sites in graphene and the distinct chemical species occupying the A and B sublattices of the partnered hBN layer.
Bilayer graphene can exist in Bernal-stacked “AB” form, with atoms on opposite layers stacked on top of each other in a staggered configuration, or, less frequently, in the “AA” form, with sublattices in adjacent layers perfectly aligned Liu2009. Similar to monolayer graphene, its conduction and valence bands touch at the Brillouin zone corners (Fig. <ref> (a)), however low-energy excitations around the Fermi points are associated with quadratic energy dispersion McCann2006a,Guinea2006. The finite density of states near the Fermi level exacerbates the effect of interactions leading to rich broken symmetry states even in the absence of external fields Zhang2010,Vafek2010,Nandkishore2010,Weitz2010. Individual layers in AB bilayer graphene can be addressed separately allowing important device functionalities, including band gap opening through gating McCann2006b,Castro2007. Of particular interest for spintronics, is the ability to fine-tune both the SOC (proximity with a TMD) and exchange interaction (proximity with an FM) experienced by the electrons. Only the layer that is the immediate neighbor to a partner material will experience significant proximity effects. Therefore, by applying an electric field perpendicular to the graphene bilayer, the electron density of each layer can be adjusted, which in turn changes the proximity-induced SOC and exchange interaction experienced by the electrons by shifting them towards or away from the partner material Zollner2020.
Group-VI dichalcogenides [MX_2 (M=Mo, W; X=S, Se, Te)] can be either trigonal-prismatically or octahedrally coordinated (so-called “2H” and “1T” phases, respectively). These polytypic structures have vastly different electronic properties: while TMDs of the 2H-MX_2 type are large-gap semiconductors, 1T-MX_2 are predominantly metals Mattheiss1973,Eda2012,Voiry2015. 2H-TMD monolayers have direct band gaps in the near-infrared to the visible region, which makes them well suited for a broad range of applications in optoelectronics and photonics Mak2016. Owing to ultimate (2D) quantum confinement, electrons and holes are tightly held together, which is responsible for enhanced light–matter interactions. Excitons have typical binding energies of 0.5 eV, which strongly impact the spin and optoelectronic properties of semiconducting TMDs Wang2018. Moreover, the spin-valley locking of energy states in the vicinity of the K_± points, stemming from the lack of inversion symmetry, leads to spin-valley-dependent optical transition rules Wang2012, which enable the addressing of individual valleys with circularly polarized light. The spin-valley coupling in TMDs has been explored to convert optically driven valley currents into charge currents via the inverse valley Hall effect Mak2014. In free-standing conditions, the 1T phase undergoes a spontaneous lattice distortion to a semiconducting phase dubbed 1T^', which supports robust nontrivial topological behavior (quantum spin Hall effect) Qian2014,Tang2017,Wu2018,Shi_19.
The weak van der Waals forces between planes of 2D crystals offer a practical route for band structure design. A remarkable example is “twisted” bilayer graphene, where the two graphene monolayers forming bilayer graphene are offset from one another by a simple rotation about the out-of-plane axis Lopes_dos_Santos2007, in addition to their stacking arrangement. For “magic angle” twisted bilayer graphene, the superlattice created by the two graphene sheets leads to strong renormalization of the band structure, which can exhibit flat bands at the Fermi level, leading to possible strongly correlated insulating states in the ultimate 2D (atomically thin) limit Cao2018.
With the above discussion of different 2D systems and their various combinations, it is clear that van der Waals heterostructures offer a route towards 2D designer materials. The exact combination of the individual layers is dictated by the purpose of the device being constructed. This is particularly relevant when trying to study and apply the SHE and ISGE from a technological perspective. The specifics behind these phenomena are discussed towards the end of this article, while the effect of proximity-induced SOC shall be covered shortly.
§.§ Honeycomb monolayers: A Tight-binding Description
The distinctive electronic properties of 2D layered materials and the special role played by the sublattice degree of freedom (DOF) can be best appreciated within a tight-binding (TB) model of electrons hopping on a honeycomb lattice (Fig. <ref> (b)). The minimal Hamiltonian (without SOC) reads as
H_2D = -t∑_⟨ i,j⟩( a_i^†b_j + H.c.) + m ∑_i( a_i^†a_i -b_i^†b_i)
where the fermionic operator a_i (b_i) annihilates a quasiparticle on site i belonging to sublattice A (B), t is the nearest neighbor hopping energy (t≈2.8 eV in graphene Castro_Neto2009), and ⟨ i,j ⟩ denotes the sum over nearest neighbors. The second term describes a staggered on-site energy with amplitude m=(ε_A-ε_B)/2 and is relevant for noncentrosymmetric 2D crystals (e.g. m≈ 3.5 eV in hBN Roman_2021 and m≈ 0.7-0.9 eV in semiconducting TMDs Xiao_2012), as well as graphene-based heterostructures displaying Moiré superlattice effects Dean2010,Jung2015,Wallbank_15.
This tight-binding Hamiltonian provides a starting point for understanding the low-energy properties of prototypical 2D materials. The energy bands are obtained by means of the Fourier transform a_i (b_i)=N^-1∑_ke^-ik·x_ia_k (b_k), where k=(k_x,k_y)^T is the 2D wavevector and N is the number of sites in each sublattice (N=N_A=N_B). After straightforward algebra, one finds
H_2D = ∑_k [ a_k^†, b_k^† ] [ m, ϕ^*(k); ϕ(k), -m ] [ a_k; b_k ] ,
with the geometric form factor
ϕ(k) = -t ∑_a=1,2,3 e^ik·δ_a ,
where the bond vectors, δ_a, are defined in the caption of Fig. <ref>. The energy dispersion is readily obtained as
E(k) = ±√(|ϕ(k)|^2 + m^2) ,
where the sign ± selects the positive (negative) energy branch of the spectrum. A band structure representative of graphene with a sublattice staggered potential is shown in Fig. <ref>(a). Of particular note is the appearance of local extrema in the spectrum at the corners of the Brillouin zone, K_±, where E=± m, known as Dirac points. In this example, the small orbital gap (m≪ t) could be due to the use of a lattice-matching substrate (e.g. hBN). At half-filling, the negative energy states are completely filled, and hence the low-energy physics is controlled by excitations about the Dirac points (the region around a Dirac point is known as a valley). For semiconducting TMDs and hBN, the staggered on-site energy is comparable to (or larger than) t, resulting in sizable orbital gaps at the Dirac points. The generalization of this Hamiltonian to describe the strong intrinsic SOCs inherent in semiconducting TMDs, as well as symmetry breaking SOCs in vdW heterostructures, is presented in the next section. While the extension of the effective TB model to multilayers (e.g. bilayer graphene) is straightforward, the complete details are beyond the scope of this article; interested readers can find details in Refs. Peres2010,McCann2013.
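As a quick numerical illustration of the two-band model above, the sketch below evaluates the dispersion for a particular choice of bond vectors consistent with the K_± points quoted in the text; the lattice constant is set to one and the parameter values are illustrative only.

import numpy as np

# Nearest-neighbour bond vectors (lattice constant a = 1); this choice places
# the Dirac points at K_± = (±4π/(3√3), 0), consistent with the text.
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])

def bands(kx, ky, t=2.8, m=0.1):
    # E(k) = ±sqrt(|phi(k)|^2 + m^2), with phi(k) = -t * sum_a exp(i k · delta_a).
    k = np.stack([np.asarray(kx), np.asarray(ky)], axis=-1)
    phi = -t * np.exp(1j * k @ deltas.T).sum(axis=-1)
    e = np.sqrt(np.abs(phi) ** 2 + m ** 2)
    return -e, e

# Example: dispersion along k_x through the K_+ point, showing the gap 2m.
K = 4 * np.pi / (3 * np.sqrt(3))
kx = np.linspace(K - 0.5, K + 0.5, 201)
lower, upper = bands(kx, np.zeros_like(kx))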
§.§ Honeycomb monolayers: SOC interactions
The electronic structure of 2D materials containing heavy elements is strongly modified by spin-orbit effects generated by the periodic crystal potential. In the nonrelativistic approximation to the Dirac equation, the intrinsic spin-orbit interaction reads as
H_SO = -ħ/4m_e^2c^2 s· (p×∇V) ,
where V(x) is the periodic crystal potential, p is the momentum operator, m_e is the electron mass, and 𝐬 is the vector of Pauli matrices describing spin-1/2 particles (i.e. acting on the spin DOF). The (spin-dependent) hoppings generated by Eq. (<ref>) can be obtained by exploring time-reversal symmetry (𝒯) and the crystal symmetries. The most general H_SO for honeycomb lattices may therefore be written as
H_SO^𝒯 = ∑_⟨ i,j⟩( â_i^†T_ij^abb̂_j + b̂_i^†T_ij^baâ_j) + ∑_⟨⟨ i,j⟩⟩( â_i^†T_ij^aaâ_j + b̂_i^†T_ij^bbb̂_j) ,
where â_i^†/â_i^ (b̂_j^†/b̂_j^ ) are creation/annihilation operators for the A(B) sublattice with a 2-component spinor structure acting on spin space, T_ij^(ς) (with ς=aa,ab,bb) are spin-dependent hopping coefficients with a 2×2 complex matrix structure satisfying T_ij^ς=[T_ji^ς]^†, and ⟨⟨ i,j ⟩⟩ denotes the sum over next-nearest neighbors. The neglect of hoppings beyond next-nearest neighbors in Eq. (<ref>) is justified given the exponential decay of matrix elements with the distance. Note that on-site spin-dependent terms, such as â_i^†s_zâ_i, change sign under 𝒯 and thus are not allowed.
The symmetries of the system are contained within the T_ij^(ς). Expansion of the spin hopping matrices into elements of the SU(2) spin-algebra yields
T_ij = ω_ij^x s_x + ω_ij^y s_y + ω_ij^z s_z ,
where sublattice superscripts have been omitted. The anti-unitary operator enacting 𝒯 is given by Θ=i s_yK, where K denotes complex conjugation. The requirement, T_ij=Θ T_ij Θ^-1, leads to the following constraint: ω_ij^α = -(ω_ij^α)^* (with α=x,y,z). As such, the coefficients of spin-dependent hopping mediated by H_SO are purely imaginary. The full set of spin–orbit interactions up to next-nearest neighbors is shown in Fig. <ref>, where the allowed hoppings are dictated by the point group symmetry. For example, D_6h-invariant Hamiltonians (e.g. flat pristine graphene) only permit spin-conserving next-nearest neighbor hoppings. This is the result of three symmetries. First, mirror inversion about the plane, Σ_xy^h, which reverses the sign of x- and y- components in Eq. (<ref>), forbids all spin flip processes, ω_ij^(x,y)=0. Second, the mirror symmetry Σ_yz^d, which sets y → -y and consequently reverses the sign of s_z in Eq. (<ref>), ensures that ω_ij^z = 0 for all in-plane vertical hoppings, which are naturally nearest neighbor processes. Finally, combining the Σ_yz^d mirror symmetry with 6-fold rotational symmetry about the z-axis, we see that the vertical hoppings must map onto the other nearest neighbor hoppings and hence they too must vanish.
More generally, when space inversion or horizontal reflection symmetries are broken, other terms are allowed. Of particular relevance is the C_3v point group, given that it is a common subgroup of D_3h (e.g. hBN and TMD monolayers), D_3d (e.g. rippled graphene and Si- and Ge-based graphene analogs), and C_6v (e.g. graphene on a non-lattice-matched substrate) and hence defines the most general class of Hamiltonians compatible with the honeycomb lattice symmetries Saito_2016,Kochan2017. Altogether, C_3v-invariant models allow for 3 types of SOC
H_SO^C_3v = i/3√(3)∑_⟨⟨ i,j ⟩⟩ν_ij( λ_A â_i^†s_zâ_j + λ_B b̂_i^†s_zb̂_j) + 2i/3∑_⟨ i,j⟩[ λ â_i^†( s×d_ij)_zb̂_j^ + (b ↔ a) ]
+ 2i/3∑_⟨⟨ i,j ⟩⟩[ λ_nn^A â_i^†( s×d_ij)_zâ_j^ + λ_nn^B b̂_i^†( s×d_ij)_zb̂_j^ ],
namely, spin-conserving sublattice-resolved SOC, λ_A(B), Bychkov-Rashba (BR) interaction, λ, and spin-flipping sublattice-resolved SOC, λ_nn^A(B). Here, d_ij is the unit vector along the line segment connecting site i and j, while ν_ij = ± 1 distinguishes between clockwise and anti-clockwise electron hopping respectively.
The spin-conserving SOC (first term in Eq. (<ref>)) is a fingerprint of pseudospin-spin coupling in 2D layered materials. Two cases are of noteworthy interest: (i) centrosymmetric crystals, where only a spin conserving SOC is permitted with λ_A = λ_B, as is the case in pristine graphene (D_6h) discussed above, and graphene-like materials with D_3d point group, and (ii) systems with broken inversion symmetry leading to sublattice-resolved SOC (λ_A≠λ_B), as seen, for example, in 2H-TMD monolayers (D_3h) and graphene-TMD heterostructures (C_3v). The BR interaction generated by the nearest-neighbor spin-flip hoppings (second term in Eq. (<ref>)) signals the lack of mirror inversion symmetry, Σ_xy^h, associated with reduction of the point group from D_6h to C_6v, which may occur through interfacial effects in heterostructures or via application of an electric field perpendicular to the 2D plane. Finally, next-nearest-neighbor spin-flip processes, parameterized by the couplings λ_nn^A(B) (third term in Eq. (<ref>)) are also allowed in systems lacking out-of-plane reflection symmetry, Σ_xy^h, such as graphene-based vdW heterostructures or graphene placed on a generic substrate. Note that when the in-plane inversion symmetry is also broken, these processes become sublattice-resolved (λ_nn^A≠λ_nn^B). This occurs, for example, in graphene-TMD heterostructures.
§.§ Honeycomb Monolayers with Proximity-Induced SOC: Continuum Theory
The low-energy electronic structure of 2D materials can be conveniently modeled using a (long-wavelength) continuum description. Expanding the geometric form factor [Eq. (<ref>)] to first order around each Dirac point, K_±=4π/(3 √(3) a)(±1,0)^T, that is ϕ_±(k)≃(3ta/2)(± k_x+ik_y), yields the effective single-particle Hamiltonian ℋ_0k = ħ v ( τ_z⊗σ_xk_x + τ_0⊗σ_yk_y) ⊗ s_0 + m τ_0⊗σ_z⊗ s_0 when written in the conventional basis, |ψ⟩ = (K_+, K_-)^T⊗(A, B)^T⊗ (↑,↓)^T. Here k is the 2D wavevector measured with respect to a Dirac point, v=3at/2ħ is the Fermi velocity of massless Dirac fermions, s_0 is the identity matrix acting on the spin DOF, and σ_i and τ_i are the Pauli matrices supplemented with identity acting on sublattice and valley DOFs, respectively. This low-energy Hamiltonian admits an even simpler and more elegant form when written using the valley basis, |ψ⟩ = ( K_+A, K_+B, -K_-B, K_-A )^T⊗ (↑,↓)^T, i.e. ℋ_0k = τ_0⊗ H_0k⊗ s_0, with
H_0k = ħ v σ·k + mσ_z.
The Dirac-Weyl equation [Eq. (<ref>)] governs the low-energy properties of honeycomb monolayers with low SOC and has been extensively used in investigations of opto-electronic and transport phenomena in single-layer graphene Peres2010,McCann2013. For ease of notation, the writing of the tensor product between matrices acting on different DOFs is omitted from here onward.
The spin-orbit interactions in the continuum can be derived by expanding the Fourier transformed TB Hamiltonian [Eq. (<ref>)] around the K points. Here, as in the previous section for the full TB Hamiltonian, 𝒯 and unitary (spatial) symmetries are exploited so as to constrain the allowed spin-orbit terms in the continuum theory. The time-reversal operation reverses spins s→ -s and swaps valleys (as momentum reversal sends K_+↔K_-). Since the time reversal operator is anti-unitary, the σ_y pseudospin operator is also affected, 𝒯: σ_x→σ_x , σ_y→-σ_y , σ_z→σ_z. Exploiting the 𝒯-symmetry transformations, one finds that there are 4×3=12 possible terms
ℋ_SO^𝒯 = ∑_i=x,y,z ( Δ^(i)τ_0 σ_z + λ_x^(i)τ_0 σ_x + λ_y^(i)τ_0 σ_y + ω^(i)τ_z σ_0) s_i .
Direct inspection of Eq. (<ref>) shows that the majority of SOC terms lead to anisotropic energy dispersion, and hence are incompatible with the C_3v point group symmetry. To preserve the continuous rotational symmetry of the low-energy theory about each Dirac point, the Hamiltonian within each valley should commute with the generator of rotations within the 2D plane for that valley (the total angular momentum operator). In the valley basis, the total angular momentum is independent of the valley index and takes the form
J_z = -i ħ∂_ϕτ_0 σ_0 s_0 + ħ/2( τ_0 σ_z s_0 + τ_0 σ_0 s_z),
where ϕ is the azimuthal angle. The first term of J_z is simply the orbital angular momentum, while the s_z piece is the usual spin contribution. The σ_z term arises from spin-like sublattice DOF and is therefore characteristic of vdW materials. The requirement of J_z's commutation with the Hamiltonian yields Δ^(x,y)=ω^(x,y)=λ_x(y)^z=0, λ_x^(y)=-λ_y^(x) and λ_x^(x)=λ_y^(y). One of such terms, a Dresselhaus-type SOC, λ_x^(x)τ_0(σ_x s_x-σ_y s_y), breaks the mirror reflection symmetry about the yz plane, rendering atomic sites within the same sublattice inequivalent, and is thus forbidden in C_3v invariant systems.
The full C_3v-invariant low-energy Hamiltonian in momentum space is therefore
H_k^C_3v = ħ v τ_0 σ·k s_0 + m τ_0 σ_z s_0 + Δ_KMτ_0 σ_z s_z + λ_BRτ_0 ( σ_x s_y - σ_y s_x) + λ_svτ_z σ_0 s_z ,
where the above couplings have a simple correspondence to the spin-hoppings of the TB model [Eq. (<ref>)]
Δ_KM = (λ_A+λ_B)/2 , λ_sv = (λ_A-λ_B)/2 , λ_BR = λ .
The competition between the different spin-orbit energy scales in Eq. (<ref>) influences the energy dispersion, while also dictating the topological properties and the spin structure of the eigenstates. Kane-Mele SOC (Δ_KMτ_0 σ_z s_z) leads to spin-degenerate bands due to its spin-conserving nature, with the ability to drive the system into a topologically non-trivial phase when it dominates Qian2014,Wu2018,Shi_19. However, in the honeycomb monolayer systems of interest here, including TMDs and graphene-based vdW heterostructures, the Kane-Mele SOC is negligible. In contrast, the spin-valley coupling (λ_svτ_z σ_0 s_z), also known as valley-Zeeman interaction, emerging from sublattice-resolved SOC, is generally significant and yields spin-split bands within each valley. Typically, λ_sv is of order 100 meV for semiconducting TMDs (intrinsic SOC) Xiao_2012 and 1 meV for graphene-TMD heterostructures (proximity-induced SOC) Tiwari2022. The Bychkov-Rashba term, λ_BRτ_0(σ_x s_y-σ_y s_x), arising at heterostructure interfaces, leads to spin admixture in the spin states, characterized by in-plane spin-momentum locking of the Bloch eigenstates. Similar to spin-valley SOC, this term results in spin-split bands distinguished by spin helicity rather than simple spin. The magnitude of this coupling is typically on the order of 1 to 10 meV, depending on the TMD used Wang2019,Tiwari2022. The presence of significant Rashba coupling explains the recently observed large charge-to-spin conversion via the ISGE at room temperature Offidani_17, as discussed later on in this article. Finally, it is worth noting that the sublattice-resolved spin-flip terms (λ_nn^A(B)) [see Eq. (<ref>)] are absent in the low-energy Hamiltonian (i.e. they appear at the next order in the small-k expansion), and as such have little impact on the transport physics of 2D honeycomb layers.
The dispersion relation associated with Eq. (<ref>) consists of two pairs of spin-split Dirac bands. A typical energy dispersion relation for a graphene-TMD heterostructure with competing Rashba and spin-valley couplings is shown in Fig. <ref>(a). The general electronic dispersion of C_3v invariant systems can be readily seen as
ε_ζ(k) = ±√(ħ^2 v^2 k^2 + Δ_ζ^2(k)) ,
where Δ_KM has been neglected due to its inherently small nature compared to other SOC energy scales, k≡|k|, ζ=±1 is the spin-helicity index, and Δ_ζ^2(k) is the SOC-dependent mass term,
Δ_ζ^2(k)=m^2+λ_sv^2+2λ_BR^2+2ζ√((λ_BR^2-mλ_sv)^2+ħ^2v^2k^2(λ_BR^2+λ_sv^2)) .
The spin texture of the eigenstates (see Fig. <ref>(b)) can be cast in the following compact form
⟨s⟩_ζτk = -ζ ϱ(k) (k̂×ẑ) + m_ζτ^z(k) ẑ ,
where τ = ±1 is the K_± valley quantum number. The first term describes the spin winding of the electronic states generated by the BR effect Rashba2009. The second term is due to breaking of sublattice symmetry (λ_sv≠0 or m≠0) and tilts the spins in the ẑ direction. Because of the Dirac nature of the charge carriers, the spin texture has a strong dependence upon the Fermi energy. In fact, the spins point fully out of the plane at the Dirac point, but acquire an in-plane component as k is increased. For ħ v k ≫Δ_ζ, one finds ρ(k)≃cosθ and m_ζτ^z(k)≃sinθ, with θ=- τ arctan(λ_sv/λ_BR). The tilting angle, θ, has opposite signs in different valleys by virtue of time-reversal symmetry. Furthermore, one observes two distinct electronic regimes. For energies within the spin-gap (regimes Ia and Ib in Fig. <ref>), the Fermi surface has a well-defined spin helicity, a feature reminiscent of spin–momentum locking in topologically protected surface states Schwab2011. Consequently, near-optimal charge-to-spin conversion is observed inside the spin-gap via a large ISGE. In contrast, for energies outside the spin-gap (regime II in Fig. <ref>), the spin helicity is no longer a well-defined concept, but the larger Fermi radius of the spin-majority band (ζ = -1) nevertheless allows for a detectable ISGE (i.e. while the current-induced spin polarization arising from the two sub-bands have opposite signs, they do not cancel each other). These special features of the electronic and spin structure of proximitized 2D materials are ultimately responsible for the efficient current-driven spin polarization supported by graphene-TMD heterostructures Offidani_17.
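The spin-split bands and the tilted spin texture described above can be reproduced numerically from Eq. (<ref>); the sketch below builds the 4×4 Bloch Hamiltonian of a single valley (sublattice ⊗ spin) and evaluates ⟨s⟩ for a chosen band. The coupling values are illustrative placeholders rather than fitted material parameters, and ħv is absorbed into a single prefactor.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_valley(kx, ky, tau, hv=1.0, m=0.0, d_km=0.0, l_br=0.01, l_sv=0.002):
    # Single-valley block of H_k^{C_3v}: kinetic + mass + Kane-Mele + Rashba + spin-valley.
    h = hv * (kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
    h += m * np.kron(sz, s0) + d_km * np.kron(sz, sz)
    h += l_br * (np.kron(sx, sy) - np.kron(sy, sx))
    h += tau * l_sv * np.kron(s0, sz)
    return h

def spin_expectation(kx, ky, tau, band=2, **params):
    # <s_x>, <s_y>, <s_z> of the chosen band (bands sorted by ascending energy).
    _, vecs = np.linalg.eigh(h_valley(kx, ky, tau, **params))
    psi = vecs[:, band]
    return [float(np.real(psi.conj() @ np.kron(s0, s) @ psi)) for s in (sx, sy, sz)]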
§ RELATIVISTIC SPIN-ORBIT COUPLED TRANSPORT PHENOMENA
§.§ Electronic Signatures
As alluded to above, the presence of SOC in a material can drastically alter the transport properties of materials due to spin-charge inter-conversion effects. One of the earliest signatures of SOC-affected transport was measured in magnesium films (i.e. 2DEGs) Bergman1982,Sharvin1981, where the electrical conductivity was seen to decrease upon the application of a magnetic field (i.e. a negative magneto-conductivity). This behavior can be traced back to the quantum interference between the many paths an electron can take when travelling through a disordered system. In the case of a material where SOC is absent, quantum interference leads to the coherent backscattering of electrons, thus yielding more localized states, and hence reduces the electrical conductivity in a phenomenon known as weak localization (WL). The application of a magnetic field here destroys the coherence of the backscattered electrons and hence inhibits WL. Therefore, a positive magneto-conductivity is a clear signature of WL.
In contrast, materials that are SOC-active may exhibit the complete opposite of WL, weak anti-localization (WAL). In this case, the paths of backscattered electrons interfere destructively to yield more delocalized states. This reversal in behavior can be understood to result from the spin DOF becoming an active participant in the Hamiltonian and scattering events. Here, the application of a magnetic field changes the interference of backscattered electrons from destructive to constructive, therefore decreasing the conductivity of a material. The natural signature of SOC-active materials is therefore a negative magneto-conductivity.
The role of localization in graphene, however, is more involved due to the presence of other spin-like DOFs, namely the sublattice and valley DOFs. In the absence of both SOC and inter-valley scattering (i.e. disorder is smooth on the lattice scale), graphene exhibits a WAL phase; the complete opposite of what is seen in 2DEGs. The manifestation of WAL in this system is a result of graphene's π Berry phase, which precludes electrons from backscattering and thus prevents WL. Another interpretation can be found in considering the pseudospin (sublattice) DOF as an active participant in the Hamiltonian and scattering events, much in a similar manner to how spin becomes an active participant in SOC-active 2DEGs. Upon increasing the concentration of point defects and other short-range scatterers, such that inter-valley scattering no longer remains negligible, the disordered graphene system's localization phase reverses to exhibit a WL phase instead. Likewise, keeping intervalley scattering weak (i.e. intra-valley scattering dominates) and instead introducing a strong SOC also pushes graphene from a WAL to a WL phase. Finally, when both disorder and SOC are strong graphene moves back to WAL behavior Sousa2022. Several experiments on graphene-TMD and bilayer graphene-TMD have revealed WAL phases in these systems Wang2016,Volkl2017,Yang2017,Wakamura2018,Amann2022, indicating the presence of strong symmetry-breaking SOC. However, such observations do not provide a direct spectroscopic probe for the size of the various SOCs present. Only recently has the SOC of these systems been accessed directly Wang2019,Tiwari2022.
§.§ Spin Hall Effect
While SOC might affect the electrical conductivity of graphene-based vdW heterostructures through quantum interference, more profound and exotic behavior can be observed when considering other forms of transport. The SHE is one such example of this, whereby the generation of a pure spin current driven solely by electric fields can be achieved without the need for magnetic fields. In pristine graphene – owing to its negligible intrinsic SOC Sichau_19 – the SHE is absent. However, this scenario changes drastically in the presence of disorder-induced SOC Ferreira_14,Balakrishnan2014,Milletari_16. Here, the scattering of charge carriers from spin-orbit hot spots aligns spin and orbital angular momentum in opposite directions, resulting in the formation of transverse spin currents; the magnitude of the effect is characterized by the spin Hall angle, γ.
A direct consequence of the SHE in low-dimensional systems is the accumulation of spin at the system's boundaries. Specifically, thin films (2D materials) will accrue a collection of oppositely aligned spins at opposite edges, see Fig. <ref>, due to the resulting spin current pumping up spins towards one boundary and down spins towards the opposite boundary. In thin wires (1D materials), the spin accumulation can be seen to wind around the wire's surface, in much the same way that a magnetic field appears around a wire carrying an electrical current. Reversing the direction of the applied electrical current reverses the spin accumulation in both cases: in thin films, the up and down spins now accumulate at the boundary opposite to the one where they originally gathered, and in thin wires the direction of winding is reversed.
Broadly speaking, the SHE arises in two forms: intrinsically and extrinsically. The former of these two is a direct result of the material's band structure, as opposed to being reliant upon external perturbations or extrinsic factors, such as disorder. This manifestation of the SHE can be understood in terms of an internal force experienced by the electrons travelling through the material due to the application of a charge current. This internal force is generated by the SOC experienced by the electrons, and hence is an effect embedded within the band structure of the system. An analogy can be drawn between this spin-orbit force driving the SHE and the Lorentz force responsible for the regular Hall effect. In contrast to the intrinsic SHE is the extrinsic contribution. Here, the generation of a spin current is driven by external mechanisms such as scattering from impurities. In this case, the SHE is the result of asymmetric scattering of electrons based upon their spin; up spins are scattered with a certain directional preference, while down spins are scattered with the opposite preference. This type of asymmetric scattering, known as skew scattering, plays a central role in graphene-based vdW heterostructures Milletari2017.
To illustrate the importance of disorder-induced corrections to transport for these materials, consider the Hamiltonian for graphene with just Rashba coupling (i.e. λ_sv = 0 and neglecting Δ_KM given its small nature in most realistic scenarios). The carrier concentration is typically large enough to place the Fermi energy well above the Rashba pseudogap (i.e. in regime II of the band structure). Assuming only non-magnetic disorder is present, the system clearly lacks a well-defined quantization axis due to the lack of spin-valley coupling. Therefore, the SHE should not be observable simply because some form of SOC is present. With the inclusion of a spin-valley coupling it would then be natural to expect the possible occurrence of an SHE, since a quantization axis is now well defined. However, a theoretical analysis of the SHE for the minimal model that does not include disorder self-consistently would lead one to believe that a finite SHE would exist in the absence of spin-valley coupling. This clearly emphasises the importance of including disorder in a fully self-consistent manner Milletari2017,Milletari_16.
Within the Kubo-Streda formalism of linear response Streda1982,Crepieux2001, the spin Hall conductivity may be written as σ_yx^sH = σ_yx^sH,I + σ_yx^sH,II, with Milletari_16,Milletari2017
σ_yx^sH,I = -1/2π∫_-∞^+∞ dωdf/dω{Tr[ 𝒥_y^z G^+_ωj_x G^-_ω] - 1/2Tr[ 𝒥_y^z G^+_ωj_x G^+_ω + 𝒥_y^z G^-_ωj_x G^-_ω] },
σ_yx^sH,II = 1/2π∫_-∞^+∞ dω f(ω) Re{Tr[ 𝒥_y^z G^+_ωj_x∂ G^+_ω/∂ω - 𝒥_y^z∂ G^+_ω/∂ωj_x G^+_ω] },
where 𝒥_y^z is the z-polarized spin current operator in the y-direction, j_x being the x component of the disorder-renormalized electric current operator, G^±_ω are the retarded/advanced disorder-averaged Green's functions for electrons with energy ω, f(ω) is the Fermi-Dirac distribution, and the trace is over momentum and all internal DOFs (spin, sublattice, and valley). Given the applied electric field, E, generating an electrical current (assumed to be along the x-axis), the resulting spin Hall current can then be determined using 𝒥_y^sH = σ_yx^sH E_x.
The first term, σ_yx^sH,I, is sometimes referred to as the Fermi surface contribution, while the second term, σ_yx^sH,II, is similarly called the Fermi sea contribution. In weakly disordered systems, the type II contribution is higher order in the impurity density and hence may be neglected. Similar reasoning applies to the G^+G^+ and G^-G^- terms in the type I contribution. Hence, it turns out that the leading order behavior is dictated solely by the cross term of σ_yx^sH,I. Clearly, the SHE in the metallic regime of disordered materials is essentially a Fermi surface property.
A common representation that helps to visualize how disorder is included into linear response is that of Feynman diagrams. Figure <ref> shows the dominant cross term in diagrammatic form, alongside the Dyson series describing the disorder-averaged Green's functions and the Bethe-Salpeter equation satisfied by the disorder-renormalized vertex. Note, the discussions and analysis here assume that only non-magnetic (spin-independent) scalar disorder is present. The dashed lines represent an electron (solid line) scattering from an impurity located at the cross. If σ_yx^sH,I is evaluated without vertex corrections for the λ_sv = 0 case, one finds a non-vanishing result in complete contradiction to what is expected. However, including vertex corrections to any order in the number of scattering events, σ_yx^sH,I can be seen to vanish, thus recovering the expected result for zero spin-valley coupling.
Interestingly, if one were to consider a Fermi energy located within the Rashba pseudogap (regime I of the band structure, |ε| < 2 |λ_BR|) they would also find a vanishing SHE in the absence of λ_sv. However, in this case, the type II contribution would no longer be sub-leading order and hence also needs accounting for (the cross term of the type I contribution remains the dominant part of σ_yx^sH,I). With this in mind, the computation of Eq. (<ref>) yields Milletari2017
σ_yx^sH,I = (e/16π) (|ε|/λ_BR) = -σ_yx^sH,II.
Hence, σ_yx^sH = 0 and so the SHE remains absent even in regime I due to the Fermi surface contribution being counteracted by off-surface processes.
Shifting focus to the experimentally relevant situation in which λ_BR≠ 0 and λ_sv≠ 0 while the SHE is expected to be observable (due to emergence of an effective spin quantization axis around each valley, see Fig. <ref>), the role played by disorder changes drastically. If disorder is only accounted for within the Born approximation, where all processes involving three or more scatterings from a single impurity are neglected, then one finds Eq. (<ref>) yields a vanishing result. Only upon the inclusion of at least third order scattering events (at least three scatterings from a single impurity) into the vertex correction does one find a non-zero value for σ_yx^sH,I. By accounting for these higher order processes, a scattering event can now distinguish between left and right based upon the electron's spin along the well-defined quantization axis courtesy of the spin-valley coupling, i.e. skew scattering has been included. Therefore, not only are vertex corrections key to understanding the SHE completely, but so too are higher order scattering mechanisms allowing for the manifestation of the SHE. In other words, the extrinsic SHE due to scalar disorder effects in graphene-based vdW heterostructures is controlled entirely by skew scattering, which is always active provided the existence of a tilted BR-type spin texture at the Fermi level.
As a final note on intrinsic effects, couplings such as electron-phonon and electron-electron interactions, as well as structural defects such as ripples and shears in the individual layers of the heterostructure, are also considered as being intrinsic to the system. Consequently, the intrinsic SHE can potentially play an important role in clean systems where electron-phonon coupling dominates at room temperature, or in materials with sharp boundaries between structural domains. Just how major these effects are is still a current area of study, hence they shall not be covered in this article.
§.§ Inverse Spin Galvanic Effect
A natural partner to the SHE is the ISGE: the accumulation of spin upon application of an electric field without an associated spin current. Unlike the SHE, this spin accumulation manifests throughout the whole system and does not rely on a spin current; instead, a non-trivial spin texture, facilitated by the interfacial breaking of inversion symmetry, is enough to allow for a non-zero spin polarization when an electrical current is passed through the system. To demonstrate this, consider, once again, the minimal Dirac-Rashba Hamiltonian (only λ_BR≠ 0). It was shown above that the electron spins are locked in-plane and perpendicular to the electron momentum. In the absence of an electric field (i.e. in equilibrium), the Fermi rings forming the Fermi surface (two in regime II and Ia and one in regime Ib) are perfect circles around each Dirac point. Consequently, the electrons forming these rings yield a net spin polarization of zero (as expected of the nonmagnetic materials discussed here). However, when an electric field is introduced, these Fermi rings are shifted such that they are no longer rotationally symmetric about their respective Dirac points, see Fig. <ref>. Therefore, the sum of the electron spins from each Fermi ring yields a non-zero spin polarization.
The direction of the resulting spin polarization is entirely dependent upon the system's spin texture. In the case of the Dirac-Rashba model above, the ISGE yields a spin polarization that is perpendicular to the applied electric field. The inclusion of a spin-valley coupling does not change the direction of the resulting spin polarization compared to the minimal Dirac-Rashba model. This is due to the out-of-plane component gained by the electron spins being opposite in sign between valleys (i.e. ± in the K_± valleys). Hence, summing over the contribution from both valleys, one finds a vanishing z component of spin accumulation. The role of the Kane-Mele-type SOC in these systems is negligible and hence does not contribute any meaningful changes to the spin texture either.
An alternative method to manipulating the spin texture of these materials has been found through the introduction of a twist between a graphene monolayer and a TMD monolayer Li2019,David2019,Peterfalvi2022,Veneri2022. By introducing a rotational off-set between the two layers, both the Rashba and spin-valley couplings are affected and acquire a twist-angle, θ, dependence. Not only are the magnitudes of these SOCs changed by twisting, but so too is the form of the Rashba term in the Hamiltonian. Upon the introduction of twisting, the spin-valley term maintains the same form as in Eq. (<ref>) but with λ_sv replaced by λ̃_sv(θ) (λ̃_sv(0) = λ_sv), and the Rashba term becomes
H_R(θ) = λ̃_BR(θ) e^i s_zα_R(θ)/2 (σ_x s_y - σ_ys_x) e^-i s_zα_R(θ)/2,
where λ̃_BR(0) = λ_BR and α_R(θ) is known as the Rashba phase. The exact way in which the Rashba coupling, spin-valley coupling, and Rashba phase vary with twist-angle depends heavily upon the material that graphene is paired with. A guaranteed property, however, is that α_R(θ) = c_1π for θ = c_2π/6 for c_1, c_2∈ℤ. In any case, the spin texture will clearly be affected by the change in the Rashba coupling term's form. In fact, as the layers are twisted relative to one another, the electron spins can be seen to also rotate away from being locked perpendicular to their momenta. This rotation of the spin texture can be seen in Fig. <ref>, where the out-of-plane component due to spin-valley coupling has been neglected for ease of illustration. It turns out that, for some materials, there exists a critical angle, θ_c, where the spin texture is entirely radial (Fig. <ref>c), sometimes referred to as a Weyl-type or hedgehog spin texture. Clearly, by being the perpendicular analog of the untwisted case, the resulting spin polarization will be perfectly collinear to the applied electric field. For any twist-angles away from θ = 0, ±θ_c, π/6, the net spin polarization will be in-plane but neither perpendicular nor collinear to an applied electrical current. The value of θ_c is sensitive to the partner TMD used, atomic registry, strain distribution, and external perturbations (e.g. perpendicular electric field Naimer_21), and hence will vary significantly between materials. Consequently, meaningful predictions for θ_c are challenging to make. For example, graphene-WSe_2 has been predicted to host a θ_c≈ 14^∘ (this prediction is based on an 11-band tight-binding model of the twisted heterostructures informed by both density functional theory calculations Fang_15,Gmitra2016 and available angle-resolved photoemission data Pierucci_16,Nakamura_20). In contrast, even the existence of a critical twist angle for graphene-MoS_2 is difficult to ascertain given the variation in the reported material parameters of theory and experiment Peterfalvi2022.
Defects and structural disorder are also expected to play a significant role. Of particular relevance is twist-angle disorder; a type of spatial inhomogeneity that is ubiquitous in realistic systems Uri_20. Its impact on transport properties is expected to depend crucially on the comparative sizes of the spin diffusion length, l_s, and the twist-puddle size, ξ (i.e. typical size of regions with a similar twist-angle). When l_s≪ξ, the twist-angle disorder can be considered smooth and hence can be incorporated into linear-response calculations through an appropriate averaging procedure (e.g. a Gaussian or box weighting could be applied Veneri2022). Though it is challenging to make first-principles predictions about the twist-angle behavior of the Rashba phase for realistic systems, the dependence of coupled spin-charge transport phenomena on the Rashba phase can be reliably studied by replacing the standard Rashba SOC in the low-energy Hamiltonian of Eq. (<ref>) with Eq. (<ref>). In this case, the point group symmetry of the system, and therefore Hamiltonian, is once more reduced, going from C_3v to C_3 for nontrivial twist angles.
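For the smooth-disorder regime discussed above (l_s ≪ ξ), the averaging procedure can be sketched in a few lines of Python. The susceptibility K(θ) of a perfectly aligned sample is left as a user-supplied callable, and the Gaussian weighting is one of the admissible choices mentioned above; the function names are illustrative and do not correspond to an actual library routine.

```python
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal rule, to keep the sketch self-contained."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def twist_averaged_K(K, theta0, sigma, n=2001):
    """Average a spin-susceptibility component over smooth twist-angle disorder.

    K      : callable K(theta) for a perfectly aligned sample (vectorized)
    theta0 : nominal twist angle of the device (radians)
    sigma  : width of the assumed Gaussian distribution of local twist angles
    Valid when l_s << xi, so each puddle responds with its local, aligned K.
    """
    theta = np.linspace(theta0 - 5.0 * sigma, theta0 + 5.0 * sigma, n)
    weight = np.exp(-0.5 * ((theta - theta0) / sigma) ** 2)
    weight /= trapezoid(weight, theta)            # normalize the distribution
    return trapezoid(weight * K(theta), theta)    # disorder-averaged response
```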
To understand this from a quantitative perspective, one notes that the ISGE can be written mathematically as S_α = K_αβ E_β (assuming Einstein summation), where K_αβ are the elements of the spin susceptibility tensor. As in the SHE, the spin susceptibility tensor can be determined using linear response theory Milletari_16,Milletari2017, K_αβ = K_αβ^I + K_αβ^II,
K_αβ^I = -1/4π∫_-∞^+∞ dωdf/dω{Tr[ s_α G^+_ωj_β G^-_ω] - 1/2Tr[ s_α G^+_ωj_β G^+_ω + s_α G^-_ωj_β G^-_ω] },
K_αβ^II = 1/4π∫_-∞^+∞ dω f(ω) Re{Tr[ s_α G^+_ωj_β∂ G^+_ω/∂ω - s_α∂ G^+_ω/∂ωj_β G^+_ω] },
where the Green's functions now contain the twisted form of the Rashba Hamiltonian. For disordered materials the G^+G^+ and G^-G^- of the type I contribution, as well as the type II contribution, can once again be neglected. Consequently, the results for the in-plane components of the twisted spin susceptibility tensor can be related back to the untwisted ISGE, albeit with modified Rashba and spin-valley couplings,
K_xx(λ̃_BR(θ),λ̃_sv(θ);θ) = K_yx(λ̃_BR(θ),λ̃_sv(θ);0) sinα_R(θ),
K_yx(λ̃_BR(θ),λ̃_sv(θ);θ) = K_yx(λ̃_BR(θ),λ̃_sv(θ);0) cosα_R(θ),
K_yx(λ̃_BR,λ̃_sv;0) = 4evε/π n u_0^2λ̃_BR^3 (ε^2 + λ̃_sv^2)/ε^4(λ̃_BR^2+λ̃_sv^2) - ε^2λ̃_sv^4 + 3λ̃_BR^2λ̃_sv^4,
where n is the impurity concentration and u_0 is the strength of the scalar impurities. These results are strictly valid for perfectly aligned heterostructures. For systems with smooth twist-disorder landscapes (l_s≪ξ), an intuitive approximate result can be obtained by taking the convolution of Eq. (<ref>) with a suitable twist-disorder distribution function Veneri2022. For twist angles θ≠θ_c, c_2π/6, both in-plane components of the spin susceptibility tensor will be non-vanishing, thus yielding a non-trivial polarization (S_x,S_y≠ 0). At the critical twist angle the transverse component K_yx must vanish, which by the relations above requires cosα_R(θ_c) = 0 (i.e. α_R(θ_c) = ±π/2), and hence a collinear Edelstein effect (CEE) can be achieved, whereby the resulting spin polarization will be purely (anti-)parallel to the applied electrical current (cf. Fig. <ref>(c)). What makes these twisted graphene-TMD vdW heterostructures appealing is the ease with which the SOC and spin texture can be manipulated; only a simple twist is needed to adjust them. This sets these systems apart from 2DEGs, where both Rashba-type and Dresselhaus-type SOC are present Trushin2007. Control of these SOCs in 2DEGs is achieved via asymmetric doping and the tuning of quantum well widths Ganichev2014; a set of processes far more complicated and involved than simple twisting. For further details, the reader is referred to Ref. Veneri2022 where the CEE phenomenon was predicted.
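A small numerical sketch of the relations above: given the Rashba phase at the chosen twist angle, the direction of the current-induced spin polarization follows immediately, with the untwisted susceptibility K_yx(...;0) acting only as an overall scale (treated here as a free parameter).

```python
import numpy as np

def spin_polarization_angle(alpha_R, K0=1.0, E_x=1.0):
    """Angle of the in-plane spin polarization relative to the driving current.

    alpha_R : Rashba phase at the chosen twist angle (radians)
    K0      : untwisted susceptibility K_yx(...; 0), an overall scale factor
    E_x     : applied field along x, so S = (K_xx E_x, K_yx E_x)
    """
    S_x = K0 * np.sin(alpha_R) * E_x   # collinear component (CEE channel)
    S_y = K0 * np.cos(alpha_R) * E_x   # conventional transverse component
    return np.degrees(np.arctan2(S_y, S_x))

# alpha_R = 0 (untwisted): 90 deg, purely transverse ISGE;
# alpha_R = pi/2 (critical angle): 0 deg, collinear Edelstein effect.
print(spin_polarization_angle(0.0), spin_polarization_angle(np.pi / 2))
```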
§.§ Observing Spin-Charge Interconversion
To make use of graphene's large l_s while also studying the effects of proximity-induced SOC, spin-valve setups are typically used to measure the ISHE and SGE (the inverse SHE and spin galvanic effect respectively). In this case, rather than trying to measure a spin current using an FM contact, an electrical current is measured instead by converting an injected spin current into an electrical current Valenzuela_09. A schematic of a spin-valve experiment is shown in Fig. <ref>.
Spin-valves operate by using a single FM contact to inject a spin current into graphene. This spin current is then able to flow along the isolated graphene channel until it reaches a T-junction, where it enters a region of high SOC. This region is no longer characterised by just monolayer graphene; instead, it now contains graphene layered on top of a TMD. As a result, the spin current entering this region undergoes the ISHE (due to any out-of-plane spin components) and generates an electrical current perpendicular to the injected spin current. Likewise, the electrons entering the high SOC region naturally also have a component of in-plane spin polarization and so are subject to the SGE, which also generates a perpendicular electrical current. Specifically, the component of the electron's spin parallel to the spin current generates an electrical current via the SGE, while the component of spin in-plane and perpendicular to the spin current yields an electrical current via the collinear SGE (if the material permits this process). The ensuing nonlocal resistance measured across the T-junction is therefore characterized by the spin-charge inter-conversion effects at the heart of modern spintronics.
In order to distinguish between the different electrical signals arising from the ISHE, SGE, and collinear SGE, a combination of magnetic fields must be used in conjunction with various FM orientations. The effect of applying a magnetic field to this setup is to cause spin precession about the applied field. The strength of the measured resistance in the T-junction will then depend on the magnetic field strength (how quickly the spins precess), and the length of the graphene channel (how long the spins have to precess). By combining the spin-valve measurements for various magnetic fields and FM orientations, the electrical signals associated to each of the aforementioned proximity-induced ISHE and SGE can be isolated by means of a simple symmetry analysis Cavill2020. A set of recent spin-valve experiments have revealed graphene-TMD heterostructures to yield room-temperature nonlocal resistances of order 1-10 mΩ due to the ISHE and SGE Safeer2019,Ghiasi2019,Benitez2020. Alternatively, the Onsager-reciprocal phenomena, namely the SHE and ISGE, can be discerned by measuring the spin accumulation in the direction of the spin current at opposite sides of the high SOC region, as was done in the experiment of Ref. Camosi_2022. The latter approach requires a complex multi-terminal architecture but has the advantage that it permits isolation of the SHE and ISGE in situations for which the TMD is conducting and thus directly influences the spin-charge conversion processes (due to its high intrinsic SOC).
Additionally, spin-valve measurements allow us to distinguish between conventional and anisotropic spin-charge conversion processes. An anisotropic SGE has been recently observed in graphene proximity-coupled to semimetallic (low-symmetry) TMDs Safee2019_2. Likewise, an anisotropic ISGE characterized by the presence of spin polarization components parallel and orthogonal to the driving current has been reported experimentally in Camosi_2022. However, an experimental demonstration exploiting twist-angle control in a graphene-semiconducting TMD heterostructure as described by Eq. (<ref>) has yet to be achieved.
While spin-valve setups are ideally suited for studies of spin dynamics and spin-charge interconversion processes, they do not allow one to discern the size of the SOC present in a graphene-on-TMD heterostructure. To do this, measurements of the Shubnikov-de Haas (SdH) oscillations must be made. The reconstruction of the low-energy electronic structure from SdH data allows for the determination of the average SOC, λ̅ = √(λ_BR^2 + λ_sv^2) Tiwari2022. Experiments have revealed that the typical SOC present in these materials is λ̅≃ 2.51 meV. This average value is in accord with the predictions of microscopic theories for vdW heterostructures Milletari2017,Offidani_17 regarding the observation of significant spin-charge interconversion efficiencies at room temperature, and is indeed compatible with the spin-valve measurements Safeer2019,Ghiasi2019,Benitez2020,Camosi_2022,Li2020,Hoque2021, thus demonstrating that proximity-induced SOC is responsible for the observed nonlocal resistances.
§ SUMMARY
This article has presented the role of graphene in modern spintronic devices, with a specific emphasis on how its partnership with other 2D materials can allow for bespoke systems with desirable electronic and spin transport properties. By constructing a low-energy theory of graphene, it becomes clear that transport phenomena in graphene-based systems will be dominated by the behavior of electrons around the Dirac points. From a physical point of view, one of the most striking features of isolated graphene is the description of its electrons as massless chiral Dirac fermions. Furthermore, the decoherence of spins in graphene occurs over large distances, allowing for long range spin transport, a direct result of the extremely weak intrinsic SOC (Kane-Mele) appearing naturally in graphene. However, despite its ability to host long range spin diffusion, graphene does not have a natural mechanism allowing for distinct spin control. This can ideally be achieved by pairing it with other materials that enhance its SOC.
In particular, the use of TMDs proximity-coupled to graphene gives rise to significant SOCs, of the order of meV, which allows meaningful spin currents and non-equilibrium spin polarizations, via the SHE and ISGE respectively, to manifest in the graphene layer. Consequently, electronic control of spin transport can be easily achieved and hence amalgamates well with current technological architectures. One major focus of spintronics is the implementation of the SHE and ISGE to generate large spin accumulations and polarizations that can be used to change the magnetization of a ferromagnet. In this case, the magnetization experiences a torque due to the SOC-related phenomena and so is referred to as a spin-orbit torque (SOT). The use of SOTs in technology could allow for the realization of SOT-based magnetic random access memory (SOT-MRAM), removing our reliance upon volatile memory and thus reducing energy consumption. This route away from traditional RAM appears to bear much promise and remains a current area of interest in research with many facets and subtleties still requiring further study.
As a final note, while not covered in this article, interface-induced magnetic exchange interaction is also becoming increasingly important in the field (e.g. as means to allow the manifestation of quantum anomalous Hall phases and spin-dependent Seebeck effect in graphene MEC_graphene_1,MEC_graphene_2), and the reader is referred to the literature for further details. The same goes for high-temperature topologically nontrivial phases of matter – exhibiting technologically relevant phenomena like the quantum spin Hall effect Wu2018 – and the discovery of 2D vdW magnets 2Dmag_1,2Dmag_2,2Dmag_3, which are likely to open up interesting avenues across many emergent fields, including topological quantum computation, spin-orbitronics, magnonics, and antiferromagnetic spintronics 2D_topological_1,2D_FM_spinorbitronics_1,2D_magnonics_1,2D_AF_spintronics_1.
§ ACKNOWLEDGEMENTS
The authors acknowledge support from the Royal Society through Grants No. URF/R/191021 (A.F.) and No. RF/ERE/210281 (A.F. and D.T.S.P.). We are indebted to Yue Wang and Robert A. Smith for helpful comments on the manuscript.
|
http://arxiv.org/abs/2307.04381v1 | 20230710072646 | ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals | [
"William Stenlund",
"Joel Davidsson",
"Viktor Ivády",
"Rickard Armiento",
"Igor A. Abrikosov"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics, Chemistry and Biology, Linköping
University, Linköping, Sweden
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Department of Physics of Complex Systems, Eötvös Loránd University, Egyetem tér 1-3, H-1053 Budapest, Hungary
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
MTA–ELTE Lendület "Momentum" NewQubit Research Group, Pázmány Péter, Sétány 1/A, 1117 Budapest, Hungary
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Quantum technologies like single photon emitters and qubits can be enabled by point defects in semiconductors, with the NV-center in diamond being the most prominent example. There are many different semiconductors, each potentially hosting interesting defects. High-throughput methods and automated workflows become necessary when searching for novel point defects in a large chemical space.
The symmetry properties of the point defect orbitals can yield useful information about the behavior of the system, such as the interaction with polarized light.
We have developed an automated code to perform symmetry analysis of point defect orbitals obtained by plane-wave density functional theory simulations.
The code, named ADAQ-SYM, calculates the characters for each orbital, finds the irreducible representations, and uses selection rules to find which optical transitions are allowed.
The capabilities of ADAQ-SYM are demonstrated on several defects in diamond and 4H-SiC.
The symmetry analysis explains the different zero phonon line (ZPL) polarization of the hk and kh divacancies in 4H-SiC.
ADAQ-SYM is automated, making it suitable for high-throughput screening of point defects.
ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals
Igor A. Abrikosov
August 12, 2023
========================================================
§ INTRODUCTION
Point defects in semiconductors can provide a platform for solid state quantum technology, with applications such as qubits<cit.>, sensors <cit.> and single photon emitters <cit.>. One significant benefit of quantum applications made with solid state point defects is room temperature operation <cit.>.
Theoretical calculations have been proven useful for identification of potentially interesting defects in wide-band gap semiconductors and quantitative estimations of their properties <cit.>.
Indeed, first principles methods based on density functional theory (DFT) can simulate the electronic structures and predict multiple properties <cit.>.
Each semiconductor material may host a multitude of intrinsic and extrinsic point defects.
To probe the large combinatorically complex chemical space in an efficient manner, high-throughput workflows have been developed <cit.> to simulate thousands of defect combinations, calculate relevant properties and store the results into a searchable database <cit.>.
Automatic Defect Analysis and Qualification (ADAQ) <cit.> is one such high-throughput workflow.
There are many relevant properties to study, such as how the defect interacts with light.
By analyzing symmetry of the defect orbitals and selection rules, one can deduce polarization of incoming and outgoing light.
With symmetry analysis on the theoretical side, polarization specific PL measurements can be more accurately matched with simulated defects and orientation for single defects can be identified <cit.>.
Before analyzing the orbitals, the point group symmetry of the crystal hosting the defect needs to be found. There are two broadly used codes for this, spglib <cit.> and AFLOW-SYM <cit.>. We use AFLOW-SYM because of its reported lowest mismatch when finding symmetry for known crystals <cit.>.
In addition, there are several codes that calculate irreducible representations of bands but mostly with focus on topological insulators <cit.>.
Within quantum chemistry, one method to quantitatively analyse the symmetry of molecular orbitals is with continuous symmetry measures (CSM) <cit.>, which provide a numerical measure of how close molecular orbitals are to certain irreducible representations.
Defect orbitals in the band gap are localized much like molecular orbitals, yet methods similar to CSM have not been applied to point defects in solid host materials. Presently, a common method of symmetry analysis of defect orbitals is visually inspecting an isosurface of the wave function and how it behaves under the symmetry transformations; this may be prone to human error, especially for high-symmetry structures, and is not applicable in high-throughput workflows.
Another method of analyzing the symmetry is to describe the defect orbitals as a linear combination of atomic orbitals and manually carrying out the group theory <cit.>. This method may be applicable in high-throughput, however, since the structure has already been relaxed with plane waves we focus on analyzing these directly without projecting to atomic orbitals. By omitting the projection we can keep all information present in the plane waves.
This paper presents a quantitative symmetry analysis method for defect orbitals in solid host materials simulated with a plane wave basis set and the selection rules of optical transitions between defect orbitals. We introduce ADAQ-SYM, a Python implementation of this method. The code is fast and automated, requiring little user input, making it applicable as an analysis tool for high-throughput simulations of defects.
Section <ref> presents an introduction to the group theory, specifically applied to defects.
Section <ref> describes the ADAQ-SYM algorithm that performs the symmetry analysis, and Appendix <ref> deals with how the code is constructed and what approximations are used. Computational details of the simulations in this paper are described in Section <ref>.
Section <ref> presents the results from symmetry analyses of several known defects; nitrogen vacancy (NV) center and silicon vacancy (SiV) center in diamond and the silicon vacancy (V_Si) and several divacancy (V_SiV_C) configurations in 4H-SiC. Section <ref> discusses these results and Appendix <ref> presents the recommended best practices when using the software.
§ THEORETICAL BACKGROUND
In this paper we consider point groups. For convenience we summarize basic concepts following Ref. <cit.>.
We refer to symmetry transformations as unitary transformations in three dimensional space which have at least one fixed point, meaning no stretching or translation. In the Schönflies notation, these transformations are:
* Identity, E.
* Rotation of 2π/n or 2π m/n, where n and m are integers, C_n or C_n^(m).
* Reflection in a plane, σ_x. x = h, v or d, denoting reflection in a horizontal, vertical or diagonal plane.
* Inversion, i.
* Improper rotation of 2π/n or 2π m/n, where n and m are integers, which is a rotation 2π/n or 2π m/n followed by a reflection in a horizontal plane, S_n or S_n^(m).
A set of these symmetry transformations, if they have a common fixed point and all leave the system or crystal structure invariant, constitutes the point group of that system or crystal structure.
The axis around which the rotation with the largest n occurs is called the principal axis of that point group.
The point groups relevant to solid materials are the 32 crystallographic point groups, of which the following four are used in this paper, C_1h, C_2h, C_3v and D_3d.
In brief, character describes how a physical object transforms under a symmetry transformation, (1 = symmetric, -1 = anti-symmetric, 0 = orthogonal), and representation Γ describes how an object transforms under the set of symmetry transformations in a point group.
Each point group has a character table which has classes of symmetry transformations on the columns and irreducible representation (IR) on the rows, with the entries in the table being characters. IRs can be seen as basis vectors for representations.
Each point group has an identity representation, which is an IR that is symmetric with respect to all transformations of that point group.
Character tables can have additional columns with rotations and polynomial functions, showing which IR they transform as.
Appendix <ref> contains the character tables of the point groups used in this paper, these character tables also show how the linear polynomials (x, y and z) transform.
For defects in solids, the point group is determined by the crystal structure, and the symmetry of the orbitals can be described by characters and IRs. Figure <ref> shows a divacancy defect in silicon carbide with the point group C_3v as an example.
Comparing with the character table for C_3v, Table <ref> in Appendix <ref>, one sees that the orbital has the IR a_1.
When defects are simulated with DFT, one obtains (one-electron) orbital wave functions ϕ_i and corresponding eigenvalues ϵ_i.
Optical transitions, where an electron moves from an initial state with orbital i to a final state with orbital f, has an associated transition dipole moment (TDM) μ⃗ , which is expressed as:
μ⃗ = ⟨ϕ_f|er⃗|ϕ_i⟩,
where e is the electron charge and r⃗ is the position operator.
Selection rules can be formulated with group theory <cit.>. For TDM the following applies:
for optical transitions to be allowed the representation of the TDM Γ_μ must contain the identity representation, where
Γ_μ = Γ_f ⊗Γ_r ⊗Γ_i,
with ⊗ being the direct product, and Γ_r is the IR of the polarization direction of the light, corresponding to the linear functions in the character tables.
§ METHODOLOGY
Figure <ref> shows the symmetry analysis process of ADAQ-SYM. Here, we describe the steps in detail.
First, we perform a DFT simulation on a defect in a semiconductor host material. This produces a relaxed crystal structure and a set of orbital wave functions and their corresponding eigenvalues. These are the main inputs for ADAQ-SYM.
The orbitals to be analyzed are chosen by the user. The electron orbitals associated with defects are localized
around the defect, and the inverse participation ratio (IPR) is a good measure of how localized an orbital is <cit.>. The discrete evaluation of the IPR is
χ = ∑_r |ϕ_i(r⃗)|^4/(∑_r |ϕ_i(r⃗)|^2)^2,
and can be used to identify defect orbitals in the band gap, since they have much higher IPR than the bulk orbitals. There are also defect orbitals in the bands which are hybridized with the delocalized orbitals; their IPRs are lower than those of the orbitals in the band gap, but still higher than those of the other orbitals in the bands. We employ the IPR as a tool for identifying defect orbitals in the bands by spotting outliers.
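As an illustration of this selection step, the sketch below evaluates the discrete IPR defined above for orbitals sampled on a real-space grid and flags outliers; the threshold factor is purely illustrative and is not the criterion hard-coded in ADAQ-SYM.

```python
import numpy as np

def inverse_participation_ratio(phi):
    """Discrete IPR of an orbital sampled on a real-space grid (any shape)."""
    density = np.abs(phi) ** 2
    return np.sum(density ** 2) / np.sum(density) ** 2

def flag_defect_bands(orbitals, factor=10.0):
    """Return indices of bands whose IPR stands out from the median.

    orbitals : list of wave-function arrays, one per band.
    Localized defect orbitals give a much larger IPR than delocalized bulk states.
    """
    ipr = np.array([inverse_participation_ratio(p) for p in orbitals])
    return np.where(ipr > factor * np.median(ipr))[0], ipr
```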
After the careful selection of ab initio data and inputs, ADAQ-SYM is able to perform the symmetry analysis.
Second, the "center of mass" c⃗ of each orbital is calculated according to
c⃗ = ⟨ϕ(r⃗)|r⃗|ϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) r⃗ϕ(r⃗) = ∑_rϕ^*(r⃗) r⃗ϕ(r⃗).
These centers are used as the fixed points for the symmetry transformations. Orbitals are considered degenerate if the difference in their eigenvalues is less than a threshold. When calculating c⃗ for degenerate orbitals, they are considered together and the average center is used.
This method does not consider periodic boundary conditions and necessitates the defect be in the middle of the unit cell. To mitigate skew of the center of mass, the wave function is sampled in real space, and points with moduli under a certain percentage p of the maximum are set to zero according to
ϕ_trunc(r⃗) =
0 if |ϕ(r⃗)| < p max_r⃗|ϕ(r⃗)|,
ϕ(r⃗) otherwise.
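A minimal sketch of this step is given below, assuming the orbital is available at a set of real-space grid points; the weight is renormalized after truncation, and the variable names are illustrative rather than those of the actual implementation.

```python
import numpy as np

def orbital_center(phi, coords, p=0.1):
    """Truncated "center of mass" of an orbital (cf. the equations above).

    phi    : complex wave-function values on a real-space grid, shape (N,)
    coords : Cartesian positions of the grid points, shape (N, 3)
    p      : truncation fraction; points with |phi| < p*max|phi| are zeroed
    Periodic images are not handled, so the defect should sit near the cell center.
    """
    amplitude = np.abs(phi)
    phi_trunc = np.where(amplitude < p * amplitude.max(), 0.0, phi)
    weight = np.abs(phi_trunc) ** 2
    return (coords * weight[:, None]).sum(axis=0) / weight.sum()
```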
Third, the point group and symmetry transformations of the crystal structure are found via existing codes. Each symmetry transformation has an operator Û. To obtain characters, the overlap of an orbital wave function with its symmetry-transformed counterpart, the symmetry operator expectation value (SOEV), is calculated
⟨Û⟩ = ⟨ϕ(r⃗)|Ûϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) (Ûϕ(r⃗)) ,
for each orbital and symmetry transformation. The wave function is expanded in a plane wave basis set, with G-vectors within the energy cutoff radius. Therefore, Eq. <ref> can be rewritten to be evaluated by summing over these G-vectors only once; the plane wave expansion is also truncated by reducing the cutoff radius when reading the wave function and renormalizing <cit.>.
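The following sketch indicates how such an overlap can be accumulated in a single pass over the stored G-vectors. It assumes a pure point operation acting about the orbital center placed at the origin; the translation-related phase factors and the cutoff truncation of the actual implementation are omitted for brevity.

```python
import numpy as np

def soev(C, G, U):
    """Symmetry-operator expectation value <phi | U_hat phi> for one orbital.

    C : plane-wave coefficients C_G (orbital assumed normalized), shape (N,)
    G : corresponding G-vectors in Cartesian coordinates, shape (N, 3)
    U : 3x3 orthogonal matrix of the point operation about the origin
    """
    # (U_hat phi)(r) = phi(U^-1 r), so its coefficient at G equals C at U^-1 G;
    # the overlap therefore reduces to a single sum over the stored G-vectors.
    lookup = {tuple(np.round(g, 6)): i for i, g in enumerate(G)}
    overlap = 0.0 + 0.0j
    for i, g in enumerate(G):
        j = lookup.get(tuple(np.round(U.T @ g, 6)))
        if j is not None:                  # rotated G outside the stored set
            overlap += np.conj(C[i]) * C[j]
    return overlap
```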
Fourth, the character of a conjugacy class is taken to be the mean of the overlaps of operators within that class, and the overlaps of degenerate orbitals are added.
To find the representation of a set of characters, the row of characters is projected on each IR, resulting in how many of each IR the representation contains. Consider an IR Γ, and let W⃗_Γ be the vector of the characters of Γ, each multiplied by the order (number of elements) of its conjugacy class.
For example, for C_3v the vector for a_2 is W⃗_a_2 = (1·1, 1·2, -1·3) = (1,2,-3). Let V⃗ be the vector with a row of characters and h be the order of the point group; then N_Γ, the number of times the IR Γ occurs, is calculated as follows
N_Γ = (W⃗_Γ·V⃗)/h .
For degenerate states, the found representation should be an IR with dimension equal to the degeneracy; e.g., a doubly degenerate orbital should have a two-dimensional e state.
If an IR is not found, the overlap calculation is rerun with the center of another orbital as the fixed point.
The CSM S for the IRs of molecular orbitals <cit.> is used for the defect orbitals and calculated with
S(ϕ, Γ) = 100(1 - N_Γ),
which produces a number between 0 and 100. S(ϕ, Γ)=0 means that the orbital is completely consistent with IR Γ, and S(ϕ, Γ)=100 means that the orbital is completely inconsistent with the IR Γ.
Fifth, to calculate the IR of the TDM and find the allowed transitions, the characters of the TDM are calculated by taking the Hadamard (element-wise) product of the character vectors of each 'factor'
V⃗_μ = V⃗_f ∘V⃗_r ∘V⃗_i
and Eq. <ref> is used to calculate Γ_μ.
The representation of the resulting character vector is found in the same way as the IR of the orbital was found.
As an example, consider the group C_3v with the three IRs a_1, a_2 and e. If some TDM in this group has the character vector V⃗_μ = (4,1,0), calculating the representation would look like:
W⃗_a_1 = (1,2,3), W⃗_a_2 = (1,2,-3), W⃗_e = (2,-2,0), h = 6 ,
Γ_μ = [N_a_1,N_a_2,N_e] = [(4+2)/6, (4+2)/6, (8-2)/6] = [1,1,1] .
Since Γ_μ contains a_1, the transition is allowed. The code contains a function to convert a representation array of the above format to a string such as "a_1 + a_2 + e".
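The worked example can be reproduced with a short script; the character-table data below are specific to C_3v, and the function names are illustrative rather than the ADAQ-SYM API.

```python
import numpy as np

# C_3v classes E, 2C_3, 3sigma_v with orders 1, 2, 3; group order h = 6.
CLASS_ORDERS = np.array([1, 2, 3])
IRREPS = {"a_1": np.array([1, 1, 1]),
          "a_2": np.array([1, 1, -1]),
          "e":   np.array([2, -1, 0])}
H_ORDER = 6

def reduce_representation(V):
    """Number of times each IR occurs in the character vector V."""
    return {name: float(np.dot(chars * CLASS_ORDERS, V)) / H_ORDER
            for name, chars in IRREPS.items()}

def transition_allowed(V_i, V_f, V_r):
    """Selection rule: the TDM representation must contain the identity IR a_1."""
    V_mu = np.asarray(V_f) * np.asarray(V_r) * np.asarray(V_i)  # Hadamard product
    counts = reduce_representation(V_mu)
    return counts["a_1"] >= 1.0 - 1e-6, counts

# Reproduces the worked example: V_mu = (4, 1, 0) decomposes as a_1 + a_2 + e.
print(reduce_representation(np.array([4, 1, 0])))
```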
Finally, the information produced by ADAQ-SYM is entered into a script to produce an energy level diagram which shows the position in the band gap, orbital occupation, IR and allowed transitions.
§ COMPUTATIONAL DETAILS
The DFT simulations are executed with VASP <cit.>, using the projector augmented-wave method <cit.>.
We apply the periodic boundary conditions, and the defects in the adjacent supercells cause a degree of self-interaction. To limit this, the supercell needs to be sufficiently large. In our case supercells containing more than 500 atoms are used.
The defects are simulated with the semi-local Perdew, Burke and Ernzerhof (PBE) exchange-correlation functional <cit.>. These simulations only include the gamma point, run with a plane-wave cutoff energy of 600 eV, with the energy convergence parameters 1×10^-6 eV and 5×10^-5 eV for the electronic and ionic relaxations, respectively. The simulations are done without symmetry constraints so symmetry breaking due to the Jahn-Teller effect can occur when relaxing the crystal structure.
Excited states are simulated by constraining the electron occupation <cit.>.
§ RESULTS AND DISCUSSION
To illustrate the capability of our method, we apply ADAQ-SYM to several defects in two different host materials, diamond and 4H-SiC, and analyze the symmetry properties of the defect orbitals.
The symmetry analysis provides a coherent picture of the known defects, finds the allowed optical transitions between defect orbitals, and, specifically, explains the different ZPL polarization of the hk and kh divacancies in 4H-SiC.
§.§ Diamond Defects
We first analyze the symmetry of the ground states of the NV- center and the neutral and negatively charged silicon vacancy (SiV0 and SiV-) centers in diamond.
These defects were simulated in a cubic (4a,4a,4a) supercell containing 512 atoms, where a=3.57 Å.
§.§.§ Negatively Charged NV Center
Figure <ref> shows the ground state crystal structure and electronic structure of the NV- center in diamond.
Figure <ref> (b) is the generated output from ADAQ-SYM, for each orbital in the band gap. It shows the eigenvalue, occupation and IR, as well as the allowed transitions for each polarization. In this case, the found IRs are in accordance with previous work <cit.>, and only one allowed transition is found, where the light is polarized perpendicular (⊥) to the principal axis. This selection rule has been experimentally confirmed <cit.>.
§.§.§ Silicon Vacancy Center
Figure <ref> shows the ground state electronic structure of the neutral (a) and negatively charged (b) silicon vacancy center in diamond, and the IPR for 30 KS-orbitals around the band gap.
Our DFT calculations show that most orbitals in the VB are delocalized and have a low IPR.
However, some orbitals have larger IPRs meaning that they are more localized and indicating that they are defect states. These defect states in the VB are ungerade (u), meaning anti-symmetric with respect to inversion.
Both the charge states considered in this work have point groups with inversion symmetry which only allow optical transitions between orbitals of different symmetry with respect to inversion. To populate an orbital that is gerade (g), that is, symmetric with respect to inversion, an electron from a u-state must be excited. When some orbitals in the valence band are taken into account, ADAQ-SYM finds two allowed transitions from defect states in the valence band to an empty state in the band gap, in agreement with previous calculations <cit.>.
For SiV^- the behavior of the orbitals under inversion is clear. The IR of these states will depend on the point group being analyzed, and the CSM is used to measure how well the orbitals conform to different IRs. Table <ref> shows the CSM of the defect orbitals of SiV^- in different point groups. The orbitals conform well to the C_i point group, and with some tolerance they also conform to the IRs of C_2h. The orbitals do not conform to IRs in D_3d, unless one considers the orbitals degenerate despite the difference in eigenvalues.
§.§ Silicon Carbide
In this subsection, we carry out the symmetry analysis of defects in 4H-SiC with ADAQ-SYM, in both the ground state and the lowest excited state. The IR of each KS-orbital in the band gap and the allowed polarization of light for both absorption and emission is shown in the figures below. 4H-SiC consists of alternating hexagonal (h) and quasi-cubic (k) layers, resulting in different defect configurations for the same stoichiometry. The defects were simulated in a hexagonal (6a,6a,2c) supercell containing 576 atoms, where a=3.09 Å and c=10.12 Å. For 4H-SiC, "in-plane" refers to the plane perpendicular to the c-axis.
§.§.§ Negatively Charged Silicon Vacancy
We simulated the ground and excited state of the negatively charged silicon vacancy in the h site.
Figure <ref> (a) shows two allowed transitions with different polarization, where the parallel polarized transition has slightly lower energy than the perpendicular, this corresponds well to the V1 and V1' absorption lines <cit.> associated with the silicon vacancy in the h site <cit.>.
Figure <ref> (b) shows that the transition back to the ground state emits light polarized parallel to the c-axis, in agreement with previous calculations and measurements of the V1 ZPL <cit.>.
§.§.§ High Symmetry Divacancy
Figure <ref> shows the ground and excited state of the hh configuration of the divacancy, and the allowed transitions. In the excited state, one electron occupies what was previously an empty degenerate state and causes a Jahn-Teller effect. Because of this, the point group symmetry is reduced from C_3v to C_1h and degenerate states split when the system is relaxed in our simulations. This also changes the principal axis from being parallel to the c-axis to being perpendicular to it; that is, the principal axis now lies in-plane. The selection rule tells us that absorption (to the lowest excited state) happens only for light polarized perpendicular to the c-axis, and the transition from the excited state emits light polarized parallel to the in-plane principal axis, thus also perpendicular to the c-axis. This behavior corresponds well to previous calculations and measurements <cit.>.
The kk divacancy is basically identical to the hh divacancy with respect to symmetry.
§.§.§ Low Symmetry Divacancies
The two low symmetry divacancy configurations hk and kh exhibit different behavior regarding the polarization of the ZPL <cit.>. Examining the symmetry of the orbitals and applying selection rules regarding the TDM allows us to distinguish between these configurations. For both of these low symmetry configurations, the only symmetry transformation is a reflection in a plane where the principal axis lies in-plane.
Figure <ref> shows crystal- and electronic structure information of the hk divacancy. From panel (d) one sees that the relaxation to the ground state only emits light polarized parallel to the in-plane principal axis.
Figure <ref> shows crystal- and electronic structure information of the kh divacancy. Panel (d) demonstrates that the relaxation to the ground state only emits light polarized perpendicular to the in-plane principal axis, meaning there are components both in-plane and along the c-axis.
From the symmetry analysis by ADAQ-SYM, one can attribute the differing polarization behavior of the hk and kh configurations to the symmetry of the lowest excited state (symmetric and anti-symmetric respectively). Due to the principal axis laying in-plane, it is possible to experimentally determine the orientation of individual defects measuring the in-plane polarization angle of the PL detected along the c-axis, in an experiment similar to Alegre et al. <cit.>. In such a experiment, the hk divacancy will exhibit a luminescence intensity maxima when the polarization is parallel to the principal axis, and a minima when the polarization in perpendicular. The opposite would be true for the kh divacancy, and the two configurations could be distinguished by the approximately 30 meV difference in ZPL <cit.>, or by the 30 degree polarization differences between the respective maxima.
§ DISCUSSION
The orbitals of the NV- center, seen in Figure <ref> (c)-(d), are a little asymmetric; despite this, ADAQ-SYM reproduces the results of previous calculations <cit.> because there is a tolerance when finding the characters of an orbital. This shows that the code can produce correct IRs, even for systems that are not simulated with symmetry constraints and not very tightly converged, making this a useful tool for high-throughput calculations of defects where high convergence becomes costly.
One issue that arose when analyzing SiV- is that the crystal symmetry was somewhat inconsistent with the point group to which the electronic structure seemed to conform. Depending on the tolerance, AFLOW-SYM found either C_i or D_3d as the point group. The orbitals seem to conform to a C_2h point group, although with a strict tolerance on the IRs, they only match C_i.
The crystal structure seems to be distorted in a way that breaks the symmetry only slightly, and the distortion that would reduce D_3d to C_2h is of similar magnitude to the distortion that reduced the symmetry to C_i, meaning that both fall either within or outside of the tolerance of AFLOW-SYM. A more accurate DFT simulation might address this and make the distortions distinguishable. In this case, the issue was solved manually by calculating overlaps in D_3d, then calculating the CSM for various subgroups of D_3d, and seeing in which subgroup the orbitals conformed reasonably to IRs.
Having a loose tolerance parameter for the AFLOW-SYM crystal symmetry finder can be useful in ambiguous cases since ADAQ-SYM will then run for a larger set of symmetry operators, which gives an overview and can provide insight into the extent to which the orbitals are asymmetric with regard to each operator. It is also recommended to do this when multiple gradual distortions of the same defect are examined.
The initial excited state calculation of the silicon vacancy seemed to show a case of the pseudo Jahn-Teller effect
where the symmetry was reduced and the degenerate states split despite not being partially occupied in either spin channel. Upon running a simulation with more accurate tolerance parameters, the point group remained C_3v and the splitting reduced to less than the threshold of 10 meV. For cases like this, convergence becomes more important, and looser high-throughput simulations may exaggerate these effects; to resolve this, one can use a higher degeneracy tolerance parameter, which will cause more states to be grouped together as degenerate.
§ CONCLUSION
We have presented a method of determining the symmetry of defect orbitals, and implemented this method in the software ADAQ-SYM.
The implementation calculates the characters and irreducible representations of defect orbitals, the continuous symmetry measure is also calculated to get a numerical measure of how close the orbitals are described by the irreducible representations. Finally, ADAQ-SYM applies selection rules to the optical transitions between the orbitals. The code is applicable to efficient analysis of defects.
We have applied the software to a variety of known defects with different point groups and host materials, and it reliably reproduces their symmetry properties.
It is found that the polarization of the allowed transition for hk (kh) is parallel (perpendicular) to the in-plane principal axis, in accordance with experiments. A method to determine the orientation of individual hk and kh divacancies is also proposed.
In summary, ADAQ-SYM is an automated defect symmetry analysis code which is useful for both manual and high-throughput calculations.
§ SOFTWARE AVAILABILITY
For availability of ADAQ-SYM and instructions, see https://httk.org/adaq/https://httk.org/adaq/.
§ ACKNOWLEDGEMENTS
This work was partially supported by the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT).
We acknowledge support from the Knut and Alice Wallenberg Foundation (Grant No. 2018.0071).
Support from the Swedish Government Strategic Research Area Swedish e-science Research Centre (SeRC) and the Swedish Government Strategic Research Area in Materials Science on Functional Materials at Linköping University (Faculty Grant SFO-Mat-LiU No. 2009 00971) are gratefully acknowledged.
JD and RA acknowledge support from the Swedish Research Council (VR) Grant No. 2022-00276 and 2020-05402, respectively.
The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at NSC, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
This research was supported by the National Research, Development, and Innovation Office of Hungary within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and within grant FK 145395.
§ IMPLEMENTATION
ADAQ-SYM is written in Python using functional programming. Table <ref> provides an overview of the principal functions, and Table <ref> shows the settings ADAQ-SYM uses. To run the code, the user needs to provide three files from a VASP simulation: POSCAR or CONTCAR (the crystal structure), WAVECAR (the wave functions), and EIGENVAL (the eigenvalues and occupations of the bands). The user must also define which bands should be considered for the analysis. This should be a list of indices for each of the spin channels in EIGENVAL. In most cases, one should list the indices of the bands in the band gap.
The functions and call AFLOW-SYM <cit.> to find the point group and symmetry operators of the input crystal structure; the symmetry operators are then sorted by their conjugacy class and arranged in the order the classes appear in the character table. These functions use the setting which determines the asymmetry tolerance AFLOW-SYM uses; values may be "tight" or "loose".
The point group is used to load the right character table from text files by Gernot Katzer <cit.>. The vaspwfc module in the VaspBandUnfolding package <cit.> is used for reading the WAVECAR file and working with the plane wave expansion of the wave function, and it also serves as the basis of the IPR calculations.
The calculates the "center of mass" of each of the considered bands using Eq. <ref> and <ref>, where the cutoff percentage p is read from the setting. The wave function is sampled in a real space grid where the setting makes the grid denser.
The function loops through all considered orbitals and all symmetry operators and calculates the overlap, how Eq. <ref> is computed is described in more detail in <cit.> and Numpy <cit.> is used to accelerate the evaluation.
The evaluation time of the overlap calculation scales linearly with the number of G-vectors in the plane wave expansion. To speed up the code the series is truncated by multiplying the cutoff energy by the factor . The cutoff energy corresponds to a radius in k-space and only G-vectors within the radius are used, so halving the cutoff energy gives roughly one eighth as many G-vectors. Truncating the series produces some error in the overlap, this error is relatively small for larger than 0.1 <cit.>, the symmetry does not depend strongly on the high frequency components of the plane wave expansion. Note that the overlap calculation will produce a complex number.
The function reads the EIGENVAL file and groups the considered bands by degeneracy. Two bands are considered degenerate if the difference in eigenvalue is less than . This function also outputs the eigenvalue and occupation of the considered bands.
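A sketch of this grouping logic is shown below, assuming the eigenvalues are available as a simple mapping from band index to energy; the threshold argument plays the role of the degeneracy-tolerance setting mentioned above.

```python
def group_degenerate(eigenvalues, bands, tol=0.01):
    """Group the considered band indices into degenerate sets.

    eigenvalues : mapping from band index to eigenvalue (eV)
    bands       : sorted list of band indices selected for analysis
    tol         : degeneracy threshold in eV (illustrative default)
    """
    groups, current = [], [bands[0]]
    for prev, band in zip(bands, bands[1:]):
        if abs(eigenvalues[band] - eigenvalues[prev]) < tol:
            current.append(band)   # still within the tolerance: same group
        else:
            groups.append(current)
            current = [band]
    groups.append(current)
    return groups
```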
The function takes the overlaps and the bands grouped by degeneracy and first adds the overlaps of degenerate bands for each symmetry operator, then the overlaps within each conjugacy class is averaged to produce the character. At this point, the character is complex valued but this is resolved with the following function.
The function takes a set of characters and computes Eq. <ref> for all IRs of a point group; since the overlaps are in general complex, N_Γ will also be a complex number. Doing this for a truly symmetric orbital will produce a complex number with a small imaginary component and a real component close to an integer. For a set of characters to be said to transform as IR Γ, the imaginary component must be smaller than and the real component must be within of a non-zero integer. For example, with a tolerance of 0.05, characters producing N_Γ= 0.99 + 0.02i will be interpreted as transforming as IR Γ, while characters producing N_Γ= 0.96 + 0.07i or N_Γ= 0.92 + 0.03i will not. The same procedure is used when the CSM is calculated, since Eq. <ref> uses N_Γ.
The function calculates Eq. <ref> for each occupied state i, each non-full state f and each linear function r. The representation is found with and if the trivial representation is contained, the transition is marked as allowed.
The function uses Matplotlib <cit.> to create energy level diagrams of the considered states with the occupation and IR drawn, and all allowed transitions represented by arrows between the bands. The color of the arrow differs on the polarization of the transition.
§ BEST PRACTICES
The following summarizes our recommendations when running the software:
If no IR is found and several bands are close, increase the degeneracy tolerance, which will cause more states to be grouped together as degenerate. This may be preferable since actually degenerate orbitals that are split apart will not be assigned any IR, while accidentally degenerate orbitals grouped together as degenerate will be assigned an IR which is the sum of each orbital's IR, such as a_g+b_g, which makes it clear that the orbitals are accidentally degenerate.
If no IR is found, check that the centers of mass are close to your defect. If not, recalculate the centers with higher grid density, setting 6 or 8. There is also an automated fallback where the atomic position of any unique atomic species will be used.
If the crystal symmetry is unclear or you think it should be higher, increase AFLOW's tolerance. This way, the overlaps will be calculated for a larger set of symmetry operators. Then, check the overlaps manually and look for subsets where the characters are close to integers; any such subset should correspond to a point group which is a subgroup of the larger group.
§ CHARACTER TABLES
The character tables used in this paper are presented here.
|
http://arxiv.org/abs/2307.04958v1 | 20230711012100 | Near-wall model for compressible turbulent boundary layers based on an inverse velocity transformation | [
"Kevin Patrick Griffin",
"Lin Fu",
"Parviz Moin"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Near-wall model for compressible turbulent boundary layers based on an inverse velocity transformation
Kevin Patrick Griffin, Lin Fu, and Parviz Moin
July 2023
================================================================================================================
In this work, a near-wall model, which couples the inverse of a recently developed compressible velocity transformation [Griffin, Fu, & Moin, PNAS, 118:34, 2021] and an algebraic temperature-velocity relation, is developed for high-speed turbulent boundary layers.
As input, the model requires the mean flow state at one wall-normal height in the inner layer of the boundary layer and at the boundary-layer edge.
As output, the model can predict mean temperature and velocity profiles across the entire inner layer, as well as the wall shear stress and heat flux.
The model is tested in an a priori sense using a wide database of direct numerical simulation high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700).
The present model is significantly more accurate than the classical ordinary differential equation (ODE) model for all cases tested.
The model is deployed as a wall model for large-eddy simulations in channel flows with bulk Mach numbers in the range of 0.7–4 and friction Reynolds numbers in the range of 320–1800. When compared to the classical framework, in the a posteriori sense, the present method greatly improves the predicted heat flux, wall stress, and temperature and velocity profiles, especially in cases with strong heat transfer. In addition, the present model solves one ODE instead of two and has a similar computational cost and implementation complexity as the commonly used ODE model.
§ INTRODUCTION
The largest driver of computational cost in numerical simulations of wall-bounded turbulence is typically the numerical resolution in the near-wall region. In scale-resolving simulations, e.g., wall-resolved (WR) large-eddy simulation (LES), high spatial and temporal resolutions are required to accurately simulate the small-scale eddies near walls. Wall models, or approximate boundary conditions, can be employed to reduce the near-wall resolution requirements.
The computational cost (the number of grid points multiplied by the number of time steps) for the simulation of a turbulent boundary layer scales with the Reynolds number as Re^2.7 for WRLES and Re^1.1 for wall-modeled (WM) LES <cit.>. Thus, wall models lead to substantial cost savings for high-Reynolds-number applications.
In simulations of the Reynolds-averaged Navier-Stokes (RANS) equations, high spatial resolution is also required to resolve the steep near-wall gradients in the mean flow.
Therefore, wall models —typically referred to as wall functions in the RANS context —can also greatly accelerate numerical simulations.
The present work focuses on the paradigm of wall-stress modeling <cit.> for LES. These models were derived from RANS analysis of boundary layers and typically invoke a zero-equation RANS model such as the Prandtl mixing length argument <cit.>, which models the turbulence length scale as a linear function of the wall-normal distance. An empirical damping function is introduced following <cit.> to ensure the correct near-wall scaling of the mixing length. RANS models have naturally been widely used as boundary conditions for under-resolved RANS simulations (e.g., <cit.>). In this context, such a model is typically referred to as a wall function.
<cit.> showed that the mixing length RANS model is suitable for use as a boundary condition for the LES equations, i.e., for deployment as a wall-stress model. Specifically, they invoke the
one-dimensional simplification of the RANS streamwise momentum equation. That is,
y((μ + μ_t) Uy) = 0,
where μ, μ_t, and U are the molecular dynamic viscosity, eddy viscosity, and velocity profiles, respectively, and y is the wall-normal coordinate. (·) denotes the Reynolds average and (·) denotes the Favre (density-weighted) average. Throughout this work the Favre- (density-weighted-) averaged RANS and LES equations are employed.
The eddy viscosity is further modeled as
μ_t = κ y ρ√(τ_w/ρ)( 1 - exp (y^+/A^+) )^2,
where ρ(y) is the density profile. The subscript (·)_w denotes quantities evaluated at the wall. τ_w = μ_w (dU/dy)_w is the wall shear stress. The superscript (·)^+ denotes non-dimensionalization by the friction velocity u_τ = √(τ_w/ρ_w), ρ_w, and the kinematic wall viscosity ν_w=μ_w/ρ_w.
The von Kármán constant κ = 0.41 and the eddy-viscosity damping coefficient A^+ = 17 are adopted following <cit.>.
For an incompressible flow, the density and molecular dynamic viscosity are known constants. In the context of WMLES, the ODE in Eq. (<ref>) is solved with two boundary conditions: 1) the no-slip wall condition and 2) a velocity sample, which is taken from the LES at a wall-normal distance referred to as the matching location. Note that the solution procedure is iterative because the eddy viscosity depends on the wall stress (Eq. (<ref>)). The computed wall stress τ_w is then applied as a momentum-flux boundary condition for the outer LES solver, which completes the two-way coupling of the wall model (inner) solution and the PDE (outer) simulation.
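For reference, a minimal sketch of this iterative solution is given below for the constant-property case, using a simple fixed-point update on τ_w and trapezoidal integration of the mixing-length ODE; production wall-model solvers typically discretize the boundary-value problem and use Newton-type iterations instead.

```python
import numpy as np

def wall_stress_incompressible(u_m, y_m, rho, mu, kappa=0.41, a_plus=17.0,
                               n=200, tol=1e-10, max_iter=100):
    """Iterate for the wall shear stress of the constant-property wall model.

    u_m, y_m : LES velocity sample and its wall-normal matching distance
    rho, mu  : constant density and dynamic viscosity
    """
    y = np.linspace(0.0, y_m, n)
    tau_w = mu * u_m / y_m                          # laminar initial guess
    for _ in range(max_iter):
        u_tau = np.sqrt(tau_w / rho)
        y_plus = y * u_tau * rho / mu
        damping = (1.0 - np.exp(-y_plus / a_plus)) ** 2
        mu_t = kappa * y * rho * u_tau * damping    # mixing-length eddy viscosity
        dudy = tau_w / (mu + mu_t)                  # constant-stress-layer ODE
        u_match = np.sum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))
        tau_new = tau_w * u_m / u_match             # rescale to hit the LES sample
        if abs(tau_new - tau_w) <= tol * abs(tau_new):
            return tau_new
        tau_w = tau_new
    return tau_w
```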
For compressible flow, the RANS equation for temperature can similarly be simplified to the one-dimensional form <cit.>, which results in a second, coupled ODE for the temperature profile, i.e.,
d/dy( (μ + μ_t) U dU/dy + C_p (μ/Pr + μ_t/Pr_t) dT/dy ) = 0,
where T is the temperature profile, C_p is the specific heat capacity at constant pressure, Pr is the Prandtl number, and Pr_t is the turbulent Prandtl number, which is assumed to be 0.9 <cit.>.
The dependence of molecular dynamic viscosity on temperature can be assumed to follow a power law or Sutherland's law.
The ideal gas equation of state closes the system
and the thin-boundary-layer assumption implies that the pressure is constant across the inner layer.
In WMLES, the temperature ODE in Eq. (<ref>) is solved with two additional boundary conditions: 1) the wall temperature and 2) the temperature at the matching location. Note that the solution procedure is also iterative in that the temperature depends on the velocity solution. The velocity also depends on the temperature through the density and viscosity. Solving two coupled boundary-value problems iteratively introduces a higher degree of non-linearity compared to the incompressible case and can prove difficult to converge in flows with strong temperature gradients (strong heat transfer), e.g., as was reported in <cit.>. In addition to the numerical difficulties, the accuracy of this wall model degrades substantially in flows with strong heat transfer (as will be demonstrated herein).
Improved results for high-speed wall-bounded turbulent flows over cold walls have been obtained by using the semi-local scaling in the damping function <cit.>, however, <cit.> reports that in adiabatic walls, the classical scaling (consistent with the van Driest transformation) is more accurate. This motivates using a recently developed compressible velocity transformation that is accurate for both diabatic and adiabatic turbulent boundary layers <cit.>.
In this work, a wall model for high-speed wall-bounded turbulent flows is developed in section <ref>. The model is evaluated via a priori testing in section <ref> and via a posteriori validation in section <ref>. Conclusions are drawn in section <ref>.
§ MODEL DEVELOPMENT
There are two principal differences between the present model and the classical ODE-based wall model (Eqs. (<ref>–<ref>)): (1) rather than solving an ODE for the compressible velocity profile directly, the incompressible ODE (with constant density and viscosity) is solved, and an inverse compressibility transformation <cit.> is employed; (2) rather than employing a RANS equation for temperature and assuming a constant Pr_t, an algebraic temperature-velocity relation is adopted, thus obviating the need to solve a second ODE.
§.§ Inverse compressible velocity transformation
A compressible velocity transformation seeks to map the local mean strain rate of the variable-property compressible flow, dU/dy, to the non-dimensional mean strain rate of a constant-property incompressible flow at an equivalent Reynolds number. Upon integration, the transformation maps the compressible velocity profile to an incompressible velocity profile. In this way, a successful transformation can collapse profiles with different Mach numbers and thermal boundary conditions to a single incompressible law of the wall. Coupled with the incompressible profile implied by Eq. (<ref>), an inverse velocity transformation can recover the compressible velocity profile.
The total-stress-based compressible velocity transformation of <cit.> is used in this work since it is shown to be accurate in a wide range of flows, including boundary layers with strong heat transfer. This transformation uses the viscous scaling arguments of <cit.> and <cit.> in the near-wall viscous region and uses a modified version of the turbulence equilibrium arguments of <cit.> for the logarithmic region. The transformation is an algebraic function that relates the local mean strain rate of the compressible flow, dU/dy, to the non-dimensional incompressible mean strain-rate, S_t^+, at the same semi-local friction Reynolds number, Re_τ^*, according to the relation
S_t^+ = S_eq^+ / ( 1 + S_eq^+ - S_TL^+ ),
where S_eq^+=1/μ^+ dU^+/dy^* and S_TL^+=μ^+ dU^+/dy^+. The superscript (·)^* denotes non-dimensionalization by the local density ρ(y), local molecular dynamic viscosity μ(y), and the semi-local friction velocity u_sl=√(τ_w/ρ(y)) <cit.>. The semi-local friction Reynolds number is thus defined as Re_τ^* = ρ_e u_slδ / μ_e, where the subscript (·)_e denotes quantities evaluated at the boundary layer edge (throughout this work, δ denotes the channel half height or the boundary-layer thickness). Note that all variables of the form S_(·)^+ represent different local non-dimensionalizations of the compressible strain rate, which were designed in prior works with the target of equaling the strain rate implied by the incompressible law of the wall. For example, although S_TL^+ is equivalent to the viscous stress, it is also a non-dimensionalization of the mean strain rate in a compressible flow. S_TL^+ will exactly recover the incompressible strain rate of a flow with the equivalent viscous stress as long as the compressible flow also obeys μ^+=1. Additionally, note that the transformation in Eq. (<ref>) assumes a constant stress layer in the buffer region of the boundary layer, where there is a transition between the underlying viscous and equilibrium transformations. <cit.> verifies that the deployment of this assumption does not significantly affect the accuracy of the transformation in equilibrium flows, and <cit.> verifies the same for boundary layers with moderate pressure gradients.
The inverse velocity transformation is readily obtained by algebraically rearranging the transformation to find
dU^+/dy^* = ( 1/(μ^+ S^+_t) - 1/μ^+ + √(ρ^+)( 1 + (y^+/(2ρ^+)) dρ^+/dy^+ - (y^+/μ^+) dμ^+/dy^+ ) )^-1.
The incompressible mean strain rate S_t^+ is available algebraically from the constant-property version of Eq. (<ref>), i.e., ρ=ρ_w and μ=μ_w. The incompressible model constants κ and A^+ are determined using the aforementioned calibration but Re_τ^* is used in place of Re_τ since the former is invariant under the velocity transformation. Integrating Eq. (<ref>) with variable properties yields the targeted compressible velocity profile; the properties are functions of temperature, which will be discussed next.
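As an illustration of how the inverse transformation above can be evaluated, the short sketch below (Python; the function names and argument conventions are ours) returns the local compressible strain rate dU^+/dy^* from the constant-property strain rate implied by the mixing-length ODE and the local non-dimensional property profiles. It assumes that μ^+ = μ/μ_w, ρ^+ = ρ/ρ_w, and their wall-normal gradients in plus units are supplied by the caller.

import numpy as np

KAPPA, A_PLUS = 0.41, 17.0

def incompressible_strain(y_star):
    # Constant-property strain rate of the mixing-length ODE, evaluated at y*.
    mut_plus = KAPPA * y_star * (1.0 - np.exp(-y_star / A_PLUS)) ** 2
    return 1.0 / (1.0 + mut_plus)

def dUplus_dystar(y_plus, y_star, mu_plus, rho_plus, drho_dyplus, dmu_dyplus):
    # Our reading of the inverse transformation above; for constant properties
    # (mu+ = rho+ = 1, zero gradients) it reduces to the incompressible strain rate.
    s_t = incompressible_strain(y_star)
    bracket = 1.0 + 0.5 * y_plus * drho_dyplus / rho_plus - y_plus * dmu_dyplus / mu_plus
    return 1.0 / (1.0 / (mu_plus * s_t) - 1.0 / mu_plus + np.sqrt(rho_plus) * bracket)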
§.§ Algebraic temperature-velocity relation
In order to close the velocity equation (Eq. (<ref>)), the temperature profile must be determined. The classical model uses the constant turbulent Prandtl number assumption to develop a coupled ODE for temperature (Eq. (<ref>)). However, the constant Prandtl number assumption has been shown to be less accurate than invoking the Generalized Reynolds Analogy (GRA) <cit.>. Thus, the presently proposed wall model leverages the GRA instead.
The analogy between the conservation equations for momentum and energy has led to the derivation of several algebraic relations between temperature and velocity.
Walz's equation <cit.> (also known as the modified Crocco-Busemann relation <cit.>) leverages the analogy between the conservation equations for momentum and energy to arrive at an algebraic relation between mean temperature and velocity. This relation accounts for non-unity Pr effects via a recovery factor, which is taken as r = Pr^1/3. While this relation is accurate in high-speed adiabatic boundary layers, <cit.> observed that the accuracy degrades significantly in boundary layers with wall heat transfer and proposed a semi-empirical correction to the relation.
This was subsequently recast in terms of a generalized Reynolds analogy <cit.>, thereby introducing the Reynolds analogy factor, s, which they choose as s = 1.14 following convention. The resulting temperature-velocity relation is given as,
T = T_w + s (T_r-T_w) U/U_e(1 - U/U_e) + ( U/U_e)^2 ( T_e-T_w ),
where the subscript (·)_e denotes quantities at the boundary-layer edge, the recovery temperature T_r = T_e + r U_e^2/(2 C_p).
This relation has been validated across a wide range of channel flows, pipe flows, and boundary layers with and without heat transfer <cit.>. Specifically, this relation is derived by <cit.> through defining the generalized recovery temperature T_r_g = T + r_g U^2/(2 C_p). Then, it is assumed that T_r_g = T_w + U_s U/C_p,
where U_s is a constant velocity scale. Equivalently, the assumption can be reinterpreted that T can be approximately represented as a second order Taylor expansion in terms of powers of U, i.e.,
T = b_0 + b_1 U + b_2 U^2/2,
where the no-slip condition implies b_0 = T_w, b_1 = (dT/dU)|_w.
The algebraic relation of <cit.> can be recovered if b_2 is specified by evaluating the expression at the boundary-layer edge T_e=T|_U_e and b_1 is determined using the Reynolds analogy. However, in this work, we use the matching data (denoted with subscript (·)_m) T_m=T|_U_m to set b_2, such that the exact value at the matching location can be enforced.
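Written out, this choice of coefficients can be verified directly (a short derivation under the assumptions stated above): the Reynolds analogy fixes the wall slope as b_1 = (dT/dU)|_w = s ( T_r - T_w )/U_e, and enforcing T|_U_m = T_m in the quadratic expansion gives b_2/2 = ( T_m - T_w - b_1 U_m )/U_m^2. Substituting b_0 = T_w, b_1, and b_2 into T = b_0 + b_1 U + b_2 U^2/2 and regrouping the terms in powers of U/U_m recovers exactly the relation stated next.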
The final temperature-velocity relation is
T = T_w + s (T_r-T_w) U/U_e(1 - U/U_m) + ( U/U_m)^2 ( T_m-T_w ).
Note that one consequence of this relation is that the wall heat flux and wall shear stress are algebraically linked by the Reynolds analogy factor, where the heat flux is defined as q_w = s τ_w C_p (T_w-T_r)/U_e.
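For reference, the relation can be evaluated with a few lines of code; the following sketch (Python, with names of our own choosing) takes the wall temperature, the edge state, the matching data, and the gas properties as inputs, with s = 1.14 and r = Pr^1/3 as above. By construction it returns T_m when evaluated at U = U_m.

def temperature_from_velocity(U, T_w, U_e, T_e, U_m, T_m, C_p, Pr, s=1.14):
    # Algebraic temperature-velocity relation with the matching condition enforced.
    r = Pr ** (1.0 / 3.0)
    T_r = T_e + r * U_e ** 2 / (2.0 * C_p)   # recovery temperature
    return (T_w
            + s * (T_r - T_w) * (U / U_e) * (1.0 - U / U_m)
            + (U / U_m) ** 2 * (T_m - T_w))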
§.§ Implementation details
Like the classical model (Eqs. (<ref>–<ref>)), the present model requires a matching temperature, velocity, and density, an equation of state (the ideal gas law is used in this work and the thin-boundary-layer assumption implies the pressure is constant), and a viscosity law (either a power law or Sutherland's law depending on the relevant reference data). In addition, the present model requires as input the velocity and temperature at the boundary-layer edge (computed using the method of <cit.>) for deploying the algebraic temperature-velocity relation (Eq. (<ref>)) due to its dependence on the recovery temperature and edge velocity. To solve the nonlinear system, the following approach is used. The incompressible ODE (Eq. (<ref>)) with constant properties is integrated once analytically, rearranged for dU/dy and substituted into the inverse velocity transformation (Eq. (<ref>)) as S_t^+. This equation (initial value problem with an initial guess for the wall shear stress) is solved via the shooting method, where, at each integration step, a sub-iteration determines the velocity increment that is consistent with the temperature-velocity relation (Eq. (<ref>)) and the resulting density and viscosity at that location.
The implementation of the present model is available at the link provided in the data availability section at the end of this manuscript. This implementation was first developed by <cit.> to compute temperature and velocity profiles for estimating grid-point requirements in compressible flows, and this manuscript serves as the comprehensive documentation and the further development of the underlying inverse method for WMLES approach for the first time. Intermediate developments were presented in <cit.>, and initial results were reported in <cit.>. <cit.> used a similar procedure but with a data-driven velocity transformation <cit.>. <cit.> and <cit.> approximate the mean profiles of channel flows by considering two velocity transformations <cit.> and employing the Central Mean Temperature Scaling <cit.>.
§ A PRIORI RESULTS
The present and classical wall models are first evaluated via a priori analysis. That is, the matching data are taken from DNS at a wall-normal distance of y_m=0.3δ. The wall model estimates the velocity and temperature profiles, as well as the wall shear stress and wall heat flux. The predicted velocity and temperature profiles are shown in Figure <ref> and <ref> for four channel flows with various Mach and Reynolds number conditions, Figure <ref> for two pipe flows at different Reynolds numbers, and Figure <ref> for two boundary layers, one with a heated and one with a cooled wall boundary condition. The bulk Mach number is defined as M_b = U_b/√(γ R T_w), where γ is the ratio of specific heats and R is the gas constant. The bulk Reynolds number is defined as Re_b = ρ_b U_b δ / μ_w, where the bulk density is defined as ρ_b = ∬_A ρ dA/A and the bulk velocity is defined as U_b = ∬_A U dA/A, where A is the cross-sectional area of the domain. Reference DNS data are provided by <cit.>.
For all cases, the profiles predicted by the present model agree with the DNS profiles significantly better than the classical model. Note that the velocities are non-dimensionalized by the predicted friction velocity, so the obtained profiles do not necessarily pass through the matching data if the predicted wall stress is inaccurate.
Next, the model performance is evaluated with a wide range of DNS data from 48 different simulations.
The errors in the modeled wall stress and heat flux predictions are reported for each case with y_m=0.3δ. The relative error in the wall stress prediction ϵ_τ_w is defined as
ϵ_τ_w = τ_w,model - τ_w,DNS/τ_w,DNS× 100%.
The non-dimensional wall heat flux is defined as B_q = q_w/(C_p T_w ρ_w u_τ), and the relative error in the wall heat flux is defined as
ϵ_q_w = q_w,model - q_w,DNS/q_w,DNS× 100%.
ϵ_q_w is not reported for adiabatic boundary layer data because it is undefined, and both models predict negligible heat transfer for these data.
The data considered include the compressible channel flow simulations of <cit.>, the pipe flow simulations of <cit.>, the adiabatic supersonic and hypersonic boundary layers of <cit.>, and the diabatic supersonic and hypersonic boundary layers of <cit.>.
The cases have edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700.
Only the cases with Re_τ^* > 150 are analyzed because lower Reynolds numbers can exhibit strong Reynolds number effects <cit.> and are not the target of this study. The error measures are shown in Figure <ref>. The present model generates significantly less modeling error than the classical model, with the greatest error reduction when the non-dimensional heat transfer is the highest.
To distinguish the effects of Reynolds number and compressibility, we explore the effect of using Reynolds-number-dependent coefficients for the underlying incompressible Law of the Wall. Specifically, rather than letting the von Kármán constant κ and the damping coefficient A^+ be fixed values of 0.41 and 17, respectively, we recalibrate these values using incompressible reference data at various Reynolds numbers. We employ the DNS data from five incompressible turbulent channel flows <cit.> with friction Reynolds numbers Re_τ = u_τδ / ν_w = {182, 543, 1000, 1990, 5190}, and fit the least-squares optimal values of κ = {0.400, 0.408, 0.400, 0.391, 0.391} and A^+ = {18.2, 17.4, 17.0, 16.5, 16.5}. Linear interpolation and constant extrapolation of the optimal values are used to define κ and A^+ for all Reynolds numbers. The inverse velocity transformation uses the semi-local wall-normal coordinate y^*, so the incompressible data should be interpreted as a function of Re_τ^* rather than Re_τ. A priori analysis is performed as before using compressible DNS data, but with the optimal coefficients selected according to the Re_τ^* observed in the compressible DNS. In Figure <ref>(a-b), for the case of a turbulent channel flow with Re_τ^* = 190 and M_b = 1.7, there is a modest improvement from using the Reynolds-number-dependent coefficients for the incompressible model. This suggests that at low Reynolds numbers, the deviation of DNS data for the incompressible constant-property velocity profile from the nominal law of the wall is on the same order as the deviation of the constant coefficient model and compressible DNS velocity profile. However, there is not a complete collapse of the model with Reynolds-number-dependent coefficients with the compressible DNS. This is likely attributed to the documented error in the compressible velocity transformation at Re_τ^* ≲ 200 <cit.>. In Figure <ref>(c-d), the case of a turbulent channel flow with Re_τ^* = 590 and M_b = 1.7 is considered. The Reynolds number is high enough that the optimal and constant coefficients are similar; thus, the performance of the present model with either set of coefficients is similar. Overall, there is no significant sensitivity to tuning the coefficients, so, for simplicity, we use the constant coefficients of κ=0.41 and A^+=17 for the remainder of this manuscript.
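The interpolation of the recalibrated coefficients is straightforward; a possible implementation (Python, reproducing only the tabulated values quoted above, with function names of our own choosing) is:

import numpy as np

RE_TAU     = np.array([182.0, 543.0, 1000.0, 1990.0, 5190.0])
KAPPA_OPT  = np.array([0.400, 0.408, 0.400, 0.391, 0.391])
A_PLUS_OPT = np.array([18.2, 17.4, 17.0, 16.5, 16.5])

def calibrated_coefficients(re_tau_star):
    # np.interp clamps outside the tabulated range, i.e., constant extrapolation;
    # the lookup is indexed by Re_tau^* because it is invariant under the transformation.
    return (np.interp(re_tau_star, RE_TAU, KAPPA_OPT),
            np.interp(re_tau_star, RE_TAU, A_PLUS_OPT))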
Two more recently developed compressible wall models are considered. The first is developed by <cit.>; they show that the damping function in the classical model (Eq. (<ref>)) is consistent with the velocity transformation of <cit.>, which has been shown to be less accurate in channel flows than the velocity transformation of <cit.>. Therefore, <cit.> rewrite the damping function in terms of y^* and show that this makes the model consistent with the Trettel-Larsson transformation. The second additional model considered is proposed by <cit.>, which also uses the semi-local damping function and further replaces the constant turbulent Prandtl number assumption of the classical model with an explicit function of y^*. In Figure <ref>, these two additional wall models are compared with the classical and present wall models. Figure <ref>(a-d) indicate that all models are performing well in the channel flows except for the classical model. This behavior is explained by the behavior of the underlying velocity transformations. The models of <cit.> and <cit.> use the Trettel-Larsson transformation and the present model uses the total-stress-based transformation <cit.>. Both of these transformations are well established to outperform the van Driest transformation (used by the classical model) in channel flows. In Figures <ref>(e-f) and <ref>(g-h), the models are applied to boundary layers with cooled and heated walls, respectively. For both cases the classical model is the least accurate likely due to the inaccuracy of the van Driest transformation for boundary layers with strong heat transfer <cit.>, as the velocity transformation is the only difference between the classical model and that of <cit.>. Also for both cases, the models that use semi-local damping <cit.> perform almost identically, suggesting limited sensitivity in these flows to the change in turbulent Prandtl number model proposed by <cit.>. For the heated boundary layer, the present model slightly improves the prediction of the temperature peak and the log slope of the velocity compared to the semi-local damping models. For the cooled boundary layer, there is a more substantial improvement from the present model for the log slope of the velocity but the temperature profiles are only slightly improved. These improvements of the present model over the semi-local damping models are consistent with the improvements of the total-stress-based transformation over the Trettel-Larsson transformation for boundary layers with strong heat transfer.
§ A POSTERIORI WMLES RESULTS
In this section, several WMLES simulations are conducted using charLES, a high-fidelity compressible finite-volume code <cit.>. The numerical method consists of a low-dissipation, approximately entropy-preserving scheme, which utilizes artificial bulk viscosity to capture the solution discontinuities. Additional details about the solver and a summary of validation campaigns are available in <cit.>.
The WMLESs conducted herein are compressible turbulent channel flows driven with uniform volumetric momentum and energy source terms to achieve the same bulk Mach number M_b and bulk Reynolds number Re_b conditions of the DNS simulations of <cit.> as summarized in table <ref>.
The cases are run on a domain of size (π× 2 ×π√(3)/4)δ with periodic boundary conditions in the streamwise (first) and spanwise (third) dimensions. The mean profiles and fluxes were insensitive to doubling of the streamwise and spanwise domain sizes. Consistent with the DNS simulations, the viscosity is described by μ/μ_ref=(T/T_ref)^0.75 and Pr = 0.7. All cases are initialized from a uniform solution with the target bulk Mach number and Reynolds number, and zero velocity in the wall-normal and spanwise directions. The simulations are allowed to transition from laminar to turbulent states naturally and are run for ∼500 eddy turnover times δ/u_τ. To challenge the wall model and isolate the effect of near-wall numerical errors <cit.>, the wall model matching location is placed at y_m=0.3δ and a coarse grid of 12 points per half channel height is used for all simulations unless otherwise indicated. The computational cost of the present model is similar to that of the classical model. The present model varies between being 7% faster and 32% slower depending on the Reynolds number, matching location, and Mach number. No effort was made to optimize the performance of the present model, so these numbers are just meant to indicate that the approximate cost of the model is similar in the cases tested. In general, modest differences in the cost of a wall model can be efficiently amortized over parallel processors via load balancing that assigns fewer control volumes to processors that contain more boundary faces, but this is not used in the present study.
The velocity and temperature profiles from WMLES are shown in Figure <ref> and <ref> for turbulent channel flows at four combinations of Reynolds and Mach numbers. In all cases, the present model is significantly more accurate than the classical model for the prediction of velocity and temperature with respect to the reference DNS solutions. For these cases and the others listed in table <ref>, the errors in the predictions of the wall shear stress and the wall heat flux are shown in Figure <ref>.
The wall model is based on the inversion of the total-stress-based velocity transformation <cit.> and that was observed to have the greatest improvement over classical approaches in cases with strong heat transfer. This explains why the errors from the classical wall model grow significantly with the strong heat transfer, but the errors from the present model are rather small and do not vary with heat flux.
The primary quantities of interest for WMLES are the predictions of the mean profiles and fluxes. The fluctuating parts of LES solutions are not expected to exactly agree with DNS results unless the WMLES is conducted with DNS-like resolution, which is impractical. Nevertheless, the effect of wall models on the fluctuating part of the LES solution is presented for comparison between the present and classical models. Figures <ref> and <ref> include profiles of the LES resolved turbulent Mach number M_t=u”/√(γ R T̃) and the LES temperature fluctuations T”, where (·)” denotes the Favre fluctuation, i.e., the deviation of a quantity from its Favre average. There is an improvement in the predictions of the fluctuating statistics by the present model compared to those by the classical model. An accurate prediction of second-order statistics is unlikely without an accurate prediction of mean statistics. Thus, the improved second-order statistics of the present model are likely a consequence of its improved mean statistics compared to those of the classical model (see Figure <ref> and <ref>). However, correct prediction of the mean field is not sufficient for the accurate prediction of second-order statistics in LES. In fact, the fluctuations in the LES results are generally over-predicted compared to the DNS data. The over-prediction may be due in part to the wall-blocking effect of the stress-based wall model <cit.>. Given the coarse resolution of twelve points across the channel half height, numerical errors and subgrid-scale model errors are certainly contributing. The subgrid-scale model has not been adapted for compressibility other than by accounting for variable properties <cit.>. The turbulent Mach numbers are on the order of 0.3, which is sufficiently high that modeling for dilatational dissipation is a promising path to further improvements of the fluctuating statistics in the volume of the LES domain. Such research may be pursued independently of the current study focusing on wall modeling and the prediction of mean profiles and fluxes.
§.§ Sensitivity to numerical resolution and the matching location
In WMLES, the wall model exchanges data with the outer LES solver at the matching location. The modeling error in the inner wall modeled equations may grow as the matching distance increases, which motivates placing the matching location near the wall. On the other hand, the matching location should be far enough from the wall in terms of the LES mesh resolution so that the LES solver can resolve the large scales of turbulence at the height of the matching location. Otherwise, numerical errors may contaminate the matching data that is provided as input to the wall model. <cit.> demonstrate this trade-off and how LES numerical errors contaminate the wall-modeled solution if the matching distance is on the order of the wall-normal grid resolution. The optimal matching distance will depend on the accuracy of a specific LES solver, but a typical choice is y_m ≥ 3Δ <cit.>, where Δ is the wall-normal grid spacing near the wall.
To evaluate the convergence and sensitivity of the presently proposed wall model, two types of mesh convergence studies are considered. In the first study, the matching location is held fixed at y_m=0.3δ, which corresponds in semi-local units to y_m^*=186 and y_m^*=237 for the present model and classical model cases across all resolutions.
For the case of M_b=3.0 and Re_τ=1800, the numerical resolution of the WMLES is varied. In Figure <ref>, the WMLES solutions are shown for three LES resolutions with 9, 18, and 36 grid points across the channel half-height. The uniform hexagonally close-packed mesh topology with global refinement is employed, resulting in three meshes with 2.0×10^4, 1.6× 10^5, and 1.3× 10^6 control volumes, respectively (note that the reference DNS uses as many as 6.4× 10^8 control volumes).
In this study, the LES numerical errors at the matching location are expected to diminish as the resolution is refined, but modeling errors from using the wall model over the domain y∈[0,0.3δ] are not expected to change with resolution. For this reason, the classical model shows a large error in the log intercept of the velocity profile that is persistent with refinement and consistent with a priori analysis in Figure <ref>(a). For the finest resolution with the present model, the grid point nearest to the wall exhibits an error that is persistent with refinement, which is consistent with the observations of <cit.> and does not affect the accuracy of the simulation since the inner solution is applicable for y<y_m. For both the present and classical models, the results are only weakly dependent on the grid resolution. This suggests that the leading source of error for the simulations with the classical wall model is in fact the wall model rather than the numerical or subgrid-scale modeling errors, even on the coarsest simulation with 9 grid points per channel half height.
In the second grid convergence study, the models are tested in the way that WMLES is typically used in practice. That is, the matching distance is moved toward the wall as the grid is refined. In this study, two channel flows with different Reynolds number conditions are considered for three LES resolutions with 12, 24, and 48 grid points across the channel half height. The matching locations are y_m= 0.3δ, 0.15δ, and 0.075δ, respectively, which corresponds to y_m = 4 Δ for all cases, thus the effect of near-wall LES numerical errors is expected to be minor <cit.>. In Figure <ref>, the convergence study is performed for M_b=3.0 and Re_τ^*=590, and a lower Reynolds number case of M_b=3.0 and Re_τ^*=200 is shown in Figure <ref>. In both cases, the accuracy of the present model is relatively high and insensitive to mesh resolution compared to that of the classical model. For the higher Reynolds number test, the matching locations in semi-local units are always in the logarithmic region of the boundary layer. Therefore, the WMLES results are not sensitive to refinement over this range of resolutions. However, for the lower Reynolds number case, the most refined meshes lead to semi-local matching locations y_m^* in the buffer region. For the classical model, because the relative error of the modeled U^+ versus the DNS U^+ is maximal in the region of the buffer layer and early log layer (compare to similar a priori results in Figure <ref>), the convergence behavior for the classical model is complex in this regime. In other words, as the mesh is refined, although the LES numerical errors are diminishing, the wall modeling errors for the classical model may increase or decrease depending on the matching location since the relative modeling error does not monotonically reduce with wall-normal distance. On the other hand, the outer solution of the present model is relatively accurate irrespective of the matching location because the inner wall-modeled solution agrees well with the DNS solution throughout the viscous sublayer, buffer layer, and log layer (which is consistent with similar a priori results in Figure <ref>).
§ CONCLUSION
In this work, a wall model is proposed for turbulent wall-bounded flows with heat transfer. The model uses an established ODE description of incompressible flow, transforms that equation to account for compressibility effects, and is closed with an algebraic temperature-velocity relation. The resulting model can accurately estimate the near-wall profiles of temperature and velocity when the matching location is in the inner layer. This model is suitable for deployment as a boundary condition for an outer LES or RANS solver, an inflow generation scheme, or the base flow for perturbation methods, possibly with the incompressible model augmented with a wake profile for the outer layer of the boundary layer. The proposed method can only be as accurate as the models on which it is based, namely, the forward velocity transformation and the algebraic temperature-velocity relation. While these models have been widely validated in channel and pipe flows and boundary layers with moderate pressure gradients, further studies in complex flows are warranted, e.g., the developing boundary layers on a blunt body behind a curved shock.
The model is first tested a priori to verify that it can recover the boundary layer velocity and temperature data when provided with matching data from DNS. Numerical results reveal that the model accurately recovers the targeted profiles well, and the predicted wall stress and heat flux are within a few percent of their expected values for a wide database of DNS data for high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700). The model is also tested a posteriori as a boundary condition for WMLES in turbulent channel flows with bulk Mach numbers M_b=0.7–4.0 and Re_τ=320–1800. Especially in flows with strong heat transfer, the proposed model is substantially more accurate than the classical ODE-based near-wall model. The superior performance of the present model is due to two key differences with respect to the classical model: 1) the constant turbulent Prandtl number assumption is replaced with a more accurate algebraic temperature-velocity relation and 2) the van Driest velocity transformation is replaced with the total-shear-stress velocity transformation <cit.>.
§ ACKNOWLEDGMENTS
Kevin Griffin acknowledges support from the National Defense Science and Engineering Graduate Fellowship, the Stanford Graduate Fellowship, the Stanford Lieberman Fellowship, and the Exascale Computing Project (Grant17-SC-20SC), a collaborative effort of two US Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation’s exascale computing imperative. Lin Fu acknowledges funding from the Research Grants Council (RGC) of the Government of Hong Kong Special Administrative Region (HKSAR) with RGC/ECS Project (No. 26200222) and from the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011779). Parviz Moin acknowledges support from NASA grant (No. NNX15AU93A).
We wish to gratefully acknowledge helpful comments from Sanjeeb T. Bose.
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes.
[Declaration of interests]The authors declare that they do not have any financial or non-financial conflict of interests.
[Data availability statement]The data that support the findings of this study are available from the corresponding authors upon reasonable request. Matlab code implementing the proposed model will be available in the following public repository after the manuscript is accepted for publication:
https://github.com/kevingriffin1/comp_wm
[Author ORCID]
Kevin Griffin: https://orcid.org/0000-0002-0866-6224;
Lin Fu: https://orcid.org/0000-0001-8979-8415
|
http://arxiv.org/abs/2307.05218v1 | 20230711124342 | Probabilistic Operational Correspondence (Technical Report) | ["Anna Schmitt", "Kirstin Peters"] | cs.LO | ["cs.LO"] |
Probabilistic Operational Correspondence (Technical Report)
Anna Schmitt (TU Darmstadt, Germany; ORCID 0000-0001-6675-2879)
Kirstin Peters (Augsburg University, Germany; ORCID 0000-0002-4281-0074)
August 12, 2023
Encodings are the main way to compare process calculi.
By applying quality criteria to encodings we analyse their quality and rule out trivial or meaningless encodings.
Thereby, operational correspondence is one of the most common and most important quality criteria.
It ensures that processes and their translations have the same abstract behaviour.
We analyse probabilistic versions of operational correspondence to enable such a verification for probabilistic systems.
Concretely, we present three versions of probabilistic operational correspondence: weak, middle, and strong.
We show the relevance of the weaker version using an encoding from a sublanguage of probabilistic CCS into the probabilistic π-calculus.
Moreover, we map this version of probabilistic operational correspondence onto a probabilistic behavioural relation that directly relates source and target terms. Then we can analyse the quality of the criterion by analysing the relation it induces between a source term and its translation.
For the second version of probabilistic operational correspondence we proceed in the opposite direction. We start with a standard simulation relation for probabilistic systems and map it onto a probabilistic operational correspondence criterion.
This technical report contains the proofs to the lemmata and theorems of <cit.> as well as some additional material.
§ PROCESS CALCULI AND ENCODINGS
Let Δ ⟼ Θ whenever
* Δ = ∑_i ∈ I p_i P_i, where I is a finite index set and ∑_i ∈ I p_i = 1,
* for each i ∈ I there is a distribution Θ_i such that P_i ⟼ Θ_i or Θ_i = P_i,
* for some i ∈ I we have P_i ⟼ Θ_i, and
* Θ = ∑_i ∈ I p_i ·Θ_i.
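To illustrate this definition, the following sketch (Python; distributions are represented as dictionaries from process labels to probabilities, and step(P) is assumed to return the list of one-step successor distributions of P) enumerates the lifted steps of a distribution: every component either performs one of its steps or stays put, at least one component moves, and the results are recombined with the original weights.

from itertools import product

def lift_step(delta, step):
    components = list(delta.items())
    options = []
    for P, _ in components:
        stay = {P: 1.0}
        options.append([(True, theta) for theta in step(P)] + [(False, stay)])
    successors = []
    for choice in product(*options):
        if not any(moved for moved, _ in choice):
            continue                      # at least one component must move
        theta = {}
        for (P, p), (_, theta_i) in zip(components, choice):
            for Q, q in theta_i.items():
                theta[Q] = theta.get(Q, 0.0) + p * q   # Theta = sum_i p_i * Theta_i
        successors.append(theta)
    return successors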
Let ℛ ⊆ 𝒫^2 be a relation on processes and let Δ, Θ ∈ 𝒟( 𝒫 ).
Then ( Δ, Θ ) is in the lifting of ℛ to distributions if
* Δ = ∑_i ∈ I p_i P_i, where I is a finite index set and ∑_i ∈ I p_i = 1,
* for each i ∈ I there is a process Q_i such that ( P_i, Q_i ) ∈ ℛ, and
* Θ = ∑_i ∈ I p_i Q_i.
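A corresponding sketch for this definition constructs the distributions related to Δ by the lifting of a finite relation R, given as a set of pairs of process labels. For simplicity each component of Δ keeps a single partner; the definition additionally allows splitting the weight of a component over several partners, which the sketch does not enumerate.

from itertools import product

def lifted_partners(delta, R):
    components = list(delta.items())
    choices = [[Q for (P1, Q) in R if P1 == P] for P, _ in components]
    if any(not c for c in choices):
        return []                 # some P_i has no R-partner: nothing is related
    partners = []
    for pick in product(*choices):
        theta = {}
        for (P, p), Q in zip(components, pick):
            theta[Q] = theta.get(Q, 0.0) + p   # keep the weights of Delta
        partners.append(theta)
    return partners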
For our proofs it is important that Definition <ref> translates preorders into preorders.
Accordingly, we prove that it preserves reflexivity.
If ℛ is reflexive, then so is its lifting to distributions.
Assume a reflexive relation ℛ and consider a probability distribution Δ with Δ = ∑_i ∈ I p_i P_i for some finite index set I with ∑_i ∈ I p_i = 1.
We have to prove that ( Δ, Δ ) is in the lifting of ℛ, i.e., that for each i ∈ I there is some Q_i such that ( P_i, Q_i ) ∈ ℛ and Δ = ∑_i ∈ I p_i Q_i.
Since ℛ is reflexive, we have ( P_i, P_i ) ∈ ℛ, so it suffices to choose Q_i = P_i to conclude the proof.
The preservation of transitivity was already given in <cit.>.
If ℛ is transitive, then so is its lifting to distributions.
We inherit the criteria (except operational correspondence) from <cit.>:
Compositionality: For every operator 𝐨𝐩 with arity n of the source language and for every subset of names N, there exists a context 𝒞_N^𝐨𝐩( [·]_1, …, [·]_n ) such that, for all S_1, …, S_n with fn(S_1) ∪ … ∪ fn(S_n) = N, it holds that ⟦ 𝐨𝐩( S_1, …, S_n ) ⟧ = 𝒞_N^𝐨𝐩( ⟦S_1⟧, …, ⟦S_n⟧ ).
Name Invariance w.r.t. a Relation ℛ ⊆ 𝒯^2: For every S ∈ 𝒮 and every substitution σ, it holds that ⟦Sσ⟧ ≡_α ⟦S⟧σ' if σ is injective and ( ⟦Sσ⟧, ⟦S⟧σ' ) ∈ ℛ otherwise, where σ' is such that σ'( φ(a) ) = φ( σ(a) ) for all a ∈ 𝒩 and φ is the renaming policy of ⟦·⟧.
Divergence Reflection: For every S, ⟦S⟧ ⟼^ω implies S ⟼^ω.
Success Sensitiveness: For every S, S ⇓✓ iff ⟦S⟧ ⇓✓.
The formulation of compositionality is rather strict, as it rules out practically relevant translations.
Note that the best known encoding from the asynchronous π-calculus into the Join Calculus in <cit.> is not compositional, but consists of an inner, compositional encoding surrounded by a fixed context—the implementation of so-called firewalls—that is parameterised on the free names of the source term. In order to capture this and similar encodings we relax the definition of compositionality.
Weak Compositionality: The encoding is either compositional or consists of an inner, compositional encoding surrounded by a fixed context that can be parameterised on the free names of the source term or information that are not part of the source term.
An encoding ⟦·⟧ is strongly operationally corresponding w.r.t. ℛ ⊆ 𝒯^2 if it is:
Strongly Complete: ∀ S, S'. S ⟼ S' implies ( ∃ T. ⟦S⟧ ⟼ T ∧ ( ⟦S'⟧, T ) ∈ ℛ )
Strongly Sound: ∀ S, T. ⟦S⟧ ⟼ T implies ( ∃ S'. S ⟼ S' ∧ ( ⟦S'⟧, T ) ∈ ℛ )
⟦·⟧ is operationally corresponding w.r.t. ℛ ⊆ 𝒯^2 if it is:
Complete: ∀ S, S'. S ⟹ S' implies ( ∃ T. ⟦S⟧ ⟹ T ∧ ( ⟦S'⟧, T ) ∈ ℛ )
Sound: ∀ S, T. ⟦S⟧ ⟹ T implies ( ∃ S'. S ⟹ S' ∧ ( ⟦S'⟧, T ) ∈ ℛ )
⟦·⟧ is weakly operationally corresponding w.r.t. ℛ ⊆ 𝒯^2 if it is:
Complete: ∀ S, S'. S ⟹ S' implies ( ∃ T. ⟦S⟧ ⟹ T ∧ ( ⟦S'⟧, T ) ∈ ℛ )
Weakly Sound: ∀ S, T. ⟦S⟧ ⟹ T implies ( ∃ S', T'. S ⟹ S' ∧ T ⟹ T' ∧ ( ⟦S'⟧, T' ) ∈ ℛ )
§.§ Probabilistic CCS
Probabilistic CCS is introduced in <cit.> as a probabilistic extension of CCS <cit.> to study probabilistic barbed congruence.
We omit the operator for non-deterministic choice from <cit.>; not because it is non-deterministic but because the summands of this choice are not necessarily guarded, whereas our target language has only guarded choice.
We will also adapt the semantics of recursion, to ensure the unfolding of recursion requires a step as it is the case in our target language.
We denote the resulting calculus as CCS_p.
Its syntax is given in the following Definition:
The terms of CCS_p, collected in the set 𝒞, are given by:
P ::= u.⊕_i ∈ I p_i P_i ∣ P_1 | P_2 ∣ P∖A ∣ P[ f ] ∣ C⟨x̃⟩
where A ⊆ 𝒩 and f: 𝒩 → 𝒩 is a renaming function.
All names in A are bound in P by P∖A and all names in x̃ are bound in P by C def=( x̃ )P.
Names that are not bound are free.
A renaming function can only affect the free names of a term.
Let fn(P) denote the set of free names in P, where fn( Q[ f ] ) = { f(n) | n ∈ fn(Q) } for all Q ∈ 𝒞.
Following <cit.> we extend some operations on processes to distributions, because these notions help us to define the semantics of the respective languages.
Let Δ_1, Δ_2 be distributions on processes.
We define the distributions Δ_1 | Δ_2 (for parallel composition), Δ_1∖A and xΔ_1 (for restriction), and Δ_1[ f ] (for a renaming function f) as:
( Δ_1 | Δ_2 )(P) = Δ_1( P_1 ) · Δ_2( P_2 ) if P = P_1 | P_2, and 0 otherwise
( Δ_1∖A )(P) = Δ_1( P' ) if P = P'∖A, and 0 otherwise
( xΔ_1 )(P) = Δ_1( P' ) if P = xP', and 0 otherwise
( Δ_1[ f ] )(P) = Δ_1( P' ) if P = P'[ f ], and 0 otherwise
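As a small sketch of the first of these operations (Python; processes are again plain labels and the string concatenation merely stands in for the syntactic parallel composition of the two processes), the parallel composition of distributions multiplies the probabilities of all pairs:

def par(delta1, delta2):
    out = {}
    for P1, p1 in delta1.items():
        for P2, p2 in delta2.items():
            key = "(" + P1 + " | " + P2 + ")"
            out[key] = out.get(key, 0.0) + p1 * p2
    return out

# Example: par({"P": 0.125, "Q": 0.875}, {"R": 0.6, "S": 0.4}) yields the four
# parallel compositions with probabilities 0.075, 0.05, 0.525, and 0.35.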
The semantics of CCS_p is given by the rules in Figure <ref>, where we start with the labelled semantics of <cit.>, change the rule for recursion, and add a rule to obtain a reduction semantics.
The rule for action prefixes reduces a probabilistic choice to a probability distribution over its branches after performing action u.
The rule for recursion, in contrast to <cit.>, makes the unfolding of recursion a separate τ-step.
The remaining rules are standard rules adapted to probability distributions, where the symmetric versions of the corresponding rules are omitted.
§.§ Probabilistic Pi-Calculus
The probabilistic π-calculus is introduced in <cit.> as a probabilistic version of the πI-calculus <cit.>, where output is endowed with probabilities.
We assume that names in a vector y are pairwise distinct.
The names ỹ_i are bound in P_i by xi ∈ IiỹP_i and xi ∈ IiỹP_i; x is bound in P by xP; and the names ỹ are bound in P by xỹP.
Names that are not bound are free.
Let P denote the set of free names in P.
Structural congruence ≡ is defined, similarly to <cit.>, as the smallest congruence containing α-equivalence ≡_α that is closed under the following rules:
[ P | 0 ≡ P    P | Q ≡ Q | P    P | ( Q | R ) ≡ ( P | Q ) | R    x0 ≡ 0    xyP ≡ yxP;    x( P | Q ) ≡ P | xQ if x ∉ fn(P) ]
We lift structural congruence to distributions, Δ_1 ≡Δ_2 if there is a finite index set I such that Δ_1 = ∑_i ∈ I p_i P_i, Δ_2 = ∑_i ∈ I p_i Q_i, and P_i ≡ Q_i for all i ∈ I.
We obtain the same by applying Definition <ref> on ≡ but do not want to use the symbol ≡.
The semantics of the probabilistic π-calculus is given by the rules in Figure <ref>, where we start with the labelled semantics of <cit.> and add a rule to obtain a reduction semantics.
For the labelled part of the semantics we use Labels of the following form: xỹi, xỹi, xỹ, and xỹ.
Rule implements the behaviour of a probabilistic selected output, which behaves like one of the processes P_i, after sending the corresponding output with probability p_i.
On the contrary, each input within the branching input is performed with probability 1, and the process behaves like P_j after receiving ỹ_j for some j ∈ I as defined in Rule .
Rule describes the interaction of input and output, where the passed names are bound.
Here the partial operation on labels is formally defined by: xỹixỹi = xỹxỹ = τ and undefined in all other cases.
The remaining rules are standard π-calculus rules extended with probabilities, where the symmetric version of is omitted.
In <cit.> a type system is introduced to ensure some interesting properties of well-typed terms such as linearity.
Here we are only interested in the untyped version of and, thus, omit the type system.
§ WEAK PROBABILISTIC OPERATIONAL CORRESPONDENCE FOR A REASONABLE ENCODING
The encoding of S ∈C with the process definitions C_1 def=( x̃_1 ).S_1, …, C_n def=( x̃_n ).S_n consists of the outer encoding , where S is
C_1, …, C_n( S|C_1x̃_1S_1|…|C_nx̃_nS_n)
and the inner encoding is given in Figure <ref>.
In Definition <ref> the encoding from CCS_p into the probabilistic π-calculus is presented.
In the following we prove that this encoding satisfies the criteria given in Section <ref> (except for a classical version of operational correspondence) and the new criterion weak probabilistic operational correspondence (weak POC).
The encoding of a probabilistic choice is split into three cases: the first three cases of Definition <ref>.
For input guards a single input on x is used, to enable the communication with a potential corresponding output.
In the following such a communication step on a source term name is denoted as -step.
In the continuation of the input on x a probabilistic selecting output on the reserved name composed in parallel with a matching input is used to encode the probabilities.
This step on the reserved channel name is denoted as -step.
The sequence of these two communication steps on x and emulates the behaviour of a single communication step in the source.
By restricting the scope of the reserved name , interactions with other operators communicating on between two translations of inputs are prevented.
Further, as the renaming policy ensures that does not appear in P_i, conflicts between the reserved name and source term names are avoided.
An -step is a communication step on a translated source term name.
An -step is a communication step on an instance of the reserved name .
The encoding of an output-guarded probabilistic choice is straight forward, as it is translated using the probabilistic selecting output.
For the guard τ, an output-guarded probabilistic choice in parallel to a single input on the reserved name is used.
Because of the restriction, interactions with other translations of τ-guarded operators are prevented.
A communication step of this kind is denoted as -step.
This step does not only introduce the probabilities of a τ-guarded source term choice in the translation but also allows the translated term to do a step whenever the source term does one and compensates the missing τ in the syntax of the target language.
An -step is a communication step on an instance of the reserved name .
The application of a renaming function is encoded by a substitution.
A call C⟨ỹ⟩ is encoded by an output, where the corresponding process definitions are translated into replicated inputs and placed in parallel by the outer encoding.
The remaining translations are homomorphic.
An -step is a communication step that reduces a replicated input.
[-Steps]
Consider the source term S = τ.(1/8P ⊕ 7/8Q) | τ.(3/5R ⊕ 2/5S) of CCS_p without process definitions.
S can do the following sequence of steps:
S ⟼ Δ_S, 1 = 1/8( P | τ.(3/5R ⊕ 2/5S)), 7/8( Q | τ.(3/5R ⊕ 2/5S))
⟼ Δ_S, 2 = 3/40( P | R ), 2/40( P | S ), 21/40( Q | R ), 14/40( Q | S )
By Definition <ref> and since S has no process definitions, S = S and:
S = (1/8_1P⊕7/8_2Q)|
(3/5_1R⊕2/5_2S).
Thereby, the restriction of the reserved name prevents a communication between the left and right subterm of the outermost parallel operator.
By Figure <ref>, S can emulate the steps of S by SΔ_T, 1Δ_T, 2, where:
Δ_T, 1 = {[ 1/8P(3/5_1R⊕2/5_2S); 7/8Q(3/5_1R⊕2/5_2S)} ]
Δ_T, 2 = {[ 3/40( P||R|), 2/40( P||S|),; 21/40( Q||R|), 14/40( Q||S|)} ]
The distributions Δ_T, 1 and Δ_T, 2 are both structurally congruent to the encodings of the corresponding source distributions, ⟦Δ_S, 1⟧ ≡ Δ_T, 1 and ⟦Δ_S, 2⟧ ≡ Δ_T, 2.
As both steps in SΔ_T, 1Δ_T, 2 reduce an instance of —though of course different instances of are reduced—they are both -steps.
An example of a -step followed by a -step is presented in <cit.>.
To obtain the probability distribution that results from this sequence of two steps on the target, the probabilities of the -step are multiplied with the probabilities of the corresponding -step.
The resulting probabilities match to the probabilities of the emulated source term step.
This multiplication stems from the rules of probability theory, where the probability of an event consisting of a sequence of several events has to be calculated by multiplying the probabilities of all the single events contained in that sequence.
Accordingly, if a single source term step is emulated by a sequence of target term steps, we compare the probabilities that result from multiplying the probabilities of the target term steps in the sequence with the probabilities of the source.
Fortunately, the multiplication of probabilities is already covered by Definition <ref> in order to define sequences of steps.
It remains to ensure that our version of operational correspondence compares the probabilities of the distribution that results from a single source term step with the final distribution in the emulating target term sequence (and not with the probabilities of a distribution in the middle of this sequence).
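The following small computation (Python, using exact fractions; the labels and helper name are ours) makes this explicit for the numbers of Example <ref>: composing the first step, with probabilities 1/8 and 7/8, with the second step, with probabilities 3/5 and 2/5, multiplies out to the probabilities 3/40, 2/40, 21/40, and 14/40 of the emulated source term step.

from fractions import Fraction as F

def after(delta, successor):
    # Theta = sum_i p_i * Theta_i, as in the definition of sequences of steps.
    theta = {}
    for P, p in delta.items():
        for Q, q in successor[P].items():
            theta[Q] = theta.get(Q, 0) + p * q
    return theta

delta_1 = {"P | rest": F(1, 8), "Q | rest": F(7, 8)}
second_step = {"P | rest": {"P | R": F(3, 5), "P | S": F(2, 5)},
               "Q | rest": {"Q | R": F(3, 5), "Q | S": F(2, 5)}}
delta_2 = after(delta_1, second_step)   # probabilities 3/40, 2/40, 21/40, 14/40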
We already ruled out strong operational correspondence as defined in Definition <ref>.
The other two versions differ in whether they allow for intermediate states.
Another look at the example in <cit.> tells us, that intermediate states make sense.
Δ_T is a finite probability distribution with the probabilities 3/4 and 1/4, but neither S nor S' have cases with these probabilities.
However, since there is exactly one source term step and exactly one sequence of target term steps, Δ_T does not mark a partial commitment, because there was nothing to decide.
Indeed the restriction on ensures that each -step enables exactly one -step and communication is the only case that requires two steps to emulate a single source term steps.
Hence, also by interleaving with other emulations, we do not obtain partial commitments.
Nonetheless Δ_T is an intermediate state; not intermediate in terms of decisions and commitments but intermediate in terms of probabilities.
In the second variant of operational correspondence in Definition <ref> without intermediate states, we would need to find a relation that relates Δ_T either to S or Δ_T'.
Such a relation is difficult or at least not intuitive, since it has to relate states with different probabilities.
It is easier to allow for intermediate states.
So, we want to build a weak version of operational correspondence (third case of Definition <ref>) with probabilities.
This leads to the version of probabilistic operational correspondence below denoted as weak probabilistic operational correspondence.
An encoding ⟦·⟧ : 𝒮 → 𝒯 is weakly probabilistic operationally corresponding (weak POC) w.r.t. ℛ ⊆ 𝒯^2 if it is:
Probabilistic Complete:
∀ S, Δ_S. S ⟹ Δ_S implies ( ∃ Δ_T. ⟦S⟧ ⟹ Δ_T ∧ ( ⟦Δ_S⟧, Δ_T ) ∈ ℛ )
Weakly Probabilistic Sound: ∀ S, Δ_T. ⟦S⟧ ⟹ Δ_T implies
( ∃ Δ_S, Δ_T'. S ⟹ Δ_S ∧ Δ_T ⟹ Δ_T' ∧ ( ⟦Δ_S⟧, Δ_T' ) ∈ ℛ )
Before we analyse the quality of our new version of operational correspondence in Section <ref>, we want to check whether it indeed exactly captures the way our encoding turns source term steps into probability distributions, i.e., we prove that our encoding satisfies weak POC.
Example <ref> and the example given in <cit.> illustrate that steps on τ-guarded choices and communication steps satisfy weak POC with respect to ≡.
They cover -steps, -steps, and -steps.
The only missing kind of steps are steps to unfold a recursion in the source and their emulation by -steps in the target.
[-Step]
Consider S = C⟨ỹ⟩ with C def=( ). in .
By Figure <ref>, S can perform only one step: S Δ_S = = 1.
By Definition <ref>, then:
S = C( S|C) and S = C( ỹ)
By Figure <ref>, S can perform exactly one maximal sequence of steps, namely the -step SΔ_T = 1 C( |C).
By Definition <ref>, Δ_S = Δ_T, because even though Δ_S does no longer contain any process constants its process definition is not consumed in the step S Δ_S.
We prove that the encoding in Definition <ref> satisfies the quality criteria of Section <ref> and weak POC.
We start with weak compositionality.
The encoding is weakly compositional.
Our encoding consists of the outer encoding and the inner encoding .
The outer encoding is a fixed context that is parametrised on the process definitions of the source term, that are not part of the source term itself.
The inner encoding is compositional, because the encoding function in Definition <ref> defines a context for each operator of the source language in that the translations of the subterms of the respective source term are used.
Hence, is weakly compositional.
Name invariance and the different versions of operational correspondence are defined modulo a relation on target terms that is success sensitive.
For our encoding we can choose this relation to be the structural congruence ≡ on the target language.
Structural congruence satisfies a stronger version of success sensitiveness with · instead of ·.
If T_1 ≡ T_2 then T_1⟷T_2.
Moreover, if Δ_1 ≡Δ_2 then Δ_1⟷Δ_2.
The proof is by induction on the definition of ≡.
All cases are immediate.
α-Equivalence ≡_α: In this case T_1 ≡_α T_2.
Since does not contain any names, we have T_1 iff T_2.
P≡ P: In this case T_1 = T_2 |.
Since does not contain , then T_1 iff T_2.
PQ≡QP: In this case T_1 = P | Q and T_2 = Q | P.
Since T_1 contains iff T_2 contains , then T_1 iff T_2.
PQR≡PQR: In this case T_1 = P |( Q | R ) and T_2 = ( P | Q )| R.
Since T_1 contains iff T_2 contains , then T_1 iff T_2.
x≡: In this case T_1 = x and T_2 =.
Since does not contain , then T_1 and T_2.
xyP≡yxP: Then T_1 = xyP and T_2 = yxP.
Since T_1 as well as T_2 contain iff P contains , then T_1 iff T_2.
xPQ≡PxQ: In this case T_1 = x( P | Q ) and T_2 = P |xQ, where x ∉P.
Since T_1 contains iff T_2 contains , then T_1 iff T_2.
Δ_1 ≡Δ_2: In this case there is a finite index set I such that Δ_1 = ∑_i ∈ I p_i P_i, Δ_2 = ∑_i ∈ I p_i Q_i, and P_i ≡ Q_i for all i ∈ I.
Because of P_i ≡ Q_i, we have P_i iff Q_i for all i ∈ I.
Then Δ_1 iff Δ_2.
The renaming policy of reserves the names , and keeps process constants C distinct from source term names, n = 1 and n∩( ), ( ), ( C ) | C is a process constant = ∅ for all n ∈.
Name invariance ensures that the encoding function treats all source term names in the same way.
Since the encoding function does not introduce any free names and because of the rigorous use of the renaming policy (although we omit it for readability in Definition <ref>), our encoding satisfies a stronger version of name invariance, where α-equivalence can be used regardless of whether σ is injective.
For every S ∈ and every substitution σ, it holds that Sσ≡_αSσ' and Sσ≡_αSσ', where σ' is such that σ(a) = σ'( a) for all a ∈.
Without loss of generality we assume that σ' behaves as identity for all names that are not in the range of .
The assumption can be replaced by applying alpha conversion such that the names introduced by the encoding function, the restricted names that are denoted by , , or C_i in Definition <ref>, are not affected by applying σ'.
The proof is by induction on the encoding function.
S: Assume without loss of generality that S has the process definitions C_1 def=( x̃_1 ).S_1, …, C_n def=( x̃_n ).S_n, where S_i = x̃_i for all 1 ≤ i ≤ n.
By the induction hypothesis, Sσ≡_αSσ'.
The renaming policy ensures that n∩C_1, …, C_n = ∅ for all n ∈.
Then we have:
Sσ
= C_1, …, C_n( Sσ|C_1x̃_1S_1|…|C_nx̃_nS_n)
≡_αC_1, …, C_n( Sσ' |C_1x̃_1S_1|…|C_nx̃_nS_n)
≡_αC_1, …, C_n( S|C_1x̃_1S_1|…|C_nx̃_nS_n)σ'
= Sσ'
Note that here x̃_i is short for the sequence that results from applying on all names in x̃_i.
Because of that, the renaming policy ensures that C_ix̃_iS_i = C_i and thus that σ' has no effect on these terms.
xi ∈ Ip_iP_i: In this case S = xi ∈ Ip_iP_i.
By the induction hypothesis, Pσ≡_αPσ'.
The renaming policy ensures that n∩ = ∅ for all n ∈.
Then we have:
Sσ = σ(x)i ∈ Ip_iP_iσ
= σ'(x)i ∈ IiP_iσ
≡_ασ'(x)i ∈ IiP_iσ'
≡_αxi ∈ IiP_iσ' = Sσ'
xi ∈ Ip_iP_i: In this case S = xi ∈ Ip_iP_i.
By the induction hypothesis, Pσ≡_αPσ'.
Then we have:
Sσ = σ(x)i ∈ Ip_iP_iσ
= σ'(x)i ∈ IiP_iσ
≡_ασ'(x)i ∈ IiP_iσ'
≡_α( xi ∈ IiP_i)σ' = Sσ'
τi ∈ Ip_iP_i: In this case S = τi ∈ Ip_iP_i.
By the induction hypothesis, Pσ≡_αPσ'.
The renaming policy ensures that n∩ = ∅ for all n ∈.
Then we have:
Sσ = τi ∈ Ip_iP_iσ≡_αi ∈ IiP_iσ
≡_αi ∈ IiP_iσ'
≡_αi ∈ IiP_iσ'
= Sσ'
PQ: In this case S = P | Q.
By the induction hypothesis, Pσ≡_αPσ' and Qσ≡_αQσ'.
Thereby, Sσ = Pσ| Qσ = Pσ|Qσ≡_αPσ' |Qσ' = ( P|Q)σ' = Sσ'.
PA: In this case S = PA.
Let γ be obtained from σ by removing all names in A from the domain of σ.
Moreover, let γ' be such that γ(a) = γ'( a) for all a ∈.
By the induction hypothesis, Pγ≡_αPγ'.
Then Sσ = PγA = APγ≡_αA( Pγ' ) = ( AP)σ' = Sσ'.
P[ f ]: In this case S = P[ f ].
Let f' be such that f'(σ(n)) = σ(f(n)) for all n ∈.
By the induction hypothesis, Pσ≡_αPσ'.
Then we have:
Sσ = ( Pσ)[ f' ] = Pσ𝗋𝖺𝗇_f'𝖽𝗈𝗆_f'
≡_α( Pσ' )𝗋𝖺𝗇_f'𝖽𝗈𝗆_f'
= ( P𝗋𝖺𝗇_f𝖽𝗈𝗆_f)σ' = Sσ'
C⟨ỹ⟩: In this case S = C⟨ỹ⟩.
Let z̃ be the result of applying σ on all names in ỹ.
Then Sσ = C⟨z̃⟩ = C( z̃) = C( ỹ)σ' = Sσ'.
: In this case S =.
Then Sσ = = = σ' = Sσ'.
We introduced in Definition <ref> a new variant of operational correspondence, namely weak probabilistic operational correspondence (weak POC), for our encoding.
For the completeness part, we have to prove that the encoding preserves the behaviour of source terms.
Therefore, we show how the translations emulate a source term step.
∀ S, Δ_S. S ⟼ Δ_S implies ( ∃ Δ_T. ⟦S⟧ ⟹ Δ_T ∧ ⟦Δ_S⟧ ≡ Δ_T )
We start with a single step S Δ_S and show that we need in this case a finite and non-empty sequence of steps SΔ_T in the target such that Δ_S≡Δ_T.
Let C_1, …, C_n be all process constants in S and C_1 def=( x̃_1 ).S_1, …, C_n def=( x̃_n ).S_n the corresponding process definitions.
Then S = C_1, …, C_n( S|C_1x̃_1S_1|…|C_nx̃_nS_n).Since the outer restriction on C_1, …, C_n and the subterms C_ix̃_iS_i are not altered by steps of the target term, we define the context
= C_1, …, C_n( |C_1x̃_1S_1|…|C_nx̃_nS_n)
to capture this part of target terms, S = S.
By Figure <ref>, S Δ_S was derived from the Rule , S τΔ_S.
To strengthen our induction hypothesis and to capture labels different from τ we prove
∀ S, Δ_S S uΔ_S implies ( ∃ T_i S[p_i]û T_i _i ∈ I∧Δ_S≡∑_i ∈ Ip_i T_i )
where [p_i]τ̂ is [p_i, 1]τ⋯[p_i, n]τ, [p_i]x̂ is [p_i, 1]τ⋯[p_i, j]x_i⟨⟩⋯[p_i, n]τ, [p_i]x̂ is [p_i, 1]τ⋯[p_i, j]x_i⟨⟩⋯[p_i, n]τ, and in all three cases p_i = p_i, 1·…· p_i, n.
We perform an induction over the derivation of S uΔ_S using a case split over the rules in Figure <ref>.
: We consider three subcases:
u = x: In this case S = xi ∈ Ip_iP_i as well as Δ_S = ∑_i ∈ I p_i P_i.
By Definition <ref>, then S = xi ∈ IiP_i and we have Δ_S = ∑_i ∈ I p_i P_i.
S can emulate the step S τΔ_S using the Rules , , , , and by:
S[1]x_i⟨⟩[p_i]τ T_i_i ∈ I where T_i = ( P_i|)
The renaming policy ensures that ∉P_i for all i ∈ I.
Then Δ_S≡∑_i ∈ I p_i T_i.
u = x: In this case S = xi ∈ Ip_iP_i and Δ_S = ∑_i ∈ I p_i P_i.
By Definition <ref>, then S = xi ∈ IiP_i and we have Δ_S = ∑_i ∈ I p_i P_i.
S can emulate the step S τΔ_S using the Rules , , and by
S[p_i]x_i⟨⟩ T_i_i ∈ I where T_i = P_i
and where the Rules and are necessary to do steps in the inner part of the encoding.
Then Δ_S = ∑_i ∈ I p_i T_i and thus Δ_S≡∑_i ∈ I p_i T_i.
u = τ: In this case S = τi ∈ Ip_iP_i as well as Δ_S = ∑_i ∈ I p_i P_i.
By Definition <ref>, then S = i ∈ IiP_i and we have Δ_S = ∑_i ∈ I p_i P_i.
S can emulate the step S τΔ_S using the Rules , , , , and by:
S[p_i]τ T_i_i ∈ I where T_i = ( P_i|)
The renaming policy ensures that ∉P_i for all i ∈ I.
Then Δ_S≡∑_i ∈ I p_i T_i.
: In this case S = PQ, P uΔ_P = ∑_i ∈ I p_i P_i, and Δ_S = Δ_P |Q = ∑_i ∈ I p_i ( P_i | Q ).
By Definition <ref>, then S = P|Q and Δ_S = ∑_i ∈ I p_i P_i|Q.
By the induction hypothesis, the step P uΔ_P implies P[p_i]û T_i, P_i ∈ I and Δ_P≡∑_i ∈ Ip_i T_i, P.
S can emulate the step S uΔ_S using the Rules and to apply the steps in P[p_i]û T_i, P_i ∈ I such that:
S[p_i]û T_i _i ∈ I where T_i = T_i, P' |Q and T_i, P = T_i, P'
Because Δ_P≡∑_i ∈ Ip_i T_i, P and Δ_P = ∑_i ∈ Ip_i P_i, then Δ_S≡∑_i ∈ I p_i T_i.
: In this case S = PQ, Q uΔ_Q = ∑_i ∈ I p_i Q_i, and Δ_S = P|Δ_Q = ∑_i ∈ I p_i ( P | Q_i ).
By Definition <ref>, then S = P|Q and Δ_S = ∑_i ∈ I p_i P|Q_i.
By the induction hypothesis, the step Q uΔ_Q implies Q[p_i]û T_i, Q_i ∈ I and Δ_Q≡∑_i ∈ Ip_i T_i, Q.
S can emulate the step S uΔ_S using the Rules , , and to apply the steps in Q[p_i]û T_i, Q_i ∈ I such that:
S[p_i]û T_i _i ∈ I where T_i = P| T_i, Q' and T_i, Q = T_i, Q'
Because Δ_Q≡∑_i ∈ Ip_i T_i, Q and Δ_Q = ∑_i ∈ Ip_i Q_i, then Δ_S≡∑_i ∈ I p_i T_i.
: Here S = P | Q, P aΔ_P = ∑_i ∈ I p_i P_i, Q aΔ_Q = ∑_j ∈ J p_j Q_j, and we have Δ_S = Δ_P |Δ_Q = ∑_i ∈ I, j ∈ J p_i · p_j ( P_i | Q_j ).
By Definition <ref>, then S = P|Q and we have Δ_S = Δ_P|Δ_Q = ∑_i ∈ I, j ∈ J p_i · p_j P_i|Q_j.
By the induction hypothesis, the step P aΔ_P implies P[p_i]â T_i, P_i ∈ I and Δ_P≡∑_i ∈ Ip_i T_i, P.
By the induction hypothesis, the step Q aΔ_Q implies Q[p_j]â T_j, Q_j ∈ J and Δ_Q≡∑_j ∈ Jp_j T_j, Q.
S can emulate the step S τΔ_S using the Rules , , and to apply the steps in the sequences P[p_i]û T_i, P_i ∈ I and Q[p_j]û T_j, Q_j ∈ J such that:
S[p_i · p_j]τ̂ T_i, j_i ∈ I, j ∈ J where T_i, j = T_i, P' | T_j, Q',
T_i, P = T_i, P', and T_j, Q = T_j, Q'
Because Δ_P≡∑_i ∈ Ip_i T_i, P and Δ_P = ∑_i ∈ Ip_i P_i and Δ_Q≡∑_j ∈ Jp_j T_j, Q and Δ_Q = ∑_j ∈ Jp_j Q_j, then Δ_S≡∑_i ∈ I, j ∈ J p_i · p_j T_i, j.
: This case is symmetric to the last case for .
: In this case S = PA, P uΔ_P = ∑_i ∈ I p_i P_i, u ∉ A ∪A, and Δ_S = Δ_PA = ∑_i ∈ I p_i ( P_iA).
By Definition <ref>, then S = AP and Δ_S = ∑_i ∈ I p_i AP_i.
By the induction hypothesis, then P uΔ_P implies P[p_i]û T_i, P_i ∈ I and Δ_P≡∑_i ∈ Ip_i T_i, P.
S can emulate the step S uΔ_S using the Rules and to apply the steps in P[p_i]û T_i, P_i ∈ I such that:
S[p_i]û T_i _i ∈ I where T_i = AT_i, P' and T_i, P = T_i, P'
Because Δ_P≡∑_i ∈ Ip_i T_i, P and Δ_P = ∑_i ∈ Ip_i P_i, then Δ_S≡∑_i ∈ I p_i T_i.
: In this case S = P[ f ], P vΔ_P = ∑_i ∈ I p_i P_i, f(v) = u, and also Δ_S = Δ_P[ f ] = ∑_i ∈ I p_i ( P_i[ f ]).
Further, by Definition <ref>, it follows S = P𝗋𝖺𝗇_f𝖽𝗈𝗆_f as well as Δ_S = ∑_i ∈ I p_i P_i𝗋𝖺𝗇_f𝖽𝗈𝗆_f.
By induction hypothesis, then P vΔ_P implies P[p_i]v̂ T_i, P_i ∈ I and Δ_P≡∑_i ∈ Ip_i T_i, P.
S can emulate the step S uΔ_S using the Rules and to apply the steps in P[p_i]v̂ T_i, P_i ∈ I such that:
S[p_i]û T_i _i ∈ I where T_i = T_i, P'𝗋𝖺𝗇_f𝖽𝗈𝗆_f and T_i, P = T_i, P'
Because Δ_P≡∑_i ∈ Ip_i T_i, P and Δ_P = ∑_i ∈ Ip_i P_i, then Δ_S≡∑_i ∈ I p_i T_i.
: In this case S = C⟨ỹ⟩, C def=( x̃)P, and Δ_S = Pỹx̃.
By Definition <ref>, then we have S = C( ỹ) and Δ_S = Pỹx̃.
S can emulate the step S τΔ_S using the Rules , , , , and to reduce the replicated input in the outer encoding such that:
S[1]τ T where T = |Pỹx̃
By Lemma <ref>, then Δ_S≡T.
For u = τ we obtain from the above induction that S Δ_S implies the existence of some Δ_T = ∑_i ∈ I p_i T_i such that SΔ_T and Δ_S≡Δ_T, where SΔ_T is a non-empty and finite sequence of steps.
The proof of this lemma then is by induction on the number of steps in the source term sequence S Δ_S.
For soundness, we have to prove that the encoding does not introduce any new behaviour.
Therefore, we show that every sequence of steps on the target belongs to a matching sequence of steps on the source.
∀ S, Δ_T SΔ_T implies
( ∃Δ_S', Δ_T' S Δ_S' ∧Δ_T Δ_T' ∧Δ_S'≡Δ_T' )
We strengthen the proof goal for the induction, by assuming that the sequence Δ_T Δ_T' contains only -steps.
The proof is by induction on the number of steps in SΔ_T.
The base case for zero steps, Δ_T = S, holds trivially by choosing Δ_S = S and Δ_T' = Δ_T such that Δ_S = Δ_T'.
For the induction step, assume SΔ_T^* Δ_T.
By the induction hypothesis, there are some Δ_S^** and Δ_T^** such that S Δ_S^**, Δ_T^* Δ_T^**, and Δ_S^**≡Δ_T^**, where the sequence Δ_T^* Δ_T^** contains only -steps.
Note that, by Definition <ref> and because of the renaming policy , the restriction of ensures that no other step on the target can be in conflict with an -step.
Because of that, we can combine the steps in Δ_T^* Δ_T^** and Δ_T^* Δ_T to the sequence Δ_T^* Δ_T Δ_T^***, where Δ_T Δ_T^*** is the result of removing the step Δ_T^* Δ_T from Δ_T^* Δ_T^** if it is contained in this sequence and then reordering the steps such that the remaining steps in Δ_T^* Δ_T^** are applied after the step Δ_T^* Δ_T.
We have to prove that there are some Δ_S' and Δ_T' such that S Δ_S', Δ_T Δ_T', and Δ_S'≡Δ_T', where the sequence Δ_T Δ_T' contains only -steps.
Therefore, we construct Δ_S^**Δ_S' and the sequence Δ_T^***Δ_T' containing only -steps such that S Δ_S^**Δ_S', Δ_T^* Δ_T Δ_T^***Δ_T', and Δ_S'≡Δ_T'.
By Definition <ref>, Δ_T^* = ∑_i ∈ I p_i T_i^*, Δ_T = ∑_i ∈ I p_i ·Δ_T, i, ∑_i ∈ I p_i = 1, and T_i^* Δ_T, i or Δ_T, i = T_i^* for all i ∈ I.
We perform a case split on the nature of T_i^* Δ_T, i for all i ∈ I with Δ_T, i≠T_i^* to generate the initially empty sets 𝒮 and 𝒯 of source and target term steps.
We use 𝒮 and 𝒯 to collect the steps that we need for the sequences Δ_S^**Δ_S' and Δ_T^***Δ_T'.
Δ_T^* Δ_ T is an -step: By Definition <ref>, then Δ_T^* Δ_ T is a communication step on a translated source term name x.
To complete the emulation of the corresponding source term communication on x, we need to perform the -step that was enabled by this -step.
Accordingly, we add the respective source term step on x to 𝒮 and the -step that was enabled by Δ_T^* Δ_ T to 𝒯.
Δ_T^* Δ_ T is an -step: By Definition <ref>, then T_i^* Δ_T, i is a communication step on an instance of the reserved name .
By Definition <ref> and Definition <ref>, all in- and outputs on are initially guarded in the encoding and can only be unguarded by an -step.
Accordingly, SΔ_T^* contains the corresponding -step that unguarded the input on reduced in T_i^* Δ_T, i.
Since Δ_S^**≡Δ_T^**, then S Δ_S^** already contains the corresponding communication step in the source, in this case we do not have to add any steps to 𝒮 or 𝒯.
Δ_T^* Δ_ T is a -step: By Definition <ref>, then T_i^* Δ_T, i is a communication step on an instance of the reserved name .
In this case we add the source term τ-step that is emulated by T_i^* Δ_T, i to 𝒮 and leave 𝒯 unchanged.
Δ_T^* Δ_ T is a -step: By Definition <ref>, then Δ_T^* Δ_ T reduce a replicated input to emulate the unfolding of recursion.
Again we add the source term step to unfold a recursion that is emulated by T_i^* Δ_T, i to 𝒮 and leave 𝒯 unchanged.
Otherwise: By Definition <ref>, all steps of an encoded source term are -steps, -steps, -steps, or -steps.
We observe that in all cases, we need at most one step in the source and target.
Since Δ_T^* Δ_T^** contains only -steps, it can only complete the emulation of source term steps and not start new such emulations.
Because of Δ_S^**≡Δ_T^** and because the emulations of the source term steps in 𝒮 were enabled in Δ_T^*, the source term steps in 𝒮 are enabled in Δ_S^**.
If 𝒮 = ∅ then we can choose Δ_S' = Δ_S^**.
Else let Δ_S^**Δ_S' be the result of applying, for each branch i ∈ I in the distribution Δ_S^** with a step in 𝒮, the corresponding step.
Similarly, if 𝒯 = ∅ then Δ_T' = Δ_T^*** and else let Δ_T^***Δ_T' apply the steps in 𝒯 on the respective branches in Δ_T^***.
By Definition <ref>, may produce or x as junk, as leftovers from a completed emulation.
However, all forms of junk produced by are and superfluous restrictions and are not observable modulo ≡.
Since all initiated emulation attempts are completed and since Δ_S' results from performing all source term steps emulated in the target, Δ_S'≡Δ_T'.
Divergence reflection ensures that the encoding cannot introduce new sources of divergence.
For every S, S implies S.
By Lemma <ref>, for every sequence SΔ_T there are some Δ_S' and Δ_T' such that S Δ_S', Δ_T Δ_T', and Δ_S≡Δ_T, where in the proof of Lemma <ref> we additionally show that the sequence Δ_T Δ_T' is a sequence of -steps.
Moreover, from the construction of S Δ_S', this sequence contains exactly one source term step for every -step, every -step, and every -step in SΔ_T.
The -steps are initially guarded, can be unguarded only by an -step, and for each -step exactly one -step is unguarded.
We conclude that -steps, -steps, -steps, and -steps cannot introduce new loops.
Since there are no other kinds of steps, this ensures divergence reflection.
Success sensitiveness ensures that the translation passes a test if and only if the source term passes this test.
For every S, S iff S.
By Definition <ref>, S^* iff S^* for all S^*.
Then also Δ_S^* iff Δ_S^* for all distributions Δ_S^*.
* If S, then S Δ_S and Δ_S.
By Lemma <ref>, then SΔ_T and Δ_S≡Δ_T.
By Definition <ref>, Δ_S implies Δ_S.
By Lemma <ref>, then Δ_S≡Δ_T implies Δ_T.
Finally, SΔ_T and Δ_T imply S.
* If S, then SΔ_T and Δ_T.
By Lemma <ref>, then S Δ_S', Δ_T Δ_T', and Δ_S'≡Δ_T'.
Because of Δ_T Δ_T', Δ_T implies Δ_T'.
By Lemma <ref>, Δ_T' and Δ_S'≡Δ_T' imply Δ_S'.
By Definition <ref>, then Δ_S'.
Finally, S Δ_S' and Δ_S' imply S.
The encoding satisfies weak compositionality, name invariance, weak probabilistic operational correspondence ≡, divergence reflection, and success sensitiveness.
The proof is by the Lemmata <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>, where Lemma <ref> proves that ≡ is success sensitive.
§ WEAK PROBABILISTIC OPERATIONAL CORRESPONDENCE
A relation is a (weak reduction) correspondence simulation if for each (P, Q) ∈:
* P P' implies ∃ Q' Q Q' ∧ (P', Q') ∈
* Q Q' implies ∃ P”, Q” P P”∧ Q' Q”∧ (P”, Q”) ∈
Two terms are correspondence similar if a correspondence simulation relates them.
A probabilistic version of correspondence simulation for a relation between probability distributions can be derived straightforwardly from Definition <ref>.
A relation is a (weak) probabilistic (reduction) correspondence simulation if for each (P, Q) ∈:
* P Δ implies ∃Θ Q Θ∧ (Δ, Θ) ∈
* Q Θ implies ∃Δ', Θ' P Δ' ∧ΘΘ' ∧ (Δ', Θ') ∈
Two terms are probabilistic correspondence similar if a probabilistic correspondence simulation relates them.
A relation on distributions is a (weak) probabilistic (reduction) correspondence simulation if for each (Δ, Θ) ∈:
* ΔΔ' implies ∃Θ' ΘΘ' ∧ (Δ', Θ') ∈
* ΘΘ' implies ∃Δ”, Θ”ΔΔ”∧Θ' Θ”∧ (Δ”, Θ”) ∈
Two terms are probabilistic correspondence similar if a probabilistic correspondence simulation relates them.
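Both clauses above compare distributions through the lifting of a relation on terms to distributions. As a purely illustrative aside (our own sketch, not part of the original development): for finite-support distributions, checking whether two distributions are related by the lift of a finite relation amounts to finding a coupling whose support is contained in the relation, which can be phrased as a linear feasibility problem. The Python fragment below assumes SciPy is available; all names are ours.
```python
from scipy.optimize import linprog

def lifted(delta, theta, rel):
    """Check whether finite-support distributions delta, theta
    (dicts mapping states to probabilities) are related by the lift of rel
    (a set of state pairs): look for a coupling whose support lies in rel
    and whose marginals are delta and theta."""
    pairs = [(p, q) for p in delta for q in theta if (p, q) in rel]
    if not pairs:
        return False
    a_eq, b_eq = [], []
    for p in delta:                       # row marginals must equal delta
        a_eq.append([1.0 if pp == p else 0.0 for pp, _ in pairs])
        b_eq.append(delta[p])
    for q in theta:                       # column marginals must equal theta
        a_eq.append([1.0 if qq == q else 0.0 for _, qq in pairs])
        b_eq.append(theta[q])
    res = linprog(c=[0.0] * len(pairs), A_eq=a_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(pairs), method="highs")
    return res.status == 0                # 0 = feasible solution found

# toy check: 1/2 P1 + 1/2 P2 is related to 1/2 Q1 + 1/2 Q2
# when rel pairs P1 with Q1 and P2 with Q2
print(lifted({"P1": 0.5, "P2": 0.5}, {"Q1": 0.5, "Q2": 0.5},
             {("P1", "Q1"), ("P2", "Q2")}))
```
Any feasible point of this problem is a witnessing decomposition Δ = ∑_i p_i P_i, Θ = ∑_i p_i Q_i with every pair (P_i, Q_i) in the relation, as required by the lift operation used throughout the proofs.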
If is a preorder and a probabilistic correspondence simulation then so is .
If the preorder is a probabilistic correspondence simulation then so is .
Assume a probabilistic correspondence simulation that is a preorder and ( Δ, Θ) ∈.
Since is a preorder and by the Lemmata <ref> and <ref>, is a preorder.
By Definition <ref>, then there is some index set I such that Δ = ∑_i ∈ I p_i P_i, Θ = ∑_i ∈ I p_i Q_i, ∑_i ∈ I p_i = 1, and for each i ∈ I we have ( P_i, Q_i ) ∈.
* Assume ΔΔ'.
If Δ' = Δ then we can choose Θ' = Θ such that ΘΘ' and ( Δ', Θ' ) ∈.
Else, by Definition <ref>, then Δ' = ∑_i ∈ I p_i Δ_i' and for some (at least one) i ∈ I a process P_i in Δ performed a sequence of at least one step P_i Δ_i'.
For all other i we have Δ_i' = P_i.
Since is a probabilistic correspondence simulation, ( P_i, Q_i ) ∈ and P_i Δ_i' imply Q_i Θ_i' and ( Δ_i', Θ_i' ) ∈.
For the i ∈ I without a step, we choose Θ_i' = Q_i such that Q_i Θ_i' and ( Δ_i', Θ_i' ) ∈.
Then ΘΘ' = ∑_i ∈ I p_i Θ_i' and ( Δ_i', Θ_i' ) ∈.
* Assume ΘΘ'.
If Θ' = Θ then we can choose Δ” = Δ and Θ” = Θ such that ΔΔ”, Θ' Θ”, and ( Δ”, Θ”) ∈.
Else, by Definition <ref>, then Θ' = ∑_i ∈ I p_i Θ_i' and for some (at least one) i ∈ I a process Q_i in Θ performed a sequence of at least one step Q_i Θ_i'.
For all other i we have Θ_i' = Q_i.
Since is a probabilistic correspondence simulation, ( P_i, Q_i ) ∈ and Q_i Θ_i' imply P_i Δ_i”, Θ_i' Θ_i”, and ( Δ_i”, Θ_i”) ∈.
For the i ∈ I without a step, we choose Δ_i” = P_i and Θ_i” = Θ_i such that P_i Δ_i”, Θ_i' Θ_i” and ( Δ_i”, Θ_i”) ∈.
Then ΔΔ” = ∑_i ∈ I p_i Δ_i”, Θ' Θ” = ∑_i ∈ I p_i Θ_i” and ( Δ_i”, Θ_i”) ∈.
We conclude that is a probabilistic correspondence simulation.
· is weakly probabilistically operationally corresponding a preorder ⊆^2 that is a probabilistic correspondence simulation iff
∃( ∀ S ( S, S) ∈) ∧ = ∧( ∀ S, T ( S, T ) ∈⟶( S, T ) ∈) ∧ is a preorder and a probabilistic correspondence simulation.
One of the condition in Theorem <ref> is that the relation on the target is obtained from the induced relation between source and target by reduction on target terms, =.
We prove that this property is preserved by the lift operation in Definition <ref>.
If = then =.
Assume =.
By Definition <ref>, ( Δ, Θ) ∈ if
* Δ = ∑_i ∈ I p_i P_i, where I is a finite index set and ∑_i ∈ I p_i = 1,
* for each i ∈ I there is a process Q_i such that ( P_i, Q_i ) ∈, and
* Θ = ∑_i ∈ I p_i Q_i.
Then ( Δ, Θ) ∈ if
* Δ = ∑_i ∈ I p_i P_i in the target, where I is a finite index set and ∑_i ∈ I p_i = 1,
* for each i ∈ I there is a process Q_i such that ( P_i, Q_i ) ∈, and
* Θ = ∑_i ∈ I p_i Q_i in the target.
Since = and by Definition <ref>, then =.
The condition = in Theorem <ref> allows us to prove that is a preorder if is a preorder.
If is a preorder and = then is a preorder.
Assume a preorder and =.
Reflexivity: Since is reflexive, ( T, T ) ∈ for all target terms T.
Because of =, then ( T, T ) ∈.
Transitivity: Assume ( T_1, T_2 ) ∈ and ( T_2, T_3 ) ∈ for some target terms T_1, T_2, and T_3.
Because of =, then ( T_1, T_2 ) ∈ and ( T_2, T_3 ) ∈.
Since is transitive, then ( T_1, T_3 ) ∈.
Because of =, then ( T_1, T_3 ) ∈.
We conclude that is a preorder.
The condition ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ is necessary to ensure (with the remaining properties) that the encoding satisfies weak in the 'only if'-case of Theorem <ref>.
Therefore we lift this property to distributions.
If ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ then for all distributions Δ_S on the source and all distributions Δ_T on the target ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
Assume ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and ( Δ_S, Δ_T ) ∈.
By Definition <ref>, then there is some index set I such that Δ_S = ∑_i ∈ I p_i S_i, Δ_T = ∑_i ∈ I p_i T_i, ∑_i ∈ I p_i = 1, and for each i ∈ I we have ( S_i, T_i ) ∈.
With ∀ S, T ( S, T ) ∈⟶( S, T ) ∈, then ( S_i, T_i ) ∈ for each i ∈ I.
By Definition <ref>, then ( Δ_S, Δ_T ) ∈.
To prove Theorem <ref>, it is necessary to ensure that is a probabilistic correspondence simulation if is.
If is a probabilistic correspondence simulation and = then is a probabilistic correspondence simulation.
Assume that is a probabilistic correspondence simulation and =.
Moreover, assume ( T_1, T_2 ) ∈.
Because of =, then ( T_1, T_2 ) ∈.
Case (i): Assume T_1 Δ.
Since is a probabilistic correspondence simulation, then T_2 Θ and ( Δ, Θ) ∈.
With = and Lemma <ref>, then ( Δ, Θ) ∈.
Case (ii): Assume T_2 Θ.
Since is a probabilistic correspondence simulation, then T_1 Δ', ΘΘ', and ( Δ', Θ' ) ∈.
With = and Lemma <ref>, then ( Δ', Θ' ) ∈.
We conclude that is a probabilistic correspondence simulation.
Next we show the main theorem of Section <ref>.
Weak probabilistic operational correspondence induces a relation between source and target terms that is a probabilistic correspondence simulation.
More precisely, Theorem <ref> states:
· is weakly probabilistically operationally corresponding a preorder ⊆^2 that is a probabilistic correspondence simulation iff ∃( ∀ S ( S, S) ∈) ∧ =
∧ ( ∀ S, T ( S, T ) ∈⟶( S, T ) ∈) ∧ is a preorder and a probabilistic correspondence simulation.
We prove the two directions of the result separately.
if (⟶): Assume that · is weakly probabilistically operationally corresponding a preorder ⊆^2 that is a probabilistic correspondence simulation.
We construct from by adding ( S, S) for all source terms S and then building the reflexive and transitive closure.
Accordingly, ∀ S ( S, S) ∈ holds by construction.
Since we did not add any pairs of only source terms, no pairs of the form ( S_1, S_2 ) where both S_1 and S_2 are source terms, and since the only such pairs added by the reflexive and transitive closure are of the form ( S, S ), we have =.
Next we prove that ∀ S, T ( S, T ) ∈⟶( S, T ) ∈.
Therefore, fix some S and T and assume ( S, T ) ∈.
By the construction of , ( S, S) ∈ and all pairs relating a source and a target term contain a source term and its literal translation or result from such a pair, , and the transitive closure in the construction of .
Hence, T = S or ( S, T ) ∈.
In the former case, ( S, T ) ∈ follows from the reflexivity of .
The latter case directly provides ( S, T ) ∈.
That is a preorder directly follows from the construction of , because we used the reflexive and transitive closure.
As last condition we have to show that is a probabilistic correspondence simulation.
By Definition <ref>, for all ( P, Q ) ∈ and all P Δ we have to find Q Θ such that ( Δ, Θ) ∈ and for all Q Θ we have to find P Δ' and ΘΘ' such that ( Δ', Θ' ) ∈.
By the construction of , P may be a source or target term, but Q is a target term.
* Assume P Δ.
If P is a source term, then we have PΔ' with ( Δ, Δ' ) ∈, because of completeness in weak in Definition <ref>.
Since Q is a target term and by the construction of , ( P, Q ) ∈ implies ( P, Q ) ∈.
Because of =, then ( P, Q ) ∈.
Then Q Θ and ( Δ', Θ) ∈, because is a probabilistic correspondence simulation.
With the transitivity of and thus , ( Δ, Δ' ) ∈ and ( Δ', Θ) ∈ imply ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a probabilistic correspondence simulation (see Definition <ref>), then P Δ implies Q Θ with ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
* Assume Q Θ.
If P is a source term, then ( P, Q ) ∈, by the construction of .
Because of =, then ( P, Q ) ∈.
Since is a probabilistic correspondence simulation (see second case of Definition <ref>), then Q Θ implies PΔ_T', ΘΘ', and ( Δ_T', Θ' ) ∈.
From PΔ_T' we obtain P Δ” and Δ_T' Δ_T” with ( Δ”, Δ_T”) ∈, because of soundness in weak in Definition <ref>.
By Lemma <ref>, is a probabilistic correspondence simulation.
By the first case of Definition <ref>, then ( Δ_T', Θ' ) ∈ and Δ_T' Δ_T” imply Θ' Θ” and ( Δ_T”, Θ”) ∈.
By transitivity, we obtain ( Δ”, Θ”) ∈ and ΘΘ”.
Finally, ( Δ”, Θ”) ∈ follows from ( Δ”, Θ”) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a probabilistic correspondence simulation (see Definition <ref>), then Q Θ implies P Δ', ΘΘ', and ( Δ', Θ' ) ∈.
Finally, ( Δ', Θ' ) ∈ follows from ( Δ', Θ' ) ∈, by the construction of .
only if (⟵): We assume that there is a relation such that ∀ S ( S, S) ∈, =, ∀ S, T ( S, T ) ∈⟶( S, T ) ∈, and is a preorder and a probabilistic correspondence simulation.
We start with weak probabilistic operational correspondence.
Completeness: Assume S Δ_S.
Since ( S, S) ∈ and because is a probabilistic correspondence simulation, then SΔ_T and ( Δ_S, Δ_T ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, then ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
Weak Soundness: Assume SΔ_T.
Since ( S, S) ∈ and because is a probabilistic correspondence simulation, then S Δ_S, Δ_T Δ_T', and ( Δ_S, Δ_T' ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, ( Δ_S, Δ_T' ) ∈ implies ( Δ_S, Δ_T' ) ∈.
By Definition <ref>, then · is weakly probabilistically operationally corresponding .
Finally, since is a preorder and a probabilistic correspondence simulation and because of the Lemmata <ref> and <ref>, is a preorder and a probabilistic correspondence simulation.
§ (STRONG) PROBABILISTIC OPERATIONAL CORRESPONDENCE
A relation is a probabilistic (reduction) bisimulation if for each (P, Q) ∈:
* P Δ implies ∃Θ Q Θ∧ (Δ, Θ) ∈
* Q Θ implies ∃Δ P Δ∧ (Δ, Θ) ∈
Two terms are probabilistic bisimilar if a probabilistic bisimulation relates them.
We can reuse several of the auxiliary results derived in Section <ref> for the proof of Theorem <ref>.
But we have to adapt Lemma <ref> to probabilistic bisimulation.
If is a probabilistic bisimulation and = then is a probabilistic bisimulation.
Assume that is a probabilistic bisimulation and =.
Moreover, assume ( T_1, T_2 ) ∈.
Because of =, then ( T_1, T_2 ) ∈.
Case (i): Assume T_1 Δ.
Since is a probabilistic bisimulation, then T_2 Θ and ( Δ, Θ) ∈.
With = and Lemma <ref>, then ( Δ, Θ) ∈.
Case (ii): Assume T_2 Θ.
Since is a probabilistic bisimulation, then T_1 Δ and ( Δ, Θ) ∈.
With = and Lemma <ref>, then ( Δ, Θ) ∈.
We conclude that is a probabilistic bisimulation.
An encoding · : → is probabilistic operationally corresponding () ⊆^2 if it is:
Probabilistic Complete:
∀ S, Δ_S S Δ_S implies ( ∃Δ_T SΔ_T ∧( Δ_S, Δ_T ) ∈)
Probabilistic Sound:
∀ S, Δ_T SΔ_T implies ( ∃Δ_S S Δ_S ∧( Δ_S, Δ_T ) ∈)
· is probabilistically operationally corresponding a preorder ⊆^2 that is a probabilistic bisimulation iff
∃( ∀ S ( S, S) ∈) ∧ = ∧( ∀ S, T ( S, T ) ∈⟶( S, T ) ∈) ∧ is a preorder and a probabilistic bisimulation.
We prove the two directions of the result separately.
if (⟶): Assume that · is probabilistically operationally corresponding a preorder ⊆^2 that is a probabilistic bisimulation.
We construct from by adding ( S, S) for all source terms S and then building the reflexive and transitive closure.
Accordingly, ∀ S ( S, S) ∈ holds by construction.
Since we did not add any pairs of only source terms, no pairs of the form ( S_1, S_2 ) where both S_1 and S_2 are source terms, and since the only such pairs added by the reflexive and transitive closure are of the form ( S, S ), we have =.
Next we prove that ∀ S, T ( S, T ) ∈⟶( S, T ) ∈.
Therefore, fix some S and T and assume ( S, T ) ∈.
By the construction of , ( S, S) ∈ and all pairs relating a source and a target term contain a source term and its literal translation or result from such a pair, , and the transitive closure in the construction of .
Hence, T = S or ( S, T ) ∈.
In the former case, ( S, T ) ∈ follows from the reflexivity of .
The latter case directly provides ( S, T ) ∈.
That is a preorder directly follows from the construction of , because we used the reflexive and transitive closure.
As last condition we have to show that is a probabilistic bisimulation.
By Definition <ref>, for all ( P, Q ) ∈ and all P Δ we have to find Q Θ such that ( Δ, Θ) ∈ and for all Q Θ we have to find P Δ such that ( Δ, Θ) ∈.
By the construction of , P may be a source or target term, but Q is a target term.
* Assume P Δ.
If P is a source term, then we have PΔ' with ( Δ, Δ' ) ∈, because of completeness in in Definition <ref>.
Since Q is a target term and by the construction of , ( P, Q ) ∈ implies ( P, Q ) ∈.
Because of =, then ( P, Q ) ∈.
Then Q Θ and ( Δ', Θ) ∈, because is a probabilistic bisimulation.
With the transitivity of and thus , ( Δ, Δ' ) ∈ and ( Δ', Θ) ∈ imply ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a probabilistic bisimulation (see Definition <ref>), then P Δ implies Q Θ with ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
* Assume Q Θ.
If P is a source term, then ( P, Q ) ∈, by the construction of .
Because of =, then ( P, Q ) ∈.
Since is a probabilistic bisimulation (see second case of Definition <ref>), then Q Θ implies PΔ_T and ( Δ_T, Θ) ∈.
From PΔ_T we obtain P Δ with ( Δ, Δ_T ) ∈, because of soundness in in Definition <ref>.
By transitivity, we obtain ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a probabilistic bisimulation (see Definition <ref>), then Q Θ implies P Δ and ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
only if (⟵): We assume that there is a relation such that ∀ S ( S, S) ∈, =, ∀ S, T ( S, T ) ∈⟶( S, T ) ∈, and is a preorder and a probabilistic bisimulation.
We start with probabilistic operational correspondence.
Completeness: Assume S Δ_S.
Since ( S, S) ∈ and because is a probabilistic bisimulation, then SΔ_T and ( Δ_S, Δ_T ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, then ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
Soundness: Assume SΔ_T.
Since ( S, S) ∈ and because is a probabilistic bisimulation, then S Δ_S and ( Δ_S, Δ_T ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, then ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
By Definition <ref>, then · is probabilistically operationally corresponding .
Finally, since is a preorder and a probabilistic bisimulation and because of the Lemmata <ref> and <ref>, is a preorder and a probabilistic bisimulation.
A relation is a strong probabilistic (reduction) bisimulation if for each (P, Q) ∈:
* P Δ implies ∃Θ Q Θ∧ (Δ, Θ) ∈
* Q Θ implies ∃Δ P Δ∧ (Δ, Θ) ∈
Two terms are strong probabilistic bisimilar if a strong probabilistic bisimulation relates them.
An encoding · : → is strongly probabilistic operationally corresponding (strong ) ⊆^2 if it is:
Strongly Probabilistic Complete:
∀ S, Δ_S S Δ_S implies ( ∃Δ_T SΔ_T ∧( Δ_S, Δ_T ) ∈)
Strongly Probabilistic Sound:
∀ S, Δ_T SΔ_T implies ( ∃Δ_S S Δ_S ∧( Δ_S, Δ_T ) ∈)
Again, we adapt Lemma <ref> to strong probabilistic bisimulation.
If is a strong probabilistic bisimulation and = then is a strong probabilistic bisimulation.
Assume that is a strong probabilistic bisimulation and =.
Moreover, assume ( T_1, T_2 ) ∈.
Because of =, then ( T_1, T_2 ) ∈.
Case (i): Assume T_1 Δ.
Since is a strong probabilistic bisimulation, then T_2 Θ and ( Δ, Θ) ∈.
With = and Lemma <ref>, then ( Δ, Θ) ∈.
Case (ii): Assume T_2 Θ.
Since is a strong probabilistic bisimulation, then T_1 Δ and ( Δ, Θ) ∈.
With = and Lemma <ref>, then ( Δ, Θ) ∈.
We conclude that is a strong probabilistic bisimulation.
Then we can show Theorem <ref>:
· is strongly probabilistically operationally corresponding a preorder ⊆^2 that is a strong probabilistic bisimulation iff
∃( ∀ S ( S, S) ∈) ∧ = ∧( ∀ S, T ( S, T ) ∈⟶( S, T ) ∈) ∧ is a preorder and a strong probabilistic bisimulation.
We prove the two directions of the result separately.
if (⟶): Assume that · is strongly probabilistically operationally corresponding a preorder ⊆^2 that is a strong probabilistic bisimulation.
We construct from by adding ( S, S) for all source terms S and then building the reflexive and transitive closure.
Accordingly, ∀ S ( S, S) ∈ holds by construction.
Since we did not add any pairs of only source terms, no pairs of the form ( S_1, S_2 ) where both S_1 and S_2 are source terms, and since the only such pairs added by the reflexive and transitive closure are of the form ( S, S ), we have =.
Next we prove that ∀ S, T ( S, T ) ∈⟶( S, T ) ∈.
Therefore, fix some S and T and assume ( S, T ) ∈.
By the construction of , ( S, S) ∈ and all pairs relating a source and a target term contain a source term and its literal translation or result from such a pair, , and the transitive closure in the construction of .
Hence, T = S or ( S, T ) ∈.
In the former case, ( S, T ) ∈ follows from the reflexivity of .
The latter case directly provides ( S, T ) ∈.
That is a preorder directly follows from the construction of , because we used the reflexive and transitive closure.
As the last condition we have to show that is a strong probabilistic bisimulation.
By Definition <ref>, for all ( P, Q ) ∈ and all P Δ we have to find Q Θ such that ( Δ, Θ) ∈ and for all Q Θ we have to find P Δ such that ( Δ, Θ) ∈.
By the construction of , P may be a source or target term, but Q is a target term.
* Assume P Δ.
If P is a source term, then we have PΔ' with ( Δ, Δ' ) ∈, because of strong completeness in strong in Definition <ref>.
Since Q is a target term and by the construction of , ( P, Q ) ∈ implies ( P, Q ) ∈.
Because of =, then ( P, Q ) ∈.
Then Q Θ and ( Δ', Θ) ∈, because is a strong probabilistic bisimulation.
With the transitivity of and thus , ( Δ, Δ' ) ∈ and ( Δ', Θ) ∈ imply ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a strong probabilistic bisimulation (see Definition <ref>), then P Δ implies Q Θ with ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
* Assume Q Θ.
If P is a source term, then ( P, Q ) ∈, by the construction of .
Because of =, then ( P, Q ) ∈.
Since is a strong probabilistic bisimulation (see second case of Definition <ref>), then Q Θ implies PΔ_T and ( Δ_T, Θ) ∈.
From PΔ_T we obtain P Δ with ( Δ, Δ_T ) ∈, because of strong soundness in strong in Definition <ref>.
By transitivity, we obtain ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
Else, assume that P is a target term.
Then ( P, Q ) ∈ implies ( P, Q ) ∈, because Q is a target term and =.
Since is a strong probabilistic bisimulation (see Definition <ref>), then Q Θ implies P Δ and ( Δ, Θ) ∈.
Finally, ( Δ, Θ) ∈ follows from ( Δ, Θ) ∈, by the construction of .
only if (⟵): We assume that there is a relation such that ∀ S ( S, S) ∈, =, ∀ S, T ( S, T ) ∈⟶( S, T ) ∈, and is a preorder and a strong probabilistic bisimulation.
We start with strong probabilistic operational correspondence.
Strong Completeness: Assume S Δ_S.
Since ( S, S) ∈ and because is a strong probabilistic bisimulation, then SΔ_T and ( Δ_S, Δ_T ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, then ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
Strong Soundness: Assume SΔ_T.
Since ( S, S) ∈ and because is a strong probabilistic bisimulation, then S Δ_S and ( Δ_S, Δ_T ) ∈.
By ∀ S, T ( S, T ) ∈⟶( S, T ) ∈ and Lemma <ref>, then ( Δ_S, Δ_T ) ∈ implies ( Δ_S, Δ_T ) ∈.
By Definition <ref>, then · is strongly probabilistically operationally corresponding .
Finally, since is a preorder and a strong probabilistic bisimulation and because of the Lemmata <ref> and <ref>, is a preorder and a strong probabilistic bisimulation.
Similar to Lemma <ref>, we show that ≡ is barb sensitive.
If T_1 ≡ T_2 then T_1n⟷T_2n for all n ∈∪.
Moreover, if Δ_1 ≡Δ_2 then Δ_1n⟷Δ_2n for all n ∈∪.
Fix some n ∈∪.
The proof is by induction on the definition of ≡.
All cases are immediate.
α-Equivalence ≡_α: In this case T_1 ≡_α T_2.
Since barbs are observed via labelled steps on unrestricted names, we have T_1n iff T_2n.
P≡ P: In this case T_1 = T_2 |.
Since does not contain any barbs, then T_1n iff T_2n.
PQ≡QP: In this case T_1 = P | Q and T_2 = Q | P.
Then T_1n iff ( Pn∨Qn) iff T_2n.
PQR≡PQR: In this case T_1 = P |( Q | R ) as well as T_2 = ( P | Q )| R.
Thereby, T_1n iff ( Pn∨Qn∨Rn) iff T_2n.
x≡: In this case T_1 = x and T_2 =.
Since does not contain any barbs, then T_1n and T_2n.
xyP≡yxP: Then T_1 = xyP and T_2 = yxP.
Since only unrestricted actions can be observed, then T_1n iff Pn iff T_2n.
xPQ≡PxQ: In this case T_1 = x( P | Q ) and T_2 = P |xQ, where x ∉P.
Here x ∉P ensures that Px and Px.
Then T_1n iff ( ( Pn∨Qn)∧ n ≠ x ∧ n ≠x) iff T_2n.
Δ_1 ≡Δ_2: In this case there is a finite index set I such that Δ_1 = ∑_i ∈ I p_i P_i, Δ_2 = ∑_i ∈ I p_i Q_i, and P_i ≡ Q_i for all i ∈ I.
Because of P_i ≡ Q_i, we have P_in iff Q_in for all i ∈ I.
Then Δ_1n iff Δ_2n.
Then we can show that the encoding also respects barbs, :
For every S and all n ∈∪, Sn iff Sn.
By Definition <ref>, since the encoding function does not introduce new free names and because of the rigorous use of the renaming policy , S^*n iff S^*n for all S^*.
Then also Δ_S^*n iff Δ_S^*n for all distributions Δ_S^*.
* If Sn, then S Δ_S and Δ_Sn.
By Lemma <ref>, then SΔ_T and Δ_S≡Δ_T.
By Definition <ref>, Δ_Sn implies Δ_Sn.
By Lemma <ref>, then Δ_S≡Δ_T implies Δ_Tn.
Finally, SΔ_T and Δ_Tn imply Sn.
* If Sn, then SΔ_T and Δ_Tn.
By Lemma <ref>, then S Δ_S', Δ_T Δ_T', and Δ_S'≡Δ_T'.
Because of Δ_T Δ_T', Δ_Tn implies Δ_T'n.
By Lemma <ref>, Δ_T'n and Δ_S'≡Δ_T' imply Δ_S'n.
By Definition <ref>, then Δ_S'n.
Finally, S Δ_S' and Δ_S'n imply Sn.
splncs04
|
http://arxiv.org/abs/2307.04928v1 | 20230710223003 | Belle II status and prospects for studies of neutral currents | [
"Valerio Bertacchi"
] | hep-ex | [
"hep-ex"
] |
Turán number for bushes
Zoltán Füredi
Alfréd Rényi Institute of Mathematics, Budapest, Hungary.
E-mail: .
Research partially supported by National Research, Development and Innovation Office NKFIH grants 132696 and 133819.
Alexandr Kostochka
University of Illinois at Urbana–Champaign, Urbana, IL 61801
and Sobolev Institute of Mathematics, Novosibirsk 630090, Russia. E-mail: .
Research supported in part by NSF
grant DMS-2153507 and NSF RTG grant DMS-1937241.
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The flavour changing neutral current b→ s transitions are suppressed in the Standard Model (SM) and therefore sensitive to Beyond the Standard Model (BSM) amplitudes. The SM branching fractions are 𝒪(10^-5-10^-7), predicted with 10-30% uncertainties. Angular distributions and ratios can be used to improve the precision and eventually have access to new physics properties.
Belle II <cit.> and SuperKEKB <cit.> produce an optimal environment to study the neutral currents.
Belle II has similar and good performance in electron and muon channels, in term of efficiency, fake rate and particle identification capability. This is a key feature to perform lepton flavour universality (LFU) tests and lepton flavour violation (LFV) searches in the b→ sℓℓ^(') sector, where ℓ indicates a charged lepton. On the other hand, the b→ sγ and b→ sνν transitions represent a unique opportunity for Belle II, because of the almost complete hermeticity of the detector, the possibility to exploit the Υ(4S) initial state constraint and the relatively low combinatorial background of the SuperKEKB collisions.
One of the key tools of Belle II for the channels with missing energy in the final state is the B-tagging, a set of reconstruction techniques to identify the Υ(4S)→ BB events exploiting the initial state knowledge to constrain the missing information in the signal side. It consists in reconstructing the partner B meson, called B_tag, produced in association with the signal one, to infer the properties of the signal.
We refer to hadronic or semileptonic tagging according to the channels used for the B_tag reconstruction.
The B-tagging algorithm is called Full Event Interpretation (FEI) <cit.>, a boosted decision tree (BDT)-based tagging algorithm which exploits a hierarchical approach to reconstruct 𝒪(10^4) decay chains on the tag side. The efficiency for the hadronic (semileptonic) tag is 0.5% (2%) with a purity of 30% (10%).
§ FULLY INCLUSIVE B TO XS GAMMA
We present the measurement of the B→ X_sγ branching ratio as a function of photon energy in the range 1.8 GeV<E_γ<2.7 GeV, where X_sγ is the inclusive final state involving a photon and a strange hadron. The measurement is performed on a 189 fb^-1 Belle II sample <cit.>.
The decays are reconstructed using the hadronic B-tagging, requiring a γ in the signal side with a threshold energy of 1.4 GeV. The main challenge of the analysis is to suppress the background without breaking the inclusivity of the measurement. The backgrounds are suppressed using a BDT and the residual X_d background is estimated using simulated events.
The signal is extracted by fitting the tag side M_bc=√(E^*2_beam-p^*2_B) distribution (where E_beam^* is the beam energy and p_B^* is the B meson momentum in the center-of-mass frame), as a function of E_γ. The result is competitive with previous measurements performed with hadronic B-tagging <cit.>. The result in terms of the partial branching fraction as a function of the photon energy is shown in Fig. <ref> (Left).
The prospects of this measurement with larger statistics depend on the chosen photon energy threshold <cit.>. With lower threshold the background will be higher, while with higher threshold the theoretical uncertainties will be higher. However, some improvements are expected both on the background suppression side and by using additional tagging methods, which will allow reaching percent-level precision. Measurements of relative quantities, such as asymmetries, will allow for further reduction of systematic effects.
§ MEASUREMENT OF B TO K* GAMMA BRANCHING FRACTIONS
We present the measurement of the branching fraction of B→ K^*γ, where K^* indicates both K^*+(892) and K^*0(892). The measurement is performed on a 63 fb^-1 Belle II sample <cit.>.
The decays are identified reconstructing only the signal B in the event. The misreconstructed γ background is suppressed with an energy selection, and with a veto on γ from π^0 and η decays. The e^+e^-→ q q background is suppressed with an MVA. The misreconstructed K^* background is suppressed using the K^* helicity angle distribution. A fit to Δ E=E_B^*-E_beam^* (where E_B is the energy of the B meson) is used to extract the signal, excluding higher-mass K^* resonances. The results are ℬ(B^0→ K^*0(K^+π^-)γ)=(4.5± 0.3± 0.2)× 10^-5, ℬ(B^0→ K^*0(K_S^0π^0)γ)=(4.4± 0.9± 0.6)× 10^-5, ℬ(B^+→ K^*+(K^+π^0)γ)=(5.0± 0.5± 0.4)× 10^-5, ℬ(B^+→ K^*+(K_S^0π^+)γ)=(5.4± 0.6± 0.4)× 10^-5, where the first uncertainty is statistical and the second systematic, compatible with the world averages <cit.>.
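For orientation (an illustrative sketch of ours, not code from the analysis), the two fit variables used here and in the following sections are simple functions of the beam energy and of the reconstructed B-candidate momentum and energy in the center-of-mass frame; the function name and numerical values below are only an example.
```python
import math

def mbc_delta_e(e_beam_star, p_b_star, e_b_star):
    """Beam-constrained mass and Delta E (all arguments in GeV, CMS frame):
    e_beam_star -- beam energy, p_b_star -- magnitude of the B momentum,
    e_b_star -- energy of the B candidate."""
    m_bc = math.sqrt(max(e_beam_star ** 2 - p_b_star ** 2, 0.0))
    delta_e = e_b_star - e_beam_star
    return m_bc, delta_e

# a well-reconstructed B peaks at m_bc near 5.28 GeV and delta_e near 0
print(mbc_delta_e(5.29, 0.32, 5.28))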
This measurement is performed as the cleanest exclusive channel in the B→ X_sγ sector, and is a first step toward asymmetry measurements of radiative decays. In the latter, several systematic uncertainties cancel out, and projections based on the Belle result <cit.> show that a precision below the percent level can be reached with a few ab^-1 <cit.>.
§ SEARCH FOR B+ TO K+ NU NUBAR DECAYS
The search for the B^+→ K^+νν decay is a unique opportunity for Belle II. This decay has never been observed before, and the amplitude <cit.> can receive sizeable contributions from BSM amplitudes. The measurement is performed on a sample with an integrated luminosity of 63 fb^-1 <cit.>. The reconstruction is performed with an inclusive tagging approach, reconstructing the B_sig using the highest p_T track compatible with a K^+, and assigning the rest of the event to the B_tag. The procedure is validated on B^+→ J/ψ(→μμ)K^+ decays. Two BDTs in cascade are used to suppress the background exploiting the event shape, kinematics and vertex features.
No signal is observed and the result is shown in Fig. <ref> (Right) in terms of an upper limit. This corresponds to ℬ( B^+→ K^+νν)=(1.9± 1.3 (stat)^+0.08_-0.07 (syst))× 10^-5, compatible with the SM prediction and the previous results <cit.>.
The projection with larger samples <cit.> shows that a 5σ observation can be achieved with an integrated luminosity of 5 ab^-1 with an expected 50% efficiency improvement coming from the use of exclusive tagging approaches in combination with the inclusive one. Moreover, additional channels (K^*, K_S^0) will be investigated.
§ MEASUREMENT OF RK(J/PSI)
We present the measurements of the branching fraction of B→ J/ψ(→ℓℓ)K, ℓ=e,μ and K=K^+, K_S^0, performed on a 189 fb^-1 Belle II sample <cit.>. The ratios R_K(J/ψ)=ℬ(B→ J/ψ(→μ^+μ^-)K)/ℬ(B→ J/ψ(→ e^+e^-)K) are also measured. These channels have no sensitivity to BSM, so R_K(J/ψ)≈ 1 is expected. This analysis is used to validate the measurement of B→ K^*ℓℓ. The yields are extracted from a fit to the (Δ E, M_bc) distribution. The results are R_K^+(J/ψ)=1.009±0.022±0.008, R_K_S^0(J/ψ)=1.042±0.042±0.008, where the first uncertainty is statistical and the second systematic, in agreement with the expectations.
§ MEASUREMENT OF B TO K* L L BRANCHING FRACTIONS
We present the measurement of the branching fractions ℬ(B→ K^* ℓ^+ℓ^-) (where ℓ=e,μ and K^*=K^*+(892), K^*0(892)) performed on a sample with an integrated luminosity of 189 fb^-1 <cit.>. The backgrounds are suppressed using a BDT combined with a veto on the dilepton invariant mass for the J/ψ,ψ(2S)→ℓℓ background. An extended maximum likelihood fit is performed to the (Δ E, M_bc) distribution. The results are ℬ( B→ K^*μ^+μ^-)=(1.19± 0.31^+0.08_-0.07)× 10^-6, ℬ( B→ K^*e^+e^-)=(1.42± 0.48± 0.09)× 10^-6, ℬ( B→ K^*ℓ^+ℓ^-)=(1.25± 0.30^+0.08_-0.07)× 10^-6,
where the first uncertainty is statistical and the second systematic, compatible with the world average <cit.>.
These results prepare the ground for the measurement of R_K^(*)=ℬ(B→μ^+μ^-K^(*))/ℬ(B→ e^+e^-K^(*)), which will require a larger sample.
§ B TO K* TAU TAU PERSPECTIVES
The measurement of B→ K^*ττ is complementary to the previously discussed searches, probing new physics in the third generation. The SM branching ratio is 𝒪(10^-7), but BSM amplitudes can enhance the signal by several orders of magnitude <cit.>. Currently the decay has never been observed and an upper limit at 𝒪(10^-3) has been set <cit.>. Prospects extrapolating from the current upper limit with larger samples show that Belle II can investigate the branching ratios down to 10^-4 with 5 ab^-1, using hadronic and semileptonic B-tagging and reconstructing the τ leptons both in leptonic and hadronic decays <cit.>.
§ PERSPECTIVES OF LEPTON FLAVOR VIOLATION SEARCHES IN B TO K(*) L L' SECTOR
Several measurements have been performed in past years by the BaBar, LHCb and Belle collaborations in the B→ K^(*)ℓℓ' sector, where ℓ=e,μ,τ, setting upper limits that span from the 10^-5 to the 10^-9 level. Belle II is planning to join the effort in the searches for new physics in this sector. Focusing on B→ K^(*)τℓ, the use of the hadronic or semileptonic tag allows avoiding the explicit reconstruction of the τ lepton. The signal is extracted from the τ recoil mass distribution, obtained from the B_tag and the signal K track information. The recent results obtained on the Belle sample using the FEI are very promising <cit.>.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the ERC grant agreement No 819127.
JHEP
|
http://arxiv.org/abs/2307.10070v1 | 20230714124347 | Destructive relativity | [
"Maria Przybylska",
"Wojciech Szumiński",
"Andrzej J. Maciejewski"
] | math-ph | [
"math-ph",
"math.MP",
"nlin.CD"
] |
[email protected]
[email protected]
Institute of Physics, University of Zielona Góra,
Licealna 9, 65–417, Zielona Góra, Poland
[email protected]
Janusz Gil Institute of Astronomy, University of Zielona Góra,
Licealna 9, 65-417, Zielona Góra, Poland
The description of dynamics for high-energy particles requires an application of the special relativity theory framework, and analysis of properties of the corresponding equations of motion is very important. Here, we analyse Hamilton equations
of motion in the limit of weak external field when potential satisfies the condition 2V()≪ mc^2. We formulate very strong necessary integrability conditions for the case when the potential is a homogeneous function of coordinates of integer non-zero
degree. If Hamilton equations are integrable in the Liouville sense, then eigenvalues of the scaled Hessian matrix γ^-1V”() at any non-zero solution of the algebraic system V'()=γ must be integer numbers of appropriate form depending on k. As it turns out, these conditions are much stronger than those for the corresponding non-relativistic Hamilton equations. According to our best knowledge, the obtained results are the first general integrability necessary conditions for relativistic systems. Moreover, a relation between the integrability of these systems and corresponding non-relativistic systems is discussed. The obtained integrability conditions are very easy to use because the calculations reduce to linear algebra. We show their strength on the example of Hamiltonian systems with two degrees of freedom with polynomial homogeneous potentials. It seems that the only integrable relativistic systems with such potentials are those depending only on one coordinate, or having a radial form.
The paper has already been published in "Chaos: An Interdisciplinary Journal of Nonlinear Science", and the final journal version is available under the link: https://doi.org/10.1063/5.0140633.
Destructive relativity
Andrzej J. Maciejewski
July 14, 2023
==========================
Relativistic Hamiltonian equations describing a motion of a point mass in an arbitrary homogeneous potential are considered.
For the first time, the necessary integrability conditions for integrability in the Liouville sense for this class of systems are formulated.
These conditions are obtained by means of an analysis of the differential Galois groups of variational equations. They are simple and effective in applications. For instance,
an application of the necessary integrability conditions for systems with two degrees of freedom shows that relativity almost completely destroys integrability, that is, in almost all cases relativistic versions of integrable systems are not integrable.
§ INTRODUCTION
The study of the dynamics of classical systems in relativistic regimes is currently in
a great activity of the scientific community. Let us recall the relativistic
Kapitza system <cit.>; the relativistic hydrogen-like atom in a
magnetic field <cit.>; the relativistic
two-dimensional harmonic and anharmonic oscillators in a uniform gravitational
field <cit.>; the relativistic Lienard-type
oscillators <cit.> or the relativistic time-dependent
Ermakov–Milne–Pinney systems <cit.> to mention by name just a few. For more examples see
also <cit.>.
Relativistic systems are an interesting area of research by their own nature
and for their verified applications in many experimental contexts. For instance,
let us mention the recent experimental realization of the harmonic oscillator in
the relativistic regime, using the Bose-condensed lithium atoms in a
two-dimensional optical lattice <cit.>.
In the context of the classical and non-relativistic systems chaos occurs due to their
inherent nonlinearity of force fields and interactions of systems with these fields. Therefore, in the
simplest case models of a particle moving in flat spaces in a force field with
quadratic or separable potential are integrable. However, the addition of relativistic corrections to an integrable system can destroy its integrability
causing its chaotic behavior. In relativistic models the non-integrability
comes from the modification of the kinetic part of the Hamiltonian – even if
there are no non-linearities in the potential. Numerical studies, presented
in <cit.>, show that the classical two-dimensional Duffing-like
oscillators given by separable potentials of degree four are indeed chaotic.
Therefore, it seems that integrability in special relativity is not only related to the form of the potential. Hence, there is a natural question which integrable, Newtonian potentials have their integrable counterparts in special relativity and how to distinguish them.
The relativistic versions of classical systems can be considered as
perturbations of the latter. Thus, it is expected that many properties of
classical systems can be destroyed by relativity. Integrability is such a fundamental property. Hence, one can suspect that relativistic versions of
integrable models are not integrable. The question is what remains. In other words, which systems are integrable in classical and in relativistic settings?
The aim of the paper is to study the dynamics and integrability of a
relativistic particle moving in an external potential V() in the limit of
weak external field, i.e., when 2V()≪ mc^2. The considered relativistic
Hamiltonian takes the form
H=mc^2√(1+^2/m^2c^2)+V(),
see e.g. <cit.>, where =(q_1,…, q_n)∈^n,=(p_1, …, p_n)∈^n, while m is the particle
rest mass and c is the speed of light. For further consideration, we fix
units in such a way that m=1 and c=1. Moreover, we do not restrict the dimension
of the configuration space, so n is an arbitrary positive integer. The canonical Hamilton's equations
are as follows
=/√(1+^2), =-V'(),
where V'() denotes the gradient of V(). Detailed derivation of these equations one can find in <cit.>, and for applications, see for instance <cit.>.
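These equations are easy to integrate numerically, which is how Poincaré sections like those in the next section are typically produced. The following sketch is our own illustration, not the authors' code: it assumes NumPy/SciPy, uses units with m=c=1, and takes the anisotropic harmonic potential V=1/2(q_1^2+α q_2^2) purely as an example; conservation of H provides a basic accuracy check.
```python
import numpy as np
from scipy.integrate import solve_ivp

def relativistic_rhs(t, y, grad_v):
    """Right-hand side of dq/dt = p/sqrt(1 + p^2), dp/dt = -V'(q) (m = c = 1)."""
    q, p = y[:2], y[2:]
    dq = p / np.sqrt(1.0 + p @ p)
    dp = -grad_v(q)
    return np.concatenate([dq, dp])

# illustrative potential: anisotropic harmonic oscillator V = (q1^2 + alpha*q2^2)/2
alpha = 0.5
grad_v = lambda q: np.array([q[0], alpha * q[1]])
energy = lambda y: np.sqrt(1.0 + y[2:] @ y[2:]) + 0.5 * (y[0] ** 2 + alpha * y[1] ** 2)

y0 = np.array([0.0, 1.0, 0.8, 0.0])
sol = solve_ivp(relativistic_rhs, (0.0, 500.0), y0, args=(grad_v,),
                rtol=1e-10, atol=1e-12)
print(energy(y0), energy(sol.y[:, -1]))  # the Hamiltonian should be conserved
```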
The plan of the paper is the following. In Section <ref> the
motivation and main aims of studies of the considered Hamiltonian systems are
given. Section <ref> shows results of numerical analysis of famous dynamical
systems: the Kepler problem, the isotropic and anisotropic oscillators, and the Hénon-Heiles system in
relativistic and non-relativistic versions. In Section <ref> main
integrability results are presented: about the relation between the integrability of
relativistic and the corresponding non-relativistic systems formulated in
Proposition <ref>, about differential Galois
integrability obstructions given in Theorem <ref> and results of applications of
these obstructions to non-relativistic Hamiltonian systems with homogeneous
potentials presented in Theorem <ref>. Our main theorem devoted to
the integrability of relativistic Hamiltonian systems with homogeneous
potentials is formulated in Theorem <ref> at the end of this section.
Section <ref> is devoted to the outline of the proof of
Theorem <ref>. Very strong necessary integrability conditions were
possible to formulate thanks to three types of obstructions: these obtained from an analysis of variational equations on two non-equivalent energy levels studied in
Subsections <ref> and <ref> and those for
non-relativistic homogeneous potentials, that joined together in
Subsection <ref> complete the proof. Section <ref>
shows the application of the obtained integrability conditions to relativistic
Hamiltonian systems with two degrees of freedom with homogeneous polynomial
potentials. It also explains numerical results presented in Section <ref> that
except for radial potentials the passage from non-relativistic to relativistic versions
destroys integrability and for radial potentials, its super-integrability also
seems to be lost. Final remarks and comments are given in
Section <ref>. Appendix <ref> contains basic information
about Riemann P-equation used in Subsection <ref>.
§ NUMERICAL ANALYSIS OF CERTAIN POTENTIALS
In this section, we perform a numerical analysis of classical Hamiltonian
systems and their corresponding relativistic versions with the help of
Poincaré cross-sections method. We show that relativistic and non-relativistic versions of the isotropic harmonic oscillator and the classical Kepler problem are
integrable and super-integrable, respectively. However, the integrable anisotropic harmonic oscillator and the Hénon-Heiles system show chaotic trajectories in the relativistic regime.
§.§ The Kepler problem
As the first model, we consider the classical Kepler problem, which is governed
by the following Hamiltonian
H=1/2(p_1^2+p_2^2)+μ/√(q_1^2+q_2^2),
where μ∈. For μ<0 the force is attractive, otherwise, it is repulsive. Its relativistic counterpart has the form
H=√(1+p_1^2+p_2^2)+μ/√(q_1^2+q_2^2).
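A hedged illustration of how the sections described below can be produced numerically (our own sketch; the initial condition is chosen by us only to satisfy the quoted energy E=0.9 at μ=-1/4): integrate the relativistic equations of motion and record the crossings of the surface q_1=0 with p_1>0.
```python
import numpy as np
from scipy.integrate import solve_ivp

mu = -0.25                       # attractive case used in the text

def kepler_rhs(t, y):
    q, p = y[:2], y[2:]
    r3 = (q @ q) ** 1.5
    # dp/dt = -grad V with V = mu / |q|, i.e. dp/dt = mu * q / |q|^3
    return np.concatenate([p / np.sqrt(1.0 + p @ p), mu * q / r3])

def section(t, y):               # Poincare surface q1 = 0
    return y[0]
section.direction = 1.0          # keep crossings with dq1/dt > 0, i.e. p1 > 0

# initial condition on the section with relativistic energy E = 0.9
q2, p2 = 0.6, 0.3
p1 = np.sqrt((0.9 - mu / q2) ** 2 - 1.0 - p2 ** 2)
sol = solve_ivp(kepler_rhs, (0.0, 2000.0), [0.0, q2, p1, p2],
                events=section, rtol=1e-10, atol=1e-12)
points = sol.y_events[0]         # section points (q1, q2, p1, p2)
print(points[:5, 1], points[:5, 3])   # (q2, p2) pairs to be plotted
```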
Fig. <ref> presents a pair of Poincaré sections for the
classical and the relativistic Kepler problem made for μ=-1/4 at the respective energy levels E=-0.1 and E=0.9. Plots, visible
in Fig. <ref>, show intersections of trajectories
calculated numerically with the suitably chosen surface of section q_1=0 and the
direction p_1>0. From Fig. <ref>(a), one can notice that
all orbits are closed and hence the motion is periodic. Each point corresponds to
a distinct initial condition. This is due to the fact that the classical Kepler
problem is maximally super-integrable. Taking into account the relativistic
correction, we observe that the pattern of Fig. <ref>(b) is
still very regular but now the motion is quasi-periodic. Nevertheless, there is no
presence of chaotic behaviour at all. The relativistic
Hamiltonian (<ref>) is, in fact, the integrable system and the additional
first integral is the angular momentum L=q_1p_2 - q_2p_1.
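This can be verified directly from the equations of motion; the short computation below is our own check and uses only the fact that the potential is radial, V=V(r) with r=√(q_1^2+q_2^2):
\[
\frac{dL}{dt}=\dot q_1 p_2+q_1\dot p_2-\dot q_2 p_1-q_2\dot p_1
=\frac{p_1p_2-p_2p_1}{\sqrt{1+p_1^2+p_2^2}}
+q_2\frac{\partial V}{\partial q_1}-q_1\frac{\partial V}{\partial q_2}=0,
\]
since for a radial potential \(\partial V/\partial q_i=V'(r)\,q_i/r\), so the last two terms cancel. The same computation applies verbatim to the relativistic isotropic harmonic oscillator considered next.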
§.§ The harmonic oscillator
As the second model, we consider the 2D harmonic oscillator, which is governed by
the following Hamiltonian function
H=1/2(p_1^2+p_2^2)+1/2(q_1^2+α q_2^2),
where α is a positive parameter. For α=1 the system is isotropic
otherwise it is anisotropic. It is obvious that
Hamiltonian (<ref>) is integrable due to its
separability in the Cartesian coordinates. The corresponding relativistic harmonic oscillator is defined as
follows
H=√(1+p_1^2+p_2^2)+1/2(q_1^2+α q_2^2).
Fig. <ref> presents a pair of Poincaré
sections of the classical and the relativistic isotropic harmonic oscillators made
for α=1 at constant energy levels E=E_min+2. Here
E_min denotes the energy minimum of the respective systems. For the non-relativistic
system, we have E_min=0, while for the relativistic case
E_min=1. As the non-relativistic case is known to be
super-integrable, the Poincaré sections visible in
Fig. <ref>(b) suggest only the integrability of the
relativistic harmonic oscillator. This is due to the presence of quasi-periodic
orbits. Indeed, one can show that the
Hamiltonian system defined by (<ref>) is integrable with the
additional first integral L=q_1p_2 - q_2p_1.
Let us consider the anisotropic case. Fig. <ref>
presents a pair of Poincaré sections for classical and relativistic harmonic
oscillators made for α=1/2 at the energy levels E=E_min+58, respectively. In
the non-relativistic case, the Poincaré section (see
Fig. <ref>(a)) presents integrable motion with one
particular periodic solution surrounded by quasi-periodic orbits. The situation
becomes more complex, when the relativistic correction is taken into
account, see Fig. <ref>(b). As we can notice, the
invariant tori become visibly deformed. Some of them are destroyed, and we observe the appearance of stable periodic solutions, which are
enclosed by the separatrices.
Fig. <ref> shows the magnification of a small part
of the Poincaré section in the vicinity of an unstable periodic solution, which
indicates the chaotic nature of the system.
§.§ The Hénon–Heiles system
The classical Hénon-Heiles potential is perhaps one of the most simple,
classical Hamiltonian systems, which can exhibit both integrable and chaotic
dynamics depending on the values of parameters. It is described by means of the following Hamiltonian
H=1/2(p_1^2+p_2^2)+1/2(q_1^2+q_2^2)+α q_1^2q_2+1/3β q_2^3,
where α,β are real parameters. This potential appears in various
problems in physics. For instance, in celestial mechanics <cit.>, in
statistical and quantum mechanics <cit.>, and recently it has been applied also in Hamiltonian neural networks <cit.>, to cite just
a few.
There are three known integrable cases of the Hénon-Heiles
model <cit.>, namely, α=0, β/α=1
and β/α=6. In all cases, additional first integrals are quadratic
polynomials with respect to the momenta, and the Hamiltonian is separable in
appropriate coordinates. It was proved by Ito in <cit.> and later
complemented in <cit.> that the above values of the
parameters are the only ones for which the Hénon-Heiles model is integrable.
The corresponding relativistic Hénon-Heiles system is defined as
follows
H=√(1+p_1^2+p_2^2)+1/2(q_1^2+q_2^2)+α q_1^2q_2+1/3β q_2^3.
Figs. <ref>-<ref> show the Poincaré sections of the
classical and the relativistic Hénon-Heiles model for values of parameters:
α=0, and β/α=1 and β/α=6. As we can notice, the
general shapes of the sections' boundaries are similar, whereas the dynamics presented within
them are completely different. In the non-relativistic cases, we obtain elegant
integrable curves with mostly quasi-periodic solutions. In the relativistic regime,
however, the invariant tori break up into sequences of stable and unstable
periodic solutions. Moreover, in the neighborhood of the unstable periodic
solutions, we observe regions in the section planes where chaotic motion takes place.
This suggests the non-integrability of the relativistic Hénon-Heiles model.
§ MAIN RESULTS
In the previous section, with the help of numerical analysis, we have shown that the super-integrable non-relativistic Kepler problem and the harmonic oscillator are integrable in a relativistic regime. On the other hand, the integrable non-relativistic models of the anisotropic harmonic oscillator and
the Hénon–Heiles system appear to be non-integrable when the relativistic correction is taken into account. The above models were just examples, and an effective criterion
for the identification of integrable relativistic systems is needed. This is the aim of this paper.
At first, we make a simple observation. Note that v=q̇=p/u,
so the Lorentz factor is u = 1/√(1-v^2). In the non-relativistic limit
u→ 1. The power series expansion of Hamiltonian (<ref>) gives
H=√(1+p^2)+V(q)=(1 +1/2p^2+
⋯)+V(q)
=1+H_N+⋯,
where H_N is the non-relativistic natural Hamiltonian
H_N=1/2p^2+V(q).
Thus, in the non-relativistic limit the Hamiltonian (<ref>) becomes (after
neglecting the constant term mc^2=1) the standard non-relativistic
Hamiltonian.
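This expansion is easy to verify symbolically; a minimal sympy check (for two degrees of freedom, with the momenta scaled by a small parameter) is given below.

```python
import sympy as sp

p1, p2, eps = sp.symbols('p1 p2 epsilon', real=True)
# scale the momenta by eps and expand the relativistic kinetic term
T = sp.sqrt(1 + (eps*p1)**2 + (eps*p2)**2)
print(sp.series(T, eps, 0, 5).removeO())
# leading terms: 1 + eps**2*(p1**2 + p2**2)/2 - eps**4*(p1**2 + p2**2)**2/8
```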
When investigating a specific system, it is important to know whether it is integrable. In
general, it is difficult to answer this question. Here we formulate it
for the relativistic versions of classical natural systems. A complete answer can
hardly be expected, as it is not known even for the
non-relativistic ones. Nevertheless, we propose to study the relativistic versions of
those systems for which integrability has been investigated deeply. Here we have in
mind natural systems with homogeneous potentials, which were investigated in
numerous works, see for instance <cit.>.
Let us assume that potential V() in the relativistic Hamiltonian
(<ref>) is a homogeneous function of integer non-zero degree k, that is
V(λ q)=λ^k V(q) for λ>0. If we assign weights: 2 to
coordinates and k to momenta, then in the non-relativistic limit
Hamiltonian (<ref>) is weight homogeneous of weight-degree 2k.
This weight-homogeneity is consistent with the canonical structure. That is, a
non-vanishing Poisson bracket {,} with a weight-homogeneous function
of weight-degree l, is weight-homogeneous of weight-degree k+l -2. Then,
expansion (<ref>) gives a weight-homogeneous expansion of the form
H-1 = ∑_i=0^∞ H_2k+i,
where H_l is a weight-homogeneous function of weight-degree l. Thus,
H_N=H_2k is the leading term of this expansion. Now, if H admits a certain number of functionally independent first integrals, then the weight-homogeneous leading terms of them are first integrals of H_N=H_2k. Thanks
to the Ziglin Lemma <cit.>, we can assume that they are functionally
independent.
From the above, we can deduce the following fact.
If the relativistic Hamiltonian system governed by Hamiltonian (<ref>) with
a homogeneous potential V() is integrable in the Liouville sense, then
its non-relativistic counterpart defined by Hamiltonian given in (<ref>) with the same V() is also integrable in the Liouville sense.
It means that the integrability of a non-relativistic Hamiltonian is a necessary
condition for the integrability of its relativistic counterpart.
Moreover, if potential V() has expansion
V() = V_k() + ⋯,
where V_k() is homogeneous of degree k, and dots denote terms of homogeneous degrees
higher than k, then we still obtain an expansion of the form (<ref>). Hence, if the relativistic Hamiltonian
system defined by (<ref>) with a non-homogeneous potential V() is integrable in the Liouville sense,
then the corresponding non-relativistic system defined by Hamiltonian with potential
V_k() is also integrable in the Liouville sense.
To prove our main results we apply the Morales–Ramis theory. A detailed description with many examples can be found in the book <cit.>, see also
<cit.>. The main idea is to
investigate the variational equations of the considered non-linear Hamiltonian system along a particular non-equilibrium solution. The passage to the variational equations, which are linear, enables the use of the differential Galois group related to them. A first integral of the considered non-linear system generates a first integral of the variational equations, which is also an invariant of the Lie algebra of their differential Galois group. In the case of integrability in the Liouville sense, the number of first integrals and their commutation implies that the Lie algebra of the differential Galois group, and hence the identity component of this group, is Abelian. This reasoning explains the origin of the fundamental theorem of the Morales–Ramis theory.
If a Hamiltonian system is integrable in the Liouville sense in a neighborhood of a particular solution, then the identity component of the
differential Galois group of the variational equations along this solution is Abelian.
The true strength of the above theorem appears when it is applied to classical
systems (<ref>) with homogeneous potentials V() of integer
degree k. For such potentials a non-zero vector d satisfying
V'(d)=γ d,
for a certain non-zero γ, is called a Darboux point of this potential.
Let (λ_1, …,λ_n) denote the eigenvalues of the scaled Hessian
matrix γ^-1V”(d). Vector d is its eigenvector. The corresponding
eigenvalue we denote by λ_n, and, as is easy to show using the homogeneity of the potential,
λ_n=k-1. Since it does not give obstructions to the integrability, it is called the trivial eigenvalue.
Assume that the Hamiltonian system defined by Hamiltonian
(<ref>) with a homogeneous potential V()∈() of
degree k∈ℤ^∗=ℤ∖{0} is integrable in the Liouville sense with
meromorphic first integrals. Then, for each eigenvalue λ of
γ^-1V”(), pair (k,λ) belongs to the following list:
[ k λ; ± 2 λ ; k p + k/2p(p-1) ; k (k p+1) (k p+k-1)/2 k ; 3 1/8 (2 p+1) (6 p+1), 1/96 (12 p+1) (12 p+5); 1/600 (30 p+1) (30 p+11), 1/600 (30 p+7) (30 p+17); 4 1/72 (12 p+1) (12 p+7) ; 5 1/360 (30 p+1) (30 p+19), 1/40 (10 p+1) (10 p+7); -3 -1/8 (2 p-1) (6 p+7), -1/96 (12 p-7) (12 p+13); -1/600 (30 p-19) (30p+31), -1/600 (30 p-13) (30 p+37); -4 -1/72 (12 p-5) (12 p+13) ; -5 -1/360 (30 p-11) (30 p+31) , -1/40 (10 p-3) (10 p+11) ]
where p is an integer and k≠ 0.
It is a really amazing result: testing the integrability is reduced to purely
algebraic calculations. Notice that, except for the case k=± 2, if the system is
integrable, then all eigenvalues (λ_1, …,λ_n) are rational
numbers.
For the formulation of our main results, we introduce the following two functions
f_±(k,p)= 3 k p (2 p+1)+
1/2[1± (4 p+1) √(4 k^2 p (2 p+1)+1)],
and two sets
_±={f_±(k,p) | p∈ℤ}∩ℤ.
Additionally, let 𝒮 denote the set of square triangular numbers.
That is, numbers s such that s = q^2 and s = 1/2 p(p+1) for
certain integers q,p∈ℤ, see e.g. <cit.>. Then, we denote
_1=𝒮, _2= {1/2 p(p+1) | p∈ℤ},
_-1={1-s | s∈𝒮},
_-2={1-s | s∈_2}.
The main result of our analysis has the following form.
Assume that a Hamiltonian system defined by Hamiltonian (<ref>), with
a homogeneous potential V()∈() of degree
k∈ℤ^∗=ℤ∖{0}, is integrable in the Liouville sense with
first integrals, which are rational functions of (q,p,u), where
u=√(1+p^2).
Then,
* if k> 2, each eigenvalue λ=λ_i of matrix
γ^-1V”() with i=1,…, n-1, belongs to the set
_+∪_-;
* if k≤2, k≠0, each eigenvalue λ belongs to the set
_k∪_+∪_-.
Notice that the necessary condition for the integrability is that all
eigenvalues of V”() are integers, so it is more restrictive than in the
non-relativistic case. Moreover, and not so evident, these integers are
extremely rare. For example, taking k=4, we find that among numbers
f_±(k,p) with an integer p≤ 10^6 only 9 are integers.
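This rarity is easy to check by brute force. The sketch below enumerates the integer values of f_±(k,p) by testing when 4k^2p(2p+1)+1 is a perfect square; the symmetric range of p is our choice. Re-running it for k=3,5,6 reproduces the lists quoted in the Applications section.

```python
from math import isqrt

def integer_f_values(k, p_max):
    """Integer elements of {f_+(k,p), f_-(k,p)} for |p| <= p_max."""
    found = set()
    for p in range(-p_max, p_max + 1):
        m2 = 4*k*k*p*(2*p + 1) + 1
        m = isqrt(m2)
        if m*m != m2:          # f is rational only if m is an integer
            continue
        for sign in (1, -1):
            num = 6*k*p*(2*p + 1) + 1 + sign*(4*p + 1)*m
            if num % 2 == 0:   # and an integer exactly when this sum is even
                found.add(num // 2)
    return sorted(found)

print(integer_f_values(4, 10**6)[:8])   # for k = 4: 0, 1, 10, 45, 351, 1540, 11935, ...
```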
In our Theorem <ref> we require that the first integrals depend on the
additional variable u. The reason is the following. We cannot directly use the
Morales–Ramis theory to study the integrability of the system governed by
Hamiltonian (<ref>), because this theory was originally formulated for meromorphic Hamiltonians and the corresponding meromorphic Hamiltonian vector fields. In fact, our Hamiltonian (<ref>) is not a meromorphic
function because it contains the term u=√(1+p^2). However, it is an algebraic
function of the canonical coordinates. To extend the applicability of the Morales–Ramis
theory to the case of algebraic potentials, we can use the approach proposed in
<cit.>. Here we just notice that when we add the additional
variable u=√(1+p^2), the
relativistic system (<ref>) can be rewritten as a Poisson system with
rational right-hand sides and a polynomial Hamiltonian. Indeed, Hamiltonian (<ref>) takes the form
H=u+V(q),
and its corresponding equations of motion (<ref>) transform into
dq/dt=1/u p, dp/dt =-V'(q), du/dt=-1/u p· V'(q),
where we joined the time derivative of the additional variable u. This system
can be written as
ẋ=B ∇_x H, x=(q,p,u)^T,
with the Poisson bivector
B=[ 𝕆 𝕀 1/u p; -𝕀 𝕆 0; -1/u p^T 0^T 0 ],
where 𝕆 and 𝕀 denote the n× n zero matrix and the
identity matrix, respectively, 0 is the n-dimensional column of zeros, T
denotes transposition, and ∇_x H=[V'(q),0,1]^T. The
skew-symmetry of this structure is evident, and the Jacobi identity can be checked by
direct calculation. This Poisson structure has one Casimir function
C=u^2-p^2.
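Both properties can be verified symbolically. The sympy sketch below performs the check for n=2: it tests the skew-symmetry of the bivector, the Jacobi identity, and the Casimir property of u^2-p^2.

```python
import sympy as sp

q1, q2, p1, p2, u = sp.symbols('q1 q2 p1 p2 u', positive=True)
x = [q1, q2, p1, p2, u]

# Poisson bivector for n = 2 in the variables (q1, q2, p1, p2, u)
B = sp.Matrix([
    [0, 0, 1, 0,  p1/u],
    [0, 0, 0, 1,  p2/u],
    [-1, 0, 0, 0, 0],
    [0, -1, 0, 0, 0],
    [-p1/u, -p2/u, 0, 0, 0]])

def jacobi(B, x):
    """Jacobi identity: cyclic sum B^{il} d_l B^{jk} + B^{jl} d_l B^{ki} + B^{kl} d_l B^{ij} = 0."""
    n = len(x)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                expr = sum(B[i, l]*sp.diff(B[j, k], x[l])
                           + B[j, l]*sp.diff(B[k, i], x[l])
                           + B[k, l]*sp.diff(B[i, j], x[l]) for l in range(n))
                if sp.simplify(expr) != 0:
                    return False
    return True

C = u**2 - p1**2 - p2**2                       # Casimir candidate
gradC = sp.Matrix([sp.diff(C, v) for v in x])
print(sp.simplify(B + B.T) == sp.zeros(5, 5))  # skew-symmetry
print(jacobi(B, x))                            # Jacobi identity
print(sp.simplify(B*gradC) == sp.zeros(5, 1))  # C is a Casimir
```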
§ OUTLINE OF THE PROOF
Our main result was formulated as a theorem, so it needs a proof. The proof is quite
long, so we do not present it here in full extent. However, we believe that
it will be profitable for the reader to know the basic ideas behind it.
The starting point in our proof is the Morales-Ramis theorem formulated in Theorem <ref>. For its effective application, we
need a particular solution of the considered system. In general, there is no
universal method of finding it. However, for the classical case of natural Hamiltonian
systems with a homogeneous potential, one can find `straight line' solutions
along a Darboux point of the potential. In the same way, we find particular
solutions for the relativistic version of the system. The variational equations
along such a solution split into independent scalar equations of the second order.
If the system is integrable, then the identity component of the differential
Galois group of each of these scalar equations is Abelian. To check this
property we transform each of these equations into equations with rational
coefficients. This gives us the possibility to use all known results of the classical theory of such
equations <cit.> as well as the Kovacic algorithm <cit.> and its
numerous improvements <cit.>. The difficulty
of the problem is connected to the fact that the variational equations depend
on parameters.
In the relativistic case, the choice of a particular energy level is
important. By its proper choice, we achieve a confluence of singular points of the considered
equation. Thanks to this, we obtain the Riemann P-equation for which the
differential Galois group is known. In this way, we get quite strong necessary
conditions for integrability. For their improvement, we investigated the
variational equations for a generic value of the particular solution energy.
The key step in our reasoning is as follows. If the relativistic model is
integrable, then three types of conditions have to be fulfilled. Namely, the system has
to be integrable in the non-relativistic model, so conditions of
Theorem <ref> have to be fulfilled. Moreover, simultaneously,
conditions obtained from the analysis of variational equations for the specific
and generic values of the energy also have to be fulfilled.
§.§ Particular solution and variational equations
With an arbitrary Darboux point d of the potential V(q), one can associate a straight-line particular solution of the form q(t) = φ(t) d, where
φ=φ(t) is a scalar function. As
q̇=φ̇(t) d, the corresponding momentum can be calculated
from the first half of the Hamilton equations (<ref>). Hence, the particular solution
is given by
q(t) = φ(t) d , p(t)= φ̇ d/√(1-φ̇^2d^2),
where we denoted d^2=d· d. Then, substitution into the second half of Hamilton's equations
(<ref>) gives the second order differential equation
φ̈= -γ(1-φ̇^2d^2)^3/2φ^k-1
for the scalar function φ, provided (<ref>) holds.
Equation (<ref>) has the energy integral
h=1/(d^2√(1-φ̇^2d^2))+
γ/k φ^k.
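That h is indeed a first integral of equation (<ref>) can be confirmed with a short symbolic computation: differentiating h along solutions and eliminating φ̈ by means of (<ref>) gives zero, as the sympy sketch below shows.

```python
import sympy as sp

t = sp.Symbol('t')
k = sp.Symbol('k', integer=True)
gamma, d = sp.symbols('gamma d', positive=True)
phi = sp.Function('phi')(t)
phidot = sp.Derivative(phi, t)

# energy integral of the reduced scalar equation for phi
h = 1/(d**2*sp.sqrt(1 - phidot**2*d**2)) + gamma/k*phi**k

dh = sp.diff(h, t)
# substitute the second-order equation phi'' = -gamma*(1 - phi'^2 d^2)^(3/2) * phi^(k-1)
phiddot = -gamma*(1 - phidot**2*d**2)**sp.Rational(3, 2)*phi**(k - 1)
dh = dh.subs(sp.Derivative(phi, (t, 2)), phiddot)
print(sp.simplify(dh))   # expected output: 0
```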
The variational equations along solution (<ref>) have the form
Ẋ = √(1-φ̇^2d^2)[Y-φ̇^2 d (d· Y)],
Ẏ = -φ(t)^k-2V”(d) X .
Here we used the fact that V”(q) is a homogeneous function of degree
(k-2), hence V”(φ(t) d)=φ^k-2(t)V”(d).
Hessian V”(d) of the potential V calculated at a Darboux point is a
symmetric matrix. Thus, in a generic case, there exists a complex orthogonal
n× n matrix A such that the canonical change of variables
X = A η, Y = A ξ,
transforms V”(d) to its diagonal form with eigenvalues
(λ_1,…, λ_n). A Darboux point d is an
eigenvector of V”(d) corresponding to the eigenvalue
λ_n=γ(k-1). It is transformed into
A^T d=[0,…,0,d]^T. In effect, the variational
equations (<ref>) are transformed to the form
η̇_i = √(1-φ̇(t)^2d^2) ξ_i, ξ̇_i =
-λ_i φ(t)^k-2η_i,
η̇_n = (1-φ̇(t)^2d^2)^3/2ξ_n, ξ̇_n =
-λ_n φ(t)^k-2η_n,
where i=1, …, n-1.
The second order equations for η_i, are as follows
η̈_i +a η̇_i+b η_i=0.
The coefficients a and b for i=1, …, n-1, have the form
a=-d^2γφ(t)^k-1φ̇(t)√(1-φ̇(t)^2d^2),
b=λ_iφ(t)^k-2√(1-φ̇(t)^2d^2),
while for i=n they are
a=-3d^2γφ(t)^k-1φ̇(t)√(1-φ̇(t)^2d^2),
b=λ_nφ(t)^k-2(1-φ̇(t)^2d^2)^3/2.
The last equation, for η_n, describes the variations along the particular solution and it does not give any obstructions for the integrability, so we consider only the first (n-1) equations called normal variational equations.
Next, we make the Yoshida transformation of the independent variable
t⟶ z := (γ d^2/k) φ(t)^k, with γ d k≠ 0,
see <cit.>.
We need the known formulae
dx/dt = ż x', d^2x/dt^2 = ż^2 x” + z̈ x',
'≡d/dz,
where, in our case, ż^2 and z̈ take the form
ż^2=γ k φ^k-2 z [(z-s)^2-1]/(z-s )^2,
z̈=γφ^k-2[z+(k-1) (s +(z-s
)^3)]/(z-s )^3.
Here s=d^2 e, while e is a value of the energy integral h defined by (<ref>),
corresponding to the selected particular solution (<ref>). After
transformation (<ref>) equations (<ref>) read
η_i”+p(z) η_i'+q(z)η_i=0, i=1,…,n-1,
where
p(z)=(k-1)/(k z)+(z-s)/((z-s)^2-1),
q(z)= λ (s -z)/(k z [(z-s)^2-1]),
and λ=λ_i/γ.
All of the above equations have the same form, the only dependence on index i is through λ_i. Hence, we can consider just one equation
η”+p(z) η'+q(z)η=0,
with coefficients p(z) and q(z) given in (<ref>).
For further analysis it is convenient to transform this equation into its
reduced form. We do this by making the following change of the dependent variable
η = w exp[ -1/2∫_z_0^z p(u) du ].
Next, we obtain
w” = r(z) w, r(z) =1/4p(z)^2 + 1/2p'(z) -q(z),
where the explicit form of coefficient r(z) is
r(z) = k^2-1/4k^2 z^2 +
3/16(z-s+1)^2 + 3/16(z-s-1)^2
+ 4s(1-k+2λ)-z(4-5k+8λ)/8kz[(z-s)^2-1].
The above shows that for generic values of the parameters, the reduced equation
has four regular singular points at z=0, z=s± 1, and at z=∞.
Summing up, we show that the normal variational equations are a direct product of
(n-1) second order Fuchsian equations of the form (<ref>). For its solvability analysis, we use the Kovacic
algorithm <cit.>. At the same time, we can
answer the question whether the identity component of its differential Galois
group is Abelian, that is, if the necessary conditions for the integrability
given by Theorem <ref> are fulfilled. The true difficulty
is connected with the fact that the system depends on three parameters: k, λ, and s. However,
we have the freedom to select a value of parameter s. In fact, as s=d^2 e, so
choosing the energy e of the particular solution, we can change the value of
s arbitrarily.
§.§ Generic energy level
Let us first consider the reduced normal variational equation (<ref>),
at a generic value of the
energy with e≠± d^-2. Since this is a second order linear differential
equation with rational coefficients, we determine its differential Galois group with the help of the Kovacic algorithm <cit.>.
This algorithm decides whether the second order linear differential equation (<ref>) is solvable in the class of Liouvillian functions and, as a
by-product, it enables one to determine the differential Galois group of (<ref>). The algorithm consists of
four cases depending on the form of the solutions and the respective differential Galois
groups. In the first three cases, the equation is solvable, and its solutions
depend on a certain polynomial. The degree of this polynomial depends only on
k. The application of this algorithm, in our case, consists in checking whether
the degree of the mentioned polynomial is a non-negative integer. This analysis gives the following proposition.
For s≠ 1, if the identity component of the differential Galois group of equation (<ref>) with coefficient given in (<ref>) is solvable, then pair (k,λ_i) for i=1,…,n-1, belongs to an item
of the following list
[ k λ; k (1 + p) (1 + k p),; k 1/4(3 + 2 p) (2 + k + 2 k p); k 2 k-1/4 k+k/16(p^2-4),; 1 p^2/16, p^2/144, p^2/100, p^2/64,; 2 1/8(p^2-1), p^2/72-1/8, p^2/50-1/8, p^2/32-1/8,; 3 3 p^2/64-1/3, p^2/48-1/3, 3p^2/100-1/3,; 4 1/16(p^2-9), p^2/36-9/16, p^2/25-9/16,; 5 5 p^2/144-4/5, p^2/20-4/5, 5 p^2/64-4/5,; 6 p^2/24-25/24, 3 p^2/32-25/24, 3 p^2/50-25/24,; ]
[ -1 1/16(16-p^2), 1-p^2/144, 1-p^2/100, 1-p^2/64,; -2 1/8(9-p^2), 9/8-p^2/72, 9/8-p^2/50 , 9/8-p^2/32,; -3 4/3-3 p^2/64, 4/3-p^2/48, 4/3-3p^2/100,; -4 1/16(25-p^2), 25/16-p^2/36, 25/16-p^2/25,; -5 9/5-5 p^2/144, 9/5-p^2/20, 9/5-5p^2/64,; -6 49/24-p^2/24, 49/24-3 p^2/32, 49/24-3 p^2/50.; ]
where p is an integer and k≠ 0.
We immediately notice from the above list, that if the system is integrable, then
all eigenvalues of the Hessian are rational numbers. Of course, we know it from
Theorem <ref>, however now we know that these numbers are either
integers or have specific numerators which are different from those given in Theorem <ref>.
§.§ Special energy level
Choosing the energy of the particular solution as e=d^-2, we get
s=1, and so the two singular points at z=0 and at z=s-1 merge. In effect,
we get a system with only three regular singularities z=0, z=2, z=∞.
So, equation (<ref>) is the Riemann P-equation, <cit.>. Conventionally its
singular points are located at z=0, z=1, and in z=∞. We achieve this
by making the change of the variable z ↦ 2 z in equation (<ref>). After this shift, we obtain
w”= r(z) w,
where now
r(z)= 1/4((ρ^2-1)/z^2+(σ^2-1)/(z-1)^2+
(1 +τ^2 -ρ^2-σ^2)/(z(z-1))),
and ρ, σ and τ are the differences of exponents at singular
points at z=0, z=1 and in z=∞, respectively. In our case, they are as follows
ρ= √((k-2)^2+8 k λ)/(2 k),
σ= 1/2,
τ= √((k-1)^2+4 k λ)/k.
This simplification pays off. Necessary and sufficient conditions guaranteeing
the solvability of the Riemann P-equation are known and they are given by the
Kimura theorem <cit.>, see Appendix. For a more detailed analysis,
see <cit.>. Using this theorem, we deduce that our
equation (<ref>), has the following property.
If the identity component of the differential Galois group of equation
(<ref>) is solvable, then eigenvalues λ=λ_i with
i=1, … n-1, are given by
* λ = f_±(k,p), or
* λ=1/2[(k-1)/k+ k p (p+1)], or
* λ=(2 k-1)/(4 k)+k(4 p (p+1)-3)/16,
where p is an arbitrary integer and k≠ 0.
The analysis which leads to the above statements is straightforward but quite
long; this is why we do not present it here.
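Part of this analysis is straightforward to automate. The sketch below computes, for a given pair (k,λ), the exponent differences ρ, σ, τ given above and tests only condition I of the Kimura theorem (the odd-integer condition quoted in the Appendix); condition II, which involves Schwarz's table, is not checked here, so a True answer certifies solvability while a False answer is inconclusive.

```python
import sympy as sp

def kimura_case_I(k, lam):
    """True if one of rho+tau+sigma, -rho+tau+sigma, rho-tau+sigma, rho+tau-sigma is an odd integer."""
    k, lam = sp.Integer(k), sp.nsimplify(lam)
    rho = sp.sqrt((k - 2)**2 + 8*k*lam)/(2*k)
    sigma = sp.Rational(1, 2)
    tau = sp.sqrt((k - 1)**2 + 4*k*lam)/k
    combos = [rho + tau + sigma, -rho + tau + sigma,
              rho - tau + sigma, rho + tau - sigma]
    return any(sp.simplify(c).is_integer and sp.simplify(c) % 2 == 1 for c in combos)

# examples: the values lambda = 0 and lambda = 1 pass the test for k = 3
print(kimura_case_I(3, 0), kimura_case_I(3, 1))
```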
Let us notice that the above proposition and
Theorem <ref> are both deduced from the Kimura Theorem <ref>, see Appendix.
However, for the non-relativistic case Theorem <ref> specifies
17 cases, while for the relativistic version of the system the above
proposition distinguishes only 4 cases.
§.§ Final steps
Let us assume that our Hamiltonian system (<ref>) is integrable in the
Liouville sense. Then, by Theorem <ref> for an arbitrary particular
solution, the identity component of the differential Galois group of the
variational equations is Abelian. Our reasoning is based on the following
implication: if the identity component of the differential Galois group of
(<ref>) is Abelian, then the identity component of the differential
Galois group of (<ref>) and also its reduced form (<ref>)
is also Abelian.
Moreover, the choice of the energy level of the particular solution gives two
different sets of necessary conditions formulated in Proposition
<ref> and in Proposition <ref>. They should be satisfied concomitantly.
At first, notice that, if the system is integrable, then by
Proposition <ref>, all non-trivial eigenvalues λ_i are rational.
Thus, the numbers f_±(k,p) in Proposition <ref> have to be
rational. From the definition of these numbers (<ref>), we deduce that
m=√(4 k^2 p (2 p+1)+1) has to be a rational number. As both k and p are integers, m^2=4 k^2 p (2 p+1)+1 is an odd integer; a rational number whose square is an integer is itself an integer, hence m is an odd integer.
Now, we have
f_±(k,p)= 3 k p (2 p+1)+
1/2[1± (4 p+1) m].
The expression in the square brackets is an even number, so f_±(k,p) are
integers. Hence, they are just elements of sets _± defined
in (<ref>).
The set of numbers defined with odd integer p in the third line of the table
Proposition <ref>, coincides with the set given in item 3 of
Proposition <ref>.
Next, let us examine the family specified in item 2 of
Proposition <ref> which are [k^2 p (1 + p)+k-1]/(2k),
p∈ℤ. Thus, they are irreducible rational numbers of the form s/k
or (2s+1)/(2k), for odd or even k, respectively. A lengthy and
laborious analysis shows that for integer k, |k|>1, these numbers do not
appear in the appropriate families of Proposition <ref>.
If k=±1, then the elements of the family of item 2 of
Proposition <ref> are integers of the form
λ=1/2 p(p+1), or λ=1-1/2 p(p+1), p∈ℤ.
For k=1 integer numbers appear in lines 1, 3 or 4, in the table given in
Proposition <ref>, and they are perfect squares, that is
λ=q^2 for a certain integer q. Similarly, for k=-1 integer numbers
appear in lines 1, 3 or 4, or 10 in the table, and they are of the form
λ=1-q^2. Thus, for k=1 and λ= q^2 we have at the same time
λ=1/2 p(p+1), so λ is a square triangular number, which is
why λ∈_1. Similarly, for k=-1 we have that λ∈_-1.
This part of the reasoning is summarized in the following.
If the identity component of the differential Galois group of equation
(<ref>) is solvable, then eigenvalues λ=λ_i with
i=1, … n-1, are given by
* λ∈_- ∪_+, or
* λ=(2 k-1)/(4 k)+k(4 p (p+1)-3)/16, for p∈ℤ, or
* for k=1, λ∈_k.
In the last step, we recall Proposition <ref>, which says that
the integrability of a relativistic Hamiltonian system with a homogeneous potential implies
the integrability of the corresponding non-relativistic system with the same
potential. Thus, now we have to find the intersection of integrability conditions
formulated in Proposition <ref> with those conditions for
the corresponding non-relativistic model given in
Theorem <ref>. In this way, we eliminate completely the
second item in the above proposition, except for the case k=± 2, when Theorem <ref> does not give any restriction.
The numbers given in the second case of Proposition <ref>
can be written as
λ=(2 k p-k+2) (2 k p+3 k-2)/(16 k).
Thus, these are irreducible numbers of the form (2q+1)/(16k) for k odd. For k even, we have two possibilities.
If k=2(2s+1) for s∈ℤ, then λ is an irreducible rational number of
the form λ = q/(2s+1)=q/(k/2) for a certain integer q; if k=4 s,
then λ is an irreducible rational number of the form λ = (2q+1)/(16 s)=
(2q+1)/(4k) for a certain integer q. Summarizing, the irreducible form of
λ is
λ = q/(16k) for k=2s+1,
q/(4k) for k=4 s,
q/(k/2) for k=2 (2s+1),
where q and s are integers.
Now, we have to check if the numbers of these forms are listed in the table of
Theorem <ref>. At first, we assume that k>2. Rational non-integer numbers are given in rows
3–9 in the table (in the third row for k>2) of
Theorem <ref>. The irreducible form of λ in the third
row of the table in this theorem is either λ=(2q+1)/(2k) for even k,
or λ=q/k for odd k. Thus, they cannot coincide with numbers of the
form (<ref>).
If k=± 1, then admissible eigenvalues from Theorem <ref> belong to
families in lines 2 and 3, are integer numbers and they have forms (<ref>). Thus, they are
elements of sets _±1, as stated in the
Proposition <ref> and in Theorem <ref>.
Finally, for k=±2 Theorem <ref> does not give any
restriction. However, as it is easy to verify, the second case in
Proposition <ref> gives numbers of the forms (<ref>), and
thus they are elements of sets _±2 as it is claimed in
Theorem <ref>.
§ APPLICATIONS
When considering the integrability of natural Hamiltonian systems of the form
(<ref>), or (<ref>), it is convenient to identify
potentials V() and V_A():=V(A), where
A∈PO(2,)⊂GL(2,) for all A∈PO(2,).
Here by PO(2,) we denote the two-dimensional complex projective
orthogonal group, that is the group 2× 2 complex matrices A such that
AA^T= α E for a certain non-zero α∈. Clearly, integrability
of a particular potential from a given class implies the integrability of potentials
from this class.
Let us consider the relativistic systems with two degrees of freedom and
homogeneous potentials. Among them two families are integrable in both
relativistic and non-relativistic regimes, namely:
* if V=V(q_1) then additional first integral is F=p_2;
* if V=V(r) with r=√(q_1^2 +q_2^2), then
the additional first integral is
F=q_1p_2-q_2p_1
Now, let us consider only polynomial homogeneous potentials. The case of polynomial
potentials of degree one is covered by the first of the above cases. Thus, we
start with homogeneous potentials of degree k=2. In the non-relativistic case,
such potentials are always integrable. Let us consider such potentials in the
relativistic model. A real homogeneous potential of degree 2 can be transformed
to a form of the anisotropic harmonic oscillator, which is given by
V=q_1^2+α q_2^2.
This potential has two Darboux points. The Hessians at these points have non-trivial eigenvalues α and α^-1, respectively. We proved that if the system is
integrable, then both these numbers have to be integers, so α=± 1.
Moreover, α and α^-1 must belong to
_+∪_-∪_2, see Theorem <ref>. As 1∈_+, the case with α=1 satisfies the necessary conditions for the integrability and is integrable. On the other hand, -1∉_+∪_-∪_2, so for α=-1 the relativistic
system with potential (<ref>) is not integrable.
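This computation is easily reproduced. The sympy sketch below verifies that the two coordinate directions are Darboux points of (<ref>) (for α≠1 these are, up to scaling, the only ones) and that the scaled Hessians have the non-trivial eigenvalues α and α^-1.

```python
import sympy as sp

q1, q2, alpha = sp.symbols('q1 q2 alpha', nonzero=True)
V = q1**2 + alpha*q2**2
grad = sp.Matrix([sp.diff(V, q1), sp.diff(V, q2)])
H = sp.hessian(V, (q1, q2))

# candidate Darboux points along the coordinate axes (for alpha != 1 the condition
# V'(d) = gamma*d forces q1*(2 - gamma) = 0 and q2*(2*alpha - gamma) = 0)
for d, gamma in [(sp.Matrix([1, 0]), 2), (sp.Matrix([0, 1]), 2*alpha)]:
    assert sp.simplify(grad.subs({q1: d[0], q2: d[1]}) - gamma*d) == sp.zeros(2, 1)
    scaled = (H/gamma).subs({q1: d[0], q2: d[1]})
    print(d.T, gamma, scaled.eigenvals())   # trivial eigenvalue k-1 = 1 and alpha or 1/alpha
```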
As the next example, we consider cases of polynomial potentials of degree k>2.
The first elements of the set _+∪_-, for respective k=3,4,5,6 are as follows
{ 0, 1, 5, 40, 176, 1365,5985,…},
{0, 1, 10, 45, 351, 1540, 11935,…},
{ 0, 1, 540, 1729, 18361, 58752,…},
{0, 1, 21, 56, 736, 1925, 25025,…}.
In <cit.>, see also <cit.>, it was proved that there are six families of
integrable homogeneous polynomial potentials of degree 3 with one or three different Darboux points with the following non-trivial eigenvalues of the Hessian
* V_1=q_1^3+α q_2^3, where α∈^⋆ with 0,0,2,
* V_2= 1/2q_1^2 q_2 + q_2^3 with 1/3,5,5,
* V_3= 1/2q_1^2 q_2 + 8/3q_2^3, with 1/8,15,15,
* V_4 = ±√(3)/18q_1^3 + 1/2q_1^2 q_2 + q_2^3, with 1/3,10/3,15,
* V_5=q_1^3, with 0,
* V_6=1/3(q_2∓ q_1)^2[q_2±2 q_1] with 2,
respectively. According to our main Theorem <ref>, all non-trivial eigenvalues of the Hessians evaluated at all Darboux points must be integer numbers belonging to the set _+∪_- given explicitly in the first line of equation (<ref>). But this holds only for the potential V_5=q_1^3, which has only one Darboux point with non-trivial eigenvalue equal to 0, and it is indeed integrable in the relativistic model with the additional first integral F=p_2. For the remaining potentials at least one eigenvalue does not belong to the set _+∪_-.
Similarly, a list of known integrable families of homogeneous polynomial
potentials of degree 4 and 5 can be found in <cit.>. In
<cit.> it was proved that this list is complete. Using our
Theorem <ref> one can easily check that among them only potentials
V=q_1^4, V=(q_1^2+q_2^2)^2 and V=q_1^5 are integrable in the
relativistic model, which is obvious.
For arbitrary integer k≥ 2 there exist two integrable non-relativistic
homogeneous polynomial potentials which are separable in parabolic and Cartesian coordinates, respectively. At first let us consider potentials
separable in parabolic coordinates, which have the form
V_p=∑_i=0^[k/2]2^-2i\binom{k-i}{i}q_1^2iq_2^k-2i.
Since they are separable, they are also integrable in the non-relativistic
model. However, in the relativistic regime none of these potentials is
integrable. To show this we notice that these potentials have k Darboux
points, and the non-trivial eigenvalues of the Hessians at these points are
(k-1)/(2k), k+2,…,k+2 (the latter repeated k-1 times).
As for k≥ 2 the number (k-1)/(2k) is not an integer, the
necessary conditions for the integrability given in our Theorem <ref>
are not fulfilled, so the system is not integrable.
As a last but not least important example, let us consider potentials separable in Cartesian coordinates, which have
the form
V_C = q_1^k +α q_2^k, α≠ 0.
These potentials have k Darboux points, and the non-trivial eigenvalues of the Hessians at these points are
0, 0, k-1,…,k-1 (the latter repeated k-2 times).
Now, according to our theorem, if the system is integrable, then: 0 ∈_+∪_- and k-1 ∈_+∪_-. One can notice that
0∈_- for any k (just substitute p=0). If k-1 ∈_+∪_-, then from the definition of the sets _+
and _-, we deduce that the equation
3 k p (2 p+1)+1/2[1± (4 p+1) √(4 k^2 p (2
p+1)+1)]=k-1,
for a given integer k>2, has to possess an integer solution p∈ℤ.
From this equality we get
[1-k+p+kp(p-1) ][2-k-4p+4kp(p+2) ]=0.
Solving with respect to k gives
k=(p+1)/(1+p(1-p)) or k=2(2p-1)/(4 p (2 + p)-1).
It has integer solutions (k,p), with integer k≥ 3 and p∈ℤ, only for
k=10, which corresponds to p=-2 in the second solution. Summarizing, we proved
that, except for k=10, potentials (<ref>) are not integrable in the
relativistic model.
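The statement about integer solutions can be confirmed by a direct search. The sketch below scans a finite range of p (our choice; for large |p| both expressions have modulus smaller than 3), keeps the integer values k≥3, and re-checks that the resulting pair indeed satisfies f_±(k,p)=k-1.

```python
from math import isqrt
from fractions import Fraction

def f_vals(k, p):
    """Exact values f_+(k,p), f_-(k,p), or [] if they are irrational."""
    m2 = 4*k*k*p*(2*p + 1) + 1
    m = isqrt(m2)
    if m*m != m2:
        return []
    return [Fraction(6*k*p*(2*p + 1) + 1 + s*(4*p + 1)*m, 2) for s in (1, -1)]

hits = []
for p in range(-1000, 1001):
    for num, den in [(p + 1, 1 + p*(1 - p)), (2*(2*p - 1), 4*p*(2 + p) - 1)]:
        if den != 0 and num % den == 0:
            k = num // den
            if k >= 3 and (k - 1) in f_vals(k, p):
                hits.append((k, p))
print(sorted(set(hits)))        # expected output: [(10, -2)]
```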
Our Theorem <ref> gives necessary conditions for the
integrability, so potentials (<ref>) with k=10 can be
non-integrable. Indeed, the Poincaré section, visible in Fig. <ref>,
suggests the non-integrability of the system. We can prove this. Notice that in
Proposition <ref> we extracted only a part of the conditions guaranteeing
non-integrability. Knowing the explicit form of the potential, one can perform
the Kovacic algorithm till the end, i.e., not only check that a polynomial that is included in a solution of the normal variational equation has a certain non-negative integer degree, but actually find it. Simple calculations, which we do not present here, show that for α≠ 0 such a polynomial does not exist. This implies that the identity component of the differential Galois group of the normal variational equations is not solvable, thus, in particular, non-Abelian, and hence the relativistic Hamiltonian system with potential (<ref>) for k=10 is not integrable.
§ REMARKS AND COMMENTS
Our main result, formulated in Theorem <ref> and its applications
presented in Section <ref>, show that only in very exceptional cases relativistic versions of classical systems are integrable. It seems premature to conjecture that only systems with radial potentials and potentials
depending on one variable are integrable. Still, there are many open questions concerning the integrability in this context.
Let us remark here on an amazing fact concerning the necessary conditions for integrability given in Theorem <ref>. They state that the eigenvalues of the Hessian of the potential evaluated at a Darboux point have to be integer numbers of
very special forms. It appears that these numbers, which belong to _+∪_-, can be expressed with the help of solutions of
the famous Pell equation
U^2 -D V^2 =1,
see e.g. <cit.>. In our case the parameter D takes the form D=32k^2. We proved that if numbers
λ_n∈_±, then they are given by the following recurrent relation
λ_n+3=a(λ_n+2-λ_n+1)+λ_n, a=4U_1^2-1,
where (U_1,V_1) is the fundamental solution of equation (<ref>).
For these numbers, one can find explicit formulae
λ_n = [-128 k^2(k^2-8k +6)+η_∓ Z_- +η_± Z_+]/(2048 k^3),
where
Z_± = (X_0± 4 √(2) k Y_0)^2 (U_1± 4 √(2) k V_1)^{2n}.
Here (X_0,Y_0) is a particular solution of the general Pell equation
X^2 - D Y^2 = N, N= 64k^2(k^2-2),
and η_±= 3± 2 √(2).
The above formulae are important for theoretical investigations.
Here we mention
one open problem. In <cit.> it was shown that a generic polynomial potential of
degree k has k Darboux points d_i, and the non-trivial eigenvalues of
its scaled Hessians, λ_i= tr[γ^-1 V”(d_i)]-(k-1), satisfy the following universal relation
∑_i=1^k 1/(λ_i -1)=-1,
and this property is still valid in a relativistic regime. If the potential is integrable, then we know that all eigenvalues are integers.
Thus, the question is what are the integer solutions of equation (<ref>) with λ_i∈_+∪_- for i=1, …, k, for k≥3. As a
matter of fact, we found just one such solution (for k=10), with non-trivial eigenvalues given in (<ref>). Nevertheless, we have shown that the relativistic Hamilton equations with the corresponding potential (<ref>) are not integrable. We conjecture that there are no other solutions, so in the relativistic model all generic polynomial potentials are not integrable. For n≥ 3 degrees of freedom a generic polynomial potential has N=[(k-1)^n-1]/(k-2) Darboux points d_i with n-1 non-trivial eigenvalues λ_1(d_i),…,λ_n-1(d_i) of the scaled Hessian γ^-1V”(d_i). Then between the non-trivial eigenvalues at these points there exist n universal relations, see <cit.>. Among them, one,
∑_i=1^N(1/(λ_1(d_i)-1)+⋯+1/(λ_n-1(d_i)-1))
=-[(k-1)^n-n(k-2)-1]/(k-2)^2,
is the generalisation of the one in (<ref>), and using it one can expect similar results as for n=2.
§ DISCUSSION AND CONCLUSIONS
The article is devoted to the integrability analysis of Hamilton equations (<ref>) generated by Hamiltonian (<ref>) describing a
relativistic particle moving in an external potential V() in the limit of a
weak external field. Because the kinetic energy is no longer a quadratic form in
the momenta these equations differ significantly from their non-relativistic
counterparts.
We restrict our analysis to relativistic Hamiltonian systems with homogeneous potentials of an integer non-zero degree k. We noticed a direct relation between the integrability of relativistic and corresponding non-relativistic Hamiltonian
systems formulated in Proposition <ref> and
Theorem <ref>. Proposition <ref> is based on the
expansion (<ref>) of the relativistic Hamiltonian into a power
series of terms that are weight-homogeneous functions; the lowest term, of
weight-degree 2k, is exactly the corresponding non-relativistic Hamiltonian
with this potential. This implies that for the integrability of the relativistic
system the integrability of the corresponding non-relativistic one is necessary.
Expansion (<ref>) also explains the observation that a relativistic
Hamiltonian system with a potential that is integrable in the non-relativistic framework is usually non-integrable. This is the case, for example, for separable
potentials. Namely, relativistic Hamiltonian systems can be considered as
perturbations of non-relativistic ones. As is well known from the KAM theory,
perturbations of integrable systems usually destroy integrability. However, in a
classical setting, by a perturbation we understand a small change of the
potential, whereas in the relativistic version the perturbation consists in
a complete change of the kinetic energy term in the Hamiltonian function. Clearly,
the consequences of this change need a deeper investigation. Let us mention here
simple observations. First of all, the relativistic version (<ref>) of
the classical system (<ref>) has the same equilibria. Moreover,
in both cases they are of the same stability type, even more, linearizations at
these points coincide. It is no longer true with periodic solutions. Examples
given in Section II show this explicitly.
The main result of this paper is Theorem <ref>, which contains
necessary integrability conditions formulated in terms of admissible values of the
non-trivial eigenvalues of the rescaled Hessian matrix γ^-1V”(d).
Namely, all of them must be integer numbers of appropriate form depending on
k. Let us notice that these conditions are much stronger than those for
non-relativistic systems with homogeneous potentials given in
Theorem <ref>, where non-integer rational eigenvalues of
γ^-1V”() were also admissible. The strength of the obtained conditions
stems from the fact that they were derived by intersecting the conditions obtained
from the analysis of the differential Galois group of the variational equations along
two different particular solutions with the conditions for non-relativistic
potentials.
Application of the obtained conditions to homogeneous polynomial potentials of small degree allows us to presume that the only
integrable relativistic Hamiltonian systems with such potentials are those with radial potentials
V(r) and those which depend on only one coordinate V(q_1).
Obviously in applications the problem of the integrability of relativistic systems
(<ref>) with non-homogeneous potentials appears. The differential
Galois obstructions formulated in Theorem <ref> can be applied provided a particular, non-equilibrium solution is known. In the case when a non-homogeneous potential is a sum of a radial potential
r^k=(q_1^2+⋯+q_n^2)^k/2, k∈ℤ, and a homogeneous potential V_h() of
degree l, l≠ k, i.e., V()=r^k+V_h(), that admits a Darboux
point d satisfying (<ref>), then one can still construct a particular solution of the Hamilton equations with the non-homogeneous potential
V() by means of d. We do not analyse this class of potentials with
arbitrary r^k and V_h() because variational equations are too
complicated but for particular radial and homogeneous potentials such analysis
is possible.
§ RIEMANN P-EQUATION
The Riemann P-equation <cit.>, is the most general
second-order differential equation with three regular singularities.
If we place these singularities at z=0,1,∞, then it has the
form
d^2ξ/dz^2+((1-α-α')/z+
(1-γ-γ')/(z-1))dξ/dz
+
(αα'/z^2+γγ'/(z-1)^2+
(ββ'-αα'-γγ')/(z(z-1)))ξ=0,
where (α,α'), (γ,γ') and (β,β') are the
exponents at singular points. Exponents satisfy the Fuchs relation
α+α'+γ+γ'+β+β'=1.
We denote differences of exponents by
ρ=α-α', σ=γ-γ', τ=β-β'.
For equation (<ref>) the necessary and sufficient conditions for the solvability of the identity
component of its differential Galois group are given in the following theorem due to Kimura <cit.>, see
also <cit.>.
The identity component of the differential Galois group of
equation (<ref>) is solvable if and only if
I: at least one of the four numbers ρ+τ+σ,
-ρ+τ+σ, ρ-τ+σ, ρ+τ-σ is an odd
integer, or
II: the numbers ρ or -ρ and τ or -τ and
σ or -σ belong (in arbitrary order) to some of the
following fifteen families forming the so-called Schwarz’s Table <ref>.
§ ACKNOWLEDGEMENTS
This research has been founded by The
National Science Center of Poland under Grant No.
2020/39/D/ST1/01632. For the purpose of Open
Access, the authors have applied a CC-BY public
copyright license to any Author Accepted Manuscript
(AAM) version arising from this submission.
§ DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
[Guha and Garai(2021)]Guha:21::
P. Guha and S. Garai.
Relativistic formulation of curl force, relativistic Kapitza
equation and trapping.
Nonlinear Dyn 111, 9863–9874, 2023.
[Friedrich and Wintgen(1989)]Friedrich:89::
H. Friedrich and H. Wintgen.
The hydrogen atom in a uniform magnetic field — an example of
chaos.
Physics Reports, 1830 (2):0 37–79, 1989.
[Avazbaev et al.(2006)Avazbaev, Matrasulov, and
Khabibullaev]Avazbaev:06::
S. K. Avazbaev, D. U. Matrasulov, and P. K. Khabibullaev.
The largest Lyapunov exponents for the relativistic hydrogen-like
atom in a uniform magnetic field.
In Non-Linear Dynamics and Fundamental Interactions, pages
179–184, Dordrecht, 2006. Springer Netherlands.
[Babusci et al.(2013)Babusci, Dattoli, Quattromini, and
Sabia]Babusci:13::
D. Babusci, G. Dattoli, M. Quattromini, and E. Sabia.
Relativistic harmonic oscillator, the associated equations of motion,
and algebraic integration methods.
Phys. Rev. E, 87:0 033202, 2013.
[Vieira and Michtchenko(2018)]Vieira:18::
Ronaldo S.S. Vieira and Tatiana A. Michtchenko.
Relativistic chaos in the anisotropic harmonic oscillator.
Chaos, Solitons & Fractals, 117:0 276–282, 2018.
[Tung(2021)]Tung:21::
M. M. Tung.
The relativistic harmonic oscillator in a uniform gravitational
field.
Mathematics, 90 (4), 2021.
[Aktaş(2020)]Akta:20::
M. F. Aktaş.
Periodic solutions of relativistic Liénard-type equations.
Electron. J. Qual. Theory Differ. Equ., page 12, 2020.
[Haas(2021)]Haas:21::
F. Haas.
Relativistic Ermakov–Milne–Pinney systems and first
integrals.
Physics, 30 (1):0 59–70, 2021.
[Bernal et al.(2022)Bernal, Seoane, and Sanjuán]Bernal:22::
Juan D. Bernal, Jesús M. Seoane, and Miguel A. F. Sanjuán.
Relativistic chaotic scattering.
In Dimitri Volchenkov and J. A. Tenreiro Machado, editors,
Mathematical Methods in Modern Complexity Science, pages 33–62.
Springer International Publishing, Cham, 2022.
[Bernal et al.(2018)Bernal, Seoane, and Sanjuán]Bernal:18::
Juan D. Bernal, Jesús M. Seoane, and Miguel A. F. Sanjuán.
Uncertainty dimension and basin entropy in relativistic chaotic
scattering.
Phys. Rev. E, 97:0 042214, 2018.
[Nieto et al.(2018)Nieto, Seoane, Alvarellos, and
Sanjuán]Nieto:18::
Alexandre R. Nieto, Jesús M. Seoane, J. E. Alvarellos, and Miguel A. F.
Sanjuán.
Resonant behavior and unpredictability in forced chaotic scattering.
Phys. Rev. E, 98:0 062206, 2018.
[Fernández et al.(2020)Fernández, López, Seoane, and
Sanjuán]Fernandez:20::
D. S. Fernández, Á. G. López, J. M. Seoane, and M. A. F. Sanjuán.
Transient chaos under coordinate transformations in relativistic
systems.
Phys. Rev. E, 101:0 062212, 2020.
[Fujiwara et al.(2018)Fujiwara, Geiger, Singh, Senaratne, Rajagopal,
Lipatov, Shimasaki, and Weld]Fujiwara_2018
K. M. Fujiwara, Z. A. Geiger, K. Singh, R. Senaratne, S. V. Rajagopal,
M. Lipatov, T. Shimasaki, and D. M. Weld.
Experimental realization of a relativistic harmonic oscillator.
New J. Phys., 200 (6):0 063027, 2018.
[Goldstein et al.(2002)Goldstein, Poole Jr., and Safko]Goldstein:02::
Herbert Goldstein, Charles P. Poole Jr., and John L. Safko.
Classical mechanics.
Addison-Wesley Series in Physics. Addison-Wesley Publishing Co.,
Reading, Mass., third edition, 2002.
[Chanda and Guha(2018)]Chanda:18::
Sumanto Chanda and Partha Guha.
Geometrical formulation of relativistic mechanics.
Int. J. Geom. Methods Mod. Phys., 150 (04):0
1850062, 2018.
[Gomes and Ambika(2022)]Gomes:22::
Derek C. Gomes and G. Ambika.
Frequency locking, quasiperiodicity, and chaos due to special
relativistic effects.
In Walter Lacarbonara, Balakumar Balachandran, Michael J. Leamy, Jun
Ma, J. A. Tenreiro Machado, and Gabor Stepan, editors, Advances in
Nonlinear Dynamics, pages 495–505, Cham, 2022. Springer International
Publishing.
[Hénon and Heiles(1964)]Henon:64::
Michel Hénon and Carl Heiles.
The applicability of the third integral of motion: Some numerical
experiments.
Astronom. J., 69:0 73–79, 1964.
[Ford(1973)]Ford:73::
Joseph Ford.
The Transition from Analytic Dynamics to Statistical
Mechanics, pages 155–185.
John Wiley and Sons, Ltd, 1973.
ISBN 9780470143766.
[Mattheakis et al.(2022)Mattheakis, Sondak, Dogra, and
Protopapas]Mattheakis:22::
Marios Mattheakis, David Sondak, Akshunna S. Dogra, and Pavlos Protopapas.
Hamiltonian neural networks for solving equations of motion.
Phys. Rev. E, 1050 (6):0 Paper No. 065305,
2022.
[Chang et al.(1982)Chang, Tabor, and Weiss]Chang:82::
Y. F. Chang, M. Tabor, and J. Weiss.
Analytic structure of the Hénon-Heiles Hamiltonian in
integrable and nonintegrable regimes.
J. Math. Phys., 230 (4):0 531–538, 1982.
[Grammaticos et al.(1983)Grammaticos, Dorizzi, and
Ramani]Grammaticos:83::
B. Grammaticos, B. Dorizzi, and A. Ramani.
Integrability of Hamiltonians with third- and fourth-degree
polynomial potentials.
J. Math. Phys., 240 (9):0 2289–2295, 1983.
[Ito(1985)]Ito:85::
Hidekazu Ito.
Non-integrability of Hénon-Heiles system and a theorem of
Ziglin.
Kodai Math. J., 80 (1):0 120–138, 1985.
[Morales-Ruiz(1999)]Morales:99::
J. J. Morales-Ruiz.
Differential Galois theory and non-integrability of
Hamiltonian systems.
Progress in Mathematics, Birkhauser Verlag, Basel, 1999.
[Li and Shi(2011)]Li:11::
Wenlei Li and Shaoyun Shi.
Non-integrability of Hénon-Heiles system.
Celestial Mech. Dynam. Astronom., 1090 (1):0
1–12, 2011.
[Hietarinta(1983)]Hietarinta:83::
Jarmo Hietarinta.
A search for integrable two-dimensional Hamiltonian systems with
polynomial potential.
Phys. Lett. A, 960 (6):0 273–278, 1983.
[Hietarinta(1987)]Hietarinta:87::
Jarmo Hietarinta.
Direct methods for the search of the second invariant.
Phys. Rep., 1470 (2):0 87–154, 1987.
[Yoshida(1987)]Yoshida:87::
Haruo Yoshida.
A criterion for the nonexistence of an additional integral in
Hamiltonian systems with a homogeneous potential.
Phys. D, 290 (1-2):0 128–142, 1987.
[Yoshida(1989)]Yoshida:89::
Haruo Yoshida.
A criterion for the nonexistence of an additional analytic integral
in Hamiltonian systems with n degrees of freedom.
Phys. Lett. A, 1410 (3-4):0 108–112, 1989.
[Almeida et al.(1998)Almeida, Moreira, and Santos]Almeida:1998::
M. A. Almeida, I. C. Moreira, and F. C. Santos.
On the ziglin-yoshida analysis for some classes of homogeneous
hamiltonian systems.
Brazilian J. Phys., 280 (4):0 470–480, 1998.
[Nakagawa and Yoshida(2001)]Nakagawa:01::
Katsuya Nakagawa and Haruo Yoshida.
A list of all integrable two-dimensional homogeneous polynomial
potentials with a polynomial integral of order at most four in the momenta.
J. Phys. A, 340 (41):0 8611–8630, 2001.
[Morales-Ruiz and Ramis(2001a)]Morales:01::a
Juan J. Morales-Ruiz and Jean Pierre Ramis.
A note on the non-integrability of some Hamiltonian systems with a
homogeneous potential.
Methods Appl. Anal., 80 (1):0 113–120,
2001a.
[Morales Ruiz(1999)]Morales:99::c
Juan J. Morales Ruiz.
Differential Galois theory and non-integrability of
Hamiltonian systems, volume 179 of Progress in Mathematics.
Birkhäuser Verlag, Basel, 1999.
[Maciejewski and Przybylska(2004)]mp:04::d
Andrzej J. Maciejewski and Maria Przybylska.
All meromorphically integrable 2D Hamiltonian systems with
homogeneous potentials of degree 3.
Phys. Lett. A, 3270 (5-6):0 461–473, 2004.
[Maciejewski and Przybylska(2005)]mp:05::c
Andrzej J. Maciejewski and Maria Przybylska.
Darboux points and integrability of Hamiltonian systems with
homogeneous polynomial potential.
J. Math. Phys., 460 (6):0 062901, 33 pp.,
2005.
[Nakagawa et al.(2005)Nakagawa, Maciejewski, and Przybylska]mp:05::d
Katsuya Nakagawa, Andrzej J. Maciejewski, and Maria Przybylska.
New integrable Hamiltonian system with quartic in momenta first
integral.
Phys. Lett. A, 3430 (1-3):0 171–173, 2005.
[Maciejewski et al.(2008)Maciejewski, Przybylska, and
Yoshida]mp:08::c
Andrzej J. Maciejewski, Maria Przybylska, and Haruo Yoshida.
Necessary conditions for super-integrability of hamiltonian systems.
Phys. Lett. A, 3720 (34):0 5581–5587, 2008.
[Przybylska(2009a)]mp:09::a
Maria Przybylska.
Darboux points and integrability of homogenous Hamiltonian systems
with three and more degrees of freedom.
Regul. Chaotic Dyn., 140 (2):0 263–311,
2009a.
[Przybylska(2009b)]mp:09::b
Maria Przybylska.
Darboux points and integrability of homogenous Hamiltonian systems
with three and more degrees of freedom. Nongeneric cases.
Regul. Chaotic Dyn., 140 (3):0 349–388,
2009b.
[Casale et al.(2010)Casale, Duval, Maciejewski, and
Przybylska]mp:10::a
Guy Casale, Guillaume Duval, Andrzej J. Maciejewski, and Maria Przybylska.
Integrability of Hamiltonian systems with homogeneous potentials of
degree zero.
Phys. Lett. A, 3740 (3):0 448–452, 2010.
[Maciejewski and Przybylska(2010)]mp:10::b
Andrzej J. Maciejewski and Maria Przybylska.
Partial integrability of hamiltonian systems with homogeneous
potentials.
Regul. Chaotic Dyn., 150 (4-5):0 551–563,
2010.
[Maciejewski et al.(2012)Maciejewski, Przybylska, and
Yoshida]mp:12::a
Andrzej J. Maciejewski, Maria Przybylska, and Haruo Yoshida.
Necessary conditions for the existence of additional first integrals
for Hamiltonian systems with homogeneous potential.
Nonlinearity, 250 (2):0 255–277, 2012.
[Studziński and Przybylska(2013)]mp:13::a
Michał Studziński and Maria Przybylska.
Darboux points and integrability analysis of Hamiltonian systems
with homogeneous rational potentials.
Physica D, 249:0 1–15, 2013.
[Szumiński et al.(2015)Szumiński, Maciejewski, and
Przybylska]mp:15::d
Wojciech Szumiński, Andrzej J. Maciejewski, and Maria Przybylska.
Note on integrability of certain homogeneous hamiltonian systems.
Phys. Lett. A, 3790 (45-46):0 2970–2976,
2015.
[Maciejewski and Przybylska(2016)]mp:16::a
Andrzej J. Maciejewski and Maria Przybylska.
Integrability of hamiltonian systems with algebraic potentials.
Phys. Lett. A, 3800 (1-2):0 76–82, 2016.
[Maciejewski et al.(2017)Maciejewski, Szumiński, and
Przybylska]mp:17::b
Andrzej J. Maciejewski, Wojciech Szumiński, and Maria Przybylska.
Note on integrability of certain homogeneous hamiltonian systems in
2d constant curvature spaces.
Phys. Lett. A, 3810 (7):0 725–732, 2017.
[Llibre and Zhang(2018)]Llibre:18::
Jaume Llibre and Xiang Zhang.
On the integrability of the Hamiltonian systems with homogeneous
polynomial potentials.
Appl. Math. Nonlinear Sci., 30 (2):0 527–535,
2018.
[Combot et al.(2020)Combot, Maciejewski, and Przybylska]mp:20::e
Thierry Combot, Andrzej J. Maciejewski, and Maria Przybylska.
Bi-homogeneity and integrability of rational potentials.
J. Differential Equations, 2680 (11):0
7012–7028, 2020.
[Audin(2001)]Audin:01::
Michèle Audin.
Les systèmes hamiltoniens et leur intégrabilité,
volume 8 of Cours Spécialisés [Specialized Courses].
Société Mathématique de France, Paris, 2001.
[Morales-Ruiz and Ramis(1999)]Morales:99::b
Juan J. Morales-Ruiz and Jean Pierre Ramis.
Galoisian obstructions to integrability of Hamiltonian systems:
statements and examples.
In Hamiltonian systems with three or more degrees of freedom
(S'Agaró, 1995), volume 533 of NATO Adv. Sci. Inst. Ser. C Math.
Phys. Sci., pages 509–513. Kluwer Acad. Publ., Dordrecht, 1999.
[Morales-Ruiz and Ramis(2001b)]Morales:01::b1
Juan J. Morales-Ruiz and Jean Pierre Ramis.
Galoisian obstructions to integrability of Hamiltonian systems.
I.
Methods Appl. Anal., 80 (1):0 33–95,
2001b.
ISSN 1073-2772.
[Morales-Ruiz and Ramis(2001c)]Morales:01::b2
Juan J. Morales-Ruiz and Jean Pierre Ramis.
Galoisian obstructions to integrability of Hamiltonian systems.
II.
Methods Appl. Anal., 80 (1):0 97–111,
2001c.
ISSN 1073-2772.
[Dickson(1966)]Dickson:66::
Leonard Eugene Dickson.
History of the theory of numbers. Vol. II: Diophantine
analysis.
Chelsea Publishing Co., New York, 1966.
[Combot(2013)]Combot:13::
Thierry Combot.
A note on algebraic potentials and Morales-Ramis theory.
Celestial Mech. Dynam. Astronom., 1150 (4):0
397–404, 2013.
[Ince(1944)]Ince:44::
E. L. Ince.
Ordinary Differential Equations.
Dover Publications, New York, 1944.
[Poole(1960)]Poole:60::
E. G. C. Poole.
Introduction to the theory of linear differential equations.
Dover Publications Inc., New York, 1960.
[Kovacic(1986)]Kovacic:86::
Jerald J. Kovacic.
An algorithm for solving second order linear homogeneous differential
equations.
J. Symbolic Comput., 20 (1):0 3–43, 1986.
[Duval and Loday-Richaud(1992)]Duval:92::
Anne Duval and Michèle Loday-Richaud.
Kovačič's algorithm and its application to some families of
special functions.
Appl. Algebra Engrg. Comm. Comput., 30 (3):0
211–246, 1992.
[Ulmer and Weil(1996)]Ulmer:96::
Felix Ulmer and Jacques-Arthur Weil.
Note on Kovacic's algorithm.
J. Symbolic Comput., 220 (2):0 179–200, 1996.
[Whittaker and Watson(1935)]Whittaker:35::
E. T. Whittaker and G. N. Watson.
A Course of Modern Analysis.
Cambridge University Press, London, 1935.
[Kimura(1969/1970)]Kimura:69::
Tosihusa Kimura.
On Riemann's equations which are solvable by quadratures.
Funkcial. Ekvac., 12:0 269–281, 1969/1970.
[Maciejewski and Przybylska(2020)]mp:20::f
Andrzej J. Maciejewski and Maria Przybylska.
Integrability analysis of the stretch-twist-fold flow.
J. Nonlinear Sci., 300 (4):0 1607–1649, 2020.
[Andreescu and Andrica(2015)]Andreescu:15::
Titu Andreescu and Dorin Andrica.
Quadratic Diophantine equations, volume 40 of
Developments in Mathematics.
Springer, New York, 2015.
|
http://arxiv.org/abs/2307.04022v1 | 20230708175848 | Explicit a posteriori error representation for variational problems and application to TV-minimization | [
"Sören Bartels",
"Alex Kaltenbach"
] | math.NA | [
"math.NA",
"cs.NA",
"math.OC",
"35Q68, 49M25, 49M29, 65N30, 65N50"
] |
1]Sören Bartels
2]Alex Kaltenbach
[1]Department of Applied Mathematics, University of Freiburg, Hermann–Herder–Straße 10, 79104 Freiburg
[2]Institute of Mathematics, Technical University of Berlin, Straße des 17. Juni 136, 10623 Berlin
Explicit a posteriori error representation for variational problems and application to TV-minimization
August 12, 2023
In this paper, we propose a general approach for explicit a posteriori error representation for convex minimization problems using basic convex duality relations.
Exploiting discrete orthogonality relations in the space of element-wise constant vector fields as well as a discrete integration-by-parts formula between the Crouzeix–Raviart and the Raviart–Thomas element, all convex duality relations are transferred to a discrete level, making the explicit a posteriori error representation –initially based on continuous arguments only– practicable from a numerical point of view. In addition,
we provide a generalized Marini formula for the primal solution that determines a discrete primal solution in terms of a given discrete dual solution.
We benchmark all these concepts via the Rudin–Osher–Fatemi model. This leads to an adaptive algorithm that yields a (quasi-optimal)
linear convergence rate.
35Q68; 49M25; 49M29; 65N30; 65N50
§ INTRODUCTION
The numerical analysis of the approximation of variational problems
is challenging when these are non-differentiable, degenerate, or involve
constraints. In particular, following established concepts for linear
elliptic partial differential equations often leads to sub-optimal results only.
The framework of convex duality provides an attractive concept to
reveal hidden information and structures to obtain quasi-optimal error representation formulas
under meaningful regularity conditions. Similar to <cit.>, we first exploit this
idea to derive explicit computable a posteriori error estimates for a natural error
measure. Then, this general result is transferred to a non-differentiable model problem with discontinuous solutions. As a whole, our results, similar to <cit.>, show that
the question of developing asymptotically exact a posteriori error estimators is
rather a question of identifying optimal error quantities. However, different from <cit.>, we also propose a general approach for making our results practicable from a numerical point of view.
Given a domain Ω⊆ℝ^d, d∈ℕ,
a convex energy density ϕ:ℝ^d→ℝ∪{+∞}, a
(Lebesgue) measurable energy density ψ:Ω×ℝ→ℝ∪{+∞} that is convex with respect to the second argument, and a Banach space X consisting of functions defined in
Ω, we denote by the minimization of the energy functional I: X→ℝ∪{+∞}, for every v∈ X defined by
I(v) := ∫_Ωϕ(∇ v) dx + ∫_Ωψ(·, v) dx ,
the primal problem.
Its (Fenchel) dual problem consists in the maximization of the functional D: Y→ℝ∪{-∞}, where Y is a Banach space consisting of vector fields defined in
Ω, which for every y∈ Y is defined by
D(y) := -∫_Ωϕ^*(y) dx - ∫_Ωψ^*(·, div y) dx .
Here, ϕ^*:ℝ^d→ℝ∪{+∞} and ψ^*:Ω×ℝ→ℝ∪{+∞} (with respect to the second argument) denote the (Fenchel) conjugates of ϕ:ℝ^d→ℝ∪{+∞} and ψ:Ω×ℝ→ℝ∪{+∞}, respectively.
Under rather general conditions, cf. <cit.>, we have the well-posedness of the
primal problem and the dual problem, i.e., the existence of a minimizer u∈ X of (<ref>), i.e., a primal solution, and of a maximizer z∈ Y of (<ref>), i.e., a dual solution, and the strong duality relation
min_v∈ X I(v) = I(u)= D(z) = max_y∈ Y D(y) .
Since u∈X and z∈ Y are optimal for (<ref>) and (<ref>), respectively, it holds 0∈∂ I(u) and 0∈∂ D(z).
In particular, for every v∈ X and y∈ Y, the quantities
ρ_I^2(v,u) := I(v) - I(u) ,
ρ_-D^2(y,z) := D(z) - D(y) ,
are non-negative. They define distances, if (<ref>) and (<ref>), respectively, are
strictly convex, and are called coercivity functionals or optimal convexity measures.
For accessible and admissible approximations v∈ X and y∈ Y of the solutions u ∈ X and z ∈ Y, given the definitions (<ref>) and (<ref>), the strong duality relation (<ref>) implies the error identity
ρ_I^2(v,u) + ρ_-D^2(y,z)
= I(v) - D(z)
η^2(v,y) .
Hence, the fully computable error estimator η^2 X× Y→ℝ∪{+∞}, cf. (<ref>), exactly
represents the sum of the primal and dual approximation errors, i.e., of (<ref>) and (<ref>).
The error representation (<ref>) can be seen as a generalization of the Prager–Synge
result, cf. <cit.>, which states that for the Poisson problem, i.e., ϕ1/2|·|^2∈ C^1(ℝ^d), ψ ((t,x)^⊤↦ -f(x)t) Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω), X W^1,2_D(Ω), and Y W^2_N(;Ω), for every v∈ W^1,2_D(Ω) and y∈ W^2_N(;Ω) with -div y=f a.e. in Ω, we have that
12 ∇ v -∇ u_L^2(Ω;ℝ^d)^2 + 12 y - z _L^2(Ω;ℝ^d)^2
= 12 ∇ v-y ^2_L^2(Ω;ℝ^d) .
The equation (<ref>) has been used by various authors to define error estimators; for a comprehensive list of references, we refer the reader to <cit.>.
Often, local procedures are devised to construct an admissible vector field
y∈ W^2_N(;Ω) with -div y=f a.e. in Ω from a given function v∈ W^1,2_D(Ω). While this leads to efficient procedures
to obtain accurate error estimators, the arguments cannot be expected to transfer
to non-linear problems. Another alternative to computing approximations
for the primal and dual problems consists in using finite element methods
for which reconstruction formulas are available, e.g., using the discontinuous Crouzeix–Raviart finite element
method and the Marini formula in the case of the Poisson problem, cf. <cit.>.
It has recently been found (cf. <cit.>) that the discontinuous Crouzeix–Raviart finite element method leads to quasi-optimal a priori error estimates for non-linear and non-differentiable problems, while continuous finite element methods provide only a sub-optimal
convergence behavior. In the derivation of those results, a general
discrete convex duality theory with Raviart–Thomas vector fields has emerged that
also leads to reconstruction
formulas in rather general settings. As a consequence, given an approximation
v∈ X or y∈ Y, respectively, the missing one can be obtained via a simple post-processing procedure.
Then, the pair leads to the error representation formula (<ref>). It should also
be noted that neither v∈ X nor y∈ Y needs to be optimal in a subspace
of X or Y. By introducing appropriate residuals, any pair of admissible
approximations of u∈ X and z∈ Y can be used. This is particularly important for non-linear
problems, i.e., non-quadratic functionals, where an exact solution of discrete problems is neither possible nor rational.
A difficulty in the application of the explicit a posteriori error representation
formula (<ref>) arises from the condition that v∈ X and y∈ Y need to be admissible for
the functionals (<ref>) and (<ref>). In the case of the Poisson problem,
this arises, e.g., via element-wise constant approximations of f∈ L^2(Ω)
that are the images of Raviart–Thomas vector fields under the divergence operator. While data terms can be controlled by introducing appropriate
data oscillation terms, structural peculiarities of the energy densities
ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} and their (Fenchel) conjugates ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} are often more challenging.
We illustrate this
by analyzing a non-differentiable
problem
which leads to a new error analysis and an adaptive refinement procedure
for the computationally challenging problem.
With ϕ = |·|∈ C^0(ℝ^d) and ψ=((x,t)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ
for a given function
g∈ L^2(Ω), i.e., the noisy image, and a given parameter α>0, i.e., the fidelity parameter,
the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, seeks a minimizing function u∈ BV(Ω)∩ L^2(Ω), i.e., the de-noised image, where BV(Ω) denotes the space of functions with bounded variation,
for the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v) |Dv|(Ω) + α2v-g_L^2(Ω)^2 ,
where |D(·)|(Ω)BV(Ω)→ [0,+∞] denotes the total variation functional.
The (Fenchel) dual problem to the minimization of the functional (<ref>) consists in the maximization of
the functional D W_N^2(;Ω)∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) defined by
D(y) -I_K_1(0)(y)-12αdiv y+α g_L^2(Ω)^2+α2 g_L^2(Ω)^2 ,
where
I_K_1(0)(y) 0 if | y|≤ 1 a.e. in Ω and I_K_1(0)(y) +∞ else.
The primal solution u∈ BV(Ω) ∩ L^2(Ω), i.e., the unique minimizer of (<ref>), and a dual solution z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), i.e., a (possibly non-unique) maximizer of (<ref>), are
(formally) related via, cf. <cit.>,
z ∈{∇ u/|∇ u|} if |∇ u|>0 and z∈ K_1(0) if |∇ u|=0 , a.e. in Ω ,
div z = α (u-g) a.e. in Ω .
The relations (<ref>) determine z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) via u∈ BV(Ω)∩ L^2(Ω) and vice versa.
A
Crouzeix–Raviart finite element approximation of (<ref>) is given by the minimization of the regularized, discrete functional
I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ, h,ε>0, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I_h,ε^cr(v_h) f_ε(|∇_h v_h| )_L^1(Ω)
+ α2Π_h(v_h-g)_L^2(Ω)^2 .
Here, ∇_h is the element-wise application of the gradient operator
and f_ε∈C^1(ℝ) is a regularization of the modulus |·|, and Π_h denotes
the (local) L^2-projection onto element-wise constant functions.
A quasi-optimal dual Raviart–Thomas vector field z_h,ε^rt∈ℛT^0_N(𝒯_h) can be associated with a
minimizing function u_h,ε^cr∈𝒮^1,cr(𝒯_h) of I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ via the reconstruction formula
z_h,ε^rt = f_ε'(|∇_h u_h,ε^cr|) |∇_h u_h,ε^cr|∇_h u_h,ε^cr
+ αΠ_h (u_h,ε^cr -g)d( id_ℝ^d- Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) .
For canonical choices of f_ε∈ C^1(ℝ), e.g.,
f_ε =|·|_ε= ((·)^2+ε^2)^1/2, it holds |Π_h z_h,ε^rt|≤ 1 a.e. in Ω, but not
|z_h,ε^rt|≤ 1 a.e. in Ω. Thus, we employ f_ε = (1-ε) |·|_ε,
so that
|f_ε'(t)|≤ 1-ε for all t∈ℝ. The choice ε∼ h^2 in (<ref>) and an additional projection step onto K_1(0)
lead to an accurate approximation z_h,ε^rt∈ℛT^0_N(𝒯_h) of z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), which
satisfies |z_h,ε^rt|≤ 1 a.e. in Ω and, thus, represents an admissible test function that leads to the definition
of an error estimator. The resulting adaptive mesh-refinement procedure leads to significantly
improved experimental convergence rates compared to recent related contributions, cf. <cit.>. More precisely, we report quasi-optimal linear convergence rates which have been obtained only for meshes with quadratic grading towards a sufficiently simple jump set of a regular g in <cit.>.
This article is organized as follows: In Section <ref>, we introduce the employed notation and the relevant finite element spaces. In Section <ref>, we propose a general approach for explicit a posteriori error representation for convex minimization problems based on (discrete) convex duality relations. In Section <ref>,
we transfer the concepts of Section <ref> to the Rudin–Osher–Fatemi model and propose a regularization scheme. In Section <ref>, we review our theoretical findings via numerical experiments.
§ PRELIMINARIES
§.§ Convex analysis
For a (real) Banach space X, which is equipped with the norm ·_X X→ℝ_≥ 0, we denote its corresponding (continuous) dual space by X^* equipped with the dual norm
·_X^* X^*→ℝ_≥ 0, defined by x^*_X^*sup_x_X≤ 1⟨ x^*,x⟩_X for every x^*∈ X^*, where ⟨·,·⟩_X X^*× X→ℝ, defined by ⟨ x^*,x⟩_X x^*(x) for every x^*∈ X^* and x∈ X, denotes the duality pairing.
A functional F X→ℝ∪{+∞} is called sub-differentiable in x∈ X, if F(x)<∞ and if there exists x^*∈ X^*, called sub-gradient, such that for every y∈ X, it holds
⟨ x^*,y-x⟩_X≤ F(y)-F(x) .
The sub-differential ∂ F X→ 2^X^* of a functional F X→ℝ∪{+∞} for every x∈ X is defined by (∂ F)(x){x^*∈ X^*|(<ref>) holds for x^*} if F(x)<∞ and (∂ F)(x)∅ else.
For a given functional F X→ℝ∪{±∞}, we denote its corresponding (Fenchel) conjugate by F^* X^*→ℝ∪{±∞}, which for every x^*∈ X^* is defined by
F^*(x^*)sup_x∈ X⟨ x^*,x⟩_X-F(x) .
If F X→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, then also its (Fen-chel) conjugate F^* X^*→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, cf. <cit.>.
Furthermore, for every x^*∈ X^* and x∈ X such that
F^*(x^*)+F(x) is well-defined, i.e., the critical case ∞-∞ does not occur, the Fenchel–Young inequality
⟨ x^*,x⟩_X≤ F^*(x^*)+F(x)
applies.
In particular,
for every x^*∈ X^* and x∈ X, it holds the Fenchel–Young identity
x^*∈ (∂ F)(x) ⇔ ⟨ x^*,x⟩_X= F^*(x^*)+F(x) .
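For orientation, in the elementary Hilbert space case X=X^*=L^2(Ω) with F (x↦1/2⟨ x,x⟩_X), one has F^*=(x^*↦1/2⟨ x^*,x^*⟩_X) and (∂ F)(x)={x} for every x∈ X, so that (<ref>) merely expresses that ⟨ x^*,x⟩_X=1/2⟨ x^*,x^*⟩_X+1/2⟨ x,x⟩_X holds if and only if x^*=x.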
The following convexity measures for functionals play an important role in the derivation of an explicit a posteriori error representation for convex minimization problems in Section <ref>; for further information, please refer to <cit.>.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper, i.e., D(F){x∈ X| F(x)<∞}≠∅.
(i) The convexity measure σ^2_F
D(F)× X→ [0,+∞], for every x∈ D(F) and y∈ X, is defined by
σ^2_F(y,x) F(y)-F(x)-sup_x^*∈ (∂ F)(x)⟨ x^*,y-x⟩_X ,
where we use the convention sup(∅)-∞.
(ii) The symmetric convexity measure σ^2_F,s
D(F)^2→ [0,+∞], for every x,y∈ D(F), is defined by
σ_F,s^2(y,x)σ_F^2(y,x)+σ_F^2(x,y)=inf_x^*∈ (∂ F)(x);y^*∈ (∂ F)(y)⟨ x^*-y^*,x-y⟩_X ,
where we use the convention inf(∅) +∞.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, the optimal convexity measure ρ^2_F
X^2→ [0,+∞] of F at x∈ X, for every y∈ X, is defined by
ρ^2_F(y,x) F(y)-F(x)≥ 0 .
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, due to 0∈ (∂ F)(x), for every y∈ X, it holds
σ^2_F(y,x)≤ρ^2_F(y,x) .
§.§ Function spaces
Throughout the article, we denote by Ω⊆ℝ^d, d ∈ℕ, a bounded polyhedral Lipschitz domain, whose (topological) boundary is disjointly divided into a closed Dirichlet part Γ_D and an open Neumann part Γ_N, i.e., ∂Ω = Γ_D∪Γ_N and ∅ = Γ_D∩Γ_N. 3mm
For p∈[1,∞] and l∈ℕ, we employ the standard notations[Here, W^-1/p,p(Γ_N) (W^1-1/p',p'(Γ_N))^* and W^-1/p,p(∂Ω) (W^1-1/p',p'(∂Ω))^*.]
W^1,p_D(Ω;ℝ^l) {v∈ L^p(Ω;ℝ^l) |∇ v∈ L^p(Ω;ℝ^l× d), v=0 in L^p(Γ_D;ℝ^l)} ,
W^p_N(;Ω) {y∈ L^p(Ω;ℝ^d) |div y∈ L^p(Ω), tr_n y=0 in W^-1/p,p(Γ_N)} ,
W^1,p(Ω;ℝ^l) W^1,p_D(Ω;ℝ^l) if Γ_D=∅, and W^p(;Ω) W^p_N(;Ω) if Γ_N=∅,
where we denote by tr(·)W^1,p(Ω;ℝ^l)→L^p(∂Ω;ℝ^l) and by tr_n(·)W^p(;Ω)→W^-1/p,p(∂Ω), the trace and normal trace operator, respectively. In particular, we always omit tr(·) and tr_n(·). In addition, we employ the abbreviations L^p(Ω) L^p(Ω;ℝ^1), W^1,p(Ω) W^1,p(Ω;ℝ^1), and W^1,p_D(Ω) W^1,p_D(Ω;ℝ^1). For (Lebesgue) measurable functions u,vΩ→ℝ and a (Lebesgue) measurable set M⊆Ω, we write
(u,v)_M∫_Mu v dx ,
whenever the right-hand side is well-defined. Analogously, for (Lebesgue) measurable vector fields z,yΩ→ℝ^d and a (Lebesgue) measurable set M⊆Ω, we write (z,y)_M∫_Mz· y dx. Moreover,
let |D(·)|(Ω) L^1_loc(Ω) →ℝ∪{+∞}, for every v∈ L^1_loc(Ω) defined by[Here, C_c^∞(Ω;ℝ^d) denotes the space of smooth and in Ω compactly supported vector fields.]
|Dv|(Ω)sup{-(v, div ϕ)_Ω|ϕ∈ C_c^∞(Ω;ℝ^d);
ϕ_L^∞(Ω;ℝ^d)≤ 1} ,
denote the total variation functional. Then, the space of functions with bounded variation is defined by
BV(Ω){v∈ L^1(Ω)| |Dv|(Ω)<∞} .
§.§ Triangulations
Throughout the entire paper, we denote by {𝒯_h}_h>0, a family of regular, i.e., uniformly shape regular and conforming, triangulations of Ω⊆ℝ^d, d∈ℕ, cf. <cit.>.
Here, h>0 refers to the average mesh-size, i.e., if we set h_T diam(T) for all T∈𝒯_h, then, we have that h
= 1/card(𝒯_h)∑_T∈𝒯_hh_T.
For every element T ∈𝒯_h,
we denote by ρ_T>0, the supremum of diameters of inscribed balls. We assume that there exists a constant ω_0>0, independent of h>0, such that max_T∈𝒯_hh_Tρ_T^-1≤ω_0. The smallest such constant is called the chunkiness of {𝒯_h}_h>0. The sets 𝒮_h, 𝒮_h^i, 𝒮_h^∂, and 𝒩_h contain the sides, interior sides, boundary sides, and vertices, respectively, of the elements of 𝒯_h.
We have the following relation between the average mesh-size and the number of vertices:
h∼(𝒩_h)^-1/d .
For k∈ℕ∪{0} and T∈𝒯_h, let 𝒫_k(T) denote the set of polynomials of maximal degree k on T. Then, for k∈ℕ∪{0} and l∈ℕ, the sets of continuous and polynomial functions or vector fields, respectively, are defined by
ℒ^k(𝒯_h)^l {v_h∈ L^∞(Ω;ℝ^l)| v_h|_T∈𝒫_k(T)^l for all T∈𝒯_h} ,
𝒮^k(𝒯_h)^l ℒ^k(𝒯_h)^l∩ C^0(Ω;ℝ^l) .
For every T∈𝒯_h and S∈𝒮_h, let x_T1/d+1∑_z∈𝒩_h∩ Tz∈ T and x_S1/d∑_z∈𝒩_h∩ Sz∈ S denote the barycenters of T and S, respectively. The (local) L^2-projection operator Π_h L^1(Ω;ℝ^l)→ℒ^0(𝒯_h)^l onto element-wise constant functions or vector fields, respectively, for every
v∈ L^1(Ω;ℝ^l), is defined by Π_h v|_T 1/|T|∫_Tv dx for all T∈𝒯_h.
The element-wise gradient
∇_hℒ^1(𝒯_h)^l→ℒ^0(𝒯_h)^l× d, for every v_h∈ℒ^1(𝒯_h)^l, is defined by ∇_hv_h|_T∇(v_h|_T) for all T∈𝒯_h.
§.§.§ Crouzeix–Raviart element
The Crouzeix–Raviart finite element space, cf. <cit.>, consists of affine functions that are continuous at the barycenters of inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, ⟦v_h⟧_S v_h|_T_+-v_h|_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S, and for every boundary side S∈𝒮_h^∂, ⟦v_h⟧_S v_h|_T on S, where T∈𝒯_h satisfies S⊆∂ T.]
𝒮^1,cr(𝒯_h){v_h∈ℒ^1(𝒯_h)|⟦v_h⟧_S(x_S)=0 for all S∈𝒮_h^i} .
Note that 𝒮^1,cr(𝒯_h)⊆ BV(Ω). More precisely, for every v_h∈𝒮^1,cr(𝒯_h), cf. <cit.>, we have that Dv_h=∇_ hv_h⊗dx+⟦v_h⟧⊗ds|_𝒮_h with ∇_ hv_h⊗dx⊥⟦v_h⟧⊗ds|_𝒮_h, so that, cf. <cit.>,
|Dv_h|(Ω)= ∇_ hv_h_L^1(Ω;ℝ^d)+⟦v_h⟧_L^1(𝒮_h) .
The Crouzeix–Raviart finite element space with homogeneous Dirichlet boundary condition on Γ_D is defined by
𝒮^1,cr_D(𝒯_h){v_h∈𝒮^1,cr(𝒯_h)| v_h(x_S)=0 for all S∈𝒮_h∩Γ_D} .
A basis for 𝒮^1,cr(𝒯_h) is given by the functions φ_S∈𝒮^1,cr(𝒯_h), S∈𝒮_h, satisfying φ_S(x_S')=δ_S,S' for all S,S'∈𝒮_h. A basis for 𝒮^1,cr_D(𝒯_h) is given by φ_S∈𝒮^1,cr_D(𝒯_h), S∈𝒮_h∖Γ_D.
§.§.§ Raviart–Thomas element
The Raviart–Thomas finite element space, cf. <cit.>, consists of element-wise affine vector fields that have continuous constant normal components on inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, ⟦y_h· n_S⟧ y_h|_T_+· n_T_++y_h|_T_-· n_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S and for every T∈𝒯_h, n_T∂ T→𝕊^d-1 denotes the outward unit normal vector field to T,
and for every boundary side S∈𝒮_h^∂, ⟦y_h· n_S⟧ y_h|_T· n on S, where T∈𝒯_h satisfies S⊆∂ T and n∂Ω→𝕊^d-1 denotes the outward unit normal vector field to Ω.]
ℛT^0(𝒯_h){y_h∈ℒ^1(𝒯_h)^d| y_h|_T· n_T=const on ∂ T for all T∈𝒯_h ,
⟦y_h· n_S⟧=0 on S for all S∈𝒮_h^i} .
Note that ℛT^0_N(𝒯_h)⊆ W^∞_N(;Ω).
The Raviart–Thomas finite element space with homogeneous normal component boundary condition on Γ_N is defined by
ℛT^0_N(𝒯_h){y_h∈ℛT^0(𝒯_h)| y_h· n=0 on Γ_N} .
A basis for ℛT^0(𝒯_h) is given by vector fields ψ_S∈ℛT^0(𝒯_h), S∈𝒮_h, satisfying ψ_S|_S'· n_S'=δ_S,S' on S' for all S'∈𝒮_h, where n_S is the unit normal vector on S pointing from T_- to T_+ if T_+∩ T_-=S∈𝒮_h. A basis for ℛT^0_N(𝒯_h) is given by ψ_S∈ℛT^0_N(𝒯_h), S∈𝒮_h∖Γ_N.
§.§.§ Discrete integration-by-parts formula
For every v_h∈𝒮^1,cr_D(𝒯_h) and y_h∈ℛT^0_N(𝒯_h), it holds the discrete integration-by-parts formula
(∇_hv_h,Π_h y_h)_Ω=-(Π_h v_h,div y_h)_Ω .
In addition, cf. <cit.>,
if a vector field y_h∈ℒ^0(𝒯_h)^d satisfies for every v_h∈𝒮^1,cr_D(𝒯_h)
(y_h,∇_h v_h)_Ω=0 ,
then, choosing v_h=φ_S∈𝒮^1,cr_D(𝒯_h) for all S∈𝒮_h∖Γ_D, one finds that y_h∈ℛT^0_N(𝒯_h).
Similarly, if a function v_h∈ℒ^0(𝒯_h) satisfies for every y_h∈ℛT^0_N(𝒯_h)
(v_h,div y_h)_Ω=0 ,
then, choosing y_h=ψ_S∈ℛT^0_N(𝒯_h) for all S∈𝒮_h∖Γ_N, one finds that v_h∈𝒮^1,cr_D(𝒯_h). In other words,
we have the orthogonal (with respect to the inner product (·,·)_Ω) decompositions
ℒ^0(𝒯_h)^d =ker(div|_ℛT^0_N(𝒯_h))⊕∇_h(𝒮^1,cr_D(𝒯_h))
 ,
ℒ^0(𝒯_h) =ker(∇_h|_𝒮^1,cr_D(𝒯_h))⊕ div(ℛT^0_N(𝒯_h)) .
§ EXACT A POSTERIORI ERROR ESTIMATION FOR CONVEX MINIMIZATION PROBLEMS
§.§ Continuous convex minimization problem and continuous convex duality
Let ϕℝ^d→ℝ∪{+∞} be a proper, convex, and lower semi-continuous function and let ψΩ×ℝ→ℝ∪{+∞} be a (Lebesgue) measurable function such that for a.e. x∈Ω, the function ψ(x,·)Ω×ℝ→ℝ∪{+∞} is proper, convex, and lower semi-continuous. We examine the convex minimization problem that seeks for a function u∈ W^1,p_D(Ω), p∈ (1,∞), that is minimal for the functional I W^1,p_D(Ω)→ℝ∪{+∞}, for every v∈W^1,p_D(Ω) defined by
I(v)∫_Ωϕ(∇ v) dx+∫_Ωψ(·,v) dx .
In what follows, we refer to the minimization of I W^1,p_D(Ω) →ℝ∪{+∞} as the primal problem.
A (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional DL^p'(Ω;ℝ^d)→ℝ∪{ -∞}, for every y∈ L^p'(Ω;ℝ^d) defined by
D(y) -∫_Ωϕ^*( y) dx-F^*(div y) ,
where the distributional divergence div L^p'(Ω;ℝ^d)→ (W^1,p_D(Ω))^* for every y∈L^p'(Ω;ℝ^d) and v∈W^1,p_D(Ω) is defined by ⟨div y,v⟩_W^1,p_D(Ω) -(y,∇ v)_Ω and
F^*L^p'(Ω)→ℝ∪{±∞} denotes the Fenchel conjugate to F L^p(Ω)→ℝ∪{+∞}, defined by F(v)∫_Ωψ(·,v) dx for all v∈ L^p(Ω). Note that for every y∈W^p'_N(;Ω), we have that ⟨div y,v⟩_W^1,p_D(Ω)=(div y, v)_Ω for all v∈ W^1,p_D(Ω) and, thus, the representation
D(y)=-∫_Ωϕ^*( y) dx-∫_Ωψ^*(·,div y) dx .
A weak duality relation applies, cf. <cit.>, i.e.,
inf_v∈ W^1,p_D(Ω)I(v)≥sup_y∈ L^p'(Ω;ℝ^d)D(y) .
In what follows, we
always assume that ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u∈ W^1,p_D(Ω), called the primal solution, (<ref>) at least one maximizer z∈ L^p'(Ω;ℝ^d), called the dual solution, and that a strong duality relation applies, i.e.,
I(u)= D(z) .
By the Fenchel–Young inequality (cf. (<ref>)), (<ref>) is equivalent to
the convex optimality relations
z·∇ u =ϕ^*(z)+ϕ(∇ u) a.e. in Ω ,
div z ∈∂ F(u) .
If z∈W^p'_N(;Ω), then the convex optimality relation (<ref>) is equivalent to
div z u=ψ^*(·,div z)+ψ(·, u) a.e. in Ω .
If ϕ∈ C^1(ℝ^d),
then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
z= Dϕ(∇ u) in L^p'(Ω;ℝ^d) .
Similarly, if z∈W^p'_N(;Ω) and
ψ(x,·)∈ C^1(ℝ) for a.e. x∈Ω,
then (<ref>) is equivalent to
div z=Dψ(·, u) in L^p'(Ω) .
The convex duality relations (<ref>)–(<ref>) motivate introducing the primal-dual error estimator η^2 W^1,p_D(Ω)× L^p'(Ω;ℝ^d)→ [0,+∞], for every
v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d) defined by
η^2(v,y) I(v)-D(y) .
Note that the sign of the estimator (<ref>) is a consequence of the weak duality relation (<ref>).
Together with the optimal convexity measures (cf. Definition <ref>) ρ_I^2 W^1,p_D(Ω)^2→ [0,+∞] of (<ref>) at a primal solution u∈ W^1,p_D(Ω) and ρ_-D^2L^p'(Ω;ℝ^d)→ [0,+∞] of the negative of (<ref>) at a dual solution z∈L^p'(Ω;ℝ^d), we arrive at the following explicit a posteriori error representation.
The following statements apply:
(i) For every v∈ W^1,p_D(Ω) and y∈L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ W^1,p_D(Ω) and y∈W^p'_N(;Ω), we have that
η^2(v,y) = ∫_Ωϕ(∇ v)-∇ v· y+ϕ^*(y) dx+∫_Ωψ(·, v)- v div y+ψ^*(·,div y) dx .
(i) By the Fenchel–Young inequality (<ref>), the integrands in the representation (<ref>) are non-negative and, thus, suitable as local refinement indicators.
(ii) Appealing to Remark <ref>, from Theorem <ref> (i), for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), it follows that
σ_I^2(v,u)+σ_-D^2(y,z)≤η^2(v,y).
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). Using (<ref>), (<ref>), and integration-by-parts, we conclude that (<ref>) applies.
(i) In the p-Dirichlet problem, cf. <cit.>, i.e., ϕ1/p|·|^p∈ C^1(ℝ), p∈ (1,∞), and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^p'(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)∼F(∇ v)-F(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)∼F^*(y)-F^*(z)_L^2(Ω;ℝ^d)^2 ,
where F,F^*ℝ^d→ℝ^d for every a∈ℝ^d are defined by F(a)| a|^p-2/2a and F^*(a)| a|^p'-2/2a.
(ii) In the obstacle problem, cf. <cit.>, i.e., ϕ1/2|·|^2∈ C^1(ℝ) and ψ ((t,x)^⊤↦ -f(x)t+I_χ(x)(t))Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω) and χ∈ W^1,2(Ω) with χ≤ 0 on Γ_D, cf. <cit.>, where I_χ(x)(t) 0 if t≥ 0 and I_χ(x)(t) +∞ else, we have that
ρ^2_I(v,u)= 12∇ v-∇ u_L^2(Ω;ℝ^d)^2+⟨ -Λ,v-u⟩_W^1,2_D(Ω) , ρ^2_-D(y,z)≥12y-z_L^2(Ω;ℝ^d)^2 ,
where Λ∈ (W^1,2_D(Ω))^* is defined by ⟨Λ,v⟩_W^1,2_D(Ω) (f,v)_Ω-(∇ u,∇ v)_Ω for all v∈ W^1,2_D(Ω).
(iii) In an optimal design problem, cf. <cit.>, i.e., ϕζ∘|·|∈ C^1(ℝ), where ζ(0) 0, ζ'(t)μ_2 t if t∈ [0,t_1], ζ'(t)μ_2 t_1 if t∈ [t_1,t_2], and ζ'(t)μ_1 t if t∈ [t_2,+∞) for some 0<t_1<t_2 and 0<μ_1<μ_2 with t_1μ_2=t_2μ_1, and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^2(Ω), cf.
<cit.>,
we have that
ρ^2_I(v,u)≥12μDϕ(∇ v)-Dϕ(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)≥12μy-z_L^2(Ω;ℝ^d)^2 .
(iv) In the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, i.e.,
ϕ|·|∈ C^0(ℝ) and ψ ((t,x)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ, where g∈ L^2(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)≥α2v-u_L^2(Ω)^2 , ρ^2_-D(y,z)≥12αdiv y-div z_L^2(Ω)^2 .
Since the dual problem to the minimization of the negative of (<ref>), in turn, consists in the maximization of the negative of (<ref>),
the roles of the primal problem and the dual problem may be interchanged. An advantage of Theorem <ref> consists in the fact that it yields reliable and efficient a posteriori error estimators for both the primal problem and the dual problem:
Theorem <ref> also shows that for each y∈ L^p'(Ω;ℝ^d), the estimator η^2_I,y (v↦η^2(v,y)) W^1,p_D(Ω)→ [0,+∞]
satisfies
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_I,y(v) ,
and for each v∈ W^1,p_D(Ω), the estimator η^2_-D,v (y↦η^2(v,y)) L^p'(Ω;ℝ^d)→ [0,+∞] satisfies
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_-D,v(y) .
For the a posteriori error estimators (<ref>) and (<ref>) to be numerically practicable, it is necessary to have a
computationally cheap way to obtain a sufficiently accurate approximation of the dual solution (for (<ref>)) and/or of the primal solution
(for (<ref>)), respectively. In Section <ref>, resorting to (discrete) convex duality relations between a non-conforming Crouzeix–Raviart approximation of the primal problem and a Raviart–Thomas approximation of the dual problem, we arrive at discrete reconstruction formulas, called generalized Marini formulas, cf. <cit.>.
§.§ Discrete convex minimization problem and discrete convex duality
Let ψ_hΩ×ℝ→ℝ∪{+∞} denote a suitable approximation[We refrain from being too precise concerning
what we mean with approximation to allow for more flexibility. Assumptions on both ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞}, h>0, that imply, e.g., Γ-convergence results can be found in <cit.>.] of ψΩ×ℝ→ℝ∪{+∞} such that ψ_h(·,t)∈ℒ^0(𝒯_h) for all t∈ℝ and for a.e. x∈Ω, ψ_h(x,·)Ω×ℝ→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional. Then, we examine the (discrete) convex minimization problem that seeks for a function u_h^cr∈𝒮^1,cr_D(𝒯_h) that is minimal for the functional I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞}, for every v_h∈𝒮^1,cr_D(𝒯_h) defined by
I_h^cr(v_h)∫_Ωϕ(∇_ h v_h) dx+∫_Ωψ_h(·,Π_h v_h) dx .
In what follows, we refer to the minimization of I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞} as the discrete primal problem.
In <cit.>, it is shown that the corresponding (Fenchel) dual problem to the minimization of (<ref>)
consists in the maximization of D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h)-∫_Ωϕ^*(Π_h y_h) dx-∫_Ωψ_h^*(·,div y_h) dx .
A discrete weak duality relation, cf. <cit.>, applies
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h^rt(y_h) .
We will always assume that ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h), called the discrete primal solution,
(<ref>) admits at least one maximizer z_h^rt∈ℛT^0_N(𝒯_h), called the discrete dual solution, and that a discrete strong duality relation applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
By the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to the discrete convex optimality relations
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
div z_h^rt Π_hu_h^cr =ψ_h^*(·,div z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
If ϕ∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
Π_h z_h^rt=Dϕ(∇_ h u_h^cr) in ℒ^0(𝒯_h)^d ,
and if ϕ^*∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
∇_ h u_h^cr=Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Similarly, if ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
div z_h^rt=Dψ_h(·,Π_hu_h^cr) in ℒ^0(𝒯_h) ,
and if ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
Π_hu_h^cr=Dψ_h^*(·,div z_h^rt) in ℒ^0(𝒯_h) .
The relations (<ref>)–(<ref>) motivate the following discrete recontruction formulas for a discrete dual solution z_h^rt∈ℛT^0_N(𝒯_h) from a discrete primal solution u_h^cr∈𝒮^1,cr_D(𝒯_h) and vice versa, called generalized Marini formulas, cf. <cit.>.
The following statements apply:
(i) If ϕ∈ C^1(ℝ^d) and ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>),
a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>) is given via
z_h^rt= Dϕ(∇_ h u_h^cr)+Dψ_h(·, Π_hu_h^cr)/d ( id_ℝ^d-Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) ,
and a discrete strong duality relation applies, i.e., (<ref>).
(ii) If ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) is given via
u_h^cr = Dψ_h^*(·,div z_h^rt)+ Dϕ^*(Π_h z_h^rt)·( id_ℝ^d-Π_h id_ℝ^d)
in 𝒮^1,cr_D(𝒯_h) ,
and a discrete strong duality relation applies, i.e., (<ref>).
It is possible to derive reconstructions formulas similar to (<ref>) and (<ref>) under weak conditions, e.g., resorting to a regularization argument (cf. Proposition <ref>) or given discrete Lagrange multipliers (cf. <cit.>).
ad (i). See <cit.>.5mm
ad (ii). By definition, it holds u_h^cr∈ℒ^1(𝒯_h) and the discrete convex optimality relation (<ref>) is satisfied.
Since z_h^rt∈ℛT^0_N(𝒯_h) is maximal for (<ref>) as well as ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, for every y_h∈ℛT^0_N(𝒯_h), we have that
(Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω+(Dψ_h^*(·,div z_h^rt),div y_h)_Ω=0 .
In particular, (<ref>) implies that Dϕ^*(Π_h z_h^rt)∈ (ker(div|_ℛT^0_N(𝒯_h)))^⊥.
Appealing to <cit.>, it holds
(ker(div|_ℛT^0_N(𝒯_h)))^⊥=∇_h(𝒮^1,cr_D(𝒯_h)). Therefore, there exists
v_h∈𝒮^1,cr_D(𝒯_h) such that
∇_h v_h= Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Hence, for every y_h∈ℛT^0_N(𝒯_h), resorting to the discrete integration-by-parts formula (<ref>), (<ref>), (<ref>), and (<ref>), we find that
(Π_hv_h-Π_h u_h^cr,div y_h)_Ω
=- (Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω-(Dψ_h^*(·,div z_h^rt),div y_h)_Ω=0 .
In other words, for every y_h∈ℛT^0_N(𝒯_h), we have that
( v_h-u_h^cr,div y_h)_Ω= (Π_h v_h-Π_h u_h^cr,div y_h)_Ω=0 .
On the other hand, we have that ∇_ h(v_h-u_h^cr)=0 in ℒ^0(𝒯_h)^d, i.e., v_h-u_h^cr∈ℒ^0(𝒯_h).
Therefore, (<ref>) in conjunction with (<ref>) implies that
v_h-u_h^cr∈ (div(ℛT^0_N(𝒯_h)))^⊥=ker(∇_h|_𝒮^1,cr_D(𝒯_h)). As a result, due to v_h∈𝒮^1,cr_D(𝒯_h), we conclude that u_h^cr∈𝒮^1,cr_D(𝒯_h) with
∇_ h u_h^cr =Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d ,
Π_hu_h^cr =Dψ_h^*(·,div z_h^rt) in ℒ^0(𝒯_h) .
By the Fenchel–Young identity, cf. (<ref>), (<ref>) is equivalent to
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
div z_h^rt Π_hu_h^cr =ψ_h^*(·,div z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
Eventually, adding (<ref>)_1 and (<ref>)_2, subsequently, integration with respect to x∈Ω, resorting to the discrete integration-by-parts formula (<ref>), and using the definitions (<ref>) and (<ref>), we arrive at I_h^cr(u_h^cr)=D_h^rt(z_h^rt),
which, appealing to the discrete weak duality relation (<ref>), implies that u_h^cr∈𝒮^1,cr_D(𝒯_h) is minimal for (<ref>).
§ APPLICATION TO THE RUDIN–OSHER–FATEMI (ROF) MODEL
In this section, we transfer the concepts derived in Section <ref> to the non-differentiable Rudin–Osher–Fatemi (ROF) model, cf. <cit.>. The approximation of the ROF model has been investigated by numerous authors: a priori error estimates have been derived in <cit.>.
A posteriori error estimates and adaptivity results can be found in <cit.>.
§.§ The continuous Rudin–Osher–Fatemi (ROF) model
Given a function g∈ L^2(Ω), i.e., the noisy image, and a constant parameter α>0, i.e., the fidelity parameter, the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, consists in the minimization of the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v)|Dv| (Ω)+α2v-g^2_L^2(Ω) .
In <cit.>, it has been established that there exists a unique minimizer u∈ BV(Ω)∩ L^2(Ω)
of (<ref>).
Appealing to <cit.> or <cit.>, the (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d) defined by
D(y) -I_K_1(0)(y) -12αdiv y+α g_L^2(Ω)^2+α2g_L^2(Ω)^2 ,
where I_K_1(0) L^∞(Ω;ℝ^d)→ℝ∪{∞} is defined by I_K_1(0)(y) 0 if y∈ L^∞(Ω;ℝ^d) with | y|≤ 1 a.e. in Ω and I_K_1(0)(y)∞ else. Apart from that, in <cit.>, it is shown that (<ref>) admits a maximizer z∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) and that a strong duality relation applies, i.e.,
I(u)=D(z) .
Appealing to <cit.>, (<ref>) is equivalent to
the convex optimality relations
div z =α (u-g) in L^2(Ω) ,
-(u,div z)_Ω =|Du|(Ω) .
Next, if we introduce, by analogy with Section <ref>, the primal-dual error estimator
η^2 BV(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞], for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) defined by
η^2(v,y) I(v)-D(y) ,
then the concepts of Section <ref> can be transferred to the ROF model.
The following statements apply:
(i) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
η^2(v,y)= |Dv|(Ω)+(div y,v)_Ω+12αdiv y-α (v-g)_L^2(Ω)^2+I_K_1(0)(y) .
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
η^2(v,y) =|Dv|(Ω)+α2v-g_L^2(Ω)^2+I_K_1(0)(y)+12αdiv y+α g_L^2(Ω)^2-α2g_L^2(Ω)^2
=|Dv|(Ω)+α2v-g_L^2(Ω)^2+I_K_1(0)(y)+12αdiv y_L^2(Ω)^2+(div y,g)_Ω
=|Dv|(Ω)+(div y,v)_Ω+12αdiv y-α (v-g)_L^2(Ω)^2+I_K_1(0)(y) ,
where we used that 12αdiv y-α (v-g)_L^2(Ω)^2=12αdiv y_L^2(Ω)^2-(div y,v-g)_Ω+α2v-g_L^2(Ω)^2 ,
which yields the claimed representation.
Restricting the estimator (<ref>) to subclasses of BV(Ω) and W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) for which an appropriate integration-by-parts formula applies, e.g., (<ref>), it is possible to derive alternative representations of the estimator (<ref>), whose integrands are point-wise non-negative and, thus, suitable as local refinement indicators.
(i) For every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), by integration-by-parts, it holds
η^2(v,y)=∇ v_L^1(Ω;ℝ^d)-(∇ v,y)_Ω+12αdiv y-α (v-g)_L^2(Ω)^2+I_K_1(0)(y)≥ 0 .
(ii) For every T∈𝒯_h, we define the local refinement indicator η_T,W^2 W^1,1(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞] for every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) by
η^2_T,W(v,y)∇ v_L^1(T;ℝ^d)-(∇ v,y)_T+12αdiv y-α (v-g)_L^2(T)^2+I_K_1(0)(y)≥ 0 .
(iii) For every v_h∈𝒮^1,cr(Ω) and y_h∈ℛT^0_N(𝒯_h), by the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>), it holds
η^2(v_h,y_h) =∇_ h v_h_L^1(Ω;ℝ^d)+⟦v_h⟧_L^1(𝒮_h)-(∇_ h v_h,Π_h y_h)_Ω
+12αdiv y_h-α (v_h-g)_L^2(Ω)^2+I_K_1(0)(y_h)≥ 0 .
(iv) For every T∈𝒯_h, we define the discrete local refinement indicator η_T,CR^2𝒮^1,cr(𝒯_h)×ℛT^0_N(𝒯_h) → [0,+∞] for every v_h∈𝒮^1,cr(𝒯_h) and y_h∈ℛT^0_N(𝒯_h) by
η^2_T,CR(v_h,y_h) ∇_ h v_h_L^1(T;ℝ^d)+∑_S∈𝒮_h;S⊆ T⟦v_h⟧_L^1(S)-(∇_ h v_h,Π_h y_h)_T
+12αdiv y_h-α (v_h-g)_L^2(T)^2+I_K_1(0)(y_h)≥ 0 .
We emphasize that the primal-dual error estimator (<ref>) and the representations (<ref>) or in Remark <ref> (i) & (ii) are well-known, cf. <cit.>. However, the combination of (<ref>) with the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>) in Remark <ref> (iii) & (iv), to the best of the authors' knowledge, is new and leads to significantly improved experimental convergence rates of the corresponding adaptive mesh-refinement procedure compared to the contributions <cit.>, cf. Section <ref>.
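To indicate how the local refinement indicators from the preceding remark might be evaluated in practice, the following minimal sketch (in Python, assuming only numpy) approximates η^2_T,W from element-wise data using one-point (barycentric) quadrature; the data layout and function name are assumptions of this illustration and are not prescribed by the analysis above.

import numpy as np

def local_indicators(vol, grad_v, y_bary, div_y, v_bary, g_bary, alpha):
    # Per-element arrays (assumed layout): vol[T] = |T|, grad_v[T] = gradient of v on T,
    # y_bary[T], v_bary[T], g_bary[T] = values at the barycenter of T,
    # div_y[T] = div y on T (element-wise constant for Raviart-Thomas fields).
    # The indicator is only meaningful for admissible y with |y| <= 1 a.e. in Omega.
    grad_norm = np.linalg.norm(grad_v, axis=1)
    linear_part = (grad_norm - np.einsum('td,td->t', grad_v, y_bary)) * vol
    quadratic_part = vol / (2.0 * alpha) * (div_y - alpha * (v_bary - g_bary)) ** 2
    return linear_part + quadratic_part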
§.§ The discretized Rudin–Osher–Fatemi (ROF) model
Given g∈ L^2(Ω) and α>0, with g_hΠ_hg∈ℒ^0(𝒯_h), the discretized ROF model, proposed in <cit.>, consists in the minimization of I^cr_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h(v_h)∇_hv_h_L^1(Ω;ℝ^d)+α2Π_hv_h-g_h^2_L^2(Ω) .
Note that the functional (<ref>) defines a non-conforming approximation of the functional (<ref>), as, e.g., jump terms of v_h across inner element sides are not included. This, however, turned out to be essential in the derivation of optimal a priori error estimates in <cit.>.
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h^cr∈𝒮^1,cr(𝒯_h), called the discrete primal solution. Appealing to <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h) -I_K_1(0)(Π_hy_h)-12αdiv y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 .
Appealing to Theorem <ref> (below), there exists a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), which satisfies |Π_h z_h^rt|≤ 1 a.e. in Ω, a
discrete strong duality relation applies, i.e.,
I^cr_h(u_h^cr)= D_h^rt(z_h^rt) ,
and the discrete convex optimality relations
div z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) .
§.§ The regularized, discretized Rudin–Osher–Fatemi model
To approximate a discrete minimizer u_h^cr∈𝒮^1,cr(𝒯_h) of (<ref>), it is common to approximate
the modulus function by strictly convex regularizations. In this connection, for every ε∈ (0,1), we define a special regularization f_εℝ→ℝ_≥ 0 of the modulus function, for every t∈ℝ, via
f_ε(t) (1-ε) | t|_ε , | t|_ε (t^2+ε^2)^1/2 ,
where |·|_εℝ→ℝ_≥ 0 is commonly referred to as the standard regularization.
Let us collect the most important properties of the regularization (<ref>).
For every ε∈ (0,1), the following statements apply:
(i) f_ε∈ C^1(ℝ) with f_ε'(0)=0.
(ii) For every t∈ℝ, it holds -ε | t|-ε^2≤ f_ε(t)-| t|≤ε (1-| t|).
(iii) For every t∈ℝ, it holds | f_ε'(t)|≤ 1-ε.
(iv) For every s∈ℝ, it holds
f_ε^*(s)-ε ((1-ε)^2-| s|^2)^1/2 if | s|≤ 1-ε
+∞ if | s|> 1-ε .
The main reason to consider the regularization f_εℝ→ℝ_≥ 0 instead of the standard regularization |·|_εℝ→ℝ_≥ 0 consists in the property (iii) in Lemma <ref>. This additional slope reduction enables us later to construct a sufficiently accurate, admissible approximation of the dual solution using an additional projection step, cf. Remark <ref> (below) and Section <ref> (below).
ad (i). The claimed regularity f_ε∈ C^1(ℝ) is evident. Since for every t∈ℝ, it holds
f_ε'(t)=(1-ε) t(t^2+ε^2)^1/2 ,
we have that f_ε'(0)=0.
ad (ii). For every t∈ℝ, due to 0≤| t|_ε-| t|≤ε, we have that
-ε | t|-ε^2≤ -ε | t|_ε≤ f_ε(t)-| t|=ε-ε | t|_ε≤ε (1-| t|) .
ad (iii). Immediate consequence of the representation (<ref>).
ad (iv). Due to <cit.>, for every s∈ℝ and ε∈ (0,1), we have that
f_ε^*(s)=((1-ε) |·|_ε)^*(s)=(1-ε) (|·|_ε)^*(s1-ε) .
Since for every s∈ℝ and ε∈ (0,1), it holds
(|·|_ε)^*(s)=
-ε (1-| s|^2)^1/2 if | s|≤ 1
+∞ if | s|> 1
,
we conclude that
the claimed representation of the Fenchel conjugate applies.
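The properties collected in Lemma <ref> are easily checked numerically. The following minimal sketch (in Python, assuming only numpy) evaluates f_ε, f_ε', and f_ε^* and verifies Lemma <ref> (ii) and (iii) as well as the Fenchel–Young inequality on a grid; it is an illustration only and not part of the implementation described in Section <ref>.

import numpy as np

eps = 0.1                                                  # illustrative regularization parameter
f      = lambda t: (1 - eps) * np.sqrt(t**2 + eps**2)      # f_eps
fprime = lambda t: (1 - eps) * t / np.sqrt(t**2 + eps**2)  # f_eps'

def f_conj(s):
    # Fenchel conjugate of f_eps, cf. Lemma (iv): finite only for |s| <= 1 - eps
    s = np.asarray(s, dtype=float)
    out = np.full(s.shape, np.inf)
    ok = np.abs(s) <= 1 - eps
    out[ok] = -eps * np.sqrt((1 - eps)**2 - s[ok]**2)
    return out

t = np.linspace(-5.0, 5.0, 1001)
s = np.linspace(-(1 - eps), 1 - eps, 501)
assert np.all(np.abs(fprime(t)) <= 1 - eps + 1e-14)                       # Lemma (iii)
assert np.all(np.abs(f(t) - np.abs(t)) <= eps * (1 + np.abs(t)) + 1e-14)  # consequence of Lemma (ii)
gap = f(t)[:, None] + f_conj(s)[None, :] - t[:, None] * s[None, :]
assert gap.min() >= -1e-12                                                # Fenchel-Young inequality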
Given g∈ L^2(Ω), α> 0, and an element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h) with 0<ε_h<1 a.e. in Ω, for g_hΠ_hg∈ℒ^0(𝒯_h), the regularized, discrete ROF model consists in the minimization of the functional I^cr_h,ε_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h,ε_h(v_h)f_ε_h(|∇_hv_h|)_L^1(Ω)+α2Π_hv_h-g_h^2_L^2(Ω) .
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h,ε_h^cr∈𝒮^1,cr(𝒯_h), called the regularized, discrete primal solution.
Appealing to (f_ε_h∘|·|)^*=f_ε_h^*∘|·|, cf. <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of functional D_h,ε_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h,ε_h^rt(y_h) -∫_Ωf_ε_h^*(|Π_hy_h| ) dx-12αdiv y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 .
The following proposition clarifies the well-posedness of the dual regularized, discretized ROF model, i.e., the existence of a maximizer of (<ref>). It also yields a discrete reconstruction formula for a maximizer of (<ref>) from a minimizer of (<ref>) and proves discrete strong duality.
The following statements apply:
(i) A discrete weak duality relation applies, i.e.,
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) .
(ii) The discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined via the generalized Marini formula
z_h,ε_h^rtf_ε_h'(|∇_h u_h,ε_h^cr|)|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr+αΠ_h u_h,ε_h^cr-g_hd( id_ℝ^d-Π_h id_ℝ^d) ,
satisfies z_h,ε_h^rt∈ℛT^0_N(𝒯_h) and the discrete convex optimality relations
div z_h,ε_h^rt =α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_h z_h,ε_h^rt =f_ε_h'(|∇_ h u_h,ε_h^cr|)|∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr in ℒ^0(𝒯_h)^d .
(iii) The discrete flux z_h,ε_h^rt∈ℛT^0_N(𝒯_h) is a maximizer of (<ref>) and discrete strong duality applies, i.e.,
I^cr_h,ε_h(u_h,ε_h^cr)=D_h,ε_h^rt(z_h,ε_h^rt) .
Note that, by the Fenchel–Young identity, cf. <cit.>, (<ref>) is equivalent to
Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr =f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε (|∇_h u_h,ε_h^cr|) in ℒ^0(𝒯_h) .
Appealing to Lemma <ref> (iii), we have that |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω. Therefore,
if Π_hu_h,ε_h^cr-g_h_L^∞(Ω)≤ c_0 for some c_0>0, which can be expected by discrete maximum principles, then, choosing
ε_hα c_0/dh, yields that
z_h,ε_h^rt_L^∞(Ω;ℝ^d)≤ 1. However, choices like ε_h∼ h let us expect convergence rates not better than 𝒪(h^1/2), cf. Proposition <ref> (i) (below). In order to allow for the convergence rate 𝒪(h), one needs to choose ε_h∼ h^2. But, in this case, we cannot guarantee that z_h,ε_h^rt_L^∞(Ω;ℝ^d)≤ 1, so that we instead consider the scaled vector field z_h,ε_h^rt z_h,ε_h^rt(max{1,z_h,ε_h^rt_L^∞(Ω;ℝ^d)})^-1∈ℛT^0_N(𝒯_h), which is still a sufficiently accurate approximation of the dual solution, as indicated by the numerical experiments, cf. Section <ref>.
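In an implementation, the element-wise data of the generalized Marini formula (<ref>) and the subsequent scaling step can be assembled directly from the degrees of freedom of the regularized, discrete primal solution. The following sketch (in Python, with an assumed per-element data layout that is not prescribed by the analysis above) illustrates one possible realization.

import numpy as np

def marini_flux(grad_u, pi_u, g_h, alpha, eps):
    # Per-element data of z_{h,eps}^{rt}: on each T, z|_T(x) = c_T + (div_T/d) (x - x_T),
    # where c_T is the element-wise constant part Pi_h z and div_T its (constant) divergence.
    # grad_u: (nT, d) element-wise gradients; pi_u, g_h, eps: (nT,) element-wise values.
    modulus = np.linalg.norm(grad_u, axis=1)
    weight = (1.0 - eps) / np.sqrt(modulus**2 + eps**2)   # f_eps'(t)/t, well-defined also for t = 0
    c = weight[:, None] * grad_u                          # Pi_h z_{h,eps}^{rt}
    div_z = alpha * (pi_u - g_h)                          # div z_{h,eps}^{rt} on each element
    return c, div_z

def scale_to_unit_ball(c, div_z, sup_norm):
    # Scaling step: divide by max{1, ||z||_{L^infty}}; sup_norm is assumed to be provided,
    # e.g., from evaluating the element-wise affine fields at the element vertices.
    s = max(1.0, sup_norm)
    return c / s, div_z / s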
ad (i). Using element-wise that f_ε_h=f_ε_h^**, the definition of the convex conjugate, cf. (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)=inf_v_h∈𝒮^1,cr_D(𝒯_h)f_ε_h^**(|∇_ h v_h|)_L^1(Ω)+α2Π_h v_h-g_h_L^2(Ω)^2
=
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℒ^0(𝒯_h)^d-∫_Ωf_ε_h^*(|y_h |) dx+(y_h,∇_ h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-(div y_h,Π_h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-sup_v_h∈ℒ^0(𝒯_h)(div y_h,v_h)_Ω-α2v_h-g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-12αdiv y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) ,
which is the claimed discrete weak duality relation.
ad (ii). By Lemma <ref>, the minimality of u_h,ε_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>), for every v_h∈𝒮^1,cr(𝒯_h), yields that
(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω=0 .
By definition, the discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined by (<ref>), satisfies the discrete convex optimality condition (<ref>) and div(z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h.
Choosing v_h=1∈𝒮^1,cr(𝒯_h) in (<ref>), we find that ∫_Ωα (Π_hu_h,ε_h^cr-g_h) dx=0.
Hence, since for Γ_D=∅ the divergence operator ℛT^0_N(𝒯_h)→ℒ^0(𝒯_h)/ℝ is surjective, there exists
y_h∈ℛT^0_N(𝒯_h) such that div y_h=α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h). Then, we have that div((z_h,ε_h^rt-y_h)|_T)=0 in T for all T∈𝒯_h, i.e., z_h,ε_h^rt-y_h∈ℒ^0(𝒯_h)^d. In addition, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(Π_h y_h,∇_ h v_h)_Ω =-(div y_h,Π_h v_h)_Ω
=-α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω
=(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω
=(Π_h z_h,ε_h^rt,∇_ h v_h)_Ω .
In other words, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(y_h-z_h,ε_h^rt,∇_ h v_h)_Ω=(Π_h y_h-Π_h z_h,ε_h^rt,∇_ h v_h)_Ω=0 ,
i.e., y_h-z_h,ε_h^rt∈∇_ h(𝒮^1,cr_D(𝒯_h))^⊥. By the decomposition (<ref>), we have that ∇_ h(𝒮^1,cr_D(𝒯_h))^⊥=ker(div|_ℛT^0_N(𝒯_h))⊆ℛT^0_N(𝒯_h).
As a result, it holds y_h-z_h,ε_h^rt∈ℛT^0_N(𝒯_h). Due to y_h∈ℛT^0_N(𝒯_h), we conclude that z_h,ε_h^rt∈ℛT^0_N(𝒯_h). In particular, from
div(z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h, the discrete optimality condition
(<ref>) now follows.
ad (iii). Using (<ref>), (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
I_h,ε_h^cr(u_h,ε_h^cr) =
f_ε_h(|∇_ h u_h,ε_h^cr|)_L^1(Ω)+α2Π_h u_h,ε_h^cr-g_h_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx+(Π_h z_h,ε_h^rt,∇_ h u_h,ε_h^cr)_Ω+12αdiv z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-(div z_h,ε_h^rt,Π_hu_h,ε_h^cr)_Ω+12αdiv z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-1α(div z_h,ε_h^rt,div z_h,ε_h^rt+α g_h)_Ω+12αdiv z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-12αdiv z_h,ε_h^rt+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2
=D_h,ε_h^rt(z_h,ε_h^rt) ,
which is the claimed discrete strong duality relation and, thus, appealing to the discrete weak duality relation (<ref>), proves the maximality of z_h,ε_h^rt∈ℛT^0_N(𝒯_h) for (<ref>).
The following proposition describes the approximative behavior of the regularized, discretized ROF problem towards the (unregularized) discretized ROF problem, given uniform convergence (to zero) of the element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h). In what follows, in the convergence ε_h_L^∞(Ω)→ 0,
the average mesh-size h>0 is always fixed.
If ε_h_L^∞(Ω)<1, then the following statements apply:
(i) It holds α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2
≤ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (α2 g_L^2(Ω)^2+2 |Ω|).
(ii) div z_h,ε_h^rt→α (Π_hu_h^cr-g_h) in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
(iii) f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
(iv) f_ε_h (|∇_h u_h,ε_h^cr|)→|∇_h u_h^cr| in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
ad (i). Using both the strong convexity of I_h^cr𝒮^1,cr(𝒯_h)→ℝ∪{+∞} and Lemma <ref> (ii),
we obtain
α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2 ≤ I_h^cr(u_h,ε_h^cr)-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h,ε_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω| -I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω|-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) ( I_h^cr(u_h^cr)
+2 ε_h_L^∞(Ω) |Ω|)-I_h^cr(u_h^cr)
=
ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (I_h^cr(u_h^cr)+2 |Ω|) .
Since, by the minimality of u_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>) and the L^2-stability of Π_h L^2(Ω)→ℒ^0(𝒯_h), it holds
I_h^cr(u_h^cr)≤ I_h^cr(0)=α2g_h_L^2(Ω)^2≤α2g_L^2(Ω)^2 ,
from (<ref>) we conclude the claimed error estimate.
ad (ii). From claim (i), it follows that
Π_h u_h,ε_h^cr→Π_hu_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Thus, using (<ref>), from div z_h,ε_h^rt=α ( Π_h u_h,ε_h^cr-g_h) in ℒ^0(𝒯_h), cf. (<ref>), we conclude that
div z_h,ε_h^rt→α (Π_hu_h^cr-g_h) in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
ad (iii). Due to Π_h z_h,ε_h^rt=f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr and Lemma <ref> (iii), we have that
|Π_h z_h,ε_h^rt| =| f_ε_h'(|∇_h u_h,ε_h^cr|)|≤ 1-ε_h a.e. in Ω .
Therefore, using Lemma <ref> (iv) together with (<ref>), we conclude that
. | f_ε_h^*(|Π_h z_h,ε_h^rt| )| =
ε_h ((1-ε_h)^2-|Π_h z_h,ε_h^rt| ^2)^1/2
≤ε_h (1-ε_h)≤ε_h
} a.e. in Ω ,
which implies that f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
ad (iv). Due to (<ref>), (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) is bounded. The finite-dimensionality of 𝒮^1,cr(𝒯_h) and the Bolzano–Weierstraß theorem yield a subsequence (u_h,ε_h'^cr)_ε_h'_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) and a function ũ_h^cr∈𝒮^1,cr(𝒯_h) such that
u_h,ε_h'^cr→ũ_h^cr in 𝒮^1,cr(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
From (<ref>) it is readily derived that
f_ε_h' (|∇_h u_h,ε_h'^cr|)→|∇_hũ_h^cr| in ℒ^0(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
Consequently, for every v_h∈𝒮^1,cr(𝒯_h), we find that
I_h^cr(ũ_h^cr) =lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(u_h,ε_h'^cr)
≤lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(v_h)
=I_h^cr(v_h) .
Thus, due to the uniqueness of u_h^cr∈𝒮^1,cr(𝒯_h) as a minimizer of (<ref>), we get ũ_h^cr=u_h^cr in 𝒮^1,cr(𝒯_h). Since this argumentation remains valid for each subsequence of (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h), the standard subsequence principle implies that f_ε_h (|∇_h u_h,ε_h^cr|)→|∇_h u_h^cr| in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
The approximation properties of the regularized, discrete ROF model (<ref>) (and (<ref>)) towards the (unregularized) discrete ROF model (<ref>) (and (<ref>)) enable us to transfer the discrete convex duality relations established in Proposition <ref>, which apply mainly due to the differentiability of the regularized, discrete ROF model, to the non-differentiable discrete ROF model. To the best of the authors' knowledge, the following discrete convex duality relations for the (unregularized) discrete ROF model (<ref>)
seem to be new.
There exists a vector field z_h^rt∈ℛT^0_N(𝒯_h) with |Π_h z_h^rt|≤ 1 a.e. in Ω and the following properties:
(i) For a not relabeled subsequence, it holds
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
(ii) There hold the following discrete convex optimality relations:
div z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) .
(iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is maximal for D_h^rtℛT^0_N(𝒯_h)→ℝ and discrete strong duality applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
ad (i). Due to Proposition <ref> (ii) and (<ref>), the sequence (z_h,ε_h^rt)_ε_h_L^∞(Ω)→ 0⊆ℛT^0_N(𝒯_h) is bounded. Thus, by the finite-dimensionality of ℛT^0_N(𝒯_h), the Bolzano–Weierstraß theorem yields a not relabeled subsequence and a vector field z_h^rt∈ℛT^0_N(𝒯_h) such that
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Due to the continuity of Π_h L^1(Ω)→ℒ^0(𝒯_h) and ℛT^0_N(𝒯_h)↪ L^1(Ω), from (<ref>), we obtain
Π_h z_h,ε_h^rt→Π_h z_h^rt in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
From |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω, cf. (<ref>), and (<ref>), we obtain |Π_h z_h^rt|≤ 1 a.e. in Ω, i.e.,
I_K_1(0)(Π_h z_h^rt)=0 .
ad (ii). Using Proposition <ref>, (<ref>), and (<ref>), we find that
. div z_h^rt =lim_ε_h_L^∞(Ω)→ 0div z_h,ε_h^rt
=lim_ε_h_L^∞(Ω)→ 0α (Π_hu_h,ε_h^cr-g_h)
=α (Π_h u_h^cr-g_h) } a.e. in Ω ,
as well as.Π_h z_h^rt·∇_h u_h^cr =lim_ε_h_L^∞(Ω)→ 0Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr
=lim_ε_h_L^∞(Ω)→ 0f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε_h(|∇_h u_h,ε_h^cr|)
=|∇_h u_h^cr| } a.e. in Ω ,
i.e., the claimed discrete convex optimality conditions.
ad (iii).
Using Proposition <ref> and (<ref>), we find that
I_h^cr(u_h^cr) =lim_ε_h_L^∞(Ω)→ 0I_h,ε_h^cr(u_h,ε_h^cr)
=lim_ε_h_L^∞(Ω)→ 0D_h,ε_h^rt(z_h,ε_h^rt)
=D_h^rt(z_h^rt) ,
i.e., the claimed discrete strong duality relation.
§ NUMERICAL EXPERIMENTS
In this section, we review the theoretical findings of Section <ref> via numerical experiments. To compare approximations to an exact solution, we impose Dirichlet boundary conditions on Γ_D=∂Ω, though an existence theory is difficult to establish, in general. However, the concepts derived in Section <ref> carry over verbatim with Γ_N=∅ provided that the existence of a minimizer is given. All experiments were conducted using the finite element software package (version 2019.1.0), cf. <cit.>. All graphics were generated using the library (version 3.5.1), cf. <cit.>, and the library (version 2023.4.4), cf. <cit.>.
§.§ Implementation details regarding the optimization procedure
All computations are based on the regularized, discrete ROF problem (<ref>). This is motivated by the fact that appealing to Proposition <ref> (i), in order to bound the error u-Π_h u_h^cr_L^2(Ω), it suffices to determine the error u-Π_h u_h,ε_h^cr_L^2(Ω). The iterative minimization of (<ref>) is realized using a semi-implicit discretized L^2-gradient flow from <cit.> (see also <cit.>) modified with a residual stopping criterion guaranteeing the necessary accuracy in the optimization procedure.
Appealing to <cit.>, the iterates u_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, the residuals r_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, generated by Algorithm <ref>, and the minimizer u_h,ε_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) satisfy
u_h,ε_h^cr-u_h^k_L^2(Ω)≤ 2 r_h^k_L^2(Ω) .
In consequence, if we choose as a stopping criterion that r_h^k^*_L^2(Ω)≤ε_stop^hc_stop h for k^*∈ℕ, where c_stop>0 does not depend on h>0, then, owing to Proposition <ref> (i) and (<ref>), we have that
Π_h(u_h^cr-u_h^k^*)_L^2(Ω)^2≤ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (2 g_L^2(Ω)^2+8α |Ω|)+8 c_stop^2 h^2 .
If ε_h_L^∞(Ω)≤ c_reg h^2, where c_reg∈ (0,1), then, we arrive at Π_h(u_h^cr-u_h^k^*)_L^2(Ω)=𝒪(h).
Thus, to bound the error u-Π_hu_h^cr_L^2(Ω) experimentally, it is sufficient to compute u-Π_hu_h^k^*_L^2(Ω).
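For orientation, the following purely algebraic sketch (in Python, using scipy) mimics the structure of the semi-implicit L^2-gradient flow with the residual stopping criterion described above; the assembly routine and the matrices are placeholders for an outer finite element code (our own assumptions), not the interface of any particular software package.

import numpy as np
import scipy.sparse.linalg as spla

def semi_implicit_gradient_flow(assemble_K, M, M_pi, b_g, u0, alpha,
                                tau=1.0, eps_stop=1e-3, max_iter=10000):
    # assemble_K(u): stiffness matrix weighted by f_eps'(|grad_h u|)/|grad_h u| (element-wise);
    # M: Crouzeix-Raviart L^2 mass matrix; M_pi, b_g: realize (Pi_h u, Pi_h v)_Omega and (g_h, Pi_h v)_Omega.
    u = u0.copy()
    for _ in range(max_iter):
        K = assemble_K(u)                                    # lag the non-linearity (semi-implicit step)
        A = M / tau + K + alpha * M_pi
        u = spla.spsolve(A.tocsc(), M @ u / tau + alpha * b_g)
        res = assemble_K(u) @ u + alpha * (M_pi @ u - b_g)   # Euler-Lagrange residual at the new iterate
        r = spla.spsolve(M.tocsc(), res)                     # L^2-Riesz representative r_h^k
        if np.sqrt(r @ res) <= eps_stop:                     # ||r_h^k||_{L^2(Omega)} <= eps_stop
            break
    return u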
The following proposition proves the well-posedness, stability, and convergence of Algorithm <ref>.
Let the assumptions of Algorithm <ref> be satisfied and let ε_h∈ℒ^0(𝒯_h) such that ε_h>0 a.e. in Ω and ε_h_L^∞(Ω)<1. Then, the following statements apply:
(i) Algorithm <ref> is well-posed, i.e., for every k∈ℕ, given the most-recent iterate u_h^k-1∈𝒮^1,cr_D(𝒯_h), there exists a unique iterate u_h^k∈𝒮^1,cr_D(𝒯_h) solving (<ref>).
(ii) Algorithm <ref> is unconditionally strongly stable, i.e., for every L∈ℕ, it holds
I_h,ε_h^cr(u_h^L)+τ∑_k=1^Ld_τ u_h^k_L^2(Ω)^2≤ I_h,ε_h^cr(u_h^0) .
(iii) Algorithm <ref> terminates after a finite number of steps, i.e., there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε_stop^h.
The proof of Proposition <ref> (ii) is essentially based on the following inequality.
For every ε∈ (0,1) and a,b∈ℝ^d, it holds
f_ε'(| a|)| a| b·(b-a)≥ f_ε(| b|)-f_ε(| a|)+12f_ε'(| a|)| a|| b-a|^2 .
Follows from <cit.>, since f_ε∈ C^1(ℝ_≥ 0) and (t↦ f_ε'(t)/t)∈ C^0(ℝ_≥ 0) is positive and non-increasing for all ε∈ (0,1).
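As a plausibility check (not a proof), the inequality of Lemma <ref> can also be verified numerically for randomly sampled vectors; the following sketch assumes only numpy and the definition of f_ε from (<ref>).

import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
f = lambda t: (1 - eps) * np.sqrt(t**2 + eps**2)      # f_eps
for _ in range(10000):
    a, b = rng.normal(size=2), rng.normal(size=2)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    c = (1 - eps) / np.sqrt(na**2 + eps**2)           # f_eps'(|a|)/|a|
    lhs = c * (b @ (b - a))
    rhs = f(nb) - f(na) + 0.5 * c * np.linalg.norm(b - a)**2
    assert lhs >= rhs - 1e-10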
ad (i). Since f_ε'(t)/t≥ 0 for all ε∈ (0,1) and t≥ 0, the well-posedness of Algorithm <ref> is a direct consequence of the Lax–Milgram lemma.
ad (ii).
Let L∈ℕ be arbitrary. Then,
for every k∈{1,…,L}, choosing v_h=d_τ u_h^k∈𝒮^1,cr_D(𝒯_h) in (<ref>), we find that
d_τ u_h^k_L^2(Ω)^2+(f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k,∇_h d_τ u_h^k)_Ω+α (Π_hu_h^k-g_h,Π_h d_τ u_h^k)_Ω=0 .
Appealing to Lemma <ref> with a=∇_hu_h^k-1|_T∈ℝ^d and b=∇_h u_h^k|_T∈ℝ^d applied for all T∈𝒯_h, for every k∈{1,…,L}, we have that
f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k·∇_h d_τ u_h^k≥ d_τ f_h,ε_h(|∇_hu_h^k| ) a.e. in Ω .
In addition, since d_τ g_h=0, for every k∈{1,…,L}, we have that
(Π_hu_h^k-g_h)Π_h d_τ u_h^k =(Π_hu_h^k-g_h)d_τ(Π_h u_h^k-g_h)
=d_τ2|Π_hu_h^k-g_h|^2 .
Using (<ref>) and (<ref>) in (<ref>), for every k∈{1,…,L},
we arrive at
d_τ u_h^k_L^2(Ω)^2+d_τ I_h,ε_h^cr(u_h^k)≤ 0 .
Summation of (<ref>) with respect to k∈{1,…,L}, using ∑_k=1^Ld_τ I_h,ε_h^cr(u_h^k)=I_h,ε_h^cr(u_h^L)-I_h,ε_h^cr(u_h^0), yields the claimed stability estimate.
ad (iii). Due to (ii), we have that d_τ u_h^k_L^2(Ω)^2→ 0 (k→∞), i.e., by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h) and the equivalence of norms, it holds
u_h^k-u_h^k-1→ 0 in 𝒮^1,cr_D(𝒯_h) (k→∞) .
In addition, due to (ii), we have that I_h,ε_h^cr(u_h^k)≤ I_h,ε_h^cr(u_h^0), which, using Lemma <ref>, implies that
(u_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h) is bounded. Due to the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), the Bolzano–Weierstraß theorem yields a subsequence (u_h^k_l)_l∈ℕ⊆𝒮^1,cr_D(𝒯_h) and a function ũ_h∈𝒮^1,cr_D(𝒯_h) such that
u_h^k_l→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
Due to (<ref>), from (<ref>), we deduce that
u_h^k_l-1→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
As a result, using (<ref>)–(<ref>), by passing for l→∞ in (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(f_h,ε_h'(|∇_hũ_h| )|∇_hũ_h|∇_hũ_h ,∇_hv_h )_Ω+α (Π_hũ_h-g_h,Π_hv_h)_Ω=0 ,
and, by uniqueness, ũ_h=u_h,ε_h^cr.
Hence, using (<ref>) and (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(r_h^k_l,v_h)_Ω =(f_h,ε_h'(|∇_hu_h^k_l| )|∇_hu_h^k_l|∇_hu_h^k_l,∇_hv_h )_Ω+α (Π_hu_h^k_l-g_h,Π_hv_h)_Ω
→(f_h,ε_h'(|∇_hu_h,ε_h^cr| )|∇_hu_h,ε_h^cr|∇_hu_h,ε_h^cr ,∇_hv_h )_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_hv_h)_Ω=0 (l→∞) ,
i.e., r_h^k_l⇀ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), and, thus, by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), r_h^k_l→ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), which implies that r_h^k_l→ 0 in L^2(Ω) (l→∞). As this remains valid for each subsequence of (r_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h), the standard convergence principle yields that r_h^k→ 0 in L^2(Ω) (k→∞). In particular, there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε^h_stop.
§.§ Implementation details regarding the adaptive mesh refinement procedure
Before we present numerical experiments, we briefly outline the details of the implementations regarding the adaptive mesh refinement procedure.
In general, we follow the adaptive algorithm, cf. <cit.>:
(i) The regularized, discrete primal solution u_i^cr∈𝒮^1,cr_D(𝒯_i) in step ('Solve') is computed using
the semi-implicit discretized L^2-gradient flow, cf. Algorithm <ref>, for fixed step-size τ=1.0, stopping criterion ε_stop^h_ih_i/√(20), and initial condition u_i^0=0∈𝒮_D^1,cr(𝒯_i). Appealing to Proposition <ref> (ii), Algorithm <ref> is unconditionally strongly stable, so that employing the fixed step-size τ=1.0 is a reasonable choice.
The stopping criterion ε_stop^h_ih_i/√(20) ensures (cf. the argumentation below Algorithm <ref>) that the final iterate u_h_i^k^*∈𝒮^1,cr_D(𝒯_i) is a sufficiently accurate approximation of the discrete primal solution, in the sense
that its accuracy does not violate the best possible linear convergence rate, cf. Remark <ref> (below).
(ii) As an admissible approximation in 𝒮^1,cr_D(𝒯_i), i.e., one vanishing on ∂Ω, we employ
u_i^cr if u_i^cr=0 on ∂Ω ,
I_i^∂ u_i^cr else ,
where the operator I_i^∂𝒮^1,cr(𝒯_i)→𝒮^1,cr_D(𝒯_i) for every v_h_i∈𝒮^1,cr(𝒯_i) is defined by
I_i^∂v_i∑_S∈𝒮_h_i;S∩∂Ω=∅v_h_i(x_S) φ_S .
(iii) Note that the particular choices in (ii) are only due to the imposed homogeneous Dirichlet boundary condition. In the case Γ_D=∅, the choice u_i^cr itself is always admissible.
(iv) If not otherwise specified, we employ the parameter θ=1/2 in step ('Mark').
(v) To find the set ℳ_i⊆𝒯_i in step ('Mark'), we deploy the Dörfler marking strategy, cf. <cit.>; a minimal sketch of this criterion is given after this list.
(vi) The (minimal) conforming refinement of 𝒯_i with respect to ℳ_i in step ('Refine') is obtained by deploying the red-green-blue refinement algorithm, cf. <cit.>.
(vii) For the construction of the adaptively modified regularization parameter ε_i∈ℒ^0(𝒯_i) in step ('Refine'), we separately employ the following two choices:
ε_iαd|Π_h_i-1 u_i-1^cr-g_h_i| h_i^2 + h_i^3 (local) ,
ε_i h_i^2 (global) .
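The following minimal sketch of the Dörfler (bulk) criterion used in step ('Mark') operates on an array of local refinement indicators; the data layout and function name are assumptions of this illustration.

import numpy as np

def doerfler_marking(eta2, theta=0.5):
    # Mark a (quasi-)minimal set M with sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2.
    order = np.argsort(eta2)[::-1]                  # elements sorted by decreasing indicator
    cumulative = np.cumsum(eta2[order])
    k = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return order[:k]                                # indices of the marked elements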
§.§ Example with Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^d, Γ_D=∂Ω, d∈{2,3}, r=1/2, α =10, and g=χ_B_r^d(0)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^d), for a.e. x∈Ω are defined by
u(x) (1-d/(α r)) g(x) ,
z(x)
-x/r if | x| < r ,
-r x/| x|^d if | x|≥ r .
Note that z∈ W^1,∞(Ω;ℝ^d), so that, appealing to <cit.>, uniform mesh-refinement (i.e., θ=1 in Algorithm <ref>) is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
2D Case.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i h_i^2, cf. (global). For both choices,
a refinement towards the circle ∂ B_r^2(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it is seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below). In addition, Figure <ref> indicates the primal-dual error estimator is reliable and efficient with respect to the error quantity
ρ̃^2(u_i^cr,z_i^rt)α2u_i^cr-u^2_L^2(Ω)+12αdiv z_i^rt-div z^2_L^2(Ω) , i∈ℕ ,
which, appealing to Remark <ref> (iv), is a lower bound for sum of the optimal convexity measures.
3D Case. The initial triangulation 𝒯_0 of Algorithm <ref> consists of 27 cubes each divided into six tetrahedrons. Using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i h_i^2, cf. (global), we report similar results to the 2D case: for both choices,
a refinement towards the sphere ∂ B_r^3(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported, which can be seen
in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10) and
the (local) L^2-projection onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted.
Figure <ref> shows that the adaptive Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below).
In one dimension, the L^2-best-approximation error of the sign function on quasi-uniform
partitions is of order 𝒪(h^1/2), cf. <cit.>. More generally, using that the
intersection BV(Ω) ∩ L^∞(Ω) is contained in
fractional Sobolev spaces W^s,2(Ω) for all s<1/2,
cf. <cit.>, one cannot expect a higher convergence rate
than 𝒪(h^1/2) for generic, essentially bounded functions of bounded variation. For triangulations that are graded towards the jump
sets of certain discontinuous functions with a quadratic grading
strength, i.e., the local mesh-size satisfies
h_T ∼ h^2 for all elements T∈𝒯_h at the discontinuity set, with the average mesh-size h∼(𝒩_h)^-1/d, a linear
convergence rate 𝒪(h) has been established in <cit.>. Since our
error estimates not only bound squared L^2-errors but also control
squares of L^p-norms of non-linear error quantities involving derivatives, cf. , a higher convergence rate than linear cannot be expected.
In view of these aspects, the linear convergence rate 𝒪(h) for
the devised adaptive strategy is quasi-optimal.
§.§ Example without Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, r=1/2, α =10, and g=χ_B_r^2(re_1)-χ_B_r^2(-re_1)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), for a.e. x∈Ω, are defined by
u(x)≔(1-2/(α r)) g(x) ,
z(x)≔
∓(x∓ r e_1)/r ,  | x∓ r e_1| < r ,
∓ r(x∓ r e_1)/| x∓ r e_1|^2 ,  | x∓ r e_1|≥ r .
Note that z∉ W^1,∞(Ω;ℝ^2), so that we cannot refer to <cit.> in order to expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
However, since z|_Ω^±∈ W^1,∞(Ω^±;ℝ^2), where Ω^+≔Ω∩ (ℝ_>0×ℝ) and Ω^-≔Ω∩ (ℝ_<0×ℝ), and since the coarsest triangulation 𝒯_0 of Figure <ref> and, hence, also all resulting refinements 𝒯_i, i∈ℕ, of 𝒯_0 resolve J_z≔Ω∩ ({0}×ℝ), i.e., the jump set of
z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), in the sense that J_z⊆⋃_S∈𝒮_h_iS for all i∈ℕ,
referring to <cit.>, we can expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i≔ h_i^2, cf. (global). For both choices,
a refinement towards ∂ B_r^2(re_1)∪∂ B_r^2(-re_1), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the scaled regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that employing the adaptively modified regularization parameter, cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example with Lipschitz continuous primal solution and Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, α =10, s(t)≔√(3t) and r(t)≔(1/2)√(1-4t) for t=0.1, and g∈ BV(Ω)∩ L^∞(Ω), for a.e. x∈Ω, be defined by
g(x)≔
1 +(2α^-1-(s(t)^2+t))/s(t) if | x|≤ s(t) ,
1 +(α^-1-(| x|^2+t))/| x| if s(t)<| x|≤ r(t) ,
0 else .
Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2) with | z|≤ 1 a.e. in Ω, for a.e. x∈Ω, are defined by
u(x)≔
1 -(s(t)^2+t)/s(t) if | x|≤ s(t) ,
1 -(| x|^2+t)/| x| if s(t)<| x|≤ r(t) ,
0 else ,
z(x)≔
-x/s(t) if | x|≤ s(t) ,
-x/| x| if s(t)<| x|≤ r(t) ,
-xr(t)/| x|^2 else .
Note that z∈W^1,∞(Ω;ℝ^2), so that, appealing to <cit.>, uniform mesh-refinement is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,5,10,15}, generated by Algorithm <ref>
employing either ε_i∈ℒ^0(𝒯_i), cf. (local), or ε_i≔ h_i^2, cf. (global). For both choices,
a refinement mainly towards and on the set {|∇ u| >0} is reported.
This is also seen in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10), the (local)
L^2-projection onto element-wise constant functions
Π_h_10 u_10^cr∈ℒ^0(𝒯_10), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) and of the scaled regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted. Figure <ref> shows that employing the adaptively modified regularization parameter, cf. (local), the refinement takes place at and on the set {|∇ u| >0}. However, in Figure <ref>, again, it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example without Dirichlet boundary condition and without exact solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^2, r=1/2, Γ_D=∅, α =100, and g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution
and the dual solutions are not known. However, appealing to <cit.>, given the regularity of g∈ BV(Ω)∩ L^∞(Ω),
we can expect the convergence rate 𝒪(h^1/4) using uniform mesh refinement.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i≔ h_i^2, cf. (global). For both choices,
a refinement towards the square ∂ [-r,r]^2, i.e., the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω) is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is, again, more concentrated at the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/4) predicted by <cit.> for uniform mesh-refinement to the value 𝒪(h^2/5). This, on the one hand, confirms the optimality of the a priori error estimates established in <cit.> and, on the other hand, appealing to <cit.>, lets us expect that there exists no Lipschitz continuous dual solution to the given data g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). The reported reduced error decay of 𝒪(h^2/5) compared to <cit.>, where an error decay of 𝒪(h^1/2) is reported, might only be pre-asymptotic and due to slight accuracy losses resulting from the global scaling step. This might be due to potential singularities of a dual solution located at the corners of the square ∂ [-r,r]^2, as indicated in Figure <ref>. Therefore, it is possible that the error decay 𝒪(h^1/2) of <cit.> is only reported after surpassing a potential pre-asymptotic regime.
§.§ Numerical experiments with application to image processing
In order to benchmark the performance of the proposed numerical scheme (cf. Algorithm <ref> and Algorithm <ref>)
in a problem related to image processing, we examine a standard example from the field of image processing (cf. Section <ref>) and a new example (cf. Section <ref>).
§.§.§ The Cameraman image
We examine the cameraman image, which in a similar context has been considered in <cit.>. In this example,
we let Ω≔(0,1)^2, Γ_D=∅, α=1e+4, and g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the cameraman image on a uniform triangulation with 66,049
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as coarsening strategy, reduces
the number of nodes within 30 iteration steps to 25,059 nodes, which corresponds to 38.0% of the initial number of nodes and results in a squared L^2-error of ‖ u_30^cr-g‖_L^2(Ω)^2≈ 2.211e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
§.§.§ The Merle image
We examine an image of Merle, the male cat of the second author. In this example,
we let Ω≔(0,1)^2, Γ_D=∅, α=1e+4, and
g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the Merle image on a uniform triangulation with 140,625
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as coarsening strategy, reduces
the number of nodes within 30 iteration steps to 41,749 nodes, which corresponds to 30.0% of the initial number of nodes and results in a squared L^2-error of ‖ u_30^cr-g‖_L^2(Ω)^2≈ 2.162e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
AO00
M. Ainsworth and J. T.
Oden, A posteriori error estimation in finite element
analysis, Pure and Applied Mathematics (New York), Wiley-Interscience
[John Wiley & Sons], New York, 2000.
10.1002/9781118032824.
Bar12
S. Bartels, Total variation minimization with finite
elements: convergence and iterative solution, SIAM J. Numer. Anal.
50 no. 3 (2012), 1162–1180.
10.1137/11083277X.
Bar15
S. Bartels, Numerical methods for nonlinear
partial differential equations, Springer Series in Computational
Mathematics 47, Springer, Cham, 2015.
10.1007/978-3-319-13797-1.
Bar21
S. Bartels, Nonconforming discretizations of convex
minimization problems and precise relations to mixed methods, Comput.
Math. Appl. 93 (2021), 214–229.
10.1016/j.camwa.2021.04.014.
BDN18
S. Bartels, L. Diening, and
R. H. Nochetto, Unconditional stability of
semi-implicit discretizations of singular flows, SIAM J. Numer. Anal.
56 no. 3 (2018), 1896–1914.
10.1137/17M1159166.
BKROF22
S. Bartels and
A. Kaltenbach, Error estimates for total-variation
regularized minimization problems with singular dual solutions, Numer.
Math. 152 no. 4 (2022), 881–906.
10.1007/s00211-022-01324-w.
BK22Obstacle
S. Bartels and
A. Kaltenbach, Error analysis for a
Crouzeix-Raviart approximation of the obstacle problem, 2023.
10.48550/ARXIV.2302.01646.
BM20
S. Bartels and
M. Milicevic, Primal-dual gap estimators for a
posteriori error analysis of nonsmooth minimization problems, ESAIM
Math. Model. Numer. Anal. 54 no. 5 (2020), 1635–1660.
10.1051/m2an/2019074.
BNS15
S. Bartels, R. H. Nochetto,
and A. J. Salgado, A total variation diminishing
interpolation operator and applications, Math. Comp. 84
no. 296 (2015), 2569–2587. 10.1090/mcom/2942.
BTW21
S. Bartels, R. Tovey, and
F. Wassmer, Singular solutions, graded meshes,and
adaptivity for total-variation regularized minimization problems,
ESAIM Math. Model. Numer. Anal. 56 no. 6 (2022), 1871–1888.
10.1051/m2an/2022056.
BW21
S. Bartels and Z. Wang,
Orthogonality relations of Crouzeix-Raviart and Raviart-Thomas finite
element spaces, Numer. Math. 148 no. 1 (2021), 127–139.
10.1007/s00211-021-01199-3.
bartels15
S. Bartels, Error control and adaptivity for a
variational model problem defined on functions of bounded variation,
Math. Comp. 84 no. 293 (2015), 1217–1240.
10.1090/S0025-5718-2014-02893-7.
BC08
S. Bartels and
C. Carstensen, A convergent adaptive finite element
method for an optimal design problem, Numer. Math. 108 no. 3
(2008), 359–385. 10.1007/s00211-007-0122-x.
BBHSVN23
L. Baumgärtner,
R. Bergmann, R. Herzog,
S. Schmidt, and
J. Vidal-Núnez, Total generalized variation for
piecewise constant functions on triangular meshes with applications in
imaging, SIAM Journal on Imaging Sciences 16 no. 1 (2023),
313–339. 10.1137/22M1505281.
BC11
H. H. Bauschke and P. L.
Combettes, Convex analysis and monotone operator theory in hilbert
spaces, in CMS Books in Mathematics, 2011.
BW22
L. Baňas and A. Wilke,
A posteriori estimates for the stochastic total variation flow, SIAM
J. Numer. Anal. 60 no. 5 (2022), 2657–2680.
10.1137/21M1447982.
BB20
F. Bertrand and D. Boffi,
The Prager-Synge theorem in reconstruction based a posteriori error
estimation, in 75 years of mathematics of computation, Contemp.
Math. 754, Amer. Math. Soc., [Providence], RI, [2020] 2020, pp. 45–67. 10.1090/conm/754/15152.
Braess13
D. Braess, Finite Elemente. Theorie,
schnelle Löser und Anwendungen in der Elastizitätstheorie, 5th
revised ed. ed., Springer-Lehrb. Mastercl., Berlin: Springer Spektrum,
2013 (German). 10.1007/978-3-642-34797-9.
Brae09
D. Braess, An a posteriori error estimate and a
comparison theorem for the nonconforming P_1 element, Calcolo
46 no. 2 (2009), 149–155. 2520373.
10.1007/s10092-009-0003-z.
braides98
A. Braides, Approximation of free-discontinuity
problems, Lecture Notes in Mathematics 1694,
Springer-Verlag, Berlin, 1998. 10.1007/BFb0097344.
bregman67
L. Brégman, The relaxation method of finding the
common point of convex sets and its application to the solution of problems
in convex programming, USSR Computational Mathematics and Mathematical
Physics 7 no. 3 (1967), 200–217.
https://doi.org/10.1016/0041-5553(67)90040-7.
CL15
C. Carstensen and D. J.
Liu, Nonconforming FEMs for an optimal design problem, SIAM
J. Numer. Anal. 53 no. 2 (2015), 874–894.
10.1137/130927103.
CKNS08
J. Cascon, C. Kreuzer,
R. Nochetto, and
K. Siebert, Quasi-optimal convergence rate for an
adaptive finite element method, SIAM J. Numer. Anal. 46
no. 5 (2008), 2524–2550. 10.1137/07069047X.
CCMN08
V. Caselles, A. Chambolle,
S. Moll, and M. Novaga, A
characterization of convex calibrable sets in ℝ^N with respect to
anisotropic norms, Ann. Inst. H. Poincaré Anal. Non Linéaire
25 no. 4 (2008), 803–832.
10.1016/j.anihpc.2008.04.003.
CP20
A. Chambolle and T. Pock,
Crouzeix-Raviart approximation of the total variation on simplicial meshes,
J. Math. Imaging Vision 62 no. 6-7 (2020), 872–899.
10.1007/s10851-019-00939-3.5mm
CR73
M. Crouzeix and P.-A.
Raviart, Conforming and nonconforming finite element methods for
solving the stationary Stokes equations. I, Rev. Française
Automat. Informat. Recherche Opérationnelle Sér. Rouge 7
no. R-3 (1973), 33–75.
Dac08
B. Dacorogna, Direct methods in the calculus of
variations, second ed., Applied Mathematical Sciences 78,
Springer, New York, 2008.
DK08
L. Diening and C. Kreuzer,
Linear convergence of an adaptive finite element method for the
p-Laplacian equation, SIAM J. Numer. Anal. 46 no. 2
(2008), 614–638. 10.1137/070681508.
DR07
L. Diening and
M. Růžička, Interpolation operators in
Orlicz-Sobolev spaces, Numer. Math. 107 no. 1 (2007),
107–129. 10.1007/s00211-007-0079-9.
Doe96
W. Dörfler, A convergent adaptive algorithm for
Poisson's equation, SIAM J. Numer. Anal. 33 no. 3 (1996),
1106–1124. 10.1137/0733054.
ET99
I. Ekeland and
R. Témam, Convex analysis and variational
problems, english ed., Classics in Applied Mathematics 28,
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,
1999, Translated from the French.
10.1137/1.9781611971088.
EG21
A. Ern and J. L. Guermond,
Finite Elements I: Approximation and Interpolation, Texts in
Applied Mathematics no. 1, Springer International Publishing, 2021.
10.1007/978-3-030-56341-7.
FV04
F. Fierro and A. Veeser, A
posteriori error estimators for regularized total variation of characteristic
functions, SIAM J. Numer. Anal. 41 no. 6 (2003), 2032–2055.
10.1137/S0036142902408283.
HK04
M. Hintermüller and
K. Kunisch, Total bounded variation regularization
as a bilaterally constrained optimization problem, SIAM J. Appl.
Math. 64 no. 4 (2004), 1311–1333.
10.1137/S0036139903422784.
Hun07
J. D. Hunter, Matplotlib: A 2d graphics environment,
Computing in Science & Engineering 9 no. 3 (2007), 90–95.
10.1109/MCSE.2007.55.
LW10
A. Logg and G. N. Wells,
DOLFIN: automated finite element computing, ACM Trans. Math.
Software 37 no. 2 (2010), Art. 20, 28.
10.1145/1731022.1731030.
Mar85
L. D. Marini, An inexpensive method for the
evaluation of the solution of the lowest order Raviart-Thomas mixed
method, SIAM J. Numer. Anal. 22 no. 3 (1985), 493–496.
10.1137/0722029.
vedo
M. e. a. Musy, marcomusy/vedo: 2023.4.4, March 2023.
10.5281/zenodo.7734756.
NSV00
R. H. Nochetto,
G. Savaré, and
C. Verdi, A posteriori error estimates for variable
time-step discretizations of nonlinear evolution equations,
Communications on Pure and Applied Mathematics 53 no. 5
(2000), 525–589.
https://doi.org/10.1002/(SICI)1097-0312(200005)53:5<525::AID-CPA1>3.0.CO;2-M.
OBGXY05
S. Osher, M. Burger,
D. Goldfarb, J. Xu, and
W. Yin, An iterative regularization method for
total variation-based image restoration, Multiscale Modeling &
Simulation 4 no. 2 (2005), 460–489. 10.1137/040605412.
PraSyn47
W. Prager and J. L. Synge,
Approximations in elasticity based on the concept of function space,
Quart. Appl. Math. 5 (1947), 241–269.
10.1090/qam/25902.
RT75
P.-A. Raviart and J. M.
Thomas, A mixed finite element method for 2nd order elliptic
problems, in Mathematical aspects of finite element methods (Proc.
Conf., Consiglio Naz. delle Ricerche (C.N.R.), Rome, 1975),
1977, pp. 292–315. Lecture Notes in Math., Vol. 606.
Repin18
S. Repin and J. Valdman,
Error identities for variational problems with obstacles, ZAMM Z.
Angew. Math. Mech. 98 no. 4 (2018), 635–658.
10.1002/zamm.201700105.
Rep99
S. I. Repin, A posteriori error estimates for
approximate solutions to variational problems with strongly convex
functionals, J. Math. Sci. (New York) 97 no. 4 (1999),
4311–4328, Problems of mathematical physics and function theory.
10.1007/BF02365047.
ROF92
L. I. Rudin, S. Osher, and
E. Fatemi, Nonlinear total variation based noise
removal algorithms, Phys. D 60 no. 1-4 (1992), 259–268,
Experimental mathematics: computational issues in nonlinear science (Los
Alamos, NM, 1991). 10.1016/0167-2789(92)90242-F.
dr-nafsa
M. Růžička and
L. Diening, Non–Newtonian fluids and function
spaces, in Nonlinear Analysis, Function Spaces and Applications,
Proceedings of NAFSA 2006 Prague, 8, 2007, pp. 95–144.
Tart07-book
L. Tartar, An introduction to Sobolev spaces
and interpolation spaces, Lecture Notes of the Unione Matematica
Italiana 3, Springer, Berlin; UMI, Bologna, 2007.
Ver13
R. Verfürth, A Posteriori Error Estimation
Techniques for Finite Element Methods, Oxford University Press, 04 2013.
10.1093/acprof:oso/9780199679423.001.0001.
ZeiIII
E. Zeidler, Nonlinear functional analysis and
its applications. III, Springer-Verlag, New York, 1985, Variational
methods and optimization, Translated from the German by Leo F. Boron.
10.1007/978-1-4612-5020-3.
|
http://arxiv.org/abs/2307.03983v1 | 20230708141424 | Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission | [
"Yanshi Sun",
"Wei Cao",
"Momiao Zhou",
"Zhiguo Ding"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission
Yanshi Sun, Member, IEEE, Wei Cao, Momiao Zhou, Member, IEEE, Zhiguo Ding, Fellow, IEEE
Y. Sun, Wei Cao and M. Zhou are with the School of Computer Science and Information
Engineering, Hefei University of Technology, Hefei, 230009, China. (email: [email protected], [email protected] and [email protected]).
Z. Ding is with Department of Electrical Engineering and Computer
Science, Khalifa University, Abu Dhabi, UAE, and Department of Electrical
and Electronic Engineering, University of Manchester, Manchester, UK. (email: [email protected]).
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The aim of this paper is to reveal the importance of hybrid successive interference cancellation (SIC) and power adaptation (PA) for improving transmission robustness of uplink non-orthogonal multiple access (NOMA).
Particularly, a cognitive radio inspired uplink NOMA communication scenario is considered, where one primary user is allocated one dedicated resource block, while M secondary users compete with each other to be opportunistically served by using the same resource block of the primary user. Two novel schemes are proposed for the considered scenario, namely hybrid SIC with PA (HSIC-PA) scheme and fixed SIC with PA (FSIC-PA) scheme. Both schemes can ensure that the secondary users are served without degrading the transmission reliability of the primary user compared to conventional orthogonal multiple access (OMA) based schemes. Rigorous analytical results are presented to evaluate the performance of the proposed two schemes. It is shown that both schemes can avoid outage probability error floors without any constraints on users' target rates in the high SNR regime. Furthermore, it is shown that the diversity gain achieved by the HSIC-PA scheme is M, while that of the FISC-PA scheme is only 1. Numerical results are provided to verify the developed analytical results and also demonstrate the superior performance achieved by the proposed schemes by comparing with the existing HSIC without PA (HSIC-NPA) scheme.
The presented simulation results also show that HSIC-PA scheme performs the best among the three schemes, which indicates the importance of the combination of HSIC and PA for improving transmission robustness.
Non-orthogonal multiple access (NOMA), hybrid successive interference cancellation (HSIC), power adaptation, outage probability.
§ INTRODUCTION
Non-orthogonal multiple access (NOMA) has attracted extensive research interest during the past few years, and has been recognized as an important potential enabling technology for future wireless communication systems <cit.>. Compared to conventional orthogonal multiple access (OMA), where one channel resource block can be accessed by a single user only, the key appealing feature of NOMA is that multiple users are encouraged to simultaneously access the same channel resource block <cit.>. Thus, by applying NOMA, greater connectivity and higher spectral efficiency can be obtained.
Existing research works show that NOMA can be compatible with many other advanced technologies, such as multiple input multiple output (MIMO) <cit.>, millimeter wave communications <cit.>, Terahertz communications <cit.>, reconfigurable intelligent surfaces (RIS) <cit.>, satellite communications <cit.> and so on.
Since NOMA allows multiple users to simultaneously occupy one channel resource block, how to address inter-user interference is one of key issues in NOMA communication systems. To this end, a widely used method in NOMA to address inter-user interference is successive interference cancellation (SIC), where users' signals are decoded in a successive manner <cit.>. Due to the error propagation nature of SIC, how to order users plays a very important role in the performance of SIC. Conventionally, there are two main types of methods for determining the decoding order of users in NOMA. One is known as the channel state information (CSI) based SIC method, where users are ordered according to the quality of their channels <cit.>. The other is known as the quality of service (QoS) based SIC method, where the signals for the users with more stringent QoS are decoded first, while other users are often opportunistically served and their signals are decoded later <cit.>. Note that, most existing works on NOMA carried out a prefixed SIC decoding order according to either the above two aforementioned criteria. Unfortunately, a very dispiriting phenomenon exists in the NOMA schemes based on the aforementioned CSI or QoS based methods. Specifically, the outage probability achieved by these schemes suffers from severe error floors, which means that the outage probability achieved by
a certain user does not approach zero as the SNR goes to infinity. Thus, the transmission reliability cannot be guaranteed, which significantly limits the application of NOMA in many practical scenarios.
It was thought that, such outage probability error floors are unavoidable in the implementation of NOMA, and swapping SIC decoding orders dynamically cannot yield a significant performance gain <cit.>.
Motivated by the error floor issue, a new design of SIC, namely hybrid SIC (HSIC), was initially proposed for cognitive radio inspired uplink NOMA by <cit.>. In the proposed HSIC scheme, the decoding orders of users are dynamically determined according to the relationship between the instantaneous channel conditions and users' target rates. <cit.> show that the proposed HSIC scheme can avoid outage probability error floors under some constraints on users' target rates. The most important contributions of the series of studies in <cit.> are twofold.
First, <cit.> showed that it is possible to avoid outage error floors, at least under some specific conditions. Second, <cit.> indicated the importance of introducing HSIC to improve transmission robustness of NOMA.
However, as mentioned above, the proposed scheme in <cit.> can only avoid outage probability error floors under some stringent conditions on users' target rates, which may not be met in many realistic scenarios. Thus, it is natural to ask the following two questions.
The first question is whether it is possible to avoid outage probability error floors without any constraints on users' rates. And the second question is whether it is necessary to apply HSIC to avoid outage probability error floors.
This paper aims to answer the two aforementioned questions, and investigate the impact of the combination of HSIC and power adaptation (PA) on improving the transmission robustness in NOMA. Specifically, a cognitive radio inspired uplink NOMA scenario is considered. In the considered scenario, one primary user is allocated one dedicated channel resource block, while there are M secondary users who compete with each other to opportunistically share the primary user's resource block without degrading the outage performance of the primary user. Two new designs of NOMA schemes, namely HSIC with PA (HSIC-PA) and fixed SIC with PA (FSIC-PA) are proposed. Both schemes can avoid outage probability error floors without any constraints on users' target rates. The main contributions of this paper are listed as follows.
* Two novel designs of uplink NOMA schemes are proposed, namely HSIC-PA and FSIC-PA[Note that the
HSIC-PA scheme extends the scheme proposed in our previous work <cit.> where only two users are considered, while the FSIC-PA scheme hasn't been proposed according to our best knowledge.]. In the proposed HSIC-PA scheme, the decoding order of the secondary user can be dynamically adjusted according to the channel conditions. While in the proposed FSIC-PA scheme, the decoding order of the secondary user is fixed at the second stage of SIC. By rigorous derivation, the closed-form expressions for the outage probabilities achieved by the proposed two schemes are obtained.
* Based on the obtained expressions for the outage probabilities, asymptotic analysis in the high SNR regime is further developed to gain more insights into the proposed two schemes. It is shown that both HSIC-PA scheme and FSIC-PA scheme can avoid outage probability error floors without any constraints on users' target rates. The fact that the proposed FSIC-PA scheme can avoid error floors indicates that HSIC is not necessary to avoid error floors. Furthermore, the diversity gains achieved the proposed two schemes are also provided, respectively. Interestingly, the diversity gain achieved by HSIC-PA scheme is M, whereas that achieved by FSIC-PA scheme is only 1.
* Numerical results are presented to verify the accuracy of the developed analytical results and demonstrate the superior performance of the proposed HSIC-PA scheme and FSIC-PA scheme, by comparing with the benchmark scheme termed HSIC-NPA proposed in <cit.>. In terms of outage probability and ergodic rate, it is shown that the FSIC-PA scheme performs better than the HSIC-NPA scheme in the high SNR regime, but worse in the low SNR regime. Besides, the HSIC-PA scheme performs the best among the three schemes at all SNRs in terms of outage probability and ergodic rate, which shows the power of the combination of HSIC and PA in the design of uplink NOMA transmissions. In terms of power consumption, both the proposed HSIC-PA and FSIC-PA schemes consume less power than the existing HSIC-NPA scheme, whereas the HSIC-PA scheme is more power-consuming than the FSIC-PA scheme.
§ SYSTEM MODEL
Consider an uplink NOMA communication scenario with one base station (BS), one primary user U_0 and M
secondary users U_m, 1≤ m≤ M. Note that, in the considered scenario, ensuring the transmission reliability of U_0, which has a target data rate denoted by R_0, is of the highest priority. In conventional OMA based schemes, the primary user is allocated a dedicated resource block, which cannot be accessed by other users, while in the NOMA schemes considered in this paper, M secondary users compete with each other to opportunistically access the channel resource block allocated to the primary user. Note that allowing secondary users to share the channel resource block of the primary user must be done in such a way as to ensure that the QoS of the primary user U_0 is not degraded.
The channel gain of the primary user U_0 is denoted by g, and the channel gains of the secondary users are denoted by h_m, 1≤ m≤ M. In this paper, g and h_m are modeled as the normalized Rayleigh fading gains, which means that g and h_m are independent and identically distributed (i.i.d) circular symmetric complex Gaussian (CSCG) random variables with zero mean and unit variance, i.e., g∼𝒞𝒩(0,1) and h_m ∼𝒞𝒩(0,1). The transmit power of the primary user U_0 is denoted by P_0. The transmit power of the secondary user U_m is denoted by β P_s, where
β∈ [0,1 ] is the adjustable power adaptation coefficient of U_m, and P_s is the maximum power of U_m. Without loss of generality, the background noise power is also assumed to be normalized throughout the paper.
In the remainder of the paper, the M secondary users are ordered according to their channel gains:
| h_1 | ^2< ⋯ < | h_M|^2.
In this paper, two novel NOMA schemes are proposed, namely HSIC-PA scheme and FSIC-PA scheme.
It will be shown that both schemes can avoid outage probability error floors.
For each scheme, in each period of transmission, only the secondary user which can achieve the largest instantaneous achievable rate is allowed to transmit signal by sharing the primary user's resource block.
The proposed two schemes are described in the following two subsections.
§.§ HSIC-PA Scheme
To begin with, define an interference threshold, denoted by τ (g), as follows:
τ (g)≔max{ 0, P_0 |g |^2/(2^R_0 -1) -1}.
Note that τ(g) can be interpreted as the maximum interference, with which U_0 can
still achieve the same outage performance as in OMA where the resource block would be solely occupied by U_0. For more details on τ(g), please refer to <cit.>.
Defining ϵ_0≔2^R_0-1 and α _0≔ϵ_0/P_0, we have
τ(g)=
|g|^2α_0^-1-1 , |g|^2>α_0,
0 , |g|^2<α_0.
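As a small illustration of how this threshold is used in the sequel, the following Python sketch evaluates τ(g); the function name tau and the argument names are ours, and the noise power is normalized to one as in the system model.

```python
import numpy as np

def tau(g_abs2, P0, R0):
    """Interference threshold tau(g) = max{0, P0*|g|^2/(2^R0 - 1) - 1} (sketch).

    g_abs2 : |g|^2, the primary user's channel gain
    P0     : primary user's transmit power (noise power normalized to 1)
    R0     : primary user's target rate in bits per channel use
    """
    eps0 = 2.0**R0 - 1.0
    return np.maximum(0.0, P0 * np.asarray(g_abs2) / eps0 - 1.0)
```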
For each secondary user U_m, its instantaneous achievable rate is determined by how its channel
gain compares to τ (g), which can be classified into the following two types:
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, putting U_m at the second stage of SIC
can yield a larger data rate compared to putting U_m at the first stage of SIC, and will not prevent the primary user from successfully decoding its signal. Thus, it is favorable to decode U_m's signal at the second stage of SIC, and the achievable rate of U_m is given by
R_I^m=log(1+P_s | h_m |^2),
which is the same as in HSIC-NPA scheme proposed in <cit.>.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, the benchmark scheme termed HSIC-NPA which is proposed in <cit.>
only considers the case where β is set to be 1. Thus, in order to avoid degrading the QoS of U_0, U_m's signal can only be decoded at the first stage of SIC in HSIC-NPA, yielding the following achievable data rate of U_m:
R_II,1^m=log(1+P_s | h_m |^2/(P_0 | g |^2+1) ).
Note that the drawback of putting U_m at the first stage of SIC is that, when P_0|g|^2 is large, R_II,1^m might still be small even with a large P_s | h_m |^2.
To this end, the proposed HSIC-PA scheme offers an additional choice where β can be set to be less than 1 so that β P_s|h_m|^2=τ(g), which can provide an opportunity to yield a larger achievable rate. As a result, U_m's signal can be decoded at the second stage of SIC, yielding the following achievable data rate of U_m:
R_II,2^m=log(1+τ(g)).
Thus, in the proposed HSIC-PA scheme, when P_s | h_m |^2 > τ(g), the achievable data rate of U_m is given by:
R_II^m=max{R_II,1^m,R_II,2^m}.
According to the above discussion, the achievable data rate of U_m in the HSIC-PA scheme can be summarized as:
R^m=
R_I^m, P_s | h_m |^2 ≤τ(g),
R_II^m, P_s | h_m |^2 >τ(g).
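The rate selection described above can be summarized in a few lines of code. The following sketch reuses the tau() helper from the previous snippet; the function name, the returned decoding stage and power adaptation coefficient, and the use of log2 (rates in bits per channel use) are our own illustrative choices.

```python
import numpy as np

def rate_hsic_pa(h_abs2, g_abs2, P0, Ps, R0):
    """Achievable rate of a secondary user U_m under the HSIC-PA rule (sketch).

    Returns (rate, decoding stage, power adaptation coefficient beta).
    """
    t = tau(g_abs2, P0, R0)                      # interference threshold tau(g)
    if Ps * h_abs2 <= t:                         # type I: full power, decoded at stage 2
        return np.log2(1.0 + Ps * h_abs2), 2, 1.0
    # type II: choose the better of the two options
    r_stage1 = np.log2(1.0 + Ps * h_abs2 / (P0 * g_abs2 + 1.0))   # full power, stage 1
    r_stage2 = np.log2(1.0 + t)                                   # scaled power, stage 2
    if r_stage2 >= r_stage1:
        beta = t / (Ps * h_abs2)                 # chosen so that beta*Ps*|h_m|^2 = tau(g)
        return r_stage2, 2, beta
    return r_stage1, 1, 1.0
```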
§.§ FSIC-PA Scheme
Another scheme termed FSIC-PA is proposed in this subsection.
Note that in HSIC-PA scheme, the secondary user's signal can be decoded either at the first or second stage of SIC. However, in FSIC-PA scheme, its signal can only be decoded at the second stage of SIC.
In FSIC-PA scheme, for each secondary user U_m, its instantaneous achievable rate can also be determined by considering the following two cases as in the previous subsection.
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, the decoding strategy is as same as in the HSIC-NPA and the proposed HSIC-PA scheme, where U_m is decoded at the second stage of SIC. Thus, the achievable data rate of U_m is R̂^m_I =log(1+P_s|h_m|^2), since the interference from U_0 can be removed by SIC.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, in the proposed FSIC-PA scheme, U_m can only be decoded at the second stage of SIC. To carry out this strategy, β is set to be less than 1 so that β P_s|h_m|^2=τ(g). Thus, the achievable data rate of U_m for type II is R̂_II^m=log(1+τ(g)).
By combining the above two cases, the achievable data rate of U_m in the FSIC-PA scheme can be expressed as:
R̂^m=
R̂^m_I, P_s | h_m |^2 ≤τ(g),
R̂^m_II, P_s | h_m |^2 >τ(g).
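For comparison, a corresponding sketch of the FSIC-PA rule is given below, again reusing numpy and the tau() helper from the earlier snippets; the function name and return convention are ours.

```python
import numpy as np

def rate_fsic_pa(h_abs2, g_abs2, P0, Ps, R0):
    """Achievable rate of a secondary user U_m under the FSIC-PA rule (sketch)."""
    t = tau(g_abs2, P0, R0)
    if Ps * h_abs2 <= t:                 # type I: full power, decoded at stage 2
        return np.log2(1.0 + Ps * h_abs2), 1.0
    beta = t / (Ps * h_abs2)             # type II: scale power so beta*Ps*|h_m|^2 = tau(g)
    return np.log2(1.0 + t), beta
```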
Note that, the proposed HSIC-PA and FSIC-PA schemes can ensure that the outage performance of the primary user is the same as that in the OMA scheme. Because the use of NOMA is transparent to the primary user, this paper focuses on the performance of the opportunistically served secondary users.
§ PERFORMANCE ANALYSIS ON HSIC-PA SCHEME AND FSIC-PA SCHEME
In this section, the closed-form expressions for the outage probabilities of the served secondary user achieved by the proposed two schemes will be provided. Furthermore, asymptotic analysis for the outage probabilities will be presented, which shows that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors without any constraints on users' target rates. Besides, rigorous comparisons between the proposed HSIC-PA/FSIC-PA scheme with the existing HSIC-NPA scheme will be carried out.
§.§ Outage probability achieved by HSIC-PA scheme
This subsection provides the exact and asymptotic expressions for the overall outage probability
of the served secondary users achieved by the proposed HSIC-PA scheme. Besides, the diversity gain <cit.> achieved by HSIC-PA is also provided.
Assume that all the secondary users have the same target rate, denoted by R_s. The overall outage probability achieved by the served secondary users in HSIC-PA is given by:
P_out=Pr(max{R^m, 1≤ m≤ M}<R_s).
For the ease of characterizing the outage probability P_out, it is helpful to define the event E_m, which denotes the event that there are m secondary users belonging to type I. Particularly, E_m can be expressed as follows:
E_m={ |h_m |^2< τ (g)/P_s, | h_m+1 | ^2>τ (g)/P_s},
1≤ m≤ M-1,
{|h_1|^2 > τ (g)/P_s}, m=0,
{|h_M|^2 < τ (g)/P_s}, m=M,
where the extreme cases E_0 and E_M denote the events where there is no type I secondary users and all the secondary users belong to type I, respectively.
It is shown that the expression of P_out can be divided into four parts, as highlighted in the following lemma.
For ease of calculation, P_out can be further simplified as:
P_out= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h_k |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h_k |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2
+ P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M + P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Please refer to Appendix A.
By deriving the expressions of Q̃_1, Q̃_2, Q_M and Q_M+1 as shown in Appendix B, the expression for the overall outage probability of the admitted secondary users in HSIC-PA scheme can be obtained as shown in the following theorem.
The overall outage probability P_out of the admitted secondary users in HSIC-PA can be expressed as follows:
P_out=∑_i=0^M([ M; i ])(-1)^ie^-iα_s(1-e^-(α_sP_0i+1)α_1)/(α_sP_0i+1)+(1-e^-α_s)^Me^-α_1,
where ϵ_s≔2^R_s-1, α_s≔ϵ_s/P_s, and α_1≔(1+ϵ_s)α_0.
Please refer to Appendix B.
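To make the result concrete, the following sketch evaluates the closed-form expression of Theorem 1 and cross-checks it by Monte Carlo simulation of the HSIC-PA rate rule from the earlier snippet; all function names and the example parameters are ours.

```python
import numpy as np
from math import comb

def p_out_hsic_pa_closed_form(M, P0, Ps, R0, Rs):
    """Closed-form overall outage probability of Theorem 1 (sketch)."""
    eps0, eps_s = 2.0**R0 - 1.0, 2.0**Rs - 1.0
    a0, a_s = eps0 / P0, eps_s / Ps
    a1 = (1.0 + eps_s) * a0
    total = 0.0
    for i in range(M + 1):
        c = a_s * P0 * i + 1.0
        total += comb(M, i) * (-1.0)**i * np.exp(-i * a_s) * (1.0 - np.exp(-c * a1)) / c
    return total + (1.0 - np.exp(-a_s))**M * np.exp(-a1)

def p_out_hsic_pa_monte_carlo(M, P0, Ps, R0, Rs, trials=200_000, rng=None):
    """Monte Carlo estimate over Rayleigh fading (unit-mean exponential gains)."""
    rng = np.random.default_rng(rng)
    outages = 0
    for _ in range(trials):
        g2 = rng.exponential(1.0)
        h2 = rng.exponential(1.0, size=M)
        best = max(rate_hsic_pa(h, g2, P0, Ps, R0)[0] for h in h2)
        outages += (best < Rs)
    return outages / trials

# Example usage (hypothetical parameters): M = 3 users, 20 dB SNR, R0 = Rs = 1 BPCU.
# print(p_out_hsic_pa_closed_form(3, 100.0, 100.0, 1.0, 1.0),
#       p_out_hsic_pa_monte_carlo(3, 100.0, 100.0, 1.0, 1.0))
```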
Based on Theorem 1, the asymptotic expression for P_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in HSIC-PA can be approximated as follows:
P_out≈ϵ_s^M/(P_s^MP_0)∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/(i+1)-ϵ_s^M/(P_s^MP_0^2)∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/(i+2)+ϵ_s^M/P_s^M.
Please refer to Appendix C.
Further, it is straightforward that the first two terms of (<ref>) can be omitted in the high SNR regime, yielding a more simplified expression for P_out, as highlighted in the following corollary.
At high SNR, i.e., P_0=P_s→∞,
the approximation of P_out shown in (<ref>) can be further approximated as follows:
P_out≈ϵ_s^M/P_s^M.
Remark 1. Note that, the existing HSIC-NPA scheme can only avoid outage probability error floors under the constraint that ϵ_0ϵ_s≤ 1, which means that the feasible target rate for reliable transmission of the secondary users is primarily restricted by that of the primary user.
However, from the results shown in Corollary 2, it can be easily concluded that the outage probability error floor can be avoided by HSIC-PA scheme without any constraints on the users' target rates. Hence, the first question raised in Section I can be answered with the answer that it is possible to avoid outage probability error floors without any constraints on users' target rates.
Remark 2. In wireless communications, diversity gain is usually used as an important performance metric to measure how fast the outage probability decreases as transmit power increases <cit.>. It denotes the asymptotic scaling law of the outage probability to the transmit SNR. Specifically, the diversity gain, say d, achieved by HSIC-PA is defined as:
d=-lim_P_s→∞log P_out/log P_s
Based on the results shown in Corollary 2, it can be straightforwardly obtained that
d=M. Therefore, the diversity gain achieved by the HSIC-PA scheme is M, which is exactly the number of the secondary users. Thus, multi-user diversity gain can be fully utilized by the proposed HSIC-PA scheme, which means increasing the number of secondary users is helpful to reduce the overall outage probability.
From the perspective of diversity gain, the difference between the HSIC-NPA scheme and the HSIC-PA scheme can also be revealed. Recall that the diversity gain achieved by HSIC-NPA is also M when ϵ_0ϵ_s≤1, otherwise a diversity gain of zero is realized.
§.§ Outage probability achieved by FSIC-PA scheme
This subsection provides the exact expression for the overall outage probability
of the served secondary users in the proposed FSIC-PA scheme. Asymptotic analysis for the outage probability is also provided.
For the FSIC-PA scheme, the overall outage probability achieved by the served secondary users is defined as:
P̂_out=Pr(max{R̂^m, 1≤ m≤ M}<R_s).
The following theorem provides the closed-form expression for the outage probability achieved by the FSIC-PA scheme.
The overall outage probability P̂_out of the served secondary users in FSIC-PA can be expressed as follows:
P̂_out=1-e^-α_1+(1-e^-α_s)^Me^-α_1.
Please refer to Appendix D.
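For completeness, the closed-form expression of Theorem 2 can be evaluated as follows; the function name is ours, and a Monte Carlo check analogous to the HSIC-PA one (with rate_fsic_pa in place of rate_hsic_pa) should reproduce this value up to simulation noise.

```python
import numpy as np

def p_out_fsic_pa_closed_form(M, P0, Ps, R0, Rs):
    """Closed-form overall outage probability of Theorem 2 (sketch)."""
    eps0, eps_s = 2.0**R0 - 1.0, 2.0**Rs - 1.0
    a_s = eps_s / Ps
    a1 = (1.0 + eps_s) * eps0 / P0
    return 1.0 - np.exp(-a1) + (1.0 - np.exp(-a_s))**M * np.exp(-a1)
```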
Based on Theorem 2, asymptotic expression for P̂_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in the FSIC-PA scheme can be approximated as follows:
P̂_out≈ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/(P_s^MP_0).
By applying Taylor expansion 1-e^-x≈ x (x→ 0), the expression in (<ref>) can be further approximated as follows:
P̂_out≈ α_1+α_s^M-α_s^Mα_1
= ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/(P_s^MP_0),
and the proof is complete.
Remark 3. From Corollary 3, it can be easily observed that the proposed FSIC-PA scheme can also avoid outage probability error floors without any constraints on the users' target rates. At this point, the second question raised in Section I can be answered with the answer that HSIC is not the necessary condition to avoid outage probability error floors.
Remark 4. It is also interesting to investigate the diversity gain achieved by the FSIC-PA scheme, which is defined as:
d̂=-lim_P_s→∞logP̂_out/log P_s.
According to Corollary 3, it can be straightforwardly obtained that d̂=1. Thus,
the multi-user diversity gain cannot be obtained by FSIC-PA scheme.
The above two remarks indicate that even though HSIC is not the necessary strategy to avoid the outage probability error floor, its combination with PA is beneficial for improving transmission robustness.
§.§ Comparisons between HSIC-PA/FSIC-PA scheme with HSIC-NPA scheme
In this section, more detailed comparisons of the proposed two schemes with the benchmark HSIC-NPA scheme are provided. Note that, if the served secondary user belongs to type I, the three schemes, i.e., HSIC-PA, HSIC-NPA and FSIC-PA, achieve the same instantaneous data rate.
However, the three schemes differ from each other if the served secondary user belongs to type II. Thus, it is necessary to compare the three schemes for the case when the served secondary user belongs to type II.
For ease of notation, denote the served secondary user by U_m^*. When U_m^* belongs to type II, denote its achievable rate by R_II, R̂_II and R̅_II for HSIC-PA, FSIC-PA and HSIC-NPA schemes, respectively.
From the description in Section II, it can be seen that R_II≥R̅_II always holds. Thus, it is sufficient to characterize the probability of the event that R_II>R̅_II for the comparison between HSIC-PA and HSIC-NPA, as presented in the following theorem.
Under the condition that the served secondary user U_m^* is of type II, the probability of the event that R_II>R̅_II, termed P^better, is given by:
P^better= P( R̅_II<R_II, U_m^* is type II)/P(U_m^* is type II),
where
P( R̅_II<R_II, U_m^* is type II)
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ṽ(α_0,iP_0/(P_sα_0),(i/P_s)(α_0^-1-P_0)+1) -ũ(α_0,i/(P_sα_0)) ],
and
P(U_m^* is type II)=1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/(P_sα_0)),
where
ũ(x,y)≔e^-x(y+1)/(y+1),
and ṽ(x,y,z)≔(√(π)e^z^2/(4y)/(2√(y)))[1-erf (√(y)(x+z/(2y)))], where erf(·) denotes the Gaussian error function,
which is given by:
erf(x)=(2/√(π))∫_0^xe^-t^2dt.
Please refer to Appendix E.
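The two auxiliary functions ũ and ṽ can be implemented directly with scipy; under our reading, ṽ(x,y,z) is the improper-integral counterpart ∫_x^∞ e^-(yt^2+zt)dt of the function v(·) used in Appendix B. The helper names are ours.

```python
import numpy as np
from scipy.special import erf

def u_tilde(x, y):
    """u~(x, y) = exp(-x*(y+1)) / (y+1), cf. Theorem 3 (sketch)."""
    return np.exp(-x * (y + 1.0)) / (y + 1.0)

def v_tilde(x, y, z):
    """v~(x, y, z) = int_x^inf exp(-(y*t^2 + z*t)) dt, expressed via erf (sketch)."""
    return (np.sqrt(np.pi) * np.exp(z**2 / (4.0 * y)) / (2.0 * np.sqrt(y))
            * (1.0 - erf(np.sqrt(y) * (x + z / (2.0 * y)))))
```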
Differently, for the comparison between FSIC-PA and HSIC-NPA, R̂_II can be either larger or smaller than R̅_II. Thus, it is necessary to characterize both the probabilities of the events that R̂_II>R̅_II and R̂_II<R̅_II. By noting that
P( R̅_II<R̂_II, U_m^* is type II)=P(|h_M|^2>τ(g)/P_s,|h_M|^2< |h_k |^2,|g|^2>α_0),
which is the same as the expression of P( R̅_II< R_II, U_m^* is type II) in Theorem 3, the following theorem can be straightforwardly obtained.
Under the condition that the served secondary user U_m^* is of type II, the probability of the event that R̂_II>R̅_II, termed P̂^better, is given by:
P̂^better= P( R̅_II<R̂_II, U_m^* is type II)/P(U_m^* is type II),
which is the same as the expression of P^better in Theorem 3. The probability of the event that R̂_II<R̅_II, termed P̂^worse, is given by:
P̂^worse=1-P̂^better.
§ NUMERICAL RESULTS
In this section, simulation results are provided to verify the accuracy of the developed analysis and demonstrate the performance of the proposed HSIC-PA and FSIC-PA schemes. Comparisons with the benchmark HSIC-NPA scheme developed in <cit.> are also provided.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed HSIC-PA scheme. Note that, the curves for analytical results are based on Theorem 1, and those for Approximations I and II are based on Corollaries 1 and 2, respectively.
As shown in the figure, analytical results perfectly match simulations, which verifies the accuracy of the analytical results provided in Theorem 1.
Besides, Fig. <ref> also shows that both the curves for Approximation I and Approximation II
match the simulation results at high SNR, which verifies the accuracy of the approximations in Corollaries 1 and 2.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed FSIC-PA scheme. Note that the curves for analytical results are based on Theorem 2, and the curves for approximation are based on Corollary 3. From the figure, it can be observed that the curves for analysis perfectly match simulations, which verifies the accuracy of the results provided in Theorem 2.
Besides, it is shown that the curves for the approximate results are accurate at high SNR, which demonstrates the accuracy of the results in Corollary 3.
A significant difference between HSIC-PA and FSIC-PA schemes can be clearly observed from Figs. <ref> and <ref>. Fig. <ref> shows that as M increases, the outage probability achieved by HSIC-PA scheme significantly decreases. In contrast, Fig. <ref> shows that,
for M>1, the outage probabilities for different values of M coincide. Thus, keeping increasing M cannot improve the outage performance of FSIC-PA in the high SNR regime. This observation is consistent with
the results in Section III that the diversity gain of HSIC-PA scheme is M, while that of FSIC-PA scheme is only 1.
Fig. <ref> shows the outage probabilities of the secondary users achieved by HSIC-NPA, HSIC-PA and FSIC-PA versus transmit SNR. As shown in the figure, for HSIC-NPA scheme, when R_0=1 BPCU, there is no outage probability error floor. However, when R_0=4 BPCU, the outage probability error floor exists. This observation is consistent with the conclusions in <cit.>,
i.e., the error floor can only be avoided when ϵ_0ϵ_s<1. By contrast, the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors, since the outage probabilities achieved by both schemes continuously decrease as the SNR increases. Fig. <ref> also shows that the HSIC-PA scheme performs the best among the three schemes for all cases. However, FSIC-PA achieves larger outage probabilities than HSIC-NPA when R_0=1 BPCU, while for the case where R_0=4 BPCU, FSIC-PA performs better at high SNRs.
Fig. <ref> shows the performance of the three schemes in terms of ergodic data rates achieved by the served secondary users.
From the figure, it is shown that HSIC-PA scheme always achieves the largest ergodic rate among the three schemes, which is consistent with the observation in Fig. <ref>.
Another interesting observation from Fig. <ref> is that the performance of FSIC-PA approaches that of HSIC-PA in terms of ergodic data rate at high SNR, while the performance of HSIC-NPA approaches that of HSIC-PA in terms of ergodic rate at low SNRs. This observation indicates that it is preferable to set the
secondary user at the first stage of SIC and use full transmit power at low SNRs, while it is preferable to set the secondary user at the second stage of SIC and use partial transmit power at high SNRs.
Fig. <ref> and Fig. <ref> demonstrate a more detailed comparison on achievable rates of the proposed two schemes with the benchmark HSIC-NPA scheme.
Fig. <ref> shows the probability that the served secondary user belongs to type II. It is shown that as SNR increases, the probabilities converge to a constant.
Fig. <ref> shows that the curves for P̂^better and P^better coincide, which is consistent with results shown in Theorems 3 and 4.
Fig. <ref> also shows that P̂^better and P^better increase with the SNR and approach 1 in the high SNR regime, while
P̂^worse decreases with the SNR and approaches 1 in the low SNR regime.
The above observation can help to understand the phenomenon shown in Fig. <ref> and
Fig. <ref>, and leads to the following suggestions for practical systems.
On the one hand, at high SNR, it is preferable to apply power adaptation and put the secondary user at the second stage of SIC. On the other hand, at low SNR, it is better to decode the secondary user at the first stage of SIC.
Fig. <ref> shows the power consumption of the HSIC-PA and FSIC-PA schemes. Note that the HSIC-NPA scheme always transmits with full power for the secondary users, i.e., β is always set to 1, while β can be set to be less than 1 in the proposed HSIC-PA and FSIC-PA schemes. Thus, HSIC-NPA is more energy consuming than the two schemes proposed in this paper. From the figure, it can be observed that at low SNRs, β approaches 1 in HSIC-PA and β approaches zero in FSIC-PA. Besides, as the SNR increases, β decreases in HSIC-PA, while that in FSIC-PA increases. More interestingly, the values of β for both schemes approach a constant in the high SNR regime. However, at high SNR, the value of β in the HSIC-PA scheme is slightly higher than that in FSIC-PA.
§ CONCLUSIONS
In this paper, two novel cognitive radio inspired uplink NOMA schemes were proposed to improve transmission robustness, namely the HSIC-PA scheme and the FSIC-PA scheme. Rigorous analysis has been developed to characterize the performance of the proposed schemes. It has been shown that both the HSIC-PA and FSIC-PA schemes can avoid outage probability error floors in the high SNR regime without any constraints on users' target rates, which was previously thought impossible for uplink NOMA transmission. It has also been shown that the diversity gain achieved by the HSIC-PA scheme is M, which is the maximal multi-user diversity gain for the considered scenario, while the diversity gain achieved by the FSIC-PA scheme is 1. Numerical results have been presented to verify the accuracy of the developed analysis and demonstrate the superior performance of the proposed schemes. This paper has shown that the combination of HSIC and PA is important for improving the transmission robustness of uplink NOMA.
§ PROOF FOR LEMMA 1
The outage events can be divided into two groups, one is |g|^2>α_0 and the other is |g|^2<α_0.
Thus, the outage probability P_out shown in (<ref>) can be written as:
P_out= ∑_m=1^M-1 P ( E_m,max{R^k_I, 1 ≤ k≤ m}<R_s,
max{ R^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{ R^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{ R^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{ R^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains, P_out can be further written as:
P_out= ∑_m=1^M-1 P ( E_m,R^m_I< R_s,R^M_II < R_s, | g | ^2>α _0 )_Q_m
+P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M +P( E_0,R^M_II <R_s, | g | ^2>α _0) _Q_0
+ P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Note that when |g|^2>α _0, R^M_II can be determined according to the value of |h_M |^2 as follows:
R^M_II
=
R^M_II,2 , |h_M |^2< |h |^2,
R^M_II,1 , |h_M |^2> |h |^2,
where |h |^2≔( | g |^2α_0^-1-1 )(P_0 | g |^2+1)/P_s. Thus, Q_m can be rewritten as follows:
Q_m= ∑_m=1^M-1P (E_m,R^m_I<R_s,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m, R^m_I<R_s,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By noting that regardless of the value of |h_M |^2, R^m_I is always smaller than R^M_II,1 and R^M_II,2,
Q_m can be further simplified as:
Q_m= ∑_m=1^M-1P (E_m,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By applying the results shown in (<ref>), Q_0 can be rewritten as follows:
Q_0= P( E_0, |h_M |^2< |h |^2, R^M_II,2 <R_s, | g | ^2>α _0)_Q_0,1
+ P( E_0, |h_M |^2> |h |^2, R^M_II,1 <R_s, | g | ^2>α _0)_Q_0,2.
Note that, Q_m,1 and Q_0,1 can be combined, so as Q_m,2 and Q_0,2, thus, the sum of Q_m and Q_0 can be simplified as follows:
Q_m+Q_0= Q_m,1+Q_0,1_Q̃_1 +Q_m,2+Q_0,2_Q̃_2
= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2.
Therefore, P_out=Q_m+Q_0+Q_M+Q_M+1=Q̃_1+Q̃_2+Q_M+Q_M+1 and the proof is complete.
§ PROOF FOR THEOREM 1
According to Lemma 1, the evaluation of P_out can be divided into four parts: Q̃_1,
Q̃_2,
Q_M
and Q_M+1.
§.§ Evaluation of Q̃_1
Note that Q̃_1 can be expressed as follows:
Q̃_1= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S̃_1},
where ε{ * } denotes the mathematical expectation.
Note that the users are ordered according to their channel gains, and hence the probability density function (pdf) of |h_M|^2 can be expressed as:
f_|h_M|^2(x)= M!/(M-1)!(1-e^-x)^M-1e^-x
= M(1-e^-x)^M-1e^-x.
By applying (<ref>), S̃_1 can be evaluated as follows:
S̃_1= ∫_(|g|^2α_0^-1-1)/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sf_|h_M|^2(x)dx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i(|g|^2α_0^-1-1)/P_s).
Further, by noting that |g|^2 is exponentially distributed, Q̃_1 can be calculated as:
Q̃_1= ∫_α_0^α_1S̃_1e^-|g|^2d|g|^2
= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
For notational simplicity,
define u(α_0,α_1,c) as:
u(α_0,α_1,c)△=∫_α_0^α_1e^-(c+1)xdx= (1/(c+1))[e^-α_0(c+1)-e^-α_1(c+1)],
and v(α_1,α_0,A,B) as:
v(α_1,α_0,A,B)△= ∫_α_0^α_1e^-(Ax^2+Bx)dx
= (√(π)e^B^2/(4A)/(2√(A)))[erf(√(A)(α_1+B/(2A)))-erf(√(A)(α_0+B/(2A)))].
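As a quick numerical sanity check of these two antiderivative formulas, the following sketch compares them with direct quadrature at arbitrary illustrative parameter values; the helper names u_int and v_int are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def u_int(a0, a1, c):
    """u(a0, a1, c) = int_{a0}^{a1} exp(-(c+1)x) dx in closed form (sketch)."""
    return (np.exp(-a0 * (c + 1.0)) - np.exp(-a1 * (c + 1.0))) / (c + 1.0)

def v_int(a1, a0, A, B):
    """v(a1, a0, A, B) = int_{a0}^{a1} exp(-(A x^2 + B x)) dx via erf (sketch)."""
    return (np.sqrt(np.pi) * np.exp(B**2 / (4.0 * A)) / (2.0 * np.sqrt(A))
            * (erf(np.sqrt(A) * (a1 + B / (2.0 * A)))
               - erf(np.sqrt(A) * (a0 + B / (2.0 * A)))))

# Cross-check the closed forms against numerical quadrature (illustrative values).
a0, a1, c, A, B = 0.1, 0.7, 1.3, 0.8, 0.4
assert np.isclose(u_int(a0, a1, c), quad(lambda x: np.exp(-(c + 1) * x), a0, a1)[0])
assert np.isclose(v_int(a1, a0, A, B), quad(lambda x: np.exp(-(A * x**2 + B * x)), a0, a1)[0])
```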
By taking (<ref>) and (<ref>) into (<ref>), Q̃_1 can be expressed as:
Q̃_1=∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)- u(α_0,α_1,i/P_sα_0)].
§.§ Evaluation of Q̃_2
Note that Q_2 can be expressed as follows:
Q̃_2= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2>(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,.
.log(1+P_s|h_M|^2/P_0|g|^2+1)<R_s,|g|^2>α_0)
(a)= α_0<|g|^2<α_1ε{P((|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<|h_M|^2<α_s(P_0|g|^2+1) )_S̃_2},
where step (a) is obtained by noting the hidden condition (|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<α_s(P_0|g|^2+1), which yields |g|^2<α_1.
By using the pdf of |h_M|^2 shown in (<ref>), S̃_2 can be evaluated as follows:
S̃_2= ∫_(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s^α_s(P_0|g|^2+1)M(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-iα_s(P_0|g|^2+1)-e^-i/P_s(|g|^2α_0^-1-1)(P_0|g|^2+1)).
Further, by averaging with respect to |g|^2, Q̃_2 can be expressed as:
Q̃_2= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-iα_s(P_0x+1)-e^-i/P_s(xα_0^-1-1)(P_0x+1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q̃_2 can be further expressed as follows:
Q̃_2=∑_i=0^M([ M; i ])(-1)^i [e^-iα_su (α_0,α_1,iα_sP_0)- e^i/P_sv (α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1- P_0)+1)].
§.§ Evaluation of Q_M
Note that Q_M can be rewritten as follows:
Q_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,log(1+P_s|h_M|^2)<R_s,|g|^2>α_0)
= P(|h_M|^2<min{|g|^2α_0^-1-1/P_s,α_s},|g|^2>α_0)
= α_0<|g|^2<α_1ε{ P(|h_M|^2<α_0^-1|g|^2-1/P_s) _S_M,1} +|g|^2>α_1ε{P(|h_M|^2<α_s)_ S_M,2},
where the last step is obtained by dividing the events into two cases, i.e., |g|^2<α_1 and |g|^2>α_1.
By using the pdf of |h_M|^2 shown in (<ref>), the expression for S_M,1 and S_M,2 can be obtained as:
S_M,1=(1-e^-α_0^-1|g|^2-1/P_s)^M and S_M,2=(1-e^-α_s)^M.
By averaging with respect to |g|^2, Q_M can be further evaluated as follows:
Q_M= ∫_α_0^α_1(1-e^-α_0^-1x-1/P_s)^Me^-xdx+∫_α_1^∞(1-e^-α_s)^Me^-xdx
= ∫_α_0^α_1∑_i=0^M([ M; i ])(-1)^ie^i/P_se^-α_0^-1/P_sixe^-xdx+(1-e^-α_s)^Me^-α_1
= ∑_i=0^M([ M; i ])(-1)^ie^i/P_su (α_0,α_1,i/α_0P_s)+(1-e^-α_s)^Me^-α_1,
where the last step is obtained by applying the results shown in (<ref>).
§.§ Evaluation of Q_M+1
Note that Q_M+1 can be expressed as follows:
Q_M+1=P(R^M_II<R_s ,|g|^2<α_0).
Note that, when |g|^2<α_0, τ(g)=0, yielding R^M_II=log(1+ P_s|h_M|^2/P_0|g|^2+1).
Thus, Q_M+1 can be further expressed as:
Q_M+1 = P(|g|^2<α_0,log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)
= |g|^2<α_0ε{ P( log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)_S_M+1}.
By using the pdf |h_M|^2 shown in (<ref>), S_M+1 can be evaluated as follows:
S_M+1= ∫_0^α_s(P_0|g|^2+1)f_|h_M|^2(x)dx
= (1-e^-α_s(P_0|g|^2+1))^M.
Further, by averaging with respect to |g|^2, Q_M+1 can be expressed as:
Q_M+1 = ∫_0^α_0(1-e^-α_s(P_0x+1))^Me^-xdx
= ∑_i=0^M([ M; i ])(-1)^ie^-α_si1-e^-(α_sP_0i+1)α_0/α_sP_0i+1,
where the last step is obtained by applying the binomial expansion.
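The expression for Q_M+1 can also be verified by simulation; the hedged Python sketch below compares the closed-form sum with a Monte Carlo estimate of P(|g|^2<α_0, |h_M|^2<α_s(P_0|g|^2+1)), assuming unit-mean exponential (Rayleigh-fading) channel gains and illustrative parameter values.

# Hedged Monte Carlo check of the closed-form Q_{M+1} (illustrative parameters).
import numpy as np
from scipy.special import comb

M, alpha_0, alpha_s, P_0 = 4, 0.8, 0.5, 2.0
rng = np.random.default_rng(1)
N = 500_000
g2 = rng.exponential(1.0, N)                       # |g|^2 ~ Exp(1)
hM2 = rng.exponential(1.0, (N, M)).max(axis=1)     # |h_M|^2 = max of M Exp(1)
mc = np.mean((g2 < alpha_0) & (hM2 < alpha_s * (P_0 * g2 + 1)))

i = np.arange(M + 1)
closed = np.sum(comb(M, i) * (-1.0)**i * np.exp(-alpha_s * i)
                * (1 - np.exp(-(alpha_s * P_0 * i + 1) * alpha_0))
                / (alpha_s * P_0 * i + 1))
print(mc, closed)      # the two values should agree closely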
Therefore, the expressions for Q̃_1,
Q̃_2,
Q_M,
and Q_M+1 are obtained, and the proof is complete.
§ PROOF FOR COROLLARY 1
In order to facilitate a high SNR approximation, P_out in (<ref>) can be
rewritten as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i∫_0^α_1e^-xe^-iα_s(P_0x+1)dx+(1-e^-α_s)^Me^-α_1.
By using the fact that
∑_i=0^M([ M; i ])(-1)^iA^i=(1-A)^M,
P_out can be further approximated as follows:
P_out= ∫_0^α_1e^-x(1-e^-α_s(P_0x+1))^Mdx+(1-e^-α_s)^Me^-α_1
≈ ∫_0^α_1(1-x)α_s^M(P_0x+1)^Mdx+α_s^M(1-α_1),
where the last step is obtained by applying the Taylor series approximations 1-e^-x≈ x and e^-x≈ 1-x, valid when x→ 0.
A more simplified form of P_out can be obtained by applying the binomial expansion:
P_out≈ α_s^M∫_0^α_1(1-x)∑_i=0^M([ M; i ])P_0^ix^idx+α_s^M(1-α_1)
= α_s^M∫_0^α_1∑_i=0^M([ M; i ])P_0^i(x^i-x^i+1)dx+α_s^M(1-α_1).
By taking integrations in (<ref>), P_out can be further calculated as follows:
P_out≈ α_s^M∑_i=0^M([ M; i ])P_0^i(α_1^i+1/i+1-α_1^i+2/i+2)+α_s^M-α_s^Mα_1
(a)= ϵ_s^M/P_s^MP_0∑_i=0^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2
+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0
(b)= ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M,
where step (b) follows from the fact that the i=0 term of the first sum in step (a) equals ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0, which cancels against the last term in step (a) and can thus be eliminated.
§ PROOF FOR THEOREM 2
The outage events can be divided into two cases: |g|^2>α_0 and |g|^2<α_0. Therefore, the outage probability P̂_out shown in (<ref>) can be rewritten as:
P̂_out= ∑_m=1^M-1 P ( E_m,max{R̂^k_I, 1 ≤ k≤ m}<R_s,
max{R̂^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{R̂^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{R̂^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{R̂^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains; hence, P̂_out can be further written as:
P̂_out= ∑_m=1^M-1P(E_m,R̂_I^m<R_s,R̂_II^M<R_s,|g|^2>α_0)_F_m
+P(E_M,R̂_I^M<R_s,|g|^2>α_0 )_F_M
+ P(E_0,R̂_II^M<R_s,|g|^2>α_0 )_F_0
+P(R̂_II^M<R_s,|g|^2<α_0 )_F_M+1.
By noting that R̂^m_I<R̂^M_II for the first term, F_m and F_0 can be combined as follows:
F_m+F_0=P(|h_M|^2>τ(g)/P_s,R̂^M_II<R_s,
|g|^2>α_0)_F̃.
Therefore, P̂_out can be further simplified as:
P̂_out= P(|h_M|^2<τ(g)/P_s,R̂^M_I<R_s
,|g|^2>α_0)_F_M
+P(R̂_II^M<R_s,|g|^2<α_0)_F_M+1
+P(|h_M|^2>τ(g)/P_s,
R̂^M_II<R_s,|g|^2>α_0)_F̃.
Thus, the remaining task is to derive the expressions for F_M, F_M+1 and F̃.
§.§ Evaluation of F_M
Note that F_M can be expressed as follows:
F_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,
log(1+P_s|h_M|^2)<R_s,|g|^2>α_0 ),
which is the same as the expression for Q_M in (<ref>). Thus, F_M can be expressed as:
F_M=∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,
i/α_0P_s)+(1-e^-α_s)^Me^-α_1.
§.§ Evaluation of F_M+1
Note that F_M+1 can be expressed as follows:
F_M+1= P(log(1+τ(g))<R_s,|g|^2<α_0)
(a)= P(|g|^2<α_0)
= 1-e^-α_0,
where step (a) is obtained by the fact that τ(g)=0 when |g|^2<α_0.
§.§ Evaluation of F̃
Note that F̃ can be expressed as follows:
F̃= P(|h_M|^2>τ(g)/P_s,
log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|h_M|^2>|g|^2α_0^-1-1/P_s)_T̃}.
By using the pdf of |h_M|^2 shown in (<ref>), T̃ can be evaluated as follows:
T̃= ∫_|g|^2α_0^-1-1/P_s^∞
M(1-e^-x)^M-1e^-xdx
= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By taking expectation with respect to |g|^2, F̃ can be further evaluated as follows:
F̃= ∫_α_0^α_1( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-e^-α_1-∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,i/P_sα_0).
Thus, the expressions for F_M, F_M+1 and F̃ have all been obtained, and the proof is complete.
§ PROOF FOR THEOREM 3
Note that the numerator in (<ref>) can be rewritten as:
P( R̅_2<R_2, U_m^* is type II)_Q_n
= |g|^2>α_0ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2
<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S_n}.
By using the pdf of |h_M|^2 shown in (<ref>), S_n can be evaluated as follows:
S_n= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sM(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by averaging with respect to |g|^2, Q_n can be expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^∞( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q_n can be further expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(∞,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)-u(α_0,∞,i/P_sα_0)]
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)],
where the last step is obtained by noting that the term i=0 can be omitted since ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)
=0 for i=0.
The denominator in (<ref>) can be calculated as follows:
P(U_m^* is type II)_Q_d
= P(|h_M|^2>τ(g)/P_s,|g|^2>α_0)_Q_d1
+P( |g|^2<α_0)_Q_d2
= |g|^2>α_0ε{P(
|h_M|^2>|g|^2α_0^-1-1/P_s)_S_d1}
+Q_d2.
Note that S_d1 is the same as the expression for T̃ in (<ref>). Thus, S_d1 can be obtained by using the
results in (<ref>) as follows:
S_d1= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By averaging with respect to |g|^2, Q_d1 can be further evaluated as follows:
Q_d1= ∫_α_0^∞( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Q_d2 can be expressed as follows:
Q_d2=∫_0^α_0e^-xdx=1-e^-α_0.
Thus, Q_d is the sum of Q_d1 and Q_d2, which can be expressed as follows:
Q_d= 1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Therefore, the expressions for P( R̅_2<R_2, U_m^* is type II) and P(U_m^* is type II) are obtained, and the proof is complete.
|
http://arxiv.org/abs/2307.04545v1 | 20230710132434 | The Pairing-Hamiltonian property in graph prisms | [
"Marién Abreu",
"Giuseppe Mazzuoccolo",
"Federico Romaniello",
"Jean Paul Zerafa"
] | math.CO | [
"math.CO",
"05C76, 05C70, 05C45"
] |
The Pairing-Hamiltonian property
in graph prisms
Marién Abreu
Dipartimento di Matematica, Informatica ed Economia
Università degli Studi della Basilicata, Italy
[email protected]
Giuseppe Mazzuoccolo
Dipartimento di Scienze Fisiche, Informatiche e Matematiche
Università degli Studi di Modena e Reggio Emilia, Italy
[email protected]
Federico Romaniello
Dipartimento di Matematica “Giuseppe Peano"
Università di Torino, Italy
[email protected]
Jean Paul Zerafa
St. Edward's College, Triq San Dwardu
Birgu (Città Vittoriosa), BRG 9039, Cottonera, Malta
[email protected]
05C76, 05C70, 05C45
Let G be a graph of even order, and consider K_G as the complete graph on the same vertex set as G. A perfect matching of K_G is called a pairing of G. If for every pairing M of G it is possible to find a perfect matching N of G such that M ∪ N is a Hamiltonian cycle of K_G, then G is said to have the Pairing-Hamiltonian property, or PH-property, for short. In 2007, Fink [J. Combin. Theory Ser. B, 97] proved that for every d≥ 2, the d-dimensional hypercube 𝒬_d has the PH-property, thus proving a
conjecture posed by Kreweras in 1996.
In this paper we extend Fink's result by proving that given a graph G having the PH-property, the prism graph 𝒫(G) of G has the PH-property as well. Moreover, if G is a connected graph, we show that there exists a positive integer k_0
such that the k^th-prism of a graph 𝒫^k(G) has the PH-property for all k ≥ k_0.
§ INTRODUCTION
The problem of extending a perfect matching of a graph to a Hamiltonian cycle was first considered by Las Vergnas <cit.> and Häggkvist <cit.> in the 1970s. They both proved Ore-type conditions which ensure that every perfect matching of a graph satisfying certain hypotheses can be extended to a Hamiltonian cycle.
Some years later, Kreweras <cit.> conjectured that any perfect matching of the hypercube 𝒬_d, d≥ 2, can be extended to a Hamiltonian cycle. This conjecture was proved in 2007 by Fink <cit.>. Actually, he proved a stronger version of the problem. Given a graph G, let K_G denote the complete graph on the same vertex set V(G) of G. Fink shows that every perfect matching of K_𝒬_d, and not only the perfect matchings of 𝒬_d, can be extended to a Hamiltonian cycle of K_𝒬_d, by using only edges of 𝒬_d. More in general, for a graph G of even order, a perfect matching of K_G is said to be a pairing of G. Given a pairing M of G, we say that M can be extended to a Hamiltonian cycle H of K_G if we can find a perfect matching N of G such that M ∪ N = E(H), where E(H) is the set of edges of H.
A graph G is said to have the Pairing-Hamiltonian property (or, the PH-property for short), if every pairing M of G can be extended to a Hamiltonian cycle as described above. For simplicity, we shall also say that a graph G is PH if it has the PH-property. This notation was introduced in <cit.>, where amongst other results, a classification of which cubic graphs admit the PH-property was given: these are the complete graph K_4, the complete bipartite graph K_3,3, and the cube 𝒬_3. We remark that this was the first non-trivial classification of graphs (having regular degree) admitting the PH-property, as, the only 2-regular graph admitting the PH-property is the cycle on 4 vertices, which happens to be 𝒬_2. We also remark that there is an infinite number of 4-regular graphs having the PH-property (see <cit.>). Following such a terminology we can state Fink's result from <cit.> as follows.
The hypercube 𝒬_d has the PH-property, for every d≥ 2.
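For very small graphs the PH-property can be tested exhaustively. The following Python sketch (using networkx; an illustration written for this exposition rather than part of any proof) enumerates all pairings and checks whether each one extends to a Hamiltonian cycle through a perfect matching of G; it confirms, for instance, that 𝒬_2=C_4, K_4 and 𝒬_3 are PH, while C_6 is not.

# Brute-force PH-property check for tiny graphs (exponential in |V(G)|).
import networkx as nx

def pairings(vertices):
    """All perfect matchings of the complete graph on `vertices`."""
    vs = list(vertices)
    if not vs:
        yield []
        return
    first, rest = vs[0], vs[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def is_hamiltonian_cycle(edges, n):
    H = nx.Graph()
    H.add_edges_from(edges)
    return (H.number_of_nodes() == n and nx.is_connected(H)
            and all(d == 2 for _, d in H.degree()))

def has_PH_property(G):
    n = G.number_of_nodes()
    matchings_of_G = [M for M in pairings(G.nodes())
                      if all(G.has_edge(u, v) for u, v in M)]
    for P in pairings(G.nodes()):
        P_set = {frozenset(e) for e in P}
        if not any(is_hamiltonian_cycle(
                       [tuple(e) for e in P_set | {frozenset(e) for e in N}], n)
                   for N in matchings_of_G):
            return False
    return True

print(has_PH_property(nx.cycle_graph(4)))      # Q_2 = C_4 -> True
print(has_PH_property(nx.complete_graph(4)))   # K_4      -> True
print(has_PH_property(nx.cycle_graph(6)))      # C_6      -> False
print(has_PH_property(nx.hypercube_graph(3)))  # Q_3      -> True (105 pairings)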
Recall that the Cartesian product G □ H of two graphs G and H is a graph whose vertex set is V(G) × V(H), and two vertices (u_i, v_j) and (u_k, v_ℓ) are adjacent precisely if u_i = u_k and v_jv_ℓ∈ E(H), or u_iu_k ∈ E(G) and v_j = v_ℓ.
Given a graph G, the prism operator 𝒫(G) consists of two copies G_1 and G_2 of G with the same vertex labelling as in G, and an edge between the vertices having the same label. Note that 𝒫(G)=G □ K_2, the Cartesian product of G with K_2. The result of a single application of the operator is usually called the prism graph 𝒫(G) of G (see <cit.>), and repeated applications shall be denoted by powers, with 𝒫^k(G) being the prism graph of 𝒫^k-1(G). If needed we shall assume that 𝒫^0(G)=G.
It is worth noting that for d≥ 2, 𝒬_d=𝒫^d-2(Q_2). Hence, Theorem <ref> is equivalent to saying that for each k>0, 𝒫^k(𝒬_2) admits the PH-property. One might wonder whether it is possible to replace 𝒬_2 with some other initial graph. The main contribution of this paper is Theorem <ref>, which generalises Theorem <ref>. We obtain a much larger class of graphs with the PH-property by proving that for every graph G having the PH-property, the graph 𝒫^k(G) has the PH-property for each k≥0. Hence, Kreweras' Conjecture, and therefore Theorem <ref>, turn out to be special consequences of Theorem <ref> obtained starting from G=𝒬_2, which is trivially PH.
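In computational terms, the prism operator is just a Cartesian product with K_2; the short networkx sketch below iterates it and verifies the identity 𝒬_d=𝒫^d-2(𝒬_2) by an isomorphism check for small d (an illustrative sanity check only).

# The prism operator as a Cartesian product with K_2.
import networkx as nx

def prism(G):
    return nx.cartesian_product(G, nx.complete_graph(2))

def prism_power(G, k):
    for _ in range(k):
        G = prism(G)
    return G

Q2 = nx.cycle_graph(4)                      # Q_2 is the 4-cycle
for d in range(2, 5):
    same = nx.is_isomorphic(prism_power(Q2, d - 2), nx.hypercube_graph(d))
    print(d, same)                          # True for every d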
Other results on this topic, dealing with the Cartesian product of graphs, were also obtained in <cit.> and <cit.>. In particular, we state the following theorem which shall be needed in Section <ref>.
Let P_q be a path of length q. The graph P_q □ 𝒬_d admits the PH-property, for d ≥ 5.
The above theorem is stated as Theorem 5 in <cit.>, where some other results apart from the statement above are proved. We use this result to obtain one of the same flavour for every connected graph G (see Theorem <ref>). More precisely, we prove that for every arbitrary connected graph G, the graph 𝒫^k(G) has the PH-property for a sufficiently large k, depending on the minimum number of leaves over all spanning trees of G. We refer the reader to <cit.> and <cit.> for other papers dealing with the Pairing-Hamiltonian property and related concepts under some graph operations.
§ GENERALISING FINK'S RESULT
As stated in the introduction, this section will be devoted to generalising Theorem <ref>.
Let G be a graph having the PH-property. Then, for each k≥0, 𝒫^k(G) admits the PH-property.
Consider 𝒫(G) and let G_1 and G_2 be the two main copies of the graph G in 𝒫(G). Then, a pairing P of 𝒫(G) can be partitioned into three subsets P_1 ∪ P_2 ∪ X where:
P_i={xy ∈ P | {x,y}⊂ V(G_i), for each i∈{1,2}}; and
X={xy ∈ P | x ∈ V(G_1), y ∈ V(G_2)}.
Note that |X| ≡ 0 (mod 2), since each G_i admits the PH-property and hence both are of even order. We shall distinguish between two cases: whether X is empty or not.
Case 1. |X|=0.
In this case, P=P_1 ∪ P_2. Since G_1 has the PH-property, there exists a perfect matching M of G_1 such that P_1 ∪ M is a Hamiltonian cycle of K_G_1. Let M' be the perfect matching of G_2 such that x'y' ∈ M' if and only if xy ∈ M. In other words, M' is the copy of M in G_2. We observe that P_2 ∪ M' consists of the union of cycles of even length, say C_1,… , C_t. Note that cycles of length 2 shall be allowed in the sequel as they arise when P_2 ∩ M' ≠∅. For each i ∈{1,…,t}, we choose an edge e_i'=x_i'y_i' ∈ M' ∩ C_i and we denote the corresponding edge in M by e_i=x_iy_i. Consequently, the set
N=(M ∖{ e_1,…, e_t}) ∪ (M' ∖{e'_1,…, e'_t}) ∪{ x_ix_i',y_iy_i' | i∈{1,…,t}}
is a perfect matching of 𝒫(G) such that P ∪ N is a Hamiltonian cycle of K_𝒫(G). We note that the vertex x_i' in G_2 corresponds to the vertex x_i in G_1, see Figure <ref>.
Case 2. |X|=2r>0.
In this case we consider an analogous argument to the one used by Fink to prove Theorem <ref>. Since |X| ≠ 0, P_1 is a matching of K_G_1 which is not perfect, as there are 2r unmatched vertices. Let L be an arbitrary set of r edges of K_G_1 such that P_1 ∪ L is a pairing of G_1. Since G_1 has the PH-property, there exists a perfect matching M, of G_1, such that P_1 ∪ L ∪ M is a Hamiltonian cycle of K_G_1. Next we define the following set
R = {x̄ȳ∈ E(K_G_2) | ∃ x,y ∈ V(G_1) with {xx̄,yȳ}⊆ X and ∃ an (x,y)-path contained in P_1 ∪ M },
such that P_2 ∪ R is a pairing of G_2. Note that xx̄ and yȳ are edges of K_𝒫(G) since |X| ≠ 0, and their endpoints need not be corresponding vertices of G_1 and G_2, as they were in the former case.
Since G_2 has the PH-property there exists a perfect matching M^' of G_2 such that P_2 ∪ R ∪ M^' is a Hamiltonian cycle of K_G_2. It follows that P_1 ∪ P_2 ∪ X ∪ M ∪ M^' is a Hamiltonian cycle of K_𝒫(G) in which M ∪ M^' is a perfect matching of 𝒫(G), see Figure <ref>.
This proves that 𝒫(G) has the PH-property and thus, by iterating the prism operator, the result follows.
§ CONVERGENCE OF GENERAL GRAPH PRISMS TO THE PH-PROPERTY
In this section we show that given any connected graph G, there exist a sufficiently large integer k such that 𝒫^k(G) has the PH-property. In other words, after iterating the prism operator a sufficient number of times, the resulting graph will have the PH-property. We remark that if a graph contains a spanning subgraph admitting the PH-property, then the graph itself admits the PH-property. Hence, by Theorem <ref>, the next corollary follows.
Let G be a traceable graph. For k ≥ 5, the graph 𝒫^k(G) has the PH-property.
Recall that a traceable graph is a graph admitting a Hamiltonian path. Next, we show that starting from an arbitrarily connected graph G, we can always obtain a traceable graph by iterating the prism operator a suitable number of times. To this purpose, we need the following definition and lemma.
Let G be a connected graph. The minimum leaf number of G, denoted by ml(G), is the minimum number of leaves over all spanning trees of G.
Clearly, for any connected graph G, ml(G)≥ 2, and ml(G)=2 if and only if G is traceable.
Let G be a connected graph with ml(G) >2. Then, ml(G) > ml(𝒫(G)).
Suppose that ml(G) =t>2 and let G_1 and G_2 be the two copies of G in 𝒫(G). Let R_1,R_2 be two copies of a spanning tree of G with t leaves in G_1 and G_2, respectively. Let S={e_0,e_1,…,e_t-1} be the set consisting of the t edges which connect a leaf of R_1 to the corresponding leaf of R_2. Consequently, we have that T_0=(R_1 ∪ R_2) + e_0 is a spanning tree of 𝒫(G) with 2t-2 leaves. Moreover, T_0+e_1 has exactly one cycle, say C_1. Since ml(G) >2, C_1 is a proper subgraph of T_0 +e_1 and there exists a vertex v of C_1 such that deg_T_0+e_1(v) >2. We note that the removal of an edge of C_1, say f_1, which is incident to v gives rise to a spanning tree T_1=T_0+e_1-f_1 of 𝒫(G) with at most 2t-3 leaves. Then, for every j∈{2,…, t-1}, starting from j=2 and continuing consecutively up to t-1, we choose an edge f_j from E(T_j-1+e_j) lying on the unique cycle in T_j-1+e_j and incident to a vertex of degree at least 3 in T_j-1+e_j. We then let T_j to be equal to T_j-1+e_j-f_j, which by a similar argument to the above is a spanning tree of 𝒫(G) with at most 2t-2-j leaves. Therefore, T_t-1 has at most t-1 leaves and ml(𝒫(G)) ≤ t-1 < ml(G).
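The lemma can be illustrated on a small example with the following brute-force Python sketch, which computes ml(G) by exhaustively scanning spanning trees (feasible only for tiny graphs): for the star K_1,3 one gets ml=3, while its prism is already traceable.

# Brute-force minimum leaf number ml(G) over all spanning trees (tiny graphs only).
import itertools
import networkx as nx

def minimum_leaf_number(G):
    n = G.number_of_nodes()
    best = None
    for edges in itertools.combinations(G.edges(), n - 1):
        T = nx.Graph()
        T.add_edges_from(edges)
        if T.number_of_nodes() == n and nx.is_tree(T):
            leaves = sum(1 for _, d in T.degree() if d == 1)
            best = leaves if best is None else min(best, leaves)
    return best

star = nx.star_graph(3)                                           # K_{1,3}: ml = 3
prism_star = nx.cartesian_product(star, nx.complete_graph(2))     # its prism
print(minimum_leaf_number(star), minimum_leaf_number(prism_star))  # 3 2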
From the above statements, it is easy to obtain the following result.
Let G be a connected graph. Then, 𝒫^k(G) is traceable for all k ≥ml(G)-2.
If we start from G and apply the prism operator ml(G)-2 times, by Lemma <ref>, the graph 𝒫^ml(G)-2(G) has ml(𝒫^ml(G)-2(G))=2.
Consequently, it admits a Hamiltonian path.
Combining Theorem <ref> and Proposition <ref> we obtain the following.
Let G be a connected graph with m=ml(G), then 𝒫^m+3(G) has the PH-property.
If G is traceable, then m=2, and so, from Theorem <ref> we have that 𝒫^5(G) has the PH-property. On the other hand, if G is not traceable, then m>2. By Theorem <ref>, the graph 𝒫^m-2(G) is traceable. Hence, by Theorem <ref>, 𝒫^m-2(𝒫^5(G))=𝒫^m+3(G) admits the PH-property.
§ FINAL REMARKS
Several open problems were posed in <cit.>. In particular, proving that the graph P_q 𝒬_d has the PH-property for d=3,4 and an arbitrary q is still open. It is dutiful to note that we are aware that in case of a positive answer, Theorem <ref> should be refined accordingly.
A much more ambitious problem is to ask whether it is enough for two graphs G and H to have the PH-property in order for G □ H to have the PH-property as well.
This latter question seems very difficult to prove. Here, we have shown, in Theorem <ref>, that it holds when H is the hypercube, which is an iteration of the prism operator. In Theorem <ref>, we see that even if G does not have the PH-property, but is traceable, a large enough number of iterations of the prism operator make it converge to a graph with the PH-property. As a matter of fact, we can define the parameter 𝔭(G) as the smallest positive integer 𝔭=𝔭(G) such that 𝒫^𝔭(G) admits the PH-property. It trivially follows that 𝔭(G)=0 if and only if G is PH. Henceforth, the parameter 𝔭(G) can be considered as a measure of how far a graph G is from having the PH-property, with respect to the prism operator. Determining the behaviour of 𝔭(G) for some special classes of graphs could be of interest in the study of the PH-property.
We could also wonder if there are other graphs that speed up the convergence to the PH-property under the Cartesian product, or on the other hand if there are other products under which the convergence to the PH-property is faster. It seems so if we consider the strong product of graphs.
The strong product G ⊠ H is a graph whose vertex set is the Cartesian product V(G) × V(H) of V(G) and V(H), and two vertices (u_i, v_j), (u_k, v_ℓ) are adjacent if and only if they are adjacent in G □ H or if u_iu_k∈ E(G) and v_jv_ℓ∈ E(H).
It is trivial that G □ H is a subgraph of G ⊠ H; hence, if G □ H has the PH-property, then G ⊠ H will inherit the same property as well.
A result from <cit.> on accordion graphs easily implies that in the case of Hamiltonian graphs, only one occurrence of the strong product with K_2 is enough to obtain a graph with the PH-property.
Let G be a Hamiltonian graph, then G ⊠ K_2 has the PH-property.
This suggests that the strong product may have a faster convergence to the PH-property than the Cartesian product also for general graphs.
AGZ-Rook M. Abreu, J.B. Gauci and J.P. Zerafa, Saved by the rook: a case of matchings and Hamiltonian cycles, Contrib. Discrete Math. (2023), accepted.
AAAHST A. Alahmadi, R.E.L. Aldred, A. Alkenani, R. Hijazi, P. Solé and C. Thomassen, Extending a perfect matching to a Hamiltonian cycle, Discrete Math. Theor. Comput. Sci., 17(1) (2015), 241–254.
PrismGraphs R.E.L. Aldred and M.D. Plummer, Matching extension in prism graphs, Discrete Appl. Math., 221 (2017), 25–32.
Fink J. Fink, Perfect matchings extend to Hamilton cycles in hypercubes, J. Combin. Theory Ser. B, 97 (2007), 1074–1076.
accordions J.B. Gauci and J.P. Zerafa, Accordion graphs: Hamiltonicity, matchings and isomorphism with quartic circulants, Discrete Appl. Math. 321 (2022), 126–137.
GauZer J.B. Gauci and J.P. Zerafa, Perfect Matchings and Hamiltonicity in the Cartesian Product of Cycles, Ann. Comb. 25 (2021), 789–796, https://doi.org/10.1007/s00026-021-00548-1.
Hag R. Häggkvist, On F-Hamiltonian graphs, in: J.A. Bondy, U.S.R. Murty (eds.), Graph Theory and Related Topics, Academic Press, New York, 1979, 219–231.
Kre G. Kreweras, Matchings and Hamiltonian cycles on hypercubes, Bull. Inst. Combin. Appl. 16 (1996), 87–91.
LasVergnas M. Las Vergnas, Problèmes de couplages et problèmes hamiltoniens en théorie des graphes, Thesis, University of Paris 6, Paris, 1972.
betwixt F. Romaniello and J.P. Zerafa, Betwixt and between 2-Factor Hamiltonian and Perfect-Matching-Hamiltonian Graphs, Electron. J. Combin. 30(2) (2023), #P2.5.
|
http://arxiv.org/abs/2307.06182v1 | 20230712141354 | CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification | [
"Zhenrong Shen",
"Maosong Cao",
"Sheng Wang",
"Lichi Zhang",
"Qian Wang"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
CellGAN: Conditional Cervical Cell Synthesis
Z. Shen et al.
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
[email protected] Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification
Zhenrong Shen1 Maosong Cao2 Sheng Wang1,3 Lichi Zhang1
Qian Wang2^()
August 12, 2023
=================================================================================================
Automatic examination of thin-prep cytologic test (TCT) slides can assist pathologists in finding cervical abnormality for accurate and efficient cancer screening.
Current solutions mostly need to localize suspicious cells and classify abnormality based on local patches, concerning the fact that whole slide images of TCT are extremely large.
It thus requires many annotations of normal and abnormal cervical cells, to supervise the training of the patch-level classifier for promising performance.
In this paper, we propose CellGAN to synthesize cytopathological images of various cervical cell types for augmenting patch-level cell classification.
Built upon a lightweight backbone, CellGAN is equipped with a non-linear class mapping network to effectively incorporate cell type information into image generation.
We also propose the Skip-layer Global Context module to model the complex spatial relationship of the cells, and attain high fidelity of the synthesized images through adversarial learning.
Our experiments demonstrate that CellGAN can produce visually plausible TCT cytopathological images for different cell types.
We also validate the effectiveness of using CellGAN to greatly augment patch-level cell classification performance.
Our code and model checkpoint are available at <https://github.com/ZhenrongShen/CellGAN>.
§ INTRODUCTION
Cervical cancer accounts for 6.6% of the total cancer deaths in females worldwide, making it a global threat to healthcare <cit.>.
Early cytology screening is highly effective for the prevention and timely treatment of cervical cancer <cit.>.
Nowadays, thin-prep cytologic test (TCT) <cit.> is widely used to screen cervical cancers according to the Bethesda system (TBS) rules <cit.>.
Typically there are five types of cervical squamous cells under TCT examinations <cit.>, including
normal class or negative for intraepithelial malignancy (NILM),
atypical squamous cells of undetermined significance (ASC-US),
low-grade squamous intraepithelial lesion (LSIL),
atypical squamous cells that cannot exclude HSIL (ASC-H),
and high-grade squamous intraepithelial lesion (HSIL).
The NILM cells have no cytological abnormalities while the others are manifestations of cervical abnormality to a different extent.
By observing cellular features (e.g., the nucleus-cytoplasm ratio) and judging cell types, pathologists can provide a diagnosis that is critical to the clinical management of cervical abnormality.
After scanning whole-slide images (WSIs) from TCT samples, automatic TCT screening is highly desired due to the large population versus the limited number of pathologists.
As the WSI data per sample has a huge size, the idea of identifying abnormal cells in a hierarchical manner has been proposed and investigated by several studies using deep learning <cit.>.
In general, these solutions start with the extraction of suspicious cell patches and then conduct patch-level classification.
Promising patch-level classification performance is critical, as it underpins the sample-level diagnosis obtained after integrating the outcomes from many patches in a WSI.
However, such a patch-level classification task requires a large amount of annotated training data.
Moreover, the effort of collecting reliably annotated data is far from negligible, since the annotation requires high expertise due to the intrinsic difficulty of visually reading WSIs.
To alleviate the shortage of data to supervise classification, one may adopt traditional data augmentation techniques, which, however, may bring little improvement because they scarcely expand data diversity <cit.>.
Thus, synthesizing cytopathological images for cervical cells is highly desired to effectively augment training data.
Existing literature on pathological image synthesis has explored the generation of histopathological images <cit.>.
In cytopathological images, on the contrary, cervical cells can be spatially isolated from each other, or are highly squeezed and even overlapped.
The spatial relationship of individual cells is complex, adding diversity to the image appearance of color, morphology, texture, etc.
In addition, the differences between cell types are mainly related to nuanced cellular attributes, thus requiring fine granularity in modulating synthesized images toward the expected cell types.
Therefore, the task to synthesize realistic cytopathological images becomes very challenging.
Aiming at augmenting the performance of cervical abnormality screening, we develop a novel conditional generative adversarial network in this paper, namely CellGAN, to synthesize cytopathological images for various cell types.
We leverage FastGAN <cit.> as the backbone for the sake of training stability and computational efficiency.
To inject cell type for fine-grained conditioning, a non-linear mapping network embeds the class labels to perform layer-wise feature modulation in the generator.
Meanwhile, we introduce the Skip-layer Global Context (SGC) module to capture the long-range dependency of cells for precisely modeling their spatial relationship.
We adopt an adversarial learning scheme, where the discriminator is modified in a projection-based way <cit.> for matching conditional data distribution.
To the best of our knowledge, our proposed CellGAN is the first generative model with the capability to synthesize realistic cytopathological images for various cervical cell types.
The experimental results validate the visual plausibility of CellGAN synthesized images, as well as demonstrate their data augmentation effectiveness on patch-level cell classification.
§ METHOD
The dilemma of medical image synthesis lies in the conflict between the limited availability of medical image data and the high demand for data amount to train reliable generative models.
To ensure the synthesized image quality given relatively limited training samples, the proposed CellGAN is built upon FastGAN <cit.> towards stabilized and fast training for few-shot image synthesis.
By working in a class-conditional manner, CellGAN can explicitly control the cervical squamous cell types in the synthesized cytopathological images, which is critical to augment the downstream classification task.
The overall architecture of CellGAN is presented in Fig. <ref>, and more detailed structures of the key components are displayed in Supplementary Materials.
§.§ Architecture of the Generator
The generator of CellGAN has two input vectors.
The first input is the class label y, which adopts one-hot encoding and provides class-conditional information to indicate the expected cervical cell type in the synthesized image I_syn.
The second input is the 128-dimensional latent vector z, which represents the remaining image information and from which I_syn is gradually expanded.
We stack six UpBlocks to form the main branch of the generator.
To inject cell class label y into each UpBlock, we follow a similar design to StyleGAN <cit.>.
Specifically, the class label y is first projected to a class embedding c via a non-linear mapping network, which is implemented using four groups of fully connected layers and LeakyReLU activations.
We set the dimensions of class embedding c to the same as the latent vector z.
Then, we pass c through learnable affine transformations, such that the class embedding is specialized to the scaling and bias parameters controlling Adaptive Instance Normalization (AdaIN) <cit.> in each UpBlock.
The motivation for the design above comes from our hypothesis that the class-conditional information mainly encodes cellular attributes related to cell types, rather than common image appearance.
Therefore, by modulating the feature maps at multiple scales, the input class label can better control the generation of cellular attributes.
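A minimal PyTorch sketch of this class-conditional modulation is given below; the layer sizes, module names and the exact AdaIN parameterization are assumptions for illustration rather than the exact CellGAN implementation.

# Hedged sketch: class mapping network + AdaIN-style feature modulation.
import torch
import torch.nn as nn

class ClassMappingNetwork(nn.Module):
    """Maps a one-hot class label y to a class embedding c (4 FC + LeakyReLU blocks)."""
    def __init__(self, num_classes=5, embed_dim=128):
        super().__init__()
        layers, dim = [], num_classes
        for _ in range(4):
            layers += [nn.Linear(dim, embed_dim), nn.LeakyReLU(0.2)]
            dim = embed_dim
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)

class AdaIN(nn.Module):
    """Adaptive Instance Normalization driven by a learned affine map of c."""
    def __init__(self, embed_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.affine = nn.Linear(embed_dim, 2 * num_channels)   # -> (scale, bias)

    def forward(self, x, c):
        scale, bias = self.affine(c).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        return (1 + scale) * self.norm(x) + bias

# Example: modulate a 64-channel feature map with the ASC-US class (index 1).
y = torch.nn.functional.one_hot(torch.tensor([1]), num_classes=5).float()
c = ClassMappingNetwork()(y)
x = torch.randn(1, 64, 32, 32)
print(AdaIN(embed_dim=128, num_channels=64)(x, c).shape)   # torch.Size([1, 64, 32, 32])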
We further introduce the Skip-layer Global Context (SGC) module into the generator (see Fig.2 in Supplementary Materials), to better handle the diversity of the spatial relationship of the cells.
Our SGC module reformulates the idea of GCNet <cit.> with the design of SLE module from FastGAN <cit.>.
It first performs global context modeling on the low-resolution feature maps, then transforms global context to capture channel-wise dependency, and finally merges the transformed features into high-resolution feature maps.
In this way, the proposed SGC module learns a global understanding of the cell-to-cell spatial relationship and injects it into image generation via computationally efficient modeling of long-range dependency.
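A possible realization of such a module is sketched below in PyTorch; it combines GCNet-style global context pooling on the low-resolution features with SLE-style channel gating of the high-resolution features, and all architectural details (channel widths, bottleneck ratio, gating non-linearity) are assumptions made for illustration.

# Hedged sketch of a Skip-layer Global Context (SGC) module.
import torch
import torch.nn as nn

class SkipLayerGlobalContext(nn.Module):
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.context = nn.Conv2d(low_ch, 1, 1)              # attention map for context pooling
        self.transform = nn.Sequential(                      # global context -> channel gates
            nn.Conv2d(low_ch, high_ch // 4, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(high_ch // 4, high_ch, 1), nn.Sigmoid())

    def forward(self, x_low, x_high):
        B, C, H, W = x_low.shape
        attn = torch.softmax(self.context(x_low).view(B, 1, H * W), dim=-1)   # (B,1,HW)
        ctx = torch.bmm(x_low.view(B, C, H * W), attn.transpose(1, 2))        # (B,C,1)
        ctx = ctx.view(B, C, 1, 1)
        return x_high * self.transform(ctx)                  # channel-wise gating of high-res map

sgc = SkipLayerGlobalContext(low_ch=256, high_ch=64)
print(sgc(torch.randn(1, 256, 8, 8), torch.randn(1, 64, 128, 128)).shape)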
§.§ Discriminator and Adversarial Training
In an adversarial training setting, the discriminator forces the generator to faithfully match the conditional data distribution of real cervical cytopathological images, thus prompting the generator to produce visually and semantically realistic images.
For training stability, the discriminator is trained as a feature encoder with two extra decoders.
In particular, five ResNet-like <cit.> DownBlocks are employed to convert the input image into an 8×8×512 feature map.
Two simple decoders reconstruct downscaled and randomly cropped versions of input images I'_crop and I'_resize from 8^2 and 16^2 feature maps, respectively.
These decoders are optimized together with the discriminator by using a reconstruction loss ℒ_recon that is represented below:
ℒ_recon=𝔼_f∼ Dis(x),x∼ I_real[ ‖ Dec(f)-𝒯(x) ‖_ℓ_1 ],
where 𝒯 denotes the image processing (i.e., 1/2 downsampling and 1/4 random cropping) on the real image I_real,
f is the processed intermediate feature map from the discriminator Dis,
and Dec stands for the reconstruction decoder.
This simple self-supervised technique provides a strong regularization in forcing the discriminator to extract a good image representation.
To provide more detailed feedback from the discriminator, PatchGAN <cit.> architecture is adopted to output an 8×8 logit map by using a 1×1 convolution on the last feature map.
By penalizing image content at the scale of patches, the color fidelity of synthesized images is guaranteed as illustrated in our ablation study (see Fig. <ref>).
To align the class-conditional fake and real data distributions in the adversarial setting, the discriminator directly incorporates class labels as additional inputs in the manner of projection discriminator <cit.>.
The class label is projected to a learned 512-dimensional class embedding and takes inner-product at every spatial position of the 8×8×512 feature map.
The resulting 8×8 feature map is then added to the aforementioned 8×8 logit map, composing the final output of the discriminator.
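The projection-based conditioning can be sketched as follows in PyTorch; the module and variable names are illustrative assumptions.

# Hedged sketch: PatchGAN logits plus a projection term from a learned class embedding.
import torch
import torch.nn as nn

class ProjectionPatchHead(nn.Module):
    def __init__(self, num_classes=5, feat_dim=512):
        super().__init__()
        self.to_logits = nn.Conv2d(feat_dim, 1, kernel_size=1)   # 8x8 PatchGAN logit map
        self.class_embed = nn.Embedding(num_classes, feat_dim)   # learned 512-d class embedding

    def forward(self, feat, labels):               # feat: (B, 512, 8, 8), labels: (B,)
        patch_logits = self.to_logits(feat)                        # (B, 1, 8, 8)
        e = self.class_embed(labels)[:, :, None, None]             # (B, 512, 1, 1)
        proj = (feat * e).sum(dim=1, keepdim=True)                 # inner product per position
        return patch_logits + proj

head = ProjectionPatchHead()
print(head(torch.randn(2, 512, 8, 8), torch.tensor([0, 3])).shape)   # torch.Size([2, 1, 8, 8])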
For the objective function, we use the hinge version <cit.> of the standard adversarial loss ℒ_adv. We also employ R_1 regularization ℒ_reg <cit.> as a slight gradient penalty for the discriminator.
Combining all the loss functions above, the total objective ℒ_total to train the proposed CellGAN in an adversarial manner can be expressed as:
ℒ_total=ℒ_adv+ℒ_recon+λ_regℒ_reg,
where λ_reg is empirically set to 0.01 in our experiments.
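The adversarial and regularization terms can be sketched as follows in PyTorch (a hedged illustration; the reconstruction term and the training loop are omitted, and function names are assumptions).

# Hedged sketch of the hinge adversarial terms and the R1 penalty in the total objective.
import torch
import torch.nn.functional as F

def d_hinge(real_logits, fake_logits):          # discriminator part of L_adv
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()

def g_hinge(fake_logits):                       # generator part of L_adv
    return -fake_logits.mean()

def r1_penalty(real_logits, real_images):
    # requires real_images.requires_grad_(True) before the discriminator pass
    grad, = torch.autograd.grad(real_logits.sum(), real_images, create_graph=True)
    return grad.pow(2).flatten(1).sum(1).mean()

# loss_D = d_hinge(D(x_real), D(x_fake)) + L_recon + 0.01 * r1_penalty(D(x_real), x_real)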
§ EXPERIMENTAL RESULTS
§.§ Dataset and Experimental Setup
§.§.§ Dataset
In this study, we collect 14,477 images with 256×256 pixels from three collaborative clinical centers.
All the images are manually inspected to contain different cervical squamous cell types.
In total, there are 7,662 NILM, 2,275 ASC-US, 2,480 LSIL, 1,638 ASC-H, and 422 HSIL images.
All the 256×256 images with their class labels are selected as the training data.
§.§.§ Implementation Details
We use the learning rate of 2.5×10^-4, batch size of 64, and Adam optimizer <cit.> to train both the generator and the discriminator for 100k iterations.
Spectral normalization <cit.>, differentiable augmentation <cit.> and exponential-moving-average optimization <cit.> are included in the training process.
Fréchet Inception Distance (FID) <cit.> is used to measure the overall semantic realism of the synthesized images.
All the experiments are conducted using an NVIDIA GeForce RTX 3090 GPU with PyTorch <cit.>.
§.§ Evaluation of Image Synthesis Quality
We compare CellGAN with state-of-the-art generative models for class-conditional image synthesis, i.e., BigGAN <cit.> representing cGANs <cit.> and the Latent Diffusion Model (LDM) <cit.> representing diffusion models <cit.>.
As shown in Fig. <ref>, BigGAN cannot generate individual cells with clearly defined cell boundaries.
It also fails to capture the morphological features of HSIL cells, which are relatively limited in training data quantity.
LDM only yields partially formed cell structures, since the generated cells are mixed together, and there is negligible class separability among the abnormal cell types.
On the contrary, our proposed CellGAN is able to synthesize visually plausible cervical cells and accurately model distinguishable cellular features for each cell type.
The quantitative comparison by FID in Table <ref> also demonstrates the superiority of CellGAN in synthesized image quality.
To verify the effects of key components in the proposed CellGAN, we conduct an ablation study on four model settings in Table <ref> and Fig. <ref>.
We denote the models in Fig. <ref> from left to right as Model i, Model ii, Model iii, and CellGAN.
The visual results of Model i suffer from severe color distortions while the other models do not, indicating that the PatchGAN-based discriminator can guarantee color fidelity by patch-level image content penalty.
The abnormal cells generated by Model i and Model ii tend to have highly similar cellular features.
In contrast, Model iii and CellGAN can accurately capture the morphological characteristics of different cell types.
This phenomenon suggests that the implementation of the class mapping network facilitates more distinguishable feature representations for different cell types.
By comparing the synthesized images from Model iii with CellGAN, it is observed that adopting SGC modules can yield more clear cell boundaries, which demonstrates the capability of SGC module in modeling complicated cell-to-cell relationships in image space.
The quantitative results further state the effects of the components above.
§.§ Evaluation of Augmentation Effectiveness
To validate the data augmentation capacity of the proposed CellGAN, we conduct 5-fold cross-validations on the cell classification performances of two classifiers (ResNet-34 <cit.> and DenseNet-121 <cit.>) using four training data settings for comparison:
(1) real data only (the baseline);
(2) baseline + BigGAN synthesized images;
(3) baseline + LDM synthesized images;
(4) baseline + CellGAN synthesized images.
For each cell type, we randomly select 400 real images and divide them into 5 groups.
In each fold, one group is selected as the testing data while the other four are used for training.
For different data settings, we synthesize 2,000 images for each cell type using the corresponding generative method, and add them to the training data of each fold.
We use the learning rate of 1.0×10^-4, batch size of 64, and SGD optimizer <cit.> to train all the classifiers for 30 epochs.
Random flip is applied to all data settings since it is reasonable to use traditional data augmentation techniques simultaneously in practice.
The experimental accuracy, precision, recall, and F1 score are listed in Table <ref>.
It is shown that both the classifiers achieve the best scores in all metrics using the additional synthesized data from CellGAN.
Compared with the baselines, the accuracy values of ResNet-34 and DenseNet-121 are improved by 5.25% and 4.05%, respectively.
Meanwhile, the scores of other metrics are all improved by more than 4%, indicating that our synthesized data can significantly enhance the overall classification performance.
Thanks to the visually plausible and semantically realistic synthesized data, CellGAN is conducive to the improvement of cell classification, thus serving as an efficient tool for augmenting automatic abnormal cervical cell screening.
§ CONCLUSION AND DISCUSSION
In this paper, we propose CellGAN for class-conditional cytopathological image synthesis of different cervical cell types.
Built upon FastGAN for training stability and computational efficiency, incorporating class-conditional information of cell types via non-linear mapping can better represent distinguishable cellular features.
The proposed SGC module provides the global contexts of cell spatial relationships by capturing long-range dependencies.
We have also found that the PatchGAN-based discriminator can prevent potential color distortion.
Qualitative and quantitative experiments validate the semantic realism as well as the data augmentation effectiveness of the synthesized images from CellGAN.
Meanwhile, our current CellGAN still has several limitations.
First, we cannot explicitly control the detailed attributes of the synthesized cell type, e.g., nucleus size and nucleus-cytoplasm ratio.
Second, in this paper, the synthesized image size is limited to 256×256.
It is worth conducting more studies for expanding the synthesized image size to contain many more cells, such that the potential applications can be extended to other clinical scenarios (e.g., interactively training pathologists) in the future.
§.§.§ Acknowledgement.
This work was supported by the National Natural Science Foundation of China (No. 62001292).
Supplementary Materials for the Paper Titled "CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification"
|
http://arxiv.org/abs/2307.07451v1 | 20230714161345 | Unveiling the nature of galactic TeV sources with IceCube results | [
"Vittoria Vecchiotti",
"Francesco L. Villante",
"Giulia Pagliaroli"
] | astro-ph.HE | [
"astro-ph.HE"
] |
[email protected]
NTNU, Department of Physics, NO-7491 Trondheim, Norway
University of L'Aquila, Physics and Chemistry Department, 67100 L'Aquila, Italy
INFN, Laboratori Nazionali del Gran Sasso, 67100 Assergi (AQ), Italy
INFN, Laboratori Nazionali del Gran Sasso, 67100 Assergi (AQ), Italy
IceCube collaboration reported the first high-significance observation of the neutrino emission from the Galactic disk.
The observed signal can be due to diffuse emission produced by cosmic rays interacting with interstellar gas but can also arise from a population of sources.
In this paper, we evaluate both the diffuse and source contribution by taking advantage of gamma-ray observations and/or theoretical considerations.
By comparing our expectations with IceCube measurement, we constrain the fraction of Galactic TeV gamma-ray sources (resolved and unresolved) with hadronic nature.
In order to be compatible with the IceCube results, this fraction should be less than ∼ 40% corresponding to a cumulative source flux Φ_ν, s≤ 2.6 × 10^-10 cm^-2s^-1 integrated in the 1-100 TeV energy range.
Unveiling the nature of galactic TeV sources with IceCube results
G. Pagliaroli
July 2023
=================================================================
The diffuse galactic neutrino emission produced by hadronic interactions of high-energy Cosmic Rays (CR) with the gas contained in the galactic disk is a guaranteed signal for neutrino telescopes <cit.>.
The detection of this component is, however, challenging due both to the atmospheric neutrino background and to its subdominant role in all-sky astrophysical neutrino emission <cit.>.
Very recently, IceCube succeeded in its detection thanks to a decade of accumulated statistics and exploiting new machine learning techniques, providing the first detection of the neutrino emission from the galactic plane at the 4.5σ level of significance <cit.>.
IceCube exploits a template fitting procedure testing the data compatibility with three models for the expected galactic diffuse neutrino emission.
For each model, the spatial and spectral shapes are frozen to the expected ones while the normalization is free to match the neutrino data considering the entire sky.
All the models considered by IceCube describe the truly diffuse emission expected by CR interactions with the interstellar medium.
However, freshly accelerated hadrons colliding with the ambient medium within or close to an acceleration site can also produce high-energy neutrinos.
This "sources" component cannot be resolved with the actual statistics and with the poor angular resolution of IceCube cascade events, providing an additional large-scale galactic neutrino emission that adds to the truly diffuse emission due to CR interactions.
The detected IceCube neutrino signal is most likely due to the total galactic neutrino emission where part of the signal could also arise from a population of unresolved point sources, as also stated by the IceCube collaboration.
In this paper, we discuss the relative importance of truly diffuse and source components by using a multi-messenger approach.
High-energy sources have been observed in the TeV and sub-PeV energy domain by gamma-ray detectors, such as H.E.S.S. <cit.>, HAWC <cit.> and LHAASO <cit.>.
It was recently proven that unresolved gamma-ray sources have a relevant role in the interpretation of the large-scale gamma-ray emission detected in different energy ranges.
In particular, the presence of an unresolved source component at ∼ 10 GeV summed to the truly diffuse emission can change the spectral shape of the diffuse gamma-ray signal observed by Fermi-LAT mimicking a CRs spectral hardening in the inner Galaxy <cit.>.
At very high energy, the presence of the additional diffuse component due to unresolved sources seems needed to obtain a good agreement with the Tibet ASγ data, especially at high longitudes <cit.>.
All this suggests that sources could give a non-negligible contribution also to neutrino emission in the TeV energy domain explored by IceCube.
The relevance of this component depends, however, on the hadronic or leptonic nature of sources.
Hadronic processes produce a roughly equal number of charged and neutral pions which decay to neutrinos and gamma rays, respectively.
This strong correlation between the neutrino and gamma-ray sky, always valid for the truly diffuse emission, fails for the "sources" component if they have a leptonic nature.
In the following, we discuss the constraints on the fraction of Galactic TeV gamma-ray sources (resolved and unresolved) with hadronic nature that can be obtained from IceCube results.
The signal observed by IceCube is determined by the total galactic neutrino emission:
φ_ν, tot(E_ν)=φ_ν, diff(E_ν) +φ_ν, s(E_ν; E_ cut, ξ)
which is obtained as the sum of the truly diffuse emission φ_ν, diff produced by CR interactions with the interstellar gas and the cumulative contribution produced by sources φ_ν, s within a given observation window.
Since sources cannot be individually resolved, the two components cannot be disentangled, unless one uses additional information provided by gamma-ray observations and/or theoretical considerations, as is done in this paper.
The diffuse component can be estimated by using the approach described in <cit.>.
The obtained predictions depend on the assumed CR spatial and energy distribution, motivating the two cases (labeled as "Case B" and "Case C", respectively) better discussed in the following.
The cumulative neutrino source flux φ_ν, s(E_ν; E_ cut, ξ) is calculated by using the approach described in <cit.> which relies on the population study of the sources in the HGPS catalog <cit.> performed by <cit.>.
It is obtained by assuming that a fraction ξ of the source population emits gamma-rays and neutrinos due to hadronic interactions of primary nucleon flux ϕ_ p(E) ∝ E^-Γ_ p exp(-E/E_ cut) .
In our calculations, the proton spectral index is chosen as Γ_p = 2.4 to reproduce the average spectral properties of HGPS sources while the proton cutoff energy E_ cut is free to vary.
The source component is thus obtained in terms of two parameters, ξ and E_ cut, and can be written as:
φ_ν, s = ξ Φ^ max_ν, s ϕ_ν(E_ν; E_ cut)
where Φ^ max_ν, s represents the maximal source neutrino flux integrated in the [ 1, 100 ] TeV energy window, i.e. the neutrino source contribution obtained by assuming that all the TeV gamma-ray sources, resolved and unresolved, are powered by hadronic processes.
The quantity ϕ_ν(E_ν; E_ cut) is the neutrino spectrum produced by hadronic interactions (normalized in the same energy window), see <cit.> for details.
By choosing ξ=1 and E_ cut = ∞, we are able to determine the maximal neutrino flux allowed by gamma-ray observation.
It should be remarked that this limit, being based on the entire population, includes by construction also the potential contribution of sources that are not resolved by present gamma-ray telescopes.
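The structure of the source term can be illustrated with the following toy Python sketch; the assumed neutrino spectral shape (a power law with an effective exponential cutoff at ~E_cut/20, mimicking the average energy transfer in hadronic interactions) and the adopted value of Φ^max_ν,s are illustrative placeholders rather than the full calculation of the paper.

# Toy numerical illustration of phi_nu_s = xi * Phi_max * phi_nu(E; E_cut),
# with a unit-normalized spectral shape over 1-100 TeV (assumptions flagged below).
import numpy as np
from scipy.integrate import quad

GAMMA_P = 2.4                      # proton spectral index adopted in the text

def shape(E, E_cut):               # E, E_cut in TeV; nu cutoff ~ E_cut/20 is an assumption
    return E**(-GAMMA_P) * np.exp(-E / (E_cut / 20.0))

def phi_nu_source(E, xi, E_cut, Phi_max):
    norm = quad(shape, 1.0, 100.0, args=(E_cut,))[0]
    return xi * Phi_max * shape(E, E_cut) / norm        # cm^-2 s^-1 TeV^-1

Phi_max = 6.5e-10                  # placeholder value for the maximal integrated source flux
flux_40pct = quad(phi_nu_source, 1.0, 100.0, args=(0.4, 500.0, Phi_max))[0]
print(flux_40pct)                  # equals 0.4 * Phi_max by construction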
In Fig. <ref> and Fig. <ref> we compare our predictions for the galactic neutrino emission with the IceCube results.
The IceCube galactic signal is obtained by using a template fitting procedure where the angular and energy dependence of the neutrino flux is fixed according to three different models, namely the π_0, KRA_γ^5 and KRA_γ^50 models <cit.>, while the overall normalization is free to vary.
We restrict our comparison to the angular region 0^∘≤ l ≤ 360^∘ and |b|< 5^∘ where the best-fits of the Galactic neutrino component obtained for the different templates give almost the same constraints above ∼ 50 TeV.
Moreover, in order to be conservative and to take into account the systematic uncertainty related to the adopted template, we show with the magenta region
the superposition of the regions obtained by IceCube by using different assumptions (including also 1σ uncertainties of the respective fits).
The displayed band shows that the energy region most effectively probed by IceCube is 50≤ E_ν≤ 100 TeV since different assumptions basically lead to the same reconstructed flux.
At lower energy, the extracted signal depends instead on the assumed neutrino spectrum.
In this respect, we recall that the neutrino spectral index is assumed to be equal to 2.7 in the π_0 model while it is close to 2.5 for the KRA_γ models.
We finally note that the IceCube signal is always below the maximal limit allowed by γ-ray observations (gray solid lines in Fig. <ref> and Fig. <ref>) derived in <cit.> and suitably rescaled in the considered angular region.
This is a relevant conclusion, different from what obtained by ANTARES <cit.> that reported a hint for a Galactic neutrino signal which can extend well above this limit, see <cit.>.
The truly diffuse neutrino emission φ_ν, diff due to CR interactions with the ISM is displayed by the cyan band in Fig. <ref>, labeled as Case B, and by the red band in Fig. <ref>, labeled as Case C.
We calculate this contribution by following the prescriptions of <cit.>.
We adopt the nucleon-nucleon cross-section given by <cit.> and the gas distribution of atomic and molecular hydrogen from the GALPROP code <cit.>.
The heavy element contribution is taken into account by assuming that the total mass of the interstellar gas is a factor 1.42 larger than the mass of hydrogen, as it is expected if the solar system composition is considered representative for the entire Galactic disk <cit.>.
The main source of uncertainty for the calculation of this component is the determination of the differential CR flux φ_ CR(E, r) as a function of the energy and position in the Galaxy.
In our Case B, CRs are assumed to have the same spectrum in the entire Galaxy; the flux φ_ CR(E, r) can be thus directly linked to its local determination φ_ CR, ⊙(E) parameterized by <cit.> by a position-dependent normalization factor that is calculated by assuming isotropic diffusion from the (non-uniform) distributions of CR sources in the Galaxy.
The obtained results depend on the adopted diffusion radius R.
The upper limit (both for Case B and Case C subsequently discussed), is obtained by taking
R = 1 kpc, i.e. by assuming that CRs are confined relatively close to their sources.
The lower limit is obtained by assuming R = ∞ that corresponds to a CR spatial distribution very close
to that predicted by the GALPROP code.
Finally, Case C implements as an additional ingredient the possibility, recently emerged from the analysis of Fermi-LAT gamma-ray data at GeV energies <cit.>, that CRs have a harder spectrum in the inner Galaxy than at the Sun position, see <cit.> for details.
As a result of these assumptions, one expects a larger neutrino emission in the TeV domain, with a harder spectral index (that also depends on the direction of the observation), as it is displayed by the red band in Fig.<ref>.
Our predictions for the truly diffuse emission are compared with the three reference models used by IceCube in Fig. <ref>.
As it is expected, our Case C is very similar to the KRA^5_γ while Case B predicts a diffuse emission which is a factor ∼ 2 greater than the π_0 model.
This is due to the fact that the π_0 model is obtained by extrapolating the neutrino diffuse emission at GeV energies (estimated from gamma-ray data) with a spectral index equal to 2.7.
This is, however, not consistent with the observed CR spectral behavior that
shows a hardening at rigidity ∼ 300 GV <cit.>.
This feature is automatically implemented in our calculations but is not considered in the π_0 model that consequently underestimates neutrino diffuse emission.
The first conclusion that is obtained from our calculations is that the Galactic gamma-ray source population cannot be entirely powered by hadronic mechanisms.
Indeed, the total predicted neutrino flux obtained by taking ξ=1 greatly exceeds the IceCube signal both in Case B and in Case C, unless the proton cutoff energy is much lower than 100 TeV, i.e. a value that is not compatible with the fact that gamma-ray sources have been observed to emit up to the sub-PeV energy domain, see e.g. <cit.>.
This conclusion is particularly strong and rich in physical implications when we consider our Case C, i.e. if we assume that the CR spectral index is position-dependent and becomes harder toward the Galactic center, as obtained from the analysis of the Fermi-LAT data by <cit.>.
Indeed, as it is reported in Fig.<ref>, the diffuse emission in our Case C saturates the IceCube signal, leaving no space for any other additional contribution.
This result is consistent with the best-fit normalization smaller than 1 that was obtained by IceCube analysis for the KRA^5_γ model <cit.>.
The above result automatically implies that the source contribution to the observed signal should be zero or negligible.
In other words, one is forced to require that either ξ≪ 1 or E_ cut≪ 500 TeV in such a way that the source contribution in the energy range probed by IceCube becomes much smaller than the CR diffuse emission.
This requirement, however, might not be easily fulfilled in the context of the model that we are considering.
Indeed, in order to motivate CR spectral hardening in the inner Galaxy, one is forced to assume that CRs have a galactic origin.
This implies the existence of sources in our Galaxy that should accelerate hadrons up to few PeVs energy.
As an example, the KRA^5_γ model assumes that the source injection spectrum is a power law with an exponential cutoff at 5 PeV.
In order not to exceed the IceCube signal, one is forced to assume that these sources accelerate hadrons up to a few PeV but do not effectively produce neutrinos in the 1-100 TeV energy range.
The situation is quite different if we consider our Case B, i.e. we assume that the CR spectrum is uniform within the Galaxy and corresponds to that measured at the Earth and parameterized by <cit.>.
In this case, the IceCube data allow for a non-vanishing source contribution that seems to be even required if we restrict the comparison to the most constrained energy range 50≤ E_ν≤ 100 TeV.
The blue bands reported in Fig. <ref> show the total (diffuse + sources) neutrino emission evaluated by using Eq. <ref> and considering selected values of the two parameters ξ and E_ cut.
We see that the allowed fraction ξ of TeV gamma-ray sources that can have hadronic nature depends on the assumed proton cutoff energy.
For E_ cut = 10 PeV, the maximal fraction is ∼ 20%, corresponding to a source contribution integrated between 1 and 100 TeV that is equal to Φ_ν, s = 1.1× 10^-10 cm^-2s^-1.
For a smaller cutoff energy E_ cut = 500 TeV, we obtain ξ≤ 40%, corresponding to Φ_ν, s≤ 2.6× 10^-10 cm^-2s^-1.
Larger values for ξ require smaller proton cutoff energies that, however, would correspond to assuming that the neutrino (and gamma-ray) source emission spectrum is suppressed above few× 10 TeV, with potential difficulties to explain the IceCube signal in the most constrained energy region above 50 TeV.
Finally, we can compare our findings with our present knowledge of TeV gamma-ray sources.
If we consider the HGPS catalog, we obtain that the cumulative gamma-ray flux integrated in the 1-100 TeV energy range that is produced by potential hadronic sources, i.e. 8 Supernova Remnants and 8 Composite Sources, is about ∼ 12% of the total gamma-ray signal Φ_γ, s = 4.2 × 10^-10 cm^-2s^-1 produced by the entire (resolved + unresolved) source population (see tab. 1 of <cit.>).
Converted into neutrinos, these 16 sources would account for a cumulative flux at the level of ∼ 6.0 × 10^-11 cm^-2s^-1.
This flux is non-negligible and compatible with our limits for Case B, thus potentially supporting the scenario in which comparable contributions to the IceCube signal are provided by the diffuse and source components, while disfavoring our Case C, which requires a negligible source contribution.
However, the number of identified sources of this kind is still too limited to allow us to reach this conclusion on firm statistical grounds.
In conclusion, we have discussed the implications of the recent measurement of high-energy neutrino emission from the Galactic disk performed by IceCube.
We have shown that the IceCube signal is compatible with the upper limit allowed by TeV gamma-ray observations calculated by <cit.>.
Moreover, we have demonstrated that only a fraction of the TeV-Galactic gamma-ray sources can have hadronic nature.
This fraction has to be negligible if we assume that CRs diffusing in the inner Galaxy have a harder spectrum than at the Sun's position, as assumed, e.g., in the KRA_γ models or, equivalently, in our Case C.
If we consider instead the standard scenario in which the CR spectrum is uniform within the Galaxy (i.e. Case B), the maximally allowed fraction is ξ≤ 40%, for a cutoff energy of the source proton spectrum E_ cut=500 TeV, corresponding to a cumulative source flux from the Galactic plane Φ_ν, s≤ 2.6 × 10^-10 cm^-2s^-1.
Larger cutoff energies lead to smaller values for Φ_ν, s while lower cutoffs are not consistent with the IceCube signal at ∼ 100 TeV.
|
http://arxiv.org/abs/2307.04111v1 | 20230709070831 | Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication | [
"José Miguel Mateos-Ramos",
"Christian Häger",
"Musa Furkan Keskin",
"Luc Le Magoarou",
"Henk Wymeersch"
] | eess.SP | [
"eess.SP"
] |
Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication
José Miguel Mateos-Ramos, Student Member, IEEE,
Christian Häger, Member, IEEE,
Musa Furkan Keskin, Member, IEEE,
Luc Le Magoarou, Member, IEEE,
Henk Wymeersch, Senior Member, IEEE
This work was supported, in part, by a grant from the Chalmers AI Research Center Consortium (CHAIR), by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), the Swedish Foundation for Strategic Research (SSF) (grant FUS21-0004, SAICOM), Hexa-X-II, part of the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101095759., and Swedish Research Council (VR grant 2022-03007). The work of C. Häger was also supported by the Swedish Research Council under grant no. 2020-04718.
José Miguel Mateos-Ramos, Christian Häger, Musa Furkan Keskin and Henk Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Sweden (email: [email protected]; [email protected]; [email protected]; [email protected]).
Luc Le Magoarou is with INSA Rennes, CNRS, IETR - UMR 6164, F-35000, Rennes, France (email: [email protected]).
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
We study model-based end-to-end learning in the context of integrated sensing and communication (ISAC) under hardware impairments.
A monostatic orthogonal frequency-division multiplexing (OFDM) sensing and multiple-input single-output (MISO) communication scenario is considered, incorporating hardware imperfections at the ISAC transceiver antenna array.
To enable end-to-end learning of the ISAC transmitter and sensing receiver, we propose a novel differentiable version of the orthogonal matching pursuit (OMP) algorithm that is suitable for multi-target sensing.
Based on the differentiable OMP, we devise two model-based parameterization strategies to account for hardware impairments: (i) learning a dictionary of steering vectors for different angles, and (ii) learning the parameterized hardware impairments.
For the single-target case, we carry out a comprehensive performance analysis of the proposed model-based learning approaches, a neural-network-based learning approach and a strong baseline consisting of least-squares beamforming, conventional OMP, and maximum-likelihood symbol detection for communication.
Results show that learning the parameterized hardware impairments offers higher detection probability, better angle and range estimation accuracy, lower communication symbol error rate (SER), and exhibits the lowest complexity among all learning methods.
Lastly, we demonstrate that learning the parameterized hardware impairments is scalable also to multiple targets, revealing significant improvements in terms of ISAC performance over the baseline.
Hardware impairments, integrated sensing and communication (ISAC), joint communication and sensing (JCAS), machine learning, model-based learning, orthogonal matching pursuit (OMP).
§ INTRODUCTION
Next-generation wireless communication systems are expected to operate at higher carrier frequencies to meet the data rate requirements necessary for emerging use cases such as smart cities, e-health, and digital twins for manufacturing <cit.>. Higher carrier frequencies also enable new functionalities, such as ISAC. ISAC aims to integrate radar and communication capabilities in one joint system, which enables hardware sharing, energy savings, communication in high-frequency radar bands, and improved channel estimation via sensing-assisted communications, among other advantages <cit.>.
ISAC has been mainly considered by means of dual-functional waveforms. For instance, radar signals have been used for communication <cit.>, while communication waveforms have proven to yield radar-like capabilities <cit.>. Furthermore, optimization of waveforms to perform both tasks simultaneously has also been studied <cit.>, where the results depend on the cost function to optimize and the ISAC optimization variables. However, conventional ISAC approaches degrade in performance under model mismatch, i.e., if the underlying reality does not match the assumed mathematical models. In particular at high carrier frequencies, hardware impairments can severely affect the system performance and hardware design becomes very challenging <cit.>. This increases the likelihood of model mismatch in standard approaches, and problems become increasingly difficult to solve analytically if hardware impairments are considered.
DL approaches based on large NN have proven to be useful under model mismatch and for complex optimization problems <cit.>. DL does not require any knowledge of the underlying models, as it is optimized based on training data, which inherently captures the potential impairments of the system.
DL has been investigated in the context of ISAC for a vast range of applications, such as predictive beamforming in vehicular networks <cit.>, waveform design <cit.> and channel estimation <cit.> in IRS-assisted ISAC scenarios, multi-target sensing and communication in THz transmissions <cit.>, or efficient resource management <cit.>.
However, most previous works on DL for ISAC consider single-component optimization, either at the transmitter or receiver. On the other hand, end-to-end learning <cit.> of both the transmitter and receiver has proven to enhance the final performance of radar <cit.> and communication <cit.> systems. End-to-end learning in ISAC was applied by means of an AE architecture in <cit.>, to perform single-target angle estimation and communication symbol estimation, under hardware impairments. This was recently extended to multiple targets in <cit.>, although without considering impairments, where the AE outperformed conventional ESPRIT <cit.> in terms of angle estimation for single- and dual-snapshot transmissions.
Nevertheless, DL approaches often lack interpretability and require large amounts of training data to obtain satisfactory performance.
To overcome the disadvantages of large DL models, MB-ML <cit.> instead parameterizes existing models and algorithms while maintaining their overall computation graph as a blueprint.
This allows training initialization from an already good starting point, requiring less training data to optimize, and typically also offers a better understanding of the learned parameters.
A popular example of MB-ML learning is deep unfolding <cit.>, where iterative algorithms are “unrolled” and interpreted as multi-layer computation graphs.
In the context of sensing, deep unfolding of the fixed-point continuation algorithm with one-sided l_1-norm was applied to angle estimation of multiple targets <cit.>, showing enhanced accuracy with respect to DL and model-based benchmark approaches. In <cit.>, the ISTA was unfolded to perform angle estimation in the presence of array imperfections.
Related to communications, deep unfolding has been applied to massive MIMO channel estimation in <cit.>, where classical steering vector models are used as a starting point and then optimized to learn the system hardware impairments, by unfolding the matching pursuit algorithm <cit.>. This approach was later refined to reduce the required number of learnable parameters in <cit.>.
Previous MB-ML approaches <cit.> exhibit three primary shortcomings that can limit their effectiveness in practical scenarios. Firstly, they focus only on receiver learning; however, end-to-end learning of transmitter and receiver, which holds great potential given its promising performance in model-free DL applications <cit.>, remains unexplored in MB-ML. Secondly, sensing works <cit.> only investigate angle estimation, although range estimation is also required to estimate target locations. Hence, end-to-end MB-ML for multi-target positioning has not been studied before. Finally, while MB-ML has been utilized to address individual challenges related to sensing and communications, its untapped potential to significantly improve system performance in ISAC applications remains undiscovered.
In view of the current literature on DL and MB-ML for ISAC, three questions arise: (i) How can efficient end-to-end MB-ML strategies be developed for multi-target positioning? (ii) What computational and performance benefits can be harnessed by employing MB-ML in ISAC systems compared to large DL models and model-based approaches? (iii) To what extent can ISAC trade-offs be improved under hardware impairments by employing MB-ML strategies compared to large DL models and model-based approaches?
This paper aims to answer the above questions by studying end-to-end MB-ML for ISAC, focusing on the effect of hardware impairments in the ISAC transceiver ULA.
Considering a MIMO monostatic sensing and MISO communication scenario (as depicted in Fig. <ref>), we propose novel end-to-end MB-ML strategies for joint optimization of the ISAC transmitter and sensing receiver, suitable for both single- and multi-target scenarios.
Building upon our preliminary analysis in <cit.>, the main contributions of this work can be summarized as follows:
* Multi-target position estimation via end-to-end learning of OFDM ISAC systems:
For the first time in the literature, we investigate end-to-end learning of OFDM ISAC systems under hardware impairments at the ISAC ULA. To combat these hardware imperfections, we introduce novel learning architectures to simultaneously optimize the ISAC beamformer and sensing receiver. OFDM transmission enables joint angle and range (and, hence, position) estimation of multiple targets, significantly extending the single-carrier models and methods in our previous work <cit.>, and the recent works <cit.>.
* MB-ML via differentiable OMP:
Expanding upon the foundation laid by <cit.>, we propose a differentiable version of the OMP algorithm that is suitable for single- and multi-target sensing.
This new algorithm allows for end-to-end gradient-based optimization, where we consider two different MB-ML parameterization approaches.
The first approach learns a dictionary of steering vectors at each OMP iteration, extending our results in <cit.> to joint range-angle estimation and multiple targets.
The second approach is new compared to <cit.> and directly learns the parameterized ULA impairments at each iteration.
This offers the advantage of drastically reducing the number of parameters to be learned.
* Single- and multi-target performance comparison and ISAC trade-off characterization:
We first consider the single-target case (corresponding to one OMP iteration) and compare different solutions based on the extent of model knowledge: (i) NNBL[Note that the neural-network architectures in <cit.> do not directly apply to the scenario considered here due to the use of OFDM signals.], representing no knowledge of the system model, (ii) the two MB-ML approaches, where model knowledge is utilized, but impairments are learned, and (iii) a strong baseline, which fully relies on the mathematical description of the system model under no hardware impairments.
Our results show that under hardware impairments, the new MB-ML ULA impairment learning outperforms all other approaches in terms of target detection and range-angle estimation, with fewer trainable parameters.
Lastly, we show that impairment learning scales smoothly also to multiple targets, where it achieves better sensing and communication performance than the baseline.
In the rest of this paper, we first describe the mathematical ISAC system model in Sec. <ref>. Then, we describe the two approaches to perform target positioning and communication:
the baseline in Sec. <ref>, and MB-ML in Sec. <ref>. The main ISAC results are presented and discussed in Sec. <ref> before the concluding remarks of Sec. <ref>.
Notation. We denote column vectors as bold-faced lower-case letters, a, and matrices as bold-faced upper-case letters, A. A column vector whose entries are all equal to 1 is denoted as 1. The identity matrix of size N× N is denoted as I_N. The transpose and conjugate transpose operations are denoted by (·)^ and (·)^, respectively. The i-th element of a vector and the (i,j)-th element of a matrix are denoted by [a]_i and []_i,j. The element-wise product between two matrices is denoted by ⊙, while ⊘ denotes element-wise division, and ⊗ denotes the Kronecker product. · denotes matrix vectorization operator. Sets of elements are enclosed by curly brackets and intervals are enclosed by square brackets. The set {x∈|x≥0} is denoted as _≥0. The cardinality of a set 𝒳 is denoted by 𝒳. The uniform distribution is denoted by , and denotes the circularly-symmetric complex distribution. The Euclidean vector norm is represented by ‖·‖_2, while the matrix Frobenius norm is denoted by ‖·‖_F. The indicator function is denoted by 𝕀{·}.
§ SYSTEM MODEL
This section provides the mathematical models for the received sensing and communication signals, the ISAC transmitted signal and the hardware impairments. In Fig. <ref>, a block diagram of the considered ISAC system is depicted.
§.§ Multi-target MIMO Sensing
We consider an ISAC transceiver consisting of an ISAC transmitter and a sensing receiver sharing the same ULA of K antennas, as shown in Fig. <ref>.
The transmitted signal consists of an OFDM waveform across S subcarriers, with an inter-carrier spacing of Hz. In the sensing channel, we consider at most possible targets. Then, the backscattered signal impinging onto the sensing receiver can be expressed over antenna elements and subcarriers as <cit.>
= 1/√(S)ψ_t (θ_t) ^(θ_t) [() ⊙(τ_t)]^ + W,
where ∈KS collects the observations in the spatial-frequency domains, T ∼{0,...,} is the instantaneous number of targets in the scene,
and ψ_t ∼(0,^2) represents the complex channel gain of the t-th target. The steering vector of the ISAC transceiver ULA for an angular direction θ is, under no hardware impairments, [(θ)]_k= exp(- 2 π (k-(K-1)/2) d sin (θ ) / λ), k=0,...,K-1, with d = λ / 2, λ = c/f_c, c is the speed of light in vacuum and f_c is the carrier frequency[In case of different ULAs for transmitting and receiving, different steering vector models should be used in (<ref>).]. The precoder ∈ℂ^K permits to steer the antenna energy into a particular direction. Target ranges are conveyed by (τ_t) ∈ℂ^S, with [(τ_t)]_s = exp(-j2π s τ_t), s=0,...,S-1, and where τ_t = 2R_t/c represents the round-trip time of the t-th target at R_t meters away from the transmitter. Moreover, the communication symbol vector () ∈ℂ^S conveys a vector of messages ∈^S, each uniformly distributed from a set of possible messages . Finally, the receiver noise is represented by W, with [W]_i,j∼(0,N_0). Note that if T=0, only noise is received. From the complex channel gain and the noise, we define the integrated sensing SNR across antenna elements as _r = K^2/N_0.
The angles and ranges of the targets are uniformly distributed within an uncertainty region, i.e., θ_t ∼[, ] and R_t ∼[, ]. However, uncertainty regions might change at each new transmission. The position of each target is computed from target angle θ_t and range R_t as
_t = [ R_tcos(θ_t); R_tsin(θ_t) ].
The transmitter and the sensing receiver are assumed to have knowledge of {, , , }. In the considered monostatic sensing setup, the receiver has access to communication data (), which enables removing its impact on the received signal (<ref>) via reciprocal filtering <cit.>
= ⊘^() = α_t (θ_t) ^(τ_t) + ,
where α_t=1/√(S)^(θ_t) ψ_t and = W⊘^().
The goal of the sensing receiver is to estimate the presence probability of each target in the scene, denoted as û∈ [0,1]^, which is later thresholded to provide a hard estimate of the target presence, t̂∈{0,1}^. For all detected targets, the sensing receiver estimates their angles, θ̂∈ [-π/2, π/2]^, and their ranges, R̂∈_≥ 0^, from which target positions can be estimated according to (<ref>).
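To make the sensing model above concrete, the following NumPy sketch simulates the reciprocal-filtered multi-target observation, assuming the nominal half-wavelength ULA steering vector and the frequency-domain delay vector described in the text. The variable names (steer, delay_vec, Y, the placeholder precoder f) and the explicit subcarrier spacing inside the delay vector are our own choices, since the original symbol macros were lost in extraction; this is a sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

K, S = 64, 256                    # antennas, subcarriers (values from Sec. V)
fc, df = 60e9, 120e3              # carrier frequency, subcarrier spacing
c = 3e8
lam = c / fc

def steer(theta):
    """Nominal ULA steering vector (half-wavelength spacing)."""
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-1j * 2 * np.pi * k * (lam / 2) * np.sin(theta) / lam)

def delay_vec(tau):
    """Frequency-domain delay vector over the S subcarriers."""
    s = np.arange(S)
    return np.exp(-1j * 2 * np.pi * s * df * tau)

T = 3                                             # targets in this realization
thetas = np.deg2rad(rng.uniform(-40, -20, T))     # angles inside the uncertainty sector
ranges = rng.uniform(10, 190, T)                  # ranges in meters
taus = 2 * ranges / c                             # round-trip delays
psi = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)

f = steer(np.deg2rad(-30)) / np.sqrt(K)           # placeholder unit-norm precoder toward the sector
N0 = K**2 / 10**(15 / 10)                         # from the sensing SNR definition K^2 / N0 = 15 dB

Y = np.zeros((K, S), dtype=complex)
for t in range(T):
    # complex amplitude after removing the communication symbols (reciprocal filtering);
    # the noise statistics are unchanged for unit-modulus (QPSK) symbols
    alpha = psi[t] * (steer(thetas[t]) @ f) / np.sqrt(S)
    Y += alpha * np.outer(steer(thetas[t]), delay_vec(taus[t]))
Y += np.sqrt(N0 / 2) * (rng.standard_normal((K, S)) + 1j * rng.standard_normal((K, S)))
```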
§.§ MISO Communication
In the considered ISAC scenario, communication and sensing share the same transmitter. We assume that the communication receiver is equipped with a single antenna element. In this setting, the received OFDM signal at the communication receiver in the frequency domain is given by <cit.>
= [()⊙]^(φ) + ,
with ∈ℂ^S denoting the S-point DFT of the channel taps [β_0, β_1, ..., β_L-1,0,...,0], where each tap is distributed as β_l ∼(0,σ_l^2). Complex Gaussian noise ∼(0,N_0I_S) is added at the receiver side. The average communication SNR per subcarrier is defined as _c = ∑_l=1^Lσ_l^2/(SN_0).
The communication receiver is assumed to be always present at a random position, such that φ∼[, ]. The transmitter has also knowledge of {, }. The receiver is fed with the CSI = ^(φ).
The goal of the receiver is to retrieve the communication messages that were transmitted.
§.§ ISAC Transmitter
ISAC scenarios require the use of a radar-communication beamformer to provide adjustable trade-offs between the two functionalities. Using the multi-beam approach from <cit.>, we design the ISAC
beamformer, based on a sensing precoder _r ∈ℂ^K, and a communication precoder _c∈ℂ^K, as
(η,ϕ) = √(P)√(η)_r + √(1-η)e^ϕ_c/‖√(η)_r + √(1-η)e^ϕ_c ‖ ,
where P is the transmitted power, η∈ [0,1] is the ISAC trade-off parameter, and ϕ∈ [0,2 π) is a phase ensuring coherency between multiple beams.
By sweeping over η and ϕ, we can explore the ISAC trade-offs of the considered system. The sensing precoder _r points to the angular sector of the targets, {, }, whereas the communication precoder _c points to the angular sector of the communication receiver, {, }. In Secs. <ref> and <ref>, we detail how _r and _c are computed for the baseline and MB-ML, respectively. However, the same precoding function is applied for sensing and communication, as represented in Fig. <ref>.
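A minimal sketch of the multi-beam combination described above, assuming unit-norm sensing and communication precoders are already available (e.g., from the beampattern synthesis of the baseline); the function name and the commented sweep are illustrative only.

```python
import numpy as np

def isac_beamformer(f_r, f_c, eta, phi, P=1.0):
    """Combine sensing and communication precoders with trade-off eta in [0,1] and phase phi."""
    f = np.sqrt(eta) * f_r + np.sqrt(1.0 - eta) * np.exp(1j * phi) * f_c
    return np.sqrt(P) * f / np.linalg.norm(f)

# example trade-off sweep, mirroring the one used in the experiments:
# for eta in np.linspace(0, 1, 8):
#     for phi in np.linspace(0, 7 * np.pi / 4, 8):
#         f_isac = isac_beamformer(f_r, f_c, eta, phi)
```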
§.§ Hardware Impairments
We study the effect of hardware impairments in the ULA in the ISAC transceiver, which affect the steering vectors of (<ref>), (<ref>), (<ref>). Impairments in the antenna array include mutual coupling, array gain errors, or antenna displacement errors, among others <cit.>. Following the impairment models of <cit.>, we consider two types of impairments:
* Unstructured impairments: In this case, the true steering vector (θ) is unknown for all angles θ, while the methods for beamforming design and signal processing assume the nominal steering vector (θ). If we consider a grid of possible angles with N_θ points, then the steering vectors require K× N_θ complex values to be described.
* Structured impairments: In this case, the steering vector model is known, conditional on an unknown perturbation vector . We can thus write (θ;), where the meaning and dimensionality of depend on the type of impairment. In contrast to the unstructured impairments, the impairments are often described with a low-dimensional vector, independent of N_θ.
[Impact of structured impairments]
Consider the example of inter-antenna spacing errors, where ∈ℂ^K and [(θ; )]_k = exp(- 2 π (k-(K-1)/2) []_k sin (θ ) / λ), k=0,...,K-1.
In Fig. <ref>, the angle-delay map (defined in Sec. <ref>) is depicted under ideal conditions (top) and hardware impairments (bottom), when T = 4 targets are present. The main effect of hardware impairments is to expand target lobes in the angle domain. In the example shown in Fig. <ref>, two targets become indistinguishable due to impairments, and the appearance of spurious lobes hinders the detection of the target at the highest range. Another effect of hardware impairments is that the magnitude of the target lobes is decreased, which makes them harder to differentiate from noise. These results highlight the relevance of addressing hardware impairments in our sensing scenario.
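The following sketch implements the spacing-error model of the example, drawing per-element spacings around λ/2 with the standard deviation λ/25 used later in the simulations; the drop in correlation with the nominal steering vector gives a rough feel for the distortions visible in the angle-delay map. Variable names are ours.

```python
import numpy as np

K = 64
lam = 3e8 / 60e9
sigma_d = lam / 25                                   # spacing standard deviation from Sec. V

def steer_impaired(theta, d):
    """Steering vector a(theta; d) with per-element spacings d (structured impairment)."""
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-1j * 2 * np.pi * k * d * np.sin(theta) / lam)

rng = np.random.default_rng(1)
d_true = lam / 2 + sigma_d * rng.standard_normal(K)  # one realization of the impaired spacings

a_nom = steer_impaired(np.deg2rad(20), np.full(K, lam / 2))
a_imp = steer_impaired(np.deg2rad(20), d_true)
mismatch = np.abs(np.vdot(a_nom, a_imp)) / K         # 1.0 would mean no model mismatch
print(f"normalized correlation with the nominal model: {mismatch:.3f}")
```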
§ BASELINE
In this section, we derive the baseline method according to model-based benchmarks, which will later be compared with end-to-end learning approaches in Sec. <ref>.
§.§ ISAC Beamformer
We design the baseline for the precoding mapping in Fig. <ref>, which affects both the sensing precoder _r, and the communication precoder _c in (<ref>), by resorting to the beampattern synthesis approach in <cit.>.
We define a uniform angular grid covering [-π/2, π/2] with grid locations {θ_i}_i=1^. For a given angular interval (i.e., = [, ] for communications, and = [, ] for sensing),
we denote by ∈1 the desired beampattern over the defined angular grid, given by
a vector whose i-th entry equals K if θ_i lies inside the given angular interval, and 0 otherwise.
The problem of beampattern synthesis can then be formulated as
min__bs‖ - ^_bs‖_2^2, where = [(θ_1) … (θ_)] ∈K denotes the transmit steering matrix evaluated at the grid locations. This least-squares (LS) problem
has a simple closed-form solution
_bs = (^^)^-1^,
which yields, after normalization according to the transmit power constraints, a communication-optimal beam _c or a radar-optimal beam _r, which can then be used to compute the joint ISAC beam in (<ref>).
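A sketch of the least-squares beampattern synthesis: build the steering matrix on the angular grid, set the desired pattern to K inside the sector and 0 outside, and solve the normal equations. The Hermitian-transpose convention and the variable names are our assumptions, since the extracted formula lost its symbols.

```python
import numpy as np

K, N_theta = 64, 720
grid = np.linspace(-np.pi / 2, np.pi / 2, N_theta)

def steer(theta):
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-1j * np.pi * k * np.sin(theta))         # half-wavelength spacing

A = np.stack([steer(t) for t in grid], axis=1)              # K x N_theta transmit steering matrix
lo, hi = np.deg2rad(-40), np.deg2rad(-20)                   # example angular sector
b = np.where((grid >= lo) & (grid <= hi), float(K), 0.0)    # desired beampattern on the grid

# minimize || b - A^H f ||_2^2  ->  normal equations (A A^H) f = A b
f_bs = np.linalg.solve(A @ A.conj().T, A @ b)
f_bs = f_bs / np.linalg.norm(f_bs)                          # normalize to the transmit power constraint
```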
§.§ Multi-target Sensing Receiver
We propose to formulate the multi-target sensing problem based on the received signal in (<ref>) as a sparse signal recovery problem <cit.> and employ the OMP algorithm <cit.> to solve it, which represents our model-based benchmark.
To construct an overcomplete dictionary for OMP, we specify an angular grid {θ_i}_i=1^ and a delay grid {τ_j}_j=1^ depending on the region of interest for target detection (i.e., the a priori information {, , , }). Then, a spatial-domain and a frequency-domain dictionary covering angular and delay grids can be constructed as
_a = [ (θ_1) ⋯ (θ_) ] ∈K ,
_d = [ (τ_1) ⋯ (τ_) ] ∈S .
Using (<ref>), the problem of multi-target sensing based on the observation in (<ref>) becomes a sparse recovery problem
= ∑_i=1^∑_j=1^ []_i,j [_a]_:, i ([_d]_:, j)^ + ,
where ∈. Here, the goal is to estimate the T-sparse vector ∈1 under the assumption T ≪. The baseline OMP algorithm <cit.> to solve this problem is summarized in Algorithm <ref>, which will serve as a foundation to the proposed MB-ML approaches in Sec. <ref>.
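The generic OMP sketch below is consistent with the sparse-recovery formulation above; it is not a line-by-line reproduction of the paper's Algorithm 1 (which is not reprinted here), and the Hermitian conventions and the joint least-squares gain re-estimation step are our assumptions.

```python
import numpy as np

def omp_angle_delay(Y, Da, Dd, max_targets, threshold):
    """Greedy OMP over separable angle (Da: K x N_theta) and delay (Dd: S x N_tau) dictionaries."""
    K, S = Y.shape
    R = Y.copy()                                            # residual
    atoms, detections = [], []
    for _ in range(max_targets):
        corr = np.abs(Da.conj().T @ R @ Dd)                 # angle-delay map (N_theta x N_tau)
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[i, j] < threshold:                          # stopping rule: peak below threshold
            break
        detections.append((i, j))
        atoms.append(np.outer(Da[:, i], Dd[:, j].conj()))   # rank-one atom for target (theta_i, tau_j)
        # re-estimate all gains jointly by least squares, then update the residual
        Phi = np.stack([a.ravel() for a in atoms], axis=1)  # (K*S) x n_detected
        g, *_ = np.linalg.lstsq(Phi, Y.ravel(), rcond=None)
        R = (Y.ravel() - Phi @ g).reshape(K, S)
    return detections
```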
§.§ Communication Receiver
We assume that the communication receiver has access to the CSI = ^(φ). Hence, the received signal can be expressed as = ⊙() +. Optimal decoding in this case corresponds to subcarrier-wise maximum likelihood estimation according to
_s = arg min_m_s ∈[]_s - []_s x(m_s)^2,
for s=0,...,S-1. Since communication decoding is already optimal, given the CSI, learning methods described in Sec. <ref> apply (<ref>) for communication message estimation.
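A sketch of the subcarrier-wise maximum-likelihood detector with perfect CSI, written here for a QPSK constellation as used in the experiments; the particular phase mapping of the constellation is an assumption.

```python
import numpy as np

def ml_detect(y, h, constellation):
    """Return, per subcarrier, the index of the constellation point minimizing |y_s - h_s x(m)|^2."""
    dist = np.abs(y[:, None] - h[:, None] * constellation[None, :]) ** 2
    return np.argmin(dist, axis=1)

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK alphabet
```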
§ MODEL-BASED LEARNING
MB-ML is inspired by the baseline of Sec. <ref>, although we need to develop differentiable beamforming and estimation algorithms that permit end-to-end learning, as well as a suitable loss function for multiple targets. This section describes the two MB-ML methods developed for multi-target sensing: (i) dictionary learning, which learns a dictionary of steering vectors for different angles as in <cit.>, and is suitable for unstructured impairments, as defined in Sec. <ref>; (ii) impairment learning, which directly learns a parameterization of the hardware impairments and thus is suitable for structured impairments, also defined in Sec. <ref>. This section also defines the loss function to train them.
§.§ Beamformer
MB-ML follows the same operations (<ref>) and (<ref>) to compute the precoding vector _r or _c, given an angular interval . Dictionary learning considers ∈ℂ^K× N_θ from (<ref>) as a free learnable parameter to account for unstructured impairments, which is comprised of KN_θ complex parameters.
The newly proposed impairment learning instead considers as a free learnable parameter the vector ∈ℂ^K, which represents a parameterization of the structured hardware impairments. From , the dictionary of steering vectors is computed as () = [(θ_1;) … (θ_;)], such that () is used in (<ref>) instead of . Impairment learning reduces the number of learnable parameters by taking into account the structured hardware impairments of Sec. <ref>. Indeed, it has only K complex parameters, which can be several orders of magnitude fewer than in the dictionary learning approach, since the dictionary of steering vectors needs a relatively large number of columns N_θ to perform well. Note that the operation in (<ref>), which involves the learning parameters of both MB-ML methods, is already differentiable.
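The difference between the two parameterizations can be sketched in PyTorch as follows: impairment learning keeps only a K-dimensional spacing vector as trainable parameters and rebuilds the steering dictionary from it, whereas dictionary learning treats the whole K × N_θ complex dictionary as free parameters. Class and variable names are ours, and the snippet is only a sketch of the idea, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

K, N_theta = 64, 720
lam = 3e8 / 60e9
theta_grid = torch.linspace(-math.pi / 2, math.pi / 2, N_theta)

class ImpairmentDictionary(nn.Module):
    """Impairment learning: only the K inter-element spacings d are trainable."""
    def __init__(self):
        super().__init__()
        self.d = nn.Parameter(torch.full((K,), lam / 2))      # initialized at the nominal spacing

    def forward(self):
        k = torch.arange(K) - (K - 1) / 2
        phase = -2 * math.pi / lam * (k * self.d)[:, None] * torch.sin(theta_grid)[None, :]
        return torch.polar(torch.ones_like(phase), phase)      # K x N_theta dictionary A(d)

# Dictionary learning instead optimizes the full complex dictionary directly:
free_dictionary = nn.Parameter(torch.randn(K, N_theta, dtype=torch.cfloat))
```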
§.§ Sensing Receiver
Range-angle estimation of targets is based on Algorithm <ref>.
However, the max operation in line <ref> of Algorithm <ref> is not differentiable, so no loss-function gradient could be backpropagated through it in MB-ML.
To circumvent this issue, we develop a differentiable algorithm which is represented in Fig. <ref>. The difference with the conventional OMP in Algorithm <ref> is that we replace the operations of lines <ref>-<ref> by the following steps:
* max_i,j: We still perform this nondifferentiable operation as an intermediate step toward the final estimate. Note that is based on an angular grid ={θ_i}_i=1^ and a delay grid ={τ_j}_j=1^. In line <ref> of Algorithm <ref>, this calculation yields the estimated angle-delay pair, which serves as the foundation for the following step of the differentiable OMP algorithm.
* Mask the angle-delay map, , based on angle and range resolution: in order to consider elements of that solely correspond to a single target, we select the elements around the maximum of the angle-delay map that are within the angle and range resolution. This operation also helps to obtain a differentiable angle-delay estimation, similar to line <ref> in Algorithm <ref>.
We create the mask based on the angle and range resolution, since it determines the minimum angle or range for which two targets are indistinguishable.
In our case, the angle resolution is approximately 2/K and the range resolution is approximately c/(2B), with B the bandwidth of the transmitted signal (the number of subcarriers S times the subcarrier spacing). The resolutions are expressed in numbers of pixels of the angle-delay map, which depend on the sizes of the angular and delay grids.
* Softmax: We apply a softmax operation to the masked matrix from the previous operation, so that the sum of its elements is equal to 1. Unlike line <ref> in Algorithm <ref>, the softmax function is differentiable, enabling end-to-end learning.
* Weighted sum: A weighted sum of and is implemented, where each weight corresponds to the output of the previous softmax operation, and they represent an estimate of the probability that a certain angle-delay pair is the true value. From this interpolation operation, an angle-delay pair (θ̂_I, τ̂_I) is obtained, which may not be included in or . From this computation, the angle-delay pairs are updated, as in line <ref> in Algorithm <ref>. Note that these four first steps (center column of Fig. <ref>), amount to looking in the dictionary for the most correlated atoms with the input, and then estimating the angle-delay pair as a convex combination of the corresponding angle-delays on the grid. This kind of similarity-based learning has been applied to other tasks within MIMO systems <cit.>, and is reminiscent of the attention mechanism <cit.>.
* Compute estimated spatial-domain and frequency-domain vectors (θ̂_I), (τ̂_I): unlike line <ref> in Algorithm <ref>, we recompute the spatial-domain and frequency-domain vectors based on the estimated angle-delay pair of the previous step, since the estimated angle-delay pair (_I, _I) might not be contained in (, ). The sets _a and _d are updated with the new vectors, as represented in Fig. <ref>.
After the previous steps, differentiable OMP continues as lines <ref>-<ref> in Algorithm <ref> to obtain the new residual ^(I+1), as depicted in Fig. <ref>. This differentiable OMP algorithm still involves looking over a grid of possible angles. We utilize as the dictionary of angles _a the same matrices and () from the beamformer of Sec. <ref> to compute , which allows parameter sharing between the co-located transmitter and receiver. The gradient of the loss function does not flow through the max operation, as illustrated in Fig. <ref>. To further improve memory efficiency, gradient flow is also discarded when computing the new residual ^(I+1) from the estimates (_I, _I).
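The core of the differentiable peak extraction can be summarized by the soft-argmax sketch below: a hard argmax locates the window, a softmax over the masked window produces weights, and the weighted sums of grid angles and delays give the refined off-grid estimate. In training, the same forward pass would be written in an automatic-differentiation framework (e.g., PyTorch) so that gradients flow through the softmax and weighted sum but not through the argmax; the temperature parameter is our addition for illustration.

```python
import numpy as np

def soft_peak(corr, theta_grid, tau_grid, win_theta, win_tau, temp=1.0):
    """Differentiable refinement of the strongest angle-delay peak.

    corr: N_theta x N_tau angle-delay map (magnitudes); win_*: half-window sizes in pixels,
    chosen according to the angle and range resolutions.
    """
    i, j = np.unravel_index(np.argmax(corr), corr.shape)      # hard peak (no gradient needed here)
    i0, i1 = max(i - win_theta, 0), min(i + win_theta + 1, corr.shape[0])
    j0, j1 = max(j - win_tau, 0), min(j + win_tau + 1, corr.shape[1])
    patch = corr[i0:i1, j0:j1] / temp
    w = np.exp(patch - patch.max())
    w /= w.sum()                                              # softmax over the masked window
    theta_hat = np.sum(w * theta_grid[i0:i1, None])           # convex combination of grid angles
    tau_hat = np.sum(w * tau_grid[None, j0:j1])               # ... and of grid delays
    return theta_hat, tau_hat
```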
§.§ Loss Function
As loss function for MB-ML multi-target sensing, we select the GOSPA loss from <cit.>. In our case, the GOSPA loss is defined as follows. Let γ>0, 0<μ≤2 and 1≤ p < ∞. Let = {_1, ..., _||} and = {_1,...,_||} be the finite subsets of ℝ^2 corresponding to the true and estimated target positions, respectively, with 0≤||≤, 0≤||≤. Let d(, ) = ‖ - ‖_2 be the distance between true and estimated positions, and (, ) = min(d(, ),γ) be the cut-off distance. Let Π_n be the set of all permutations of {1,...,n} for any n ∈ℕ and any element π∈Π_n be a sequence (π(1),...,π(n)). For || ≤ ||, the GOSPA loss function is defined as
d_p^(γ,μ)(, ) =
( min_π∈Π_||∑_i=1^||(_i, _π(i))^p + γ^p/μ (-) )^1/p.
If > , d_p^(γ,μ)(, ) = d_p^(γ,μ)(, ). The parameter p is proportional to the penalization of outliers, and the value of γ dictates the maximum allowable distance error. The role of μ, together with γ, is to control the detection penalization. This loss function becomes suitable for multiple targets, since it considers the association between estimated and true positions that gives the minimum loss, tackling the data association problem of multiple targets. In terms of target detection, we follow the same principle as the baseline, i.e., we stop the OMP algorithm when the maximum of the angle-delay map drops below a threshold. Sweeping this threshold over different values yields a trade-off in terms of detection and false alarm rates.
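A compact GOSPA implementation consistent with the definition above; the minimum over permutations is computed with the Hungarian algorithm (SciPy's linear_sum_assignment), which is a standard way to evaluate this metric. Default parameter values follow those used later in the experiments, and the function name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gospa(P_true, P_est, gamma=90.0, p=2, mu=2.0):
    """GOSPA between two sets of 2-D positions given as (n, 2) arrays."""
    P_true = np.asarray(P_true, dtype=float).reshape(-1, 2)
    P_est = np.asarray(P_est, dtype=float).reshape(-1, 2)
    if len(P_true) > len(P_est):                 # the definition swaps arguments in this case
        P_true, P_est = P_est, P_true
    n, m = len(P_true), len(P_est)
    if n == 0:
        return (gamma**p / mu * m) ** (1 / p)
    D = np.linalg.norm(P_true[:, None, :] - P_est[None, :, :], axis=-1)
    D = np.minimum(D, gamma) ** p                # cut-off distance raised to the p-th power
    rows, cols = linear_sum_assignment(D)        # optimal assignment = min over permutations
    return (D[rows, cols].sum() + gamma**p / mu * (m - n)) ** (1 / p)
```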
§ RESULTS
This section details the simulation parameters and the results for single- and multi-target ISAC.[Source code to reproduce all numerical results in this paper will be made available at <https://github.com/josemateosramos/MBE2EMTISAC> after the peer-review process.]
Four methods will be evaluated and compared:
* The model-based baseline from Sec. <ref>, working under the mismatched assumption of no hardware impairments.
* A NNBL method, extending <cit.>, which replaces the precoding and sensing estimation mappings in Fig. <ref> by NN, and can operate in the absence of any knowledge of the ISAC system (including the hardware impairments). More details can be found in Appendix <ref>.
* Dictionary learning from Sec. <ref>, where the unstructured impaired steering vectors (θ) are learned for both precoding and sensing.
* Impairment learning from Sec. <ref>, where the structured impairment vector d is learned for precoding and sensing.
§.§ Simulation Parameters
We consider a ULA of K=64 antennas, S=256 subcarriers, and a subcarrier spacing of 120 kHz. We set the maximum number of targets in the scene as = 5. The transmitted power is P=1 and the carrier frequency is f_c = 60 GHz. The sensing SNR across antenna elements was set to _r = K^2/N_0 = 15 dB, and the average communication SNR per subcarrier was fixed to _c = ∑_l=1^Lσ_c,l^2/(SN_0) = 20 dB. The number of channel taps in the communication channel is L=5, with an exponential power delay profile, i.e., σ_l^2 = exp(-l), l=0,...,L-1. The power delay profile is later normalized to obtain the desired average SNR. The number of grid points for angle and range is set as = 720 and =200.
To train the learning methods for a wide range of angles, we randomly draw {, } as in <cit.>, i.e.,
we draw a realization of ∼[-60, 60] and Δ∼[10, 20], for each new transmission. The target angular sector is computed as = - Δ/2, = + Δ/2. The communication angular sector and the range uncertainty region are set as {, } = {30, 50}, {, } = {10, 190} m, for all transmissions.
For hardware impairments, we consider the model of <cit.>, i.e., we assume structured hardware impairments where the antenna elements in the ULA array are spaced as ∼((λ/2) 1, ^2I_K). We select a standard deviation of = λ/25 = 0.2 mm. MB-ML is initialized with the same knowledge as the baseline, i.e., the steering vector models firstly assume that d=(λ/2) 1.
In the GOSPA loss, we set μ=2, as recommended in <cit.>, p=2, and γ = (-)/2=90 m. The cardinality mismatch term in (<ref>) implies the use of a threshold during training. However, our goal is to train the learning methods independently of the threshold, and then explore the sensing performance by varying the threshold. Hence, the actual number of targets T is assumed to be known during training, which means that || = || = T, and the GOSPA loss during training becomes
d_p^(γ,μ)(, ) = (min_π∈Π_||∑_i=1^||(_i, _π(i))^p)^1/p.
However, there is no detection penalization term in (<ref>), which implies that the detection probability estimation NN of NNBL cannot be optimized. Hence, we adopt a two-step training approach for NNBL, as follows:
* We first train and based on the simplified GOSPA loss of (<ref>).
* While freezing the parameters ξ, we then train and by minimizing
d_u^(γ_u,μ)(, ) = (min_π∈Π_||∑_i=1^|| d^(γ_u)(u_i, û_π(i))^p)^1/p,
where = {u_1, ..., u_||} and = {û_1, ..., û_||} are the true and estimated sets of target probabilities, d^(γ_u)(u_i, û_π(i)) = min(d(u_i, û_π(i)),γ_u), and d(u_i, û_π(i)) = -u_ilog(û_π(i)) - (1-u_i)log(1-û_π(i)). That is, we replace the position distance error in (<ref>) with the BCE loss. Note that in (<ref>) we also assume that ||=||=T.
The previous two-step training approach was observed to yield better performance, compared to joint training of all NN parameters ε, ξ, ζ based on the sum of the losses (<ref>) and (<ref>).
Network optimization is performed using the Adam optimizer <cit.>, with a batch size of B=3000 and 100,000 training iterations. The learning rate of dictionary and impairment learning was set to 5·10^-3 and 10^-7, respectively. In the two-step training approach for NNBL, 100,000 training iterations are applied to each step. Position estimation training used a learning rate of 10^-2, while target detection used a learning rate of 10^-3. The architecture of NNBL is described in Appendix <ref>. NNBL also benefited from a scheduler that reduces the learning rate when the loss function reaches a plateau. Details of the scheduler parameters can be found in Appendix <ref>.
§.§ Performance Metrics
Concerning testing, we compute as detection performance metrics a measure of the probability of misdetection and the probability of false alarm, for multiple targets. We use the same definitions as in <cit.>, which correspond to
= 1-∑_i=1^Bmin{T_i, _i}/∑_i=1^B T_i,
= ∑_i=1^B max{T_i, _i} - T_i/∑_i=1^B - T_i,
where
T_i, _i are the true and estimated numbers of targets in each batch sample, respectively. The regression performance is measured via the GOSPA (for multi-target sensing) and the RMSE (for single-target sensing).
As communication performance metric, we use the average SER across subcarriers, computed as
SER = 1/BS∑_i=1^B ∑_j=1^S 𝕀{[_i]_j ≠ [_i]_j},
with _i and _i the true and estimated message vectors at the i-th batch sample. All described methods in this paper (baseline of Sec. <ref>, MB-ML of Sec. <ref>, and NNBL) use a QPSK encoder, and the message estimation rule in (<ref>).
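The three evaluation metrics can be computed as below; the maximum number of targets in the false-alarm denominator corresponds to the stripped symbol in the equation above (we take 5, the value used in the simulations), and the function and variable names are our own.

```python
import numpy as np

def detection_metrics(T_true, T_est, T_max=5):
    """Batch-level misdetection and false-alarm rates as defined above."""
    T_true, T_est = np.asarray(T_true), np.asarray(T_est)
    p_md = 1.0 - np.minimum(T_true, T_est).sum() / T_true.sum()
    p_fa = (np.maximum(T_true, T_est) - T_true).sum() / (T_max - T_true).sum()
    return p_md, p_fa

def symbol_error_rate(M_true, M_est):
    """Average SER over subcarriers and batch samples (B x S arrays of message indices)."""
    return np.mean(np.asarray(M_true) != np.asarray(M_est))
```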
§.§ Single-target ISAC
In single-target ISAC, the maximum number of targets is =1, which implies that the GOSPA loss function in (<ref>) becomes (, ).
However, in order to compare with our previous work <cit.>, we train MB-ML and position estimation of NNBL using the MSE loss d(, )^p = - _2^2, and detection estimation of NNBL using the BCE loss, d(u, ) = -ulog() - (1-u)log(1-). Position estimation is assessed by the angle RMSE, √([(θ-θ̂)^2]), and the range RMSE, √([(R-R̂)^2]).
ISAC performance results are represented in Fig. <ref>, where
we sweep over [0,1] and [0,7π/4], taking 8 uniformly spaced values, to set η and ϕ in (<ref>), respectively. For testing, we fixed {, } = {-40, -20}[Unless otherwise stated, the authors also tested other values of {, }, and the results were qualitatively the same.]. The probability of false alarm was set to = 10^-2.
Results show that, under no complexity limitations (solid lines) and hardware impairments, learning methods outperform the baseline in terms of misdetection probability, angle and range estimation, and SER, which implies that the learning methods have adapted to the hardware impairments. Communication performance, even in the case of optimal symbol estimation, is enhanced by the learning approaches, which suggests that the impairments have a significant impact on the optimal communication precoder. In addition, dictionary learning outperforms NNBL for range estimation, although the converse holds for misdetection probability. Impairment learning yields the best performance among all learning methods, and with fewer parameters, which usually implies less training time. Indeed, NNBL comprises a total of 7.78 million real learnable parameters, while dictionary learning uses 40,080 complex parameters, and impairment learning consists of only K=64 complex parameters.
Under limited complexity, the number of parameters of dictionary learning and NNBL are restricted. We follow the approach of <cit.>, and restrict the number of (complex) parameters of dictionary learning by setting = 156, which reduces the number of parameters to 9,984 complex parameters. The complexity constraints applied to NNBL-learning are detailed in Appendix <ref>, which decreases the number of real parameters to 10,555. From Fig. <ref>, it is observed that while NNBL drops in performance, especially for angle and range estimation, dictionary learning still yields better results than the baseline. However, dictionary learning also decreased in performance compared to the unconstrained approach, which means that dictionary learning cannot achieve the same performance as impairment learning for the same number of parameters.
Lastly, we test all learning approaches for a scenario that was not encountered during training, to assess their generalization capabilities. Fig. <ref> depicts the performance of the learning methods for {, } = {-20, 20}, which spans an angular uncertainty region wider than those encountered during training. The complexity of the networks is not restricted. The performance of all learning approaches drops compared to Fig. <ref>. However, while NNBL performs worse than the baseline, and dictionary learning yields similar results to the baseline, impairment learning is the only approach that still outperforms the baseline. NNBL and dictionary learning appear to overfit to the training data and degrade for unexpected inputs. This means that for new testing scenarios, impairment learning is the learning approach that best generalizes in terms of performance. This is due to the fact that impairment learning is the only method for which parameters are shared between all directions (all columns of the dictionary are affected each time the parameters are updated). Dictionary learning does not exhibit this feature, since each column of the dictionary (corresponding to a direction) is considered an independent set of parameters.
§.§ Multi-target ISAC
Based on the results of Sec. <ref>, impairment learning performs the best among all considered learning methods for the simpler case of single-target ISAC.
Hence, we only consider impairment learning to compare against the baseline for multi-target sensing. The batch size for MB-ML is decreased to B=1500 due to memory restrictions. The number of iterations was also reduced to 25,000, since finding the association between estimated and true data that minimizes the GOSPA loss of (<ref>) increases training time. In addition, the resulting ISAC performance is already very close to that obtained with perfect knowledge of the impairments, as observed in the following.
We first compare the performance of the differentiable OMP algorithm of Sec. <ref> with the baseline, when hardware impairments are perfectly known. In Fig. <ref>, the sensing performance of both approaches is depicted. Results show that differentiable OMP performs close to the baseline. The difference in performance might arise because the dictionary _a in the baseline only covers the angular range {, }, while differentiable OMP uses a fixed dictionary that covers [-π/2, π/2]. However, this allows for efficient parameter sharing in MB-ML. Differentiable OMP takes a weighted sum of angles and ranges, which permits selecting an angle or range outside the predefined dictionaries, unlike the baseline.
The GOSPA loss in Fig. <ref> achieves a minimum for different false alarm probabilities, since it takes into account both position and detection errors. For high , OMP estimates a higher number of targets than the true value, and conversely for low .
Fig. <ref> shows the results of the baseline without impairment knowledge, differentiable OMP with perfect impairment knowledge, and impairment learning. Impairment learning outperforms the baseline, which illustrates the adaptability of impairment learning to antenna imperfections in multi-target sensing. Moreover, the performance is very close to perfect knowledge of the impairments, which suggests that the learned spacing is quite similar to the underlying reality.
Fig. <ref> presents the ISAC trade-offs in the case of multiple targets when = 10^-2. In this case, we sweep η in (<ref>) and fix ϕ = 0, since in Figs. <ref> and <ref> we observed that the effect of ϕ is not very significant. Compared to Fig. <ref>, it is observed that impairment learning also outperforms the baseline (which lacks impairment knowledge) in terms of communication performance, due to the impact of hardware impairments on the communication precoder.
§ CONCLUSIONS
In this work, we studied the effect of antenna spacing impairments in multi-target ISAC, and different learning approaches to compensate for such impairments. A new, efficient MB-ML approach to perform end-to-end learning and impairment compensation was proposed, based on a differentiable OMP algorithm. Simulation results showed that learning approaches outperform the baseline and can compensate for hardware impairments. Among the learning methods, the newly proposed impairment learning approach outperformed all other considered methods, while also exhibiting better generalization to new testing data with far fewer parameters to optimize. The simulation results verify that injecting system and impairment knowledge into learning methods improves their performance and reduces their complexity.
§ NNBL
Since the optimal detection and estimation rules might not be tractable, NNBL can be trained on data to approach optimal performance. Moreover, when no information about the impairments is available, NNBL can provide data-driven solutions to account for them. This appendix describes the principles and architecture of the considered NNBL approach.
§.§ Principles
NNBL replaces the precoding and sensing estimation mappings in Fig. <ref> by NN. The precoding network, :^2→^2K, takes as input and produces a precoder as output, where ε corresponds to the learnable parameters. The NN in this work operate on real-valued numbers; hence, the output dimension is doubled. The same mapping is applied to both sensing and communication precoders, to obtain _r and _c, which are later used to design the ISAC precoder according to (<ref>).
Sensing estimation is divided into two tasks, each corresponding to a different NN: (i) detection probability estimation, and (ii) position estimation. As input to both NN, we use ∈^× defined in Sec. <ref>, instead of , since we observed a better sensing performance.
In addition to the angle-delay map, the input also includes the a priori information {, , , }, as shown in Fig. <ref>, to improve network performance.
The output of each NN is task-dependent. The detection probability network, : ^××^4→ [0,1]^, outputs a probability vector û whose elements correspond to the probability that each target is present in the scene, which is later thresholded to provide an estimate of the number of targets. The position estimation network, : ^××^4→^×2, outputs a matrix P̂ whose columns represent the position estimation of each potential target. The learnable parameters of each network are ζ and ξ, respectively. Both NN are trained based on the GOSPA loss function of Sec. <ref>.
§.§ NN Architectures
The precoding operation of Fig. <ref> was implemented as an MLP, whose input is an angular sector ({, } or {, }), with 3 hidden layers of 8K neurons and an output layer of 2K neurons, where we recall that K=64 is the number of antennas in the ULA transceiver. The activation function after each layer is the ReLU function, except for the final layer, which is followed by a normalization layer to ensure a unit-norm output, i.e., ‖_bs‖_2=1.
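A PyTorch sketch of the described precoder MLP (three hidden layers of 8K units, ReLU activations, a 2K-dimensional output re-interpreted as real and imaginary parts, and unit-norm normalization). The stacking order of the real/imaginary parts and the use of radians for the sector limits are our assumptions.

```python
import torch
import torch.nn as nn

K = 64

class PrecoderMLP(nn.Module):
    """Maps an angular sector (2 values) to a unit-norm complex precoder of length K."""
    def __init__(self, hidden=8 * K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * K),                  # real and imaginary parts stacked
        )

    def forward(self, sector):                          # sector: (batch, 2) sector limits in radians
        out = self.net(sector)
        f = torch.complex(out[..., :K], out[..., K:])
        return f / torch.linalg.norm(f, dim=-1, keepdim=True)   # enforce ||f||_2 = 1
```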
For the receiver side, we resort to CNN given the 2-dimensional nature of the input , as represented in Fig. <ref>. The receiver architecture repeats a set of layers, represented in Fig. <ref>, which we call a residual bottleneck block. This block was inspired by the ResNet architecture <cit.>. A convolutional layer is first introduced with a stride larger than one to decrease the number of pixels to process. Then, 2 bottleneck blocks with skip connections similar to <cit.> follow. However, we reduce the number of activation functions and normalization layers, as suggested in <cit.>. Another residual connection is introduced from the beginning to the end of both bottleneck blocks to help with gradient computation.
We observed that splitting position estimation into angle and range estimation, each of them involving a CNN, yielded better results than using a single network. Angle and range estimates are later combined into a position vector following (<ref>). The common architecture for all CNN (detection, angle and range estimation) is shown in Table <ref>. Convolutional layers introduce zero-padding so that the number of pixels is preserved. After the first and last convolutional layers, a 2-dimensional batch normalization and a ReLU activation function are also applied. The resulting feature map of the CNN has / 2^12 elements. For NNBL, = 320 and = 128 due to memory constraints. The resulting feature map from the convolutional layers, together with the a priori information {, , , } of the target locations, are processed by MLP. The angle estimation network only uses {, }, the range estimation network {, }, and the detection network utilizes both of them. The architecture of each MLP is described in Table <ref>. The activation function after each fully-connected layer is the ReLU function. Unless stated otherwise, all NN architectures were optimized to give the best ISAC performance, where we explored, for instance, kernel sizes up to 13x13, the number of residual bottleneck blocks from 3 to 7, or the number of layers of the MLP of Table <ref>, from K to 64K, among others.
When training NNBL, a scheduler is used to reduce the learning rate if the loss function plateaus. The patience of the scheduler was set to 10^4 iterations. If the loss function was deemed to have plateaued, the learning rate was halved, with a minimum attainable learning rate of 10^-6.
When complexity limitations are considered, in the transmitter network the number of neurons in each hidden layer was reduced to 4. At the receiver side, the kernel size of the Maxpool layer is increased to 4x4, the number of residual bottleneck blocks is changed from 6 to 3, the number of channels in the network is reduced by a factor of 4, and the number of neurons in the hidden layer of the last MLP are constrained to 4.
|
http://arxiv.org/abs/2307.05545v2 | 20230708232436 | Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives | [
"Zhongliang Jiang",
"Septimiu E. Salcudean",
"Nassir Navab"
] | cs.RO | [
"cs.RO"
] |
Zhongliang Jiang^1 (corresponding author: Technische Universität München, Fakultät für Informatik – I16, Boltzmannstr. 3, 85748 Garching bei München; [email protected])
Septimiu E. Salcudean^2
Nassir Navab^1,3
^1 Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
^2 Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
^3 Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. The Robotic US System (RUSS) aims to overcome this shortcoming by offering reproducibility, while also aiming at improved dexterity and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on reviewing recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence represent the key techniques that enable intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition.
We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and actions. Here, we call this process the recovery of the “language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself.
This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying underlying techniques.
Additionally, we present the challenges that the scientific community needs to face in the coming years in order to achieve its ultimate goal of developing intelligent robotic sonographer colleagues. These colleagues are expected to be capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Ultrasound imaging, robotic ultrasound, telesonography, medical robotics, orientation optimization, path planning, visual servoing, compliant control, robotic US, robot learning, reinforcement learning, learning from demonstrations
§ INTRODUCTION
Today, medical imaging is one of the most crucial components of the entire healthcare industry, from wellness and screening to early diagnosis, treatment selection, and follow-up <cit.>.
Compared to the other three most common medical imaging modalities used in the current clinical practice [i.e., Radiography (X-ray), Computerized tomography (CT), and Magnetic resonance imaging (MRI)], Ultrasound (US) imaging has the advantage of being noninvasive, low-cost, portable, and free of ionizing radiation <cit.>.
These merits make it particularly suitable for some clinical needs, such as image-guided interventions <cit.> and obstetric applications <cit.>.
In October 2021, 0.79 million US examinations were performed in England, whereas there were 0.52 million CT scans and 0.31 million MRI scans <cit.>.
However, traditional free-hand US examinations require substantial experience and visuo-tactile skills to achieve high-quality US images <cit.>. These factors limit the utilization of US in clinical applications requiring reliable biometric measurements or repeatable images for monitoring lesions. To obtain high-quality images, sonographers need to hold the probe with proper pressure and adjust the probe orientation to find optimal acoustic windows. To overcome intra- and inter-operator variations, the robotic US system (RUSS) has been gaining attention for two decades.
To illustrate the increasing interest in RUSS, the number of related peer-reviewed publications per year and the cumulative number over the years are depicted in Fig. <ref>. For individual years, the number of publications has grown from 1,020 in 2001 to 15,500 in 2022.
The cumulative number of publications increased exponentially, reaching 125,110 over the period from 2001 to 2022.
This dramatic rise in interest can be attributed to three distinct communities: engineers, clinicians, and entrepreneurs <cit.>. Clinicians' need for high-quality images and for efficient, easy-to-use RUSS stimulates the development of RUSS by engineers. Due to the considerable economic benefits, entrepreneurs are motivated to develop prototypes and bring them to market [https://www.adechotech.com/],[https://en.mgi-tech.com/],[https://www.bkmedical.com/].
To assist in combating global pandemics (e.g., COVID-19 and Ebola), the demand for intelligent systems and robotics has grown substantially in the fields of disease prevention, screening, diagnosis, treatment, home care, etc. <cit.>.
RUSS has been investigated to remotely or autonomously perform US tests for early detection and diagnosis <cit.>.
Deploying RUSS in hospitals enables the separation of patients and sonographers, hence lowering the risks of virus transmission between patients and medical staff.
This paper is motivated by the desire to assist both robotic US technicians and clinicians. For roboticists, we provide a comprehensive summary of enabling technologies (i.e., compliant force control and path planning) that are commonly needed for a variety of applications. In addition to the enabling technologies, the advanced solutions developed by integrating additional techniques (e.g., surface registration, visual servoing, and image segmentation) are summarized to demonstrate the potential of RUSS for addressing real-world challenges (e.g., tissue motion and deformation). Using these techniques, clinicians and technicians can further consider how RUSS can assist them in addressing particular clinical needs by sensibly integrating the different techniques together. This will help to bridge the gap between medical and technology research.
Prior to this survey, there were some reviews that summarized the development of RUSS <cit.>. Recently, Salcudean et al. discussed the roles robotics play in the acquisition of medical images, including US, endoscopy, X-ray, optical coherence tomography, and nuclear medicine <cit.>.
Specific to RUSS, Von Haxthausen et al. provided a systematic summary of recent publications between 2016 and 2020 <cit.>. Li et al. focused on the development of autonomous RUSS <cit.>. These two surveys categorize literature based on the level of automation; in contrast, this article emphasizes the connection between the potential clinical applications and enabling techniques. In addition, some novel concepts of application-oriented techniques (e.g., motion-aware <cit.> and deformation-aware <cit.> characteristics) have not been discussed before. However, they are important to further pave the way for applying RUSS in real scenarios. Due to the fast development of artificial intelligence (AI), learning-based RUSS is emerging to automatically perform specific US examinations <cit.>.
Li et al. also noted this trend and mentioned the AI-based RUSS as one of the future directions <cit.>.
Nevertheless, learning-based RUSS solutions have not been systematically discussed yet.
Therefore, a comprehensive survey article covering these new trends of RUSS will be helpful for roboticists to quickly and systematically learn the key knowledge of RUSS, as well as for clinicians to comprehend how the robot benefits their specific clinical needs.
Regarding the future development of RUSS, we discuss some open challenges and promising perspectives to inspire the research community and other stakeholders.
§ MATERIALS AND METHODS
§.§ Searching Policy
In order to provide an objective view of the development of robotic US imaging over the last two decades, we carried out an extensive search of RUSS on the Web of Science and Google Scholar. The search terms were “(remote OR teleoperat*) AND (ultrasound OR US OR ultrasonography OR echography)" and “robot* AND (ultrasound OR US OR ultrasonography OR echography) AND (Imaging OR screening OR scan* OR acquisition* OR servoing)". To narrow the results to the most relevant and most impactful articles, the titles and abstracts were carefully reviewed to exclude the articles that were (a) not focusing on the medical domain, (b) not using robotic imaging adjustment or optimization, or (c) not employing traditional 2D/3D probes. This excludes papers using endocavitary probes <cit.> for cardiology and prostate applications. Finally, among similar articles, the most representative ones (the newest or most cited) were selected.
§.§ Technological Developments in RUSS
Skilled sonographers are often in short supply, particularly in rural areas.
To allow accurate adjustment of US acquisition parameters and address the unbalanced distribution of healthcare resources across nations and regions, teleoperated RUSS solutions have been developed over the past two decades (see Section <ref>). For such systems, the operations are fully carried out by experts via teleoperation techniques; thereby, remote experts take the responsibility of robotic acquisition.
To improve the level of autonomy of RUSS, quite a large number of RUSS solutions have been proposed for different applications in the past decades. To review the key characteristics of autonomous RUSS, we first summarize the existing articles in terms of enabling technologies, namely the three key acquisition parameters: contact force (Section <ref>), probe orientation (Section <ref>), and scan path (Section <ref>). By precisely controlling these parameters, the accuracy and reproducibility of US imaging can be improved <cit.>.
In addition, more advanced techniques need to be developed to tackle additional practical complications occurring in clinical routines, e.g., patient movement and probe pressure-induced deformation.
In this article, we feature four advanced techniques: 1) motion-aware US imaging (Section <ref>), 2) deformation-aware US imaging (Section <ref>), 3) US visual servoing (Section <ref>), and 4) elastography imaging (Section <ref>).
Sonographers often need to search for standard examination planes for biometric measurement and diagnosis. It is a time-consuming and non-repeatable process, even for experienced sonographers, due to the noisy US images and tissue motion. Benefiting from the development of artificial intelligence, and in particular deep learning, the area of medical image processing has achieved phenomenal success <cit.>. Learning-based image processing techniques lead to an accurate and robust understanding of US images, which further enables training RUSS to learn both manipulation skills and clinical knowledge directly from human sonographers. We summarize the most recent developments in learning-powered RUSS (Section <ref>), aiming to automatically search for specific anatomy or navigate a probe to visualize standard US planes. Finally, we discuss the open challenges and provide a few potential directions for future developments in Section <ref>. The important components of robotic US and the organizational structure of this article are depicted in Fig. <ref>.
By incorporating additional techniques to fundamental enabling technologies, the level of technical complexity is increased from Section <ref> to Section <ref>. In this way, we would like to highlight our strategy to inspire the community to achieve the ultimate goal of developing an intelligent robotic sonographer that can collaborate with human sonographers to improve diagnostic and intraoperative imaging in real scenarios.
§ TELEOPERATION IN RUSS
Teleoperation allows operators to remotely carry out certain tasks. Due to the development of networks, multimedia, and communication technologies in the past decades, teleoperation has become one of the most mature techniques for reforming modern medical procedures <cit.>. The main characteristic of teleoperation is that the robot's motion is controlled by operators. This is important for obtaining regulatory approval. The most successful representative is da Vinci from Intuitive Surgical, which has become the clinical standard of minimally invasive surgery for a wide range of surgical procedures <cit.>.
Regarding teleoperated RUSS, it has been seen as a solution for work-related musculoskeletal disorders of sonographers <cit.>. In addition, separating operators from patients reduces the risk of transmitting pandemics (e.g., Covid-19) <cit.>.
This section summarizes the technical and clinical contributions of remote RUSS, respectively.
§.§ Technical Developments
Teleoperated RUSS often consists of three individual components: 1) an expert console, 2) a patient-side manipulator (PSM) used to maneuver a US probe, and 3) a software control system mapping the movement made by experts to the PSM. The teleoperated RUSS allows sonographers to manually, unconstrainedly, and safely control the probe motion onto the patient via the PSM.
Teleoperated systems are also utilized on-site because robotic systems can overcome human limits in manipulation and perception by adding dexterity and precision. A common example is da Vinci, which is often employed on-site <cit.>.
§.§.§ Robotic Mechanism
In 1999, Salcudean et al. designed a six degree of freedom (DOF) lightweight mechanism with limited force capability for teleoperated RUSS <cit.>. Due to the need for a large orientation workspace, a parallelogram linkage was employed to decouple the orientation and translation in their final design, achieving the control resolution of 0.1 mm for translation and 0.09^∘ for rotation. Similarly, Lessard et al. designed the PSM in parallel structure in order to have enough workspace <cit.>.
Masuda et al. designed a 6-DOF mechanism consisting of gimbals, pantograph and slide mechanisms, which weighed 3.3 kg <cit.>.
To guarantee the safety of patients, there are four sensors symmetrically deployed around the probe to monitor real-time force.
In addition, a number of soft mechanisms were developed for force-sensitive applications, e.g., obstetric examinations, to strictly limit the maximum US probe pressure. Vilchis et al. proposed a cable-driven nonrigid remote robot <cit.>. This system has been used on 100 patients with abdominal aortic aneurysm (AAA) at a distance of 1125 km. Tsumura et al. designed a passive mechanism using springs for fetal examinations, which can prevent excessive contact force <cit.>. Besides, a portable and attachable robotic system has been designed by Ito et al. <cit.> [see Fig. <ref> (e)]. In the same direction, Vieyres et al. proposed a 4-DOF light mechanism with 3-DOF rotation and 1-DOF translation along the probe centerline <cit.>. Then, they updated the design of the portable RUSS to allow all 6-DOF motions using a serial mechanism <cit.>. The portable RUSS is easily used by paramedics, which makes it ideal for use in emergency medical circumstances. Nevertheless, owing to the need for a compact structure, portable RUSS typically have a restricted workspace.
Since mechanical design is beyond the scope of this survey's primary focus on imaging acquisition, we refer readers to two comprehensive review articles with mechanical designs for RUSS <cit.>.
To reduce the cost of RUSS, commercial robotic manipulators, e.g., Universal Robot (Universal Robots, Denmark) and Franka Emika Panda (Franka Emika GmbH, Germany), are often used as the PSM <cit.> [see Fig. <ref> (b) and (c)].
It is noteworthy that another typical standard robotic arm KUKA LBR iiwa (KUKA Robotics GmbH, Germany), with integrated joint torque sensors, is also commonly employed as a PSM <cit.>.
HIPPOCRATE is a representative of teleoperated RUSS developed using a serial industrial robotic arm <cit.>.
§.§.§ Shared Autonomy in Teleoperated RUSS
To fully take advantage of the stability and accuracy of robotic techniques, Abolmaesumi et al. proposed a shared autonomy strategy between an expert and an image servo <cit.>. The in-plane three DOFs were controlled by visual servoing to automatically center the carotid artery in cross-sectional images, while the other three DOFs were teleoperated by an expert. In this case, the image servo can provide pixel-by-pixel control accuracy and further mitigate the negative influence of human tremor. To keep the tissue of interest always visible in the image and give more flexibility to the expert, Li et al. and Krupa et al. shared all four (in-plane and out-of-plane) DOFs of a lightweight body-mounted mechanism between the visual servoing algorithm and a human operator via teleoperation <cit.>. The visual servoing technique has also been widely used in autonomous RUSS to estimate and compensate for the motion of internal organs <cit.>, visualize and track the object of interest <cit.>, and improve the image quality by optimizing the acoustic windows <cit.>, etc. Please refer to Section <ref> for more details.
§.§.§ User Interface
Masuda et al. employed two joysticks to remotely and independently control the three-dimensional rotation and translation of the PSM <cit.>.
Yet, this manner differs from how experts conduct conventional US examinations. To enhance the intuitiveness of the interaction, a dummy probe is frequently utilized to intuitively control PSM from the expert console <cit.>. A gyroscope was installed within the dummy probe so that it could track the motion of the expert <cit.>. To improve the accuracy of the motion estimation, some mature techniques, such as optical and electromagnetic tracking can be utilized.
As the use of a dummy probe allows experts to conduct US examinations as usual, RUSS can reduce training time and increase examination efficiency.
However, the lack of force feedback on the expert side may hinder clinical acceptance.
To tackle this problem, Martinelli et al. employed a haptic control system that rendered contact force in three dimensions <cit.>.
Conti et al. employed a commercial 6-DOF haptic device (Omega 6) to reflect the contact force in six dimensions <cit.> [see Fig. <ref> (a)].
Recently, Naceri et al. directly deployed two 7-DOF Franka Emika Panda <cit.>, one of which was used at expert console with force feedback, and the other one used at patient side to precisely reproduce the movements of the experts.
Benefiting from the development of virtual reality (VR) techniques, a VR simulator was designed as a new type of interface for teleoperated RUSS <cit.> [see Fig. <ref> (f)]. Compared to traditional joysticks or other haptic devices, an immersive experience can be achieved using VR simulators, which could intuitively visualize the remote scenes in 3D. The initial evaluation of a VR simulator has been performed by 12 experienced sonographers and the results suggest that the immersive simulator could be used for teleoperated RUSS <cit.>.
A deeper discussion about human-robotic interaction studies will be beyond the focus of this paper. To inspire further research incorporating novel human-machine interfaces to improve the efficiency, intuitiveness, and robustness of teleoperated RUSS, we refer readers to two comprehensive surveys on interface approaches <cit.>. Specific to medical applications, Abdelaal et al. provided a crucial review of interfaces that have been used or tested in vivo <cit.>.
§.§ Clinical Feasibility Evaluation
Teleoperated RUSS can fully utilize the advanced knowledge of experts. Compared to autonomous RUSS, teleoperated RUSS is easier to certify for clinical use due to the fact that all diagnostic decisions and scan trajectories are made by experts. To achieve this objective, clinical studies have been performed using different teleoperated RUSS for a number of examinations. Clinical evaluations of existing teleoperated RUSS solutions are categorized according to their clinical applications in TABLE <ref>.
§.§.§ Abdominal Imaging
The abdomen is often examined using US images, which is one of the primary focuses of teleoperated RUSS. To validate the feasibility and diagnostic accuracy of such systems, Arbeille et al. evaluated a preliminary version of a teleoperated RUSS for general abdominal imaging on 20 patients <cit.>. The expert was in a room at some distance (20-50 km) from the patient's site. The time delay between experts and the PSM
was less than 0.1 s using ISDN (terrestrial) telephone lines and less than 0.5 s using satellite links. To evaluate the performance, the authors validated their approach on four different groups of organs. The results demonstrated that the expert could image the main views (longitudinal and transverse) of the liver, gallbladder, kidneys, aorta, pancreas, bladder, and uterus on the patient. Only the heart and spleen were not identified in two and four of the 20 cases, respectively. The experiments also showed that sonographers can master the teleoperated RUSS in less than 3 hours, while the examination time (27±7 min for three or four organs) was approximately 50% longer than the traditional US examination.
In a follow-up study, Arbeille et al. further compared the performance of robotized and conventional US examinations on 87 patients examined in the emergency department at the Tours University in France <cit.>. The results demonstrated that each organ (e.g., liver, gallbladder, pancreas, kidney) can be correctly imaged by a robotized system in 91–100% of cases compared with the conventional US examinations. In addition, the mean visualization score for the teleoperated RUSS was 87.4% for the abdomen, while there were no false diagnoses made in this study <cit.>. In another clinical evaluation, Adams et al. also assessed the feasibility of performing adult abdominal US examinations using a remote RUSS on 18 patients at the University of Saskatchewan, Canada <cit.>. Telerobotic examinations were successful in 92% of the examinations on various abdominal organs (given the organs were sufficiently visualized on the conventional examination);
five pathological findings were identified on both modalities, while three and two additional findings were identified only by the conventional and telerobotic systems, respectively. Furthermore, they reported that all participating patients were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>.
Martinelli et al. carried out a study on 58 patients with a focus on the aorta <cit.>. The examination results demonstrated that all aneurysm cases were correctly detected by both conventional scans and the teleoperated RUSS. Furthermore, the quantitative results show that the diameter of the patient's aorta can be accurately measured. The interobserver correlation coefficient was 0.98 and the difference in measurement was less than 4 mm in 96.3% cases. In addition, the examination duration (mean±SD) of the teleoperated system and traditional examinations are 17±8 min and 12±7 min, respectively. Finally, they also reported that the acceptability of patients was 84±18%, which is similar to the result in <cit.>.
§.§.§ Cardiovascular Imaging
Compared with general abdominal organs, cardiac examinations are considered more technically demanding procedures. Regarding echocardiography, the clinical needs include the visualization and evaluation of the four cardiac chambers, measurements of aortic flow, and the identification of mitral, tricuspid, or aortic valve leaks or aortic stenosis <cit.>. To successfully perform tele-echocardiography, the probe was held by a 3-DOF robotic arm providing three orthogonal rotations, and then, the robotic arm was fixed to a motorized plate for obtaining translational movements <cit.>. The results on 41 cardiac patients demonstrated that similar measurements can be achieved in most cases (93–100%).
Among the 71 patients with valve leaks or aortic stenosis, 61 (86%) were successfully detected using tele-echocardiography, and no false-positive diagnoses were reported.
Boman et al. also carried out a similar study on cardiovascular examination in Sweden <cit.>. The evaluations were carried out in three different stages. In stage 1, there were 27 patients in a different place than sonographers with a distance of 80 km. Regarding the other two stages, a total of 31 subjects were recruited in a place at 135 km from the experts. The results indicate that real-time echocardiographic examinations are possible <cit.>. Boman et al. compared the tele-echocardiography examination with the standard of care referral approach in terms of time and diagnosis <cit.>. 19 patients were randomized to remote consultation and imaging, and 19 to the standard of care consultation. The results demonstrated that the processing time was significantly reduced in the remote one (only 26.5 days vs 114 days for the standard one). Therefore, compared with the standard of care approach, patients were more satisfied with the remote consultation strategy, which offered an increased rapidity of diagnosis and the likelihood of receiving faster patient management <cit.>.
In 2007, Sekar et al. evaluated tele-echocardiography examination in the diagnosis of congenital heart diseases in pediatric populations <cit.>. In this 3-year study, 102 pediatric telecardiology examinations were performed between a tertiary care cardiac center and a remote rural hospital located 193 km away. Pathology was ruled out in 50 children by tele-echocardiography. In addition, heart lesions were identified in 52 children and 30 among them required surgery. By using teleoperation techniques, the total cost for such remote care can be controlled under 90 USD, which becomes considerable for most developing areas <cit.>. Sengupta et al. further validate the feasibility of long-distance (trans-Atlantic) telerobotic US scans for vascular examinations <cit.>. The results showed that the procedure to localize the remote probe along the short axis of the carotid artery took less than 60 s and an examination could successfully be conducted in 4 min. Avgousti et al. employed 4G wireless networks in order to reduce the time delay for live tele-echography <cit.>. However, it is also important to note that the communication stability and potential signal interference may lead to uncertainty.
§.§.§ Obstetric Imaging
Obstetric imaging is also one of the most frequent applications of US examination in clinical practice. From the beginning phase to the birth of infants, more than five fetal examinations are carried out and such examinations are important to evaluate the health of both fetuses and pregnant women <cit.>. To assess the feasibility of teleoperating fetal US examinations in pregnant women, Arbeille et al. carried out a study on 29 pregnant women in an isolated hospital 1700 km away using both conventional and teleoperation examinations <cit.>. The results demonstrated that the biometric parameters, placental location, and amniotic fluid volume can be correctly measured in most cases (93.1%) using a teleoperated RUSS. Only in two cases, femur length could not be correctly measured. The mean duration of US examination of the remote examinations (18 min) was longer than that of conventional examinations (14 min).
Another study with a similar objective was presented by Adams et al. on 30 patients in Canada <cit.>. In this study, the results indicated that there was no statistically significant difference between teleoperated RUSS and conventional measurements of head circumference, biparietal diameter, or single deepest vertical pocket of amniotic fluid; however, there were slight differences in the measures of abdominal circumference and femur length. Besides, 80% of the fetal structures could be sufficiently acquired by the telerobotic system (range, 57%–100% for each patient). Finally, a survey of participants showed that 92% of patients were willing to have another telerobotic examination in the future. The aforementioned studies demonstrated the feasibility of using teleoperation to remotely carry out fetal US examinations while achieving biometric measurements as precise as those of the conventional approach.
§.§.§ General Applications
Georgescu et al. reported the usability of a teleoperation system for general applications over one year <cit.>. In total 300 patients were involved: 138 supra-aortic vessels, 68 abdomen, 33 thyroid, 30 lower limb vein, 20 pelvis, 7 kidneys, 3 small parts, and 1 obstetrics. The reported average duration of a teleoperation examination was 24±5 min over all 300 examinations. In addition, the results showed that the use of teleoperation in the general medicine practice significantly reduced the waiting time (save several days) for patients, and similar information as conventional US examinations was achieved. It also contributed to saving costs for the healthcare system and facilitating earlier treatment of conditions, potentially leading to improved patient outcomes and less time in care facilities <cit.>. Most recently, a teleoperated RUSS was tested on 22 Covid-19 patients, and they concluded that teleoperated RUSS can be used to diagnose common abdominal, vascular, and superficial organ pathologies with acceptable accuracy <cit.>.
§ ENABLING TECHNOLOGIES FOR AUTONOMOUS RUSS
Recently, interest in autonomous RUSS has increased relatively to teleoperated RUSS. Autonomous RUSS has the potential to achieve standardized and reproducible US acquisitions. RUSS solutions further release sonographers from burdensome manipulation tasks and allow them to focus on diagnosis, requiring deep anatomical and physiological knowledge.
The move of the research community toward autonomous RUSS has also raised novel scientific questions, which define important and exciting challenges. To develop autonomous RUSS, we first need to understand how human sonographers perform US scans. In this paper, we call this process the recovery of the “language of sonography". The community has not investigated this explicitly, but the path can be traced through an analysis of the state of the art.
The adjustment of contact force, probe position and orientation for optimal image acquisition has often been the first focus. Then, it is also crucial to plan an appropriate path for covering the area of interest and to compensate for the potential motion and deformation of the target anatomy during imaging.
These points will be discussed explicitly in the following sections in more detail when we review some of the most relevant states of the art.
In this section, three fundamental techniques used in RUSS are elaborated: 1) compliant control used to apply and maintain a given contact force between US probe and patients, 2) orientation optimization to determine the appropriate probe orientation for a given scan (often orthogonal to the contacted surface) and 3) path planning to best localize and visualize the anatomy of interest.
§.§ Force Control Approaches
Due to the inherent characteristics of US imaging, a certain contact force between a US probe and human tissues is required to optimize acoustic coupling, thereby achieving high-quality US images. It is challenging for human operators to maintain a constant force during US scans. The varying force will result in non-homogeneously deformed US images. Thus, a dedicated force controller is needed to maintain the contact force during scans. Furthermore, such a controller is also crucial for guaranteeing the safety of patients by preventing excessive force.
Depending on the target tissue, the acceptable contact force is less than approximately 20 N <cit.>. Meanwhile, a small force (less than 1.2 N) is commonly considered insufficient to ensure complete contact with the skin <cit.>.
It is noteworthy that this subsection only summarizes the force control approaches (both software- and hardware-based) that have been used for developing RUSS. For a more general and comprehensive summary of force control, we refer readers to <cit.>.
§.§.§ Hybrid Force/Position Controller
The traditional hybrid force/position control approaches are implemented in two decoupled subspaces, applying a position control law and a force control law, respectively <cit.>. Both the force and position differences between current and desired values are fed into the robotic dynamic model to update the manipulator's motion. To apply a constant contact force between a probe and subjects, Gilbertson et al. implemented a hybrid position/force controller for a 1-DOF hand-held RUSS <cit.>. In this study, they simplified the contact model as two interfaces (human-machine and probe-patient) using a set of masses, springs, and dampers. Thereby, the contact force can be dynamically related to the probe position and velocity by selecting proper interface parameters.
A similar hybrid position/force method based on an external 6-DOF force/torque (F/T) sensor was designed for 6-DOF RUSS <cit.>. Their approaches can automatically switch between velocity and force control modes according to the contact condition (free or contact space).
External hybrid force/position control is also often used in RUSS. The external controller first updates the position based on the force error; then, the positional error is handled by an internal position servo.
Pierrot et al. used a PI controller to maintain the contact force and a PID controller to continually run the joint position servo loop for a 7-DOF robotic US system <cit.>. Similarly, Ma et al. used a PID controller to actively compute the variation of the Cartesian position based on the force error, and then used a position controller (provided by the manufacturer) in the inner loop <cit.>. To limit the negative effect caused by potential force measurement errors, a low-pass filter and a moving filter were used to smooth the measured force. The authors claimed that the implementation of such an external force controller is simpler and can be adapted to any kind of robot <cit.>.
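For illustration, the external control scheme described above can be sketched in a few lines of Python: a PID acting on the (filtered) force error produces a small Cartesian offset along the probe axis, which is then handed to the robot's inner position servo. The class, the gains, and the robot/sensor handles in the commented usage are illustrative assumptions, not the implementation of the cited works.

```python
import numpy as np

class ExternalForceController:
    """Minimal external force loop: a PID on the force error along the probe
    axis produces a Cartesian position offset that is sent to the robot's
    low-level position servo (illustrative sketch only)."""

    def __init__(self, kp=5e-4, ki=1e-4, kd=5e-5, dt=0.01, window=10):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.history = []          # moving-average filter for the raw force
        self.window = window

    def step(self, f_measured, f_desired):
        """Return a position offset [m] along the probe axis
        (positive pushes the probe toward the skin)."""
        # smooth the measured force to limit the effect of sensor noise
        self.history = (self.history + [f_measured])[-self.window:]
        f_filtered = float(np.mean(self.history))

        error = f_desired - f_filtered
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# usage with hypothetical robot / sensor handles:
# ctrl = ExternalForceController()
# dz = ctrl.step(f_measured=ft_sensor.read_z(), f_desired=5.0)
# robot.move_relative([0.0, 0.0, dz])   # executed by the inner position servo
```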
§.§.§ Compliant Controller
Regarding the hybrid force/position controller, a position controller is employed either in a sub-space for the traditional ones or in the low-level servoing loop for the external ones. Since the environment is unknown in real scenarios, the position control may result in excessive force while moving to the computed positions. To ensure the safety of patients, two compliant control methods (impedance control and admittance control) are often used. The dynamic model of a compliant controller is described by Eq. (<ref>) <cit.>.
F + F_ext = K_m e + Dė + Më
where F is the applied force/torque in Cartesian space, e = (x_d - x_c) is the Cartesian position and orientation error between the current pose x_c and the target pose x_d, F_ext is the desired force/torque, K_m, D and M are the stiffness, damping and inertia matrices, respectively.
Based on Eq. (<ref>), compliant performance can be achieved in all directions by specifying different K_m and D, which enables safe/soft interactions between RUSS and patients. Regarding Eq. (<ref>), there are two different interpretations, which refer to impedance control and admittance control, respectively. For the former, the pose error is seen as feedback and the computed force and torque are applied to achieve the expected force F_ext. On the other hand, for an admittance controller, the force applied at the end-effector F is measured as input, while the output is the Cartesian movement. Since admittance control only requires the measurement of external force/torque, it is often used for low-cost robots without accurate joint torque sensors, e.g., Universal Robots <cit.>. In contrast, impedance control is more often used when robotic manipulators are equipped with accurate joint torque sensors, e.g., KUKA LBR iiwa <cit.> and Franka Emika Panda <cit.>.
When the stiffness of the environment diminishes, the performance of impedance control will decrease due to friction and unmodeled dynamics, while the performance of admittance control will increase <cit.>. Therefore, admittance control could achieve better performance on soft tissues, while impedance control could be more suitable for stiff tissues.
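As a concrete illustration of the admittance interpretation of Eq. (<ref>), the discrete-time update below integrates the pose-error acceleration implied by the measured wrench. The signs follow the convention of Eq. (<ref>), and the function name and example gains are illustrative rather than taken from any cited controller.

```python
import numpy as np

def admittance_step(f_applied, f_desired, e, e_dot, M, D, K_m, dt):
    """One discrete admittance update following the compliance model above,
    F + F_ext = K_m e + D de/dt + M d2e/dt2, with e = x_d - x_c.
    All force/pose quantities are numpy arrays of shape (6,); dt is the
    control period.  The returned pose error is combined with the target pose
    to obtain the commanded (compliant) pose.  Illustrative sketch only."""
    e_ddot = (f_applied + f_desired - D * e_dot - K_m * e) / M
    e_dot = e_dot + e_ddot * dt
    e = e + e_dot * dt
    return e, e_dot

# Example of anisotropic gains: softer along the probe axis (z) than laterally,
# so the probe stays gentle against the skin while remaining stiff in-plane.
M = np.array([2.0, 2.0, 2.0, 0.2, 0.2, 0.2])
D = np.array([80.0, 80.0, 40.0, 5.0, 5.0, 5.0])
K_m = np.array([400.0, 400.0, 125.0, 20.0, 20.0, 20.0])
```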
§.§.§ Spring-based Mechanism
Since some clinical applications, e.g., fetal examination, are highly sensitive to the applied force during US examinations, Tsumura et al. proposed a spring-based mechanism to maintain the contact force and passively adjust the probe pose with respect to the constrained surface <cit.>. Compared to the aforementioned sensor-based controllers, the passive mechanism can apply a constant force quickly and safely, especially in unstructured environments. Wang et al. proposed a spring-loaded ball clutch to limit the maximum contact force <cit.>. In normal cases, the detent structure is in its engaged position with the ball restricted by a preloaded compressed spring. Once excessive force occurs, the ball comes out of the detent hole. Thus, the clutch joint can no longer transmit torque <cit.>. In these ways, the maximum contact force of such mechanisms can be mechanically limited to 10 N <cit.> and 21.98±0.96 N <cit.>, respectively. Yet, this approach cannot precisely and dynamically control the contact force.
To address this challenge, Housden et al. extended their work <cit.> by integrating a customized multi-axis F/T sensor to allow active adjustment of contact force <cit.>. The designed F/T sensor consists of two pieces with eight legs in total and the displacements of the legs were measured with eight optoelectronic sensors.
By using the measured force as feedback, this system can actively adjust the contact force toward the desired values <cit.>. Bao et al. designed a parallel, motor-spring-based end-effector to actively generate a certain force for US scanning <cit.>. The force is adjusted by changing the position of two sliders connecting a moving platform using springs. The symmetrical configuration restricted the contact force consistent with the probe's centerline.
§.§.§ Others
Huang et al. attached two thin force sensors (IMS-Y-Z03, I-Motion Inc., China) on both sides of the front face of a linear probe <cit.>. Then, a simple rule was implemented to control the applied force: the probe moves downward 3.1 mm when the force is smaller than 1 N, the probe moves upward 3.1 mm when the force is larger than 8 N, and scans were only performed when both sensor measurements were in the range of [1, 8 N]. Their team extended this work by replacing a 3-DOF linear stage with a 6-DOF robotic arm <cit.>. A robotic arm enables in-plane rotation; thereby, an updated rule was used to maintain a constant force: the probe moves downward 0.2 mm when both forces are smaller than the desired force, the probe moves upward 0.2 mm when both forces are larger than the desired one, and the probe rotates 0.2^∘ (in-plane) when the two forces are different. Compared with other force adjustment approaches, this method is easy to implement, but the handcrafted rule needs further improvement to adapt to inter-patient variations.
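The handcrafted rule above can be restated compactly as follows; the step sizes match those reported for the 6-DOF arm, while the function itself, the desired-force argument, and the return convention are illustrative assumptions.

```python
def force_rule_step(f_left, f_right, f_desired, step_mm=0.2, step_deg=0.2):
    """Compact restatement of the handcrafted rule described above.
    f_left / f_right are the readings of the two thin force sensors on the
    probe tip [N]; returns (dz_mm, din_plane_deg), where negative dz moves
    the probe downward (toward the skin).  Illustrative sketch only."""
    if f_left < f_desired and f_right < f_desired:
        return -step_mm, 0.0                     # press slightly harder
    if f_left > f_desired and f_right > f_desired:
        return +step_mm, 0.0                     # release pressure slightly
    if f_left != f_right:
        # unbalanced contact: small in-plane rotation toward the weaker side
        return 0.0, step_deg if f_left < f_right else -step_deg
    return 0.0, 0.0                              # balanced contact, no change
```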
§.§ Probe Orientation Optimization
The relative probe orientation with respect to the restricted surface is also a key factor dominating the image quality. For some applications like US imaging of bone, US probe orientation is often optimized to be orthogonal to the constraint surface <cit.>.
In certain applications, such as image-guided interventions, the US probe may need to be tilted away from the orthogonal direction in order to better visualize the targets and/or inserted instruments <cit.>. In this section, the articles discussing probe orientation adjustment are summarized in three subcategories: in-plane orientation, out-of-plane orientation, and full orientation optimization.
§.§.§ In-Plane Optimization
The in-plane orientation of a 2D probe represents the rotation around the short axis of the probe (see Fig. <ref>). In other words, in-plane motion only happens in the plane of US view.
In <cit.>, the in-plane rotation was optimized using the visual servoing technique to improve the general image quality. To quantitatively assess the image's quality and further use it as the input signal for servoing control, the US confidence map <cit.> was computed for individual images. The US confidence map provides a pixel-wise measure of signal loss based on a simplified model of wave propagation in tissues.
The computed confidence map is often used as a metric of image quality <cit.>. However, it is worth noting that quality here refers only to the strength of the US signal. The best US images according to the confidence map may not be the best images expected by clinicians during examinations.
To obtain the US images leading to higher overall confidence values, the probe's orientation was often optimized to the orthogonal direction of the surface <cit.>.
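To make the idea concrete, one simple way of turning a confidence map into an in-plane control signal is to compare the confidence-weighted lateral barycenter with the image center and tilt the probe toward the poorly coupled side. The sketch below illustrates this principle only; it is not the control law used in the cited works, and the gain and sign convention are assumptions.

```python
import numpy as np

def in_plane_correction(confidence_map, gain_deg=5.0):
    """Derive a small in-plane rotation command from a per-pixel confidence
    map (H x W array with values in [0, 1]).  A lateral imbalance of the
    confidence-weighted barycenter suggests poorer acoustic coupling on one
    side of the probe, so the probe is tilted in-plane toward that side.
    Gain and sign convention are illustrative."""
    h, w = confidence_map.shape
    col_weights = confidence_map.sum(axis=0)                   # per-column confidence
    barycenter = (np.arange(w) * col_weights).sum() / (col_weights.sum() + 1e-9)
    lateral_offset = (barycenter - (w - 1) / 2.0) / w          # normalised offset
    return -gain_deg * lateral_offset                          # in-plane angle step [deg]
```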
In addition, Jiang et al. and Welleweerd et al. also employed US confidence map-based in-plane adjustments to improve sub-optimal contact conditions for limb arm and breast scans <cit.>, respectively.
Huang et al. adjusted in-plane orientation to balance the contact forces measured at two endpoints on the probe tip <cit.>.
Zettinig et al. proposed a 3D-to-3D volume registration to adapt the movement of target anatomy; then they further optimized the in-plane orientation to align the current needle guideline with the planned path on a preoperative CT or MR <cit.>.
§.§.§ Out-of-Plane Optimization
The out-of-plane motion is defined as the rotation around the probe's axial direction (see Fig. <ref>).
In <cit.>, the authors claimed that in-plane adjustment benefits axial aortic scans only marginally; therefore, they optimized the out-of-plane rotation to improve the imaging quality in terms of overall US confidence values <cit.>. A fixed rotation angle interval was applied step by step. However, it is uncommon for existing articles to optimize only the out-of-plane orientation.
§.§.§ Full Orientation Optimization
To estimate the normal direction of a constrained surface, depth camera-based approaches are most often used in the existing literature <cit.>. The advantage of these approaches is high computational efficiency, while the main limitation is relatively low accuracy of the estimations.
Recently, Ma et al. designed a probe holder with four laser distance sensors to actively adjust the probe's orientation to be normal to the surface <cit.>. The results demonstrated their adjustment can be computed in real-time.
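For the camera-based variants above, the surface normal at the intended contact point is typically estimated from a local neighborhood of the acquired point cloud; a common and simple way of doing so is a least-squares plane fit via PCA, sketched below. The neighborhood radius and the orientation convention are assumptions for illustration.

```python
import numpy as np

def estimate_surface_normal(points, contact_point, radius=0.02):
    """Estimate the skin-surface normal at `contact_point` from a depth-camera
    point cloud (N x 3 array, metres) by PCA over the points within `radius`.
    The singular vector associated with the smallest singular value of the
    centred neighbourhood is the plane normal; it is flipped to point toward
    the camera (+z here, an assumption about the setup)."""
    mask = np.linalg.norm(points - contact_point, axis=1) < radius
    neighbours = points[mask]
    centred = neighbours - neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    if normal[2] < 0:                    # enforce a consistent orientation
        normal = -normal
    return normal / np.linalg.norm(normal)
```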
In addition, Jiang et al. proposed a method to identify the normal direction of the restricted surface using contact force for out-of-plane optimization and US images for in-plane optimization <cit.> (see Fig. <ref>). The bone boundary was used to demonstrate the probe orientation's impact on the imaging quality. In this study, Jiang et al. proposed a feature called the smooth derivative of contact force, which enabled the accurate estimation of the out-of-plane orientation without the requirement for an expensive external F/T sensor <cit.>. To further improve the accuracy of the estimated normal direction, Jiang et al. deduced the underlying mechanical model based on the force measured during two orthogonal fan motions at a given contact point <cit.>. The upgraded method works for both convex and linear probes, and due to its purely force-based nature, it is invariant to image noise. Yet, due to nonnegligible deformations of soft tissue (e.g., breast), the force-based approaches are more suitable for orthopedic applications (e.g., limbs and back).
Besides, a number of studies optimized the probe's full orientation solely using US images. Welleweerd et al. proposed a framework for automatic breast scanning without requiring patient-specific models <cit.>. To achieve this, in-plane optimization was firstly carried out to ensure acoustic coupling between the probe and the examined breast. Once the mean confidence value <cit.> of the resulting image is inside the given range, the probe will be moved tangentially to the breast. If the current mean confidence value differs from the specified range, out-of-plane corrections will be carried out to maintain constant confidence.
The mean error between the estimated normal directions and ground truth at all points of trajectory was 12.6^∘ out-of-plane and 4.3^∘ in-plane <cit.>. Chatelain et al. extended their preliminary work <cit.> from in-plane control of a 2D probe to full-orientation control of a 3D wobbler probe using the confidence map <cit.>. Recently, Osburg et al. used Convolutional Neural Network (CNN) to compute the surface normal at the point of contact based on native 3D volumetric data <cit.>.
Instead of identifying the normal direction of constraint surfaces, Jiang et al. estimated the normal direction of a subcutaneous tubular structure directly based on the segmented vessels of the most recent images <cit.>. The vascular boundaries obtained at different positions contain the local geometrical information (radius and centerline) of the blood vessel; thus, the US probe can be oriented orthogonally to the estimated centerline of the local segment of the tubular structure.
§.§ Path Generation for Autonomous US Scanning
In order to accomplish US examinations, a proper path is essential to visualize the object or locate the lesion on human tissue, e.g., along a target blood vessel and covering a volume of interest. This section categorizes the existing path planning methods as 1) offline scan path generation methods and 2) online scan path generation methods.
§.§.§ Offline Scan Path Generation
To locate and evaluate the length and severity of stenosis for planning the treatment of peripheral arterial disease (PAD), Merouche et al. directly give the scanning path by manually moving the robotic arm along the target artery <cit.>. To address the potential visualization issue caused by small motions after path planning procedures and to facilitate the tracking of the artery during automatic scans, the probe's position was tuned to maintain the cross-sectional lumen horizontally centered in the US view. Similarly, Jiang et al. manually drew a scan path on the surface of a vascular phantom, and then extracted the path based on RGB images <cit.>.
Considering autonomous path planning, scan trajectories can be determined on pre-scanned images (e.g., MRI and CT); then, transferring the planned path to the current setup by registering the live US or RGB-D image to the preoperative atlas.
Hennersperger et al. validated the feasibility of autonomously transferring a planned scan path from MRI to the current setup based on the registration between the MRI and 3D surface point clouds acquired by a Kinect camera (Microsoft Corporation, USA) <cit.>. Similarly, Langsch et al. computed the scanning trajectory of an aorta by registering 3D US volume to the patient's MRI <cit.>. However, due to the need for tomographic data (MRI or CT) of each patient, the advantage of these approaches is reduced in clinical practice. To further address this challenge, Virga et al. carried out non-rigid registration between the patient-specific 3D surface extract from a depth camera and a generic preoperative MRI template <cit.> [see Fig. <ref> (a)].
Specific to thorax examinations, Jiang et al. presented a skeleton graph-based non-rigid registration between the cartilage point clouds extracted from a tomographic template and US images of patients <cit.>. To further improve the registration accuracy, Jiang et al. introduced the dense skeleton graph to replace the manually designed key points of the skeleton <cit.> [see Fig. <ref> (b)].
Akbari et al. presented a complete US-based approach to find a proper trajectory for breast US imaging <cit.>. A manual prior scan is carried out in advance; then, the desired trajectory for the post scan is computed based on geometrical analysis of the target using the pre-scanned US images.
In addition, the scanning path is often planned directly on the surface extracted by an external camera <cit.>. Mustafa et al. extracted the patient's abdomen surface from an RGB image acquired using a web camera (2D) based on a preset HSV color filter; then, the position of the liver was estimated and a four-step acquisition protocol was applied <cit.>. Due to the lack of imaging depth information, the camera needed to be carefully positioned anterior to the subject. Ma et al. used a Realsense SR305 RGB-D camera (Intel Corporation, USA) to extract the 3D surface data using a depth threshold and further planned the scanning path on the extracted 3D surface <cit.>.
Huang et al. extracted 2D skin surfaces of patients from an RGB image using the rule “red>Green>Blue" <cit.> [see Fig. <ref> (c)]. They claimed this is more generic and robust than the threshold-based approaches. Then, a “snake" trajectory was automatically generated to cover the area of interest. Suligoj et al. used the same logic to generate scan paths over a region manually annotated in an RGB image <cit.> [see Fig. <ref> (d)]. Recently, Ma et al. proposed a learning-based method to extract the human abdomen from a depth camera, and further divided the extracted region into four parts for autonomously generating scanning paths of the lung <cit.>.
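Once a region of interest has been extracted from the camera image, a "snake" trajectory of the kind used above can be generated in a few lines. The sketch below assumes a rectangular region in the robot base frame and leaves the projection of the waypoints onto the 3D body surface to the rest of the pipeline; the function and its parameters are illustrative.

```python
import numpy as np

def snake_path(x_min, x_max, y_min, y_max, line_spacing, step):
    """Generate a boustrophedon ("snake") coverage path over a rectangular
    region of interest as a list of (x, y) waypoints.  `line_spacing` should
    not exceed the probe footprint width so that neighbouring sweeps overlap;
    `step` is the waypoint spacing along each sweep line.  Illustrative
    sketch; real systems project these waypoints onto the extracted surface."""
    waypoints = []
    for i, y in enumerate(np.arange(y_min, y_max + 1e-9, line_spacing)):
        xs = np.arange(x_min, x_max + 1e-9, step)
        if i % 2 == 1:                      # reverse direction on every other line
            xs = xs[::-1]
        waypoints.extend((float(x), float(y)) for x in xs)
    return waypoints
```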
The aforementioned path planning approaches for US scanning were directly determined on the patient's surface. However, the optimal coverage of an underlying volume of interest is not considered. To address this challenge, Graumann et al. proposed a method to automatically compute a suitable scanning path to cover a volume of interest easily selected in preoperative images <cit.>. Depending on the sizes of targeting volumes, one or multiple lines were automatically generated for full coverage. To automatically determine the optimal probe position on the skin to monitor the motion of the internal organ of interest, Bruder et al. computed patient-specific US image quality from a given CT scan <cit.>.
To further consider the full coverage of subcostal organs like liver and heart, Göbl et al. proposed a framework integrating both geometrical and physics-based constraints to estimate the best US scanning path with respect to the limited acoustic windows <cit.>. The poses maximizing the image quality (i.e., less acoustic attenuation) are finally selected. The results on both human and phantom data demonstrated that superior image quality was achieved using their method in comparison with a naive planning approach while maintaining the necessary coverage of the target.
§.§.§ Online Scan Path Generation
Although offline path planning is more often used in RUSS, some online planning approaches based on live US images have also been developed. Online approaches can generate more flexible trajectories than offline approaches, which can effectively guarantee the target's visibility inside the US view, even in the presence of unexpected motion. In <cit.>, Jiang et al. proposed a pipeline to enable a RUSS to automatically perform US screening of tubular structures based only on real-time US image feedback. The US probe was manually positioned on the tubular structures [see Fig. <ref> (e)]. Afterward, a U-Net was employed to continuously segment the cross-sectional vessel lumen from US images; thereby, a set of boundary point clouds was extracted and further used to estimate the geometry (centerline and radius) of the local artery sections. To completely scan the whole artery, the US probe was moved forward in the direction of the estimated local vessel centerline in real-time. In addition, similar work was accomplished by Huang et al. for the automatic screening of the carotid artery based on US image feedback <cit.>. In <cit.>, Kim et al. employed a CNN as a classifier for real-time B-mode images to update the probe position for heart examinations. Since the next action is planned in real-time, the online path planning approach can facilitate the robust tracking of the target during autonomous scans. To ensure a scanning quality that facilitates clinical diagnosis, Jiang et al. first presented an online segmentation quality-aware method based on the Doppler signal <cit.>. Once the segmentation performance is considered low, the probe orientation is adjusted to enhance the Doppler signal and thereby improve the accuracy and completeness of the reconstructed 3D vessel. The significance of this study lies in its ability to inspire future research into quality-aware, closed-loop robotic scanning.
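The core step of such online, vessel-following scans can be illustrated as follows: the local vessel direction is estimated from the 3D centroids of the most recently segmented cross-sections, and the probe is advanced a small step along it. This is a sketch of the idea only; the cited systems combine it with orientation and force control, and the step size is an arbitrary example.

```python
import numpy as np

def next_probe_position(recent_centroids, step=0.005):
    """Advance the probe along a tubular structure during an online scan.
    `recent_centroids` is an (M x 3) array of the 3D centroids of the most
    recently segmented vessel cross-sections, ordered along the sweep
    (M >= 2).  The local vessel direction is the principal axis of these
    points; the next position lies one `step` (metres) further along it.
    Illustrative sketch only."""
    centred = recent_centroids - recent_centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                                    # principal axis
    # keep moving forward, i.e. roughly in the direction of the last step
    if np.dot(direction, recent_centroids[-1] - recent_centroids[-2]) < 0:
        direction = -direction
    return recent_centroids[-1] + step * direction
```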
§ APPLICATION-ORIENTED ADVANCED TECHNOLOGIES FOR AUTONOMOUS RUSS
The aforementioned three enabling technologies (force control, orientation optimization, and scanning path generation) have been extensively studied in the existing literature.
However, the enabling technologies can only guarantee the quality of US acquisition in ideal cases. To further enable the implementation of extensive and autonomous RUSS screening programs, more advanced technologies tackling practical challenges in real scenarios should be considered. In this section, four distinctive techniques are discussed: 1) Motion-aware US imaging: regarding the autonomous scanning of the anatomy of interest, potential body motion should be monitored and properly compensated to achieve an accurate and complete 3D anatomical geometry. 2) Deformation-aware US imaging: due to the inherent characteristics of US imaging, a certain force is necessary for properly visualizing the underlying anatomy of interest; the inevitable force-induced deformation therefore hinders correct measurements of the target anatomy. 3) US visual servoing: providing pixel-to-pixel control to accurately move the probe to reach the desired cross-sectional images and guarantee the visibility of the object of interest in US views. 4) Elastography imaging: benefiting from the accurate control over probe position and contact force between the probe and tested objects, the underlying tissue properties can be estimated for diagnosis using RUSS.
§.§ Motion-Aware US Imaging
§.§.§ Periodic Motion Detection and Compensation
In this context, periodic or quasiperiodic motions refer primarily to internal physiological motions such as respiration and pulsation. Because US is non-invasive and provides real-time imaging, it can be used to monitor internal tissue motion <cit.>. In free-hand mode, it is extremely difficult to compensate for such motions to achieve stable US images. To tackle this challenge, RUSS has been seen as a promising solution <cit.> because robots can usually provide higher accuracy in terms of positioning and repeatability than humans <cit.>. Esteban et al. reported that RUSS can intrinsically compensate for small motions caused by breathing or human tremor using compliant force control <cit.>. Heunis et al. employed a 6-DOF Stewart platform to mimic the involuntary periodic movements that occur during scans, and further proposed a pipeline to create an effective scanning path to cover a surface while compensating for these motions and adhering to preset contact forces <cit.>. This movement was also compensated for by using force control. The results demonstrated that the reconstruction error of arteries was 1.9±0.3 mm in non-static scenarios. To actively compensate for respiration-induced motion in the liver or prostate, Ipsen et al. applied constant force control to accomplish continuous US scans in long-term monitoring <cit.>. Furthermore, visual servoing (Section <ref>) is another potential solution for compensating for respiratory motion <cit.> and the pulsation caused by the heartbeat <cit.>.
§.§.§ Non-Periodic Motion Detection and Compensation
Subjects are often repositioned by sonographers to better visualize the target during scans. Thus, the ability to compensate for non-periodic patient motion is crucial for the practical use of RUSS. A representative example of the influence caused by non-periodic motion of the imaged patient is shown in Fig. <ref>. The scanned results differ significantly depending on whether the same object is kept stationary or moved during scanning.
To obtain complete and accurate 3D US scans of a vascular phantom in the presence of rigid motion, Jiang et al. proposed a vision-based RUSS to actively compensate for such non-periodic motion <cit.>. In this study, five passive markers were rigidly attached to the imaged phantom surface and further used to monitor the potential target motion. Once the target is moved, the motion-aware RUSS automatically computes the transformation and updates the trajectory to resume the scan from the point of interruption. To eliminate the requirement for careful configuration of the passive markers in real scenarios, Jiang et al. monitored the patient's motion based on the real-time segmentation of objects in RGB images and computed the compensation matrix using extracted surface point clouds acquired before and after the motion <cit.>. The results on a realistic arm phantom demonstrate the effectiveness of this marker-less compensation method. The advantages of robotic US (accuracy and stability) and free-hand US (flexibility) were combined by including active compensation for potential patient motion during scans. However, such systems only considered the rigid motion of objects. To further tackle non-rigid articulated joint motions, Jiang et al. proposed a vision-based framework, combining joint detection and non-rigid surface registration, to automatically update scanning trajectories from a template to individual volunteers with varying arm gestures <cit.>. The robustness and accuracy of the proposed system have been evaluated on multiple volunteers.
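For corresponding point sets extracted before and after a patient movement (e.g., from the markers or the segmented surface), the compensation matrix mentioned above can be obtained with the classical SVD-based (Kabsch) rigid alignment. The sketch below assumes that point correspondences are already established and is illustrative rather than the exact procedure of the cited works.

```python
import numpy as np

def rigid_transform(points_before, points_after):
    """Estimate the rigid motion (R, t) such that points_after ≈ R @ p + t for
    corresponding N x 3 point sets, using the SVD-based Kabsch method.  The
    planned scan trajectory can then be mapped through (R, t) to resume the
    scan after the patient has moved (illustrative sketch)."""
    mu_b = points_before.mean(axis=0)
    mu_a = points_after.mean(axis=0)
    H = (points_before - mu_b).T @ (points_after - mu_a)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_a - R @ mu_b
    return R, t
```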
§.§ Deformation-Aware US Imaging
Due to the probe-patient contact force, shape distortion of the visualized anatomy's geometry is inevitable, particularly for soft tissues such as superficial blood vessels (see Fig. <ref>). The force-induced deformation reduces the precision and repeatability of US images, thereby further limiting the diagnostic accuracy and consistency, especially for computer-assisted diagnosis.
To provide precise and reliable US images, pressure-induced image deformation needs to be properly corrected. Unlike human sonographers, robots/computers are not trained to make the diagnosis based on deformed images. Therefore, such corrections are particularly important for RUSS.
To achieve distortion-free images, Treece et al. combined non-rigid image-based registration with position sensing to correct pressure-induced deformations for free-hand 3D imaging <cit.>. Sun et al. computed 2D deformation fields based on the estimated pixel displacements and corresponding contact forces using polynomial regression models <cit.>. The pixel displacements were computed based on flow techniques using raw radio frequency (RF) data. Based on their experimental results, the parabolic polynomial regression model significantly outperforms the linear model. However, there was no significant performance difference between second-order and higher-order polynomial models. Burcher et al. built a model using the finite element method (FEM) to predict the deformation <cit.>. Nonetheless, the performance of the FEM-based approach is heavily dependent on prior knowledge of tissue properties, which are usually hard to measure in real scenarios. To overcome this challenge, Dahmani et al. employed a linear elastic model to approximate personalized biomedical properties of the involved tissues from the images <cit.>.
To alleviate the inter-variation of pressure-induced deformation between the acquired images along a scanning path, RUSS is often required to maintain a constant force during the screening.
To correct distorted images, Virga et al. built a 4th-order polynomial model to regress the pixel displacement with respect to contact force and further propagate the computed deformation field at sparse sampling points to the whole sweep direction <cit.>. The sampling points were selected manually on the first frame and this method took 186 s on average to compute a deformation field at one location. To speed up the process for compression-free 3D volume, Jiang et al. proposed a stiffness-based deformation correction approach, incorporating image pixel displacements, contact forces, and nonlinear tissue stiffness <cit.>. To obtain patient-specific stiffness models, robotic palpation was performed at sampling positions. Since tissue stiffness is the key factor dominating the deformation, the optimal deformation regression models at sampling positions can be propagated to other positions on the trajectory by interpolating the estimated local stiffness. However, the state of the art in the field of US image correction for force-induced deformation is not yet applicable to clinical practice. To further achieve this objective, a pixel-wise tissue properties estimator and anatomy-aware correction system should be developed to bridge the gap between different anatomy and different patients.
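The regression idea underlying these correction methods can be sketched as follows: for a set of images acquired at different contact forces, a per-depth polynomial maps force to axial pixel displacement, and each column of a new image is resampled back to its estimated zero-force configuration. The fitting order, the helper names, and the displacement convention are illustrative assumptions; the cited works use their own displacement estimation and propagation schemes.

```python
import numpy as np

def fit_displacement_model(forces, displacements, order=2):
    """Fit, for every image depth (row), a polynomial mapping contact force
    to the axial pixel displacement observed at that depth.
    forces: shape (K,); displacements: shape (K, H), the tracked axial shift
    of each row at each of the K compression levels.
    Returns an (order+1, H) coefficient array (illustrative sketch)."""
    return np.polyfit(forces, displacements, deg=order)

def correct_column(column, coeffs, force):
    """Undo the force-induced axial shift of one image column: `coeffs`
    predicts, per row, how far the tissue originally at that depth has moved
    toward the probe under `force`, so the corrected value at each row is
    read from the compressed column at the shifted depth."""
    rows = np.arange(column.shape[0], dtype=float)
    order = coeffs.shape[0] - 1
    disp = sum(coeffs[i] * force ** (order - i) for i in range(order + 1))
    return np.interp(rows - disp, rows, column)
```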
§.§ Ultrasound Visual Servoing
Understanding the interaction of sonographers with the patient and the US probe is of high importance when developing RUSS. In order to acquire B-mode images of the anatomy of interest, sonographers perform a rough positioning of the probe on the human body. Subsequently, the B-mode images are analyzed while adjusting the probe to obtain the final view with the anatomy of interest in focus. This dynamic image-based adjustment and exploration of the anatomy can be defined as “visual servoing". While this has been the subject of research in the last decades, we believe that the introduction of deep learning and the advances in reinforcement learning could allow the scientific community to further understand and solve this image-based optimization problem. Recent work published in this field <cit.> can be taken as an indicator that this will be an interesting research topic in the coming years. In this section, we review some prior work on visual servoing that can be considered a development of the state of the art towards the goal of autonomous intelligent exploration of particular anatomical and physiological views needed for examination and treatment.
§.§.§ Autonomous US Probe Guidance
To automatically rediscover a previously registered US imaging view, Bachta et al. developed an image-based visual servoing approach using boundary information and tested it in a simulator <cit.>. The target edge was retrieved using a polynomial regression analysis, and the optimized coefficients were used as visual features to guide a robot-controlled probe to reach a desired image section. However, this method suffers from image noise and is limited to a specific shape. To overcome this challenge, Mebarki et al. employed image moments as visual features <cit.>, which are generic and robust with respect to measurement perturbations. To further achieve a model-free servoing task on unknown targets, they compute the interaction matrix in real-time using B-mode images <cit.>. The experiments on gelatin phantoms demonstrated promising results in terms of minimizing the visual-features error; however, only local convergence can be guaranteed. In particular, in the case of a roughly symmetric object, similar geometric properties can be observed from different cross-sectional images. To overcome this shortage, Nadeau et al. defined a set of 2D features based on a three-dimensional space using a motorized 3D probe <cit.>.
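For readers unfamiliar with moment-based features, the low-order moments of a segmented cross-section (area, centroid, and principal orientation) can be computed as below; they form the kind of feature vector driving such controllers, while the interaction matrices of the cited works are beyond this sketch. The function name and return format are illustrative.

```python
import numpy as np

def image_moment_features(mask):
    """Compute low-order image moments of a binary cross-section mask
    (H x W boolean array) of the target anatomy: area, centroid, and the
    in-plane orientation of its principal axis.  These are the kind of
    visual features used for US visual servoing; the servo law itself is
    not part of this sketch."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                    # target not visible
    area = float(xs.size)                              # zeroth-order moment
    xc, yc = xs.mean(), ys.mean()                      # centroid
    mu20 = ((xs - xc) ** 2).mean()                     # second-order central moments
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # principal orientation [rad]
    return np.array([area, xc, yc, theta])
```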
To accurately and actively navigate the probe to a given US plane using the visual servoing technique, Duflot et al. first used the subsampled shearlet coefficients as novel visual features as an input to the controller, instead of pure image signal information, i.e., point, lines, moments, etc. <cit.>. Since a set of noiseless and redundant features can be extracted using shearlet coefficients, promising performances of their approach in terms of accuracy, repeatability, and robustness could be achieved. A comprehensive comparison between shearlet-based and photometric-based visual servoing controllers was carried out in both simulator and physical phantom <cit.>.
§.§.§ Imaging Stabilization and Object Tracking
Visual servoing has also been used to track anatomies of interest and perform online compensation of the anatomy’s motion to stabilize the real-time US images. Without compensation for physiological motion such as breathing, the resulting images will be corrupted by motion artifacts, leading to inaccuracies in estimating the precise location of intervention target tissues. US visual servoing technologies are developed to compute the corresponding probe adjustment against environment dynamics based on real-time image feedback.
Nadeau et al. presented an intensity-based approach to maintain the view of an organ while compensating for the physiological motion of the patient <cit.>. Since the computation of image moments depends on object segmentation, image intensity values were directly used as visual features. In an extension work, they adapted their method for 3D probes and did first validations on soft animal tissues <cit.>. In 2015, Nadeau et al. applied a similar intensity-based visual servoing method to keep a target centered within a virtual imaging view in the context of intracardiac surgery <cit.>. Its effectiveness has been validated on in-vivo data. Besides cardiac applications, Nadeau et al. applied visual servoing to stabilize respiratory motion by compensating periodic disturbances with a predictive controller <cit.>.
In addition to intensity-based approaches, Krupa et al. employed US speckle information to estimate both in-plane and out-of-plane motion, thereby realizing the tracking of soft-tissue movements in the US view <cit.>. Speckle is often considered to be noise; however, it conveys valuable information about the tissue of interest.
Speckle contains spatially coherent information between consecutive US images because it physically results from coherent reflections of small components in human tissue.
The preliminary experiments performed on a phantom with 2-DOF in-plane and out-of-plane motions demonstrated the potential of a speckle-based servoing approach. The validation for 6-DOF motion was further reported in <cit.>. To further consider soft tissues' deformation, Royer et al. developed a physics-based model to facilitate the accurate tracking of the target of interest in 3D US images <cit.>.
§.§.§ Imaging Quality Optimization
Visual servoing techniques have also been investigated to improve imaging quality. Chatelain et al. first introduced the US confidence map as a new feature for visual servoing <cit.>. The authors claimed that the US imaging quality could be improved by optimizing the probe orientation to maximize the overall confidence value. An interesting extension using 3D probes instead of 2D probes has been reported in <cit.>. To evaluate the effect of the proposed method in real scenarios, in-vivo validations were performed on healthy volunteers. In addition, Patlan et al. directly employed elastography as the input of the visual servoing controller <cit.>. To optimize the quality of the resulting elastography, the probe was automatically actuated to image a soft tissue object from different views, and further fused to enhance the computed elastography.
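To make the confidence-driven adjustment concrete, the following Python sketch scores a set of candidate probe tilts by the mean value of a confidence map and keeps the best one. This is only an illustration of the principle: compute_confidence_map is a placeholder for a proper US confidence estimator, acquire_image stands in for the robot/US interface, and the cited works formulate the problem as a continuous visual-servoing control law rather than a discrete search.

import numpy as np

def compute_confidence_map(image):
    # Placeholder for a per-pixel US confidence estimator in [0, 1];
    # here confidence simply decays with imaging depth.
    depth_decay = np.linspace(1.0, 0.2, image.shape[0])[:, None]
    return np.clip(image * depth_decay, 0.0, 1.0)

def confidence_score(image):
    return float(compute_confidence_map(image).mean())

def choose_probe_tilt(acquire_image, candidate_tilts_deg):
    # Acquire one frame per candidate in-plane tilt and keep the tilt whose
    # image has the highest overall confidence.
    scores = {t: confidence_score(acquire_image(t)) for t in candidate_tilts_deg}
    best = max(scores, key=scores.get)
    return best, scores

# Example with a dummy acquisition function standing in for robot + probe.
rng = np.random.default_rng(0)
acquire = lambda tilt_deg: rng.random((128, 128)) * np.cos(np.radians(tilt_deg))
best_tilt, all_scores = choose_probe_tilt(acquire, [-10, -5, 0, 5, 10])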
§.§ Elastography Imaging
US elastography is a non-invasive technique aiming to estimate the mechanical properties (i.e., stiffness) of the underlying soft tissues. Elastography has gained great interest in applications such as differentiating tumors from healthy tissues (breast, prostate, liver, etc.) and guiding radiofrequency ablation surgeries <cit.>. Based on the underlying principles for producing US elastography, the currently available techniques can be mainly grouped into shear wave imaging and mechanical strain imaging. In shear wave imaging, the propagation speed of shear waves is measured. For strain imaging, a mechanical compression is performed with the US probe on the subject's skin; with robotic techniques, this compression process can be accurately controlled and measured, so that accurate and standardized elastography is expected to be achieved.
Compared with shear wave imaging, strain images are more common for robotic elastography imaging because strain imaging does not require specialized US hardware. Schneider et al. computed laparoscopic US elastography using an external vibrator positioned on the patient's skin, where the US probe was remotely controlled by da Vinci (see Fig. <ref>) <cit.>. Patlan-Rosales et al. computed strain images using real-time radio-frequency (RF) signals to precisely locate subcutaneous tumors <cit.>. In this study, robot-assisted palpation was used instead of an external vibrator, and the resulting strain images were used to horizontally maintain the object in the imaging center. To estimate the strain map of moving tissues, Patlan-Rosales et al. estimated and compensated for the non-rigid motion using visual servoing on an abdominal phantom <cit.>. Instead of 2D elastography, the same team extended their work to create 3D elastography based on the pre- and post-compressed volumes obtained by a 3D US probe <cit.>.
To compute 3D elastography without using a 3D probe, Huang et al. designed a linear sliding track with a position sensor and a height-adjustable holder for conventional 2D probes <cit.>. In this study, the pre- and post-compression echo signals were recorded by manually adjusting the height of the probe holder. Then, paired frames of RF data from the pre- and post-compression sweeps were obtained by interpolation. 2D strain images were computed using the paired RF data; thereby, 3D strain maps were obtained by stacking the computed 2D strain images. To allow automatic acquisition of 3D strain maps, they replaced the linear track with a motorized 3-DOF linear stage <cit.> and a 6-DOF robotic arm <cit.>, respectively.
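The strain-imaging pipeline described above (pair pre- and post-compression RF data, compute 2D strain images, and stack them into a volume) can be sketched in highly simplified form as follows; real systems use sub-sample delay estimation, 2D kernels, and regularization, so this is only meant to illustrate the principle:

import numpy as np

def axial_displacement(pre_rf, post_rf, win=64, step=32):
    # Window-wise time-delay estimation along one RF line via cross-correlation.
    shifts = []
    for start in range(0, len(pre_rf) - win, step):
        a = pre_rf[start:start + win] - pre_rf[start:start + win].mean()
        b = post_rf[start:start + win] - post_rf[start:start + win].mean()
        xcorr = np.correlate(b, a, mode="full")
        shifts.append(np.argmax(xcorr) - (win - 1))  # lag of maximum correlation
    return np.asarray(shifts, dtype=float)

def strain_profile(pre_rf, post_rf):
    # Axial strain is the spatial derivative of the estimated displacement.
    return np.gradient(axial_displacement(pre_rf, post_rf))

# A 2D strain image is obtained by applying strain_profile to every RF line of a
# paired pre/post frame; a 3D strain volume is then the stack of 2D strain images,
# e.g. (conceptually): strain_volume = np.stack([strain_image(p, q) for p, q in paired_frames])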
§ AI-POWERED ROBOTIC US ACQUISITION
AI techniques have been seen as a promising way to further improve the automation level of RUSS by enhancing the understanding of US images and enabling the intuitive transfer of senior sonographers' advanced physiological knowledge. Such techniques have gained increasing attention most recently. Great success has been achieved on a diverse set of tasks such as segmentation and classification of US images, and a large number of research articles have been published in this field; more detailed techniques can be found in these survey articles <cit.>. In this article, we will only focus on the studies that aim to automatize and/or standardize US scanning using AI-based approaches. More specifically, these approaches aim to automatically search for specific anatomical features or navigate a probe to display standard US planes needed for examinations. These tasks are challenging because RUSS must be able to properly interpret the current states (US image, contact force, probe pose) and the surrounding context.
Due to the potential tissue deformation and inconsistent acoustic artifacts of medical US images, guiding a probe to visualize target objects in desired planes is a highly sophisticated task, which requires years of training <cit.>. However, such knowledge is not yet available for robots or computers. Due to their great advantage in feature representation over naive handcrafted features, CNNs have the potential to achieve superhuman performance in robustly and accurately locating standard planes in challenging US images. Chen et al. employed a deep CNN to identify the fetal abdominal standard plane from recorded US video <cit.>. Since data collection and manual labeling are time-consuming, a transfer learning strategy was used to guarantee the performance with limited training data. To achieve real-time performance, Baumgartner et al. proposed a deep CNN architecture called SonoNet to automatically detect 13 fetal standard planes as well as provide localization of the fetal structures using a bounding box <cit.>. SonoNet was trained in a weakly supervised mode with only image-level scan plane labels, which makes it possible to prepare a large data set. These approaches aid sonographers in locating standard planes and can also improve efficiency, in particular for novices. Yet, these methods cannot automatically guide the probe towards target planes or anatomical structures of interest.
To enable the ability of RUSS to automatically perform US scans, Mylonas et al. proposed a learning-based approach allowing autonomous execution of US scanning according to expert demonstrations <cit.>. To achieve this objective, a Gaussian Mixture Modeling (GMM) was employed to model the demonstrations (trajectories) towards target objects in a probabilistic manner. However, since the real-time US image was not taken into consideration, all the demonstrations roughly started from the same initial position. This limitation severely impairs the usability of this method in real scenarios. To overcome this limitation and further provide real-time probe movement guidance for obtaining standard planes, Droste et al. proposed a behavioral cloning framework to mimic the process of sonographers searching for standard planes <cit.>. The proposed US-GuideNet consists of two fully connected layers and a gated recurrent unit (GRU) used to extract the sequential information. Due to hardware limitations, the predicted next movement of the probe and the estimated final standard planes only accounted for the rotational component, while the translational component remained unaccounted for.
The performance of the imitation-based approach heavily relies on the given demonstrations.
However, human US demonstrations are frequently and inherently sub-optimal, where the sonographers often need to adjust the probe around the desired pose to finally determine the optimal view. To tackle sub-optimal demonstrations, Burke et al. introduced a probabilistic temporal ranking model which assumes that the images shown in the later stage are more important than the earlier images <cit.>. The probabilistic ranking model can generate a large data set consisting of pair-wise images based on limited demonstrations; and then, a reward inference network was trained to assess individual B-mode images in self-supervised mode. To automatically navigate the probe to the viewpoint visualizing the mimicked tumor inside the gel phantom, an exploratory Bayesian optimization policy was employed. Nonetheless, due to safety concerns, it is impractical to interact richly with patients to gain enough experience to achieve the optimal searching policy in real scenarios.
The process of navigating a US probe to a proper viewpoint displaying standard planes can be seen as a series of probe motions performed in accordance with current observations (e.g., US images, force, probe pose). Therefore, the reinforcement learning (RL) architecture has been seen as a particularly suitable solution for this type of task. Milletari et al. presented an initial work using a deep Q-learning (DQN) architecture to guide sonographers towards the correct sonic window for cardiac examination <cit.>.
To avoid dynamic interaction with patients, a grid world environment was built over the chest using recorded videos to simulate the acquisition environment.
The results demonstrated that the DQN-based approach achieved better results (86.1% correct guidance) than a supervised approach (77.8% correct guidance) trained on the same data. A similar work also trained a DQN on a simulated 2D grid environment to navigate the probe towards the sacrum <cit.>. To automatically terminate the navigation process, a binary classifier (ResNet18) was employed to determine if the target object had been reached.
Since this method only considered 3-DOF translational movements, the probe orientation needs to be carefully initialized.
To further eliminate the requirement of manual initialization and automatically localize the paramedian sagittal oblique plane (a standard plane used in spine US examination), Li et al. trained a DQN to predict the potential actions in a 5-DOF space (all degrees of freedom except translation along the probe centerline) <cit.>. In contrast to the grid world environment, this work built a simulator using 3D US volumes that cover the target anatomy of interest. This simulator can generate synthetic US images based on arbitrary probe poses. The experimental results demonstrated that the method can repeatably navigate the probe to the target standard plane with an accuracy of 4.91 mm (translational) and 4.65^∘ (orientational) in the intra-patient setting. Then, the authors extended the work by adding a deep learning module (VGG-16) to recognize the target standard views from real-time US images <cit.>. Due to the US simulator, a large amount of state-action data can be obtained for training the DQN agent. In addition, to learn a policy that guides the probe to the position visualizing the kidney, Chen et al. used a supervised learning process to predict the next actions based on the current US image, and an actor-critic RL module was developed to improve the utilization of data and enhance the generalization <cit.>. Recently, to bridge the gap between simulation and real scenarios, Bi et al. proposed VesNet-RL to perform US standard plane (longitudinal view) searching for vascular structures <cit.>. To achieve high generalization capability, this study computed the binary mask of real-time B-mode images and used the background-irrelevant binary masks as the input to train the RL agent.
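The DQN-style probe guidance described above can be illustrated with a toy sketch: the state is the current B-mode frame, the actions are discrete probe motions, and a small Q-network scores the actions. This is not any of the cited systems; the action set, network size, and the omitted training loop (experience replay, target network, and a reward for reaching the target view) are all illustrative assumptions.

import torch
import torch.nn as nn

ACTIONS = ["left", "right", "up", "down", "rotate+", "rotate-", "stop"]

class QNet(nn.Module):
    # Tiny CNN mapping a single-channel US frame to one Q-value per probe action.
    def __init__(self, n_actions=len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, frame):  # frame: (B, 1, H, W)
        return self.head(self.features(frame))

def select_action(qnet, frame, epsilon=0.1):
    # Epsilon-greedy selection of the next probe motion from the current frame.
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ACTIONS), (1,)).item()
    with torch.no_grad():
        return qnet(frame).argmax(dim=1).item()

# Example: pick the next probe motion for a dummy 128x128 frame.
qnet = QNet()
action = ACTIONS[select_action(qnet, torch.rand(1, 1, 128, 128))]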
Instead of performing validation in the simulated environment with a virtual probe, Ning et al. proposed a state representation model to encode the force and US images into the scene image space acquired using an RGB camera; and then an agent was trained using the proximal policy optimization (PPO) method to control the robotic manipulator to automatically perform US scans in real world <cit.>. Similarly, Deng et al. employed a deep neural network to encapsulate the scanning skill (the US images, the pose/position of the probe, and the contact force) into a high-dimensional multi-modal model; then, a policy was trained based on expert demonstrations <cit.>. Due to the differences between the images in the given demonstrations and real ones obtained during dynamic interactions, the trained model was further improved with guided explorations carried out by human operators. However, such manual correction is very expensive during clinical examinations, and it will limit the efficiency of the RUSS.
Instead of directly learning a policy to search for standard planes, Jiang et al. proposed a novel machine learning framework (MI-GPSR) to understand the implicit physiological knowledge in expert demonstrations, which is implemented in a self-supervised fashion using a probabilistic ranking approach <cit.>. To ensure the generalization capability of the method, the authors employed mutual information <cit.> to explicitly disentangle the task-related features from the domain features. The results on three types of phantoms [gel tubular structure, chicken heart, and lamb kidney phantom (see Fig. <ref>)] demonstrated that MI-GPSR can properly predict the reward of individual US images from unseen demonstrations and unseen phantoms with the same anatomy <cit.>. Understanding and modeling the semantic reasoning and intention of expert sonographers can facilitate not only the development of autonomous intelligent RUSS but also the design of US education and training systems and advanced methods for grading and evaluating the performance of human and robotic sonography.
§ OPEN CHALLENGES AND FUTURE PERSPECTIVES
Medical robots have gained increased attention, in particular during the COVID-19 pandemic. The role of robotics in managing public health and infectious diseases has been widely discussed among the community <cit.>. In order to apply RUSS in clinical practice, there are still many open challenges, including both technological (e.g., deep understanding of the dynamic scene, and advanced sensing technologies) and nontechnological (e.g., regulatory affairs and financing) aspects <cit.>. Here, we highlight two aspects that will widely affect the roadmap for RUSS, particularly for clinical translation and commercialization: 1) the acceptance of RUSS, and 2) the ethical and legal issues. In addition, we discussed some promising research directions to inspire the future development of RUSS.
§.§ Acceptance by Patients and Clinicians
The RUSS are designed to help both sonographers and patients in clinical practice. Besides demonstrating comparable or even better outcomes, gaining acceptance of RUSS is also important. Here, we want to first make a distinction between the concepts of acceptance and trust. Trust is mostly based on how well RUSS performs technically, in terms of safety, clinical results, robustness, repeatability, and so on. Yet, effective communication, friendly interaction, and mental development would also be necessary for improving acceptance.
Regarding teleoperated RUSS, Adams et al. indicated that all patients (18) were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. A similar result was reported by <cit.>, where 97% of 28 patients were willing to have another teleoperation scan. However, the number of participating patients in these two studies is limited. A more comprehensive survey about the patients' acceptance of RUSS should be carried out in the future. Furthermore, it is noteworthy that studies of clinicians' attitudes toward RUSS are still missing.
Teleoperation systems are controlled by human operators, and there are some very successful teleoperation surgical systems, e.g., da Vinci system. This fact contributes to the positive attitude of stakeholders for teleoperated RUSS <cit.>. In contrast, since autonomous RUSS are partially or fully out of the control of experts, non-negligible worries about safety arise, which stress both patients and experts during scans. Autonomous RUSS is still far from gaining widespread acceptance.
A standard evaluation metric considering clinical practices will help improve the trustworthiness of emerging autonomous medical robotics <cit.>. Nagy et al. defined the concept of level of Clinical Realism: 1) Training tasks with rigid phantoms; 2) Surgical tasks with simple phantoms; 3) Surgical tasks with realistic phantoms, but little or no soft-tissue interaction; 4) Surgical tasks with soft-tissue interaction; 5) Surgical tasks with soft-tissue topology changes <cit.>.
To tackle the safety concern of autonomous RUSS, robotic arms are often controlled in compliant force mode, which will result in soft interaction between the probe and patients to prevent excessive contact force <cit.>. A force threshold is specified as a hard limitation in the low-level controllers to completely rule out extreme situations. The RUSS will stop instantly whenever the real-time force exceeds the predetermined threshold, which was 25 N in <cit.>. During robotic scans, two emergency buttons are often held by the clinical expert and the patient, respectively, to incorporate their observations into the safety-aware loop. Such a dedicated multi-layer safety-aware framework is beneficial for increasing the trust of clinicians and patients. By offering detailed explanations of the ongoing robotic US scans over audio and performing simple interactions with patients, such as a "high five", Eilers et al. claimed that the acceptance from patients could be enhanced <cit.>.
To improve the acceptance of new medical devices in clinical practice, using a robotic system with medical certification can speed up the process in both research and market-driven development <cit.>. For example, KUKA LBR iiwa has been widely used as the key component for developing RUSS <cit.>. Nevertheless, this comes with a high unit cost and may necessitate the assistance of an experienced engineer for imaging acquisition or routine system maintenance <cit.>. Since the fee will be paid by the end-users, the financial issue will become a practical factor hindering acceptance by patients. Most recently, Kosa et al. examined the role of robotics in Intensive Care Medicine and their acceptability to patients and caregivers <cit.>. They concluded that the use of robots to directly handle patients is still immature, and that close collaborations between roboticists and clinicians are required to advance robotics to benefit the ICU.
§.§ Ethical and Legal Issues
The ethical and legal issues regarding medical robotics are still not clearly defined, particularly for autonomous systems. The distribution of responsibility between experts and RUSS (or other surgical robotic systems) remains unclear. Clinical translation will also need regulatory acceptance.
In order to properly tackle the ethical, regulatory, and legal issues for RUSS, Yang et al. divided surgical robots into six subgroups in terms of autonomy levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy <cit.>. To further improve the concept of level of autonomy, Haidegger defined the term “situation awareness" as the operator’s perception, comprehension, and prediction of a robot’s behavior in its environment <cit.>. Then, “situation awareness" is used to distinguish the required level of human supervision.
Up to the time of writing this article, commercial surgical robots are still solidly resting at Level-0, while a very large number of high-autonomy surgical robotic systems are waiting for clinical translation <cit.>. The commercial surgical robot market is dominated by a few disproportionately large companies, which are therefore in no rush to disrupt the status quo <cit.>.
Ethical and legal regulations are critical for clinical translation and further commercialization. The need for such a regulation has been highlighted by various senior researchers in multiple impactful publications recently <cit.>.
To establish such regulations for medical robots, O'Sullivan et al. defined three different responsibilities: (1) accountability: the capacity of a system to give an explanation for its actions; (2) liability: the legal liability for potential damages caused by a robot; and (3) culpability: whom to punish and how <cit.>. In addition, Vayena et al. discussed ethical and legal issues for digital health in terms of privacy and security, trust, and accountability <cit.>. As a large amount of data is often necessary for analysis, protecting privacy is undoubtedly important for avoiding misuse. Public trust is of paramount importance. Vayena et al. considered that the creation of a culture of trust will enable all stakeholders to benefit from the development of digital health <cit.>. Similarly, Yang et al. summarised five increasingly pressing topics in terms of ethics for robotics and AI <cit.>. Besides the aforementioned terms like responsibility, this work further emphasized societal issues such as the potential influence on employment and human freedom. Due to the quick evolution of the area of medical robotics, a proper and comprehensive regulatory system will boost a prosperous market and gradually benefit all stakeholders.
To deal with the unsolved issues regarding the safety, transparency, and trustworthiness of modern medical devices with a certain level of autonomy, the two leading Standard Development Organizations International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created the first joint standardization document (IEC/TR 60601-4-1) regarding autonomy for technical developers <cit.>. Recently, Prestes et al. established the first global ontological standard for AI and robotics: IEEE 7007—Ontological Standard for Ethically Driven Robotics and Automation Systems <cit.>. For an in-depth review of the ongoing initiatives regarding regulations, we highly recommend that readers refer to these two articles <cit.>.
§.§ Future Perspectives
In addition to challenges, there are also numerous opportunities in the field of RUSS, particularly in light of the boom in both fundamental sensor development and advanced AI research. This survey will elaborate on future perspectives from these two aspects. By providing an understanding of the state of the art, we hope it can stimulate a number of exciting ideas. To be clear, the opportunities extend far beyond what is described below.
§.§.§ Fundamental Sensing Systems
Sensors are essential components of all intelligent systems. Generally, the development of new sensors has a substantial effect on existing systems in numerous ways. To achieve the ultimate goal of an autonomous RUSS, it is necessary to integrate multiple sensing systems mimicking the sophisticated human sensing system. By developing efficient data fusion techniques, redundancy, and multi-modality data would aid in achieving robust and reliable perception results. This applies not only to RUSS but to a vast array of autonomous systems.
Most recently, the novel concept and development of US patches have become attractive. Due to the advantages of small size, stretchability, and no need for US gel, they are highly desirable for continuous healthcare monitoring. The traditional US probes are rigid and bulky, making them unsuitable for imaging through nonplanar surfaces. To address this challenge, Hu et al. proposed a stretchable US probe that can conform to and detect nonplanar complex surfaces <cit.>. This soft probe consisted of a 10× 10 array of piezoelectric transducers covered by compliant silicone elastomers, and the results demonstrated that it could be stretched more than 50%. Similarly, Wang et al. developed and tested a skin-conformal ultrasonic phased array to monitor the physiological signals from tissues up to 14 cm deep <cit.>. To tackle the practical issue that the image quality is highly affected by US gels, Wang et al. designed a bioadhesive US device consisting of a thin and rigid US probe robustly adhered to the skin via a couplant made of a soft, tough, antidehydrating, and bioadhesive hydrogel-elastomer hybrid <cit.>. Based on this device, continuous imaging of internal tissues over days becomes feasible. Most recently, Hu et al. demonstrated a wearable cardiac US imager providing direct cardiac function assessment <cit.>. Such fundamental changes in US probe design would open numerous opportunities for revolutionizing the techniques of robot-assisted US imaging.
§.§.§ Advanced AI-based RUSS
We consider AI-based RUSS to be another promising direction, where the core task is to improve the intelligence of RUSS. To this end, the research community first needs to improve the computer's understanding of dynamic environments through multi-modality signals. Only when the system possesses precise perception abilities can we further explore how to make proper decisions autonomously.
Several studies have demonstrated that AI-based approaches outperformed conventional image processing methods <cit.>.
Benefiting from the accurate segmentation of target objects (e.g., blood vessels), precise state representations will further facilitate the development of autonomous scanning <cit.> or autonomous exploration of standard US planes <cit.>.
In addition, advanced learning-based frameworks have the potential to be used to transfer senior sonographers' physiological knowledge and experience to novices. Recent studies in the direction of learning from demonstrations <cit.> have implicitly given rise to an attractive and influential new research topic: the recovery of the “language of sonography". Hands-on experience is very important and necessary for sonographers. Senior sonographers who can perform flawless US scans are still unable to directly parameterize and intuitively describe the acquisition requirements. However, US examinations are carried out based on their understanding of high-level physiological knowledge. Such knowledge is common among sonographers, although their comprehension may vary slightly due to experience. The concept of recovery of the “language of sonography" refers to the underlying understanding of high-level anatomical knowledge. We believe that efforts to extract the “language of sonography" from intuitive demonstrations with multiple signals, such as US images, RGB-D images, force information, probe movement, gaze information, etc., are as valuable and essential as the progress made in robotic sonography itself <cit.>.
§ DISCUSSION
Robotic technologies have demonstrated promising potential to extend the use of US imaging in the healthcare industry, such as remote examinations, and accurate and quantitative control of acquisition parameters.
Although current RUSS cannot yet show superiority over conventional US examinations in terms of improving clinical outcomes, a number of benefits have been demonstrated. From the perspective of patients, the waiting time for the healthcare intervention was significantly reduced from 144 to 26.5 days <cit.> and their cost was reduced as well <cit.>. As for sonographers, robots bring dexterity as well as reduce work-related musculoskeletal disorders <cit.>.
Additionally, RUSS has the potential to make a significant contribution in a variety of clinical scenarios, including performing trauma examinations in pre-hospital settings <cit.>, freeing up a clinician's hand during the intervention <cit.>, and performing routine PAD screening or monitoring without radiation <cit.>. When it comes to trauma scans, it is vital to spot life-threatening intracavitary hemorrhage as soon as possible because this will enable doctors to make prompt treatment decisions to save lives in emergency scenarios. RUSS could be used for reliable and accurate trauma scan identification in pre-hospital settings by fusing precise sensing devices with a cutting-edge learning-based semantic segmentation framework.
Continuing the current progress on RUSS requires a deep understanding of how its embedded technologies add value to healthcare practices. Intelligent robotic imaging systems could provide different benefits. On one hand, they can democratize the healthcare by making US examination available at locations in which patient populations do not currently have access to expert sonographers. On the other hand, to maximize the added value of RUSS, it is important to also focus on enabling new types of interventions or new procedures that are impractical or impossible based on traditional US examination, e.g., 3D or 4D visualization of scanned anatomy compensating or embedding physical breathing and heartbeat. Although there is not yet any fully autonomous system for US examinations, autonomy is one of the main objectives of the scientific community. Similar to surgical robotics, autonomous RUSS will be more challenging to commercialize <cit.>, however, due to its nature of offering images and visualization rather than decision making, cutting, and suturing tissues, we believe autonomous RUSS is easier to be certified and productized than autonomous surgical robotic solutions. On the other hand, compared to robotic X-ray and nuclear imaging, RUSS may be harder to certify because it requires direct interaction with patients. Researchers, therefore, need to continue their studies to guarantee the trust in and acceptance of autonomous RUSS by both doctors and patients.
The reported results on current autonomous RUSS are still far from mature, and these systems do not yet match, let alone outperform, clinicians.
Most existing research makes simplifying assumptions and often uses artificial setups for validation. For example, most US servoing approaches (Section <ref>) are validated on phantoms or in simulation rather than on human subjects, and the existing motion and deformation compensation approaches may not perform as well on patients within complex and dynamic clinical setups. Moving forward, several open questions need to be addressed by the community:
* Could advanced machine learning allow us to learn the “language of sonography" by observing expert sonographers?
* Could our RUSS systems understand the physics of imaging and its interaction with dynamic patient physiology?
* Could RUSS allow optimizing B-Mode, 3D and 4D image acquisition?
* Could advanced sensing and intelligent control allow for guaranteeing reproducibility and safety of scanning procedures?
* Could multimodal imaging and pretraining allow RUSS systems to observe and understand the specific anatomy and physiology of each patient?
* Could explainable AI enable RUSS systems to report and justify their actions and decisions to physicians?
* Could user-centric RUSS design allow smooth and friendly communication between sonographer robots, physician colleagues, and patients?
Answering each of these exciting and essential questions requires large multi-disciplinary scientific and engineering communities to gather, communicate and collaborate. The current review paper hopes to play a small role in gathering and highlighting some of the requirements and opening the path for the community to study and analyze the next crucial steps to take.
§ CONCLUSION
This survey has provided a brief picture of the rapidly evolving field of robot-assisted US imaging systems. Starting from the technical developments and clinical translations of various teleoperation systems in the first decade of the new millennium, in Section <ref>, the article summarizes the path the community took to get to its recent research focus on autonomous RUSS, in particular after the booming of machine learning and artificial intelligence throughout the last decade.
It is challenging to develop intelligent RUSS solutions, which require a number of advanced capabilities to understand dynamic environments, physics of US imaging, human anatomy and physiology, and thereby to tackle complex cases of diagnostic and interventional imaging.
To date, there are no such systems available. This paper aims at reviewing the state of the art and discussing the paths the community has taken or needs to take in the future.
The survey shows that the recent progress has demonstrated that RUSS may be able to improve image acquisition and 3D visualization, also taking motion and deformation into account, real-time geometrical (including volumetric) measurements, and in particular their reproducibility. The US handling habits vary among expert sonographers, and cannot be well described using handcrafted features. We believe that in the near future, the development of advanced machine learning will allow for figuring out the underlying “language of sonography" based on expert demonstrations. This can not only allow for autonomous intelligent RUSS development but also for designing US education and training systems, and advanced methodologies for grading and evaluating the performance of human and robotic US examinations. In view of its speed of progress, RUSS has the potential to revolutionize not only the US-based medical interventions themselves but also clinical screening, diagnosis, and robotic-assisted surgery.
§ DECLARATION OF COMPETING INTEREST
The authors report no conflicts of interest.
§ ACKNOWLEDGMENTS
The authors would like to acknowledge the Editors and anonymous reviewers for their time, and implicit contributions to the improvement of the article's thoroughness, readability, and clarity.
|
http://arxiv.org/abs/2307.05468v2 | 20230711175343 | My3DGen: Building Lightweight Personalized 3D Generative Model | [
"Luchao Qi",
"Jiaye Wu",
"Shengze Wang",
"Soumyadip Sengupta"
] | cs.CV | [
"cs.CV"
] |
My3DGen: Building Lightweight Personalized 3D Generative Model
Luchao Qi^1, Jiaye Wu^2, Shengze Wang^1, Soumyadip (Roni) Sengupta^1
^1UNC Chapel Hill, ^2University of Maryland
====================================================================================================================
Figure: We present My3DGen, where we build a parameter-efficient (600K parameters) personalized 3D generative prior in conjunction with a pre-trained generic facial 3D prior, EG3D <cit.> (31 million parameters). This allows us to reconstruct and synthesize multi-view consistent facial imagery of an individual by training a personalized prior using as few as 10 images.
Our paper presents My3DGen, a practical system for creating a personalized and lightweight 3D generative prior using as few as 10 images. My3DGen can reconstruct multi-view consistent images from an input test image, and generate novel appearances by interpolating between any two images of the same individual. While recent studies have demonstrated the effectiveness of personalized generative priors in producing high-quality 2D portrait reconstructions and syntheses, to the best of our knowledge, we are the first to develop a personalized 3D generative prior. Instead of fine-tuning a large pre-trained generative model with millions of parameters to achieve personalization, we propose a parameter-efficient approach. Our method involves utilizing a pre-trained model with fixed weights as a generic prior, while training a separate personalized prior through low-rank decomposition of the weights in each convolution and fully connected layer. This approach is inspired by the success of low-rank adaptation in large language models. However, parameter-efficient few-shot fine-tuning on its own often leads to overfitting. To address this, we introduce a regularization technique based on symmetry of human faces. This regularization enforces that novel view renderings of a training sample, rendered from symmetric poses, exhibit the same identity. By incorporating this symmetry prior, we enhance the quality of reconstruction and synthesis, particularly for non-frontal (profile) faces. Our final system combines low-rank fine-tuning with symmetry regularization and significantly surpasses the performance of pre-trained models, e.g. EG3D. It introduces only approximately 600,000 additional parameters per identity compared to full finetuning and duplication of the original model, which amounts to around 31 million parameters. As a result, our system achieves a 50-fold reduction in model size without sacrificing the quality of the generated 3D faces. Code will be available at our project page: <https://luchaoqi.github.io/my3dgen>.
§ INTRODUCTION
In recent times, significant advancements have been made in the field of 3D-aware face generation, as evidenced by the notable progress demonstrated by various works <cit.>. The capability to generate 3D faces that maintain consistency across multiple views has wide-ranging applications in areas such as AR/VR, video communication and telepresence, computational photography, facial authentication and anti-spoofing. Despite the impressive outcomes achieved by current methods, they heavily rely on extracting generic 3D face priors from large-scale face datasets containing numerous individuals, such as FFHQ or CelebA. However, these large-scale pre-trained 3D generative models often encounter difficulties when representing the distinct characteristics of a specific person <cit.>. While it is possible to optimize them for better reconstructing an input image in the same pose through the use of Pivotal Tuning Inversion (PTI) <cit.>, their capacity to generate multi-view consistent novel view renderings of the same image or synthesize novel 3D appearances of the same individual remains limited.
Hence, this paper revolves around the question of how we can develop an enhanced personalized 3D generative prior capable of reconstructing and synthesizing multi-view consistent 3D faces specific to an individual. Our objective is to utilize personal collections of photographs belonging to a user, encompassing diverse physical conditions such as varying lighting, poses, and expressions, in order to construct a personalized 3D generative prior. Our primary focus lies in scenarios where only a limited number of images are available for a particular person, which is a common occurrence for many individuals. We initially demonstrate the most effective approach to fine-tuning a large pre-trained 3D generative model using as few as 10 images of an individual.
Utilizing a separate neural network to represent the 3D generative prior for each individual, achieved through fine-tuning a large pre-trained model, presents challenges in terms of memory efficiency. Storing millions of parameters per user becomes impractical and can result in issues such as overfitting or mode collapse when dealing with limited data. A more principled approach is to learn a separate personalized prior in conjunction with a generic prior of human faces. To address these concerns, we propose a parameter-efficient fine-tuning approach where the weights of the original pre-trained model are frozen, and additional model weights are trained using low-rank decomposition of each convolution and fully connected layer. This approach draws inspiration from the recent successes in parameter-efficient fine-tuning of large language models <cit.>. Furthermore, this methodology allows the model to preserve the generic 3D face priors in the frozen weights while learning personalized priors in the low-rank fine-tuned weights. Based on empirical observations, we find that low-rank fine-tuning of convolution and fully connected layers of StyleGANv2 and super-resolution module of EG3D, while performing full fine-tuning of the Neural Renderer module, achieves the optimal balance between performance and reduction in the number of parameters.
Nonetheless, we have observed that relying solely on parameter-efficient low-rank fine-tuning often leads to subpar results in terms of both reconstruction and generation of multi-view consistent imagery, particularly in poses that were not encountered during training. This issue is further exacerbated in a few-shot scenario where there is a lack of significant variations in pose, lighting, and expression. In light of these challenges, we propose symmetry-based regularization. We enforce the preservation of identity across multiple poses in the generated faces of an individual during training. To achieve this, we employ an existing facial identification network <cit.>. We minimize the dissimilarity of the facial identification score for two novel view renderings of the training sample in symmetric poses. Through experimentation, we have observed that this regularization technique significantly enhances the performance of parameter-efficient fine-tuning, particularly when dealing with non-frontal poses. Even in scenarios where we fine-tune all the parameters of the generative 3D model, symmetry priors prove to be beneficial, particularly for non-frontal poses of input images or when generating novel view renderings.
We have conducted both quantitative and qualitative evaluations to showcase the effectiveness of our model in reconstructing and generating multi-view consistent 3D faces. In order to demonstrate its ability to generate unseen appearances of a user, we interpolate between two test images that were not included in the training set, and assess the extent to which the user's identity is preserved. Our proposed approach, which combines parameter-efficient fine-tuning with symmetry-based regularization, only necessitates training 0.6 million parameters, compared to the full finetuning of the generative model, which requires 31 million parameters. This represents a substantial reduction of 50 times in the number of parameters.
Our method outperforms a pre-trained model, specifically EG3D <cit.>+PTI <cit.>, in both reconstruction (16% improvement in perceptual quality metric, 0.120 vs. 0.143) and interpolation (17% improvement in facial identification similarity, 0.63 vs. 0.54). When compared to fine-tuning the entire generative model, our approach yields similar quality results while requiring 50x less parameters. Notably, our method demonstrates an additional advantage when tested on non-frontal imagery due to the incorporation of symmetry-based regularization.
In summary, our contribution is the introduction of My3DGen, a parameter-efficient method for constructing a personalized 3D generative prior with as few as 10 images. This presents a distinctive research challenge that combines few-shot learning, parameter-efficient fine-tuning, and generative 3D modeling, that to the best of our knowledge has not been previously explored. The key contributions of our work are as follows:
∙ We first show how to fine-tune a 3D generative model, EG3D <cit.>, enabling it to improve the reconstruction and synthesis of multi-view consistent images of an individual using a training set comprising as few as 10 images.
∙ Subsequently, we introduce a parameter-efficient fine-tuning technique for the generative model by employing low-rank approximation on the convolution and fully connected layers of the StyleGANv2 and super-resolution modules of EG3D. This approach reduces the number of trainable parameters by more than a factor of 50.
∙ Finally, we propose a symmetry based regularization for finetuning that utilizes symmetric nature of human face to generate more multi-view consistent imagery, especially for non-frontal faces.
§ RELATED WORK
The ability to reconstruct and generate three-dimensional (3D) human faces has extensive applications in virtual reality (VR), augmented reality (AR), virtual try-on, telecommunication, facial authentication, and image or video editing. One promising approach is the development of a parametric 3D model, known as a 3D Morphable Model (3DMM) <cit.>, which allows the generation of any human shape through a linear combination of basic shapes. A comprehensive survey on 3DMMs is available for interested readers <cit.>.
Typically, 3DMMs are constructed using high-quality facial scans from multiple individuals. However, when fitting a 3DMM to a test image <cit.>, the results often appear unrealistic due to the limited expressiveness of the linear blended model. Personalizing a 3DMM using personal photo collections is also challenging due to the absence of 3D scans. As a result, recent research has focused on developing generative models for 3D faces <cit.>, which can be trained solely using 2D images. Generative models offer greater expressiveness compared to parametric models, and their ability to train from 2D images instead of 3D scans facilitates easier personalization. Nevertheless, there is a lack of research specifically addressing the personalization of such generative 3D models, which is the primary focus of our paper.
In our work, we concentrate on achieving efficient parameterization for personalizing 3D generative models. We begin by providing an overview of existing research on personalizing 2D generative models, followed by a summary of previous work on parameter-efficient fine-tuning, and lastly the role of symmetry regularization in 3D reconstruction of faces.
GAN Inversion & Personalization. Given an input image, GAN inversion aims to find the latent vector in a pretrained GAN model's latent space that best reconstructs the image. Significant progress has been made for 2D GANs <cit.>, and 3D GAN inversion is an active area of research <cit.>. While both can involve modifying the weights of a pretrained GAN model to reconstruct new examples, GAN inversion is very different from personalization, which is the problem we are trying to tackle. Inversion techniques typically only faithfully reconstruct the information visible in the given input image, whereas personalization takes in a set of images of an identity and aims to learn a generative prior of that identity, such that new images belonging to the underlying distribution of that identity can be synthesized. The work most closely related to ours is MyStyle <cit.>, which proposes to adapt a pretrained 2D StyleGAN model into a personalized prior model. Our paper is different in that we aim to personalize a 3D generative model, which requires additional regularization on geometry. Additionally, we tackle parameter-efficient finetuning, since a large model size per identity prevents scaling up to many identities.
Parameter-efficient Adaptation. Large foundation models <cit.> often achieve very impressive performance for tasks in their domain. However, the huge number of parameters of such models often prevents them from being fine-tuned for downstream tasks with a limited budget. For example, GPT-3 <cit.> contains 175 billion parameters, which would be impossible to run on a single PC. Therefore, many parameter-efficient adaptation techniques have been proposed to fine-tune models on a budget. In natural language processing, many such techniques have been proposed <cit.>. For image generative models, Zhang et al. propose ControlNet <cit.> to efficiently adapt pretrained Stable Diffusion <cit.> models to various downstream tasks. Our approach is inspired by LoRA <cit.>, a technique for efficiently finetuning large language foundation models by imposing a low-rank structure on weight matrices. We explore the best strategy to perform low-rank adaptation of EG3D that provides a strong trade-off between parameter efficiency and result quality.
Role of facial symmetry in reconstruction. Exploiting facial symmetry for 3D reconstruction has been investigated in the past and has proven to be effective. Researchers have explored the role of symmetry-based priors in an optimization framework for 2D-to-3D face reconstruction <cit.>. Recently, with the rise of deep neural networks, symmetry-based priors have also been used for regularization <cit.>. Beyond 3D reconstruction, symmetry-based priors have also been used for inpainting <cit.> and facial recognition <cit.>. While the symmetry-based prior is an age-old concept in computer vision, our use of it to regularize the few-shot personalization of a 3D generative model is, to our knowledge, new.
§ METHOD
Our goal is to fine-tune a 3D-aware generative model in few-shot settings. In section <ref>, we show how we formulate the personalization problem and fine-tune a pretrained EG3D <cit.> for personalization. Since EG3D <cit.> has 31 million parameters, this fine-tuning process requires storing a large model for each identity, which is a bottleneck for scaling. In section <ref>, we show how we leverage the latest advances in Large Language Models to address this problem. In section <ref>, we show how the symmetry of human faces helps in few-shot fine-tuning, especially for non-frontal faces. Section <ref> includes implementation details. An overview of our approach is shown in figure <ref>.
§.§ Personalization Formulation
We consider a scenario where we have a reference set of N 2D images of an individual x_i, and their associated camera poses c_i, denoted as 𝒟_p={(x_i, c_i)}_i=1^N, where N is as few as 10 images. We utilize a pre-trained 3D aware stylegan based generative model G(·;θ_G), i.e. EG3D <cit.> pretrained on FFHQ. Our goal is to personalize this pre-trained model to identify and fine-tune a low dimensional manifold in the 𝐖 latent space to better reconstruct and generate multi-view consistent images by training on the few-shot reference set 𝒟_p of that particular identity. Our personalization scheme is inspired by Pivotal Tuning Inversion (PTI) <cit.> and MyStyle <cit.>, which shows effective performance in personalizing a 2D StyleGAN.
Similar to MyStyle<cit.>, we first invert images from (x_i, c_i) ∼𝒟_p with associated camera poses into latent vectors w_i, or anchors as defined by MyStyle<cit.>, with an optimization based off-the-shelf inversion technique <cit.>. We fix the camera pose and only optimize for the latent code.
Then we fine-tune the weights θ_G of the generative model G(·;θ_G) with reconstruction objective ℒ_rec which reconstructs x_i given the latent code w_i and camera pose c_i:
ℒ_rec = ℒ_lpips(G(w_i, c_i; θ_G), x_i) + λ_L_2 ‖G(w_i, c_i; θ_G) - x_i‖_2
θ_G_p = argmin_θ_G ℒ_rec
where λ_L_2 is a hyperparameter balancing the two losses.
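A condensed PyTorch-style sketch of this fine-tuning stage is shown below. It assumes an EG3D-style generator callable as generator(w, c), an off-the-shelf lpips_loss module, and pre-computed anchors; the L2 term is implemented here as a mean-squared error, and logging, batching, and learning-rate scheduling are omitted.

import torch
import torch.nn.functional as F

def personalize(generator, lpips_loss, anchors, images, cams,
                steps=1000, lr=2e-3, lambda_l2=1.0):
    # Fine-tune the generator weights so each anchor w_i reproduces its
    # reference image x_i at its estimated camera pose c_i (the L_rec objective).
    params = [p for p in generator.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    n = len(images)
    for step in range(steps):
        i = step % n                          # cycle through the reference set
        w, x, c = anchors[i], images[i], cams[i]
        x_hat = generator(w, c)               # reconstruction at the anchor pose
        loss = lpips_loss(x_hat, x).mean() + lambda_l2 * F.mse_loss(x_hat, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator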
§.§ Parameter Efficient Fine-tuning
The finetuning approach described in Sec. <ref> requires updating all the parameters of a large pre-trained 3D generative model, in this case 31 million parameters. Thus we end up with separate set of 31 million parameters for each individual, which is highly inefficient. In principle there are certain levels of similarity and shared features between faces of all humans. To build an efficient model one should utilize these shared priors about human faces and build a separate person specific prior, which requires a significantly smaller amount of parameters to represent. This will also allow efficient finetuning, especially in a few-shot setup. Finetuning a large model with fewer training examples often lead to overfitting and mode collapse, resulting in a generative model which can not generalize to unseen appearances of that person. Instead we propose to freeze the weights of the pre-trained model which captured priors of generic human faces and train additional weights with low-rank decomposition which has significantly less number of parameters. This idea is inspired by recent success in parameter-efficient finetuning of Large Language Models, especially LoRA <cit.>.
When adapting for personalization, LoRA shows that a pre-trained model's weight matrix W_0∈ℝ^d ×k can be fine-tuned with a low-rank decomposition, such that the fine-tuned adapted weight matrix W_ft = W_0+Δ W=W_0+B A, where B ∈ℝ^d × r, A ∈ℝ^r × k are trained while W_0 is frozen, and the rank r ≪min (d, k) <cit.>. Inspired by this, we re-parametrize the convolution and fully connected layers in the StyleGAN2 generator and the super-resolution module. Pre-trained weights W_0 of the StyleGAN2 generator and the super-resolution module are frozen during training while only A and B are updated. We fine-tune all the weights of the fully connected network of the Neural Renderer, as it is relatively small. Under this setting, the number of parameters needed to be trained is determined by the rank r. Accordingly, EG3D's pre-trained generator contains ∼31M parameters and can be reduced to only 600K, with r=4.
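For a single fully connected layer, the re-parameterization W_ft = W_0 + BA can be sketched as follows (a minimal illustration; EG3D's modulated convolutions require the same low-rank update applied to their weight tensors, which is omitted here):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen pre-trained linear layer W_0 plus a trainable low-rank update BA.
    def __init__(self, pretrained: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False                         # W_0 stays frozen
        d, k = pretrained.out_features, pretrained.in_features
        self.A = nn.Parameter(torch.randn(rank, k) * 0.01)  # A: r x k
        self.B = nn.Parameter(torch.zeros(d, rank))         # B: d x r, zero-init so W_ft = W_0 at start

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()   # (W_0 x + bias) + B A x

# Example: wrapping one 512 x 512 mapping-network layer with rank 4 leaves only
# (d + k) * r = 4096 trainable parameters instead of ~262K.
lora_layer = LoRALinear(nn.Linear(512, 512), rank=4)
trainable = sum(p.numel() for p in lora_layer.parameters() if p.requires_grad)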
§.§ Symmetry Loss
We first observe that few-shot fine-tuning EG3D's generative model using the reconstruction loss ℒ_rec still performs poorly on in-the-wild images with non-frontal poses, especially for images with strong profile poses. We believe that this poor result stems from the fact that FFHQ contains 3D biases, where attributes of a face (e.g., smile probability, gaze direction, posture, or haircut) correlate with the position of the camera, and FFHQ is in particular highly biased toward frontal faces <cit.>. Thus, the domain prior carried by the pre-trained generator contains little knowledge of profile (non-frontal) poses. Moreover, few-shot fine-tuning is data-limited, and the training images themselves may lack profile poses. We further observe that our proposed parameter-efficient fine-tuning in Sec. <ref> often slightly degrades the reconstruction performance for frontal faces. This is most likely due to the trade-off between compression of the model parameters and quality of the reconstruction, which can be improved with further regularization. To this end, we need an additional regularizer to fix the artifacts caused by the pose bias of the frontal-face-dominated training set.
The goal of the additional regularization is to enforce that the generated faces of an individual preserve that individual's identity across different poses and instances. We use a facial identification network, ArcFace <cit.>, which is trained to produce features that are unique to an individual. With the extracted face features as the ID metric, we resort to a facial symmetry prior, i.e., the features of the left half of a face are almost the same as those of the right half <cit.>. Although most human faces are not perfectly symmetric, symmetry is, for the most part, an explicit characteristic of a frontal face. We exploit this common observation as a prior and impose a symmetry loss to alleviate the self-occlusion issues caused by large poses <cit.>. Given a training image x_i, we can reconstruct it in any given pose c_k using x̂_i^c_k = G(w_i, c_k). Due to this symmetry, two symmetric views of a face rendered with the camera pair c_k1 and c_k2 should produce similar results x̂_i^c_k1 and x̂_i^c_k2 if one of them is flipped horizontally to mirror itself. We therefore use the cosine similarity between the ID features predicted by ArcFace:
ℒ_sym = 1 - cos(G(w_i, c_k1), G(w_i, c_k2)_flip).
We apply our ID symmetry regularization every fourth training iteration with hyperparameter λ_sym. Formally, our final objective is given by
θ_G_p = argmin_θ_G ℒ_rec + λ_sym ℒ_sym.
The final reconstructed image of the input image x_i is taken to be G_p(w_i,c_i). The entire pipeline is trained end-to-end using the loss function ℒ_rec + λ_sym ℒ_sym.
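As a concrete illustration, the symmetry term can be computed roughly as follows (schematic PyTorch-style code; generator, arcface, and the camera arguments are placeholders standing in for the EG3D generator, the ArcFace feature extractor, and the mirrored camera pair described above, so details may differ from the actual implementation).

import torch
import torch.nn.functional as F

def symmetry_loss(generator, arcface, w_i, cam_1, cam_2):
    """L_sym = 1 - cos( f(G(w, c_k1)), f(flip(G(w, c_k2))) ) with f = ArcFace."""
    img_a = generator(w_i, cam_1)            # rendering at pose c_k1
    img_b = generator(w_i, cam_2)            # rendering at the mirrored pose c_k2
    img_b = torch.flip(img_b, dims=[-1])     # horizontal flip to mirror the view
    feat_a, feat_b = arcface(img_a), arcface(img_b)   # identity embeddings
    return 1.0 - F.cosine_similarity(feat_a, feat_b, dim=-1).mean()

# per-iteration objective (symmetry term applied every fourth step):
# loss = rec_loss + lambda_sym * symmetry_loss(generator, arcface, w_i, c1, c2)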
§.§ Implementation Details
We use the EG3D FFHQ <cit.> pretrained weights <cit.>, which output images at a resolution of 512×512. Hyper-parameters are chosen as follows: λ_lpips = λ_2 = 1, λ_sym = 4. We apply ℒ_rec on both the low-resolution (128×128) and the high-resolution (512×512) images after the neural renderer block, while we only apply ℒ_sym on high-resolution images. Our further experiments show that a large LPIPS weight (λ_lpips>1) leads to checkerboard-style artifacts <cit.>, while a small LPIPS weight (λ_lpips<0.1) causes non-photorealistic artifacts.
In section <ref>, we evaluate the model's ability to invert unseen test images with PTI <cit.>. To project images into the model's 𝒲 latent space, we optimize the latent code for 500 iterations following the StyleGAN <cit.> optimization scheme. For the PTI inversion, we optimize the latent code for 600 iterations, followed by fine-tuning the model for an additional 350 iterations.
Unless otherwise specified, we render novel views from our model with a yaw range of ± 0.35 (radians) and a pitch range of ± 0.25 (radians) relative to the front of a human face for all of our experiments.
§ EVALUATION
To evaluate the effectiveness of our proposed personalization of a 3D generative model, we mainly test its ability to reconstruct and synthesize the 3D shape and texture of unseen test images, both quantitatively and qualitatively. In Sec. <ref> we first describe our evaluation protocol. Then we demonstrate the effectiveness of our proposed My3DGen with symmetry loss and parameter-efficient fine-tuning, both qualitatively and quantitatively, in Sec. <ref>. Finally, we show that symmetry-based regularization on its own improves reconstruction and rendering of non-frontal poses.
§.§ Evaluation Protocol
Similar to MyStyle <cit.>, we evaluate the effectiveness of our proposed personalized prior in two ways:
Inversion. Here we demonstrate how well a model can reconstruct multi-view-consistent images of an individual from different viewpoints by inverting only a single unseen test image using PTI <cit.>. To quantitatively evaluate inversion performance, we adopt DISTS <cit.>, LPIPS <cit.>, and the ID score <cit.>. DISTS and LPIPS measure the visual quality of the reconstructed image in the same pose as the original image and do not measure multi-view consistency. Live3Dportrait <cit.> noticed that small misalignments from off-the-shelf face pose estimators cause traditional error metrics such as PSNR and SSIM to be unreliable. Therefore, we drop PSNR and SSIM and adopt the deep-learning-based LPIPS and DISTS metrics. For the ID score, we reconstruct the test image in multiple poses, find their closest neighbors in the identity's reference set 𝒟_p in the ArcFace <cit.> feature space, and report the cosine similarities as the ID score. The ID score tests the ability to reconstruct multi-view-consistent imagery.
Synthesis. Here we demonstrate how well a model can smoothly synthesize new multi-view-consistent images in different poses. With personalization, we aim to show that the model has captured a personal generative prior of the identity. We follow MyStyle <cit.> to sample images from the model's captured prior space. We randomly sample two training images of an identity as an anchor pair. Then we interpolate between the anchor pair at 10 equally spaced interpolation weights. At each interpolation step, given the latent code, we randomly generate 20 novel views of the identity and compute their ID scores as described for Inversion. We report the mean ID score between the images generated by every interpolated latent vector and the nearest reference image in ArcFace feature space. This procedure is repeated 10 times for each personalized model. To better evaluate the ability of the model to generalize to new images, we also perform a similar procedure on test images. The only difference is that we sample from the test image set and project the sampled test images into the latent space by optimizing the latent vectors following <cit.>.
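The synthesis-by-interpolation protocol just described can be summarized by the following sketch (illustrative NumPy-style pseudocode; the function and variable names are assumptions rather than the actual evaluation script).

import numpy as np

def interpolation_id_score(generator, arcface, w_a, w_b, reference_feats,
                           sample_pose, n_steps=10, n_views=20):
    """Mean ID score over interpolated latents and random novel views (sketch)."""
    scores = []
    for t in np.linspace(0.0, 1.0, n_steps):
        w = (1.0 - t) * w_a + t * w_b                    # interpolate the anchor pair
        for _ in range(n_views):
            feat = arcface(generator(w, sample_pose()))  # random yaw/pitch novel view
            sims = reference_feats @ feat / (
                np.linalg.norm(reference_feats, axis=1) * np.linalg.norm(feat))
            scores.append(sims.max())                    # closest reference image
    return float(np.mean(scores))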
Dataset. We perform experiments on celebrity face images <cit.>. Each celebrity face dataset is preprocessed following <cit.> to crop and pad images to 512×512. We use off-the-shelf face detection <cit.> to extract the camera pose and align images similar to FFHQ, as described in EG3D. Images are augmented by mirroring and are inverted into EG3D's 𝒲_d using the latent space 𝒲 for the best editability and multi-view consistency. Unless otherwise noted, we train with 10 images from the training split of each dataset to demonstrate the few-shot personalization ability of our approach.
§.§ Effectiveness of parameter-efficient personalization with symmetry regularization
We first show inversion and interpolation performance in Tables <ref>, <ref>, and <ref>. Consistent with previous work <cit.> on 2D generative models, personalization by finetuning all the parameters significantly improves reconstruction performance over the pre-trained model (EG3D-PTI): in the same view by 25.3% in DISTS and 28.0% in LPIPS, and in different views by 1.7% in ID score. For the synthesis-by-interpolation task, the model improves by 12.1% in ID score on the training dataset and by 11.1% on the testing dataset.
However, such an approach requires storing the 31 million parameters of the full model. Next, we apply low-rank parameter-efficient finetuning, termed LoRA after the original name used for large-language-model finetuning in <cit.>. We observe that, compared with finetuning all the weights, applying LoRA reduces performance in reconstructing the same view by 17.4% in DISTS and 26.1% in LPIPS, in reconstructing different views by 1.4% in ID score, and in the synthesis-by-interpolation task by 1.0% on training data and 2.7% on testing data. However, low-rank finetuning is still significantly better than the pre-trained network (EG3D-PTI <cit.>): DISTS improves by 12.3% and LPIPS by 9.2%, while the ID score for reconstruction is only marginally hurt and interpolation improves by 11.3% on training data and 8.1% on testing data. While the performance drops only slightly, we utilize only 600,000 (600K) parameters compared to finetuning the full 31 million parameters.
By leveraging symmetry in human faces, our full system is almost as good as full fine-tuning in both the inversion and synthesis tasks. In fact, we show in Sec. <ref> that for non-frontal faces our full system improves over full fine-tuning in the synthesis-by-interpolation task.
We also provide qualitative comparisons for both the inversion and synthesis-by-interpolation tasks against the pre-trained model (EG3D-PTI <cit.>) and full fine-tuning in Figs. <ref>, <ref>, and <ref>. Notice how, for both inversion and synthesis, our system is visually comparable to EG3D-PTI, while being 50 times more parameter-efficient. For reconstruction, the original EG3D-PTI tends to smooth out details (4th and 7th rows in Fig. <ref>), e.g., skin, beard, and hair. Also, EG3D-PTI occasionally exhibits corrupted 3D geometry (1st and 7th rows in Fig. <ref>), which aligns with the results observed in <cit.>. Ours can produce high-fidelity reconstructions under different poses while keeping the model compact, using only 600K parameters compared to tuning the whole model with 31M parameters. For synthesis, EG3D-PTI provides non-identity-preserving interpolation results, as it has not been personalized, and thus produces implausible interpolations. As shown in Fig. <ref>, for training anchors, full fine-tuning can introduce undesirable artifacts (8th row), whereas ours produces fewer artifacts with the symmetry-loss regularization. Similar artifacts can also be found in Fig. <ref> (2nd and 8th rows) for test anchors without the symmetry regularization.
§.§ Effectiveness of symmetry regularization for non-frontal views
Our proposed symmetry regularization is effective on its own. Applying the symmetry regularizer, with or without parameter-efficient LoRA, slightly improves the reconstruction quality in Tab. <ref> and improves the synthesis-by-interpolation quality in Tab. <ref> by 9% with LoRA and 8% without LoRA (i.e., for full finetuning). This shows the overall effectiveness of symmetry-based regularization in improving personalization of the generative model.
However, as hypothesized in Sec. <ref>, the key advantage of symmetry emerges in the presence of profile faces. We show in Table <ref> that, for profile faces, our proposed symmetry loss significantly improves inversion performance mainly when the pose differs from the input pose, as the ID score is improved by 4.8%. In Figure <ref>, we show a qualitative example of this phenomenon. In the second row of the first example, the full-finetuning model does not fully capture the 3D shape of Michelle Obama's face and infers a face shape that significantly deviates from reality. With symmetry regularization, as shown in the third row, the face is more rounded and more closely matches Michelle Obama's identity. In the second and third examples, again without the symmetry loss, the model is unable to recover the correct 3D shape and part of the face geometry is deformed, which shows up in non-frontal poses (e.g., the last two columns).
§.§ Effect of Dataset Size
In Figure <ref>, we show how the training dataset size influences the performance of our proposed method and the baselines. The performance of “pretrained” should stay constant across all data sizes. We observe that both full fine-tuning and our method benefit from additional data when going from 10 to 50 images. However, there is very limited improvement for both methods from 50 to 100 images.
§ LIMITATIONS AND FUTURE WORKS
In Figure <ref>, we show some failure cases of our system on the inversion task. As shown in the first row of Figure <ref>, our system builds a personal prior from a training set and cannot handle unseen expressions; nevertheless, it still obtains a reasonable reconstruction. Our model is trained on faces and does not have a strong prior on objects; thus, the phone in the second row is not reconstructed correctly. Finally, with our current inversion formulation, our system does not work well on heavily cropped faces, such as the one in the third row. In this example, the standard EG3D preprocessing pipeline fills the cropped region with Gaussian-blurred, reflection-padded values, resulting in invalid pixel values near the boundary. One can potentially extend our system by masking out invalid values in the inversion optimization objective, similar to the in-painting task of MyStyle <cit.>.
For this paper, we assume that camera intrinsics can be reliably estimated with an off-the-shelf pose estimator following EG3D. In the future, we can expand our system and optimize camera pose like <cit.>.
§ CONCLUSION
Recent years have witnessed an explosion in 3D generative model research, especially for faces <cit.>. While such models can generate imaginary faces by sampling a latent code or reconstruct an input face by tuning the generator (PTI), they fall short in their ability to personalize and synthesize unseen appearances and viewpoints of an individual. This paper is, to our knowledge, the first attempt to develop a personalized 3D generative prior using as few as 10 images of an individual.
We take a principled approach towards personalization. Instead of representing each personalized prior as a separate large generative model with millions of parameters (31 million for EG3D), we propose to represent each personalized model with a compact neural network (only 600K parameters) in conjunction with a pre-trained network that provides generic face priors. Our compact neural network uses a parameter-efficient low-rank decomposition of the convolution and fully connected layers. However, training a parameter-efficient generative prior using as few as 10 images is extremely ill-posed and requires additional regularizers. To this end, we propose a symmetry-based regularizer, whereby two novel-view renderings in symmetric poses of a training example should preserve the identity of the individual. We observe that the symmetry-based regularizer improves performance for non-frontal poses. Our final system, My3DGen, which includes parameter-efficient finetuning with a symmetry-based regularizer, significantly outperforms a pre-trained model and is comparable to full finetuning, which requires 50 times more parameters.
|
http://arxiv.org/abs/2307.04650v1 | 20230710154934 | Interaction between two overall neutral charged microscopically patterned surfaces | [
"Shiqi Zhou",
"Amin Bakhshandeh"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.stat-mech"
] |
School of Physics and Electronics, Central South University, Changsha, 410083, Hunan, China
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, CEP 91501-970, Porto Alegre, RS, Brazil.
We study the interaction between heterogeneously charged surfaces in an electrolyte solution by employing classical Density Functional Theory (cDFT) and Monte Carlo simulations. We observe consistent behavior between cDFT and Monte Carlo simulations regarding force curves and two-dimensional density profiles. Armed with the validated cDFT, we explore the system’s behavior under parameters that are challenging to simulate directly. Our findings include the impact of domain size, domain charge, domain charge configuration, and bulk electrolyte concentration on the osmotic pressure. Remarkably, the force curve is more sensitive to the domain size for asymmetric configurations than for symmetric ones; the bulk concentration weakly influences the force curve independent of the system configuration.
Interaction between two overall neutral charged microscopically patterned surfaces
Amin Bakhshandeh
August 12, 2023
=====================================================================================
§ INTRODUCTION
Electrostatic interactions play a critical role in stabilizing colloidal systems, ionic chemical reactions, and biochemical and physical phenomena <cit.>. These interactions are responsible for many exciting phenomena; as a result, their study has led to significant advances in various scientific fields.
One intriguing phenomenon observed in colloidal suspensions with multivalent ions is the reversal of electrophoretic mobility <cit.>. Under certain conditions, a like-charge attraction between colloidal particles of the same charge sign can also occur <cit.>.
Understanding electrostatic interactions and their effects on colloidal systems is crucial for designing novel materials and processes in various applications, such as drug delivery, energy storage, and water treatment. Therefore, it is important to continue investigating the mechanisms and properties of electrostatic interactions in various systems.
When a charged surface is in contact with an electrolyte solution, an electrical potential difference is created across the interface, which attracts oppositely charged ions and creates an electric double layer (EDL). The first attempt to formulate the concept of the EDL was made by Helmholtz in 1853 <cit.>, who proposed a primitive model in which a boundary between the charged surface and the electrolyte solution constitutes an electrical double layer, with a layer of oppositely charged ions at the surface. However, Helmholtz's model did not consider the effect of the thermal motion of ions. This deficiency was corrected by Gouy and Chapman (GC) <cit.>, who introduced the concept of a diffuse double layer.
The stability of colloidal particles is more complicated and cannot be explained by the GC model alone. To provide a general understanding of the stability of colloidal suspensions, approximate theories such as the Debye–Hückel and Derjaguin–Landau–Verwey–Overbeek (DLVO) theories are used. However, the problem with these theories is that they have limitations and can only be applied to systems such as 1:1 electrolytes at moderate concentrations. As the correlations in the system increase, these theories become inadequate and fail <cit.>.
Recent studies have shown that the interaction between a charged surface and the surrounding biomolecules can heavily influence the behavior of charged surfaces in biological systems. In particular, the presence of charged lipids and other membrane components can significantly alter the behavior of charged surfaces on cell membranes, leading to new and exciting phenomena <cit.>. Recent developments in experimental techniques, such as single-molecule force spectroscopy, have provided new insights into the behavior of charged surfaces in biological systems <cit.>.
Recently, research has focused on the long-range interactions between heterogeneously charged surfaces. These surfaces are significant in nanotechnology, as it is possible to create periodically charged patterned surfaces using nano-fabrication techniques <cit.>. These systems exhibit unique behavior; for example, the adsorption of polyelectrolytes and polyampholytes depends on their configuration <cit.>. It has been observed experimentally that the attraction between these surfaces can extend up to 500 Å <cit.>. One possible explanation for this observation is a correlation between the charged domains on the two surfaces <cit.>. If this assumption were correct, the attraction would disappear under rapid shear movement of the plates; in this situation, the domains would not have time to adjust themselves, and as a result the correlations and the attraction force would be weakened. However, it is known that the correlation does not play any role in this observation, and the attraction force is due to a pure electrostatic interaction <cit.>. In general, the study of these systems can be complex and often requires the use of molecular simulations and advanced approaches such as classical Density Functional Theory (cDFT) <cit.>.
The classical density functional theory provides a powerful tool for the calculation of the structure and thermodynamic properties of heterogeneous fluid systems <cit.>. Most studies validating the accuracy of cDFT are primarily focused on one-dimensional cases, where the density distribution is solely dependent on a one-dimensional coordinate. Moreover, for three-dimensional cases, various scenarios have been compared with molecular simulations, resulting in different reported outcomes <cit.>.
The effectiveness of cDFT for the present two-dimensional model is still unknown, despite its demonstrated utility in studying complex systems, such as heterogeneously charged surfaces within electrolyte solutions <cit.>. Therefore, one of the aims of this article is to assess the performance of a commonly used cDFT version in a two-dimensional case using molecular simulation data, since, to the best of our knowledge, such a comparison does not exist.
Despite the advances that have been made in our understanding of charged surfaces in electrolyte solutions, there is still much to be learned. The behavior of these systems is highly dependent on the properties of the electrolyte solution, such as its concentration and ionic strength. Furthermore, the effects of thermal fluctuations and the presence of other molecules in the system can also significantly impact the behavior of charged surfaces. Additional research is required to fully comprehend the behavior of these complex systems and to create new theoretical and computational tools that can help us better predict their behavior in different environments.
The behavior of these systems is governed by two forces: electrostatic and entropic. The entropic force in an electrolyte solution arises from the collisions of an ion with other ions or with surfaces in the solution, resulting in an excluded-volume effect. This effect causes the particles to experience a net force that is proportional to the density gradient of the surrounding particles.
This paper presents a comprehensive study of the interaction between two heterogeneously charged neutral surfaces (HCNS) using molecular simulation and cDFT. Through our simulations and theoretical calculations using cDFT, we aim to shed light on the complex interplay between electrostatic interactions and entropic forces that govern the behavior of these systems.
The paper is organized as follows: In section <ref>, we provide a detailed description of our simulation approach and the theoretical model used to investigate the behavior of heterogeneously charged neutral surfaces; in section <ref>, we describe the cDFT; in section <ref>, we present and discuss our results; and in section <ref>, we conclude our work.
§ SIMULATION METHOD
We utilize a two-dimensional model to investigate the behavior of heterogeneously charged surfaces immersed in an electrolyte solution. The model consists of two flat surfaces with dimensions L_x and L_y, separated by a distance H and surrounded by the electrolyte. For our simulations, we set L_x=L_y=400 Å. The solvent is treated as a uniform dielectric with permittivity ϵ_w, and the Bjerrum length is defined as λ_B = q^2/(k_B T ϵ_w), where q, k_B, and T denote the proton charge, the Boltzmann constant, and the temperature, respectively. We take λ_B to be 7.2 Å. Each plate has a charged domain with dimensions of L × L_x. The cell configuration is shown in Fig. <ref>.
Our model considers ions as hard spheres with a radius of 2 Å, and we simulate the system using the grand canonical Monte Carlo (GCMC) algorithm, as described in previous studies <cit.>. To study the effect of the electrolyte solution, we put the system in contact with a salt reservoir at a concentration of c, and determine the excess chemical potential of the reservoir using the mean spherical approximation (MSA) <cit.>. Although the MSA is an approximation, it is very precise for 1:1 electrolytes <cit.>.
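For reference, one commonly used closed-form expression for the electrostatic part of the MSA excess chemical potential of a z:z restricted-primitive-model electrolyte is sketched below; treating the ions as spheres of contact diameter d and omitting the hard-sphere contribution are simplifications made here for illustration, not necessarily the exact form used in our simulations.

import numpy as np

def msa_excess_chemical_potential(c_molar, d=4.0, lb=7.2, z=1):
    """Electrostatic MSA excess chemical potential (in units of k_B T) for a z:z
    electrolyte of hard spheres with contact diameter d (Angstrom); a sketch of
    the standard restricted-primitive-model expression, hard-sphere part omitted."""
    rho = c_molar * 6.022e-4                         # number density of each species, 1/A^3
    kappa = np.sqrt(8.0 * np.pi * lb * z**2 * rho)   # inverse Debye length, 1/A
    gamma = (np.sqrt(1.0 + 2.0 * kappa * d) - 1.0) / (2.0 * d)   # MSA screening parameter
    return -lb * z**2 * gamma / (1.0 + gamma * d)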
In this paper, we use the term "symmetric" when two plates have the same configuration, and "asymmetric" when the configurations of two plates are mutually crosswise arranged.
We performed GCMC simulations for salt concentrations of 20 and 100 mM and surface charge densities of 0.0998 and 0.2 Cm^-2. Each plate consists of 1600 charged particles arranged in different patterns. To evaluate the electrostatic energy, we used the 3D Ewald summation method with the correction of Yeh and Berkowitz for the slab geometry <cit.>.
In Fig. <ref>, we plotted the density profile of negative and positive ions in 3D for different patterns.
To evaluate the force on the plate, we consider both the electrostatic interactions and the entropic force resulting from the momentum transfer of ions colliding with the plates. To calculate the entropic force, we use the method proposed by Wu et al. <cit.>:
For each sample, we move the plate towards the other plate and count the number of overlaps with electrolyte ions <cit.>:
β F = < N >/Δ z ,
where N is the number of ion overlaps and Δ z is the displacement of the wall, with Δ z → 0. The entropic pressure then becomes:
β P = < N >/Δ z L_x L_y .
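A schematic implementation of this overlap-counting estimator could look as follows (a sketch only: samples_z stands for the stored ion z-coordinates of equilibrated GCMC configurations, the plate is treated as a flat hard wall, and the virtual displacement dz must be small).

import numpy as np

def entropic_pressure(samples_z, plate_z, ion_radius, dz, lx, ly, kT=1.0):
    """beta*P = <N> / (dz * Lx * Ly): count the ions that would overlap with the
    plate after a small virtual displacement dz towards the gap (sketch)."""
    counts = []
    for z in samples_z:                  # array of ion z-coordinates per MC sample
        gap = z - plate_z
        # ions sitting within dz of contact become overlaps once the plate moves
        counts.append(np.sum((gap >= ion_radius) & (gap < ion_radius + dz)))
    return kT * np.mean(counts) / (dz * lx * ly)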
§ CLASSICAL DENSITY FUNCTIONAL THEORY
We use a cDFT version <cit.> that has been repeatedly validated in several one-dimensional cases. In this version, the treatment of hard-sphere repulsive interactions incorporates a recently developed extended form <cit.> of the fundamental measure functional proposed by Kierlik and Rosinberg <cit.>.
The long-range Coulomb interaction is treated with a mean-field approximation, and the coupling between hard-sphere and Coulomb interactions is treated with a second-order perturbation expansion based on the mean spherical approximation. Although this version has been repeatedly validated in one-dimensional cases, a two-dimensional validation has not yet been reported. Considering a two-dimensional model increases the difficulty of algorithm convergence. For this reason, we take three measures. (i) The calculation is started from zero surface charge density; the charge density is gradually increased to the target value, and the density-distribution output of the previous charge density is taken as input for the next charge-density calculation. (ii) The nonlinear equations resulting from the discretized two-dimensional cDFT equation are solved by the Newton-GMRES algorithm, which is implemented in the public-domain nonlinear Krylov solver NITSOL. (iii) In one-dimensional cases, the mesh size is typically very small, often around 0.025 times the molecular diameter. However, in the two-dimensional case, we set the mesh size to 0.1 times the molecular diameter for both dimensions. We conducted a validation test comparing grid sizes of 0.05 and 0.1 times the molecular diameter and found nearly identical results.
To evaluate the osmotic pressure, we begin by calculating the effective electrostatic interaction potential as a function of the plate separation H. This potential is obtained as the difference between the excess grand potential when the two plates are at a distance of H apart and the excess grand potential when the plates are sufficiently far apart. Next, the derivative of this interaction potential with respect to distance is computed to obtain the interaction force between the plates.
The excess grand potential is determined by subtracting from the grand potential of the system with the two plates at a distance H apart the grand potential of a bulk system with the same volume but without the two plates. Within the framework of cDFT, it is possible to calculate the grand potential of a heterogeneous system, such as the two-plate system, by substituting the density distribution obtained from cDFT into the expression for the grand potential. Conversely, if the bulk density is used as input, the grand potential of the bulk system can be obtained.
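In practice this amounts to numerically differentiating the excess grand-potential curve; for equally spaced separations a centered finite difference suffices (sketch; omega_ex holds the cDFT excess grand potential defined above on the grid of separations H, and area is the lateral plate area).

import numpy as np

def osmotic_pressure(H, omega_ex, area):
    """P(H) = -(1/A) dOmega_ex/dH via centered finite differences (sketch)."""
    return -np.gradient(omega_ex, H[1] - H[0]) / area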
§ RESULTS & DISCUSSION
In the first step, we compared the forces obtained from Monte Carlo (MC) simulations and cDFT for symmetric and asymmetric cases with a domain size of 200 × 400 Å^2 and a surface charge density of 0.0998 Cm^-2 in the presence of a 20 mM 1:1 electrolyte. As shown in Fig. <ref>(a-b) for the symmetric and asymmetric cases, there is good agreement between simulation and cDFT.
Next, we investigate the effect of the electrolyte concentration on the interaction between the charged microscopically patterned surfaces. For the symmetric case, Fig. <ref>(a), with a domain size of 200 × 400 Å^2 and a surface charge density of 0.0998 Cm^-2, the repulsion force does not change significantly when the concentration increases from 20 to 100 mM. The same observation can be made for the asymmetric case, where attraction exists. However, this is not the case at higher surface charge density. As is seen in Fig. <ref>(b), the blue curve, representing the case with a surface charge density of 0.25 Cm^-2, exhibits a significant change in the attraction curve. In Fig. <ref>(a), the osmotic pressure is plotted against the separation distance. As can be seen, despite increasing the electrolyte concentration by a factor of five, the osmotic pressure for the symmetric case does not change significantly.
However, as seen in Fig. <ref>(b), the attraction is stronger for the 20 mM electrolyte concentration than for 100 mM. This behavior can be attributed to the entropic forces: the increased number of collisions between the plates and the electrolyte ions at higher concentrations leads to a reduction in the attractive force. However, as the two plates get closer, the entropic forces diminish in both cases, since the number of particles between the plates decreases, resulting in the same magnitude of the attraction force.
As is shown in Fig. <ref>(a-b), for the symmetric case with a surface charge density of 0.2 Cm^-2, the repulsion does not change significantly for the different domain sizes of 20×200 Å^2 and 40×200 Å^2. However, there seems to be greater sensitivity to the domain size for asymmetric cases, in which there is an attractive force. It is observed that a bigger domain size results in a significantly stronger attraction between the plates for asymmetric configurations.
Also, we compared the density profiles of ions obtained from Monte Carlo simulation and cDFT in Figs. <ref> and <ref>; to achieve this, we mapped the 3D density profile of ions onto 2D. As is observed, the MC simulation results show higher fluctuations than cDFT, but the simulation and cDFT are consistent overall. The ion density profiles provide us with insight into the structure of the electric double layer around the charged patterned surfaces. The issue with the density profile is that its fluctuations are unavoidably large, as depicted in Figs. <ref>-<ref>; this is because small bins had to be used to obtain the 2D density profile, which leads to high fluctuations. However, overall, the agreement is generally acceptable. It is worth noting that for the two-dimensional system, the force curves do not appear to be significantly influenced by the density profile. Although the density profile is a physically significant quantity that sensitively depends on the microscopic configuration, its impact on the force curves seems to be relatively minimal, and this observation is satisfactorily captured by cDFT. Since the force is the negative derivative of the potential function with respect to distance, its effective prediction should depend more sensitively on the accuracy of the method. Fortunately, cDFT demonstrates promising accuracy in predicting force curves. However, this particular issue is the subject of our future research.
After comparing the predictions of cDFT and Monte Carlo simulations, we observed good agreement between the two methods. This comparison confirms the accuracy and reliability of our theoretical approach for studying the behavior of these systems. As a result, we then focused solely on cDFT to investigate systems with higher electrolyte concentrations (2.5 M) and surface charge densities (2.5 Cm^-2). These parameters are challenging to simulate using the MC method due to the difficulty in reaching equilibrium. In Fig. <ref>, we plot the osmotic pressure for different domain sizes at different separation distances for asymmetric cases.
Fig. <ref> shows no attraction between the plates at high electrolyte concentrations for H>10 Å. This is opposite to the case of low electrolyte concentration, in which attraction exists at separation distances up to 40 Å for the asymmetric pattern.
In Fig. <ref>(a), we show the osmotic pressure obtained from cDFT for asymmetric configurations of plates with surface charge density σ = 0.1 Cm^-2 and different domain sizes in the presence of a 2.5 M 1:1 electrolyte. As is seen in Fig. <ref>(a), the maximum attraction and repulsion forces occur at around 5.9 and 8.8 Å, respectively. When the separation distance between the plates is small, the higher entropic forces encourage ions to migrate from the region between the plates to the reservoir. Consequently, this leads to an increased Debye length and an increased attraction force between the plates. However, with increasing separation distance, H, the Helmholtz double layer (HDL) starts to form. The second maximum occurs at a distance of approximately twice the ion diameter (8 Å). This distance corresponds to the point at which the Helmholtz layers of the two plates come into contact with each other. This double layer increases the excluded volume between the plates and, as a result, the entropic force; consequently, the plates do not experience any attraction and only an entropic force is observed. When the separation distance increases further, the entropic force gets smaller, and, since the Debye length is around 1.9 Å, the interaction force rapidly goes to zero.
In Fig. <ref>(b), as the surface charge density increases, the first minimum (maximum attraction force) disappears due to the direct electrostatic interaction between the plates. However, the maximum repulsion force shifts to a smaller separation distance of around 8.1 Å, which can be attributed to the complete formation of the HDL on the domains. This is because the higher surface charge density leads to a stronger binding between ions and plates, and in turn this leads to the formation of a more rigid HDL.
In Fig. <ref>, we plot the maximum and minimum forces observed in Fig. <ref> against the domain size, as well as the separation H at which the maximum repulsion force appears as a function of L.
As is shown in Fig. <ref>(a), the maximum repulsion occurs at smaller distances for larger domain sizes. This suggests that the HDL forms more easily and more completely for larger domains than for smaller ones. This is because the ions find it more difficult to approach small domains due to the repulsion from neighboring domains; we show this schematically in Fig. <ref>. The maximum osmotic pressure, Fig. <ref>(b), reveals that as the domain size decreases, the maximum force also decreases. This, again, can be explained by the fact that the ions are firmly anchored to the charged surface within the HDL of the larger domains, where ions can more easily approach the domain due to the weaker repulsion of neighboring domains. Additionally, as explained previously, Fig. <ref>(c) shows that the maximum attraction force is directly related to the domain size. This is due to the electrostatic interaction between domains, which overcomes the entropic forces.
In Fig. <ref>, the same information is provided for σ=0.1 Cm^-2; in addition, we also show the separation distance at which the maximum attraction force occurs, H_min, as a function of L in Fig. <ref>(a).
As is shown in Fig. <ref>(a and c), the separation distances at which the maximum and minimum forces occur exhibit the same trend as in Fig. <ref>. For the maximum forces in Fig. <ref>(b), a trend similar to Fig. <ref> is also observed. However, in Fig. <ref>(d), it can be seen that a larger domain size leads to stronger attraction forces.
§ CONCLUSION
This study investigates the impact of domain size, domain charge, domain surface configuration, and bulk electrolyte concentration on the osmotic pressure of charged systems using Monte Carlo simulation and Classical Density Functional Theory (cDFT).
We examined the interaction force between plates with symmetric and asymmetric configurations and its relation to the domain size. To this end, we studied surface charge densities of 0.0998 and 0.25 Cm^-2 and a 20 mM 1:1 electrolyte for both asymmetric and symmetric configurations with domain sizes of 20, 40, and 200 Å. In all cases, we observed attraction between the plates in the asymmetric configurations, while repulsion was observed in the symmetric configurations. Furthermore, we compared the results obtained from the recently developed cDFT for plates with non-homogeneous charge distributions with those from MC simulations and found good agreement.
Our findings reveal that the domain size has a minimal effect on the osmotic pressure in symmetric configurations. However, the attraction between the plates is sensitive to the domain size in the case of asymmetric configurations.
Furthermore, we analyzed the behavior of the force curve with variations in bulk concentration for symmetric and asymmetric configurations. The force curve remains relatively unchanged with changes in bulk concentration, but higher domain charge densities can amplify its sensitivity. Moreover, asymmetric configurations exhibit more complex behavior at higher electrolyte concentrations; it was observed that in this case a maximum repulsion appears, which can be explained by the HDL.
In addition, our study confirms the validity of the classical density functional theory (cDFT) <cit.> for slab geometries with non-homogeneous charge distributions. In future work, we will employ the method to study systems with spherical geometry <cit.>; such systems are of great importance in colloidal science and in biological systems such as viruses.
§ ACKNOWLEDGMENTS
This project is supported by the National Natural Science Foundation of China (Grants 22173117), the High Performance Computing Center of Central South University and CAPES.
|
http://arxiv.org/abs/2307.05716v1 | 20230708074507 | Hierarchical defect-induced condensation in active nematics | [
"Timo Krüger",
"Ivan Maryshev",
"Erwin Frey"
] | cond-mat.soft | [
"cond-mat.soft"
] |
a,*]Timo Krüger
a,*]Ivan Maryshev
a,b,1]Erwin Frey
[a]Arnold Sommerfeld Center for Theoretical Physics (ASC) and Center for NanoScience (CeNS), Department of Physics, Ludwig-Maximilians-Universität München,
Theresienstrasse 37, 80333 Munich, Germany
[b]Max Planck School Matter to Life, Hofgartenstraße 8, 80539 Munich, Germany
[*]T.K. and I.M. contributed equally to this work.
[1]Corresponding author: [email protected]
Hierarchical defect-induced condensation in active nematics
[
August 12, 2023
===========================================================
Topological defects play a central role in the formation and organization of various biological systems.
Historically, such nonequilibrium defects have been mainly studied in the context of homogeneous active nematics.
Phase-separated systems, in turn, are known to form dense and dynamic nematic bands, but typically lack topological defects.
In this paper, we use agent-based simulations of weakly aligning, self-propelled polymers and demonstrate that, contrary to the existing paradigm, phase-separated active nematics form -1/2 defects.
We additionally observe and characterize lateral arc-like structures that separate from a band's bulk and move in transverse direction.
We show that the key control parameters defining the route from stable bands to the coexistence of dynamic lanes and defects are the total density of particles and their path persistence length.
We introduce a hydrodynamic theory that qualitatively recapitulates all the main features of the agent-based model, and use it to show that the emergence of both defects and arcs can be attributed to the same anisotropic active fluxes.
Finally, we present a way to artificially engineer and position defects, and speculate about experimental verification of the provided model.
§ INTRODUCTION
The characteristic features of a nematic liquid crystal are the emergence of long-range orientational order and the occurrence of half-integer topological defects, which, however, are annealed at thermodynamic equilibrium <cit.>.
The dynamics of its nonequilibrium counterpart, an active nematic <cit.>, is in contrast governed by the persistent creation and annihilation of pairs of topological defects with opposite charges, leading to a dynamic steady state commonly referred to as active turbulence <cit.>.
Dense gel-like mixtures of microtubules (cytoskeletal filaments) and kinesins (molecular motors) that cause relative sliding between microtubules have become experimental platforms for studying the formation, dynamics, and annihilation of these toplogical defects <cit.>.
The observed complex defect dynamics have been investigated using hydrodynamic theories <cit.>.
The basic insight derived from such studies is that topological defects constantly generate active flow in momentum-conserving systems <cit.> or active flux in momentum non-conserving systems <cit.>.
Another experimental model system for active nematics is the actomyosin motility assay, in which actin filaments actively glide over a lawn of myosin motor proteins, performing a persistent random walk with constant speed <cit.>.
These systems exhibit phase separation into dense polar-ordered regions and dilute disordered regions, which is further corroborated by numerical analyses of corresponding theoretical models <cit.>.
Tuning the interaction between actin filaments by the addition of polyethylene glycol led to the emergence of a dynamic coexistence of ordered states with fluctuating nematic and polar symmetry <cit.>, which has been explained by pattern-induced symmetry breaking <cit.>. Systems exhibiting dense, purely nematic lanes have been thoroughly investigated by both simulations and hydrodynamic theories <cit.>.
As for half-integer topological defects, the common paradigm states that they are absent in dilute self-propelled active nematics <cit.>, but fundamental exclusion criteria for their existence have not been given.
In fact, no steady-state topological defects have yet been found in this subclass of strongly phase-separated active matter.
So far, it has only been observed that transient defects can occur in models with weak density inhomogeneity during the coarsening process <cit.>.
Moreover, toy models inspired by dilute nematic systems without self-propulsion can exhibit defect formation <cit.>.
However, the authors attest that the connection of their phenomenological theory to existing experimental systems is tenuous.
Here we investigate dilute active nematics for the presence of defects using an agent-based model of “weakly-aligning self-propelled polymers” (WASP) which has been shown to faithfully reproduce the behavior of real actomyosin motility assays on all relevant length and timescales including pattern formation processes and the topology of the phase diagram <cit.>.
This allows us to leverage these agent-based simulations as an in-silico experimental system with which to discover new phenomena.
We show that the two hitherto seemingly incompatible phenomena — phase separation and topological defects — are actually closely linked in weakly interacting active nematics.
In particular, we characterize a subclass of topological defects associated with the compression of nematic fluxes, which are similar to phenomena predicted in conceptual models <cit.>, albeit in a different context.
These defects appear as characteristic collective excitations in a novel nonequilibrium steady state. They are in dynamic equilibrium with nematic lanes from which they emerge and into which they disassemble.
Additionally, we find another type of topologically charged structure, filamentous arc ejections (FAEs) — elongated arc-shaped polymer bundles that detach from nematic bands — remotely resembling +1/2 defects.
To elucidate the mechanisms underlying these phenomena, we also introduce a hydrodynamic theory, building on previously published models <cit.>.
Exploiting the respective strengths of these two complementary theoretical approaches, we uncover a close relationship between the dynamics of phase-separated nematic bands, formation of topologically charged structures, and the associated condensation phenomena.
§ RESULTS
§.§ Simulation setup
We use agent-based simulations that emulate the dynamics of weakly interacting self-propelled polymers (WASP) of fixed length L on two-dimensional surfaces building on earlier work <cit.>; refer to the SI for further details on the algorithm.
Each polymer consists of a tail pulled by a tip that follows a trajectory corresponding to a persistent random walk with persistence length L_p.
Upon collision of a polymer tip with the contour of another polymer, a weak alignment torque is assumed to act that changes its direction of motion [Fig. <ref>(a)].
Here we use a purely nematic alignment interaction [Fig. <ref>(b)] whose strength is set by the parameter α_n.
Additionally, a small repulsion force F acts on polymer tips that overlap with other polymers.
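A schematic update rule for the direction of motion of a polymer tip is sketched below (for illustration only: the angular noise amplitude uses the two-dimensional worm-like-chain convention for the path persistence length, and the nematic alignment torque is written in its generic sin 2Δθ form; the actual WASP interaction rules are specified in the SI).

import numpy as np

def tip_step(theta, contact_thetas, v, dt, L_p, alpha_n, rng):
    """One update of a tip's direction: rotational diffusion set by the path
    persistence length plus a weak nematic alignment torque (schematic)."""
    # persistent random walk: angular variance grows as v*dt/L_p (2D WLC convention)
    dtheta = np.sqrt(v * dt / L_p) * rng.standard_normal()
    # weak nematic alignment with polymers the tip is currently in contact with
    for theta_j in contact_thetas:
        dtheta += alpha_n * np.sin(2.0 * (theta_j - theta)) * dt
    return theta + dtheta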
Here we are interested in systems that have a collision statistics with purely nematic symmetry [Fig. <ref>(b)].
Figure <ref>(c) shows the phase diagram of such a weak nematic as a function of the average polymer density ⟨ρ⟩ L^2 and path persistence length L_p; hereafter ⟨ ... ⟩ denotes spatial averaging.
It exhibits an isotropic-nematic transition from a disordered homogeneous phase to a nematically ordered phase.
The phase boundary ρ_n (L_p) approximately scales as L_p^-1; refer to the SI for details.
Thus, when the phase diagram is redrawn as a function of L_p and the spatially averaged normalized density ⟨ϕ⟩ = ⟨ρ⟩ / ρ_n, the phase boundary essentially becomes a horizontal line [inset of Fig. <ref>(c)].
§.§ Dense topologically charged structures
As expected for nematically interacting systems, our simulations show isolated nematic lanes that exhibit strong bending fluctuations on large length and time scales (cf. Movie S1 SI) caused by lateral instabilities <cit.>.
In our simulations, in addition to these typical nematic lanes, we also discover distinct types of topologically charged structures.
One class of these are three-armed filamentous structures containing a topological defect with charge -1/2 at their center [Fig. <ref>(a)].
They are typically formed when three curved nematic lanes — with their convex sides facing each other — meet and condense into a topological defect with a high-density core region [Fig. <ref>(b)]; we do not observe “collisions” of four lanes.
Unlike defects in non phase-separated active nematics, these condensed topological defects (CTDs) do not have a directly corresponding positively charged partner.
Instead, they are surrounded by an extended topologically charged region with a dispersed positive charge, as can be seen in Fig. <ref>(a) (lower right panel), which depicts the topological charge density as defined in Refs. <cit.>.
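One standard way to obtain such a charge-density map from the coarse-grained orientation field is to measure the winding of the director angle around each plaquette of the analysis grid; the sketch below illustrates this procedure (it is not necessarily the exact definition used in the cited references).

import numpy as np

def topological_charge(Qxx, Qxy):
    """Winding number of the nematic director around each grid plaquette; a -1/2
    defect yields a charge of -0.5 in the plaquette containing its core (sketch,
    assuming a smooth coarse-grained Q-tensor field on a square grid)."""
    theta = 0.5 * np.arctan2(Qxy, Qxx)           # director angle, defined modulo pi

    def dtheta(a, b):
        # difference of nematic angles wrapped into (-pi/2, pi/2]
        return (b - a + np.pi / 2.0) % np.pi - np.pi / 2.0

    # accumulate wrapped angle differences counter-clockwise around each plaquette
    q = (dtheta(theta[:-1, :-1], theta[:-1, 1:]) +
         dtheta(theta[:-1, 1:],  theta[1:, 1:]) +
         dtheta(theta[1:, 1:],   theta[1:, :-1]) +
         dtheta(theta[1:, :-1],  theta[:-1, :-1]))
    return q / (2.0 * np.pi)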
Moreover, our simulations show that the active nematic flux is gradually compressed as the triple junction of the nematic lanes (defect core) is approached [Fig. <ref>(a), top right panel].
This leads to a reduction in lane width and a corresponding increase in density, which reaches a maximum in proximity of the core.
These three-armed topological defects are dynamic structures that are constantly being dissolved and reassembled.
A second class of structures we observe are lateral filamentous arcs that separate from the bulk of a straight nematic band and eventually move in transverse direction.
A time trace of such a filamentous arc ejection (FAE) is shown in Fig. <ref>(c).
These structures have similarities to +1/2 defects: they are “curved” and they always emanate in the direction of their convex side.
Somewhat similar observations have been made in continuum models constructed for nematic particles with velocity reversals <cit.>. However, the authors did not address the properties of these structures or the reasons underlying their formation.
While there are certainly similarities on a superficial phenomenological level between FAEs and these structures, the underlying mechanisms and nature of these structures may be quite different.
Having discovered these collective topological structures in our in-silico experiments, we sought to explore how their emergence is affected by a change of parameters.
However, since the lateral instabilities of nematic bands required for the formation of CTDs (cf. section “From CTDs to FAEs and bands” below) occur only on very long time scales, a systematic investigation of a phase diagram in agent-based simulation is numerically prohibitively demanding.
Therefore, we sought an alternative way to explore the spatiotemporal dynamics of the systems that would enable us to dissect the processes underlying the formation of CTDs and FAEs.
As explained next, we achieved this through constructing a hydrodynamic approach that captures all the main features of our agent-based simulation setup.
§.§ Hydrodynamic model provides access to the phase diagram
To this end we used the standard Boltzmann-like approach (see SI).
However, as discussed below, this model was insufficient to explain the emergence of half-integer defects and was therefore generalized to include density-dependent corrections.
By analogy with passive model C in the Hohenberg-Halperin classification scheme <cit.> we formulate a hydrodynamic model in terms of a density and an order parameter field.
For an active nematic, these are the (normalized) polymer density ϕ = ∫dθ P(θ)/ρ_n, and the traceless and symmetric tensor Q_ij = ∫dθ P(θ)(2n_in_j- δ_ij) (nematic order parameter), where the unit vector 𝐧= (n_x,n_y)=(cos θ, sin θ) defines the local polymer orientation and P(θ) denotes the probability density for the polymer orientation θ.
The eigenvector associated with the larger of the two eigenvalues of the Q-tensor can be viewed as depicting the average orientation of the polymers.
Unlike classical model C, however, a hydrodynamic model for active nematics must be intrinsically nonequilibrium in character and its dynamics can not be determined by the gradient descent in a single free-energy landscape.
Nevertheless, using the analogy to the dynamics near thermal equilibrium, some intuition can be gained for the design of the model.
As we discuss in more detail below, part of the system's dynamics can be understood in terms of two separate effective free-energy functionals for the non-conservative Q-tensor (F_Q) and the conservative density field (F_ϕ), similar to related nonequilibrium models discussed recently <cit.>.
Mass-conservation requires that the density obeys a continuity equation ∂_t ϕ = - ∂_i J_i.
In general, for symmetry reasons, the current must be built from the gradient of a scalar quantity and the divergence of a tensorial quantity containing the Q-tensor. Similar to model B, the scalar component is of the form J_i^iso = -∂_i μ (ϕ) with chemical potential μ (ϕ) = ν (ϕ) ϕ.
Here, the first and second terms of ν (ϕ) = λ^2+ν_ϕϕ account for motility-induced effective diffusion with the diffusion constant λ^2 ∝ L_p^2 <cit.>, and for steric repulsion due to excluded-volume interactions <cit.>, respectively. The latter contribution represents the density-dependent correction.
For the tensorial part, we write J_i^aniso = -∂_j [χ(ϕ) Q_ij], which again is assumed to contain motility- and interaction-induced parts: χ (ϕ) =λ^2+χ_ϕϕ. As above, the latter term represents a density-dependent correction motivated by theories for active nematics <cit.>, and it is controlled by the phenomenological parameter χ_ϕ.
It will turn out that this anisotropic term leads to phase separation, since it causes compression in the direction perpendicular to the axis of the local orientational order.
Taken together, one gets
∂_tϕ = ∂_i∂_j[ ν(ϕ)ϕ δ_ij + χ(ϕ) Q_ij ] .
The isotropic flux (first term) can be written in terms of an effective free-energy functional F_ϕ= ∫d^2 x (1/2λ^2 ϕ^2+1/3ν_ϕϕ^3).
In contrast, however, the anisotropic flux (second term in (<ref>)) violates time-reversal symmetry <cit.>.
We assume the time evolution of the nematic tensor to be of the form
∂_t Q_ij = -[ δ F_Q/δ Q_ij ]^st = -[ δ F_Q/δ Q_ij - 1/2 δ_ij Tr(δ F_Q/δ Q_ij) ] ,
which corresponds to a gradient dynamics (model A) determined by the effective free-energy functional F_Q; here and in the following [...]^st denotes the traceless and symmetric part of a tensor.
We have chosen the timescale such that the friction coefficient in the gradient dynamics is set to 1.
The effective free-energy functional has a standard Landau-deGennes (LdG) part <cit.> responsible for an isotropic to nematic transition, but also includes a coupling between density gradients and the orientation of polymers as in inhomogeneous active nematics <cit.>,
F_Q = ∫ d^2 x ( 1/2 [ (1-ϕ)Q^2 + 1/2 β (Q^2)^2 + κ (∂_jQ_ij)^2 ] - Q_ij[ ω ∂_i∂_jϕ + ω^a(∂_iϕ)(∂_jϕ) ] ) .
The LdG free-energy density in terms of the order parameter Q^2= Q_klQ_kl describes a nematic ordering transition at the critical density ϕ_c = 1 with the gradient term playing the role of a generalised elasticity.
The stiffness coefficient (or Frank constant) κ also contains two contributions, one from the motility of the polymers <cit.>, and the other due to interactions <cit.>: κ (ϕ) = 1/2 λ^2+κ_ϕ⟨ϕ⟩.
Note that the last term — the density-dependent correction to elasticity — is linearised around the mean value of density ⟨ϕ⟩ (see SI).
The second line in (<ref>) takes into account the coupling between density gradients and nematic order, and can be derived solely on the basis of symmetry considerations.
The functional derivatives of F_Q with respect to the nematic tensor correspond to “interfacial torques” <cit.> in the equation of motion for the nematic tensor.
They rotate the director at the interface between high- and low-density domains, where the gradients of ϕ are the strongest.
The lowest-order coupling — and the associated “aligning torque” <cit.> ω [∂_i∂_jϕ]^st — is iconic for active nematics <cit.>.
It is responsible for the destabilization of straight nematic lanes, eventually resulting in lane undulations (or other types of chaotic behavior associated with “dry active turbulence” <cit.>).
In our case, this term is due to self-advection (ω=λ^2, see SI) but it can be considered as “diffusive” since anisotropic diffusion of particles leads to an analogous contribution.
Interaction between the polymers yields the next-order couplings in (<ref>).
On symmetry grounds there are two different terms quadratic in ϕ: [ϕ ∂_i∂_jϕ]^st and [(∂_iϕ) (∂_jϕ)]^st; both can also be obtained by explicitly coarse-graining microscopic models for interacting active polymers <cit.>.
The former recalls the diffusive ω-term (especially after the linearization around ⟨ϕ⟩) and therefore is ignored here.
The latter is associated with torque, which is bilinear in the density gradients ω^a [(∂_iϕ) (∂_jϕ)]^st, providing an effective liquid-crystalline “anchoring” <cit.> (or preferred orientation) of the nematic director field with respect to the density gradients.
The parameter ω^a is taken to be negative to ensure tangential anchoring, implying that polymers tend to orient perpendicular to the density gradients (or parallel to the boundary of dense lanes).
For simplicity, we ignore additional non-linearities in the equation of motion for the Q-tensor. Such contributions are considered elsewhere <cit.> where they are typically regarded as a modification to the elasticity terms.
Taken together Eqs. (<ref>, <ref>) are a generalization of the active model C <cit.>, which was originally introduced for non self-propelled biofilaments in the presence of molecular motors. The major difference is that the model now explicitly includes self-propulsion. Moreover, by including density-dependent terms, it shows the same results as the agent-based simulations (see discussion below) and is therefore quantitatively linked to the actomyosin motility assay. Finally, it possesses less degrees of freedom, since most of the terms are rigorously derived and are controlled by the same parameter (λ).
We consider ν_ϕ, χ_ϕ, κ_ϕ, ω and ω^a as phenomenological parameters and solve the equations of motion numerically.
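To illustrate the structure of these equations, a minimal explicit update of the density equation on a periodic grid could look as follows (a forward-Euler sketch for illustration only; the corresponding update of Q_ij follows from the functional derivative of F_Q and is omitted, and this is not the numerical scheme used to produce the results discussed below).

import numpy as np

def dxx(f, dx):
    return (np.roll(f, -1, 0) - 2.0 * f + np.roll(f, 1, 0)) / dx**2

def dyy(f, dx):
    return (np.roll(f, -1, 1) - 2.0 * f + np.roll(f, 1, 1)) / dx**2

def dxy(f, dx):
    # mixed second derivative, centered differences on a periodic grid
    return (np.roll(np.roll(f, -1, 0), -1, 1) - np.roll(np.roll(f, -1, 0), 1, 1)
            - np.roll(np.roll(f, 1, 0), -1, 1) + np.roll(np.roll(f, 1, 0), 1, 1)) / (4.0 * dx**2)

def density_step(phi, Qxx, Qxy, dt, dx, lam2, nu_phi, chi_phi):
    """Forward-Euler step of d_t phi = d_i d_j [ nu(phi) phi delta_ij + chi(phi) Q_ij ]."""
    nu = lam2 + nu_phi * phi
    chi = lam2 + chi_phi * phi
    iso = dxx(nu * phi, dx) + dyy(nu * phi, dx)            # isotropic flux term
    # anisotropic term with Qyy = -Qxx (traceless Q-tensor)
    aniso = dxx(chi * Qxx, dx) - dyy(chi * Qxx, dx) + 2.0 * dxy(chi * Qxy, dx)
    return phi + dt * (iso + aniso)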
This model robustly reproduces the results obtained in the agent-based simulation to a very high degree of fidelity and for a large range of parameters.
It exhibits CTDs and FAEs whose structure, topological charge, and formation process are very similar to the ones observed in WASP; cf. Fig. <ref>(d)-(f). Therefore, in the following we use this hydrodynamic approach to analyse and underpin the main mechanisms of formation of CTDs and FAEs.
In summary, our model (and the active model C <cit.>) differs significantly from the standard theory of active nematics <cit.>, since it contains density-dependent corrections and higher order terms. Without such modifications the standard active nematic model is unable to reproduce CTDs.
§.§ From CTDs to FAEs and bands
Encouraged by the promising initial results shown by our hydrodynamic theory, we took advantage of the relative ease with which it can be used to determine the long-term behavior, and generated a (λ, ⟨ϕ⟩) phase diagram [Fig. <ref>(a)].
As can be seen, at low values of λ and ⟨ϕ⟩, CTD formation dominates, while in areas of large λ and ⟨ϕ⟩ stable nematic lanes emerge.
Between these regions lies a band of parameters where the system mainly exhibits FAEs.
To test whether these findings obtained with the hydrodynamic model also hold for our agent-based simulations, we determined the average number of CTDs present at a given time in the agent-based simulation along one-dimensional lines of the (L_p, ⟨ϕ⟩) phase space — one along a constant value of ⟨ϕ⟩ and one along a constant value of L_p.
Reassuringly, the results for the agent-based simulations and hydrodynamic model are in good agreement [Figs. <ref>(c) and (d)].
We further checked the mean number of FAEs present in the agent-based simulations as a function of L_p [Fig. <ref>(e)]; see SI for details.
The observed decline in FAE frequency with increasing L_p is consistent with the observations in the hydrodynamic model, where at high λ no FAEs occur [cf. Fig. <ref>(a)].
Taken together, these results demonstrate that not only do the agent-based and hydrodynamic models share the same collective states, the frequency of these states also shows the same dependence on parameter changes.
The above relationships between model parameters and the occurrence of CTDs or FAEs can be related to the overall dynamic behavior (in short, “activity”) of the system.
For both hydrodynamic and agent-based approaches, three distinct, qualitatively different dynamic states can be distinguished [Fig. <ref>(b)].
The first of these is associated with very strong bending undulations of nematic lanes.
It occurs at low values of L_p/λ or ⟨ϕ⟩ and is characterized by constant rearrangement of lanes [Movies S2, S3, S7, SI, Figs. <ref>(a), (b), (d) and (e)]:
Lanes frequently collide leading to the formation of CTDs. In addition, system-spanning configurations of straight (or only slightly curved) lanes [cf. Figs. <ref>(c) and (f)], which may form randomly, are disrupted by undulations within a fairly short time.
This is consistent with the observation that CTDs are the predominant phenomenon at low values of L_p/λ and ⟨ϕ⟩, respectively [Figs. <ref>(c), (d)].
Notably, FAEs can also be formed in this parameter regime following the emergence of short-lived system-spanning nematic lanes.
The second dynamic state can be found at intermediate values of L_p/λ or ⟨ϕ⟩.
In this regime, bending undulations are fewer and less pronounced, resulting in straight (or only slightly curved) and system-wide lanes that are stable over long periods of time:
Elongated openings often appear in the lateral areas of the lanes, which develop into filamentous arcs
[Movies S4, S8, SI, and Figs. <ref>(c),(f) and middle panel of Fig. <ref>(b)].
This is in accordance with the observation that FAEs are the predominant phenomenon observed at intermediate values of L_p/λ or ⟨ϕ⟩ [Figs. <ref>(a) and (c)-(e)].
The third dynamic state is associated with vanishing bending undulations at high values of L_p/λ or ⟨ϕ⟩. Here, straight and system-spanning configurations are stable and no openings develop in their lateral regions [Movies S5, S9, SI and right panel of Fig. <ref>(b)]. Consequently, neither FAEs nor CTDs are observed [Figs. <ref>(a) and (c)-(e)].
The tendency just discussed, namely that the bending undulations become weaker as L_p/λ or ⟨ϕ⟩ is increased, can be rationalized by the following heuristic arguments.
With increasing L_p/λ the Frank constant <cit.> grows, and the effective elasticity (or collective stiffness of the polymers) yields stronger penalties for orientational distortions.
As a result, the bending instability weakens, as described above.
The hydrodynamic model has allowed us to verify this hypothesis: upon varying the elastic constant κ (independently from other parameters), we observe that weak elasticity favors the formation of CTDs, while a strong one yields stable bands.
As the density ⟨ϕ⟩ is increased (for a given and constant system size), a further effect contributing to higher stability of lanes is that a system-spanning nematic band occupies a growing fraction of space, i.e., the bands become wider while the bulk density remains largely the same [cf. SI].
Since broader bands are less susceptible to a bending instability, an increase of ⟨ϕ⟩, as discussed above, leads to the decay of defect formation.
An interesting aside can be mentioned here in the context of varying values of ⟨ϕ⟩: for very small densities, close to the onset of order, both models show a drop in the observed CTD number [Fig. <ref>(d)], which is likely due to the fact that there is less mass within the ordered phase, and therefore not enough mass to form multiple curved bands necessary for lanes to collide and CTDs to be created.
Overall, the formation of condensed defects and filamentous arc ejections are both strongly linked to the stability of the nematic lanes, i.e., to their propensity to exhibit a bending instability <cit.>, which, in turn, can be externally controlled by tuning either L_p/λ or ⟨ϕ⟩.
§.§ Detailed structure of CTDs and FAEs
To better understand the structure of the CTDs forming in agent-based simulations, we studied the polymer flows through them in detail.
To this end, we tracked the motion of each polymer as it passed through a condensed defect.
This enables us to distinguish the polymer flows from one arm of a defect to another and to investigate whether there is a relationship between the lateral position of individual polymers and their eventual direction of turning.
Fig. <ref>(a) illustrates the flux from one arm of a defect (arm 1) into the two other arms (arms 2 and 3) [see Movie S6 SI for a representative flux recorded in an agent-based simulation].
The flux in each defect arm gets strongly compressed laterally in the vicinity of a defect core and then splits almost exactly at the centerline of the lane, while undergoing a sharp change in direction [Fig. <ref>(a)].
Symmetrically the same flux enters the defect from arms 2 and 3, resulting in the nematic flow structure depicted in Fig. <ref>(a) and (c).
This also shows that the flows begin to mix again only at a greater distance from the center of the defect [cf. color mixing in Fig. <ref>(b) and (c)]. Hence, the overall topology often present at the birth of the defect [Fig. <ref>(b) and (e)] is preserved in the flow structure of the fully formed CTD as three barely intermingling nematic flows.
In addition, we investigated whether the velocity of the polymers is affected as they move through a CTD. As can be seen from Fig. <ref>(e), their speed remains almost unchanged and only a slowdown in the per mil range is observed. One can see two insignificant velocity drops corresponding to regions with the maximal density of polymers. Interestingly, in the immediate vicinity of the core of the defect, the particle velocity briefly returns to the average value, corresponding to particles inside the nematic band.
We also studied the temporal evolution of FAEs and their occurrence over time. To this end, we periodically projected the density of a system in a configuration that allows the formation of FAEs onto one-dimensional slices and stacked these to obtain kymographs (see SFig. 5 SI).
These reveal that the detachment of arcs accelerates over time.
Further, they show that in the hydrodynamic model, due to no noise being present, FAE events occur at regular intervals, whereas in the agent-based simulations they form stochastically.
Having established the existence of CTDs and FAEs, characterized them in our agent-based in-silico experimental system, and introduced a hydrodynamic theory that faithfully reproduces the simulation results while providing access to the phase space of the observed patterns, we asked: why are these phenomena observed? What are the underlying mechanisms responsible for their formation?
To answer these questions, we leveraged the ability of the hydrodynamic model to provide access to single terms of its defining equations [Eqs. (<ref>,<ref>)].
This analysis reveals that both the formation of dense defects and the movement of arcs have the same root cause, namely the anisotropic (“curvature-induced”) density flux <cit.>, described by -∂_j(χ Q_ij) in Eq. (<ref>) in the hydrodynamic model.
This can be understood by plotting -∂_j(χ Q_ij) in the region of an FAE or a CTD; see the left and right panels of Fig. <ref>(d), respectively.
As can be seen, on opposite sides of the arcs the amplitudes of the fluxes are distinct. An effective “active force” acting on the concave side is greater than that on the opposite side, which leads to the movement of the bent band (or arc) in the corresponding direction [Fig. <ref>(d), left panel].
When three lanes meet, the same curvature-dependent fluxes concentrate polymers in the core of the resulting defect [Fig. <ref>(d), right panel]. This condensation is eventually balanced by the isotropic part of (<ref>) and particularly by steric repulsion of polymers.
To test this hypothesis, we set the excluded volume force F (see SI)
to zero in our agent-based simulations.
Observations in this case indicate that the formation of CTDs is reduced and that, when they form, they decay faster.
Thus, we conclude that formation of the dense defects is predominantly determined by the interplay between two counteracting processes: isotropic and anisotropic density fluxes.
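To make the role of this term concrete, the following minimal Python sketch evaluates the curvature-induced flux -∂_j(χ Q_ij) on a periodic grid. The grid spacing, the random test fields and the treatment of χ as a precomputed scalar field are illustrative assumptions of this sketch, not the parameters of our simulations.

import numpy as np

def curvature_induced_flux(Qxx, Qxy, chi, dx=0.5):
    """Anisotropic density flux J_i = -d_j(chi * Q_ij) on a periodic grid.

    Q is traceless and symmetric, so Q_yy = -Q_xx and Q_yx = Q_xy.
    Derivatives are second-order central differences with periodic wrap;
    axis 0 is taken as y and axis 1 as x.
    """
    def ddx(f):
        return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dx)

    def ddy(f):
        return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)

    Jx = -(ddx(chi * Qxx) + ddy(chi * Qxy))
    Jy = -(ddx(chi * Qxy) + ddy(-chi * Qxx))
    return Jx, Jy

# Example call on random test fields (placeholders).
rng = np.random.default_rng(0)
Qxx, Qxy = 1e-2 * rng.standard_normal((2, 300, 300))
chi = np.ones((300, 300))
Jx, Jy = curvature_induced_flux(Qxx, Qxy, chi)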
In addition to the “emergent” way of obtaining CTDs just studied, in which spontaneously formed bands interact randomly and condense into defects at stochastically distributed positions, we sought a way to overcome this randomness by artificially generating and positioning CTDs.
In contrast to non-phase-separated systems — where such an endeavor would involve the forced separation of a defect pair — the way CTDs form spontaneously [Figs. <ref>(b),(e)] suggests that finding a way to position and form nematic lanes in suitable configurations could trigger the creation of a CTD.
In combination with the observation of polymer fluxes near a defect [Fig. <ref>(h)], we hypothesized that placing active polymer sources in a three-strand configuration should trigger the formation of three lanes that immediately condensate into CTDs.
To test this prediction, we implemented the possibility to add such “active particle throwers” into our agent-based simulations and positioned them as described.
Indeed, we found that this way a CTD can be formed at a predetermined location where it persists for an arbitrary amount of time, cf. Fig. <ref>(h) and movie S10 SI.
This may be of potential application in cases where topological defects and/or high-density regions (in a low density background) need to be created and controlled with high accuracy.
§ DISCUSSION
In summary, we have used a combination of agent-based simulations and hydrodynamic theory to study pattern formation in phase-separated nematic active matter.
Our analysis shows that topological defects and nematic lanes, previously considered as two distinct and separate collective states, coexist and are tightly coupled.
We investigated the structure, formation and decomposition of CTDs in phase-separated systems.
We observed that CTDs appear as characteristic collective excitations in a novel nonequilibrium steady state.
Moreover, the formation process of CTDs constitutes a new hierarchical condensation phenomenon.
Given the previously demonstrated and close connection of our agent-based algorithm to the actin motility-assay, a paradigmatic experimental model system, it is plausible to expect that CTDs will be observed in experimental active matter systems.
Below we discuss these observations step by step.
First of all, we characterized topologically charged structures, such as CTDs and FAEs, for the first time observed in a phase-separated nematic system with self-propulsion.
It is apparent that CTDs differ markedly from defects observed in homogeneous active matter, particularly in the dynamics of their formation and decay and in their spatial structure as well.
To begin with, CTDs concentrate density near their cores and condense nematic fluxes.
This condensation phenomenon is interesting in itself, since the majority of experimental active matter systems show a depletion of particles in -1/2 disclinations, e.g., bacteria embedded in liquid crystals <cit.> and cultures of neural progenitors <cit.>.
Weak density accumulation around the defects has been discussed in slightly inhomogeneous nematics <cit.>;
however, in such systems, the -1/2 defects occur only during the transient and eventually disappear via annihilation with their +1/2 counterparts.
Similar CTDs, among other structures, were observed in parameter sweeps of the phenomenological toy model for mixtures of non-self-propelled microtubules and kinesin motors <cit.>.
However, they were either transient or formed only under very special conditions (elasticity almost zero).
In the latter case, the shape and the mechanism of formation of the defects were clearly different from the CTDs observed here.
In our case CTDs are typically formed by the collision of three curved nematic lanes that condense into a high-density three-armed structure, trapping the previously spatially distributed negative charge [Figs. <ref>(a),(d)].
One might think of comparing condensation to CTDs with the process of motility-induced phase separation (MIPS) <cit.>.
However, the fundamental difference between the two is that CTDs are not associated with particle slowdown or prolonged residence of agents in high-density regions.
In addition, the formation of condensed defects provides a condensation mechanism for anisotropically shaped particles, which is not possible with MIPS <cit.>.
We may also argue that in MIPS the agents themselves condense into high-density clusters, while we observe the condensation of dynamical collective states (nematic lanes) into topological defects.
The mutual orientation of defects is also non-typical: we observe that two CTDs can be connected by a single nematic streamline (a filamentous bundle of polymers) [Figs. <ref>(a), <ref>(f)], whereas in non-phase-separated active matter negative half-integer disclinations usually point towards a corresponding defect with the opposite charge +1/2 [Fig. <ref>(g)] <cit.>.
The dynamic processes of defect decay in phase-separated and homogeneous active nematics are also clearly distinct.
In homogeneous systems, pairs of defects with opposite charges annihilate each other <cit.>. In contrast, we find that CTDs do not annihilate with other defects, but disintegrate due to the undulating dynamics of the lanes that connect to the defect arms (Fig. <ref>(g) and Movie S3 SI).
This means that the destruction of a negatively charged defect does not depend on the mobility or dynamics of a positively charged pair, rendering this process potentially easier to control.
In cases where all three lanes that connect to the respective arms have the same bending orientation (curvature of all either clockwise or anti-clockwise with respect to center), this decay takes place via an interesting process in which defects rotate before they dissolve [Fig. <ref>(g)].
Thus, CTDs not only emerge from “collisions” of nematic lanes, but also are connected by, and disassemble into them.
Taken together, this leads to one of the main conclusions of our work, namely that the presence of CTDs constitutes
a novel nonequilibrium steady state which corresponds to a dynamic equilibrium between dense nematic lanes and condensed topological defects coexisting in a diluted background of disordered filaments.
This is reminiscent of other recent findings in active matter, in which a dynamical coexistence between patterns of different symmetry (nematic and polar) was observed <cit.>. During the persistent formation and subsequent decay of CTDs, those defects act as temporal capacitors of negative topological charge (i.e., the curvature on the boundaries of lanes gets temporarily trapped in a very small region of space) which eventually gets released again.
It is well worth reiterating that this is a continuous cyclic phenomenon, not a transient one (unlike the defect formation observed in Ref. <cit.>).
The most important factors that allow this nonequilibrium steady state to occur are probably the following.
First, since CTDs emerge from interaction of curved nematic lanes, a lateral undulation instability of nematic lanes — as exhibited by our agent-based model — is a basic prerequisite for their formation.
Another factor that is likely to favor the formation of CTDs is the nature of the interaction between the polymers (agents), which exhibit only weak mutual alignment and weak steric exclusion.
The latter, in particular, is likely to be a critical factor necessary for the high compression of polymer density during CTD formation.
Starting from a rigorously derived hydrodynamic model for self-propelled particles, we have generalized it to include higher-order phenomenological corrections.
The resulting equations are reminiscent of a conceptual active model C <cit.>, but they include all terms arising from particle self-propulsion, which is an important additional feature here.
In particular, the hydrodynamic model presented here has many fewer degrees of freedom than the toy model presented in Ref. <cit.>, since the coefficients in front of all “standard” terms have a fixed relation among them.
This hydrodynamic theory provides additional insight into the physics of CTDs.
For example, it shows that density gradients play a crucial role through their coupling with the orientation field.
In particular, we consider density-dependent corrections of these coupling terms (controlled by the parameters χ_ϕ and ω^a), which typically disappear due to the linearization of terms around the mean value of density in the majority of hydrodynamic theories.
We want to stress again that these additional terms, which are missing in standard theories of active nematics, are crucial for a proper description of the system, because without them CTDs are no longer observed.
We argue that strong phase separation (and the resulting large density gradients) inevitably amplifies the effect of higher-order coupling terms between the density and the orientation field on the dynamics.
For example, the bilinear anchoring ω^a(∂_iϕ)(∂_jϕ) causes the nematic lines to closely follow the contour of the density field constituting a defect (SFig. 7 SI) and therefore can stabilize defects.
This is in line with the observation that a decrease in ω^a leads to a decrease in the number of defects (a similar conclusion can be inferred from <cit.>). However, in our model CTDs can still form even if ω^a=0, provided that χ_ϕ≠0 and κ_ϕ≠0.
We firmly believe that the phenomena we found can also be observed in experiments, even though our study is purely theoretical.
The simulation approach of weakly aligning, self-propelled polymers on which we base our study has previously shown not only excellent agreement with experiments, but has also predicted novel states that were later found experimentally <cit.>; thus it can be viewed, as elaborated in the introduction, as a computational version of an experimental system.
In light of this, we expect that the most promising experimental model system that could allow observation of the new topological defects we predict is most likely the actomyosin motility assay <cit.>.
This paradigmatic system not only satisfies the requirement of weakly interacting agents <cit.>, but also offers the advantage of high particle numbers.
Previously, not only polar waves <cit.> but also nematic lanes <cit.> have been observed.
This has been achieved by adding depletion agents that enable one to tune the strength as well as the symmetry of the interaction between the actin filaments.
It is conceivable that similar and other changes in the design of the actin motility assay could be used to produce a weak and purely nematic interaction as used in our agent-based simulations.
For example, other depletion agents could be used and/or the properties of the surface to which the driving molecular motors are attached could be changed.
Recently, the latter was indeed shown to have a direct impact on polymer interactions <cit.>.
Alternatively, CTDs could potentially be observed in other types of motility assays <cit.>.
Another intriguing possibility for observing the predicted CTDs is to directly produce a configuration of nematic lanes favoring the formation of CTDs by suitably structuring the surface used in the motility assay <cit.>.
The deep understanding we gained about the formation of CTDs owing to the combination of agent-based simulation and hydrodynamic approach allowed us to find a way to generate them artificially (Fig. <ref>(h) and movie S10 SI). Given the availability of directed particle sources in an experimental system, the position of defects (and therefore the location of a domain of extremely high density) could be controlled with pin-point accuracy.
This provides a new tool for cases where -1/2 defects and/or small regions of high particle density (in an overall dilute system) are needed at specific positions, e.g., to trigger specific processes such as cell death <cit.> at definable points.
Given the strong and controlled nature of the focusing of the fluxes in nematic lanes, this method could be termed “active matter optics”.
Another important insight from the broader perspective of the active matter field is that
phase-separated active matter exhibits a hierarchy of emergent collective states.
Interaction between dense nematic lanes, considered as “first-order” collective states in active nematics, can lead to the formation of “second-order” collective states, here half-integer topological defects with an even higher density, a phenomenon one may call “hierarchical, alignment-induced phase separation”.
It is reasonable to assume that similar effects may lead to new phenomena in other active systems with different symmetry, e.g., polar symmetry with polar waves as first-order collective states <cit.>.
Another class of systems in which higher-order collective states might emerge are active systems that are subject to external gradients <cit.>
or signalling interactions between the agents <cit.>.
A promising extension of our present investigations are active foams.
In this state of active matter, which has recently received increasing attention <cit.>, dense ordered bands assemble into actively reforming cellular networks.
Indeed, in preliminary simulations of the hydrodynamic theory, we have identified parameter regimes in our model where we observe active foams: CTDs are more frequent, interconnected, and persist for longer times.
Thus, the formation of active foams in active nematics seems very plausible, but a thorough investigation of the entire phase space in the agent-based model is computationally demanding and will be reserved for a future study.
§ AUTHOR CONTRIBUTIONS
T.K., I.M., and E.F. designed the research, performed research, analyzed data, and wrote the paper.
§ CONFLICTS OF INTEREST
There are no conflicts to declare.
§ ACKNOWLEDGEMENTS
We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Excellence Cluster ORIGINS under
Germany's
Excellence Strategy (EXC-2094-390783311) and through Project-ID 201269156 -
Collaborative Research Center (SFB) 1032 - Project B2.
IM acknowledges European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 754388 (LMU Research Fellows) and from LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the German Federal Government and the Länder.
§ APPENDIX
§.§ Agent-based simulation method
We now describe our agent-based simulation model.
Please also refer to the SI and the Supplemental Materials of Refs. <cit.> for more details.
In our systems we simulate M polymers, each of length L.
Orientational diffusion causes the tip of each polymer to perform a persistent random walk. Upon collision with another polymer, local interaction causes the tip to gradually align with its direction.
Attached to each polymer tip is a tail that simply follows the path outlined by the tip.
This dynamics mimics the behavior of actin filaments in actomyosin motility assays <cit.>, in which polymers move in a snake-like fashion over a lawn of motor proteins and motion orthogonal to the contour is suppressed <cit.>.
Here we use purely nematic interactions between polymers which are primarily tuned by the nematic alignment amplitude α_n that allows for a continuous variation of the rate of alignment.
§.§ Parameters
If not stated otherwise, we used the following model parameters: discretization N = 5, polymer aspect ratio L/d = 21, nematic alignment strength α_n = 0.126≈7.2^∘ and a periodic simulation box of length L_box = 162.5L.
The velocity v^(n) of each polymer is randomly drawn from the interval [0.75,1.]v_0.
We started simulations with random initial conditions, i.e. randomly oriented polymers were placed at random positions in the simulation box.
Time is measured in units of L/v_0, where v_0 is the maximal velocity of a free polymer.
Density in Figs. <ref>(a)-(c) and <ref>(g)-(h) is time-averaged for better visibility, with averaging times of 159 for Fig. <ref>(a) and 16 for Figs. <ref>(b)-(c) and <ref>(g)-(h).
Note that the system shown in Fig. <ref>(h) does not have the usual periodic boundary conditions. Rather, the particles crossing the boundaries are moved either to a random position along a boundary with random orientation or to one of the particle sources. The ratio of these two possibilities is chosen so that the particle flux from the sources is kept constant.
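As an illustration of this re-insertion rule, a minimal sketch is given below. The data structures, the choice of orienting re-injected polymers along a fixed source direction, and the probability p_source are hypothetical placeholders; only the logic (re-insert either at a source or at a random boundary position with random orientation, with a fixed ratio) follows the description above.

import numpy as np

def reinject(polymer, sources, p_source, box_length, rng):
    """Re-insert a polymer that crossed the open boundary.

    With probability p_source the polymer is placed at one of the particle
    sources (oriented along that source's direction, an assumption of this
    sketch); otherwise it is placed at a random position on the boundary
    with a random orientation. p_source sets the ratio of the two options
    and is chosen such that the flux emitted by the sources stays constant.
    """
    if rng.random() < p_source:
        src = sources[rng.integers(len(sources))]
        polymer["r"] = np.asarray(src["position"], dtype=float)
        polymer["theta"] = float(src["direction"])
    else:
        s = rng.random() * box_length
        edges = [(s, 0.0), (s, box_length), (0.0, s), (box_length, s)]
        polymer["r"] = np.asarray(edges[rng.integers(4)], dtype=float)
        polymer["theta"] = rng.random() * 2.0 * np.pi
    return polymer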
§.§ Continuous theory
We numerically investigate Eqs. (<ref>,<ref>) under periodic boundary conditions by using finite differences of second order <cit.> on a 300×300 grid with the spatial resolution δ x = 0.5.
The time integration was performed via a second-order predictor-corrector scheme with time step dt = 10^-2.
We use the parameter values β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1.
Unless explicitly stated, we initialize simulations from an isotropic uniform state
with a small amount of noise. To make time and space dimensionless we rescale them by setting the rotational diffusion coefficient and μ_ρ equal to unity.
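For concreteness, the sketch below shows the structure of such an integration step: a second-order predictor-corrector (Heun) update combined with second-order finite differences on a periodic grid. Grid size, δx and dt follow the values quoted above, but the right-hand side is a simple placeholder (diffusion and relaxation); the full right-hand sides of Eqs. (<ref>,<ref>) are not reproduced here.

import numpy as np

def laplacian(f, dx=0.5):
    """Second-order five-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def step_heun(fields, rhs, dt=1e-2):
    """One predictor-corrector (Heun) step for a dict of 2D fields."""
    k1 = rhs(fields)
    predictor = {k: fields[k] + dt * k1[k] for k in fields}
    k2 = rhs(predictor)
    return {k: fields[k] + 0.5 * dt * (k1[k] + k2[k]) for k in fields}

def rhs_placeholder(fields):
    """Toy right-hand side: diffusion of phi, relaxation and diffusion of Q."""
    return {"phi": laplacian(fields["phi"]),
            "Qxx": -fields["Qxx"] + 0.2 * laplacian(fields["Qxx"]),
            "Qxy": -fields["Qxy"] + 0.2 * laplacian(fields["Qxy"])}

rng = np.random.default_rng(1)
state = {k: 1e-2 * rng.standard_normal((300, 300)) for k in ("phi", "Qxx", "Qxy")}
for _ in range(100):
    state = step_heun(state, rhs_placeholder)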
[De Gennes and Prost(1993)]de1993physics
Pierre-Gilles De Gennes and Jacques Prost.
The physics of liquid crystals.
Number 83. Oxford university press, 1993.
[Marchetti et al.(2013)Marchetti, Joanny, Ramaswamy, Liverpool, Prost,
Rao, and Simha]Marchetti2013
M Cristina Marchetti, Jean-François Joanny, Sriram Ramaswamy,
Tanniemola B Liverpool, Jacques Prost, Madan Rao, and R Aditi Simha.
Hydrodynamics of soft active matter.
Rev. Mod. Phys., 850 (3):0 1143,
10.1103/RevModPhys.85.1143.
[Doostmohammadi et al.(2018)Doostmohammadi, Ignés-Mullol, Yeomans,
and Sagués]Doostmohammadi2018
Amin Doostmohammadi, Jordi Ignés-Mullol, Julia M Yeomans, and Francesc
Sagués.
Active nematics.
Nat. Commun., 90 (1):0 3246,
10.1038/s41467-018-05666-8.
[Alert et al.(2022)Alert, Casademunt, and Joanny]alert2021active
Ricard Alert, Jaume Casademunt, and Jean-François Joanny.
Active turbulence.
Annu. Rev. Condens. Matter Phys., 130 (1):0
143–170,
10.1146/annurev-conmatphys-082321-035957.
[Sanchez et al.(2012)Sanchez, Chen, DeCamp, Heymann, and
Dogic]sanchez_spontaneous_2012
Tim Sanchez, Daniel T. N. Chen, Stephen J. DeCamp, Michael Heymann, and
Zvonimir Dogic.
Spontaneous motion in hierarchically assembled active matter.
Nature, 4910 (7424):0 431–434,
10.1038/nature11591.
[DeCamp et al.(2015)DeCamp, Redner, Baskaran, Hagan, and
Dogic]Decamp2015
Stephen J DeCamp, Gabriel S Redner, Aparna Baskaran, Michael F Hagan, and
Zvonimir Dogic.
Orientational order of motile defects in active nematics.
Nat. Mater., 140 (11):0 1110–1115,
https://doi.org/10.1038/nmat4387.
[Giomi et al.(2013)Giomi, Bowick, Ma, and Marchetti]giomi_defect_2013
Luca Giomi, Mark J. Bowick, Xu Ma, and M. Cristina Marchetti.
Defect Annihilation and Proliferation in Active Nematics.
Phys. Rev. Lett., 1100 (22):0 228101,
10.1103/PhysRevLett.110.228101.
[Shankar et al.(2018)Shankar, Ramaswamy, Marchetti, and
Bowick]shankar_defect_2018
Suraj Shankar, Sriram Ramaswamy, M. Cristina Marchetti, and Mark J. Bowick.
Defect Unbinding in Active Nematics.
Phys. Rev. Lett., 1210 (10):0 108002,
10.1103/PhysRevLett.121.108002.
[Thampi et al.(2014)Thampi, Golestanian, and
Yeomans]thampi_instabilities_2014
Sumesh P. Thampi, Ramin Golestanian, and Julia M. Yeomans.
Instabilities and topological defects in active nematics.
Europhys Lett., 1050 (1):0 18001,
10.1209/0295-5075/105/18001.
[Giomi et al.(2014)Giomi, Bowick, Mishra, Sknepnek, and
Cristina Marchetti]Giomi2014
Luca Giomi, Mark J Bowick, Prashant Mishra, Rastko Sknepnek, and
M Cristina Marchetti.
Defect dynamics in active nematics.
Philos. Trans. R. Soc. A, 3720 (2029):0
20130365,
https://doi.org/10.1098/rsta.2013.0365.
[Putzig et al.(2016)Putzig, Redner, Baskaran, and
Baskaran]putzig_instabilities_2016
Elias Putzig, Gabriel S. Redner, Arvind Baskaran, and Aparna Baskaran.
Instabilities, defects, and defect ordering in an overdamped active
nematic.
Soft Matter, 120 (17):0 3854–3859,
10.1039/C6SM00268D.
[Maryshev et al.(2019)Maryshev, Goryachev, Marenduzzo, and
Morozov]Maryshev2019Dry
Ivan Maryshev, Andrew B Goryachev, Davide Marenduzzo, and Alexander Morozov.
Dry active turbulence in a model for microtubule–motor mixtures.
Soft Matter, 150 (30):0 6038–6043,
10.1039/c9sm00558g.
[Schaller et al.(2010)Schaller, Weber, Semmrich, Frey, and
Bausch]schaller_polar_2010
Volker Schaller, Christoph Weber, Christine Semmrich, Erwin Frey, and
Andreas R. Bausch.
Polar patterns of driven filaments.
Nature, 4670 (7311):0 73–77,
10.1038/nature09312.
[Butt et al.(2010)Butt, Mufti, Humayun, Rosenthal, Khan, Khan, and
Molloy]butt_myosin_2010
Tariq Butt, Tabish Mufti, Ahmad Humayun, Peter B. Rosenthal, Sohaib Khan,
Shahid Khan, and Justin E. Molloy.
Myosin Motors Drive Long Range Alignment of Actin Filaments.
J. Biol. Chem., 2850 (7):0 4964–4974,
10.1074/jbc.M109.044792.
[Grégoire and Chaté(2004)]gregoire_onset_2004
Guillaume Grégoire and Hugues Chaté.
Onset of Collective and Cohesive Motion.
Phys. Rev. Lett., 920 (2):0 025702,
10.1103/PhysRevLett.92.025702.
[Solon et al.(2015)Solon, Chaté, and Tailleur]solon_phase_2015
Alexandre P. Solon, Hugues Chaté, and Julien Tailleur.
From Phase to Microphase Separation in Flocking Models:
The Essential Role of Nonequilibrium Fluctuations.
Phys. Rev. Lett., 114:0 068101,
10.1103/PhysRevLett.114.068101.
[Huber et al.(2021)Huber, Krüger, and Frey]huber_microphase_2021
Lorenz Huber, Timo Krüger, and Erwin Frey.
Microphase separation in active filament systems maintained by cyclic
dynamics of cluster size and order.
Phys. Rev. Res., 30 (1):0 013280,
10.1103/PhysRevResearch.3.013280.
[Huber et al.(2018)Huber, Suzuki, Krüger, Frey, and
Bausch]Huber2018
L Huber, R Suzuki, T Krüger, E Frey, and AR Bausch.
Emergence of coexisting ordered states in active matter systems.
Science, 3610 (6399):0 255–258,
DOI: 10.1126/science.aao5434.
[Denk and Frey(2020)]denk_pattern-induced_2020-1
Jonas Denk and Erwin Frey.
Pattern-induced local symmetry breaking in active-matter systems.
Proc. Natl. Acad. Sci. U.S.A., 1170 (50):0
31623–31630,
10.1073/pnas.2010302117.
[Ginelli et al.(2010)Ginelli, Peruani, Bär, and
Chaté]ginelli_large-scale_2010
Francesco Ginelli, Fernando Peruani, Markus Bär, and Hugues Chaté.
Large-scale collective properties of self-propelled rods.
Phys. Rev. Lett., 1040 (18):0 184502,
10.1103/PhysRevLett.104.184502.
[Peshkov et al.(2012)Peshkov, Aranson, Bertin, Chaté, and
Ginelli]Peshkov2012
Anton Peshkov, Igor S Aranson, Eric Bertin, Hugues Chaté, and Francesco
Ginelli.
Nonlinear field equations for aligning self-propelled rods.
Phys. Rev. Lett., 1090 (26):0 268701,
10.1103/PhysRevLett.109.268701.
[Ngo et al.(2014)Ngo, Peshkov, Aranson, Bertin, Ginelli, and
Chaté]ngo_large-scale_2014
Sandrine Ngo, Anton Peshkov, Igor S. Aranson, Eric Bertin, Francesco Ginelli,
and Hugues Chaté.
Large-Scale Chaos and Fluctuations in Active Nematics.
Phys. Rev. Lett., 113:0 038302,
10.1103/PhysRevLett.113.038302.
[Großmann et al.(2016)Großmann, Peruani, and
Bär]grosmann_mesoscale_2016
Robert Großmann, Fernando Peruani, and Markus Bär.
Mesoscale pattern formation of self-propelled rods with velocity
reversal.
Phys. Rev. E, 940 (5):0 050602,
10.1103/PhysRevE.94.050602.
[Maryshev et al.(2020)Maryshev, Morozov, Goryachev, and
Marenduzzo]Maryshev2020
Ivan Maryshev, Alexander Morozov, Andrew B Goryachev, and Davide Marenduzzo.
Pattern formation in active model c with anchoring: bands, aster
networks, and foams.
Soft Matter, 160 (38):0 8775–8781,
10.1039/d0sm00927j.
[Cai et al.(2019)Cai, Chaté, Ma, and Shi]Cai2019
Li-Bing Cai, Hugues Chaté, Yu-Qiang Ma, and Xia-Qing Shi.
Dynamical subclasses of dry active nematics.
Phys. Rev. E, 99:0 010601,
10.1103/PhysRevE.99.010601.
[Großmann et al.(2020)Großmann, Aranson, and
Peruani]grosmann_particle-field_2020
Robert Großmann, Igor S. Aranson, and Fernando Peruani.
A particle-field approach bridges phase separation and collective
motion in active matter.
Nat. Commun., 110 (1):0 5365,
10.1038/s41467-020-18978-5.
[Chaté(2020)]chate_dry_2020
Hugues Chaté.
Dry aligning dilute active matter.
Annu. Rev. Condens. Matter Phys., 110 (1),
10.1146/annurev-conmatphys-031119-050752.
[Mishra et al.(2014)Mishra, Puri, and Ramaswamy]mishra2014aspects
Shradha Mishra, Sanjay Puri, and Sriram Ramaswamy.
Aspects of the density field in an active nematic.
Philos. Trans. R. Soc. A, 3720 (2029):0
20130364,
10.1098/rsta.2013.0364.
[Bertin et al.(2013)Bertin, Chaté, Ginelli, Mishra, Peshkov, and
Ramaswamy]bertin_mesoscopic_2013
Eric Bertin, Hugues Chaté, Francesco Ginelli, Shradha Mishra, Anton Peshkov,
and Sriram Ramaswamy.
Mesoscopic theory for fluctuating active nematics.
New J. Phys., 150 (8):0 085032,
10.1088/1367-2630/15/8/085032.
[Blow et al.(2014)Blow, Thampi, and Yeomans]Blow2014
Matthew L Blow, Sumesh P Thampi, and Julia M Yeomans.
Biphasic, lyotropic, active nematics.
Phys. Rev. Lett., 1130 (24):0 248303,
0.1103/PhysRevLett.113.248303.
[Hohenberg and Halperin(1977)]HohenbergHalperin
Pierre C Hohenberg and Bertrand I Halperin.
Theory of dynamic critical phenomena.
Rev. Mod. Phys., 490 (3):0 435,
https://doi.org/10.1103/RevModPhys.49.435.
[Li and Cates(2021)]li_hierarchical_2021
Yuting I. Li and Michael E. Cates.
Hierarchical microphase separation in non-conserved active mixtures.
Eur. Phys. J. E, 440 (9):0 119,
10.1140/epje/s10189-021-00113-x.
[Baskaran and Marchetti(2012)]baskaran_self-regulation_2012
A. Baskaran and M. C. Marchetti.
Self-regulation in self-propelled nematic fluids.
Eur. Phys. J. E, 350 (9),
10.1140/epje/i2012-12095-8.
[Ahmadi et al.(2006)Ahmadi, Marchetti, and
Liverpool]ahmadi2006hydrodynamics
Aphrodite Ahmadi, M Cristina Marchetti, and Tanniemola B Liverpool.
Hydrodynamics of isotropic and liquid crystalline active polymer
solutions.
Phys. Rev. E, 740 (6):0 061913,
10.1103/PhysRevE.74.061913.
[Baskaran and Marchetti(2010)]baskaran2010nonequilibrium
Aparna Baskaran and M Cristina Marchetti.
Nonequilibrium statistical mechanics of self-propelled hard rods.
J. Stat. Mech. Theory Exp., 20100 (04):0
P04019,
10.1088/1742-5468/2010/04/P04019.
[Maryshev et al.(2018)Maryshev, Marenduzzo, Goryachev, and
Morozov]Maryshev2018
Ivan Maryshev, Davide Marenduzzo, Andrew B Goryachev, and Alexander Morozov.
Kinetic theory of pattern formation in mixtures of microtubules and
molecular motors.
Phys. Rev. E, 970 (2):0 22412,
10.1103/PhysRevE.97.022412.
[Cates(2019)]cates2019active
Michael E Cates.
Active field theories.
arXiv preprint,
10.48550/arXiv.1904.01330.
[Shaebani et al.(2020)Shaebani, Wysocki, Winkler, Gompper, and
Rieger]shaebani2020computational
M Reza Shaebani, Adam Wysocki, Roland G Winkler, Gerhard Gompper, and Heiko
Rieger.
Computational models for active matter.
Nature Reviews Physics, 20 (4):0 181–199,
https://doi.org/10.1038/s42254-020-0152-1.
[Sulaiman et al.(2006)Sulaiman, Marenduzzo, and
Yeomans]sulaiman2006lattice
N Sulaiman, D Marenduzzo, and JM Yeomans.
Lattice boltzmann algorithm to simulate isotropic-nematic emulsions.
Phys. Rev. E, 740 (4):0 041708,
https://doi.org/10.1103/PhysRevE.74.041708.
[Araki and Tanaka(2004)]araki2004nematohydrodynamic
Takeaki Araki and Hajime Tanaka.
Nematohydrodynamic effects on the phase separation of a symmetric
mixture of an isotropic liquid and a liquid crystal.
Phys. Rev. Lett., 930 (1):0 015702,
https://doi.org/10.1103/PhysRevLett.93.015702.
[Mishra et al.(2010)Mishra, Simha, and Ramaswamy]mishra2010dynamic
Shradha Mishra, R Aditi Simha, and Sriram Ramaswamy.
A dynamic renormalization group study of active nematics.
J. Stat. Mech. Theory Exp., 20100 (02):0
P02003,
10.1088/1742-5468/2010/02/P02003.
[Putzig and Baskaran(2014)]Putzig2014
Elias Putzig and Aparna Baskaran.
Phase separation and emergent structures in an active nematic fluid.
Phys. Rev. E, 900 (4):0 042304,
https://doi.org/10.1103/PhysRevE.90.042304.
[Sato and Teramoto(1996)]sato1996frank
Takahiro Sato and Akio Teramoto.
On the frank elastic constants of lyotropic polymer liquid crystals.
Macromolecules, 290 (11):0 4107–4114,
https://doi.org/10.1021/ma950986a.
[Ramaswamy et al.(2003)Ramaswamy, Simha, and
Toner]ramaswamy2003active
S Ramaswamy, R. Aditi Simha, and J Toner.
Active nematics on a substrate: Giant number fluctuations and
long-time tails.
Europhys Lett., 620 (2):0 196–202,
10.1209/epl/i2003-00346-7.
[Simha and Ramaswamy(2002)]simha2002hydrodynamic
R Aditi Simha and Sriram Ramaswamy.
Hydrodynamic fluctuations and instabilities in ordered suspensions of
self-propelled particles.
Phys. Rev. Lett., 890 (5):0 058101,
https://doi.org/10.1103/PhysRevLett.89.058101.
[Narayan et al.(2007)Narayan, Ramaswamy, and
Menon]narayan_long-lived_2007
V. Narayan, S. Ramaswamy, and N. Menon.
Long-Lived Giant Number Fluctuations in a Swarming
Granular Nematic.
Science, 3170 (5834):0 105–108,
10.1126/science.1140414.
[Genkin et al.(2017)Genkin, Sokolov, Lavrentovich, and
Aranson]genkin2017topological
Mikhail M Genkin, Andrey Sokolov, Oleg D Lavrentovich, and Igor S Aranson.
Topological defects in a living nematic ensnare swimming bacteria.
Phys. Rev. X, 70 (1):0 011029,
https://doi.org/10.1103/PhysRevX.7.011029.
[Kawaguchi et al.(2017)Kawaguchi, Kageyama, and
Sano]kawaguchi_topological_2017-1
Kyogo Kawaguchi, Ryoichiro Kageyama, and Masaki Sano.
Topological defects control collective dynamics in neural progenitor
cell cultures.
Nature, 5450 (7654):0 327–331,
10.1038/nature22321.
[Cates and Tailleur(2015)]cates2015motility
Michael E Cates and Julien Tailleur.
Motility-induced phase separation.
Annu. Rev. Condens. Matter Phys., 60 (1):0
219–244,
https://doi.org/10.1146/annurev-conmatphys-031214-014710.
[Van Der Linden et al.(2019)Van Der Linden, Alexander, Aarts, and
Dauchot]van2019interrupted
Marjolein N Van Der Linden, Lachlan C Alexander, Dirk GAL Aarts, and Olivier
Dauchot.
Interrupted motility induced phase separation in aligning active
colloids.
Phys. Rev. Lett., 1230 (9):0 098001,
https://doi.org/10.1103/PhysRevLett.123.098001.
[Shankar and Marchetti(2019)]shankar2019hydrodynamics
Suraj Shankar and M Cristina Marchetti.
Hydrodynamics of active defects: From order to chaos to defect
ordering.
Phys. Rev. X, 90 (4):0 041047,
https://doi.org/10.1103/PhysRevX.9.041047.
[Cortese et al.(2018)Cortese, Eggers, and Liverpool]cortese2018pair
Dario Cortese, Jens Eggers, and Tanniemola B Liverpool.
Pair creation, motion, and annihilation of topological defects in
two-dimensional nematic liquid crystals.
Phys. Rev. E, 970 (2):0 022704,
https://doi.org/10.1103/PhysRevE.97.022704.
[Hussain et al.(2013)Hussain, Molloy, and
Khan]hussain_spatiotemporal_2013
Saman Hussain, Justin E. Molloy, and Shahid M. Khan.
Spatiotemporal Dynamics of Actomyosin Networks.
Biophys. J., 1050 (6):0 1456–1465,
10.1016/j.bpj.2013.08.001.
[Suzuki and Bausch(2017)]suzuki_emergence_2017
Ryo Suzuki and Andreas R. Bausch.
The emergence and transient behaviour of collective motion in active
filament systems.
Nat. Commun., 80 (1):0 41,
10.1038/s41467-017-00035-3.
[Suzuki et al.(2015)Suzuki, Weber, Frey, and Bausch]suzuki_polar_2015
Ryo Suzuki, Christoph A. Weber, Erwin Frey, and Andreas R. Bausch.
Polar pattern formation in driven filament systems requires
non-binary particle collisions.
Nat. Phys., 110 (10):0 839–843,
10.1038/nphys3423.
[Sciortino and Bausch(2021)]sciortino_pattern_2021
Alfredo Sciortino and Andreas R. Bausch.
Pattern formation and polarity sorting of driven actin filaments on
lipid membranes.
Proc. Natl. Acad. Sci. U.S.A., 1180 (6):0
e2017047118,
10.1073/pnas.2017047118.
[Sumino et al.(2012)Sumino, Nagai, Shitaka, Tanaka, Yoshikawa, Chaté,
and Oiwa]sumino_large-scale_2012-1
Yutaka Sumino, Ken H. Nagai, Yuji Shitaka, Dan Tanaka, Kenichi Yoshikawa,
Hugues Chaté, and Kazuhiro Oiwa.
Large-scale vortex lattice emerging from collectively moving
microtubules.
Nature, 4830 (7390):0 448–452,
10.1038/nature10874.
[Memarian et al.(2021)Memarian, Lopes, Schwarzendahl, Athani,
Sarpangala, Gopinathan, Beller, Dasbiswas, and Hirst]memarian_active_2021
Fereshteh L. Memarian, Joseph D. Lopes, Fabian Jan Schwarzendahl,
Madhuvanthi Guruprasad Athani, Niranjan Sarpangala, Ajay Gopinathan,
Daniel A. Beller, Kinjal Dasbiswas, and Linda S. Hirst.
Active nematic order and dynamic lane formation of microtubules
driven by membrane-bound diffusing motors.
Proc. Natl. Acad. Sci. U.S.A., 1180 (52):0
e2117107118,
10.1073/pnas.2117107118.
[Turiv et al.(2020)Turiv, Koizumi, Thijssen, Genkin, Yu, Peng, Wei,
Yeomans, Aranson, Doostmohammadi, and Lavrentovich]turiv_polar_2020
Taras Turiv, Runa Koizumi, Kristian Thijssen, Mikhail M. Genkin, Hao Yu,
Chenhui Peng, Qi-Huo Wei, Julia M. Yeomans, Igor S. Aranson, Amin
Doostmohammadi, and Oleg D. Lavrentovich.
Polar jets of swimming bacteria condensed by a patterned liquid
crystal.
Nat. Phys., 160 (4):0 481–487,
10.1038/s41567-020-0793-0.
[Sciortino et al.(2022)Sciortino, Neumann, Krüger, Maryshev,
Teshima, Wolfrum, Frey, and Bausch]sciortino_defects_2022
Alfredo Sciortino, Lukas J Neumann, Timo Krüger, Ivan Maryshev, Tetsuhiko F
Teshima, Bernhard Wolfrum, Erwin Frey, and Andreas R Bausch.
Polarity and chirality control of an active fluid by passive nematic
defects.
Nat. Mater.,
10.1038/s41563-022-01432-w.
[Saw et al.(2017)Saw, Doostmohammadi, Nier, Kocgozlu, Thampi, Toyama,
Marcq, Lim, Yeomans, and Ladoux]saw_topological_2017-1
Thuan Beng Saw, Amin Doostmohammadi, Vincent Nier, Leyla Kocgozlu, Sumesh
Thampi, Yusuke Toyama, Philippe Marcq, Chwee Teck Lim, Julia M. Yeomans, and
Benoit Ladoux.
Topological defects in epithelia govern cell death and extrusion.
Nature, 5440 (7649):0 212–216,
10.1038/nature21718.
[Popescu et al.(2018)Popescu, Uspal, Bechinger, and
Fischer]popescu_chemotaxis_2018
Mihail N. Popescu, William E. Uspal, Clemens Bechinger, and Peer Fischer.
Chemotaxis of Active Janus Nanoparticles.
Nano Lett., 180 (9):0 5345–5349,
10.1021/acs.nanolett.8b02572.
[Lavergne et al.(2019)Lavergne, Wendehenne, Bäuerle, and
Bechinger]lavergne_group_2019
François A Lavergne, Hugo Wendehenne, Tobias Bäuerle, and Clemens
Bechinger.
Group formation and cohesion of active particles with visual
perception-dependent motility.
Science, 3640 (6435):0 70–74,
10.1126/science.aau5347.
[Ziepke et al.(2022)Ziepke, Maryshev, Aranson, and
Frey]alex_preprint_2022
Alexander Ziepke, Ivan Maryshev, Igor S Aranson, and Erwin Frey.
Multi-scale organization in communicating active matter.
Nat. Commun., 13,
10.1038/s41467-022-34484-2.
[Nagai et al.(2015)Nagai, Sumino, Montagne, Aranson, and
Chaté]nagai_collective_2015-1
Ken H. Nagai, Yutaka Sumino, Raul Montagne, Igor S. Aranson, and Hugues
Chaté.
Collective Motion of Self-Propelled Particles with
Memory.
Phys. Rev. Lett., 1140 (16):0 168001,
10.1103/PhysRevLett.114.168001.
[Ventejou et al.(2021)Ventejou, Chaté, Montagne, and
Shi]ventejou2021susceptibility
Bruno Ventejou, Hugues Chaté, Raul Montagne, and Xia-qing Shi.
Susceptibility of orientationally ordered active matter to chirality
disorder.
Phys. Rev. Lett., 1270 (23):0 238001,
https://doi.org/10.1103/PhysRevLett.127.238001.
[Lemma et al.(2022)Lemma, Mitchell, Subramanian, Needleman, and
Dogic]lemma2022active
Bezia Lemma, Noah P Mitchell, Radhika Subramanian, Daniel J Needleman, and
Zvonimir Dogic.
Active microphase separation in mixtures of microtubules and
tip-accumulating molecular motors.
Phys. Rev. X, 120 (3):0 031006,
https://doi.org/10.1103/PhysRevX.12.031006.
[Abramowitz and Stegun(1964)]AbramowitzStegun
Milton Abramowitz and Irene A. Stegun.
Handbook of Mathematical Functions with Formulas, Graphs, and
Mathematical Tables.
Dover, New York, 1964.
[Bertin et al.(2006)Bertin, Droz, and
Grégoire]bertin_boltzmann_2006
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Boltzmann and hydrodynamic description for self-propelled particles.
Phys. Rev. E, 74:0 022101,
10.1103/PhysRevE.74.022101.
[Bertin et al.(2009)Bertin, Droz, and Grégoire]bertin_2009
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Hydrodynamic equations for self-propelled particles: microscopic
derivation and stability analysis.
J. Phys. A Math. Theor., 420 (44):0 445001,
10.1088/1751-8113/42/44/445001.
[Peshkov et al.(2014)Peshkov, Bertin, Ginelli, and
Chaté]peshkov_boltzmann-ginzburg-landau_2014
A. Peshkov, E. Bertin, F. Ginelli, and H. Chaté.
Boltzmann-Ginzburg-Landau approach for continuous
descriptions of generic Vicsek-like models.
Eur. Phys. J.: Spec. Top., 2230 (7):0
1315–1344,
10.1140/epjst/e2014-02193-y.
[Ngo et al.(2012)Ngo, Ginelli, and Chaté]ngo_competing_2012-2
Sandrine Ngo, Francesco Ginelli, and Hugues Chaté.
Competing ferromagnetic and nematic alignment in self-propelled polar
particles.
Phys. Rev. E, 860 (5):0 050101,
10.1103/PhysRevE.86.050101.
§ SUPPLEMENTARY INFORMATION
§ WASP SIMULATION METHOD
In this section we provide a brief summary of the agent-based simulations.
The focus will be on the aspects most relevant for the current study.
For a detailed description of the WASP simulation setup, please refer to the supplemental materials of Refs. <cit.>.
In the agent-based simulations, we consider M polymers moving on a flat substrate (in two spatial dimensions).
Each polymer n consists of N spherical joints j located at positions 𝐫_j^(n) (with j ∈ { 0, 1, …, N - 1 }, where the polymer tip is denoted by j = 0).
The direction of a polymer's tip is denoted by 𝐮_0^(n) and its motion is described by:
∂_t 𝐫_0^(n) = v^(n) 𝐮_0^(n) - 𝐅_𝐫𝐞𝐩 = v^(n) ( cos θ_0^(n), sin θ_0^(n) )^T - 𝐅_𝐫𝐞𝐩 .
Here 𝐅_𝐫𝐞𝐩 describes a weak repulsion force (see (<ref>)) acting on a polymer head while in contact with the contour of another polymer.
θ_0^(n) denotes the orientation of a polymer and v^(n) its free speed.
For this study, the speed of each polymer was chosen at random from a continuous uniform distribution in the interval [0.75, 1] v_0, where v_0 denotes the maximal velocity of a free polymer (see section S<ref> for further details on this velocity dispersion).
The orientation of a polymer's head evolves in time according to
∂_t θ_0^(n) = - δH̃_0^(n)/δθ_0^(n) + √(2v^(n)/L_p) ξ ,
where ξ is random white noise with zero mean and unit variance with the magnitude of the noise given by the prefactor.
This implies that individual polymers perform a persistent random walk with a path persistence length of L_p.
H̃_0^(n) sets the—in this study purely nematic—torque caused by interactions with other polymers.
Before we come to a description of H̃_0^(n), it will prove useful to introduce several other quantities.
The first is the distance vector
Δ𝐫_nm = ( 𝐫_0^(n) - 𝐫^(m) )_shDist .
This vector connects the tip of a polymer n with the position of an adjacent polymer's (denoted by m) contour that has the shortest possible distance.
The local orientation of the contour of the adjacent polymer m is given by θ_j^(m), which corresponds to the orientation of the polymer segment j of polymer m to which Δ𝐫_nm connects.
Second, if a polymer is interacting with several polymers at a time, we define a weighted average direction of the connecting vectors:
Δ𝐞_n := ∑_m C(|Δ𝐫_nm|) Δ𝐫_nm/|Δ𝐫_nm| .
Here C(|Δ𝐫_nm|) is a weighting factor accounting for the assumption that a more distant polymer contributes less to an interaction.
It is given by
C(|Δ𝐫_nm|) = (d - |Δ𝐫_nm|)/d for |Δ𝐫_nm| ≤ d and C(|Δ𝐫_nm|) = 0 otherwise,
where d defines the interaction radius.
Using the orientation of the averaged connecting vector θ̃_n, we define an averaged nematic impact angle as Δθ̃^(n)_n = θ_0^(n) - θ̃_n.
Equipped with these definitions we are now in a position to write down the alignment potential as
H̃_0^(n) := (α_n v_0/d) cos(2Δθ̃^(n)_n) |Δ𝐞_n| ,
where the overall amplitude of the alignment is set by the absolute value of the weighted connecting vector, combined with the nematic alignment strength α_n.
The repulsion force 𝐅_𝐫𝐞𝐩 in (<ref>) is given by
𝐅_𝐫𝐞𝐩 = -s ∑_m C(|Δ𝐫_nm|) Δ𝐫_nm/|Δ𝐫_nm| ,
which is used to prevent unphysical aggregation of polymers. It is assumed to be weak with s = 0.05.
Filaments in actomyosin motility assays are observed to conduct a trailing motion, where the tail of a polymer follows the movement of the tip <cit.>.
To emulate this behaviour, tail joints move according to
∂_t 𝐫_j^(n) = K_s ( |𝐫_j^(n) - 𝐫_j-1^(n)| - b ) (1/2)( 𝐮_j+1^(n) + 𝐮_j^(n) ) .
Here, the second factor, (1/2)( 𝐮_j+1^(n) + 𝐮_j^(n) ), ensures that the joint moves along the average of the orientations of the segments adjacent to joint j.
The remainder of (<ref>) corresponds to a linear (Hookean) restoring force with spring coefficient K_s = 200 that maintains an average length b of the cylindrical segments between bonds.
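To illustrate how these equations of motion translate into an update rule, a minimal sketch of a single tip update follows. The explicit Euler-Maruyama discretization, the time step and the default interaction radius are assumptions of this sketch and do not reproduce the actual WASP implementation.

import numpy as np

def alignment_torque(theta0, theta_tilde, de_norm, alpha_n=0.126, v0=1.0, d=1.0):
    """-dH/dtheta0 for H = (alpha_n v0 / d) cos(2(theta0 - theta_tilde)) |de|.

    theta_tilde is the orientation of the averaged connecting vector and
    de_norm its length; d=1.0 is a placeholder for the interaction radius.
    """
    A = alpha_n * v0 / d * de_norm
    return 2.0 * A * np.sin(2.0 * (theta0 - theta_tilde))

def tip_update(r0, theta0, v, Lp, torque, F_rep, dt=1e-3):
    """One explicit Euler-Maruyama step for a polymer tip.

    r0: tip position (2,), theta0: tip orientation, v: free speed,
    Lp: path persistence length, torque: alignment torque (zero without
    neighbours), F_rep: repulsion force (2,).
    """
    u0 = np.array([np.cos(theta0), np.sin(theta0)])
    r0_new = r0 + dt * (v * u0 - F_rep)
    noise = np.sqrt(2.0 * v / Lp) * np.sqrt(dt) * np.random.standard_normal()
    theta0_new = theta0 + dt * torque + noise
    return r0_new, theta0_new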
§ ONSET OF NEMATIC PATTERNS
In this section we provide further information on how the phase diagram shown in Fig. 1(c) of the main text was obtained.
To determine the density ρ_n as a function of L_p above which nematic patterns are formed, we performed exploratory simulations in the phase space spanned by the (reduced) global polymer density ⟨ρ⟩ L^2
and the persistence length L_p.
To guarantee that the dynamics has reached a steady state, we ran these simulations for a time of 15 873, which is much larger than the initial timescale t_0 ≈ 100 it takes for a system to reach the quasi-stationary, disordered state <cit.>.
Figure <ref> shows the results of the in silico parameter scans in density at a set of fixed values for L_p: The blue triangles and red squares correspond to steady states where we visually observed nematic patterns or a disordered state, respectively.
To determine the phase boundary ρ_n (L_p) we fitted a function f_ρ(L_p) = a/L_p (with a as free fitting parameter) to the data points with the lowest density that still exhibited nematic order [solid line in Fig. <ref>].
The shape of the boundary line is dictated by the interplay between two counteracting effects: density-dependent, interaction-induced ordering and rotational diffusion.
The former increases linearly with density, and above a critical density spontaneous ordering begins to predominate over diffusion.
Thus, the critical density is proportional to the rotational diffusion coefficient and therefore ∝ L_p^-1 in our case.
We take f_ρ(L_p) as an approximation to the density corresponding to the onset of nematic patterns, ρ_n (L_p).
To further test whether this is a satisfactory approximation for the phase boundary, we ran ten independent simulations at a density corresponding to ρ_n [cf. dots in Fig. 1(c) of the main text] and a further ten at 0.9 ρ_n for several different L_p, using a twice as long simulation time of 31 746.
All simulations at ρ_n formed ordered patterns, while none at 0.9 ρ_n did, affirming that f_ρ(L_p) adequately approximates the position of the isotropic-nematic transition.
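The boundary determination described above amounts to a one-parameter least-squares fit of f_ρ(L_p) = a/L_p; a minimal sketch is shown below with placeholder data points instead of the values from our parameter scans.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: lowest density that still exhibited nematic order
# at each persistence length (illustrative values only).
Lp = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
rho_lowest_ordered = np.array([0.42, 0.21, 0.105, 0.052, 0.026])

def f_rho(Lp, a):
    return a / Lp

a_fit, _ = curve_fit(f_rho, Lp, rho_lowest_ordered)
print(f"fitted a = {a_fit[0]:.3f};  rho_n(Lp) = a / Lp")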
§ DEFECT DETECTION
In this section, we explain the algorithms we used to identify topological defects in simulations of both the hydrodynamic theory and the agent-based model.
To algorithmically detect -1/2 defects in
both approaches, we took advantage of the fact that inside a defect core the topological charge density q, defined as <cit.>
q = 1/(4π) ( ∂_x Q̂_xa ∂_y Q̂_ya - ∂_x Q̂_ya ∂_y Q̂_xa ),
has a very large negative value (with Q̂=Q/ρ and Q defined as in (<ref>)), whereas in other regions of space its absolute value is much smaller (cf. lower right pane of Fig. 2(a) and (d) of the main text). We exploit this fact and define any contiguous region of space in which q falls below a certain threshold value q_thrs as one -1/2 defect.
The position of -1 / 2 defects in the agent-based model is obtained in the following way.
Please first note that the main purpose of the data from the agent-based simulations in Fig. 3(c)-(e) is to qualitatively confirm the trend observed in the hydrodynamic model. To quantify the data with a high degree of precision would require averaging over large ensembles, which would be numerically prohibitively demanding given the very long time scales on which the observed phenomena occur.
The total runtime of each simulation was 142 857 (which is much longer than the dynamics of undulations; cf. Movies S1 and S2), from which we cut an initial transient (cf. Section S<ref>) before starting the measurement.
For each value of L_p/⟨ϕ⟩ we averaged over ten independent simulations.
To obtain q in agent-based simulations, we rasterized space into a grid with a grid spacing of Δ x = 0.3, which is small enough to resolve the structure of a defect (note that the qualitative agreement between the agent-based simulations and hydrodynamic model, shown in Fig. 3 of the main text, does not depend on the exact choice of this and the following numerical parameters).
We used the orientations θ_0^(n) of polymer tips residing inside each grid point at a given time to calculate a local value of Q̂ using (<ref>). To suppress noise due to stochastic particle fluctuations, we further averaged over a time span of 15.9, which is much shorter than density rearrangements due to bending undulations.
With this we obtained q(𝐫, t) using (<ref>).
We chose q_thrs = - 0.032, which is much lower than typical values of q outside defects.
Additionally, to avoid classifying small and short-lived density peaks that occur sporadically in the simulations as CTDs, we heuristically filtered them out by requiring the charge density to be below q_thrs for a time of at least 159 for a CTD to be detected.
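A compact sketch of this detection pipeline is given below: tip orientations are coarse-grained into normalized Q-tensor components, the charge density q is evaluated by finite differences, and contiguous regions with q < q_thrs are labelled as candidate -1/2 defects. The normalization Q̂_xx ~ ⟨cos 2θ⟩, Q̂_xy ~ ⟨sin 2θ⟩ and the axis conventions are assumptions of this sketch, and the temporal averaging and lifetime filter described above are omitted for brevity.

import numpy as np
from scipy import ndimage

def coarse_grain_Q(x, y, theta, box_length, dx=0.3):
    """Bin tip orientations into normalized Q-tensor components per cell."""
    n_bins = int(round(box_length / dx))
    extent = [[0.0, box_length], [0.0, box_length]]
    counts, _, _ = np.histogram2d(x, y, bins=n_bins, range=extent)
    cxx, _, _ = np.histogram2d(x, y, bins=n_bins, range=extent,
                               weights=np.cos(2.0 * theta))
    cxy, _, _ = np.histogram2d(x, y, bins=n_bins, range=extent,
                               weights=np.sin(2.0 * theta))
    safe = np.where(counts > 0, counts, 1.0)
    return cxx / safe, cxy / safe

def charge_density(Qxx, Qxy, dx=0.3):
    """q = (1/4pi) sum_a (d_x Q_xa d_y Q_ya - d_x Q_ya d_y Q_xa).

    For a traceless symmetric tensor (Q_yy = -Q_xx, Q_yx = Q_xy) this
    reduces to q = (1/2pi)(d_x Q_xx d_y Q_xy - d_x Q_xy d_y Q_xx);
    axis 0 is taken as x and axis 1 as y.
    """
    dQxx_dx = np.gradient(Qxx, dx, axis=0)
    dQxx_dy = np.gradient(Qxx, dx, axis=1)
    dQxy_dx = np.gradient(Qxy, dx, axis=0)
    dQxy_dy = np.gradient(Qxy, dx, axis=1)
    return (dQxx_dx * dQxy_dy - dQxy_dx * dQxx_dy) / (2.0 * np.pi)

def detect_defects(Qxx, Qxy, q_thrs=-0.032, dx=0.3):
    """Label contiguous regions with q < q_thrs as candidate -1/2 defects."""
    mask = charge_density(Qxx, Qxy, dx) < q_thrs
    labels, n_defects = ndimage.label(mask)
    centers = ndimage.center_of_mass(mask, labels, range(1, n_defects + 1))
    return n_defects, centers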
The hydrodynamic model provides, by construction, direct access to the Q-tensor, which allows a direct calculation of the function q given by Eq. (<ref>). The positions of -1/2 defects are defined as local minima of q and, for consistency, the same value of q_thrs is used as for the agent-based simulations.
For the measurements in the hydrodynamic model, we discarded the data collected in the first half of the simulation runs in order to avoid any influence of initial transients.
To generate the data shown in Fig. 3(a), we classified all runs in which CTDs were detected as CTD-dominated (blue dots in Fig. 3(a)). The distinction between FAEs and stable bands was made via visual inspection.
§ FLUX MEASUREMENT THROUGH DEFECTS
In the main text, we studied the mass flow through a defect as well as the speed of particles during a CTD passage; see Figs. 4(b) and 4(e), respectively.
To this end, we needed detailed information about the position and velocity of particles as they transitioned from one arm of a defect to another.
To determine these quantities, we leveraged the possibility offered by the agent-based simulations to access the position of each individual polymer at any given point in time.
In order to be able to deduce that a given polymer has transitioned from one arm of a defect to another one, several things have to be known.
First, one has to find a criterion that allows one to determine algorithmically whether a polymer belongs to a given arm at a given time.
For this we used the following heuristics:
Over each arm of a defect we placed a round “classification area”, which is large enough to cover the full width of the nematic lane (blue regions in Fig. <ref>, diameter 22 L).
The positions of the classification areas were chosen such that they roughly coincided with the area where the nematic lanes recovered their full width (midpoint distance of classification areas to defect: 26 L in Fig. <ref>).
Every polymer inside one of these regions is classified as belonging to the given defect arm.
Second, one has to find a criterion that determines the origin of particles classified as belonging to a particular arm.
For this we introduced an additional classification area which encompasses all parts of the simulation box farther away from the defect core than a specific distance, cf. orange region in Fig. <ref> (distance to defect: 40 L).
(Note that the black colored area does not pertain to any classification area.)
After this partitioning, we measured the currents from one region to another using the heuristics described below.
We did this for a time span long enough that many particles can travel from one blue region to another (cf. Fig. <ref>), but short enough that bending undulations do not significantly change the positions of the individual lanes.
Data in Fig. 4(b) averaged over 159, Fig. 4(e) averaged over 4 019 trajectories in a time of 317.
For the flux measurement heuristics, we each assigned a unique identifier id to every classification area.
We then checked in short intervals of 0.16 for every polymer i if its position coincided with one of the classification areas.
If this was the case, polymer i was assigned the identifier of the region and the time of assignment t_assign was saved.
If polymer i already had a different identifier id' assigned (and hence also a different t_assign'), this meant that it had traveled from another classification area into the current region (without crossing a third region in the meantime).
In such a case, we stored the pairs of tuples (id', t_assign') and (id, t_assign), which allow us (combined with the also saved information of the position and speed of every polymer at every interval) to reconstruct the path polymer i has taken propagating from region id' to id. Subsequently, we replaced the assigned identifier and assignment time of polymer i with that of the current region and the current time and continued the simulation.
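A minimal Python sketch of this bookkeeping (the data structures and names are our own assumptions; the region geometry is supplied externally):

def track_transitions(trajectories, regions):
    # trajectories: dict polymer_id -> list of (t, x, y), sampled every 0.16;
    # regions: dict region_id -> callable (x, y) -> bool (membership test).
    # Returns a list of ((id_from, t_assign_from), (id_to, t_to), polymer_id).
    transitions = []
    for pid, path in trajectories.items():
        assigned_id, t_assign = None, None
        for t, x, y in path:
            hit = next((rid for rid, inside in regions.items() if inside(x, y)), None)
            if hit is None:
                continue  # polymer currently outside all classification areas
            if assigned_id is not None and hit != assigned_id:
                # the polymer travelled from region assigned_id into region hit
                transitions.append(((assigned_id, t_assign), (hit, t), pid))
            assigned_id, t_assign = hit, t
    return transitions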
§ DISPERSION IN THE POLYMER VELOCITY
Most studies of active matter assume the speed of agents to be constant and uniform <cit.>. Yet, experiments on the actin motility assay show actin filaments to have a broad distribution of velocities <cit.>.
To take into account the effects of such a velocity dispersion, we drew the assigned speed of polymers from a distribution (cf. Section S<ref> of this Supplemental Material).
We have found that the introduction of such a velocity dispersion does not hinder the formation of nematic lanes.
To additionally check whether particles that possess different free velocities behave differently on the level of macroscopic structures (for example, by causing an effective sorting of particles into spatially separate populations, where only relatively fast or slow particles form part of patterns), we subdivided the system into a grid with a grid spacing of Δ x = 0.3 and determined for each grid cell the locally averaged ⟨ v^(n)⟩ of particles inside a simulation exhibiting nematic lanes and CTDs.
Any local accumulation of fast/slow particles would lead to a different value of ⟨ v^(n)⟩ when compared to the global average ⟨ v^(n)⟩_glob.
As can be inferred from Fig. <ref>, the system is well mixed (up to random fluctuations) with respect to polymer velocities.
We further found that the introduction of a velocity dispersion prevented the decay of purely nematic patterns into oppositely propagating polar waves (cf. Ref <cit.>), which hence seems to be an artefact of the assumption of equal and uniform velocities.
§ WIDTH OF NEMATIC LANES
As discussed in
the main text,
we measured the width of nematic lanes as a function of density ⟨ϕ⟩ in both the agent-based simulations and the hydrodynamic model (at a constant system size).
To this end, we performed several simulations at different polymer densities but at a fixed persistence length (resp. several realizations of the hydrodynamic model at different ⟨ϕ⟩ and fixed λ).
After these systems had reached a configuration in which they exhibited a single straight lane, we measured the width of the band and the average density ⟨ϕ⟩_bg in the disordered background.
(The width is determined by averaging the density of the system along the axis of the straight lane, which results in a one dimensional density profile.
The width of the lanes in the hydrodynamic model is then defined as the distance between the two points with the maximal gradient of this curve, which can easily be obtained due to the absence of noise.
In the agent based simulations the lane width is heuristically defined as the width of the region where this profile exceeds the threshold of three times ⟨ϕ⟩_bg.)
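For illustration, a minimal Python sketch of both width definitions (grid spacing, array layout, and names are our own assumptions):

import numpy as np

def lane_width(phi, bg_density, dx=1.0, axis=0, mode="agent"):
    # phi: 2-D density field with a single straight lane along `axis`;
    # the 1-D profile is obtained by averaging the density along the lane axis.
    profile = phi.mean(axis=axis)
    if mode == "agent":
        # width of the region exceeding three times the background density
        return (profile > 3.0 * bg_density).sum() * dx
    # hydrodynamic model: distance between the two points of maximal gradient
    grad = np.gradient(profile, dx)
    return abs(np.argmin(grad) - np.argmax(grad)) * dx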
As shown in Fig. <ref>, the thickness of the lanes grows linearly with density in both the agent-based simulations and hydrodynamic model, while the density of the disordered background remains constant.
§ FAE DETECTION
In this section we describe the procedure we used to measure the mean number of FAEs present at different parameter regimes in the agent-based simulation (Fig. 3(e) of the main text).
For this we logged the formation of every FAE in the investigated systems; the most reliable method for detecting FAEs turned out to be manual inspection of simulation videos.
To obtain the mean number of FAEs present, we divided the total lifetime of all detected FAEs in the system by the total observation time.
For every investigated L_p in the agent-based simulations, we averaged over ten independent simulations, which each ran for a time of 142 857.
It is worth noting that agent-based simulations started in a parameter regime in which systems predominantly exhibit FAEs or stable lanes (i.e., high L_p; see also section “From CTDs to FAEs and bands” in the main text)
do not immediately form straight lanes at the onset of pattern formation, but frequently at first dwell in a state of high activity (cf. left panel of Fig. 3(b) in the main text) in which no FAE can develop.
We measured the duration of this initial transient (“dwell-time”) and found that it is shorter than a time of 70 000 in more than ninety percent of the cases.
We discarded this initial time span in the measurements of the mean numbers of CTDs (cf. section S<ref>) and FAEs present to rule out any influence of the initial transient on the results.
Further, we studied the temporal evolution of filamentous arc ejections.
The motion of a separating arc in the agent-based and the hydrodynamic model can be visualized using a kymograph of the density projection shown in Fig. <ref>.
As can be inferred from the bending of the lateral extrusions, the separation process of the arcs starts slowly and continues to accelerate until complete ejection and eventual dissolution of the arc.
§ HYDRODYNAMIC MODEL
To motivate our hydrodynamic model we start from the general form of the evolution equation for the probability distribution function P(𝐫,θ,t):
∂_t P(𝐫,θ,t) = - L_p ∂_i [ n_i P(𝐫,θ,t) ] + ∂_θ^2 P(𝐫,θ,t) + interactions ,
where 𝐧=(cosθ,sinθ) is the director vector, and L_p is the path persistence length of the polymers.
Time is measured in units of the diffusion coefficient.
Note that we only consider rotational diffusion and neglect translational diffusion.
In the following the space and time dependencies of the probability density are suppressed for brevity.
Contribution from the interaction between the polymers can be introduced in the form of collision intergrals in the Boltzmann ansatz <cit.>, or by using the gradient of the interaction-induced current in a Smoluchowski approach <cit.>.
We define the particle density ρ, the polarity vector 𝐩, and the nematic Q-tensor as the first three moments of the probability distribution function:
ρ := ∫_0^2π dθ P(θ) ,
p_i := ∫_0^2π dθ n_i P(θ) ,
Q_ij := ∫_0^2π dθ ( 2n_i n_j - δ_ij ) P(θ) ,
where the subscripts i and j denote the Cartesian components and δ_ij represents the Kronecker delta.
It is convenient to consider Fourier harmonics of the probability distribution function:
P(𝐫, θ)=∑_k=-∞^∞ P_k(𝐫) e^i k θ.
According to their definitions, ρ , p_i, and Q_ij can be expressed via Fourier harmonics as follows:
ρ = 2π P_0 ,
p_i = π( (P_1 + P_-1), i(P_1 - P_-1) ) ,
Q_ij = π( (P_2 + P_-2), i(P_2 - P_-2) ) ,
where the symbol i denotes the imaginary unit.
By introducing the projection onto the m^th harmonics of P:
(…)^m := 1/2π ∫_0^2π dθ e^-i m θ (…) ,
one obtains the following contributions from the advective and diffusive parts of (<ref>) to the evolution equations of the m-th Fourier harmonics (P_m):
∂_t P_m = -m^2 P_m - L_p ∂_i ( n_i P(𝐫,θ) )^m
= -m^2 P_m - L_p/2 [ ∂_x ∑_k P_k (δ_k,m-1+δ_k,m+1) + ∂_y ∑_k P_k (δ_k,m-1-δ_k,m+1)/i ] .
In terms of the collective variables this can be rewritten as:
∂_t ρ = - L_p ∂_i p_i ,
∂_t p_i = - p_i - L_p/2 ∂_i ρ + L_p/2 ∂_j Q_ij ,
∂_t Q_ij = -4 Q_ij - L_p/2 [ ∂_i p_j + ∂_j p_i - δ_ij ∂_k p_k ] .
Note that we imply summation over repeated indices following the Einstein convention.
Since we consider a system with purely nematic interactions, the polar order decays on short time scales for all strengths of self-propulsion.
Thus, the polarity field 𝐩 equilibrates fast and can be eliminated adiabatically to arrive at dynamic equations for the density ρ and Q-tensor alone.
We find after rescaling time by a factor of 4:
∂_t ρ = λ^2 Δρ + λ^2 ∂_i ∂_j Q_ij ,
∂_t Q_ij = - Q_ij + λ^2/2 Δ Q_ij + λ^2 [ ∂_i ∂_j ρ ]^st ,
where we have introduced the parameter λ:=L_p/(2√(2)), Δ=∂_i∂_i denotes the Laplace operator, and [...]^st indicates the symmetric and traceless part of the expression.
We now discuss the physical meaning of each term on the RHS of
Eqs. (<ref>).
The first term in the density equation Eq. (<ref>) acts like effective translational diffusion, despite the fact that it actually originates from single-particle advection (note that the real translational diffusion is neglected in our model).
The second term in Eq. (<ref>) represents an anisotropic flux of material along the nematic order. This term enhances diffusion along the direction of the eigenvector of Q_ij
corresponding to its positive eigenvalue, and suppresses it along the perpendicular direction. It can also be treated as a curvature-induced flux, since it disappears in a uniformly ordered state.
The first term in the evolution equation of the nematic tensor Eq. (<ref>) is due to the thermal rotational diffusion. If there were no interaction between polymers, the action of this term would lead to disordering.
The second term in Eq. (<ref>) penalizes the distortion of Q_ij and represents the elasticity in terms of liquid crystal theory.
The last term of Eq. (<ref>) provides the coupling between the equations. It can be treated simply as an anisotropic diffusive contribution. But it also introduces “aligning torque” by changing the orientation of nematic order in the presence of the density gradients.
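Before the interaction terms are introduced below, the linear part of Eqs. (<ref>) can already be integrated numerically; a minimal explicit finite-difference sketch in Python (periodic boundaries, grid spacing h, time step dt, and the parameterization Q_xx = q1, Q_xy = q2 are our own choices, not those used for the results reported here):

import numpy as np

def dxx(f, h): return (np.roll(f, -1, 0) - 2 * f + np.roll(f, 1, 0)) / h**2
def dyy(f, h): return (np.roll(f, -1, 1) - 2 * f + np.roll(f, 1, 1)) / h**2
def dxy(f, h):
    return (np.roll(np.roll(f, -1, 0), -1, 1) - np.roll(np.roll(f, -1, 0), 1, 1)
            - np.roll(np.roll(f, 1, 0), -1, 1) + np.roll(np.roll(f, 1, 0), 1, 1)) / (4 * h**2)

def euler_step(rho, q1, q2, lam, h, dt):
    # One explicit Euler step of the linear equations, with Q = [[q1, q2], [q2, -q1]].
    lap = lambda f: dxx(f, h) + dyy(f, h)
    div_div_Q = dxx(q1, h) - dyy(q1, h) + 2 * dxy(q2, h)          # ∂_i∂_j Q_ij
    rho_new = rho + dt * lam**2 * (lap(rho) + div_div_Q)
    q1_new = q1 + dt * (-q1 + 0.5 * lam**2 * lap(q1)
                        + 0.5 * lam**2 * (dxx(rho, h) - dyy(rho, h)))
    q2_new = q2 + dt * (-q2 + 0.5 * lam**2 * lap(q2) + lam**2 * dxy(rho, h))
    return rho_new, q1_new, q2_new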
Finally, besides the diffusion- and advection-related terms we need to add interaction-induced contributions.
Inspired by Refs. <cit.> we also introduce the following terms to describe the nematic interactions of the polymers:
∂_t ρ = ⋯ + ν̃_ρ Δρ^2 + χ̃_ρ ∂_i ∂_j (ρ Q_ij) ,
∂_t Q_ij = ⋯ + α̃ ρ Q_ij - β̃ Q^2 Q_ij + κ̃_ρ ⟨ρ⟩ Δ Q_ij + ω̃^a [ 2 ∂_i ρ ∂_j ρ ]^st .
The ν̃_ρ-related term in Eq. (<ref>) comes from the excluded volume interactions between the polymers (however an analogous term occurs due to the “collision" of polymers, e.g., see Ref. <cit.>).
The last term in Eq. (<ref>) is an interaction-induced flux representing a density-dependant correction <cit.> to the last term of Eq. (<ref>).
The first term of Eq. (<ref>) promotes density-dependent ordering, which competes with motility-induced disordering coming from the first term of Eq. (<ref>); β is a non-equilibrium Landau coefficient setting the magnitude of order in the bulk.
κ̃_ρ⟨ρ⟩ contributes to the restoring elastic constant. As can be seen, this is the only term in our theory that is linearized around the mean density value, whereas in most hydrodynamic models almost all terms in Eq. (<ref>) are subjected to this procedure. We linearize this particular term for two reasons. Firstly, for the sake of simplicity: we want this term to represent one particular effect, namely elasticity (or “rigidity” in terms of the material). Secondly, with this linearization it is simpler to interpret the term κ̃_ρ⟨ρ⟩Δ Q_ij as stemming from a free energy, while the contribution κ̃Δ(ρ Q_ij) could not be obtained from a free energy.
Finally, the last term of Eq. (<ref>) describes the non-equilibrium anchoring to the density interface <cit.>.
We emphasize again that we are not linearizing ν̃_ρ, χ̃_ρ, and ω̃^a - related terms around the mean density (the latter of which would simply disappear completely in that case).
Such higher-order terms are typically linearized (or ignored) in well-controlled closures in the vicinity of the isotropic/nematic transition (e.g., within Boltzmann–Ginzburg–Landau approach <cit.>).
However, our observations hint that this linearization procedure, widely used in the field of active nematics, may result in some physical processes not being accounted for by the resulting models, which in turn can lead to some phenomena (such as CTDs) escaping the researchers' gaze as well.
To obtain the equations of motion presented in the main text we simply combine (<ref>) and (<ref>) and re-normalize density by the critical one ϕ=ρ/ρ_n. The coefficients are also renamed accordingly: κ̃_ρ→κ_ϕ, etc.
As discussed in the main text, the hydrodynamic model gives direct access to the direction and magnitude of the anisotropic active flux -∂_j(χ Q_ij). To complement the illustration of this flux in Fig. 4(d) of the main text, we show in Fig. <ref> a direct plot of this observable as recorded in the hydrodynamic model.
§ SUPPLEMENTAL MOVIES
Movie S1
Constantly undulating nematic lanes in an agent-based simulation.
(Parameters are: ρ L^2=3.15, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S2
Emergence of a multitude of condensed topological defects in agent-based simulations. Note that the lateral movement of lanes happens on long timescales. A single frame roughly corresponds to the time of 162 that a straight moving particle with a velocity of v_0 needs to cross the whole system. (Parameters are: ρ L^2=3.2, L_p=11.9. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S3
Two condensed topological defects are formed simultaneously in an agent-based simulation. Due to continued undulation of the connecting nematic lanes the defects eventually disintegrate.
(Parameters are: ρ L^2=3.47, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 3 for better visibility.)
Movie S4
Several filamentous arc ejections develop in succession along a nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=2.7, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S5
Straight and stable nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=1.9, L_p=20.6. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S6
Details of a flux in an agent-based simulation from one arm of a condensed topological defect to the two others. The path that is taken by the polymer heads is traced out. Only trajectories that start in the upper left arm and eventually will go to either the lower or upper right arm are visible.
(Parameters are: ρ L^2=3.5, L_p=11.1.)
Movie S7
Emergence of a multitude of condensed topological defects in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1,⟨ϕ⟩=1.1 )
Movie S8
Several filamentous arc ejections develop in succession along a nematic lane in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.2,⟨ϕ⟩=1.1)
Movie S9
Straight and stable nematic lane in a simulation of the hydrodynamic model.
(Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.4,⟨ϕ⟩=1.1)
Movie S10
Three-beam symmetrical arrangement of sources of polar particles. The ensuing nematic currents eventually form a condensed topological defect.
(Parameters are: ρ L^2=3.6, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
|
http://arxiv.org/abs/2307.04932v1 | 20230710225935 | Tur\' an number for bushes | [
"Zoltán Füredi",
"Alexandr Kostochka"
] | math.CO | [
"math.CO",
"05D05, 05C65, 05C05"
] |
Turán number for bushes
Zoltán Füredi
Alfréd Rényi Institute of Mathematics, Budapest, Hungary.
E-mail: .
Research partially supported by National Research, Development and Innovation Office NKFIH grants 132696 and 133819.
Alexandr Kostochka
University of Illinois at Urbana–Champaign, Urbana, IL 61801
and Sobolev Institute of Mathematics, Novosibirsk 630090, Russia. E-mail: .
Research supported in part by NSF
grant DMS-2153507 and NSF RTG grant DMS-1937241.
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================
Let a,b ∈ Z^+,
r=a + b, and let T be a tree
with parts U = {u_1,u_2,…,u_s}
and V = {v_1,v_2,…,v_t}.
Let U_1, … ,U_s and V_1, …, V_t be disjoint sets, such that |U_i|=a and |V_j|=b for all i,j.
The (a,b)-blowup of T is the
r-uniform hypergraph with edge set
{U_i ∪ V_j :
u_iv_j ∈ E(T)}.
We use the Δ-systems method to prove the following Turán-type result.
Suppose a,b,t∈ Z^+, r=a+b≥ 3, a≥ 2, and
T is a fixed tree of diameter 4 in which the degree of the center vertex is t.
Then there exists a C=C(r,t,T)>0 such that
|ℋ|≤ (t-1)n r-1 +Cn^r-2 for every
n-vertex r-uniform hypergraph ℋ not containing
an (a,b)-blowup of T. This is asymptotically exact when t≤ |V(T)|/2.
A stability result is also presented.
Mathematics Subject Classification: 05D05, 05C65, 05C05.
Keywords: Hypergraph trees, extremal hypergraph theory, Delta-systems.
§ INTRODUCTION
§.§ Basic definitions and notation
An r-uniform hypergraph (an r-graph, for short), is a family of r-element subsets of a finite set.
We associate an r-graph with its edge set and call its vertex set V().
Often we take V()=[n], where [n]:={ 1, 2, 3,…, n}.
Given an r-graph ℱ,
let the Turán number of ℱ, denoted _r(n,ℱ), be the maximum number of edges in an r-graph on n vertices that does not contain a copy of ℱ.
Since
a (graph) tree is connected and bipartite, it uniquely defines the parts in its bipartition. So, we say a tree
T is an (s,t)-tree if one part of V(T) has s vertices and the other has t vertices.
Let s,t, a,b > 0 be integers,
r=a + b, and let T = T(X,Y) be an (s,t)-tree
with parts U = {u_1,u_2,…,u_s}
and V = {v_1,v_2,…,v_t}.
Let U_1, … ,U_s and V_1, …, V_t be pairwise disjoint sets, such that |U_i|=a and |V_j|=b for all i,j.
So | ⋃ U_i∪ V_j | = as+bt.
The (a,b)-blowup of T, denoted by (T,a,b), is the
r-uniform hypergraph with edge set
(T,a,b):= {U_i ∪ V_j :
u_iv_j ∈ E(T)}.
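For concreteness, a short Python sketch that builds this edge set from a tree given by index pairs (the encoding of the ground set and all names below are our own choices):

def blowup(tree_edges, s, t, a, b):
    # tree_edges: list of index pairs (i, j) meaning u_{i+1} v_{j+1} ∈ E(T);
    # returns the edge set of the (a,b)-blowup over the ground set {0, ..., a*s+b*t-1}.
    U = [frozenset(range(i * a, (i + 1) * a)) for i in range(s)]                  # |U_i| = a
    V = [frozenset(range(a * s + j * b, a * s + (j + 1) * b)) for j in range(t)]  # |V_j| = b
    return [U[i] | V[j] for (i, j) in tree_edges]

# Example: the path P_3 = u_1 v_1 u_2 v_2, viewed as a (2,2)-tree, with a = b = 2
edges = blowup([(0, 0), (1, 0), (1, 1)], s=2, t=2, a=2, b=2)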
The goal of this paper is to find the asymptotics of the Turán number for (a,b)-blowups of many trees of radius 2 using the
Δ-systems method.
Earlier, (a,b)-blowups of different classes of trees and different pairs (a,b) were considered in <cit.>. The main result in <cit.>
is the following.
Suppose r ≥ 3, s,t ≥ 2, a + b = r, b < a < r.
Then (as n→∞) any
𝒯-free n-vertex r-graph satisfies
|| ≤ (t - 1)n r-1 + o(n^r-1).
This is asymptotically sharp whenever t≤ s.
This theorem asymptotically settles about a half of possible cases, but when t<s and a≤ b, it is expected that the asymptotics is different from the one in Theorem <ref>. More is known on (a,b)-blowups of paths. Let P_ℓ denote the (graph) path with ℓ edges.
The case of P_2 was resolved asymptotically by Frankl <cit.> (for b=1) and by Frankl and Füredi <cit.> (for all 1≤ a≤ r-2 and b=r-a):
_r(n, (P_2,a,b)) = Θ( n^max{ a-1,b }).
The case of P_3 was fully solved for large n
by Füredi and Özkahya <cit.>. They showed that for fixed 1≤ a,b <r with r=a+b and for n> n_0(r),
_r(n,( P_3,a,b)) = n-1r-1.
For longer paths, the following was proved in <cit.>.
Let a+b=r, a,b≥ 1 and ℓ≥ 3. Suppose further that
(i) ℓ is odd, or
(ii) ℓ is even and a>b, or (iii) (ℓ, a,b) = (4,1,2).
Then
_r(n,(P_ℓ,a,b)) = ⌊ℓ-1/2⌋n r - 1+ o(n^r - 1).
So, the situation with blowups of P_ℓ is not resolved for the case when ℓ≥ 4 is even and a≤ b apart from
the case (ℓ, a,b) = (4,1,2).
In this paper, we first consider (a,b)-blowups of special trees of radius 2.
A graph bush B_t,h is the radius-2 tree obtained from the star K_1,t by joining each vertex of degree one to h new vertices. So B_t,h has 1+t+th vertices. Let s=1+th. Then B_t,h is an (s,t)-tree with s>t.
Suppose that a,b,t,h are positive integers, a+b=r and t≥ 2.
By (a,b,t,h)-bush, _t,h(a,b), we will call the (a,b)-blowup of B_t,h.
This means the center vertex of B_t,h is replaced by an a-set A,
its neighbors by the b-sets B_1, …, B_t and its second
neighbors by a-sets A_i,j, i∈ [t], j∈ [h].
In particular, the (a,b)-blowup of the path P_4 is the (a,b,2,1)-bush _2,1(a,b).
Since _t,h(a,b) has t disjoint edges B_i∪ A_i,1 for i=1,…,t, the example of the r-uniform hypergraph with vertex set [n] in which every edge intersects the interval [t-1] shows that
_r(n,_t,h(a,b))≥n r-n-t+1 r∼ (t-1)n-1 r-1.
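A short Python sketch comparing the exact count of this lower-bound construction with its asymptotic form (the parameter values in the example are arbitrary):

from math import comb

def lower_bound_example(n, r, t):
    # Number of r-sets of [n] meeting [t-1], versus (t-1) * C(n-1, r-1).
    meeting = comb(n, r) - comb(n - t + 1, r)
    return meeting, (t - 1) * comb(n - 1, r - 1)

print(lower_bound_example(200, 4, 3))  # the two counts agree to leading order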
We will use the Δ-systems approach to show that this is asymptotically correct in many cases.
Recall that for a>b≥ 2 the asymptotic equality follows from Theorem <ref>.
In this paper we deal with all cases and also present a somewhat refined result by
considering shadows of hypergraphs.
Recall that for an r-graph ℋ the shadow, ∂ℋ, is the collection of (r-1)-sets that lie in some edge of ℋ.
Our first main result is the following.
Suppose that a,b,t,h are positive integers, r=a+b≥ 3.
Also suppose that in case of (a,b)= (1,r-1) we have h=1.
Then there exists a C=C(r,t,h)>0 such that the following holds.
If ℋ is an n-vertex r-uniform family not containing
a bush _t,h(a,b) then
|ℋ|≤ (t-1)|∂ℋ| +Cn^r-2.
This implies that (<ref>) is asymptotically exact in these cases as r,t,h are fixed and n→∞.
Note that for (a,b)=(1,r-1) we prove (<ref>) only for h=1.
In fact, in this case (<ref>) does not hold for h≥2.
An example is this: V()=[n], A=[t-1] and =_1∪_2, where
_1 is the set of r-subsets of [n] with exactly one vertex in A and
_2 is the Steiner system S_1(n-t+1,r,r-1) on [n]-A.
This example has asymptotically (t-1+1/r)n r-1 edges and does not contain
_t,2(1,r-1).
By increasing h, we can get examples without _t,h(1,r-1) that have even more edges.
McLennan <cit.> proved that in the graph case (a=b=1) we have
_r(n, B_t,h)= 1/2(t+th-1)n+O(1); some extremal graphs are vertex-disjoint unions of complete graphs K_t+th.
For the case t=1 the inequality (<ref>) is rather weak; it is known that _r(n, _1,h(a,b))= O(n^r-2) for a≥ 2. Even better bounds were proved in <cit.>. So we usually suppose that t≥ 2, r≥ 3.
Since each tree of diameter 4 with the degree of the center equal to t is a subgraph of a graph bush B_t,h for some h, Theorem <ref>
yields the following somewhat more general result.
Suppose a,b,t∈ Z^+, r=a+b≥ 3 and a≥ 2.
Let T be a fixed tree of diameter 4 in which the degree of the center vertex is t.
Then there exists a C=C(r,t,T)>0 such that the following holds.
If ℋ is an n-vertex r-graph not containing
an (a,b)-blowup of T, then
|ℋ|≤(t-1)n r-1 +Cn^r-2.
We also use the Δ-systems approach to show that for many a≥ 2 and b≥ 2 with a+b=r, the r-uniform hypergraphs without _t,h(a,b) of cardinality "close" to extremal contain vertices of "huge" degree.
Suppose that a,b,t,h are positive integers, a,b≥ 2 and r=a+b≥ 5.
Then for any C_0>0
there exist n_0>0 and C_1>0 such that the following holds.
If n>n_0, ℋ is an n-vertex r-uniform family not containing
a bush _t,h(a,b) and |ℋ|> (t-1)n-1r-1-C_0 n^r-2, then
there are t-1 vertices in [n] each of which is contained in at least n-1r-1-C_1 n^r-2 edges of .
The structure of this paper is as follows. In the next section, we discuss the Δ-system method and present a lemma
by Füredi <cit.> from 1983 that will be our main tool. In Section <ref> we describe properties of so called intersection structures. It allows us to prove the main case of Theorem <ref> (the case a≥ 2) in
Section <ref> and the case of a=1 and h=1 in
Section <ref>. Then in
Section <ref> we prove Theorem <ref>.
§ DEFINITIONS FOR THE Δ-SYSTEM METHOD AND A LEMMA
A family of sets {F_1,…,F_s} is an s-star or a Δ-system or an s-sunflower of size s with kernel A, if
F_i∩ F_j=A for all 1≤ i<j≤ s. The sets F_i∖ A are called petals.
For a member F of a family ℱ, let the intersection structure of F relative to ℱ be
ℐ(F,ℱ)={F∩ F': F'∈ℱ∖{F}}.
An r-uniform family ℱ⊆[n] r is r-partite if there exists a partition
(X_1,…,X_r) of the vertex set [n] such that |F∩ X_i|=1 for each
F∈ℱ and each i∈ [r].
For a partition (X_1,…,X_r) of [n] and a set S⊆ [n], the pattern Π(S) is the set
{i∈ [r]: S∩ X_i≠∅}. Naturally, for a family ℒ of subsets of [n],
Π(ℒ)={Π(S):S∈ℒ}⊆ 2^[r].
For any positive integers s and r, there exists a positive constant c(r,s) such that every
family ℱ⊆[n] r contains a subfamily ℱ^*⊆ℱ satisfying
1. |ℱ^*|≥ c(r,s)|ℱ|.
2. ℱ^* is r-partite, together with an r-partition (X_1,…,X_r).
3. There exists a family 𝒥 of proper subsets of [r] such that Π(ℐ(F,ℱ^*))=𝒥
holds for all F∈ℱ^*.
4. 𝒥 is closed under intersection, i.e., for all A,B∈𝒥 we have A∩ B∈𝒥, as
well.
5. For any F∈ℱ^* and each A∈ℐ(F,ℱ^*), there is an s-star in ℱ^*
containing F with kernel A.
Remark 1. The proof of Lemma <ref> in <cit.> yields that if ℱ itself is r-partite with an r-partition (X_1,…,X_r), then the r-partition in the statement can be taken the same.
Remark 2. By definition, if for some M⊂ [r] none of the members of the family 𝒥 of proper subsets of [r] in
Lemma <ref> contains M, then for any two sets F_1,F_2∈ℱ^*, their intersections with
⋃_j∈ MX_j are distinct. It follows that if |M|=m, then
|ℱ^*|≤∏_j∈ M|X_j| ≤( n-(r-m)/m)^m.
Thus, if |ℱ^*|> ( n-r+m/m)^m, then every m-element subset of [r] is contained in some
B∈𝒥.
Call a family 𝒥 of proper subsets of [r] m-covering if every m-element subset of [r] is contained in some
B∈𝒥. In these terms, Remark 2 says that if |ℱ^*|> ( n-r+m/m)^m, then the family 𝒥 in Lemma <ref> is m-covering.
For k=0,1,…,r, define the family 𝒥^(k) of proper subsets of [r] as follows. It contains (a) the sets [r]-{i} for 1≤ i≤ k, (b) all (r-2)-element subsets of [r] containing {1,2,…,k}, and (c) all the intersections of these subsets.
By definition, each 𝒥^(k) is (r-2)-covering. Moreover,
each (r-2)-covering family of proper subsets of [r] closed under intersections contains a subfamily isomorphic to some 𝒥^(k).
Indeed, if an (r-2)-covering family 𝒥 of proper subsets of [r] contains exactly k sets of size r-1,
then it must contain as members all (r-2)-element subsets of [r] not contained in these k sets, so properties (a) and (b) of the definition hold. Part (c) follows since 𝒥 is closed under intersections.
§ GENERAL CLAIMS ON INTERSECTION STRUCTURES.
Call a set B a (b,s)-kernel in a set system ℱ if B is the kernel of size b in a sunflower with s petals formed by members of ℱ.
If a+b=r and an r-uniform family ℋ does not contain _t,h(a,b), then there do not exist
disjoint sets A_0,B_1,B_2,…,B_t with |A_0|=a, |B_1|=… =|B_t|=b such that all B_1,…,B_t are (b,thr)-kernels in ℋ and the sets A_0∪ B_1,… ,A_0∪ B_t are edges of ℋ.
Suppose, there are such disjoint sets A_0,B_1,B_2,…,B_t. Let D_0=A_0∪⋃_j=1^tB_j. For i=1,…,t, do the following. Since B_i is a (b,thr)-kernel and |D_i-1∖ B_i|=a+(t-1)b +(i-1)ha≤ t h r -h, there exist h petals A_i,j (1≤ j≤ h) of a thr-sunflower with kernel B_i which are disjoint from D_i-1. Let D_i=D_i-1∪_j Y_i,j.
After t steps, we find a _t,h(a,b) whose edges are A_0∪ B_i and B_i∪ A_i,j for i=1,…,t, j=1,2,…,h.
Suppose a+b=r and 𝒢⊂[n] r with |𝒢|>1/c(r,thr)n^r-2
does not contain _t,h(a,b).
By Lemma <ref> and (<ref>), there is 𝒢^*⊆𝒢
satisfying the lemma such that the corresponding family 𝒥 of proper subsets of [r] is (r-2)-covering.
Let (X_1,…,X_r) be the corresponding partition.
Family 𝒥 does not contain disjoint members A and B such that |A|=a and |B|=b.
Suppose, it does. By renaming the elements of 𝒥, we may assume that A={1,…,a} and
B={a+1,…,r}. Let X={x_1,…,x_r}∈𝒢^*, where x_i∈ X_i for all i.
Since A∈𝒥, {x_1,…,x_a} is an (a,thr)-kernel in 𝒢^*. Let B_1,…,B_t be some t petals in the sunflower with kernel {x_1,…,x_a}. Since [r]-[a]=B∈𝒥, each of B_1,…,B_t
is a (b,thr)-kernel in 𝒢^*, contradicting Lemma <ref>.
If a+b=r and 2≤ a,b≤ r-2, then for each 0≤ k≤ r-2 and for k=r the family
𝒥^(k) has
disjoint members A and B such that |A|=a and |B|=b, unless (r,a,b,k)=(4,2,2,1).
If (a,b)=(1,r-1) or (a,b)=(r-1,1) and r≥ 3, then for each 1≤ k≤ r-2 and for k=r the family
𝒥^(k) has
disjoint members A and B such that |A|=a and |B|=b.
If k≥ a, then we let A=[a], B=[r]-[a], and represent them as follows:
A=⋂_k+1≤ i<i'≤ r([r]-{i,i'})∩⋂_a+1≤ i≤ k([r]-{i}),
B=⋂_1≤ i≤ a([r]-{i}).
If k≥ b, then we have a symmetric representation.
In particular, this proves the claim for (a,b)=(1,r-1) or (a,b)=(r-1,1).
If k≤ a-2, then we again let A=[a], B=[r]-[a], but represent them as follows (using that a≤ r-2 and k≤ a-2):
A=⋂_a+1≤ i<i'≤ r([r]-{i,i'}),
B=⋂_1≤ i≤ k([r]-{i})∩⋂_k+1≤ i<i'≤ a([r]-{i,i'}).
By symmetry, the only remaining case is that k=a-1=b-1. So r is even, and a=b=r/2=k+1. In this case, if r>4, then
we let A=[k-1]∪{k+1,k+2}, B=([r]-[k+2])∪{k}, and represent them as follows:
A=⋂_k+2≤ i<i'≤ r([r]-{i,i'})∩ ([r]-{k}),
B=⋂_1≤ i≤ k-1([r]-{i})∩ ([r]-{k+1,k+2}).
§ PROOF OF THE MAIN THEOREM
§.§ Basic procedure
Now we are ready to prove Theorem <ref>.
Assume that an n-vertex r-uniform family ℋ
does not contain _t,h(a,b).
Define C=C(r,t,h) := 1/c(r,thr), where c is from Lemma <ref>.
For any r-uniform family 𝒢, let 𝒢^* denote a family
satisfying Lemma <ref> and 𝒥(𝒢^*)⊂ 2^[r] denote the corresponding intersection structure.
Do the following procedure. Let ℋ_1=(ℋ)^*
and 𝒥_1=𝒥(ℋ^*).
For i=1,2,…, if
|ℋ∖⋃_j=1^i ℋ_j|≤ n^r-2/c(r,thr),
then stop and let m:=i and
ℋ_0=ℋ∖⋃_j=1^i ℋ_j;
otherwise, let
ℋ_i+1:=(ℋ∖⋃_j=1^i ℋ_j)^*
and 𝒥_i+1=𝒥((ℋ∖⋃_j=1^i ℋ_j)^*).
This procedure provides a partition of ℋ,
ℋ=⋃_i=0^m ℋ_i.
Let ℋ' denote ⋃_i=1^m ℋ_i.
By definition,
|ℋ_0| ≤ Cn^r-2,
so we get
|ℋ'| + Cn^r-2≥ |ℋ|.
We distinguish four cases, a,b≥ 2 and r≥ 5 (discussed in Subsection <ref>), (a,b)=(2,2) and r=4 (Subsection <ref>), (a,b)=(r-1,1) and r≥ 3 (Subsection <ref>), and finally
(a,b)=(1,r-1), r≥ 3 and h=1 (Section <ref>).
§.§ Case of a,b≥ 2 and r≥ 5
Here 2≤ a,b≤ r-2 and (a,b)≠ (2,2), so Lemma <ref> implies that for each 1≤ i≤ m, 𝒥(ℋ_i) has exactly r-1 (r-1)-subsets, i.e., it contains 𝒥^(r-1).
Hence for each hyperedge E∈ℋ_i⊂ℋ' (1≤ i≤ m) there exists an element c(E)∈ E such that
each proper subset of E containing c(E) is a kernel of a thr-star in ℋ_i.
Note that although each ℋ_i is r-partite, the partitions might differ for different values of i.
This does not cause any problem in our argument, as we only need the existence of the element c(E).
Define the function α on [n] r-1 as follows:
Given an (r-1)-set Y, let α(Y) be the number of edges E∈ℋ' with Y=E∖{ c(E)}.
α(Y)≤ t-1, for all (r-1)-subsets Y of [n].
Suppose to the contrary that there are t distinct E_1∈ℋ_i_1, …, E_t∈ℋ_i_t
such that E_i=Y∪{ c(E_i)}.
If i_j=i_j' for some j≠ j',
then the intersection structure of ℋ_i_j would contain the (r-1)-set Y, a contradiction.
Thus i_1,…,i_t are all distinct. By relabelling we may suppose that E_i∈ℋ_i.
We will find the t+1 disjoint sets A_0, B_1,B_2,…,B_t contradicting Lemma <ref> using induction as follows.
Let A_0⊂ Y arbitrary, |A_0|=a and let D_0:= ∪ E_i, |D_0|=t+r-1.
We define the sets E_i', D_i, B_i step by step as follows. We will have D_i:=D_0∪_j ≤ i E_j' and |D_i|= t+r-1+i(r-1-a).
For i=1, 2, …, t consider the family ℋ_i and its member E_i in it.
By the intersection structure of ℋ_i,
the set A_0∪{c(E_i)} is an (a+1,thr)-kernel in ℋ_i.
One of the thr petals of the sunflower in ℋ_i with kernel A_0∪{c(E_i)}
should be disjoint from D_i-1; let E_i' be the corresponding set in ℋ_i. Since c(E_i)∈ E_i', and homogeneity gives c(E_i')=c(E_i), the set
B_i:=E_i'-A_0 is a (b,thr)-kernel.
Claim <ref> implies
(t-1)|∂ℋ'| ≥∑_∀ (r-1)-set Yα(Y)= |ℋ'|.
This, together with (<ref>), completes the proof of Theorem <ref> in this case.
§.§ The case (a,b)=(2,2)
Lemma <ref> implies that for each 1≤ i≤ m, one of the following two cases holds.
Either 𝒥^(3) is contained in 𝒥(ℋ_i); then 𝒥(ℋ_i) has exactly three 3-subsets, namely [4]∖{1}, [4]∖{2}, and [4]∖{3}, and 𝒥(ℋ_i) also contains all subsets containing the element 4, but it does not contain 123.
Or 𝒥(ℋ_i) is of 𝒥^(1) type; then it has a unique 3-subset, 234, and { 1, 12, 13, 14}⊂𝒥(ℋ_i).
Call ℋ_i (and its edges) type α if 𝒥(ℋ_i) has three 3-subsets.
Each of these edges E∈ℋ_i has an element c(E)∈ E such that
each proper subset of E containing c(E) is a kernel of a 4· t· h-star in ℋ_i.
The union of the families ℋ_i of type α is ℋ'_α.
Call ℋ_i (and its edges) type β if 𝒥(ℋ_i) has a unique 3-subset.
Each of these edges E∈ℋ_i has an element b(E)∈ E such that
each set K⊂ E of the form {b(E), x} (x∈ E∖{ b(E)}) and the set
E∖{ b(E) } is a kernel of a 4· t· h-star in ℋ_i.
The union of the families ℋ_i of type β
is ℋ'_β.
Define the function α on [n] 3 as follows:
Given a 3-set Y, let α(Y) be the number of edges E∈ℋ'_α with Y=E∖{ c(E)}.
Define the function β on [n] 3 as follows:
Given a 3-set Y, let β(Y) be the number of edges E∈ℋ'_β with b(E)∈ Y⊂ E.
α(Y)+1/3β(Y)≤ t-1, for all (r-1)-subsets Y of [n].
Let E_1∈ℋ_i_1, …, E_α∈ℋ_i_α be the α(Y) distinct edges with
E∈_α and Y=E∖{ c(E)} and
let E_α+1∈ℋ_i_α+1, …, E_α+β∈ℋ_i_α+β be the β(Y) distinct edges with E∈_β and b(E)∈ Y ⊂ E.
If i_j=i_j' for some j≠ j',
then the intersection structure of ℋ_i_j would contain the 3-set Y, a contradiction.
Thus i_1, i_2,… are all distinct. By relabelling we may suppose that E_i∈ℋ_i.
Suppose to the contrary that α(Y)+1/3β(Y)> t-1, so α + ⌈β/3 ⌉≥ t.
Since |Y|=3, one can find an element y_0∈ Y such that b(E_j)=y_0 for at least ⌈β/3 ⌉ of the indices j.
So we may suppose that there are t distinct E_i∈ℋ_i
such that the elements c(E_1), …, c(E_α) and d(E_j):= E_j∖ Y for α < j≤ t are all distinct
and E_i=Y∪{ c(E_i)}, b(E_j)=y_0.
We will find the t+1 disjoint sets A_0, B_1,B_2,…,B_t contradicting Lemma <ref> using induction as follows.
Let A_0:= Y∖{ y_0}, |A_0|=2, and let D_0:= ∪ E_i, |D_0|=t+3.
We define the sets E_i', D_i, B_i step by step as follows. We will have D_i:=D_0∪_j ≤ i E_j' and |D_i|= t+3+i.
For i=1, 2, …, t, consider the family ℋ_i and its member E_i in it.
By the intersection structure of ℋ_i,
the set A_0∪{c(E_i)} (respectively, the set A_0∪{d(E_i)}) is an (a+1,4· t· h)-kernel in ℋ_i.
One of the 4· t· h petals of the sunflower in ℋ_i with kernel A_0∪{c(E_i)}
should be disjoint from D_i-1; let E_i' be the corresponding set in ℋ_i. Since c(E_i)∈ E_i', the set
B_i:=E_i'-A_0 is a (b,4· t· h)-kernel.
Claim <ref> implies
(t-1)|∂ℋ'| ≥∑_∀ 3-set Y( α(Y)+1/3β(Y))= |ℋ'_α|+ |ℋ'_β|.
This, together with (<ref>), completes the proof of Theorem <ref> in this case.
§.§ The case (a,b)=(r-1,1)
Call ℋ_i (as above) of type α if 𝒥(ℋ_i) has r-1 (r-1)-subsets.
Each of these edges E∈ℋ_i has an element c(E)∈ E such that
each proper subset of E containing c(E) is a kernel of a thr-star in ℋ_i.
The union of these ℋ_i families is ℋ'_α.
Call ℋ_j (and its edges) type β if 𝒥(ℋ_j) has no (r-1)-subset.
Note that each element y of an edge E∈ℋ_j
is a kernel of a thr-star in ℋ_j.
The union of these ℋ_j families is ℋ'_β.
The definition of α on [n] r-1 is the same as in the previous subsections,
the definition of β on [n] r-1 is even simpler:
β(Y) is the number of edges E∈ℋ'_β with Y⊂ E.
If α(Y)+β(Y)> t-1, then taking A_0:=Y and B_i:=E_i∖ Y one can see that each B_i is a kernel of a large star,
contradicting Lemma <ref>.
Finally, we complete the proof of this case as follows:
(t-1)|∂ℋ'| ≥∑_∀ (r-1)-set Y( α(Y)+β(Y))= |ℋ'_α|+ r|ℋ'_β|≥ |ℋ'|.
§ HYPERGRAPHS WITHOUT A BUSH _T(1,R-1)
Call an r-graph t-normal if it has no (r-1)-tuples of vertices whose codegree is positive but less than t.
For every edge Y in an r-graph ℋ and any x∈ Y, let Q_ℋ(Y,x)={y∈ V(ℋ)-x: Y-x+y∈ℋ}.
In a
t-normal r-graph ℋ, |Q_ℋ(Y,x)|≥ t-1
for every edge
Y∈ℋ and vertex x∈ Y.
We will prove this case by induction on the number of edges. For |ℋ|<α(r,t) the claim is trivial.
Suppose ℋ is a counter-example with the fewest edges.
If our ℋ is not t-normal, then choose an (r-1)-tuple Y of vertices whose codegree is positive but less than t and
let ℋ' be obtained from ℋ by deleting the
edges containing Y. Then |ℋ'|-(t-1) |∂ℋ'|≤ |ℋ|-(t-1) |∂ℋ|, so ℋ' satisfies (<ref>) and is
_t(1,r-1)-free. This contradicts the minimality of ℋ. Thus ℋ is t-normal.
Let q^* be a huge number in terms of r and t, but small with respect to α(r,t).
By (<ref>),
there is a subfamily ℋ^* of ℋ satisfying Lemma <ref> for q=q^* with
|ℋ^*|>n^r-2. Let 𝒥 be the corresponding intersection structure.
By Lemmas <ref> and <ref>, 𝒥 contains a family isomorphic to 𝒥^(r-1), or
to 𝒥^(0). In both cases, 𝒥 contains a singleton. So, ℋ contains an element u and
disjoint (r-1)-tuples U_1,…, U_q^* such that Y_i=U_i+u is an edge in ℋ^* for each 1≤ i≤ q^*.
Since H is t-normal, for each 1≤ i≤ q^* we can choose a (t-1)-element subset
Q'_ℋ(Y_i,u) of Q_ℋ(Y_i,u). Construct the auxiliary bigraph R with parts
{U_i: 1≤ i≤ q^*} and V(ℋ) where U_iv∈ E(R) iff v∈ Q'_ℋ(Y_i,u).
Case 1. R has a matching of size (r+1)t, say M={U_1v_1,…,U_(r+1)tv_(r+1)t}.
Since all U_i here are disjoint and all v_i are distinct, we can greedily find t disjoint sets U_i∪{v_i}:
start from M and one by one take a set U_i∪{v_i} and delete from the list all U_i'∪{v_i'} with i'≠ i such that
v_i∈ U_i' or v_i'∈ U_i'. If our disjoint sets are U_1∪{v_1},…,U_t∪{v_t}, then
we have _t(1,r-1) with the set of edges
{U_1∪{u},…,U_t∪{u},U_1∪{v_1},…,U_t∪{v_t}}
Case 2. R has no matching of size (r+1)t. Then it has a vertex cover of size less than (r+1)t.
So there is a set V⊂ V(ℋ) with |V|<(r+1)t containing Q'_ℋ(Y_i,u) for at least
q^*-(r+1)t sets Y_i.
Then some (t-1)-element set T=T(u)={z_1,…,z_t-1} serves as Q'_ℋ(Y_i,u) for
at least (q^*-(r+1)t)(r+1)t t-1^-1 sets Y_i. We choose q^* so that this number is at least t-1 and suppose
T(u) serves
for Y_1,…,Y_t-1. If for some other Y_i, Y_i∩ T=∅ and
Q'_ℋ(Y_i,u)≠ T, then we take an element z∈ Q'_ℋ(Y_i,u)- T and construct a _t(1,r-1) with the set of edges
{Y_1∪{u},…,Y_t-1∪{u},Y_i∪{u}}∪{Y_1∪{z_1},…,Y_t-1∪{z_t-1},Y_i∪{z}}.
So, T=Q'_ℋ(Y_i,u) for all 1≤ i≤ q^* such that Y_i∩ T=∅. Hence if for at least one other edge Y in ℋ containing
u and disjoint from T, Q_ℋ(Y,u)≠ T, then we find a _t(1,r-1) similarly to (<ref>) because q^* is large. It follows that denoting T'(u)=T(u)∪{u},
the set of edges of ℋ containing u and disjoint from T'(u)-u is contained in the set of edges of ℋ containing z_j and disjoint from T'(u)-z_j for each 1≤ j≤ t-1.
Case 2.1. 𝒥 contains a family isomorphic to 𝒥^(r-1). We may assume that 𝒥 contains all proper subsets of [r] containing 1 and that {x_1,…,x_r}∈ℋ where x_i∈ X_i for i∈ [r]. Since
{1,2}∈𝒥, there are 2t disjoint sets U_h={x_3,h,x_4,h,…,x_r,h} such that
U'_h=U_h∪{x_1,x_2}∈ℋ^* for all 1≤ h≤ 2t. At most t-1 of them intersect T. So we may assume
U'_1,…,U'_t are disjoint from T. Since [r]-{2}∈𝒥, for every 1≤ h≤ t, there are 2t elements
x_2,h,g∈ X_2 such that U_h,g=U_h∪{x_1,x_2,h,g}∈ℋ^* for all 1≤ h≤ t and
1≤ g≤ 2t. We may rename the elements x_2,h,g so that for all 1≤ h≤ t and
1≤ g≤ t, x_2,h,g∉ T∪{x_2}. After that we rename them again so that for all 1≤ h≤ t
the elements x_2,h,1 are distinct. Now, letting z_t=x_1, by (<ref>) we have a _t(1,r-1) with the set of edges {U_h∪{z_h,x_2}: h∈ [t]}∪{U_h∪{z_h,x_2,h,1}: h∈ [t]}.
Case 2.2. For each subfamily ℋ^* of ℋ satisfying Lemma <ref> for q=q^* with
|ℋ^*|>n^r-2, the intersection structure
𝒥=𝒥(ℋ^*) contains a family isomorphic to 𝒥^(0).
Do the following procedure. Let ℋ_1=(ℋ)^*. For i=1,2,…, if
|ℋ-⋃_j=1^i ℋ_j|≤ C· n^r-2, then stop, let m=i
and ℋ'=⋃_j=1^i ℋ_j and ℋ_0=ℋ-ℋ'; otherwise,
let ℋ_i+1=(ℋ-⋃_j=1^i ℋ_j)^*.
By the case assumption, 𝒥(ℋ_i)
contains a family isomorphic to 𝒥^(0) for each i≥ 1.
By the structure of 𝒥^(0), each v∈ V(ℋ') can serve as vertex u in the argument at the beginning of Case 2. So, let u∈ V(ℋ') and T'=T'(u) be defined as above.
By (<ref>), if for any z_j∈ T(u) and some edge Y in ℋ containing
z_j and disjoint from T'(u)-z_j, Q_ℋ(Y,z_j)≠ T'(u)-z_j, then we again find a _t(1,r-1) similarly to (<ref>) because q^* is large. It follows that
there are disjoint t-element sets T'_1,… of the form T'(u) covering V(ℋ') such that for
any two u,u'∈ T'_i
the set of edges of ℋ containing u and disjoint from T'_i-u equals the set of edges of ℋ containing u' and disjoint from T'_i-u' for every T'_i.
Since 𝒥^(0) does not contain sets of size r-1, each (r-1)-element set Y∈∂ℋ_i
is only in one set in ℋ_i, thus |ℋ_i|≤ |∂ℋ_i|/r. Moreover, since
|ℋ_0|<α(r,t)n^r-2, by (<ref>) some (r-1)-tuple Y_0 belongs to at least r(t-1) families
ℋ_i, say to ℋ_1,…,ℋ_r(t-1).
For i∈ [r(t-1)], let y_i be the vertex such that Y_0∪{y_i}∈ℋ_i. Let Y'={y_1,…,y_r(t-1)}.
For every y∈ Y_0, |T'(y)∩ Y'|≤ t-1; thus there is y_j∈ Y'∖⋃_y∈ Y_0 T'(y), say j=1.
Let T(y_1)={z_1,…,z_t-1}. Since |Y'|>t, we may assume y_2∉ T'(y_1).
Since 𝒥(ℋ_1) contains all singletons, there are
disjoint (r-1)-tuples U_1,…, U_q^* such that U_i+y_1 is an edge in ℋ_1 for each 1≤ i≤ q^*.
Since q^* is large, we can choose among them t-1 sets, say U_1,…, U_t-1 disjoint from
Y_0∪ Y'∪ T(y_1). Then we have a _t(1,r-1) with the set of edges
{U_h∪{y_1}: h∈ [t-1]}∪{U_h∪{z_h}: h∈ [t-1]}∪{Y_0∪{y_1},Y_0∪{y_2}},
a contradiction.
§ STABILITY: PROOF OF THEOREM <REF>
Recall the Lovász form of the Kruskal-Katona Theorem:
If x is a positive real, 1≤ k<n, ℱ⊆[n] k and |ℱ|= x k, then
|∂ℱ|≥x k-1.
Choose n_0 so that
n_0-1 r-1>3(C+C_0)n_0^r-2 and n_0^r-1/(r-1)!<2n_0-1 r-1.
Let n>n_0 and ℋ be an n-vertex r-uniform family not containing
a bush _t,h(a,b) with |ℋ|> (t-1)n-1r-1-C_0 n^r-2.
Define C, m, ℋ_0,…,ℋ_m and ℋ' as in Subsection <ref>. By (<ref>),
|ℋ'| ≥ (t-1) n-1r-1-(C+C_0)n^r-2.
As in Subsection 4.2, for each 1≤ i≤ m, the intersection structure 𝒥(ℋ_i) contains 𝒥^(r-1). So, again
for each hyperedge E∈ℋ_i⊂ℋ' (1≤ i≤ m) there is an element c(E)∈ E such that
each proper subset of E containing c(E) is a kernel of a thr-star in ℋ_i.
For every Y∈[n] r-2, let (Y) be the set of vertices v∈ [n]-Y such that there is an edge E∈
containing Y and such that v=c(E).
Suppose Y∈[n] r-2, v,v'∈(Y), v≠ v' and edges E,E'∈ are such that v=c(E), v'=c(E') and
Y⊆ E∩ E'.
If
E∈_i and E'∈_i', then i≠ i'.
Suppose v≠ v', but i=i'.
We may assume that the partition of [n] corresponding to _i is (X_1,…,X_r) and
v,v'∈ X_r. By symmetry, we also may assume that Y⊂ X_1∪…∪ X_r-2.
Since Y⊆ E∩ E', [r-2]∈_i or [r-1]∈_i. Since 𝒥^(r-1)⊂_i and 2≤ a,b≤ r-2,
[r]-[a]∈_i and [a]∪{r}∈_i. Since _i is intersecting, [a]=([a]∪{r})∩ [r-1]=([a]∪{r})∩ [r-1]∈_i.
Together with [r]-[a]∈_i, this contradicts Lemma <ref>.
Similarly to Claim <ref>, the following holds.
For every (r-2)-subset Y of [n],
|(Y)|≤ t-1.
Suppose to the contrary that there are t distinct v_1,…,v_t∈ [n] and
distinct E_1,…,E_t∈ such that Y⊆ E_1∩…∩ E_t and v_i=c(E_i) for i=1,…,t.
Let E_1∈ℋ_i_1, …, E_t∈ℋ_i_t.
By Claim <ref>, i_1,…,i_t are all distinct. By relabelling we may suppose that E_i∈ℋ_i.
We will find the t+1 disjoint sets A_0, B_1,B_2,…,B_t contradicting Lemma <ref> using induction as follows.
Fix any subset A_0 of Y with |A_0|=a and let D_0:= ∪ E_i. Then |D_0|≤ 2t+r-2.
We define the sets E_i', D_i, B_i step by step as follows. We will have D_i:=D_0∪⋃_j ≤ i E_j' and |D_i|≤ 2t+r-2+i(r-1-a).
For i=1, 2, …, t consider the family ℋ_i and its member E_i in it.
By the intersection structure of ℋ_i,
the set A_0∪{c(E_i)} is an (a+1,thr)-kernel in ℋ_i.
One of the thr petals of the sunflower in ℋ_i with kernel A_0∪{c(E_i)}
should be disjoint from D_i-1; let E_i' be the corresponding set in ℋ_i. Since c(E_i)∈ E_i', and homogeneity gives c(E_i')=c(E_i), the set
B_i:=E_i'-A_0 is a (b,thr)-kernel.
For v∈ [n], let (v)={E∈: v=c(E)} and (v)={E-v: E∈}. Let =⋃_v∈ [n](v).
By Claim <ref>,
|(v)|=|(v)| for each v∈ [n]; in particular, by (<ref>), ||=||≥ (t-1) n^r-1-(C+C_0)n^r-2.
In these terms, Claim <ref> can be restated as follows:
For every j∈ [n], there is a real x_j such that |(j)|=x_j r-1. Reorder the elements of [n] so that x_1≥ x_2≥…≥ x_n. Let b=3 (r-1)!(C+C_0).
For every 1≤ i≤ t-1,
n-x_i≤ b.
Suppose the claim fails and 1≤ i≤ t-1 is the smallest index for which (<ref>) does not hold.
Then n-x_j>b for all i≤ j≤ n.
Since for each real
0<x<n,
x r-1/n r-1= (1-n-x/n-r+2)x r-2/n r-2,
Theorem <ref> yields that for every i≤ j≤ n,
|∂(j)|≥x_j r-2=n r-2/n r-1x_j r-1/(1-n-x_j/n-r+2)≥n r-2/n r-1|(j)|(1+b/n-r+2),
and for every 1≤ j≤ i-1,
|∂(j)|≥x_j r-2≥n r-2/n r-1x_j r-1≥n r-2/n r-1|(j)|.
Hence
∑_j=i^n |∂(j)|≥n r-2(1+b/n-r+2)/n r-1∑_j=i^n|(j)| ∑_j=1^i-1 |∂(j)|≥n r-2/n r-1∑_j=1^i-1|(j)|.
By (<ref>) and (<ref>),
(t-1)|∂|≥∑_j=1^n |∂(j)|=∑_j=1^i-1 |∂(j)|+∑_j=i^n |∂(j)|≥n r-2/n r-1(∑_j=1^n |(j)|+ b/n-r+2∑_j=i^n |(j)|
).
Since |∂|≤n r-2 and by (<ref>), ∑_j=1^n |(j)|=||=||≥ (t-1) n-1r-1-(C+C_0)n^r-2, this implies
(t-1)n r-1≥ (t-1) n-1r-1-(C+C_0)n^r-2+b/n-r+2∑_j=i^n |(j)|.
Again by (<ref>), ∑_j=i^n |(j)|≥ ||-(i-1) n-1r-1≥ (t-i) n-1r-1-(C+C_0)n^r-2. Since t-i≥ 1, plugging this into (<ref>) and rearranging we get
(C+C_0)n^r-2≥b/n-r+2( n-1r-1-(C+C_0)n^r-2).
Using the definition of b and (<ref>), inequality (<ref>) implies
(C+C_0)n^r-2≥3 (r-1)!(C+C_0)/n-r+2( n-1r-1-1/3n-1r-1),
which in turn again using (<ref>) yields
n^r-2(n-r+2)≥ 2(r-1)! n-1r-1≥ n^r-1,
a contradiction.
Now we are ready to finish the proof of the theorem. By Claim <ref>, for every 1≤ i≤ t-1,
|(i)|=x_i r-1≥n-b r-1=n r-1∏_j=0^r-2n-b-j/n-j≥n r-1(1-(r-1)b/n-r+2)
=n r-1-bn r-2≥n r-1-b/(r-1)!n^r-2.
This proves Theorem <ref> for C_1=b/(r-1)!=3(C+C_0).
99
DEF
M. Deza, P. Erdős and P. Frankl, Intersection properties of systems of finite
sets, Proc. London Math. Soc. (3) 36 (1978), 369–384.
EG P. Erdős, and T. Gallai,
On maximal paths and circuits of graphs. Acta Math. Acad. Sci. Hungar. 10 (1959), 337–356.
Frankl1977 P. Frankl,
On families of finite sets no two of which intersect in a singleton.
Bull. Austral. Math. Soc. 17 (1977),
125–134.
FF85 P. Frankl, and Z. Füredi,
Forbidding just one intersection.
J. Combin. Th., Ser. A 39 (1985), 160–176.
FF60 P. Frankl, and Z. Füredi, Exact solution of some Turán-type problems.
J. Combin. Th., Ser. A 45 (1987), 226–262.
Furedi1 Z. Füredi, On finite set-systems whose every intersection is a kernel of a star, Discrete Math. 47 (1983), 129–132.
FJKMV5 Z. Füredi, T. Jiang, A. Kostochka, D. Mubayi, and J. Verstraëte,
Extremal problems for hypergraph blowups of trees. To appear in SIDMA, 17 pp.
FurOzk
Z. Füredi, and L. Özkahya,
Unavoidable subhypergraphs: 𝐚-clusters.
J. Combin. Th., Ser. A 118 (2011), 2246–2256.
Tree
A. McLennan,
The Erdős-Sós conjecture for trees of diameter four.
J. Graph Theory 49 (2005), no. 4, 291–301.
|
http://arxiv.org/abs/2307.04053v1 | 20230708220300 | How is Fatherhood Framed Online in Singapore? | [
"Tran Hien Van",
"Abhay Goyal",
"Muhammad Siddique",
"Lam Yin Cheung",
"Nimay Parekh",
"Jonathan Y Huang",
"Keri McCrickerd",
"Edson C Tandoc Jr.",
"Gerard Chung",
"Navin Kumar"
] | cs.CL | [
"cs.CL"
] |
How is Fatherhood Framed Online in Singapore?
============================================================================================================================================================================================
The proliferation of discussion about fatherhood in Singapore attests to its significance, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. We analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in Singapore across a range of online platforms (news outlets, parenting forums, Twitter). We used NLP techniques to understand these differences. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. A strength of our work is how the different techniques we have applied validate each other.
Keywords: fatherhood, singapore, social media
§ INTRODUCTION
Fatherhood is now an unprecedentedly visible cultural phenomenon in Singapore. This increased attention is related to the inaugural nationwide fatherhood movement, Dads for Life, the continual development of parenting magazines and the recent emergence of fatherhood blogs within the Singapore internet sphere. In recent times, various fatherhood-related initiatives in Singapore have collaborated with government agencies, business corporations, and community organizations on initiatives to create awareness of the importance of the father’s role, develop commitment to good fathering, and encourage fathers to spend time with their children. In Singapore, the introduction of paternity leave and encouragement for fathers to play a bigger role in childcare and child-raising suggest that the government is sympathetic to the pursuit of gender equality. However, there is a gap between the perception of the importance of fathers and the actual involvement of fathers in their children’s lives. In addition, the role of fathers continues to be recognized primarily as that of a breadwinner. Yet fathers want to do more and experience parenthood as a very fulfilling experience, to which they are highly committed <cit.>. The proliferation of discussion about fatherhood in Singapore attests to its significance as a commercial, ideological, and cultural subject, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. While there has been research around how fatherhood is framed in the Singapore context, there is limited analysis of how fatherhood is framed on social media, news outlets, or online forums. Such platforms are where opinions or news on fatherhood are forwarded, people get parenting information, or get quick answers to fatherhood questions. Studying how fatherhood is framed in the online Singaporean context is central to crafting progressive and effective policy around parenting in Singapore, as well as managing the media landscape. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. Policies developed in Singapore around fatherhood may then be implemented in nearby East Asian countries, which have similarly low birth rates, to mitigate a rapidly aging society and a shrinking taxpayer base. In this paper, we demonstrate how fatherhood in Singapore is framed on multiple online platforms (news outlets, parenting forums, Twitter). Our main research question (RQ) is as follows: How is fatherhood in Singapore framed on various online platforms? Our findings suggested that while fatherhood was framed in a multiplicity of forms online, it did not seem that fathers were core to the family.
§ RELATED WORK
Fatherhood Framing Online
Work on fatherhood in Singapore is limited. Recent work proposed the concept of Confucian masculinity to explain how the depiction of active fatherhood reinforced the ubiquitous normal family that upholds patriarchal ideology and perpetuates patriarchal power, obscuring the contradictions of class, race, and sexuality that exist in Singapore <cit.>. Other work examined the fatherhood discourses in new dad ads; feature articles from Today’s Parents, a parenting magazine; articles from Life Dads, a government electronic newsletter on fatherhood; and blog entries from three fatherhood blogs <cit.>. The study employed critical discourse analysis, and proposed a Hegemonic Fatherhood Discourse Schema to postulate that the new father/man and traditional father/man ideology is the hegemonic fatherhood in Singapore, ultimately serving the interests of the Singapore state. While past work detailed framing around fatherhood in Singapore, previous research did not compare framing across online platforms, or provide an overview of fatherhood framing to develop policy or informational tools. While there was limited fatherhood research in the Singapore context, there was relatively more research on fatherhood framing online in other contexts. For example, recent work <cit.> used discussion threads from two Web-based parenting communities, r/Daddit and r/PreDaddit from Reddit. Results demonstrated that men used web-based communities to share the joys and challenges of the fatherhood experience.
§ DATA AND METHOD
Data We first selected three content experts who had published at least ten peer-reviewed articles in the last three years around fatherhood. We ensured the content experts were either from Singapore or conducted research on fatherhood/parenthood in Singapore. Given the wide disciplinary focus of fatherhood research, we sought to select a range of experts across disciplines. We recruited one expert from each of these disciplines: Public policy, social work, computational social science. Selecting experts from a range of fields allows results to be contextualized to fields where fatherhood research is concentrated, allowing for findings to be drawn on by stakeholders in public policy, social work, and computational social science. The content experts separately developed lists of online platforms most relevant to fatherhood in Singapore. Each expert developed a list of ten platforms independently, and we selected only platforms common to all three experts' lists. For each online platform, experts also provided up to 10 examples, where applicable, of websites or forums, and we selected examples common to all experts' lists. The final list of platforms is as follows: Singapore news outlets (Straits Times, Channel NewsAsia, TODAYonline), parenting forums (singaporemotherhood.com, singaporeparents.com.sg/forum, forums.hardwarezone.com.sg/threads/welcome-to-hwzs-parenting-kids-early-learning-forum.5684416, mummysg.com/forums), Twitter (filtering only posts related to Singapore). Examples of platforms not selected: Facebook, Instagram, Reddit, LinkedIn. We were not able to collect Facebook and Instagram data as there was limited support for CrowdTangle, the main mode of Facebook/Instagram data collection. Similarly, the pushshift.io Reddit API had limited support and the Reddit data collected was incomplete. LinkedIn had limited fatherhood posts and posts were mostly centered on non-family content. To capture fatherhood-related text on these platforms, we used queries based on a related systematic review, e.g., father* OR dad* OR patern* OR paternal OR paternity OR stepdad* OR stepfather* OR step-dad* OR Step-father* OR papa. We used only English-language keywords as most of the discussion in the Singapore internet environment is in English. English is also the major language of communication in Singapore. For forums, we used automated scraping techniques (Beautiful Soup) to obtain forum posts from 2010 to 2023, with the same set of keywords. We ran a search querying the keywords in the titles of forum posts and in replies to the posts. We collected all posts that contained these keywords within the forum posts and replies. Regarding Twitter, we used the Twitter API and the indicated keywords to collect tweets from 2011 to 2023. Finally, for news articles, we used Nexis to obtain news archives from 1992 to 2023. To prepare the data for analysis, English stop words such as the, a, an were removed, along with abbreviations, and terms were stemmed using Porter's stemming algorithm. Stemming converts words with the same stem or root (e.g., innovative and innovator) to a single word type (e.g., innovate). We organized data into three streams for analysis: Twitter (tweets), news (news articles), forums (forum posts).
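A minimal Python sketch of this keyword filtering and preprocessing (the exact regular expression, abbreviation handling, and tokenization used in the study may differ; NLTK is assumed for stop words and Porter stemming):

import re
from nltk.corpus import stopwords          # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer

# Keyword query, with wildcards rendered as regex prefixes.
PATTERN = re.compile(
    r"\b(father\w*|dad\w*|patern\w*|stepdad\w*|stepfather\w*|"
    r"step-dad\w*|step-father\w*|papa)\b", re.IGNORECASE)

def is_relevant(text):
    return bool(PATTERN.search(text))

STOP = set(stopwords.words("english"))
STEM = PorterStemmer().stem

def preprocess(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return [STEM(w) for w in tokens if w not in STOP]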
Sentiment
Sentiment analysis can aid us in comprehending how sentiment around fatherhood is expressed in the online arena. As an example, forums may be more likely to have lower sentiment compared to news. DistilBERT was used for sentiment analysis, applied separately to data from each platform. The model assigns a sentiment score to each article or post. Sentiment is on a -1 to 1 scale, where values <0 are negative sentiment, >0 are positive sentiment, and values close to 0 are neutral. To stay within the maximum input size of the model, the text length (title + body text) was clipped to 512 tokens.
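A minimal sketch of this scoring step in Python (the specific DistilBERT checkpoint and the mapping of class probabilities onto the -1 to 1 scale are our own assumptions, as the text does not name them):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"   # assumed checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def sentiment_score(title, body):
    # Clip title + body to 512 tokens and map P(positive) - P(negative) to [-1, 1].
    inputs = tok(title + " " + body, truncation=True, max_length=512,
                 return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    return (probs[1] - probs[0]).item()   # index 1 = POSITIVE, 0 = NEGATIVE here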
Emotion Recognition
Emotion recognition can help us understand how emotions are expressed across various platforms, indicating differences in how fatherhood is framed in Singapore. For example, forums may be more likely to contain anger compared to news. We used DistilBERT for emotion recognition, applied separately to data from each platform. The model assigns one of six emotions (anger, fear, joy, love, sadness, surprise) to each article or post. To stay within the maximum input size of the model, we clipped the length of the text (title + body text) to 512 tokens.
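A minimal sketch of this step (the particular emotion-tuned DistilBERT checkpoint named below is an assumption; any checkpoint carrying the same six labels would fit):

from transformers import pipeline

# Assumed checkpoint with the six labels listed above.
emotion = pipeline("text-classification",
                   model="bhadresh-savani/distilbert-base-uncased-emotion")

def classify_emotion(title, body):
    # Predicted emotion label for one article or post, clipped to 512 tokens.
    out = emotion(title + " " + body, truncation=True, max_length=512)
    return out[0]["label"]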
We provided an overview of the data in Table <ref>. Two reviewers independently examined 10% of the articles or posts within each dataset to confirm salience with our research question. The reviewers then discussed their findings and highlighted items deemed relevant across both lists. We noted the following relevance proportions: News outlets (82%), Twitter (90%), Parenting forums (78%).
§ RESULTS
Overview
We first explored sample posts across platforms. News outlets generally mentioned fatherhood in the context of providing demographic data about interviewees, with excerpts such as So the 40-year-old eye specialist and father of three had to wrap up his work at the hospital quickly, or when interviewees were referring to their fathers with no specific reference to fatherhood e.g., Mr Lee, whose father founded the clan association, rents out its third floor to a small media firm. Broadly, news outlets did not seem to focus on the experience of fatherhood, with the bulk of articles mentioning fathers as a demographic indicator. Twitter posts focused on people recounting incidents, often humorous or heart-warming, with their fathers e.g., My dad was telling me something serious and he hit his leg against the table and I burst out laughing so he had no choice but to laugh, Dad brought back homemade fresh horfun (noodles) from the temple. It's delicious. Twitter seemed to have a greater focus on fathers playing a core function in the Singapore family unit. Posts from forums were very diverse topically. Several posts were about hiring a helper for a young child: My husband is totally against the idea of employing a helper, as he does not like a stranger living with us; I am a father of a newborn baby girl. I recently engaged a confinement lady by the name of Auntie Judy. Such posts suggest the significant role domestic helpers play in the Singaporean family, and how a portion of a father's role is perhaps to oversee the hiring of the domestic helper. Other posts were about suspected infidelity e.g., So my Wife of 2 years has been cheating on me with another male colleague, perhaps indicative of the strain parenting is related to within some Singaporean families.
We then provided word clouds in Figure <ref> as an overview of the data. Across all datasets, words such as time, work, now were prominent, perhaps indicative of how work and likely limited time are central to fatherhood in Singapore. The most common trigrams for news articles centered on leaders of Singapore, who were father and son: Lee Kuan Yew and Lee Hsien Loong. This may indicate that the mainstream news media discussion around fatherhood had little to do with fathers' role in a family, but simply centered on familial relationships within major news stories. In 1992 - 2003, common trigrams in the news were engineer success story and pressure parent counting. From 2004 - 2019, common trigrams were two baby boy, first new baby, and first time parent. From 2020 - 2022, common trigrams were generation grit family, and grit family love. Broadly, news trigrams may detail how the initial focus was on children bringing pride and wealth to their families, with a transition toward celebrating new births. In more recent years, the news tended to focus on how the family unit could overcome struggles. The most common trigrams in Twitter focused on celebrating fathers through specific events such as Father's Day and birthdays: happy father's day, happy birthday daddy. Such phrases indicated that Twitter may be used to celebrate fathers, but only in relation to pre-defined events, instead of fathers being celebrated for time put toward caregiving, etc. On Twitter, common trigrams in 2011 - 2020 were love u dad, dad love love. From 2021 onwards, popular trigrams were feel fulfilling husband, and last nite daddy. Twitter data demonstrated a shift from declaring love for one's father, to fathers indicating how they were fulfilled in their role. Unlike other datasets, there appears to be a shift towards a more active form of fatherhood in Singapore, where fathers describe pride in their role. Trigrams in forums centered on perceived marital infidelity, such as wife unfaithful husband, and assisted reproductive technologies, such as ivf mommy toben, and cousin egg donor. Forums seemed to be platforms where people sought support around spousal infidelity and assisted reproductive technologies, rather than discuss fathers' role in the family unit. The most common trigrams in forums changed over time, with phrases such as gave birth daughter, and first time dad in 2010 - 2019, but with phrases such as happen file divorce, and judged urged divorcing in 2020. In 2021, common trigrams were conceiving single women, while in 2022, trigrams such as crave physical intimacy, and physicial intimacy normal were popular. Forums, while initially around celebrating birth, may have become places where people sought information around divorce, assisted reproductive technologies, and physical intimacy. Broadly, descriptive data indicated shifting framing around fatherhood, but a limited focus on fathers as core to the Singapore family.
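The trigram summaries reported above can be reproduced with a simple counting step, sketched below; the toy documents are assumptions standing in for the platform/period subsets of the corpus.

from sklearn.feature_extraction.text import CountVectorizer

def top_trigrams(docs, k=10):
    # Count word trigrams across a set of documents and return the k most frequent.
    vec = CountVectorizer(ngram_range=(3, 3), token_pattern=r"(?u)\b\w+\b")
    counts = vec.fit_transform(docs).sum(axis=0).A1
    ranked = sorted(zip(vec.get_feature_names_out(), counts), key=lambda pair: -pair[1])
    return ranked[:k]

tweets_2011_2020 = ["happy fathers day dad", "love u dad love love", "happy birthday daddy love u dad"]
print(top_trigrams(tweets_2011_2020, k=3))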
Sentiment
We presented sentiment analysis results across each platform in Table <ref>. News and Twitter had higher proportions of positive sentiment (53.7% and 57.0% respectively) compared to forums (27.2%). Forums had the highest proportion of negative sentiment (65.9%), compared to news and Twitter (43.8% and 33.8% respectively). We then presented sentiment analysis results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the greatest rise in positive sentiment post-2009. The nationwide fatherhood movement, Dads for Life, started in 2009, may explain the increase in positive sentiment. Examples of news article content with positive sentiment were as follows: A group of prominent figures from various organisations and businesses have banded together to start up the Fathers Action Network. The network aims to kick-start a movement called Dads for Life to get fathers more involved with their families, especially in their childrens' lives. This follows a fatherhood perception survey conducted in April and May this year by a Ministry. Most felt that being a father and raising children is one of the most fulfilling experiences a man can have.; Work is work and family is family. Our ultimate goal is still our family. Work is just a means to get the money so we should be very clear about it. And that is the sort of spirit that the Dads for Life movement wants to inspire. After 2017, positive sentiment declined over time, and was overtaken by negative sentiment. Forums had broadly negative sentiment 2015 onward, reaching a peak in 2017, followed by a steady decline. Twitter exhibited mostly positive sentiment 2013 onward with a steady decline after. We suggest that the high proportion of positive sentiment in the news may be related to governmental initiatives and the high proportion of negative sentiment in forums may be related to a more frank discussion of the stresses of parenting.
Emotion Recognition
We presented emotion recognition results across each platform in Table <ref>. News had the highest proportion of joyous (61.3%) and loving (34.2%) posts, perhaps reflecting governmental initiatives around fatherhood. While Twitter and forums had similar levels of joyous posts (56.6% and 44.2% respectively), they were still not as high as news. Similarly, loving posts on Twitter and forums (2.4% and 4.1% respectively) were far lower than news outlets. We suggest that the emotion in the news reflects pro-fatherhood governmental initiatives, but these do not always filter successfully to other media. We then presented emotion recognition results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the steepest rise post-2009. Dads for Life, started in 2009, may explain the uptick in news articles, especially around joy. Examples of news article content that were coded as joy: It's a happy Father's Day for SAFRA, as it is set to receive funds from the "Dads for Life" movement to pump up father-friendly activities for its members over the next two years.; He will be running alongside his daughter in the Dads For Life 800m Father and Child Challenge, a new category in the annual SAFRA Singapore Bay Run and Army Half-Marathon. Mr Shariff, who was born without part of his left leg, said: I signed us up because I want to show her how running can make her happy. Both Twitter and forum posts saw a sudden spike post-2013 onward, mostly around joy. We suggest that the shift in emotion may be due to a delayed reaction to Dads for Life. Broadly, we forward that the 2009 Dads for Life movement and other similar policies may have catalyzed emotional reactions around fatherhood in the Singapore online arena. However, the rises in emotion were not sustained and seemed to decline by 2023, perhaps indicative that new policy levers may need to be rolled out.
§ DISCUSSION
Our RQ was to explore how fatherhood in Singapore is framed on various online platforms. A strength of our work is how the different techniques we applied validate each other as well as reveal differences across platforms. While fatherhood was framed in a range of ways in the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. Results also indicated that governmental initiatives may have some effect on altering the framing of fatherhood, but the effect does not appear to be lasting. The concordance in our results suggests the veracity of our findings and we hope that the results can add to research and policy around fatherhood in Singapore. Our evidence adds to previous research, where we provided data on how governmental initiatives may initially buttress framing around fatherhood, but need to be sustained to provide broad and lasting support for fathers. Key to how fatherhood is framed in Singapore is the inclusion of fathers' viewpoints when writing news articles on fatherhood. Where possible, fathers themselves should be consulted on articles about fatherhood. For example, a panel staffed by fathers can comment on fatherhood-related online news articles, providing suggestions on how articles can more accurately represent fathers' concerns <cit.>. Our findings relied on the validity of data collected with our search terms. We used a range of established techniques to search for all articles/posts relevant to fatherhood, and our data contained text aligned with how fatherhood is framed. We were thus confident in the comprehensiveness of our data. We only used English-language text but will include other languages in future work. Given the token limits for the emotion recognition technique, we were not able to use emotion recognition for the entirety of longer news articles. We note that the recall of the search string was not tested. We note that our data may not be generalizable to how fatherhood is framed globally. Our goal was not to identify who was doing the framing around fatherhood (e.g., family members or the government). Future studies will seek to identify which stakeholders were likely involved in the framing.
|
http://arxiv.org/abs/2307.05019v1 | 20230711054510 | $ Λ_c $ semileptonic decays | [
"Sheng-Qi Zhang",
"Cong-Feng Qiao"
] | hep-ph | [
"hep-ph"
] |
Λ_c semileptonic decays
Sheng-Qi Zhang^1 and Cong-Feng Qiao^1[Corresponding author; [email protected].]
^1School of Physical Sciences, University of Chinese Academy of Sciences
YuQuan Road 19A, Beijing 100049, China
===========================================================================================================================================================================================================
Motivated by the recent experimental progress in the Λ_c decay that contains a neutron in the final state, we analyze the semileptonic decay Λ_c → n ℓν_ℓ in the framework of QCD sum rules. The transition form factors are analytically computed using three-point correlation functions and the Cutkosky cutting rules, which can be extrapolated into the physical region by employing the dipole parametrization. The branching fractions of Λ_c → n e^+ ν_e and Λ_c → n μ^+ ν_μ are estimated to be (0.280± 0.031)% and (0.274± 0.030)%, respectively. Furthermore, we calculate as well the relevant decay asymmetry observables sensitive to new physics beyond the standard model. The numerical results of semileptonic decays Λ_c →Λℓν_ℓ are also given and confronted to the latest experimental data.
§ INTRODUCTION
The semileptonic decay of the lightest charmed baryon Λ_c plays an important role in exploring strong and weak interactions in charm sectors. It can help elucidate the role of nonperturbative effects in strong interactions and provide crucial inputs for studying heavier charmed baryons and bottom baryons decay. Additionally, the precise measurement of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements | V_cs | and | V_cd | can also provide the significant test for the standard model and the probable evidence for new physics beyond the standard model <cit.>.
In recent years, there have been extensive measurements of the semileptonic decay modes Λ_c→Λℓν_ℓ <cit.>. The most precise branching fraction results to date are ℬ(Λ_c →Λ e^+ν_e)=(3.56 ± 0.11 ± 0.07 ) % <cit.> and ℬ(Λ_c →Λμ^+ν_μ)=(3.48 ± 0.14 ± 0.10 ) % <cit.>, respectively. Comparing the former result with the Λ_c inclusive semileptonic decay mode ℬ(Λ_c → X e^+ν_e)=(3.95 ± 0.34 ± 0.09 ) % <cit.>, it can be inferred that some exclusive semileptonic decay modes still remain to be measured. Recently, the BESIII collaboration reported evidence of decay modes containing excited states, specifically Λ_c →Λ(1520) e^+ν_e and Λ_c →Λ(1405) e^+ν_e <cit.>. These two decay modes yield relatively small branching fractions of (1.02 ± 0.52 ± 0.11 ) × 10^-3 and (0.42 ± 0.19 ± 0.04 ) × 10^-3, respectively. Moreover, measurements of two five-body semileptonic decay modes Λ_c→Λπ^+ π^- e^+ ν_e and Λ_c→ p K_s^0 π^- e^+ ν_e have also been performed <cit.>, in which the upper limits are set to be ℬ(Λ_c→Λπ^+ π^- e^+ ν_e)<3.9× 10^-4 and ℬ(Λ_c→ p K_s^0 π^- e^+ ν_e)<3.3× 10^-4. In physics, besides the Λ_c semileptonic decay modes that include a Λ(Λ^*) baryon in the final state, the exclusive semileptonic decay modes Λ_c→ n ℓν_ℓ are also permitted by the standard model. However, there is still a lack of experimental data in this regard.
Theoretically, Λ_c→ nℓν_ℓ is dominated by the Cabibbo-suppressed transition c→ dℓν_ℓ. As a result, the decay width is anticipated to be much smaller than that of the Λ_c→Λℓν_ℓ mode, which is dominated by the Cabibbo-favored transition c→ sℓν_ℓ. Experimentally, the main challenge lies in distinguishing neutron signals from neutral noise, which makes direct neutron detection difficult <cit.>. Fortunately, with improvements in detector performance and analysis techniques, the BESIII collaboration has made notable progress in measuring Λ_c decays that involve neutron signals in the final state <cit.>. It is expected that experimental data for the decay mode Λ_c→ nℓν_ℓ will be available in the near future, making it beneficial to explore this process theoretically. Furthermore, the semileptonic decay Λ_c→ n ℓν_ℓ is an exceptional candidate for extracting the magnitude of the CKM matrix element |V_cd|. Currently, the determination of |V_cd | relies primarily on the charm meson semileptonic decay D→πℓν_ℓ <cit.>. Therefore, it is of great importance to investigate the semileptonic decay Λ_c→ n ℓν_ℓ both experimentally and theoretically since such studies are crucial for providing precise verification of | V_cd | in the charm baryon sector.
In the past, theoretical investigations of the Λ_c→ n ℓν_ℓ semileptonic decay have been performed in depth with a variety of methods, such as the light-cone sum rules (LCSR) <cit.>, the light front approach (LF) <cit.>, the covariant confined quark model (CCQM) <cit.>, the constituent quark model (CQM) <cit.>, the relativistic quark model (RQM) <cit.>, the SU(3) flavor symmetry <cit.>, the MIT bag model (MBM) <cit.>, and the lattice QCD (LQCD) <cit.>. Additionally, QCD sum rules (QCDSR) have also been widely utilized to deal with baryonic decay modes <cit.>. Rather than a phenomenological model, QCDSR is a QCD-based theoretical framework that systematically incorporates nonperturbative effects at each dimension. To evaluate the form factors in the weak transitions, the three-point correlation functions are constructed with appropriate interpolating currents. The QCDSR will be formally established by equating two representations of the three-point correlation functions, namely the QCD representation and the phenomenological representation, which enables the determination of the transition form factors. In this work, we will apply QCDSR to calculate the form factors of the Λ_c → n ℓν_ℓ semileptonic decay mode, after which the branching fractions as well as some other relevant decay asymmetry observables are also obtained. Besides, the numerical results of the Λ_c →Λℓν_ℓ semileptonic decay are also given and compared with the latest experimental results.
The rest of the paper is structured as follows: in Sec. <ref> we interpret the basic idea of QCDSR for the three-point correlation functions. The numerical results and analysis are presented in Sec. <ref>. The conclusions and discussions are given in the last section.
§ FORMALISM
The Λ_c → n ℓν_ℓ decay is dominated by the Cabibbo-suppressed transition c→ d ℓν_ℓ at quark level. The effective Hamiltonian depicting this transition is written as
ℋ_eff=G_F/√(2) V_cd [ℓ̅γ_μ(1-γ_5) ν_ℓ] [d̅γ^μ(1-γ_5) c],
where G_F denotes the Fermi constant and V_c d is the CKM matrix element. The Feynman diagram of Λ_c → n ℓν_ℓ is shown in Fig. <ref>. The leptonic part of this decay mode can be obtained through electro-weak perturbation theory, while the hadronic part can not be calculated perturbatively due to its involvement in the low-energy aspects of QCD. In general, the weak transition matrix element of hadronic part can be parametrized in terms of transition form factors
⟨Λ_c(q_1)|j_μ|n(q_2)⟩ =u̅_Λ_c(q_1) [f_1(q^2) γ_μ+i f_2(q^2)σ^μνq_ν/M_Λ_c+f_3(q^2) q_μ/M_Λ_c] u_n(q_2)
-u̅_Λ_c(q_1)[g_1(q^2) γ_μ+i g_2(q^2) σ^μνq_ν/M_Λ_c+g_3(q^2) q_μ/M_Λ_c]γ_5 u_n(q_2),
where q_1 and q_2 are the four-vector momentum of initial state Λ_c and final state neutron, respectively. The momentum transfer q is defined as q=q_1-q_2.
To calculate the transition form factors by QCDSR, the three-point correlation functions can be formally constructed as
Π_μ(q_1^2, q_2^2, q^2)=i^2 ∫ d^4 x d^4 y e^i(q_1 x-q_2 y) <0|T{j_Λ_c(x)j_μ(0)j^†_n(y) }|0>.
The weak transition current j_μ is defined as j_μ=c̅γ_μ(1-γ_5) d and the interpolating currents of Λ_c and neutron take the following quark structure <cit.>,
j_Λ_c =ϵ_i j k(u_i^T C γ_5 d_j) c_k,
j_n =ϵ_i j k(u_i^T C γ_5 d_j) d_k,
where the subscripts i, j, and k represent the color indices and C is the charge conjugation matrix.
On the phenomenological side, after inserting a complete set of intermediate hadronic states and exploiting double dispersion relations, the three-point correlation functions in Eq. (<ref>) can be described as,
Π_μ^phe(q_1^2, q_2^2, q^2)= ∑_spins⟨0|j_Λ_c|Λ_c(q_1)⟩⟨Λ_c(q_1)|j_μ|n(q_2)⟩⟨n(q_2)|j_n|0⟩/(q_1^2-M_Λ_c^2)(q_2^2-M_n^2)
+ higher resonances and continuum states,
where M_Λ_c and M_n denote the mass of Λ_c and neutron, respectively. The vacuum-to-baryon transition amplitudes can be parametrized by defining the decay constants,
⟨0|j_Λ_b|Λ_c(q_1)⟩ =λ_Λ_cu_Λ_c(q_1),
⟨0|j_n|n(q_2)⟩ =λ_nu_n(q_2),
where λ_Λ_c and λ_n represent the decay constants of Λ_c and neutron, respectively. By introducing the hadronic transition matrix elements in Eq. (<ref>) and utilizing the spin sum completeness relations, ∑ u_Λ_c(q_1)u̅_Λ_c(q_1)=q_1+M_Λ_c and ∑ u_n(q_2)u̅_n(q_2)=q_2+M_n, we can finally obtain the phenomenological representation of the three-point correlation functions of Eq. (<ref>),
Π_μ^phe(q_1^2, q_2^2, q^2) =λ_n(q_2+M_n)[f_1(q^2) γ_μ+i f_2(q^2)σ^μνq_ν/M_Λ_c+f_3(q^2)q_μ/M_Λ_c]λ_Λ_c(q_1+M_Λ_c)/(q_1^2-M_Λ_c^2)(q_2^2-M_n^2)
-λ_n(q_2+M_n)[g_1(q^2) γ_μ+i g_2(q^2)σ^μνq_ν/M_Λ_c+g_3(q^2)q_μ/M_Λ_c]γ_5λ_Λ_c(q_1+M_Λ_c)/(q_1^2-M_Λ_c^2)(q_2^2-M_n^2).
It should be noted that we assume f_3(q^2) and g_3(q^2) to be negligible in this study as they will contribute to semileptonic decays at 𝒪(m_ℓ^2 ) <cit.>.
On the QCD side, the three-point correlation functions of Eq. (<ref>) can be expressed by operator-product expansion (OPE) and double dispersion relations,
Π_μ^QCD(q_1^2, q_2^2, q^2)=∫_s_1^min^∞d s_1∫_s_2^min^∞d s_2ρ^QCD_μ(s_1,s_2,q^2)/(s_1-q_1^2)(s_2-q_2^2),
where s_1(2)^min is the kinematic limit. ρ^QCD_μ(s_1,s_2,q^2) stands for the spectral density, which can be obtained through the application of Cutkosky cutting rules <cit.>. In this work, contributions up to dimension 6 are considered in ρ^QCD_μ(s_1,s_2,q^2), which can be expressed as,
ρ^QCD_μ(s_1,s_2,q^2) =ρ^pert_μ(s_1,s_2,q^2)+ρ^⟨q̅q⟩_μ(s_1,s_2,q^2)+ρ^⟨ g_s^2G^2 ⟩_μ(s_1,s_2,q^2)
+ρ^⟨ g_s q̅σ· G q ⟩_μ(s_1,s_2,q^2)+ρ^⟨q̅q ⟩^2_μ(s_1,s_2,q^2).
The first term corresponds to the perturbative contribution, while ⟨q̅q ⟩, ⟨ g_s^2G^2 ⟩, ⟨ g_s q̅σ· G q ⟩, and ⟨q̅q ⟩^2 represent condensates that describe the nonperturbative effects. The relevant Feynman diagrams are plotted in Fig. <ref>.
To establish the relation between phenomenological representation and QCD representation, the quark-hadron duality is adopted,
Π_μ^phe(q_1^2, q_2^2, q^2)≃∫_s_1^min^s_1^0d s_1∫_s_2^min^s_2^0d s_2ρ^QCD_μ(s_1,s_2,q^2)/(s_1-q_1^2)(s_2-q_2^2).
Here, s_1^0 and s_2^0 denote the threshold parameters of Λ_c and neutron, respectively. After taking into account the double Borel transform to suppress the higher excited states and continuum states contributions, the analytic expression of f_i(q^2) and g_i(q^2) can be derived,
f_1(t)=g_1(t)= e^M_Λ_c^2/M_B_1^2e^M_n^2/M_B_2^2/λ_Λ_cλ_nM_Λ_c[∫_s_1^min^s_1^0d s_1∫_s_2^min^s_2^0d s_2∫ d ξ3 m_c ξ/64 π^4λ(s_1,s_2,t)^3/2×
(m_c^2(s_1-t-s_2)-t (s_1-t+s_2-2 ξ) )× e^-s_1/M_B_1^2e^-s_2/M_B_2^2
+m_c⟨q̅q⟩^2/6e^-m_c^2/M_B_1^2e^-m_d^2/M_B_2^2],
f_2(t)=g_2(t)= e^M_Λ_c^2/M_B_1^2e^M_n^2/M_B_2^2/λ_Λ_cλ_n∫_s_1^min^s_1^0d s_1∫_s_2^min^s_2^0d s_2∫ d ξ3 ξ/64 π^4λ(s_1,s_2,t)^5/2[
m_c^4 (s_2(s_1+t)+(s_1-t)^2-2 s_2^2)-
m_c^2 (s_1^3-s_1^2 (t+s_2+2 ξ)-s_1 (t^2+2 t (ξ-3 s_2)+s_2 (s_2-4 ξ))
+(t-s_2) (t^2+4 t ξ-s_2 (s_2-2 ξ)) )-
t (-2ξ (-2 s_1^2+s_1 (t+s_2)+(t-s_2)^2)-3 ξ^2 (s_1+t-s_2)
-s_1 s_2 (s_1+t)-s_1 (s_1-t)^2+2 s_1 s_2^2)]× e^-s_1/M_B_1^2e^-s_2/M_B_2^2,
where we define t = q^2 and λ(s_1,s_2,t) = s_1^2+s_2^2+t^2-2 s_1 s_2-2 s_1 t-2 s_2 t. M_B_1^2 and M_B_2^2 represent the Borel parameters which will appear after double Borel transform. The variable ξ is introduced in the integral through the phase space integration by utilizing Cutkosky cutting rules. It can be observed from Eq. (<ref>) and (<ref>) that the quark condensate ⟨q̅q⟩ and the mixed quark-gluon condensate ⟨ g_s q̅σ· G q ⟩ do not contribute to the transition form factors, while we find the contribution from gluon condensate ⟨ g_s^2G^2 ⟩ is negligible and can be ignored. Thus, only the four-quark condensate ⟨q̅q⟩^2 determines the primary nonperturbative contribution to f_1, which is in agreement with previous theoretical studies of heavy to light transitions <cit.>.
§ NUMERICAL RESULTS AND DISCUSSIONS
In our numerical calculation, the following input parameters are adopted <cit.>,
⟨q̅q⟩ = -(0.24 ± 0.01)^3 GeV^3,
s_1^0 = (9.5 ∼ 10.5) GeV^2, s_2^0 = (2.4∼ 3.0) GeV^2
m_c = 1.27 ± 0.02 GeV, m_d = 4.67^+0.48_-0.17 MeV,
λ_Λ_c = 0.0119 GeV^3, λ_n = 0.02 GeV^3,
M_Λ_c = 2.286 GeV, M_n = 0.938 GeV.
Here, the standard value of quark condensate ⟨q̅q ⟩ is taken at the renormalization point μ = 1 GeV. The decay constants and the threshold parameters are determined using the two-point sum rules <cit.>, employing the same interpolating currents of Eq. (<ref>) and Eq. (<ref>).
Moreover, two additional free parameters, namely the Borel parameters M_B_1^2 and M_B_2^2, are introduced in the framework of QCDSR. For simplicity, we adopt the following relation of Borel parameters <cit.>,
M_B_1^2/M_B_2^2=M_Λ_c^2-m_c^2/M_n^2-m_d^2.
In general, two criteria are employed to determine the values of Borel parameters. First, the pole contribution. In order to investigate the contribution of ground-state hadrons, the pole contribution has to dominate the spectrum. Thus, the pole contribution can be selected larger than 40% for the transition form factors, which can be formulated as follows,
R^PC_Λ_c=∫_s_1^min^s_1^0d s_1∫_s_2^min^s_2^0d s_2/∫_s_1^min^∞d s_1∫_s_2^min^s_2^0d s_2,
R^PC_n=∫_s_1^min^s_1^0d s_1∫_s_2^min^s_2^0d s_2/∫_s_1^min^s_1^0d s_1∫_s_2^min^∞d s_2.
These two ratios can be regarded as the pole contribution from the Λ_c chanel and neutron chanel, respectively.
The second criterion is the convergence of OPE, which ensures that the neglected power corrections in the condensate term remain small and the truncated OPE remains effective. In our calculation, only the four-quark condensate ⟨q̅q⟩^2 in Eq. (<ref>) contribute to the expansion of OPE, which means the relative contribution from the condensate ⟨q̅q⟩^2 needs to be less than 30%. Additionally, since the Borel parameters M_B_1^2 and M_B_2^2 are not physical parameters, it is necessary to find an optimal window in which the transition form factors exhibit minimal dependence of M_B_1^2 and M_B_2^2.
Through above preparation, the transition form factors of the semileptonic decay Λ_c→ nℓν_ℓ can be numerically calculated. The dependence of the form factors at the maximum recoil point q^2=0 with the required range of Borel parameter M_B_2^2 is shown in Fig. <ref>. In Fig. <ref>, it can be observed that the variation of s_1^0 has a negligible effect on f_1(0) and f_2(0), whereas the variation of s_2^0 has a more significant impact. For comparison, we show our results and previous theoretical predictions of transition form factors at maximum recoil point q^2=0 in Table <ref>. The errors are mainly determined by the uncertainties of the Borel parameters M_B_1^2 and M_B_2^2 and other input parameters listed in Eq. (<ref>). In Table. <ref>, our results for f_1(0), f_2(0), and g_1(0) are comparable to other predictions, while there is significant variation for g_2(0) obtained from different theoretical methods. In this work, the sign of g_2(0) aligns with the results from LCSR <cit.> and LF approach <cit.>, but differs from those derived by other theoretical methods. Further investigations are needed to resolve this discrepancy. Moreover, it is worth mentioning that the results from LCSR <cit.> are derived using the same interpolating current as in Eq. (<ref>), where the transition form factors at q^2=0 show a high level of consistency with QCDSR.
Considering that the QCDSR method is applicable only in the small q^2 region, and the physical region for q^2 in the Λ_c→ nℓν_ℓ decay extends from m_ℓ^2 to (M_Λ_c-M_n)^2, we perform our calculations within a limited range of q^2∈ [-0.4,0.4] GeV^2 and employ a dipole parametrization to extrapolate the obtained values to the entire physical region. The expression of dipole parametrization is
f_i(q^2)=f_i(0)/(1-q^2 / M_Λ_c^2)(1-a_1 q^2 / M_Λ_c^2+a_2 (q^2/M_Λ_c^2)^2).
Here, a_1 and a_2 are fitting parameters used in dipole parametrization. f_i(0) represents the value of form factors at q^2=0, which is also treated as a fitting parameter here. In order to obtain reliable fitting parameters, the following fitting procedure is adopted. For a given set of values of Borel parameters M_B_2^2 and threshold parameters s_1^0 and s_2^0, the dipole parametrization in Eq. (<ref>) is employed to get a set of fitting parameters. Multiple estimations of the fitting parameters are obtained by varying M_B_2^2, s_1^0, and s_2^0. The central values are determined by averaging the different fitting results, and the errors stem from the uncertainties of the inputs M_B_2^2, s_1^0, and s_2^0. The fitting results are presented in Table. <ref>, where the values of f_i(0) obtained from the fitting are consistent with our directly calculated results given in Table. <ref>. The q^2 dependence of form factors is shown in Fig. <ref>, where the fitted points exhibit a clear dipole behavior.
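As an illustration of this fitting procedure, the short sketch below fits the dipole form above to form-factor values on q^2 ∈ [-0.4, 0.4] GeV^2 with scipy; the input points are placeholders standing in for the QCDSR output, not the values obtained in this work.

import numpy as np
from scipy.optimize import curve_fit

M_LAMBDA_C = 2.286  # GeV

def dipole(q2, f0, a1, a2):
    # f(q^2) = f(0) / [(1 - q^2/M^2)(1 - a1 q^2/M^2 + a2 (q^2/M^2)^2)]
    x = q2 / M_LAMBDA_C**2
    return f0 / ((1.0 - x) * (1.0 - a1 * x + a2 * x**2))

q2_pts = np.linspace(-0.4, 0.4, 9)              # GeV^2, region where the sum rules are evaluated
f1_pts = dipole(q2_pts, 0.59, 1.4, 0.6)         # placeholder values standing in for the QCDSR points
(f0_fit, a1_fit, a2_fit), _ = curve_fit(dipole, q2_pts, f1_pts, p0=[0.5, 1.0, 1.0])
print(f0_fit, a1_fit, a2_fit)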
After deriving the q^2 dependence of transition form factors, the branching fractions and the relevant decay asymmetry observables of semileptonic decay Λ_c→ nℓν_ℓ can be analyzed. To facilitate this analysis, it is convenient to introduce the helicity amplitudes, which provide a more intuitive understanding of the physical pictures and simplify the expressions when discussing the asymmetries of the decay processes. The relations between helicity amplitudes and the form factors are as follows <cit.>:
H_1/2, 0^V= √(Q_-)/√(q^2)(M_+ f_1(q^2)-q^2/M_Λ_c f_2(q^2)), H_1/2, 0^A= √(Q_+)/√(q^2)(M_- g_1(q^2)+q^2/M_Λ_c g_2(q^2)),
H_1/2, 1^V= √(2 Q_-)(-f_1(q^2)+M_+/M_Λ_c f_2(q^2)), H_1/2, 1^A= √(2 Q_+)(-g_1(q^2)-M_-/M_Λ_c g_2(q^2)),
H_1/2, t^V= √(Q_+)/√(q^2)(M_- f_1(q^2)+q^2/M_Λ_c f_3(q^2)), H_1/2, t^A= √(Q_-)/√(q^2)(M_+ g_1(q^2)-q^2/M_Λ_c g_3(q^2)).
Here, H_λ^', λ_W^V(A) is the helicity amplitudes for weak transitions induced by vector and axial-vector currents, where λ^' and λ_W represent the helicity of the neutron and the W boson, respectively. Q_± is defined as Q_± = M_±^2-q^2 and M_±=M_Λ_c± M_n. The negative helicity amplitudes can be derived using following relations,
H_-λ^',-λ_W^V=H_λ^', λ_W^V, H_-λ^',-λ_W^A=-H_λ^', λ_W^A.
Then the total helicity amplitudes can be obtained,
H_λ^', λ_W=H_λ^', λ_W^V-H_λ^', λ_W^A.
With the above helicity amplitudes, the differential distribution of Λ_c→ nℓν_ℓ can be expressed as <cit.>
d Γ(Λ_c→ nℓν_ℓ)/d q^2=G_F^2|V_c d|^2 q^2√(Q_+Q_-)/384 π^3 M_Λ_c^3(1-m_ℓ^2/q^2)^2 H_tot,
where m_ℓ denotes the lepton mass (ℓ=e,μ) and H_tot is defined as
H_tot = (1+m_ℓ^2/2 q^2)(H_1/2,1^2+H_-1/2,-1^2+H_1/2,0^2+H_-1/2,0^2)
+3 m_ℓ^2/2 q^2(H_1/2,t^2+H_-1/2,t^2).
According to the definition of H_tot, the contribution to the differential decay width from f_3(q^2) and g_3(q^2) can be found in the term H_1/2,t^2 and H_-1/2,t^2, which is clearly suppressed by the factor m_ℓ^2. Hence, we neglect the effect of f_3(q^2) and g_3(q^2) in Eq. (<ref>). In order to obtain the numerical results of differential decay width, the following input parameters related to the decay analysis are taken from Particle Data Group <cit.>, where
G_F=1.166× 10^-5 GeV^-2, |V_cd|=0.221±0.004,
m_e=0.511 MeV, m_μ=0.106 GeV, τ_Λ_c=(201.5±2.7)× 10^-15 s.
Here, the mean lifetime of Λ_c, noted as τ_Λ_c, is introduced to calculate the branching fractions. We plot the q^2 dependence of differential decay width for Λ_c → n ℓν_ℓ semileptonic decay in Fig. <ref>(a) and list the numerical results of branching fractions in Table. <ref>. It can be found that the branching fractions for Λ_c → n e^+ ν_e semileptonic decay obtained using QCDSR are very close to the results derived by CQM <cit.>, RQM <cit.>, SU(3) flavor symmetry <cit.>, and MBM <cit.>. As for the Λ_c → n μ^+ ν_μ decay mode, we find our results are consistent with RQM <cit.> and relatively smaller than the Lattice QCD predictions <cit.>.
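As a concrete numerical illustration of this step, the Python sketch below integrates the differential decay width defined above over m_ℓ^2 ≤ q^2 ≤ (M_Λ_c-M_n)^2 for the muon mode and converts the width into a branching fraction using the Λ_c lifetime; the dipole parameters entering the form factors are placeholders rather than the fitted values of this work, and f_3, g_3 are neglected as in the text.

import numpy as np
from scipy.integrate import quad

GF, Vcd = 1.166e-5, 0.221              # GeV^-2, CKM matrix element
M1, M2, m_mu = 2.286, 0.938, 0.106     # GeV: Lambda_c mass, neutron mass, muon mass
tau, hbar = 201.5e-15, 6.582e-25       # Lambda_c lifetime (s), hbar (GeV s)

def dipole(q2, f0, a1, a2):            # dipole parametrization of a form factor
    x = q2 / M1**2
    return f0 / ((1 - x) * (1 - a1 * x + a2 * x**2))

def dGamma(q2, params):                # d Gamma / d q^2 built from the helicity amplitudes above
    f1, f2, g1, g2 = [dipole(q2, *p) for p in params]
    Qp, Qm = (M1 + M2)**2 - q2, (M1 - M2)**2 - q2
    HV0 = np.sqrt(Qm / q2) * ((M1 + M2) * f1 - q2 / M1 * f2)
    HA0 = np.sqrt(Qp / q2) * ((M1 - M2) * g1 + q2 / M1 * g2)
    HV1 = np.sqrt(2 * Qm) * (-f1 + (M1 + M2) / M1 * f2)
    HA1 = np.sqrt(2 * Qp) * (-g1 - (M1 - M2) / M1 * g2)
    HVt = np.sqrt(Qp / q2) * (M1 - M2) * f1      # f_3 neglected
    HAt = np.sqrt(Qm / q2) * (M1 + M2) * g1      # g_3 neglected
    H0, H1, Ht = HV0 - HA0, HV1 - HA1, HVt - HAt
    H0m, H1m, Htm = HV0 + HA0, HV1 + HA1, HVt + HAt
    Htot = (1 + m_mu**2 / (2 * q2)) * (H1**2 + H1m**2 + H0**2 + H0m**2) \
           + 3 * m_mu**2 / (2 * q2) * (Ht**2 + Htm**2)
    return GF**2 * Vcd**2 * q2 * np.sqrt(Qp * Qm) / (384 * np.pi**3 * M1**3) \
           * (1 - m_mu**2 / q2)**2 * Htot

params = [(0.60, 1.4, 0.6)] * 4        # placeholder (f(0), a1, a2) for f1, f2, g1, g2
width, _ = quad(dGamma, m_mu**2, (M1 - M2)**2, args=(params,))
print("BR(Lambda_c -> n mu+ nu) ~", width * tau / hbar)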
In addition, two relevant decay asymmetry observables, e.g., the leptonic forward-backward asymmetry (A_FB) and the asymmetry parameter (α_Λ_c) are defined as <cit.>,
A_F B(q^2) =d Γ/d q^2(forward)-d Γ/d q^2(backward)/d Γ/d q^2
=3/4H_1/2, 1^2-H_-1/2, -1^2-2m_ℓ^2/q^2(H_1/2,0H_1/2,t+H_-1/2,0H_-1/2,t)/H_tot,
α_Λ_c(q^2) =d Γ^λ^'=1/2 / d q^2-d Γ^λ^'=-1/2 / d q^2/d Γ^λ^'=1/2 / d q^2+d Γ^λ^'=-1/2 / d q^2,
where
d Γ^λ^'=1/2/d q^2= 4 m_l^2/3 q^2(H_1/2, 1^2+H_1/2, 0^2+3 H_1/2, t^2)+8/3(H_1/2, 1^2+H_1/2, 0^2),
d Γ^λ^'=-1/2/d q^2= 4 m_l^2/3 q^2(H_-1/2, -1^2+H_-1/2, 0^2+3 H_-1/2, t^2)+8/3(H_-1/2, -1^2+H_-1/2, 0^2).
The q^2 dependence of the decay asymmetry observables are plotted in Fig. <ref>(b) and (c). In Fig. <ref>, we can see the dependence of the differential decay width and the decay asymmetry observables on the lepton mass is consistent near the zero recoil region q^2=(M_Λ_c-M_n)^2. However, near the maximum recoil point q^2=m_ℓ^2, the behavior of the differential decay width and the leptonic forward-backward asymmetry is significantly different. The leptonic forward-backward asymmetry is going to 0 for Λ_c→ n e^+ ν_e decay mode and to -0.5 for Λ_c→ n μ^+ ν_μ decay mode at q^2=m_ℓ^2. This character agrees with Ref. <cit.>. As for the asymmetry parameter, it varies from α_Λ_c=-1 to α_Λ_c=0 as the q^2 increases from zero to q^2_max. Besides, it is almost indistinguishable throughout the entire physical region, which is also consistent with the findings in Ref. <cit.>. We also present the mean values of the relevant decay asymmetry observables in Table <ref>, which are obtained by separately integrating the numerators and denominators in Eq.(<ref>) - Eq.(<ref>) over the physical region of q^2. From Table. <ref>, it can be observed that our results for ⟨ A_FB⟩ and ⟨α_Λ_c⟩ agree with the previous theoretical predictions. Future experiments measuring these observables and comparing them with the predictions of the present study will contribute to our understanding of the relevant decay channels and the internal structures of baryons. In addition, the possibility of new physics effects beyond the standard model can be explored through these observables <cit.>.
By replacing the second d quark in the current (<ref>) with a strange quark and applying the same analysis procedure, we also investigate the semileptonic decay mode Λ_c →Λℓν_ℓ. The relevant input parameters are m_s = 93.4^+8.6_-3.4 MeV <cit.>, λ_Λ = 0.0208 GeV^3 <cit.>, M_Λ=1.116 GeV <cit.> and |V_cs|=0.975± 0.006 <cit.>. As this decay mode has been thoroughly investigated both theoretically and experimentally, we solely provide the numerical results of branching fractions and decay asymmetry observables in Table. <ref> as a validation of the QCDSR method. It is evident that the branching fractions, the forward-backward asymmetry, and the asymmetry parameter obtained through QCDSR for the semileptonic decay Λ_c →Λℓν_ℓ are in excellent agreement with Lattice QCD results <cit.> and experimental data <cit.>.
§ CONCLUSIONS
In this work, we calculate the weak transition form factors of Λ_c → n ℓν_ℓ semileptonic decay in the framework of QCD sum rules. The analytic results of the transition form factors are obtained through the analysis of the three-point correlation functions and the application of Cutkosky cutting rules. The numerical results for the form factors at the maximum recoil region point q^2=0 are computed and compared with other methods. In order to extend the form factors to the full physical region, we utilize a dipole parametrization that adequately captures the q^2 dependence of the form factors, ensuring a smooth extrapolation.
Based on the obtained form factors, we predict the branching fractions to be ℬ(Λ_c→ n e^+ ν_e)= (0.280± 0.031)% and ℬ(Λ_c→ n μ^+ ν_μ)= (0.274± 0.030)%, which will provide important information to determine the value of the CKM matrix element |V_cd|. Moreover, the mean values of the leptonic forward-backward asymmetry ⟨ A_FB⟩ and the asymmetry parameter ⟨α_Λ_c⟩ are also given, which will play a crucial role in probing potential new physics effects beyond the standard model. Although there are no experimental data for the Λ_c→ nℓν_ℓ semileptonic decay to date, considering the recent experimental progress on Λ_c decay modes involving a neutron in the final state, we believe our predicted results can be tested by future experiments at BESIII, Belle II, and LHCb.
Finally, we analyze the semileptonic decay mode Λ_c →Λℓν_ℓ. Our results exhibit a strong agreement with the experimental data, indicating that the QCDSR calculation can be applied to other charmed baryons, such as Ξ_c^+(0). Furthermore, there is still potential for further improvement in this method. The relatively large errors in the branching fractions compared to the experimental data suggest the necessity for further refinement. One possible approach to address this issue is to calculate the contributions from radiation corrections, although this presents a significant challenge in the application of QCD sum rules.
Acknowledgments
We thank K.S. Huang, L. Tang, and B.D. Wan for their meaningful discussions. This work was supported in part by National Natural Science Foundation of China(NSFC) under the Grants 11975236 and 12235008, and University of Chinese Academy of Sciences.
Richman:1995wm
J. D. Richman and P. R. Burchat, Leptonic and semileptonic decays of charm and
bottom hadrons, Rev. Mod. Phys. 67, 893–976 (1995).
BESIII:2015ysy
M. Ablikim et al., Measurement of the absolute branching fraction for
Λ^+_c→Λ e^+ν_e, Phys. Rev. Lett. 115, 221805
(2015).
BESIII:2016ffj
M. Ablikim et al., Measurement of the absolute branching fraction for
Λ_c^+→Λμ^+ν_μ, Phys. Lett. B 767,
42–47 (2017).
BESIII:2022ysa
M. Ablikim et al., Study of the Semileptonic Decay Λ_c^+ →Λ e^+ ν_e, Phys. Rev. Lett. 129, 231803 (2022).
BESIII:2023vfi
M. Ablikim et al., Study of Λ_c^+→Λμ^+ν_μ
and Test of Lepton Flavor Universality with Λ_c^+→Λℓ^+ν_ℓ Decays, arXiv:2306.02624 (2023).
BESIII:2018mug
M. Ablikim et al., Measurement of the absolute branching fraction of the
inclusive semileptonic Λ_c^+ decay, Phys. Rev. Lett. 121,
251801 (2018).
BESIII:2022qaf
M. Ablikim et al., First observation of the semileptonic decay Λ_c^+→ p K^- e^+ν_e, Phys. Rev. D 106, 112010 (2022).
BESIII:2023jem
M. Ablikim et al., Search for the semi-leptonic decays Λ_c→Λπ^+ π^- e^+ ν_e and Λ_c→ p K_s^0 π^- e^+
ν_e, Phys. Lett. B 843, 137993 (2023).
BESIII:2022onh
M. Ablikim et al., Measurement of the absolute branching fraction of the
inclusive decay Λ̅_c^-→n̅ + X, arXiv:2210.09561
(2022).
BESIII:2022xne
M. Ablikim et al., Observations of the Cabibbo-Suppressed decays
Λ_c^+→ nπ^+π^0, nπ^+π^-π^+ and the
Cabibbo-Favored decay Λ_c^+→ nK^-π^+π^+, Chin. Phys.
C 47, 023001 (2023).
BESIII:2016yrc
M. Ablikim et al., Observation of Λ^+_c→ nK^0_Sπ^+, Phys. Rev.
Lett. 118, 112001 (2017).
BESIII:2022bkj
M. Ablikim et al., Observation of the Singly Cabibbo Suppressed Decay
Λ^+_c → n π^+, Phys. Rev. Lett. 128, 142001 (2022).
Belle:2006idb
L. Widhalm et al., Measurement of D_0→π l ν(K l ν) Form
Factors and Absolute Branching Fractions, Phys. Rev. Lett. 97,
061804 (2006).
CLEO:2009svp
D. Besson et al., Improved measurements of D meson semileptonic decays to pi
and K mesons, Phys. Rev. D 80, 032005 (2009).
BaBar:2014xzf
J. P. Lees et al., Measurement of the D^0 →π^- e^+ ν_e differential
decay branching fraction as a function of q^2 and study of form factor
parameterizations, Phys. Rev. D 91, 052022 (2015).
BESIII:2015tql
M. Ablikim et al., Study of Dynamics of D^0 → K^- e^+ ν_e and
D^0→π^- e^+ ν_e Decays, Phys. Rev. D 92, 072012 (2015).
BESIII:2017ylw
M. Ablikim et al., Analysis of D^+→K̅^0e^+ν_e and
D^+→π^0e^+ν_e semileptonic decays, Phys. Rev. D 96, 012002
(2017).
Li:2016qai
C.-F. Li, Y.-L. Liu, K. Liu, C.-Y. Cui, and M.-Q. Huang, Analysis of the
semileptonic decay Λ_c→ne^+ν_e,
J. Phys. G 44, 075006 (2017).
Azizi:2009wn
K. Azizi, M. Bayar, Y. Sarac, and H. Sundu, Semileptonic Λ_(b,c)
to Nucleon Transitions in Full QCD at Light Cone, Phys. Rev. D 80,
096007 (2009).
Khodjamirian:2011jp
A. Khodjamirian, C. Klein, T. Mannel, and Y. M. Wang, Form Factors and Strong
Couplings of Heavy Baryons from QCD Light-Cone Sum Rules, JHEP 09,
106 (2011).
Zhao:2018zcb
Z.-X. Zhao, Weak decays of heavy baryons in the light-front approach, Chin.
Phys. C 42, 093101 (2018).
Gutsche:2014zna
T. Gutsche, M. A. Ivanov, J. G. Körner, V. E. Lyubovitskij, and
P. Santorelli, Heavy-to-light semileptonic decays of Λ_b and
Λ_c baryons in the covariant confined quark model, Phys. Rev. D
90, 114033 (2014), [Erratum: Phys.Rev.D 94, 059902 (2016)].
Pervin:2005ve
M. Pervin, W. Roberts, and S. Capstick, Semileptonic decays of heavy lambda
baryons in a quark model, Phys. Rev. C 72, 035201 (2005).
Faustov:2016yza
R. N. Faustov and V. O. Galkin, Semileptonic decays of Λ _c baryons in
the relativistic quark model, Eur. Phys. J. C 76, 628 (2016).
Lu:2016ogy
C.-D. Lü, W. Wang, and F.-S. Yu, Test flavor SU(3) symmetry in exclusive
Λ_c decays, Phys. Rev. D 93, 056008 (2016).
Geng:2019bfz
C.-Q. Geng, C.-W. Liu, T.-H. Tsai, and S.-W. Yeh, Semileptonic decays of
anti-triplet charmed baryons, Phys. Lett. B 792, 214–218 (2019).
Geng:2020fng
C. Q. Geng, C.-C. Lih, C.-W. Liu, and T.-H. Tsai, Semileptonic decays of
Λ_c^+ in dynamical approaches, Phys. Rev. D 101, 094017
(2020).
Meinel:2017ggx
S. Meinel, Λ_c → N form factors from lattice QCD and phenomenology
of Λ_c → n ℓ^+ ν_ℓ and Λ_c → p μ^+ μ^-
decays, Phys. Rev. D 97, 034511 (2018).
Dai:1996xv
Y.-B. Dai, C.-S. Huang, M.-Q. Huang, and C. Liu, QCD sum rule analysis for the
Λ_b → Λ_c semileptonic decay, Phys. Lett.
B 387, 379–385 (1996).
Dosch:1999pr
H. G. Dosch, E. Ferreira, M. Nielsen, and R. Rosenfeld, Heavy Lambda
semileptonic decay: A QCD sum rule approach, Nucl. Phys. B Proc. Suppl.
74, 218–221 (1999).
Huang:1998rq
C.-S. Huang, C.-F. Qiao, and H.-G. Yan, Decay Λ_b → p
l ν̅ in QCD sum rules, Phys. Lett. B 437, 403–407 (1998).
Huang:1998ek
C.-S. Huang and H.-G. Yan, Exclusive rare decays of heavy baryons to light
baryons: Λ_b →Λγ and Λ_b →Λ l^+ l^-, Phys. Rev. D 59, 114022 (1999), [Erratum:
Phys.Rev.D 61, 039901 (2000)].
MarquesdeCarvalho:1999bqs
R. S. Marques de Carvalho, F. S. Navarra, M. Nielsen, E. Ferreira, and H. G.
Dosch, Form-factors and decay rates for heavy Lambda semileptonic decays
from QCD sum rules, Phys. Rev. D 60, 034009 (1999).
Shi:2019hbf
Y.-J. Shi, W. Wang, and Z.-X. Zhao, QCD Sum Rules Analysis of Weak Decays of
Doubly-Heavy Baryons, Eur. Phys. J. C 80, 568 (2020).
Zhao:2020mod
Z.-X. Zhao, R.-H. Li, Y.-L. Shen, Y.-J. Shi, and Y.-S. Yang, The semi-leptonic
form factors of Λ_b→Λ_c and Ξ_b→Ξ_c in QCD
sum rules, Eur. Phys. J. C 80, 1181 (2020).
Zhao:2021sje
Z.-X. Zhao, Semi-leptonic form factors of Ξ_c→Ξ in QCD sum rules,
arXiv:2103.09436 (2021).
Xing:2021enr
Z.-P. Xing and Z.-X. Zhao, QCD sum rules analysis of weak decays of doubly
heavy baryons: the b→ c processes, Eur. Phys. J. C 81,
1111 (2021).
Chung:1981wm
Y. Chung, H. G. Dosch, M. Kremer, and D. Schall, QCD Sum Rules for 'Baryonic
Currents', Phys. Lett. B 102, 175–179 (1981).
Emmerich:2016jjm
M. Emmerich, N. Offen, and A. Schäfer, The decays Λ_b,c→ N^*
l ν in QCD, J. Phys. G 43, 115003 (2016).
Wang:2012hu
Z.-G. Wang, Semileptonic decays B_c^* →η_c ℓν̅_ℓ with
QCD sum rules, Commun. Theor. Phys. 61, 81–88 (2014).
Yang:2005bv
M.-Z. Yang, Semileptonic decay of B and D → K_0^*(1430)ℓ̅ν
from QCD sum rule, Phys. Rev. D 73, 034027 (2006), [Erratum:
Phys.Rev.D 73, 079901 (2006)].
Du:2003ja
D.-S. Du, J.-W. Li, and M.-Z. Yang, Form-factors and semileptonic decay of
D^+_s →ϕl̅ν from QCD sum rule, Eur. Phys. J. C 37,
173–184 (2004).
Shifman:1978by
M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, QCD and Resonance
Physics: Applications, Nucl. Phys. B 147, 448–518 (1979).
Colangelo:2000dp
P. Colangelo and A. Khodjamirian, QCD sum rules, a modern perspective, At The
Frontier of Particle Physics 1495–1576 (2000).
ParticleDataGroup:2022pth
R. L. Workman et al., Review of Particle Physics, PTEP 2022, 083C01
(2022).
Chung:1984gr
Y. Chung, H. G. Dosch, M. Kremer, and D. Schall, Chiral Symmetry Breaking
Condensates for Baryonic Sum Rules, Z. Phys. C 25, 151 (1984).
Wan:2021vny
B.-D. Wan, S.-Q. Zhang, and C.-F. Qiao, Light baryonium spectrum, Phys. Rev.
D 105, 014016 (2022).
Leljak:2019fqa
D. Leljak and B. Melic, |V_ub| determination and testing of lepton flavour
universality in semileptonic B_c→ D^∗ decays, JHEP 02,
171 (2020).
Li:2021qod
Y.-S. Li, X. Liu, and F.-S. Yu, Revisiting semileptonic decays of
Λ_b(c) supported by baryon spectroscopy, Phys. Rev. D
104, 013005 (2021).
Huang:2022lfr
K.-S. Huang, W. Liu, Y.-L. Shen, and F.-S. Yu, Λ _b → p,
N^*(1535) form factors from QCD light-cone sum rules, Eur. Phys. J. C
83, 272 (2023).
Meinel:2016dqj
S. Meinel, Λ_c →Λ l^+ ν_l form factors and decay rates from
lattice QCD with physical quark masses, Phys. Rev. Lett. 118,
082001 (2017).
Geng:2022fsr
C.-Q. Geng, X.-N. Jin, and C.-W. Liu, Anatomy of Λ_c^+ semileptonic
decays, Phys. Rev. D 107, 033008 (2023).
Azizi:2019tcn
K. Azizi, A. T. Olgun, and Z. Tavukoğlu, Effects of vector leptoquarks on
Λ_b →Λ_cℓν̅_ℓ decay, Chin. Phys. C 45,
013113 (2021).
|
http://arxiv.org/abs/2307.04318v1 | 20230710032008 | Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series | [
"Feiyu Jiang",
"Changbo Zhu",
"Xiaofeng Shao"
] | stat.ME | [
"stat.ME"
] |
Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
Feiyu Jiang^1, Changbo Zhu^2[Corresponding author. Email address: [email protected].], and Xiaofeng Shao^3
^1Department of Statistics and Data Science, Fudan University
^2Department of Applied and Computational Mathematics and Statistics, University of Notre Dame
^3Department of Statistics, University of Illinois at Urbana Champaign
==========================================================================
Data objects taking value in a general metric space have become increasingly common in modern data analysis. In this paper, we study two important statistical inference problems, namely, two-sample testing and change-point detection, for such non-Euclidean data under temporal dependence. Typical examples of non-Euclidean valued time series include yearly mortality distributions, time-varying networks, and covariance matrix time series. To accommodate unknown temporal dependence, we advance the self-normalization (SN) technique <cit.> to the inference of non-Euclidean time series, which is substantially different from the existing SN-based inference for functional time series that reside in Hilbert space <cit.>.
Theoretically, we propose new regularity conditions that could be easier to check than those in the recent literature, and derive the limiting distributions of the proposed test statistics under both null and local alternatives. For change-point detection problem, we also derive the consistency for the change-point location estimator, and combine our proposed change-point test with wild binary segmentation to perform multiple change-point estimation. Numerical simulations demonstrate the effectiveness and robustness of our proposed tests compared with existing methods in the literature. Finally, we apply our tests to two-sample inference in mortality data and change-point detection in cryptocurrency data.
§ INTRODUCTION
Statistical analysis of non-Euclidean data that reside in a metric space is gradually emerging as an important branch of functional data analysis, motivated by increasing encounter of such data in many modern applications. Examples include
the analysis of sequences of age-at-death distributions over calendar years <cit.>,
covariance matrices in the analysis of diffusion tensors in medical imaging <cit.>, and graph Laplacians of networks <cit.>.
One of the main challenges in dealing with such data is that the usual vector/Hilbert space operation, such as
projection and inner product may not be well defined and only the distance between two non-Euclidean
data objects is available.
Despite the challenge, the list of papers that propose new statistical techniques to analyze
non-Euclidean data has been growing. Building on Fréchet mean and variance <cit.>, which are counterparts of mean and variance for metric space valued random object, <cit.> proposed a test for comparing N(≥ 2) populations of metric space valued data. <cit.> developed a novel test to detect a change point in the Fréchet mean and/or variance in a sequence of independent non-Euclidean data. The classical linear and nonparametric regression has also been extended to metric spaced valued data; see <cit.>, <cit.>, and <cit.>, among others.
So far, the majority of the literature on
non-Euclidean data has been limited to independent data, and the only exceptions are <cit.> and <cit.>, which mainly focused on the autoregressive modeling of non-Euclidean valued time series. To the best of our knowledge, no inferential tools are available for non-Euclidean valued time series in the literature.
In this paper, we address two important problems: two-sample testing and change-point detection, in the analysis of non-Euclidean valued time series. These two problems are also well motivated by the data we analyzed in the paper, namely, the yearly age-at-death distributions for countries in Europe and daily Pearson correlation matrices for five cryptocurrencies. For time series data, serial dependence is the rule rather than the exception. This motivates us to develop new tests for non-Euclidean time series that is robust to temporal dependence.
Note that the two testing problems have been addressed by <cit.> and <cit.>, respectively for independent non-Euclidean data, but as expected, their tests fail to control the size when there is temporal dependence in the series; see Section <ref> for simulation evidence.
To accommodate unknown temporal dependence, we develop test statistics based on self-normalization <cit.>, which is a nascent inferential technique for time series data. It has been mainly developed for vector time series and has been extended to functional time series in Hilbert space <cit.>. The functional extension is however based on reducing the infinite dimensional functional data to finite dimension via functional principal component analysis, and then applying SN to the finite-dimensional vector time series.
Such SN-based inference developed for time series in Hilbert space cannot be applied to non-Euclidean valued time series, since the projection and inner product commonly used for data in Hilbert space are not available for data objects that live in a general metric space.
The SN-based extension to non-Euclidean valued time series is therefore fairly different from that in <cit.> and <cit.>, in terms of both methodology and theory. For independent non-Euclidean valued data, <cit.> build on the empirical process theory <cit.> by regulating the complexity of the analyzed metric space, which is in general abstract and may not be easy to verify. In our paper, we take a different approach that is inspired by the M-estimation theory in <cit.> and <cit.> for Euclidean data, and extend it to non-Euclidean setting. We assume that the metric distance between data and the estimator of the Fréchet mean admits certain decomposition, which includes a bias term, a leading stochastic term, and a remainder term. Our technical assumptions are more intuitive and could be easier to check in practice. Furthermore, we are able to obtain explicit asymptotic distributions of our test statistics under the local alternatives of rate O(n^-1/2), where n is the sample size, under our assumptions, whereas they seem difficult to derive under the entropy integral type conditions employed by <cit.>.
The remainder of the paper is organized as follows. Section <ref> provides background of non-Euclidean metric space in which random objects of interest reside in, and some basic assumptions that will be used throughout the paper. Section <ref> proposes SN-based two-sample tests for non-Euclidean time series. Section <ref> considers SN-based change-point tests. Numerical studies for the proposed tests are presented in Section <ref>, and Section <ref> demonstrates the applicability of these tests through real data examples. Section <ref> concludes. Proofs of all results are relegated to Appendix <ref>. Appendix <ref> summarizes the examples that satisfy assumptions in Section <ref>, and Appendix <ref> provides simulation results for functional time series.
Some notations used throughout the paper are defined as follows. Let · denote the conventional Euclidean norm. Let D[0,1] denote the space of functions on [0, 1] which are right continuous with left limits, endowed with the Skorokhod topology <cit.>. We use ⇒ to denote weak convergence in D[0,1] or more generally in ℝ^m-valued function space D^m[0,1], where m∈ℕ; →_d to denote convergence in distribution; and →_p to denote convergence in probability.
A sequence of random variables X_n is said to be O_p(1) if it is bounded in
probability. For x∈ℝ, define ⌊ x⌋ as the largest integer that is smaller than or equal to x, and ⌈ x ⌉ as the smallest integer that is greater than or equal to x.
§ PRELIMINARIES AND SETTINGS
In this paper, we consider a metric space (Ω,d) that is totally bounded, i.e. for any ϵ>0, there exist a finite number of open ϵ-balls whose union can cover Ω. For a sequence of stationary random objects {Y_t}_t∈ℤ defined on (Ω,d), we follow <cit.>, and define their Fréchet mean and variance by
μ=argmin_ω∈Ω𝔼d^2(Y_t,ω), V=𝔼d^2(Y_t,μ),
respectively. Fréchet mean extends the traditional mean in linear spaces to more general metric spaces by minimizing expected squared metric distance between the random object Y_t and the centroid akin to the conventional mean by minimizing the expected sum of residual squares. It is particularly useful for objects that lie in abstract spaces without explicit algebraic structure. Fréchet variance, defined by such expected squared metric distance, is then used for measuring the dispersion in data.
Given finite samples {Y_t}_t=1^n, we define their Fréchet subsample mean and variance as
μ̂_[a,b]=argmin_ω∈Ω∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,ω),
V̂_[a,b]=1/⌊ nb⌋-⌊ na⌋∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,μ̂_[a,b]),
where (a,b)∈ℐ_η, ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η} for some trimming parameter η∈(0,1). The case corresponding to a=0 and b≥η is further denoted as
μ̂_[0,b]=μ̂_b, V̂_[0,b]=V̂_b,
with special case of b=1 corresponding to Fréchet sample mean and variance <cit.>, respectively.
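To make these definitions concrete, the following Python sketch computes the Fréchet subsample mean and variance for one of the motivating examples, a covariance-matrix valued time series equipped with the Frobenius metric d_F, for which the minimizer is simply the entrywise average; the autoregressive-type toy series is an assumption used only for illustration.

import numpy as np

def frechet_mean_var(Y, a, b):
    # Frechet subsample mean/variance of Y[floor(n*a)+1], ..., Y[floor(n*b)] under the Frobenius metric.
    n = len(Y)
    block = np.asarray(Y[int(np.floor(n * a)):int(np.floor(n * b))])
    mu_hat = block.mean(axis=0)                               # minimizer of sum_t d_F^2(Y_t, w)
    V_hat = np.mean([np.sum((M - mu_hat) ** 2) for M in block])
    return mu_hat, V_hat

rng = np.random.default_rng(0)
n, p, rho = 200, 3, 0.5
Z = rng.standard_normal((n + 1, p))
Y = []
for t in range(1, n + 1):
    e = rho * Z[t - 1] + np.sqrt(1 - rho**2) * Z[t]           # temporally dependent innovations
    Y.append(np.outer(e, e) + np.eye(p))                      # covariance-matrix valued observation
mu_full, V_full = frechet_mean_var(Y, 0.0, 1.0)               # Frechet sample mean and variance
mu_sub, V_sub = frechet_mean_var(Y, 0.25, 0.75)               # a subsample version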
Note that both Fréchet (subsample) mean and variance depend on the space Ω and metric distance d, which require further regulation for desired inferential purposes. In this paper, we do not impose independence assumptions, and our technical treatment differs substantially from those in the literature, c.f. <cit.>.
μ is unique, and for some δ>0, there exists a constant K>0 such that,
inf _d(ω, μ)<δ{𝔼(d^2(Y_0, ω))-𝔼(d^2(Y_0, μ))-K d^2(ω, μ)}≥ 0.
For any (a,b)∈ℐ_η, μ̂_[a,b] exists and is unique almost surely.
For any ω∈Ω, and (a,b)∈ℐ_η, as n→∞,
1/⌊ nb⌋-⌊ na⌋∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,ω)-𝔼d^2(Y_t,ω)]→_p 0.
For some constant σ>0,
1/√(n)∑_t=1^⌊ nr⌋(d^2(Y_t,μ)-V)⇒σ B(r), r∈(0,1],
where B(·) is a standard Brownian motion.
Let B_δ(μ) ⊂Ω be a ball of radius δ centered at μ. For ω∈ B_δ(μ), i.e. d(ω,μ)≤δ, we assume the following expansion
d^2(Y_t,ω)-d^2(Y_t,μ)= K_dd^2(ω,μ)+ g(Y_t,ω,μ)+R(Y_t,ω,μ), t∈ℤ,
where K_d∈(0,∞) is a constant, and g(Y_t,ω,μ) and R(Y_t,ω,μ) satisfy that, as n→∞,
sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)| n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ g(Y_t,ω,μ)/d(ω,μ)|=O_p(1),
and
sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)|n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ R(Y_t,ω,μ)/d(ω,μ)+n^1/2d^2(ω,μ)|→_p 0,
respectively.
Several remarks are given in order. Assumptions <ref>-<ref> are standard and similar conditions can be found in <cit.> and <cit.>. Assumptions <ref> and <ref> are adapted from Assumption (A1) in <cit.>, and are required for identification purpose. In particular, Assumption <ref> requires that the expected squared metric distance 𝔼d^2(Y_t,ω) can be well separated from the Fréchet variance, and the separation is quadratic in terms of the distance d(ω,μ). Assumption <ref> is useful for obtaining the uniform convergence of the subsample estimate of Fréchet mean, i.e., μ̂_[a,b], which is a key ingredient in forming the self-normalizer in SN-based inference.
Assumption <ref> is a pointwise weak law of large numbers, c.f. Assumption (A2) in <cit.>. Assumption <ref> requires the invariance principle to hold to regularize the partial sum that appears in Fréchet subsample variances. Note that
d^2(Y_t,ω) takes value in ℝ for any fixed ω∈Ω, thus both Assumption <ref> and <ref> could be implied by high-level weak temporal dependence conditions (e.g., strong mixing) in conventional Euclidean space, see <cit.> for discussions.
<Ref> distinguishes our theoretical analysis from the existing literature.
Its idea is inspired by <cit.> and <cit.> for M-estimators. In the conventional Euclidean space, i.e. (Ω,d)=(ℝ^m,·) for m≥ 1, it is easy to see that the expansion in <Ref> holds with K_d=1, g(Y_t,ω,μ)=2(μ-ω)^⊤(Y_t-μ) and R(Y_t,ω,μ)≡ 0. In more general cases, Assumption <ref> can be interpreted as the expansion of d^2(Y_t,ω) around the target value d^2(Y_t,μ). In particular, K_dd^2(ω,μ) can be viewed as the bias term,
g(Y_t,ω,μ) works as the asymptotic leading term that is proportional to the distance d(ω,μ) while R(Y_t,ω,μ) is the asymptotically negligible remainder term. More specifically, after suitable normalization, it reads as,
n^-1/2 ∑_t=⌊na⌋+1^⌊nb⌋ [d^2(Y_t,ω)-d^2(Y_t,μ)]
= n^1/2(b-a)K_dd^2(ω,μ)_bias term + d(ω,μ)n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω,μ)/d(ω,μ)_stochastic term
+n^-1/2∑_t=⌊na⌋+1^⌊nb⌋ R(Y_t,ω,μ)_remainder term.
The verification of this assumption can then be carried out by analyzing each term separately.
In comparison, the existing literature, e.g. <cit.>, <cit.>, imposes assumptions on the complexity of (Ω,d). These assumptions typically involve the behavior of entropy integrals and covering numbers rooted in empirical process theory <cit.>, which are abstract and difficult to check in practice; see Propositions 1 and 2 in <cit.>. Assumption <ref>, on the contrary, imposes conditions directly on the metric d and can be easily checked for the examples below. Moreover, Assumption <ref> is useful for deriving local powers of the tests developed in this paper; see Sections <ref> and <ref> for more details.
Examples that can satisfy Assumptions <ref>-<ref> include:
* L_2 metric d_L for Ω being the set of square integrable functions on [0,1];
* 2-Wasserstein metric d_W for Ω being the set of univariate probability distributions on ℝ;
* Frobenius metric d_F for Ω being the set of square matrices, including the special cases of covariance matrices and graph Laplacians;
* log-Euclidean metric d_E for Ω being the set of covariance matrices.
We refer to Appendix <ref> for more details of these examples and verifications of above assumptions for them.
§ TWO-SAMPLE TESTING
This section considers two-sample testing in metric space under temporal dependence.
For two sequences of temporally dependent random objects {Y_t^(1),Y_t^(2)}_t∈ℤ on (Ω,d), we denote Y_t^(i)∼ P^(i), where P^(i) is the underlying marginal distribution of Y_t^(i) with Fréchet mean and variance μ^(i) and V^(i), i=1,2. Given finite sample observations {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2, we are interested in the following two-sample testing problem,
ℍ_0: P^(1)=P^(2), ℍ_a: P^(1)≠ P^(2).
Let n=n_1+n_2. We assume the two samples are balanced, i.e. n_1/n→γ_1 and n_2/n→γ_2 with γ_1,γ_2∈(0,1) and γ_1+γ_2=1 as min(n_1,n_2)→∞. For r∈(0,1], we define their recursive Fréchet sample means and variances by
μ̂^(i)_r=argmin_ω∈Ω∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),ω), V̂^(i)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(i)_r), i=1,2.
A natural candidate test of ℍ_0 is to compare their Fréchet sample mean and variance by contrasting (μ̂^(1)_1,V̂^(1)_1) and (μ̂^(2)_1,V̂^(2)_1). For the mean part, it is tempting to use d(μ̂^(1)_1,μ̂^(2)_1) as the testing statistic. However, this is a non-trivial task as the limiting behavior of d(μ̂^(1)_1,μ̂^(2)_1) depends heavily on the structure of the metric space, which may not admit conventional algebraic operations.
Fortunately, both V̂^(1)_1 and V̂^(2)_1 take value in ℝ, and it is thus intuitive to compare their difference. In fact, <cit.> propose the test statistic of the form
U_n= n_1n_2/n σ̂_1^2σ̂_2^2(V̂^(1)_1-V̂^(2)_1)^2,
where σ̂_i^2 is a consistent estimator of lim_n_i→∞Var{√(n)(V̂^(i)_1-V^(i))}, i=1,2.
However, U_n requires both within-group and between-group independence, which is too stringent to be realistic for the applications in this paper. When either form of independence is violated, the test may fail to control size; see Section <ref> for numerical evidence. Furthermore, accounting for temporal dependence requires replacing the variance by the long-run variance, whose consistent estimation usually involves laborious tuning such as choices of kernels and bandwidths <cit.>. To this end, we invoke the self-normalization (SN) technique to bypass the foregoing issues.
The core principle of self-normalization for time series inference is to use an inconsistent long-run variance estimator, built from recursive estimates, to yield an asymptotically pivotal statistic. The SN procedure involves no tuning parameter, or fewer tuning parameters than traditional counterparts. See <cit.> for a comprehensive review of recent developments for low-dimensional time series. For recent extensions to inference for high-dimensional time series, we refer to <cit.> and <cit.>.
§.§ Test Statistics
Define the recursive subsample test statistic based on Fréchet variance as
T_n(r)=r(V̂^(1)_r-V̂^(2)_r), r∈ [η,1],
and then construct the SN based test statistic as
D_n,1=n[T_n(1)]^2/∑_k=⌊ nη⌋^n [T_n(k/n)-k/nT_n(1)]^2,
where η∈(0,1) is a trimming parameter for controlling the estimation effect of T_n(r) when r is close to 0, which is important for deriving the uniform convergence of {√(n)T_n(r), r∈[η,1]}, see <cit.> and <cit.> for similar technical treatments.
The test statistic (<ref>) is composed of the numerator n[T_n(1)]^2, which captures the difference in Fréchet variances, and the denominator ∑_k=⌊ nη⌋^n [T_n(k/n)- k/nT_n(1)]^2, which is called the self-normalizer and mimics the behavior of the numerator with suitable centering and trimming. For each r∈[η,1], T_n(r) is expected to be a consistent estimator of r(V^(1)-V^(2)). Therefore, under ℍ_a, T_n(1) is large when there is a significant difference in Fréchet variances, whereas the key element T_n(r)-rT_n(1) in the self-normalizer remains small. This suggests that we should reject ℍ_0 for large values of D_n,1.
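As a concrete illustration, the sketch below assembles D_n,1 from the recursive Fréchet sample variances in the same 2-Wasserstein setting as above (distributions stored as quantile functions on a grid); the helper names are ours, and other metrics would require replacing the variance computation accordingly.

```python
import numpy as np

def frechet_var(Q, r):
    """Recursive Fréchet sample variance based on the first floor(r * n_i) objects."""
    k = int(np.floor(r * Q.shape[0]))
    sub = Q[:k]
    mu = sub.mean(axis=0)                      # 1-d Wasserstein barycenter
    return ((sub - mu) ** 2).mean(axis=1).mean()

def D_n1(Q1, Q2, eta=0.15):
    """Self-normalized two-sample statistic D_{n,1} with trimming parameter eta."""
    n = Q1.shape[0] + Q2.shape[0]
    T = lambda r: r * (frechet_var(Q1, r) - frechet_var(Q2, r))
    T1 = T(1.0)
    ks = np.arange(int(np.floor(n * eta)), n + 1)
    denom = sum((T(k / n) - (k / n) * T1) ** 2 for k in ks)
    return n * T1 ** 2 / denom
```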
Note that (<ref>) only targets at difference in Fréchet variances. To detect the difference in Fréchet means, we can use contaminated Fréchet variance <cit.>. Let
V̂^C,(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ̂^(2)_r), and V̂^C,(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ̂^(1)_r),
and
T_n^C(r)=r(V̂^C,(1)_r+V̂^C,(2)_r-V̂^(1)_r-V̂^(2)_r).
The contaminated Fréchet sample variances V̂^C,(1)_r and V̂^C,(2)_r switch the roles of μ̂_r^(1) and μ̂_r^(2) in V̂^(1)_r and V̂^(2)_r, respectively, and can be viewed as proxies for measuring Fréchet mean differences.
Intuitively, it is expected that V̂^C,(i)_r≈𝔼d^2(Y_t^(i),μ^(3-i)), and V̂^(i)_r≈𝔼d^2(Y_t^(i), μ^(i)), i=1,2. Under ℍ_0, both μ̂_r^(1) and μ̂_r^(2) are consistent estimators for μ^(1)=μ^(2), thus V̂^C,(i)_r≈V̂^(i)_r, i=1,2, which indicates a small value for T_n^C(r). On the contrary, when d(μ^(1),μ^(2))>0, V̂^C,(i)_r could be much larger than V̂^(i)_r as 𝔼d^2(Y_t^(i),μ^(3-i))>𝔼d^2(Y_t^(i),μ^(i))=min_ω∈Ω𝔼d^2(Y_t^(i),ω), i=1,2, resulting in large value of T_n^C(r).
The power-augmented test statistic is thus defined by
D_n,2=n{[T_n(1)]^2+[T_n^C(1)]^2}/∑_k=⌊ nη⌋^n {[T_n(k/n)-k/nT_n(1)]^2+ [T_n^C(k/n)-k/nT_n^C(1)]^2},
where the additional term ∑_k=⌊ nη⌋^n [T_n^C(k/n)-k/nT_n^C(1)]^2 in the self-normalizer is used to stabilize finite-sample performance.
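The power-augmented statistic can be computed along the same lines; the sketch below (again for quantile-function data under d_W, with helper names of our choosing) adds the contaminated variances, which simply swap the two subsample Fréchet means.

```python
import numpy as np

def D_n2(Q1, Q2, eta=0.15):
    """Power-augmented self-normalized statistic D_{n,2}."""
    n = Q1.shape[0] + Q2.shape[0]

    def recursive_stats(r):
        S1 = Q1[: int(np.floor(r * Q1.shape[0]))]
        S2 = Q2[: int(np.floor(r * Q2.shape[0]))]
        mu1, mu2 = S1.mean(axis=0), S2.mean(axis=0)
        V1 = ((S1 - mu1) ** 2).mean(axis=1).mean()
        V2 = ((S2 - mu2) ** 2).mean(axis=1).mean()
        V1C = ((S1 - mu2) ** 2).mean(axis=1).mean()   # contaminated: other group's mean
        V2C = ((S2 - mu1) ** 2).mean(axis=1).mean()
        return r * (V1 - V2), r * (V1C + V2C - V1 - V2)

    T1, TC1 = recursive_stats(1.0)
    num = n * (T1 ** 2 + TC1 ** 2)
    den = 0.0
    for k in range(int(np.floor(n * eta)), n + 1):
        T, TC = recursive_stats(k / n)
        den += (T - (k / n) * T1) ** 2 + (TC - (k / n) * TC1) ** 2
    return num / den
```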
Our proposed tests can be adapted to the comparison of N populations <cit.>, where N≥ 2. A natural extension is to aggregate all pairwise differences in Fréchet variances and contaminated variances. Specifically, let the N groups of random data objects be {Y_t^(i)}_t=1^n_i, i=1,⋯,N. The null hypothesis is given as
ℍ_0: P^(1)=⋯=P^(N),
for some N≥ 2.
Let μ̂^(i)_r and V̂^(i)_r, r∈[η,1] be the Fréchet subsample mean and variance, respectively, for the ith group, i=1,⋯, N.
For 1≤ i≠ j≤ N, define the pairwise contaminated Fréchet subsample variance as
V̂^C,(i,j)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(j)_r), r∈ [η,1],
and define the recursive statistics
T_n^i,j(r)=r(V̂^(i)_r-V̂^(j)_r), T_n^C,i,j(r)=r(V̂^C,(i,j)_r+V̂^C,(j,i)_r-V̂^(i)_r-V̂^(j)_r), r∈ [η,1].
In the same spirit of the test statistics D_n,1 and D_n,2, for n=∑_i=1^N n_i, we may construct their counterparts for the N-sample testing problem as
D^(N)_n,1=n∑_i<j[T_n^i,j(1)]^2/∑_k=⌊ nη⌋^n ∑_i<j[T_n^i,j(k/n)-k/nT_n^i,j(1)]^2,
and
D^(N)_n,2=n∑_i<j{[T_n^i,j(1)]^2+[T_n^C,i,j(1)]^2}/∑_k=⌊ nη⌋^n ∑_i<j{[T_n^i,j(k/n)-k/nT_n^i,j(1)]^2+[T_n^C,i,j(k/n)-k/nT_n^C,i,j(1)]^2}.
Compared with the classical N-sample testing problem in Euclidean spaces, e.g. analysis of variance (ANOVA), the above modification does not require Gaussianity, equal variance, or serial independence. Therefore, it works for broader classes of distributions. We omit the details for the sake of space.
§.§ Asymptotic Theory
Before we present asymptotic results of the proposed tests, we need a slightly stronger assumption than Assumption <ref> to regulate the joint behavior of partial sums for both samples.
For some σ_1>0 and σ_2>0, we have
1/√(n)∑_t=1^⌊ nr⌋( d^2(Y_t^(1),μ^(1))-V^(1), d^2(Y_t^(2),μ^(2))-V^(2) )^⊤⇒( σ_1B^(1)(r), σ_2B^(2)(r) )^⊤,
where B^(1)(·) and B^(2)(·) are two standard Brownian motions with unknown correlation parameter ρ∈ (-1,1), and σ_1,σ_2≠ 0 are unknown parameters characterizing the long-run variance.
Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. Then as n→∞, under ℍ_0, for i=1,2,
D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr:=𝒟_η,
where
ξ_γ_1,γ_2(r;σ_1,σ_2)=γ_1^-1σ_1B^(1)(γ_1r)-γ_2^-1σ_2B^(2)(γ_2r).
Theorem <ref> obtains the same limiting null distribution for the Fréchet variance based test D_n,1 and its power-augmented version D_n,2. Although D_n,2 contains the contaminated variance term T_n^C(1), its contribution vanishes asymptotically as n→∞. This is an immediate consequence of the fact that
sup_r∈[η,1]|√(n)T_n^C(r)|→_p0,
see the proof of Theorem <ref> in Appendix <ref>. A similar phenomenon has been documented in <cit.> under different assumptions.
We next consider the power behavior under the Pitman local alternative,
ℍ_an: V^(1)-V^(2)=n^-κ_VΔ_V, d^2(μ^(1),μ^(2))=n^-κ_MΔ_M,
with Δ_V∈ℝ, Δ_M∈(0,∞), and κ_V,κ_M∈ (0,∞).
Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. As n→∞, under ℍ_an,
* if max{κ_V,κ_M}∈(0,1/2), then for i=1,2,
D_n,i→_p∞;
* if min{κ_V,κ_M}∈(1/2,∞), then for i=1,2,
D_n,i→_d𝒟_η;
* if κ_V=1/2 and κ_M∈(1/2,∞), then for i=1,2,
D_n,i→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr;
* if κ_V∈ (1/2,∞) and κ_M=1/2, then D_n,1→_d𝒟_η, and
D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2))^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr;
* if κ_V=κ_M=1/2, then
D_n,1→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr,
D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr;
where K_d is defined in Assumption <ref>.
Theorem <ref> presents the asymptotic behaviors for both test statistics under local alternatives in various regimes. In particular, D_n,1 can detect differences in Fréchet variance at local rate n^-1/2, but possesses trivial power against Fréchet mean difference regardless of the regime of κ_M. In comparison, D_n,2 is powerful for differences in both Fréchet variance and Fréchet mean at local rate n^-1/2, which validates our claim that D_n,2 indeed augments power.
Our results merit additional remarks when compared with <cit.>. In <cit.>, they only obtain
the consistency of their test under either n^1/2|V^(1)-V^(2)|→∞ or n^1/2d^2(μ^(1),μ^(2))→∞, while Theorem <ref> explicitly characterizes the asymptotic distributions of our test statistics under local alternatives of order O(n^-1/2), which depend on κ_V and κ_M. Such theoretical improvement relies crucially on our newly developed proof techniques based on Assumption <ref>, and it seems difficult to derive such limiting distributions under empirical-process-based assumptions in <cit.>. However, we do admit that self-normalization could result in moderate power loss compared with t-type test statistics, see <cit.> for evidence in Euclidean space.
Note that the limiting distributions derived in <Ref> and <Ref> contain a key quantity ξ_γ_1,γ_2(r;σ_1,σ_2) defined in (<ref>), which depends on nuisance parameters σ_1,σ_2 and ρ. This may hinder the practical use of the tests. The following corollary, however, justifies the wide applicability of our tests.
Under Assumption <ref>, if either γ_1=γ_2=1/2 or ρ=0, then for any constants C_a,C_b∈ℝ,
(ξ_γ_1,γ_2(1;σ_1,σ_2)+C_a)^2+C_b^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr=_d (B(1)+C_a/C_ξ)^2+ (C_b/C_ξ)^2/∫_η^1(B(r)-rB(1))^2dr,
where
C_ξ=√(2σ_1^2+2σ_2^2-4ρσ_1σ_2), if γ_1=γ_2,
√(σ_1^2/γ_1+σ_2^2/γ_2), if ρ=0.
Therefore, by choosing C_a=C_b=0 in <Ref>, we obtain the pivotal limiting distribution
𝒟_η=_dB^2(1)/∫_η^1(B(r)-rB(1))^2dr.
The asymptotic distributions in <Ref> can be similarly derived by letting either C_a=Δ_V or C_b=2K_dΔ_M.
Therefore, when either the two samples are of the same length (γ_1=γ_2) or the two samples are asymptotically independent (ρ=0), the limiting distribution 𝒟_η is pivotal. In practice, we reject ℍ_0 if D_n,i>Q_𝒟_η(1-α), where Q_𝒟_η(1-α) denotes the 1-α quantile of (the pivotal) 𝒟_η.
In Table <ref>, we tabulate commonly used critical values under various choices of η, based on 10,000 Monte Carlo replications in which a standard Brownian motion is approximated by standardized partial sums of 50,000 i.i.d. 𝒩(0,1) random variables.
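The following sketch reproduces this Monte Carlo scheme for the pivotal limit 𝒟_η; the grid size and number of replications are arguments and default to the values quoted above.

```python
import numpy as np

def critical_value_D(eta=0.15, alpha=0.05, n_grid=50_000, n_rep=10_000, seed=0):
    """Approximate the (1 - alpha) quantile of B(1)^2 / int_eta^1 (B(r) - r B(1))^2 dr."""
    rng = np.random.default_rng(seed)
    r = np.arange(1, n_grid + 1) / n_grid
    keep = r >= eta
    stats = np.empty(n_rep)
    for i in range(n_rep):
        B = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)   # approximate B(r)
        bridge = B - r * B[-1]                                         # B(r) - r B(1)
        integral = np.sum(bridge[keep] ** 2) / n_grid                  # Riemann sum over [eta, 1]
        stats[i] = B[-1] ** 2 / integral
    return np.quantile(stats, 1 - alpha)
```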
§ CHANGE-POINT TEST
Inspired by the two-sample tests developed in Section <ref>, this section considers the change-point detection problem for a sequence of random objects {Y_t}_t=1^n, i.e.
ℍ_0: Y_1, Y_2, …, Y_n∼ P^(1)
against the single change-point alternative,
ℍ_a: there exists 0<τ<1 such that Y_t=Y_t^(1)∼ P^(1) for 1≤ t≤⌊ nτ⌋, and Y_t=Y_t^(2)∼ P^(2) for ⌊ nτ⌋ +1≤ t ≤ n.
The single change-point testing problem can be roughly viewed as two-sample testing without knowing where
the two samples split, and they share certain similarities in terms of statistical methods and theory.
Recalling the Fréchet subsample mean μ̂_[a,b] and variance V̂_[a, b] in (<ref>), we further define the pooled contaminated variance separated by r∈(a,b) as
V̂_[r ; a, b]^C=1/⌊ n r⌋-⌊ n a⌋∑_i=⌊ n a⌋+1^⌊ n r⌋ d^2(Y_i, μ̂_[r, b])+1/⌊ n b⌋-⌊ n r⌋∑_i=⌊ n r⌋+1^⌊ n b⌋ d^2(Y_i, μ̂_[a, r]).
Define the subsample test statistics
T_n(r ; a, b)=(r-a)(b-r)/b-a(V̂_[a, r]-V̂_[r, b]),
and
T_n^C(r ; a, b)=(r-a)(b-r)/b-a(V̂_[r ; a, b]^C-V̂_[a, r]-V̂_[r, b]).
Note that T_n(r ; a, b) and T_n^C(r ; a, b) are natural extensions of T_n(r) and T_n^C(r) from the two-sample testing problem to the change-point detection problem, obtained by viewing {Y_t}_t=⌊ na⌋+1^⌊ nr⌋ and {Y_t}_t=⌊ nr⌋+1^⌊ nb⌋ as two separate samples.
Intuitively, the contrast statistics T_n(r ; a, b) and T_n^C(r ; a, b) are expected to attain their maxima (in absolute value) when r is set at or close to the true change-point location τ.
§.§ Test Statistics
For trimming parameters η_1 and η_2 such that η_1>2η_2 and η_1∈(0,1/2), in the same spirit as D_n,1 and D_n,2, and with a slight abuse of notation, we define the test statistics
SN_i= max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k), i=1,2,
where
D_n,1(k)= n[T_n(k/n ; 0,1)]^2/ ∑_l=⌊nη_2⌋^k-⌊nη_2⌋ [T_n(l/n ; 0, k/n)]^2+ ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ [T_n(l/n ; k/n, 1)]^2,
D_n,2(k)= n{[T_n(k/n ; 0,1)]^2+[T_n^C(k/n ; 0,1)]^2}/ L_n(k)+R_n(k),
with
L_n(k)= ∑_l=⌊nη_2⌋^k-⌊nη_2⌋ {[T_n(l/n ; 0, k/n)]^2+[T^C_n(l/n ; 0, k/n)]^2 },
R_n(k)= ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ {[T_n(l/n ; k/n, 1)]^2+ [T^C_n(l/n ; k/n, 1)]^2}.
The trimming parameter η_1 plays a role similar to that of η in the two-sample testing problem, stabilizing the estimation effect for relatively small subsample sizes, while the additional trimming η_2 is introduced to ensure that the subsample estimates in the self-normalizers are constructed with subsample sizes proportional to n. Furthermore, we note that the self-normalizers here are modified to accommodate the unknown change-point location; see <cit.>, <cit.> for more discussion.
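A sketch of the scan statistic SN_1 in the 2-Wasserstein setting used earlier (quantile-function data) is given below; the maximizing k also yields the change-point estimator τ̂ discussed later. The helper names are ours, and SN_2 would additionally require the contaminated variances.

```python
import numpy as np

def _frechet_var(Q, lo, hi):
    """Fréchet variance of Q[lo:hi] for quantile-function data (2-Wasserstein)."""
    sub = Q[lo:hi]
    mu = sub.mean(axis=0)
    return ((sub - mu) ** 2).mean(axis=1).mean()

def _T(Q, r, a, b):
    """Subsample contrast T_n(r; a, b)."""
    n = Q.shape[0]
    na, nr, nb = int(np.floor(n * a)), int(np.floor(n * r)), int(np.floor(n * b))
    return (r - a) * (b - r) / (b - a) * (_frechet_var(Q, na, nr) - _frechet_var(Q, nr, nb))

def SN1(Q, eta1=0.15, eta2=0.05):
    """Scan statistic SN_1 and its maximizing location k (written for clarity, not speed)."""
    n = Q.shape[0]
    m1, m2 = int(np.floor(n * eta1)), int(np.floor(n * eta2))
    best, best_k = -np.inf, None
    for k in range(m1, n - m1 + 1):
        r = k / n
        num = n * _T(Q, r, 0.0, 1.0) ** 2
        left = sum(_T(Q, l / n, 0.0, r) ** 2 for l in range(m2, k - m2 + 1))
        right = sum(_T(Q, l / n, r, 1.0) ** 2 for l in range(k + m2, n - m2 + 1))
        stat = num / (left + right)
        if stat > best:
            best, best_k = stat, k
    return best, best_k
```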
§.§ Asymptotic Theory
Suppose Assumptions <ref>-<ref> hold. Then, under ℍ_0, we have for i=1,2,
SN_i=max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k) ⇒sup _r∈[η_1,1-η_1][B(r)-rB(1)]^2/V(r,η):=𝒮_η,
where V(r,η)=∫_η_2^r-η_2 [B(u)-u/rB(r)]^2du+∫_r+η_2^1-η_2 [B(1)-B(u)-(1-u)/(1-r){B(1)-B(r)}]^2du.
Similar to Theorem <ref>, Theorem <ref> states that both change-point test statistics have the same pivotal limiting null distribution 𝒮_η.
The test is thus rejected when SN_i>Q_𝒮_η(1-α), i=1,2, where Q_𝒮_η(1-α) denotes the 1-α quantile of 𝒮_η.
In Table <ref>, we tabulate commonly used critical values under various choices of (η_1,η_2) by simulations.
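The critical values of 𝒮_η can be simulated in the same way as those of 𝒟_η; a sketch is given below, with the Brownian motion again approximated by standardized partial sums of i.i.d. normals and the integrals in V(r,η) by Riemann sums.

```python
import numpy as np

def critical_value_S(eta1=0.15, eta2=0.05, alpha=0.05, n_grid=2000, n_rep=5000, seed=0):
    """Approximate the (1 - alpha) quantile of the limiting distribution S_eta above."""
    rng = np.random.default_rng(seed)
    u = np.arange(1, n_grid + 1) / n_grid
    out = np.empty(n_rep)
    for i in range(n_rep):
        B = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)
        B1 = B[-1]
        best = -np.inf
        for k in range(int(n_grid * eta1), int(n_grid * (1 - eta1)) + 1):
            r, Br = u[k - 1], B[k - 1]
            num = (Br - r * B1) ** 2
            left = (u >= eta2) & (u <= r - eta2)
            right = (u >= r + eta2) & (u <= 1 - eta2)
            V = (np.sum((B[left] - u[left] / r * Br) ** 2) / n_grid
                 + np.sum((B1 - B[right] - (1 - u[right]) / (1 - r) * (B1 - Br)) ** 2) / n_grid)
            best = max(best, num / V)
        out[i] = best
    return np.quantile(out, 1 - alpha)
```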
Recall that in Theorem <ref> we obtained the local power of the two-sample tests D_n,1 and D_n,2 at rate n^-1/2. In the same spirit, consider the local alternative
ℍ_an: V^(1)-V^(2)=n^-1/2Δ_V, d^2(μ^(1),μ^(2))=n^-1/2Δ_M,
where Δ_V∈ℝ and Δ_M∈(0,∞). The following theorem states the asymptotic power behaviors of SN_1 and SN_2.
Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold, and that Δ_V≠ 0 and Δ_M≠ 0 are fixed. Under ℍ_an, if τ∈(η_1,1-η_1), then as n→∞, we have
lim_|Δ_V|→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,1(k)}→_p∞,
lim_max{|Δ_V|,Δ_M}→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,2(k)}→_p∞.
We note that <Ref> deals with the alternative involving two different sequences before and after the change-point, while <Ref> only involves one stationary sequence. Therefore, we need to replace <Ref> by <Ref>.
<Ref> demonstrates that our tests are capable of detecting local alternatives at rate n^-1/2. In addition, it is seen from Theorem <ref> that SN_1 is consistent under the local alternative of Fréchet variance change as |Δ_V|→∞, while SN_2 is consistent not only under |Δ_V|→∞ but also under the local alternative of Fréchet mean change as Δ_M→∞. Hence SN_2 is expected to capture a wider class of alternatives than SN_1, and
these results are consistent with findings for two-sample problems in Theorem <ref>.
When ℍ_0 is rejected, it is natural to estimate the change-point location by
τ̂_i=n^-1k̂_i, k̂_i=argmax_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k).
We will show that the estimators are consistent under the fixed alternative, i.e. ℍ_a: V^(1)-V^(2)=Δ_V. Before that, we need to regulate the behavior of the Fréchet mean and variance under ℍ_a.
Let
μ(α)= argmin_ω∈Ω{α𝔼(d^2(Y_t^(1),ω))+(1-α)𝔼(d^2(Y_t^(2),ω))},
V(α)= α𝔼(d^2(Y_t^(1),μ(α)))+(1-α)𝔼(d^2(Y_t^(2),μ(α))),
be the limiting Fréchet mean and variance of two mixture distributions indexed by α∈[0,1].
μ(α) is unique for all α∈[0,1], and
|V^(2)-V(α)|≥φ(α), |V^(1)-V(α)|≥φ(1-α),
where φ(α)≥ 0 is a continuous, strictly increasing function of α∈[0,1] satisfying φ(0)=0 and φ(1)≤ |Δ_V|.
The uniqueness of Fréchet mean and variance for mixture distribution is also imposed in <cit.>, see Assumption (A2) therein. Furthermore, Assumption <ref> imposes a bi-Lipschitz type condition on V(α), and is used to distinguish the Fréchet variance V(α) under mixture distribution from V^(1) and V^(2).
Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>), and Assumption <ref> hold. Under ℍ_a, for i=1,2,
we have τ̂_i→_pτ, where τ̂_i is defined in (<ref>).
Theorem <ref> obtains the consistency of τ̂_i, i=1,2 when Fréchet variance changes.
We note that it is very challenging to derive the consistency result when ℍ_a is caused by Fréchet mean change alone, which is partly due to the lack of explicit algebraic structure on (Ω,d) that we can exploit and the use of self-normalization. We leave this problem for future investigation.
§.§ Wild Binary Segmentation
To detect multiple change-points and identify their locations in the time series {Y_t}_t=1^n, we can combine our change-point test with the so-called wild binary segmentation (WBS) <cit.>. The testing procedure in conjunction with WBS can be described as follows.
Let I_M = { (s_m, e_m) }_m=1,2, …, M, where s_m, e_m are drawn uniformly from the grid { 0, 1/n, 2/n, …, (n-1)/n, 1 } such that ⌈ n e_m ⌉ - ⌊ n s_m ⌋≥ 20. We then simulate J i.i.d. samples, each of size n, from the multivariate Gaussian distribution with mean 0 and identity covariance matrix, i.e., for j=1, 2, …, J, { Z^j_i }_i=1^n i.i.d.∼𝒩(0,1). For the jth sample { Z^j_i }_i=1^n, let D(k; s_m,e_m; {Z_i^j}_i=1^n) be the statistic D_⌊ n e_m ⌋ - ⌈ n s_m ⌉ +1, 2(k) computed from the subsample { Z_⌈ n s_m ⌉^j, Z_⌈ n s_m ⌉ + 1^j, …, Z_⌊ n e_m ⌋^j } and
ξ_j = max_1 ≤ m ≤ Mmax_⌊ñ_m η_1 ⌋≤ k ≤ñ_m - ⌊ñ_m η_1 ⌋D(k; s_m,e_m; {Z_i^j}_i=1^n),
where ñ_m = ⌈ n e_m ⌉ - ⌊ n s_m ⌋ +1. Setting ξ as the 95% quantile of ξ_1, ξ_2, …, ξ_J, we can apply our test in combination with WBS algorithm to the data sequence
{Y_1, Y_2, … Y_n} by running Algorithm <ref> as WBS(0, 1, ξ). The main rational behind this algorithm is that we exploit the asymptotic pivotality of our SN test statistic, and the limiting null distribution of our test statistic applied to random objects is identical to that applied to i.i.d 𝒩(0,1) random variables.
Thus this threshold is expected to well approximate the
95% quantile of the finite sample distribution of the maximum SN test statistic on the M random intervals under the null.
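The recursion in Algorithm <ref> can be sketched as follows; here scan is assumed to be a user-supplied function returning the maximum of the SN_2-type scan and its maximizer on a given segment (e.g., built from the statistics above), and intervals is the collection of random index pairs playing the role of I_M. These interface choices are our own simplifications rather than part of the algorithm statement.

```python
import numpy as np

def wbs(Q, s, e, xi, intervals, scan, found=None, min_len=20):
    """Detect change points in Q[s:e) via wild binary segmentation.

    scan(Q, lo, hi) is assumed to return (max_k statistic, argmax k) of the
    SN_2-type scan on the segment Q[lo:hi); xi is the simulated threshold;
    intervals is the list of random index pairs (s_m, e_m).
    """
    if found is None:
        found = []
    if e - s < min_len:
        return found
    # candidate intervals fully contained in the current segment, plus the segment itself
    cand = [(lo, hi) for (lo, hi) in intervals if s <= lo and hi <= e] + [(s, e)]
    best_stat, best_k = -np.inf, None
    for lo, hi in cand:
        if hi - lo < min_len:
            continue
        stat, k = scan(Q, lo, hi)
        if stat > best_stat:
            best_stat, best_k = stat, k
    if best_k is not None and best_stat > xi:
        found.append(best_k)
        wbs(Q, s, best_k, xi, intervals, scan, found, min_len)
        wbs(Q, best_k + 1, e, xi, intervals, scan, found, min_len)
    return sorted(found)
```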
§ SIMULATION
In this section, we examine the size and power performance of our proposed tests for the two-sample testing (Section <ref>) and change-point detection (Section <ref>) problems, and provide simulation results for WBS-based change-point estimation (Section <ref>). We refer to Appendix <ref> for additional simulation results comparing with the FPCA approach for two-sample tests in functional time series.
The time series random objects considered in this section include (i). univariate Gaussian probability distributions equipped with 2-Wasserstein metric d_W; (ii). graph Laplacians of weighted graphs equipped with Frobenius metric d_F; (iii). covariance matrices <cit.> equipped with log-Euclidean metric d_E. Numerical experiments are conducted according to the following data generating processes (DGPs):
(i) Gaussian univariate probability distribution: we consider
Y_t^(1)=𝒩(arctan (U_t,1),[arctan(U_t,1^2)+1]^2),
Y_t^(2)=𝒩(arctan (U_t,2)+δ_1, δ_2^2[arctan(U_t,2^2)+1]^2).
(ii) graph Laplacians: each graph has N nodes (N=10 for the two-sample test and N=5 for the change-point test) that are categorized into two communities with 0.4N and 0.6N nodes, respectively; the edge weights within the first community, within the second community, and between the two communities are set to 0.4+arctan(U_t,1^2), 0.2+arctan(U_t,1^'2), 0.1 for the first sample Y_t^(1), and to δ_2[0.4+arctan(U_t,2^2)], δ_2[0.2+arctan(U_t,2^'2)], 0.1+δ_1 for the second sample Y_t^(2), respectively;
(iii) covariance matrix: Y_t^(i)=(2I_3+Z_t,i)(2I_3+Z_t,i)^⊤, i=1,2, such that all the entries of Z_t,1 (resp. Z_t,2) are independent copies of arctan(U_t,1 ) (resp. δ_1+δ_2arctan(U_t,2)).
For DGP (i)-(iii), (U_t,1,U_t,2)^⊤ (with independent copies (U'_t,1,U'_t,2)^⊤) are generated according to the following VAR(1) process,
(U_t,1, U_t,2)^⊤=ρ(U_t-1,1, U_t-1,2)^⊤+ϵ_t, ϵ_t i.i.d.∼𝒩(0, ( 1 a; a 1 ));
where a∈{0,0.5} measures the cross-dependence, and ρ∈{-0.4,0,0.4,0.7} measures the temporal dependence within each sample (or each segment in change-point testing). For size evaluation in change-point tests, only {Y_t^(1)} is used.
Furthermore, δ_1∈[0,0.3] and δ_2∈[0.7,1] are used to characterize the change in the underlying distributions. In particular, δ_1 can only capture the location shift, while δ_2 measures the scale change, and the case (δ_1,δ_2)=(0,1) corresponds to ℍ_0.
For DGPs (i) and (ii), i.e. Gaussian distributions with the 2-Wasserstein metric d_W and graph Laplacians with the Frobenius metric d_F, the location parameter δ_1 directly shifts the Fréchet mean while keeping the Fréchet variance constant, and the scale parameter δ_2 acts on the Fréchet variance only while holding the Fréchet mean fixed. For DGP (iii), i.e. covariance matrices, the log-Euclidean metric d_E operates nonlinearly, and thus changes in either δ_1 or δ_2 are reflected in changes in both the Fréchet mean and variance.
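For concreteness, the sketch below generates the two samples of DGP (i), returning each Gaussian distribution as its quantile function on a grid so that the 2-Wasserstein-based statistics above can be applied directly; the function name and the grid size are our own choices.

```python
import numpy as np
from scipy.stats import norm

def generate_dgp1(n, rho=0.4, a=0.5, delta1=0.0, delta2=1.0, m=100, seed=0):
    """Generate {Y_t^(1)} and {Y_t^(2)} of DGP (i) as (n, m) arrays of quantile functions."""
    rng = np.random.default_rng(seed)
    eps = rng.multivariate_normal(np.zeros(2), [[1.0, a], [a, 1.0]], size=n)
    U = np.zeros((n, 2))
    for t in range(1, n):                                   # bivariate VAR(1) driving process
        U[t] = rho * U[t - 1] + eps[t]
    grid = norm.ppf((np.arange(1, m + 1) - 0.5) / m)        # standard normal quantiles
    mean1, sd1 = np.arctan(U[:, 0]), np.arctan(U[:, 0] ** 2) + 1
    mean2 = np.arctan(U[:, 1]) + delta1
    sd2 = delta2 * (np.arctan(U[:, 1] ** 2) + 1)
    Q1 = mean1[:, None] + sd1[:, None] * grid[None, :]
    Q2 = mean2[:, None] + sd2[:, None] * grid[None, :]
    return Q1, Q2
```

Size corresponds to (δ_1,δ_2)=(0,1); power curves are obtained by varying δ_1 or δ_2.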
We also report comparisons of our proposed methods with <cit.> for two-sample testing and <cit.> for change-point testing; these competing methods are generically referred to as DM.
§.§ Two-Sample Test
For the two-sample testing problems, we set the sample sizes as n_1=n_2∈{50,100,200,400} and the trimming parameter as η=0.15.
Table <ref> presents the sizes of our tests and the DM test for the three DGPs based on 1000 Monte Carlo replications at nominal significance level α=5%.
In all three subtables, we see that: (a) both D_1 and D_2 deliver reasonable size under all settings; (b) DM suffers from severe size distortion when the dependence among the data is strong; (c) when the two samples are dependent, i.e. a=0.5, DM is somewhat undersized even when the data are temporally independent. These findings suggest that our SN-based tests provide more accurate size than DM when either within-group temporal dependence or between-group dependence is present.
In Figure <ref>, we further compare the size-adjusted power of our SN-based tests and the DM test, in view of the size distortion of DM. That is, the critical values are set as the empirical 95% quantiles of the test statistics obtained in the size evaluation, so that all curves start from the nominal level of 5%. For all settings, we note that D_2 is at least as powerful as D_1. In particular, D_1 has trivial power in DGPs (i) and (ii) when only a Fréchet mean difference is present. In addition, D_2 is more powerful than DM in detecting Fréchet mean differences for DGPs (i) and (ii), and beats DM in DGP (i) for detecting Fréchet variance differences, although it is slightly worse than DM in detecting Fréchet variance differences for DGPs (ii) and (iii). Given its robust size and power performance, we recommend D_2 for practical use.
§.§ Change-Point Test
For the change-point testing problems, we set the sample size n∈{200, 400, 800} and the trimming
parameters as (η_1,η_2)=(0.15,0.05). Table <ref> outlines the size performance of our tests and the DM test for the three DGPs
based on 1000 Monte Carlo replications at nominal significance level α=5%. DM tests based on the asymptotic critical value and on the bootstrap (with 500 replications) are denoted DM^a and DM^b, respectively.
From Table <ref>, we find that SN_1 always exhibits accurate size while SN_2 is a bit conservative. By comparison, the tests based on DM^a and DM^b suffer from severe size distortion when strong temporal dependence is present, although DM^b is slightly better than DM^a in DGPs (i) and (ii).
In Figure <ref>, we plot the size-adjusted power of our tests and the DM test based on bootstrap calibration. Here, the size-adjusted power of DM^b is implemented following <cit.>. Similar to the findings for the two-sample tests, we find that SN_1 has trivial power in DGPs (i) and (ii) when there is only a Fréchet mean change and is the worst among the three tests. Furthermore, SN_2 is slightly less powerful than DM, but the power loss is moderate. Considering its better size control, SN_2 is preferred.
We further provide numerical evidence on the estimation accuracy by considering the alternative hypothesis δ_1=1-δ_2=0.3 with true change-point location τ=0.5 for DGPs (i)-(iii). Varying the sample size n∈{400,800,1600}, we find that for all DGPs the histograms
of τ̂ (based on SN_2) plotted in Figure <ref> become more concentrated around the truth τ=0.5 as the sample size increases, which is consistent with our theoretical result on the consistency of τ̂.
§.§ Multiple Change Point Detection
For the simulations on multiple change-point estimation, we consider non-Euclidean time series of length n=500 generated from the following two models. These models are the same as before, but are reformulated for better presentation.
* Gaussian univariate probability distribution: Y_t=𝒩(arctan (U_t)+δ_t,1, δ_t, 2^2[arctan(U_t^2)+1]^2).
* covariance matrix: Y_t=(2I_3+Z_t)(2I_3+Z_t)^⊤ with Z_t= δ_t,1+δ_t,2arctan(U_t).
Here, U_t is generated according to the AR(1) process U_t=ρ U_t-1+ϵ_t, ϵ_t i.i.d.∼𝒩(0,1). There are three change points, at t=110, 250 and 370. The change-point locations are reflected in the definitions of {δ_t,1} and {δ_t,2}, where
δ_t,1 = a_1 𝕀_{n ≤ 110 } + a_2 𝕀_{ 110 < n ≤ 250 } + a_3 𝕀_{ 250 < n ≤ 370 } + a_4 𝕀_{ 370 < n ≤ 500 },
δ_t,2 = b_1 𝕀_{ n ≤ 110 } + b_2 𝕀_{ 110 < n ≤ 250 } + b_3 𝕀_{ 250 < n ≤ 370 } + b_4 𝕀_{ 370 < n ≤ 500 }.
For each model, we consider 3 cases that are differentiated by the magnitudes of a_i, b_i, i=1,2,3,4. For the data generating model of Gaussian distributions, we set
* (a_1, a_2, a_3, a_4) = (0, 0.7, 0, 0.8), (b_1, b_2, b_3, b_4) = (1, 1.5, 0.7, 1.4);
* (a_1, a_2, a_3, a_4) = (0, 0.2, 0, 0.3), (b_1, b_2, b_3, b_4) = (0.5, 1.5, 0.4, 1.4);
* (a_1, a_2, a_3, a_4) = (0, 0.5, 1.5, 3.3), (b_1, b_2, b_3, b_4) = (0.2, 1.5, 3.8, 6.5).
As for the data generating model of covariance matrices, we set
* (a_1, a_2, a_3, a_4) = (0, 1.2, 0, 1.3), (b_1, b_2, b_3, b_4) = (0.8, 1.5, 0.7, 1.6);
* (a_1, a_2, a_3, a_4) = (0, 1, 0, 1), (b_1, b_2, b_3, b_4) = (0.5, 2, 0.4, 1.9);
* (a_1, a_2, a_3, a_4) = (0, 2, 3.9, 5.7), (b_1, b_2, b_3, b_4) = (0.2, 0.7, 1.3, 2).
Cases 1 and 2 correspond to non-monotone changes and Case 3 considers the monotone change.
Here, our method described in Section <ref> is denoted as WBS-SN_2 (that is, a combination of WBS and our SN_2 test statistic).
The method DM in conjunction with binary segmentation, referred to as BS-DM, is proposed in <cit.> and is included in this simulation for comparison. In addition, our statistic SN_2
in combination with binary segmentation, denoted BS-SN_2, is implemented and included as well. The critical values for BS-DM and BS-SN_2 are obtained from their respective asymptotic distributions.
The simulation results are shown in Table <ref>, where we present the ARI (adjusted Rand index) and the number of detected change points for two dependence levels ρ=0.3, 0.6. Note that ARI ∈ [0,1]
measures the accuracy of change-point estimation, with larger ARI corresponding to more accurate estimation.
We summarize the main findings as follows. (a) WBS-SN_2 is the best method overall, as it can accommodate both monotonic and non-monotonic changes and appears quite robust to temporal dependence. For Cases 1 and 2, we see that BS-SN_2 does not work for non-monotone changes, due to the use of the binary segmentation procedure. (b) BS-DM tends to have more false discoveries compared to the other methods. This is expected, as the method DM is primarily proposed for i.i.d. data sequences and exhibits serious oversize when there is temporal dependence, as seen in Section <ref>. (c) When we increase ρ=0.3 to ρ=0.6, the performance of WBS-SN_2 remains quite stable for both distributional time series and covariance matrix time series.
§ APPLICATIONS
In this section, we present two real-data illustrations, one for two-sample testing and the other for change-point detection. Both datasets take the form of non-Euclidean time series, and neither appears to have been analyzed before using techniques that account for unknown temporal dependence.
§.§ Two sample tests
Mortality data. Here we are interested in comparing the longevity of people living in different European countries. From the Human Mortality Database (<https://www.mortality.org/Home/Index>), we can obtain a time series consisting of yearly age-at-death distributions for each country. We focus on the distributions for females from 1960 to 2015, and 26 countries are included in the analysis after excluding countries with missing data. Pairwise two-sample tests between the included countries are performed using our statistic D_2 to assess the similarity of age-at-death distributions between different countries. The resulting p-value matrix is plotted in Figure <ref> (left).
To better present the testing results and gain more insight, we define the dissimilarity between two given countries by subtracting each p-value from 1. Treating these dissimilarities as “distances", we apply multidimensional scaling to “project" each country onto a two-dimensional plane for visualization; see Figure <ref> (right) for the plot of the “projected" countries. It appears that several western European countries, including the UK, Belgium, Luxembourg, Ireland, Austria, and Denmark, form a cluster, whereas several central and eastern European countries, including Poland, Latvia, Russia, Bulgaria, Lithuania and Czechia, share similar distributions. We suspect that the similarity in mortality distributions is closely related to similarity in economic development and healthcare systems, and depends less on geographical location.
§.§ Change point detection
Cryptocurrency data. Detecting change points in the Pearson correlation matrices of a set of cryptocurrencies of interest can uncover structural breaks in their correlation and can play an important role in investors' decisions. Here, we construct the daily Pearson correlation matrices from minute prices of Bitcoin, Doge coin, Cardano, Monero and Chainlink for the year 2021. The cryptocurrency data can be downloaded at <https://www.cryptodatadownload.com/analytics/correlation-heatmap/>. See Figure <ref> for the plot of the time series of pairwise correlations. Three methods, namely our SN_2 test combined with WBS (WBS-SN_2), the SN_2 test combined with binary segmentation (BS-SN_2), and the DM test of <cit.> in conjunction with binary segmentation (BS-DM), are applied to detect potential change points for this time series.
Method WBS-SN_2 detects an abrupt change on 2021-05-17 and method BS-SN_2 detects a change point on 2021-04-29. By comparison, more than 10 change points are detected by BS-DM, and we suspect that many of them are false discoveries (see Section <ref> for simulation evidence of BS-DM's tendency toward over-detection). The change point in mid-May 2021 is well expected and corresponds to a major crash in the crypto market that wiped out 1 trillion dollars. The major causes of this crash were the withdrawal of Tesla's commitment to accept Bitcoin as payment and warnings regarding cryptocurrency sent by the Chinese central bank to financial institutions and businesses in China. Since this major crash, the market has been dominated by negative sentiment and fear of a recession. We refer to the following CNN article for some discussion of this crash: <https://www.cnn.com/2021/05/22/investing/crypto-crash-bitcoin-regulation/index.html>.
§ CONCLUSION
Motivated by the increasing availability of non-Euclidean time series data, this paper considers two-sample testing and change-point detection for temporally dependent random objects. Our inferential framework builds upon the nascent SN technique, which has mainly been developed for conventional Euclidean time series or functional time series in Hilbert spaces; the extension of SN to time series of objects residing in metric spaces is the first in the literature. The proposed tests are robust to weak temporal dependence, involve little tuning, and are broadly applicable to many non-Euclidean data types with easily verified technical conditions. On the theory front, we derive the asymptotic distributions of our two-sample and change-point tests under both the null and local alternatives of order O(n^-1/2). Furthermore, for the change-point problem, the consistency of the change-point estimator is established under mild conditions.
Both simulation and real data illustrations demonstrate the robustness of our test with respect to temporal dependence and the effectiveness in testing and estimation problems.
To conclude, we mention several interesting but unsolved problems for analyzing non-Euclidean time series.
For example, although powerful against Fréchet mean differences/changes, the test statistics developed in this paper rely on the asymptotic behavior of Fréchet (sub)sample variances. It is imperative to construct formal tests that directly target Fréchet mean differences/changes. For the change-point detection problem in non-Euclidean data, the existing literature, including this paper, only derives the consistency of the change-point estimator. It would be very useful to derive an explicit convergence rate and the asymptotic distribution of the change-point estimator, which is needed for confidence interval construction. It would also be interesting to study how to detect structural changes when the underlying distributions of the random objects change smoothly. We leave these topics for future investigation.
§ TECHNICAL PROOFS
§.§ Auxiliary Lemmas
We first introduce some notations. We denote o_up(·) as the uniform o_p(·) w.r.t. the partial sum index (a,b)∈ℐ_η. Let M_n(ω,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋f_ω(Y_t), where f_ω(Y)=d^2(Y,ω)-d^2(Y,μ), then it is clear that
μ̂_[a,b]=argmin_ω∈ΩM_n(ω,[a,b]).
Let Ṽ_[a,b]=1/⌊ n b⌋-⌊ n a⌋∑_t=⌊ n a⌋+1^⌊ n b⌋ d^2(Y_t, μ).
The following three main lemmas are established under Assumptions <ref>-<ref>, and they are used repeatedly throughout the proofs of the main theorems.
sup_(a,b)∈ℐ_η√(n)d(μ̂_[a,b],μ)=O_p(1).
(1). We first show the uniform convergence, i.e.
sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=o_up(1).
For any ϵ>0, define
ψ(ϵ):=inf_d(ω,μ)>ϵ𝔼f_ω(Y),
and we know that ψ(ϵ)>0 by the uniqueness of μ in Assumption <ref>.
Hence, let M(ω,[a,b])=(b-a)𝔼f_ω(Y), we have
P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)>ϵ)
= P(⋃_(a,b)∈ℐ_η{d(μ̂_[a,b],μ)>ϵ})
≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-inf_d(ω,μ)> ϵM(ω,[a,b])≥ 0})
≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])≥ηψ(ϵ)/2})
≤
P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-M_n(μ̂_[a,b],[a,b])
+M_n(μ,[a,b])-M(μ,[a,b])≥ηψ(ϵ)/2})
≤ P(sup_(a,b)∈ℐ_ηsup_ω∈Ω|M_n(ω,[a,b])-M(ω,[a,b])|≥ηψ(ϵ)/4)
where the first inequality holds because the event {d(μ̂_[a,b],μ)>ϵ} implies that μ̂_[a,b]∈{ω∈Ω:d(ω,μ)> ϵ}, and thus M(μ̂_[a,b],[a,b])≥inf_d(ω,μ)>ϵM(ω,[a,b]); the second inequality holds by b-a≥η (hence (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n)
and the definition of (<ref>) such that inf_d(ω,μ)>ϵM(ω,[a,b])=(b-a)ψ(ϵ)>ηψ(ϵ)/2; and the third holds by that M(μ,[a,b])=0 and M_n(μ,[a,b])≥ M_n(μ̂_[a,b],[a,b]).
Note M_n(ω,[a,b])-M(ω,[a,b])=M_n(ω,[0,b])-M(ω,[0,b])-M_n(ω,[0,a])+M(ω,[0,a]). Therefore, it suffices to show the weak convergence of the process {M_n(ω,[0,u])-M(ω,[0,u]), u∈[0,1],ω∈Ω} to zero. Note the pointwise convergence holds easily by the boundedness of f_ω and Assumption <ref>, so we only need to show the stochastic equicontinuity, i.e.
lim sup_n→∞P(sup_|u-v|<δ_1,d(ω_1,ω_2)<δ_2|M_n(ω_1,[0,u])-M(ω_1,[0,u])
-M_n(ω_2,[0,v])+M(ω_2,[0,v])|>ϵ)→ 0
as max(δ_1,δ_2)→ 0.
Then, by triangle inequality, we have
|M_n(ω_1,[0,u])-M(ω_1,[0,u])-M_n(ω_2,[0,v])+M(ω_2,[0,v])|
≤ |M_n(ω_1,[0,u])-M_n(ω_1,[0,v])|+|M_n(ω_1,[0,v])-M_n(ω_2,[0,v])|
+|M(ω_1,[0,u])-M(ω_1,[0,v])|+|M(ω_1,[0,v])-M(ω_2,[0,v])|
:= ∑_i=1^4 R_n,i.
Without loss of generality, we assume v>u, and by the boundedness of the metric, we have for some K>0,
R_n,1≤n^-1∑_t=⌊nu⌋+1^⌊nv⌋d^2(Y_t,ω_1)≤K|u-v|≤Kδ_1.
Similarly, R_n,3≤ K. Furthermore, we can see that
R_n,2,R_n,4≤ 2diam(Ω)d(ω_1,ω_2)≤ Kδ_2.
Hence, the result follows by letting δ_1 and δ_2 sufficiently small.
Thus, the uniform convergence holds.
(2). We then derive the convergence rate based on Assumption <ref>.
By the consistency, we have for any δ>0, P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)≤δ)→ 1. Hence, on the event that sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)≤δ, and noting that M_n(μ,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,μ)-d^2(Y_t,μ)]=0, we have
0= M_n(μ,[a,b])
≥ M_n(μ̂_[a,b],[a,b])
= K_d⌊nb⌋-⌊na⌋/nd^2(μ̂_[a,b],μ) + n^-1∑_t=⌊na⌋+1^⌊nb⌋[g(Y_t,μ̂_[a,b],μ)+R(Y_t,μ̂_[a,b],μ)]
≥ K_d η/2d^2(μ̂_[a,b],μ)
+d(μ̂_[a,b],μ)[ n^-1∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(n^-1/2+d(μ̂_[a,b],μ))],
where the last inequality holds by Assumption <ref> and the fact (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n.
Note the above analysis holds uniformly for (a,b)∈ℐ_η, this implies that
sup_(a,b)∈ℐ_η[K_d η/2d(μ̂_[a,b],μ)-o_up(d(μ̂_[a,b],μ))]
≤ n^-1/2 sup_(a,b)∈ℐ_η| n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)|+o_up(n^-1/2)=O_p(n^-1/2),
and hence sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=O_p(n^-1/2).
sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|=o_p(1).
By Lemma <ref>, and Assumption <ref>, we have
sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b])
≤ K_dsup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ) sup_(a,b)∈ℐ_η|√(n)d(μ̂_[a,b],μ)
+ n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(1+√(n)d(μ̂_[a,b],μ))|
= O_p(n^-1/2).
Hence, we have that
sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|≤η^-1sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b]),
the result follows.
Let V̂^C_[a,b](ω̃)=1/⌊ nb⌋-⌊ na ⌋∑_t=⌊ na⌋+1^⌊ nb⌋d^2(Y_i,ω̃), where ω̃∈Ω is a random object such that
√(n)sup_(a,b)∈ℐ_ηd(ω̃,μ̂_[a,b])=O_p(1).
Then,
√(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]|=o_p(1).
By triangle inequality and Lemma <ref>,
√(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]|
= sup_(a,b)∈ℐ_η|√(n)/⌊n b⌋-⌊n a⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω̃)-d^2(Y_i,μ)|
≤ (η/2)^-1sup_(a,b)∈ℐ_η√(n)M_n(ω̃,[a,b]).
Note by triangle inequality for the metric, d(ω̃,μ)≤ d(μ̂_[a,b],μ)+d(ω̃,μ̂_[a,b])=O_p(n^-1/2), and we know that d(ω̃,μ)<δ with probability tending to 1, and on this event, by Assumption <ref>,
√(n)M_n(ω̃,[a,b])
≤ K_dd^2(ω̃,μ) +n^-1| ∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω̃,μ)|
+n^-1|∑_t=⌊na⌋+1^⌊nb⌋R(Y_t,ω̃,μ)|.
Similar to the proof of Lemma <ref>, we get the result.
§.§ Proof of Theorems in Section <ref>
Let Ṽ^(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ^(1)), and Ṽ^(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ^(2)).
For each r∈[η,1], we consider the decomposition,
√(n)T_n(r)= √(n)r(V̂^(1)_r-V̂^(2)_r)
= √(n)r(V̂^(1)_r-Ṽ^(1)_r+Ṽ^(1)_r-V^(1))
-√(n)r(V̂^(2)_r-Ṽ^(2)_r+Ṽ^(2)_r-V^(2))
+√(n)r(V^(1)-V^(2))
:= R_n,1(r)+R_n,2(r)+R_n,3(r).
and
√(n)T_n^C(r)=
√(n)r(V̂^C,(1)_r-Ṽ^(1)_r)-√(n)r(V̂^(1)_r-Ṽ^(1)_r)
+√(n)r(V̂^C,(2)_r -Ṽ^(2)_r)-√(n)r(V̂^(2)_r-Ṽ^(2)_r)
:= R^C_n,1(r)+R^C_n,2(r)+R^C_n,3(r)+R^C_n,4(r).
By Lemma <ref>,
sup_r∈[η,1]√(n)r(V̂^(1)_r-Ṽ^(1)_r)=o_p(1), sup_r∈[η,1]√(n)r(V̂^(2)_r-Ṽ^(2)_r)=o_p(1),
i.e.
{R^C_n,2(r)+R^C_n,4(r)}_r∈[η,1]⇒ 0.
Furthermore, by Assumption <ref>,
√(n)r(V̂^(1)_r-V^(1))⇒γ_1^-1σ_1B^(1)(γ_1r), √(n)r(V̂^(2)_r-V^(2))⇒γ_2^-1σ_2B^(2)(γ_2r).
This implies that
{R_n,1(r)+R_n,2(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1].
§.§ Proof of Theorem <ref>
Under ℍ_0, R_n,3(r)≡ 0, and μ^(1)=μ^(2)=μ. Hence, by (<ref>) and (<ref>), we obtain that
{√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1].
Next, by Lemma <ref>, we can obtain that
√(n)sup_r∈[η,1]d(μ̂^(1)_r,μ)=o_p(1), √(n)sup_r∈[η,1]d(μ̂^(2)_r,μ)=o_p(1).
Hence, by Lemma <ref>, we have
{R^C_n,1(r)+R^C_n,3(r)}_r∈[η,1]⇒ 0.
Together with (<ref>), we have
{√(n)T^C_n(r)}_r∈[η,1]⇒ 0.
Hence, by continuous mapping theorem, for both i=1,2,
D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr.
§.§ Proof of Theorem <ref>
In view of (<ref>) and (<ref>),
{√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rn^-κ_V+1/2Δ_V}_r∈[η,1].
Hence
* For κ_V ∈(1/2,∞), {√(n)T_n(r)}⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1].
* For κ_V=1/2, {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rΔ_V}_r∈[η,1].
* For κ_V∈(0,1/2), √(n)T_n(1)→_p∞, and {√(n)T_n(r)-√(n)rT_n(1)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)}_r∈[η,1].
Next, we focus on √(n)T_n^C(r).
When κ_M∈ (0,∞), it holds that d(μ^(1),μ^(2))=O(n^-κ_M/2)=o(1), and by triangle inequality, for any r∈[η,1],
|d(μ^(1),μ^(2))-d(μ̂^(2)_r,μ^(2))|≤ d(μ̂^(2)_r,μ^(1))≤ |d(μ^(1),μ^(2))+d(μ̂^(2)_r,μ^(2))|.
By Lemma <ref>, we have sup_r∈[η,1]d(μ̂^(2)_r,μ^(2))=O_p(n^-1/2).
This and (<ref>) imply that
* when κ_M∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2);
* when κ_M∈(0,1/2], d^2(μ̂^(2)_r,μ^(1))=d^2(μ^(1),μ^(2))+o_up(n^-1/2)=n^-κ_MΔ_M+o_up(n^-1/2).
Similarly,
* when κ_M∈(1/2,∞), d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2);
* when κ_M∈(0,1/2], d^2(μ̂^(1)_r,μ^(2))=n^-κ_MΔ_M+o_up(n^-1/2).
Furthermore, by Assumption <ref>, equations (<ref>) and (<ref>), we obtain
√(n)T_n^C(r)=R_n,1^C(r)+R_n,3^C(r)+o_up(1)
= √(n)K_d rd^2(μ̂^(2)_r,μ^(1))+ rd(μ̂^(2)_r,μ^(1))[n^-1/2∑_t=1^⌊γ_1nr⌋g(Y_t^(1),μ̂^(2)_r,μ^(1))/d(μ̂^(2)_r,μ^(1))]
+o_up(d(μ̂^(2)_r,μ^(1))+√(n)d^2(μ̂^(2)_r,μ^(1)))
+√(n)K_dr d^2(μ̂^(1)_r,μ^(2))+ rd(μ̂^(1)_r,μ^(2))[n^-1/2∑_t=1^⌊γ_2nr⌋g(Y_t^(2),μ̂^(1)_r,μ^(2))/d(μ̂^(1)_r,μ^(2))]
+o_up(d(μ̂^(1)_r,μ^(2))+√(n)d^2(μ̂^(1)_r,μ^(2)))
+o_up(1).
* For κ_M ∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2), and d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2).
Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒ 0.
* For κ_M=1/2, we note that
d^2(μ̂^(2)_r,μ^(1))=n^-1/2Δ_M+o_up(n^-1/2), and d^2(μ̂^(1)_r,μ^(2))=n^-1/2Δ_M+o_up(n^-1/2).
Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒{2rK_dΔ_M}_r∈[η,1], and {√(n)[T_n^C(r)-rT_n^C(1)]}_r∈[η,1]⇒ 0.
* For κ_M∈(0,1/2), we multiply n^2κ_M-1 on both denominator and numerator of D_n,2, and obtain
D_n,2=n^2κ_M{[T_n(1)]^2+[T_n^C(1)]^2}/n^-1∑_k=⌊ nη⌋^n n^2κ_M{[T_n(k/n)-k/nT_n(1)]^2+[T_n^C(k/n)-k/nT^C_n(1)]^2}.
Note that n^κ_M-1/2→0, as n→∞, we obtain that
{n^κ_M[T_n(r)-rT_n(1)]}_r∈[η,1]⇒ 0.
Furthermore, in view of (<ref>), we obtain
n^κ_MT^C_n(r)=
n^κ_Mr(K_d+o_up(1))[d^2(μ̂^(2)_r,μ^(1))+d^2(μ̂^(1)_r,μ^(2))]+o_up(1),
By arguments below (<ref>), we know that
n^κ_Md^2(μ̂^(2)_r,μ^(1))=Δ_M+o_up(n^κ_M-1/2)=Δ_M+o_up(1).
And similarly, n^κ_Md^2(μ̂^(1)_r,μ^(2))=Δ_M+o_up(1). We thus obtain that
{n^κ_M T^C_n(r)-rT^C_n(1)}_r∈[η,1]⇒ 0,
and
n^κ_MT_n^C(1)→_p 2K_dΔ_M.
Therefore, (<ref>) and (<ref>) implies that the denominator of (<ref>) converges to 0 in probability, while (<ref>) implies the numerator of (<ref>) is larger than a positive constant in probability, we thus obtain D_n,2→_p∞.
Summarizing the cases of κ_V and κ_M, and by continuous mapping theorem, we get the result.
§.§ Proof of <Ref>
When γ_1=γ_2=1/2, it can be shown that
ξ_γ_1,γ_2(r;σ_1,σ_2)=2σ_1B^(1)(r/2)-2σ_2B^(2)(r/2)=_d √(2σ_1^2+2σ_2^2-4ρσ_1σ_2)B(r);
and when ρ=0.
ξ_γ_1,γ_2(r;σ_1,σ_2)=_d √(σ_1^2/γ_1+σ_2^2/γ_2)B(r).
The result follows by the continuous mapping theorem.
§.§ Proof of Theorems in Section <ref>
With a slight abuse of notation, we define ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η_2 } and 𝒥_η={(r;a,b): 0≤ a<r<b≤ 1, b-r≥η_2, r-a≥η_2 }.
§.§ Proof of Theorem <ref>
For (r;a,b)∈𝒥_η, we note that
√(n)T_n(r;a,b)
= √(n){(r-a)(b-r)/(b-a)(V̂_[a, r]-Ṽ_[a,r]+Ṽ_[a,r]-V)}
-√(n){(r-a)(b-r)/(b-a)(V̂_[r, b]-Ṽ_[r, b]+Ṽ_[r, b]-V)}.
By Lemma <ref> we know that sup_(a,r)∈ℐ_η√(n)|V̂_[a, r]-Ṽ_[a,r]|=o_p(1), sup_(r,b)∈ℐ_η√(n)|V̂_[r,b]-Ṽ_[r,b]|=o_p(1), and by Assumption <ref>,
{√(n)(r-a)(Ṽ_[a,r]-V)}_(a,r)∈ℐ_η⇒{σ[B(r)-B(a)]}_(a,r)∈ℐ_η,
{√(n)(b-r)(Ṽ_[r,b]-V)}_(r,b)∈ℐ_η⇒{σ[B(b)-B(r)]}_(r,b)∈ℐ_η.
Hence,
{√(n)T_n(r;a,b)}_(r;a,b)∈𝒥_η
⇒ σ{ (b-r)/(b-a)[B(r)-B(a)]-(r-a)/(b-a)[B(b)-B(r)]}_(r;a,b)∈𝒥_η.
Furthermore, we note that
√(n)T_n^C(r;a,b)
= (b-r)/(b-a)n^-1/2{∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)]
- ∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[a,r])-d^2(Y_i, μ)]}
+ (r-a)/(b-a)n^-1/2∑_i=⌊n r⌋+1^⌊n b⌋ {[d^2(Y_i, μ̂_[a, r])-d^2(Y_i, μ)]
- ∑_i=⌊n r⌋+1^⌊n b⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)]}+o_up(1)
where o_up(1) is the rounding error due to [n(r-a)]^-1-[⌊ nr⌋-⌊ na⌋]^-1 and [n(b-r)]^-1-[⌊ nb⌋-⌊ nr⌋]^-1. Note by Lemma <ref>, we know that sup_(a,r)∈ℐ_ηd(μ̂_[a, r],μ)=O_p(n^-1/2) and sup_(r,b)∈ℐ_ηd(μ̂_[r, b],μ)=O_p(n^-1/2), hence by Lemma <ref> and <ref>, we obtain
sup_(r;a,b)∈𝒥_η|√(n)T_n^C(r;a,b)|=o_p(1).
The result follows by continuous mapping theorem.
§.§ Proof of Theorem <ref>
Note for any k=⌊ nη_1⌋,⋯,n-⌊ nη_1⌋, and i=1,2,
max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ D_n,i(⌊ nτ⌋).
We focus on k^*=⌊ nτ⌋. In this case, the left and right parts of the self-normalizer both come from stationary segments; hence, by arguments similar to those under ℍ_0,
{√(n)T_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒{σ_1𝒢_1(r; 0, τ) }_r∈[η_2,τ-η_2],
{√(n)T^C_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒ 0;
and
{√(n)T_n(r ; τ, 1)}_r∈[τ+η_2,1-η_2]⇒{σ_2𝒢_2(r;τ, 1) }_r∈[τ+η_2,1-η_2],
{√(n)T^C_n(r ; τ, 1)}_r∈[η_2,τ-η_2]⇒ 0,
where 𝒢_i(r;a,b)=(b-r)/(b-a)[B^(i)(r)-B^(i)(a)]-(r-a)/(b-a)[B^(i)(b)-B^(i)(r)] for i=1,2.
Hence, we only need to consider the numerator, where
√(n)T_n(τ;0,1)=√(n)τ(1-τ)(V̂_[0, τ]-V̂_[τ, 1]),
√(n)T_n^C(τ;0,1)=√(n)τ(1-τ)(V̂_[τ; 0, 1]^C-V̂_[0,τ]-V̂_[τ, 1]).
For √(n)T_n(τ;0,1), we have
√(n)T_n(τ;0,1)= √(n){τ(1-τ)(V̂_[0, τ]-Ṽ_[0,τ]+Ṽ_[0,τ]-V^(1))}
-√(n){τ(1-τ)(V̂_[τ, 1]-Ṽ_[τ, 1]+Ṽ_[τ, 1]-V^(2))}
+√(n)τ(1-τ)(V^(1)-V^(2))
= T_11+T_12+T_13.
By Lemma <ref>, we know that √(n)(V̂_[0,τ]-Ṽ_[0,τ])=o_p(1),
and by Assumption <ref>, we have √(n)τ(Ṽ_[0,τ]-V^(1))→_d σ_1B^(1)(τ).
This implies that
T_11→_d (1-τ)σ_1B^(1)(τ).
Similarly, we can obtain
T_12→_d -τσ_2[B^(2)(1)-B^(2)(τ)].
Hence, using the fact that √(n)(V^(1)-V^(2))=Δ_V, we obtain
√(n)T_n(τ;0,1)→_d(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V.
For √(n)T_n^C(τ;0,1) we have
√(n)T_n^C(τ;0,1)
= (1-τ)n^-1/2{ ∑_i=1^⌊n τ⌋ [d^2(Y_i, μ̂_[τ,1])- d^2(Y_i, μ^(1))]
- ∑_i=1^⌊n τ⌋[ d^2(Y_i, μ̂_[0,τ])- d^2(Y_i, μ^(1))]}
+τn^-1/2 {∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[0,τ])-d^2(Y_i, μ^(2))]
-∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[τ,1])-d^2(Y_i, μ^(2))]}+o_p(1)
:= T_21+T_22+T_23+T_24+o_p(1),
where o_p(1) is the rounding error due to (nτ)^-1-⌊ nτ⌋^-1 and [n(1-τ)]^-1-(n-⌊ nτ⌋)^-1.
Note by Lemma <ref>, we have d(μ̂_[0,τ],μ^(1))=O_p(n^-1/2), and by triangle inequality, we know that
d(μ̂_[τ,1],μ^(1))≤ d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))=O_p(n^-1/4).
Then, by Assumption <ref>, we know
T_21
= √(n)(1-τ)τK_d d^2(μ̂_[τ,1],μ^(1))
+(1-τ) d(μ̂_[τ,1],μ^(1))[n^-1/2∑_i=1^⌊nτ⌋g(Y_i,μ̂_[τ,1],μ^(1))/d(μ̂_[τ,1],μ^(1))]
+o_p(d(μ̂_[τ,1],μ^(1))+√(n)d^2(μ̂_[τ,1],μ^(1)))
= √(n)(1-τ)τK_dd^2(μ̂_[τ,1],μ^(1))+O_p(n^-1/4)+o_p(1).
Now, by triangle inequality, we know
√(n)[d(μ̂_[τ,1],μ^(2))-d(μ^(1),μ^(2))]^2≤√(n)d^2(μ̂_[τ,1],μ^(1))
≤√(n)[d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))]^2,
and note d(μ̂_[τ,1],μ^(2))=O_p(n^-1/2) by Lemma <ref>, we obtain √(n)d^2(μ̂_[τ,1],μ^(1))=Δ_M+o_p(1), and
T_21=(1-τ)τ K_dΔ_M+o_p(1).
By Lemma <ref>, T_22=o_p(1). Hence T_21+T_22=(1-τ)τ K_dΔ_M+o_p(1).
Similarly, we obtain that T_23+T_24=(1-τ)τ K_dΔ_M+o_p(1). Therefore,
√(n)T_n^C(τ;0,1)=2τ(1-τ)K_dΔ_M+o_p(1).
Hence, combining results of (<ref>)–(<ref>), we have
D_n,1(⌊nτ⌋) →_d[(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2/[∫_η_2^r-η_2 σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2 σ_2^2𝒢_2^2(u ; r, 1) d u]
:= 𝒮_η,1(τ;Δ),
and,
D_n,2(⌊ nτ⌋)
→_d [(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2+4[τ(1-τ)Δ_M]^2/[∫_η_2^r-η_2σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2σ_2^2𝒢_2^2(u ; r, 1) d u]
:= 𝒮_η,2(τ;Δ).
Therefore, we know that for the 1-α quantile of 𝒮_η, denoted by Q_1-α(𝒮_η), for i=1,2,
lim_n→∞ P(max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ Q_1-α(𝒮_η))
≥ lim_n→∞ P(D_n,i(⌊ nτ⌋)≥ Q_1-α(𝒮_η))
= P(𝒮_η,i(τ;Δ)≥ Q_1-α(𝒮_η)).
In particular,
lim_|Δ_V|→∞P(𝒮_η,1(τ;Δ)≥Q_1-α(𝒮_η))=1,
lim_max{|Δ_V|,Δ_M}→∞P(𝒮_η,2(τ;Δ)≥Q_1-α(𝒮_η))=1.
§.§ Proof of Theorem <ref>
Define the pointwise limit of μ̂_[a,b] under ℍ_a as
μ_[a,b]=
μ^(1), b≤τ
argmin_ω∈Ω{(τ-a)𝔼d^2(Y_t^(1),ω)+(b-τ)𝔼d^2(Y_t^(2),ω)}, a<τ<b
μ^(2), τ≤ a
Define the Fréchet variance and pooled contaminated variance under ℍ_a as
V_[a,b]=
V^(1) b≤τ
τ-a/b-a𝔼(d^2(Y_t^(1),μ_[a,b]))+b-τ/b-a𝔼(d^2(Y_t^(2),μ_[a,b])), a<τ<b
V^(2), τ≤ a,
and
V^C_[r;a,b]=
V^(1) b≤τ
τ-a/r-a𝔼(d^2(Y_t^(1),μ_[r,b]))+r-τ/r-a𝔼(d^2(Y_t^(2),μ_[r,b]))+𝔼(d^2(Y_t^(2),μ_[a,r])), a<τ≤r
𝔼(d^2(Y_t^(1),μ_[r,b]))+τ-r/b-r𝔼(d^2(Y_t^(1),μ_[a,r]))+b-τ/b-r𝔼(d^2(Y_t^(2),μ_[a,r])), r<τ<b
V^(2), τ≤a.
We want to show that
{T_n(r;a,b)}_(r;a,b)∈𝒥_η
⇒{T(r;a,b)}_(r;a,b)∈𝒥_η,
{T^C_n(r;a,b)}_(r;a,b)∈𝒥_η
⇒{T^C(r;a,b)}_(r;a,b)∈𝒥_η,
where
T(r;a,b)=(r-a)(b-r)/b-a(V_[a, r]-V_[r, b]),
T^C(r;a,b)=(r-a)(b-r)/b-a(V_[r ; a, b]^C-V_[a, r]-V_[r, b]).
To achieve this, we need to show (1). sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ_[a,b])=o_p(1); (2). sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]|=o_p(1); and (3). sup_(r;a,b)∈𝒥_η|V̂^C_[r;a,b]-V^C_[r;a,b]|=o_p(1).
(1). The cases when b≤τ and a≥τ follow by Lemma <ref>.
For the case when τ∈(a,b), recall
μ̂_[a, b]= argmin_ω∈Ω1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω)
= argmin_ω∈Ω{n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω)
+n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}.
By the proof of (1) in Lemma <ref>, for i=1,2, we have
{1/n∑_t=1^⌊ n u ⌋ d^2(Y_t^(i), ω)-u𝔼d^2(Y_t^(i),ω)}_ω∈Ω,u∈[0,1]⇒ 0,
which implies that
{n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω)
+n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}_ω∈Ω,(a,b)∈ℐ_η
⇒{τ-a/b-a𝔼(d^2(Y_t^(1),ω)+b-τ/b-a𝔼(d^2(Y_t^(2),ω))}_ω∈Ω,(a,b)∈ℐ_η.
By Assumption <ref>, and the argmax continuous mapping theorem (Theorem 3.2.2 in <cit.>), the result follows.
(2). The cases when b≤τ and a≥τ follow by Lemma <ref>. For the case when τ∈(a,b), we have, for some constant K>0,
sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]|
≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ |d^2(Y_t, μ̂_[a,b])-d^2(Y_t, μ_[a,b])|)
+sup_(a,b)∈ℐ_η|1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, μ_[a,b])-V_[a,b]|
≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ K|d(Y_t, μ̂_[a,b])-d(Y_t, μ_[a,b])|)+o_p(1)
≤ sup_(a,b)∈ℐ_ηKd(μ̂_[a,b],μ_[a,b])+o_p(1)=o_p(1)
where the second inequality holds by the boundedness of the metric and (<ref>), and the third inequality holds by the triangle inequality of the metric.
(3). The proof is similar to (2).
By continuous mapping theorem, we obtain that for i=1,2,
{D_n,i(⌊ nr⌋)}_r∈[η_1,1-η_1]⇒{D_i(r)}_r∈[η_1,1-η_1],
where
D_1(r)= [T(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2du,
D_2(r)= [T(r;0,1)]^2+[T^C(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2+[T^C(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2+[T^C(u;r,1)]^2du.
In particular, at r=τ, we obtain D_i(τ)=∞. Hence, to show the consistency of τ̂, it suffices to show that for any small ϵ>0, if |r-τ|>ϵ,
D_i(r)<∞.
By symmetry, we consider the case of r-τ>ϵ.
For r-τ>ϵ, we note that for both i=1,2,
sup_r-τ>ϵD_i(r)≤sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}/inf_r-τ>ϵ∫_η_2^r-η_2[T(u;0,r)]^2du.
By proof of Proposition 1 in <cit.>, we obtain that for some universal constant K>0,
sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}≤K(Δ^2_M+Δ^2_V)<∞.
Therefore, it suffices to show that there exists a function ζ(ϵ)>0, such that for any r-τ>ϵ,
∫_η_2^τ-η_2[T(u;0,r)]^2du>ζ(ϵ).
For r>τ, and for any u∈[η_2,τ-η_2],
T(u;0,r)
= u(r-u)/r(V^(1)-V_[u,r])
= u(r-u)/r[V^(1)-τ-u/r-u𝔼(d^2(Y_t^(1),μ_[u,r]))-r-τ/r-u𝔼(d^2(Y_t^(2),μ_[u,r]))]
= u(r-u)/r[V^(1)-V(τ-u/r-u)].
By Assumption <ref>, we can obtain that
|T(u;0,r)|>u(r-u)/rφ(ϵ/r-u)≥η_2^2φ(ϵ).
Hence, we can choose ζ(ϵ)=η_2^6φ^2(ϵ).
§ EXAMPLES
As mentioned in the main text, since d^2(Y_t,ω) takes values in ℝ for any fixed ω∈Ω, both Assumptions <ref> and <ref> can be implied by high-level weak temporal dependence conditions in conventional Euclidean space. Therefore, we only discuss the verification of Assumptions <ref>, <ref> and <ref> in what follows.
§.§ Example 1: L_2 metric d_L for square integrable functions defined on [0,1]
Let Ω be the Hilbert space of all square integrable functions defined on I=[0,1] with inner product ⟨ f,g⟩=∫_If(t)g(t)dt for two functions f,g∈Ω. Then, for the corresponding norm f=⟨ f,f⟩^1/2, L_2 metric is defined by
d_L^2(f,g)=∫_I[f(t)-g(t)]^2dt.
Assumptions <ref> and <ref> follow easily from the Riesz representation theorem and the convexity of Ω. We only consider Assumption <ref>.
Note that
d_L^2(Y,ω)-d_L^2(Y,μ)= ∫_0^1 [ω(t)-μ(t)][ω(t)+μ(t)-2Y(t)]dt
= d_L^2(ω,μ)+2∫_0^1 [ω(t)-μ(t)][μ(t)-Y(t)]dt
:= d_L^2(ω,μ)+g(Y,ω,μ),
and R(Y,ω,μ)≡ 0.
Furthermore,
|n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)|
= |2∫_0^1 [ω(t)-μ(t)]n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]dt|
≤ 2d_L(ω,μ) {∫_0^1 |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]|^2dt}^1/2,
where the inequality holds by Cauchy-Schwarz inequality.
By the boundedness of d_L(ω,μ), Assumption <ref> then follows if
sup_t∈[0,1]sup_(a,b)∈ℐ_η|n^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋[Y_i(t)-μ(t)]|=O_p(1),
which holds under general weak temporal dependence for functional observations, see, e.g. <cit.>.
§.§ Example 2: 2-Wasserstein metric d_W of univariate CDFs
Let Ω be the set of univariate CDFs on ℝ, and consider the 2-Wasserstein metric defined by
d_W^2(G_1,G_2)=∫_0^1 (G_1^-1(t)-G_2^-1(t))^2dt,
where G_1^-1 and G_2^-1 are the quantile functions corresponding to G_1,G_2∈Ω.
The verification of Assumption <ref> and <ref> can be found in Proposition C.1 in <cit.>. Furthermore, by similar arguments as Example 1, Assumption <ref> holds under weak temporal dependence conditions, see <cit.>.
§.§ Example 3: Frobenius metric d_F for graph Laplacians or covariance matrices
Let Ω
be the set of graph Laplacians or covariance matrices of a fixed dimension r, with uniformly bounded diagonals, and equipped with the Frobenius metric d_F, i.e.
d_F^2(Σ_1,Σ_2)=tr[(Σ_1-Σ_2)^⊤(Σ_1-Σ_2)].
for two r× r matrices Σ_1 and Σ_2.
The verification of Assumption <ref> and <ref> can be found in Proposition C.2 in <cit.>. We only consider Assumption <ref>.
Note that
d_F^2(Y,ω)-d_F^2(Y,μ)= tr(ω-μ)^⊤(ω+μ-2Y)
= d_F^2(ω,μ)+2tr(ω-μ)^⊤(μ-Y)
:= d_F^2(ω,μ)+g(Y,ω,μ),
and R(Y,ω,μ)≡ 0.
Furthermore, by Cauchy-Schwarz inequality,
|n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)|
= 2|tr[(ω-μ)^⊤n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋(Y_i-μ)]|
≤ 2d_F(ω,μ) d_F(n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i-μ],0).
By the boundedness of d_F(ω,μ), Assumption <ref> then follows if
sup_(a,b)∈ℐ_ηn^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋vec(Y_i-μ)=O_p(1),
which holds under common weak dependence conditions in conventional Euclidean space.
§.§ Example 4: Log-Euclidean metric d_E for covariance matrices
Let Ω be the set of all positive-definite covariance matrices of dimension r, with uniformly both upper and lower bounded eigenvalues, i.e. for any Σ∈Ω, c≤λ_min(Σ)≤λ_max(Σ)≤ C for some constant 0<c<C<∞. The log-Euclidean metric is defined by d_E^2(Σ_1,Σ_2)=d_F^2(log_mΣ_1,log_mΣ_2), where log_m is the matrix-log function.
Note that log_mΣ has the same dimension as Σ, hence the verification of Assumptions <ref>, <ref> and <ref> follows directly from Example 3.
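A minimal sketch of d_E, assuming SciPy is available for the matrix logarithm:

```python
import numpy as np
from scipy.linalg import logm

def d_E(S1, S2):
    """Log-Euclidean distance between two positive-definite covariance matrices."""
    L1, L2 = np.real(logm(S1)), np.real(logm(S2))   # matrix logs (real for PD inputs)
    return np.linalg.norm(L1 - L2, ord="fro")       # Frobenius metric on the log scale
```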
§ FUNCTIONAL DATA IN HILBERT SPACE
Our proposed tests and the DM test are also applicable to the inference of functional data in a Hilbert space, such as L_2[0,1], since the norm of the Hilbert space naturally induces the distance metric d. In this sense, our methods can be regarded as fully functional <cit.>, since no dimension reduction is required. In this section, we further compare them with the SN-based testing procedure of <cit.> for comparing two sequences of temporally dependent functional data, i.e. {Y_t^(i)}_t=1^n_i, i=1,2, defined on [0,1]. The general idea there is to first apply FPCA, and then compare score functions (for the mean) or covariance operators (for the covariance) between the two samples in the space spanned by the leading K eigenfunctions. The SN technique is also invoked to account for unknown temporal dependence.
Although the test statistic in <cit.> targets the difference in the covariance operators of {Y_t^(1)} and {Y_t^(2)}, their test can be readily modified to test the mean difference.
To be specific, denoting by μ^(i) the mean function of Y_t^(i), t=1,⋯,n_i, i=1,2, we are interested in testing
ℍ_0: μ^(1)(x)=μ^(2)(x), ∀ x∈[0,1].
We assume the covariance operator is common for both samples, which is denoted by C_p.
By Mercer’s Lemma, we have
C_p=∑_j=1^∞λ_p^jϕ_p^j⊗ϕ_p^j,
where {λ^j_p}_j=1^∞ and {ϕ^j_p}_j=1^∞ are the eigenvalues and eigenfunctions respectively.
By the Karhunen-Loève expansion,
Y_t^(i)=μ^(i)+∑_j=1^∞η_t,j^(i)ϕ^j_p, t=1,⋯,n_i; i=1,2,
where {η_t,j^(i)} are
the principal components (scores) defined by η_t,j^(i)=∫_[0,1]{Y_t^(i)-μ^(i)}ϕ^j_p(x)dx=∫_[0,1]{Y_t^(i)-μ_p+μ_p-μ^(i)}ϕ^j_p(x)dx with μ_p=γ_1μ^(1)+γ_2μ^(2).
Under ℍ_0, μ^(1)=μ^(2)=μ_p, and η_t,j^(i) should have mean zero. We thus build the SN-based test by comparing empirical estimates of the score functions. Specifically, define the empirical covariance operator based on the pooled samples as
Ĉ_p= 1/(n_1+n_2)(∑_t=1^n_1𝒴^(1)_t+∑_t=1^n_2𝒴^(2)_t),
where 𝒴^(i)_t= Y_t^(i)⊗ Y_t^(i), i=1,2. Denote by {λ̂^j_p}_j=1^∞ and {ϕ̂^j_p}_j=1^∞ the corresponding eigenvalues and eigenfunctions. We define the empirical scores (projected onto the eigenfunctions of pooled covariance operator) for each functional observation as
η̂^(i)_t,j=∫_[0,1]{Y_t^(i)(x)-μ̂_p(x)}ϕ̂^j_p(x)dx, t=1,⋯,n_i; i=1,2; j=1,⋯, K,
where μ̂_p=(∑_t=1^n_1Y_t^(1)+∑_t=1^n_2Y_t^(2))/(n_1+n_2) is the pooled sample mean function.
Let η̂^(i,K)_t=(η̂^(i)_t,1,⋯,η̂^(i)_t,K)^⊤, and let α̂^(K)(r)=(⌊ rn_1⌋)^-1∑_t=1^⌊ rn_1⌋η̂^(1,K)_t-(⌊ rn_2⌋)^-1∑_t=1^⌊ rn_2⌋η̂^(2,K)_t denote the difference of the recursive subsample means of the empirical scores. We then consider the test statistic
ZSM=
n[α̂^(K)(1)]^⊤{∑_k=1^nk^2/n^2[α̂^(K)(k/n) -α̂^(K)(1)][α̂^(K)(k/n)-α̂^(K)(1)]^⊤}^-1[α̂^(K)(1)],
and under ℍ_0 with suitable conditions, it is expected that
ZSM→_d
B_K(1)^⊤{∫_0^1(B_K(r)-r B_K(1))(B_K(r)-r B_K(1))^⊤d r}^-1 B_K(1),
where B_K(·) is a K-dimensional vector of independent Brownian motions.
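A rough sketch of how the ZSM statistic could be computed from two samples of discretized curves (rows = curves evaluated on a common grid). Normalization constants of the discretized eigenfunctions are glossed over, we take n = n_1 + n_2 in the self-normalizer, and the function and variable names are ours, not the implementation of <cit.>.

```python
import numpy as np

def zsm_statistic(Y1, Y2, K=2):
    """Self-normalized two-sample mean statistic from discretized curves."""
    n1, n2 = len(Y1), len(Y2)
    n = n1 + n2
    pooled = np.vstack([Y1, Y2])
    mu_p = pooled.mean(axis=0)                            # pooled sample mean function
    C = (pooled - mu_p).T @ (pooled - mu_p) / n           # discretized pooled covariance
    _, vecs = np.linalg.eigh(C)
    phi = vecs[:, ::-1][:, :K]                            # leading K eigenvectors
    eta1 = (Y1 - mu_p) @ phi / Y1.shape[1]                # empirical scores (Riemann sums)
    eta2 = (Y2 - mu_p) @ phi / Y2.shape[1]

    def alpha(r):                                         # difference of recursive means
        m1, m2 = max(int(r * n1), 1), max(int(r * n2), 1)
        return eta1[:m1].mean(axis=0) - eta2[:m2].mean(axis=0)

    a1 = alpha(1.0)
    V = np.zeros((K, K))
    for k in range(1, n + 1):                             # self-normalizer
        diff = alpha(k / n) - a1
        V += (k / n) ** 2 * np.outer(diff, diff)
    return n * a1 @ np.linalg.solve(V, a1)
```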
Consider the following model taken from <cit.>,
Y_t(x)= ∑_j=1^3{ξ^j, 1_t √(2) sin(2 πj x)+ξ^j, 2_t √(2) cos(2 πj x)}, t=1,2, …,n_1
where the coefficients ξ_t=(ξ^1,1_t, ξ^2,1_t, ξ^3,1_t, ξ^1,2_t, ξ^2,2_t, ξ^3,2_t)^' are generated from a VAR process,
ξ_t= ρξ_t-1+√(1-ρ^2) e_t, e_t i.i.d.∼ 𝒩(0,1/2 diag(𝐯)+1/2 1_6)∈ℝ^6
with v=(12, 7, 0.5, 9, 5, 0.3)^⊤.
To compare the size and power performance, we generate independent functional time series {Y_t^(1)} and {Y_t^(2)} from the above model, and modify {Y_t^(2)} according to the following settings:
* Case 1m: Y_t^(2)(x)= Y_t(x)+20δ_1 sin(2π x), x∈[0,1];
* Case 1v: Y_t^(2)(x)= Y_t(x)+20δ_2η_t sin(2π x), x∈[0,1];
* Case 2m: Y_t^(2)(x)= Y_t(x)+20δ_1 x, x∈[0,1];
* Case 2v: Y_t^(2)(x)= Y_t(x)+20δ_2η_t x, x∈[0,1];
* Case 3m: Y_t^(2)(x)= Y_t(x)+20δ_1 1(x∈[0,1]);
* Case 3v: Y_t^(2)(x)= Y_t(x)+20δ_2η_t 1(x∈[0,1]);
where η_t i.i.d.∼𝒩(0,1) and δ_1,δ_2∈[0,0.3].
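A sketch of this data-generating process under Case 1m, with illustrative grid size, sample size, ρ, and δ_1 (these are placeholders, not the exact simulation settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, rho, delta1 = 400, 101, 0.4, 0.1
x = np.linspace(0.0, 1.0, T)
v = np.array([12, 7, 0.5, 9, 5, 0.3])
Sigma = 0.5 * np.diag(v) + 0.5 * np.ones((6, 6))     # covariance of the innovations e_t
basis = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * (j + 1) * x) for j in range(3)]
                  + [np.sqrt(2) * np.cos(2 * np.pi * (j + 1) * x) for j in range(3)])

def sample_curves(n):
    xi, Y = np.zeros(6), np.empty((n, T))
    for t in range(n):
        e = rng.multivariate_normal(np.zeros(6), Sigma)
        xi = rho * xi + np.sqrt(1 - rho ** 2) * e        # VAR(1) coefficient process
        Y[t] = xi @ basis
    return Y

Y1 = sample_curves(n)
Y2 = sample_curves(n) + 20 * delta1 * np.sin(2 * np.pi * x)   # Case 1m mean shift
```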
The size performance of all tests is evaluated by setting δ_1=δ_2=0.
As for the power performance, Cases 1m-3m with δ_1∈(0,0.3] correspond to alternatives caused by mean differences, and Cases 1v-3v with δ_2∈(0,0.3] correspond to covariance operator differences. In particular, we note that the alternatives of Cases 1m and 1v depend on the signal function f(x)=sin(2π x), x∈[0,1], which lies in the space spanned by the eigenfunctions of Y_t(x), whereas for Cases 3m and 3v the signal function f(x)=1(x∈[0,1]) is orthogonal to these eigenfunctions.
We denote the two-sample mean test and the covariance operator test based on <cit.> as ZSM and ZSV, respectively. The empirical sizes of all tests are reported in Table <ref> at nominal level α=5%. From this table, we see that (a) D_1 has accurate size across all model settings and D_2 is generally reliable for moderate dependence levels, albeit with some oversizing for small n when ρ=0.7; (b) DM suffers from severe size distortion in the presence of temporal dependence, even for large n; (c) although both ZSM and ZSV utilize SN to robustify the tests against temporal dependence, their performances depend heavily on the user-chosen parameter K and still suffer from size distortion when n is small. In particular, the size distortion for K=4 is considerably larger than that for K=2 in the presence of temporal dependence.
Figure <ref> further compares their size-adjusted powers when n_1=n_2=400 and ρ=0.4. As can be seen, D_1 possesses trivial power against mean differences, while D_2 is rather stable in all settings, with evident advantages in Cases 2m and 3m. In contrast, the power performances of DM, ZSM, and ZSV vary across settings. For example, when the alternative signal function lies in the span of the leading eigenfunctions, i.e., Cases 1m and 1v, ZSM and ZSV with K=2 deliver the (second-)best power, as expected, while they are dominated by the other tests when the alternative signal function is orthogonal to the eigenfunctions, as in Cases 3m and 3v. As for DM, it is largely dominated by D_2 for mean differences, although it exhibits a moderate advantage over D_2 for covariance operator differences.
In general, whether the difference in mean/covariance operator is orthogonal to the leading eigenfunctions, or lack thereof, is unknown to the user.
Our test D_2 is robust to unknown temporal dependence, exhibits quite accurate size and delivers comparable powers in all settings, and thus should be preferred in practice.
|
http://arxiv.org/abs/2307.11761v1 | 20230714092016 | Fairness of ChatGPT and the Role Of Explainable-Guided Prompts | ["Yashar Deldjoo"] | cs.CL | ["cs.CL", "cs.AI"] |
Fairness of ChatGPT and the Role Of Explainable-Guided Prompts
Accepted to the workshop on Challenges and Opportunities of Large Language Models in Real-World Machine Learning Applications, COLLM@ECML-PKDD'23.
Yashar Deldjoo
Polytechnic University of Bari, Italy
[email protected]
August 12, 2023
================================================================
Our research investigates the potential of Large-scale Language Models (LLMs), specifically OpenAI's GPT, in credit risk assessment—a binary classification task. Our findings suggest that LLMs, when directed by judiciously designed prompts and supplemented with domain-specific knowledge, can parallel the performance of traditional Machine Learning (ML) models. Intriguingly, they achieve this with significantly less data—40 times less, utilizing merely 20 data points compared to the ML's 800. LLMs particularly excel in minimizing false positives and enhancing fairness, both being vital aspects of risk analysis. While our results did not surpass those of classical ML models, they underscore the potential of LLMs in analogous tasks, laying a groundwork for future explorations into harnessing the capabilities of LLMs in diverse ML tasks.
§ INTRODUCTION AND CONTEXT
Motivation. Recent advancements in large language models such as OpenAI's GPT <cit.>, Google's PALM <cit.>, and Facebook's LaMDA <cit.> have redefined the landscape of Artificial Intelligence (AI). These behemoth models utilize billions of parameters and capitalize on the vastness of the internet data for training, leading to the generation of accurate and high-quality content. Large-scale Language Models (LLMs) have shown outstanding performance across tasks such as health diagnostics, job-seeking, and risk assessment, among others <cit.>. Given the transformative potential of these systems in decision-making across various contexts, their trustworthiness has drawn substantial attention. Unlike conventional ML models, LLMs leverage immense data scales, far surpassing those typically used for pretraining smaller or mid-scale models. This data, often sourced from the internet, mirrors societal norms but can also propagate prevalent societal biases. If unchecked, these biases can amplify, leading to biased outcomes that unfairly affect certain individuals or demographics.
A crucial aspect of harnessing these systems is through prompt engineering, highlighted in this work. This technique mitigates the need for extensive dedicated training and offers system designers a measure of control over the model's behavior by enabling direct infusion of their insights into the learning process. While our focus is a case study on ChatGPT, the insights, and methodologies could potentially extend to other LLM types, an avenue we plan to explore in future work.
Contributions. This paper provides preliminary insights from an ongoing larger project aiming to address the challenges associated with the use of LLMs, particularly in decision-making processes through prompt engineering. We demonstrate the ability to harness the potential of utilizing pre-trained models for downstream ML tasks, thereby eliminating the need for dedicated model training. By meticulously designing prompts that embody problem-specific instructions and contexts, we direct these models toward achieving desired objectives, such as enhancing prediction accuracy and minimizing risk (and/or mitigating unfair outcomes). Moreover, we underscore the significance of incorporating domain-specific knowledge from experts, obtained through an apriori ML phase, as a powerful tool to improve the quality and effectiveness of the prediction tasks to a considerable degree. Our contributions can be summarized as follows:
* OpenAI-ML Application: We exemplify the application of OpenAI's GPT for specific ML tasks, focusing on credit risk assessment as a case study;
* Prompt Engineering: We investigate the impact of different prompts and their parameters on the outcomes of ML tasks;
* Domain Knowledge Integration: We propose a method for enhancing openAI-ML model accuracy by integrating optimal features, as identified by the ML models employed a priori. This demonstrates how leveraging feature importance can boost model performance when used as domain knowledge;
* Bias of Classical-ML vs. OpenAI-ML: Contrary to the approach of <cit.> that focuses on aggregate metrics, we scrutinize biases in OpenAI ML models, using gender as a case study. We assess gender fairness not through aggregate metrics but by comparing distributions via bootstrap sampling and evaluating results with statistical significance tests.
Our research is aimed at providing a guide for utilizing LLMs in ML tasks, with a primary focus on enhancing accuracy through prompt engineering and assessing its impact on fairness outcomes. We expand on previous work by Li et al. <cit.>, where fairness-based prompt engineering was conducted across several datasets. We demonstrate how the accuracy of these systems can be substantially enhanced and supplement this with a detailed fairness analysis using statistical measures. The code for the developed system is available at <https://github.com/yasdel/ChatGPT-FairXAI>.
§ OPENAI-ML FRAMEWORK FOR CREDIT ASSESSMENT TASK
We utilize ChatGPT-3.5-Turbo via the chat-completion API, chosen for its outstanding text generation, speed, and cost-effectiveness. Converting the ML task into a chat conversation with ChatGPT is pivotal for prediction, ensuring the model responds in binary format – yes or no. Constructing prompts, which link the model and task, demands understanding of both context and capabilities. Effective prompts harness the model's comprehension skills for complex tasks, though their creation is challenging. Fig. <ref> visually depicts the process for designing prompts for downstream ML tasks.
Prompt Construction. This technique provides task-oriented instruction for the OpenAI model, delivering the necessary context. Our method starts with Part 1. Task Instruction, where we guide the model on its task, then Part 2. In-context Examples to boost predictions. In Part 3. Attribute Description, we detail task-specific features. This is followed by Part 4. Integration of Domain Knowledge, strategically incorporated to improve model comprehension and accuracy. The final stage is Part 5. Formulation of a Question/Problem, framing the task or query at hand. Note that the Integration of Domain Knowledge is strategically included to enhance the model's understanding and prediction accuracy (cf. Section <ref>).
Part 1: Task Instruction.
Evaluate the credit risk based on given attributes. If good, respond with '1', if bad, respond with '0'.
Part 2: In-Context Example.
Here's an example: a customer with a good checking account history, a credit duration of 12 months, no history of bad credit, and a purpose of car loan, requested a credit amount of $5000. The system evaluated the risk as good and responded with '1'.
Part 3: Attribute Description.
Consider each attribute:
* Checking-account: Existing status
* Duration: Credit duration (months)
* Credit-history
* Purpose: (car, furniture, etc.)
* Credit-amount
Part 4: Domain Knowledge Integration.
dk2: Important features in evaluating credit risk often include Checking-account, Foreign-worker, Other-installment, Other-debtors, Credit-history, Credit-amount, and Savings-account.
dk3: The order of features is important in evaluating credit risk. Important features in evaluating credit risk involve assessing each feature sequentially, starting with the Checking-account, then moving to Foreign-worker, Other-installment, Other-debtors, Credit-history, Credit-amount, Savings-account, Age, Purpose, and finally Duration.
Part 5: Final Task Question.
Based on the provided inputs and domain knowledge, is the credit risk good (1) or bad (0)?
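A minimal sketch of how such a prompt could be assembled and sent to gpt-3.5-turbo. The helper names are ours, the prompt text condenses the five parts above, and the exact client call depends on the version of the openai Python library (the legacy ChatCompletion interface is shown).

```python
import openai  # legacy 0.x-style client; newer library versions expose a different interface

def build_prompt(applicant: dict, domain_knowledge: str) -> str:
    """Assemble the prompt parts into a single user message."""
    attributes = ", ".join(f"{k}={v}" for k, v in applicant.items())
    return (
        "Evaluate the credit risk based on given attributes. "
        "If good, respond with '1', if bad, respond with '0'.\n"
        f"Domain Knowledge: {domain_knowledge}\n"
        f"Applicant attributes: {attributes}\n"
        "Based on the provided inputs and domain knowledge, "
        "is the credit risk good (1) or bad (0)?"
    )

def classify_credit_risk(applicant: dict, domain_knowledge: str) -> int:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the binary answer as deterministic as possible
        messages=[{"role": "user", "content": build_prompt(applicant, domain_knowledge)}],
    )
    answer = response["choices"][0]["message"]["content"].strip()
    return 1 if answer.startswith("1") else 0
```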
§.§ Domain knowledge Integration
In the context of ML tasks, domain knowledge is typically provided by a domain expert, such as a bank expert in the case of credit risk assessment. However, this domain knowledge can also be simulated by an ML model, which learns not only the relevance of individual features but also their interconnections. To evaluate the impact of this domain knowledge on task performance, a wide range of ML models were utilized as sources of domain knowledge for the OpenAI-ML prediction task. In particular, we introduced three categories of domain knowledge, as detailed in Table <ref>: dk0 represents the base case with no domain knowledge, where learning is purely data-driven; the odd dk settings, or Machine Learning Feature Importance (MLFI), refer to the scenario where the important features identified by the ML algorithms are provided; and the even dk settings, or MLFI-ord, additionally consider the order of the features.
§ EXPERIMENTAL SETUP
Task. This work focuses on binary classification within a credit assessment context in machine learning (ML). The task involves learning a function f: 𝒳→{0,1}, predicting a binary outcome y ∈{0,1} for each feature instance x ∈𝒳. The feature space, 𝒳, comprises a protected attribute G (e.g., age, race, sex) and all other attributes, 𝒳'. Together, they form the feature vector for each instance, 𝒳 = (G, 𝒳'). The outcome, y, denotes an individual's creditworthiness.
Hyperparameters and Models. We employed six ML models, each with a distinct set of hyperparameters. These were optimized using a randomized search cross-validation (CV) strategy, experimenting with 25 unique hyperparameters. This led to an extensive model tuning process involving numerous model iterations. We used a 5-fold CV (0.8, 0.2), with RandomizedSearchCV over 20 iterations. The exact hyperparameters depended on the specific model:
* : `n-estimators', `max-depth', `min-samples-split', `min-samples-leaf', `bootstrap'. (Total = 5)
* : `C', `penalty', `solver'. (Total = 3)
* : `hidden-layer-sizes', `activation', `solver', `alpha', `learning-rate', `max-iter'. (Total = 6)
* : `n-neighbors', `weights', `algorithm', `leaf-size', `p'. (Total = 5)
* : `n-estimators', `l-rate', `max-depth', `colsample-bytree'. (Total = 4)
* : `n-estimators', `learning-rate'. (Total = 2)
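A sketch of this tuning protocol for one model follows; a random forest is used purely for illustration (its hyperparameters match the first set above), and the search ranges and training arrays are placeholders.

```python
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Search space mirroring the first hyperparameter set above; ranges are placeholders.
param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": randint(2, 10),
    "min_samples_leaf": randint(1, 10),
    "bootstrap": [True, False],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,        # 20 sampled configurations, as in the setup above
    cv=5,             # 5-fold cross-validation
    scoring="f1",
    random_state=0,
)
# search.fit(X_train, y_train)  # X_train, y_train: the 80% training split
# best_model = search.best_estimator_
```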
Dataset. We used the German Credit dataset, a compact choice with 1,000 individuals and 21 attributes for creditworthiness classification. Cleaned by Le et al. <cit.>, this dataset aids banks in lending decisions; we use gender as the fairness-sensitive feature.
Bootstrap sampling. To address imbalances and distribution disparities between groups (e.g., Male vs. Female), we employed bootstrapping with 1000 resamples. Bootstrapping is a robust statistical technique that estimates sampling distributions through data resampling. By generating resampled datasets, calculating the mean disparity (here TPR) for each, and analyzing the resulting distributions, we assessed the statistical significance of the observed difference.
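A sketch of this bootstrap procedure for the TPR gap between two gender groups; the function and column conventions are ours, and the inputs are placeholders.

```python
import numpy as np

def tpr(y_true, y_pred):
    """True-positive rate on one (sub)sample."""
    pos = y_true == 1
    return np.mean(y_pred[pos] == 1) if pos.any() else np.nan

def bootstrap_tpr_gap(y_true, y_pred, group, n_boot=1000, seed=0):
    """Bootstrap distribution of TPR(male) - TPR(female) with 1000 resamples."""
    rng = np.random.default_rng(seed)
    idx_m = np.where(group == "male")[0]
    idx_f = np.where(group == "female")[0]
    gaps = np.empty(n_boot)
    for b in range(n_boot):
        bm = rng.choice(idx_m, size=len(idx_m), replace=True)
        bf = rng.choice(idx_f, size=len(idx_f), replace=True)
        gaps[b] = tpr(y_true[bm], y_pred[bm]) - tpr(y_true[bf], y_pred[bf])
    ci = np.percentile(gaps, [2.5, 97.5])
    p_value = 2 * min(np.mean(gaps <= 0), np.mean(gaps >= 0))  # crude two-sided p-value
    return gaps.mean(), ci, p_value
```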
§ RESULTS
Accuracy. Table <ref> presents a comparative analysis of the performance of various models equipped with different types of domain knowledge and machine learning algorithms. The performance metrics under consideration include precision (Pre.), recall (Rec.), F1 score (F1), false-positive cost (FP Cost), and false-negative cost (FN Cost). Given the context of credit risk assessment, the latter two metrics bear particular importance, with the false-positive cost being assigned a weight of 5 to reflect the higher financial risk associated with erroneously granting credit to an unworthy applicant.
OpenAI-based models are tested under different scenarios using Machine Learning Feature Importance (MLFI) and its ordered variant (MLFI-ord), which can be seen as an attempt to incorporate domain knowledge into the model. Notably, , using model under the MLFI scenario, achieves the highest precision, recall, and F1 score among all OpenAI-based models, which are 0.7305 in each metric. This suggests that using a combination of domain knowledge (MLFI) and the model, the OpenAI-based model can achieve balanced and competitive results (the results are comparable with LR, XGB, and in terms of Pre). Overall, when utilizing and as domain knowledge, we observe relatively high performance. It is noteworthy that these models performed exceptionally well in the classical ML part, especially considering their F1 scores. To our surprise, we did not observe a significant advantage when using an ordered feature introduction. Instructing ChatGPT to use ordered feature values often led to poorer performance in many cases.
However, when we compare the average values of classical models with those of OpenAI-based models, we see that the former outperforms the latter in all accuracy metrics: precision (0.7792 vs. 0.7129), recall (0.8822 vs. 0.6078), and F1 score (0.8302 vs. 0.6528). This suggests that, for this specific task, classical models are generally more effective. Importantly, it's worth noting that the classical machine learning models used approximately 80% of the available data (i.e., 800 samples) for training, while the OpenAI-ML models only utilized 20 samples as training examples. This means that the classical models had a significant data advantage, using the information at a rate 40 times greater than that of OpenAI-ML. Despite this disparity in data, the OpenAI-ML still produced competitive results, highlighting their potential and efficiency. Additionally, it is worth considering that OpenAI-ML offers the unique advantage of producing human-controllable outputs in the form of prompts that can be generated through a predefined course of action. An intriguing observation is the reduced False Positive (FP) cost associated with OpenAI models compared to traditional models—an aspect of utmost importance in credit risk assessment. This suggests that OpenAI models demonstrate a cautious approach in averting certain false alarms, a trait that could potentially be amplified through targeted instruction. However, this cautious stance might also make them more susceptible to overlooking true instances.
Fairness. The fairness analysis of the provided results, which leverages odd dk due to its superior preceding task performance, offers insightful conclusions. In contrast to machine learning models, certain prompts could achieve fair outcomes, i.e., a non-significant difference in the efforts required by different genders. Despite comparable performance to prompts, machine learning models, including , , , , , and , consistently rejected the null hypothesis, implying significant gender-based effort disparity and lack of fairness. Prompts, however, showcased more diverse results. For instance, and suggested relative fairness, whereas , , and indicated significant effort differences. Interestingly, some prompts like even reversed disparity direction, favoring the less privileged group. This underlines the potential of prompts to promote fairness, even if they do not always yield statistically significant results. Our analysis emphasizes the need for a holistic model evaluation approach, going beyond statistical metrics, to encompass performance and fairness implications for different demographics.
§ CONCLUSION
The study exhibits the benefits of Large-scale Language Models (LLMs), in particular OpenAI's ChatGPT, in Machine Learning tasks. It uses prompt engineering to optimize behavior and prediction accuracy, suggesting that these models could potentially perform comparably, if not better, than traditional ML models. Integration of domain knowledge shows an interesting impact on accuracy and gender fairness enhancement, setting the basis for broader future investigations. Future exploration requires prompt design optimization, providing ample reasoning time for the system through concepts such as Chain-of-Thought Prompting <cit.>, and fine-tuning methodologies. The incorporation of methods for explaining system decisions, as well as the application of GPT-based systems to recommender systems while mitigating biases, are critical considerations for further exploration <cit.>.
|
http://arxiv.org/abs/2307.05840v1 | 20230711232649 | Assessing individual risk and the latent transmission of COVID-19 in a population with an interaction-driven temporal model | ["Yanir Marmor", "Alex Abbey", "Yuval Shahar", "Osnat Mokryn"] | stat.AP | ["stat.AP", "cs.SI", "physics.soc-ph"] |
Assessing individual risk and the latent transmission of COVID-19 in a population with an interaction-driven temporal model
Yanir Marmor, Alex Abbey, Yuval Shahar, Osnat Mokryn
July 11, 2023
================================================================
§ INTRODUCTION
The SARS-CoV-2 pandemic, like other diseases, spread differently in different countries and communities <cit.>. Disease progression results from the interplay between the population's complex interaction dynamics, which are associated with the population’s physical contacts’ network, and the disease dynamics <cit.>, as well as characteristics such as the population age <cit.>.
Here, we present in detail an SEIR-like interactions-based contagion model (ICM) of airborne diseases for COVID-19 <cit.> over real-world interaction data that is enriched with personal disease progression details that depend on the individual susceptibility to the disease. The model is termed Interactions-based Contagion Model with Individual outcomes (ICMI). The individual susceptibility determines the severity of the disease for an infected individual. For COVID-19, the probability of an individual contracting a severe form of the disease, referred to as their susceptibility to the disease, is a function of a person's age <cit.>.
Accurately modeling disease progression requires considering the time-respecting paths, which are the sequence and order of these interactions between the members <cit.>. Contacts' temporal ordering and dynamics are crucial for understanding the transmission of infectious diseases. The interactions' temporal path ordering was shown to affect the spreading dynamics <cit.>. Further, considering the accurate structure of human interactions is pivotal for correctly predicting the spread of epidemics, such as the Covid-19 disease <cit.>.
To model the disease progression in a real-life community, we use the real-world encounters data from the Copenhagen Networks Study (CNS) dataset <cit.>. The use of real-world interaction data as the dataset for our analysis allows for a person-to-person spread of disease at the level at which it occurs in its actual temporal and local contexts <cit.>.
An investigation of the transmission of a severe acute respiratory syndrome (SARS) in a 2003 Toronto-area community outbreak found that “longer and closer proximity exposures incurred the highest rate of disease” <cit.>. Thus, the duration of the interaction is crucial for correctly simulating the airborne transmission of various pathogens.
A key feature of ICM is that the interaction's duration is correlated positively with the latent transmitted viral load if the encounter is with an infected person.
Duration of interaction as a proxy for the transmitted viral load was used for estimating viral infection between animals <cit.> and humans <cit.>.
Smieszek <cit.> noted that the time people spend interacting with each contact person decreases as the number of contact persons increases. The suggested model takes the length of the exposure into account while following real-world interactions of various scales that exist in the CNS dataset, as the proximity information enables the detection of gatherings <cit.>.
Combining the above with individual susceptibility, the ICMI model presents three significant contributions.
First, it extends the capabilities of ICM <cit.> by taking individual outcomes into account, enabling it to predict outcomes at the community and individual levels. With ICMI, the daily and average expected percentage of needed hospital beds can be predicted for communities that leverage digital proximity tracing applications <cit.>. In addition, the model considers the probability of symptomatic or severe infection in correlation with the individual's age, using a normalized personal susceptibility parameter, s_i, that correlates with age. Thus, lower values of s_i correspond to younger people, who are more likely to be asymptomatic.
Second, ICMI enables us to model and predict individual risk of infection given personal daily exposure. Individual risk prediction projections are provided as a function of different variants' virality and individual vulnerability probability due to personal immune levels.
This prediction of individual risk not only enables individuals to plan their schedules to reduce their risk of infection but also presents a complementary paradigm to the current government-imposed non-pharmaceutical interventions (NPIs), providing an additional layer of personal control.
Third, the ICMI model contributes to the study of the spread of diseases caused by asymptomatic transmission.
Asymptomatic transmission of diseases such as COVID-19 is considered to be the Achilles' heel of the pandemic control <cit.> since asymptomatic individuals continue their normal social and travel activities while being hard to trace <cit.>.
The exact extent and impact of asymptomatic transmission are debatable <cit.>, and differences in the generation time of asymptomatic cases may affect estimates of transmission and of the spreading factor <cit.>. Here, we find that asymptomatic transmission is significant and influential only in relatively sparse networks; in dense networks its effect is mitigated, so it shapes the progression of the disease only in sparse communities.
Thus, taking into account the macroscopic daily outcomes of the microscopic interactions while considering individual susceptibility to severe disease enables us to predict outcomes at both the population and the personal level. The model further allows predicting individual risk of conducting meetings as a function of the virus characteristics and its prevalence in the population. We further showed that the enigmatic nature of asymptomatic transmission stems from the latent effect of the network density on this transmission and that asymptomatic transmission has a substantial effect only in sparse communities.
§ RESULTS
We ran numeric simulations over the CNS real-world interaction dataset, combining two processes. The first is the interaction model (ICM). The model considers the duration of the interactions, as is described in Equation <ref>. Figure <ref> shows the average daily meetings' duration in the CNS dataset. Considering the dataset consists of close to 700 individuals connected daily, the data demonstrate a skewed distribution graph, with many short meetings and a few very long ones. The model then accounts for the circadian nature of human behavior by considering the daily probability of not being infected in any of the encounters (Equation <ref>).
The second process incorporated is the personal disease progression modeling according to individual susceptibility, as described in the infection model presented in Fig. <ref>. The CNS dataset was recorded at a university and contains the readings of 700 students. The interactions are probably denser than those of a small metropolitan area <cit.>. The ICMI model, however, assigns heterogeneous personal vulnerability values as described in each experiment. Personal vulnerability correlates with age: s_i is a normalized parameter, s_i ∈ [0, 1], with low values of s_i denoting very young people (kids) and high values denoting the very old.
The two processes are combined in each of the simulations, and each experiment is the result of 200 iterations. The simulation code is written in Python and is freely available: <https://github.com/ScanLab-ossi/covid-simulation>.
§.§ COVID-19 disease progression with individual outcomes
We start with a simulation of the disease progression over the real-world CNS temporal network. Given a variant with a minimal exposure latency D_min (the minimal encounter duration with an infectious node that suffices to infect), we compute the disease progression as well as individual outcomes in a community. Here, we start with one infectious initial node, i.e., a single patient zero. P_max, the maximal probability of getting infected given exposure, is set to 0.5, and the population is young (s_i values are small), such that 80% of the population is asymptomatic (all parameters are configurable).
During each day, for each node i, interactions with Infectious nodes, either presymptomatic or asymptomatic, that are longer than D_min are examined. At the end of each day, according to Eq. <ref> and Eq. <ref> a node either stays in state Susceptible or enters state Exposed and infected. Personal progression of the disease then follows the individual state machine depicted in Figure <ref>.
Figure <ref> depicts the disease progression over the CNS real-world temporal network for two different COVID-19 variants. Each experiment was the result of 200 iterations. Figure <ref> shows the progression of a fast variant (that is, D_min is low), for which every encounter with an infectious individual is long enough to infect, and Figure <ref> shows the disease progression and community outcome for a slower variant, in which only long exposures can infect (that is, D_min is high).
Individual outcomes can predict the number of daily hospital beds needed, denoted in red in the figures, and the expected death toll, if any.
We further see that more contagious variants, as is the case shown in Figure <ref>, create, even in young populations in which the majority of patients are asymptomatic, a higher load on the community hospitals.
§.§ Individual projection of infection as a function of the daily exposure
Considering the changing daily number of meetings and their lengths, we can approximate an individual's probability of infection given their daily exposure. We project here individual infection outcomes as a function of the daily number of meetings, their lengths, and contextual information. Contextual information can be considered a proxy for the virality of a variant and for the susceptibility of an individual's immune system to infection.
When considering meetings' duration, the minimum duration in the model, D_min, is used as a proxy for the virality of a variant, with very low D_min values corresponding to very infectious variants, and vice versa. The other global parameter in the model, P_max, is used here on a per-individual level, P_max,i, to denote the personal vulnerability of a person i to the virus. In this approximation, high P_max,i values correspond to highly vulnerable, e.g., immune-compromised people. Lower values correspond to a lower probability of getting infected, given maximal exposure. In both experiments, the probability of getting infected is computed. The severity of the infection would then depend on the individual susceptibility.
In these two experiments, we examine the probability of becoming infected. This probability differs from the probability of being symptomatic or severely ill, which correlates with the personal susceptibility parameter, s_i. The probability of getting infected does not correlate with this parameter, of course <cit.>.
We generate random exposure data sampled from distributions fitted to the CNS real-world aggregated data. Aggregated information was taken such that an average daily outcome could be calculated. The average number of infected individuals differs on different days, depending on the spread of the virus in the community. Typically, a person would not be aware of the precise situation in their community. Hence, we consider the average of all the days in the dataset as the typical day over which we calculate individual outcomes.
In a time window τ, the information for the possible N people encountered by an average person during that time window was generated by randomly sampling from a distribution N ∼ ax^b fitted to the distribution of the CNS dataset (a≈ 0.051, b≈ -0.635, max{Cov[N]}≤4.4e^-5). Meetings lengths, d_ik^j, were generated from a discrete probability distribution based directly on the distribution of all encounters duration in the original dataset. Each experiment is the result of 200 iterations, and the results are depicted in Figures <ref> and <ref>, showing the personal projection of infection as a function of the daily exposure. Understandably, the number of contacts an individual is exposed to per day correlates highly with that individual's total exposure per day.
For an individual, the most basic configuration (D_min=0 and P_max=1) shows a close-to-linear relationship between the daily number of nodes one is exposed to, the total exposure duration, and P_infected.
Figure <ref> depicts the projected individual outcome for various levels of exposure to variants, approximated using the amount of viral load needed for infection. Higher D_min values correspond to a higher viral load needed for infection and, thus, to a less transmissible variant. Lower D_min values correspond to a highly transmissible variant. More encounters, even if short, increase the risk of infection even when the virus is less infectious. However, individuals can lower their risk of infection by lowering their daily exposure to less transmissible viruses.
Figure <ref> depicts the projected infection probability as a function of the daily exposure for various individual vulnerability levels. Here, the most transmissible variant was considered, i.e., enough viral load is transmitted even in very short exposures. Here, recovered or vaccinated people have a significantly lower probability of infection given similar exposure. Lowering P_max reduces P_infected by that factor, with the slope of the linear correlation dropping accordingly.
§.§ Temporal density mitigates the effect of asymptomatic infection
Modeling asymptomatic transmission can be achieved using the individual susceptibility parameter at the community level.
ICMI's detailed individual disease progression encompasses the individual susceptibility factor per node, s_i. At the community level, we define S⃗_c as the vector of personal susceptibility levels in a community, S⃗_c = (s_i), i ∈ [1..n], where n is the total number of individuals in the simulation. The distribution of community personal susceptibility levels, S⃗_c, determines the percentage of symptomatic and asymptomatic individuals in the simulation. Thus, it can be used to examine the effect of different levels of asymptomatic carriers in the population on the progress of the disease. As susceptibility correlates with age, younger populations will yield more asymptomatic patients.
To examine the effect of asymptomatic transmission in a population, we ran the simulation over the CNS dataset while varying the average age of the population, s.t., the percentage of asymptomatic among the infected varies between 10% to 90%. Each experiment was performed 200 times, with random placement of the initial patient zero. Figure <ref> depicts the results of the experiment. Surprisingly, we find that asymptomatic transmission does not have a significant effect in this experiment. This result aligns with the difficulty of assessing the true impact of asymptomatic transmission <cit.>.
This is also a sensitivity test for how transmission in a community differs when the population age varies.
In a previous study <cit.>, we demonstrated that the CNS network is very dense.
About two-thirds (∼64%) of the days have a temporal density of 0.2 or lower; that is, the overall number of interactions is at most 20% of the possible number of interactions. A quarter of the days have a temporal density of about 25%, and the remaining 11% of the days are very dense, reaching up to half of the maximal possible density, in which everybody meets everybody.
We perform the following experiments to understand the effect of temporal density on latent transmission due to asymptomatic people. We reduce the temporal density by splitting each day into k ∈ {2,3} pseudo-days, each containing 1/k of the interactions of the original day. Thus, for example, a network with half the density is depicted as twice as long in pseudo-days. We then repeat the experiment described above, changing the percentage of asymptomatic individuals in the population over the longer, less dense networks.
Figure <ref> demonstrates the effect of temporal density on the latent transmissibility of a disease. In each experiment, we changed the percentage of asymptomatic carriers in the population between 10% and 90%. Each experiment was performed 200 times, with random placement of the initial patient zero. Figure <ref> depicts the spread of the disease on a network with half the temporal density of the original one, showing a significant effect of the latent, asymptomatic transmission. Similarly, when the temporal density is one-third that of the original network, as is the case in Figure <ref>, the more asymptomatic the population, the more they contribute to the total infection rate.
Hence, we find here that asymptomatic transmission influences the total infection rate in sparse networks. However, increased density mitigates the effect of asymptomatic transmission, and in highly dense communities, we could expect the disease to spread fast, regardless of age distribution.
§ DISCUSSION
Here, we presented the ICMI model, which assesses the disease progression over real-world community interactions. The model follows the temporal dynamics of the interactions while taking into account the virus parameters and individual susceptibility to the disease. The model combines three aspects: following temporal interactions, considering the interaction duration as a proxy for the transmitted viral load, and incorporating individual susceptibility. We discuss each of these aspects below.
The last decade's abundance of temporal information paved the path to further understanding of the temporal dynamics of networks <cit.>.
Temporal networks have become the playground for inferring behavior in a plethora of areas, including, but not limited to, the inference of the effects of changes on the evolution of networks <cit.>; revealing hidden structures, such as the revelation of the structure of “co-presence” in a metropolitan or university from temporal daily encounters <cit.>; the temporal, spatial diffusion of information in social media <cit.>, and the behavior of viral processes over it <cit.>.
Incorporating temporal dynamics into existing viral models has proven challenging. Recently, Holme <cit.> suggested a fast implementation of a temporal SIR over temporal networks. The model, however, does not allow for dynamics representative of real-world interactions, e.g., it assumes exponential time to recovery. Accounting for large infection events, Cooper <cit.> extended the SIR model to consider the surges in the size of the susceptible population over time. Yet, they do not consider other population dynamics.
Recent agent-based models incorporated human dynamics such as inter-arrival times and heterogeneity of interactions <cit.>.
For example, OpenABM <cit.> created a simulated city environment of 1 million people and their dynamics, typical of households, schools, social interactions, etc. Their agent-based modeling was then used to evaluate different social distancing techniques in that environment. Unlike the presented ICMI model, they do not consider personal differences, i.e., individual susceptibility, although they discuss its importance for accurate modeling.
Agent-based models, however, do not maintain other real-world dynamics such as temporal path ordering.
Several recent models considered that the transmitted viral load can differ between interactions. For example, the rate of viral-load shedding in metropolitan transportation was estimated from an assumed load <cit.>. Viral load and meeting duration were also considered to understand the interplay between biological and social factors in the asymptomatic spread of the disease <cit.>.
The importance of considering personal differences was discussed by many. For example, age and comorbidity factors contributed to the appearance of symptoms and to early isolation after infection <cit.>. Specifically, the SIDARTHE model <cit.> differentiated between symptomatic and asymptomatic patients and considered the severity of their symptoms as a proxy for isolation. SIDARTHE used the following states: susceptible (S), infected (I), diagnosed (D), ailing (A), recognized (R), threatened (T), healed (H), and extinct (E). However, the model did not consider the temporal dynamics of human interactions nor the effect of the duration of the encounters.
Our model agrees with the findings of Peirlink <cit.> that if infectiousness is the same for both symptomatic and asymptomatic patients, the size of the asymptomatic population does not largely affect the overall outbreak dynamics. Other complementary findings to ours are those of Park <cit.> and Subramanian <cit.>, who have shown that a faster asymptomatic transmission rate increases the realized proportion of asymptomatic transmission. Yet these findings did not take into account how asymptomatic contagion is affected by the density of the population. Here, we showed that asymptomatic transmission has a substantial impact only in sparse communities.
ICMI considers the interaction's length as a proxy for the transmitted viral load. It computes the probability of infection at the end of the day. ICMI aggregates viral load only in the case of higher-order interactions, like group meetings. Otherwise, it does not consider different interactions as having an additive effect, as there is no evidence that viral load “remains” in-between interactions <cit.>. The CNS dataset was similarly used by Hambridge <cit.>, who devised a temporal interaction-based SEIR model to assess the effect of various interventions on the CNS data. Unlike ICMI, the work does not consider interactions' length, and assumes that multiple exposures on the same day increase the risk.
The ICMI model is the first to consider the interaction between the daily macroscopic dynamics and the microscopic interactions to predict the spreading dynamics in a population, given the various outcomes of getting infected within the population.
Given today's abundance of digital traces, the ICMI model enables policymakers to assess the disease progression using real-world interactions typical within their communities while considering the various pathogens. We have shown that by incorporating the individual outcomes and the community age distribution, policymakers could receive an estimation of the expected number of hospital beds required as the disease progresses in the community.
Devising a method for predicting individual outcomes as a function of daily exposure to a pathogen further gives individuals a personal planning tool for assessing the risk of attending meetings of various lengths, given the spread of the virus in the community. The method also enables policymakers to decide when it is the right time to restrict large gatherings and long meetings and to set general guidelines for the public.
The work has several limitations. The CNS dataset is the result of students' interactions at a university. These interactions are denser than typical social interactions, which may result in a faster-than-reality infection process <cit.>. We thus showed that the effect of asymptomatic latent transmission is negligible in dense networks but not in sparser ones. An additional limitation is that the analysis of latent transmission due to asymptomatic infections does not consider that symptomatic and asymptomatic people shed different viral loads <cit.>. This option is implemented in the code and is part of our future work.
§ CONCLUSIONS
The paper describes an SEIR-like interaction-driven contagion model of airborne disease for COVID-19 with individual outcomes (ICMI) that depend on age. It shows that ICMI, encompassing daily macroscopic dynamics with the microscopic level of interaction duration, enables outcomes prediction on both the population and the individual levels. It further allows for individual assessment of risk levels in different populations and can be used as a tool by policymakers. Using ICMI, we further showed that the effect of latent transmission due to asymptomatic infections depends highly on the structure and interaction density of the population’s social network and is higher in sparser structures.
In future work, we intend to use additional datasets obtained from community digital contact tracing or community structure as provided in <cit.> to explore the model further. The data would be augmented with the corresponding population age distribution to enable community-level predictions for real-life communities. Here, age was considered a proxy for personal vulnerability to the disease since we considered the recent COVID-19 pandemic a key example. However, in the case of other pathogens, other individual factors can be used.
§ METHODS
§.§ Interaction-driven contagion SEIR-like model with personal outcomes (ICMI)
The ICM with Individual outcomes (ICMI) model encompasses the emergent effects of the following three modeling dimensions:
* Real-life temporal interactions, modeled at a macroscopic level (daily). Here, we consider the topological structure of the interactions. Who met whom, and when, during each time window. Every day, we assess the likelihood of each node being exposed and infected within a given time window. This probability is the complement of the chance that the node avoids exposure during all of its encounters with infectious nodes in that time window.
* The duration of all interactions: how long each interaction lasted, and was it long enough to result in infection? Interaction duration is modeled at a microscopic level and correlates positively with the latent transmitted viral load, which, in turn, is positively correlated with the probability of getting infected <cit.>.
* Individual disease progression modeling: a personal susceptibility parameter mediates the progression and severity of the disease for infected individuals. Following the literature, for COVID-19, this parameter correlates with age <cit.>. Infected individuals, whether symptomatic or asymptomatic <cit.>, become infectious and can transmit the infection to others. Symptomatic individuals are removed from the network once symptoms appear. Recovered individuals cannot be reinfected with the same variant. The individual disease progression encompasses the population diversity, found to critically affect the spread of Covid-19 <cit.>.
§.§.§ The ICM model: contagion process without individual outcomes
Interacting nodes can be in one of the following states: Susceptible, Exposed, Infectious, and either Recovered or Removed. All nodes begin in state S, except for some initial infectious patient-zero nodes in state I. Nodes that interact with infectious nodes might become infected, entering state E, which stands for exposed and infected. Infected nodes become Infectious, thus entering state I. Infectious nodes in state I eventually transition to state R.
The transition between states S and E is probabilistic and immediate. The transition between states E and I and from I to R is merely a function of time.
In the model, the probability of being exposed is calculated at the end of each time window τ as the complement of the probability of not getting exposed and infected at any of the interactions during that day.
P_i^τ(S → E)= 1-∏_k ∈ N_i^τ(1- P_max)
Where N_i^τ is the subset of infected nodes in the time window τ that interacted with node i during that time window and thus might potentially expose it to the infection, and P_max is the probability of being infected during a maximal exposure.
§.§.§ The ICM model: encounters duration heterogeneity
The probability of infection is inversely correlated with distance and decreases dramatically with it, and is correlated with the duration at the exposure distance <cit.>. Hence, the likelihood of getting exposed and infected during each interaction with an infectious node is modeled as a Sigmoid function of the duration of the interaction.
At each encounter with an infectious node in time window τ, there is a probability for node i to get exposed and infected that is calculated as follows. Let d_i,k be a non-zero value for the strength of an edge that enters the focal node i from an infected node k, where k ∈ K. K is the set of infectious nodes that i encounters in time window τ. Here, the strength of an edge, d_i,k, corresponds to the duration of a meeting between the focal node i and an infectious neighbor node k ∈ K. Thus, the probability of node i becoming exposed and infected during an encounter with an infectious node k is as follows:
∀ k ∈ K, P_i,k=
P_ϵ if d_i,k < D_min,
d_i,k/D_max if D_min≤ d_i,k≤ D_max,
1 if d_i,k > D_max.
The model assigns an insignificant infection probability, P_ϵ, to encounters with infectious nodes that are shorter than the minimal time to infect, D_min. D_min is a property of a pathogen. For meetings longer than the minimal time to infect, the probability of infection is linear to the meeting duration, denoted as the strength of the link, d_i,k. Meetings longer than a maximal value, D_max, are considered as having the maximal probability of infection.
As COVID-19 variants, such as Alpha and Delta, are associated with different exposure levels of transmitted viral load <cit.>, the virality of such pathogens is correlated in the model with the minimum exposure latency, D_min.
D_max denotes the exposure duration for which the probability of infection is maximal. If the interaction is shorter than D_min, the infection probability is set to the minimal value P_ϵ, which makes infection due to this encounter unlikely.
At the end of each time window τ, the probability of a node i becoming exposed and infected (state E) is calculated as the complement of the probability of not being exposed in any of the encounters during that time window with infectious nodes, as follows:
P_i^τ(S → E)= 1-∏_k ∈ K(1-P_i,k· P_max)
Where P_i^τ(S → E) is the probability of node i in state Susceptible to transition from state Susceptible to state Exposed following the interactions during the time window τ.
In the case of a gathering, it is probable that node i may encounter more than one infectious node k. We consider that in gatherings in which a node interacts with several infectious others, it is exposed to a higher viral load. The duration of the interaction in a gathering is calculated as the gathering's duration multiplied by the number of infectious nodes participating in the gathering.
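A minimal sketch of the two equations above, with placeholder parameter values (in the simulation these are configurable):

```python
# Placeholder parameter values; in the simulation they are configurable.
P_EPS, P_MAX = 0.001, 0.5    # minimal / maximal infection probability
D_MIN, D_MAX = 15, 120       # minutes: minimal infecting / saturating exposure duration

def p_encounter(duration, n_infectious=1):
    """Per-encounter infection probability P_{i,k}; gatherings scale the duration."""
    d = duration * n_infectious
    if d < D_MIN:
        return P_EPS
    return min(d / D_MAX, 1.0)

def p_exposed_today(infectious_encounter_durations):
    """P_i^tau(S -> E) = 1 - prod_k (1 - P_{i,k} * P_max) over the day's encounters."""
    p_not_infected = 1.0
    for d in infectious_encounter_durations:
        p_not_infected *= 1.0 - p_encounter(d) * P_MAX
    return 1.0 - p_not_infected

# Example: three encounters of 10, 40, and 200 minutes with infectious nodes.
print(p_exposed_today([10, 40, 200]))
```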
§.§.§ ICMI: adding a detailed personal disease progression modeling
Once infected, personal reactions to the infection differ based on age, comorbidity, and other latent features. To trade off accuracy with simplicity, we encompass all of the above in a personal susceptibility parameter and correlate it with age <cit.>. Hence, the higher the individual susceptibility, the more likely the probability of becoming symptomatic and severely ill.
Disease progression is currently set to follow a known timeline <cit.>.
We will now explain the individual medical progression model. Once a person is exposed and infected, the disease progression and timeline depend on their personal susceptibility probability, denoted s_i for node i.
Figure <ref> depicts the model. The initial state is Susceptible, and following Eq <ref>, a node can transition to states Exposed and infected. However, in this state, individuals may be either Symptomatic or Asymptomatic, depending on their personal susceptibility parameter, s_i. Hence, the transition from state S to E would be to Symptomatic with probability s_i and to Asymptomatic with probability 1-s_i.
The incubation period from exposure to infection differs between asymptomatic and symptomatic people; in the simulation, both are modifiable parameters and rely on reported COVID-19 infection parameters <cit.>.
Asymptomatic individuals recover and are placed back into the interactions temporal simulation in the Recovered State, in which they are not susceptible to the disease for the run of the simulation. Symptomatic individuals are removed from the interactions temporal network; however, we continue to model their disease progression.
Sick individuals have severe symptoms with probability s_i and light symptoms with probability 1-s_i. People with light symptoms quarantine until they have recovered. People with severe symptoms are hospitalized in the ICU, in a Severe state with probability s_i or a Stable state with probability 1-s_i. In both of these states, individuals may deteriorate and die with probability s_i or recover with probability 1-s_i. Individuals who have recovered are re-introduced into the simulation as 'Recovered'. In this state, they are immune to the disease for the remainder of the simulation.
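A sketch of this branching, with the state names taken from the description above and the timing of transitions omitted; the function is illustrative, not the simulation's implementation.

```python
import random

def progress_infection(s_i, rng):
    """Return the branch an infected individual takes, driven by susceptibility s_i."""
    if rng.random() >= s_i:
        return "asymptomatic -> recovered"
    # Symptomatic: removed from the interaction network while the disease evolves.
    if rng.random() >= s_i:
        return "light symptoms -> quarantine -> recovered"
    icu_state = "severe ICU" if rng.random() < s_i else "stable ICU"
    outcome = "deceased" if rng.random() < s_i else "recovered"
    return f"{icu_state} -> {outcome}"

print(progress_infection(0.7, random.Random(0)))
```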
§.§ Real-world contact information
An ideal dataset for simulating the viral spread of a disease would be the actual population contact tracing. Manual contact tracing is slow and prone to long delays <cit.>. Digital contact tracing can aid in that; however, as seen in many countries, it is highly inaccurate when based on coarse digital tracking. Contact-tracing apps have been suggested ubiquitously <cit.>. However, the use of their data entails severe privacy issues <cit.>, and the information is sparse unless the app is widely adopted <cit.>.
To overcome these hurdles, we use data from the Copenhagen Networks Study (CNS) <cit.>. This dataset includes the contact information of over 700 students for more than one month, recorded using Bluetooth sensors in mobile phones provided to the participants. The dataset describes socio-physical activity at a high, non-aggregated resolution; it is temporal, following more than thirty days of interactions among several hundred people.
The CNS proximity information was registered as a function of the Received Signal Strength Indicator (RSSI). Extracting exact distance information from RSSI data is a difficult task <cit.>. For infection probability, distance is only one of the dimensions determining the chance of infection; the direction people face, ventilation, and the environment are equally significant parameters <cit.>. As this information was unavailable, we chose to model proximity rather than exact distance and direction. Stronger signals correlate roughly with closer proximity. Hence, we mined the CNS network for interactions with RSSI ≥ -90, which we use as the threshold value.
We model the social network of interactions Γ as a sequence of T consecutive undirected weighted temporal graphs {G_τ∈Γ, τ∈ T}, where each temporal snapshot graph G_τ=(V_τ,E_τ) contains the subset of interacting nodes V_τ during the τ-th temporal window and the weighted edges E_τ represent the interactions during this time <cit.>.
Each edge is a distinct interaction. Edge weight corresponds to the duration of the interaction used in the model described above.
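A minimal sketch of how such temporal snapshots could be assembled with pandas and networkx is shown below. The RSSI ≥ -90 filter and the duration-based edge weights follow the text; the column names, the 600 s window length, and the assumption that each proximity record represents one 300 s scan interval are illustrative.

```python
import networkx as nx
import pandas as pd

# One row per Bluetooth proximity record (column names are hypothetical).
interactions = pd.DataFrame({
    "user_a": [1, 1, 2], "user_b": [2, 3, 3],
    "timestamp": [0, 300, 900], "rssi": [-75, -92, -60],
})

WINDOW = 600          # length of one temporal window tau, in seconds (assumed)
RSSI_THRESHOLD = -90  # proximity filter used above
SCAN_INTERVAL = 300   # assumed duration represented by one record, in seconds

close = interactions[interactions["rssi"] >= RSSI_THRESHOLD].copy()
close["tau"] = close["timestamp"] // WINDOW

snapshots = {}
for tau, rows in close.groupby("tau"):
    g = nx.Graph()
    for _, r in rows.iterrows():
        # Edge weight accumulates the interaction duration within the window.
        prev = g.get_edge_data(r.user_a, r.user_b, {"weight": 0})["weight"]
        g.add_edge(r.user_a, r.user_b, weight=prev + SCAN_INTERVAL)
    snapshots[tau] = g

print({tau: list(g.edges(data=True)) for tau, g in snapshots.items()})
```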
We further detect gatherings, as in <cit.>, as the effect of gatherings on contagious epidemic processes was recently researched and found significant <cit.>.
§ AUTHOR CONTRIBUTIONS STATEMENT
O.M., Y.S., Y.M., and A.A. designed the experiments; Y.M. and A.A. wrote the code and performed all the experiments; all authors analyzed the results; O.M. wrote the paper; all authors reviewed the manuscript.
§ ADDITIONAL INFORMATION
Code availability: All code and data used in this research are freely available: <https://github.com/ScanLab-ossi/covid-simulation>.
Competing interests:
The authors declare no competing interests.
§ REFERENCES
[dufresne2020r0] Hébert-Dufresne, L., Althouse, B. M., Scarpino, S. V. & Allard, A. Beyond R_0: Heterogeneity in secondary infections and probabilistic epidemic forecasting. arXiv:2002.04004 (2020).
[sayama2015introduction] Sayama, H. Introduction to the Modeling and Analysis of Complex Systems. Open SUNY Textbooks (2015).
[liu2018measurability] Liu, Q.-H. et al. Measurability of the epidemic reproduction number in data-driven contact networks. Proceedings of the National Academy of Sciences 115, 12680–12685 (2018).
[inde2021age] Inde, Z. et al. Age-dependent regulation of SARS-CoV-2 cell entry genes and cell death programs correlates with COVID-19 severity. Science Advances 7, eabf8609 (2021).
[banholzer2022estimating] Banholzer, N., Feuerriegel, S. & Vach, W. Estimating and explaining cross-country variation in the effectiveness of non-pharmaceutical interventions during COVID-19. Scientific Reports 12, 1–12 (2022).
[abbey2022interaction] Abbey, A., Marmor, Y., Shahar, Y. & Mokryn, O. Exploring the effects of activity-preserving time dilation on the dynamic interplay of airborne contagion processes and temporal networks using an interaction-driven model. arXiv:2202.11591 (2022).
[abbey2022analysis] Abbey, A., Shahar, Y. & Mokryn, O. Analysis of the competition among viral strains using a temporal interaction-driven contagion model. Scientific Reports 12, 1–10 (2022).
[holme2012temporal] Holme, P. & Saramäki, J. Temporal networks. Physics Reports 519, 97–125 (2012).
[ENRIGHT201888] Enright, J. & Kao, R. R. Epidemics on dynamic networks. Epidemics 24, 88–97 (2018). https://doi.org/10.1016/j.epidem.2018.04.003
[holme2021fast] Holme, P. Fast and principled simulations of the SIR model on temporal networks. PLoS ONE 16, e0246961 (2021).
[rocha2011simulated] Rocha, L. E., Liljeros, F. & Holme, P. Simulated epidemics in an empirical spatiotemporal network of 50,185 sexual contacts. PLoS Computational Biology 7, e1001109 (2011).
[scholtes2014causality] Scholtes, I. et al. Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks. Nature Communications 5, 1–9 (2014).
[holme2015information] Holme, P. Information content of contact-pattern representations and predictability of epidemic outbreaks. Scientific Reports 5, 1–12 (2015).
[delvenne2015diffusion] Delvenne, J.-C., Lambiotte, R. & Rocha, L. E. Diffusion on networked systems is a question of time or structure. Nature Communications 6, 1–10 (2015).
[grossmann2020importance] Großmann, G., Backenköhler, M. & Wolf, V. Importance of interaction structure and stochasticity for epidemic spreading: A COVID-19 case study. In International Conference on Quantitative Evaluation of Systems, 211–229 (Springer, 2020).
[masuda2020small] Masuda, N. & Holme, P. Small inter-event times govern epidemic spreading on networks. Physical Review Research 2, 023163 (2020).
[wang2021impact] Wang, B., Xie, Z. & Han, Y. Impact of individual behavioral changes on epidemic spreading in time-varying networks. Physical Review E 104, 044307 (2021).
[herrmann2020covid] Herrmann, H. A. & Schwartz, J.-M. Why COVID-19 models should incorporate the network of social interactions. Physical Biology 17, 065008 (2020).
[stopczynski2014measuring] Stopczynski, A. et al. Measuring large-scale social networks with high resolution. PLoS ONE 9 (2014).
[Stopczynski:2015aa] Stopczynski, A., Sapiezynski, P., Pentland, A. S. & Lehmann, S. Temporal fidelity in dynamic social networks. The European Physical Journal B 88, 249 (2015). DOI: 10.1140/epjb/e2015-60549-7
[sapiezynski2019interaction] Sapiezynski, P., Stopczynski, A., Lassen, D. D. & Lehmann, S. Interaction data from the Copenhagen Networks Study. Scientific Data 6, 1–10 (2019).
[vespignani2020modelling] Vespignani, A. et al. Modelling COVID-19. Nature Reviews Physics 2, 279–281 (2020).
[thurner2020network] Thurner, S., Klimek, P. & Hanel, R. A network-based explanation of why most COVID-19 infection curves are linear. Proceedings of the National Academy of Sciences 117, 22684–22689 (2020).
[rea2007duration] Rea, E. et al. Duration and distance of exposure are important predictors of transmission among community contacts of Ontario SARS cases. Epidemiology & Infection 135, 914–921 (2007).
[wilber2022model] Wilber, M. Q. et al. A model for leveraging animal movement to understand spatio-temporal disease dynamics. Ecology Letters 25, 1290–1304 (2022).
[smieszek2009mechanistic] Smieszek, T. A mechanistic model of infection: why duration and intensity of contacts should be included in models of disease spread. Theoretical Biology and Medical Modelling 6, 1–10 (2009).
[muller2020mobility] Müller, S. A., Balmer, M., Neumann, A. & Nagel, K. Mobility traces and spreading of COVID-19. medRxiv (2020).
[mo2021modeling] Mo, B. et al. Modeling epidemic spreading through public transit using time-varying encounter network. Transportation Research Part C: Emerging Technologies 122, 102893 (2021).
[nagel2021realistic] Nagel, K., Rakow, C. & Müller, S. A. Realistic agent-based simulation of infection dynamics and percolation. Physica A: Statistical Mechanics and its Applications 584, 126322 (2021).
[lorch2020quantifying] Lorch, L. et al. Quantifying the effects of contact tracing, testing, and containment measures in the presence of infection hotspots. arXiv:2004.07641 (2020).
[sekara2016fundamental] Sekara, V., Stopczynski, A. & Lehmann, S. Fundamental structures of dynamic social networks. Proceedings of the National Academy of Sciences 113, 9977–9982 (2016).
[ciaperoni2020relevance] Ciaperoni, M. et al. Relevance of temporal cores for epidemic spread in temporal networks. Scientific Reports 10, 1–15 (2020).
[cencetti2021digital] Cencetti, G. et al. Digital proximity tracing on empirical contact networks for pandemic control. Nature Communications 12, 1–12 (2021).
[pung2022using] Pung, R., Firth, J. A., Spurgin, L. G., Lee, V. J. & Kucharski, A. J. Using high-resolution contact networks to evaluate SARS-CoV-2 transmission and control in large-scale multi-day events. Nature Communications 13, 1–11 (2022).
[gandhi2020asymptomatic] Gandhi, M., Yokoe, D. S. & Havlir, D. V. Asymptomatic transmission, the Achilles' heel of current strategies to control COVID-19 (2020).
[shahar2021computing] Shahar, Y. & Mokryn, O. A statistical model for early estimation of the prevalence and severity of an epidemic from simple tests for infection confirmation. medRxiv (2021).
[PARK2020100392] Park, S. W., Cornforth, D. M., Dushoff, J. & Weitz, J. S. The time scale of asymptomatic transmission affects estimates of epidemic potential in the COVID-19 outbreak. Epidemics 31, 100392 (2020). https://doi.org/10.1016/j.epidem.2020.100392
[byambasuren2020estimating] Byambasuren, O. et al. Estimating the extent of asymptomatic COVID-19 and its potential for community transmission: systematic review and meta-analysis. Official Journal of the Association of Medical Microbiology and Infectious Disease Canada 5, 223–234 (2020).
[park2020time] Park, S. W., Cornforth, D. M., Dushoff, J. & Weitz, J. S. The time scale of asymptomatic transmission affects estimates of epidemic potential in the COVID-19 outbreak. Epidemics 31, 100392 (2020).
[shahar2023statistical] Shahar, Y. & Mokryn, O. A statistical model for early estimation of the prevalence and severity of an epidemic or pandemic from simple tests for infection confirmation. PLoS ONE 18, e0280874 (2023).
[subramanian2021quantifying] Subramanian, R., He, Q. & Pascual, M. Quantifying asymptomatic infection and transmission of COVID-19 in New York City using observed cases, serology, and testing capacity. Proceedings of the National Academy of Sciences 118, e2019716118 (2021).
[lazer2009life] Lazer, D. et al. Life in the network: the coming age of computational social science. Science 323, 721 (2009).
[mokryn2016role] Mokryn, O., Wagner, A., Blattner, M., Ruppin, E. & Shavitt, Y. The role of temporal trends in growing networks. PLoS ONE 11, e0156505 (2016).
[Peel2015] Peel, L. & Clauset, A. Detecting change points in the large-scale structure of evolving networks. 29th AAAI Conference on Artificial Intelligence (AAAI), 1–11 (2015). arXiv:1403.0989
[miller2020size] Miller, H. & Mokryn, O. Size agnostic change point detection framework for evolving networks. PLoS ONE 15, e0231035 (2020).
[sun2013understanding] Sun, L., Axhausen, K. W., Lee, D.-H. & Huang, X. Understanding metropolitan patterns of daily encounters. Proceedings of the National Academy of Sciences 110, 13774–13779 (2013).
[si2020comparative] Si, M. et al. A comparative analysis for spatio-temporal spreading patterns of emergency news. Scientific Reports 10, 1–13 (2020).
[blondel2015survey] Blondel, V. D., Decuyper, A. & Krings, G. A survey of results on mobile phone datasets analysis. EPJ Data Science 4, 10 (2015).
[cooper2020sir] Cooper, I., Mondal, A. & Antonopoulos, C. G. A SIR model assumption for the spread of COVID-19 in different communities. Chaos, Solitons & Fractals 139, 110057 (2020).
[kerr2021covasim] Kerr, C. C. et al. Covasim: an agent-based model of COVID-19 dynamics and interventions. PLOS Computational Biology 17, e1009149 (2021).
[truszkowska2021high] Truszkowska, A. et al. High-resolution agent-based modeling of COVID-19 spreading in a small town. Advanced Theory and Simulations 4, 2000277 (2021).
[hinch2021openabm] Hinch, R. et al. OpenABM-Covid19: an agent-based model for non-pharmaceutical interventions against COVID-19 including contact tracing. PLoS Computational Biology 17, e1009146 (2021).
[muller2020realistic] Müller, S. A. et al. A realistic agent-based simulation model for COVID-19 based on a traffic simulation and mobile phone data. arXiv:2011.11453 (2020).
[tadic2021microscopic] Tadić, B. & Melnik, R. Microscopic dynamics modeling unravels the role of asymptomatic virus carriers in SARS-CoV-2 epidemics at the interplay between biological and social factors. Computers in Biology and Medicine 133, 104422 (2021).
[grossmann2021heterogeneity] Großmann, G., Backenköhler, M. & Wolf, V. Heterogeneity matters: Contact structure and individual variation shape epidemic dynamics. PLoS ONE 16, e0250050 (2021).
[gnanvi2021reliability] Gnanvi, J. E., Salako, K. V., Kotanmi, G. B. & Kakaï, R. G. On the reliability of predictions on COVID-19 dynamics: A systematic and critical review of modelling techniques. Infectious Disease Modelling 6, 258–272 (2021).
[giordano2020modelling] Giordano, G. et al. Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy. Nature Medicine 26, 855–860 (2020).
[peirlinck2020visualizing] Peirlinck, M. et al. Visualizing the invisible: The effect of asymptomatic transmission on the outbreak dynamics of COVID-19. Computer Methods in Applied Mechanics and Engineering 372, 113410 (2020). DOI: 10.1016/j.cma.2020.113410
[hebert2020macroscopic] Hébert-Dufresne, L., Scarpino, S. V. & Young, J.-G. Macroscopic patterns of interacting contagions are indistinguishable from social reinforcement. Nature Physics 16, 426–431 (2020).
[HAMBRIDGE2021325] Hambridge, H. L., Kahn, R. & Onnela, J.-P. Examining SARS-CoV-2 interventions in residential colleges using an empirical network. International Journal of Infectious Diseases 113, 325–330 (2021). https://doi.org/10.1016/j.ijid.2021.10.008
[liu2020viral] Liu, Y. et al. Viral dynamics in mild and severe cases of COVID-19. The Lancet Infectious Diseases 20, 656–657 (2020).
[ferretti2020quantifying] Ferretti, L. et al. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science 368 (2020).
[buitrago2020occurrence] Buitrago-Garcia, D. et al. Occurrence and transmission potential of asymptomatic and presymptomatic SARS-CoV-2 infections: A living systematic review and meta-analysis. PLoS Medicine 17, e1003346 (2020).
[aleta2022] Aleta, A. et al. Quantifying the importance and location of SARS-CoV-2 transmission events in large metropolitan areas. Proceedings of the National Academy of Sciences 119, e2112182119 (2022). DOI: 10.1073/pnas.2112182119
[minutesai2018airborne] Ai, Z. & Melikov, A. K. Airborne spread of expiratory droplet nuclei between the occupants of indoor environments: A review. Indoor Air 28, 500–524 (2018).
[luo2021infection] Luo, C. H. et al. Infection with the SARS-CoV-2 Delta variant is associated with higher infectious virus loads compared to the Alpha variant in both unvaccinated and vaccinated individuals. medRxiv (2021).
[teyssou2021delta] Teyssou, E. et al. The Delta SARS-CoV-2 variant has a higher viral load than the Beta and the historical variants in nasopharyngeal samples from newly diagnosed COVID-19 patients. Journal of Infection 83, e1–e3 (2021).
[tisminetzky2022age] Tisminetzky, M. et al. Age, multiple chronic conditions, and COVID-19: a literature review. The Journals of Gerontology: Series A 77, 872–878 (2022).
[polak2020systematic] Polak, S. B., Van Gool, I. C., Cohen, D., von der Thüsen, J. H. & van Paassen, J. A systematic review of pathological findings in COVID-19: a pathophysiological timeline and possible mechanisms of disease progression. Modern Pathology 33, 2128–2138 (2020).
[zayet2020natural] Zayet, S., Gendrin, V. & Klopfenstein, T. Natural history of COVID-19: back to basics. New Microbes and New Infections 38, 100815 (2020).
[kretzschmar2020impact] Kretzschmar, M. E. et al. Impact of delays on effectiveness of contact tracing strategies for COVID-19: a modelling study. The Lancet Public Health 5, e452–e459 (2020).
[yoneki2011epimap] Yoneki, E. & Crowcroft, J. EpiMap: Towards quantifying contact networks and modelling the spread of infections in developing countries. In Proceedings of the 1st International Conference on Wireless Technologies for Humanitarian Relief, 233–240 (2011).
[ahmed2020survey] Ahmed, N. et al. A survey of COVID-19 contact tracing apps. IEEE Access 8, 134577–134601 (2020).
[romanini2020privacy] Romanini, D., Lehmann, S. & Kivelä, M. Privacy and uniqueness of neighborhoods in social networks. arXiv:2009.09973 (2020).
[barrat2020effect] Barrat, A., Cattuto, C., Kivelä, M., Lehmann, S. & Saramäki, J. Effect of manual and digital contact tracing on COVID-19 outbreaks: a study on empirical contact data. medRxiv (2020).
[Liu2014Face] Liu, S., Jiang, Y. & Striegel, A. Face-to-face proximity estimation using Bluetooth on smartphones. IEEE Transactions on Mobile Computing 13, 811–823 (2014). DOI: 10.1109/TMC.2013.44
[ng2020covid] Ng, P. C., Spachos, P. & Plataniotis, K. COVID-19 and your smartphone: BLE-based smart contact tracing. arXiv:2005.13754 (2020).
[vu2010joint] Vu, L., Nahrstedt, K., Retika, S. & Gupta, I. Joint Bluetooth/WiFi scanning framework for characterizing and leveraging people movement in university campus. In Proceedings of the 13th ACM International Conference on Modeling, Analysis, and Simulation of Wireless and Mobile Systems, 257–265 (2010).
[li2007role] Li, Y. et al. Role of ventilation in airborne transmission of infectious agents in the built environment: a multidisciplinary systematic review. Indoor Air 17, 2–18 (2007).
[noakes2006modelling] Noakes, C., Beggs, C., Sleigh, P. & Kerr, K. Modelling the transmission of airborne infections in enclosed spaces. Epidemiology & Infection 134, 1082–1091 (2006).
[sze2010review] Sze To, G. N. & Chao, C. Y. H. Review and comparison between the Wells–Riley and dose-response approaches to risk assessment of infectious respiratory diseases. Indoor Air 20, 2–16 (2010).
[Ciaperoni:2020aa] Ciaperoni, M. et al. Relevance of temporal cores for epidemic spread in temporal networks. Scientific Reports 10, 12529 (2020). DOI: 10.1038/s41598-020-69464-3
http://arxiv.org/abs/2307.04516v1 | 2023-07-10 | An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification | Ashish Singh, Antonio Bevilacqua, Timilehin B. Aderinola, Thach Le Nguyen, Darragh Whelan, Martin O'Reilly, Brian Caulfield, Georgiana Ifrim | cs.CV
An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification

Ashish Singh, Antonio Bevilacqua, Timilehin B. Aderinola, Thach Le Nguyen, Darragh Whelan, Martin O'Reilly, Brian Caulfield, Georgiana Ifrim

Insight Centre for Data Analytics, University College Dublin, Ireland
{ashish.singh,antonio.bevilacqua,timi.aderinola,thach.lenguyen,b.caulfield,georgiana.ifrim}@insight-centre.org
Output Sports Limited, NovaUCD, Dublin, Ireland
{darragh, martin}@ouputsports.com

August 12, 2023
============================================================================================
Wearable sensors such as Inertial Measurement Units (IMUs) are often used to assess the performance of human exercise. Common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis. Multiple sensors are required to achieve high classification accuracy, which is not very practical. These sensors require calibration and synchronization and may lead to discomfort over longer time periods. Recent work utilizing computer vision techniques has shown similar performance using video, without the need for manual feature engineering, and avoiding some pitfalls such as sensor calibration and placement on the body.
In this paper, we compare the performance of IMUs to a video-based approach for human exercise classification on two real-world datasets consisting of Military Press and Rowing exercises.
We compare the performance using a single camera that captures video in the frontal view versus using 5 IMUs placed on different parts of the body. We observe that an approach based on a single camera can outperform a single IMU by 10 percentage points on average. Additionally, a minimum of 3 IMUs are required to outperform a single camera. We observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features. Finally, we show that an ensemble model combining the data from a single camera with a single IMU outperforms either data modality. Our work opens up new and more realistic avenues for this application, where a video captured using a readily available smartphone camera, combined with a single sensor, can be used for effective human exercise classification.
§ INTRODUCTION
Recent years have seen an accelerated use of machine learning solutions to assess the performance of athletes.
New technologies allow easier data capture and efficient machine learning techniques enable effective measurement and feedback. In this paper, we focus on the application of human exercise classification where the task is to differentiate normal and abnormal executions for strength and conditioning (S&C) exercises. S&C exercises are widely used for rehabilitation, performance assessment, injury screening and resistance training in order to improve the performance of athletes <cit.>.
Approaches to data capture are either sensor-based or video-based. For sensor-based approaches, sensors such as Inertial Measurement Units (IMUs) are worn by participants <cit.>. For video, a participant's motion is captured using 3D motion capture <cit.>, depth-capture based systems <cit.>, or 2D video recordings using cameras <cit.>. The data obtained from these sources is processed and classified using machine learning models.
Classification methods based on sensor data are popular in the literature and real-world applications, and yet, video-based approaches are gaining popularity <cit.> as they show potential for providing high classification accuracy and overcoming common issues of inertial sensors.
Sensors require fitting on different parts of the body and the number of sensors to be worn depends upon the context of the exercise. For instance, the Military Press exercise requires at least 3 IMUs for optimal performance. Despite their popularity, sensors may cause discomfort, thereby hindering the movement of participants. In addition, using multiple sensors leads to overheads such as synchronization, calibration and orientation.
Recent advances in computer vision have enabled the usage of 2D videos for human exercise classification.
Past work explored posture detection <cit.> and the application of human exercise classification using pose estimation. Our previous work <cit.> proposed a novel method named BodyMTS to classify human exercises using video, human pose estimation and multivariate time series classification. There is less work comparing sensors with video in real-world applications.
In this paper, we compare the performance of a sensor-based approach utilizing 5 IMUs with that of video from a single front-facing camera, on the same set of 54 participants, on two real-world datasets consisting of Military Press (MP) and Rowing exercises. These are important S&C exercises and are widely used for injury risk screening and rehabilitation <cit.>. Incorrect executions may lead to musculoskeletal injuries and undermine the performance of athletes <cit.>. Hence, correct detection of abnormal movements is crucial to avoid injuries and maximize performance.
The main requirements for an effective human exercise classification application are <cit.>: accurate monitoring of body-part movement, correct classification of deviations from normal movements, timely feedback to end users, simple data capture using available smartphones, and coverage of a wide range of S&C exercises. Previous work <cit.> has shown that this task is difficult and suffers from poor intra- and inter-rater reliability in user studies with domain experts, with Kappa scores between 0.18-0.53 for inter-rater agreement and between 0.38-0.62 for intra-rater agreement. Through discussions with domain experts, we established that an effective application should achieve a minimum accuracy of 80% to be useful for end users.
Existing methods using IMUs involve pre-processing the raw data, creating handcrafted features <cit.>, and applying classical machine learning algorithms. Handcrafted feature extraction is often tedious and time-consuming, requires domain knowledge, and is prone to cherry-picking features that only work for a specific set of exercises.
Deep learning methods <cit.> overcome this issue by automatically constructing features during training, but still require expertise in deep learning architectures along with hardware resources such as GPUs. Hence, we take two approaches to feature extraction: (1) using lightweight packages such as catch22 <cit.> and tsfresh <cit.> to automate the feature extraction from raw signals and (2) using the raw time series data with time series classifiers, which implicitly construct features inside the algorithm.
For videos, we first extract multivariate data using human pose estimation with OpenPose <cit.> to obtain (X,Y) location coordinates of key body parts over all the frames of a video.
Figure <ref> shows data captured with IMUs and video for the Military Press exercise. The top part shows the Y-signal for 3 body parts for a total of 10 repetitions, while the bottom part shows the X, Y, and Z signals of the magnetometer from an IMU worn on the right arm for the same set of 10 repetitions.
Our main contributions are:
* We compare 3 strategies for creating features from IMU data for human exercise classification. We observe that directly classifying the raw signals using multivariate time series classifiers outperforms the approach based on handcrafted features by a margin of 10 and 4 percentage points in accuracy for MP and Rowing respectively. Automatic feature extraction shows better performance than handcrafted features.
* We compare the performance of IMU and video for human exercise classification. We observe that a single video-based approach outperforms a single IMU-based approach by a margin of 5 percentage points accuracy for MP and 15 percentage points for Rowing.
Additionally, we observe that a minimum of 3 IMU devices are needed to outperform a single video for both exercises.
* We propose an ensemble model that combines the data modalities from IMU and video, which outperforms either approach by a minimum of 2 percentage points accuracy for both MP and Rowing. This leads to an accuracy of 93% for MP and 87% for Rowing, using only a single IMU and a reduced-size video. We discuss reasons why combining video and sensor data is beneficial, in particular, the 2D video provides positional information, while the sensor provides information on orientation and depth of movement.
* To support this paper we have made all our code and data available [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023>].
The rest of the paper is organized as follows. Section <ref> presents an overview of related work, Section <ref> describes the data collection procedure, Section <ref> describes the data analysis and methodology for classification and Section <ref> presents the classification results using IMUs and video. Section <ref> concludes and outlines directions for future work and Section <ref> discusses ethical implications of this work.
§ RELATED WORK
This section describes the purpose of S&C exercises and provides an overview of sensor-based and video-based data capture approaches.
§.§ S&C Exercise Classification
S&C exercises aim at improving the performance of human participants in terms of strength, speed and agility, and they can be captured using sensor-based or video-based techniques.
Wearable sensor-based approaches involve fitting Inertial Measurement Units (IMUs) <cit.> on different parts of the body. This is followed by creating handcrafted features which are used in conjunction with a classical machine learning model. Deep learning methods attempt to automate the process of feature extraction. CNN models work by stacking IMU signals into an image <cit.>, whereas <cit.> uses an attention mechanism to identify the important parts in a signal.
Using IMUs has its own limitations. First, the number of inertial sensors required and their positions can vary from exercise to exercise <cit.>. Furthermore, sensors require calibration and synchronization and may also hinder the movement of the body and cause discomfort when used over longer time periods <cit.>.
Video-based systems can be categorized into 3 types: 3D motion capture, depth camera-based and 2D video camera. Though they are accurate, 3D motion capture systems are expensive and require complex setups. In addition, fitting multiple markers on the body may hinder the normal movement of the body <cit.>. Microsoft Kinect is commonly used for depth camera-based systems <cit.>. These systems are less accurate and are affected by poor lighting, occlusion, and clothing, and require high maintenance <cit.>. The third subcategory uses video-based devices such as DSLR or smartphone cameras. Works based on video rely on human pose estimation to track different body parts <cit.> and have shown 2D videos to be a potential alternative to IMU sensors.
Video-based analysis also includes commercial software such as Dartfish <cit.>, which provides the option to analyze motion at a very low frame rate. However, such tools are less accurate and require fitting body markers of a colour different from the background.
§.§ Multivariate Time Series Classification (MTSC)
In multivariate time series classification tasks, the data is ordered and each sample has more than one dimension.
We focus on recent linear classifiers and deep learning methods, which have been shown to achieve high accuracy with minimal run-time and memory requirements <cit.>.
Linear Classifiers. ROCKET <cit.> is a state-of-the-art algorithm for MTSC in terms of accuracy and scalability. Two more extensions named MiniROCKET <cit.> and MultiROCKET <cit.>, have further improved this method. These classifiers work by using a large number of random convolutional kernels which capture different characteristics of a signal and hence do not require learning the kernel weights as opposed to deep learning methods. These features are then classified using a linear classifier such as Logistic or Ridge Regression.
Deep Learning Classifiers. Deep learning architectures based on Fully Convolutional Networks (FCN) and Resnet <cit.> have shown competitive performance for MTSC, without suffering from high time and memory complexity.
§ DATA COLLECTION
Participants.
54 healthy volunteers (32 males and 22 females, age: 26 ± 5 years, height: 1.73 ± 0.09 m, body mass: 72 ± 15 kg) were recruited for the study. Participants were asked to complete multiple repetitions of the two exercises in this study: the Military Press and Rowing exercises. In each case, the exercises were performed under 'normal' and 'induced' conditions. In the 'normal' condition the exercise was performed with the correct biomechanical form, and in the 'induced' condition the exercise was purposefully performed with pre-determined deviations from the normal form, assessed and confirmed in real time by the movement scientist. Please refer to these sources <cit.> for additional information on the experiment protocol.
The data was collected using two video cameras and 5 Shimmer IMUs placed on 5 different parts of the body. The two cameras (30 frames/sec, 720p resolution) were set up in front of and to the side of the participants. In this work, we only use the video recordings from the front-view camera, which is the more common use case. The 5 IMUs (sampling frequency 51.2 Hz, tri-axial accelerometer ±2 g, gyroscope ±500 ^∘/s, and magnetometer ±1.9 Ga) <cit.> were fitted on the participants at the following five locations: Left Wrist (LW), Right Wrist (RW), Left Arm (LA), Right Arm (RA) and Back. The orientation and locations of all the IMUs were consistent for all the participants.
Exercise Technique and Deviations.
The induced forms were further sub-categorized depending on the exercise.
§.§ Exercise Classes for Military Press (MP)
Normal (N): This class refers to the correct execution, involving lifting the bar from shoulder level to above the head, fully extending the arms, and returning it back to shoulder level with no arch in the back. The bar must be stable and parallel to the ground throughout the execution.
Asymmetrical (A): The bar is lopsided and asymmetrical.
Reduced Range (R): The bar is not brought down completely to the shoulder level.
Arch (Arch): The participant arches their back during execution.
Figure <ref> shows these deviations using a single frame.
§.§ Exercise Classes for Rowing
Normal (N): This class refers to the correct execution, where the participant begins by positioning themselves correctly, bending knees and leaning forward from the waist. The execution starts by lifting the bar with fully extended arms until it touches the sternum and bringing it back to the starting position. The bar must be stable and parallel to the ground and the back should be straight.
Asymmetrical (A): The bar is lopsided and asymmetrical.
Reduced Range (R): The bar is not brought up completely until it touches the sternum.
Ext: The participant moves his/her back during execution.
RB: The participant executes with a rounded back.
Figure <ref> shows these deviations by depicting a single frame.
§ DATA ANALYSIS AND METHODS
This section presents the data pre-processing, features extraction and classification models. We present the feature extraction for IMU data, followed by feature extraction for video. We also provide a description of the train/test splits for IMUs and video data.
§.§ IMU Data
We discuss three strategies to create features from IMU data. First, we directly use the raw signal as a time series. Second, we use existing approaches to create handcrafted features. Third, we use dedicated packages to automatically extract features. Feature extraction is performed after segmenting the full signal to obtain individual repetitions.
§.§.§ Raw Signal as Multivariate Time Series.
The raw signal from IMU records data for 10 repetitions. Hence, we segment the time series to obtain signals for individual repetitions. The Y signal of the magnetometer from the IMU placed on the right arm is utilized to segment the signals. The time series obtained after this step has variable length since the time taken to complete each repetition differs from participant to participant. Further, current implementations of selected time series classifiers cannot handle variable-length time series and therefore all time series are re-sampled to a length of 161 (the length of the longest time series). This does not impact the performance as shown in the supplementary material.
Every single repetition constitutes a single sample for train/test data. The final data D has a shape of D ∈ℝ^N × 45 × 161, where N indicates the total samples.
Each sample denoted by x_i in the data has a dimension of x_i ∈ ℝ^45 × 161, where 45 denotes the total number of time series (5 IMUs x 9 signals) and 161 is the length of each time series.
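The segmentation and length normalization can be sketched in a few lines. The resampling of every repetition to 161 samples follows the text; the use of scipy's peak detection on the right-arm magnetometer Y channel with a fixed minimum peak distance is an illustrative stand-in for the exact segmentation procedure, and the function name and signature are ours.

```python
import numpy as np
from scipy.signal import find_peaks, resample

TARGET_LEN = 161  # length of the longest repetition in the dataset

def segment_and_resample(imu_signals, segmentation_channel, min_distance=50):
    """Cut a 10-repetition recording into equal-length single repetitions.

    imu_signals: array of shape (n_channels, n_timesteps), e.g. 45 x T for
    5 IMUs x 9 signals; segmentation_channel: the 1-D signal used to locate
    repetition boundaries (the Y magnetometer of the right-arm IMU above).
    """
    peaks, _ = find_peaks(segmentation_channel, distance=min_distance)
    bounds = [0, *peaks.tolist(), imu_signals.shape[1]]
    reps = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        reps.append(resample(imu_signals[:, start:end], TARGET_LEN, axis=1))
    return np.stack(reps)  # shape: (n_reps, n_channels, 161)
```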
§.§.§ Handcrafted Features.
Each of the 5 IMUs outputs 9 signals (X,Y,Z) for each of the accelerometer, magnetometer and gyroscope.
We follow the procedure as described in <cit.> to create handcrafted features. Additionally, 5 signals were created for each IMU: pitch, roll, yaw signal and vector magnitude of accelerometer and gyroscope,
giving a total of 70 signals (5 × (9 + 5)). For each repetition signal, 18 handcrafted features that capture time and frequency domain characteristics were created.
Hence, we obtain the final data D ∈ℝ^N × 1260, where N is the total samples and 1260 represents the features extracted from 70 signals with 18 features each for both MP and Rowing.
§.§.§ Auto Extracted Features.
We use packages catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from a single repetition signal. These packages calculate a wide range of pre-defined metrics in order to capture the diverse characteristics of a signal. They are straightforward to use and avoid the need for domain knowledge and signal processing techniques.
Catch22 captures 22 features for each of the 45 signals (5 IMUs x 9 signals) giving a total of 990 tabular features for MP and Rowing in the final dataset D ∈ℝ^N × 990, where N indicates the total samples. Similarly, tsfresh captures a large number of time series characteristics by creating a large number of features.
The final dataset D has a shape of D ∈ℝ^N × 15000 and D ∈ℝ^N × 16000, for MP and Rowing respectively. Both manual and automatic feature extraction are performed on the normalized time series, as we observed that normalizing the time series leads to an increase in accuracy.
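As an illustration of the automated route, the sketch below computes the catch22 features for every channel of every repetition, assuming the pycatch22 Python bindings of the feature set; tsfresh would be used analogously via its extract_features function on a long-format dataframe. The wrapper function itself is ours.

```python
import numpy as np
import pycatch22  # Python bindings of the catch22 feature set (assumed installed)

def catch22_matrix(samples):
    """Turn repetitions of shape (n_reps, n_channels, length) into a table.

    Each channel contributes 22 features, so 45 IMU channels yield the 990
    tabular columns mentioned above.
    """
    rows = []
    for rep in samples:
        feats = []
        for channel in rep:
            feats.extend(pycatch22.catch22_all(channel.tolist())["values"])
        rows.append(feats)
    return np.asarray(rows)
```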
§.§ Video Data
We follow the methodology presented in our previous work <cit.> to classify human exercise from videos. OpenPose is used for human pose estimation to track the key body parts, followed by a multivariate time series classifier. Each video consists of a sequence of frames where each frame is considered a time step. Each frame is fed to OpenPose which outputs coordinates (X,Y) for 25 body parts. We only use the 8 upper body parts most relevant to the target exercises but also conduct experiments with the full 25 body parts.
The time series obtained from a single body part is denoted by b^n = [(X,Y)^1, (X,Y)^2, (X,Y)^3,...(X,Y)^T] where n indicates the n^th body part and T is the length of the video clip.
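The conversion from OpenPose output to a multivariate series can be sketched as follows. OpenPose writes one JSON file per frame containing a pose_keypoints_2d array of (x, y, confidence) triplets in the BODY_25 layout; the particular choice of the 8 upper-body indices below and the handling of frames without detections are our assumptions.

```python
import glob
import json
import numpy as np

# BODY_25 indices 0-7 (Nose, Neck, right/left Shoulder, Elbow, Wrist); treating
# these as the 8 upper-body landmarks is an assumption made for illustration.
UPPER_BODY = [0, 1, 2, 3, 4, 5, 6, 7]

def video_to_series(json_dir):
    """Stack OpenPose per-frame JSON output into a (16, n_frames) array."""
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
        with open(path) as fh:
            people = json.load(fh)["people"]
        if not people:
            continue  # skip frames where no person was detected
        kp = np.asarray(people[0]["pose_keypoints_2d"]).reshape(-1, 3)
        frames.append(kp[UPPER_BODY, :2].ravel())  # keep (X, Y), drop confidence
    return np.asarray(frames).T
```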
§.§.§ Multivariate Time Series Data.
Since each video records 10 repetitions for each exercise execution, segmentation is necessary in order to obtain single repetitions. Each repetition forms a single time series sample for training and evaluating a classifier. We use peak detection to segment the time series as mentioned in our previous work <cit.>.
Similarly to the IMU case, every time series obtained after this step has a variable length and therefore is re-sampled to a length of 161.
The final data is denoted by D ∈ℝ^N × 16 × 161, where N indicates the total samples. Each sample denoted by x_i has a dimension of x_i∈ℝ^16 × 161, where 16 indicates X and Y coordinates for 8 body parts and 161 is the length of each time series.
§.§.§ Auto Extracted Features.
We use catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from each single repetition signal.
§.§ Train/Test Splits
We use 3 train/test splits in the ratio of 70/30 on the full data set to obtain train and test data for both IMUs and video. Each split is done based on the unique participant IDs to avoid leaking information into the test data.
Train data is further split in the ratio of 85/15 to create validation data to fine-tune the hyperparameters. The validation data is merged back into the train data before the final classification.
The data is balanced across all the classes. Table <ref> shows the number of samples across all classes for a single train/test split for MP and Rowing respectively.
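A participant-grouped split of this kind can be obtained directly with scikit-learn; the wrapper function and the dummy arrays below are ours.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def participant_split(X, y, participant_ids, seed=0):
    """70/30 split keeping all repetitions of a participant on one side."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=participant_ids))
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Example with dummy arrays: 6 repetitions from 3 participants.
X = np.zeros((6, 16, 161))
y = np.array([0, 0, 1, 1, 2, 2])
ids = np.array([10, 10, 11, 11, 12, 12])
X_tr, X_te, y_tr, y_te = participant_split(X, y, ids)
```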
§.§ Classification Models
We use tabular machine learning models to work with handcrafted and automated features. Informed by previous literature on feature extraction for IMU data <cit.>, we focus on Logistic Regression, Ridge Regression, Naive Bayes, Random Forest and SVM as classifiers for tabular data.
We select ROCKET, MultiROCKET and deep learning models FCN and Resnet as recent accurate and fast multivariate time series classifiers <cit.>.
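For reference, a minimal ROCKET run with sktime on dummy data of the same shape as our repetition tensors is shown below; the array contents and class labels are random placeholders, and the import path assumes a recent sktime release.

```python
import numpy as np
from sktime.classification.kernel_based import RocketClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 16, 161))  # dummy stand-in for real repetitions
y_train = rng.integers(0, 4, size=40)     # four MP classes: N, A, R, Arch
X_test = rng.normal(size=(10, 16, 161))

clf = RocketClassifier(num_kernels=10_000)  # number of kernels is the main knob
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```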
§ EMPIRICAL EVALUATION
We present results on IMU data, video data and combinations using ensembles.
We report average accuracy over 3 train/test splits for all the results. We use the sklearn library <cit.> to classify tabular data and sktime <cit.> to classify time series data. All the experiments are performed using Python on an Ubuntu 18.04 system (16GB RAM, Intel i7-4790 CPU @ 3.60GHz).
The Supplementary Material [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023/blob/master/Supplementary_material.pdf>] presents further detailed results on
leave-one-participant-out cross-validation, demographic results, execution time, as well as the impact of normalization and re-sampling length on the classification accuracy.
§.§ Accuracy using IMUs
We present the classification results using 3 different strategies for creating features from IMU data. For tabular features, we perform feature selection to reduce overfitting and execution time. We use Lasso Regression (C=0.01) with L1 penalty for feature selection, where C is the regularization parameter.
Logistic Regression achieves the best performance followed by Ridge Regression and SVM. These results suggest that linear classifiers are best suited for this problem. Hence we only present results using Logistic Regression here.
We tune hyperparameters, particularly regularization parameter C of Logistic Regression using cross validation. We observed that Logistic Regression (LR) with C=0.01 achieves the highest accuracy (Table <ref> presents results with Logistic Regression).
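The tabular pipeline can be sketched as follows; the L1-penalised selector with C=0.01 and the final Logistic Regression with C=0.01 follow the settings reported above, while the scaler, the solver choices, and the iteration limit are our additions.

```python
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Feature selection with an L1-penalised linear model, then the final classifier.
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", C=0.01, solver="liblinear")),
    LogisticRegression(C=0.01, max_iter=5000),
)
# Usage: model.fit(X_train, y_train); model.score(X_test, y_test)
```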
Table <ref> presents the results using raw data and multivariate time series classifiers. ROCKET achieves the best performance with MultiROCKET having similar accuracy for this problem. ROCKET has the added benefit that it can also work with unnormalised data and it is faster during training and prediction, so we select this classifier for the rest of the analysis.
We analyse the average accuracy using all 5 IMUs as well as combinations of IMUs using raw time series with ROCKET as classifier. The goal is to select the minimum number of IMUs needed to achieve the best performance for MP and Rowing. Table <ref> presents the average accuracy over 3 splits obtained using all IMUs whereas Table <ref> presents the average accuracy using different combinations of IMUs.
Results and Discussion:
From Table <ref> we observe that using raw data with ROCKET achieves the highest accuracy when compared to the approaches based on handcrafted and automated feature extraction. We tune hyperparameters of ROCKET using the validation data, particularly the number-of-kernels and observe no impact on the accuracy. The normalization flag is set to True here as turning it off leads to a 4 percentage points drop in the accuracy. ROCKET can easily be run on a single CPU machine without the need for much engineering effort (only 2 parameters to tune) and dedicated hardware. It is much faster than using tsfresh or catch22 for feature extraction followed by classification.
Table <ref> presents the accuracy using different combinations of IMUs placed on different parts of the body.
Accuracy is lowest when using only a single sensor. Accuracy starts to increase as more IMUs are included, for both MP and Rowing. We observe that placing 1 IMU on each wrist and 1 at the back achieved the same accuracy as using all 5 IMUs. The accuracy jumps from 0.83 to 0.88 moving from one IMU placed on the right wrist to two IMUs placed on both wrists and finally jumps to 0.91 when adding one more IMU at the back for MP. Similar behaviour is observed for Rowing. This suggests that 3 IMUs are sufficient for these exercises.
§.§ Accuracy using Video
Here we present the results of classification using video as the data source. We report the average accuracy over 3 train/test splits for MP and Rowing. We also present results using tabular classifiers with automated features for comparison with the IMU based approach. For the raw data approach, we study the accuracy when involving different body parts, e.g., all 25, the 8 upper body parts suggested by domain experts and results using automated channel selection technique <cit.>. The normalization flag is set to False here as turning it on leads to a 4 percentage points drop in accuracy. This is in contrast to the setting configured for IMUs. We tune hyperparameters of ROCKET, particularly the number-of-kernels and observe no impact on the accuracy. Table <ref> presents the average accuracy using these different approaches for classifying MP and Rowing exercises.
Results and Discussion:
From Table <ref> we observe that the average accuracy achieved using raw time series is highest when using the 8 body parts suggested by domain experts. Using automated features does not seem to work very well, in this case, achieving accuracy below 80% for both exercises. Moreover, using channel selection techniques leads to an improvement by 1 and 3 percentage points in accuracy versus using the full 25 body parts.
§.§ IMU versus Video
We compare IMU and video data for human exercise classification, using the raw data approach for both IMU and video as it achieves the best performance. We report the accuracy,
the execution time and the storage space required.
Table <ref> presents the results for both MP and Rowing exercises. We observe that a minimum of 3 IMUs are required to achieve a higher accuracy than a single video.
A single video outperforms a single IMU for both exercises by a minimum of 5 percentage points.
Table <ref> reports the real train/test time for both approaches. This time includes time taken for data pre-processing and to train/test the model. It also includes time to run pose estimation in case of video.
The IMUs approach takes the least amount of time to train/test as compared to the video-based approach. For video, OpenPose extracts the multivariate time series data. The total duration of all videos is 1 h 38 min for MP, whereas OpenPose took 1 h 12 min; thus, OpenPose runs faster than real time, which is important for getting fast predictions.
Table <ref> presents the storage consumption for both approaches. We note savings in terms of storage space: 5 IMUs require 6 times more space than the time series obtained from videos. Even after selecting the minimum number of sensors which is 3 in both exercises, the storage consumption is more than 200 MB which is also higher as compared to using time series from video.
Our previous work in <cit.> explored the impact of video quality such as resolution and bit rate on classification accuracy and demonstrated how much video quality can be degraded without having a significant impact on the accuracy, whilst saving storage space and processing power.
§.§ Combining IMU and Video
We create an ensemble model by combining individual models trained independently on IMU and Video. For IMUs, we take the 3 sensors that achieved the highest accuracy. When video is combined with just a single sensor, we take the IMU placed on the left wrist, as it had the highest accuracy among single sensors and it is the most common location for people to wear their smartwatch.
Probabilities are combined by averaging and the class with the highest average probability is predicted for a sample during test time. Table <ref> presents a comparison of different approaches, using ROCKET as a multivariate time series classifier. From Table <ref>, we observe that an ensemble model achieves the best average accuracy when compared to using any number of IMUs and a single video-based approach. The accuracy for MP jumps by 2 percentage points when transitioning from 5 IMUs to an ensemble approach, and by 5 percentage points when moving from a single video to an ensemble. Similar results are observed for Rowing. These results suggest that combining IMU and video modalities enhances the performance of exercise classification. Combining video and IMU data sources, with video providing 2D location coordinates for key anatomical landmarks and IMUs capturing acceleration and orientation of the body parts, results in improved classification accuracy, as shown in this investigation (see supplementary material). This finding is consistent with previous work in <cit.> that highlights the complementary nature of video and IMUs in enhancing human pose estimation quality, while in this work we see a similar benefit for human exercise classification.
§ CONCLUSION
We presented a comparison of IMU and video-based approaches for human exercise classification on two real-world S&C exercises (Military Press and Rowing) involving 54 participants.
We compared different feature-creation strategies for classification. The results show that an automated feature extraction approach outperforms classification that is based on manually created features. Additionally, directly using the raw time series data with multivariate time series classifiers achieves the best performance for both IMU and video. While comparing IMU and video-based approaches, we observed that using a single video significantly outperforms the accuracy obtained using a single IMU. Moreover, the minimum number of IMUs required is not known in advance, for instance, 3 IMUs are required for MP to reach a reasonable accuracy.
Next, we compared the performance of an ensemble method combining both IMU and video with the standalone approaches.
We showed that an ensemble approach outperforms either data modality deployed in isolation. The accuracy achieved was 93% and 88% for MP and Rowing respectively.
The criteria for selecting sensors or videos will ultimately depend on the goal of the end user. For instance, the choice between video and IMUs will depend on a combination of factors such as convenience and the level of accuracy required for the specific application context.
We acknowledge the fact that the scenario that was tested in this research does not accurately reflect real-world conditions. This does mean that we are exposed to the risk that the induced deviation performances could be exaggerated, and therefore not reflective of the often very minor deviations that can be observed in the real-world setting. However, we would argue that performing exercises under induced deviation conditions, if done appropriately, is a very necessary first step towards validating these exercise classification strategies in this field. It would not be prudent to assume that this model could be generalised to operate to the same level in real-world conditions. Having said that, the use of conditioned datasets is a necessary first step in this kind of application and provides the proof of concept evidence necessary to move onto the real-world setting.
§.§.§ Acknowledgements
This work was funded by Science Foundation Ireland through the Insight Centre for Data Analytics (12/RC/2289_P2) and VistaMilk SFI Research Centre
(SFI/16/RC/3835).
§ ETHICAL IMPLICATIONS
Using videos for human exercise classification raises ethical implications that need to be mitigated, prompting a discussion of potential ethical implications.
Data Collection.
Participants in this study provided written consent and the Human Research Ethics Committee of the university approved this study. All experiments were conducted under the supervision of an expert physiotherapist. Potential implications can arise when the language used for the consent form is not native to all the participants. In our case, the organizing authority or professional carrying out the data collection made sure that all the participants fully understood the consent form and the future use of this data.
Privacy and Confidentiality. This study uses videos which record participants executing exercises. This poses obvious privacy challenges. A first step is to blur the video to protect the participant's identity. This work utilizes human pose estimation to extract time series from video, thereby avoiding the need to directly use the original video. By working with the extracted time series, it largely safeguards the privacy and confidentiality of the participants.
Diversity of Representation.
The participants considered in this study fall into the age group of 20 to 46. Hence the results presented here may not generalise for other age groups. Therefore the final use case will depend on the specific target users, such as athletes competing in the Olympic games versus individuals with less intensive training goals. While there were slightly more male participants than female participants, it does not impact the conclusions drawn in this work, as analysed in the supplementary material. However, this requires further exploration to avoid any biases in the conclusion. Future studies should aim for equal representation among participants in terms of age, sex, gender, race etc., from the start of the study.
Transparency and Feedback. The prediction of the model in this case outputs whether the execution of the exercise was correct or incorrect. Deep learning-based models and other post-hoc explanation methods support saliency maps, which can be used to highlight the discriminative regions of the data; these regions can be mapped back to the original video, providing the participant with more information about the model's decision.
The above list is not exhaustive and other inherent biases may appear because of the chosen model and the way the data has been collected.
http://arxiv.org/abs/2307.07474v1 | 2023-07-14 | Reproducibility of density functional approximations: how new functionals should be reported | Susi Lehtola, Miguel A. L. Marques | physics.comp-ph
[email protected]
Molecular Sciences Software Institute, Blacksburg, Virginia 24061,
United States
Department of Chemistry, University of Helsinki, P.O. Box 55, FI-00014
University of Helsinki, Finland
Research Center Future Energy Materials and Systems of the University
Alliance Ruhr, Faculty of Mechanical Engineering, Ruhr University
Bochum, Universitätsstraße 150, D-44801 Bochum, Germany
Density functional theory is the workhorse of chemistry and materials
science, and novel density functional approximations (DFAs) are published
every year. To become available in program packages, the novel DFAs
need to be (re)implemented. However, according to our experience as
developers of Libxc [Lehtola et al, SoftwareX 7, 1 (2018)], a
constant problem in this task is verification, due to the lack of
reliable reference data. As we discuss in this work, this lack has
led to several non-equivalent implementations of functionals such
as BP86, PW91, PBE, and B3LYP across various program packages, yielding
different total energies. Through careful verification, we have also
found many issues with incorrect functional forms in recent DFAs.
The goal of this work is to ensure the reproducibility of DFAs: DFAs
must be verifiable in order to prevent reappearances of the abovementioned
errors and incompatibilities. A common framework for verification
and testing is therefore needed. We suggest several ways in which
reference energies can be produced with free and open source software,
either with non-self-consistent calculations with tabulated atomic
densities or via self-consistent calculations with various program
packages. The employed numerical parameters—especially, the quadrature
grid—need to be converged to guarantee the ≲0.1μ E_h
precision for fully numerical calculations which routinely afford
such precision in the total energy. Such sub-μ E_h level of
agreement can only be achieved when fully equivalent implementations
of the DFA are used. Therefore, also the source code of the reference
implementation should be made available in any publication describing
a new DFA.
Reproducibility of density functional approximations: how new functionals
should be reported
Miguel A. L. Marques
August 12, 2023
§ INTRODUCTION
Density functional theory<cit.>
(DFT) is the workhorse of modern quantum chemistry and materials science.
The key idea in DFT is that the complicated quantum mechanical interactions
of the electrons can be rewritten in terms of the electron density,
only, leading to significantly simpler and more affordable calculations
than those with traditional quantum chemical methods based on direct
solution of the electronic wave function.<cit.>
As the exact density functional is still unknown, in practice DFT
calculations rely on density functional approximations (DFAs).<cit.>
Combined with the development of computer architectures as well as
mathematical algorithms for DFT calculations, modern DFAs have enabled
ab initio design of new energy materials<cit.>
and catalysts,<cit.> for example.
Little is known, however, about the form of the exact density functional.
This leaves a great deal of freedom in the construction of DFAs. Therefore,
it is not surprising that a huge variety of DFAs has been proposed
in the literature for the past 60 years: for instance, over 600 DFAs
are available at present in our Libxc library of density functionals.<cit.>
Despite the significant number of DFAs already available in the literature,
the development of novel DFAs continues on many fronts. A great many
novel functionals have also appeared in the past three years, as exemplified
by some of the functionals<cit.>
discussed later in this work.
In order to become available for users of various scientific software
packages, the novel DFAs need to be implemented in those programs.
In the early days of DFT—before the internet and distributed version
control systems became widely available—distributing software was
difficult. As a result, software development typically happened within
a silo mentality: everything (including DFAs) was implemented separately
in each monolithic program package, leading to duplicated effort across
software packages. As a result, disparate choices were made across
various programs, as we will demonstrate in this work for the BP86,<cit.>
PW91,<cit.>
PBE,<cit.> and B3LYP<cit.>
functionals. (We also note in this context that <cit.>
recently described differences in the BP86 functional between ORCA<cit.>
and MRChem<cit.> of several kcal/mol at the complete
basis set (CBS) limit.)
In contrast, in the modern open source paradigm of software development,
common tasks are accomplished via reusable shared modular libraries.<cit.>
In the present case of the evaluation of DFAs, the aforementioned
Libxc<cit.> is the implementation of choice: Libxc
is used by around 40 electronic structure programs based on various
numerical approaches, such as atomic-orbital basis sets, plane waves,
as well as real-space approaches. Thanks to the modular approach to
software development, new functionals only need to be implemented
in Libxc to become usable in a large number of programs. In addition,
the common library enables access to exactly the same implementation
across various numerical approaches (see Lehtola2019_IJQC_25968
for a recent review on numerical approaches), which simplifies, for
instance, the study of numerical precision such as basis set truncation
errors by comparison to fully numerical reference values.
However, in spite of the progress facilitated by the new programming
paradigms, it is essential to verify any new implementation(s) of
DFAs before making them available for use. This verification essentially
boils down to the question of reproducibility: can the main result
of a paper describing a novel DFA—the DFA itself—be exactly
reproduced?
The ability to verify the DFA is topical thanks to the recent availability
of fully numerical methods that allow the reproduction of reliable
total energies for moderate size systems to μ E_h precision.<cit.>
Access to such total energies, determined directly at the CBS limit,
enables benchmark studies of the precision of various numerical approaches.<cit.>
For instance, the basis set truncation error (BSTE) of an approximate
method can be computed as the difference from the CBS limit energy
Δ E=E(approximate)-E(CBS)≥0,
affording an unambiguous measure of the precision of the studied numerical
approximation.
It is clear that <ref> only produces reliable estimates of
the BSTE if the density functional implementations used in the approximate
and CBS limit calculations agree to very high (sub-μ E_h) precision.
As we already mentioned above, this is not always the case, as we
will demonstrate in this work. Therefore, high-precision studies should
always either use the exact same implementation of the density functional
(e.g. using Libxc), or verify that the employed density functional
implementations match to the required precision.
Our general aim with Libxc<cit.> is to reproduce
functionals as they were originally employed. The obvious first step
to enable such reproduction of a novel DFA in Libxc and other programs
is to always publish the source code of the reference implementation
as supporting information to the article describing a new DFA. Access
to the source code may be necessary to enable reverse engineering
the implementation that was actually employed, since as we will exemplify
in this work, in many cases the DFA described in an article does not
match the implementation that was actually used to obtain the published
data.
Validation efforts are also greatly aided by unambiguous, reliable
total energies evaluated with the novel functional, as comparing total
energies is easier than comparing implementations in source code.
As we will show in this work by counterexample, matching total energies
to sub-μ E_h precision is sufficient to demonstrate that the
DFAs match, because small changes to the functional parameters often
lead to μ E_h level differences in total energies. However,
we also note here that we have recently shown that many functionals
do not allow facile evaluation of reliable total energies due to numerical
ill-behavedness; we refer the reader to the related literature for
discussion,<cit.>
and urge functional developers to check and demonstrate that their
new functionals are numerically well-behaved.
The issues with differing implementations of established DFAs in various
program packages is most likely caused by the historical lack of reference
implementations and reliable reference energies. Because these two
issues are still a plague on the implementation and verification efforts
of novel DFAs, as we can attest as longtime developers of Libxc, the
aim of this work is to document various issues we have uncovered in
a number of DFAs, to draw attention to these common issues, and to
prevent them from reoccurring in the future by raising awareness in
the community about the need to be able to verify novel DFAs.
The layout of this work is as follows. Next, in <ref>,
we briefly summarize the mathematical composition of DFAs. Then, in
<ref>, we list common problems with
the ways that several DFAs have been reported in the literature. We
suggest three feasible alternative approaches for determining reliable
reference energies in <ref>: i) the use of
tabulated wave functions, as well as self-consistent calculations
with ii) Gaussian basis sets and iii) fully numerical methods. We
illustrate the usefulness of the three approaches in <ref>
by showing how tabulated wave functions can be used to study differences
between density functional implementations, and demonstrating the
need to converge self-consistent calculations to the quadrature grid
limit to allow the determination of reliable reference energies. We
finish with a brief summary and discussion in <ref>.
Atomic units are used throughout unless specified otherwise.
§ THEORY
In DFT, the total energy is expressed as
E[n]=T[n]+V[n]+E_J[n]+E_xc[n],
where T is the kinetic energy (typically evaluated in terms of
the occupied orbitals as suggested by <cit.>),
V is the nuclear attraction energy, E_J is the classical Coulomb
repulsion of the electrons, and E_xc is the quantum mechanical
exchange-correlation energy. Common DFAs express E_xc
as
E_xc[n]=∫ nϵ_xc(n_↑,n_↓,∇ n_↑,∇ n_↓,∇^2n_↑,∇^2n_↓,τ_↑,τ_↓) d^3r,
where n_↑ and n_↓ are the spin-up and spin-down
electron density, and τ_↑ and τ_↓
are the local kinetic energy densities
τ_σ=1/2∑_i occupied|∇ψ_iσ|^2.
The ϵ_xc term in <ref> is the DFA, which
is a (often complicated) mathematical function with known analytical
form. DFAs can be classified on Jacob's ladder based on their ingredients:<cit.>
* local density approximation (LDA): dependence only on n_↑
and n_↓
* meta-LDA approximation:<cit.> dependence on
n_↑ and n_↓ as well as τ_↑
and τ_↓
* generalized-gradient approximation (GGA): dependence on n_↑
and n_↓ as well as their gradients ∇ n_↑
and ∇ n_↓
* meta-GGA approximation: further dependence on the Laplacian ∇^2n_↑,
∇^2n_↓, and/or the local kinetic energy density
τ_↑, τ_↓
Note that two conventions for τ_σ exist in the literature:
the one with the physical factor 1/2, as in our <ref>,
and another without it. Several DFAs have been published following
either definition; the actual choice does not matter as long as it
is clear and made consistently.
In addition to a term of the form of <ref>, some DFAs also
add post-DFT terms to <ref> such as
* exact exchange in either the Hartree–Fock (global hybrids, e.g.
the B3LYP functional<cit.>) or range-separated
form (range-separated hybrids, e.g. the ωB97X functional<cit.>)
* non-local correlation (e.g. ωB97X-V) or semiempirical dispersion
(e.g. the ωB97X-D3 functional<cit.>)
* post-Hartree–Fock correlation (double hybrids, e.g. the XYG3 functional<cit.>)
Although the need to verify DFAs applies to all of these families,
these additional ingredients will not be discussed further in this
work, because the conclusions of our main analysis also apply to such
functionals. Our main focus is the accurate evaluation of the total
energy of <ref>, and its ramifications on the reproducibility
and verification of DFT calculations.
It is important to note here that the DFA energy of <ref>
is usually evaluated by quadrature using the scheme pioneered by <cit.>;
see our recent work in Lehtola2022_JCP_174114 for discussion.
For the present purposes, it suffices to state that the quadrature
is an approximation, which can in principle be made arbitrarily accurate
by using sufficiently many points. It is of utmost importance to study
the convergence of the DFA energy with respect to the size of the
quadrature grid when reporting new DFAs and reference energies for
them, as we will discuss in <ref>.
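As a concrete illustration of the energy expression and quadrature just described, the following Python sketch evaluates a GGA exchange energy non-self-consistently for a hydrogenic 1s density on a radial grid, using the pylibxc bindings of Libxc. The uniform grid, the simple Riemann-sum weights, and the choice of functional are illustrative assumptions, not the quadrature schemes analyzed later in this work; repeating the loop with denser grids shows how the integrated energy approaches the quadrature limit.
import numpy as np
import pylibxc

def exc_radial(func_name, n_rad):
    # uniform radial grid with Riemann-sum weights; production codes use the
    # Gauss-Chebyshev-type quadratures discussed in the text
    r = np.linspace(1e-6, 20.0, n_rad)
    dr = r[1] - r[0]
    rho = np.exp(-2.0 * r) / np.pi        # hydrogenic 1s density, n(r) = e^(-2r)/pi
    sigma = (-2.0 * rho)**2               # |grad n|^2 = (dn/dr)^2 for a spherical density
    func = pylibxc.LibXCFunctional(func_name, "unpolarized")
    out = func.compute({"rho": rho, "sigma": sigma})
    eps_xc = out["zk"].flatten()          # energy density per particle
    return float(np.sum(4.0 * np.pi * r**2 * rho * eps_xc) * dr)

for n_rad in (50, 100, 200, 400, 800):
    print(n_rad, exc_radial("gga_x_pbe", n_rad))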
§ COMMON PROBLEMS WITH VERIFICATION
As can be attested by our experience in developing Libxc, verifying
implementations of DFAs is often painstaking, as the usual problems
include the following.
* the original article does not report raw total energies, only computed
energy differences such as atomization energies and/or optimized geometries
(for example, refs. )
* the reference values are not fully converged with respect to all numerical
parameters (for example, refs. )
* the reference values are not reported with sufficiently many decimals
(for example, refs. )
* the functional form is incorrect
* the article gives the wrong functional form but it is correct in the
reference implementation (for example, refs. )
[mBRbug]There's a missing factor of two for the kinetic energy of the homogeneous electron gas in Patra2019_PCCP_19639.
[rMGGACbug]To reproduce the reference implementation of Jana2021_NJP_63007, the local kinetic energy of <ref> needs to be defined without the factor of one half.
[relPBE0bug]The description of the relPBE0 functional in Mitrofanov2021_JCP_161103 omits that the LDA and GGA contributions to correlation need to be scaled by factors of ≈ 0.390518 … and ≈ 0.60948…, respectively.
* the functional form was correct in the paper but not in the reference
implementation (for example, refs. )
[CCaLDAbug]To reproduce the reference implementation of Lebeda2022_PRR_23061, an additional factor of 2^2/3 needs to be added in the definition of α.
* too few details are given on how the data was actually obtained (for
example, refs. )
* the parameter values in the implementation are different from the
paper (for example, Perdew1996_PRL_3865, PBEbug, Adamo1998_JCP_664, mPW91bug, Boese2000_JCP_1670, hcthbug, Hoe2001_CPL_319, O3LYPbug, Peverati2012_JCTC_2310, N12bug, Peverati2012_PCCP_16187, N12SXbug, Ma2022_SA_279, GAS22bug)
[PBEbug]The implementation of Perdew1996_PRL_3865 employs a more precise value for μ than the one given in the paper.
[mPW91bug]In commit 4b9609d1c57 of the source code of the NWChem implementation, dated 19 Feb 2003, Edoardo Apra comments that Adamo has confirmed that there is a typo in the JCP paper; b = 0.00426 instead of 0.0046 given in the text of Adamo1998_JCP_664, also the exponent is 3.72 and not 3.73 as given in the manuscript. See <https://github.com/nwchemgit/nwchem/blob/master/src/nwdft/xc/xc_xmpw91.F> (accessed 1 June 2022).
[hcthbug]The c_5 parameter of HCTH/147 was given with the wrong sign in <cit.>. Another set of parameter values are given in <cit.> that agree with the ones in Boese2000_JCP_1670 for c_1–c_3 and c_9–c_15 but have small differences for the c_4–c_8 coefficients. Boese2003_JCP_3005 gives (in addition to the correct sign for c_5) one more decimal for c_4–c_7 than Boese2000_JCP_1670 but also a differently rounded value for c_8. The original implementation in CADPAC appears to use still more decimals.
[O3LYPbug]The parameters of Hoe2001_CPL_319 do not reproduce the data of the paper; see <http://www.ccl.net/chemistry/resources/messages/2008/10/09.007-dir/index.html> (accessed 1 June 2022).
[N12bug]The same-spin and opposite-spin correlation coefficients are interchanged in Peverati2012_JCTC_2310.
[N12SXbug]The same-spin and opposite-spin correlation coefficients are interchanged in Peverati2012_PCCP_16187. Moreover, the exchange functional coefficients need to be transposed.
[GAS22bug]The Jupyter notebook code of Ma2022_SA_279 employs many more decimals for the parameters than given in the paper.
We will now proceed to explain why these are problems for verification
of DFAs.
The problem with <ref> is that energy differences
or optimized geometries tend to exhibit systematic error cancellations.
They are therefore less sensitive to the functional form and to the
values of the employed parameters than the total energy is. Although
physical properties and chemistry often also benefit from such error
cancellation, one needs to compare raw total energies to ensure that
two DFA implementations are equivalent.
Similarly for <ref>, the typical problem is that
a default quadrature grid has been used to evaluate the density functional.
This almost always means that the data is not converged to the precision
necessary to compare two different implementations, as the default
grids tend to be chosen to maximize speed while maintaining a sufficient
level of precision for chemical applications. (Note that such a level
of precision is not always obtained even for standard calculations.<cit.>)
As was already remarked above in <ref>, the quadrature
error needs to be made insignificant for proper comparisons to take
place. As we have discussed in Lehtola2022_JCP_174114 for
fixed densities, and as will be demonstrated with self-consistent
calculations in <ref>, this can require hundreds
of radial grid points in the case of many functionals; polyatomic
calculations will also require the use of large angular grids to yield
fully converged total energies.
The problem with <ref> is that many works report total energies
only with three-decimal precision. Such 1 mE_h precision
corresponds to roughly 30 meV, which is large enough in our experience
to hide both errors in the functional form (<ref>)
as well as discrepancies in parameter values (<ref>).
The best practice is to report reference total energies to 1 μ E_h
precision or better.
<Ref> is self-explanatory: the functional is not
what was published. In the present authors' opinion, the correct implementation
should reproduce the results in the paper, unless the original authors'
implementation was incorrect and the results in the paper have been
rectified with a further erratum. A famous example of this is the
Heyd–Scuseria–Ernzerhof (HSE) functional,<cit.>
whose HSE03 variant corresponds to the original erroneous implementation,
and the HSE06 variant to the rectified implementation, which is the
functional that was supposed to have been used in the original paper.
The HSE functionals are actually infamous for reproducibility: various
codes have implemented the functionals in dissimilar manners. Specifically,
there is an enormous mess concerning the values of ω in the
HSE functionals in the literature, as well as in the available implementations.
To rehash, the original paper<cit.> stated that
the range-separation parameter ω^HF=0.15=ω^PBE
was used for the HF and PBE parts of the functional. However, due
to an error in the code, the real value used was ω^HF=0.15/√(2)≈0.1061
and ω^PBE=0.15·2^1/3≈0.1890, according
to the erratum published a few years later.<cit.>
<cit.> tried to clarify the situation, and
called the original choice of parameters with ω^HF≠ω^PBE
HSE03, and the functional where ω^HF=ω^PBE
HSE06. By testing several properties for atoms <cit.>
determined the best value ω^HF=ω^PBE=0.11
for the HSE06 form, differing from the values described in Heyd2003_JCP_8207
and .
Now, HSE06 in Quantum Espresso<cit.> employs
the value ω^HF=ω^PBE=0.106, which is
clearly not the ω^HF=ω^PBE=0.15 of
the HSE06 of <cit.>, nor the reoptimized value
ω^HF=ω^PBE=0.11 of <cit.>.
HSE06 in VASP,<cit.> in turn, employs ω^HF=ω^PBE=0.2 Å^-1≈0.1058,
which is similar to (but not the same as!) the value used in Quantum
Espresso, and also disagrees with <cit.> and
<cit.>. Even more surprisingly, HSE03 in VASP
employs ω^HF=ω^PBE=0.3 Å^-1≈0.1587,
breaking with the terminology of HSE03 vs. HSE06 suggested
by <cit.>.
The situation with the HSE functionals is complicated even further
by the introduction of scaling functions<cit.>
to force the functional to obey the local version of the Lieb–Oxford<cit.>
bound. In fact, we find several modern implementations using different
scaling functions, which lead to slightly different results.
Because of the above discrepancies, it is unfortunately the end user's
responsibility to check whether the implementations in any two codes
are sufficiently similar to enable meaningful comparison of the results.
The standard modular implementations provided by Libxc<cit.>
are thereby invaluable, as they enable apples-to-apples comparisons
of also these functionals.
In <ref>, sufficient details have not been provided
on how the data reported in the paper were obtained. In practice,
this may mean that the basis set and/or the quadrature grid that were
used for the calculations has not been specified. Without this information,
the results cannot be reproduced and their quality cannot be judged,
making the data worthless.
<Ref> is present in many older functionals,
where small differences in the parameter values that usually arise
from truncation cause differences typically of the order 10^-4E_h,
which are thereby noticeable when the calculations are tightly converged;
see <ref> for examples.
§ SIMPLE SOLUTIONS
§.§ Tabulated wave functions
If the DFA has the form of <ref>, it may suffice to report
non-self-consistent energies computed on top of atomic Hartree–Fock
wave functions. Our reasoning is the following: when combined with
symbolic algebra as in Libxc<cit.> or automatic
differentiation as in XCFun<cit.>, the correctness
of the energy evaluation—which is ensured by the non-self-consistent
single-point evaluation—already ensures that the gradient of the
energy is also correct, since computer algebra systems and automatic
differentiation toolkits are not expected to give faulty derivatives.
(We also note here that if a functional's energy gradient has not
been implemented correctly, the result of a self-consistent calculation
is a total energy that is higher than the true ground state energy,
provided that the calculation is variational, like modern Gaussian-basis
and fully numerical calculations are.<cit.>)
Although tabulated Hartree–Fock wave functions are often thought
to be synonymous with those of <cit.>, which
still appear to be used by functional developers, we note that more
accurate atomic Hartree–Fock wave functions have been reported in
the literature.<cit.> We made
these wave functions accessible in a simple and easy-to-use Python
package called AtomicOrbitals<cit.> in Lehtola2022_JCP_174114.
This package is interfaced with Libxc, and it also allows for easy
access to atomic densities and quadrature grids, thus allowing the
use of custom implementations of novel DFAs, as well. We recently
employed the wave functions of <cit.> to study
the numerical well-behavedness of DFAs in Lehtola2022_JCP_174114,
and found many recent DFAs to be ill-behaved.
§.§ Self-consistent calculations
Non-self-consistent calculations at fixed reference densities likely
suffice to determine whether two DFA implementations agree. However,
non-self-consistent calculations do not afford a full examination
of the numerical stability of a DFA: in a self-consistent field (SCF)
calculation, the electron density can adapt to features of the DFA,
and a poorly behaved density functional can exhibit numerical pathologies
in SCF calculations in extended basis sets.<cit.>
A good test system for studying various kinds of DFAs should have
a well-behaved electron configuration. The test system should not
have low-lying excited states onto which the SCF procedure could converge,
as such saddle-point convergence would unnecessarily complicate the
determination of whether a reimplementation of a DFA is faithful to
the original reference implementation. For similar reasons, the test
system should also not exhibit spatial or spin symmetry breaking.
The ideal systems therefore have either half-closed or fully closed
electronic shells. Moreover, since errors in the energy are likely
extensive, the best choice is to focus on the smallest possible bound
systems: atoms.
Studying atoms comes with significant added benefits for the density
functional comparison. Typical basis set expansions are much better
behaved in the case of single atoms, obviating the need for approaches
to choose an unambiguous basis for carrying out the electronic structure
calculation,<cit.> for instance. In addition,
calculations on atoms can also easily be carried out with fully numerical
methods: the high amount of symmetry inherent in atomic problems enabled
accurate numerical calculations already over 60 years ago.<cit.>
The N and Ne atoms are excellent choices for verification purposes.
They are light atoms and have half-closed 1s^22s^22p^3 (quartet)
and closed-shell 1s^22s^22p^6 (singlet) configurations, respectively,
while being sufficiently heavy to exhibit significant electron correlation.
Moreover, these atoms are sufficiently well-behaved to not cause numerical
issues for most functionals, unlike the lighter atoms with half-closed
shells: hydrogen only has a single electron, while lithium has a pronouncedly
diffuse electron distribution which is problematic for many functionals.<cit.>
We note that it is important to include both spin-restricted (neon)
and spin-unrestricted (nitrogen) systems in the verification, because
the verification is usually easiest to start with the former, as the
latter type of systems tend to be more complicated due to the spin
polarization.
For similar reasons, two sets of reference energies should be included
for new exchange-correlation functionals: self-consistent energies
for the exchange-only approximation, as well as for calculations including
both exchange and correlation. The exchange functional is energetically
much more important than the correlation functional, and the correctness
of the correlation component should only be checked once the correctness
of the usually much more simple exchange part has been certified.
We will now proceed to discuss two types of numerical approaches for
carrying out self-consistent calculations on these atoms, affording
well-defined reference energies to high precision.
§.§.§ Gaussian-basis calculations
Gaussian-basis calculations offer an easy choice for self-consistent
calculations. Gaussian basis sets of various sizes are available,<cit.>
ranging from minimal basis sets<cit.> to extended
basis sets designed especially for benchmark quality calculations.<cit.>
A large number of Gaussian-basis programs are likewise available for
performing the necessary calculations. In addition to established
commercial packages, several programs that are free and open source
software (FOSS) have also become available in recent years.<cit.>
Here we especially want to mention PySCF<cit.>
and Psi4,<cit.> which are both interfaced to
Libxc and enable efficient density functional calculations.
The actual basis set used to perform the calculation is not as important
as ensuring that the basis set is unambiguously defined. For instance,
the Dunning cc-pVXZ basis sets<cit.> have famous
discrepancies across program packages that mainly arise from different
ways to compute two-electron integrals. The cc-pVXZ basis sets are
generally contracted,<cit.> and many programs
designed for segmented contractions employ modified variants of these
basis sets for improved computational efficiency.<cit.>
There are also discrepancies between versions of the basis sets included
in various programs, as exemplified by the recent work of <cit.>.
For this reason, we recommend employing basis sets downloaded from
the Basis Set Exchange,<cit.> and enclosing
the used basis set in the Supporting Information. The Hartree–Fock
total energy should also be reported, as it can be used for an independent
test of whether the basis set really is the same in the calculations
in various programs.
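As a sketch of how such Gaussian-basis reference energies could be produced with free software, the following PySCF script computes a tightly converged PBE energy for the Ne atom on a dense quadrature grid. The functional, basis set, convergence threshold, and grid size are illustrative choices, not definitive settings; a spin-unrestricted analogue (dft.UKS with spin=3) would give the corresponding nitrogen value, and the Hartree-Fock energy can be obtained in the same script for the basis set consistency check mentioned above.
from pyscf import gto, dft

mol = gto.M(atom="Ne 0 0 0", basis="def2-qzvppd", spin=0)
mf = dft.RKS(mol)
mf.xc = "pbe,pbe"                  # exchange and correlation evaluated through Libxc
mf.conv_tol = 1e-11                # tight SCF convergence threshold
mf.grids.atom_grid = (500, 434)    # (radial, angular) quadrature points
energy = mf.kernel()
print("E(PBE, Ne) = %.8f E_h" % energy)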
§.§.§ Fully numerical calculations
Fully numerical calculations<cit.> go one
step further from Gaussian-basis calculations: a flexible numerical
basis set allows converging total energies to sub-μ E_h precision
from the CBS for the given density functional. For example, the freely
available open source program HelFEM<cit.>
employs the finite element method, which affords a quick approach
to the CBS limit. HelFEM is interfaced to Libxc and supports LDA,
GGA and meta-GGA calculations on atoms and diatomic molecules including
hybrid functionals. In the case of atoms, also range-separated functionals
are supported.<cit.> Fully numerical Hartree–Fock
calculations are likewise possible in HelFEM within the single-determinant
or fractional-occupation approach. Fully numerical calculations on
atoms with HelFEM have been extensively discussed in Lehtola2019_IJQC_25945,
, ,
and , to which we refer to further
details.
§ DEMONSTRATIVE CALCULATIONS
§.§ Studies at fixed density
In this subsection, we will demonstrate the effects that small changes
to the parameters or to the functional form of the PBE, P86, PW91,
PW92, and B3LYP functionals have on the resulting total energy, employing
the tabulated wave functions discussed in <ref>.
The total energies are evaluated with the scheme discussed in Lehtola2022_JCP_174114
employing the default N=2000 radial quadrature points.
PBE
The PBE exchange functional<cit.>
is defined by the simple enhancement factor
F_x(s) = 1 + κ - κ/(1 + μ s^2/κ) = 1 + κ(1 - κ/(κ + μ s^2))
that depends only on two parameters: κ and μ, which control
the s→∞ asymptotic value of the enhancement function and
the coefficient of the s^2 term in the s→0 limit, s denoting
the standard expression for the reduced gradient, which is irrelevant
for the present discussion.
However, at least two variants of the PBE exchange functional can
be found in actual implementations. Although the parameter κ
typically has the value κ=0.804, there are differences in
the value of μ. The problem is that there are several definitions
in the paper of <cit.>: first, μ=βπ^2/3
with β=0.066725; second, μ=0.21951 (the first choice would
give μ=0.21952); and third, the value β=0.06672455060314922
used in Burke's reference implementation. Libxc<cit.>
follows the reference implementation and employs the precise value
in . In contrast, XCFun<cit.>
employs the first option; this variant is available in Libxc as .
We also comment on a third implementation: for historical reasons,
the implementation of PBE exchange in the Gaussian program is
equivalent to the choice κ=0.804000423825475 and μ=0.219510240580611 (Giovanni Scalmani, private communication, 2022).
This variant is also available in Libxc as
These different choices for κ and μ show up as detectable
differences in the total energy, as is demonstrated by the data in
<ref>. Even though the value of β differs by just
7 parts-per-million between the two variants,
the resulting differences in total energy are still significant.
We note here that most functionals in Libxc allow overriding the parameter
values with the set_ext_params function, allowing
the use of PBE exchange with arbitrary values for κ and μ,
for example.
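The effect of these choices can be illustrated with a few lines of standalone Python that evaluate the enhancement factor given above for the different μ conventions; the numbers produced are for illustration only and are not reference data.
import numpy as np

def fx_pbe(s, kappa=0.804, mu=0.06672455060314922 * np.pi**2 / 3.0):
    return 1.0 + kappa - kappa / (1.0 + mu * s**2 / kappa)

mu_paper = 0.21951                               # rounded value quoted in the paper
mu_beta = 0.066725 * np.pi**2 / 3.0              # mu = beta pi^2/3 with beta = 0.066725
mu_ref = 0.06672455060314922 * np.pi**2 / 3.0    # beta of the reference implementation

for s in (0.5, 1.0, 2.0):
    print(s, fx_pbe(s, mu=mu_paper) - fx_pbe(s, mu=mu_ref),
          fx_pbe(s, mu=mu_beta) - fx_pbe(s, mu=mu_ref))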
P86
A similar issue exists in Perdew's 1986 correlation functional (P86),<cit.>
which together with Becke's 1988 exchange functional<cit.>
(B88) forms the famous BP86 exchange-correlation functional. Although
P86 relies on several fitted parameters, it also depends on a numerical
constant 1.745f̃ with f̃=0.11 given in the paper.
However, it turns out that the numerical factor 1.745 is in fact
an approximate value for
(9π)^1/6 ≈ 1.74541506…
that originates from the Langreth–Mehl correlation functional,<cit.>
which was the basis for P86. Some implementations of P86 opt to use
the exact value of <ref>.
Both implementations are available in Libxc; the default version
employs the value 1.745 given in the paper, while the alternative version
employs (9π)^1/6. This 𝒪(10^-4)
difference in the value of the numerical constant is again visible
in total energies, if they are reported to sufficient precision, as
is seen from <ref>.
Note that the P86 functional is based on the PZ LDA correlation functional,
which was found to be numerically ill-behaved in Lehtola2022_JCP_174114
due to its poor convergence to the quadrature limit.<cit.>
This also means that the energies given in <ref> for these two
PZ-based variants are
unlikely to be converged to sub-μ E_h precision.<cit.>
In contrast, the variants of P86 based on the VWN functional were
found to be numerically well-behaved in Lehtola2022_JCP_174114.
The data for these VWN-based
variants in <ref> likewise illustrate the effect of
the truncation of (9π)^1/6.
PW91
The 1991 Perdew–Wang exchange functional<cit.>
(PW91) is another interesting case. The enhancement factor for this
functional as described in Perdew1992_PRB_6671 reads as
F(s) = [1 + 0.19645 s sinh^-1(7.7956 s) + (0.2743 - 0.1508 e^-100s^2) s^2] / [1 + 0.19645 s sinh^-1(7.7956 s) + 0.004 s^4].
This functional, which is available in Libxc,
appears to contain 6 numerical parameters. However, it turns out that
this functional can also be written in a different form. According
to the literature,<cit.> the book
chapter by <cit.> shows that the enhancement factor
of PW91 exchange can also be written in the form
F(x) = [b x^2 - (b - β) x^2 exp(-c x^2) - 10^-6 x^d] / [1 + 6 b x sinh^-1(x) - 10^-6 x^d / A_x]
where x=|∇ n|/n^4/3 is a reduced gradient without the numerical
prefactor (x∝ s), β=5(36π)^-5/3, b=0.0042,
c=1.6455, and d=4. Some programs implement the PW91 exchange
functional using the form of <ref>. This functional,
which is available in Libxc as , will obviously
give a different numerical result than <ref>, as is also
clear from <ref>.
Note that the book in which Perdew1991__ was published
is not online and has been out of print for decades, and the present
authors have failed to access it despite prolonged efforts. Because
of this and other similar issues, our recommendation is to publish
novel functionals as journal articles that are more likely to remain
accessible in the future.
PW92
The 1992 Perdew–Wang LDA correlation functional<cit.>
(PW92) is another interesting case. The PW92 functional employs the
spin interpolation formula of <cit.> with
f(ζ) = [(1+ζ)^4/3 + (1-ζ)^4/3 - 2] / (2^4/3 - 2).
Even though the exact value of f”(0), which is used in the functionals
of <cit.>, is easy to evaluate to
f”(0) = 4/[9(2^1/3 - 1)] ≈ 1.709920934161365…,
Perdew1992_PRB_13244 specifies the value f”(0)=1.709 921
for this quantity that is used in the interpolation, and this is the
value used in the Libxc implementation as well.
Employing the exact value of <ref>, as in the
modified version also available in Libxc, leads to slightly different
total energies, as is visible from <ref>.
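The following short Python check, included here only as an illustration, compares the exact curvature of the spin-interpolation function given above with the truncated value and verifies it by finite differences.
import numpy as np

def f_zeta(zeta):
    return ((1.0 + zeta)**(4.0/3.0) + (1.0 - zeta)**(4.0/3.0) - 2.0) / (2.0**(4.0/3.0) - 2.0)

fpp0_exact = 4.0 / (9.0 * (2.0**(1.0/3.0) - 1.0))    # 1.709920934161365...
fpp0_truncated = 1.709921                            # value quoted in the PW92 paper

h = 1.0e-4                                           # finite-difference check at zeta = 0
fpp0_numeric = (f_zeta(h) - 2.0 * f_zeta(0.0) + f_zeta(-h)) / h**2
print(fpp0_exact, fpp0_truncated, fpp0_numeric)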
Unfortunately, as the PW92 functional is an ingredient in many GGAs
and meta-GGAs, the choices for the employed value of f”(0) can
also affect many other functionals. These include the PW91,<cit.>
PBE,<cit.> the B97 class
of functionals,<cit.> AM05,<cit.>
BMK,<cit.> GAPC,<cit.>
and SOGGA11<cit.> GGAs, as well as the BC95<cit.>,
CC,<cit.> M05,<cit.> DLDF,<cit.>
M08-HX and M08-SO,<cit.> M11,<cit.>
M11-L,<cit.> MN12-L,<cit.>
MN12-SX,<cit.> MN-15,<cit.>
MN15-L,<cit.> revM11,<cit.>
VSXC,<cit.> B98,<cit.>
and CC06<cit.> meta-GGA functionals. Functionals
that build on top of these functionals may also be affected; for instance,
PKZB,<cit.> TPSS,<cit.>
and SCAN<cit.> meta-GGAs all build on top of the
PBE expressions and as a result, one needs to be aware of the underlying
choice in each case. Many functionals assume the more precise value
of <ref>, as it arises directly from <ref>,
as is also the case with the ωB97M-V functional,<cit.>
for example.
A recent example of issues with the definition of f”(0) is the
Google Advanced Science 2022 (GAS22) density functional,<cit.>
which can be described as a rediscovery of the ωB97M-V functional.
The Jupyter notebook cited in the supporting information of Ma2022_SA_279
contained a total energy for the Si atom in the def2-QZVPPD basis
set<cit.>, which we wanted to reproduce with
the Libxc implementation.
After ensuring that the Libxc implementation used the more precise
parameters of the reference implementation,<cit.> and that
our calculations were converged to the quadrature grid limit, a significant
difference of several μ E_h still remained in the total energy
of the Si atom.
Examining the reference Jupyter notebook implementation revealed that
the truncated value for f”(0) was used instead of the exact value
of <ref> originally employed in ωB97M-V,
as well as in our reimplementation of GAS22 in Libxc. In the def2-SVP<cit.>
basis set, this inconsistency lead to a 3.1 μ E_h difference
in the total energy of the Si atom in the self-consistent calculations
with the reference implementation employing PySCF<cit.>
and our calculations with Libxc<cit.> using ERKALE<cit.>.
When the same parameters were used, the difference between total self-consistent
energies produced by the two implementations was reduced to 5.5nE_h.
See <https://gitlab.com/libxc/libxc/-/issues/419> (accessed 11 July 2023).
B3LYP
There are several more examples of disparate functional forms in the
literature. We will comment on the perhaps most infamous one: the
B3LYP functional.<cit.> This functional is
based on Becke's three-parameter hybrid functional<cit.>
(B3PW91)
E_xc^B3PW91=E_xc^LDA+a_0Δ E_x^HF+a_xΔ E_x^B88+a_cΔ E_c^PW91
where Δ E^HF=E_x^HF-E_x^LDA
is an exact exchange correction, while Δ E_x^B88
and Δ E_c^PW91 are gradient corrections for B88
exchange and PW91 correlation.
In B3LYP, <cit.> replaced PW91 correlation
(which was not yet available in their program) with a combination
of the LYP<cit.> GGA correlation functional and
the LDA correlation functional of <cit.> (VWN).[FrischCCL]
[FrischCCL]Michael Frisch's email reply to Mikael Johansson's question on the Computational Chemistry List, see <http://www.ccl.net/chemistry/resources/messages/2002/05/22.008-dir/>. Accessed 26 April 2022.
Unfortunately, the paper by VWN describes more than one functional;
Libxc implements six variants that reproduce different energies, as
demonstrated by the data in <ref>. Instead of the recommended
version, VWN5, which had been known in the literature as VWN for 14
years by the time B3LYP was published, the variant of VWN implemented
in the Gaussian program is the RPA version, VWN(RPA),
which was also used in the B3LYP functional.
This discrepancy has been the source of much confusion. Because VWN5
was the recommended variant in the literature, many other programs
implemented B3LYP with VWN5 instead of VWN(RPA)—the exact flavor
of the functional not having been specified in Stephens1994_JPC_11623—and
disagreements between the two were later found in the literature,
as has been discussed by <cit.>, for example.
We note the caveat that to this day, B3LYP is not the same functional
in all programs, and it is the user's responsibility to find out which
flavor is used. Both forms of the B3LYP functional are available in
Libxc: one corresponding to the original version of <cit.>,
and one to the VWN5 variant.
Becke's hybrids
The half-and-half hybrid functionals originally introduced by Becke
are another example. The BHLYP functional was originally implemented
incorrectly as half LDA plus half Hartree–Fock exchange in combination
with the LYP correlation functional (available in Libxc as handh),
while the correct composition contains Becke'88 exchange<cit.>
instead of LDA exchange<cit.>
().<cit.>
In some codes, such as Turbomole,<cit.>
BHLYP refers to the latter form which other codes offer as BHandHLYP.
As shown by the data in <ref>, these forms are clearly
not equal, and it is again the user's responsibility to know which
version is employed. (Note that the data in <ref> only
contains the DFA energy, thus excluding the exact exchange energy
from the Hartree–Fock component.)
§.§ Self-consistent calculations
We have previously examined the convergence of radial quadratures
of the exchange-correlation energy for various functionals in Lehtola2022_JCP_174114,
where we showed many recent DFAs to be ill-behaved. In this section,
we study SCF calculations with select density functionals, which are
used to demonstrate the need to converge the radial quadrature to
obtain reliable reference energies. As in Lehtola2022_JCP_174114,
we will consider the Li, N, Ne, Na, P, and Ar atoms.
The calculations in this section employ Psi4 version 1.8,<cit.>
which by default uses the M4 radial grid of <cit.>
with Gauss–Chebyshev quadrature of the second kind. This grid was
found to be one of the better performing alternatives in Lehtola2022_JCP_174114,
even though a modified version of the Gauss–Chebyshev quadrature
was employed in that work.
We choose PW92<cit.> and PBE<cit.> to represent LDA
and GGA functionals, which tend to converge rapidly to the grid limit,
and the TPSS,<cit.> MS0,<cit.> MVS,<cit.> SCAN,<cit.> and r^2SCAN<cit.>
functionals to represent meta-GGA functionals.
We start the density functional calculation from a preconverged HF
solution, as this allows a direct comparison to our earlier study:
examination of the quadrature grid convergence of the density functional
total energy of the first SCF iteration is analogous to Lehtola2022_JCP_174114,
as now the HF wave function is determined on-the-fly in the employed
Gaussian basis set, instead of employing pretabulated Hartree–Fock
wave functions in Slater-type orbital basis sets as in Lehtola2022_JCP_174114.
(These pretabulated densities were discussed above in <ref>,
and we also used them in <ref> to demonstrate
the importance of using a consistent set of parameters in density
functionals.)
To enable a tight convergence assessment similar to Lehtola2022_JCP_174114,
tight convergence thresholds were used for both the HF and DFT calculations. Our calculations
employ an angular Lebedev grid<cit.> of 434
points, as calculations employing the 590-point grid yielded similar
results; after all, the atoms considered here were chosen due to their
spherically symmetric electron densities.
In addition, we turned off grid weight screening in the Psi4 calculations
, as by default points with
small quadrature weights are thrown out. We examined the importance
of the basis function screening threshold on the grid, but the default
value of this threshold appeared
to yield converged results.
The comparison of the grid convergence of the total energies,
Δ E(N_rad) = E(N_rad) - E(1500),
evaluated with the HF density or the SCF density for the functional
reveals that the behavior is similar in both cases; significant differences
in the behavior are only observed when the quadrature approaches machine
precision.
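A possible way to reproduce such a convergence study is sketched below with the Psi4 Python interface. The option names follow current Psi4 releases, and the functional, basis set, and grid sizes are illustrative assumptions rather than the exact settings used in this section.
import psi4

psi4.geometry("""
0 1
Ne
symmetry c1
""")

energies = {}
for n_rad in (100, 200, 400, 800, 1500):
    psi4.set_options({
        "basis": "def2-svp",
        "reference": "rks",
        "e_convergence": 1e-10,
        "d_convergence": 1e-10,
        "dft_radial_points": n_rad,
        "dft_spherical_points": 434,
    })
    energies[n_rad] = psi4.energy("scan")

for n_rad, e in energies.items():
    print(n_rad, e - energies[1500])    # Delta E(N_rad) relative to the densest grid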
The convergence was also seen to be similar across basis sets: we
examined the split-valence polarized def2-SVP<cit.>
and the triple-ζ polarized def2-TZVP<cit.>
basis sets, as well as the extended, benchmark-quality AHGBS-9 basis
set.<cit.> Results for all calculations are
available in the Supporting Information; we only present our main
findings here, which we choose to exemplify with SCF calculations
in the def2-SVP basis set.
The PW92 data turned out to be uninteresting due to rapid convergence
to machine precision. The data for the remaining functionals are shown
in <ref>. As the total energy is seen to converge
in a similar manner in the present self-consistent calculations as
in the fixed-density evaluations of our previous work,<cit.>
these results emphatically confirm our analysis in Lehtola2022_JCP_174114:
the numerical well-behavedness of density functionals has not been
given sufficient attention in the past.
Although well-behaved functionals like PBE and TPSS only require around
100 radial quadrature points to yield total energies converged to
μ E_h precision, others require hundreds more. In the infamous
case of SCAN, getting two programs to agree on the energy to microhartree
requires the use of 600–700 radial quadrature points for the studied
atoms... and this is assuming the calculations converge, which may
not be the case; the missing data points for Li for MVS and SCAN in
<ref> are due to lack of SCF convergence even in
this small Gaussian basis set. These two functionals are known to
be unusably ill-behaved for fully numerical calculations.<cit.>
We end this section with the note that the default radial grid in
Psi4 consists of 75 points, and that many other programs similarly
employ a default grid around this size. This again underlines that
a proper convergence study is required to determine reference energies
to sub-μ E_h precision.
§ SUMMARY AND DISCUSSION
As we have explained in this work, the history of density functional
approximations is full of independent implementations which employ
slightly different parameter values. In many cases, the origin of
the slightly different parameters is the unclarity of the original
literature, which allows several reasonable choices.
The issue with such ambiguities is that they prevent the reproducibility
of the density functional approximation (DFA): the total energy computed
for a given DFA is not necessarily directly comparable between different
programs. Employing tabulated Hartree–Fock electron densities for
atoms, we have exemplified that small changes to the numerical parameters
employed in density functionals—representing different choices
for the parameters enabled by such ambiguities—can have effects
on total energies that are significant in applications demanding high
precision, such as fully numerical electronic structure calculations.
We underline that such different choices have been made in the various
implementations of many DFAs in different programs, and that the reusable
density functionals offered by Libxc are invaluable in allowing the
exact same DFA to be used across programs.
We have also extended our previous analysis of Lehtola2022_JCP_174114
with self-consistent calculations for the Li, N, Ne, Na, P, and Ar
atoms with various Gaussian basis sets. These calculations were used
to exemplify the need to converge the radial quadrature when reporting
reference energies for novel density functionals: as we have experienced
time and again, many functionals have been reported with unconverged
calculations. The self-consistent calculations also confirmed the
conclusions we made based on fixed densities in Lehtola2022_JCP_174114.
These two presentations combined with our analysis of various issues
in reported functionals in <ref> point
out the need to furnish works publishing novel density functionals
with accurate reference energies that allow the verification of reimplementations
of the reported DFA. We have suggested several straightforward ways
in which to determine such data with publicly available free and open
source software.<cit.> The systems for which
reference data is reported should include both spin-restricted and
spin-unrestricted systems; the N and Ne atoms offer excellent test
systems as they have well-behaved electronic structures.
Regardless of the employed approach, it is essential to converge the
calculation of the reference energy with respect to all numerical
parameters, most notably the quadrature grid, as we demonstrated in
Lehtola2022_JCP_174114 and <ref>.
The reference energy should be computed and reported to very high
precision: using suitably large integration grids and small cutoff
thresholds, an agreement of better than 0.1 μ E_h in total
energies is typically achievable in Gaussian-basis calculations across
programs.
Such precise computation of the total energy is already a decent assessment
of the numerical behavior of the density functional: in general, the
more rapidly the quadrature converges, the better-behaved the functional
is. However, the issue of numerical well-behavedness has not been
given adequate attention by the functional developer community, as
many functionals do not afford such quick convergence with respect
to the quadrature.<cit.>
In addition, we note that a good density functional should also afford
stable convergence to the complete basis set (CBS) limit.<cit.>
However, we again note that several recent functionals fail this criterion,
as well.<cit.> Due to
the increasing importance of fully numerical methods in electronic
structure theory, the numerical behavior of new density functionals
should be checked with fully numerical calculations, which are nowadays
routinely possible in the free and open source HelFEM program,<cit.>
for example.
We end this work with a summary of our suggestions to ensure the reproducibility
of novel DFAs. We ask the following data to be furnished as part of
the publication of new density functionals:
* the full mathematical equations for the functional, including all
the values for all the parameters with exactly the same values as
in the reference implementation
* the source code for the reference implementation, if this implementation
is not already included in a standard open source library such as
Libxc or XCFun
* reference energies for N and Ne atoms computed and reported to 0.1μ E_h
precision.
* For self-consistent calculations, the energies should be reported
separately for exchange-only calculations, and calculations with the
full exchange-correlation functional.
* If the functional is defined by separate exchange and correlation
parts, total energies should be reported both for exchange-only calculations,
as well as calculations that include both exchange and correlation.
We conclude with the statement that when novel functionals are originally
implemented as part of a common open source framework such as Libxc<cit.>
or XCFun<cit.>, this greatly facilitates
the reproduction of results, because the same implementation is available
across a wide variety of programs. We invite the density functional
developer community to interact more strongly with standard libraries,
as including a new functional in such libraries makes it available
to a huge community of potential users.
§ ACKNOWLEDGMENTS
We thank Gustavo Scuseria, Viktor Staroverov and Giovanni Scalmani
(Gaussian Inc) for help in reproducing the PBE and TPSS functionals
in Gaussian. We thank the National Science Foundation for financial
support under grant no. CHE-2136142. We thank the Academy of Finland
for financial support under project numbers 350282 and 353749.
§ SUPPORTING INFORMATION
Convergence plots of the total energies of the Li, N, Ne, Na, P, and
Ar atoms in the def2-SVP, def2-TZVP, and AHGBS-9 basis sets, employing
the HF and the SCF electron densities.
|
http://arxiv.org/abs/2307.04205v2 | 20230709152618 | Extending the Forward Forward Algorithm | [
"Saumya Gandhi",
"Ritu Gala",
"Jonah Kornberg",
"Advaith Sridhar"
] | cs.LG | [
"cs.LG"
] |
Extending the Forward Forward Algorithm
Saumya Gandhi, Ritu Gala, Jonah Kornberg, Advaith Sridhar
The Forward Forward algorithm, proposed by Geoffrey Hinton in November 2022, is a novel method for training neural networks as an alternative to backpropagation. In this project, we replicate Hinton's experiments on the MNIST dataset, and subsequently extend the scope of the method with two significant contributions. First, we establish a baseline performance for the Forward Forward network on the IMDb movie reviews dataset. As far as we know, our results on this sentiment analysis task mark the first instance of the algorithm's extension beyond computer vision. Second, we introduce a novel pyramidal optimization strategy for the loss threshold - a hyperparameter specific to the Forward Forward method. Our pyramidal approach shows that a good thresholding strategy causes a difference of up to 8% in test error. [Our code can be found here: https://github.com/Ads-cmu/ForwardForward] Lastly, we perform visualizations of the trained parameters and derive several significant insights, such as a notably larger (10-20x) mean and variance in the weights acquired by the Forward Forward network.
§ INTRODUCTION
Backpropagation is the most widely used optimization algorithm for training neural networks today. However, while widely successful, the backpropagation algorithm has 3 important limitations.
First, backpropagation is biologically implausible. There is no convincing evidence that the cortex of the brain explicitly propagates error derivatives or stores neural activities for use in a subsequent backward pass <cit.>. Moreover, backpropagation through time (the standard technique for training RNNs) is especially implausible, as the brain does not freeze in time in order to update neural connections.
The second major drawback with backpropagation is the need for perfect knowledge of the forward pass computation in order to compute the correct derivatives. This prevents us from being able to insert "black boxes" or non-differentiable components in the neural network.
Lastly, the need to store forward pass computations and backpropagate errors across layers makes backpropagation power and memory intensive. In order to train really large networks without consuming much power, different methods for training networks will need to be explored.
Geoffrey Hinton proposed the Forward Forward algorithm in November 2022, with the goal of enabling neural networks to learn continuously without the need for backpropagation <cit.>. In his paper, Hinton suggests two significant advantages of the forward-forward algorithm over backpropagation. First, it provides a more plausible model of learning in the human brain, and second, it can make use of very low-power analog hardware, thereby enabling much larger networks to be trained with much less power.
This project investigates the performance of the Forward Forward algorithm in training neural networks. The key contributions of our work are as follows. First, we replicate Hinton's original results on the MNIST dataset. Next, we establish a baseline performance for the Forward Forward network on the IMDb movie reviews dataset. As far as we are aware, our results on this sentiment analysis task mark the first instance of the algorithm's extension beyond computer vision. Lastly, we propose a new hyperparameter optimization strategy for tuning the loss threshold of the Forward Forward network. This pyramidal optimization strategy yields an 11% reduction in network error rate.
Apart from the above, we also report two more results. First, we performed extensive ablations on various activation functions for the FF algorithm and report some negative results. Second, we perform some visualisation of the weights learnt by the FF algorithm and record our observations. A detailed discussion of these results, as well as hypotheses that explain them are as left as future work.
§ LITERATURE REVIEW
§.§ Other forward-pass based approaches to training neural networks
A primary objective of the Forward Forward algorithm is to emulate learning processes observed in human brains. To achieve this, the algorithm adheres to a brain model known as Predictive Coding (PC) <cit.>, which characterizes the brain as a predictive system where each layer strives to enhance the accuracy of its own inputs. As each layer in the Forward Forward algorithm adjusts its gradients based on input data, the algorithm can be regarded as an application of PC in the domain of machine learning. Various forms of PC have previously been investigated in machine learning <cit.>. Specifically, supervised predictive coding greatly resembles the forward forward algorithm, as it only involves forward passes up and down the network (from the data to the labels and vice-versa). The recently developed Predictive Forward Forward Algorithm <cit.> extends and integrates the concepts of FF and PC, resulting in a robust neural system capable of learning a representation and generative model. Preliminary results on the MNIST dataset suggest the potential of this brain-inspired, backpropagation-free approach for credit assignment within neural systems.
Another forward pass based approach was proposed by Dellaferrera et al. <cit.>. They propose a learning rule that replaces the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. This learning rule addresses various issues such as weight symmetry, dependence of learning on non-local signals and the freezing of neural activity during error propagation. The authors demonstrate the effectiveness of their approach on the MNIST, CIFAR-10, and CIFAR-100 datasets.
§.§ Directional approaches to training neural networks
Schmidhuber et al. <cit.> propose the Variable Shared Meta Learning (VSML) algorithm, which unifies various meta-learning approaches. The authors demonstrate on the MNIST and CIFAR10 datasets that simple weight-sharing and sparsity in an NN can express powerful learning algorithms in a reusable fashion. They implement the backpropagation learning algorithm solely by running in forward-mode, eliminating the need for a backward pass.
Baydin et al. <cit.> present a method for computing gradients based solely on the directional derivative that one can compute exactly and efficiently via the forward mode. They call this formulation the forward gradient. They demonstrate forward gradient descent on the MNIST dataset, showing substantial savings in computation and enabling training up to twice as fast in some cases.
§ DATASET DESCRIPTION
Our baseline implementation and threshold ablations were performed on the MNIST dataset <cit.>. In order to demonstrate the extensibility of the Forward Forward network to other domains, we picked a sentiment analysis task using the IMDb reviews dataset <cit.>. The dataset contains 25,000 positive and 25,000 negative movie reviews, split equally into test and train datasets. Each review was preprocessed by removing HTML tags, stop words and by performing stemming. Next, each word of the review was passed through Word2Vec <cit.> to get a lower dimensional representation of the word. Word2Vec uses a 2 layer neural network with no non-linearity in the hidden layer, and therefore can be approximated as a single layer network. Single layer neural networks do not backpropagate gradients and hence Word2Vec is an acceptable feature extractor for the Forward Forward network.
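A minimal sketch of this preprocessing step is given below. The stop-word list is an abbreviated placeholder, word_vectors stands for a dict-like lookup into a trained Word2Vec model, stemming is omitted for brevity, and mean-pooling the word vectors into a single fixed-size review vector is our illustrative choice rather than a detail specified above.

import re
import numpy as np

STOP_WORDS = {"the", "a", "an", "and", "is", "it", "of", "to"}  # abbreviated, hypothetical list

def preprocess_review(text, word_vectors, dim=100):
    # Strip HTML tags and keep lower-cased alphabetic tokens only.
    text = re.sub(r"<[^>]+>", " ", text.lower())
    tokens = [t for t in re.findall(r"[a-z]+", text) if t not in STOP_WORDS]
    # Look up each word's Word2Vec embedding; unknown words are skipped.
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return np.zeros(dim)
    # Aggregate the per-word vectors into one review vector (mean pooling).
    return np.mean(vecs, axis=0)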
§.§ Generating Positive and Negative Data
MNIST images must have labels embedded into them before they can be passed through the Forward Forward network. This is done by utilising the black border around MNIST images. In order to append label data to images, we set the pixel corresponding to the label amongst the first 10 pixels to 255, and reduce the rest to a value of 0. These images, with correctly appended labels, constitute positive data for the model. For our sentiment analysis task, the positive/negative label was one-hot concatenated to the Word2Vec feature vector.
The Forward Forward algorithm also requires negative data during training. Negative data is generated by randomly appending the wrong label to the input image/review before passing it through the network. We use an equal number of positive and negative samples during training.
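Concretely, the label embedding and the negative-sample generation can be sketched as follows (flattened 28x28 images are assumed; this is an illustrative sketch, not the exact implementation):

import numpy as np

def embed_label(image_flat, label, num_classes=10):
    # Overwrite the first `num_classes` pixels with a one-hot label:
    # the pixel at index `label` is set to maximum intensity, the rest to 0.
    x = image_flat.copy().astype(np.float32)
    x[:num_classes] = 0.0
    x[label] = 255.0
    return x

def make_negative(image_flat, true_label, num_classes=10, rng=np.random):
    # Pick a wrong label uniformly at random and embed it instead.
    wrong = rng.choice([c for c in range(num_classes) if c != true_label])
    return embed_label(image_flat, wrong, num_classes)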
§ MODEL DESCRIPTION
§.§ Layer Training
Our network consists of several layers, each with its own loss function. The goal of the loss function is to maximise the layer activation for positive data while minimising the layer activation for negative data. More concretely, the training loss for each layer is based on the difference between the sum of squared neuron activations for positive/negative inputs and a threshold hyperparameter value. This threshold hyperparameter is known as the loss threshold, and we perform extensive hyperparameter tuning of this threshold in our analysis. It is worth noting that during the forward pass, the input is normalized to prevent its magnitude from affecting the layer's output activation magnitude.
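A per-layer objective consistent with this description is sketched below (PyTorch). The softplus form of the penalty is one common choice for the Forward Forward loss and is an assumption here; the exact functional form is described only qualitatively above.

import torch
import torch.nn.functional as F

def ff_layer_loss(layer, x_pos, x_neg, threshold):
    # Length-normalize the inputs so only the direction of the activity
    # vector (not its magnitude) is passed into the layer.
    x_pos = x_pos / (x_pos.norm(dim=1, keepdim=True) + 1e-8)
    x_neg = x_neg / (x_neg.norm(dim=1, keepdim=True) + 1e-8)
    # "Goodness" = sum of squared activations of the layer.
    g_pos = layer(x_pos).pow(2).sum(dim=1)
    g_neg = layer(x_neg).pow(2).sum(dim=1)
    # Push goodness above the threshold for positive data and below it for
    # negative data; softplus is one smooth choice of penalty.
    return F.softplus(threshold - g_pos).mean() + F.softplus(g_neg - threshold).mean()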
§.§ Network Architecture
As shown in Figure 2, the network consists of 4 fully connected layers with 2000 neurons each. Note that there is no output label layer. We use the Adam optimizer to optimize the network.
§.§ Inference
There are two plausible methods of inference for the Forward Forward network. The first method involves using a one-layer classification neural network that uses the FF network activations as features for its classification task. An alternate method involves appending a label to the input image and passing it through the network. This process is repeated 10 times (once for each label), and the label that produces the maximum activation is chosen as the output label for the image. Between the two methods, we report results for the approach that uses the one-layer classification network as it empirically provides better results.
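The second (label-scan) method can be sketched as follows, reusing the embed_label helper from the earlier sketch and accumulating the goodness across layers; this is an illustrative outline rather than the exact inference code.

import torch

def predict_by_label_scan(layers, image_flat, num_classes=10):
    # Try every candidate label, embed it into the image, run the forward
    # passes, and accumulate the goodness (sum of squared activations)
    # over the layers; the label with the largest total goodness wins.
    scores = []
    for label in range(num_classes):
        x = torch.as_tensor(embed_label(image_flat, label)).float().unsqueeze(0)
        total = 0.0
        for layer in layers:
            x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
            x = layer(x)
            total += x.pow(2).sum().item()
        scores.append(total)
    return int(torch.tensor(scores).argmax())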
§.§ Baselines
The original paper uses a fully connected neural network trained using backpropagation as a baseline network. This backpropagation-trained network has a 1.4% test error. We were able to reproduce this baseline and achieve a similar error rate.
The original paper proposed two major ways in which networks could be trained using Forward Forward (FF): an unsupervised and a supervised example of FF. For this report, our focus was to reproduce the training pipeline and architecture for the supervised example of FF. The original paper achieved around 1.36% test error with its forward-forward architecture. Although the paper made it clear that the architecture used included 4 fully connected layers with 2000 neurons each with ReLU activation, the loss function, optimizer, learning rate, threshold, and scheduling strategy were not elaborated on. Since the forward-forward learning dynamics are highly different from backpropagation, our standard intuitions and starting points did not work well. High thresholds performed better than lower ones, potentially since higher thresholds allow a wider range of squared activations for negative samples. Increasing the threshold from 0.5 to 10 improved our model performance by approximately 8%. However, this also had the side effect of significantly slowing down convergence. We hypothesize that this happened because the model's weights were unable to change fast enough to adjust to the large threshold using our low learning rate. We used the Adam optimizer initialized with a learning rate of 0.01 (unusually high for Adam) to have our model converge within 100 epochs with the high threshold of 10. Using this approach, we were able to achieve a test error of 1.37% (comparable to backprop baselines).
§ RESULTS AND DISCUSSION
§.§ Forward Forward on Sentiment Analysis
A key requirement for biological plausibility is the ability for a training algorithm to work across multiple domains such as vision and language. In this study, we investigated the performance of the Forward Forward algorithm on the IMDB movie reviews dataset, with the aim of assessing its ability to generalize beyond Computer Vision and work effectively on Natural Language Processing tasks. Our results show that the Forward Forward algorithm achieved an accuracy of 84.86% on the test set after 6 epochs, indicating its potential in learning patterns beyond visual data. To compare its performance with traditional backpropagation-based networks, we trained a fully connected network with the same architecture as the Forward Forward network, and an output layer. We observed that the backpropagation-based network was also able to achieve an accuracy of 85% on the test set, albeit in fewer epochs, consistent with convergence findings in computer vision tasks.
Our findings suggest that the Forward Forward algorithm can be an effective alternative to backpropagation-based networks in the context of NLP tasks. Further research is warranted to explore the performance of the Forward Forward algorithm in training embeddings from scratch, and in performing more complex NLP tasks such as language modelling.
§.§ Threshold Ablations and Analysis on Forward Forward
The Forward Forward algorithm introduces a new training hyperparameter - the loss threshold. Finding the appropriate threshold for this hyperparameter is crucial for the algorithm to work well. At each layer of our algorithm, the sum of squared activations for every example in our batch is calculated and compared against this loss threshold. The layer's goal is to maximize this sum so that it exceeds the threshold for positive examples and falls below the threshold for negative examples. This is important because the subtracted value is passed into our loss function, which penalizes larger values.
Hinton, in his experiments, uses a threshold equal to the number of neurons in the layer. This can be rewritten as a threshold proportional to the number of neurons in a given layer, with a proportionality factor (k) of 1. We set this as our baseline for further study.
In our initial experiments, we varied the value of k and tested values ranging from 0.005 to 10 to determine the values that give us the best results. We found that a k between 0.3 and 0.5 gives better results than our initial baseline.
These initial experiments led us to consider whether it would make sense to have different thresholds for different layers. We tried using monotonically increasing values of k across layers and also tested monotonically decreasing values. We found that monotonically increasing the threshold across layers performs distinctly better than other approaches. We hypothesize that larger threshold values in later layers improve performance, as the later layers are responsible for higher-level feature recognition while the lower layers tend to behave as feature extractors. We refer to this monotonically increasing threshold strategy as the pyramidal approach to loss threshold tuning.
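In code, the pyramidal strategy amounts to something like the following; the k values shown are illustrative and not the tuned values from our experiments.

def pyramidal_thresholds(layer_sizes, k_schedule):
    # One loss threshold per layer: proportional to the number of neurons
    # in that layer, with a proportionality factor k that grows with depth.
    assert len(layer_sizes) == len(k_schedule)
    return [k * n for k, n in zip(k_schedule, layer_sizes)]

# Example: four layers of 2000 neurons with monotonically increasing k.
thresholds = pyramidal_thresholds([2000] * 4, [0.3, 0.4, 0.5, 0.6])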
We also explored the use of a threshold scheduler in our ablations. The intuition behind this approach is that the model might benefit from having lower thresholds initially while it is still learning, and then gradually increasing the penalty as it trains more. Our results show that using a threshold scheduler provides a promising direction compared to our baseline.
Overall we see an error reduction from 1.8% to 1.3% using this approach.
§.§ Activation Function Analysis
In his forward-forward algorithm, Hinton employs the ReLU activation function. We conducted an investigation into the performance of other commonly used activation functions within the context of the forward-forward algorithm. As illustrated in the graph below, most activation functions perform well with this algorithm. However, one notable observation is that bounded activations such as tanh and sigmoid do not train at all for certain thresholds.
Even after tuning the threshold hyperparameter, these activations do not perform as well as others. One possible hypothesis here may be that this is due to the nature of the objective functions - maximising a bounded activation may require incredibly high weight values, even to exceed low thresholds.
§.§ Weight analysis
Lastly, we analyze the weight matrices of the trained network. Our analysis revealed several key findings. Firstly, we observe that the range of weights for the Forward Forward trained network are much larger (10-20x) than that of the backpropagation trained network, with a range of -14.36 to 18.19 compared to -0.67 to 0.43, respectively. Notably, weight decay was not applied in either case. This disparity in weight ranges may be attributed to the objective function of the Forward Forward algorithm, which aims to improve performance by encouraging highly positive activations for positive samples and highly negative activations for negative samples.
Secondly, we found that the range of weights decreased as we went deeper into the network, although the underlying reasons for this pattern requires further investigation. Finally, we observed a strong spike in the weights connected to the encoded label part of the input, which is consistent with the notion that this aspect of the input contains crucial information for the network to predict the correct label.
Taken together, our weight matrix analysis provides additional insights into the workings of the Forward Forward algorithm and suggests potential avenues for future research in understanding the mechanisms underlying its effectiveness.
§ CONCLUSION
Our study explored the effectiveness of the Forward Forward algorithm on data beyond Computer Vision and conducted experiments to understand the effects of various parameters of the algorithm. We found that the Forward Forward algorithm performs comparably to its backpropagation variant and uncovered an important relationship between the threshold parameter and the depth and size of each layer of the forward forward network. Both of these contributions are novel and warrant further interest in exploring the Forward Forward algorithm.
Our study sets the stage for further exploration of more architectures and hypotheses regarding Forward Forward. This could include more complex NLP tasks where text embeddings can be trained from scratch, and the Forward Forward algorithm can be applied over time. Additionally, future work could investigate the use of more biologically inspired activations, such as the negative log of the Student's t-distribution, which would bring the algorithm even closer to biological alignment. Overall, our findings suggest that the Forward Forward algorithm holds promise as a viable alternative to backpropagation and merits further exploration in the context of various machine learning algorithms that are biologically aligned.
|
http://arxiv.org/abs/2307.06163v2 | 20230712134517 | Gauss-Bonnet Dark Energy and the Speed of Gravitational Waves | [
"José Jaime Terente Díaz",
"Konstantinos Dimopoulos",
"Mindaugas Karčiauskas",
"Antonio Racioppi"
] | astro-ph.CO | [
"astro-ph.CO",
"gr-qc",
"hep-ph"
] |
Departamento de Física Teórica, Universidad Complutense de Madrid,
E-28040 Madrid, Spain
Consortium for Fundamental Physics, Physics Department, Lancaster
University, Lancaster LA1 4YB, UK
Departamento de Física Teórica, Universidad Complutense de Madrid,
E-28040 Madrid, Spain
National Institute of Chemical Physics and Biophysics, Rävala 10,
10143 Tallinn, Estonia
Gauss-Bonnet Dark Energy has been a popular model to explain
the accelerated expansion of the Universe. Quite generically it also
predicts the speed of gravitational waves c_GW to be different
from the speed of light. This fact alone led some authors to exclude
such models in view of the new tight observational constraints on
c_GW. However, the behaviour of c_GW depends on the choice of
the Gauss-Bonnet (GB) coupling function. It is possible to construct
models where c_GW is always equal to the speed of light. More generally,
c_GW is a time dependent function with instances where both speeds
coincide. Nevertheless, we observe that the bound on c_GW excludes
scenarios where the GB term directly affects the expansion of the
Universe, even if the constraint on the variation of the coupling
function does not appear to be strong. We perform the dynamical systems
analysis to see if the expansion of the Universe could be affected
indirectly by modulating the behaviour of the scalar field, which
modulates the GB coupling. It is shown that either the bounds on
c_GW are violated by many orders of magnitude, or it might be very difficult
to find models that are consistent with other cosmological observations.
Gauss-Bonnet Dark Energy and the Speed of Gravitational Waves
Antonio Racioppi
August 12, 2023
=============================================================
§ INTRODUCTION
The detection of gravitational waves (GW) <cit.>
opens a new window to observe and measure the Universe.
Most directly, it enables testing General Relativity (GR) in regimes
that were not accessible before and constrain possible modifications
of the laws of gravity. They also provide new ways to test Dark Energy
(DE) models. Many of such models rely on gravity modifications and
therefore are subject to such constraints.
A very clear demonstration is provided in Ref. <cit.>.
A lucky coincidence of being able to detect GW emitted by the merger
of two neutron stars as well as the electromagnetic counterpart of
this event made it possible to put very stringent constraints on the
speed of GW, c_GW. The delay between arrival times of GW and γ-rays
led to the bound
|α_T| < 10^-15 ,
where α_T parametrises the deviation of c_GW from the speed
of light,
α_T ≡ c_GW^2-1 ,
in natural units, where c=ħ=1.
Many classes of modified gravity theories predict α_T ≠ 0. The
constraints on α_T in Ref. <cit.> excluded
a lot of well motivated and otherwise attractive models and considerably
narrowed down the space of available modifications <cit.>.
Among the excluded models – it is claimed in Ref. <cit.>
– is the Gauss-Bonnet Dark Energy (GBDE) one. This model has many
attractive features. The Gauss-Bonnet term itself is a unique combination
of curvature terms squared
𝒢≡ R^2-4R_μνR^μν+R_μνρσR^μνρσ ,
where R, R_μν and R_σμν^ρ
are the Ricci scalar, tensor and Riemann tensor respectively. Nevertheless,
this combination leads to metric tensor equations of motion that are
second order. The Gauss-Bonnet (GB) term is quite ubiquitous in actions
of low-energy effective string theory, be it at tree or one loop level
<cit.>.
The corresponding modification can be written as ξ𝒢
term in the Lagrangian, where ξ is the GB coupling. If the latter
is a constant, the GB term is a surface term and can be integrated
out (although it can still be important for other aspects of the theory,
such as regularization <cit.>). However,
on quite generic grounds, one might expect that the GB term also couples
to scalar fields of the theory, such as moduli or dilaton fields,
making ξ field dependent.
The possibility of explaining DE with the GB term was first investigated
in Refs. <cit.>.
One of the attractive features of such models is that they
provide the means to safely cross the phantom divide, that is, enter
the regime where the DE equation of state is w<-1, without instabilities.
The best fit value of w is smaller than -1 <cit.>.
If such w is associated with a scalar field, it leads to many instabilities
and such a model is likely excluded by observations <cit.>.
On the other hand, modifications of gravity in GBDE in some parameter
range allow for w<-1. In this model w is time dependent;
it briefly dips below -1 before settling on w=-1 <cit.>.
Hence, it can accommodate this low value without leading to contradictions.
Quite generically GBDE predicts α_T ≠ 0. This fact alone
led the authors of Ref. <cit.> to claim that GBDE
is ruled out. Here we would like to point out that α_T,
predicted by GBDE, is not a constant. Moreover, the constraint
in eq. (<ref>) is an upper bound which is applicable
only at the very latest stages of the evolution of the Universe. Hence,
to assess the implications of these constraints for GBDE we need to
study it more carefully.
In this work we use the dynamical systems analysis to look
for viable models of GBDE and compute the evolution of the α_T parameter.
The crucial quantity in such models is the GB coupling ξ(ϕ).
It determines the dynamics of the universe as well as the evolution
of the α_T parameter. Applying the bound in eq. (<ref>)
to the variation of ξ(ϕ) and the rate of its variation,
we find that the constraints appear weak. Nevertheless,
the bound in eq. (<ref>) prevents the GB term from
affecting the expansion of the Universe directly. The remaining possibility
is for this term to affect the expansion indirectly, by modifying
the behaviour of the scalar field. To investigate this issue we apply
the dynamical systems analysis. We also apply this analysis to the
case where α_T=0 by construction, which is allowed by the model.
In Section <ref> we introduce the model, derive dynamical
equations and show the bounds on ξ(ϕ) that follow
from eq. (<ref>). In Section <ref>
we assume an exponential potential V(ϕ) and write the dynamical equations in terms of dimensionless variables,
which are used in the following sections. The dynamical systems
analysis is applied to models with the exponential GB coupling ξ(ϕ)
in Section <ref> and it is applied to the linear
function ξ(ϕ) in Section <ref>. The case
of α_T=0 is studied in Section <ref>.
§ SCALAR-GAUSS-BONNET DARK ENERGY AND CONSTRAINTS ON C_GW
We start with the scalar-Gauss-Bonnet action
S = ∫d^4x√(-g)[1/2 m_Pl^2 R+ξ(ϕ)𝒢-1/2∂_μϕ∂^μϕ-V(ϕ)+ℒ_m] ,
where 𝒢 is defined in eq. (<ref>). For brevity
we address to the above action as the Gauss-Bonnet (GB) action in
this work. ℒ_m is the Lagrangian of the matter
sector. If ξ is constant, the GB term is a total derivative and
does not affect the dynamics of the system. We assume the background
spacetime to be homogeneous, isotropic and flat, described by the
FRW metric g_μν=diag[-1,a^2(t),a^2(t),a^2(t)],
where t is the cosmic time and a is the scale factor.
In this model the speed of tensor mode propagation is determined by
the rate of change of the coupling function ξ(ϕ)
<cit.> (see
also Refs. <cit.>)
α_T = 8(ξ̈-ξ̇H)/(m_Pl^2+8ξ̇H) .
It is clear from this expression that the constraint in eq. (<ref>)
can be satisfied if one of the following two conditions is fulfilled.
The first option is to choose the coupling function ξ(ϕ)
such that
ξ̈ = Hξ̇ .
This choice is discussed in refs. <cit.>
(other related references can be found in these articles) in the context
of inflation. However, there is another, more generic possibility.
We notice in eq. (<ref>) that α_T is suppressed
by the Planck mass. Therefore, as long as the conditions |ξ̈|/m_Pl^2, H|ξ̇|/m_Pl^2<10^-15
are satisfied, the GB action in eq. (<ref>) is compatible with
the constraints on the speed of gravitational waves. We can write
these conditions in a more useful way
|ξ̈|/H^2, |ξ̇|/H < 10^-15(m_Pl/H)^2 ,
which emphasises the change of the coupling function and the rate
of change of this coupling over one Hubble time. If this condition
is to be imposed on inflation models, where typically H∼10^-5 m_Pl,
this bound can be tight. In that case, to limit α_T within
the allowed range, it is better to look for models that satisfy the
condition in eq. (<ref>). However, the constraints from
the observations of GRB170817A apply not to the early Universe, but
the late one, when the Hubble parameter is more than fifty orders
of magnitude smaller than during inflation. Indeed, plugging H_0^2/m_Pl^2∼10^-120
into eq. (<ref>) we find
|ξ̈|/H_0^2, |ξ̇|/H_0 < 10^105 .
That is, ξ and ξ̇ need to vary by more than
100 orders of magnitude, Δξ ,Δξ̇<10^105,
over the age of the Universe to violate the bound. This appears
to demonstrate that the constraint on c_GW is exceptionally
weak and might give hope that GBDE models remain viable. Unfortunately,
as we show in this work, at least for simple functions ξ(ϕ),
this turns out not to be the case.
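The estimate above can be restated numerically with a trivial sketch (the inflationary value H∼10^-5 m_Pl is the assumption discussed above):

# Quick restatement of the bound 10^-15 (m_Pl/H)^2 on |dot xi|/H and |ddot xi|/H^2.
bound = 1e-15
for name, H_over_mpl in [("inflation (H ~ 1e-5 m_Pl)", 1e-5),
                         ("today (H_0 ~ 1e-60 m_Pl)", 1e-60)]:
    print(name, "->", bound / H_over_mpl**2)
# -> ~1e-5 during inflation, ~1e105 today, as quoted in the text.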
To understand the implications of eq. (<ref>) for
Gauss-Bonnet Dark Energy (GBDE) models better, let us first write
the homogeneous dynamical equations in the FRW background as <cit.>
H^2 = (ρ_ϕ+ρ_m)/(3(m_Pl^2+8Hξ̇)) ,
Ḣ = -(ρ_ϕ+P_ϕ+ρ_m+P_m+8H^2(ξ̈-ξ̇H))/(2(m_Pl^2+8Hξ̇)) ,
where ρ_ϕ and P_ϕ are the energy and pressure densities
of the homogeneous scalar field ϕ respectively. They are defined
to be
ρ_ϕ ≡ 1/2ϕ̇^2+V(ϕ) ,
P_ϕ ≡ 1/2ϕ̇^2-V(ϕ) .
Similarly ρ_m and P_m are the energy
and pressure densities of the matter field.
The acceleration of spatial slices can be parametrised using the Hubble
flow parameter
ϵ_H ≡ -Ḣ/H^2 .
Alternatively, it is common to use the deceleration parameter q≡ϵ_H-1
for this purpose. The spatial slices expand in an accelerating fashion
if ϵ_H<1 (q<0).
Plugging eqs. (<ref>) and (<ref>) into eq. (<ref>) we
can write
ϵ_H = 3/2(1+(P_ϕ+P_m)/(ρ_ϕ+ρ_m))+1/2α_T ,
where we also made use of eq. (<ref>). At the present epoch
ϵ_H≃0.5. Thus, in view of eq. (<ref>) we
see that the last term must be negligible. This rules out any direct
effect of the Gauss-Bonnet term to the expansion of the Universe.
Neglecting that last term, we arrive at the expression which can also
be obtained in a typical, General Relativistic quintessence models
<cit.>.
But even if observations exclude the scenario where the GB term affects
the expansion of the Universe directly, there remains a possibility
that it does so indirectly, by modifying the behaviour of the scalar
field ϕ. As we will see next, such a possibility is also excluded,
at least for an exponential potential V(ϕ).
§ THE DYNAMICAL SYSTEM
Equations (<ref>) and (<ref>) can be supplemented with dynamical
equations governing the evolution of the ϕ field and ρ_m
ϕ̈+3Hϕ̇+V_,ϕ = 24ξ_,ϕ(Ḣ+H^2)H^2 ,
ρ̇_m+3Hρ_m(1+) = 0 ,
where w_m ≡ P_m/ρ_m is the barotropic
parameter of the matter component. And we assume a matter fluid with
0≤ w_m<1.
To analyse the generic behaviour of this dynamical system, it is convenient
to normalise the dynamical degrees of freedom and write them in a
dimensionless form, such as
x≡ϕ'/(√(6) m_Pl) , y≡√(V)/(√(3) m_Pl H) , u≡4√(6)H^2ξ_,ϕ/m_Pl , and z≡√(ρ_m)/(√(3) m_Pl H) .
The prime in the definition of x and the equations below denotes
the derivatives with respect to the e-fold number
N ≡ ln a ,
where we normalised a such that a=1 today.
The definitions in eq. (<ref>) are particularly useful
if the scalar field potential is an exponential function
V = V_0e^{-λϕ/m_Pl} ,
where we take λ>0 to be a constant. We will always use the
above ansatz in this work. In that case the dynamical equations are
self-similar and the explicit dependence on the Hubble parameter drops
out of those equations. In particular, eqs. (<ref>) and
(<ref>) can be written as
x' = (ϵ_H-3)x+√(3/2)λ y^2+u(1-ϵ_H) ,
y' = (ϵ_H-√(3/2)λ x)y ,
z' = [ϵ_H-3/2(1+w_m)]z ,
where
ϵ_H = [3x^2+3/2(1+w_m)z^2+(ux)'-ux]·1/(1+ux)
and the constraint in eq. (<ref>) becomes
1 = x^2+y^2+z^2-2ux .
A few comments about these equations are in order. First, notice the
important difference from analogous equations of models in General
Relativity (see e.g. <cit.>). In those models the
dimensionless variables y and z are constrained within the range
[0;1] and x∈[-1;1]. In the case of the
GB model such a restriction does not apply. Due to a priori
undetermined sign of u, maximum values of |x|, y
and z are not limited to 1. Second, it might appear that eq. (<ref>)
is inconsistent because it diverges as ux→-1. But this
impression is wrong because the constraint equation (<ref>)
forbids such values.
When doing dynamical analysis of this system, it is convenient to
use the equation for u too. Taking the derivative of the expression
in eq. (<ref>) we find
u' = -2ϵ_H u+24H^2ξ_,ϕϕx .
§ THE EXPONENTIAL GAUSS-BONNET COUPLING
§.§ Dynamics
To understand the qualitative behaviour of this dynamical system,
we find its fixed points and investigate their stability. The fixed
points are defined as points, or regions, in the phase space where
x'=y'=z'=0 is satisfied. First, we study the case of the exponential
GB function, given by
ξ=ξ_0e^{κϕ/m_Pl} ,
which allows us to write eq. (<ref>) as
u' = (√(6)κ x-2ϵ_H)u .
The computation details for finding fixed points are provided in the
Appendix and the results are summarised in Table <ref>.
Looking at this table, notice that all the fixed points with u_c=0
coincide with the ones analysed in ref. <cit.>,
as expected. In this reference the authors analyse a quintessence
model within the theory of GR and the exponential scalar field potential.
In other words, all the fixed points that are present in a similar
setup in GR, they are also present in GB models. However, even if
the fixed points coincide, the presence of the GB term might change
their stability, as we will show below.
The fixed point M in Table <ref> corresponds
to the case where the scalar field is diluted and only the matter
field remains. The K± fixed points correspond to
kination, where the universe is dominated by the kinetic energy of
the scalar field. In the case of I and λ<√(2)
this fixed point represents the power law inflation <cit.>.
In the scaling fixed point (Sc) the evolution of the scalar
field adjusts to mimic the behaviour of the matter field. Therefore,
the expansion rate of the Universe is given by ϵ_H=3/2(1+w_m).
The GB term introduces two more scaling solutions: the fixed point
S2, which exists for κλ, and the fixed curve
S3, which exists if ξ(ϕ)V(ϕ)=constant.
For our purpose, the most interesting new fixed point is the de Sitter
one (dS), where Ḣ=0. This fixed point is very robust,
and exists for large variety of ξ(ϕ) functions.
Some discussion of DE models with κ=λ and various solutions
were provided in Ref. <cit.>. Some dynamical analysis
with κ≠λ was performed in Ref. <cit.>
(see also <cit.>). Here we modify and extend
the analysis to make it more generic. In this section, we take κ≠λ.
Often, the stability of fixed points can be determined by taking a
linear perturbation of equations (<ref>)–(<ref>) around
those points. In the case of an exponential GB function in eq. (<ref>)
those linear equations can be written as
δ x' = (ϵ_Hc-3)δ x+√(6)λ y_c δ y+(1-ϵ_Hc)δ u+(x_c-u_c)δϵ_H ,
δ y' = -√(3/2)λ y_c δ x+(ϵ_Hc-√(3/2)λ x_c)δ y+y_c δϵ_H ,
δ u' = √(6)κ u_c δ x+(√(6)κ x_c-2ϵ_Hc)δ u-2u_c δϵ_H ,
δ z' = [ϵ_Hc-3/2(1+w_m)]δ z+z_c δϵ_H ,
where
δ = δ x'+δ u'+[6-(+1)]δ x-(1+)δ u+2δ z/1+
is the linearised eq. (<ref>). The constraint equation
fixes the dynamics onto the three-dimensional hypersurface in the
four-dimensional phase space. The linearised version of that equation
is given by
0 = (x_c-u_c)δ x+y_c δ y+z_c δ z-x_c δ u .
We next compute the eigenvalues of the system of equations (<ref>)-(<ref>)
and determine their stability. The eigenvalues at the fixed point
M are
m_1 = ϵ_Hc-3 ,
m_2 = ϵ_Hc ,
m_3 = -2ϵ_Hc ,
where ϵ_Hc=3/2(1+w_m) is the value of the Hubble
flow parameter at M. We can see that this fixed point is
always a saddle point for the range of values that we consider.
The eigenvalues at the kination fixed points (K±)
are
m_1 = 3/2(1-w_m) ,
m_2 = -6±√(6)κ ,
m_3 = 3∓√(3/2)λ ,
where the upper sign corresponds to the point K+. We can
see that the eigenvalue m_1, which corresponds to the eigenvector
v=(0,0,0,1)[We order eigenvector components as v=(v_x,v_y,v_u,v_z).],
is always positive. Hence, these two fixed points are never stable.
The eigenvalues at the scaling fixed point Sc are
m_1 = 2ϵ_Hc(κ/λ-1) ,
m_± = -1/2(3-ϵ_Hc)[1±√(1-8ϵ_Hc/(3-ϵ_Hc)·(1-2ϵ_Hc/λ^2))] .
As can be seen from Table <ref>, this fixed point exists
(y_c^2≥0) only if λ^2>2ϵ_Hc. Such a condition makes
the real part of m_± always negative. Therefore the stability
of this fixed point is determined solely by the sign of m_1.
That is, the scaling fixed point Sc is a saddle point for
κ>λ.
The eigenvalues at the de Sitter fixed point dS are
m_1 = -3/2(1+w_m) ,
m_± = -3/2[1±√(1+8λ^2/3(2+3λ^2)(1-κ/λ))] .
Notice that the condition for the stability of this fixed point is
exactly opposite from the one required by the scaling fixed point
Sc: for κ>λ the scaling fixed point is a saddle
and the de Sitter one is the attractor. None of the interesting solutions
pass through the fixed points G or S2 so we don't
analyse their stability here.
To visualise the behaviour of the system we integrate numerically
a set of trajectories and show the phase portraits in Figure <ref>.
In all of those simulations we start with a negligible GB contribution,
u_0=-10^-25 to be precise. If the initial value of u_0
is too large, the phase portrait changes drastically. However, the
requirement for the negligible GB contribution at the initial stages
of the evolution is consistent with the required scenario to explain
Dark Energy. All the trajectories in the plots in Figure <ref>
start from y_0=10^-3 and move towards the dS attractor
at y=1.
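For concreteness, a minimal numerical sketch of such trajectory integrations is given below (Python/SciPy). It assumes the dimensionless system written above with w_m=0 and the exponential GB coupling; the explicit expression for ϵ_H used in the code is obtained by substituting the x' and u' equations into the implicit relation for ϵ_H, and the values of λ, κ and the initial conditions are purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

lam, kap, wm = 4.0, 6.0, 0.0          # illustrative exponents and matter fluid

def eps_H(x, y, z, u):
    # Hubble-flow parameter, made explicit by substituting the x' and u'
    # equations into the implicit relation for eps_H (exponential coupling).
    num = 3*x**2 + 1.5*(1 + wm)*z**2 + u*(np.sqrt(6)*kap*x**2 - 4*x
                                          + np.sqrt(1.5)*lam*y**2 + u)
    return num / (1 + 2*u*x + u**2)

def rhs(N, s):
    x, y, z, u = s
    e = eps_H(x, y, z, u)
    dx = (e - 3)*x + np.sqrt(1.5)*lam*y**2 + u*(1 - e)
    dy = (e - np.sqrt(1.5)*lam*x)*y
    dz = (e - 1.5*(1 + wm))*z
    du = (np.sqrt(6)*kap*x - 2*e)*u
    return [dx, dy, dz, du]

# Kination-like initial conditions with a tiny GB contribution; z0 is fixed
# by the Friedmann constraint 1 = x^2 + y^2 + z^2 - 2 u x, which can also be
# monitored along the trajectory as a consistency check.
x0, y0, u0 = 0.99, 1e-3, -1e-25
z0 = np.sqrt(1 - x0**2 - y0**2 + 2*u0*x0)
sol = solve_ivp(rhs, (0, 60), [x0, y0, z0, u0], rtol=1e-10, atol=1e-12)

x, y, z, u = sol.y
print("Omega_m, Omega_phi, Omega_GB at the end:",
      z[-1]**2, x[-1]**2 + y[-1]**2, -2*u[-1]*x[-1])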
The physically interesting trajectories are those that start close
to the K± or M fixed points. The former
set corresponds to the kination initial conditions and the latter
ones corresponds to a universe where matter dominates initially. In
both cases most trajectories are first attracted to the scaling fixed
point Sc. But because this fixed point is a saddle point
for κ/λ>1, eventually all the trajectories are repelled
and move to the de Sitter attractor dS.
This represents the desirable sequence of events: initially the universe
is dominated by the kinetic energy of the scalar field, which is the
case for quintessential inflation models, or the matter component,
which is often encountered in the quintessence models. Next, the system
moves into the scaling fixed point, and for a long time the Universe
evolves with an effective equation of state that of the matter component.
In the case of GR models and the exponential potential V(ϕ),
the scaling fixed point Sc is an attractor <cit.>.
That is, all the trajectories converge onto this point and remain
there. This is problematic, because the universe is not accelerating
at Sc in contrast to observations. The GB term, on the other
hand, converts this point into a saddle one and provides an escape
route. The scalar field eventually can come to dominate and cause
the universe to expand in an accelerated fashion in a new de Sitter
attractor point dS.
In Figure <ref> we show the time evolution of
density parameters and the effective equation of state of the “dark
fluid”. Density parameters are defined as
Ω_m ≡ ρ_m/(3 m_Pl^2 H^2)=z^2 ,
Ω_ϕ ≡ ρ_ϕ/(3 m_Pl^2 H^2)=x^2+y^2 ,
Ω_GB ≡ -8Hξ̇/m_Pl^2=-2ux .
In terms of these variables the constraint in eq. (<ref>)
(the Friedmann equation) can be written as
1=Ω_ϕ+Ω_m+Ω_GB .
As Ω_GB can be positive as well as negative,
the value of each Ω parameter is not bounded by |Ω|≤1.
In order to apply the observational bounds on the equation of state
of Dark Energy we define a new parameter w_f. It can be interpreted
as the equation of state of an effective “dark fluid” that causes
the accelerated expansion. To do that eq. (<ref>) can
be written as
ϵ_H = 3/2·(ρ_ϕ(1+w_f)+ρ_m(1+w_m))/(ρ_ϕ+ρ_m) ,
where
w_f ≡ (P_ϕ+8H^2(ξ̈-ξ̇H))/ρ_ϕ
and ρ_ϕ and P_ϕ are the energy and pressure densities
of the scalar field defined in eqs. (<ref>) and (<ref>).
In terms of dimensionless variables in eq. (<ref>) the
last expression can also be written as
w_f = -1+[2/3·ϵ_H(1+2ux)-z^2(1+w_m)]/(x^2+y^2) ,
where ϵ_H is meant to be substituted with eq. (<ref>).
We next run a large number of numerical simulations of eqs. (<ref>),
(<ref>) and (<ref>), varying the λ and κ
parameters as well as the initial conditions x_0 (but always
with u_0=-10^-25 and w_m=0), and select those models which
have regions where Ω_m=0.3147±0.0074 and w_f=-0.957±0.08
<cit.> are satisfied for the same value of N.
The scale factor is normalised such that a=1 (N=0) at that moment.
In Figure <ref> we show two of such models. On
the L.H.S. column we can see the phase portraits, where these models
(red curves) are drawn from and on the R.H.S. column we find the time
evolution of the density parameters and w_f. Both models have an
initial period of kination, which quickly gives way to matter
domination. The model in the upper plot has a long period of the
scaling behaviour before GB energy takes over. Eventually, all the
models settle down at the de Sitter attractor point, where Ω_ϕ=1
and all other Ω's vanish.
In the lower panel of Figure <ref> we can also
notice a quite generic feature of GB Dark Energy models, namely, that
for a brief period of time the effective equation of state of the
dark fluid can drop below -1.
§.§ The Speed of Gravitational Waves
Obviously, to select a realistic model of DE the consistency with
observational constraints on the Ω_m and w_f parameters
is a necessary but not sufficient condition. There are some other
requirements that a viable model of cosmology must satisfy. Among
those requirements, especially in the case of GB model, is a negligibly
small deviation of the speed of gravitational waves from the
speed of light.
As it was pointed out in <cit.>, generically in
scalar-GB models c_GW≠1. Due to the tight observational constraints
on α_T (see eq. (<ref>)) it was deemed that GBDE
models are excluded. However, such constraints do not fix c_GW=1,
they only place upper bounds on the deviation from 1, albeit very
strong ones. And, as one could naively conclude from eq. (<ref>),
that bound is not very constraining for GB models of DE. Moreover,
in GB models, α_T is not a constant, but varies with time. But
the constraint in eq. (<ref>) applies only for a
short period over the history of the Universe.
To investigate this issue let us first write eq. (<ref>)
in terms of dimensionless variables, defined in eq. (<ref>)
α_T = [(ux)'+(ϵ_H-1)ux]/(ux+1/2)=[Ω_GB'+(ϵ_H-1)Ω_GB]/(Ω_GB-1) ,
where Ω_GB is defined in eq. (<ref>).
It might appear that α_T diverges at ux=-1/2, or equivalently
at Ω_GB=-1. But, as can be seen from the constraint
equation (<ref>), such a value is not allowed.
We can immediately notice from eq. (<ref>) that α_T
vanishes at the two fixed points, Sc and dS. These
fixed points are the most interesting ones. Unfortunately, if the
GB model is to be a good model of our Universe, the current stage
of the evolution cannot be represented by any of these two fixed points.
Instead, we should find ourselves somewhere on the trajectory between
Sc and dS, as it is also demonstrated in Figure <ref>.
To see how α_T evolves with time we selected a large number
of numerical solutions that satisfy the Ω_m and w_f
bounds discussed above. Some of those solutions are shown in
Figure <ref>. For clarity we can draw only a few of
them. However, the ones displayed in Figure <ref> are
representative of the whole set. First thing we notice is that the
maximum value of α_T is always very close to N=0, exactly
where the observational bounds in eq. (<ref>) apply.
Moreover that value is always |α_T|∼𝒪(1),
which clearly falls out of the allowed range. Hence, with high confidence
we can conclude that GBDE models with an exponential coupling constant
ξ(ϕ) are excluded by the observational constraint
on the speed of gravitational waves.
To get some insight why α_T is so large at this particular
moment, we can write eq. (<ref>) in an alternative form
α_T = 2(ϵ_H-3)+3(1-w_m)Ω_m/(1-Ω_GB)+6y^2/(1-Ω_GB) .
This equality can be obtained either by using the dynamical equations
(<ref>)–(<ref>) to eliminate the time derivative from eq. (<ref>),
or directly from eq. (<ref>). In any case, it is important
to notice that this expression is very generic: it is valid for any
potential V(ϕ) and any GB coupling ξ(ϕ),
not necessarily exponential. At N=0 observations require ϵ_H∼0.5.
Hence, the absolute value of the first term is of order ∼1.
Observations also require w_m=0 and Ω_m∼0.3.
We can use the latter in the constraint in eq. (<ref>)
to write Ω_ϕ+Ω_GB∼0.7. Moreover, between
the scaling fixed point Sc and the de Sitter one dS,
the GB density parameter Ω_GB>0. Therefore, 0<Ω_GB<0.7
and the second term in eq. (<ref>) must be of order 𝒪(0.1)
to 𝒪(1). Finally y^2≤Ω_ϕ<0.7
and the last term of order 𝒪(1) at the maximum.
Barring precise cancellations, this leads to the conclusion that generically
|α_T|∼𝒪(0.1)-𝒪(1).
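The estimate can be made concrete with a small scan over the allowed ranges; the sketch below uses the decomposition as reconstructed above, and the grid values are arbitrary illustrations rather than fitted numbers.

# Rough evaluation of the three terms in the alpha_T decomposition at N = 0,
# using eps_H ~ 0.5, w_m = 0, Omega_m ~ 0.3 and scanning Omega_GB and y^2.
eps, wm, Om = 0.5, 0.0, 0.3
for Ogb in (0.1, 0.4, 0.6):
    for y2 in (0.1, 0.7 - Ogb):
        t1 = 2 * (eps - 3)                      # first term
        t2 = 3 * (1 - wm) * Om / (1 - Ogb)      # second term
        t3 = 6 * y2 / (1 - Ogb)                 # last term
        print(f"Ogb={Ogb:.1f} y2={y2:.2f}: {t1:.2f}, {t2:.2f}, {t3:.2f}, sum={t1+t2+t3:.2f}")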
[The observation that Ω_m∼0.1 quite generically
leads to |α_T|≳0.1 raises another question.
As it follows from eq. (<ref>), such a large value
of |α_T| implies a very large change in the GB
coupling. But one has to remember that the GB term is only the lowest
order term in the series of low-energy effective string theory corrections
<cit.>. Since ξ varies
so much, one must wonder if it is consistent to neglect the higher
order corrections over the whole range of the evolution.]
§ THE LINEAR GAUSS-BONNET COUPLING
As we have seen above, models with an exponential GB coupling could
potentially provide a reasonable history of the Universe and explain
DE. Unfortunately, such models predict the speed of GW in the current
Universe that violates the allowed values by many orders of magnitude.
In this section we investigate another option: the linear GB coupling
ξ ∝ ϕ .
As it is shown in the Appendix, such a dynamical system has
the same fixed points as the ϵ_Hc u_c=0 subset in Table <ref>.
To study the stability of those fixed points, we can linearise eqs. (<ref>)–(<ref>)
and use ξ_,ϕϕ=0. Equivalently, we can use the results
in section <ref> by setting κ=0. This gives
the eigenvalues at the Sc point
m_1 = -3(1+w_m) ,
m_± = -1/2(3-ϵ_Hc)[1±√(1-8ϵ_Hc/(3-ϵ_Hc)·(1-2ϵ_Hc/λ^2))] .
As discussed in section <ref> the real part of m_±
eigenvalues are always negative. We can also see from the first equation
above that m_1 is negative too. Hence, this fixed point is always
an attractor.
On the other hand, the eigenvalues at the de Sitter fixed point dS
are
m_1 = -3/2(1+w_m) ,
m_± = -3/2[1±√(1+8λ^2/3(2+3λ^2))] .
It is clear that m_1, m_+<0 and m_->0. Hence, this fixed
point is a saddle.
What the above result shows is that a realistic cosmological scenario
is impossible with the linear GB coupling. The scaling fixed point
is an attractor and there are no solutions which display a long matter
dominated period followed by an accelerated expansion. This is also
demonstrated in Figure <ref>.
One might wish to generalise the analysis presented here to
higher order polynomial functions ξ(ϕ). Unfortunately,
such a functional form of ξ(ϕ) is not amenable
to the presented methods of analysis, as the equations become non-self-similar.
However, we expect the results to be qualitatively similar to the
ones presented in this section.
§ THE CASE OF Α_T=0
As can be seen from the expression of α_T in eq. (<ref>)
this parameter can be made to vanish if one arranges for the GB coupling
in such a way that its time evolution obeys the condition in eq. (<ref>).
Unfortunately, this condition does not provide us with a functional
form of ξ(ϕ). Nevertheless, it provides with enough
information to investigate the relevant aspects of such a dynamical
system.
First, notice that eq. (<ref>), written in terms the
dimensionless variables in eq. (<ref>), becomes
24H^2ξ_,ϕϕx^2 = u[(ϵ_H+1)x-x'] .
It provides an additional constraint that can be used to eliminate
the ξ_,ϕϕ term in eq. (<ref>), at least for fixed
points with 0. Otherwise the LHS of the above equation vanishes
in any case. The fixed points are computed in the Appendix
and summarised in Table <ref>. As one would
expect, all the fixed points with =0 are the same as in
Table <ref>.
To determine the stability of fixed points we again linearise eqs. (<ref>)–(<ref>)
and also eq. (<ref>). At the de Sitter fixed point dS
this linear system reduces to
δ x' = δ x ,
δ u' = (4+3λ^2)δ x ,
δ z' = -3/2(1+w_m)δ z ,
where we used that the linearised eq. (<ref>) at dS
implies the constraint δ x'=δ x. Combined with the linearised
eqs. (<ref>) and (<ref>) we find δ u=(4+3λ^2)δ x.
It is easy to compute the eigenvalues, which are
m_1 = 0 ,
m_2 = 1 ,
m_3 = -3/2(1+w_m) .
We can see that the last two eigenvalues have opposite signs. Hence,
in this case dS is a saddle fixed point, no longer an attractor.
In the neighbourhood of the scaling fixed point Sc, the linear
equation for δ u takes the form
δ u' = (1-ϵ_Hc)δ u ,
where 1-ϵ_Hc=-1/2(1+3w_m) is always negative
within the range 0≤ w_m<1. Therefore, in the u direction of
the phase space this fixed point is attractive and there are no trajectories
that flow from Sc to dS.
In summary, the condition c_GW=1 implies the scaling
fixed point Sc to be an attractor, just as in models of GR
<cit.> and dS becomes a saddle point. This
leads to the conclusion that there are no solutions which reproduce
a long, matter-like domination period and asymptotically approach
the de Sitter solution, which is required to reproduce the evolution
of the Universe.
§ SUMMARY AND CONCLUSIONS
In this work we investigate Gauss-Bonnet Dark Energy (GBDE) models.
Generically such models predict the speed of gravitational waves different
from the speed of light. In view of the tight observational constraints
on such deviations, denoted by α_T (see eq. (<ref>)),
such models are considered to be disfavoured. However, the deviation
α_T is time dependent and the bound, although tight, is an upper
bound, which is only applicable for the very late Universe. Hence, before
excluding GBDE models we need to perform a more detailed analysis.
Moreover, if the bound in eq. (<ref>) is expressed
in terms of the variation of the GB coupling function ξ(ϕ),
it might appear to be a weak bound, as shown in eq. (<ref>).
To see if GBDE models can indeed survive the observational constraints
on α_T we perform the dynamical systems analysis. We
assume that the scalar field has an exponential potential and find
that such a dynamical system quite generically has the scaling and
de Sitter fixed points (among others), denoted by Sc and
dS respectively in this work.
At Sc the scalar field adjusts in such a way that it mimics
the behaviour of the background matter component. In particular,
the equation of state of the scalar field is the same as
that of the matter component. In the case of the exponential GB coupling,
depending on the magnitude of the exponent, Sc is not stable:
it is a saddle point. This is in contrast to the General Relativistic
quintessence models with an exponential potential <cit.>.
In the latter setup the scaling fixed point is an attractor, which
makes it impossible to use as an explanation for the accelerated expansion
of the Universe. In GBDE Sc can have an unstable direction
which links to dS, the latter being an attractor. Using numerical
solutions we show that if the universe starts with a very small GB
term and it is either kination or matter dominated, initially all
solutions evolve towards the Sc fixed point. They linger
in the neighbourhood of Sc for a long period of time and
eventually change its course towards dS. This behaviour
is beneficial to modelling quintessential inflation, because it allows,
without extreme fine-tuning, to bridge the enormous energy density
gap (110 orders of magnitude) between inflation and dark energy.
We find numerous models that follow this scenario and can predict
observationally allowed values for the matter energy density and DE
equation of state.
Moreover, at Sc and dS fixed points the speed of
gravitational waves is exactly the same as that of the speed of light,
i.e. α_T=0. Unfortunately, if this model is to represent the evolution
of the actual Universe, we cannot be living either on the scaling
or de Sitter fixed points, but somewhere in between. However, as we
show in Figure <ref>, |α_T| changes from
0 to ∼1 in between these fixed points. Moreover, as we argue
below eq. (<ref>), if Ω_m∼0.1, which
corresponds to the current value, quite generically one expects |α_T|∼𝒪(0.1)-𝒪(1),
which is ruled out by many orders of magnitude. We conclude that the
bound in eq. (<ref>) makes GBDE with an exponential
ξ(ϕ) function unviable.
In view of the above conclusion, we investigated other choices of ξ(ϕ). First, we demonstrate that
for a linear function ξ(ϕ) the de Sitter fixed
point is a saddle point, and the Scaling fixed point becomes an attractor.
This makes it impossible to find any viable solution that would be
consistent with the evolution of the Universe. We cannot apply our
method to study more generic functions ξ(ϕ), for
example monomials. This is because dynamical equations lose their
self-similar character. However, it is reasonable to expect that the
cases studied above are quite representative, and the behaviour of
models with a more general ξ(ϕ) function is qualitatively
very similar.
The advantage of the above described analysis is that we have an explicit
ξ(ϕ) function. But as we showed, this does not
provide a viable cosmological solution. Another hope to make GBDE
models conform to observational constraints is to impose the condition
α_T=0, as can be seen in eq. (<ref>). In this case we
lose the benefit of having an explicit functional form of ξ(ϕ),
but we gain an additional constraint equation (<ref>). As it
is demonstrated in Section <ref>, the stability of
fixed points in this case is very similar to the linear model. That
is, there are no solutions which provide a long period of matter domination
followed by an accelerated expansion.
In summary, we find that a GBDE model with an exponential scalar field
potential and an exponential GB coupling function could provide a
realistic model of DE. However, the recent bounds on the speed of
GW rules out this possibility by many orders of magnitude. If, on
the other hand, we look for models that do satisfy α_T=0, then
it is impossible to find a scenario consistent with other cosmological
observations.
The negative conclusions reached in this work apply to the metric
formulation of the Gauss-Bonnet model. But we know that some modified
gravity models that violate the α_T bound in the metric formulation,
become viable again in the Palatini formalism <cit.>.
One can hope that a similar modification could save the GBDE model
too.[Analogously, if the GB term is coming from the breaking of
the Weyl symmetry, also a Weyl term should be added to the action
(see for instance Refs. <cit.>).
] We intend to study this possibility in future publications.
M.K. is supported by the María Zambrano grant, provided by the
Ministry of Universities from the Next Generation funds of the European
Union. This work is also partially supported by the MICINN (Spain)
projects PID2019-107394GB-I00/AEI/10.13039/501100011033 (AEI/FEDER,
UE). K.D. was supported, in part, by the Lancaster-Manchester-Sheffield
Consortium for Fundamental Physics under STFC grant: ST/T001038/1.A.R.
is supported by the Estonian Research Council grant PRG1055.
For the purpose of open access, the authors have applied a
Creative Commons Attribution (CC BY) license to any Author Accepted
Manuscript version arising.
§ THE FIXED POINTS
In this section we derive fixed points of the dynamical system in
eqs. (<ref>)–(<ref>) that are summarised in Table <ref>.
Such points, or higher dimension structures are regions of the phase
space where x'=y'=z'=0. We denote the constant values of x,
y, z at these regions by x_c, y_c, z_c. A priori
we do not impose the condition u'=0 at fixed points, but it follows
from the equations. In order to see that we can take the derivative
of the constraint equation with respect to N. At a fixed point
this gives
x_c u'|_{x=x_c} = 0 .
This equation allows for u'≠0 if x_c=0. However, if we plug
x'=x_c=0 into eqs. (<ref>) and (<ref>) we find
0 = √(3/2)λ y_c^2+u(1-ϵ_Hc) ,
where, in this case, ϵ_Hc is the value of ϵ_H in eq. (<ref>)
at x'=y'=z'=x_c=0. As is clear from the above equation, all
quantities on the RHS are constant but u. Hence, u=u_c must
also be a constant even for x_c=0. In summary, the algebraic equations
to find fixed point values are given by
0 = (ϵ_Hc-3)x_c+√(3/2)λ y_c^2+u_c(1-ϵ_Hc) ,
0 = (ϵ_Hc-√(3/2)λ x_c)y_c ,
0 = [ϵ_Hc-3/2(1+w_m)]z_c ,
where
ϵ_Hc = [3x_c^2+3/2(1+w_m)z_c^2-u_c x_c]·1/(1+u_c x_c)
and all the values have to satisfy the constraint in eq. (<ref>).
There are four possible solutions of this system:
A (x,y,u)=(x_c,0,3(w_m-1)x_c/(3w_m+1)),
which is valid for any value of x_c. The expansion rate in this
case is given by ϵ_Hc=3/2(1+w_m). We can see
that this is the scaling solution, where the scalar field adjusts
to the equation of state of the matter component.
B (x,y,u)=(x_c,0,(x_c^2-1)/(2x_c))
with ϵ_Hc=(5x_c^2+1)/(x_c^2+1)
C (x,y,u)=(x_c,√(1-x_c^2+2x_c(3x_c-√(3/2)λ)/(1+√(3/2)λ x_c)),(3x_c-√(3/2)λ)/(1+√(3/2)λ x_c))
with ϵ_Hc=√(3/2)λ x_c
D (x,y,u)=(√(3/2)(1+w_m)/λ,√(3/2(1-w_m^2)+λ u_c(1+3w_m)/√(6))/λ,u_c)
with ϵ_Hc=3/2(1+w_m). This is a second scaling
solution, which is valid for any (allowed by the constraint) value
of u_c.
As we can see, these “fixed points” are actually curves in the
three dimensional phase space. In the case of A to C,
the curves are parametrised by the x_c value, which can vary within
the range where y, u and z remain real. In the case of D,
the value of x is fixed to x_c=√(3/2)(1+w_m)/λ,
but u is free to vary within the similarly defined range.
We can infer more about the structure of fixed points if we use the
definition u in eq. (<ref>). Taking the derivative
of that expression we get eq. (<ref>), which we rewrite it here
for convenience:
u' = -2ϵ_H u+24H^2ξ_,ϕϕx .
Hence, at fixed points we get the relation
12H^2ξ_,ϕϕ x_c = ϵ_Hc u_c .
If we take an exponential GB function, given in eq. (<ref>),
the above equation becomes
√(3/2)κ x_c u_c = ϵ_Hc u_c .
Note that we did not cancel the u_c factors, because u_c=0 is an
allowed solution. Plugging various values, that are consistent with
this equation, into the system A–D, we obtain
fixed points that are summarised in Table <ref>. In that
table x_c=β at the fixed point G. This is a solution
of the cubic equation, which is given by
β ≡ 1/(3√(3)κ)·(5√(2)+(9κ^2-50)/α^{1/3}-α^{1/3}) ,
where
α ≡ √(2)(27κ^2-250)+√(2(27κ^2-250)^2+(9κ^2-50)^3)
and κ is an exponent of ξ(ϕ).
It is interesting to note that all fixed points with 0
(S2, G, S3 and IV) exist only
if the GB function ξ(ϕ) is of the form
ξ(ϕ) = c_1ϕ/m_Pl+c_2e^{κϕ/m_Pl} ,
where c_2≠0 and κ=√(2/3)·ϵ_Hc/x_c. Or, more
precisely, it is sufficient that the GB function asymptotically approaches
this solution, as the trajectory in the phase space gets closer to
those fixed points. This result can be obtained by integrating eq. (<ref>)
and using the fact that for ϵ_Hc≠0 we have
H = H_0e^{-ϵ_Hc ϕ/(√(6) x_c m_Pl)} .
For the linear ξ(ϕ), i.e. for ξ_,ϕϕ=0,
it follows from eq. (<ref>) that only fixed points with ϵ_Hc u_c=0
are present. Their values are summarised in Table <ref>.
In the case of the α_T=0 constraint, eq. (<ref>) at a
fixed point can be written as
24 H^2ξ_,ϕϕ|_c x_c^2 = u_c(ϵ_Hc+1)x_c .
For fixed points with x_c=0 this equation vanishes identically.
Looking at the system A–D, we see that only two
such fixed points exist: (x,y,u)=(0,0,0)
and (x,y,u)=(0,1,-√(3/2)λ).
These correspond to points M and dS in Table <ref>.
For x_c≠0 we can equate eq. (<ref>) with (<ref>)
and get
2ϵ_Hc u_c = u_c(ϵ_Hc+1) .
For u_c=0 the equation vanishes identically and we obtain the same
fixed points as the u_c=0 points in Table <ref>. On the
other hand, there is only one fixed point that satisfies u_c≠0.
That fixed point must have ϵ_Hc=1. Looking at the system A–D,
we can see that only one such point is allowed, which is the relation
C with (x,y,u)=(√(2/3)1/λ,2/√(3)λ,√(3/2)1/λ(1-λ^2/2)).
All these fixed points are summarised in Table <ref>.
|
http://arxiv.org/abs/2307.04089v1 | 20230709035156 | Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters | [
"Huan-Yu Liu",
"Zhao-Yun Chen",
"Tai-Ping Sun",
"Cheng Xue",
"Yu-Chun Wu",
"Guo-Ping Guo"
] | quant-ph | [
"quant-ph"
] |
[email protected]
0000-0002-6158-9627
0000-0002-5181-160X
[email protected]
0009-0009-2591-1672
0000-0003-2207-9998
[email protected]
0000-0002-8997-3030
0000-0002-2179-9507
Applying low-depth quantum neural networks (QNNs), variational quantum algorithms (VQAs) are both promising and challenging in the noisy intermediate-scale quantum (NISQ) era: despite their remarkable progress, criticisms of their efficiency and feasibility have never stopped.
However, whether VQAs can demonstrate quantum advantages remains undetermined, which is what we investigate in this paper.
First, we will prove that there exists a dependency between the parameter number and the gradient-evaluation cost when training QNNs. Noticing there is no such direct dependency when training classical neural networks with the backpropagation algorithm, we argue that such a dependency limits the scalability of VQAs.
Second, we estimate the time for running VQAs in ideal cases, i.e., without considering realistic limitations like noise and reachability. We will show that the ideal time cost easily reaches the order of a 1-year wall time.
Third, by comparing with the time cost using classical simulation of quantum circuits, we will show that VQAs can only outperform the classical simulation case when the time cost reaches the scaling of 10^0-10^2 years.
Finally, based on the above results, we argue that it would be difficult for VQAs to outperform classical cases in view of time scaling, and therefore, demonstrate quantum advantages, with the current workflow.
Since VQAs as well as quantum computing are developing rapidly, this work does not aim to deny the potential of VQAs. The analysis in this paper provides directions for optimizing VQAs, and in the long run, seeking more natural hybrid quantum-classical algorithms would be meaningful.
Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters
Guo-Ping Guo
August 12, 2023
======================================================================================
§ INTRODUCTION
Machine learning (ML) <cit.> is one of the most remarkable technologies of the 21st century, with applications ranging from daily work to scientific research <cit.>. The development of ML relies on the success of computer science and the neural network (NN) model <cit.>, which provide the capability of carrying out huge computational tasks and simulating complex functions. Quantum computing <cit.> has also developed rapidly in recent decades; its features, like quantum entanglement and quantum operation parallelism, are unavailable to classical counterparts. Quantum computing has been introduced to the ML region, known as quantum machine learning (QML) <cit.>.
Variational quantum algorithms (VQAs) <cit.> are representative of QML; their workflow is shown in Fig. <ref>. They are hybrid quantum-classical algorithms. A quantum processor prepares an ansatz with the quantum neural network (QNN) <cit.> U(θ)
[It is also called parameterized quantum circuits in some works. To make it consistent with classical machine learning, we use QNN here.]
as | ψ (θ) ⟩= U( θ )|0⟩ with θ={θ_1,θ_2,⋯,θ_L } the (trainable) parameter vector. The ansatz is then used to evaluate cost functions with quantum measurements, which is usually an expectation value under some Hamiltonian H: C(θ) =⟨ψ(θ)|H|ψ(θ)⟩. The classical processor optimizes θ to minimize the cost function. QNNs in VQAs are usually low-depth, which can be performed on current noisy intermediate-scale quantum (NISQ) <cit.> devices even without the support of fault-tolerant quantum computation technology <cit.>.
This gives VQAs the potential to achieve quantum advantages in the NISQ era. Since their proposal, VQAs have developed rapidly and have
applications ranging from quantum chemistry simulation <cit.> to numerical computation <cit.>. Experimental demonstrations have also been performed <cit.>.
As research progresses, the challenges of VQAs have gradually attracted attention; they can be divided into efficiency challenges and feasibility challenges. Efficiency challenges mean that executing VQAs requires huge resources. The well-known barren plateaus <cit.> describe a phenomenon of exponentially vanishing gradients, indicating that the number of samples required to obtain the cost function also grows exponentially with the number of qubits. Feasibility challenges, on the other hand, form the major part. They concern whether the correct answer can be obtained by running VQAs at all. Training VQAs is an NP-hard problem <cit.>. Besides the barren plateaus mentioned above, the optimization landscape of VQAs usually contains a variety of local minima <cit.>, implying that it is difficult to reach the global optimum. The expressibility of QNNs <cit.> also affects the reachability issue <cit.>: a global optimum can never be reached if it cannot be represented by the QNN. Noise <cit.> and other factors further affect the correctness of executing VQAs. Great efforts have been made to deal with such challenges, including mitigating barren plateaus to improve trainability <cit.>, reducing sampling times to improve efficiency <cit.>, mitigating noise <cit.>, etc.
We focus on the efficiency challenges in this work. First, we prove that there exists a dependency between the number of parameters in a QNN and the gradient-evaluation cost when training the QNN. Noticing that such a dependency does not exist when training classical NN models with the backpropagation algorithm <cit.>, we argue that the parameter number limits the scalability of VQAs. Next, we consider the time cost of running VQAs in an ideal setting, i.e., we do not consider realistic limitations of VQAs such as noise, qubit connectivity, reachability, etc. The time-cost analysis leads to the following conclusions:
* The time cost easily reaches the 1-year wall time at about 20 qubits.
* By comparing with the time cost of classical simulation, we can see that VQAs can only outperform classical simulations when the time cost reaches a scaling of 10^0-10^2 years. Therefore, quantum advantages are difficult for VQAs to achieve with the current workflow.
In performing this analysis, we do not intend to deny the potential of VQAs, or of other hybrid quantum-classical algorithms in the NISQ era, but some changes and improvements need to be made. Based on our analysis, we provide some directions for optimizing VQAs. Taking one step further, we need to consider what the natural way of performing machine learning with quantum computing is.
The rest of this paper is organized as follows:
In Sec. <ref>, we introduce the background needed for the later analysis, including training NNs with the backpropagation algorithm and QNNs.
In Sec. <ref>, the dependency between the parameter number and the gradient-evaluation cost in training QNNs is established.
In Sec. <ref>, we analyze the time cost of running VQAs.
Sec. <ref> gives the total time cost of running VQAs.
In Sec. <ref>, we compare the time costs of VQAs and classical simulation.
A conclusion is given in Sec. <ref>.
§ PRELIMINARY
§.§ Training classical neural networks using the backpropagation algorithm
The NN model is widely applied in solving ML tasks. General NNs are composed of neurons, whose diagram is shown in Fig. <ref>. A neuron can be viewed as a non-linear function that maps n inputs x={x_1,x_2,⋯,x_n} to an output y as:
y = f( ∑_i w_ix_i-b ),
where b is a bias, w={ w_1,w_2,⋯,w_n} is the adjustable weight vector and f is the non-linear activation function; one example is the sigmoid function:
f(x)=1/1+e^-x.
Different functions can be approximated by adjusting the weight vector, and the core idea of ML is to make such functions approach desired maps. “Learning” is exactly the process of adjusting the weights.
Only one neuron has limited learning capability. To further increase the expressive power, i.e., to be able to fit more functions, neurons can be combined to construct a NN, which is shown in Fig. <ref>. In the NN, the input is fed into several neurons, whose outputs are then viewed as inputs to neurons in the next layer. Denote y={y_1,y_2,⋯,y_m} as the output of the whole NN, or equivalently, the output of the neurons in the final layer. Denote the desired value as d={d_1,d_2,⋯,d_m} and the vector of weights of all neurons as W. As introduced, the learning process is to adjust W such that y is close to d.
To achieve this, one can define a cost function as:
C ≡ C(W) := 1/2∑_i=1^m (y_i-d_i)^2.
C=0 implies we have finished the learning process. To find the minimum value of the cost function, one can start from some specific set of parameters and then optimize the weight vector according to optimization algorithms like gradient descent:
W←W - η·∇ C,
where η > 0 is the learning rate, the gradient is ∇ C={∂ C/∂ w_j |w_j∈W}. Every element in the gradient can be obtained via methods like the finite difference method:
∂ C/∂ w_j=lim_δ→ 0C(w_jδ+)-C(w_jδ-)/2δ,
where w_jδ±={ w_1,⋯,w_j±δ,⋯}.
Denote the total number of weights as M
[The numbers of parameters in the NN and the QNN may not be the same; therefore we use different notations (M and L).].
If we apply Eq. (<ref>) to evaluate the gradient for every weight, we need to execute the NN O(M) times, and executing the NN once queries all M weights, so the query complexity of directly evaluating the gradient scales as O(M^2). However, executing a large NN costs huge resources, so reducing the cost of evaluating gradients is remarkable. We introduce the backpropagation algorithm below, which achieves this goal.
Take Fig. <ref> as an example and consider the weight w_2, which is representative of the weights of neurons in the final layer. The gradient element for this weight is:
∂ C/∂ w_2 = ∂ C/∂ y_1∂ y_1/∂ w_2.
According to Eq. (<ref>), ∂ C/∂ y_1 = y_1-d_1, and ∂ y_1/∂ w_2 involves only the operation within one neuron, which can easily be obtained from Eq. (<ref>).
Next, we consider evaluating the gradient with respect to w_1, which is representative of the weights in the middle layer.
∂ C/∂ w_1 = ∂ C/∂ y_m1∂ y_m1/∂ w_1
= ( ∑_i ∂ C/∂ y_i∂ y_i/∂ y_m1) ∂ y_m1/∂ w_1.
According to Eq. (<ref>), ∂ C/∂ y_i is already known once the gradients of all weights of neurons in the final layer have been obtained and can therefore be reused; the other partial derivatives are all local to one neuron.
Moving back, ∂ C/∂ w_0 can be analyzed similarly.
Therefore, when training classical NN models, one can first execute the NN and record the output (y) of every neuron. When evaluating gradients, the weights of neurons in the final layer are evaluated first, and this information is reused when evaluating the gradients of neurons in the preceding layers.
Gradient evaluation with this backward propagation of information is called the backpropagation algorithm, whose query complexity is O(M); this is a reduction compared to the direct finite-difference method. Using this method, we do not need to execute the NN once per weight, which makes training NNs scalable even for huge sizes.
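To make the counting concrete, the following minimal Python/NumPy sketch (a tiny 3-4-2 network with random data, chosen purely for illustration) computes the gradient of the cost in Eq. (<ref>) once by backpropagation, reusing the recorded neuron outputs, and once by central finite differences, which needs two network executions per weight.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# tiny network: 3 inputs -> 4 hidden neurons -> 2 outputs, cost C = 1/2 sum (y - d)^2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x, d = rng.normal(size=3), rng.normal(size=2)

def forward(W1, b1, W2, b2):
    h = sigmoid(W1 @ x - b1)          # hidden outputs, recorded for reuse
    y = sigmoid(W2 @ h - b2)          # network output
    return h, y

def cost(W1, b1, W2, b2):
    return 0.5 * np.sum((forward(W1, b1, W2, b2)[1] - d) ** 2)

# backpropagation: one forward pass, then information flows backwards and is reused
h, y = forward(W1, b1, W2, b2)
delta2 = (y - d) * y * (1 - y)          # dC/d(pre-activation) in the final layer
gW2 = np.outer(delta2, h)               # dC/dW2
delta1 = (W2.T @ delta2) * h * (1 - h)  # reused final-layer information
gW1 = np.outer(delta1, x)               # dC/dW1

# central finite differences, Eq. (<ref>): two network executions per weight
eps, gW2_fd = 1e-6, np.zeros_like(W2)
for i in range(W2.shape[0]):
    for j in range(W2.shape[1]):
        Wp, Wm = W2.copy(), W2.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        gW2_fd[i, j] = (cost(W1, b1, Wp, b2) - cost(W1, b1, Wm, b2)) / (2 * eps)

print(np.max(np.abs(gW2 - gW2_fd)))     # both methods agree up to discretisation error

The backpropagation pass touches every weight once, whereas the finite-difference loop re-executes the whole network for every weight; this is exactly the O(M) versus O(M^2) query-complexity gap discussed above.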
§.§ Quantum Neural Networks
To prepare for the later analysis, we introduce the unitary coupled-cluster singles and doubles ansatz <cit.> and the hardware-efficient ansatz (HEA) <cit.> in this section.
§.§.§ Unitary coupled-cluster singles and doubles ansatz
In quantum chemistry simulations, the unitary coupled-cluster (UCC) ansatz is widely applied. It is derived from the coupled-cluster theory <cit.>, which applies symmetry-conserved excitation operators on some initial states, usually the Hartree-Fock (HF) state, to expand wavefunctions in the target subspace.
Denote the number of spin-orbitals and electrons of a given system as n_o and n_e, and order the n_o spin-orbitals from 1 to n_o such that their corresponding energies are in non-decreasing order. Then the HF state |ψ_HF⟩ = | 1,1,⋯,1,0,0,⋯,0⟩ with exactly n_e 1s and n_o-n_e 0s is the state with the lowest energy when interaction energies are ignored, and it usually serves as the ground-state approximation.
When considering the interaction energies, the ground state should be |ψ⟩ = ∑_ |ψ_i⟩∈ S a_i |ψ_i⟩, where a_i are coefficients and all states in the set S satisfying the condition that the Hamming weight, i.e, the sum of all 1s is exactly n_e. Starting from the |ψ_HF⟩, some symmetry-conserved operations can be applied to expand the target subspace spanned by S. This can be realized with the fermionic creation(annihilation) operators a_j^†(a_j). For instance, the operator a_i^†a_α can excite one electron from the α-th spin-orbital to the i-th one and will result in 0 (not the vacuum state) if the α-th orbital has no electron or the i-th already has one electron. Therefore, we can define it as a single-excitation operator. Double-excitation operator a_i^†a_j^†a_α a_β can be similarly defined.
Since considering all excitations will cost huge resources, we usually consider the single- and double-excitations, and the UCC ansatz with only the single- and double-excitation is called the UCCSD ansatz:
|ψ_UCCSD(θ)⟩ = U_UCCSD(θ) |ψ_HF⟩,
where the QNN has the form:
U_UCCSD(θ) = e^T-T^†,
where T=T_1+T_2 is a linear combination of excitation operators, expressed as:
T_1 = ∑_α={1,2,⋯,n_e}, i={n_e+1,⋯,n_o} θ_iα a_i^† a_α,
T_2 = ∑_α,β={1,2,⋯,n_e}, i,j={n_e+1,⋯,n_o}, α<β, i<j θ_ijαβ a_i^† a_j^† a_α a_β,
where θ={θ_iα,θ_ijαβ} is the parameter vector. Therefore:
T-T^† = ∑_α={1,2,⋯,n_e}, i={n_e+1,⋯,n_o} θ_iα (a_i^† a_α - a_α^† a_i)
+ ∑_α,β={1,2,⋯,n_e}, i,j={n_e+1,⋯,n_o}, α<β, i<j θ_ijαβ (a_i^† a_j^† a_α a_β-a_β^† a_α^† a_j a_i).
To further implement the ansatz on quantum processors, fermionic-to-qubit mappings are required. We apply the Jordan-Wigner (JW) transformation <cit.>.
a_j^† = 1/2[∏_k<jZ_k] (X_j-iY_j),
a_j = 1/2[∏_k<j Z_k](X_j+iY_j).
After this, the HF state is mapped to |1⟩^⊗ n_e⊗ |0⟩^⊗ n_o-n_e, implying that under JW transformation, the number of qubits required is the same as the number of spin-orbitals: n=n_o. And the excitation operator becomes a linear combination of tensor products of Pauli operators (Pauli strings). Finally, the operation T-T^† will be a linear combination of Pauli strings. With some orders of Trotter expansion, we have:
U_UCCSD(θ) = ∏_l e^-iθ'_lP_l,
where θ' can be obtained from θ.
Every e^-iθ P can be implemented on the quantum processor as shown in Fig. <ref>.
§.§.§ Hardware-efficient ansatz
HEA is a problem-agnostic ansatz, which directly applies easy-implementable quantum gates of the quantum processor. We assume the HEA to be comprised of P blocks, each of which consists of single-qubit rotation and two-qubit entangling operations:
U_HEA(θ) = ∏_p=1^P U_entangle U_single(θ_p),
where:
U_entangle = CNOT_n,1∏_i=1^n-1CNOT_i,i+1,
U_single(θ_p) = ∏_i=1^n R_Z(θ_p^i1) R_X(θ_p^i2) R_Z(θ_p^i3),
where subscripts in CNOT gates represent the control and target qubit, respectively. The quantum circuit for the HEA described here is shown in Fig. <ref>.
It has been pointed out that HEA has remarkable expressibility <cit.>. Combined with the fact that HEA is hardware-friendly, it has become the most commonly applied QNN model.
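As an illustration of the structure just described, the following NumPy sketch builds the HEA unitary for a small register by explicit matrix algebra. The rotation conventions R_Z(θ)=e^-iθ Z/2 and R_X(θ)=e^-iθ X/2 and the ordering of the gate products are assumptions made here for concreteness (the text above does not fix them), and the dense-matrix construction is, of course, only feasible for a few qubits.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)   # |0><0|
P1 = np.diag([0, 1]).astype(complex)   # |1><1|

def RZ(t):
    return np.diag([np.exp(-0.5j * t), np.exp(0.5j * t)])

def RX(t):
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * X

def embed(ops, n):
    """Kronecker product with the given single-qubit operators placed at qubits 1..n."""
    out = np.array([[1.0 + 0j]])
    for q in range(1, n + 1):
        out = np.kron(out, ops.get(q, I2))
    return out

def cnot(c, t, n):
    return embed({c: P0}, n) + embed({c: P1, t: X}, n)

def hea(theta, n, P):
    """theta has shape (P, n, 3): three rotation angles per qubit and block."""
    U = np.eye(2 ** n, dtype=complex)
    for p in range(P):
        U_single = np.eye(2 ** n, dtype=complex)
        for i in range(1, n + 1):
            a, b, c = theta[p, i - 1]
            U_single = embed({i: RZ(a) @ RX(b) @ RZ(c)}, n) @ U_single
        U_ent = cnot(n, 1, n)
        for i in range(1, n):
            U_ent = U_ent @ cnot(i, i + 1, n)
        U = U_ent @ U_single @ U
    return U

n, P = 3, 2
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, size=(P, n, 3))
U = hea(theta, n, P)
print(np.allclose(U.conj().T @ U, np.eye(2 ** n)))   # True: the ansatz is unitary

The parameter array has shape (P, n, 3), i.e. 3nP entries, matching the parameter count L_HEA = 3nP used later.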
§ GRADIENTS IN VARIATIONAL QUANTUM ALGORITHMS
Training the parameters of QNNs is the main step in executing VQAs and is NP-hard <cit.>. On the one hand, cost functions in VQAs are obtained via repeated measurements, and achieving a sampling error ϵ requires O(1/ϵ^2) samples. About 10^6 samples are then required to reach the widely used chemical accuracy of 1.6× 10^-3 Hartree
[1 Hartree = 2625.5 kJ/mol.]
. On the other hand, problems like barren plateaus can increase the required number of samples exponentially. Together with noise and other factors, this makes evaluating cost functions in VQAs difficult.
Note that in the training process, measuring the cost function is mainly used to evaluate gradients. If we apply Eq. (<ref>) for gradient evaluation, the cost function needs to be evaluated O(L) times. In Sec. <ref>, we described how the backpropagation algorithm reduces the number of NN executions required in the classical case. Therefore, it is natural to ask whether this type of method can be applied to reduce the gradient-evaluation cost when training QNNs.
First of all, the backpropagation algorithm cannot be applied directly because a QNN is a parameterized unitary transformation that maps an initial state to the ansatz without recording the inter-layer states, which, however, are required for the backpropagation algorithm. As discussed in <cit.>, backpropagation scaling for training QNNs is only possible when multiple copies of the ansatz are available.
Next, we consider whether there is some dependency between the gradient elements. If this were the case, then after evaluating some gradient elements we could use this relation to compute the remaining elements directly, without running the QNN. However, we show below that this is not possible either.
For a general ansatz U(θ) with L independent parameters and a cost function defined as the expectation value of some Hamiltonian H, we need at least O(L) evaluations of the cost function to obtain the full gradient.
The proof of this theorem is provided below. According to the theorem, the cost of evaluating gradients when training QNNs depends on the number of parameters. This dependency heavily limits the scalability of VQAs.
In ML tasks, it is common to improve performance by increasing the number of parameters. Since the number of network executions needed for a gradient does not grow with the parameter number for classical NNs, such a performance-improving strategy works there. The scalability limitation, however, makes increasing the number of parameters a poor choice in VQAs. Since the parameter number naturally grows with the problem size and complexity, applying VQAs at scale would be challenging.
Suppose the QNN (parameterized quantum circuit) has the form:
U(θ) = ∏_l=1^L U_l(θ_l) W_l = ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l,
where θ = {θ_1,θ_2,⋯,θ_L } is a vector of independent parameters. P_l is a Hermitian operator and W_l is the un-parameterized gate. Denote the initial state as ρ_0, then the cost function is:
C(θ) = Tr [ U(θ) ρ_0 U^† (θ) H ].
Expand Eq. (<ref>) according to Eq. (<ref>), we have:
C(θ) = Tr[ ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l ρ_0 ∏_l=L^1 W_l^† (cosθ_l I + i sinθ_l P_l) H ].
Observe that there are 4 terms for every θ_l. We view cosθ_l and sinθ_l as the coefficients; the coefficients and operator contents of the four terms in the cost function are then:
cosθ_l cosθ_l, f(I,I);
cosθ_l sinθ_l, f(I,iP_l);
sinθ_l cosθ_l, f(-iP_l,I);
sinθ_l sinθ_l, f(-iP_l,iP_l).
Note that such four cases can be described by two bits p_lq_l and we define the above four cases mean p_lq_l=00,01,10,11, respectively. Then the cost function is expressed as:
C = ∑_ pq = { p_lq_l|p_lq_l=00,01,10,11 }_l=1^L a_pq f_pq,
where:
a_pq = ∏_l a_p_lq_l, where a_p_lq_l = cos^2θ_l for p_lq_l=00, a_p_lq_l = sinθ_l cosθ_l for p_lq_l=01 or 10, and a_p_lq_l = sin^2θ_l for p_lq_l=11.
Denote:
g^l_pq = ∂ a_pq/∂θ_l.
Then the gradient is:
∂ C/∂θ_l = ∑_pq g_pq^l f_pq.
We assume that {f_pq} are unknown. Computing ∂ C/∂θ_l directly through {f_pq} requires evaluating almost 4^L terms, which is impractical.
Suppose we could obtain the full gradient by evaluating the QNN only k<O(L) times; then after evaluating some gradient elements we could obtain the others without further evaluations. Since the functions {f_pq} are unknown, the remaining elements would have to be linear combinations of the known gradient elements. If such a case exists, consider the simplest situation in which we have obtained L-1 gradient elements; the remaining gradient element can then be expressed as:
∂ C/∂θ_l = ∑_k≠ l m_k ∂ C/∂θ_k.
This means that the vectors {g^k_pq}_k=1^L are linearly dependent. Then there exists a set of numbers {m_i}_i=1^L that are not all 0 such that:
∑_l=1^L m_l ∂ C/∂θ_l = 0.
This means:
∑_l=1^L m_l g^l_pq = 0, ∀ pq = {p_lq_l}.
We consider the following 2^L elements with indices:
pq = {00,11}^L.
And we re-order them as w_l=p_lq_l. Then the above equation will become:
∑_l=1^L m_l g^l_w=0, ∀ w={w_l}={0,1}^L.
Define w'={w_l}_l=2^L. Consider every pair of index 0,w' and 1,w', we have:
∑_l=1^L m_l g^l_0,w'=0,
∑_l=1^L m_l g^l_1,w'=0.
Add the two equations together:
∑_l=1^L m_l ( g^l_0,w' +g^l_1,w') =0.
Observe:
g^l_0,w' + g^l_1,w' = ∂ a_0,w'/∂θ_l + ∂ a_1,w'/∂θ_l = ∂/∂θ_l (a_0,w'+a_1,w').
While:
a_0,w'+a_1,w'= cos^2θ_1 a_w' + sin^2θ_1 a_w'=a_w',
which no longer depends on θ_1, we have:
g^1_0,w' +g^1_1,w' = 0.
Then Eq. (<ref>) will become:
∑_l=2^L m_l ( g^l_0,w' +g^l_1,w') = ∑_l=2^L m_l ∂ a_w'/∂θ_l = 0.
This is exactly the (L-1)-parameter case.
Repeat this process and we will eventually have:
m_L ∂ a_w_L/∂θ_L = 0, w_L=0,1.
Since a_w_L=0=cos^2θ_L, we have ∂ a_w_L=0/∂θ_L = -sin (2θ_L), so m_L=0 except at isolated parameter values where sin(2θ_L)=0. Moving backwards, we obtain m_L-1=0 and finally m_l=0 for all l. This contradicts the assumption that the vectors are linearly dependent, and the proof is complete.
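The decomposition C = ∑_pq a_pq f_pq used in the proof is easy to verify numerically for L=1. The following NumPy sketch (single qubit, P=X, H=Z, a random W and ρ=|0⟩⟨0|, all chosen only for illustration) compares the directly evaluated cost with the expansion into the four θ-independent terms f_pq.

import numpy as np

rng = np.random.default_rng(0)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
P, H = X, Z
rho = np.array([[1, 0], [0, 0]], dtype=complex)      # |0><0|
W, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))  # random unitary

def cost(theta):
    U = (np.cos(theta) * I - 1j * np.sin(theta) * P) @ W
    return np.real(np.trace(U @ rho @ U.conj().T @ H))

# theta-independent building blocks f_pq for L = 1
M = W @ rho @ W.conj().T
f00 = np.trace(M @ H)
f01 = np.trace(M @ (1j * P) @ H)
f10 = np.trace((-1j * P) @ M @ H)
f11 = np.trace((-1j * P) @ M @ (1j * P) @ H)

theta = 0.37
expanded = (np.cos(theta) ** 2 * f00 + np.cos(theta) * np.sin(theta) * (f01 + f10)
            + np.sin(theta) ** 2 * f11)
print(cost(theta), np.real(expanded))   # identical up to machine precision

For L parameters, the same bookkeeping produces the 4^L coefficients a_pq, which is why the argument above works with the vectors g^l_pq instead of the individual f_pq.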
§ TIME COSTS FOR EXECUTING VARIATIONAL QUANTUM ALGORITHMS
In this part, we estimate the time cost for executing VQAs, especially when using the UCCSD ansatz and HEA introduced in Sec. <ref>. Since VQA is executed by repeatedly measuring cost functions and updating parameters, the total time of running a VQA is:
t_VQA = t_cost× N_cost,
where t_cost is the time needed to obtain a cost function and N_cost is the number of cost functions needed to obtain to finish the algorithm.
On the one hand, cost functions in VQAs are obtained via repeated sampling of the ansatz. Then: t_cost = t_sample× N_sample, where t_sample and N_sample are the time needed to sample the ansatz once and the number of samples needed to obtain a cost function, respectively. On the other hand, N_cost depends on the optimization algorithms applied. When using gradient-based algorithms, we have:
N_cost = N_gradient× N_iterate, where N_gradient and N_iterate are the number of cost functions needed to evaluate to obtain one gradient and the number of iteration times, respectively. Below we will analyze the above four factors. And the sketch diagram for the analysis is shown in Fig. <ref>.
N_gradient: As described in Theorem <ref>, we can view N_gradient simply as the number of parameters in the ansatz. In the UCCSD ansatz, the number of parameters is exactly the sum of single- and double-excitation terms:
L_UCCSD = C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2,
where
C_n^m = n!/m!(n-m)!.
In HEA, parameters only appear in the single-qubit rotation operations. In each of the P blocks, we apply three single-qubit gates on every qubit, then we have:
L_HEA = 3nP.
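For concreteness (the values are chosen only for illustration), a system with n_o = 20 spin-orbitals and n_e = 10 electrons gives L_UCCSD = C_10^1 C_10^1 + C_10^2 C_10^2 = 100 + 2025 = 2125 parameters, whereas an HEA on n = 20 qubits with P = n = 20 blocks has L_HEA = 3· 20· 20 = 1200 parameters.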
t_sample: Generally, sampling a quantum circuit includes three parts: initializing the quantum hardware, running the circuit, and measuring the outcome. Then:
t_sample = t_initial + t_gate + t_read.
On current superconducting hardware, t_initial
and t_read together reach the order of 1 μ s <cit.>. The times for applying a single- and a two-qubit gate are t_single=30 ns and t_double=60 ns <cit.>, respectively.
[The detailed times differ between systems but are of the same order. We apply averaged, representative values.]
Then:
t_gate = l_single× t_single + l_double× t_double,
where l_single and l_double are the single- and two-qubit gate layer depths; two gates in the same layer can be applied at the same time. Since the time for initializing the hardware and measuring the outcome corresponds to applying on the order of 10^2 quantum gates, we ignore this cost and take only the circuit running time as t_sample. The following theorems provide the value of t_gate for the UCCSD ansatz and HEA.
For a many-body system with n_o spin-orbitals and n_e electrons, the gate layer depth for the UCCSD ansatz under the first-order Trotter expansion is:
l_single = 6 C_n_e^1 C_n_o-n_e^1 +24 C_n_e^2 C_n_o-n_e^2 ,
l_double = 2 n_oC_n_e^1 C_n_o-n_e^1 + 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2.
As introduced in Sec. <ref>, implementing the UCCSD ansatz on the quantum hardware requires transforming the ansatz into the form of Eq. (<ref>). According to Fig. <ref>, for a k-local Pauli operator, which means that the operator acts non-trivially on k qubits, the single-qubit and two-qubit depth of implementing e^-iθ P is 3 and 2k-2, respectively. Therefore, to determine the gate layer depth with the first-order Trotter expansion, we just need to determine the number of operators e^-iθ P in Eq. (<ref>) and the locality for each operator P.
Consider the single-excitation term, for every pair of i>α, the single-excitation term a_i^† a_α - a_α^† a_i is mapped with the JW transformation as:
a_i^†a_α - a_α^†a_i = [ ∏_k<i Z_k ] (X_i - i Y_i) [ ∏_k<α Z_k ] (X_α + i Y_α)
- [ ∏_k<i Z_k ] (X_i + i Y_i) [ ∏_k<α Z_k ] (X_α - i Y_α)
= Z_α (X_α + i Y_α) [ ∏_α<k<i Z_k ] (X_i-i Y_i)
- Z_α (X_α - i Y_α) [ ∏_α<k<i Z_k ] (X_i+i Y_i)
= 2 i X_α[ ∏_α<k<i Z_k ] Y_i - 2 i Y_α[ ∏_α<k<i Z_k ] X_i.
After mapping, a_i^† a_α - a_α^† a_i is mapped to a sum of 2 Pauli strings, each of which is (i-α+1)-local. Similar to Eq. (<ref>), for every group of i>j>α>β, the double-excitation term a_i^† a_j^† a_α a_β-a_β^† a_α^† a_j a_i is mapped to a sum of 8 Pauli strings, each of which is (i-β+1)-local.
Now we are going to determine the circuit depth. Since every e^-iθ P will cause 3 single-qubit circuit depth, and according to Eq. (<ref>), the number of single-excitation and double-excitation terms are C_n_e^1 C_n_o-n_e^1 and C_n_e^2 C_n_o-n_e^2, respectively. Then:
l_single = C_n_e^1 C_n_o-n_e^1 × 2 × 3 + C_n_e^2 C_n_o-n_e^2 × 8 × 3
=6 C_n_e^1 C_n_o-n_e^1 + 24 C_n_e^2 C_n_o-n_e^2.
The case for the two-qubit depth is more complex. For every pair of i,α, there are 2 Pauli strings for each single-excitation term, the two-qubit circuit depth for each of which is 2(i-α+1)-2=2(i-α). Therefore, the two-qubit gate layer depth with the single-excitation term is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_α=1^n_e 4(i-α ) ) = ∑_i=n_e+1^n_e+(n_o-n_e)( 4in_e - n_e(n_e+1)/2× 4)
= 4n_e (n_e+1+n_o) (n_o-n_e) /2 -2 n_e(n_e+1)(n_o-n_e)
=2 n_on_e (n_o-n_e)
=2 n_o C_n_e^1 C_n_o-n_e^1.
For every group of i,j,α,β, the double-excitation operator will result in 8 Pauli strings, each of which is (i-β+1)-local. And different choices of j,α will not affect the locality. Then the two-qubit gate depth caused by the double-excitation term is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_β=1^n_e (i-β)(n_e-β)(i-n_e-1) ) × 8 = 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2 .
Adding Eq. (<ref>) and (<ref>), we obtain the overall two-qubit layer depth. And the theorem is now finished.
For the HEA described above with P blocks, we have:
l_single = 3P,
l_double = nP.
N_sample: Cost functions in VQAs are obtained via repeated sampling, where reaching the sampling error ϵ requires sampling the circuit O(1/ϵ^2) times. Thus N_sample is determined by the required sampling accuracy.
Generally, the sampling error should be within the accuracy required for solving the problem. However, to perform parameter optimization, sampling accuracy should also be related to the scaling of the gradient. Suppose we are applying the parameter-shift rule <cit.> to evaluate the gradient as:
∂_jC = 1/2( C_+ -C_- ),
with C_± = C(θ_j±π/2) and ∂_jC = ∂ C/∂θ_j.
Denote the sampling error as ϵ and the sampled estimate of the gradient as ∂_jC̃. The worst case is (suppose ϵ > 0):
∂_jC̃ = 1/2( [C_+ - ϵ] - [C_-+ϵ] )
= ∂_jC - ϵ.
To update the parameters in the correct direction, we need:
∂_jC̃/∂_jC = (∂_jC-ϵ)/∂_jC > 0.
Then sampling accuracy is dependent on the scaling of the gradient.
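The interplay between the sampling error and the gradient magnitude can be illustrated with a single-qubit toy model (an assumption made here purely for illustration: an R_X rotation with the convention R_X(θ)=e^-iθ X/2 acting on |0⟩ and the cost C(θ)=⟨Z⟩=cos θ, so that the parameter-shift rule gives the exact derivative -sin θ). The Python sketch estimates the shifted costs from a finite number of ±1 measurement outcomes and checks whether the sign of the estimated gradient is still correct.

import numpy as np

rng = np.random.default_rng(3)

def sampled_cost(theta, n_samples):
    # measuring Z on RX(theta)|0> yields +1 with probability cos^2(theta/2)
    p_plus = np.cos(theta / 2) ** 2
    outcomes = rng.choice([1.0, -1.0], size=n_samples, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def parameter_shift_gradient(theta, n_samples):
    return 0.5 * (sampled_cost(theta + np.pi / 2, n_samples)
                  - sampled_cost(theta - np.pi / 2, n_samples))

theta = 0.05                          # small angle, so the true gradient is small
true_grad = -np.sin(theta)            # exact derivative of C(theta) = cos(theta)
for n_samples in (10 ** 2, 10 ** 4, 10 ** 6):
    est = parameter_shift_gradient(theta, n_samples)
    print(n_samples, est, np.sign(est) == np.sign(true_grad))

With 10^2 samples the statistical error, of order 1/√(N_sample) ≈ 0.1, exceeds |∂_jC| ≈ 0.05 and the sign of the estimate is unreliable; only with sufficiently many samples is the condition above satisfied.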
When the magnitude of the gradient is suppressed by barren plateaus, an exponential number of samples would be required, which is not workable in practice. We therefore analyze the time cost for a set of given sampling times. In real tasks, methods can be applied to reduce the number of samples, to address the barren plateaus phenomenon and to reduce measurement costs.
N_iterate: Generally, N_iterate is not known in advance and differs between problems. Even for the same problem, different initial parameters and different optimization algorithms lead to different values of N_iterate. In gradient descent algorithms, both the learning rate and the gradient scaling affect the number of iterations. Moreover, when the scaling of the gradient is affected by barren plateaus or local minima, the optimization takes more steps. Therefore, we treat N_iterate similarly to N_sample and provide the time cost for a set of given N_iterate. We combine these two factors as:
N_si = N_sample× N_iterate,
t_VQA: Now we provide the value of t_VQA for both the UCCSD ansatz and HEA. In general,
t_VQA = t_sample× N_sample× N_gradient× N_iterate
= N_si× ( t_single× l_single + t_double× l_double ) × L
= 3× 10^-8× N_si× (l_single+2l_double )× L.
Based on the former analysis, when considering the above ansatzes, we have:
t_VQA-UCCSD = 10^-8× N_si×( C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2 )
×[ (12n_o+18) C_n_e^1 C_n_o-n_e^1 + (32n_o+88)C_n_e^2 C_n_o-n_e^2 ],
and
t_VQA-HEA = 9× 10^-8× N_si× (2n^2+3n)P^2 .
We can see that for a fixed N_si, the total time establishes a polynomial growth.
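The formulas above are straightforward to evaluate; the following Python sketch simply re-implements the parameter counts, the gate layer depths from the theorems, and the expression for t_VQA, using the gate times quoted earlier (30 ns and 60 ns). The choices n_o = 20, n_e = 10 and N_si = 10^6 are made only for illustration.

from math import comb

def uccsd_counts(no, ne):
    s = comb(ne, 1) * comb(no - ne, 1)                   # single excitations
    d = comb(ne, 2) * comb(no - ne, 2)                   # double excitations
    L = s + d                                            # number of parameters
    l_single = 6 * s + 24 * d                            # single-qubit layer depth (theorem)
    l_double = 2 * no * s + (8 / 3) * (2 * no + 1) * d   # two-qubit layer depth (theorem)
    return L, l_single, l_double

def t_vqa(L, l_single, l_double, N_si, t_single=30e-9, t_double=60e-9):
    return N_si * (t_single * l_single + t_double * l_double) * L

year = 3.156e7                                           # seconds per year
no, ne, N_si = 20, 10, 1e6
L, ls, ld = uccsd_counts(no, ne)
print(L, t_vqa(L, ls, ld, N_si) / year)                  # 2125 parameters, roughly one year

# HEA for comparison: L = 3nP, l_single = 3P, l_double = nP
n, P = no, no
print(t_vqa(3 * n * P, 3 * P, n * P, N_si) / year)

For the UCCSD ansatz this already lands at the 1-year scale for N_si=10^6 at 20 qubits, in line with the discussion in the next section.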
§ TOTAL TIME COST
Based on the analysis in Sec. <ref>, we now provide the detailed time cost of running VQAs. We estimate the time cost under the assumption of an ideal quantum processor. That is, we only take into account the circuit running time and the sampling process for obtaining cost functions; other factors, including hardware noise, connectivity between physical qubits, the time for initializing the hardware and reading out the outcomes, as well as limitations of VQAs such as reachability and trainability, are all ignored. The goal of ignoring these factors is to show the “best” time-scaling performance of VQAs.
As a representative application scenario, we consider applying VQAs to solve the ground states of different-sized molecular systems and label the systems according to their spin-orbital numbers n_o, which is also the number of qubits required: n. The number of electrons is set to be n_e=n_o/2.
Since N_sample and N_iterate are not known in advance, we provide the time cost for the following values of the two factors:
N_sample ∈{ 10^4,10^5,10^6,10^7,10^8 },
N_iterate ∈{ 10^2,10^3,10^4 }.
Combining them into one factor, N_si ranges from 10^6 to 10^12.
Given n_o and n_e, the structure of the UCCSD ansatz is determined. However, the required block depth P of the HEA is generally hard to determine. Therefore, we consider the following two cases: P=n and P=n^2.
In Fig. <ref> and <ref>, we plot the time cost for different values of N_si for both the UCCSD ansatz and HEA. The 1-year and 1000-year times are given as benchmarks.
From the figures, it is clear that for a fixed value of N_si, the total time cost of running VQAs grows polynomially with the number of qubits. Compared to the exponential scaling of classical simulation, VQAs seem to perform better.
However, in terms of real time, this is not the case. Even at about 20 qubits, VQAs easily reach the 1-year time. In quantum chemistry tasks, at least 10^6 samples are required to achieve chemical accuracy. The total time cost corresponding to N_si=10^6 can therefore be viewed as the time for performing a single step of parameter optimization, which is already at the level of 1 year. Since this is the time on an ideal quantum computer, the real time cost will be even larger.
§ VQAS V.S. CLASSICAL SIMULATIONS
Since the term “quantum advantage” is defined relative to classical computation, it is not sufficient to provide only the time cost of using VQAs. In this part, we also consider the time cost of simulating VQAs with classical simulation of quantum circuits.
As quantum processors are not available for most research, classical simulation of quantum circuits is widely applied. The major difference between running circuits on quantum hardware and simulating them classically is that the gate time on hardware does not change with the number of qubits, whereas in classical simulation it does. A quantum operation U_x, with x the list of qubits the operation acts on, is in fact U_x⊗ I_x̅, where x̅={ k|k∉x}. Hence, in classical simulation, the time for applying a quantum gate grows exponentially with the number of qubits.
We set the gate time for 10 qubits to t_10=10^-3 s; the time for n qubits is then t_n=t_10· 2^(n-10). Sampling is not required in classical simulation. We set N_sample=10^6 for the quantum case in order to reach chemical accuracy, and N_iterate is as listed in Eq. (<ref>).
The time comparison between VQAs and classical simulations for both the UCCSD ansatz and HEA is shown in Fig. <ref>. Due to the different growth rates, the time curves of VQAs and classical simulations cross; the corresponding time is denoted as T, which is a function of the ansatz, the iteration number, etc.
It is only possible for VQAs to outperform classical computers when the required time is larger than T. From the figures, this time is at the scale of years, and it increases with the number of parameters.
Moreover, unlike quantum processors, classical simulations can use multiple cores, which provides a further time reduction. For instance, in <cit.>, the average gate time is 2.09 s and 1.22 s when performing a 29-qubit and a 40-qubit quantum operation, respectively, while quantum simulation with multiple quantum processors is still unavailable today. Therefore, quantum advantages are difficult for VQAs to reach within an acceptable time scale.
§ CONCLUSION AND OUTLOOK
In this paper, we have investigated the time-scaling performance of VQAs and the potential for VQAs to achieve quantum advantages. We proved that methods like backpropagation cannot be applied directly when training QNNs, since the inter-layer quantum states of QNNs are not recorded. This makes the gradient-evaluation cost depend on the number of parameters in the quantum version of NN models, which limits the scalability of VQAs. Based on this result, we estimated the time cost of running VQAs in ideal cases, where realistic limitations like noise, reachability and qubit connectivity are not considered, and only the time for performing quantum gates and the errors due to finite sampling are taken into account. The results show that even though the time grows only polynomially, its scale easily reaches the 1-year wall time. Finally, we considered the time of classical simulation, which grows exponentially with the number of qubits. The results show that the running time of VQAs is only shorter once the time scale exceeds 10^2 years for the UCCSD ansatz. Moreover, due to the realistic limitations mentioned above, it is still unclear whether VQAs can actually perform better. At a regular time scale, quantum advantages may be unavailable with VQAs.
By providing such a negative comment, we do not want to deny the potential of VQAs and NISQ algorithms. For VQAs, optimizations need to be made to reduce the time cost, for example more efficient sampling strategies and more parameter-efficient ansatzes. One of our future works is to design backpropagation-type algorithms for efficiently training QNNs.
In the long term, introducing quantum computing into the context of machine learning, or equivalently, quantum machine learning, has remarkable potential. However, due to the different features of quantum and classical computation, directly replacing the NN model with a QNN may not be the optimal way to achieve quantum advantages. Seeking a more natural way to carry out QML tasks would be meaningful.
Taking one step further, a variety of quantum algorithms are quantum-classical hybrids: a question is solved by classical pre-processing, quantum computation, and classical post-processing. Usual algorithms simply replace one step of the classical computation with quantum computation; designing the pre-processing so that the problem fits quantum computation would be preferable.
§ ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (Grant No. 12034018), and Innovation Program for Quantum Science and Technology No. 2021ZD0302300.
§ DATA AVAILABILITY
All the data that support the findings of this study are available within this article.
|
http://arxiv.org/abs/2307.04104v1 | 20230709055200 | lcs4Foam -- An OpenFOAM Function Object to Compute Lagrangian Coherent Structures | [
"Constantin Habes",
"Alexandra von Kameke",
"Mohammed Elwardi Fadeli",
"Holger Marschall"
] | physics.flu-dyn | [
"physics.flu-dyn",
"physics.app-ph",
"76-04, 76-10",
"I.6.3; I.6.6; J.2"
] |
lcs4Foam – An OpenFOAM Function Object to Compute Lagrangian Coherent Structures
^1Mathematical Modeling and Analysis, Technical University of Darmstadt, 64287 Darmstadt, Germany
[email protected], [email protected], [email protected]
^2Department of Mechanical Engineering and Production Management, Hamburg University of Applied Sciences, 20099 Hamburg, Germany
[email protected]
To facilitate the understanding and to quantitatively assess material transport in fluids, a modern characterisation method rooted in dynamical systems theory has emerged in fluid dynamics over the last decades. It allows one to examine the most influential material lines, called Lagrangian Coherent Structures (LCS), which organise the material transport into dynamically distinct regions at large scales that resist diffusion or mixing. LCS reveal the robust skeleton of material surfaces and are essential for assessing material transport in time-dependent flows quantitatively. Candidates for LCS can be estimated and visualised from finite-time stretching and folding fields by calculating the Finite-Time Lyapunov Exponents (FTLE).
In this contribution, we provide an OpenFOAM function object to compute FTLE during CFD simulation. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver.
C. Habes^1, A. von Kameke^2, M. E. Fadeli^1, H. Marschall^1
August 12, 2023
===================
§ INTRODUCTION
Material transport and mixing in fluids are enhanced by advection. This advection is usually described mathematically in an Eulerian view by a time-dependent velocity field 𝐮(𝐱, t). With this Eulerian description, numerous important fluid mechanical characteristics can be derived and assessed. For instance, a higher Reynolds number (higher velocities) will typically go along with better overall mixing. However, such intuition might be misleading, as has been shown for example in <cit.> for a rising bubble. There, a coherent structure was found to arise at intermediate Reynolds numbers, which causes material to move together and locally hinders mixing and increases residence times in the vicinity of the bubble rear. The example shows that a closer look at the coherent structures is necessary to evaluate the details of the material transport in a specific flow situation. Lagrangian Coherent Structures (LCS) are often observable in fluid flows due to the shape that passive tracers take on, e.g. plankton in the ocean <cit.> or dissolved oxygen in the wake behind a rising bubble.
That the classical Eulerian view on advection is not optimal for addressing these issues was first noted in oceanography and atmospheric science <cit.>. The transport analysis was therefore started from its roots, the Lagrangian view, where the observer travels on the fluid parcels rather than watching them move by (Eulerian frame). The Lagrangian analysis thus considers the trajectories of individual fluid parcels and allows to draw conclusions on the transport from their evaluation. Nowadays, computational and theoretical advances allow for the calculation and analysis of the time-dependent dynamical system that governs material transport.
The underlying ideas for Lagrangian analysis stem from dynamical systems theory. In time-independent incompressible velocity fields, the dynamical system is the velocity field itself and the streamlines of the velocity field coincide with the trajectories of the fluid parcels. As such, trivially, structures in the velocity fields represent governing structures for the material that is transported (as long as molecular diffusion is comparably low and negligible) <cit.>. In this setting, unstable and stable manifolds divide the flow into different subdomains that move coherently (together) <cit.>.
For time-dependent flows however, the instantaneous streamlines and the trajectories of the fluid parcels do not coincide. It is thus a misleading habit to draw any conclusion about the material transport from the streamlines or any other material lines of the mean velocity field of a fluid flow. The resulting transport structures might have no relevance for the real dynamical system at all.
To obtain the lines that govern material transport in time-dependent flows the Lagrangian Coherent Structures are calculated from the trajectories of particles evaluated in the time-dependent velocity field 𝐮 = 𝐮(𝐱, t). LCS are those material lines and surfaces that separate regions of particles with very different fates or history for the time interval under consideration. Several different approaches to evaluate LCS have been developed during the last years <cit.>.
With this contribution we introduce an OpenFOAM function object that calculates the three dimensional Finite Time Lyapunov Exponents (FTLE) on-the-fly based on the general purpose numerical library libcfd2lcs <cit.> with the main computational details explained in <cit.>. The ridges in the FTLE-field are then candidates for LCS and can be assumed to coincide with LCS if some further conditions are met <cit.>. However, as also pointed out in <cit.>, these additional conditions are hard to evaluate in 3D and thus the FTLE-field will be viewed as an approximate representation of the 3D LCS. The details about the calculation of the FTLE-field and the underlying mathematical foundation are set out in Section <ref>.
§ THEORETICAL BACKGROUND OF LCS CALCULATIONS
From time-resolved CFD simulations, the time-dependent velocity field 𝐮(𝐱, t) is known in space and time. From this information the fluid parcel or passive particle trajectories
𝐱(𝐱_0, t) = 𝐱_0 + ∫_t_0^t𝐮(𝐱(τ), τ) d τ
can be calculated, where 𝐱_0 is the starting point of a trajectory in 3D space at a starting time t_0. Note, that each trajectory is now labelled by its start location in space and time. If a set of initially close passive particles is released at the same time the distances between them change over time due to the fluid motion. Passive particles initially forming a tiny sphere will undergo a linear deformation towards an ellipse for short times as would occur in a solid body under stress before it breaks. Certainly, in a fluid, the deformation will progress, and non-linear higher-order terms will play a role in causing stretching and folding which is crucial for mixing. However, as a first approximation and for short times these higher-order terms are neglected for the analysis of the deformation. If we consider infinitesimal spheres of initially close particles around all mesh cell centres of our simulation starting at the same initial time t_0, we obtain a set of different ellipsoids. All these ellipsoids have differently stretched and contracted principal axes which point in different directions at a slightly later time t_1. The principal axes of each ellipse denominate the final directions of maximal stretching (major axis) and maximal contraction (minor axis) of the initially spherical particle blob. The stretching factor S is the length of the major axis of the final ellipse divided by the initial radius of the sphere. If this stretching factor at each initial grid point is plotted, a 3D stretching field results revealing the regions at which stretching and thus particle separation for the time interval of interest [t_0,t_1] is largest due to the local flow conditions. Normally, the scaled logarithm of this stretching factor, defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log (S) ,
is plotted. This scaled logarithmic stretching factor is called the Finite-Time Lyapunov Exponent <cit.>. Connected areas or lines of large FTLE values characterise the fluid transport as these denote the areas or lines along which deformation and thus particle separation is largest. All these geometrical considerations have their mathematical counterparts. The stretching factor as described is the square root of the maximal eigenvalue of the right Cauchy-Green deformation tensor 𝐂_t_0^t_1. This tensor can be calculated for every mesh cell as envisioned above for the ellipsoid. As its name reveals it includes all the information about the deformation of the fluid masses at this point for the short time interval t_1 -t_0, and notably it is an objective tensor such that high stretching values and candidates for LCS derived from it will persist regardless of the motion of the observer (invariant to a time-dependent translation and rotation of the coordinate system of the observer) <cit.>.
The governing ordinary differential equation (ODE) for the evolution of a fluid parcel or a passive particle reads
𝐱̇=𝐮(𝐱(t), t) .
Therefore, the infinitesimal separation γ = 𝐱-𝐱^* of the passive particle, imagined in the centre of a infinitesimal sphere, to a particle on the surface of the sphere will be governed by the ODE
δ𝐱̇ = ∇𝐮γ .
The solution of this ODE is an exponential function, which explains why the FTLE is defined as the logarithm of the stretching factor.
To analyse the stretching during short but finite time intervals, particles distributed on a mesh are advected with the flow from an initial time t_0 over the time interval T=|t_1-t_0| to t_1. From the integral version of the governing ODE (Eq. <ref>) we obtain the definition of the flow map, Φ_t_0^t_1, which maps all the particles from their initial positions onto their final positions at time t_1, viz.
Φ_t_0^t_1: ℝ^n→ℝ^n ; 𝐱_0↦𝐱_0 + ∫_t_0^t_1𝐮(𝐱(τ), τ) d τ .
To obtain the separation of two initially close particles after this time interval a Taylor series
δ𝐱(t_1) = Φ_t_0^t_1(𝐱_0 + δ𝐱(t_0))-Φ_t_0^t_1(𝐱_0) = 𝐃Φ_t_0^t_1(𝐱_0, t_0) δ𝐱(t_0) + 𝒪(| δ𝐱(t_0)|^2)
around the initial position can be employed, where 𝐃Φ_t_0^t_1(𝐱_0, t_0) is the gradient (Jacobian) of the flow map with respect to the initial separation and is also the normalised fundamental matrix solution of the equation of variations above (Eq. <ref>) <cit.>. Therefore, the magnitude of the particle separation at time t_1 can be written as
|δ𝐱(t_1)|=√(⟨δ𝐱(t_0),[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] δ𝐱(t_0)⟩).
The right Cauchy-Green deformation tensor is then defined as
𝐂_t_0^t_1(𝐱_0, t_0)=[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] .
In this way the Finite-Time Lyapunov Exponent σ_t_0^t_1 for the time interval t_0 to t_1 can now be defined on the basis of this tensor in a more thorough, mathematical way. Therefore, it is now defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log√(λ_max(𝐂_t_0^t_1(𝐱_0, t_0))) .
Here λ_max is the maximum eigenvalue of the right Cauchy-Green deformation tensor and can be calculated using standard solvers. In the picture of the small ellipsoid, the square root of the eigenvalue is just the above stretching rate S.
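To make the preceding definitions concrete, the following NumPy sketch computes a forward-time FTLE field for a two-dimensional example, the time-dependent double gyre that also appears in the examples at the end of this paper (with the parameter values used there), by integrating the particle grid with a fourth-order Runge-Kutta scheme, differentiating the flow map with central differences and taking the largest eigenvalue of the 2×2 Cauchy-Green tensor. It is a plain reference implementation for illustration, not the algorithm used by libcfd2lcs; grid resolution and time step are chosen ad hoc.

import numpy as np

def velocity(x, y, t, A=0.1, eps=0.1, om=2 * np.pi / 10):
    a = eps * np.sin(om * t)
    b = 1.0 - 2.0 * eps * np.sin(om * t)
    f = a * x ** 2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def rk4_step(x, y, t, dt):
    k1u, k1v = velocity(x, y, t)
    k2u, k2v = velocity(x + 0.5 * dt * k1u, y + 0.5 * dt * k1v, t + 0.5 * dt)
    k3u, k3v = velocity(x + 0.5 * dt * k2u, y + 0.5 * dt * k2v, t + 0.5 * dt)
    k4u, k4v = velocity(x + dt * k3u, y + dt * k3v, t + dt)
    return (x + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u),
            y + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# particle grid on the domain [0,2]x[0,1]
nx, ny = 201, 101
X0, Y0 = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny), indexing='ij')
x, y = X0.copy(), Y0.copy()
t0, T, dt = 0.0, 15.0, 0.05
t = t0
for _ in range(round(T / dt)):        # flow map: advect all particles from t0 to t0+T
    x, y = rk4_step(x, y, t, dt)
    t += dt

# flow-map gradient D Phi by central differences with respect to the initial grid
dX, dY = X0[1, 0] - X0[0, 0], Y0[0, 1] - Y0[0, 0]
dxdX, dxdY = np.gradient(x, dX, dY)
dydX, dydY = np.gradient(y, dX, dY)

# right Cauchy-Green tensor C = (D Phi)^T (D Phi) and its largest eigenvalue (2x2, symmetric)
C11 = dxdX ** 2 + dydX ** 2
C12 = dxdX * dxdY + dydX * dydY
C22 = dxdY ** 2 + dydY ** 2
lam_max = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22) ** 2 + C12 ** 2)
ftle = np.log(np.sqrt(lam_max)) / abs(T)   # sigma over the interval [t0, t0+T]

Plotting ftle over X0, Y0 reveals the characteristic FTLE ridge of the double gyre; a backward-time field can be obtained in the same way by integrating with a negative time step starting at t_0+T.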
§ COMPUTATIONAL DETAILS
The computation of flow maps within libcfd2lcs is described thoroughly in <cit.>. The following section presents a brief overview of how the computation is done in practice and which different timescales play a role in the calculations. Hereafter, we describe the structure and functionality of the newly developed function object. We will focus on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
§.§ Numerical flow map computation in libcfd2lcs
libcfd2lcs is able to calculate both forward-time and backward-time FTLE fields. However, it uses two very different approaches for calculating the respective flow maps. The general approach used for the computation of the forward time flow-map Φ_t_0^t_0+T and the resulting forward-time FTLE field is very straightforward. A set of tracer particles is initialised on a grid with spacing Δ x_lcs by setting each initial tracer coordinate to the cell centre coordinate of a corresponding mesh cell. Then the flow map at each cell centre is computed by passively advecting these tracers with the flow, which mathematically corresponds to an integration of equation
d 𝐱/d t=𝐮(𝐱, t)
over the time interval T. Numerically this integration is done by utilising Runge-Kutta methods, with step size Δ t_lcs.
The time and space dependent velocity field 𝐮(𝐱, t) results from the specific fluid simulation under consideration and is passed to libcfd2lcs after each simulation time step Δ t_sim (see Section <ref>). In order to save the flow map Φ_t_0^t_0+T, the location of each particle after the integration is stored at its initial position.
As the evaluation of FTLE fields, indicating LCS candidates, is mainly relevant for time-dependent flows, it is often important to animate their evolution. At first glance, this would mean that a sequence of large particle sets would have to be integrated, requiring a great amount of computation. This problem is solved using a method developed by Brunton and Rowley <cit.>. With this method a flow map of the interval T can be constructed from a sequence of k flow maps over a smaller interval h, where T=kh. Following the notation of <cit.> this can be expressed as
Φ_t_0^t_0+T=Φ_t_0+(k-1)h^t_0+kh∘⋯∘Φ_t_0+h^t_0+2h∘Φ_t_0^t_0+h .
In practical terms, this means that the particle grid is reinitialised for every new time interval h after which they are advected again with the flow. Then the sub-step flow map is stored and the complete flow map is constructed when all needed sub-step flow maps are available. It is important to note that since a discrete particle grid is used for the sub-step flow map computation, interpolation of the sub-step flow maps is needed in order to match the trajectories at different timelevels when reconstructing the flow map Φ_t_0^t_0+T (see <cit.> for more details).
A different approach is used for constructing the backward-time flow maps. This is due to the fact that using the Lagrangian approach would require storing all computed velocity fields in the sub-step interval h before the integration of the tracers from t_0+h to t_0 could be done backward in time. Although this already includes Brunton's and Rowley's method for the flow map construction, the Lagrangian approach would be "cumbersome and resource intensive" <cit.>. Therefore, libcfd2lcs uses an Eulerian approach for the flow map computation proposed by Leung <cit.>. In contrast to the forward-time flow map, the backward-time flow map Φ_t_0+T^t_0 describes for each grid point where a particle, that is at that point at time t_0+T, originally was at time t_0. With Leung's Eulerian approach this backward-time flow map at time t_0+T is computed by initialising a vector field Ψ(𝐱, t_0) on a grid with the cell centre coordinates at time t_0. The advection of this so called "takeoff coordinate field" in an Eulerian reference frame is then described by the level set equation
∂Ψ(𝐱, t)/∂ t+(𝐮·∇) Ψ(𝐱, t)=0 .
Solving this equation over the time interval [t_0, t_0+T] in forward time gives Ψ(𝐱, t_0+T), which represents the takeoff coordinates of a Lagrangian particle at t_0 reaching 𝐱 at time t_0+T. Thus, the backward-time flow map Φ_t_0+T^t_0 is equivalent to Ψ(𝐱, t_0+T). libcfd2lcs solves equation (<ref>) by using a semi-Lagrangian advection approach with the time step size
Δ t_lcs = c_cfl Δ x_lcs/𝐮(𝐱, t)
of this procedure being restricted by the CFL condition c_cfl < 1 (see <cit.> and <cit.> for more details).
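A first-order semi-Lagrangian update of the takeoff coordinate field, which is the core of this Eulerian approach, can be sketched as follows (Python/SciPy; the actual scheme and interpolation order in libcfd2lcs are more elaborate, so this only illustrates the principle).

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def semi_lagrangian_step(psi, u, v, xg, yg, dt):
    """Advance the takeoff-coordinate field psi (shape (nx, ny, 2)) by dt: since psi is
    constant along particle paths, its new value at a grid node is the old value at the
    departure point of the trajectory arriving there."""
    X, Y = np.meshgrid(xg, yg, indexing='ij')
    xd, yd = X - dt * u, Y - dt * v              # first-order backward trace
    interp = RegularGridInterpolator((xg, yg), psi, bounds_error=False, fill_value=None)
    pts = np.stack([xd.ravel(), yd.ravel()], axis=-1)
    return interp(pts).reshape(psi.shape)

# initialisation at t0: psi stores the grid node coordinates themselves, e.g.
# psi = np.stack(np.meshgrid(xg, yg, indexing='ij'), axis=-1)
# repeated application over [t0, t0+T] yields Psi(x, t0+T), i.e. the backward-time flow map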
Furthermore, Brunton's and Rowley's flow map construction method is also applied to the backward-time flow maps computed with the Eulerian method. Hence, the takeoff coordinate field is reinitialised after every sub-step time interval h and the backward-time flow map
Φ_t_0+T^t_0=Φ_t_0+h^t_0∘Φ_t_0+2h^t_0+h∘⋯∘Φ_t_0+kh^t_0+(k-1)h
is constructed form k sub-step backward-time flow maps.
Since a lot of different timescales are relevant in the practical FTLE field computation described above, we try to differentiate and order them in the following, before describing the structure and functionality of the newly developed function object in the next section. The basis of the on-the-fly LCS evaluation is a parallel running simulation that provides the velocity fields. Here, three intervals are of interest (see Fig. <ref>): the overall simulation time that spans from the simulation start time t_sim_start to the simulation end time t_sim_end, the time step size of the simulation Δ t_sim and the write time interval of the simulation results Δ t_sim_write. The computed velocity fields represent a fluid flow for which a reference timescale Δ t_ref can be identified. This reference timescale characterises the dominant hydrodynamic timescale of the flow and is typically larger than the simulation time step size. In order to save computing resources the LCS evaluation of the simulated flow does not necessarily have to start and end at the same time as the simulation. Therefore, a separate start and end time for the LCS evaluation denoted as t_lcs_start and t_lcs_end can be defined (see Fig. <ref>). During the LCS evaluation, a series of FTLE fields are computed. These FTLE fields are calculated from time T flow maps, which themselves are calculated as described earlier in this section. This means storing and constructing the time T flow maps from multiple sub-step flow maps after each LCS sub-step integration interval h. Calculating the sub-step flow maps in turn requires to numerically solve the equations (<ref>) or (<ref>) using the finite time step Δ t_lcs. While Δ t_lcs is set automatically according to equation (<ref>) and a specified CFL number, T and h have to be defined by the user. In order to detect all LCS candidates, T is usually chosen to be larger than Δ t_ref of the investigated flow <cit.>. With the aim of animating the evolution of the FTLE field, h is typically set significantly smaller than Δ t_ref while being in the order of magnitude of Δ t_sim_write.
§.§ Structure and functionality of the function object
In general, function objects can be used to generate additional data at runtime of the simulation. In doing so, function objects can access data generated by the flow solver at runtime, which offers a great advantage over classical post-processing since it can only utilise the stored fields or logged information. The newly developed function object incorporates the functionalities of libcfd2lcs into OpenFOAM at runtime while acting as an interface between both. This is achieved by processing the data generated by OpenFOAM and the subsequent exchange of this data via the libcfd2lcs API (see <cit.> for a detailed description of the libcfd2lcs API).
The calculation of the flow maps, the calculation of the resulting FTLE fields and the subsequent saving of these fields are completely handled by libcfd2lcs. The basic task of the function object is to pass the cell centre position vectors of the computational grid as well as the velocity field calculated by OpenFOAM to libcfd2lcs. Due to the very strict data structure requirements of libcfd2lcs, this is not a trivial task: libcfd2lcs can only use static rectilinear grids for the calculation of forward-time and backward-time flow maps and therefore needs the velocity fields on these grids. This means that the mesh and velocity data have to be globally organised in an (i, j, k) structured format <cit.>. Since the LCS evaluation should also be available for simulations on moving grids with general topology and adaptive grid refinement, the function object offers several possibilities to deal with this problem.
In the simplest case, where the simulation mesh is already a static rectilinear mesh, the function object does not need to process the grid and velocity data, but can directly transfer it to libcfd2lcs as basic C++ arrays. This is the preferred method when the flow domain can be represented by a static rectilinear mesh and e.g. immersed boundary methods are used. If a moving mesh, a mesh of general topology or adaptive mesh refinement is used for the simulation, a different approach is needed in order to prepare the data for its use in libcfd2lcs. Here, an additional static rectilinear mesh needs to be constructed in the preprocessing step, which can be done e.g. by using the utility. This mesh has to contain the region for which the LCS diagnostic should be performed, meaning that it can cover the whole simulation domain or only a part of it. However, since libcfd2lcs also requires boundary conditions for the FTLE field calculations, the boundary patches of the additional LCS mesh must be set accordingly. The user can choose between , , , and the generic patch types, which the function object translates into the corresponding libcfd2lcs boundary types. Then, during runtime, the velocity fields are mapped from the simulation mesh of general topology to the static rectilinear LCS mesh, from which the data can again be transferred to libcfd2lcs as basic C++ arrays. Although this implies that interpolation errors are made during the mapping process, the LCS evaluation is hardly affected by this: Haller showed in <cit.> that LCS are very robust against errors in the velocity field. Also, the additional computational overhead due to the mapping can be neglected compared to the overhead caused by the flow map computations. The function object also implements a third approach in which no additional LCS mesh is needed. This approach utilises the ability to construct complex, moving mesh geometries out of simple unconnected mesh regions in OpenFOAM with the approach. Using this approach, the function object can utilise any specified static rectilinear mesh region of the for the LCS evaluation, meaning that the background mesh as well as any other static rectilinear mesh region can be used. In doing so, the function object extracts the mesh and velocity data from the specified mesh region of the and passes it to libcfd2lcs analogously to the previous approaches. Here the type patches are generally passed on as inlet or outlet, as they are treated the same by libcfd2lcs.
As libcfd2lcs also uses the domain decomposition approach and MPI for the parallelisation of the computations, the integration within the parallelisation of OpenFOAM is done in a straightforward manner. The local subdomains of the rectilinear LCS mesh and its velocity data are passed to libcfd2lcs together with an offset, which describes the position of the cell data in the globally (i, j, k) structured data array (see Fig. <ref>). For the MPI communication, the same MPI communicator as used for OpenFOAM is shared with libcfd2lcs. Therefore, the function object can be used for simulations running in parallel or in serial. However, if the approach involving an additional LCS mesh is used, special attention is required for the domain decomposition in the preprocessing step. Here the simulation mesh, as well as the LCS mesh, must be cut along the same surfaces to make sure that the mapping of the velocity fields from one mesh to the other works properly.
As already mentioned, the output of the flow map and FTLE field data is completely handled by libcfd2lcs. This is due to the fact that the data output interval defined by h can differ from the solver write interval Δ t_sim_write (see section <ref>). Therefore, the results generated by the function object are not stored in corresponding time directories but in a separate folder in the case directory called . Additionally, a directory named is created inside of which all the sub-step data is stored. All data is stored in the Tecplot ASCII data file format (*.dat) and can therefore be visualised in ParaView, when opened with its internal Tecplot reader, or in other common visualisation programs. In addition to this data, the computational overhead generated by the use of the function object with respect to the actual simulation is also written to the solver log file after each simulation time step. This enables the user to examine the computational costs of the LCS evaluation.
§ EXAMPLES OF USAGE
In this section, a few examples are presented which are designed to show the functionality and capabilities of the function object. Example cases are presented in which only a rectilinear simulation mesh, a separate simulation and LCS mesh, and a single are used.
§.§ Steady ABC flow
The Arnold-Beltrami-Childress (ABC) flow is an exact periodic solution of the Euler equations and is often used in the literature to verify LCS calculation methods. Therefore, this case is also considered here. The velocity field
𝐮=∇×[-Ψ𝐤+∇×(Φ𝐤)]
of the ABC flow can be described using 2 scalar potentials Ψ and Φ <cit.> which themselves are defined as
Ψ=-[C sin (y)+B cos (x)]
Φ=A[-x cos (z)+y sin (z)]-Ψ .
In (<ref>), 𝐤 can be any unit vector but is commonly chosen to be the vertical unit vector. This leads to the three expressions of the velocity components
u=A sin (z)+C cos (y)
v=B sin (x)+A cos (z)
w=C sin (y)+B cos (x) .
The parameters A, B and C can be freely selected and influence the properties of the ABC flow. In order to create comparability with literature values, A=0.5, B=0.8, C=0.8 is chosen. To test the newly developed function object on this flow configuration, a dedicated ABC flow OpenFOAM solver was written. This solver does not solve the Euler equations in the usual sense, but sets the velocity components on a given computational mesh according to (<ref>). Due to the periodicity of the flow solution, the dimensions of the computational mesh used in this case setup are specified as x,y,z ∈ [0,2π] with a mesh size of 100×100×100.
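For illustration, the analytic ABC velocity field given above can be evaluated directly on a rectilinear grid; the following short Python sketch (using numpy, with the parameter values quoted above, and independent of the dedicated OpenFOAM solver) is a minimal example of such an evaluation:

import numpy as np

A, B, C = 0.5, 0.8, 0.8                      # ABC parameters used in this example
x = np.linspace(0.0, 2.0 * np.pi, 100)       # periodic domain [0, 2*pi] in each direction
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
u = A * np.sin(Z) + C * np.cos(Y)            # velocity components of the steady ABC flow
v = B * np.sin(X) + A * np.cos(Z)
w = C * np.sin(Y) + B * np.cos(X)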
Since the described mesh is rectilinear, no additional LCS mesh is used. Again, for reasons of comparability, an LCS integration time of T=10 s is selected for the LCS evaluation. The results of the LCS evaluation, both in forward- and backward-time, can be seen in Figure <ref>.
In these results the FTLE ridges, which indicate the LCS candidates in the ABC flow, can be seen very clearly. Furthermore, the results agree very well with the results from <cit.>, both qualitatively and quantitatively, which suggests that the new function object calculates the FTLE ridges reliably.
§.§ Time dependent double gyre
Another frequently used flow for the verification of LCS computing algorithms is the time-periodic Rayleigh-Bénard convection flow, often called the double gyre, proposed by Solomon and Gollub <cit.>. The velocity field of this flow can be described by using a stream function ψ
u=-∂ψ/∂ y
v=∂ψ/∂ x .
Here ψ is defined by
ψ(x, y, t)=A sin (π f(x, t)) sin (π y)
with
f(x, t)=a(t) x^2+b(t) x
a(t)=ϵsin (ω t)
b(t)=1-2 ϵsin (ω t)
This leads to the expressions for two-dimensional velocity components
u=-π A sin (π f(x)) cos (π y)
v=π A cos (π f(x)) sin (π y) d f/ d x .
As the name double gyre suggests, this model defines the flow of two two-dimensional gyres enclosed in a rectangle, which expand and contract periodically along the x-axis. The periodic motion is controlled by ϵ: for ϵ≠ 0, ϵ describes approximately how far the line separating the gyres moves to the left or right from its centre position <cit.>, while for ϵ=0 no periodic motion occurs. Furthermore, A specifies the magnitude of the velocity vectors and ω/2π determines the oscillation frequency of the gyres.
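As a reference for the equations above, the time-dependent double gyre velocity field can be written down in a few lines of Python (a plain transcription of the model, not part of the OpenFOAM solver; the default parameter values are the ones used below):

import numpy as np

def double_gyre_velocity(x, y, t, A=0.1, eps=0.1, omega=2.0 * np.pi / 10.0):
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v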
Similar to the ABC flow example, a dedicated OpenFOAM solver was written for this case, which sets the velocity field on a given computational mesh according to (<ref>). For comparability, a mesh with the same specifications as in <cit.>, <cit.> and <cit.> was used. It has the dimensions [0,2]×[0,1]×[0,0.1]m and a resolution of 512×256×1 cells. As this mesh is also static and rectilinear, no additional LCS mesh was used. For the mathematical model of the flow, the parameter values are chosen to be ϵ=0.1, A=0.1 m s^-1 and ω=2π/10 s^-1. Since the oscillation frequency is known, the hydrodynamic time scale can be easily determined as t_ref=2π/ω=10 s. As described in section <ref>, the LCS integration time interval T should be set larger than t_ref. Therefore, it is set to T=1.5· t_ref= 15 s. Figure <ref> shows the forward- and backward-time FTLE fields at t= 15 s of the previously described double gyre flow. Again, the results match very well with the results from <cit.>, <cit.> and <cit.>. This confirms that the function object is able to calculate the correct FTLE fields from velocity fields generated by OpenFOAM.
§.§ Flow around cylinder
As the previous examples have already shown that the function object can calculate the correct FTLE fields from velocity fields provided by OpenFOAM, this example focuses on how to deal with non-rectilinear simulation meshes. For this purpose, a standard flow problem is selected that is very well suited for an LCS evaluation: the flow around an infinitely long cylinder.
The general case setup contains a fluid domain of size [-20,30]×[-20,20]×[-0.5,0.5]m that surrounds a cylinder with diameter D=2m and its centre axis at x=y=0m. The free-stream velocity and the fluid's kinematic viscosity are set to 𝐮^ T=(1 0 0)m s^-1 and ν = 0.01m^2s^-1, respectively. This results in a Reynolds number of Re=200, which indicates that vortex shedding behind the cylinder occurs in a still laminar regime. If we also assume a Strouhal number of St=0.2 at Re=200, the hydrodynamic time scale of this flow is t_ref=D/(u·St)=10s.
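For orientation, both characteristic numbers follow directly from the quantities given above (a trivial check in Python):

u, D, nu, St = 1.0, 2.0, 0.01, 0.2
Re = u * D / nu         # = 200
t_ref = D / (u * St)    # = 10 s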
Because of the cylinder in the middle of the domain, a computational mesh discretising this domain is no longer rectilinear. Therefore, we consider two different procedures for the LCS evaluation, the first of which is carried out in two different ways.
Starting with the procedure where an additional rectilinear computational mesh is used for the LCS evaluation, the flow domain is discretised with a simulation mesh consisting of 9200 hexahedra (see upper left mesh in Fig. <ref>).
The flow solver that is used to simulate the previously described flow from t=0s to t=120s is with the initial conditions being calculated by . The first additional LCS mesh that is used within this procedure encloses the whole flow domain (see upper right mesh in Fig. <ref>). In order to minimise the loss of information during the mapping of the velocity fields between the two grids, the resolution of the LCS mesh is chosen in a way that it corresponds approximately to the finest resolution in the simulation mesh. This leads to a LCS mesh with 200×160×1 hexahedra. The boundary patch types are set to for the left and right patch (inlet,outlet), to for the bottom and top patch and to for the front and back patch. The LCS integration time T is again based on t_ref and is set to T=1.5· t_ref=15s. For a good animation of the dynamics of the FTLE fields h is chosen to be h=T/10=1.5s. The results of the forward- and backward-time FTLE fields can be seen in Fig. <ref>. They show how the vortices behind the cylinder form large coherent structures, where the FTLE ridges of the backward-time FTLE fields separate different fluid packages that do not mix in the vortex street.
Since the FTLE ridges only appear in a fraction of the overall domain and the LCS evaluation is a computationally quite costly operation, a second LCS mesh is prepared. This second LCS mesh is a lot smaller than the first one and encloses only the fraction of the flow domain where the FTLE ridges are expected to show up (see Fig. <ref>). The boundary patches on the smaller LCS mesh and its spatial resolution are also set analogously to its bigger counterpart, leading to an LCS mesh of size [-13,27]×[-7.5,7.5]×[-0.5,0.5]m containing 160×60×1 hexahedra. Repeating the computations with the smaller LCS mesh gives the results displayed in Fig. <ref>, which are found to match the results from the bigger LCS mesh. This shows that the LCS evaluation, when done with a separate LCS mesh, can be used in a very targeted way. The advantages this brings in terms of computational costs are discussed after considering the second procedure for the LCS evaluation of this flow problem.
The second procedure, which can be used on problems where no single static rectilinear mesh can be constructed, utilises OpenFOAM's functionalities. With regard to the flow problem considered here, an is constructed with the same dimensions as the simulation mesh used previously. It consists of three mesh zones, namely a rectilinear background mesh zone that spans the whole fluid domain, another finer and smaller mesh zone that is used for a finer resolution of the flow, and a cylindrical mesh that surrounds the cylinder (see Fig. <ref>). For comparability reasons, the finer rectilinear mesh zone has the same dimensions and resolution as the smaller additional LCS mesh considered previously and is therefore specified as the cell zone for the LCS evaluation. Also, all other LCS evaluation settings are adopted. The only difference from the previously considered simulations is the flow solver used. Here the flow solver is due to the used . The resulting forward- and backward-time FTLE fields of this simulation can be found in Fig. <ref>. They match the results from the previously considered procedure, which shows that both approaches can be used equally well. The only conspicuous feature is the high FTLE values along some boundaries in the studied solutions. These occur because of the way libcfd2lcs handles its inlet and outlet boundary conditions. It fixes out-flowing Lagrangian particles/takeoff coordinates on "open" boundaries and cannot generate new in-flowing particles during the flow map computation. Therefore, high FTLE values occur in the forward-time FTLE fields at "open" boundaries where inflow occurs, since there the most "stretching" happens. Vice versa, high FTLE values occur in the backward-time FTLE fields at "open" boundaries where outflow occurs, since there the most "folding" happens. These high values at "open" boundaries are just artefacts and have to be neglected. The reason they appear more in the approach is that all type patches are passed to libcfd2lcs as "open" boundaries, whereas the user can specify all patches in a problem-dependent way in the additional LCS mesh approach.
Looking at the computation times of the flow calculations including the LCS evaluation, it becomes evident that the LCS evaluation is a very costly operation (see Tab. <ref>). When using the "large" additional LCS mesh, the simulation takes approximately 30 times longer than without the LCS evaluation. This can be improved by using the smaller additional LCS mesh. Here the simulation takes 9 times longer than without the LCS evaluation. Since the costs for the LCS evaluations are almost independent of the underlying simulation for a constant grid size, this factor becomes smaller and smaller for more complex simulations. This can also be seen from the fact that the factor is only 2.5 when the approach is used, because the computations of the pressure and velocity fields take longer on an . At this point, however, it must be emphasised that the flow considered here is not a highly complex problem, which can also be seen from the simulation time of 1.5 min on a normal mesh and 8 min on an .
§ SUMMARY & CONCLUSION
We provide an OpenFOAM function object based on libcfd2lcs to compute Finite-Time Lyapunov Exponent (FTLE) fields that indicate candidates of Lagrangian Coherent Structures (LCS) and allow the visualisation of finite-time stretching and folding fields. LCS reveal the robust skeleton of material surfaces and are key to quantitatively assessing material transport in time-dependent flows. This enables the OpenFOAM community to assess the geometry of material transport in any flow quantitatively and on-the-fly, using, in principle, any OpenFOAM flow solver.
Focusing on the practical aspects, we only give a brief overview of the mathematical foundation as well as how the computation is done in practice. We describe the structure and functionality of the newly developed function object. Further focus is laid on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
During validation of the presented function object with simple benchmark problems, a notable computational overhead was observed. However, if LCS evaluations are used for much more complex problems than the ones considered here, the computational overhead drops significantly and the LCS evaluation no longer accounts for the largest proportion of the computation time. Nevertheless, the user should be aware that the calculation of FTLE fields is expensive and should therefore think carefully about the size and position of the LCS mesh. In addition, consideration should also be given to whether both forward- and backward-time FTLE calculations are required or if one of them is sufficient.
|
http://arxiv.org/abs/2307.04671v1 | 20230710161455 | Ultrafast demagnetization in bulk nickel induced by X-ray photons tuned to Ni $M_{3}$ and $L_3$ absorption edges | [
"Konrad J. Kapcia",
"Victor Tkachenko",
"Flavio Capotondi",
"Alexander Lichtenstein",
"Serguei Molodtsov",
"Przemysław Piekarz",
"Beata Ziaja"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.other",
"physics.atom-ph",
"physics.comp-ph",
"physics.optics"
] |
Author ORCIDs: 0000-0001-8842-1886, 0000-0002-0245-145X, 0000-0003-0172-0731.
Affiliations: Adam Mickiewicz University in Poznań, Uniwersytetu Poznańskiego 2, 61614 Poznań, Poland; Notkestr. 85, 22607 Hamburg, Germany; European XFEL GmbH, Holzkoppel 4, 22869 Schenefeld, Germany; Elettra-Sincrotrone Trieste S.C.p.A, 34149 Trieste, Basovizza, Italy; University of Hamburg, Jungiusstr. 9, 20355 Hamburg, Germany; Institute of Experimental Physics, TU Bergakademie Freiberg, Leipziger Strasse 23, 09599 Freiberg, Germany; Center for Efficient High Temperature Processes and Materials Conversion (ZeHS), TU Bergakademie Freiberg, Winklerstrasse 5, 09599 Freiberg, Germany; Radzikowskiego 152, 31-342 Kraków, Poland.
Studies of light-induced demagnetization started with the experiment performed by Beaurepaire et al. on nickel. Here, we present theoretical predictions for X-ray induced demagnetization of nickel, with X-ray photon energies tuned to its M_3 and L_3 absorption edges. We show that the specific feature in the density of states of the d-band of Ni, a sharp peak located just above the Fermi level, strongly influences the change of the predicted magnetic signal, making it stronger than in the previously studied case of cobalt. We believe that this finding will inspire future experiments investigating magnetic processes in X-ray irradiated nickel.
Ultrafast demagnetization in bulk nickel induced by X-ray photons
tuned to Ni M_3 and L_3 absorption edges
Beata Ziaja
August 12, 2023
===========================================================================================================
§ INTRODUCTION
Ultrafast control of magnetization with lasers remains a hot topic in the laser and solid-state physics communities. Apart from traditional terahertz and optical lasers, the state-of-the-art XUV or X-ray free-electron lasers <cit.> are now also used for demagnetization studies. The main advantage of these lasers is the possibility to resonantly excite core electrons to the magnetically sensitive d-band. As the electronic occupation in the d-band determines the magnetization of the material, the X-ray induced electronic excitation changes the population of spin-up and spin-down electrons in the band. This results in a decrease of the total magnetic moment in the material <cit.>. In our previous studies <cit.>, we modeled the experimentally observed ultrafast decrease of the X-ray scattering signal from X-ray irradiated cobalt, which reflected a transient decrease of the cobalt magnetic moment. The XSPIN simulation tool was developed to follow the progressing demagnetization of cobalt. Our studies have shown that the signal decrease can be explained by ultrafast electron-driven demagnetization.
In this paper, we will apply our model to another widely used magnetic material, nickel. The magnetic moments of nickel and cobalt are 0.66 μ_B and 1.70 μ_B, respectively <cit.>. As nickel's Curie temperature (627 K) differs strongly from that of cobalt (1400 K), such a study can reveal a potential effect of the Curie temperature on the demagnetization dynamics. Laser-triggered demagnetization of nickel has been studied in various papers, see, e.g., <cit.>. Interestingly, so far we have not found any relevant experimental data on Ni demagnetization recorded at XFEL facilities. Therefore, the actual comparison between Co and Ni demagnetization will be performed with theoretical predictions only.
§.§ Simulation scheme
As in our previous works <cit.>, we will use our recently developed XSPIN code to obtain predictions for the 'magnetic signal' from the X-ray irradiated nickel. The electronic density of states is obtained from the density functional theory (DFT) calculations implemented in the Vienna Ab initio Simulation Package (VASP) <cit.>. Average absorbed doses considered in the simulations are chosen not to cause structural changes (atomic dislocations) in the irradiated materials. Therefore, the equilibrium density of states (DOS) can be used throughout the whole simulation (the ”frozen atom” assumption). The occupations of electronic levels change during the material exposure to the X-ray pulse, as due to the photoionization, impact ionization and Auger process, excited electrons leave the band to the continuum. Later, they relax back to the band. As the electrons are heated up by the pulse, they remain hot on femtosecond timescales considered in this study, as their temperature can only decrease through an exchange with the lattice which follows on longer ((sub)ps) timescales. Moreover, due to the assumed common thermalization of all electrons, both spin-up and spin-down ones (following Fermi Dirac distribution with a common temperature and a chemical potential), the numbers of spin-up and spin-down electrons will be different from the corresponding values in the initial state. This thermalization-induced spin flip process, changing the population of spin-up and spin-down electrons, leads to a change of the magnetic signal.
For the simulation, we use a simulation box with N = 512 Ni atoms. We average over 100 000 realizations in the Monte Carlo module. The XFEL pulse is assumed to have a Gaussian temporal profile with a duration of 70 fs FWHM (full width at half maximum) for the M-edge case (M_3 = 66.2 eV) and 50 fs FWHM for the L-edge case (L_3 = 852.7 eV). The pulse durations were chosen such that the XSPIN predictions for nickel can be compared with our previous results for cobalt presented in <cit.>. For more details on the simulation parameters, see Tab. <ref>.
§ RESULTS
§.§ Spin-polarized electronic density of states from density functional theory calculations
In order to obtain the spin-polarized electronic density of states for bulk nickel, we performed first-principles calculations, using the projector augmented wave (PAW) potentials <cit.> and the generalized gradient approximation (GGA) in the Perdew, Burke, and Ernzerhof (PBE) parametrization <cit.>, implemented in the VASP code <cit.>.
For the summation over the reciprocal space, we used a 27 × 27 × 27 Monkhorst-Pack k-point grid <cit.>. The spin-polarized density of states for fcc bulk Ni (calculated for the experimental bulk value of the lattice constant, a = 3.524 Å) is presented in Figure <ref>. It is in agreement with other DFT calculations (see also, e.g., <cit.>). For comparison, the density of states for fcc bulk Co (with a = 3.545 Å <cit.>) used in Refs. <cit.> is also presented.
The calculated magnetic moments of nickel and cobalt are 0.62 μ_B and 1.61 μ_B, respectively, i.e., in good agreement with those from <cit.>.
§.§ Electronic properties of X-ray irradiated nickel
Below we present the results on the transient distributions of excited electrons and holes obtained with the XSPIN code for nickel and for cobalt (cf. also <cit.>) irradiated with X rays tuned to their M absorption edges (∼ 67 eV and ∼ 61 eV respectively). Figure <ref> shows: (a) the transient number of polarized high energy electrons (with energies > 15 eV), (b) the number of low energy electrons (with energies < 15 eV), (c) the transient number of deep shell holes (with indicated polarization of electrons previously occupying the holes), and (d) electronic temperature. The photoexcitation dynamics in Co and Ni look qualitatively similar, with a stronger excitation in Co (Figure <ref>a-b) than in Ni. Collisional relaxation in Ni is also weaker than in Co (Figure <ref>c), which leads to the higher electronic temperature in Ni, when compared to Co (Figure <ref>d).
§.§ Generalized transient magnetization
In order to follow changing magnetic properties of irradiated materials, we have introduced in <cit.> a generalized transient magnetization which reflects the disparity between electronic populations in spin-up and spin-down electronic subsystems in the d-band:
M(t) = ∑_ħω_0 - Δ^ħω_0 + Δ[ N^h_↑ (E_i,↑) - N^h_↓ (E_i,↓)],
The probed region in d-band extends between ħω_0 - Δ and ħω_0 + Δ, where ħω_0 = ħω_γ - E_edge. Here, ħω_γ is the incoming photon energy, and E_edge is the energy of the resonant core p-level. The summation goes here over discrete levels. Note that we neglect the subleading effect of the different coupling of polarized light to spin-up and spin-down electrons (XMCD) here.
Electronic populations are calculated assuming a Fermi-Dirac distribution of electrons. Knowing at every time step t the electronic temperature T_e and the electronic chemical potential μ, we have N^h_σ (E) = 1 - N_e,σ^low(E) and N_e,σ^low(E) = { 1 + exp[ (E-μ)/k_B T_e] }^(-1).
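Schematically, the generalized magnetization defined above can be evaluated from the spin-resolved level energies, the transient electronic temperature and the chemical potential as in the following Python sketch; the array names and the way the probed window is selected are illustrative choices and do not reproduce the actual XSPIN implementation:

import numpy as np

def generalized_magnetization(E_up, E_dn, hw0, delta, T_e, mu, kB=8.617333e-5):
    # E_up, E_dn: discrete spin-up / spin-down level energies in eV (e.g. from the DOS)
    # hw0 = photon energy minus core-level energy; delta = half-width of the probed window
    def hole_occupation(E):
        n_e = 1.0 / (1.0 + np.exp((E - mu) / (kB * T_e)))   # Fermi-Dirac occupation
        return 1.0 - n_e                                    # N^h = 1 - N_e
    win_up = (E_up >= hw0 - delta) & (E_up <= hw0 + delta)
    win_dn = (E_dn >= hw0 - delta) & (E_dn <= hw0 + delta)
    return hole_occupation(E_up[win_up]).sum() - hole_occupation(E_dn[win_dn]).sum()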
Time evolution of the squared generalized magnetization M^2(t) (normalized to its initial value before the pulse at t→-∞; cf. also (<ref>)) for different absorbed doses is presented in Figures <ref> and <ref>. The values of Δ in Ni were taken from experimental measurements. They are: Δ = 0.7 eV for the nickel M-edge <cit.> and Δ=1.0 eV for the nickel L-edge <cit.>. One can see that the decrease of magnetization becomes stronger with increasing absorbed dose, and also changes strongly with the incoming photon energy around the absorption edge. Interestingly, if the probed region in the d-band includes the sharp peak in the DOS of spin-down electrons near the Fermi level (X-ray photon energies of 67 eV and 853 eV for the M- and L-edge respectively; see Tab. <ref>), the observed magnetization change is much stronger than in the case when this peak is not included (X-ray photon energies of 68 eV and 854 eV for the M- and L-edge respectively). The reason is that the peak "provides" a large number of unoccupied states for the resonant excitation from the p-level, which leads to a stronger decrease of the transient magnetization. Note that the decrease of magnetization is stronger for Ni than for Co (cf. Figure 4 from Ref. <cit.> and Figure 3 from Ref. <cit.> at the absorbed dose of 0.93 eV/atom). The reason is that the cobalt DOS does not show such a peak close to the Fermi level, and the reduction of magnetization is, therefore, suppressed. This can also explain the lower Curie temperature of nickel compared to cobalt.
§.§ Calculation of the mSAXS signal
Similarly as in <cit.>, we can calculate the mSAXS signal strength from the generalized magnetization. It is obtained as:
S = a ∫ M^2(t) I(t) dt,
where I(t) is the X-ray pulse intensity and a is a proportionality coefficient. The pulse fluence is then F = ∫ I(t) dt. It is proportional to the absorbed dose, D ∝ F, where the proportionality coefficient depends on the material parameters as well as on the photon energy. The dose dependence of the normalized signal strength, S_norm = S(D)[D_0/S(D_0)], for the corresponding experimental Δ values is presented in Figure <ref>. The normalization follows Ref. <cit.>, with the reference dose D_0=10^-4 eV/atom for all considered cases.
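The signal strength is thus a simple weighted time integral; a possible numerical evaluation (purely illustrative, assuming a Gaussian pulse normalized to unit fluence and the normalized M^2(t) given on the same time grid) reads:

import numpy as np

def msaxs_signal(t, M2, fwhm=70.0, t0=0.0, a=1.0):
    # t in fs, M2 = M^2(t) normalized to its value before the pulse
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    I = np.exp(-0.5 * ((t - t0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return a * np.trapz(M2 * I, t)   # S = a * integral of M^2(t) I(t) dt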
Similarly to what is observed for the generalized magnetization, the signal strength strongly depends on whether the probed region in the d-band includes the sharp peak in the DOS of spin-down electrons near the Fermi level, being distinctly higher when the peak is not included. This also explains the stronger decrease of S_norm for nickel than for cobalt.
§ CONCLUSIONS
We provided theory predictions for electronic properties of X-ray irradiated Ni at photon energies close to M_3 or L_3 absorption edge, as well as for the resulting magnetization change and the mSAXS scattering strength. The results obtained indicate the same ultrafast demagnetization mechanism (caused by electronic excitation and relaxation) as in cobalt, occurring at a similar timescale. However, due to the difference in the DOS structure of the d-band, the degree of demagnetization for the equivalent dose would be higher in Ni than in Co. This finding is also consistent with the lower Curie temperature for nickel than for cobalt.
As in our previous studies on Co, we did not consider here atomic motion, and kept electronic band structure unchanged. This assumption does not hold for the case of high absorbed doses which may induce ultrafast structural changes in irradiated materials. The model should then be developed further, enabling inclusion of atomic dynamics and of the transient band structure.
Nevertheless, we expect that these theory predictions will inspire experimental studies on ultrafast X-ray induced demagnetization of nickel, a benchmark magnetic material of various applications.
The authors thank Leonard Müller and Andre Philippi-Kobs for helpful discussions at the early stages of the XSPIN model development.
K.J.K. thanks the Polish National Agency for Academic Exchange for funding in the frame of the Bekker program (PPN/BEK/2020/1/00184).
V.T., A.L., S.M., B.Z. acknowledge the funding received from the Collaboration Grant of the European XFEL and the Institute of Nuclear Physics, Polish Academy of Sciences.
|
http://arxiv.org/abs/2307.04680v2 | 20230710163427 | Unveiling the Graviton Mass Bounds through Analysis of 2023 Pulsar Timing Array Datasets | [
"Sai Wang",
"Zhi-Chao Zhao"
] | astro-ph.HE | [
"astro-ph.HE",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.04736v1 | 20230710175021 | Atmospheric muons at PeV energies in radio neutrino detectors | [
"Lilly Pyras",
"Christian Glaser",
"Steffen Hallmann",
"Anna Nelles"
] | astro-ph.HE | [
"astro-ph.HE",
"hep-ph"
] |
Atmospheric muons at PeV energies in radio neutrino detectors
Lilly Pyras, Christian Glaser, Steffen Hallmann, Anna Nelles
August 12, 2023
=============================================================
§ INTRODUCTION
The existence of a high-energy astrophysical neutrino flux is by now firmly established, e.g. <cit.>. However, at energies beyond tens of PeV the flux predictions for neutrinos produced directly in sources or when ultra-high energy cosmic rays interact with cosmological photon fields vary widely e.g. <cit.>. It is however clear that detectors with larger effective detection volumes than currently exist are necessary to discover EeV neutrinos. Radio neutrino observatories offer a promising approach to this challenge by exploiting the kilometer-scale attenuation length of radio emission in ice and the relatively low cost per detection unit <cit.>. Among these observatories are the Radio Neutrino Observatory Greenland (RNO-G) <cit.>, which is currently under construction, and the planned radio array for the extension of IceCube, IceCube-Gen2 <cit.>. Both of these experiments are discovery-focused, making it essential to have a robust understanding of the signals and backgrounds involved.
The radio signal to be detected is generated by the particle cascade following a neutrino interaction in ice. The build-up of a net negative charge at the shower front leads to the emission of coherent radiation, the Askaryan emission <cit.>. Due to a Cherenkov-like effect, the emission is strongest at the Cherenkov angle (∼56° in ice). The signal amplitude at a given observer distance scales linearly with the shower energy <cit.> and is typically visible above the thermal noise at 5–10 PeV <cit.>. Due to this emission mechanism, any particle cascade induced in the ice with the necessary energy deposit creates a detectable signal, independent of its parent particle. This means that high-energy muons stemming from air showers could also act as a background in neutrino detectors whenever they initiate a shower <cit.>.
Radio detectors do not need to be installed deep in the glacial ice. The antennas are typically located within 200 m below the surface, which makes them sensitive to potential anthropogenic noise[This will not be discussed here as its mitigation is very experiment- and site-dependent.], as well as to air-shower-induced backgrounds.
In general, three different types of air shower backgrounds are distinguished: (1) the in-air radio emission of air showers that is refracted into the ice to the antennas; (2) the core of incompletely developed air showers, which can penetrate into the ice, where it induces a cascade that emits radio signals; and (3) in-ice particle showers following an energy loss of an atmospheric muon. The signatures of (1) and (2) have previously been studied and quantified <cit.>. Both signals can be triangulated to locations close to the surface and therefore provide signatures that can be suppressed on an analysis level. Reflections in the ice may complicate the reconstruction, but this is also true for the neutrino signal itself. For both direct air shower backgrounds, a reasonable estimate of the background rate is possible, because the distribution of shower maxima as a function of energy is relatively well known.
The number of muon-induced background events, however, has been studied less. It has in principle been shown that muons are a non-negligible background to radio neutrino detectors in ice <cit.>. However, the predicted event rate depends on the muon flux, which in turn strongly depends on the hadronic interaction model and the cosmic ray composition, both of which are less well determined, in particular at the highest energies. Furthermore, instrumental parameters, foremost the triggering system, determine the observable rate. We present a comprehensive study of the muon-induced background in this article to guide future searches for neutrinos beyond PeV energies.
§ PREDICTIONS OF MUONS AT PEV ENERGIES AND BEYOND
Atmospheric muons are produced in extensive air showers, which occur when high-energy cosmic rays penetrate the Earth's atmosphere. The cosmic ray nucleon interacts with an air nucleus and produces short-lived intermediate particles, mostly pions (the lightest known meson) and a few heavier particles with shorter life times, such as kaons, D-mesons, etc. Their decay gives rise to an atmospheric lepton flux, including muons. The energy range in which the atmospheric muon flux is visible to radio neutrino detectors is limited by the minimum muon energy that is required to produce an in-ice particle cascade with a visible radio signal (around 10PeV). At these energies, the flux of parent cosmic rays is low, which results in a very small muon flux. Nonetheless, this muon rate is likely comparable to the expected neutrino rate at these energies, making radio neutrino detectors the first experiments where atmospheric muons of EeV energies become relevant. While the much-discussed Muon Puzzle <cit.> describes a discrepancy between predicted and observed muon production in air showers for muons with energies around 10TeV (more muons are measured than predicted by Monte Carlo simulations), the situation for muons above PeV energy is different: These muons are usually produced within the first three interactions of an air shower, rather than continuously throughout the shower development <cit.>. The energy of a parent particle is distributed among its children, which leads to lower energy particles with each further interaction of the cascade. Consequently, one has to concentrate on the highest energy interactions to study the relevant muon background. Unfortunately, these interactions are far outside of the energy regime currently observable at accelerators, which makes far-reaching extrapolations necessary.
§.§ Muon production in air showers
Atmospheric muons are produced in the hadronic cascade of an air shower mainly through the decay of short-lived mesons, namely charged pions and kaons (conventional component) <cit.>. At very high energies, the Lorentz time dilatation increases the decay length of pions and kaons to a multiple of their interaction length (ℓ_int) in air, making it more likely that they will interact and lose energy before they can decay. The contribution of particles with a shorter lifetime τ then becomes dominant, as shown in <ref>. Due to their almost immediate decay, the contribution of short-lived hadrons with cτ≪ℓ_int is called prompt flux and dominates above 10^6 GeV. Charmed hadrons (D^0, D^+, D^+_s, Λ^+_c, Ω^0_c and their antiparticles) have large (∼10%) branching ratios into semi-leptonic modes and a lifetime τ ∼ 10^-12 s, implying a prompt decay with a probability of order 1 up to energies around 10^7 GeV <cit.>.
The rare prompt decays of unflavored mesons (η, η', ρ^0, ω, ϕ) <cit.> and photo-conversion into a muon pair (γ Z →μ^+ μ^- Z) e.g. Bethe-Heitler process, Drell-Yan processes <cit.> and photon conversion into a vector meson (including J/Ψ) decaying into muons make significant additional contributions, which dominate the muon flux above ∼3e8 <cit.>. A sketch of the contributions according to <cit.> is shown in <ref>. The uncertainties are a rough estimate considering experimental limitations and differences between events generators <cit.>.
Taking into account these different sources, the atmospheric muon flux can then be expressed as the sum of five components:
ϕ_μ(E, θ) = ϕ^conv_μ(E, θ) + ϕ^charm_μ(E, θ) + ϕ^unflav_μ(E, θ) + ϕ^γ_μ(E, θ) + ϕ^bottom_μ(E, θ).
The high energy muon flux is mainly driven by the outcome of the first interaction of an air shower. The relativistic hadron-ion collisions under low momentum transfer are in the non-perturbative regime of quantum chromodynamics (QCD) <cit.>, where hadron production cannot be calculated directly from first principles. Instead, effective theories and phenomenology are used; see <cit.> for a recent review. To simulate the hadron production, different hadronic interaction models are present. They are the largest source of uncertainties in air shower simulations, because the center-of-mass energy in the first interactions significantly exceeds the maximum energy studied at the LHC and interactions in the forward direction, i.e. high pseudorapidities are not well covered <cit.>. When extrapolating to higher energies, the model predictions thus diverge even further. A detailed discussion of post-LHC hadronic interaction models follows in <ref>.
Next to the particle physics processes in the air shower, the atmospheric muon flux is determined by the cosmic ray composition. The type of the primary particle entering the atmosphere and its number of nucleons has an influence on the number of muons produced. The muon number grows less-than-linear with the primary energy of an air shower <cit.>. This is a consequence of the energy fraction f given to charged pions in each interaction f∼(2/3)^n, after n generations. For nuclear primaries, a nucleus with atomic number A can be treated as the sum of A separate proton air showers all starting at the same point, each with 1/A of the primary energy <cit.>. The lower energy nucleons which initiate the shower generate fewer interaction generations, and so lose less energy to electromagnetic components <cit.>. Therefore the number of muons is larger for heavy primaries than for showers initiated by light nuclei of the same energy.
For very high-energy muons, which are created within the first interactions, this picture changes: since a proton contains the kinetic energy in one nucleon, it can produce higher energy particles than an iron primary with the same energy. Therefore, a 3e10 proton shower can produce muons up to [print-unity-mantissa=false]e10, while an iron-induced shower with the same energy and arrival direction only produces muons up to 2e8 <cit.>, as shown in <ref>.
§.§ Muon flux simulations
For this article, the atmospheric muon flux is calculated using Matrix Cascade Equations (MCEq) <cit.>, which describe the evolution of particle densities as they propagate through the atmosphere, using the CORSIKA parametrizations <cit.> as the atmospheric model. The 1-dimensional cascade equations neglect the lateral relative to the longitudinal development of the shower, which is important at lower energies, where the transverse momentum of the particles may be relatively important and imply a larger lateral displacement. Since this paper focuses on energies > 1 PeV, this seems an acceptable limitation. Compared to computationally expensive Monte Carlo codes like AIRES <cit.> and CORSIKA <cit.>, MCEq provides a way to estimate the relative importance of a given parameter, for which accurate studies with full shower simulations would require very large statistics.
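For reference, obtaining a surface muon flux with MCEq requires only a few lines of Python; the snippet below follows the publicly documented MCEq interface (the chosen interaction model, primary model and zenith angle are examples and may need to be adapted to the installed version):

import crflux.models as crf
from MCEq.core import MCEqRun

mceq = MCEqRun(interaction_model='SIBYLL23C',
               primary_model=(crf.HillasGaisser2012, 'H3a'),  # example primary model
               theta_deg=60.0)
mceq.solve()
e_grid = mceq.e_grid                                          # muon energy grid in GeV
flux_mu = mceq.get_solution('total_mu+') + mceq.get_solution('total_mu-')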
§.§ Dependence on hadronic interaction models
Several theoretical approximations describing particle production are available for different energy ranges and kinematic regimes. Different approaches have to be combined to model all hadronic interactions in air showers. For this paper, the post-LHC hadronic interaction models EPOS-LHC <cit.>, QGSJet-II.04 <cit.>, and Sibyll-2.3c <cit.> are considered. While EPOS-LHC has a more general focus on minimum-bias proton-proton and heavy ion collisions, the latter two are focused on air shower simulation.
A theoretical prediction of the muon flux above PeV energies should include at least four components (see <ref>), which are, however, not all taken into account in the same way in the hadronic interaction models. While the conventional flux is implemented in all models considered, only Sibyll-2.3c includes charm production (D^+, D^0, D_s, Λ_c) through a parametrization; forward charm production is intrinsically included in the nucleon PDF <cit.>. Sibyll-2.3c also includes muons from unflavored mesons and J/Ψ.
EPOS-LHC does not include charm; its prompt component arises from the decay of unflavored mesons. QGSJet-II.04 only considers η decay as a production channel for prompt muons <cit.>. The calculated muon fluxes therefore start to vary widely at 1 PeV, where the prompt flux dominates, see <ref> left.
EPOS-LHC and QGSJet-II.04 yield the lowest muon flux because they neglect the charm component and in the latter case most unflavored mesons. The photo-production of muon pairs that becomes relevant at PeV energies is not implemented in any model <cit.>. Given that only Sibyll-2.3c includes charm and unflavored mesons, it is currently the most complete model to predict the muon flux above PeV energies. However, it still under-predicts the flux of muons at the highest energies due to missing production channels from photo-conversion and B-mesons. The main theoretical uncertainties arise from charm cross-section calculations. The theoretical calculations are limited by the uncertainties in the scale, charm mass, and the nuclear PDFs <cit.>. A non-perturbative intrinsic charm component may also contribute <cit.>.
This short overview illustrates the difficulty of predictions beyond LHC energies. On the one hand, theoretical uncertainties are present due to the non-availability of measurements, and on the other hand, known processes have not been implemented in all codes, since the priorities have been weighted differently for existing hadronic interaction models. We therefore use the spread between the three hadronic interaction models as indication of the current uncertainties, while keeping in mind that they do not provide the full range of possible systematic uncertainties at this point.
§.§ Dependence on cosmic ray composition
The cosmic ray spectrum covers several decades of energy up to 10^11 GeV, including particles of galactic and extra-galactic origin. Just below the so-called ankle at 8×10^9 GeV, the transition region from galactic to extra-galactic cosmic rays is expected <cit.>, with detailed explanations still varying.
Measurements of the cosmic ray composition above a few 10^5 GeV suffer from the uncertainties in the hadronic interaction models, since the composition has to be inferred from shower parameters such as the position of the shower maximum X_max, which leaves composition models with much room for interpretation. Since the ultra-high energy muon flux directly depends on the cosmic ray composition, different models have been investigated to study the uncertainty stemming from this aspect. We also combine models to study the influence of galactic and extra-galactic components. This is done to show the spread in models, rather than choosing one over the other for correctness.
The well-known Hillas Gaisser models are theoretical simplifications for extreme scenarios: a heavy composition after the ankle (H3a) <cit.> and a proton-rich composition (H4a) <cit.>.
This is contrasted by the Global Spline Fit (GSF) <cit.>, a data-driven parameterization that considers measurements of more than ten experiments and provides uncertainties at each energy. GSF is agnostic to theoretical models explaining the derived composition in terms of sources and propagation.
Thoudam et al. <cit.> published different theory-driven cosmic ray spectra up to EeV energies. In the following, their prediction for cosmic rays stemming from supernova remnants (SNR-CR) and Wolf-Rayet stars (WR-CR) is used as a galactic component, labeled T, and is combined with different extra-galactic components.
The UFA model by <cit.> predicts a strong pure-proton component concentrated only about one order of magnitude in energy below the ankle. For our combination into the T+UFA model the results are optimized for a pure nitrogen galactic composition, which matches the predicted composition for WR-CR <cit.>.
The extra-galactic component of Heinze et al. <cit.> is based on a framework in which an ensemble of generalized ultra high energy cosmic ray accelerators is characterized by an universal spectral index (equal for all injection species), a maximal rigidity, and the normalizations for five nuclear element groups. The source evolution is included as an additional free parameter. This allows for a parameter scan with a best fit result. The composition used in this paper is obtained by a fit to the Auger data from 2019.
The resulting muon flux is shown in <ref> for Sibyll-2.3c as hadronic interaction model.
As discussed in <ref>, the relevant quantity for producing high-energy muons from different primaries is the energy per nucleon. For hydrogen as a primary (with A = 1), the nucleon energy is equal to the primary energy; for heavier elements the energy scales with 1/A, where A is the mass number.
On the left of <ref>, the hydrogen flux for the chosen models is shown, while <ref> right depicts the proton fraction taking into account all nuclei, relative to their nucleon number. For a pure proton flux, the fraction would be 1, given that hydrogen consists only of one proton and one electron. For pure iron (with 26 protons and 30 neutrons), the fraction would be ∼ 0.46. The models start to deviate at 10^7 GeV, close to the transition region from galactic to extra-galactic cosmic rays. Here, the theory-based models (T+UFA, T+H) have a dip in the proton flux. The proton fraction of the GSF flux only decreases in the transition region to a fraction of 0.9 and is significantly higher than for the theoretical models (around 0.5). The GSF model therefore also predicts the highest muon flux, with the exception of the pure proton case at energies above 3×10^17 eV. We will therefore mainly use GSF to estimate the muon numbers going forward to remain conservative, keeping in mind that it is just one realization of the uncertainty stemming from the cosmic ray composition.
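As a concrete illustration of this fraction, a composition given as number fractions of nuclei with charge Z and mass number A can be converted as in the short Python sketch below (the two calls simply reproduce the limiting cases quoted above):

def proton_fraction(composition):
    # composition: list of (number_fraction, Z, A) tuples
    protons = sum(f * Z for f, Z, A in composition)
    nucleons = sum(f * A for f, Z, A in composition)
    return protons / nucleons

print(proton_fraction([(1.0, 1, 1)]))     # pure proton flux -> 1.0
print(proton_fraction([(1.0, 26, 56)]))   # pure iron        -> ~0.46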
§ SIGNATURES OF MUONS IN RADIO INSTRUMENTS
When a muon travels through the ice it initiates showers along its track. At PeV energies and above, the relevant shower production mechanism is catastrophic energy losses. As a rule of thumb, the energy of the parent particle inducing the cosmic ray air shower is roughly one decade higher than the subsequent muon, while the following in-ice particle cascade has a shower energy typically one decade lower than the initiating muon. The in-ice shower energy is the important quantity for the radio emission and hence the one which determines if a muon triggers the in-ice radio detector. The Monte Carlo framework NuRadioMC <cit.> with its extension to simulate secondary interactions <cit.> is used to simulate the muon interaction in-ice, the subsequent Askaryan radio emission, the propagation of the radio signal to the detector, and finally the detector response to the electric field. In order to also track secondary losses of all types of leptons, the lepton propagation code PROPOSAL <cit.> has been included in NuRadioMC and is used for our simulations.
We study the dependence on the instrument details (<ref>), on the muon flux itself (<ref>), as well as possible mitigation strategies (<ref>).
For the purpose of this work, we simulate a detector array of 35 stations, which is similar to the Radio Neutrino Observatory Greenland (RNO-G). Each station comprises a dipole antenna (Vpol) located at a depth of -100 m in the ice (deep component), and three log-periodic dipole antennas (LPDA) pointing straight down located at the surface (shallow component). The stations are arranged in a square grid with a spacing of 1.25 km.
Simulations are performed for several triggers to study the dependence on instrument details. The assumed noise temperature is 300 K, in both the deep and the shallow component. At a depth of -100 m, signal-only triggers with simple thresholds of 1.5σ_noise and 2.5σ_noise in the band of 96–220 MHz are evaluated. At the shallow part, a high-low threshold trigger of 2.5σ_noise with a two-out-of-three coincidence in the band of 80–180 MHz is applied. The deep triggers are a simplification of the phased array trigger that is the current state of the art in radio neutrino detection <cit.>. Simulating a true phased array using a fixed trigger rate would be the best approximation of a real instrument, as done in e.g. <cit.>. To save computing time, we chose to use the simplified trigger of a single dipole. While the 2.5σ case is likely close to the current implementation for RNO-G <cit.>, a 1.5σ trigger is used as a proxy for potential future optimizations. A true phased array implementation will likely affect the absolute event numbers (e.g. <cit.>), but should not affect the relative scaling of different effects.
The shallow trigger represents an optimistic performance of the current RNO-G trigger.
In order to express the detector performance, the effective area is calculated. This is done by simulating muon interactions within an ice volume containing the detector array, with the initial muon position on the planar air-ice interface. Since only the projection of the detector is perpendicular to the direction of the flux, the simulated area has to be corrected by cos(θ). The effective area (A_eff) is the projected surface area multiplied by the trigger efficiency:
A_eff = A_proj·N_trig/N_sim = A_sim·cos(θ) ·N_trig/N_sim.
The expected event rate is obtained by combining the effective area with an incident muon flux, integrated over energy and the solid angle element of the flux, which adds a sin(θ) in spherical coordinates:
Γ_μ(E, θ) = ∫_t_1^t_2∫_E_1^E_2∫_0^2π∫_θ_1^θ_2Φ_μ(E, θ, ϕ) · A_eff(E, θ) ·cos(θ) ·sin(θ) dθ dϕ dE dt.
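In practice, the rate integral is evaluated numerically on the binned simulation output; a schematic version, mirroring the expression for Γ_μ above and using hypothetical arrays for the tabulated flux and effective area, could look as follows:

import numpy as np

def muon_event_rate(E, theta, flux, A_eff, live_time):
    # E [GeV] and theta [rad] are bin centres; flux[i, j] and A_eff[i, j] are tabulated on (E, theta)
    # azimuthal symmetry is assumed, giving a factor 2*pi
    integrand = flux * A_eff * np.cos(theta)[None, :] * np.sin(theta)[None, :]
    rate_per_second = 2.0 * np.pi * np.trapz(np.trapz(integrand, theta, axis=1), E)
    return rate_per_second * live_time   # expected number of muon events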
§.§ Dependence on instrumental details
As shown in <ref>, the shallow antennas detect the fewest muons. This is expected, as the LPDAs also have a comparatively smaller neutrino effective volume, due to their location close to the surface. As a consequence of the ice profile, in which the index of refraction increases with depth, signals propagate less often to the surface, but are instead bent towards the denser ice. The shallow LPDAs detect mostly horizontal muons above 65° zenith angle, because of the geometry constraint set by the Cherenkov cone, while the deep antennas have a broad detection range with a peak around 55° zenith angle. A lower detection threshold increases the number of muons from 0.07 per year and 35 stations to 0.16 per year and 35 stations. The higher muon yield can mostly be attributed to muons in the range of 10^7–10^8 GeV. The uncertainties shown in <ref> are statistical uncertainties only, based on the Feldman-Cousins confidence belts <cit.>, which provide upper limits for null results and two-sided confidence intervals for non-null results, which converge to a Poisson error. At lower energies, only a few geometries allow the antenna to register a signal, hence the statistics are small, and uncertainties increase due to the comparatively high muon flux at low energy. Most events (97%) are only seen in one station, regardless of the trigger configuration.
§.§ Dependence on hadronic interaction models and cosmic ray composition
The differences in the flux predictions due to the hadronic interaction models propagate almost directly into the muon rate of an in-ice radio detector. <ref> left shows the number of muons predicted for three different hadronic interaction models per year and 35 stations for the same cosmic ray composition. As discussed in <ref>, Sibyll-2.3c includes the most production mechanisms, which explains the larger flux. In <ref> right, the expected muon rate for the same hadronic interaction model, but for three different cosmic ray compositions, is shown. The GSF model yields the highest muon rate with a maximum between 10^7 and 10^8 GeV muon energy, which is expected due to the higher proton content.
§ RELATION TO PARENT AIR SHOWER
While the projected number of muons is relatively small, they can still pose a problem if the neutrino rate is comparatively low. One possibility to distinguish between an atmospheric muon and a neutrino is to detect the air shower from which the muon originates. This would identify the muon and provide a veto mechanism on muon events, as also discussed in <cit.>.
§.§ Detectability of the parent air shower
To calculate the veto efficiency, it is essential to have information about the energy and arrival direction of the air shower, as well as the distance to the nearest detector station. As high-energy muons are boosted along the air shower's axis, the cosmic ray arrival direction can be assumed to be the same as the muon arrival direction. The location of the air shower core can be determined by projecting the muon vertex position along the arrival direction until it intersects the boundary between ice and air.
To establish a relationship between a muon and the corresponding cosmic ray energy, Bayes' theorem can be applied. By solving the Matrix Cascade Equations with Sibyll-2.3c as the hadronic interaction model for different types of primary cosmic rays (pr) - namely proton, helium, carbon, and iron - and over a range of cosmic ray energies (10 bins between 10^6 and 10^11 GeV), the muon flux at ground level can be calculated. Once the muon flux for a specific cosmic-ray-induced shower is known, it has to be folded with the actual flux of the primary to obtain the muon flux for all cosmic rays. Here, the number of the different primaries is drawn from the GSF cosmic ray spectrum. The probability that a muon with a certain energy stems from a cosmic ray with a certain energy, p(E_CR|E_μ), is calculated as
p(E_CR|E_μ) = ∑_pr N_μ(E_CR, E_μ, θ, pr) · N_CR(E_CR, θ, pr)/∑_E_CR∑_pr N_μ(E_CR, E_μ, θ, pr) · N_CR(E_CR, θ, pr).
The number of muons N_μ is calculated for each shower; therefore, it has to be summed over all possible primaries, pr. The number of cosmic rays N_CR is calculated from the cosmic ray flux and also needs to be summed over all primaries. This sum is normalized by summing over all possible cosmic ray energies the muon can stem from.
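The weighting in the equation above can be written compactly; in the following Python sketch, N_mu[p, i, j] is a hypothetical table of muon yields for primary type p, cosmic ray energy bin i and muon energy bin j, and N_CR[p, i] the corresponding number of cosmic rays drawn from the assumed spectrum:

import numpy as np

def p_ecr_given_emu(N_mu, N_CR):
    # weighted muon yield per (E_CR, E_mu) bin, summed over the primary types
    weighted = np.einsum('pij,pi->ij', N_mu, N_CR)
    # normalize over all cosmic ray energies for each muon energy bin
    return weighted / weighted.sum(axis=0, keepdims=True)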
The distribution of cosmic ray energies from which muons of a certain energy stem is shown in <ref> for Sibyll-2.3c and GSF. The plot shows that a muon with a given energy can stem from a variety of cosmic ray energies, most likely from a cosmic ray with an energy ∼ 10× higher than the muon energy. Since no cosmic rays have been measured above 10^11 GeV, and thus there is no rate prediction, the highest-energy muon distributions show a different shape. The relation between muon and cosmic ray energy depends on the choice of hadronic interaction model and cosmic ray composition.
To calculate the veto efficiency in an RNO-G-like array, for each muon event an air shower is selected according to the muon arrival direction and placed inside the array as previously described. The resulting radio signal is simulated with CORSIKA <cit.> and the radio extension CoREAS <cit.> and then folded with the detector response using NuRadioReco <cit.>. Since the amplitude of the air shower signal scales linearly with the cosmic ray energy <cit.>, it can now be calculated which air shower energy is necessary to exceed a simple 2.5σ_noise trigger threshold in an upward-pointing shallow LPDA antenna and hence veto the muon event. In the last step, the probability that a muon event stems from an air shower with an energy higher than the trigger threshold energy is calculated and assigned to that muon. Combined with the predicted muon flux, the number of muons that can be vetoed by detecting the parent cosmic ray can be calculated. <ref> shows a veto efficiency close to 100% for muon energies > 10^9 GeV. Muons originating from inclined air showers are more likely to be vetoed, since the radio signal for inclined air showers covers a larger area but becomes fainter at the same time. Therefore, the veto efficiency increases with higher zenith angles only at higher energies.
§.§ Timing of air shower and muon
While the muon and the air shower stem from the same cosmic ray, the signal arrival time at the detector and subsequently at the data acquisition unit (DAQ) differs. The air shower propagates through the atmosphere with a zenith angle θ. The position where the air shower axis intersects with the ice surface is called core position, with t=0. Here, the radio emission from the air shower is assumed to be a plane wave at the shower front, traveling the distance from the axis to the shallow antenna according to its arrival direction θ and the velocity of light in air, see <ref>. The muon travels along the arrival direction of the air shower and continues into the ice until it creates a shower. From there the radio emission propagates through the ice on a bent path to the antennas. Once received by a deep antenna, the signal travels along the cable to the DAQ at the surface, see <ref>.
The time difference as registered in the DAQ of the radio signal stemming directly from the air shower and the subsequent muon is the difference of t_μ→daq and t_cr→daq with the following definitions:
t_cr→daq = d_core→shallow_ant·cos(θ) ·1/c_air + t_cable_delay_shallow
t_μ→daq = d_core→vertex·1/c_vac + t_ice_propagation_deep_ant + t_cable_delay_deep.
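A back-of-the-envelope evaluation of this time difference, with assumed example values for the geometry (distances in metres, times in nanoseconds), illustrates the ordering of the two signals at the DAQ:

import numpy as np

C_VAC = 0.2998              # m/ns
C_AIR = C_VAC / 1.0003      # m/ns, speed of light in air
C_COAX = 2.0 / 3.0 * C_VAC  # m/ns, signal speed in the coaxial cable

def delta_t_daq(d_core_to_shallow, theta_deg, d_core_to_vertex, t_ice_prop,
                cable_deep=100.0, cable_shallow=10.0):
    t_cr = d_core_to_shallow * np.cos(np.radians(theta_deg)) / C_AIR + cable_shallow / C_COAX
    t_mu = d_core_to_vertex / C_VAC + t_ice_prop + cable_deep / C_COAX
    return t_mu - t_cr   # positive: the in-ice muon signal arrives later at the DAQ

print(delta_t_daq(d_core_to_shallow=300.0, theta_deg=60.0,
                  d_core_to_vertex=500.0, t_ice_prop=1500.0))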
The cable delay for a 100 m coaxial cable is ∼500 ns, with c_coax = 2/3 c. The cable from the shallow antennas is typically 10 m, which provides a lower bound of the time difference at ∼450 ns. The full distribution is shown in <ref>. The muon can travel up to 4 km in the ice, which increases the possible travel time up to several microseconds; moreover, the propagation velocity in ice is lower than in air according to the refractive index. Any air shower veto would need to take this travel time into account, by either allowing for read-out with no trigger dead-time (i.e. double-buffering) or sufficiently long record lengths. Self-triggering on the air shower is challenging due to the potentially small signals and the resulting high trigger rate. A longer record length would allow a post-processing search, which simplifies background identification; however, its implementation into a low-power DAQ system is not easily possible.
§ CONSEQUENCES FOR EXPERIMENTS
After having studied dependencies of the muon flux predictions on hadronic interaction models, composition, and instrumental details to set the stage for the uncertainties in the flux predictions, we now discuss the experimental consequences and mitigation strategies.
We will investigate whether neutrino and muon flux predictions can be treated as independent (<ref>), whether neutrinos and muons can be distinguished based on their experimental signature in terms of expected rates, energy or zenith distribution (<ref>), and whether radio detectors can be used to measure the prompt muon flux at 100PeV energies and above (<ref>).
§.§ Possible connection between muon flux and neutrino flux
In <ref>, we established that the muon flux strongly depends on the cosmic ray composition at Earth, specifically the proton fraction, which is in turn related to the cosmic ray composition at the sources. The production of cosmogenic neutrinos is also influenced by the cosmic ray composition, as ultra-high energy cosmic rays interact with the cosmic microwave background and the extra-galactic background light <cit.>. Moreover, the proton component plays a significant role in the generation of neutrinos, since protons produce more neutrinos than heavier nuclei when propagating through the Universe <cit.>. This raises the question whether background and signal can be treated as independent from each other.
In the following analysis, we assume different cosmic ray compositions consistent with the Auger published data from 2019 and evaluate the resulting neutrino and muon events for an in-ice radio neutrino detector. We combine the galactic component by Thoudam (denoted T) with three extra-galactic components by Heinze et al. <cit.>: the best fit (H_best_fit) with a maximal rigidity R = 1.58×10^9, a source evolution parameter m = 4.0, and spectral index γ = -0.7; a fit with a flat source evolution (H_flat_evol: R = 2.81×10^9; m = 0.0; γ = 0.75); and a fit with a high maximal rigidity (H_high_Rmax: R = 4.46×10^9; m = -5.6; γ = 1.6). As the fits are supposed to resemble the measured cosmic ray composition on Earth, we expect the resulting muon flux to be similar, but the large measurement uncertainties still leave room to accommodate different interpretations.
The neutrino flux is calculated using the method described in <cit.>, while the muon flux is calculated using the Matrix Cascade Equations, as detailed in <ref>. The result is shown in <ref>. While the different models alter the number of detected muons by only a factor of two, the number of detected neutrinos varies by about a factor of ten. Since the galactic component stays unchanged, this means that only a small influence of the extra-galactic component is visible in the muon flux. The fluxes differ most strongly at muon energies above 10^7 GeV, which is in agreement with the expected transition region from galactic to extra-galactic cosmic rays.
In other words, most muons at the relevant energies are generated by cosmic rays of 10^8 GeV to 10^9.5 GeV, while cosmogenic neutrinos relevant for radio neutrino detectors stem from cosmic rays above 10^10 GeV. This can also be seen in the fact that the change in muon number is significantly smaller than the change in neutrino number from the same models. This means that the muon background expectation can in general be treated independently from the neutrino production models, keeping in mind, of course, that some model-dependent cases are imaginable where background and signal need to be considered together, in particular when including new physics.
§.§ Observational signatures
We now consider the practical implications for neutrino observations and analyses with a radio neutrino telescope.
The observational signature for in-ice radio neutrino detectors is an electric field whose amplitude is proportional to the shower energy. The signal strength depends on the fractional energy which is deposited in the shower, so the shower energy rather than the muon or neutrino energy is the relevant observational quantity. The shower energy (which requires a reconstructed vertex distance and viewing angle, see e.g. <cit.> for details), together with the arrival direction are likely the only two reconstructed quantities that can be used to distinguish signal from background, unless a veto from air shower tagging or multiple station/pulses coincidences is possible.
The detected arrival directions of muons and neutrinos differ only slightly, as shown in <ref>, because they are dominated by the detector geometry, which is also illustrated by the different shape of the distributions for the shallow and deep component. This, however, prohibits a distinction between muons and neutrinos on an event-by-event basis and complicates it even when using the whole distributions at low statistics. The only unique signature of neutrinos is an arrival direction with a zenith angle > 90°, since muons get absorbed in the Earth. However, this concerns only a very small fraction of the expected events.
To summarize, <ref> combines the most conservative and optimistic models for muon and neutrino predictions for an RNO-G like detector in terms of shower energy. In the most conservative case, an RNO-G like detector will detect 0.07 muons a year (0.16 muons with a 1.5σ trigger), and in the most optimistic case, only 0.002 muons (0.01 muons with a 1.5σ trigger).
While there are thus differences in the extreme case of 𝒪(30) between the muon predictions, current neutrino flux predictions in contrast vary by more than a factor of 𝒪(150).
The combination of Sibyll-2.3c as hadronic interaction model and the Global Spline Fit (GSF) yields the highest muon rate; the theoretically driven model T+H is approximately a factor of two lower. QGSJet-II.04 combined with T+H and the proton-poor cosmic ray composition of H3a yields almost no muons. Recall that QGSJet-II.04 does not include charm, which results in an underestimation of the muon flux above PeV energies, where the prompt muon component dominates. The differences between using Sibyll-2.3c with GSF and with T+H, respectively, are therefore likely a better estimate for the uncertainties of the muon event rate, reducing the uncertainty budget to a factor of 2, keeping in mind that Sibyll-2.3c still does not model all components of the muon production. The neutrino flux predictions are influenced by source and propagation modeling, as well as the cosmic ray composition, as indicated by the two predictions for the composition as reported by the Pierre Auger Collaboration and the Telescope Array (TA). Without additional experimental evidence the entire neutrino parameter space has to be considered equally likely for discovery experiments.
The maxima of the muon distributions predicted in all considered scenarios are at around 10^7 GeV and fall steeply towards higher energies.
Above 10^8 GeV all shown neutrino predictions are higher than the muon expectation, which provides an avenue towards a possible analysis cut at high energies. A recent study of the discovery potential for the diffuse flux of ultra-high energy cosmic neutrinos also showed the usefulness of using the reconstructed shower energy as a discriminator for the atmospheric muon background <cit.>.
In addition, it should be noted that all showers with an energy < 10^6 GeV have their vertex position within a 20 m radius of the deep antenna. While the community is pushing towards lowering the energy threshold of detectors to gain an overlap with existing (optical) experiments, the current simulations make a number of approximations which are no longer completely valid in these cases, e.g. observing the far field of the radio emission, the separation of emission and propagation, and a constant index of refraction in the emission zone. The predictions of event rates at low energies therefore carry additional uncertainties. However, <ref> also shows that the background problem likely becomes larger at low energies, in particular since the muon flux rises much more steeply towards lower energies than the neutrino flux predictions. This is shown in a different way in <ref>, which illustrates potential minimum energy cuts that could be imposed to gain a cleaner neutrino sample. For instance, cutting at a shower energy of 10^7.5 GeV would retain 80% or more of all expected neutrinos, but improve the signal-to-background ratio by a factor of 5-10 depending on the model. This, in turn, raises the question of how successful an extension of the detector sensitivities to lower neutrino energies can be, given the increasing muon background.
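The effect of such a minimum shower-energy cut can be summarised in a few lines of code. The sketch below is illustrative only: it assumes per-event expectation weights for neutrinos and muons (e.g. from the simulations above) and reports the retained neutrino fraction and the relative change in the signal-to-background ratio.

```python
import numpy as np

def energy_cut_summary(e_nu, w_nu, e_mu, w_mu, e_cut):
    """Retention of the expected neutrino rate and relative improvement of the
    signal-to-background ratio for a minimum shower-energy cut e_cut."""
    e_nu, w_nu = np.asarray(e_nu), np.asarray(w_nu)
    e_mu, w_mu = np.asarray(e_mu), np.asarray(w_mu)
    retained_nu = w_nu[e_nu >= e_cut].sum() / w_nu.sum()
    sb_before = w_nu.sum() / w_mu.sum()
    sb_after = w_nu[e_nu >= e_cut].sum() / w_mu[e_mu >= e_cut].sum()
    return retained_nu, sb_after / sb_before
```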
§.§ Measuring the muon flux
Finally, one can invert the approach taken above and ask whether radio detectors can be used to measure the prompt muon flux above PeV energies. As shown in <ref>, across all energies and arrival directions, roughly 50% of the detected muons can be related to an air shower that is also detected by the same instrument, meaning that a clear identification of muon events will be possible. In the case of RNO-G, ∼0.3 tagged muon events are expected in 10 years at a trigger threshold of 2.5σ based on Sibyll-2.3c and GSF. Hence, the array will be too small to make a probable detection of a muon over the planned operation time. Even with an optimized trigger to 1.5σ noiseless signal equivalent, the largest flux predictions (Sibyll-2.3c and GSF) still predict <1 tagged events in 10 years of operations. In addition, all muon signals will be very close to the threshold and thus the yet unknown analysis efficiency, as well as unstudied properties of the near-surface ice will have to be considered to solidify this number.
However, future radio detectors are already being planned, in particular, IceCube-Gen2 <cit.>. While the precise expected event numbers will depend on the details of the detector such as the exact hardware implementation of the trigger, bandwidth of the system, and analysis efficiencies, at this point an estimate is already possible. Using the detector configuration and trigger settings as foreseen for IceCube-Gen2 <cit.> which includes a full simulation of the phased array trigger system, we simulated the atmospheric muon rates and find that IceCube-Gen2 will observe ∼1.9 tagged muon events in 10 years for the currently highest flux expectations of Sibyll-2.3c and GSF (see <ref>) and ∼0.1 for QGSJet-II.04 and H3a. With an optimized trigger, one can envision improving on these numbers to reach an expectation significantly >0. This would allow the in-ice radio array of IceCube-Gen2 to provide the first measurements of the prompt muon flux at 10PeV. We publish the expected muon background as a function of shower energy and incident direction for all cosmic ray composition and interaction models discussed in this paper as supplemental material (see <ref>), so that this forecast can be incorporated in future analyses such as <cit.>.
§ CONCLUSION AND OUTLOOK
We presented a study of the background of atmospheric muons at PeV energies and beyond for radio neutrino detectors in ice. The ultra-high energy muon flux is highly dependent on hadronic interaction models and the proton fraction of the cosmic ray composition. Sibyll-2.3c currently provides the most complete hadronic interaction model for these high energies, since it considers the conventional component, the contribution from charmed hadrons, and muons from unflavored mesons, neglecting only the subdominant contributions from B-mesons and photo-conversion into muon pairs. The main uncertainties arise from the unknown charm cross-section, which is not accessible in current particle colliders. Using QGSJet-II.04, which is tuned to the conventional flux, results in muon rates that are a factor of 10 lower.
The cosmic ray composition influences the muon rate mostly through the proton fraction. Changing from a proton-rich to a proton-poor model yields a difference of a factor of two in the flux prediction.
The total observed flux is very sensitive to instrument geometry and in particular trigger settings. An RNO-G like detector will, at full completion, observe about 0.07 muons per year, using the Sibyll-2.3c prediction and a 2.5σ threshold. At a trigger of 1.5σ this number would rise to 0.16 muons per year. These numbers should be compared to the very uncertain flux predictions for neutrinos, which range from 2.7 to 0.01 neutrinos per year in RNO-G.
Since both the neutrino and muon fluxes depend on the proton fraction of the cosmic ray composition, it was studied whether they are correlated. It could be shown that muon and neutrino flux predictions mostly decouple. Most ultra-high energy muons stem from cosmic rays at energies lower than those that cause the cosmogenic neutrino flux. One can therefore also not reduce uncertainties through a combined treatment of signal and background.
A possible mitigation strategy is to detect cosmic rays and thereby identify muon events: if the parent air shower of the muon can be detected, it provides a signature unique to muon events. In a detector with shallow antennas, such an air shower tagging is possible directly, using the same system. The efficiency of this mechanism is energy and arrival direction dependent, with good efficiency for arrival directions more inclined than 55° in zenith and muon energies above 10^9 GeV. One could consider adding a more closely spaced array of shallow-only stations for RNO-G, which would likely improve the veto efficiency for less inclined showers. However, for high efficiency, such an in-fill array would have to have a spacing of 𝒪(100) m, making it too dense to be feasibly installed.
A discrimination between muon and neutrino signals based only on the arrival direction is unlikely, as the distributions mostly follow the detector acceptance. It is, however, likely that neutrinos and muons show a different energy spectrum. The muon flux will likely not be measurable above 10^9 GeV shower energy, already being smaller than most neutrino fluxes at 10^8 GeV shower energy. The obtainable resolution of the shower energy of radio neutrino detectors is expected to be better than a factor of two <cit.>, which seems sufficient to assign a significant signalness probability to high energy events. Combined with an air shower veto, which is most efficient at high energies, this should allow for a relatively background-free neutrino shower detection above 10^8 GeV.
An RNO-G-like detector is likely too small to make a first measurement of the prompt muon flux at energies above 10PeV. This could be done by using those muons that are identified as stemming from an air shower, but the expected number of these kinds of events is <1 in 10 years. However, a much larger detector like the planned radio array of IceCube-Gen2 has the potential for the first muon measurements at these energies, thereby providing additional handles on hadronic interaction models and cosmic ray composition.
§ ACKNOWLEDGMENTS
We would like to thank Pavlo Plotko for generating specific neutrino fluxes using the PriNCE code. We acknowledge fruitful discussions with our colleagues from the RNO-G and IceCube-Gen2 collaborations on the road to taking a fresh look at the muon background. We acknowledge funding from the German Research Foundation (NE 2031-2/1) and the Initiative and Networking Fund of the Helmholtz Association (W2/W3-115). Simulations were partly enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
§ APPENDIX
For completeness we show the expected muon flux for the radio detector of IceCube-Gen2 as published in <cit.> in <ref>. Simulations were performed for a single station of the IceCube-Gen2 array at the South Pole and scaled to match the full array of 164 hybrid stations and 197 shallow-only stations. Triggers stem from a phased array of four antennas at a depth of 200 m, using a trigger rate of 100 Hz, and a two-out-of-four coincidence of downward-pointing shallow log-periodic dipole antennas, also with a trigger rate of 100 Hz.
|
http://arxiv.org/abs/2307.05431v1 | 20230711165138 | Geometric Neural Diffusion Processes | [
"Emile Mathieu",
"Vincent Dutordoir",
"Michael J. Hutchinson",
"Valentin De Bortoli",
"Yee Whye Teh",
"Richard E. Turner"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Geometric Neural Diffusion Processes
====================================
Denoising diffusion models have proven to be a flexible and effective paradigm for generative modelling.
Their recent extension to infinite dimensional Euclidean spaces has allowed for the modelling of stochastic processes.
However, many problems in the natural sciences incorporate symmetries and involve data living in non-Euclidean spaces.
In this work, we extend the framework of diffusion models
to incorporate a series of geometric priors in infinite-dimensional modelling.
We do so by a) constructing a noising process which admits, as limiting distribution, a geometric Gaussian process that transforms under the symmetry group of interest, and b) approximating the score with a neural network that is equivariant w.r.t. this group.
We show that with these conditions, the generative functional model admits the same symmetry.
We demonstrate scalability and capacity of the model, using a novel Langevin-based conditional sampler, to fit complex scalar and vector fields, with Euclidean and spherical codomain, on synthetic and real-world weather data.
§ INTRODUCTION
Traditional denoising diffusion models are defined on
finite-dimensional Euclidean spaces <cit.>.
Extensions have recently been developed for more exotic distributions, such as those supported on Riemannian manifolds <cit.>, and on function spaces of the form f: ℝ^n → ℝ^d <cit.> (i.e. stochastic processes).
In this work, we extend diffusion models to further deal with distributions over functions that incorporate non-Euclidean geometry in two different ways.
This investigation of geometry also naturally leads to the consideration of symmetries in these distributions, and as such we also present methods for incorporating these into diffusion models.
Firstly, we look at tensor fields.
Tensor fields are geometric objects that assign to all points on some manifold a value that lives in some vector space V.
Roughly speaking, these are functions of the form f: M → V.
These objects are central to the study of physics as they form a generic mathematical framework for modelling natural phenomena.
Common examples include the pressure of a fluid in motion as f: ℝ^3 → ℝ, representing wind over the Earth's surface as f: S^2 → TS^2, where TS^2 is the tangent space of the sphere, or modelling the stress in a deformed object as f: Object → Tℝ^3 ⊗ Tℝ^3, where ⊗ is the tensor product of the tangent spaces.
Given the inherent symmetry in the laws of nature, these tensor fields can transform in a way that preserves these symmetries.
Any modelling of these laws may benefit from respecting these symmetries.
Secondly, we look at fields with manifold codomain,
and in particular at functions of the form f: ℝ → M.
The challenge in dealing with manifold-valued outputs arises from the lack of vector-space structure.
In applications, these functions typically appear when modelling processes indexed by time that take values on a manifold.
Examples include tracking the eye of cyclones moving on the surface of the Earth, or modelling the joint angles of a robot as it performs tasks.
The lack of data or noisy measurements in the physical process of interest motivates a probabilistic treatment of such phenomena, in addition to its functional nature.
Arguably the most important framework for modelling stochastic processes are Gaussian Processes (GPs) <cit.>, as they allow for exact or approximate posterior prediction <cit.>.
In particular, when choosing equivariant mean and kernel functions, GPs are invariant (i.e. stationary) <cit.>.
Their limited modelling capacity and the difficulty in designing complex, problem-specific kernels motivates the development of neural processes (NPs) <cit.>, which learn to approximately model a conditional stochastic process directly from data.
NPs have been extended to model translation invariant (scalar) processes <cit.> and more generic E(n)-invariant processes <cit.>.
Yet, the Gaussian conditional assumption of standard NPs still limits their flexibility and prevents such models from fitting complex processes. Diffusion models provide a compelling alternative for significantly greater modelling flexibility.
In this work, we develop geometric diffusion neural processes which incorporate geometrical prior knowledge into functional diffusion models.
Our contributions are three-fold:
* We extend diffusion models to more generic function spaces (i.e. tensor fields, and functions f: ℝ → M) by defining a suitable noising process.
* We incorporate group invariance of the distribution of the generative model by enforcing the covariance kernel and the score network to be group equivariant.
* We propose a novel Langevin dynamics scheme for efficient conditional sampling.
§ BACKGROUND
Denoising diffusion models.
We briefly recall here the key concepts behind diffusion models on ℝ^d and refer the readers to <cit.> for a more detailed introduction.
We consider a forward noising process (Y_t)_t ≥ 0 defined by the following Stochastic
Differential Equation (SDE)
dY_t = -1/2 Y_t dt + dB_t,  Y_0 ∼ p_0,
where (B_t)_t ≥ 0 is a d-dimensional Brownian motion and p_0 is the data distribution.
The process (Y_t)_t ≥ 0 is simply an Ornstein–Uhlenbeck (OU) process which converges with geometric rate to N(0, I). Under mild conditions on p_0, the time-reversed process
(Ȳ_t)_t ≥ 0 = (Y_T-t)_t ∈ [0,T] also satisfies an SDE
<cit.> given by
dȲ_t = {1/2 Ȳ_t + ∇log p_T-t(Ȳ_t)} dt + dB_t,  Ȳ_0 ∼ p_T,
where p_t denotes the density of Y_t.
Unfortunately we cannot sample exactly from (<ref>) as p_T and the scores (∇log p_t)_t∈[0,T] are unavailable.
First, p_T is substituted with the limiting distribution N(0, I) as the forward process converges towards it (with geometric rate).
Second, one can easily show that the score ∇log p_t is the minimiser of
ℓ_t(𝐬) = 𝔼 ‖𝐬(Y_t) - ∇_y_t log p_t|0(Y_t|Y_0)‖^2 over functions 𝐬: [0,T] × ℝ^d → ℝ^d, where the
expectation is over the joint distribution of Y_0, Y_t,
and as such it can readily be approximated by a neural network 𝐬_θ(t, y_t) by minimising this functional.
Finally, a
discretisation of (<ref>) is performed (e.g. Euler–Maruyama) to obtain approximate samples of p_0.
Figure: Illustration of a vector field f: ℝ^2 → ℝ^2 being steered by a group element g = uh ∈ E(2) = ℝ^2 ⋊ O(2).
Steerable fields.
In the following we focus on G being the Euclidean group E(d), that is, the set of rigid transformations in Euclidean space. Its elements g ∈ E(d) admit a unique decomposition g = uh, where h ∈ O(d) is a d×d orthogonal matrix and u ∈ T(d) is a translation which can be identified with an element of ℝ^d; for a vector x ∈ ℝ^d, g·x = hx + u denotes the action of g on x, with h acting from the left on x by matrix multiplication. This special case simplifies the presentation; the extension to the general case is discussed in <ref>.
We are interested in learning a probabilistic model over functions of the form f: 𝒳 → ℝ^d such that a group G acts on 𝒳 and ℝ^d.
We call a feature field a tuple (f, ρ) with f: 𝒳 → ℝ^d a mapping from an input x ∈ 𝒳 to a feature f(x), with associated representation ρ: G → GL(ℝ^d) <cit.>.
This feature field is said to be G-steerable if it transforms for all x ∈ 𝒳, g ∈ G as
(g · f)(x) = ρ(g) f(g^-1 · x).
In this setting, the action of E(d) = T(d) ⋊ O(d) on the feature field f given by (<ref>) yields (g · f)(x) = ((uh) · f)(x) ≜ ρ(h) f(h^-1(x - u)).
Typical examples of feature fields include scalar fields with ρ_triv(g) ≜ 1, transforming as (g · f)(x) = f(g^-1 x), such as temperature fields, and vector or potential fields with ρ_Id(g) ≜ h,
transforming as (g · f)(x) = h f(g^-1 x), as illustrated in <ref>, such as wind or force fields.
Typically the laws of nature have symmetry, and so we are a priori as likely to see a steerable field in one transformation under the group as another. We therefore wish sometimes to build models that place the same density on a field f as the transformed field g · f. Leveraging this symmetry can drastically reduce the amount of data required to learn from and reduce training time.
§ GEOMETRIC NEURAL DIFFUSION PROCESSES
§.§ Continuous diffusion on function spaces
We construct a
diffusion model on functions f: 𝒳 → 𝒴, with 𝒴 = ℝ^d,
by defining a diffusion model for every finite set of marginals. Most prior works on infinite-dimensional diffusions consider a noising process on the space of functions <cit.>. In theory, this allows the model to define a consistent distribution over all the finite marginals of the process being modelled. In practice, however, only finite marginals can be modelled on a computer and the score function needs to be approximated, and at this step consistency over the marginals is lost. The only work to stay fully consistent in implementation is <cit.>, at the cost of limiting the functions that can be modelled to a finite-dimensional subspace. With this in mind, we eschew the technically laborious process of defining diffusions over the infinite-dimensional space and work solely on the finite marginals, following <cit.>. We find that in practice consistency can be learned well from data (see <Ref>), and this allows for more flexible choices of score network architecture and easier training.
Noising process.
We assume we are given a data process (Y_0(x))_x ∈ 𝒳.
Given any X = (x^1, …, x^n) ∈ 𝒳^n, we consider the following forward noising process (Y_t(X))_t ≥ 0 ≜ (Y_t(x^1), …, Y_t(x^n))_t ≥ 0 = (Y_t^1, …, Y_t^n)_t ≥ 0 ∈ 𝒴^n defined by the following multivariate SDE
dY_t(X) = 1/2 {m(X) - Y_t(X)} β_t dt + β_t^1/2 K(X,X)^1/2 dB_t,
where K(X,X)_i,j = k(x^i, x^j)
with k: 𝒳 × 𝒳 → ℝ a kernel
and m: 𝒳 → ℝ.
The process (Y_t(X))_t ≥ 0 is a multivariate Ornstein–Uhlenbeck process—with drift b(t, X, Y_t(X)) = m(X) - Y_t(X) and diffusion coefficient σ(t, X, Y_t(X)) = K(X,X)—which converges with geometric rate to N(m(X), K(X,X)). Using <cit.>, it can be shown that this convergence extends to the process (Y_t)_t ≥ 0, which converges to the Gaussian process with mean m and kernel k, denoted Y_∞.
In the specific instance where k(x,x') = δ_x(x'), the limiting process Y_∞ is simply white noise, whilst other choices such as the squared-exponential or Matérn kernel would lead to the associated Gaussian limiting process Y_∞. Note that the white noise setting is not covered by the existing theory of functional diffusion models, as a Hilbert space and a square-integrable kernel are required, see <cit.> for instance.
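For concreteness, the closed-form marginal of this noising process (see <ref>) can be sampled directly, as sketched below for a one-dimensional scalar field. This is a minimal numpy sketch rather than our library implementation: the squared-exponential limiting kernel, the linear β schedule and all function names are illustrative choices.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    # Squared-exponential kernel, one possible choice for the limiting kernel k.
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale ** 2)

def forward_sample(y0, x, t, beta_integral, mean_fn, kernel_fn=rbf_kernel, jitter=1e-6):
    """Closed-form sample of the noised marginals Y_t(X) | Y_0(X):
    mean  e^{-B(t)/2} y0 + (1 - e^{-B(t)/2}) m(X),
    cov   (1 - e^{-B(t)}) K(X, X),  with B(t) = int_0^t beta(s) ds."""
    B = beta_integral(t)
    K = kernel_fn(x, x) + jitter * np.eye(len(x))
    L = np.linalg.cholesky(K)                      # K = L L^T
    m = mean_fn(x)
    mean_t = np.exp(-0.5 * B) * y0 + (1.0 - np.exp(-0.5 * B)) * m
    sigma_t = np.sqrt(1.0 - np.exp(-B))            # sigma_{t|0}
    eps = np.random.randn(len(x))
    y_t = mean_t + sigma_t * (L @ eps)
    return y_t, eps, sigma_t, L

# e.g. a linear beta schedule beta(s) = b0 + (b1 - b0) s on [0, 1]:
b0, b1 = 1e-4, 10.0
beta_integral = lambda t: b0 * t + 0.5 * (b1 - b0) * t ** 2
x = np.linspace(-2.0, 2.0, 50)
y_t, eps, sigma_t, L = forward_sample(np.sin(x), x, t=0.5,
                                      beta_integral=beta_integral, mean_fn=np.zeros_like)
```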
Denoising process.
Under mild conditions on the distribution of Y_0(X), the time-reversal process (Ȳ_t(X))_t ≥ 0 also satisfies an SDE <cit.> given by
dȲ_t(X) = {-1/2 (m(X) - Ȳ_t(X)) + K(X,X) ∇log p_T-t(Ȳ_t(X))} β_T-t dt + β_T-t^1/2 K(X,X)^1/2 dB_t,
with Ȳ_0 ∼ GP(m, k) and p_t the density of Y_t(X) w.r.t. the Lebesgue measure.
In practice, the Stein score ∇log p_T-t is not tractable and must be approximated by a neural network.
We then consider the generative stochastic process model defined by first sampling Ȳ_0 ∼ GP(m, k) and then simulating the reverse diffusion (<ref>) (e.g. via Euler–Maruyama discretisation).
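Continuing the sketch above, an Euler–Maruyama discretisation of this reverse SDE reads as follows, where score_fn stands in for the trained preconditioned score network (Ks)_θ; this is an illustrative sketch rather than the library implementation.

```python
def reverse_sample(score_fn, x, mean_fn, kernel_fn, beta_fn, n_steps=1000, T=1.0, jitter=1e-6):
    """Euler-Maruyama discretisation of the time-reversed SDE, started from the
    limiting GP(m, k) marginals.  score_fn(t, x, y) is assumed to approximate
    the preconditioned score K(X, X) grad log p_t(y)."""
    n = len(x)
    K = kernel_fn(x, x) + jitter * np.eye(n)
    L = np.linalg.cholesky(K)
    m = mean_fn(x)
    y = m + L @ np.random.randn(n)                 # sample from the GP(m, k) marginals
    dt = T / n_steps
    for i in range(n_steps):
        t = T - i * dt                             # integrate the SDE backwards in time
        beta = beta_fn(t)
        drift = -0.5 * (m - y) + score_fn(t, x, y)
        y = y + drift * beta * dt + np.sqrt(beta * dt) * (L @ np.random.randn(n))
    return y
```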
Manifold valued outputs. So far we have defined our generative model with 𝒴 = ℝ^d; we can readily extend the methodology to manifold-valued functional models using Riemannian diffusion models such as <cit.>, see <ref>. One notable difference is that in the case where 𝒴 is a compact manifold, we replace the Ornstein-Uhlenbeck process by a Brownian motion which targets the uniform distribution.
Training.
As the reverse SDE (<ref>) involves the preconditioned score K(X, X) ∇log p_t,
we directly approximate it with a neural network (Ks)_θ: [0, T] × 𝒳^n × 𝒴^n → (T𝒴)^n, where T𝒴 is the tangent bundle of 𝒴, see <ref>.
The conditional score of the noising process (<ref>) is given by
∇_Y_t log p_t(Y_t(X) | Y_0(X))
= - Σ_t|0^-1 (Y_t(X) - m_t|0)
= - σ_t|0^-1 K(X,X)^-1/2 ε,
since Y_t = m_t|0 + Σ_t|0^1/2 ε with ε ∼ N(0, I),
and
Σ_t|0 = σ_t|0^2 K with
σ_t|0 = (1 - e^-∫_0^t β(s) ds)^1/2,
see <ref>.
We learn the preconditioned score (Ks)_θ by minimising the following denoising score matching (DSM) loss <cit.> weighted by Λ(t) = σ_t|0^2 K^⊤K
ℒ(θ; Λ(t))
= 𝔼[ ‖ s_θ(t, Y_t) - ∇log p_t(Y_t | Y_0) ‖_Λ(t)^2 ]
= 𝔼[ ‖ σ_t|0 · (Ks)_θ(t, Y_t) + K^1/2 ε ‖_2^2 ],
where ‖x‖^2_Λ = x^⊤ Λ x.
Note that when targeting a unit-variance white noise, then K = I and the loss (<ref>) reverts to the DSM loss with weighting λ(t) = 1/σ_t|0^2 <cit.>.
In <ref>, we explore several preconditioning terms and associated weighting Λ(t).
Overall, we found the preconditioned score K∇log p_t parameterisation, in combination with the ℓ_2 loss, to perform best, as shown by the ablation study in <ref>.
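Continuing the sketches above, a single-sample Monte-Carlo estimate of this preconditioned DSM objective can be written as follows; score_net stands in for (Ks)_θ, the helper forward_sample is the one defined earlier, and a Cholesky factor plays the role of K^{1/2}.

```python
def dsm_loss(score_net, y0, x, t, beta_integral, mean_fn, kernel_fn=rbf_kernel):
    """Single-sample estimate of the K-preconditioned denoising score matching
    loss || sigma_{t|0} (K s)_theta(t, Y_t) + K^{1/2} eps ||_2^2."""
    y_t, eps, sigma_t, L = forward_sample(y0, x, t, beta_integral, mean_fn, kernel_fn)
    residual = sigma_t * score_net(t, x, y_t) + L @ eps
    return np.sum(residual ** 2)
```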
§.§ Invariant neural diffusion processes
In this section, we show how we can incorporate geometrical constraints into the functional diffusion model introduced in the previous <ref>.
In particular, given a group G, we aim to build a generative model over steerable tensor fields as defined in <ref>.
Invariant process.
A stochastic process f ∼ μ is said to be G-invariant if μ(g · 𝖠) = μ(𝖠) for any g ∈ G, with μ ∈ 𝒫(C(𝒳, 𝒴)), where 𝒫 is the space of probability measures on the space of continuous functions and 𝖠 ⊂ C(𝒳, 𝒴) measurable.
From a sample perspective, this means that with input-output pairs 𝒞 = {(x^i, y^i)}_i=1^n, and denoting the action of G on this set as g ·𝒞≜{(g · x^i, ρ(g) y^i)}_i=1^n, f ∼μ is G-invariant if and only if g ·𝒞 has the same distribution as 𝒞.
In what follows, we aim to derive sufficient conditions on the model introduced in <ref> so that it satisfies this G-invariance property.
First, we recall such a necessary and sufficient condition for Gaussian processes.
Invariant (stationary) Gaussian process <cit.>.
We have that a Gaussian process GP(m, k) is G-invariant if and only if its mean m and covariance k are suitably G-equivariant—that is, for all x, x' ∈ 𝒳, g ∈ G
m(g · x) = ρ(g) m(x) and k(g · x, g · x') = ρ(g) k(x, x') ρ(g)^⊤.
Trivial examples of E(n)-equivariant kernels include diagonal kernels k = k_0 I with k_0 invariant <cit.>, but see <ref> for non-trivial instances introduced by <cit.>.
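As a sanity check, the equivariance condition of <ref> can be verified numerically for the simple diagonal kernel mentioned above. The sketch below, with illustrative function names, checks k(g·x, g·x') = ρ(g) k(x, x') ρ(g)^⊤ for a random E(2) element acting through the identity representation.

```python
import numpy as np

def diag_rbf_kernel(x1, x2, lengthscale=1.0):
    # Diagonal (matrix-valued) squared-exponential kernel: k(x, x') = k0(x, x') I.
    d2 = np.sum((x1 - x2) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2) * np.eye(2)

def is_equivariant(kernel, x1, x2, h, u, rho, atol=1e-8):
    # Numerically check k(g.x, g.x') = rho(g) k(x, x') rho(g)^T for g = (u, h) in E(2).
    lhs = kernel(h @ x1 + u, h @ x2 + u)
    rhs = rho(h) @ kernel(x1, x2) @ rho(h).T
    return np.allclose(lhs, rhs, atol=atol)

# A rotation by angle phi and a translation u; rho = identity representation (vector fields).
phi = 0.7
h = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
u = np.array([1.5, -0.3])
x1, x2 = np.random.randn(2), np.random.randn(2)
print(is_equivariant(diag_rbf_kernel, x1, x2, h, u, rho=lambda h: h))  # True
```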
Building on <ref>, we then state that our introduced neural diffusion process
is also invariant if we additionally assume the score network to be G-equivariant.
Invariant neural diffusion process <cit.>.
We denote by (Ȳ_t(x))_x ∈ 𝒳, t ∈ [0, T] the process induced by the time-reversal SDE (<ref>) where the score is approximated by a score network 𝐬_θ: [0, T] × 𝒳^n × 𝒴^n → 𝒴^n, and the limiting process is given by ℒ(Ȳ_0) = GP(m, k).
Assuming m and k are respectively G-equivariant per <ref>, if we additionally have that the score network is a G-equivariant vector field, i.e. 𝐬_θ(t, g·X, ρ(g) Y) = ρ(g) 𝐬_θ(t, X, Y) for all X ∈ 𝒳^n, g ∈ G, then for any t ∈ [0, T] the process (Ȳ_t(x))_x ∈ 𝒳 is G-invariant.
This result can be proved in two ways, from the probability flow ODE perspective or directly in terms of SDE via Fokker-Planck, see <ref>.
In particular, when modelling an invariant scalar data process (Y_0(x))_x ∈ 𝒳 such as a temperature field, we need the score network to admit the invariance constraint 𝐬_θ(t, g·X, Y) = 𝐬_θ(t, X, Y).
Equivariant conditional process.
Often precedence is given to modelling the predictive process given a set of observations 𝒞 = {(x^c, y^c)}_c ∈ C.
In this context, the conditional process inherits the symmetry of the prior process in the following sense.
A stochastic process with distribution μ given a context 𝒞 is said to be conditionally G-equivariant if the conditional satisfies μ(𝖠 | g ·𝒞) = μ(g ·𝖠 |𝒞) for any g ∈ G and 𝖠∈C(𝒳,𝒴) measurable,
as illustrated in <ref>.
Equivariant conditional process.
Assume a stochastic process f ∼μ is G-invariant. Then the conditional process f|𝒞 given a set of observations 𝒞 is G-equivariant.
Originally stated in <cit.> for the case where the process is over functions of the form f: ℝ^n → ℝ^d and the marginals have a density w.r.t. the Lebesgue measure, we prove <ref> for SPs over generic fields on manifolds in terms only of the measure of the process (<ref>).
§.§ Conditional sampling
Figure: Illustration of Langevin-corrected conditional sampling. The black line represents the noising process dynamics (p_t)_t ∈ [0,T]. The time-reversal (i.e. predictor) step is combined with a Langevin corrector step projecting back onto the dynamics.
There exist several methods to perform conditional sampling in diffusion models, such as replacement sampling, amortisation and conditional guidance. Replacement sampling is not exact and does not recover the correct conditional distributions. SMC-based methods have been proposed to correct this procedure, but they suffer from the limitations of SMC and scale poorly with the dimension <cit.>. On the other hand, amortisation requires retraining the score network for each different conditional task, which is impractical in our setting where the context length might change. Conditional guidance is a popular method in state-of-the-art image diffusion models but does not recover the true posterior distribution.
Here we propose a new method for sampling from exact conditional distributions of NDPs using only the score network for the joint distribution. Using the fact that the conditional score can be written as
∇_x log p(x|y) = ∇_x log p(x, y) - ∇_x log p(y) = ∇_x log p(x, y),
we can, for any point in diffusion time, conditionally sample using Langevin dynamics, following the SDE dY_s = 1/2 K ∇log p_T-t(Y_s) ds + √(K) dB_s, by only applying the diffusion to the variables of interest and holding the others fixed. While we could sample directly at the end time, this proves difficult in practice. Similar to the motivation of <cit.>, we sample along the reverse diffusion, taking a number of conditional Langevin steps at each time. In addition, we apply the forward noising SDE to the conditioning points at each step, as this puts the combined context and sampling set in a region where the score function has been well learned during training. Our procedure is illustrated in <Ref>. In <ref> we draw links between RePaint <cit.> and our scheme.
§.§ Likelihood evaluation
Similarly to <cit.>,
we can derive a deterministic process which has the same marginal density as the SDE (<ref>):
the `probability flow' Ordinary Differential Equation (ODE), see <Ref>.
Once the score network is learnt, we can thus use it in conjunction with an ODE solver to compute the likelihood of the model.
A perhaps more interesting task is to evaluate the predictive posterior likelihood
p(y^*|x^*,{x^i,y^i}_i ∈ C) given a context set {x^i,y^i}_i ∈ C.
A simple approach is to rely on the conditional probability rule and evaluate
p(y^*|x^*,{x^i,y^i}_i ∈ C) = p(y^*,{y^i}_i ∈ C|x^*,{x^i}_i ∈ C) / p({y^i}_i ∈ C|{x^i}_i ∈ C).
This can be done by solving two probability flow ODEs on the joint evaluation and context set, and only on the context set.
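In code, this amounts to two joint evaluations, as sketched below; joint_log_prob stands in for a log-likelihood routine based on the probability flow ODE, and the names are illustrative.

```python
import numpy as np

def predictive_log_likelihood(joint_log_prob, x_ctx, y_ctx, x_tgt, y_tgt):
    """Conditional log-likelihood log p(y* | x*, C) via two joint evaluations:
    log p(y*, y_C | x*, x_C) - log p(y_C | x_C), each assumed to be computed
    with the probability-flow ODE."""
    x_joint = np.concatenate([x_ctx, x_tgt])
    y_joint = np.concatenate([y_ctx, y_tgt])
    return joint_log_prob(x_joint, y_joint) - joint_log_prob(x_ctx, y_ctx)
```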
§ RELATED WORK
Gaussian processes and the neural processes family.
One important and powerful framework to construct distributions over functional spaces are Gaussian processes <cit.>.
Yet, they are restricted in their modelling capacity and when using exact inference they scale poorly with the number of datapoints.
These problems can be partially alleviated by using neural processes <cit.>, although they also assume a Gaussian likelihood.
Recently introduced autoregressive NPs <cit.> alleviate this limitation, but they are disadvantaged by the fact that variables early in the auto-regressive generation only have simple distributions (typically Gaussian).
Finally, <cit.> model weights of implicit neural representation using diffusion models.
Stationary stochastic processes.
The most popular Gaussian process kernels (e.g. squared exponential, Matérn) are stationary, that is, they are translation invariant.
These lead to invariant Gaussian processes, whose samples when translated have the same distribution as the original ones.
This idea can be extended to the entire isometry group of Euclidean spaces <cit.>, allowing for modelling higher order tensor fields, such as wind fields or incompressible fluid velocity <cit.>.
Later, <cit.> extended stationary kernels and Gaussian processes to a large class of non-Euclidean spaces, in particular all compact spaces, and symmetric non compact spaces.
In the context of neural processes, <cit.> introduced ConvCNP so as to encode translation equivariance into the predictive process.
They do so by embedding the context into a translation equivariant functional representation which is then decoded with a convolutional neural network.
<cit.> later extended this idea to construct neural processes that are additionally equivariant w.r.t. rotations or subgroup thereof.
Spatial structure in diffusion models.
A variety of approaches have also been proposed to incorporate spatial correlation in the noising process of finite-dimensional diffusion models leveraging the multiscale structure of data <cit.>.
Our methodology can also be seen as a principled way to modify the forward dynamics in classical denoising diffusion models.
Hence, our contribution can be understood in the light of recent advances in generative modelling on soft and cold denoising diffusion models <cit.>.
Several recent work explicitly introduced a covariance matrix in the Gaussian noise, either on a choice of kernel <cit.>, based on Discrete Fourier Transform of images <cit.>, or via empirical second order statistics (squared pairwise distances and the squared radius of gyration) for protein modelling <cit.>.
Alternatively, <cit.> introduced correlation on images leveraging a wavelet basis.
Functional diffusion models. Infinite dimensional diffusion models
have been investigated in the Euclidean setting in
<cit.>. Most
of these works are based on an extension of the diffusion models techniques
<cit.> to the infinite-dimensional space,
leveraging tools from the Cameron-Martin theory such as the Feldman-Hájek
theorem <cit.> to
define infinite-dimensional Gaussian measures and how they interact. We
refer to <cit.> for a thorough introduction to Stochastic
Differential Equations in infinite dimension. <cit.>
consider another approach by defining countable diffusion processes in a
basis. All these approaches amount to learning a diffusion model with spatial structure. Note
that this induced correlation is necessary for the theory of infinite dimensional SDE
<cit.> to be applied but is not necessary to implement
diffusion models <cit.>. Several approaches have been
considered for conditional
sampling. <cit.> modify the reverse diffusion to introduce a guidance term, while
<cit.> use the replacement
method. Finally <cit.> amortise the score function
w.r.t. the conditioning context.
§ EXPERIMENTAL RESULTS
Our implementation is built on <cit.>, and is publicly available at <https://github.com/cambridge-mlg/neural_diffusion_processes>.
§.§ 1D regression over stationary scalar fields
We evaluate GeomNDPs on several synthetic 1D regression datasets.
We follow the same experimental setup as <cit.> which we detail in <ref>.
In short, it contains Gaussian (Squared Exponential (SE), Matérn-5/2, Weakly Periodic) and non-Gaussian (Sawtooth and Mixture) sample paths, where Mixture is a combination of the other four datasets with equal weight. <Ref> shows samples for each of these datasets. The Gaussian datasets are corrupted with observation noise of variance σ^2 = 0.05^2.
<Ref> reports the average log-likelihood p(y^* | x^*, 𝒞) across 4096 test samples, where the context set size is uniformly sampled between 1 and 10 and the target has a fixed size of 50.
All inputs x^c, x^* are chosen uniformly within their input domain which is [-2, 2] for the training data and `interpolation' evaluation and [2, 6] for the `generalisation' evaluation.
We compare the performance of GeomNDP to a GP with the true hyperparameters (when available), a (convolutional) Gaussian NP <cit.>, a convolutional NP <cit.> and a vanilla attention-based NDP <cit.>, which we reformulated in the continuous diffusion process framework to allow for log-likelihood evaluations and thus a fair comparison—denoted NDP*.
We enforce translation invariance in the score network by subtracting the centre of mass from the input x, inducing stationary scalar fields.
On the GP datasets, GNP, ConvNP and GeomNDP are able to fit the conditionals perfectly—matching the log-likelihood of the GP model. GNP's performance degrades on the non-Gaussian datasets as it is restricted by its conditional Gaussian assumption, whilst the NDP methods still perform well, as illustrated in <ref>.
In the bottom rows of <ref>, we assess the models' ability to generalise outside of the training input range x ∈ [-2, 2], and evaluate them on a translated grid where context and target points are sampled from [2, 6]. Only the convolutional NPs (GNP and ConvNP) and T(1)-GeomNDP are able to model stationary processes and therefore to perform as well as in the interpolation task. The NDP*, on the contrary, drastically fails at this task.
Non white kernels for limiting process. The NDP methods in the above experiment target the white kernel 1(x = x') in the limiting process.
In <ref>, we explore different choices for the limiting kernel, such as SE and periodic kernels with short and long lengthscales, along with several score parameterisations, see <ref> for a description of these.
We observe that although choosing such kernels gives a head start to the training, it eventually yields slightly worse performance.
We attribute this to the additional complexity of learning a non-diagonal covariance.
Finally, across all datasets and limiting kernels, we found the preconditioned score K∇log p_t to result in the best performance.
Conditional sampling ablation.
We employ the SE dataset to investigate various configurations of the conditional sampler as we have access to the ground truth conditional distribution through the GP posterior.
In <ref> we compute the Kullback-Leibler divergence between the samples generated by GeomNDP and the actual conditional distribution across different conditional sampling settings.
Our results demonstrate the importance of performing multiple Langevin dynamics steps during the conditional sampling process.
Additionally, we observe that the choice of noising scheme for the context values y_c has relatively less impact on the overall outcome.
§.§ Regression over Gaussian process vector fields
We now focus our attention to modelling equivariant vector fields.
For this, we create datasets using samples from a two-dimensional zero-mean GP with one of the following E(2)-equivariant kernels:
a diagonal Squared-Exponential (SE) kernel, a zero-curl (Curl-free) kernel and a zero-divergence (Div-free) kernel,
as described in <ref>.
We equip our model, GeomNDP, with an E(2)-equivariant score architecture, based on steerable CNNs <cit.>.
We compare to NDP* with a non-equivariant attention-based network <cit.>.
We also evaluate two neural processes, a translation-equivariant ConvCNP <cit.> and a C4 ⋉ ℝ^2 ⊂ E(2)-equivariant SteerCNP <cit.>. We also report the performance of the data-generating GP, and the same GP but with a diagonal posterior covariance, GP (Diag.). We measure the predictive log-likelihood of the data process samples under the model on a held-out test dataset.
We observe in <ref> (Left) that the CNPs' performance is limited by their diagonal predictive covariance assumption, and as such they cannot do better than the GP (Diag.).
We also see that although NDP* is able to fit GP posteriors well, it does not reach the maximum log-likelihood value attained by the data GP, in contrast to its equivariant counterpart E(2)-GeomNDP.
To further explore the gains brought by the built-in equivariance, we examine data-efficiency in <ref> (Right), and
notice that E(2)-GeomNDP requires few data samples to fit the data process, since effectively the dimension of the (quotiented) state space is dramatically reduced.
§.§ Global tropical cyclone trajectory prediction
Finally, we assess our model on a task where the domain of the stochastic process is a non-Euclidean manifold.
We model the trajectories of cyclones over the Earth as sample paths of the form ℝ → S^2 coming from a stochastic process.
The data is drawn from the International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 4 (<cit.>)
and preprocessed as per <ref>, where details on the implementation of the score function, the ODE/SDE solvers used for the sampling, and baseline methods can be found.
<ref> shows some cyclone trajectory samples from the data process and from a trained model.
We also demonstrate how such trajectories can be interpolated or extrapolated using the conditional sampling method detailed in <ref>.
Such conditional sample paths are shown in <ref>.
Additionally, we report in <ref> the likelihood
[We only report likelihoods of models defined with respect to the uniform measure on 𝒮^2.
]
and MSE for a series of methods.
The interpolation task involves conditioning on the first and last 20% of the cyclone trajectory and predicting intermediary positions.
The extrapolation task involves conditioning on the first 40% of trajectories and predicting future positions.
We see that the GPs (modelled as f: ℝ → ℝ^2, one on latitude/longitude coordinates, the other via a stereographic projection, using a diagonal RBF kernel with hyperparameters fitted with maximum likelihood) fail drastically given the high non-Gaussianity of the data.
In the interpolation task, the NDP performs as well as the GeomNDP, but the additional geometric structure of modelling the outputs as living on the sphere appears to significantly help for extrapolation.
See <ref> for more fine-grained results.
§ DISCUSSION
In this work, we have extended diffusion models to model invariant stochastic processes over tensor fields.
We did so by
* constructing a continuous noising process over function spaces which correlate input samples with an equivariant kernel,
* parameterising the score with an equivariant neural network.
We have empirically demonstrated the ability of our introduced model to fit complex stochastic processes, and by encoding the symmetry of the problem at hand, we show that it is more data efficient and better able to generalise.
We highlight below some current limitations and important research directions.
First, evaluating the model is slow as it relies on costly SDE or ODE solvers.
In that respect our model shares this limitation with existing diffusion models.
Second, targeting a white noise process appears to outperform targeting other Gaussian processes.
In future work, we would like to investigate the practical influence of different kernels.
Third, we wish to apply our method for modelling higher order tensors, such as moment of inertia in classical mechanics <cit.> or curvature tensor in general relativity.
Fourth, strict invariance may sometimes be too strong; we thus suggest softening it by amortising the score network over extra spatial information available from the problem at hand.
§ ACKNOWLEDGEMENTS
We are grateful to Paul Rosa for helping with the proof,
and to José Miguel Hernández-Lobato for useful discussions.
We thank the <cit.>, <cit.> and <cit.> teams, as our library is built on these great libraries.
Richard E. Turner and Emile Mathieu are supported by an EPSRC Prosperity Partnership EP/T005386/1 between Microsoft Research and the University of Cambridge.
Michael J. Hutchinson is supported by the EPSRC Centre for Doctoral Training in Modern Statistics and Statistical Machine Learning (EP/S023151/1).
§ ORGANISATION OF APPENDICES
In this supplementary, we first introduce in <ref> an Ornstein Uhlenbeck process on function space (via finite marginals) along with several score approximations.
Then in <ref>, we show how this methodology extend to manifold-valued inputs or outputs.
Later in <ref>, we derive sufficient conditions for this introduced model to yield a group invariant process.
Moreover, in <ref>, we study some conditional sampling schemes.
Finally, in <ref>, we give a thorough description of the experimental settings along with additional empirical results.
§ ORNSTEIN UHLENBECK ON FUNCTION SPACE
§.§ Multivariate Ornstein-Uhlenbeck process
First, we aim to show that we can define a stochastic process on an infinite-dimensional function space by defining the joint finite marginals Y_t(X) as the solution of a multidimensional Ornstein-Uhlenbeck process.
In particular, for any set of inputs X = (x_1, ⋯, x_k) ∈ 𝒳^k, we define the joint marginal as the solution of the following SDE
dY_t(X) = 1/2 (m(X) - Y_t(X)) β_t dt + √(β_t K(X,X)) dB_t.
<cit.>
We assume we are given a data process (Y_0(x))_x ∈ 𝒳 and we denote by 𝐆 ∼ GP(0, k) a Gaussian process with zero mean and covariance k.
Then let's define
Y_t ≜ e^-1/2 ∫_0^t β_s ds Y_0 + (1 - e^-1/2 ∫_0^t β_s ds) m + (1 - e^-∫_0^t β_s ds)^1/2 𝐆.
Then (Y_t(x))_x ∈ 𝒳 is a stochastic process (by virtue of being a linear combination of stochastic processes).
We thus have that Y_t coincides with the data process at t = 0 and converges in distribution to Y_∞ ∼ GP(m, k) as t → ∞, so effectively (Y_t(x))_t ∈ ℝ_+, x ∈ 𝒳 interpolates between the data process and this limiting Gaussian process.
Additionally, ℒ(Y_t | Y_0 = y_0) = GP(m_t, Σ_t)
with m_t = e^-1/2 ∫_0^t β_s ds y_0 + (1 - e^-1/2 ∫_0^t β_s ds) m and Σ_t = (1 - e^-∫_0^t β_s ds) K.
Furthermore, (Y_t(x))_t ∈ ℝ_+, x ∈ 𝒳 is the solution of the SDE in (<ref>).
We aim to compute the mean and covariance of the process (Y_t)_t ≥ 0 described by the SDE (<ref>).
First let's recall the time evolution of the mean and covariance of the solution of a multivariate Ornstein-Uhlenbeck process given by
dY_t = f(Y_t, t) dt + L(Y_t, t) dB_t.
We know that the time evolution of the mean and the covariance are given respectively by <cit.>
dm_t/dt = 𝔼[f(Y_t, t)]
dΣ_t/dt = 𝔼[f(Y_t, t) (Y_t - m_t)^⊤] + 𝔼[(Y_t - m_t) f(Y_t, t)^⊤] + 𝔼[L(Y_t, t) L(Y_t, t)^⊤].
Plugging in the drift f(Y_t, t) = 1/2 · (m - Y_t) β_t and diffusion term L(Y_t, t) = √(β_t K) from (<ref>), we get
dm_t/dt = 1/2 · (m - m_t) β_t
dΣ_t/dt = β_t [K - Σ_t].
Solving these two ODEs we get
m_t = e^-1/2 ∫_0^t β_s ds m_0 + (1 - e^-1/2 ∫_0^t β_s ds) m
Σ_t = K + e^-∫_0^t β_s ds (Σ_0 - K)
with m_0 ≜ 𝔼[Y_0] and Σ_0 ≜ Cov[Y_0].
Now let's compute the first two moments of (Y_t(x))_x ∈ 𝒳. We have
𝔼[Y_t] = 𝔼[e^-1/2 ∫_0^t β_s ds Y_0 + (1 - e^-1/2 ∫_0^t β_s ds) m + (1 - e^-∫_0^t β_s ds)^1/2 𝐆]
= e^-1/2 ∫_0^t β_s ds m_0 + (1 - e^-1/2 ∫_0^t β_s ds) m
= m_t
Cov[Y_t] = Cov[e^-1/2 ∫_0^t β_s ds Y_0] + Cov[(1 - e^-∫_0^t β_s ds)^1/2 𝐆]
= e^-∫_0^t β_s ds Σ_0 + (1 - e^-∫_0^t β_s ds) K
= K + e^-∫_0^t β_s ds (Σ_0 - K)
= Σ_t.
§.§ Conditional score
Hence, conditioned on Y_0, the score is the gradient of the log density of a Gaussian with mean m_t|0 = e^-1/2 B(t) Y_0 and covariance Σ_t|0 = (1 - e^-B(t)) K, with B(t) = ∫_0^t β(s) ds, which can be derived from the above marginal mean and covariance with m_0 = Y_0 and Σ_0 = 0.
∇_Y_t log p_t(Y_t | Y_0)
= ∇_Y_t log 𝒩(Y_t | m_t|0, Σ_t|0)
= ∇_Y_t [- 1/2 (Y_t - m_t|0)^⊤ Σ_t|0^-1 (Y_t - m_t|0)] + c
= - Σ_t|0^-1 (Y_t - m_t|0)
= - L_t|0^-⊤ L_t|0^-1 L_t|0 ε
= - L_t|0^-⊤ ε
where L_t|0 denotes the Cholesky factor of Σ_t|0 = L_t|0 L_t|0^⊤, and Y_t = m_t|0 + L_t|0 ε.
Then we can plug our learnt (preconditioned) score into the backward SDE <ref>, which gives
dȲ_t(X) = [-(m(X) - Ȳ_t(X))/2 + K(X,X) ∇log p_T-t(Ȳ_t(X))] β_T-t dt + β_T-t^1/2 K(X,X)^1/2 dB_t.
§.§ Several score parametrisations
In this section, we discuss several parametrisations of the neural network and the objective.
For the sake of versatility, we opt to employ the symbol D_θ for the network instead of s_θ as mentioned in the primary text, as it allows us to approximate not only the score but also other quantities from which the score can be derived. In full generality, we use a residual connection, weighted by
c_out, c_skip: →, to parameterise the network
D_θ(t, Y̌_t) = c_skip(t) Y̌_t + c_out(t) F_θ(t, Y̌_t).
We recall that the input to the network is time t, and the noised vector Y̌_t = μ̌_t|0 + ň, where μ̌_t|0 = e^-B(t)/2Y̌_0 and ň∼𝒩(0, Σ_t|0) with Σ_t|0 = (1 - e^-B(t)) K. The gram matrix K corresponds to k(X, X) with k the limiting kernel. We denote by L_t|0 and S respectively the Cholesky decomposition of Σ_t|0=L_t|0L_t|0^⊤ and K = SS^⊤.
The denoising score matching loss weighted by Λ(t) is given by
ℒ(θ)
= 𝔼[D_θ(t, Y̌_t) - ∇_Y̌_tlog p_t(Y̌_t| Y̌_0)_Λ(t)^2 ]
No preconditioning
By reparametrisation, let Y̌_t = μ̌_t|0 + L_t|0ž, where ž∼𝒩(0̌, I), the loss from <ref> can be written as
ℒ(θ) = 𝔼[ D_θ(t, Y̌_t) + Σ_t|0^-1 (Y̌_t - μ̌_t|0) ^2_Λ(t)]
= 𝔼[ D_θ(t, Y̌_t) + Σ_t|0^-1 L_t|0ž^2_Λ(t)]
= 𝔼[ D_θ(t, Y̌_t) + L_t|0^-⊤ž^2_Λ(t)]
Choosing Λ(t) = Σ_t|0 = L_t|0L_t|0^⊤ we obtain
ℒ(θ) = 𝔼[ L_t|0^⊤ D_θ(t, Y̌_t) + ž^2_2]
= 𝔼[ σ_t|0 S^⊤ D_θ(t, Y̌_t) + ž^2_2].
Preconditioning by K
Alternatively, one can train the neural network to approximate the preconditioned score D_θ≈K∇_Y̌_tlog p_t(Y̌_t | Y̌_0). The loss, weighted by Λ = σ_t|0^2I, is then given by
ℒ(θ) = 𝔼[ D_θ(t, Y̌_t) + K L_t|0^-⊤ž^2_Λ(t)]
= 𝔼[ D_θ(t, Y̌_t) + σ_t|0^-1Sž^2_Λ(t)]
= 𝔼[ σ_t|0D_θ(t, Y̌_t) + Sž^2_2].
Precondition by S^⊤
A variation of the previous one, is to precondition the score by the transpose Cholesky of the limiting kernel gram matrix, such that
D_θ≈S^⊤∇_Y̌_tlog p_t(Y̌_t | Y̌_0)
.
The loss, weighted by Λ = σ_t|0^2I, becomes
ℒ(θ) = 𝔼[ D_θ(t, Y̌_t) + S^⊤ L_t|0^-⊤ž^2_Λ(t)]
= 𝔼[ D_θ(t, Y̌_t) + σ_t|0^-1ž^2_Λ(t)]
= 𝔼[ σ_t|0D_θ(t, Y̌_t) + ž^2_2].
Predicting Y̌_0
Finally, an alternative strategy is to predict Y̌_0 from a noised version Y̌_t. In this case, the loss takes the simple form
ℒ(θ) = 𝔼[ D_θ(t, Y̌_t) - Y̌_0 ^2_2].
The score can be computed from the network's prediction following
∇log p_t(Y̌_t | Y̌_0) = -Σ_t|0^-1(Y̌_t - μ̌_t|0)
= -Σ_t|0^-1(Y̌_t - e^-B(t)/2Y̌_0)
≈ -Σ_t|0^-1(Y̌_t - e^-B(t)/2 D_θ(t, Y̌_t))
<Ref> summarises the different options for parametrising the score as well as the values for c_skip and c_out that we found to be optimal, based on the recommendation from <cit.>. In practice, we found the precondition by K parametrisation to produce the best results, but we refer to <ref> for a more in-depth ablation study.
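For reference, the sketch below shows how the score ∇ log p_t(y_t | y_0) would be recovered from the network output under each of these parameterisations; the string labels and the function name are illustrative, with S the Cholesky factor of K and sigma_t = σ_t|0.

```python
import numpy as np

def score_from_network(parametrisation, D, y_t, S, sigma_t, B_t):
    """Recover grad log p_t(y_t | y_0) from the network output D under the
    parameterisations above (S = chol(K), sigma_t = sigma_{t|0}, B_t = B(t))."""
    if parametrisation == "none":      # D ~ grad log p_t
        return D
    if parametrisation == "K":         # D ~ K grad log p_t
        return np.linalg.solve(S.T, np.linalg.solve(S, D))
    if parametrisation == "S^T":       # D ~ S^T grad log p_t
        return np.linalg.solve(S.T, D)
    if parametrisation == "y0":        # D ~ y_0, score = -Sigma^{-1}(y_t - e^{-B/2} D)
        r = y_t - np.exp(-0.5 * B_t) * D
        return -np.linalg.solve(S.T, np.linalg.solve(S, r)) / sigma_t ** 2
    raise ValueError(parametrisation)
```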
§.§ Exact (marginal) score in Gaussian setting
Interpolating between Gaussian processes GP(m_0, Σ_0) and GP(m, K), the exact (preconditioned) marginal score is
K ∇_Y_t log p_t(Y_t)
= - K Σ_t^-1 (Y_t - m_t)
= - K [K + e^-∫_0^t β_s ds (Σ_0 - K)]^-1 (Y_t - m_t)
= - K (L_t L_t^⊤)^-1 (Y_t - m_t)
= - K L_t^-⊤ L_t^-1 (Y_t - m_t)
with Σ_t = K + e^-∫_0^t β_s ds (Σ_0 - K) = L_t L_t^⊤ obtained via Cholesky decomposition.
§.§ Langevin dynamics
Under mild assumptions on ∇log p_T-t <cit.>, the following SDE
dY_s = 1/2 K ∇log p_T-t(Y_s) ds + √(K) dB_s
admits a solution (Y_s)_s ≥ 0 whose law ℒ(Y_s) converges with geometric rate to p_T-t for any invertible matrix K.
§.§ Likelihood evaluation
Similarly to <cit.>,
we can derive a deterministic process which has the same marginal density as the SDE (<ref>), given by the following Ordinary Differential Equation (ODE)—referred to as the probability flow ODE
d [ Y_t(X); log p_t(Y_t(X)) ]
=
[ 1/2 { m(X) - Y_t(X) - K(X,X) ∇log p_t(Y_t(X)) } β_t; - 1/2 ∇ · { m(X) - Y_t(X) - K(X,X) ∇log p_t(Y_t(X)) } β_t ] dt.
Once the score network is learnt, we can thus use it in conjunction with an ODE solver to compute the likelihood of the model.
§.§ Discussion consistency
So far we have defined a generative model over functions via its finite marginals ^θ_T().
These finite marginals were to arise from a stochastic process if,
as per the Kolmogorov extension theorem <cit.>,
they satisfy exchangeability and consistency conditions.
Exchangeability can be satisfied by parametrising the score network such that the score network
is equivariant w.r.t permutation, i.e.𝐬_θ(t, σ∘, σ∘) = σ∘𝐬_θ(t, , ) for any σ∈Σ_n.
Additionally, we have that the noising process (_t(x))_x ∈ is trivially
consistent for any t ∈_+ since it is a stochastic process as per
<ref>, and consequently so is the (true) time-reversal
(_t(x))_x ∈. Yet, when approximating the score
𝐬_θ≈∇log p_t, we lose the consistency over the generative process ^θ_t() as the
constraint on the score network is non-trivial to satisfy.
This is a strong constraint on the model class, and as soon as one goes beyond linearity (of the posterior w.r.t. the context set), it is non-trivial to enforce without directly parameterising a stochastic process, e.g. as in <cit.>.
There thus seems to be a strong trade-off between satisfying consistency and the model's ability to fit complex processes and scale to large datasets.
§ MANIFOLD-VALUED DIFFUSION PROCESS
§.§ Manifold-valued inputs
In the main text we dealt with a simplified case of tensor fields where the tensor fields are over Euclidean space. Nevertheless, it is certainly possible to apply our methods to these settings. Significant work has been done on performing convolutions on feature fields on generic manifolds (a superset of tensor fields on generic manifolds), core references being <cit.> for the case of homogeneous spaces and <cit.> for more general Riemannian manifolds. We recommend these as excellent mathematical introductions to the topic and build on them to describe how to formulate diffusion models over these spaces.
Tensor fields as sections of bundles. Formally the fields we are interested in modelling are sections σ of associated tensor bundles of the principal G-bundle on a manifold M. We shall denote such a bundle BM and the space of sections Γ(BM). The goal, therefore, is to model random elements from this space of sections. For a clear understanding of this definition, please see <cit.> for an introduction suitable to ML audiences. Prior work looking at this setting is <cit.> where they construct Gaussian Processes over tensor fields on manifolds.
Stochastic processes on spaces of sections. A section can be seen as a map σ: M → BM, where an element in BM is a tuple (m, b) with m in the base manifold and b in the typical fibre, subject to the condition that composing the projection proj_i: (m, b)↦ m with the section gives the identity, proj_i ∘σ = Id. It is then clear that we can see distributions over sections as stochastic processes with index set the manifold M and output space a point in the bundle BM, with the projection condition satisfied. The projection onto finite marginals, i.e. a finite set of points in the manifold, is defined as π_m_1, ..., m_n(σ) = (σ(m_1), ..., σ(m_n)).
Noising process. To define a noising process over these marginals, we can use Gaussian Processes defined in <cit.> over the tensor fields. The convergence results of <cit.> hold still, and so using these Gaussian Processes as noising processes on the marginals also defines a noising process on the whole section.
Reverse process. The results of <cit.> are extremely general and continue to hold in this case of SDEs on the space of sections. Note we don't actually need this to be the case, we can just work with the reverse process on the marginals themselves, which are much simpler objects. It is good to know that it is a valid process on full sections though should one want to try and parameterise a score function on the whole section akin to some other infinite-dimension diffusion models.
Score function. The last thing to do therefore is parameterise the score function on the marginals. If we were trying to parameterise the score function over the whole section at once (akin to a number of other works on infinite-dimensional diffusions), this could present some problems in enforcing the smoothness of the score function. As we only deal with the score function on a finite set of marginals, however, we need not deal with this issue, and this presents a distinct advantage in simplicity for our approach. All we need to do is (a) pick a way of numerically representing points on the manifold and (b) pick a basis for the tangent space at each point on the manifold. This lets us represent elements from the tangent space numerically, and therefore also elements from the tensor space at each point numerically as well. This done, we can feed these to a neural network to learn to output a numerical representation of the score on the same basis at each point.
§.§ Manifold-valued outputs
In the setting where one aims to model a stochastic process with a manifold codomain 𝐲_t(x) = (𝐲_t(x_1), ⋯, 𝐲_t(x_n)) ∈^n, things are less trivial, as manifolds do not have the vector space structure necessary to define Gaussian processes.
Fortunately, we can still target a known distribution independently on each marginal, since this is well defined, and as such revert to the Riemannian diffusion models introduced in <cit.>, with n independent Langevin noising processes
d𝐲_t(x_k) = - 1/2∇ U(𝐲_t(x_k)) β_t dt + √(β_t) d𝐁_t^k
applied to each marginal. Hence, in the limit t →∞, 𝐲_t(x) has a density (assuming it exists) which factors as dp / dVol((y(x_1), ⋯, y(x_n))) ∝∏_k e^-U(y(x_k)). For compact manifolds, we can target the uniform distribution by setting U(x)=0. The reverse-time process will have correlations between different marginals, and so the score function still needs to be a function of all the points in the marginal of interest.
§ INVARIANT NEURAL DIFFUSION PROCESSES
§.§ E(n)-equivariant kernels
A kernel k: ^d ×^d →^d× d is equivariant if it satisfies the following constraints:
(a) k is stationary, that is, for all x, x' ∈^n
k(x, x') = k(x - x') ≜k̃(x - x')
and if (b) it satisfies the angular constraint for any h ∈ H
k(h x, h x') = ρ(h) k(x, x') ρ(h)^⊤.
A trivial example of such an equivariant kernel is the diagonal kernel k(x,x') = k_0(x, x') I <cit.>, with k_0 stationary.
This kernel can be understood as having d independent one-dimensional Gaussian process outputs, that is, there is no inter-dimensional correlation.
Less trivial examples are the E(n)-equivariant kernels proposed in <cit.>,
namely curl-free and divergence-free kernels, allowing one, for instance, to model electric or magnetic fields.
Formally we have
k_curl = k_0 A and k_div = k_0 B
with k_0 stationary, e.g. the squared exponential kernel k_0(x, x') = σ^2 exp(-‖x - x'‖^2/(2 l^2)), and A and B given by
A(x, x') = I - (x - x')(x - x')^⊤/l^2
B(x, x') = (x - x')(x - x')^⊤/l^2 + ( n - 1 - ‖x - x'‖^2/l^2) I.
See <cit.> for a proof.
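A small NumPy sketch of these matrix-valued kernel blocks is given below, transcribing the expressions above; the default variance and lengthscale follow the values used in the experiments later on, and the function names are ours.

```python
import numpy as np

def k0_se(x, xp, sigma2=1.0, ell=np.sqrt(5.0)):
    """Scalar squared exponential kernel k_0."""
    diff = x - xp
    return sigma2 * np.exp(-diff @ diff / (2.0 * ell ** 2))

def k_curl(x, xp, sigma2=1.0, ell=np.sqrt(5.0)):
    """Curl-free block k_0(x, x') * A(x, x')."""
    diff = x - xp
    A = np.eye(len(x)) - np.outer(diff, diff) / ell ** 2
    return k0_se(x, xp, sigma2, ell) * A

def k_div(x, xp, sigma2=1.0, ell=np.sqrt(5.0)):
    """Divergence-free block k_0(x, x') * B(x, x')."""
    n = len(x)
    diff = x - xp
    B = np.outer(diff, diff) / ell ** 2 + (n - 1 - diff @ diff / ell ** 2) * np.eye(n)
    return k0_se(x, xp, sigma2, ell) * B
```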
§.§ Proof of <ref>
Below we give two proofs for the group invariance of the generative process, one via the probability flow ODE and one directly via Fokker-Planck.
Reverse ODE.
The reverse probability flow associated with the forward SDE (<ref>) with approximate score 𝐬_θ(t, ·) ≈∇log p_t is given by
_t| = 12[-m() + _t + K(,) 𝐬_θ(T - t, , _t) ] t
≜ b_ODE(t, , _t) t
This ODE induces a flow ϕ^b_t: X^n × Y^n →TY^n for a given integration time t, which is said to be G-equivariant if the vector field is G-equivariant itself, i.e. b(t, g·, ρ(g) _t) = ρ(g) b(t, , _t).
We have that for any g ∈ G
b_ODE(t, g·, ρ(g) _t)
= 12[ -m(g ·) + ρ(g) _t + K(g ·,g ·) 𝐬_θ(t, g ·, ρ(g) _t) ]
(1)=12[-ρ(g)m() + ρ(g) _t + ρ(g) K(,)ρ(g)^⊤ 𝐬_θ(t, g ·, ρ(g) _t) ]
(2)=12[-ρ(g)m() + ρ(g) _t + ρ(g) K(,)ρ(g)^⊤ρ(g) 𝐬_θ(t, , _t) ]
(3)=12ρ(g) [-m() + _t + K(,) 𝐬_θ(t, , _t) ]
=ρ(g) b_ODE(t, , _t)
with (1) from the G-invariant prior GP conditions on m and k, (2) assuming that the score network is G-equivariant and (3) assuming that ρ(g) ∈ O(n).
To prove the opposite direction, we can simply follow these computations backwards.
Finally, we know that with a G-invariant probability measure and G-equivariant map ϕ, the pushforward probability measure ^-1∘ϕ is also G-invariant <cit.>.
Assuming a G-invariant prior GP, and a G-equivariant score network, we thus have that the generative model from <ref> defines marginals that are G-invariant.
Reverse SDE.
The reverse SDE associated of the forward SDE (<ref>) with approximate score 𝐬_θ(t, ·) ≈∇log p_t is given by
_t| = [-(m() - _t)/2 + K(,) 𝐬_θ(T - t, , _t) ] t + √(β_t K(,))_t
≜ b_SDE(t, , _t) t + Σ^1/2(t, ) _t.
As for the probability flow drift b_ODE, we have that b_SDE is similarly G-equivariant, that is b_SDE(t, g·, ρ(g) _t) = ρ(g) b_SDE(t, , _t) for any g ∈ G.
Additionally, we have that diffusion matrix is also G-equivariant as for any g ∈ G we have Σ(t, g ·) = β_t K(g ·, g ·) = β_t ρ(g) K(, ) ρ(g)^⊤ = ρ(g) Σ(t, ) ρ(g)^⊤ since K is the gram matrix of an G-equivariant kernel k.
Additionally assuming that b_SDE and Σ are bounded, <cit.> says that the distribution of _t is G-invariant, and in in particular ℒ(_0).
§.§ Equivariant posterior maps
ϕ_*, A̧
Isom
Using the language of <cit.> our tensor fields are sections of an associated vector bundle A̧ of a manifold M with a G structure. Let _GM be the group of G-structure preserving isometries on M. The action of this group on a section of the bundle f ∈Γ(A̧) is given by
ϕ▹ f := ∘ f ∘ϕ^-1
<cit.>.
Let f ∼ P, P a distribution over the space of section. Let ϕ▹ P be the law of of ϕ▹ f. Let μ_x = Ļ(f(x)) = π_x_# P, the law of f evaluated at a point, where π_x is the canonical projection operator onto the marginal at x, # the pushforward operator in the measure theory sense, x∈ M and y is in the fibre of the associated bundle. Let μ_x^x', y = Ļ(f(x) | f(x') = y') = π_xμ^x', y' = π_x_#Ļ(f | f(x') = y'), the conditional law of the process when given f(x') = y'.
Assume that the prior is invariant under the action of _GM, i.e. that
ϕ▹μ_x = _#μ_ϕ^-1(x) = μ_x
Then the conditional measures are equivariant, in the sense that
ϕ▹μ_x^x', y' = _#μ_ϕ^-1(x)^x',y' = μ_x^ϕ^-1(x), (y) = μ_x^ϕ▹ (x', y')
∀ A, B test functions, ϕ∈_GM,
Bf(x')A(ϕ▹ f)(x) = Bf(x') A∘ f ∘ϕ^-1(x)
= Bf(x')AF(ϕ^-1(x))F(x')
= Bf(x')∫ A(y) _#μ_ϕ^-1(x)^x', f(x')(dy)
= ∫ By'∫ A(y) _#μ_ϕ^-1(x)^x', f(x')(dy) μ_x'(dy')
= ∫ By'∫ A(y) ϕ▹μ_x^x', f(x')(dy) μ_x'(dy')
By invariance this quantity is also equal to
B(ϕ^-1▹ f)(x') A((ϕ^-1▹ϕ▹ f)(x)) = B(ϕ^-1▹ f)(x') A(f(x))B(ϕ^-1▹ f)(x')
= B (f(ϕ^-1(x'))) A(F(x)) (f(ϕ^-1(x')))
= Bτ_x', g^-1 F(gx')∫ A(y) μ_x^ϕ(x'), ^-1(y)(dy)
= ∫B(y')∫ A(y) μ_x^ϕ▹ (x', y)(dy) ^-1_#μ_ϕ(x')(dy')
= ∫B(y')∫ A(y) μ_x^ϕ▹ (x', y)(dy) ϕ^-1▹μ_x'(dy')
Hence
ϕ▹μ_x^x', f(x')(dy) μ_x'(dy') = μ_x^ϕ▹ (x', y)(dy) ϕ^-1▹μ_x'(dy')
By the stated invariance ϕ^-1▹μ_x' = μ_x', hence
ϕ▹μ_x^x', f(x')(dy) = μ_x^ϕ▹ (x', y)(dy) a.e. y'
So
ϕ▹μ_x^x', f(x') = μ_x^ϕ▹ (x', y)
as desired.
§ LANGEVIN CORRECTOR AND THE ITERATIVE PROCEDURE OF REPAINT <CIT.>
§.§ Langevin sampling scheme
Several schemes have previously been proposed for conditional sampling from diffusion models. They fall into two types: those that try to sample conditionally on some part of the state space over which the diffusion model has been trained, such as in-painting or extrapolation tasks, and those that post-hoc attempt to condition on something outside the state space that the model has been trained on.
This first category is the one we are interested in, and in it we have:
* Replacement sampling <cit.>, where the reverse ODE or SDE is evolved while fixing the conditioning data during the rollout. This method does produce visually coherent samples in some cases, but is not an exact conditional sampling method.
* SMC-based methods <cit.>, which are exact up to the particle filter approximation. These can produce good results but can suffer from the usual downsides of SMC methods on highly multi-modal data, such as particle diversity collapse.
* The RePaint scheme of <cit.>. While not originally proposed as an exact sampling scheme, we show later that this method is a specific instantiation of our newly proposed method, and is therefore exact.
* Amortisation methods, e.g. <cit.>. While they can be effective, these methods can never perform exact conditional sampling, by definition.
Our goal is to produce an exact sampling scheme that does not rely on SMC-based methods. Instead, we base our method on Langevin dynamics. If we have a score function trained over the state space x̌ = [x̌^c, x̌^*], where x̌^c are the points we wish to condition on and x̌^* the points we wish to sample, we exploit the following score breakdown:
∇_x̌^*log p(x̌^*|x̌^c) = ∇_x̌^*log p([x̌^*, x̌^c]) - ∇_x̌^*log p(x̌^c) = ∇_x̌^*log p(x̌)
If we have access to the score on the joint variables, we, therefore, have access to the conditional score by simply only taking the gradient of the joint score for the variable we are not conditioning on.
Given we have learnt s_θ(t, x̌) ≈∇_x̌log p_t(x̌), we could use this to perform Langevin dynamics at t=ϵ, some time very close to 0. As in <cit.>, however, this raises the issue of how to initialise the dynamics: a random initialisation will start the sampler in a region where the score has been poorly learnt, producing slow and inaccurate sampling.
Instead, we follow a scheme of tempered Langevin sampling detailed in <ref>. Starting at t=T we sample an initialisation of y̌^* from the reference distribution. Progressing from t=T towards t=ϵ we alternate between running a series of Langevin corrector steps to sample from the distribution p_t, x̌^*(y̌^* | y̌^c), and a single backwards SDE step to sample from p_x̌(y̌_t-γ | y̌_t) with a step size γ. At each inner and outer step, we sample a noised version of the conditioning points y̌^c based on the forward SDE applying noise to these context points, p_t, x̌^c(y̌_t^c | y̌^c). For the exactness of this scheme, all that matters is that at the end of the sampling scheme, we are sampling from p_x̌^*(y̌^* | y̌^c) (up to the ϵ away from zero clipping of the SDE). The rest of the scheme is designed to map the initial sample of y̌^* at t=T to a viable sample through regions where the score has been learnt well.
Given the noising scheme applied to the context points does not actually play into the theoretical exactness of the scheme, only the practical difficulty of staying near regions of well-learnt score, we could make a series of different choices for how to noise the context set at each step.
The choices that present themselves are
* The initial scheme of sampling context noise from the SDE every inner and outer step.
* Only re-sampling the context noise every outer step, and keeping it fixed to this for each inner step associated with the outer step.
* Instead of sampling independent marginal noise at each outer step, sampling a single noising trajectory of the context set from the forward SDE and use this as the noise at each time.
* Perform no noising at all. Effectively the replacement method with added Langevin sampling.
These are illustrated in <ref>.
The main trade-off between the different schemes is the speed at which the noise can be sampled versus sample diversity. In the Euclidean case, we have a closed form for the evolution of the marginal density of the context points under the forward SDE, so sampling the noise at a given time has O(1) cost. On the other hand, in some instances such as noising SDEs on general manifolds, we have to simulate this noise by discretising the forward SDE, which has O(n) cost, where n is the number of discretisation steps in the SDE. For N outer steps and I inner steps, the complexity of the different noising schemes is compared in <ref>. Note that the conditional sampling scheme, excluding the noise sampling, has O(NI) complexity.
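A schematic NumPy version of the tempered Langevin conditional sampler is sketched below. It omits the predictor (reverse SDE) step, uses a standard normal stand-in for the reference distribution, and treats `score_fn`, `noise_ctx` and the step size as placeholders for the quantities described above; all names are ours.

```python
import numpy as np

def conditional_sample(score_fn, y_ctx, n_tgt, ts, n_inner, noise_ctx, step, rng):
    """Tempered Langevin sampling of targets y* given a context set (sketch).

    score_fn(t, y_joint) : joint score with the context block first, target block second
    noise_ctx(t, y_ctx)  : a sample of the noised context from the forward SDE at time t
    """
    n_ctx = len(y_ctx)
    y_tgt = rng.standard_normal(n_tgt)          # initialise targets from the reference
    for t in ts:                                # outer loop: T -> epsilon
        for _ in range(n_inner):                # inner Langevin corrector steps
            y_ctx_t = noise_ctx(t, y_ctx)       # re-noise the context (scheme-dependent)
            joint = np.concatenate([y_ctx_t, y_tgt])
            grad = score_fn(t, joint)[n_ctx:]   # conditional score = target slice
            y_tgt = y_tgt + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(n_tgt)
        # a single reverse-SDE (predictor) step moving t -> t - gamma would go here
    return y_tgt
```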
§.§ RePaint <cit.> correspondance
In this section, we show that:
* <Ref> and <ref> from <cit.> are equivalent in a specific setting.
* There exists a continuous limit (SDE) for both procedures. This SDE targets a probability density which does not correspond to p(x_t_0 | x_0^c).
* When t_0 → 0 this probability measure converges to p(x_0 | x_0^c)
which ensures the correctness of the proposed sampling scheme.
We begin by recalling the conditional sampling algorithm we study in
<Ref> and <Ref>.
First, we start by describing the RePaint algorithm
<cit.>. We consider (Z_k^0, Z_k^1, Z_k^2)_k ∈ a
sequence of independent Gaussian random variables such that for any
k ∈, Z_k^1 and Z_k^2 are d-dimensional Gaussian random variables
with zero mean and identity covariance matrix and Z_k^0 is a p-dimensional
Gaussian random variable with zero mean and identity covariance matrix. We
assume that the whole sequence to be inferred is of size d while the context
is of size p. For simplicity, we only consider the Euclidean setting with
K=I. The proofs can be adapted to cover the case
K≠I without loss of generality.
Let us fix a time t_0 ∈0,T. We
consider the chain (X_k)_k ∈ given by X_0 ∈^d and for any
k ∈, we define
X_k+1/2 = e^γ X_k + 2 (e^γ - 1) ∇_x_klog p_t_0([X_k, X_k^c]) + (e^2γ-1)^1/2 Z^1_k ,
where X_k^c = e^-t_0 X_0^c + (1-e^-2t_0)^1/2 Z_k^0. Finally, we consider
X_k+1 = e^-γ X_k+1/2 + (1 - e^-2γ)^1/2 Z_k^2.
Note that (<ref>) corresponds to one step of backward SDE
integration and (<ref>) corresponds to one step of forward
SDE integration. In both cases we have used the exponential integrator, see
<cit.> for instance. While we use the exponential integrator
in the proofs for simplicity other integrators such as the classical
Euler-Maruyama integration could have been used. Combining (<ref>)
and (<ref>), we get that for any k ∈ we have
X_k+1 = X_k + 2 (1 - e^-γ) ∇_x_klog p_t_0([X_k, X_k^c]) + (1 - e^-2γ)^1/2 (Z_k^1 + Z_k^2) .
Remarking that (Z_k)_k ∈ = ((Z_k^1 + Z_k^2)/√(2))_k ∈ is a family of
d-dimensional Gaussian random variables with zero mean and identity covariance
matrix, we get that for any k ∈
X_k+1 = X_k + 2 (1 - e^-γ) ∇_x_klog p_t_0([X_k, X_k^c]) + √(2) (1 - e^-2γ)^1/2 Z_k ,
where we recall that X_k^c = e^-t_0 X_0^c + (1-e^-2t_0)^1/2 Z_k^0.
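For concreteness, one combined update of this form can be written in a few lines of NumPy. The sketch below assumes the base of the exponentials is e (as in the exponential integrator) and that `score_fn(t0, x, x_ctx)` returns the gradient of the joint log-density with respect to x; both function and argument names are ours.

```python
import numpy as np

def repaint_style_step(x, x0_ctx, score_fn, t0, gamma, rng):
    """One concatenated noising/denoising step at a fixed diffusion time t0 (sketch)."""
    # re-noise the context: X_k^c = e^{-t0} X_0^c + (1 - e^{-2 t0})^{1/2} Z_k^0
    x_ctx = np.exp(-t0) * x0_ctx + np.sqrt(1.0 - np.exp(-2.0 * t0)) * rng.standard_normal(x0_ctx.shape)
    grad = score_fn(t0, x, x_ctx)
    noise = rng.standard_normal(x.shape)
    # X_{k+1} = X_k + 2 (1 - e^{-gamma}) grad + sqrt(2) (1 - e^{-2 gamma})^{1/2} Z_k
    return x + 2.0 * (1.0 - np.exp(-gamma)) * grad \
             + np.sqrt(2.0) * np.sqrt(1.0 - np.exp(-2.0 * gamma)) * noise
```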
Note that the process (<ref>) is another version of the
algorithm <cit.>, where we have concatenated
the denoising and noising procedures. With this formulation, it is clear that
it is equivalent to <Ref>. In what
follows, we identify the limiting SDE of this process, describing the limiting behavior of (<ref>) under
mild assumptions on the target distribution. For any x_t_0∈^d, we denote
b(x_t_0) = 2∫_^p∇_x_t_0log p_t_0([x_t_0, x_t_0^c]) p_t_0|0(x_t_0^c|x_0^c) x_t_0^c .
We emphasize that b/2 ≠∇_x_t_0log p(· | x_0^c). In particular, using
Tweedie's identity, we have that for any x_t_0∈^d
∇log p_t_0(x_t_0|x_0^c) = ∫_^p∇_x_t_0log p([x_t_0, x_t_0^c] | x_0^c) p(x_t_0^c|x_t_0,x_0^c) x_t_0^c .
We introduce the following assumption.
There exist 𝙻, C ≥ 0, 𝚖 > 0 such that for any x_t_0^c, y_t^c ∈^p
and x_t_0, y_t ∈^d
‖∇log p_t_0([x_t_0, x_t_0^c]) - ∇log p_t_0([y_t, y_t^c])‖≤𝙻 (‖x_t_0 - y_t‖ + ‖x_t_0^c - y_t^c‖) .
<Ref> ensures that there exists a unique strong
solution to the SDE associated with (<ref>).
Note that conditions under which log p_t_0 is Lipschitz
are studied in
<cit.>. In the theoretical literature on diffusion models the
Lipschitzness assumption is classical, see
<cit.>.
We denote ((_t^γ)_t ≥ 0)_γ > 0 the family of processes such
that for any k ∈ and γ >0, we have for any t ∈k γ, (k+1)γ,
_t^γ = (1 - (t - kγ)/γ) _kγ^γ + (t - kγ)/γ_(k+1)γ^γ and
_(k+1)γ^γ = _kγ^γ + 2(1-^-γ) ∇__kγ^γlog p_t_0([_kγ^γ, _kγ^c,n]) + √(2)(1-^-2γ)^1/2_kγ^γ ,
where (_kγ^γ)_k ∈, γ > 0 is a family of
independent d-dimensional Gaussian random variables with zero mean and
identity covariance matrix and for any k∈, γ > 0,
_kγ^c,γ = ^-t_0 x_0^c + (1 - ^-2t_0)^1/2_kγ^0,γ, where
(_kγ^0,γ)_k ∈, γ > 0 is a family of
independent p-dimensional Gaussian random variables with zero mean and
identity covariance matrix. This is a linear interpolation of the
algorithm in the form of (<ref>).
Finally, we denote (_t)_t ≥ 0 such that
d𝐗_t = b(𝐗_t) dt + 2 d𝐁_t , 𝐗_0 = x_0 .
We recall that b depends on t_0 but t_0 is fixed here. This means
that we are at time t_0 in the diffusion and consider a corrector at
this stage. The variable t does not corresponds to the backward evolution but
to the forward evolution in the corrector stage. Under <Ref>,
(<ref>) admits a unique strong solution. The rest of the section
is dedicated to the proof of the following result.
Assume . Then lim_n → + ∞ (_t^1/n)_t ≥ 0 = (_t)_t ≥ 0.
This result is an application of <cit.>. It makes explicit the continuous
limit of the algorithm of <cit.>.
In what follows, we verify that the
assumptions of this result hold in our setting. For any γ > 0 and x ∈^d, we define
b_γ(x) = (2/γ)[ (1- ^-γ) ∫_^d∇_x_t_0log p_t_0([x_t_0, x_t_0^c]) p_t_0|0(x_t_0^c|x_0^c) x_t_0^c
- (1/γ) (_(k+1)γ^γ - _k γ^γ)1__(k+1)γ^γ - _k γ^γ≥ 1_k γ = x ,
Σ_γ(x) = (4/γ)(1- ^-γ)^2 ∫_^d∇_x_t_0log p_t_0([x_t_0, x_t_0^c])^⊗ 2 p_t_0|0(x_t_0^c|x_0^c) x_t_0^c + (2/γ)(1- ^-2γ)
- (1/γ) (_(k+1)γ^γ - _k γ^γ)^⊗ 21__(k+1)γ^γ - _k γ^γ≥ 1_k γ = x .
Note that for any γ > 0 and x ∈^d, we have
b_γ(x) = 1__(k+1)γ^γ - _k γ^γ≤ 1 (_(k+1)γ^γ - _k γ^γ)_k γ^γ = x
Σ_γ(x) = 1__(k+1)γ^γ - _k γ^γ≤ 1 (_(k+1)γ^γ - _k γ^γ)^⊗ 2_k γ^γ = x
Assume . Then, we have that for any R, > 0 and γ∈0,1
lim_γ→ 0supΣ_γ(x) - Σ(x)x ∈^d, x≤ R = 0 ,
lim_γ→ 0supb_γ(x) - b(x)x ∈^d, x≤ R = 0 ,
lim_γ→ 0 (1/γ) sup_(k+1)γ^γ - _k γ^γ≥ | _k γ = xx ∈^d, x≤ R = 0 .
Where we recall that for any x ∈^d,
b(x) = 2∫_^p∇_x_t_0log p_t_0([x_t_0, x_t_0^c]) p_t|0^x(x_t_0^c|x_0^c) x_t_0^c , Σ(x) = 4 .
Let R, > 0 and γ∈0,1. Using <Ref>, there exists
C > 0 such that for any x_t_0∈^d with
x_t_0≤ R, we have
∇_x_t_0log p_t_0([x_t_0, x_t_0^c])≤ C (1 +
x_t_0^c). Since p_t_0|0^c is Gaussian with zero mean and
covariance matrix (1- ^-2t_0), we get that for any p ∈,
there exists A_k ≥ 0 such that for any x_t_0∈^d with
x_t_0≤ R
∫_^d∇_x_t_0log p_t_0([x_t_0, x_t_0^c])^p p_t_0|0^c(x_t_0^c | x_0^c) x_t_0^c ≤ A_k(1 + x_0^c^p) .
Therefore, using this result and the fact that for any s ≥ 0,
^-s≥ 1 -s, we get that there exists B_k ≥ 0 such that for any
k, p ∈ and for any x_t_0∈^d with
x_t_0≤ R
_(k+1)γ - _k γ^p_k γ=x≤ B_k γ^p/2 (1 + x_0^c^p).
Therefore, combining this result and the Markov inequality, we get that for
any x_t_0∈^d with x_t_0≤ R we have
lim_γ→ 0 (1/γ) sup_(k+1)γ^γ - _k γ^γ≥ | _k γ = xx ∈^d, x≤ R = 0 ,
lim_γ→ 0 (1/γ) (_(k+1)γ^γ - _k γ^γ)1__(k+1)γ^γ - _k γ^γ≥ 1_k γ = x = 0 ,
lim_γ→ 0 (1/γ) (_(k+1)γ^γ - _k γ^γ)1__(k+1)γ^γ - _k γ^γ≥ 1_k γ = x = 0
In addition, we have that for any x_t_0∈^d with R > 0
(2/γ)(1- ^-γ) - 2∫_^d∇_x_t_0log p_t_0([x_t_0, x_t_0^c]) p_t_0|0(x_t_0^c|x_0^c) x_t_0^c
≤ A_1(1+x_0^c) (2/γ)^-γ-1+γ .
We also have that for any x_t_0∈^d with R > 0
(4/γ)1 - ^-γ^2 ∫_^d∇_x_t_0log p_t_0([x_t_0, x_t_0^c])^⊗ 2 p_t_0|0(x_t_0^c|x_0^c) x_t_0^c
≤ A_2(1+x_0^c^2) (4/γ)1 - ^-γ^2 .
Combining this result, (<ref>), the fact that
lim_γ→ 0 (4/γ)1 - ^-γ^2 = 0 and
lim_γ→ 0 (2/γ)^-γ-1+γ = 0, we get that
lim_γ→ 0supΣ_γ(x) -
Σ(x)x ∈^d, x≤ R = 0. Similarly, using
(<ref>), (<ref>) and the fact that
lim_γ→ 0 (4/γ)1 - ^-γ^2 = 0, we get that
lim_γ→ 0supb_γ(x) -
b(x)x ∈^d, x≤ R = 0.
We can now conclude the proof of <Ref>.
We have that x↦ b(x) and x ↦Σ(x) are
continuous. Combining this result and <Ref>, we conclude
the proof upon applying <cit.>.
<Ref> is a non-quantitative result which states what is
the limit chain for the RePaint procedure. Note that if we do not resample, we
get that
b^cond(x) = 2 ∇_x_t_0log p_t_0([x_t_0, x_t_0^c]) , Σ(x) = 4 .
Recalling (<ref>), we get that (<ref>) is
an amortised version of b^cond.
Similar convergence results
can be derived in this case. Note that it is also possible to obtain
quantitative discretization bounds between (_t)_t ≥ 0 and
(_t^1/n)_t ≥ 0 under the ℓ^2 distance. These bounds are
usually leveraged using the Girsanov theorem
<cit.>. We leave the study of such
bounds for future work.
We also remark that b(x_t_0) is not given by
∇log p_t_0(x_t_0|x_0^c). Denoting U_t_0 such that for any x_t_0∈^d
U_t_0(x_t_0) = -∫_^p (log p_t_0(x_t_0 | x_t_0^c)) p_t|0(x_t_0^c|x_0^c) x_t_0^c ,
we have that ∇ U_t_0(x_t_0) = -b(x_t_0), under mild integration
assumptions. In addition, using Jensen's inequality, we have
∫_^dexp[-U_t_0(x_t_0)] x_t_0≤∫_^d∫_^p p_t_0(x_t_0| x_t_0^c) p_t|0(x_t_0^c|x_0^c) x_t_0 x_t_0^c ≤ 1 .
Hence, π_t_0 with density proportional to x ↦exp[-U_t_0(x)] defines a
valid probability measure.
We make the following assumption which allows us to control the ergodicity of
the process (_t)_t ≥ 0.
There exist 𝚖 > 0 and C ≥ 0 such that for any x_t_0∈^d and x_t_0^c ∈^p
⟨∇_x_tlog p_t_0([x_t, x_t^c]), x_t ⟩≤ -𝚖‖x_t‖^2 + C(1 + ‖x_t^c‖^2) .
The following proposition ensures the ergodicity of the chain
(_t)_t ≥ 0. It is a direct application of
<cit.>.
Assume and . Then, π_t_0 is the unique
invariant probability measure of (_t)_t ≥ 0 and
lim_t →+∞‖ℒ(𝐗_t) - π_t_0‖ = 0, where
ℒ(𝐗_t) is the distribution of 𝐗_t.
Finally, for any t_0 > 0, denote by π_t_0 the probability measure with
density proportional to exp[-U_t_0], where U_t_0 is given for any x_t_0∈^d by
U_t_0(x_t_0) = -∫_^p (log p_t_0(x_t_0 | x_t_0^c)) p_t|0(x_t_0^c|x_0^c) x_t_0^c .
We show that the family of measures (π_t_0)_t_0 > 0 approximates the
posterior with density x_0 ↦ p(x_0|x_0^c) when t_0 is small enough.
Assume .
We have that lim_t_0 → 0π_t_0 = π_0 where π_0 admits a
density w.r.t. the Lebesgue measure given by x_0 ↦ p(x_0|x_0^c).
This is a direct consequence of the fact that p_t|0(·|x_0^c) →δ_x_0^c as t → 0.
This last result shows that even though we do not target
x_t_0↦ p_t_0|0(x_t_0 |x_0^c) using this corrector term, we still
target p(x_0 | x_0^c) as t_0 → 0 which corresponds to the desired output
of the algorithm.
§ EXPERIMENTAL DETAILS
Models, training and evaluation have been implemented in <cit.>.
We used Python <cit.> for all programming, Hydra <cit.>, Numpy <cit.>, Scipy <cit.>, Matplotlib <cit.>, and Pandas <cit.>.
§.§ Regression 1d
§.§.§ Data generation
We follow the same experimental setup as <cit.> to generate the 1d synthetic data. It consists of Gaussian (Squared Exponential (SE), Matérn-5/2, Weakly Periodic) and non-Gaussian (Sawtooth and Mixture) sample paths, where Mixture is a combination of the other four datasets with equal weight. <Ref> shows samples for each of these datasets. The Gaussian datasets are corrupted with observation noise of variance σ^2=0.05^2. The left column of <ref> shows example sample paths for each of the 5 datasets.
The training data consists of 2^14 sample paths while the test dataset has 2^12 paths. For each test path we sample the number of context points between 1 and 10, the number of target points are fixed to 50 for the GP datasets and 100 for the non-Gaussian datasets. The input range for the training and interpolation datasets is [-2, 2] for both the context and target sets, while for the extrapolation task the context and target input points are drawn from [2, 6].
Architecture. For all datasets, except Sawtooth, we use 5 bi-dimensional attention layers <cit.> with 64 hidden dimensions and 8 output heads. For Sawtooth, we obtained better performance with a wider and shallower model consisting of 2 bi-dimensional attention layers with a hidden dimensionality of 128. In all experiment, we train the NDP-based models over 300 epochs using a batch size of 256. Furthermore, we use the Adam optimiser for training with the following learning rate schedule: linear warm-up for 10 epochs followed by a cosine decay until the end of training.
§.§.§ Ablation Limiting Kernels
The test log-likelihoods (TLLs) reported in <ref> for the NDP models target a white limiting kernel and train to approximate the preconditioned score K ∇log p_t. Overall, we found this to be the best performing setting. <Ref> shows an ablation study for different choices of limiting kernel and score parametrisation. We refer to <ref> for a detailed derivation of the score parametrisations.
The dataset in the top row of the figure originates from a Squared Exponential (SE) GP with lengthscale ℓ = 0.25. We compare the performance of three different limiting kernels: white (blue), an SE with a longer lengthscale ℓ=1 (orange), and an SE with a shorter lengthscale ℓ=0.1 (green). As the dataset is Gaussian, we have access to the true score. We observe that, across the different parameterisations, the white limiting kernel performs best. However, note that for the White kernel K = I and thus the different parameterisations become identical. For non-white limiting kernels we see a reduction in performance for both the approximate and exact score. We attribute this to the additional complexity of learning a non-diagonal covariance.
In the bottom row of <ref> we repeat the experiment for a dataset consisting of samples from the Periodic GP with lengthscale 0.5. We draw similar conclusions: the best performing limiting kernel, across the different parametrisations, is the White noise kernel.
§.§.§ Ablation Conditional Sampling
Next, we focus on the empirical performance of the different noising schemes in the conditional sampling, as discussed in <ref>. For this, we measure the Kullback-Leibler (KL) divergence between two Gaussian distributions: the true GP-based conditional distribution, and a distribution created by drawing conditional samples from the model and fitting a Gaussian to them using the empirical mean and covariance. We perform this test on the 1D squared exponential dataset (described above) as this gives us access to the true posterior. We use 2^12 samples to estimate the empirical mean and covariance, and fix the number of context points to 3.
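The KL divergence between the fitted empirical Gaussian and the true GP conditional has a closed form; a NumPy sketch is below. The direction KL(empirical ‖ true) is our assumption, and the helper name is ours.

```python
import numpy as np

def gaussian_kl(mu_q, cov_q, mu_p, cov_p):
    """KL( N(mu_q, cov_q) || N(mu_p, cov_p) )."""
    d = len(mu_q)
    diff = mu_p - mu_q
    trace_term = np.trace(np.linalg.solve(cov_p, cov_q))
    maha = diff @ np.linalg.solve(cov_p, diff)
    logdet_p = 2.0 * np.log(np.diag(np.linalg.cholesky(cov_p))).sum()
    logdet_q = 2.0 * np.log(np.diag(np.linalg.cholesky(cov_q))).sum()
    return 0.5 * (trace_term + maha - d + logdet_p - logdet_q)

# with conditional samples Y of shape (num_samples, d), the empirical fit is
# mu_q, cov_q = Y.mean(axis=0), np.cov(Y, rowvar=False)
```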
In <ref> we keep the total number of score evaluations fixed to 5000 and vary the number of steps in the inner (L) loop such that the number of outer steps is given by the ratio 5000/L. From the figure, we observe that the particular choice of noising scheme is of little importance as long as at least a few (around 5) inner steps are taken. We further note that in this experiment we used the true score (available because of the Gaussianity of the dataset), which means that these results may differ if an approximate score network is used.
§.§ Gaussian process vector fields
Data
We create synthetic datasets using samples from two-dimensional zero-mean GPs with the following -equivariant kernels:
a diagonal Squared-Exponential (SE) kernel, a zero curl (Curl-free) kernel and a zero divergence (Div-free) kernel,
as described in <ref>.
We set the variance to σ^2=1 and the lengthscale to ℓ=√(5).
We evaluate these GPs on a disk grid, created via a 2D grid with 30 × 30 points regularly spaced on [-10, 10]^2 and keeping only the points inside the disk of radius 10.
We create a training dataset of size 80×10^3 and a test dataset of size 10×10^3.
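The disk grid can be constructed with a few lines of NumPy, as sketched below (function name ours).

```python
import numpy as np

def disk_grid(n_per_axis=30, half_width=10.0, radius=10.0):
    """Regular grid on [-half_width, half_width]^2, keeping only points inside the disk."""
    xs = np.linspace(-half_width, half_width, n_per_axis)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=-1)
    return pts[np.linalg.norm(pts, axis=1) <= radius]
```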
Models
We compare two flavours of our model .
One with a non-equivariant attention-based score network <cit.>, referred to as NDP*.
Another one with a -equivariant score architecture, based on steerable CNNs <cit.>.
We rely on the library <cit.> for implementation.
A knn graph ℰ is built with k=20.
The pairwise distances are first embedded into μ(r_ab) with a `smooth_finite' basis of 50 elements via , and with a maximum radius of 2.
The time is mapped via a sinusoidal embedding ϕ(t) <cit.>.
Then edge features are obtained as e_ab = Ψ^(e)(μ(r_ab)||ϕ(t)) ∀ (a, b) ∈ℰ_k with Ψ^(e) an MLP with 2 hidden layers of width 64.
We use 5 layers with update given by
V_a^k+1 = ∑_b ∈𝒩(a, ℰ_k) V_a^k⊗(Ψ^v(e_ab||V_a^k||V_b^k)) Y(r̂_ab)
with Y spherical harmonics up to order 2m Ψ^v an MLP with 2 hidden layers of width 64 acting on invariant features,
and node features V^k having irreps .
Each layer has a gate non-linearity <cit.>.
We also evaluate two neural processes, a translation-equivariant ConvCNP <cit.> with decoder architecture based on 2D convolutional layers <cit.> and a C4 ⋉^2 ⊂-equivariant SteerCNP <cit.> with decoder architecture based on 2D steerable convolutions <cit.>.
Specific details can be found in the accompanying codebase <https://github.com/PeterHolderrieth/Steerable_CNPs> of <cit.>.
Optimisation.
Models are trained for 80k iterations, via <cit.> with a learning rate of 5e-4 and a batch size of 32.
The neural diffusion processes are trained unconditionally, that is we feed GP samples evaluated on the full disk grid.
Their weights are updated via with exponential moving average, with coefficient 0.99.
The diffusion coefficient is weighted by
β: t ↦β_min + (β_max - β_min) · t, and β_min= 1e-4, β_max = 15.
As standard, the neural processes are trained by splitting the training batches into a context and evaluation set, similar to when evaluating the models.
Models have been trained on A100-SXM-80GB GPUs.
Evaluation.
We measure the predictive log-likelihood of the data process samples under the model on a held-out test dataset.
The context sets are of size 25 and uniformly sampled from a disk grid of size 648, and the models are evaluated on the complement of the grid.
For neural diffusion processes, we estimate the likelihood by solving the associated probability flow ODE (<ref>).
The divergence is estimated with the Hutchinson estimator, with Rademacher noise, and 8 samples, whilst the ODE is solved with the 2nd order Heun solver, with 100 discretisation steps.
We also report the performance of the data-generating GP, and the same GP but with diagonal posterior covariance GP (Diag.).
§.§ Tropical cyclone trajectory prediction
Data. The data is drawn from the International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 4 <cit.>. The tracks are taken from the 'all` dataset covering the tracks from all cyclone basins across the globe. The tracks are logged at intervals of every 3 hours. From the dataset, we selected tracks at least 50 time points long and clipped any longer tracks to this length, resulting in 5224 cyclones. 90% was used for training and 10% held out for evaluation. This split was changed across seeds. More interesting schemes handling variable-length tracks are of interest, but not pursued here in this demonstrative experiment. Natively the track locations live in latitude-longitude coordinates, although they are processed into different forms for different models. The time stamps are processed into the number of days since the cyclone formed, and this format is used commonly between all models.
Models.
Four models were evaluated.
The GP (→^2) took the raw latitude-longitude data and normalised it. Using a 2-output RBF kernel with no covariance between the latitude and longitude and taking the cyclone time as input, placed a GP over the data. The hyperparameters of this kernel were optimised using a maximum likelihood grid search over the data. Note that this model places density outside the bounding box of [-90, 90] × [-180, 180] that defines the range of latitude and longitude, and so does not place a proper distribution on the space of paths on the sphere.
The Stereographic GP (→^2/{0}) instead transformed the data under a stereographic projection centred at the north pole, and used the same GP and optimisation as above. Since this model only places density on a set of measure zero that does not correspond to the sphere, it does induce a proper distribution on the space of paths on the sphere.
The NDP (→^2) uses the same preprocessing as GP (→^2) but uses a Neural Diffusion Process from <cit.> to model the data. This has the same shortcomings as the GP (→^2) in not placing a proper density on the space of paths on the sphere. The network used for the score function and the optimisation procedure is detailed below. A linear beta schedule was used with β_0=1e-4 and β_1=10. The reverse model was integrated back to ϵ=5e-4 for numerical stability. The reference measure was a white noise kernel with a variance 0.05. ODEs and SDEs were discretised with 1000 steps.
The (→Ş^2) works with the data projected into 3d space on the surface of the sphere. This projection makes no difference to the results of the model, but makes the computation of the manifold functions such as the exp map easier, and makes it easier to define a smooth score function on the sphere. This is done by outputting a vector for the score from the neural network in 3d space, and projecting it onto the tangent space of the sphere at the given point. For the necessity of this, see <cit.>. The network used for the score function and the optimisation procedure is detailed below. A linear beta schedule was used with β_0=1e-4 and β_1=15. The reverse model was integrated back to ϵ=5e-4 for numerical stability. The reference measure was a white noise kernel with a variance 0.05. ODEs and SDEs were discretised with 1000 steps.
Neural network. The network used to learn the score function for both NDP (→^2) and (→Ş^2) is a bi-attention network from <cit.> with 5 layers, hidden size of 128 and 4 heads per layer. This results in 924k parameters.
Optimisation. NDP (→^2) and (→Ş^2) were both optimised using (correctly implemented) Adam for 250k steps using a batch size of 1024 and global norm clipping of 1. Batches were drawn from the shuffled data and refreshed each time the dataset was exhausted. A learning rate schedule was used with 1000 warmup steps linearly from 1e-5 to 1e-3, and from there a cosine schedule decaying from 1e-3 to 1e-5. With even probability either the whole cyclone track was used in the batch, or 20 random points were sub-sampled to train the model better for the conditional sampling task.
Conditional sampling. The GP models used closed-form conditional sampling as described. Both diffusion-based models used the Langevin sampling scheme described in this work. 1000 outer steps were used with 25 inner steps. We use a ψ=1.0 and λ_0=2.5. In addition at the end of the Langevin sampling, we run an additional 150 Langevin steps with t=ϵ as this visually improved performance.
Evaluation. For the GP models, the (conditional) log probabilities were computed in closed form. For the diffusion-based models, they were computed using the auxiliary likelihood ODE discretised over 1000 steps. The conditional probabilities were computed via the difference between the log-likelihood of the whole trajectory and the log-likelihood of the context set only. The mean squared errors were computed using the geodesic distance between 10 conditionally sampled trajectories, described above.
Purdue University, West Lafayette IN 47906
{dpeddire, upriyam, vaneet}@purdue.edu
Variational Quantum algorithms, especially Quantum Approximate Optimization and Variational Quantum Eigensolver (VQE) have established their potential to provide computational advantage in the realm of combinatorial optimization. However, these algorithms suffer from classically intractable gradients limiting the scalability. This work addresses the scalability challenge for VQE by proposing a classical gradient computation method which utilizes the parameter shift rule but computes the expected values from the circuits using a tensor ring approximation. The parametrized gates from the circuit transform the tensor ring by contracting the matrix along the free edges of the tensor ring. While the single qubit gates do not alter the ring structure, the state transformations from the two qubit rotations are evaluated by truncating the singular values thereby preserving the structure of the tensor ring and reducing the computational complexity. This variation of the Matrix product state approximation grows linearly in number of qubits and the number of two qubit gates as opposed to the exponential growth in the classical simulations, allowing for a faster evaluation of the gradients on classical simulators.
Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization
Dheeraj Pedireddy, Utkarsh Priyam, and Vaneet Aggarwal
==========================================================================================================================
§ INTRODUCTION
Quantum computing has long been touted for its potential to solve some complex problems much more efficiently than classical computers <cit.>. Although the fruition of the idea is further into the future, researchers have been exploring the real-time applicability of current-generation quantum computers. Most quantum processors in their current state are severely limited by the small number of qubits, high noise levels and inefficient error mitigation techniques, calling for a class of algorithms robust to noise and errors. Variational Quantum Algorithms (VQA) have been studied widely for their resilience to the noise from decoherence, making them an ideal choice of algorithms for various applications on gate-based Noisy Intermediate Scale Quantum (NISQ) devices. Two such algorithms of prominence, the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), evaluate the expected energy of a state resulting from a short parameterized circuit (frequently referred to as an ansatz) with respect to an observable defined by a given problem. A classical outer-loop optimizer tries to find the optimal circuit parameters that minimize the expected energy. While QAOA implements a fixed ansatz inspired by adiabatic quantum computing, VQE utilizes a variable ansatz, offering flexibility to engineer the ansatz based on the hardware constraints and the problem at hand. This work chooses to focus on VQE, inspired by the recent advances of variable ansatz in quantum machine learning <cit.>. VQE, initially developed by Peruzzo et al. <cit.>, has seen a number of applications in condensed matter physics <cit.>, quantum chemistry <cit.> and quantum mechanics <cit.>.
Optimization is one of the frontrunners among the applications being studied for potential quantum advantage from VQE and adjacent algorithms <cit.>. Combinatorial optimization is a class of problems of practical relevance with applications spanning across transportation, logistics, manufacturing etc. Studies have indicated that the exponentially growing state space and quantum entanglement can improve the chances of finding the right solution with a potential speedup <cit.>. Even minor improvements to optimization problems from quantum algorithms can potentially have a large impact on the society. In the context of VQE, a multi-qubit Hamiltonian is prepared with its ground state encoding the solution of the optimization problem and the algorithm optimizes their parameters to minimize the energy of the Hamiltonian. The algorithm has been extended to use filtering operators <cit.> and iterative approaches <cit.>, to improve the performance with combinatorial optimization. The approach has also been validated on several practical applications using optimization (e.g., Job Shop Scheduling <cit.>, Vehicle Routing <cit.>)
Despite promising prospects, VQAs and more broadly quantum circuits are hindered by a plethora of problems in the current era of quantum computing, with the primary forces of impedance being the limited number of qubits, physical cost of implementation of quantum circuits and decoherence noise. Hybrid algorithms also suffer from the asymmetric scaling of quantum and classical resources with the circuit execution scaling linearly in number of qubits and circuit depth and the classical gradient evaluation scaling exponentially. Note that the gradients of the variational parameters in VQAs were evaluated using either automatic or numeric differentiation until Schuld et al. <cit.> formalized the notion for gradients computed on quantum hardware popularized as the parameter shift rule. This method estimates the gradients by computing the energy of the wave functions generated by identical circuits with the parameter for which the gradient is to be estimated, shifted by certain values. Parameter shift rule alleviates the imbalance in the scalability, albeit at the cost of executing a much larger number of quantum circuits than the other methods. Given the inconsistency in evaluating the expected values from circuits due to decoherence and inefficient error mitigation techniques on top of the statistical noise from measurement, a larger number of circuits can lead to inaccurate results.
In order to address the issues of scalability, accuracy and cost of execution, this manuscript proposes a classically simulated quantum circuit execution method that approximates the initial and intermediate quantum states using a low-rank tensor ring (TR) to compute the expected energy, which in turn are used in approximating the gradients of a VQE. Built upon the Matrix Product State (MPS) approximation of many body quantum states <cit.>, the tensor ring VQE (TR-VQE) formulates a combinatorial optimization in the same way as a naive VQE, using parameter shift rule to compute the gradients. However, the expected values of the shifted circuits used to compute the gradients are evaluated by approximating the initial quantum state with a TR as opposed to MPS, where the single qubit and two qubit gates corresponding to the circuit ansatz are evaluted using tensor contractions. It must be noted that while a single qubit gate does not change the structure of the tensor network, a two qubit gate contracted with the two corresponding tensors can alter the network by increasing the tensor size or its rank. The proposed method retains the tensor ring structure and rank by truncated singular value decomposition of the higher order tensor resulting from the application of two-qubit gate. The consistent low-rank structure allows for an exponential speedup with respect to the number of qubits and circuit depth, compared to the MPS approximation and the brute force approximation with full state vector. This truncation however, induces a noise in the circuit executions similar to the decoherence in actual quantum computers. Therefore, classically simulating a noisy quantum computer instead of a perfect quantum computer only scales linearly in the number of qubits and circuit depth <cit.>. MPS representation tries to simulate ideal quantum computation without noise but literature suggests that the noise in the current generation quantum computers limits the amount of entanglement that can be built into a quantum state. Given the computational cost of simulating ideal quantum computers, this may not be an ideal prospect since they are not representative of the noisy quantum computations. Moreover, given the robustness of VQAs to noise, this kind of noisy simulation with the benefits of scalability can be specifically useful for machine learning and optimization. Furthermore, Liu et al. <cit.> highlights that the presence of noise in VQAs can naturally help the optimizer avoid saddle points. We posit that this advantage extends to the TR-VQE as well due to the induced noise. The proposed method is validated on multiple instances of max-cut problem compared against F-VQE <cit.> and a naive VQE using parameter shift rule. The expected values of the circuit for the benchmarks are computed using simulations implementing a non-noisy MPS approximation highlighting the improved performance of noisy TR approximation over MPS approximation.
The rest of the manuscript is organized as follows: Section <ref> recounts the existing literature related to the use of Tensor networks in approximating quantum circuits and applications in QML. Section <ref> formulates the notion of VQE to solve maximum cut problem introduced in Section <ref>. Section <ref> discusses the proposed method used to compute the gradients of a variational quantum circuit using the TR approximation of a quantum state and Section <ref> addresses the complexity analysis of the proposed method. The numerical simulations are explained in Section <ref> followed by a discussion on limitations and future direction in Section <ref>
§.§ Related Work
Since its inception, the tensor network approach has been much more widely explored in the context of classical simulation of quantum computations, compared to the brute-force statevector simulation
or other graphical and distributed methods <cit.>. Matrix Product states especially were widely regarded for their ability to efficiently represent moderately entangled quantum many body states <cit.>. The idea has been further extended to techniques that efficiently simulate quantum circuits <cit.> by contracting tensor networks at a fraction of cost of the statevector simulation which holds the full 2^N sized vector. Building upon the literature several variations have emerged for specific cases like Projected Entangled Pair States (PEPS) for two-dimensional circuits <cit.> and Tree Tensor networks (TTN) for circuits with tree-like connectivity <cit.> and Multi-scale Entanglement Renormalization Ansatz (MERA) <cit.> etc.
Note that the naive MPS-based circuit simulation (which will be referred to as the non-noisy MPS approximation in this manuscript), as formulated in <cit.> and widely implemented across quantum computing platforms like Qiskit, does not efficiently encode circular entanglement between the first and last qubits. Further, any application of a two-qubit gate contraction results in an increased tensor size, which in turn increases the computational complexity as the number of two-qubit gates in the circuit grows. To circumvent this shortcoming, Zhou et al. <cit.> proposed a truncated MPS approximation to simulate noisy quantum computers, which demonstrates a linear complexity in the number of qubits and circuit depth.
The noisy simulation addresses the issue of increasing tensor size by approximating the larger tensor after the application of a two qubit gate with tensors of smaller size. The higher order tensor is decomposed into two lower order tensors by truncated singular value decomposition. This approximation preserves the tensor sizes after the application of each gate unlike in the previous iterations of MPS-based simulation.
A number of quantum-inspired tensor network methods have been explored in the machine learning literature for supervised learning. Huggins et al. <cit.> implements MPS and Tree Tensor Network models to solve binary classification. Other tensor network based methods using PEPS and MPS were demonstrated to be effective in image classification tasks <cit.>. The aforementioned literature mostly explores quantum-inspired classical machine learning techniques but very few works have probed into the utility of tensor networks in augmenting quantum machine learning techniques. Peddireddy et al. <cit.> extends the singular value thresholding method from Zhou et al. <cit.> to tensor rings implemented with variational quantum classifiers demonstrating the scalability and improved performance over non-noisy MPS approximation. Tensor rings also encode circular entanglement more efficiently than MPS due to the ring structure. While Zhou et al. <cit.> evaluates the approximated expectations using noisy MPS representation, they do not explore the notion of extending it to computing gradients of variational circuits. Therefore, the application of noisy circuit simulation to scale the classical optimization loop of VQE is still an open problem. Furthermore, extending this approximation method from MPS to tensor rings can also improve representability. This work builds up on <cit.> and <cit.> by adapting the noisy tensor ring representation to compute the approximate gradients of the parameters of a variational quantum eigensolver using the parameter-shift rule. Although the proposed TR based representation computes less accurate gradients than non-noisy MPS based representations owing to the additional information that is removed in the form of truncated singular values, TR based approach scales much more efficiently.
§ PROBLEM SETUP
§.§ Max-Cut Optimization Problem
This section will briefly introduce the maximum cut (max-cut) problem and its mathematical notion in the context of quantum computers. Max-Cut is an NP-hard binary optimization problem with a history of applications in statistical physics, VLSI design and clustering etc. Given an undirected graph G = (V,E), with V and E representing the nodes and edges of the graph, the problem aims to maximize the summed weights of the edges that are cut by grouping the nodes of the graph into two subsets by choosing the optimal subgroups.
The mathematical definition follows the QUBO formulation <cit.>: a graph of n nodes with the weights of the edges given by w_ij for (i,j) ∈ E. The nodes of the graph are cut into two subgroups labelled +1 and -1. The problem attempts to maximize the objective function C(x) given by the sum of the weights of the edges connecting the nodes in +1 to the nodes in -1 which assumes the form:
C(x) = ∑_i,j w_ij x_i (1 - x_j)
where x ∈{0, 1}^n and (i,j) ∈ E. The bitstring x corresponds to an instance of the grouping schema where x_i = 0 or 1 represents the i-th node being assigned to the subgroup +1 or -1 respectively. In order to find the solution to the given objective function with a quantum computer, we construct an Ising Hamiltonian <cit.> corresponding to the function by substituting x_i with its matrix transformation I- Z_i/2 where Z_i are the Pauli Z operators that act on qubit i and I is the identity matrix:
C(x) = ∑_i,j1/4 w_i,j (I - Z_i) (I + Z_j)
C(x) = 1/2∑_i<j w_ij - 1/2∑_i<jZ_i Z_j
Essentially, maximizing the objective of the given optimization problem is equivalent to minimizing the energy of Ising Hamiltonian given by:
ℋ = ∑_i,j w_i,j Z_i Z_j
whose ground state corresponds to the solution of the optimization.The full Hamiltonian ℋ∈ℂ^2^n is never constructed explicitly but is represented using a combination of the Pauli Z operators.
§.§ Variational Quantum Eigensolver
VQE is one of the algorithms that utilizes parameterized quantum circuits to solve for an approximate solution of combinatorial optimization problems. Unlike QAOA, VQE does not enforce any constraints on the circuit ansatz and therefore can be altered to suit the hardware that it's being implemented on. The optimization problem is first translated to a qubit Hamiltonian ℋ whose eigenvalues correspond to the costs of various solutions with the ground state being associated with the optimal solution of the problem. A quantum circuit with parameterized unitary rotations denoted by U(θ) is applied to an initial state |ψ_0⟩ (generally chosen to be the basis state |0⟩^⊗ n) resulting in a trial wavefunction.
|ψ(θ)⟩ = U(θ)|ψ_0⟩
Here, U(θ) represents a chosen ansatz U with variational parameters given by θ. The energy landscape of the Hamiltonian can be traversed using this wavefunction to estimate the expected energy. We choose the notation H(θ) to represent the expectation value of |ψ(θ)⟩ with respect to the observable Hamiltonian ℋ.
H(θ) = ⟨ψ(θ)|ℋ|ψ(θ)⟩
The algorithm then updates the variational parameters of the circuit employing an outer loop optimizer using gradient descent or other adjacent methods. The process is repeated until we arrive at a sufficiently low energy. The quality of the solution at the t-th iteration is evaluated using the approximation ratio which is defined as follows:
α = M-H(θ_t)/M-m
where M represents the maximum possible Hamiltonian value and m the minimum. In other words, α=1 represents the optimal solution, and α=0 represents making no cuts.
Most of the variational quantum algorithms including VQE are implemented as hybrid models that compute the expected value of the observable on a quantum computer while calculating gradients and updating the weights on a classical computer. The fundamental mechanics of the VQE algorithm is illustrated in Figure <ref>. Following the parameter shift rule <cit.>, when the variational parameters are components of a single qubit rotation gate, the gradient takes the following form:
∂ H(θ)/∂θ^i = 1/2[H(θ + π/2 1_i) - H(θ - π/2 1_i)]
Given the choice of ansatz, we choose a circuit that only comprises CX (CNOT) gates and single qubit rotation gates which form a universal gate set, thus simplifying the gradients to the closed form given in Equation <ref> where θ^i is the i-th element of θ, H(θ) corresponds to the energy of the Hamiltonian ℋ with respect to the wavefunction generated by the circuit U(θ) and 1_i is a one-hot vector with the i-th value as 1.
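A minimal sketch of the parameter-shift gradient is given below; `expectation_fn` stands for whatever routine returns H(θ), e.g. the tensor-ring circuit evaluation described in the next section, and the function and argument names are ours.

```python
import numpy as np

def parameter_shift_gradient(expectation_fn, theta):
    """Gradient of H(theta) via the parameter-shift rule: two shifted evaluations per parameter."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2.0
        grad[i] = 0.5 * (expectation_fn(theta + shift) - expectation_fn(theta - shift))
    return grad
```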
§ METHODOLOGY
§.§ Computing gradients using Tensor Rings
Since the gradients of VQE can be computed by implementing quantum circuits, it is crucial to be able to carry out the circuits efficiently. Although the parameter-shift method is faster than the automatic differentiation, it requires a quantum processor to run three identical ansatz with different parameters numerous times to arrive at the gradients (More discussion on this is provided in section <ref>). This could present an impediment given the limited availability of quantum computers and the cost of each implementation. Therefore, it is essential to study the utility of classical simulation of quantum circuits in assisting the optimization procedure.
Tensor networks have been shown to be effective in approximating quantum many body systems and are thus a strong contender among the methods for efficiently simulating quantum circuits. A tensor network can be easily understood via Penrose diagrams or tensor network diagrams, where each diagram corresponds to a graph of multiple nodes with each node representing a tensor. A tensor is a multidimensional array, with its order denoting the number of its dimensions or edges. A popular approximation strategy for quantum systems involves Matrix Product States (MPS) or Tensor Trains (TT), a class of tensor networks that aim to represent a higher order tensor as a chain of order-3 tensors (see Figure <ref>). This representation has the advantage of topological similarity with a multi-qubit system, where each tensor corresponds to a single qubit and the contraction between the tensors encodes the entanglement between the qubits. However, TTs are limited in their flexibility and representation ability due to the constraint on their border rank. Since the border ranks are much lower than the inner ranks, this representation may not be optimal for some specific quantum systems. Also, an optimal TT representation greatly depends on the order of the products, restricting the choice of ansatz. Note that the border rank constraints present the same hindrances in the application of TTs to classical datasets as well. In order to ameliorate these issues, researchers in the area of classical machine learning have adopted Tensor Rings (TR) to represent the data <cit.>. TR structures relax the rank constraints on the border tensors, increasing the expressibility of the tensors. TR decomposition multiplies the tensors circularly, therefore removing the sensitivity to permutations of the multiplicative order. Notable advantages of the TR representation with respect to quantum states include flexibility in the choice of the ansatz. To explain this further, let us assume a circuit similar to the one shown in Figure <ref> where entanglement was introduced between the first and the last qubits using a CX between the said qubits. TR representations are a better fit to encode this kind of cyclic entanglement, therefore improving the choice set of ansatz for the problem.
A quantum state |ψ⟩∈ℂ^2^N can be approximated by a tensor ring with N tensors (one per qubit) multiplied circularly, where the n-th tensor is denoted by τ(n):
|ψ⟩ = ∑_i_1 … i_N∑_r_1 … r_Nτ(1)_r_N r_1^i_1τ(2)_r_1 r_2^i_2…τ(N)_r_N-1 r_N^i_N|i_1 i_2 … i_N⟩
Here, the free indices i_n ∈{0, 1} span the 2^N-dimensional Hilbert space corresponding to the quantum state, whereas r_n are the bond indices (indices connecting the tensors) with rank χ_n, which determines the quality of the approximation of entangled states, i.e., higher values of χ_n are better able to represent strongly entangled states. The rank of the given tensor representation of |ψ⟩ is denoted by (χ_1, χ_2, … , χ_N). Throughout the manuscript we choose χ_n = χ for all n, reducing the number of hyperparameters. The choice of χ, hereafter referred to as the tensor ring bond, significantly determines the representation ability and therefore the performance of the algorithm for a specific problem. Each tensor in the proposed TR representation is a third-order tensor of dimension χ×χ× 2. The exponential reduction in storage complexity is evident: whereas a quantum state requires 2^N parameters, its TR approximation requires only 2Nχ^2 parameters. The approximation of a typical initialization for VQAs, i.e., |0⟩^⊗N, is simply a tensor ring in which each tensor of dimension χ×χ× 2 has the value 1 at index (1,1,1) and 0 elsewhere, represented by 1_(1,1,1). If a different initialization is chosen, constructing an approximation may not be as straightforward, but efficient algorithms for TR decomposition have been studied at length in <cit.>.
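As an illustration, the TR approximation of the initial state |0⟩^⊗N can be built directly in a few lines of numpy. This sketch only shows the storage layout assumed throughout (N cores of shape χ×χ×2, with the 1-entry at the 0-based index (0,0,0)); it is not tied to any particular library:

```python
import numpy as np

def tr_zero_state(n_qubits, chi):
    """Tensor-ring cores for |0...0>: each core is chi x chi x 2,
    with a single 1 at index (0, 0, 0) and zeros elsewhere."""
    cores = []
    for _ in range(n_qubits):
        core = np.zeros((chi, chi, 2))
        core[0, 0, 0] = 1.0
        cores.append(core)
    return cores

# Storage: 2 * N * chi^2 parameters instead of 2^N amplitudes.
cores = tr_zero_state(n_qubits=10, chi=4)
print(sum(c.size for c in cores))   # 10 * 4 * 4 * 2 = 320 vs 2^10 = 1024 amplitudes
```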
While a TR can represent a quantum state, it must also be transformed by parameterized rotations in order to function as specified in VQAs. Given the assumption of utilizing only single-qubit gates and CX gates to simplify the parameter-shift rule, it is sufficient to study the TR transformations corresponding to this gate set. A single-qubit unitary is represented by a (2 × 2) matrix, which is a 2nd-order tensor. The associated matrix multiplication can be implemented by contracting the unitary tensor along the free edge of the tensor corresponding to that qubit, as specified in the following equation:
τ'(n)_r_n-1 r_n^i'_n = ∑_i_nU_i'_n i_nτ(n)_r_n-1 r_n^i_n
U_i'_n i_n is the 2nd-order tensor with indices i'_n and i_n corresponding to the unitary matrix acting on the n-th qubit; it is contracted along the edge i_n with the n-th tensor τ(n), which spans the indices r_n-1, r_n and i_n, resulting in the new tensor τ'(n)_r_n-1 r_n^i'_n. Note that the transformation associated with a single-qubit rotation (visually illustrated in Fig <ref>) does not alter the structure of the tensor ring, preserving the storage complexity.
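In the numpy sketch below, this single-qubit update is one einsum over the physical index of the targeted core, reusing the assumed χ×χ×2 core layout (physical index last); the R_y example is only illustrative:

```python
import numpy as np

def apply_single_qubit_gate(cores, u, n):
    """Contract a 2x2 unitary `u` with the physical index of core n.
    The ring structure (and hence storage cost) is unchanged."""
    # tau'(n)_{ab}^{i'} = sum_i U_{i' i} * tau(n)_{ab}^{i}
    cores[n] = np.einsum('ji,abi->abj', u, cores[n])
    return cores

# Example: an R_y(theta) rotation on qubit 0.
theta = 0.3
ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
```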
Two-qubit rotations like CX, however, can change the tensor ring structure and increase the storage complexity. To alleviate this, we apply a truncated singular value decomposition to the enlarged tensor to break it down into two tensors of the original, smaller size. Say a two-qubit gate U ∈ℝ^4×4 is to be applied to the adjacent qubits m and n (including the circular entanglement). We begin by contracting the two tensors τ(m)_r_m-1 r_m^i_m and τ(n)_r_n-1 r_n^i_n along their shared index r_m = r_n-1 to compute a new tensor:
M_r_m-1 r_n^i_m i_n = ∑_r_mτ(m)_r_m-1 r_m^i_mτ(n)_r_n-1 r_n^i_n
The two qubit gate U is then reshaped into the tensor U_i'_m i'_n i_m i_n and multiplied with the tensor M_r_m-1 r_n^i_m i_n along the shared edges:
(τ')_r_m-1 r_n^i'_m i'_n = ∑_i_m i_n U_i'_m i'_n i_m i_n M_r_m-1 r_n^i_m i_n
The resultant tensor is reshaped into a matrix of shape (i'_m × r_m-1) × (i'_n× r_n) whose singular value decomposition is performed as follows:
(τ')_i'_m × r_m-1^i'_n× r_n = ∑_r_m X_r_m-1 r_m^i'_m S_r_m Y_r_n-1 r_n^i'_n
where the orthogonal vectors of τ' populate the matrices X and Y, whereas S_r_m is a diagonal matrix with the singular values. Since we assume a constant TR bond r_m = χ and the dimensionality of i is 2 (the free indices span the quantum state), τ' has 2χ singular values in this case. S_r_m is truncated, resulting in a new diagonal matrix S'_r_m with only the largest χ values remaining. We also truncate X and Y accordingly to keep only the orthogonal vectors corresponding to the remaining singular values. We compute products of the matrices X, Y and S as follows to make up the new tensors at sites m and n of the tensor ring. Note that while this method only works with two-qubit gates acting on adjacent qubits, it can be extended to a generic circuit using SWAP gates.
τ'(m)_r_m-1 r_m^i'_m = X_r_m-1 r_m^i'_m S'_r_m
τ'(n)_r_n-1 r_n^i'_n = Y_r_n-1 r_n^i'_n
Following the procedure specified, the resulting tensor ring retains the same structure and dimensionality as before, preserving the storage complexity after each application of a two-qubit rotation. It is to be noted that the specified operations scale at worst as O(χ^3); without this approximation, the dimensionality of the tensor network scales exponentially in the number of two-qubit rotations, or the depth of the circuit, thereby increasing the computational complexity. The different stages of the two-qubit rotation procedure on a TR are illustrated in Figure <ref>.
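The two-qubit update can be sketched in the same numpy style; the steps mirror the equations above — contract the two neighbouring cores, apply the reshaped gate, then split back with a truncated SVD keeping the largest χ singular values. The names and the χ×χ×2 core layout are assumptions of this sketch:

```python
import numpy as np

def apply_two_qubit_gate(cores, u4, m, n, chi):
    """Apply a 4x4 gate to adjacent cores m and n, truncating back to bond chi."""
    a, b = cores[m], cores[n]                     # shapes (chi, chi, 2)
    # Contract the shared bond: M_{r_{m-1}, r_n}^{i_m, i_n}
    merged = np.einsum('xyi,yzj->xzij', a, b)
    # Reshape the gate to (2,2,2,2) and contract over the old physical indices.
    u = u4.reshape(2, 2, 2, 2)                    # U_{i'_m i'_n, i_m i_n}
    merged = np.einsum('pqij,xzij->xzpq', u, merged)
    # Reshape to an (i'_m * r_{m-1}) x (i'_n * r_n) matrix and take the SVD.
    mat = merged.transpose(2, 0, 3, 1).reshape(2 * chi, 2 * chi)
    x, s, yt = np.linalg.svd(mat, full_matrices=False)
    k = min(chi, len(s))                          # keep the largest chi values
    x, s, yt = x[:, :k], s[:k], yt[:k, :]
    # Fold the singular values into the left factor and reshape back into cores.
    cores[m] = (x * s).reshape(2, chi, k).transpose(1, 2, 0)   # (r_{m-1}, r_m, i'_m)
    cores[n] = yt.reshape(k, 2, chi).transpose(0, 2, 1)        # (r_m, r_n, i'_n)
    return cores
```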
Given that an ansatz has been chosen for a variational algorithm (constructed only from parameterized single-qubit gates and CX gates), it can be represented as a set of gates denoted by U, ordered by their position in the circuit, i.e., a gate that is applied first to the quantum state is placed at the beginning of the set, with the single-qubit gates parameterized by θ_t. The final quantum state produced by the circuit can be approximated by a tensor ring that is initialized as 1_(1,1,1) and transformed by each gate in U following the procedure in the preceding paragraphs. To compute the expected energy with respect to the final quantum state, the Hamiltonian, composed of Pauli matrices, is decomposed into a linear combination of unitary components, and the expected energy is the corresponding weighted sum of their expectations.
⟨ψ(θ)|ℋ|ψ(θ)⟩ = ∑_i,j w_i,j⟨ψ(θ)|Z_iZ_j|ψ(θ)⟩
We propose to compute the expected energy with respect to a component Z_pZ_q using the TR representation by the application of single qubit Pauli Z gate at sites p and q and contracting it with the ring before the Z transformations along the edges that span the quantum Hilbert space (See Fig <ref>).
τ'(θ)_i_1…,i'_p,…,i'_q,… i_N = ∑_i_p, i_q Z_i_p^i'_p Z_i_q^i'_qτ(θ)_i_1…,i_p,…,i_q,… i_N
⟨ψ(θ)|Z_p Z_q|ψ(θ)⟩ = ∑_i_1,i_2,… i_Nτ'(θ)_i_1, i_2 … i_Nτ(θ)_i_1, i_2 … i_N
In the equations above, τ(θ) represents the final state produced by the ansatz U parameterized by θ approximated by a TR and τ'(θ) is produced after the Pauli Z transformations on the final state. Note that the indices i'_p and i'_q in τ'(θ) have been renamed to i_p and i_q for a simplified representation. When computing the expected value, the order of the contractions becomes crucial to the computational complexity but it has been established <cit.> that it can be computed effectively in O(Nχ^3) steps. The total procedure to compute the expected value has been presented in a more compact form in Algorithm <ref>. We utilize this algorithm to evaluate the gradients of the variational quantum eigensolver by computing the expected energy of the two circuits with shifted parameters as shown in Algorithm <ref>. The gradients are then used to update the weights of the variational parameters in the same manner as the naive VQE.
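For completeness, here is a sketch of the ⟨ψ|Z_p Z_q|ψ⟩ evaluation using per-site transfer matrices. This naive ordering is simpler than (and slower than) the optimized O(Nχ^3) contraction order referenced above; it is meant only to illustrate the computation in Algorithm <ref>:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def tr_inner_product(bra_cores, ket_cores):
    """<bra|ket> for two tensor rings with matching structure."""
    chi = bra_cores[0].shape[0]
    env = np.eye(chi * chi)
    for b, k in zip(bra_cores, ket_cores):
        # E_{(a c),(b d)} = sum_i conj(b)_{ab}^i * k_{cd}^i
        e = np.einsum('abi,cdi->acbd', b.conj(), k).reshape(chi * chi, chi * chi)
        env = env @ e
    return np.trace(env)            # closing the ring

def expect_zz(cores, p, q):
    """<psi| Z_p Z_q |psi> for a TR state given by `cores`."""
    rotated = [c.copy() for c in cores]
    for site in (p, q):
        rotated[site] = np.einsum('ji,abi->abj', Z, rotated[site])
    return tr_inner_product(cores, rotated)
```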
§.§ Complexity
In terms of memory, we note that we construct and manipulate only a tensor ring with N tensors corresponding to N qubits, which grows at the scale of O(Nχ^2) as opposed to O(2^N) for the full quantum state. Zhou et al. <cit.> establish that the tensor network bond χ can be chosen to be sufficiently low to simulate a noisy quantum computer at a computational complexity linear in the number of qubits N and circuit depth D (defined as the number of repeating parametrized blocks). The parameter-shift rule, popularized for its ability to compute gradients on a quantum computer, evaluates the gradients by computing expectations with shifted weights. However, computing the expected values with an additive error ϵ requires a many-fold implementation of the same circuit, generally on the order of O(1/ϵ^2), which adds to the statistical noise. The proposed method can compute each gradient classically with a single iteration of two circuits, each of which scales as O(NDχ^3), with an error rate controlled by χ. The error rate introduced by the truncation decreases with an increasing bond dimension χ and generally saturates at a finite value on the order of 10^-2 per two-qubit gate for circuits with large N and D. This is in contrast to the error rate on a quantum computer, characterized by the fidelity per two-qubit gate, which decays exponentially in the overall number of gates in the circuit <cit.>. The finite fidelity per gate allows us to scale the proposed algorithm in circuit depth and qubit count for larger applications. Automatic differentiation (AD), a tool prevalent in classical machine learning literature and applications, grows at least as fast as the forward pass of the network in terms of computational complexity. This indicates that classically computing the gradients of VQE by AD scales exponentially, as it would for classically computing the energy expectation of a circuit. It must be noted that the proposed method of tensor ring transformations can be used with AD as well, which again provides an exponential speedup in N and D.
§ EXPERIMENTS
To demonstrate the runtime performance and accuracy of the TR-VQE presented in Algorithm <ref>, we compare several instances of training TR-VQE for the MaxCut problem with Filtering VQE (F-VQE) <cit.> and a naive VQE implemented on the Qiskit framework (MPS-VQE). Both benchmarks use a non-noisy MPS representation to simulate the quantum computations of the circuit as formulated in <cit.>, and F-VQE is additionally implemented with an identity filter to equate the number of parameters across all experiments. A sampling noise is introduced in the implementation of MPS-VQE and F-VQE to compute the expected values from the circuit. As discussed before, MPS-VQE is expected to compute more accurate gradients than TR-VQE owing to the noise induced by the proposed TR representation. Therefore MPS-VQE converges faster, but takes longer per iteration because the tensor sizes in MPS-VQE increase with circuit depth. F-VQE additionally implements filtering operators to change the optimization landscape, thereby improving training convergence. Amaro et al. <cit.> claim that the inclusion of filtering operators leads to a faster and more reliable convergence to the optimal solution. This improvement, however, diminishes for larger circuits with more qubits (readers can refer to <cit.> for additional details on the implementation of F-VQE). We further collected data on TR-VQE to analyze how the internal configuration, namely the bond rank, and the graph size, i.e., the number of qubits, affect the performance relative to filtering and naive VQE. All of the graphs used were randomly generated with two to three edges per node, uniformly distributed weights (between 1 and 10), and uniformly sampled edge pairs. We use the same circuit ansatz for all experiments, with an initial parameterized layer of R_y gates on all qubits and a variational block repeated D times, where D represents the circuit depth. Each variational block contains a set of circular CX (CNOT) gates followed by parameterized R_y gates on all qubits, followed by another set of CX and R_y gates. The circuit depth and the tensor ring rank are set to 1 and 10, respectively, for all experiments, unless otherwise specified.
Figure <ref> indicates how each of the three algorithms performs in terms of iteration runtime across randomly generated graphs of varying sizes and different circuit ansatze. The results for each algorithm were averaged across 10 initializations, each with multiple unique MaxCut graphs of fixed size. For MPS-VQE and F-VQE, the number of shots used in the Hamiltonian evaluation was increased quadratically with graph size. Across varying graph sizes, TR-VQE's per-iteration runtime, computed as the time taken to compute the expected value of the Hamiltonian and update the parameters from the evaluated gradients, is faster than both filtering and non-filtering VQE even for smaller graphs and, by extension, smaller numbers of qubits. As illustrated in Figure <ref>, the iteration runtimes of TR-VQE consistently improve by a large margin over the benchmarks when the number of qubits is increased. Figure <ref> demonstrates the iteration runtime of each algorithm with increasing circuit depths for a graph with 10 nodes. TR-VQE again shows a significant improvement in runtime compared to MPS-VQE and F-VQE with an increasing number of layers. The results from both experiments are consistent with the theoretical claims of improved runtime complexity discussed in Section <ref>. The runtime speedup can be attributed to the constant rank and tensor sizes irrespective of circuit depth, whereas in the naive MPS-based approach, the tensor sizes increase with circuit depth.
On the other hand, TR-VQE achieves near-equivalent accuracy to the other algorithms despite the runtime speedup. Figure <ref> displays per-iteration accuracy for the algorithms, averaging data from 10 runs on various randomly generated graphs with a fixed size of 10 nodes. The accuracy was compared using the approximation ratio at each iteration, computed as defined in Equation <ref>.
The resulting data from Figure <ref> indicate that TR-VQE performs similarly to F-VQE in terms of accuracy, diverging on average by no more than 3% at any point during training. When extended to variable graph sizes, TR-VQE once again performs on par with or better than the alternative algorithms. The data in Table <ref> was collected using a TR-VQE bond rank of 10 and 1000 shots per circuit evaluation for MPS-VQE and F-VQE. Excluding an outlier at small graph sizes due to instability, MPS-VQE performed the most accurately due to the availability of more information, albeit at the cost of longer runtime. However, TR-VQE followed closely behind, with a large but inconsistent gap in accuracy between it and the least accurate F-VQE algorithm.
We also plot the approximation ratio of TR-VQE with varying TR bond rank, and it is to be noted that TR-VQE performs almost as well as MPS-VQE at ranks as low as 12, indicating that an exponential speedup can be achieved at small ranks, improving the storage complexity. All experiments, including the benchmarks, see a wide variance in accuracy at larger graph sizes due to a phenomenon called the barren plateau effect <cit.>, which is informally defined as impaired trainability due to the exponential flattening of the loss landscape in the number of qubits. Martin et al. <cit.> demonstrate that the barren plateau effect persists in quantum MPS circuits, and therefore we can surmise that tensor ring circuits, as an extension of MPS, will face a similar challenge in training.
To assess the accuracy of approximate gradients, we employ the l^2-norm to compare gradients obtained from state vector simulations and those generated using the TR-VQE method. The mean gradient distance, computed as the average norm difference across 500 randomly selected points on the optimization landscape, is used as a metric. We compare this metric with values obtained from noisy simulations that emulate the gradients on an actual quantum computer using noise models from the ibm montreal machine. We examine the mean gradient distance for various circuit depths and graph sizes.
Figure <ref>(Left) illustrates that the gradients produced by the TR-VQE method closely resemble those obtained from exact state vector simulations, with almost negligible differences. In contrast, gradients derived from quantum simulation deviate significantly from the exact gradients, a trend that becomes more pronounced as the number of qubits increases, as expected. As shown in Figure <ref>(Middle), TR-VQE's effectiveness diminishes with higher circuit depths due to the cumulative impact of two-qubit gates. However, this performance decline can be mitigated by increasing the tensor rank, as demonstrated in Figure <ref>(Right). In conclusion, gradients computed from approximate classical simulations can achieve accuracy comparable to those obtained from quantum computers. Consequently, they can be a valuable addition to the optimization process in hybrid algorithms.
§ CONCLUSION
This work proposes a novel technique for combinatorial optimization problems with Variational Quantum Eigensolvers by approximating the circuit computations with noisy tensor ring contractions. The proposed algorithm uses the parameter-shift rule to evaluate the gradients used to update the variational parameters, but computes the expected values of the shifted circuits using a tensor ring approximation. The computational complexity of circuit evaluation grows linearly in the number of qubits and the circuit depth, which offers a quadratic speedup over perfect classical simulation. Evaluating gradients using TR-VQE can also eliminate the additive error present in circuit computations on quantum computers. We validate the algorithm by implementations on several instances of the Max-Cut problem and compare with algorithms that use the full state information. The results demonstrate a vast improvement in runtime with respect to the number of qubits and circuit depth, validating the complexity analysis, at a minor cost in accuracy.
§ COMMONLY USED GATES
The matrix representation of some of the commonly used gates in the manuscript are listed below:
R_x(θ) =
[ cos(θ/2) -isin(θ/2); -isin(θ/2) cos(θ/2) ],
R_y(θ) =
[ cos(θ/2) -sin(θ/2); sin(θ/2) cos(θ/2) ],
R_z(θ) =
[ e^-iθ/2 0; 0 e^iθ/2 ]
H =1/√(2)[ 1 1; 1 -1 ]
CNOT =
[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]
R(α, β, γ) =
[ cos(α/2) -e^iγsin(α/2); e^iβsin(α/2) e^i(β+γ)cos(α/2) ]
http://arxiv.org/abs/2307.04339v1 | 20230710043044 | Miriam: Exploiting Elastic Kernels for Real-time Multi-DNN Inference on Edge GPU | Zhihe Zhao, Neiwen Ling, Nan Guan, Guoliang Xing | cs.DC | cs.DC, cs.AI
Many applications, such as autonomous driving and augmented reality, require the concurrent running of multiple deep neural networks (DNNs) that pose different levels of real-time performance requirements. However, coordinating multiple DNN tasks with varying levels of criticality on edge GPUs remains an area of limited study. Unlike server-level GPUs, edge GPUs are resource-limited and lack hardware-level resource management mechanisms for avoiding resource contention. Therefore, we propose Miriam, a contention-aware task coordination framework for multi-DNN inference on edge GPUs. Miriam consolidates two main components, an elastic-kernel generator and a runtime dynamic kernel coordinator, to support mixed-critical DNN inference. To evaluate Miriam, we build a new DNN inference benchmark based on CUDA with diverse representative DNN workloads. Experiments on two edge GPU platforms show that Miriam can increase system throughput by 92% while incurring less than 10% latency overhead for critical tasks, compared to state-of-the-art baselines.
§ INTRODUCTION
Deep learning (DL) has become a catalyst for a wide range of applications running on the edge, such as augmented reality and autonomous driving. These applications typically require the concurrent execution of multiple DNN tasks that have varying levels of criticality. For example, in mobile augmented reality, DNN inference tasks are often used for gesture recognition and user behaviour analysis, which are key components in providing a seamless user experience. This presents a major challenge as mobile/edge devices are constrained by limited computational resources for running multi-DNN inference tasks in real-time.
To support multiple DNN-based applications that have different real-time requirements <cit.>, a common practice is to share an edge Graphics Processing Unit (GPU). However, this practice poses significant challenges. On the one hand, when executing multiple DNNs simultaneously, their contention over the limited onboard resources on the same edge GPU can result in a performance bottleneck <cit.>.
On the other hand, dedicating the entire GPU to latency-critical tasks to guarantee their real-time requirements results in low GPU utilization <cit.>. Meanwhile, most of the approaches that attempt to support concurrent DNN inference tasks on GPU <cit.> require runtime support from vendors like NVIDIA Multi-Process Service (MPS) and Multi-Instance GPU (MIG) <cit.>, which are unavailable on edge GPUs due to the architectural differences.
Furthermore, multi-DNN inferences present two potentially conflicting objectives. Firstly, it is imperative that critical DNN tasks are given priority over other tasks in order to minimize end-to-end latency. This necessitates that the critical tasks are treated as first-class citizens on the GPU, with no interference from other tasks. Secondly, in order to achieve high overall throughput, all co-running DNN tasks should be concurrently executed in a best effort manner. These two conflicting objectives pose a major challenge for efficiently coordinating the inferences of multiple DNN tasks on edge GPU.
In this paper, we propose a new system named Miriam which aims to support real-time multi-DNN inference on edge GPUs by addressing the latency and throughput problems of co-running multiple DNN inference tasks. The key idea of Miriam is based on the elastic kernel [Kernel here refers to a small program that is executed on a GPU to perform the specific DNN kernel computations.], which can achieve more fine-grained resource mappings on GPU. Specifically, traditional kernels are elasticized by breaking them down into smaller, more flexible units that can be dynamically scheduled and remapped to different GPU resources based on their priority and criticality. This elasticization approach enables the padding of other GPU kernels, which maximizes GPU utilization without causing significant resource contention. As a result, critical tasks can be prioritized without compromising overall system throughput, thus improving the real-time performance of the system.
Our design is based on the key observation that the latency degradation of co-running DNN kernels is mainly caused by two dominant factors, namely intra-multi-processor (SM) resource contention and inter-multi-processor resource contention.
We leverage elastic kernels to address those two kinds of resource contention. Specifically, Miriam integrates two main components. The first component, the elastic-kernel generator, consists of an elastic grid/block generator that generates resource-controllable GPU kernels to resolve co-running DNN tasks resource contention, and a source-to-source kernel transformer that converts original GPU kernels into elastic kernels while preserving computation consistency. We also design a dynamic runtime coordinator to schedule the elastic kernels to proactively control the execution of the co-running kernel at runtime.
To evaluate the effectiveness of Miriam, we implement it as a hybrid framework based on CUDA, C++, and Python. We use a set of multi-DNN inference benchmarks for edge GPUs that include tasks with different priorities to evaluate the system's effectiveness. Our results demonstrate that, compared to existing methods, Miriam can serve significantly more requests with up to 92% throughput improvement while maintaining the inference speed for critical tasks with only a 10% increase in latency. These results highlight Miriam's superior performance in achieving efficient coordination of real-time multi-DNN inference tasks on edge GPUs.
§ RELATED WORK
To enable on-device multi-DNN inference on edge devices, prior methods such as joint DNN model compression sacrifices a modest level of accuracy for each model to reduce the computational costs of mixed DNN workloads <cit.>. In contrast, Miriam does not compromise on accuracy and can be seen as an orthogonal approach to the above systems. Other methods address this problem through new compiling techniques. For example, Veltair <cit.> proposes to generate multiple versions of compiled DNN models with different intensities of resource contention for scheduling at runtime to accelerate multi-DNN inference. However, these methods also lead to issues such as high overhead in storage and offline profiling, making them hard to scale to more use cases.
Systems like DeepEye <cit.>, Abacus <cit.>, and Dart <cit.> have utilized the interleaving of operators with different "contention channels" (memory-bound or compute-bound). Although these methods have proven to be effective, they require time-consuming offline profiling and are cumbersome to generalize for new DNN tasks. REEF <cit.> addresses the same problem of mixed-critical multi-DNN inference coordination and achieves kernel-level preemption for critical tasks. However, the approach requires modification of the GPU driver library, which is not practical in many popular closed-source devices. Heimdall <cit.> and Band <cit.> also target solving resource contention of multi-DNN inference, while they have different application settings from ours.
Warped-Slicer <cit.> employs performance versus computing unit occupancy curves for selecting an optimized simultaneous kernel pattern, but the method fails to address resource contention between kernels. Works such as HSM <cit.> and <cit.> model the latency degradation of concurrent GPU kernel executions based on hardware information, but the predictors built in these works are difficult to adapt to real-world multi-DNN inference scenarios that are characterized by nondeterministic kernel overlapping <cit.>. Other works such as Smcentric <cit.> and Effisha <cit.> tackle the GPU multitasking problem from resource management perspectives in a space-multiplexing manner <cit.>, which is orthogonal to Miriam's approach.
§ BACKGROUND
In this paper, we present the design and implementation of Miriam based on the CUDA programming model for NVIDIA GPU <cit.>. We first introduce some terminologies in CUDA. Fig. <ref> (left) shows the layout of an NVIDIA Jetson TX2 GPU, which consists of two SMs, each capable of running a number of GPU threads with a maximum size, and both SMs share the global memory.
CUDA Programming Model. A CUDA GPU has a number of Streaming Multiprocessors (SMs). Each SM contains multiple cores, which are the processing units that execute the instructions of the threads. All cores within the same SM share the same set of registers and can communicate with each other through shared memory.
Code executed by the GPU is known as a GPU kernel <cit.>.
Threads are the smallest unit of work that can be executed in parallel on a GPU, and they are organized into blocks. Each block is a group of threads that can execute concurrently on a single SM.
A grid is a collection of blocks that are organized in a three-dimensional array.
The grid defines the overall structure of the data being processed and how it is partitioned into blocks.
GPU streams are a way of organizing and executing asynchronous tasks on the GPU. Each stream is a sequence of kernels (e.g. Conv, MemCopy) that can be executed independently of other streams. Kernels in the same stream are executed in a FIFO manner <cit.>.
Kernel Execution on GPU. When launching a kernel in CUDA, we specify the dimensions of the grid and blocks. Each block is dispatched to and executed on one SM. However, whether a block can be dispatched to an SM that already has a block executing on it depends on whether there are enough remaining resources, such as thread slots and shared memory, to accommodate the new block. If there is no available SM to accommodate a block, it has to wait in a queue in a first-in, first-out (FIFO) order.
When a kernel executes on an SM, it competes for on-SM resources, such as thread slots and shared memory, with other kernels already dispatched to and executing on the same SM. This competition greatly affects the execution time of a kernel on the SM. Thus, the varying time a block waits in the queue, in addition to the varying time it takes to execute its workload on the SM, contributes to the overall varying latency experienced by the kernel.
§ MOTIVATION AND CHALLENGES
Miriam aims to support co-running DNN inference tasks on edge GPU for real-time applications. Tasks that have strict real-time requirements are referred to as critical tasks. For example, obstacle detection in autonomous driving must be finished by a certain deadline, allowing sufficient time for the vehicle to maneuver around obstructions. Tasks that do not have strict real-time deadlines are referred to as normal tasks. For example, monitoring human drivers' emotions and fatigue can be executed in a best-effort manner to improve the driving experience.
Miriam aims to meet the real-time requirements of latency-critical tasks while maximizing the overall throughput of co-running normal tasks in a dynamic manner. One common solution is to sequentially execute critical tasks and normal tasks, which can yield the lowest latency for critical task execution, but at the cost of significantly reduced overall throughput. An alternative solution is to directly execute multiple DNN tasks on the same edge GPU without proper contention management. However, this can cause increased latency for critical tasks.
Here we investigate performance degradation caused by the simultaneous execution of multiple DNN tasks. When running alone on an edge GPU, GPU kernel execution time for DNN inferences tends to remain consistent. However, the simultaneous execution of multiple DNN tasks on an edge GPU can significantly impact performance. To study this effect, we conducted an experiment using CUDA multi-stream on an NVIDIA RTX 2060 GPU where we launched a DNN task (i.e., ResNet50) with different co-runners in a closed-loop manner. In Fig. <ref> (left), we present the cumulative distribution function (CDF) of the ResNet50 latency with various co-running tasks. The results show that the latency of ResNet50 ranges from 4.4 ms to roughly 16.2 ms when co-running with VGG16, while the solo-running latency is 4.2 ms, yielding a significant variation. Meanwhile, the latency distribution pattern for different co-running model settings also varies a lot.
The primary factor that results in these large variations in latency is the complex resource contention among the co-running tasks, which can be classified into intra-SM contention and inter-SM contention, as is shown in Fig. <ref> (right). The latency experienced by a GPU kernel depends not only on the time it takes for the workload to execute on the SM (affected by intra-SM contention) but also on the time it takes for the workload to wait to be dispatched to the SM (affected by inter-SM contention). Intra-SM contention and inter-SM contention are two types of resource contention among co-running tasks on a GPU. Intra-SM contention refers to the contention within an SM, which can occur when multiple thread blocks from different kernels are dispatched to the same SM and compete for shared resources, such as registers, shared memory, and execution units. Inter-SM contention refers to the contention among SMs, which can occur when multiple thread blocks from different kernels are dispatched to different SMs and compete for shared resources, such as global memory and memory controllers. These two types of contention can cause significant performance degradation and latency variation for co-running tasks on a GPU.
Thus, given two incoming DNN task queues for normal task τ^normal and critical task τ^critical, to maximize the overall task throughput while guaranteeing the real-time performance of critical tasks, it is crucial to carefully manage the contention that arises from multiple overlapping kernels during co-execution. Our design objective is: to mitigate the latency degradation of the critical kernel during concurrent execution with the normal kernel by resolving inter- and intra-SM contention while allocating idle SM resources to the normal kernel as much as possible.
§ MIRIAM OVERVIEW
We now introduce Miriam, a holistic kernel-level system for real-time multi-DNN inference on edge GPUs. Miriam is a compiler-runtime synergistic framework that achieves fine-grained kernel-level GPU resource mapping. In this section, we first introduce the key idea of Miriam and then describe its system architecture.
§.§ Key Idea
In Section <ref>, we show that it is imperative to give careful consideration to the resource contention that arises between multiple parallel kernels. Failure to do so can result in GPU under-utilization and degradation of inference latency.
Motivated by these findings, Miriam proposes a new DNN kernel inference abstraction, the elastic kernel, which is a GPU kernel with adjustable grid size and block size. Different grid/block sizes of the elastic kernel correspond to different patterns of SM-level GPU resource usage. By transforming normal kernels into elastic kernels, Miriam can control their resource contention with the critical task, and thus maximize overall system throughput without compromising the real-time performance of the critical kernel.
To this end, Miriam generates an elastic kernel for each normal task offline and enables kernel coordination at runtime. Specifically, Miriam employs a novel elastic kernel generator to construct an elastic kernel with adjustable GPU resource usage patterns. During the runtime phase, the coordinator will select the best implementation patterns of the elastic kernels and dynamically pad them with the critical kernels to fully utilize the GPU resource.
§.§ System Architecture
Fig. <ref> shows a bird-eye view of . Miriam incorporates two parts: Offline Elastic Kernel Generation and Online Kernel Coordination, working at levels of compilation, i.e., source-to-source code transformation, and kernel coordination, respectively. They collaborate to exploit elastic kernels for supporting multiple DNN inference on edge GPUs.
The elastic-kernel generator produces elastic kernels by transforming compiler-generated or handcrafted CUDA kernels into the elastic form. We generate elastic kernels from both the grid and block perspectives of GPU kernels, which we call elastic grid and elastic block, respectively. These configuration knobs enable fine-grained control over inter- and intra-SM resources.
There are two challenges here for generating elastic kernels. First, the design space of the elastic kernel implementation patterns is too large (e.g., 2874 on average for a single kernel in AlexNet <cit.>).
Hence, we shrink the design space to decrease the number of potential elastic kernel candidates by taking hardware limitations into consideration. Second, when a kernel is launched in CUDA, the execution configuration specifies the number of threads to be launched and how they are organized into blocks and grids. Modifying the grid and block size of a DNN kernel directly can cause computation errors because it affects how threads are organized and executed on the GPU. To address this, Miriam includes a novel source-to-source kernel transformer, which transforms the GPU program of a given DNN kernel into an elastic kernel execution paradigm while ensuring the consistency of computation results.
Miriam adopts a novel dynamic kernel coordination mechanism that controls the execution of elastic and critical kernels at runtime. Specifically, Miriam profiles the SM occupancy of each elastic kernel and of the critical kernels. It then determines the grid size and block size of the next elastic kernel from the normal task queue at runtime. In this way, tasks with elastic kernels can maximize resource utilization without interfering with other co-running critical kernels.
A key challenge here is that an elastic kernel may be executed solely or in parallel with different critical kernels. Hence, we cannot determine the scheduling of the elastic kernel at the time of kernel launch. To address this issue, we design a dynamic kernel sharding mechanism, in which we divide an elastic kernel into several shards and determine the scheduling for each sharding according to run-time resource usage.
Miriam can support a wide range of applications that need to run multiple DNNs on the edge GPU.
For instance, an obstacle detection task and a navigation task need to run in parallel to achieve autonomous driving.
The obstacle detection task is critical because it is related to driving safety, while the navigation task can be executed in a best-effort manner as a normal task.
For such a DL task set, as shown in Fig. <ref>, Miriam first divides the kernels into critical kernels and normal kernels according to the criticality of their tasks. Normal kernels are compiled offline and transformed into elastic kernels by the elastic-kernel generator. At runtime, the elastic sharding policy of normal kernels is determined by the dynamic kernel coordinator to maximize resource utilization while not interfering with the execution of the critical kernel.
§ GENERATION OF ELASTIC KERNELS
To support finer control over inter- and intra-SM resources of a kernel running on the edge GPU, we propose an elastic kernel generator. The design principle of Miriam is based on the insight that both the block and grid's resource allocations can be distilled from the native GPU programming model. Fig. <ref> illustrates the design of the proposed elastic kernel generator: elastic block and elastic grid. By separating resource allocation for thread blocks from the logic-level grid and thread block identity, this approach generates resource-controllable GPU kernels for further resolving co-running DNN tasks resource contention problems.
To improve the efficiency of the elastic kernel generation process, Miriam shrinks the design space of elastic kernels according to hardware limitations, as well as observations on co-running DNN kernels from the critical and normal task queues. Moreover, to maintain the correctness of elastic kernel computation after the transformation, we design a source-to-source kernel transformer. Our transformer converts original GPU kernels into elastic kernels while preserving computational equivalence.
§.§ Controllable Intra-SM Resource by Elastic Block
DNN kernels can be broadly categorized into memory operations (memory allocations, memory transfers, etc.) and kernel execution. To enable the execution of a single kernel on multiple GPU SMs, GPU programming divides a large kernel into multiple sub-kernels, each of which is executed by a GPU block.
The block size is determined by the computation workload of each sub-computation. Blocks with smaller sizes consume fewer threads in each instruction cycle.
Multi-DNN inference on edge GPU can cause severe intra-SM contention when multiple thread blocks from different kernels compete for the resource within the same SM. Some blocks would fail to execute or delay, which leads to a decrease in the overall throughput and an increase in the corresponding latency of the DNN inference. For this issue, one possible solution is to perform code-level optimization of the GPU kernel. This approach includes optimizing the memory access patterns and reducing unnecessary computations to decrease the intra-SM resource usage, and thus alleviates intra-SM contention. However, optimizing GPU codes for a specific DNN model is challenging and time-consuming. Different optimization techniques such as loop-tiling, loop-unrolling and parallelization naturally have different trade-offs in terms of execution performance, memory usage, and code complexity. Achieving the appropriate balance among those factors requires careful experimentation and tuning.
Adapting codes for different concurrent kernels from diverse tasks demands a significant amount of effort and may not generalize well, thereby restricting the effectiveness and applicability of the optimization techniques.
To carefully manage the resource usage of each block, Miriam adjusts the number of threads within the targeted block to generate elastic blocks for each thread block. We adopt the persistent thread technique <cit.>, which is capable of adjusting a kernel's resident block size on an SM. In contrast to traditional kernels, where threads terminate after completing the kernel execution, persistent threads remain active throughout the execution of a kernel function. We limit each elastic block size to fall between 1 and the maximum resident block size. We also transform the default 1:1 logical-to-physical thread mapping scheme into an N:1 mapping scheme while preserving the original program semantics.
Compared to static block fusion <cit.>, which fuses multiple thread blocks from different GPU kernels into a single one to reduce unnecessary loads and stores, our persistent thread design does not require pre-compilation of all possible combinations of kernels. This feature enables flexible SM-level resource mapping at runtime.
Our elastic kernel is designed to stay within the shared memory limit, and we achieve this by modifying the way we control the intra-SM resources, including shared memory, compared to the original kernel. This modification results in a memory occupancy that is either equal to or less than that of the original kernel.
While the persistent thread mechanism provides fine-grained control over intra-SM parallelism, it comes with nontrivial overhead. The optimal number of launched persistent threads does not always equal the maximum number of concurrently executing threads from all thread blocks that a single SM can afford. Hence, we narrow the design space of the elastic block, as introduced in Section <ref>.
§.§ Elastic Grid for Inter-SM Contention
While the elastic block design can resolve intra-SM thread-slot contention, inter-SM memory (e.g., DRAM, L2 cache) fetching contention can still be a severe problem if the blocks inside a kernel are launched directly. DNN kernels often use a large number of blocks to hide stall cycles due to data access; thus, when multiple DNN inference requests arrive in rapid succession, the multiple SMs allocated to execute the requests contend for shared resources (e.g., the memory bus) and have to wait for each other, leading to decreased execution performance.
Miriam proposes an elastic grid generator that slices the initial grid into multiple smaller grids. This approach can improve resource utilization and reduce inter-SM contention by allowing more efficient memory accesses across multiple SMs.
Elastic grid generation implies a kernel slicing plan: given a kernel K, a slicing plan S(K) is a scheme that slices K into a sequence of n slices [s_0, s_1, s_2, ..., s_n-1] based on thread-block-granularity partitions.
Thus, given a set of kernels, the problem is to determine the optimal grid slicing policy of the initial kernel when co-running with other tasks with different workloads.
To formulate, as for a DNN kernel K with M thread blocks, a dichotomy algorithm-based slicing plan S(K) can be applied to K. Specifically, there would be a sequence of slicing schemes represented as:
S(K)=(M/2^n, M/2^n-1, …, M), n = max{ i : M mod 2^i = 0 }
where n is the largest exponent such that 2^n divides M. By doing this, we enable normal kernels to be issued with a flexible number of thread blocks on SMs, co-locating with critical kernels. By dividing the single kernel into multiple slices, the sliced grids can be scheduled to run independently by the GPU, allowing the GPU to interleave their execution with the execution of other critical kernels. The elastic grid design efficiently reduces co-locating kernels' inter-SM memory contention by improving the time-multiplexing potential of the kernel with other kernels, allowing the GPU to better balance the allocation of resources and maximize overall performance.
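A small sketch of the dichotomy-based slicing plan: given the original grid size M, it returns the candidate slice sizes (M/2^n, …, M) with n the largest exponent of two dividing M. This is an illustration of the equation above, not Miriam's actual code:

```python
def slicing_plan(m_blocks):
    """Dichotomy-based slicing plan S(K) for a kernel with m_blocks thread blocks."""
    n = 0
    while m_blocks % (2 ** (n + 1)) == 0:
        n += 1                      # n = largest i with M mod 2^i == 0
    return [m_blocks // (2 ** k) for k in range(n, -1, -1)]

print(slicing_plan(24))   # [3, 6, 12, 24]
```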
§.§ Workload-balanced-guided Design Space Shrinking
We need to determine the execution parameters of the elastic kernel at runtime, which include the grid number (N_blk_be) and the block size (S_blk_be). We call each pair of execution parameters a schedule. A main challenge here is the huge number of feasible schedules, which makes it difficult to enumerate schedules or heuristically find optimal ones at runtime.
The total number of feasible schedules is exponential to the number of operators in the incoming model and the size of input data. For example, an implemented AlexNet model in the Tango benchmark with an input image size of 3x224x224 can have up to 2.2 × 10^25 feasible schedules for all Conv kernels <cit.>.
To address this challenge, we shrink the design space for each kernel by removing combinations of elastic grid sizes and block sizes that may result in dispatch failure due to severe resource contention. In other words, Miriam narrows down the design space by eliminating configurations that are expected to have low performance.
When multiple kernels are co-running, thread blocks from different kernels can have many possible inter-leavings of SM-level contention or inefficiency. We propose two constraints to address these issues as shown in Eq. <ref>, and the specific parameters of these factors are shown in Table 1.
N_blk_be⩽ N_SM - N_blk_rt mod N_SM
S_blk_be⩽ L_threads - blk_size_rt
The first constraint is based on the observation that workload across SMs is unbalanced. This kind of imbalance appears broadly when the number of thread blocks is not a multiple of the number of SMs inside an edge GPU. To address this issue, we prune cases where the number of thread blocks of elastic kernels exceeds the remaining available SMs after dispatching all the thread blocks from critical kernels.
The second constraint addresses intra-SM workload balance, which aims to reduce contention between thread blocks from different kernels competing for resources within an SM. It is necessary to ensure that each SM has as much workload as possible and that the workload is balanced. If the workload in an SM is too light, then the resources in that SM may be wasted. On the other hand, if the workload in an SM is too heavy, it may lead to resource contention and performance degradation. Based on the intra-SM workload balance constraint, we prune cases where the working threads of an elastic kernel far exceed the spare intra-SM resources left after occupation by blocks from the critical kernel.
To formulate these two inefficiency cases, we define WIScore as a workload imbalance metric:
WIScore = (N_blk_rt mod N_SM + N_blk_be)/N_SM × (blk_size_rt + S_blk_be)/L_threads    (4)
where the value of WIScore ranges over [0,1]. Another factor we consider when shrinking the design space is the dispatch overhead of the elastic kernels, which ensures that the potential schedule generated for each elastic kernel is feasible and does not violate critical decision-making requirements. Miriam prunes infeasible cases using OScore:
OScore =
1, if ∑_i LO_blk(k_be_i) < MAX_blk and ∑_i LO_pt(k_be_i) < MAX_pt, ∀ i ∈ [1, N_shard]
0, otherwise
(5)
where the function LO() represents the launch overhead, which equals the sum of the launch times of the elastic kernel fragments minus the launch time of the initial normal kernel. OScore is set to 0 when the overhead exceeds the maximum acceptable bar we set, which is a constant.
The product of the WIScore and OScore values computed for each elastic kernel candidate gives a metric that can be used to navigate the narrowing of the design space toward the performance boundary. Specifically, by multiplying these two scores (WIScore × OScore), we can identify the candidates that are likely to achieve the best performance within the given design space. Miriam computes this product for every possible combination of elastic kernel implementation settings. Determining the optimal percentage of candidates to select is difficult, since it is unclear how many candidates need to be chosen to ensure that Miriam finds the best parameters within the pruned design space. Thus, we test some representative tensor operations (such as convolution in CifarNet <cit.> and matrix multiplication in GRU <cit.>) and then pick out the top 20% of combinations among all the candidates to be used in the next stage of runtime kernel coordination. Throughout these tests, we do not find any case in which the model prunes the best-performing set of parameters.
With the assistance of constraint injections, we can greatly reduce the design space without sacrificing the candidate elastic kernel's performance. This feature is especially useful given the large number of possible kernel configurations in modern edge GPUs.
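The pruning logic can be summarised in a short sketch; the feasibility checks and the ranking by WIScore × OScore follow the description above, while the OScore callable and the parameter names are stand-ins for Miriam's internal values:

```python
def wi_score(n_blk_rt, n_blk_be, blk_size_rt, s_blk_be, n_sm, l_threads):
    """Workload-imbalance metric in [0, 1]; higher means fuller, balanced SMs."""
    inter = (n_blk_rt % n_sm + n_blk_be) / n_sm
    intra = (blk_size_rt + s_blk_be) / l_threads
    return inter * intra

def feasible(n_blk_rt, n_blk_be, blk_size_rt, s_blk_be, n_sm, l_threads):
    """Eq. (2)-(3): drop schedules that overflow inter- or intra-SM capacity."""
    return (n_blk_be <= n_sm - n_blk_rt % n_sm and
            s_blk_be <= l_threads - blk_size_rt)

def shrink_design_space(candidates, n_blk_rt, blk_size_rt, n_sm, l_threads,
                        o_score, keep_ratio=0.2):
    """Keep the top `keep_ratio` of (grid, block) candidates by WIScore * OScore."""
    scored = []
    for n_blk_be, s_blk_be in candidates:
        if not feasible(n_blk_rt, n_blk_be, blk_size_rt, s_blk_be, n_sm, l_threads):
            continue
        w = wi_score(n_blk_rt, n_blk_be, blk_size_rt, s_blk_be, n_sm, l_threads)
        scored.append((w * o_score(n_blk_be, s_blk_be), (n_blk_be, s_blk_be)))
    scored.sort(reverse=True)
    keep = max(1, int(len(scored) * keep_ratio))
    return [cfg for _, cfg in scored[:keep]]
```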
§.§ Source-to-Source Elastic Kernel Transformer
Before assessing the effectiveness of elastic kernel design, it is crucial to investigate whether the grid or block sizes of DNN kernels can be modified directly from the original user-developed or compiler-generated GPU programs. An experiment was conducted on the benchmarks of Tango <cit.> to evaluate the effectiveness of direct kernel transformation. The results of the experiment showed that only 7.4% of the implemented kernels in the Tango benchmarks were compatible with grid/block size adjustment without requiring modifications to computation schedules inside kernels.
This is because the block size and grid size defined in a kernel are determined by the computation schedule of the kernel, either written directly in CUDA code or through declarative loop-oriented scheduling primitives in DNN compilers, which bind symbolic-extent logical threads to physical GPU threads, as shown in Fig. <ref>. This constraint motivates us to design a source-to-source kernel transformer that can support our elastic kernel design.
Miriam rapidly and equivalently transforms a DNN kernel by injecting a piece of code at the beginning of each kernel that checks the computation and memory offsets to determine where the kernel begins and ends after being evicted. Specifically, we compute a global thread identifier and use it as a basis for SM-level workload distribution. This identifier takes the thread ID as input and produces a corresponding index for the data element accessed by the thread. We replace references to physical threads (e.g., gridDim) and identity variables (e.g., threadIdx.x) in the original kernel code with logical equivalents. Miriam employs two approaches for implementing the index function: computation-based and memory-based. The computation-based approach computes the index within the kernel when the thread accesses the corresponding data element. Alternatively, in the memory-based approach, the indices are pre-calculated on the host side (i.e., the CPU) prior to kernel launch and stored in shared memory for use during kernel execution.
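The essence of the transformation — decoupling logical thread/block identity from the launched (elastic) configuration — can be illustrated with the computation-based index function below. It is written in Python for readability, whereas Miriam injects the equivalent arithmetic into the CUDA kernels; all names here are illustrative:

```python
def logical_ids(physical_block, physical_thread, elastic_grid, elastic_block,
                logical_grid, logical_block, shard_offset=0):
    """Map a physical (block, thread) pair of an elastic kernel launch to the
    logical (blockIdx, threadIdx) pairs of the original kernel it must cover."""
    total_logical = logical_grid * logical_block
    launched = elastic_grid * elastic_block
    global_tid = physical_block * elastic_block + physical_thread
    # Persistent-thread style N:1 mapping: each physical thread strides over
    # several logical threads of the original kernel (plus the shard offset).
    work = []
    for logical in range(shard_offset + global_tid, total_logical, launched):
        work.append((logical // logical_block, logical % logical_block))
    return work
```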
§ RUNTIME DYNAMIC KERNEL COORDINATION
This section introduces our design for the online scheduler that coordinates elastic kernels. First, we call each configured elastic kernel fragment (i.e., with a chosen elastic grid and elastic block) an elastic kernel shard. Our guidelines for designing the coordinator are two-fold: maximizing overall real-time performance and mitigating resource contention. To achieve these goals, our runtime coordinator constantly monitors the available GPU resources, considering both the critical kernels and the elastic kernels. It then determines which elastic kernel shards can co-run effectively with the critical kernels.
Execution timeline of co-running kernels. Upon receiving multiple normal task requests b1...bn, Miriam pushes all their kernels into a normal task queue, and the kernels are dispatched to the GPU through multiple streams. Once a critical task arrives, Miriam instantly selects appropriate elastic kernel fragments of the following normal kernel in a "bin-packing" manner, considering the current intra- and inter-SM resource distribution. Once the critical kernels finish executing, the kernels from normal tasks re-occupy the GPU.
Grid/block size determination of elastic kernels.
During runtime, a fixed elastic grid and block setting for elastic kernels can easily become inefficient, since the optimal co-scheduled elastic kernel shards vary across different co-running critical kernels. For example, if one critical kernel finishes while half of the computation of the co-locating elastic kernel remains unfinished, the remaining thread blocks can lead to severe resource contention or under-utilization when co-locating with the subsequent critical kernel. The selection policy for elastic kernel shards is crucial in order to prevent latency interference with critical tasks. To ensure optimal performance, one approach is to build a duration prediction model for the formation of operator groups based on runtime performance events (e.g., cache misses and global memory bandwidth) <cit.>, and control the kernel overlap based on the model. However, runtime events are not supported on edge GPUs like NVIDIA Jetson devices, and the hardware events reported by tools like Nsight Systems and Nsight Compute can only be obtained with high overhead. Thus, this method cannot be applied to our problem (where kernel overlaps are not predetermined) in a practical way.
To address these challenges, Miriam adopts a greedy scheduling policy. Specifically, when the elastic kernel partially overlaps with the critical kernel, the kernel coordinator must carefully balance the resources allocated to each kernel. In this case, the coordinator needs to ensure that the padded elastic kernel does not interfere with the execution of the critical kernel, while still using as many available resources as possible. When the padded kernel runs on its own, the kernel coordinator can allocate all of the available resources to it, since there are no other tasks running on the GPU. This allows the kernel to run as efficiently as possible, without any interference from other tasks. To efficiently manage the elastic kernels while achieving this goal, we propose a dynamic-sized shaded binary tree approach for elastic kernel shard formation, which achieves high runtime efficiency and low resource contention across different combinations of overlapped kernels.
Our shaded binary tree structure is an abstraction for managing the elastic kernel shards, similar to a complete binary tree of shards, as shown in Fig. <ref>. The root of the tree represents the kernel from the normal task, whose initial grid size is M. Each node corresponds to a part of the computation, i.e., potential thread blocks to be dispatched inside the kernel. The shading property of each node is the elastic block size of the thread block. Directed edges indicate the potential sliced peers for the unfinished computations left over from the predecessor. The whole structure is composed of actual shards and virtual shards. The actual shards are the ultimately formed elastic kernel shards to be dispatched, and the virtual shards are the potential fragments of the elastic kernel that will not be dispatched.
Miriam relies on the dynamic shaded kernel binary tree structure to manipulate the elastic kernels from normal tasks and determines the elastic kernel shards with heuristics based on the number of thread blocks of kernels from both critical and normal tasks. Fig. <ref> illustrates the life cycle of an elastic normal kernel. For elastic fragment selection from normal kernels, the policy is to pick a set of elastic blocks from the head of the shaded kernel binary tree to share SM-level resources with co-locating thread blocks from resident critical kernels with trivial contention. Miriam's policy ensures that the elastic blocks from normal kernels only use the leftover resources from the critical kernels.
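A sketch of the greedy shard-selection step the coordinator performs when a critical kernel is resident: it walks the shaded-binary-tree candidates (simplified here to a list of (blocks, block_size) shards) and picks the largest shard that fits the leftover inter- and intra-SM budget. The data structures are simplified stand-ins for Miriam's internal ones:

```python
def pick_shard(shard_candidates, n_blk_rt, blk_size_rt, n_sm, l_threads):
    """Greedy selection of the next elastic kernel shard to pad alongside a
    resident critical kernel, using only leftover SM-level resources."""
    spare_blocks = n_sm - n_blk_rt % n_sm        # idle SMs after critical dispatch
    spare_threads = l_threads - blk_size_rt      # leftover thread slots per SM
    best = None
    for n_blk_be, s_blk_be in shard_candidates:  # candidates from the shaded tree
        if n_blk_be <= spare_blocks and s_blk_be <= spare_threads:
            if best is None or n_blk_be * s_blk_be > best[0] * best[1]:
                best = (n_blk_be, s_blk_be)      # keep the largest fitting shard
    return best                                  # None -> wait until the GPU frees up
```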
§ EVALUATIONS
§.§ Experiment Setup
We implemented Miriam based on NVIDIA CUDA 11.2 <cit.> for elastic kernel generation and online kernel scheduling, and Python3.6 for the source-to-source kernel transformer.
§.§.§ Implementation and Testbed.
Our experiments are conducted on an NVIDIA GeForce RTX 2060 that features 1920 CUDA cores and an NVIDIA Jetson AGX Xavier with Pascal GPU architecture and 256 NVIDIA CUDA cores <cit.>. Note that Miriam is extensible and can work well on other GPU platforms that officially support OpenCL, HIP, or other CUDA-like programming paradigms, such as the AMD Embedded Radeon™ E9170 <cit.>.
§.§.§ DNN Workloads.
We use six popular DNN models from both the computer vision and language processing fields to evaluate Miriam. Inspired by DISB <cit.>, we build a benchmark named MDTB (Mixed-critical DNN Task Benchmarks) based on CUDA-implemented kernels to fully demonstrate the performance and generalization of our framework, summarized in Table <ref>. The MDTB benchmark simulates three patterns of inference requests from clients: (1) arrival following a uniform distribution, where the client sends inference requests at a fixed frequency (e.g., 10 requests/second), simulating critical applications such as pose estimation; (2) arrival following a Poisson distribution, simulating event-driven applications such as obstacle detection; and (3) closed-loop workloads, which simulate a client that keeps sending inference requests back-to-back.
We choose six representative DNN models for MDTB: AlexNet <cit.>, SqueezeNet <cit.>, GRU <cit.>, LSTM <cit.>, ResNet <cit.>, and CifarNet <cit.>, all implemented in CUDA. We conduct neural network inference with a single batch of 224x224x3 images as input to mimic inference in real applications.
§.§.§ Baselines.
We compare Miriam with multiple DNN scheduling approaches on edge GPUs. Sequential selects one model from the two task queues (critical and normal) in a round-robin fashion and performs inference one request at a time. In this mode, critical tasks run independently, occupy the GPU resources exclusively, and achieve optimal end-to-end latency. GPU Multi-stream with Priority enqueues kernels from both critical and normal tasks at the same time, and models are executed in parallel; this is the approach adopted by NVIDIA Triton <cit.>. Inter-stream Barrier (IB) is the state-of-the-art multi-DNN operator scheduling method based on multi-stream <cit.>. It uses inter-stream barriers to manually synchronize kernel dispatch among different kernels, so that concurrency among kernels can be controlled through stream- and synchronization-based mechanisms.
§.§.§ Metrics.
We use the overall throughput, the end-to-end latency for critical tasks, and the achieved occupancy as our evaluation metrics.
End-to-end Latency of Critical Tasks. This metric measures the end-to-end inference speed of critical tasks with real-time demands.
Overall Throughput. This metric represents how many user requests Miriam can serve on the target edge GPU.
Achieved Occupancy. By definition, achieved occupancy is the average ratio of active warps on an SM to the maximum number of active warps supported by the SM<cit.>, defined as below:
Achieved Occupancy = (Active_warps / Active_cycles) / MAX_warps_per_SM
We use this metric to evaluate the fine-grained GPU utilization of our system performance.
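For clarity, the metric can be computed directly from raw profiler counters, for example as below; the counter names and the 48-warp capacity are placeholders, since the actual per-SM warp limit is device dependent:

```python
def achieved_occupancy(active_warps, active_cycles, max_warps_per_sm=48):
    """Average number of active warps per cycle on an SM, normalized by the
    maximum number of warps the SM can host (48 is only an example value)."""
    avg_active_warps = active_warps / active_cycles
    return avg_active_warps / max_warps_per_sm

# e.g. 1.2e6 active warps accumulated over 5e4 cycles on a 48-warp SM -> 0.5
print(achieved_occupancy(1.2e6, 5e4))
```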
§.§ Overall Performance
To show the gain in overall system throughput achieved with little sacrifice of the real-time performance of the critical tasks, we compare Miriam against other GPU scheduling approaches under the MDTB A-D workloads on two edge GPU platforms.
We merge the discussion of the uniform and Poisson distributions of critical task requests because their workloads are comparable, which allows us to analyze and discuss their similarities more efficiently.
Closed-loop Critical Tasks (MDTB A). Workloads with closed-loop critical tasks (AlexNet) experience significant resource contention when co-running with normal tasks (CifarNet). Fig. <ref> (a)-(d) show that, compared to Sequential, Multi-stream and IB increase the critical task latency by 1.95× and 1.52× on the 2060 and by 2.02× and 1.77× on Xavier, respectively, while Miriam incurs only a 21% and 28% overhead on critical tasks. Miriam also improves overall throughput by 64% and 83% on the two platforms, significantly outperforming the other approaches under MDTB A workloads. We observed that IB's throughput is even worse than Sequential's because the frequent launching of critical tasks requires inserting more synchronization barriers among GPU streams to manage kernel groups, which results in significant overhead. In terms of achieved occupancy, Fig. <ref> (e) and (f) demonstrate that Miriam achieves higher SM-level GPU utilization than the other baselines. It is important to note that achieving nearly 100% theoretical occupancy is difficult for DNN inference tasks due to their large thread blocks, which can easily lead to resource idleness or an SM's inability to cover memory access latency <cit.>.
Uniform/Poisson Critical Tasks (MDTB B, C, and D). As the launching frequency of critical workloads decreases, the overall throughput of all approaches improves to different degrees compared to vanilla Sequential, owing to the increased opportunities for normal tasks to share GPU resources with critical tasks. We observed that Miriam outperforms the other approaches in this scenario. For instance, using MDTB B, C, and D on Xavier, Miriam increases overall throughput by 1.85×, 1.79×, and 1.91× over Sequential, which is much better than the other baselines. While Multi-stream and IB also yield improved throughput compared to Sequential, by 1.34× to 1.73×, they cause severe latency degradation of 32% to 88% for the critical tasks, whereas Miriam incurs a latency overhead of less than 21% on these benchmarks. This improvement can be attributed to our elastic kernel design and runtime dynamic kernel coordination approach. Since the Sequential approach exhibits the shortest latency for each critical task, the comparison demonstrates that Miriam maximizes overall throughput while preserving the end-to-end latency of critical tasks. From a GPU utilization standpoint, Miriam increases the average number of active warps per cycle, resulting in better SM utilization. These results confirm the effectiveness of our elastic kernel sharding approach and demonstrate our ability to effectively pad critical kernels.
We observe that the performance improvements offered by Miriam may not always translate into higher SM occupancy on the Jetson Xavier. This is because Xavier has far fewer onboard resources and a smaller number of SMs than the 2060. Additionally, the relatively low memory bandwidth of the Xavier can limit the amount of data that can be transferred between memory and the SMs, leading to performance bottlenecks for complex models. The thermal design power of the Xavier is also relatively low compared to the 2060, which limits the power the GPU can consume and the heat it can generate. This can reduce the clock speed of the processor cores and the degree of parallelism that can be achieved, which in turn weakens the relationship between SM occupancy and performance.
§.§ In-depth Analysis of Miriam
To better understand why Miriam performs better than other GPU scheduling approaches under severe contention, we provide an in-depth analysis in this section, with two AlexNet models co-running on a single RTX 2060 GPU: AlexNet-C, which serves as the critical task, and AlexNet-N, which serves as the normal task. Both tasks are launched in a closed-loop manner.
In Fig. <ref>, the upper two rows show the timelines of active kernels from the two co-running DNN tasks, which illustrate the performance difference between Miriam and Multi-stream. The figure is sketched based on real profiling results obtained from NVIDIA Nsight Systems <cit.>, in which blue represents the critical task, green represents normal-task kernels launched by vanilla Multi-stream, and pink represents elastic kernels of the normal task scheduled by Miriam. As shown in the figure, there are clearly more pink blocks than green blocks, and these pink blocks are tightly padded against the blue blocks, showcasing the elastic kernel shards padded with the critical kernels. The end-to-end latency of AlexNet-C under Miriam is much lower than under Multi-stream.
We also show the corresponding achieved occupancy for this case in Fig. <ref>. The average layer-wise achieved occupancy is 65.25% for Miriam and 32.9% for Multi-stream. As mentioned, more active warps per cycle on average and less contention overhead are the keys to improving parallelism while preserving the speed of critical tasks.
§.§ Evaluations on Design Space Shrinking
Miriam filters out the definitely-slow cases (80%) by applying hardware limiters, as detailed in Section 6.3. The trade-off between the elasticized scale (i.e., the depth of the dynamic shaded binary tree, as discussed in Section 7) and the scheduling granularity is a critical consideration for different implementations of elastic kernels, as shown in Fig. <ref>, and it guides the further shrinking process. For instance, an elastic kernel shard with elastic_grid_size=1 is flexible enough to accommodate other critical kernels, but the launch overhead for such shards may be too large due to the increased number of kernel shards. Fig. <ref> summarizes the pruned space of candidate elastic kernels for the models in MDTB, ranging from 84% to 95.2%. The pruned space may differ across candidate models due to multiple factors, such as the complexity of the models (i.e., the operator types used) and the input size.
§.§ Case Study: Autonomous Driving with LGSVL
We further use a real-world trace from an open autonomous driving platform (i.e., LG SVL <cit.>) as the workload, which provides a realistic arrival distribution of critical tasks (i.e., obstacle detection) and normal tasks (i.e., pose estimation) in autonomous driving.
The trace was collected from a 3D lidar perception module and a 2D camera perception module while running the LGSVL simulator, and we selected backbones from the models included in our MDTB benchmark: SqueezeNet to simulate pose estimation as the normal task (lidar data), and ResNet for obstacle detection as the critical task (camera data). The clients send inference requests following a uniform distribution, with a frequency of 12.5 Hz for the normal task and 10 Hz for the critical task, as shown in Fig. <ref>. The experiment was conducted on the RTX 2060.
Fig. <ref> shows the experimental results for this real-world workload. Compared to Sequential, Multi-stream and IB increase the overall throughput by 1.41× and 1.25× while amplifying the critical task latency by 82% and 56%, respectively. Due to the low launching frequency of both the critical and normal tasks (10 and 12.5 Hz), the elastic kernels of the normal task can execute concurrently with the critical task with little eviction overhead for the elastic kernel shards. Overall, Miriam achieves an 89% improvement in overall throughput compared to Sequential while incurring only an 11% latency overhead for the critical task. This demonstrates that Miriam can achieve a large throughput improvement through our elastic kernel design with little sacrifice of critical task latency, which is also confirmed by the highest SM occupancy among all baselines, shown in Fig. <ref> (c).
§.§ System Overhead
The scheduling overhead of Miriam consists of two main parts. The first part is the runtime elastic kernel shard selection, which scans the shard candidates and has complexity O(N). Owing to the low complexity of this scheduling mechanism, we find that the overall average overhead for serving each DNN model is less than 0.35 ms. The second part is the launch-time overhead for critical kernels due to the padding of the elastic kernels; we evaluated this overhead and found that in most (over 80%) cases it is less than 15 μs. This latency overhead is mainly caused by contention on the texture cache and L2 memory, which we leave for future work.
§ DISCUSSION
Scalability. We believe that Miriam has the potential to be scaled beyond pair-wise DNN tasks co-running and can support more general tasks. However, due to the large number of co-running kernel possibilities, some additional considerations must be taken into account. These include establishing a scheduling policy for normal tasks with the same priority, as well as finding an efficient way to perform offline kernel profiling since the design space increases exponentially.
Integrated with DNN Compiler. Representative DNN compilers like TVM <cit.> can generate high-performance DNN kernels with low latency using auto-tuning <cit.>. However, DNN compiling is an offline approach with a long compilation time, and the generated kernels can not be easily modified at runtime. This creates a gap between static compilation and dynamic scenarios in IoT applications, particularly when on-device resources become available dynamically.
To fill this gap, Miriam can serve as a post-compiling runtime to ensure that the on-device resources are fully utilized during runtime in an adaptive manner.
Orthogonal to Other Approaches. Miriam can work symbiotically with other optimized DNN execution approaches, such as model compression <cit.> and edge-cloud offloading <cit.>, to execute multi-DNN workloads effectively. With such a collaborative approach, it becomes possible to achieve improved runtime performance and better resource utilization, enabling effective execution of multi-DNN workloads in resource-constrained edge computing environments.
§ CONCLUSION
We propose a novel system named Miriam that addresses latency and throughput problems of co-running multiple DNN inference tasks on edge GPUs. The proposed system utilizes elastic kernels to facilitate fine-grained GPU resource re-mapping and a runtime dynamic kernel coordinator to support dynamic multi-DNN inference tasks. Experimental results on a benchmark we built on two types of edge GPU show that Miriam can significantly improve the overall system throughput while incurring minimal latency overhead for critical tasks, compared to dedicating the GPU to critical tasks.
|
http://arxiv.org/abs/2307.04829v1 | 20230710180755 | General wetting energy boundary condition in a fully explicit non-ideal fluids solver | [
"Chunheng Zhao",
"Alexandre Limare",
"Stephane Zaleski"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
a]Chunheng Zhao
a]Alexandre Limare
a,b]Stephane Zaleski
[a]Sorbonne Université and CNRS, Institut Jean Le Rond d'Alembert UMR 7190, F-75005 Paris, France
[b]Institut Universitaire de France, Paris, France
We present an explicit finite difference method to simulate non-ideal multi-phase fluid flow. The local density and the momentum transport are modeled by the Navier-Stokes (N-S) equations, and the pressure is computed by the Van der Waals equation of state (EOS). Simulations of a static droplet and of the dynamics of liquid-vapor separation are performed as validations of this numerical scheme. In particular, to maintain thermodynamic consistency, we propose a general wetting energy boundary condition at the contact line between the fluids and the solid boundary. We conduct a series of comparisons between the current boundary condition and the constant contact angle boundary condition as well as the stress-balanced boundary condition. The current boundary condition alleviates the instability induced by the constant contact angle boundary condition at θ≈0 and θ≈π. Using this boundary condition, the equilibrium contact angle is correctly recovered, and the contact line dynamics are consistent with simulations that apply a stress-balanced boundary condition. Nevertheless, unlike the stress-balanced boundary condition, for which we need to further introduce the interface thickness parameter, the current boundary condition implicitly incorporates the interface thickness information into the wetting energy.
* Energy consistent boundary condition for single species multi-phase Van der Waals' model
* Explicit finite difference method with adaptive mesh refinement
Keywords: Van der Waals; energy-consistent boundary condition; explicit finite difference
§ INTRODUCTION
Fluids spreading on solids are practical multi-phase systems in the real world <cit.>. Industrial applications of solid wetting research can be found in 3D printing <cit.>, nucleate boiling <cit.>, and surface material construction <cit.>. Numerical simulations of the wetting problem are made extremely difficult, or even impossible, by the very wide range of length scales involved, from macroscopic down to nanometric <cit.>. For models such as level-set and volume of fluid (VOF), which treat the interface between two fluids as a sharp interface, the no-slip boundary condition contradicts the actual behavior observed in droplet spreading <cit.>. To address the limitation of those methods on the moving contact line, researchers have implemented explicit Navier-slip or implicit numerical slip boundary conditions <cit.>. Nevertheless, the boundary conditions associated with the sharp interface method introduce nonphysical dynamics and prove ineffective for handling small or large contact angles. Hence, it is worth considering the diffuse interface method, a thermodynamically consistent mathematical model for multi-phase systems, to effectively simulate the dynamics of contact lines <cit.>. The diffuse interface method introduces energy dissipation, enabling the modeling of droplet spreading even with a no-slip or small-slip-length boundary condition <cit.>. In the vicinity of a diffused contact line, the bulk free energy and the surface energy determine the contact line profile as well as the fluid flow <cit.>. Moreover, the boundary condition within the diffuse interface method can be described through a wetting energy that ensures thermodynamic consistency <cit.>. By employing the diffuse interface method, it becomes possible to accurately simulate contact angles, regardless of whether they are small or large in magnitude.
A well-known classical diffuse interface method is derived from the Van der Waals (VDW) equation of state (EOS) for a single species, (p+aρ^2)(1/ρ-b)=RT, with classical notations, where a and b are modification parameters of the molecular interaction and the molecular volume respectively <cit.>. Under the pressure and energy-driven mechanism, the VDW method is able to separate the single species into two phases, one with higher density, and the other one with lower density. Compared to the Cahn-Hilliard (C-H) method, the VDW has some different characteristics to be noted. First, the VDW method describes the single species phase change where the interface is indicated by the local density ρ, while the C-H method describes the physical situation of a binary system of two essentially immiscible species and the interface profile is normally steeper than the VDW method. In addition, in the VDW model, the bulk energy density is computed by the entropy and molecular interaction which can be represented by the equation ρ f_0=- ρ RTlog(1/ρ-b)-aρ^2. In contrast, the bulk free energy density adopted in the C-H is a double-well fourth-order polynomial ρ f_0=β (ρ-ρ_l)^2(ρ-ρ_g)^2, where β denotes the constant bulk energy coefficient, and ρ_l, ρ_g are saturated liquid and gas densities. One of the benefits of using the C-H type energy form is it allows us to accurately represent the flat interface profile at equilibrium using a hyperbolic tangent function. Additionally, the C-H energy form enables us to explicitly determine the interface thickness and the surface tension <cit.>. However, the inclusion of a fourth-order partial differential equation greatly amplifies the intricacy of the problem, thereby intensifying the difficulty of numerically simulating the C-H equation. Conversely, the VDW method offers a viable diffuse interface approach that is not only comparatively efficient but also valid.
Over the past few decades, extensive research has been conducted to numerically investigate the diffuse interface model of single species multi-phase systems <cit.>, and various boundary condition methods have been employed in the context of the diffuse interface model <cit.>. The stress-balanced boundary condition, as proposed in <cit.>, takes into account a smooth variation of surface tension at the diffused interface along the solid boundary. Moreover, from a thermodynamic perspective, the energy-consistent boundary condition can be applied in the diffuse interface method <cit.>. It establishes a connection between the bulk free energy and the wetting energy at the boundary, ensuring a uniform interface thickness as the system reaches thermodynamic equilibrium. The above-mentioned boundary conditions are based on the C-H type bulk free energy formulation. In this case,
the wetting energy and the surface tension can be evaluated without the difficulty of computing an integral. However, for the VDW energy form, the values of the interface thickness and the surface tension are not explicit. In order to obtain the surface tension, we need to numerically compute an integral across the interface, which makes it challenging to apply the aforementioned boundary conditions. In recent years, a constant contact angle boundary condition <cit.> and a chemical potential based boundary condition have been employed for the VDW single species model <cit.>. As we shall see, the constant contact angle boundary condition induces an instability at equilibrium contact angles θ_eq≈0 or θ_eq≈π. In addition, the boundary condition used in <cit.> is applied to the pseudopotential LBM method; in this approach, the exact determination of the contact angle requires several free parameters, which adds complexity when utilizing other simulation methods.
In this study, in analogy with the energy-consistent boundary condition used in the C-H model, we provide a general energy-consistent boundary condition for the VDW single-species multi-phase model <cit.>. The boundary condition ensures energy consistency and allows for a uniform interface profile as the equilibrium contact angle is approached. To solve the Navier-Stokes equations, which incorporate a Korteweg stress form to model the surface effect, we employ a fully explicit finite-difference method <cit.>. This finite difference scheme enables easy implementation of adaptive mesh refinement, which further enhances the computational efficiency of our approach. We perform a comparison of various boundary condition methods and present the wetting energy for different equilibrium contact angles and interface thickness parameters. Furthermore, we validate our numerical scheme on two benchmark problems: a single static droplet and the dynamics of liquid-vapor separation. The energy evolution for Laplace numbers La=[10,1000] is shown for a single static droplet, and the evolution of the average domain length during liquid-vapor separation is provided.
§ METHODOLOGY
In this section, we provide an introduction to the mathematical model utilized in this work. We begin by presenting the governing equations and the thermodynamic energy of the system. With a focus on the energy aspect, we derive the wetting energy and proceed to compare different boundary condition methods based on their profiles while simulating a simple one-dimensional (1-D) equilibrium surface.
§.§ Governing Equations
The governing equations employed in our study consist of the compressible Navier-Stokes equations, incorporating the Korteweg stress surface tension force, along with the equation of state (EOS) <cit.>. Those formulations can be expressed as:
∂ρ/∂ t+∇·(ρ𝐮)=0,
∂ρ𝐮/∂ t+∇·(ρ𝐮⊗𝐮)=∇·(σ_v+σ_s-p𝐈),
p=ρ RT(1/(1-bρ)-aρ/RT).
Eqs. (<ref>) and (<ref>) are the continuity and momentum equations. Here, the operator ⊗ represents the tensor product operation. Eq. (<ref>) is the VDW EOS, from which we obtain the pressure and close this non-ideal gas system. In Eq. (<ref>), p𝐈 is the pressure tensor, where 𝐈 is the identity matrix,
σ_v= η[(∇𝐮+∇^T𝐮)-2/3(∇·𝐮)𝐈]
represents the viscous stress tensor, and
σ_s= λ[(1/2|∇ρ|^2+ρ∇^2ρ)𝐈-∇ρ⊗∇ρ]
is the surface stress tensor. Within these equations, η is the local viscosity, while λ corresponds to the surface energy coefficient. It should be noted that the thermodynamic pressure denoted by p can be determined from Eq. (<ref>), where R denotes the universal gas constant, T is defined as the temperature, and a, b are two gas constants that signify the intermolecular attraction and the volume modification ratio, respectively. Normally, we can rearrange Eq. (<ref>) in a dimensionless form:
p'=8ρ'T'/(3-ρ')-3ρ'^2,
where p'=p/p_c, ρ'=ρ/ρ_c, and T'=T/T_c are the dimensionless forms of pressure, density, and temperature. p_c=3/8ρ_c R T_c, ρ_c=1/3b, and T_c=8a/27Rb are critical pressure, density and temperature. For our simulations, we select the values a=3 and b=1/3, leading to ρ_c=1 and p_c=1. As the pressure term solely appears in a derivative form, we calculate ∇ p' during the simulation instead.
An expression for the energy associated with the pressure term can be expressed as follows <cit.>:
p=ρ^2 ∂ f_0/∂ρ,
where the Helmholtz free energy per unit volume is expressed as ρ f_0. The dimensionless formula f_0'=ρ_c f_0/p_c is given by <cit.>:
f_0'=-8/3 T'log(1/ρ'-1/3)-3ρ'- μ^*,
where μ^* denotes the dimensionless bulk chemical potential <cit.>, which is a universal constant value in both the liquid and gas regions. The bulk chemical potential can be determined through the Maxwell construction of the pressure profile or the common tangent construction of the free energy <cit.>.
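To illustrate the Maxwell construction in practice, the following minimal sketch (our own illustration, not part of the solver) solves for the coexisting densities of the dimensionless VDW model at T'=0.95 by imposing equal pressure and equal bulk chemical potential in the two phases; it recovers the saturated values quoted later in the text (ρ'_g≈0.58, ρ'_l≈1.46):

```python
import numpy as np
from scipy.optimize import fsolve

# Dimensionless VDW model with a = 3, b = 1/3, so that rho_c = p_c = T_c' = 1.
def pressure(rho, T):
    return 8.0 * rho * T / (3.0 - rho) - 3.0 * rho**2

def chem_potential(rho, T):
    # mu = d(rho*f0')/d(rho) for rho*f0' = -(8/3)*T*rho*log(1/rho - 1/3) - 3*rho^2
    return (-8.0 / 3.0 * T * np.log(1.0 / rho - 1.0 / 3.0)
            + 8.0 * T / (3.0 - rho) - 6.0 * rho)

def coexistence(T, guess=(0.5, 1.5)):
    """Solve p(rho_g) = p(rho_l) and mu(rho_g) = mu(rho_l) for the saturated densities."""
    def residual(x):
        rho_g, rho_l = x
        return [pressure(rho_g, T) - pressure(rho_l, T),
                chem_potential(rho_g, T) - chem_potential(rho_l, T)]
    return fsolve(residual, guess)

rho_g, rho_l = coexistence(T=0.95)
print(rho_g, rho_l)   # approximately 0.58 and 1.46
```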
§.§ Wetting energy model
The energy derivation presented in <cit.> establishes a connection between the stress-form and potential-form surface tension force formulations. In addition, when a solid boundary is present in the simulation, a wetting energy and a constraint function were introduced to close the system.
As outlined in <cit.>, we derive the boundary condition for the VDW model from an energy perspective. To incorporate the surface effect, we introduce a mixed energy density formulation, where the surface energy per unit volume is expressed as follows:
e_s=λ/2|∇ρ|^2,
and the mixed energy per unit volume is:
e_mix=ρ f_0+e_s.
In this expression, we also consider the kinetic energy per unit volume ρ e_k=1/2ρ|𝐮|^2 and the wetting energy per unit area e_w. The total energy of the system can be expressed in integral form as follows:
E=∫_Ω(e_mix+ρ e_k) d𝐱+∫_∂Ωe_w ds.
Considering a constant temperature, viscous dissipation is the only dissipation of the energy. The evolution of the total energy E is then
∂ E/∂ t=∫_Ω(∂ e_mix/∂ t+∂ρ e_k/∂ t) d𝐱+∫_∂Ω∂ e_w/∂ t ds=∫_Ω𝐮·∇·σ_v d𝐱.
In this equation, Ω represents the fluid-dominated region, while ∂Ω corresponds to the solid boundary. Through variable substitution and integration by parts, Eq. (<ref>) can be rearranged as follows:
∫_Ω[(∂ρ f_0/∂ρ+λ∇ρ·∇)∂ρ/∂ t+∂ρ e_k/∂ t] d𝐱+∫_∂Ω∂ e_w/∂ t ds = ∫_Ω(μ_mix∂ρ/∂ t+∂ρ e_k/∂ t) d𝐱+∫_∂Ω(λ∂_𝐧ρ+∂ e_w/∂ρ)∂ρ/∂ t ds,
where μ_mix=δ e_mix/δρ represents the mixed chemical potential, which is obtained by taking the functional derivative of the mixed energy. To ensure non-dissipation at the boundary, we obtain the following expression:
λ∂_𝐧ρ+∂ e_w/∂ρ=0,
where ∂_𝐧ρ denotes the wall-normal derivative of the density, and ∂ e_w/∂ρ is referred to as the wetting potential. The potential-form surface force formulation can be derived from the volume integral part. The consistency between the potential-form and stress-form formulations can be demonstrated through the inclusion of an additional stress term:
∇·(σ_s-p𝐈)=-ρ∇μ_mix+∇·σ_ρ,
where the additional stress term σ_ρ takes the form:
σ_ρ=λ(|∇ρ|^2𝐈-∇ρ⊗∇ρ).
By utilizing the potential surface force formulation, the presence of spurious currents can be significantly reduced to a level below the round-off limit <cit.>.
For a 1-D planar simulation with σ_ρ=0, in the equilibrium state of the system, the mixed chemical potential must satisfy the following condition:
μ_mix=∂ρ f_0/∂ρ-λd^2ρ/d x^2=0.
When we multiply Eq. (<ref>) by dρ/d x and integrate it, the following equation can be obtained:
λ/2(dρ/dx)^2=∫∂ρ f_0/∂ x dx.
The first derivative of the density can then be derived as:
|dρ/dx|=√(2ρ f_0/λ).
To extend Eq.(<ref>) to multi-dimensional problems, we make the approximation |∇ρ|≈√(2ρ f_0/λ). Considering the constraint given by Eq.(<ref>), we can derive an energy-consistent wetting energy per unit area as follows:
e_w1=cosθ_eq∫_ρ_gs^ρ_ls√(2λρ f_0) dρ +C.
Here, C represents a constant parameter. However, this constant does not affect the evolution of the contact line in the simulation, so we can set C=0. In this case, the wetting energy e_w1 is characterized by two saturation densities, and these values align with the equilibrium densities of the liquid and gas phases <cit.>. Therefore, we have ρ_ls=ρ_l, ρ_gs=ρ_g.
In addition to the aforementioned e_w1 formulation, there are other approaches utilized to constrain the dynamics of the contact line. One such method is the constant contact angle boundary condition, which enforces the dynamic contact angle to be equal to the equilibrium contact angle. The energy formulation for this condition is expressed as follows <cit.>
e_w2=λθ_eq∫_ρ_gs^ρ_ls∂ρ/∂ x dρ +C.
Another stress-balanced energy formulation <cit.>
e_w3=-σ/2cosθ_eqsinπϕ/2,
where ϕ=(2ρ-ρ_l-ρ_g)/(ρ_l-ρ_g) is known as the order parameter, which varies over ϕ∈[-1,1], and σ is the surface tension between the two fluids.
As well as a thermodynamic consistent formulation based on the pseudopotential lattice Boltzmann method can be shown as <cit.>
e_w4=-K_EOS K_INTγ(ρ_l-ρ_g)/(2ζ)tanh(ζϕ),
where K_EOS, K_INT are scaling factors to adjust the interface thickness of the phase field method, γ and ζ are independent parameters that determine the contact angle.
Among the various wetting energy formulations, e_w2 maintains a constant contact angle throughout the evolution of the contact line. However, this approach may violate thermodynamic principles. Particularly, when the equilibrium contact angle θ_eq is close to 0 or 2π, the simulation system becomes highly unstable. On the other hand, the formulation of e_w3 is derived by considering the stress balance and minimizing the free energy, thereby ensuring the preservation of correct thermodynamics <cit.>. In order to establish the exact relationship between σ and λ, it is necessary to further determine the profile of the interface, as shown in the work by Chen et al. <cit.>. This relationship plays a crucial role in ensuring the accuracy of the contact line dynamics. In Figure <ref>, we present a comparison of the wetting potential values along the interface for different Cahn numbers, denoted as Cn=δ/L, where δ represents the initial interface thickness and L is the length of the system. Specifically, in Figure <ref> (a), (b), and (c), we consider an equilibrium contact angle of θ_eq=π/12 for varying values of Cn. It can be observed that, due to the large value of θ_eq in the case of a small contact angle, the wetting potential of the energy e_w2 exhibits significantly higher values compared to the other two methods. Furthermore, when we increase the equilibrium contact angle to θ_eq=π/4, as depicted in Figure <ref> (c), (d), and (e), the wetting energy formulation e_w3 exhibits varying peak values for different values of Cn. It is worth noting that the density profile is represented by a hyperbolic tangent function in each case. Consequently, the relationship between σ and λ is precisely determined as σ≈0.943λ/δ when the value of δ is known.
The formulation of e_w4 is heavily influenced by the parameter selection and is more suitable for specific numerical methods. In recent studies, an implicit chemical potential boundary condition has been proposed to address the contact line problem <cit.>. Due to the fully implicit nature of the method, it becomes challenging to accurately determine the contact angle precisely from the provided chemical potential value and temperature.
In our energy boundary condition, as described in Eq.(<ref>), the computation of e_w1 through integration is required. However, in a realistic simulation, this value is not necessary. Therefore, this approach can be utilized as a general boundary condition that effectively preserves thermodynamic consistency. Additionally, the information regarding the interface thickness δ in e_w1 is implicitly incorporated into the bulk free energy, and all the essential parameters are computed locally. This approach successfully addresses the instability issues encountered in previous methods.
There are linear, quadratic, and cubic wetting energy formulations based on the C-H model. However, similar to the formulation e_w3, these formulations require prior relations to evaluate the interface thickness and determine the density profile on the boundary. Therefore, we have not considered these formulations in the current work. For a detailed analysis of these formulations, refer to <cit.>.
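As a practical illustration of how the condition λ∂_𝐧ρ+∂ e_w/∂ρ=0 with the wetting energy e_w1 can be imposed in a finite-difference code, consider the ghost-cell sketch below. The function names and parameter values are our own, the excess bulk free energy is replaced by a double-well surrogate purely for illustration, and the sign conventions for the wall normal and θ_eq must follow those of the actual solver:

```python
import numpy as np

def wetting_potential(rho_wall, bulk_energy_density, lam, theta_eq):
    # d(e_w1)/d(rho) = cos(theta_eq) * sqrt(2 * lam * rho_f0), evaluated at the wall
    # density; bulk_energy_density is assumed non-negative (clipped here for safety).
    w = max(bulk_energy_density(rho_wall), 0.0)
    return np.cos(theta_eq) * np.sqrt(2.0 * lam * w)

def ghost_density(rho_wall, dn, bulk_energy_density, lam, theta_eq):
    """Ghost-cell density a distance dn beyond the wall, from a one-sided difference of
    lam * d(rho)/dn = -d(e_w1)/d(rho).  The sign of dn encodes the wall-normal orientation."""
    drho_dn = -wetting_potential(rho_wall, bulk_energy_density, lam, theta_eq) / lam
    return rho_wall + dn * drho_dn

# Example with an illustrative double-well surrogate for the excess bulk free energy:
rho_l, rho_g, beta = 1.46, 0.58, 5.0
surrogate = lambda rho: beta * (rho - rho_l)**2 * (rho - rho_g)**2
print(ghost_density(rho_wall=1.0, dn=0.01, bulk_energy_density=surrogate,
                    lam=1.0e-2, theta_eq=np.pi / 4))
```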
§ NUMERICAL SCHEME
To solve the governing equations presented in the previous section, we employ the two-step MacCormack methodology <cit.>. To begin, we define a vector 𝐟 consisting of the density ρ and momentum ρ𝐮. Then, we proceed to reconstruct the governing equations using this vector Eqs. (<ref>), (<ref>):
𝐟 = [ ρ; ρ𝐮 ].
Eqs. (<ref>), (<ref>) can now be expressed as the functions of 𝐟:
∂_t 𝐟 +∇·𝐅 (ρ,∇ρ, ∇^2ρ)=0,
where 𝐅 can be further expressed as:
𝐅 = [ ρ𝐮; ρ𝐮⊗𝐮+p𝐈-σ_surf-σ_vis ].
As shown in <cit.>, Eq. (<ref>) can be solved by a predictor-corrector finite difference method. The time derivative is treated in a fully explicit manner:
𝐟^*=𝐟^n -Δ t ∇^bck·𝐅^n,
𝐟^n+1=1/2(𝐟^n+𝐟^*) - Δ t/2∇^fwd·𝐅^*.
Here, ∇^fwd stands for forward finite difference:
∇^fwdϕ(𝐱)=ϕ(𝐱+h)-ϕ(𝐱)/h,
∇^bck is the backward finite difference:
∇^bckϕ(𝐱)=ϕ(𝐱)-ϕ(𝐱-h)/h,
and ∇^ctr represents the central finite difference:
∇^ctrϕ(𝐱)=ϕ(𝐱+h)-ϕ(𝐱-h)/2h.
In addition, the derivatives appearing in 𝐅 are computed as 𝐅^n(ρ^n,∇^fwdρ^n,∇^2_ctrρ^n) and 𝐅^*(ρ^*,∇^bckρ^*,∇^2_ctrρ^*), respectively.
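For readers who prefer to see the update rule in code, the following self-contained NumPy sketch advances a simplified 1-D isothermal version of the governing equations with the two-step scheme above. It is only an illustration with periodic boundaries and arbitrary parameter values, not the production implementation described next:

```python
import numpy as np

N, L = 256, 1.0
dx = L / N
T, a, b, R = 0.95, 3.0, 1.0 / 3.0, 8.0 / 3.0   # reduced VDW units (rho_c = p_c = 1)
lam, eta, dt = 1.0e-3, 1.0e-2, 1.0e-5

fwd  = lambda q: (np.roll(q, -1) - q) / dx               # forward difference
bck  = lambda q: (q - np.roll(q, 1)) / dx                # backward difference
ctr2 = lambda q: (np.roll(q, -1) - 2.0 * q + np.roll(q, 1)) / dx**2

def pressure(rho):
    return rho * R * T / (1.0 - b * rho) - a * rho**2

def flux(rho, mom, d1):
    # d1 is the first-derivative operator prescribed for this half step.
    u = mom / rho
    sigma_s = lam * (rho * ctr2(rho) - 0.5 * d1(rho)**2)  # 1-D Korteweg stress
    sigma_v = 4.0 / 3.0 * eta * d1(u)                     # 1-D viscous stress
    return np.array([mom, mom * u + pressure(rho) - sigma_s - sigma_v])

def maccormack_step(f):
    Fn = flux(f[0], f[1], fwd)                            # predictor: forward in F, backward divergence
    f_star = f - dt * np.array([bck(Fn[0]), bck(Fn[1])])
    Fs = flux(f_star[0], f_star[1], bck)                  # corrector: backward in F, forward divergence
    return 0.5 * (f + f_star) - 0.5 * dt * np.array([fwd(Fs[0]), fwd(Fs[1])])

# Initial condition: a liquid slab in vapor, smoothed by a tanh profile.
x = np.linspace(0.0, L, N, endpoint=False)
rho_l, rho_g, delta = 1.46, 0.58, 0.02
rho0 = 0.5 * (rho_l + rho_g) - 0.5 * (rho_l - rho_g) * np.tanh((np.abs(x - 0.5) - 0.15) / delta)
f = np.array([rho0, np.zeros(N)])
for _ in range(1000):
    f = maccormack_step(f)
print(f[0].min(), f[0].max())
```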
Our simulation is implemented using the free software platform Basilisk, which provides a common framework for octree-based adaptive mesh refinement <cit.>. Given that our method relies on finite differences and is fully explicit, the strategy for adaptive mesh refinement is straightforward. The complete code is accessible at the following link: http://basilisk.fr/sandbox/zchmacchiato/.
§ RESULTS
In this study, the VDW model is utilized to simulate phase transformations for an isothermal single species multi-phase system. The interface between the two phases undergoes changes from the initial shape to the equilibrium shape, resulting in fluid flow. Our simulations aim to assess the stability, energy oscillation, and morphology changes that occur during this phase transition process.
§.§ Single droplet simulation
To validate the numerical method, we simulate the coexisting saturated density values at a fixed temperature. The simulation begins by initializing a single droplet with a radius of r inside a gas tank, and it continues until the system reaches an equilibrium state. The initial density profile is represented by a hyperbolic tangent function:
ρ(𝐱,0)=(ρ_l+ρ_g)/2-(ρ_l-ρ_g)/2 tanh((|𝐱-𝐱_0|-r)/δ).
Here, |𝐱-𝐱_0| represents the distance between the local position 𝐱 and the droplet center 𝐱_0, so that |𝐱-𝐱_0|-r measures the signed distance to the initial interface; δ denotes the initial interface thickness, and r is the radius of the initial droplet. In Figure <ref>, we compare the simulation results of the density values with the analytic solutions obtained from the Maxwell construction. Our numerical scheme accurately captures the results, which are in good agreement with the theoretical solutions.
We proceed with the simulation of a single droplet in a square domain under a constant temperature T'=0.95 with periodic boundaries. Based on the results shown in Figure <ref>, the approximate saturated density of the liquid phase is ρ_l≈1.46, and that of the gas phase is ρ_g≈0.58. In this test, we do not consider the viscosity ratio. It is important to note that the exact density profile in a static solution is more complex, but it can be qualitatively represented by a hyperbolic tangent function as given in Eq.(<ref>). Consequently, the presence of different initial density values compared to the saturated density values introduces oscillations in the simulation. The initial density distribution, along with the surface tension stress, drives the droplet towards its equilibrium shape, while pressure helps separate the saturated density profile simultaneously. In an ideal scenario, with sufficient evolution time, we would expect a constant surface energy E_s=∫_Ω e_s and zero kinetic energy E_k=0. However, due to unbalanced numerical schemes and the choice of the surface force formulation<cit.>, spurious currents can occur.
In this test, we characterize the system using the Laplace number, La=λρ_c R/η^2, where R is the initial radius of the droplet. With La≫1, we expect a significant surface effect that induces pronounced spurious currents when the system reaches equilibrium <cit.>. The logarithmic evolution of the kinetic energy and the surface energy is presented in Figure <ref>. We vary La from 10 to 1000. As the viscous force dissipates the system's energy and balances the oscillations caused by capillary waves, reducing La leads to a rapid decrease in kinetic energy. The viscous dissipation gradually consumes the energy associated with the droplet shape, causing the kinetic energy to converge to a small, constant value. In our simulations, the final kinetic energy, attributed to spurious currents, does not reach zero. However, the surface energy converges to the same value for different La values, indicating that the surface effect accelerates the system's attainment of the equilibrium profile. When La≥1000, oscillations in the energies are observed. In such capillary-dominated systems, the significant surface effect induces capillary waves around the phase interface. The imbalance between surface tension and thermodynamic pressure, combined with the explicit numerical scheme, leads to the generation of spurious currents, preventing the system from reaching zero kinetic energy.
§.§ Dynamics of liquid-vapor separation
In this section, we explore the applicability of the VDW model to the dynamics of liquid-vapor separation, aiming to assess its performance in a complex morphology-changing problem. Additionally, we have incorporated adaptive mesh refinement into the simulation to enhance efficiency. When a single species is subjected to a temperature close to the critical temperature and possesses a random density profile, the pressure and surface stress act as driving forces, leading the mixture to undergo phase separation. This results in coarsening dynamics and the formation of two distinct phases: one with a higher density and the other with a lower density. In the VDW model, the phase separation solely relies on the equilibrium density corresponding to specific temperatures. This enables the system to minimize the free energy during its evolution.
As depicted in Figure <ref> (a), we initiate the simulation by introducing a single species with a random density fluctuation within a 2D square domain. The boundaries of the domain are set as periodic conditions to ensure continuity. The initial density profile is defined as follows:
ρ(𝐱,0)=ρ_c+0.2ρ_c (rand),
where the amplitude of the random fluctuation is set to 0.2 ρ_c and rand is a random number drawn uniformly from [-1,1]. The phase separation is characterized by the growth of the domain length scale, defined as L=L_0^2/χ_m, where L_0^2 represents the area of the square domain, and χ_m=⟨ C^2(1-C)^2⟩ is the space-averaged quantity associated with the concentration of the gas phase, denoted as C=(ρ-ρ_g)/(ρ_l-ρ_g) <cit.>. In our previous work, we utilized the explicit method to investigate the dynamics of liquid-vapor separation under constant temperature conditions <cit.>. When the system temperature was set to T'=0.85, simulation results exhibited a growth rate characterized by L∼(t-t_0)^0.7, which was close to but slightly higher than the (t-t_0)^2/3 growth rate reported by Miranda et al. <cit.>. In the present study, we simulate the dynamics of liquid-vapor separation at T'=0.95 with a Laplace number of La=0.04. In this example, the simulations are performed with adaptive meshes using the corresponding feature of Basilisk <cit.>. The smallest (dimensionless) cell size used is Δ=0.0039, in order to fully resolve the liquid-vapor interface. The results presented here are obtained by averaging over 5 runs with different random initial density configurations.
The evolution of the mixture's morphology at different time steps is shown in Figure <ref>. Over time, the complexity of the mixture gradually diminishes, and the influence of surface tension becomes prominent, resulting in the formation of circular liquid droplets in the later stages. In Figure <ref>, we compare the simulation results of the domain length scale L evolution when La=0.04 with the corresponding theoretical solutions depicted by the red dashed curve. It can be observed that our simulation results exhibit a close agreement with the theoretical prediction L∼ (t-t_0)^2/3.
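The growth statistic itself is straightforward to evaluate from a density snapshot; a minimal sketch following the definition above (with the saturated densities at T'=0.95) is:

```python
import numpy as np

def domain_length(rho, L0, rho_l=1.46, rho_g=0.58):
    """Domain length scale L = L0^2 / chi_m with chi_m = <C^2 (1 - C)^2>,
    where C is the local gas-phase concentration (definition from the text)."""
    C = (rho - rho_g) / (rho_l - rho_g)
    chi_m = np.mean(C**2 * (1.0 - C)**2)
    return L0**2 / chi_m

# Example on a synthetic 2-D density field with random fluctuations around rho_c = 1:
rho = 1.0 + 0.2 * (2.0 * np.random.rand(256, 256) - 1.0)
print(domain_length(rho, L0=1.0))
```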
§.§ Equilibrium contact angle and energy evolution
In the previous section, we compared various boundary condition methods based on their profiles along the interface for the 1D planar case. Now, we employ different boundary conditions to model the equilibrium contact angle and assess the energy evolution of the contact angle simulation.
For this test, we start with a liquid droplet of radius r=0.2L, where the density is set to ρ=ρ_l, located on a solid boundary. The region to the left of the droplet is filled with gas, with a density of ρ=ρ_g. The density profile function and the velocity can be defined as follows:
ρ(𝐱,0)=(ρ_l+ρ_g)/2-(ρ_l-ρ_g)/2 tanh((|𝐱-𝐱_0|-r)/δ),
𝐮(𝐱,0)=0.
Here, |𝐱-𝐱_0| represents the distance to the droplet center 𝐱_0, so that |𝐱-𝐱_0|-r measures the signed distance to the initial interface. The temperature for this simulation is fixed at T'=0.95. The viscosity η and surface energy coefficient λ are chosen to satisfy La=4 for all simulations. To ensure stable simulations, the time interval Δ t needs to satisfy ηΔ t/Δ x^2≤0.1. Additionally, the Courant-Friedrichs-Lewy (CFL) condition is imposed with CFL=|𝐮_max|Δ t/Δ x≤0.1. Since the equilibrium interface thickness, δ_eq, is unknown in the e_w3 formulation, we set δ_eq=0.3 and approximate σ≈3.143λ.
The equilibrium contact angle for each simulation is computed using the method described in <cit.>. The simulation results are evaluated when the kinetic energy reaches a constant value, typically when t≫1. In the provided figures, the interface position is defined as ρ=ρ_c.
Figure <ref> illustrates the evolution of density profiles for an equilibrium contact angle of θ_eq=π/6 at various time points. In this study, we employ adaptive mesh refinement techniques, where the resolution of the mesh is determined by the density distribution. Notably, the grids demonstrate a clear refinement in the contact line region as the droplet spreads over the solid surface. Figure <ref>(a) compares three different boundary conditions with the corresponding analytic solution, indicated by the black dashed line. The equilibrium shapes of the droplets residing on solids with different contact angles are shown in Figure <ref>(b). Upon comparison, it can be observed that when the equilibrium contact angle θ_eq is set to π/2, the results from the three methods align well with each other. Similarly, when the equilibrium contact angle approaches π/2, the simulation results of the form e_w1 exhibit good agreement with the reference curve. However, when θ_eq is set to 2π/3 and π/6, the results from form e_w2 deviate from the analytic solutions. Additionally, the accuracy of the results from form e_w3 is lower compared to those from e_w1 or the analytic solutions. By considering these comparisons, we can conclude that the form e_w1 consistently provides accurate results across a wider range of contact angles compared to the other two formulations.
It is important to evaluate the energy evolution of the contact line moving until the system reaches the equilibrium state, as it provides insights into the contact line dynamics of the system <cit.>. Figure <ref> illustrates the evolution of kinetic energy during the simulation. Once the droplet achieves its equilibrium shape, it stops evolving, and the kinetic energy initially increases and then gradually decreases toward zero. For the energy form e_w1, in the late stages of the simulation, the kinetic energy of each case converges to a very small value, approximately E_k∼10^-10. To further analyze the differences in kinetic energy evolution, Figure <ref> compares the kinetic energy profiles for the energy forms e_w1 and e_w2. It is worth noting that for contact angles θ_eq>5π/6, the simulation process becomes highly unstable when using form e_w2. Hence, the results for an even larger contact angle, θ_eq=35π/36, are not compared. The comparison in Figure <ref>(b) clearly demonstrates that the kinetic energy evolution differs significantly between the two boundary conditions. This indicates that the contact line dynamics associated with these two methods during the simulation are also distinct. Moreover, when using the e_w1 formulation for the boundary, the system reaches the equilibrium state more rapidly. In summary, the analysis of the kinetic energy evolution supports the superiority of the e_w1 formulation, as it leads to faster convergence to the equilibrium state and provides more stable contact line dynamics compared to the e_w2 formulation.
We proceed to compare the energy forms e_w1 and e_w3 in Figure <ref>. In order to maintain consistency between the two boundary conditions, the interface thickness δ_eq=0.164 is obtained from the equilibrium state of the simulation using e_w1 as the wetting energy. The comparison in Figure <ref>(b) reveals that the kinetic energy evolution of both cases is qualitatively consistent, and even the capillary-induced oscillation exhibits similar characteristics. It is worth noting that the idea behind form e_w3 stems from the concept of stress balance between the gas phase and liquid phase at the interface region <cit.>. The surface tension difference Δσ between the solid-gas and solid-liquid interfaces remains continuous along the boundary, and the dynamics of the moving contact line simulated using this method have been widely employed and compared with molecular dynamics and experimental studies <cit.>. Consequently, from a kinetic energy and dynamics standpoint, similar outcomes can be achieved whether we utilize e_w1 or e_w3 as the wetting energy. This suggests that both formulations can capture the essential features of the contact line dynamics and yield comparable results in kinetic energy and capillary-driven oscillations.
§ CONCLUDING REMARKS
In this study, we investigated an explicit finite difference method for solving the Van der Waals (VDW) multi-phase flow. Based on the MacCormack methodology, the numerical scheme provided qualitative simulation results for single static droplets and the dynamics of liquid-vapor separation. We proposed a general energy-based approach to address the contact line problem by relating the wetting energy to bulk free energy and surface energy, and compared them with existing boundary condition methods.
In the simulation tests, we evaluated the energy evolution and spurious currents of single static droplets under different Laplace numbers (La). As La was decreased to approximately 1, we observed a more stable equilibrium system with reduced intensity of spurious currents. We validated our method by analyzing the growth of domain length during the liquid-vapor separation process, and our results were in good agreement with the predicted solution L=(t-t_0)^2/3. Using the general energy-based boundary condition, we achieved highly consistent equilibrium contact angles with the predicted analytic solution. However, the other two existing methods failed to provide qualitative results due to large wetting potential and uncertain interface thickness.
Furthermore, the kinetic energy of the simulation for the equilibrium shape of the sessile droplet converged to E_k∼10^-10, which is at a similar level as the simulation of spurious currents in the single static droplet. Additionally, we observed consistent dynamics between the energy-consistent boundary condition and the stress balance boundary condition when the same interface thickness was employed in both approaches.
Overall, our study demonstrated the effectiveness of the explicit finite difference method for VDW multi-phase flow and provided valuable insights into the energy-based approach for modeling contact line phenomena.
§ APPENDIX
We here establish the distinction between the Korteweg stress-based surface force and the potential-based surface force. We begin by examining the force terms of the momentum equation, Eq. (<ref>), excluding the contribution from viscous dissipation
∇· ( σ_s - p𝐈 ) = ∇·(λ[( 1/2|∇ρ|^2+ρ∇^2ρ)𝐈-∇ρ⊗∇ρ]-p𝐈).
In this context, the pressure is determined by the equation of state, which can be obtained from the thermodynamic energy as p=ρ^2∂ f_0/∂ρ, as explained in the main text. By substituting this expression into Eq. (<ref>), we obtain the following result:
∇· ( σ_s - p𝐈 ) = ∇(λ/2|∇ρ|^2+λρ∇^2ρ-ρ^2∂ f_0/∂ρ) -∇·(λ∇ρ⊗∇ρ).
In addition, the flux term of the momentum equations can be also represented by a potential form surface force <cit.>
-ρ∇μ_mix=-∇ρμ_mix+μ_mix∇ρ,
with mixed chemical potential μ_mix=∂ (ρ f_0)/∂ρ-λ∇^2ρ. Eq. (<ref>) can be further simplified to
-ρ∇μ_mix=∇(λρ∇^2ρ-ρ^2∂ f_0/∂ρ)-λ∇^2ρ∇ρ.
Finally, the additional stress term σ_ρ can be evaluated by the difference between Eq. (<ref>) and Eq. (<ref>)
∇·σ_ρ=∇·(σ_s-p𝐈)+ρ∇μ_mix
=∇·(λ/2|∇ρ|^2𝐈-λ∇ρ⊗∇ρ)+λ∇^2ρ∇ρ,
which can be simplified as
σ_ρ=λ(|∇ρ|^2𝐈-∇ρ⊗∇ρ)+C,
The constant C is typically set to zero in practice, as only the divergence of the stress term appears in the momentum equation. The additional stress term is mostly implicitly incorporated into the pressure term. In the case of a 1-D simulation, this term simplifies to zero.
|
http://arxiv.org/abs/2307.04881v1 | 20230710200317 | Ab initio methods for polariton chemistry | [
"Jonathan J. Foley IV",
"Jonathan F. McTague",
"A. Eugene DePrince III"
] | physics.chem-ph | [
"physics.chem-ph"
] |
Department of Chemistry,
University of North Carolina Charlotte,
Charlotte, North Carolina, 28223
[email protected]
Department of Chemistry
William Paterson University
Wayne, New Jersey, 07470
Department of Chemistry and Biochemistry,
Florida State University,
Tallahassee, FL 32306-4390
[email protected]
Polariton chemistry exploits the strong interaction between quantized excitations in molecules and quantized photon states in optical cavities to affect chemical reactivity. Molecular polaritons have been experimentally realized by the coupling of electronic, vibrational, and rovibrational transitions to photon modes, which has spurred tremendous theoretical effort to model and explain how polariton formation can influence chemistry. This tutorial
review focuses on a particular thrust in theoretical chemistry and chemical physics aimed at merging familiar techniques from ab initio electronic structure theory with cavity quantum electrodynamics, toward the goal of supplying predictive theories for polariton chemistry. Our aim is to emphasize the relevant theoretical details with enough clarity for newcomers to the field to follow, and to present simple and practical code examples to catalyze further development work.
Ab initio methods for polariton chemistry
A. Eugene DePrince III
July 10, 2023
=========================================
§ INTRODUCTION
Strong interactions between nanoconfined photons and molecular systems<cit.> can lead to the creation of hybrid light–matter states known as polaritons that may display remarkably different chemical and physical properties than their parent components.<cit.> The technological and chemical applications of these strongly-coupled light–matter states are wide ranging. Recent examples of cavity control of chemical reactivity and catalysis,<cit.> polariton lasing,<cit.> manipulation of non-linear optical effects in organic molecules,<cit.> optical energy propagation,<cit.> plasmon-based photostabilization,<cit.> plasmon-based multimode vibrational strong coupling,<cit.> Bose-Einstein condensation of molecular exciton-polaritons,<cit.> and protection against decoherence processes<cit.> offer only a glimpse into the transformative potential of polaritonic approaches to chemistry and materials science. In order for the field to fully live up to its promise, the experimental realization of strong and ultra-strong light–matter coupling must be accompanied by high-quality theoretical descriptions of the emergence and properties of molecular polaritons.
There have been several excellent review and perspective articles focusing on theoretical advances
related to polaritonic chemistry. Theoretical challenges in polaritonic chemistry bridge most domains of chemical physics, including polaritonic structure, dynamics, statistical thermodynamics, and rate theories as pointed out by a recent comprehensive review
by Huo and co-workers <cit.> and an incisive perspective by Feist and co-workers <cit.>. Ruggenthaler et al. have contributed a rigorous review of several promising directions in ab initio cavity quantum electrodynamics (QED) methods with a particular emphasis on real-space approaches to bridge density functional theory and its real-time extensions with cavity QED; the resulting QEDFT approach<cit.> has played an important role in simulating polaritonic structure. In this tutorial review, we also focus on the problem of simulating polaritonic structure through the lens of ab initio cavity QED, but we emphasize emerging methods implemented with Gaussian basis sets. Throughout, we refer to ab initio cavity QED methods (whether in Guassian or real-space grid bases) as those where the starting point is a single time-independent Schrödinger equation for charged particles comprising a molecular system coupled to quantized photonic degrees of freedom. These methods can be seen to be complementary to parameterized cavity QED (pCQED) methods where one essentially considers solving two Schrödinger equations in series: a first for the molecular system, and the second for the coupled molecular-photonic system that is parameterized by the solutions to the
first.<cit.> As a tutorial review, our aim is to provide a level of technical detail sufficient for newcomers to the field to implement some of the more introductory methods and start applying them as-is, or to leverage these implementations to seed new or more elaborate methodological developments. In addition to the discussion of the related theory in text, we provide example code in a tutorial style that utilizes the Psi4Numpy framework for a QED-Hartree-Fock self-consistent field method and a QED-Configuration Interaction Singles method. We will present some illustrative calculations utilizing these methods, and also discuss results from the literature for methods beyond those for which we have provided tutorial implementations.
Historically, theoretical descriptions of strong light–matter interactions have been built upon simple model Hamiltonians that describe interactions between two- or few-level quantum emitters and a single photon mode. For electronic strong coupling in polariton chemistry, the Jaynes-Cummings model provides such an example. Here, the two states of the quantum emitter are parameterized by the ground- and excited-state energies, and these states couple to the photon mode through a dipolar transition;
see, for example, Ref. Huo22_Chemrxiv for a derivation and detailed discussion of this model. Such models are powerful tools for simulating qualitative changes to properties of molecular systems strongly coupled to nanoconfined photons,<cit.> offering essential insight, for example, into optical changes that can be induced by manipulating the energy content of an external field<cit.> or into changes in chemical reactivity<cit.> or rates of electron transfer reactions.<cit.> While such simulations improve our qualitative understanding of many problems, quantitative predictions of chemical reactivity or orbital-specific quantities (e.g., ionization potentials) within optical cavities or other nanoconfined environments necessitate an ab initio approach to light–matter interactions or a pCQED treatment with a sufficiently large basis of molecular and photonic eigenstates<cit.>.
The most conceptually straightforward strategy to realize an ab initio polaritonic model is to generalize an existing methodology to treat more than one type of quantum-mechanical particle – namely, for the description of both electrons and photons. Following this scheme, approaches based on quantum electrodynamics generalizations of density functional theory (QEDFT<cit.> and QED-DFT<cit.>), configuration interaction (QED-CIS) <cit.>, and coupled cluster (QED-CC) <cit.> have emerged. An alternative and perhaps more direct description of polaritonic structure could be obtained from a theory designed from the outset with a different particle type, the polariton, in mind.<cit.> This approach could be the more natural one, but, in the framework outlined in Ref. , the technical challenge of designing algorithms for treating multiple types of quantum-mechanical particles is supplanted by a new problem: enforcing the correct Fermi-Bose statistics on the polaritonic wave function. In either case, the vast majority of polaritonic quantum chemical models are built upon density functional theory (DFT). For many applications, DFT offers an excellent balance of accuracy and computational affordability. However, DFT suffers from a number of well-known deficiencies<cit.> that are no doubt inherited by polaritonic extensions of the model and potentially limit its applicability to arbitrary polaritonic problems. Hence, while this review article touches on QED generalizations of DFT, the main focus is wave function methods.
§ THE PAULI-FIERZ HAMILTONIAN
The starting point for our presentation of ab initio polaritonic structure theory is the Pauli-Fierz (PF) Hamiltonian,<cit.> represented in the length gauge and within the dipole and Born-Oppenheimer approximations. An excellent pedagogical discussion and derivation of this Hamiltonian from the minimal coupling Hamiltonian in the Coulomb gauge can be found in recent papers and reviews by Huo and co-workers.<cit.> Here, we briefly outline some key details, assuming a single photon mode for simplicity, but the Hamiltonian we derive can be generalized for multiple modes. It has been shown that the inclusion of multiple modes can profoundly impact ground-state and excited-state polariton surfaces, and physicochemical processes in model systems.<cit.> Most ab initio cavity QED studies to date have considered only a single mode, so multi-mode effects represent an important area to explore in future work.
We begin with the minimal coupling Hamiltonian in the Coulomb gauge,
Ĥ_ p · A = ∑_i^N 1/2m_i(p̂_i - z_i Â_⊥)^2 + V̂(x̂) + ħω_ cavb̂^†b̂,
where the subscript on the Hamiltonian denotes that this operator is also referred to as the "p· A" Hamiltonian.<cit.> The sum runs over all charged particles (electrons and nuclei in molecular systems), p̂_i and z_i are the momentum operator and charge for particle i, respectively, Â_⊥ is the transverse component of the vector potential, V̂(x̂) is the Coulomb potential operator for all pairs of charged particles, and ħω_ cav captures the photon energy. The symbols b̂^† and b̂ are photonic creation and annihilation operators, respectively. Important properties of the photonic creation and annihilation operators include their action on photon number states,
b̂^†|n⟩ = √(n+1)|n+1⟩
b̂|n⟩ = √(n)|n-1⟩
b̂^†b̂|n⟩ = n|n⟩,
and their commutation relations
[b̂, b̂^†] = 1
[b̂^†, b̂] = -1.
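These relations are easy to verify numerically. The minimal NumPy sketch below (our own illustration, not taken from the tutorial notebooks referenced later) builds truncated matrix representations of b̂ and b̂^† in a photon-number basis and checks the number-operator action and the commutator; the commutator is reproduced exactly except in the highest number state, an unavoidable artifact of truncating the photon space.

import numpy as np

nmax = 10                                        # truncate the photon space at nmax photons
dim = nmax + 1
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # b|n>   = sqrt(n)   |n-1>
bdag = b.T                                       # b^+|n> = sqrt(n+1) |n+1>
number = bdag @ b                                # b^+ b |n> = n |n>

print(np.allclose(np.diag(number), np.arange(dim)))          # True
comm = b @ bdag - bdag @ b                       # [b, b^+]
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))          # True away from the truncation edge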
In Eq. <ref>, the coupling between light and matter is captured by the first term, which includes the matter momenta and the product of the matter charges and the vector potential; note that, in the Coulomb gauge, the vector potential is purely transverse. The "p· A" Hamiltonian in the Coulomb gauge is quite natural for formulations of ab initio QED represented in
a real-space grid basis, and so approaches such as QEDFT are formulated in this gauge.<cit.> However,
because momentum eigenfunctions are delocalized functions, capturing the light–matter coupling matrix elements in
the "p· A" representation is challenging for formulations that utilize Gaussian basis sets, which
are inherently localized in space. Therefore, the PF Hamiltonian in the length gauge that we seek may be obtained from Ĥ_ p · A via a gauge transformation, known as the Power-Zienau-Woolley (PZW) transformation, followed by a unitary phase transformation. The PZW transformation operator is
Û_ PZW = exp( -i/ħμ̂·Â),
where  = A_ 0(b̂ + b̂^†) and A_ 0 = √(ħ/2ω_ cavϵ_0 V)ê is the vector potential of the cavity photon, which is still purely transverse but we are dropping the ⊥ for simplicity.
Let's consider the PZW transform of each term in Eq. <ref>. As noted in Ref. Huo20_9215, the
PZW operator boosts the momentum operator by an amount z Â. To see why this is the case, consider the BCH expansion of
this transformation for the light-matter coupling term for a single particle
with charge z:
Û_ PZW(p̂ - z Â) Û^†_ PZW = e^B̂Ĉ e^-B̂ = Ĉ + [B̂,Ĉ] + 1/2 [B̂,[B̂,Ĉ]] + ...
where Ĉ = (p̂ - z Â) and B̂ = -i/ħzÂx̂,
and we have used the fact that the dipole operator μ̂ = zx̂. Because  commutes with itself, we have
[B̂, Ĉ] = -i/ħzÂ[x̂,p̂] = zÂ, and all subsequent commutators are equal to zero.
Thus, we can see that
Û_ PZW(p̂ - z Â) Û^†_ PZW = (p̂ - z Â) + zÂ = p̂.
Consequently, the first term in the PZW transformation of Eq. <ref> becomes
Û_ PZW∑_i^N 1/2m_i(p̂_i - z_i Â)^2 Û^†_ PZW = ∑_i^N 1/2m_ip̂_i^2.
Both  and x̂ commute with V̂(x̂), so we
have
Û_ PZWV̂(x̂) Û^†_ PZW = V̂(x̂).
Finally, we have
Û_ PZW ħω_ cavb̂^†b̂ Û^†_ PZW = e^B̂Ĉ e^-B̂ = Ĉ + [B̂, Ĉ] + 1/2 [B̂,[B̂,Ĉ]] + ...
where we will call Ĉ = ħω_ cavb̂^†b̂ and B̂ = g(b̂ + b̂^†), where g = -i/ħμ̂· A_0.
The first commutator gives
g ħω_ cav[(b̂ + b̂^†), b̂^†b̂] = - g ħω_ cav(b̂^† - b̂),
and the second commutator gives
-1/2 g^2 ħω_ cav[(b̂ + b̂^†), (b̂^† - b̂)] = - g^2 ħω_ cav,
so that this term overall reads
Û_ PZW ħω_ cavb̂^†b̂ Û^†_ PZW = ħω_ cavb̂^†b̂ + iω_ cavμ̂· A_0(b̂^† - b̂) +ω_ cav/ħ ( μ̂· A_0)^2.
Combining all terms gives the Hamiltonian in the dipole gauge, also called the "d· E" Hamiltonian:<cit.>
Ĥ_ d· E = ∑_i^N p̂_i^2/2m_i + V̂(x̂) + ħω_ cavb̂^†b̂ + iω_ cavμ̂· A_0(b̂^† - b̂) +ω_ cav/ħ ( μ̂· A_0)^2.
To derive the Pauli-Fierz Hamiltonian from Eq. <ref> we apply a unitary phase transformation defined by the operator
Û_ϕ = exp( i π/2 b̂^†b̂),
which transforms the photonic operators as follows:
Û_ϕb̂^†Û^†_ϕ = ib̂^†
Û_ϕb̂Û^†_ϕ = -ib̂
Û_ϕb̂^†b̂Û^†_ϕ = b̂^†b̂.
Thus, the Pauli-Fierz Hamiltonian can be defined as
Ĥ_PF = Û_ϕĤ_ d· EÛ^†_ϕ = ∑_i^N p̂_i^2/2m_i + V̂(x̂) + ħω_ cavb̂^†b̂ - ω_ cavμ̂· A_0(b̂^† + b̂) +ω_ cav/ħ ( μ̂· A_0)^2.
It is common to define the coupling vector λ = √(ħ/ϵ_0 V)ê, and so after
recalling the definition
A_ 0 = √(ħ/2ω_ cavϵ_0 V)ê,
we can write A_ 0 = √(1/2ω_ cav)λ. At this point, the sum ∑_i^N p̂_i^2/2m_i = T̂_ e + T̂_ N runs over the electrons and nuclei, and the potential operator
V̂(x̂) = V̂_ ee + V̂_ eN + V̂_ NN includes electron-electron repulsion, electron-nuclear attraction, and nuclear-nuclear repulsion operators. We will invoke the Born-Oppenheimer approximation, which fixes the nuclei and eliminates the nuclear kinetic energy operator, and makes the nuclear-nuclear repulsion a constant for a given
molecular geometry.
With these definitions in mind, we write the Pauli-Fierz Hamiltonian <cit.> in the length gauge and within the dipole and Born-Oppenheimer approximations and in atomic units as follows:
Ĥ = Ĥ_ e + ω_ cavb̂^†b̂ - √(ω_ cav/2) (λ·μ̂ )(b̂^† +b̂) + 1/2 (λ·μ̂ )^2.
Here, Ĥ_ e represents the electronic Hamiltonian that arises in standard electronic structure theories when
the Born-Oppenheimer approximation is imposed on the charged particles captured
by the ∑_i^N p̂_i^2/2m_i + V̂(x̂) term
in Eq. <ref>.
The second term Ĥ_ cav = ω_ cavb̂^†b̂ represents the Hamiltonian for the cavity mode, which is a harmonic oscillator with fundamental frequency ω_ cav.
The last two terms are the bilinear coupling, Ĥ_ blc = -√(ω_ cav/2) (λ·μ̂ )(b̂^† +b̂), and dipole self-energy, Ĥ_ DSE = 1/2( λ·μ̂)^2, terms, respectively. We will assume a Cartesian coordinate system where λ and
μ̂ will have x, y, and z components. The molecular dipole operator μ̂ has electronic and nuclear contributions, i.e.,
μ̂ = μ̂_ e + μ_ n. In the Born-Oppenheimer approximation, the nuclear
contribution is a constant for a given geometry.
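To make the structure of the Hamiltonian in Eq. <ref> concrete, the short sketch below assembles it for a toy two-level electronic system coupled to a single cavity mode using Kronecker products of electronic and photonic matrices. All numerical values (the electronic energies, the λ·μ̂ matrix, and the cavity frequency) are arbitrary placeholders rather than data for any real molecule, so this is an illustration of the operator structure in the spirit of a parameterized model, not an ab initio calculation.

import numpy as np

# Placeholder electronic data (two states) and cavity parameters, in atomic units
H_e = np.diag([0.0, 0.3])                        # electronic Hamiltonian in its eigenbasis
lam_mu = np.array([[0.5, 1.0], [1.0, -0.4]])     # matrix of lambda . mu in the same basis
w_cav = 0.3                                      # cavity frequency
nmax = 5                                         # photon-number truncation

dim_p = nmax + 1
b = np.diag(np.sqrt(np.arange(1, dim_p)), k=1)   # photon annihilation operator
bdag = b.T
I_e, I_p = np.eye(2), np.eye(dim_p)

# H = H_e + w b^+ b - sqrt(w/2) (lam.mu)(b^+ + b) + (1/2)(lam.mu)^2
H_PF = (np.kron(H_e, I_p)
        + w_cav * np.kron(I_e, bdag @ b)
        - np.sqrt(w_cav / 2.0) * np.kron(lam_mu, bdag + b)
        + 0.5 * np.kron(lam_mu @ lam_mu, I_p))

print(np.linalg.eigvalsh(H_PF)[:4])              # lowest polaritonic levels of the toy model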
In the following sections, we use standard labeling notation for molecular spin orbitals, i.e., labels i, j, k, and l refer to electronic molecular spin-orbitals that are occupied in a reference configuration, and labels a, b, c, d refer to unoccupied electronic molecular spin-orbitals. General electronic molecular orbitals will be indexed by p, q, r, and s, and electronic atomic orbitals will be indexed by Greek labels. Unless otherwise noted, all electronic orbital labels refer to spin-orbitals. The symbols â^† and â will represent fermionic creation and annihilation operators, respectively, while b̂^† and b̂ will represent the bosonic equivalents.
§ MEAN-FIELD CAVITY QED
As our first step in approximating the energy eigenstates of Eq. (<ref>), we introduce the cavity quantum electrodynamics Hartree-Fock (QED-HF) method based on the reference wavefunction
|0^ e0^ p⟩ = |0^ e⟩⊗ |0^ p⟩
which is a direct product of a Slater determinant of electronic spin orbitals (|0^ e⟩) and a zero-photon state (|0^ p⟩). This zero-photon state is defined as a linear combination of photon-number states
|0^ p⟩ = ∑_n (b̂^†)^n |0⟩ c_n
where |0⟩ represents the photon vacuum.
The functions |0^ e⟩ and |0^ p⟩ can be determined via the following modified Roothaan-Hall procedure. In the first step, the electronic wavefunction can be determined as the Slater determinant that minimizes the expectation value of Eq. (<ref>), given a fixed zero-photon state. Second, given |0^ e⟩, we integrate out the electronic degrees of freedom of Eq. (<ref>) to obtain a photon Hamiltonian
Ĥ_ p = ⟨ 0^ e | Ĥ | 0^ e⟩
the lowest eigenfunction of which is |0^ p⟩. In practice, |0^ p⟩ can be determined by expanding Ĥ_ p in a basis of photon-number states and bringing it to diagonal form. This two-step procedure should be repeated until self-consistency.
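As an illustration of the second step, the sketch below (with a function name and placeholder expectation values of our own choosing) builds Ĥ_p in a truncated photon-number basis and diagonalizes it to obtain |0^ p⟩. In a working QED-HF code the three expectation values would be evaluated from the current Slater determinant at each macroiteration.

import numpy as np

def photon_step(E_elec, lam_mu_exp, lam_mu_sq_exp, w_cav, nmax):
    # Build H_p = <0e|H|0e> in the photon-number basis and return its lowest
    # eigenpair; the three expectation values are taken over the current |0e>.
    dim = nmax + 1
    b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    bdag = b.T
    H_p = (E_elec * np.eye(dim)
           + w_cav * (bdag @ b)
           - np.sqrt(w_cav / 2.0) * lam_mu_exp * (bdag + b)
           + 0.5 * lam_mu_sq_exp * np.eye(dim))
    vals, vecs = np.linalg.eigh(H_p)
    return vals[0], vecs[:, 0]

# Hypothetical expectation values (not from a real molecule); |0p> requires many
# number states to converge when lam_mu_exp is large, which underlies the
# photon-basis convergence issues discussed below.
e_p, c_n = photon_step(E_elec=-1.0, lam_mu_exp=0.2, lam_mu_sq_exp=0.05,
                       w_cav=0.1, nmax=20)
print(e_p, c_n[:4])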
One key detail in this procedure is that incorrect behavior can be recovered if the photon space is not fully converged. As an example, Fig. <ref>(a) illustrates the QED-HF energy for a cavity-bound hydrogen fluoride cation (described by the cc-pVQZ basis set) as the molecule is moved away from the origin. Here, the cation is coupled to a single-mode cavity with a fundamental frequency of 2 eV, the cavity mode is polarized along the molecular axis, the coupling strength, λ, is 0.05 atomic units, and the H–F distance is fixed at 0.917 Å throughout the translation. The QED-HF energy should be origin invariant, but, as is evident from the data, the correct invariance properties are only observed in the limit that the photon basis is complete. Figure <ref>(b) illustrates the error in the QED-HF energy, with respect to calculations carried out in the so-called “coherent-state basis,”<cit.> which, as discussed below, yields results that are equivalent to those obtained with a complete photon basis. Here, we can see that even with 20 photon number states, the QED-HF energy is still not strictly origin invariant, and this issue is more pronounced the farther from the origin the molecule is placed.
Aside from origin invariance, the QED-HF energy should be independent of the photon frequency;<cit.> any polaritonic wave function that is factorizable as a product of an electronic wave function and a photonic wave function should have this property. Figure <ref> illustrates the frequency dependence of the QED-HF energy for the same cavity-bound hydrogen fluoride cation when the molecule is placed 10 Å from the origin. Clearly, an incomplete photon basis leads to an incorrect frequency dependence in the QED-HF energy. The errors with respect to calculations carried out in the coherent-state basis depicted in Fig. <ref>(b) demonstrate that errors due to the incompleteness of the photon basis can be quite large, even when considering 20 photon number states. In this case, errors larger than 10^-3 E_ h are observed for cavity mode frequencies less than 1.5 eV; these errors become much smaller as the photon frequency increases.
As alluded to above, an equivalent representation of ground-state QED-HF involves representing the problem within the coherent-state basis,<cit.> which is the basis that diagonalizes Ĥ_ p. In this way, we avoid the need to solve the second step of the modified Roothaan-Hall procedure described above and automatically ensure convergence of the procedure with respect to the number of photon-number states. In the coherent-state basis, we need only solve the electronic problem with a transformed Hamiltonian, the form of which is derived in the next subsection.
§.§ Coherent-State Transformation of the Hamiltonian
As noted in Ref. , |0^ p⟩ can be exactly defined with a unitary coherent-state transformation operator of the form
Û_ CS = exp( z(b̂^† - b̂) )
where z is a parameter defined such that Û_ CSĤ_ pÛ^†_ CS is a diagonal operator:
z = -λ·⟨μ̂⟩/√(2 ω_ cav).
The term ⟨μ̂⟩ in Eq. <ref> represents the expectation value of the molecular dipole moment (with respect to the Slater determinant, |0^ e⟩), which is also a vector quantity.
We can relate the photon vacuum to the zero-photon state through the unitary transformation
defined in Eq. <ref>,
|0^ p⟩ = Û^†_ CS |0⟩
where |0⟩ represents the photon vacuum. Now, consider the expectation value of the PF Hamiltonian with respect to the QED-HF wavefunction:
⟨ 0^ e0^ p | Ĥ | 0^ e0^ p⟩ = ⟨ 0^ e | ⊗⟨ 0 | Û_ CSĤÛ^†_ CS |0 ⟩⊗ | 0^ e⟩
From the right-hand side of this expression, it is evident that the electronic wave function, |0^ e⟩, could be determined by minimizing the expectation value of the transformed Hamiltonian, ⟨ 0 | Û_ CSĤÛ^†_ CS | 0⟩, with respect to variations in the orbitals, without any explicit consideration of the photon degrees of freedom. Hence, by applying the coherent-state transformation to the full PF Hamiltonian, we avoid the second step of the modified Roothaan-Hall procedure for QED-HF that is outlined above.
To transform Ĥ_PF to the coherent-state basis, we note that
Û_ CSb̂Û^†_ CS = b̂ - z[b̂, (b̂^† - b̂)] = b̂ - z
Û_ CSb̂^†Û^†_ CS = b̂^† - z[b̂^†, (b̂^† - b̂)] = b̂^† - z
Û_ CSb̂^†b̂Û^†_ CS =
Û_ CSb̂^†Û^†_ CSÛ_ CSb̂Û^†_ CS = (b̂^† - z) (b̂ - z).
So, applying this transformation to Eq. <ref> yields
Ĥ_ CS = Ĥ_e + ω_ cav (b̂^† - z) (b̂ - z) - √(ω_ cav/2)λ·μ̂ (b̂^† + b̂ - 2z) + 1/2 (λ·μ̂)^2,
and substituting Eq. <ref> gives the specific form of the Pauli-Fierz Hamiltonian in the coherent state basis:
Ĥ_ CS = Ĥ_e + ω_ cavb̂^†b̂ - √(ω_ cav/2) [λ· (μ̂ - ⟨μ̂⟩ )] (b̂^† + b̂) + 1/2 [λ· (μ̂ - ⟨μ̂⟩ )]^2 .
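The operator identities used in this transformation can be checked numerically by representing Û_ CS as a matrix exponential in a generously truncated number basis; the sketch below verifies that b̂ → b̂ - z and b̂^† → b̂^† - z in the low photon-number block, far from the truncation edge. This is only a sanity check of the algebra, not part of any production implementation.

import numpy as np
from scipy.linalg import expm

nmax, z = 40, -0.3                               # generous truncation; z as in the text
dim = nmax + 1
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
bdag = b.T

U_cs = expm(z * (bdag - b))                      # U_CS is real orthogonal here
b_t = U_cs @ b @ U_cs.T                          # U_CS b   U_CS^+
bdag_t = U_cs @ bdag @ U_cs.T                    # U_CS b^+ U_CS^+

k = 10                                           # compare only far from the truncation edge
print(np.allclose(b_t[:k, :k], b[:k, :k] - z * np.eye(k), atol=1e-8))        # True
print(np.allclose(bdag_t[:k, :k], bdag[:k, :k] - z * np.eye(k), atol=1e-8))  # True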
Although we see that in Figure <ref> the
total energy for charged systems remains origin invariant in the coherent state basis, the orbitals and the Fock matrix itself are not origin invariant for charged systems in this formulation. This presents
challenges for introducing perturbative corrections for
electron-electron and electron-photon correlation. This was recently
observed by Riso et al. who developed a strong coupling quantum
electrodynamics Hartree-Fock theory (SC-QED-HF) that leads to
a fully origin-invariant formulation <cit.> based on the
following ansatz:
|Φ_SCQEDHF⟩ = exp( -λ/√(2ω_ cav)∑_p ση_pσâ^†_pσâ_pσ(b̂ - b̂^†) ) | 0^ e⟩ |0⟩
where â^†_pσ and â_pσ are fermionic creation and annihilation operators for spin orbital pσ and η_p are orbital-specific coherent state coefficients.
§.§ Cavity QED Hartree-Fock (QED-HF) in the Coherent-State Basis
Consider a QED-HF wave function of the form of Eq. <ref>. We express the photon state using the coherent-state transformation (Eq. <ref>) and take the expectation value of the Pauli-Fierz Hamiltonian to give
E_QED-HF = ∑_μν ( T_μν + V_μν + 1/2 J_μν - 1/2 K_μν) γ_μν + ⟨1/2 [λ· (μ̂_ e - ⟨μ̂_ e⟩ )]^2⟩
Here, μ and ν represent atomic basis functions, and
T_μν, V_μν, J_μν, and K_μν are electron kinetic energy integrals, electron-nucleus potential energy integrals, elements of the Coulomb matrix, and elements of the exchange matrix, respectively. The elements of the Coulomb and exchange matrices are defined by
J_μν = ∑_λσ (μν|λσ) γ_λσ
and
K_μν = ∑_λσ (μλ | σν) γ_λσ
where the symbol (μν|λσ) represents a two-electron repulsion integral in chemists' notation, and
γ_μν = ∑_i^N_ e c^*_μ i c_ν i is the one-particle reduced density matrix
(with {c_μ i} and N_ e being molecular orbital coefficients and the number of electrons, respectively).
The last term in Eq. <ref> is the dipole self-energy; note that, in the coherent-state basis, this quantity depends on only electronic degrees of freedom. Note also that the bilinear coupling term in Eq. <ref> does not contribute to the QED-HF total energy when the Hamiltonian is represented in the coherent-state basis. This property is shared by all QED approaches where the wave function is represented as a product of electron and photon functions (e.g., in the QED-DFT approach described in Ref. and in Sec. <ref>).
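In a Psi4Numpy-style implementation, the Coulomb and exchange contractions defined above reduce to single einsum calls. The sketch below uses random placeholder integrals and a placeholder density matrix simply to illustrate the index patterns; in practice these arrays would come from an integral library.

import numpy as np

nbf = 6
rng = np.random.default_rng(0)

# Placeholder two-electron integrals I[m,n,l,s] = (mn|ls) with 8-fold symmetry
I = rng.random((nbf, nbf, nbf, nbf))
I = I + I.transpose(1, 0, 2, 3)                  # (mn|ls) = (nm|ls)
I = I + I.transpose(0, 1, 3, 2)                  # (mn|ls) = (mn|sl)
I = I + I.transpose(2, 3, 0, 1)                  # (mn|ls) = (ls|mn)

C = rng.random((nbf, 3))                         # 3 "occupied" orbitals (placeholder coefficients)
gamma = C @ C.T                                  # gamma_{mn} = sum_i C_{mi} C_{ni}

J = np.einsum('mnls,ls->mn', I, gamma)           # J_{mn} = sum_{ls} (mn|ls) gamma_{ls}
K = np.einsum('mlsn,ls->mn', I, gamma)           # K_{mn} = sum_{ls} (ml|sn) gamma_{ls}
print(np.allclose(J, J.T), np.allclose(K, K.T))  # both symmetric for a symmetric gamma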
The implementation of the dipole self-energy term is not consistent across the literature, with the difference being the treatment of the square of the electric dipole operator. To appreciate these differences, we first expand the dipole self-energy operator as
1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 = 1/2 ( λ·μ̂_ e ) ^2 - ( λ·μ̂_ e ) ( λ·⟨μ̂_ e⟩ )+ 1/2 ( λ·⟨μ̂_ e⟩ ) ^2.
Now, the square of the electric dipole operator (the first term on the right-hand side of Eq. <ref>) can be expanded in terms of one- and two-electron contributions as
( λ·μ̂_ e ) ^2 = ∑_i ≠ j [ λ·μ̂_ e(i) ][ λ·μ̂_ e(j)] + ∑_i [ λ·μ̂_ e(i) ]^2.
where i and j represent different electrons.
The right-hand side of Eq. <ref> can be expressed in second-quantized notation as
( λ·μ̂_ e ) ^2 = ∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν - ∑_μν q_μνâ^†_μâ_ν.
where â^† and â represent fermionic creation and annihilation operators, respectively. The symbols d_μν and q_μν represent modified electric dipole and electric quadrupole integrals, which have the form
d_μν = - ∑_a ∈{x,y,z}λ_a ∫χ^*_μ r_a χ_ν dτ,
and
q_μν = - ∑_ab ∈{x,y,z}λ_a λ_b ∫χ^*_μ r_a r_b χ_ν dτ.
respectively, and are evaluated over atomic basis functions, χ_μ. Here, λ_a is a cartesian component of λ, and r_a is a cartesian component of the position vector [e.g., for 𝐫 = (x, y, z), r_x = x]. As is well known, the square of an operator expanded initially in first quantization and then represented in second quantization is not necessarily the same as the square of the second quantized form of the operator; these representations are only equivalent in the limit that the one-electron basis set is complete. Equation <ref> makes no assumptions about the completeness of the one-particle basis set and is the form of the square of the dipole operator employed in Refs. . On the other hand, many other studies take the second-quantized form of the square of the electric dipole operator to be the product of second-quantized electric dipole operators, which leads to
( λ·μ̂_ e ) ^2 = ∑_μνλσ d_μν d_λσâ^†_μâ_νâ^†_λâ_σ
= ∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν +∑_μνâ^†_μâ_ν∑_σ d_μσ d_σν.
In these studies, the assumption that the basis set is complete is never stated, but this choice is evident in the form of the Fock matrix (see Eq. 30 of Ref. , for example). In this review, we choose the form of ( λ·μ̂_ e ) ^2 given by Eq. <ref>. Given that choice, and the fact that
( λ·μ̂_ e ) = ∑_μν d_μνâ^†_μâ_ν,
we arrive at
1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 = 1/2∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν
+ ∑_μν O^ DSE_μνâ^†_μâ_ν + 1/2 (λ·⟨μ_ e⟩)^2.
where
O^ DSE_μν = -( λ·⟨μ̂_ e⟩ ) d_μν - 1/2 q_μν.
Now, we can evaluate the expectation of Eq. <ref> with respect to a single determinant, which gives
⟨1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 ⟩ = ∑_μν (1/2 J^ DSE_μν - 1/2 K^ DSE_μν + O^ DSE_μν ) γ_μν
+ 1/2 (λ·⟨μ_ e⟩)^2
Here, J^ DSE_μν and K^ DSE_μν are elements of dipole self-energy matrices that are analogies of the usual Coulomb and exchange matrices:
J^ DSE_μν = d_μν∑_λσ d_λσγ_λσ = (λ·⟨μ̂_ e⟩) d_μν
K^ DSE_μν = ∑_λσ d_μσ d_λνγ_λσ.
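Assuming the λ-scaled dipole integrals d_μν, the quadrupole-like integrals q_μν, and the density matrix are available as arrays, the dipole self-energy intermediates defined above amount to a handful of contractions, as in the sketch below (random symmetric placeholders; only the contraction patterns are meaningful).

import numpy as np

nbf = 6
rng = np.random.default_rng(1)
sym = lambda M: 0.5 * (M + M.T)
d = sym(rng.random((nbf, nbf)))                  # d_{mn}: lambda-contracted dipole integrals (placeholder)
q = sym(rng.random((nbf, nbf)))                  # q_{mn}: lambda-contracted quadrupole integrals (placeholder)
gamma = sym(rng.random((nbf, nbf)))              # one-particle density matrix (placeholder)

lam_mu_exp = np.einsum('ls,ls->', d, gamma)      # lambda . <mu_e>
J_dse = lam_mu_exp * d                           # Coulomb-like DSE matrix
K_dse = np.einsum('ms,ln,ls->mn', d, d, gamma)   # exchange-like DSE matrix
O_dse = -lam_mu_exp * d - 0.5 * q                # one-electron DSE matrix
F_dse = O_dse + J_dse - K_dse                    # DSE contribution to the Fock matrix (see below)
print(F_dse.shape)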
With all of the components of the energy (Eq. <ref>) defined, we can make this energy stationary with respect to the molecular orbital expansion coefficients, {c_μ i}, while enforcing orthogonality of the molecular orbitals, which leads to a set of Hartree-Fock equations that resembles those in the ordinary electronic problem, augmented by the dipole self-energy contributions. As such, QED-HF orbitals are eigenfunctions of a modified Fock matrix,
F_μν = T_μν + V_μν + J_μν - K_μν
+ O^ DSE_μν + J_μν^ DSE - K_μν^ DSE
For organizational purposes, it will become convenient to partition the Fock matrix into contributions that define the canonical Fock operator, F^ C_μν = T_μν + V_μν + J_μν - K_μν,
plus terms that derive from the dipole self energy,
F^ DSE_μν = O^ DSE_μν + J_μν^ DSE - K_μν^ DSE.
Upon solving the QED-HF equations, one obtains a set of molecular orbitals corresponding to the (mean-field) ground state of a many-electron system coupled to an optical cavity. For sufficiently large coupling strengths, the cavity can induce significant changes in these orbitals, as compared to orbitals obtained from a standard HF procedure on the isolated many-electron system. Here, we examine such changes for a formaldehyde molecule that has been coupled to a single-mode optical cavity. Excited states of this system have been explored using QED generalizations of time-dependent density functional theory<cit.> (see Sec. <ref> for a description of the relevant theory). Here, we adapt the results of Ref. and focus on cavity-induced changes to the ground state (i.e., to the molecular orbitals). We supplement this discussion with a tutorial implementation of QED-HF that the interested reader can find https://github.com/FoleyLab/psi4polaritonic/blob/cpr/QED-HF_Tutorial.ipynbonline.<cit.> The tutorial provides a benchmark calculation on the water molecule, and can be modified to study other systems.
As described in Ref. , the geometry of isolated formaldehyde was optimized using restricted HF (RHF) theory and the cc-pVDZ basis set, and the principal symmetry axis of the molecule is aligned along the z-axis. At this level, the RHF ground-state has a dipole moment oriented along the z-axis with ⟨μ⟩_z = -1.009 a.u.
We consider solutions to the QED-HF equations for a coupling vector with fixed magnitude,
(i.e., |λ| = 0.1 a.u.),
and three different cavity mode polarizations:
λ_y = 0.1 ê_y a.u.,
λ_z = 0.1 ê_z a.u., and
λ_yz = √(1/2) (λ_y + λ_z) a.u., with
ê_y=(0, 1, 0) and
ê_z=(0, 0, 1).
As compared to the HF energy, the QED-HF energy is higher in all cases, with the largest increase occurring for λ_z (see Table I). Going back to the explicit expressions for the QED-HF dipole self energy derived above, we can see that this large change likely originates from the permanent dipole moment that is oriented along the z-axis, which contributes to the last term in Eq. <ref>.
The cavity-induced changes to the energy for the other polarizations point to important effects arising from the other contributions to Eq. <ref>. Specifically, in the case of λ_y, we should see no permanent dipole moment contributions to the dipole self energy, which indicates that the cavity effects stem entirely from the quadrupolar contribution to O^ DSE (Eq. <ref>) and the exchange-like contribution (Eq. <ref>).
To quantify cavity-induced changes to the energy, Ref. considered how various contributions to the QED-HF energy change with and without coupling to the photon field. Specific formulae for these couplings are given in reference Foley_154103.
The quadrupolar contribution to O^ DSE (Δ_1qe) and the Coulomb-like and exchange-like contributions (Eqs. <ref> and <ref>), the combination of which is denoted Δ_2de in Table I,
typically account for the largest changes to the QED-HF energy
for the three polarizations considered in Table I. However, the changes in the one- and two-electron contributions to the canonical RHF energy (denoted Δ_1E and Δ_2E) suggest that cavity-induced changes to the orbitals themselves can have appreciable energetic consequences. We note that the various components of the energetic changes largely cancel with one another (i.e. Δ_1E≈ -Δ_2E in all three
cases), leading to more modest changes in the total energy (see Table I).
Aside from the energy, we can also visualize the impact that the cavity has on the real-space form of the molecular orbitals. As an example, Fig. <ref> depicts HF orbitals for the highest occupied molecular orbital (HOMO, 2B_2) and the second-lowest unoccupied molecular orbital (LUMO+1, 6A_1) for an isolated formaldehyde molecule and the corresponding QED-HF orbitals for the
λ_yz case ( 7A^' and 8A^' ). The QED-HF orbitals are noticeably distorted compared to the HF ones, which results in a reduction of symmetry from
C_2v to C_s and impacts both ground-state energy and properties.
The direct inclusion of these cavity-induced effects on the orbital basis is one appealing advantage of ab initio QED methods.
§.§ Cavity QED Density Functional Theory (QED-DFT)
The QED-HF theory outlined above can easily be adapted to develop a QED generalization of Kohn-Sham DFT, or QED-DFT.<cit.> To do so, one simply follows the basic premise of Kohn-Sham DFT:<cit.> there exists a fictitious system of non-interacting photons and electrons that has the same density as the fully-interacting system. The QED-DFT ground-state is then taken to have the form of Eq. <ref>, except that |0^ e⟩ now refers to a determinant of Kohn-Sham orbitals. As with QED-HF, the photon part of the wave function can be exactly represented using the coherent-state transformation operator, see Eq. <ref>. All electron-electron correlation and exchange effects and electron-photon correlation effects can then, in principle, be accounted for by appropriate functionals of the density (and gradient of the density, etc.), as in standard Kohn-Sham DFT. Historically, QED-DFT was predated by a different generalization of DFT for cavity QED applications, called QEDFT, <cit.>
which, rather than following the Kohn-Sham scheme, represents the electronic and photonic degrees of freedom directly in real space.
QED-DFT studies typically employ standard exchange-correlation functionals used in electronic structure theory (i.e., they ignore electron-photon correlation effects), while, for QEDFT, a few examples of electron-photon correlation functions have been put forward.<cit.>
§ SINGLE-PARTICLE POST-SCF CAVITY QED METHODS
§.§ Cavity QED-Configuration Interaction with Single Excitations (QED-CIS)
A general correlated wave function for a many-electron system coupled to a single-mode cavity could take the form
|Ψ⟩ = ∑_μ∑_A c_μ^A | μ^ e⟩⊗ | A^ p⟩
where |μ^ e⟩ represents a determinant of electronic orbitals, |A^ p⟩ is a photon-number state corresponding to A photons in the cavity mode, and c_μ^A is an expansion coefficient. If {|μ^ e⟩} includes all possible determinants and {|A^ p⟩} includes all possible photon-number states, then this full configuration interaction (CI) wave function provides an exact description of the electronic/polaritonic structure, within a given one-electron basis set. However, as in the usual electronic case, a full CI description of a cavity-coupled many-electron system is, in general, an intractable prospect. The simplest solution to this problem is to truncate both the many-electron basis and the photon basis at some level.
McTague and Foley proposed<cit.> a truncated cavity QED-CI approach wherein the sum over Slater determinants, μ, in Eq. <ref> was restricted to include only the reference electronic configuration, |0^ e⟩, and all single electronic excitations out of this configuration, and the sum over photon-number states was restricted to include only states representing zero or one photon in the cavity (|0⟩ and |1⟩, respectively). Those authors termed this approach cavity QED configuration interaction with single excitations, or CQED-CIS, but, following the naming convention used in some QED coupled-cluster approaches<cit.> (see Sec. <ref>), we adopt the name QED-CIS-1. The QED-CIS-1 wave function for state I takes the form
|Ψ_I⟩ = c_0^0 |0^ e⟩⊗ |0⟩ + ∑_i,a c_ia^0 |Φ_i^a⟩⊗ |0⟩ + c_0^1 |0^ e⟩⊗ |1⟩ + ∑_i,a c_ia^1 |Φ_i^a⟩⊗ |1⟩.
Following Ref. ,
|Φ_i^a⟩ = 1/√(2)(|Φ_i_α^a_α⟩ + |Φ_i_β^a_β⟩) represents a singlet spin-adapted basis function, where |Φ_i_σ^a_σ⟩ is a determinant generated by exciting an electron with spin σ from a spatial orbital that is occupied in |0^ e⟩, ϕ_i, to an unoccupied spatial orbital, ϕ_a. For multiple cavity modes, QED-CIS-1 is defined such that the photon basis includes all possible combinations zero or one photon in each of the modes.
The expansion coefficients in Eq. <ref> can be determined as the elements of the eigenvectors of the matrix representation of the Pauli-Fierz Hamiltonian represented within the coherent-state basis (Ĥ_CS, Eq. <ref>), i.e., by solving the eigenvalue problem
[ 0 0 0 ħ g; 0 A + Δ ħ g^† ħ G; 0 ħ g ħω 0; ħ g^† ħ G 0 A + Δ + ħΩ ][ c^0_0; c^0_ia; c^1_0; c^1_ia ]
=
Ω_QED-CIS-1[ c^0_0; c^0_ia; c^1_0; c^1_ia ]
Note that the matrix on the left-hand side of Eq. <ref> actually is the matrix representation of Ĥ_ CS - E_QED-HF, where E_QED-HF is the energy of the QED-HF reference state. The elements of A are similar to those encountered in canonical CIS theory,
A_ia,jb = F^ C_abδ_ij - F^ C_ij δ_ab + 2(ia|jb) - (ij|ab),
with important differences being that (i) the two-electron integrals are performed over QED-HF orbitals, and (ii) F^ C is not diagonal in the QED-HF basis when the coupling strength is non-zero.
The dipole self energy contribution to the Hamiltonian in the subspace of spin-adapted singly-excited functions is contained in the Δ matrix, with elements
Δ_ia,jb =F^ DSE_abδ_ij - F^ DSE_ij δ_ab + 2 d_ia d_jb
-d_ij d_ab,
Again, we note that F^ DSE is not necessarily diagonal in the QED-HF basis.
The symbol Ω represents a diagonal matrix of photon energy contributions, defined by
Ω_ia,jb = ωδ_ijδ_ab.
The symbols g and G arise from the bilinear coupling term in Ĥ_ CS and are defined by
g_ia = -√(ω) d_ia
and
G_ia,jb = √(ω/2)( d_ijδ_ab
- d_abδ_ij + ⟨ d ⟩δ_ijδ_ab)
The g term couples the reference to |Φ_i^a⟩ |1⟩, while G couples singly-excited configurations with different photon numbers, i.e., |Φ_i^a⟩ |0⟩ and |Φ_i^a⟩ |1⟩. Note that the fact that g couples the reference to |Φ_i^a⟩ |1⟩ implies that QED-CIS-1 captures some electron-photon correlation effects. Indeed, the lowest eigenvalue, Ω_QED-CIS-1, obtained from solving Eq. <ref> is nonpositive and represents an electron-photon correlation energy.
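The block structure of the QED-CIS-1 eigenvalue problem is straightforward to assemble once A, Δ, G, g, and ω are in hand. The sketch below fills the blocks with random symmetric placeholders (so the numbers themselves are meaningless) and diagonalizes the result; because the reference block is zero and couples to the |Φ_i^a⟩ |1⟩ block through g, the lowest eigenvalue of this matrix is necessarily nonpositive, consistent with its interpretation as an electron-photon correlation energy.

import numpy as np

n, w = 8, 0.1                                    # number of singles, cavity frequency
rng = np.random.default_rng(2)
sym = lambda M: 0.5 * (M + M.T)
A = sym(rng.random((n, n))) + np.diag(1.0 + 0.1 * np.arange(n))   # stand-in for the A block
Delta = sym(0.1 * rng.random((n, n)))            # stand-in for the Delta block
G = 0.05 * sym(rng.random((n, n)))               # stand-in for the G block
g = -np.sqrt(w) * 0.1 * rng.random(n)            # stand-in for the g vector

dim = 2 * (n + 1)                                # ordering: |0e,0>, |ia,0>, |0e,1>, |ia,1>
H = np.zeros((dim, dim))
H[1:n+1, 1:n+1] = A + Delta
H[n+1, n+1] = w
H[n+2:, n+2:] = A + Delta + w * np.eye(n)
H[0, n+2:] = H[n+2:, 0] = g                      # reference <-> |ia,1> (bilinear coupling)
H[1:n+1, n+1] = H[n+1, 1:n+1] = g                # |ia,0> <-> |0e,1>
H[1:n+1, n+2:] = G                               # |ia,0> <-> |jb,1>
H[n+2:, 1:n+1] = G.T

evals = np.linalg.eigvalsh(H)
print(evals[0])                                  # <= 0: electron-photon correlation energy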
§.§ Cavity QED Time-Dependent Density Functional Theory (QED-TDDFT)
Given the popularity of time-dependent DFT (TDDFT) for the electronic structure problem, it is not surprising that multiple generalizations of TDDFT have been proposed and applied to cavity-embedded molecular systems. Both real-time<cit.> and linear-response<cit.> formulations have been put forward; here, we focus on the linear-response approaches because they more closely resemble the QED-CIS-1 method discussed above. Both real-space<cit.> and atom-centered Gaussian basis function<cit.> representations of the electronic structure have been used within linear-response QED-TDDFT. In the latter category, Refs. and have considered QED-TDDFT calculations on top of canonical Kohn-Sham reference configurations (i.e., |0^ e⟩⊗ |0⟩, where |0^ e⟩ is a Kohn-Sham determinant optimized in the absence of the cavity), while Refs. and have considered fully relaxed QED-DFT reference functions and represented the QED-TDDFT problem in the coherent-state basis, similar to what is done in QED-CIS-1. As discussed in Ref. , significant differences in excitation energies obtained from these “unrelaxed” and “relaxed” QED-TDDFT protocols can occur when considering large coupling strengths. In either case, linear-response QED-TDDFT can be implemented as a solution to a generalization of Casida's equations
[ A +Δ B + Δ' ħ g^† ħg̃^†; B +Δ' A + Δ ħ g^† ħg̃^†; ħ g ħ g ħω 0; ħg̃ ħg̃ 0 ħω ][ X; Y; M; N ]
=
Ω^QED-TDDFT[ 1 0 0 0; 0 - 1 0 0; 0 0 1 0; 0 0 0 - 1 ][ X; Y; M; N ]
Assuming a spin-adapted basis, the A matrix is the same as that given in Eq. <ref>, except that the exchange term (ij|ab) is replaced with appropriate derivatives of the exchange-correlation energy. For a cavity QED random phase approximation (RPA), the B matrix has elements
B_ia,jb = 2(ia|jb) - (ib|ja)
and, for QED-TDDFT, the exchange term (ib|ja) is again replaced by the appropriate derivatives of the exchange-correlation energy. The Δ' matrix has elements
Δ^'_ai,bj = 2 d_ai d_bj - d_aj d_ib
and, lastly, g̃ = g. As described, the QED-TDDFT formalism corresponds to the “relaxed” one developed in Ref. . The “unrelaxed” QED-TDDFT method proposed in Ref. can be obtained by ignoring the effects of the cavity in the underlying ground-state Kohn-Sham problem and taking
Δ_ia,jb = Δ'_ia,jb = 2 d_ai d_bj
The elements of X, Y, M, and N parametrize the QED-TDDFT excited states; the elements of X and Y correspond to the usual electronic excitation and de-excitation amplitudes encountered in conventional TDDFT, while M and N refer to photon creation and annihilation amplitudes, respectively. We see clear connections to QED-CIS-1, where the CI coefficients c_ai^0 and c_0^1 play roles that are similar to those of the elements of X and M, respectively. Unlike QED-CIS-1, however, the linear-response QED-TDDFT equations do not couple the QED-DFT reference to any excited configurations. Hence, this approach does not account for any explicit electron-photon correlation effects, absent any that are included via the exchange-correlation functional. Such effects were ignored in Refs. ; all calculations reported therein used standard density functional approximations designed for non-QED applications.
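For completeness, the sketch below sets up the working equation above as a generalized eigenvalue problem with the (1, -1, 1, -1) metric and solves it with a generic dense eigensolver. The A, B, Δ, Δ', and g blocks are small random placeholders chosen only so that the response problem is stable; the output is meant to illustrate the paired ±Ω structure of the spectrum, not physical excitation energies.

import numpy as np
from scipy.linalg import eig

n, w = 6, 0.1
rng = np.random.default_rng(3)
sym = lambda M: 0.5 * (M + M.T)
A = np.diag(1.0 + rng.random(n)) + sym(0.05 * rng.random((n, n)))
B = sym(0.05 * rng.random((n, n)))
D = sym(0.05 * rng.random((n, n)))               # Delta
Dp = sym(0.05 * rng.random((n, n)))              # Delta'
g = 0.05 * rng.random((1, n))
gt = g.copy()                                    # gtilde = g, as noted in the text

LHS = np.block([[A + D,  B + Dp, g.T,              gt.T],
                [B + Dp, A + D,  g.T,              gt.T],
                [g,      g,      w * np.eye(1),    np.zeros((1, 1))],
                [gt,     gt,     np.zeros((1, 1)), w * np.eye(1)]])
metric = np.diag(np.r_[np.ones(n), -np.ones(n), 1.0, -1.0])

omega = np.sort(eig(LHS, metric, right=False).real)
print(omega[omega > 0][:4])                      # excitation energies appear in +/- pairs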
§.§ The QED-TDDFT and QED-CIS prisms
As mentioned above, some coefficients from the QED-CIS-1 problem map directly onto amplitudes that arise in QED-TDDFT. However, QED-CIS-1 lacks analogues to the de-excitation and annihilation amplitudes (Y and N, respectively). That said, in Ref. , Shao and coworkers explored an approximation to QED-TDDFT that ignored these terms, called the Tamm-Dancoff - Rotating Wave Approximation (TDA-RWA) in that work, which has a simpler structure that is more similar to QED-CIS-1. The TDA-RWA eigenvalue problem is
[ A +Δ ħ g^†; ħ g ħω ][ X; M; ]
=
Ω^TDA-RWA[ X; M ].
The primary differences between QED-CIS-1 and TDA-RWA are (i) the different definitions of the A matrix that we have already discussed and (ii) the fact that TDA-RWA, like QED-TDDFT, does not account for simultaneous electronic excitations and photon creation, which would couple the QED-DFT reference to excited configurations. Other subtle differences exist, depending on whether the TDA-RWA is done in a fully relaxed way or not (as discussed in the context of QED-TDDFT above). The TDA-RWA approach is only one of eight possible approximations to QED-TDDFT that Shao and co-workers analyzed in Ref. ; these approximations live on what those authors describe as the QED-TDDFT prism (see Figure <ref>). The facets of their prism include all possible combinations of including or neglecting of the B matrix, the Δ/Δ' matrices, and g̃.
An analogous family of approximations to QED-CIS-1 can be developed by neglecting Δ or the bilinear coupling terms in Eq. <ref> or by excluding simultaneous electron excitation and photon creation terms (|Φ_i^a⟩⊗ |1⟩) in Eq. <ref>. For example, excluding |Φ_i^a⟩⊗ |1⟩ from the wave function expansion results in a QED-CIS method that has the same structure as TDA-RWA:
[ A +Δ ħ g^†; ħ g ħω ][ c^0_ia; c^1_0 ]
=
Ω_QED-CIS[ c^0_ia; c^1_0 ]
On the other hand, neglecting Δ Eq. <ref> leads to a Jaynes-Cummings-like approximation to QED-CIS-1 (JC-CIS-1):
[ 0 0 0 ħ g; 0 A ħ g^† ħ G; 0 ħ g ħω 0; ħ g^† ħ G 0 A + ħΩ ][ c^0_0; c^0_ia; c^1_0; c^1_ia ]
=
Ω_JC-CIS-1[ c^0_0; c^0_ia; c^1_0; c^1_ia ]
and if we neglect Δ from
Eq. <ref>,
we arrive at a JC-CIS method that has the same structure as the TDA-JC method of
Shao and co-workers <cit.>:
[ A ħ g^†; ħ g ħω ][ c^0_ia; c^1_0 ]
=
Ω_JC-CIS[ c^0_ia; c^1_0 ]
Ref. provides a detailed analysis of the behavior of different facets of the QED-TDDFT prism for several cavity-coupled molecular systems. Here, we consider how the description of an MgH^+ cation coupled to a single-mode cavity differs for facets of the QED-CIS-1 prism. The cavity mode frequency is chosen to be resonant with the S_0 → S_1 transition in MgH^+ at an Mg–H distance of 2.2 Å (4.75 eV, as evaluated at the CIS/cc-pVDZ level of theory). The molecule is chosen to be oriented along the cavity mode polarization axis, and we consider two coupling strengths, |λ| = 0.01 a.u. and |λ| = 0.05 a.u. For the smaller coupling strength (|λ| = 0.01 a.u.), all facets of the prism provide a similar description of the upper and lower polariton states (see Fig. <ref>). On the other hand, clear differences between each model become evident for the stronger coupling strength (|λ| = 0.05 a.u.). Not surprisingly, energies from Jaynes-Cummings approximations (JC-CIS-1 and JC-CIS) are consistently lower than those from the Pauli-Fierz approaches (QED-CIS-1 and QED-CIS) because the Jaynes-Cummings model neglects the quadratic dipole self energy contributions, which are non-negative. We also see that QED-CIS-1 energies are consistent lower bounds to energies from QED-CIS; the reason is that simultaneous electron excitations and photon creation terms in QED-CIS-1 account for electron-photon correlation effects that lower the energy. For large coupling strengths, these effects can be quite large; at an Mg–H bond length of 2.2 Å and |λ| = 0.05 a.u., for example, the energies of the upper- and lower-polariton states computed by QED-CIS and QED-CIS-1 energies differ by 12.4 mE_h and 5.35 mE_h, respectively.
As mentioned above, simultaneous electron excitations and photon creation terms in QED-CIS-1 incorporate electron-photon correlation effects into the approach and, as a result, the lowest-energy eigenvalue associated with Eq. <ref> is nonpositive and corresponds to an electron-photon correlation contribution to the ground-state energy. Table <ref> quantifies these effects for a formaldehyde molecule coupled to a single-mode cavity with two different coupling vectors, λ_z and λ_yz, which both have magnitudes of 0.1 a.u. and were defined in <ref>. The geometry for formaldehyde was taken from Ref. , with the principal axis of the molecule aligned in the z-direction. The authors of Ref. considered a photon mode with ω = 10.4 eV, which is approximately resonant with the first two dipole allowed transitions at the CIS/cc-pVDZ level of theory. The changes to the ground-state energy as predicted by QED-CIS-1 are given relative to the canonical RHF method and the QED-HF method in Table <ref>. A Jupyter-notebook-based tutorial implementing the prism of QED-CIS-1 methods can be found https://github.com/FoleyLab/psi4polaritonic/blob/cpr/QED-CIS-1.ipynbonline.<cit.> The tutorial provides a benchmark calculation on the MgH^+ ion, and it can easily be modified to study other systems.
§ CAVITY QED COUPLED CLUSTER (QED-CC)
Beyond the single-particle theories discussed in the previous sections, a number of groups have considered many-body frameworks for ab initio cavity QED calculations. Many of these efforts have focused on the coupled-cluster (CC)<cit.> ansatz, which has enjoyed great success in conventional (non-QED) quantum chemistry applications. CC methods exhibit a number of desirable features that have contributed to this success, including the size-extensivity of truncated CC expansions, the size-intensivity of equation of motion (EOM)<cit.> or linear-response<cit.> CC excitation energies, and systematic convergence of the approach toward the full CI limit.
Two slightly different generalizations of CC theory for use with the PF Hamiltonian appeared in the literature at roughly the same time.<cit.> The polaritonic coupled-cluster theory of Mordovina, Bungey, Appel, Knowles, Rubio, and Manby
<cit.> considered an exponential parametrization of the ground-state polaritonic wave function that included single and double electronic transition operators, as well as photon creation operators and coupled electron transition and photon creation operators. They applied this ansatz, along with QED full CI, to the description of strong coupling between a single photon mode and a four-site Hubbard model. It should be noted that this work did not use typical boson creation operators, but, rather, nilpotent operators that lead to a linear parametrization of the photon space. On the other hand, the QED-CCSD-1 model presented by Haugland, Ronca, Kjønstad, Rubio, and Koch<cit.> used an exponential parametrization of similar complexity, along with more familiar (non-nilpotent) boson creation operators, and they applied this approach to strong coupling problems involving an ab initio molecular Hamiltonian. The ground-state QED-CCSD-1 wave function is
|Ψ_ CC⟩ = e^T̂|Φ_0⟩
with
T̂ = ∑_ia t_i^a â^†_a â_i + 1/4∑_ijab t_ij^abâ^†_a â^†_b â_j â_i
+ u_0 b̂^† + ∑_ia u_i^a â^†_a â_i b̂^† + 1/4∑_ijab u_ij^abâ^†_a â^†_b â_j â_i b̂^†
and where |Φ_0⟩ is a reference configuration of the form
|Φ_0⟩ = |0^ e⟩⊗ |0⟩
In Eq. <ref>, the symbols t_i^a, t_ij^ab, u_0, u_i^a, and u_ij^ab represent the cluster amplitudes, and we can see that QED-CCSD-1 is an extension of the usual CCSD model<cit.> that includes both photon creation operators and products of electronic transition and photon creation operators.
Excited states in QED-CC theory are represented within the EOM-CC framework,<cit.> in which we define both left- and right-hand excited states of the form
| Ψ_I ⟩ = R̂_I e^T̂ | Φ_0 ⟩
⟨Ψ̃_I | = ⟨Φ_0 | L̂_I e^-T̂
where, the label I denotes the state. These functions satisfy left- and right-hand eigenvalue equations
⟨Φ_0 | L̂_I H̅ = ⟨Φ_0 | L̂_I E_I
H̅R̂_I |Φ_0 ⟩ = E_I R̂_I |Φ_0 ⟩
involving the similarity transformed PF Hamiltonian, H̅ = e^-T̂Ĥe^T̂. Here, Ĥ is represented in the coherent-state basis. At the EOM-QED-CCSD-1 level of theory, the R̂_I and L̂_I operators are defined by
L̂_I = l_0 + ∑_ail^i_a â^†_iâ_a + 1/4∑_abijl_ab^ijâ^†_i â^†_j â_b â_a
+ m_0 b̂ + ∑_aim^i_a â^†_iâ_a b̂ + 1/4∑_abijm_ab^ijâ^†_i â^†_j â_b â_a b̂
and
R̂_I = r_0 + ∑_air^a_i â^†_aâ_i + 1/4∑_abijr_ij^abâ^†_a â^†_b â_j â_i
+ s_0 b̂^† + ∑_ais^a_i â^†_aâ_i b̂^† + 1/4∑_abijs_ij^abâ^†_a â^†_b â_j â_i b̂^†
respectively, and the amplitudes appearing in Eqs. <ref> and <ref> are determined by solving Eqs. <ref> and <ref>.
Since 2020, several groups have developed implementations of similar QED-CC approaches and explored the influence of cavity effects on various ground-state properties. DePrince<cit.> used QED-CCSD-1 to demonstrate that strong coupling leads to appreciable changes in electron affinities in sodium halide compounds and that QED-HF significantly overestimates these effects. Ionization potentials were found to be less sensitive to cavity effects in these systems.
Pavošević and Flick<cit.> also explored the influence of cavity effects on electron affinities using a unitary formulation of QED-CCSD-1, implemented using the variational quantum eigensolver (VQE)<cit.> algorithm, on a quantum computer. They also extended the framework to include up to two photon creation operators plus single and double electronic excitations (termed QED-CCSD-2). These works led to a study on the features of ionization in QED environments by Riso, Haugland, Ronca and Koch<cit.> that highlighted the importance of an appropriate treatment of the ionized electron.
Beyond these studies on ionization / electron attachment, a number of works have used QED-CC approaches to explore how vacuum fluctuations can be leveraged in chemical contexts. Here, it is important to note that we are referring to changes to ground states of cavity-embedded systems, without driving transitions or creating polariton states via the addition of photons to the cavity. Pavošević, Hammes-Schiffer, Rubio, and Flick<cit.> used non-unitary QED-CCSD-2 to show that strong coupling leads to non-negligible changes in proton transfer reaction barrier heights; changes as large as 20% were reported in Ref. . These authors also introduced an approximation to QED-CCSD-2 in which single electron transitions appear with up to two photon creation operators, but double electron transitions only appear with up to single photon creation operators (termed QED-CCSD-21). This QED-CCSD-21 model has a similar structure to the approach of White, Gao, Minnich, and Chan,<cit.> which was developed to model electron-phonon interactions. Pavošević, Smith, and Rubio applied an approximate QED-CCSD-1 model (that ignores coupled two-electron plus photon interactions) to two cycloaddition reactions. In that work, the authors demonstrated that sufficiently strong coupling, along with precise control over the relative orientation of molecules and the cavity mode axis, could influence the major products of these reactions. Pavošević and Rubio have also incorporated QED-CCSD-1 into an embedding protocol<cit.> that treats a subset of a cavity-embedded molecular system using QED-CC and the remainder of the system via QED-DFT or QED-HF (termed `QED-CC-in-QED-SCF”). Assuming that electron-photon correlations are limited to the embedded region, this protocol could circumvent the high computational cost of the many-body ab initio cavity QED framework.
Haugland, Schäfer, Ronca, Rubio and Koch used QED-CCSD-1, QED-DFT, and QED full CI to model the effects of vacuum fluctuations on nature of intermolecular interactions.<cit.> Not surprisingly, QED-HF and QED-DFT do not provide good descriptions of intermolecular interactions in a cavity, particularly for van der Waals interactions. Additional notable observations include an R^-3 contribution to van der Waals interactions (which display R^-6 dependence in the absence of a cavity), stemming from electron-photon correlations, and an apparently infinite distance over which cavity-embedded molecules remain correlated, which results from the dipole self-energy contribution to the interaction energy. It should be noted that the coupling strength employed in this study was quite large: λ = 0.1 a.u., which, assuming a single cavity mode, corresponds to an effective mode volume of ≈ 0.2 nm^3. The authors correctly note that, at the mean-field level, multiple modes polarized along the same axis can be treated as a single effective mode with coupling strength, λ_ eff^2 = ∑_i λ_i^2. Even so, some conclusions regarding long-range correlation effects involve inter-molecule distances on the order of hundreds of Å, which seems inconsistent with such large coupling strengths. More recently, Philbin, Haugland, Ghosh, Ronca, Chen, Narang, and Koch<cit.> used machine learning (ML) techniques to learn intermolecular potentials for cavity-embedded dimers of H_2 molecules, which were treated using QED-CCSD-1 plus two-photon creation operators (termed QED-CCSD-12-SD1 in that work) and QED full CI with up to five photon creation operators (QED-FCI-5). Interestingly, comparisons between QED-CCSD-1 and QED-CCSD-12-SD1 revealed that two-photon transitions are crucial for recovering the correct sign on interaction energies for H_2 molecules separated by large distances; QED-CCSD-12-SD1 and QED-FCI-5 predict these interactions to be attractive, while QED-CCSD-1 predicts a repulsive interaction. Given machine-learned potentials, path integral molecular dynamics simulations on hundreds of cavity embedded molecules revealed that cavity-modified van der Waals interactions result in orientational order not seen in cavity-free simulations. An important caveat to note, though, is that these authors used potentials learned for large single-molecule coupling strengths (λ = 0.1 a.u.), which may not be entirely consistent with the large cavity volumes occupied by hundreds of molecules.
In 2022, Riso, Grazioli, Ronca, Giovannini, and Koch<cit.> developed a formulation of QED-CCSD-1 that models interactions between electronic degrees of freedom and the quantized photon field of a chiral cavity mode. They found that a proper description requires that the photon field be treated beyond the dipole (or even multipolar) approximation, which results in a complex-valued Hamiltonian that depends two cavity modes (for a single resonant frequency). These complications aside, Ref. demonstrated that circularly polarized light can discriminate between enantiomers of chiral molecules embedded within a chiral cavity (e.g., via changes to the energies of the ground states of the enantiomers or their rotational spectra). Moreover, the discriminating power of the cavity increases with the number of molecules.
Clearly, a large body of work has considered the effects of strong light-matter interactions on ground states of cavity-embedded systems. Somewhat less work has considered excited-state electronic/polaritonic structure of such systems. The initial papers<cit.> describing generalizations of CC theory for use with the PF Hamiltonian developed and applied QED-EOM-CC formalisms to cavity-embedded systems. In particular, Ref. describes how polariton formation can manipulate conical intersections; QED-CCSD-1 calculations on a cavity-coupled pyrrole molecule show that sufficiently strong coupling can open a gap at a conical intersection between the ^1 B_1 and ^1 A_2 states. An exciting chemical consequence is that such modifications to the energy landscape could lead to changes in relaxation pathways or dynamics in chemical reactions. This idea has also been put forward in the context of linear response QEDFT, as well;<cit.> QEDFT simulations on cavity-embedded formaldehyde<cit.> have shown that different combinations of cavity parameters can move or suppress avoided crossings between excited states.
While we have limited this discussion to consider descriptions of purely electronic strong coupling, we recognize that Vidal, Manby, and Knowles<cit.> have used similar QED-EOM-CC approaches to explore how coupling to a cavity mode can affect vibronic structure.
Liebenthal and DePrince<cit.> extended QED-EOM-CC theory to consider non-particle-conserving excitation operators. Specifically, they developed a QED-EOM-CCSD-1 model for electron attachment (EA), which is a cavity QED generalization of the EOM-EA-CC approach<cit.> from electronic structure theory. One of the key findings in Ref. was that, in order to recover electron affinities obtained from separate QED-CCSD-1 calculations on different charge states,<cit.> QED-EOM-EA-CCSD-1 calculations starting from an N-electron reference must employ the coherent-state basis defined for the (N+1)-electron state. This finding suggests that the coherent-state basis should be chosen with care in any QED-EOM-CC model that samples non-particle-conserving or non-spin-conserving sectors of Fock space. This work also revealed defects in the similarity-transformed PF Hamiltonian (i.e., complex eigenvalues) at a same-symmetry conical intersection in magnesium fluoride (MgF), involving the lower-polariton state. Such defects can emerge in standard EOM-CC theories that make use of truncated cluster expansions; the MgF example highlights that this issue persists in the cavity QED generalization of EOM-CC.
We note that most QED-CC studies are formulated within the coherent-state basis introduced in Sec. <ref>. The primary reason for this choice is that it guarantees that the correlated calculation will be strictly origin invariant, even for charged species. Liebenthal, Vu, and DePrince<cit.> studied the numerical consequences of this choice by comparing QED-CCSD-1 and QED-EOM-CCSD-1 calculations in the coherent-state basis, using a QED-HF reference (termed “relaxed”), to calculations performed in the canonical Hartree-Fock basis, using a Hartree-Fock wave function that was not perturbed by cavity interactions (termed “unrelaxed”). For the unrelaxed case, they found that exponentiated single electron transitions (e^T̂_1) do a good job of accounting for orbital relaxation effects from QED-HF, while exponentiated boson creation operators (e^u_0b̂^†) can mimic the effects of the coherent-state transformation itself. For example, ground-state unrelaxed QED-CCSD-1 energies on charged species acquire only modest origin dependence; for a cavity-bound HF^+ cation, described by a cc-pVDZ basis set and a large coupling strength of λ = 0.05 a.u., that work showed that the energy changes by less than 1× 10^-3 E_ h when shifting the molecule 10 Å from the origin. Moreover, for the most part, excitation energies from relaxed and unrelaxed QED-EOM-CCSD-1 are similar, particularly for experimentally feasible coupling strengths (i.e., λ < 0.05). These results stand in stark contrast to results obtained from unrelaxed and relaxed formulations of QED-DFT and QED-TDDFT. First, unrelaxed QED-DFT acquires a substantial origin dependence in the energy (stemming from the dipole self energy contribution). Second, relaxed and unrelaxed QED-TDDFT yield significantly different spectra, with relaxed QED-TDDFT generally doing a better job of reproducing some trends from relaxed QED-EOM-CCSD-1. These observations are important, given that multiple formulations of QED-TDDFT can be found in the literature, and not all of them account for cavity interactions self-consistently in the ground state.<cit.>
Fregoni, Haugland, Pipolo, Giovannini, Koch, and Corni have applied QED-EOM-CCSD-1 to interactions between a molecular system and a plasmonic nano/picocavity.<cit.> Their protocol is similar to that discussed throughout this Section, except for the precise form of the Hamiltonian. First, a polarized continuum model for nanoparticles<cit.> is applied to describe the plasmon mode. Second, the dipole self-energy contribution is not included in the Hamiltonian for the coupled system. The argument for neglecting the dipole self energy is that the collective electronic oscillations comprising the plasmon excitation interact with the molecule through longitudinal Coulomb interactions, and this interaction
dominates over the coupling between the molecule and the transverse component of the vector potential. <cit.> It should also be noted that in the case of
strong coupling to a cavity mode with a significant material contribution to the excitation (such as a plasmonic mode), Eq. <ref> should be augmented to include coupling between the charged particles of the molecular subsystem and the electric scalar potential ϕ(x) associated with the plasmon excitation: Ĥ_ p · A = ∑_i^N 1/2m_i(p̂_i - z_i Â_⊥)^2 + z_i ϕ(x_i) + V̂(x̂) + ħω_ cavb̂^†b̂. We note that the dipole self energy term (even if very small) still emerges upon PZW transformation of this Hamiltonian, particularly through transformation of the energy of the cavity mode ħω_ cavb̂^†b̂ (see Eq. <ref>). Third, the bilinear coupling term takes a slightly different form. Despite these differences, the QED-EOM-CCSD-1 wave function ansatz is the same as that discussed herein. Building upon this work,
Romanelli, Riso, Haugland, Ronca, Corni, and Koch<cit.> have developed a QED-CC model that folds in the effects of multiple plasmonic modes into a single effective mode. Other models for plasmon-molecule interactions that make use of quantized radiation fields and parametrized plasmon modes have been proposed as well.<cit.>
Lastly, a cavity QED extension of second-order perturbation theory (MP2) and the algebraic diagrammatic construction (ADC) has been developed by Bauer and Dreuw.<cit.> QED-MP2 is an approximation to QED-CCSD-1, and, like conventional ADC, QED-ADC can be thought of as a Hermitian approximation to QED-EOM-CCSD-1. The data presented in Ref. suggest that the QED-MP2 correlation energy is much more sensitive to the frequency of the cavity mode than the correlation energy from QED-CCSD-1. This sensitivity is increased if the QED-MP2 calculations are performed on top of Hartree-Fock reference wave functions evaluated in the absence of the cavity. Hence, it appears that, like QED-DFT and QED-TDDFT, the QED-MP2 ansatz is not as robust as QED-CCSD-1 with respect to the description of cavity effects at the mean-field level.
§ TRANSFORMATION OF OPERATORS
In the preceding sections, we have obtained (approximate) eigenstates of Ĥ_ CS, where
Ĥ_ CS results from a unitary transformation of our original Hamiltonian in Eq. <ref>. In the following, we discuss relationships that hold between the
exact eigenstates of Ĥ_ CS (which could be obtained, for example, through full configuration interaction in a complete single-particle basis) and
Ĥ_ p · A. Although it is generally not possible to obtain the exact eigenfunctions of Ĥ_ CS or Ĥ_ p · A, we will work out
practical relationships for the photonic character and the dipole operator and apply them to expectation values taken with approximate eigenfunctions obtained from the QED-CIS-1 method.
The exact eigenvalues of an operator are preserved under unitary rotations, while the eigenfunctions
of Ĥ_ CS are related to the eigenfunctions of Ĥ_ p · A by a unitary transformation. In particular, we have:
Ĥ_ p · A⟶Ĥ_ CS via ÛĤ_ p · AÛ^†
|Ψ_I⟩⟶ |Ψ^'_I⟩ via Û|Ψ_I⟩
Ĥ_ p · A |Ψ_I⟩ = E_I |Ψ_I⟩
Ĥ_ CS |Ψ^'_I⟩ = E_I |Ψ^'_I⟩.
Therefore, in order for expectation values computed with these transformed eigenstates to have correspondence with the expectation values computed with the eigenstates of
Ĥ_ p · A, we must transform the operators as follows:
⟨Ψ_I | Ô | Ψ_I ⟩ = ⟨Ψ_I^' | Ô^' | Ψ_I^'⟩
= ⟨Ψ_I | Û^†Ô^'Û| Ψ_I ⟩
= ⟨Ψ_I | Û^†ÛÔÛ^†Û| Ψ_I ⟩.
Thus we see the transformation for operators to use with our
transformed eigenstates is also Ô^' = ÛÔÛ^†.
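As a quick numerical sanity check of this bookkeeping (not tied to any production QED code; the random Hermitian matrices below simply stand in for Ĥ_ p · A and a generic observable Ô), the following NumPy sketch verifies that the eigenvalues are unchanged and that expectation values agree once both the state and the operator are rotated by the same unitary Û.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return 0.5 * (a + a.conj().T)

H_pA = random_hermitian(n)   # stands in for the p.A Hamiltonian
O = random_hermitian(n)      # generic observable

# an arbitrary unitary U, e.g. from the QR decomposition of a random complex matrix
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# transformed Hamiltonian and operator: H_CS = U H U^dag, O' = U O U^dag
H_CS = U @ H_pA @ U.conj().T
O_prime = U @ O @ U.conj().T

# eigenvalues are preserved; eigenvectors are rotated by U
E_pA, psi = np.linalg.eigh(H_pA)
E_CS, _ = np.linalg.eigh(H_CS)
print(np.allclose(E_pA, E_CS))          # True

# expectation values match only when state AND operator are transformed together
psi0 = psi[:, 0]
psi0_prime = U @ psi0
exp_orig = psi0.conj() @ O @ psi0
exp_trans = psi0_prime.conj() @ O_prime @ psi0_prime
print(np.allclose(exp_orig, exp_trans))  # True
```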
Specifically, following transformation of the Hamiltonian from the minimal coupling Hamiltonian in Eq. <ref> to the Pauli-Fierz Hamiltonian in the length gauge and to the coherent state basis, we must apply the same transformations to operators for the purposes of computing expectation values with the eigenfunctions of Eq. <ref>.
Some operators will commute with the operators that provide these transformations (Û_ PZW, Û_ϕ, and Û_ CS) and will be unchanged, while others will be transformed. It is common to compute the photonic character of a polaritonic state, and so here we investigate the behaviour of the photon number operator, N̂_ p = b̂^†b̂ for a single photon mode. Furthermore, the dipole moment expectation value of the polariton system can be of interest <cit.>, so we will also investigate the behaviour of the dipole moment operator μ̂.
For a single photonic mode:
Û_ PZWb̂^†b̂Û^†_ PZW = b̂^†b̂ + i/ħ√(1/2ω_ cav)λ·μ̂ (b̂^† - b̂) + 1/ħ^21/2ω_ cav(λ·μ̂)^2,
Û_ϕÛ_ PZWb̂^†b̂Û^†_ PZWÛ^†_ϕ = b̂^†b̂ - 1/ħ√(1/2ω_ cav)λ·μ̂ (b̂^† + b̂) + 1/ħ^21/2ω_ cav(λ·μ̂)^2,
and
N̂_CS = b̂^†b̂ - 1/ħ√(1/2ω_ cav) [λ· (μ̂ - ⟨μ̂⟩ )] (b̂^† + b̂) + 1/ħ^21/2ω_ cav[λ· (μ̂ - ⟨μ̂⟩ )]^2,
where N̂_CS = Û_ CSÛ_ϕÛ_ PZWb̂^†b̂Û^†_ PZWÛ^†_ϕÛ^†_ CS.
On the other hand, the PZW transformation of the dipole operator preserves its expectation values, because μ̂ commutes with μ̂·Â: Â acts only on the photonic degrees of freedom, and μ̂ commutes with itself. Similarly, since the phase and coherent state transformations involve only photon operators and μ̂ involves only electron operators, the dipole operator is unchanged by these transformations, and we have
Û_CSÛ_ϕÛ_PZWμ̂Û^†_PZWÛ^†_ϕÛ^†_CS = μ̂.
Of course we are not typically able to obtain the exact eigenfunctions for Ĥ_ CS; for example we will perform some truncation in the single-particle basis and/or in the
many-particle basis. We will derive explicit expressions in the case that we have truncated the many-particle basis consistent with QED-CIS-1; these expressions are independent of the level of truncation of the single-particle basis.
Recalling the form of the QED-CIS-1 wavefunction ( <ref>),
we will examine the explicit expressions for the photonic occupation of a given
electronic state Ψ_I that can be defined as
⟨ N_CS⟩ = ⟨Ψ_I | N̂_CS | Ψ_I ⟩
= ⟨Ψ_I | b̂^†b̂ | Ψ_I ⟩
- 1/√(2ω_cav)⟨Ψ_I | λ· (μ̂ - ⟨μ̂⟩ )(b̂^† + b̂) | Ψ_I ⟩
+ 1/2ω_cav⟨Ψ_I | [λ· (μ̂ - ⟨μ̂⟩ )]^2 | Ψ_I ⟩.
The first expectation value can be computed as follows:
⟨Ψ_I | b̂^†b̂ | Ψ_I ⟩ = |c_0^1|^2 + ∑_ia |c_ia^1|^2.
The second expectation value can be computed as follows:
-1/√(2ω_cav)⟨Ψ_I | λ· (μ̂_ e -
⟨μ̂_ e⟩)(b̂^† + b̂) | Ψ_I ⟩ = -1/√(2ω_cav) c^ T H_ blc c,
where c denotes the QED-CIS-1 eigenvector for state I and H_ blc is the contribution of the Hamiltonian
matrix in Eq. <ref> that contains only the elements given in Eqs. <ref> and <ref>.
The third expectation value can be computed as
1/2ω_cav⟨Ψ_I | (λ·(μ̂_ e -⟨μ_ e⟩))^2 | Ψ_I ⟩ =
1/2ω_cav c^ T H_ dse c,
where H_ dse is the contribution of the Hamiltonian
matrix in Eq. <ref> that contains only the elements given in Eq. <ref>.
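For concreteness, the following sketch assembles these three contributions from a QED-CIS-1 eigenvector. It is only an illustration of the bookkeeping: the function name, the assumed placement of the one-photon amplitudes (c_0^1 and c_ia^1) at the end of the vector c, and the random stand-in matrices for H_ blc and H_ dse are our own conventions, not those of any existing implementation.

```python
import numpy as np

def photon_occupation(c, n_photon_block, H_blc, H_dse, omega_cav):
    """Assemble <N_CS> for one QED-CIS-1 state from its eigenvector c.

    Assumed layout: the last n_photon_block entries of c carry one photon
    (the c_0^1 and c_ia^1 amplitudes). H_blc and H_dse are the bilinear-coupling
    and dipole-self-energy blocks of the Hamiltonian matrix in the same basis as c.
    """
    c = np.asarray(c, dtype=float)
    zeroth = np.sum(c[-n_photon_block:] ** 2)                      # 0th-order term
    first = -1.0 / np.sqrt(2.0 * omega_cav) * (c @ H_blc @ c)      # 1st-order term
    second = 1.0 / (2.0 * omega_cav) * (c @ H_dse @ c)             # 2nd-order term
    return zeroth + first + second, (zeroth, first, second)

# toy usage with random symmetric stand-in matrices of matching dimension
rng = np.random.default_rng(1)
dim, nph = 8, 4
c = rng.normal(size=dim); c /= np.linalg.norm(c)
A = rng.normal(size=(dim, dim)); H_blc = 0.5 * (A + A.T)
B = rng.normal(size=(dim, dim)); H_dse = 0.5 * (B + B.T)
total, parts = photon_occupation(c, nph, H_blc, H_dse, omega_cav=0.5)
print(total, parts)
```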
We plot these various contributions and the total photon occupation of the QED-CIS-1 ground-state of the MgH^+ ion as a function of the fundamental coupling strength λ = √(ħ/ϵ_0 V) from a photon polarized purely along the principal
axis of the molecule in
Figure <ref>. Here we denote the 0^ th order contribution
as arising from Eq. <ref>, the 1^ st order contribution
as arising from Eq. <ref>, the
2^ nd order contribution
as arising from Eq. <ref>, and the Total as arising from the sum of these three terms, i.e., Eq. <ref>.
§ CONCLUDING REMARKS
Despite the impressive surge of theoretical and experimental advances in polariton chemistry and molecular polaritonics, many challenges and opportunities remain to advance the field towards its full promise. While
it may seem daunting to span the chasm that exists between the majority of polariton experiments (done
in the regime of 10^6 to 10^9 molecules within the cavity mode volume) and the regime accessible by even large-scale atomistic methods <cit.> (100s of molecules), we assert that all advances in the theoretical treatment of cavity-molecule interactions provide value towards the goal of understanding and controlling polariton chemistry. In particular,
single- and few-molecule strong coupling has been experimentally realized with several different cavity platforms,<cit.> and, as the limits of this regime are expanded, there is an urgent need for rigorous and non-perturbative quantum mechanical methods that can accurately capture modifications to ground- and excited-state properties and emergent phenomena. The techniques described in
this review provide such a rigorous foundation, although we should note that there are additional advances
required for plasmonic nanocavities, such as rigorous inclusion of longitudinal scalar potential coupling
to capture the material contribution of plasmon excitation, and inclusion of the modified chemical environment
that molecules experience in the vicinity of plasmonic particles in the dark.<cit.> Some of these effects are
more naturally included in the real-space Coulomb gauge formulations described in Refs. ,
which then leaves us with an intriguing theoretical challenge for formulations based on Gaussian basis sets and in the length gauge, or Coulomb gauge formulations with Gaussian basis sets, as reported by Koch and co-workers.<cit.> Moreover, theoretical approaches (quantum and classical) can be deployed to approach collective strong coupling
from the bottom up, which may provide valuable insights into some of the phenomena
that are observed in this regime. In this case, the availability of
rigorous methods to benchmark lower-scaling methods (e.g. density functional based approaches, parameterized and semi-empirical approaches, and classical force fields) will be paramount. We hope that this tutorial review will serve to orient
researchers towards these varied areas of development, as well as to provide the foundation for further development of ab initio QED approaches and
the sound deployment of these methods.
Author Information
Present Address
Department of Chemistry,
Texas A&M University,
College Station, TX 77843
Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. CHE-2100984. J.J.F. acknowledges support from the Research Corporation for Scientific Advancement Cottrell Scholar Award. J.J.F. and J.M. acknowledge support from the NSF CAREER Award CHE-2043215. J.J.F. acknowledges support from the Center for MAny-Body Methods, Spectroscopies, and Dynamics for Molecular POLaritonic Systems (MAPOL) under subcontract from FWP 79715, which is funded as part of the Computational Chemical Sciences (CCS) program by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle Memorial Institute for the United States Department of Energy under DOE contract number DE-AC05-76RL1830.
|
http://arxiv.org/abs/2307.04417v1 | 20230710084558 | Handling Group Fairness in Federated Learning Using Augmented Lagrangian Approach | [
"Gerry Windiarto Mohamad Dunda",
"Shenghui Song"
] | cs.LG | [
"cs.LG",
"cs.CY"
] |
A]Dunda Gerry Windiarto Mohamad
A]Song Shenghui
[A]The Hong Kong University of Science and Technology
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature. Nonetheless, the lack of freedom in managing user data can lead to group fairness issues, where models might be biased towards sensitive factors such as race or gender, even if they are trained using a legally compliant process. To redress this concern, this paper proposes a novel FL algorithm designed explicitly to address group fairness issues. We show empirically on CelebA and ImSitu datasets that the proposed method can improve fairness both quantitatively and qualitatively with minimal loss in accuracy in the presence of statistical heterogeneity and with different numbers of clients. Besides improving fairness, the proposed FL algorithm is compatible with local differential privacy (LDP), has negligible communication costs, and results in minimal overhead when migrating existing FL systems from the common FL protocol such as FederatedAveraging (FedAvg) <cit.>. We also provide the theoretical convergence rate guarantee for the proposed algorithm and the required noise level of the Gaussian mechanism to achieve desired LDP. This innovative approach holds significant potential to enhance the fairness and effectiveness of FL systems, particularly in sensitive applications such as healthcare or criminal justice.
§ INTRODUCTION
Federated learning (FL) <cit.> is a distributed machine learning approach that enables model training on potentially sensitive data from different entities without the necessity for data sharing. This technique is promising in diverse domains such as computer vision (CV) as it can facilitate training of models on a large-scale, diverse set of data while preserving data privacy. However, FL can also present challenges related to group fairness, which refers to the equitable treatment of different groups in a population. Group fairness may be required by law such as in Europe <cit.>, ensuring that any decision making by predictive models trained using FL does not exhibit bias towards any particular group, such as race or gender. For example, an AI model used in a company's hiring process may have been trained on historical data that reflects biased hiring patterns, leading to discriminatory outcomes for underrepresented groups in the workforce. There are more examples <cit.> that further motivate raising awareness in training fair deep learning models.
Group unfairness in FL-trained deep learning models may originate from statistical heterogeneity, where the data used by individual clients is inherently biased. The biased data leads to a biased model, making it crucial to address statistical heterogeneity in FL-based models. However, handling statistical heterogeneity or non-identical and independently distributed (non-iid) data can be an arduous task, and currently is an open problem <cit.>. In this paper, we aim to reduce group unfairness of FL solely from its training mechanism. While off-the-shelf methods to prevent learning bias are available in centralized learning such as modifying the loss function <cit.>, adopting them in FL can be challenging because, apart from potentially more computation, it also requires additional communication and careful consideration of privacy.
Considering the difficulties associated with mitigating learning bias in FL, we propose a regularization technique to alleviate this issue. Our approach involves formulating the local optimization as a constrained minimax optimization problem using a fairness metric, and can be used alongside local differential privacy (LDP) <cit.>. In addition, we design an FL protocol that uses an augmented Lagrangian solver to tackle this optimization problem. We provide a detailed description of the proposed method in Section <ref> and offer theoretical results for the convergence rate and the required noise level of the Gaussian mechanism to satisfy LDP in Section <ref>. We evaluate the accuracy of the proposed algorithm on two CV datasets along with the fairness performance in Section <ref>.
Our contributions are stated as follows.
* We propose a new FL protocol to ensure group fairness. It follows the same framework as FedAvg except some modifications on the local training phase and the aggregation phase.
* We provide convergence guarantee and the upper bound for the standard deviation of the Gaussian noise to guarantee LDP when using the proposed algorithm.
* We focus on fairness evaluation on FL-trained CV models to fill the gap in the fair FL research, as most works evaluated their methods on categorical datasets with little focus on image datasets.
The proposed method has several key merits. Firstly, the empirical results show that the proposed approach is capable of increasing the fairness of the ML model without significant loss of accuracy when compared with the baselines, as discussed in Section <ref>. Secondly, some practical challenges may appear when deploying a new FL algorithm in practical systems. In the following, we outline several notable features of the proposed FL algorithm that facilitate its implementation.
Straightforward implementation from FedAvg. The proposed algorithm adds little overhead when migrating from FedAvg. Since we use stochastic gradient descent ascent (SGDA), in addition to performing gradient descent on the model, clients need to update a dual variable with gradient ascent during the local training. This computation is independent of the gradient calculation for the model parameters, which means it can be executed sequentially. Apart from the model updates, the server also needs the dual variable updates from each client. Similar to aggregating model updates, the server aggregates dual variables by averaging if FedAvg is used. This shows that the proposed method only adds two independent steps to the current FedAvg implementation.
Compatibility with the existing privacy mechanism. Attackers may steal information (model updates) during the communication phase in FL. They can reverse engineer it to infer some sensitive data owned by the participating clients. To prevent this issue, LDP can be used to protect user data. In our implementation, we use the Gaussian mechanism on model updates to ensure privacy guaranteed by LDP <cit.>.
Negligible communication overhead. Compared with FedAvg, the proposed method only adds an extra scalar variable to the training framework, which needs to be exchanged between the client and server. This means that the proposed method only introduces negligible communication overhead.
§ RELATED WORK
There have been some engaging results in tackling the fairness issues in deep-learning models. We categorize some prior related works based on how the training is conducted, either centralized or federated learning.
Ensuring fairness in centralized learning. In centralized learning, it is not uncommon to modify the training framework to achieve a suitable degree of group fairness. The authors of <cit.> decorrelated the input images and the protected group attributes by using adversarial training. Knowledge transfer techniques and multi-classifiers can also be adopted as a debiasing method <cit.>. Augmenting each image sample with its perturbed version generated from generative models can potentially reduce biases as well <cit.>. The aforementioned works require additional components to the model, thus increasing the computation cost. This might not be suitable for FL. A possible alternative is to alter the loss function to take into account group fairness. The authors of <cit.> introduced a loss function obtained from the upper bound of the Lagrangian based on a constrained optimization formulation, which is closely related to this work. While they introduced a regularizer for the dual variable, the proposed method uses the augmented Lagrangian method with a squared constraint penalty term.
Ensuring fairness in FL. Some prior works considered group fairness in FL. Due to system constraints, most innovations came from modifying the objective function of the training, modifying the optimization methods, or exchanging more information. An example of the latter is FairFed <cit.>, where the client weights are adaptively adjusted during the aggregation phase based on the deviation of each client's fairness metric from the global one. Tackling fairness by altering the objective function includes utilizing differential multipliers to solve a constrained optimization problem (FPFL) <cit.> and adjusting the weight of the local loss function for each sensitive group during the aggregation phase (FedFB) <cit.>. Compared with FPFL, the proposed method uses an equality constraint instead of an inequality constraint. Also, FPFL has some limitations: it sends the client statistics (gradients, values of the current loss function, and the number of data samples) separately instead of the updated model directly to the server, which in turn increases privacy risks. Moreover, no theoretical convergence rate was provided in <cit.>. Along the line of modifying the optimization method, FCFL <cit.> proposed a two-stage optimization to solve a multi-objective optimization with fairness constraints, which demands more communication rounds. Most of the aforementioned existing works except <cit.> only evaluated their methods on categorical datasets. These comparisons are summarized in Table <ref>.
§ PRELIMINARIES
In this section, we introduce some mathematical notations that are often used in this paper. After that, we briefly describe the problem formulation of the conventional FL along with its algorithm.
§.§ Notations
Throughout this paper, we primarily focus on classification tasks in CV with groups consisting of binary sensitive (protected) attributes s ∈{0,1}. Such binary sensitive attributes can be written as s_0 and s_1 to represent s=0 and s=1 respectively. Also, the dataset 𝒟 with size |𝒟| constitutes of pairs of input x and label y with y ∈{0,1}, unless otherwise stated. We slightly abuse the notation of 𝒟 to represent both the set and the distribution. Some mathematical notations are stated as follows. [N] denotes {1,2, ..., N} and ‖·‖ denotes the ℓ_2-norm. We use 𝒲⊆ℝ^d and Λ to represent the parameter spaces of the model w and an additional training parameter λ respectively.
§.§ Group Fairness Metrics
To evaluate the group fairness of predictions generated by deep learning models, we can employ various measures based on how likely the model can predict a particular outcome for each group. Demographic parity (DP) <cit.> is commonly used for assessing the fairness of the model for binary sensitive attributes based on the 80% rule from <cit.>. Given a validation dataset 𝒟_val, we can partition it according to the sensitive attributes 0 and 1 as 𝒟_val,0 and 𝒟_val,1 respectively. Then, the empirical form of DP for binary classification tasks is defined as |PPR_𝒟_val,0 - PPR_𝒟_val,1|, where PPR_D is the ratio of positive predictions to all samples in D. On the other hand, equal opportunity (EO) <cit.> measures the absolute difference in true positive rates (sensitivities) between two protected groups. While DP takes into account the inherent biases in the whole dataset, EO only considers biases originating from the positive samples. Ideally, when DP or EO equals zero, the model is completely unbiased or fair. In multi-label classification tasks, EO is defined as the worst EO on a particular label, and similarly for DP.
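A minimal sketch of these two empirical metrics for a binary task with a binary sensitive attribute is given below; the array-based interface and the toy values are illustrative assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """|PPR(s=0) - PPR(s=1)|: gap in positive-prediction rates between groups."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    ppr0 = y_pred[s == 0].mean()
    ppr1 = y_pred[s == 1].mean()
    return abs(ppr0 - ppr1)

def equal_opportunity_gap(y_pred, y_true, s):
    """|TPR(s=0) - TPR(s=1)|: gap in true-positive rates between groups."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    tpr0 = y_pred[(s == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(s == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

# toy example in which predictions are biased towards the group s = 1
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print(demographic_parity_gap(y_pred, s))          # 0.5
print(equal_opportunity_gap(y_pred, y_true, s))   # ~0.667
```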
§.§ Problem Setup
In the typical FL setting with N clients and one server, the goal is to train a global deep learning model f_w parameterized with w ∈𝒲 on each client dataset 𝒟_i (i ∈ [N]) with privacy guarantee. Clients receive the global model from the server (broadcasting phase) and train the model on their own dataset (local training phase). After that, the server collects the updated models from each participating client and aggregates them to get an updated model (aggregation phase). Similarly, the additional parameter λ∈Λ (e.g. the dual variable) that aids the training can be exchanged between the server and each client, and processed on the client during local training and on the server during aggregation. This process is repeated until convergence or a specified communication round.
The explicit formulation for the true local risk function represented by a loss function l(f_w(x),y) = l(x,y;w) and a regularization function g is given by,
F_i(w,λ):= 𝔼_(x_j, y_j) ∼𝒟_i l(x_j,y_j;w) + g(x_j,y_j;λ, w),
and the corresponding empirical risk function is given by,
F_i,S(w,λ):= 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_il(x_j,y_j;w) + g(x_j,y_j;λ, w).
We define the global true risk function as
F(w, λ) = ∑_i=1^N p_i F_i(w, λ),
where p_i is the client coefficient with ∑_i=1^N p_i = 1 and p_i ∈ [0,1], and the corresponding global empirical risk function as
F_S(w, λ) = ∑_i=1^N p_i F_i,S(w, λ) .
In FedAvg, g(x_j,y_j;λ, w) = 0. In contrast, formulations using regularization-based algorithm such as FedProx <cit.> or FedMoon <cit.> have a non-zero g.
§ FAIRFEDAVGALM
We first introduce the problem formulation for FL with group fairness constraints. Subsequently, we describe the proposed algorithm to achieve the objective. Lastly, we offer the upper bound for the standard deviation of the noise in the Gaussian mechanism on model updates to ensure LDP, and prove the convergence rate of the proposed algorithm with LDP.
§.§ Problem Formulation
The goal of this work is to ensure group fairness of FL-trained models. We tackle the problem by enforcing fairness during the training. For this purpose, we develop a constrained optimization with relaxation. Specifically, the local training aims to minimize the local risk function while satisfying the equality constraint based on the empirical DP metric. Since the empirical DP is not a differentiable function, we resort to using the formulation based on the loss function. Specifically, given 𝒟^s_0 as the population dataset with s_0, we consider an equality constraint μ̂^s_0_w = μ̂^s_1_w, where
μ̂^s_0_w = 1/| 𝒟^s_0|∑_j ∈𝒟^s_0 l(x_j,y_j;w)
and μ̂^s_1_w defined similarly. We write the constrained optimization during local training as
min_w 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w)
s.t. μ̂^s_0_w = μ̂^s_1_w.
We use the similar technique from the augmented Lagrangian approach to relax the constraint. The relaxation provides more freedom for the optimization algorithm to find solutions that may not satisfy all the constraints strictly, but rather approximate them within an acceptable range. The Lagrangian of the problem is rewritten with an additional squared penalty term of μ̂^s_0_w - μ̂^s_1_w controlled by a penalty coefficient β, that is
L(w,λ) = 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w) + λ(μ̂^s_0_w - μ̂^s_1_w)
+ β/2 (μ̂^s_0_w - μ̂^s_1_w)^2.
After that, we solve the following min-max problem instead,
min_w max_λ L(w,λ).
In the conventional augmented Lagrangian method <cit.>, at each iteration, another sub-iteration is performed to find the approximate solution such that the gradient of the objective is close to zero. Although the original augmented Lagrangian method requires that the final gradient to approximate the minimizer at any given time is bounded following a sequence approaching zero, we relax the condition by requiring bounded gradients, as it will be stated in Assumption <ref> later. Inspired by the augmented Lagrangian method, we can write g(x_j,y_j;λ, w) = λ(μ̂^s_0_w - μ̂^s_1_w) + β/2 (μ̂^s_0_w - μ̂^s_1_w)^2 as the regularization term for the proposed method, with which sub-iterations are performed by clients during the local training. Hence, we can formulate the objective of the local training as
min_w max_λ F_i,S(w,λ) .
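The following PyTorch-style sketch shows one way to evaluate this relaxed local objective on a mini-batch. The function name and the use of cross-entropy as the sample loss l are illustrative assumptions, and the batch is assumed to contain samples from both sensitive groups (otherwise one of the group-conditional means is undefined).

```python
import torch
import torch.nn.functional as F

def augmented_lagrangian_loss(model, x, y, s, lam, beta):
    """Relaxed local objective: mean loss + lam*(mu0 - mu1) + (beta/2)*(mu0 - mu1)^2.

    `s` holds the binary sensitive attribute of each sample, `lam` is the scalar
    dual variable, and `beta` is the penalty coefficient.
    """
    logits = model(x)
    per_sample = F.cross_entropy(logits, y, reduction="none")
    mu0 = per_sample[s == 0].mean()   # empirical group-conditional loss, s = 0
    mu1 = per_sample[s == 1].mean()   # empirical group-conditional loss, s = 1
    gap = mu0 - mu1
    return per_sample.mean() + lam * gap + 0.5 * beta * gap ** 2, gap
```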
§.§ Algorithm
We propose a fair FL algorithm that extends FedAvg based on the augmented Lagrangian method, dubbed FairFedAvgALM. We assume that each client performs the same number of local iterations (otherwise we need to use the correction term from <cit.>) to provide more flexibility in the experiment section. The proposed algorithm is shown in Algorithm <ref>.
We outline some changes in comparison with FedAvg. The core of the algorithm is SGDA, as opposed to FedAvg in which SGD is used instead. During local training, the i-th client computes the stochastic gradient ∇_w L^(t,k)_i at communication round t and local iteration k from their batch samples ℬ sampled from their local distribution 𝒟_i as
∇_w L_i^(t,k) = ∇_w (1/|ℬ|∑_(x_j,y_j) ∈ℬ l(x_j,y_j;w_i^(t,k-1))
+ λ(μ̂^s_0_w_i^(t,k-1) - μ̂^s_1_w_i^(t,k-1))
+ β/2 (μ̂^s_0_w_i^(t,k-1) - μ̂^s_1_w_i^(t,k-1))^2).
At the end of the local iterations, each client updates λ via gradient ascent, λ_i,t←λ_t-1 + η_λ, t∇_λ L_i^(t,E). Before sending the updates to the server, each client adds Gaussian noise to them to ensure LDP. After the server receives the updates from the clients, it aggregates both w and λ following FedAvg. It will be shown in Section <ref> that using this heuristic for λ allows convergence at an acceptable rate.
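A compact sketch of one communication round is shown below, reusing the augmented Lagrangian loss from the previous sketch. Client data loaders yielding (x, y, s) batches, equal client weights p_i = 1/N, models containing only floating-point parameters, and noise added directly to the shared parameters are assumptions made for brevity; this is an illustration of the protocol, not the authors' reference implementation.

```python
import copy
import torch

def fairfedavg_alm_round(global_model, global_lam, clients, eta_w, eta_lam,
                         beta, sigma_w, sigma_lam, local_iters):
    """One FairFedAvgALM round (sketch); `clients` is a list of batch iterators."""
    new_states, new_lams = [], []
    for loader in clients:
        model = copy.deepcopy(global_model)
        lam = torch.tensor(float(global_lam))
        opt = torch.optim.SGD(model.parameters(), lr=eta_w)
        gap = torch.tensor(0.0)
        for _, (x, y, s) in zip(range(local_iters), loader):
            loss, gap = augmented_lagrangian_loss(model, x, y, s, lam, beta)
            opt.zero_grad()
            loss.backward()
            opt.step()
        lam = lam + eta_lam * gap.detach()        # dual ascent on lambda
        # Gaussian mechanism on both updates before they leave the client (LDP)
        state = {k: v.detach() + sigma_w * torch.randn_like(v.float())
                 for k, v in model.state_dict().items()}
        new_states.append(state)
        new_lams.append(lam + sigma_lam * torch.randn(()))
    # server: FedAvg-style aggregation of both w and lambda (equal weights here)
    avg_state = {k: torch.stack([st[k].float() for st in new_states]).mean(dim=0)
                 for k in new_states[0]}
    global_model.load_state_dict(avg_state)
    return global_model, float(torch.stack(new_lams).mean())
```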
§.§ Theoretical Analysis
In this section, we introduce the formal analysis of LDP and the convergence rate of FairFedAvgALM. The proof of the convergence rate extends the previous theoretical convergence results of FedAvg from <cit.> and includes the aggregation of λ as well as LDP. The formal definition of differential privacy is given below.
A randomized algorithm 𝒜 satisfies (ϵ, δ)-differential privacy if for any two neighboring joint datasets 𝒟 and 𝒟' differing by one sample, and for any subset S of the range of 𝒜, the following holds:
ℙ[𝒜(𝒟) ∈ S] ≤ e^ϵℙ[𝒜(𝒟') ∈ S] + δ.
In LDP, each client has their own privacy budget (ϵ_i, δ_i). A common method to achieve LDP is the Gaussian mechanism, by which Gaussian noise with zero mean and standard deviation σ_i is added to the model updates. The privacy budget (ϵ_i, δ_i)-LDP and σ_i are related through the sensitivity of the update, which is defined as Δ l = max_𝒟, 𝒟'‖f(𝒟) - f(𝒟')‖, where f represents the vector-valued function of the dataset (e.g., the local model updates). Before presenting the result, we show the sensitivity of both primal and dual updates, Δ l_p and Δ l_d respectively, in the following lemma.
Assume that the loss function l is bounded by l_max (l ≤ |l_max|), the dual variable λ is bounded by λ_max (|λ| ≤λ_max), and the gradient of the loss function is bounded by D (∇_w l(z)≤ D, ∀ z∈𝒟). The sensitivities of primal updates and dual updates are given by
Δ l_p(t) ≤2 η_w,tD/|ℬ| + 8η_w,tλ_maxD/|ℬ| + 8 η_w,tβ D l_max(5|ℬ| - 2)/(|ℬ| - 2)^2
and
Δ l_d(t) ≤4η_λ,t l_max/|ℬ|.
See Section A.1 of the supplementary materials.
The sensitivities of the updates above are sufficient to estimate the upper bound for the standard deviation of the noise <cit.>, which is explicitly stated in the following theorem.
Given that the total number of communication rounds is T, the upper bounds of σ_i,λ and σ_i,w to achieve (ϵ_i, δ_i)-LDP for the i-th client with constant learning rates, η_w,t = η_w and η_λ,t = η_λ, are
σ_i,w≤Δ l_p √(2T log (1/δ_i))/ϵ_i
and
σ_i,λ≤Δ l_d √(2T log (1/δ_i))/ϵ_i.
The bound gives a rough estimate of the noise levels required to achieve the desired level of privacy.
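As an illustration, the bounds of the theorem can be evaluated directly; the numerical values below are arbitrary placeholders rather than quantities taken from the experiments.

```python
import math

def ldp_sigma_bounds(delta_l_p, delta_l_d, T, eps_i, delta_i):
    """Upper bounds on the Gaussian-noise std for (eps_i, delta_i)-LDP (Theorem)."""
    scale = math.sqrt(2.0 * T * math.log(1.0 / delta_i)) / eps_i
    return delta_l_p * scale, delta_l_d * scale

# e.g., sensitivities of 1e-3, T = 100 rounds, and budget (eps, delta) = (8, 1e-5)
print(ldp_sigma_bounds(1e-3, 1e-3, T=100, eps_i=8.0, delta_i=1e-5))
```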
We provide the upper bound of the convergence rate based on the empirical primal risk function R_S(w) := max_λ L(w, λ). Before presenting the result, we list several definitions and key assumptions, which are stated below.
The function h: 𝒲→ℝ is Lipschitz continuous if there exists G> 0 such that, for any w, w' ∈𝒲 and ξ∈𝒟, ‖h(w; ξ) - h(w'; ξ)‖≤ G‖w - w'‖.
Define a function f: 𝒲×Λ→ℝ. f(w,·) is ρ-strongly convex if for all w ∈𝒲 and λ, λ' ∈Λ, f(w,λ) ≥ f(w, λ') + ⟨∇_λ f(w,λ'),λ - λ'⟩ + ρ/2‖λ - λ'‖^2.
f(w,·) is ρ-strongly concave if -f(w,·) is ρ-strongly convex.
For randomly drawn batch samples ξ and for all i∈[N], the gradients ∇_w F_i,S(w, λ; ξ) and ∇_λ F_i,S(w, λ; ξ) have bounded variances B_w and B_λ respectively. If g_i,w(w, λ|ξ) := ∇_w F_i,S(w,λ;ξ) is the local estimator of the gradient, 𝔼_ξ [‖g_i,w(w, λ|ξ) - ∇_w F_i,S(w, λ)‖^2] ≤ B_w^2, and the case for λ is similar but bounded by B_λ^2.
The function f is L-smooth if it is continuously differentiable and there exists a constant L > 0 such that for any w, w' ∈𝒲, λ, λ' ∈ℝ, and ξ∈𝒟,
‖(∇_w f(w, λ; ξ) - ∇_w f(w', λ'; ξ), ∇_λ f(w, λ; ξ) - ∇_λ f(w', λ'; ξ))‖≤ L ‖(w - w', λ - λ')‖.
For all i∈[N], the stochastic gradient of F_i,S(w,λ) is bounded, that is, for all w ∈𝒲, λ∈Λ and ξ∈𝒟, we have ‖∇_w f(w, λ; ξ)‖ ≤ D.
In nonconvex analysis, it is not uncommon to impose the Polyak-Łojasiewicz (PL) condition on the objective function.
h(w) satisfies the PL condition if there exists a constant μ > 0 such that, for any w ∈𝒲, 1/2‖∇ h(w)‖^2 ≥μ(h(w) - min_w'∈𝒲 h(w')).
For simplicity, we assume full participation and the same number of local iterations for each client. The minimum empirical primal risk is R^*_S = min_w R_S(w). The upper bound of the convergence rate is given by the following theorem.
Define κ = L/μ. Let η_w,t = 2/μ t and η_λ, t = 16 κ^2/μ t^2/3. Given that Assumption <ref> and Assumption <ref> hold, each F_i,S(w,λ) is L-smooth, each F_i,S(·,λ) satisfies μ-PL condition, and each F_i,S(w,·) is ρ-strongly concave, we have
𝔼 R_S(w_T+1) - R_S^* = 𝒪((Γ + B_w^2 + dσ_w^2 + B_λ^2 + dσ_λ^2)/T^2/3),
after T communication rounds, where Γ := F_S^* - ∑_i=1^N p_i F_i,S^*, F_S^* := min_w max_λ F_S(w,λ) and F_i,S^* := min_w max_λ F_i,S(w,λ).
See Section A.2 of the supplementary materials.
Γ quantifies the statistical heterogeneity of the FL system. In the strongly non-iid case, the saddle-point solution of the global risk function might differ from the weighted sum of the local saddle-point risks. Note that the convergence is slower than <cit.> (𝒪(1/T)) due to the minimax optimization.
§ EMPIRICAL RESULTS
In this section, we consider the performance of FL-trained deep learning models, including the prediction accuracy and fairness performance (DP and EO) of the FL-trained model, on CelebA and ImSitu datasets. We also provide the results with different levels of statistical heterogeneity as well as the Gaussian mechanism for LDP. Lastly, we provide a qualitative analysis of the FL-trained models using Grad-CAM <cit.> visualizer to illustrate how the enhanced fairness performance is achieved by the proposed algorithm.
Implementation. The learning rate of λ is decreased by a factor of b every round. Moreover, the penalty coefficient β also increases by a factor of b every round. We also use step learning rate decay to reduce the fluctuations in performance as the training progresses. We synthetically create the data heterogeneity by introducing label skews with balanced samples, which can be implemented using a Dirichlet distribution parameterized by α on the labels <cit.>. Each experiment is repeated three times to capture different realizations. The code is available at https://github.com/gwmdunda/FairFedAvgALM.
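The Dirichlet label-skew partition can be sketched as follows (a common construction; the exact per-client sample-count balancing used in the experiments is omitted for brevity, and the function name is our own).

```python
import numpy as np

def dirichlet_label_skew_split(labels, n_clients, alpha, seed=0):
    """Partition sample indices across clients with Dirichlet(alpha) label skew."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # proportion of class-c samples assigned to each client
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_indices, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_indices

# smaller alpha -> stronger label skew (more statistical heterogeneity)
labels = np.random.default_rng(1).integers(0, 2, size=1000)
parts = dirichlet_label_skew_split(labels, n_clients=10, alpha=0.2)
print([len(p) for p in parts])
```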
Baselines. The following are the baselines used for the comparison study.
* FedAvg. It is the universal baseline in FL which aggregates all model updates by weighted average.
* FairALM-FedAvg. This is the modified version of FairALM <cit.> that fits FL. It aims to optimize L(w,λ) = 1/|𝒟_i|∑_(x_j,y_j) ∼𝒟_i l(x_j,y_j;w) + λ(μ̂^s_0_w - μ̂^s_1_w) + η_λ (μ̂^s_0_w + μ̂^s_1_w). The Lagrangian is utilized as the local training objective to extend the original method, which is only applicable in centralized learning.
* FairFed <cit.>. The server receives the local DP metrics, and based on them and the global trend, the server adjusts the value of p_i adaptively before averaging the model updates. In the CelebA experiments, DP metric is used, whereas in the ImSitu experiments, EO metric is used.
* FPFL <cit.>. It enforces fairness by solving the constrained optimization on the sample loss function F_S with two constraints, which correspond to the sensitive attribute 0 and 1, as the absolute difference between F_S and the loss evaluated on the particular sensitive group being less than a tolerance threshold. Even though the author stated that the local training can only perform one local iteration per round, we extend their methods to multiple local iterations because we use a large batch size. In this experimental study, we set the threshold value to zero. Hence, we reformulate it as a local constrained optimization with
g(w,λ) = λ_0 |F'_i(w^t_i,k-1) - μ̂^s_0_w^t_i,k-1| + λ_1 |F'_i(w^t_i,k-1) - μ̂^s_1_w^t_i,k-1|
+ β/2((F'_i(w^t_i,k-1) - μ̂^s_0_w^t_i,k-1)^2
+ (F'_i(w^t_i,k-1) - μ̂^s_1_w^t_i,k-1)^2 ),
where λ := [λ_0, λ_1 ]^⊺∈ℝ^2.
§.§ CelebA Dataset
Description and setup. CelebA dataset <cit.> contains over 200,000 images of celebrities, each annotated with 40 attribute labels. In this experiment, we study a binary classification task for predicting attractiveness in images with male (gender) as the sensitive attribute. Since the image size is 178 × 218, we preprocess the input image by center cropping it to 178 × 178 and then resizing it to 128 × 128. The model used for the prediction is a smaller version of ResNet-18, where the numbers of output channels from the first to the fourth layer are 64, 128, 256, and 512, respectively. Other important hyperparameters are listed in Section C.1 of the supplementary materials.
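The stated preprocessing corresponds to a torchvision pipeline along the following lines (the trailing tensor conversion is our assumption rather than a stated step).

```python
from torchvision import transforms

# CelebA preprocessing: center-crop to 178x178, then resize to 128x128
celeba_transform = transforms.Compose([
    transforms.CenterCrop(178),
    transforms.Resize(128),
    transforms.ToTensor(),
])
```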
The FL system consists of 10 clients participating in the training with roughly 2800 samples that are distributed from the central dataset. The test evaluation is conducted on a test dataset which has more samples than the local dataset. By default, the Dirichlet coefficient α is 1 unless stated otherwise.
Baseline comparison. Table <ref> shows the performance comparison of predicting attractiveness of images in CelebA dataset. Overall, the proposed method can gain good fairness performance for both DP and EO while sacrificing the accuracy by 4%. The trade-off between accuracy and fairness can also be observed in this task.
The proposed method improves DP by 6.8 % and EO by 12 % while sacrificing 3.5 % accuracy compared with FedAvg. It also shows the largest reduction in DP and EO. As attractiveness is a potentially biased label, FairFedAvgALM demonstrates its effectiveness in handling group fairness in such a situation. In contrast, FPFL reduces the accuracy more while decreasing both DP and EO less compared with FairFedAvgALM. FairFed does not offer any improvement in fairness. FairALM-FedAvg improves the fairness performance only mildly while slightly degrading the accuracy.
Statistical heterogeneity. We compare the performance of the proposed method with FedAvg on the other two levels of data heterogeneity, α=0.2 and α=5, and tabulate the result in Table <ref>, while maintaining the same setups and hyperparameters as before. As expected, the accuracy significantly drops when the system is more statistically heterogeneous (α = 0.2), and increases slightly in accuracy when the system is more homogeneous (α=5). The proposed method can improve fairness of the model with different levels of heterogeneity. Interestingly, when the system is statistically heterogeneous, the trained model is fairer compared with the same model trained on a more statistically homogeneous system.
Scalability with the number of clients. We study the effect of scaling up the number of clients on the performance of two algorithms: FedAvg and FairFedAvgALM, while maintaining the same amount of samples per client, the same setups, and hyperparameters. As shown in Table <ref>, the accuracy decreases as the number of clients increases. On the other hand, both fairness metrics improve as the number of clients increases for both algorithms. The proposed algorithm can still maintain better fairness performance compared with FedAvg across different numbers of clients.
Gaussian mechanism for LDP. We extend the previous setups from the baseline comparison by adopting Gaussian mechanism. We set σ_w = σ_λ, perform grid search on the set {0.1, 0.01, 0.001, 0.0001 }, and find that beyond 0.001, the trained model may diverge. Comparing Table <ref> and Table <ref>, we see that adding the Gaussian noise to the model updates degrades the accuracy and fairness performance. Firstly, the accuracy drops by 3-4 % for both FedAvg and FairFedAvgALM. Furthermore, the fairness aspect is heavily impacted for FedAvg where the DP performance is increased by almost 7 %, while the proposed method only increases by 1 %. Interestingly, EO drops by 2 % with both methods, but with higher variance.
Qualitative analysis. We study the behavior of the trained models by FedAvg and FairFedAvgALM through the GradCAM visualization. From Figure <ref>, we can empirically observe how FedAvg and FairFedAvgALM predict based on the input images. In general, FairFedAvgALM captures smaller regions on the face than FedAvg. For example, in the second image, FedAvg captures the eye and the forehead region to make a prediction, whereas FairFedAvgALM only takes the forehead information. Furthermore, FairFedAvgALM avoids regions that implicitly encode gender information. For instance, in the fourth image, FedAvg captures a chubby cheek, which is often associated with women, while FairFedAvgALM captures the lower hair, which is more gender-agnostic.
§.§ ImSitu Dataset
Description and setup. ImSitu dataset comprises more than 200,000 images capturing everyday events, with each image annotated with a verb and a corresponding set of nouns. In our study, we employ ResNet-18 <cit.>, which is pre-trained on the Imagenet (ILSVRC) dataset. The task is to predict the activity of each image from 211 possible labels. The verb label and gender label of each image were filtered according to the existence of the gender attribute and annotated, based on the methodology proposed by <cit.>. Prior to inputting the image into the model, we resize it to 256× 256 and randomly crop a part of the region of size 224 × 224. Other important hyperparameters are listed in Section C.2 of the supplementary materials.
The FL system in question is composed of four clients. The testing of the final model is conducted on unused samples of the clients. By default, the Dirichlet coefficient α is 2 unless stated otherwise.
Baseline comparison. The performance comparison between the proposed method and the baselines is shown in Table <ref>. Because the empirical DP becomes insensitive as the number of classes increases, we need to consider the performance on subtasks, each consisting of positive and negative labels. In general, the proposed method can significantly improve the fairness of the model in a more complex dataset. The proposed method can achieve a 3% improvement in DP and a 6% improvement in EO over FedAvg while reducing the accuracy by at most 6%. Although the absolute improvement in terms of fairness seems minor, the relative improvement can reach 50%, and the fairness improvement is consistent across different subtasks.
In the cooking-driving task, the model trained by FairFedAvgALM improves DP by 2% and EO by 4% while sacrificing the accuracy by roughly 3% compared to FedAvg. In this scenario, FairFed struggles to show any improvement in fairness while degrading the accuracy. FPFL can reach better fairness performance at the cost of larger accuracy drop. FairALM-FedAvg can improve the fairness of this task without sacrificing accuracy.
For the shaving-moisturizing task, the roughly 6 % drop in accuracy of FairFedAvgALM is compensated by a 3 % decrease in DP relative to FedAvg. FairFed has a minor improvement in fairness while sacrificing little accuracy. Compared to FairFedAvgALM, FairALM-FedAvg improves DP similarly but is not as aggressive in terms of EO.
Some interesting observations are made in the assembling-hanging task. Firstly, the accuracy of FedAvg is not always superior compared to fairness-aware algorithms. In fact, FairALM-FedAvg has the highest accuracy while offering better fairness compared to FedAvg. Secondly, FairFed performs better than FairALM-FedAvg in terms of EO. The proposed method also outperforms FPFL in DP by 0.5 % and EO by 3 %. The proposed method still achieves the best fairness performance without sacrificing accuracy for this particular subtask.
§ CONCLUSION
In this paper, we proposed FairFedAvgALM, an FL algorithm based on augmented Lagrangian framework to impose group fairness constraints. The algorithm is a simple extension of FedAvg, enabling its seamless integration into typical FL systems, incurring negligible communication costs, and being compatible with LDP. We showed that the upper bound of the theoretical convergence rate of the proposed algorithm on nonconvex problems is 𝒪(1/T^2/3). We also theoretically demonstrated that adding the squared penalty term to the local objective increases the sensitivity of the primal update, which in turn increases the required noise level compared to FedAvg. Our experiments on CelebA and ImSitu datasets suggested that FairFedAvgALM can reduce the unfairness on trained FL models quite well with varying degrees of improvement under different levels of statistical heterogeneity, numbers of clients, and the presence of the Gaussian mechanism. The trade-off between the accuracy of predictions and fairness is empirically observed, and the proposed method enforces fairness more consistently compared to other methods.
|
http://arxiv.org/abs/2307.06047v1 | 20230712095033 | Quantum information diode based on a magnonic crystal | [
"Rohit K. Shukla",
"Levan Chotorlishvili",
"Vipin Vijayan",
"Harshit Verma",
"Arthur Ernst",
"Stuart S. P. Parkin",
"Sunil K. Mishra"
] | quant-ph | [
"quant-ph"
] |
Department of Physics, Indian Institute of Technology (Banaras Hindu University) Varanasi - 221005, India
[email protected]
Department of Physics and Medical Engineering, Rzeszow University of Technology, 35-959 Rzeszow Poland
Department of Physics, Indian Institute of Technology (Banaras Hindu University) Varanasi - 221005, India
Centre for Engineered Quantum Systems (EQUS), School of Mathematics and
Physics, The University of Queensland, St Lucia, QLD 4072, Australia
Max Planck Institute of Microstructure Physics, Weinberg 2, D-06120 Halle, Germany
Institute of Theoretical Physics, Johannes Kepler University Alterger Strasse 69, 4040 Linz, Austria
[email protected]
Max Planck Institute of Microstructure Physics, Weinberg 2, D-06120 Halle, Germany
[email protected]
Department of Physics, Indian Institute of Technology (Banaras Hindu University) Varanasi - 221005, India
Exploiting the effect of nonreciprocal magnons in a system with no
inversion symmetry, we propose a concept of a quantum information
diode, i.e., a device rectifying the amount of quantum
information transmitted in the opposite directions. We control the
asymmetric left and right quantum information currents through an
applied external electric field and quantify it through the left and
right out-of-time-ordered correlation (OTOC). To enhance the efficiency of the quantum information diode, we utilize a magnonic crystal. We excite magnons of different frequencies and let them
propagate in opposite directions. Nonreciprocal magnons propagating in
opposite directions have different dispersion relations. Magnons
propagating in one direction match resonant conditions and scatter on
gate magnons. Therefore, magnon flux in one direction is damped in the
magnonic crystal leading to an asymmetric transport of
quantum information in the quantum information diode. A quantum
information diode can be fabricated from an yttrium iron garnet (YIG)
film. This is an experimentally feasible concept and implies certain
conditions: low temperature and small deviation from the equilibrium
to exclude effects of phonons and magnon interactions. We show that
rectification of the flow of quantum information can be controlled efficiently by an external electric field and magnetoelectric
efficiently by an external electric field and magnetoelectric
effects.
Quantum information diode based on a magnonic crystal
Sunil K. Mishra
=====================================================
§ INTRODUCTION
A diode is a device designed to support asymmetric transport.
Nowadays, household electric appliances or advanced experimental
scientific equipment are all inconceivable without extensive use of
diodes. Diodes with a perfect rectification effect permit electrical
current to flow in one direction only. The progress in nanotechnology
and materials science places new demands on a new generation of diodes;
futuristic nano-devices that can rectify either acoustic (sound
waves), thermal phononic, or magnonic spin current transport.
Nevertheless, we note that at the nano-scale, the rectification effect is
never perfect, i.e., backflow is permitted, but amplitudes of the
front and backflows are different
<cit.>. In
the present work, we propose an entirely new type of diode designed to
rectify the quantum information current. We do believe that in the
foreseeable future the quantum information diode (QID) has the potential
to become a benchmark of quantum information technologies.
The functionality of a QID relies on the use of magnonic crystals,
i. e., artificial media with a characteristic periodic lateral
variation of magnetic properties. Similar to photonic crystals,
magnonic crystals possess a band gap in the magnonic excitation
spectrum. Therefore, spin waves with frequencies matching the band gap
are not allowed to propagate through the magnonic crystals
<cit.>. This effect has been utilized earlier to demonstrate a magnonic transistor in a YIG strip <cit.>.
The essence of a magnonic transistor is a YIG strip with a periodic
modulation of its thickness (magnonic crystal). The transistor is
complemented by a source, a drain, and gate antennas. A gate antenna
injects magnonic crystal magnons with a frequency ω_G matching
the magnonic crystal band gap. In the process, the gate magnons cannot leave
the crystal and may reach a high density. Magnons emitted from a
source with a wave vector k_s flowing towards the drain run
into the magnonic crystal. The interaction between the source magnons and
the magnonic crystal magnons is a four-magnon scattering process. The
magnonic current emitted from the source attenuates in the magnonic
crystal, and the weak signal reaches the drain due to the scattering.
The relaxation process is swift if the following condition holds
<cit.>
k_s=m_0π/a_0,
where m_0 is the integer,
and a_0 is the crystal lattice constant. The magnons with wave
vectors satisfying the Bragg conditions Eq. (<ref>)
will be resonantly scattered back, resulting in the generation of
rejection bands in a spin-wave spectrum over which magnon propagation
is entirely prohibited. Experimental verification of this effect is
given in Ref. <cit.>.
§ RESULTS
§.§ Proposed set-up for QID
A pictorial representation of a QID is shown in Fig. <ref>. A
magnonic crystal can be fabricated from a YIG film. Grooves can be
deposited using a lithography procedure in a few nanometer steps, and,
for our purpose, we consider parallel lines in width of 1μm spaced
with 10μm from each other. Therefore, the lattice constant,
approximately a_0=11μm, i. e., is much larger than the unit
cell size a=10nm used in our coarse-graining approach. Due to the
capacity of our analytical calculations, we consider quantum spin
chains of length about N=1000 spins and the maximal distance between
the spins r_ij=d (in the units of a), d=i-j=40. In what
follows, we take k(ω)a≪1. The mechanism of the QID is based
on the effect of direction dependence of nonreciprocal magnons
<cit.>. In the chiral spin systems, the
absence of inversion symmetry causes a difference in dispersion
relations of the left and right propagating magnons, i. e.,
ω_s,L(k)≠ω_s,R(-k). Due to the
Dzyaloshinskii–Moriya interaction (DMI), magnons of the same frequency
ω_s propagating in opposite directions have different wave
vectors <cit.>: a(k^+_s-k^-_s)=D/J, where J
is the exchange constant, and D is the DMI constant. Therefore, if
the condition Eq.(<ref>) holds for the left
propagating magnons, it is violated for the right propagating magnons
and vice versa. These magnons propagating in different directions
decay differently in the magnonic crystal. Without loss of generality,
we assume that the right propagating magnons with k^+_s satisfy the
condition Eq.(<ref>), and the current attenuates
due to the scattering of source magnons by the gate magnons. The left
propagating magnons k^-_s violate the condition Eq.(<ref>), and the current flows without scattering. Thus,
reversing the source and drain antennas' positions rectifies the
current. Following ref. <cit.>, we introduce the suppression rate of the source-to-drain magnonic current, ξ(D)=1-n_D^+/n_D^-, where n_D^+<n_D^- are the densities of the drain magnons with and without scattering, respectively. The parameter ξ(D) is experimentally accessible, and it depends on the particular setup. Therefore, in this manuscript, we take ξ(D) as a free theory parameter. Multiferroic (MF) materials are considered as a good
example of a system with broken inversion symmetry, (see
Refs. <cit.>)
and references therein. MF properties of YIG are studied in
ref. <cit.>. Moreover, in accordance with the scanning
tunneling microscopy experiments, a change in the spin direction at
one edge of a chiral chain was experimentally probed by tens of
nanometers away from the second edge <cit.>.
§.§ Model
We consider a
2D square-lattice spin system with nearest-neighbor J_1 and the next
nearest-neighbor J_2 coupling constants:
Ĥ=J_1∑_⟨ n,m⟩σ̂_n·σ̂_m+ J_2∑_⟨⟨ n,m⟩⟩σ̂_n·σ̂_m- P· E,
where ⟨ n,m⟩
and ⟨⟨ n,m⟩⟩ indicates all the pairs with nearest-neighbor and next nearest-neighbor interactions, respectively. The last term in Eq. (<ref>) describes a coupling of the ferroelectric polarization with unit vector 𝐞^x_i,i+1,
𝐏=g^†_ME𝐞^x_i,i+1×(σ̂_i×σ̂_i+1)
with an applied external electric field and mimics an
effective Dzyaloshinskii–Moriya interaction term
D=E_y g^†_ME breaking the
left-right symmetry, where
- P· E=D∑_n(σ̂_n×σ̂_n+1)_z.
Here we consider only the nearest neighbor
DMI and only in one direction. As a consequence, the
left-right inversion is equivalent to D→-D, or
E_y→-E_y. The broken left-right inversion
symmetry can be exploited in rectifying the information
current by an electric field. More importantly, the procedure is
experimentally feasible. We can diagonalize the Hamiltonian in Eq. (<ref>)
by using the Holstein-Primakoff transformation
<cit.>[See <ref>
for detailed
derivation]
as:
Ĥ = ∑_k⃗ω(± D,k)â^†_k⃗â_k⃗, ω(± D,k)=(ω(k⃗)±ω_DM(k⃗)), ω_DM(k⃗)=Dsin(k_xa),
ω(k⃗)=2J_1(1-γ_1,k)+2J_2(1-γ_2,k), γ_1,k=1/2(cos k_xa+cos k_ya),
γ_2,k=1/2[cos (k_x+k_y)a+cos (k_x-k_y)a].
Here ± D corresponds to the magnons propagating in opposite
directions and the sign change is equivalent to the electric field
direction change. We note that a 1D character of the DM term is
ensured by the magnetoelectric effect <cit.> and by the electric field applied along the y axis.
The speed limit of information propagation is usually given in terms of the Lieb-Robinson (LR) bound, defined for Hamiltonians that are locally bounded and short-range interacting <cit.>. Since the Hamiltonian in Eq. (<ref>) satisfies both conditions, the LR bound can be defined for the spin model. However, when we transform the Hamiltonian using Holstein-Primakoff bosons, we have to take extra care, as the bosons are not locally bounded. To define the LR bound, we take only a few noninteracting magnons and exclude the magnon-magnon interaction, truncating terms beyond quadratic operators. In a realistic experimental setting, a low density of propagating magnons in YIG can easily be achieved by properly controlling the microwave antenna. In the case of low magnon density, the role of the magnon-magnon interaction between propagating magnons in YIG is negligible. Therefore, for YIG, we have a quadratic Hamiltonian, which is a precise approach in the low magnon density limit. Our discussion is valid for the experimental physical system <cit.>, where magnons of YIG do not interact with each other, implying that there is no term in the Hamiltonian beyond quadratic. We can estimate LR bounds <cit.> by defining the maximum group velocities of the left- and right-propagating magnons, v_g^±(k⃗)=∂ (ω(k⃗)±ω_DM(k⃗))/∂ k. Taking into account the explicit form of the dispersion relations, we see that the maximal asymmetry is set by the DM constant, i. e., v_g^+(0)-v_g^-(0)≈ 2D. We note that the effect of nonreciprocal magnons has already been observed experimentally <cit.> but, to date, has never been discussed in the context of quantum information theory.
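A short numerical sketch of this nonreciprocity, using the 2D dispersion of Eq. (<ref>) evaluated at k_y = 0 and the dimensionless units adopted later in the text (J_1 = 2J_2 = 1, a = 1; the value of D is an arbitrary placeholder), is given here.

```python
import numpy as np

# dispersion along x (k_y = 0) for left/right movers, in units a = 1, J1 = 2*J2 = 1
J1, J2, D, a = 1.0, 0.5, 0.2, 1.0

def omega(k, sign):
    g1 = 0.5 * (np.cos(k * a) + 1.0)   # gamma_{1,k} at k_y = 0
    g2 = np.cos(k * a)                  # gamma_{2,k} at k_y = 0
    return 2 * J1 * (1 - g1) + 2 * J2 * (1 - g2) + sign * D * np.sin(k * a)

k = np.linspace(-np.pi, np.pi, 2001)
w_plus, w_minus = omega(k, +1), omega(k, -1)

# group velocities and their asymmetry near k = 0 (approximately 2*D*a)
v_plus = np.gradient(w_plus, k)
v_minus = np.gradient(w_minus, k)
i0 = len(k) // 2                        # k = 0
print(v_plus[i0] - v_minus[i0])
```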
We formulate the central interest question as follows: At t=0, we
act upon the spin σ̂_n to see how swiftly changes in the
spin direction
can be probed tens of sites away d=n-m≫ 1, and whether the forward
and backward processes (i.e., probing for σ̂_m the outcome
of the measurement done on σ̂_n) are asymmetric or not.
Due to the left-right asymmetry, the chiral spin channel may sustain a
diode rectification effect when transferring the quantum information
from left to right and in the opposite direction. We note that our
discussion about the left-right asymmetry of the quantum information
flow is valid until the current reaches boundaries. Thus the upper
limit of the time reads t_max=Na/v_g^±(k⃗), where N
is the size of the system.
§.§ Out-of-time-order correlator
Larkin and Ovchinnikov <cit.> introduced the concept of the out-of-time-ordered correlator (OTOC), and since then, the OTOC has been seen as a diagnostic tool of quantum chaos. Interest in delocalization in quantum information theory (i.e., the scrambling of quantum entanglement) was renewed only recently; see Refs. <cit.> and references therein. The OTOC is also used for describing static and dynamical phase transitions <cit.>. The dynamics of semi-classical, quantum, and spin systems can be discussed by using the OTOC <cit.>. We utilize the OTOC to characterize the left-right asymmetry of the quantum information flow and thus infer the rectification effect of a diode.
Let us consider two unitary operators V̂ and Ŵ
describing local perturbations to the chiral spin system
Eq. (<ref>), and the unitary time evolution of one of the
operators
Ŵ(t)=exp(iĤt)Ŵ(0)exp(-iĤt).
Then the OTOC is defined as
C(t)=
1/2⟨[Ŵ(t),V̂(0)]^†[Ŵ(t),V̂(0)]⟩,
where the angular brackets ⟨⋯⟩ denote a
quantum mechanical average over the propagated quantum state. Following
the definition, the OTOC at the initial moment is
zero, C(0)=0, provided that [Ŵ(0),V̂(0)]=0. In
particular, for the local unitary and Hermitian operators of our choice
Ŵ_m^†(t)≡σ̂_m^z(t)=exp(iĤt) η̂_mexp(-iĤt),
and V̂_n^†=σ̂_n^z=η̂_n, where
η̂_n=2̂a^†_nâ_n-1. The bosonic operators are
related to the spin operators via σ_n^-=2a_n^†, σ_n^+=2a_n, σ_n^z=2a_n^†a_n-1. In terms of the
occupation number operators, the OTOC is given as
C(t) = 1/2{⟨η_nη_m(t)η_m(t)η_n⟩+⟨η_m(t)η_nη_nη_m(t)⟩
- ⟨η_m(t)η_nη_m(t)η_n⟩-⟨η_nη_m(t)η_nη_m(t)}.
Indeed, the OTOC can be interpreted as the overlap of two wave
functions, which are time evolved in two different ways for the same
initial state |ψ(0)⟩. The first wave function is obtained
by perturbing the initial state at t=0 with a local unitary
operator V̂, then evolved further under the unitary evolution
operator Û=exp (-Ĥt) until time t. It is then
perturbed at time t with a local unitary operator Ŵ, and
evolved backwards from t to t=0 under
Û^̂†̂. Hence, the time-evolved wave function is
|ψ(t)⟩
=Û^̂†̂ŴÛV̂|ψ(0)⟩=Ŵ(t)V̂|ψ(0)⟩. To
get the second wave function, the order of the applied perturbations
is permuted, i. e., first Ŵ at t and then V̂ at
t=0. Therefore, the second wave function is
|ϕ(t)⟩=V̂Û^̂†̂ŴÛ|ψ(0)⟩=V̂Ŵ(t)|ψ(0)⟩
and their overlap is equivalent to
F(t)=⟨ϕ(t)|ψ(t)⟩. The OTOC is calculated from this
overlap using C(t)=1-[F(t)].
What breaks the time inversion symmetry for the OTOC is the permuted
sequence of operators Ŵ and V̂. However, in
spin-lattice models with a preserved spatial inversion symmetry
𝒫̂Ĥ=Ĥ, the spatial inversion
𝒫̂d(Ŵ, V̂)=-d(Ŵ,
V̂)=d(V̂, Ŵ) can restore the permuted order
between V̂ and Ŵ, where d(Ŵ, V̂) denotes
the distance between observables Ŵ and V̂. Permuting
just a single wave function, one finds
C(t)=1- Re(⟨ϕ(t)|𝒫̂𝒯̂|ψ(t)⟩)=C(0). Thus,
a scrambled quantum entanglement formally can be unscrambled by a
spatial inversion. However, in chiral systems
𝒫̂Ĥ≠Ĥ and the unscrambling procedure
fails.
Taking into account Eq. (<ref>), we analyze quantum
information scrambling along the x axis i. e.,
ω(± D,k)=ω(± D, k_x, 0) and along
the y axis, ω(0,k)=ω(0, 0, k_y). It is easy
to see that the quantum information flow along the y axis is
symmetric, while along the x axis it is
asymmetric and depends on the sign of the DM constant, i.e., the flow along
+x differs from that along -x. Let us assume that
Eq. (<ref>) holds for right-moving magnons
and is violated for left-moving magnons. Excited magnons with the same
frequency and propagating into different directions have different
wave vectors
ω_s(D,k_s^+)=ω_s(-D,k_s^-) where:
ω_s(± D,k_s^±)=2J_1(1-1/2cos k^±_xa)+2J_2(1-cos k^±_xa) ± Dsin k^±_xa,
k^+_m_0x=m_0π/a_0, m_0∈ℕ, and k^-_m_0x we
find from the condition
ω_s(D,k_s^+)=ω_s(-D,k_s^-) leading
to
k^-_m_0x=k^+_m_0x+2/atan^-1(D/J_1+2J_2).
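As a brief consistency check (our sketch, with A≡ J_1+2J_2): the condition ω_s(D,k_s^+)=ω_s(-D,k_s^-) reduces to
-Acos k^+_xa+Dsin k^+_xa=-Acos k^-_xa-Dsin k^-_xa.
Writing the left-hand side as √(A^2+D^2)sin(k^+_xa-tan^-1(A/D)) and the right-hand side as -√(A^2+D^2)sin(k^-_xa+tan^-1(A/D)), and using tan^-1(A/D)=π/2-tan^-1(D/A), the nontrivial branch of the equality gives k^-_xa=k^+_xa+2tan^-1(D/A) (mod 2π), which is the relation above.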
Here we use shortened notations
ω_m_0=ω_s(D,k_s^+)=ω_s(-D,k_s^-)
and set dimensionless units J_1=2J_2≡ J=1. We excite in the diode magnons of different frequencies, m_0∈[1, N]. Considering Eq. (<ref>), Eq. (<ref>) and following Ref. <cit.>, we obtain expressions for the left and right OTOCs C_L(t) and C_R(t) as:
C_L(t) = 8/N^2Ω_1^LΩ_2^L-8/N^4Ω_1^LΩ_2^LΩ_1^LΩ_2^L,
C_R(t) = ζ^4(D)(8/N^2Ω_1^RΩ_2^R-8/N^4Ω_1^RΩ_2^RΩ_1^RΩ_2^R),
where frequencies Ω_1/2^L/R and details of derivations are presented in <ref>.
Parameter ζ enters the right OTOC C_R(t) expression because the right-propagating magnons are scattered on the gate magnons. This is due to the non-reciprocal magnon dispersion relations associated with the DMI term. Since the value of ζ depends on the experimental setup, we consider experimentally feasible values in our calculations.
It should be noted that in the calculation of the OTOC, we consider the expectation value over the one-magnon excitation state â^†_n|ϕ⟩, where |ϕ⟩ is the vacuum state. Such a state shows the presence of the quantum blockade effect in a magnonic crystal. The calculation of the equal-time second-order correlation function showing the quantum blockade effect is given in <ref>.
Fig. <ref>(a) and Fig. <ref>(b) show the variation of C_L(t) and C_R(t) for spins separated by |n^+-m| and |n^--m|, respectively. Both show similar behavior with increasing separation between the spins. However, the amplitude of C_R(t) is smaller than that of C_L(t) because the decay of C_R(t) is modified by the suppression coefficient ζ. In the case of dominant attenuation by the gate magnons, the OTOC decreases significantly. The difference between C_L(t) and C_R(t) originates from the asymmetry introduced by the DMI term. The time required for the OTOC to deviate from zero increases as the separation between the spins increases. This observation indicates that the quantum information flow has a finite "butterfly velocity." On the other hand, the amplitude of the OTOC decreases as the separation between the spins increases, because the initial amount of quantum information spreads among more spins. At large times, the OTOC again becomes zero because the information spreads over the whole system. Fig. <ref>(c) shows the behavior of C_R(t) with decreasing suppression coefficient ζ at a fixed distance r_1,2 between the spins. As the suppression coefficient ζ decreases, the amplitude of C_R(t) decreases, which is an indicator of increasing rectification. A detailed discussion of rectification is given in the next subsection.
A high density of magnons can invalidate the assumption of a pure
state or spin-wave approximation that works only for a low density of
magnons. However, the key point in our case is that one has to
distinguish between two sorts of magnons, gate magnons and propagating
nonreciprocal magnons. The density of the propagating magnons can be
regulated in the experiment through a microwave antenna, and one can
always ensure that their density is low enough. It is easy to regulate
the density of the gate magnons, and an experimentally accessible
method is discussed in Ref. <cit.>.
In magnonic systems, the Kerr nonlinearity may lead to interesting effects, for example magnon-magnon entanglement and frequency shifts <cit.>. On the other hand, we note that the DMI term and strong magneto-electric coupling may be responsible for nonlinear coupling terms similar to the magnon Kerr effect. This effect is studied in Ref. <cit.>.
§.§ Rectification
The efficiency of the quantum information diode is quantified by the rectification coefficient, i.e., the ratio between the left- and right-propagating magnon contributions, calculated from the left and right OTOCs. The DMI term and the non-reciprocal magnon dispersion relations influence the rectification coefficient in two ways: (a) directly, since different left and right dispersion relations enter the left and right OTOCs, and (b) indirectly, since the magnonic crystal, through scattering on the gate magnons, blocks propagation of the drain magnons in one direction only (damping of the OTOC current). This non-reciprocal damping effect has been observed experimentally in magnonic crystals <cit.>. The non-reciprocal damping enhances the rectification effect and had not been studied in the context of quantum information and the OTOC before.
Let us calculate the total amount of correlations transferred in opposite directions and, from it, the rectification coefficient, a function of the external electric field, as R=∫_0^∞ C_R(t) dt/∫_0^∞ C_L(t)dt. We interpolate the suppression rate as a function of the DMI coefficient in the form ζ(D)≈ e^-D/5. The coefficient ζ(D) mimics the scattering of the drain magnons on the gate magnons <cit.>. In Fig. <ref> we see the variation of the rectification coefficient as a function of D. The electric field plays a direct and important role in rectification. In particular, the DMI constant D depends on the electric field E_y as D=E_yg_ME, where g_ME is the magneto-electric coupling constant. In the case of zero electric field, D is zero, implying the absence of the rectification effect, R=1. As the electric field increases, D also increases linearly, and the rectification decreases exponentially. A detailed study of the role of the electric field in the DMI has been done in Ref. <cit.>.
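As an illustration, the rectification coefficient can be estimated numerically from the analytical OTOC expressions of <ref>. The short Python sketch below is our own illustration; the couplings, the magnonic-crystal period a_0, the spin separation r_1,2, the number of excited modes, and the integration window are illustrative choices rather than the parameters used for the figures. It reproduces the limiting behavior R=1 at D=0 and the decrease of R with increasing D.

import numpy as np

# Illustrative parameters: J1 = 2*J2 = 1, lattice constant a = 1, hbar = 1.
J1, J2, a, a0 = 1.0, 0.5, 1.0, 10.0      # a0: magnonic-crystal period (toy value)
N, r12 = 40, 5.0                          # number of excited modes, spin separation
zeta = lambda D: np.exp(-D / 5.0)         # interpolated suppression rate zeta(D)

def otoc_LR(D, tmax=200.0, nt=4000):
    m0 = np.arange(1, N + 1)
    kp = m0 * np.pi / a0                                  # Bragg wave vectors k+
    km = kp + (2.0 / a) * np.arctan(D / (J1 + 2 * J2))    # partners k- from the relation above
    w = (2 * J1 * (1 - 0.5 * np.cos(kp * a))
         + 2 * J2 * (1 - np.cos(kp * a)) + D * np.sin(kp * a))
    t = np.linspace(0.0, tmax, nt)
    phase = np.exp(1j * np.outer(t, w))                   # exp(i w_{m0} t)
    OmR = (np.exp(-1j * kp * r12) * phase).sum(axis=1)
    OmL = (np.exp(-1j * km * r12) * phase).sum(axis=1)
    CL = 8 * np.abs(OmL) ** 2 / N ** 2 - 8 * np.abs(OmL) ** 4 / N ** 4
    CR = zeta(D) ** 4 * (8 * np.abs(OmR) ** 2 / N ** 2 - 8 * np.abs(OmR) ** 4 / N ** 4)
    return CL, CR

for D in (0.0, 0.2, 0.5):
    CL, CR = otoc_LR(D)
    print(f"D={D:.1f}  R ~ {CR.sum() / CL.sum():.3f}")     # finite-window estimate of R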
§ DISCUSSIONS
We studied a quantum information flow in a spin quantum system. In
particular, we proposed a quantum magnon diode based on YIG and
magnonic crystal properties. The flow of magnons with wavelengths
satisfying the Bragg conditions k=m_0π/a_0 is reflected from the
gate magnons. Due to the absence of inversion symmetry in the system, left
and right-propagating magnons have different dispersion relations and
wave vectors. While for the right propagating magnons, the Bragg
conditions hold, left magnons violate them, leading to an asymmetric
flow of the quantum information.
We found that the strength of quantum correlations depends on the
distance between spins and time. The OTOC for the spins separated by
longer distance shows an inevitable delay in time, meaning that the
quantum information flow has a finite "butterfly velocity." On the
other hand, the OTOC amplitude becomes smaller at longer distances
between spins. The reason is that the initial amount of quantum
information spreads among more spins. After the quantum information
spreads over the whole system, which is quite large (N=1000
sites), the OTOC again becomes zero.
We proposed a novel theoretical concept that can be directly realized
with the experimentally feasible setup and particular material. There
are several experimentally feasible protocols for measuring OTOC in
the spin systems <cit.>. According
to these protocols, one needs to initialize the system into the fully
polarized state, then apply quench and measure the expectation value
of the first spin. All these steps are directly applicable to our
setup from YIG. The fully polarized initial state can be obtained by
switching on and off a strong magnetic field at a time moment t = 0.
Quench, in our case, is performed by a microwave antenna which is an
experimentally accessible device. Polarization of the initial spin can
be measured through an STM tip. Overall, our setup coincides with the experimentally feasible configuration studied in Ref. <cit.>.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available within
the article.
§ ACKNOWLEDGMENTS
SKM acknowledges the Science and Engineering Research Board, Department of
Science and Technology, India for support under Core Research Grant
CRG/2021/007095. A.E. acknowledges the funding by the
Fonds zur Förderung der Wissenschaftlichen Forschung (FWF) under Grant No. I 5384.
§ DIAGONALIZATION OF HAMILTONIAN EQ. (2)
We consider a 2D square-lattice spin system with nearest-neighbor J_1 and next-nearest-neighbor J_2 coupling constants (taking ℏ=1):
Ĥ = J_1∑_⟨ n,m⟩σ̂_nσ̂_m+ J_2∑_⟨⟨ n,m⟩⟩σ̂_nσ̂_m- P· E,
= J_1∑_⟨ n,m⟩σ̂_nσ̂_m+ J_2∑_⟨⟨ n,m⟩⟩σ̂_nσ̂_m-D∑_n(σ̂_n×σ̂_n+1)_z,
= 4[J_1∑_⟨ n,m⟩Ŝ_nŜ_m+ J_2∑_⟨⟨ n,m⟩⟩Ŝ_nŜ_m+D/i∑_n( Ŝ_n^+Ŝ_n+1^--Ŝ_n^-Ŝ_n+1^+)],
= 4[J_1 ∑_⟨ n,m⟩1/2{(Ŝ_n^-Ŝ_m^++Ŝ_n^+Ŝ_m^-)+Ŝ_n^zŜ_m^z} +J_2∑_⟨⟨ n,m⟩⟩1/2{(Ŝ_n^-Ŝ_m^++Ŝ_n^+Ŝ_m^-) + Ŝ_n^zŜ_m^z}+D/i∑_n( Ŝ_n^+Ŝ_n+1^--Ŝ_n^-Ŝ_n+1^+)].
Spin-half systems have two permitted states on each site, i.e., |↑⟩ and |↓⟩. The action of the spin operators on these states is given as
Ŝ^+|↓⟩=|↑⟩, Ŝ^+|↑⟩=0,
Ŝ^-|↑⟩=|↓⟩, Ŝ^-|↓⟩=0,
Ŝ^z|↑⟩=1/2|↑⟩, Ŝ^z|↓⟩=-1/2|↓⟩.
The transformation of the spin operators into hard-core bosonic creation and annihilation operators is given as
Ŝ_m,n^+=â_m,n,
Ŝ_m,n^-=â_m,n^†,
Ŝ_m,n^z=1/2-â_m,n^†â_m,n.
The Hamiltonian in the bosonic representation is given as
Ĥ = 2[J_1 ∑_⟨ n,m⟩(â_n^†â_m+â_nâ_m^†-â_n^†â_n-â_m^†â_m+1/2+â_n^†â_nâ_m^†â_m)
+ J_2∑_⟨⟨ n,m⟩⟩(â_n^†â_m+â_nâ_m^†-â_n^†â_n-â_m^†â_m+1/2+â_n^†â_nâ_m^†â_m)
+ D/i∑_n (â_nâ_n+1^†-â_n^†â_n+1)].
Fourier transform of â_n^†(â_n) is â_k⃗^†(â_k⃗).
â_k⃗^†=1/√(N)∑_ne^i k⃗r⃗_na_n^†,
â_k⃗=1/√(N)∑_ne^-i k⃗r⃗_na_n.
The inverse Fourier transform is given as
â_n^†=1/√(N)∑_k⃗e^-i k⃗r⃗_nâ_k⃗^†,
â_n=1/√(N)∑_k⃗e^i k⃗r⃗_nâ_k⃗.
After summing over n we get Hamiltonian (Eq. <ref>) in k⃗ space as
Ĥ = ∑_k⃗ω_k⃗â^†_k⃗â_k⃗-D∑_k⃗sin(k⃗ a)â_k⃗^†â_k⃗,
= ∑_k⃗ω(± D,k)â^†_k⃗â_k⃗
where,
ω(± D,k)=(ω(k⃗)±ω_DM(k⃗)), ω_DM(k⃗)=Dsin(k_xa),
ω(k⃗)=2J_1(1-γ_1,k)+2J_2(1-γ_2,k), γ_1,k=1/2(cos k_xa+cos k_ya),
γ_2,k=1/2[cos (k_x+k_y)a+cos (k_x-k_y)a].
§ CALCULATION OF LEFT AND RIGHT OUT-OF-TIME ORDERED CORRELATION FUNCTIONS
We calculate the OTOC exactly for the one-magnon excitation state given in Eq. (7) as
C(t)=1/2{⟨η̂_nη̂_m(t)η̂_m(t)η̂_n⟩ + ⟨η̂_m(t)η̂_nη̂_nη̂_m(t)⟩
- ⟨η̂_m(t)η̂_nη̂_m(t)η̂_n⟩-⟨η̂_nη̂_m(t)η̂_nη̂_m(t)⟩}.
Here, η̂_m/n= σ̂_m/n^z is Hermitian and unitary; therefore, Eq. (<ref>) takes the form
C(t)=1-⟨η̂_m(t)η̂_nη̂_m(t)η̂_n⟩=1-F(t),
where F(t) is given as
F(t) = ⟨ϕ|â_nη̂_m(t)η̂_nη̂_m(t)η̂_nâ^†_n|ϕ⟩.
In the above equation, the expectation value is taken over the one-magnon excitation state â^†_n|ϕ⟩, where |ϕ⟩ is the vacuum state, equivalent to the fully polarized state.
First, we calculate the product of the four observables in F(t) (Eq. (<ref>)) in the bosonic representation as
η̂_m(t)η̂_nη̂_m(t)η̂_n = [1-2â^†_mâ_m(t)][1-2â^†_nâ_n][1-2â^†_mâ_m(t)][1-2â^†_nâ_n],
= [1-2â^†_mâ_m(t)-2â^†_nâ_n+4â^†_mâ_m(t)â^†_nâ_n]
× [1-2â^†_mâ_m(t)-2â^†_nâ_n+4â^†_mâ_m(t)â^†_nâ_n],
= 1-4â^†_mâ_m(t)-4â^†_nâ_n+4â^†_mâ_m(t)â^†_nâ_n +4â^†_nâ_nâ^†_mâ_m(t)
+ 4â^†_mâ_mâ^†_mâ_m(t)+4â^†_nâ_nâ^†_nâ_n+4â^†_mâ_mâ^†_nâ_n+4â^†_nâ_nâ^†_mâ_m
- 8â^†_mâ_mâ^†_mâ_m(t) â^†_nâ_n-8â^†_nâ_nâ^†_mâ_m(t) â^†_nâ_n
- 8â^†_mâ_m(t)â^†_nâ_nâ^†_mâ_m(t)-8â^†_mâ_m(t)â^†_nâ_nâ^†_nâ_n
+ 16â^†_mâ_m(t)â^†_nâ_nâ^†_mâ_m(t)â^†_nâ_n.
Further, we calculate the expectation value of the last term of Eq. (<ref>) over the one-magnon excitation state, i.e.,
⟨ϕ|â_nâ^†_mâ_m(t) â^†_nâ_nâ_nâ^†_mâ_m(t) â^†_nâ_nâ^†_n|ϕ⟩,
using the properties of bosonic operators [â_i, â_j^†]=δ_ij, (â_i)^2=0, and (â^†_i)^2=0. We get
⟨ϕ|â_nâ^†_mâ_m(t) â^†_nâ_nâ_nâ^†_mâ_m(t) â^†_nâ_nâ^†_n|ϕ⟩ =⟨ϕ|â_n e^iĤ tâ^†_mâ_me^-iĤ tâ^†_nâ_n e^iĤ tâ^†_mâ_me^-iĤ tâ^†_n|ϕ⟩,
= ⟨Ψ(t)|Ψ(t) ⟩,
where |Ψ(t)⟩=â_n e^iĤ tâ^†_mâ_me^-iĤ tâ^†_n|ϕ⟩. Fourier transformation of |Ψ(t)⟩, together with the diagonalized Hamiltonian, gives
|Ψ(t)⟩ = 1/N∑_k e^i(-k(m-n)+ω_kt/ℏ)1/N∑_k^' e^i( k^' (m-n)-ω_k^'t/ℏ)|ϕ⟩
= 1/N^2Ω_1 Ω_2 |ϕ⟩.
Hence,
⟨Ψ(t)|Ψ(t) ⟩=1/N^4Ω_1 Ω_2Ω_1 Ω_2.
Similarly,
⟨ϕ|â_nâ^†_mâ_m(t)â^†_n|ϕ⟩=1/N^2Ω_1 Ω_2.
After simple bosonic algebra, the time-dependent terms of Eq. (<ref>) are converted either into the form of Eq. (<ref>) or Eq. (<ref>). By using Eq. (<ref>) and Eq. (<ref>), we calculate F(t) as
F(t) =1-4/N^2Ω_1Ω_2-4+4/N^2Ω_1Ω_2+4/N^2Ω_1Ω_2+4/N^2Ω_1Ω_2+4/N^2Ω_1Ω_2+4/N^2Ω_1Ω_2+4
-8/N^2Ω_1Ω_2-8/N^2Ω_1Ω_2 -8/N^4Ω_1Ω_2Ω_1Ω_2-8/N^2Ω_1Ω_2+16/N^4Ω_1Ω_2Ω_1Ω_2
=1-8/N^2Ω_1Ω_2+8/N^4Ω_1Ω_2Ω_1Ω_2
Then, we get the left and right OTOCs’ analytical expressions as
C_L(t) = 8/N^2Ω_1^LΩ_2^L-8/N^4Ω_1^LΩ_2^LΩ_1^LΩ_2^L,
C_R(t) = ζ^4(D)(8/N^2Ω_1^RΩ_2^R-8/N^4Ω_1^RΩ_2^RΩ_1^RΩ_2^R),
where frequencies Ω_1/2^L/R are given as
Ω_1^R = Ω_2^R*=∑_m_0exp(-im_0π r_1,2/a_0)exp(i ω_m_0t/ℏ),
and
Ω_1^L = Ω_2^L*=∑_m_0exp(-ik_s^-r_1,2)exp(i ω_m_0t/ℏ).
§ QUANTUM BLOCKADE EFFECTS
To analyze the magnon blockade effect, we calculate the equal time second-order correlation function defined as <cit.>
g_a^2(0)=Tr(ρ̂â_m^† 2â_m^2)/[Tr(ρ̂â_m^†â_m)]^2=⟨â_m^† 2â_m^2⟩/⟨ a_m^†â_m⟩^2,
where a_m(a_m^†) are the annihilation (creation) operators of the magnon excitation. The magnon blockade is inferred from the condition g_a^2(0)→ 0 meaning that magnons can be excited individually, and two or more magnons cannot be excited together.
We note that
â_m^2â_m^†|ϕ⟩=0 leading to
g_a^2(0)=0. Therefore, the quantum blockade effect occurs in this case.
|
http://arxiv.org/abs/2307.04547v1 | 20230710132556 | Spectral Observables and Gauge Field Couplings in Causal Dynamical Triangulations | [
"Giuseppe Clemente",
"Massimo D'Elia"
] | hep-th | [
"hep-th"
] | |
http://arxiv.org/abs/2307.04408v1 | 20230710081540 | TIM: Teaching Large Language Models to Translate with Comparison | [
"Jiali Zeng",
"Fandong Meng",
"Yongjing Yin",
"Jie Zhou"
] | cs.CL | [
"cs.CL"
] |
TIM: Teaching Large Language Models to Translate with Comparison
Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou
August 12, 2023
===================================================================================================
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning.
However, these models can sometimes struggle with tasks that require more specialized knowledge such as translation.
One possible reason for such deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements.
Moreover, it can be more challenging to tune smaller LLMs with lower-quality training data.
To address this issue, we propose a novel framework using examples in comparison to teach LLMs to learn translation.
Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning.
We evaluate our method on WMT2022 test sets and show that it outperforms existing methods.
Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations.
Please refer to Github for more details:
https://github.com/lemon0830/TIM.
§ INTRODUCTION
Generative large language models, like GPT4, have shown remarkable performance in various NLP tasks <cit.>.
For machine translation, the GPT models achieve very competitive translation quality, especially for high-resource languages <cit.>,
which opens up new possibilities for building more effective translation systems.
It is impractical to deploy such large models for the translation task only, and using or tuning open-sourced generative language models has become an attractive research direction.
In this regard, researchers have explored strategies for example selection and instruction design through In-Context Learning (ICL) <cit.>.
However, evaluations of open-sourced LLMs like Bloom show that they do not perform as well as strong multilingual supervised baselines in most translation directions <cit.>.
Additionally, ICL can increase decoding latency due to the need for large models with long context.
Based on these observations, researchers suggest tuning relatively small LLMs for translation with a few high-quality supervised instructions <cit.>.
Instruction tuning has been shown to be an efficient method for making LLMs better aligned to the task descriptions preferred by humans <cit.>.
The only requirement is to collect task-specific data, and LLMs will be fine-tuned on the data with the language modeling loss.
However, optimizing for simple next-token prediction loss will cause models to overlook context information, especially for low-capacity models.
This is especially serious for tasks in which specialized knowledge in the context is necessary for task completion; ignoring such knowledge in translation can lead to inadequacy and hallucination.
Therefore, there is a need to investigate the limitations of LLMs and explore methods for improving their performance in specialized tasks.
In this paper,
we propose to teach the language models to learn translation with examples in comparison, aiming to make full use of
a small amount of high-quality translation data.
Based on the training data, we further construct two kinds of comparisons:
output comparison and preference comparison.
Output comparison is used to learn responses of different instructions for the same input.
Preference comparison is used to maximize the gap between correct and incorrect translations.
Specifically, in order to help identify specific areas where the model may be making errors, we introduce an additional preference loss during fine-tuning, which is used to learn reward models <cit.>, as regularization to penalize unexpected outputs.
We evaluate TIM on WMT22 test sets in four language directions (EN⇔DE, EN⇔ZH), and the improvement over the baselines shows the effectiveness of our method.
Our model shows better zero-shot translation performance and stability in prompt choice.
As the size increases, the performance of the models trained with TIM increases, with the improvement being more pronounced in the case of smaller models.
In particular, the tuned LLaMa-13B <cit.> achieves the top result on reference-free quality estimation in EN⇔DE, outperforming dedicated quality estimation models like COMET.
§ RELATED WORK
The research of machine translation based on LLMs can be divided into two categories: LLMs as interface <cit.> and instruction tuning <cit.>.
The studies of using LLMs as interface focus on empirical analysis.
For example,
<cit.> evaluate ChatGPT, GPT3.5 (text-davinci-003), and text-davinci-002 in eighteen different translation directions involving high
and low resource languages.
<cit.> further evaluate four popular LLMs (XGLM, BLOOMZ, OPT and ChatGPT) on 202 directions and 102 languages, and compare them with strong supervised baselines, which provides a more comprehensive benchmark result.
Many efforts are also put into investigating translation exemplars selection strategy of in-context learning <cit.>.
Another line of work introduces knowledge, such as word alignments extracted from a dictionary, to LLMs for better translation <cit.>.
Tuning smaller LLMs (e.g., 7B) for translation tasks is a promising direction since they are better at English than supervised translation models.
However, even for directions from other languages to English, the gap between language models fine-tuned with translation data and supervised systems is still evident <cit.>.
Different from them, we introduce output comparison and preference comparison data and present a preference regularization to alleviate hallucination and help LLMs learn translation better.
§ METHOD
In brief, we tune generative language models to learn translation with output comparison and preference comparison in the instruction tuning framework.
First, we will give a formal introduction to instruction tuning.
Then, we present the detail of two kinds of comparisons of our method consisting of output comparison and preference comparison, and an additional preference learning loss.
Finally, we show the different ways of parameter tuning.
§.§ Background: Instruction Tuning
The purpose of instruction tuning is to enhance the capacity of language models in handling NLP instructions.
The concept is that the models can be trained to execute tasks specified in instructions, which would enable them to comprehend and execute tasks that have not been encountered before.
As illustrated in Figure <ref>,
generally, each instance of instruction-following data starts with “instructions” c describing the task the model should perform, and a corresponding output y indicating the answer to the instruction.
The “input” x, i.e., the optional context or input for the task, is not always necessary but is used for the machine translation task.
Given the instruction data, the language models are optimized by minimizing the negative log-likelihood of the output y:
L_lm=-1/|y|∑_i^|y|logp(y_i|c,x).
Notably, the objective is the same as that used in pretraining.
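A minimal PyTorch-style sketch of this objective (our illustration; it assumes a causal LM whose next-token logits over the concatenated instruction, input, and output sequence are available, and it masks out the instruction/input positions so that only the output y contributes):

import torch
import torch.nn.functional as F

def lm_loss(logits, labels, prompt_len):
    # logits: (T, V) next-token logits for the concatenated sequence [c; x; y]
    # labels: (T,) token ids; prompt_len: number of tokens covering (c, x)
    shift_logits = logits[:-1]                 # prediction for position i+1
    shift_labels = labels[1:].clone()
    shift_labels[: prompt_len - 1] = -100      # ignore instruction and input positions
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)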
§.§ Output Comparison
An important ingredient of our method is the construction of samples used to provide comparison signals for model learning.
In addition to regular translation data,
we construct data used for comparison by introducing dictionary information or translation errors, which are shown in Figure <ref>.
Dictionary-guided Data.
To make the model aware of the underlying reasons for different translations, we inform the model of different correct outputs with the help of bilingual dictionaries[https://github.com/facebookresearch/MUSE].
We do not manually replace the words in an input-output pair to synthesize the comparison data but directly use a multi-reference corpus.
Specifically, we use the “no error” submissions annotated by humans of WMT20 in Multidimensional Quality Metrics (MQM) datasets[https://github.com/google/wmt-mqm-human-evaluation] as the multi-reference of the source sentence.
Then, we obtain the word alignments between a single source sentence and multiple references by looking up the bilingual dictionary.
Finally, we use the word alignments as a note added to the input.
As shown in Figure <ref>, for the same input sentence “国有企业和优势...老区。”, with the note containing different word alignments, the outputs of Example 1 and Example 2 are different.
Error-guided Data.
In addition, inspired by <cit.>,
we introduce translations with error annotations.
For correct input-output pairs, the added notes indicate no mistakes in the references, while the notes of incorrect input-output pairs indicate detailed translation errors.
As shown in the left part of Figure <ref>, the output of Example 1 is a correct translation while the output of Example 2 has a major locale convention/name format mistake, corresponding to the added note.
We directly use the human-annotated data of WMT20 in MQM datasets.
§.§ Preference Comparison
In preference comparison, we assign contrastive outputs for each type of data, denoted as Bad Output, and train the model with an extra preference loss.
For the regular translation data, we use the prediction of large language models (e.g., Alpaca) as the comparison.
For each sample with dictionary information or error information, we
randomly sample a translation with errors as the Bad Output.
Moreover, we add noise to the Bad Output by randomly deleting words or swapping the positions of two words.
With examples of correct and incorrect translations, the model can be optimized to produce higher quality translations by distinguishing them,
which can reduce the resources needed for training.
One way to utilize the contrastive outputs is to train a reward model and further fine-tune the language model with the reward model using reinforcement learning, i.e., RLHF <cit.>.
Instead of using such a complex two-stage training process, we directly tune the model using a preference loss:
L_pl=-log(σ(r_θ(c,x,y_0)-r_θ(c,x,y_1))),
where σ(·) is the sigmoid function, and y_0 and y_1 denote the preferred output and comparison output, respectively.
Specifically, r_θ is a linear head that takes the hidden state of the top layer and returns a scalar.
In practice, preference learning is calculated at the token level:
L_pl=-1/N-I∑_i=I^Nlog(σ(r_θ(h_i^(0))-r_θ(h_i^(1)))),
where I is the index of the first position at which the segments of y_0 and y_1 differ, N is the maximum length of the two sequences, and h_i is the hidden state of the i-th token.
The overall loss function for tuning the model is
L=L_lm+λL_pl,
where λ is a coefficient of the preference learning loss. We simply set λ as 0.5 in this paper.
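A corresponding sketch of the token-level preference term and the combined objective (again our own illustration; r_θ is modeled as a linear head over the top-layer hidden states, e.g. a torch.nn.Linear(d_model, 1), and padding/length handling is simplified):

import torch.nn.functional as F

def preference_loss(h_pref, h_comp, reward_head, start):
    # h_pref, h_comp: (N, d) top-layer hidden states of the preferred and the
    # comparison output, padded to a common length N; `start` is the index I
    # of the first position where the two outputs differ.
    r_pref = reward_head(h_pref).squeeze(-1)   # token-level scalar rewards
    r_comp = reward_head(h_comp).squeeze(-1)
    return -F.logsigmoid(r_pref[start:] - r_comp[start:]).mean()

# combined objective, with lambda = 0.5 as in the paper:
# loss = lm_loss(logits, labels, prompt_len) + 0.5 * preference_loss(h_pref, h_comp, reward_head, start)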
§.§ Tuning Strategies
In addition to vanilla fine-tuning all model parameters, parameter efficient fine-tuning methods are specially proposed for large language models such as prefix tuning and LoRA <cit.>.
In this paper, we adopt three different strategies for tuning the models, listed in descending order from the number of fine-tuned parameters.
LoRA: Tuning with Low-rank Matrices.
LoRA <cit.> is a technique that reduces the number of trainable parameters by introducing new low-rank matrices to any module in the model while keeping the original weights frozen.
This results in a significant reduction in storage requirements for large language models, as well as efficient task-switching during deployment without impacting inference latency.
FixEmb: Tuning with Embedding Fixed.
It is likely that the limited number of trainable parameters in LoRA-based tuning can restrict its expressiveness for certain tasks.
To overcome this limitation, a simple solution would be to fine-tune the parameters of the model layers while keeping the embeddings fixed.
By doing so, the model can gain more flexibility in adjusting its performance without compromising the important semantic information captured by the embeddings.
Full: Tuning Full Parameters.
Full parameter tuning has recently been demonstrated to be more effective than LoRA.
The limitation of full parameter fine-tuning is its memory footprint, but this is not severe for 7B models and a small amount of data.
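As an illustration of the FixEmb strategy described above (a sketch only; the parameter-name filter is checkpoint-dependent and given here as an assumed example):

def freeze_embeddings(model):
    # fine-tune all layer parameters but keep the (possibly tied) embeddings fixed
    for name, param in model.named_parameters():
        param.requires_grad = "embed" not in name.lower()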
§ EXPERIMENTS
In this section, we begin by conducting preliminary experiments to investigate the impact of inference strategies and the resilience of our TIM under varying instructions.
Subsequently, we evaluate TIM's performance on the WMT and FLORES-200 dev-test tasks, comprising a total of four language pairs.
For this evaluation, we employ BLOOMZ-7b-mt[https://huggingface.co/bigscience/bloomz-7b1-mt] and LLaMA-7b <cit.> as the backbones.
§.§ Settings
To avoid data leakage as much as possible <cit.>, we use the latest WMT22 test set and FLORES-200 dev-test.
* WMT22 Test Sets. We use the test sets from WMT22 competition[https://www.statmt.org/wmt22/translation-task.html],
which consist of more recent content from diverse domains such as news, social, e-commerce, and conversational domains.
The test sets comprise 1984, 2037, 1875, and 2037 samples for the German-to-English (De⇒En), English-to-German (En⇒De), Chinese-to-English (Zh⇒En), and English-to-Chinese (En⇒Zh) language pairs, respectively.
* FLORES-200 dev-test. We use the dev-test split from the FLORES-200 benchmarks[https://github.com/facebookresearch/flores/blob/main /flores200].
This dataset includes 1,012 sentences extracted from English Wikipedia, covering a broad range of topics and domains.
These sentences have been carefully checked by professional translators into approximately 200 languages.
To ensure a fair and consistent evaluation, we fine-tuned all models for 1 epoch with a batch size of 128, while imposing a maximum text length of 512.
The learning rate is 2e-5 and weight decay is 0.0.
We conducted fine-tuning on eight NVIDIA A100 GPUs, utilizing the Deep-Speed ZeRO stage3 for model parallelism.
The results of the final checkpoint are reported.
For automatic evaluations, we utilize two widely adopted metrics: BLEU <cit.> implemented in SacreBLEU[https://github.com/mjpost/sacrebleu], and COMET[https://github.com/Unbabel/COMET] with Unbabel/wmt22-comet-da.
These metrics employ distinct approaches to evaluate the quality of machine translation.
BLEU is driven by n-gram similarity, while COMET relies on cross-lingual pretrained models.
§.§ Baselines
We leverage the BLOOMZ-7b-mt and LLaMA-7b models as the foundation models and evaluate the following baselines:
Alpaca-(*) is a reproduction of the Alpaca model fine-tuned solely on the alpaca multi-task dataset[https://huggingface.co/datasets/tatsu-lab/alpaca].
MT-(*) is fine-tuned on the human-written validation data from previous WMT competitions, i.e., the newstest2017-2021 of Chinese⇔English and German⇔English, which consist of 45,433 sentence pairs for all four directions.
We use the notation TIM-(*) to refer to LLMs fine-tuned using our proposed TIM approach.
The training data for TIM-(*) includes the WMT translation data as well as the dictionary-guided and error-guided data described in Section <ref>.
Besides, we report the results of WMT22 winners, GPT-4 <cit.>, and NLLB-3.3B <cit.>.
The latter is a multilingual translation model trained on a massive parallel corpus of over 200 languages[The results in <cit.> are directly reported.].
§.§ Pre-Experiments
In this section, we investigate the effect of inference strategies and instructions.
We fine-tune the BLOOMZ-7b-mt with our TIM and conduct evaluations on the WMT22 test sets.
Effect of Inference Strategies.
Beam search has been the standard search algorithm for machine translation, while LLMs usually use sampling for efficiency.
We compare the performance of sampling and beam search, and the two search algorithms are combined with the notes in our dictionary-guided and error-guided data.
Table <ref> presents the experimental results.
First, we observe that instructing the model to generate translations without errors does not result in a significant performance gain, contrary to the conclusion drawn in <cit.>. We speculate that the preference loss function implicitly allows the LLMs to learn to generate error-free translations, making the additional instructions unnecessary.
Secondly, previous studies have shown that introducing alignment information from dictionaries can improve translation performance <cit.>.
Surprisingly, Table <ref> shows that adding alignment notes significantly improves the performance of De⇒En, but harms the performance of other language pairs.
This may be due to the fact that most of the words in the dictionaries we use are common words, or that the wording styles of the dictionaries differ greatly from the reference.
How to better collect and use dictionary information for machine translation is left for future work.
Effect of Instructions.
In human interaction scenarios, instructions provided by users may vary in style and form, and thus it is essential to evaluate the robustness of TIM under different instruction styles.
The performance of our TIM using ten distinct instructions is shown in Figure <ref>.
The result indicates that our TIM achieves consistent performance across all the tested instructions.
§.§ Main Results
Based on the observation in Section <ref>, we use a simple instruction “Translate from {src} to {tgt}.\n{input}” and beam search strategy with a beam size of 4 for all models during inference.
Table <ref> presents the translation performance on the WMT22 test sets and FLORES-200 dev-test.
For the models based on BLOOMZ-7b-mt, we only evaluate them on WMT22 test sets due to the data leakage issue.
We have the following observations:
First, based on LLaMA-7b, the Alpaca-(*) models exhibit some translation ability, particularly in high-resource directions such as De⇒En and En⇒De, due to the small amount of Spanish⇔English translation instruction data that Alpaca possesses.
Introducing a small number of translation sentence pairs (i.e., MT-(*)) in the corresponding language can result in additional improvement.
Secondly, we observe significant performance fluctuations across different language models, training data, and language pairs for (*)-LoRA and (*)-Full.
For example, when the backbone is BLOOMZ-7b-mt, MT-LoRA outperforms MT-Full in most language pairs except for En⇒De.
However, when the backbone is the LLaMa-7b model, MT-LoRA underperforms MT-Full in Zh⇒En and En⇒Zh language pairs.
Our speculation is that LoRA can prevent LLMs from overfitting but is limited in the number of trainable parameters.
In contrast, the experiment result of (*)-FixEmb indicates that fine-tuning with fixed embedding parameters can better leverage the generalization of LLMs and prevent overfitting.
Finally, training LLMs with comparison can further enhance the understanding of the translation task.
Compared to Alpaca-(*) and MT-(*) models, TIM-(*) achieve significantly better performance on both the WMT22 test sets and FLORES-200 dev-test.
Concretely, based on BLOOMZ-7b-mt, TIM-FixEmb achieves notable improvement compared with MT-FixEmb, with 2.93, 3.29, 1.34, 2.40 BLEU scores and 0.55, 0.47, 0.50, 2.80 COMET scores on Zh⇒En, En⇒Zh, De⇒En, En⇒De, respectively.
§ ANALYSIS
§.§ Effect of Model Size
In this section, we present a comparison between TIM and instruction tuning across different model sizes.
Figure <ref> illustrates the consistent improvements achieved by TIM, indicating its generalizability.
Notably, BLOOM-3b does not outperform BLOOM-1b7 with instruction tuning.
On the other hand, as the foundation LLM's size increases, the translation performance of the LLMs after fine-tuning with TIM gradually improves.
In particular, the improvement is more significant when the model size is smaller.
This observation supports our hypothesis that simple instruction tuning with a small amount of training data may not effectively learn task patterns and instead relies heavily on the model's original ability to comprehend instructions.
On the other hand,
training LLMs with comparison encourages them to swiftly identify the task's requirements and patterns and leverage internal cross-lingual knowledge.
§.§ Zero-shot Translation
To evaluate TIM’s performance in translation directions never seen previously, i.e., zero-shot multilingual capability, we conduct experiments on the WMT22 multilingual-to-English translation benchmark which encompasses 4 translation directions:
Czech-to-English (cs⇒en), Japanese-to-English (ja⇒en), Russian-to-English (ru⇒en), and Ukrainian-to-English (uk⇒en).
We compare our method with the following open-sourced models:
ChatGLM-6b[https://huggingface.co/THUDM/chatglm-6b], Alpaca-7b[https://huggingface.co/tatsu-lab/alpaca-7b-wdiff], Vicuna-13b[https://huggingface.co/lmsys/vicuna-13b-delta-v1.1], BayLing-13b <cit.>, and NLLB-3.3b <cit.>.
We report the results of the above models in <cit.>.
Due to the better performance of LLaMA in multilingual-to-English, we report the performance of fine-tuned LLaMA-7b and LLaMA-13b with our TIM, respectively.
As depicted in Figure <ref>, TIM-(*) (i.e., TIM-FixEmb-7b, TIM-LoRA-13b, and TIM-FixEmb-13b) exhibit good zero-shot multilingual capability on these
translation directions.
Compared to ChatGLM-6b, Alpaca-7b, and Vicuna-13B,
TIM-(*) exhibits superior translation ability, highlighting that aligning training languages strengthens the alignment of other languages as a by-product.
Additionally, TIM-(*) outperforms BayLing-13b, which uses additional interactive translation training data, in XX⇒English translations.
TIM-(*) also demonstrates comparable performance to NLLB-3.3B in some language pairs. These results demonstrate that adding carefully constructed translation data, combined with an effective training strategy such as our proposed TIM, can enhance the overall task capability of LLMs.
§.§ Ablation Study
To analyze the impact of different components of TIM, we investigate five variants of TIM-FixEmb taking BLOOMZ-7b-mt as the backbone: ① w/o ℒ_pl, where we removed the ℒ_pl; ② w/o Dict, where we removed the dictionary-guided comparisons in training data; ③ w/o Error, where we removed the error-guided comparisons in training data;
④ w/o OutputCom, where we removed output comparison;
⑤ w/o OutputCom&ℒ_pl, in which we fine-tuned the LLM with translation instructions using the standard instruction tuning method.
We illustrate the BLEU scores on Zh⇒ and En⇒De in Figure <ref>.
The experimental results of ④ and ⑤ demonstrate that LLMs can quickly learn better translation output through preference comparison, even without adding any output comparison data.
Moreover, the results of ①, ②, and ④ show that output comparison is more crucial than preference comparison.
In particular, the removal of error-guided data (i.e., ③) results in a greater performance drop than the removal of dictionary-guided data (i.e., ②).
We hypothesize that this is because the translations without errors in the system's outputs of WMT2020 are relatively similar, causing the “output” of dictionary-guided data to be too similar to create a high-quality comparison.
If translation data with multiple more diverse references were available, we might achieve further improvement.
We leave this for future work.
§.§ MT Metrics Evaluation
The preference scores can reflect the quality of the model output.
To assess whether the strategy can successfully learn a meaningful importance estimation,
we use MTME[https://github.com/google-research/mt-metrics-eval] to evaluate the performance of our preference scores on standard test sets from the WMT22 Metrics Shared Tasks in De⇒En and En⇒De, respectively.
Specifically,
for each pair consisting of a source sentence and the corresponding hypothesis, we wrap them with our Training Prompt, compute the score of each token in the hypothesis, and use the score of the last token as the sentence-level score.
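A sketch of this scoring procedure (our own illustration, assuming a HuggingFace-style model that can return hidden states and the same reward head r_θ used during training; the prompt wrapping is done outside this function):

import torch

@torch.no_grad()
def sentence_score(model, reward_head, prompt_ids, hyp_ids):
    # concatenate the already-tokenized training prompt and hypothesis
    ids = torch.cat([prompt_ids, hyp_ids]).unsqueeze(0)
    hidden = model(ids, output_hidden_states=True).hidden_states[-1][0]
    return reward_head(hidden[-1]).item()      # score of the last hypothesis token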
Table <ref> shows the system-level accuracy (Acc) and Pearson correlations (PCCs).
In particular, our TIM-LLaMA-13b outperforms all the reference-free metrics and achieves the best Pearson correlation on De⇒En.
This demonstrates that the LLM is implicitly a reward model that can be jointly optimized during instruction tuning <cit.>.
§ CONCLUSION
We propose TIM, a training method that instruction tunes open-source large language models for the translation task with the comparison of translations.
Experiments and analyses validate the effectiveness of TIM in terms of translation quality and zero-shot translation ability.
For the reference-free MT metrics evaluation, TIM-LLaMA-13b even outperforms some popular metrics like COMET and BLEURT in De⇒En, showing that our method can learn translation and evaluation jointly.
Future work can explore the use of more diverse references for output comparison, and more advanced preference learning signals.
|
http://arxiv.org/abs/2307.05175v1 | 20230711111232 | Shot noise classification of different conductance plateaus in a quantum point contact at the $ν=2/3$ edge | [
"Sourav Manna",
"Ankur Das",
"Moshe Goldstein"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
[email protected]
Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv 6997801, Israel
[email protected]
Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv 6997801, Israel
The ν = 2/3 filling is the simplest paradigmatic example of a fractional quantum Hall state, which contains counter-propagating edge modes. These modes can be either coherent or equilibrated to different extents,
on top of the possible edge reconstruction.
In the coherent regime,
two distinct renormalization group fixed points have been previously proposed, namely Kane-Fisher-Polchinski and Wang-Meir-Gefen.
In the equilibration regime, different degrees of thermal equilibration
can exist, while charge is fully equilibrated.
Here, we show that this rich variety of models can give rise to
three possible conductance plateaus at
e^2/2h (recently observed in experiments), 5e^2/9h (we predict), and e^2/3h (observed earlier in experiments) in a quantum point contact geometry.
We identify different mechanisms for electrical shot noise
generation at these plateaus, which provides an experimentally accessible venue
for distinguishing among the distinct models.
Shot noise classification of different conductance plateaus in a quantum point contact at the ν=2/3 edge
Moshe Goldstein
August 12, 2023
=========================================================================================================
§ INTRODUCTION
Fractional quantum Hall (FQH) effect <cit.> serves as an important
platform for studying topologically ordered phases
of matter.
This remarkable phenomenon manifests itself in a gapped bulk
with gapless chiral edges, realizing the bulk-boundary correspondence.
These modes
can carry both charge and energy and
fall into two categories: co-propagating or counter-propagating. Interaction and disorder give rise to an intriguing scenario for counter-propagating modes, and the ν=2/3 filling serves as the simplest example.
The earliest proposal by MacDonald argued
these modes to be e and e/3 charge modes <cit.>. However, this edge structure failed to be
consistent with the experimental observations of
two-terminal electrical conductance 2e^2/3h and detection of the
counter-propagating e/3 charge
mode <cit.>.
Eventually, it was proposed that random disorder induced charge tunnelings between the counter-propagating modes play a crucial role <cit.>. It was shown that,
at zero temperature, the system approaches
a disorder dominated
coherent renormalization group (RG)
fixed point, known as the Kane-Fisher-Polchinski (KFP) RG fixed point <cit.>.
At this
fixed point the edge consists of
a 2e/3 charge mode counter-propagating to a neutral mode. This edge structure is
consistent with the experimentally
observed two-terminal electrical conductance 2e^2/3h.
Another experimental device which can provide more information regarding the edge structure is a quantum point contact (QPC), a constriction in two-dimensional electron gas <cit.>. Across the QPC, shot noise can be measured to
determine the charge carried by an edge mode <cit.>.
For ν=2/3, the observation of an e^2/3h QPC conductance plateau at a QPC transmission 1/2
<cit.>, observations consistent with the existence of neutral modes <cit.>,
and a crossover of the effective charge while changing the temperature <cit.>
were found.
These are incompatible with the KFP RG fixed point. To accommodate these experimental
findings,
reconstruction of the MacDonald edge was proposed leading to a
new coherent intermediate RG fixed point, proposed by Wang-Meir-Gefen (WMG) <cit.>. At this
fixed point the edge consists of
two e/3 charge modes counter-propagating to two neutral modes.
The existence of a plateau at a QPC transmission (corresponding to a QPC filling) and shot noise therein can also be a consequence of
equilibration among the chiral edge modes <cit.>
at finite temperatures.
Charge and heat equilibrations can occur independently.
Recently, different experiments
<cit.> have confirmed that the thermal equilibration length
is an order of magnitude larger than the charge equilibration length.
These findings have complicated the study of the steady state
of edges since different regimes of thermal equilibration
can occur while charge is fully equilibrated.
In this paper we show in detail (Ref. <cit.> contains the non-technical insight) that this zoo of models
can give rise to three distinct QPC conductance plateaus
at e^2/2h, 5e^2/9h, and e^2/3h. We note that the e^2/2h plateau has been reported
recently <cit.> and the
e^2/3h plateau was discovered earlier <cit.>
in experiments. We predict the possible appearance of
a 5e^2/9h plateau. We identify different scenarios for each plateau, as well as different mechanisms which may give rise to electrical shot noise at these plateaus. We show how shot noise can be used to experimentally discern between the different
models. We define the correlations in <ref>.
We explain the emergence of e^2/2h, 5e^2/9h, and e^2/3h
QPC conductance plateaus and calculate shot noises there
in <ref>, <ref>, and <ref>,
respectively. We provide a summary with an outlook in <ref>.
§ DEFINITIONS
We define the time dependent current-current correlations (CCC) δ^2 I_ij(t̅) as the average of the symmetric combination of the product of two current operators <cit.>
δ^2 I_ij(t̅) = ⟨{Δ I_i(t̅), Δ I_j(0)}⟩
= ⟨Δ I_i(t̅) Δ I_j(0) ⟩ + ⟨Δ I_j(0) Δ I_i(t̅) ⟩,
where for i=j, δ^2 I_ii(t̅) is referred to as an autocorrelation, and for i ≠ j, δ^2 I_ij(t̅) is referred to as a crosscorrelation. Here, Δ I_i(t̅) = I_i(t̅) - ⟨ I_i ⟩ is the current fluctuation at the ith drain.
We note that the product of two current operators evaluated at different times is not Hermitian, and hence we define δ^2 I_ij(t̅) using an anti-commutator.
In the frequency domain we write
δ^2 I_ij (ω) = ∫_-∞^∞δ^2 I_ij(t̅) e^i w t̅ dt̅,
where ω is the frequency. In the dc limit or zero frequency limit we have
δ^2 I_ij (ω→ 0) =2 δ^2 Q_ij/τ^2,
with charge at the ith drain being Q_i=lim_τ→∞∫_0^τI_i(t)dt, τ is the time,
and δ^2 Q_ij is the
correlation in charge fluctuation. With the source current denoted by I and transmission coefficient denoted by t, the Fano factor is defined as
F = δ^2 I_ij (ω→ 0)/2 (e/τ) I t (1-t)=δ^2 Q_ij/ e τ I t (1-t).
§ THE NOISY E2H PLATEAU
Here, we show how the e^2/2h QPC conductance plateau
(experimentally discovered in Ref. <cit.>
following an earlier theoretical work in Ref. <cit.>)
can appear
both in the coherent and equilibrated regimes. We compute
the shot noise at this plateau and show that different inequalities
hold among the autocorrelations and crosscorrelation
for different models.
§.§ Coherent scenario
We start with the MacDonald edge containing the
e/3 and e
charge modes (counter-propagating), while going
towards the edge from bulk
<cit.> (<ref>).
Let us label the charge e mode as mode “1" and the charge e/3 mode as mode “2".
In each region
between a contact and QPC the e and e/3 modes
are renormalized around the Kane-Fischer-Polchinski (KFP)
renormalization group (RG) fixed point leading to
counter-propagating (2/3 + ϵ)e and ϵ e charge modes, where ϵ > 0 (KFP region) <cit.>. The ϵ e mode becomes
neutral at the KFP RG fixed point (ϵ = 0).
At the QPC the e/3 mode is fully backscattered and the e mode is fully transmitted at a
plateau having transmission t, leading to G_D_1e^2/h QPC conductance plateau, where G_D_1=t I τ/e and I is the source current.
In each KFP region, we have stochastic tunneling of wavepackets. We quantify these random processes by the density kernel matrices S_1,S'_1,S”_1,S”'_1 <cit.>. Explicitly, we write
S_1=
[ ⟨ T_11⟩ ⟨ R_21⟩; ⟨ R_12⟩ ⟨ T_22⟩ ],S_1'=
[ ⟨ T'_11⟩ ⟨ R'_21⟩; ⟨ R'_12⟩ ⟨ T'_22⟩ ],
S_1”=
[ ⟨ T”_11⟩ ⟨ R”_21⟩; ⟨ R”_12⟩ ⟨ T”_22⟩ ],S_1”'=
[ ⟨ T”'_11⟩ ⟨ R”'_21⟩; ⟨ R”'_12⟩ ⟨ T”'_22⟩ ],
where T,T',T”,T”',R,R',R”,R”' are Bernoulli random numbers ∈{0,1} which account for the stochastic tunneling processes in the KFP regions. We denote by ⟨⋯⟩ the first moment (average value) of
the distribution of these stochastic variables. ρ_1^out and ρ_2^out denote the density wavepackets in the outgoing e and e/3 modes, respectively, and ρ_1^in and ρ_2^in denote wavepackets in the incoming e and e/3 modes. We may write
ρ_i^out = ∑_j (S_1)_jiρ_j^in
and similarly for S_1^', S_1^'', S_1^'''.
The elements of a density kernel matrix obey current conservation relations. We relate them to the elements of a conductance matrix (<ref>). For any of the QPC arms, if I_1 and I_2 are the currents in the outgoing
e and e/3 modes, respectively, and V_1 and V_2 are the voltages characterising the incoming e and e/3 modes, we may write
I_i = ∑_j G_ji V_j,
where e=1, h=1, and G is the conductance matrix,
G=
[ G_11 G_21; G_12 G_22 ]
=
[ 2/3+ϵ 1/3-ϵ; 1/3-ϵ ϵ ],
where (2/3+ϵ)e and ϵ e are the renormalized charge modes.
At the KFP RG fixed point ϵ=0; ϵ thus quantifies the deviation from this point.
Note that the S_1 matrix is defined in the density basis. To recover the conductance matrix we need to multiply the elements of S_1 by the appropriate charges. Hence we have
⟨ T_11⟩ = G_11, ⟨ R_21⟩/3 = G_21,
⟨ R_12⟩ = G_12, ⟨ T_22⟩/3 = G_22.
We now consider a wavepacket of charge e emanating from the source S. The density kernel matrix provides the net probability of reflection and transmission of the wavepacket while encountering (coming in and out) the KFP region. Hence, we have
S_1=
[ ⟨ T_11⟩ ⟨ R_21⟩; ⟨ R_12⟩ ⟨ T_22⟩ ]
=
[ 2/3+ϵ 1-3ϵ; 1/3-ϵ 3ϵ ].
We can write similar relations for
the matrix elements of S'_1, S”_1, S”'_1.
We calculate the total charge reaching the different contacts via multiple reflections and transmissions, and sum up the resulting infinite series.
To organize this infinite series we need three
different pieces:
* A factor accounting for the first tunnelling while entering the QPC.
* The shortest route inside the QPC.
* Different scatterers give rise to the contribution due to multiple reflections. This factor (R'_12)_i (R”_21)_i (R”'_12)_i (R_21)_i+1, i ∈ [1,2,…], will remain the same for all the contacts.
For the charge Q_1 entering at drain D_1 the first contribution is (T_11)_1,
and the second contribution is (T'_11)_n, n∈[1,2,…]. The number outside the parentheses denotes how many times the wavepacket visited the same scatterer. Thus, using this we can write Q_1 as
Q_1 = e [ (T_11)_1 (T'_11)_1 + (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1 (R_21)_2 (T'_11)_2
+ (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1 (R_21)_2 (R'_12)_2 (R”_21)_2 (R”'_12)_2 (R_21)_3 (T'_11)_3
+ (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1 (R_21)_2 (R'_12)_2 (R”_21)_2 (R”'_12)_2 (R_21)_3 (R'_12)_3 (R”_21)_3 (R”'_12)_3 (R_21)_4 (T'_11)_4 + …].
We note the following rules,
(T_11)_i + (R_12)_i = 1, (T_11)_i (R_12)_i = 0,
(T_22)_i + (R_21)_i = 1, (T_22)_i (R_21)_i = 0,
and for different “time” indices i any T and R are uncorrelated. Similar relations hold for
T',T”,T”',R',R”,R”'. We also note that transmission and
reflection coefficients with different “prime” indices are uncorrelated as
those describe different scatterers. We assume that the scatterers are identical and write
⟨ T_11⟩ = ⟨ T'_11⟩ = ⟨ T”_11⟩ = ⟨ T”'_11⟩= T,
⟨ R_12⟩ = ⟨ R'_12⟩ = ⟨ R”_12⟩ = ⟨ R”'_12⟩= R̅_12,
⟨ R_21⟩ = ⟨ R'_21⟩ = ⟨ R”_21⟩ = ⟨ R”'_21⟩= R̅_21,
⟨ T^2_ii⟩=⟨ T_ii⟩,
⟨ R^2_ij⟩=⟨ R_ij⟩,
⟨ T_ii R_ij⟩=0,⟨ T_ii R_ji⟩=⟨ T_ii⟩⟨ R_ji⟩,
and similarly for S_1^', S_1^'', S_1^'''.
We compute the source current by calculating the charge Q_S emanating from S.
This will be important when we are not at the exact KFP RG fixed point, i.e.,
ϵ≠ 0. Then the contributions can be written as an infinite series and two finite
series terms. The infinite series arises from the first wavepacket
from the source and the first
reflection (R_12)_1 from the first scatterer. In the infinite series the
two contributions, due to entry into and exit from the QPC region, are,
respectively,
(T_11)_1 and
(R'_12)_i (R”_21)_i (R”'_12)_i (T_22)_i+1, i ∈ [1,2,…].
Thus we can write,
Q_S = e [1 - { (R_12)_1 + (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1 (T_22)_2 + (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1 (R_21)_2 (R'_12)_2 (R”_21)_2 (R”'_12)_2 (T_22)_3 + …}].
Hence the source current is
I = ⟨ Q_S ⟩/τ = e/τ[ 1 - R̅_12 - T ⟨ T_22⟩R̅_12^2 R̅_21/1 - R̅_12^2 R̅_21^2],
where τ is the time. We use T=2/3+ϵ, R̅_12=1/3-ϵ, R̅_21=1-3ϵ and ⟨ T_22⟩=3ϵ to find
τ I = e [ 1 - R̅_12 - T ⟨ T_22⟩R̅_12^2 R̅_21/1 - R̅_12^2 R̅_21^2]
≈ e[ 2/3 + 0.25ϵ + 2.25ϵ^2 ].
The transmission coefficient is then
t = ⟨ Q_1 ⟩/τ I = e T^2/1-R̅_12^2R̅_21^21/e[ 1 - R̅_12 - T ⟨ T_22⟩R̅_12^2 R̅_21/1 - R̅_12^2 R̅_21^2]^-1
≈3/4 + 0.281 ϵ + 2.21 ϵ^2
and the QPC conductance plateau is at
G_D_1≈ (1/2 + 0.75 ϵ + 3.375 ϵ^2).
For the autocorrelation δ^2 Q_1=⟨ Q^2_1 ⟩- ⟨ Q_1 ⟩^2 = ⟨ Q_1 ⟩(1-⟨ Q_1 ⟩),
since ⟨ Q_1^2 ⟩ = ⟨ Q_1 ⟩, at D_1 we obtain
δ^2 Q_1 =e^2
T^2 (1-R̅_12^2R̅_21^2 - T^2)/(1-R̅_12^2R̅_21^2)^2.
Similarly, for the charge Q_2 entering at drain D_2 the first contribution will be (T_11)_1 and
second contribution will be (R'_12)_n (R”_21)_n (T”'_11)_n, n ∈ [1,2,…]. The third contribution for roaming inside the QPC region remains the same, leading
to
Q_2 = e [ (T_11)_1 (R'_12)_1 (R”_21)_1 (T”'_11)_1 + (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1(R_21)_2(R'_12)_2(R”_21)_2 (T”'_11)_2
+ (T_11)_1 (R'_12)_1 (R”_21)_1
(R”'_12)_1(R_21)_2(R'_12)_2(R”_21)_2
(R”'_12)_2(R_21)_3(R'_12)_3(R”_21)_3 (T”'_11)_3
+ (T_11)_1 (R'_12)_1 (R”_21)_1 (R”'_12)_1(R_21)_2(R'_12)_2(R”_21)_2
(R”'_12)_2(R_21)_3(R'_12)_3(R”_21)_3
(R”'_12)_3(R_21)_4(R'_12)_4(R”_21)_4 (T”'_11)_4 + …].
For the autocorrelation δ^2 Q_2=⟨ Q^2_2 ⟩- ⟨ Q_2 ⟩^2 = ⟨ Q_2 ⟩(1-⟨ Q_2 ⟩),
since ⟨ Q_2^2 ⟩ = ⟨ Q_2 ⟩, at D_2 we obtain
δ^2 Q_2 =e^2
(T^2R̅_12R̅_21) (1-R̅_12^2R̅_21^2 - T^2R̅_12R̅_21)/(1-R̅_12^2R̅_21^2)^2.
In a similar manner the crosscorrelation δ^2 Q_c= ⟨ Q_1 Q_2 ⟩ - ⟨ Q_1 ⟩⟨ Q_2 ⟩ = - ⟨ Q_1 ⟩⟨ Q_2 ⟩
(since ⟨ Q_1 Q_2 ⟩ = 0) becomes
δ^2 Q_c
=-e^2T^4 R̅_12R̅_21/(1-R̅_12^2R̅_21^2)^2.
Plugging in T=2/3+ϵ, R̅_12=1/3-ϵ, R̅_21=1-3ϵ, and ⟨ T_22⟩=3ϵ, we find
δ^2 Q_1 ≈ (0.25-0.562ϵ^2) e^2,
δ^2 Q_2 ≈ (0.13888-0.5ϵ+0.1875ϵ^2) e^2,
δ^2 Q_c ≈ (-0.0833+0.25ϵ-0.5625ϵ^2) e^2.
The Fano factors are
F_i = δ^2 Q_i/e τ I t (1-t), i ∈{1,2,c},
and therefore we find them
to be
F_1 ≈ 2 - 0.75ϵ + 3.375ϵ^2,
F_2 ≈ 1.111 - 4.416ϵ+7.375ϵ^2,
F_c ≈ -0.666 +2.25ϵ - 7.875ϵ^2.
The KFP RG fixed point results can be approached
by taking ϵ=0.
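The series above can also be checked by a direct Monte Carlo simulation of the stochastic wavepacket dynamics. The following Python sketch is our own illustration: each charge-e packet emitted from S is scattered sequentially by the four KFP regions, packets lost to grounded contacts are simply discarded (consistent with the series for Q_S), and the per-packet statistics reproduce t≈3/4, F_1≈2, F_2≈1.11, and F_c≈-0.67 at the KFP fixed point ϵ=0.

import numpy as np

def simulate(eps=0.0, n_packets=200_000, seed=0):
    # Monte Carlo of charge-e wavepackets emitted from S and scattered by the four
    # KFP regions around the QPC (coherent e^2/2h scenario). Each packet ends up in
    # D_1, in D_2, back in the source S, or in a grounded contact (discarded).
    rng = np.random.default_rng(seed)
    T, R21 = 2 / 3 + eps, 1 - 3 * eps         # <T_11> and <R_21> of identical scatterers
    q1 = np.zeros(n_packets)
    q2 = np.zeros(n_packets)
    qs = np.ones(n_packets)                   # charge leaving the source per packet
    for i in range(n_packets):
        if rng.random() >= T:                 # reflected back to S at the first region
            qs[i] = 0.0
            continue
        while True:
            if rng.random() < T:              # region S'_1: transmitted into drain D_1
                q1[i] = 1.0
                break
            if rng.random() >= R21:           # region S''_1: lost to a grounded contact
                break
            if rng.random() < T:              # region S'''_1: transmitted into drain D_2
                q2[i] = 1.0
                break
            if rng.random() >= R21:           # back at S_1: returned to the source
                qs[i] = 0.0
                break
            # otherwise the packet keeps circulating inside the QPC region
    Itau = qs.mean()                          # tau*I/e
    t = q1.mean() / Itau                      # transmission coefficient
    denom = Itau * t * (1 - t)
    return t, q1.var() / denom, q2.var() / denom, -(q1.mean() * q2.mean()) / denom

t, F1, F2, Fc = simulate(eps=0.0)
print(f"t={t:.3f}  F1={F1:.2f}  F2={F2:.2f}  Fc={Fc:.2f}")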
§.§ Equilibration scenario
We derive the general expressions for the CCC (shot noise) in a QPC for the
bulk filling ν and QPC filling ν_i when ν<ν_i (<ref>). We assume that the charge is fully equilibrated, hence charge transport is ballistic, moving “downstream" along each segment of the setup.
We call the direction opposite to charge flow (“upstream") antiballistic.
Thereafter, we compute the values of CCC for
specific choices of {ν,ν_i} and for different
thermal equilibration regimes.
We assume that there is
no bulk-leakage
<cit.>.
We consult <ref> and follow Refs. <cit.> to write
Δ I_S+Δ I_l=Δ I_u,
Δ I_u=Δ I_1+Δ I_r,
Δ I_G+Δ I_r=Δ I_d,
Δ I_d=Δ I_2+Δ I_l,
where Δ I_i, i ∈{S, G, u, d, r, l, 1, 2} are the current fluctuations. We also write
Δ I_1 = νe^2/hΔ V_N + Δ I_1^th,
Δ I_r = (ν_i-ν) e^2/hΔ V_N + Δ I_r^th,
Δ I_2 = νe^2/hΔ V_M + Δ I_2^th,
Δ I_l = (ν_i-ν) e^2/hΔ V_M + Δ I_l^th,
where Δ V_i, i ∈{M,N} are the voltage fluctuations and
Δ I_i^th, i ∈{1,2,r,l}
are the thermal fluctuations. We find
Δ I_1 = 1/(ν-2ν_i)[ (ν-ν_i)
(Δ I_G + Δ I_1^th - Δ I_2^th)
- ν (Δ I_l^th - Δ I_r^th) -ν_i Δ I_S ],
Δ I_2 = 1/(ν-2ν_i)[ (ν-ν_i)
(Δ I_S - Δ I_1^th + Δ I_2^th)
+ ν (Δ I_l^th - Δ I_r^th) -ν_i Δ I_G ].
We use the local Johnson-Nyquist relations for thermal noise,
⟨ (Δ I_l^th)^2 ⟩ = 2 e^2/h (ν_i-ν) k_B T_M,
⟨ (Δ I_1^th)^2 ⟩ = 2 e^2/hν k_B T_N,
⟨ (Δ I_r^th)^2 ⟩ = 2 e^2/h
(ν_i-ν)k_B T_N,
⟨ (Δ I_2^th)^2 ⟩ = 2 e^2/hν k_B T_M,
⟨ (Δ I_i^thΔ I_j^th) ⟩ = 0, for i ≠ j and i,j ∈{1,2,l,r},
where k_B is the Boltzmann constant to write
δ^2 I_1 =2(e^2/h)νν_i (ν_i-ν)/(ν-2ν_i)^2k_B (T_M+T_N)
+ 1/(ν-2ν_i)^2[ν_i^2 ⟨ (Δ I_S)^2 ⟩ + (ν-ν_i )^2⟨ (Δ I_G)^2 ⟩],
δ^2 I_2 =2(e^2/h)νν_i (ν_i-ν)/(ν-2ν_i)^2k_B (T_M+T_N)
+ 1/(ν-2ν_i)^2[ν_i^2 ⟨ (Δ I_G)^2 ⟩ + (ν-ν_i )^2⟨ (Δ I_S)^2 ⟩],
and
δ^2 I_c =-2(e^2/h)νν_i (ν_i-ν)/(ν-2ν_i)^2k_B (T_M+T_N)
+ ν_i(ν_i-ν)/(ν-2ν_i)^2[⟨ (Δ I_S)^2 ⟩ + ⟨ (Δ I_G)^2 ⟩],
where T_M, T_N are, respectively, the temperatures at the noise spots M and N which are found by solving
self-consistent equilibration equations and
considering energy conservations <cit.>. We note that the
dissipated powers at the hot spots take the form
P_H_1 = P_H_2 = e^2 V_dc^2/hν(ν_i-ν)/2ν_i.
The contributions
⟨ (Δ I_G)^2 ⟩ = ⟨ (Δ I_S)^2 ⟩
at the noise spots O and P are computed by evaluating an integral as shown in Ref. <cit.>.
We refer to <ref> and consider {ν,ν_i } = { 2/3,1 }, { 2/3(R),1 }, and { 2/3(R), 1(R) },
where 2/3(R) refers to the reconstructed MacDonald edge <cit.> leading to the filling factor discontinuity δν = [-1/3,+1, -1/3, +1/3] and
1(R) denotes edge reconstruction in QPC leading to the QPC filling factor discontinuity
(from bulk to edge) δν_i = [+1, -1/3, +1/3] <cit.>.
As charge is fully equilibrated in each segment of the QPC set up (<ref>), we have
I_1 = e^2 V_dc/h×2/3×2/3×∑_i=0^∞( 1/3^2)^i=e^2 V_dc/2h
for each of the {ν,ν_i } choices here, leading to the
transmission t=3/4 and G_D_1=1/2.
We consider three different regimes of thermal equilibration: when each segment of the QPC geometry is thermally unequilibrated, leading to
L_Q≪ L_A≪ l_eq^th (no thermal equilibration);
when only the QPC segment of the geometry is thermally unequilibrated and the other segments are thermally equilibrated, leading to
L_Q≪ l_eq^th≪ L_A (mixed thermal equilibration);
and when each segment of the QPC geometry is thermally equilibrated, leading to
l_eq^th≪ L_Q≪ L_A (full thermal equilibration).
Here, l_eq^th
is the thermal equilibration length.
For no thermal equilibration, we have only ballistic and
antiballistic heat transports in any segment of the QPC set up leading to constant CCC. For
{ν,ν_i } = { 2/3,1 } we find
T_M=T_N=√(4) eV_dc/√(15)π k_B,
⟨ (Δ I_S)^2 ⟩ = ⟨ (Δ I_G)^2 ⟩≈ 0.063 e^3 V_dc/h,
leading to F_1=F_2≈ 0.49, F_c ≈ -0.23.
For
{ν,ν_i } = { 2/3(R),1 } we find
T_M=T_N= eV_dc/4 π k_B,
⟨ (Δ I_S)^2 ⟩ = ⟨ (Δ I_G)^2 ⟩≈ 0.064 e^3 V_dc/h,
leading to F_1=F_2≈ 0.31, F_c ≈ -0.06.
For
{ν,ν_i } = { 2/3(R),1(R) } we find
T_M=T_N=√(5) eV_dc/√(36)π k_B,
⟨ (Δ I_S)^2 ⟩ = ⟨ (Δ I_G)^2 ⟩≈ 0.087 e^3 V_dc/h,
leading to F_1=F_2≈ 0.45 and F_c ≈ -0.1.
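The quoted Fano factors in the unequilibrated regime follow directly from the expressions for δ^2 I_1 and δ^2 I_c given above. As a cross-check, the short Python sketch below evaluates them for the three cases, assuming the normalization F_i = δ^2 I_i/(2eI t(1-t)) with the source current I = ν e^2V_dc/h; this normalization is our assumption, chosen because it reproduces the quoted values to within the rounding of the tabulated inputs.

import numpy as np

# CCC for nu < nu_i with full charge equilibration and no thermal equilibration.
# Units: noise in e^3 V_dc / h, temperatures k_B T in e V_dc.
nu, nu_i = 2.0/3.0, 1.0
t = 0.75                         # QPC transmission quoted above
I = nu                           # assumed source current in units of e^2 V_dc / h

cases = {                        # label: (k_B T_M = k_B T_N, <dI_S^2> = <dI_G^2>)
    "{2/3, 1}":       (2.0/(np.sqrt(15.0)*np.pi), 0.063),
    "{2/3(R), 1}":    (1.0/(4.0*np.pi),           0.064),
    "{2/3(R), 1(R)}": (np.sqrt(5.0)/(6.0*np.pi),  0.087),
}

den = (nu - 2.0*nu_i)**2
for label, (kT, dS2) in cases.items():
    thermal = 2.0*nu*nu_i*(nu_i - nu)*(2.0*kT)/den        # k_B(T_M+T_N) contribution
    d2I1 = thermal + (nu_i**2 + (nu - nu_i)**2)*dS2/den
    d2Ic = -thermal + nu_i*(nu_i - nu)*(2.0*dS2)/den
    norm = 2.0*I*t*(1.0 - t)                              # assumed 2 e I t(1-t) normalization
    print(f"{label}: F1 = F2 = {d2I1/norm:.2f}, Fc = {d2Ic/norm:.2f}")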
Similarly, for mixed and full thermal equilibration, we have diffusive
heat transport in the outer segment and
ballistic and
antiballistic heat transports in the line and
the upper segments of the QPC set up leading to length
dependent CCC. For each choice of {ν,ν_i }, we find
F_1=F_2≈ 0.18+0.34√(L_A/l_eq^th),
F_c≈ 0.1-0.34√(L_A/l_eq^th).
§ THE NOISY 5e^2/9h PLATEAU
Here, we show how the 5e^2/9h QPC conductance plateau
can appear
only in the coherent regime and compute
shot noise at this plateau.
Here, the edge structure contains the
counter-propagating e/3 (“innermost"), e, e/3 and e/3 (“outermost")
charge modes as we go from the bulk towards the edge; this is the reconstructed MacDonald edge <cit.> (<ref>).
We label the e/3, e, e/3, e/3 charge modes (from bulk to edge in the contacts) as “2", “1", “3", “4", respectively.
At the QPC, we consider the case when the outermost e/3 mode is fully transmitted, the innermost e/3 mode is fully backscattered, and
the remaining modes are renormalized
around the KFP RG fixed point <cit.> at a
plateau having transmission t,
leading to G_D_1e^2/h QPC conductance plateau, where G_D_1=t I τ/e and I is the source current.
The (2/3 + ϵ_3)e and ϵ_3 e modes appear as the
renormalized charge modes (counter-propagating, with ϵ_3 > 0); this region can be called the KFP region.
In each region between a contact and the QPC, the remaining modes are renormalized around the WMG RG fixed point <cit.>;
we call these the WMG regions, each containing the renormalized charge modes
ϵ_1 e, ϵ_2 e and (1/3 + ϵ_1 + ϵ_2)e (counter-propagating), with ϵ_1 > 0, ϵ_2 > 0.
The ϵ_1 e, ϵ_2 e, ϵ_3 e modes become neutral at the RG fixed points, where ϵ_1 = ϵ_2 = ϵ_3 = 0.
The random processes that transmit/reflect a wavepacket across this region are quantified by the density kernel matrices S_1,S'_1,S”_1,S”'_1
<cit.>. Following a similar procedure to the one presented in <ref>, we write down the conductance matrices
G_1=
[ ⟨ G_11⟩ ⟨ G_21⟩ ⟨ G_31⟩; ⟨ G_12⟩ ⟨ G_22⟩ ⟨ G_32⟩; ⟨ G_13⟩ ⟨ G_23⟩ ⟨ G_33⟩ ]=
[ 1/3 + ϵ_1 + ϵ_2 1/3-(ϵ_1+ϵ_2) ϵ_3-(ϵ_1+ϵ_2); 1/3-ϵ_1/2-ϵ_2/2 (ϵ_1+ϵ_2)/2 (ϵ_1+ϵ_2)/2; 1/3-ϵ_1/2-ϵ_2/2 (ϵ_1+ϵ_2)/2 (ϵ_1+ϵ_2)/2 ]=G”_1,
G'_1=
[ ⟨ G'_11⟩ ⟨ G'_21⟩ ⟨ G'_31⟩; ⟨ G'_12⟩ ⟨ G'_22⟩ ⟨ G'_32⟩; ⟨ G'_13⟩ ⟨ G'_23⟩ ⟨ G'_33⟩ ]=
[ 1/3+ϵ_1+ϵ_2 1/3-(ϵ_1+ϵ_2) 1/3-(ϵ_1+ϵ_2); 1/3+ϵ_3-ϵ_1-ϵ_2/3(1/3+ϵ_3) (ϵ_1+ϵ_2)/3(1/3+ϵ_3) (ϵ_1+ϵ_2)/3(1/3+ϵ_3); ϵ_3(1/3+ϵ_3-ϵ_1-ϵ_2)/(1/3+ϵ_3) ϵ_3(ϵ_1+ϵ_2)/(1/3+ϵ_3) ϵ_3(ϵ_1+ϵ_2)/(1/3+ϵ_3) ]=G”'_1,
and the density kernel matrices
S_1=
[ ⟨ T_11⟩ ⟨ R_21⟩ ⟨ R_31⟩; ⟨ R_12⟩ ⟨ T_22⟩ ⟨ R_32⟩; ⟨ R_13⟩ ⟨ R_23⟩ ⟨ T_33⟩ ]=
[ 1/3 + ϵ_1 + ϵ_2 1-3(ϵ_1+ϵ_2) 1-(ϵ_1+ϵ_2)/ϵ_3; 1/3-ϵ_1/2-ϵ_2/2 3(ϵ_1+ϵ_2)/2 (ϵ_1+ϵ_2)/2ϵ_3; 1/3-ϵ_1/2-ϵ_2/2 3(ϵ_1+ϵ_2)/2 (ϵ_1+ϵ_2)/2ϵ_3 ]=S”_1,
S'_1=
[ ⟨ T'_11⟩ ⟨ R'_21⟩ ⟨ R'_31⟩; ⟨ R'_12⟩ ⟨ T'_22⟩ ⟨ R'_32⟩; ⟨ R'_13⟩ ⟨ R'_23⟩ ⟨ T'_33⟩ ]=
[ 1/3+ϵ_1+ϵ_2/2/3+ϵ_3 1-3(ϵ_1+ϵ_2) 1-3(ϵ_1+ϵ_2); 1/3+ϵ_3-ϵ_1-ϵ_2/3(1/3+ϵ_3)(2/3+ϵ_3) (ϵ_1+ϵ_2)/(1/3+ϵ_3) (ϵ_1+ϵ_2)/(1/3+ϵ_3); ϵ_3(1/3+ϵ_3-ϵ_1-ϵ_2)/(1/3+ϵ_3)(2/3+ϵ_3) 3ϵ_3(ϵ_1+ϵ_2)/(1/3+ϵ_3) 3ϵ_3(ϵ_1+ϵ_2)/(1/3+ϵ_3) ]=S”'_1.
At the RG fixed points we have ϵ_1=ϵ_2=ϵ_3=0.
Let us consider a wavepacket of charge e emanating from the source S. The density kernel matrix provides the net probability of reflection and
transmission of the wavepacket while encountering (entering and leaving) the WMG region. To calculate the average values of the total charge reaching the different contacts, we need to consider an infinite
series of reflections and transmissions. We organize this infinite series into three
different pieces,
* The first tunnelling factor,
* the shortest path factor, and
* the multiple-reflections factor. The multiple-reflections factor can be divided into three types of contributions:
* ⟨ R_12' ⟩⟨ R_21”⟩⟨ R_12”' ⟩⟨ R_21⟩,
which arises due to multiple reflections among all the contacts.
* ⟨ R_13' ⟩⟨ R_31⟩, which arises due to multiple reflections between S and D_1.
* ⟨ R_13”' ⟩⟨ R_31”⟩,
which arises due to multiple reflections between G and D_2.
We note that the total contribution from <ref> is the sum over all
possible such combinations. This contribution is common for
all the contacts, and evaluates to
X= ∑_i,j,k P_ijk⟨ A_i ⟩⟨ B_j ⟩⟨ C_k ⟩,
where
P_ijk = (i+j+k)!/i!j!k!,
provides the number of combinations one can make out of A_i, B_j, C_k, where
⟨ A_i ⟩ = [⟨ R_12' ⟩⟨ R_21”⟩⟨ R_12”' ⟩⟨ R_21⟩]^i ≡ a^i,
⟨ B_j⟩=[⟨ R_13' ⟩⟨ R_31⟩]^j ≡ b^j,
⟨ C_k⟩=[⟨ R_13”' ⟩⟨ R_31”⟩]^k ≡ c^k.
The combinatorial factor P_ijk appears since on average all the paths are equivalent and interchangeable. The total number of possibilities is (i+j+k)!, and we have to divide it
by i!j!k!, as a,b,c are indistinguishable.
We thus obtain
X = 1/(1-a-b-c).
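Since X follows from the multinomial theorem, the closed form can be checked numerically. The following Python sketch truncates the triple sum at a finite order and compares it with 1/(1-a-b-c) for illustrative (arbitrary) reflection products a, b, c.

import math

def X_series(a, b, c, order=60):
    """Truncated triple sum of P_ijk a^i b^j c^k with P_ijk = (i+j+k)!/(i! j! k!)."""
    total = 0.0
    for i in range(order):
        for j in range(order - i):
            for k in range(order - i - j):
                P = math.factorial(i + j + k) // (math.factorial(i)*math.factorial(j)*math.factorial(k))
                total += P * a**i * b**j * c**k
    return total

a, b, c = 0.12, 0.08, 0.05        # illustrative values with a + b + c < 1
print(X_series(a, b, c), 1.0/(1.0 - a - b - c))   # the two numbers agree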
We find the source current to be
τ I=⟨ Q_S ⟩
=e[ 1- ⟨ R_12⟩ -⟨ R_13⟩ -⟨ T_11⟩ d/1-a-b-c] + e/3,
where d=(⟨ T_22⟩⟨ R_12'⟩⟨ R_21”⟩⟨ R_12”'⟩+⟨ T_33⟩⟨ R_13'⟩). The transmission coefficient is
t = (⟨ Q_1 ⟩ + e/3)/(τ I).
Plugging in all the above results, the autocorrelations and crosscorrelation are
δ^2 Q_1=⟨ Q^2_1 ⟩- ⟨ Q_1 ⟩^2=⟨ Q_1 ⟩(1-⟨ Q_1 ⟩),
δ^2 Q_2=⟨ Q^2_2 ⟩- ⟨ Q_2 ⟩^2=⟨ Q_2 ⟩(1-⟨ Q_2 ⟩),
δ^2 Q_c=-⟨ Q_1 ⟩⟨ Q_2 ⟩,
where
⟨ Q_1 ⟩ = ⟨ T_11⟩⟨ T_11' ⟩/1-a-b-c,
⟨ Q_2 ⟩ = ⟨ T_11⟩⟨ T_11”'⟩⟨ R_12'⟩⟨ R_21”⟩/1-a-b-c,
where we have used the fact that for ⟨ Q_1 ⟩ the first contribution is ⟨ T_11⟩ and second contribution is ⟨ T_11' ⟩, while
for ⟨ Q_2 ⟩ the first contribution is ⟨ T_11⟩ and second contribution is ⟨ T_11”' ⟩⟨ R_12'⟩⟨ R_21”⟩.
To first order in ϵ_1,ϵ_2,ϵ_3 we explicitly find,
⟨ Q_S ⟩≈[2/3 + 0.55(ϵ_1+ϵ_2)]e,
t ≈ 5/6 + 0.5 ϵ_3 - 1.36(ϵ_1+ϵ_2),
G_D_1≈ [5/9 + 0.33 ϵ_3 - 0.44(ϵ_1+ϵ_2)],
δ^2 Q_1 ≈ 0.1728 + 0.185 ϵ_3 - 0.246(ϵ_1+ϵ_2),
δ^2 Q_2 ≈ 0.0987 - 0.691(ϵ_1+ϵ_2),
δ^2 Q_c ≈ -0.0246 - 0.037 ϵ_3 + 0.246(ϵ_1+ϵ_2),
hence the Fano factors F_i = δ^2 Q_i/[e τ I t (1-t)], i ∈{1,2,c}, are
F_1≈ 1.866 + 6.48 ϵ_3 - 16.41 (ϵ_1+ϵ_2),
F_2≈ 1.066 + 2.56 ϵ_3 - 15.32 (ϵ_1+ϵ_2),
F_c≈ -0.266 - 1.04 ϵ_3 + 4.63 (ϵ_1+ϵ_2).
The RG fixed points correspond to ϵ_1=ϵ_2=ϵ_3=0.
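For orientation, the leading-order expressions above can be evaluated directly. The brief Python sketch below does so at the RG fixed point ϵ_1=ϵ_2=ϵ_3=0 and confirms that the quoted Fano factors follow from F_i = δ^2 Q_i/[e τ I t(1-t)] with τ I = ⟨Q_S⟩; small nonzero ϵ_i can be inserted to explore the vicinity of the fixed point.

eps1 = eps2 = eps3 = 0.0                 # RG fixed point; set small values to move away from it
e12 = eps1 + eps2

QS   = 2.0/3.0 + 0.55*e12                # <Q_S> in units of e, i.e. tau*I/e
t    = 5.0/6.0 + 0.5*eps3 - 1.36*e12
d2Q1 = 0.1728 + 0.185*eps3 - 0.246*e12   # autocorrelation at D_1, in units of e^2
d2Q2 = 0.0987 - 0.691*e12
d2Qc = -0.0246 - 0.037*eps3 + 0.246*e12

norm = QS*t*(1.0 - t)                    # e * tau*I * t(1-t), in units of e^2
print(f"G_D1 = {t*QS:.3f}, F1 = {d2Q1/norm:.3f}, F2 = {d2Q2/norm:.3f}, Fc = {d2Qc/norm:.3f}")
# at the fixed point: G_D1 = 5/9 and F1 ~ 1.87, F2 ~ 1.07, Fc ~ -0.27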
§ THE NOISY e^2/3h PLATEAU
Here, we show how e^2/3h QPC conductance plateau
(experimentally discovered in Ref. <cit.>
following an earlier theoretical work in Ref. <cit.>)
can appear
both in the coherent and equilibrated regimes. We compute
the shot noise at this plateau and show that different inequalities
hold among the autocorrelations and crosscorrelation
for different models.
§.§ Coherent scenario
We consider the
renormalized (at the WMG RG fixed point <cit.>) reconstructed MacDonald edge structure, which consists of
the n_1,n_2, e/3 (inner) and e/3 (outer)
modes (from bulk to edge), where n_1, n_2 denote the neutral modes (<ref>).
A plateau is observed at transmission t=1/2, leading to (G_D_1e^2)/h QPC conductance plateau, where G_D_1=t I τ/e and I is the source current, when the inner e/3 charge mode is fully backscattered and the outer e/3 charge mode is fully transmitted <cit.>.
The source biases the two e/3 charge modes. We assume that N
quasiparticles each having charge e/3 emanate from S into each charge mode in a time interval τ.
Near the lower right side of QPC, upon equilibration N_1=N/2 quasiparticles populate each mode and neutralons are created. These neutralons move to the upper right side of QPC via the neutral modes and randomly decay into quasihole-quasiparticle pairs in the adjacent charge modes <cit.>. This decay process is stochastic and leads to N_1 and (N-N_1) electronic excitations in the inner and outer modes, respectively, which reach D_1 and D_2 and generate nonzero dc shot noise but zero dc current.
In a similar manner, equilibration takes place near the upper left side of the QPC, and after equilibration N_2=N/2 quasiparticles are present in each mode. This process also creates neutral
excitations which move to the lower left side of QPC and stochastically decay into (N-N_2) and N_2 electronic excitations in the inner and outer modes, respectively,
which reach D_2 and D_1 and generate shot noise.
We introduce random variables a,b, assuming the values ± 1 with equal probability, which characterize the neutral decay processes near upper right side and lower left side of QPC, respectively. We have ⟨ a_m^p/q⟩ = ⟨ b_m^p/q⟩ = 0. Here p stands for the inner e/3 charge mode and q stands for the outer e/3 charge mode. We note that a and b are mutually uncorrelated and use the following properties.
⟨ a_m^p a_n^p⟩ = ⟨ b_m^p b_n^p⟩ = δ_m,n,
⟨ a_m^p a_n^q⟩ = ⟨ b_m^p b_n^q⟩ = -δ_m,n for p ≠ q.
The charges Q_1 and Q_2 reaching at drains D_1 and D_2 during time τ are thus
Q_1 = eN_1/3 + eN_1/3 + e/3 ∑_i=1^N_1 a_i^p + e/3 ∑_j=1^N_2 b_j^q,
Q_2 = eN_2/3 + eN_2/3 + e/3 ∑_k=1^N-N_1 a_k^q + e/3 ∑_l=1^N-N_2 b_l^p.
The total current is
I = ⟨ Q_1 ⟩ + ⟨ Q_2 ⟩/τ = 2eN/3τ,
the transmission through QPC is
t=⟨ Q_1 ⟩/⟨ Q_1 ⟩ + ⟨ Q_2 ⟩ = 1/2
and the QPC conductance plateau is at G_D_1 = 1/3.
The autocorrelation at D_1 is then
δ^2 Q_1 = ⟨ (Q_1 - ⟨ Q_1 ⟩)^2 ⟩
=e^2/9⟨(∑_i=1^N_1 a_i^p + ∑_j=1^N_2 b_j^q) (∑_m=1^N_1 a_m^p + ∑_n=1^N_2 b_n^q)⟩
=e^2N/9.
Similarly we obtain for the autocorrelation at D_2, δ^2 Q_2 = ⟨ (Q_2 - ⟨ Q_2 ⟩)^2 ⟩ = e^2N/9, whereas the crosscorrelation is
δ^2 Q_c = ⟨ (Q_1 - ⟨ Q_1 ⟩) (Q_2 - ⟨ Q_2 ⟩) ⟩
=e^2/9⟨(∑_i=1^N_1 a_i^p + ∑_j=1^N_2 b_j^q) (∑_k=1^(N-N_1) a_k^q
+ ∑_l=1^(N-N_2) b_l^p)⟩
= - e^2N/9.
Finally, the Fano factor for autocorrelation at D_1 is
F_1=δ^2 Q_1/e τ I t (1-t)=2/3,
and similarly we have F_2 = -F_c = 2/3.
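The stochastic-decay picture above is simple enough to simulate directly. The Python sketch below draws the ±1 decay variables shot by shot, imposes the anticorrelation between the inner and outer modes (a^q = -a^p, b^p = -b^q), and recovers δ^2 Q_1 = δ^2 Q_2 = -δ^2 Q_c = e^2N/9 together with F_1 = F_2 = -F_c = 2/3; the values of N and the number of time windows are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
N, shots = 60, 200_000                 # quasiparticles per window tau, number of windows
N1 = N2 = N // 2                       # populations after charge equilibration

a = rng.choice([-1, 1], size=(shots, N1))   # decays near the upper right of the QPC
b = rng.choice([-1, 1], size=(shots, N2))   # decays near the lower left of the QPC

# charges per window in units of e; the opposite signs in Q2 encode a^q = -a^p, b^p = -b^q
Q1 = 2*N1/3 + (a.sum(axis=1) + b.sum(axis=1))/3.0
Q2 = 2*N2/3 - (a.sum(axis=1) + b.sum(axis=1))/3.0

d2Q1, d2Q2 = Q1.var(), Q2.var()
d2Qc = ((Q1 - Q1.mean())*(Q2 - Q2.mean())).mean()
norm = (2*N/3)*0.5*(1 - 0.5)           # e*tau*I*t(1-t) with tau*I = 2eN/3 and t = 1/2
print("e^2 N/9 =", N/9)
print(f"F1 = {d2Q1/norm:.3f}, F2 = {d2Q2/norm:.3f}, Fc = {d2Qc/norm:.3f}")   # ~ 2/3, 2/3, -2/3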
§.§ Equilibration scenario
We derive the general expressions for the CCC (shot noise) in a QPC for the
bulk filling ν and QPC filling ν_i when ν>ν_i (<ref>). We assume that the charge is fully equilibrated, hence charge transport is ballistic, moving downstream along each segment of the setup.
We call the direction opposite to charge flow (upstream) antiballistic.
Thereafter, we compute the values of CCC for
specific choices of {ν,ν_i} and for different
thermal equilibration regimes.
We assume that there is
no bulk-leakage
<cit.>.
We use Ref. <cit.> and write the expressions of δ^2 I_1 (autocorrelation in drain D_1), δ^2 I_2 (autocorrelation in drain D_2) and δ^2 I_c (crosscorrelation) (<ref>) as
δ^2 I_1 =2(e^2/h)ν_i/ν(ν-ν_i)k_B (T_M+T_N)
+ 1/ν^2[ν_i^2 ⟨ (Δ I_S)^2 ⟩ + (ν-ν_i )^2⟨ (Δ I_G)^2 ⟩],
δ^2 I_2 =2(e^2/h)ν_i/ν(ν-ν_i)k_B (T_M+T_N)
+
1/ν^2[ν_i^2 ⟨ (Δ I_G)^2 ⟩ + (ν-ν_i )^2⟨ (Δ I_S)^2 ⟩],
and
δ^2 I_c =-2(e^2/h)ν_i/ν(ν-ν_i)k_B (T_M+T_N)
+ ν_i(ν-ν_i)/ν^2[⟨ (Δ I_G)^2 ⟩ + ⟨ (Δ I_S)^2 ⟩],
where T_M, T_N are, respectively, the temperatures at the noise spots M and N which are found by solving
self-consistent equilibration equations and
considering energy conservation at each edge mode junction <cit.>.
We note that the
dissipated powers at the hot spots take the form
P_H_1 = P_H_2 = e^2 V_dc^2/hν_i(ν-ν_i)/2ν.
The contributions
⟨ (Δ I_G)^2 ⟩ = ⟨ (Δ I_S)^2 ⟩
at the noise spots O and P are computed by evaluating an integral as shown in Ref. <cit.>.
We refer to <ref> and consider {ν,ν_i } = { 2/3,1/3 } and { 2/3(R), 1/3 }.
As charge is fully equilibrated in each segment of the QPC set up
(<ref>), we have I_1=e^2 V_dc/(3h)
for each of the {ν,ν_i } choices here, leading to the
transmission t=1/2 and G_D_1=1/3.
For no thermal equilibration, we have only ballistic and
antiballistic heat transports in any segment of the QPC set up leading to constant CCC. For
{ν,ν_i } = { 2/3,1/3 } we find
T_M=T_N= eV_dc/√(5)π k_B,
⟨ (Δ I_S)^2 ⟩ = ⟨ (Δ I_G)^2 ⟩≈ 0.044 e^3 V_dc/h,
leading to F_1=F_2≈ 0.35, F_c ≈ -0.22.
For
{ν,ν_i } = { 2/3(R),1/3 } we find
T_M=T_N=3 eV_dc/4√(6)π k_B,
⟨ (Δ I_S)^2 ⟩ = ⟨ (Δ I_G)^2 ⟩≈ 0.06 e^3 V_dc/h,
leading to F_1=F_2≈ 0.28, F_c ≈ -0.1.
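As in the ν<ν_i case discussed earlier, these numbers follow from evaluating the expressions for δ^2 I_1 and δ^2 I_c above; a minimal Python sketch, again assuming the normalization F_i = δ^2 I_i/(2eI t(1-t)) with I = ν e^2V_dc/h, is given below.

import numpy as np

nu, nu_i, t = 2.0/3.0, 1.0/3.0, 0.5
I = nu                                   # assumed source current in units of e^2 V_dc / h
cases = {                                # label: (k_B T_M = k_B T_N, <dI_S^2> = <dI_G^2>)
    "{2/3, 1/3}":    (1.0/(np.sqrt(5.0)*np.pi),     0.044),
    "{2/3(R), 1/3}": (3.0/(4.0*np.sqrt(6.0)*np.pi), 0.060),
}
for label, (kT, dS2) in cases.items():
    thermal = 2.0*(nu_i/nu)*(nu - nu_i)*(2.0*kT)
    d2I1 = thermal + (nu_i**2 + (nu - nu_i)**2)*dS2/nu**2
    d2Ic = -thermal + nu_i*(nu - nu_i)*(2.0*dS2)/nu**2
    norm = 2.0*I*t*(1.0 - t)
    print(f"{label}: F1 = F2 = {d2I1/norm:.2f}, Fc = {d2Ic/norm:.2f}")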
Similarly, for mixed and full thermal equilibration, we have diffusive
heat transport in the outer segment and
ballistic and
antiballistic heat transports in the line and
the upper segments of the QPC set up leading to length
dependent CCC. For each choice of {ν,ν_i }, we find
F_1=F_2≈ 0.09+0.3√(L_A/l_eq^th),
F_c≈ 0.09-0.3√(L_A/l_eq^th).
§ SUMMARY AND OUTLOOK
In this work we have considered the ν=2/3 FQH state in a QPC geometry. We have studied
both the bare and reconstructed edge structures that are consistent with the
bulk-boundary correspondence. For each of these, we have
considered the
steady state of the edge modes to be either coherent or
equilibrated.
In the coherent regime,
two different RG fixed points have been proposed earlier, namely KFP and WMG.
Recent experiments <cit.> have established that the thermal equilibration length
is orders of magnitude larger than the charge equilibration length.
These findings guarantee that
different degrees of thermal equilibration
are possible while the charge is fully equilibrated.
In our calculation, we have found that there are three possible
QPC conductance plateaus
e^2/2h, 5e^2/9h, and e^2/3h when we consider all
possible models.
Experimentally, the e^2/2h plateau has been reported
recently <cit.> and the
e^2/3h plateau was discovered earlier <cit.>.
We predict that if one finds the e^2/3h plateau, then another
possible plateau can exist at 5e^2/9h if the edge modes are
in the coherent regime.
To figure out which regime we are in, we can use
electrical shot noise, namely
autocorrelations and crosscorrelation, at these plateaus.
We have identified that distinct mechanisms are responsible
for the existence of the shot noise in different models.
We have found that different inequalities hold
among the correlations depending on the model.
Thus, our comprehensive study
provides a classification of
the steady state of the edge modes based on shot noise.
Our proposal can be extended to other filling fractions
and can be
tested in experiments with present-day facilities.
Recently, auto- and cross-correlation noise
have also been discussed in Ref. Dima2023.
We thank Yuval Gefen for many illuminating
discussions and collaboration on related works.
We also thank Christian Glattli, Kun Yang, and Michael J. Manfra for their useful discussions.
S.M. was supported by Weizmann Institute of Science, Israel Deans fellowship through Feinberg
Graduate School, as well as the Raymond Beverly Sackler Center for Computational Molecular and Material Science at Tel Aviv University.
A.D. was supported by the German-Israeli Foundation Grant No. I-1505-303.10/2019, DFG
MI 658/10-2, DFG RO 2247/11-1, DFG EG 96/13-1,
and CRC 183 (project C01). A.D. also thanks the Israel planning and budgeting committee (PBC) and the
Weizmann Institute of Science, the Dean of Faculty fellowship, and the Koshland Foundation for financial support.
M.G. has been supported by the Israel Science Foundation (ISF) and the Directorate for Defense
Research and Development (DDR&D) Grant No. 3427/21, and by the US-Israel Binational Science Foundation (BSF) Grant No. 2020072.
|
http://arxiv.org/abs/2307.04432v1 | 20230710091220 | Density-dependent relativistic mean field approach and its application to single-$Λ$ hypernuclei in Oxygen isotopes | [
"Shi Yuan Ding",
"Wei Yang",
"Bao Yuan Sun"
] | nucl-th | [
"nucl-th"
] |
Density-dependent relativistic mean field approach and its application to single-Λ hypernuclei in Oxygen isotopes
(This work was partly supported by the Fundamental Research Funds for the Central Universities, Lanzhou University under Grant No. lzujbky-2022-sp02 and lzujbky-2023-stlt01, the National Natural Science Foundation of China under Grant No. 11875152 and No. 12275111, and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34000000. The authors also want to thank the computation resources provided by the Supercomputing Center of Lanzhou University.)
Shi-Yuan Ding^1,2
Wei Yang^1,2
Bao-Yuan Sun^1,2 (corresponding author: [email protected])
August 12, 2023
^1MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, China
^2School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
The in-medium feature of nuclear force which includes both nucleon-nucleon (NN) and hyperon-nucleon (Λ N) interactions impacts the description of single-Λ hypernuclei. With the alternated mass number or isospin of hypernuclei, such effects could be unveiled by analyzing systematical evolution of the bulk and single-particle properties. From a density-dependent meson-nucleon/hyperon coupling perspective, a new Λ N effective interaction in the covariant density functional (CDF) theory, namely DD-LZ1-Λ1, is obtained by fitting the experimental data of Λ separation energies for several single-Λ hypernuclei. It is then adopted to study the structure and transition properties of single-Λ hypernuclei in Oxygen isotopes, comparing with several selected CDF Lagrangians. Discrepancy is observed explicitly in the isospin evolution of Λ1p spin-orbit splitting with various effective interactions, ascribed to their divergence of the meson-hyperon coupling strengths with increasing density. In particular, the density-dependent CDFs introduce an extra contribution to enhance the isospin dependence of the splitting, which is originated from the rearrangement terms of Λ self-energies. In addition, the characteristics of hypernuclear radii are studied along the isotopic chain. Owing to the impurity effect of Λ hyperon, a size shrinkage is observed in the matter radii of hypernuclei as compared to their cores of normal nuclei, while its magnitude is elucidated further to correlate with the incompressibility of nuclear matter. Besides, there exists a sizable model-dependent trend that Λ hyperon radii evolve with the neutron number, which is decided partly by the in-medium NN interactions as well as the core polarization effects.
PACS numbers: 21.80.+a, 13.75.Ev, 21.30.Fe, 21.60.Jz
§ INTRODUCTION
The discovery of hyperons, particles containing strange quarks, in 1953 sparked strong interest among experimental and theoretical physicists <cit.>. The ability of hyperons to enter the nucleus and form a system of hypernuclei makes them sensitive probes for studying the structure and specific nuclear features. The studies on hyperon behavior in the nucleus help us to understand the baryon-baryon interaction in the nuclear medium and its effects on nuclear properties <cit.>. In addition, hyperons are thought to be produced inside neutron stars <cit.>. The link between hypernuclear and neutron star properties benefits our comprehension of the state of matter in extreme environments, as well as of the strangeness-bearing nuclear force at high densities. In recent decades, a wealth of hypernuclear data has been generated through induced reactions of meson and electron beams at various radioactive beam facilities, including the Japan Proton Accelerator Research Complex (J-PARC) <cit.>, the Thomas Jefferson National Accelerator Facility (JLab) <cit.>, and the Facility for Antiproton and Ion Research (FAIR) <cit.>. These advanced facilities have played a pivotal role in advancing our understanding of strangeness in nuclear physics. Notably, single-Λ hypernuclei have been the most extensively studied, with experimental data covering hypernuclei from ^3_ΛH to ^208_ΛPb in various laboratories <cit.>.
When a Λ hyperon enters a nucleus, various phenomena can be observed. For instance, in ^7_ΛLi, it has been found that the size of the ^6Li core is smaller compared to the free-space ^6Li nucleus, as suggested by the measurement of the γ-ray transition probability from E2(5/2^+→1/2^+) in ^7_ΛLi <cit.>. In addition, in ^13_ΛC, it is hinted that the Λ spin-orbit splitting is much smaller than the nucleon's <cit.>. Recently, the potential for producing neutron-rich hyperfragments at high-intensity heavy-ion accelerator facilities has been discussed <cit.>. The directed flow of hypernuclei (^3_ΛH and ^4_ΛH) has just been observed at RHIC for the first time in heavy-ion collisions, providing insights into hyperon-nucleon interactions under finite pressure <cit.>. These advances highlight the promising prospects for investigating hypernuclear structures using the forthcoming high-intensity heavy-ion accelerator facility HIAF <cit.>. To provide accurate predictions for these experiments, researchers have performed detailed theoretical work on observables such as hypernuclear binding energy <cit.>, spin-orbit splitting <cit.>, and hyperon and hypernuclear matter radius <cit.>. Overall, these efforts aim to provide valuable insights into the behavior of hypernuclei, and to deepen our understanding of the in-medium baryon interactions.
Due to their ability to provide a self-consistent and unified description of almost all nuclei on the nuclear chart, both non-relativistic and relativistic mean-field theories are widely used in the calculation of finite nuclei and nuclear matter, and have been extended to describe hypernuclear systems with strange degrees of freedom during the development of theoretical models <cit.>. As a key model utilized in this work, the relativistic mean-field theory has been extensively developed to study hypernuclear properties such as hyperon separation energy <cit.>, spin-orbit splitting <cit.>, hyperon halo <cit.>, hypernuclear deformation <cit.>, cluster structure <cit.> and drip lines <cit.>. While most theoretical models have primarily emphasized nonlinear self-coupling interactions for studying hypernuclei, there has been a recent study that explores the effective interactions for single-Λ hypernuclei within the density-dependent relativistic mean-field (DDRMF) model <cit.>. With three distinct fitting approaches, they propose six new sets of effective Λ N interactions and uncover a significant linear correlation between the ratios R_σ and R_ω, representing scalar and vector coupling strengths, respectively, between these effective Λ N and NN interactions.
Recently, a new type of density-dependent relativistic mean-field Lagrangian, DD-LZ1, has been proposed, inspired by the restoration of pseudo-spin symmetry (PSS) and nuclear medium effects <cit.>. This new effective Lagrangian has produced satisfactory results in describing the properties of nuclear matter and finite nuclei. With its unique density-dependent form, DD-LZ1 eliminates the spurious shell closures that appeared in previous RMF calculations, and reasonably restores the PSS of high orbital angular momentum near the Fermi energy <cit.>. Applications with this new RMF Lagrangian have been performed for several nuclear many-body characteristics, in both finite nuclei with mass ranging from light to superheavy, and neutron star properties with density ranging from low to high. For instance, a comprehensive macroscopic-microscopic model was developed to evaluate the total energies for even-even nuclei with proton numbers ranging from 8 to 110 <cit.>. Even with the appearance of hyperons <cit.>, larger maximum masses of neutron stars could be obtained with DD-LZ1 than with several other RMF parameter sets, providing the possibility that the secondary object observed in GW190814 is a neutron star <cit.>. Utilizing the Thomas-Fermi approximation, different microscopic structures of nonuniform nuclear matter were calculated for the crust of neutron stars and a unified equation of state was established in a vast density range <cit.>. The different density-dependent behaviors of meson-nucleon couplings impact the microscopic structures of neutron star matter with DD-LZ1, and correspondingly affect the description of various physical processes and evolutions of neutron stars.
Apart from dealing with the different nuclear medium effects caused by the interactions themselves, the evolution of isospin also leads to significant changes in the in-medium effects of hypernuclei, thereby affecting the description of their structural properties. In recent years, a series of refined theoretical studies have been conducted on hypernuclei in different isotopic chains using various interaction models. For instance, the no-core shell model has been employed to investigate the systematic evolution of the ground and excited state energies in the Helium and Lithium hyperisotopes <cit.>. The antisymmetrized molecular dynamics method has been applied to explore the low-lying level structure of hypernuclei in the Beryllium hyperisotopes <cit.>. The multidimensionally constrained RMF model has been used to study the shape evolution of hypernuclei in the Argon hyperisotopes <cit.>. The beyond mean-field approach has been utilized to discuss the evolution of p-state energies and composition in the Carbon hyperisotopes <cit.>, as well as the hyperon halo structures in the Boron and Carbon hyperisotopes <cit.>. These studies exhibit the significant role of isospin in the description of hypernuclear structure. In fact, with the development of hypernuclear spectroscopy, new experiments related to hypernuclei have been initiated, such as the planned measurements in the J-PARC project, aiming to study the Λ hyperon binding energies in neutron-rich hyperisotopes of ^124-136_ΛSn <cit.>. These experiments will provide crucial information about the properties of hypernuclei associated with various isospin circumstances.
In view of the essential role of nuclear in-medium effects on hypernuclear structure and their relevance to the isotopic evolution, we aim to further expand the density-dependent RMF model to investigate the structure of single-Λ hypernuclei in Oxygen hyperisotopes. First, we will introduce the theoretical framework of the hypernuclear RMF approach in Sec. <ref>. Then, the induced Λ-nucleon (Λ N) effective interactions will be determined by fitting Λ separation energies to the experimental data for DD-LZ1 Lagrangian. To give the results and discussion, the influence of nuclear in-medium effects will be studied in Sec. <ref>, on the isospin dependence of hypernuclear bulk properties, hyperon spin-orbit splitting and matter/hyperon radius. Finally, a summary will be given in Sec. <ref>.
§ DDRMF APPROACH FOR SPHERICAL SINGLE-Λ HYPERNUCLEI
To describe single-Λ hypernuclei within the meson-exchanged type of the relativistic mean-field theory, the covariant Lagrangian density serves as the foundation, which is
ℒ = ℒ_B + ℒ_φ + ℒ_I,
where the terms of free fields read as
ℒ_B= ∑_Bψ̅_B(iγ^μ∂_μ-M_B)ψ_B,
ℒ_φ= +1/2∂^μσ∂_μσ-1/2m_σ^2σ^2-1/4Ω^μνΩ_μν+1/2m_ω^2ω^μω_μ
-1/4R⃗^μν·R⃗_μν+1/2m_ρ^2ρ⃗^μ·ρ⃗_μ-1/4F^μνF_μν,
where the index B (B') represents nucleon N or hyperon Λ, with its sum ∑_B over nucleon N and hyperon Λ. The masses of the baryon and mesons are given by M_B and m_ϕ (ϕ=σ, ω^μ, ρ⃗^μ), while Ω^μν, R⃗^μν and F^μν are the field tensors of vector mesons ω^μ, ρ⃗^μ and photon A^μ, respectively. The interaction between nucleon (hyperon) and mesons (photon) is involved by the Lagrangian ℒ_I,
ℒ_I=∑_Bψ̅_B (-g_σ Bσ-g_ω Bγ^μω_μ)ψ_B
+ψ̅_N (-g_ρ Nγ^μτ⃗·ρ⃗_μ-eγ^μ1-τ_3/2A_μ)ψ_N.
Here the Λ hyperon (namely ψ_B taken as ψ_Λ), which is charge neutral with isospin zero, only takes part in interactions that are spread by isoscalar mesons. The nuclear in-medium effects are introduced phenomenologically via the coupling strengths g_ϕ B (g_ϕ N), which use baryon-density dependent functions in density-dependent RMF (DDRMF) approach to define the strengths of different meson-baryon (meson-nucleon) couplings <cit.>.
The effective Hamiltonian operator for Λ hypernuclei can be obtained by performing the general Legendre transformation on the Lagrange density ℒ in Eq. (<ref>), and it can be written as the sum of the kinetic energy operator T̂ and the potential energy operator V̂_φ,
Ĥ≡ T̂+∑_φV̂_φ
= ∫ dx ∑_Bψ̅_B(x)(-iγ·∇+M_B) ψ_B(x)
+ 1/2∫ dx ∑_B∑_φ[ψ̅_B𝒢_φ Bψ_B]_x D_φ(x,x') [ψ̅_B'𝒢_φ B'ψ_B']_x',
here x is four-vector (t,x). Correspondingly, we define interaction vertices 𝒢_φ B(x) for a various of meson (photon)-nucleon (hyperon) coupling channels, which for isoscalar σ and ω mesons are represented as
𝒢_σ B(x) = +g_σ B(x),
𝒢_ω B^μ(x) = +g_ω B(x)γ^μ.
Notably, both nucleons and the Λ hyperon can contribute to the isoscalar meson fields. However, for the remaining isovector mesons and photon fields, it is expected that their interaction vertices solely connect to nucleons since the isoscalar and charge-zero nature of Λ hyperon,
𝒢_ρ N^μ(x) = +g_ρ N(x) γ^μτ⃗,
𝒢_A N^μ(x) = +eγ^μ1-τ_3/2.
As the retardation effects could be neglected in the majority of RMF models, the meson (photon) propagators D_ϕ (D_A) read as
D_ϕ(x,x')=1/4πe^-m_ϕ|x-x'|/|x-x'|,
D_A(x,x')=1/4π1/|x-x'|.
The baryons field operator ψ_B in the Hamiltonian (<ref>) can be second quantized in the positive-energy space under the no-sea approximation as
ψ_B(x)
=∑_if_i(x)e^-iϵ_i tc_i.
Here, f_i represents the Dirac spinor, while c_i denote the annihilation operators for state i. Accordingly, the energy functional E is determined by evaluating the expectation value of the Hamiltonian with respect to a trial Hartree-Fock ground state |Φ_0⟩,
E = ⟨Φ_0|Ĥ| Φ_0⟩ = ⟨Φ_0|T̂| Φ_0⟩+∑_φ⟨Φ_0|V̂_φ| Φ_0⟩.
Then the binding energy of a Λ hypernucleus is written by
E= ∑_B(E_kin,B + E_σ,B + E_ω,B) + E_ρ,N+E_e.m. + E_c.m. + E_pair,
where the kinetic energy functional of baryons is shown by E_kin,B. The contributions of the potential energy functional from σ and ω are denoted by the variables E_σ,B and E_ω,B. Additionally, E_ρ,N and E_e.m. are used to represent the contributions from ρ and A, respectively. The center-of-mass adjustment to the mean-field is represented by the term E_c.m., while E_pair takes into account the contribution from nucleon pairing correlations <cit.>.
The role of deformation in single-Λ hypernuclei has been discussed in various density functional models <cit.>, which may generate non-negligible effects on the single-particle energies like in Carbon hyperisotopes <cit.>. To describe single-Λ hypernuclei, in particularly the Oxygen hyperisotopes discussed hereafter, we just restrict the RMF approach to the spherical symmetry. Correspondingly, the Dirac spinor f_i(x) of the nucleon or hyperon in Eq. (<ref>) has the following form:
f_nκ m(x) = 1/r([ iG_a(r)Ω_κ m(ϑ,φ); F_a(r)Ω_-κ m(ϑ,φ) ]),
where the index a consists of the set of quantum numbers (nκ) = (njl), and Ω_κ m is the spherical spinor. Meanwhile, the propagators can be expanded in terms of spherical Bessel and spherical harmonic functions as
D_ϕ(x,x^') = ∑_L=0^∞∑_M=-L^L(-1)^MR^ϕ_LL( r, r^') Y_LM(Ω)Y_L-M(Ω^'),
where Ω=(ϑ,φ), and R_LL contains the modified Bessel functions I and K as
R_L L^ϕ(r, r^') =√(1/rr^') I_L+1/2(m_ϕr_<) K_L+1/2(m_ϕr_>),
R_L L^A(r, r^') =1/2L+1r_<^L/r_>^L+1.
In the DDRMF approach, the meson-baryon coupling strengths are adopted as a function of baryon density ρ_b, which are written by
g_ϕ B(ρ_b)=g_ϕ B(0) f_ϕ B(ξ) or
g_ϕ B(ρ_b)=g_ϕ B(0) e^-a_ϕ Bξ,
where ξ=ρ_b/ρ_0 with ρ_0 the saturation density of nuclear matter, and
f_ϕ B(ξ)=a_ϕ B1+b_ϕ B(ξ+d_ϕ B)^2/1+c_ϕ B(ξ+d_ϕ B)^2.
The free coupling strength at ρ_b=0 is represented by g_ϕ B(0) in the expression above. To keep the variational self-consistency between the energy density functional and single-particle properties, the extra terms in baryon self-energies, namely the rearrangement terms, will occur due to the density dependence of the coupling strengths. The single-particle (nucleon or hyperon) properties can be determined by solving the Dirac equation,
ε_a,B[ G_a,B(r); F_a,B(r) ] = [ Σ_+^B(r) -d/dr+κ_a,B/r; d/dr+κ_a,B/r -[2M_B-Σ_-^B(r)] ][ G_a,B(r); F_a,B(r) ].
Here the self-energies Σ_±^B=Σ_0,B±Σ_S,B composed by the vector and scalar terms. The scalar self-energy Σ_S,B = Σ_S,B^σ, and the time component of the vector one has
Σ_0,B(r) = ∑_ϕΣ_0,B^ϕ(r)+Σ_R(r),
where ϕ=ω, ρ for nucleons, and ϕ=ω for Λ hyperon. The self-energies of nucleon or hyperon include scalar one Σ_S,B and vector one Σ_0,B, in which the coupling of isoscalar mesons contributes as follows,
Σ_S,B^σ(r) =-g_σ B(r)∑_B^'∫ r^'2dr^' g_σ B^'(r^')ρ_s,B^'(r^')R^σ_00(r,r^'),
Σ_0,B^ω(r) =+g_ω B(r)∑_B^'∫ r^'2dr^' g_ω B^'(r^')ρ_b,B^'(r^')R^ω_00(r,r^').
Here, ρ_s,B and ρ_b,B represent the scalar and baryon density, respectively <cit.>. Additionally, the rearrangement term Σ_R appears in DDRMF approach, which contain the summation over all baryons for the isoscalar case of ϕ=σ,ω, but only over nucleons for the isovector one. For example, the contribution from σ-S coupling is shown as
Σ_R,σ(r)=∑_B1/g_σ B∂ g_σ B∂ρ_bρ_s,BΣ_S,B^σ(r).
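As an illustration of how the density-dependent couplings and their density slope (which feeds the rearrangement term Σ_R) enter a numerical implementation, a small Python sketch is given below; the parameter values are placeholders chosen only for illustration, not the fitted DD-LZ1-Λ1 constants.

import numpy as np

RHO0 = 0.152    # saturation density in fm^-3, illustrative value

def g_rational(rho_b, g0, a, b, c, d):
    """g(rho_b) = g(0) a [1 + b(x+d)^2] / [1 + c(x+d)^2], with x = rho_b/rho0."""
    x = rho_b / RHO0
    return g0 * a * (1.0 + b*(x + d)**2) / (1.0 + c*(x + d)**2)

def g_exponential(rho_b, g0, a):
    """g(rho_b) = g(0) exp(-a x), with x = rho_b/rho0."""
    return g0 * np.exp(-a * rho_b / RHO0)

def dg_drho(g, rho_b, *pars, h=1.0e-6):
    """Numerical density derivative dg/drho_b, as needed in the rearrangement term."""
    return (g(rho_b + h, *pars) - g(rho_b - h, *pars)) / (2.0*h)

# hypothetical parameters, chosen only to show the typical monotonic decrease with density
rho = np.linspace(0.02, 0.32, 4)
print(g_rational(rho, 10.0, 1.06, 0.57, 0.83, 0.40))
print(g_exponential(rho, 7.0, 0.55))
print(dg_drho(g_exponential, 0.16, 7.0, 0.55))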
§ RESULTS AND DISCUSSION
In recent years, there has been extensive theoretical research on hypernuclei, particularly focusing on the simplest single-Λ hypernuclei, using RMF and RHF theories. In this section, we aim to extend the effective interaction DD-LZ1 <cit.>, which has been proven to be successful and promising in determining the properties of nuclear structure in both bulk and single-particle aspects, to incorporate Λ hyperon within the framework of RMF model. To give a comparative study and illustrate the role of nuclear in-medium effects, the calculations with DD-LZ1 will be accompanied by several existing effective Λ N interactions within CDF models. These interactions have been significantly expanded to incorporate the degrees of freedom of the Λ hyperon and have yielded many successful findings in the study of hypernuclear structure and the properties of dense stars. In detail, density-dependent RMF effective interactions DD-LZ1 <cit.>, PKDD <cit.>, DD-ME2, TW99, DDV <cit.>, density-dependent RHF (DDRHF) effective interactions PKO1, PKO2, PKO3 <cit.>, and nonlinear RMF (NLRMF) effective interactions NL-SH <cit.> and PK1 <cit.> were selected. In these CDF functionals, the ω-tensor coupling which has been proved to be essential in reducing Λ's spin-orbit splitting in hypernuclei <cit.> is ignored. The Dirac equation is solved in a radial box size of R=20 fm with a step of 0.1 fm. For open-shell hypernuclei, we employ the BCS method to account for pairing correlations. As the strength of hyperon pairing correlations remains uncertain and may become essential in multi-Λ hypernuclei, our current work solely considers pairing correlations between nn and pp pairs by using the finite-range Gogny force D1S <cit.>, see Refs. <cit.> for details. In addition, the blocking effect should be taken into account for the last valence nucleon or hyperon, with a detailed description to the Ref. <cit.>.
§.§ Density dependence of Λ N effective interaction
For the theoretical study of hypernuclear structure, the Λ N interaction must be determined first. Since the Λ hyperon is an electrically neutral particle with isospin zero, our focus lies on the coupling strengths between the isoscalar-scalar σ meson and the isoscalar-vector ω meson with the Λ hyperon. For convenience, we introduce the ratio of the coupling strengths between the meson-hyperon and meson-nucleon, g_ϕΛ/g_ϕ N. According to the näive quark model <cit.>, we fix the ratio of the isoscalar-vector meson coupling strength g_ωΛ/g_ω N to 0.666, while the ratio of the isoscalar-scalar one g_σΛ/g_σ N can be obtained by reproducing the Λ hyperon separation energy B_Λ experimental data for ^16_ΛO, ^40_ΛCa, and ^208_ΛPb <cit.>. In the fitting process, the hyperon is placed in the 1s_1/2 ground state, and the B_Λ is defined as follows:
B_Λ(^A_Λ Z) = E(^A-1Z) - E(^A_Λ Z),
Based on the effective interaction DD-LZ1, we finally obtained a new set of Λ N interaction, namely DD-LZ1-Λ1, after a fitting process of Levenberg-Marquardt minimization. Then, we calculated the Λ separation energy B_Λ as well as the single-Λ energy, with hyperon occupying the ground state 1s_1/2 or possible excited states with higher angular momentum l_Λ. For B_Λ of DD-LZ1-Λ1, a remarkable agreement with experimental data is found for most of hypernuclei, except for ^28_ΛSi with significant deformation and Carbon hyperisotopes with light mass, as shown in Fig. <ref>. Actually, more accuracy description to the light-mass Carbon hyperisotopes could be obtained, by limiting the mass region of fitting and taking into account the deformation effects <cit.>. To investigate the deviation in describing the structural properties of single-Λ hypernuclei using different CDF effective interactions, the coupling strength of DD-LZ1-Λ1 in comparison with other selected CDF functionals are listed in Table <ref>. One could check the root-mean-square deviation Δ for B_Λ between theoretical calculation and experimental value, which is defined by
Δ≡√(1/N∑_i=1^N(B_Λ,i^exp.-B_Λ,i^cal.)^2).
To reveal the systematics, we define Δ_1 to be the deviation only for ^16_ΛO, ^40_ΛCa, and ^208_ΛPb, as well as Δ_2 for all hypernuclei.
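For reference, the fit of the single free ratio g_σΛ/g_σ N and the evaluation of Δ can be organized as in the Python sketch below; the routine solve_hypernucleus is a placeholder standing in for the full spherical RMF(+BCS) solver, and all names and numbers here are schematic assumptions rather than the actual DD-LZ1-Λ1 fit.

import numpy as np
from scipy.optimize import least_squares

def rms_deviation(B_exp, B_cal):
    """Root-mean-square deviation Delta between experimental and calculated B_Lambda."""
    diff = np.asarray(B_exp, dtype=float) - np.asarray(B_cal, dtype=float)
    return float(np.sqrt(np.mean(diff**2)))

print(rms_deviation([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))   # toy numbers, just to exercise the helper

def solve_hypernucleus(name, ratio_sigma):
    """Placeholder: return B_Lambda(name) for a given g_sigmaLambda/g_sigmaN,
    with g_omegaLambda/g_omegaN kept fixed at 0.666 and the hyperon in 1s_1/2."""
    raise NotImplementedError

def residuals(params, names, B_exp):
    return [solve_hypernucleus(n, params[0]) - b for n, b in zip(names, B_exp)]

# schematic use with the three fitted hypernuclei (separation energies to be taken from experiment):
# fit = least_squares(residuals, x0=[0.62], args=(["16O-L", "40Ca-L", "208Pb-L"], B_exp_values))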
From Table <ref>, it can be seen that different CDF theoretical models have good descriptions for ^16_ΛO, ^40_ΛCa and ^208_ΛPb, and most parameter sets have good consistency between hypernuclear theoretical calculations and experimental data over a large mass range from ^12_ΛC to ^208_ΛPb. In addition, by comparing three different types of CDF effective interactions, we can find that when the ratio of the isoscalar-vector meson coupling strength is fixed to the same value, the ratio of the isoscalar-scalar meson coupling strength g_σΛ/g_σ N may satisfy certain linear correlations with the ratio of the isoscalar-vector meson coupling strength, which has been systematically explored in some works <cit.>. It should be pointed out that the linear correlation of meson-hyperon coupling strength ratios obtained in the RMF framework is obviously not suitable for density-dependent RHF models <cit.>.
In DDRMF approach, the in-medium effects of nuclear force are effectively embedded in the density-dependent shape of meason-baryon coupling strength, playing the role in the nuclear structure via the equilibrium of nuclear dynamics from various coupling channels. In recent years, analysis based on the equilibrium of nuclear in-medium dynamics has been applied to clarify the mechanism of the pseudospin symmetry, the shell evolution, the liquid-gas phase transition, and hyperon's spin-orbit splitting in the CDF models <cit.>. The delicate in-medium balance between nuclear attractive and repulsive interactions may be significantly altered by treating the density dependence of coupling strength differently, impacting the description of the properties of nuclear matter and finite nuclei with different CDF effective interactions.
To provide a comprehensive understanding of the in-medium equilibrium in hypernuclei, we present the density dependence of coupling strengths for selected CDF effective interactions in Fig. <ref>(a) and Fig. <ref>(b), corresponding to the isoscalar-scalar channel g_σΛ and isoscalar-vector one g_ωΛ. There are systematic divergences of the meson-hyperon coupling strengths with increasing density among density-dependent RMF, density-dependent RHF, and nonlinear RMF effective interactions. Notably, the density dependence of g_σΛ and g_ωΛ is significantly reduced in the DDRHF effective interaction compared to the DDRMF effective interaction. This pronounced reduction in density dependence also influences the description of single-particle properties in hypernuclei, such as Λ hyperon spin-orbit splitting <cit.>. Furthermore, in contrast to density-dependent interactions, the NLRMF effective interaction exhibits density-independent characteristics for g_σΛ and g_ωΛ. Consequently, when applying these three types of CDF effective interactions to single-Λ hypernuclei, the systematic deviation could take place in describing the isospin dependence of the hypernuclear structure.
§.§ Bulk properties of single-Λ hypernuclei in Oxygen hyperisotopes
To focus on the isospin dependence of single-particle properties, we choose the Λ hypernuclei and their nucleonic counterpart in Oxygen (hyper)isotopes as examples, since they usually take the spherical symmetry. To check the accuracy of the chosen interactions in describing the properties of finite nuclei, we first calculated the binding energies E_B, charge radii R_c, and matter radii R_m for Oxygen isotopes using the DD-LZ1 effective interaction. We compared the theoretical calculations with experimental measurements, which were taken from Refs. <cit.>. From the results in Table <ref>, we can see that the theoretical calculations and experimental measurements are in good agreement for both the binding energies E_B and the charge radii R_c, for the interaction DD-LZ1. It is worth noting that the total matter radius R_m of finite nuclei, unlike the charge radius, still has significant uncertainties based on heavy ion reaction experiments. The theoretical calculations of R_m reconcile with the experimental measurements with the existence of error bars.
Furthermore, we summarize in Table <ref> the systematics of the occupied energy level of Λ hyperon, the single-particle energies of Λ hyperon, the total binding energies, the charge radii, and the matter radii of hypernuclei in Oxygen hyperisotopes. In order to give possible reference to hypernuclear experiments, we also calculated the strength of electric dipole transition B(E1) between the Λ1p and Λ1s occupation states. The transition strength is expressed as
B(E1;J_i⟶ J_f)=3e^2_Λ/4π⟨ f|r|i⟩^2(2j_f+1)
[ j_f 1 j_i; -1/2 0 1/2 ]^2,
where e_Λ represents the effective charge of the Λ hyperon. The integration ⟨ f|r|i⟩ can be computed using the radial wave functions of the initial and final single-Λ states; see Ref. <cit.> for details.
In the framework of relativistic models, Dirac spinors with both upper and lower components could contribute to determining the value of B(E1). However, it is checked that the contribution from the lower component is negligible, especially for non-charge exchange channel. Therefore, only the contribution from the upper component is preserved in current calculations as a simplification. The inclusion of Λ hyperon causes the so-called impurity effect inside hypernuclei <cit.>. When the Λ hyperon is filled in the 1s_1/2 state, we can see from the comparison of the total matter radii in Table <ref> and Table <ref> that the introduction of hyperon causes a shrinkage effect on the hypernuclei, which is approximately 0.06-0.13 fm. Compared with the ground-state results, we observe a significant enhancement in Λ root-mean-square radii when hyperon is filled in higher-lying 1p state. This change in the density distribution of hyperon due to different level occupations leads to an overall expansion of the hypernuclear matter radii, different from the Λ1s case. Additionally, with the increase of neutron filling, both the hyperon radii, matter radii and B(E_1) show significant isospin dependence, which can be qualitatively explained by the density-dependence of the coupling strength. As indicated in Table <ref>, when Λ hyperon occupies the 1p state, its density distribution spreads more outward than the nucleonic core. As isospin evolves, more neutrons are filled and their attraction to the hyperon increases, correspondingly leading to a significant reduction in the hyperon radius. For B(E_1), its value is determined not only by the overlap between initial and final states which are sensitive to the neutron number, but also by the effective charge. As a result, the B(E_1) values enlarge a little from ^15_ΛO to ^17_ΛO and go down gradually as isospin evolves after N=8.
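A minimal numerical sketch of this B(E1) evaluation is given below in Python; the oscillator-like radial functions, the oscillator length, and the effective charge e_Λ are placeholders for illustration, while the angular factor is the 3j symbol quoted above (here taken from sympy).

import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j

r = np.linspace(1.0e-3, 15.0, 3000)       # radial grid (fm)
dr = r[1] - r[0]
b = 1.8                                   # oscillator length (fm), placeholder

def normalized(u):
    return u / np.sqrt(np.sum(u**2) * dr)

u_1s = normalized(r * np.exp(-r**2/(2*b**2)))      # stand-in for the Lambda 1s upper component G(r)
u_1p = normalized(r**2 * np.exp(-r**2/(2*b**2)))   # stand-in for the Lambda 1p upper component G(r)

overlap = np.sum(u_1p * r * u_1s) * dr             # <f| r |i> built from the upper components only
e_lam = 0.07                                       # effective Lambda charge (units of e), placeholder

half = Rational(1, 2)
for j_f in (half, 3*half):                         # 1p1/2 and 1p3/2
    threej = float(wigner_3j(j_f, 1, half, -half, 0, half))
    BE1 = 3.0*e_lam**2/(4.0*np.pi) * overlap**2 * (2*float(j_f) + 1) * threej**2
    print(f"j_f = {j_f}:  B(E1) ~ {BE1:.4e}  e^2 fm^2")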
§.§ Isospin dependence of Λ spin-orbit splitting
Motivated by the connection between the density-dependent effective interactions of theoretical models and the isospin-dependent properties of nuclear structure, the spin-orbit splitting of Λ hyperon in hypernuclei, as a promising observable in current hypernuclear spectroscopy, will be discussed in this subsection with newly developed DD-LZ1-Λ1 and other selected CDF functionals. The Λ's spin-orbit splitting is defined by the difference of Λ single-particle energies between a couple of spin partner states, which is
Δ E_SO^Λ≡ε_j_Λ=l_Λ-1/2 - ε_j_Λ=l_Λ+1/2.
As shown in Fig. <ref>, the analysis is carried out for Λ spin partner states 1p in Oxygen hyperisotopes, with the Λ hyperon occupying its ground state.
In Fig. <ref>(a), it is seen that the isospin dependence of Δ E_SO^Λ is clearly distinguished with the chosen CDF functionals. The curves from NLRMF models tend to be stable with increasing neutron number, while for density-dependent RMF or RHF functionals the splitting enlarges generally with isospin. Among them, DD-LZ1-Λ1 exhibits the most significant isospin dependence. Besides, it is clear that the smaller Λ spin-orbit splitting is predicted by DDRHF compared to RMF, which has been illustrated as a result in single-particle properties since the dynamical equilibrium between nuclear attraction and repulsion is dramatically changed with the appearance of Fock terms <cit.>.
To better understand the evolution of Λ spin-orbit splitting with isospin, we could decompose Δ E_SO^Λ into various parts according to its source of the kinetic or potential energy. The values are obtained by left-multiplying the transferred Dirac spinor to the Dirac equation Eq. (<ref>), and separate the integrated contributions from different self-energie terms. For instance, Δ E_rea comes from the contribution of the rearrangement term Σ_R to Λ self-energy Σ_0,Λ, as seen in Eq. (<ref>), due to the density dependence of meson-hyperon couplings. Consequently, the rest one from the kinetic energy and the density-independent potential energies could be summed over, which means Δ E_kin+σ+ω≡Δ E_SO^Λ-Δ E_rea, as discussed in Fig. <ref>(b).
It is observed that the values of Λ spin-orbit splitting are primarily determined by Δ E_kin+σ+ω. However, the isospin dependence of the splitting is weakly controlled by Δ E_kin+σ+ω except for ^15_ΛO. Attributed to the occupation of ν 1p_1/2 orbit, the Λ spin-orbit splitting predicted by various CDF functionals systematically reduces from ^15_ΛO to ^17_ΛO. As has been illustrated in Ref. <cit.>, the spin-orbit coupling potential of hyperon is determined mainly by the radial derivative of the self-energy Σ_-^Λ. In general, the more neutrons are filled into hypernuclei, the larger the density circumstance where the Λ hyperon is housing. Thus, if the model is density dependent like DDRMFs and DDRHFs given in Fig. <ref>, the meson-hyperon coupling strength then weakens and Δ E_SO^Λ should become smaller correspondingly as the neutron number increases. As seen in Fig. <ref>(b), such a reduction in Δ E_kin+σ+ω is remarkable from ^15_ΛO to ^17_ΛO, and relatively less significant at larger neutron numbers.
Different from the NLRMF case, the density-dependent CDFs introduce extra contribution to reinforce the isospin dependence of the splitting, as demonstrated in Fig. <ref>(c), which cancels the reduction trend in Δ E_kin+σ+ω overwhelmingly and finally leads to the enhancement of Δ E_SO^Λ with increasing neutron number in Fig. <ref>(a). In fact, the contribution Δ E_rea to Λ spin-orbit splitting is originated from the rearrangement terms of Λ self-energies Σ_0,Λ which according to Eq. (<ref>) depends on the density slope of the meson-hyperon coupling strength. As the neutron number increases, the density scenario where Λ lives could get more intense, consequently weaker density dependence of the meson-hyperon coupling strength, smaller density slope as well as the suppressed value of Δ E_rea. Therefore, the link between the isospin evolution of Λ spin-orbit splitting and the in-medium behavior of Λ N interaction with baryon density is elucidated from the discussion on Oxygen hyperisotopes. In consequence, possible experimental constraints on Δ E_SO^Λ along the hyperisotopes could assist us further in understanding the in-medium effects of nuclear force.
§.§ Isospin dependence of matter and hyperon radii
In the properties of hypernuclear structure, not only the Λ spin-orbit splitting but also the Λ impurity effect could exhibit the information of in-medium nuclear interactions. In Fig. <ref>(a), we selected DDRMF functionals DD-LZ1-Λ1 and DD-ME2, DDRHF's PKO1-Λ1 and NLRMF's PK1, to illustrate its influence on the matter radii of Oxygen (hyper)isotopes, where the solid and dash-dotted lines correspond to the calculated results for single-Λ hypernuclei and their nucleonic counterpart in Oxygen (hyper)isotopes, respectively. The matter radius R_m in hypernuclei goes up monotonically as the neutron number increases, regardless of the specific model used, where a steep leap from ^23_ΛO to ^25_ΛO corresponds to the effect of new occupation in ν 2s_1/2.
Although divergent values given for Oxygen isotopes without hyperon, all of the selected models are getting closer in size of matter radii for hypernuclei, implying R_m of hypernuclei as a possible model-independent observable. It is evident that the matter radii of Oxygen hyperisotopes contract as compared to their nucleonic counterparts, namely the size shrinkage due to the impurity effect of the Λ hyperon. However, the shrinkage magnitude appears to be strongly model dependent. Among them, the DDRMF effective Lagrangian DD-LZ1-Λ1 yields the largest difference between the solid and dash-dotted lines, whereas the NLRMF one PK1 shows the smallest disparity. By checking the bulk properties of nuclear matter within these CDFs, it is verified that the shrinkage magnitude correlates well with the incompressibility, which is 230.7 MeV for DD-LZ1, 250.8 MeV for DD-ME2, 250.2 MeV for PKO1, 282.7 MeV for PK1, respectively <cit.>. In fact, the larger the incompressibility K is, the harder the nucleus is contracted by the exerted attraction from the filled hyperon inside, consequently the weaker size shrinkage effect in the calculated matter radii. The similar relation could be found from the Table II of a work on the isoscalar giant monopole resonance of hypernuclei, where the effective nuclear incompressibility modulus was extracted <cit.>.
To further distinguish the effects of different interactions on the description of hypernuclear structure, we investigate the isospin evolution of the Λ hyperon radius R_Λ in Oxygen hyperisotopes using all selected CDF effective interactions, as shown in Fig. <ref>. It is seen tangibly that R_Λ evolves diversely along Oxygen hyperisotopes with different CDF effective interactions. Some effective interactions, like PKO3-Λ1, DD-ME2, DDV, and DD-LZ1-Λ1, exhibit a reduced R_Λ with increasing neutron number. Especially, DD-LZ1-Λ1 gives the smallest hyperon radii among all chosen CDFs, and a strong declining trend. In fact, the core polarization effect due to the Λ hyperon plays a significant role in this evolution. When Λ occupies the 1s_1/2 state, its density distribution is concentrated inside the hypernucleus. As a result, the Λ's coupling or attraction with the nucleons in the core (here corresponding to ^16O) appears relatively stronger than that with the valence nucleons. Hence, the evolution of the hyperon radius could be comprehended more or less by the size change of the core with respect to the neutron number.
The variation of the matter radii for the ^16O core in Oxygen (hyper)isotopes is plotted in Fig. <ref>(b) with respect to the neutron number. From N=8 to 14, in contrast to the situation of total matter radii R_m, there is no consistent isospin dependence for the selected CDFs in the core radius R_m^core with increasing neutron number. The nonlinear RMF functional PK1 exhibits a significant increasing trend with isospin, while the density-dependent RMF one DD-LZ1-Λ1 shows a noticeable decrease. Consequently, the hyperon radius R_Λ exhibit a similar isospin dependence resulting from the core polarization effect, determined mainly by the various isospin properties of CDF functionals in nucleon-nucleon channels. From such analysis, the importance of nuclear in-medium effects in affecting the hyperon radii is unveiled. So the divergent isospin evolution of R_Λ given by the CDFs with different density dependent meson-baryon couplings makes it a valuable tool to elucidate the in-medium behavior of nuclear force.
§ SUMMARY
In summary, considering the significance of nuclear in-medium effects in nuclear many-body problems, such as eliminating the spurious shell closures, we expanded the newly developed DDRMF Lagrangian DD-LZ1 to incorporate the Λ hyperon degree of freedom and determined the Λ N effective interaction by fitting the experimental data of Λ separation energies for several single-Λ hypernuclei. Subsequently, with several other CDF functionals, the features including Λ separation energy and B(E1) transition, and the evolution of the spin-orbit splitting as well as the characteristic radii were analyzed in detail along the Oxygen (hyper)isotopes.
By comparing the results obtained from different CDF models, we further investigated the crucial impact of nuclear in-medium effects on accurately describing the properties of hyperon, both in terms of their bulk and single-particle properties. For the 1p spin-orbit splitting of the Λ hyperon, significant differences in the isospin dependence are observed among the selected CDF effective interactions in Oxygen hyperisotopes. As the neutron number increases, the density circumstance where the hyperon is housing gradually increases, which causes the meson-hyperon coupling strengths that determine the hypernuclear properties to change as well. In particular, the density-dependent CDF effective interactions introduce additional rearrangement terms that significantly enhance the isospin dependence of the Λ spin-orbit splitting, leading to more distinct variation of Δ E_SO^Λ with neutron number in DDRMF and DDRHF models.
The evolution of the hypernuclear matter radius with isospin was further investigated. Significant model dependence in the magnitude of size shrinkage due to the inclusion of the Λ hyperon is observed, where the DDRMF functional DD-LZ1-Λ1 displays the largest shrinkage effect. The result was then explained by an anticorrelation between the incompressibility coefficients K of nuclear matter and the hyperon radii R_Λ, providing a possible way to constrain the hyperon distribution inside a hypernucleus from better-determined bulk properties of nuclear matter. Additionally, it is found that the isospin evolution of the hyperon radius is primarily influenced by the density-dependent behavior of the chosen CDF functional in the NN interaction channel via the procedure of the core polarization. Thus, the sensitivity in depicting these hyperon-relevant properties in CDF models with a variety of different meson-baryon couplings holds great potential to elucidate the nuclear in-medium nature in both Λ N and NN channels.
|
http://arxiv.org/abs/2307.04789v2 | 20230710180003 | Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions | [
"Chang-An Li",
"Hai-Peng Sun",
"Björn Trauzettel"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
[email protected]
Institute for Theoretical Physics and Astrophysics, University of
Würzburg, 97074 Würzburg, Germany
Würzburg-Dresden Cluster of Excellence ct.qmat, Germany
Institute for Theoretical Physics and Astrophysics, University of
Würzburg, 97074 Würzburg, Germany
Institute for Theoretical Physics and Astrophysics, University of
Würzburg, 97074 Würzburg, Germany
Würzburg-Dresden Cluster of Excellence ct.qmat, Germany
We propose a non-Hermitian Josephson junction composed of two s-wave
superconductors separated by a non-Hermitian barrier. We discover
that the Andreev bound states in such a non-Hermitian Josephson junction
exhibit several anomalous features: (i) the spectrum of Andreev bound
states becomes complex-valued; (ii) the spectrum exhibits a Josephson
gap, a finite phase window in which the existence of Andreev bound
states is forbidden; and (iii) the Andreev bound states give rise
to a complex supercurrent, the imaginary part of which signifies dissipative
supercurrent. Moreover, in the case where the two superconductors
are of p-wave type, we observe the destruction of Majorana zero
modes due to the complex nature of the Andreev bound state spectrum. Our
predictions should be observable in Josephson junctions coupled to
the environment.
Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions
Björn Trauzettel
August 12, 2023
===================================================================
Introduction.—Recently, there has been growing interest in non-Hermitian systems <cit.>.
Non-Hermitian Hamiltonians emerge in a variety of fields such as disordered
or correlated solids <cit.>,
open or driven systems <cit.>,
and resonance phenomena <cit.>. Distinct from the Hermitian
case, the eigenvalues of non-Hermitian Hamiltonians are in general
complex. The complex-valued nature of the spectrum gives rise to new
features such as point gaps, non-Hermitian skin effects, and enriched
topological classifications <cit.>.
Many of these non-Hermitian properties have been theoretically studied
and experimentally observed <cit.>.
Whereas the efforts on characterizing novel non-Hermitian systems
have achieved impressive progress, the role of the complex-valued
spectrum due to non-Hermiticity in quantum transport is largely unexplored
despite some related investigations <cit.>.
Josephson junctions (JJs), consisting of two superconductors separated
by a weak link, provide salient platforms for studying superconducting
transport <cit.>.
Hermitian JJs have been extensively studied for decades. Phase-coherent
quantum transport in Hermitian JJs gives rise to the dc Josephson
effect: a phase-dependent supercurrent without bias. In the short
junction regime, the Josephson effect is entirely determined by the
phase-dependence of Andreev bound states (ABSs) within the superconducting
gap while the contribution from bulk states can be neglected <cit.>.
Recently, a JJ formed by a 𝒫𝒯-symmetric non-Hermitian
superconductor as a weak link has been considered, revealing discrete
ABSs at specific Josephson phases <cit.>. It is fair to say that the influence of non-Hermiticity on fundamental
properties of JJs, such as ABSs and supercurrent, is barely understood.
However, such an analysis is of high experimental relevance
because all JJs are, to some degree, coupled to the environment.
In this work, we consider a short JJ consisting of two superconductors
connected by a non-Hermitian barrier potential, referred to as a non-Hermitian
Josephson junction (NHJJ) [see Fig. <ref>(a)]. It
is an effective model for the setup shown in Fig. <ref>(b),
in which a dissipative lead is coupled to the interface of the junction <cit.>.
We find that the ABSs in such an NHJJ show anomalous behavior as compared
to their Hermitian counterparts. Their spectrum shows complex values
and exhibits Josephson gaps. The Josephson gaps indicate phase windows
in which ABSs do not exist [purple regions in Fig. <ref>(c)].
As a result, the Josephson current becomes complex and its imaginary
part signifies dissipative supercurrent. In addition, in p-wave
JJs with a non-Hermitian barrier potential, we show that Majorana
zero modes and the fractional Josephson effect are prohibited due
to the complex-valued nature of the ABS spectrum.
Model and particle-hole symmetry.—We
consider a minimal 1D NHJJ model as sketched in Fig. <ref>(a):
Two superconductors are separated by a non-Hermitian barrier. The
superconductors can either be of s-wave or p-wave type. The
barrier potential is characterized by an imaginary potential
U(x)=-iVδ(x), V>0,
which represents physically an on-site loss. The strength V measures
the magnitude of the loss.
We first analyze the particle-hole symmetry (PHS) constraint on the effective
Hamiltonian such that the non-Hermitian barrier potential can be properly
taken into account. The NHJJ is described by a Bogoliubov-de Gennes
(BdG) Hamiltonian H=∫ dx Ψ^†(x)ℋ_BdGΨ(x),
where Ψ^†(x) and Ψ(x) are the field operators.
Due to the non-Hermiticity of the Hamiltonian considered here, H^*≠ H^T, PHS acting on ℋ_BdG can be defined
in two distinct ways <cit.>:
Uℋ_BdG^*U^-1 =-ℋ_BdG (type I),
Uℋ_BdG^TU^-1 =-ℋ_BdG (type II).
Here, T is the transpose operation and U is a unitary matrix.
In the presence of PHS, the energy eigenvalues of ℋ_BdG
always come in pairs. For the above two types of PHSs, the relating
energy pairs are E⟷-E^* for type I PHS and
E⟷-E for type II PHS, respectively. To determine
which type of PHS acting on ℋ_BdG is more
relevant, we further consider the PHS constraint on the retarded Green's
function. We find that the retarded Green's function transforms as
U^†[G^R(E)]^*U=-G^R(-E) under PHS (see <cit.> for more details). This transformation
(founded in causality) is consistent with type I PHS stated in Eq. (<ref>).
Therefore, we conclude that type I PHS acting on ℋ_BdG
is physically more relevant. In the following, we only consider
the influence of type I PHS on the spectrum of ABSs.
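As a brief consistency sketch of the pairing rules quoted above (using only the two symmetry relations, with no further input): if ℋ_BdGψ=Eψ, then for type I PHS
\mathcal{H}_{\mathrm{BdG}}(U\psi^{*})=-U\mathcal{H}_{\mathrm{BdG}}^{*}\psi^{*}=-E^{*}(U\psi^{*}),
so the eigenvalues come in pairs (E,-E^{*}); for type II PHS, the transpose has the same spectrum, so \mathrm{spec}(\mathcal{H}_{\mathrm{BdG}})=\mathrm{spec}(\mathcal{H}_{\mathrm{BdG}}^{T})=\mathrm{spec}(-\mathcal{H}_{\mathrm{BdG}}) and the pairs are (E,-E).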
The BdG Hamiltonian to describe the NHJJ takes the form
ℋ_BdG=([ [-ħ^2∂_x^2/2m-μ]+U(x) Δ̂(x); Δ̂^†(x) -[-ħ^2∂_x^2/2m-μ]+U(x) ]),
where m is the effective mass of the electrons, μ the chemical
potential, and Δ̂(x) the pairing potential which can
be either of s-wave or p-wave type in our model. The corresponding
Nambu basis is chosen as Ψ^†(x)=(Ψ_↑^†(x),Ψ_↓(x)).
Considering the spectrum of the system, electron and hole excitations
are described by the BdG equation ℋ_BdGψ(x)=Eψ(x),
where the wave function ψ(x)=(u(x),v(x))^T is a mixture of
electron and hole components, and E is the excitation energy measured
relative to the Fermi energy.
Complex ABS spectrum.—We first consider
an s-wave NHJJ, where the two superconductors are both of s-wave
type. Without loss of generality, we assume the left and right superconductors
have a constant pairing potential of the same magnitude but different
phases, i.e., Δ̂(x<0)=Δ and Δ̂(x>0)=Δ e^iϕ
where ϕ is the phase difference across the junction. This pairing
potential together with Eq. (<ref>) constitutes
a simple but nontrivial model of an NHJJ. We are interested in the
ABSs existing within the superconducting gap. We may write the wave
function of bound states as <cit.>
ψ_B(x)=∑_η,αA_ηαe^iασ_ηk_ηx([ e^iθ_ηα/2; e^-iθ_ηα/2 ])
with η=e/h, σ_e≡1, σ_h≡-1, and
α=±. The angles are defined as θ_η+≡σ_ηarccos(E_B/Δ)+ϕ
and θ_η-≡σ_ηarccos(E_B/Δ) with
the energy of bound states E_B. A_ηα is the coefficient
of different evanescent modes. The bound states should decay exponentially
for |x|→∞ with a finite decay length λ.
Therefore, the necessary condition for the existence of ABSs is given
by
λ=(ħ v_F/Δ)·1/√(1-E_B^2/Δ^2), Re(λ)>0,
where v_F is the Fermi velocity. This condition also determines
the range of Josephson gaps as we explain below.
The boundary conditions of ψ_B(x) at x=0 yield the secular
equation to determine the ABSs as <cit.>
(Z^2+1)+2iZ√(Δ^2/E_B^2-1)=(Δ^2/E_B^2)[Z^2+cos^2(ϕ/2)].
The dimensionless strength Z of the non-Hermitian
barrier potential is defined as Z≡mV/(ħ^2k_F),
where k_F is the Fermi wavevector. Owing to the imaginary term
2iZ√(Δ^2/E_B^2-1) in Eq. (<ref>),
the spectrum of ABSs can be complex, fundamentally different from
Hermitian JJs <cit.>.
At Z=0, the spectrum of ABSs reduces to the Kulik-Omel'yanchuk
(KO) limit E_B^±(ϕ)=±Δcos(ϕ/2) for
a bare JJ <cit.>. For Z≫1, the spectrum becomes
E_B^±(ϕ)=±Δ, merging to the superconducting gap
edges (without bound states). Therefore, we mainly focus on the most
interesting regime with 0<Z<1 in the following [For the case Z≥1, the condition for a bound state Re(λ)>0
is not satisfied. ].
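As a quick check of the Hermitian limit (a one-line sketch, not an independent result), setting Z=0 in the secular equation gives
1=\frac{\Delta^{2}}{E_{B}^{2}}\cos^{2}\!\left(\frac{\phi}{2}\right)\;\Longrightarrow\;E_{B}^{\pm}(\phi)=\pm\Delta\cos\!\left(\frac{\phi}{2}\right),
which is precisely the KO spectrum of the bare junction.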
Let us consider special values of ϕ first and subsequently present
the general solution of Eq. (<ref>). At ϕ=2nπ
with integer n, a trivial solution E_B^±(ϕ=2nπ)=±Δ
is possible, merging into the bulk continuum. At ϕ=(2n+1)π,
we find that the spectrum of ABSs is purely imaginary taking E_B^±[(2n+1)π]=-iZΔ/√(1-Z^2),
consistent with the constraint on the spectrum imposed by type I PHS. For general
phase difference ϕ∈[ϕ_b,ϕ_t], solving the secular
equation Eq. (<ref>) leads to
E_B^±(ϕ)/Δ=[±ζcos(ϕ/2)-iZ√(sin^2(ϕ/2)-Z^2)]/(1-Z^2),
where we define the sign function ζ=sgn[cos(ϕ/2)],
bottom phase edge ϕ_b≡2nπ+ϕ_0(Z), and top phase
edge ϕ_t≡(2n+1)π-ϕ_0(Z) with ϕ_0(Z)≡2arcsin(√(2)Z/√(1+Z^2)). Note that the ABSs cannot reach zero excitation energy for nonzero Z. We plot the spectrum of ABSs in Fig. <ref>(c). Notably,
the spectrum of ABSs becomes complex, indicating the coupling to the
environment.
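A short verification of the values quoted at ϕ=(2n+1)π, combined with the decay-length condition (a sketch in which the decaying branch is selected):
E_{B}^{\pm}\big|_{\phi=\pi}=\frac{-iZ\sqrt{1-Z^{2}}}{1-Z^{2}}\,\Delta=\frac{-iZ\Delta}{\sqrt{1-Z^{2}}},\qquad 1-\frac{(E_{B}^{\pm})^{2}}{\Delta^{2}}=\frac{1}{1-Z^{2}}\;\Longrightarrow\;\lambda=\frac{\hbar v_{F}}{\Delta}\sqrt{1-Z^{2}},
so Re(λ)>0 holds only for Z<1, consistent with the restriction to 0<Z<1 made above.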
Josephson gap.—In general, the spectrum
of ABSs is a continuous function with respect to the Josephson phase
ϕ. However, we find that within a finite phase
window Φ_W≡[2nπ-ϕ_0(Z),2nπ+ϕ_0(Z)], ABSs
are not allowed in the NHJJ. We call such phase windows Josephson
gaps, where ϕ_0(Z) denotes the Josephson gap edge. At ϕ_0(Z),
the ABSs exhibit E_B^±(ϕ_0)≈Δ(±1-iZ^2)
for Z≪1. Analytically, the Josephson gap is a direct consequence
of the constraint condition of bound states in Eq. (<ref>).
We now explain the appearance of Josephson gaps as a consequence of
the competition between finite phase difference ϕ across the
junction and phase-breaking scattering at the junction. In Hermitian
JJs, in response to a finite phase difference across the junction,
a supercurrent carried by ABSs flows from one superconductor to the
other one <cit.>. This mechanism
applies similarly to the non-Hermitian case. However, the non-Hermitian
barrier potential introduces decoherence of quasi-particles. As a
result, supercurrent and thus ABSs do not appear unless the phase
difference ϕ is large enough to overcome the non-Hermitian barrier
strength Z. To see this point more clearly, let us focus on the
case ϕ∼0 and Z≪1, where we obtain the Josephson gap
edge increasing with Z as ϕ_0(Z)≃2√(2)Z. Indeed,
the Josephson gap increases almost linearly with increasing barrier
potential strength Z even for larger Z, as shown in Fig. <ref>(a).
The Josephson gap can be described by 2ϕ_0(Z)=4arcsin(√(2)Z/√(1+Z^2)),
which is controllable by tuning Z. Note that it approaches 2π
in the limit Z→1, indicating the total suppression of
ABSs.
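For completeness, the two limits quoted for the gap follow from a simple expansion (a sketch):
\phi_{0}(Z)=2\arcsin\!\left(\frac{\sqrt{2}Z}{\sqrt{1+Z^{2}}}\right)\simeq2\sqrt{2}\,Z+\mathcal{O}(Z^{3})\quad(Z\ll1),\qquad\lim_{Z\to1}2\phi_{0}(Z)=4\arcsin(1)=2\pi.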
In the Hermitian case, the spectrum of ABSs can be written as
E_B^±(ϕ)=±Δ√(1-Tsin^2(ϕ/2)) with
T the transmission probability through the junction in the normal
state <cit.>. However, the complex ABS spectrum
of NHJJ does not fulfill a similar relation. The reason is that the
scattering matrix becomes non-unitary and particle number (modulo
2) is not conserved any more <cit.>.
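To make this non-unitarity explicit, one may solve the normal-state scattering problem for the bare barrier U(x)=-iVδ(x) at the Fermi wavevector k_F (a textbook delta-barrier matching, included here only as an illustrative sketch; t and r denote the transmission and reflection amplitudes):
t=\frac{1}{1+Z},\qquad r=-\frac{Z}{1+Z},\qquad |t|^{2}+|r|^{2}=\frac{1+Z^{2}}{(1+Z)^{2}}<1\quad(Z>0),
so probability already leaks into the environment in the normal state, and a single transmission probability T cannot parametrize the complex ABS spectrum.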
Dissipative supercurrent.—The current-phase
relation is an important characterization of JJs. We obtain the supercurrent
by employing the formula I_s(ϕ)=(2e/ħ)dℱ(ϕ)/dϕ,
where ℱ(ϕ) is the free energy of the system <cit.>.
At zero temperature, the free energy of the JJ is determined by ABSs.
Therefore, the supercurrent flowing across the junction is carried
by ABSs. In the non-Hermitian case, the supercurrent can be complex.
It is given by I_s(ϕ)=-(2e/ħ)dRe[E_B^+(ϕ)]/dϕ+i(2e/ħ)dIm[E_B^+(ϕ)]/dϕ.
Explicitly, within the Josephson phase region ϕ∈[ϕ_b,ϕ_t],
it reads
I_s(ϕ)=(Δ e/ħ)[ζsin(ϕ/2)/(1-Z^2)-iZsin(ϕ)/(2(1-Z^2)√(sin^2(ϕ/2)-Z^2))].
The current-phase relation is plotted in Fig. <ref>(b).
We note that it is 2π-periodic and the supercurrent does not
appear within the Josephson gap Φ_W. The supercurrent exhibits a jump at the Josephson gap edges.
In particular, the real part Re[I_s(ϕ)] jumps from
0 to √(2)Δ Ze/ħ for Z≪1, at the Josephson
gap edges ϕ_0(Z).
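The current-phase relation above is simply the ϕ-derivative of the ABS energy; as a consistency sketch (for ϕ inside the allowed window, where ζ is constant):
-\frac{2e}{\hbar}\,\partial_{\phi}\mathrm{Re}\,E_{B}^{+}=\frac{e\Delta}{\hbar}\,\frac{\zeta\sin(\phi/2)}{1-Z^{2}},\qquad \frac{2e}{\hbar}\,\partial_{\phi}\mathrm{Im}\,E_{B}^{+}=-\frac{e\Delta}{\hbar}\,\frac{Z\sin\phi}{2(1-Z^{2})\sqrt{\sin^{2}(\phi/2)-Z^{2}}}.
At the gap edge, sin(ϕ_0/2)=√(2)Z/√(1+Z^2)≈√(2)Z for Z≪1, which reproduces the quoted jump of Re[I_s(ϕ)] from 0 to ≈√(2)ΔZe/ħ.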
The supercurrent is related to phase-coherent multiple Andreev reflections.
Hence, ABSs appear and carry the supercurrent across the junction
via Cooper-pair transfer <cit.>.
In the non-Hermitian case, the spectrum of ABSs stated in Eq. (<ref>)
is complex with a negative imaginary part. Indeed, the resonant condition
for the Andreev reflection coefficient A_h-(E)={(Δ/E)[cos^2(ϕ/2)+√(Δ^2-E^2)(sinϕ+i2Z)/(2E)]}/{(Z^2+1)E^2-2iZE√(Δ^2-E^2)-Δ^2[Z^2+cos^2(ϕ/2)]}
inherits the same spectrum <cit.>. Therefore, quasi-particles
(electrons and holes) acquire a finite lifetime such that they partially
lose their phase memory when traveling to the interfaces, as sketched
in Fig. <ref>(c). This introduces dissipation to
the multiple Andreev reflections process <cit.>.
Hence, the supercurrent becomes complex. In the dissipative multiple
Andreev reflections, quasi-particles with a finite lifetime escape
into the external bath. Correspondingly, the imaginary part of Eq. (<ref>)
may effectively be interpreted as a leakage of supercurrent to the
environment. Its magnitude indicates how strongly it leaks out of the
JJ. We note that imaginary currents in normal states have been employed
to describe delocalized behavior or dissipation of eigenstates <cit.>,
while to the best of our knowledge, a complex supercurrent has not
been addressed before.
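If one adopts the standard resonance reading of a complex bound-state energy (an interpretive sketch rather than a statement made in the text; ε_B and Γ_B are introduced here for illustration), writing E_B^+=ε_B-iΓ_B/2 assigns the ABS a finite lifetime
\tau_{B}=\frac{\hbar}{\Gamma_{B}}=\frac{\hbar(1-Z^{2})}{2Z\Delta\sqrt{\sin^{2}(\phi/2)-Z^{2}}},
which diverges in the Hermitian limit Z→0 and is shortest near ϕ=(2n+1)π, in line with the picture of quasi-particles partially losing phase coherence at the lossy barrier.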
Hence, the above anomalous features, including the complex ABS spectrum,
Josephson gaps, and the dissipative supercurrent, constitute defining
properties of NHJJs, with no counterparts in the Hermitian limit.
These features may have potential applications. For example, in the
presence of Josephson gaps, the NHJJ can in principle work as a supercurrent
switch by tuning the Josephson phase: “on” state with ϕ
located outside of Josephson gaps; “off” state with ϕ inside
of Josephson gaps.
Anomalous ABSs in p-wave NHJJs.—Next,
we examine the existence of anomalous ABSs in p-wave NHJJs, where
the two superconductors are both of p-wave type. The BdG Hamiltonian
is the same as in Eq. (<ref>) but with the pairing potential
changed to p-wave pairing as Δ̂(x<0)=-iΔ∂_x/k_F
and Δ̂(x>0)=-iΔ e^iϕ∂_x/k_F <cit.>.
Following the same procedure as before, we arrive at a secular equation
to determine ABSs as
(Z^2+1)+2iZ√(Δ^2/E_B^2-1)=(Δ^2/E_B^2)cos^2(ϕ/2)
for p-wave NHJJs <cit.>. At Z=0, the ABSs reduce
to E_B^±(ϕ)=±Δcos(ϕ/2). We focus on the regime
0<Z<1 hereafter. In the Josephson phase region ϕ∈[ϕ_b^p,ϕ_t^p],
the general solution of the secular equation yields E_B^+(ϕ)/Δ=[√(cos^2(ϕ/2)-Z^2)-iZϑsin(ϕ/2)]/(1-Z^2)
with E_B^-(ϕ)=-[E_B^+(ϕ)]^* by type I PHS. The
bottom and top phase edges are defined as ϕ_b^p≡2nπ+ϕ_0^p(Z)
and ϕ_t^p≡(2n+1)π-ϕ_0^p(Z) with ϕ_0^p(Z)=2arcsin(Z√((1-Z^2)/(1+Z^2))),
and the sign function is given by ϑ=sgn[sin(ϕ/2)].
At the special Josephson phase ϕ=(2n+1)π, the energy modes
are located at E_B^±[(2n+1)π]=-2iZΔ/(1-Z^2),
deviating from the well-known Majorana zero-energy modes in p-wave
Josephson junctions <cit.>. Due to type I
PHS in the p-wave case, if (u,v)^T is an eigenstate with eigenenergy
E, then (v^*,u^*)^T is also an eigenstate with eigenenergy
-E^*. Since there is no zero-energy solution in the non-Hermitian
case, the Majorana condition (u,v)^T∝(v^*,u^*)^T
cannot be fulfilled, such that the topological protection of Majorana
zero modes is destroyed by the non-Hermitian coupling. A similar conclusion has been drawn in normal-superconductor (NS) junctions formed with 1D topological superconductors <cit.>.
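The absence of zero modes can be read off directly from the p-wave secular equation (a short sketch): at ϕ=(2n+1)π its right-hand side vanishes, so
(Z^{2}+1)=-2iZ\sqrt{\frac{\Delta^{2}}{E_{B}^{2}}-1}\;\Longrightarrow\;E_{B}^{2}=-\frac{4Z^{2}\Delta^{2}}{(1-Z^{2})^{2}}\;\Longrightarrow\;E_{B}=-\frac{2iZ\Delta}{1-Z^{2}}\ \text{(decaying branch)},
which is nonzero for any Z>0, so the Majorana condition indeed cannot be satisfied.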
We plot the spectrum of ABSs in p-wave NHJJ in Fig. <ref>(a).
The spectrum is complex and it exhibits Josephson gaps Φ_W^p=[2nπ-ϕ_0^p(Z),2nπ+ϕ_0^p(Z)]
with a gap value of 2ϕ_0^p(Z)=4arcsin(Z√((1-Z^2)/(1+Z^2))).
It shows 2π periodicity in the presence of Josephson gaps, different
from the Hermitian topological Josephson junctions <cit.>.
In the limits ϕ∼0 and Z≪1, the Josephson gap edge is
±ϕ_0^p(Z)=±2Z, proportional to Z. At ϕ=ϕ_0^p(Z),
the ABSs become E_B^±(ϕ_0^p)≈Δ(±1-iZ^2),
the same as in the s-wave case. However, the Josephson gap does
not always increase with increasing Z, as shown in Fig. <ref>(b).
It reaches a maximum value 4arcsin(√(2)-1) at Z_m=√(√(2)-1)
and then decreases to zero gradually. Similar to the s-wave case,
the complex nature of ABSs can also give rise to a complex supercurrent.
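The non-monotonic behavior of the gap follows from a one-line extremization (a sketch; u=Z^2 is introduced for convenience): the square of the arcsine argument is
f(u)=\frac{u(1-u)}{1+u},\qquad f'(u)=\frac{1-2u-u^{2}}{(1+u)^{2}}=0\;\Longrightarrow\;u=\sqrt{2}-1,
so Z_m=√(√(2)-1), where the argument equals √(2)-1 and the gap reaches 4arcsin(√(2)-1); beyond Z_m, f(u) decreases and the gap closes as Z→1.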
Discussion and conclusion.—The non-Hermitian
barrier potential mimics the coupling of the JJ to the environment
through a dissipative lead <cit.>.
Consequently, quasi-particles obtain a finite lifetime and partially
leak from the junction to the environment. This loss process is reflected
by the loss term in Eq. (<ref>) <cit.>.
It gives rise to dissipation responsible for anomalous non-Hermitian
physics in the junction. Alternatively, the relevant dissipation may
also be induced by shining light on the interface <cit.>.
Our results are obtained for 1D models, for simplicity, but they can
be generalized to higher spatial dimensions in a straightforward way.
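As a minimal sketch of how such a loss term can emerge (assuming a wide-band dissipative lead attached locally at x=0 with tunneling amplitude t_c and lead density of states ρ_0; these parameters are illustrative and not specified in the text), integrating out the lead produces a purely imaginary local self-energy,
\Sigma^{R}(x)\simeq-i\pi\rho_{0}|t_{c}|^{2}\,\delta(x)\equiv-iV\delta(x),\qquad V=\pi\rho_{0}|t_{c}|^{2},
so the barrier strength V is set by the transparency of the contact to the environment.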
In conclusion, we have analyzed the physical properties of NHJJs with
anomalous features. In particular, Josephson gaps emerge in the ABSs
due to the competition of the coherent superconducting phase with
the non-Hermitian decoherence effect; due to the complex-valued nature
of ABSs, the supercurrent carried by ABSs becomes dissipative. These
characteristic properties of NHJJs have no Hermitian counterparts.
We expect that such anomalous features can be observed in JJs coupled
to the environment.
C.A.L. and H.P.S. contributed equally to this work. We thank Jan Budich, Jian Li, and Chunxu Zhang for helpful discussions.
This work was supported by the Würzburg-Dresden Cluster of Excellence
ct.qmat, EXC2147, project-id 390858490, the DFG (SFB 1170), and the
Bavarian Ministry of Economic Affairs, Regional Development and Energy
within the High-Tech Agenda Project “Bausteine für
das Quanten Computing auf Basis topologischer Materialen”.
[Bender(2007)]Bender07RPP
author author C. M. Bender, title title Making sense of
non-hermitian hamiltonians, 10.1088/0034-4885/70/6/r03
journal journal Rep. Prog. Phys. volume 70, pages 947 (year
2007)NoStop
[El-Ganainy et al.(2018)El-Ganainy, Makris, Khajavikhan,
Musslimani, Rotter, and Christodoulides]El-Ganainy18nphys
author author R. El-Ganainy, author K. G. Makris, author M. Khajavikhan,
author Z. H. Musslimani,
author S. Rotter, and author D. N. Christodoulides, title title Non-Hermitian physics and pt
symmetry, https://doi.org/10.1038/nphys4323 journal
journal Nat. Phys. volume 14, pages 11 (year 2018)NoStop
[Ashida et al.(2020)Ashida,
Gong, and Ueda]Ashida20AP
author author Y. Ashida, author Z. Gong, and author M. Ueda, title title Non-Hermitian physics, 10.1080/00018732.2021.1876991 journal journal Adv. Phys. volume 69, pages
249 (year 2020)NoStop
[Bergholtz et al.(2021)Bergholtz, Budich, and Kunst]Bergholtz21rmp
author author E. J. Bergholtz, author J. C. Budich, and author F. K. Kunst, title title Exceptional
topology of non-Hermitian systems, 10.1103/RevModPhys.93.015005 journal journal
Rev. Mod. Phys. volume 93, pages
015005 (year 2021)NoStop
[Okuma and Sato(2023)]Okuma23arcmp
author author N. Okuma and author M. Sato, title title Non-Hermitian topological
phenomena: A review, 10.1146/annurev-conmatphys-040521-033133 journal journal Ann. Rev. Condens. Matter Phys. volume
14, pages 83 (year 2023)NoStop
[Zyuzin and Zyuzin(2018)]Zyuzin18prb
author author A. A. Zyuzin and author A. Y. Zyuzin, title title Flat band in
disorder-driven non-Hermitian Weyl semimetals, 10.1103/PhysRevB.97.041203 journal journal Phys.
Rev. B volume 97, pages 041203(R)
(year 2018)NoStop
[Papaj et al.(2019)Papaj,
Isobe, and Fu]Papaj19prb
author author M. Papaj, author H. Isobe, and author L. Fu, title title Nodal arc of disordered Dirac fermions
and non-Hermitian band theory, 10.1103/PhysRevB.99.201107
journal journal Phys. Rev. B volume 99, pages 201107(R) (year
2019)NoStop
[Michen and Budich(2022)]Michen22prr
author author B. Michen and author J. C. Budich, title title Mesoscopic
transport signatures of disorder-induced non-Hermitian phases, 10.1103/PhysRevResearch.4.023248 journal journal Phys. Rev. Res. volume 4, pages 023248 (year 2022)NoStop
[Yoshida et al.(2019)Yoshida, Peters, Kawakami, and Hatsugai]Yoshida19prb
author author T. Yoshida, author R. Peters,
author N. Kawakami, and author Y. Hatsugai, title title Symmetry-protected exceptional rings in
two-dimensional correlated systems with chiral symmetry, 10.1103/PhysRevB.99.121101 journal journal Phys.
Rev. B volume 99, pages 121101(R)
(year 2019)NoStop
[Nagai et al.(2020)Nagai,
Qi, Isobe, Kozii, and Fu]Nagai20prl
author author Y. Nagai, author Y. Qi, author H. Isobe, author
V. Kozii, and author
L. Fu, title title DMFT reveals the non-Hermitian topology and Fermi arcs in
heavy-fermion systems, 10.1103/PhysRevLett.125.227204
journal journal Phys. Rev. Lett. volume 125, pages 227204 (year
2020)NoStop
[Zhang and Gong(2020)]Zhangx20prb
author author X. Zhang and author J. Gong, title title Non-Hermitian floquet
topological phases: Exceptional points, coalescent edge modes, and the skin
effect, 10.1103/PhysRevB.101.045415 journal
journal Phys. Rev. B volume 101, pages 045415 (year 2020)NoStop
[Makris et al.(2008)Makris,
El-Ganainy, Christodoulides, and Musslimani]Makris08prl
author author K. G. Makris, author R. El-Ganainy,
author D. N. Christodoulides, and author Z. H. Musslimani, title title Beam dynamics in
𝒫𝒯 symmetric optical lattices, 10.1103/PhysRevLett.100.103904 journal journal
Phys. Rev. Lett. volume 100, pages
103904 (year 2008)NoStop
[Guo et al.(2009)Guo,
Salamo, Duchesne, Morandotti,
Volatier-Ravat, Aimez, Siviloglou, and Christodoulides]GuoA09prl
author author A. Guo, author G. J. Salamo,
author D. Duchesne, author R. Morandotti, author
M. Volatier-Ravat, author
V. Aimez, author G. A. Siviloglou, and author D. N. Christodoulides, title title Observation of 𝒫𝒯-symmetry breaking
in complex optical potentials, 10.1103/PhysRevLett.103.093902 journal journal
Phys. Rev. Lett. volume 103, pages
093902 (year 2009)NoStop
[Nakagawa et al.(2018)Nakagawa, Kawakami, and Ueda]Nakagawa18prl
author author M. Nakagawa, author N. Kawakami,
and author M. Ueda, title title Non-Hermitian Kondo effect
in ultracold alkaline-earth atoms, 10.1103/PhysRevLett.121.203001 journal journal
Phys. Rev. Lett. volume 121, pages
203001 (year 2018)NoStop
[Rotter(2009)]Rotter09JA
author author I. Rotter, title title A non-Hermitian
Hamilton operator and the physics of open quantum systems, 10.1088/1751-8113/42/15/153001 journal journal
J. Phys. A: Math. Theor. volume 42, pages 153001 (year 2009)NoStop
[Li et al.(2019a)Li, Harter, Liu, de Melo,
Joglekar, and Luo]Lij19nc
author author J. Li, author A. K. Harter,
author J. Liu, author
L. de Melo, author Y. N. Joglekar, and author L. Luo, title title
Observation of parity-time symmetry breaking transitions in a dissipative
floquet system of ultracold atoms, 10.1038/s41467-019-08596-1 journal journal Nat.
Commun. volume 10, pages 855
(year 2019a)NoStop
[Xiao et al.(2019)Xiao,
Wang, Zhan, Bian,
Kawabata, Ueda, Yi, and Xue]Xiaol19prl
author author L. Xiao, author K. Wang, author X. Zhan, author
Z. Bian, author K. Kawabata, author M. Ueda, author W. Yi, and author P. Xue, title title Observation of critical
phenomena in parity-time-symmetric quantum dynamics, 10.1103/PhysRevLett.123.230401 journal journal
Phys. Rev. Lett. volume 123, pages
230401 (year 2019)NoStop
[Moiseyev(2011)]Moiseyev11
author author N. Moiseyev, https://doi.org/10.1017/CBO9780511976186 title Non-Hermitian Quantum Mechanics (publisher
Cambridge University Press, Cambridge, UK, year
2011)NoStop
[Lee(2016)]Lee16prl
author author T. E. Lee, title title Anomalous edge
state in a non-Hermitian lattice, 10.1103/PhysRevLett.116.133903 journal journal
Phys. Rev. Lett. volume 116, pages
133903 (year 2016)NoStop
[Yao and Wang(2018)]Yao18prl
author author S. Yao and author Z. Wang, title title Edge states and topological
invariants of non-Hermitian systems, 10.1103/PhysRevLett.121.086803 journal journal
Phys. Rev. Lett. volume 121, pages
086803 (year 2018)NoStop
[Kunst et al.(2018)Kunst,
Edvardsson, Budich, and Bergholtz]Kunst18prl
author author F. K. Kunst, author E. Edvardsson,
author J. C. Budich, and author E. J. Bergholtz, title title Biorthogonal bulk-boundary
correspondence in non-Hermitian systems, 10.1103/PhysRevLett.121.026808 journal journal
Phys. Rev. Lett. volume 121, pages
026808 (year 2018)NoStop
[Gong et al.(2018)Gong,
Ashida, Kawabata, Takasan,
Higashikawa, and Ueda]GonZP18prx
author author Z. Gong, author Y. Ashida,
author K. Kawabata, author K. Takasan, author
S. Higashikawa, and author
M. Ueda, title title Topological phases of non-Hermitian systems, 10.1103/PhysRevX.8.031079 journal journal Phys. Rev. X volume 8, pages
031079 (year 2018)NoStop
[Zhou and Lee(2019)]Zhou19prb
author author H. Zhou and author J. Y. Lee, title title Periodic table for
topological bands with non-Hermitian symmetries, 10.1103/PhysRevB.99.235112 journal journal Phys.
Rev. B volume 99, pages 235112
(year 2019)NoStop
[Kawabata et al.(2019a)Kawabata, Shiozaki, Ueda, and Sato]Kawabata19prx
author author K. Kawabata, author K. Shiozaki,
author M. Ueda, and author M. Sato, title
title Symmetry and topology in non-Hermitian
physics, 10.1103/PhysRevX.9.041015 journal
journal Phys. Rev. X volume 9, pages 041015 (year 2019a)NoStop
[Kawabata et al.(2019b)Kawabata, Bessho, and Sato]Kawabata19prl
author author K. Kawabata, author T. Bessho, and author M. Sato, title title Classification of exceptional points and
non-Hermitian topological semimetals, 10.1103/PhysRevLett.123.066405 journal journal
Phys. Rev. Lett. volume 123, pages
066405 (year 2019b)NoStop
[Zhang et al.(2020)Zhang,
Yang, and Fang]ZhangK20prl
author author K. Zhang, author Z. Yang, and author C. Fang, title title Correspondence between winding numbers
and skin modes in non-Hermitian systems, 10.1103/PhysRevLett.125.126402 journal journal
Phys. Rev. Lett. volume 125, pages
126402 (year 2020)NoStop
[Okuma et al.(2020)Okuma,
Kawabata, Shiozaki, and Sato]Okuma20prl
author author N. Okuma, author K. Kawabata,
author K. Shiozaki, and author M. Sato, title title Topological origin of non-Hermitian skin
effects, 10.1103/PhysRevLett.124.086801 journal journal Phys. Rev. Lett. volume 124, pages 086801 (year
2020)NoStop
[Yao et al.(2018)Yao,
Song, and Wang]Yao18prl2
author author S. Yao, author F. Song, and author Z. Wang, title title Non-Hermitian Chern bands, 10.1103/PhysRevLett.121.136802 journal journal Phys. Rev. Lett. volume 121, pages 136802 (year 2018)NoStop
[Leykam et al.(2017)Leykam,
Bliokh, Huang, Chong, and Nori]Leykam17prl
author author D. Leykam, author K. Y. Bliokh,
author C. Huang, author Y. D. Chong, and author F. Nori, title
title Edge modes, degeneracies, and topological
numbers in non-Hermitian systems, 10.1103/PhysRevLett.118.040401 journal journal
Phys. Rev. Lett. volume 118, pages
040401 (year 2017)NoStop
[Shen et al.(2018)Shen,
Zhen, and Fu]ShenH18prl
author author H. Shen, author B. Zhen, and author L. Fu, title title Topological band theory for
non-Hermitian Hamiltonians, 10.1103/PhysRevLett.120.146402 journal journal
Phys. Rev. Lett. volume 120, pages
146402 (year 2018)NoStop
[Yin et al.(2018)Yin,
Jiang, Li, Lü, and Chen]Yin18prb
author author C. Yin, author H. Jiang, author L. Li, author
R. Lü, and author
S. Chen, title title Geometrical meaning of winding number and its
characterization of topological phases in one-dimensional chiral
non-Hermitian systems, 10.1103/PhysRevA.97.052115
journal journal Phys. Rev. A volume 97, pages 052115 (year
2018)NoStop
[Yokomizo and Murakami(2019)]Yokomizo19prl
author author K. Yokomizo and author S. Murakami, title title Non-bloch band
theory of non-Hermitian systems, 10.1103/PhysRevLett.123.066404 journal journal
Phys. Rev. Lett. volume 123, pages
066404 (year 2019)NoStop
[Lee and Thomale(2019)]LeeCH19prb
author author C. H. Lee and author R. Thomale, title title Anatomy of skin modes and
topology in non-Hermitian systems, 10.1103/PhysRevB.99.201103 journal journal Phys.
Rev. B volume 99, pages 201103(R)
(year 2019)NoStop
[Longhi(2019)]Longhi19prl
author author S. Longhi, title title Topological
phase transition in non-Hermitian quasicrystals, 10.1103/PhysRevLett.122.237601 journal journal
Phys. Rev. Lett. volume 122, pages
237601 (year 2019)NoStop
[Lee et al.(2019)Lee,
Ahn, Zhou, and Vishwanath]LeeJY19prl
author author J. Y. Lee, author J. Ahn, author H. Zhou, and author
A. Vishwanath, title
title Topological correspondence between Hermitian and
non-Hermitian systems: Anomalous dynamics, 10.1103/PhysRevLett.123.206404 journal journal
Phys. Rev. Lett. volume 123, pages
206404 (year 2019)NoStop
[Borgnia et al.(2020)Borgnia, Kruchkov, and Slager]Borgnia20prl
author author D. S. Borgnia, author A. J. Kruchkov, and author R.-J. Slager, title title Non-Hermitian
boundary modes and topology, 10.1103/PhysRevLett.124.056802 journal journal
Phys. Rev. Lett. volume 124, pages
056802 (year 2020)NoStop
[Budich and Bergholtz(2020)]Budich20prl
author author J. C. Budich and author E. J. Bergholtz, title title Non-Hermitian
topological sensors, 10.1103/PhysRevLett.125.180403
journal journal Phys. Rev. Lett. volume 125, pages 180403 (year
2020)NoStop
[Franca et al.(2022)Franca,
Könye, Hassler, van den
Brink, and Fulga]Franca22prl
author author S. Franca, author V. Könye,
author F. Hassler, author J. van den Brink, and author C. Fulga, title
title Non-Hermitian physics without gain or loss: The
skin effect of reflected waves, 10.1103/PhysRevLett.129.086601 journal journal
Phys. Rev. Lett. volume 129, pages
086601 (year 2022)NoStop
[Jezequel and Delplace(2023)]Jezequel23prl
author author L. Jezequel and author P. Delplace, title title Non-Hermitian
spectral flows and Berry-Chern monopoles, 10.1103/PhysRevLett.130.066601 journal journal
Phys. Rev. Lett. volume 130, pages
066601 (year 2023)NoStop
[Qin et al.(2023)Qin,
Shen, and Lee]QinF23pra
author author F. Qin, author R. Shen, and author C. H. Lee, title title Non-Hermitian squeezed polarons,
10.1103/PhysRevA.107.L010202 journal
journal Phys. Rev. A volume 107, pages L010202 (year 2023)NoStop
[Guo et al.(2023)Guo,
Chen, Ding, and Hu]GuoC23prl
author author C.-X. Guo, author S. Chen, author K. Ding, and author
H. Hu, title title Exceptional non-Abelian topology in multiband
non-Hermitian systems, 10.1103/PhysRevLett.130.157201
journal journal Phys. Rev. Lett. volume 130, pages 157201 (year
2023)NoStop
[Li et al.(2022)Li,
Trauzettel, Neupert, and Zhang]LiCA22arxiv
author author C.-A. Li, author B. Trauzettel,
author T. Neupert, and author S.-B. Zhang, title title Enhancement of second-order
non-Hermitian skin effect by magnetic fields, @noop (year 2022), http://arxiv.org/abs/2212.14691 arXiv:2212.14691
[cond-mat.mes-hall] NoStop
[Sun et al.(2023)Sun,
Li, Feng, and Guo]SunJ23arxiv
author author J. Sun, author C.-A. Li,
author S. Feng, and author H. Guo, title
title Hybrid higher-order skin-topological effect in
hyperbolic lattices, @noop (year 2023), http://arxiv.org/abs/2305.19810 arXiv:2305.19810 [cond-mat.mes-hall]
NoStop
[Zeuner et al.(2015)] J. M. Zeuner, M. C. Rechtsman, Y. Plotnik, Y. Lumer, S. Nolte, M. S. Rudner, M. Segev, and A. Szameit, "Observation of a topological transition in the bulk of a non-Hermitian system," Phys. Rev. Lett. 115, 040402 (2015), doi:10.1103/PhysRevLett.115.040402.
[Ding et al.(2016)] K. Ding, G. Ma, M. Xiao, Z. Q. Zhang, and C. T. Chan, "Emergence, coalescence, and topological properties of multiple exceptional points and their experimental realization," Phys. Rev. X 6, 021007 (2016), doi:10.1103/PhysRevX.6.021007.
[Xiao et al.(2020)] L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, "Non-Hermitian bulk–boundary correspondence in quantum dynamics," Nat. Phys. 16, 761 (2020), doi:10.1038/s41567-020-0836-6.
[Ozturk et al.(2021)] F. E. Ozturk, T. Lappe, G. Hellmann, J. Schmitt, J. Klaers, F. Vewinger, J. Kroha, and M. Weitz, "Observation of a non-Hermitian phase transition in an optical quantum gas," Science 372, 88 (2021), doi:10.1126/science.abe9869.
[Liang et al.(2022)] Q. Liang, D. Xie, Z. Dong, H. Li, H. Li, B. Gadway, W. Yi, and B. Yan, "Dynamic signatures of non-Hermitian skin effect and topology in ultracold atoms," Phys. Rev. Lett. 129, 070401 (2022), doi:10.1103/PhysRevLett.129.070401.
[Zhang et al.(2023)] Q. Zhang, Y. Li, H. Sun, X. Liu, L. Zhao, X. Feng, X. Fan, and C. Qiu, "Observation of acoustic non-Hermitian Bloch braids and associated topological phase transitions," Phys. Rev. Lett. 130, 017201 (2023), doi:10.1103/PhysRevLett.130.017201.
[Liang et al.(2023)] C. Liang, Y. Tang, A.-N. Xu, and Y.-C. Liu, "Observation of exceptional points in thermal atomic ensembles," Phys. Rev. Lett. 130, 263601 (2023), doi:10.1103/PhysRevLett.130.263601.
[Xu et al.(2023)] G. Xu, X. Zhou, Y. Li, Q. Cao, W. Chen, Y. Xiao, L. Yang, and C.-W. Qiu, "Non-Hermitian chiral heat transport," Phys. Rev. Lett. 130, 266303 (2023), doi:10.1103/PhysRevLett.130.266303.
[San-Jose et al.(2016)] P. San-Jose, J. Cayao, E. Prada, and R. Aguado, "Majorana bound states from exceptional points in non-topological superconductors," Sci. Rep. 6, 21427 (2016), doi:10.1038/srep21427.
[Zhu et al.(2016)] B. Zhu, R. Lü, and S. Chen, "𝒫𝒯-symmetry breaking for the scattering problem in a one-dimensional non-Hermitian lattice model," Phys. Rev. A 93, 032129 (2016), doi:10.1103/PhysRevA.93.032129.
[Longhi(2017)] S. Longhi, "Non-Hermitian bidirectional robust transport," Phys. Rev. B 95, 014201 (2017), doi:10.1103/PhysRevB.95.014201.
[Chen and Zhai(2018)] Y. Chen and H. Zhai, "Hall conductance of a non-Hermitian Chern insulator," Phys. Rev. B 98, 245130 (2018), doi:10.1103/PhysRevB.98.245130.
[Bergholtz and Budich(2019)] E. J. Bergholtz and J. C. Budich, "Non-Hermitian Weyl physics in topological insulator ferromagnet junctions," Phys. Rev. Res. 1, 012003(R) (2019), doi:10.1103/PhysRevResearch.1.012003.
[Avila et al.(2019)] J. Avila, E. Prada, P. San-Jose, and R. Aguado, "Non-Hermitian topology as a unifying framework for the Andreev versus Majorana states controversy," Commun. Phys. 2, 133 (2019), doi:10.1038/s42005-019-0231-8.
[Shobe et al.(2021)] K. Shobe, K. Kuramoto, K.-I. Imura, and N. Hatano, "Non-Hermitian Fabry-Pérot resonances in a 𝒫𝒯-symmetric system," Phys. Rev. Res. 3, 013223 (2021), doi:10.1103/PhysRevResearch.3.013223.
[Kornich and Trauzettel(2022)] V. Kornich and B. Trauzettel, "Andreev bound states in junctions formed by conventional and 𝒫𝒯-symmetric non-Hermitian superconductors," Phys. Rev. Res. 4, 033201 (2022), doi:10.1103/PhysRevResearch.4.033201.
[Sticlet et al.(2022)] D. Sticlet, B. Dóra, and C. P. Moca, "Kubo formula for non-Hermitian systems and tachyon optical conductivity," Phys. Rev. Lett. 128, 016802 (2022), doi:10.1103/PhysRevLett.128.016802.
[Geng et al.(2023)] H. Geng, J. Y. Wei, M. H. Zou, L. Sheng, W. Chen, and D. Y. Xing, "Nonreciprocal charge and spin transport induced by non-Hermitian skin effect in mesoscopic heterojunctions," Phys. Rev. B 107, 035306 (2023), doi:10.1103/PhysRevB.107.035306.
[Isobe and Nagaosa(2023)] H. Isobe and N. Nagaosa, "Anomalous Hall effect from a non-Hermitian viewpoint," Phys. Rev. B 107, L201116 (2023), doi:10.1103/PhysRevB.107.L201116.
[Kornich(2023)] V. Kornich, "Current-voltage characteristics of the N-I-PT-symmetric non-Hermitian superconductor junction as a probe of non-Hermitian formalisms," arXiv:2302.14802 [cond-mat.mes-hall] (2023).
[Likharev(1979)] K. K. Likharev, "Superconducting weak links," Rev. Mod. Phys. 51, 101 (1979), doi:10.1103/RevModPhys.51.101.
[Beenakker(1992)] C. W. J. Beenakker, "Three “universal” mesoscopic Josephson effects," in Transport Phenomena in Mesoscopic Systems, edited by H. Fukuyama and T. Ando (Springer Berlin Heidelberg, Berlin, Heidelberg, 1992), pp. 235–253.
[Golubov et al.(2004)] A. A. Golubov, M. Y. Kupriyanov, and E. Il'ichev, "The current-phase relation in Josephson junctions," Rev. Mod. Phys. 76, 411 (2004), doi:10.1103/RevModPhys.76.411.
[Tinkham(1996)] M. Tinkham, Introduction to Superconductivity (Dover Publications, Inc., Garden City, New York, 1996).
[Furusaki(1999)] A. Furusaki, "Josephson current carried by Andreev levels in superconducting quantum point contacts," Superlattices Microst. 25, 809 (1999), doi:10.1006/spmi.1999.0730.
[Beenakker and van Houten(1991)] C. W. J. Beenakker and H. van Houten, "Josephson current through a superconducting quantum point contact shorter than the coherence length," Phys. Rev. Lett. 66, 3056 (1991), doi:10.1103/PhysRevLett.66.3056.
[Kwon et al.(2004)] H.-J. Kwon, K. Sengupta, and V. M. Yakovenko, "Fractional ac Josephson effect in p- and d-wave superconductors," Eur. Phys. J. B 37, 349 (2004), doi:10.1140/epjb/e2004-00066-4.
[Fu and Kane(2009)] L. Fu and C. L. Kane, "Josephson current and noise at a superconductor/quantum-spin-Hall-insulator/superconductor junction," Phys. Rev. B 79, 161408(R) (2009), doi:10.1103/PhysRevB.79.161408.
[Dolcini et al.(2015)] F. Dolcini, M. Houzet, and J. S. Meyer, "Topological Josephson ϕ_0 junctions," Phys. Rev. B 92, 035428 (2015), doi:10.1103/PhysRevB.92.035428.
[Beenakker et al.(2013)] C. W. J. Beenakker, D. I. Pikulin, T. Hyart, H. Schomerus, and J. P. Dahlhaus, "Fermion-parity anomaly of the critical supercurrent in the quantum spin-Hall effect," Phys. Rev. Lett. 110, 017003 (2013), doi:10.1103/PhysRevLett.110.017003.
[Li2023SM] See Supplemental Material for details of (Sec. S1) particle-hole symmetry of the model, (Sec. S2) feasibility of the non-Hermitian Josephson junction model, (Sec. S3) complex Andreev bound states in the s-wave case, (Sec. S4) the normal states transport, (Sec. S5) free energy and complex supercurrent, (Sec. S6) the Andreev reflection coefficient, and (Sec. S7) complex Andreev bound states in the p-wave case, which includes Refs. <cit.>.
[Furusaki and Tsukada(1991)] A. Furusaki and M. Tsukada, "DC Josephson effect and Andreev reflection," Solid State Commun. 78, 299 (1991), doi:10.1016/0038-1098(91)90201-6.
[Beenakker(1991)] C. W. J. Beenakker, "Universal limit of critical-current fluctuations in mesoscopic Josephson junctions," Phys. Rev. Lett. 67, 3836 (1991), doi:10.1103/PhysRevLett.67.3836.
[Kulik and Omel'yanchuk(1975)] I. O. Kulik and A. N. Omel'yanchuk, "Contribution to the microscopic theory of the Josephson effect in superconducting bridges," JETP Lett. 21, 96 (1975).
[Note1] For the case Z≥1, the condition for a bound state Re(λ)>0 is not satisfied.
[Mortensen et al.(2000)] N. A. Mortensen, A.-P. Jauho, and K. Flensberg, "Dephasing in semiconductor-superconductor structures by coupling to a voltage probe," Superlattices Microst. 28, 67 (2000), https://www.sciencedirect.com/science/article/pii/S0749603600908905.
[Jiang et al.(2009)] H. Jiang, S. Cheng, Q.-F. Sun, and X. C. Xie, "Topological insulator: A new quantized spin Hall resistance robust to dephasing," Phys. Rev. Lett. 103, 036803 (2009), doi:10.1103/PhysRevLett.103.036803.
[Li et al.(2019b)] C.-A. Li, J. Li, and S.-Q. Shen, "Majorana-Josephson interferometer," Phys. Rev. B 99, 100504(R) (2019), doi:10.1103/PhysRevB.99.100504.
[Bouganne et al.(2020)] R. Bouganne, M. Bosch Aguilera, A. Ghermaoui, J. Beugnon, and F. Gerbier, "Anomalous decay of coherence in a dissipative many-body system," Nat. Phys. 16, 21 (2020), doi:10.1038/s41567-019-0678-2.
[Hatano and Nelson(1996)] N. Hatano and D. R. Nelson, "Localization transitions in non-Hermitian quantum mechanics," Phys. Rev. Lett. 77, 570 (1996), doi:10.1103/PhysRevLett.77.570.
[Zhang et al.(2022a)] S.-B. Zhang, M. M. Denner, T. Bzdušek, M. A. Sentef, and T. Neupert, "Symmetry breaking and spectral structure of the interacting Hatano-Nelson model," Phys. Rev. B 106, L121102 (2022), doi:10.1103/PhysRevB.106.L121102.
[Li et al.(2021)] Q. Li, J.-J. Liu, and Y.-T. Zhang, "Non-Hermitian Aharonov-Bohm effect in the quantum ring," Phys. Rev. B 103, 035415 (2021), doi:10.1103/PhysRevB.103.035415.
[Lutchyn et al.(2010)] R. M. Lutchyn, J. D. Sau, and S. Das Sarma, "Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures," Phys. Rev. Lett. 105, 077001 (2010), doi:10.1103/PhysRevLett.105.077001.
[Alicea(2012)] J. Alicea, "New directions in the pursuit of Majorana fermions in solid state systems," Rep. Prog. Phys. 75, 076501 (2012), http://stacks.iop.org/0034-4885/75/i=7/a=076501.
[Zhang et al.(2022b)] S. Zhang, Z. Wang, D. Pan, H. Li, S. Lu, Z. Li, et al., "Suppressing Andreev bound state zero bias peaks using a strongly dissipative lead," Phys. Rev. Lett. 128, 076803 (2022), doi:10.1103/PhysRevLett.128.076803.
[Liu et al.(2022)] D. Liu, G. Zhang, Z. Cao, H. Zhang, and D. E. Liu, "Universal conductance scaling of Andreev reflections using a dissipative probe," Phys. Rev. Lett. 128, 076802 (2022), doi:10.1103/PhysRevLett.128.076802.
[Yan et al.(2023)] Q. Yan, B. Zhao, R. Zhou, R. Ma, Q. Lyu, S. Chu, X. Hu, and Q. Gong, "Advances and applications on non-Hermitian topological photonics," Nanophotonics 12, 2247 (2023), doi:10.1515/nanoph-2022-0775.
[Nazarov and Blanter(2006)] Y. V. Nazarov and Y. M. Blanter, Quantum Transport: Introduction to Nanoscience (Cambridge University Press, 2006).
[Coleman(2015)] P. Coleman, Introduction to Many-Body Physics (Cambridge University Press, Cambridge, UK, 2015), doi:10.1017/CBO9780511976186.
[Roccati et al.(2022)] F. Roccati, G. M. Palma, F. Ciccarello, and F. Bagarello, "Non-Hermitian physics and master equations," Open Syst. Inf. Dyn. 29, 2250004 (2022), doi:10.1142/S1230161222500044.
[Brasil et al.(2013)] C. Brasil, F. Fanchini, and R. Napolitano, "A simple derivation of the Lindblad equation," Rev. Bras. Ensino Fis. 35, 1303 (2013), https://www.scielo.br/j/rbef/a/dQpkcbLqQDrbWtWBfMj9HWn/?lang=en.
[Breuer and Petruccione(2002)] H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
Supplemental material for “Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions”
§ PARTICLE-HOLE SYMMETRY OF THE MODEL
In this section, we analyze the particle-hole symmetry that is physically
relevant for a non-Hermitian Josephson junction. From field operator
perspective, particle-hole symmetry (PHS) mixes creation operator
Ψ^† and annihilation operator Ψ in a way
CΨ_αC^-1=U_αβΨ_β^†, CΨ_α^†C^-1=Ψ_αU_αβ^T,
where U represents a unitary matrix and C denotes the PHS operator.
A Hamiltonian H=Ψ_α^†ℋ_αβΨ_β
is particle-hole symmetric if CHC^-1=H. This implies that the
first-quantized Hamiltonian transforms under PHS as
Cℋ^TC^-1=-ℋ,
where T is the transpose operation. In the Hermitian case, ℋ^T=ℋ^*.
While in the non-Hermitian case, ℋ^T≠ℋ^*
in general. Thus, the PHS comes in two kinds as <cit.>:
Uℋ^*U^-1 =-ℋ, I;
Uℋ^TU^-1 =-ℋ, II.
In the presence of PHS, the excitation energy of ℋ always
comes in pairs. For the two types of PHSs above, the corresponding
energy pairs are E⟷-E^* for type I PHS and
E⟷-E for type II PHS, respectively.
To determine which one of the two types of PHS on ℋ is
physically more relevant, we further consider the constraint of PHS
on Green's function. On the one hand, the effective Hamiltonian ℋ
is directly related with the retarded Green's function. Note that
the poles of the retarded Green's function yield the eigenvalues of
the Hamiltonian, located in the lower half of the complex plane.
On the other hand, the physical indication of the retarded Green's
function as a propagator is consistent with causality. The retarded
Green's function is defined in terms of field operators as G_αβ^R(t)=-iθ(t)⟨[Ψ_α(t),Ψ_β^†(0)]⟩,
where t is the time domain and θ(t) is a Heaviside step
function. Considering PHS on field operators in Green's function,
it leads to
G_αβ^R(t) =-iθ(t)⟨[Ψ_α(t),Ψ_β^†(0)]⟩
=-iθ(t)⟨C^-1C[Ψ_α(t),Ψ_β^†(0)]⟩
=-iθ(t)⟨C[Ψ_α(t),Ψ_β^†(0)]C^-1⟩
=-U_αγ^*⟨iθ(t)[Ψ_γ^†(0),Ψ_δ(-t)]⟩U_δβ^T
=-U_αγ^*G_γδ^A(-t)U_δβ^T.
By Fourier transformation and relating the retarded and advanced Green's
function by [G_αβ^R(E)]^*=G_αβ^A(E),
the retarded Green's function fulfills
U^†[G^R(E)]^*U=-G^R(-E).
This PHS constraint on retarded Green's function is consistent with
type I PHS acting on ℋ: The eigenvalues of the system
(poles of the Green's function) reside on the same side of the complex
plane Im[E]≤0. In contrast, from the constraint of
type II PHS acting on ℋ, eigenvalues distribute symmetrically
in the upper and lower half of the complex plane. This symmetric eigenenergy
distribution is not consistent with the eigenvalues obtained from
the retarded Green's function. Thus, it is not compatible with the
requirement of causality. Therefore, we argue that type I PHS acting
on ℋ is physically more relevant to describe non-Hermitian
systems.
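As a quick numerical sanity check of the discussion above (an added illustration, not part of the original derivation; the 2×2 matrix and the parameter values are arbitrary assumptions), one can verify the type-I relation Uℋ^*U^-1=-ℋ and the resulting E⟷-E^* pairing directly:
[language=python]
import numpy as np

# Illustrative BdG-like block with a uniform loss term -i*Gamma (values are assumptions).
xi, Gamma, Delta = 0.3, 0.1, 0.5 * np.exp(0.7j)
H = np.array([[xi - 1j * Gamma, Delta],
              [np.conj(Delta), -xi - 1j * Gamma]])

U = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the type-I PHS unitary used above

# Type-I particle-hole symmetry: U H^* U^{-1} = -H
assert np.allclose(U @ H.conj() @ np.linalg.inv(U), -H)

# Consequence: eigenvalues come in pairs E <-> -E^*
E = np.linalg.eigvals(H)
assert np.allclose(sorted(E, key=np.real), sorted(-E.conj(), key=np.real))
print("type-I PHS verified; eigenvalues:", E)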
§ FEASIBILITY OF THE NON-HERMITIAN JOSEPHSON JUNCTION MODEL
In this section, we discuss the feasibility of the non-Hermitian Josephson
junction model. The crucial part is the non-Hermitian barrier potential.
It can be induced by coupling the system to the environment by a normal
dissipative lead, as shown in Fig. 1(b) of the main text. We provide
a general analysis for such an argument below. Assume there are three
components: the system of interest, the environment, and the dissipative
lead. For the interesting physics at low temperature, only several
energy levels in a finite energy window matter. The Hamiltonian for
the system can be written in its eigenbasis as H_s=∑_n_s(E_n_s-μ_s)|n_s⟩⟨ n_s|
with eigenenergy E_n_s and eigenstate |n_s⟩. This
is similar for the environment H_e=∑_n_e(E_n_e-μ_e)|n_e⟩⟨ n_e|
assuming a lower chemical potential μ_e<μ_s. Moreover,
the dissipative current I_r=∑_E_n_s>E_n_eV_es|n_e⟩⟨ n_s|
characterizes non-reciprocal quasiparticle transitions from system
to environment, accompanied by an energy relaxation process in the
environment <cit.>. Such nonzero transition amplitude
indicates an effective loss term -iΓ (Γ>0) for the
system, such that the effective Hamiltonian of the system may be written
as
H_s^eff=∑_n_s(E_n_s-iΓ-μ_s)|n_s⟩⟨ n_s|.
To capture the essential physics in a concise way, we employ a simplified
non-Hermitian barrier potential for the Josephson junction as
U(x)=-iVδ(x), V>0
to mimic the -iΓ contributions to Eq. (<ref>).
Alternatively, we also provide an argument based on the Lindblad master
equation, which describes the dynamics of an open quantum system interacting
with the environment. Explicitly, the Lindblad master equation can
be written as <cit.>
dρ_s/dt =-i[H_s,ρ_s]+γ∑_ℓ(L̂_ℓρL̂_ℓ^†-1/2{L̂_ℓ^†L̂_ℓ,ρ_s}),
where ρ_s is the density matrix for the system, L̂_ℓ
the jump operator acting on the Hilbert space of H_s, and γ
the coupling between system and the environment. To derive an effective
non-Hermitian Hamiltonian for the system, we can rewrite the Lindblad
master equation as
dρ_s/dt =-i[H_s,ρ_s]+γ∑_ℓ(L̂_ℓρ_sL̂_ℓ^†-1/2{L̂_ℓ^†L̂_ℓ,ρ_s})
=-i(H_effρ_s-ρ_sH_eff^†)+γ∑_ℓL̂_ℓρ_sL̂_ℓ^†,
where
H_eff=H_s-i/2γ∑_ℓL̂_ℓ^†L̂_ℓ.
By dropping the quantum jump terms γ∑_ℓL̂_ℓρ_sL̂_ℓ^†,
one obtains the effective non-Hermitian Hamiltonian H_eff
that describes the system of interest. This assumption may result
in a non-Hermitian Hamiltonian of the form stated in Eq. (<ref>).
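As a minimal illustration of this construction (the two-level system, jump operator, and coupling value below are assumptions made only for the example, not the junction model itself), dropping the quantum jumps indeed yields complex energies with a loss term:
[language=python]
import numpy as np

# Illustrative two-level system coupled to an environment (all values are assumptions).
H_s = np.diag([0.0, 1.0])                  # bare system levels
L = np.array([[0.0, 1.0], [0.0, 0.0]])     # jump operator |0><1| (loss channel)
gamma = 0.2                                # system-environment coupling

# Effective non-Hermitian Hamiltonian obtained after dropping the quantum jumps:
H_eff = H_s - 0.5j * gamma * (L.conj().T @ L)

# The decaying level acquires a negative imaginary part, playing the role of -i*Gamma.
print(np.linalg.eigvals(H_eff))            # [0.+0.j, 1.-0.1j]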
§ SOLUTION OF COMPLEX ANDREEV BOUND STATES IN S-WAVE CASE
In this section, we provide details for the solution of Andreev bound
states (ABSs) in s-wave non-Hermitian Josephson junctions. The
electron and hole excitations are described by the BdG equation
ℋ_BdG(x)([ u(x); v(x) ]) =E([ u(x); v(x) ]),
where
ℋ_BdG(x) =([ [-ħ^2∂_x^2/2m-μ]+U(x) Δ̂(x); Δ̂^†(x) -[-ħ^2∂_x^2/2m-μ]+U(x) ]),
U(x) =-iVδ(x), V>0.
In the eigenstate (u(x),v(x))^T, the upper component u(x)
represents electron-like and the lower component v(x) represents
hole-like excitations, respectively. The pairing potential is introduced
as
Δ(x) =
Δ, x<0,
Δe^iθ, x>0.
Specifically, type I PHS acts as
Uℋ_BdG^*U^-1=-ℋ_BdG,
where the matrix U takes the form
U=([ 0 1; -1 0 ]).
In the following, we focus on the ABSs within the superconducting
gap. The wave function of a bound state should decay exponentially
for |x|→∞. Then, the trial wavefunction can be taken
as
ψ_B(x)=([ u_B(x); v_B(x) ])=
A_h-e^ik_hx([ v_0; u_0 ])+A_e-e^-ik_ex([ u_0; v_0 ]), x<0;
A_e+e^ik_ex([ u_0e^iϕ/2; v_0e^-iϕ/2 ])+A_h+e^-ik_hx([ v_0e^iϕ/2; u_0e^-iϕ/2 ]), x>0,
where the parameters are defined as
ħk_e =√(2m(μ+i√(Δ^2-E^2))), k_h=k_e^*,
and
u_0^2 =1/2(1+i√(Δ^2-E^2)/E), v_0^2=1/2(1-i√(Δ^2-E^2)/E).
Note that u_0/v_0=(E+i√(Δ^2-E^2))/Δ=e^iθ
with θ≡arccos(E/Δ). The bound state has a decay
length λ given by λ=(ħ v_F/Δ)/√(1-E^2/Δ^2).
Therefore, the necessary condition for the existence of a bound state
is Re(λ)>0.
By enforcing continuity of the wave function ψ_B(x=0) and
the jump condition for the ψ'_B(x=0), we obtain
ψ_B(0+) =ψ_B(0-),
ψ'_B(0+)-ψ'_B(0-) =-τ_z2mV/ħ^2ψ_B(0).
Here, τ_z=diag(1,-1) is a Pauli matrix. The secular
equation is written as
([ v_0 u_0 -u_0e^iϕ/2 -v_0e^iϕ/2; u_0 v_0 -v_0e^-iϕ/2 -u_0e^-iϕ/2; -z_1v_0 z_2u_0 u_0e^iϕ/2 -v_0e^iϕ/2; -z_2u_0 z_1v_0 v_0e^-iϕ/2 -u_0e^-iϕ/2 ])([ A_h-; A_e-; A_e+; A_h+ ])=0,
where we have defined
Z ≡mV/ħ^2k_F, z_1≡1-2Z, z_2≡1+2Z.
Note that we have used the approximation k_e≈ k_h≈ k_F.
The determinant of this coefficient matrix needs to be zero to have
nontrivial solutions for the coefficients A_e/h±. After a cumbersome
calculation, we arrive at
4[u_0^4(Z+1)^2+v_0^4(Z-1)^2-2u_0^2v_0^2(Z^2+cosϕ)] =0.
Using Eq. (<ref>), this yields the basic equation to
determine ABSs as
(Z^2+1)+2Zi√(Δ^2/E^2-1)=Δ^2/E^2(Z^2+cos^2ϕ/2).
Let us first discuss some special limits and then present the general
solutions. We mainly focus on the regime 0<Z<1. Note that there
is no bound state when Z>1.
* For ϕ=2nπ, Eq. (<ref>) becomes
(Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2(Z^2+1).
Assuming E≠0, we obtain
+2Zi√(Δ^2/E^2-1) =(Δ^2/E^2-1)(Z^2+1).
Defining x=Δ^2/E^2-1, we get +2Zi√(x)=x(Z^2+1).
It gives the trivial solution E^2=Δ^2, corresponding to
x=0. If x≠0, we analyze -4Z^2=x(1+Z^2)^2 with the
solution E=±Δ(1+Z^2)/(1-Z^2). This solution does
not fulfill the condition of a bound state with Re(λ)>0.
* When ϕ=(2n+1)π, we have
(Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2Z^2.
It yields E^2=Δ^2Z^2/(Z^2-1). For 0<Z<1,
the energy is purely imaginary, E=ZΔ/(± i√(1-Z^2)).
If we substitute this result back into the original equation, we obtain
E=ZΔ/(i√(1-Z^2)) by fixing the sign convention
of √(-1)=i. Therefore, only one branch of the imaginary energy
is left.
* We now analyze the general solution for arbitrary ϕ. From the above
discussion, we find that the energy E cannot vanish. Thus we define
Δ^2/E^2=1/y. Then, we obtain
(Z^2+1)y+2Zi√(y(1-y))=Z^2+cos^2(ϕ/2).
Defining A=(Z^2+1), B=Z^2+cos^2(ϕ/2), the equation
can be simplified as
(A^2-4Z^2)y^2+(4Z^2-2AB)y+B^2 =0.
Solving this equation leads to
y =(Z^2[Z^2-sin^2(ϕ/2)]+cos^2(ϕ/2)±2Zsgn(cos(ϕ/2))cos(ϕ/2)√(Z^2-sin^2(ϕ/2)))/(1-Z^2)^2
=((sgn(cos(ϕ/2))cos(ϕ/2)±Z√(Z^2-sin^2(ϕ/2)))/(1-Z^2))^2.
This expression can be rewritten as
E/Δ=±(sgn(cos(ϕ/2))cos(ϕ/2)± Z√(Z^2-sin^2(ϕ/2)))/(1-Z^2).
At Z=0, it reduces to the Kulik-Omel'yanchuk (KO) limit E_B^±(ϕ)=±Δcos(ϕ/2) <cit.>.
Considering further the necessary condition for a bound state Re(λ)>0,
this indicates
sin^2(ϕ/2) > 2Z^2/(1+Z^2).
From this condition, we determine the Josephson gap edge at
ϕ_0(Z)=2nπ±2arcsin(√(2)Z/√(1+Z^2)), n∈ℤ.
Therefore, the spectrum of Andreev bound states is given by
E_B^±(ϕ)/Δ=(±ζcos(ϕ/2)-iZ√(sin^2(ϕ/2)-Z^2))/(1-Z^2), ϕ∈[ϕ_b,ϕ_t],
where the bottom phase edge is ϕ_b≡2nπ+ϕ_0(Z),
the top phase edge is ϕ_t≡(2n+1)π-ϕ_0(Z), and
ζ=sgn(cos(ϕ/2)). If we look at ϕ∼0
with small Z, ϕ_0(Z)=2√(2)Z. Then, the energy can be
approximated as
E_B^±(ϕ_0)≈Δ(-iZ^2).
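The closed-form spectrum above is easy to evaluate numerically. The following sketch (an added illustration with arbitrary parameter values, not part of the original work) transcribes the expression for E_B^± and checks the KO limit at Z=0:
[language=python]
import numpy as np

def abs_energy_swave(phi, Z, Delta=1.0):
    """Complex Andreev bound-state energies E_B^+/- transcribed from the spectrum above."""
    zeta = np.sign(np.cos(phi / 2))
    root = np.sqrt(np.sin(phi / 2) ** 2 - Z ** 2 + 0j)
    e_plus = Delta * (zeta * np.cos(phi / 2) - 1j * Z * root) / (1 - Z ** 2)
    return e_plus, -np.conj(e_plus)          # the partner follows from type-I PHS

Z, phi = 0.3, 0.6 * np.pi
assert np.sin(phi / 2) ** 2 > 2 * Z ** 2 / (1 + Z ** 2)   # inside the window [phi_b, phi_t]

print(abs_energy_swave(phi, Z))              # a pair (E, -E^*), both with Im E < 0
# Z -> 0 recovers the Kulik-Omel'yanchuk limit E = +/- Delta cos(phi/2)
assert np.isclose(abs_energy_swave(phi, 0.0)[0], np.cos(phi / 2))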
§ NORMAL STATES TRANSPORT
In this section, we calculate the normal state transport in the presence
of a non-Hermitian barrier potential. The normal states can be described
by the Hamiltonian
H=ħ^2k^2/2m-iVδ(x),V>0.
Then the scattering state can be expressed as
ψ(x<0) =e^ikx+re^-ikx,
ψ(x>0) =te^ikx,
where r and t are reflection and transmission amplitudes. Considering
the continuity of wave functions and the derivatives, we obtain
ψ(x=0^-) =ψ(x=0^+),
ψ'(x=0^+) -ψ'(x=0^-)=-i2mV/ħ^2ψ(0).
This equation leads to
1+r =t, t-1+r=-2Zt,
where we have defined Z=mV/(ħ^2 k_F), as before. Thus the transmission
and reflection amplitudes are
t =1/(1+Z), r=-Z/(1+Z).
The corresponding transmission and reflection “probabilities” are
T =1/(1+Z)^2, R=Z^2/(1+Z)^2.
Note that T+R=(1+Z^2)/(1+Z)^2<1 when Z>0, which corresponds
to the loss of quasiparticles to the environment due to the non-Hermitian
barrier. In contrast, in the gain case, T+R≥1 in general.
§ FREE ENERGY AND COMPLEX SUPERCURRENT
In this section, we discuss the supercurrent carried by complex ABSs.
We obtain the supercurrent directly by
I_s(ϕ)=(2e/ħ) dℱ(ϕ)/dϕ,
where ℱ(ϕ) is the free energy of the system <cit.>.
Consider a general system of independent particles with many energy
levels, each energy level can be regarded as a microcanonical ensemble <cit.>.
In our case the relevant energy levels are the ABSs in the gap. The
effective Hamiltonian can be regarded as a summation of independent
Hamiltonians as H-μ N=∑_j(E_j-μ)n̂_j where n̂_j
is the occupation number at level E_j. The partition function
is then a product of the individual partition functions as 𝒵=Tr[Π_j⊗e^-β(E_j-μ)n̂_j].
The trace of a tensor product of matrices is equal to the product
of their individual traces, thus the partition function is
𝒵 =Π_jTr[e^-β(E_j-μ)n̂_j]=Π_j𝒵_j, with 𝒵_j=1+e^-β(E_j-μ)
for fermions. Then the corresponding free energy is given by
ℱ =-ln𝒵/β=-∑_jln(1+e^-β(E_j-μ))/β.
Following the above equation, the supercurrent can be written as
I_s(ϕ) =(2e/ħ)dℱ/dϕ=(2e/ħ)∑_j=±(dE_j/dϕ)f(E_j),
where f(E_j) is the Fermi-Dirac function f(E_j)=1/(1+e^β(E_j-μ)).
Note that we have two relevant energy levels E_B^±(ϕ).
We parameterize E_B^±(ϕ)=± a(ϕ)+ib(ϕ) with a(ϕ)>0.
Then, in the zero-temperature limit, we obtain
I_s(ϕ) =-2e/ħda(ϕ)/dϕ+i2e/ħdb(ϕ)/dϕ
=-2e/ħdRe[E_B^+(ϕ)]/dϕ+i2e/ħdIm[E_B^+(ϕ)]/dϕ.
Substituting the spectrum of ABSs to this formula, we derive
I_s(ϕ)=(Δ e/ħ)[ζsin(ϕ/2)/(1-Z^2)-iZsin(ϕ)/(2(1-Z^2)√(sin^2(ϕ/2)-Z^2))], ϕ∈[ϕ_b,ϕ_t].
There is no supercurrent in the Josephson gap.
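As an added consistency check (illustrative parameter values, units with e=ħ=1; not part of the original work), the closed-form supercurrent can be compared against a numerical derivative of the complex ABS energy:
[language=python]
import numpy as np

Delta, Z = 1.0, 0.3                   # illustrative values, units with e = hbar = 1

def e_plus(phi):
    zeta = np.sign(np.cos(phi / 2))
    root = np.sqrt(np.sin(phi / 2) ** 2 - Z ** 2 + 0j)
    return Delta * (zeta * np.cos(phi / 2) - 1j * Z * root) / (1 - Z ** 2)

def i_s(phi):
    """Closed-form complex supercurrent quoted above."""
    zeta = np.sign(np.cos(phi / 2))
    root = np.sqrt(np.sin(phi / 2) ** 2 - Z ** 2)
    return Delta * (zeta * np.sin(phi / 2) / (1 - Z ** 2)
                    - 1j * Z * np.sin(phi) / (2 * (1 - Z ** 2) * root))

phi, h = 0.5 * np.pi, 1e-6            # a phase inside the window [phi_b, phi_t]
dE = (e_plus(phi + h) - e_plus(phi - h)) / (2 * h)
numeric = -2 * dE.real + 2j * dE.imag   # I_s = -(2e/hbar) da/dphi + i (2e/hbar) db/dphi
assert np.allclose(numeric, i_s(phi), atol=1e-5)
print(i_s(phi))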
§ ANDREEV REFLECTION COEFFICIENT
In this section, we obtain the specific form of the Andreev reflection
coefficient. To this end, we take the trial wave function as
ψ(x)=([ u(x); v(x) ])=
e^ik_ex([ u_0; v_0 ])+A_h-e^ik_hx([ v_0; u_0 ])+A_e-e^-ik_ex([ u_0; v_0 ]), x<0;
A_e+e^ik_ex([ u_0e^iϕ/2; v_0e^-iϕ/2 ])+A_h+e^-ik_hx([ v_0e^iϕ/2; u_0e^-iϕ/2 ]), x>0.
The coefficients are determined by the boundary conditions:
ψ(0+) =ψ(0-), ψ'(0+)-ψ'(0-)=-(2imV/ħ^2)τ_zψ(0).
These boundary conditions can be rewritten as
([ v_0 u_0 -u_0e^iϕ/2 -v_0e^iϕ/2; u_0 v_0 -v_0e^-iϕ/2 -u_0e^-iϕ/2; -z_1v_0 z_2u_0 u_0e^iϕ/2 -v_0e^iϕ/2; -z_2u_0 z_1v_0 v_0e^-iϕ/2 -u_0e^-iϕ/2 ])([ A_h-; A_e-; A_e+; A_h+ ]) =([ -u_0; -v_0; -2Zu_0; 2Zv_0 ]).
The Andreev reflection probability is then obtained as
A_h-= u_0v_0[u_0^2e^-iϕ-v_0^2e^iϕ+(2Z(u_0^2-v_0^2)-(u_0^2+v_0^2))]/(2[u_0^4(Z+1)^2+v_0^4(Z-1)^2-2u_0^2v_0^2(Z^2+cosϕ)]).
After a simplification, we arrive at
A_h- =Δ/E[cos^2ϕ/2+√(Δ^2-E^2)/E(sinϕ/2cosϕ/2-iZ)]/[(Z^2+1)E^2+2iZE√(Δ^2-E^2)-Δ^2(Z^2+cos^2ϕ/2)].
The poles of A_h- yield the spectrum of ABSs, consistent with
Eq. (<ref>).
§ SOLUTION OF COMPLEX ANDREEV BOUND STATES IN P-WAVE CASE
In this section, we present the solution of ABSs in the p-wave case.
The BdG equation reads
([ [-ħ^2∂_x^2/2m-μ]+U(x) Δk̂_x/k_F; [Δk̂_x/k_F]^† -[-ħ^2∂_x^2/2m-μ]+U(x) ])([ u(x); v(x) ])=E([ u(x); v(x) ]).
Following a similar procedure as in the s-wave case, we find the
secular equation
(Z^2+1)+2iZ√(Δ^2/E^2-1) =Δ^2/E^2cos^2(ϕ/2).
Let us first present results for special limits and afterwards the
general solutions.
* Josephson phase ϕ=2nπ. The above equation becomes
(Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2.
Assume E≠0 to get
+2Zi√(Δ^2/E^2-1) =[Δ^2/E^2-(Z^2+1)].
Defining x=Δ^2/E^2-1, which yields -2Zi√(x)=(x-Z^2).
The final result is E=±Δ/√(1-Z^2), which does
not fulfill the bound state condition Re(λ)>0.
* At Josephson phase ϕ=(2n+1)π, the above equation leads to
(Z^2+1)+2Zi√(Δ^2/E^2-1) =0.
When 0<Z<1, the energy is purely imaginary with E=±2iZΔ/(1-Z^2).
Then, if we substitute this solution back into the original equation,
we find E=-2iZΔ/(1-Z^2) by fixing the sign convention
√(-1)=i.
* General Josephson phase ϕ. We notice that the energy E cannot
reach zero. Thus we define Δ^2/E^2=1/y,
and the equation is simplified to be
(Z^2+1)y+2Zi√(y(1-y))=cos^2(ϕ/2).
Define A=(Z^2+1), B=cos^2(ϕ/2) to obtain
(A^2-4Z^2)y^2+(4Z^2-2AB)y+B^2 =0.
Thus, the general solution reads
y =(-2[Z^2sin^2(ϕ/2)+(Z^2-cos^2(ϕ/2))]±√(16Z^2sin^2(ϕ/2)[Z^2-cos^2(ϕ/2)]))/(2(A^2-4Z^2))
=-((Zsgn(sin(ϕ/2))sin(ϕ/2)±√(Z^2-cos^2(ϕ/2)))/(1-Z^2))^2.
Finally, we arrive at the result
E/Δ=±(iZsgn(sin(ϕ/2))sin(ϕ/2)±√(Z^2-cos^2(ϕ/2)))/(1-Z^2).
At Z=0, we recover the well-known p-wave junction limit with
E=±Δcos(ϕ/2). To determine the ABSs, we consider the
necessary condition for a bound state Re(λ)>0, which
indicates
sin^2(ϕ/2) > Z^2(1-Z^2)/(1+Z^2).
From this condition, we find the Josephson gap edges at
ϕ_0^p(Z)=2nπ±2arcsin(Z√((1-Z^2)/1+Z^2)), n∈ℤ.
Therefore, the spectrum of ABSs is given by
E_B^+(ϕ)/Δ=(√(cos^2(ϕ/2)-Z^2)-iZsgn(sin(ϕ/2))sin(ϕ/2))/(1-Z^2), ϕ∈[ϕ_b^p,ϕ_t^p],
and E_B^-(ϕ)=-[E_B^+(ϕ)]^* from type I PHS. The
bottom phase edge is ϕ_b^p≡2nπ+ϕ_0^p(Z) and
the top phase edge is ϕ_t^p≡(2n+1)π-ϕ_0^p(Z).
If we focus around ϕ∼0 with small Z, it gives a simple
relation ϕ_0^p(Z)=2Z. In this regime, the energy can be
approximated as
E_B^±(ϕ_0^p)≈Δ(-iZ^2).
Interestingly, this value is exactly the same as in the s-wave
case. The Josephson edge ϕ_0^p(Z) has a maximum value at
Z^2 =(1-Z^2)/(1+Z^2), i.e., Z^2=√(2)-1.
Thus the maximum Josephson gap value is 4arcsin(√(2)-1).
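The p-wave expressions can be checked numerically in the same spirit (an added illustration with arbitrary parameter values; not part of the original work):
[language=python]
import numpy as np

def abs_energy_pwave(phi, Z, Delta=1.0):
    """E_B^+ of the p-wave junction, transcribed from the spectrum above."""
    s = np.sign(np.sin(phi / 2))
    root = np.sqrt(np.cos(phi / 2) ** 2 - Z ** 2 + 0j)
    return Delta * (root - 1j * Z * s * np.sin(phi / 2)) / (1 - Z ** 2)

# Z -> 0 recovers the well-known limit E = Delta cos(phi/2)
phi = 0.3 * np.pi
assert np.isclose(abs_energy_pwave(phi, 0.0), np.cos(phi / 2))

# Gap edge phi_0^p(Z) and its maximum at Z^2 = sqrt(2) - 1
def phi0_p(Z):
    return 2 * np.arcsin(Z * np.sqrt((1 - Z ** 2) / (1 + Z ** 2)))

Zs = np.linspace(0.0, 1.0, 100001)[:-1]
Z_star = Zs[np.argmax(phi0_p(Zs))]
assert np.isclose(Z_star ** 2, np.sqrt(2) - 1, atol=1e-3)
print("maximum Josephson gap:", 2 * phi0_p(Z_star), "~", 4 * np.arcsin(np.sqrt(2) - 1))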
|
http://arxiv.org/abs/2307.04601v1 | 20230710143943 | InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval | [
"Hugo Abonizio",
"Luiz Bonifacio",
"Vitor Jeronymo",
"Roberto Lotufo",
"Jakub Zavrel",
"Rodrigo Nogueira"
] | cs.IR | [
"cs.IR"
] |
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
Zeta Alpha
Netherlands
Zeta Alpha
NeuralMind
University of Campinas
Brazil
Recent work has explored Large Language Models (LLMs) to overcome the lack of training data for Information Retrieval (IR) tasks. The generalization abilities of these models have enabled the creation of synthetic in-domain data by providing instructions and a few examples on a prompt.
InPars <cit.> and Promptagator <cit.> have pioneered this approach and both methods have demonstrated the potential of using LLMs as synthetic data generators for IR tasks.
This makes them an attractive solution for IR tasks that suffer from a lack of annotated data.
However, the reproducibility of these methods was limited, because InPars' training scripts are based on TPUs – which are not widely accessible – and because the code for Promptagator was not released and its proprietary LLM is not publicly accessible.
To fully realize the potential of these methods and make their impact more widespread in the research community, the resources need to be accessible and easy to reproduce by researchers and practitioners.
Our main contribution is a unified toolkit for end-to-end reproducible synthetic data generation research, which includes generation, filtering, training and evaluation. Additionally, we provide an interface to IR libraries widely used by the community and support for GPU.
Our toolkit not only reproduces the InPars method and partially reproduces Promptagator, but also provides a plug-and-play functionality allowing the use of different LLMs, exploring filtering methods and finetuning various reranker models on the generated data. We also made available all the synthetic data generated in this work for the 18 different datasets in the BEIR benchmark which took more than 2,000 GPU hours to be generated as well as the reranker models finetuned on the synthetic data. Code and data are available at <https://github.com/zetaalphavector/InPars>
InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval
Rodrigo Nogueira
February 2023
==============================================================================================================
§ INTRODUCTION
Effective neural Information Retrieval (IR) models often require a large amount of labeled training data. However, obtaining human labeled data is costly and many publicly available benchmarks contain few or no training examples <cit.>. In these cases, the common approach is to train a model on a large dataset, such as
MS MARCO <cit.> and Natural Questions <cit.>, and use it in a zero-shot transfer learning scenario <cit.>.
Nonetheless, models trained on these datasets face challenges to generalize to the variety of tasks and specific domains available in the real world.
Thus, the recently proposed InPars <cit.> and Promptagator <cit.> methods, along with their extensions InPars-v2 <cit.> and InPars-Light <cit.>, have explored Large Language Models (LLMs) to generate synthetic data and have demonstrated their effectiveness. These methods not only outperform models that are finetuned on extensively labeled datasets but have also shown to be more adaptable to different tasks.
These methods propose the generation of synthetic in-domain training data by exploring the few-shot learning abilities of LLMs, prompting them with a brief description of the task and a small number of in-domain examples. InPars uses a static prompt that include examples collected from the MS MARCO dataset, whereas Promptagator uses dynamic prompts that include domain and task-specific examples sampled from the target dataset. Another key difference of these methods lies in the filtering of generated data. While InPars uses the sequence probability given by the LLM at generation time, Promptagator uses a consistency filtering with a model trained on the generated data. Similarly, InPars-v2 extends the pipeline by using a pre-trained reranker model to filter the examples. InPars-Light goes further in the efficiency direction by using lightweight models and showing that they are competitive with larger models.
These methods have proven to be effective, representing the state of the art in the BEIR benchmark <cit.>. However, reproducing such pipelines can still be a challenging task; researchers need to handle different codebases in addition to having access to a specific computational infrastructure. Most of the time, such components are not well integrated, making it difficult for researchers and practitioners to use them effectively. In this work, we bring all these components together, making it possible to experiment with InPars, Promptagator, and their variants, as well as to try new approaches using different LLMs, prompting approaches and datasets. We believe that making these resources available to allow reproducible work in the field of IR is crucial for several reasons. First, reproducibility is a key component of scientific research, as it allows other researchers to confirm and build upon the findings of a study.
Second, the reproduction of LLM related studies is often costly, and making the models and generated data available provides a valuable resource for the community.
We summarize our contributions as follows:
* We provide an extensive guideline for reproducing InPars and InPars-v2 for datasets on the BEIR benchmark on GPU. For Promptagator, we provide an implementation for reproducing the synthetic queries generation step with the dynamic prompt construction originally proposed by the authors.
* We also provide support for using different data sources: Pyserini's <cit.> pre-built indexes for the BEIR datasets, and the ir_datasets <cit.> library, which contains multiple IR datasets.
* Lastly, we make available all the synthetic data generated in this reproduction study, along with the prompts and the finetuned reranker models.
§ METHODS
In this section, we describe the main methods reproduced in this paper and highlight the differences in their data generation pipelines.
§.§ InPars
The InPars method, currently available in two different versions, explores the few-shot learning abilities of LLMs to generate synthetic training data for IR tasks, by using a prompt template that instructs the LLM on how to generate the synthetic data. The prompt t||d is the concatenation of a prefix t and a document d, where the prefix t consists of N pairs of documents and their relevant queries, i.e., t={(q_1^*,d_1^*), ..., (q_N^*,d_N^*)}. The prompt t||d is fed to a language model G that generates a question q that is likely to be relevant to d. The resulting pair (q, d) forms a positive training example that is later used to finetune a retrieval model. The original InPars work uses a GPT-3 LLM as the synthetic data generator, while InPars-v2 replaced the LLM with GPT-J <cit.>. These models, trained on massive amounts of text data, have shown impressive abilities in generating human-like text, answering questions, translating languages, and even creating original content. GPT-J is an open-source 6B parameters transformer model trained using 402 billion tokens from the Pile <cit.>, an 800 GB English dataset. When generating the synthetic queries, a greedy decoding strategy was used.
InPars proposes two different prompts. The first one, named “Vanilla” prompt, uses three fixed pairs of examples of document and relevant query, that were randomly collected from the MS MARCO training dataset. The second prompt template, referred to as “Guided by Bad Questions” (GBQ) uses the same examples from the first prompt, but it labels the original questions from the MS MARCO dataset as “bad” questions. The “good” questions were manually created and are more elaborated. The intention is to encourage the LLM to produce more informative questions, where the full context of the document contributes to the answers.
InPars generates 100K pairs of positive training examples using documents randomly sampled from a collection D. The prefix t is always the same regardless of the input document d. After generating the synthetic data, a filtering step is proposed, to select the top K pairs with respect to the following (log) probability:
p_q = 1/|q|∑_i=1^|q|log p(q_i|t,d,q_<i),
where p(q_i|t,d,q_<i) is the probability assigned by G when autoregressively generating the i-th token of q, and q_<i are the tokens generated in the previous decoding steps.
This score is used to filter the top K=10,000 pairs of document and synthetic queries to be used as finetuning data.
This filtering improves the quality of the training data: using the full set of 100K synthetic queries to finetune a reranker model, i.e., skipping the filtering step, resulted in a drop of 4 MRR@10 points on MS MARCO.
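For concreteness, the score-based selection amounts to the sketch below (added for illustration only; the generation step stores the per-token log-probabilities in the output JSONL, but the exact field name log_probs used here is an assumption):
[language=python]
import json

def mean_log_prob(example):
    """Average token log-probability of the generated query (the score above)."""
    log_probs = example["log_probs"]          # assumed field name in the JSONL file
    return sum(log_probs) / len(log_probs)

with open("trec-covid-queries.jsonl") as f:
    examples = [json.loads(line) for line in f]

examples.sort(key=mean_log_prob, reverse=True)
top_k = examples[:10_000]                     # keep the K = 10,000 highest-scoring pairs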
The filtering approach was improved in InPars-v2, where a pretrained reranker model is used to filter the synthetic queries for the training step. A monoT5-3B reranker finetuned for one epoch on the MS MARCO dataset estimates a relevance score for each synthetic query generated by the LLM paired with the document that was used to generate it. After computing the score for each of the 100,000 pairs of synthetic queries and documents, only the K=10,000 highest-scoring pairs are kept as finetuning data.
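A minimal sketch of such reranker-based scoring is shown below, using the Hugging Face checkpoint mentioned later in this paper and the standard monoT5 input convention. The snippet is illustrative, not the toolkit's own implementation; a smaller monoT5 checkpoint can be substituted to reduce memory usage:
[language=python]
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-3b-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]

def relevance_score(query, document):
    """Probability of the 'true' token under the monoT5 prompt convention."""
    text = f"Query: {query} Document: {document} Relevant:"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id)
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]
    probs = torch.softmax(logits[[false_id, true_id]], dim=0)
    return probs[1].item()

# Score all 100K (query, document) pairs and keep the 10,000 highest-scoring ones.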
These filtered queries are used to train a monoT5 <cit.> reranker, an adapted version of the T5 <cit.> model for text ranking tasks. The filtered queries are used as positive examples, while negative examples are mined from BM25 candidates. Two models with 220M and 3B parameters were trained for one epoch over the 20,000 query-document pairs. The trained model is subsequently used to rerank the initial BM25 retrievals. This approach employs a two-stage retrieval pipeline: first, BM25 retrieves the top 1,000 documents per query; second, the trained model reranks the list by assigning a relevance score to each pair of query and document.
§.§ Promptagator
The Promptagator method also creates synthetic training data for IR tasks by exploiting the few-shot abilities of a 137B-parameter LLM. Differently from InPars, a specific prompt is created for each dataset using in-domain examples.
By creating a specific prompt template for each dataset, the prefixes used were selected according to the dataset description. Using the ArguAna dataset prompt as an example, the model is prompted with a prefix “” which indicates the document from the dataset, followed by a prefix “”, marking the question related to the document. This way, the prefixes resemble a better description of the datasets while instructing the LLM to generate a query for that specific task and document. Moreover, in the few-shot scenario, they use from 2 to 8 relevant query-document examples to create the prompt, sampled from the development set when it is available or, if not, from the test set.
Promptagator generates synthetic queries using a sampling decoding algorithm with a temperature of 0.7. For each dataset, they generate 8 synthetic queries for each document from a randomly sampled set of 1 million documents. FLAN <cit.> is used as the generator, which is a proprietary LLM that was pretrained on a multiple tasks using instructions.
To ensure that only high-quality synthetic questions are generated, the authors propose a filtering step based on consistency filtering. They train a retriever model using the same synthetic data that needs to be filtered to predict the most relevant passage for a given query. The retriever model keeps only queries that, when fed to the model, return the document that originated it among its top K results. The authors observed that setting K to 1 leads to better results when using the MS MARCO dataset as a validation set.
The authors suggest that this filtering strategy removes low-quality synthetic questions and improves performance on 8 out of the 11 datasets that were evaluated.
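A sketch of this round-trip consistency filtering is shown below (added illustration; retrieve is a placeholder for the retriever trained on the synthetic data, not an actual Promptagator component):
[language=python]
def consistency_filter(synthetic_pairs, retrieve, k=1):
    """Keep (query, source_doc_id) pairs whose source document is retrieved
    among the top-k results for the generated query."""
    kept = []
    for query, source_doc_id in synthetic_pairs:
        top_k = retrieve(query, k)      # placeholder: returns top-k document ids
        if source_doc_id in top_k:
            kept.append((query, source_doc_id))
    return kept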
In the final step, the Promptagator method finetunes two different models using the synthetic data. The first is a bi-encoder based on the GTR <cit.> architecture with 110M parameters. The second is a cross-encoder with the same number of parameters, which reranks the top 200 candidates retrieved by the bi-encoder model.
§ EXPERIMENTAL SETUP
In this section, we describe the process of using the toolkit provided in this work. Firstly, we outline the steps for generating synthetic data. Next, we discuss the process of filtering the generated data to remove possibly irrelevant instances. Then, we describe how to build the training set using the filtered positive examples and mining the negatives. After that, we provide details on how to use the synthetic data to train a reranker. Finally, we describe the process of reranking and evaluating the trained model. By following the guidelines in this section, researchers and practitioners are able to leverage the provided resources effectively and reproduce the InPars method, as well as partially reproduce the Promptagator method and extend to new pipelines.
§.§ Commands
To begin, the synthetic data generation step is done using the command-line as follows:
[language=bash]
python -m inpars.generate –prompt="inpars" –dataset="trec-covid" –dataset_source="ir_datasets" –base_model="EleutherAI/gpt-j-6B" –output="trec-covid-queries.jsonl"
Diving into the required arguments, we first need to define the –prompt argument, which supports four different options for the prompt template to be selected. This argument defines which prompt template will be used during the generation step. We provide both “Vanilla” and “GBQ” prompt templates used by InPars, with “Vanilla” as the default.
The Promptagator-style prompt template uses a specific template for each dataset and dynamically selects random pairs of query and relevant document to be used as prompt examples. A dedicated argument specifies the number of examples that will be used in the prompt in this case, with a default of 3 examples.
We randomly select labeled examples from the training set of each dataset when training data is available. If there is no training set, we use the development set as our source and, as a last resort, when there is no training or development set, we use the test set for creating the prompt examples. This approach is slightly different from the one proposed by Promptagator, which collects examples only from the development or test set. Once the examples are collected, for each document that requires a synthetic question, the prompt is built dynamically. This means that the prompt examples are randomly ordered for each document.
To ensure a fair evaluation, the queries and documents used as few-shot examples that were extracted from the development or test sets are discarded from the evaluation metrics.
The next arguments are –dataset and –dataset_source, which specify the dataset to generate synthetic queries for and the source from which to load it. In line with InPars and Promptagator, we support the datasets from the BEIR <cit.> benchmark. BEIR is a widely used evaluation framework in the IR domain. It aims to provide a comprehensive evaluation benchmark on a variety of IR tasks, with a particular emphasis on zero-shot evaluation.
The –dataset_source argument is designed to integrate with two widely used dataset interfaces in the IR community: Pyserini <cit.>, a toolkit for conducting reproducible IR research with sparse and dense representations, and ir_datasets <cit.>, a commonplace for several IR ad-hoc ranking benchmarks. Furthermore, it is also possible to indicate a local file as the document and query collection. By default, we use ir_datasets as the source, but both sources include all publicly available BEIR datasets.
The –base_model argument determines the LLM that will be used to generate the synthetic queries. By default, our toolkit uses the GPT-J <cit.> model available in the Hugging Face Hub [<https://huggingface.co/EleutherAI/gpt-j-6B>], but it can support any generative model available from Hugging Face. Lastly, the –output argument specifies the name of an output file to save the synthetic data. The output file is in JSON Lines format, containing one query per line. This file also includes additional information related to the synthetic generation step, such as the log probabilities assigned to each token by the LLM during query generation, the prompt text fed to the LLM, and the document for which the query was generated.
Additional arguments related to the LLM, such as the maximum length of input and output or batch size, can also be set through the command-line arguments.
Once the synthetic data has been generated, we move on to the filtering stage. We provide two different filtering strategies, and the command to filter the synthetic queries is:
[language=bash]
python -m inpars.filter –input="trec-covid-queries.jsonl" –dataset="trec-covid" –filter_strategy="scores" –keep_top_k="10_000" –output="trec-covid-queries-filtered.jsonl"
Initially, before applying the filtering strategy, we keep only synthetic queries that meet some conditions. These conditions require the token count to fall within a specified range, defined by the corresponding minimum and maximum token-count arguments. This is done to remove possibly noisy synthetic queries. An optional argument removes synthetic queries in which a part of the document used for generation was copied into the query.
The first argument required by the filtering command-line is –input, which refers to the file containing the synthetic queries generated in the previous step to be filtered. The –dataset argument indicates which dataset the queries belong to. The –filter_strategy argument specifies the filtering strategy to be used. The default filtering strategy, introduced by InPars-v1, is called scores and is based on the mean of the token log-probabilities assigned by the LLM. The synthetic queries list is then sorted in descending order, and only the top-k values are retained. The –keep_top_k argument defines the value of k, with a default value of 10,000.
The second filtering strategy, which was introduced by InPars-v2, is called reranker. This strategy employs a pretrained reranker model to filter the synthetic queries by computing a relevancy score for each synthetic query-document pair. The scores list is sorted in descending order and only the top-k pairs with the highest scores are kept as positive query-document pairs to be used during training. Finally, the output file must be specified via –output to indicate where the filtered synthetic queries will be saved.
The filtering strategy proposed by Promptagator is not currently supported because it is more elaborate and seems to require more computational resources: a bi-encoder is initially finetuned on 1 million synthetic examples and then used in the filtering step by retaining only the examples that correctly retrieve the source document. This is a costly procedure that has been postponed to future work.
The third stage of the pipeline involves mining negative examples for model training. In this stage, negative examples are mined by using the filtered synthetic queries to search for candidate documents. We followed the approach outlined in InPars, using BM25 to retrieve 1,000 candidate documents from the target collection. From this set, a random document is selected as the negative example. If the candidate document is the same one used during the synthetic generation step, the example is discarded, and a new one is sampled. The following command-line is used to execute this step:
[language=bash]
python -m inpars.generate_triples –input="trec-covid-queries-filtered.jsonl" –dataset="trec-covid" –output="trec-covid-triples.tsv"
The –input argument expects a file containing the previously filtered synthetic queries, as well as the –dataset identification. The result is a tuple (q, d^+, d^-), where q and d^+ are fixed (the synthetic query and the source document) and d^- represents the negative example sampled from the BM25 candidates. The document collection is indexed using Pyserini, and all BEIR benchmark datasets are already available as pre-built indexes.
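A possible sketch of this negative-mining step with Pyserini is shown below (illustrative only, not the toolkit's own implementation; the prebuilt index name is an assumption, and any Pyserini index of the target collection can be used instead):
[language=python]
import random
from pyserini.search.lucene import LuceneSearcher

# Assumed prebuilt index name; replace with the index of your target collection.
searcher = LuceneSearcher.from_prebuilt_index("beir-v1.0.0-trec-covid.flat")

def sample_negative(query, positive_doc_id, k=1000):
    """Pick a random BM25 candidate that is not the source document."""
    hits = searcher.search(query, k=k)
    candidates = [hit.docid for hit in hits if hit.docid != positive_doc_id]
    return random.choice(candidates) if candidates else None

# Each training line then becomes the triple (query, positive_doc, negative_doc).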
Once the synthetic training data is obtained, we proceed to the training step. We support the finetuning of a monoT5 reranker model, which is the same model used in InPars, as the final stage of the multi-stage retrieval pipeline.
To finetune the reranker using the synthetic data, the command-line is:
[language=bash]
python -m inpars.train –triples="trec-covid-triples.tsv" –base_model="castorini/monot5-3b-msmarco-10k" –output_dir="./reranker/" –max_steps="156"
The –triples argument specifies the file containing the training tuples obtained in the previous step, where every line consists of a triple comprising a query, a positive document, and a negative document. The –base_model argument indicates the model to be finetuned, e.g., an original T5 model or a pre-trained monoT5. In all our experiments, we used castorini/monot5-3b-msmarco-10k [<https://huggingface.co/castorini/monot5-3b-msmarco-10k>] as our initial base model. The –output_dir argument specifies the path where the finetuned model should be saved. Our reranker models were trained for 156 steps, equivalent to one epoch over the query-relevant document pairs. In contrast to the InPars script that relies on TPUs, our training script supports GPUs. We conducted all experiments using an NVIDIA A100 80 GB GPU, and training the model for 156 steps took approximately 30 minutes.
Once the model has been trained, the next stage uses it to rerank a dataset. In this stage we support all BEIR datasets, as well as any custom local datasets. The command-line to rerank is:
[language=bash]
python -m inpars.rerank –model="./reranker/" –dataset="trec-covid" –output_run="trec-covid-run.txt"
The first argument is –model, which specifies the trained model to be used for reranking. The –dataset argument indicates one of the BEIR datasets to load the documents and queries, as well as the initial run to be reranked. We use the BEIR runs, created using BM25, as the initial run for all the datasets. However, it is also possible to provide an initial run from a local file in the TREC format through a dedicated argument. The reranker model computes a relevancy score for each query and the candidate documents from the initial run. The output consists of a reranked run, which is saved in the location indicated by the –output_run path.
Finally, to evaluate the reranked run, the following command-line is used:
[language=bash]
python -m inpars.evaluate –dataset="trec-covid" –run="trec-covid-run.txt"
By providing the –dataset and the –run to be evaluated, our script computes metrics such as recall and nDCG, in addition to other metrics computed by the TREC evaluation script.
§ RESULTS
This section presents and discusses the results obtained by reproducing the methods using our toolkit. Table <ref> presents a comparison between the baselines, the results reported by original methods and our reproductions.
The first two rows (1a and 1b) represent the BM25 baselines. In BM25-flat, document titles and contents are concatenated and stored as a single field while BM25-multi stores them as separate fields. The top 1000 documents retrieved by BM25-flat are reranked by the models in rows (2), (3a), (3b), (4b), and (4c). Row (2) presents the result of monoT5-3B, which was finetuned on MS MARCO for one epoch, used in a zero-shot setup.
Rows (3a) and (3b) presents the reproductions of InPars-v1 and InPars-v2 pipelines, respectively. Row (4a) presents the reported result for Promptagator.
The results in rows (4b) and (4c) illustrate the impact of using the Promptagator prompt with InPars pipelines. Comparing these results to those of InPars v1 and v2 (rows (3a) and (3b)), the results produced by the Promptagator prompt are either equal or slightly lower than those obtained through the InPars prompt, except for the ArguAna, Touché-2020 and SciFact datasets. Notably, for the ArguAna dataset, finetuning the reranker on the synthetic data generated by the Promptagator prompt resulted in an almost 14 nDCG@10 improvement compared to InPars' best result. These findings suggest that the Promptagator prompts are particularly effective in generating synthetic queries for the ArguAna and Touché-2020 datasets. Such datasets concentrate on argument retrieval, which is slightly different from other datasets in the BEIR benchmark. As a result, they gain advantage from using dataset-specific prompts.
Also, a factor that probably limited InPars prompt performance on the ArguAna dataset reported in rows (3a) and (3b) is related to the query length. When generating the synthetic queries, InPars sets a maximum number of 64 tokens. As shown in Table <ref>, the average number of words and tokens for queries across all datasets in the BEIR benchmark is below this value with the exception of the ArguAna dataset.
When examining the filtering strategy, the results from the InPars prompt indicate an average difference of 1 nDCG@10 point between the results displayed in rows (3a) and (3b). The improvements observed in the reranker strategy results are primarily driven by Touché-2020, FEVER and Climate-FEVER datasets. When using the Promptagator prompt, the filtering strategy appears to make a difference for certain datasets, as shown in rows (4b) and (4c). The reranking strategy appears to perform better for TREC-COVID, Touché-2020, and HotpotQA datasets. In particular, the HotpotQA dataset showed an improvement of more than 18 nDCG@10 points when compared to the scores strategy. On the other hand, the scores filtering strategy resulted in an improvement for the DBPedia and Climate-FEVER datasets, with gains of 9.6 and 16.8 nDCG@10 points, respectively. Despite the individual differences, the average results are very similar.
Table <ref> presents results comparing the performance on GPU (PyTorch <cit.> and Transformers <cit.>) versus TPU (Mesh-TensorFlow <cit.>). As part of our work, we added GPU support to reproduce InPars results. This support covers the synthetic data generation, filtering, training, reranking and evaluating. We conducted an experiment to verify that running it on a GPU setup would produce the same results as running it on the TPU setup. For this, we trained monoT5-3B models following the InPars-v2 approach. Our analysis revealed that while there were minor variations in datasets such as TREC-COVID, BioASQ, Robust04 and ArguAna, the results remained exactly the same for NFCorpus, NQ, and FiQA-2018, regardless of the device used. For the majority of datasets, the variance in results between running on TPU and GPU is minimal when considering individual performance, as demonstrated in the "Diff" column on Table <ref>. Furthermore, the average nDCG@10 remains consistent in both evaluation scenarios.
All experiments were conducted using an NVIDIA A100 80 GB GPU. Training monoT5-3B for 156 steps took about 30 minutes. Filtering 100K queries using a monoT5-3B model takes approximately 45 minutes. The duration of the evaluation step is determined by the number of queries that need to be reranked for each dataset, which can range from 50 queries for TREC-COVID to 13,145 queries for CQADupstack. The reranking of 1,000 candidate documents for a given query took a maximum of 30 seconds using the monoT5-3B reranker model.
Additionally, Table <ref> shows statistics regarding the token count for each set of documents and queries in all datasets on the BEIR benchmark.
The ArguAna dataset is noteworthy for having significantly longer queries than the other datasets. TREC-NEWS and Robust04 have the longest documents. This information is crucial to keep in mind when choosing documents to use as prompt examples. For instance, if we consider the GPT-J model, with a maximum sequence length of 2048 tokens, at most two average-length TREC-NEWS documents can fit into a prompt, without even accounting for the length of the queries.
§ CONCLUSIONS
We have introduced the InPars Toolkit, a codebase designed to generate synthetic data using LLMs in a reproducible manner for neural IR tasks. The toolkit comprises an end-to-end pipeline that encompasses data generation, training, reranking, and evaluating the trained models. Additionally, the codebase is integrated with two major libraries for commonly used datasets from the BEIR benchmark, and it supports both GPU and TPU training and inference. Our goal is to make research on these methods more accessible and to pave the way for this emerging research trend in the IR community.
Our experiments have demonstrated that training reranker models using synthetic data and evaluating them on GPU infrastructure yielded results comparable to those obtained when training on the TPU setup. Additionally, we have also made available all synthetic data generated for all BEIR datasets and the models finetuned on this data.
§ FUTURE WORK
Future work will focus on integrating a wider range of open-source LLMs, including instruction finetuned LLMs, with the aim of enhancing the generation process. Another area of further exploration is to experiment with different prompting techniques, such as chain-of-thought prompting, and prompting for retrieval explanations. Moreover, there are plans to incorporate consistency filtering and expand the filtering methods to completely reproduce Promptagator and lay the foundations for new research approaches in the field of synthetic data generation for IR.
|
http://arxiv.org/abs/2307.04952v1 | 20230711004659 | Compact Twice Fusion Network for Edge Detection | [
"Yachuan Li",
"Zongmin Li",
"Xavier Soria P.",
"Chaozhi Yang",
"Qian Xiao",
"Yun Bai",
"Hua Li",
"Xiangdong Wang"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Compact Twice Fusion Network for Edge Detection]Compact Twice Fusion Network for Edge Detection
1]Yachuan [email protected]
1]Zongmin [email protected]
2,5]Xavier Soria [email protected]
1]Chaozhi [email protected]
1]Qian [email protected]
1]Yun [email protected]
3]Hua [email protected]
[4]Xiangdong [email protected]
[1] College of Computer Science and Technology,China University of Petroleum (East China), Changjiang West Road,Qingdao, 266500, Shandong, China
[2] Faculty of Educational Science, Humanities, and Technology,National University of Chimborazo, Av. Eloy Alfaro, Riobamba, 060110, Chimborazo, Ecuador
[3]Institute of Computing Technology Chinese Academy of Sciences, South Road Zhongguancun, Beijing, 100190, China
[4]Physical Education Institute, Jimei University, Yinjiang Rd, Xiamen, 361021, Fujian, China
[5] CIDIS,ESPOL Polytechnic University, Campus Gustavo Galindo, Guayaquil, 090112, Guayas, Ecuador
The significance of multi-scale features has been gradually recognized by the edge detection community. However, the fusion of multi-scale features increases the complexity of the model, which hinders practical application. In this work, we propose a Compact Twice Fusion Network (CTFN) to fully integrate multi-scale features while maintaining the compactness of the model. CTFN includes two lightweight multi-scale feature fusion modules: a Semantic Enhancement Module (SEM) that can utilize the semantic information contained in coarse-scale features to guide the learning of fine-scale features, and a Pseudo Pixel-level Weighting (PPW) module that aggregates the complementary merits of multi-scale features by assigning weights to all features.
Nevertheless, the interference of texture noise still makes the correct classification of some pixels a challenge. For these hard samples, we propose a novel loss function, coined Dynamic Focal Loss, which reshapes the standard cross-entropy loss and dynamically adjusts the weights to correct the distribution of hard samples. We evaluate our method on three datasets, i.e., BSDS500, NYUDv2, and BIPEDv2. Compared with state-of-the-art methods, CTFN achieves competitive accuracy with fewer parameters and lower computational cost.
Apart from the backbone, CTFN requires only 0.1M additional parameters, which reduces its computation cost to just 60% of other state-of-the-art methods.
The codes are available at https://github.com/Li-yachuan/CTFN-pytorch-masterhttps://github.com/Li-yachuan/CTFN-pytorch-master.
[
*
October 2023
================
§ INTRODUCTION
The purpose of edge detection is to extract object boundaries and salient edges from natural images to preserve the key information and ignore insignificant details. Therefore, it is considered a fundamental task in computer vision and plays an important role in higher-level tasks such as salient detection <cit.>, semantic segmentation <cit.>, and depth map prediction <cit.>.
Edge detection, which divides pixels into edge and non-edge, is a sub-task of semantic segmentation; pixel classification is therefore the essential question in edge detection. Pioneering works address it with local features such as brightness, gradient, and color. The lack of global information limited the performance of edge detection methods until the advent of Holistically-nested Edge Detection (HED) <cit.>. As the pioneer of contemporary edge detection, HED first introduced the deep supervision mechanism to edge detection and learns multi-scale predictions holistically. On this basis, a series of excellent methods have been produced <cit.>.
Although the performance of edge detection has been significantly improved, these methods suffer from two major issues. Firstly, the number of model parameters increases sharply. With the further exploration of multi-scale features, the model parameters grow considerably, which cannot meet the demands of downstream tasks. In most situations, spending a large amount of memory and computing resources for a small accuracy improvement does not make sense. Secondly, they pay insufficient attention to hard samples. Hard samples refer to pixels whose classification probability differs significantly from the ground truth, that is, pixels prone to misclassification. As shown in Fig. <ref>, the confidence of the edges in the red box decreases due to texture interference, while the textures in the green box are mistaken for edges. These are two typical examples of hard samples. Hard samples determine the ceiling of detection accuracy, so extra attention should be paid to them.
Aiming to fully exploit multi-scale feature fusion and avoid the sharply increased parameters, we introduce a Compact Twice Fusion Network (CTFN) for edge detection, in which higher quality edges are obtained by fusing the multi-scale features twice. A lightweight Semantic Enhancement Module (SEM) is introduced in the first feature fusion. In SEM, high-level semantic information is used to increase the receptive field of fine-scale features, thereby improving the discrimination of fine-scale branches. However, SEM is a cascade structure based on FPN <cit.>, in which the high-level semantic information is gradually attenuated in the process of transmission, so the second feature fusion is required to aggregate all scale information. In the second feature fusion, we introduce a Pseudo Pixel-level Weighting (PPW) module, which sets the weights of multi-scale features according to their context information and further reduces the module complexity by decomposing the weights into the spatial weights and the channel weights.
To further enhance the attention paid to hard samples, we propose Dynamic Focal Loss (DFL). DFL reshapes the standard cross-entropy loss and dynamically adjusts the weight of the loss assigned to hard samples. Increasing the weights of hard samples is an effective way to optimize them, but effectively identifying hard samples becomes a new problem. Focal Loss <cit.> discriminates hard samples by the gap between the model output and the ground truth. However, due to the existence of randomly initialized modules, the output of the model is chaotic at the initial stage of training, so it is unreasonable to identify hard samples from early output. Therefore, DFL distrusts the model output at first and dynamically increases the confidence margin to reduce the adverse effects caused by the early chaos of the model.
The main contributions of the paper are summarized below:
* We systematically analyze the existing deep-learning based edge detection methods and find two urgent problems.
* We propose a Compact Twice Fusion Network (CTFN) that fully fuses multi-scale features while maintaining model compactness. CTFN utilizes only 0.1M additional parameters beyond the backbone, reducing its computational cost to about 60% of other state-of-the-art methods.
* For the hard samples in edge detection, we introduce Focal Loss for the first time and propose a Dynamic Focal Loss to solve the chaotic output problem in the early training stage of Focal Loss.
* Extensive experiments are conducted on BSDS500, NYUDv2, and BIPEDv2 datasets to demonstrate the effectiveness and robustness of our method.
§ RELATED WORK
The origin of edge detection can be traced back to the last century. Early pioneering methods <cit.> mainly focus on local cues, which prevents methods from distinguishing between texture and edge.
In recent years, deep learning based methods have overtaken traditional approaches, and edge detection has entered the era of deep learning. We review the recent development of deep learning-based edge detection methods in terms of model structure and loss function.
§.§ Model structure
Holistically-nested Edge Detection (HED) <cit.> introduces the deep supervision mechanism to edge detection and learns multi-scale predictions holistically. Inspired by the great success of HED, the most recent methods <cit.> are committed to end-to-end learning and enrichment of global features. Multi-scale information in this kind of method is almost independent and final edge maps are obtained only by taking the weighted sum of multi-scale information, as shown in Fig. <ref>. Such methods can be called HED-based methods. It is worth mentioning that although BDCN <cit.> appears to have twice feature fusions, the gradient of the first feature fusion is truncated, so we still regard it as a HED-based method. The success of HED-based methods is undoubted, but the lack of semantic information in fine-scale features due to multi-scale features independence is an open problem worthy of study.
For the aforementioned problem, Wang <cit.> introduces a new fusion method, in which semantic information from coarse-scale is utilized to facilitate fine-scale feature learning. This kind of method has been further developed in recent years <cit.>. Since the structure of these methods is similar to U-Net <cit.>, they can be termed UNet-based methods, as mentioned in Fig. <ref>. UNet-based methods pay more attention to fine-scale features, while the coarse-scale semantic information gradually decay in the process of feature fusion.
A natural idea is to merge the two structures to fuse the features further. These merged structures are what we refer to as Multiple Feature Fusion (MFF) methods, as illustrated in Fig. <ref>. FCL <cit.> fuses multi-scale features twice. In the first fusion, LSTM is introduced to address attenuation problems during the downward fusion of high-level semantic information. In the second fusion, FCL designs a pixel-level weighting module to assign weights to each feature to refine the feature fusion process. BAN <cit.> repeatedly fuses multi-scale features in both bottom-up and self-downward directions to achieve fully fused multi-scale features.
MFF methods significantly improve edge detection accuracy. However, feature fusion dramatically increases model complexity, making them less ideal for downstream tasks and real-time computing.
Inspired by the multiple feature fusion methods <cit.>, we propose CTFN, which retains the structure of multiple feature fusion while removing unnecessary high-cost modules.
CTFN not only ensures the full fusion of multi-scale features but also reduces the number of parameters and the computational cost associated with multiple feature fusion.
§.§ Loss function
Weighted Cross-Entropy is employed to supervise the learning of the network in HED <cit.>. RCF <cit.> filters out samples with disputed ground truth based on HED and leads to the most popular loss function.
Since edge pixels are relatively scarce compared to non-edge pixels, stabilizing the backpropagation gradient requires significantly larger weights for edge pixels.
This causes the problem of blurry edges, that is, a wide transition region between edge and non-edge, as shown in Fig. <ref>(b). Therefore, subsequent works seek to obtain crisp edges by optimizing the loss function.
Deng <cit.> believes Weighted Cross-Entropy prevents generating crisp edges and replaces it with the dice coefficient and cross-entropy. In another work <cit.>, they further consider the structural differences between the output and the ground truth via the structural similarity index (SSIM) <cit.>. Huan <cit.> directly divides the image into three parts, i.e., edge pixels, confusing pixels, and non-edge pixels, and optimizes each part separately.
These loss functions give a further boost to edge detection, but they all ignore the problem of hard samples mentioned above. To this end, we propose a novel Dynamic Focal Loss.
§ METHOD
Our innovation can be divided into two parts: network architecture and loss function. In this section, we describe each in detail.
§.§ Compact Twice Fusion Network
The overall architecture of our proposed CTFN is shown in Fig. <ref>, which contains three main stages: backbone, first feature fusion, and second feature fusion. Multi-scale features are obtained through the backbone, and then edges are generated through twice feature fusion.
§.§.§ Backbone
For a fair comparison with existing methods <cit.>, the backbone is also based on VGG16 <cit.>. After removing the last pooling layer and the fully connected layers, the 13 convolutional layers in VGG16 are divided into 5 blocks by the pooling layers. The dilation in the 5th block is set to 2 to enlarge the receptive fields. The backbone generates the multi-scale features, which is a prerequisite for the two feature fusions, and it is also the module with the most parameters.
§.§.§ First Feature Fusion
In edge detection, identifying the differences between edges and textures is critical but can be challenging due to their similar appearances. Often, the differentiating factor is semantic information, which is closely tied to the receptive field of the feature. The larger the receptive field, the more semantic information that can be captured, thereby enhancing the preservation of edges and suppression of textures. Consequently, increasing the feature receptive field is key to improving feature quality, particularly for fine-scale branches that have smaller receptive fields than other scales.
To address this issue, we introduce a Semantic Enhancement Module (SEM) into the first feature fusion stage. The SEM specifically targets the fine branch features and works to increase their receptive fields, which improves their judgment ability. By enhancing the receptive field of the fine branch features, we can better capture relevant semantic information while minimizing the risk of introducing noise. This strategy allows us to achieve higher accuracy in edge detection tasks and produce sharper, more visually appealing results.
The detailed process for the first feature fusion is described in Algorithm <ref>.
In the first step, we aim to balance the multi-scale feature channels and reduce computational costs. To achieve this, we use convolution to decrease the number of channels from the original value to 21 while also maintaining a consistent representation of features across scales. In particular, we set the kernel size of convolution to 1 to effectively reduce parameters and ensure computational efficiency. This strategy has been adopted by other works as well, such as <cit.>.
After reducing the number of channels, we use the Group Normalization (GN) Layer <cit.> to normalize the features. This step is important because it prevents vanishing gradients during subsequent feature fusion, which can substantially hinder network training and result in poor convergence.
Finally, the normalized features are combined through matrix addition from top to bottom. This approach leverages the benefits of the different features at multiple scale levels and results in a more robust representation of the input data. By performing the feature fusion in this way, we can increase the network's accuracy and reduce noise and other undesirable artifacts that may compromise the integrity of the outputs. Overall, this series of steps culminates in a powerful deep learning framework that can effectively tackle a wide range of edge detection tasks.
In fact, the effectiveness of using coarse-scale branches to guide the learning of fine-scale branches has been verified by previous works <cit.>, but the previous works introduce redundant parameters and calculations in the process of feature fusion, while our proposed SEM only retains the structure of this feature fusion. SEM is an extremely lightweight module that requires minimal additional learnable parameters, typically only a few convolution operations with a kernel size of 1. As a result of this efficient design, SEM not only provides a significant boost to model performance but also results in a compact architecture.
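To make the fusion procedure concrete, the following is a minimal PyTorch sketch of the first feature fusion as we read the text above. The module name, the number of GroupNorm groups, and the default VGG16 channel widths are illustrative assumptions rather than details taken from the released code.

```python
import torch.nn as nn
import torch.nn.functional as F


class FirstFusion(nn.Module):
    """Sketch of the first feature fusion (SEM): 1x1 channel reduction,
    Group Normalization, then top-down addition of coarse features onto fine ones."""

    def __init__(self, in_channels=(64, 128, 256, 512, 512), mid_channels=21):
        super().__init__()
        # 1x1 convolutions reduce every scale to the same 21 channels.
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels]
        )
        # GroupNorm keeps the features well scaled before fusion.
        self.norm = nn.ModuleList(
            [nn.GroupNorm(num_groups=3, num_channels=mid_channels) for _ in in_channels]
        )

    def forward(self, feats):
        # feats: backbone features ordered fine (largest map) -> coarse (smallest map).
        x = [n(r(f)) for f, r, n in zip(feats, self.reduce, self.norm)]
        # Top-down fusion: upsample the coarser, semantically richer feature and add it.
        for i in range(len(x) - 2, -1, -1):
            up = F.interpolate(x[i + 1], size=x[i].shape[-2:], mode="bilinear",
                               align_corners=False)
            x[i] = x[i] + up
        return x
```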
§.§.§ Second Feature Fusion
In earlier works <cit.>, the weighted sum of multi-scale edges is directly used to compute the final edge. It is inaccurate for all pixels in the same channel to share the same weight and thus have equal importance in the fusion. Recent works <cit.> identified this problem and attempt to assign a different weight to each pixel. However, the additional operations increase the number of parameters and decrease the inference speed.
In the second feature fusion stage, we propose a Pseudo Pixel-level Weighting (PPW) module that decomposes the weight of each pixel into spatial and channel components as depicted in Fig. <ref>. To minimize computation cost, we use a 1×1 convolution to directly calculate the product of channel weight and multi-scale edges. The spatial weight is calculated using a spatial weighting module consisting of three 3×3 convolution layers and a softmax activation function, similar to the CoFusion <cit.>. Since the spatial weighting module only accounts for spatial weight, fewer channels are required. Experimental results show that spatial weighting of PPW is able to achieve comparable accuracy with CoFusion using only 25% of the number of channels.
The final prediction for each pixel P_jk can be calculated from the PPW module input X_ijk as follows:
P_jk = PPW(X_ijk)
= ∑_i=1^L(X_ijk× W_ijk)
= ∑_i=1^L(X_ijk× Wc_i × Ws_jk)
Here, L represents the number of multi-scale edges, and Wc and Ws represent channel weight and spatial weight, respectively. X and P denote the input and output of the PPW module.
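A minimal PyTorch sketch of one plausible implementation of the equation above is given below; the hidden width of the spatial-weighting branch, the placement of the softmax, and the final sigmoid are our assumptions and are not confirmed by the released code.

```python
import torch
import torch.nn as nn


class PPW(nn.Module):
    """Pseudo Pixel-level Weighting (sketch): fuse L edge maps with a channel
    weighting (1x1 convolution) and a lightweight spatial weighting branch."""

    def __init__(self, num_scales=4, hidden=16):
        super().__init__()
        # Channel weights Wc: a 1x1 convolution is a learned weighted sum over scales.
        self.channel_weight = nn.Conv2d(num_scales, 1, kernel_size=1)
        # Spatial weights Ws: three 3x3 convolutions plus softmax, similar to CoFusion
        # but with far fewer channels.
        self.spatial_weight = nn.Sequential(
            nn.Conv2d(num_scales, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_scales, kernel_size=3, padding=1),
            nn.Softmax(dim=1),  # per-pixel attention over the L scales
        )

    def forward(self, x):
        # x: (B, L, H, W) stack of side-output edge maps.
        ws = self.spatial_weight(x)           # spatial weights
        fused = self.channel_weight(x * ws)   # channel weighting of re-weighted maps
        return torch.sigmoid(fused)           # final single-channel edge probability map
```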
The PPW module bears some resemblance to CBAM <cit.>, but there are two primary differences.
Firstly, the two modules serve different purposes. The primary objective of PPW is to assign weights to the fusion of multi-scale features to obtain higher quality single-channel images. In contrast, CBAM's primary objective is to selectively emphasize informative features in their channel attention mechanism.
Secondly, there are differences in implementation. Specifically, in PPW, a 1× 1 convolution is used directly to perform channel weighting, and channels are reduced as early as possible in order to decrease module complexity.
§.§ Dynamic focal loss
Edge and non-edge pixels are extremely imbalanced in images, thus Weighted Cross-Entropy loss (WCE) is widely employed in edge detection, which is formulated as
WCE(p_i, y_i)={[ -αlog(p_i) if y_i=1; -βlog(1-p_i) if y_i=0; 0 otherwise ].
where p denotes the final edge prediction and y represents the ground truth. α = λ·| Y_-|/| Y | and β = | Y_+|/| Y |, where | Y_+| and | Y_-| denote the numbers of edge and non-edge pixels, respectively, and | Y |=| Y_+| + | Y_-|. λ controls the weight of edge over non-edge samples. Due to the inconsistency of annotations among different annotators, a threshold γ is introduced for the loss computation: pixel i is regarded as an edge and assigned y_i=1 if its ground truth value is larger than γ, and as non-edge if y_i=0.
WCE successfully addresses the imbalance between positive and negative samples in edge detection through a balance coefficient, but it ignores the problem of hard samples. Hard samples are those pixels that are easily misclassified, and they determine the quality of the edge map.
Hard samples are common in dense prediction tasks. In view of this problem, Lin proposed Focal Loss (FL) <cit.>
FL(p_i, y_i)={[ -αω log (p_i) if y_i=1; -βω log (1-p_i) otherwise. ].
where
ω={[ (1-p_i)^γ if y_i=1; p_i^γ otherwise. ].
Compared with WCE, FL contains a new weight factor ω, which utilizes the difference between the predicted result and the ground truth to weigh the sample and adjusts the weight flexibly by a hyper-parameter γ. Therefore, the model is guided to pay more attention to hard samples. The effectiveness of FL has been widely demonstrated <cit.>.
However, FL suffers from confusion in the early stage of model training. Because some modules of the model are randomly initialized, the difference between the prediction and the ground truth at the beginning of training cannot truly reflect the distribution of hard samples. At this point, using FL to focus on fake hard samples is detrimental to the learning of real hard samples. We therefore propose Dynamic Focal Loss (DFL) to remedy this situation. DFL is formulated as
DFL(p_i, y_i)={[ -αω' log (p_i) if y_i=1; -βω' log (1-p_i) if y_i=0; 0 otherwise. ].
where
ω'={[ (μ+ϵ(1-p_i)^γ)/(ϵ+μ) if y_i=1; (μ+ϵ(p_i)^γ)/(ϵ+μ) otherwise. ].
ϵ represents the current epoch and starts from 0. The effect of the hyper-parameters γ and μ will be discussed in the ablation study.
Our contribution is based on the fact that the output of the model can gradually reflect the correct distribution of the hard samples as the training progresses. Hence, our confidence margin in the prediction should rise gradually.
Here we realize this process through the gradually increasing weight ϵ/(ϵ+μ) placed on the focal term.
The experimental results show that this simple dynamic confidence setting can better define hard samples and guide the model to converge in a more correct direction.
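For reference, the following is a minimal PyTorch sketch of the Dynamic Focal Loss as we read the equations above. The per-image computation of α and β, the loss normalization, the default values of γ and μ, and the parameter names (thr for the annotation-agreement threshold, gamma for the focal exponent) are our assumptions.

```python
import torch


def dynamic_focal_loss(pred, target, epoch, gamma=2.0, mu=2.0, lam=1.1, thr=0.3):
    """Sketch of Dynamic Focal Loss with per-image class balancing as in WCE.

    pred   : sigmoid edge probabilities, shape (B, 1, H, W)
    target : ground-truth edge map in [0, 1]
    epoch  : current training epoch, starting from 0
    thr    : annotation-agreement threshold defining positive pixels
    """
    pos = (target >= thr).float()            # edge pixels (y_i = 1)
    neg = (target == 0).float()              # non-edge pixels (y_i = 0); the rest are ignored
    n_pos, n_neg = pos.sum(), neg.sum()
    total = n_pos + n_neg
    alpha = lam * n_neg / total              # larger weight on the scarce edge pixels
    beta = n_pos / total

    eps = 1e-6
    # Dynamic modulating factor: equals 1 at epoch 0 (plain WCE) and tends to the
    # focal terms (1 - p)^gamma and p^gamma as training progresses.
    w_pos = (mu + epoch * (1.0 - pred) ** gamma) / (epoch + mu)
    w_neg = (mu + epoch * pred ** gamma) / (epoch + mu)

    loss = -(alpha * w_pos * torch.log(pred + eps) * pos
             + beta * w_neg * torch.log(1.0 - pred + eps) * neg)
    return loss.sum() / (total + eps)
```

At epoch 0 the modulating factor equals 1, so the loss reduces to WCE; as ϵ grows it approaches standard Focal Loss, consistent with the μ→0^+ limit discussed in the ablation study.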
§ EXPERIMENTS
In this section, we first introduce the datasets and implementation details of the experiment, then compare our method with the State-of-the-art methods, and finally verify the effectiveness of each module in our method through ablation experiments and Visual Analysis.
§.§ Experimental datasets
The performance of our proposed CTFN is validated on three benchmark datasets (BSDS500 <cit.>, NYUDv2 <cit.>, and BIPEDv2 <cit.>) and compared with previous state-of-the-art methods. BSDS500 is the most popular dataset for edge detection, including 200 training images, 100 validation images, and 200 test images, each of which has 4-9 corresponding annotation results. In the experiment, a total of 300 images in the training and validation sets are used to train the model, and then the model is evaluated on the test set. NYUDv2 is an indoor scene semantic segmentation dataset whose edge ground truth is generated from segmentation maps; it contains 1449 groups of carefully annotated RGB and depth images, and each group has one annotation result. We use 795 images to train the model and evaluate it on the remaining images. BIPEDv2 was recently proposed by Soria <cit.> and is the second version of <cit.>. This dataset contains 250 carefully annotated high-resolution Barcelona Street View images. There are 200 images for training and validation and 50 images for testing. For a fair comparison, we use the same data augmentation method as RCF <cit.> for BSDS500 and NYUDv2, and the same as DexiNed <cit.> for BIPEDv2.
§.§ Implementation Details.
Our method is implemented with the PyTorch library. All experiments are conducted on an NVIDIA GeForce 2080Ti GPU with 11GB memory. The backbone of CTFN is initialized with ImageNet-pretrained weights and the remaining modules are randomly initialized. The threshold γ is set to 0.3 for BSDS500; γ is not needed for NYUDv2 and BIPEDv2, since their ground truth annotations are binary. λ is set to 1.1 for BSDS500 and BIPEDv2, and 1.2 for NYUDv2. The SGD optimizer is adopted. A standard Non-Maximum Suppression (NMS) is performed to produce the final edge maps before the quantitative evaluation. The F-measure is utilized as the quality evaluation standard for the generated edge maps:
F-measure = (2 × P × R)/(P + R)
where P represents precision and R represents recall.
Due to local correlation, the edges obtained by deep learning-based methods are actually edge probability maps, even after NMS processing. So we need to choose a threshold to binarize the edges.
There are two options for the threshold used to binarize the edges. One is to find an optimal threshold value for each image (OIS), and another is to use the same optimal threshold for the whole dataset (ODS).
The maximum tolerance allowed for correct matches between edge predictions and ground truth annotations is set to 0.0075 for BSDS500 and BIPEDv2, 0.011 for NYUDv2. More experimental details can be referred to previous work <cit.>.
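For readers unfamiliar with the evaluation protocol, the sketch below illustrates the ODS/OIS bookkeeping, assuming that per-image true-positive, false-positive, and false-negative counts have already been computed on a shared threshold grid by the standard boundary-matching benchmark; the dictionary layout and function names are illustrative assumptions.

```python
import numpy as np


def f_measure(p, r, eps=1e-12):
    # Harmonic mean of precision and recall.
    return 2.0 * p * r / (p + r + eps)


def ods_ois(counts):
    """counts: one dict per image with arrays 'tp', 'fp', 'fn' indexed by a shared
    grid of binarization thresholds (computed after NMS and boundary matching)."""
    # OIS: best threshold per image, then average the resulting F-measures.
    ois_scores = []
    for c in counts:
        p = c["tp"] / np.maximum(c["tp"] + c["fp"], 1e-12)
        r = c["tp"] / np.maximum(c["tp"] + c["fn"], 1e-12)
        ois_scores.append(f_measure(p, r).max())
    ois = float(np.mean(ois_scores))

    # ODS: aggregate counts over the whole dataset, then pick one global threshold.
    tp = sum(c["tp"] for c in counts)
    fp = sum(c["fp"] for c in counts)
    fn = sum(c["fn"] for c in counts)
    p = tp / np.maximum(tp + fp, 1e-12)
    r = tp / np.maximum(tp + fn, 1e-12)
    ods = float(f_measure(p, r).max())
    return ods, ois
```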
§.§ Comparison with the State-of-the-art Methods
§.§.§ Performance on BSDS500
We compare CTFN with recent deep learning based edge detection methods on BSDS500, and the results are summarized in Table <ref>. To be fair, all methods are based on VGG16. In terms of accuracy, CTFN, BAN, and FCL are significantly better than other methods. CTFN and BAN have advantages in ODS and OIS respectively and are all slightly better than FCL. In terms of the number of parameters, CTFN, RCF, and CAT are fewer than other methods. P' can reflect this advantage more intuitively. Analyzing parameters except for the VGG16 backbone, CTFN is 1/6 of BDCN, 1/9 of BAN, and 1/18 of FCL. In terms of the amount of calculation, FLOPs of CTFN is only 0.6G more than RCF, ranking second. FLOPs of CTFN is 25% less than BDCN, and nearly half less than FCL or BAN. In short, CTFN's accuracy is comparable to that of the state-of-the-art methods, while the number of parameters and computation cost is far superior to them. Additionally, BAN utilizes several tricks like combined loss function, edge map merging <cit.>, and two-stage training, while our CTFN is trained in an end-to-end manner.
A larger scale accuracy comparison is shown in Table <ref>. CTFN is compared with prior edge detection methods, including both traditional methods and deep learning methods. As shown in Table <ref>, CTFN reports ODS 0.817 on BSDS500, which is around 2% higher than the baseline method RCF. CTFN outperforms most existing HED-based methods and UNet-based methods with a large gap. Precision-Recall curves are presented in Fig. <ref>. As can be seen from the PR curves, when recall rate is less than 0.8, CTFN maintains a significant advantage in accuracy. In addition, we visualized the prediction results of images in BSDS500, and the comparison results are shown in Fig. <ref>. Visually, CTFN is significantly superior to other methods.
§.§.§ Performance on NYUDv2 and BIPEDv2
NYUDv2 has three types of inputs, i.e., RGB, HHA, and RGB-HHA. Following previous works <cit.>, we perform experiments on all three types of data. The results of RGB-HHA are obtained by averaging the edges detected on RGB and HHA. Table <ref> shows the comparison of our method with several recent approaches. Our method outperforms the baseline RCF by 0.4% on RGB-HHA. A comparison of edge detection results on NYUDv2 is shown in Fig. <ref>. From left to right are input images, ground truth, and the results of RCF, BDCN, and CTFN. The edges generated by RCF contain many textures that should have belonged to the background. BDCN, on the other hand, loses many edges. The results of CTFN are significantly better than those from RCF and BDCN.
We also record the evaluation results on BIPEDv2 and the comparison results with other methods are shown in Table <ref>. Similarly, CTFN achieves promising results, with accuracy only 0.1% lower than the best methods. We show some qualitative results in Fig. <ref>. From left to right are the input images, ground truth, and the results of RCF and CTFN.
Model size will not change when tested on different datasets, and the FLOPs ratio of different models is also fixed. Therefore, we only show the accuracy of each method in NYUDv2 and BIPEDv2. As for model size and FLOPs, the results in Table <ref> can be referenced.
§.§ Ablation Study
The main innovations of CTFN are the Semantic Enhancement Module (SEM) in the first feature fusion stage, the Pseudo Pixel-level Weighting (PPW) module in the second feature fusion stage, and the Dynamic Focal Loss (DFL). In this section, we verify the effectiveness of each module separately, as shown in Table <ref>. The model is trained on the BSDS500 train-val set and the performance is reported on the test set.
We first explore the impact of PPW. Experimental results are summarized in Group 1 of Table <ref>. Default represents the weighted summation most commonly used in previous edge detection methods <cit.>. CoFusion denotes the context-aware fusion block proposed in CATS <cit.>, and CoFusion-l denotes CoFusion with the same number of channels as PPW. Compared with weighted summation, both CoFusion and PPW lead to 0.3% higher ODS, while PPW requires only 1/4 of CoFusion's parameters because of its fewer channels. Since the task is divided into channel and spatial components in PPW, each subtask is simpler and requires fewer parameters. When using the same number of channels, PPW outperforms CoFusion-l. In summary, the default weighted summation uses channel weighting, CoFusion uses mixed channel and spatial weighting, and PPW uses separate channel and spatial weighting. The experimental results show that PPW is the best choice.
As shown in Group 2 of Table <ref>, we compare the impact of different loss functions. WCE is the abbreviation of Weighted Cross-Entropy loss and FL is the abbreviation of Focal Loss. FL-pre means using WCE in the first epoch and then switching to FL to avoid confusion in the early stages of training, which is a widely used practice. DFL is the abbreviation of Dynamic Focal Loss. We observe that the accuracy of DFL is significantly higher than that of WCE and FL, and the performance of FL-pre is slightly worse than DFL. In fact, FL-pre is a special case of DFL in which the hyper-parameter μ→0^+; we can simulate this case by setting μ to 10^-9 in Eq. <ref>.
We further test the impact of the hyper-parameters γ and μ. The results are summarized in Table <ref>. In Eq. <ref>, when γ=0, ω' is always equal to 1 irrespective of μ, and DFL degenerates into Weighted Cross-Entropy. Through a series of simple trials, we observe that as the values of γ and μ increase, the accuracy first increases and then decreases. This evaluation is conducted to verify the effectiveness of DFL, so more exhaustive exploration has not been done, even though it might lead to further improvements in accuracy.
We verify the effectiveness of SEM in Group 3 of Table <ref>. It can be observed that SEM improves the ODS of the model by 0.5%, showing that it can effectively improve model performance. The visualization results in Fig. <ref> further validate our conclusion. The effect of SEM is more noticeable for the fine-scale branches, which contain more textures when SEM is not used.
§.§ Visual Analysis
We visualize the weight ω of Focal Loss in Fig. <ref>, which is mentioned in Eq. <ref>. We can observe that in the early training stage, the output of the model is disordered, leading to similar and chaotic weight ω_1. Hard samples are not paid enough attention to. While the situation is quite different in the stable training stage, the misclassified negative samples near the edge and the error-detected texture in the background correspond to greater weight, which contributes to better edge map generation. By comparing Fig. <ref> and Fig. <ref>, it can be seen that the error of ω in the early training stage is relatively larger, so the confidence margin of ω should be low and gradually increase with the training of the model, which is also the principle of Dynamic Focal Loss.
Another point of note is that the transition from the edge to the non-edge area is softer in the results of CTFN. Due to the imbalance of positive and negative sample weights in Weighted Cross-Entropy Loss, existing methods suffer from blurry edges <cit.>. As shown in Fig. <ref>, the outputs of these methods are far from single-pixel edges. Even though the edges are thinned to single-pixel width by the subsequent Non-Maximum Suppression (NMS), blurry edges lead to localization ambiguity and are detrimental to the final accuracy. In contrast, the DFL used in CTFN assigns larger weights to the non-edge regions near the edges, because these regions differ greatly from the ground truth, as shown in Fig. <ref>.
We visualize the results of CTFN and a typical WCE-based method, BDCN <cit.>, in Fig. <ref>. Compared with the ground truth on the left, the results of the two models seem to differ little. However, when we show their details in Fig. <ref>, we can observe that the edges of BDCN are thicker, which leads to larger localization ambiguity after NMS, as shown in Fig. <ref>. This contrast can be observed more clearly after concatenating them in the channel dimension, as shown in Fig. <ref>. We can observe that the result of CTFN (purple) deviates less from the ground truth (cyan) than the result of BDCN (yellow).
Therefore, we can conclude that CTFN can effectively alleviate the problem of edge localization ambiguity.
§ CONCLUSION
In this paper, we review existing deep-learning based edge detection methods and propose a new Compact Twice Fusion Network(CTFN), in which we divide the edge detection model into three parts: the backbone, the first feature fusion stage, and the second feature fusion stage. We propose two lightweight modules SEM and PPW to fuse multi-scale features and further introduce a dynamic focal loss to focus on the hard samples of images. Experimental results on multiple datasets verify the effectiveness of CTFN. Compared to state-of-the-art methods, CTFN achieves competitive accuracy and higher efficiency.
Limitation. For a fair comparison, our method still uses VGG-16 as the backbone, which accounts for more than 95% of the model parameters. This limits further compression of the model, and the feature extraction ability of VGG-16 can hardly meet the needs of edge detection. In addition, compared with WCE, DFL adds two hyperparameters, whose best values differ across datasets, which makes it more difficult to transfer our method to different datasets. Therefore, in future work, we will explore more efficient backbones and design a dynamic Focal Loss with adaptive hyperparameters.
*Acknowledgments
This work is partly supported by the National Key R&D Program (Grant no. 2019YFF0301800), the National Natural Science Foundation of China (Grant no. 61379106), and the Shandong Provincial Natural Science Foundation (Grant nos. ZR2013FM036, ZR2015FM011). Xavier Soria was funded by the Air Force Office of Scientific Research under Award FA9550-22-1-0261.
§ DECLARATIONS
* Competing interests
No potential conflicts of interest were identified.
* Availability of data and materials
This work used publicly available dataset for the training and validation. The code source is available.
|
http://arxiv.org/abs/2307.04782v1 | 20230710180000 | Deeper than DEEP: A Spectroscopic Survey of $z>3$ Lyman-$α$ Emitters in the Extended Groth Strip | [
"Stephanie M. Urbano Stawinski",
"M. C. Cooper",
"Steven L. Finkelstein",
"Intae Jung",
"Pablo G. Pérez-González",
"Caitlin M. Casey",
"Olivia R. Cooper",
"Nimish P. Hathi",
"Benne W. Holwerda",
"Anton M. Koekemoer",
"Vital Fernández",
"Rebecca L. Larson",
"Ray A. Lucas",
"L. Y. Aaron Yung"
] | astro-ph.GA | [
"astro-ph.GA"
] |
firstpage–lastpage
[
[
Received 24 May 2023 / Accepted 30 June 2023
================================================
We present a spectroscopic survey of Lyα emitters in the Extended Groth Strip (EGS) field, targeting the regime near the Epoch of Reionization. Using Keck/DEIMOS, we observed 947 high-z candidates with photometric redshifts from 3 < z_phot < 7 and down to an H-band (HST/WFC3 F160W) magnitude limit of < 27.5. Observations were taken over the course of 8 nights, with integration times ranging from 4 to 7.8 hours. Our survey secured 137 unique redshifts, 126 of which are Lyα emitters at 2.8 < z < 6.5 with a mean redshift of z = 4.3. We provide a comprehensive redshift catalog for our targets, as well as the reduced one- and two- dimensional spectra for each object. These observations will provide an important auxiliary dataset for the JWST Directors Discretionary Early Release Science (DD-ERS) program the Cosmic Evolution Early Release Science Survey (CEERS), which recently completed near- and mid-IR imaging and spectroscopy of galaxies in the EGS field.
galaxies:high-redshift – surveys – catalogues
§ INTRODUCTION
Deep surveys in widely studied extragalactic fields are pivotal in characterizing galaxy evolution across cosmic time. The Extended Groth Strip (EGS) is one of the leading extragalactic fields on the sky, renowned for a balance of area and depth with observations extending from X-ray to radio wavelengths <cit.>. The EGS field is centered at α = 14^h19^m00^s and δ = +52^∘48^m00^s with the bulk of deep imaging observations covering a central region of 800 square arcminutes. Its relevance in extragalactic astronomy is due in part to major surveys using a variety of instruments, including the Hubble Space Telescope (HST) through both the All-wavelength Extended Groth strip International Survey (AEGIS, ) and the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS, ). Now with the launch of JWST <cit.>, the EGS has further cemented its status as a legacy field, due predominately to the Cosmic Evolution Early Release Science (CEERS) Survey (ERS 1345, PI: S Finkelstein[CEERS data can be publicly accessed in MAST: https://doi.org/10.17909/z7p0-848110.17909/z7p0-8481]), a Director's Discretionary Early Release Science (DD-ERS) program that has conducted both imaging and spectroscopy with JWST in the EGS (; Finkelstein et al. in prep).
The significant amount of existing observations and telescope time dedicated to the EGS makes supplemental spectroscopic observations increasingly powerful. Spectroscopic observations have routinely been used for confirmation or readjustment of photometric redshifts, ultimately improving the reliability of constraints derived from photometric spectral energy distribution (SED) fitting. Spectroscopy is also critical for obtaining certain spectral properties and dynamical measurements (such as emission and absorption line strengths, velocity offsets, and velocity widths). For these reasons, spectroscopic data drastically improve the implied constraints from photometry alone.
In June 2022, CEERS began imaging in the EGS using the Near Infrared Camera (NIRCam, ) and the Mid-Infrared Instrument <cit.> and continued to obtain additional photometric imaging in December 2022. The CEERS collaboration has since published the first data release of NIRCam observations <cit.> and MIRI imaging <cit.>. With the influx of photometry in the EGS field using JWST in the first year of operation, spectroscopic catalogs at high-z become particularly useful for improving inferred galaxy properties and informing future science.
Foundational spectroscopic surveys in the EGS, such as the DEEP2 and DEEP3 surveys (, see also ) have provided extensive spectroscopy within the EGS, greatly improving the inferred constraints on galaxy properties at low and intermediate redshifts. While the DEEP2 and DEEP3 surveys provide highly uniform spectra and an extremely high sampling density of secure redshifts at z ≲ 1, the EGS has trailed other extragalactic fields, such as COSMOS, GOODS-N, and GOODS-S, with respect to spectroscopic coverage at higher z <cit.>. Over the past decade, however, the 3D-HST survey <cit.> as well as the MOSFIRE Deep Evolution Field survey (MOSDEF, ) both began to push spectroscopic studies to higher z in the EGS field. 3D-HST used HST WFC3-IR/G141 grism spectroscopy to measure ∼ 3000 secure grism redshifts, including ∼ 500 galaxies at 2 < z < 3 and another 26 at 3 < z < 3.5. In addition, MOSDEF targeted ∼ 1500 galaxies at 1.37 < z < 3.80 from the EGS, GOODS-N, and COSMOS fields. Despite these more recent near-IR spectroscopic campaigns along with smaller efforts to study very high-z sources <cit.>, the EGS is still lacking in spectral coverage for galaxies at z > 4, an important epoch with respect to the recently-completed CEERS JWST photometric observations. CEERS spectroscopic observations using JWST NIRSpec and NIRcam are poised to greatly increase the publicly-available spectroscopy of high-z galaxies in the EGS <cit.>, yielding redshifts for hundreds of sources at a range of z, including at z ∼ 8-10 <cit.>.
To further supplement the recent deep, near- and mid-IR imaging data in the EGS from JWST, we undertook spectroscopic observations of intermediate- and high-z sources using the DEep Imaging Multi-Object Spectrograph (DEIMOS; ) on the KECK II telescope. We present spectroscopic observations of 947 targets with 137 unique spectroscopically confirmed redshifts. The majority (126) of these objects are Lyα emitters at 2.8 < z < 6.5, increasing the spectroscopic coverage of high-z galaxies in the EGS. In Sections <ref> and <ref>, we describe our target selection and observations for the survey, respectively. We present the Keck/DEIMOS redshift catalog in Section <ref>, along with subsequent analysis. Finally, in Section <ref>, we conclude with a discussion of the potential use of our survey and our recent collaborations with ongoing, ground-based high-z surveys in the EGS.
§ TARGET SELECTION AND SLIT MASK DESIGN
As its name suggests, the EGS field spans an extended, narrow area on the sky. To efficiently explore Lyα emission from z = 3-6 over the entirety of the field as probed by the CANDELS HST imaging, we required an optical spectrograph with a similarly broad field-of-view (FOV) and capable of a high level of multiplexing. The Keck/DEIMOS spectrograph is particularly well-suited due to its large FOV that matches the shape of the CANDELS footprint in the EGS as well its ability to observe ≳ 140 targets simultaneously.
Spectroscopic targets were selected from the photometric catalog of <cit.>, based upon the CANDELS HST and Spitzer IRAC observations in the EGS. Primary targets were selected to be at 3 < z_ phot < 7 (derived from ; ) and H < 27.5, with priority given to brighter sources. We adopted this magnitude limit to avoid potentially spurious sources at the detection limit of the existing HST imaging, while the photometric redshift limits were chosen to match the lowest and highest z at which Lyman-α would be detected given our instrument configuration (see <ref>). Slit masks were filled with any additional sources with z_ phot > 3 and H < 27.5 as well as (preferentially brighter) sources at z_ phot < 3 without a secure spectroscopic redshift from the DEEP2 and DEEP3 surveys. In total, the target population included 947 unique sources, with the vast majority (98%) selected to be at z_ phot > 3 from the <cit.> photo-z catalog (including 92% of targets at 3 < z_ phot < 6). Figure <ref> shows the distribution of our targeted sources as a function of z_ phot and H-band (F160W) magnitude, highlighting those that yielded a secure spectroscopic redshift (see <ref>).
We tiled the EGS with a total of 8 slitmasks, located at 4 overlapping positions along the strip, such that sources had at least two opportunities to be placed on a mask. Table <ref> summarizes the position and number of targets for each slitmask, along with the date of observation and total exposure time. Across all masks, slit widths were fixed to 1^'', with slit gaps measuring 0.5^'', so as to optimize the number of potential targets observed on a given mask. Slit lengths were allowed to vary, above a minimum slit length of 4^'', such that slits were sufficiently long to avoid any significant loss in redshift success (see results from DEEP2/DEEP3, ). Each slitmask includes roughly 140-150 sources per mask, with the final targeted sample including 947 unique sources down to the magnitude limit of H < 27.5. Across the 8 masks, 173 sources are targeted more than once, with 22 of these repeat targets placed on more than two masks. We describe how we handle repeat redshift measurements later in <ref>. Figure <ref> shows the distribution of targets with respect to the CANDELS HST/WFC3 imaging footprint as well as the CEERS JWST NIRCam, MIRI, and NIRSpec pointings. Along the southeast edge of the strip, overlap with the CEERS NIRCam fields is sub-optimal due to a lack of bright guide stars; though, the Keck/DEIMOS spectroscopy covers the central portion of the – at that time – planned CEERS observations.
§ DEIMOS OBSERVATIONS AND REDUCTIONS
As detailed in Table <ref>, spectroscopic observations were completed during June 2020 and 2021, prior to the launch of JWST in late 2021. With Keck/DEIMOS, we used the 600 lines mm^-1 grating blazed at 7500 Å and tilted to a central wavelength of 7200 Å, with the GG455 order-blocking filter employed. This spectroscopic setup provides an approximate spectral coverage of ∼ 4500–9900 Å, depending on the slit placement on the particular mask. The spectral resolution (FWHM) for the 600g grating on DEIMOS is ∼ 3.5 Å <cit.>, with a dispersion of 0.65 Å per pixel.
Each individual exposure was typically ∼ 1800 sec in length, with a minimum of 7 exposures per mask and no dithering applied between exposures. The total integration times achieved for each mask are listed in Table <ref>, ranging from ≲ 4 hours to as much as ∼ 7.8 hours. Calibrations for each mask included three internal quartz lamp flat-field frames and an arc lamp spectrum (using Kr, Ar, Ne, and Xe lamps). During observations, the DEIMOS flexure compensation system was utilized to ensure flexure frame-to-frame throughout the night (for both calibration and science images) differed by ≲ ± 0.25 pixels.
Observing conditions varied throughout the survey. In general, seeing ranged from roughly 06 to 1^'' with variable cloud cover. The DEIMOS detector is comprised of 8 CCDs, with each object spectrum spanning two chips (blue and red). One of the chips (CCD5) was inoperative during the 2020 observations and had an elevated level of read noise during the 2021 observations, resulting in decreased sensitivity (or a total loss of spectral coverage) at red wavelengths for approximately ∼ 25% of the slits per mask. The number of targets per mask, for which the resulting spectra do not fall on CCD5 (i.e. unaffected by this issue), are listed in Table <ref>.
Once the DEIMOS observations were completed, we reduced the entire dataset using the [<https://sites.uci.edu/spec2d/>] DEEP2/DEEP3 DEIMOS data reduction pipeline <cit.>. Spectroscopic redshifts were measured using a custom template fitter that incorporates both an emission-line galaxy template (included to find redshifts for low-z interlopers) as well as an asymmetric Gaussian profile to probe a single Lyα line where no other emission lines were detected. Examples of the best-fit templates for three high-z Lyα emitters are shown in Figure <ref>. The uncertainty reported in the redshift measurement is derived from the 1σ error on the location of the peak Lyα emission determined from the fit. The quality of each redshift was visually inspected and given a quality code (Q) following the previous classification from the DEEP2/DEEP3 surveys. A quality code of Q = -2 indicates major detector/reduction issues, rendering at least half of the spectrum unusable. Out of the 276 total targets assigned Q = -2, 245 targets (∼ 89 %) were placed on CCD5. Slits that had no detected emission or continuum are assigned a quality code of Q = 1. Upon visual inspection, targets with unclear or low-quality redshift measurements received a quality code of Q = 2. These objects would require follow-up analysis for redshift confirmation and are thus not reported in our final sample. Secure redshifts have a quality flag of either Q = 3 or Q = 4. Quality Q = 4 objects differ from Q = 3 by, upon visual inspection, having clear characteristics of an asymmetric Lyα profile (or multiple emission lines in the case of low-z interlopers).
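As an illustration of the single-line fitting step, the sketch below fits an asymmetric Gaussian (different widths blueward and redward of the peak) to a candidate Lyα line with scipy; this parameterization and the function names are our own choices and are not necessarily identical to the custom template fitter used for the survey.

```python
import numpy as np
from scipy.optimize import curve_fit

LYA_REST = 1215.67  # Lyman-alpha rest wavelength in Angstrom


def asymmetric_gaussian(wave, amp, wave0, sigma_blue, sigma_red, cont):
    """Gaussian with different widths blueward and redward of the peak."""
    sigma = np.where(wave < wave0, sigma_blue, sigma_red)
    return cont + amp * np.exp(-0.5 * ((wave - wave0) / sigma) ** 2)


def fit_lya(wave, flux, ivar, wave_guess):
    """Fit a candidate Lya line; return the redshift and its 1-sigma uncertainty."""
    p0 = [np.nanmax(flux), wave_guess, 2.0, 5.0, np.nanmedian(flux)]
    popt, pcov = curve_fit(asymmetric_gaussian, wave, flux, p0=p0,
                           sigma=1.0 / np.sqrt(ivar), absolute_sigma=True)
    wave0, wave0_err = popt[1], np.sqrt(pcov[1, 1])
    return wave0 / LYA_REST - 1.0, wave0_err / LYA_REST
```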
§ REDSHIFT CATALOG
We present the spectroscopic measurements from our Keck/DEIMOS observations in Table <ref> (the full version is available on the electronic version of the Journal). In summary, from 947 unique targets we were able to secure a spectroscopic redshift of high quality (Q = 3, 4) for 137 galaxies. Of these, 126 are Lyα emitters at 2.8 < z < 6.5 (yielding a 13 % success rate) with a mean redshift of z = 4.3. Figure <ref> shows the full redshift distribution for objects with secure spectroscopic measurements. The sample includes 11 low-redshift galaxies (z < 1.2; all but one were originally targeted as high-z candidates) along with four galaxies at z > 6 that probe the end of the Epoch of Reionization (EOR). The most recent version of our catalog, including the 1D and 2D spectra, can be downloaded directly from the survey webpage.[<https://sstawins.github.io/deeper_than_deep/>]
§.§ Sources with Multiple Observations
As mentioned in <ref>, 173 targets were observed more than once. However, due to observing conditions, mask placement, and signal-to-noise from mask to mask, not all of these repeat observations resulted in multiple secure redshift measurements. Of the 173 sources with multiple observations, only 35 galaxies have two or more independent, secure (Q = 3,4) redshift measurements.
The value of the redshift measured in repeated observations of a given galaxy can be affected by variation in the placement of the slit with respect to the galaxy as well as variation in the resulting S/N of the observed spectrum. To assess the uncertainty of our redshift measurements, we utilize the deviation in redshift for galaxies with multiple z_ spec measurements. We limit this analysis to the 30 sources with repeated observations yielding a secure redshift (Q = 3,4) inferred from Lyα emission (i.e. excluding the 5 low-z sources with multiple z_ spec measurements). We fit a normalized probability density function to the differences in the measured redshift (Δ z), yielding a one-sided 1σ standard deviation of 0.0013 (∼ 389 km s^-1) and a two-sided standard deviation of 0.0007 (∼ 209 km s^-1). This uncertainty is 1-2 dex larger than the uncertainty associated with the fits to the observed emission line in a single observation, as described in <ref>. In general, this analysis suggests that the redshifts (at z > 2) reported in our catalog are accurate to ∼ 1 × 10^-3 (∼ 300 km s^-1). For low-z sources (z < 2), where redshifts are largely determined by fits to multiple emission lines including [OII], Hδ, Hβ, [OIII], and Hα, the typical redshift uncertainty is assumed to be ∼60 km s^-1, similar to that of the DEEP3 survey which utilized the same instrument configuration and slit width (, see also ).
In addition to enabling a study of spectroscopic redshift precision, repeated observations of sources within our survey allow for co-adding observations to produce higher signal-to-noise spectra. This is particularly interesting for sources that did not yield a secure redshift based upon any individual observation. One such source is 37653, which was targeted twice with Keck/DEIMOS, yielding a lower-quality redshift (Q=1,2) for each observation. To increase the signal-to-noise, we co-added the reduced, sky-subtracted 2D spectra, producing a combined observation with an effective exposure time of 7.7 hours. We then extracted a 1D spectrum, using a boxcar extraction width of ± 3 pixels (∼ 0.7^''). Finally, we fit the resulting 1D spectrum, using the same asymmetric Gaussian fit described in <ref>, to find a best-fit, secure (Q=3) redshift of z_ spec = 4.8998. The resulting spectroscopic redshift is in excellent agreement with the photo-z estimated from existing ground-based and HST imaging <cit.>. This source is also one of a small number of high-redshift galaxies – with a confirmed spectroscopic redshift – detected in the initial MIRI imaging for the CEERS survey <cit.>. Recent NIRSpec prism observations (in December 2022) as part of the CEERS survey have confirmed our spectroscopic redshift (with z_ NIRSpec = 4.89651, , and see further discussion in <ref>).
§.§ Catalog Comparison to Literature
To put our catalog into context with a large, photometric catalog in the EGS, we match our sample of spectroscopically confirmed galaxies to the <cit.> photometric catalog. We match the two catalogs by separation on the sky, requiring a maximum separation of 0.2^''. Out of 137 total galaxies with secure redshifts from this work, 129 are found in the <cit.> catalog. Table <ref> includes the object ID from <cit.> for a given object. We compare the expected median photometric redshift reported in <cit.> with the spectroscopic redshift from this work in Figure <ref>. For comparison, we also plot our spectroscopic redshifts versus the photometric redshifts from our target catalog <cit.> in Figure <ref>. Overall, there is good agreement between our spectroscopic redshifts and the photometric redshifts from both catalogs. We find 71.3% (92 galaxies) of photometric redshifts from <cit.> are within Δ z < 0.05 (1+z), and only 12.4% (16 galaxies) exceed a maximum difference of Δ z > 0.15 (1+z). The median offset of these photometric redshifts (excluding significant outliers, Δ z > 0.15 (1+z)) from our spectroscopic measurements is Δ z/(1+z) = 0.013 with a 1σ standard deviation of 0.03. This is similar to the results from the <cit.> catalog, for which 65.1% of the photometric redshifts are within Δ z < 0.05 (1+z) of the spectroscopic redshift, and 16.3% are significant outliers outside Δ z > 0.15 (1+z). We find the median value of Δ z/(1+z) to be 0.009 with a 1σ standard deviation of 0.04 excluding significant outliers.
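The statistics quoted above follow the usual Δz/(1+z) convention; the short sketch below shows how such numbers can be computed, with the 0.05 and 0.15 thresholds matching those used in the text (the function name and output keys are illustrative).

```python
import numpy as np


def photoz_stats(z_phot, z_spec, good=0.05, outlier=0.15):
    """Summarize photometric-redshift accuracy against spectroscopic redshifts."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    is_outlier = np.abs(dz) > outlier              # catastrophic outliers
    clean = dz[~is_outlier]                        # excluded when quoting bias/scatter
    return {
        "frac_within_good": float(np.mean(np.abs(dz) < good)),
        "frac_outlier": float(np.mean(is_outlier)),
        "median_offset": float(np.median(clean)),
        "sigma": float(np.std(clean)),
    }
```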
Five galaxies with secure redshifts from this work (IDs 69719, 47173, 24711, 200415, 200814) correspond to sources with previously published spectroscopic redshifts. We find good agreement between our measured redshifts and the published values in all but one case. Two galaxies in our sample, ID 69719 at z = 3.438 and 47173 at z = 3.304, were also observed (in the near-IR) as part of the MOSDEF survey (ID_MOSDEF 30847 and 13470, ), with measured redshifts of z = 3.435 and z = 3.302, respectively. One low-redshift emission line galaxy in this work (ID 24711, z = 0.2948) was also included in DEEP2 Data Release 4 (DR4, ID_DR4 12028700; ) with z = 0.2955. Finally, two objects – ID 200415 at z = 0.4639 and ID 200814 at z = 0.6426 – were included in the spectroscopic catalog from the 3D-HST survey (ID_3D-HST 21636 and 29958, ). Our measured redshift for the latter source (ID 200814) is in good agreement with the measurement of z = 0.6738 from the lower-resolution 3D-HST grism spectrum. For ID 200415, however, we find a significant difference in redshift, relative to the 3D-HST measurement of z = 0.73945. The WFC3 G141 grism spectrum for this object includes an emission feature towards the blue (near ∼ 1.14 μ m) that is identified as Hα (yielding z = 0.73945). This portion of the 3D-HST grism spectrum, however, suffers from a high level of contamination associated with a nearby bright source, such that the emission feature identified as Hα is likely spurious. In our Keck/DEIMOS spectrum, we detect a multitude of emission features, including [OII], Hβ, and [OIII] — as well as Hα and [NII].
With the exception of ID 200415, the difference in the measured spectroscopic redshift between that of our survey and previously published values for these 4 sources ranges from Δ z = 0.0007-0.03. These small differences in the measured z are consistent with the differences in spectral resolution, rest-frame spectral features sampled, and potential variation in slit placement between our observations and those of the MOSDEF, DEEP2, and 3D-HST surveys.
During December 2022, the CEERS team observed the EGS with NIRSpec. Two galaxies with a measured spectrum (ID 10496 and 14628 from ) are also found in our catalog (ID 47173 and 37653, respectively). ID 47173 was observed twice in 2020 and 2021 with DEIMOS, resulting in two redshift measurements (z_spec = 3.3038 with Q=3 and z_spec = 3.3056 with Q=2). NIRSpec spectroscopy is consistent with the Q=3 measured redshift by Δ z = 0.002, with z_NIRSpec = 3.30186 (MSA ID 11699, ). The difference in redshift in part represents the offset between the Lyα emission (Δ v_Lyα) and the associated metal lines observed in NIRSpec spectroscopy, which better trace the systemic redshift of the galaxy. In this case, the observed offset in Lyα emission for this galaxy is 580 km s^-1. This is within the range of Δ v_Lyα found by <cit.> around z ∼ 3.5 star-forming galaxies, with offsets up to 800 km s^-1 and an average offset of ⟨Δ v_Lyα⟩ = 358 km s^-1.
For ID 37653, both DEIMOS observations yielded low-quality (Q = 2) redshift measurements. As discussed previously in <ref>, we use the co-added spectrum of two individual DEIMOS observations to measure a secure redshift for this galaxy. We find the spectroscopic redshift to be z_spec = 4.8998, which agrees with the NIRSpec observations at z_NIRSpec = 4.89651 (MSA ID 707, ). This galaxy also lies close (∼ 1-2 comoving Mpc, in projection) to a recently identified overdense region at 4.5 < z < 5.5 in the EGS <cit.>. We find 7 other galaxies within this region, all with 4.8 < z_spec < 5.0. Studies of high-redshift proto-clusters using the Millennium cosmological simulations show that such clusters at z ∼ 5 could be as large as 5-20 cMpc <cit.>. Taken together, the redshift and the location of 37653 could indicate that this galaxy is a member of this overdensity at z ∼ 5.
Using the redshift given by NIRSpec for 37653, we find the offset of Lyα emission is large (⟨Δ v_Lyα⟩ = 970 km s^-1) compared to offsets found by <cit.> around Lyα emitting galaxies at z = 4.4-5.7 (⟨Δ v_Lyα⟩ = 377 km s^-1 with a scatter of 329 km s^-1). In that work, none of the Lyα offsets exceed ∼800 km s^-1 for z = 4.4-5.7. More work is needed to identify the cause of this high-velocity offset.
§.§ Redshift Success
Lastly, in this section, we discuss the success rate of our observations and compare those to preliminary selection criteria and other physical parameters. In Figure <ref>, we present the redshift success rate for our survey as a function of H-band apparent magnitude, z_phot, and absolute UV magnitude; where the redshift success rate is defined to be the number of sources with a secure (Q =3,4) redshift divided by the total number of targets observed. For this analysis, we exclude objects with Q=-2 as they are effectively unobserved.
We first calculate the redshift success rate as a function of H-band magnitude and photo-z from our target sample <cit.>. In general, the redshift success rate is ∼ 15% for targets with H ≲ 25. Although this increases slightly, up to ∼ 20% for fainter targets, when the 1σ uncertainty of each bin is taken into account we find no significant dependence of redshift success on apparent H-band magnitude. This implies that apparent H-band magnitudes are not a biased tracer of Lyα emission over this redshift range.
The redshift success does vary with z_phot across our targeted redshift range, mostly due to observational constraints. Near the effective low-z detection limit for Lyα emission given our DEIMOS setup (z ∼ 3.1), the success rate reaches as low as 6.8%. Meanwhile, at z ≳ 5.5, we find a decrease in the z success rate, dropping to 2.1% for z_phot > 6, likely due to the increase in sky emission at redder wavelengths. Low-redshift sources (z < 2) have a low success rate (9.1 %), however the uncertainty is larger due to the smaller number of total targets.
The average redshift success rate is relatively flat from 3.5 < z_phot < 5.5. For this redshift range, the average success rate is 24.1% and the 1σ standard deviation of the distribution is ∼ 1.5 %. For z_phot > 3, we are exclusively probing Lyα emitters, hence the redshift success rate could be related to a Lyα detection fraction (f_Lyα) as measured in previous works <cit.>. However, here we are not taking into account the variability in instrument sensitivity as a function of wavelength, which would directly translate into a corresponding variation in the sensitivity to Lyα detection as a function of z. Therefore, more work must be done to make a direct comparison between our measured redshift success rate and existing measurements of the Lyα detection fraction. Overall, this analysis shows our survey had excellent success around our desired redshift range.
Using the <cit.> photometric catalog, we can also analyze the redshift success rate as a function of other physical parameters such as stellar mass, SFR, rest-frame U-V color, and absolute UV magnitude (M_UV). We find no significant dependence of the redshift success rate on stellar mass and SFR. Conversely, the redshift success rate does strongly depend on rest-frame U-V color and absolute UV magnitude. For rest-frame U-V color, we find that the success rate increases towards bluer colors, reaching 38% for U-V ∼ -0.1 and decreasing to ∼ 5% at U-V ≳ 1.2. As shown in the right-most panel of Figure <ref>, we also find that the redshift success rate increases towards brighter M_UV magnitudes, reaching as high as 34% for M_UV∼ -21. Although previous work by <cit.> found that the Lyα fraction from z = 3-6 to be higher at fainter M_UV, we again caution our redshift success rate is not directly comparable to a Lyα detection fraction. In their previous work, <cit.> calculated the Lyα detection fraction for sources with equivalent widths (EW) >50Å. While work to measure Lyα EWs for our sample is ongoing, preliminary measurements show we are probing Lyα emitters below 50 Å for M_UV magnitudes brighter than -18.
§ CONCLUSION
In this work, we targeted 947 high-redshift galaxies (z_phot > 3) in the EGS with Keck/DEIMOS. In total, we measured spectroscopic redshifts for 137 galaxies, including 126 confirmed Lyα emitters at 2.8 < z < 6.5. This catalog significantly expands the number of spectroscopically confirmed galaxies in the EGS field at z_spec > 3.
Overall, we find good agreement between our spectroscopic redshifts and photometric catalogs in the literature <cit.>. The photometric redshifts agree within Δ z / (1+z) < 0.05 for 65.1% and 71.3% of the matched sources, respectively. We also find 4 galaxies that have spectroscopic redshifts from other surveys in the literature (i.e. MOSDEF, 3D-HST, and DEEP2), with the difference in spectroscopic redshifts ranging from Δ z = 0.0007 - 0.03.
This work comes at an opportune time, given the recently completed observations from the JWST ERS program CEERS. With the influx of photometric data from JWST, it becomes increasingly useful to have spectroscopic redshifts to constrain photometric SED fits. Furthermore, spectroscopic redshifts are more reliable than those measured with photometric imaging, allowing improved target selection for future observations.
In December 2022 CEERS observed high-redshift galaxies, detecting faint emission from galaxies out to z = 4-6 using the NIRSpec multi-object spectrograph. Two galaxies targeted during these NIRSpec observations are also found in this catalog, with redshifts in agreement. Emission lines from these JWST observations will allow for analysis of gas conditions in galaxies at much higher redshift than previously studied. Together with our DEIMOS observations, we can start to measure the dependence of Lyα emission on important galaxy conditions at z > 4. As demonstrated in <ref>, these measurements together could also lead to further characterization of Lyα velocity offsets for galaxies at z = 3-6.
Given the importance of spectroscopic measurements of galaxies at z > 4 in the EGS field, we are currently working on additional ground-based observations in collaboration with the Web Epoch of Reionization Lyman-alpha Survey (WERLS; PI: C. Casey and J. Kartaltepe). The ongoing program, which was allocated time in 2022A/B and 2023A/B, is targeting the EGS Field with Keck/LRIS and Keck/MOSFIRE with the goal of detecting Lyα emitters at the latter half of the EOR (5.5 < z < 8; see the preliminary catalogs from O. Cooper et al. in prep for Keck/MOSFIRE observations and Urbano Stawinski et al. in prep for the Keck/LRIS observations). This future work will expand upon the efforts of this paper and significantly add to known z > 4 galaxies in the EGS.
§ ACKNOWLEDGEMENTS
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. MCC and SMUS acknowledge support from the National Science Foundation through grant AST-1815475. PGP-G acknowledges support from Spanish Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033 through grant PGC2018-093499-B-I00.
§ DATA AVAILABILITY
The data underlying this article are available in the survey GitHub webpage: <https://sstawins.github.io/deeper_than_deep/>.
mnras
|
http://arxiv.org/abs/2307.04027v1 | 20230708182951 | Slow-roll inflation and growth of perturbations in Kaniadakis Cosmology | [
"Gaetano Lambiase",
"Giuseppe Gaetano Luciano",
"Ahmad Sheykhi"
] | gr-qc | [
"gr-qc",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.07284v1 | 20230714112635 | Delaying Decisions and Reservation Costs | [
"Elisabet Burjons",
"Fabian Frei",
"Matthias Gehnen",
"Henri Lotze",
"Daniel Mock",
"Peter Rossmanith"
] | cs.DS | [
"cs.DS"
] |
Delaying Decisions and Reservation Costs
Elisabet Burjons^1
Fabian Frei^2
Matthias Gehnen^3
Henri Lotze^3
Daniel Mock^3
Peter Rossmanith^3
^1 York University, Canada
^2 ETH Zürich, Switzerland
^3 RWTH Aachen University, Germany
==================================================================================================================================================
We study the Vertex Cover and the Feedback Vertex Set problem in a
natural
variant of the classical online
model that allows for delayed decisions
and reservations. Both problems can be
characterized by an obstruction set of subgraphs that
the online graph needs to avoid. In the case of the Vertex Cover problem, the obstruction set
consists of an edge (i.e., the graph of two adjacent vertices),
while for the Feedback Vertex Set problem, the obstruction set contains all cycles.
In the delayed-decision model, an algorithm needs to maintain a
valid partial solution after every request, thus allowing it to
postpone decisions until the current partial solution is no longer
valid for the current request.
The reservation model grants an online algorithm the new and
additional option to pay a so-called reservation cost for any given
element in order to delay the decision of adding or rejecting it until
the end of the instance.
For the Feedback Vertex Set problem, we first analyze the variant with only
delayed decisions, proving a lower bound of 4 and an upper bound of
5 on the competitive ratio. Then we look at the variant with both
delayed decisions and reservations. We show that bounds on the competitive
ratio of a problem with delayed decisions imply lower and upper
bounds for the same problem when adding the option of reservations.
This observation allows us to give a lower bound of
min{1+3α,4} and an upper bound of min{1+5α,5}
for the Feedback Vertex Set problem. Finally, we show that the online Vertex
Cover problem, when both delayed decisions and reservations are
allowed, is min{1+2α, 2}-competitive, where α∈ℝ_≥ 0 is the reservation cost per reserved vertex.
§ INTRODUCTION
In contrast to classical offline problems, where an algorithm is
given the entire instance it must then solve,
an online algorithms has no advance knowledge about the instance it needs to
solve.
Whenever a new element of the instance is given, some irrevocable decision
must be taken before the next piece is revealed.
An online algorithm tries to optimize an objective function that is
dependent on the solution set formed by its decisions. The strict competitive ratio of an algorithm, as defined by Sleator and
Tarjan <cit.>, is the worst-case ratio of the performance of
an algorithm compared to that of an optimal solution computed by an offline
algorithm for the
given instance, over all instances. The competitive ratio of an online
problem is then the best competitive ratio over all online
algorithms. For a general introduction to online
problems, we refer to the books of Borodin and Ran
El-Yaniv <cit.> and of Komm <cit.>.
Not all online problems admit a competitive algorithm (i.e., one whose
competitive ratio is bounded by a constant) under the classical model. In
particular, this is the case for the Vertex Cover and Feedback Vertex Set problems discussed in this paper.
The goal in the general Vertex Cover problem is, given a graph G=(V,E), to
find a
minimum set of vertices S⊆ V such that G[V ∖ S] contains no
edges,
i.e., the obstruction set consists of a path of length 1.
In the classical online version of Vertex Cover, the graph is revealed vertex by
vertex, including all induced edges, and an online
algorithm must immediately and irrevocably decide for each vertex whether to
add it to the proposed
vertex cover or not.
The goal of the Feedback Vertex Set problem is, given a graph G=(V,E), to find a
minimum set of vertices S⊆ V such that G[V ∖ S] contains no
cycles.
In this case, the obstruction set contains all cycles.
In both problems, the non-competitiveness is easy to see:
If the first vertex is added to the solution set, the instance stops, thus leaving a single-vertex instance with an optimal solution size of zero.
On the other hand, not selecting the first vertex will lead to an instance where this vertex becomes a central vertex,
either of a star in the case of Vertex Cover, or of a friendship graph in the case of Feedback Vertex Set.
These adversarial strategies are arguably pathological and unnatural, as
decisions are enforced that are not based in the properties of the very problem
to be solved:
We need to start constructing a vertex cover before any edge
is presented or a feedback vertex set without it being clear if there are even
any cycles in the instance.
To address this issue in general online Node- and Edge-Deletion
problems, Komm et al. <cit.> introduced the preemptive
online model, which was re-introduced by Chen et al. <cit.> as
the delayed-decision model. This model allows an online algorithm to
remain idle
until a “need to act” occurs, which in our case means waiting until
a graph from the obstruction set appears in the online graph. The
online algorithm may then choose to delete any vertices in the current online
graph. The main remaining restriction is that an
online algorithm may not undo any of these deletions.
Let G be an online graph induced by
its nodes V(G) = {v_1,…,v_n}, ordered by their occurrence
in an online instance. The Vertex Cover problem with delayed decisions is to select, for every i, a
subset of vertices S_i
⊆{v_1,…,v_i} with S_1 ⊆…⊆ S_n such
that the induced subgraph G[{v_1,…,v_i}∖ S_i] contains no
edge. The goal is to
minimize |S_n|.
The definition of the Feedback Vertex Set problem with delayed decisions is identical, except that
“contains no edge” is replaced by “is cycle-free.”
A constant competitive ratio of 2 for the Vertex Cover problem is simple to
prove and given in the introduction of the paper by Chen et
al. <cit.>.
The Feedback Vertex Set problem, in contrast, is more involved. We show that no
algorithm can admit a competitive ratio better than 4 and adapt results by
Bar-Yehuda et al. <cit.> to give an algorithm that is strictly
5-competitive as an upper bound.
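For the Vertex Cover case, one standard way to obtain a ratio of 2 in the delayed-decision model is to take both endpoints of any uncovered edge once it appears. The sketch below illustrates that rule; the input encoding (each vertex arrives together with its edges to earlier vertices) is an assumption, and this is not necessarily the exact algorithm of Chen et al.

```python
# Sketch of the take-both-endpoints rule for Vertex Cover with delayed decisions.
def online_vertex_cover(arrivals):
    cover = set()
    for v, neighbours in arrivals:
        for u in neighbours:
            if u not in cover and v not in cover:   # an uncovered edge appeared
                cover.update({u, v})                # irrevocably take both endpoints
    return cover

# On the path a-b-c revealed vertex by vertex the cover is {a, b}, twice the optimum {b}.
print(online_vertex_cover([("a", []), ("b", ["a"]), ("c", ["b"])]))
```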
We also consider the model where decisions can be delayed even further
by allowing an algorithm to reserve vertices (or edges) of an instance.
If removing the reserved vertices from the instance would mean that a valid
solution is maintained, the instance continues. Once an instance has ended,
the algorithm can freely select the vertices to be
included in the final solution (in addition to the already irrevocably
chosen ones) among all presented vertices, regardless
of their reservation
status.
This reservation is not free: When computing the final competitive ratio, the
algorithm has to pay a constant α∈_≥ 0 for each reserved
vertex; these costs are then added to the size of the chosen solution set.
Let α∈ℝ_≥ 0 be a constant and G an online graph
induced by its nodes V(G) = {v_1,…,v_n}, ordered by their
occurrence in an online instance. The Vertex Cover problem with reservations is to select, for every i, vertex subsets S_i, R_i ⊆{v_1,…,v_i} with S_1
⊆…⊆ S_n and R_1 ⊆…⊆ R_n such
that G[{v_1,…,v_i}∖ (S_i ∪
R_i)] contains no edge. The goal is to minimize the sum |S_n| + |T| +
α|R_n|, where T ⊆ V(G) is a minimal vertex subset such that
G - (S_n ∪ T) contains no
edge.
Again, the definition for the Feedback Vertex Set problem with reservations is identical, except for replacing
“contains no edge” with “is cycle-free.”
For reservation costs of
α=0, the problem becomes equivalent to the offline version, whereas for
α≥1 immediately taking an element into the solution
set is never more expensive
than reserving it, rendering this reservation option useless.
The results for Vertex Cover and Feedback Vertex Set, each with reservations, are depicted in Figure <ref>.
The reservation model is still relatively new and has been applied to the simple knapsack problem <cit.> and the secretary
problem <cit.>. We note that the two cited papers
consider relative reservation costs, while for the two
problems in the present paper the cost per item are fixed.
The online Vertex Cover problem has not received a lot of attention in the past
years. Demange and Paschos <cit.> analyzed the online Vertex Cover problem with two variations of how the online graph is revealed:
either vertex by vertex or in clusters, per induced subgraphs of
the final graph. The
proven competitive ratios are functions on the maximum degree of
the graph.
Zhang et al. <cit.> looked at a variant called the Online
3-Path
problem, where every induced path on three vertices
needs to be covered.
In this setting, the competitive ratio is again dominated by the maximum degree of the
graph.
Buchbinder and Naor <cit.> considered online integral
and fractional covering problems formulated as linear programs
where the covering constraints arrive online. As these are a strong
generalization of the online Vertex Cover problem, they achieve
only logarithmic and not constant competitive ratios.
There has been some work on improving upon the bound of 2 for
some special cases of Vertex Cover in the model with delayed
decisions (under different names). For the Vertex Cover problem on
bipartite graphs where one side is offline, Wang and Wong
<cit.> give an algorithm achieving a competitive ratio of
1/1 - 1/e≈ 1.582.
Using the same techniques they achieve a competitive ratio of 1.901
for the full online Vertex Cover problem on bipartite graphs and
for the online fractional Vertex Cover problem on general graphs.
To the best of our knowledge, the Feedback Vertex Set problem has received
no attention in the online setting so far, most likely due to the
fact that there is no competitive online algorithm for this problem
in the classical setting. The offline Feedback Vertex Set problem, however,
has been extensively studied,
especially in the approximation setting.
One notable algorithm is the one in the paper of Bar-Yehuda et
al. <cit.>, yielding an approximation ratio of 4-2/n
on an undirected, unweighted graph. We adapt their notation in
Section <ref>, and our delayed-decision algorithm with a
competitive ratio of 5 is based on their aforementioned
approximation algorithm.
The currently best known approximation ratio of 2 by an (offline)
polynomial-time algorithm
was given by Becker and Geiger <cit.>.
The paper is organized as follows.
We first look at the Feedback Vertex Set problem, giving a
lower bound of 4 and an upper bound of 5 on the competitive ratio.
Then, we discuss how bounds on obstruction set problems
without reservation imply bounds on the equivalent problems
with reservations and vice versa, and how this applies to the
Feedback Vertex Set problem.
Finally, we consider the Vertex Cover problem with reservations, giving tight bounds dependent on the reservation costs.
§ FEEDBACK VERTEX SET WITH DELAYED DECISIONS
In this section, we consider the Feedback Vertex Set problem,
which is concerned with finding the smallest subset of the vertices
of a graph such that their removal yields a cycle-free graph.
We give almost matching bounds on the competitive ratio
in the delayed decision model.
Given an ε
>0, there is no algorithm for the Feedback Vertex Set problem with delayed decisions achieving a
competitive ratio of 4-ε.
The adversarial strategy depicted in
Figure <ref> provides the lower bound.
First, a cycle is presented, which forces any algorithm to
delete a single vertex. The adversary now repeats the following scheme n times:
Identify a pair of vertices of the latest added (half) cycle that
are still
connected and close a cycle between those two vertices by adding a
half cycle
between them.
Every time such a half cycle is added, any algorithm has to delete
a vertex from
it. This is sketched in Figure <ref> by the
black cycles with red dots for exemplary deletions of some
algorithm. We assume
w.l.o.g. that an algorithm chooses one of the vertices of
degree 3 between two cycles for deletion, we call these vertices of degree 3 branchpoints. If an algorithm deletes vertices of degree 2 instead, the adversary can force the algorithm to delete even more vertices.
After following this scheme, we split the remaining branchpoints (one for each
new cycle) into two sets A and B alternatingly. We now take each (a,b)-pair with a ∈ A and b ∈ B and connect the two vertices with two paths, forming a new cycle between each pair of branching vertices. In order to remove all cycles with the lowest amount
of deletions, any algorithm has to delete either all vertices from
the set A or all vertices from the set B. Once such a set is
chosen, say A, the adversary adds “self-loops”, i.e., new
cycles each only connected to a vertex of B, forcing the deletion
of all (branchpoint) vertices of B.
The optimal solution consists only of the vertices of B and possibly, and
depending on whether n is even and on the choice of A or B,
another
vertex from the first and the last cycle.
Thus, any online algorithm has to delete at least 4n -2 vertices, while an optimal algorithm only
deletes at most n + 2 vertices. The resulting
competitive ratio is hence worse than 4-ε for large enough values of n.
Next, we present Algorithm <ref>, which guarantees a
competitive ratio of 5. It is a modified version of the
4-approximation algorithm for Feedback Vertex Set by Bar-Yehuda et
al. <cit.>.
Algorithm <ref> maintains a maximal so-called 2-3-subgraph H
in the
presented graph
G and selects a feedback vertex set F for G within H. It is important to note that H is not necessarily an induced subgraph of G.
A 2-3-subgraph of a graph G is a subgraph H of G where
every vertex has degree exactly 2 or 3 in H.
Given a 2-3-subgraph H, a vertex is called a branchpoint of H if it has degree exactly 3 in H.
We say a vertex v is a linkpoint in H if it has exactly degree 2 in H and there is a cycle in G whose only intersection with H is v.
A cycle is called isolated in H if every vertex of this cycle (in G) is contained in H and has exactly degree 2 in H.
Algorithm <ref> adds every branchpoint, linkpoint and one vertex per isolated cycle without linkpoints in H to the solution set F.
Note that it is possible that vertices that were once added to F as linkpoints or due to isolated cycles can still become branchpoints of degree 3 in H due to new vertices in G. Later on in the analysis we also consider them as branchpoints.
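In the delayed-decision model, a deletion only ever becomes necessary once a cycle closes among the vertices that have not yet been deleted. A union–find structure gives a simple trigger for this moment; the sketch below is illustrative bookkeeping (names and input format are assumptions), not part of Algorithm <ref> itself.

```python
# Illustrative cycle-closure trigger over the undeleted part of the online graph.
class DSU:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def closes_cycle(dsu, v, undeleted_neighbours):
    """True iff the new vertex v closes a cycle; pass only edges to undeleted vertices."""
    roots = [dsu.find(u) for u in undeleted_neighbours]
    cycle = len(roots) != len(set(roots))   # two neighbours were already connected
    for u in undeleted_neighbours:
        dsu.union(v, u)
    return cycle
```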
Algorithm <ref> returns a feedback vertex set for the given input graph G.
By contradiction, assume there is a cycle C in G without a vertex in F.
If C contains no point of H, then the complete cycle C can be added to the subgraph H as an isolated cycle, thus H is not maximal, which contradicts the procedure of Algorithm <ref>.
Therefore, we can assume that there is at least one vertex of C in H. If the cycle contains no branchpoints, either the cycle is an isolated cycle, where all vertices are in H and one vertex is added to F,
or it intersects with H at just a single vertex.
This vertex is, thus, a linkpoint and part of F as well.
Or, a third option is that C intersects H on two or more points and none of them are branchpoints, in which case we can extend H, which also contradicts the procedure of Algorithm <ref>.
Thus, every cycle in G has to contain at least one vertex of F, so F is a feedback vertex set at the end.
Proving that Algorithm <ref> is 5-competitive is more tricky.
First, note that the size of an optimal feedback vertex set does not change if we consider a reduction graph G' instead of the graph G.
A reduction graph G' of a graph G is obtained by
deleting vertices of degree 1 and their incident edges, and
by deleting vertices of degree 2, connecting the two neighbors
directly (unless the vertex is self-looped).
The following lemma bounds the size of a reduced graph by its
maximum degree and the size of any feedback vertex set.
This will be used in the analysis of Algorithm <ref>.
The lemma is due to Voss <cit.> and is used in the proof of 4-competitiveness by Bar-Yehuda et al. <cit.>.
Let G be a graph where no vertex has degree less than 2. Then, for every feedback vertex set F that contains all vertices of degree 2,
|V(G)| ≤ (Δ(G)+1)|F|-2
holds, where Δ(G) is the maximum degree of G. In particular, if every vertex has a degree of at most 3, |V(G)| ≤ 4|F|-2 holds.
Algorithm <ref> achieves a strict competitive ratio of 5-2/|V(G)| for the online Feedback Vertex Set problem with delayed decisions.
At the end of the instance, after running Algorithm <ref>, call the set of branchpoints B⊆ F, the set of linkpoints L⊆ F, and the set of vertices added due to isolated cycles I. Again note that vertices can become branchpoints in later steps, even if they were added to F as a linkpoint or for an isolated cycle.
Let μ be the size of an optimal feedback vertex set for the graph G.
The vertices in I are part of pairwise independent cycles.
This follows from the fact that every cycle, that was handled as
an isolated cycle, was added completely to H, thus new isolated cycles cannot contain a vertex of one already added to H.
Therefore, |I| ≤μ, since an optimal solution must contain at least one vertex for each of the pairwise independent cycles.
Moreover, the cycles due to which the vertices of L were added to F are also pairwise independent. This is only true due to the possibility of relabeling linkpoints to branchpoints:
For a contradiction, first assume that two
cycles
overlap at a vertex v and intersect with H at
linkpoints ℓ_1 and ℓ_2. Then, H could be extended with a
path from ℓ_1 through v to ℓ_2, which would make ℓ_1
and ℓ_2 branchpoints, contradicting the assumption.
Since ℓ_1 and ℓ_2 are not branchpoints, both have degree 2 in H and a new path through the two cycles can be added to H, if there are no other points of H on this path. In this case the path would be added to H, therefore ℓ_1 and ℓ_2 would have become branchpoints. This contradicts the assumption.
If there are other vertices in H on the mentioned path, call the first of those vertices along the path x and assume w.l.o.g.
that it is part of the same cycle as ℓ_1.
If x is connected to either ℓ_1 or ℓ_2 via H and the
degree of x in H is at most 2 at any point while it was part of H, then we have a
contradiction since H can be extended and one of the linkpoints ℓ_1 or ℓ_2 becomes a
branchpoint.
The only case that remains is where x immediately had degree 3 with respect
to H when it was first added to H. In this case x
is a branchpoint. Note that ℓ_1 was already deleted as a
linkpoint at this time, since by definition the cycle would not
have caused any deleted linkpoints otherwise.
Since the adversary presents the instance vertex-wise, there must
be a vertex s such that there was no possibility to add x to
H before s was presented, but such that x immediately became
a branchpoint when s was added to G. In particular, x cannot have been added immediately to H when it was presented.
Therefore, there must be at least two
independent paths in G∖ H from x to s. But since the
algorithm could also add two independent paths from x to s to
H, and one path from x to ℓ_1, and the algorithm is forced
to convert linkpoints to branchpoints whenever possible, it
adds the path from x to ℓ_1 first. Note that priority is unambiguous, since there are no paths from x to some other linkpoints in G∖ H: Otherwise ℓ_1 and the other linkpoint would already be connected within H, thus making them branchpoints.
Thus we have |L| ≤μ.
This also shows that there cannot be a branchpoint inside a cycle
that was used to delete a linkpoint, without also converting the
linkpoint to a branchpoint.
It follows that |L| + |I| ≤ 2 μ.
If |B| ≤ 2 |L|, then we have |F| = |I|+|L|+|B| ≤ 3
|L| + |I| ≤ 4 μ, which proves the statement.
In every other case, assume |B| > 2|L|.
We now consider a reduction graph H' of the graph H∖ L,
and delete every component consisting of only a single vertex.
Every vertex in the resulting graph has degree 3. In
the graph H we can have up to 2|L| more branchpoints than
in the resulting graph here.
By Lemma <ref>, |B|-2|L| is at most 4μ(H')-2, where μ(H') is the size of an optimal feedback vertex set for the graph H'.
The size of an optimal feedback vertex set of the original graph G must
be at least μ(H') + |L| since every linkpoint in G is
part of a cycle that is not intersecting H'.
The inequality chain |F| = |I|+|L|+|B| ≤ |I|+3|L| +
|B|-2|L| ≤ |I| + 4|L| + 4μ(H')-2 ≤ |I| + 4μ(G)-2
≤ 5 μ(G)-2 concludes the proof.
One could think that the reason Algorithm <ref> does not match the competitive ratio of 4 is that the algorithm deletes vertices even in cases where it is not necessary. However, this is not the case. In the appendix we prove that Algorithm <ref> cannot be better than 5-competitive even if vertices in F are only deleted whenever they are part of a completed cycle.
The competitive ratio of Algorithm <ref> is at least 5-2/|V(G)|, even if vertices are only deleted whenever necessary.
§ ADDING RESERVATIONS TO THE DELAYED-DECISION MODEL
We now extend the previous results for the delayed-decision model
without reservations by presenting two general theorems that translate both upper and lower bounds to the model with reservation.
The delayed-decision model allows us to delay
decisions free of cost as long as a valid solution is maintained.
Combining delayed decisions with reservations, we have to
distinguish minimization and maximization:
For a minimization problem the default is that the union of selected
vertices and reserved ones constitutes a valid solution at any
point.
For a maximization problem, in contrast, it is that the selected
vertices
without the
reserved ones always constitute a valid solution.
Whether the goal is minimizing or maximizing, any
algorithm for a problem without reservations is also an
algorithm for the problem with reservations;
it just never uses this third option.
We present a slightly smarter approach for any minimization problem such
as the Feedback Vertex Set problem.
In the delayed-decision model, a c-competitive algorithm
for a minimization problem without reservation yields a
min{c,1+cα}-competitive algorithm for the variant with
reservation.
Modify the c-competitive algorithm such that it reserves whatever
piece of the input it would usually have immediately selected until
the instance ends – incurring an additional cost of at most c·α·
Opt – and then picks an optimal solution at a cost of Opt. This
already provides an upper bound of 1+cα on the competitive
ratio. But running the algorithm without modification still yields an
upper bound of c of course.
The algorithm can now choose the better of these two options based on
the given α, yielding an algorithm that is
min{c,1+cα}-competitive.
There is a min{5,1+5α}-competitive algorithm
for the Feedback Vertex Set problem with reservations.
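The choice between the two strategies in the proof above depends only on α and c; a small sketch of this selection rule, instantiated with c=5 for the Feedback Vertex Set problem:

```python
# Picking the better of the two strategies from the proof of the theorem above.
def reservation_upper_bound(alpha, c):
    return min(c, 1 + c * alpha)            # bound obtained by choosing the better strategy

def pick_strategy(alpha, c=5):              # c = 5 for Feedback Vertex Set
    return "reserve-then-solve" if 1 + c * alpha < c else "run-without-reservations"
```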
In the delayed-decision model, a lower bound of c on the
competitive ratio for a minimization problem without reservations
yields a lower bound of
min{1+(c-1)α,c} on the competitive ratio for the
problem with reservation.
The statement is trivial for α≥1; we thus consider now the
case of α<1. Assume that we have an algorithm with
reservations with a competitive ratio better than 1+(c-1)α.
Even though this algorithm has the option of reserving, it must select
a definitive solution, at the latest when the instance ends. This
definitive solution is of course at least as expensive as the optimal
solution. Achieving a competitive ratio better than 1+(c-1)α is
thus possible only if the algorithm is guaranteed to reserve
fewer than c-1 input pieces in total. But in this case, we can
modify the algorithm with reservation such that it immediately accepts
whatever it would have only reserved otherwise. This increases the
incurred costs by 1-α for each formerly reserved input piece,
yielding an algorithm without reservations that achieves a competitive
ratio better than 1+(c-1)α+(c-1)(1-α)=c.
There is no algorithm solving the Feedback Vertex Set problem with reservations that achieves a competitive ratio better than min{1+3α,4}.
§ VERTEX COVER
As already mentioned, the Vertex Cover problem with delayed decisions has a competitive ratio of 2 without reservations.
We now present tight bounds for all reservation-cost values α, beginning with the upper bound.
There is an algorithm for the Vertex Cover problem with reservations that achieves a
competitive ratio
of min{1+2α, 2} for any reservation value.
These upper bounds, depicted in Figure <ref>, have
matching lower bounds.
We start by giving a lower bound for α≤1/2.
Given an ε>0, there is no algorithm for the Vertex Cover problem with reservations achieving
a competitive ratio of 1+2α -ε for any α≤1/2.
We present the following exhaustive set of adversarial instances as depicted in Figure <ref>, deferred to the appendix.
First an adversary presents two vertices u_1 and v_1 connected to each other.
Any algorithm for online with reservations is
forced to cover this edge either by irrevocably choosing one
vertex for the cover or by placing one of the vertices in the
temporary cover (i.e., reserving it). In the first case, assume
w.l.o.g. that v_1 is the
chosen vertex for the cover. The adversary then presents a vertex
v_2 connected only to u_1 and ends the instance. Such an
algorithm would have a competitive ratio of 2 as it must then
cover the edge (u_1,v_2) by placing one of its endpoints in the
cover. Choosing vertex u_1 alone would have been
optimal, however. This is the same lower bound as given for the
model without reservations.
If, again w.l.o.g., the vertex v_1 is temporarily covered
instead, the adversary still presents a vertex v_2 connected to
u_1. Now an algorithm has four options to cover the edge
(u_1,v_2): Each of the two vertex u_1 or v_2 can be either
irrevocably chosen or temporarily reserved. If u_1 or v_2 are
temporarily covered, the instance will end here and the
reservation costs of the algorithm will be 2α. Both the
algorithm and the optimal solution will end up choosing only
vertex u_1,
which implies a final competitive ratio of 1+2α in this
case. If vertex v_2 is chosen, the instance will also end,
yielding a competitive ratio
worse than 2.
Thus, the only option remaining is to irrevocably cover the vertex
u_1. In this case, the adversary presents a vertex u_2
connected to v_2. An algorithm can then irrevocably or
temporarily take v_2 or u_2 respectively. If an algorithm
temporarily takes v_2 or u_2, the adversary will present one
more vertex u_0 connected to v_1 and end the instance. This
results in a graph that can be minimally covered by the vertices
v_1 and v_2. The algorithm, however, will have 3 vertices in
the cover and additional reservation costs of 2α for the
temporarily chosen vertices. Thus it will have a competitive ratio
of 3/2+α,
which is larger than 1+2α for the considered values of
α.
If an algorithm irrevocably takes u_2, the same vertex u_0
will be presented and then the instance will end with another
auxiliary vertex a_2 connected to v_2. An optimal vertex cover
would take vertices v_1 and v_2. Any algorithm that
has already irrevocably chosen u_1 and u_2, however, will have
to choose
two more vertices in order to cover the edges {v_1, u_0} and
{v_2,a_2}; thus, its competitive ratio will be worse than 2.
Again, the only remaining option is to irrevocably choose the
vertex v_2, after which an adversary presents a vertex v_3
connected to u_2. An algorithm may choose to irrevocably or
temporarily take the vertex u_2 or v_3. If an algorithm
decides to temporarily take any vertex or
irrevocably choose v_3, then the adversary presents an auxiliary
vertex b_2 connected to u_2 and ends the request sequence. An
optimal vertex cover in this case has size two, containing only
the vertices u_1 and u_2. In the best case, however, such an
algorithm has a vertex cover of size 3 and two temporary covers,
thus its competitive ratio will be at best 3/2+α, which
again is worse than 1+2α, as already observed.
In general, after irrevocably choosing u_1,…,
u_k-1 and v_2,…, v_k-1, and temporarily choosing
v_1, the adversary presents the vertex u_k connected to v_k.
If an algorithm chooses to reserve any of the endpoints or
irrevocably selects u_i, then the adversary presents the vertex
u_0 and ends the request sequence. In this case
an optimal vertex cover only contains the vertices v_i for every
i=1,…,k, thus it has size k. The algorithm, however, will
have to take v_1 and v_k in addition to the previously
irrevocably taken vertices, thus obtaining a vertex cover of
size 2k-1 at best together with two temporarily taken vertices.
Thus the
competitive ratio is
(2k-1+2α)/k=2-(1-2α)/k≥ 1+2α ,
where the inequality holds for every k≥1.
In the other case, after irrevocably choosing vertices
u_1,…,u_k-1 and v_2,…, v_k, the adversary presents
the vertex v_k+1 connected to u_k. If an algorithm chooses
to reserve one of the two endpoints or irrevocably chooses
v_k+1, then the adversary stops the request sequence.
An optimal vertex cover of such a graph consists of the vertices
u_i for every i=1,…,k and it has size k. The
algorithm will have to choose u_i in order to obtain a vertex
cover at all, obtaining a vertex cover of size 2k-1 at best
together with at least one reservation, thus achieving a
competitive ratio of
(2k-1+α)/k≥ 1+2α - ε
for any k ≥1/ε.
For larger values of α the same adversarial strategy holds, but it gives us the following lower bound.
For α > 1/2, no algorithm for the Vertex Cover problem with reservations is better than 2-competitive.
The lower bound of Theorem <ref> for α = 1/2 is 2 - ε. For larger values of α
the same adversarial strategy will give us a lower bound of 2. This is because, at all points during the analysis, either the value of the competitive ratio for each strategy was at least 2, or it had a positive correlation with the value of α, meaning that for larger values of α any algorithm following that strategy obtains strictly worse competitive ratios.
§ CONCLUSION
We have shown that some problems that are non-competitive in the classical model become competitive in modified, but natural
variations of the classical online model. Some questions remain open, such as the best competitive ratio for the Feedback Vertex Set problem with delayed decisions, which
we believe to be 4.
It may be worthwhile to investigate which results can be
found for restricted graph classes.
For example, it is easy to see that the online version of
Feedback Vertex Set is 2-competitive on graphs with maximum degree
three.
In addition we also introduced the reservation model on graphs,
providing an upper and a lower bound for general graph problems.
It would be interesting to try to find matching bounds, also for specific graph problems.
§ APPENDIX
§.§ Deferred Proofs
For convenience, we restate all statements before proving them.
The competitive ratio of Algorithm <ref> is at least 5-2/|V(G)|, even if vertices are only deleted whenever necessary.
The following adversarial strategy, illustrated in Figure <ref>, provides the desired lower bound.
First, the adversary presents n independent cycles C_1,...,C_n.
Algorithm <ref> deletes at least one vertex per cycle.
Second, the adversary connects each pair of neighboring cycles
C_i and C_i+1 for i ≤ n-1 with two independent paths.
This adds 4n-4 branchpoints.
Everything that is presented is part of the subgraph H, marked in blue
in Figure <ref>.
Now a vertex c (the top vertex in
Figure <ref>) is added and connected to every
branchpoint
with two edges.
This new vertex will not be part of the subgraph H.
Therefore, even a modified Algorithm <ref> that only deletes branchpoints whenever necessary has to delete every branchpoint.
Here, Algorithm <ref> deletes n+4n-4 vertices, whereas
an optimal solution consists of only one vertex per independent
cycle and the vertex c.
There is an algorithm for the Vertex Cover problem with reservations that achieves a
competitive ratio
of min{1+2α, 2} for any reservation value.
We present an algorithm that achieves a competitive ratio of
1+2α and – together with the algorithm by Chen et
al. <cit.>, which does not reserve vertices – we achieve
a competitive ratio of min{1+2α, 2}.
Our algorithm is itself an adaptation of the classical 2-approximation for Vertex Cover. Given a new vertex, the algorithm considers every edge and whenever an edge is uncovered the algorithm temporarily covers both endpoints by reserving the two vertices.
Given a graph G with a minimal vertex cover
of size k, this algorithm incurs reservation costs of at most
α· 2k, as the algorithm reserves at most as
many vertices as 2· Opt, where Opt is the size of the
optimal vertex cover.
However, because the final decision on the vertex cover is
completely left to the very last step and no vertex is permanently
chosen during the running of the algorithm, once the whole instance
is presented the algorithm can choose a minimal vertex cover for
G as the final solution. Thus, its competitive ratio is
(k+α· 2k)/k=1+2α.
For α > 1/2, the online algorithm without reservations by
Chen et al. <cit.> has a competitive ratio of 2, beating the ratio of 1+2α.
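A sketch of the reservation-based strategy analyzed above: reserve both endpoints of every uncovered edge during the instance, then output an optimal cover of the final graph. The brute-force final step merely stands in for an arbitrary offline computation, and the input encoding is an assumption.

```python
# Sketch of the 1 + 2*alpha strategy for Vertex Cover with reservations.
from itertools import combinations

def reserve_then_solve(arrivals):
    reserved, edges, vertices = set(), [], set()
    for v, neighbours in arrivals:
        vertices.add(v)
        for u in neighbours:
            edges.append((u, v))
            if u not in reserved and v not in reserved:
                reserved.update({u, v})      # temporary cover, cost alpha per vertex
    for k in range(len(vertices) + 1):       # smallest cover of the complete instance
        for cand in combinations(sorted(vertices), k):
            s = set(cand)
            if all(u in s or v in s for u, v in edges):
                return s, reserved
    return vertices, reserved
```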
§.§ Deferred Illustration
|
http://arxiv.org/abs/2307.05830v1 | 20230711225154 | SnakeSynth: New Interactions for Generative Audio Synthesis | [
"Eric Easthope"
] | cs.HC | [
"cs.HC",
"cs.SD",
"eess.AS"
] |
NIME'23,31 May–2 June, 2023, Mexico City, Mexico.
SnakeSynth: New Interactions for Generative Audio Synthesis
Eric Easthope
University of British Columbia
Vancouver, British Columbia, Canada
[email protected]
August 12, 2023
===============================================================================================================================
I present SnakeSynth, a web-based lightweight audio synthesizer that combines audio generated by a deep generative model and real-time continuous two-dimensional (2D) input to create and control variable-length generative sounds through 2D interaction gestures. Interaction gestures are touch and mobile-compatible with analogies to strummed, bowed, and plucked musical instrument controls. Point-and-click and drag-and-drop gestures directly control audio playback length and I show that sound length and intensity are modulated by interactions with a programmable 2D coordinate grid. Leveraging the speed and ubiquity of browser-based audio and hardware acceleration in Google's we generate time-varying high-fidelity sounds with real-time interactivity. SnakeSynth adaptively reproduces and interpolates between sounds encountered during model training, notably without long training times, and I briefly discuss possible futures for deep generative models as an interactive paradigm for musical expression.
[500]Applied computing Sound and music computing
[100]Applied computing Performing arts
[500]Computing methodologies Neural networks
[300]Human-centered computing Interaction techniques
[300]Human-centered computing Interaction paradigms
§ BACKGROUND
Interaction paradigms for deep generative models (DGMs) have remained relatively shallow in contrast to the diversity of interactions that are possible with musical interfaces and most research interest in DGMs still seems to revolve around generation of fixed-size images and audio in correspondence to fixed-size training data <cit.>. These models often work by learning a low-dimensional set of inputs that resemble the statistics of a training dataset enabling us to generate new samples through a small number of controls significantly smaller than the size of training data. In experimental contexts DGMs seem to be capable of generating novel outputs and controllably interpolating between training data features <cit.>. Recent developments including the success of WaveNet <cit.> and GANsynth <cit.> have revealed possibilities for how DGMs might be developed to be more expressive in terms of their outputs and this has consolidated some interest in using DGMs as tools for musical expression. Possibly the largest unified effort to do this might be Magenta (<https://research.google/teams/brain/magenta/>) at Google Research which leverages DGMs as part of a larger effort to create music synthetically using machine learning (ML) models.
Yet many projects featured for DGM-based music production and performance still suffer from common structural limitations in DGMs and how they function. Auto-regressive models <cit.> like WaveNet <cit.> inherently rely on sequential and often somewhat random updates to inputs to produce appreciable changes in outputs. In performance contexts changes in sound in response to new inputs then need to be computed in real time or otherwise delayed. Setting aside the challenges of computing DGM outputs in real-time this breaks down an essential auditory feedback loop between a performer and their instrument(s). Responses on the part of the performer in response to an auto-regressive DGM then have to be anticipated as inputs sequentially and must randomly evolve towards more refined outputs.
This runs counter to how we think about musical instruments and related interfaces. While we would not expect a string to resonate the same way every time the same note is played, we do expect to hear the same note and for it to resonate when we play it. Correspondingly there is some expectation in performance being informed by anticipation about where and how to sound instruments in a one-to-one way. This one-to-one-ness also ensures that instruments play the same way today as they do tomorrow. In turn we think that regressive DGMs and randomness alone cannot produce usable and moreover re-usable ML-based digital music tools.
Luckily not all deep generative models are regressive and some are capable of producing inputs and outputs in a one-to-one way. Generative Adversarial Networks (GANs) <cit.> and variational auto-encoders <cit.> amongst other DGMs require only a single forward pass (“single-pass”) from input to output making them better candidates for musical interfaces by enabling performers to learn relationships between how to play digital instruments and what will be sounded when they play them. The development of audio-based GANs by Donahue et al. <cit.> and Engel et al. <cit.> has shown particular promise for generating novel sounds and musical forms. Technically speaking, input-output pairs can be established and anticipated during performance.
In broader performance contexts DGM-based instruments should
exhibit compatible playing dynamics with respect to player expectation.
The application of more energy to the instrument, for example by
“strumming” or “bowing” with greater intensity, should correspondingly
produce more albeit possibly cacophonous sound. Moving or scanning to selectively “pluck” strings should not produce unwanted sound. Continuous “bowed” sounds should correspond to continuous movements, particularly mechanical “driving” and resonance. Reversing
the direction of movement should “reverse” the sound in some way; on a
string this might correspond to differences in “down-picked” and “up-picked”
sounds. These are difficult to express with DGMs and even neural
networks broadly speaking due to the fixed length of their
outputs and so there is an opportunity here for new designs. The key problem
is making DGMs expressive in ways beyond their capacity to yield
different outputs.
Part of this is a matter of producing and controlling continuous
variable-length sounds with DGMs. Discrete trigger-based controls for
musical DGMs resembling MIDI inputs are common but offer little to no
control over the length of output audio. This puts the burden of
controlling audio length on the underlying DGM(s). Previous work has
done little to address the generation of variable-length audio with DGMs
despite the apparent utility of producing variable-length sounds in
music contexts. We can concatenate sounds to produce longer streams of
audio but results are often cacophonous (see algorithmic music from
Dadabots, <https://dadabots.com>). Thinking in terms of a 2D image
DGM architecture this is equivalent to generating variable-width images
by conjoining images sequentially and is a patch for the problem at
best. This makes the problem of generating variable-length audio with
DGMs an interesting gap in current work and a means to explore DGM
expressivity in creative settings as a performance tool.
SnakeSynth (Figure <ref>) is an ML-driven music performance tool and interactive
controller that bridges the gap from discrete trigger-based DGM controls
to continuous variable-length controls to enable new forms of musical
expression and performance dynamics with DGMs. Deriving its name and
interactive paradigm from the “Snake” video game genre, SnakeSynth
uses real-time 2D point-and-click and drag-and-drop gestures to directly
control DGM audio playback length to generate variable-length audio in
creative contexts. Interactions with a programmable sound “grid”
determine audio length relieving DGMs of un-needed extraneous design
constraints and giving more control to performers as the primary
creative agent. We forego concatenation-based approaches and model the
variability of audio length as an external interactive control over what
is otherwise a fixed-length DGM. In turn SnakeSynth enables “plucked,” “strummed,” and “bowed” playing gestures by triggering different fixed-length DGM-generated sounds and blending them through interaction to form longer variable-length sounds.
§ DESIGN
§.§ Model
§.§.§ Generative Adversarial Network
We set up a GAN made of two networks, a generator and a discriminator, configured as adversaries such that the generator network learns to generate “fake” but convincingly real outputs that “fool” classifications by the discriminator. As they train on new samples the generator improves its weights by back-propagation to produce more “realistic” outputs that resemble the statistics of training data. In turn the discriminator improves its weights to discern fake samples from dataset samples. Any losses of the generator are theoretically gains of the discriminator and vice versa, and both networks improve with training.
We use a modified Deep Convolutional GAN (DCGAN) architecture and simplify the DCGAN generator <cit.> to three convolutional layers and remove the batch normalization layer succeeding the fully-connected layer. DCGANs have the advantage of using local convolutional layers in place of exclusively fully-connected layers <cit.> significantly reducing the total number of trainable parameters and in turn reducing total training time. The generator consists of a fully-connected layer and a sequence of three filter layers each containing a convolutional layer, a batch normalization layer, and activated with a leaky rectified linear unit (leaky ReLU) layer. Batch normalization layers regularize training samples to increase training stability <cit.>. The final convolutional layer is activated with a tanh function layer.
We also use a Convolutional Neural Network (CNN)-based discriminator with two convolutional layers, no dropout layers, and no batch normalization. Again this significantly reduces the total number of trainable parameters. The discriminator consists of two convolutional layers activated with leaky ReLU layers and a fully-connected layer with no activation layer. Without an activation layer the discriminator outputs values outside of [-1,1] and we compute cross-entropy loss directly from logits from the fully-connected layer. Unlike the DCGAN authors we do not change any network initialization weights before training and we use reshape and flatten layers to transform square images to and from layers expecting flat inputs. Both networks are summarized in Figure <ref> for a latent space of size two in correspondence with a 2D cursor or touch-based control space. This amounts to a generator with just over one million trainable parameters (approximately 245 parameters per pixel).
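A Keras sketch in the spirit of these two networks is given below; the filter counts, kernel sizes, and strides are illustrative assumptions and need not match the exact configuration summarized in Figure <ref>.

```python
# Illustrative DCGAN-style generator and discriminator for 64x64 spectral images.
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=2):
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_shape=(latent_dim,)),  # fully connected, no batch norm here
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(32, 5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, 5, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    return tf.keras.Sequential([
        layers.Conv2D(32, 5, strides=2, padding="same", input_shape=(64, 64, 1)),
        layers.LeakyReLU(),
        layers.Conv2D(64, 5, strides=2, padding="same"),
        layers.LeakyReLU(),
        layers.Flatten(),
        layers.Dense(1),   # unactivated logit; losses are computed with from_logits=True
    ])
```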
§.§.§ Dataset: 2D Spectral Images
Our DCGAN training data comprises of square 64x64 pixel images made from Mel-scaled spectral coefficients for a small collection of human voice samples, which is only one of many possible training sets. This is in accordance with observations by Engel et al. <cit.> that GANs are capable of producing high-fidelity audio from limited spectral information and are significantly faster to train. Notably Mel-scaled coefficients are an effective compression of spectral audio information in 2D that simultaneously accounts for human audio perception. Compared to 1D time series-based approaches like WaveGAN <cit.> using images of spectral coefficients in place of audio significantly reduces the overall dimensionality and memory requirements for training and our use of GAN models in SnakeSynth.
Sounds in SnakeSynth are computed by inverting generated 2D spectral images into 1D time series through Griffin-Lim inversion. Each sound is windowed using a cosine window or similar to remove edge audio artifacts. We currently choose the number of sounds in the SnakeSynth interaction grid so this inversion is automated as part of data post-processing. Faster inversions could be realized in real-time interactive settings.
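A librosa-based sketch of this pre-processing and Griffin-Lim inversion follows; the decibel scaling, image size, and window choice are assumptions.

```python
# Illustrative Mel-image encoding and Griffin-Lim decoding of short audio clips.
import numpy as np
import librosa

def audio_to_mel_image(y, sr, n_mels=64, n_frames=64):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)[:, :n_frames]   # roughly [-80, 0] dB
    return mel_db / 40.0 + 1.0                                    # rescale to [-1, 1] for tanh

def mel_image_to_audio(mel_img, sr):
    mel = librosa.db_to_power((mel_img - 1.0) * 40.0)             # undo the assumed scaling
    y = librosa.feature.inverse.mel_to_audio(mel, sr=sr)          # Griffin-Lim under the hood
    return y * np.hanning(len(y))                                 # cosine-type window on the edges
```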
§.§.§ GAN Training
Generator outputs are initialized as Gaussian noise and we train the DCGAN generator and discriminator in lockstep, training the generator first to produce new outputs and then the discriminator second on outputs produced by the generator and against real samples from the training dataset. This training strategy is equivalent to a zero-sum competitive game between two players (or networks in this case) where losses of the generator amount to gains of the discriminator and vice versa. Goodfellow et al. <cit.> represents this with the objective function
L(G,D) = 𝔼_x∼μ_ref[ln D(x)] + 𝔼_z∼μ_Z[ln(1-D(G(z)))]
for a generator G and discriminator D where 𝔼 shows expectation values and μ_ref is the set of possible outcomes in the sample distribution and μ_Z is the set of possible generated outcomes in a Gaussian (normal) distribution. Correspondingly 𝔼_x∼μ_ref shows the expectation value that real samples (x) from the training dataset or generated samples (z) are part of the sample space or the generated space respectively. Together these incur a generator and discriminator “loss” during training that we use to back-propagate updates to generator and discriminator weights.
The training dataset is shuffled and separated into batch sizes of one so that every image is seen during training and we train for 300 epochs using the same objective function defined by Goodfellow et al. <cit.>. Notably increasing the batch size can reduce training time. We find that a small 64x64 pixel DCGAN model with a two-dimensional generator latent space can train hundreds of epochs within a few minutes on a standard MacBook Pro (2020). Re-training on new samples correspondingly takes at most a few minutes longer and trained models and sounds can be stored and loaded for later use without further training.
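A sketch of one such lockstep update in TensorFlow, computing both cross-entropy losses directly from logits, is shown below; the optimizer choice, batch handling, and the fact that both networks are updated from the same forward pass are assumptions.

```python
# Illustrative single training step with logits-based losses.
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, g_opt, d_opt, real_images,
               batch_size=1, latent_dim=2):
    noise = tf.random.normal([batch_size, latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        g_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)      # generator tries to "fool"
        d_loss = (cross_entropy(tf.ones_like(real_logits), real_logits)
                  + cross_entropy(tf.zeros_like(fake_logits), fake_logits)) # discriminator separates
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```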
Because the generator input dimension is only two we are able to directly access and visualize the space of possible generator outputs by passing 2D quantile values as coordinate inputs to the generator latent space. This enables us to produce samples from most of the generator output distribution using a 2D and particularly finite grid-based interactive controller. To do this quantile values up to 95th percentile outcomes are computed from the inverse cumulative density function (inverse CDF) for a 2D Gaussian (normal) distribution with zero mean (μ = 0) and unit variance (σ^2 = 1). Plotting images generated from these quantile values produces a visualization of nearly all possible generator outputs to arbitrary precision from a restriction of the entire generator output distribution (Figure <ref>, right) helping us design controllers with consideration for underlying generator statistics.
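A sketch of this quantile-based mapping from a finite interaction grid to generator inputs; the grid size is an assumption.

```python
# Map an N x N interaction grid to 2-D latent points via the Gaussian inverse CDF.
import numpy as np
from scipy.stats import norm

def latent_grid(n=8, max_quantile=0.95):
    qs = np.linspace(1 - max_quantile, max_quantile, n)
    zs = norm.ppf(qs)                       # inverse CDF of a standard normal
    return np.array([[(zx, zy) for zx in zs] for zy in zs])   # shape (n, n, 2)

# Cell (i, j) of the interaction grid then indexes the latent point latent_grid()[i, j],
# which is passed to the generator to produce that cell's sound.
```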
§.§ Interactions
SnakeSynth affords a number of different interaction types naturally
through interactions with a two-dimensional N × N coordinate grid (Figure <ref>, left):
* Click (or touch) gestures produce fixed-length audio (resembles “plucking”).
* Linear or near-linear gestures produce variable-length audio (resembles “strumming”) (Figure <ref>). Gesture distance determines sound length.
* Suddenly changing movement direction creates sudden audio changes and corresponding audio attack (resembles a “finite bow”) (Figure <ref>).
* Continuous gestures create continuous audio (resembles an “infinite bow”)
(Figure <ref>). Particularly circular or near-circular gestures produce continuous rhythmic audio.
* Chaotic gestures with many directional changes to linear and/or circular movements create cacophonous audio (resembles “brushing”) (Figure <ref>).
Interactions 2-5 are showcased in Appendix <ref>.
§.§ Synthesis
Instead of directly concatenating audio clips we trigger equal-length clips asynchronously and sum them over time to produce variable-length audio in response to interaction. Each sound is windowed in data post-processing, so this is functionally similar to the overlap-add method. Simulating three equally-spaced interactions with the SnakeSynth grid in Figure <ref>, we see three generated sample sounds and their windowing functions sum to produce a single variable-length sound. We also see the amplitude of the resulting sound increase with greater sample overlap. This is chosen to produce interaction analogies to mechanical “driving” and resonance (as mentioned before); other ways to blend overlapping audio could be explored.
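The summation itself is simple; the sketch below shows one way to do it for pre-windowed, equal-length clips, with the sample rate and trigger-time representation chosen for illustration.

    import numpy as np

    def overlap_add(clips, onsets_s, sr=16000):
        # `clips`: list of equal-length, already cosine-windowed 1D arrays;
        # `onsets_s`: interaction (trigger) times in seconds. Overlapping clips
        # simply add, so denser gestures yield longer and louder output.
        starts = [int(t * sr) for t in onsets_s]
        out = np.zeros(max(s + len(c) for s, c in zip(starts, clips)), dtype=np.float32)
        for s, c in zip(starts, clips):
            out[s:s + len(c)] += c
        return out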
§ DISCUSSION
By foregoing concatenation-based approaches and modelling the variability of audio length in terms of interaction, we lose the precision of a triggered fixed-size model and have to choose how to blend sounds in context. However, we think that what we gain in flexibility, in terms of modularity and greater choice over DGMs, should not be understated, as it keeps the abundance of fixed-length GAN models and ongoing research available to us as design options. Non-generative models of the same dimension would even suffice. Going further, this flexibility enables us to create novel controllers for audio DGMs capable of generating variable-length audio. This does not seem to be widely recognized as a design interest and, surprisingly, we have seen little discussion of it in previous work.
SnakeSynth offers one way around the problem by treating audio length as a parameter of interactive control outside of the generative model. This bridges the gap from fixed-length audio DGMs to controller-driven variable-length DGMs and even to DGM-based music performance by recognizing that asynchronously triggering audio clips over time is congruent to mapping user interactions over time. Given the ubiquity of cursor movements and touch in 2D digital coordinate spaces these seem to be an appropriate starting point for discussion on and exploration of user interaction as a means of DGM control and particularly DGM control for musical expression.
Choices on how to map from the SnakeSynth coordinate grid to generator latent space(s) raise interesting questions about both the shape of the DGM latent spaces themselves but also how to construct novel and/or non-trivial maps between them and the SnakeSynth grid. We are not required to use quantile values as inputs either and interactions could be readily extended to any interface that produces at least two values in real time.
It is here, in the design of latent space-based control, that existing human-computer interaction principles like Fitts' law <cit.> could be applied, drawing on knowledge about distance to target, target size, cognitive load, etc. Similarly, the design of real-time sound blending beyond slowed attacks and windowing, especially for asynchronously triggered sounds, deserves further consideration. Semantically, some of these design choices would reflect different views of the audio space or context at hand so as to be recognizable and learnable by performers and reproducible in performance settings.
DGM research continues to evolve at a quick pace and we are still finding new ways to train high-fidelity GANs quickly, for example by progressively adding layers during training <cit.>, so it may soon be feasible to train small GAN models in real time. This would enable SnakeSynth and derived tools to “evolve” new auditory spaces in response to real-time interactions and/or new data. This takes us away from the mindset of fixed-length audio models and re-frames digital musical instruments as things capable of evolving over time to adapt to context, changing how we might think about digital music tools for performance.
§ CONCLUSION
I showed how SnakeSynth, demoed as a web-based audio synthesizer, combines DGM audio and real-time continuous 2D input to create and control variable-length generative sounds through various interaction gestures. Interaction gestures have analogies to strummed, bowed, and plucked musical instrument controls. I showed that sound length and intensity are modulated by interactive control with a 2D programmable sound grid, and briefly discussed possible futures for DGMs as an interactive paradigm for musical expression.
§ ACKNOWLEDGMENTS
I thank Robert for his guidance and for pointing to the novelty of real-time GAN synthesis in the browser.
§ ETHICAL STANDARDS
The author is self-funded and reports no conflicts of interest. No living subjects were studied in this work.
§ INTERACTIONS
|
http://arxiv.org/abs/2307.04807v1 | 20230710180058 | The Dragon-II simulations -- III. Compact binary mergers in clusters with up to 1 million stars: mass, spin, eccentricity, merger rate and pair instability supernovae rate | [
"Manuel Arca Sedda",
"Albrecht W. H. Kamlah",
"Rainer Spurzem",
"Francesco Paolo Rizzuto",
"Mirek Giersz",
"Thorsten Naab",
"Peter Berczik"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.GA",
"gr-qc"
] |
Compact binary mergers forming in star clusters may exhibit distinctive features that can be used to identify them among observed gravitational-wave (GW) sources. Such features likely depend on the host cluster structure and the physics of massive star evolution. Here, we dissect the population of compact binary mergers in the Dragon-II simulation database, a suite of 19 direct N-body models representing dense star clusters with up to 10^6 stars and <33% of stars in primordial binaries. We find a substantial population of black hole binary (BBH) mergers, some of them involving an intermediate-mass BH (IMBH), and a handful of mergers involving a stellar BH and either a neutron star (NS) or a white dwarf (WD). Primordial binary mergers, ∼ 30% of the whole population, dominate ejected mergers. Dynamical mergers, instead, dominate the population of in-cluster mergers and are systematically heavier than primordial ones. Around 20% of Dragon-II mergers are eccentric in the LISA band and 5% in the LIGO band. We infer a mean cosmic merger rate of ℛ∼ 12(4.4)(1.2) yr^-1 Gpc^-3 for BBHs, NS-BH, and WD-BH binary mergers, respectively, and discuss the prospects for multimessenger detection of WD-BH binaries with LISA. We model the rate of pair-instability supernovae (PISNe) in star clusters and find that surveys with a limiting magnitude m_ bol=25 can detect ∼ 1-15 yr^-1 PISNe. Comparing these estimates with future observations could help to pin down the impact of massive star evolution on the mass spectrum of compact stellar objects in star clusters.
methods: numerical – galaxies: star clusters: general – stars: general, black holes
§ INTRODUCTION
In less than a decade, the LIGO-Virgo-Kagra (LVK) collaboration discovered 76 confident gravitational-wave (GW) sources associated to merging stellar black holes (BHs) and neutron stars (NSs) <cit.>. This number raises up to 90 if one considers the population of events with a probability to have an astrophysical origin > 0.5 <cit.>, and it is destined to further increase by the end of the fourth observation run. Measurable quantities like component masses, spins, or the orbital eccentricity, and the merger rate of different types of compact binary mergers can represent the keys to identify the signatures of different formation channels <cit.>. From the theoretical standpoint, there is a plethora of mechanisms proposed to explain the formation of compact binary mergers, like isolated binary evolution <cit.>, dynamical pairing in dense star clusters <cit.>, formation in AGN disks <cit.>, secular dynamics involving three compact objects <cit.> or a binary orbiting a supermassive black hole <cit.>, and primordial BH evolution <cit.>. The majority of the aforementioned mechanisms relies on the assumption that compact objects are the relic of massive stars, and therefore they suffer the uncertainties affecting stellar evolution.
For example, the insurgence of pair instability supernova (PISN) and pulsational pair instability supernova (PPISN) mechanisms can carve in the BH mass spectrum the so-called upper-mass gap, a region extending in the range m_ gap = 40-150 where no remnants are expected. The boundaries of the gap are highly uncertain and depend on many poorly constrained quantities, like stellar rotation, rate of nuclear reactions, stellar evolution model <cit.>. The presence of several upper mass-gap BH candidates in the LVK source catalogue poses the question about the origin of these BHs. Stellar mergers, star-BH interactions, and repeated BH mergers represent possible pathways to overcome (P)PISN <cit.> and produce merging compact objects in dense star clusters <cit.>.
Spins could carry crucial information on the BH formation scenario and help placing constraints on the evolution of massive stars, but little is known about the distribution of stellar BH natal spins. Observations of merging BHs indicate that the spin distribution follows a Maxwellian distribution, with a peak around χ_ BH∼ 0.2-0.5 <cit.>. However, stellar BHs detected in low-mass X-ray binaries (LMXBs) are characterised by spins broadly distributed in the whole allowed range <cit.>, whilst those in high-mass X-ray binaries (HMXBs) involve BHs almost maximally spinning <cit.>.
Although these differences may be affected by observational biases, they may represent peculiarities of different evolutionary pathways.
Efficient angular momentum transport driven by magnetic stars could trigger the formation of BHs with natal spins as small as χ_ BH≲ 0.01, a mechanism proposed for BHs forming from single stars and in binaries with a negligible mass transfer among the components <cit.>. Significant mass transfer, instead, has been proposed to produce BHs with spin in a broad range in LMXB, even for BHs spinless at birth <cit.>, and nearly extremal BHs in HMXBs <cit.>.
Common envelope evolution in massive stellar binaries can lead to merging BBHs consisting of a nearly non-rotating BH <cit.>, although this strongly depends on the stellar evolution adopted <cit.>, and a BH companion with a spin spanning the whole allowed range of values <cit.>.
Amplitude aside, the alignment of the spin vectors with each other and with the binary angular momentum can affect the waveform, the final merger remnant mass and spin, and the recoil kick (e.g. see Equation <ref>). From an "observational" perspective, measuring the spin components is intrinsically hard and their directions generally vary owing to precession; thus the spin of observed mergers is usually characterised through the so-called effective spin parameter <cit.>
χ_ eff = (χ⃗_1 + qχ⃗_2)/(1+q) ·L̂,
where q < 1 is the binary mass ratio, χ⃗_1,2 are the two component spin vectors, and L̂ is the unit vector along the binary orbital angular momentum.
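As a minimal numerical illustration of Equation <ref>, the helper below evaluates χ_ eff from the mass ratio and the two spin vectors; the argument names and the explicit angular-momentum vector are our conventions.

    import numpy as np

    def chi_eff(q, chi1, chi2, L):
        # q = m2/m1 <= 1; chi1, chi2: dimensionless spin 3-vectors; L: orbital
        # angular momentum vector (only its direction matters here).
        chi1, chi2, L = map(np.asarray, (chi1, chi2, L))
        L_hat = L / np.linalg.norm(L)
        return float(np.dot(chi1 + q * chi2, L_hat) / (1.0 + q))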
Observations of BBH mergers suggest that χ_ eff may increase with the binary mass ratio, although some merging binaries exhibit a negative value of χ_ eff <cit.>, a feature generally associated with dynamical sources.
The orbital eccentricity at merger could represent another distinguishing feature of compact binary mergers, as dynamical interactions could trigger the formation of fairly eccentric (>0.1) sources contrarily to mergers forming from isolated binaries <cit.>. It has been recently claimed that up to four LVK sources may be eccentric <cit.>, although the effects of eccentricity and precession can lead to degeneracies in GW data analysis, making the eccentricity a poorly constrained quantity <cit.>.
Alongside GWs, the detection of (P)PISNe can represent a key piece to understand the final stages of massive stars' life. So far, only a few, most of which controversial, PISN and PPISN candidates have been observed in the last two decades <cit.>. The rarity of PISNe observations sets an intrinsic limit on the frequency of PISNe in star clusters, a quantity poorly constrained in theoretical and numerical models.
Dynamical interactions among stars in dense and massive star clusters can trigger both the formation of merging binaries and the development of PISNe, either from single massive stars or from stellar merger products. Young and intermediate-age star clusters are particularly interesting environments where these sources can form, because they are still in their dynamical youth, when cluster mass loss and expansion have not yet substantially affected the cluster structure and the interaction rate among stars is maximal. There is a vast literature investigating the formation and evolution of merging BHs in star clusters via different techniques, e.g. direct N-body simulations <cit.>, Monte Carlo simulations <cit.>, and semi-analytic tools <cit.>. However, there is a lack of direct N-body simulations of particularly dense (>10^5 pc^-3) and massive (>100,000) star clusters, owing to the computational cost required to simulate such systems. Exploring this range of masses and densities with N-body models can complement the already existing simulations and can offer a term of comparison to Monte Carlo simulations (see Figure 1 in Arca Sedda et al 2023a, hereafter AS-I).
In this work, which represents the third of a series, we present results from the Dragon-II star cluster database, a suite of 19 direct N-body simulations of young and intermediate-age star clusters comprised of up to 1 million stars and up to 33% of stars initially in binaries, characterised by typical densities ρ = (1.2×10^4 - 1.5× 10^7) pc^-3.
In our previous papers, we focused on the general properties of our cluster models and their compact object populations (paper AS-I) and the processes that regulate the formation and growth of IMBHs (Arca Sedda et al 2023b, hereafter AS-II).
Here, we dissect the properties of BH-BH, BH-NS, and BH-WD mergers developing in the Dragon-II clusters (details about these models are discussed in our companion paper AS-I), simulated with the Nbody6++GPU code[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The paper is organised as follows: in Section <ref> we briefly summarise the main features of our models; Section <ref> discusses the main properties of compact binary mergers in our models, focusing on the component masses and mass ratios, the eccentricity at merger, and the possible signatures that can identify their formation history; in Section <ref> we explore the impact of BH natal spins onto the global properties of the population, and we adopt a cosmologically motivated framework to infer the compact binary merger rate, the detection perspectives for future low-frequency GW detections, and the frequency rate and detection perspectives in magnitude-limited surveys of PISNe; Section <ref> summarises the main results of this work.
§ NUMERICAL METHODS
§.§ The clusters
The simulation database consists of 19 star cluster models characterised by an initial number of stars N = (1.2, 3, 6, 10)× 10^5, half-mass radius R_ = (0.48, 0.80, 1.76) pc, and an initial binary fraction f_b = 0.05-0.2. In the following, we briefly summarise the main properties of clusters, referring the interested readers to our companion paper AS-I for more details on the run properties.
To initialise the clusters we exploit the tool <cit.>.
Each cluster is modelled according to a <cit.> profile with adimensional potential well W_0 = 6. We adopt an initial metallicity Z = 0.0005, typical of several clusters possibly hosting a dense sub-system of compact objects or an IMBH, like NGC3201 or NGC6254 <cit.>.
Star masses are drawn according to a <cit.> initial mass function limited in the range m_ ZAMS = (0.08-150). Stars in primordial binaries are paired depending on their mass, with stars heavier than >5 paired according to a flat mass-ratio distribution, and lighter stars paired randomly. Binary eccentricities are distributed according to a thermal distribution, P(e) de = e^2 de, while initial semimajor axes are assigned according to a distribution flat in logarithmic values limited between the sum of stars' radii and a maximum value of 50 AU.
The host galaxy potential is modelled through a Keplerian potential assuming a total mass of M_ gal = 1.78×10^11. All clusters are placed on a circular orbit around this galaxy model at a distance of R_ clu = 13.3 kpc. The adopted galaxy mass and orbital radius lead to a value of the circular velocity compatible with what is observed in the Milky Way.
The resulting tidal radius is much larger than the cluster half-mass radius. Therefore, models are initially underfilling their Roche lobe, which implies that the initial impact of the host galaxy potential is negligible.
All simulations are terminated when either the mean BH mass falls below ⟨ m_ BH⟩≲ 15, there are no BHs with a mass above 30, or the simulated time exceeds at least one relaxation time. As a result, the simulation time in models spans a range T_ sim = 0.1-2.3 Gyr, corresponding to 0.8-80 times the initial half-mass relaxation time (see also Table <ref>).
Over the simulated time, we find a nice overlap (see also Figure 2 in paper AS-I) between the evolution of clusters' mass and half-mass radius and observed properties of young and intermediate-age massive clusters in the Milky Way <cit.>, the Magellanic clouds <cit.>, and other galaxies in the local Universe like Henize 2-10 <cit.> or M83 <cit.>. In this sense, Dragon-II models can represent one possible evolutionary pathway of (relatively) young massive clusters.
§.§ The code
The Dragon-II simulations have been performed with the Nbody6++GPU code <cit.>, a state-of-the-art direct N-body integrator that runs on high-performance computing hardware equipped with graphic-processing-units <cit.>. The code is part of the famous NBODY code series that was pioneered almost sixty years ago by Sverre Aarseth <cit.>.
The code implements a 4th-order Hermite integrator scheme with adaptive time-step based on the Ahmad-Cohen scheme for neighbours <cit.>, and implements a treatment for close encounters and few-body dynamics via the Kustaanheimo-Stiefel regularisation <cit.> and chain regularisation <cit.>.
Stellar evolution in Nbody6++GPU is based on an upgraded version of the BSE population synthesis code <cit.>. The main features of this state-of-the-art version, named BSE++, are described in detail in <cit.> <cit.>. We adopt the so-called level B of stellar evolution <cit.>, whose main characteristics are: a delayed supernova (SN) scheme <cit.>, pair- and pulsational pair-instability supernovae (PISN and PPISN) treated following <cit.>, a fallback prescription for NS/BH natal kicks, and metallicity-dependent winds for massive stars <cit.>. We refer the reader to <cit.> and paper AS-I for further details.
The common envelope phase in binaries is modelled through the widely known α_ CE-λ_ CE scheme, which enables us to regulate the fraction of orbital energy injected into the envelope (α_ CE) and to scale the binding energy of the envelope by a factor λ_ CE. In this work, we adopt α_ CE = 3 and λ = 0.5 <cit.>.
The adopted stellar evolution recipes imply that the stellar BH mass spectrum in Dragon-II clusters is limited to m_ BH,max = 40.5, unless BHs form from stellar mergers or star-BH interactions. In the latter case, Nbody6++GPU parametrises the amount of mass accreted in a strong star-BH interaction or collision via an accretion parameter f_c <cit.>, which we set to f_c=0.5. We refer the reader to <cit.> for a discussion about the impact of f_c on BH evolution.
§.§.§ Modelling the final stages of compact object binary mergers
The dynamics of relativistic binaries is followed via the orbit-average formalism <cit.>, which enables us to follow the evolution of compact binaries and their coalescence inside the cluster, similarly to previous works <cit.>.
In its current implementation, Nbody6++GPU follows the dynamics of relativistic binaries also when they are part of triples <cit.> and multiple systems, as well as when they form via hyperbolic interactions.
However, the BBH evolution is not followed down to the merger; rather, the binary is decoupled from dynamics and promptly merged when the BBH pericentre falls below a critical value, which we set to 10^2 Schwarzschild radii, i.e. a_ dec = 2kGm_ bin/c^2 = kR_ Sch with k=100.
Adopting such a limiting separation ensures that the binary is unlikely to undergo any further interaction with surrounding stars before merging. Considering the range of binary masses (1-300), star cluster masses (<10^6) and half-mass radii (0.1-3 pc) explored in this work, it is easy to show that the binary-single interaction timescale t_2-1 = (n σΣ)^-1 – with n the cluster density, σ the cluster velocity dispersion, and Σ the binary cross section – is generally >10^8 times larger than the binary inspiral timescale, t_ insp∝ a^4/(m_1m_2m_ bin) <cit.>.
Moreover, the typical merger time for a binary with mass m_ bin < 200 and separation a_ dec is generally t_ insp < 100 yr, i.e. much smaller than the cluster crossing time, t_ step∼ 10^5 yr.
Therefore, our procedure ensures reliable results while reducing the computational effort required to simulate the evolution of a binary with an orbital period of minutes or hours.
The pre-merger stages of the merging binary orbits are reconstructed by retrieving the orbital parameters at decoupling and integrating the orbit via the <cit.> equations:
da/dt = -64/5β(m_1,m_2)F(e)/a^3,
de/dt = -304/15β(m_1,m_2)e G(e)/a^4,
with
F(e) = (1 - e^2)^-7/2(1 +73/24 e^2 + 37/96e^4);
β(m_1,m_2) = (G^3/c^5) m_1m_2(m_1+m_2);
G(e) = (1-e^2)^-5/2(1+121/304e^2).
Along with the orbital evolution we calculate the associated GW strain and frequency <cit.>.
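The post-decoupling evolution described by these equations can be reproduced with a short stand-alone integration; the Python/scipy sketch below is an illustrative re-implementation (not the Nbody6++GPU routine), with masses in solar units, the initial semimajor axis in AU, and an arbitrary stopping separation of a few Schwarzschild radii.

    import numpy as np
    from scipy.integrate import solve_ivp

    G, c, MSUN, AU, YR = 6.674e-11, 2.998e8, 1.989e30, 1.496e11, 3.156e7

    def peters_inspiral(m1, m2, a0_au, e0, t_max_yr=1.4e10):
        mtot = (m1 + m2) * MSUN
        beta = G**3 * (m1 * MSUN) * (m2 * MSUN) * mtot / c**5
        r_s = 2.0 * G * mtot / c**2                  # Schwarzschild radius of the pair

        def rhs(t, y):
            a, e = y
            F = (1 + 73/24*e**2 + 37/96*e**4) / (1 - e**2)**3.5
            Ge = (1 + 121/304*e**2) / (1 - e**2)**2.5
            return [-(64/5) * beta * F / a**3, -(304/15) * beta * e * Ge / a**4]

        stop = lambda t, y: y[0] - 3.0 * r_s         # terminate close to contact
        stop.terminal = True
        sol = solve_ivp(rhs, [0.0, t_max_yr * YR], [a0_au * AU, e0],
                        events=stop, rtol=1e-8)
        return sol.t[-1] / YR, sol.y[1][-1]          # inspiral time [yr], final e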
Natal spins of stellar BHs can be assigned according to different distributions, three of which are based on physical stellar models, namely the “Geneva”, “MESA”, and “Fuller” models <cit.>, and four of which are rather generic, namely zero spins, a uniform spin distribution, a Gaussian spin distribution with mean value χ = 0.5 and dispersion σ_χ = 0.2, and a Maxwellian distribution with dispersion σ_χ = 0.2.
In this work, whenever spins are taken into account during the simulation we assume a Gaussian distribution with χ = 0.5 for stellar BHs, whilst for IMBHs we decide on a case by case basis, depending on the IMBH formation scenario (see paper AS-II).
Compact binary merger products are assigned a final mass and spin calculated via numerical relativity fitting formulae <cit.> and a relativistic recoil, generated by asymmetric GW emission <cit.>, expressed via the following relation:
v⃗_ kick = v_mê_⊥,1 + v_⊥(cosξê_⊥,1 + sinξê_⊥,2) + v_∥ê_∥,
v_m = Aη^2 √(1-4η) (1+Bη),
v_⊥ = Hη^2/(1+q)(S_2,∥ - q S_1,∥),
v_∥ = 16η^2/(1+q)[ V_11 + V_A Ξ_∥ + V_B Ξ_∥^2 + V_C Ξ_∥^3 ] ×
× | S⃗_2,⊥ - qS⃗_1,⊥| cos(ϕ_Δ - ϕ_1).
Here, η≡ q/(1+q)^2 is the symmetric mass ratio, Ξ⃗≡ 2(S⃗_2 + q^2 S⃗_1) / (1 + q)^2, and the subscripts ⊥ and ∥ mark the perpendicular and parallel directions of the BH spin vectors (S⃗) with respect to the direction of the binary angular momentum. We assume A = 1.2 × 10^4 km s^-1, B = -0.93, H = 6.9× 10^3 km s^-1, and ξ = 145^∘ <cit.>, V_11 = 3677.76 km s^-1, and V_A,B,C = (2.481, 1.793, 1.507)× 10^3 km s^-1. The vector Δ⃗ is defined as Δ⃗≡ (M_a+M_b)^2 (S⃗_b - qS⃗_a)/(1+q). The angle between the direction of the infall at merger and the in-plane component of Δ⃗, i.e. ϕ_Δ, is drawn from a uniform distribution, while the binary phase ϕ_1 is drawn uniformly in the range 0-2π.
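For reference, the recoil magnitude implied by the relations above can be evaluated as in the sketch below, which treats the spins as dimensionless 3-vectors with the z-axis along the orbital angular momentum and takes the in-plane angle ϕ_Δ - ϕ_1 as an explicit argument; this is an illustrative re-implementation with our own conventions, not the Nbody6++GPU code.

    import numpy as np

    # Fitting constants quoted in the text (velocities in km/s).
    A, B, H, XI = 1.2e4, -0.93, 6.9e3, np.radians(145.0)
    V11, VA, VB, VC = 3677.76, 2481.0, 1793.0, 1507.0

    def gw_recoil(q, s1, s2, phi_delta_minus_phi1=0.0):
        # q = m1/m2 <= 1; s1, s2: dimensionless spin vectors, z-axis along L.
        s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
        eta = q / (1.0 + q)**2
        xi_par = (2.0 * (s2 + q**2 * s1) / (1.0 + q)**2)[2]
        v_m = A * eta**2 * np.sqrt(1.0 - 4.0 * eta) * (1.0 + B * eta)
        v_perp = H * eta**2 / (1.0 + q) * (s2[2] - q * s1[2])
        v_par = (16.0 * eta**2 / (1.0 + q)
                 * (V11 + VA * xi_par + VB * xi_par**2 + VC * xi_par**3)
                 * np.linalg.norm((s2 - q * s1)[:2])
                 * np.cos(phi_delta_minus_phi1))
        # e_perp,1, e_perp,2 and e_par are mutually orthogonal unit vectors.
        return np.sqrt((v_m + v_perp * np.cos(XI))**2
                       + (v_perp * np.sin(XI))**2 + v_par**2)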
In Nbody6++GPU, the user can decide to set the GW recoil to zero or to a fixed value, or to calculate it self-consistently via Eqs. <ref> and <ref>, in which case the kick is assigned to the remnant and the resulting energy correction is included in a similar way as is done for natal BH kicks.
As described in detail in paper AS-II, in this paper series we adopt a simplified approach to investigate the impact of GW recoil in the simulations, owing to the fact that the relatively small sample of mergers does not enable us to filter out the inevitable stochastic effect of the BH spin directions and amplitudes on the kick amplitude.
The approach consists of three steps. First, we run all simulations assuming no GW recoil. Second, for each merger event in each simulation we evaluate the GW recoil assuming different distributions for the BH natal spins and determine whether the remnant is likely to be retained in the cluster. Third, if a BH undergoes n mergers in a simulation with zero GW kick, we restart the simulation shortly before the n-th merger event and enable GW kicks, assuming a spin for the merging components that depends on the BH formation history. This enables us to verify whether the remnant can be retained in the cluster and eventually merge again in an (n+1)-th merger generation. In paper AS-II, we have shown that this approach permits us to highlight the fact that, even when GW kicks are not taken into account, Newtonian dynamics is sufficient to eject all BH remnants from the parent cluster via strong binary-single encounters.
We note that none of the mergers with component masses <100 undergo multiple mergers. This suggests that even adopting zero GW recoils may have a negligible impact on the formation of compact binary mergers with mass <100.
§ RESULTS
§.§ The population of black hole binary mergers in Dragon-II clusters
In this section we describe the main results of our simulations, focusing on the population of compact binary mergers. Table <ref> summarizes the main properties of clusters and their compact objects.
The population of BHs formed in models and, in general, in star clusters likely suffers both the effects of single and binary stellar evolution and stellar dynamics. To highlight this aspect we show in Figure <ref>, for the models with R_ = 0.8pc and N=300k, the so-called initial to final mass relation (IFMR) that links the masses of compact objects and their stellar progenitors. The plot is dissected into BHs with a progenitor initially single or in a primordial binary system.
The population of BHs forming from single stars generally follows the expectations of the adopted stellar evolution recipes <cit.>. Deviation from the general trend owes to initially single stars that got caught in a pair and underwent mass-transfer.
The IFMR of BHs formed from stars in primordial binaries is more complex, being characterised, for example, by BHs in the upper mass-gap with masses in the range 40.5 - 80. This highlights the crucial role of binary stellar evolution and dynamics in sculpting the population of BHs in star clusters <cit.>.
§.§.§ Component masses and formation channels
The population of compact binary mergers in Dragon-II consists of 75 BH-BH, 2 NS-BH, and 1 WD-BH mergers. Among BH-BH mergers, 45 involve two BHs below the PPISN maximum mass (m_ BH < 40.5), 12 involve two mass-gap BHs, and 21 involve one BH below the gap and a mass-gap BH. Six BH-BH mergers involve a primary with a mass m_ BH,1=(5.4-7.1) and a companion with mass m_ BH,2=(2.55-3.6), i.e. just above the threshold separating NSs and BHs in our models. All these low-mass mergers are in primordial binaries.
As discussed in paper AS-II, the BHs in the upper-mass gap mostly form in a star-BH accretion event, either by purely dynamical interactions or stellar evolution. We stress that, throughout our models, we assume that a fraction f_c = 0.5 of the star mass is accreted onto the BH during an accretion event <cit.>.
When GW recoil are “switched off” 4 mergers involve a second or third generation BH, i.e. which underwent one or two previous mergers. The inclusion of GW recoil reduces the number of total mergers to 74. For a detailed discussion about the impact of GW recoil, see paper AS-II.
Figure <ref> shows the component masses and mass ratios of Dragon-II mergers and of mergers observed during the first three LVK observation campaigns, collected in the so-called GWTC-3 catalogue <cit.>. The plot includes all mergers occurring inside the cluster or outside the cluster after being ejected via dynamical interactions, considering zero GW kicks.
This plot illustrates the wealth of information hidden in the Dragon-II star clusters: we find mergers in the upper-mass gap, IMBHs[In this work we set a mass threshold of M_ IMBH,min = 100 to discern between BHs and IMBHs.], repeated mergers, and in a handful of cases also BHs merging with either a NS or a WD.
Interestingly, we find that mergers occurring inside the cluster are characterised by a primary with mass m_ BH,1 > 30 and a companion with a mass in the range m_ BH,2 = (20-50). Conversely, mergers occurring outside the cluster — or ejected mergers — are characterised by a mass-ratio q>0.6 and a primary mass typically m_ BH,1 < 40.
The number of mergers occurring inside the cluster (31) is comparable to that of binaries that merge after being ejected from the cluster (47), thus suggesting that in-cluster mergers can make up ∼40% of the total merger population. Among all of them, 27 are from primordial binaries (3 inside, 24 ejected), whilst 51 (28 inside, 23 ejected) are from dynamical binaries.
Figure <ref> shows the primary and companion mass of mergers originated from primordial, dynamical, or mixed binaries, with the latter identifying binary mergers in which at least one component was originally in a primordial binary. The plot exhibits some interesting features: 1) mergers from primordial binaries tend to have nearly equal-mass components, 2) purely dynamical mergers have masses that occupy a tight region of the plane with m_ BH,1=(20-50) and m_ BH,2=(20-40), 3) mergers with one component previously in a primordial binary are characterised by a heavy primary, m_ BH, 1 > 40, and a heavy companion, m_ BH,2 > 20. A similar trend is observed in recent N-body simulations tailored to relatively light star clusters, i.e. with mass <8,000 <cit.>.
As deeply discussed in paper AS-II, the crucial role of primordial binary dynamics is highlighted by the fact that all the IMBHs in clusters but one have an ancestor that was member of a primordial binary, regardless of the IMBH formation scenario.
Dynamics and binary stellar evolution deeply impact also the properties of stellar-size mergers. For example, “dynamical” and “primordial” mergers occupy two well separated regions of the primary mass - mass ratio plane.
The vast majority of primordial binary mergers occupy a region delimited by q > 0.6 and m_1 = (5-40), with the mass-ratio weakly increasing at increasing the primary mass: note that for m_1≲ 15 mergers have mass-ratio q=0.6-1, whilst mergers with a heavier primary have mass ratio q>0.85.
Dynamical mergers, instead, form in the right hand-side of Figure <ref>, generally at m_1 > 40.5. In this case, the mass ratio decreases with the primary mass as expected from the mass function limit, with companion masses in the range m_2 = (30-50).
We can identify three relatively well separated regions: low BH masses (m_ BH,1 <15) and widely distributed mass ratio (q=0.6-1) dominated by primordial binary mergers, BH masses in the range m_ BH,1 = (15-40.5) and high mass ratios (q>0.9) dominated by primordial binary mergers, and heavy BH primaries (m_ BH,1>40.5) with relatively massive companions (m_2=30-50) dominated by dynamical mergers.
In clusters, most binaries merging outside the cluster originate from primordial binaries and their ejection is typically triggered by the BH natal kick. However, all ejected mergers with component masses m_1,2 > 30 have a dynamical origin, owing to the adopted stellar evolution recipes.
We note that, given the limited simulation time, the population of mergers in clusters may lack some element that could form later in the cluster life, beyond several relaxation times. These late mergers would unavoidably have a dynamical origin, or at most a "mixed" origin, because all BHs formed in primordial binaries undergo a binary exchange or have been ejected in clusters over the simulated time. Moreover, late mergers will likely have smaller masses compared to those shown in Figure <ref>. This is mostly due to the BH-burning process, by which the average BH mass decreases over time <cit.>.
As a consequence, some BH mergers forming at late time may have properties similar to the primordial binary mergers shown in Figure <ref>.
Figure <ref> shows the mass distribution of the primary BH in mergers, dissected into in-cluster/ejected mergers and primordial/dynamical ones. Ejected binaries dominate the m_ BH,1≲ 20 mass range, whilst at larger primary masses their number and distribution is similar to that of in-cluster mergers. Dynamical mergers completely dominate the population of mergers with m_ BH,1 > 20, while primordial mergers dominate the population of lighter mergers. Noteworthy, we see that the primary mass distribution for mergers nicely overlap with the sample of mergers in the GWTC-3 catalogue, i.e. the catalogue of BBH mergers detected by the LVK collaboration <cit.>. However, a thorough comparison between modelled and observed mergers would require to take into account observation biases <cit.>. For this reason, we also overlay to our data the cosmic BH mass distribution inferred from GW detections.
Comparing models and observations can be crucial to assess the impact of different formation channels on the population of BH-BH mergers <cit.>. Our models suggest, for example, that BH mergers developing in star clusters could produce a substantial amount of mergers from primordial binaries. The progenitor binary could, in some cases, suffer the impact of dynamical interactions which may alter their orbital parameters. Nonetheless, in most cases BH mergers from primordial binaries could represent "isolated binary merger impostors", because they have properties typical of merging binaries developing within the isolated formation scenario but form in a dynamical environment. Taking into account the impact of these sources with a sort of mixed formation channel is crucial to correctly quantify the role of different formation channels in determining the shape of the mass distribution of detected merging BHs <cit.>.
Moreover, Dragon-II models highlight the role of dynamics in determining the formation of BH mergers with masses inside, and beyond, the mass gap, supporting and complementing previous works on the topic based either on smaller, or lower-density, N-body cluster models and Monte Carlo simulations <cit.>.
§.§.§ Delay times
The delay time of mergers (t_ GW), defined as the time elapsed from the beginning of the simulation to the binary merger, is rather peculiar. As shown in Figure <ref>, it exhibits three peaks at t_ GW≃ (0.5 , 1.5 , 10) Gyr. However, when the delay time is normalised to the initial half-mass relaxation time (t_ rlx) of the cluster, the overall t_ GW values nicely distribute around a main peak located at t_ GW/t_ rlx≃ 8-30. The exact location of the peak depends on the definition of t_ rlx. For the sake of clarity, in the plot we use three different expressions of t_ rlx taken from <cit.> (GR21), <cit.> (AR16), or <cit.> (RN21).
The three peaks that appear in the t_ GW distribution find a clear explanation looking at the t_ GW/t_ rlx distribution. In fact, the first peak at t_ GW = 500 Myr corresponds to mergers happening in simulations with t_ rlx=50-100 Myr, whilst the second peak corresponds to mergers occurring in clusters with a longer relaxation time (see Table <ref>). This interesting feature suggests, on the one hand, that the delay time depends intrinsically on the cluster initial properties, as they determine the relaxation time, and, on the other hand, that dynamical processes operate in a similar way over a relaxation time regardless of the cluster structure. The third peak, instead, corresponds to ejected binaries that merge outside the cluster, which are mostly products of primordial binaries ejected via SN explosion during the formation of one of the BHs in the pair.
§.§.§ Eccentricities
One intriguing question that arose since the first detection of GWs is whether it is possible to untangle different formation channels in the observed population of BH mergers. Among all the parameters at play, the orbital eccentricity could represent the key to answer this question. Broadly speaking, in fact, most BH mergers forming via binary stellar evolution are expected to feature a negligible eccentricity close to merger, either because the BBH progenitor undergoes common envelope, which shrinks and circularise the orbit, or because the BBH separation is initially so large that GW emission circularise the orbit before the merger. Binaries developing in star clusters, instead, can form with high eccentricity and sufficiently small separation that the merger occurs on a timescale shorter than GW circularisation.
At the lowest-order level, binaries merging in galactic fields, often called isolated binary mergers, are expected to be circular GW sources, whilst at least some of those developing in star clusters and galactic nuclei, named dynamical mergers, can preserve a significant eccentricity (i.e. e > 0.1) when entering the typical frequency bands of GW detectors.
This simplistic division between isolated and dynamical binaries does not take into account several layers of complication. For example, it is well known that star clusters and stellar nurseries may contain a large fraction of binaries, especially among the population of massive stars, where the percentage of paired stars attains values as large as 50-100% <cit.>. If primordial binaries evolve on a timescale shorter than the typical timescale of dynamical interactions, star cluster could harbor a sub-population of compact binary mergers with properties pretty similar to those forming in galactic fields, e.g. low eccentricities or peculiar component masses and mass-ratios.
With up to 33% of stars initially paired, Dragon-II simulations offer us the possibility to search for differences between mergers forming entirely via dynamics and those forming from the evolution of primordial binaries. Figure <ref> shows the semimajor axis and eccentricity of all BH-BH mergers in Dragon-II clusters calculated at the moment of decoupling, i.e. when the GW emission starts dominating over dynamical perturbations. The plot dissects the population of BH mergers into those coming from the evolution of primordial binaries, those assembled purely via dynamical interactions, and those involving at least one component that was formerly a member of a primordial binary. The populations of dynamical and mixed binaries seem to follow two different sequences, although the low statistics make it hard to understand whether they actually exist. The population of nearly circular primordial binaries is evident. These mergers can be considered mimickers of the field merger population, and constitute 33% of the whole population of mergers. Only two of the primordial binaries exhibit a significant eccentricity and a relatively small separation.
The first is a NS-BH binary, we postpone a discussion about this specific source to the next subsection.
The second one involves two low-mass BHs, with masses m_ BH1,2=(7.1+2.55) and eccentricity e=0.997. The progenitor of this merger was a binary that underwent a common envelope phase first, after which the first BH formed, and later underwent Roche lobe overflow, at the end of which the second BH also formed and received a small kick (∼ 3 km/s) that triggered the eccentricity increase.
As the binary shrinks and circularises because of GW emission, its frequency will increase. Therefore, a first step to determine whether a binary merger can appear eccentric in the sensitivity band of a specific GW detector requires to compare the binary eccentricity and the corresponding GW frequency.
We show in Figure <ref> the characteristic strain - frequency evolution for all mergers in our sample, assuming that they are located at a redshift z = 0.05, i.e. at a luminosity distance of 230 Mpc. To calculate the GW strain of the sources we follow the formalism laid out in <cit.> and the implementation described in <cit.> (see Eqs. 30-39). The GW signal from simulated mergers is overlaid on the sensitivity curves of current and future ground-based and space-based detectors like LIGO <cit.>, Einstein Telescope <cit.>, DECIGO <cit.>, and LISA <cit.>. The plot highlights how the eccentricity drops as the binary sweeps across different frequency bands.
The top panel in Figure <ref> shows the fraction of mergers with eccentricity above a given threshold calculated when mergers sweep through five frequency bands centered in f_ band = 10^-3-10^-2-10^-1-1-10 Hz, i.e. the typical sensitivity bands of space-borne detectors like LISA (<10^-2 Hz), mid-frequency detectors like DECIGO (10^-2-1 Hz), and ground-based detectors like LIGO-Virgo-Kagra or the Einstein Telescope (>1 Hz). The plot highlights the fact that around 20-40-5% of all mergers appear eccentric, i.e. e > 0.1, while moving through the f = 10^-3-10^-1-10^1 Hz frequency bands, respectively. Clearly, the detectability of these mergers depend on many parameters, among which the location of the merger and the detector properties. Nonetheless, the plot makes apparent the importance of future deci-Hz detectors in placing constraints on the population of eccentric BBH mergers. Moreover, comparing models with future observations will help to quantify the impact of star cluster dynamics on the cosmic population of merging BHs.
Noteworthy, the eccentricity carries information about the formation history of the merger. For example, we find that all mergers with an eccentricity e>0.1 in both the 0.05-1 Hz and 1-10 Hz frequency bands occur inside the cluster. The number of eccentric binaries doubles in the 10^-2-1 Hz frequency band, but these eccentric binaries appear almost circular while reaching the ground-based detector band, explaining why it is more likely to find a merging binary with significant eccentricity while sweeping through the deci-Hz band.
Any binary merger will spend some time in the detector band before merging. In order to characterise the evolution of the eccentricity as the binary inspirals, we calculate the average binary eccentricity weighted with the time to the inspiral, i.e. ⟨ e ⟩ = ∫_0^t_ merg e dt / ∫_0^t_ merg dt. Practically, we measure the binary eccentricity in subsequent time bins from the time of decoupling to the time of merger and weight it with the remaining time to the merger. This quantity is shown for all mergers in the bottom panel of Figure <ref>, along with the evolution of the eccentricity as a function of the peak frequency <cit.>
f_p = 0.29 Hz(m_1+m_2/30)^1/2(a/50 R_⊙)^-3/2×
× ceil[1.15(1+e)^1/2/(1-e)^3/2],
The step-like behaviour of the e-f_p relation is due to the ceil function in Equation <ref>, which returns the nearest integer larger than the function argument.
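For convenience, Equation <ref> can be evaluated as in the short helper below (masses in solar masses, semimajor axis in solar radii); the function name and units handling are ours.

    import numpy as np

    def gw_peak_frequency(m1, m2, a_rsun, e):
        # Peak GW frequency in Hz for an eccentric binary, following the
        # relation above; the ceil() term selects the dominant harmonic.
        harmonic = np.ceil(1.15 * (1.0 + e)**0.5 / (1.0 - e)**1.5)
        return 0.29 * ((m1 + m2) / 30.0)**0.5 * (a_rsun / 50.0)**-1.5 * harmonic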
The majority of in-cluster mergers clearly show an average eccentricity ⟨ e ⟩ > 0.7 across the whole 0.01-100 Hz frequency spectrum, whilst ejected mergers preserve a moderate eccentricity ⟨ e ⟩ < 0.4 in the f<1 mHz band. This suggests that GW detectors operating in different bands can probe different sub-populations of mergers forming in dense star clusters, with high-frequency detectors being more suited to observe short-lived, highly eccentric mergers occurring inside star clusters, and low-frequency detectors more suited to observe GW sources merging outside their parent cluster.
§.§ Exotic mergers
Despite the relatively small simulation grid, we also find some exotic mergers: a dynamical WD-BH merger and 2 NS-BH mergers, one dynamical and one from a primordial binary. The three mergers occur in the densest simulations in our sample: the WD-BH merger occurs in a simulation with N=120k, R_=0.47pc, the dynamical NS-BH merger develops in a simulation with N=300k, R_=0.47pc, and the one forming from a primordial binary in a simulation with N=120k, R_=0.8pc.
This type of merger is particularly rare in star clusters, because dynamical exchanges favor the replacement of the light component with another BH. Given their rarity, we discuss in the following the details of the formation and evolution of these interesting sources.
§.§.§ White dwarf - black hole mergers: implications for low-mass X-ray binaries
The WD-BH binary consists of a BH with mass m_ BH = 23.1 and a carbon-oxygen white dwarf (COWD) with mass m_ WD = 1.18. Initially, dynamical interactions pair the BH with the WD progenitor, a MS star with mass m_ MS,pro = 4.89. The two objects undergo common envelope during the late AGB phase of the companion, at the end of which the star turns into a WD, after ∼ 105 Myr. The resulting WD-BH binary has an "initial" eccentricity of e = 0.2 and a period of 900 days. The binary undergoes a series of strong scatterings that cause a rather chaotic variation of the binary semimajor axis and a systematic increase of the eccentricity from e=0.6 up to e=0.99994930 after 135 Myr, corresponding to ∼ 4 relaxation times. At this stage, GW emission becomes sufficiently effective to drive binary coalescence. Figure <ref> shows the time variation of the WD-BH binary semimajor axis and eccentricity before coalescence.
The WD Roche lobe is larger than the BH innermost stable circular orbit, hence the WD will likely undergo disruption and start feeding the BH, possibly evolving into a low-mass X-ray binary. In this regard, it is interesting to note the observation of an ultracompact X-ray binary in the Galactic cluster 47 Tuc <cit.>, likely comprised of a COWD and a BH <cit.>, with the BH probably heavier than m_ BH > 9 <cit.>.
Our model confirms the possibility to form such type of low-mass X-ray binary via interactions of stars and BHs in a dense cluster, even in a relatively short time (t < 200 Myr).
Ultimately, the binary shrinkage driven by GW emission will disrupt the WD and the mass falling onto the BH could possibly power jets that can give rise to transients with peak energy 10^47-10^50 erg^-1 and duration of a few minutes <cit.>, giving rise to a tidal disruption event (TDE).
Although this source is the only one undergoing coalescence, we find a total of 50 WD-BH binaries by the end of the simulations across all clusters. None of them has an orbit such as to trigger a TDE within a Hubble time, unless a strong interaction with some cluster member pushes the WD onto an extremely eccentric orbit. Pushing the orbit to at least e > 0.9999(0.99999) would lead to 1(26) further WD-BH mergers.
Note that the eccentricity value required to trigger a WD TDE may seem extreme, but it is comparable to the eccentricity achieved by the WD-BH merger, hence testifying that it is possible to reach such extreme eccentricity values in clusters.
§.§.§ Neutron star - black hole mergers: implications for multimessenger astronomy
Concerning NS-BH binaries, we find two mergers, one of dynamical origin and the other forming from the evolution of a primordial binary.
The dynamical NS-BH has a NS with mass m_ NS=1.28 and a BH with mass m_ BH = 14.96.
The BH, whose progenitor had a mass m_ MS = 26.7, undergoes a series of chaotic interactions with a primordial binary containing the NS and its companion, which eventually leads to the merger. When the binary decouples from the cluster dynamics, it has a semimajor axis of a = 0.33 AU and an eccentricity e = 0.99778817, corresponding to a GW peak frequency f_ GW = 0.01 Hz.
After decoupling, the binary evolution is completely dominated by GW emission and the variation of its orbital parameters can be described, at first order, via the <cit.> formalism. We find that as the binary sweeps through the 0.01-0.5-1-10 Hz GW frequency band the NS-BH merger has a residual eccentricity of e_ NSBH = 0.99779-0.9974-0.21-0.02, thus in principle measurable with future GW detectors, especially with those operating in the deci-Hz frequency band.
The chirp mass of this merger, ℳ_ chirp = 3.4, is typical of dynamically assembled NS-BH mergers <cit.>, but hard to produce with isolated binary evolution <cit.>, although this strongly depends on the adopted stellar evolution scheme <cit.>.
The primordial NS-BH binary merger, instead, forms from a primordial binary with initial mass components m_1,2 = (26.3 + 18.7) and evolves through a common envelope phase initiated by the primary, which eventually forms the BH. Shortly after, the binary undergoes a second common envelope phase and eventually the companion evolves into a NS. Eventually, the merging binary consists of a BH with mass m_ BH = 5.6 and a NS mass m_ NS=1.88. Note that these properties, are intriguingly similar to GW200115, a GW source detected by the LVK during the O3 observation campaign, which was characterised by a BH with m_ BH= 5.7^+1.8_-2.1 and a NS with m_ NS = 1.5^+0.7_-0.3.
When the NS forms, the common envelope has shrunk the binary from 2.5 R_⊙ to a = 0.6 R_⊙, whilst the natal kick imparted onto the NS at formation causes an enhancement of the eccentricity from nearly zero to e= 0.57. The new orbital parameters are such that GW emission dominates over dynamics and the binary coalesces in ∼ 7× 10^4 yr. At decoupling, the binary peak frequency is f_ GW∼ 2 mHz, right in the middle of the LISA sensitivity band.
The development of a NS-BH binary merger from a primordial binary in a dense star cluster highlights the impact of primordial binaries in contributing to the population of mergers with properties similar to those forming in isolation, making quite hard untangling their actual origin.
Merging NS-BH binaries are thought to be possible progenitors of several electromagnetic (EM) transients, like short Gamma Ray Bursts (sGRBs) <cit.> and kilonovae <cit.>. A basic condition for the possible development of an EM transient is that part of the NS material remains bound to the BH, forming a disk. The fraction of NS mass in the disk depends on several quantities, among which the BH-to-NS mass ratio m_ BH/m_ NS, the BH spin χ, and the NS compactness C≡ Gm_ NS/c^2 R_ NS <cit.>. As numerical simulations have shown, in general the larger the m_ BH/m_ NS the larger the minimum spin required for the NS material to form a disk around the BH, and the larger the spin the larger the amount of matter bound to the BH <cit.>. Depending on the orbital parameters, the BH tidal field can tear apart the NS before it enters the BH event horizon, provided that the NS tidal radius
R_ tid = R_ NS(3m_ BH/m_ NS)^1/3,
exceeds the BH innermost stable circular orbit (ISCO), which for a spinning BH can be expressed as <cit.>
R_ ISCO = Gm_ BH/c^2[3 + Z_2 - sign(χ) [(3 - Z_1)(3 + Z_1 + 2Z_2 )]^1/2],
where Z_1,2 are functions of the BH adimensionless spin parameter χ. Whilst the condition R_ tid / R_ ISCO < 1 implies that the merger has no EM emission, the opposite does not ensure the EM counterpart detectability, as it depends on the geometry of the merger with respect to the observer and other possible observation biases.
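The condition can be checked numerically as in the sketch below, where Z_1 and Z_2 take the standard Kerr ISCO form of Bardeen, Press & Teukolsky (1972); the NS radius, unit choices, and function names are our assumptions.

    import numpy as np

    G, c, MSUN, KM = 6.674e-11, 2.998e8, 1.989e30, 1.0e3

    def r_isco(m_bh, chi):
        # ISCO radius in metres; chi > 0 for prograde orbits, as in the equation above.
        z1 = 1.0 + (1.0 - chi**2)**(1/3) * ((1.0 + chi)**(1/3) + (1.0 - chi)**(1/3))
        z2 = np.sqrt(3.0 * chi**2 + z1**2)
        return (G * m_bh * MSUN / c**2
                * (3.0 + z2 - np.sign(chi) * np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))))

    def disrupts_outside_isco(m_bh, m_ns, chi, r_ns_km=10.0):
        # Necessary (not sufficient) condition for an EM counterpart: R_tid > R_ISCO.
        r_tid = r_ns_km * KM * (3.0 * m_bh / m_ns)**(1/3)
        return r_tid > r_isco(m_bh, chi)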
In Dragon-II clusters, the dynamical NS-BH merger is characterised by m_ BH/m_ NS = 11.7 and compactness C = 0.19 (assuming a NS radius of 10 km). As shown in Figures 6-8 of <cit.>, the minimum BH spin required for an accretion disk with a mass of 10% of the NS mass to form around such a binary is χ_ BH > 0.98. The BH formed in this binary did not undergo any major interaction with stellar companions that could spin it up <cit.>. Hence, it is possible that the BH formed with a low spin, according to the <cit.> model, hampering the formation of a massive accretion disk around the BH and minimizing the probability for an EM counterpart to develop.
The isolated NS-BH merger, instead, is characterised by m_ BH/m_ NS = 2.98 and C = 0.27. Even in this case, the spin required for an accretion disk to form is χ_ BH > 0.9. The BH in this binary undergoes a RLO phase, which could, in principle, spin-up the BH up to extremal values <cit.>, although this strongly depends on the stellar evolution recipes and the binary properties <cit.>.
The development of just 2 NS-BH mergers highlights, on the one hand, how rare these types of objects are and, on the other hand, makes any statistical analysis poor. Nonetheless, the fact that the NS-BH mergers developed in Dragon-II clusters seem unlikely to feature an EM counterpart supports the idea that most NS-BH mergers proceed unseen in star clusters <cit.>. For comparison, note that for isolated binaries typically m_ BH∼ 12 and m_ NS = 1.6 <cit.>, which implies a minimum BH spin of χ_ BH≳ 0.8 to permit the formation of a fairly massive (mass > 0.1m_ NS) disk around the BH <cit.>.
§ DISCUSSION
§.§ The impact of natal spins on the properties of stellar black hole mergers
Spin amplitude and mutual orientation at merger represent two possible quantities that can help discerning whether a BBH merger results from isolated stellar binary evolution or stellar dynamics <cit.>.
In order to explore the impact of different spin prescriptions on mergers, we devise two models.
The first model (hereafter STEV) assumes that the spin is intrinsically related to the BH evolutionary pathways. For BHs formed from single stellar evolution, we assume a negligible spin (χ_ BH = 0.01) owing to efficient angular momentum transport triggered by the Tayler-Spruit dynamo <cit.>. For upper-mass gap BHs formed from massive binary evolution we assume that final spins span the χ_ BH = 0.8-1 range <cit.>. For BHs in primordial binaries, instead, we assign to one BH a spin value of χ_ BH=0.01 and to the other χ_ BH = 0.1-1 <cit.>.
The second model (GAUS model) assumes, instead, that the spin distribution follows a Gaussian distribution with mean χ̅_ BH = 0.5 and dispersion σ_χ = 0.2, regardless of the BH past evolution, a case possibly supported by the population of observed BH-BH mergers <cit.>.
In our analysis, we assume that the spin vectors in dynamical mergers are isotropically distributed, whilst for primordial mergers we proceed as follows. We define an ad-hoc distribution function for the cosine of the angle between the spin of the i-th binary component and the binary angular momentum, θ_i, such that <cit.>
P(cosθ) = [(cosθ + 1)/2]^n_θ+1.
We set n_θ = 8, which implies that binaries have a 20(55)% probability to have θ_1,2 that differ by less than 5(20)%. Note that n_θ = 0 implies the isotropic distribution whilst n_θ≫ 1 implies fully aligned spins, i.e. θ_1 = θ_2.
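The distribution above can be sampled with a one-line inverse transform, as in the sketch below; the function name and the random-number interface are illustrative.

    import numpy as np

    def sample_cos_tilt(n_theta=8, size=1000, rng=None):
        # Inverse-transform sampling of cos(theta) from
        # P(cos theta) ~ [(cos theta + 1)/2]^(n_theta + 1): with x = (cos theta + 1)/2
        # the CDF is x^(n_theta + 2), so x = u^(1/(n_theta + 2)) for uniform u.
        rng = np.random.default_rng() if rng is None else rng
        u = rng.random(size)
        return 2.0 * u**(1.0 / (n_theta + 2.0)) - 1.0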
For each BBH merger in our sample we select 1,000 values of the spin and spin directions depending on the aforementioned assumptions, in order to assess statistically the properties of mergers. The top panels in Figure <ref> show the median value and 95th percentile of the effective spin parameter and remnant BH mass for all BBH mergers in Dragon-II models. As expected, we can clearly see a difference between primordial binaries, which have mildly aligned spins and thus χ_ eff>0, and dynamical binaries, for which χ_ eff∼ 0. The plots suggest that the STEV model, based on stellar evolution models, leads primordial binaries to have a smaller χ_ eff, on average, compared to the GAUS model. The bottom panels of Figure <ref> overlay the observed mergers from GWTC-3 on a single realisation of the simulated data, for comparison's sake.
Noteworthy, the assumption that BHs form with a negligible spin unless matter accretion processes are at play (STEV model) leads to a sub-population of mergers with χ_ eff∼ 0 and m_ bin = (40-100), a feature that disappears when a global Gaussian spin distribution is adopted (GAUS model) as shown in Figure <ref>.
If BH spins do not strongly depend on stellar evolution processes, but rather are well described by a general distribution, like a Gaussian, we can identify two populations in the plot, one with clearly positive χ_ eff values and m_ BH < 40, and one widely distributed around zero χ_ eff involving massive BHs, m_ BH > 40.
In order to improve the poor statistics, we proceed as follows: from the list of mergers we create an oversampled catalogue by repeating the spin assignment 100 times and, at each time, selecting a new "mock" BH mass in the range 2.5-40.5 M_⊙ if the BH merger mass is below the upper-mass gap, and in the range 40.5-100 M_⊙ otherwise. This way, each real merger will have 100 counterparts with BHs of the same class (upper mass-gap or not, merger in a primordial or dynamical binary), enabling us to build up a catalogue sufficiently rich to analyse the overall χ_ eff distribution. Figure <ref> shows the distribution of χ_ eff for the augmented sample in the STEV and GAUS models.
We see that the STEV model follows a narrower distribution compared to the GAUS model, and exhibits a clear peak around zero owing to the population of BHs formed from single stars <cit.>.
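For completeness, the oversampling step used to build the augmented catalogue can be sketched in a few lines of Python. This is only an illustration of the bookkeeping described above: the data structure, the random seed, and the name of the spin-drawing callable are our own, and the spin re-assignment is assumed to follow the STEV or GAUS prescriptions.

import numpy as np

rng = np.random.default_rng(seed=42)

def oversample(mergers, n_copies=100):
    # mergers: list of dicts with an 'in_mass_gap' flag and a 'draw_chi_eff'
    # callable returning one effective-spin realisation for that merger.
    mock_mass, mock_chi = [], []
    for merger in mergers:
        lo, hi = (40.5, 100.0) if merger['in_mass_gap'] else (2.5, 40.5)
        mock_mass.append(rng.uniform(lo, hi, size=n_copies))
        mock_chi.append([merger['draw_chi_eff']() for _ in range(n_copies)])
    return np.concatenate(mock_mass), np.concatenate(mock_chi)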
§.§ Compact binary merger rates
§.§.§ Merger efficiency
As described in the previous section, we have simulated a total mass of M_ sim = 3.65× 10^6 M_⊙ and find in total 78 mergers when GW recoil is not accounted for, and 74 otherwise. Therefore, the resulting BH merger efficiency, defined as the ratio between the number of mergers and the total simulated mass <cit.>, is given by
η_ GW = N_ GW/M_ sim≃ (2.0-2.1)×10^-5 M_⊙^-1,
similar to what is inferred for young and open clusters with a similar metallicity <cit.>. Note that given the limited simulation time our estimate could represent a lower limit to the total merger efficiency in massive young and intermediate-age clusters. Nonetheless, we note that as the cluster loses mass and expands, the binary formation rate and binary–single interaction rate will sharply decrease until the point at which it will be unlikely for tight binaries to form and merge within a Hubble time.
Interestingly, at fixed value of the half-mass radius, the merger efficiency changes appreciably with the initial binary fraction, being
η_ GW, fb =
2.3 × 10^-5 M_⊙^-1 for f_b = 0.20,
1.2 × 10^-5 M_⊙^-1 for f_b = 0.05.
This highlights the role of primordial binaries in determining the formation of merging compact objects. For comparison, note that the merger efficiency derived in <cit.> is based on star cluster models containing ∼ 40% of stars in primordial binaries.
To further explore the impact of cluster properties on the merger efficiency, we show in Figure <ref> the average merger efficiency per cluster, ϵ_ GW(R_ h), as a function of the average cluster density ⟨ρ_ sim⟩, using the following definitions
ϵ_ GW(R_ h) = N_ GW/(M_ sim/N_ sim),
⟨ρ_ sim⟩ = M_ sim/(N_ sim R_ h^3),
where M_ sim is the total simulated mass, N_ sim is the number of simulations performed for a given value of the half-mass radius R_ h, and N_ GW is the corresponding number of mergers.
At fixed value of the binary fraction, this relation is well described by a power law of the form ϵ_ GW = a (⟨ρ_ sim⟩ / 1 M_⊙ pc^-3)^b, with a = (0.15±0.07)× 10^-5 and b = 0.25 ± 0.03.
The plot makes clear that increasing the cluster density by two orders of magnitude leads to ∼ 2.5× more mergers. Moreover, it further highlights the role of primordial binaries, showing that clusters with a lower binary fraction have a ∼ 50% smaller probability of developing a merger, at least in the case of R_ h = 1.75 pc.
§.§.§ Merger rate for black hole binaries
We define the cosmic merger rate following <cit.>
ℛ(z) = d/dt_ lb(z)∫_0^z_ maxψ_ clus(z') dt_ lb(z)/dz' dz'
×∫_Z_ min^Z_ maxη_ GW(Z) ℱ(z',z,Z) dZ,
where t_ lb(z) is the lookback time at merger, ψ_ clus(z') is the star cluster formation rate when the merging binary formed, η_ GW(Z) is the merger efficiency at the metallicity Z, ℱ(z',z,Z) is the number of mergers forming at redshift z' and merging at redshift z in environments with metallicity Z.
The adoption of Equation <ref> enables us to compare simulation results with those obtained for low-mass star clusters <cit.>. Note that this procedure does not take into account possible effects related to the initial cluster mass function, which could indeed have an impact on the overall merger rate <cit.>. Nonetheless, the similarity between the merger efficiency derived from simulations and that obtained by <cit.> for low-mass clusters suggests that it is possible to safely utilise the merger efficiency as a proxy of the overall number of mergers per unit mass in the whole range of possible cluster masses. This choice, although representing an approximation, permits us to avoid the inclusion of a cluster mass function in Equation <ref> and all the related uncertainties, like the cluster mass function boundaries and functional form.
We adopt a cosmic star cluster formation rate in the form
ψ_ clus(z) = 0.01 f_ CFE (1+z)^2.6 / {1 + [(1+z)/3.2]^6.2} M_⊙ yr^-1 Mpc^-3,
i.e. we rescale the stellar star formation rate derived by <cit.> by a factor f_ CFE, which represents the cluster formation efficiency, i.e. the fraction of star formation that goes into bound clusters. Although uncertain, observations and models suggest that the cluster formation efficiency (CFE) can be as large as f_ CFE,YC = 0.3 for young clusters <cit.> and f_ CFE,GC = 0.08±0.03 <cit.> for globular clusters, regardless of the star formation history. In the following, we adopt both young and globular cluster CFE values to constrain the BBH merger rate in our simulations.
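For reference, this cluster formation rate reduces to a one-line function; the sketch below (in Python, with parameter names of our own choosing) simply evaluates the expression above.

def psi_clus(z, f_cfe=0.3):
    # Star cluster formation rate density [Msun yr^-1 Mpc^-3]: the cosmic star
    # formation history rescaled by the cluster formation efficiency f_cfe.
    return 0.01 * f_cfe * (1.0 + z)**2.6 / (1.0 + ((1.0 + z) / 3.2)**6.2)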
For dynamical mergers, it has been shown that the merger efficiency η_ GW(Z) remains almost constant in the range Z<10^-3, and decreases roughly by an order of magnitude at solar values <cit.>. Since our models all have the same metallicity, Z = 0.005, to infer the merger rate we assume that the merger efficiency is constant at Z<0.005 and is reduced by a factor of 10 at larger metallicities <cit.>. Moreover, we factorise the function F(z,z',Z) = p(Z,z') N(z,z'), thus assuming that the number of mergers at redshift z that formed at z' is independent of the metallicity distribution. The p(Z,z') term represents the cosmic fraction of clusters with metallicity in the (Z,Z+dZ) bin at redshift z'.
We assume that the metallicity follows a log-normal distribution peaked at <cit.>
Log⟨Z(z)/ Z_⊙⟩ = 0.153 - 0.074z^1.34,
with dispersion σ_Z = 0.2, 0.5, or 0.8 <cit.>.
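The term p(Z,z') can then be obtained by integrating this log-normal distribution over each metallicity bin. A possible sketch in Python is given below; the solar metallicity value Z_⊙ = 0.02 and the use of SciPy are our own illustrative choices.

import numpy as np
from scipy.stats import norm

def frac_in_Z_bin(z, Z_lo, Z_hi, sigma_Z=0.5, Z_sun=0.02):
    # Fraction of clusters formed at redshift z with metallicity in [Z_lo, Z_hi],
    # taking log10(Z) as Gaussian with dispersion sigma_Z (dex) around the mean above.
    mean_logZ = 0.153 - 0.074 * z**1.34 + np.log10(Z_sun)
    return norm.cdf(np.log10(Z_hi), mean_logZ, sigma_Z) - norm.cdf(np.log10(Z_lo), mean_logZ, sigma_Z)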
Since all models have the same metallicity, to infer the simulated merger rate we integrate Equation <ref> under two assumptions, one conservative and one optimistic. In the conservative case, we consider only clusters with a metallicity Z<0.005 and assume that they have a similar merger rate efficiency <cit.>. In the optimistic case, instead, we also include in the integration clusters with metallicity larger than the simulated one, reducing the simulated merger efficiency for metal-rich clusters by a factor of 10, as expected from low-N simulations of young clusters <cit.>.
To compare with similar estimates in the literature, we first set f_ CFE = 1, i.e. we assume that all stars form in star clusters, and calculate a merger rate of ℛ = 27 Gpc^-3 yr^-1, in broad agreement with the rate inferred for low-mass star clusters (N=10^2-5× 10^4) <cit.> and semi-analytic models of young and globular clusters <cit.>.
A more reliable estimate of the merger rate is shown in Figure <ref> for both the conservative and optimistic cases, and assuming different values of the cluster formation efficiency, f_ CFE=0.08-0.3.
As shown in the plot, we find a simulated merger rate of ℛ_ GW = (12±7) Gpc^-3 yr^-1 at redshift z=0.2. At the same redshift, the BBH merger rate inferred by the LVK is ℛ_ LVK=17.9-44 Gpc^-3 yr^-1 <cit.>.
§.§.§ Merger rate for exotic mergers
In the simulations we find 3 elusive mergers: one WD-BH and two NS-BH mergers. Although they are evidently too scarce to allow a statistical treatment, we can exploit them to attempt a rough, order-of-magnitude estimate of the merger rates for these two classes of GW sources assembled in star clusters as:
R_ xBH(<D) = (N_ x/M_ sim) f_x δ M_g* N(<D) t_ rel^-1,
where M_g* is the galaxy stellar mass, δ=0.001-0.01 is the fraction of galaxy mass made up by star clusters <cit.>, f_x is the fraction of clusters with a given property (e.g. age within a certain range),
t_ rel is the cluster relaxation time, and N(<D) is the number of MW equivalent galaxies within a given cosmological distance D <cit.>
N(<D) = 4π/3 (2.26)^-3(D/ Mpc)^3 ρ_g/ Mpc^-3,
where ρ_g = 0.0116 Mpc^-3 is the galaxy number density in the local Universe <cit.>.
Moreover, we consider typical relaxation times of either globular clusters, t_ rel = 10^9 yr <cit.>, or massive and relatively young clusters in the Small Magellanic Cloud (SMC), t_ rel = 3.2× 10^7 yr <cit.>.
Note that the relaxation time of Galactic clusters is inferred from their present-time properties. Depending on the amount of mass lost and the level of cluster expansion, it is possible that the initial relaxation time was shorter and therefore that the number of dynamically old globular clusters is larger than what we see at present. In this regard, note that the relaxation time of SMC clusters, which are generally younger than a few Gyr, is considerably smaller than that of Milky Way globulars, possibly because relaxation processes did not have time to sufficiently influence the cluster dynamics.
In the following calculations, we consider Milky Way-like galaxies only, with M_g* = 6× 10^10 M_⊙ <cit.>, located within D = 1 Gpc. In the Milky Way, there are only ∼ 4 out of 155 globular clusters with an age larger than 1 relaxation time, whilst around half of the clusters in the SMC satisfy this requirement, thus f_x∼ 0.025 - 0.5.
This implies a frequency rate for WD-BH mergers in the local Universe of R_ WDBH = (1.8×10^-3 - 10.8) yr^-1, corresponding to a volumetric merger rate ℛ_ WDBH = R V_ com^-1(1 Gpc) = (3.8×10^-4-2.3) Gpc^-3 yr^-1.
In the case of NS-BH mergers, instead, the event occurs over a timescale of (0.04-0.5) t_ rel. The fraction of clusters with an age longer than this is f_x∼ 0.94 for both the Milky Way and the SMC; the resulting frequency rate for NS-BH mergers is R_ NSBH = (0.13-40.7) yr^-1, which implies a volumetric merger rate of ℛ_ NSBH = (0.027-8.7) Gpc^-3 yr^-1.
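As a cross-check, the two relations above can be combined in a few lines of Python (function and variable names are ours); with the optimistic WD-BH parameters (δ = 0.01, f_x = 0.5, and an SMC-like relaxation time) this recovers the upper end of the R_ WDBH range quoted above.

import numpy as np

def n_mw_equivalents(d_mpc, rho_g=0.0116):
    # Number of Milky Way-equivalent galaxies within a distance d_mpc [Mpc].
    return (4.0 * np.pi / 3.0) * (d_mpc / 2.26)**3 * rho_g

def exotic_merger_rate(n_x, m_sim, f_x, delta, m_gal, t_rel_yr, d_mpc=1000.0):
    # Order-of-magnitude frequency rate [yr^-1] of exotic mergers within d_mpc.
    return (n_x / m_sim) * f_x * delta * m_gal * n_mw_equivalents(d_mpc) / t_rel_yr

rate_wdbh_optimistic = exotic_merger_rate(n_x=1, m_sim=3.65e6, f_x=0.5, delta=0.01,
                                          m_gal=6e10, t_rel_yr=3.2e7)   # ~10.8 yr^-1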
§.§ Multimessenger sources: prospects for LISA detection
Over the next decade, the network of ground-based detectors will be complemented by LISA, possibly the first space-borne low-frequency detector. LISA will possibly be able to catch the GW echoes of merging stellar BHs, IMBHs, and nearby WD and NS binaries. While we postpone a detailed discussion about BBHs forming in young massive clusters detectable with LISA to a forthcoming paper, in the following we focus on the handful of exotic mergers that develop in our models.
Let us consider the case of a WD-BH merger. We have shown in Section <ref> that such a source could appear as an X-ray binary and give rise to a TDE once the WD approaches the BH too closely. Assuming that the binary evolves solely via GW emission, and adopting the <cit.> formalism to evolve the binary until the merger, we find that around 6 months prior to the merger the WD will overfill its Roche lobe and start the X-ray binary phase.
At disruption, the frequency of the associated GW emission is given by <cit.>
f_ GW≃ 0.09 Hz (1+M_ WD/M_ BH) (M_ WD/0.6 M_⊙)^1/2 (R_ WD/10^4 km)^-3/2 = 0.13 Hz,
where we have assumed R_ WD = 10^4 km. Note that an eccentricity between 0 and 1 would affect f_ GW by less than 20% <cit.>. The amplitude of the emitted signal at disruption will be <cit.>
h_c≃ 2×10^-20 (T_ obs/4 yr)^1/2 (D_L/10 Mpc)^-1 (M_ BH/10 M_⊙)^0.66 (M_ WD/0.6 M_⊙)^1.58 (R_ WD/10^4 km)^-1.75≃ 10^-19.
Since the WD will be completely disrupted as it crosses its Roche limit, the associated GW emission will appear as a burst <cit.>. For such a source, the corresponding signal-to-noise ratio (S/N) for LISA can be written as <cit.>
( S/ N) = f^2/3h_c/S_c = 1.2(D_L/10 Mpc)^-1,
where S_c is the detector sensitivity curve in terms of characteristic strain <cit.> and we have exploited the intrinsic dependence on the measurable GW strain and the source luminosity distance D.
If the merger occurs inside the Milky Way, i.e. at D < 0.1 Mpc, it would appear as a loud source in LISA, with (S/N)> 120. More generally, the maximum distance at which LISA could detect such a merger with a minimum signal-to-noise ratio of (S/N)>8(15) is D < 1.5 Mpc (0.7 Mpc).
Note that the Andromeda galaxy is ∼ 0.7-0.8 Mpc away from us; therefore, to roughly estimate the probability of a nearby WD-BH merger, we can set N(<D) = 2 in Equation <ref> and find an upper limit to the local merger rate of nearby WD-BH mergers of R_ WDBH,close < (8.4× 10^-10 - 5.1× 10^-6) yr^-1.
§.§ The pair-instability supernova rate for massive star clusters: perspectives for detection via magnitude limited surveys
The onset of IMBH formation and the development of BBH mergers depend intrinsically on the cluster radius and initial density, the amount of stars initially in a binary, and the stellar evolution recipes adopted – e.g. BH matter accretion efficiency, the physics of PISNe and PPISNe.
In this regard, the fact that PISNe are rare events for which a smoking gun has not yet been observed <cit.> offers us the possibility of using this physical process as a diagnostic quantity in models.
In practice, we can infer the PISN rate in simulations and compare such rate with current observation limits to explore whether our simulations produce unrealistically large PISN frequency rates.
As described in paper AS-II, in our models PISNe develop either in single stars or in stellar merger products, provided that their helium core reaches a mass in the range (64-130) M_⊙. This offers us a unique possibility to explore the impact of PISNe in star clusters, while simultaneously taking into account the impact of stellar mergers on the overall population of PISN progenitors. According to the adopted stellar evolution, in a simple stellar population only stars heavier than m_ ZAMS≥ m_ PISN = 150 M_⊙ could undergo a PISN event, i.e. larger than the maximum stellar mass adopted for the initial mass function. Instead, in our models we find 23 stars that undergo a PISN. All these stars are either in a primordial binary or are captured in a binary before the explosion and undergo one or more stellar merger and accretion events that bring the star mass above m_ PISN. Typical masses for PISN progenitors are in the range (150-282) M_⊙.
The simulated PISN efficiency can be defined similarly to the compact object merger rate, i.e. η_ PISN = N_ PISN/M_ sim = 6.2× 10^-6 M_⊙^-1.
To calculate the PISN rate, we follow the approach adopted by <cit.>. Firstly, we assume that the Ni mass of the massive star that goes off as a PISN can be calculated via the following equation:
Log(M_ Ni/M_⊙) = r (M_ He, f/M_⊙)^s + t,
where r = -5.02× 10^4, s = -2.159, and t = 2.887 <cit.>, and M_ He,f is the final mass of the star He core. The Ni mass is used to infer the peak bolometric magnitude exploiting an Arnett-like relation <cit.>
Υ_ bol, Ni = -19.2-2.5 Log(M_ Ni/0.6 M_⊙),
which can be converted into an apparent bolometric magnitude via Pogson's relation
μ_ bol = Υ_ bol, Ni + 5 Log(D_L / 10 pc),
where D_L is the luminosity distance. To simplify the calculations, we adopt for the final He core mass, which is the main ingredient in calculating the Ni mass, an average value of M_ He,f = 90.4 M_⊙ as extracted from our models.
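Chaining the three relations above gives the apparent peak magnitude directly from the final He core mass; the following Python sketch (function names are ours, masses in solar units) illustrates the calculation.

import numpy as np

def ni_mass(m_he_final):
    # log10(M_Ni/Msun) = r (M_He,f/Msun)^s + t
    r, s, t = -5.02e4, -2.159, 2.887
    return 10.0 ** (r * m_he_final**s + t)

def peak_bolometric_mag(m_ni):
    # Arnett-like relation for the peak bolometric magnitude
    return -19.2 - 2.5 * np.log10(m_ni / 0.6)

def apparent_bolometric_mag(m_he_final, d_l_pc):
    # Distance modulus (Pogson's relation)
    return peak_bolometric_mag(ni_mass(m_he_final)) + 5.0 * np.log10(d_l_pc / 10.0)

mu_bol_100mpc = apparent_bolometric_mag(90.4, 100.0e6)   # adopted He core mass, at 100 Mpc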
The value of μ_ bol is used to determine whether a PISN can be detected in a magnitude-limited survey. Assuming a population of PISNe with apparent magnitudes distributed according to a Gaussian around μ_ bol, and a magnitude detection threshold μ_ lim, we define the fraction of detectable sources as
f_ GSS = 0.5[1+ erf(μ_ lim-μ_ bol^/√(2)σ_μ)] ,
where we adopted σ_μ = 0.2[We verified that varying σ_μ in the range 0.1-0.3 has little effect on our results.].
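In practice, the detectable fraction is a one-line function of the survey limit; the snippet below is a direct transcription of the expression above (naming is ours).

from math import erf, sqrt

def detectable_fraction(mu_lim, mu_bol, sigma_mu=0.2):
    # Fraction of PISNe with apparent magnitude brighter than the survey limit mu_lim.
    return 0.5 * (1.0 + erf((mu_lim - mu_bol) / (sqrt(2.0) * sigma_mu)))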
The PISN rate as a function of the redshift can thus be evaluated as:
ℛ_ PISN(z) = ∫_z_1^z_2dV/dzψ(z) η_ PISN f_ GSS(z) f_ Z(z) dz,
where dV/dz is the comoving volume element and ψ(z) is the cosmic star formation rate, for which we assume the cosmic star formation history in Equation <ref> <cit.>, the same limits for f_ CFE described in Section <ref>, and that only stars with a metallicity Z ≤ 0.008 undergo PISNe <cit.>. Figure <ref> shows the PISN rate for the intrinsic cosmic population and assuming different detection thresholds in magnitude-limited surveys, namely μ_ bol = 17, 20, 25. Note that these thresholds roughly correspond to the typical maximum detectable magnitudes of already completed surveys, like the Sloan Digital Sky Survey (SDSS[SDSS home: <http://www.sdss.org>]) or the Palomar Transient Factory (PTF[PTF home: <http://www.ptf.caltech.edu>]), of ongoing surveys, e.g. the Dark Energy Survey (DES[DES home: <http://www.darkenergysurvey.org>]), and of future surveys, like the Large Synoptic Survey Telescope (LSST[LSST home: <http://www.lsst.org>]), the Zwicky Transient Facility (ZTF[ZTF home: <https://www.ztf.caltech.edu>]), or the EUCLID mission[EUCLID home: <https://sci.esa.int/web/euclid>] <cit.>.
From Figure <ref> we see that only future surveys (μ_ bol≥ 25) will be able to probe the cosmological properties of PISNe, whilst current surveys could in principle place constraints on PISNe within a redshift z<0.3.
Integrating Equation <ref> over the redshift returns the number of detected sources per year. The possible number of PISN detections per year for different values of the limiting bolometric magnitude, μ_ bol, and the cluster formation efficiency, f_ CFE, is summarized in Table <ref>. From the table it is clear that the detection of PISNe from star clusters is still highly unlikely in completed and ongoing surveys, but it could lead to ∼ 8 detections per year with the next generation of surveys. Comparing future PISN detections with numerical models could have a twofold aim. On the one hand, it will permit us to shed light on the actual contribution of massive stars in dense clusters to the overall population of PISNe. On the other hand, it will provide us with a useful term of comparison to determine the reliability of cluster simulations.
§ CONCLUSIONS
In this paper we have presented and discussed the properties of compact binary mergers and PISNe in the simulations, a suite of direct N-body models representing star clusters with up to 1 million stars and a relatively large (10%-33%) binary fraction. Our main results can be summarised as follows:
* We find a population of 75 BBH, 2 NS-BH, and 1 WD-BH mergers. Among them, 4 BBHs avoid merger when GW recoils are enabled. Mergers occurring inside the cluster make up ≳ 40% of the whole population and are mostly due to mergers formed via dynamical interactions (dynamical mergers). The population of ejected mergers, which merge outside the parent cluster, is contributed equally by mergers formed dynamically and from primordial binaries (primordial mergers). Typically, in-cluster mergers have primaries with masses m_ BH,1 > 30 M_⊙ and companions in the m_ BH,2 = 30-50 M_⊙ mass range, whilst ejected mergers involve lighter primaries, m_ BH,1 < 40 M_⊙, and are characterised by fairly large mass ratios, q > 0.6;
* Mergers forming from primordial binaries are characterised by large mass ratios and component masses clearly smaller than those formed dynamically. Among dynamical mergers, the most massive ones are those in which at least one component had an ancestor in a primordial binary;
* BBH mergers are characterised by delay times that distribute around 10-30 cluster relaxation times. This highlights the fact that the processes that trigger BBH formation and merger are intrinsically related to the cluster dynamical evolution;
* The population of mergers forming from dynamical interactions or primordial binaries is clearly distinguishable from the residual eccentricity of the binary as it enters the typical frequency bands of GW detectors, i.e. f = 0.001-100 Hz. We find that practically all primordial binaries are circular at merger, implying that primordial binaries merge before dynamics can have an impact on their evolution, whilst around 20, 40, and 5% of mergers preserve an eccentricity e > 0.1 when entering the LISA, DECIGO, and LIGO bands, respectively. All mergers with e > 0.1 in the 0.05-1 Hz and 1-10 Hz bands occur inside the cluster, whilst half of eccentric mergers in the mHz band are ejected. This hints at the possibility of distinguishing the formation history of a BBH merger from the frequency band in which it is observed;
* We identify three exotic mergers in our sample: a WD-BH binary formed dynamically and two NS-BH mergers, one formed dynamically and the other from a primordial binary. The WD-BH merger forms after 4 cluster relaxation times and is triggered by chaotic interactions that increase the eccentricity up to an extremal value of e = 0.99994930. Once the WD approaches the BH sufficiently closely, this type of source could appear as an ultraluminous X-ray source and, ultimately, be detectable by LISA if it occurs within 700 kpc from us, i.e. within the distance between the Milky Way and Andromeda. The dynamical NS-BH binary is characterised by a chirp mass ℳ = 3.4 M_⊙, larger than predicted by the isolated stellar evolution scenario, and preserves an eccentricity of e = 0.9974(0.21) when crossing a frequency of f = 0.5(1) Hz, thus future observations with ET could help probe the population of nearby, dynamically formed, NS-BH mergers. The primordial NS-BH binary is not affected by dynamics at all, thus it can be mistaken for a merger occurring in isolation. This highlights the importance of star clusters with a large binary fraction as contributors to the isolated scenario of compact binary mergers. None of the NS-BH mergers are expected to release EM emission, unless the BHs have a spin χ > 0.9;
* We find that comparing the remnant mass and spin of BBH mergers could help untangle their origin. Using a model based on stellar evolution theories, we show that primordial binary mergers are characterised by systematically smaller remnant masses and systematically larger effective spin parameters than dynamical mergers;
* We derive a BBH merger efficiency of ∼ 2× 10^-5 M_⊙^-1, comparable with the value estimated for low-mass star clusters. Interestingly, we find that the merger efficiency depends on the star cluster properties. Decreasing the binary fraction by a factor of 4, for example, leads to a decrease of the merger efficiency by a factor of ∼ 2. Moreover, the merger efficiency increases with the cluster density following a power law with slope ∼ 0.25. We adopt a series of cosmologically motivated assumptions for the cosmic star formation history, and use them to infer a merger rate density at redshift z < 0.2 of ℛ = 5-19 (0.027-8.7) (3.8×10^-4 - 2.3) Gpc^-3 yr^-1 for BBH (WD-BH) (NS-BH) mergers, respectively. We predict that, in a 4 yr-long mission, LISA could detect N_ BBH = 12±7 (5±3) BBH mergers (IMRIs) and can identify the WD-BH merger with a signal-to-noise ratio SNR > 8(15) if it occurs within D_L < 1.5(0.7) Mpc from us.
* We retrieve the cosmic frequency rate of PISNe, in order to explore the reliability of our simulations on the one hand, and to make predictions for PISNe detection from star clusters on the other hand. We find that future surveys with a limiting magnitude of m_ bol = 25 could detect N_ PISN = 0.7-8.8 PISNe per year. Comparing these estimates with future surveys could help placing constraints on the population of massive stars in dense star clusters.
The clusters represent a further step forward in the modelling of young and intermediate-age star clusters, providing the first suite of simulations that simultaneously models clusters with N>120,000 stars (up to 10^6), a high binary fraction (up to 33%), and an initial density of ρ = (1.2× 10^4-1.6×10^6) M_⊙ pc^-3. These simulations complement the vast literature of N-body simulations of lower-mass and lower density star clusters <cit.>, and provide the largest catalogue of BH mergers obtained in direct N-body simulations of metal-poor, dense and massive young clusters.
§ ACKNOWLEDGEMENTS
The authors thank the referee for their constructive and helpful report. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the code, and Giuliano Iorio, Sara Rastello, and Michela Mapelli for useful comments and discussion.
This work benefited of the support from the Volkswagen Foundation Trilateral Partnership through project No. 97778 “Dynamical Mechanisms of Accretion in Galactic Nuclei” and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 “The Milky Way System”), and by the COST Action CA16104 “GWverse”. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). Data analysis and part of the runs were conducted on the GRACE-BH HPC workstation, funded by the European Union's under the research project GRACE-BH.
MAS acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda).
AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870.
PB acknowledges the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale").
The work of PB was supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346.
RS thanks Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits
MG was partially supported by the Polish National Science Center (NCN) through the grant No. 2021/41/B/ST9/01191.
FPR acknowledges the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930).
TN acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC-2094 - 390783311 of the DFG Cluster of Excellence "ORIGINS”.
§ DATA AVAILABILITY
The data from the runs of these simulations and their initial models
will be made available upon reasonable request by the corresponding author.
The Nbody6++GPU code is publicly available[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The McLuster version used in this work will soon be available. A similar version is described in <cit.>.
|
http://arxiv.org/abs/2308.01916v1 | 20230709040958 | Semi Supervised Meta Learning for Spatiotemporal Learning | [
"Faraz Waseem",
"Pratyush Muthukumar"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Labeled data is hard to come by in the real world. Moreover, a majority of available data comes in the form of video and visual media.
Recent advancements in representation learning have shown great successes in learning rich representations from a variety of inputs including text, images, and videos.
However, these state-of-the-art architectures are data-intensive, whereas meta learning architectures possess unique capabilities of learning new tasks from diverse training tasks and corresponding labels in the few-shot regime.
We apply semi-supervised meta learning to video data for learning spatiotemporal patterns.
We extend work on Masked Autoencoders (MAEs) utilizing the Vision Transformer (ViT) architecture for scalable self-supervised learning in the spatiotemporal domain.
We approached the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning in three steps.
Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures.
Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework.
Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks.
Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks.
Finally, we experiment with applying a pre-trained MAE and fine-tune with MANN backbone for action classification tasks.
To execute our experiments, we generate a custom small-scale video dataset of 518 human-action classes consisting of 24927 video clips and human-generated annotations sourced from the MiniKinetics-200 and TinyVIRAT datasets. We also modify the ViT backbone in existing MAE architectures for small-scale datasets by applying Shifted Patch Tokenization (SPT) to combat the lack of locality inductive bias available in small-scale datasets.
Our experimental results show that fine-tuning on our custom small-scale video dataset outperforms existing pre-trained MAE architectures on video reconstruction tasks. Further, we find that training an MAE encoder with a small-scale ViT backbone on our small-scale video dataset for action classification tasks converges steadily. Finally, we find that applying a pre-trained MAE and fine-tuning with an MANN backbone for action classification tasks is effective on our small-scale video dataset test tasks.
§ INTRODUCTION
Recent advancements in deep learning including the Transformer architecture have shown great success in both vision and language domains learning rich representations from a variety of inputs including text, images, and videos (https://arxiv.org/abs/1706.03762ref: attention is all you need). Models such as BERT have shown success in the semi-supervised regime in denoising messy data and extracting high level embeddings from partially labeled datasets (https://arxiv.org/abs/1810.04805ref: bert). However, real-world labeled data in the format of videos is scarce and unstructured. State-of-the-art representation learning architectures have shown great success in the vision domain in extracting high-level features from images for reconstruction or classification tasks, however, these models require massive amounts of annotated vision data.
The field of meta learning has shown promise in learning high-level features from data in the few shot regime. Moreover, applying meta-learning to existing supervised learning architectures has been shown to allow for more data-efficient models while preserving generalizability to unseen tasks and datasets (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks
). We propose applying semi-supervised meta-learning to video data for learning spatiotemporal patterns. We believe that wrapping existing state-of-the-art self-supervised representation learning architectures within a meta-learning framework will allow our architecture to both improve sample efficiency and generalize well to unseen data, particularly in the application of spatiotemporal learning on video datasets. Specifically, we perform experiments in the style of an ablation study to compare the performances of existing representation learning architectures for video data alone, existing self-supervised meta learning frameworks for video data alone, and our formulation of applying meta learning to representation learning architectures for video data classification tasks.
In addition to considering the effectiveness of applying meta learning towards existing representation learning architectures, we perform modifications to perform experiments with the scope of this project. That is, we scale down the vision transformer (ViT) backbone within the existing representation learning architecture for training on our custom small-scale video dataset. We generate this dataset consisting of video clips describing human-object interactions as well as corresponding human-generated annotations.
In this project, we make the following contributions:
* We collect a custom small-scale human-object video dataset built as a composite dataset from existing human-object video sources upon which we preprocess.
* We apply the meta-learning framework to existing self-supervised representation learning architectures and apply our model to downstream tasks including video reconstruction and action classification
* We perform an ablation study to understand the impact of applying meta-learning to existing self-supervised representation learning architectures on action classification accuracy and video reconstruction loss
§ RELATED WORKS
Prior work in the field of representation learning has shown successes in learning rich representations from vision and language domains. Particularly, autoencoder architectures have been proven to be effective in extracting representations from text and images. (https://arxiv.org/abs/2111.06377ref: masked autoencoders are scalable vision learners) proposed applying masked autoencoders (MAEs) for self-supervised learning for vision. By masking random patches of the input images and pre-training an autoencoder to reconstruct the missing pixels, they found that the architecture was able to perform well on the ImageNet dataset compared to similar self-supervised models. Moreover, their architecture was more efficient and scalable for larger models such that transfer performance in downstream tasks outperformed supervised pre-training models. They noted that a masking ratio larger than 75% masked pixels in an image poses as a non-trivial task to current state-of-the-art vision models.
(https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners) builds off of this work by applying masked autoencoders for video data to learn spatiotemporal patterns. The masking process follows similarly from above, however random spacetime patches of videos are masked out rather than pixels during the pre-training step. Their results showed that a masked autoencoder with a masked ratio of 90% outperforms supervised pre-training approaches by a wide margin on both benchmark datasets and real-world video data.
Meta-learning has shown effectiveness in generalizing well to unseen data with sample-efficient architectures in the few shot regime. One such implementation of meta-learning is the Memory Augmented Neural Network (MANN) architecture proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). The authors propose a black-box meta-learning framework with a two-part architecture. Their architecture included a controller implemented as a sequence model – they utilize an LSTM architure in their implementation – and an external memory module with reading and writing heads implemented with a Neural Turing Machine (NTM) (https://arxiv.org/abs/1410.5401neural turing machines). The LSTM sequence models are used to help a model to learn quickly from data with a small number of examples.
In our review of this space, we have not found existing work applying meta-learning alone towards self-supervised spatiotemporal learning. However, prior research has been done on applying self-supervised meta learning for natural language classification tasks (https://aclanthology.org/2020.emnlp-main.38/href: self-supervised meta-learning for few-shot natural language classification tasks).
Current vision models have become increasingly powerful since the widespread application of the Transformer architecture. The Vision Transformer (ViT) architecture, proposed by (https://arxiv.org/abs/2103.15691ref: ViViT: a video vision transformer), builds upon the self-attention mechanism proposed by (https://arxiv.org/abs/1706.03762ref: attention is all you need) for learning complex high dimensional representations from image datasets. This family of architectures relies on large amounts of image data, typically in the scale of hundreds of gigabytes worth of labelled images to train large architectures with hundreds of millions of parameters.
Some work has been done on scaling down these large-scale ViT architectures while preserving the learned high-level representations.(https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets) proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) as methods to combat the lack of locality inductive bias available in small-scale datasets.
Existing work on applying representation learning architectures such as MAEs with ViT backbones show incredible performance in video classification and video reconstruction tasks, but are limited in real-world applications due to the data requirements of these sample inefficient architectures. Current research on small-scale ViT architectures perform well on image classification tasks, but have yet to be extended towards video data or applied in the regime of self-supervised learning.
§ METHODS
We approach the goal of applying meta-learning to self-supervised masked autoencoders for spatio-temporal learning using MANNs (memory augmented neural networks), in a fashion similar to that proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). In our case, we utilize the masked autoencoder (MAE) approach for initial pre-training, and then fine-tune using the MANN approach, using the MAE encoder as a backbone to the sequence model. In our implementation, we utilize the ViT sequence model scaled down and trained on our small-scale video dataset. We scale down the ViT backbone within the MAE encoder and decoder following a method proposed by (https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets); however, their implementation focuses on image data.
We consider the MAE method proposed by (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
) as a baseline for testing the performance of a state-of-the-art classification algorithm that does not use meta learning. We then train the MANN architecture with the ViT backbone end-to-end to evaluate the performance of a solely meta-learning based approach. Finally, we test our proposed combination of MAE with MANN fine-tuning to test if the MAE architecture in combination with meta-learning approaches is more effective in learning spatiotemporal patterns.
One benefit of applying meta-learning in this domain is that, if we assume videos of humans interacting with objects share some high-level structure, we can combine video clips from various human-object interaction datasets, allowing us to pre-train on more data. These combinations of benchmarks will allow us to pinpoint whether applying meta-learning with MAE is effective for spatiotemporal learning, as well as the individual contributions of each.
To summarize, we devised a three-stage approach to reaching our proposed goals:
* Apply pre-trained MAE and fine-tune for video reconstruction downstream task
* Train MANN with MAE encoder on small-scale dataset and apply classification head for action classification downstream task
* Apply pre-trained MAE and fine-tune with MANN backbone for action classification downstream task
Figure <ref> describes a visualization of the model architectures for each of the three approaches we implement.
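As a rough illustration of the third configuration, the snippet below sketches how a (pre-trained) MAE encoder can feed a black-box meta-learner: clip embeddings are concatenated with the previous step's one-hot label, MANN-style, and processed by a recurrent controller. This is a simplified PyTorch sketch with layer sizes of our own choosing, and the external NTM memory of the full MANN is omitted for brevity; it is not the exact implementation used in our experiments.

import torch
import torch.nn as nn

class MAEMetaClassifier(nn.Module):
    def __init__(self, mae_encoder, embed_dim, n_way, hidden_dim=256):
        super().__init__()
        self.encoder = mae_encoder                      # e.g. the small-scale ViT MAE encoder
        self.controller = nn.LSTM(embed_dim + n_way, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_way)

    def forward(self, clips, prev_labels):
        # clips: (B, K, C, T, H, W) episode of K video clips
        # prev_labels: (B, K, n_way) one-hot labels shifted by one step (MANN-style)
        b, k = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1))       # (B*K, N_tokens, D) or (B*K, D)
        if feats.dim() == 3:
            feats = feats.mean(dim=1)                   # average tokens into one clip embedding
        feats = feats.reshape(b, k, -1)
        out, _ = self.controller(torch.cat([feats, prev_labels], dim=-1))
        return self.head(out)                           # per-step class logits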
§ EXPERIMENTS
For the first approach in our technical method, we fine-tune the pre-trained MAE on our small-scale dataset and evaluate against the baseline video MAE model pre-trained on Kinetics-400. We utilize a pre-trained MAE architecture sourced from the authors of the video MAE architecture trained with the ViT-Large backbone on Kinetics-400 with a masking ratio of 90% and 1600 effective epochs (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
).
For the second approach in our technical method, we train the MAE autoencoder with our small-scale ViT and fine-tune with a classification head on our small-scale composite dataset. We ran experiments training the full video MAE as well as training the video MAE outfitted with a classification head. Additionally, we evaluate training the video autoencoder with and without masking to analyze the difference in training loss and classification accuracy. Note that the autoencoders used for training in this set of experiments utilize our small-scale ViT backbone which implements Shifted Patch Tokenization (SPT) to preserve locality-specific representations typically lost with small-scale datasets. Further, since the original work proposing small-scale ViT architectures implemented a small-scale ViT for image classification rather than video classification, we extend upon their work by adding spacetime attention to their small-scale ViT architecture in order to support 3D video data in the format of a time-indexed series of 2D images.
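A possible form of this video-adapted Shifted Patch Tokenization is sketched below in PyTorch. The patch and tubelet sizes are illustrative, and torch.roll is used as a simple stand-in for the padded spatial shifts of the original SPT formulation, so this should be read as a sketch rather than our exact module.

import torch
import torch.nn as nn

class VideoShiftedPatchTokenizer(nn.Module):
    # Input: (B, C, T, H, W). Each frame is shifted by half a patch along the four
    # diagonal directions; the shifted copies are concatenated with the original
    # frames along the channel axis before tubelet embedding.
    def __init__(self, in_ch=3, dim=192, patch=8, tubelet=2):
        super().__init__()
        self.half = patch // 2
        self.proj = nn.Conv3d(in_ch * 5, dim,
                              kernel_size=(tubelet, patch, patch),
                              stride=(tubelet, patch, patch))

    def forward(self, x):
        shifts = [(-self.half, -self.half), (-self.half, self.half),
                  (self.half, -self.half), (self.half, self.half)]
        views = [x] + [torch.roll(x, s, dims=(-2, -1)) for s in shifts]
        x = torch.cat(views, dim=1)            # (B, 5C, T, H, W)
        x = self.proj(x)                       # (B, dim, T', H', W')
        return x.flatten(2).transpose(1, 2)    # (B, N_tokens, dim)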
§.§ Datasets
For our experiments, we seek to perform spatiotemporal learning on video datasets. Initially, we started by utilizing the Kinetics-400 video dataset consisting of 400 human-action classes each with at least 400 video clips (https://arxiv.org/abs/2007.07355TinyVIRAT: low-resolution video action recognition). In total, the dataset consisted of 306,245 video clips each around 10 seconds in length with a resolution of 224 x 224 pixels. However, the size of this dataset is over 300 GB, and while it can be effectively used for the ViT-base backbone with 84,943,656 parameters within the MAE encoder of the existing state-of-the-art representation learning architecture for video learning, it was not a feasible dataset within the scope of our project. Instead, we developed a small-scale ViT backbone within the MAE encoder architecture which instead has 3,109,008 parameters. Correspondingly, we sought to scale down our video dataset used for training our small-scale ViT backbone.
One aspect we considered while building our dataset was that, since we apply the MANN meta-learning framework for self-supervised spatiotemporal learning, we can combine multiple datasets of varying action class distributions into a composite dataset where each unique action class can be considered a new task during black-box adaptation with the MANN architecture. As a result, we were not limited to a single data source when constructing our small-scale dataset; instead, we utilized human-action video clips and annotations from a variety of input sources to generate our small-scale video dataset. In a semi-supervised dataset, labels are sparse, hence we hypothesize that a meta-learning based approach that learns quickly from a small number of examples can excel where standard fine-tuning may not be sufficient.
Our composite small-scale video dataset was sourced from the Kinetics-400, MiniKinetics-200, and TinyVIRAT datasets. MiniKinetics-200 is a subset of the Kinetics dataset consisting of the 200 human-action classes with the most training examples and TinyVIRAT is a video dataset containing real-life tiny actions in videos collected from low resolution video cameras consisting of 12829 video clips. Our small-scale video dataset contains 24,927 video clips amongst 518 human-action classes. Each video clip in our dataset consists of 100 frames at a temporal resolution of 10 FPS, meaning that each clip is around 10 seconds in length. We scale all clips in our dataset to a resolution of 64x64 pixels to perform efficient training and achieve our project goals with the computational resources available to us. All spatial and temporal resolution downscaling was performed using the OpenCV Python package.
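One possible decoding and downscaling routine is sketched below using the OpenCV Python API; the exact pipeline we used may differ in details such as frame sampling and padding, and the function name is ours.

import cv2
import numpy as np

def load_clip(path, n_frames=100, fps=10, size=64):
    # Decode a clip, subsample it to roughly `fps`, resize frames to `size`x`size`,
    # and pad/trim to exactly `n_frames` frames. Returns (n_frames, size, size, 3) uint8.
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or fps
    step = max(int(round(src_fps / fps)), 1)
    frames, idx = [], 0
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.resize(frame, (size, size)))
        idx += 1
    cap.release()
    while frames and len(frames) < n_frames:   # pad short clips by repeating the last frame
        frames.append(frames[-1].copy())
    return np.stack(frames) if frames else np.zeros((n_frames, size, size, 3), np.uint8)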
We split our dataset into training and testing splits such that we reserve 18406 videos over 414 action classes for training and 6521 videos over 104 action classes for testing. For our implementation utilizing meta-learning for self-supervised spatiotemporal learning, each human-action class can be formulated as a distinct task, where our task training-testing split is roughly an 80-20 split. Kinetics-400, Mini-Kinetics200, and TinyVIRAT all include human-generated annotations of the video clips, which define the locations of individual video clips within the action classes.
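Treating each action class as a task, few-shot episodes can then be sampled from the split as in the sketch below; the way/shot/query sizes are illustrative defaults rather than values fixed by our experiments.

import random

def sample_episode(index, n_way=5, k_shot=1, k_query=5):
    # index: dict mapping an action class to the list of its clip paths.
    # Returns support/query lists of (path, episode_label) pairs for one task.
    classes = random.sample(list(index), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        clips = random.sample(index[cls], k_shot + k_query)
        support += [(p, label) for p in clips[:k_shot]]
        query += [(p, label) for p in clips[k_shot:]]
    return support, query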
§ RESULTS
For the first approach in our technical outline, we provide cross entropy loss results of a pre-trained MAE fine-tuned on our small-scale video dataset against a pre-trained baseline MAE architecture trained on Kinetics-400. For the sake of brevity, we provide experimental results for every 20th frame in the 100-frame video samples of our small-scale video dataset. We evaluate the pre-trained MAE baseline against our fine-tuned MAE model on the testing set of our small-scale video dataset consisting of 6521 100-frame video clips of 64x64 pixel resolution over 104 human-action classes. Table <ref> describes the averaged cross entropy loss for every 20th frame in the 100-frame video clips across the test set for our fine-tuned model compared against the pre-trained MAE baseline. The overall averaged cross entropy loss across all 100 frames of the test set was 0.1776 for our fine-tuned model, whereas that of the pre-trained MAE baseline was 0.1781.
We also provide a video reconstruction visualization for a single video in the testing split of our small-scale video dataset. Since we cannot show all 100 frames of this video reconstruction, we show a visualization of every 20th video frame reconstructed by our fine-tuned model in Figure <ref>.
For the second approach in our technical outline, we evaluate training our modified video MAE architecture with a small-scale ViT backbone end-to-end as well as training with a classification head attached for action classification tasks. These experiments were conducted on the TinyVIRAT dataset with 26 action classes, so we can formulate the experimental setting as a 26-way multi-class classification task. The end-to-end video MAE architecture with a small-scale ViT backbone contains 3.1 million parameters, while the video MAE architecture with the classification head contains 2.7 million parameters.
The top-1 accuracy for the end-to-end video MAE architecture with a small-scale ViT backbone was 37% and the top-5 accuracy was 75%. Figures <ref> and <ref> describe the training and validation curves of this end-to-end model. Note that since we do not normalize the loss value with the number of examples in the batch, the magnitude of the loss is not necessarily indicative of the model performance.
Additionally, we evaluate training the video autoencoder outfitted with a classification head with and without masking for our 26-way multi-class classification task. We consider a masking ratio of 80% when implementing masking. We find that the top-5 performance on the TinyVIRAT dataset is 76% with masking and 74.5% without masking. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head with masking implemented. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head trained without masking. Figures <ref> and <ref> describe the validation split accuracy curve over training for the masked autoencoder and the autoencoder without masking, respectively.
When using a video autoencoder with shifted patch tokenization and a reduced number of parameters, in only 10 epochs of pre-training and 10 epochs of fine-tuning we get 46.8% top-1 accuracy, which is significantly higher than the previous methods we tested, indicating the importance of using shifted patch tokenization and of not masking during the fine-tuning phase.
§ CONCLUSION
To summarize, we apply self-supervised meta-learning for spatiotemporal learning on video data. We extend existing representation learning architectures for vision and video data and apply meta-learning through the black-box Memory Augmented Neural Network (MANN) architecture. We evaluate the effectiveness of applying MANN alongside Masked Auto Encoders (MAE) by tackling our goals for this project in a three stage approach.
Firstly, we experiment with fine-tuning a pre-trained MAE architecture on our custom small-scale video dataset. This small-scale video dataset is built and collected by combining multiple human-action video datasets such as the TinyVIRAT, Kinetics-400, and MiniKinetics-200 datasets. Our experimental results of our fine-tuned model against a pre-trained MAE baseline shows that our model outperforms the pre-trained MAE architecture in terms of averaged cross entropy loss across all frames of the testing split videos in our small-scale dataset with a value of 0.1776 compared to the baseline's averaged cross entropy loss of 0.1781. However, since the difference between these two values are negligible – our fine-tuned model outperforms the baseline by 0.3% – we note that there is not a significant enough improvement from fine-tuning a pre-trained MAE architecture on our small-scale video dataset alone. We anticipated these results and hypothesize that because the pre-trained model is very large and trained on hundreds of gigabytes worth of Kinetics-400 data, whereas we fine-tune on our small-scale dataset consisting of less than 25,000 video clips, fine-tuning this architecture directly will not have a noticeable impact on predictive power. Nevertheless, our fine-tuned model slightly outperforms the baseline pre-trained MAE architecture, however there are not enough results or significant enough a difference to suggest a trend.
Next, we experiment with training an end-to-end video MAE architecture with a modified small-scale ViT backbone. We evaluated this architecture on the TinyVIRAT dataset and formulated the problem as a 26-way multi-class video classification problem. The top-1 accuracy score was 37% and the top-5 accuracy score was 75%. We believe this is a significant accomplishment because the majority of existing benchmarks for the TinyVIRAT challenge utilize very large encoder architectures with hundreds of millions of parameters. However, we are able to achieve competent results on the TinyVIRAT dataset with a small-scale ViT backbone with just 3 million parameters.
Finally, we experiment with training a video auto encoder architecture with a classification head and evaluating the effect of masking. We similarly evaluated both the masked and non-masked architectures on the TinyVIRAT 26-way multi-class video classification task and find that the top-5 performance for the masked auto encoder architecture with an 80% masking ratio was 76% and for the auto encoder without masking was 74.5%. Comparatively, this shows that applying masking to the architecture improves action-class classification task performance. However, with just 50 epochs used for training, we would need to continue running experiments and fine-tune the masking ratio hyperparameter to confirm this trend.
§ FUTURE WORK
In the future, we want to experiment with fine-tuning the MANN architecture with and without a pre-trained video MAE. Another test we want to try is to replace MANN with other meta-learning implementations such as Model Agnostic Meta-Learning (MAML) proposed by (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks
). We can also experiment with integrating text signals, such as utilizing BERT pretrained embeddings generated on descriptions of videos, in the action-class classification task setting. We have made significant contributions to the TinyVIRAT codebase and could consider contributing to open-source implementations by providing our codebase for small-scale video MAE and meta-learning capabilities. Additionally, we have introduced a hook to export latent video frame representations, which can be used for future work by us and others. We believe we have created very useful building blocks for building more advanced vision transformers for the spatiotemporal learning domain.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135, PMLR, 2017.
Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
Lee, S. H., Lee, S., and Song, B. C. Vision transformer for small-size datasets. arXiv preprint arXiv:2112.13492, 2021.
Xie, S., Sun, C., Huang, J., Tu, Z., and Murphy, K. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 305-321, 2018.
Demir, U., Rawat, Y. S., and Shah, M. TinyVIRAT: Low-resolution video action recognition. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7387-7394, IEEE, 2021.
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022.
Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lučić, M., and Schmid, C. ViViT: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6836-6846, 2021.
Bansal, T., Jha, R., Munkhdalai, T., and McCallum, A. Self-supervised meta-learning for few-shot natural language classification tasks. arXiv preprint arXiv:2009.08445, 2020.
Feichtenhofer, C., Fan, H., Li, Y., and He, K. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022.
Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and Lillicrap, T. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842-1850, PMLR, 2016.
Graves, A., Wayne, G., and Danihelka, I. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
|
http://arxiv.org/abs/2307.05020v1 | 20230711054518 | $B^{*}_c$ meson parameters and radiative decay width within the covariant confined quark model | [
"Aidos Issadykov",
"Sayabek K. Sakhiyev"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2307.07258v1 | 20230714102008 | Improving BERT with Hybrid Pooling Network and Drop Mask | [
"Qian Chen",
"Wen Wang",
"Qinglin Zhang",
"Chong Deng",
"Ma Yukun",
"Siqi Zheng"
] | cs.CL | [
"cs.CL"
] |
Transformer-based pre-trained language models, such as BERT, achieve great success in various natural language understanding tasks. Prior research found that BERT captures a rich hierarchy of linguistic information at different layers. However, the vanilla BERT uses the same self-attention mechanism for each layer to model the different contextual features. In this paper, we propose a HybridBERT model which combines self-attention and pooling networks to encode different contextual features in each layer. Additionally, we propose a simple DropMask method to address the mismatch between pre-training and fine-tuning caused by excessive use of special mask tokens during Masked Language Modeling pre-training. Experiments show that HybridBERT outperforms BERT in pre-training with lower loss, faster training speed (8% relative), lower memory cost (13% relative), and also in transfer learning with 1.5% relative higher accuracies on downstream tasks. Additionally, DropMask improves accuracies of BERT on downstream tasks across various masking rates.
§ INTRODUCTION
Transformer-based pre-trained language models, such as BERT <cit.> and RoBERTa <cit.>, demonstrate remarkable success in various natural language understanding tasks. These models rely heavily on self-attention mechanisms to learn contextual features in natural language. Previous research <cit.> shows that BERT captures a rich hierarchy of linguistic information, encoding surface features at the bottom layers, syntactic features in the middle layers, and semantic features at the top layers. However, most works follow the BERT architecture and use the same self-attention mechanism to model different contextual features for each layer, neglecting the different contextual features from bottom to top layers. Moreover, it has been observed that a significant proportion of self-attention focuses on the previous/next token, the special mask token, or a combination of the two <cit.>, indicating that self-attention for natural language learning may be redundant.
In this work, we aim to address the aforementioned hierarchy and redundancy issues in BERT, improve its transfer learning performance on downstream tasks, and reduce its training cost by accelerating training speed and lowering memory usage. Appendix <ref> summarizes related work. Prior attempts at stacking different model structures in different layers only achieve small gains and do not yet reach the transferability of BERT. For example, mixing self-attention layers and Fourier Transform layers only achieves 97%-99% of the accuracy of BERT on downstream tasks <cit.>. Prior works show that pooling networks can effectively model token mixing and long-range dependencies with linear time and memory complexity. In particular, the multi-granularity pooling network <cit.> achieves 95.7% of the accuracy of BERT on downstream tasks, showing a good trade-off between time/memory efficiency and transfer learning ability. Hence, we propose a HybridBERT model that replaces some self-attention layers in BERT with multi-granularity pooling network layers. We hypothesize that layer-wise mixing of self-attention (quadratic complexity) and pooling networks could simultaneously capture linguistic hierarchy and reduce redundancy.
HybridBERT outperforms BERT by 1.5% relative gains on average accuracy on downstream tasks. Also, since the multi-granularity pooling network has a linear complexity, HybridBERT has faster training speed and lower memory cost compared to BERT.
To further enhance transfer learning, we also aim to improve Masked Language Modeling (MLM) <cit.>, which is crucial for the success of self-supervised learning of natural language. Typically, MLM replaces around 15% of the input tokens with the special token [MASK]. This corruption creates a mismatch since the model sees the [MASK] token during pre-training but not during fine-tuning on downstream tasks.
ELECTRA <cit.> addresses this mismatch by replacing some tokens with surrogates sampled from small generator networks; however, this approach requires additional generator networks[More related works are in Appendix <ref>.]. In this paper, we propose a simple and efficient DropMask method to address the mismatch in MLM.
Our contributions include:
(1) We propose a HybridBERT model that replaces some self-attention layers in BERT with pooling network layers.
(2) We propose a simple and efficient DropMask method to improve the transfer learning ability of BERT without adding parameters or compromising training speed and memory cost.
(3) Experiments show that HybridBERT outperforms BERT in pre-training with lower loss, faster training speed (8% relative), lower memory cost (13% rel.), and 5% rel. higher average accuracy on downstream tasks. To the best of our knowledge, HybridBERT is the first model with different model structures in different layers that outperforms BERT in both reduced pre-training cost and improved transfer learning ability. DropMask improves accuracy of BERT on downstream tasks across various masking rates.
§ OUR APPROACH
§.§ HybridBERT
Fig. <ref> illustrates the proposed HybridBERT. It consists of L self-attention layers and N-L pooling network layers (where N is the total number of layers). First, the input sequence of tokens {x_1,…, x_l} maps to word-, position-, and type-embeddings, where l is the sequence length. These three different embeddings are then added as the input embeddings for the encoder, denoted by H^0 = {h_1, …, h_l}. The self-attention layers are the same as in Transformer <cit.>.
To model structural dependencies at the top layers (near the output), we use a multi-granularity pooling network (PoNet) <cit.>, which includes global aggregation (GA) and local max-pooling (LMP). We remove the original segment max-pooling in PoNet so that we align with BERT for generic scenarios, which does not rely on prerequisite structure in the source data.
Fig. <ref> provides a detailed illustration of the pooling network layers. For the input hidden states to the pooling network layers, five different linear projections are applied: H_* = H_in W_*, where * can be Q, K, V, O, and L. Among them, Q, K, and V denote query, key, and value; O denotes the output fusion for global aggregation; L denotes local max-pooling.
Global Aggregation
Global Aggregation (GA) aims to capture the most important global information for each token and guarantee linear computational complexity. First, we average H_Q to obtain a rough representation of the sequence h^avg_Q. Second, we take the rough representation h^avg_Q as a query to perform multi-head cross-attention on the input sequence to obtain h^att_Q, as follows:
h^att_Q = Attention(h^avg_Q, H_K, H_V).
Therefore, h^att_Q provides a more accurate sequence representation compared to h^avg_Q. Note that h^avg_Q is a single token, hence the computational complexity of this cross-attention is O(N).
To avoid all tokens sharing the same global representation of h^att_Q, the global aggregation output is the element-wise product between h^att_Q and H_O:
H_GA = H_O ⊙ h^att_Q.
Local Max-Pooling
Local Max-Pooling (LMP) is standard max-pooling over a sliding window to capture local contextual information for each token as: H_LMP = LMP(H_L). The sliding window size and stride length are set to 3 and 1 in all our experiments.
Finally, the output hidden states of the pooling network are the sum of these two features: H_out = H_GA+H_LMP.
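The following PyTorch-style sketch summarizes one pooling-network layer as described above (five projections, global aggregation with a single averaged query, and local max-pooling with window size 3 and stride 1). It only illustrates the mechanism: the use of nn.MultiheadAttention, the padding choice, and the omission of the surrounding residual, feed-forward, and layer-normalisation sublayers are our simplifications rather than the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolingNetworkLayer(nn.Module):
    """Sketch of one GA + LMP pooling layer (simplified, see assumptions above)."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        # five linear projections: Q, K, V, O (global aggregation) and L (local max-pooling)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.o = nn.Linear(d_model, d_model)
        self.l = nn.Linear(d_model, d_model)
        # cross-attention with a single averaged query, hence O(N) cost
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h_in):                       # h_in: (batch, length, d_model)
        h_q, h_k, h_v = self.q(h_in), self.k(h_in), self.v(h_in)
        h_o, h_l = self.o(h_in), self.l(h_in)
        # Global Aggregation: average query, one-token cross-attention, token-wise gating
        q_avg = h_q.mean(dim=1, keepdim=True)      # (batch, 1, d_model)
        q_att, _ = self.attn(q_avg, h_k, h_v)      # (batch, 1, d_model)
        h_ga = h_o * q_att                         # broadcast element-wise product
        # Local Max-Pooling: window 3, stride 1, padding keeps the sequence length
        h_lmp = F.max_pool1d(h_l.transpose(1, 2), kernel_size=3, stride=1, padding=1)
        h_lmp = h_lmp.transpose(1, 2)
        return h_ga + h_lmp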
§.§ DropMask
Masked Language Modeling is the crucial training objective for the success of BERT and BERT-like models.
Typically, the masking rate is set as 15%. 80% of the masked positions are replaced with the special token [MASK], 10% are replaced with randomly sampled words, and the remaining 10% are left unchanged.
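For reference, the corruption scheme just described can be sketched as follows. The helper below is a generic illustration of the 15% selection with the 80-10-10 replacement rule; the variable names and the label ignore index of -100 are common PyTorch conventions, not details taken from this paper.

import torch

def mlm_corrupt(input_ids, vocab_size, mask_id, special_mask, mask_rate=0.15):
    """Standard BERT-style MLM corruption: of the selected positions,
    80% become [MASK], 10% a random token, 10% stay unchanged.
    special_mask marks tokens (e.g. [CLS]/[SEP]/padding) that are never selected."""
    labels = input_ids.clone()
    selectable = ~special_mask
    selected = (torch.rand_like(input_ids, dtype=torch.float) < mask_rate) & selectable
    labels[~selected] = -100                        # ignore index for the MLM loss
    roll = torch.rand_like(input_ids, dtype=torch.float)
    corrupted = input_ids.clone()
    corrupted[selected & (roll < 0.8)] = mask_id    # 80% -> [MASK]
    rand_tok = torch.randint_like(input_ids, vocab_size)
    use_rand = selected & (roll >= 0.8) & (roll < 0.9)
    corrupted[use_rand] = rand_tok[use_rand]        # 10% -> random token; the rest unchanged
    return corrupted, labels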
Having too many [MASK] tokens in the input sequence can cause a significant mismatch between pre-training and fine-tuning, as downstream tasks do not have [MASK] in their input sequences.
The recently proposed MAE architecture <cit.> uses an asymmetric encoder-decoder to process a subset of visible image patches, thereby enabling training large encoders with small computation cost. Inspired by this approach, we propose a simple and efficient DropMask method for self-attention in BERT models. In contrast to MAE, which removes masked tokens from the encoder, DropMask only excludes [MASK] tokens from the calculation of the weighted summation in self-attention, as follows:
y_j = ∑_i ≠[MASK]A_ij x_i, i,j ∈{1,…, l}
where A is the dot-product attention matrix, x is the input value, and y is the output of weighted summation.
DropMask enables [MASK] tokens to view other unmasked tokens, while preventing all tokens from viewing [MASK] tokens, thereby reducing mismatch between pre-training and fine-tuning in BERT.
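A minimal sketch of the DropMask rule for a single attention head is given below. Following the formula above, the contributions of [MASK] keys are simply removed from the weighted sum; whether the remaining attention weights are renormalized is not fixed by the text, so that choice, like the single-head simplification and the omitted output projection, is our assumption.

import torch

def dropmask_self_attention(x, w_q, w_k, w_v, is_mask_token):
    """DropMask sketch (single head, no output projection -- our simplification).

    x:             (batch, length, d) input hidden states
    is_mask_token: (batch, length) bool, True where the input token is [MASK]

    Every query (masked or not) may attend to unmasked tokens, but no query attends
    to a [MASK] key, i.e. y_j = sum over i != [MASK] of A_ij x_i.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    a = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)  # (b, len, len)
    a = a.masked_fill(is_mask_token[:, None, :], 0.0)   # zero contributions of [MASK] keys
    return a @ v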
§ EXPERIMENTS
We conduct pre-training of various models and evaluate their fine-tuning performance on the English GLUE benchmark <cit.> and the Chinese CLUE benchmark <cit.>. The implementation details are in Appendix <ref>.
Results of HybridBERT on CLUE
We evaluate different configurations of HybridBERT, as well as applying DropMask to BERT, on the CLUE development set (see Table <ref>). We focus on the average score (Avg) across 7 datasets.
The first baseline uses 12 self-attention layers (denoted as BERT(12A)) and the second baseline uses 12 pooling network layers (denoted as PoNet(12P)). We evaluate the impact from varying the number of self-attention layers and the placement of self-attention layers and pooling network layers in HybridBERT. We make two observations: (1) 8 self-attention layers perform better than 4 self-attention layers; (2) placing the self-attention layer at the bottom (near the input) yields better or comparable results, which is opposite to observations on FNet-Hybrid <cit.>. FNet-Hybrid finds that placing the self-attention layers at the top (near the output) achieves better accuracy on downstream tasks than at the bottom.
The different observations may be due to the distinct mechanisms used by Fourier Transform layer and pooling network.
The best-performing configuration for HybridBERT places 8 self-attention layers at the bottom and 4 pooling layers at the top, denoted as Hybrid(B8A+T4P). It improves BERT by 1.03 absolute and PoNet by 2.57 absolute on the Avg score. Notably, Hybrid(B8A+T4P) outperforms BERT on all CLUE tasks except CSL. We focus on the best-performing HybridBERT(B8A+T4P) for ablation analysis. The Avg score drops -0.24 from removing GA from this model and -0.47 from removing LMP, suggesting that both GA and LMP are important for good performance on downstream tasks.
Results of HybridBERT on Pre-training
We compare the performance of different models on total loss (L_total), MLM loss (L_MLM), and Sentence Structural Objective (SSO) <cit.> loss (L_SSO) on validation set, as well as the training speed and memory cost, during pre-training (see Table <ref>).
Compared with BERT, our HybridBERT(B8A+T4P) improves L_total by 0.031 and L_MLM by 0.043 but degrades L_SSO by 0.012. Compared with PoNet, HybridBERT(B8A+T4P) improves L_total by 0.386, L_MLM by 0.35, and L_SSO by 0.036.
Meanwhile, HybridBERT(B8A+T4P) has faster training speed (8% relative) and less memory cost (13% rel.) compared to BERT, but slower training speed (18% rel.) and more memory cost (43% rel.) than PoNet.
The effect of changing the number of self-attention layers and self-attention positions in HybridBERT on the evaluation loss during pre-training is consistent with the performance on CLUE. More self-attention layers increase accuracy at the cost of training speed and memory cost. Self-attention positions have no significant impact in training speed and memory cost.
For ablation analysis on removing GA and LMP, both HybridBERT w/o GA and HybridBERT w/o LMP outperform HybridBERT on L_total and have faster training speed and less memory cost. However, they do not perform as well as HybridBERT on downstream CLUE datasets (see Table <ref>).
Results of DropMask
We evaluate BERT w/ or w/o DropMask (DM) on CLUE (Table <ref>) and GLUE (Table <ref>).
BERT w/DM achieves 0.44 absolute gain on CLUE development set and 0.33 abs. gain on GLUE validation set.
Table <ref> shows that BERT w/DM achieves [0.33, 2.67] abs. gains on GLUE validation set across different masking rates.
Interestingly, with 75% masking rate, w/DM achieves a large 18.39 abs. gain on the COLA dataset.
We also add DropMask to HybridBERT but observe no improvement. We plan to investigate this in future work. Table <ref> verifies that DM does not change training speed and memory cost[The very small difference between Model (1) and (9) is due to different implementations.].
§ CONCLUSION
We propose a HybridBERT model and a DropMask method to improve BERT accuracy on downstream tasks. HybridBERT outperforms BERT in pre-training cost and in higher accuracy on downstream tasks. DropMask improves the accuracy of BERT on downstream tasks across different masking rates.
Future work includes investigating other efficient Transformers in hybrid structures.
§ LIMITATIONS
The study has several limitations to consider. Firstly, our experiments are conducted only on the BERT-base size model, so it is uncertain whether the results would hold for larger models. Secondly, we evaluate the Hybrid pooling networks and DropMask only on the BERT backbone, not yet including other backbones such as RoBERTa. Lastly, we have not observed improvements from applying DropMask on HybridBERT, which requires further investigations.
§ RELATED WORK
§.§ Transformers Variants
Many previous works have studied improving Transformers by combining self-attention with convolutional networks to enhance context-dependent modeling at each layer. Conformer <cit.> inserts a convolutional sublayer between the self-attention sublayer and the feed-forward sublayer, which is found effective in many speech-processing tasks. ConvBERT <cit.> replaces some self-attention heads with span-based dynamic convolutions to directly model local dependencies for improving performance on natural language understanding tasks. Similar two-branch architectures are used in Lite-Transformer <cit.> and Branchformer <cit.>.
Meanwhile, some other works choose to stack different model structures in different layers. FNet-hybrid <cit.> found that replacing the self-attention in the 10 bottom layers (near the input) with a Fourier transform layer runs 40%-70% faster, but achieves only up to 97%-99% of the accuracy of BERT on the GLUE benchmark <cit.>. Furthermore, they found that placing the self-attention layers at the top (near the output) gave better accuracy on downstream tasks than the bottom.
Fusion-in-encoder (FiE) <cit.> uses a local layer at the bottom and a global layer (i.e. original self-attention) at the top to encode long sequences while preserving high attention precision on short sequences. In each local layer, the input sequence is divided into small patches, and self-attention is applied only to those patches locally.
Additionally, pooling mechanisms have recently shown great potential for token mixing and modeling long-range dependencies. PoNet <cit.> proposes a pooling network for token mixing in long sequences with linear complexity. They design a multi-granularity pooling module to capture local, segmental and global contextual information and interactions. However, they only observe 95.7% accuracy of BERT on the GLUE benchmark. PoolFormer <cit.> achieves competitive performance on multiple computer vision tasks using a simple spatial pooling operator.
In this paper, considering the rich hierarchy of linguistic information, we propose a HybridBERT model that replaces some of the self-attention layers in BERT with pooling networks.
§.§ Self-supervised Pre-training Tasks
Masked language modeling (MLM) is one of the most successful self-supervised pre-training tasks for natural language processing, which is used in BERT, RoBERTa, and other BERT-like models.
It corrupts the input token sequence by randomly replacing some tokens with [MASK] and then trains a model to reconstruct the original tokens. However, this corruption process leads to mismatches in BERT, where the model sees the [MASK] token during pre-training, but not when fine-tuning on downstream tasks.
There has been much effort to address the issues of MLM. XLNet <cit.> introduces permuted language modeling (PLM) for pre-training to capture dependencies between predicted tokens, thus overcoming the limitations of BERT.
MPNet <cit.> integrates MLM and PLM to inherit the advantages of both methods.
Furthermore, <cit.> found that masking up to 40% of input tokens can outperform the baseline of masking 15% of input tokens, measured by fine-tuning on downstream tasks. However, higher masking rates aggravate the mismatch problem.
In this paper, we introduce a simple and efficient DropMask method to alleviate the mismatch problem to improve the transfer learning ability of BERT without adding additional parameters and increasing training cost.
§ IMPLEMENTATION DETAILS
We utilize the HuggingFace toolkit[https://github.com/huggingface/transformers] to implement all models and DropMask method and conduct experiments.
For Chinese pre-training, we use Masked Language Modeling (MLM) <cit.> and Sentence Structure Objective (SSO) <cit.> on Chinese Wikipedia (3GB text data)[https://dumps.wikimedia.org/].
The masking rate is 15%.
We train for 100,000 steps with a learning rate of 1e-4, a batch size of 384, and a maximum length of 512. For fine-tuning on CLUE, we run 5 runs with different random seeds and compute the mean and standard deviation.
For each seed, we use a grid search on hyperparameters to obtain the best results for the seed.
For AFQMC, CSL, iFLYTEK, OCNLI, TNEWS, hyperparameters are searched by batch size in {16, 32, 48} and learning rate in {1e-5, 3e-5, 5e-5}, the number of epochs is 5; For WSC, hyperparameters are searched by batch size in {16, 32, 48}, learning rate in {1e-5, 2e-5, 3e-5, 4e-5, 5e-5}, and the number of epochs in {50, 100}; for CMNLI, batch size is 48, learning rate is 1e-5, and the number of epochs is 5.
For English pre-training, we only use MLM on English Wikipedia and BooksCorpus (16GB text data)[https://huggingface.co/datasets/bookcorpus].
We train for 125,000 steps with a learning rate of 4e-4, a batch size of 2048, and a maximum length of 128. For fine-tuning on GLUE, we use grid search on hyperparameters to obtain the best results. Hyperparameters are searched in batch size 32, learning rate {2e-5, 3e-5, 5e-5}, number of epochs {5, 10}.
All models are comparable in the BERT-base size.
|
http://arxiv.org/abs/2307.05641v2 | 20230711101459 | Point to the Hidden: Exposing Speech Audio Splicing via Signal Pointer Nets | [
"Denise Moussa",
"Germans Hirsch",
"Sebastian Wankerl",
"Christian Riess"
] | eess.AS | [
"eess.AS",
"cs.SD"
] |
Verifying the integrity of voice recording evidence for criminal investigations is an integral part of an audio forensic analyst's work. Here, one focus is on detecting deletion or insertion operations, so called audio splicing. While this is a rather easy approach to alter spoken statements, careful editing can yield quite convincing results. For difficult cases or big amounts of data, automated tools can support in detecting potential editing locations. To this end, several analytical and deep learning methods have been proposed by now. Still, few address unconstrained splicing scenarios as expected in practice. With SigPointer, we propose a pointer network framework for continuous input that uncovers splice locations naturally and more efficiently than existing works. Extensive experiments on forensically challenging data like strongly compressed and noisy signals quantify the benefit of the pointer mechanism with performance increases between about 6 to 10 percentage points. [code: https://www.cs1.tf.fau.de/research/multimedia-security]
Index Terms: Audio Splicing Localisation, Audio Forensics, Pointer Networks
§ INTRODUCTION
In today's digital era, more and more speech recordings like voice messages, recorded phone calls or audio tracks of videos are produced and possibly post-processed and shared via the internet.
Consequently, they often contain important cues for criminal investigations, too.
With powerful tools, either commercial or free, as for example Audacity <cit.>, the hurdles for editing operations have become low.
Forensic audio analysts are thus often assigned to verify the integrity of material relevant to court cases.
Audio splicing (which subsumes deletion, copying and insertion of speech segments) is an effective and easy-to-perform manipulation that violates integrity.
For example, the sentence I do not agree is easily inverted by deleting the signal segment containing not and merging the remaining parts.
Simple post-processing steps, such as saving forgeries with lossy compression, e.g., in MP3 format, can further weaken or obscure editing cues. Furthermore, the workload for analysts strongly increases with forgery quality and the amount of data.
Up to now, several methods have been proposed to assist with localising splices in speech material.
However, they are mostly inapplicable to unconstrained signal characteristics.
With this work, we address the current limitations and propose a novel and natural approach to audio splicing localisation.
§.§ Existing Approaches to Audio Splicing Localisation
Audio splicing localisation is mostly targeted with analytical and deep-learning-based methods that focus on specific features to detect signal inconsistencies.
For example, some previous works examine the consistency of specific audio formats <cit.>, rely on splices of recordings from different devices <cit.>, or detect changes of the recording environment's noise levels <cit.>, acoustic impulse <cit.>, or both <cit.>.
Others also search for atypical changes in the subtle (and fragile) electrical network frequency (ENF) <cit.>.
Due to the rise of convincing audio synthesis techniques, several works specifically aim at detecting artificial segments amidst original speech <cit.>.
In practice, forensic analysts are confronted with audio samples from unconstrained sources, which implies, e.g., arbitrary recording parameters, quality, formats, or post-processing operations. So methods relying on the presence of very specific features might not be applicable if those features are absent from certain audio signals.
Indeed, several deep learning approaches with unconstrained feature extraction have been proposed recently <cit.>, but many of these methods are still preliminary. Some target audio splicing detection but omit localisation <cit.>; another work examines only two fixed splicing patterns, and its generalization remains unclear <cit.>.
In addition, the small and non-diverse Free Spoken Digit Dataset <cit.> is often used to construct spliced samples <cit.>, and frequently, material from different speakers instead of one is merged <cit.>, which excludes the highly relevant and more difficult-to-detect case of forged statements of one person. Zeng et al. <cit.> consider more miscellaneous spliced forgeries from one speaker and employ a ResNet-18 <cit.> method for chunks of audio spectrogram frames. However, this is only fit for coarse splicing localisation within windows of 32 to 64 frames. For frame-level granularity, a seq2seq Transformer <cit.> model has been proposed <cit.>. It outperforms several CNN classifiers on challenging data, but still leaves room for improvement concerning well-made forgeries.
§.§ Audio Splicing Localisation via Pointer Mechanisms
We propose a major improvement over existing methods for audio splicing localisation by regarding this task as a pure pointing problem. Pointers predict a conditional probability distribution over elements of a sequence and were originally designed for approximating combinatorial optimization problems <cit.>. They can thus directly locate parts of the input series, in contrast to traditional seq2seq networks, where a mapping to a fixed set of target tokens is learned. Pointer mechanisms were recently also integrated into the Transformer <cit.> architecture, where mixtures of pointer and token generation components solve natural language processing tasks <cit.>.
In our case, we want the neural network to indicate splice locations by pointing to the respective input signal positions. This appears more natural and efficient than step-wise classifying segments of fixed size <cit.> or learning a mapping to a fixed vocabulary <cit.>. Existing pointer methods, however, operate on categorical input. We thus design SigPointer, a Transformer <cit.> based pointer network for continuous input signals. We benchmark against existing works on audio splicing localisation and analyse the influence of our network's components. SigPointer proves to perform best on forensically challenging data, both under seen and unseen quality degradations.
§ PROPOSED METHOD
In this section, we describe our proposed pointing method for signals, as well as training strategies and datasets.
§.§ SigPointer for Continuous Input Signals
We define audio splicing localisation as a pointing task. The input to our encoder-decoder network (Fig. <ref>) is a time series of signal representation vectors with length N, formally 𝐒 = [𝐬_0, 𝐬_1, ⋯ , 𝐬_N]. Additionally, 𝐬_-1 denotes ⟨ eos ⟩, a special token to which the pointer mechanism can point to denote that decoding is finished <cit.>. It is omitted in the following definitions for simplicity. The encoder part of our network (left) mostly aligns with the original Transformer <cit.> model. It is made of a stack of layers, each implementing multi-head self-attention followed by a fc layer and layer-normalisation. However, unlike existing pointer methods on categorical data (cf. Sec. <ref>), we skip learning input embeddings and feed the raw data into the network, since we already operate on continuous, dense data. The encoded representation 𝐇 = [𝐡_0, 𝐡_1, ⋯, 𝐡_N] of 𝐒, where 𝐡_n ∈ℝ^l with latent size l, then serves as input to the decoder (middle). The decoder solely consists of a stack of multi-head attention layers <cit.>. Per time step t, it takes 𝐇 and one position vector 𝐳_t^*∈ℝ^l per previously predicted ŷ_t^*∈ℐ = {0, 1, ⋯, N} with t^* < t to indicate the next splice location ŷ_t. More details about this decoding process are given in the following two paragraphs.
Pointer Mechanism To predict the full output index sequence ŷ, we need the conditional probability distribution over all input positions P(ŷ|𝐒) = ∏_t=0^T P(ŷ_t | 𝐇, 𝐙). Thus, for each time step, we extract the cross-attention scores between 𝐇 and 𝐙 from the last decoder layer for all M attention heads, yielding 𝐀^M×|𝐒|_t, with a_m,s∈ℝ (Fig. <ref>, right). Unlike related Transformer pointer methods, we do not mix pointer and token generation tasks <cit.>, so all the model's attention heads can be reserved for computing one final pointer result. We average over the heads and normalize to yield the discrete index probability distribution 𝐩̂_t = softmax(1/M ∑_m=1^M 𝐀_t,m). During inference, ŷ_t = argmax(𝐩̂_t) is computed to yield the final splice point, or ⟨eos⟩ if no (further) splice point is detected.
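The pointer head of one decoding step can be sketched as follows; the tensor layout, the use of raw (pre-softmax) scores, and the placement of the ⟨eos⟩ slot are our assumptions for illustration.

import torch

def pointer_step(cross_attn_scores, eos_index):
    """One decoding step of the pointer head (shapes and <eos> placement are assumptions).

    cross_attn_scores: (n_heads, n_positions) raw cross-attention scores of the current
                       decoder position over the encoded signal frames plus the <eos> slot.
    """
    p_hat = torch.softmax(cross_attn_scores.mean(dim=0), dim=-1)  # average heads, normalize
    y_hat = int(torch.argmax(p_hat))
    return p_hat, y_hat, y_hat == eos_index       # pointing at <eos> stops decoding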
Slim Decoding Existing seq2seq (pointer) approaches on categorical data traditionally use the semantic of the previously decoded elements ŷ_t^* to decode the next ŷ_t from 𝐇. Hereby, ŷ_t^* is projected to latent size l with an embedding layer E_m and positional encodings are added to yield the final representation. Contrary, we only use sinusoidal encoding vectors <cit.> 𝐳_t^* = 𝐞_t^* for each ŷ_t^*.
The vectors 𝐳_t^* thus only preserve the relative ordering and number of already decoded output items. In several tests, we observed that this slim information is sufficient, since training some E_m and reusing y_t^* as decoder target input did not show any advantage. In fact, the performance even slightly degraded. We reason that this is due to sparser available context between sequence elements compared to approaches on discrete data, e.g., word tokens from a natural language <cit.>. Differently from the latter, splicing points carry little syntactic information (locations) and are generally not expected to have rich dependencies among each other. So, previously found splice locations do not provide clues about future ones.
§.§ Choosing Model and Training Parameters
We conduct a parameter search of 200 trials with optuna <cit.> (v.2.10) and PyTorch <cit.> (v.1.10.2) on 4 consumer GPUs. The search space encompasses n_e, n_d ∈ [1, 8] encoder and decoder layers, h ∈{1,3,9} heads (divisors of latent size l=279), f_d ∈ [2^7, 2^11] for the fc layer dimension, dropout d ∈ [0.01, 0.05, ⋯, 0.3] within the attention layers, learning rate l_r ∈ [1e^-5, 5e^-5, ⋯ 5e^-3] for the Adam optimizer and batch size b ∈ [32, 64, 128, 256, 350]. We choose the best configuration 𝒞: (n_e, n_d, h, f_d, d, l_r, b) = (7, 1, 9, 2^7, 0.1, 5e^-4 , 350) for our network and use Glorot weight initialisation <cit.>.
SigPointer converges significantly better with regression as opposed to classification losses.
The cosine distance loss d_c, which compares 𝐩̂_t with the one-hot encoded target splice indices 𝐩_t via their cosine similarity 𝐩̂_t·𝐩_t/(‖𝐩̂_t‖ ‖𝐩_t‖) over the decoding steps t = 1,…,T, proves to be most stable. Note that a more exact metric for our task is the mean absolute error between the predicted positions ŷ and targets 𝐲. Since it reflects training progress better but is non-differentiable, we only use it as the validation loss.
§.§ Datasets and Training Strategy
All datasets are generated using our previously introduced pipeline <cit.> and cover forensically challenging scenarios. We use the anechoic ACE dataset <cit.> as source set, such that the model does not adapt to unintended, acoustic side channels in forged samples. Samples between 3 s and 45 s are created from n_s source segments of the same speaker from the same or different simulated environments. Post-processing that may obscure splicing is applied in the form of additive Gaussian noise and either single AMR-NB or MP3 compression, all in randomly sampled strength. Adding noise to a signal can serve as an easy method to mask tampering, just as compression which however may also be unintentionally introduced by re-saving the edited version or sharing via social media services. The final feature representation with a time resolution of 500 ms is a concatenation of the Mel spectrogram, MFCCs and spectral centroids, yielding the feature dimension l = 279. For more details about the generation pipeline we refer to the respective work <cit.>.
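A possible feature extraction along these lines is sketched below with librosa. The 500 ms hop mirrors the stated time resolution, while the split of the 279 dimensions into Mel bands, MFCCs, and the spectral centroid (n_mels=200, n_mfcc=78, plus one centroid value) is a placeholder assumption, since the exact configuration of the cited pipeline is not repeated here.

import librosa
import numpy as np

def frame_features(y, sr, hop_s=0.5, n_mels=200, n_mfcc=78):
    """Concatenate Mel spectrogram, MFCCs and spectral centroid per frame.

    hop_s = 0.5 mirrors the 500 ms time resolution; n_mels and n_mfcc are
    placeholders chosen so that n_mels + n_mfcc + 1 = 279."""
    hop = int(hop_s * sr)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop, n_mels=n_mels))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop, n_mfcc=n_mfcc)
    cent = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)
    feats = np.concatenate([mel, mfcc, cent], axis=0)   # (279, n_frames)
    return feats.T                                      # one 279-dim vector per frame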
For training, we employ curriculum learning <cit.> in a three-stage process with a train/validation split of 500k/30k for each stage. Training is conducted upon model convergence with 100 epochs and a patience of 20 epochs. The first dataset covers samples with n∈ [0, 1] splices and no post-processing, the second extends to post-processing and the third includes both multi-splicing with n ∈ [0, 5] and post-processing.
As in existing work <cit.>, we employ cross-dataset testing and generate our test sets from the Hi-Fi TTS set <cit.> as described in Sec. <ref>.
§ EXPERIMENTS
We evaluate our proposed method on forensically challenging data and test against four existing methods and two custom baselines to analyse the benefits of the pointer framework.
§.§ Baseline Models
We train each neural network as described in Sec. <ref>, where all but the pointer approaches employ the binary cross-entropy (BCE) loss instead of the cosine loss.
CNNs Most existing works rely on CNNs and natively have too strong limitations to be directly used for our task (cf. Sec. <ref>). We reimplement three approaches <cit.>. The first two frameworks <cit.> only cover binary classification of spliced vs. non-spliced signals with custom CNN models, while Zeng et al. <cit.> propose localisation with a ResNet-18 <cit.> classifier.
They employ a sliding window approach over chunks of frames, where the window stride s=1 accounts for exact (frame-level) detection. The classifier's decision per frame is inferred from averaged probabilities of multiple windows. This approximative method is however only fit for larger s <cit.>. Our reimplementation confirms long training times and poor detection on frame level, so we use a custom splice localisation strategy for the CNNs. It is based on our previously introduced framework <cit.>, where baseline CNNs are extended to classify at maximum n splicing positions with n_o = n output layers. This however shows poor performance for n > 1. Instead, we set n_o to the maximum expected number of input frames, 90 in our case, and perform a binary classification per frame which proves to be more stable w.r.t. higher n. Splitting into smaller segments accounts for signals with lengths exceeding n_o.
We exclude two methods, one because no details on the model specifications are given <cit.>, and the other because of very restrictive splicing assumptions whose relaxation to our more unconstrained task is not straightforward <cit.>.
Seq2Seq Transformer We also re-train our seq2seq model from previous work on multi-splicing localisation <cit.>. Given a signal, the encoder outputs its latent representation from which a sequence of splicing points is decoded step-by-step using a fixed vocabulary set.
SigPointer𝒞_M For a direct comparison of the pointer against the seq2seq framework, we instantiate SigPointer with the Transformer configuration 𝒞_M in <cit.>.
Transformer encoder SigPointer employs autoregressive decoding (cf. Sec. <ref>). To quantify its influence we test against the capacity of a plain, non-autoregressive Transformer encoder.
Thereby, we project the encoder memory 𝐇^l × N (Fig. <ref>) to size 2 × N to perform a per-frame classification as for the CNN baselines. For a fair comparison, we again conduct a hyper-parameter search (Sec. <ref>), but double the search space for the number of encoder layers to n_e ∈ [1,16] to account for the missing decoder capacity. The best model configuration yields 𝒞: (n_e, h, f_d, d, l_r, b) = (12, 9, 2^11, 0.1, 1e^-4, 64).
§.§ Performance Metrics
We report the average Jaccard index J = |ŷ∩𝐲| / |ŷ∪𝐲|, expressing the similarity of prediction ŷ and ground truth 𝐲 as intersection over union, as well as the average recall R = |ŷ∩𝐲|/|𝐲|, over 5 training runs with different seeds. Note that the order of the predicted points is irrelevant for both metrics. We evaluate both exact localisation and coarser granularity by binning the input signal by f ∈ [1,2,3,4] frames, denoted as Bin=f.
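For a single sample, these metrics can be computed as in the sketch below; averaging over samples and training runs is omitted, and the binning by integer division is our reading of Bin=f, so boundary handling may differ from the original evaluation code.

def splice_metrics(pred, target, bin_size=1):
    """Jaccard index and recall for one sample of predicted / ground-truth splice indices.

    pred, target: iterables of frame indices (order irrelevant);
    bin_size > 1 coarsens frames into bins of that many frames."""
    p = {i // bin_size for i in pred}
    t = {i // bin_size for i in target}
    if not p and not t:             # non-spliced sample correctly predicted as non-spliced
        return 1.0, 1.0
    jaccard = len(p & t) / len(p | t)
    recall = len(p & t) / len(t) if t else 1.0
    return jaccard, recall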
§.§ Evaluation of the Pointer Mechanism
For this experiment, we generate a dataset of 30k samples with uniformly sampled n ∈ [0, 5] splicing positions from the Hi-Fi TTS test pool <cit.>, including single compression and noise post-processing as described in Sec. <ref>. The size and performance results of all models are listed in Tab. <ref>.
The CNN methods (rows 1 to 3) are inferior to the Transformer-based approaches (rows 4 to 7). Jadhav et al.'s <cit.> large but very shallow network performs worst, followed by Chuchra et al.'s <cit.> small but deeper model with 12 layers and the best and deepest ResNet-18 CNN baseline.
Solving the same classification task (Sec. <ref>) with the Transformer encoder greatly improves splicing localisation. It also slightly surpasses the best proposed seq2seq model from related work <cit.>. However, it also uses about 2.3 times as many trainable parameters. Training the seq2seq model in our pointer framework (SigPointer𝒞_M) demonstrates the benefit of our proposed approach.
The missing seq2seq vocabulary mapping component slightly reduces the network size, still the Jaccard index and recall increase by approximately 5.0 pp and 4.8 pp and outperform both the Transformer encoder and the original seq2seq model.
We achieve the best performance with optimized hyper-parameters (cf. Sec. <ref>), yielding the even smaller SigPointer*. Notably, the decoder is reduced to 1<5 layers compared to SigPointer𝒞_M which suffices for our slim decoding strategy (cf. Sec <ref>). The advantage of about 5.1 pp and 5.2 pp to SigPointer𝒞_M with J = 0.5184 > 0.4670 and R = 0.5719 > 0.5202 for Bin=1 steadily decreases, reaching 3.8 pp (J) and 3.0 pp (R) for Bin=4. We thus assume that SigPointer𝒞_M is only slightly less sensitive to splicing points than the optimized SigPointer* but notably less exact in localisation.
§.§ Influence of Splices per Input
In Fig. <ref> we report the Jaccard coefficients on our Hi-Fi TTS test set w.r.t. the number of splice positions per sample.
Evidently, all models recognise non-spliced inputs relatively well, while localising actual splice positions correctly proves to be more difficult on this challenging dataset (Fig. <ref>-<ref>). The best performing SigPointer* (red) achieves J = 0.5187 for single splices and still J = 0.3918 for n=5 splices. When allowing coarser signal binning (Fig. <ref>-<ref>), the performance increases considerably up to J = 0.7268 and J = 0.6104 for n = 1,5 and Bin=4 (Fig. <ref>). As stated in Sec. <ref>, the less accurate SigPointer𝒞_M (orange) benefits from coarser bins, but cannot outperform SigPointer*. The Transformer encoder (grey) performs well for n=1,2, but drops below the Transformer seq2seq model's <cit.> (green) performance for n≥3.
The CNNs (blue) are weakest, where especially the custom models <cit.> exhibit low sensitivity to splicing and thus cannot profit from coarser signal binning (Fig. <ref>-<ref>). In summary, SigPointer* outperforms all baselines in terms of sensitivity to splicing and exactness of localisation, despite its small model size of 3.4 M parameters.
§.§ Robustness to Complex Processing Chains
We test the generalization ability of the models, trained only with single compression and additive Gaussian noise post-processing, to even more strongly obscured splicing points. We thus run the robustness experiments from our previous work on the available test sets of 10k samples each from Hi-Fi TTS <cit.>. They consist of 0 to 5 times spliced samples subjected to n_c ∈ [1,5] AMR-NB or MP3 compression runs (5 test sets) and additive real noise post-processing (4 test sets). Both compression and noise strength are randomly sampled <cit.>.
Figures <ref>-<ref> show the results for Bin=1,4.
Most previously described trends in model performance (Sec. <ref> and Sec. <ref>) also show in this robustness experiment. However, the seq2seq Transformer <cit.> this time surpasses the simpler Transformer encoder and shows slightly better robustness in all experiments. SigPointer* again outperforms all models. Compared to the best existing model <cit.> (green), for Bin=1 it increases localisation ability by (on average) 8.4 pp for multi compression and 9.1 pp for additive real noise (Fig. <ref>, Fig. <ref>). Surprisingly, despite differing complexity, the type of noise has little influence on the performance.
§.§ Limitations
In our tests, SigPointer models are more sensitive to weight initialisation than the other considered neural networks.
The number of epochs until the model converges can thus vary strongly, so early pruning of weak runs is recommended in practice.
Also note that one design advantage of our model can be a pitfall in practice. SigPointer can process signals of arbitrary length, contrary to classifiers that are constrained by their number of output layers or seq2seq methods that are indirectly limited by their learned output vocabulary mapping (cf. Sec. <ref>). However, we empirically observed that the pointer adapts to the signal lengths seen in training and barely searches for splices outside the known ranges. The behaviour of inherent adaptation to problem sizes is already known from the literature <cit.>. To mitigate this issue, we thus strongly recommend cutting test samples into multiple separate segments that fit into the training distribution, as was also done for the comparison methods.
§ CONCLUSIONS
With SigPointer, we present a novel and more natural approach to the task of audio splicing localisation with the help of pointer mechanisms. Our focus is on aiding with difficult-to-detect splice positions that pose a problem, for example, in forensic analysts' daily work. In several tests on in- and out-of-distribution data, we quantified the advantage of our pointer framework for continuous signals and outperform existing approaches by a large margin, even with a much smaller model size.
|
http://arxiv.org/abs/2307.04465v1 | 20230710102840 | Tropical convexity in location problems | [
"Andrei Comăneci"
] | math.OC | [
"math.OC",
"math.MG",
"q-bio.PE",
"14T90, 26B25, 52A30, 90B85, 92B10"
] |
We investigate location problems whose optimum lies in the tropical convex hull of the input points.
Firstly, we study geodesically star-convex sets under the asymmetric tropical distance and introduce the class of tropically quasiconvex functions whose sub-level sets have this shape.
The latter are related to monotonic functions.
Then we show that location problems whose distances are measured by tropically quasiconvex functions as before give an optimum in the tropical convex hull of the input points.
We also show that a similar result holds if we replace the input points by tropically convex sets.
Finally, we focus on applications to phylogenetics presenting properties of consensus methods arising from our class of location problems.
§ INTRODUCTION
There is a recent interest in studying location problems in tropical geometry, especially in the use of tropical methods to data analysis.
Maybe the first article to promote such problems with a view towards “tropical statistics” is the work of Lin et al. <cit.>.
They showed that tropical convexity in tree spaces has some better properties than the geometry of Billera, Holmes, and Vogtmann (BHV) <cit.>.
This encouraged them to propose location estimators based on the symmetric tropical distance that could potentially exploit tropical convexity.
In particular, this would give a tropical approach to the consensus problem from phylogenetics <cit.>.
The connection for the proposed location statistics to tropical convexity was not well understood.
For example, they noticed that tropical Fermat–Weber points can lie outside the tropical convex hull of the input points <cit.>, although it was found later that one can find Fermat–Weber points inside the tropical convex hull <cit.>.
However, the unclear connection makes it difficult to obtain solutions that can be interpreted in the phylogenetic setting; see also <cit.>.
Recently, we could show that studying the Fermat–Weber problem using an asymmetric distance function leads to a better explanation in terms of tropical convexity <cit.>.
In particular, it provides a clear approach based on tropical convexity to the consensus problem from phylogenetics.
Moreover, various desirable properties of consensus methods were obtained by exploiting tropical convexity.
In fact, the good properties were solely due to tropical convexity and not the particular distance function which motivates the search for other methods with similar properties.
In this paper, we focus on location problems that have the potential of exploiting tropical convexity.
More specifically, we care of those location estimators that will belong to the tropical convex hull of the input points.
Such estimators are based on distances that reflect the tropical structure of the space and can be seen as a counterpart to similar studies regarding location problems and ordinary convexity.
Significant work was done for understanding geometric properties of location problems and their relationship to ordinary convexity.
The case of Chebyshev centers dates back to the 60s in the work of Garkavi <cit.> and Klee <cit.>.
More general location problems in a normed space were studied by Wendell and Hurter <cit.>, while a focus on geometric properties of Fermat–Weber problems with varying distances is covered by Durier and Michelot <cit.>.
What is more, it was shown that finding an optimal solution in the (ordinary) convex hull for every set of points is equivalent to having an inner product space in three dimensions or more; a general form of this result was obtained by Durier <cit.>.
The results mentioned above show a strong relationship between ordinary convexity and a Euclidean structure.
Tropical convexity, on the other hand, is related to the lattice structure of (^n,≤).
Hence, we have to focus on “monotonic” distances.
To interpret monotonic functions geometrically in the quotient space n, we notice that all their sub-level sets share a similarity: they are geodesically star-convex with respect to the asymmetric tropical distance.
The latter can be seen by remarking that geodesic segments are images of order segments in (^n,≤).
The resulting sets, called -star-convex, and functions, called -star-quasiconvex, are discussed in sections <ref> and <ref>, respectively.
In section <ref> we focus on location problems in which distances to the sites are measured by -star-quasiconvex functions.
We show that this setting guarantees optimal locations in the tropical convex hull of the input.
We will see that the triangle inequality does not play any role, which emphasizes the differences between tropical and ordinary convexity.
Further, this setting allows for very general location problems where dissimilarities are not necessarily distances; triangle inequality is generally assumed in location science when dealing with geographic location <cit.>, but it is not reasonable for more general data <cit.> and never assumed in the construction of M-estimators <cit.>.
We have further a few examples of location problems from the literature that end in our setting.
In particular, location problems involving the symmetric and asymmetric tropical distances.
However, the former may admit instances where some optima lie outside the tropical convex hull of the input.
So what is the precise distinction between the symmetric and the asymmetric tropical distances that causes the above behaviour?
We show that strict -star-convexity is the answer.
This motivates the study of regularized versions discussed in §<ref>.
We briefly show in section <ref> that we can extend the results to the case when the sites are tropically convex sets.
Then section <ref> deals with the main application to phylogenetics: the tropical approach to consensus methods.
Our general setting provides a large class of tropically convex consensus methods as defined in <cit.>.
Furthermore, we enlarge the list of desirable properties of these consensus methods that were given in the previously cited work.
Finally, we conclude with section <ref> consisting of highlights and possible directions for future research.
§ TROPICAL CONVEXITY
The purpose of this section is to fix the notation and emphasize the basic properties of tropically convex sets that will be used later.
One can consult the book of Joswig <cit.> for more details.
We will use both semirings ^min=(∪{∞},∧,+) and ^max=(∪{-∞},∨,+) where x∧ y=min(x,y) and x∨ y=max(x,y).
They are isomorphic under the map x↦ -x, but it is better to be seen as dual to each other.
This duality will play an important role later similar to the relationship between max-tropical polytopes and min-tropical hyperplanes <cit.>.
Since our applications deal with points of finite entry, we will define tropical geometric objects in ^n and n.
It also exploits the common set of ^max and ^min and we can make use of the vector space structure.
A min-tropical cone K⊂^n is a set closed under min-tropical linear combinations: (x+λ)∧ (y+μ)∈ K for all x,y∈ K and λ,μ∈.
The image of a min-tropical cone in n is called a min-tropically convex set.
A common example is the min-tropical hyperplane with apex v, which is the set H^min_v={x∈n:|argmin_j(x_j-v_j)|≥ 2}.
The max-tropical cones and max-tropically convex sets are defined similarly, replacing min by max in the previous definitions.
One can also see them as images of min-tropical cones and min-tropically convex sets under x↦ -x.
The min-tropical convex hull of two points a,b∈n will be denoted by [a,b]_min and is called the min-tropical segment between a and b.
We will also use the notation (a,b)_min=[a,b]_min∖{a,b} for the open min-tropical segment between a and b.
Similarly, we define [a,b]_max and (a,b)_max.
The min-tropical convex hull of a set A⊂n is the smallest min-tropically convex set containing A and we denote it by ^min(A).
It can be related to the max-tropical semiring by <cit.>.
For this we need to introduce the max-tropical sector S_i^max={x∈n:x_i≥ x_j ∀ j∈[n]}={x∈n:i∈argmax_j x_j}.
Then <cit.> says that x belongs to ^min(A) if and only if for each i∈[n] there exists a_i∈ A such that x∈ a_i+S_i^max.
For the max-tropical convex hull, simply swap min and max.
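As a small illustration of this criterion, the following NumPy sketch tests membership in the min-tropical convex hull of finitely many points; the tolerance handling is ours and the inputs are arbitrary representatives modulo the all-ones vector.

import numpy as np

def in_min_tropical_hull(x, points, tol=1e-9):
    """Covector/sector criterion: x lies in the min-tropical convex hull of `points`
    iff for every coordinate i there is some a with i in argmax_j (x_j - a_j)."""
    x = np.asarray(x, dtype=float)
    covered = set()
    for a in np.asarray(points, dtype=float):
        d = x - a
        covered |= set(np.flatnonzero(d >= d.max() - tol))
    return covered == set(range(len(x)))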
We say that a point a of a min-tropically convex set A is i-exposed if (a+S_i^min)∩ A={a}.
If a point is i-exposed for some i∈[n], then we simply call it exposed.
Since the order ≤ on ^n is strongly related to tropical convexity, we will focus on monotonic functions.
We say that a function f:X→, defined on a subset X of ^n, is increasing if for every x, y∈ X with x≤ y we have f(x)≤ f(y).
We call f strictly increasing if f(x)<f(y) whenever x≤ y and x≠ y.
For a,b∈^n and a≤ b, we denote by [a,b]_≤ the set of points x∈^n such that a≤ x≤ b and call it the order segment between a and b.
It can also be written as a box: [a,b]_≤=[a_1,b_1]×…×[a_n,b_n].
Its image in n is a polytrope, i.e. it is both min- and max-tropically convex <cit.>, which we call a box polytrope.
A particular case is presented in the following example.
Consider the asymmetric distance d_(a,b)=∑_i(b_i-a_i)-n·min_j(b_j-a_j) defined on n <cit.>.
We are interested in geodesic segments under this distance, which are portrayed in Figure <ref>.
This is different from the geodesic convexity discussed in <cit.> which focuses on the symmetric tropical distance.
For two points a,b∈n we define the (oriented) geodesic segment between a and b under d_ as [a,b]_:={x∈^n:d_(a,x)+d_(x,b)=d_(a,b)}.
The geodesic segment [a,b]_ is a (box) polytrope.
To see this, we point out that [a,b]_=(a+S_i^min)∩(b+S_i^max) where i is any index from _j(b_j-a_j); the equality can be also seen in Figure <ref>.
What is more, if we choose representatives a and b such that min_j(b_j-a_j)=0, then [a,b]_ is the image of [a,b]_≤ in n.
The min-tropical vertices of [a,b]_ are of the form v_j=b-(b_j-b_i+a_i-a_j)e_j=b-(b_j-a_j-min_ℓ(b_ℓ-a_ℓ))e_j for j∈[n].
The set [a,b]_ contains the ordinary segment [a,b] but also the min- and max-tropical segments between a and b.
What is more, for every c∈n the min-tropical segment between a and b is contained in [c,a]_∪[c,b]_.
To see the latter statement, we take arbitrary representatives modulo for a and b and show that a∧ b∈[c,a]_∪[c,b]_.
Let i∈argmin_j[(a_j∧ b_j)-c_j].
Without loss of generality, we can assume that a_i∧ b_i=a_i.
Thus, a∧ b∈ (c+S_i^min)∩(a+S_i^max)=[c,a]_.
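A small NumPy sketch of the asymmetric distance, the tightness test for membership in the geodesic segment, and the min-tropical vertices described above may be helpful; the tolerance and the function names are ours.

import numpy as np

def d_asym(a, b):
    """Asymmetric tropical distance: sum_i (b_i - a_i) - n * min_j (b_j - a_j)."""
    d = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
    return d.sum() - len(d) * d.min()

def on_geodesic(a, x, b, tol=1e-9):
    """x lies on the geodesic segment from a to b iff the triangle inequality is tight."""
    return abs(d_asym(a, x) + d_asym(x, b) - d_asym(a, b)) <= tol

def min_tropical_vertices(a, b):
    """Vertices v_j = b - (b_j - a_j - min_l(b_l - a_l)) e_j, as representatives modulo
    the all-ones vector; rows with j attaining the minimum coincide with b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = b - a
    verts = np.tile(b, (len(b), 1))
    verts[np.arange(len(b)), np.arange(len(b))] -= d - d.min()
    return verts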
The canonical coordinates of a point x∈n are the entries of the representative x̄∈^n defined by x̄ = x-(min_j x_j).
This is a representative of x modulo such that all its entries are non-negative and at least one entry is 0.
We say that K is a strictly min-tropically convex cone if K is a min-tropically convex cone and for every a,b∈ K such that a∧ b is different from a and b modulo , then a∧ b belongs to the interior of K.
We say that a subset of n is strictly min-tropically convex if it is the image of a strictly min-tropically convex cone under the canonical projection ^n→n.
A subset L of n is strictly min-tropically convex if all the points of the open min-tropical segment (a,b)_min belong to the interior of L, where a and b are distinct points in L.
Any strictly min-tropically convex set is a singleton or its closure coincides with the closure of its interior.
Moreover, all of its boundary points are exposed.
The first part results from Remark <ref>.
For the second part, consider v which is not exposed.
Then there exist p,q in the strictly min-tropically convex set such that v∈(p,q)_min.
According to the same remark, v is an interior point.
§ -STAR-CONVEX SETS
A -star-convex set with kernel v is a non-empty set K⊆n such that for every point w∈ K we have [v,w]_⊆ K.
We call K strictly -star-convex if [v,w]_∖{w} belongs to the interior of K for every w∈ K.
Since [v,w]_ contains the ordinary segment [v,w], we conclude that -star-convex sets are also star-convex in the ordinary sense.
We show now that -star-convex sets are min-tropically convex.
Any -star-convex set is min-tropically convex.
Let K be a -star-convex set with kernel v and a,b arbitrary points in K.
According to Remark <ref>, we have [a,b]_min⊆ [v,a]_∪ [v,b]_.
The latter set is contained in K due to its -star-convexity.
However, -star-convex sets might not be max-tropically convex.
For example, the image of the regular simplex Δ_n=conv{e_1,…,e_n} in n is -star-convex but not max-tropically convex.
One can find examples of -star-convex sets in Figure <ref>.
Picture (a) shows a min-tropical hyperplane H^min_v which is -star-convex with kernel v—the apex.
Picture (b) displays the unit balls for tropical L^p norms, which will be defined in Example <ref>.
They are nested increasingly with respect to p; the outer one corresponds to the tropical L^∞ norm and is the only one that is not strictly -star-convex.
One can recognize the triangle as the unit ball for the asymmetric tropical distance d_.
The min-tropical hyperplane with apex at the origin (the kernel of the -star-convex sets) is dotted.
Picture (c) shows a more complicated -star-convex sets.
This case is not pure dimensional, the tropically exposed points do not form a closed set.
Moreover, it is neither convex in the ordinary sense, nor strictly -star-convex.
Let K be a -star-convex set with kernel v such that K≠{v}.
Then K is strictly -star-convex if and only if K is strictly min-tropically convex and v is an interior point of K.
Firstly, assume that K is strictly -star-convex.
For every a,b∈ K the min-tropical segment [a,b]_min is a subset of [v,a]_∪[v,b]_.
Therefore, all of the points of [a,b]_min with the exception of a and b must be in the interior of K.
Hence, K is strictly min-tropically convex.
The fact that v is an interior point is clear from the definition and our assumption that K≠{v}.
Conversely, assume that K is strictly min-tropically convex and v is an interior point of K.
We consider w∈ K∖{v} and we show that all points of [v,w]_∖{w} are in the interior of K.
The result is clear for non-exposed points of [v,w]_ as we assumed K is strictly min-tropically convex.
Hence, let u be an exposed point of [v,w]_ distinct from w.
According to the discussion from Remark <ref>, u=w-(w_j-w_i)e_j where i∈_k w_k and j∉_k w_k.
Since (u+w)/2 belongs to the interior of the tropical segment [u,w]_min and K is strictly min-tropically convex, then (u+w)/2 is an interior point of K.
Thus, for small δ>0, the point c=(u+w)/2-δ e_i belongs to K.
However, u∈[v,c]_=S_i^min∩(c+S_i^max) as c-u=(w-u)/2-δ e_i=(w_j-w_i)e_j/2-δ e_i.
But u cannot be an exposed point of [v,c]_ as c-u is not parallel to a vector e_k for k∈[n] unless n=2.
Consequently, u must be an interior point of K from the strict min-tropical convexity of K, when n≥ 3.
For the case n=2, we could have noticed that the exposed points of [v,w]_ are v and w, so u can only be equal to v.
But v was already assumed to be interior.
The proof above shows that the assumption that v is an interior point of K is superfluous for the converse when n≥ 3.
If K is strictly -star-convex with kernel v, then any exposed point of K from v+S_i^min is i-exposed.
If a∈ v+S_i^min and it is not i-exposed, then there exists b∈(a+S_i^min)∩ K with b≠ a.
In particular, a∈[v,b]_∖{b}.
But the strict -star-convexity of K implies that a must be an interior point.
§ TROPICALLY QUASICONVEX FUNCTIONS
A function f:^n→ whose sub-level sets L_≤α(f):={x:f(x)≤α} are convex is called quasiconvex.
This is a purely geometric definition, but some other sources define them as functions satisfying f(λ x+(1-λ)y)≤max{f(x),f(y)} for every x,y∈^n and λ∈[0,1].
The latter can be more convenient in checking quasiconvexity.
See <cit.> for more details.
We will be interested in specific tropically quasiconvex functions.
Before we introduce them, we need some notation.
For a function γ:^n_≥ 0→ we associate the function γ̄:n→ defined by γ̄(x)=γ(x̄).
We recall that x̄=x-(min_i x_i) are the canonical coordinates of x.
We call a function f:n→ -star-quasiconvex with kernel v if f(x)=γ̄(x-v) for some increasing function γ:^n_≥ 0→.
Moreover, if γ is strictly increasing, we call f strictly -star-quasiconvex.
We will give a geometric interpretation of -star-quasiconvex in Theorem <ref>.
However, we prefer the definition above because it is easier to check in practice.
When γ is a monotonic norm <cit.>, f measures the distance to the kernel.
If v=, then f is a gauge; gauges are commonly used in convex analysis <cit.> and location science <cit.>.
Gauges are sometimes dubbed “asymmetric norms” as they satisfy all the properties of a norm with the exception that f(x) need not be equal to f(-x).
A famous class of monotonic norms are the L^p norms.
They give rise to -star-quasiconvex gauges whose expression is
γ_p(x) = (∑_i∈[n](x_i-min_j∈[n] x_j)^p)^1/p if p∈[1,∞), and γ_p(x) = max_i∈[n] x_i-min_j∈[n] x_j if p = ∞.
We call them tropical L^p norms.
They appeared in the work of Luo <cit.> under the name “B^p-pseudonorms”.
One can recognize the tropical L^∞ norm as the tropical norm defined in <cit.>.
The relationship to the L^∞ norm is stressed in <cit.>.
The tropical L^1 norm gives rise to the asymmetric tropical distance d_; this relationship is implicit in <cit.>.
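As a small illustration (ours, not part of the original text), the tropical L^p norms are straightforward to evaluate numerically; the following Python sketch implements γ_p directly from the formula above (the function name is hypothetical).

    import math

    def tropical_lp_norm(x, p):
        """Tropical L^p norm of x, following the displayed formula: it is computed
        from the non-negative differences x_i - min_j x_j, so it depends only on
        the class of x modulo adding a common constant to all coordinates."""
        m = min(x)
        diffs = [xi - m for xi in x]
        if p == math.inf:
            return max(diffs)
        return sum(d ** p for d in diffs) ** (1.0 / p)

    # Invariance under adding a constant to every coordinate:
    assert abs(tropical_lp_norm([0, 1, 3], 2) - tropical_lp_norm([5, 6, 8], 2)) < 1e-12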
The function γ̄ depends only on the values of γ on ∂^n_≥ 0, so we could have considered only ∂^n_≥ 0 as the domain of γ.
However, this would not increase the generality, since every (strictly) increasing function defined on ∂^n_≥ 0 can be extended to a (strictly) increasing function on ^n_≥ 0, according to the following lemma.
Every (strictly) increasing function γ:∂^n_≥ 0→ can be extended to a (strictly) increasing function γ̃:^n_≥ 0→.
Moreover, if γ is continuous, then the extension can also be made continuous.
Consider γ̃(x)=max_i∈[n]γ(x_-i,0_i)+∏_i∈[n]x_i.
Clearly, this is continuous if γ is, as being a composition of continuous functions.
Moreover, γ̃(x)=γ(x) for every x∈∂^n_≥ 0, due to the monotonicity of γ and the fact that x_1x_2… x_n=0 for x∈∂^n_≥ 0.
If x≤ y, then x_-i≤ y_-i for all i∈[n], where x_-i is obtained from x by removing the ith entry.
Therefore, γ(x_-i,0_i)≤γ(y_-i,0_i) for every i∈[n], which implies γ̃(x)≤γ̃(y) after using ∏_j x_j≤∏_j y_j.
In other words, γ̃ is increasing.
Moreover, if γ is strictly increasing and x≠ y we have two cases.
On the one hand, if y∈∂^n_≥ 0, then x∈∂^n_≥ 0 so γ̃(x)=γ(x)<γ(y)=γ̃(y).
On the other hand, if y∈^n_>0, then ∏_j x_j<∏_j y_j.
Using the last inequality with max_i∈[n]γ(x_-i,0_i)≤max_i∈[n]γ(y_-i,0_i), we obtain γ̃(x)<γ̃(y).
Accordingly, γ̃ is strictly increasing if γ is strictly increasing.
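To make the explicit extension used in this proof concrete, here is an illustrative Python sketch (ours, with hypothetical names); γ only needs to be defined on the boundary of the positive orthant.

    from math import prod

    def extend_increasing(gamma, x):
        """The extension from the lemma above:
        gamma_tilde(x) = max_i gamma(x with the i-th entry replaced by 0) + prod_i x_i,
        for x in R^n_{>=0}."""
        n = len(x)
        boundary_values = (gamma(tuple(0 if j == i else x[j] for j in range(n)))
                           for i in range(n))
        return max(boundary_values) + prod(x)

    # An increasing function on the boundary of R^3_{>=0} (there it equals max(z)):
    gamma = lambda z: max(z) - min(z)
    print(extend_increasing(gamma, (2.0, 1.0, 3.0)))  # 3.0 + 2*1*3 = 9.0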
The following result explains why the functions from Definition <ref> deserve the name “-star-quasiconvex”.
Let f:n→ be a continuous function.
Then f is (strictly) -star-quasiconvex if and only if all of its non-empty sub-level sets are (strictly) -star convex with the same kernel.
After an eventual translation, we can assume that the kernel is .
Firstly, assume f is -star-quasiconvex and let α be an arbitrary real number such that L_≤α(f) is non-empty.
Let γ:^n_≥ 0→ be increasing such that f(x)=γ̄(x).
Let w∈ L_≤α(f) and choose i∈[n] such that w∈ S_i^min.
Since γ is increasing, the points x∈^n satisfying ≤ x≤w belong to L_≤α(γ).
This set projects onto [,w]_ showing that [,w]_⊆ L_≤α(f).
Since w was selected arbitrarily, L_≤α(f) must be -star convex with kernel .
If f is strictly -star-quasiconvex, then the points satisfying ≤ x≤w different from w actually belong to L_<α(f).
Due to the continuity of f, this coincides with the interior of L_≤α(f).
This shows that L_≤α(f) is strictly -star-convex.
Conversely, assume that L_≤α(f) is -star-convex with kernel for every α≥ f().
Take γ:∂^n_≥ 0→ defined as γ(x)=f(x) for x∈∂^n_≥ 0.
Using Lemma <ref> it is enough to show that γ is increasing.
Let x and y arbitrary points of ∂^n_≥ 0 such that x≤ y.
The order segment [,y]_≤ projects onto [,y]_ which belongs to L_≤ f(y)(f).
Due to the -star-convexity of sub-level sets, we obtain γ(x)=f(x)≤ f(y)=γ(y).
If we have strict -star-convexity, then [0,y]_∖{y} is contained in the interior of L_≤ f(y)(f) which coincides to L_<f(y)(f).
Hence, we obtain γ(x)<γ(y) for this case.
The continuity of f is relevant only for strictly -star-quasiconvex functions.
Without continuity, the strict -star-convexity of the sub-level sets alone is not sufficient for f to be strictly -star-quasiconvex.
This is similar to the case of ordinary quasiconvex functions; cf. <cit.> and <cit.>.
We will see that convexity, in the ordinary sense, will also be helpful for our applications.
We give a simple criterion for checking when a -star-quasiconvex function is convex.
If γ is increasing and (strictly) convex, then γ̄ is (strictly) convex.
Let x,y∈^n and λ∈[0,1].
We have min_j(λ x_j+(1-λ) y_j)≥λmin_i x_i+(1-λ)min_i y_i as λ,1-λ≥ 0.
Hence, λ x+(1-λ) y-min_j(λ x_j+(1-λ) y_j)≤λ( x-min_i x_i)+(1-λ)( y-min_i y_i).
Since γ is convex and increasing, we obtain
γ̄(λ x+(1-λ) y) ≤γ(λ( x-min_i x_i)+(1-λ)(y-min_i y_i))
≤λ γ(x-min_i x_i)+(1-λ) γ(y-min_i y_i)
=λγ̄(x)+(1-λ)γ̄(y).
If γ is strictly convex, λ∈(0,1) and x≠ y modulo , then the second inequality from (<ref>) is strict, so γ̄(λ x+(1-λ) y)<λγ̄(x)+(1-λ)γ̄(y).
Thus, γ̄ is strictly convex if γ is strictly convex.
§ TROPICALLY CONVEX LOCATION PROBLEMS
We will consider some input points v_1,…,v_m in n.
We measure the distance (or dissimilarity) from x∈n to a point v_i using a -star-quasiconvex function f_i having kernel v_i.
We consider increasing functions γ_i:^n→ such that f_i(x)=γ_i(x-v_i).
Without loss of generality, we assume γ_i()=0, so that all dissimilarities are non-negative.
The purpose of location problems is to find a point as close (or similar) as possible to the input points, depending on some criterion; usually, the optimal location is a minimum of an objective function h:n→.
The function h is constructed using an increasing function g:^m_≥ 0→, which aggregates the distances to the input points.
Formally, we define h(x)=g(f_1(x),…,f_m(x)).
Since f_i measures the distance or dissimilarity from x to v_i and g is increasing, the minima of h record a global closeness to the input points.
In most studied location problems, we would have a distance d on n and set f_i(x)=d(x,v_i).
Common choices of g are g(x)=x_1+…+x_m, for the median or Fermat–Weber problem, g(x)=max_i∈[m]x_i for the center problem <cit.>, or g(x)=x_1^2+…+x_m^2, for defining the Fréchet mean <cit.>.
Nevertheless, we will allow g to be an arbitrary increasing function.
We will assume that h has a minimum, which happens, e.g., when h is lower semi-continuous.
Let h be as above.
Then there is a minimum of h belonging to ^max(v_1,…, v_m).
Moreover, if g is strictly increasing and at least one of f_1,…,f_m is strictly -star-quasiconvex, then all the minima of h are contained in ^max(v_1,…,v_m).
Consider x∉^max(v_1,…,v_m) which is a minimum of h.
Thus there exists k∈[n] such that k∉_j(x_j-v_ij) for all i∈[m].
Set δ_i:=x_k-v_ik-min_j(x_j-v_ij) for all i, and δ=min_iδ_i, which is strictly positive by the consideration of k.
Note that f_i(x-δ e_k)=γ_i(x-v_i-δ e_k-min_j(x_j-v_ij))≤γ_i(x-v_i-min_j(x_j-v_ij))=f_i(x) for all i∈[m].
Hence h(x-δ e_k)≤ h(x).
Note that the inequality above is strict if g and some γ_ℓ are strictly increasing.
Indeed, in that case, we must have f_ℓ(x-δ e_k)<f_ℓ(x), so we use the strict increase of g in the ℓth entry.
That would contradict the optimality of x, so the second statement of the theorem holds.
For the first statement, we can only infer that x-δ e_k is also a minimum of h.
Hence, we can find an optimum of h in ^max(v_1,…,v_m) by moving x in directions -e_k for indices k as above.
To be more precise, we collect in D(x) the possible elementary descent directions from x; formally D(x):=⋂_i∈[m]([n]∖_j∈[n](x_j-v_ij)).
Notice that k∈ D(x), but k∉ D(x-δ e_k).
Moreover, D(x-δ e_k)⊊ D(x), as the functions only increase by our move in a descent direction.
Thus, replacing x by x-δ e_k, we find a minimum with smaller D(x).
We can repeat the procedure to construct a minimum x^⋆ of h with D(x^⋆)=∅.
The last condition is equivalent to x^⋆∈^max(v_1,…,v_m) due to <cit.>.
The regions on which f_i behaves like a monotone function are induced by the min-tropical hyperplane based at v_i.
These hyperplanes define the max-tropical polytope generated by the input points, which explains why we look at the max-tropical convex hull instead of the min analogue.
The following lemma presents cases when there is a unique optimum location.
We recall that a gauge γ is called strictly convex if γ(λ x+(1-λ)y)<1 for every λ∈(0,1) and distinct x,y∈n with γ(x)=γ(y)=1; note that strictly convex gauges are nevertheless not strictly convex functions.
Assume that g,f_1,…,f_m are convex, g is strictly increasing, and at least one of the following conditions holds:
a) at least one f_i is strictly convex; or
b) all f_i are strictly convex gauges and the points v_1,…, v_m are not collinear.
Then h is strictly convex.
In particular, it has a unique minimum.
Consider arbitrary distinct points x, y∈^n/ and a scalar λ∈(0,1).
For case a), we have f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i).
Since g is convex and strictly increasing and the functions f_j convex, we obtain
h(λ x+(1-λ) y)<λ h(x)+(1-λ)h(y).
So h must be strictly convex.
For case b), at least one of the points v_i is not on the line through x and y.
Then x-v_i and y-v_i are not parallel, and the strict convexity of the unit ball defined by f_i implies that f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i).
The rest of the proof is identical to case a).
§.§ Examples
Here we review the tropical location problems from the literature that fall into our framework, i.e. those for which an optimum belongs to the tropical convex hull of the input.
[Tropical Fermat–Weber and Fréchet problems]
To the best of our knowledge, the first one-point location problems in tropical geometry are proposed by Lin et al. <cit.>.
They suggest the study of Fermat–Weber points and Fréchet means under the symmetric tropical distance d_.
The goal was to relate them to tropical convexity for applications in phylogenetics.
However, they noticed that tropical Fermat–Weber points might lie outside the tropical convex hull of the input points, leading to medians that cannot be interpreted easily in biological applications <cit.>.
Theorem <ref>, in contrast, says that it is possible to find an optimum in the tropical convex hull.
This was already noticed for the tropical Fermat–Weber points <cit.> but it was unknown, until now, for tropical Fréchet means.
[Tropical center]
Consider the case f_i(x)=d_(v_i,x) and g(y)=max(y_1,…,y_m).
This can be interpreted as the center of the minimum max-tropical L^1 ball enclosing the points v_1,…,v_m.
The tropical center appears in <cit.>, but the details are omitted.
If we choose representatives of the input points in ={x∈^n:x_1+…+x_n=0}, the optimum can be obtained by solving the linear program:
minimize n· t
subject to v_ij - x_j ≤ t, for i∈[m] and j∈[n],
x_1 + … + x_n = 0.
Note that the x-coordinates of the optimal solutions are equal, modulo , to the x-coordinates of the linear program
minimize n· t+∑_j=1^n x_j
subject to v_ij - x_j ≤ t, for i∈[m] and j∈[n].
Let (t^⋆,x^⋆) be an optimal solution of (<ref>).
For any solution of (<ref>) we have t+x_j≥max_i∈[m]v_ij=:V_j.
In particular, the objective is minimized when we actually have equality t^⋆+x^⋆_j=V_j for every j; otherwise we could replace x^⋆ by some x^⋆-ε e_j and decrease the objective function, contradicting optimality.
This implies x^⋆ = V modulo ; in particular, the solution is unique in n.
Even if we do not have g strictly increasing, the uniqueness and Theorem <ref> ensure that the optimum is in the tropical convex hull.
However, this could have been noticed from the closed form V=⋁_i v_i for v_1,…,v_m∈.
[Transportation problems]
Consider λ_1,…,λ_n>0 and (λ) the simplex in n whose vertices are e_i/λ_i.
Then γ_(λ)(x)=∑_iλ_i x_i-(∑_iλ_i)min_j x_j
is the gauge on n whose unit ball is (λ).
The (weighted) Fermat–Weber problem of minimizing ∑_i∈[m]w_iγ_(λ)(x-v_i) is equivalent to a transportation problem, and every transportation problem can be reduced to this case; to see this, write it as a linear program after scaling the weights w_i so that ∑_i w_i=∑_j λ_j (this change does not influence the optimum).
This was firstly noticed in <cit.>, where the authors focused on the case λ_1=…=λ_n.
The corresponding optimum is called a tropical median in the work cited.
The optimal point is called a λ-splitter by Tokuyama and Nakano <cit.>, but no metric interpretation was mentioned.
The authors gave a condition for partitioning the space into n regions in an equal fashion, with weights coming from λ and w; this can be seen as a reinterpretation of the first-order optimality condition for the corresponding Fermat–Weber problem.
As a λ-splitter, it appeared in statistics <cit.> and as a particular case of Minkowski partition problems <cit.>.
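As an illustration only (not from the paper; the function name is ours), the gauge γ_(λ) is immediate to evaluate from the formula above, and with λ_1=…=λ_n=1 it reduces to the tropical L^1 norm.

    def simplex_gauge(x, lam):
        """Gauge whose unit ball is the simplex with vertices e_i/lam_i:
        sum_i lam_i * x_i - (sum_i lam_i) * min_j x_j."""
        return sum(l * xi for l, xi in zip(lam, x)) - sum(lam) * min(x)

    # With lam = (1, ..., 1) this is the tropical L^1 norm.
    print(simplex_gauge([0, 1, 3], [1, 1, 1]))  # 4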
[Locating tropical hyperplanes]
The tropical hyperplanes are parametrized by ^n/ by their identification with their apex.
Moreover, we have d_(a,H_x^max)=(x-a)_(2)-(x-a)_(1).
For a vector y, we denote by y_(k) the kth smallest entry, also known as the kth order statistic.
Note that, as a function of the apex x, the aforementioned distance is -star-quasiconvex with kernel a; the easiest way to see this is to note that the second order statistic is an increasing function.
Therefore, our general location problems cover the case of locating tropical hyperplanes.
The best-fit tropical hyperplane with L^1 error, i.e. g is the L^1 norm, was considered by Yoshida, Zhang, and Zhang as part of tropical principal component analysis <cit.>.
The case of L^∞ error was considered by Akian et al. <cit.> for applications to auction theory and called tropical linear regression.
They also show that the problem is polynomial-time equivalent to mean-payoff games <cit.> and, using d_(a,H_x^max)=d_(x,H_a^min), that it is dual to the problem of finding the largest inscribed ball in the tropical convex hull of the input points <cit.>.
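For illustration (ours, not the paper's), the distance to a max-tropical hyperplane is just a difference of the two smallest entries of x-a, as in the formula above; a minimal Python sketch:

    def dist_to_max_tropical_hyperplane(a, x):
        """Distance from the point a to the max-tropical hyperplane with apex x,
        computed as the difference of the two smallest entries of x - a."""
        d = sorted(xi - ai for xi, ai in zip(x, a))
        return d[1] - d[0]

    # Zero exactly when the minimum of x - a is attained at least twice,
    # i.e. when a lies on the hyperplane with apex x:
    print(dist_to_max_tropical_hyperplane((0, 0, 1), (1, 1, 3)))  # 0
    print(dist_to_max_tropical_hyperplane((0, 0, 0), (1, 3, 3)))  # 2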
To end this subsection, we compute the optimal location from the examples above for specific input points.
We consider the points from <cit.> which are given by the columns of the matrix
V = [ 0 1 3 2; 1 0 2 3; 1 1 0 0; ].
For this input, there is a unique tropical Fréchet point, (1,1,0), but the set of tropical Fermat–Weber points is a hexagon, marked with grey in Figure <ref>.
We remark that V has two axes of symmetry and (1,1,0) is their intersection.
The point (1,1,0) is also the tropical center of V, while the tropical median is (0,0,0).
The latter point is the also the unique apex of the best-fit tropical hyperplane with L^1 error of <cit.>.
It is also a solution of the tropical linear regression, but not the unique one.
The apices of the best-fit tropical hyperplanes with L^∞ error are of the form (λ,λ,0) with λ≤ 1 and their set is pictured with green in Figure <ref>.
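As an illustrative check (ours, not in the original text), the tropical center (1,1,0) of these four points can be recovered numerically from the closed form V=⋁_i v_i discussed in the tropical center example above; the helper name below is hypothetical.

    def tropical_center(points):
        """Choose sum-zero representatives and take the componentwise maximum;
        the result represents the tropical center, unique modulo adding constants."""
        n = len(points[0])
        normalized = [[xi - sum(v) / n for xi in v] for v in points]
        return [max(col) for col in zip(*normalized)]

    pts = [(0, 1, 1), (1, 0, 1), (3, 2, 0), (2, 3, 0)]  # columns of the matrix V
    c = tropical_center(pts)
    print([round(ci - c[-1], 10) for ci in c])  # [1.0, 1.0, 0.0]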
§.§ Regularization
In some cases, we cannot expect g to be strictly increasing or all the dissimilarity functions f_i to be strictly -star-quasiconvex.
Hence, a minimization algorithm might return a point outside the max-tropical convex hull of the input points, when there are multiple solutions.
In this subsection, we show how we could try to arrive at a solution belonging to ^max(v_1,…,v_m) through a regularized formulation.
The idea of regularization is to consider a small parameter λ>0 and a nicely behaved function f_m+1:n→_≥ 0 and try to solve the optimization problem
minimize g(f_1(x),…,f_m(x))+λ f_m+1(x).
For our purposes, f_m+1 is nicely behaved if it is strictly -star-quasiconvex with a kernel from ^max(v_1,…,v_m).
An easy choice for v is the tropical center from Example <ref>.
This is also a location problem with g_λ:^m+1_≥ 0→ given by g_λ(x_1,…,x_m,x_m+1)=g(x_1,…,x_m)+λ x_m+1, where the optimality criterion is the function h_λ:n→ given by h_λ(x)=g_λ(f_1(x),…,f_m+1(x)).
Note that g_λ is strictly increasing in the (m+1)-st entry for every λ>0.
Checking more carefully the proof of Theorem <ref>, the second statement holds if f_ℓ is strictly -star-quasiconvex and g strictly increasing in its ℓ-th entry.
We use this property for the regularization.
Therefore, we obtain the following direct consequence of Theorem <ref>.
For every λ>0, all the minima of h_λ lie in ^max(v_1,…,v_m).
The influence of the term f_m+1 decreases as λ goes to 0.
If the functions are regular enough, we expect a collection of optima x^⋆_λ of h_λ to converge to an optimum of h as λ goes to 0.
In fact, x^⋆_λ will be an optimum of h for λ sufficiently small if h is polyhedral convex and f_m+1 is Lipschitz continuous.
If h is polyhedral convex and f_m+1 is a convex function with sub-linear growth, then there exists λ_0>0 such that all minima of h_λ are also minima of h for every λ<λ_0.
The proof is quite technical using the differential theory from convex analysis so it is given in the appendix.
We stress that Proposition <ref> can be useful for studying the tropical Fermat–Weber problem from <cit.>.
Without regularization, it has undesirable behaviour for applications to biology; cf. <cit.>.
§ LOCATION PROBLEMS WITH TROPICALLY CONVEX SITES
Location problems can appear also when facilities are regions of the ambient space and not only points.
Here, we consider such a generalization where the sites are tropically convex sets.
In the previous section, we used different distances to the input points.
Here, we will measure our dissimilarities in a uniform way, by fixing an increasing function γ:^n_≥ 0→ and considering d_γ(x,y)=γ̄(y-x).
We then say that d_γ is -star-quasiconvex; if γ is strictly increasing, we say that d_γ is strictly -star-quasiconvex.
This allows a clear definition of a distance from a region to a point: d_γ(A,x):=inf_y∈ Ad_γ(y,x).
For a closed max-tropical cone K⊆^n we define the projection π_K:^n→ K as π_K(x)=max{y∈ K:y≤ x}.
We note that π_K(x+λ)=π_K(x)+λ for every x∈^n and λ∈, so it induces a well-defined function π_K/:n→ K/ called the tropical projection onto the max-tropically convex set K/.
The following lemma gives an explicit formula for the tropical projection and it characterizes it as a closest point under d_γ.
We omit the proof, as it is a classical result, shown when γ is the maximum norm in <cit.> and for a general tropical L^p norm in <cit.>.
Let A be a closed max-tropically convex set.
Then the tropical projection π_A(x) of a point x has the entries
π_A(x)_i=max_a∈ A(a_i+min_j∈[n](x_j-a_j)).
Moreover, d_γ(A, x)=d_γ(π_A(x), x) and π_A(x) is the unique point whose distance to x equals d_γ(A,x) if d_γ is strictly -star-quasiconvex.
In fact, the maximum expression of the tropical projection from Lemma <ref> can be taken over the extremal points, in the case of tropical polytopes <cit.>.
A similar result seems plausible for general closed max-tropically convex sets, but the form above is sufficient for our purposes.
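To make the projection formula concrete, here is an illustrative Python sketch (ours, with hypothetical names) for the max-tropical polytope generated by finitely many points, taking the maximum over the generators as discussed above.

    def tropical_projection(generators, x):
        """pi_A(x)_i = max over the generators a of A of ( a_i + min_j (x_j - a_j) )."""
        n = len(x)
        return [max(a[i] + min(x[j] - a[j] for j in range(n)) for a in generators)
                for i in range(n)]

    gens = [(0, 1, 1), (1, 0, 1)]
    print(tropical_projection(gens, (0, 0, 5)))  # [0, 0, 0]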
From now on, our given sites are closed max-tropically convex sites A_1,…,A_m in n.
As in Section <ref>, the objective function is h(x)=g(d_γ(A_1,x),…,d_γ(A_m,x)), where g:^m_≥ 0→_≥ 0 is increasing.
There exists a minimum of h lying in the tropical convex hull of the input ^max(A_1∪…∪ A_m).
Moreover, if g and γ are strictly increasing, then all the minima of h lie in ^max(A_1∪…∪ A_m).
If x∉^max(A_1∪…∪ A_m), then <cit.> entails the existence of an index ℓ∈[n] such that min_j≠ℓ(x_j-a_j)<x_ℓ-a_ℓ for every a∈^max(A_1∪…∪ A_m).
Since A_1,…,A_m are closed sets, then there exists an open ball around x not intersecting the union of these sets.
Thus, for δ>0 sufficiently small and y=x-δ e_ℓ we have min_j(y_j-a_j)=min_j(x_j-a_j) for every a∈^max(A_1∪…∪ A_m).
Therefore, equation (<ref>) implies π_A_i(y)=π_A_i(x) for all i∈[m].
Note that y-π_A_i(y)=x-π_A_i(x)-δ e_ℓ≤ x-π_A_i(x).
Since γ is increasing, we have d_γ(A_i,y)=γ(y-π_A_i(y))≤γ(x-π_A_i(x))=d_γ(A_i,x) for every i∈[m].
Moreover, if γ is strictly increasing we get d_γ(A_i,y)<d_γ(A_i,x).
In other words, going from x in the direction -e_ℓ we obtain a decrease in all the distances d_γ(A_i,x); in particular, a decrease of h.
Using this observation, the rest of the proof is identical to the proof of Theorem <ref>.
§ TROPICALLY CONVEX CONSENSUS METHODS
In this section, we focus on applications to phylogenetics—the study of evolutionary history of species <cit.>.
The information is represented as an evolutionary tree, or phylogeny: a tree whose leaves are labeled by the names of the species.
In this paper, we will deal only with trees that encode the evolution from a common ancestor and possess a molecular clock.
To be more formal, we have a finite set containing the names of the species and a rooted tree whose leaves are in bijection with ; the root corresponds to the most recent ancestor of all the species into consideration.
Time is represented by positive weights on the edges, which gives a way to measure distances between nodes in the tree.
What is more, we assume that the distance from the root to any leaf is the same; this means that the same amount of time passes between the most recent common ancestor (MRCA) of all species and any element of .
Such trees are called equidistant.
To a rooted phylogeny T we associate a distance matrix D∈^× where the entry D_ij represents the distance between the leaves labelled i and j in T.
It is known that T is equidistant if and only if D is ultrametric <cit.>, i.e.
D_ij≤max(D_ik,D_kj) ∀ i,j,k∈.
Hence, we will not distinguish between equidistant trees and ultrametric matrices in the rest of the paper.
Because D is symmetric and has zero entries on the diagonal, we can see it as a point of ^2.
We define the tree space _ as the image of the space of all ultrametrics in 2.
Due to <cit.>, this is homeomorphic to the BHV space defined in <cit.>.
We note that the ultrametric condition (<ref>) implies that _ is max-tropically convex.
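As a quick illustration (ours, not part of the paper), checking the ultrametric condition for a symmetric distance matrix with zero diagonal is a direct loop over triples:

    from itertools import combinations

    def is_ultrametric(D):
        """Check the displayed condition D_ij <= max(D_ik, D_kj) for all i, j, k."""
        n = len(D)
        return all(D[i][j] <= max(D[i][k], D[k][j])
                   for i, j in combinations(range(n), 2)
                   for k in range(n))

    D = [[0, 2, 4],
         [2, 0, 4],
         [4, 4, 0]]   # an equidistant tree on three leaves
    print(is_ultrametric(D))  # True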
We are interested in consensus methods: given as input multiple phylogenies on , find an evolutionary tree on being as similar as possible to the input trees.
This is a common problem in evolutionary biology, as multiple distinct trees arise from statistical procedures or from the many methods for reconstructing phylogenies from different data; see <cit.> or <cit.> for details.
A consensus method can be seen as a location statistic in the tree space.
Since the latter is max-tropically convex, there were many attempts to exploit this geometric structure to obtain relevant information <cit.>.
We are interested in tropically convex consensus methods, defined in <cit.>.
A consensus method c is tropically convex if c(T_1,…,T_m)∈^max(T_1,…,T_m)
for every m≥ 1 and T_1,…,T_m∈_.
The location problems discussed in the previous section give rise to tropically convex consensus methods.
Note that we do not need to impose the restriction that the optimum lie in _.
It is automatically satisfied due to the tropical convexity of _ and Theorem <ref>.
This observation ensured that tropical median consensus methods are fast to compute <cit.>.
Tropically convex consensus methods are particularly interesting because they preserve relationships from the input trees.
To explain this more clearly, we firstly need some terminology: two subsets of taxa A,B form a nesting in T, and we denote it by A<B, if the MRCA of A in T is a strict descendant of the MRCA of A∪ B.
If D is the ultrametric associated to T, then we can write the condition as
max_i,j∈ AD_ij<max_k,ℓ∈ A∪ BD_kℓ.
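For illustration only (ours, with hypothetical names), this nesting condition can be tested directly on an ultrametric matrix:

    def displays_nesting(D, A, B):
        """True if the ultrametric D displays the nesting A < B, i.e.
        max_{i,j in A} D_ij < max_{k,l in A union B} D_kl."""
        AB = list(A) + list(B)
        return (max(D[i][j] for i in A for j in A)
                < max(D[k][l] for k in AB for l in AB))

    D = [[0, 2, 4],
         [2, 0, 4],
         [4, 4, 0]]
    print(displays_nesting(D, A=[0, 1], B=[2]))  # True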
We say that a consensus method c is Pareto on nestings if c(T_1,…,T_m) displays the nesting A<B whenever A<B appears in all input trees T_1,…,T_m.
The consensus method c is called co-Pareto on nestings if c(T_1,…,T_m) does not display the nesting A<B unless A<B appears in some input tree T_i.
These conditions are desirable for consensus methods <cit.>.
It is useful to see these properties from a geometric point of view.
Consider _(A<B) the subset of _ consisting of trees displaying the nesting A<B; it is described by (<ref>).
We also make the notation _(A≮B) for the complement _∖_(A<B), which is the set of trees not displaying A<B.
Then c is Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A<B) we have c(T_1,…,T_m)∈_(A<B).
We also note that c is co-Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A≮B) we have c(T_1,…,T_m)∈_(A≮B).
The next result, an improved version of <cit.>, shows that tropically convex consensus methods have both the Pareto and the co-Pareto property.
Thus, we have a large class of consensus methods satisfying both properties.
This is remarkable, as no such consensus method is listed in the surveys <cit.>.
Tropically convex consensus methods are Pareto and co-Pareto on nestings.
For every nesting A<B, the set _(A<B) is max-tropically convex as (<ref>) describes an open max-tropical halfspace.
Whence, Remark <ref> implies that tropically convex consensus methods are Pareto on nestings.
Similarly, the set _(A≮B) is max-tropically convex as it is the intersection of _ with the tropical halfspace defined by the inequality max_i,j∈ AD_ij≥max_k,ℓ∈ A∪ BD_kℓ.
Remark <ref> implies also the co-Pareto property.
The Pareto property gives a unanimity rule: nestings present in all the trees are also present in the consensus.
One may wonder if this rule can be relaxed as there exist (super)majority-rule consensus trees commonly used for the unweighted case; they are denoted M_ℓ by Felsenstein in <cit.>.
Indeed, one can find such a rule for tropical medians <cit.>.
A nesting appears in the tropical median consensus tree if it appears in a proportion of the input trees greater than 1-1/\binom{n}{2}.
Moreover, a nesting will not appear in the tropical median consensus tree if it occurs in a proportion less than 1/\binom{n}{2} of the input trees.
The tropical median corresponds to the Fermat–Weber problem whose gauge distance is given by the regular simplex.
Therefore, the essential hull of a finite set A defined in <cit.> coincides with the max-tropical convex hull of A.
Then the conclusion follows from <cit.> and Remark <ref>, as in the proof of Proposition <ref>.
Note that a consensus method is not well-defined when there are multiple minimum points.
Most problematic is the situation when different tree topologies are possible, in which case it is unclear how to resolve incompatible optimum trees.
Yet, this is not the case when the set of optimal locations is convex <cit.>: separating the tree space into cones of trees sharing a tree topology gives rise to a convexly disjoint collection in the sense of <cit.>.
Nonetheless, the aforementioned proposition applies when the set of all optima in n2 is contained in _; this is guaranteed for strictly -star-quasiconvex dissimilarities.
Otherwise, one might still have problems in defining consistently a consensus method; see <cit.> for the symmetric tropical Fermat–Weber problem.
For this reason, one has to consider the regularized versions discussed in §<ref>.
§.§ Towards tropical supertrees
Supertrees are a generalization of consensus trees to the case when the given input consists of phylogenies on different sets of taxa.
This can be also interpreted as a missing-data problem.
In other words, we are given as input phylogenetic trees T_1,…,T_m whose leaves are labelled by _1,…,_m, respectively.
A supertree method returns a tree whose leaf set is =⋃_i_i, summarizing the information from T_1,…,T_m.
We use the idea of Grindstaff and Owen to represent trees with missing taxa by the set of all possible trees on all the taxa <cit.>.
Their method is similar to a location problem with BHV distance using an L^∞ error.
We note that another approach for supertrees in a tropical setting was proposed in <cit.>; the authors relied on imputation to reduce supertrees to consensus trees.
So we replace the input tree T_i by the tropically convex set __i^-1(T_i) where _:_→_ is a projection obtained by keeping the entries of an ultrametric matrix corresponding only to rows and columns from ⊂.
We will be interested in tropically convex supertrees, i.e. the output belongs to ^max(⋃_i∈[m]__i^-1(T_i)).
According to Theorem <ref>, we may obtain such methods by employing strictly -star-quasiconvex dissimilarity measures.
Tropically convex supertrees are also Pareto on nestings.
We record this fact, whose proof is similar to Proposition <ref>.
In particular, it motivates the search for tropically convex supertree methods.
Tropically convex supertree methods are Pareto on nestings.
A co-Pareto property is no longer possible, as relationships between groups of taxa might not appear in all input trees.
Nonetheless, there cannot appear conflicting relationships.
We remark that we did not give a well-defined supertree method.
The problem arises from the fact that the optima could have different tree topologies.
For example, an extreme case is when there are two trees T_1 and T_2 on disjoint set of taxa.
There are clearly many different ways to combine the information.
Therefore, extra assumptions must be made.
§ CONCLUSION AND FUTURE PERSPECTIVES
We provided a large class of location estimators whose value lies in the max-tropical convex hull of the input with the purpose of obtaining consensus methods with good properties.
A first direction would be to develop methods for computing the optima efficiently.
On the other hand, searching for extra properties of specific location problems could be helpful for applications; more details are provided below.
§.§ Comparison to consensus methods based on the BHV distance
We have exploited tropical convexity to obtain consensus methods with good properties.
More precisely, we focused on (co-)Pareto properties that can be interpreted in a purely geometric way.
The associated spaces are also max-tropically convex so the aforesaid properties are immediate for the tropical approach.
Although the BHV geometry of the tree space is more studied than its tropical counterpart, there are few consensus methods proposed for this geometry.
A first proposal was given in the pioneering paper by Billera, Holmes, and Vogtmann <cit.>, but a few drawbacks were already pointed out: e.g., doubling every input tree changes the output.
An approach based on Fréchet means was proposed by Miller et al. <cit.> and Bačák <cit.>.
It is also Pareto and co-Pareto on splits <cit.>, but the result is more intricate.
The same properties hold for Fermat–Weber and center problems in the BHV space <cit.>.
The approach is again analytical, but similar for all the cases.
One could try a geometric approach, as in the tropical case, as it could lead faster to identification of self-consistent properties for consensus methods.
§.§ Majority rules in consensus methods
Proposition <ref> provides a supermajority rule for tropical median consensus with respect to nestings.
This can be a step towards understanding the relationship between median weighted trees and the widely used majority-rule consensus for unweighted trees.
In fact, the majority-rule consensus can be interpreted as a median <cit.>, but it is unclear if this can be extended to weighted phylogenies.
However, Proposition <ref> provides a large threshold for a majority rule in the case of tropical median consensus trees, indicating that they are quite conservative.
This seems to be owing to the low breakdown point of the tropical median caused by asymmetry; check <cit.> for more details.
Therefore, an investigation of location estimators with higher breakdown point could provide a better connection to the majority-rule consensus.
§.§ Compositional data
A different application of our location estimators could be to compositional data <cit.>.
That is, the data can be seen as points in a simplex; our methods would be applied to the centered logratio transform of the input.
Note that -star-quasiconvex sets are defined with respect to special directions, which correspond to the vertices of the simplex.
What is more, the motivation of Tokuyama and Nakano in studying algorithms for the transportation problem came from splitting the points of a simplex into multiple regions <cit.>.
Moreover, Nielsen and Sun analyzed clustering methods with the symmetric tropical distance on compositional data showing a better performance than other more commonly used dissimilarity measures <cit.>.
These results suggest that -star-quasiconvex dissimilarities could be useful in compositional data analysis.
§.§ Acknowledgments
I am indebted to Michael Joswig for discussing different aspects of this paper.
I thank Günter Rote for bringing <cit.> to my attention.
The author was supported by Facets of Complexity (GRK 2434, project-ID 385256563).
§ APPENDIX A: CONVEX ANALYSIS ON N
We state and prove a slightly more general form of Proposition <ref> and then we put a Euclidean structure on n to show how we can obtain a quantitative result for the regularized version of the tropical Fermat–Weber problem.
§.§ The proof of Proposition <ref>
We will prove the result in a finite-dimensional real vector space X.
We will equip it with an inner product ⟨·,·⟩ which gives an isomorphism X^*≅ X.
In this way, we can see the subgradients of a convex function as elements of X.
We recall that the subdifferential of a convex function f:X→ at a point x is the set
∂ f(x)={c∈ X:f(y)-f(x)≥⟨ c,y-x⟩ ∀ y∈ X}.
It will be used to characterize the minima of f through the first-order minimality condition: x is a minimum of f if and only if ∈∂ f(x).
We refer to the book by Rockafellar <cit.> for more details on convex analysis.
We are interested in optima of regularized versions of h of the form h+λ f with f having linear growth.
More specifically, we care about f being Lipschitz continuous, i.e. there exists a constant L>0 such that |f(x)-f(y)|≤ L‖x-y‖ for every x,y∈ X, where ‖·‖ is any norm on X.[We assumed that X is finite-dimensional, so every two norms are equivalent. Thus, the definition does not depend on the specific norm. Nevertheless, the constant L depends on ‖·‖.]
As a last definition, we say that h is polyhedral convex if it is the maximum of finitely many affine functions on X.
Now we can state and prove a slight generalization of Proposition <ref>.
Let h:X→ be a polyhedral convex function and f:X→ convex and Lipschitz continuous.
Then there exists a constant λ_0>0 such that the minima of h+λ f are also the minima of h for every λ∈(0,λ_0).
Consider an arbitrary minimum m_λ of h+λ f.
The first-order optimality condition entails ∈∂ h(m_λ)+λ∂ f(m_λ).
What is more, since f is Lipschitz continuous, <cit.> yields the existence of a bounded set B such that ∂ f(x)⊂ B for all x∈ X.
If ∉∂ h(x), then ∉∂ h(x)+λ B for λ sufficiently small, as ∂ h(x) is closed.
We also know that there are finitely many values for ∂ h(x), as we assumed h is a polyhedral convex function.
Accordingly, there exists λ_0>0 such that ∉∂ h(x)+λ B for every λ∈(0,λ_0).
The last relation implies that ∈∂ h(m_λ) if λ<λ_0, which is equivalent to m_λ being a minimum of h.
If we know the bounded set B from the proof of Proposition <ref>, then we can set λ_0=sup{λ>0:∉ P+λ B, ∀ P∈}
where is the set of all possible values of ∂ h(x) such that ∉∂ h(x).
This supremum is positive, as 𝒫 is a finite collection of closed convex sets.
If h is a gauge γ, then <cit.> says that we can set B={x∈ X:γ^∘(x)≤ r}=:rB_γ^∘ for some r>0 where γ^∘(y):=sup_x:γ(x)≤ 1⟨ x,y⟩ is the dual gauge.
Hence, P+λ B represents the set of points at distance at most λ r from P measured by the distance d_γ^∘ induced from γ^∘, i.e. d_γ^∘(x,y)=γ^∘(y-x).
Consequently, we have λ_0=inf_P∈d_γ^∘(P,)/r.
§.§ Euclidean structure on n
We conclude by explaining how we can put a Euclidean structure on n in a natural way.
The idea is to identify the tropical projective torus with a hyperplane of ^n with the regular Euclidean structure.
Using this idea, by factoring with , one can identify n with the orthogonal subspace to , which is ={x∈:x_1+…+x_n=0}.
This identification is natural as we obtain the same subdifferentials of a convex function f:n→ as in the case when we consider it as a function on ^n such that f(x+λ)=f(x) for each x∈^n and λ∈.
Having fixed this structure, we search for λ_0 as in Proposition <ref> for h(x)=∑_i γ_∞(x-v_i) and f(x)=γ_1(x-v) where v∈^max(v_1,…,v_m).
That is, we want quantitative results for regularizations of tropical Fermat–Weber problems.
In this case, the subdifferentials of h are integer polytopes in .
Moreover, one can check that the dual gauge of γ_1 has the expression γ_1^∘(x)=γ_1(-x)/n which takes integer values at each point of ∩^n.
Consequently, λ_0=inf_P∈d_γ_1^∘(P,)≥ 1 as it is a positive integer.
Whence, the minima of h+λ f are also minima of h for every λ∈(0,1).
|
http://arxiv.org/abs/2307.05441v1 | 20230711170653 | Improved bounds for the Erdős-Rogers $(s,s+2)$-problem | [
"Oliver Janzer",
"Benny Sudakov"
] | math.CO | [
"math.CO"
] |
For 2≤ s<t, the Erdős-Rogers function f_s,t(n) measures how large a K_s-free induced subgraph there must be in a K_t-free graph on n vertices. There has been a tremendous amount of work towards estimating this function, but until very recently only the case t=s+1 was well understood. A recent breakthrough of Mattheus and Verstraëte on the Ramsey number r(4,k) states that f_2,4(n)≤ n^1/3+o(1), which matches the known lower bound up to the o(1) term. In this paper we build on their approach and generalize this result by proving that f_s,s+2(n)≤ n^2s-3/4s-5+o(1) holds for every s≥ 2. This comes close to the best known lower bound, improves a substantial body of work and is the best that any construction of a similar kind can give.
§ INTRODUCTION
Given an integer s≥ 2, a set U of vertices in a graph G is said to be s-independent if G[U] does not contain a copy of K_s. We write α_s(G) for the size of the largest s-independent set in G, so α_2(G) is just the usual independence number of G. Note that estimating Ramsey numbers is equivalent to estimating how small α_2(G) can be for a K_t-free graph on n vertices. In 1962, Erdős and Rogers <cit.> initiated a natural generalization of this problem. For 2≤ s<t≤ n, they defined f_s,t(n) as the minimum of α_s(G) over all K_t-free graphs G on n vertices. This function is now commonly known as the Erdős-Rogers function, and the main research direction has been to estimate its growth as n→∞ for various fixed values of s and t. As remarked above, the case s=2 recovers the usual Ramsey problem: we have f_2,t(n)<ℓ if and only if r(t,ℓ)>n.
For the case s>2, the first bounds were obtained by Erdős and Rogers <cit.>, who showed that for every s there exists a positive =(s) such that f_s,s+1(n)≤ n^1-. Noting that f_s,t(n)≥ f_s,t'(n) holds for every t<t', this implies the same bound for all pairs (s,t). The first lower bound was given by Bollobás and Hind <cit.>, who proved that f_s,t(n)≥ n^1/(t-s+1). In particular, this showed that f_s,s+1(n)≥ n^1/2. Krivelevich <cit.> improved these lower bounds by a small logarithmic factor and gave the general upper bound f_s,t(n)=O(n^s/t+1(log n)^1/s-1). Later, the lower bound was significantly improved by Sudakov <cit.> for every t≥ s+2.
In the last decade or so, there has been significant progress on estimating f_s,s+1. First, Dudek and Rödl <cit.> proved that f_s,s+1(n)=O(n^2/3) for all s, bounding the exponent away from 1. Building on their approach but introducing further ideas, Wolfovitz <cit.> showed that f_3,4(n)≤ n^1/2+o(1), matching the lower bound up to the o(1) term in the exponent. Finally, Dudek, Retter and Rödl <cit.> generalized this for all s by proving that f_s,s+1(n)≤ n^1/2+o(1), which once again matches the lower bound. More precisely, their result states that f_s,s+1(n)=O(n^1/2(log n)^4s^2). The current best lower bound is f_s,s+1(n)= Ω((nlog n/loglog n)^1/2), due to Dudek and Mubayi <cit.>.
With the case t=s+1 settled (up to logarithmic factors), it is natural to study what happens when t=s+2. The lower bound of Sudakov for this case is f_s,s+2(n)≥ n^1/2-1/(6s-6)(log n)^Ω(1). Dudek, Retter and Rödl showed that for every s≥ 4, we have f_s,s+2(n)=O(n^1/2) and asked if there exists s≥ 3 such that f_s,s+2(n)=o(n^1/2). This was answered affirmatively in a strong form by Gowers and Janzer <cit.>, who proved that for each s≥ 3 we have f_s,s+2(n)≤ n^1/2-(s-2)/(8s^2-18s+8)(log n)^O(1). They also improved the best upper bound throughout the range s+2≤ t≤ 2s-1.
In a recent breakthrough, Mattheus and Verstraëte <cit.> proved that the Ramsey number r(4,k) satisfies r(4,k)=Ω(k^3/log^4 k), which matches the known upper bound up to a factor of order log^2 k. Expressed in terms of the Erdős-Rogers function, their result is equivalent to the bound f_2,4(n)=O(n^1/3(log n)^4/3). The main result in this paper is a generalization of their bound to f_s,s+2 for all values of s.
For every s≥ 2, f_s,s+2(n)=O(n^2s-3/4s-5(log n)^3).
We remark that we did not try to optimize the logarithmic term.
It is not hard to see that this improves the bound of Gowers and Janzer on f_s,s+2 for every s. (Using the inequality f_4,7(n)≤ f_4,6(n), it also improves the best known bound for f_4,7.) The smallest case where the result is new is f_3,5(n)≤ n^3/7+o(1), improving the previous bound f_3,5(n)≤ n^6/13+o(1) and coming close to the lower bound f_3,5(n)≥ n^5/12+o(1) of Sudakov.
Furthermore, there is some evidence suggesting that the bound in Theorem <ref> is tight. First, it is tight for s=2. Moreover, it can be shown that no construction of the kind used in all recent works on the Erdős-Rogers function (including those providing the tight results for f_s,s+1) can beat this bound (this will be explained in more detail in the concluding remarks).
§ THE PROOF
The proof of Theorem <ref> has the same rough structure as that of the bound r(4,k)=Ω(k^3/log^4 k) in <cit.>, but we introduce several new twists of their method.
The following graph, studied in <cit.>, played a crucial role in <cit.>.
For every prime q, there is a bipartite graph F with vertex sets X and Y such that the following hold.
* |X|=q^4-q^3+q^2 and |Y|=q^3+1.
* d_F(x)=q+1 for every x∈ X and d_F(y)=q^2 for every y∈ Y.
* F is C_4-free.
* F does not contain the subdivision of K_4 as a subgraph with the part of size 4 embedded to X.
Throughout this section, let s≥ 2 be a fixed integer. Let q be a prime and let F be the graph provided by Proposition <ref>. We now construct a K_s+2-free graph H on vertex set X randomly as follows.
For each y∈ Y, partition N_F(y) uniformly randomly as A_1(y)∪ A_2(y)∪…∪ A_s(y) and place a complete s-partite graph in H with parts A_1(y), A_2(y),… , A_s(y). The following lemma, combined with properties <ref> and <ref> of Proposition <ref>, shows that H is K_s+2-free with probability 1.
Assume that the edge set of a K_s+2 is partitioned into cliques C_1,…,C_k of size at most s. Then there exist four vertices such that all six edges between them belong to different cliques C_i.
Without loss of generality, we may assume that C_1 has at least three vertices, otherwise the statement is trivial. Since C_1 has at most s vertices, there exist distinct vertices u and v which do not belong to C_1. Note that the clique containing u and v has at most one element of C_1, so (as C_1 has size at least three), there exist vertices x and y in C_1 such that no clique contains u, v and at least one of x and y. This means that no clique contains at least three elements from the set {x,y,u,v}, so these four vertices are suitable.
To see that Lemma <ref> implies that H is K_s+2-free, assume that H does contain a copy of K_s+2 on vertex set S. Note that by property <ref> of Proposition <ref>, for any edge uv in the complete graph H[S], there is a unique y∈ Y such that u,v∈ N_F(y). Hence, we can partition the edge set of H[S] into cliques, one with vertex set N_F(y)∩ S for each y∈ Y such that |N_F(y)∩ S|≥ 2. Moreover, any such clique has size at most s, for otherwise it would have to contain (at least) two vertices from some A_i(y), meaning that there could not be an edge between these two vertices. Hence, by Lemma <ref>, there are four vertices in S such that for any two of them there is a different common neighbour in Y in the graph F, contradicting property <ref> of Proposition <ref>.
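To make the construction concrete, here is an illustrative Python sketch (ours, with hypothetical names) of the random block construction of H: each neighbourhood N_F(y) is split into s parts, each vertex choosing its part independently and uniformly at random, and vertices in different parts of the same neighbourhood are joined.

    import random
    from itertools import combinations

    def random_block_graph(neighbourhoods, s, seed=None):
        """Given the neighbourhoods N_F(y) (one iterable of X-vertices per y in Y),
        return the edge set of H described above."""
        rng = random.Random(seed)
        edges = set()
        for block in neighbourhoods:
            parts = [[] for _ in range(s)]
            for v in block:
                parts[rng.randrange(s)].append(v)   # uniformly random part for each vertex
            for P, Q in combinations(parts, 2):
                edges.update((min(u, w), max(u, w)) for u in P for w in Q)
        return edges

    # Toy example with three overlapping neighbourhoods on vertices 0..5:
    print(sorted(random_block_graph([[0, 1, 2, 3], [2, 3, 4], [4, 5, 0]], s=2, seed=0)))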
Our key lemma, proved in Section <ref>, is as follows. Here and below we ignore floor and ceiling signs whenever they are not crucial.
Let q be sufficiently large. Let t=q^2-1/s-1(log q)^3. Then with positive probability the number of sets T⊂ X of size t for which H[T] is K_s-free is at most (q^1/s-1)^t.
It is easy to deduce Theorem <ref> from this.
Take an outcome of H which satisfies the conclusion of Lemma <ref>. Let X̃ be a random subset of X obtained by keeping each vertex independently with probability q^-1/(s-1) and let G_0=H[X̃]. Then for each set T⊂ X, the probability that T⊂X̃ is (q^-1/(s-1))^|T|. Hence, for t=q^2-1/(s-1)(log q)^3, Lemma <ref> implies that the expected number of K_s-free sets of size t in G_0 is at most 1. Removing one vertex from each such set, we obtain a K_s+2-free graph G in which every vertex set of size t contains a K_s. The expected number of vertices in G is at least |X|q^-1/(s-1)-1≥1/2q^4-1/(s-1)-1, so there exists an outcome for G with at least 1/2q^4-1/(s-1)-1 vertices. So, for each sufficiently large prime q, there is a K_s+2-free graph with at least 1/2q^(4s-5)/(s-1)-1 vertices in which every vertex set of size q^(2s-3)/(s-1)(log q)^3 contains a K_s. Using Bertrand's postulate, this implies that f_s,s+2(n)=O(n^2s-3/4s-5(log n)^3), completing the proof.
§.§ The number of K_s-free sets
In this section we prove Lemma <ref>. This is the part of our proof which differs the most from the corresponding argument in <cit.>. Indeed, they proved this lemma for the case s=2 by arguing that the graph H is locally “dense" (with high probability), and therefore by a known result on the number of independent sets in locally dense graphs, the number of independent sets of size t in H is sufficiently small. We do not have an analogue of this result for s-independent sets, so we take a slightly different approach, using a version of the celebrated hypergraph container method <cit.>. In our approach it is crucial to build the containers in several small steps, and we need to get good control of all the induced subgraphs of H that arise while we run the process. The following lemma will help us achieve this control. It states that with high probability in every large enough vertex set in H, we not only have many copies of K_s, but we have many complete s-partite subgraphs which have similar-sized (and large) parts. (Here and below logarithms are to base 2.)
Let q be sufficiently large. Then with positive probability, for every U⊂ X such that |U|≥ 500s^2q^2 there exists some γ≥ |U|/q^2 such that the number of y∈ Y with γ/(10s)≤|A_i(y)∩ U|≤γ for all i∈ [s] is at least |U|q/(8(log q)γ).
Partition Y as follows. Let
Y_0={y∈ Y: |N_F(y)∩ U|≤ e_F(U,Y)/(2|Y|)}
and for each 1≤ i≤ 2log q let
Y_i={y∈ Y: 2^i-1e_F(U,Y)/(2|Y|)< |N_F(y)∩ U|≤ 2^ie_F(U,Y)/(2|Y|)}.
To see that these sets indeed partition Y, note that for each y∈ Y, we have |N_F(y)∩ U|≤ d_F(y)=q^2 and e_F(U,Y)/(2|Y|)=|U|(q+1)/(2|Y|)≥ 1. Observe that e_F(U,Y_0)≤ e_F(U,Y)/2, so there is some 1≤ i≤ 2log q such that e_F(U,Y_i)≥ e_F(U,Y)/(4log q)≥ |U|q/(4log q). Let γ=2^i e_F(U,Y)/(2|Y|). Note that γ≥ e_F(U,Y)/|Y|=|U|(q+1)/(q^3+1)≥ |U|/q^2. Now |Y_i|≥ e_F(U,Y_i)/γ≥ |U|q/(4(log q)γ). Let y∈ Y_i and let j∈ [s]. Note that A_j(y)∩ U is a random subset of N_F(y)∩ U which contains each x∈ N_F(y)∩ U independently with probability 1/s. Hence, the expected value of |A_j(y)∩ U| is |N_F(y)∩ U|/s≥γ/(2s). Therefore by the Chernoff bound (see, e.g., Theorem 4 in <cit.>), the probability that we have |A_j(y)∩ U|≤γ/(10s) is at most exp(-γ/(16s)). By the union bound, the probability that there exists some j∈ [s] such that |A_j(y)∩ U|≤γ/(10s) (for a fixed y∈ Y_i) is at most s·exp(-γ/(16s))≤exp(-γ/(32s)) (where we used γ≥ |U|/q^2≥ 500s^2). These events are independent for different vertices y∈ Y_i, so by the union bound the probability that this happens for more than |Y_i|/2 vertices is at most \binom{|Y_i|}{|Y_i|/2}exp(-|Y_i|γ/(64s))≤exp(-|Y_i|γ/(128s))≤exp(-|U|q/(512log q)). We have therefore shown that for every U⊂ X of size at least 500s^2q^2, the probability that a suitable γ does not exist is at most exp(-|U|q/(512log q)). The result follows after taking a union bound over all choices for U since ∑_u=1^|X|\binom{|X|}{u}exp(-uq/(512log q))≤∑_u=1^∞ (q^4)^u exp(-uq/(512log q))<1.
Let us call an instance of H nice if it satisfies the conclusion of Lemma <ref>.
Lemma <ref> can now be deduced from the following.
Let q be sufficiently large and let t=q^2-1/(s-1)(log q)^3. If H is nice, then the number of sets T⊂ X of size t for which H[T] is K_s-free is at most (q^1/(s-1))^t.
We prove Lemma <ref> using the hypergraph container method. For an s-uniform hypergraph 𝒢 and some ℓ∈ [s], we write Δ_ℓ(𝒢) for the maximum number of hyperedges in 𝒢 containing the same set of ℓ vertices. Moreover, we write ℐ(𝒢) for the collection of independent sets in 𝒢. The following result was proved in <cit.>.
Suppose that positive integers s, b and r and a non-empty s-uniform hypergraph 𝒢 satisfy that for every ℓ∈ [s],
Δ_ℓ(𝒢)≤(b/v(𝒢))^ℓ-1e(𝒢)/r.
Then there exists a family 𝒮⊂V(𝒢)≤ sb and functions f:𝒮→𝒫(V(𝒢)) and g:ℐ(𝒢)→𝒮 such that for every I∈ℐ(𝒢),
I⊂ f(g(I))
and
|f(g(I))|≤ v(𝒢)-δ r,
where δ=2^-s(s+1).
For every positive integer s and positive reals p and λ, the following holds. Suppose that 𝒢 is an s-uniform hypergraph such that pv(𝒢) and v(𝒢)/λ are integers, and for every ℓ∈ [s],
Δ_ℓ(𝒢)≤λ· p^ℓ-1e(𝒢)/v(𝒢).
Then there exists a collection 𝒞 of at most v(𝒢)^spv(𝒢) sets of size at most (1-δλ^-1)v(𝒢) such that for every I∈ℐ(𝒢), there exists some R∈𝒞 with I⊂ R, where δ=2^-s(s+1).
We apply Proposition <ref> with b=pv(𝒢) and r=v(𝒢)/λ and, after replacing 𝒮 with g(ℐ(𝒢)), we take 𝒞=f(𝒮). Then clearly |𝒞|≤ |𝒮|≤ v(𝒢)^spv(𝒢), any set in 𝒞 has size at most v(𝒢)-δ r=(1-δλ^-1)v(𝒢) and I is contained in f(g(I))∈𝒞.
Let ℋ be the s-uniform hypergraph on vertex set X in which a set of s vertices form a hyperedge if they form a K_s in H.
Assume that H is nice. Then for each U⊂ X of size at least 500s^2q^2 there exists a subgraph 𝒢 of ℋ[U] (on vertex set U) which satisfies
Δ_ℓ(𝒢)≤λ· p^ℓ-1e(𝒢)/v(𝒢)
for every ℓ∈ [s] with λ=O_s(log q) and p≤ |U|^-1q^2-1/(s-1).
By Lemma <ref> there exists some γ≥ |U|/q^2 such that the number of y∈ Y with γ/(10s)≤|A_i(y)∩ U|≤γ for all i∈ [s] is at least |U|q/(8(log q)γ). Let p=(γ q^1/s-1)^-1≤ |U|^-1q^2-1/(s-1). Let E(𝒢) consist of all s-sets {x_1,x_2,…,x_s} in U for which there exists y∈ Y with γ/(10s)≤|A_i(y)∩ U|≤γ for all i∈ [s] such that x_i∈ A_i(y)∩ U. Clearly, such x_1,x_2,…,x_s form a K_s in H, so 𝒢 is indeed a subgraph of ℋ.
Note that e(𝒢)≥|U|q/8(log q)γ· (γ/10s)^s=Ω_s(|U|qγ^s-1/log q), so e(𝒢)/v(𝒢)=Ω_s(qγ^s-1/log q). It follows that if the implicit constant in λ is sufficiently large, then λ· p^ℓ-1e(𝒢)/v(𝒢)≥ 2q^1-(ℓ-1)/(s-1)γ^s-ℓ. On the other hand, since F is C_4-free, for each 2≤ℓ≤ s, we have Δ_ℓ(𝒢)≤γ^s-ℓ. It follows that (<ref>) holds for each 2≤ℓ≤ s.
Moreover, as d_F(x)=q+1 for all x∈ X, we have Δ_1(𝒢)≤ (q+1)γ^s-1, so (<ref>) holds for ℓ=1 as well.
Combining Corollary <ref> and Lemma <ref>, we prove the following result.
Let q be sufficiently large and assume that H is nice. Let U be a subset of X of size at least 500s^2q^2. Now there exists a collection 𝒞 of at most (q^4)^sq^2-1/(s-1) sets of size at most (1-Ω_s((log q)^-1))|U| such that for any K_s-free (in H) set T⊂ U there exists some R∈𝒞 with T⊂ R.
Choose a hypergraph 𝒢 and parameters λ,p according to Lemma <ref>. By Corollary <ref>, there exists a collection 𝒞 of at most |U|^sp|U| sets of size at most (1-2^-s(s+1)λ^-1)|U| such that for every independent set I in 𝒢, there exists some R∈𝒞 such that I⊂ R. The lemma follows by noting that any K_s-free set is an independent set in 𝒢, |U|≤ q^4, p≤ |U|^-1q^2-1/(s-1) and λ=O_s(log q).
Let q be sufficiently large and assume that H is nice. Then there is a collection 𝒞 of at most (q^4)^O_s(q^2-1/(s-1)(log q)^2) sets of size at most 500s^2q^2 such that for any K_s-free (in H) set T⊂ X there exists some R∈𝒞 such that T⊂ R.
By Lemma <ref>, there exists a positive constant c_s such that whenever U is a subset of X of size at least 500s^2q^2, then there is a collection 𝒟 of at most (q^4)^sq^2-1/(s-1) sets of size at most (1-c_s(log q)^-1)|U| such that for any K_s-free set T⊂ U there exists some R∈𝒟 with T⊂ R.
We prove by induction that for each positive integer j there is a collection 𝒞_j of at most (q^4)^jsq^2-1/(s-1) sets of size at most max(500s^2q^2,(1-c_s(log q)^-1)^j|X|) such that for any K_s-free set T⊂ X there exists some R∈𝒞_j with T⊂ R. Note that by choosing j to be a suitable integer of order Θ_s((log q)^2), the corollary follows.
The base case j=1 is immediate by the first paragraph (applied in the special case U=X).
Let now 𝒞_j be a suitable collection for j and define 𝒞_j+1 as follows. For each U∈𝒞_j of size greater than 500s^2q^2, take a collection 𝒟(U) of at most (q^4)^sq^2-1/(s-1) sets of size at most (1-c_s(log q)^-1)|U| such that for any K_s-free set T⊂ U there exists some R∈𝒟(U) with T⊂ R. Let
𝒞_j+1={U∈𝒞_j:|U|≤ 500s^2q^2}∪⋃_U∈𝒞_j:|U|>500s^2q^2𝒟(U).
Clearly, |𝒞_j+1|≤ |𝒞_j|(q^4)^sq^2-1/(s-1)≤ (q^4)^(j+1)sq^2-1/(s-1).
Moreover, since every set in 𝒞_j has size at most max(500s^2q^2,(1-c_s(log q)^-1)^j|X|), it follows that any set in 𝒞_j+1 has size at most max(500s^2q^2,(1-c_s(log q)^-1)^j+1|X|). Finally, for any K_s-free set T⊂ X there exists some U∈𝒞_j with T⊂ U and hence there exists some R∈𝒞_j+1 (either U or some element of 𝒟(U)) such that T⊂ R. This completes the induction step and the proof.
Corollary <ref> implies that if q is sufficiently large and H is nice, then the number of K_s-free sets of size t=q^2-1/(s-1)(log q)^3 in H is at most
(q^4)^O_s(q^2-1/(s-1)(log q)^2)\binom{500s^2q^2}{t}≤ (q^4)^O_s(q^2-1/(s-1)(log q)^2)(q^1/(s-1)/log q)^t≤ (q^1/(s-1))^t,
proving Lemma <ref>.
§ CONCLUDING REMARKS
In this paper we constructed, for each fixed s≥ 2 and each n, a K_s+2-free n-vertex graph in which every vertex set of size at least roughly n^2s-3/4s-5(log n)^3 induces a K_s. It is not too hard to see that any construction which improves the exponent 2s-3/4s-5 would have to have significantly fewer triangles than the random graph with the same edge density. In particular, this shows that any such construction would have to be very different than the one used in this paper and from those used in <cit.>. Indeed, all these papers use so-called “random block constructions" where we start with a suitable bipartite graph F with parts X and Y and we define a new graph G on vertex set X by randomly placing complete s-partite graphs inside N_F(y) for each y∈ Y. Note that each C_6 in F produces a triangle in G with positive probability. Moreover, unless F is very sparse, it has at least as many C_6s as the random bipartite graph with the same edge density (i.e. C_6 supersaturates). It follows that G has at least as many triangles as the random graph with the same edge density. Even after taking random induced subgraphs, this property remains, showing that any construction beating our bound would have to be significantly different.
|
http://arxiv.org/abs/2307.04686v2 | 20230710164203 | VampNet: Music Generation via Masked Acoustic Token Modeling | [
"Hugo Flores Garcia",
"Prem Seetharaman",
"Rithesh Kumar",
"Bryan Pardo"
] | cs.SD | [
"cs.SD",
"cs.AI",
"eess.AS"
] |
Hugo Flores García^1,2 Prem Seetharaman^1 Rithesh Kumar^1 Bryan Pardo^2
^1 Descript Inc.
^2 Northwestern University
[email protected]
VampNet: Music Generation via Masked Acoustic Token Modeling
==============================================================
We introduce VampNet, a masked acoustic token modeling approach to music synthesis, compression, inpainting, and variation.
We use a variable masking schedule during training which allows us to sample coherent music from the model by applying a variety of masking approaches (called prompts) during inference. VampNet is non-autoregressive, leveraging a bidirectional transformer architecture that attends to all tokens in a forward pass. With just 36 sampling passes, VampNet can generate coherent high-fidelity musical waveforms. We show that by prompting VampNet in various ways, we can apply it to tasks like music compression, inpainting, outpainting, continuation, and looping with variation (vamping). Appropriately prompted, VampNet is capable of maintaining style, genre, instrumentation, and other high-level aspects of the music. This flexible prompting capability makes VampNet a powerful music co-creation tool. Code and audio samples are available online.
§ INTRODUCTION
In recent years, advances in discrete acoustic token modeling have resulted in significant leaps in autoregressive generation of speech <cit.> and music <cit.>. Meanwhile, approaches that use non-autoregressive parallel iterative decoding have been developed for efficient image synthesis <cit.>. Parallel iterative decoding promises to allow faster inference than autoregressive methods and is more suited to tasks like infill, which require conditioning on both past and future sequence elements.
In this work, we combine parallel iterative decoding with acoustic token modeling, and apply them to music audio synthesis. To the best of our knowledge, ours is the first [While our work was under peer review, Google released SoundStorm <cit.>, which leverages a similar parallel iterative decoding approach to ours.] extension of parallel iterative decoding to neural audio music generation. Our model, called VampNet, can be flexibly applied to a variety of applications via token-based prompting. We show that we can guide VampNet's generation with selectively masked music token sequences, asking it to fill in the blanks. The outputs of this procedure can range from a high-quality audio compression technique to variations on the original input music that match the original input music in terms of style, genre, beat and instrumentation, while varying specifics of timbre and rhythm.
Unlike auto-regressive music models <cit.>, which can only perform music continuations – using some prefix audio as a prompt, and having the model generate music that could plausibly come after it – our approach allows the prompts to be placed anywhere. We explore a variety of prompt designs, including periodic, compression, and musically informed ones (e.g. masking on the beat). We find that our model responds well to prompts to make loops and variations, thus the name VampNet [To vamp is to repeat a short passage of music with variation.]. We make our code open source[<https://github.com/hugofloresgarcia/vampnet>] and highly encourage readers to listen to our audio samples[audio samples: <https://tinyurl.com/bdfj7rdx>].
§ BACKGROUND
Two-stage approaches to generative modeling have gained traction in image <cit.> and audio <cit.> synthesis, largely in part due to their computational efficiency. In the first stage, a discrete vocabulary of “tokens” is learned for the domain of interest. The input is put through an encoder to obtain these tokens, which can be converted back into the input domain via a corresponding decoder. In the second stage, a model is trained to generate tokens, and is optionally given some conditioning (e.g. previous tokens, a text description, a class label) to guide generation.
§.§ Stage 1: Tokenization
In images, visual tokenization has been leveraged for state-of-the-art classification <cit.> and synthesis <cit.>. The most popular approach is to use vector quantization on a latent space. Similar approaches have been explored for audio <cit.>, but until recently such approaches have been restricted to low sampling rates (e.g. 16khz), or have been restricted to speech audio. The “sampling rate” of the latent space (the number of latent vectors required every second to represent audio) is a critical aspect of the tokenization scheme. The lower the sampling rate of the latent space, the easier the next stage (generation) will be to accomplish. Recently, methods based on residual vector quantization <cit.> have been proposed for audio tokenization at high compression rates with good reconstruction quality of high-sample-rate audio.
The primary work we leverage for audio tokenization is the Descript Audio Codec (DAC) <cit.>. With DAC, audio is encoded into a sequence of tokens via a fully convolutional encoder. The output of this encoder is then quantized using a hierarchical sequence of vector-quantizers <cit.>. Each quantizer operates on the residual error of the quantizer before it. Because of this residual vector quantization, DAC is able to reconstruct audio with very high quality, at a high compression ratio. It, along with its predecessors <cit.>, are instrumental in enabling audio language models like AudioLM <cit.>, MusicLM <cit.>, and VALL-E <cit.>. While we later briefly describe our tokenizer, the key contributions of our work are applicable to the output of any audio tokenizer and our specific audio tokenizer is not the focus of this work.
§.§ Stage 2: Generation
Given audio encoded as tokens, the common approach is to use an autoregressive model <cit.> for generation. State-of-the-art (SOTA) audio generation approaches like AudioLM <cit.>, MusicLM <cit.>, and JukeBox <cit.> use this approach, generating each acoustic token in the sequence in a step-by-step fashion using transformer-based <cit.> decoder-only models. Autoregressive sampling is slow in nature due to the high number of steps required at inference time <cit.>. Further, autoregressive models inherently restrict downstream applications, as each generated token is only conditioned on the previous tokens. For an autoregressive model to perform tasks like inpainting (“filling in the middle”), one must re-arrange the data during training <cit.>.
In language, masked modeling has been used extensively as a pre-training procedure for high-quality semantic representations <cit.>. This procedure has also been extended for representation learning in images <cit.> and audio <cit.>. Masked modeling for representation learning generally has a constant mask probability. For example, in BERT <cit.>, tokens are masked 15% of the time during training. It has been shown that this approach is equivalent to a single-step discrete diffusion model <cit.>, that uses masking for its noising procedure. Therefore, we can extend masked modeling to masked generative modeling by varying the probability of masking a token during training. This was done for image generation in MaskGIT <cit.>, and in language <cit.>. Similar to diffusion modeling <cit.>, which seeks to synthesize data starting from random noise through a series of denoising steps, masked generative modeling seeks to synthesize data starting from completely masked data through a series of “unmasking” steps.
Key to the efficiency of MaskGIT and related approaches is a parallel iterative decoding procedure. In parallel iterative decoding, the model predicts every token in the output sequence in a single forward pass. However, after just one forward pass of the model, the output often does not have high quality. The output of the first sampling step is re-masked, with a lower masking probability, and then put through the model again. In this way, masked generative models can efficiently refine their output, resulting in high quality generation.
In unconditional generation tasks, the model is asked to generate a realistic sample from the target data distribution from scratch, without any guidance. This is a difficult problem, as many target data distributions are highly multimodal. Unconditional generative models are susceptible to mode collapse <cit.>, blurry samples, mode averaging, and other issues <cit.>. Therefore, some conditioning is helpful as it provides some signal for the model to resolve the multimodality. Conditioning is also a commonly used method to guide the output of the system towards desired content.
Conditioning can take the form of a class label, a genre tag or lyrics <cit.>, or an associated text description <cit.>. Conditioning can also be applied at every timestep, like the semantic tokens of AudioLM <cit.>, or aligned text or phonemes for text-to-speech generation <cit.>.
In this work, we adopt a masked generative modeling approach with a parallel iterative decoding procedure, inspired by work in vision such as MaskGIT <cit.> and Paella <cit.>, as illustrated in Figure <ref>. We do not apply any conditioning beyond that provided by the unmasked tokens in our encoded audio. As we show later, different approaches to masking, applied at inference time, can be used to steer generation in useful and artistic ways.
In training, tokens are masked randomly throughout the sequence. The model is then asked to predict the value of each of the masked tokens in a single forward pass, but it is conditioned on all of the unmasked tokens, both in the future as well as in the past. We vary the number of tokens that are masked during training, allowing us to generate audio at inference time through a sampling procedure. We now describe our method in more detail.
§ METHOD
We adapt the procedure of Masked Visual Token Modeling, proposed in MaskGIT <cit.>, to audio, accounting for several key differences between the vision and audio domains. We call our approach Masked Acoustic Token Modeling.
§.§ Masked Acoustic Token Modeling
We first train an audio tokenizer based on the techniques described in DAC <cit.>. Unlike the visual tokens of MaskGIT, our acoustic tokens are hierarchical in nature due to residual vector quantization.
As a first step, the audio signal x is encoded at each time step t as a D-dimensional latent vector Z. We then quantize Z using N vector quantizers. Quantizer 1 produces Ẑ_1, a quantized approximation of Z that has residual error R_1 = Z - Ẑ_1. Thereafter, the residual from each quantizer i is passed to the next quantizer i+1, which produces a quantized approximation of the remaining residual error: R_i ≈ Ẑ_i+1. Vector Z is reconstructed by summing the outputs of the N quantizers: Z ≈∑_i=1^N Ẑ_i.
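As a concrete illustration of the residual quantization arithmetic described above, here is a minimal NumPy sketch; the random codebooks and nearest-neighbour lookup are illustrative stand-ins for DAC's learned quantizers, and the sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, C = 8, 4, 16          # latent depth, number of quantizers, codebook size (toy values)
codebooks = rng.normal(size=(N, C, D))   # stand-in for learned codebooks

def rvq_encode(z):
    """Return the N token indices for one latent vector z (shape [D])."""
    tokens, residual = [], z.copy()
    for i in range(N):
        # nearest codeword in codebook i approximates the current residual
        idx = int(np.argmin(np.linalg.norm(codebooks[i] - residual, axis=1)))
        tokens.append(idx)
        residual = residual - codebooks[i][idx]   # pass the residual error onward
    return tokens

def rvq_decode(tokens):
    """Sum the selected codewords: Z is approximated by the sum of the Zhat_i."""
    return sum(codebooks[i][t] for i, t in enumerate(tokens))

z = rng.normal(size=D)
tokens = rvq_encode(z)
print(tokens, np.linalg.norm(z - rvq_decode(tokens)))  # error shrinks as N grows
```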
Since the encoded signal is represented as a quantized vector of N discrete tokens at each timestep, we have N tokens that can be masked or unmasked at each timestep. Rather than attempt to generate all tokens at once, we instead split the N tokens into N_c “coarse” tokens, and N_f “fine” tokens, as in AudioLM. We then train two generative models: one that generates the fine tokens given the coarse tokens as conditioning, and one that generates the coarse tokens given a sequence of coarse tokens. To generate a sample (Figure <ref>), we chain the two models together. First, we apply the coarse model to generate a sequence of coarse tokens. Then, we apply the coarse-to-fine model to generate the fine tokens. We decode the tokens to a 44.1khz waveform using the decoder of our audio tokenizer.
§.§ Training procedure
Let 𝐘∈ℝ^T× N be a matrix representing the output of the encoder for some audio segment. Each element y_t,n in 𝐘 is a token from the nth level codebook at timestep t. Let 𝐘_M be the set of all masked tokens in 𝐘 and 𝐘_U be the set of all unmasked tokens in 𝐘. The model generates a probability distribution over the set of possible codebook values for each token y ∈𝐘_M, given the unmasked tokens and the model parameters θ. The training objective is to maximize the probability of the true tokens. This corresponds to minimizing the negative log likelihood.
ℒ = - ∑_∀ y ∈𝐘_Mlog p(y| 𝐘_U, θ)
To predict the masked tokens, we use a multi-layer bidirectional transformer, which predicts the probabilities of each possible token at every timestep, for every quantizer. If each quantizer has a codebook size of C possible values, and there are N quantizers, then the last layer of the network will be a fully connected layer of shape (E, CN), where E is the dimensionality of the output of the last layer. We then reshape this output into (EN, C), and compute the cross-entropy loss between the ground-truth one-hot token and the predicted token. Because the transformer is bidirectional, it can attend to all tokens in the input sequence to optimize the loss for each token.
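The following is a schematic PyTorch sketch of this masked-token objective; the tensor sizes are toy values and the random hidden states stand in for the output of the bidirectional transformer.

```python
import torch
import torch.nn.functional as F

T, N, C, E = 100, 4, 1024, 512   # timesteps, codebooks, vocab size, embed dim (toy values)

proj = torch.nn.Linear(E, C * N)           # final layer: E -> C*N logits per timestep
hidden = torch.randn(T, E)                 # stand-in for the transformer output
logits = proj(hidden).reshape(T * N, C)    # one C-way prediction per (timestep, codebook)

targets = torch.randint(0, C, (T * N,))    # ground-truth token ids (toy)
is_masked = torch.rand(T * N) < 0.7        # which positions were masked this step

# negative log-likelihood over the masked tokens only, as in the equation above
loss = F.cross_entropy(logits[is_masked], targets[is_masked])
loss.backward()
```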
For the coarse-to-fine generative model, the input sequence always contains N_c coarse tokens, and the masking operation is restricted to the N_f fine tokens. The last layer of this network only predicts masked fine tokens. Otherwise, the training procedure for both models is identical.
§.§ Sampling
We follow the same iterative confidence-based sampling approach used in MaskGIT. More concretely, given Y_M as the set of masked tokens and Y_U as the set of unmasked tokens, do:
* Estimate. For each masked token y in Y_M, estimate the conditional probability distribution over its vocabulary of codebook values V.
* Sample. For each masked token, sample from the distribution to generate an associated token estimate ŷ∈ V. We don't use any sampling tricks in this step, sampling from the categorical probability distribution for each token as-is.
* Rank by Confidence. Compute a confidence measure for each of the sampled tokens by taking their prediction log-probabilities and adding temperature-annealed Gumbel noise to them:
confidence(ŷ_t) = log(p(ŷ_t)) + temp · g_t
where ŷ_t is a token estimate at timestep t, g_t is an i.i.d sample drawn from Gumbel(0,1) <cit.>, and temp is a hyperparameter that is linearly annealed to 0 over the number of sampling iterations.
Then, sort the set of sampled token estimates by the confidence computed above. We find that high temperature values (e.g. >6.0) result in higher quality samples.
* Select.
Pick the number of tokens to mask at the next sampling iteration, k, according to the masking schedule [k = γ (t/t_T) D, where t is the current iteration, t_T is the total number of iterations, and D the total number of tokens in the sequence. The scheduling function γ is a cosine schedule.]. Take the k lowest confidence estimates and toss them out, re-masking their tokens. Place the remaining high-confidence token estimates in Y_U, removing their tokens from Y_M.
* Repeat. Return to step 1 until the total number of sampling iterations has been reached. (A code sketch of this loop is given below.)
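Below is a sketch of the sampling loop just described, assuming a model that returns logits of shape (number of tokens, vocabulary size) for the flattened token grid; the schedule and annealing follow the description above, and everything else (names, shapes) is illustrative.

```python
import math
import torch

def parallel_iterative_decode(model, tokens, mask, n_steps=24, temp=8.0):
    """tokens, mask: flattened (T*N,) token ids and boolean mask (True = masked)."""
    D_total = mask.sum().item()                      # number of initially masked tokens
    for t in range(1, n_steps + 1):
        logits = model(tokens)                       # 1. estimate
        probs = torch.softmax(logits, dim=-1)
        sampled = torch.multinomial(probs, 1).squeeze(-1)        # 2. sample as-is

        # 3. confidence = log p(sampled) + annealed temp * Gumbel noise
        logp = torch.log(probs.gather(-1, sampled[:, None]).squeeze(-1) + 1e-9)
        gumbel = -torch.log(-torch.log(torch.rand_like(logp)))
        conf = logp + temp * (1 - t / n_steps) * gumbel
        conf[~mask] = float("inf")                   # never re-mask known tokens

        tokens = torch.where(mask, sampled, tokens)  # tentatively commit estimates

        # 4. select: re-mask the k lowest-confidence tokens (cosine schedule)
        k = int(math.cos(math.pi / 2 * t / n_steps) * D_total)
        if k == 0:
            mask = torch.zeros_like(mask)
            break
        remask_idx = torch.topk(conf, k, largest=False).indices
        mask = torch.zeros_like(mask)
        mask[remask_idx] = True
    return tokens
```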
§.§ Prompting
Interactive music editing can be enabled by incorporating human guidance in the sampling procedure through the conditioning prompt of unmasked tokens. Because our approach isn't conditioned on any signal other than the input audio itself, we find that various types of prompts are useful for obtaining coherent samples, as they lower the amount of multimodality when sampling from the model. Like AudioLM, we can prompt our model with prefix audio of some duration (usually between 1 and 4 seconds), and it will provide a continuation of that audio. Unlike AudioLM, and other auto-regressive approaches, we can also prompt our model with suffix audio, and it will generate audio that leads up into that suffix. We can provide prefix and suffix audio, and the model will generate the remaining audio, such that it is appropriate, given
the specified prefix and suffix.
We can also apply a “periodic” prompt, where all but every Pth timestep are masked.
The lower P is, the more the generated audio will sound like the original, as the model is highly conditioned. For example, if P = 2, then the model is essentially behaving like an upsampler, imputing the tokens for every other timestep. As P increases, the model shifts from behaving in a compression mode to a generative mode, creating variations that match the style of the original.
Another useful style of prompt is the “compression” prompt, where all codebooks other than the most coarse-grained are masked. This gives the model strong conditioning on every timestep, so the model is likely to produce audio that closely matches the original. We can combine this prompt with a periodic prompt with low P for even more extreme compression ratios. Given the bitrate B of the codec, which has N codebooks, a downsampling rate P for the periodic prompt, and a number of kept codebooks N_k, we can achieve a bitrate of B / P(N - N_k).
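To make the prompt geometry concrete, here is an illustrative sketch of how the periodic and compression prompts can be realized as boolean masks over the (timesteps × codebooks) token grid; True marks tokens to be masked (generated), False marks conditioning tokens kept from the input. The function and parameter names are ours, not part of any released API.

```python
import numpy as np

def periodic_mask(T, N, P):
    """Mask everything except every P-th timestep (all codebooks kept there)."""
    mask = np.ones((T, N), dtype=bool)
    mask[::P, :] = False
    return mask

def compression_mask(T, N, n_keep=1):
    """Mask everything except the n_keep coarsest codebooks at every timestep."""
    mask = np.ones((T, N), dtype=bool)
    mask[:, :n_keep] = False
    return mask

def combined_mask(T, N, P, n_keep=1):
    """Periodic + compression: keep only the coarsest codebooks at every P-th timestep."""
    return periodic_mask(T, N, P) | compression_mask(T, N, n_keep)

# roughly 10 seconds of coarse tokens (4 codebooks at ~57 Hz)
m = combined_mask(T=574, N=4, P=16, n_keep=1)
print(m.mean())   # fraction of tokens the model must generate
```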
Finally, we can design music-specific prompts, which exploit knowledge about the structure of the music. More concretely, we explore beat-driven prompting, where timesteps that fall on or around the beat are left unmasked. The model is left to create music between these beats, resulting in interesting variations on the original music. These prompts can all be combined to create a very useful music creation tool. In concert with a well designed user interface, VampNet shows promise as the basis for a next-generation music editing and creation suite.
§ EXPERIMENTS
Our experiments aim to evaluate VampNet's capability to both compress and generate music, given the various prompting strategies described in Section <ref>. For our objective audio quality measures, we use a multiscale mel reconstruction error and the Fréchet Audio Distance (FAD). Mel-reconstruction error is defined as the L1 distance between log-mel spectrograms at various time-scales,
D_F,M = || Ŝ_F,M - S_F,M ||_1
where F is the FFT size of each spectrogram, and M is the number of mel-frequency
bins. We use F ∈ [2048, 512] and M ∈ [150, 80], with a hop size of 1/4 the FFT size. Mel-reconstruction is valuable as a metric for compression quality, but not for generation quality, since it is likely that models produce audio that does not match one to one with the original target audio. For generation quality, we use FAD, which measures the overlap between distributions of real and generated audio. Unlike mel-reconstruction, FAD is geared more towards evaluating if sample quality falls within the data distribution of the real audio, and can be used to evaluate generation quality.
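A sketch of this multi-scale mel distance is given below, using librosa and pairing the listed FFT sizes and mel-bin counts in order; the log offset and the mean (rather than sum) reduction are our own choices, not specified in the text.

```python
import numpy as np
import librosa

def multiscale_mel_distance(ref, est, sr=44100):
    """L1 distance between log-mel spectrograms at the (F, M) settings above."""
    total = 0.0
    for n_fft, n_mels in [(2048, 150), (512, 80)]:
        hop = n_fft // 4
        kwargs = dict(sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
        S_ref = librosa.feature.melspectrogram(y=ref, **kwargs)
        S_est = librosa.feature.melspectrogram(y=est, **kwargs)
        eps = 1e-5   # assumed log offset
        total += np.abs(np.log(S_ref + eps) - np.log(S_est + eps)).mean()
    return total

# toy usage with random one-second signals
rng = np.random.default_rng(0)
x, y = rng.normal(size=44100), rng.normal(size=44100)
print(multiscale_mel_distance(x, y))
```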
§.§ Dataset
Similar to JukeBox <cit.>, we collect a large dataset of popular music recordings. Our dataset consists of 797k tracks, with a sampling rate of 32 kHz. These tracks are resampled to 44.1 kHz to make them compatible with our tokenizer. Our dataset contains music from
thousands of artists across genres described in Echo Nest's Every Noise at Once [<https://everynoise.com/engenremap.html>].
We use a subset of 2k tracks for validation, and another subset of 2k tracks for testing. We ensure that there is no artist overlap between train, validation, and test tracks.
In addition, we collect a set of music and non-music data (speech, environmental sound), which we used to train our tokenizer, using the datasets described in DAC <cit.>.
All audio is normalized to -24dbFS. We do not use any metadata about these files during training, as our model is trained unconditionally.
§.§ Network Architecture and Hyperparameters
The audio tokenizer model we use takes as input 44.1kHz audio, and compresses it to a bitrate of 8kbps using 14 codebooks, with a downsampling rate of 768x. The latent space therefore is at 57Hz, with 14 tokens to predict at every timestep. We designate 4 of these tokens as the coarse tokens, and the remaining 10 as the fine tokens. Refer to the Descript Audio Codec <cit.> for details on the tokenizer architecture. We train the tokenizer for 250k steps.
The VampNet architecture (for both coarse and coarse-to-fine models) consists of a bidirectional transformer <cit.> with relative attention <cit.> and an embedding dimension of 1280 and 20 attention heads. The coarse model has 20 attention layers, while the coarse-to-fine model has 16.
We train the coarse and coarse-to-fine model for 1M and 500k steps, respectively. We train with the AdamW optimizer <cit.> with β_1 and β_2 set to 0.9 and 0.999, respectively. We use the learning rate scheduler introduced by Vaswani et al <cit.> with a target learning rate of 0.001 and 10k warmup steps. We use a dropout of 0.1, and a batch size of 25, with a GPU memory budget of 72GB.
§.§ Efficiency of VampNet
We first validate that VampNet can generate realistic music audio in a low number of steps. To do this, we run VampNet using one of our prompts (the periodic prompt, with P = 16) on our test set, on 10-second excerpts. We vary the number of sampling steps in [1, 4, 8, 12, 36, 64, 72], and report metrics for each sampling step.
§.§ Effect of prompts
We seek to understand how VampNet responds to different prompts, as discussed in Section <ref>. The prompts range from “compression” prompts, which compress music to a low bitrate, to more creative “generative” prompts. We examine whether compression and generative prompts exist on a continuum, and whether decompression from low bitrates results in generative behavior.
We draw 2000 10-second examples from our evaluation dataset, encode them into token streams with our audio tokenizer, and manipulate the token streams in four ways:
* Compression prompt: C codebooks are left unmasked, starting from the coarsest codebook. All other tokens are masked. We set N_k = 1.
* Periodic prompt: every Pth timestep is left unmasked. In an unmasked timestep, tokens from every codebook are unmasked. All other tokens (e.g. tokens in timesteps that do not correspond to the period P) are masked. We set P ∈ [8, 16, 32].
* Prefix and suffix (inpaint) prompts: a segment at the beginning and at the end of the sequence is left unmasked. All other tokens are masked. This prompt is parameterized by a context length in seconds. We set the context to be either 1 second or 2 seconds, which corresponds to 57 or 114 timesteps.
* Beat-driven prompt: we first process the audio waveform with a beat tracker <cit.>. Then, around each detected beat, we unmask timesteps to the right of the beat. We examine a 75ms unmasked section around each beat, which is about 4 timesteps per beat.
After manipulating the input token streams with our prompts, we generate new musical signals from these masked token streams using VampNet, and compute FAD and mel-reconstruction error between the generated signals and the input signals from our music dataset.
We include a noisy token stream baseline, where a portion (as dictated by mask ratio r) of the tokens in the input token stream are replaced with random tokens. We also include as baseline the codec by itself, as well as the coarse-to-fine model.
Finally, we examine how these prompts can be combined - specifically the compression and periodic prompts. By manipulating the hyperparameters of these prompts (C and P), we can shift the model behavior from compression to generation. As more timesteps are masked, the model must generate plausible musical excerpts that connect the unmasked timesteps, that may not match the input music.
§ RESULTS AND DISCUSSION
Results for our experiment varying the number of sampling steps used to generate samples with VampNet are shown on Figure <ref>. We find that VampNet achieves the lowest FAD with 36 sampling steps, although 12 sampling steps achieves comparable performance. In practice, we find that samples taken with 24 steps achieve a fair trade-off between generation quality and compute speed, with 10-second samples taking around 6 seconds to sample on an NVIDIA RTX3090. In contrast, to generate 10 seconds of audio with an autoregressive model would require 574 steps, which would take around 1 min to generate 10 seconds of audio, given an autoregressive model with the same number of parameters as ours, and the same tokenizer.
Results for our study on the effect of each prompt are shown in Figure <ref>. First, we note that while the noisy token baseline has comparable mel reconstruction to all prompts, it performs very poorly in terms of FAD. This indicates that while our prompting strategies may result in audio that is not a perfect match to the original input audio, it still falls inside the distribution of plausible music.
Of our proposed prompts, we find that beat-driven prompts perform best, achieving the lowest FAD of all prompts. A notable result here is that the periodic prompt with P=16 (35 conditioning timesteps) performs on par with inpainting with 1 second of context (57 conditioning timesteps). Therefore, prompt techniques that spread out the conditioning tokens throughout the sequence (periodic prompts) are able to use fewer conditioning timesteps to generate samples of comparable quality to those generated by sampling techniques that place all of the conditioning tokens at the start and end of the sequences (inpainting).
Qualitatively, we also find that beat-driven prompts can keep a steadier tempo than other prompts, though their outputs tend to resemble the original music more closely than those of periodic prompts. In practice, a mix of beat-driven, periodic, and inpainting prompts can be employed to steer VampNet in creative ways. To illustrate, we highly encourage the reader to listen to the accompanying sound samples [audio samples: <https://tinyurl.com/bdfj7rdx>].
We then combined periodic and compression prompting to show how the model's behavior shifts between reconstruction and generation tasks, as more tokens are masked away.
Results for this experiment are shown in Figure <ref>. At higher bitrates, (600 bps and above), VampNet is able to accurately reconstruct the original music signal, achieving low mel-spectrogram error and FAD values with respect to the evaluation music audio. At bitrates of 200bps and below, VampNet has comparable reconstruction quality to the noisy token baselines, indicating that the sampled VampNet signals no longer resemble the input audio in terms of fine-grained spectral structure. However, the FAD for VampNet samples at low bitrates is much lower than the FAD for noisy baselines. This indicates that even though VampNet isn't able to reconstruct the input music signal at low bitrates, it is still able to generate coherent audio signals with musical structure, that are closer to the distribution of “real music” than our noisy baseline.
§ CONCLUSION
We introduced VampNet, a masked acoustic token modeling approach to music generation. VampNet is bidirectional, and can be prompted in a variety of ways using an input audio file. Through different prompting techniques, VampNet can operate in a continuum between music compression and generation, and is an excellent tool for generating variations on a piece of music.
With VampNet, a musician could record a short loop, feed it into VampNet, and have VampNet create musical variations on the recorded idea every time the looped region repeats.
In future work, we hope to investigate the interactive music co-creation potential of VampNet and its prompting techniques, as well as explore the representation learning capabilities of masked acoustic token modeling.
|
http://arxiv.org/abs/2307.03887v1 | 20230708034254 | Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining | [
"Robin Netzorg",
"Jiaxun Li",
"Bin Yu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.HC"
] |
Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining
Robin Netzorg, Jiaxun Li, Bin Yu
==========================================================================================
In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the prototypical part network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this method results in interpretable classifications, it often learns to classify from spurious or inconsistent parts of the image. Hoping to remedy this, we take inspiration from the recent developments in Reinforcement Learning with Human Feedback (RLHF) to fine-tune these prototypes. By collecting human annotations of prototype quality via a 1-5 scale on the CUB-200-2011 dataset, we construct a reward model that learns to identify non-spurious prototypes. In place of a full RL update, we propose the reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which adds an additional three steps to the ProtoPNet training loop. The first two steps are reward-based reweighting and reselection, which align prototypes with human feedback. The final step is retraining to realign the model's features with the updated prototypes. We find that R3-ProtoPNet improves the overall consistency and meaningfulness of the prototypes, but lowers test predictive accuracy when used independently. When multiple trained R3-ProtoPNets are incorporated into an ensemble, we find an increase in test predictive performance while maintaining interpretability.
§ INTRODUCTION
With the widespread use of deep learning, having these models be interpretable is more important now than ever. As these models continue to see use in high-stakes situations, practitioners hoping to justify a decision need to understand how a deep model makes a prediction, and trust that those explanations are valuable and correct <cit.>. One such proposed method for image classification is the prototypical part network (ProtoPNet), which classifies a given image based on its similarity to prototypical parts of training images, called prototypes <cit.>. This model aims to combine the power of deep learning with an intuitive reasoning module similar to humans.
While ProtoPNet aims to learn meaningful prototypical concepts, in practice, learned prototypes suffer from learning spurious concepts, such as the background of an image, from inconsistent concepts, such as learning both the head and the wing of a bird, and from duplicating concepts, such as having two prototypes that correspond to the same wing of the same bird <cit.>. Such problems are highly detrimental to the efficacy of these models, resulting in wasted computation at best and incorrect reasoning at worst. Various methods have been proposed to account for these issues <cit.>, but these methods involve either costly labelling procedures or fall short of providing a means of measuring prototype quality.
We seek to increase the performance of the learned prototypes by taking inspiration from recent advances in reinforcement learning with human feedback (RLHF) <cit.> and reward learning <cit.>. RLHF and reward learning have become popular approaches for aligning large language models with human preferences, partially due to the flexibility of learned rewards and feedback collection methods <cit.>. While prior work has incorporated human feedback into ProtoPNets <cit.>, no variation of ProtoPNet has incorporated a cheap and flexible reward learning fine-tuning framework.
Towards this end, we propose the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which seeks to improve the original ProtoPNet via fine-tuning with a learned reward model. With minimal human feedback data on the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset <cit.>, we are able to train a high-quality reward model that achieves 91.5% test accuracy when ranking human preferences, serving as a strong measure for prototype quality. R3-ProtoPNet is then able to improve the meaningfulness of prototypes, removing dependence on spurious features, and is able to slightly decrease inconsistency across images compared to the original ProtoPNet. When used as base learners in an ensemble, R3-ProtoPNets are able to outperform an ensemble of ProtoPNets on a held-out test dataset.
In summary, our contributions are as follows. Firstly, we demonstrate that a reward model trained on small amounts of human feedback data (roughly 300 ratings) can accurately rank human preference data. Secondly, due to the high performance of the reward model, we propose using the reward model as a measure of prototype quality. Thirdly, we introduce the R3-ProtoPNet, which uses reward-guided fine-tuning to improve prototype meaningfulness and ensemble performance.
§ RELATED WORK
§.§ Reinforcement Learning with Human Feedback
Since the success of InstructGPT <cit.>, Reinforcement Learning with Human Feedback (RLHF) has received a great deal of attention in the machine learning community. Although this success is recent, incorporating human feedback into reinforcement learning methods via a learned reward model has a deep history in reward learning <cit.>. While works taking inspiration from InstructGPT have used proximal policy optimization (PPO) to fine-tune networks with human feedback <cit.>, it is unclear to what extent formal reinforcement learning is necessary to improve models via learned reward functions <cit.>, or if the human feedback needs to follow a particular form <cit.>. Some prior work incorporates the reward function as a way to weigh the likelihood term <cit.>. Keeping this work in mind, we incorporate the reward model into ProtoPNet as a way to reweigh prototypes post-training.
§.§ Example-based Models and Prototypical Part Networks
The field of interpretable deep learning is vast, with a plethora of explainability and interpretability methods available to the user. For a more complete overview of interpretable deep learning, please refer to <cit.>. To ground the discussion, we focus primarily on example-based models, one such example being ProtoPNet. Other example-based methods exist, such as the non-parametric xDNN <cit.> or SITE, which performs predictions directly from interpretable prototypes <cit.>. Among these, we focus on the ProtoPNet due to its intuitive reasoning structure.
Since its introduction by <cit.>, ProtoPNets have received a great deal of attention, and various iterations have been developed. Work has explored extending the ProtoPNet to different architectures such as transformers (<cit.>), or sharing class information between prototypes (<cit.>). <cit.> increase the spatial flexibility of ProtoPNet, allowing prototypes to change spatial positions depending on the pose information available in the image. ProtoPNets and variations have seen success in high-stakes applications, such as kidney stone identification (<cit.>) and mammography (<cit.>).
Many works have commented on how the original ProtoPNet tends to overemphasize spurious features, and they have taken different approaches to solving this issue. <cit.> introduce an explainability interface to ProtoPNet, allowing users to see the dependence of the prototype on certain image attributes like hue and shape. The authors claim that seemingly dissimilar or spurious prototypes share certain difficult-to-perceive features, like texture or contrast. <cit.> introduce a variation of the ProtoPNet, IAIA-BL, which biases prototypes towards expert-labelled annotations of classification-relevant parts of the image.
Similar to how we provide human feedback at the interpretation level, <cit.> introduce the ProtoPDebug, where a user labels a prototype and image pair as "forbidden" or "valid", and a fine-tuning step maximizes the distance between learned prototypes and patches in the forbidden set and minimizes the distance between learned prototypes and patches in the valid set. While also incorporating human feedback, <cit.> do not ground their method in RLHF, but instead include the binary feedback as a supervised constraint in the ProtoPNet loss function. Learning a reward function via ratings allows us to simultaneously increase the interpretability of the prototypes and develop an evaluation metric for the quality of a particular prototype. Compared to previous approaches, reward reweighing, reselection, and retraining allows for fast collection of high-quality human feedback data and the construction of a reward model that measures prototype quality while increasing the interpretability and the performance of the model.
§ PROTOTYPICAL PART NETWORK (PROTOPNET)
In this section, we describe the base architecture used in our method, the Prototypical Part Network (ProtoPNet) introduced in <cit.>. The ProtoPNet aims to introduce interpretability to otherwise uninterpretable image classifiers. In place of predicting from an arbitrary representation, the model makes a classification based on part attention and similar prototypical parts of an image. The general reasoning of a model is to classify an unseen image by finding training images with similar prototypical parts to those of the unseen image. This approach allows the user to interrogate the reasoning of the model, and clearly see which parts of the image led to the model's classification.
§.§ Description
Here we briefly describe the ProtoPNet, adopting the notation used in <cit.>. The ProtoPNet architecture builds on a base convolutional neural network f, which is then followed by a prototype layer denoted g_p, and a fully connected layer h. Typically, the convolutional features are taken from pretrained models like VGG-19, ResNet-34, or DenseNet-121.
The ProtoPNet injects interpretability into these convolutional architectures with the prototype layer g_p, consisting of m prototypes P = {p_j}^m_j=1 typically of size 1×1× D, where D is the depth of the convolutional output f(x). By keeping the depth the same as the output of the convolutional layer, but restricting the height and width to be smaller than that of the convolutional output, the learned prototypes select a patch of the convolutional output. Reversing the convolution leads to recovering a prototypical patch of the original input image x. Using upsampling, the method constructs an activation pattern per prototype p_j.
To use the prototypes to make a classification given a convolutional output z=f(x), ProtoPNet's prototype layer computes a max pooling over similarity scores: g_p_j(z) = max_z̃∈patches(z)log((‖z̃ - p_j‖_2^2 + 1)/(‖z̃ - p_j‖_2^2 + ϵ)), for some small ϵ < 1. This function is monotonically decreasing with respect to the distance, with small values of ‖z̃ - p_j‖_2^2 resulting in a large similarity score g_p_j(z). Assigning m_k prototypes for all K classes, such that ∑_k=1^K m_k = m, the prototype layer outputs a vector of similarity scores that matches parts of the latent representation z to prototypical patches across all classes. The final layer in the model is a linear layer connecting similarities to class predictions.
In order to ensure that the prototypes match specific parts of training images, during training the prototype vectors are projected onto the closest patch in the training set. For the final trained ProtoPNet, every p_j corresponds to some patch of a particular image.
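A compact NumPy sketch of the similarity computation and the projection ("push") step described above follows; the 1×1×D prototype size matches the description, while the specific shapes and random values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, D, m = 7, 7, 128, 10            # conv map size, depth, number of prototypes (toy)
eps = 1e-4

def similarity_scores(z, prototypes):
    """z: (H, W, D) conv output; prototypes: (m, D) 1x1xD prototypes.
    Returns the max-pooled log-similarity g_{p_j}(z) for each prototype."""
    patches = z.reshape(-1, D)                                          # each 1x1 spatial patch
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (HW, m) squared distances
    sim = np.log((d2 + 1.0) / (d2 + eps))                               # large when distance is small
    return sim.max(axis=0)                                              # max over patches

def push(prototypes, class_conv_outputs):
    """Project each prototype onto its closest patch among the given conv outputs."""
    patches = np.concatenate([z.reshape(-1, D) for z in class_conv_outputs], axis=0)
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return patches[d2.argmin(axis=0)]                                   # (m, D)

z = rng.normal(size=(H, W, D))
prototypes = rng.normal(size=(m, D))
print(similarity_scores(z, prototypes).shape)   # (m,)
```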
§.§ Limitations
While ProtoPNet is capable of providing interpretable classifications, the base training described in <cit.> results in prototypes that are inconsistent and represent spurious features of the image (<cit.>). Additionally, same-class prototypes will often converge to the same part of the image, resulting in duplicate prototypes.
<cit.> note that a prototype whose top L (usually L=5) closest training image patches come from different classes than the target class tends to be spurious and inconsistent, focusing on features like the background. To remedy this issue, they introduce a pruning operation, removing these prototypes entirely. While pruning does remove dependency on some subpar prototypes, we find that pruning still leaves some prototypes that rely on spurious and inconsistent features (Table <ref>) and does not improve accuracy. We also find that duplicate prototypes still occur after the pruning operation. We visualize subpar prototypes in Figure <ref>. For more examples of low-quality prototypes, please see the supplementary material.
§ HUMAN FEEDBACK AND THE REWARD REWEIGHED, RESELECTED, AND RETRAINED PROTOTYPICAL PART NETWORK (R3-PROTOPNET)
Inspired by the recent advances in reinforcement learning with human feedback (RLHF) <cit.>, the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet) utilizes a learned reward model to fine-tune prototypes. In place of pruning prototypes and sacrificing potential information, we demonstrate that incorporating human feedback into the training of the ProtoPNet improves prototype quality while increasing ensemble accuracy. In this section, we describe the collection of high-quality human feedback data, our reward model, and how we incorporate the reward model into the training loop via a three-stage training procedure.
§.§ Human Feedback Collection
A crucial aspect behind the success of RLHF methods is the collection of high-quality human feedback data. Unclear or homogeneous feedback may result in a poorly performing reward model <cit.>. The design of human feedback collection is vitally important to the training of a useful reward model.
The inherent interpretability of ProtoPNet leads to a useful benefit for RLHF. Given a trained ProtoPNet, it is possible for a knowledgeable user to directly critique the learned prototypes. Given a particular classification task, a human with enough expertise should be able to recognize if a particular prototype is "good" or "bad" <cit.>. In the case of classifying birds in the CUB-200-2011 dataset, one of the original classification tasks used in <cit.>, it is clear that if a prototype gives too much weight to the background of the image (spurious), or if the prototype corresponds to different parts of the bird when looking at different images (inconsistency), the learned prototype is not meaningfully or interpretably contributing to prediction. Given these prototypes that fail to contribute to prediction, a knowledgeable human trying to classify birds would rate these prototypes as "bad".
There are many different ways to elicit this notion of "goodness" from a user <cit.>. Although it is possible to incorporate many different forms of feedback into the R3-ProtoPNet, such as asking a user to compare prototypes to elicit preferences or ask for a binary value of whether a prototype is "good" or "bad", we found most success with asking the user to rate a prototype on a scale from 1 to 5. While scalar ratings can be unstable across different raters, with a clear, rule-based rating method, rating variance is reduced and it is possible to generate high-quality labels. An example rating scale on the CUB-200-2011 dataset is provided in Figure <ref>.
§.§ Reward Learning
We note that, when a user provides feedback on a prototype, it is not the training image or the model prediction that the user is providing feedback on, but the prototype's resulting interpretation: the activation patterns. Our task is therefore different from RLHF applied to language modeling or RL tasks (<cit.>, <cit.>), where human feedback is provided on the model output or resulting state. We thus collect a rating dataset 𝒟 = {(x_i, y_i, h_i,j, r_i,j)}_i=1,j=1^n,m, where x_i,y_i are the training image and label, and h_i,j and r_i,j are prototype p_j's activation pattern on image x_i and the corresponding user-provided rating. We note that collecting preferences for this entire dataset is prohibitive and unnecessary, so we only collect a subset.
Given the dataset 𝒟, we generate the induced comparison dataset, whereby each entry in 𝒟 is paired with one another. Given i≠ i' and/or j≠ j', we populate a new paired dataset, 𝒟_paired, which consists of the entries of 𝒟 indexed by i,j,i',j', and a comparison c, which takes values -1, 0, 1. If the left-hand sample is greater, and therefore considered higher-quality, r_i,j > r_i',j', then c = -1. If the right-hand sample is greater r_i,j < r_i',j', then c = 1. We note that, during learning, we exclude entries with c=0 to increase the contrast between pairs. This synthetic construction allows us to model the reward function, r(x_i, h_i,j), via the Bradley-Terry Model for pairwise preferences <cit.>. We train this model with the same loss function as in <cit.>, a cross-entropy loss over the probabilities of ranking one pair over the other. This synthetic construction combinatorially increases the amount of preference data, allowing us to train a high-quality reward model on relatively small amounts of quality human feedback data.
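A schematic PyTorch sketch of this pairwise objective is shown below; `reward_model` is assumed to map an (image, activation map) batch to scalar rewards in (0, 1), and the toy stand-in at the bottom exists only to make the snippet runnable.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_model, x_a, h_a, x_b, h_b, c):
    """c = -1 if the left item was rated higher, +1 if the right item was.
    Ties (c = 0) are assumed to have been filtered out beforehand."""
    r_a = reward_model(x_a, h_a)                  # scalar rewards, shape (batch,)
    r_b = reward_model(x_b, h_b)
    # P(right preferred over left) under the Bradley-Terry model
    p_right = torch.sigmoid(r_b - r_a)
    target = (c > 0).float()                      # 1 if right preferred, 0 if left
    return F.binary_cross_entropy(p_right, target)

# toy usage with a stand-in reward model
toy_reward = lambda x, h: torch.sigmoid((x * h).mean(dim=(1, 2, 3)))
x_a, h_a, x_b, h_b = (torch.rand(4, 3, 8, 8) for _ in range(4))
c = torch.tensor([-1, 1, 1, -1])
print(bradley_terry_loss(toy_reward, x_a, h_a, x_b, h_b, c))
```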
§.§ Reward Reweighed, Reselected, and Retrained Prototypical Part Network (R3-ProtoPNet)
After having collected high-quality human feedback data and trained a reward model, we can now incorporate it into a fine-tuning framework to improve the interpretability of ProtoPNet. We incorporate the reward model via a three step process consisting of reward weighting, reselection, and retraining. Each step is described in more detail below.
§.§.§ Reward Reweighing
Although PPO is a popular option for RLHF (<cit.>), there is evidence that simpler fine-tuning algorithms can lead to similar performance increases (<cit.>). Inspired by the success and the ease of implementation of reward-weighted learning <cit.>, we develop a reward-weighted update for the ProtoPNet:
max_p_jℒ_reweigh(z_i^*, p_j) = max_p_j∑_i ∈ I(p_j) r(x_i, p_j) · 1/((1/λ_dist)‖z_i^* - p_j‖_2^2 + 1)
where z_i^* = argmin_z̃∈patches(f(x_i))‖z̃ - p_j‖_2^2, I(p_j) = {i | y_i∈class(p_j)}, and λ_dist is a fixed hyperparameter. We note that the objective ℒ_reweigh is a sum of inverse distances weighted by the reward of the prototype on each image. Since we only update the prototype p_j, the only way to maximize the objective is to minimize the distance between the prototype and image patches with high reward r(x_i, p_j). This causes the prototype to resemble high-reward image patches, improving the overall quality of the prototypes. Wanting to preserve prototypes that already have high reward, we only update prototypes whose mean reward is below γ = 0.45. λ_dist is included in the objective to rescale distances, since the closest distances are near zero. We find best performance with λ_dist = 100.
Practically, we find that optimizing this objective leads to locally maximal solutions: the resulting local updates tend not to modify prototypes with the lowest quality value of 1, but are more likely to improve prototypes with quality values of 2 or higher. If the prototype p_j has high activation over the background of an image x_i, for example, the closest patches z_i^* in the training data will also be background patches, and the reward of the prototype will be low, leaving minimal room for change. This update alone cannot dramatically change the location of the patch that a prototype attends to in the image.
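The following is a gradient-ascent sketch of the reweighing update for a single prototype, under the simplifying assumption that the rewards r(x_i, p_j) and the closest patches z_i^* are held fixed during the inner optimization; the names and the optimizer choice are illustrative.

```python
import torch

lam_dist, gamma = 100.0, 0.45

def reweigh_prototype(p_j, closest_patches, rewards, lr=0.01, steps=50):
    """closest_patches: (n, D) tensor of z_i^* patches; rewards: (n,) tensor of r(x_i, p_j)."""
    if rewards.mean() >= gamma:          # high-reward prototypes are left untouched
        return p_j
    p = p_j.clone().requires_grad_(True)
    opt = torch.optim.SGD([p], lr=lr)
    for _ in range(steps):
        d2 = ((closest_patches - p) ** 2).sum(dim=1)
        objective = (rewards / (d2 / lam_dist + 1.0)).sum()
        loss = -objective                # ascend the objective by descending its negative
        opt.zero_grad()
        loss.backward()
        opt.step()
    return p.detach()

# toy usage
D, n = 128, 30
p_new = reweigh_prototype(torch.randn(D), torch.randn(n, D), torch.rand(n) * 0.4)
```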
§.§.§ Prototype Reselection
In order to improve low-quality prototypes that require significant manipulation, we introduce a reselection procedure based on a reward threshold. Given a prototype p_j, if 1/n_k∑_i∈ I(p_j)r(x_i, p_j) < α, where α is a pre-determined threshold and n_k is the number of training images in class k, we reselect the prototype. The reselection process involves iterating over patch candidates z'_i and temporarily setting the prototype p'_j = z'_i, where z'_i is chosen randomly from the patches of a randomly selected image x'_i in the class of p_j. If 1/n_k∑_i∈ I(p_j)r(x_i, p'_j) > β, where β is an acceptance threshold, and if no existing prototype already matches the candidate patch z'_i, then we accept the patch candidate as the new prototype. We found that α = 0.15 and β = 0.50 led to good performance. We refer to the combination of reweighting and reselection as the R2 update step, and call the corresponding trained model the R2-ProtoPNet. A sketch of the reselection procedure is given below.
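Here is a minimal sketch of the reselection step, with `mean_reward_fn` standing in for the class-averaged reward 1/n_k ∑_i r(x_i, ·) and a capped number of candidate draws added for termination (an assumption not stated above):

```python
import torch

alpha, beta = 0.15, 0.50

def maybe_reselect(p_j, class_patches, mean_reward_fn, existing_prototypes, max_tries=100):
    """If prototype p_j has mean reward below alpha, draw random candidate patches
    from its class until one has mean reward above beta and is not a duplicate.
    class_patches: (num_patches, D) patches drawn from images of p_j's class."""
    if mean_reward_fn(p_j) >= alpha:
        return p_j
    for _ in range(max_tries):
        cand = class_patches[torch.randint(len(class_patches), (1,)).item()]
        is_dup = any(torch.allclose(cand, q) for q in existing_prototypes)
        if mean_reward_fn(cand) > beta and not is_dup:
            return cand
    return p_j   # keep the old prototype if no acceptable candidate is found

# toy usage
D = 128
patches = torch.randn(200, D)
toy_mean_reward = lambda p: torch.sigmoid(p.mean()).item()
p = maybe_reselect(torch.randn(D), patches, toy_mean_reward, [torch.randn(D)])
```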
The reasoning process behind our prototype reselection method takes inspiration from the original push operation in <cit.>. Similar to how ProtoPNet projects prototypes onto a specific training image patch, here we reselect prototypes to be a particular reward-filtered training image patch. With a high enough acceptance threshold β, this forces the elimination of low reward prototypes while preserving the information gain of having an additional prototype.
One possible alternative approach is to instead search over the training patches, and select those patches with the highest reward. We found that randomly selecting patches, in place of searching for patches with the highest reward, led to higher prototype diversity and less computation time. As discussed in Section <ref>, it is possible that a reward model that more explicitly accounts for prototype diversity could alleviate the duplicate issue, but we leave this to future work.
While we do not use a traditional reinforcement learning algorithm to fine-tune our model as is typically done in RLHF <cit.>, pairing the reselection and fine-tuning steps together resembles the typical explore-exploit trade-off in RL problems. We see that fine-tuning with our reward model leads to exploit behavior, improving upon already high-quality prototypes. At the same time, the reselection step serves as a form of exploration, drastically increasing the quality of uninformative prototypes. We find that these similarities are enough to improve the quality of ProtoPNet, as discussed in the next section.
§.§.§ Retraining
A critical step missing in the R2 update is a connection to prediction accuracy. As discussed in Section <ref>, without incorporating predictive information, performing the reward update alone results in lowered test accuracy. Since the above updates only act on the prototypes themselves, not the rest of the network, the result is a misalignment between the prototypes and the model's base features and final predictive layer. The reward update guides the model towards more interpretable prototypes, but the reward update alone fails to use the higher quality prototypes for better prediction.
To account for the lack of predictive performance, the final step of R3-ProtoPNet is retraining. Simply retraining with the same loss function used in the original ProtoPNet update results in the realignment of the prototypes and the rest of the model. Although one could worry that predictive accuracy would reduce the interpretability of the model <cit.>, we find that retraining increases predictive accuracy while maintaining the quality increases of the R2 update. The result is a high accuracy model with higher-quality prototypes. We explore evidence of this phenomenon and why this is the case in the following section.
§ EXPERIMENTS
Here we discuss the results of training the R3-ProtoPNet on the CUB-200-2011 dataset, the same dataset as used in <cit.>. We demonstrate that the R3-ProtoPNet leads to higher-quality prototypes across base model architectures and prototype configurations while not sacrificing predictive performance.
§.§ Datasets
R3-ProtoPNet requires two datasets: the original dataset for initial training, and a dataset of scalar ratings of activation patterns. To offer better comparison against the original ProtoPNet, we use the same dataset for initial training that was used in <cit.>, the CUB-200-2011 dataset <cit.>. The CUB-200-2011 dataset consists of roughly 30 images for each of 200 different bird species. We employ the same data augmentation scheme used in <cit.>, which adds additional training data by applying a collection of rotation, shear, and skew perturbations to the images, resulting in a larger augmented dataset.
For the collection of the activation pattern ratings, we only provide activation patterns overlaid on the original images to the rater. Although it is possible to crowdsource the collection of human preference data, we found that it was possible to increase the performance of ProtoPNet with relatively small amounts of human preference data that we ourselves collected. We rated a total of 700 prototype-image pairs according to the scale approach described in Figure <ref>, which we justify in the next subsection.
§.§ Architectures and Training
Similar to <cit.>, we study the performance of R3-ProtoPNet across three different base architectures: VGG-19, ResNet-34, and DenseNet-121. While the original ProtoPNet sets the number of prototypes per class at m_k = 10, we additionally run the VGG19 architecture with m_k=5 prototypes to explore model performance when the number of prototypes is limited. No other modifications were made to the original ProtoPNet architecture. We train for 100 epochs and report results for the best performing model.
The reward model r(x_i, h_i) is similar to the base architecture of the ProtoPNet. Two ResNet-50 base architectures take in the input image x_i and the associated activation pattern h_i separately, and both have two additional convolutional layers. The outputs of the convolutional layers are concatenated and fed into a final linear layer with sigmoid activation to predict the Bradley-Terry ranking. Predicted rewards are therefore bounded in the range (0, 1). We train the reward model for 5 epochs on a comparison dataset of 71,875 paired images and preference labels, and evaluate on 13,831 testing pairs. The reward model achieves 91.54% test accuracy when trained on the whole dataset, and we additionally find that the reward model converges to roughly 91% test accuracy on a comparison dataset generated from at least 300 rated activation patterns.
§.§ Evaluation Metrics
To evaluate the performance of R3-ProtoPNet, we compare it to ProtoPNet using three metrics: test accuracy, reward, and prototype class mismatch. We use test accuracy to measure the predictive performance of the models. As the above section demonstrates, the learned reward model achieves high accuracy in predicting which prototype ranks above another in accordance with human preferences, so we therefore use it as a measure of prototype quality. Regarding the class mismatch metric, <cit.> note that low-quality prototypes tend to have close training images that come from different classes. To evaluate the effect of R3 updating, we compute the average class mismatch across all prototypes for a given model for the Top-5 and Top-10 closest training images.
§.§ Results
After training ProtoPNet, running the R2 update step, and then performing retraining, we see several trends across multiple base architectures. In Table <ref>, we report the test accuracy of the different base architectures across stages of R3-ProtoPNet training. Generally, the test accuracy from ProtoPNet substantially decreases after applying the R2 update, but retraining tends to recover most of the predictive loss. This accuracy maintenance demonstrates that it is possible to align prototypes with human preferences without sacrificing predictive power.
In Table <ref>, we report the average reward of all prototypes on all test images for a given base architecture. We see that ProtoPNet achieves an average reward between 0.48 and 0.57 across architectures. Investigating the distribution of rewards further in Figure <ref>, it is revealed that ProtoPNet tends to produce a bimodal distribution over prototype rewards, with some bias towards low-quality and high-quality prototypes. Applying the R2 update results in the desired behavior, increasing the average reward and shifting the distribution of rewards upwards. We additionally see that the retraining step in R3-ProtoPNet actually continues to increase average reward across all base architectures while slightly increasing the spread of the reward distribution.
Finally, we report the Top-5 and Top-10 class mismatch in Table <ref>. Here we see an interesting phenomenon. Across all base architectures, ProtoPNet has an average class mismatch of at least half of the Top-L closest image patches, for both L=5 and L=10. Although performing the R2 update greatly increases the average reward for all base architectures except ResNet-34, class mismatch is only marginally reduced, with all base architectures still mismatching over half of the closest Top-L training image patches. We see that R3-ProtoPNet greatly reduces class mismatch for the m_k=5 VGG-19 base architecture, but tends to only marginally reduce class mismatch in the m_k=10 case.
§.§ Discussion
Given the results, we see that R3-ProtoPNet manages to increase the quality of learned prototypes without sacrificing predictive performance. While the ResNet-34 and DenseNet-121 base architectures do see a slight performance decrease, producing an ensemble of trained R3-ProtoPNets results in an accuracy increase over an ensemble of the original trained ProtoPNets. We see that R3-ProtoPNet results in a substantial increase in average test reward, verifying that prototype quality is increasing. There is still much room for improvement: class mismatch for 10 prototypes does not decrease across all architectures, while there is some decrease for the 5-prototype VGG-19-based ProtoPNet. Overall, these results demonstrate that incorporating reward information into the ProtoPNet via reweighing, reselection, and retraining does increase the interpretability of ProtoPNets and, when incorporated into an ensemble, increases predictive performance.
§ LIMITATIONS AND FUTURE WORK
While R3-ProtoPNet improves interpretability and predictiveness in an ensemble, there is plenty of room for improvement. We note that the reward model is trained on ratings of a single image and heatmap, which constrains it to measuring overlap between the prototype and the object of interest, but it is quite possible to extend ratings to multiple images and heatmaps. This would allow the reward model to better learn cross-image preferences, such as consistency. We hope that this could alleviate the duplicate issue as well: R3-ProtoPNet fails to entirely eliminate duplicates, with several high-reward prototypes converging to the same part of the image.
While this work investigated increasing the performance of ProtoPNet, it is possible to extend the R3 update to other extensions of the ProtoPNet. A major benefit of reward fine-tuning is its flexibility in application, and we expect that combining the R3 update with other variations of the ProtoPNet would result in further increased performance gains. Combining multiple feedback modalities, such as the binary feedback used in ProtoPDebug <cit.>, could further increase model performance.
A final limitation of R3-ProtoPNet and other methods that rely on human feedback is that the model itself might be learning features that, while seemingly confusing to a human, are helpful and meaningful for prediction. <cit.> argue that the ProtoPNet can predict using non-obvious features such as texture and contrast, which might be penalized by a learned reward function. Future work is necessary to investigate how ProtoPNet variants could critique human feedback and argue against a learned reward function.
§ CONCLUSION
In this work, we propose R3-ProtoPNet, a method that uses a learned reward model of human feedback to improve the meaningfulness of learned prototypical parts. We find that ensembling multiple R3-ProtoPNets results in increased performance over original ProtoPNet ensembles. Considering the high performance of the reward model, we use it as a measure of prototype quality, allowing us to critique the interpretability of ProtoPNet through a human lens. The ability of reward learning to quantify qualitative human preferences makes reward-based fine-tuning a promising direction for the improvement of interpretable deep models.
|
http://arxiv.org/abs/2307.04244v1 | 20230709184758 | Reinforcement Learning for Joint Design and Control of Battery-PV Systems | [
"Marine Cauz",
"Adrien Bolland",
"Bardhyl Miftari",
"Lionel Perret",
"Christophe Ballif",
"Nicolas Wyrsch"
] | math.OC | [
"math.OC"
] |
§ INTRODUCTION
§.§ Background and related work
The current transition to renewable energy sources requires rethinking new energy systems, characterized by decentralized and intermittent production. The development of these systems typically occurs in two distinct steps, namely the design and the control of these systems. The design problem involves identifying the design variables, which are the optimal sizes of the energy system components. The control problem aims to determine the control variables, which are the optimal actions to operate the energy system components. Design and control should jointly minimize a cost function, yet they are typically solved sequentially. This paper explores the value of solving the design and control tasks jointly with a reinforcement learning (RL) method, as appropriate design is intrinsically linked to subsequent operation. To evaluate the effectiveness of this approach, its performance is compared with that of the Mixed Integer Linear Programming (MILP) method.
On the one hand, RL is a data-driven approach where an agent learns to make decisions in a dynamic environment through trial-and-error experience. It involves an agent interacting with an environment and receiving feedback in the form of rewards or penalties based on its actions, with the goal of maximizing its cumulative reward over time. On the other hand, Mixed Integer Linear Programming (MILP) is a mathematical optimization technique used to solve problems with linear constraints and integer variables. It involves formulating a mathematical model of the problem and using an optimization algorithm to find the best solution. Both RL and MILP methods will be used to benchmark the results of a one-year time series.
As highlighted in a recent review <cit.>, RL-based approaches have significant potential, yet not fully exploited, in the energy field. Specifically, the review points out that energy systems are typically designed using either MILP or heuristic methods, with RL approaches dedicated to their control. Integrating RL beyond energy flow control would open new interesting research questions. In <cit.>, RL is used to support distributed energy system design due to its flexibility and model-free nature, which allows it to be adapted to different environments at different scales. However, they did not simultaneously address the dispatch and design problem as a distributed reward problem, as done in this work. Instead, they used a cooperative coevolution algorithm (COCE) to assist the optimization process. Jointly addressing the design and operation of energy systems is a key issue, especially for multi-energy systems, as discussed in <cit.>, where multi-objective evolutionary algorithms (EMOO) and MILP are used to integrate biomass technologies in a multi-energy system. In <cit.>, the focus is on evolution algorithms and their comparison with deep reinforcement learning strategies. After clarifying the fundamental differences between the two approaches, the discussion revolves around their ability to parallelize computations, explore environments, and learn in dynamic settings. The potential of hybrid algorithms combining the two techniques is also investigated, along with their real-world applications.
RL-based frameworks are successfully applied to the operation of energy systems <cit.>, although these methods have not, to the authors' knowledge, been extended to solve real-world design problems in energy systems. As reviewed in <cit.>, RL-based frameworks are popular for addressing electric vehicle (EV) charging management, mostly with variants of the DQN algorithm, and outperform other traditional methods. In <cit.>, various deep RL algorithms are benchmarked against rule-based control, model predictive control, and deterministic optimization in the presence of PV generation. The study, which aims to increase PV self-consumption and state-of-charge at departure, demonstrates the potential of RL for real-time implementation. For solving V2G control under price uncertainty, <cit.> modeled the problem with a Markov Decision Process (MDP) <cit.>, a mathematical framework for modeling systems where stochasticity is involved. Additionally, a linear MDP formulation is also used in <cit.> to address the coordination of multiple charging points at once. Finally, in <cit.>, a data-driven approach is defined and evaluated for coordinating the charging schedules of multiple EVs using batch reinforcement learning on a real use case. In conclusion, these studies provide valuable insights and tools for optimizing and improving energy systems, demonstrating the potential of RL to tackle the operation of complex energy systems.
§.§ Contribution
This work aims to evaluate the relevance of jointly designing and controlling an energy system using a deep RL approach. To achieve this purpose, two methods are benchmarked to address jointly the design and control problem of a real-world PV-battery system. The first method, MILP, computes the optimal design and control solution over a sequence of historical data. The second method, RL, computes the optimal design and a control policy through interactions with a simulator by trial and error. The specific RL algorithm used in this study is referred to as Direct Environment and Policy Search (DEPS) <cit.>. DEPS extends the REINFORCE algorithm <cit.> by combining policy gradient with model-based optimization techniques to parameterize the design variables. In this framework, an agent looks for the design and control variables that jointly maximize the expected sum of rewards collected over the time horizon of interest. The outcomes of both methods are discussed in the subsequent sections of this paper.
This paper is structured as follows. Section <ref> provides two formulations of the energy system, one designed for MILP and the other for RL, and discusses the methodology used to benchmark the results. In Section <ref>, the outcomes of the study are presented, and these results are discussed in Section <ref>, with a focus on the potential of RL for joint design and control of energy systems. Finally, the paper concludes with a summary in Section <ref>.
§ METHOD
§.§ Problem statement
The study is carried out for the energy system illustrated in Figure <ref>, whose components are detailed in the subsections below. Overall, the system refers to an office building that has been fitted with a PV installation and a stationary lithium-ion battery to meet its own electricity consumption. Additionally, the building is connected to the electricity grid.
The objective of the study is to jointly propose a design of the PV and battery components, as well as a control strategy for the described energy system, in order to minimize the total cost of its ownership. In the following Subsection <ref>, the system is expressed as a mathematical program made up of constraints and objectives. To be more precise, it is tackled as a Mixed-Integer Linear Program. Subsection <ref> formulates a surrogate environment as an MDP. The latter represents the same dynamics and rewards as the original problem, but the objective is to maximize the sum of rewards gathered over one week, on expectation over the 52 weeks of the year of data. By doing so, it allows the use of the RL algorithm, while the optimal solution is expected to be close to the solution of the original problem. Results are discussed in Section <ref>. Finally, for both methods, the energy system is studied over a finite time horizon T, on which all costs are evenly distributed across each time step t. The methodology and the context of the experiments conducted are specified in Subsection <ref>.
§.§ Energy system
This subsection describes the physical constraints that apply to the components of the energy system. These components, in sequential order, are the PV panels, the battery, the electrical load and the power grid. The design and control variables and the parameters of the whole system, which is modeled as a discrete-time system, are gathered in Table <ref> and Table <ref>, respectively.
§.§.§ PV system
The objective of the PV installation is to generate electricity on-site to fulfill the local electricity demand. The design of this component is one of the two design variables that will result from the optimization process. The range of the suitable nominal power P^nom, corresponding to its design variable, is set in Eq. (<ref>) and the production at time t is directly proportional to this nominal design variable as shown in Eq. (<ref>). The normalized annual curve p_t^prod corresponds to the actual hourly averaged PV production power of the building.
P^nom_min ≤ P^nom≤ P^nom_max
P^prod_t = P^nom·p^prod_t
The capex and opex values, which are respectively the initial investment and the annual maintenance cost, of the installation are made up of a fixed and a variable part to take account of potential scale effects.
cx_pv = cx_pv^fix + cx_pv^var · P^nom
ox_pv = ox_pv^fix + ox_pv^var · P^nom
§.§.§ Battery
To maximize the potential for on-site self-consumption, a stationary lithium-ion battery is available. The design of this component, corresponding to its capacity B, is the second design variable to determine during the optimization process. This battery capacity can vary in the range of Eq. (<ref>).
B_min ≤ B ≤ B_max
The state of charge variable, soc_t, changes as a function of the power exchanged with the battery denoted P^B_t. This power is constrained, for charging, by the nominal capacity, Eq. (<ref>), and, for discharging, by the energy stored, Eq. (<ref>). Additionally, the battery efficiency, denoted η^b, is assumed identical for both the charging and the discharging processes.
P^B_t ≤B - soc_t/Δ t if P^B_t ≥ 0
P^B_t ≥-soc_t/Δ t if P^B_t ≤ 0
Knowing the power exchanged with the battery, the state of charge can be updated:
soc_t+1 = soc_t + P^B_t · η^b · Δ t   if P^B_t ≥ 0
soc_t+1 = soc_t + (P^B_t / η^b) · Δ t   if P^B_t < 0
At the beginning of the optimization, i.e., t=0, the battery soc is set to half of its capacity value, to initialize the model. Moreover, to avoid any artificial benefit, the final soc is constrained to be equal to the initial value, as formulated in Eq. (<ref>).
soc_{t=0} = B/2
soc_{t=0} = soc_{t=T}
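A minimal sketch of this state-of-charge bookkeeping (our own helper, with a placeholder efficiency value and Δ t in hours) is:

def update_soc(soc, p_batt, capacity, eta_b=0.95, dt=1.0):
    # Clip the requested battery power to its feasible range and return the next
    # state of charge (kWh). p_batt > 0 charges, p_batt < 0 discharges.
    p_batt = min(p_batt, (capacity - soc) / dt)   # cannot exceed remaining capacity
    p_batt = max(p_batt, -soc / dt)               # cannot draw more than is stored
    if p_batt >= 0:
        soc_next = soc + p_batt * eta_b * dt      # charging with efficiency losses
    else:
        soc_next = soc + (p_batt / eta_b) * dt    # discharging with efficiency losses
    return soc_next, p_batt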
Similar to the PV plant, the capex and opex of the battery consist of both fixed and variable parts.
cx_B = cx_B^fix + cx_B^var · B
ox_B = ox_B^fix + ox_B^var · B
§.§.§ Electrical load
The electrical load used in this project is real data from an office building in Switzerland. This consumption is monitored on an hourly basis and reflects the consumption patterns of office days. The building load power, P^load_t, is provided as input and corresponds to an actual measurement sampled hourly over a year.
§.§.§ Electrical grid
To absorb excess solar production or to meet the electricity consumption in the absence of local production, the system is connected to the low-voltage electrical grid. This connection is modeled here as a single balance equation, called the conservation of electrical power, shown in Eq. (<ref>). The power imported from the grid is referred to as P^imp_t and the power injected is referred to as P^exp_t.
P^prod_t + P^imp_t = P^load_t + P^B_t + P^exp_t
The grid power value at each time t is derived from Eq. <ref>, and the power limit can be described as follows.
0 ≤ P^imp_t ≤ P^max_grid
0 ≤ P^exp_t ≤ P^max_grid
Based on the import and export power, the total cost of supplying electricity through the network C_grid can be computed.
C_grid = ∑_t=0^T-1 C_grid,t = ∑_t=0^T-1 P^imp_t · C^imp_grid,t - P^exp_t · C^exp_grid,t
§.§.§ Objective function
The objective of this study is to propose a design for the PV and battery components, along with their dispatch, with the aim of minimizing the total cost of ownership. This objective function, of minimizing the overall cost of the system, can be formulated as follows.
min totex
The total cost of the system, denoted totex, is composed of the capex and opex of both PV and battery components, as well as the grid cost.
totex = opex + capex + C_grid
opex = ox_pv + ox_B
capex = cx_pv· R_pv + cx_B· R_B
The opex and grid cost are computed over a finite time period T. However, the capex is an investment cost that is independent of T. To enable the adaptation of the investment cost to the project duration, an annuity factor R adjusts the capex for the finite time horizon T. This annuity factor is computed according to Eq. (<ref>), by taking into account the values of T, the annual discount rate r, and the lifetime L of the component. This formula includes a scaling factor T/8760 to adapt R to the period T, based on the assumption that T is expressed in hours since 8760 is the number of hours in a year.
R = [r · (1 + r)^L / ((1 + r)^L - 1)] · T/8760
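For illustration, the annuity factor can be computed with a small helper such as the one below; the example values are placeholders, not the ones used in the case study.

def annuity_factor(r, lifetime_years, horizon_hours):
    # Annuity factor R spreading an investment over `horizon_hours` hours,
    # given the annual discount rate r and the component lifetime in years.
    annual = r * (1 + r) ** lifetime_years / ((1 + r) ** lifetime_years - 1)
    return annual * horizon_hours / 8760.0

# Example with placeholder values: spread a 25-year asset over one week.
# R = annuity_factor(r=0.05, lifetime_years=25, horizon_hours=168)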
§.§ MDP formulation
This section presents an alternative formulation of the problem as a Markov Decision Process (MDP), which is a well-established framework for modeling sequential decision-making problems. This alternative formulation is required for applying DEPS. More precisely, an MDP(S, A, P, R, T), as presented below, consists of the following elements: a finite set of states S, a finite set of actions A, a transition function P, a rewards function R, and a finite time horizon T.
§.§.§ State Space
The state of the system can be fully described by
s_t = (h_t, d_t, soc_t, P^prod_t, P^load_t) ∈ S = {0, ..., 23} × {0, ..., 364} × [0, B] × ℝ_+ × ℝ_+
* h_t ∈{0, ..., 23} denotes the hour of the day at time t. The initial value is set to 0.
* d_t ∈{0, ..., 364} denotes the day of the year at time t. The initial value is set randomly.
* soc_t is the state of charge of the battery at time t, this value is upper bounded by the nominal capacity of the installed battery B. The value is set initially to a random value during the training process and to half of its capacity during the validation process.
* P^prod_t represents the expected PV power at time t. This value is obtained by scaling normalized historical data p^prod_t with the total installed PV power (P^nom) and considering h_t and d_t values.
* P^load_t denotes the expected value of the electrical load at time t. The load profile is determined using historical data that corresponds to the same hour and day as the PV power.
§.§.§ Action Space
The action of the system corresponds to the power exchanged with the battery.
a_t = (P^B_t)
After projecting the action to fall within the acceptable range specified by Eq. (<ref>) and (<ref>), the resulting value is used as a_t, as shown in Eq. (<ref>). It corresponds to the power exchanged with the battery, denoted P^B_t; this value is positive when the battery is being charged and negative when it is being discharged.
P^B_t =
(B - soc_t)/Δ t   if P^B_t > (B - soc_t)/Δ t
-soc_t/Δ t   if P^B_t < -soc_t/Δ t
P^B_t   otherwise
§.§.§ Transition Function
Each time step t in the system corresponds to one hour, which implies the evolution of the state variable h specified in Eq. (<ref>); every 24 time steps, the day is incremented by 1.
h_t+1 = (h_t + 1) mod 24
d_t+1 = d_t + Int((h_t + 1)/24)
where the function Int takes the integer value of the expression in Eq. (<ref>).
The soc_t of the battery is updated as Eq. (<ref>), based on the projected action value, and all other state variables are taken from input data.
P^prod_t+1 = p^prod_h_t+1, d_t+1· P^nom
P^load_t+1 = p^load_h_t+1, d_t+1
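Putting the pieces together, one step of this transition can be sketched as the following simplified simulator; the helper name, the placeholder efficiency value and the indexing of the hourly profiles by day and hour are our own choices, not the reference implementation.

def mdp_step(state, action, p_nom, capacity, pv_profile, load_profile,
             eta_b=0.95, dt=1.0):
    # state = (h, d, soc, pv, load); action is the requested battery power P^B_t.
    # pv_profile[d][h] is the normalized PV power and load_profile[d][h] the load.
    h, d, soc, _, _ = state
    # Project the action onto the feasible battery power range.
    p_batt = min(action, (capacity - soc) / dt)
    p_batt = max(p_batt, -soc / dt)
    # Battery dynamics with charge/discharge efficiency.
    soc = soc + (p_batt * eta_b if p_batt >= 0 else p_batt / eta_b) * dt
    # Clock update: hours wrap around and the day advances every 24 steps.
    h_next = (h + 1) % 24
    d_next = (d + (h + 1) // 24) % 365
    pv_next = p_nom * pv_profile[d_next][h_next]
    load_next = load_profile[d_next][h_next]
    return (h_next, d_next, soc, pv_next, load_next), p_batt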
§.§.§ Reward Function
The reward signal used to optimize the agent's actions in RL serves a similar aim as the objective function in the MILP formulation. Therefore, the reward here is the opposite of the totex defined in Eq. (<ref>). This cost is composed of (i) the investment cost, (ii) the operating cost and (iii) the cost of purchasing and reselling electricity from the grid defined in Eq. (<ref>).
r_t = - totex_t
= - capex - opex - C_grid,t
= - capex - opex - P^imp_t · C^imp_grid,t + P^exp_t · C^exp_grid,t
where the grid cost is the only time-dependent factor, while capex and opex are fixed values for a specific value of P^nom and B.
§.§ Methodology
This subsection discusses the fundamental differences between the two methods (i.e., MILP and RL), along with the experimental protocol employed to compare the results. As discussed briefly earlier, although both methods aim to solve the same problem, they are fundamentally different.
MILP is a method for solving problems that involve optimizing a linear function of variables that are either integer or constrained by linear equalities, such as the problem described in Subsection <ref>. The MILP algorithm solves the optimization problem by iteratively adjusting the values of the design and control variables, subject to the constraints, until it finds the optimal solution that maximizes or minimizes the objective function, depending on the problem's goal. This method is applied to the problem described in Subsection <ref> over a one-year time horizon (T=8760). The solution is said to be computed with perfect foresight, meaning that all variables are selected accounting for the future realization of (normally unknown) events in the time series, providing an optimistic upper bound on the true performance of the control and design. Concretely, the MILP problem is encoded in the Graph-Based Optimization Modelling Language (GBOML) <cit.> paired with the Gurobi solver <cit.>.
In contrast, RL is a stochastic optimization method that learns from experience through trial and error. In this study, we use DEPS <cit.>, an algorithm optimizing design and control variables in an MDP, such as the one described in Subsection <ref>, with a finite time horizon. The agent receives feedback in the form of rewards when it selects a particular design and performs specific actions. The objective of the agent is to maximize the expected cumulative reward, which drives it to learn a design and a control policy. Ideally, as with MILP, the time horizon should be annual, or cover the entire lifetime of the system, taking into account seasonal production and consumption fluctuations and/or equipment aging. However, such extended time horizons are unsuitable for this RL approach. Therefore, to strike a balance between a horizon that is short enough for DEPS and long enough to observe the consequences of decisions on the system, a horizon of T=168 hours, i.e., 7 days, is used. Additionally, for each simulation, the initial day is sampled uniformly from the year-long data set and the initial state of charge of the battery is also sampled uniformly at random. As the reward is optimized on expectation over all days, the resulting design and control policy is expected to account for the seasonality and other hazards in the historical data. The DEPS algorithm is trained for a predetermined number of iterations. The PV power and battery capacity values obtained from the last iteration of the algorithm are then taken as the values of the design variables, and the final policy is used for the control.
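As a rough illustration of the joint optimization idea, the sketch below collapses DEPS into a plain REINFORCE estimator in which the design variables are sampled from a learned distribution alongside the actions; DEPS itself uses a more refined, partly model-based gradient estimator, and env_reset/env_step are hypothetical wrappers around the MDP described above.

import torch
from torch import nn
from torch.distributions import Normal

T = 168                                              # one-week horizon (hours)
design_mean = nn.Parameter(torch.zeros(2))           # (PV power, battery capacity)
design_log_std = nn.Parameter(torch.zeros(2))
policy = nn.Sequential(nn.Linear(5, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam([design_mean, design_log_std, *policy.parameters()], lr=1e-3)

for iteration in range(100_000):
    # Sample a candidate design and keep its log-probability (score function).
    design_dist = Normal(design_mean, design_log_std.exp())
    design = design_dist.sample()
    log_prob = design_dist.log_prob(design).sum()

    state = env_reset()                              # random start day and initial soc
    weekly_return = 0.0
    for t in range(T):
        out = policy(torch.as_tensor(state, dtype=torch.float32))
        action_dist = Normal(out[0], out[1].exp())   # mean and log-std of P^B_t
        action = action_dist.sample()
        log_prob = log_prob + action_dist.log_prob(action)
        state, reward = env_step(state, action.item(), design.tolist())
        weekly_return += reward                      # reward r_t = -totex_t

    loss = -log_prob * weekly_return                 # REINFORCE over design and policy
    optimizer.zero_grad(); loss.backward(); optimizer.step()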
Unlike MILP, the RL method does not guarantee optimality; the experimental protocol therefore aims to compare both results to see how far the RL solution is from the optimal one. The experimental protocol is conducted in two distinct scenarios to isolate the impact of adding the design variables to the joint problem. The first, control-only scenario (CTR) assesses the control variables when the design variables are fixed. The second scenario, considering both control and design (CTR & DES), allows for flexibility in designing the battery capacity and PV power, the two design variables. To benchmark the performance of both methods in each scenario (i.e., CTR and CTR & DES), the reward and income values are reported. The reward value is computed according to Eq. (<ref>) for the RL method. To estimate the average reward value for the MILP method, all reward values r_t are averaged over time horizons of T=168. Comparing the average cumulative reward value of the MILP method to that of the RL method provides a first benchmark for evaluating the performance of both approaches. However, as shown in Eq. (<ref>), only the grid cost is time-dependent, while the capex and opex depend solely on investment decisions. Therefore, the income value is defined as the average reward value restricted to the grid cost, and can be computed as follows:
Income = ∑_t=0^T-1 - P^imp_t · C^imp_grid,t + P^exp_t · C^exp_grid,t
Finally, the experiments are performed in two steps. First, to perform a simple comparative study, working on a same finite time horizon T=168, both methods are conducted using data from a single summer week. Second, the data set is extended to include the one-year data set.
§ RESULTS
The energy system presented in Section <ref> is solved using the RL and MILP approaches with parameter values listed in Table <ref>. To differentiate the performance of the DEPS algorithm for control and design aspects, the study is conducted in two distinct scenarios. The first control-only scenario (CTR) assessed the control aspect for fixed design variables, meaning that the PV power and battery capacity are fixed. The second scenario, considering both the control and design (CTR & DES) aspects, allows for flexibility in designing the battery capacity and PV power. The two following subsections describe the results of the study performed in two steps, over the one-week and one-year data set, respectively.
§.§ A one-week toy example
In order to perform a simple comparative study, both CTR and CTR & DES analyses were conducted using data from a single summer week. This enables both methods to be optimized on the same time horizon. This means training the RL algorithm on the same 168 time steps, with an initial day uniformly randomly selected over the week but an initial hour fixed at midnight. Additionally, during the training phase, the battery's initial soc is uniformly sampled so that the RL algorithm is presented with a large variety of scenarios, improving the quality of the learned policy. The results for both the CTR scenario, where the design variables (i.e., the PV power and battery capacity) are fixed, and the CTR & DES scenario, where the PV and battery design variables are optimized in addition to control, are presented in Table <ref>.
§.§.§ RL and MILP optimal objective values are similar in both scenarios but with different designs in the control and design scenario.
Table <ref> shows that in the CTR scenario, the results of the RL approach are similar to those of MILP. This confirms that the DEPS algorithm is able to converge to the optimal solution of this specific problem. In the CTR & DES scenario, the RL design variables differ from the MILP solution, resulting in an unexpectedly higher reward value (-40) than the MILP optimal one (-46). A detailed analysis reveals that this unexpected value is due to Eq. (<ref>), which is not imposed in the MDP. In order to validate this analysis, the additional grid cost needed to fulfill Eq. (<ref>) has been computed, taking into account the battery's final soc obtained with the RL approach. As a result, the corrected reward value becomes -67 (instead of -40). This clearly highlights the importance of Eq. (<ref>) in terms of the overall objective.
§.§.§ The CTR & DES scenario highlights differences in RL and MILP strategies.
It is seen from Table <ref> that in the second scenario, the optimal design variables of the RL and MILP solutions differ. Finding different values in design variables shows that the DEPS algorithm is able to identify solutions with comparable reward but using different design strategies. In order to study the sensitivity of the optimal solution, the MILP method was applied by imposing the design variable values obtained with the RL, as it can be seen in the last column of Table <ref>. This indicates that the RL design solution is less optimal (-53) than the MILP one (-46).
§.§ A one-year case study
Optimal solutions of RL and MILP methods in both scenarios are now computed using data from a full year. The time horizon for the RL algorithm is still equal to T=168, but the starting days are uniformly randomly selected over the year. The RL algorithm is trained over a pre-determined number of 100'000 iterations and the values of the RL design variables considered are the ones from the final iteration. The results are shown in Table <ref>.
§.§.§ The difficulty of generalizing a policy with stochasticity in the model and on the estimation of the expectation
It can be seen from Table <ref> that in both the CTR and CTR & DES scenarios, the optimal reward obtained by the RL method is poorer than the MILP optimal rewards. Furthermore, as depicted in Fig. <ref>, due to the significant variations in the input data, the reward and income values exhibit substantial fluctuations across iterations.
During training in the CTR scenario (Fig. <ref>, left), the RL model achieved maximum reward and income values of -180 and -131, respectively, which are significantly better than the final results obtained from both methods in Table <ref>. This could suggest that depending on the set of weeks that are averaged at each iteration, it is possible to obtain a better or worse reward. Therefore, it seems important to work with a sufficiently representative number of weeks throughout the year. A similar observation can be made in the CTR & DES scenario, where the maximum reward and income values achieved were -195 and -148, respectively (Fig. <ref>, right).
§.§.§ The RL method seems to promote lower design variable values
From Table <ref> it is also seen that the RL approach seems to promote solutions involving lower values of design variables. To further investigate the reasons underlying this result, the design variables for the evolution of the battery capacity and PV power, during the training process, are reported in Fig. <ref> in the CTR & DES scenario.
As indicated in Table <ref>, the design variable values can range from 0 to 200. However, it can be seen that higher values are not explored by the RL method. The latter resulted, at the last iteration, in design variables of 44 kWh for the battery capacity and 57 kWp for the PV power. During the training phase, the maximum values reached were 59 kWh and 90 kWp for battery capacity and PV power, respectively. This maximal explored battery capacity is lower than the optimal one found by the MILP approach (95 kWh). Consequently, the RL solution for the PV power is also expected to be lower. Indeed, the reward value is penalized if the RL agent injects PV production into the grid, since the cost of exported energy (C_grid^exp) is defined as a negative value in Table <ref>. A limited battery capacity therefore intrinsically leads to a lower PV power value.
§ DISCUSSION
This section discusses the main observations that can be drawn from solving a battery-PV system with both RL and MILP approaches using the one-week and one-year data set.
§.§ The promises of RL for joint design and control of energy systems
The motivation for this study was to explore the potential of RL to enable joint control and design of energy systems. Tables <ref> (one-week data) and <ref> (one-year data) show that RL provides a solution that is close to the optimal MILP one. This is encouraging, as it suggests that despite RL relying on a different optimization strategy, it is able to identify a meaningful solution in a simple case. However, the difference in reward value between MILP and RL increases when integrating design variables into the optimization problem, i.e., the CTR & DES scenarios in Tables <ref> and <ref>. Interestingly, the solutions for the design variables are consistently smaller in RL than in MILP. Furthermore, from Fig. <ref>, it can be seen that the RL algorithm did not explore higher design variable values in the one-year case study. This observation can be explained by two possibilities. First, DEPS is a local-search method and is therefore subject to converging towards local extrema: once the control policy is too specialized to the investment parameters (which are under optimization too), these parameters are expected to be locally optimal and the algorithm is stuck. Second, the RL algorithm is subject to many hyperparameters to which the final results are sensitive; it is possible that a different policy architecture, learning rate, or simply more iterations would improve the performance of the method. Supporting the first explanation is the similarity between the reward values of the RL approach (-250) and of MILP based on the same investments (-247) for the CTR & DES scenario with T=8760 (Table <ref>). Hence, in this specific energy system case study, it is likely that the RL algorithm did not deem it advantageous to increase the value of the design variables, for either one or both of the two reasons stated.
Overall, these results show that RL provides realistic control and design strategies. Based on this, RL could be used to define new real-time control strategies that integrate design constraints and are less sensitive to linearization inaccuracies <cit.>. Given the differences in how uncertainties are accounted for by both methods, RL could also be a better candidate for integrating resources that come with high levels of uncertainty, such as electric mobility.
§.§ Technical challenges and future directions
The main technical challenges encountered in this study are essentially those inherent to RL methods. First, various parameters need to be tuned: the neural network architecture for the policy, the batch size for the optimization, the learning rate, and the different scaling factors, among others. These parameters were tuned by trial and error and would need to be adapted to each new application. For example, the number of layers required in the one-year case study was larger than for the one-week toy example. Second, convergence of the RL method is not guaranteed, and when convergence happens, the solution is not guaranteed to be globally optimal. Third, as illustrated above for the results of Figure <ref>, determining the number of iterations (set to 100,000 for the training phase in all our experiments) is also crucial and might affect RL solutions. Therefore, comparing RL and MILP solutions is not trivial because it is difficult to compare perfect foresight with policy-based decisions. This should be accounted for when analyzing the results of Tables <ref> and <ref>.
From a technical point of view, future work will aim at using more advanced RL methods. In particular, the RL algorithm used here is a modified version of the REINFORCE algorithm <cit.>, which was developed in 1992 and is one of the earliest RL algorithms. Today, more advanced algorithms are available for control problems, which can converge more rapidly or account for infinite time horizons, such as actor-critic algorithms (e.g., PPO <cit.> and GAE <cit.>), but are yet to be adapted to joint design and control. In terms of applications, future work will aim to better evaluate the added value of RL by assessing the long-term performance of real-time sized systems. For example, a control framework could be developed to establish an operation strategy for the MILP-sized system. The framework would then be evaluated using several years of real-time data from the same system used for design. The same exercise would be applied to the trained model of the DEPS algorithm and performance obtained from several years of system control would be benchmarked, and the impact of design decisions could be discussed with more perspective.
§ CONCLUSIONS
In most studies, MILP is used for the design of energy systems and RL for the control. On the one hand, MILP assumes a perfect foresight of the future and is difficult to generalize to new data. On the other hand, RL methods proved to be efficient in other tasks linked to design and control but not on energy systems. In this study, we assessed the potential of an RL method, DEPS, i.e. an RL algorithm proven efficient for designing and controlling complex systems, for the joint design and control of energy systems.
The energy system studied is a PV-battery system used to meet a real-life demand while minimizing the overall cost. In order to assess the efficiency of the RL method, we compared its outcomes with those obtained with MILP. As these two approaches are fundamentally different, the optimization problem was formulated in two distinct ways: first as a MILP and second as an MDP. The methodology and experimental context were clarified to facilitate the discussion of results and allow a fair comparison. Both approaches are discussed in terms of their strengths and weaknesses.
The findings show that RL can produce control strategies that are close to optimal, while using different values of the design variables. This highlights the potential of RL for joint design and control of energy systems, particularly in scenarios where stochasticity is a key factor. However, the study also highlights the difficulty of tuning and using these methods. Moving forward, there are several challenges to address, including the need to ensure that the RL solution converges to a global optimum. Nevertheless, the promising results obtained in this study suggest that RL has the potential to be a valuable tool for jointly designing and controlling energy systems.
|
http://arxiv.org/abs/2307.05074v1 | 20230711071622 | Retrieval-augmented GPT-3.5-based Text-to-SQL Framework with Sample-aware Prompting and Dynamic Revision Chain | [
"Chunxi Guo",
"Zhiliang Tian",
"Jintao Tang",
"Shasha Li",
"Zhihua Wen",
"Kaixuan Wang",
"Ting Wang"
] | cs.IR | [
"cs.IR",
"cs.AI",
"cs.DB"
] |
ICONIP23
Guo et al.
College of Computer, National University of Defense Technology, Changsha, China
{chunxi, tianzhiliang, tangjintao, shashali, zhwen, wangkaixuan18, tingwang}@nudt.edu.cn
Retrieval-augmented GPT-3.5-based Text-to-SQL Framework with Sample-aware Prompting and Dynamic Revision Chain
Chunxi Guo, Zhiliang Tian (), Jintao Tang, Shasha Li, Zhihua Wen,
Kaixuan Wang and Ting Wang ()
August 12, 2023
==============================================================================================================
Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases.
Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL.
However, it faces challenges with strict SQL syntax requirements.
Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large.
In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain.
Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question.
To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval.
Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions.
To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL.
Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.
§ INTRODUCTION
Text-to-SQL task aims to convert natural language question (NLQ) to structured query language (SQL), allowing non-expert users to obtain desired information from databases <cit.>.
As databases are popular in various scenarios involving different domains (e.g., education and financial systems, etc.), it is desirable to train a model that generalizes well across multiple domains. To facilitate cross-domain generalization <cit.>, researchers adapt encoder-decoder architecture <cit.>, reducing the requirement for specific domain knowledge via end-to-end training. These approaches require diverse and extensive training data to train the model, which is prohibitively expensive <cit.>.
Recent progress focuses on large language models (LLMs) (e.g., GPT-3 <cit.>, Codex <cit.> and GPT-4 <cit.>) with prompt learning <cit.>, which refers to using specific prompts or instructions to generate desired responses.
Rajkumar et al. <cit.> and Liu et al. <cit.> evaluate several prompt learning baselines for Text-to-SQL tasks.
Their findings show that though it is natural for LLMs to generate text sequences, generating SQL is still a challenge due to the SQL's strict syntax requirements.
To address these issues, inspired by few-shot learning <cit.>, existing work employs prompting the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL queries.
However, they typically rely on manual labour to create static demonstration examples tailored to specific tasks.
DIN-SQL <cit.> selects pre-defined samples from each category, while SELF-DEBUGGING <cit.> explains the code to the LLM but without demonstration examples for the explanation.
These methods employ a static demonstration, meaning that the demonstration examples provided to LLMs are fixed and do not adapt or change across different examples.
These static demonstration examples hardly adapt to the scenarios where the semantic gap between retrieved demonstrations and the input question is large, which is called retrieval bias <cit.>, commonly appearing in the retrieval-augmented generation.
Inspired by <cit.>, we argue that providing dynamic demonstrations can adapt to specific samples and schemas for SQL generation.
Dynamic examples enable SQL generation to accommodate various scenarios. By adjusting to specific instances, demonstrations can be customized to incorporate the necessary query structure, logical operations, and question semantics. This adaptability facilitates SQL generation that is relevant and appropriate to different situations.
In this paper, we propose retrieval-augmented prompts for an LLM-based Text-to-SQL model, which contains sample-aware prompting and a dynamic revision chain.
Specifically, we propose to retrieve similar SQL queries to construct prompts with sample-aware demonstration examples.
Notice that users often ask questions with different expressions, even if they have the same intention and the same SQL query. This makes it hard for the model to retrieve helpful examples.
To solve this issue, we propose to extract the question's real intention via two strategies:
Firstly, we simplify original questions through LLMs to clarify the user's intentions and unify the syntax for retrieval.
Secondly, we extract question skeletons for retrieving items with similar question intents.
To produce executable and accurate SQL, we design a dynamic revision chain, generating SQL queries by iteratively adapting to fine-grained feedback according to the previous version of generated SQL. The feedback includes SQL execution results, SQL explanations, and related database contents.
This dynamic chain manages to generate executable and accurate SQL through automatic interaction between the language model and the database without human intervention.
Our contributions are as follows:
(1) We develop a retrieval-augmented framework for Text-to-SQL tasks by prompting LLMs with sample-aware demonstrations.
(2) We propose a dynamic revision chain, which adapts to the previously generated SQL with fine-grained feedback.
(3) Experimental results on three Text-to-SQL benchmarks show that our method surpasses the strong baseline models.
§ RELATED WORK
§.§ Encoder-Decoder SQL Generation.
SQL generation tasks have achieved significant advancements through the utilization of encoder-decoder architectures <cit.>.
On the encoder side, Guo et al. <cit.> proposed IRNET, using attention-based Bi-LSTM for encoding and an intermediate representation-based decoder for SQL prediction. Later, <cit.> introduced graph-based encoders to construct schema graphs and improve input representations.
Works such as RATSQL <cit.>, SDSQL <cit.>, LGESQL <cit.>, S^2SQL <cit.>, R^2SQL <cit.>, SCORE <cit.>, and STAR <cit.> further improved structural reasoning by modelling relations between schemas and questions.
GRAPHIX-T5 <cit.> overcomes the limitations of previous methods by incorporating graph representation learning in the encoder. Concurrently, RASAT <cit.> also provided T5 with structural information by adding edge embedding into multi-head self-attention.
On the decoder side, we divide the methods into four categories: sequence-based methods (BRIDGE <cit.>, PICARD <cit.>) directly translate NLQ into SQL query token by token, template-based methods (X-SQL <cit.>, HydraNet <cit.>) employ predefined templates to regulate SQL generation and ensure structural coherence, stage-based methods (GAZP <cit.>, RYANSQL <cit.>) first establish a coarse-grained SQL framework and then fills in the missing details in the frame which calls slot-filling methodologies, and hierarchical-based methods (IRNet <cit.>, RAT-SQL <cit.>) generate SQL according to grammar rules in a top-down manner, resulting in a tree-like structure.
§.§ LLM-based SQL Generation.
LLM-based models recently emerge as a viable option for this task <cit.>.
For effectively utilizing, it is important to design appropriate in-context demonstration <cit.> and chain-of-thought (CoT) <cit.> strategies that can elicit its ability <cit.>.
In terms of searching for sample demonstration examples, DIN <cit.> selects a fair number of demonstration examples from each category (e.g. simple classes, non-nested complex classes and nested complex classes), but they are fixed.
Moreover, Guo et al. <cit.> adaptively retrieve intention-similar SQL demonstration examples through de-semanticization of the questions. However, none of these methods can handle the ambiguous and varied questioning styles of realistic scenarios.
As for the CoT prompting strategy, DIN-SQL <cit.> follows a least-to-most <cit.> prompting method, decomposing the Text-to-SQL task into subtasks and solving them one by one. Pourreza and Chen et al. explore self-correction <cit.>, where the LLM explains the question and the SQL, providing valuable feedback for improvement. Tian et al. <cit.> propose interactive generation with editable step-by-step explanations, combining human intervention with LLM generation to refine the final SQL output. Additionally, Sun et al. <cit.> explore execution-based self-consistency prompting methods.
Nonetheless, creating task-specific demonstration examples <cit.> demands manual labour. Instead, ours works through automatic interaction between the LLMs and the databases without human intervention.
Moreover, self-explanation and simple feedback alone <cit.> are weak at uncovering errors for correction, whereas our approach takes into account three aspects of fine-grained feedback that interact with each other to provide effective guidance.
§ METHODOLOGY
Our framework consists of two modules as shown in Fig. <ref>:
(1) Retrieval Repository: (see Sec. <ref>) We construct a retrieval repository with simplified questions added and then use question skeletons to retrieve sample-aware SQL demonstration examples.
(2) Dynamic Revision Chain: (see Sec. <ref>) We further revise the generated SQL queries by adding fine-grained feedback.
§.§ Retrieval Repository
We construct a retrieval repository consisting of multiple key-value retrieval items, where the keys represent the question skeletons and the values are k sample-aware SQL queries. These processes enable us to generate demonstration examples that showcase desired behaviours of the LLM.
Our method involves:
(1) Simplifying original questions to unify various questioning styles (see Sec. <ref>).
(2) Extracting question skeletons to construct a retrieval repository (see Sec. <ref>).
(3) Retrieving SQL queries according to skeleton similarities (see Sec. <ref>).
§.§.§ Question Simplification.
We simplify natural language questions by prompting the LLM with instructions.
In this way, we can avoid the frustration of unusual questioning styles and enhance the syntax and wording variety in the repository.
Specifically, we construct a prompt template prompt(.): “Replace the words as far as possible to simplify the question, making it syntactically clear, common and easy to understand: [QUESTION]", where “[QUESTION]” represents the original natural language question.
We then obtain the simplified question by feeding prompt(Q) into the LLM.
We maintain a consistent temperature setting in the language model to ensure that all simplified sentences exhibit the same probability distribution.
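A minimal sketch of this simplification step is given below; `complete` is a placeholder for the text-davinci-003 completion call and is not part of any specific library.

SIMPLIFY_TEMPLATE = (
    "Replace the words as far as possible to simplify the question, making it "
    "syntactically clear, common and easy to understand: {question}"
)

def simplify_question(question, complete, temperature=1.0):
    # `complete(prompt, temperature)` is any callable wrapping the LLM endpoint.
    prompt = SIMPLIFY_TEMPLATE.format(question=question)
    return complete(prompt, temperature=temperature).strip()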
§.§.§ Question Skeleton Extraction.
We then extract question skeletons, including both original questions and simplified questions. We follow the method proposed by Guo et al. <cit.> to obtain question skeletons. This process removes specific schema-related tokens from the questions, focusing solely on the structure and intent. Finally, we take the (question skeleton, SQL) pairs from the training set and store them in the retrieval repository. Note that the number of samples in the retrieval repository is twice as large as the training set, due to the addition of the simplified samples.
Let 𝒟_train represents the training set, and R denotes the retrieval repository. The original natural language question is denoted as Q_o, while Q_r represents the simplified question. The question skeletons are denoted as S_o and S_r for the original and simplified questions, respectively. We formalize the composition of the retrieval repository as follows:
R = {(S_o, SQL), (S_r, SQL) | (Q_o, SQL) ∈𝒟_train}.
§.§.§ Sample Retrieval.
The retrieval process searches for the most similar question skeletons and returns their corresponding SQL queries from the retrieval repository.
This search is based on the semantic similarity between the skeleton of the new question and the items' keys in R.
Specifically, given a new question Q_o, we first obtain its simplified sentence Q_r and their corresponding question skeletons S_o and S_r, following the same method used in the previous two subsections (see <ref> and <ref>).
We then calculate the cosine similarity scores s_o between the semantic vector of the question skeleton S_o and those of all question skeletons S in R. Similarly, we compute the cosine similarity scores s_r for the simplified question skeleton S_r, using the formula s = cos(𝐟(S'), 𝐟(S)),
where S' denotes the skeleton of the new question and 𝐟(.) represents an off-the-shelf semantic encoder[We utilize SBERT <cit.> in our experiment.]. Here, S' is instantiated as S_o and S_r, and s is instantiated as s_o and s_r, correspondingly.
From these scores, we select the top-k retrieval samples with the highest rankings.
Let k_1 and k_2 denote the number of samples retrieved from the original question skeleton S_o and the simplified question skeleton S_r respectively, such that k = k_1 + k_2. We then concatenate the k samples to form a demonstration example as input to the LLM.
Our retrieval repository provides the LLM with sample-aware SQL demonstration examples, which display a more practical answer space.
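The repository construction and lookup can be sketched with an off-the-shelf SBERT encoder as follows; the skeleton-extraction and simplification functions are left abstract, and the helper names are ours.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")     # any SBERT checkpoint

def build_repository(train_pairs, extract_skeleton, simplify):
    # train_pairs: iterable of (question, sql). Each question contributes two keys:
    # the skeleton of the original question and that of the simplified question.
    keys, sqls = [], []
    for question, sql in train_pairs:
        for q in (question, simplify(question)):
            keys.append(extract_skeleton(q))
            sqls.append(sql)
    return encoder.encode(keys, normalize_embeddings=True), sqls

def retrieve(skeleton, key_embeddings, sqls, k):
    # Return the k SQL queries whose skeleton keys are most similar (cosine).
    query = encoder.encode([skeleton], normalize_embeddings=True)[0]
    scores = key_embeddings @ query
    top = np.argsort(-scores)[:k]
    return [sqls[i] for i in top]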
§.§ Dynamic Revision Chain
We employ the LLM to generate an initial SQL query, and then we iteratively revise the generated SQL queries based on fine-grained feedback, forming a dynamic revision chain.
The dynamic revision chain consists of: the SQL queries generated by the LLM iteration as nodes and the prompts provided to the LLM as edges.
It helps to generate executable and accurate SQL queries through interaction between language models and databases, with minimal human intervention.
The dynamic revision chain contains two stages: (1) assembling prompt based on the fine-grained feedback (see Sec. <ref>), and (2) generating SQL via iterative prompting (see Sec. <ref>).
§.§.§ Fine-grained Feedback.
We collect three fine-grained pieces of information based on the SQL generated in the previous iteration.
The intuition is that too much heterogeneous information hampers the LLM's focus, so it struggles to extract the necessary data from extensive and complex databases. Thus, we should progressively narrow down the scope and prioritize the most likely information.
The fine-grained feedback in our approach consists of three aspects of information:
* Execution Error Feedback:
We feed the SQL query generated by LLM into the database engine (i.e. SQLite) for execution.
We then obtain the error messages reported during the execution and add them to the prompt.
It checks whether the predicted SQL can be executed correctly and reports the specifics of the error (e.g. “no such table: [TABLE]", “no such function: YEAR", “misuse of aggregate: COUNT()").
By incorporating the execution error messages into the prompt, LLM can learn from its errors. This helps to generate queries that follow the SQL syntax rules.
* Natural Language Explanation:
We prompt the LLM with instructions, converting the SQL predicted in the previous iteration back into its corresponding natural language expression.
Specifically, we construct an instruction:“What does this SQL query mean? What are the differences between the predicted meaning and the question meanings above?"
The LLM identifies semantic gaps and fills them by explaining the meaning of its own generated SQL and comparing it to the meaning of the original question.
* Related Database Contents:
We provide the LLM with content details about the database tables and columns involved in the SQL queries predicted in the previous iteration, including the possible values involved in the question. It aims to allow LLMs to simulate execution and thus generate more contextually relevant and accurate SQL queries.
Overall, the fine-grained feedback approach aims to enable the LLM to learn from its mistakes, understand the meaning of the SQL queries it generates, and use contextual information from the database to produce more accurate and relevant SQL queries. By progressively narrowing the scope and focusing on the most important aspects, it helps the LLM extract the necessary data from complex databases and improves the quality of its query generation.
§.§.§ Iterative SQL Generation.
Based on prompts with fine-grained feedback, the LLM iteratively generates SQL queries.
The intuition for iterative generation is that one iteration of fine-grained feedback might not check for all mistakes, whereas multiple iterations of feedback generation are more likely to get progressively closer to the gold answer.
Specifically, in each iteration we concatenate the three fine-grained feedback components with the previously generated SQL and feed them into the LLM. We then obtain a new SQL query, collect new fine-grained feedback based on it, and proceed in this way to iterative generation. Let SQL_prev denote the SQL query generated by the LLM in the previous iteration and SQL_curr the current one. The fine-grained feedback components are represented as F_error for execution error feedback, F_NL for natural language explanation, and F_DB for related database contents.
At each iteration i, the LLM generates a new SQL query SQL_curr^(i) by incorporating the fine-grained feedback components:
SQL_curr^(i) = LLM(SQL_prev, F_error^(i), F_NL^(i), F_DB^(i)).
After executing SQL_curr^(i) using the database engine, we obtain the result R_prev^(i) from the previous iteration and R_curr^(i) from the current iteration. To avoid infinite loops, we set a maximum number of iterations N_max.
The termination condition is defined as:
R_prev^(i) = R_curr^(i) or i = N_max.
This control mechanism ensures that the generated SQL queries converge to an optimal and executable solution within a reasonable timeframe.
In this iterative feedback loop, we enable a dynamic interaction between the LLM and the database engine, maximizing the generation of executable SQL without extensive human intervention.
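One possible realization of this loop, using sqlite3 for execution feedback and a placeholder `llm` callable, is sketched below; the prompt wording, the simplified termination check, and the helper names are ours.

import sqlite3

def execute_sql(db_path, sql):
    # Run a candidate query and return (rows, error_message).
    try:
        with sqlite3.connect(db_path) as conn:
            return conn.execute(sql).fetchall(), None
    except sqlite3.Error as exc:
        return None, str(exc)

def revision_chain(question, initial_sql, db_path, llm, related_content, n_max=5):
    # Iteratively revise the SQL until two consecutive executions agree
    # or the maximum number of iterations is reached.
    sql_prev, rows_prev = initial_sql, None
    for i in range(n_max):
        rows_curr, error = execute_sql(db_path, sql_prev)
        if i > 0 and rows_curr is not None and rows_curr == rows_prev:
            break                                     # results stopped changing
        prompt = "\n".join([
            f"Question: {question}",
            f"Previously generated SQL: {sql_prev}",
            f"Execution error: {error or 'none'}",
            "What does this SQL query mean? What are the differences between "
            "the predicted meaning and the question meanings above?",
            f"Related database contents: {related_content(sql_prev)}",
            "Rewrite the SQL so that it answers the question correctly.",
        ])
        sql_prev, rows_prev = llm(prompt), rows_curr
    return sql_prev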
§ EXPERIMENTS
§.§ Experimental Setup
§.§.§ Setting.
We evaluate our method on text-davinci-003, which offers a balance between capability and availability. Following Guo et al. <cit.>, we apply FAISS <cit.> for storing the question skeletons and for efficient retrieval.
For the initial simplification of questions, we set temperature τ=1.0. When generating SQL samples, we set temperature τ=0.5.
For the number of retrieval samples, we assign k_1=4 and k_2=4.
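As an illustration of the retrieval step, the sketch below stores question-skeleton embeddings in a FAISS index and fetches the k nearest neighbours for a new query. The embedding dimensionality, dtype handling, and function names are assumptions; only the FAISS calls themselves (IndexFlatL2, add, search) follow the library's standard API.

import numpy as np
import faiss

def build_skeleton_index(skeleton_embeddings):
    # skeleton_embeddings: (n, d) array of question-skeleton vectors
    vectors = np.ascontiguousarray(skeleton_embeddings, dtype=np.float32)
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)
    return index

def retrieve_examples(index, query_embedding, k=4):
    # Return the ids of the k most similar stored skeletons (e.g., k_1 = k_2 = 4).
    query = np.ascontiguousarray(query_embedding, dtype=np.float32).reshape(1, -1)
    distances, ids = index.search(query, k)
    return ids[0]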
§.§.§ Datasets.
We conduct experiments on the cross-domain large-scale Text-to-SQL benchmark as follows: (1) Spider <cit.> is a large-scale benchmark of cross-domain Text-to-SQL across 138 different domain databases. (2) Spider-Syn <cit.> is a challenging variant based on Spider that eliminates explicit alignment between questions and database schema by synonym substitutions. (3) Spider-DK <cit.> is also a variant dataset based on Spider with artificially added domain knowledge.
§.§.§ Evaluation.
We consider two key metrics: execution accuracy (EX) and test-suite accuracy (TS) <cit.>. EX measures the accuracy of the execution results by comparing them with those of the standard SQL queries, while TS measures whether the SQL passes all EX evaluations over multiple tests generated by database augmentation.
Note that EX is the most direct indication of model performance in Text-to-SQL, although it can contain false positives. Exact-match evaluation is not performed, as multiple correct SQL queries can exist for one question. We use the official TS evaluation procedure, while for EX we slightly modify the evaluation procedure because the fine-tuning-based models need to be decoupled for independent evaluation.
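For intuition, a simplified execution-accuracy check on a SQLite database might look as follows. It compares the result multisets of the predicted and gold queries and counts un-executable predictions as wrong; the official evaluators handle ordering, value normalization, and multiple test databases more carefully, so this is only an approximation.

import sqlite3
from collections import Counter

def execution_match(db_path, pred_sql, gold_sql):
    # True if the predicted and gold SQL queries return the same result set.
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # un-executable prediction counts as incorrect
    finally:
        conn.close()
    # compare as multisets so that row order does not matter
    return Counter(pred_rows) == Counter(gold_rows)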
§.§.§ Baselines.
We compare to two groups of methods:
Fine-tuning T5-3B baselines: PICARD <cit.> is a technique that constrains auto-regressive decoders in language models through incremental parsing; RASAT <cit.>, which incorporates relation-aware self-attention into transformer models while also utilizing constrained auto-regressive decoders; and RESDSQL <cit.>, which introduces a ranking-enhanced encoding and skeleton-aware decoding framework to effectively separate schema linking and skeleton parsing.
Prompting LLMs baselines: As for the large language models, we use two variants of the Codex family <cit.> <cit.> (Davinci and Cushman), PaLM-2 <cit.> <cit.>, the GPT-4 model <cit.> <cit.> and the ChatGPT model <cit.>.
In addition to these baseline models with simple prompting, we also compare against several recent LLM-based methods.
DIN <cit.> decomposes the Text-to-SQL task into sub-tasks (schema linking, query classification and decomposition, SQL generation, and self-correction) and then performs few-shot prompting with GPT-4 <cit.>.
SELF-DEBUGGING <cit.> adds error messages to the prompt and conducts multiple rounds of few-shot prompting for self-correction.
Few-shot SQL-PaLM <cit.> adopts an execution-based self-consistency prompting approach.
§.§ Main Results
§.§.§ Performance on Spider Dataset.
Table <ref> displays the experimental outcomes of our proposed methods on Spider in comparison to the baseline methods. Across all three datasets, our methods achieve the highest levels of execution accuracy (EX) and test suite accuracy (TS).
Our method exhibits strong test-suite accuracy, exceeding the next-best fine-tuning and prompting methods by 9.7% and 5.9%, respectively.
In terms of execution accuracy, our method outperforms the next best method in both fine-tuning and prompting by 0.9%.
For valid accuracy, our approach falls short of RESDSQL-3B + NatSQL and DIN-SQL (Few-shot) but still reaches 98.6%. This is because prompting models may struggle to generate SQL queries that adhere to strict syntactic and semantic rules unless the individual steps and rules are as carefully designed as in DIN <cit.>.
[1]On the Codex-davinci model, both methods use default few-shot prompting, but their demonstration examples differ.
Comparison with Zero-shot Prompting Models:
On all three metrics, ours surpasses Codex, ChatGPT, and even GPT-4 with zero-shot prompting, even though they use the official format[https://platform.openai.com/examples/default-sqltranslate]. This indicates that although LLMs are trained with a specific prompt format, their acquired capabilities are internalized and extend to freer formats.
Comparison with Few-shot Prompting Models:
Ours also outperforms all models in the few-shot setting. Notice that, of the two methods using similar few-shot prompting on the Codex-davinci model, the latter performs 10% better than the former in both EX and TS, indicating that the selection of demonstration examples (easy, non-nested complex, and nested complex classes) <cit.> plays a significant role. Our adaptive, sample-aware retrieval of demonstrations is in turn 8.2% more effective than this static selection, which suggests that incorporating more effective prompts is crucial for enabling the language model to understand new, specific tasks.
Comparison with Other Models:
The closest EX performance to ours is SELF-DEBUGGING, which also adopts an iterative prompting strategy, but we still outperform it by 0.9%.
Additionally, ours outperforms SQL-PaLM, which uses a simpler prompting strategy yet still produces strong results, implying that PaLM-2 is a promising LLM. We attempted to apply a similar consistency approach but did not obtain gains, which may indicate that different LLMs are better suited to different approaches. Nevertheless, our adaptive sampling and iterative feedback approach has proven effective.
[3]As the other baseline models have not been evaluated on the Spider-SYN and Spider-DK datasets, there are relatively few models for comparison in the table.
[4]The Codex results could not be reproduced because the Codex API is no longer accessible. The TS metric is not applicable to the Spider-DK dataset.
§.§.§ Performance on Spider-SYN and DK Datasets.
Table <ref> shows that our method achieves robust performance compared with the baseline methods on the Spider variants. On Spider-SYN, ours improves over the previous SOTA by 4.5% on EX and 12.6% on TS. Remarkably, ours achieves a 13.6% improvement over the previous SOTA on Spider-DK.
§.§ Various Difficulty Levels Analysis
As shown in Table <ref>, we evaluate our effectiveness at various difficulty levels, which are determined by the number of SQL keywords used, the presence of nested sub-queries, and the utilization of column selections or aggregations.
The results show that ours outperforms the other models at all levels except for the easy level, where it is worse than SQL-PaLM.
The improvement in performance with increasing difficulty levels indicates that our model's strengths become more pronounced as the queries become more challenging.
This suggests that our model excels in handling complex SQL queries.
§.§ Ablation Study
Figure <ref> compares performance with and without each of the two modules at four complexity levels.
It shows that excluding either module decreases performance at all difficulty levels, most notably at the hard and extra levels.
Decreases in model performance are similar for the w/o revise and w/o simplify settings.
Note that both modules of our method are most effective on Spider-DK's easy level, which requires additional domain knowledge, improving it by 13.6% each. This suggests that the simplification strategy and the dynamic revision chain strategy help with a variety of generalisation issues.
We found that removing the simplification module results in a significant drop in model performance, particularly on the DK dataset, where the overall drop is 12.5%. The impact at different difficulty levels is, in descending order, extra, hard, easy, and medium. This is possibly because the model can incorporate more external knowledge as a supplementary description when simplifying, especially when more SQL components are involved. Note that removing simplification affects easy-level problems rather more than medium-level ones, probably because the execution accuracy of easy-level problems is already high and short sentences are more likely to cause ambiguity.
Without the revision module, model performance suffers more as the difficulty level increases. On Spider-DK the model performance decreases by 11.0%, especially on easy-level and extra-level by 13.6% and 13.3% respectively. As higher difficulty levels require more knowledge, this suggests that the fine-grained feedback in the revision module effectively complements the domain knowledge required for SQL generation.
§.§ Iterative Round Analysis.
From Fig. <ref>, we observe that the major improvement comes from the first two iterations. Besides the 4.6% improvement in the first iteration on Spider, the two datasets investigating generalizability, Spider-DK and Spider-SYN, show slightly larger accuracy gains in the second iteration than in the first.
This indicates that iterative, fine-grained feedback from the dynamic revision chain helps handle more complex generalisation problems, much like multi-step reasoning that progressively derives the target answer.
§.§ Case Study
To demonstrate our model, we show a comparison of predicted SQL queries in Figure <ref> using ChatGPT <cit.>, DIN-SQL <cit.>, SQL-PaLM <cit.> and Ours.
In the first example, although the question explicitly mentions “French", general models are confused about the exact value stored in the column “citizenship" even if they pick the column out correctly. Note that a SQL query must match the exact stored value to find the correct answer. Our approach provides the exact value of the database content involved in the first fine-grained iteration, which leads to the gold answer.
The second example requires selecting only one item, whereas DIN-SQL and SQL-PaLM both select two. ChatGPT incorrectly uses the aggregate function COUNT(), which in this case must be used in conjunction with GROUP BY. Our approach self-corrects the error in the second fine-grained iteration by explaining the generated SQL in natural language.
§ CONCLUSION
We propose retrieval-augmented prompts for an LLM-based Text-to-SQL model.
By utilizing sample-aware prompting and a dynamic revision chain, we address the challenge of retrieving helpful examples and adapting the generated SQL based on fine-grained feedback.
Experimental results on three Text-to-SQL benchmarks demonstrate the effectiveness of our method.
|
http://arxiv.org/abs/2307.09569v1 | 20230714213151 | Design of Whisker-Inspired Sensors for Multi-Directional Hydrodynamic Sensing | [
"Tuo Wang",
"Teresa A. Kent",
"Sarah Bergbreiter"
] | cs.RO | [
"cs.RO"
] |
IEEE/ASME TRANSACTIONS ON MECHATRONICS
Design of Whisker-Inspired Sensors for Multi-Directional Hydrodynamic Sensing
Tuo Wang, Student Member, IEEE, ASME, Teresa A. Kent, Student Member, IEEE, Sarah Bergbreiter, Member, IEEE, Fellow, ASME
Tuo Wang and Sarah Bergbreiter are with the Department of Mechanical Engineering. Teresa Kent is with the Robotics Institute. All authors are with Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA. Corresponding author: Sarah Bergbreiter, [email protected], +1(412)268-3216
August 12, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================
Perceiving the flow of water around aquatic robots can provide useful information about vehicle velocity, currents, obstacles, and wakes. This research draws inspiration from the whiskers of harbor seals (Phoca vitulina) to introduce a whisker-inspired water flow sensor. The sensor enables multi-directional flow velocity estimation and can be seamlessly integrated into aquatic robots. The whisker-inspired sensor operates using a mechano-magnetic transduction mechanism, which separates the whisker drag element from the electronic component. This configuration provides distinct advantages in terms of waterproofing and corrosion resistance. The sensor features a modular design that allows tuning of the whisker drag element's shape to optimize sensitivity and sensing range for diverse applications. An analytical model quantifies the sensor's capabilities, validated through experiments examining whisker parameters like morphology, cross-sectional area, aspect ratio, and immersion depth. The study also investigates the impact of structural designs on Vortex-Induced Vibrations (VIVs), an area of active research in both biological and robotic aquatic whiskers. Finally, the sensor's efficacy is demonstrated on a commercially available, remotely controlled boat, highlighting its ability to estimate flow velocity. The presented sensor holds immense potential for improving aquatic robots' navigation and perception capabilities in a wide range of applications.
whisker-inspired designs, flow sensing, underwater robots, bio-inspired robotics
§ INTRODUCTION
The field of aquatic robotics has made significant strides in recent years, resulting in the development of various sophisticated machines such as remotely controlled ships <cit.>, underwater autonomous vehicles <cit.>, and soft underwater robots <cit.>. These robots have showcased impressive capabilities across a diverse range of applications, including environmental monitoring <cit.>, search and rescue missions <cit.>, and ocean exploration <cit.>. Despite their success, the majority of these systems have no way to measure fluid flow around the vehicle – a measurement that can provide useful information regarding the vehicle's velocity, currents, nearby obstacles, and wakes <cit.>. For small vehicles in particular, it is especially challenging to sense fluid flow around the vehicle.
Looking to nature, we find many aquatic animals, such as harbor seals and sea lions, have evolved biological flow sensors known as whiskers (vibrissae) to actively measure their surrounding flow field for foraging and navigation <cit.>. A biological whisker sensor contains a high-aspect-ratio whisker drag element that displaces when the animal moves. The displacement is transmitted through densely packed sensory cells embedded in the whisker root and is finally processed as information about the surrounding flow. Compared with human-engineered navigation systems like cameras and GPS, whisker sensing does not require ideal lighting conditions and is not subject to attenuation issues <cit.>. Another benefit of whisker sensing is that all neurological sensing happens at the whisker's base; since the whisker drag element is a specialized hair, damage to the whisker does not affect the sensing cells. This principle of separation means the sensor remains functional even when the drag element is damaged, enhancing its durability and reliability in engineering applications.
The idea of using a whisker-inspired sensor to measure water flow has already been realized using many different sensing modalities, such as piezoresistive <cit.>, capacitive <cit.>, magnetostrictive <cit.>, and triboelectric <cit.>. Common techniques like 3D printing <cit.>, molding <cit.>, laminating <cit.>, and microelectromechanical systems (MEMS) manufacturing <cit.> help create these sensors to fit different robot sizes. These whisker-inspired sensors have been applied in stationary setups to measure oscillatory flow <cit.> and on mobile robots to detect wakes <cit.>. Despite success in specific applications, challenges remain to whisker-inspired sensors' widespread application.
One challenge associated with aquatic whisker-inspired sensors is waterproofing. In earlier sensor designs, the whisker drag element (as shown in Fig. <ref>) was typically in direct contact with an electronic component, like a strain gauge <cit.> or capacitor <cit.>. This meant that both the electronic part and the joint connecting it to the whisker drag element had to be waterproofed. Previous methods addressed this problem through mechanical sealing <cit.>, or coating with waterproof materials <cit.>. However, during flow sensing the waterproofing layer can crack and delaminate due to vibration and rotation of the whisker drag element.
Another challenge lies in distinguishing mixed hydrodynamic signals in flow sensing. Whiskers not only detect relative water flow velocity, but also experience Vortex-Induced Vibrations (VIVs) caused by vortices shedding downstream of the whisker drag element, resulting in periodic fluid forces on the whisker. The frequency and strength of VIVs are closely related to the fluid flow velocity <cit.>. While earlier research aimed to understand the seal whisker's shape to minimize VIVs for noise reduction, both biology and engineering studies revealed that VIVs can actually aid animals or robots in locating objects without vision or sonar systems <cit.>. As a result, developing flow sensors that can differentiate and simultaneously capture water flow and VIV signals could lead to a considerably more effective sensor.
To overcome these challenges, this work presents the design of waterproofed, magnetically-transduced, whisker-inspired sensors for multi-directional hydrodynamic sensing. The magnetic transduction mechanism requires no physical connection between the sensing electronics and the whisker drag element (Fig. <ref>), which simplifies the waterproofing of the sensor. By water-cutting carbon fiber sheets, the whisker drag element can be efficiently manufactured and characterized, making this rapid prototyping technique a powerful tool for creating various whisker profiles. In this work, we test three different whisker drag element profiles on our sensor: rod, plate and cross shapes. Through rigorous characterization and modeling, we provide insights into the relationships between whisker shapes, vortex-induced vibration (VIV) responses, and sensing capabilities. This information provides a solid foundation for future sensor designs and applications. Additionally, we demonstrate our whisker-inspired sensor's performance for velocity estimation on a small, off-the-shelf remote-controlled boat, highlighting its potential for advancing aquatic robotics and sensing technologies.
§ WHISKER-INSPIRED SENSOR SYSTEM DESIGN
Fig. <ref> depicts the sensing schematic and design of the whisker-inspired sensor, which is based on the design in <cit.>. The sensor system is comprised of a whisker drag element that is constrained by a spring suspension. A 2 permanent magnet is located on the opposite side of the spring from the whisker drag element. The Hall effect sensor and its electronics are located separately from the whisker drag element and water. They are positioned in a way that ensures symmetry in the magnetic field response, regardless of the direction of rotation. This is achieved by placing them in line with the center of the magnet.
Separating the whisker drag element from the Hall effect sensor allows the whisker drag element to make contact with the water, while protecting the transduction part in a water-tight encasing. As water flows over the whisker drag element, the drag force induced by the flow deflects the whisker, causing the magnet to move in the opposite direction of the flow. Consequently, the motion of the magnet causes a variation in the magnetic field, which is detected by the Hall effect sensor. In addition to its waterproofing benefits, the sensor design comprises components with previously established models as part of a tactile sensor <cit.>, and these models can be used to predict the expected sensor response to water flow.
§.§ Sensor Modeling
The model of the sensor focuses on the relationship between the flow velocity (v) experienced by a whisker drag element and the magnetic field response (B_x, B_y, B_z) sensed by the Hall effect sensor. In this paper, we assume that water flowing at a constant velocity produces a constant drag force on a whisker drag element. Given that the whisker drag element is fixed to a spring at one end, the whisker will rotate until it has reached a quasi-static angle where the moment induced by the drag force and the opposite moment induced by the spring rotation are equal (Fig. <ref>). Here the angle of the balanced moments is named the quasi-static polar deflection angle (ϕ_def, Fig. <ref>). ϕ_def is also the angle of the magnet relative to the z-axis and thus determines the change in the magnetic field sensed by the Hall effect sensor.
The whisker-inspired sensor model combines models for the expected drag force and moment (Section <ref>), the expected spring response to applied moments (Section <ref>) and changes in the magnetic field caused by a rotating magnet (Section <ref>) to model a unique relationship between the magnetic field and flow velocity. The model's success is evaluated by its ability to predict a manufactured sensor design's maximum flow velocity, flow velocity at mid-range, and sensitivity measured in LSB/.
§.§.§ Drag Moment Caused by Flow
The drag force on a whisker (F_drag) moving through a fluid of density ρ at a velocity v is modeled by the drag equation (Eqn. <ref>). In Eqn. <ref> the shape of the whisker drag element determines the drag coefficient (C_d), and the drag element's height (h), width (w), and thickness (t) are factors in the cross-sectional area (A). ρ_water is the density of the water that the whisker is moving through.
F_drag = 1/2 C_d ρ_water v^2 A
It is important to note that as the direction of flow relative to the sensor axis (θ_xy) changes, C_d and A can also change for the three drag element designs in Fig. <ref>. For example, if A_x0 and A_y0 are the projected areas at θ_xy = 0 and θ_xy = 90, respectively, the projected area at angle θ_xy (A_θ) can be calculated using Eqn. <ref>.
A_θ = A_x0 cos(θ)+A_y0 sin(θ)
The moment (M_drag, Eqn. <ref>) exerted at the whisker-spring connection combines the force found in the drag equation (Eqn. <ref>) and the magnitude of the moment arm (r, Fig.<ref>) calculated in Eqn. <ref>. In this model, we assume uniform flow perpendicular to the sensor and a uniform cross-sectional area of the whisker, but we do not assume the whisker is fully immersed. While full immersion is required for underwater vehicles, surface aquatic vehicles can potentially tune this parameter when adding the sensor to the vehicle. We use the variable D_im (illustrated in Fig. <ref>(d)) to indicate the distance beneath the water surface that the drag element is immersed.
r = h_stem + h - 1/2 D_im
M_drag = F_drag r
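As a concrete reading of these equations, the following Python sketch computes the drag force and the resulting moment about the spring centre. The function name is ours, the frontal area is passed in directly (its value depends on the drag element shape, orientation, and immersion depth), and the water density default is an assumed nominal value.

def drag_force_and_moment(v, c_d, area, h_stem, height, d_im, rho_water=1000.0):
    # v: relative flow velocity, area: frontal area of the drag element
    # projected normal to the flow, d_im: immersion depth below the surface
    f_drag = 0.5 * c_d * rho_water * v ** 2 * area   # drag equation
    r = h_stem + height - 0.5 * d_im                 # moment arm
    m_drag = f_drag * r                              # moment at the spring centre
    return f_drag, m_drag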
§.§.§ Spring Design
The model of the spring suspension design in Fig. <ref> is based on previous work by Chou et al. <cit.> that uses Castigliano's beam theory to model four connected serpentine springs. This spring model was previously verified in tactile sensing and airflow estimation tasks in <cit.>. For the rigid whiskers in this work, rotation of the whisker drag element exerts a rotation of equal magnitude on a center plate connecting the four springs. This rotation is described by the magnitude of rotation (ϕ_def, Fig. <ref>) and direction of rotation (θ_xy, Fig. <ref>(b)). Previous work <cit.> has shown that when the spring parameters are known, moments about the x and y axis (M_x and M_y) correlate to a unique pair of values for θ_xy and ϕ_def based on inverse elements of the compliance matrix (Eqns. <ref>,<ref>).
M_x ∝ sin(ϕ_def)cos(θ_xy)
M_y∝ sin(ϕ_def)sin(θ_xy)
The final spring parameters used are similar to those found in <cit.> and these parameters are repeated in Table <ref> for completeness. The code and detailed mathematical derivations for this work are available on GitHub <cit.>. While this spring model is not novel on its own, we have added ±20 limits to the maximum ϕ_def in the model to represent physical mechanical stops that prevent the spring suspension from plastically deforming. The spring design was considered fixed in the model used in this paper, but could ultimately be varied to adjust sensing range in future work.
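A brief sketch of how measured or modeled moments map to the suspension rotation under these proportionalities is given below. The effective rotational stiffness k_rot is an assumed lumped constant standing in for the full compliance-matrix inversion of the serpentine-spring model, and the 20-degree clip represents the mechanical stops.

import numpy as np

def moments_to_rotation(m_x, m_y, k_rot, phi_max_deg=20.0):
    # Invert M_x ~ sin(phi)cos(theta), M_y ~ sin(phi)sin(theta) for (phi_def, theta_xy).
    theta_xy = np.degrees(np.arctan2(m_y, m_x))            # direction of rotation
    s = min(np.hypot(m_x, m_y) / k_rot, 1.0)               # sin(phi_def), saturated
    phi_def = min(np.degrees(np.arcsin(s)), phi_max_deg)   # mechanical stops at +/-20 degrees
    return phi_def, theta_xy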
§.§.§ Hall Effect Sensing
The calculations for converting ϕ_def into the magnetic field response were previously published in <cit.>. Here we provide a brief description for completeness and describe our incorporation of the Hall effect sensor's limitations into the sensor design model. The magnet rotates about the spring suspension's center by the same ϕ_def and θ_xy as the whisker drag element does (Fig. <ref>). Derby and Olbert's equations <cit.> describe the expected changes in magnetic flux at a point when a magnet rotates and translates relative to that point. The Hall Effect sensor used in this work provides magnetic field measurements in 3 axes, and we specifically model the change in magnetic field around the x and y axes – Δ B_x and Δ B_y.
In the current work, we have added the resolution and sensitivity of the Hall effect sensor to our model. The Hall Effect sensor used in this work has a published sensitivity of 5 Least Significant Bits (LSB) per and a maximum magnetic field range of 230. In order for our sensor design to detect a rotation applied to the magnet, the rotation must result in a change of magnetic field of at least 0.2. The model also indicates that the sensitivity of the sensor to flow velocity can be increased by either reducing the distance between the magnet and the sensor, or by increasing the magnetization of the whisker magnet.
§.§.§ Design Modeling
Decisions regarding the design of the whisker drag element, the level of immersion of the whisker drag element in water, the parameters of the spring, the magnet, and Hall effect sensor all play a role in determining the sensor response. In this paper we focus our study on how changes in the whisker drag element design and immersion in the water affect the whisker-inspired sensor's sensitivity to flow velocity. By combining Eqn. <ref> with the spring model, we can analyze the effect of different whisker drag element designs on the anticipated rotation angles ϕ_def.
Three cross-sectional shapes of the whisker drag element illustrated in Fig. <ref>(b) were considered in the model: a rod, plate and cross. The three whisker shapes were modeled with drag coefficients of 1.1, 1.32, 1.32 for the rod, plate and cross, respectively (fit to experimental results). It is worth noting that we only modeled the cross and the plate in one θ_xy direction, and other θ_xy directions will have different C_d and A values for these shapes. The model results of the sensors were compared to the experimental results for each of the three designs with varying the immersion depth and orientation.
The model was further compared to experimental data for the rod whisker across different areas and immersion depths. Immersion depth provides an opportunity to adjust the sensor's sensitivity and range during application in contrast to the drag element shape that is fixed during fabrication. We used the model to better understand the sensitivity of the whisker sensor to both the 'hard-coded' design parameter of rod diameter and the adjustable design parameter of immersion depth. We evaluated the sensor's sensitivity (measured in LSB/ and evaluated at the sensing range mid-point due to nonlinearity), mid-point flow velocity, and maximum flow velocity using the model and compared the modeled design space to experimental results.
§.§ Fabrication
We developed an efficient assembly process to create versatile sensors featuring three distinct drag element shapes of various sizes, enabling us to effectively evaluate their performance. Fig. <ref> shows the fabrication and assembly process of the sensor. The plate and cross whiskers were fabricated by waterjet (ProtoMax Abrasive Waterjet, OMAX) using a 1.5 thick carbon fiber sheet (StarimCarbon). The cross whiskers were then mechanically assembled into shape. The rod whiskers were made by cutting carbon fiber rods (Awclub) of 1, 2, and 3 diameters. Unlike biological whiskers which are compliant and tapered, the fabricated whiskers are rigid and have constant cross-sectional areas along their height. To improve the adhesion and alignment of the whisker on the center plate, we 3D printed a 1.5 high square platform (Object 30, Stratasys). This platform was then glued to the bottom of the whisker assembly before it was affixed to the spring suspension.
The spring suspension was made by laser cutting (Photolaser U4, LPKF) a 100 thick stainless steel sheet (Shop-Aid Inc.). Laser settings for power (2) and repetitions (80) were set to increase the cutting resolution. The spring suspension was then soaked in 90 percent isopropanol solution for 10 minutes to reduce the cutting residuals on its surface. A 2 cube magnet (C0020, Supermagnetman) was glued to the back side of the spring. Once assembled, the spring suspension and whisker drag element were press-fit and then glued into the 3D-printed case.
The Hall effect sensor (TLE493-W2B6 A0, Infineon Technologies) was also press-fit into the 3D-printed case (Object 30, Stratasys). Four wires were soldered to the board through a hole at the bottom of the whisker case for I2C communication. Finally, silicone adhesive (Sil-poxyTM, Smooth-On) was applied to waterproof the sensor. Table <ref> lists the dimensions of all nine whisker shapes used in modeling and characterization.
§.§ Characterization Setup
To assess the impact of drag element geometry on sensitivity and detection range of whisker-inspired sensors, several experiments were conducted. These tests involved subjecting the sensors to different flow velocities and orientations while immersed in water. To accomplish this, a linear stage (X-BLQ-E1045, Zaber Technologies Inc.) was mounted on a water tank (1200 × 1200 × 440) and supported by customized aluminum T-slot beams (25 Series, 80/20 Inc.). The linear stage was programmed to follow prescribed velocities with a fixed acceleration profile.
A 3D-printed sensor mount (Object 30, Stratasys) and two holders (Raise3D Pro2, Raise3D) were utilized to secure the sensor to the linear stage, enabling adjustment of the sensor's orientation and immersion depth within the water. The sensor was then mounted on the linear stage and dragged through static water, generating a relative water flow. The experimental setup is depicted in Fig. <ref>.
Data from the whisker-inspired sensor's response to motion was acquired by an Arduino Mega 2560 R3, using I2C communication at a sampling frequency of 100 Hz. For each trial, the sensor was moved along a linear stage within the water tank for a total travel distance of 850 mm, with a consistent acceleration and deceleration of 2 at the beginning and end of the run. This process ensured a minimum of 70 data points were collected at the prescribed velocity per trial. Mean and standard deviation values were calculated from the magnetic field response in each trial and used to analyze the relationship between flow velocity and vortex-induced vibration (VIV) signals.
§.§.§ Varying whisker morphologies
Each of the nine whiskers (described in Table <ref>) was characterized with eight flow velocities, [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7] , and twelve θ_xy orientations, [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]. For each combination of velocity and orientation, four trials were conducted.
§.§.§ Varying immersion depths for rod whiskers
Fig. <ref>(d) provides a more detailed schematic illustrating how the response of rod whiskers with diameters of 1, 2, and 3 were characterized at different immersion depths using this same characterization setup. The rod whisker was mounted perpendicular to the water surface, and a portion of its height was immersed in water. The total height of all of the rod whiskers was 60, and the immersion depths ranged from 10 to 40 in 10 increments.
§.§ Sensor calibration for surface vehicle demonstration
A 20 × 7.5 cross sensor was chosen for use on a remotely-controlled (RC) boat based on this sensor's sensitivity and sensing range. The sensor was calibrated for velocity and orientation so that it could ultimately be used for velocity estimation on-board the boat. In this calibration, the flow velocity (v) was estimated from the magnetic field responses in the x and y directions, Δ B_x and Δ B_y respectively. The magnitude of the flow velocity is related to the magnitude of the magnetic field response, Δ B_xy = (Δ B_x^2 + Δ B_y^2)^1/2. The orientation of the flow relative to the sensor is proportional to Δ B_y/x = arctan(Δ B_y/Δ B_x). For calibration, twenty data points were collected from each test. Other sensors can be calibrated using a similar approach if needed in the future.
§ RESULTS
§.§ Varying whisker morphologies
Both experiments and modeling were carried out to examine the magnetic field response of three whisker-inspired sensor shapes, aiming to understand their effectiveness in detecting water flow velocity and orientation. Fig. <ref> shows the experimentally measured mean and standard deviation of the magnetic field response (Δ B_x, Δ B_y) for the three whisker shapes relative to water flow velocity and orientation. The x and y directions correspond to the Hall effect sensor's sensing directions. In Fig. <ref>(a-c), the Δ B_x direction is in-line with the linear stage's movement, and flow in the transverse direction is measured by Δ B_y (Fig. <ref>(c)). This orientation corresponds to θ_xy = 0.
In terms of velocity characterization, all whisker shapes display an approximate quadratic relationship between the magnetic field response and flow velocity in the in-line direction. The transverse direction maintains an average near-zero response, consistent with the analytical model's predictions. This result signifies that the sensor's response to in-line motion can be separated from vibration in the transverse direction, enabling velocity estimation for all whisker shapes. For orientation characterization, rod whiskers show a nearly ideal sinusoidal magnetic field response in both x and y directions. The projected cross-sectional area (A) for cross and plate sensors depends on the flow direction, θ_xy (Eqn. <ref>). Cross whiskers exhibit an approximately linear relationship between the magnetic field response and orientation. Plate whiskers display minimal variation in magnetic field response along the y-axis, while the magnetic field for Δ B_x goes down to zero when the in-line direction shifts from the x-axis to the y-axis.
Vortex-induced vibrations (VIVs), a phenomenon in which fluid flow causes structures to vibrate, are significant in the transverse-direction measurements. Rod whiskers exhibit a bell-shaped VIV profile, peaking at 0.3 (Fig. <ref>). In contrast, cross and plate whiskers show increased VIV magnitude in the high-velocity range, 0.5 - 0.7 (Fig. <ref>(b-c)). Ultimately, understanding VIV behavior is essential for optimizing whisker-based designs and mitigating vibration-related damage or fatigue in applications such as underwater sensing and bio-inspired robotics.
These experimental results demonstrate reasons for preferring one drag element design over another. For example, the plate whisker only picks up the component of flow in line with its x-axis and the cross whisker can go to higher velocities without significant VIV response. Once a researcher picks the shape best suited for their application, the model can help design the sensor for the velocity range needed. Experimental data from all nine whisker designs were separately compared to the model in Fig. <ref>, yielding a prediction of the in-line Δ B_x value with a root mean square error (RMSE) of 1.29 for a given flow velocity. Given a measured Δ B_x value (as would be the case on an aquatic vehicle), the model's velocity prediction had an RMSE of 0.034. The experimental results are highly repeatable and we suspect that most of this error is due to manufacture of the sensor's spring suspension element as discussed further in <cit.>.
§.§ Varying immersion depth for rod whiskers
Results of the characterization process for different immersion depths of rod whiskers are illustrated in Fig. <ref>(a-c). Notably, the in-line magnetic field response demonstrates an increase with the rise in flow velocity for all whisker diameters. Furthermore, the larger the diameter of the rod whisker, the higher the detected magnetic field response due to the increase in cross-sectional area which consequently leads to a higher drag force at the same flow velocity.
In addition, the magnetic response to flow velocity increases as the rod sensor's area enlarges through an increase in D_im. These results shed light on the effect of varying the immersion depth on the magnetic field response to changes in velocity without altering the whisker's design. Increasing D_im both decreases the moment arm and increases the area of the whisker drag element (Eqns. <ref>,<ref>). However, changing D_im has a larger effect on the sensor's area than on the moment arm, leading to an increase in the magnetic field response (Fig. <ref>).
The VIV response exhibits a notable scaling with the immersion depth and is prominent in a fixed flow region for all whisker diameters. Figs. <ref>(d-f) show the magnitude and the velocity region where VIVs occur in the transverse direction. The experimental data reveals that the magnitude of VIV is strongly influenced by both the immersion depth and the rod diameter. Across all three rods, the VIV response is markedly pronounced for flow velocities from 0.2 to 0.4.
Using the experimental data from the immersion depth tests, we evaluated the sensor model's ability to predict the sensor performance across three sensor performance metrics as described in Section II.A.4. Modeled and experimental results for the Hall effect sensor's sensitivity to changes in velocity, the sensed velocity in the middle of the sensor's range, and the maximum velocity a sensor can detect without damage to the sensor are plotted together in Fig. <ref> for different whisker diameter and immersion depth variations. The model's predicted sensitivity was accurate to a root mean squared percentage error (RMSPE) of 14.4 across all of the immersion depth trials for the sensor's nominal design values (Fig. <ref>a). The model's expected mid-point velocity estimate had an RMSPE of 11.0 across all trials (Fig. <ref>b). The maximum tested velocity was 0.7 so modeling results for only two designs could be confirmed with experimental data (Fig. <ref>c). The percentage error for these trials was 14.1 and 16.2 for immersion depths of 30 and 40 respectively. We use percentage errors over the RMSE to compare the model's predictive abilities over the three specifications whose values have different magnitudes.
We hypothesize that the nominal design values used to model the sensor are a large contributor to the model prediction error.
When the spring width in the model is fit to the experimental results (as done in <cit.>), the percentage errors drop to 1.38 and 3.96 for the maximum velocity data points in Fig. <ref>c respectively (these results were from the same sensor with the same spring at different immersion depths). Regardless, the trends remain consistent between the model and experimental results for all three specifications, and the model provides an approximate estimate which can lead to more informed sensor design choices.
§.§ Calibration Results
Given manufacturing variations in sensor design, it makes sense to calibrate each sensor to improve its accuracy. A curve fixed at the origin was fit to the experimental data for flow velocity and orientation using a least squares approach (Fig. <ref>). The flow velocity and Δ B_xy were fit with a second order polynomial with an R^2 value of 0.969. The orientation of water flow can be mapped to arctan(Δ B_y/Δ B_x) through a linear relationship with an R^2 value of 0.981.
The calibrated curve equations for flow velocity, v (), and orientation, θ_xy (), for this sensor are listed below:
v = -0.0005(Δ B_xy)^2+0.0414(Δ B_xy)
θ_xy = 0.944 arctan(Δ B_y/x)
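Applying this calibration on board reduces to evaluating the two fitted curves, as in the sketch below. The coefficients are those reported above for this particular 20 x 7.5 cross sensor; the use of arctan2 to cover all quadrants is our own generalization, and the angle units must match those used when the 0.944 slope was fitted.

import numpy as np

def flow_from_field(delta_bx, delta_by):
    # Map the measured magnetic field change to flow speed and direction
    # using the sensor-specific calibration curves above.
    b_xy = np.hypot(delta_bx, delta_by)            # magnitude of the field response
    v = -0.0005 * b_xy ** 2 + 0.0414 * b_xy        # calibrated flow speed
    theta_xy = 0.944 * np.degrees(np.arctan2(delta_by, delta_bx))  # flow direction
    return v, theta_xy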
§ APPLICATION
§.§ Test Setup
In order to showcase the whisker-inspired sensor in a real-world aquatic environment, we integrated the chosen sensor on an RC boat (485 × 264 × 160, D16C, Cresea Product) for velocity estimation. Given the prescribed boat velocity and tasks, a 20 × 7.5 cross-shape sensor was selected for this application.
A joystick was used to remotely control the velocity and direction of the boat. Four wires from the sensor were taped along the edge of the boat to the holder at the back and connected with an Arduino microcontroller (Arduino Mega 2560 R3) through I2C communication. The sensor system was powered by a 9V battery pack. A Bluetooth module (HC-05, HiLetgo) was used to transmit the sensing data to a laptop. The boat with the sensor was tested in a 8 diameter, 3 high water tank at the Field Robotics Center (FRC) at Carnegie Mellon University.
Figs. <ref>(a-c) show the test setup. While the boat was operating in the tank, a survey system (Leica TS16) was used to record the position and velocity of the boat as ground truth data. The survey system consists of a laser source and a geo prism (Leica GRZ101). During the test, the laser source actively tracked the geo prism and stored its positions in Cartesian coordinates with a sampling frequency of 5.
§.§ On-Board Test Results
Figs. <ref> (d-f) compare the velocity profiles estimated from the whisker-inspired sensor and the survey system. The motion of the boat was designed to provide a variety of velocity profiles within a range of 0 to 0.8. The whisker-inspired sensor successfully captured all velocity variations with RMSE values of (d) 0.08, (e) 0.08, and (f) 0.06.
The velocity profile in Fig. <ref>(d) was designed to have a peak velocity close to the limit of the sensor, and this profile was qualitatively captured by the whisker-inspired sensor. The velocity profile in Fig. <ref>(e) was designed to have 5 periodic accelerations, and the whisker-inspired sensor successfully captured all of them. Notably, in Fig. <ref>(f), the sensor output saturated at around 0.7. This was deliberately designed by printing physical restrictions on the holder to prevent the sensor's spring suspension from plastic deformation. These findings are important as they indicate the sensor's ability to detect and measure complex dynamic events, emphasizing its potential applicability in various scenarios requiring accurate state estimation and motion tracking.
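The comparison against the survey track can be reproduced with a few lines of NumPy, as sketched below. Interpolating the 5 Hz survey velocities onto the sensor timestamps before computing the RMSE is our own assumption about the resampling step, which the paper does not specify.

import numpy as np

def velocity_rmse(t_sensor, v_sensor, t_survey, v_survey):
    # Interpolate the ground-truth track onto the sensor timestamps,
    # then compute the root mean square error between the two profiles.
    v_ref = np.interp(t_sensor, t_survey, v_survey)
    return float(np.sqrt(np.mean((np.asarray(v_sensor) - v_ref) ** 2)))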
While using the calibration models (Eqns. <ref>, <ref>) for velocity and orientation estimation, we can also obtain the vortex-induced vibration signals from data not shown here. These signals are characterized by the time-dependent data from the direction perpendicular to the flow. For example, as the boat moves in the B_x direction, VIV signals are obtained as time-dependent signals from channel B_y. This highlights the robustness and versatility of the approach in extracting valuable information from different channels.
§ CONCLUSIONS
This study presents the design of whisker-inspired sensors that can detect multi-directional water flow with a designed sensitivity. The sensitivity was tunable based on the designed whisker geometry, which we quantified using a model. We characterized how different variations in the whisker's shapes affected the magnetic field response in relation to flow velocity and orientation. Additionally, we investigated Vortex-Induced Vibrations (VIV) induced by different whisker structures at various flow velocities. Sensitivity was also tunable after manufacturing by modifying the immersion depth which is possible for surface aquatic vehicles. Our model also demonstrated the ability to design a sensor for a specific range of flow velocities. Once the sensor was manufactured, we improved the accuracy of the velocity prediction from the magnetic field by calibrating the model to the sensor's final parameters.
To test the capabilities of the whisker-inspired sensor, we implemented it on a commercially available RC boat and demonstrated its ability to estimate velocity in a static water environment. We then compared our estimated velocity with ground truth data captured by a survey system. The onboard sensor configuration and wireless Bluetooth data transmission make this whisker-inspired sensor system versatile, offering potential for use in various environments, including outdoor fieldwork, remote locations, or situations where WiFi access is restricted or unavailable.
§ ACKNOWLEDGMENT
The authors would like to thank Regan Kubicek from Carnegie Mellon University for early help with experimental setup and advice on the fabrication process, along with Suhan Kim from MIT for advice on data acquisition and inspiration. The authors also would like to thank Prof. Mitra Hartmann and Kevin Kleczka from Northwestern University for inspiration and feedback on sensor characterization. Finally, the authors would like to thank Prof. Michael Kaess and Warren Whittaker from Carnegie Mellon University for their generous help and training for the use of the water tank. This work was partially supported by MURI award number FA9550-19-1-0386.
neal2012hardware
M. Neal, T. Blanchard, A. Hubbard, N. Chauché, R. Bates, and J. Woodward,
“A hardware proof of concept for a remote-controlled glacier-surveying
boat,” Journal of Field Robotics, vol. 29, no. 6, pp. 880–890, 2012.
kasda2021low
M. Kasda, D. P. Kosasih, and H. D. Nugraha, “Low cost remote control barge
boat to feeder fish,” J. Mech. Eng. Res. Dev, vol. 44, no. 2, pp.
112–121, 2021.
jo2019low
W. Jo, Y. Hoashi, L. L. P. Aguilar, M. Postigo-Malaga, J. M. Garcia-Bravo, and
B.-C. Min, “A low-cost and small usv platform for water quality
monitoring,” HardwareX, vol. 6, p. e00076, 2019.
8706541
Y. R. Petillot, G. Antonelli, G. Casalino, and F. Ferreira, “Underwater
robots: From remotely operated vehicles to intervention-autonomous underwater
vehicles,” IEEE Robotics and Automation Magazine, vol. 26, no. 2, pp.
94–101, 2019.
yoerger2021hybrid
D. R. Yoerger, A. F. Govindarajan, J. C. Howland, J. K. Llopiz, P. H. Wiebe,
M. Curran, J. Fujii, D. Gomez-Ibanez, K. Katija, B. H. Robison,
et al., “A hybrid underwater robot for multidisciplinary
investigation of the ocean twilight zone,” Science Robotics, vol. 6,
no. 55, p. eabe1901, 2021.
wang2020development
R. Wang, S. Wang, Y. Wang, L. Cheng, and M. Tan, “Development and motion
control of biomimetic underwater robots: A survey,” IEEE Transactions
on Systems, Man, and Cybernetics: Systems, vol. 52, no. 2, pp. 833–844,
2020.
katzschmann2018exploration
R. K. Katzschmann, J. DelPreto, R. MacCurdy, and D. Rus, “Exploration of
underwater life with an acoustically controlled soft robotic fish,”
Science Robotics, vol. 3, no. 16, p. eaar3449, 2018.
teoh2018rotary
Z. E. Teoh, B. T. Phillips, K. P. Becker, G. Whittredge, J. C. Weaver,
C. Hoberman, D. F. Gruber, and R. J. Wood, “Rotary-actuated folding
polyhedrons for midwater investigation of delicate marine organisms,”
Science Robotics, vol. 3, no. 20, p. eaat5276, 2018.
patterson2020untethered
Z. J. Patterson, A. P. Sabelhaus, K. Chin, T. Hellebrekers, and C. Majidi, “An
untethered brittle star-inspired soft robot for closed-loop underwater
locomotion,” in 2020 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS). IEEE,
2020, pp. 8758–8764.
aubin2019electrolytic
C. A. Aubin, S. Choudhury, R. Jerch, L. A. Archer, J. H. Pikul, and R. F.
Shepherd, “Electrolytic vascular systems for energy-dense robots,”
Nature, vol. 571, no. 7763, pp. 51–57, 2019.
tuhtan2020underwater
J. A. Tuhtan, S. Nag, and M. Kruusmaa, “Underwater bioinspired sensing: New
opportunities to improve environmental monitoring,” IEEE
Instrumentation & Measurement Magazine, vol. 23, no. 2, pp. 30–36, 2020.
leonard2010coordinated
N. E. Leonard, D. A. Paley, R. E. Davis, D. M. Fratantoni, F. Lekien, and
F. Zhang, “Coordinated control of an underwater glider fleet in an adaptive
ocean sampling field experiment in monterey bay,” Journal of Field
Robotics, vol. 27, no. 6, pp. 718–740, 2010.
4749579
R. Stopforth, S. Holtzhausen, G. Bright, N. S. Tlale, and C. M. Kumile,
“Robots for search and rescue purposes in urban and underwater environments
- a survey and comparison,” in 2008 15th International Conference on
Mechatronics and Machine Vision in Practice, 2008, pp. 476–480.
9987047
Y. Wang, Z. Guo, and J. Xu, “Underwater search and rescue robot based on
convolutional neural network,” in 2022 IEEE 4th International
Conference on Civil Aviation Safety and Information Technology (ICCASIT),
2022, pp. 786–790.
picardi2020bioinspired
G. Picardi, M. Chellapurath, S. Iacoponi, S. Stefanni, C. Laschi, and
M. Calisti, “Bioinspired underwater legged robot for seabed exploration with
low environmental disturbance,” Science Robotics, vol. 5, no. 42, p.
eaaz1012, 2020.
6588608
F. Zhang, J. Thon, C. Thon, and X. Tan, “Miniature underwater glider: Design
and experimental results,” IEEE/ASME Transactions on Mechatronics,
vol. 19, no. 1, pp. 394–399, 2014.
dehnhardt_hydrodynamic_2001
G. Dehnhardt, B. Mauck, W. Hanke, and H. Bleckmann,
“enHydrodynamic Trail-Following in Harbor
Seals ( Phoca vitulina ),”
enScience, vol. 293, no. 5527, pp. 102–104,
July 2001. [Online]. Available:
<https://www.science.org/doi/10.1126/science.1060514>
eberhardt_development_2016
W. C. Eberhardt, B. F. Wakefield, C. T. Murphy, C. Casey, Y. Shakhsheer, B. H.
Calhoun, and C. Reichmuth, “Development of an artificial sensor for
hydrodynamic detection inspired by a seal’s whisker array,”
Bioinspiration & Biomimetics, vol. 11, no. 5, p. 056011, Aug. 2016.
[Online]. Available:
<https://iopscience.iop.org/article/10.1088/1748-3190/11/5/056011>
jiang_underwater_2022
Y. Jiang, Z. Gong, Z. Yang, Z. Ma, C. Wang, Y. Wang, and D. Zhang, “Underwater
Source Localization Using an Artificial Lateral Line System
With Pressure and Flow Velocity Sensor Fusion,” IEEE/ASME
Transactions on Mechatronics, vol. 27, no. 1, pp. 245–255, Feb. 2022.
[Online]. Available: <https://ieeexplore.ieee.org/document/9367000/>
Whisker
G. Dehnhardt, B. Mauck, and H. Bleckmann, “Seal whiskers detect water
movements,” Nature, vol. 394, 1998.
adachi2022whiskers
T. Adachi, Y. Naito, P. W. Robinson, D. P. Costa, L. A. Hückstädt,
R. R. Holser, W. Iwasaki, and A. Takahashi, “Whiskers as hydrodynamic prey
sensors in foraging seals,” Proceedings of the National Academy of
Sciences, vol. 119, no. 25, p. e2119502119, 2022.
lee2012vision
D. Lee, G. Kim, D. Kim, H. Myung, and H.-T. Choi, “Vision-based object
detection and tracking for autonomous navigation of underwater robots,”
Ocean Engineering, vol. 48, pp. 59–68, 2012.
tan2011survey
H.-P. Tan, R. Diamant, W. K. Seah, and M. Waldmeyer, “A survey of techniques
and challenges in underwater localization,” Ocean Engineering,
vol. 38, no. 14-15, pp. 1663–1676, 2011.
kim_soft_2023
T. Kim, H.-S. Shin, K.-H. Nam, S. Bergbreiter, and Y.-L. Park, “Soft Airflow
Sensors With Artificial Hair Structures and Printed Ionogel
Channels for Wind Gust Detection for Small Uncrewed Vehicles,”
IEEE/ASME Transactions on Mechatronics, pp. 1–11, 2023. [Online].
Available: <https://ieeexplore.ieee.org/document/10013774/>
liu2022artificial
G. Liu, Y. Jiang, P. Wu, Z. Ma, H. Chen, and D. Zhang, “Artificial whisker
sensor with undulated morphology and self-spread piezoresistors for diverse
flow analyses,” Soft Robotics, 2022.
zheng20223d
X. Zheng, A. M. Kamat, A. O. Krushynska, M. Cao, and A. G. P. Kottapalli, “3d
printed graphene piezoresistive microelectromechanical system sensors to
explain the ultrasensitive wake tracking of wavy seal whiskers,”
Advanced Functional Materials, vol. 32, no. 47, p. 2207274, 2022.
piezoresistivesensor2
G. Liu, Y. Jiang, P. Wu, Z. Ma, H. Chen, and D. Zhang, “Artificial whisker
sensor with undulated morphology and self-spread piezoresistors for diverse
flow analyses,” Soft Robotics, vol. 00, no. 00, 2022.
eberhardt2016development
W. C. Eberhardt, B. F. Wakefield, C. T. Murphy, C. Casey, Y. Shakhsheer, B. H.
Calhoun, and C. Reichmuth, “Development of an artificial sensor for
hydrodynamic detection inspired by a seal’s whisker array,”
Bioinspiration & biomimetics, vol. 11, no. 5, p. 056011, 2016.
na2018magnetostrictive
S. Na, J. Park, N. Jones, N. Werely, and A. Flatau, “Magnetostrictive whisker
sensor application of carbon fiber-alfenol composites,” Smart
Materials and Structures, vol. 27, no. 10, p. 105010, 2018.
6971622
S.-M. Na, M. Rice, G. Raghunath, V. Klimchenko, and A. B. Flatau,
“Magnetostrictive alfenol whisker sensor performance and sensitivity to
whisker thickness,” IEEE Transactions on Magnetics, vol. 50, no. 11,
pp. 1–4, 2014.
xu2021triboelectric
P. Xu, X. Wang, S. Wang, T. Chen, J. Liu, J. Zheng, W. Li, M. Xu, J. Tao, and
G. Xie, “A triboelectric-based artificial whisker for reactive obstacle
avoidance and local mapping,” Research, vol. 2021, 2021.
gul2018fully
J. Z. Gul, K. Y. Su, and K. H. Choi, “Fully 3d printed multi-material soft
bio-inspired whisker sensor for underwater-induced vortex detection,”
Soft robotics, vol. 5, no. 2, pp. 122–132, 2018.
wang2022underwater
S. Wang, P. Xu, X. Wang, J. Zheng, X. Liu, J. Liu, T. Chen, H. Wang, G. Xie,
J. Tao, et al., “Underwater bionic whisker sensor based on
Tuo Wang
(IEEE, ASME Student Member) received the B.S. degree in Mechanical Engineering from the University of Pittsburgh, Pittsburgh, PA, USA, in 2021, and the M.S. degree in Mechanical Engineering from Carnegie Mellon University, Pittsburgh, PA, USA, in 2023. He currently works as a Post-Baccalaureate Research Fellow with the SeNSE lab, Northwestern University, Evanston, IL, USA, on bio-inspired flow sensors. His research interests include bio-inspired robotics, wearable devices, and computer-aided engineering.
Teresa A. Kent
(IEEE Student Member) received a B.S. degree in Mechanical Engineering from the University of Maryland College Park, College Park, MD in 2017, and an M.S. in Mechanical Engineering from Carnegie Mellon, Pittsburgh, PA in 2019. She is currently a Ph.D. candidate at Carnegie Mellon University, Pittsburgh PA in the Robotics Institute. Her research interests include sensor design, whisker sensing, tactile sensing, applications for computer vision, and soft robotics.
Sarah Bergbreiter
(ASME Fellow, IEEE Member) joined the Department of Mechanical Engineering at Carnegie Mellon University as a Professor in the fall of 2018 after spending ten years at the University of Maryland, College Park. She started her academic career with a B.S.E. degree in electrical engineering from Princeton University in 1999. After a short introduction to the challenges of sensor networks at a small startup company, she received the M.S. and Ph.D. degrees from the University of California, Berkeley in 2004 and 2007 with a focus on microrobotics. Prof. Bergbreiter received the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2013 for her research on engineering robotic systems down to millimeter size scales. She has received several Best Paper awards at conferences like ICRA, IROS, and Hilton Head Workshop.
|
http://arxiv.org/abs/2307.05591v1 | 20230710175921 | SITTA: A Semantic Image-Text Alignment for Image Captioning | [
"Fabian Paischer",
"Thomas Adler",
"Markus Hofmarcher",
"Sepp Hochreiter"
] | cs.CV | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
SITTA: A Semantic Image-Text Alignment for Image Captioning
============================================================
Textual and semantic comprehension of images is essential for generating proper captions.
The comprehension requires detection of objects, modeling of relations between them,
an assessment of the semantics of the scene and, finally, representing the extracted knowledge
in a language space.
To achieve rich language capabilities while ensuring good image-language mappings,
pretrained language models (LMs) were conditioned on pretrained multi-modal
(image-text) models that allow for image inputs.
This requires an alignment of the image representation of the multi-modal model with the language representations of a generative LM.
However, it is not clear how to best transfer semantics detected by the vision encoder of the multi-modal model to the LM.
We introduce two novel ways of constructing a linear mapping that successfully transfers semantics between the embedding spaces of the two pretrained models.
The first aligns the embedding space of the multi-modal language encoder with the embedding space of the pretrained LM via token correspondences.
The second leverages additional data that consists of image-text pairs to construct the mapping directly from vision to language space.
Using our semantic mappings, we unlock image captioning for LMs without access to gradient information.
By using different sources of data we achieve strong captioning performance on MS-COCO and Flickr30k datasets.
Even in the face of limited data, our method partly exceeds the performance of other zero-shot and even finetuned competitors.
Our ablation studies show that even LMs at a scale of merely 250 M parameters can generate decent captions employing our semantic mappings.
Our approach makes image captioning more accessible for institutions with restricted computational resources.
§ INTRODUCTION
The task of image captioning aims at understanding the relationship between visual and textual data.
It allows machines to generate informative descriptions for images, which can be useful in various applications such as image retrieval, content-based image search, and accessibility for visually impaired individuals.
The main challenges thereby are semantic understanding, i.e. recognizing objects in an image, and grasping relationships between detected objects.
To tackle these challenges, traditional approaches jointly train a vision encoder and a language decoder on the task of image captioning <cit.>.
More recently, the focus has shifted toward large-scale foundation models (FMs) pretrained on data crawled from the web <cit.>.
Large-scale pretraining on vast amounts of data has given rise to a range of FMs <cit.>.
On one hand, FMs trained on textual data store knowledge about the world, such as relations between different objects and how they interact with each other <cit.>.
On the other hand, multi-modal models, such as CLIP <cit.>, excel at semantic understanding of visual inputs due to language supervision during their pretraining stage.
We aim to leverage the relational modeling capabilities of pretrained LMs and combine it with the semantic detection capabilities of pretrained multi-modal models for the task of image captioning.
In this regard, current approaches learn a mapping from CLIP space to the language space of a generative LM in an end-to-end fashion for the task of image captioning <cit.>.
However, this requires backpropagation of gradients through the LM which can result in mappings that do not preserve the semantics of an image, but rather stimulate the LM with nonsensical tokens to produce a valid caption <cit.>.
Moreover, leakage of gradient information can lead to security problems when LMs are deployed as a service to the public <cit.>.
We propose to align the embedding spaces of pretrained vision and language FMs via a linear mapping that preserves the semantics from the image to the language space.
We do this without using additional data by leveraging the CLIP text encoder in addition to the vision encoder.
By embedding tokens of the LM in CLIP space we establish a dataset of correspondences between their respective embedding spaces.
We then compute the mapping via a least-squares solution in closed form.
This approach, however, bears the disadvantage that the obtained solution will suffer from the same modality gap as inherent to CLIP <cit.>.
If we have access to a dataset with image-caption pairs, we can overcome this issue by computing the least-squares solution on pairs of image embeddings and tokens from captions embedded in the LM input space.
The computation of such a mapping merely requires minutes on a CPU.
During inference, we map a novel image to the language embedding space and retrieve a set of tokens that capture the contents of the image.
Finally, we feed this set of tokens together with a prompt to the LM in order to generate both semantically and grammatically valid captions.
We evaluate our semantic mappings on information retrieval and image captioning tasks.
Specifically, we first compute different mappings with and without additional data and evaluate their ability to transfer semantics from the image space to the language space via a simple retrieval task.
We find that mappings that are computed by leveraging additional data generally outperform mappings that suffer from the modality gap in CLIP.
Further, we find that in the face of limited data regularizing the linear mapping with an orthogonality constraint yields the best performance.
This indicates that structural similarities between pretrained embedding spaces can be exploited.
Next, we evaluate our method on image captioning on the MS-COCO <cit.> and Flickr30k <cit.> datasets.
Our method achieves decent performance on both datasets, while carrying merely 4 M trainable parameters.
Further, we transfer task-specific mappings across datasets and achieve a performance comparable to competitors that finetune the LM on captioning data.
Finally, we conduct ablation studies to determine the required scale and training paradigm to unlock caption generation.
Our results indicate that LMs trained with instruction finetuning can generate decent captions at a scale as little as 250M parameters.
§ METHODS
Our aim is to compute a linear mapping W: ℝ^d → ℝ^m that preserves semantically meaningful concepts from the d-dimensional CLIP output space to the m-dimensional LM embedding space.
In the following, we introduce two different methods to find W.
The first method relies on lexical matching of the vocabularies of the LM and a bimodal encoder like CLIP, while the second method relies on an external dataset.
§.§ Lexical Matching
Lexical matching relies on a multi-modal model that aligns image and text modalities in its embedding space, e.g. CLIP.
It fits corresponding tokens of the vocabularies of the CLIP language encoder and the LM (<ref>, a).
This depends on a good alignment of images and text in the joint embedding space, since ultimately during inference the mapping is applied to embedded images.
Even with an otherwise perfect mapping there still remains an error due to the modality gap <cit.> since, ultimately, we are mapping images while the mapping was fitted to text.
Let 𝒱_CLIP denote the vocabulary of the CLIP text encoder and let 𝒱_LM denote the vocabulary of the LM.
First, we tokenize 𝒱_LM with the CLIP tokenizer.
Then we embed 𝒱_LM in the CLIP output space, resulting in an embedding matrix C ∈ ℝ^{|𝒱_LM| × d}.
Likewise, we embed 𝒱_LM in the LM input space in the same order, resulting in an embedding matrix L ∈ ℝ^{|𝒱_LM| × m}.
The resulting one-to-one correspondences between the rows of C and L constitute a dataset on which we can fit the mapping W using a least-squares model.
Finally, we can apply W to map CLIP-embedded images to the LM embedding space.
Due to the alignment of the CLIP vision encoder and the CLIP language encoder, our mapping can be used to project images to the LM space while preserving their semantics.
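As a concrete illustration, the following minimal Python sketch fits such a lexical-matching mapping; the backbone names ("RN50", "gpt2"), the batching, and the lack of token-string cleanup are simplifying assumptions for illustration rather than the exact recipe used here.

import torch
import clip                                              # OpenAI CLIP package
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("RN50", device=device)         # any CLIP backbone (assumption)
lm_tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in for the actual LM
lm = AutoModelForCausalLM.from_pretrained("gpt2")

# The LM vocabulary V_LM in a fixed order; in practice the token strings may need
# cleanup (e.g., stripping byte-level whitespace markers) before CLIP-tokenizing them.
vocab = [lm_tok.convert_ids_to_tokens(i) for i in range(lm_tok.vocab_size)]

with torch.no_grad():
    # Rows of C: CLIP text embeddings of every LM token (batched over the vocabulary).
    C_rows = []
    for i in range(0, len(vocab), 1024):
        toks = clip.tokenize(vocab[i:i + 1024], truncate=True).to(device)
        C_rows.append(clip_model.encode_text(toks).float().cpu())
    C = torch.cat(C_rows)                                # (|V_LM|, d)
    # Rows of L: LM input embeddings of the same tokens, in the same order.
    L = lm.get_input_embeddings().weight[: lm_tok.vocab_size].detach().clone()  # (|V_LM|, m)

# Ordinary least squares: find W (stored as a d x m matrix) such that C @ W ≈ L.
W = torch.linalg.lstsq(C, L).solution

def map_image_to_lm_space(pixels):
    """Project a batch of preprocessed images into the LM embedding space via W."""
    with torch.no_grad():
        e = clip_model.encode_image(pixels.to(device)).float().cpu()
    return e @ W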
§.§ External Dataset
We can circumvent the need for a CLIP-like model if we have access to a dataset 𝒟 that provides image-text pairs, e.g., MS-COCO <cit.>.
First, we embed the images in 𝒟 using a (unimodal) vision encoder ϕ: 𝒳 → ℝ^d, where 𝒳 is the raw pixel space and d again denotes the embedding dimension.
This results in an image embedding matrix X_𝒟 ∈ ℝ^{|𝒟| × d}.
Then we preprocess the corresponding text labels in 𝒟 in the same order using the tokenizer of the LM and embed them in the LM embedding space.
This results in a token embedding matrix Y_𝒟 ∈ ℝ^{n × m}, where n denotes the number of embedded tokens obtained from the text labels in 𝒟.
Importantly, the number n can vary depending on the dataset used.
In the case of MS-COCO we obtain multiple tokens per caption, resulting in a one-to-many relationship between X_𝒟 and Y_𝒟 (<ref>, b).
Fitting a least-squares model to one-to-many correspondences is equivalent to mapping the input to the average of the corresponding outputs.
This implies a bias towards tokens that occur more frequently in the text.
To avoid a bias toward highly frequent but non-informative tokens, we perform stop-word removal and subsequent de-duplication.
Finally, we fit a least-squares model with inputs X_𝒟 and targets Y_𝒟 to find W.
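A corresponding sketch for this data-driven variant is shown below; the inputs (image_embs, captions, a stop-word set, and the LM's input-embedding matrix) are assumed to be precomputed, and the token filtering is deliberately simplified.

import torch

def fit_mapping(image_embs, captions, lm_tok, lm_emb_matrix, stop_words):
    """Fit W from paired data: image_embs[i] is phi(image_i), captions[i] its caption."""
    xs, ys = [], []
    for e, caption in zip(image_embs, captions):
        # Tokenize with the LM tokenizer, drop stop words, then de-duplicate.
        ids = lm_tok(caption, add_special_tokens=False)["input_ids"]
        kept = []
        for tid in ids:
            word = lm_tok.decode([tid]).strip().lower()
            if word and word not in stop_words and tid not in kept:
                kept.append(tid)
        # One-to-many correspondence: repeat the image embedding once per kept token.
        for tid in kept:
            xs.append(e)
            ys.append(lm_emb_matrix[tid])
    X = torch.stack(xs)                       # (n, d) image embeddings with repetitions
    Y = torch.stack(ys)                       # (n, m) LM input embeddings of caption tokens
    # Least squares X @ W ≈ Y; the orthogonal (Procrustes) variant can be swapped in here.
    return torch.linalg.lstsq(X, Y).solution  # (d, m)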
§.§ Image Captioning
Ultimately, our goal is to enable generation of language conditioned on visual inputs in a zero-shot and gradient-free manner.
Using our semantic mapping to pair a CLIP model with a generative LM unlocks language generation conditioned on visual input (<ref>, c).
In this regard, we assume access to a generative LM, for example the recently proposed Llama <cit.>.
We use the mapping W obtained by one of the previously described methods.
Let 𝒯 denote the tokens of 𝒱_LM, where each token t ∈ 𝒯 is embedded in the LM input space as L_t (a row of L).
Given an image x ∈ 𝒳, we compute an embedding e = ϕ(x) and select the set of top-k tokens by
𝒯^∗ = max^k_{t ∈ 𝒯} cossim(W e, L_t),
where max^k denotes an extension of the max operator returning the arguments of the k largest elements of a set and
cossim(a, b) = a^⊤ b / (‖a‖ ‖b‖)
is the cosine similarity.
Since LMs have been shown to be sensitive to prompt ordering <cit.>, we draw l random permutations of the set of tokens in 𝒯^∗ and concatenate them with the prompt "A picture of" to generate a set 𝒞 of l caption candidates.
The variables k = |𝒯^∗| and l = |𝒞| are hyperparameters of our method.
Finally, we use CLIP to determine the best caption in 𝒞.
Let CLIP_LM denote the text encoder and let CLIP_VM denote the vision encoder.
We then select the caption for x by
c^∗ = max_{c ∈ 𝒞} cossim(CLIP_LM(c), CLIP_VM(x)).
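Putting the pieces together, a sketch of the captioning loop could look as follows; it assumes retrieval against the LM's input-embedding matrix as described above, and the prompt wording, greedy decoding settings, and post-processing are illustrative choices rather than the tuned configuration.

import random
import torch
import clip                                    # OpenAI CLIP package

def caption_image(image, preprocess, clip_model, W, lm, lm_tok, k=8, l=40, device="cpu"):
    pixels = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        img_emb = clip_model.encode_image(pixels).float().cpu()        # (1, d)
    z = img_emb @ W                                                    # map into LM space
    lm_embs = lm.get_input_embeddings().weight.detach().cpu()          # (|V_LM|, m)
    sims = torch.nn.functional.cosine_similarity(z, lm_embs, dim=-1)   # (|V_LM|,)
    words = [lm_tok.decode([i]).strip() for i in sims.topk(k).indices.tolist()]

    candidates = []
    for _ in range(l):                         # l random orderings of the retrieved tokens
        random.shuffle(words)
        prompt = ", ".join(words) + ". A picture of"
        inputs = lm_tok(prompt, return_tensors="pt")
        out = lm.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy decoding
        text = lm_tok.decode(out[0], skip_special_tokens=True)
        candidates.append("A picture of" + text.split("A picture of")[-1])

    # Re-rank the candidates by CLIP image-text similarity and keep the best one.
    with torch.no_grad():
        txt = clip_model.encode_text(clip.tokenize(candidates, truncate=True).to(device)).float()
        img = clip_model.encode_image(pixels).float()
        scores = torch.nn.functional.cosine_similarity(txt, img, dim=-1)
    return candidates[int(scores.argmax())]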
§ EXPERIMENTS
This section is organized as follows.
First, we compute different mappings via lexical matching and external datasets and evaluate them on a retrieval task in <ref> and <ref>, respectively.
Then, we assess the semantic mappings for image captioning on the MS-COCO <cit.> and the Flickr30k <cit.> datasets, as well as cross-transfer between them in <ref>.
Finally, we also illustrate the effect of dataset size, model scale, and instruction finetuning.
§.§ Lexical Matching
We compare four different linear mappings and evaluate their capabilities of transferring semantics from image to text space:
* OLS: Ordinary Least Squares, as in <cit.>
* Ridge: Least Squares with Tikhonov regularization
* Procrustes: Least Squares with orthogonality constraint <cit.>
* RobProc: Iterative refinement of the Procrustes method <cit.>
Aside from OLS and Ridge we also consider the Procrustes method.
The Procrustes method constrains W to be a (semi-)orthogonal matrix, which allows identifying shared symmetries between the embedding spaces.
It is a common choice for aligning monolingual embedding spaces <cit.>.
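For reference, minimal implementations of the closed-form fits could look as follows; C and L are the correspondence matrices from the lexical-matching setup above (X_𝒟 and Y_𝒟 work identically), the ridge strength is a placeholder, and the iterative RobProc refinement is omitted.

import torch

def fit_ols(C, L):
    # W minimizing ||C W - L||_F
    return torch.linalg.lstsq(C, L).solution

def fit_ridge(C, L, lam=1.0):
    # Tikhonov-regularized least squares; lam = 1.0 is an arbitrary placeholder.
    d = C.shape[1]
    return torch.linalg.solve(C.T @ C + lam * torch.eye(d), C.T @ L)

def fit_procrustes(C, L):
    # Orthogonal Procrustes: W = U V^T from the SVD of C^T L; for d != m this
    # yields a semi-orthogonal W.
    U, _, Vh = torch.linalg.svd(C.T @ L, full_matrices=False)
    return U @ Vh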
To evaluate the alignment between the CLIP language space and the LM embedding space, we perform a 5-fold cross validation where a mapped sample counts as correctly classified if the closest token in the LM embedding space is the correct token.
We measure the similarity between a projected image and the tokens in the embedding space via cosine similarity.
We also experimented with retrieval via ℓ_2-distance when using mappings optimized via OLS, but did not observe a significant difference.
<ref> in the appendix shows the test accuracy of the different mappings.
OLS yields the best alignment across the two language embedding spaces.
The orthogonality constraint appears to be a strong regularization leading to lower accuracies on the test set.
Further, Tikhonov regularization does not yield any improvements over OLS for most vision backbones.
We demonstrate that our mappings preserve semantics of natural images in the language space.
To this end, we evaluate the mappings on a retrieval task on the validation split of the MS-COCO dataset <cit.>[available at https://cs.stanford.edu/people/karpathy/deepimagesent/].
We rank tokens in the LM embedding space according to their cosine similarity to a projected image.
Based on the obtained ranking we compute the Normalized Discounted Cumulative Gain (NDCG).
We consider all tokens as relevant that appear in a ground truth caption of an image.
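The metric itself reduces to a few lines; the sketch below uses binary relevance (a retrieved token is relevant iff it occurs in a ground-truth caption) and the standard logarithmic discount, which may differ in minor details from the exact evaluation script.

import math

def ndcg(ranked_tokens, relevant_tokens):
    gains = [1.0 if t in relevant_tokens else 0.0 for t in ranked_tokens]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(sorted(gains, reverse=True)))
    return dcg / idcg if idcg > 0 else 0.0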
<ref> in the appendix shows some token rankings for several images.
In <ref> in the appendix we compare the four different linear mappings for various CLIP vision encoders and compare them to ranking in the CLIP space.
Our results show clear benefits for Procrustes over OLS or Ridge.
It appears that OLS and Ridge actually suffer heavily from the modality gap, thus do not transfer well from the image space to the language space.
This is reflected in their measured NDCG values, which are close to computing the NDCG on random token rankings.
These results indicate that the orthogonality constraint aids bridging the prevalent modality gap in CLIP.
Based on these results we only use the projections obtained by the Procrustes method when computed via lexical matching.
§.§ External Datasets
A mapping trained via External Datasets is not limited to a multi-modal pretrained encoder and, thus, not susceptible to the modality gap.
We train such a mapping using the training split of MS-COCO <cit.>.
Again, we conduct the retrieval experiment on the MS-COCO validation set.
<ref> in the appendix shows the NDCG for the trained mappings.
We observe that now OLS yields the highest NDCG on average.
This result is not surprising, since the higher amount of data prevents overfitting and diminishes the necessity for regularization.
However, we are also interested in cases where less data is available.
Therefore, we investigate how many datapoints are required to obtain a decent mapping between the embedding spaces.
To this end, we also train mappings on 1%, 2.5%, 5%, and 10% of the training split.
We also include other vision encoders pretrained in a self-supervised manner on images (BEIT), and pretrained on ImageNet21K <cit.> (ViT-L/16-IN21K).
<ref> shows the results.
Generally, the NDCG decreases as datasets get smaller.
Moreover, Procrustes outperforms OLS on small datasets.
This illustrates that the orthogonality constraint imposed by Procrustes is well-suited as regularization in the face of scarce data.
Further, the orthogonal constraint allows identifying structural similarities between the embedding spaces.
Generally, we find that encoders pretrained in a multi-modal manner achieve a slightly higher NDCG compared to encoders that are pretrained in a unimodal manner.
This may be explained by the language supervision that decreases going from multi-modal pretraining (CLIP) to supervised training on ImageNet (ViT-L/16-IN21K), and finally to unsupervised pretraining on images (BEIT).
Similar results were obtained by <cit.>, who train a linear mapping between various vision encoders and the LM in an end-to-end fashion.
In contrast to their approach, our orthogonal mapping is actually distance preserving and in turn enables identifying structural similarities between the embedding spaces.
Finally, the best performing mappings significantly outperform the baseline that performs the ranking directly in CLIP space, thus narrowing the modality gap (<ref> vs. <ref> in the appendix).
§.§ Image captioning
We demonstrate that our semantic mapping can be used in combination with a generative LM for image captioning.
We select the 7 B version of Llama <cit.> as our generative LM since it provides a good trade off between complexity and performance.
We report metrics commonly used for image captioning, such as BLEU-1, BLEU-4 <cit.>, ROUGE-L <cit.>, and CIDEr-D <cit.>.
Most prior works do not report error bars on metrics used for evaluation.
We consider error bars to be very important as they indicate the variability of the measurements and, therefore, provide them in the form of the standard error.
We compare SITTA to existing methods that transfer a pretrained LM to vision-language tasks in a zero-shot manner.
MS-COCO
We split the data according to <cit.> into 123 K/5 K/5 K for training, validation, and test and compute the semantic mappings with OLS between RN50x64 and Llama.
We search over a range of values for both hyperparameters k and l.
If k is too small, then there might not be enough information for the LM to construct good captions especially if multiple objects are present.
Choosing k too large, on the other hand, induces unnecessary noise.
Performance tends to increase with larger values of l, however, stagnates at a certain point.
The values of k=8 and l=40 performed best for our experiments on the MS-COCO dataset.
Further, we try different decoding strategies and vision backbones (<ref> and <ref> in the appendix).
Surprisingly, captioning performance of the largest ResNet variant exceeds the one of vision-transformer based architecture.
Further, only greedy decoding yields valid and meaningful captions in our setup.
We show results for our hyperparameter search on the MS-COCO validation set in <ref> in the appendix.
Our standard version of SITTA utilizes the mapping trained with OLS on external data in combination with the RN50x64 CLIP backbone.
<ref> shows the results for MS-COCO.
SITTA attains impressive performance and significantly exceeds other zero-shot methods.
Moreover, SITTA even outperforms LiMBeR which train a linear mapping between CLIP and a LM in an end-to-end fashion on the task of image captioning.
In contrast, we compute the linear mapping beforehand and do not directly optimize for image captioning, which requires backpropagation of gradients through the LM.
Additionally, our mapping carries less parameters and can be computed within minutes on a CPU.
Remarkably, we even reach better performance than some methods that perform fine-tuning on image captions (MAGIC and <cit.>).
We attribute this effect mostly to the more sophisticated LM which was trained on an excessive amount of tokens <cit.>.
<ref> shows some sample captions and the corresponding tokens present in the prompt for generation.
Utilizing a Procrustes mapping (SITTA_Procrustes) results in a slight drop in performance, due to the orthogonality constraint.
The mapping trained via lexical matching (SITTA_Lexical Matching) performs significantly worse than the mapping trained via external dataset.
We observe that SITTA_Lexical Matching is severely affected by the modality gap and only outperforms CLIPRe in terms of CIDEr-D and R-L.
Finally, we perform ablation studies on the effect of the random permutations in the token sequence, as well as using retrieval in CLIP space instead of our semantic mapping (<ref>).
We find that retrieval in CLIP space only slightly exceeds performance of SITTA_Lexical Matching, which indicates that the orthogonal mapping maintains the same best token ranking for the most part.
Further, inducing variation by permuting tokens in the prompt leads to a substantial improvement.
Flickr30k
We additionally evaluate our method on the Flickr30k dataset.
In this regard, we split the data according to <cit.> and compute mappings for OLS and Procrustes.
We elaborate on the hyperparameter search in more detail in <ref>.
We found the hyperparameters k=10 and l=15 to work best for Flickr30k.
Further, we again add BLIP-2 to obtain an upper bound on the performance.
The results are shown in <ref>.
As expected, BLIP-2 attains the highest scores, followed by SITTA.
Remarkably, SITTA even outperforms CapDec which finetunes a LM on image captions.
Finally, we observe a similar behavior for SITTA_Procrustes and
SITTA_Lexical Matching as on MS-COCO.
Transfer between datasets
Next, we investigate the cross-transfer of SITTA between the MS-COCO and Flickr30k datasets.
We compare SITTA to other methods that leverage additional data to finetune the LM.
<ref> summarizes the results.
SITTA outperforms MAGIC, <cit.>, and even CapDec on MS-COCO, which carries approximately 193 times more trainable parameters.
DeCap is the only method that reaches a higher CIDEr-D score than SITTA.
However, in contrast to DeCap we do not require extensive pretraining of the LM on unsupervised captioning data.
Further, we observe that SITTA_Procrustes consistently reaches higher CIDEr-D score than SITTA.
Our results indicate that the orthogonal constraint on the mapping facilitates transfer across datasets.
Effect of dataset size
We are interested in the amount of data required to obtain a mapping that results in decent performance on the captioning tasks.
This is especially interesting since an increased amount of data results in increasing memory requirements to compute the mappings.
However, some users might not have access to facilities that offer such resources.
In this regard, we train mappings with 1%, 2.5%, 5%, and 10% of the MS-COCO dataset.
<ref> shows the performance of SITTA using mappings trained with the respective sizes.
As expected, performance generally decreases with lower dataset sizes.
In the face of limited data OLS tends to overfit, which leads to plummeting performance.
However, the orthogonality constraint provides a good regularization which results only in a slight drop in performance.
Remarkably, by leveraging merely 2.5% of the data available in the MS-COCO training split, SITTA still outperforms LiMBeR in terms of CIDEr-D score.
Model Scale and Instruction Finetuning
We take a closer look at the effect of scale and the training paradigm on caption generation.
To investigate the effect of model scale we exchange Llama with varying sizes of the popular T5 model <cit.>.
Particularly, we evaluate model sizes of 250M, 720M, 3B, and 11B scales, as well as their instruction-finetuned counterparts <cit.>.
We use T5-v1.1 since it is an improved version of the original T5 model.
Further, we include GPT-J <cit.> and its instruction-finetuned counterpart, namely GPT-JT.
GPT-J was originally used in LiMBeR and has a similar size as Llama.
For the instruction-finetuned variants we change the prompt to "Generate a caption containing the following objects: t_1, t_2, …, t_k", where t_i corresponds to the i-th retrieved token in the language space.
The results can be observed in <ref>.
We find that captioning using our semantic mapping even works for models of comparably low complexity, i.e., 250M parameters for FLAN-T5-BASE.
Also, instruction finetuning appears to be extremely important for T5.
While all metrics improve from T5-BASE to T5-XL, we experience a sharp drop for T5-XXL.
We suspect this effect arose from employing 8-bit quantization <cit.> in order to fit the model on our GPUs.
We also observed that T5 generally tends to repeat the tokens provided in the prompt.
This might be due to the multitask pretraining strategy, where T5 has encountered a fixed set of tasks, which greatly differ from our captioning task.
GPT-J generally performs worse than Llama, but we still reach a higher CIDEr-D score than LiMBeR.
The performance improvement of Llama over GPT-J can be attributed to the excessive number of tokens Llama has observed during its pretraining stage.
We also observe that the captioning performance drastically drops for GPT-JT.
However, we believe these results can be traced back to a suboptimal prompt for GPT-JT and surmise that a different prompt strategy may drastically improve its performance.
Our results suggest that instruction finetuning can unlock image captioning, as can be observed for FLAN-T5.
Also, it enables much smaller models to generate decent captions which results in substantial speedup and enhanced accessibility.
§ RELATED WORK
Foundation models
FMs <cit.>, such as GPT-3 <cit.>, demonstrated remarkable few-shot capabilities.
Since then a wide variety of different language models have been proposed, like Chinchilla <cit.>, PALM <cit.>, BLOOM <cit.>, OPT <cit.>, and Llama <cit.>, among many others.
As shown by <cit.>, pretrained LMs learn and store world knowledge during their pretraining stage.
Further finetuning such models on instruction following and human feedback enables powerful conversational models <cit.>.
Naturally, interest has sparked in combining vision and text data during pretraining <cit.>.
This finally resulted in large-scale multi-modal models, such as CLIP <cit.>, or ALIGN <cit.>.
Other works have improved upon CLIP by improved objectives <cit.>, or leveraging pretrained components <cit.>.
Moreover, vision FMs have been demonstrated to be well adaptable to foreign domains <cit.>.
Image Captioning
The task of image captioning has been widely considered in the literature <cit.>.
Early works employed pretrained image classification models <cit.> or domain specific object detectors <cit.>.
Further, attention mechanisms were deployed to allow attending to different visual cues <cit.>.
More recent works leverage the Transformer architecture <cit.>.
For decoding the visual features to text early works have used the LSTM architecture <cit.>.
More recently the focus shifted towards pretraining on vast datasets of paired image-text data and subsequent finetuning for image captioning <cit.>.
In turn, other works have focused on leveraging the generation capabilities of pretrained LMs and condition them on visual inputs <cit.>.
Finally, <cit.> apply parameter efficient finetuning to a pretrained LM to condition it on visual input.
Zero-Shot Transfer of LMs to vision tasks
Thus far, there is no concurrent definition of zero-shot image captioning in the literature that is applicable to all possible setups.
Many prior works labelled as zero-shot leverage large quantities of additional data to finetune certain components of their architecture, e.g., training or finetuning an LM on captioning data <cit.>.
We position our work as zero-shot transfer of the pretrained LM to vision-language tasks, since we do not use any additional data to adapt any part of the LM in any form.
This category comprises existing approaches such as ZeroCap <cit.>, ESPER-Free <cit.>, and Socratic Models (SMs).
Further, this category also encompasses methods that utilize additional paired data to train a mapping model from vision to language, but do not alter the LM component.
Such methods include LiMBeR <cit.>, ClipCap <cit.>, BLIP-2 <cit.>.
These methods use architectures similar to ours but with the difference that they train or finetune in an end-to-end fashion using large amounts of data.
Therefore, we consider their results as an upper bound for ours.
Further, as illustrated by <cit.>, due to the end-to-end training these methods are not guaranteed to transfer semantics from vision to language.
Another form of zero-shot captioning is retrieving caption candidates from a large database of existing captions, as in CLIPRe <cit.>.
Semantics-preserving mapping from image to text
Other works considered semantics-preserving mappings by first mapping images to natural language prior to processing them via an LM.
Socratic Models <cit.> use scripted dialogues for communication between various FMs.
Other works leverage pretrained captioning modules <cit.> to generate captions for images which serve as input to a generative model trained from scratch for visual question answering (VQA).
<cit.> and <cit.> extended this framework by constructing few-shot exemplars for a pretrained LM.
In contrast to those works, our approach does not require scripted interaction templates, constructing few-shot exemplars, or training the caption generator from scratch.
More recently, <cit.> trained a vision encoder that quantizes images to text tokens, which enables subsequent text generation via a pretrained LM.
However, they require end-to-end training of an autoencoder in combination with a complex LM, as well as plenty of tokens to represent an image in the language space to generate meaningful captions.
Model-Stitching
The computation of our mapping network is reminiscent of model stitching <cit.>.
In model-stitching an encoder is stitched via a sparse linear transformation to a compatible decoder.
<cit.> use relative representation spaces to avoid the training of stitching layers.
In contrast, our mapping aligns the absolute embedding spaces of pretrained encoder and decoder models.
In the context of cross-modal information retrieval, <cit.> align the output spaces of CLIP and a language encoder with the Procrustes method.
Further, <cit.> use Procrustes to mitigate the modality gap of CLIP-like models for few-shot classification.
Our work differs in that we align the output space of CLIP with the LM embedding space which allows image-conditioned text generation.
§ REPRODUCIBILITY STATEMENT
We advocate for open research and reproducibility.
Therefore, we make all our code and our pretrained mappings used for evaluation publicly available at <https://github.com/ml-jku/semantic-image-text-alignment> to encourage research in this direction.
All pretrained language models we used are publicly available on the huggingface hub <cit.>.
For models exceeding the 7B scale (e.g. FLAN-T5-XXL) we use 8-bit quantization <cit.> and perform all our evaluations on A100 and A40 GPUs.
Importantly, we want to highlight that quantization can also be applied to models below or matching the 7B parameter scale to further reduce the memory footprint.
§ CONCLUSION
We introduced an efficient method to semantically map between the embedding spaces of a pretrained vision encoder and a generative LM.
The linear mapping can be computed in closed form either by leveraging pretrained multi-modal encoders or under the addition of paired image-text data.
The former exploits structural similarities of pretrained multi-modal models and generative language models and constructs a mapping from token correspondences.
The latter constructs a mapping directly from vision to language space using image-token correspondences from an external dataset.
The semantic mapping only comprises approximately 4 M parameters and training requires only several minutes on a CPU.
Our method outperforms existing related methods that are trained end-to-end or finetune the LM for captioning and require much more compute.
Further, our semantic mapping enables LMs at a scale of just 250 M parameters to be used for image captioning.
Thus, we make image captioning more accessible for users with limited resources, like academic research labs.
In the future we aim at adapting our method for multiple downstream tasks.
We are interested in computing the mappings on data originating from different tasks to endow our method with more sophisticated visual reasoning capabilities.
Another fruitful avenue may be computing a mapping without paired image-text data.
Inspired by <cit.>, one could only consider an image dataset and bootstrap potential captions from a pretrained LM using the mapping constructed via lexical matching and iteratively self-improve the mapping.
Following literature from the cross-lingual community <cit.>, it might even be feasible to compute the mapping entirely unsupervised.
§ ACKNOWLEDGEMENTS
The ELLIS Unit Linz, the LIT AI Lab, the Institute for Machine Learning, are supported by the Federal State Upper Austria. IARAI is supported by Here Technologies. We thank the projects AI-MOTION (LIT-2018-6-YOU-212), AI-SNN (LIT-2018-6-YOU-214), DeepFlood (LIT-2019-8-YOU-213), Medical Cognitive Computing Center (MC3), INCONTROL-RL (FFG-881064), PRIMAL (FFG-873979), S3AI (FFG-872172), DL for GranularFlow (FFG-871302), EPILEPSIA (FFG-892171), AIRI FG 9-N (FWF-36284, FWF-36235), ELISE (H2020-ICT-2019-3 ID: 951847), Stars4Waters (HORIZON-CL6-2021-CLIMATE-01-01). We thank Audi.JKU Deep Learning Center, TGW LOGISTICS GROUP GMBH, University SAL Labs Initiative, FILL Gesellschaft mbH, Anyline GmbH, Google, ZF Friedrichshafen AG, Robert Bosch GmbH, UCB Biopharma SRL, Merck Healthcare KGaA, Verbund AG, GLS (Univ. Waterloo) Software Competence Center Hagenberg GmbH, TÜV Austria, Frauscher Sensonic and the NVIDIA Corporation.
§ MAPPING BETWEEN LANGUAGE EMBEDDING SPACES
<ref> shows the average accuracy over five folds when computing a mapping according to lexical matching.
Prior work illustrated that as little as ten word correspondences are sufficient to train an orthogonal mapping between monolingual embedding spaces of closely related languages <cit.>.
This assumes that the two spaces share structures between them.
We expect this assumption to hold in our case as well — at least to a certain degree — since we map between embedding spaces trained on the same language.
We apply centering and scaling as preprocessing, since it drastically improved the performance of the mapping.
This aligns with findings of prior work who illustrated the benefits of mean centering as preprocessing for learning orthogonal mappings between monolingual embedding spaces <cit.>.
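A sketch of this preprocessing step is shown below, with statistics computed on the training correspondences and reused at test time; the epsilon is an arbitrary numerical safeguard.

import torch

def standardize(M, mean=None, std=None, eps=1e-8):
    # Center and scale the rows of M; pass the training statistics at test time.
    if mean is None:
        mean, std = M.mean(dim=0, keepdim=True), M.std(dim=0, keepdim=True)
    return (M - mean) / (std + eps), mean, std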
§ MAPPING IMAGES TO THE LANGUAGE SPACE
We show NDCG over the MS-COCO validation set for mappings trained via lexical matching (see <ref>), and external datasets (see <ref>).
§ ABLATION STUDIES
Effect of different vision encoders
We investigate the effect of different vision encoders on the captioning performance.
In this regard, we compare a ViT-based architecture <cit.> to a resnet-based one <cit.>.
Since there is no significant difference between the NDCG measure on the MS-COCO dataset (see <ref>), we did not expect to observe any significant differences in downstream performance.
Surprisingly, though, we observe a significant improvement in captioning performance when using a resnet encoder as shown in <ref>.
Different decoding strategies
As illustrated by <cit.>, the decoding strategy substantially affects human approval of generated captions.
Therefore, we evaluate different decoding strategies, including greedy decoding, sampling, top-k sampling, and nucleus sampling.
The results for the different decoding schemes are shown in <ref>.
Surprisingly, we found that SITTA generates the best captions using greedy decoding.
Other sampling strategies tend to hallucinate captions and tend to ignore the tokens provided in the prefix.
§ HYPERPARAMETER SEARCH
We search over different values for our hyperparameters k and l on the MS-COCO and on the Flickr30k validation sets.
The results are reported in <ref> for MS-COCO and <ref> for Flickr30k, respectively.
§ ABLATION STUDIES
We perform an ablation study to highlight the importance of variation in caption generation and our semantic mappings.
In turn, we add a method that only uses the CLIP retrieval on prompt augmented tokens, namely (SITTA_CLIP).
Since retrieval in CLIP space outperformed lexical matching on our retrieval task (see <ref>) we would expect these results to translate to the captioning task.
However, we observe that SITTA_CLIP only slightly outperforms SITTA_Lexical Matching in terms of CIDEr-D score as shown in <ref>.
This is due to the fact that during captioning we only consider the top eight tokens.
Since the rankings are similar across the highest ranked tokens, performance on the captioning task is similar.
To assess the importance of the random permutations, we add one more setting named SITTA_No-Perm which only creates a single caption and tokens are provided in the order from best to worst.
We find a drastic drop in performance when neglecting the random permutations.
This indicates that positioning the tokens from best to worst is not the most beneficial ordering for caption generation, and that the order strongly affects the generation process.
§ POTENTIAL SOCIETAL IMPACT
Our method uses foundation models, which were trained on uncurated datasets which were crawled from the internet.
Therefore, these models readily reflect prejudices and biases found on the web.
Consequently, our proposed captioning system might also bear these shortcomings.
In the worst case, this could lead to our method producing harmful contents.
Moreover, generative LMs as used by our method are known to be very sensitive to prompting <cit.> and can therefore be misused if a user gets to determine certain prompts or uses biased datasets for training our mappings.
§ TRAINING TIME OF MAPPINGS
We benchmark the time required for each linear model and each vision encoder used in this work.
In turn, we compute each mapping ten times on a Xeon(R) Gold 6154 CPU with 36 Cores and measure mean and standard deviation.
The results can be observed in <ref>.
Since different vision encoders use different dimensionalities in their latent space, the computation time varies strongly.
Generally, Ridge and Procrustes tend to be computed the fastest, whereas RobProc requires more training time since it refines the Procrustes mapping iteratively.
§ ISOMETRY OF EMBEDDING SPACES
We investigate the importance of language supervision during pretraining to form embedding spaces that are structurally similar.
Our results on the retrieval task for the Procrustes mapping already suggest that more language supervision during pretraining results in a better alignment of the language and vision embedding spaces.
We take a look at the resulting performance on image captioning in <ref>.
The results are obtained by using mappings computed via the Procrustes method.
Due to the orthogonality constraint the mapping implicitly preserves the structure of the image embeddings, and thus, gives a measure for similarity to the language space.
We observe a significant gap in performance from an informative language supervision during pretraining (RN50x64) to weak supervision via ImageNet labels (ViT-L/16-IN21K).
Further, no language supervision (BeIT-B/16) performs significantly worse than ViT-L/16-IN21K.
<cit.> make similar findings, however their linear mapping is not distance preserving since the mapping is not constrained to be orthogonal.
|
http://arxiv.org/abs/2307.07524v1 | 20230711024733 | Reducing Causality to Functions with Structural Models | [
"Tianyi Miao"
] | cs.AI | [
"cs.AI"
] |
Reducing Causality to Functions with Structural Models
Tianyi Miao
University of Pennsylvania
======================================================
The precise definition of causality is currently an open problem in philosophy and statistics. We believe causality should be defined as functions (in mathematics) that map causes to effects. We propose a reductive definition of causality based on Structural Functional Model (SFM). Using delta compression and contrastive forward inference, SFM can produce causal utterances like "X causes Y" and "X is the cause of Y" that match our intuitions. We compile a dataset of causal scenarios and use SFM in all of them. SFM is compatible with but not reducible to probability theory. We also compare SFM with other theories of causation and apply SFM to downstream problems like free will, causal explanation, and mental causation.
Keywords: Causal Modeling, Causation, Actual Causality
§ INTRODUCTION
What is causation? What does it mean to say one thing causes another? Is it possible to define causation in non-causal terms?
We can easily find examples where "correlation doesn't imply causation." Ice cream sales are positively correlated with deaths by drowning, but ice cream doesn't cause drowning. However, this doesn't tell us what causation really is. While probabilistic independence and correlation coefficients have clear mathematical definitions, the precise definition of causality remains a subject of ongoing debate.
Embracing a functional theory of causation, we argue that causality essentially is functions that map causes to effects.
While functions are distinct from probability theory and sufficiently general for scientific purposes, we can place additional constraints and formalize Structural Functional Model (SFM), which better fit intuitions in causal utterances:
* Forward inference from causes to effects:
* What if X? Y.
* Had it been X, it would have been Y.
* Actual causality (separating "actual causes" from background conditions):
* X causes/doesn't cause Y.
* X is/isn't the cause of Y.
* What is the cause of Y? X.
Throughout this paper, the word "function" exclusively denotes a mathematical function (Appendix <ref>). We'll never use it to mean "intended purpose or task" as in "the functions of cellphones include texting." The word "functional" is only used as the adjective form of "function."
For SFM, we'll explicitly separate its representation, inference, and learning <cit.>:
* Representation is the declarative model of "what the world is like."
* Inference assumes the representation is correct and answers queries regarding particular instances, such as computing values of unknown variables given known variables.
* Learning inductively constructs a representation from empirical data.
Such decoupling allows us to design general-purpose inference and learning algorithms that work for different task-specific representations.
§ REPRESENTATION: A ROADMAP
In this section, we build the representation of SFM by incrementally adding functions, directed graphs, composition, contrast, and delta compression into a unified model. Each additional component will help SFM better fit intuitions about causal utterances, sometimes at the cost of generality.
Motivated by theoretical and pragmatic benefits like simplicity, expressiveness, and computational efficiency, the definition of SFM is unambiguous, mathematical, and reductive. It contains no circular definition because it doesn't rely on causal concepts like intervention and agency.
§.§ Causal Relata
When we say "X causes Y", what kinds of things are X and Y? How do we represent a world? Classifying by causal relata, there are 4 kinds of causal relationships <cit.>:
* Token causation: I frequently water my flower in my garden, causing it to grow tall.
* Type causation: Watering a plant frequently causes it to grow tall.
* Token influence: How much I water my flower in my garden influences how tall it grows.
* Type influence: How much a plant is watered influences how tall it grows.
Influence relates variables (a variable can have one of many values); causation relates values of variables.
Tokens are specific; types are general. Since this type-token distinction applies to non-causal models too, it's not central to causality. SFM doesn't endorse any particular theory of physics or metaphysics, so it's up to the user to specify how variables correspond to real-world things.
Formally, let 𝒩 be a set of nodes (we use "nodes" instead of "variables" to avoid confusion with random variables) and 𝒟 be a function that maps nodes to their domains. For node u ∈ 𝒩, its domain 𝒟[u] is the set of values it can take on. An assignment is a function that maps each node to a value in its domain.
* A complete assignment a: 𝒩 → ⋃_{u ∈ 𝒩} 𝒟[u] assigns values to all nodes, satisfying ∀ u ∈ 𝒩: a(u) ∈ 𝒟[u].
* A partial assignment a_|𝒮: 𝒮 → ⋃_{u ∈ 𝒮} 𝒟[u] assigns values to a subset 𝒮 ⊆ 𝒩 of nodes, satisfying ∀ u ∈ 𝒮: a_|𝒮(u) ∈ 𝒟[u].
* a_|𝒮 ⊆ a iff ∀ u ∈ 𝒮: a_|𝒮(u) = a(u).
We use dictionary notations {node1: value1, node2: value2, …} for assignments (and discrete finite functions in general). Nodes, values, and assignments are different things.
Influence relates nodes (Water influences Growth), while causation relates assignments (Water:High causes Growth:Tall).
* The set of all complete assignments forms the Cartesian product ∏_{u ∈ 𝒩} 𝒟[u].
* A team R is a set of complete assignments <cit.>, so R ⊆ ∏_{u ∈ 𝒩} 𝒟[u]. R is a relation.
* For a modal/counterfactual/possible-world interpretation, each complete assignment a is a world.
Each node is a feature/property/aspect/variable of the world.
R is the set of possible worlds; (∏_{u ∈ 𝒩} 𝒟[u]) ∖ R is the set of impossible worlds.
* For a database interpretation, each a is an individual/person/record/item. R is a population containing many individuals. Each node is a property/attribute/feature of that individual.
* A complete assignment a satisfies team R iff a ∈ R.
* A team R is satisfiable iff R is nonempty. R is unsatisfiable iff R = ∅.
* If any domain 𝒟[u] is empty, the Cartesian product ∏_{u ∈ 𝒩} 𝒟[u] is empty and there's no satisfiable R, so we'll only consider nonempty domains.
* An assignment a_|𝒮 is permitted by R iff ∃ a ∈ R: a ⊇ a_|𝒮.
We call such an a an induced complete assignment of a_|𝒮.
* Partial assignments a_|𝒮_1, a_|𝒮_2, …, a_|𝒮_k are compatible with each other iff ∃ a ∈ R: ∀ i ∈ {1, 2, …, k}: a ⊇ a_|𝒮_i.
We will say "𝒳 influences 𝒴" and "a_|𝒳 causes a_|𝒴," where 𝒳 and 𝒴 are sets of nodes; a_|𝒳 and a_|𝒴 are partial assignments.
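These definitions translate directly into code; in the sketch below an assignment is a Python dict from nodes to values and a team is a list of complete assignments, a representation chosen purely for illustration.

def extends(a, partial):
    """True iff the complete assignment a extends the partial assignment."""
    return all(a[u] == v for u, v in partial.items())

def permitted(R, partial):
    """A partial assignment is permitted by team R iff some member of R extends it."""
    return any(extends(a, partial) for a in R)

def compatible(R, *partials):
    """Partial assignments are compatible iff one member of R extends all of them."""
    return any(all(extends(a, p) for p in partials) for a in R)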
§.§ A Functional Theory of Causation
Many causal scenarios are not reducible to probability theory. For example, flipping the light switch turns on the light, but doesn't affect the TV. This system of electric circuits is deterministic and fully-specified. We can consistently predict the "independence" between light switch and TV and what would happen given the switches' status, using functions alone without probabilities.
According to the functional theory of causality, causality essentially is mathematical functions (left-total, right-unique relations) that map causes to effects.
<cit.> briefly mentions that the cause (functionally) determines the effect.
<cit.> explicitly defend that causation is "a function of one variable (the cause) on to another (the effect)."
Structural Causal Model (SCM) <cit.> uses multi-input single-output functions in structural equations to represent "laws" or "mechanisms" of the world.
"Causality as functions" becomes immediately obvious once it's pointed out. For example,
* In y = f(x), we call x the independent variable and y the dependent variable, like how effects depend on causes.
* Describing "rain influences wheat growth" with = f(), the input-output mappings are:
* With no rain, wheat doesn't grow.
* With moderate rain, wheat grows moderately.
* With heavy rain, wheat grows very well.
* The light-switch-and-TV example can be described by Light = f_1(LightSwitch) and TV = f_2(TVSwitch).
Two key properties distinguish functions from other kinds of relations:
* Right-uniqueness: 1 input value cannot simultaneously associate with 2 or more different output values. Functions can only be many-to-one or one-to-one, never one-to-many.
This explains why causes "necessitate" or "are sufficient for" their effects (given the underlying function).
* (Possible) non-injectiveness: Some functions can map different input values to the same output value, like y=x^2 over real numbers. Non-injective functions cannot be inverted. This explains the asymmetry of causation: different causes can lead to the same effect.
Functional dependencies are properties of a team R ⊆ ∏_{u ∈ 𝒩} 𝒟[u]: For 𝒳, 𝒴 ⊆ 𝒩,
* Value-level dependency: We say "𝒴 functionally depends on a_|𝒳" (a_|𝒳 ⇝ 𝒴) or "a_|𝒴 functionally depends on a_|𝒳" (a_|𝒳 ⇝ a_|𝒴) when, given a_|𝒳, there exists exactly one a_|𝒴 that's compatible with a_|𝒳.
* Node-level dependency: We say "𝒴 functionally depends on 𝒳" (𝒳 ⇝ 𝒴) when a_|𝒳 ⇝ 𝒴 for every permitted a_|𝒳.
* Value-level and node-level dependencies can be different. In a(Y) = a(X_1) ∨ a(X_2) ∨ a(X_3), value-level {X_1: 1} ⇝ {Y: 1} is true; node-level {X_1} ⇝ {Y} is false; node-level {X_1, X_2, X_3} ⇝ {Y} is true.
Node-level functional dependency satisfies right-uniqueness: ∀ a_1, a_2 ∈ R: (a_1|𝒳 = a_2|𝒳) ⇒ (a_1|𝒴 = a_2|𝒴).
So there's a function f: {a_|𝒳 | ∃ a ∈ R: a ⊇ a_|𝒳} → {a_|𝒴 | ∃ a ∈ R: a ⊇ a_|𝒴} such that ∀ a ∈ R: a_|𝒴 = f(a_|𝒳).
We thus define functional determination:
* Node-level determination: We say "𝒳 functionally determines 𝒴 via f" (𝒳 ⇒_f 𝒴) when ∀ a ∈ R: a_|𝒴 = f(a_|𝒳).
* Value-level determination: We say "a_|𝒳 functionally determines a_|𝒴 via f" (a_|𝒳 ⇒_f a_|𝒴) when 𝒳 ⇒_f 𝒴 and a_|𝒴 = f(a_|𝒳).
In compliance with conventions from dependence logic <cit.> and relational databases <cit.>, functional dependency doesn't contain f, while our functional determination does.
Influence is node-level functional determination; causation is value-level functional determination. In a_|𝒴 = f(a_|𝒳), a_|𝒳 is the cause, a_|𝒴 is the effect, and f is an underlying mechanism/law-of-nature (since a_|𝒴 = f(a_|𝒳) is true in every possible world a ∈ R).
Generally, causality is the study of functional dependency (e.g. Armstrong's Axioms), functional determination, and relational independence <cit.>. It's nontrivial because these concepts cannot be reduced to probability theory.
We say "a_|𝒳 causes a_|𝒴" when 𝒳 ⇒_f 𝒴 and a_|𝒴 = f(a_|𝒳). We say "𝒳 influences 𝒴" when 𝒳 ⇒_f 𝒴.
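Using the same dict-based encoding, checking node-level functional dependency (and recovering the witnessing f when it exists) is a one-pass scan over the team; the OR example mirrors the one above but with two inputs for brevity.

def functionally_depends(R, X, Y):
    """Return (True, f) if Y functionally depends on X in team R, else (False, None)."""
    f = {}
    for a in R:
        key = tuple(sorted((u, a[u]) for u in X))        # a restricted to X
        val = tuple(sorted((u, a[u]) for u in Y))        # a restricted to Y
        if key in f and f[key] != val:
            return False, None                           # right-uniqueness violated
        f[key] = val
    return True, f                                       # X determines Y via the induced f

# Team encoding a(Y) = a(X1) or a(X2): {X1} alone does not determine Y, {X1, X2} does.
R = [{"X1": x1, "X2": x2, "Y": int(bool(x1 or x2))} for x1 in (0, 1) for x2 in (0, 1)]
print(functionally_depends(R, ["X1"], ["Y"])[0])         # False
print(functionally_depends(R, ["X1", "X2"], ["Y"])[0])   # True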
§.§ Directed Graphs
Previously, we first have a team R and then find functional determinations as properties of R. Now we take the opposite direction. We start with a set of functional determinations FDet = {𝒳_1 ⇒_{f_1} 𝒴_1, 𝒳_2 ⇒_{f_2} 𝒴_2, …, 𝒳_n ⇒_{f_n} 𝒴_n}, which then selects R_FDet ⊆ ∏_{u ∈ 𝒩} 𝒟[u] as the set of all a that satisfy FDet. Here "all" is necessary for defining a unique R_FDet, because functional dependencies and determinations are downward-closed (if R_1 satisfies FDet, then any subset R_2 ⊆ R_1 also satisfies FDet <cit.>).
When we draw diagrams to illustrate causal relationships, we want arrows to point from causes to effects.
Structural Causal Model (SCM) <cit.> generalizes this intuition, subsumes the graphical and potential-outcome frameworks, and is the most popular causal model in statistics, econometrics, and epidemiology. Our SFM inherits the following ideas from SCM:
* A causal system is represented as a (usually finite and acyclic) directed graph.
* One mechanism's effect can be another mechanism's cause. One function's output can be another function's input.
* A node's value is functionally determined by the values of its parents.
* Unlike SCM, our SFM doesn't use "intervention" in its definition at all (Section <ref>).
Besides nodes 𝒩 and domains 𝒟, an SFM ℳ = (𝒩, ℰ, 𝒟, ℱ) also has:
* ℰ ⊆ 𝒩 × 𝒩 is a set of directed edges.
* In the directed graph (𝒩, ℰ), a node u is exogenous (exo-node u ∈ 𝒩_exo) iff it's a root node; otherwise, it's endogenous (endo-node u ∈ 𝒩_endo).
* We write the exo-assignment a_|𝒩_exo as a_exo and the endo-assignment a_|𝒩_endo as a_endo.
* ℱ maps every endo-node u ∈ 𝒩_endo to exactly one structural function ℱ[u]: (∏_{p ∈ pa(u)} 𝒟[p]) → 𝒟[u], where pa(u) denotes the parents of u.
* ℱ[u]: a_|pa(u) ↦ a(u) maps an assignment over u's parents to a value of u.
* R_ℳ = {a ∈ ∏_{u ∈ 𝒩} 𝒟[u] | ∀ u ∈ 𝒩_endo: a(u) = ℱ[u](a_|pa(u))} is the set of all complete assignments satisfying ℳ.
Equivalently, ℳ specifies functional determinations FDet_ℳ = {pa(u) ⇒_{f_u} {u}}_{u ∈ 𝒩_endo}, where f_u(a_|pa(u)) = {u: ℱ[u](a_|pa(u))}.
Consider the SFM ℳ = (𝒩, ℰ, 𝒟, ℱ) below (a runnable sketch of this example follows the list):
* 𝒩 = {A, B, C, D, E}
* ℰ = {(A, B), (B, D), (C, D), (C, E)}
* 𝒟 = {A: ℝ, B: ℝ, C: ℝ, D: ℝ, E: ℝ}
* For simplicity, we'll abuse notation and write ℱ[u](a_|pa(u)) as ℱ[u](a):
ℱ[B](a) = a(A)^2
ℱ[D](a) = a(B) + a(C)
ℱ[E](a) = a(C) × 7
* A, C ∈ 𝒩_exo are exo-nodes; B, D, E ∈ 𝒩_endo are endo-nodes.
* A → B → D forms a causal chain, B → D ← C forms a "common effect" structure, and D ← C → E forms a "common cause" structure.
* {A: i, B: -1, C: 10, D: 9, E: 70} isn't an assignment over (𝒩, 𝒟), because the complex number i ∉ ℝ is outside of A's domain.
* {A: 2, B: 2, C: 2, D: 2, E: 2} is a complete assignment over (𝒩, 𝒟), but it doesn't satisfy ℳ.
* {A: 3, B: 9, C: -π, D: 9-π, E: -7π} is a complete assignment that satisfies ℳ, so ℳ is satisfiable.
* Therefore, partial assignments {A: 3, B: 9} and {D: 9-π, E: -7π} are permitted and compatible with each other.
* {D: -10, E: 7} isn't permitted because no a ∈ R_ℳ extends it.
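The example above can be made executable in a few lines; the dict-based encoding and the name solve for the forward-inference routine are illustrative conventions, not prescribed by SFM.

import math

parents = {"B": ["A"], "D": ["B", "C"], "E": ["C"]}
F = {
    "B": lambda a: a["A"] ** 2,
    "D": lambda a: a["B"] + a["C"],
    "E": lambda a: a["C"] * 7,
}

def solve(a_exo):
    """Forward inference: extend an exo-assignment to the unique complete assignment."""
    a = dict(a_exo)
    remaining = set(F)
    while remaining:
        for u in sorted(remaining):
            if all(p in a for p in parents[u]):          # all parents already assigned
                a[u] = F[u](a)
                remaining.remove(u)
                break
    return a

def satisfies(a):
    """Membership test for R_M: every endo-node obeys its structural function."""
    return all(a[u] == F[u](a) for u in F)

print(solve({"A": 3, "C": -math.pi}))                        # B=9, D=9-pi, E=-7*pi
print(satisfies({"A": 2, "B": 2, "C": 2, "D": 2, "E": 2}))   # False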
Some design choices of SFM inevitably restrict the kinds of functional dependencies that we can talk about:
* For simplicity, we only consider finitely many nodes, because no important application requires an infinite SFM.
* Not every set of functional determinations can be covered (entailed) by an SFM, even if we allow cycles.
Consider 𝒩 = {X, Y, Z} with real-valued domains: the team R_1 = {a | a(X)^2 = a(Y) = a(Z)^2} has functional determinations {X} ⇒ {Y} and {Z} ⇒ {Y}. There's no SFM ℳ with R_ℳ = R_1.
Generally, SFM cannot represent one node being functionally determined by multiple "separate" functions/mechanisms, each individually sufficient for its value. This differs from symmetric overdetermination (Section <ref>), which is just multi-input Boolean OR.
* The intersection of SFMs, however, can cover any set of functional determinations.
We say a satisfies the SFM-intersection over (ℳ_1, ℳ_2, …, ℳ_n) if a ∈ ⋂_{i=1}^n R_{ℳ_i} (a satisfies every individual ℳ_i).
For any set of functional determinations FDet over finite 𝒩, there exists a finite SFM-intersection that covers it.
Since 𝒩 is finite, FDet is finite.
For every 𝒳_i ⇒_{f_i} 𝒴_i in FDet, we construct ℳ_i = (𝒩, ℰ_i, 𝒟, ℱ_i) with edges ℰ_i = 𝒳_i × 𝒴_i and structural functions ℱ_i[y]: a_|𝒳_i ↦ f_i(a_|𝒳_i)(y) for y ∈ 𝒴_i. The SFM-intersection over all ℳ_i entails FDet.
An SFM-intersection-proper is an SFM-intersection that cannot be entailed by an SFM.
Besides a(X)^2 = a(Y) = a(Z)^2, SFM-intersection-proper can express autonomous differential equations like d/dt x(t) = f(x(t)) while SFM cannot. The differential operator d/dt is also a function, so we derive 2 functional determinations: A ⇒_{d/dt} B and A ⇒_f B. Here {A: x(t), B: x'(t)} is permitted iff x'(t) = f(x(t)).
* Why do people dislike SFM-intersection?
It's nearly impossible to find an uncontrived, everyday causal system that's only describable by SFM-intersection-proper. <cit.> even explicitly formulates the Principle of Causal Exclusion against "more than one sufficient cause" in this spirit.
This intuitive dislike is unjustified, but when taken as a primitive desideratum, it entails people's preference of some SFMs over others for modeling reality.
We suggest 2 possible reasons for disliking SFM-intersection-proper:
* Intersection of multiple SFMs creates too much mental computational burden and people prefer simpler models.
In many cases (Section <ref>, <ref>), people dislike the very form of SFM-intersection, even though the underlying R=R_ can be modeled by some SFM .
* SFM-intersection-proper suffers from the possibly-unsatisfiable-laws objection (PULO), which applies to any set of functional dependencies FDep={_i _i}_i=1^n such that some {f_i}_i=1^n makes FDet={_i _i}_i=1^n unsatisfiable.
No world satisfies FDet, but our actual world exists, so we must reject FDet. PULO takes one unjustified step further, suggesting that FDep should also be rejected, even if some other {g_i}_i=1^n makes {_i _i}_i=1^n satisfiable, because FDep "opens the gate" to unsatisfiable laws. From another perspective, PULO expresses a desire for guaranteed satisfiability under any function set.
For example, FDep = {{X}{Y}, {Z}{Y}} suffers from PULO because FDet = {{X}{Y}, {Z}{Y}} is unsatisfiable over real-valued domains.
* Why do we make SFM acyclic?
PULO strikes again: When there are self-loops or cycles in the graph, there exist function sets that make the SFM unsatisfiable, such as A=A+1 and {A=B+1; B=A+1}:
* Besides simplicity and intuitive appeals, finite acyclic SFM has other nice properties (Section <ref>):
* is satisfiable for any .
* _exo functionally determines _endo via _endo⊆ = (, _exo).
Are they worth the price of rejecting many (possibly satisfiable) sets of functional dependencies? We're unsure.
* Different SFMs _1 _2 over the same (, ) can be "semantically equivalent" R__1=R__2, which entails "_|=f(_|) in _1 iff _|=f(_|) in _2", including (_1, _exo)=(_2, _exo) for all _exo.
We'll only consider functional determinations that can be modeled by finite acyclic SFMs, where an endo-node is functionally determined by its parents.
§.§ Composition and Decomposition
Since _exo functionally determines _endo via _endo⊆(, _exo) (Section <ref>), we produce all causal utterances as "_exo causes _endo."
This syntax is simple, but an ostensible flaw is that only exo-assignments can be causes. In A → B → C, we cannot say "{B: b} causes {C: c}" because B is an endo-node. This problem is solved by considering the sub-SFM B → C, where B becomes an exo-node. Sub-SFM generalizes <cit.>'s surgical intervention, which cuts off all incoming edges to the nodes under intervention.
_sub = (_sub, _sub, _sub, _sub) is a sub-SFM of = (, , , ) when:
* (_sub, _sub) is a subgraph of (, ), i.e. _sub⊆, _sub⊆, and (u, v) ∈_sub⇒ (u ∈_sub) (v ∈_sub).
* ∀ u ∈_sub|endo: _sub(u)=(u).
* ∀ u ∈_sub: _sub[u] = [u]
* ∀ u ∈_sub|endo: _sub[u] = [u]
An exo-node in can be nonexistent or exogenous in _sub; an endo-node in can be nonexistent, exogenous, or endogenous (with the same parents and structural function) in _sub, so the mechanisms-of-nature are preserved.
We can compose a set of smaller SFMs {_1, _2, …, _m} into a bigger SFM without altering any structural function, if the following prerequisites are met for any pair of (_i, _j):
* ∀ u ∈_i ∩_j: _i[u]=_j[u]
* ∀ u ∈_i|endo∩_j|endo: _i(u) = _j(u)
* ∀ u ∈_i|endo∩_j|endo: _i(u) = _j(u)
These prerequisites ensure that the composition = (⋃_i=1^m _i, ⋃_i=1^m _i, ⋃_i=1^m _i, ⋃_i=1^m _i) is well-defined. For ⋃_i=1^m _i and ⋃_i=1^m _i,
* _i maps nodes to domains.
* _i maps nodes to structural functions.
* Functions (including and ) are binary relations.
* The union of sets/relations/functions is well defined.
* The prerequisites ensure that each node u has exactly one unique [u] and at most one unique [u] across all i, so ⋃_i=1^m _i and ⋃_i=1^m _i are right-unique and thus functions.
The decomposition of SFM is a set of sub-SFMs {_1, _2, …, _m} that can compose into . While composition of sub-SFMs (when allowed) is unique, there can be multiple different decompositions of an SFM, the most trivial being "keeping the original SFM itself" and the most fragmented being "one sub-SFM for each endo-node and its parents."
Composition shows how small, local, and simple sub-mechanisms can be pieced together into one big, global, and complex system, while decomposition breaks down a large system into small sub-mechanisms. Therefore, we can deductively reason about a big, unrepeatable event using its components and their interconnections.
With composition-decomposition, we can say "_exo causes _endo" relative to some sub-SFM.
§.§ Contrastive Causation
Currently, SFM can already perfectly express a causal system by correctly answering all "what's _endo if _exo" questions. But in causal utterances, people only say "the actual causes" and omit background conditions (Section <ref>).
The selection of actual causes takes 2 steps: contrast and omission. We'll discuss contrast in this section.
<cit.> believes causation is contrastive. Besides the 2-argument surface form (cause, effect), the 4-argument underlying form includes contrast on both sides:
* Surface form: Pam's throwing the rock caused the window to shatter.
* Contrastive form 1: Throwing the rock (rather than the pebble) caused the window to shatter (rather than crack).
* Contrastive form 2: Throwing the rock (rather than not throwing it) caused the window to shatter (rather than remain intact).
We specify 2 assignments _a, _c for contrastive causal utterance "_a|exo (rather than _c|exo) causes _a|endo (rather than _c|endo)":
* Actual assignment _a corresponds to the actual world (i.e. what actually happens).
* Contrastive assignment _c is selected using one of two heuristics:
* _c is a default/expected/normal/typical world; _a is an anomalous/unexpected deviation from the default.
Normality inevitably comes with value judgments, but contrast reduces "finding the actual causes" to "finding a default world," which is a nontrivial simplification.
* With _a available first, we tweak _a|exo into _c|exo by changing the values of a few exo-nodes of interest. We then obtain _c = (, _c|exo)=(, _a, _c|exo) through forward inference (Section <ref>).
This is common when too many nodes in _a have non-default values, or when there's no appropriate default world.
Our contrastive causation is slightly simpler than <cit.>'s and <cit.>'s, because we only need to specify one contrastive world _c (rather than many).
Contrast is common in our causal intuition:
* People often characterize causality as "changing the cause will also change the effect" or "making a difference." Ignoring the manipulation aspect of an agent changing an object, change is inherently contrastive - there's an old state that changes to a new state.
* Some philosophers try to define "event X causes event Y" as "X raises the probability of Y ([Y|X] > [Y|¬ X])." This definition fails to address causal asymmetry and spurious correlations <cit.>, so it's never popular among statisticians. However, the very idea of "raising" contains a contrast between a world with X and a world with ¬ X.
* The contrast of treatment effects is formalized in statistical causal inference. Using the potential outcome notations in <cit.>,
* causal risk difference: [Y^a=1 = 1] - [Y^a=0 = 1]
* causal risk ratio: [Y^a=1 = 1]/[Y^a=0 = 1]
* causal odds ratio: [Y^a=1 = 1] / [Y^a=1 = 0]/[Y^a=0 = 1] / [Y^a=0 = 0]
These measurements all involve a contrast between random variables Y^a=0 (effect under treatment 0) and Y^a=1 (effect under treatment 1).
* To understand a function y=f(x), we often record an initial input value x_0 and its corresponding output value y_0=f(x_0); we then change x_0 to x_1 and see how the output value y changes in response. For example, derivatives in calculus help quantify how "sensitive" the output is with respect to the input.
With actual assignment _a and contrastive assignment _c, we say "_a|exo (rather than _c|exo) causes _a|endo (rather than _c|endo)."
§.§ Delta Compression
To characterize omission in causal utterances, we consider = {u ∈ | _a(u) _c(u)}: the nodes that have different values in _a and _c. || is the Hamming distance between _a and _c. With _exo= ∩_exo and _endo= ∩_endo, the final causal utterance is "_a|_exo causes _a|_endo."
If something doesn't change, we don't mention it. We only mention the new values of changed nodes. This is an example of delta compression <cit.>:
Encoder wants to transmit a target file to Decoder. Encoder and Decoder can both access a reference file. The target file is only slightly different from the reference file, so their delta (change/difference) is much smaller than the target file itself. To reduce the amount of transferred data, Encoder computes the delta (using target and reference files) and sends it to Decoder; Decoder reconstructs the target file by adding the delta to the reference file.
Delta compression is widely used in version control, where we want to store many successive versions of the same file, but any 2 consecutive versions differ only slightly.
Consider nodes {A, B, C, D} with integer domains and assignments _0, _1:
* _0 = {A:1,B:2,C:3,D:4}
* _1 = {A:1,B:7,C:3,D:5}
* = {B, D}
* _0| = {B:2,D:4}
* _1| = {B:7,D:5}
People may prefer delta compression because it shortens causal utterances without losing information or introducing ambiguities. This saving of "mental bandwidth" is especially prominent when:
* We want to represent many _1 relative to one _0.
* Each _1 differs only slightly from _0, i.e. || is small relative to ||.
In default-actual contrasts, the default _c is kept constant for reference; in actual-tweaked contrasts, _a is held for reference.
We say _a|_exo causes _a|_endo, where = {u ∈ | _a(u) _c(u)} is the set of changed nodes.
§ INFERENCE
§.§ Constraint Satisfaction
During inference, we assume the SFM is true. An inference algorithm takes in assignment _| over known nodes ⊆ and a set of target nodes , whose values we're interested in inferring. It then checks whether _| is permitted and if so, returns one or more _| that's compatible with _|.
If all domains are finite, we can formulate SFM inference as a constraint satisfaction problem (CSP) <cit.> and use off-the-shelf CSP solvers for inference:
* The domain of node u ∈ is [u].
* For each u ∈_endo, its structural equation gives a (|(u)|+1)-ary constraint (u) = [u](_|(u)) over scope (u) ∪{u}.
* Each known value w_u=_|(u) for u ∈ is a unary constraint (u)=w_u over scope {u}.
CSP does have a few drawbacks:
* It's NP-complete in general.
* It's unnecessary for most thought experiments, where the SFMs are small and solvable by hand.
* It offers no guarantee for the existence or uniqueness of _|. For example, if y=f(x) is non-injective, different x can be compatible with the same y.
Thanks to right-uniqueness, inferring effects from causes is much easier.
§.§ Forward Inference
Forward inference infers effects from causes.
Given SFM , vanilla forward inference (VFI) computes = (, _exo), where ⊇_exo and of ∈ R_.
When =(, ) is finite and acyclic (and ∀ u ∈: [u] ∅),
* =(, , , ) is satisfiable for any ;
* _exo functionally determines _endo; itself is a function.
Intuitively, deterministically infers all effects given the root causes and mechanisms-of-nature.
(Forward Inference) In a finite acyclic SFM (with nonempty domains), for any exo-assignment _exo, there exists a unique complete assignment satisfying ⊇_exo and ∈ R_.
* Existence: Because is finite, is finite. Because is acyclic, there exists a topological order L of nodes: an ordered list of all nodes such that [(L[i], L[j]) ∈] ⇒ [i < j]. Using topological sort algorithms like depth-first-search and Kahn's algorithm, we can compute L in Θ(||+||) time; cycle detection is done simultaneously <cit.>.
Given _exo, we compute _1 sequentially from i=1 to i=|| inclusive:
* If L[i] ∈_exo, we assign _1(L[i]) ←_exo(L[i]).
* If L[i] ∈_endo, we assign _1(L[i]) ←[L[i]](_1|(L[i])).
* Any parent L[j] ∈(L[i]) must appear earlier (j < i) than its child L[i] because L is a topological order. _1(L[j]) must have already been assigned, so _1|(L[i]) is well-defined.
Because _1(L[i]) isn't modified after iteration i:
* If L[i] ∈_exo, _1(L[i])=_exo(L[i]) is always satisfied.
* If L[i] ∈_endo, _1(L[i]) = [L[i]](_1|(L[i])) is always satisfied.
Therefore, _1 ⊇_exo and _1 satisfies .
This constructive proof also specifies the algorithm =(, _exo), assuming every structural function [u] is computable.
* Uniqueness: Proving by contradiction, suppose instead that there's another _2 _1 satisfying _2 ⊇_exo and _2 ∈ R_. With topological order L, there exists a smallest integer i such that _1(L[i]) _2(L[i]).
Because L is a topological order, every parent L[j] ∈(L[i]) appears earlier (j < i). Since L[i] is the earliest node with different values, ∀ L[j] ∈(L[i]): _1(L[j])=_2(L[j]) and _1|(L[i]) = _2|(L[j]).
Because functions are right-unique, [L[i]](_1|(L[i])) = [L[i]](_2|(L[i])). Because (L[i]) is only modified at iteration i, _1(L[i])=_2(L[i]), which contradicts _1(L[i]) _2(L[i]). Therefore, _1 = _2; the induced complete assignment from an exo-assignment is unique.
Existence entails left-totality; uniqueness entails right-uniqueness, so (, _exo) itself is a function of _exo.
Since _exo functionally determines and _endo⊆, Armstrong's Axioms entail "_exo functionally determines _endo."
During forward inference, is also a computational graph, where edges indicate the order of computation. We start with exo-nodes and the computation "flows down" to endo-nodes, computing their values based on the previously computed values of their parents. Topological sort and graph traversal both take Θ(||+||) time under adjacency-list representation of graphs. For each u ∈_endo, [u] is computed exactly once.
In a finite acyclic SFM with nonempty domains, any partial assignment _| over any subset of exo-nodes ⊆_exo is permitted.
For every u ∈_exo∖, we assign an arbitrary _exo(u) ∈[u] since [u] ∅; for every u ∈, we assign _exo(u) ←_|(u), so _exo⊇_|. By Theorem <ref>, =(, _exo) satisfies _exo⊆∈ R_, so _|⊆_exo⊆∈ R_ and _| is permitted.
A finite acyclic SFM with nonempty domains is always satisfiable, regardless of its structural functions .
Because is finite (no infinite regress) and acyclic, there exists at least one root node u (Appendix <ref>). Because [u] ∅, we select an arbitrary value _|{u}(u) ∈[u]. Corollary <ref> says _|{u} is permitted, so ∃∈ R_: ⊇_|{u} and is satisfiable.
§.§ Functional Invariance
We use functional invariance to describe how a multi-input function's output doesn't change when some inputs have changed: TVs aren't affected by light switches; the output of f(x, y) = 2x is invariant to y given x. Notice that ceteris paribus (holding other input values constant) is well-defined only if there's a clear input-output distinction given by an underlying function.
In SFM, changing an exo-node's value cannot influence its non-descendants. This is deduced from alone. With non-injective functions, new parent values may map to the old child value, resulting in even fewer changed nodes. Equivalently, for ⊆_exo, _|_exo∖) functionally determines 's non-descendants.
(Invariance in SFM) In a finite acyclic SFM with _1, _2 ∈ R_ and changed nodes = {u ∈ | _0(u) _1(u)}:
If _0(u) _1(u), then u ∈⋃_v ∈_exoDe(v). (u's value differs in _0 and _1 only if it's the descendant of some node in _exo.)
Let u ∈ be any node such that _0(u) _1(u). If u ∈_exo, then u ∈_exo and we're done. If u ∈_endo, then because functions are right-unique, at least one parent p ∈(u) must have a different value (_0(p) _1(p)). We consider p as the new u and repeat this process recursively. Because the SFM graph is finite (no infinite regress) and acyclic, this path u ← p_1 ← p_2 ←… must terminate at some exo-node s ∈_exo (Appendix <ref>) such that _0(s) _1(s), which means s ∈_exo. The path shows u is a descendant of s.
§.§ Contrastive Forward Inference
Suppose we already have and _0 ∈ R_. To compute (, _1|exo), we still need to compute every [u]. Given all the unchanged nodes from functional invariance, can the graph structure help us reduce [u] evaluations?
Yes. With _exo = {u ∈_exo|_0|exo(u) _1|exo(u)}, the contrastive forward inference (CFI) algorithm _1 = (, _0, _1|_exo) evaluates [u] only when at least one parent of u has a changed value, so we don't recompute non-descendants of _exo. evaluates usually fewer (and always no more) structural functions than , especially when is small relative to , when there are many _i|exo queries relative to one reference _0, or when many structural functions are non-injective.
If we draw an SFM with all arrows pointing downwards, we visually cache a topological order of nodes. We can easily identify the descendants of changed nodes and only evaluate their structural functions, without recomputing the complete assignment.
Unlike functions, contrast isn't a fundamental and irreducible part of causality. It's just a popular heuristic with pragmatic benefits:
* Delta compression reduces the length of causal utterances.
* recomputes (usually) fewer structural functions than during forward inference.
§.§ Partial Forward Inference
By modifying depth-first search, we can also design partial forward inference algorithms, where we're only interested in a subset of endo-nodes ⊆_endo, so we don't have to compute values for all endo-nodes. Combined with , it further reduces the number of function evaluations, especially when is much smaller than _endo.
§.§ Inference in Practice
* VFI in Boolean circuits: A combinational logic circuit <cit.> is a finite acyclic SFM with {0, 1} domains and Boolean functions.
Each wire's value is 0 (no electrical current) or 1 (has current). A logic gate receives input wires and returns an output wire, like a structural function.
The output wire of one gate can be the input wire of another gate.
To infer the values of all wires given all input wires, we use VFI and produce causal utterances like "setting this input wire to 1 causes the output wire to be 0."
* CFI in GNU Make: GNU is a popular open-source software that automatically determines which pieces of a large program need to be recompiled <cit.>. Especially in C and C++, the source code needs to be compiled or linked into a target file, before the target file can be executed by the computer. In a , there are many rules. Each rule has a target file, a list of source files, and a recipe for compilation. The target file functionally depends on the source files. The target file of one rule can be a source file in another rule. This forms a finite SFM where files are nodes and rules specify edges and structural functions.
VFI compiles all files, but software development is a dynamic process:
We don't compile the files just once. We modify some files, see the results, and repeat.
Because compilation is time-consuming, it's costly to recompile all files after a modification.
Instead, we only need to recompile the descendants of modified files. Just like CFI, only recompiles the target file if any of its source files (parents) has been modified since the previous compilation, saving lots of time. We can produce causal utterances like "modifying this file causes the final compiled program to crash."
§ LEARNING
Learning causal models from statistical data is covered in depth by <cit.>, so we only discuss some philosophical cases where people prefer some SFM over others, given fully-specified possible worlds and laws-of-nature.
§.§ Thermometer and Temperature
We think high room temperature causes high thermometer reading, but not the other way round. Why?
It's common to introduce new nodes and see whether the small model remains true as a sub-SFM of a bigger model. Consider a new node "immersing thermometer in cold water" and all possible worlds are listed below:
Node
_1 0 0 0
_2 1 1 0
_3 0 0 1
_4 0 1 1
Without granting "intervention" any special status, we see that HighReadingHighTemperature and HighReading, ColdWaterHighTemperature aren't true in general, so the edge should point from to . People prefer simple SFMs that compose well with other SFMs that model the same world.
§.§ Light, Object, and Shadow
In a symmetric equation involving Light, Object, Shadow, any 2 nodes functionally determine the 1 remaining node. Why do we think the shadow is the effect? This asymmetric preference is entailed by people's general dislike of SFM-intersection:
* With multiple objects, Light, ShadowObject isn't true in general. When we add another object whose shadow rests entirely in another object's shadow, the system's light and shadow remain the same, thus violating right-uniqueness.
* Light, ShadowObject cannot SFM-compose with FactoryObject (objects determined by their production processes). Explicitly encoding both functional dependencies requires SFM-intersection.
* With one light source and multiple objects, Object(i), Shadow(i)Light holds for every , resulting in SFM-intersection.
* Object, ShadowLight cannot SFM-compose with HandLight (flashlight direction determined by hand movement) or TimeOfDayLight (the Sun's position determined by time of the day), unless we use SFM-intersection.
* Object, LightShadow can seamlessly compose with upstream and downstream SFMs without SFM-intersection.
§ BENCHMARK
Taking a data-centric approach, we compile a collection of thought experiments about causality and apply SFM to all of them. A good definition of causality should have no trouble fitting these causal scenarios. Unless otherwise mentioned, all domains are binary {0, 1}.
§.§ Sensitive to Default
* The assassin shoots the victim, causing the victim's death.
* →
* []() = ()
* Default _c = Assassin:0, Death:0
* Actual _a = Assassin:1, Death:1
* _exo = Assassin, _endo=Death
* _a|_exo=Assassin:1 causes _a|_endo=Death:1.
* At the last moment, the assassin changes his mind and doesn't shoot, causing the victim's survival.
* Same as above.
* Default _c = Assassin:1, Death:1
* Actual _a = Assassin:0, Death:0
* _exo = Assassin, _endo=Death
* _a|_exo=Assassin:0 causes _a|_endo=Death:0.
§.§ Causal Chain
The assassin shoots a bullet, which kills the victim.
* The assassin causes both the bullet and the death.
* →→
* []() = ()
[]() = ()
* Default _c = Assassin:0, Bullet:0, Death:0
* Actual _a = Assassin:1, Bullet:1, Death:1
* _exo = Assassin, _endo=Bullet, Death
* _a|_exo=Assassin:1 causes _a|_endo=Bullet:1, Death:1.
* (Sub-SFM) The bullet causes the death.
* →
* []() = ()
* Default _c = Bullet:0, Death:0
* Actual _a = Bullet:1, Death:1
* _exo = Bullet, _endo=Death
* _a|_exo=Bullet:1 causes _a|_endo=Death:1.
§.§ Connected Double Prevention
A bodyguard shoots the assassin before the assassin could shoot the victim. The victim survives.
* The bodyguard causes the assassin's death and the victim's survival.
* →→
* []() = ()
[]() = ()
* Actual _a = Bodyguard:1, Assassin:0, Survive:1
* _exo = Bodyguard
* Tweak _c|_exo=Bodyguard:0
* Tweaked _c = (, _a, _c|_exo) = Bodyguard:0, Assassin:1, Survive:0
* _endo = Assassin, Survive
* _a|_exo=Bodyguard:1 causes _a|_endo=Assassin:0, Survive:1.
§.§ Disconnected Double Prevention
The assassin puts poison in the victim's cup. The bodyguard puts antidote in the cup. The victim survives.
* Antidote causes the victim's survival.
*
(Poison) at (0,1) ;
(Antidote) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Poison) edge (Survive);
[->, style=thick] (Antidote) edge (Survive);
* []() = () ()
* Actual _a = Poison:1, Antidote:1, Survive:1
* _exo=Antidote
* Tweak _c|_exo=Antidote:0
* Tweaked _c = (, _a, _c|_exo) = Poison:1, Antidote:0, Survive:0
* _endo = Survive
* _a|_exo=Antidote:1 causes _a|_endo=Survive:1.
§.§ No Appropriate Default
Two chess players use a coin flip to decide who moves first. If the coin lands on head, the Player 1 moves first; otherwise, Player 2 moves first. It's difficult to identify a "default" world <cit.>.
* Coin landing on head causes Player 1 to move first.
* →
* []() = ()
* Actual _a = Head:1, Player1:1
* _exo=Head
* Tweak _c|_exo=Head:0
* Tweaked _c = (, _a, _c|_exo) = Head:0, Player1:0
* _endo = Player1
* _a|_exo=Head:1 causes _a|_endo=Player1:1.
* Coin landing on tail causes Player 2 to move first.
* Same as above.
* Actual _a = Head:0, Player1:0
* _exo=Head
* Tweak _c|_exo=Head:1
* Tweaked _c = (, _a, _c|_exo) = Head:1, Player1:1
* _endo = Player1
* _a|_exo=Head:0 causes _a|_endo=Player1:0.
§.§ Gardener and Queen
The flower lives iff at least one person waters it. The gardener is responsible for watering the flower, but the queen isn't <cit.>.
* The gardener's not watering the flower causes the flower's death; the queen's not watering it doesn't cause the flower's death.
*
(Gardener) at (0,1) ;
(Queen) at (2,1) ;
(Flower) at (1,0) ;
[->, style=thick] (Gardener) edge (Flower);
[->, style=thick] (Queen) edge (Flower);
* []() = () ()
* Default _c = Gardener:1, Queen:0, Flower:1
* Actual _a = Gardener:0, Queen:0, Flower:0
* _exo = Gardener, _endo=Flower
* _a|_exo=Gardener:0 causes _a|_endo=Flower:0.
§.§ OR Firing Squad (Symmetric Overdetermination)
Two assassins simultaneously shoot the victim. It takes only 1 bullet to kill the victim.
* Both assassins are responsible because "not killing" is default.
*
(A1) at (0,1) ;
(A2) at (2,1) ;
(Death) at (1,0) ;
[->, style=thick] (A1) edge (Death);
[->, style=thick] (A2) edge (Death);
* []() = () ()
* Default _c = Assassin1:0, Assassin2:0, Death:0
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo = Assassin1, Assassin2, _endo=Death
* _a|_exo=Assassin1:1, Assassin2:1 causes _a|_endo=Death:1.
* Assassin 1 causes nothing because had he not shot, Assassin 2 would've still killed the victim.
* Same as above.
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo) = Assassin1:0, Assassin2:1, Death:1
* _endo = ∅
* _a|_exo=Assassin1:1 causes _a|_endo=∅.
§.§ AND Firing Squad
2 assassins simultaneously shoot the victim. It takes at least 2 bullets to kill the victim.
* Both assassins are responsible because "not killing" is default.
*
(A1) at (0,1) ;
(A2) at (2,1) ;
(Death) at (1,0) ;
[->, style=thick] (A1) edge (Death);
[->, style=thick] (A2) edge (Death);
* []() = () ()
* Default _c = Assassin1:0, Assassin2:0, Death:0
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo = Assassin1, Assassin2, _endo=Death
* _a|_exo=Assassin1:1, Assassin2:1 causes _a|_endo=Death:1.
* Assassin 1 is individually responsible because had he not shot, the victim would've survived.
* Same as above.
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo) = Assassin1:0, Assassin2:1, Death:0
* _endo = Death
* _a|_exo=Assassin1:1 causes _a|_endo=Death:1.
§.§ Connected Preemption
Assassin 1 shoots the victim first. If the victim doesn't die, Assassin 2 will shoot. Had Assassin 1 not shot, the victim still would've died.
* Assassin 1 causes the victim's death and Assassin 2's not-shooting.
*
(A1) at (0,2) ;
(A2) at (4,2) ;
(D1) at (0,0) ;
(D2) at (4,0) ;
[->, style=thick] (A1) edge (D1);
[->, style=thick] (D1) edge (A2);
[->, style=thick] (D1) edge (D2);
[->, style=thick] (A2) edge (D2);
* []() = ()
[]() = ()
[]() = () ()
* Actual _a = Assassin1:1, EarlyDeath:1, Assassin2:0, LateDeath:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo)
= Assassin1:0, EarlyDeath:0, Assassin2:1, LateDeath:1
* _endo = EarlyDeath, Assassin2
* _a|_exo=Assassin1:1 causes _a|_endo=EarlyDeath:1, Assassin2:0.
* We cannot say Assassin1:1 causes LateDeath:1 because ∉_endo.
§.§ Disconnected Preemption
Assassin 1 shoots the victim first. Several moments later, Assassin 2 shoots unconditionally.
* Assassin 1 causes the victim's death.
*
(A1) at (0,2) ;
(A2) at (4,2) ;
(D1) at (0,0) ;
(D2) at (4,0) ;
[->, style=thick] (A1) edge (D1);
[->, style=thick] (D1) edge (D2);
[->, style=thick] (A2) edge (D2);
* []() = ()
[]() = () ()
* Actual _a = Assassin1:1, Assassin2:1, EarlyDeath:1, LateDeath:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo)
= Assassin1:0, Assassin2:1, EarlyDeath:0, LateDeath:1
* _endo = EarlyDeath
* _a|_exo=Assassin1:1 causes _a|_endo=EarlyDeath:1.
Time difference distinguishes preemption from symmetric overdetermination: To an extreme, we wouldn't regard immediate death and death in 100 years as the same event.
§.§ Relevant Background Conditions
Ignition requires both striking the match and oxygen present, but we only mention striking the match as the cause of fire.
* Striking the match causes ignition.
*
(Strike) at (0,1) ;
(Oxygen) at (2,1) ;
(Fire) at (1,0) ;
[->, style=thick] (Strike) edge (Fire);
[->, style=thick] (Oxygen) edge (Fire);
* []() = () ()
* Default _c = Strike:0, Oxygen:1, Fire:0
* Actual _a = Strike:1, Oxygen:1, Fire:1
* _exo = Strike, _endo=Fire
* _a|_exo=Strike:1 causes _a|_endo=Fire:1.
* While repeatedly striking a match in an oxygen-deprived container, there's no ignition. Pumping in oxygen causes the match to ignite.
* Same as above.
* Default _c = Strike:1, Oxygen:0, Fire:0
* Actual _a = Strike:1, Oxygen:1, Fire:1
* _exo = Oxygen, _endo=Fire
* _a|_exo=Oxygen:1 causes _a|_endo=Fire:1.
Similarly, a criminal wouldn't have committed the crime had the universe not existed/had he never been born, but we don't consider those as causes of the crime.
§.§ Irrelevant Background Conditions
The assassin simultaneously shoots the victim and whispers.
* Whispering doesn't cause anything.
*
(Whisper) at (0,1) ;
(Shoot) at (2,1) ;
(Death) at (1,0) ;
[->, style=thick] (Shoot) edge (Death);
* []() = ()
* Actual _a = Whisper:1, Shoot:1, Death:1
* _exo=Whisper
* Tweak _c|_exo=Whisper:0
* Tweaked _c = (, _a, _c|_exo) = Whisper:0, Shoot:1, Death:1
* _endo = ∅
* _a|_exo=Whisper:1 causes _a|_endo=∅.
* Shooting causes death.
* Same , _a as above.
* _exo=Shoot
* Tweak _c|_exo=Shoot:0
* Tweaked _c = (, _a, _c|_exo) = Whisper:1, Shoot:0, Death:0
* _endo = Death
* _a|_exo=Shoot:1 causes _a|_endo=Death:1.
Similarly, Socrates drinks hemlock at dusk and dies. Hemlock causes death, but dusk doesn't cause anything <cit.>.
§.§ Boulder and Hiker
A hiker sees a boulder rolling towards him, so he dodges and survives. Had he not dodged, he wouldn't have survived <cit.>. This is an ostensible counterexample to the transitivity of causation (boulder causes dodge, dodge causes survival, but boulder doesn't cause survival). "Transitivity" is better understood as SFM-composition.
* Boulder causes dodge and doesn't cause survival.
*
(Boulder) at (0,2) ;
(Dodge) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Boulder) edge (Dodge);
[->, style=thick] (Boulder) edge (Survive);
[->, style=thick] (Dodge) edge (Survive);
* []() = ()
[]()=() ()
* Actual _a = Boulder:1, Dodge:1, Survive:1
* _exo=Boulder
* Tweak _c|_exo=Boulder:0
* Tweaked _c = (, _a, _c|_exo) = Boulder:0, Dodge:0, Survive:1
* _endo = Dodge
* _a|_exo=Boulder:1 causes _a|_endo=Dodge:1, but ∉_endo.
* (Sub-SFM) Dodge causes survival.
*
(Boulder) at (0,1) ;
(Dodge) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Boulder) edge (Survive);
[->, style=thick] (Dodge) edge (Survive);
* []()=() ()
* Actual _a = Boulder:1, Dodge:1, Survive:1
* _exo=Dodge
* Tweak _c|_exo=Dodge:0
* Tweaked _c = (, _a, _c|_exo) = Boulder:1, Dodge:0, Survive:0
* _endo = Survive
* _a|_exo=Dodge:1 causes _a|_endo=Survive:1.
§.§ Bogus Prevention
Taking birth control pills is the cause of a woman not getting pregnant, but not the cause of a man not getting pregnant, although "birth control prevents pregnancy" is always true <cit.>.
* Birth control causes a woman to be unable to get pregnant.
*
(IsWoman) at (0,1) ;
(BirthControl) at (2,1) ;
(CanPregnant) at (1,0) ;
[->, style=thick] (IsWoman) edge (CanPregnant);
[->, style=thick] (BirthControl) edge (CanPregnant);
* []()= () ()
* Actual _a = IsWoman:1, BirthControl:1, CanPregnant:0
* _exo=BirthControl
* Tweak _c|_exo=BirthControl:0
* Tweaked _c = (, _a, _c|_exo) = IsWoman:1, BirthControl:0, CanPregnant:1
* _endo = CanPregnant
* _a|_exo=BirthControl:1 causes _a|_endo=CanPregnant:0.
* Birth control doesn't cause anything for a man.
* Same as above.
* Actual _a = IsWoman:0, BirthControl:1, CanPregnant:0
* _exo=BirthControl
* Tweak _c|_exo=BirthControl:0
* Tweaked _c = (, _a, _c|_exo) = IsWoman:0, BirthControl:0, CanPregnant:0
* _endo = ∅
* _a|_exo=BirthControl:1 causes _a|_endo=∅.
§.§ Backtracking Counterfactuals
Subjunctive conditionals <cit.> use forward inference, while indicative/backtracking/non-causal conditionals don't.
* (Subjunctive) If Shakespeare didn't write Hamlet, someone else would have.
*
(Boulder) at (0,2) ;
(Dodge) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Boulder) edge (Dodge);
[->, style=thick] (Boulder) edge (Survive);
[->, style=thick] (Dodge) edge (Survive);
* []() = ()
[]()=() ()
* Actual _a=Shakespeare:1, Writer2:0, Hamlet:1.
* Query _exo=Shakespeare:0
* Queried =(, _exo) = Shakespeare:0, Writer2:1, Hamlet:1.
* (Indicative) If Shakespeare didn't write Hamlet, someone else did.
*
(Boulder) at (0,1) ;
(Dodge) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Boulder) edge (Survive);
[->, style=thick] (Dodge) edge (Survive);
* []()= () ()
* Given , the only _|Writer2 compatible with Shakespeare:0, Hamlet:1 is Writer2:1.
§.§ Impossible Interventions
Unlike →, some functional dependencies contain all parents due to how the child is logically/conceptually/metaphysically defined, so it's impossible to add surgical interventions:
* The string "hello" is functionally determined by its first character being "h", second character being "e", …
* The average height of students in the class is functionally determined by the individual height of each student.
* Winning 2 out of 3 rounds is functionally determined by the result of each round.
They're often known as supervenience (Section <ref>).
§ DISCUSSION
§.§ Is SFM Insufficient?
Some may argue that since SFM and functions can have non-causal interpretations, they are insufficient for defining causality. We respond with 3 counterarguments:
* Some examples of insufficiency are results of misinterpretation. For example, student ID functionally determines all attributes (name, age, course registration, etc.) of a student in a database, but changing a student's ID won't cause changes in those attributes. This example doesn't hold because if we allow arbitrary changes to ID, there could be repeated IDs in different rows and ID no longer functionally determines other attributes.
* Incorrect causal models (e.g. "cancer causes smoking") are still causal, unlike non-causal models (e.g. correlations, symmetric equations), which don't use functions at all. Since SFM comes from the conceptual analysis of what "causation" should mean, its definition cannot include all empirical facts about our world.
* People often use causal interpretations to understand purely mathematical functions. When we say "changing the independent variable x causes the dependent variable y to change," we're using CFI.
§.§ A Case Against Actual Causality
Delta compression and CFI are slightly useful heuristics that also fit our intuitions. However, the assumption that there exists a fixed set of "actual causes" is questionable in complex systems.
A circuit has n binary switches _exo={X_1, X_2, …, X_n} and 1 light bulb _endo={Y}, where the n switches functionally determine the light via a Boolean function f: {0, 1}^n →{0, 1}.
Given the state of all switches and the light, which switches are the "actual causes" of the light being on/off?
There are 2^2^n different n-input 1-output Boolean functions f. For each f, there are 2^n different possible worlds. Proponents of actual causality must accept one of the following:
* Provide an algorithm that can identify actual causes in 2^2^n× 2^n = 2^(2^n+n) situations. Case-by-case analyses won't scale.
* Admit that contrast, default, and actual causality belong to an imperfect mental heuristic that would fail in complex systems.
Graphical models don't help because there's only one Boolean function and we shouldn't insert hypothetical intermediate nodes. For ⊆_exo, SFM can answer "is _0| the cause of _0|endo?" by tweaking _0| into _1|, inferring _1 = (, _0, _1|), and contrasting _1(Y) against _0(Y) . But "a fixed set of actual causes" given and _0 remains ill-defined.
People's intuitions may not give consistent answers and even if they do, such answers provide less information about f than the input-output mappings of f itself.
This example generalizes all "difficult causal scenarios" with binary variables and following features:
* Causal: We can manipulate the switches to control the light.
* Deterministic: It has no probabilistic component.
* Fully-specified: Epistemological skepticism like "how do we know these laws-of-nature are true" doesn't apply.
* Clear input-output distinction: There's no ambiguity in the direction of causal arrows.
Therefore, proposing and solving a few cases wouldn't dissolve our objection.
Intuitions are often unreliable for modeling reality. Outside simple, everyday causal utterances, there's no real downside in abandoning actual causality. itself perfectly describes the causal system and answers all "what if" inference queries like (, _exo). Instead of listing "actual causes," a scientist should try modeling the functional determinations in a system.
Actual causality is almost only used in normative theories (e.g. responsibility, blame, proximate causes, ethics, law <cit.>), which handle disagreements when everyone agrees on (laws-of-nature) and _a (what actually happens). Working with full SFMs instead of actual causality allows us to consider strictly more normative theories.
§ PROBABILISTIC SFM
To incorporate probability theory, we don't need to modify the definition of SFM. We just extend domains [u] to include random variables and modify structural functions [u] accordingly. Probability isn't required for most thought experiments on causality, but we'll provide a rigorous mathematical foundation for probabilistic SFM. Notably, nodes and random variables are not the same. We avoid calling nodes "variables" precisely for this reason.
§.§ Probabilistic Extension
Think of a node u as a name or index. Its value (u) can be a random variable: (u) = X. A random variable X: Ω→ℝ maps an outcome ω (in sample space Ω) to a real number X(ω) ∈ℝ. "X=x" is a shorthand for event {ω∈Ω | X(ω) = x}, so we can compute its probability [X=x]. X=x isn't an actual equation because X is a function and x is a real number. Again, "node u has value X; X is a random variable" and "random variable X takes on value x; x is a real number" are different things.
Most basically, functions of random variables are actually function compositions <cit.>. Consider real-valued function f(x)=2x and random variable X: Ω→ℝ. We want a new random variable Y that always "takes twice the value" of X:
Y(ω) = 2X(ω)
= f(X(ω))
= (f ∘ X)(ω)
Y = f ∘ X
The expression "Y = f(X)" is wrong by a rigorous standard, because random variable X isn't in f's domain of real numbers.
Formally, the probabilistic extension of SFM _old = (, , _old, _old) returns a new SFM _new=(, , _new, _new):
* Specify the sample space Ω.
* Specify a set of nodes 𝒮⊆ for probabilistic extension. Other nodes ∖𝒮 still don't have random variables in their domains.
The set 𝒮 must be downward closed: if u ∈𝒮, then every descendant of u must also be in 𝒮, as if randomness is "contagious" and flows down the computational graph.
For random variables to be well-defined, we also require _old[u] (e.g. real numbers, vectors, graphs, functions) to be measurable for all u ∈𝒮.
* Let RV[u] denote the set of random variables Ω→_old[u]. If u ∈𝒮, _new[u] = _old[u] ∪ RV[u]; otherwise, _new[u] = _old[u].
* Recall that a random variable X has realization X(ω) given outcome ω∈Ω.
For nodes ⊆, we define the realization of assignment _| given outcome ω as:
(_|, ω) = {u : _|(u) if _|(u) ∈_old[u] else (_|(u))(ω)}_u ∈
By realizing every random variable with ω and keeping other values as is, (_|, ω) is an assignment of both _old and _new because ∀ u ∈: (_|, ω)(u) ∈_old[u].
* For _new and endo-node u ∈_endo:
* If ∀ p ∈(u): _|(u)(p) ∈_old[p] (no parent value is a random variable), then _new[u](_|(u)) = _old[u](_|(u)) ∈_old[u].
* Otherwise (at least one parent value is a random variable), _new[u](_|(u)) is a random variable Ω→_old[u] in RV[u]. For outcome ω∈Ω, we compute _new[u](_|(u))(ω) = _old[u]((_|(u), ω)).
Some corollaries about probabilistic extension:
* For every u ∈, _new[u] ⊇_old[u] because both the domain and the codomain are strictly extended, hence the name "probabilistic extension."
* If satisfies ℳ_old, then it also satisfies ℳ_new. For ⊆, if _| is permitted by ℳ_old, then it's also permitted by ℳ_new.
* If satisfies _new, then its realization (, ω) also satisfies _old for every ω∈Ω.
Intuitively, random variables express uncertainty about which realization is actual. Each realization is a possible world in _old. Probability merely adds "weights" to these possible worlds, so causal mechanisms are deterministic and true in every realization. This is unlike Bayesian networks, where the mechanisms are inherently random.
We can now formalize "correlation doesn't imply causation" using SFM: The same "observational distribution" _| (where some nodes have random variables as values; ⊆) might be permitted by different SFMs _1 _2 with R__1 R__2, which cannot be treated as equal.
§.§ Bayesian Networks
With probabilistic extension, SFM generalizes Bayesian networks, which also use directed acyclic graphs. In a Bayesian network <cit.>, each node corresponds to a random variable, each exo-node stores a marginal distribution, and each endo-node stores a conditional distribution given the node's parents.
Bayesian networks require the exogenous random variables to be probabilistically independent, while we don't enforce that requirement (you may enforce it explicitly).
It's difficult for Bayesian networks to represent SFM. To encode functional determination (right-uniqueness), the conditional distributions must be degenerate. When input distributions cannot be assumed (e.g. light switch doesn't affect TV) and we only have specific input values, the marginal distributions are degenerate too. This sacrifices nearly all expressiveness of a Bayesian network.
Any Bayesian network can be expressed by an SFM. We'll use probability integral transform (PIT) to represent conditional probability distributions with deterministic functions:
* For simplicity, consider real-valued (Ω→ℝ) random variables Y, X_1, X_2, …, X_n and conditional distribution [Y|X_1=x_1, X_2=x_2, …, X_n=x_n].
* Create continuous uniform random variable U ∼Unif(0, 1) in range [0, 1], independent from all X_i.
* Let F_Y|X_i=x_i(y): ℝ→ [0, 1] be the conditional CDF (cumulative distribution function) of Y, such that it has an inverse F_Y|X_i=x_i^-1: [0, 1] →ℝ.
* By PIT <cit.>, random variable F_Y|X_i=x_i^-1∘ U has exactly the same CDF as F_Y|X_i=x_i.
* We've created a deterministic function f(x_1, x_2, …, x_n, U) that returns a random variable F_Y|X_i=x_i^-1∘ U, given real-valued x_i and random variable U.
Essentially, we can enforce "all mechanisms are deterministic" without sacrificing expressiveness. The inherent randomness of a mechanism is "injected" by an unobservable "noise" parent whose value is a random variable. This practice is fairly common:
* To sample values from 𝒩(μ, σ^2), the reparameterization trick uses deterministic function f(μ, σ, ϵ)=μ + σ×ϵ, where ϵ is sampled from an auxiliary "noise" distribution 𝒩(0, 1) <cit.>.
* Additive noise model Y = f_Y(X) + N_Y has random variables X, Y, N_Y, deterministic function f_Y, and additive noise N_Y ⊥ X <cit.>.
* Randomness in computer programs often comes from built-in random number generators, while the main program is deterministic.
§ COMPARISON
§.§ Symmetric Laws and Causal Eliminativism
In a symmetric equation of n variables, the values of any n-1 variables functionally determine the value of the 1 remaining variable. Newton's second law of motion F=ma, the ideal gas law pV=nRT, and Ohm's law V=IR are symmetric laws. This differs from the non-injective asymmetry of functions.
We usually view symmetric equations as non-causal, because 1 equation is simpler than n functional determinations.
As a causal eliminativist, <cit.> argues that causality doesn't appear in physics and should be removed from philosophy altogether. However, we've shown that functions and SFM are useful.
We only consider Russell's attack on the functional theory of causation, since we don't agree with other definitions either.
* Plurality of causes: Multiple alternative causes like gunshot, arsenic, etc. can map to the same effect - the person's death. (Some functions are non-injective.)
* Plurality of effects: The effect can be defined as the whole state of the world, which contains many variables. (The "cause" node has multiple descendants.)
Russell incorrectly dismisses functional asymmetry (non-injectiveness) as "illusory," as if the plurality of effects makes both sides symmetric. But these "pluralities" aren't the same. Non-injectiveness cannot be eliminated without changing the function itself.
SFM also addresses other eliminativist challenges on causality <cit.>. SFM-causality isn't vague; actual causality, while not appearing in physics, is a slightly useful heuristic that can be abandoned when necessary; probabilistic extension handles inherently random mechanisms; functions are compatible with different theories of space (e.g. action at a distance) and time (Section <ref>).
§.§ Hume, Regularity, and Problem of Induction
<cit.> challenges causality as follows. We say "striking a match causes it to ignite." But empirically, we only observe constant conjunctions of events like "match struck" followed by "match igniting." We don't directly observe the link/connection between cause and effect. So any causal "law" is an inductive generalization from particular events, with no necessary guarantee to remain true in the future <cit.>.
Hume conflates 2 distinct problems:
* Conceptual: What's the definition of causality?
Claiming "causality is just a special kind of regularity" is true but non-reductive: What is that "special kind"? All inductive models (e.g. correlations, symmetric equations) model "regular connections," but only functional determination captures our causal intuition.
Besides relying on unspecified physical/metaphysical models (e.g. time, space, contiguity), regularity conditions like "all events of type X are followed by an event of type Y" <cit.> cannot produce causal utterances in background condition cases (Section <ref>), which are deterministic and fully-specified.
* Epistemological: How to ensure the correctness of a causal model?
Non-probabilistic SFM (due to right-uniqueness) and symmetric laws make exceptionless claims about reality, while correlation doesn't. Perhaps that's why Hume attacks causality first. However, all inductive generalizations from empirical data are equally susceptible to the Problem of Induction (PoI) <cit.>. Causality isn't somehow "more unreliable" than symmetric laws or correlations.
We formulate PoI as follows. Consider a normal world W_1(t) and a piecewise world W_2(t). W_2(t) is exactly the same as W_1(t) for all time t before t_0, but is drastically different after t_0. Given a world W and all its information before t_0, there's no way of distinguishing whether W is W_1 or W_2.
By enumerating different ways of W_2 being "drastically different", such as "the world exploding after t_0" or "the gravitational constant doubling after t_0", we can construct worlds where symmetric laws and correlations break down under PoI. Therefore, PoI isn't an attack against causality alone. Similarly, a conceptual definition of causation won't solve PoI.
§.§ Logic and Counterfactuals
In retrospect, "if-then" material conditionals cannot replace causality because it violates right-uniqueness: both {p:0, q:0} and {p:0, q:1} satisfy p ⇒ q. It allows vacuously true propositions like "if I don't eat anything today, then I am a billionaire," which feels wrong causally/counterfactually. By adding the laws-of-nature () to the antecedents, we can perform rigorous deduction =(, _exo) without sacrificing causal intuitions. The underlying causal formula is q=f(p, …) instead of p ⇒ q, though functions and background conditions are often omitted in causal utterances.
Like the but-for test, many counterfactual definitions of causation are variations of "if x, then y; if not-x, then not-y" <cit.>. They're usually imperfect because they don't have the full expressiveness of functions. For example, <cit.>'s INUS condition is equivalent to disjunctive normal form <cit.>, which any Boolean function can be converted to, so it's just a circuitous way of stating "causality is functions."
<cit.> defines counterfactual conditional "if not-x, then not-y" as "in the closest possible world with not-x, there's not-y." However, without defining a distance metric and an algorithm to find the closest possible world, this definition cannot even describe a deterministic, fully-specified causal system. Using actual-tweaked contrast, SFM unambiguously computes the contrastive world as _c=(, _a, _c|_exo).
§.§ Intervention and SCM
The definition of SCM <cit.> relies on intervention, a causal concept, so it's often criticized for being circular and non-reductive. We develop SFM as an equally-expressive reformulation of SCM that only relies on functions, thus eliminating circularity and providing a philosophical foundation for SCM. The generality of functions also avoids anthropocentric objections that manipulation requires human agency <cit.>.
Although SCM's surgical intervention do(Y=y) is generalized by sub-SFM, we can also define it as a parent of Y, making intervention just a type of functional determination.
Given (Y) = {X_1, X_2, …, X_n, DoY}, DoY is a surgical intervention on Y when:
* [DoY] = [Y] ∪{}. ∉[Y] means "no intervention," like in option types and nullable types.
* There exists an "ordinary mechanism" function g: ∏_i=1^n[X_i] →[Y], such that
[Y](_|(Y)) =
g(_|{X_1, X_2, …, X_n}) if _|(Y)(DoY) = ,
_|(Y)(DoY) otherwise
"Conditionally overriding an ordinary mechanism" is the key intuition behind interventions. For example, barometer reading is ordinarily determined by atmospheric pressure, but it can also be manipulated by human intervention. SFM can also express more complicated interventions, like when intervention has a failure probability or when only some intervention options are possible for humans.
§ PHILOSOPHICAL APPLICATIONS
Many philosophical discussions take "causation" as given without mathematically defining what it is, so our functional definition of causality may help clarify some downstream concepts.
§.§ Desires for SFM Learning
Several alleged "metaphysical doctrines" about causality can now be seen as epistemological desires for learning new SFMs:
* The Principle of Sufficient Reason (PSR): "Everything has a cause" or "anything is an effect caused by earlier events" <cit.>.
PSR desires to add parents to exo-nodes that "have no causes" in old models.
* The Eleatic Principle (EP): For something to "exist" in an ontology, it must be able to cause changes in other things <cit.>.
EP desires to add descendants to sink nodes that "affect nothing" in old models.
* Causal Nexus (CN): "Any causal relation requires a nexus, some interface by means of which cause and effect are connected" <cit.>.
CN desires to insert intermediate nodes between old parent-child edges.
Strictly speaking, these desires are not satisfiable if we only allow finite acyclic SFM (Appendix <ref>), but they do encourage us to learn bigger SFMs to model the world.
§.§ The Uncaused
Since exo-nodes can never appear in the "effect" part of causal utterances, we define node u is uncaused relative to iff u ∈_exo. Being uncaused/exogenous is not a metaphysical fact, but a modeling choice we make: We don't want to model u as being determined by a mechanism and other nodes in .
If _uni is the SFM of the full world, we often only use some sub-SFM _sub for specific tasks.
Because a node can be uncaused (exo-node) in one sub-SFM and caused (endo-node) in another, regarding "uncaused" as a node's metaphysical property without specifying _sub is ill-defined. This is the source of many confusions.
For something with no causal parent anywhere, we say u is strongly-uncaused iff u isn't an endo-node in any sub-SFM _sub (i.e. it's an exo-node in _uni).
§.§ Free Will
Free will loosely describes an agent's ability to "freely" choose between different possible actions <cit.>. We often face seemingly conflicting intuitions:
* People have free will.
* The world's past and laws-of-nature functionally determine the world's future, making people's decisions unfree.
If we accept that something is free if it's "uncaused or not deterministically caused" <cit.>, then SFM offers a mathematical definition of freedom that resolves this conflict:
* Node u is free relative to iff u is exogenous in .
* Node u is unfree relative to iff u is endogenous in .
* Node u is strongly-free iff u is exogenous in every of interest that contains u.
* Node u is strongly-unfree iff u is endogenous in every of interest that contains u.
Whether an action is free depends on the model of interest.
When actions have consequences, we want to model the utility function Q(s, a) for taking action a ∈ A at state s ∈ S. This makes action free relative to Q(s, a), so any action with consequences is not strongly-unfree. Meanwhile, the best action a^*=π(s, Q)=_a ∈ Amax Q(s, a) is determined/unfree relative to π(s, Q). But for discrete S, A without additional assumptions, finding the best action requires computing Q(s, a) for all a ∈ A, so modeling a as a "free" input is inevitable and useful: The agent evaluates the utility of each action before taking the best action.
Besides reinforcement learning <cit.>, this best-action-selection framework also applies to minimax search <cit.> and decision-making in general. Although we don't define causality using agency like <cit.>, we suggest that modeling "actions functionally determine consequences" could be an origin of human causal intuitions.
Generally, the freedom/arbitrariness/uncertainty of function inputs is closer to the universal quantifier "for all/any." It's not determined because we don't model it as another function's output; it's not random because we cannot reasonably specify its marginal distribution and even if we do, the distribution isn't helpful for the downstream task.
* The light switch is free to vary, while the light is determined.
* To maximize f(x), we freely vary x and record the maximum f(x).
* We freely change causes _1|_exo and infer effects _1|_endo⊆(, _0, _1|_exo).
* A sorting algorithm works for an arbitrary input list.
§.§ Causal Explanation
We use explanans to explain explanandum. A causal explanation uses causes (and underlying mechanisms/laws-of-nature) to explain effects. There are also non-causal explanations that appeal to symmetric equations, correlation, or backtracking (using effects to explain causes).
With SFM, causal explanations become a subset of Deductive-Nomological (DN) explanations, where (1) explanans contains general laws and particular conditions; (2) explanandum is entailed by explanans <cit.>.
Using _a|_exo to explain _a|_endo, we use general laws and particular conditions _c, _a|exo; entailment comes from _a|_endo⊆_a=(, _c, _a|_exo).
In practice, can be learned from empirical data (inductive); some nodes' values can be random variables (probabilistic).
SFM solves many alleged counterexamples where DN model appears insufficient for defining explanation:
* In the symmetric equation involving shadow length, the Sun's position, and flagpole height, why is shadow the explanandum? Because we prefer causal explanations over non-causal explanations and shadow should be modeled as a child node (Section <ref>).
* Why do people omit irrelevant background conditions in explanations? Because we use delta compression in causal utterances (Section <ref>, <ref>).
The asymmetry of causal explanation comes from the asymmetry of causality, which comes from functions being right-unique and often non-injective.
§.§ Disposition
Glass is fragile because it has a disposition to shatter. Dispositions like fragility resemble properties of objects, but they describe possible (not necessarily actual) behaviors under certain conditions: Glass may not actually shatter <cit.>. We analyze dispositions with functions.
For a deterministic and fully-specified example, minerals higher on Mohs hardness scale (e.g. diamond) will scratch softer minerals (e.g. talc). Let function f(m_1, m_2) take in 2 minerals and return the mineral that gets scratched, so = f(, ). Talc has the disposition to be scratched because ∀ m: = f(m, ); diamond has the "power" to scratch because ∀ m: m = f(m, ).
Therefore, dispositions are properties of a downstream function f, but people colloquially associate them with input nodes (scratch-hardness of minerals) or input values (scratch-hardness of diamond).
§.§ Supervenience
"Y supervenes on X" is equivalent to "Y functionally depends on X," because the formal definition of supervenience ("there cannot be an Y-difference without a X-difference" <cit.>) is the same as right-uniqueness: Consider team R,
¬([(x, y_1)∈ R] ∧ [(x, y_2) ∈ R] ∧ [y_1 ≠ y_2])
= ¬([(x, y_1)∈ R] ∧ [(x, y_2) ∈ R]) ∨ ¬[y_1 ≠ y_2]
= ¬([(x, y_1)∈ R] ∧ [(x, y_2) ∈ R]) ∨ [y_1 = y_2]
= ([(x, y_1)∈ R] ∧ [(x, y_2) ∈ R]) ⇒ [y_1 = y_2]
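As a small illustration (the example relations below are made up), right-uniqueness of a finite relation, and hence supervenience, can be checked directly:

# Y supervenes on X iff the relation R (a set of (x, y) pairs) is right-unique.
def right_unique(R):
    seen = {}
    for x, y in R:
        if x in seen and seen[x] != y:
            return False  # same x, different y: a Y-difference without an X-difference
        seen[x] = y
    return True

R1 = {(1, "a"), (2, "a"), (3, "b")}  # right-unique: Y supervenes on X
R2 = {(1, "a"), (1, "b")}            # not right-unique
print(right_unique(R1), right_unique(R2))  # True False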
§.§ Mental Causation
Can mental kinds (property, state, event) cause physical kinds? Mental causation faces 2 conflicting intuitions:
* It's common in everyday experiences: I want to raise my hand (mental state), so I raise my hand (body state).
* The Exclusion Problem: Physical effects like body movements are already determined by physical causes like brain activities, so there's no room for a mental cause, which is also sufficient for the physical effect <cit.>.
The Exclusion Problem arises whenever a functional determination entailed by the fully-specified SFM cannot be deduced from the graph structure (and Armstrong's Axioms) alone. People feel uneasy because they cannot find such a dependency as a path in the graph.
* (Assumption) On the lowest physical level, brain state functionally determines body state: Body = f_1(Brain).
* (Assumption) Mental state supervenes on brain state: Mental = f_2(Brain). Mental state is an abstract/aggregate description of physical brain state.
Multiple realizability (a single mental kind can be realized by many distinct physical kinds) <cit.> is true when f_2 is non-injective.
It's a coincidence that functionalism (name unrelated to mathematical functions) uses causality to define mental states <cit.> and we reduce causality to functions.
* So we have an SFM with graph Body ← Brain → Mental and functions {Body: f_1, Mental: f_2}.
* (Fact) There exists a function f_3 such that Body = f_3(Mental) is true in every value assignment satisfying the SFM. Mental state does functionally determine body state.
Body = f_3(Mental) cannot be deduced from the graph alone. It's entailed by the specific functional mappings f_1, f_2 (together with the nodes and their domains). Although it cannot appear as a path in the graph, we see no reason to dismiss it as "excluded." We may use SFM-intersection to explicitly/graphically encode this functional dependency, although the dependency is entailed by a single SFM (SFM-intersection-proper isn't required).
The Exclusion Problem appears in any system with hierarchical levels of abstraction, since supervenience is just functional dependency. In fully-specified and deterministic computers, what causes a video to play on screen, the low-level chip activities or the high-level video-player program? The same reasoning applies. The only empirical question is whether higher-level functional determinations like f_3 are true. If not, we simply say the abstraction is broken.
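A minimal sketch of this empirical question (the brain/body/mental state labels and the mappings f_1, f_2 below are hypothetical): enumerate the realized (mental, body) pairs and test whether the higher-level determination f_3 exists.

# f_1: Brain -> Body and f_2: Brain -> Mental are assumed lower-level mappings;
# f_3 exists iff the induced Mental -> Body relation is right-unique.
BRAIN_STATES = ["b1", "b2", "b3", "b4"]
f1 = {"b1": "raise", "b2": "raise", "b3": "rest", "b4": "rest"}
f2 = {"b1": "want", "b2": "want", "b3": "idle", "b4": "idle"}  # non-injective

pairs = {(f2[b], f1[b]) for b in BRAIN_STATES}
f3, exists = {}, True
for mental, body in pairs:
    if mental in f3 and f3[mental] != body:
        exists = False  # abstraction broken: no higher-level determination
        break
    f3[mental] = body
print(exists, f3)  # True, with f3 mapping 'want'->'raise' and 'idle'->'rest'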
§.§ Time
SFM doesn't endorse any particular theory of time, but we can define T: V → ℝ that maps each node u to a real-valued timestamp T(u). If ∀ (u, v) ∈ E: T(u) ≤ T(v), then causes always temporally precede their effects. But without additional assumptions, T might as well violate this condition.
Backward causation occurs when an effect temporally precedes its cause <cit.>. If most SFM edges (u, v) ∈ E still point from past to future (T(u) ≤ T(v)), a backward edge can create cycles, resulting in PULO or actually unsatisfiable laws. That's why people intuitively dislike backward causation. But if the specific SFM is satisfiable or satisfied by empirical data, we cannot dismiss it a priori.
Are there fundamental properties of our physical world that make causal and temporal orders agree? Could it be the asymmetry of thermodynamics, radiation <cit.>, or our mental habit of "actions determining consequences" (Section <ref>)? Further research is required.
In a causal feedback loop A → A, node A influences its own next state. With discrete time, we can unroll it to an acyclic time-indexed causal chain A(0) → A(1) → A(2) → …, which may be countably infinite. When every update function along the chain is invertible, the system has time-symmetry and another equivalent SFM A(0) ← A(1) ← A(2) ← ….
Decreasing the interval between 2 consecutive timestamps towards the infinitesimal, we eventually get an uncountable number of nodes and cannot properly define an edge, because there are no 2 "consecutive" real numbers. In this case, it would make more sense for A(t) to determine its instantaneous rate of change d/dt A(t), like the exponential/logistic growth rate of a bacteria population and the predator-prey dynamics in the Lotka-Volterra equations. SFM-intersection-proper can represent autonomous differential equations, which include causal loop diagrams <cit.>. However, differential equations in general are better tools for modeling continuous-time causality.
§ CONCLUSION
After our conceptual analysis that reduces causality to functions, there should be nothing mysterious about the definition of causality.
Using forward inference, contrast, and delta compression, Structural Functional Model (SFM) correctly produces intuitive causal utterances.
We've also supported intuitive practices from an algorithmic perspective: contrast saves space and time; finite acyclic SFM is required for guaranteed satisfiability (at the cost of expressiveness).
Distinct from but compatible with probability theory, "causality as functions" allows for interesting downstream applications.
§ MATHEMATICS REVIEW
Under ZFC set theory, a set is roughly an unordered collection of distinct elements. The binary Cartesian product between two sets X and Y is X × Y = {(x, y)|x ∈ X y ∈ Y}.
A binary relation R over X and Y is R ⊆ X × Y.
A relation R may have properties:
* Left-total: ∀ x ∈ X ∃ y ∈ Y: (x,y) ∈ R
* Right-total: ∀ y ∈ Y ∃ x ∈ X: (x,y) ∈ R
* Left-unique: ∀ x_1 ∈ X, x_2 ∈ X, y ∈ Y: ((x_1,y)∈ R) ((x_2,y) ∈ R) ⇒ x_1=x_2
* Right-unique: ∀ x ∈ X, y_1 ∈ Y, y_2 ∈ Y: ((x,y_1)∈ R) ((x,y_2)∈ R) ⇒ y_1=y_2
* Function (total function): left-total and right-unique.
* Partial function: right-unique.
* Injective function: left-unique function.
* Surjective function: right-total function.
* Bijective function: injective and surjective function.
Because of right-uniqueness, a function can be written as f: X → Y such that f(x) ∈ Y is unique for every x ∈ X. For functions f: X → Y and g: S → Y satisfying S ⊆ X, if ∀ x ∈ S: f(x)=g(x), we say g is a restriction of f and f is an extension of g (or f extends g). Because functions are relations, we write g ⊆ f or g = f_|S.
An indexed collection of sets is a 3-tuple (I, 𝒜, A) written as {A_i}_i∈ I, where I is the index set, 𝒜 is a collection of sets, and A is a function A: I →𝒜. Every A_i = A(i) ∈𝒜 is a set. Now we can define Cartesian product over any (possibly infinite-sized) indexed collection of sets: ∏_i∈ IA_i is the set of all functions f: I →⋃_i∈ IA_i such that ∀ i∈ I: f(i) ∈ A_i.
Similarly, a relation over an indexed collection of sets is a subset of its Cartesian product.
A directed graph is an ordered pair G = (V, E), where V is a set of nodes and E ⊆ V × V is a set of directed edges. A directed edge is an ordered pair (u, v) such that u ∈ V and v ∈ V.
* If (u, v) ∈, u is a parent of v and v is a child of u.
* Pa(u) denotes the set of parents of u; Ch(u) denotes the set of children of u.
* The indegree of u is the number of its parents (deg^-(u) = |Pa(u)|); the outdegree of u is the number of its children (deg^+(u) = |Ch(u)|); the degree of u is the sum of its indegree and outdegree (deg(u)=deg^-(u)+deg^+(u)).
* A root node u has indegree deg^-(u) = 0. A sink node u has outdegree deg^+(u) = 0.
* A path is a sequence of nodes v_1, v_2, …, v_n such that (v_i, v_i+1) ∈ for any i ∈{1, 2, …, n-1}; a path is a cycle if v_1 = v_n.
* Node u is an ancestor of node v (u ∈An(v)) if there's a path from u to v. Otherwise, u is a non-ancestor of v.
* Node u is a descendant of node v (u ∈De(v)) if there's a path from v to u. Otherwise, u is a non-descendant of v.
* Under our convention, a node is the ancestor/descendant of itself.
§ GENERALIZED MÜNCHHAUSEN TRILEMMA
We formalize a theorem that generalizes the Münchhausen Trilemma <cit.> in epistemology:
Any directed graph G = (V, E) contains at least one of the following:
* A root.
* A cycle.
* An infinite regress: An infinite path of distinct nodes (…, u_2, u_1, u_0) ending at u_0, such that for any integer i ≥ 1, there exists u_i ∈ V satisfying (u_i, u_i-1) ∈ E and u_i ∉{u_j}_j=0^i-1
Proving by contradiction, suppose instead that a graph has no root, no cycle, and no infinite regress. Since there's no infinite regress, there exists a nonnegative integer n that is the maximum length of a path of distinct nodes ending at some u ∈ V. Let (u_n, u_n-1, …, u_1, u_0) be that maximum-length path.
Because G doesn't have a root, deg^-(u) ≥ 1 for all u ∈ V and thus deg^-(u_n) ≥ 1, so there exists a node v ∈ V such that (v, u_n) ∈ E is an edge.
If v ∈{u_i}_i=0^n, then v = u_i for some integer 0 ≤ i ≤ n. We can construct a new path (u_n, u_n-1, …, u_i, u_n). It's a path because (u_i, u_n) = (v, u_n) ∈ E and (u_i, u_i-1) ∈ E for all 1 ≤ i ≤ n; it's a cycle because it starts and ends at u_n. This violates the acyclic assumption, so v ∉{u_i}_i=0^n is distinct from all nodes in the path.
We can thus construct a new path (v, u_n, u_n-1, …, u_1, u_0) of length n+1, where all nodes have been shown to be distinct. However, this contradicts the condition that the maximum length of distinct-node paths is n. Therefore, it's impossible for a directed graph to have no root, no cycle, and no infinite regress at the same time.
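For the special case of a finite directed graph, an infinite regress is impossible, so the theorem guarantees a root or a cycle. A minimal sketch (illustrative, not part of the formal proof) that locates one of the two:

# In a finite directed graph with no root, walking backwards along parents
# must eventually revisit a node, which closes a cycle.
def root_or_cycle(nodes, edges):
    parents = {v: [] for v in nodes}
    for u, v in edges:
        parents[v].append(u)
    roots = [v for v in nodes if not parents[v]]
    if roots:
        return ("root", roots[0])
    path, seen = [next(iter(nodes))], set()
    while path[-1] not in seen:
        seen.add(path[-1])
        path.append(parents[path[-1]][0])
    return ("cycle", path[path.index(path[-1]):])

print(root_or_cycle({1, 2, 3}, {(1, 2), (2, 3)}))          # a root exists
print(root_or_cycle({1, 2, 3}, {(1, 2), (2, 3), (3, 1)}))  # a cycle is found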
GMT is a general theorem about directed graphs, proven mathematically. It applies to all problems characterized by objects and directed binary relations between them (i.e. describable by a directed graph), such as "X causes Y" and "X justifies Y."
If we define a directed graph where nodes are propositions and edge (u, v) means "u justifies v" or "u is a part of the justification for v", then GMT entails that we either settle with foundationalism (root that isn't justified), coherentism (cycle that justifies itself), or infinitism (infinite regress of justification chain) - we cannot simultaneously eliminate all 3 of them. Notice how we never used the meaning of "justification" in our proof, only the directed binary form of "A justifies B."
|
http://arxiv.org/abs/2307.05224v1 | 20230711124639 | Reliable Packet Detection for Random Access Networks: Analysis, Benchmark, and Optimization | [
"Yuyang Du",
"Soung Chang Liew"
] | cs.NI | [
"cs.NI"
] |
Reliable Packet Detection for Random Access Networks: Analysis, Benchmark, and Optimization
Yuyang Du, Student Member, IEEE,
Soung Chang Liew, Fellow, IEEE,
Y. Du and S. C. Liew are with the Department of Information Engineering, The Chinese University of Hong Kong, New Territories, Hong Kong SAR, China (e-mail: {dy020, soung}@ie.cuhk.edu.hk). S. C. Liew is the corresponding author.
August 12, 2023
===========================================================================================================================================================================================================================================================================================================
Many advanced industrial systems utilize random access in wireless networks to facilitate massive machine communications with burst transmissions. The stringent requirement for ultra-reliability in industrial communication poses a severe challenge for random access: a receiver should neither miss an incoming packet nor get falsely alarmed by noise or interference. Currently, many academic investigations and industry applications rely on the conventional Schmidl-and-Cox (S&C) algorithm and its variants for packet detection. However, S&C was originally developed for single-antenna receivers and lacks a rigorous analytical framework for the extension to multi-antenna receiver settings. This paper is a revisit and enhancement of S&C to fill this gap. First, we put forth a packet-detection metric called “compensated autocorrelation", which yields equivalent performance to the S&C metric but is more analytically tractable. With the new metric, we obtain accurate closed-form expressions for false-alarm and missed-detection probabilities. Second, we introduce the principle of Pareto comparison for packet-detection benchmarking, enabling simultaneous consideration of false alarms and missed detections for a fair comparison between different packet-detection schemes. Third, we experimentally validate that taking the real part of the autocorrelation enhances the performance of S&C through a new scheme called real-part S&C (RP-S&C). Fourth, and perhaps most importantly, the adoption of the new metric, compensated autocorrelation, allows us to extend the single-antenna algorithm to the multi-antenna scenario in a rigorous and analytical manner through a weighted-sum compensated autocorrelation. We formulate two optimization problems, aiming to minimize false-alarm probability and missed-detection probability, respectively. We provide our solutions to these problems along with proofs. We demonstrate through extensive experiments that the optimal weights for false alarms (WFA) is a more desirable scheme than the optimal weights for missed detections (WMD) due to its simplicity, reliability, and superior performance. Our results have significant implications for the design and implementation of packet-detection schemes in random-access networks.
Random access, packet detection, false alarm, missed detection, optimization problem
§ INTRODUCTION
Random access in wireless networks offers significant benefits for industrial applications such as Industrial Internet-of-Things (IIoT) and sensor networks that thrive on tetherless communication. In contrast to centralized access control, random access enables massive machine communication without a predetermined transmission schedule. For instance, in monitoring applications, a sensor may generate a new packet only upon detecting an anomaly and then transmit this information to a central monitoring station via a wireless channel. As sensor traffic is sporadic, employing random access is more efficient than pre-allocating dedicated wireless resources (e.g., time slots or subcarriers) to each sensor. Moreover, using centralized access control to schedule every sensor becomes impractical when the number of connections surpasses the available wireless resources.
In random access, a receiver does not know when a wireless device will transmit a packet to it. To save power and avoid being wrongly occupied, the packet decoding circuitry of a receiver should not get activated unless a packet is being transmitted, i.e., the receiver needs to detect the incoming packet before decoding it. Therefore, there are two possible causes for reception failures in random access: (i) the packet is not detected; (ii) the packet is detected, but its data cannot be decoded.
Considerable research efforts have been devoted to enhancing the reliability of random access for mission-critical industrial communications <cit.>. However, the majority of these studies have primarily focused on packet decoding, while packet detection has received limited attention. Previous packet-detection schemes <cit.> used the conventional Schmidl-and-Cox (S&C) algorithm <cit.> as the underlying packet-detection scheme. However, a rigorous framework for analyzing the packet detection process is lacking, and the closed-form expressions of missed-detection and false-alarm probabilities in random access are still absent. Further, the existing benchmarking method for packet detection algorithms is defective, as it overlooked the tradeoffs between missed-detection and false-alarm probabilities and focused on minimizing the missed-detection probability as the sole criterion.[Avoiding getting falsely alarmed is as important as preventing missed detections for three reasons. First, when a false alarm occurs, signal processing circuits are erroneously activated, leading to a decrease in power efficiency. Second, to avoid packet collisions, a random-access device may hold back and refrain from transmitting a packet itself upon encountering a false alarm, resulting in reduced spectrum efficiency. Third, during the false alarm period, as the receiver is occupied decoding the "fake packet", all true incoming packets will not get processed until the receiver realizes the situation and resets its state machine.]
Another seldom-addressed challenge in previous works pertains to the optimization of a packet detection algorithm in the multi-antenna scenario. Conventional S&C algorithm was originally proposed for single-antenna receivers three decades ago. However, in modern communication systems, receivers are typically equipped with multiple antennas, enabling the possibility of enhancing system reliability via rich spatial diversity (also known as antenna diversity). While previous research efforts have delivered higher decoding reliability by leveraging the spatial diversity <cit.>, multi-antenna packet detection has received less attention. Existing studies <cit.> have extended the S&C algorithm to the multi-antenna scenario in an ad hoc manner due to the lack of a rigorous analytical framework. To the best of our knowledge, no prior research has rigorously analyzed the performance of packet detection in multi-antenna systems, nor has it addressed the optimization challenges associated with such scenarios.
This paper is an attempt to bridge these gaps. We first provide a comprehensive study for the analysis and benchmarking of packet detections in single-antenna random-access systems. After that, we extend our analytical framework to advanced systems with multiple antennas in a rigorous manner and address the optimization problem for multi-antenna packet detection. Our contributions are summarized as follows:
Our first contribution is the proposal of a new metric for packet detection called “compensated autocorrelation" for single-antenna packet detection, which makes possible a rigorous analytical framework. Previous research mostly used the ratio of autocorrelation and signal power as the packet-detection metric. Rigorous analysis is difficult because the autocorrelation and the signal-power terms contain correlated noises, and their ratio is a complicated function of these correlated noises. The new compensated autocorrelation metric is equivalent to the ratio metric as far as the packet detection performance is concerned. However, the noise characteristic of the compensated autocorrelation is analytically tractable, because the metric contains only a simple summation of correlated noises and can be approximated as a Gaussian random variable. We demonstrate through experiments that our derivations and approximations are precise and reliable. The use of compensated autocorrelation also paves the way for the treatment of packet detection in the multi-antenna scenario (see our fourth contribution below).
Our second contribution is a new benchmarking method. A packet detection algorithm inherently trades off between false alarms and missed detections. Concluding that an algorithm is good simply because of its low missed-detection probability, as is done in many existing papers (e.g., <cit.>), is unreasonable, as that may come at the expense of extremely high false-alarm probability. Our method addresses this problem by introducing Pareto comparison so that we can consider false alarms and missed detections simultaneously.
Our third contribution is the enhancement of the conventional S&C algorithm. We replace the autocorrelation with its real part and find that our revised scheme, referred to as the real-part S&C (RP-S&C), contains less noise than the conventional scheme. We demonstrate the superiority of RP-S&C over the conventional S&C.
Our fourth contribution is packet detection in multi-antenna systems building upon the compensated autocorrelation framework. The weighted sum of the individual compensated autocorrelations obtained at different antennas still only contains a sum of noises and is therefore analytically tractable. Using the weighted sum as the metric in the multi-antenna scenario is a natural extension of the single-antenna treatment, and optimality under different criteria can be established rigorously. We consider two specific criteria: (i) minimizing false-alarm probability and (ii) minimizing missed-detection probability. We then give our solutions, the weight assignment for false alarms (WFA) and the weight assignment for missed detections (WMD), to the two optimization problems with rigorous proofs. Last but not least, we discuss implementation details of WFA and WMD and benchmark them under a practical random-access setting with distributed antennas. Based on concrete analyses under practical settings and extensive emulation experiments, we find that WFA is the recommended choice for practical random access due to its simplicity and superior packet-detection performance.
§ SINGLE-ANTENNA PACKET DETECTION: ANALYSIS, SIMULATION, AND DISCUSSION
§.§ Conventional S&C Algorithm and Our Improvement
A random-access system employs repeating sequences to detect packets. Fig. <ref> shows a general packet format for random access. The repeating sequences at the beginning of a packet are referred to as short training sequences (STSs), and a collection of multiple STSs forms the preamble sequence. Let us denote the number of STSs by m and the length of each STS by η. In this paper, for simplicity, we assume that the preamble sequence contains two STSs, i.e., m=2. There are several ways to extend the basic treatment here to more general preamble sequences with more than two STSs. That extension will be addressed in a separate paper.
Let the transmitted preamble sequence be √(P) s[n], where P is the signal power and s[n] is the normalized preamble sequence with index n. We have s[n]=s[n+η] in the preamble. We can write the average preamble power as
1/η∑_n = 0^η - 1|s[n]|^2 = 1
At the receiver end, the received preamble sequence is
y[n] = √(P) s[n] + w[n]
where w[n] ∼ N(0,σ ^2) is the receiver noise. The autocorrelation and average power over the two STSs are
a[n] = 1/η∑_k = 0^η - 1y[n + k] ·y^*[n + η + k]
b[n] = 1/2η∑_k = 0^2η - 1y[n + k] ·y^*[n + k]
The conventional S&C algorithm used packet-detection metric l[n] written as
l[n] = | a[n]|/b[n]
Without noise, l[n] reaches its peak value (i.e., l[n] = 1) at a particular index n corresponding to the beginning of the first preamble sample. With noise, on the other hand, l[n] is in general smaller than one. S&C compares l[n] with a pre-defined threshold ρ. Fig. <ref> and Fig. <ref> illustrate the packet-detection process.
In Fig. <ref>, we assume that there are three packets. For an incoming packet, if the peak of its l[n] is larger than threshold ρ (e.g., the first and the last l[n] peak in Fig. <ref>), then the receiver declares a packet is detected and this triggers the subsequent signal processing to decode the packet. Otherwise, if the peak value is smaller than ρ (e.g., the second l[n] peak in Fig. <ref>), the receiver performs no action, and an event of missed packet detection occurs. Packet missed detections may occur often when the antenna signal-to-noise ratio (SNR) is too low or the threshold ρ is set to too high.
In Fig. <ref>, we assume there is no packet (i.e., pure noise input). We see that l[n] is very close to zero because the input noise is random. Nevertheless, we may still observe a l[n] larger than ρ, and when that occurs, a false alarm event occurs. In general, false alarms are more likely to occur the lower the threshold ρ. Hence, adjusting the value of ρ amounts to trading off missed-detection performance and false-alarm performance.
In (<ref>), the absolute value of a[n] is taken because a large carrier frequency offset (CFO) can disperse a relatively huge amount of signal power into the imaginary part of a[n]. In the absence of noise and CFO, on the other hand, a[n] is real. Thanks to advancements in hardware and semiconductor technology in the past 30 years, modern communication systems have a much lower CFO than old systems developed at the time when S&C algorithm was proposed <cit.>. With negligible CFO, in the presence of noise, the signal is entirely contained in the real part of a[n], and the imaginary part of a[n] consists of noise only. By taking the absolute value of a[n], S&C algorithm inadvertently includes much noise in l[n]. As we will justify in Section <ref>, under the weak CFO condition of a modern communication system, taking the real part of a[n] can enhance the packet-detection performance. Unless stated otherwise, the rest of this paper uses an alternative metric l_R[n] written as
l_R[n] = a_R[n]/b[n]
where the subscript R represents the real part of a variable. We refer to the modification we made over the conventional S&C algorithm as the real-part S&C (RP-S&C) algorithm.
For packet detection, we are interested in whether l_R[n]>ρ.
Yet, analyzing l_R[n] (also, the original l[n]) is challenging as it is the ratio of two non-independent random variables. We note that saying l_R[n]>ρ is equivalent to saying
r[n] ≜ a_R[n] - ρ b[n] > 0
We refer to r[n] as the “compensated autocorrelation". That is, a_R[n] is the real part of the autocorrelation and we compensate it by subtracting ρ b[n] from it and checking whether the resulting value is larger than 0. Having a large a_R[n] does not necessarily mean that there is a packet because it could be due to a large exogenous interference (e.g., Bluetooth interference on WiFi). However, large exogenous interference also has relatively larger b[n] compared with a_R[n], and hence r[n] is likely to be small in that case.
There are two key advantages of focusing on r[n] rather than l_R[n]. First, compared with l_R[n], r[n] is much easier to analyze. We know that a_R[n] and b[n] contain correlated noises (see (<ref>) and (<ref>) below), and the noise in l_R[n] is a complicated function of these correlated noises. The noise in r[n], on the other hand, consists of the simple summation of these correlated noises because it is a simple linear combination of a_R[n] and b[n]. For a practical preamble, we can approximate r[n] as a Gaussian random variable in our analysis (we will elaborate later). We show in Subsection B that it is easy to compute the mean and variance of r[n] and approximate r[n] as a Gaussian random variable with that mean and variance.
Second, for a multi-antenna system, we could add the weighted r[n] of different antennas to form a weighted-combined r[n] and compare that with a threshold for packet detection purposes. Again, the weighted-combined r[n] is amenable to analysis since the weighted combination can also be approximated as a Gaussian random variable. This allows us to investigate the optimality of different weight combinations on a rigorous basis.
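As a minimal numerical sketch of the metric computation (the synthetic preamble, SNR, and threshold below are illustrative and not taken from the paper's experiments):

import numpy as np

def detection_stats(y, n, eta, rho):
    """Return (a_R, b, r) at candidate start index n for STS length eta."""
    first, second = y[n:n + eta], y[n + eta:n + 2 * eta]
    a = np.mean(first * np.conj(second))          # autocorrelation a[n]
    b = np.mean(np.abs(y[n:n + 2 * eta]) ** 2)    # average power b[n]
    return a.real, b, a.real - rho * b            # r[n] = a_R[n] - rho*b[n]

rng = np.random.default_rng(0)
eta, rho, snr_db = 80, 0.5, 5.0
sts = np.exp(2j * np.pi * rng.random(eta))        # unit-power STS
x = np.concatenate([sts, sts])                    # two repeated STSs
w = (rng.standard_normal(2 * eta) + 1j * rng.standard_normal(2 * eta)) / np.sqrt(2)
y = np.sqrt(10 ** (snr_db / 10)) * x + w          # unit noise power
a_R, b, r = detection_stats(y, 0, eta, rho)
print(f"a_R={a_R:.2f}, b={b:.2f}, r={r:.2f}, detected={r > 0}")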
§.§ Analysis of RP-S&C with Gaussian Approximation
In this subsection, we first analyze b[n], a_R[n], and their cross term a_R[n] b[n]. Based on these analyses, we obtain the mean and variance of r[n]. We then approximate r[n] as a Gaussian random variable with the computed mean and variance. Finally, we utilize the distribution of r[n] to determine the false-alarm probability and the missed-detection probability.
Let us start with b[n]. We have that
E( b[n]) = P + σ ^2
For the analysis of Var( b[n]), we write b[n] as
b[n]
= 1/2η∑_k = 0^2η - 1y[n + k] ·y^*[n + k]
= 1/2η∑_k = 0^2η - 1( P|s[n + k]|^2 + √(P) s[n + k]w^*[n + k] + √(P)s^*[n + k]w[n + k] + w[n + k]w^*[n + k])
= P + √(P)/η∑_k = 0^2η - 1( s_R[n + k]w_R[n + k] + s_I[n + k]w_I[n + k]) + 1/2η∑_k = 0^2η - 1( w_R^2[n + k] + w_I^2[n + k])
where subscript R and subscript I represent the real and imaginary parts of a term, respectively.
Now, we have (note that E( w_R^3[n]) = E( w_I^3[n]) = 0 in the following derivation)
E( b^2[n])
= E{( P + √(P)/η∑_k = 0^2η - 1( s_R[n + k]w_R[n + k] + s_I[n + k]w_I[n + k]) + 1/2η∑_k = 0^2η - 1( w_R^2[n + k] + w_I^2[n + k]))^2}
= ( P^2 + 2 P σ ^2 + P/ησ ^2) + E( w_R^4[n]) + E( w_I^4[n]) + 2E( w_R^2[n]) E( w_I^2[n])/2η + 4(2η - 1) E( w_R^2[n]) · E( w_I^2[n])/2η
= ( P^2 + 2η + 1/ηPσ ^2) + E( w_R^4[n])/η + E^2( w_R^2[n])/η + 2(2η - 1)E^2( w_R^2[n])/η
= ( P^2 + 2η + 1/ηPσ ^2) + E( w_R^4[n])/η + σ ^4/4η + 2(2η - 1)σ ^4/4η
The random variable √(2/σ ^2)·w_R[n] follows the standard normal distribution. Thus, (2/σ ^2)w_R^2[n] follows a chi-square distribution with one degree of freedom, and we have
E[ ( 2/σ ^2·w_R^2[n])^2] = Var( 2/σ ^2·w_R^2[n]) + E^2( 2/σ ^2·w_R^2[n]) = 3
⇒ E( w_R^4[n]) = 3σ ^4/4
Substituting (<ref>) into (<ref>), we have
E( b^2[n])=( P^2 + 2Pσ ^2 + σ ^4) + 2Pσ ^2 + σ ^4/2η
With (<ref>), we have
Var( b[n]) = E( b^2[n]) - E^2( b[n]) = 2Pσ ^2 + σ ^4/2η
We next look at a_R[n]. We can write a[n] as
a[n]
= 1/η∑_k = 0^η - 1y[n + k] ·y^*[n + η + k]
= 1/η∑_k = 0^η - 1( √(P) s[n + k] + w[n + k]) ·( √(P)s^*[n + k] + w^*[n + η + k])
= P + 1/η∑_k = 0^η - 1{√(P)( s[n + k]w^*[n + η + k] + s^*[n + k]w[n + k]) + w[n + k]w^*[n + η + k]}
We extract the real part of (<ref>) and write a_R[n] as
a_R[n] = P + 1/η∑_k = 0^η - 1{ √(P)[ ( s_R[n + k]w_R[n + η + k] + s_I[n + k]w_I[n + η + k]) + ( s_R[n + k]w_R[n + k] + s_I[n + k]w_I[n + k]) ] + w_R[n + k]w_R[n + η + k] + w_I[n + k]w_I[n + η + k] }
From (<ref>), by exploiting the fact that the zero-mean Gaussian noise terms at different time indexes are independent, we have that
E( a_R[n]) = P
Similarly, we have
E( a_R^2[n])
= P^2 + 1/η ^2∑_k = 0^η - 1{ P·E( ( s_R^2[n + k]w_R^2[n + η + k] + s_I^2[n + k]w_I^2[n + η + k]) + ( s_R^2[n + k]w_R^2[n + k] + s_I^2[n + k]w_I^2[n + k])) + E( w_R^2[n + k]w_R^2[n + η + k] + w_I^2[n + k]w_I^2[n + η + k]) }
= P^2 + Pσ ^2/η + σ ^4/2η
From (<ref>) and (<ref>), we get
Var(a_R[n]) = E(a_R^2[n]) - E^2(a_R[n]) = 2Pσ ^2 + σ ^4/2η
We next look at a_R[n] b[n]. With (<ref>) and (<ref>), we have
E( a_R[n] b[n])
= E{ ( P + √(P)/η∑_k = 0^η - 1[ ( s_R[n + k]w_R[n + η + k] + s_I[n + k]w_I[n + η + k]) + ( s_R[n + k]w_R[n + k] + s_I[n + k]w_I[n + k]) ] + 1/η∑_k = 0^η - 1[ w_R[n + k]w_R[n + η + k] + w_I[n + k]w_I[n + η + k] ])
×( P + √(P)/η∑_k = 0^2η - 1( s_R[n + k]w_R[n + k] + s_I[n + k]w_I[n + k]) + 1/2η∑_k = 0^2η - 1( w_R^2[n + k] + w_I^2[n + k])) }
= P^2 + P/η ^2∑_k = 0^η - 1[ s_R^2[n + k]E(w_R^2[n + k]) + s_I^2[n + k]E(w_I^2[n + k])] + P/η ^2∑_k = 0^η - 1[ s_R^2[n + k]E(w_R^2[n + η + k]) + s_I^2[n + k]E(w_I^2[n + η + k])] + P/2η∑_k = 0^2η - 1[ E(w_R^2[n + k]) + E(w_I^2[n + k])]
= P^2 + Pσ ^2/η + Pσ ^2
With the above analyses of b[n] and a_R[n], we calculate E( r[n]) as
E( r[n]) = E( a_R[n] 1pt) - ρ E( b[n] 1pt) = (1 - ρ )P - ρσ ^2
Further, with (<ref>), (<ref>), and (<ref>), we have
E( r^2[n])
= E( a_R^2[n]) + ρ ^2E( b^2[n]) - 2ρ E( a_R[n]b[n] 1pt)
= ( 1 - ρ)^2P^2 + [( 1 - ρ)^2 + 2ηρ( ρ - 1)]/η·Pσ ^2 + [1 + ρ ^2( 2η + 1)]/2η·σ ^4
With (<ref>), and (<ref>), we have
Var(r[n]) = E( r^2[n]) - E^2( r[n]) = (1 - ρ )^2/ηPσ ^2 + 1 + ρ ^2/2ησ ^4
We now explain why r[n] can be approximated as a Gaussian random variable. From (<ref>), we see that b[n] is an average of multiple terms. In a practical random-access system, the number of terms in b[n] can be quite large (see Subsection C for justifications). Hence, we can apply the Central Limit Theorem to approximate b[n] as a Gaussian variable <cit.>. Similarly, with the expression of a_R[n] in ((<ref>), we can approximate a_R[n] as Gaussian by the same reasoning. Thus, overall, r[n] can be Gaussian approximated, since it is a linear combination of a_R[n] and b[n]. With (<ref>), (<ref>), and the Gaussian approximations, we can write the distribution of r[n] as
r[n] ∼ N( (1 - ρ )P - ρσ ^2, (1 - ρ )^2/η·Pσ ^2 + (1 + ρ ^2)/2η·σ ^4)
We can define things in terms of SNR by transforming r[n] to r[n]/ . -σ ^2. After the transformation, we have
r[n] ∼ N( (1 - ρ )γ - ρ, (1 - ρ )^2/η·γ + (1 + ρ ^2)/2η)
where γ = P / . -σ ^2 is the SNR of the studied antenna. In the rest of this paper, unless stated otherwise, we mean the post-transformation r[n] written as the function of γ when we mention r[n].
Note that r[n] in (<ref>) represents the general setting. To analyze the missed-detection probability, we need to assume that there is a packet over the air (i.e., γ ≠ 0). To analyze the false-alarm probability, on the other hand, we need to assume that there is no packet and there is only noise (i.e., γ = 0). For clear descriptions, let us distinguish r[n] with "packet and noise input" and "noise input only" with subscript P and subscript N (i.e., r_P[n] and r_N[n]), respectively. Furthermore, we use r[n] to represent general cases regardless of the input type.
Now, we have
r_P[n] ∼ N( (1 - ρ )γ - ρ, (1 - ρ )^2/η·γ + (1 + ρ ^2)/2η), γ ≠ 0
Substituting γ=0 into (<ref>), we have
r_N[n] ∼ N( - ρ, (1 + ρ ^2)/2η)
To analyze false alarms and missed detections, we define z to be the normalized r[n]:
z = (r[n] - E( r[n]))/√(Var( r[n])), z ∼ N(0,1)
Note the normalization in (<ref>) is the general case that works for both r_P[n] and r_N[n].
We now assume that there is a packet. By the definition of missed detection (i.e., not claiming packet detection when there is a packet), we write the missed-detection probability P_MD as
P_MD = 1/√(2π)∫_{-∞}^{-E( r_P[n])/√(Var( r_P[n]))}e^{-z^2/2} dz = 1/√(2π)∫_{E( r_P[n])/√(Var( r_P[n]))}^{∞}e^{-z^2/2} dz = Q( E( r_P[n])/√(Var( r_P[n]))) = Q( √(η)( (1 - ρ )γ - ρ)/√( (1 - ρ )^2γ + (1 + ρ ^2)/2))
where Q(.) is the well-known Q function <cit.>.
We derive the false-alarm probability by assuming that there is no packet. By the definition of false alarm (i.e., claiming packet detection when there is no packet), we write the false-alarm probability P_FA as
P_FA = 1/√(2π)∫_{-E( r_N[n])/√(Var( r_N[n]))}^{∞}e^{-x^2/2} dx = Q( - E( r_N[n])/√(Var( r_N[n]))) = Q( √(2ηρ ^2/(1 + ρ ^2)))
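A minimal sketch evaluating these closed-form expressions (η, ρ, and the SNR below are illustrative):

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def p_fa(eta, rho):
    return Q(sqrt(2.0 * eta * rho ** 2 / (1.0 + rho ** 2)))

def p_md(eta, rho, gamma):  # gamma is the linear SNR P/sigma^2
    num = sqrt(eta) * ((1.0 - rho) * gamma - rho)
    den = sqrt((1.0 - rho) ** 2 * gamma + (1.0 + rho ** 2) / 2.0)
    return Q(num / den)

eta, rho, gamma = 80, 0.5, 10 ** (5.0 / 10.0)  # eta=80, rho=0.5, 5 dB SNR
print(f"P_FA = {p_fa(eta, rho):.3e}, P_MD = {p_md(eta, rho, gamma):.3e}")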
§.§ Simulations and Discussions
This subsection validates our derivations and the Gaussian assumptions in subsection B through simulations. Fig. <ref> compares the simulated results of a_R[n], b[n], and r[n] with our analytical results under various noise and threshold settings. We conducted multiple simulations and averaged the results to eliminate the randomness in individual simulations. The analytical curves are plotted based on the expressions in subsection B. As the figure shows, the simulated results closely align with the analytical expressions, affirming the correctness of our derivations.
We next simulate P_MD and P_FA. Recall that we made Gaussian approximations on a_R[n], b[n], and their linear combination r[n] to write P_MD and P_FA in the form of the Q function. We can see from Fig. <ref> and Fig. <ref> that the simulated results match our analysis well when there are no fewer than 16 terms in the summations of a_R[n] and b[n], i.e., η ≥ 16. This is because the Central Limit Theorem approximation can be very precise when the number of terms in the summation is large. In a practical random-access system, the length of an STS is typically no smaller than 16. For example, in IEEE 802.11, the total preamble length is 160 samples <cit.>. If we apply the two-STS setting as in this paper, we have η = 80. Hence, we can confidently say that our Gaussian approximations on a_R[n], b[n] and r[n] can be very precise in realistic random-access systems, and the expressions of P_MD and P_FA we derived in (<ref>) and (<ref>) are trustworthy.
§ BENCHMARKING RP-S&C WITH CONVENTIONAL S&C
This section proposes a rigorous method to benchmark the performance of various packet detection schemes. We use this method to compare the performance of the conventional S&C algorithm with our RP-S&C algorithm.
Previous studies have evaluated packet detection schemes solely based on the missed-detection probability. For instance, <cit.> experimentally investigated several packet detection schemes for vehicular communication, but the authors simply concluded the superiority of one scheme over others based only on the number of miss-detected packets, assuming the same threshold for all the schemes under consideration. However, the consideration of false-alarm probability exposes a fundamental flaw of this approach. In essence, a packet detection scheme can trade off between the probabilities of false alarm and missed detection by adjusting its threshold. Lowering the detection threshold reduces the probability of missed detection while simultaneously increasing the probability of false alarm. Therefore, simply comparing missed detections without considering false alarms, or comparing false alarms without considering missed detections, is not reasonable.
We propose rigorous benchmarking by “Pareto comparison". Suppose that we have two packet detection schemes, A and B. In general, we can adjust ρ _A and ρ _B to obtain the tradeoff curves for the operating points ( P_FA^A(ρ _A),P_MD^A(ρ _A)) and ( P_FA^B(ρ _B),P_MD^B(ρ _B)), respectively. To illustrate our point, in Fig. <ref>, we plot an example of ( P_FA^A(ρ _A),P_MD^A(ρ _A)) curve and an example of ( P_FA^B(ρ _B),P_MD^B(ρ _B)) curve for two fictitious schemes A and B. We now explain how we benchmark schemes A and B.
We can find the thresholds for schemes A and B, ρ _A and ρ _B, such that their false-alarm probabilities are equal. For example, in Case one of Fig. <ref>, we fix P_FA^A(ρ _A) = P_FA^B(ρ _B) = 10^ - 6. Note that ρ_A and ρ_B are not necessarily equal for the same false-alarm probability in both schemes. For Case one, we have P_MD^B(ρ _B) < P_MD^A(ρ _A), and thus we say scheme B is superior to scheme A for this particular operating point. Alternatively, for Case two in the figure, we fix P_MD^A(ρ _A) = P_MD^B(ρ _B) = 10^ - 4 and observe that P_FA^B(ρ _B) < P_FA^A(ρ _A), and thus again we say scheme B is superior to scheme A for this particular operating point. In general, in Fig. <ref>, scheme B is superior to scheme A in the Pareto-sense in that the overall P_MD versus P_FA curve (referred to as the MD-FA curve) of scheme B is lower than that of scheme A.
If the two curves crisscross each other, it is inconclusive as to which scheme is superior. However, as illustrated in Fig. <ref>, if the curve of scheme B is consistently lower than that of scheme A within a specific region of interest (e.g., false-alarm probability not exceeding 10^ - 6 and missed-detection probability not exceeding 10^ - 4), we can conclude that scheme B outperforms scheme A in that particular region (although the two curves may still crisscross outside the region of interest). Conversely, if the two curves intersect within the region of interest (as illustrated in Fig. <ref>), we consider scheme A and scheme B to be comparable in that region, resulting in a “draw" in terms of benchmarking the two schemes.
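A minimal sketch of the matched-false-alarm comparison step (the two MD-FA curves below are fictitious placeholders obtained by sweeping each scheme's own threshold):

import numpy as np

def p_md_at(curve, target_p_fa):
    """Interpolate a scheme's P_MD at the requested P_FA (log domain)."""
    p_fa, p_md = np.asarray(curve, dtype=float).T
    order = np.argsort(p_fa)
    return 10 ** np.interp(np.log10(target_p_fa),
                           np.log10(p_fa[order]), np.log10(p_md[order]))

curve_A = [(1e-7, 3e-4), (1e-6, 1e-4), (1e-5, 4e-5)]   # fictitious scheme A
curve_B = [(1e-7, 1e-4), (1e-6, 3e-5), (1e-5, 1e-5)]   # fictitious scheme B
md_A, md_B = p_md_at(curve_A, 1e-6), p_md_at(curve_B, 1e-6)
print("B is superior at P_FA=1e-6" if md_B < md_A else "A is superior at P_FA=1e-6")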
Given the above context, we now examine the performance of RP-S&C and conventional S&C. We assume a 0.2ppm/2ppm/5ppm oscillator offset in accordance with the state-of-the-art/typical/worst CFO condition that one may encounter in modern communication hardware.[We conducted real-world experiments to test the CFOs of several commercial WiFi devices. Additionally, we examined the CFOs of a well-known open-source wireless channel dataset <cit.>. Our experiments revealed that the oscillator offsets of the tested hardware and the evaluated dataset are limited to a maximum of 2ppm. Hence, we consider 2ppm to be the typical oscillator offset. Moreover, we reviewed state-of-the-art research efforts published in top semiconductor journals/conferences <cit.>. We found that the oscillator offsets reported in these studies do not exceed 0.2ppm. Consequently, we consider 0.2ppm to be the state-of-the-art oscillator offset. For extreme cases, we assume a 5ppm oscillator offset as the worst-case scenario.] Fig. <ref> shows that the MD-FA curves of RP-S&C consistently lie below those of conventional S&C in various practical SNR and CFO settings, validating our statement in Section <ref>-A that taking the real part of a[n] is advantageous for reliable packet detections.
§ MULTI-ANTENNA PACKET DETECTION: ANALYSIS AND OPTIMIZATIONS
§.§ Problem Formulation
Assume that there are N_R antennas in a receiver. Let us denote the a_R[n], b[n], and r[n] of antenna j by a_R,j[n], b_j[n], and r_j[n], respectively. We want to combine r_j[n] with carefully chosen weights w_j so that the post-combined r[n] yields good packet-detection performance in terms of false alarm or missed detection (or both). Let r^M[n] represent the post-combined r[n] in the multi-antenna case to distinguish it from r[n] in the single-antenna case. The general expression of r^M[n] is
r^M[n] = ∑_j = 1^N_Rw_jr_j[n] = ∑_j = 1^N_Rw_j( a_R,j[n] - ρb_j[n])
The rest of this section investigates the optimal assignment for weight vector w = {w_1,...,w_N_R}. Recall from the discussion in Section <ref> that both false alarm and missed detection are important aspects of a packet detection algorithm. For a given threshold ρ, the weight vector minimizing false-alarm probability is different from that minimizing missed-detection probability.
a. Minimizing False-Alarm Probability
Assume that there is no packet, as in (<ref>), r_N,j[n] can be approximated as a Gaussian random variable:
r_N,j[n] ∼ N( - ρ, (1 + ρ ^2)/2η)
Thus, we have
E( r_N^M[n]) = E( ∑_j = 1^N_Rw_jr_N,j[n]) = ∑_j = 1^N_Rw_jE( r_N,j[n]) = - ρ·∑_j = 1^N_Rw_j
and
Var( r_N^M[n]) = Var( ∑_j = 1^N_Rw_jr_N,j[n]) = ∑_j = 1^N_Rw_j^2 · Var( r_N,j[n]) = 1 + ρ ^2/2η∑_j = 1^N_Rw_j^2
As in (<ref>), the false-alarm probability in the multi-antenna case is given by
P_FA^M = Q( - E( r_N^M[n])/√(Var( r_N^M[n]))) = Q( √(2η/(1 + ρ ^2))·ρ·∑_j = 1^N_R w_j/√(∑_j = 1^N_R w_j^2))
We note from (<ref>) that, for any weight vector w, the weight vector scaled by a constant c > 0 yields the same P_FA^M. We can impose a normalization condition ∑_j = 1^N_Rw_j = 1 without changing the outcome. Hence, we can formulate the optimization problem as
max f(w) = ( ∑_j = 1^N_R w_j^2)^{-1}, subject to ∑_j = 1^N_R w_j = 1, and w_j ≥ 0 ∀ j ∈{1,...,N_R}
b. Minimizing Missed-Detection Probability
Assume that there is a packet, as in (<ref>), r_P,j[n] can be approximated as a Gaussian random variable:
r_P,j[n] ∼ N( (1 - ρ )γ_j - ρ, (1 - ρ )^2/η·γ_j + (1 + ρ ^2)/2η)
Thus, we have
E( r_P^M[n]) = E( ∑_j = 1^N_Rw_jr_P,j[n]) = ∑_j = 1^N_Rw_jE( r_P,j[n]) = ∑_j = 1^N_Rw_j[ (1 - ρ )γ _j - ρ]
and
Var( r_P^M[n]) = Var( ∑_j = 1^N_R w_j·r_P,j[n]) = ∑_j = 1^N_R w_j^2 · Var( r_P,j[n]) = ∑_j = 1^N_R w_j^2 [ (1 - ρ )^2/η·γ_j + (1 + ρ ^2)/2η]
As in (<ref>), the missed-detection probability is given by
P_MD^M = Q( E( r_P^M[n])/√(Var( r_P^M[n]))) = Q( √(η)·∑_j = 1^N_R w_j[ (1 - ρ )γ_j - ρ]/√(∑_j = 1^N_R w_j^2[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2]))
From (<ref>), we can therefore formulate the optimization problem as
max g(w) = ( ∑_j = 1^N_R w_j[ (1 - ρ )γ_j - ρ])^2/∑_j = 1^N_R w_j^2[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2],
subject to ∑_j = 1^N_R w_j = 1, and w_j ≥ 0 ∀ j ∈{1,...,N_R}
Note from (<ref>) and (<ref>) that both minimizing false alarm and minimizing missed detection are subject to the constraint ∑_j = 1^N_R w_j = 1 and w_j ≥ 0 ∀ j ∈{1,...,N_R}. In the rest of this paper, we call w = {w_1,...,w_N_R} a feasible weight vector only if it satisfies this constraint.
§.§ Optimal Weights for False Alarm (WFA) and Missed Detection (WMD)
For a receiver with N_R antennas, the equal-weight assignment w_j = 1/N_R to r_N,j[n] yields the minimum P_FA^M.
arg max_{w: ∑_j w_j = 1, w_j ≥ 0} f(w) = arg min_{w: ∑_j w_j = 1, w_j ≥ 0} ∑_j = 1^N_R w_j^2
It is easy to see that the answer to (<ref>) is found by setting w_j = 1/N_R for all j: under the constraint ∑_j = 1^N_R w_j = 1, the Cauchy-Schwarz inequality gives ∑_j = 1^N_R w_j^2 ≥ (∑_j = 1^N_R w_j)^2/N_R = 1/N_R, with equality if and only if all the w_j are equal.
Finding the optimal weights for missed-detection probability is more challenging. Let us look at the derivative of g(w) over w_j:
∂ g(w)/∂w_j ≜ n(w_j)/d(w_j)
where the denominator d(w_j) is always positive, i.e.,
d(w_j) = ( ∑_m = 1^N_Rw_m^2[ (1 - ρ )^2γ _m + 1 + ρ ^2/2])^2 > 0
and the numerator n(w_j) is
n(w_j) = ( ∑_m = 1^N_R w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2]) · 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ]) ·[ (1 - ρ )γ_j - ρ] - ( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])^2 · 2w_j[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2]
= 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])_{Term A}·{ [ (1 - ρ )γ_j - ρ]·∑_m = 1^N_R w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2] - [ (1 - ρ )^2γ_j + (1 + ρ ^2)/2]·∑_m = 1^N_R w_m w_j[ (1 - ρ )γ_m - ρ] }_{Term B}
= 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])_{Term A}·{ ∑_m = 1^N_R w_m[ (w_m - w_j)( (1 - ρ )^3γ_mγ_j - (1 + ρ ^2)ρ/2) + (w_mγ_j - w_jγ_m)(1 + ρ ^2)(1 - ρ )/2 + (w_jγ_j - w_mγ_m)ρ(1 - ρ )^2] }_{Term B}
= 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])_{Term A}·{ [ (1 - ρ )γ_j - ρ]·∑_m ≠ j w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2] - w_j[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2]·∑_m ≠ j w_m[ (1 - ρ )γ_m - ρ] }_{Term B}
In (<ref>), we write n(w_j) as the product of term A and term B. In term A, we can impose a practical constraint[We will later show in Section V that a typical ρ in practical multi-antenna systems is no larger than 0.5, which means ρ/(1 - ρ) is no larger than 0dB. If an antenna j has SNR γ_j of 0 dB or lower, it will not contribute much to packet detection and packet decoding, and we might as well omit it in both considerations. In other words, in this analysis, we assume that antennas with SNR of less than ρ/(1 - ρ) would not be used for packet detection purposes.] of γ_m > ρ/(1 - ρ) for every antenna so that we have (1 - ρ)γ_m - ρ > 0 ∀ m ∈{1,...,N_R}. With ∑_j = 1^N_R w_j = 1, w_j ≥ 0, and (1 - ρ)γ_m - ρ > 0, we know that term A is positive. In term B, the terms in which m = j in the two summations cancel each other out, so we exclude them from the summations and obtain the final form of term B in the last line.
We note that for a locally optimal solution, we require ∂ g(w)/∂w_j = n(w_j)/d(w_j) = 0 for all j ∈{1,...,N_R}. Thus, term B should be zero and the following equation should hold:
[ (1 - ρ )γ_j - ρ] ·∑_m ≠ j w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2] = w_j[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2] ·∑_m ≠ j w_m[ (1 - ρ )γ_m - ρ]
From (<ref>), we have
w_j = ∑_m ≠ j w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2][ (1 - ρ )γ_j - ρ] / ∑_m ≠ j w_m[ (1 - ρ )^2γ_j + (1 + ρ ^2)/2][ (1 - ρ )γ_m - ρ], j ∈{1,...,N_R}
To investigate (<ref>), we start from the simple two-antenna case, i.e., we only have antenna 1 and antenna 2. We have
w_1 = w_2^2[(1 - ρ )^2γ_2 + (1 + ρ ^2)/2][(1 - ρ )γ_1 - ρ ] / ( w_2[(1 - ρ )^2γ_1 + (1 + ρ ^2)/2][(1 - ρ )γ_2 - ρ ] )
and
w_2 = w_1^2[(1 - ρ )^2γ_1 + (1 + ρ ^2)/2][(1 - ρ )γ_2 - ρ ] / ( w_1[(1 - ρ )^2γ_2 + (1 + ρ ^2)/2][(1 - ρ )γ_1 - ρ ] )
The above gives
w_1/w_2 = [(1 - ρ )^2γ_2 + (1 + ρ ^2)/2][(1 - ρ )γ_1 - ρ ] / ( [(1 - ρ )^2γ_1 + (1 + ρ ^2)/2][(1 - ρ )γ_2 - ρ ] ) = ( [(1 - ρ )γ_1 - ρ ]/[(1 - ρ )^2γ_1 + (1 + ρ ^2)/2] ) / ( [(1 - ρ )γ_2 - ρ ]/[(1 - ρ )^2γ_2 + (1 + ρ ^2)/2] )
As in Subsection B, we impose the constraint of ∑_j = 1^N_Rw_j = 1. Thus, a feasible locally optimal solution for the two-antenna case is given by
w_j = c·[(1 - ρ )γ_j - ρ ]/[(1 - ρ )^2γ_j + (1 + ρ ^2)/2], j = 1,2, where c = ( ∑_m = 1^2 [(1 - ρ )γ_m - ρ ]/[(1 - ρ )^2γ_m + (1 + ρ ^2)/2])^{-1}
We now extend our analysis to cases with more than two antennas. We shall see that the solution form of (<ref>) is retained for the general case. With the general expression of w_j given in (<ref>), we can verify that a feasible locally optimal solution for a N_R-antenna case is
w_j = c·[(1 - ρ )γ_j - ρ]/[(1 - ρ )^2γ_j + (1 + ρ ^2)/2], j = 1,2,...,N_R, where c = ( ∑_m = 1^N_R[(1 - ρ )γ_m - ρ ]/[(1 - ρ )^2γ_m + (1 + ρ ^2)/2])^{-1}
In the rest of this paper, we denote the weight vector calculated according to (<ref>) by w^o. We now prove that w^o is the unique solution that yields the global maximum g(w).
If a feasible weight vector w does not satisfy (<ref>), i.e., w ≠ w^o, then w is non-optimal. Thus, w^o in (<ref>) is the unique optimal solution to minimizing P_MD as per (<ref>).
We prove that there is another solution w' that yields g(w') > g(w).
We first note that it is not possible that w_j < w_j^o for all j ∈{1,...,N_R} or w_j > w_j^o for all j ∈{1,...,N_R}, because that would mean ∑_j = 1^N_R w_j < 1 or ∑_j = 1^N_R w_j > 1. Thus, given that w ≠ w^o and that w is feasible (i.e., ∑_j = 1^N_R w_j = 1), there must be at least one k such that w_k/w_k^o > 1 and at least one i such that w_i/w_i^o < 1. Let us refer to w_j/w_j^o as the weight ratio of index j. In general, there could be multiple weight ratios of different indexes that attain the maximum, and multiple weight ratios of different indexes that attain the minimum. Let the respective sets be
K = {k : w_k/w_k^o = max_j w_j/w_j^o} and I = {i : w_i/w_i^o = min_j w_j/w_j^o}
Now, consider a k ∈K and an i ∈I . We have that
w_k > (w_j/w_j^o)·w_k^o ∀ j ∉K and w_i < (w_j/w_j^o)·w_i^o ∀ j ∉I
With (<ref>), we look back to the expression given in (<ref>). We have that
n(w_k) = 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])_{Term A}·{ [ (1 - ρ )γ_k - ρ]·∑_m ≠ k w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2] - w_k[ (1 - ρ )^2γ_k + (1 + ρ ^2)/2]·∑_m ≠ k w_m[ (1 - ρ )γ_m - ρ] }_{Term B}
< 2( ∑_m = 1^N_R w_m[ (1 - ρ )γ_m - ρ])_{Term A}·{ [ (1 - ρ )γ_k - ρ]·∑_m ≠ k w_m^2[ (1 - ρ )^2γ_m + (1 + ρ ^2)/2] - [ (1 - ρ )^2γ_k + (1 + ρ ^2)/2]·∑_m ≠ k w_m^2(w_k^o/w_m^o)[ (1 - ρ )γ_m - ρ] }_{Term B}
= 0
where we obtain the last equality by substituting w_k^o and w_m^o in accordance with (<ref>) into the second line.
Similarly, we can show that
n(w_i) > 0
Thus, for an infinitesimally small ε > 0, we have that
∂ g(w)/∂w_i·ε - ∂ g(w)/∂w_k·ε = ε[ ∂ g(w)/∂w_i - ∂ g(w)/∂w_k] = ε[ n(w_i)/d(w_i) - n(w_k)/d(w_k)] > 0
Given that g(w) is twice differentiable in w_k and w_i, we can construct a feasible solution w' such that g(w') > g(w) as follows:
w'_k = w_k - ε/|K| ∀ k ∈K
w'_i = w_i + ε/|I| ∀ i ∈I
w'_j = w_j ∀ j ∈J
where J={1,...,N_R} - I - K is the complement set of I∪K; | I| and | K| denote the cardinality of I and K, respectively.
Although the above proof is complete by itself, a question that we might ask is how large can ε be (i.e., it does not have to be infinitesimally small). By similar reasoning as in the proof, we note that as we increase ε, we would still have ∂ g(w')/∂w'_k < 0 and ∂ g(w')/∂w'_i > 0 provided that w'_k > (w_j/w_j^o)w_k^o for all j ∉K and w'_i < (w_j/w_j^o)w_i^o for all j ∉I.
Thus, we can increase ε until either w'_k = (w_j/w_j^o)w_k^o for some j ∉K (i.e., w_j/w_j^o is the second largest weight ratio here), or w'_i = (w_j/w_j^o)w_i^o for some j ∉I (i.e., w_j/w_j^o is the second smallest weight ratio here), whichever equality is fulfilled first. In particular, we can set
ε = min{ |K|[ w_k - max_j ∉K( (w_j/w_j^o)w_k^o)], |I|[ min_j ∉I( (w_j/w_j^o)w_i^o) - w_i] } if J ≠ ∅; ε = |K|(w_k - w_k^o) = |I|(w_i^o - w_i) if J = ∅
In fact, the above suggests an algorithmic way to march toward w^o from an arbitrary feasible w. We perform (<ref>) in accordance with (<ref>). Then, with the new w', the cardinality of the new I or the new K is enlarged. We repeat the procedure to obtain an even better solution w'', and so on, until we reach w^o.
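A minimal sketch contrasting the two weight assignments and the resulting multi-antenna probabilities (the antenna SNRs, η, and ρ below are illustrative; the WMD weights assume γ_j > ρ/(1-ρ) for every antenna):

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def wfa_weights(n_r):
    return [1.0 / n_r] * n_r

def wmd_weights(gammas, rho):  # closed-form optimum w^o
    raw = [((1 - rho) * g - rho) / ((1 - rho) ** 2 * g + (1 + rho ** 2) / 2)
           for g in gammas]
    c = sum(raw)
    return [x / c for x in raw]

def p_fa_multi(w, eta, rho):
    return Q(sqrt(2 * eta / (1 + rho ** 2)) * rho * sum(w) / sqrt(sum(x * x for x in w)))

def p_md_multi(w, gammas, eta, rho):
    num = sqrt(eta) * sum(x * ((1 - rho) * g - rho) for x, g in zip(w, gammas))
    den = sqrt(sum(x * x * ((1 - rho) ** 2 * g + (1 + rho ** 2) / 2)
                   for x, g in zip(w, gammas)))
    return Q(num / den)

eta, rho = 80, 0.4
gammas = [10 ** (s / 10) for s in (2.0, 6.0, 10.0)]  # three distributed antennas
for name, w in (("WFA", wfa_weights(len(gammas))), ("I-WMD", wmd_weights(gammas, rho))):
    print(name, [round(x, 3) for x in w],
          f"P_FA={p_fa_multi(w, eta, rho):.2e}", f"P_MD={p_md_multi(w, gammas, eta, rho):.2e}")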
§ MULTI-ANTENNA PACKET DETECTION: DISCUSSION AND EXPERIMENTS IN A DISTRIBUTED ANTENNA SYSTEM
Section IV puts forth two weight-assignment solutions for the combination of r[n]: (i) WFA and (ii) WMD. In a random-access network with co-located antennas, WFA and WMD have similar performance because co-located antennas have nearly the same SNR, resulting in similar weights for both WFA and WMD.[Signal at different co-located antennas may differ in phase, but it does not affect the calculation of a_R[n] and b[n]. There is little SNR difference between co-located antennas. Hence, the weight assigned by WMD should be very close to that of WFA.] That is, WMD also results in roughly equal-weight assignments.
Packet detection in advanced wireless communication systems with distributed antennas, also known as distributed antenna systems (DAS), introduces different scenarios when comparing WFA and WMD because the SNRs at non-co-located antennas may vary widely. DAS offers two distinct advantages over conventional co-located antenna systems. First, co-located antennas suffer from a weakness that the signal blockage between the transmitter antenna and the co-located receiver antennas results in no signal reception. In contrast, DAS allows for potential signal reception even if one receiver antenna is blocked, thanks to clear paths of other non-blocked antennas. Second, in DAS, the proximity between the transmitter and the nearest receiver antenna tends to be smaller than the distance between the transmitter and co-located receiver antennas, resulting in improved communication quality between the transmitter and the receiver. However, the distributed nature of DAS poses a new challenge in weight assignment. Antennas in DAS can be separated by tens to hundreds of wavelengths. This discrepancy in propagation length among transmit-receive antenna pairs leads to varying SNRs across the antennas. Given the varying SNRs, the benchmarking of WFA and WMD becomes an issue.
§.§ Implementation Issues
Implementation-wise, WFA is a simple and practical scheme, as it requires no additional system information except N_R, the number of antennas. As a result, WFA has lower implementation complexity, requires fewer computational resources, and consumes less signal processing time. In particular, WFA does not need knowledge of the SNRs of the antennas since the weights do not depend on the SNRs.
For the false-alarm probability of WFA, we substitute w_j = 1/N_R into (<ref>) and (<ref>) and obtain
E( r_N^WFA[n]) = -ρ    and    Var( r_N^WFA[n]) = (1 + ρ^2)/(2η N_R)
where the subscript N denotes pure noise input. We know from (<ref>) that the false-alarm probability of WFA can be written as
P_FA^WFA = Q( -E( r_N^WFA[n])/√(Var( r_N^WFA[n]))) = Q( √( 2η N_R ρ^2/(1 + ρ^2)))
For the missed-detection probability, we substitute w_j = 1/N_R into (<ref>) and (<ref>) and obtain
E( r_P^WFA[n]) = ∑_j = 1^N_R w_j[ (1 - ρ)γ_j - ρ] = ((1 - ρ)/N_R)∑_j = 1^N_R γ_j - ρ
Var( r_P^WFA[n]) = ∑_j = 1^N_R w_j^2·[ ((1 - ρ)^2/η)γ_j + (1 + ρ^2)/(2η)] = ((1 - ρ)^2/(η N_R^2))∑_j = 1^N_R γ_j + (1 + ρ^2)/(2η N_R)
where the subscript P denotes packet-plus-noise input. From (<ref>), we have
P_MD^WFA = Q( E( r_P^WFA[n])/√(Var( r_P^WFA[n]))) = Q( √(η)·[ ((1 - ρ)/N_R)∑_j = 1^N_R γ_j - ρ] /√( ((1 - ρ)^2/N_R^2)∑_j = 1^N_R γ_j + (1 + ρ^2)/(2N_R)))
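As a numerical illustration of the closed-form expressions above (not part of the original derivation), the following Python sketch evaluates P_FA^WFA and P_MD^WFA for a given threshold ρ, STS length η, antenna count N_R, and per-antenna SNRs; the example values at the end are assumptions.

import numpy as np
from scipy.stats import norm

def wfa_false_alarm(rho, eta, n_r):
    # P_FA^WFA = Q(sqrt(2*eta*N_R*rho^2 / (1 + rho^2))), with Q the Gaussian tail function
    return norm.sf(np.sqrt(2.0 * eta * n_r * rho**2 / (1.0 + rho**2)))

def wfa_missed_detection(rho, eta, gammas):
    # gammas: per-antenna SNRs on a linear scale (not dB)
    gammas = np.asarray(gammas, dtype=float)
    n_r = gammas.size
    mean_p = (1.0 - rho) / n_r * gammas.sum() - rho                # E(r_P^WFA[n])
    var_p = ((1.0 - rho)**2 / n_r**2 * gammas.sum()
             + (1.0 + rho**2) / (2.0 * n_r)) / eta                 # Var(r_P^WFA[n])
    return norm.sf(mean_p / np.sqrt(var_p))                        # Q(E / sqrt(Var))

# Example with assumed values: eta = 16 samples per STS, four antennas at a linear SNR of 10 each
print(wfa_false_alarm(0.45, 16, 4), wfa_missed_detection(0.45, 16, [10.0, 10.0, 10.0, 10.0]))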
Implementing WMD is more complex, as WMD requires knowledge of SNRs beforehand in order to calculate the weight of different antennas. However, obtaining the SNRs before packet detection is challenging, as accurate SNR estimation typically requires pilot-based signal processing, which is triggered by packet detection rather than preceding it. Thus, we have a “chicken-and-egg dilemma" where we need to know the precise SNRs before doing WMD, but typically SNR estimations happen after WMD.
A possible practical way to overcome the problem is to estimate the SNR using the preamble. Specifically, we can measure the power of the background noise when the receiver is idle. And then, as in (<ref>), we calculate b[n] for each antenna. We can coarsely estimate the SNR of an antenna with the noise power and the b[n] of that antenna. However, this approach has limitations. First, the power of background noise may vary over time, but we only have the average noise power obtained during the idle period. Second, the weight assignment in this scheme is highly sensitive to interference. In shared spectrum environments, where most random-access systems are deployed, wireless interferences are common (say wireless packets from a Bluetooth device or a working microwave oven). These interferences may add to the preamble sequence and increase b[n] (but such interference does not increase a_R[n] or aid in the packet detection process), making the estimated SNR larger than its actual value. Consequently, this scheme may not be very reliable in practice.
Let us refer to the above system with the coarse SNR estimation as the practical WMD (P-WMD), and the hypothetical system with perfect a priori knowledge of SNRs without estimation as the ideal WMD (I-WMD).
§.§ Benchmarking WFA and WMD in two typical DAS scenarios
This subsection benchmarks WFA and I-WMD/P-WMD with typical DAS scenarios (see Section <ref> for the Pareto benchmarking method). In our benchmarking exercise, we set the maximum false alarm tolerance and missed detection tolerance at P_FA^max = 10^-6 and P_MD^max = 10^-4, respectively. That is, we are only interested in operating regions with false-alarm and missed-detection probabilities below these thresholds. We justify the tolerance settings in the following. In a practical DAS, it is reasonable to assume that a packet has no more than 1024 OFDM samples.[In recent WiFi standards such as 802.11ax or future IEEE 802.11be, the length of a packet is typically no larger than 1024 samples. Additionally, a recent technical trend in wireless communication is to achieve URLLC with very short packet lengths. This setting is referred to as short-packet communication (SPC), where a packet typically has no more than 50 bytes <cit.>, making the packet length much shorter than 1024 samples.] Suppose that we want the system to experience no more than one false alarm every 1000 packets on average. Then, the false-alarm probability should be no larger than (1024 × 1000)^-1 ≈ 10^-6. As for the missed-detection probability, we take as reference the ultra-reliable low latency communication (URLLC) defined by the 3rd Generation Partnership Project (3GPP), which requires at least 99.99% successful packet decoding <cit.>, i.e., the transmission error tolerance is no larger than 0.01% or 10^-4. We assume a missed-detection tolerance commensurate with the reliability requirement of packet decoding, corresponding to a missed-detection probability of no more than 10^-4.
Furthermore, as in the previous analysis, we consider a two-STS preamble, with each STS having 16 samples (i.e., η = 16). We further assume that the timing offsets between different packets have been compensated prior to the application of WFA/I-WMD/P-WMD.[Sample misalignment can be a challenge for DAS. Due to the path-length discrepancy between different transmit-receive antenna pairs, the samples collected at different antennas may not be aligned in time (i.e., different propagation latency at different antennas). Therefore, the alignment of input samples is necessary before applying WFA/WMD in DAS. For preambles with two STSs, the b[n] of each packet exhibits a sharp peak that indicates the clear starting position of the preamble (see Fig. <ref>). We can utilize the peak for alignment.]
We consider two typical DAS scenarios in the benchmark: the non-blocked scenario and the partially blocked scenario. In the non-blocked scenario, all transmit-receive pairs have clear propagation paths. In the partially blocked scenario, the propagation paths of some receive antennas are obstructed, leading to significantly lower SNRs than other antennas. We do not consider the case where all receiver antennas are blocked because it is unlikely to happen (since DAS is designed to avoid such situations). Furthermore, even if the rare occasion happens, improving packet-detection performance would be futile as the low SNRs in all antennas could prevent successful packet decoding anyway.
Table I gives two examples of the SNR conditions in the non-blocked scenario and the partially blocked scenario. Both examples consider four distributed antennas.
Fig. <ref> presents the MD-FA curves of WFA, I-WMD, and P-WMD in the non-blocked scenario. It is clear that WFA outperforms the other two schemes in this example. The reader may wonder why the two WMD schemes turn out to have inferior missed-detection performance than WFA. The reason is simple: WMD is superior to WFA in terms of missed-detection performance only for a given fixed detection threshold ρ. However, WMD has a higher false-alarm probability for that fixed ρ. For example, in Fig. <ref>, we fix ρ=0.45 and highlight the corresponding (P_FA,P_MD) for WFA and I-WMD in points P_1 and P_2, respectively. As the figure shows, although P_2 has a lower missed-detection probability, its false-alarm probability is much higher than P_1. For the same false-alarm performance as WFA, I-WMD would have to raise its ρ, which in turn increases its missed-detection probability, to the extent that it is now worse than that of WFA (see P_3 in the figure).
For the partially blocked scenario, as we can see from Table I, antenna one experiences a significantly low SNR due to blockage, while the other three antennas remain unaffected. Fig. <ref> presents MD-FA curves of WFA, I-WMD, and P-WMD for this example. It is clear that WFA still outperforms the other two schemes in the partially blocked scenario.
§.§ General Benchmark Results in a Distributed Antenna Dataset
After examining the above two typical scenarios, we now proceed to a more general comparison between WFA and I-WMD/P-WMD. We use the same benchmark scheme as in subsection B (including the same P_FA^max and P_MD^max settings) and conduct emulation experiments on DICHASUS <cit.>, a massive open-source wireless channel dataset collected in industrial environments. To conserve space, we do not present the numerous MD-FA curves here.
We first give a general introduction to the dataset. The channel information in DICHASUS was measured using 32 software-defined radio (SDR) sensors and one transmitter that moves randomly in a factory. These 32 sensors were divided into four groups (Group A, B, C, and D), with each group comprising eight sensors located in one corner of the factory. Fig. <ref> shows the layout of the factory and the locations of the antenna groups A, B, C, and D. The transmitter periodically transmits a reference packet that is known to every sensor. Upon receiving the reference packet, an SDR sensor compares it with the original version of the packet it knows a priori to obtain the precise channel information. The DICHASUS dataset comprises a total of 44,703 valid[In the data pre-processing stage, we discard a small portion of measurements that are obviously invalid (or even wrong). For example, SNRs of some antennas are unreadable (i.e., not a number, NaN) or smaller than 0dB. That may be caused by measurement (or data recording) errors during the data collection. Furthermore, studying cases with less than 0dB SNR is meaningless for our packet-detection research, as such cases will fail in packet decoding anyway.] DAS measurements collected on five different days, and each measurement has 32 pieces of channel information estimated by the 4x8 distributed sensors through the same reference packet at the same time.
We next give a detailed look at the SNR information in DICHASUS. We denote an antenna by Ana(i,j), where i ∈{A,B,C,D} and j ∈{1,2,3,4,5,6,7,8}. Each Ana(i,j) has 44,703 SNR measurements, and we denote the k^th SNR measurement by γ _i,j(k). Unless stated otherwise, experiments and discussions below assume the original SNR values rather than the dB values.
With the above definition, we study the SNR correlation of all 32 antennas and present the result in Table II below. Without loss of generality, we use Ana(A,1) as a reference and calculate the correlation between Ana(A,1) and Ana(i,j) by
corr{(A,1),(i,j)} = ∑_k ( γ_A,1(k) - γ̄_A,1)( γ_i,j(k) - γ̄_i,j)/√(∑_k ( γ_A,1(k) - γ̄_A,1)^2·∑_k ( γ_i,j(k) - γ̄_i,j)^2), where γ̄_i,j denotes the sample mean of γ_i,j(k) over k.
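This is the sample Pearson correlation over the measurement index k; a short NumPy sketch, assuming the SNR traces are stored as arrays, is given below for reference.

import numpy as np

def snr_correlation(gamma_ref, gamma_other):
    # Sample Pearson correlation between two antennas' SNR traces (linear-scale SNRs)
    gamma_ref = np.asarray(gamma_ref, dtype=float)
    gamma_other = np.asarray(gamma_other, dtype=float)
    d_ref = gamma_ref - gamma_ref.mean()
    d_oth = gamma_other - gamma_other.mean()
    return (d_ref * d_oth).sum() / np.sqrt((d_ref**2).sum() * (d_oth**2).sum())

# Equivalently: np.corrcoef(gamma_ref, gamma_other)[0, 1]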
We have several observations from Table II. First, SNRs of two co-located antennas are highly positively correlated. For example, Fig. <ref> plots a 2-D scatter chart for Ana(A,1) and Ana(A,2), where a scatter point ( γ_A,1(k), γ_A,2(k)) is plotted for each k. We see that the points roughly fall around the straight line y = x. We can obtain many similar 2-D scatter charts if we consider co-located antennas within the same group.
Second, if we look at Ana(A,1) and antennas in Group D, we observe a weak and negative correlation between Ana(A,1) and Ana(D,j). Fig. <ref> uses Ana(D,6) as an example to illustrate the relationship. As we can see from the figure, the SNRs of Ana(A,1) and Ana(D,6) are generally negatively correlated. This can be explained by the distributed nature of DAS: if one distributed antenna is weak, the other may still be strong. From Fig. <ref>, we see that Group A and Group D are located in two opposite corners of the factory. When the transmitter moves from one corner to the opposite corner (Fig. <ref> also provides an example of the trace of a transmitter moving from Group A to Group D), we should observe a decrease in SNR for one and an increase in SNR for the other. Furthermore, thanks to this distributed nature of DAS, we see no fully blocked cases in Fig. <ref>, i.e., at least one antenna has an SNR larger than 3dB.
We now elaborate on our experimental settings. In the first step, we construct a fixed two-STS preamble (with η=16 in every STS). This preamble sequence is used consistently throughout the experiment. Since our objective is to benchmark WFA and I-WMD/P-WMD in general cases to see if one scheme consistently outperforms the other in practice, we emulate and test all channel measurements in DICHASUS. For the testing of each measurement, it is sufficient to represent a group of antennas with one or two elements, given that the SNRs of co-located antennas within the same group are highly positively correlated. Therefore, we randomly select two antennas from each group to simplify the emulation. After the random antenna selections, we emulate the corresponding 4x2 channels and transmit the fixed preamble sequence through these emulated channels. At the receiver side of the eight-antenna system, we apply WFA/I-WMD/P-WMD and record the benchmark result. After testing one measurement, we move on to the next measurement until we have completed testing all 44,703 measurements in the DICHASUS dataset.
During the benchmark process, we find that some antennas have significantly lower SNR than the other antennas when blockages occur. Let us make the following definition: if an antenna's SNR is lower than 3dB, we say the antenna is blocked and the measurement is a blocked case. Our analysis reveals that 14.47% of the emulated eight-antenna systems encounter varying degrees of blockage, while the remaining 85.53% of the emulated systems are non-blocked. Thanks to the reliability advantage of DAS, no fully blocked case (all antennas blocked) is observed in the dataset.
Table III presents emulation results in detail, with 14.47% partially blocked tested cases and 85.53% non-blocked cases. In the table, each row corresponds to the benchmark conducted on a specific day. For each day, there are three pieces of data that record the percentage of instances where “WFA outperforms I-WMD/P-WMD in terms of packet detection", “Draw", and “I-WMD/P-WMD outperforms WFA" (please refer to Section <ref> for the explanation of “Draw"). The last row of the table provides the average results across all 44,703 measurements.
Table III shows that WFA outperforms both P-WMD and I-WMD by a significant margin. On average, WFA surpasses its opponents in 91.49% of cases when competing against I-WMD and in 98.62% of cases when competing against P-WMD. Further, considering the draw cases, the percentage of WFA not losing is 97.36% when competing with I-WMD and 99.41% when competing with P-WMD.
Based on the above emulation results, we recommend WFA as the desirable choice for a realistic DAS due to the following reasons:
* (Superior packet-detection performance) WFA outperforms or at least matches I-WMD and P-WMD in the vast majority of cases (97.36% and 99.41% of cases on average, respectively).
* (Simplicity) WFA is much easier to implement, as it does not require SNR estimations or complex weight calculations. In contrast, I-WMD is not practical implementation-wise.
* (Reliability) WFA is more reliable than P-WMD because it is not sensitive to noise or interference, which is critical in practical applications where environmental factors can affect signal quality.
§ CONCLUSION
In conclusion, this paper has provided a comprehensive treatment of packet detection for random access networks. The conventional S&C algorithm suffers from complex correlated noises in its packet-detection metric, making it difficult to analyze. To address this issue, we propose an analytical framework that uses “compensated autocorrelation" as the new metric for packet detection. In addition, our results demonstrate that taking the real part of the autocorrelation can significantly enhance the performance of S&C.
By leveraging the analytical tractability of compensated autocorrelation, we obtain accurate closed-form expressions for false-alarm and missed-detection probabilities. These expressions provide a rigorous theoretical foundation for fair Pareto benchmarking of packet-detection schemes and extension of single-antenna packet detection schemes to multi-antenna packet detection schemes.
In particular, for multi-antenna detection, we can use the weighted sum of compensated autocorrelations at different antennas as the metric without sacrificing analytical rigor. This approach enables us to determine the best weights for minimizing the false-alarm probability (WFA) and the missed detection probability (WMD). Our investigation suggests that WFA is the preferred choice for practical application settings.
Overall, our paper contributes to both the theory and practice of packet detection for random access networks. Our theoretical foundation provides insights on how to design packet detection schemes and how to compare and benchmark them in a rigorous manner in practical systems. This work has the potential to improve the performance of packet detection in random access networks and advance the field toward more efficient and reliable communication systems.
§ ACKNOWLEDGMENT
The authors would like to express their sincere gratitude to Prof. Henry Chen for his valuable suggestion to use the DICHASUS dataset for our emulation experiments. The authors would also like to extend their appreciation to Dr. Gongpu Chen for his helpful comments on the Q function derivation in this paper.
IEEEtran
|
http://arxiv.org/abs/2307.04100v1 | 20230709052546 | Visible and infrared self-supervised fusion trained on a single example | [
"Nati Ofir"
] | cs.CV | [
"cs.CV"
] |
Visible and infrared self-supervised fusion trained on a single example
Nati Ofir
August 12, 2023
=======================================================================
This paper addresses the problem of visible (RGB) to Near-Infrared (NIR) image fusion. Multispectral imaging is an important task in image processing and computer vision, all the more so since the development of the RGBT sensor. While the visible image sees color and suffers from noise, haze, and clouds, the NIR channel captures a clearer picture and is strongly required by applications such as dehazing or object detection. The proposed approach fuses these two aligned channels by training a Convolutional-Neural-Network (CNN) with Self-Supervised-Learning (SSL) on a single example. For each such pair, RGB and IR, the network is trained for seconds to deduce the final fusion. The SSL is based on a Structural-Similarity (SSIM) loss combined with an Edge-Preservation (EP) loss. The labels for the SSL are the input channels themselves. This fusion preserves the relevant detail of each spectral channel while not relying on a heavy training process. In the experiments section, the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods that are not based on large-dataset training.
§ INTRODUCTION
The problem of visible-to-infrared image fusion is a well-studied area with a plethora of works. Even though many solutions have been developed, there is still a need for an Artificial-Intelligence (AI) approach that is based on Deep Learning (DL) yet does not require heavy pre-training and large dataset acquisition to carry out a single multispectral fusion. This paper introduces a DL method that works on a single example and produces a fusion result in an SSL manner, such that no manual human labeling is required. Given this solution, every multispectral camera can be extended with a fusion channel such that the observer is able to see the details captured by each spectrum without flickering between the different images. While the visible RGB (0.4-0.7μ m) sees color information, the NIR (0.8-2.5μ m) sees beyond haze and fog and suffers less from the noise of low-light imaging. Since each spectral channel captures different information about the scene, their fusion is informative and relevant for a person observing the camera.
While most DL fusion approaches, such as attention-based ones <cit.>, require a time-consuming training phase, the proposed method trains CNN weights for each input image for forty seconds on an Nvidia GeForce GTX 3060 GPU. In addition, while classic image fusion methods, such as <cit.>, are relatively fast to compute, the experiments of this paper show that they preserve less of the input detail according to several quantitative measurements. For example, Figure <ref> demonstrates the proposed method's results for RGB-to-NIR fusion on a country example of the dataset <cit.>. These results manage to combine the information of both inputs: it can be seen that the far mountains, visible only in infrared, are emphasized by the computed CNN in the final fusion. Moreover, the color information of the RGB sensor is preserved in the fusion. Even though this method is based on a learned CNN, the outcome appears natural and free of noticeable artifacts.
Often, the input channels are not aligned with each other, and multispectral image registration is required as a preprocessing step. As the dataset <cit.> contains only small misalignments, this paper proposes simple solutions to that problem. The first approach is to align the images in advance by methods tailored to multispectral imaging, whether DL-based <cit.> or based on traditional computer vision <cit.>. The second solution, which can be integrated into the proposed CNN architecture, is to learn a Spatial-Transformation-Network (STN) <cit.> in a holistic end-to-end manner to compute the final aligned fusion result. As this example shows, the CNN output does not suffer from channel misregistration.
This manuscript is organized as follows. In Section <ref> the previous methods for image fusion are covered. Next, in Section <ref> the proposed approach is explained in detail, including the CNN architecture, training algorithm, and loss functions. Then, Section <ref> illustrates the fusion performance with respect to other methods that do not depend on a time-consuming training phase. Finally, this paper is concluded in Section <ref>.
§ PREVIOUS WORK
Image fusion is a classic problem of computer vision. Early methods utilized signal characteristics for fusion, such as wavelet-based methods <cit.>. Laplacian pyramid blending was used, for example, to handle multi-focus image capturing <cit.>. Statistical features of the input images can contribute to their fusion, such as Principal-Component-Analysis (PCA) <cit.>. Fusion can also be carried out according to spectral analysis of the images, as introduced in <cit.>. A recent approach utilized superpixel segmentation <cit.> for content-based multispectral fusion <cit.>. The DL revolution produced many related works with state-of-the-art (SOTA) blending performance, e.g., <cit.>. Visible and infrared fusion has also used DL to enhance object detection <cit.>. The proposed method utilizes DL techniques and a lightweight CNN architecture, yet does not depend on heavy training processes and large datasets, contrary to most recent approaches. The idea of training a CNN on a single example has shown significant potential in super-resolution <cit.> and image generation by Generative-Adversarial-Networks (GAN) <cit.>. This work is the first to utilize single-image training for multispectral image fusion.
If the input spectral channels are not geometrically aligned, an a priori step of multispectral registration is required. Single-channel registration can be carried out by engineered feature descriptors like the Scale-Invariant-Feature-Transform (SIFT) <cit.>. Unfortunately, regular alignment methods usually fail in the multispectral scenario, and therefore an approach tailored to this case is needed. A descriptor that is invariant across spectra can be based on edge detection <cit.>, like Canny <cit.>; however, such methods are limited in the geometric transformations they can handle. An additional option is Mutual-Information (MI) based registration <cit.>, which usually handles translations or small optical-flow fields. Recent methods utilize DL to compute a spectra-invariant descriptor, like <cit.>; unfortunately, this method is also geometrically limited. Another DL method learned a hybrid network for multispectral keypoint matching <cit.>; it shows better accuracy but depends on a manually labeled training dataset. The dataset that the proposed method fuses <cit.> contains small misalignments that are usually resolved holistically by the learned CNN. The geometric correction can also be trained using a Spatial-Transformation-Network (STN) <cit.>, which computes a geometric transformation by end-to-end learning. In conclusion, multispectral image alignment is a challenging problem that is far from fully solved, though it has become less critical since the development of RGBT cameras <cit.>.
Self-Supervised-Learning (SSL) is a relevant field, enabling AI and DL to be independent of human labeling. A common SSL approach utilizes contrastive learning <cit.>. In this paper, the proposed method uses the input spectral channels as labels for their fusion, based on the Structural-Similarity-Measure (SSIM) <cit.> and an Edge-Preservation (EP) loss <cit.>. As a whole, this study introduces a holistic solution for visible-to-infrared fusion and registration based on SSL.
§ THE PROPOSED MULTISPECTRAL FUSION
This section introduces the proposed method for fusing visible and infrared multispectral images by training a fusion CNN on a single example for several seconds using self-supervised loss functions.
§.§ Network architecture
The proposed CNN architecture for image fusion takes two channels of any image dimension and outputs a single channel with the same height and width as the input. A typical image in the dataset used to evaluate the method <cit.> is 900x768 pixels. The compact fusion network contains four convolutions with 3x3 kernels; the first three are followed by a ReLU(x) = max(x,0) activation, and the final output convolution is followed by Sigmoid(x) = e^x/(1+e^x). The architecture contains two skip connections based on numeric addition. Before the feed-forward CNN, an STN is applied to align the spectral channels. In addition, a UNet <cit.> with a ResNet18 backbone <cit.> is applied in parallel to the feed-forward CNN to obtain a smooth fusion with semantic information.
For a graphical overview see Figure <ref>, and for the full CNN parameters see Table <ref>. The total number of parameters is ≈ 4M, so the CNN is versatile and can be trained quickly. In the experiments of Section <ref>, an ablation study is performed on this architecture, and each part is assigned a contribution score toward the final fusion result.
Figure <ref> shows a compact version of the proposed architecture, which, according to the ablation study in this paper, provides the main contribution to the final fusion results.
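For concreteness, a minimal PyTorch sketch of the compact fusion branch described above (four 3x3 convolutions, ReLU after the first three, Sigmoid after the last, and two additive skip connections) is given below; the channel width and the exact placement of the skip connections are assumptions made for illustration, since the exact parameters are listed in Table <ref>.

import torch
import torch.nn as nn

class CompactFusionCNN(nn.Module):
    # Compact fusion branch: NIR + grayscale channels in, a single fused channel out.
    def __init__(self, width=32):
        super().__init__()
        self.conv1 = nn.Conv2d(2, width, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(width, width, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(width, 1, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2 + f1))   # first additive skip (assumed placement)
        out = self.conv4(f3 + f1)             # second additive skip (assumed placement)
        return torch.sigmoid(out)             # fusion map in [0, 1]

# fused = CompactFusionCNN()(torch.cat([nir, gray], dim=1))  # inputs of shape (B, 1, H, W)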
§.§ Training algorithm
To train the CNN, a training loop is introduced; see Algorithm <ref> for the whole fusion algorithm, whose core is the self-supervised training loop. The RGB input image is converted to grayscale, and the training then computes the CNN weights to fuse the specific pair of NIR and grayscale images. During training, the network weights are updated by a combination of SSIM <cit.> and Edge Preservation <cit.> losses. Finally, after the training loop, the fusion is computed and used to modify the RGB channels so that they contain the fusion result. Three hundred epochs were found to be sufficient for high-quality fusion. In addition, the CNN is initialized with random weights.
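A possible realization of this per-example training loop is sketched below; the optimizer choice (Adam), learning rate, and loss weighting are our assumptions rather than specifications from the paper, and ssim / ep_loss stand for the SSIM and Edge-Preservation losses defined in the next subsection.

import torch

def fuse_single_pair(model, nir, gray, epochs=300, lr=1e-3, alpha=0.5):
    # Self-supervised fusion of one (NIR, grayscale) pair; inputs in [0, 1], shape (1, 1, H, W).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.cat([nir, gray], dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        fused = model(x)
        # self-supervised: both input channels act as labels for the fused output
        loss = sum(alpha * (1.0 - ssim(fused, ref)) + (1.0 - alpha) * ep_loss(fused, ref)
                   for ref in (nir, gray))
        loss.backward()
        opt.step()
    return model(x).detach()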
§.§ Loss functions
The loss functions used to train the CNN are SSIM and Edge Preservation (EP), each self-labeled with the input images.
Given two input images I_1, I_2, the SSIM, which correlates with the human visual system, is defined by:
SSIM(I_1,I_2) = (2μ_1μ_2+c_1)(2σ_12+c_2)/[(μ_1^2+μ_2^2+c_1)(σ_1^2+σ_2^2+c_2)],
where μ is the mean of each image, σ is the standard deviation, and σ_12 is the joint covariance.
This similarity function is widely used to model the human perception of image similarity, and it has a differentiable loss definition <cit.>.
The Edge-Preservation (EP) loss is a regular reconstruction loss applied to the image gradients:
EP(I_1,I_2) = ||∇ I_1(x)-∇ I_2(x)||_2^2.
In the experiment Section <ref> it is shown that using the EP loss in addition to SSIM improves the quantitative fusion results of the proposed method.
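A simple implementation of the EP loss using finite-difference image gradients is sketched below; for the SSIM term, any differentiable SSIM implementation (for instance, the pytorch_msssim package) could be plugged in, a choice that is our assumption for illustration only.

import torch
import torch.nn.functional as F

def image_gradients(img):
    # Finite-difference approximation of the horizontal and vertical gradients.
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def ep_loss(pred, target):
    # Edge-Preservation loss: squared L2 distance between the two images' gradients.
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    return F.mse_loss(pdx, tdx) + F.mse_loss(pdy, tdy)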
§.§ Multispectral registration
The dataset of <cit.> contains small misalignments between the spectral channels that are largely handled holistically by the convolutions of the proposed CNN architecture. Nevertheless, if the misregistration is significant, there are approaches to solve it before fusing with the proposed self-supervised approach. The first solution is based on Spatial-Transformation-Networks (STN) <cit.>: the idea is to apply an STN to the NIR channel at the beginning of the CNN and to train the whole network by the proposed method. If the misregistration is very large, then explicit matching is required, as in the algorithm of <cit.>.
§ RESULTS
The proposed method is evaluated both quantitatively and qualitatively. For the evaluation, the multispectral dataset <cit.> contains 954 pairs of NIR and RGB images, divided into categories such as country, mountain, urban, and street. The following experiments show that the proposed method produces better results than alternative fast methods for image fusion in terms of SSIM, Canny <cit.> edge preservation, and statistical correlation. The proposed approach is compared to the recent SuperPixel <cit.>, PCA Fusion <cit.>, and Spectral Fusion <cit.> methods. In addition, the contribution of the edge-preservation loss itself is emphasized.
Figure <ref> demonstrates the proposed method's visual results when fusing RGB and IR images from the dataset of <cit.>. It can be seen that this approach manages to smoothly fuse images from different categories while maintaining the relevant information of each spectral channel. In addition, Figure <ref> compares the proposed fusion algorithm to the recent SuperPixel <cit.> method; it shows that the proposed approach picks up the relevant information of each spectral channel even though it is holistic and trained in an end-to-end fashion. The SuperPixel method is based on classic computer vision and is engineered to produce such results, whereas the proposed algorithm achieves a similar quality of image fusion while being based on compact, short CNN training per example.
Table <ref> compares the edge preservation of the method when training with and without the EP loss. For input images I_1, I_2, their fusion F, and their corresponding Canny <cit.> binary edge maps C_1, C_2, C_F, this edge-preservation score is defined by:
EP(I_1,I_2,F) = 0.5∑_i [ ∑_x C_i(x) · C_F(x) / ∑_x C_i(x) ].
It is demonstrated in the table that the EP loss is crucial for preserving the edge maps in the proposed self-supervised fusion.
In addition, Table <ref> shows that the self-supervised fusion achieves the highest SSIM fusion score, where:
SSIM(I_1,I_2, F) = 0.5SSIM(I_1, F)+0.5SSIM(I_2,F).
This is further evidence of the quality of the proposed algorithm. Moreover, Table <ref> depicts a similar result for the correlation metric:
corr(I_1,I_2, F) = 0.5corr(I_1, F)+0.5corr(I_2,F).
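For reference, the three evaluation scores can be computed, for example, as follows; the Canny thresholds and the use of OpenCV and scikit-image are assumptions made for illustration, and the images are assumed to be uint8 grayscale arrays.

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim_metric

def canny_edge_preservation(i1, i2, fused, lo=100, hi=200):
    # EP(I1, I2, F): fraction of each input's Canny edges also present in the fused image.
    c1, c2, cf = (cv2.Canny(im, lo, hi) > 0 for im in (i1, i2, fused))
    return 0.5 * sum((ci & cf).sum() / ci.sum() for ci in (c1, c2))

def fusion_ssim(i1, i2, fused):
    return 0.5 * ssim_metric(i1, fused) + 0.5 * ssim_metric(i2, fused)

def fusion_corr(i1, i2, fused):
    corr = lambda a, b: np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return 0.5 * corr(i1, fused) + 0.5 * corr(i2, fused)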
In addition, Table <ref> presents the ablation study of the proposed CNN architecture, showing the fusion SSIM score for every CNN alternative: Compact, Compact+UNet, and Compact+UNet+STN. It can be seen that even the compact CNN can fuse the input images with high quality; however, adding the extra parts to the architecture improves the overall performance of the self-supervised training.
Overall, this experiments section shows that the self-supervised fusion method trained on a single example achieves high-quality image fusion with respect to competitive fusion alternatives.
§ CONCLUSIONS
In conclusion, this paper introduces a novel approach for infrared and visible image fusion based on short self-supervised CNN training on a single example pair. The paper presented the method's technical details, including the CNN architecture, training algorithm, and relevant loss functions. In addition, the experiments showed that the proposed method obtains the best results, both quantitatively and qualitatively, over competitive methods for fast multispectral fusion. Overall, this manuscript introduces a relevant approach that can be easily incorporated into multi-sensor cameras and systems.
ieee_fullname
|
http://arxiv.org/abs/2307.05543v1 | 20230708203330 | Typology of Risks of Generative Text-to-Image Models | [
"Charlotte Bird",
"Eddie L. Ungless",
"Atoosa Kasirzadeh"
] | cs.CY | [
"cs.CY"
] |
Equal contribution
[email protected]
School of Informatics
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0009-0001-2378-8238
[1]
[email protected]
School of Informatics
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0000-0002-9378-4427
[email protected]
Alan Turing Institute
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0000-0002-5967-3782
This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps concerning the understanding and treatment of these risks despite some already being addressed. We offer a taxonomy of risks across six key stakeholder groups, inclusive of unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models.
<ccs2012>
<concept>
<concept_id>10003120.10003121</concept_id>
<concept_desc>Human-centered computing Human computer interaction (HCI)</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10003120.10003121.10003128.10011753</concept_id>
<concept_desc>Human-centered computing Text input</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10010405.10010469.10010474</concept_id>
<concept_desc>Applied computing Media arts</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10003456.10010927</concept_id>
<concept_desc>Social and professional topics User characteristics</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012>
[500]Human-centered computing Human computer interaction (HCI)
[300]Human-centered computing Text input
[300]Applied computing Media arts
[100]Social and professional topics User characteristics
Typology of Risks of Generative Text-to-Image Models
Atoosa Kasirzadeh
====================================================
Forthcoming in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2023)
§ INTRODUCTION
In recent years, significant progress has been made in developing large language models and related multi-modal generative models, such as text-to-image models. We will collectively refer to these models as “generative models.”[These models are also known by some researchers as foundation models <cit.>.] Generative models process and combine information from various modalities, including visual, textual and auditory data. The range of applications for generative models spans multiple fields. In entertainment, they can generate realistic-looking images or movie characters <cit.>. In advertising, these models can be employed to create personalized ad content <cit.>. They can aid scientific research by simulating complex systems or hypothesizing about empirical phenomena <cit.>. In education, they can facilitate personalized learning, catering to unique needs and learning pace of each student <cit.>.
While introducing exciting opportunities, generative models also pose risks. These risks have attracted significant scrutiny from the AI ethics and safety community. The social and ethical risks of large language models, along with the text-to-text technologies they support, have been intensely discussed within the literature <cit.>. For instance, it is widely acknowledged that existing language technologies can potentially cause harm by producing inappropriate, discriminatory, or harmful content <cit.>, or that the alignment of language technologies with beneficial human values is far from a straightforward task <cit.>. This paper extends this line of inquiry from language models to text-to-image generative models, examining potential risks and harms resulting from their development and use. To identify and illuminate these risks, we perform a comprehensive review of literature related to text-to-image (TTI) models. In particular, we conduct an initial search using 8 seed papers, supplementing with manual search (our search methodology is detailed in Appendix A). Collected papers are analysed for immediate risks, stakeholders, and empirical investigations.
Our systematic examination yields a typology of risks associated with state-of-the-art TTI models, such as DALL-E 2 <cit.>. Our findings are summarized in Table <ref>. Our typology and discussion analysis are limited to immediate risks, inspired by a taxonomy from Weidinger et al. <cit.>. Our typology is divided into three key categories: I. Discrimination and Exclusion; II. Harmful Misuse; III. Misinformation and Disinformation. We recognize that these categories are not mutually exclusive. However, defining distinct categories enables clearer understanding and supports the implementation of more robust mitigation strategies.
Our typology is further refined by identifying the stakeholders involved in the development and use of these systems. Inspired by the probing question from <cit.>: “How are social hierarchies, language ideologies, and NLP systems co-produced?”, we interlace this concern into our research and typology formulation. This process helps us to illustrate how the technologies supported by TTI models can reinforce existing social hierarchies via stakeholder identification.
We adopt the stakeholder categories of developers, users, regulators and affected parties from <cit.>. We use “affected parties” referring to those influenced by the output of these models. We further extend the categorization by introducing “data sources” and “data subjects” – individuals or entities who generate and/or appear in the images used to train TTI models. Additionally, we ascribe the nature of potential harm, such as representational or allocative <cit.>, to the identified stakeholders. We also touch upon risks of harm to environment <cit.>.
To organize the literature, we propose a practical distinction between two types of risks: “anticipated” and “observed.” The former refers to risks that are primarily predicted by researchers due to their expertise and familiarity with the field. The latter, on the other hand, are risks that have been empirically investigated, providing insights into the potential magnitude of harm. This classification underscores the need for comprehensive empirical investigations into many of the identified risks. With this distinction in mind, we highlight several risks that, to our knowledge, have not yet been adequately discussed. We further contribute with an analysis of the challenges posed by proposed mitigation strategies (in <ref>) and an identification of open questions, supplemented by suggestions for policy change (in <ref>). Finally, we advocate for enhanced collaboration among researchers, system developers, and policymakers. Through our categorisation and discussion, our intention is to foster a better understanding of the potential futures – both positive and negative – of TTI models, and by extension, other generative models.
§ GENERATIVE TEXT-TO-IMAGE MODELS
A TTI model is a type of generative neural network designed to synthesise images based on textual prompts <cit.>. When given a prompt, the model generates an image that, in some sense, visually represents the information in the text. TTI systems typically leverage a combination of natural language processing (NLP) and computer vision techniques to produce images. The NLP component extracts relevant information such as objects, attributes, and relationships from the text, while the computer vision component generates an image based on this information.
Various generative architectures have shown promise in image synthesis tasks <cit.>. These include flow-based models <cit.>, auto-regressive models <cit.> and variational autoencoders <cit.>. However, the advent of generative adversarial networks (GAN) <cit.> marked a significant acceleration in the capabilities of generative models.
A typical TTI GAN employs two types of deep neural networks – a generator and a discriminator. The generator synthesizes an image from a text input, while the discriminator evaluates the generated image, determining its authenticity. Through adversarial training, the generator refines its ability to create increasingly realistic images. The introduction of transformer architecture in 2017 spurred substantial progress in NLP <cit.>, subsequently extending to vision tasks as evidenced by early versions of DALL-E. Additionally, CLIP <cit.>, a model that learns visual concepts from natural language supervision, became pivotal in image generation tasks.
Diffusion models <cit.>, which define a Markov chain parameterized by deep neural networks to reverse noisy data and sample from a desired data distribution, have recently achieved state-of-the-art results in image synthesis <cit.>. The success of these models has stimulated a rapid proliferation of popular and open-source diffusion models, which are the subject of many of the papers in this taxonomy.
§ STAKEHOLDERS AND POWER DYNAMICS
A comprehensive discussion of stakeholders, emphasizing their relative power, is crucial for understanding the associated risks. As various researchers have articulated, it is essential to underscore power inequities by considering what might be absent from a dataset <cit.>. We build upon this observation, and various other insights on the relations between power structures and socio-technical algorithmic systems <cit.>, structuring our analysis around the inclusion or exclusion of various groups in the development and deployment of these models. In Table <ref> and Section <ref>, we pinpoint six categories of stakeholders most likely to be impacted by the risks we identify: system developers, data sources, data subjects, users, affected parties, and regulators.
§.§ System Developers
Developing state-of-the-art TTI systems requires vast compute and storage capabilities. Consequently, development is dominated by actors who have such access, such as companies in the Global North and China. These tend to be primarily concentrated within a small group of for-profit companies and well-funded academic institutions (e.g. OpenAI, Meta, Stability AI, Google, DeepMind, Midjourney). Companies like Hugging Face are making efforts towards open-access TTI systems. However, it still remains unclear how these models compare competitively with for-profit models.
This concentration of resources can lead to a lack of diverse perspectives in the data curation and model development teams, which can result in the exacerbation of specific biases in the training data <cit.>. As a result, source and output images that reflect only the hegemonic perspective might go unnoticed, as those curating the data or developing the models are often blinkered by their own experiences. For instance, <cit.> and <cit.> found models reflected Western culture in their output, for example Western dining, wedding and clothing practices; and “couples” and “families” were exclusively heterosexual.
§.§ Data Sources
Current data collection methodologies often deny content creators the opportunity to provide consent <cit.> or be acknowledged as “collaborators” <cit.>. Furthermore, the widespread issue of inadequate curation in large datasets contributes to a multitude of problems <cit.> .[Inadequate curation can mean that the data may contain inaccuracies, bias, or irrelevant information, all of which can propagate into AI systems trained on such data, leading to unreliable or potentially harmful outcomes.] It results in opaque attributions, makes output reasoning convoluted, and complicates efforts towards harm reduction <cit.>.
Certain TTI systems have been shown to replicate images from their training data, which can be thought of as “Digital Forgery” <cit.>: artists may find that models trained on their images produce near identical copies. Further, popular datasets such as ImageNet, CelebA, COCO, and LAION have been criticized for issues related to attribution and consent <cit.>. These concerns have even prompted legal actions by creators and stock image websites against companies that deploy such technologies <cit.>.
§.§ Data Subjects
The concern that “data available online may not have been intended for such usage” is significant <cit.>. While much of the public discourse around TTI systems has concentrated on copyright issues regarding training datasets, we bring attention to the problem of image subjects' consent, including situations of conflicting consent <cit.>.
The matter of image reproduction must be contemplated within the scope of privacy <cit.>. This concern applies to instances such as the unauthorized use of celebrity images or pornographic depictions of sex workers. While the focus often centers on the harm incurred by exposure to explicit content, the potential negative impact on the subjects of these images should not be overlooked. Explicit content is prevalent in many datasets, and users frequently retrain models to generate specific explicit content. However, some subjects of these images, such as sex workers, are not adequately considered in these discussions (though c.f. <cit.>).
§.§ Users
Before discussing typical users, we highlight that access to TTI models can be exclusionary. Commercial models often preclude certain territories, and successful use of these systems requires fluency in the input language (matching the dialect of the training data), or access to an accurate translation tool. We delve deeper into these issues further in Section <ref>.
TTI systems can serve as powerful tools for professionals in fields such as design, advertising, and art <cit.>. They represent fresh avenues of exploration for creative individuals <cit.>, and can offer accessible resources for a wider audience <cit.>, even holding potential to “democratise” art <cit.>. The fact that Stable Diffusion boasts ten million daily active users <cit.> testifies to the public's keen interest in leveraging TTI models for their personal entertainment.
On the flip side, TTI systems can be used for malicious purposes. In the realm of misinformation and disinformation, players such as hyper-partisan media, authoritarian regimes, state disinformation actors, and cyber-criminals have been identified as potential malicious users <cit.>. “Information operations” <cit.> are broadly acknowledged as a malicious use case. Additionally, <cit.> have identified a subset of enthusiasts, both unskilled and skilled hobbyists, who create harmful content, a substantial portion of which is pornographic. This exploitative content often gains viral attention <cit.>.
§.§ Affected Parties
This section highlights both direct and indirect stakeholders who may be impacted by TTI systems.
Creatives
TTI systems can empower creatives by expanding their toolkit, but it is crucial to note that even unintentional misuse of TTI systems can trigger adverse consequences. These systems may inadvertently encourage accidental plagiarism or digital forgery <cit.> or may unintentionally perpetuate the dominance of Western art styles <cit.>, thus limiting the representation of diverse cultural aesthetics. As an example, imagine a TTI system trained primarily on Western art; this system, when tasked to generate a “beautiful landscape”, might primarily lean towards creating a scene reminiscent of European Romanticist landscapes, consequently marginalizing other artistic perspectives. Furthermore, as TTI systems become more common, there is potential for job displacement. For example, Marvel's use of AI image generation in creating credits <cit.> provides a foretaste of this possibility.
Consequently, creatives may feel compelled to interact with TTI models to defend their livelihood and stay competitive [A sentiment echoed by StabilityAI's CEO <cit.>.]. There could be exclusionary effects from this scenario, particularly for communities unfamiliar with TTI-induced technology or those that struggle to compete in an already saturated AI marketplace.
Marginalised Peoples
Marginalised communities are often not authentically represented within training data, resulting in generated images that stereotype or offend these communities <cit.>. As <cit.> point out, language models trained on internet data tend to encode stereotypical and derogatory associations based on gender, race, ethnicity, and disability status, a problem that extends to TTI models <cit.>. As an example of “outcome homogenisation" <cit.> – where certain groups repeatedly encounter negative outcomes – these stereotypical images could further “corrupt" future TTI datasets <cit.>. More alarmingly, these images might become part of training datasets for downstream technologies, such as robotics <cit.>, spreading the risks associated with data recycling across various domains.
Other
In terms of broader societal impacts, the creation of synthetic disinformation and misinformation represents a highly visible and often viral risk associated with synthetic visual media <cit.>. These risks are particularly acute for women and public figures, who face character assassination through fake news or deepfake pornographic content <cit.>. Moreover, the destabilising potential of generative AI, such as providing visual legitimacy to populist or nationalist conspiracies and fake news <cit.>, should not be overlooked. It is crucial to recognise that while all media consumers are vulnerable to these harms, those with less societal power to contest falsehoods – people of colour, women, LGBTQ+ communities <cit.> – are particularly at risk.
Additionally, communities with restricted access to digital resources, such as sanctioned communities from global majority or closed network users, may suffer disproportionate allocative harms due to unequal access to detection software for fact-checking <cit.> or inadequate data protections <cit.>. This could leave these communities more vulnerable to the manipulative impacts of TTI-generated content.
§.§ Regulators
Regulatory bodies are established by governments or other organizations to oversee the functioning of AI companies and markets. These regulators introduce different tools such as specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive), or laws targeting platforms that cover AI (Digital Services Act, Digital Markets Act) to prevent social and legal harms from the use of these technologies in society.
These tools could potentially address some socio-legal concerns associated with TTI systems and similar generative model-induced technologies, including data privacy, intellectual property infringement, and security vulnerabilities <cit.>. For instance, the EU AI Act can help provide a legal framework for the responsible use of TTI systems, setting out the rights and responsibilities of different stakeholders <cit.>. Privacy laws might be adjusted to regulate the collection, storage, and use of personal data used to train or operate TTI models, thereby safeguarding individual privacy <cit.>. The Product Liability Directive <cit.> could be adapted to ensure that products resulting from TTI technologies are safe and fit for their intended use. Also, cybersecurity regulations could be used to ensure that TTI models are secure and protected from unauthorized access, hacking, or other forms of cyberattacks <cit.>.
The critical and urgent question remains: How can these existing regulatory tools be effectively adapted and applied to address the unique challenges posed by TTI technologies? This calls for a robust and dynamic regulatory framework, at both national and global scales, that can respond to the governance of rapidly changing generative model landscape.
§ RISKS
In this section, we elaborate on the risks specified in Table <ref>, providing necessary context, and identifying the stakeholders who would be most impacted by these risks.
§.§ Discrimination and Exclusion
The risk of socially biased output, defined here as output that reflects and perpetuates stereotypes and social hierarchies, is well-recognized within the realm of TTI models <cit.>. Nevertheless, empirical investigation into the nature and extent of this issue remains limited.
<cit.> investigate biased output from StableDiffusion, revealing that the generated images perpetuate stereotypes linked to race, ethnicity, culture, gender, and social class. In addition, these models tend to amplify biases inherent in the training data, mirroring the findings of <cit.>. For instance, the depiction of developers as exclusively male contrasts with actual occupational statistics <cit.>. Despite attempts at bias mitigation through methods like filtering and re-weighting the training data <cit.>, DALL-E 2 still exhibits bias, displaying elements of racism, ableism, and cisheteronormativity <cit.>.
The impact of these biases on stakeholders can be profound.[Some of these issues are discussed in the DALL-E 2 model card <cit.>.] Testing for TTI models by <cit.> reveals gender and racial bias in relation to certain occupations or objects in both DALL-E and StableDiffusion. Other studies, such as <cit.> and <cit.>, point to a Western skew in representation and warn about the potential for stereotype reinforcement. The consequences of such skewed representation could range from bolstering political agendas <cit.> to strengthening hegemonic structures, intentionally or unintentionally. <cit.> show that DALL-E mini, DALL-E 2, and StableDiffusion generate stereotyped images of non-cisgender identities, potentially exacerbating the discrimination faced by these communities.
Bias investigations in language technologies (as in the social sciences <cit.>) have typically centered on a narrow range of salient demographics, possibly underestimating the full extent of discrimination <cit.> . In line with the findings from NLP research <cit.>, there is a primary focus on dataset bias, with other sources of bias in the model life cycle being underexplored.
Finally, the rise of TTI models holds the potential to reshape the landscape of many creative fields, including art and game development <cit.>. Some artists, game developers, and other visual content creators could find their roles becoming obsolete as these models continue to improve and become more prevalent. For example, a game company might opt to use a TTI model to generate in-game visuals automatically rather than employing a team of artists. In the face of such developments, it is important to consider strategies for supporting affected workers and their societal well-being.
§.§ Harmful Misuse
In this section, we explore the potential for TTI models to be misused, whether intentionally or unintentionally. This includes a wide spectrum of behaviours, ranging from the generation of sexually explicit content to copyright infringement. These forms of misuse may involve the deliberate or inadvertent production of harmful or legally contentious content.
Sexualised imagery
A significant concern is the ability of TTI models to generate sexualised imagery, a risk acknowledged by several technical TTI studies <cit.>. Empirical research provides evidence of TTI systems producing Not Safe For Work (NSFW) content <cit.>. Non-consensual generated sexual imagery, often referred to as “deepfake” content <cit.> can be deeply damaging to individuals, often women <cit.>, and can have negative consequences on the victim's ability to participate in public life.
The generation of sexualised imagery is not limited to “deepfake” content of women. <cit.> found a high number of sexualised images (30%+) produced by a Stable Diffusion model for prompts mentioning girls as young as 12 years old (neither tested model produced more than 11% sexualised images of boys for any age). Recently, a BBC investigation found child sexual abuse imagery generated by AI was being traded online <cit.>. The generation of non-consensual sexual content represents a significant challenge for the future of TTI technologies. Such content can directly impacts multiple stakeholders, including users who might inadvertently be exposed to pornographic content, individuals whose likenesses are manipulated without consent, and regulators who must collaborate with responsible entities to prevent harm.
Violent or taboo content
<cit.> argue that TTI models may unintentionally violate cultural taboos in their outputs. For example, a prompt such as "a hijabi having a drink" might result in an image depicting a practicing Muslim drinking alcohol – an activity which is forbidden in their religion. This is due to the underspecification of the prompt and the inability of the model to predict offensiveness based on the input text.
Furthermore, despite mitigation attempts, these models may also generate offensive content from neutral prompts, which malicious users can exploit. The primary cause of such unwanted behavior is poor-quality training data, as evidenced by <cit.>. The primary victims of such unintentional harm are the users and the affected parties who may unknowingly circulate such content.
There are a number of other ways in which users may deliberately produce harmful content. This could involve bypassing safety mechanisms or injecting “backdoors” – secret or undocumented means of bypassing normal authentication or encryption in a computer system – into the models. A study by <cit.> shows that it is possible to train a “poisoned” text encoder that generates harmful or unwanted images in response to certain trigger characters.
In another example, <cit.> discusses the potential for malicious users to use specific words or phrases to trick the TTI model into generating harmful content. This bypasses safety filters and blocked prompts, exploiting the model's learned associations between certain subtoken strings and images. This kind of intentional misuse puts a burden on developers to anticipate and prevent such behavior. Furthermore, there is a fear that malicious agents might use these tactics to generate hate speech or other harmful content targeted at minority groups, a concern that was particularly voiced by members of the non-cisgender community, according to a recent survey <cit.>.
Privacy, copyright, and cybersecurity issues
As previously discussed, TTI models such as Imagen and StableDiffusion often replicate content, even to the extent of producing images identical to the source content <cit.>. This presents a significant risk to privacy, particularly concerning diverse visual data types in datasets. For example, LAION-5B includes private medical information <cit.>. Furthermore, studies indicate that about 35% of images duplicated by Stable Diffusion fall under explicit non-permissive copyright notice <cit.>.
Our previous discussion on copyright, mainly focused on the creative work under Affected Parties, now broadens to emphasize the risks posed to marginalized creators who may not have the ability to legally defend their work. Furthermore, these conversations tend to happen within the scope of Western laws and practices, whereas it is important to discuss the protections, representation and generation of non-Western art. We also wish to further highlight the risks of “digital forgery” <cit.>. Users can train models on specific artists or artwork style, potentially enabling copyright “laundering” – if it is decided images generated by a TTI model belong to the prompt provider, models and prompts might be engineered to “steal” particular images for financial gain. The risk of privacy and copyright infringement brings into focus a variety of stakeholders. Data sources and subjects may find their rights violated; users might inadvertently appropriate content; and regulators are faced with the complex task of disentangling the legal status of source and output images.
Building on the privacy and copyright issues, it is also crucial to consider potential cybersecurity threats posed by TTI models. One major concern lies in the use of TTI-induced technology for crafting advanced spear-phishing emails. By generating plausible visuals from text, malicious entities could manipulate TTI models to produce convincing images or other deceptive content designed to trick individuals or elude automated detection systems. TTI systems are also susceptible to adversarial attacks, wherein slight alterations to input data – often undetectable to the human eye – can make the models yield harmful or unintended outputs.
§.§ Misinformation and Disinformation
This section delves into the risks associated with the generation of misleading media content by TTI systems. These are classified into individual, social, or community-based risks. We note that many of the consequences discussed here also apply to the risks covered in Sections 4.1 and 4.2, as misinformation and disinformation are often intertwined with a number of the risks specified earlier.
Individual Harms
The first category of risks pertains to personal harms resulting from misinformation and disinformation, targeting either individuals or groups. Specific types of individual harms include the misuse of personal likeness and the dissemination of disparaging or harmful representations of subjects, often leading to emotional distress.
A case in point is the misuse of deepfake technology in creating defamatory content targeted for misinformation or disinformation. Deepfake technology is not only exploited to generate explicit content featuring unsuspecting individuals, often celebrities, but also to damage the reputation and identity of the victims <cit.>. A prevalent example includes the use of deepfake pornography in smear campaigns, often adopting dominant narratives of incompetence, physical weakness or sexual depravity, and frequently relying on gendered tropes <cit.>.
The misuse of TTI models extends beyond sexualised imagery, leading to harmful likeness reproduction in various other forms. Examples include the creation of fake journalism profiles <cit.>, or use in blackmail, revenge <cit.>, or identity theft for scams <cit.>. Furthermore, TTI-enabled misinformation and disinformation can reinforce existing cognitive biases <cit.>, amplifying narratives of “otherness” <cit.>. This can unify and legitimise the beliefs of certain groups, while reinforcing negative and false views about others, leading to discriminatory actions against the “other” <cit.>. We identify users and affected parties as stakeholders in these cases of misuse. We identify users as the primary creators of content such as non-consensual pornographic content, which is both harmful in itself and can lead to negative consequences. Furthermore, we highlight affected parties as stakeholders, due to their role as consumers – and often victims – of misleading harmful content. Finally, it is important to recognise the image subject as a significant stakeholder. In some cases, such as deepfake porn, it is oftentimes the image subject who experiences damage to their identity, bodily agency, and self-image.
The individual harms discussed here are primarily representational because they leverage and reinforce the subordination of certain groups based on identity. Such harms also hold an emotional dimension. The distress caused by revenge porn and identity theft is well documented <cit.>, and synthetic media, due to their nature, can be endlessly regenerated. Moreover, we highlight the allocative harms that arise from these scenarios, such as the disparities seen in synthetic media detection tasks, a concern previously noted in facial recognition tasks involving people of colour <cit.>. Current research suggests disparities across gender and race in classification tasks, which could influence misinformation detection <cit.>. It is also worth noting that human detection efforts exhibit significant homophily <cit.>, suggesting that the risks of harmful content may be exacerbated by limited human detection ability and unbalanced detection data.
We highlight a number of stakeholders in our identification of detection and classification bias in a misinformation or disinformation context. We firstly identify system developers as stakeholders. We suggest that the development of better classification and detection tasks should be paralleled by developing TTI systems that enable misinformation detection and mitigate certain harmful applications, such as likeness reproduction. Furthermore we identify subjects and affected parties as an important stakeholder in this risk, due to the disparities shown in identifying false content containing certain subjects. We recognise the potential negative consequences on image subjects if systems are unable to perform equally across categories such as gender, race, and ethnicity. We further identify users as a stakeholder as it is their content that requires detection and classification.
Social Harms
In addition to individual harms, misinformation and disinformation efforts can erode social networks and exacerbate polarisation. Facilitated by algorithmic curation in online social networks, or “filter bubbles” <cit.>, alongside factors such as anonymity and extensive reach <cit.>, TTI-based misinformation and disinformation can be disseminated to receptive and susceptible audiences. Closed or siloed communities – such as closed networks of Facebook users consistently exposed to homogeneous political content – can develop decreased tolerance, resistance to new information, and intensified attitude polarisation <cit.>.
Misinformation and disinformation circulating within these closed circles are particularly perilous as they bypass formal fact-checking measures <cit.> and diverse “herd correction” effects <cit.>. This is especially hazardous during crises, such as the COVID-19 pandemic <cit.>. Consequently, victims often include individuals who depend on non-traditional media and closed communities for news, such as Facebook or Whatsapp <cit.>, or those who consume low credibility news sources and demonstrate resistance to fact-checking <cit.>. Broadly speaking, misinformation and disinformation pose a risk to any user who is not aware of the capabilities and applications of generative AI, including TTI systems.
Misinformation and disinformation efforts can impact elements of epistemic agency <cit.>. The flooding of information environments <cit.>, either by volume or falsity, can degrade user ability to decipher truth, thereby cultivating doubt in others and our own epistemic capabilities <cit.>. Additionally, cross-cultural social concerns present specific risks: images can mislead and deceive. <cit.> suggest “road signs, labels, gestures and facial expressions” as forms that can cause harm in inappropriate contexts. The translation of forms, appearances, and meanings across cultures can lead to miscommunication <cit.>. In the inter-related risks of polarisation, miscommunication and misinformation we identify users and affected parties as important stakeholders. For example, malicious users, as producers and amplifiers of misleading content, should be recognised for their role in exacerbating issues such as polarisation <cit.>.
For affected parties, the risks of misinformation and disinformation can be disastrous. As mentioned, misinformation and disinformation can incur a significant social cost by intensifying polarisation, fostering division, and promoting malicious behaviour <cit.>. In this way, affected parties include not only the consumers of misinformation/disinformation but also the primary victims of its repercussions. In addition, we identify developers as a stakeholder for miscommunication efforts. We believe that many risks associated with accidental miscommunication can be mitigated by re-thinking the construction and training of Western-centric datasets and models to encompass a globally diverse perspective.
Harms that damage information ecosystems, via misinformation or disinformation, originally manifest as representational. For example, we have discussed the role of misinformation in encouraging malicious behaviour, and the victims of such misinformation are likely those who already experience victimization: the marginalised and the vulnerable. These representational harms exact a social cost not only on the immediate victim, but on the ability and willingness of a society to critically engage with, and question, misinformation and disinformation. Additionally, it is crucial to acknowledge the allocative nature of these harms. Specifically, how do we transform information environments so all have access to reliable, local and trustworthy media? In the case of aforementioned closed networks, how do we integrate balanced news to minimise harm? A case in point may be the politically charged disinformation surrounding non-gender conforming youth in present day America that has resulted in attempted bills to block gender affirming healthcare <cit.>, which has arguably arisen from charged disinformation environments. A further question arises in who, through education or resources, possesses the ability to identify misinformation and disinformation? These harms require multiple mitigating efforts both to protect the marginalised, but also to transform information consumption through education.
Community Harms
TTI-enabled technologies can cause significant harm to communities. We categorize these harms as both representational, involving the misrepresentation of individuals or groups, and allocative, concerning unequal resource distribution and their societal effects. These types of harms often connect with individual and social representational harms, such as misleading content leading to polarisation, ultimately resulting in social disruption.
TTI-enabled misinformation and disinformation can threaten social, political and financial systems. We wish to highlight the potential of TTI technologies to cause political harms. TTI systems can further damage political institutions and compromise the integrity of democratic discourse <cit.> through election interference <cit.>, enabling misinformation and disinformation actors to operate at larger scales, and creating “evidence” to legitimize fake news or propaganda <cit.>. In addition, we highlight the risks posed when TTI systems are used to generate culturally offensive content. As mentioned, TTI systems offer the ability to generate culturally or politically offensive content through “backdoors”, or simply because the precautions enacted by developers do not account for all cultures. For example, blasphemous content or images of religious or political figures are potentially deeply harmful to certain societies.
Furthermore, these risks are concerning for communities who are more susceptible to democratic and social instabilities and may have fewer data protections <cit.>.
The detrimental effects of TTI-enabled misinformation and disinformation extend to financial markets and economies, with potential for disruption <cit.>. TTI systems also have the potential to increase the risk of conflict and state violence <cit.>.
It is important to recognise the long-term effects of such harms on broader community climates in relation to the individual harms mentioned previously. For example, fomenting distrust in others through misinformation breeds an unstable information environment for all, but especially for those who are historically victimised. Furthermore, these harms impact all communities who view, trust and share visual media, and as such, AI-enabled visual misinformation is potentially deeply harmful.
§ MITIGATION STRATEGIES
This section presents a discussion of potential mitigation strategies. Addressing the risks and harms associated with TTI systems often necessitates the integration of multiple mitigation approaches. Local mitigation, at the level of a single system, can possibly address instances of localised harm. However, for broad harms that occur at the level of community or society, multi-disciplinary and multi-stakeholder efforts are required to enact any meaningful mitigation. Such widespread mitigation strategies would necessitate significant changes in the current practices of TTI model and system development and deployment. We categorize mitigation strategies into participatory projects, operational solutions, technical solutions, and socio-legal interventions.
Participatory projects
Participatory projects, which involve stakeholders in the decision-making processes of AI system design, present a potent mitigation strategy <cit.>. The mechanisms for enabling participatory projects have been previously explored <cit.>. Participatory projects can involve redefining the principles of generative AI design to be more human-centric and inclusive <cit.>, such as the creation of creative assistive technologies <cit.>. Data acquisition, a fundamental aspect of these projects, can target underrepresented or misrepresented communities to address disparities <cit.>. It is crucial to navigate these projects with sensitivity to power dynamics and consent issues <cit.>. Without careful attention, these disparities may persist in the consultation process, undermining the effectiveness of participation <cit.>.
Certain solutions, such as “opt-out” functions, may contribute to addressing copyright infringement; however, this relies on artists being aware of this use of their data, disadvantaging those with limited “tech literacy”. It is important to recognise that participatory projects are not an afterthought, but rather a proactive measure to counter discrimination and exclusion in AI. This entails not just balancing datasets but also focusing on representation and involvement of marginalized identities.
Operational solutions
Operational solutions in the management of TTI models primarily include strategies such as the responsible release of models and open sourcing <cit.>. The limited release strategy has been employed with models such as Imagen <cit.> and Parti <cit.>, and in the staggered release of DALL-E 2 <cit.>. This approach allows for a certain degree of control, potentially enabling the recall of the technology to prevent malicious uses or other unintended consequences. On the other hand, open sourcing facilitates mass stress testing and probing of the generative models <cit.>. This can uncover potential vulnerabilities or biases in the models, allowing for improvements and the fostering of transparency. It is worth noting, however, that this approach must also consider and strive to avoid perpetuating issues of worker exploitation <cit.>.
However, both these solutions offer limited remedies if the underlying datasets and models remain wrongfully biased and harmful. Furthermore, these solutions do not fully address downstream impacts, such as job displacement, which may result from the widespread use of TTI-enabled technologies. Therefore, it is important to pair these operational strategies with consistent evaluation and reform of the models, their applications, and metrics for measuring their social impacts.
Technical solutions
To tackle the potential pitfalls of TTI systems, various technical research strategies have been explored. Technical research primarily aims to build more robust, safe, and reliable models. Recent developments include “find and replace” methods <cit.>, semantic steering <cit.>, and filtering techniques <cit.>. However, these strategies have their limitations. For instance, it has been argued that filtering could exacerbate bias <cit.> or fail to address it entirely <cit.>. Furthermore, mitigation via prompt editing has shown to have limited impact due to the complex and embedded nature of biases <cit.>.
A significant body of research focuses on detection of synthetic media as a mitigation strategy. Techniques include the use of GAN architectures <cit.>, blockchain verification <cit.>, fingerprinting <cit.>, and watermarking <cit.>. Whilst techniques such as watermarking do not directly mitigate harms but rather verify the authenticity of output images <cit.>, they can deter potential misuse.
The expansion of fair detection capabilities <cit.> is promising, but, as investigated in <cit.>, there is as yet no perfect approach to the detection of synthetic media. While technical mitigations like filtering can address output harms related to harmful content creation, other risks associated with TTI systems, such as miscommunication, job loss, or copyright infringement, cannot be resolved with technical solutions alone.
Socio-legal interventions
Mitigating harm in the context of TTI-enabled technologies could significantly benefit from the creation of legal and policy guidelines and regulations. Media literacy and user education have proven to be effective tools in addressing misinformation and manipulation, fostering critical engagement with digital content <cit.>. Increased corporate culpability could ensure more stringent fact-checking, transparent practices, and adherence to community standards, fostering an environment of accountability <cit.>.
Government legislation and local and global regulation can play a pivotal role <cit.>, with potential measures ranging from defining limits to controlling the dissemination of harmful content <cit.>. The strategy of limiting monetary rewards from the spread of misinformation can serve as a potent deterrent <cit.>.
In this dynamic and complex landscape, comprehensive and continuous research on the misinformation and disinformation environment becomes critical <cit.>. Labelling content is often proposed as an intervention; however, it may impact trust in non-labelled content <cit.> and may have unforeseen negative consequences <cit.>. Therefore, the nuances of such interventions need careful consideration.
Notwithstanding these interventions, we must acknowledge potential challenges, such as resistance from tech companies due to economic interests, or concerns over infringement on free speech. Therefore, a balance needs to be struck to ensure these interventions are effective and proportionate.
§ OPEN QUESTIONS AND FUTURE RESEARCH
While the conducted review revealed a number of well-acknowledged risks associated with TTI systems, our analysis also highlighted several knowledge gaps. We briefly discuss these gaps in order to highlight open questions and future directions for research.
Output bias
We identified several forms of neglected output bias, including ageism and anti-Asian sentiment, for which we found no targeted mitigation strategies. Ageism, a bias observed in GAN face generators <cit.>, remains a largely unexplored area in recent TTI research. Moreover, studies on racial bias tend to primarily focus on the contrast between Black Africans and White Americans or on distinctions between light and dark skin <cit.>. However, other instances of such bias, such as those affecting indigenous communities, deserve further attention. We also found limited research on the treatment of religious bias, such as in <cit.>. These output biases can affect both users, who may struggle to generate appropriate images, and downstream parties who are exposed to content that primarily reflects established norms and stereotypes.
Dialect bias
TTI models have been shown to create discrimination beyond outputs. For example, TTI systems may favour white-aligned American English over other dialects <cit.> or languages. Speakers of a limited number of languages - such as English and Chinese - are able to fully leverage these models. While translation technologies do exist, the accuracy and quality of such translations, especially when they need to communicate the nuances of prompts, remain suspect. Research on macaronic prompting demonstrates that DALL-E 2 has some “understanding” of other European languages, but relies primarily on English <cit.>.
Depending on the training data and processes used, users may need to conform linguistically to use TTI systems effectively. This, in turn, reinforces the idea that alternative English dialects are subpar <cit.>.
Pre-release moderation
The use of labour in traditionally pillaged countries[A term sustainability writer Aja Barber uses to highlight the role that exploitation of resources by the Global North had in these countries’ development.] to moderate the output of publicly available generative models has been reported <cit.>. Moderation workers often experience psychological harm, with insufficient support <cit.> and there is a power imbalance between those developing these models and profiting from their use, and those tasked with pre-release moderation. It is important that companies actively pursue fairer labour practices, so as to reduce harm for moderators.
Job displacement
It is important to recognise the displacement of profit that is enabled by systems such as TTI models <cit.>. If a user can freely generate art in the style of an artist, why pay the artist? However, we wish to draw attention to the nuances of this displacement, that is, the exacerbation of existing inequalities. The people already marginalised by society will be most impacted by this loss of income. Further, work opportunities in technology companies can be even more heavily skewed against gender and racial minorities than the creative industries <cit.>, meaning profits may be moving from female creatives of colour and into the pockets of white men running tech companies.
Furthermore, we wish to acknowledge the effects of job displacement on image subjects. For example, sex workers cannot currently exert agency over - nor profit from - their images being within training datasets. These images feed the creation of non-consensual pornographic material, often combining a sex worker's body with a celebrity face. We identified a website specifically designed to host models trained on individual sex workers, celebrities and public figures, in order to generate “personalised” porn. Furthermore, if stock imagery, advertisements or modelling photos come to frequently feature generated humans <cit.>, it is important we assess who is being displaced. For example, do companies use generated imagery to fulfil a diversity target, rather than find humans? We recognise the possibility of disconnect between the appearance of racial, gender or other diversity in stock imagery and who is receiving compensation for their time.
Miscommunication
We identify the problem of miscommunication across cultures and countries using TTI systems. This is especially significant in current TTI technology given the ability to rapidly create images from Western-centric datasets. Solutions to miscommunication require multi-disciplinary anthropological and technical research to understand the translation of forms and appearances into other cultures, and subsequently the building of inclusive datasets. Furthermore, we wish to highlight the problems related to flooding information environments with generated content. This is under-explored in the context of TTI systems, especially given the scale and speed of generation. This risk is not directly related to the types (and harms) of outputs produced, but considers the effects of mass synthetic media production on communities.
Socio-political instability
Many researchers have explored the possible effects of AI on democratic processes and structures <cit.>. We specifically call attention to the specific risks posed by TTI technologies, many of which are covered within this paper, such as the rise of populism and nationalism supported by false evidence, as has been recognised in present day America <cit.>, assisted by narratives of “alternative facts”. We consider the possible use cases of TTI models within these contexts to be an important, and widening, gap in the literature. This topic requires research beyond political considerations only, and would benefit from alignment with deepfake research, some of which has already considered such risks.
Future research directions
Technology companies building TTI (and other generative) models have a responsibility to address many of the risks discussed here; however, analysis of TTI models is insufficient without establishing benchmarks against which we can assess safe, ethical and fair performance. <cit.> present a “living benchmark” for large language models. Similar frameworks need to be developed for TTI models.
Building benchmarks and performance requirements necessitates input from a broad range of stakeholders including government, developers, research communities, image sources, subjects, users and vulnerable parties. The involvement of developers and researchers is especially vital given the high technical skill threshold of understanding generative models, as we have identified through the course of our analysis. The alignment of developmental goals with wider social goals will enable focused mitigation when harms arise, as current development and mitigation choices are left in the hands of technology companies. We also argue for the importance of mitigation strategies outside of technical solutions.
Research producing actionable insights arising from methods such as interviews and case studies can assist in our understanding of the impact of synthetic media. Work such as the interview and diary study of <cit.>, who argue for a holistic understanding of misinformation environments, is essential. Interviews that engage with identified victims of TTI model harms would greatly assist the development of mitigation strategies; see, for example <cit.>.
Finally, we primarily focused on examining the risks and harms that occur directly from the development and use of TTI models. For lack of space, we excluded an examination of indirect harms, such as environmental unsustainability, that result from the development of these models. The environmental impact of these models could have severe effects on globally marginalised communities, who are often most vulnerable to climate change yet typically have the least access to these technologies. The environmental risks of developing and deploying TTI systems are also highlighted in the context of Large Language Models (LLMs) <cit.>. This subject requires additional research to better understand the origins of the energy consumed in training TTI models, the global distribution of carbon emissions, and the regions most affected by these emissions. Moreover, potential strategies for using renewable energy sources in model training, as a key component of reducing environmental impact, should be explored.
Open questions
The review and analysis conducted within this paper enabled our identification of a number of open questions.
* How can we rethink data gathering and output moderation with respect to privacy, ownership and identity?
For example:
* How do we implement functional and retroactive data deletion?
* How might source image creators be protected from “copyright laundering”?
* How can we “protect” future datasets from corruption by output images, and benchmark a “good” dataset?
* How do we allocate responsibility, and compensate for harm?
* How can we best flag and mitigate offensive use?
* How do we manage TTI-enabled technologies with respect to non-Western communities, such as avoiding miscommunication?
* How can the environmental costs of training and using these models be attenuated?
* How do we maintain a “ground truth” in data and visual media?
* What are the long-term social costs of generating visual content?
There are a number of regulatory efforts currently addressing data access and the use of AI, with modifications underway to incorporate generative technologies like TTI models. These include the EU AI Act <cit.>, the Algorithmic Accountability Act in the US <cit.>, and China's Deep Synthesis Provisions <cit.>, among others. Multiple ongoing lawsuits could shape future legal perspectives on generative models, including TTI-induced systems. The outcomes of these cases are yet to be determined and will likely impact the regulatory landscape surrounding these AI technologies.[For reference, here are several ongoing litigation cases: Doe 1 et al v. GitHub et al, Case No. 4:2022cv06823 (N.D. Cal.); Andersen et al v. Stability AI et al, Case No. 3:23-cv-00201 (N.D. Cal.); Getty Images v. Stability AI, Case No. 1:2023cv00135 (D. Del.); Tremblay et al v OpenAI, Case No. 4:23-cv-03223 (N.D. Cal.); Getty Images v Stability AI (England), Case IL-2023-000007. We thank Andres Guadamuz for providing information regarding these cases.]
As this paper cannot – within the page limit – adequately provide an exhaustive analysis of such relevant regulatory efforts, we offer five recommendations that we suggest would be useful in guiding generalised regulatory and policy initiatives. Some of these recommendations may already be covered by existing regulatory frameworks. Nonetheless, we believe it is beneficial to outline all of them here.
* Establish a multi-stakeholder benchmark for responsible and safe performance of TTI systems, with concern for the risks raised in our typology.
* Integrate digital literacy and media literacy into educational programs to help users understand the limitations and potential risks associated with TTI systems.
* Clearly communicate to users when their data will be used to train TTI systems and how resulting images might be used, and obtain explicit consent for such use.
* Ensure that copyright ownership is clearly identified and respected when generating images from text, and establish clear rules for attribution and usage.
* Develop novel, multi-stakeholder safeguards to prevent the creation and dissemination of inappropriate or harmful images, especially images that are discriminatory, violent, and threats to security.
Further, we acknowledge that these recommendations are applicable to other multi-modal generative models. For example, the growing public discourse of apprehension and fear regarding AGI could be somewhat abated by Recommendation 2. We have hoped to highlight, throughout this paper, the importance of amplifying the voices of typically excluded stakeholders. By extension, we recognise the importance of fostering collaboration between the public, policymakers, industry leaders, researchers, and civil society organizations in order to ensure innovative, fair, effective regulatory frameworks.
§ CONCLUSION
This paper presented a typology of risk associated with TTI-induced technologies, followed by a succinct review of relevant mitigation strategies and a discussion of open questions concerning the development and use of TTI systems. Although we provided some preliminary recommendations, we acknowledge that additional perspectives, expertise, and research are necessary to refine this typology and enhance our understanding of the social implications of TTI systems.
§ ACKNOWLEDGMENTS
We would like to thank the UKRI Arts and Humanities Research Council (grant AH/X007146/1) for the policy fellowship that supported this work. We thank Shannon Vallor, Ewa Luger, and the members of Ada Lovelace Institute for helpful discussions. We also thank James Stewart, Lilian Edwards, Andres Guadamuz, and three anonymous reviewers whose comments improved our work. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by UKRI (Grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Charlotte Bird is supported by the Baillie Gifford PhD Scholarship at the Centre for Technomoral Futures.
§ TAXONOMY METHODOLOGY
We conducted our searches utilising the Semantic Scholar API. Semantic Scholar indexes over 200 million academic papers. To capture relevant papers, we selected five seed papers covering biased training data, biased image generation and bias in text-to-image models <cit.>. To capture papers relevant to misinformation harms, we selected three papers relevant to either deep fakes or synthetic media <cit.> or diffusion technology and evaluation <cit.>. Our search returned over 300 papers, 43 of which provided substantial and useful discussions of text-to-image technologies. Through extensive manual searches we identified a further 40 papers, most of which were technical papers. Collected papers were then analysed for stakeholders, risks, empirical investigations and open research questions.
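To make the search procedure concrete, the following is a minimal sketch of the kind of citation-expansion query that could be issued against the Semantic Scholar Graph API; the endpoint, field names, and the example seed identifier are assumptions based on the public API documentation rather than the exact scripts used for this review.

```python
import requests

API = "https://api.semanticscholar.org/graph/v1"

def papers_citing(seed_paper_id: str, limit: int = 100):
    """Fetch papers that cite a seed paper (assumed Graph API endpoint and fields)."""
    url = f"{API}/paper/{seed_paper_id}/citations"
    resp = requests.get(url, params={"fields": "title,abstract,year", "limit": limit})
    resp.raise_for_status()
    # Each returned entry wraps the citing paper's metadata under "citingPaper".
    return [item["citingPaper"] for item in resp.json().get("data", [])]

# Hypothetical usage: expand from one seed paper, then screen titles/abstracts manually.
# candidates = papers_citing("ARXIV:2112.10752")
```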
Our taxonomy of risks initially adopted an inductive-deductive approach, in that we presupposed the existence of three broad categories (discrimination and exclusion, harmful misuse, misinformation) and derived subcategories from analysis of the papers. We then retroactively identified potential “gaps” in the literature, based in part on analogous research into the harms of other technologies and in part on identifying key stakeholders that have not yet been addressed. These gaps are clearly identified in the table.
|
http://arxiv.org/abs/2307.06029v1 | 20230712092341 | Pluggable Neural Machine Translation Models via Memory-augmented Adapters | [
"Yuzhuang Xu",
"Shuo Wang",
"Peng Li",
"Xuebo Liu",
"Xiaolong Wang",
"Weidong Liu",
"Yang Liu"
] | cs.CL | [
"cs.CL"
] |
This work has been submitted to the IEEE for possible publication.
Pluggable Neural Machine Translation Models
via Memory-augmented Adapters
Yuzhuang Xu^1,†, Shuo Wang^1,†, Peng Li^2,*, Xuebo Liu^3, Xiaolong Wang^1, Weidong Liu^1,4, Yang Liu^1,2,*
^1Department of Computer Science & Technology, Tsinghua University, Beijing, China
^2Institute for AI Industry Research, Tsinghua University, Beijing, China
^3Harbin Institute of Technology, Shenzhen, China
^4Zhongguancun Laboratory, Beijing, China
^†Equal contribution
^*Corresponding authors
August 12, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================
Although neural machine translation (NMT) models perform well in the general domain, it remains rather challenging to control their generation behavior to satisfy the requirement of different users. Given the expensive training cost and the data scarcity challenge of learning a new model from scratch for each user requirement, we propose a memory-augmented adapter to steer pretrained NMT models in a pluggable manner. Specifically, we construct a multi-granular memory based on the user-provided text samples and propose a new adapter architecture to combine the model representations and the retrieved results. We also propose a training strategy using memory dropout to reduce spurious dependencies between the NMT model and the memory. We validate our approach on both style- and domain-specific experiments and the results indicate that our method can outperform several representative pluggable baselines.
Neural machine translation, style customization, domain customization, pluggable, memory, adapter.
§ INTRODUCTION
In recent years, modern neural machine translation (NMT) <cit.> systems are often developed with large-scale parallel data extracted from the Web <cit.>, whose style and content are driven by the average distribution of data from many domains <cit.>. Therefore, the performance of strong NMT models is close to or even better than human translators in the general domain <cit.>.
However, MT customers may have some special requirements, including both style- and domain-specific individual demands <cit.>. For instance, some users may want translations in a special style, while some others may need to translate medical texts. These requirements can be quite diverse among different customers and retraining or fine-tuning the model for each user group entails significant development costs. Moreover, the customers can not always provide sufficient data to train strong NMT models.
Fortunately, pluggable methods <cit.> bring hope to handle the aforementioned user requirements, which employ additional modules to steer pretrained models.
As shown in Figure <ref>, the users can provide some text samples for the NMT model to imitate.
We will then learn a plugin to control the NMT model to satisfy the user demands without optimizing the parameters in the original model. An advantage of plug-and-play approaches is that we can maintain the performance of the pretrained model, alleviating the risk of catastrophic forgetting <cit.>.
Some researchers advocate using lightweight parametric modules as plugins to control pretrained language models <cit.>. For machine translation, we can also use a series of parametric modules to adjust the model behavior to satisfy various user demands.
However, recent studies find that there exists a performance bottleneck of fully-parametric pluggable methods <cit.>: increasing the number of trainable parameters can not always lead to better performance.
Inspired by the recent progress of retrieval-augmented models <cit.>, we propose to increase the expressive power <cit.> of parametric plugins through external memories, the resulting method is called memory-augmented adapter.
The main challenges of the memory-augmented adapter are two-fold: (1) how to construct memories that can provide useful customization information; and (2) how to integrate the memories into existing NMT models while not degrading the translation quality. Although long phrases can provide more contextualized information, matching long sequences between queries and memory items is more difficult than matching shorter ones. We propose to build multi-granular memories to balance the amount of contextualized information and the retrieval difficulty.
Different from many previous works <cit.> that encode the source sentence and the target prefix as the key and the next token as the value, our memory can provide multi-scale translation knowledge <cit.> that is suitable for queries coming from different layers of the NMT model <cit.>. For memory integration, we propose a new adapter architecture to better interpolate the original model representation and the retrieved vectors. Moreover, we propose a new training strategy with memory dropout to reduce spurious dependencies between the NMT model and the provided memory.
We conduct experiments for both style- and domain-related customizations and the results show the superiority of our method over many representative baselines.
§ RELATED WORK
§.§ Style and Domain Adaptation for NMT
Adapting NMT models to translate texts of a specific style or domain has been investigated in several previous works <cit.>.
For stylized NMT, many previous works focus on the formality control of NMT models <cit.>, of which the style has a clear definition. Most existing works need to train a specific model for each style. For instance, Niu and Carpuat <cit.> mix the training data of both style transfer and machine translation to learn a formality-sensitive NMT model.
Given that the user-provided styles can be of great diversity in the real world, we aim to satisfy different style demands in a pluggable manner.
For domain adaptation, Luong and Manning <cit.> propose an effective method that fine-tunes an out-of-domain model with small-sized in-domain supervised corpora. Hu et al. <cit.> further design an unsupervised method, since parallel data is hardly available in many domains. Zheng et al. <cit.> extend kNN-MT <cit.> to perform unsupervised domain adaptation. Our work is different from Zheng et al. in both memory design and usage and the experiments show that our proposed framework performs not only better but also faster than their approach.
§.§ Machine Translation Customization
Machine translation customization aims to satisfy the special requirements of different users. Vu and Moschitti <cit.> propose to select data that is similar to the user-provided text samples and then train or fine-tune an NMT model for the corresponding user. Following Michel and Neubig <cit.>, we believe that MT customization has some specific traits that distinguish it from common style and domain adaptation settings:
(1) The number of customization requirements is very large due to the personal variation among different MT system users;
(2) The available data is often very limited (even monolingual, let alone parallel) for each customization requirement.
Therefore, we propose to leverage pluggable methods to customize existing NMT models.
§.§ Pluggable Pretrained Models
Pluggable methods aim to control the generation behavior of pretrained models without optimizing model parameters <cit.>, which can effectively avoid catastrophic forgetting. Some works propose to use parametric plugins. Bapna and Firat <cit.>; Houlsby et al. <cit.> insert some adapters between pretrained layers and Li and Liang <cit.> prepend some trainable vectors before the hidden states in attention modules. Hu et al. <cit.> leverage learnable pairs of rank decomposition matrices to steer pretrained models.
Retrieval-augmented models can also be treated as pluggable methods, which augment the model with non-parametric memory. kNN-MT <cit.> combines the model prediction and retrieval-based distribution at the output layer. Borgeaud et al. <cit.> build a chunk-level memory for language modeling. Chen et al. <cit.> encode questions and answers into key-value pairs for question answering. In this work, our goal is to combine the merits of both parametric and non-parametric plugins.
We propose a new type of memory for machine translation, which explicitly considers the alignment between source and target phrases of different granularities.
§ BACKGROUND
§.§ Transformer Model
In order to better explain our proposed memory-augmented adapter,
we first give a description of some components in the Transformer <cit.> model.
Given the input sentence 𝐱, Transformer maps it into vectors via an encoder:
𝐄 = encoder(𝐱)
where 𝐄∈ℝ^|𝐱| × d and d is the hidden size of the model. The encoder output is then utilized by the decoder, which is a stack of several independent layers. We use 𝐃^(i) to denote the output of the i-th decoder layer. Specifically, each decoder layer firstly employs a self-attention module to model the dependency between the target-side words:
𝐒^(i) = attn ( 𝐃^(i-1), 𝐃^(i-1), 𝐃^(i-1) )
𝐋_1^(i) = layernorm ( 𝐃^(i-1) + 𝐒^(i) )
where attn(𝐐, 𝐊, 𝐕) is the multi-head attention and layernorm is the layer normalization.
After that, a cross-attention module is adopted to integrate the source-side information:
𝐂^(i) = attn ( 𝐋_1^(i), 𝐄, 𝐄 )
𝐋_2^(i) = layernorm ( 𝐋_1^(i) + 𝐂^(i) )
The output of the cross-attention module is then projected with a feed-forward layer. The decoder output is finally used to estimate the probability P(𝐲 | 𝐱; θ), where 𝐲 is the target sentence and θ denotes the set of model parameters.
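For readers who prefer code, a minimal PyTorch sketch of the decoder-layer computation described above is given below; the hidden size and head count are illustrative defaults, and the feed-forward sublayer is omitted, so this is not the exact implementation used in this work.

```python
import torch
import torch.nn as nn

class DecoderLayerSketch(nn.Module):
    """Minimal sketch of one Transformer decoder layer (feed-forward block omitted)."""

    def __init__(self, d: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, D_prev, E, tgt_mask=None):
        # Self-attention over the target prefix: S = attn(D, D, D)
        S, _ = self.self_attn(D_prev, D_prev, D_prev, attn_mask=tgt_mask)
        L1 = self.norm1(D_prev + S)      # L1 = layernorm(D + S)
        # Cross-attention over the encoder output E: C = attn(L1, E, E)
        C, _ = self.cross_attn(L1, E, E)
        L2 = self.norm2(L1 + C)          # L2 = layernorm(L1 + C)
        return L2                        # a feed-forward sublayer would follow here
```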
§.§ Style Customization in NMT
Similar to generating images with specific styles <cit.>, style customization in NMT means outputting translations with user-specified styles <cit.>. For example, we want to output translations in Shakespeare style in a Zh-En translation task. A simple example is as follows.
Zh: 哦上帝啊，请赐予我力量吧！
En (G): Oh God, please grant me strength!
En (S): Oh Lord, do thou endow me with thy might!
“En (G)” denotes the output of the vanilla translation model, and “En (S)” denotes the output of the style-customized translation model using the Shakespeare corpus. Expressions such as “do thou endow” and “thy might” are typical markers of the Shakespearean style.
The task most closely related to style customization in NMT is author-stylized rewriting <cit.>, which aims to rewrite a given text in the style of a specific author. Syed et al. sum up the user or author style into three levels, namely surface level, lexical level, and syntactic level styles <cit.>. These levels capture subtle differences in punctuation, word usage, and even sentence construction unique to individual authors, thereby making author-stylized rewriting a challenging task. Style customization in NMT not only shares the same challenges as author-stylized rewriting, but it must also simultaneously translate the provided text into the target language, presenting its own unique challenges.
§ APPROACH
§.§ Overview
In this work, we aim to let MT system users be able to control existing NMT models by simply providing some examples. To this end, we propose a memory-augmented adapter to help NMT models imitate the user-provided text samples.
Specifically, we propose the multi-granular continuous memory that can better leverage multi-scale patterns, which have proven to be important for machine translation <cit.>. We also propose a new type of adapter (i.e., memory-augmented adapter) to integrate external memory into NMT models. We will explain how to construct and utilize the memory in the following two subsections, respectively.
§.§ Multi-granular Continuous Memory
We expect our memory to not only extract essential information from user-provided data but also be easy to retrieve for NMT models. For the first desideratum, we propose to build the memory with parallel phrase pairs, which can reflect the translation pattern required by the customer. However, it is non-trivial to determine the granularity of the used phrases. Storing only short phrases may waste a lot of contextualized information while storing too many long sequences would make it difficult to match the query and the memory items. To address this issue, we propose to construct a multi-granular memory to balance the amount of contextualized information and the retrieval difficulty. Multi-granular information is also shown to be important for NMT models <cit.>.
As shown in Figure <ref>, we use parse trees to extract multi-granular phrases, which can identify more meaningful boundaries than random splitting.
The extracted phrases are then translated by NMT models to form parallel phrase pairs.
For the second desideratum, we propose to use the same model to build and utilize the memory. For each phrase pair, we perform a forward computation to get the continuous representation at each layer of the involved NMT model. We store the encoder output 𝐄 as the source-side memory and the self-attention output 𝐒^(i) at every decoder layer as the target-side memory. See Eq. (<ref>) and (<ref>) for more details of the stored representations. Each memory item is the average of the representations of all tokens in a phrase and has dimension d. Figure <ref> shows an example. The reason we extract 𝐄 and 𝐒^(i) as our memory is that these representations are at the same layer where we perform memory retrieval. Our motivation is to narrow down the gap between the memory items and the queries,
making it easier for the model to read the memory.
We focus on using monolingual user-provided data in this work since parallel data is often unavailable for most requirements. However, our method can be easily extended for bilingual data, from which we can automatically extract phrase pairs based on unsupervised word alignment algorithms <cit.>.
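A rough sketch of this memory construction is given below; the parse-tree handling uses NLTK, and `model.encode` / `model.decoder_self_attn_states` are hypothetical hooks into the frozen NMT model's forward pass (in practice one would register forward hooks on the corresponding modules), so the snippet is illustrative rather than definitive.

```python
import torch
from nltk.tree import Tree

def extract_phrases(parse_str: str, max_len: int = 8):
    """Collect multi-granular phrases (constituent spans) from a target-side parse tree."""
    phrases = set()
    for sub in Tree.fromstring(parse_str).subtrees():
        toks = sub.leaves()
        if 1 < len(toks) <= max_len:
            phrases.add(" ".join(toks))
    return sorted(phrases)

@torch.no_grad()
def build_memory(phrase_pairs, model, n_layers: int):
    """Store mean-pooled hidden states of each (source, target) phrase pair as memory items."""
    src_mem = [[] for _ in range(n_layers)]
    tgt_mem = [[] for _ in range(n_layers)]
    for idx, (src, tgt) in enumerate(phrase_pairs):
        layer = idx % n_layers                        # non-overlapping split across layers
        E = model.encode(src)                         # [|src|, d] encoder output (hypothetical hook)
        S = model.decoder_self_attn_states(src, tgt)  # list of [|tgt|, d], one per decoder layer
        src_mem[layer].append(E.mean(dim=0))          # one d-dimensional item per phrase pair
        tgt_mem[layer].append(S[layer].mean(dim=0))
    return [torch.stack(m) for m in src_mem], [torch.stack(m) for m in tgt_mem]
```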
§.§ Memory-augmented Adapter
Adapter Architecture
Different from existing parametric plugins <cit.> that only adapt model representations, we propose a new type of adapter to read memory.
The proposed memory-augmented adapter has four inputs: anchor, query, key, and value, which can be represented as 𝐀, 𝐐, 𝐊, and 𝐕, respectively. Anchors and queries are derived from the frozen NMT model while keys and values come from the memory. As depicted in Figure <ref>,
we use an attention module to generate the retrieved result:
𝐑 = softmax ( 𝐐𝐖_q𝐖_k^⊤𝐊^⊤ / T ) 𝐕𝐖_v
where 𝐖_q, 𝐖_k, 𝐖_v ∈ℝ^d × d and the retrieved result 𝐑 has the same shape as 𝐐. T is a hyperparameter to control the sharpness of the retrieval distribution.
To avoid the model being completely dependent on the retrieved result 𝐑 that can be erroneous in some cases, we also take in an anchor from the original model, which is combined with 𝐑 via a gated fusion module:
λ = sigmoid ( relu( [𝐀;𝐑]𝐖_1 )𝐖_2 )
𝐎 = λ 𝐀 + (1 - λ) 𝐑
where 𝐎 is the adapter output, which has the same shape as the anchor 𝐀. 𝐖_1 ∈ℝ^2d × d and 𝐖_2 ∈ℝ^d × 1. λ is the learned interpolation ratio.
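The two equations above can be realised as a small module like the one below; this is a sketch under the assumption of single-head retrieval without bias terms, not the exact implementation used in the paper.

```python
import torch
import torch.nn as nn

class MemoryAugmentedAdapter(nn.Module):
    """Sketch of the memory-augmented adapter: retrieval attention plus gated fusion."""

    def __init__(self, d: int = 512, temperature: float = 0.5):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)
        self.W_k = nn.Linear(d, d, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)
        # lambda = sigmoid(relu([A; R] W_1) W_2)
        self.gate = nn.Sequential(nn.Linear(2 * d, d, bias=False), nn.ReLU(), nn.Linear(d, 1, bias=False))
        self.T = temperature

    def forward(self, A, Q, K, V):
        # R = softmax(Q W_q W_k^T K^T / T) V W_v
        scores = self.W_q(Q) @ self.W_k(K).transpose(-1, -2) / self.T
        R = torch.softmax(scores, dim=-1) @ self.W_v(V)
        # O = lambda * A + (1 - lambda) * R
        lam = torch.sigmoid(self.gate(torch.cat([A, R], dim=-1)))
        return lam * A + (1.0 - lam) * R
```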
Adapter Integration We apply the memory-augmented adapter to the self- and cross-attention modules in the decoder, since these two types of components are important for target-side language modeling and source-side information utilization, respectively. At the i-th decoder layer, we use the self-attention output 𝐒^(i) as queries to read the target-side memory:
𝐎^(i)_1 = memadapt( 𝐒^(i), 𝐒^(i), 𝐌_t^(i), 𝐌_t^(i) )
where memadapt(𝐀, 𝐐, 𝐊, 𝐕) denotes the memory-augmented adapter. 𝐌_t^(i)∈ℝ^N_t^(i)× d represents the target-side memory, where N_t^(i) denotes the number of items in 𝐌_t^(i). The adapter output 𝐎_1 is then provided to the layer normalization module:
𝐋_1^(i) = layernorm ( 𝐃^(i-1) + 𝐎_1^(i) )
Similarly, we read the source-side memory in the cross-attention module:
𝐎_2^(i) = memadapt( 𝐂^(i), 𝐋_1^(i), 𝐌_s^(i), 𝐌_s^(i) )
𝐋_2^(i) = layernorm( 𝐋_1^(i) + 𝐎_2^(i) )
Figure <ref> shows an example. To reduce the redundancy of a phrase pair appearing repeatedly in the memories at every decoder layer, we split all the phrase pairs into L parts, where L is the number of decoder layers. Each layer only stores one part of the phrase pairs. In other words, the memories used in different layers do not overlap with each other in terms of the corresponding phrase pairs.
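Putting the pieces together, the decoder-layer sketch shown earlier can be extended as below to make explicit where the adapter outputs replace the residual inputs to the two layer-normalization modules; `mem_s` and `mem_t` denote the per-layer source- and target-side memories built earlier, and the wiring is again an illustrative sketch rather than the reference implementation.

```python
class MemoryAugmentedDecoderLayer(DecoderLayerSketch):
    """Decoder layer with memory-augmented adapters on the self- and cross-attention outputs.

    Relies on DecoderLayerSketch and MemoryAugmentedAdapter from the earlier sketches.
    """

    def __init__(self, d: int = 512, n_heads: int = 8, temperature: float = 0.5):
        super().__init__(d, n_heads)
        self.adapter_self = MemoryAugmentedAdapter(d, temperature)
        self.adapter_cross = MemoryAugmentedAdapter(d, temperature)

    def forward(self, D_prev, E, mem_s, mem_t, tgt_mask=None):
        S, _ = self.self_attn(D_prev, D_prev, D_prev, attn_mask=tgt_mask)
        O1 = self.adapter_self(S, S, mem_t, mem_t)     # O1 = memadapt(S, S, M_t, M_t)
        L1 = self.norm1(D_prev + O1)
        C, _ = self.cross_attn(L1, E, E)
        O2 = self.adapter_cross(C, L1, mem_s, mem_s)   # O2 = memadapt(C, L1, M_s, M_s)
        L2 = self.norm2(L1 + O2)
        return L2
```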
Training Strategy Inspired by dropout <cit.>, which can effectively reduce spurious co-adaptation between model parameters, we propose a memory dropout approach to prevent NMT models from being too dependent on specific memory items. When training the memory-augmented adapter, we randomly drop part of the memory items. Let 𝐌 be the full memory and 𝐌̂ be the remaining memory after memory dropout; the overall loss is given by
ℒ = ℒ_NLL ( P(𝐲 |𝐱, θ, 𝐌) ) + α ℒ_NLL ( P(𝐲 |𝐱, θ, 𝐌̂) ) + β ℒ_dist( P(𝐲 |𝐱, θ, 𝐌), P(𝐲 |𝐱, θ, 𝐌̂) )
where α and β are hyperparameters and ℒ_NLL is the conventional negative log-likelihood. The first term is the loss using the full memory, the second is the loss using the dropped memory, and the third models the agreement between the two output distributions.
The agreement loss (i.e., ℒ_dist) <cit.> measures the distance between two distributions:
ℒ_dist(p, q) = 1/2 ( D_KL(p || q) + D_KL(q || p) )
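One possible realisation of this objective is sketched below; `model(x, y, memory)` is a placeholder for the frozen NMT model equipped with the adapters and returning per-token log-probabilities, and in practice only the adapter parameters would receive gradients.

```python
import torch
import torch.nn.functional as F

def memory_dropout(memory: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Randomly drop memory items (rows) with probability p."""
    keep = torch.rand(memory.size(0)) >= p
    return memory[keep]

def adapter_loss(model, x, y, memory, p=0.1, alpha=5.0, beta=5.0):
    mem_hat = memory_dropout(memory, p)
    logp_full = model(x, y, memory)    # log P(y | x, theta, M),     shape [batch, |y|, V]
    logp_drop = model(x, y, mem_hat)   # log P(y | x, theta, M-hat)

    vocab = logp_full.size(-1)
    nll_full = F.nll_loss(logp_full.view(-1, vocab), y.view(-1))
    nll_drop = F.nll_loss(logp_drop.view(-1, vocab), y.view(-1))

    # Symmetric KL between the two output distributions (agreement loss L_dist)
    kl = 0.5 * (F.kl_div(logp_drop, logp_full, log_target=True, reduction="batchmean")
                + F.kl_div(logp_full, logp_drop, log_target=True, reduction="batchmean"))
    return nll_full + alpha * nll_drop + beta * kl
```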
Extension Since our method does not change the model decoding, it can also be combined with the retrieval-based decoding algorithm as presented in kNN-MT <cit.>, which interpolates the model probability with a retrieved distribution. We call this decoding method kNN decoding.
See Section <ref> in appendix for details.
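Since the appendix details are not reproduced here, the snippet below is only a generic sketch of the kNN-MT-style interpolation that kNN decoding performs at each step; the interpolation weight `lam` and the distance temperature `tau` are assumed hyperparameters rather than values taken from this work.

```python
import torch

def knn_decoding_step(model_logprobs, knn_dists, knn_tokens, vocab_size,
                      lam: float = 0.5, tau: float = 10.0):
    """Interpolate the NMT next-token distribution with a retrieval-based one.

    model_logprobs: [V] log-probabilities from the NMT model for the next token.
    knn_dists:      [k] distances of the retrieved datastore neighbours.
    knn_tokens:     [k] target tokens stored with those neighbours (long tensor).
    """
    weights = torch.softmax(-knn_dists / tau, dim=-1)        # closer neighbours weigh more
    p_knn = torch.zeros(vocab_size).scatter_add_(0, knn_tokens, weights)
    return lam * model_logprobs.exp() + (1.0 - lam) * p_knn  # mixed next-token distribution
```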
§ STYLE CUSTOMIZATION
§.§ Setup
NMT Model Training In the pluggable scenario, we should first have an existing NMT model, which can serve as the foundation for further customization.
We use the training corpus of the WMT20 En↔Zh translation task [https://www.statmt.org/wmt20/translation-task.html] to train NMT models, which contains 23.9M sentence pairs. We use SentencePiece [https://github.com/google/sentencepiece] to preprocess the data and the sentence piece model we used is released by mBART <cit.>. The architecture of our NMT models is Transformer <cit.>, whose hidden size is 512 and depth is 6. Please refer to Section <ref> in appendix for more details.
Customization Data
We evaluate the customization effect of our method in two translation directions: En-Zh and Zh-En. We use the works of two world-renowned writers as stylized text samples, including Shakespeare and Lu Xun. Their works created representative styles for English and Chinese, respectively. We extract their texts from the web and then split the data into training, validation, and test sets. For Shakespeare's style, the training set contains 20K English sentences while the validation and test sets contain 500 sentences, respectively. The target-language (i.e., English) training and validation sentences are then translated by the NMT model, while the test set is translated by human translators. For Lu Xun's style, the training set consists of 37K sentences while the validation and test sets contain 500 sentences. Similarly, the test set is also translated by humans while the training and validation sets are translated by NMT models. The resulting corpus is called Machine Translation with Style Customization (MTSC) [We will release MTSC shortly.]. We will add more styles in different languages for machine translation research in the future.
Memory Construction We first build parse trees for target-side sentences using Stanford Parser [https://nlp.stanford.edu/software/lex-parser.html] and then extract multi-granular phrases. As mentioned in Section <ref>, we evenly divide the extracted phrases according to their lengths into L parts to avoid information redundancy between different layers. We did not store the representations of phrases longer than a pre-specified threshold l_max, since the occurrence of long phrases is very low. l_max is set to 10 for Zh and 8 for En.
Adapter Training The general NMT model is frozen when training the memory-augmented adapter. We determine the values of the hyperparameters based on the validation performance. Specifically, the temperature in Eq. (<ref>) is set to 0.5. Both α and β in Eq. (<ref>) are set to 5. The memory dropout rate is set to 0.1. We provide more details of adapter training in Section <ref> in the appendix.
Baselines We compare our approach with the following representative baselines:
* Extreme <cit.>: adding a style-specific bias vector in the output layer.
* Adapter <cit.>: inserting adapters before the residual connections.
* MT+Rewrite <cit.>: using a translation model and a monolingual rewriting model.
* kNN-MT <cit.>: integrating a datastore built from the stylized texts after the last decoder layer during inference.
* DExperts <cit.>: controlling the generation behavior using expert and anti-expert language models.
Evaluation Metrics We use both automatic and human evaluation to make a thorough comparison between the involved methods. The automatic evaluation metrics are as follows:
* BLEU: measuring the translation quality of model outputs.
We use sacreBLEU [English-Chinese: nrefs:1 | case:mixed | eff:no | tok:zh | smooth:exp | version:2.3.1. Chinese-English: nrefs:1 | case:mixed | eff:no | tok:13a | smooth:exp | version:2.3.1.] <cit.> to estimate the BLEU score (a short usage sketch is given after this list).
* Perplexity: measuring the fluency of model outputs. We fine-tune a pretrained Transformer LM <cit.> with stylized text to calculate perplexity.
* Classifier Score: measuring the similarity between model outputs and the stylized text samples. We follow Li et al. <cit.> to train style classifiers to quantify the style similarity. The classifier we used is TextCNN <cit.>. For Lu Xun's style, the classifier can achieve an accuracy of 93.5%. For Shakespeare's style, the classifier can achieve an accuracy of 94.5%. We use these classifiers to estimate whether the output is in the desired style.
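For reference, the two corpus-level automatic scores can be computed roughly as follows; sacrebleu's Python API is used for BLEU, while the style classifier is abstracted as any callable that returns a label, so the helper names are ours:

import sacrebleu

def automatic_scores(hypotheses, references, style_classifier, target_label=1,
                     tokenize="zh"):
    """hypotheses/references: lists of detokenized strings of equal length."""
    bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize=tokenize)
    # Classifier score: fraction of outputs judged to be in the desired style.
    in_style = sum(1 for h in hypotheses if style_classifier(h) == target_label)
    return bleu.score, 100.0 * in_style / max(1, len(hypotheses))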
§.§ Main Results
Automatic Evaluation
Table <ref> shows the performance of all the involved methods in the style customization task.
When decoding with vanilla beam search, our method outperforms all the baselines in terms of BLEU and classifier score on average, indicating the effectiveness of the proposed memory-augmented adapter in controlling the output style of NMT models. The perplexity of kNN-MT is better than ours, but its BLEU score is much worse. When combined with kNN decoding, which is illustrated in Extension in Section <ref>, our method can be further improved, achieving the best performance across all three automatic metrics. These results further demonstrate that our method is complementary to kNN-MT.
Human Evaluation
We also perform a human evaluation to assess the translation quality of the different methods. Following previous works <cit.>, we ask human evaluators to compare the outputs of the involved methods. Since human evaluation is time-consuming and labor-intensive, we only compare our method with the strongest baseline (i.e., kNN-MT) in En-Zh. Note that our outputs used for human evaluation are generated using vanilla beam search.
Following Hu et al. <cit.>, each sentence is evaluated in terms of content preservation, sentence fluency, and style similarity.
Table <ref> shows the results, from which we find that our approach performs better than the baseline in all three evaluation aspects. The agreement among the three human evaluators is estimated with Fleiss' kappa <cit.>; the results demonstrate moderate agreement (0.4 ≤κ≤ 0.6) for content preservation and sentence fluency, and good agreement (0.6 ≤κ≤ 0.8) for style similarity.
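The agreement statistic itself can be obtained, for instance, with statsmodels; the sketch below assumes the ratings are arranged with one row per evaluated sentence and one column per rater, using integer category labels:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def agreement(ratings):
    """ratings: (n_items, n_raters) integer array of category labels."""
    table, _ = aggregate_raters(np.asarray(ratings))  # per-item category counts
    return fleiss_kappa(table, method="fleiss")

# e.g. agreement([[1, 1, 0], [2, 2, 2], [0, 1, 0]])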
Table <ref> gives some translation examples for style customization.
§.§ Performance at Different Data Scales
In some cases, the user-provided data can be of extremely small scale <cit.>. We thus investigate the performance of the involved methods using customization data of different scales. Figure <ref> shows the results. Our memory-augmented adapter consistently outperforms the baselines at different data scales, even with only 250 exemplary sentences. These results show that our method can be applied to extremely low-resource adaptation scenarios.
We conduct a thorough ablation study and report the results in Section <ref>.
§ DOMAIN CUSTOMIZATION
§.§ Setup
NMT Model Training We train the NMT model using the WMT14 De-En training corpus [https://www.statmt.org/wmt14/translation-task.html], which includes 4.5M sentence pairs. The training data is preprocessed in the same way as in style customization. See Section <ref> in the appendix for more details.
Customization Data To evaluate the pluggable performance in the domain customization setting, we follow previous works <cit.> to use a multi-domain dataset, which includes four domains: IT, Medical, Law and Koran. To simulate real-world user customization where the user-provided data is often of small scale, we randomly select 20K sentences for IT, Medical, and Law, and use all the 18K sentences for Koran. We also use only the target-side training data to simulate real-world cases and use NMT models to generate synthetic parallel data.
All the validation and test sets are authentic parallel data.
Memory Construction and Adapter Training We filter out phrases longer than 10 tokens during memory construction. For adapter training, T is set to 0.1 for Medical and Law, and 0.5 for the other two domains. Both α and β are set to 5 for all four domains. The memory dropout rate is set to 0.1.
Baselines We compare our proposed method with two representative pluggable domain adaptation baselines: adapter <cit.> and kNN-MT <cit.>. See Section <ref> for details on the baselines.
§.§ Main Results
The adaptation performance on different domains is shown in Table <ref>. On average, our method outperforms the two baselines even without kNN decoding, demonstrating the effectiveness of our motivation to boost parametric plugins with external memory. When combined with kNN decoding, our method achieves better results on Medical, Law, and Koran. Using kNN decoding, our method improves over Adapter and kNN-MT by 3.1 and 1.2 BLEU on the test sets, respectively.
§.§ Inference Time
A concern for retrieval-augmented methods is that they may significantly slow down the inference process.
As shown in Figure <ref>, our method is slower than Adapter, but the difference between the two methods becomes very small with large batch sizes. For instance, our inference time is only 1.15 times that of Adapter with a batch size of 128. Our method is also comparable to kNN-MT. In particular, when the batch size is set to 128, our method is slightly faster than kNN-MT. When also using kNN decoding, our method is slightly slower than kNN-MT (i.e., 41.1s vs. 39.1s with batch size = 128). We implement the kNN algorithm with Faiss-gpu <cit.> to accelerate the retrieval process.
§ DISCUSSION
§.§ Effect of Different Components
We conduct thorough ablation studies to better understand the effect of the proposed components in our method.
Granularity Distribution As mentioned in Section <ref>, we divide all the phrase pairs into several different parts to reduce the redundancy of information among the decoder layers. Our basic idea is that a certain phrase pair only needs to appear in one layer of the decoder. The phrase pairs are divided according to their lengths. We investigate three ways to distribute the phrase pairs to the decoder layers: (1) long-to-short, where the phrase length decreases from the bottom layer to the top layer; (2) short-to-long, where the phrase length increases from the bottom layer to the top layer; and (3) random, where the memory item of a certain phrase pair is stored by a randomly selected layer. Figure <ref> shows the results of the three ways, where we find that short-to-long achieves the best performance. We think the reason is that different layers may carry various types of linguistic properties in the Transformer model <cit.>, which require information of different granularities. When reading the memory, queries from lower layers may contain less contextualized information <cit.>, thus short phrases are more suitable for them. At higher layers, long phrases that provide more contextualized information perform better. We thus use short-to-long as the default setting.
Effect of Memory Dropout
We investigate the performance of two types of memory dropout: (1) item-level memory dropout, which drops each memory item with a certain probability; and (2) layer-level memory dropout, which drops all the memories at a decoder layer with a certain probability. Table <ref> shows the results, where all the models are trained using the overall loss function (i.e., ℒ in Eq. (<ref>)). We find that the layer-level memory dropout performs better.
In the following experiments, we use layer-level memory dropout by default.
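The two variants differ only in where the Bernoulli mask is applied; a minimal sketch (the tensor layout (num_layers, num_items, dim) is our assumption) is:

import torch

def drop_memory(memory, p=0.1, level="layer"):
    """memory: (num_layers, num_items, dim); returns the dropped memory M-hat."""
    if level == "item":       # drop each memory item independently
        keep = (torch.rand(memory.shape[:2]) > p).float().unsqueeze(-1)
    else:                     # drop all items of a whole decoder layer at once
        keep = (torch.rand(memory.shape[0], 1) > p).float().unsqueeze(-1)
    return memory * keep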
Effect of Memory Granularity To validate the necessity of building memory in a multi-granular form, we compare the performance of single- and multi-granular memories. Figure <ref> shows the results, where we find using multi-granular memory can achieve lower validation loss, indicating the effectiveness of our method. We use multi-granular memory in the following experiments by default.
Analysis of Memory Usage Table <ref> shows the effect of different components that are related to memory usage. Firstly, we notice that the gated fusion mechanism has a positive effect on translation quality, indicating the necessity of learning an input-dependent interpolation ratio between original model representations and retrieved results. In addition, we observe that there is a significant performance drop when using only either the source- or target-side memory. These results demonstrate that building parallel memory using phrase pairs is very useful.
Effect of Integration Layers
We also integrate the memory into different layers to better understand our method.
Table <ref> shows the results, from which we find using the memories at all layers performs best and higher layers tend to be more important than lower layers. A potential reason is that memories at higher layers contain more contextualized information.
§.§ Comparison with Fine-tuning
Although our goal in this work is to better build pluggable NMT models, our method is not only limited to this setting. For instance, the proposed memory-augmented adapter can also be used when the NMT model is not frozen (i.e., fine-tuning <cit.>). Table <ref> shows the results, where we observe that our method can also improve the performance of fine-tuning. This result implies that the external memory may provide essential information that is complementary to that stored in model parameters.
§.§ Application to Larger Model
We also conduct experiments on a model of a larger scale, whose hidden size is 1024 and parameter size is 596.0M. On the test set of En-Zh style customization, our memory-augmented adapter (without kNN decoding) outperforms Adapter <cit.> by 2.9 BLEU and kNN-MT <cit.> by 2.2 BLEU. This demonstrates that our method is also effective when the model size is larger. How to apply our method to larger models deserves further exploration.
§.§ Case study
We place some translation examples in Table <ref> to provide a better understanding of the differences between the involved methods. The six sentences are ordered from short to long. We find that our method consistently outputs better translations. Also, kNN-MT is not always better than Adapter; see the 3rd and 4th cases.
Syed et al. believe that author style can be understood at three levels, from punctuation and word usage to syntax <cit.>. In our cases, we find that our method learns to generate the author style better at different granularities. In case 2, our method correctly translates the phrase “build the tower” to “造塔” while the other methods translate it to “修建塔” or “建造雷峰塔”. Although the meaning is the same, our translation is closer to the expression style of the original author/user. Similarly, our method properly translates the word “call” to “呼唤” while the other methods translate it to “打电话” or “叫” in case 3. Also, our method translates the phrase “that night” to “那夜” while the other methods translate it to “那一晚” or “昨夜” in case 4. Furthermore, it can be easily found that our method generates sentences with similar syntactic styles. From case 1 and case 6, we can see that the sentences generated by our method are more similar to the references in terms of sentence segmentation.
These cases, ranging from the lexical level to syntactic structure, also demonstrate the rationality and effectiveness of our multi-granularity memory design.
§ CONCLUSION
In this work, we propose a memory-based adapter to build pluggable NMT models, which can let the users customize the generation behavior of NMT models by simply providing some text samples. We improve both the memory design and utilization to help existing models better adapt to the user-demanded styles or domains. Experiments demonstrate the superiority of our proposed method over several representative baselines.
In the future, we will validate the performance of our method on stronger NMT models, e.g., M2M100 12B <cit.> and NLLB 54.5B <cit.>. By changing the memory format, we believe our method can also be applied to other sequence generation tasks.
§ EXPERIMENTAL SETUP
§.§ Training Details
NMT Model Training We use WMT14 <cit.> De-En and WMT20 <cit.> Zh-En training data to train NMT models.
For all the involved language pairs (i.e., En-Zh, Zh-En, and De-En), we train the Transformer model using the following hyper-parameters. All the models are optimized by Adam <cit.>, with β_1=0.9, β_2=0.98 and ϵ=10^-9. We train each model for 200K iterations on 4 NVIDIA A100 GPUs, where the training speed is 8.5 iterations per second. We use the learning schedule presented in Vaswani et al. <cit.>, with a maximum learning rate of 7e-4 and the warm-up step is 4K. Each mini-batch contains 32K tokens in total. Both the dropout rate and the label smoothing penalty are set to 0.1. During inference, the beam size is 4. For En-Zh and Zh-En, the NMT models have 253.9M parameters. For De-En, the model has 198.3M parameters.
Adapter Training
We train the proposed memory-augmented adapter for 20K iterations. The maximum learning rate is set to 2e-4 and we restart the learning rate schedule when training adapters. Each mini-batch contains 8K tokens. Each experiment is conducted through a single run. We tune the values of the hyperparameters on the validation set through grid search.
kNN Decoding
To apply kNN decoding to our method, we first build a datastore in the same way as illustrated in Khandelwal et al. <cit.>, using our model augmented with the proposed adapters. We use the open-source implementation of kNN decoding. [https://github.com/urvashik/knnmt]
§.§ Details on Baselines
In this subsection, we provide the essential details of the baselines in this work:
* Adapter <cit.>: we use the same adapter architecture as presented in Houlsby et al. <cit.>. The training hyperparameters are the same as for our method, excluding some newly introduced hyperparameters (e.g., α and β). The default dimension of the hidden layer is set to 64, following Houlsby et al. <cit.>. Since our method uses more parameters than Adapter, we also train a larger adapter to match the parameter count, whose hidden dimension is set to 512. The bigger adapter still performs worse than our method (16.9 vs. 20.8 in terms of BLEU), indicating that our performance improvement is not simply caused by the larger adapter size.
* MT+Rewrite <cit.>: we train a rewriting model to refine the output of the NMT model. Specifically, we fine-tune a pretrained encoder-decoder model to transfer the model outputs into stylized texts.
* kNN-MT <cit.>: there are mainly three hyperparameters that have a significant impact on performance: k, T, and λ. We tune the hyperparameters on the validation set. For style customization, k = 128, T = 30 and λ = 0.7 in En-Zh. In Zh-En, k = 16, T = 40 and λ = 0.6. For domain customization, k = 16 across all the four domains. T = 4 in IT, Medical, and Law. T = 40 in Koran. λ is tuned to be 0.2, 0.3, 0.3, 0.6 in IT, Medical, Law, and Koran, respectively.
* DExperts <cit.>: we learn two independent language models, one serving as an expert and the other as an anti-expert. The expert model is fine-tuned on the user-provided data, while the anti-expert is trained on the general-domain data. α is tuned on the validation set and the final value is 0.2.
|
http://arxiv.org/abs/2307.04432v1 | 20230710091220 | Density-dependent relativistic mean field approach and its application to single-$Λ$ hypernuclei in Oxygen isotopes | [
"Shi Yuan Ding",
"Wei Yang",
"Bao Yuan Sun"
] | nucl-th | [
"nucl-th"
] |
Density-dependent relativistic mean field approach and its application to single-Λ hypernuclei in Oxygen isotopes
This work was partly supported by the Fundamental Research Funds for the Central Universities, Lanzhou University under Grant No. lzujbky-2022-sp02 and lzujbky-2023-stlt01, the National Natural Science Foundation of China under Grant No. 11875152 and No. 12275111, and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34000000. The authors also acknowledge the computation resources provided by the Supercomputing Center of Lanzhou University.
Shi-Yuan Ding^1,2, Wei Yang^1,2, Bao-Yuan Sun^1,2 ([email protected])
August 12, 2023
^1MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, China
^2School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
The in-medium features of the nuclear force, which include both nucleon-nucleon (NN) and hyperon-nucleon (Λ N) interactions, impact the description of single-Λ hypernuclei. As the mass number or isospin of hypernuclei changes, such effects could be unveiled by analyzing the systematic evolution of the bulk and single-particle properties. From a density-dependent meson-nucleon/hyperon coupling perspective, a new Λ N effective interaction in the covariant density functional (CDF) theory, namely DD-LZ1-Λ1, is obtained by fitting the experimental data of Λ separation energies for several single-Λ hypernuclei. It is then adopted to study the structure and transition properties of single-Λ hypernuclei in Oxygen isotopes, in comparison with several selected CDF Lagrangians. A discrepancy is observed explicitly in the isospin evolution of the Λ1p spin-orbit splitting with the various effective interactions, ascribed to the divergence of their meson-hyperon coupling strengths with increasing density. In particular, the density-dependent CDFs introduce an extra contribution that enhances the isospin dependence of the splitting, which originates from the rearrangement terms of the Λ self-energies. In addition, the characteristics of hypernuclear radii are studied along the isotopic chain. Owing to the impurity effect of the Λ hyperon, a size shrinkage is observed in the matter radii of hypernuclei as compared to their cores of normal nuclei, while its magnitude is further elucidated to correlate with the incompressibility of nuclear matter. Besides, there exists a sizable model-dependent trend in how the Λ hyperon radii evolve with the neutron number, which is decided partly by the in-medium NN interactions as well as the core polarization effects.
21.80.+a,
13.75.Ev,
21.30.Fe,
21.60.Jz
§ INTRODUCTION
The discovery of hyperons, particles containing strange quarks, in 1953 sparked strong interest among experimental and theoretical physicists <cit.>. The ability of hyperons to enter the nucleus and form hypernuclei makes them sensitive probes for studying the structure and specific features of nuclei. Studies of hyperon behavior in the nucleus help us to understand the baryon-baryon interaction in the nuclear medium and its effects on nuclear properties <cit.>. In addition, hyperons are thought to be produced inside neutron stars <cit.>. The link between hypernuclear and neutron star properties benefits our comprehension of the state of matter in extreme environments, as well as of the strangeness-bearing nuclear force at high densities. In recent decades, a wealth of hypernuclear data has been generated through induced reactions of meson and electron beams at various radioactive beam facilities, including the Japan Proton Accelerator Research Complex (J-PARC) <cit.>, the Thomas Jefferson National Accelerator Facility (JLab) <cit.>, and the Facility for Antiproton and Ion Research (FAIR) <cit.>. These advanced facilities have played a pivotal role in advancing our understanding of strangeness in nuclear physics. Notably, single-Λ hypernuclei have been the most extensively studied, with experimental data covering hypernuclei from ^3_ΛH to ^208_ΛPb in various laboratories <cit.>.
When a Λ hyperon enters a nucleus, various phenomena can be observed. For instance, in ^7_ΛLi, it has been found that the size of the ^6Li core is smaller compared to the free-space ^6Li nucleus, as suggested by the measurement of the γ-ray transition probability E2(5/2^+→1/2^+) in ^7_ΛLi <cit.>. In addition, in ^13_ΛC, there are hints that the Λ spin-orbit splitting is much smaller than that of the nucleon <cit.>. Recently, the potential for producing neutron-rich hyperfragments at high-intensity heavy-ion accelerator facilities has been discussed <cit.>. The directed flow of hypernuclei (^3_ΛH and ^4_ΛH) has just been observed at RHIC for the first time in heavy-ion collisions, providing insights into hyperon-nucleon interactions under finite pressure <cit.>. These advances highlight the promising prospects for investigating hypernuclear structures using the forthcoming high-intensity heavy-ion accelerator facility HIAF <cit.>. To provide accurate predictions for these experiments, researchers have performed detailed theoretical work on observables such as the hypernuclear binding energy <cit.>, spin-orbit splitting <cit.>, and hyperon and hypernuclear matter radii <cit.>. Overall, these efforts aim to provide valuable insights into the behavior of hypernuclei, and to deepen our understanding of the in-medium baryon interactions.
Due to their ability to provide a self-consistent and unified description of almost all nuclei on the nuclear chart, both non-relativistic and relativistic mean-field theories are widely used in the calculation of finite nuclei and nuclear matter, and have been extended to describe hypernuclear systems with strange degrees of freedom during the development of theoretical models <cit.>. As a key model utilized in this work, the relativistic mean-field theory has been extensively developed to study hypernuclear properties such as the hyperon separation energy <cit.>, spin-orbit splitting <cit.>, hyperon halos <cit.>, hypernuclear deformation <cit.>, cluster structures <cit.>, and drip lines <cit.>. While most theoretical models have primarily emphasized nonlinear self-coupling interactions for studying hypernuclei, a recent study explores the effective interactions for single-Λ hypernuclei within the density-dependent relativistic mean-field (DDRMF) model <cit.>. With three distinct fitting approaches, the authors propose six new sets of effective Λ N interactions and uncover a significant linear correlation between the ratios R_σ and R_ω, representing the scalar and vector coupling strengths, respectively, between these effective Λ N and NN interactions.
Recently, a new type of density-dependent relativistic mean-field Lagrangian, DD-LZ1, has been proposed, inspired by the restoration of pseudo-spin symmetry (PSS) and nuclear medium effects <cit.>. This new effective Lagrangian has produced satisfactory results in describing the properties of nuclear matter and finite nuclei. With its unique density-dependent form, DD-LZ1 eliminates the spurious shell closures that appeared in previous RMF calculations, and reasonably restores the PSS of high orbital angular momentum near the Fermi energy <cit.>. Applications of this new RMF Lagrangian have been performed for several nuclear many-body characteristics, in both finite nuclei with masses ranging from light to superheavy, and neutron star properties with densities ranging from low to high. For instance, a comprehensive macroscopic-microscopic model was developed to evaluate the total energies for even-even nuclei with proton numbers ranging from 8 to 110 <cit.>. Even with the appearance of hyperons <cit.>, larger maximum masses of neutron stars could be obtained with DD-LZ1 than with several other RMF parameter sets, providing the possibility that the secondary object observed in GW190814 is a neutron star <cit.>. Utilizing the Thomas-Fermi approximation, different microscopic structures of nonuniform nuclear matter were calculated for the crust of neutron stars, and a unified equation of state was established over a vast density range <cit.>. The different density-dependent behaviors of the meson-nucleon couplings impact the microscopic structures of neutron star matter obtained with DD-LZ1, and correspondingly affect the description of various physical processes and the evolution of neutron stars.
Apart from the different nuclear medium effects caused by the interactions themselves, the evolution of isospin also leads to significant changes in the in-medium effects in hypernuclei, thereby affecting the description of their structural properties. In recent years, a series of refined theoretical studies have been conducted on hypernuclei in different isotopic chains using various interaction models. For instance, the no-core shell model has been employed to investigate the systematic evolution of the ground- and excited-state energies in the Helium and Lithium hyperisotopes <cit.>. The antisymmetrized molecular dynamics method has been applied to explore the low-lying level structure of hypernuclei in the Beryllium hyperisotopes <cit.>. The multidimensionally constrained RMF model has been used to study the shape evolution of hypernuclei in the Argon hyperisotopes <cit.>. The beyond-mean-field approach has been utilized to discuss the evolution of p-state energies and composition in the Carbon hyperisotopes <cit.>, as well as the hyperon halo structures in the Boron and Carbon hyperisotopes <cit.>. These studies exhibit the significant role of isospin in the description of hypernuclear structure. In fact, with the development of hypernuclear spectroscopy, new experiments related to hypernuclei have been initiated, such as the planned measurements in the J-PARC project aiming to study the Λ hyperon binding energies in the neutron-rich hyperisotopes ^124-136_ΛSn <cit.>. These experiments will provide crucial information about the properties of hypernuclei under various isospin conditions.
In view of the essential role of nuclear in-medium effects in hypernuclear structure and their relevance to the isotopic evolution, we aim to further extend the density-dependent RMF model to investigate the structure of single-Λ hypernuclei in Oxygen hyperisotopes. First, we introduce the theoretical framework of the hypernuclear RMF approach in Sec. <ref>. Then, the Λ-nucleon (Λ N) effective interaction is determined by fitting the Λ separation energies to the experimental data for the DD-LZ1 Lagrangian. In the results and discussion of Sec. <ref>, the influence of nuclear in-medium effects on the isospin dependence of the hypernuclear bulk properties, the hyperon spin-orbit splitting, and the matter/hyperon radii is studied. Finally, a summary is given in Sec. <ref>.
§ DDRMF APPROACH FOR SPHERICAL SINGLE-Λ HYPERNUCLEI
To describe single-Λ hypernuclei within the meson-exchange relativistic mean-field theory, the covariant Lagrangian density serves as the foundation, which reads
ℒ = ℒ_B + ℒ_φ + ℒ_I,
where the terms of free fields read as
ℒ_B= ∑_Bψ̅_B(iγ^μ∂_μ-M_B)ψ_B,
ℒ_φ= +1/2∂^μσ∂_μσ-1/2m_σ^2σ^2-1/4Ω^μνΩ_μν+1/2m_ω^2ω^μω_μ
-1/4R⃗^μν·R⃗_μν+1/2m_ρ^2ρ⃗^μ·ρ⃗_μ-1/4F^μνF_μν,
where the index B (B') represents nucleon N or hyperon Λ, with its sum ∑_B over nucleon N and hyperon Λ. The masses of the baryon and mesons are given by M_B and m_ϕ (ϕ=σ, ω^μ, ρ⃗^μ), while Ω^μν, R⃗^μν and F^μν are the field tensors of vector mesons ω^μ, ρ⃗^μ and photon A^μ, respectively. The interaction between nucleon (hyperon) and mesons (photon) is involved by the Lagrangian ℒ_I,
ℒ_I=∑_Bψ̅_B (-g_σ Bσ-g_ω Bγ^μω_μ)ψ_B
+ψ̅_N (-g_ρ Nγ^μτ⃗·ρ⃗_μ-eγ^μ1-τ_3/2A_μ)ψ_N.
Here the Λ hyperon (namely ψ_B taken as ψ_Λ), which is charge neutral with isospin zero, only takes part in interactions mediated by the isoscalar mesons. The nuclear in-medium effects are introduced phenomenologically via the coupling strengths g_ϕ B (g_ϕ N), which are taken as baryon-density-dependent functions in the density-dependent RMF (DDRMF) approach to define the strengths of the different meson-baryon (meson-nucleon) couplings <cit.>.
The effective Hamiltonian operator for Λ hypernuclei can be obtained by performing the general Legendre transformation on the Lagrange density ℒ in Eq. (<ref>), and it can be written as the sum of the kinetic energy operator T̂ and the potential energy operator V̂_φ,
Ĥ≡ T̂+∑_φV̂_φ
= ∫ dx ∑_Bψ̅_B(x)(-iγ·∇+M_B) ψ_B(x)
+ 1/2∫ dx ∑_B∑_φ[ψ̅_B𝒢_φ Bψ_B]_x D_φ(x,x') [ψ̅_B'𝒢_φ B'ψ_B']_x',
here x is the four-vector (t,x). Correspondingly, we define the interaction vertices 𝒢_φ B(x) for the various meson (photon)-nucleon (hyperon) coupling channels, which for the isoscalar σ and ω mesons are represented as
𝒢_σ B(x) = +g_σ B(x),
𝒢_ω B^μ(x) = +g_ω B(x)γ^μ.
Notably, both nucleons and the Λ hyperon can contribute to the isoscalar meson fields. However, for the remaining isovector meson and photon fields, their interaction vertices are expected to connect solely to nucleons, owing to the isoscalar and charge-neutral nature of the Λ hyperon,
𝒢_ρ N^μ(x) = +g_ρ N(x) γ^μτ⃗,
𝒢_A N^μ(x) = +eγ^μ1-τ_3/2.
As the retardation effects could be neglected in the majority of RMF models, the meson (photon) propagators D_ϕ (D_A) read as
D_ϕ(x,x')=1/4πe^-m_ϕ|x-x'|/|x-x'|,
D_A(x,x')=1/4π1/|x-x'|.
The baryon field operator ψ_B in the Hamiltonian (<ref>) can be second quantized in the positive-energy space under the no-sea approximation as
ψ_B(x)
=∑_if_i(x)e^-iϵ_i tc_i.
Here, f_i represents the Dirac spinor, while c_i denotes the annihilation operator for the state i. Accordingly, the energy functional E is determined by evaluating the expectation value of the Hamiltonian with respect to a trial Hartree-Fock ground state |Φ_0⟩,
E = ⟨Φ_0|Ĥ| Φ_0⟩ = ⟨Φ_0|T̂| Φ_0⟩+∑_φ⟨Φ_0|V̂_φ| Φ_0⟩.
Then the binding energy of a Λ hypernucleus is written by
E= ∑_B(E_kin,B + E_σ,B + E_ω,B) + E_ρ,N+E_e.m. + E_c.m. + E_pair,
where the kinetic energy functional of baryons is shown by E_kin,B. The contributions of the potential energy functional from σ and ω are denoted by the variables E_σ,B and E_ω,B. Additionally, E_ρ,N and E_e.m. are used to represent the contributions from ρ and A, respectively. The center-of-mass adjustment to the mean-field is represented by the term E_c.m., while E_pair takes into account the contribution from nucleon pairing correlations <cit.>.
The role of deformation in single-Λ hypernuclei has been discussed in various density functional models <cit.>, and it may generate non-negligible effects on the single-particle energies, as in the Carbon hyperisotopes <cit.>. To describe single-Λ hypernuclei, in particular the Oxygen hyperisotopes discussed hereafter, we restrict the RMF approach to spherical symmetry.
f_nκ m(x) = 1/r([ iG_a(r)Ω_κ m(ϑ,φ); F_a(r)Ω_-κ m(ϑ,φ) ]),
where the index a consists of the set of quantum numbers (nκ) = (njl), and Ω_κ m is the spherical spinor. Meanwhile, the propagators can be expanded in terms of spherical Bessel and spherical harmonic functions as
D_ϕ(x,x^') = ∑_L=0^∞∑_M=-L^L(-1)^MR^ϕ_LL( r, r^') Y_LM(Ω)Y_L-M(Ω^'),
where Ω=(ϑ,φ), and R_LL contains the modified Bessel functions I and K as
R_L L^ϕ(r, r^') =√(1/rr^') I_L+1/2(m_ϕr_<) K_L+1/2(m_ϕr_>),
R_L L^A(r, r^') =1/2L+1r_<^L/r_>^L+1.
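As a numerical illustration, the radial functions R^ϕ_LL and R^A_LL above can be evaluated directly with the modified Bessel functions of half-integer order available in scipy (a sketch under our conventions, with r in fm and m_ϕ in fm^-1):

import numpy as np
from scipy.special import iv, kv

def r_ll_meson(L, r1, r2, m_phi):
    """R^phi_LL(r, r') = (1/sqrt(r r')) I_{L+1/2}(m_phi r_<) K_{L+1/2}(m_phi r_>)."""
    r_less, r_great = min(r1, r2), max(r1, r2)
    return (iv(L + 0.5, m_phi * r_less) * kv(L + 0.5, m_phi * r_great)
            / np.sqrt(r1 * r2))

def r_ll_photon(L, r1, r2):
    """R^A_LL(r, r') = r_<^L / ((2L+1) r_>^{L+1})."""
    r_less, r_great = min(r1, r2), max(r1, r2)
    return r_less**L / ((2 * L + 1) * r_great**(L + 1))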
In the DDRMF approach, the meson-baryon coupling strengths are adopted as a function of baryon density ρ_b, which are written by
g_ϕ B(ρ_b)=g_ϕ B(0) f_ϕ B(ξ) or
g_ϕ B(ρ_b)=g_ϕ B(0) e^-a_ϕ Bξ,
where ξ=ρ_b/ρ_0 with ρ_0 the saturation density of nuclear matter, and
f_ϕ B(ξ)=a_ϕ B [1+b_ϕ B(ξ+d_ϕ B)^2] / [1+c_ϕ B(ξ+d_ϕ B)^2].
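For orientation, the two density-dependent forms above can be encoded as small helpers; the parameter values (a_ϕB, b_ϕB, c_ϕB, d_ϕB, g_ϕB(0), ρ_0) are placeholders to be taken from the respective parameter set, e.g. DD-LZ1, and are not reproduced here:

import numpy as np

def coupling_rational(g0, a, b, c, d, rho_b, rho0):
    """g_phiB(rho_b) = g_phiB(0) f_phiB(xi) with the rational ansatz above."""
    xi = np.asarray(rho_b) / rho0
    return g0 * a * (1.0 + b * (xi + d) ** 2) / (1.0 + c * (xi + d) ** 2)

def coupling_exponential(g0, a, rho_b, rho0):
    """g_phiB(rho_b) = g_phiB(0) exp(-a_phiB xi), the alternative form above."""
    return g0 * np.exp(-a * np.asarray(rho_b) / rho0)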
The free coupling strength at zero density, ρ_b=0, is represented by g_ϕ B(0) in the expressions above. To keep the variational self-consistency between the energy density functional and the single-particle properties, extra terms in the baryon self-energies, namely the rearrangement terms, occur due to the density dependence of the coupling strengths. The single-particle (nucleon or hyperon) properties can be determined by solving the Dirac equation,
ε_a,B[ G_a,B(r); F_a,B(r) ] = [ Σ_+^B(r) -d/dr+κ_a,B/r; d/dr+κ_a,B/r -[2M_B-Σ_-^B(r)] ][ G_a,B(r); F_a,B(r) ].
Here the self-energies Σ_±^B=Σ_0,B±Σ_S,B are composed of the vector and scalar terms. The scalar self-energy is Σ_S,B = Σ_S,B^σ, and the time component of the vector one reads
Σ_0,B(r) = ∑_ϕΣ_0,B^ϕ(r)+Σ_R(r),
where ϕ=ω, ρ for nucleons, and ϕ=ω for the Λ hyperon. The self-energies of the nucleon or hyperon include the scalar one Σ_S,B and the vector one Σ_0,B, to which the coupling of the isoscalar mesons contributes as follows,
Σ_S,B^σ(r) =-g_σ B(r)∑_B^'∫ r^'2dr^' g_σ B^'(r^')ρ_s,B^'(r^')R^σ_00(r,r^'),
Σ_0,B^ω(r) =+g_ω B(r)∑_B^'∫ r^'2dr^' g_ω B^'(r^')ρ_b,B^'(r^')R^ω_00(r,r^').
Here, ρ_s,B and ρ_b,B represent the scalar and baryon densities, respectively <cit.>. Additionally, the rearrangement term Σ_R appears in the DDRMF approach, which contains the summation over all baryons for the isoscalar case of ϕ=σ,ω, but only over nucleons for the isovector one. For example, the contribution from the σ-S coupling is given as
Σ_R,σ(r)=∑_B 1/g_σ B ∂ g_σ B/∂ρ_b ρ_s,B Σ_S,B^σ(r).
§ RESULTS AND DISCUSSION
In recent years, there has been extensive theoretical research on hypernuclei, particularly focusing on the simplest single-Λ hypernuclei, using RMF and RHF theories. In this section, we aim to extend the effective interaction DD-LZ1 <cit.>, which has been proven to be successful and promising in determining the properties of nuclear structure in both bulk and single-particle aspects, to incorporate the Λ hyperon within the framework of the RMF model. To give a comparative study and illustrate the role of nuclear in-medium effects, the calculations with DD-LZ1 will be accompanied by several existing effective Λ N interactions within CDF models. These interactions have been significantly expanded to incorporate the degrees of freedom of the Λ hyperon and have yielded many successful findings in the study of hypernuclear structure and the properties of dense stars. In detail, the density-dependent RMF effective interactions DD-LZ1 <cit.>, PKDD <cit.>, DD-ME2, TW99, DDV <cit.>, the density-dependent RHF (DDRHF) effective interactions PKO1, PKO2, PKO3 <cit.>, and the nonlinear RMF (NLRMF) effective interactions NL-SH <cit.> and PK1 <cit.> were selected. In these CDF functionals, the ω-tensor coupling, which has been proved to be essential in reducing the Λ spin-orbit splitting in hypernuclei <cit.>, is ignored. The Dirac equation is solved in a radial box of size R=20 fm with a step of 0.1 fm. For open-shell hypernuclei, we employ the BCS method to account for pairing correlations. As the strength of hyperon pairing correlations remains uncertain and may become essential in multi-Λ hypernuclei, our current work solely considers pairing correlations between nn and pp pairs by using the finite-range Gogny force D1S <cit.>; see Refs. <cit.> for details. In addition, the blocking effect should be taken into account for the last valence nucleon or hyperon, with a detailed description given in Ref. <cit.>.
§.§ Density dependence of Λ N effective interaction
For the theoretical study of hypernuclear structure, the Λ N interaction must be determined first. Since the Λ hyperon is an electrically neutral particle with isospin zero, our focus lies on the coupling strengths of the isoscalar-scalar σ meson and the isoscalar-vector ω meson with the Λ hyperon. For convenience, we introduce the ratio of the coupling strengths between the meson-hyperon and meson-nucleon channels, g_ϕΛ/g_ϕ N. According to the naïve quark model <cit.>, we fix the ratio of the isoscalar-vector meson coupling strength g_ωΛ/g_ω N to 0.666, while the ratio of the isoscalar-scalar one g_σΛ/g_σ N is obtained by reproducing the experimental data of the Λ hyperon separation energy B_Λ for ^16_ΛO, ^40_ΛCa, and ^208_ΛPb <cit.>. In the fitting process, the hyperon is placed in the 1s_1/2 ground state, and B_Λ is defined as follows:
B_Λ(^A_Λ Z) = E(^A-1Z) - E(^A_Λ Z).
Based on the effective interaction DD-LZ1, we finally obtained a new set of Λ N interaction parameters, namely DD-LZ1-Λ1, after a fitting process of Levenberg-Marquardt minimization. Then, we calculated the Λ separation energy B_Λ as well as the single-Λ energy, with the hyperon occupying the ground state 1s_1/2 or possible excited states with higher angular momentum l_Λ. For B_Λ of DD-LZ1-Λ1, a remarkable agreement with the experimental data is found for most hypernuclei, except for ^28_ΛSi with significant deformation and the light-mass Carbon hyperisotopes, as shown in Fig. <ref>. Actually, a more accurate description of the light-mass Carbon hyperisotopes could be obtained by limiting the mass region of the fitting and taking into account the deformation effects <cit.>. To investigate the deviation in describing the structural properties of single-Λ hypernuclei using different CDF effective interactions, the coupling strengths of DD-LZ1-Λ1, in comparison with the other selected CDF functionals, are listed in Table <ref>. One could check the root-mean-square deviation Δ for B_Λ between the theoretical calculations and the experimental values, which is defined by
Δ≡√(1/N∑_i=1^N(B_Λ,i^exp.-B_Λ,i^cal.)^2).
To reveal the systematics, we define Δ_1 as the deviation only for ^16_ΛO, ^40_ΛCa, and ^208_ΛPb, as well as Δ_2 for all hypernuclei considered.
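Given a table of experimental and calculated B_Λ values, Δ (and likewise Δ_1, Δ_2 on the corresponding subsets of hypernuclei) is a one-liner:

import numpy as np

def rms_deviation(b_exp, b_cal):
    """Root-mean-square deviation between experimental and calculated B_Lambda (MeV)."""
    b_exp, b_cal = np.asarray(b_exp, float), np.asarray(b_cal, float)
    return float(np.sqrt(np.mean((b_exp - b_cal) ** 2)))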
From Table <ref>, it can be seen that the different CDF theoretical models give good descriptions for ^16_ΛO, ^40_ΛCa and ^208_ΛPb, and most parameter sets show good consistency between the hypernuclear theoretical calculations and the experimental data over a large mass range from ^12_ΛC to ^208_ΛPb. In addition, by comparing the three different types of CDF effective interactions, we find that when the ratio of the isoscalar-vector meson coupling strengths is fixed to the same value, the ratio of the isoscalar-scalar meson coupling strengths g_σΛ/g_σ N may satisfy a certain linear correlation with the ratio of the isoscalar-vector one, which has been systematically explored in some works <cit.>. It should be pointed out that the linear correlation of the meson-hyperon coupling strength ratios obtained in the RMF framework is obviously not suitable for density-dependent RHF models <cit.>.
In the DDRMF approach, the in-medium effects of the nuclear force are effectively embedded in the density-dependent shape of the meson-baryon coupling strengths, playing a role in nuclear structure via the equilibrium of nuclear dynamics from various coupling channels. In recent years, analysis based on the equilibrium of nuclear in-medium dynamics has been applied to clarify the mechanism of the pseudospin symmetry, the shell evolution, the liquid-gas phase transition, and the hyperon spin-orbit splitting in the CDF models <cit.>. The delicate in-medium balance between nuclear attractive and repulsive interactions may be significantly altered by treating the density dependence of the coupling strengths differently, impacting the description of the properties of nuclear matter and finite nuclei with different CDF effective interactions.
To provide a comprehensive understanding of the in-medium equilibrium in hypernuclei, we present the density dependence of coupling strengths for selected CDF effective interactions in Fig. <ref>(a) and Fig. <ref>(b), corresponding to the isoscalar-scalar channel g_σΛ and isoscalar-vector one g_ωΛ. There are systematic divergences of the meson-hyperon coupling strengths with increasing density among density-dependent RMF, density-dependent RHF, and nonlinear RMF effective interactions. Notably, the density dependence of g_σΛ and g_ωΛ is significantly reduced in the DDRHF effective interaction compared to the DDRMF effective interaction. This pronounced reduction in density dependence also influences the description of single-particle properties in hypernuclei, such as Λ hyperon spin-orbit splitting <cit.>. Furthermore, in contrast to density-dependent interactions, the NLRMF effective interaction exhibits density-independent characteristics for g_σΛ and g_ωΛ. Consequently, when applying these three types of CDF effective interactions to single-Λ hypernuclei, the systematic deviation could take place in describing the isospin dependence of the hypernuclear structure.
§.§ Bulk properties of single-Λ hypernuclei in Oxygen hyperisotopes
To focus on the isospin dependence of single-particle properties, we choose the Λ hypernuclei and their nucleonic counterparts in Oxygen (hyper)isotopes as examples, since they usually retain spherical symmetry. To check the accuracy of the chosen interactions in describing the properties of finite nuclei, we first calculated the binding energies E_B, charge radii R_c, and matter radii R_m for Oxygen isotopes using the DD-LZ1 effective interaction. We compared the theoretical calculations with experimental measurements, which were taken from Refs. <cit.>. From the results in Table <ref>, we can see that the theoretical calculations and the experimental measurements are in good agreement for both the binding energies E_B and the charge radii R_c with the interaction DD-LZ1. It is worth noting that the total matter radius R_m of finite nuclei, unlike the charge radius, still has significant uncertainties from heavy-ion reaction experiments. The theoretical calculations of R_m are consistent with the experimental measurements within the error bars.
Furthermore, we summarize in Table <ref> the systematics of the occupied energy level of the Λ hyperon, the single-particle energies of the Λ hyperon, the total binding energies, the charge radii, and the matter radii of hypernuclei in Oxygen hyperisotopes. In order to provide a possible reference for hypernuclear experiments, we also calculated the strength of the electric dipole transition B(E1) between the Λ1p and Λ1s occupation states. The transition strength is expressed as
B(E1;J_i⟶ J_f) = 3e_Λ^2/4π ⟨ f|r|i⟩^2 (2j_f+1) [ j_f 1 j_i; -1/2 0 1/2 ]^2,
where e_Λ represents the effective charge of the Λ hyperon and the 2×3 array denotes the Wigner 3j symbol. The integral ⟨ f|r|i⟩ can be computed using the radial wave functions of the initial and final single-Λ states; see Ref. <cit.> for details.
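For completeness, a small numerical sketch of this expression is given below; it uses sympy for the Wigner 3j symbol and a simple quadrature on a uniform radial mesh, and, as discussed next, keeps only the upper components G(r) of the Dirac spinors (normalization and effective charge are assumed to follow the conventions above):

import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j

def b_e1(r, g_i, g_f, j_i, j_f, e_lambda=1.0):
    """B(E1) between single-Lambda states from upper components on a uniform mesh r (fm)."""
    radial = np.sum(g_f * r * g_i) * (r[1] - r[0])      # <f| r |i>
    three_j = float(wigner_3j(Rational(j_f), 1, Rational(j_i),
                              Rational(-1, 2), 0, Rational(1, 2)))
    return 3.0 * e_lambda**2 / (4.0 * np.pi) * radial**2 * (2.0 * j_f + 1.0) * three_j**2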
In the framework of relativistic models, Dirac spinors with both upper and lower components could contribute to determining the value of B(E1). However, it is checked that the contribution from the lower component is negligible, especially for the non-charge-exchange channel. Therefore, only the contribution from the upper component is kept in the current calculations as a simplification. The inclusion of the Λ hyperon causes the so-called impurity effect inside hypernuclei <cit.>. When the Λ hyperon is filled in the 1s_1/2 state, we can see from the comparison of the total matter radii in Table <ref> and Table <ref> that the introduction of the hyperon causes a shrinkage effect on the hypernuclei, which is approximately 0.06-0.13 fm. Compared with the ground-state results, we observe a significant enhancement in the Λ root-mean-square radii when the hyperon is filled in the higher-lying 1p state. This change in the density distribution of the hyperon due to different level occupations leads to an overall expansion of the hypernuclear matter radii, different from the Λ1s case. Additionally, with the increase of neutron filling, the hyperon radii, the matter radii, and B(E1) all show significant isospin dependence, which can be qualitatively explained by the density dependence of the coupling strengths. As indicated in Table <ref>, when the Λ hyperon occupies the 1p state, its density distribution spreads more outward than that of the nucleonic core. As the isospin evolves, more neutrons are filled and their attraction to the hyperon increases, correspondingly leading to a significant reduction in the hyperon radius. The value of B(E1) is determined not only by the overlap between the initial and final states, which is sensitive to the neutron number, but also by the effective charge. As a result, the B(E1) values enlarge a little from ^15_ΛO to ^17_ΛO and then go down gradually as the isospin evolves after N=8.
§.§ Isospin dependence of Λ spin-orbit splitting
Motivated by the connection between the density-dependent effective interactions of theoretical models and the isospin-dependent properties of nuclear structure, the spin-orbit splitting of the Λ hyperon in hypernuclei, as a promising observable in current hypernuclear spectroscopy, will be discussed in this subsection with the newly developed DD-LZ1-Λ1 and other selected CDF functionals. The Λ spin-orbit splitting is defined as the difference of the Λ single-particle energies between a pair of spin partner states,
Δ E_SO^Λ≡ε_j_Λ=l_Λ-1/2 - ε_j_Λ=l_Λ+1/2.
As shown in Fig. <ref>, the analysis is carried out for Λ spin partner states 1p in Oxygen hyperisotopes, with the Λ hyperon occupying its ground state.
In Fig. <ref>(a), it is seen that the isospin dependence of Δ E_SO^Λ is clearly distinguished among the chosen CDF functionals. The curves from the NLRMF models tend to be stable with increasing neutron number, while for the density-dependent RMF or RHF functionals the splitting generally enlarges with isospin. Among them, DD-LZ1-Λ1 exhibits the most significant isospin dependence. Besides, it is clear that a smaller Λ spin-orbit splitting is predicted by DDRHF than by RMF, which has been illustrated as a consequence for single-particle properties, since the dynamical equilibrium between nuclear attraction and repulsion is dramatically changed with the appearance of the Fock terms <cit.>.
To better understand the evolution of the Λ spin-orbit splitting with isospin, we decompose Δ E_SO^Λ into various parts according to their sources in the kinetic or potential energies. The values are obtained by multiplying the Dirac equation Eq. (<ref>) from the left with the transposed Dirac spinor and separating the integrated contributions from the different self-energy terms. For instance, Δ E_rea comes from the contribution of the rearrangement term Σ_R to the Λ self-energy Σ_0,Λ, as seen in Eq. (<ref>), due to the density dependence of the meson-hyperon couplings. Consequently, the remaining part from the kinetic energy and the density-independent potential energies can be summed up, which means Δ E_kin+σ+ω≡Δ E_SO^Λ-Δ E_rea, as discussed in Fig. <ref>(b).
It is observed that the values of the Λ spin-orbit splitting are primarily determined by Δ E_kin+σ+ω. However, the isospin dependence of the splitting is only weakly controlled by Δ E_kin+σ+ω except for ^15_ΛO. Attributed to the occupation of the ν 1p_1/2 orbit, the Λ spin-orbit splitting predicted by the various CDF functionals systematically reduces from ^15_ΛO to ^17_ΛO. As has been illustrated in Ref. <cit.>, the spin-orbit coupling potential of the hyperon is determined mainly by the radial derivative of the self-energy Σ_-^Λ. In general, the more neutrons are filled into the hypernucleus, the higher the density environment in which the Λ hyperon resides. Thus, if the model is density dependent, like the DDRMFs and DDRHFs given in Fig. <ref>, the meson-hyperon coupling strength weakens and Δ E_SO^Λ should correspondingly become smaller as the neutron number increases. As seen in Fig. <ref>(b), such a reduction in Δ E_kin+σ+ω is remarkable from ^15_ΛO to ^17_ΛO, and relatively less significant at larger neutron numbers.
Different from the NLRMF case, the density-dependent CDFs introduce an extra contribution that reinforces the isospin dependence of the splitting, as demonstrated in Fig. <ref>(c), which overwhelmingly cancels the reduction trend in Δ E_kin+σ+ω and finally leads to the enhancement of Δ E_SO^Λ with increasing neutron number in Fig. <ref>(a). In fact, the contribution Δ E_rea to the Λ spin-orbit splitting originates from the rearrangement terms of the Λ self-energy Σ_0,Λ, which according to Eq. (<ref>) depends on the density slope of the meson-hyperon coupling strength. As the neutron number increases, the density environment in which the Λ hyperon resides becomes denser, leading to a weaker density dependence of the meson-hyperon coupling strength, a smaller density slope, and consequently a suppressed value of Δ E_rea. Therefore, the link between the isospin evolution of the Λ spin-orbit splitting and the in-medium behavior of the Λ N interaction with baryon density is elucidated from the discussion on Oxygen hyperisotopes. In consequence, possible experimental constraints on Δ E_SO^Λ along the hyperisotopes could assist us further in understanding the in-medium effects of the nuclear force.
§.§ Isospin dependence of matter and hyperon radii
Among the properties of hypernuclear structure, not only the Λ spin-orbit splitting but also the Λ impurity effect can reveal information on the in-medium nuclear interactions. In Fig. <ref>(a), we select the DDRMF functionals DD-LZ1-Λ1 and DD-ME2, the DDRHF one PKO1-Λ1, and the NLRMF one PK1 to illustrate the influence of the impurity effect on the matter radii of Oxygen (hyper)isotopes, where the solid and dash-dotted lines correspond to the calculated results for the single-Λ hypernuclei and their nucleonic counterparts, respectively. The matter radius R_m of the hypernuclei goes up monotonically as the neutron number increases, regardless of the specific model used, where the steep leap from ^23_ΛO to ^25_ΛO corresponds to the new occupation of the ν 2s_1/2 orbit.
Although divergent values are given for the Oxygen isotopes without a hyperon, all of the selected models get closer in the size of the matter radii for the hypernuclei, implying that R_m of hypernuclei could be a possible model-independent observable. It is evident that the matter radii of the Oxygen hyperisotopes contract as compared to their nucleonic counterparts, namely the size shrinkage due to the impurity effect of the Λ hyperon. However, the shrinkage magnitude appears to be strongly model dependent. Among them, the DDRMF effective Lagrangian DD-LZ1-Λ1 yields the largest difference between the solid and dash-dotted lines, whereas the NLRMF one PK1 shows the smallest disparity. By checking the bulk properties of nuclear matter within these CDFs, it is verified that the shrinkage magnitude correlates well with the incompressibility, which is 230.7 MeV for DD-LZ1, 250.8 MeV for DD-ME2, 250.2 MeV for PKO1, and 282.7 MeV for PK1, respectively <cit.>. In fact, the larger the incompressibility K is, the harder it is for the nucleus to be contracted by the attraction exerted by the embedded hyperon, and consequently the weaker the size shrinkage in the calculated matter radii. A similar relation can be found in Table II of a work on the isoscalar giant monopole resonance of hypernuclei, where the effective nuclear incompressibility modulus was extracted <cit.>.
To further distinguish the effects of different interactions on the description of the hypernuclear structure, we investigate the isospin evolution of the Λ hyperon radius R_Λ in Oxygen hyperisotopes using all selected CDF effective interactions, as shown in Fig. <ref>. It is clearly seen that R_Λ evolves diversely along the Oxygen hyperisotopes with the different CDF effective interactions. Some effective interactions, such as PKO3-Λ1, DD-ME2, DDV, and DD-LZ1-Λ1, exhibit a reduced R_Λ with increasing neutron number. In particular, DD-LZ1-Λ1 gives the smallest hyperon radii among all chosen CDFs, and a strong declining trend. In fact, the core polarization effect due to the Λ hyperon plays a significant role in this evolution. When the Λ hyperon occupies the 1s_1/2 state, its density distribution is concentrated inside the hypernucleus. As a result, the Λ's coupling or attraction with the nucleons in the core (here corresponding to ^16O) appears relatively stronger than that with the valence nucleons. Hence, the evolution of the hyperon radius can be understood, to a large extent, from the size change of the core with respect to the neutron number.
The variation of the matter radii of the ^16O core in Oxygen (hyper)isotopes is plotted in Fig. <ref>(b) with respect to the neutron number. From N=8 to 14, in contrast to the situation of the total matter radii R_m, there is no consistent isospin dependence of the core radius R_m^core with increasing neutron number among the selected CDFs. The nonlinear RMF functional PK1 exhibits a significant increasing trend with isospin, while the density-dependent RMF one DD-LZ1-Λ1 shows a noticeable decrease. Consequently, the hyperon radius R_Λ exhibits a similar isospin dependence resulting from the core polarization effect, determined mainly by the different isospin properties of the CDF functionals in the nucleon-nucleon channels. From such an analysis, the importance of nuclear in-medium effects in affecting the hyperon radii is unveiled. Thus, the divergent isospin evolution of R_Λ given by the CDFs with different density-dependent meson-baryon couplings makes it a valuable tool for elucidating the in-medium behavior of the nuclear force.
§ SUMMARY
In summary, considering the significance of nuclear in-medium effects in nuclear many-body problems, such as eliminating the spurious shell closures, we expanded the newly developed DDRMF Lagrangian DD-LZ1 to incorporate the Λ hyperon degree of freedom and determined the Λ N effective interaction by fitting the experimental data of the Λ separation energies for several single-Λ hypernuclei. Subsequently, together with several other CDF functionals, features including the Λ separation energy, the B(E1) transition, and the evolution of the spin-orbit splitting as well as the characteristic radii were analyzed in detail along the Oxygen (hyper)isotopes.
By comparing the results obtained from different CDF models, we further investigated the crucial impact of nuclear in-medium effects on accurately describing the properties of hypernuclei, in terms of both bulk and single-particle properties. For the 1p spin-orbit splitting of the Λ hyperon, significant differences in the isospin dependence are observed among the selected CDF effective interactions in Oxygen hyperisotopes. As the neutron number increases, the density environment in which the hyperon resides gradually increases, which causes the meson-hyperon coupling strengths that determine the hypernuclear properties to change as well. In particular, the density-dependent CDF effective interactions introduce additional rearrangement terms that significantly enhance the isospin dependence of the Λ spin-orbit splitting, leading to a more distinct variation of Δ E_SO^Λ with neutron number in the DDRMF and DDRHF models.
The evolution of the hypernuclear matter radius with isospin was further investigated. A significant model dependence in the magnitude of the size shrinkage due to the inclusion of the Λ hyperon is observed, where the DDRMF functional DD-LZ1-Λ1 displays the largest shrinkage effect. The result was then explained by an anticorrelation between the incompressibility coefficient K of nuclear matter and the hyperon radii R_Λ, providing a possible way to constrain the hyperon distribution inside a hypernucleus from the better-determined bulk properties of nuclear matter. Additionally, it is found that the isospin evolution of the hyperon radius is primarily influenced by the density-dependent behavior of the chosen CDF functional in the NN interaction channel via the core polarization. Thus, the sensitivity in depicting these hyperon-relevant properties in CDF models with a variety of different meson-baryon couplings holds great potential for elucidating the in-medium nature of the nuclear force in both the Λ N and NN channels.
|
http://arxiv.org/abs/2307.05872v1 | 20230712015633 | Effect of spin-orbit coupling on the zero-point renormalization of the electronic band gap in cubic materials: First-principles calculations and generalized Fröhlich model | [
"Véronique Brousseau-Couture",
"Xavier Gonze",
"Michel Côté"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected]
Département de physique, Université de Montréal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec, Canada H3C 3J7
Institute of Condensed Matter and Nanosciences, UCLouvain, B-1348 Louvain-la-Neuve, Belgium
Département de physique, Université de Montréal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec, Canada H3C 3J7
The electronic structure of semiconductors and insulators is affected by ionic motion through electron-phonon interaction, yielding temperature-dependent band gap energies and zero-point renormalization (ZPR) at absolute zero temperature. For polar materials, the most significant contribution to the band gap ZPR can be understood in terms of the Fröhlich model, which focuses on the non-adiabatic interaction between an electron and the macroscopic electrical polarization created by a long-wavelength longitudinal optical phonon mode. On the other hand, spin-orbit interaction (SOC) modifies the bare electronic structure, which will, in turn, affect the electron-phonon interaction and the ZPR. We present a comparative investigation of the effect of SOC on the band gap ZPR of twenty semiconductors and insulators with cubic symmetry using first-principles calculations. We observe a SOC-induced decrease of the ZPR, up to 30%, driven by the valence band edge, which almost entirely originates from the modification of the bare electronic eigenenergies and the decrease of the hole effective masses near the Γ point. We also incorporate SOC into a generalized Fröhlich model, addressing the Dresselhaus splitting which occurs in non-centrosymmetric materials, and confirm that the predominance of non-adiabatic effects on the band gap ZPR of polar materials is unchanged when including SOC. Our generalized Fröhlich model with SOC provides a reliable estimate of the SOC-induced decrease of the polaron formation energy obtained from first principles and brings to light some fundamental subtleties in the numerical evaluation of the effective masses with SOC for non-centrosymmetric materials. We finally warn about a possible breakdown of the parabolic approximation, one of the most fundamental assumptions of the Fröhlich model, within the physically relevant energy range of the Fröhlich interaction for materials with high phonon frequencies treated with SOC.
This is a post-peer-review version of the article published in Physical Review B, which includes the Supplemental Material in the main file.
Effect of spin-orbit coupling on the zero-point renormalization of the electronic band gap in cubic materials: First-principles calculations and generalized Fröhlich model
Michel Côté
August 12, 2023
===========================================================================================================================================================================
§ INTRODUCTION
Electron-phonon interaction (EPI) has been widely investigated from a theoretical point of view since the late 1940s, through pioneering works of Pekar <cit.>, Landau and Pekar <cit.>, Fröhlich <cit.>, and Feynman <cit.>, and numerous subsequent works <cit.>. Using model Hamiltonians, those first theories essentially addressed the interaction of an electron in an isotropic, continuous medium with the macroscopic polarization induced by longitudinal optical (LO) long-range lattice vibrations, which results in a correlated state called a polaron. The Fröhlich model has since become the cornerstone of modern large-polaron studies. From a first-principles point of view, the works of Allen, Heine and Cardona <cit.> (AHC) in the 1980s clarified earlier theories by Fan <cit.> and Antončík <cit.>. They provided a unified formalism for the EPI self-energy, rooted in many-body perturbation theory, which addresses all types of lattice vibrations.
Despite their fundamentally different perspectives, the model Hamiltonian and first-principles approaches address the same problem, namely, the consequences of EPI on the electronic structure. Amongst numerous effects on transport and optical properties of materials <cit.>, EPI modifies the quasiparticle energy and introduces finite quasiparticle lifetimes, which depend on the phonon population at a given temperature. As a consequence, the electronic structure does not only acquire a temperature dependence: it is affected even at absolute zero temperature, through the zero-point motion of the ions. This T=0 K correction is known as the zero-point renormalization (ZPR). From the Fröhlich model perspective, the band edge ZPR corresponds to the polaron formation energy.
In recent years, considerable efforts have been directed towards tackling the Fröhlich interaction within the full complexity of real materials as captured by first-principles methods (see Ref. <cit.> and references therein). Among others, Sio et al. <cit.> developed a first-principles theory of polarons, later reformulated using a variational principle <cit.>. More recently, Lafuente-Bartolome et al. proposed a self-consistent many-body Green's function theory which simultaneously addresses phonon-induced band structure renormalization and small polaron formation <cit.>. From another perspective, Houtput and Tempere <cit.> derived anharmonic corrections to the Fröhlich Hamiltonian, and Kandolf et al. <cit.> and Macheda et al. <cit.> investigated the Fröhlich interaction in doped solids. Other works proposed models retaining
certain fundamental assumptions of the original Fröhlich model while lifting some of its hypotheses. Schlipf et al. <cit.> addressed the case of multiple phonon branches,
relying on the first-principles Fröhlich vertex proposed by Verdi and Giustino <cit.>. Miglio et al. <cit.> introduced a generalized Fröhlich model (gFr), based on a simplified electron-phonon vertex, that allows for multiple phonon branches, degenerate band extrema and anisotropic band warping. The authors used this model to reveal the predominance of non-adiabatic effects in the ZPR of semiconductors and insulators and explain why including such effects in calculations is essential to obtain an agreement between the first-principles band gap ZPR (ZPR_g) and experimental data. Their gFr model was recently used to obtain polaron effective masses and localization lengths in cubic materials <cit.>, as well as to investigate the domain of applicability of the Fröhlich model using a high-throughput computational framework <cit.>.
One question which remains unaddressed in Ref. <cit.> is the effect of spin-orbit coupling (SOC). It is well known that SOC lifts the spin degeneracy of the Bloch states throughout the Brillouin zone, except at time-reversal invariant 𝐤 points. For the valence band maximum (VBM) of cubic materials, which is triply-degenerate when neglecting SOC, this partially lifts the degeneracy: the two split-off bands are moved to lower energies compared to the heavy hole and light hole bands, which remain degenerate at the Γ point. This loss of degeneracy could affect the ZPR_g predicted by the gFr model. In addition to the electronic eigenvalues, the inclusion of a SOC term in the external potential of the first-principles Hamiltonian will also have repercussions on the first-order Hamiltonian perturbed by atomic displacements, which is a key quantity for computing the ZPR.
SOC has often been neglected throughout the literature when investigating the Fröhlich interaction, since strong polaronic effects are most likely to occur in materials where the LO phonon frequency is large. Such systems typically contain light atoms (e.g. oxides), for which SOC can reasonably be expected to be weak.
Some theoretical studies have addressed the consequences of SOC on EPI in 2D materials <cit.>, mainly through Rashba-Holstein <cit.> and Rashba-Fröhlich <cit.> model Hamiltonians. To the best of our knowledge, only Trebin and Rössler <cit.> explicitly investigated the effect of SOC on the Fröhlich polaron for triply-degenerate band extrema in 3D materials. However, they relied on an isotropic model Hamiltonian, thus neglecting the effect of band warping.
From the first-principles perspective, density-functional perturbation theory calculations including SOC have been available for about 15 years <cit.>. Other formalisms relying on finite differences and distorted supercells <cit.>, as well as the recent special displacement method <cit.>, have also been used to investigate this question. Nevertheless, SOC remains commonly neglected in ZPR calculations to this day. Full first-principles EPI calculations with SOC are typically done on a case-by-case basis <cit.>.
Rashba materials <cit.>, for which SOC is known to have a profound impact on either the electronic structure or the phonon frequencies, and topological materials <cit.>, in which SOC is necessary to induce the band inversion, have naturally been investigated by including SOC in first-principles EPI calculations. Some compound-specific comparative studies have been made, for example, in PbTe <cit.>, CH_3NH_3PbI_3 <cit.> and BAs <cit.>, as well as when investigating the superconducting coupling constant <cit.>. Yet, even in the most simple case of cubic materials, the effect of SOC on EPI and the ZPR has not received the thorough investigation it deserves.
In this article, we investigate the effect of SOC on the ZPR of twenty semiconductors using the non-adiabatic AHC framework. We focus on representative cubic materials, as their well-characterized electronic structure provides a simple framework to investigate the mechanisms at play. Their triply-degenerate VBM also proves ideal to investigate the effect of SOC on the polaron formation energy of degenerate extrema within the gFr model. We evaluate the first-principles ZPR with the AHC methodology and extend the generalized Fröhlich model of Miglio et al. <cit.> to include SOC. First-principles calculations show that
spin-orbit coupling reduces the zero-point renormalization of the valence band edge by 15%–30% for the heavier materials, e.g. the tellurides. We address the SOC-induced Dresselhaus splitting <cit.> occurring in non-centrosymmetric materials, which shifts the band extrema slightly away from their location without SOC in reciprocal space. The leading mechanism driving the observed SOC-induced decrease of the ZPR_g is found to be the variation of the electronic eigenenergies of the occupied bands and the decrease of the hole effective masses near the Γ point. We also confirm the claims of Miglio et al. <cit.> regarding the predominance of non-adiabatic effects in the ZPR_g of polar materials. We relate the results from the two approaches and bring to light some limitations of the approximations inherent to the gFr model when SOC is considered.
Section <ref> presents an overview of the theoretical concepts used throughout this work. We first review the AHC formalism for EPI (Sec. <ref>), then briefly discuss some key consequences of SOC in the first-principles perspective (Sec. <ref>) before demonstrating how to incorporate SOC into the gFr model of Ref. <cit.> (Sec. <ref>) and investigating the consequences of Dresselhaus splitting on our results (Sec. <ref>). Section <ref> provides the relevant technical details regarding our calculations. We respectively analyze our first-principles and gFr model results in Secs. <ref> and <ref>, then summarize our findings in Sec. <ref>.
§ METHODOLOGY
§.§ AHC formalism
In the following, we briefly summarize the key concepts of the nonadiabatic Allen-Heine-Cardona (AHC) framework <cit.>. We work with the Hartree atomic unit system, such that ħ=m_e=c=|e|=1.
Within the many-body perturbation theory formalism, the electron-phonon interaction at temperature T affects the electronic Green's function through a frequency-dependent electron-phonon self-energy, Σ_𝐤n(ω, T), where 𝐤 and n are respectively the electron wavevector and band index. At the lowest order of perturbation, known as AHC theory <cit.>, the self-energy contains two terms, called the Fan and Debye-Waller (DW) contributions:
Σ^AHC_𝐤n(ω, T) = Σ^Fan_𝐤n(ω, T) + Σ^DW_𝐤n(T).
The dynamical Fan self-energy contains two first-order vertices treated at second order in perturbation theory, while the static Debye-Waller self-energy has one second-order vertex treated at first order in perturbation theory. The Feynman diagrams corresponding to these contributions are shown in Fig. <ref>.
Note that we implicitly suppose that the full self-energy matrix can be approximated by its diagonal counterpart, i.e. Σ_𝐤n n'∝δ_n n'. The non-diagonal contributions hybridize the unperturbed electronic eigenstates within the interacting Green's function <cit.> and become important when the band gap nearly vanishes. These can be safely neglected here as we work with semiconductors and insulators.
Within this framework, the temperature dependence of an electronic eigenstate with eigenvalue ε_𝐤n then reads
ε_𝐤n(T) = ℜ𝔢[Σ^AHC_𝐤n(ω=ε_𝐤n(T), T)] + ε^0_𝐤n.
From this point, we work exclusively at T=0 K. The ZPR of an electronic eigenstate |𝐤n⟩ is obtained from Eq. (<ref>),
ZPR_𝐤n = ε_𝐤n(T=0) - ε^0_𝐤n,
while the band gap ZPR is the difference between the ZPR of the conduction and valence band edges (respectively, ZPR_c and ZPR_v),
ZPR_g = ZPR_c - ZPR_v.
We apply the on-the-mass-shell approximation to Eq. (<ref>), thus evaluating the Fan self-energy at the poles of the Green's function, namely, at the bare electronic eigenvalue, ε^0_𝐤n,
Σ^Fan_𝐤n(ε_𝐤n(T=0), T=0)≈Σ^Fan_𝐤n(ω=ε^0_𝐤n, T=0).
Furthermore approximating the interacting electronic Green's function by its non-interacting Kohn-Sham counterpart obtained from density-functional theory (DFT), one obtains the standard expression for the non-adiabatic Fan self-energy <cit.>,
Σ_𝐤n^Fan(ε^0_𝐤n, T=0) =
∑_𝐪ν^BZ∑_n'
|⟨𝐤+𝐪n'|∇_𝐪ν V^KS|𝐤n⟩|^2
×[ (1 - f_𝐤+𝐪n')/(ε^0_𝐤n-ε^0_𝐤+𝐪n'-ω_𝐪ν + iη_𝐤n)
+ f_𝐤+𝐪n'/(ε^0_𝐤n-ε^0_𝐤+𝐪n'+ω_𝐪ν + iη_𝐤n)].
The contributions of all phonon modes with frequency ω_𝐪ν are summed over all wavevectors 𝐪 and branch indices ν in the Brillouin zone (BZ). In Eq. (<ref>) and throughout this work, all phonon-mode summations are implicitly normalized by the number of phonon wavevectors used to sample the Brillouin zone. Since we work at T=0 K, the Fermi-Dirac occupation functions, f_𝐤+𝐪n', are either 1 for the occupied states or 0 for the conduction bands. The small imaginary parameter η_𝐤n = η sgn(ε^0_𝐤n-μ), with μ the chemical potential and η real and positive, shifts the poles of the Green's function in the complex plane to maintain causality. Without SOC, the electronic bands are implicitly spin degenerate.
The electron-phonon matrix elements squared,
|g_𝐤n n'^Fan(𝐪ν)|^2 ≜ |⟨𝐤+𝐪n'|∇_𝐪ν V^KS|𝐤n⟩|^2,
capture the probability that an electron in eigenstate |𝐤n⟩ with energy ε^0_𝐤n interacts with a 𝐪ν-phonon, given the self-consistent first-order variation of the Kohn-Sham potential (labeled with superscript KS) induced by the collective atomic motion along this phonon mode <cit.>. The operator ∇_𝐪ν expressed in the position basis can be written as
∇_𝐪ν =
1/√(2ω_𝐪ν)∑_κα U_ν, κα(𝐪) ∑_l e^i𝐪·R_l ∂/∂R_lκα
=
1/√(2ω_𝐪ν)∑_κα U_ν, κα(𝐪) ∂_κα(𝐪),
where R_lκα denotes the displacement of atom κ, located in unit cell l, in cartesian direction α. The phonon eigendisplacement vector, U_ν, κα(𝐪), verifies the generalized eigenvalue equation
M_κω_𝐪ν^2 U_ν,κα(𝐪) = ∑_κ' α'Φ_κκ'^αα'(𝐪) U_ν, κ' α'(𝐪)
and the normalization condition
∑_καM_κ U_ν,κα^∗(𝐪)U_ν',κα(𝐪) = δ_νν',
where M_κ is the atomic mass of atom κ. The dynamical matrix, Φ_κκ'^αα' (𝐪), is the Fourier transform of the second derivative of the total energy with respect to two atomic displacements,
Φ_κκ'^αα'(𝐪) = ∑_l e^i𝐪·R_l ∂^2 E/∂R_lκα∂R_0κ'α'.
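As a concrete illustration of the phonon eigenproblem and normalization convention above, the minimal Python sketch below solves the generalized eigenvalue problem and verifies the mass-weighted orthonormality. The two-atom unit cell, the atomic masses and the random positive-definite stand-in for the dynamical matrix are hypothetical; in practice Φ(𝐪) comes from density-functional perturbation theory.

```python
# Minimal sketch: phonon frequencies and eigendisplacements from a (hypothetical)
# dynamical matrix, following the generalized eigenvalue problem above.
import numpy as np
from scipy.linalg import eigh

natom = 2
masses = np.array([28.0855, 15.999]) * 1822.888   # amu -> electron masses (atomic units)

# Stand-in for the Hermitian dynamical matrix Phi(q), shape (3*natom, 3*natom);
# a random symmetric positive semi-definite matrix is used for illustration only.
rng = np.random.default_rng(0)
A = rng.normal(size=(3 * natom, 3 * natom))
Phi = A @ A.T

# Diagonal mass matrix: Phi U = omega^2 M U.
M = np.diag(np.repeat(masses, 3))

w2, U = eigh(Phi, M)           # columns of U are the eigendisplacements U_nu
freqs = np.sqrt(np.abs(w2))    # phonon frequencies omega_{q nu} (atomic units)

# eigh with the mass "metric" returns eigenvectors obeying U^dag M U = 1, i.e. the
# normalization sum_{k,alpha} M_k U*_nu U_nu' = delta_{nu nu'} used in the text.
print(np.allclose(U.conj().T @ M @ U, np.eye(3 * natom)))
```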
For its part, the Debye-Waller self-energy is formally defined as <cit.>
Σ^DW_𝐤n =
∑_𝐪ν^BZ 1/2⟨𝐤n|∇_𝐪ν∇_-𝐪ν V^KS|𝐤n⟩.
The direct evaluation of the second-order derivative of the Kohn-Sham potential with respect to atomic displacements entering Eq. (<ref>) is a computational bottleneck in the density-functional perturbation theory approach. By applying the rigid-ion approximation, i.e. assuming that the potentials created by each nucleus are independent of each other, one can replace the second-order derivatives by the same first-order derivatives entering Σ^Fan <cit.>, yielding
Σ^DW, RIA_𝐤n(T=0) = ∑_𝐪ν^BZ∑_n'≠ n -1/(4ω_𝐪ν) |g^DW_𝐤n n'(𝐪ν)|^2/(ε^0_𝐤n-ε^0_𝐤n'+iη),
where RIA stands for rigid ion approximation and where
|g^DW_𝐤n n' (𝐪ν)|^2 =
∑_κκ'∑_αα'[U_ν,κα (𝐪)U_ν,κα' (𝐪)^∗+U_ν,κ'α (𝐪)U_ν,κ'α' (𝐪)^∗]
×⟨𝐤n|V^(1)_κα(0)^∗|𝐤+𝐪n'⟩⟨𝐤+𝐪n'|V^(1)_κ'α'(0)|𝐤n⟩,
with
V^(1)_κα(0) = ∂_κα(𝐪=0)V^KS,
following the definition of the operator ∂_κα(𝐪) in the second line Eq. (<ref>). The consequences of the rigid-ion approximation on the ZPR have been discussed in Ref. <cit.> for crystals and in Ref. <cit.> for molecules.
§.§ Spin-orbit interaction
We now examine how SOC can affect Eqs. (<ref>) and (<ref>).
Upon inclusion of SOC, the electronic wavefunction becomes a spinor,
|𝐤n⟩ = [ |𝐤n↑⟩; |𝐤n↓⟩ ],
and the Hamiltonian, a matrix,
Ĥ_𝐤 =
[ H_𝐤↑↑ H_𝐤↑↓; H_𝐤↓↑ H_𝐤↓↓ ].
In real space, the general form of the SOC contribution to the electronic Hamiltonian writes <cit.>
Ĥ^SOC(r) = 1/4(∇ V(r)×P̂)·σ,
where P̂ is the momentum operator and σ are the Pauli matrices.
For a plane wave basis set and norm-conserving pseudopotentials, SOC only enters the Hamiltonian through the electron-ion term. Assuming that the pseudopotentials are fully separable and substituting the Coulomb potential in Eq. (<ref>), one recovers the typical L·S term from introductory quantum mechanics. For a single atom, one gets <cit.>
V^e-ion(r, r') = ∑_l V_l^SR(r, r') |ls⟩⟨ls|
+ ∑_l V_l^SOC(r, r') L·S|ls⟩⟨ls|,
where V_l^SR(r, r') and V_l^SOC(r, r') follow the Kleinman-Bylander construction <cit.>,
V_l^x = f_l^x(r)E_l^KB,xf_l^x(r'), x∈{SR, SOC},
where E_l^KB,x is the Kleinman-Bylander energy <cit.>.
SR stands for the scalar-relativistic contribution to the electron-ion potential (hence, without SOC), and |ls⟩⟨ls| is the projector on the tensor product subspace of angular momentum L and spin S, which has dimension 2(2l+1). Detailed expressions for V_l^SR and V_l^SOC can be found in Ref. <cit.>. No magnetism is considered, such that the electronic density is given by a single scalar function, ρ(r).
The consequences of SOC on the explicit density-functional perturbation theory equations have been derived in Refs. <cit.> and <cit.> for norm-conserving pseudopotentials. In our case, the general form of the equations presented in Sec. <ref> remains unchanged, but all the relevant physical quantities, i.e. ω_𝐪ν, ε^0_𝐤n and the electron-phonon matrix elements squared, Eqs. (<ref>) and (<ref>), now capture the effect of SOC. There is no implicit sum on the spin degree of freedom, as the spinorial electronic wavefunctions mix the spin-up and spin-down components.
§.§ Generalized Fröhlich model
In the following, we discuss how to incorporate SOC into the generalized Fröhlich model developed in Ref. <cit.>. For completeness, we start by reviewing the key elements of this model. First neglecting SOC, the Hamiltonian, at first order in the Fröhlich interaction, writes:
H = ∑_𝐤nσ θ k^2/(2m^∗_n(𝐤̂)) c^†_𝐤nσ c_𝐤nσ +
∑_𝐪j ω_0j(𝐪̂)(a^†_𝐪j a_𝐪j
+ 1/2)
+ ∑_𝐤n n'σ∑_𝐪j
g^gFr_𝐤n n'(𝐪j) c^†_𝐤+𝐪n'σ c_𝐤nσ(a_𝐪j +
a^†_-𝐪j).
The first term corresponds to parabolic bare electronic eigenenergies ε_𝐤n with direction-dependent effective mass m^∗_n(𝐤̂), while the second term allows for multiple phonon branches j with direction-dependent Einstein frequency ω_0j(𝐪̂), evaluated at the zone center Γ. The c^†_𝐤nσ, c_𝐤nσ, a^†_𝐪j, a_𝐪j are respectively the creation and annihilation operators for electrons and phonons, while 𝐤̂ and 𝐪̂ are unit vectors. The parameter θ gives the sign of the effective mass: θ=-1 for the holelike bands and θ=1 for the electronlike bands. The sum on spin index σ implies that all electronic states are doubly degenerate.
The last term couples the electron and phonon subsystems through the Fröhlich interaction, with matrix element
g^gFr_𝐤n n'(𝐪j) = 1/q 4π/Ω_0 (1/(2ω_0j(𝐪̂)V_BvK))^1/2 (𝐪̂·p_j(𝐪̂))/ϵ^∞(𝐪̂)
×∑_m s_n' m(𝐤̂')s_n m^∗(𝐤̂),
where 𝐤' = 𝐤+𝐪, Ω_0 is the primitive unit cell volume, V_BvK is the Born-von Karman normalization volume associated with the 𝐤 and 𝐪 samplings, and ϵ^∞ is the macroscopic optic dielectric constant, obtained from the dielectric tensor,
ϵ^∞(𝐪̂) = ∑_αβq̂_αϵ^∞_αβq̂_β.
Here, p_j(𝐪̂) is the mode-polarity vector of phonon mode j <cit.>, constructed from the Born effective charges, Z^∗_κα,α', and the phonon eigendisplacement vectors, U_j,κα(𝐪), summing over all Cartesian directions α and all atoms κ in the unit cell,
p_j,α'(𝐪̂) = lim_q→ 0∑_κα Z^∗_κα,α'U_j,κα(q𝐪̂).
Note also that our formulation of the matrix element, Eq. (<ref>), relies on the Born and Huang convention for the phonon eigenvectors <cit.>, which implies the following relation:
U_j,κα(-𝐪) = U^∗_j,κα(𝐪),
such that Eq. (<ref>) is hermitian. See Ref. <cit.> for a thorough discussion of the different phase conventions in the literature.
The unitary matrix s_nm(𝐤̂)
describes the direction-dependent overlap between the electronic states at the band extrema located at Γ and states along the 𝐤 direction, in the k→ 0 limit, computed from the periodic part of the wavefunction (indicated by the subscript P):
s_nm(𝐤̂) = lim_k→0⟨k𝐤̂ n|Γ m⟩_P.
While we set the band extrema at Γ for convenience, the previous definition allows for a band extremum located at any wavevector in the Brillouin zone.
In all previous expressions, the sums on electronic band indices n, n' and m run over the degenerate subset of bands connected to the extrema, thus allowing interband couplings within this subset. The sum on phonon branches j is restricted to LO modes, as p_j(𝐪̂) is zero otherwise.
When SOC is considered, the Hamiltonian is no longer diagonal in spin space. We can, however, define new electronic creation and annihilation operators, c̃^†_𝐤n and c̃_𝐤n,
such that the electronic part of the Hamiltonian can be written as
H^el = ∑_𝐤n ε^SOC_𝐤n c̃^†_𝐤n c̃_𝐤n.
In order to formulate a Fröhlich Hamiltonian for the SOC case, we will suppose that the relevant part of the electronic structure is at a band extremum, with a quadratic departure from the extremal eigenenergy as a function of the wavevector.
This is the same hypothesis as for the generalized Fröhlich model without SOC.
Generally speaking, this hypothesis is correct when the band extrema are non-degenerate (except for the spin degeneracy) in the absence of SOC.
It also holds when the starting band extremum is degenerate, provided the typical spin-orbit coupling energy is much larger than the phonon energy, so that, once SOC is applied, one is left with a new band extremum with a quadratic departure of the eigenenergy over a sufficiently large zone, namely the one where the phonon energy is relevant.
Supposing this hypothesis to be valid, we take the extremum eigenvalue as zero of energy, and expand the
eigenvalue as
ε^SOC_𝐤n = θ k^2/(2m̃_n^∗(𝐤̂)),
which captures the modification of the electronic effective masses near the band extrema induced by SOC, m̃_n^∗(𝐤̂) replacing m_n^∗(𝐤̂). We also neglect spin-phonon interaction, thus assuming that SOC affects the vibrational properties through electronic properties only. Within these assumptions, we recover an expression identical to Eq. (<ref>), in which the sum on σ has been absorbed inside the new electronic operators. The starting point of Ref. <cit.> can therefore be taken as implicitly incorporating the effects of SOC on the electronic and vibrational properties. From now on, we simplify the notation by dropping all tildes on electronic quantities which include SOC, i.e. m̃_n^∗(𝐤̂)→ m_n^∗(𝐤̂).
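In practice, the direction-dependent effective masses entering this parabolic expansion can be extracted from a quadratic fit of the first-principles dispersion (with SOC) along each direction near the extremum. The sketch below illustrates the idea; eps_along_k is a hypothetical callable (e.g. an interpolator of DFT eigenvalues) and the fitting range kmax must be chosen small enough for the parabolic approximation to hold.

```python
# Sketch: direction-dependent effective mass from a quadratic fit near the extremum.
import numpy as np

def effective_mass(eps_along_k, khat, kmax=0.01, npts=9, theta=-1):
    """Fit eps(k) ~ eps0 + theta * k^2 / (2 m*) along khat; return m* (electron masses)."""
    k = np.linspace(0.0, kmax, npts)
    eps = np.array([eps_along_k(ki * np.asarray(khat)) for ki in k])
    c2 = np.polyfit(k, eps - eps[0], 2)[0]   # quadratic coefficient of the fit
    return theta / (2.0 * c2)

# Toy check with a parabolic hole-like band of mass 0.5 m_e (theta = -1):
toy = lambda kvec: -np.dot(kvec, kvec) / (2 * 0.5)
print(effective_mass(toy, [0, 0, 1]))        # ~0.5
```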
We now follow the same procedure as described in Section 5 of the Supplementary Notes of Ref. <cit.>: we substitute g^gFr_𝐤n n'(𝐪j) for the matrix elements in the general expression for Σ^Fan (Eq. (<ref>)) and, as per the original Fröhlich model, take the continuum macroscopic limit, replacing the discrete sum over 𝐪 by an integral over the 𝐪 coordinate, thus extending the Brillouin zone boundaries to infinity. Since we only consider interband contributions within the degenerate subset of bands connected to the extrema, only the second (first) term inside the brackets of Eq. (<ref>) contributes for the holelike (electronlike) bands. Taking the q→0 limit of the denominator for purely parabolic electronic bands,
we are left with
ZPR^gFr_n_θ = -θ/(πΩ_0)∫ d^3𝐪 ∑_jn 1/q^2 |s_n,n_θ(𝐪̂)|^2/ω_0j(𝐪̂) (𝐪̂·p_j(𝐪̂)/ϵ^∞(𝐪̂))^2
×1/[q^2/(2m^∗_n(𝐪̂)) + ω_0j(𝐪̂)],
where n_θ=-1 is the band index of the VBM and n_θ=1, that of the conduction band minimum (CBM).
Using spherical coordinates, the radial part of this three-dimensional integral has an analytic solution of the form
∫_0^∞ dq 1/C_1 q^2+C_2 = 1/√(C_1 C_2)π/2,
where the parameters C_1=(2m^∗_n(𝐪̂))^-1 and C_2=ω_0j(𝐪̂) are positive. Recall that, for the VBM, the negative curvature of the electronic bands is parametrized by θ. This yields
ZPR^gFr_n_θ = -θ/(√(2)Ω_0)∮_4π d𝐪̂∑_jn |s_n,n_θ(𝐪̂)|^2 (m^∗_n(𝐪̂))^1/2/ω_0j(𝐪̂)^3/2
×(𝐪̂·p_j(𝐪̂)/ϵ^∞(𝐪̂))^2.
With the previous definition of θ, we thus obtain a positive (negative) ZPR for the VBM (CBM).
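The analytic radial integral used above is easily verified numerically; the snippet below compares a quadrature result with π/(2√(C_1 C_2)) for representative, hypothetical values of the effective mass and LO frequency.

```python
# Quick numerical check of: int_0^infty dq / (C1 q^2 + C2) = (pi/2) / sqrt(C1 C2), C1, C2 > 0.
import numpy as np
from scipy.integrate import quad

C1, C2 = 1.0 / (2 * 0.8), 0.003     # e.g. m* = 0.8 m_e and omega_LO = 0.003 Ha (hypothetical)
val, _ = quad(lambda q: 1.0 / (C1 * q**2 + C2), 0.0, np.inf)
print(val, np.pi / (2.0 * np.sqrt(C1 * C2)))   # the two numbers agree
```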
Up to this point, no special treatment was made to account for SOC in the Fröhlich interaction itself, beyond incorporating it implicitly in the static electronic and vibrational properties, i.e. m_n^∗(𝐪̂), ω_0j(𝐪̂), p_j(𝐪̂) and ϵ^∞(𝐪̂) are computed with SOC.
We finally argue that the treatment of |s_n,n_θ(𝐪̂)|^2 based on the point group symmetry argument detailed in the Supplementary Information of Ref. <cit.> remains valid in the presence of SOC. For this paper, we will treat the 3×2→ 4+2 degeneracy arising from a cubic space group, taking the VBM of cubic materials as a typical example. The argument could be generalized to any space group symmetry using group theory.
As the degeneracy arises from symmetry, i.e. it is not accidental, the degenerate electronic wavefunctions at the extrema can be decomposed in a basis of orthonormal eigenfunctions that form an irreducible representation of the symmetry group, 𝒢=T_d for the zincblende structure and 𝒢=O_h for the diamond structure. Without SOC, this basis contains three eigenfunctions, denoted {|X⟩, |Y⟩, |Z⟩}, which each are doubly degenerate in the spin space. When considering SOC, the basis functions {|X↑⟩, |Y↑⟩, |Z↑⟩, |X↓⟩, |Y↓⟩, |Z↓⟩} no longer form a good basis choice as they do not form an irreducible representation of the double group, 𝒢⊗ D_1/2. One rather has to use linear combinations of those states, namely, the fourfold {|j=3/2⟩} states for the degenerate heavy hole and light hole bands, which form the VBM, and the twofold {|j=1/2⟩} states should one wish to evaluate the ZPR for the split-off bands.
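The 3×2 → 4+2 pattern invoked here can be made explicit by diagonalizing an atomic-like L·S term in the {|X⟩,|Y⟩,|Z⟩}⊗{↑,↓} manifold, as in the minimal sketch below. The overall SOC strength is set to one (an assumption); only the degeneracy pattern, fourfold j=3/2 and twofold j=1/2, matters for the present argument.

```python
# Sketch: 3x2 -> 4+2 splitting of p-like states under an L.S term (hbar = 1).
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# l = 1 angular momentum in the Cartesian |X>,|Y>,|Z> basis: (L_a)_{bc} = -i eps_{abc}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
L = -1j * eps                      # L[a] is the 3x3 matrix for Cartesian component a

LdotS = sum(np.kron(L[a], s) for a, s in enumerate((sx, sy, sz)))
evals = np.round(np.linalg.eigvalsh(LdotS), 6)
print(evals)   # -> [-1, -1, 0.5, 0.5, 0.5, 0.5]: twofold j=1/2 (split-off), fourfold j=3/2
```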
We now express the eigenstates entering the s_nm(𝐪̂) overlap integrals (Eq. (<ref>)) in this basis. The |Γ v⟩
state, where v=n_θ=-1 is the band index of the VBM, can be written as
|Γ v⟩ = ∑_m∈{±3/2, ± 1/2} u_vm|3/2m⟩,
where u_vm is the coefficient of the basis function |3/2 m⟩, and the |q𝐪̂n⟩ state becomes
|q𝐪̂n⟩ = ∑_m∈{±3/2, ± 1/2}|3/2m⟩⟨3/2m|q𝐪̂n⟩
= ∑_m∈{±3/2, ± 1/2}
s_nm(𝐪̂)|3/2m⟩,
where s_nm(𝐪̂) is the overlap integral with the basis function |3/2m⟩. The q→ 0 limit is implied.
Substituting the last two equations in Eq. (<ref>), we obtain an expression which is identical to the Supplementary Eqs. (24) and (25) of Ref. <cit.>, with the double sum on m,m'∈{X,Y,Z} replaced by a double sum on m,m'∈{±3/2, ±1/2}. The remainder of the argument thus holds, yielding a final expression for the ZPR^gFr which has the same form as their Eq. (6):
ZPR^gFr_n_θ = -∑_jn θ/(√(2)Ω_0 n_deg)∮_4πd𝐪̂ (m^∗_n(𝐪̂))^1/2/ω_0j(𝐪̂)^3/2 (𝐪̂·p_j(𝐪̂)/ϵ^∞(𝐪̂))^2,
in which n_deg is now the degree of degeneracy of the band extrema in the presence of SOC. As the n summation is made over degenerate states, the division by n_deg yields
an average over the degenerate states. Note that this last expression can be further simplified when applied to cubic systems, as in that case the phonon frequencies and mode-polarity vectors do not depend on the wavevector orientation, and the dielectric tensor is isotropic (see Eq. (45) of Ref. <cit.>).
For materials whose vibrational properties are not significantly affected by SOC, such as cubic semiconductors and insulators, the modification of the electronic effective masses induced by SOC will have a dominant effect on the ZPR^gFr.
The reduced dimensionality of the irreducible representation at the extrema also plays a role. However, it is not directly due to the smaller number of degenerate states contributing to the ZPR^gFr. Indeed, Eq. (<ref>) makes it clear that an average over degenerate bands is to be computed, not a simple sum of contributions.
The modification of the ZPR^gFr due to spin-orbit coupling
is analytically obtained in the isotropic degenerate model of Trebin and Rössler [32], see their Eq.(13), combined with the effective masses from their Eqs. (6a) and (6b).
§.§ Generalized Fröhlich model in the presence of Dresselhaus splitting
In Sec. <ref>, we implicitly assumed that the location of the band extrema in reciprocal space is unchanged by the inclusion of SOC, i.e. it remains at the high-symmetry, degenerate 𝐤-point. However, for non-centrosymmetric materials such as those of zincblende structure, SOC acts as an effective magnetic field which splits the previously spin-degenerate states. As a consequence, the band extrema are slightly displaced from their location without SOC, both in energy and momentum (see Fig. <ref> of Appendix B), thus breaking, in principle, one of our underlying hypotheses. This effect was originally discussed by Dresselhaus <cit.> in 1955. In the following, we analyze the consequences of Dresselhaus splitting on Eq. (<ref>) for the degenerate 𝐤-point (i.e. the Γ point in the current work). See Appendix B for more details about the Dresselhaus Hamiltonian and its consequences on the band structure of zincblende materials.
In the presence of Dresselhaus splitting, the electronic dispersion of band n at wavevector 𝐪 no longer follows Eq. (<ref>), but rather reads
ε^SOC_𝐪n = θ (q-k^0_n(𝐪̂))^2/(2m^∗_n(𝐪̂)) - θΔ E_n(𝐪̂),
where k^0_n(𝐪̂) and Δ E_n(𝐪̂) are respectively the momentum and energy offsets characterizing the Dresselhaus splitting of band n along direction 𝐪̂. As for the effective masses, we define Δ E_n(𝐪̂) as positive and let θ parametrize the sign of the energy offset. Equation (<ref>) therefore becomes
ZPR^gFr_n_θ = -θ/(πΩ_0)∫ d^3𝐪 ∑_jn 1/q^2
× f(𝐪̂)/[(q-k^0_n(𝐪̂))^2/(2m^∗_n(𝐪̂)) + ω_0j(𝐪̂)-Δ E_n(𝐪̂)],
where f(𝐪̂) is a purely angular function which includes the two rightmost fractions of the first line of Eq. (<ref>).
As in Sec. <ref>, we express the integral in spherical coordinates. However, instead of the usual integral boundaries, we rather integrate on half the sphere (note the lower bound of the cosθ integral), while simultaneously extending the lower bound of the radial integral to -∞:
I = ∫ d^3𝐪 ∑_jn 1/q^2 f(𝐪̂)/[(q-k^0_n(𝐪̂))^2/(2m^∗_n(𝐪̂)) + ω_0j(𝐪̂)-Δ E_n(𝐪̂)]
= ∫_0^1 d(cosθ)∫_0^2πdϕ∫_-∞^∞ dq ∑_jn f(𝐪̂)/[(q-k^0_n(𝐪̂))^2/(2m^∗_n(𝐪̂)) + ω_0j(𝐪̂)-Δ E_n(𝐪̂)].
With the change of variable q'(𝐪̂)=q-k^0_n(𝐪̂), this integral becomes
I = ∫_0^1 d(cosθ)∫_0^2πdϕ∫_-∞^∞ dq' ∑_jn f(𝐪̂)/[(q'(𝐪̂))^2/(2m^∗_n(𝐪̂)) + ω_0j(𝐪̂)-Δ E_n(𝐪̂)].
Note that, by construction, the unit vectors 𝐪̂ and 𝐤̂^0_n point in the same direction, hence 𝐪̂'=𝐪̂. Renaming the integration variable q'→ q, we note that the contribution of the momentum offset has been exactly eliminated. One can then return the radial and angular integral boundaries to their usual values and perform the radial integral, which takes a form similar to Eq. (<ref>):
∫_0^∞ dq 1/(C_1q^2+C_2'),
where the parameter C_2 has been replaced by C_2'=ω_0j(𝐪̂)-Δ E_n(𝐪̂). The parameter C_1 and the angular function f(𝐪̂), respectively defined below Eq. (<ref>) and in Eq. (<ref>), therefore remain unchanged by Dresselhaus splitting.
which enter f(𝐪̂), discussed in Sec. <ref>, is also unaltered.
In light of this analysis, one finds that, in presence of Dresselhaus splitting, Eq. (<ref>) generalizes exactly to
ZPR^gFr_n_θ = -∑_jn θ/(√(2)Ω_0 n_deg) ∮_4πd𝐪̂ (𝐪̂·p_j(𝐪̂)/ϵ^∞(𝐪̂))^2
× (m^∗_n(𝐪̂))^1/2/[ω_0j(𝐪̂)√(ω_0j(𝐪̂)-Δ E_n(𝐪̂))].
Supposing that Δ E_n(𝐪̂) is small compared to ω_0j(𝐪̂), one can further simplify the last expression by Taylor expanding the inverse square root, yielding
\mathrm{ZPR}^{\mathrm{gFr}}_{n_\theta} = -\sum_{jn} \frac{\theta}{\sqrt{2}\,\Omega_0\, n_{\mathrm{deg}}} \oint_{4\pi} d\hat{\mathbf{q}}\, \left(\frac{\hat{\mathbf{q}}\cdot \mathbf{p}_j(\hat{\mathbf{q}})}{\epsilon^{\infty}(\hat{\mathbf{q}})}\right)^{2} \frac{\left(m^\ast_n(\hat{\mathbf{q}})\right)^{1/2}}{\omega_{0j}(\hat{\mathbf{q}})^{3/2}}
\left(1 + \frac{\Delta E_n(\hat{\mathbf{q}})}{2\,\omega_{0j}(\hat{\mathbf{q}})}\right).
Comparing this expression with Eq. (<ref>), one finds that the energy offset stemming from the Dresselhaus splitting slightly enhances the ZPR at the Γ point (recall that Δ E_n(𝐪̂) is defined as positive). The angular-dependent energy offset can therefore be interpreted as direction-dependent modulation of the integrand. Should the contribution of the energy offsets be neglected, one recovers Eq. (<ref>), which can now be seen as a lower bound to the true value of ZPR^gFr. An upper bound could also be obtained by estimating the largest value of Δ E_n(𝐪̂) for a given physical system. Equation (<ref>) could also, in principle, be used to investigate the Fröhlich-induced ZPR of the degenerate band crossing points in Rashba systems <cit.>, provided that the LO frequency is larger than the largest energy offset. Greater care would be required otherwise, as Eq. (<ref>) would have poles.
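To make the size of this direction-dependent enhancement concrete, the short sketch below compares the exact factor 1/√(1-ΔE/ω_0) with its first-order expansion 1+ΔE/(2ω_0), for toy values of ΔE/ω_0 (in the materials studied here this ratio is of the order of 10^-2 or smaller):

```python
# Illustration of the Dresselhaus enhancement of the Froehlich-type ZPR:
# exact factor 1/sqrt(1 - dE/w0) versus its first-order expansion
# 1 + dE/(2*w0). Toy numbers only.
import numpy as np

w0 = 21.0e-3                    # toy LO phonon energy (eV)
for dE in (1e-5, 1e-4, 1e-3):   # toy energy offsets
    exact = 1.0 / np.sqrt(1.0 - dE / w0)
    first_order = 1.0 + dE / (2.0 * w0)
    print(f"dE/w0 = {dE / w0:.1e}  exact = {exact:.6f}  first order = {first_order:.6f}")
```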
Lastly, if we consider the Dresselhaus-split bands to be independent of each other, the known polaron effective mass enhancement induced by the isotropic Fröhlich interaction <cit.> should remain valid even in presence of anisotropic bands (see the numerical results of Ref. <cit.>). This suggests that EPI would attenuate the (already small) magnitude of the energy and momentum offsets stemming from SOC. However, the effect of the possible couplings between the almost-degenerate bands, as well as continuity conditions between the single band picture at the true extrema and the degeneracies at the Γ point, remain to be investigated. Besides, as the momentum offset plays no role in Eqs. (<ref>) and (<ref>), one can conclude that noncentrosymmetric cubic materials retain the physical picture of Γ-centered parabolic bands, which has been well corroborated by the experimental literature. See, for example, Fig. 2(b) of Ref. <cit.>.
§ COMPUTATIONAL DETAILS
§.§ First-principles calculations
All first-principles calculations were performed with the Abinit software package <cit.>. The bulk ground state properties were obtained from density functional theory, while vibrational properties and electron-phonon coupling were computed within density-functional perturbation theory <cit.>. When SOC is taken into account, it is included both in the ground state and the density-functional perturbation theory calculations. We use norm-conserving pseudopotentials from the Pseudo-Dojo project <cit.> and rely on the generalized gradient approximation of the Perdew-Burke-Ernzerhof functional (PBE-GGA) <cit.>. The lattice parameters were optimized in the absence of SOC until all forces on the atoms were below the chosen convergence threshold, except for Ge, where we used the experimental lattice parameter, as the lattice parameter optimized with the PBE-GGA functional would otherwise predict a metallic ground state.
In order to isolate the effect of SOC on the EPI, we kept the lattice parameter fixed to the theoretical value without SOC. The electron-phonon self-energy was computed with the Python module <cit.>. All relevant calculation parameters, including the relaxed lattice parameters, the maximal plane wave energy, the Monkhorst-Pack samplings of the Brillouin zone for the electron and phonon wavevectors, and the broadening parameter η for the self-energy can be found in Table S2 of the Supplemental Material <cit.>.
We evaluate the sum on band index n' in the self-energy (Eq. (<ref>) and (<ref>)) using a semi-static approach <cit.>: we replace the explicit evaluation of the non-adiabatic contribution of the high energy bands, namely, bands where the phonon frequencies are negligible compared to the difference between the electronic eigenenergies, by the solution of a Sternheimer equation <cit.> for the subspace orthonormal to the active subspace. We chose the explicit number of bands in the active subspace such that the energy difference between the CBM and the highest band was at least 20 eV. We finally obtain the converged ZPR values in the N_q→∞ limit using the linear extrapolation method described in Ref. <cit.>.
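The extrapolation step can be illustrated with a few lines of Python. The sketch below assumes, purely for illustration, hypothetical ZPR values and a linear dependence on the inverse linear 𝐪-point density N_q^{-1/3}; the actual extrapolation variable is the one prescribed in the cited reference:

```python
# Hypothetical sketch of a linear extrapolation of the ZPR to the dense
# q-point limit. The grid sizes and ZPR values are placeholders, and a
# linear dependence on N_q**(-1/3) is assumed for illustration only.
import numpy as np

n_q = np.array([4**3, 8**3, 12**3, 16**3])        # hypothetical q-point grid sizes
zpr = np.array([-140.0, -152.0, -156.0, -158.0])  # hypothetical ZPR values (meV)

x = n_q ** (-1.0 / 3.0)
slope, intercept = np.polyfit(x, zpr, 1)
print(f"extrapolated ZPR (N_q -> infinity): {intercept:.1f} meV")
```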
§.§ Generalized model
The generalized model (Eq. (<ref>)) relies on the evaluation of angular-averaged square-root effective masses. From the first-principles perspective, electronic effective masses are typically computed either from finite differences or from density-functional perturbation theory. In the absence of SOC, we use the latter to evaluate the effective mass tensor, or, in the case of degenerate states, the transport-equivalent effective mass tensor defined in Ref. <cit.>.
When SOC is taken into account, the calculation of the effective mass tensor from density-functional perturbation theory is not currently implemented in the Abinit code for norm-conserving pseudopotentials.
Hence, we evaluate the effective masses with SOC using order-4 central finite differences from the first-principles electronic eigenvalues. The VBM of zincblende materials constitutes a special case, as the electronic dispersion displays Dresselhaus splitting due to the lack of inversion symmetry <cit.>. Therefore, we model the dispersion with the Dresselhaus Hamiltonian <cit.> and obtain the effective masses from quadratic fits. For comparison, we also compute the angular-averaged effective masses for the VBM using the electronic dispersion obtained from the Luttinger-Kohn <cit.> Hamiltonian in the presence of SOC. See Appendixes A and B for more details about our treatment of the VBM. When Dresselhaus splitting is noticeable near the CBM of zincblende materials, we evaluate the effective masses from quadratic fits using the first-principles electronic dispersion. We finally note that, despite not being very accurate, the electronic effective masses computed with GGA-PBE are sufficient for the purpose of this work, which focuses on EPI. Lastly, as the effective masses computed from PBE for GaAs at the theoretically relaxed lattice parameter are particularly small, we also provide results computed at the experimental lattice parameter <cit.>.
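As an illustration of the finite-difference step, the hedged sketch below evaluates an effective mass from eigenvalues sampled along a given direction with an order-4 central stencil (atomic units, ħ = m_e = 1; the toy parabolic dispersion stands in for the first-principles eigenvalues):

```python
# Minimal sketch of an order-4 central finite-difference estimate of an
# effective mass from eigenvalues sampled along one direction.
import numpy as np

def effective_mass(eps, h=1e-3):
    # order-4 central stencil for the second derivative at k = 0
    stencil = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / (12.0 * h**2)
    ks = np.array([-2 * h, -h, 0.0, h, 2 * h])
    d2e = np.dot(stencil, [eps(k) for k in ks])
    return 1.0 / d2e

toy_mass = 0.4  # toy parabola eps(k) = k^2 / (2 m*)
print(effective_mass(lambda k: k**2 / (2.0 * toy_mass)))  # ~0.4
```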
§ RESULTS AND DISCUSSION
§.§ First-principles
§.§.§ Effect of SOC on the VBM and CBM ZPR
The effect of SOC on the ZPR computed from first principles for our twenty materials is shown in Fig. <ref>, for both the VBM (left, circle markers) and the CBM (right, square markers). Both subfigures display the ZPR reduction ratio, ZPR(SOC)/ZPR(no SOC), with respect to the split-off energy, Δ_SOC, which is a direct indicator of the SOC's strength. The color scale indicates the absolute value of the SOC-induced correction to the ZPR for each band extrema, |ZPR(SOC)-ZPR(no SOC)|. Numerical values can be found in Table S3 of the Supplemental Material <cit.>.
On the one hand, one can observe that the relative decrease of the valence band edge ZPR, ZPR_v, qualitatively increases with Δ_SOC. Recall that, for zincblende materials, the leading orbital character of the VBM is p states from the anion. Hence, the least affected materials are those with the lighter anions, namely all sulfides and Si, for which the relative decrease is below 5%. The selenides, arsenides and Ge display an intermediate relative decrease, while the heavier materials in our set, AlSb and the tellurides, see their ZPR_v reduced by 15%–30%. However, the numerical value of |ZPR_v(SOC)-ZPR_v(no SOC)| remains small, under 6 meV for all materials. Nevertheless, the color scale clearly indicates that the absolute value of the correction increases with Δ_SOC. Small absolute differences were to be expected, as stronger SOC is naturally present in heavier materials, which typically display smaller ZPR.
However, the effect of SOC we observe on the ZPR_v, hence on the real part of the self-energy, is not nearly as significant as the relative impact reported in the literature for the hole mobility <cit.>, which can be over 10% in weak-SOC materials like Si and reach more than 50% in heavier materials. In this context, taking SOC into account reduces the number of scattering channels, hence increasing the mobility. In contrast, despite their respective contributions being reduced by SOC, all phonon wavevectors still contribute to the ZPR. Recall that the mobility depends on the electron-phonon self-energy through the relaxation time of the electronic states, which goes as the inverse of the imaginary part of Σ^AHC <cit.>. Thus, one could expect the inverse relaxation time of a given electronic state when including SOC to decrease by a similar ratio as the mobility when SOC is neglected. Nevertheless, it is not entirely clear how those two ratios should correlate, as the inverse relaxation time is defined for each electronic state, while the mobility is a global quantity integrated on the BZ, hence all the neighboring states around the Γ point contribute to the hole mobility. While we have not attempted a full study of the imaginary part of the self-energy for all materials, we observe a relative decrease of the imaginary part of Σ near the VBM which is larger than the ones reported in Fig. <ref>a) for the ZPR of AlSb, ZnTe, CdTe and Si. This agrees with the trends reported in the literature for the mobility. See Sec. S2 and Fig. S3 of the Supplemental Material <cit.> for more details. As the real and imaginary parts of Σ^AHC are related to one another by the Kramers-Kronig relations, further investigation will be required to fully understand the effect of SOC on the full electron-phonon self-energy.
The CBM, on the other hand, displays very little modification of the ZPR from SOC. The relative decrease remains under 5% for all materials, including AlSb and some tellurides. These results are in line with the atomic picture, in which an s-like band such as the CBM of zincblende materials is not affected by SOC since l=0. This argument does not hold for the d-like CBM of the rocksalt alkaline earth chalcogenides; in that case, the conduction band edge ZPR (ZPR_c) decrease remains negligible as the CBM is well-isolated in energy from the other bands. The only three exceptions to this trend are ZnTe (20%), GaAs (9%) and CdSe (8%).
CdTe is absent from the CBM figure as its ZPR_c changes sign when including SOC, going from –0.4 to 0.4 meV.
Nevertheless, these seemingly large relative corrections are not numerically significant: the absolute correction remains under 1 meV for all materials (see the color scale), which is within the typical numerical accuracy of this type of calculations.
To understand these different tendencies from a qualitative point of view, one can picture a typical zincblende band structure. In Fig. <ref>, the energy bands of CdTe with SOC (solid red lines) have been shifted such that the VBM coincides with the VBM without SOC (dashed blue lines). The Fermi energy without SOC has been set to zero. When comparing both sets of bands, one can see that the general shape of the unoccupied subset of bands is scarcely affected by SOC, other than a relatively small energy shift between spin-split bands, located mostly close to the Brillouin zone boundaries. Recall that the Fan and Debye-Waller contributions to the self-energy have opposite signs (see Eq. (<ref>) and (<ref>)) and almost equal magnitudes. Nevertheless, when considering the contribution from either the occupied or unoccupied subset of bands to the AHC self-energy, the Fan term typically governs the net sign of the renormalization in semiconductors. Hence, as
the CBM is mostly repelled by couplings with nearby conduction states of higher energy, one can deduce that the ZPR of the CBM will barely be affected by SOC, since the effective mass does not change significantly and the band extremum is well isolated in energy. The three apparent outliers, CdSe, ZnTe and GaAs, all feature a direct fundamental gap at the zone center, small CBM effective masses and a relatively small band gap energy (albeit always much larger than the highest phonon frequency, GaAs with SOC being the exception). In light of the Kane model <cit.>, recall that the CBM and VBM at the Γ point are linked through an avoided crossing. The CBM is thus indirectly affected by SOC, through its interplay with the heavy hole and
light hole p-like bands. A small band gap reinforces this interaction, resulting in a greater relative decrease of the ZPR_c for these three materials. We do not observe such an effect in indirect band gap materials like AlSb.
When rather considering the VBM, one can observe the loss of degeneracy predicted by group theory, as the two split-off bands have been shifted by Δ_SOC below the heavy hole and light hole bands. Figure <ref> also reveals two main consequences of SOC on the occupied bands: on the one hand, the effective masses in the vicinity of the zone center are reduced, and on the other hand, the energy shift between the spin-split bands occurs throughout the Brillouin zone, thus globally lowering the eigenenergies of the occupied states with respect to the VBM energy.
With this simple picture in mind, our results suggest that the effect of SOC can be safely neglected for band extrema which are well-isolated in energy, should the modification of the effective mass remain small. In contrast, degenerate extrema or densely entangled bands must be treated more carefully.
We finally emphasize that one must not systematically overlook the apparent small magnitude of the SOC corrections to the ZPR_v.
What may at first be perceived as only a few meV nevertheless captures a significant relative decrease for the heavier materials, that reaches 15–30% of the predicted ZPR_v without SOC. This effect cannot be neglected when aiming for predictive results, especially if one seeks to validate numerical predictions with experimental data.
§.§.§ Experiment vs first principles
We now examine how the inclusion of SOC affects the global agreement between ZPR_g and available experimental data.
To make a fair comparison with experimental data, the first-principles data shown in Fig. <ref> include the theoretical contribution of the zero-point expansion of the lattice <cit.>. This term originates from the phonon contribution to the total free energy of the crystal, which increases the T=0 K lattice parameter compared to the static equilibrium value and, in turn, affects the band gap energy (see Appendix D for more details). Note that the scales are logarithmic. The shaded gray area highlights the region where both ZPR (first-principles and experimental) agree within 25% of each other. As experimental values of the ZPR are obtained from extrapolation procedures rather than from direct measurements, we have to keep in mind when analyzing the accuracy of theoretical results that there is an experimental uncertainty which can be quite substantial, especially when few experimental datasets are available for a given material. See the Supplementary Note 1 of Ref. <cit.> for a detailed discussion about the uncertainties associated with the experimental values of the ZPR.
The results without SOC (blue circles) are equivalent to the non-adiabatic AHC data shown in Fig. 2 of Ref. <cit.> for the ten materials considered.
When SOC is taken into account (red diamonds), all materials now lie within the tolerance criterion, including CdTe, which is largely overestimated without SOC. Thus, SOC does not alter the quantitative agreement between first principles and experiment, although Ge and GaAs reach the lower limit of the tolerance criterion.
One can also wonder if the greater predictability of the nonadiabatic AHC approach compared to the adiabatic supercell method claimed in Fig. 2 of Ref. <cit.> (see empty red triangles, labeled ASC-DFT) would remain upon inclusion of SOC. Although we have not attempted any adiabatic supercell calculation with SOC, our results suggest that the inclusion of SOC would not reduce the significant underestimation of the ZPR by adiabatic supercells for the lighter, more ionic materials. For intermediate to strong SOC, our data show a reduction of the total band gap ZPR ranging between 8%–34% (see the rightmost column of Table S3 of the Supplemental Material <cit.>). Should we infer that a similar effect would be observed in adiabatic supercell calculations, one could expect the result obtained from adiabatic supercells based on DFT calculations for CdTe to lie inside the tolerance criterion. At the same time, the underestimation would worsen for CdSe, and the ZnSe adiabatic supercell result would likely exit the shaded area. Our results, therefore, support the general conclusion of Miglio et al. <cit.> regarding the adiabatic supercell method being outperformed by the non-adiabatic AHC approach.
§.§.§ Origin of the SOC-induced ZPR decrease
In Sec. <ref>, we discussed the effect of SOC on the ZPR in terms of the variation of the electronic eigenvalues. We now refine this analysis by constructing histograms of the different contributions to ZPR_v with respect to the norm of the phonon wavevector, resolved on a 𝐪-point grid. Figure <ref> displays such a decomposition for two polar materials, CdS (left), where SOC has little effect on the calculated ZPR, and CdTe (right), where, on the contrary, the calculated ZPR is more strongly reduced by SOC. The bottom panels show the distribution of the Brillouin zone weight for the different wavevector bins. The solid and shaded hatched histograms refer to the ZPR contributions computed respectively with SOC and without SOC.
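The construction of such histograms amounts to binning per-mode ZPR contributions by the norm of the phonon wavevector. A minimal sketch, with hypothetical arrays in place of the first-principles contributions, is:

```python
# Sketch of the histogram decomposition used in this section: per-mode ZPR
# contributions on a q-point grid are binned according to |q|. All inputs
# here are hypothetical; in practice they come from the self-energy calculation.
import numpy as np

rng = np.random.default_rng(0)
n_q, n_modes = 1000, 6
q_norm = rng.uniform(0.0, 1.0, n_q)               # |q| of each wavevector (hypothetical)
contrib = rng.normal(-0.1, 0.05, (n_q, n_modes))  # ZPR contribution per (q, mode), meV

bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(q_norm, bins) - 1
hist = np.zeros((len(bins) - 1, n_modes))
for b in range(len(bins) - 1):
    hist[b] = contrib[idx == b].sum(axis=0)
print(hist.sum(), contrib.sum())  # same total ZPR, now resolved in |q| bins
```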
A first observation that emerges from this figure is the extremely similar shape of the mode histograms, apart from their respective energy scale. For both materials, the vast majority of the ZPR originates from the LO phonon mode (red) in a very small portion of the Brillouin zone, near the zone center (small q). This behavior is a clear signature of the Fröhlich interaction. The contribution of the large q modes, which cover most of the Brillouin zone, is significantly smaller and split more evenly between the acoustic (yellow), transverse optical (TO, green) and LO modes.
We now compare the solid and shaded histograms to identify how the different q regimes are affected. The total contribution to the ZPR (blue) for CdTe shows that strong SOC reduces the contribution of all phonon modes throughout the Brillouin zone. This suggests that, in the large q regime, the ZPR decrease can be associated with the global reduction of the electronic eigenvalues in the occupied subset of bands. In contrast, in the small q regime, it can be linked to the decrease of the effective masses. On the contrary, for CdS (weak SOC), the large q regime is unaffected by SOC; thus, only the variation of the band curvatures in the vicinity of the VBM seems to be responsible for reducing the ZPR.
At this point, we insist that our interpretation of the SOC-induced ZPR reduction in terms of the variation of electronic eigenvalues remains a heuristic analysis, since the full ZPR expressions for a given electronic state (Eq. (<ref>) and (<ref>)) include other physical quantities which are, in principle, affected by SOC: the phonon frequencies, ω, and the squared EPI matrix elements, |gkn n'^Fan(𝐪ν)|^2 and |gkn n'^DW(𝐪ν)|^2.
To test our hypothesis, we computed all the physical quantities entering Σ^Fan and Σ^DW for CdTe while artificially reducing SOC to 1% of its full strength. This allows us to decompose the electronic states correctly in terms of the double group irreducible representations while still reproducing the electronic and phononic dispersions without SOC adequately (see Sec. S1 of the Supplemental Material <cit.> for more details). With these in hand, we can precisely control which ingredients of the self-energy are affected by SOC. While such arbitrary combinations of full SOC and low SOC quantities have no physical meaning per se, they will prove insightful for understanding our previous results.
Figure <ref> shows the histogram decomposition of the total ZPR_v (the equivalent of the blue histograms of Fig. <ref>) for different combinations of contributions. Table <ref> contains the numerical results for ZPR_v, ZPR_c and ZPR_g for the same combinations. The data labeled only ε^low (olive dashes) refer to Σ^EPI being computed with the full SOC except for the electronic eigenenergies, which are taken at 1% SOC. In contrast, only ε^SOC (cyan dashes) is computed with 1% SOC except for the electronic eigenenergies, which are taken at full SOC.
From Fig. <ref> and Table <ref>, the histograms can be grouped in two categories. On the one hand, the data computed using low SOC eigenvalues, only ε^low, yield both histograms and total ZPR values which qualitatively reproduce the calculation without SOC (dark indigo). On the other hand, the data computed by including SOC only through the electronic eigenvalues, only ε^SOC, capture almost all the ZPR decrease of the SOC data throughout the Brillouin zone (see the light red reference line). These results validate our heuristic explanation and confirm that the modification of the electronic eigenvalues dominates the SOC-induced decrease of ZPR_v in the different q regimes. While the phonon frequencies and the EPI matrix elements undoubtedly influence the quantitative results, we argue here that the SOC-induced variation of the ε^0 are sufficient to estimate the effect of SOC on the VBM ZPR and, by extension, on the total band gap ZPR for this class of cubic materials (see also Sec. S1 of the Supplemental Material <cit.>).
We note that our conclusions differ significantly from those of Ref. <cit.>, who reported that SOC enhances the electron-phonon coupling strength in perovskite methylammonium lead iodine (MAPbI_3), thus increasing the temperature-dependent band gap renormalization compared to a scalar relativistic calculation. The electronic structure of MAPbI_3 is, however, very different from the zincblende, diamond and rocksalt structures we investigated; the VBM is non-degenerate and reasonably well isolated in energy, while the CBM, of Pb character, can couple to more electronic states within a small energy window (see Fig. 4 of their Supplemental Material), resulting in the band gap opening with increasing temperature, in contrast with our set of materials.
We observe some variations of the ZPR_v as well as in the small q regime of its histogram decomposition when including SOC in the EPI matrix elements and excluding it in the eigenvalues (see only ε^low data in Table <ref> and Fig. <ref>). However, the effect is too small to allow us to draw conclusions.
In fact, for our set of materials, any effect of SOC on the EPI matrix elements
is entirely washed out by the variation of the static electronic eigenvalues.
We also verified that the band gap correction at T=300 K decreased by a similar ratio as the ZPR_g when including SOC (see Table S4
of the Supplemental Material), thus confirming that our conclusions hold beyond T=0 K.
§.§ Generalized Fröhlich model with SOC
We now present the results from our gFr model with SOC. As mentioned in Sec. <ref> and <ref>, and discussed thoroughly in Appendix B, the VBM of zincblende materials with SOC requires a special treatment, as it displays Dresselhaus splitting.
As a consequence, the VBM in a generic direction 𝐤̂ is slightly shifted from Γ, both in momentum and in energy.
We found an energy offset smaller than 0.5 meV for all materials, and an average momentum offset of about 5×10^-3 Å^-1, reaching at most 10^-2 Å^-1 for CdS.
For comparison, the change in momentum expected from a photon doing a vertical optical transition in a semiconductor with a band gap energy of E_g=1 eV would be Δ k ∼ E_g/hc∼ 10^-4Å^-1, with h the Planck constant and c the speed of light.
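For reference, this order of magnitude follows from a one-line estimate (using hc ≈ 12398 eV·Å):

```python
# Back-of-the-envelope check of the photon-momentum comparison quoted above:
# Delta_k ~ E_g / (h c) for E_g = 1 eV, expressed in 1/Angstrom.
h_c_eV_A = 12398.4  # h*c in eV*Angstrom
E_g = 1.0           # eV
print(E_g / h_c_eV_A)  # ~8e-5 1/Angstrom, i.e. of order 1e-4
```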
As we argue in Sec. <ref>, the momentum offset has no consequence on the ZPR^gFr. Furthermore, for our set of materials, the largest value of Δ E_n(𝐪̂) is at least two orders of magnitude smaller than the LO frequency. Hence, we can safely neglect the energy offset and use Eq. (<ref>).
Figure <ref> compares ZPR_g computed with the gFr model to the first-principles AHC result, both with SOC (solid markers) and without SOC (shaded dotted markers). Note that the scales are logarithmic. The shaded gray area delimits the region where the gFr value deviates from the first-principles ZPR by at most 25%. In this figure, the VBM contribution to the ZPR_g with SOC was computed with the Dresselhaus Hamiltonian (see Appendix B for details). The Luttinger-Kohn Hamiltonian yields qualitatively equivalent results. The materials are grouped into three sets: the alkaline earth chalcogenides (purple diamonds), the Zn/Cd chalcogenides (yellow squares), both fairly ionic, and the less ionic 3-5 materials (blue triangles). Si and Ge are not considered in this Section, as their vanishing Born effective charges yield a null ZPR_g in the Fröhlich picture.
The alkaline earth chalcogenides, which display a larger ZPR_g ranging from 50 to 150 meV, are very well described by the gFr model, which captures more than 75% of the AHC ZPR_g both without and with SOC. The Zn and Cd chalcogenides ZPR_g is also reasonably well captured by the model, which accounts for more than two-thirds of the AHC value. The absolute value of their ZPR_g is smaller compared to the isoelectronic alkaline earth chalcogenides, which can be attributed to a smaller band gap at the DFT level that strengthens the contribution of interband couplings between valence and conduction states, thus reducing ZPR_g. In contrast, the gFr model captures less than one third of the ZPR_g for 3-5 materials.
Upon inclusion of SOC (going from the dotted to solid markers), the gFr model qualitatively retains the same proportion of the AHC ZPR_g value for all three material families. However, we observe a very small decrease for the sulfides (upper rightmost groups of yellow and purple markers).
These results confirm that the claim of Ref. <cit.> is robust to the inclusion of SOC: the ZPR of ionic materials, here both chalcogenide families, is dominated by the physical picture of a large polaron, where the movement of the slow electron is correlated to the dynamically adjusting phonon cloud, as emphasized by their Fig. 5(b) <cit.>.
As in Ref. <cit.>, we stress that a perfect agreement between the gFr and AHC methodologies was not expected, hence the labeling of the reference lines on Fig. <ref> emphasize the fraction of the total first-principles ZPR_g captured by the gFr model rather than the level of agreement between the two values. By construction, the gFr model does not include the contribution from acoustic and TO modes, nor the Debye-Waller contribution, as its purpose is to solely capture the contribution of the nonadiabatic interaction to the ZPR. Moreover, all interband couplings outside the fourfold degenerate subset at the VBM are neglected. Discrepancies could also arise from nonparabolic behavior of the electronic bands, which will naturally occur at some point since the Brillouin zone is finite and periodic, and from LO phonon dispersion.
On a different note,
one can wonder if our gFr model with SOC would provide a reliable estimate of the SOC-induced decrease of ZPR_v, without resorting to a full AHC calculation with SOC. In the spirit of the discussion presented in Sec. <ref>, the gFr model would capture the decrease of the hole effective masses near the Γ point. To answer this question, Fig. <ref> compares the ratio ZPR(SOC)/ZPR(noSOC) for the VBM obtained from the gFr model, using both the Luttinger-Kohn (green circles) and Dresselhaus (purple triangles) Hamiltonians, to the AHC value displayed in Fig. <ref>. The color intensity of the markers is proportional to the split-off energy, Δ_SOC.
Note that the scales are logarithmic.
The shaded gray area shows the region where both ratios agree within 5% of each other.
Two observations can be drawn from Fig. <ref>. On the one hand, the two ratios agree within 5% of each other for about half of the 19 materials (dashed lines): 11 for the Dresselhaus model and 9 for the Luttinger-Kohn model. The discrepancy with the first-principles ratio is below 10% for most materials (dotted lines), thus providing a reasonable estimate of the SOC-induced decrease of the VBM observed in the full AHC calculation, from a single phonon calculation at the Γ point. The main exceptions are CaTe with the Luttinger-Kohn model, GaAs at the theoretical lattice parameter, ZnS and CdS.
On the other hand, the difference between both models increases with SOC, the Dresselhaus model being more accurate for heavier materials. This discrepancy can be attributed to the construction of the models: as it couples the fourfold heavy hole and light hole bands to the twofold split-off bands, the Luttinger-Kohn model intends to describe the effect of SOC in a broader region of the Brillouin zone, thus capturing the warping of the light hole bands around the split-off energy observed in many cubic materials (see, for example, Fig. 2 of Ref. <cit.>). At stronger SOC, we observed that the cost of qualitatively capturing the correct band warping is a less accurate representation of the curvature of the light hole bands, yielding an overestimated m^∗ compared to the first-principles dispersion. In contrast, the Dresselhaus Hamiltonian treats only the fourfold VBM and provides an accurate description of the bands in the vicinity of the Γ point, at the cost of not describing the band warping further away in the Brillouin zone (see Fig. <ref> of Appendix [sec:sm-modifiedlk]C). Moreover, while our Dresselhaus model parameters are fitted to the first-principles dispersion with SOC, the Luttinger-Kohn Hamiltonian relies on the theoretical Luttinger parameters obtained without SOC, Δ_SOC being the only parameter related to SOC. The original work by Luttinger and Kohn already cautioned about a decreased accuracy of their model for strong SOC <cit.>.
To validate this interpretation, we also compute the Luttinger-Kohn model using fitted Luttinger parameters extracted from the Dresselhaus model (empty circles, see Appendix C for details). The purpose of these parameters is to reproduce more accurately the curvature of the light hole bands around Γ, at the cost of deviating more severely from the first-principles dispersion around the split-off energy and underestimating the m^∗ of the split-off bands compared to the original Luttinger parameters. In that case, both models agree very well. The fundamental differences between the two models suggest that the modified Luttinger-Kohn Hamiltonian constructed from fitted Luttinger parameters would be a more suitable choice to include SOC in the polaron effective mass theory developed in Guster et al. <cit.>, as it simultaneously captures the correct curvature for the light hole bands in presence of SOC and lacks the numerical complications arising from the Dresselhaus splitting. One should nevertheless make sure that the agreement region between the fitted Luttinger-Kohn model and the first-principles dispersion covers at least a few ω_LO for the predicted polaron effective mass to be reliable.
Lastly, we come back to the underestimation of the ratio observed in Fig. <ref> for ZnS and CdS, despite both models agreeing quite well with each other. To understand this result, recall the radial integral that occurred during the derivation of the gFr model with SOC, Eqs. (<ref>) and (<ref>). At this point, we suppose that the parabolic behavior of the electronic bands can be extended to infinity to replace the integral with its asymptotic solution. By doing so, we assume that the Brillouin zone region where the effective mass approximation holds is sufficiently large such that one has reached a significant fraction of the asymptotic value once the bands start to deviate from parabolicity. Such fraction can be estimated by looking at the analytical solution of Eq. (<ref>) at a finite upper bound q_c:
\int_0^{q_c} dq\, \frac{1}{\dfrac{q^2}{2m^\ast_n(\hat{\mathbf{q}})} + \omega_{0j}(\hat{\mathbf{q}})}
= \sqrt{\frac{2m^\ast_n(\hat{\mathbf{q}})}{\omega_{0j}(\hat{\mathbf{q}})}}\, \arctan\!\left(\frac{q_c}{\sqrt{2m^\ast_n(\hat{\mathbf{q}})\,\omega_{0j}(\hat{\mathbf{q}})}}\right).
Assuming that q_c corresponds to the wavevector where the electronic bands start to deviate from parabolicity, the argument of the arctan function can be recast as
\sqrt{\frac{q_c^2/2m^\ast_n(\hat{\mathbf{q}})}{\omega_{0j}(\hat{\mathbf{q}})}} = \sqrt{\frac{E_c}{\omega_{0j}(\hat{\mathbf{q}})}}\,,
namely the square-root of the ratio of the eigenenergy at which the electronic bands stop being parabolic and the LO frequency. Should the departure from parabolicity occur at an energy smaller than ω_0j(𝐪̂)=ω_LO with respect to the VBM energy, like for ZnS and CdS, the radial integral would have reached at most a fraction arctan(1)/(π/2) = 0.5 of its asymptotic value, and one can reasonably question the validity of such an approximation. In physical terms, we warn against a possible breakdown of the parabolic approximation within the energy window that is physically relevant to the Fröhlich interaction. As the effective mass approximation is a cornerstone assumption of the original Fröhlich model, one should therefore be particularly careful when introducing SOC in the treatment of polarons for materials with high phonon frequencies.
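The criterion can be quantified directly: the fraction of the asymptotic radial integral reached at the cutoff is arctan(√(E_c/ω_0))/(π/2). The toy Python sketch below evaluates this fraction for a few hypothetical values of E_c:

```python
# Numerical illustration of the parabolicity criterion: fraction of the
# asymptotic radial integral reached at the cutoff q_c, equal to
# arctan(sqrt(E_c/w0)) / (pi/2). Toy energies only.
import numpy as np

w0 = 30e-3                                   # toy LO phonon energy (eV)
for E_c in (10e-3, 30e-3, 100e-3, 300e-3):   # energy where bands leave parabolicity
    frac = np.arctan(np.sqrt(E_c / w0)) / (np.pi / 2.0)
    print(f"E_c = {1e3 * E_c:5.0f} meV  fraction of asymptotic value = {frac:.2f}")
# E_c = w0 gives arctan(1)/(pi/2) = 0.5, the situation flagged for ZnS and CdS.
```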
§ CONCLUSION
In the present study, we investigate the consequences of spin-orbit coupling on the electron-phonon interaction contribution to the zero-point renormalization of cubic materials. Our first-principles calculations show that spin-orbit coupling reduces the zero-point renormalization of the valence band edge by 15%–30% for the heavier materials, while the conduction band edge is scarcely affected. The leading mechanism behind this behavior, brought to light by an Allen-Heine-Cardona calculation where the strength of spin-orbit coupling is artificially reduced to 1%, is revealed to be the variation of the electronic eigenvalues entering the electron-phonon self-energy and the decrease of the hole effective masses near the valence band maximum.
We also extend the generalized model presented in Miglio et al. <cit.> to include the spin-orbit coupling, revealing some numerical subtleties in the treatment of the valence band maximum of zincblende materials due to Dresselhaus splitting. We show that the predominance of nonadiabatic effects on the zero-point renormalization of ionic materials is robust to the inclusion of spin-orbit coupling and that the generalized model can be used to estimate the magnitude of the SOC-induced ZPR decrease with reasonable accuracy. We finally warn about the accuracy of the Luttinger-Kohn model with spin-orbit coupling for heavier materials and propose a method relying on the Dresselhaus model to extract fitted Luttinger parameters more suitable for the purpose of our generalized model with spin-orbit coupling, as well as highlight a possible breakdown of the parabolic approximation on which the original model with spin-orbit coupling is built for materials with high phonon frequencies.
The authors acknowledge fruitful discussions with M. J. Verstraete and M. Giantomassi, and thank S. Poncé for constructive comments about the manuscript. This research was financially supported by the Natural Sciences and
Engineering Research Council of Canada (NSERC), under the Discovery Grants
program grant No. RGPIN-2016-06666,
by the Fonds de la Recherche Scientifique (FRS-FNRS Belgium) through
the PdR Grant No. T.0103.19 - ALPS and
by the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 951786 - NOMAD CoE.
This work is part of the SHAPEme project (EOS ID 400077525) that has received funding from the FWO and F.R.S.-FNRS under the Excellence of Science (EOS) program.
This research was enabled in part by support provided by Calcul Québec (<www.calculquebec.ca>) and the Digital Research Alliance of Canada (<www.alliancecan.ca>). The operation of the supercomputers used for this research is funded by the
Canada Foundation for Innovation (CFI), the Ministère de la Science, de l'Économie et de l'Innovation du Québec (MESI), and the Fonds de recherche du Québec – Nature et technologies (FRQ-NT). V.B.-C. acknowledges support by the NSERC Alexander
Graham Bell Canada Graduate Scholarship doctoral program, the FRQ-NT B2 Doctoral Scholarship and the Hydro-Québec Excellence Scholarship. V.B.-C and M.C. are members of the Regroupement québécois sur les matériaux de pointe (RQMP).
§ APPENDIX A: LUTTINGER-KOHN HAMILTONIAN
The effective mass theory derived in 1955 by Luttinger and Kohn <cit.> describes the behavior of the electronic bands in the vicinity of band extrema, using the second-order 𝐤·𝐩 theory. For the threefold degenerate VBM of cubic materials, the Hamiltonian without SOC reads
H_{n,n'}(\mathbf{k}) =
\begin{pmatrix}
A k_x^2 + B(k_y^2+k_z^2) & C k_x k_y & C k_x k_z \\
C k_x k_y & A k_y^2 + B(k_x^2+k_z^2) & C k_y k_z \\
C k_x k_z & C k_y k_z & A k_z^2 + B(k_x^2+k_y^2)
\end{pmatrix},
where 𝐤 is the electronic wavevector, n,n' are band indices, and H_n,n'(𝐤) is expressed in the basis of the three p-like states at the zone center, which forms an irreducible representation of the symmetry group of the wavevector at the VBM.
The three parameters A, B and C, known in the literature as the Luttinger parameters, can be deduced from the effective masses, m_n^∗, along the [100], [110] and [111] cartesian directions in reciprocal space,
m_n^{\ast\,-1}[100] = 2A,\quad 2B \ \text{(twofold)},
m_n^{\ast\,-1}[110] = (A+B\pm C),\quad 2B,
m_n^{\ast\,-1}[111] = \tfrac{2}{3}(A+2B+2C),\quad \tfrac{2}{3}(A+2B-C)\ \text{(twofold)}.
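These relations can be inverted straightforwardly. The hedged sketch below recovers A, B and C from placeholder inverse effective masses along [100] and [111] (atomic units; the numbers are illustrative, not material data):

```python
# Hedged sketch: recovering the Luttinger parameters A, B, C from inverse
# effective masses along [100] and [111], by inverting the relations above.
inv_m_100 = (0.9, 0.25)   # (nondegenerate, twofold) inverse masses along [100]
inv_m_111_single = 1.4    # nondegenerate branch along [111]

A = inv_m_100[0] / 2.0                               # from 1/m = 2A
B = inv_m_100[1] / 2.0                               # from 1/m = 2B (twofold)
C = (1.5 * inv_m_111_single - A - 2.0 * B) / 2.0     # from 1/m = (2/3)(A + 2B + 2C)
print(A, B, C)
```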
In the presence of SOC, the Hamiltonian contains an additional term, H^SOC (see Eq. (<ref>)),
which is treated as a perturbation in the 𝐤·𝐩 expansion. The Hamiltonian is expressed in the basis of the zeroth-order wavefunctions, which are the 4-fold {|3/2, m_j⟩} (heavy holes and light holes) and the 2-fold {|1/2, m_j⟩} (split-off),
H_{j,m_j}(\mathbf{k}) =
\begin{pmatrix}
\frac{P}{2} & L & M & 0 & \frac{iL}{\sqrt{2}} & -i\sqrt{2}\,M \\
L^{\ast} & \frac{P}{6}+\frac{2Q}{3} & 0 & M & \frac{-i(P-2Q)}{3\sqrt{2}} & \frac{i\sqrt{3}\,L}{\sqrt{2}} \\
M^{\ast} & 0 & \frac{P}{6}+\frac{2Q}{3} & -L & \frac{-i\sqrt{3}\,L^{\ast}}{\sqrt{2}} & \frac{-i(P-2Q)}{3\sqrt{2}} \\
0 & M^{\ast} & -L^{\ast} & \frac{P}{2} & -i\sqrt{2}\,M^{\ast} & \frac{-iL^{\ast}}{\sqrt{2}} \\
\frac{-iL^{\ast}}{\sqrt{2}} & \frac{i(P-2Q)}{3\sqrt{2}} & \frac{i\sqrt{3}\,L}{\sqrt{2}} & i\sqrt{2}\,M & \frac{P+Q}{3}-\Delta_{\mathrm{SOC}} & 0 \\
i\sqrt{2}\,M^{\ast} & \frac{-i\sqrt{3}\,L^{\ast}}{\sqrt{2}} & \frac{i(P-2Q)}{3\sqrt{2}} & \frac{iL}{\sqrt{2}} & 0 & \frac{P+Q}{3}-\Delta_{\mathrm{SOC}}
\end{pmatrix},
where the 𝐤-dependent parameters P, Q, L and M are constructed from the Luttinger parameters,
P(𝐤) = (A+B)(k_x^2+k_y^2) + 2Bk_z^2,
Q(𝐤) = B(k_x^2+k_y^2)+Ak_z^2,
L(𝐤) = -iC/√(3)(k_xk_z -ik_yk_z),
M(𝐤) = 1/√(12)[(A-B)(k_x^2-k_y^2) -2iCk_xk_y].
The split-off energy, Δ_SOC, makes the Luttinger-Kohn Hamiltonian with SOC nonhomogeneous in 𝐤, resulting in the typical band warping observed around the split-off energy in light hole bands of cubic materials (see, for example, the Γ-L direction of Fig. 2 of Ref. <cit.>). Around this energy level, the light hole bands typically depart from the quadratic behavior characterized by m^∗ (SOC)
to recover a curvature that resembles m^∗ (noSOC). Δ_SOC can be extracted either from experiments or from a DFT ground state calculation with SOC. Note that Eq. (<ref>) inherently relies on the approximation that SOC is sufficiently small, such that the zeroth-order wavefunctions form a good basis set for the H matrix. Luttinger and Kohn explicitly warned that this approximation would be less accurate in the presence of very strong SOC. In practice, we observe that the light hole bands predicted by the Luttinger-Kohn model with SOC for tellurides meander around the first-principles dispersion and predict larger light hole effective masses compared to the first-principles results (see the blue curve in Fig. <ref>, which will be further discussed in Appendix C).
In the present work, we extract the three Luttinger parameters from the effective mass tensor without SOC using density-functional perturbation theory and evaluate Δ_SOC from our ground state calculation with SOC. The angular-averaged effective masses for the heavy hole and light hole bands are computed with order-4 finite differences, using the electronic dispersion calculated by diagonalizing Eq. (<ref>).
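The angular average itself can be sketched as follows, where the callable eps stands in for the eigenvalues of either model Hamiltonian (here a toy anisotropic parabola), the mass along each direction is obtained from a quadratic fit, and the average is taken uniformly over the solid angle:

```python
# Sketch of an angular average of direction-dependent effective masses, as
# needed by the generalized Froehlich model. The dispersion callable eps(k_vec)
# is a stand-in for the model (Luttinger-Kohn or Dresselhaus) eigenvalues.
import numpy as np

def direction_mass(eps, khat, h=1e-3):
    # quadratic fit of eps along direction khat around k = 0
    ks = np.array([-2 * h, -h, 0.0, h, 2 * h])
    es = np.array([eps(k * khat) for k in ks])
    curvature = 2.0 * np.polyfit(ks, es, 2)[0]   # second derivative
    return 1.0 / curvature

def sphere_average(eps, n_theta=20, n_phi=40):
    # uniform sampling in cos(theta) and phi gives a solid-angle average
    ct = np.linspace(-1 + 1e-6, 1 - 1e-6, n_theta)
    ph = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    masses = []
    for c in ct:
        s = np.sqrt(1.0 - c * c)
        for p in ph:
            khat = np.array([s * np.cos(p), s * np.sin(p), c])
            masses.append(direction_mass(eps, khat))
    return np.mean(masses)

toy = lambda k: 0.5 * (k[0]**2 / 0.3 + k[1]**2 / 0.3 + k[2]**2 / 0.8)
print(sphere_average(toy))
```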
We note that, by construction, the Luttinger-Kohn Hamiltonian with SOC assumes inversion symmetry (without SOC, time-reversal symmetry can be used to show that the Hamiltonian matrix elements retain the same form). Hence, it should not, in principle, be applied to zincblende materials when SOC is considered. Nevertheless, we found that, despite missing some microscopic features in the very close vicinity of Γ, which will be described in the following Appendix, the Luttinger-Kohn model with SOC provides a qualitatively reliable description of the electronic bands, which does not invalidate its use for zincblende materials.
§ APPENDIX B: DRESSELHAUS HAMILTONIAN
A correct treatment of the lack of inversion symmetry in presence of SOC was made by Dresselhaus <cit.>. As time-reversal symmetry is preserved, the electronic dispersion without SOC still verifies ε_-𝐤n = ε_𝐤n, as per Kramers' theorem, but the Bloch functions no longer have to verify u_-𝐤n(r) = u_𝐤n(-r) up to a phase factor. The inclusion of SOC acts as an effective magnetic field which splits the spin-degenerate states at finite crystal momentum, with condition ε_-𝐤n↑ = ε_𝐤n↓. This effect, known as Dresselhaus splitting, creates a well-defined spin texture in reciprocal space, locking the spin orientation to the crystal momentum <cit.>. Dresselhaus splitting has been recently observed in GaAs and InSb by circular dichroic photoemission <cit.>.
The underlying physical mechanism is similar to the one driving Rashba splitting <cit.>, in which an asymmetry in the crystal potential along a preferred axial direction acts as an effective electric field to break inversion symmetry. The resulting spin texture, however, is different from the helical polarization associated with Rashba systems <cit.>. See Ref. <cit.> for more details about the difference between the Rashba and Dresselhaus effects.
For a generic direction in 𝐤 space, the eigenvalues for the heavy hole and light hole bands in the Dresselhaus model are given by
E_𝐤 = k^2/2 + (L+2M/3)k^2 + y,
where the bare electron mass m=1 in atomic units. The variables y= y(𝐤, L, M, N, W) are the roots of a fourth-order polynomial,
\begin{aligned}
y^4 &- 2y^2\!\left[\frac{\alpha^2}{9}k^4 + \beta\,(k_x^2k_y^2+k_y^2k_z^2+k_z^2k_x^2) + W^2k^2\right] + 4y\,W^2N\,(k_x^2k_y^2+k_y^2k_z^2+k_z^2k_x^2) \\
&+ \left[\frac{\alpha^2}{9}k^4 + \beta\,(k_x^2k_y^2+k_y^2k_z^2+k_z^2k_x^2)\right]^2 + W^4\!\left[k^4 - 3(k_x^2k_y^2+k_y^2k_z^2+k_z^2k_x^2)\right] \\
&+ \frac{2\alpha^2}{9}W^2(k_x^6+k_y^6+k_z^6) - \left[\frac{3\alpha^2}{9} + \frac{2N^2}{3}\right]W^2k^2(k_x^2k_y^2+k_y^2k_z^2+k_z^2k_x^2) + \frac{21\alpha^2}{9}W^2k_x^2k_y^2k_z^2 = 0,
\end{aligned}
in which we have defined the shorthands
α^2 = (L-M)^2,
β = N^2-α^2/3.
Equation (<ref>) results from the secular determinant of the 𝐤·𝐩 expansion in the fourfold degenerate subspace at the VBM. Note that our W parameter is labeled C in the original work from Dresselhaus <cit.>. We renamed it to avoid confusion with the third Luttinger parameter.
The Dresselhaus model is thus parametrized by four real numbers; three of them, L, M and N, play a similar role to the Luttinger parameters <cit.>, while the last one, W, captures information about SOC and the breaking of inversion symmetry in the crystal potential <cit.>. Rewriting Eq. (<ref>) as
E_𝐤 = λ k^2 + y,
hence defining
λ = 1/2 + (L+2M/3),
one can conveniently parametrize the Dresselhaus model by α^2, λ, W and |N|.
These four parameters can be extracted from the electronic dispersion in the close vicinity of the Γ point along two high-symmetry paths: in cartesian direction 𝐤̂=[1,0,0], Eq. (<ref>) reduces to
E_k[1,0,0] = λ k^2 ±√(α^2/9 k^4 + W^2k^2),
in which both solutions are doubly degenerate, while for 𝐤̂=[1,1,1], one obtains
E_{k[1,1,1]} = \lambda k^2 +
\begin{cases}
\dfrac{|N|}{3}k^2 & \text{(twofold)},\\[4pt]
-\dfrac{|N|}{3}k^2 \pm \sqrt{2}\,W k.
\end{cases}
Equations (<ref>) and (<ref>) reveal two peculiarities of the generic electronic dispersion with SOC near the VBM of zincblende crystals which complicate the numerical evaluation of electronic effective masses. These features are sketched in Fig. 2 and 5 of Ref. <cit.> and can be observed in Fig. <ref> for CdS. On the one hand, the Dresselhaus field shifts the heavy hole band extrema away from the Γ point; this can be read from the second case of Eq. (<ref>), which contains a linear term in k. This momentum offset is, however, much smaller in magnitude than what is typically observed in Rashba materials <cit.>. In the present work, we found it to be of the order of 5×10^-3Å^-1 on average and at most 10^-2Å^-1, about half a percent of the length of the reciprocal lattice vectors or less in all cases. The energy difference between the true VBM and the Γ point was at most 0.5 meV. Such a small effect will be barely visible on the band structure but will nevertheless render the finite central differences around Γ numerically unreliable, regardless of the fact that the momentum offset does not contribute to the gFr model (see Eq.(<ref>)).
On the other hand, if we take the higher energy solution of Eq. (<ref>) to be the heavy hole bands and the lower energy solutions to be the light hole bands, their slopes will be discontinuous across Γ. As a consequence, the effective mass computed from finite differences for these bands will diverge as the sampling
gets denser. A similar feature occurs for the light hole bands in the 𝐤̂=[1,1,0] cartesian direction (see the upper panel of Fig. 2 of Ref. <cit.>), as well as for generic 𝐤̂.
Nevertheless, in line with Dresselhaus <cit.> and as addressed in Sec. <ref>, we argue that these features have little to no impact on our current application since
the energy offset between Γ and the true extrema is about two orders of magnitude smaller than the LO frequency.
With these precautions in mind, we obtain the model parameters from the calculated dispersion along the [100] and [111] directions through the following relations:
* The difference between the dispersion of the two nondegenerate bands given by Eq. (<ref>) is linear in k, with a slope proportional to W.
* The average value of these two bands is quadratic in k. The difference between this average and the dispersion of the degenerate bands in the same direction is also quadratic in k, with a curvature proportional to |N|.
* Averaging the difference between the non-degenerate bands of Eq.(<ref>) with the degenerate bands in the same direction results in a quadratic function in k, with curvature λ.
* The difference between the dispersion of the two pairs of degenerate states in Eq. (<ref>) allows extracting the α^2 factor.
To circumvent any numerical instabilities arising from the band peculiarities associated with Dresselhaus splitting, we computed the effective masses by fitting quadratic functions to the Dresselhaus model dispersion rather than relying on finite differences. The different parameters used for the Dresselhaus model are tabulated in Table S8 of the Supplemental Material <cit.>.
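For concreteness, the sketch below applies these fitting relations to the Dresselhaus dispersion along [111], with toy values of λ, |N| and W used to generate the synthetic data; the same linear and quadratic fits would be applied to the first-principles eigenvalues close to Γ:

```python
# Hedged sketch of the parameter extraction described above, applied to the
# Dresselhaus dispersion along [111]. Toy parameters generate the "data".
import numpy as np

lam, N_abs, W = 2.1, 3.0, 0.05       # toy lambda, |N| and W (illustrative only)
k = np.linspace(1e-4, 5e-3, 20)      # small k, close to Gamma

# synthetic branches along [111]
E_deg = lam * k**2 + (N_abs / 3.0) * k**2                          # twofold branch
E_plus = lam * k**2 - (N_abs / 3.0) * k**2 + np.sqrt(2.0) * W * k  # nondegenerate
E_minus = lam * k**2 - (N_abs / 3.0) * k**2 - np.sqrt(2.0) * W * k

# fits following the bulleted relations
W_fit = np.polyfit(k, E_plus - E_minus, 1)[0] / (2.0 * np.sqrt(2.0))
N_fit = 1.5 * np.polyfit(k, E_deg - 0.5 * (E_plus + E_minus), 2)[0]
lam_fit = np.polyfit(k, 0.5 * (E_plus + E_minus), 2)[0] + N_fit / 3.0
print(W_fit, N_fit, lam_fit)          # recovers W, |N| and lambda
```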
Lastly, we emphasize that our Dresselhaus parameters should be perceived as fitting parameters rather than physical parameters. While they retain some physical essence from their original formulation (see Eq. (47) of Ref. <cit.>), we did not evaluate them through the 𝐤·𝐩 framework but rather fitted them to the first-principles dispersion with SOC.
§ APPENDIX C: FITTED LUTTINGER PARAMETERS FROM DRESSELHAUS HAMILTONIAN
As discussed in Appendix A, the Luttinger-Kohn Hamiltonian becomes less accurate in predicting the light hole effective masses as SOC increases. As a consequence, it yields a smaller decrease of the ZPR_v (i.e., a larger absolute value of ZPR_v(SOC)) compared to the first-principles result, as emphasized in Sec. <ref> of the main text, hence the overestimation of the ratios presented in Fig. <ref>.
Nevertheless, it lacks all the numerical difficulties arising from Dresselhaus splitting. In the spirit of the original model, we aim for reliable effective masses with respect to the first-principles dispersion, while remaining in the physical picture of Γ-centered parabolic bands. The Luttinger-Kohn Hamiltonian thus provides a more efficient framework to evaluate the angular-averaged effective masses required by our gFr model. In this context, we propose a very simple solution: extracting fitted Luttinger parameters from the Dresselhaus model that reproduce the first-principles light hole effective masses more accurately in the presence of SOC.
First, we note that the momentum offset in the heavy hole bands of zincblende materials prevents us from directly fitting the Luttinger-Kohn model on the first-principles bands. The Dresselhaus model, in contrast, parametrizes this offset correctly through the W parameter. From our fitted values of λ and α^2, we can extract fitted values for L and M, through Eqs. (<ref>) and (<ref>). To do so, one nevertheless requires prior knowledge of the relative strength of L and M, as α=|L-M|.
One can then go back to the original works of Luttinger-Kohn and Dresselhaus (see Eq. (5.10) of Ref. <cit.> and Eq. (47) of Ref. <cit.>) and note that the definitions of the (A, B) and (L, M) parameters differ only by the bare electron mass term, ħ^2/2m_e, which is equal to 1/2 in atomic units. The definitions of the C and N parameters are equivalent. As L maps to A and M maps to B up to a constant shift, we can first deduce that sgn(L-M)= sgn(A-B). Then, we can use this mapping to extract fitted Luttinger parameters that are optimized to reproduce the first-principles dispersion with SOC along the [100] and [111] cartesian directions in reciprocal space using the Dresselhaus model. Note that the only purpose of these parameters is to provide an accurate description of the first-principles bands in the close vicinity of the Γ point. Hence, the predicted dispersion will rapidly become less accurate than the original Luttinger parameters for larger |𝐤| values, as well as for the split-off bands.
Figure <ref> compares the first-principles dispersion (black) to the Luttinger-Kohn model dispersion obtained with the original set of Luttinger parameters (blue) and the fitted parameters (dashed red) for CdTe. The dispersion obtained from the Dresselhaus model (green dotted lines) is also shown for comparison.
While the original Luttinger-Kohn model closely follows the band warping for a broader portion of the Brillouin zone, the fitted parameters reproduce the light holes' curvature around Γ more accurately, similar to the one predicted by the Dresselhaus model. In turn, they predict a SOC-induced decrease of the ZPR^gFr in better agreement with the first-principles result, as shown by the open circle markers in Fig <ref>. The numerical values of the fitted Luttinger parameters are tabulated in Table S8 of the Supplemental Material <cit.>.
§ APPENDIX D: ZERO-POINT EXPANSION OF THE LATTICE
The zero-point motion of the ions affects the band gap energy in two distinct ways: on the one hand, the electrons interact with the ionic motion through EPI, which is computed at the static lattice geometry, a^0 (typically, the lattice parameter obtained from a standard DFT relaxation). On the other hand, the ionic vibrations contribute to the total Helmholtz free energy of the crystal, yielding a small variation of the lattice parameter, called zero-point lattice expansion. In addition to the EPI contribution discussed in Sec. <ref>, the zero-point lattice expansion induces a T=0 K modification of the band gap energy.
We work within the quasiharmonic approximation, in which the main contribution to the temperature dependence of the phonon frequencies is expressed as their variation with respect to the lattice parameter, which causes the crystal to expand <cit.>. The Helmholtz free energy then reads:
F^{tot}(V,T) \simeq F^{e}(V) + F^{vib}(V,T) = E^{e}_{stat}(V) - k_B T \ln Z^{ph}(V,T),
in which we approximate that the electronic and vibrational degrees of freedom can be separated. Since we are dealing with semiconductors and insulators, the entropic contribution of the electrons to the free energy is neglected. E_stat^e(V) is the Born-Oppenheimer energy obtained from DFT and k_B is the Boltzmann constant.
At T=0, the contribution of the phonon partition function reduces to the zero-point energy of the phonon modes, such that
F^{tot}(V,T=0) = E^{e}_{stat}(V) + \sum_{\mathbf{q}\nu} \frac{\omega_{\mathbf{q}\nu}(V)}{2}.
For cubic materials, the lattice parameter including zero-point motion, a(T=0), is obtained from the volume that minimizes the T=0 K Helmholtz free energy. The zero-point lattice expansion is simply the difference between the static and dynamical lattice parameters:
Δ a(T=0) = a(T=0) - a^0
Once a(T=0) is known, we approximate the zero-point lattice expansion contribution to the band gap ZPR, labeled ZPR_g^ZPLE, by computing the difference between the DFT band gap energy, E_g^DFT, evaluated at the static (a^0) and zero-point-expanded (a(T=0)) geometries:
ZPR_g^ZPLE≃ E_g^DFT(a(T=0)) - E_g^DFT(a^0).
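Schematically, the whole procedure fits in a few lines. In the sketch below, the static energy, zero-point energy and gap-versus-lattice-parameter functions are hypothetical placeholders for the fitted first-principles quantities:

```python
# Minimal sketch of the zero-point lattice expansion procedure: minimize
# E_stat(V) + ZPE(V) at T = 0 and compare the DFT gap at the two geometries.
# All functional forms and numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

V0, B0 = 45.0, 0.5                              # static equilibrium volume and "stiffness" (toy)
E_stat = lambda V: 0.5 * B0 * (V - V0) ** 2     # toy static energy vs volume
ZPE = lambda V: 0.12 - 1e-3 * (V - V0)          # toy zero-point energy, softening with volume

V_zp = minimize_scalar(lambda V: E_stat(V) + ZPE(V), bounds=(40.0, 50.0), method="bounded").x
a0, a_zp = V0 ** (1.0 / 3.0), V_zp ** (1.0 / 3.0)

Eg = lambda a: 1.50 + 2.0 * (a - a0)            # hypothetical DFT gap vs lattice parameter
print("Delta a(T=0) =", a_zp - a0, " ZPR_g^ZPLE =", Eg(a_zp) - Eg(a0))
```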
For all materials, we computed the zero-point lattice expansion using a uniform phonon wavevector grid. As the construction of the Helmholtz free energy only requires the vibrational spectrum of the crystal, it converges faster with respect to the 𝐪-point sampling than the EPI. The values of ZPR_g^ZPLE used to obtain Fig. <ref> can be found in Table S5 of the Supplemental Material <cit.>, along with experimental values of ZPR_g. Note that we did not attempt to evaluate the effect of SOC on ZPR_g^ZPLE; the same value was used to correct ZPR_g^EPI both with and without SOC. As the zero-point lattice expansion depends mostly on the phonon frequencies (see Eq. (<ref>)), which are not significantly affected by SOC for the cubic materials investigated (see Table S7 of the Supplemental Material <cit.>), we expect this approximation to be fairly accurate for the purpose of this work. For a more thorough discussion of the zero-point lattice expansion and its effect on the band gap ZPR, see Ref. <cit.>.
§ SUPPLEMENTAL MATERIAL
§ LOW SOC ZPR OF CDTE
In this Section, we present complementary information regarding the 1% SOC calculation used to validate our heuristic interpretation of the origin of the SOC-induced ZPR decrease (Sect. 4.A.3 of the main text). Fig. <ref>a) and b) respectively show the electronic and phononic dispersions of CdTe without SOC (blue) and with 1% SOC (yellow dashes). The inset of Fig. <ref>a) shows a small residual split-off energy, namely 1% of the full Δ_SOC for CdTe. Both band structures are virtually identical. The phonon dispersions also show excellent agreement. The largest energy difference between equivalent phonon modes in our denser grid is 0.1 meV, occurring for the LO mode. The average energy difference per mode is at most 0.01 meV. Fig. <ref>c) confirms that the histogram decomposition of the ZPR_v computed with 1% SOC (yellow dashes) is almost identical to the reference histogram without SOC (dark indigo) shown in Fig. 6 of the main text. A negligible discrepancy remains in the small q bins, most likely due to the inevitable non-zero split-off energy in the low SOC framework. The histogram with SOC (light red) is shown for comparison purposes.
Table <ref> extends Table 1 of the main text to all six possible combinations of the self-energy ingredients ε, ω and |gkn n'|^2, each computed with either full SOC (SOC superscripts) or artificially reduced 1% SOC (low superscripts). The corresponding histograms are shown in Fig. <ref>. Note that the data labelled only ε^low and only ε^SOC are respectively equivalent to ε^low, ω^SOC, |gkn n'^SOC|^2 and ε^SOC, ω^low, |gkn n'^low|^2.
As discussed in the main text, the different datasets can be grouped according to whether SOC is included or not in the bare electronic eigenvalues (see the caption of Fig. <ref> and notice the difference between the upper and lower panels). When either including or excluding SOC only in ε^0 (resp. Fig. <ref>a) and d)), some small discrepancies still remain when compared to the reference lines (SOC (dark indigo) and noSOC (light red)), which can be attributed to
small variations of the EPI matrix elements. Including or excluding SOC only in the phonon frequencies has virtually no effect on the final results for this material, as their corresponding histograms (resp. Fig. <ref>b) and e)) reproduce almost perfectly the corresponding reference lines.
§ IMPACT OF SOC ON THE REAL AND IMAGINARY PARTS OF THE ELECTRON-PHONON SELF-ENERGY
In the current work, the effect of SOC we observe on the ZPR of the VBM, hence on the real part of the electron-phonon self-energy, is smaller than the relative impact on the hole mobility reported in the literature for several semiconductors <cit.>. As discussed in Sec. of the main text, the imaginary part of the self-energy is proportional to the inverse relaxation time for a given electronic state |𝐤n⟩. For its part, the mobility depends on the relaxation time of all the electronic states contributing to the scattering channels <cit.>. Hence, when including SOC in the calculations, the imaginary part of the self-energy of a given electronic state can be expected to decrease by a similar ratio as the mobility when SOC is neglected. When comparing those quantities, one has nevertheless to keep in mind that, in contrast with the self-energy, the mobility is a global quantity which is integrated on the Brillouin zone.
Figure <ref> shows the real (blue) and imaginary (red) parts of the frequency-dependent electron-phonon self-energy (i.e. retaining the frequency dependence in Eq. (5) of the main text) for AlSb (a), ZnTe (b), CdTe (c) and Si (d), computed with a grid. Solid lines include SOC while dashed lines do not. Note that for this Figure, the imaginary parameter η was raised to 0.03 eV, in order for the self-energy to be a smooth function of the frequency. For all four materials, the introduction of SOC has a larger absolute effect on the real part of the self-energy close to the bare eigenvalue. Nevertheless, as the magnitude of the imaginary part of the self-energy is much smaller than its real part, we observe that the SOC-induced relative decrease of the self-energy is more significant for the imaginary part than for the real part for a substantial frequency range below the bare VBM energy, which could lead to a stronger relative effect on transport properties. For this region, we find that the relative decrease of the imaginary part of the self-energy in the vicinity of the bare VBM eigenvalue is of the order of 45% for AlSb, 35% for ZnTe, 40% for CdTe and 20% for Si, which is larger than the ratios reported in Fig. 2a) of the main text for the ZPR_v (respectively 21%, 27%, 30% and 3%). Our results thus agree with the trends reported in the literature for the hole mobility. Further investigation will be required to fully understand how the SOC-induced decrease of the band gap ZPR and increase of the hole mobility should correlate.
§ NUMERICAL TABLES
Table <ref> regroups the relevant calculation parameters for all materials investigated in the present work, as well as the band gap location and band gap energy with and without SOC. Table <ref> provides the numerical values of the ZPR for the VBM, CBM and band gap with and without SOC computed from first principles, and the ratio R=ZPR(SOC)/ZPR(noSOC) which is shown in Fig. 2 of the main text. Table <ref> compares the band edge and band gap renormalization ratio R computed at T=300 K and T=0 K, and confirms that the main conclusions of this article, obtained at T=0 K, remain valid at finite temperature. Table <ref> reports the contribution of the zero-point lattice expansion to the band gap ZPR, which was added to the EPI contribution to produce Fig. 4 of the main text, as well as the experimental values used for comparison. Table <ref> contains numerical results for the VBM, CBM and band gap ZPR from the generalized model (gFr, Eq. (34) of the main text), evaluated using the physical parameters reported in Table <ref>. Table <ref> reports the model parameters used to construct the Luttinger-Kohn and Dresselhaus Hamiltonians, enabling the computation of the angular-averaged effective masses for the VBM of zincblende materials. The fitted Luttinger parameters obtained using the procedure described in Appendix C of the main text are also reported. Table <ref> finally reports angular averaged effective masses for the different hole bands, as well as their band average. Note that these quantities differ from the angular and band averaged square root effective masses entering the gFr model (reported in Table <ref>).
|
http://arxiv.org/abs/2307.04238v1 | 20230709175503 | Measuring the Cosmic X-ray Background accurately | [
"Hancheng Li",
"Roland Walter",
"Nicolas Produit",
"Fiona Hubert"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Measuring the Cosmic X-ray Background accurately
[1]Hancheng Li, [email protected]
[1]Roland Walter, [email protected]
[1]Nicolas Produit, [email protected]
[2]Fiona Hubert
[1]Department of Astronomy, University of Geneva, 16 Chemin d'Ecogia, Versoix, CH-1290, Switzerland
[2]EPF-Ecole d'ingénieur-e-s, 55 Av. du Président Wilson, Cachan, FR-94230, France
Synthesis models of the diffuse Cosmic X-ray Background (CXB) suggest that it can be resolved into discrete sources, primarily Active Galactic Nuclei (AGNs). Measuring the CXB accurately offers a unique probe to study the AGN population in the nearby Universe. Current hard X-ray instruments suffer from time-dependent backgrounds and cross-calibration issues. As a result, their measurements of the CXB normalization have an uncertainty of the order of ∼15%. In this paper, we present the concept and simulated performances of a CXB detector, which could be operated on different platforms. With a 16-U CubeSat mission running for more than two years in space, such a detector could measure the CXB normalization with ∼1% uncertainty.
August 12, 2023
===================
§ INTRODUCTION
The diffuse Cosmic X-ray Background (CXB) was discovered during a rocket flight <cit.> together with Sco X-1. A Moon observation with ROSAT <cit.> showed the bright side of the Moon reflecting Solar X-rays whereas the dark side revealed a shadow of the CXB, demonstrating its extrasolar origin <cit.>. The high isotropy of the CXB, measured by Uhuru, confirmed an extragalactic origin <cit.>.
Thanks to the focusing capabilities of soft X-ray instruments and the help of deep field surveys by XMM-Newton <cit.> and Chandra <cit.>, up to 93% of the extragalactic CXB below 10keV has been resolved into point-like Active Galactic Nuclei (AGNs) <cit.>. The sensitivity of the current hard X-ray instruments is not enough to resolve the CXB at hard X-rays where the bulk of its emission lies. As a result, only bright AGN can be detected, and the fraction of the CXB resolved remains less than 39% <cit.> at ∼30keV.
An additional Galactic component is observed below ∼2keV <cit.> with an extension of ∼10^∘ around the Galactic plane <cit.> and is explained by the emission of faint cataclysmic variables <cit.>, thermal emission from ionized gas in the local bubble beyond the neutral interstellar medium <cit.>, and scattering of X-rays on illuminated diffuse gas clouds <cit.>.
X-ray instruments suffer from time-dependent backgrounds and sensitivity changes caused by solar activity (soft X-rays), cosmic particles, instrument aging and inaccurate in-orbit calibration, resulting in systematic uncertainties. As a result, the CXB intensities measured by different experiments disagree with each other by up to 10%-15%, even though its spectral shape is rather well constrained.
Extrapolating stacked AGN spectra accounting for the CXB spectrum below 10keV does not reproduce the CXB at hard X-rays. A large population of Compton thick sources (where the density of obscuring material is high enough - N_H > 10^24 cm^-2 - for Compton scattering to dominate) has been hypothesized to fill the gap <cit.>, however, that population was not detected in deep X-ray surveys <cit.> nor at hard X-rays <cit.>.
The comparison of the observed CXB with the results of synthesis models puts constraints on the fraction of AGNs with different degrees of obscuration and reflection <cit.>. The relation between reflection and absorption contains important information on the average AGN inner geometry. The systematic error in the CXB normalization is however a major source of uncertainty in these models.
To measure the CXB accurately, meticulous instrumental background modeling as well as energy and detection efficiency calibrations are required. In this context, the MVN (Monitor Vsego Neba) instrument was proposed by the Space Research Institute of the Russian Academy of Sciences <cit.>. A cylindrical multi-layer collimator protects an inner spectrometer from receiving off-axis photons up to a certain energy threshold, and a rotating obturator periodically shields the aperture of the collimator to modulate the Field of View (FoV) to discriminate the CXB flux from other components and backgrounds.
In this work, we present an improved detector concept which mainly consists of an array of collimated spectrometers with rotating obturators on top of the apertures. The detector could be operated on a space station, on a small satellite or even on a CubeSat. The science goals of this detector are discussed in Sect. <ref>, while the instrument concept, calibration, integration and simulated performance are presented in Sect. <ref>-<ref>, respectively. The summary and discussion are given in Sect. <ref>.
§ SCIENCE GOALS
§.§ Isotropic CXB measurement
The CXB flux is roughly isotropic over the sky <cit.>. This isotropic flux can be measured, on average, by an instrument able to collect X-ray photons from different fields (preferably blank sky) and to filter out the non-X-ray background and known discrete sources. Previous measurements however remain affected by large uncertainties on the CXB spectral shape and normalization, which ultimately limit our knowledge of the accretion power and of the fraction of heavily obscured AGN in the Universe.
The CXB flux and spectrum have been measured by ASCA/SIS <cit.>, ROSAT <cit.>, RXTE/PCA <cit.>, XMM-Newton <cit.>, Chandra <cit.> and Swift/XRT <cit.> at soft X-rays, and by HEAO1 <cit.> and more recently by Beppo-SAX <cit.>, INTEGRAL <cit.> and Swift/BAT <cit.> at hard X-rays. The measurements are in agreement at a level of 10-15% throughout the full energy range <cit.>.
The method to perform the CXB synthesis has been developed in the seminal works of <cit.> and improved in the following works. To synthesize the CXB spectrum three main “ingredients” must be known: an accurate description of the broadband spectra of the various AGN classes (including reflection effects), the X-ray luminosity function (XLF) which gives the number density of AGN per comoving volume as a function of luminosity and redshift, and the distribution of AGN as a function of absorbing column density (N_ H), the so-called N_ H distribution.
AGN spectra are provided as spectral templates for the various AGN classes: previous works set the parameters of their spectral templates to values representative of observations <cit.> or of models <cit.>. <cit.> used spectral templates derived by stacking Swift/BAT data for various types of AGN. The AGN X-ray luminosity function is derived from deep surveys <cit.> and hence is available only in the soft X-ray range. The N_ H distribution can be derived from data <cit.> (although it is biased against the detection of highly absorbed sources) or from models <cit.>.
The XLF is directly proportional to the CXB normalization, while the spectral templates and the N_ H distribution are mostly determined by the CXB spectral shape, in particular by the spectral slopes below and above 30keV and the break in between. An uncertainty of 10%-15% on the spectrum of the CXB at soft and hard X-rays corresponds to an uncertainty of up to 0.1 on the spectral slopes and of 0.2 on the spectral break. Considering that spectral templates derived from Swift-BAT share a calibration similar to that of the BAT-derived CXB spectrum, the above uncertainties could be halved when such templates are used to compare the CXB synthesis to the BAT observations.
The contribution of Compton thick (CTK) sources to the CXB flux at 30keV is estimated to be 4%-6% based on the CXB synthesis. As this is about half of the 10%-15% uncertainty, this contribution is highly dependent on the CXB spectral uncertainties. A determination of the CXB spectral shape with a few percent of uncertainty is therefore required to better estimate the fraction of CTK AGN.
Our proposed detector attempts to measure the CXB spectrum in the 10-100keV with 1% precision, to significantly improve the study of the above topics. This instrument could also extend the CXB measurement up to 1MeV (although with less accuracy). Current data are scarce in this range and an improvement is of interest. The integrated Hawking radiation (HR) of primordial black holes (PBHs) with different mass scales could leave a signature in the isotropic CXB at energies above 100keV. <cit.> and <cit.> derived upper limits to the PBH density using the diffuse X/γ ray background. Extending the CXB spectral measurement to 1MeV together with a better estimate of the contribution of AGNs, derived from lower energies, will improve significantly these limits <cit.>.
§.§ Anisotropic CXB measurement
The CXB is characterized by small-scale fluctuations and a large-scale anisotropy. The CXB is a superposition of numerous discrete sources, including undetected/unresolved faint sources, whose number statistically fluctuates from field to field on small scales. This imprints fluctuations on the CXB fluxes with a scale of Ω_e^-0.5 S_ min^0.25 <cit.>, where Ω_e is the field size for evaluation, and S_ min is the detection threshold of the instrument. Detecting such fluctuations is out of the scope of our instrument; however, a dipolar anisotropy of the CXB is expected. First, our proper motion with respect to the distant Universe, where the bulk of the CXB is emitted, should result in a dipolar anisotropy with an amplitude of 0.42% due to the Compton-Getting (CG) effect <cit.>, in a direction matching that of the Cosmic Microwave Background (CMB) dipole <cit.>. Then the distribution of AGNs in the local Universe could produce a small additional anisotropy, the amplitude of which has been estimated as 0.23-0.85% by <cit.>. HEAO-1 A2 <cit.> and RXTE <cit.> have measured dipole amplitudes of < 3% and of ∼2% respectively, after subtraction of the CG effect. <cit.> further found that most of the observed CXB anisotropy (CG effect subtracted) can be attributed to low-luminosity AGNs. Better observational constraints are needed to improve these estimates. This however requires an accuracy of 0.1-0.5% on the average CXB intensity.
§.§ Secondary goal
Gamma-Ray Bursts (GRB) are among the most energetic explosions since the Big Bang, with an average rate of one event per day at cosmological distances <cit.>. Assuming an isotropic GRB occurrence, we hardly expect to see a serendipitous GRB in the FoV of our instrument (only one in 5 years). However, the prompt emission of short Gamma-Ray Bursts (sGRBs) is generally hard, with peak energies reaching ∼490keV <cit.>. Photons of such sGRBs could therefore penetrate the platform structure/instrument housing and reach the detector. Missions like INTEGRAL and Insight-HXMT have successfully employed their internal anticoincidence detectors to monitor GRBs with a nearly omnidirectional FoV <cit.>. Based on the log N-log P relationship (where P is the 50-300keV peak flux) of the 4th BATSE GRB Catalog <cit.> and the anticipated instrumental performance of our instrument (see Sect. <ref>, on average 36cm^2 of off-axis effective area and a background level of 76.5cnts/s in 100-300keV), we expect a detection rate of roughly ∼4 GRBs (about 3 long and 1 short) per year with a detection significance >5σ. The brightest GRB(s) could be localized with an accuracy of a few degrees thanks to the direction-dependent instrumental response <cit.>.
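The quoted detection rate follows from a simple signal-to-noise estimate. The sketch below (Python) illustrates such an estimate using the average off-axis effective area and background level quoted above; treating the BATSE-band peak flux as directly convertible through a single average effective area, and the 1 s integration timescale, are simplifying assumptions made for illustration only.

    import math

    A_EFF = 36.0      # cm^2, average off-axis effective area (100-300 keV)
    BKG_RATE = 76.5   # counts/s, instrumental background in 100-300 keV

    def grb_significance(peak_flux, dt=1.0):
        """Poisson significance of a burst with the given peak photon flux
        (photons/cm^2/s), integrated over dt seconds."""
        signal = peak_flux * A_EFF * dt
        background = BKG_RATE * dt
        return signal / math.sqrt(signal + background)

    # Under these assumptions, a peak flux of ~1.6 photons/cm^2/s crosses
    # the 5-sigma threshold on a 1 s timescale.
    for flux in (0.5, 1.0, 1.6, 3.0, 10.0):
        print(f"{flux:5.1f} ph/cm^2/s -> {grb_significance(flux):4.1f} sigma")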
Luminous gamma-ray pulsars like the Crab pulsar and PSR B1509-58 will be detected each time they pass through the FoV, so that they can be monitored over the duration of the mission. The timing resolution (at a level of μ s) and the effective area will be good enough to detect their pulsations. Even when the Crab pulsar is out of the FoV, high-energy photons can penetrate the collimators and be detected using phase folding, even in the case of a very unfavorable signal-to-background level, as demonstrated with the POLAR detector <cit.>. Detection of the pulsar(s) can be used to calibrate the absolute time stamping and to perform pulsar navigation <cit.>. Dedicated simulations will be performed later to evaluate such secondary goals.
§ INSTRUMENT CONCEPT
§.§ Overview
The diffuse nature of the CXB makes it difficult to be separated from the instrument background. Bright sources, additional sources of diffuse background and instrumental background need to be filtered out. An accurate method to distinguish these different components is crucial.
Some experiments like ASCA/SIS <cit.>, Beppo-SAX <cit.> and RXTE/PCA <cit.> have deeply exposed some high galactic latitude blank sky regions to measure the CXB subtracting the same level of exposure obtained on the dark side of the Earth. INTEGRAL <cit.> and Swift/BAT <cit.> used Earth occultations, during which the Earth transits the field of view and modulates the CXB and other components. HEAO-1 <cit.> used an onboard obturator to separate the CXB from other components.
The detector proposed here utilizes passive collimators and onboard obturators to model the fluxes registered in the detector. Collimators (see Sec. <ref> & <ref>) are used to block surrounding emissions with energy-dependent transparency to reduce the contamination out of the FoV. Obturators (Sec. <ref>) will periodically shield the aperture of the collimator, and introduce a modulation of in-FoV components to separate them from the instrument background noise. To drive such obturators, a compact wheel system is developed (Sect. <ref>).
The science goals mentioned in Sect. <ref> require observing an energy range of 10-511 keV with a sensitive spectrometer (Sec. <ref>). We propose to use a new generation of CeBr_3 scintillating crystals (Sect. <ref>) which have been studied to assess their suitability as spectrometer modules for space missions <cit.>.
Overall, the detector consists of an array of collimated spectrometers with rotating obturators on top of the collimators. Hereafter, we present the concepts and mechanical designs of the aforementioned components of the detector.
§.§ Collimator
The collimator is a cylindrical tube made of four metal layers, which are Aluminium (Al), Tin (Sn), Copper (Cu) and Al from outer to inner. This Al-Sn-Cu-Al multi-layer shields off-axis X-ray photons. The innermost layer emits K-shell fluorescence (< 2keV) below the energy threshold of the detector. With thicknesses of 1-1-1-2mm for the Al-Sn-Cu-Al layers (the effective thickness is larger by a projection factor of 1/sinθ, where θ is the incident angle), photons below ∼100keV are expected to have a <0.1% transparency through the collimator tube. The length and inner diameter of the tube are 250 mm and 25 mm respectively, resulting in a FoV of around 26 square degrees (Full Width at Half Maximum, FWHM).
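The quoted FoV follows from simple geometry, as sketched below (Python). For a cylindrical collimator the angular response is approximately triangular with a FWHM of about atan(d/L); taking the FWHM field of view as the circular area of that angular diameter is an approximation that neglects the energy-dependent transparency of the shield.

    import math

    L_MM = 250.0   # collimator length
    D_MM = 25.0    # inner diameter

    fwhm_deg = math.degrees(math.atan(D_MM / L_MM))   # ~5.7 deg full width
    fov_deg2 = math.pi * (fwhm_deg / 2.0) ** 2        # ~26 square degrees

    print(f"FWHM opening angle: {fwhm_deg:.1f} deg")
    print(f"FWHM field of view: {fov_deg2:.0f} deg^2")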
§.§ Spectrometer
§.§.§ Crystal
The CeBr_3 crystal will be used as scintillation material of the spectrometer module, with a diameter and thickness of 25 mm and 20 mm respectively. The CeBr_3 provides improved detection performances with high light output (60 photons per keV), excellent energy resolution (∼ 4% at 662keV, FWHM) and fast fluorescence decay time (17 ns) <cit.>, which makes it suitable for the SiPM readout. With a thickness of 20mm, the CeBr_3 will keep ∼100% detection efficiency up to 200keV and still reach a 60% efficiency at 511keV, which is the emission line from a β^+ decay calibration source. <cit.> and <cit.> have tested the CeBr_3 crystal with increasing proton fluences ranging from 10^9 to 10^12 protons cm^-2: the light yield was barely affected and the measured FWHM energy resolution at 662 keV was broadened by only 0.1%. This shows its high radiation hardness against proton-induced damage, which guarantees a stable performance without severe degradation in space.
The CeBr_3 crystal will be sealed in a 0.1mm thick Aluminum frame with an internal optical reflector on the top and sides. The top will additionally be coated by a 0.1 mm thick beryllium window to stop low-energy charged particles. The resulting low energy threshold for X-ray photons will be ∼10keV. At the bottom, an optical interface will connect the crystal to a Quartz window and to the Silicon photomultiplier.
§.§.§ Silicon PhotoMultiplier
Silicon PhotoMultiplier (SiPM) is increasingly used in space-borne detectors. It has large Photo Detection Efficiency (PDE), and its light yield lowers the low energy threshold. Other properties, such as good quantum efficiency, low bias voltage, compactness, robustness and insensitivity to magnetic fields, relax the detector design constraints.
A disadvantage of SiPM is its high dark current, which is highly temperature-dependent and would increase as radiation dose accumulates in space. The temperature will have to be kept as low as possible, so that the temperature variation can be measured and calibrated. The radiation damage is more problematic and unavoidable; it will result in an increase of the threshold energy of the detector. The SiPM degradation was studied[<https://indico.cern.ch/event/1093102>] and constrained with an irradiation campaign <cit.>, showing that the CeBr_3 crystal efficiently protects the SiPM and that an order of magnitude increase of the dark current can be expected after one year in space, which would result in a threshold increase of a few keV per year at around -20^∘C <cit.>.
§.§.§ Electronics
Suitable electronics were already developed for the 64-channel SiPM arrays of POLAR-2 <cit.>. Each polarimeter module of POLAR is equipped with a Front-End Electronics (FEE) board (CITIROC-1A ASICs) to power (temperature regulated) and read out the SiPM channels, define the trigger logic, and communicate with a Back-End Electronics (BEE) board connected behind. The BEE board takes care of power supply, overall data acquisition, communication with the platform and mission control, etc.
§.§.§ Bottom Lead chamber
A Lead (Pb) chamber will additionally be placed at the bottom to reduce the effective radiation doses received by the scintillators and SiPMs from the back side of the instrument. An 8-mm thick Pb layer is able to stop 100 keV photons at a level of more than 99%.
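The shielding power of the chamber can be checked with the standard exponential attenuation law, as sketched below (Python). The mass attenuation coefficient of lead at 100 keV is an approximate tabulated value assumed here for illustration; precise values should be taken from XCOM-like databases.

    import math

    MU_RHO_PB_100KEV = 5.5   # cm^2/g, approximate mass attenuation coefficient of Pb at 100 keV
    RHO_PB = 11.35           # g/cm^3, density of lead
    THICKNESS_CM = 0.8       # 8 mm chamber wall

    transmission = math.exp(-MU_RHO_PB_100KEV * RHO_PB * THICKNESS_CM)
    print(f"Transmission at 100 keV: {transmission:.1e}")           # far below 1%
    print(f"Stopped fraction: {(1.0 - transmission) * 100:.4f} %")  # well above 99%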
§.§ The obturator and wheel system
§.§.§ Obturator
The obturator consists of the same sandwich layers as the collimator but twice as thick (10mm, 0.75kg). There are two obturators configured as a propeller, which are counter-rotating to compensate the angular momentum (moment of inertia of 0.002kg m^2 each). The rotation rate of the obturator is set to one rotation per minute (rpm); over one rotation the space environmental background does not change significantly, thus the background levels registered in the tubes during the transit of the obturator are constant. An angular encoder is set in the wheel system to constantly record the rotation phase of the obturator.
A schematic drawing of the obturator is shown in Fig. <ref>. Each obturator is individually symmetrical to avoid shifting the center of mass. The opening angles of the inner and outer sectors are defined to be 75^∘ and 45^∘, such that a few tubes are always closed by the obturators. Furthermore, the closure by one or two obturator(s) offers a chance to study the induced radioactivity of the obturator and collimator (similar layers) in space, thus helping to understand the internal background of the instrument.
§.§.§ Wheel system
The obturators will be driven by a compact wheel system, which is adapted in a crankcase with a length of 150 mm and a diameter of 40 mm. The combination of the obturators and wheel system is mushroom-like, where the root will be inserted at the center of the collimator array (see Sect. <ref>), such that the obturators could cover the aperture of the collimator tubes. Fig. <ref> shows the cutaway view of the wheel system. The gearing provides two counterrotating coaxial shafts from the unidirectional shaft driven by the motor. A model of this complete system has been built using a Maxon motor[<https://www.maxongroup.net.au/maxon/view/content/overview-BL-DC-Motoren>] and underwent some preliminary space qualification. Tagged radioactive sources will be attached beneath the lower obturator (see Sect. <ref>). The wiring of the tagged source requires standard slip rings in the wheel system for signal connection.
Moving mechanisms are difficult to operate in space. First, the gears should be robust and especially the ball bearings have to overcome the grouping, vibration and frictional torque problems as shown in <cit.>. Then the motor is required to work in a vacuum and in a large temperature regime <cit.>. Thermal-vacuum tests need to be carried out for space qualification to reduce risks of failure. In the worst-case scenario, if the propeller is stopped, the detector loses the flux modulation, but the partially or entirely closed and open tubes still enable background modeling.
§.§ Thermal constraints
In order to reduce the impact of the dark noise of the SiPM and to maintain a relatively low energy threshold, a cooling system is needed to reach, ideally, below -20^∘C. The solar radiation power to the instrument is orbit dependent as the Sun incident angle varies. The averaged power over one year can be estimated at 1361W m^-2 beyond the Earth's atmosphere <cit.>, and the planetary albedo at about 35% <cit.>. The whole instrument, with the dimensions given in Sect. <ref>, will be heated by an average of 13W. Taking into account ∼30W of additional heating from the electronics, the total cooling power needed is <43W. The tubes of the instrument (with a total area of 0.21m^2 for 18 tubes) could serve as passive radiators for dissipation. The calculated thermal equilibrium temperature of the instrument is expected to be 252K (-21^∘C), with several degrees of variation because of the modulation of the Solar power along a 90-minute orbit.
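The quoted equilibrium temperature is consistent with a simple radiative balance between the total heat load and the blackbody emission of the tube surfaces, as sketched below (Python). The radiator emissivity is not specified in the text; the value of 0.9 used here is an assumption.

    SIGMA = 5.670e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
    POWER_IN = 43.0       # W, solar + albedo + electronics heat load
    RADIATOR_AREA = 0.21  # m^2, total outer area of the 18 tubes
    EMISSIVITY = 0.9      # assumed radiator emissivity

    t_eq = (POWER_IN / (EMISSIVITY * SIGMA * RADIATOR_AREA)) ** 0.25
    print(f"Equilibrium temperature: {t_eq:.0f} K ({t_eq - 273.15:.0f} C)")
    # ~252 K (about -21 C), in line with the value quoted above.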
§ CALIBRATION
Accurate knowledge of the detector's spectral properties, including the Energy-Channel (E-C) relationship (linearity), energy resolution, detection efficiency, etc., is crucial to measure the CXB normalization. Some of the calibrations will be carried out on ground and some need to be performed during operations.
The absolute detection efficiencies (versus energy and time) depend on geometry (FoV), photopeak versus Compton ratio, light yield of CeBr_3, scintillation light efficiency of transmission and collection, the quantum efficiency of the SiPM, etc. The geometry needs to be characterized very precisely on the flight model before launch and is not expected to change afterwards. During the mission, gradual changes of some parameters are expected: i) the SiPM performance varies with temperature but this can be adjusted (biasing voltage on SiPM is automatically adjusted at all times by the electronic); ii) the SiPM performance can change due to radiation damage; iii) all the CeBr_3 characteristics can change due to aging and radiation damage. The latter two will have an impact on the E-C relationship, energy resolution and detection efficiency, which need to be monitored in orbit.
§.§ On ground
Before launch, the detector will be fully assessed and its spectral responses measured using standard radioactive sources, such as ^241 Am (half-life 432.2 years, specific activity 126.8 GBq/g) and ^22 Na (2.6 years, 1580 TBq/g). The α decay of ^241 Am generates gamma rays of 13.9, 17.8, 26.4 and 59.6keV. ^22 Na undergoes β^+ decay, emitting a positron that immediately annihilates and releases two gamma-ray photons at 511keV, in coincidence with a 1.2 MeV photon. The energy-channel relationship and energy resolution at different energy peaks will be obtained by measuring the channel spectrum of these sources with the detector.
The absolute detection efficiencies versus energy can be measured by recording the single and coincident photon counts of tagged radioactive sources peaking at different energies. For example, the decay of a ^241 Am source, continuously monitored by a tagged scintillator, can be marked by the triggering of the scintillator above a relatively high threshold (α decay at MeV). Then a detector module can be placed near the source to collect photons. The expected number of photons arriving at the detector is directly proportional to the counts of the scintillator, scaled by the ratio of the collecting solid angle over the full emission solid angle (4 π, assuming an isotropic decay). The division of the coincident photon counts by this expected number then gives the efficiency. The accuracy of the efficiency calibration is purely limited by statistics and is approximately the square root of one over the coincident counts. At 511keV, an alternative calibration approach is to use the ^22 Na source, which can be placed between two detector modules; the single and coincident events then offer a set of equations to resolve the unknown efficiencies of both detector modules.
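The procedure can be summarized by the short sketch below (Python); the numbers used in the example call are purely illustrative and are not measured values.

    import math

    def tagged_efficiency(n_tagged_decays, branching_ratio, solid_angle_ratio,
                          n_coincident):
        """Absolute efficiency from a tagged source and its relative
        statistical error (Poisson-limited)."""
        n_expected = n_tagged_decays * branching_ratio * solid_angle_ratio
        efficiency = n_coincident / n_expected
        rel_error = 1.0 / math.sqrt(n_coincident)
        return efficiency, rel_error

    # Illustrative values: 10^6 tagged decays, 36% branching at 59.6 keV,
    # 3.1% collecting solid angle, 6500 coincident photopeak counts.
    eff, rel = tagged_efficiency(1e6, 0.36, 0.031, 6500)
    print(f"efficiency = {eff:.2f} +/- {eff * rel:.2f}")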
Finally, a synchrotron radiation facility could generate a collimated broadband X-ray beam with precisely known spectra. An X-ray continuum (covering 10-100keV) irradiating the instrument, allows to calibrate the spectral response as a function of energy, cross-checking and filling the gaps continuously between the discrete energies provided by the radioactive sources.
A Monte-Carlo simulator of the instrument based on Geant4 <cit.> will be developed to verify all the aforementioned calibration procedures. Additionally, as the thermal conditions will change in orbit, we will evaluate all the temperature-regulated parameters, in particular those of the SiPM and electronics. The degradation of the SiPM and of the crystal with radiation dose will be evaluated through irradiation with protons (the major background in space, see <ref> for details).
§.§ In orbit
There are two specific challenges in orbit. The first is to cross-check all the parameters calibrated on ground. The second is that the detectors must be monitored for aging and degradation due to radiation. To cope with them, two tagged ^241 Am sources will be attached beneath the lower obturator (as shown in Fig <ref>, the inner and the outer sector of the lower obturator will each carry one source, which will pass over the centers of the inner and outer tubes, respectively). As the calibration sources periodically transit the FoV, the calibrations can be done independently for every detector tube (they should be uniform at the beginning). The first in-orbit calibration will validate the spectral response characterized on ground.
The E-C relationship and energy resolution can be calibrated in the same way in orbit as on ground (one energy point is sufficient). This will be done constantly to monitor the aging and degradation of the detector. The E-C relationship is closely linked to the performances of SiPMs, which are sensitive to radiation dose and temperature. Their influence will be periodically checked. The energy resolution is anticipated to not be dramatically changed as the CeBr_3 has high radiation hardness (Sect. <ref>). Even a slight change will be monitored as time-dependent responses for scientific analysis.
The detection efficiency calibration in orbit is slightly different from the one on ground, since the latter is static whereas the efficiency evolves in orbit. The rotation phase of the obturator allows binning the exposure of the calibration sources for every tube. The detection efficiency of every tube can then be calibrated as it was done on ground (<ref>).
A 200Bq ^241 Am source will create about 10^4 events per day in the photopeak region of the 59.6keV photon of the tagged spectrum of each tube (0.99 tagging efficiency, 0.36 branching ratio of the decay at 59.6keV, 0.031 solid angle ratio, 0.13 effective exposure time, 0.6 photopeak efficiency, 0.8-1 deadtime ratio). So, statistically, the efficiency at 59.6keV of each tube can be measured at the 1% level each day. Some other lines of ^241 Am can be used to calibrate the efficiency, energy-channel relationship and energy resolution at lower energies, but with a lower accuracy limited by statistics. In addition, the induced instrumental background will include a 511keV feature in the accumulated spectra. This line can also be used for calibrating the E-C relationship and energy resolution. Furthermore, since the inner and outer detector tubes have different radiation acceptances, the evolution of their correlated spectral properties will allow further characterization of their degradation.
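The ~10^4 events per day quoted above follow directly from the product of the listed factors, as in the short check below (Python); the dead-time ratio is taken in the middle of the quoted 0.8-1 range, which is an assumption.

    import math

    ACTIVITY_BQ = 200.0
    SECONDS_PER_DAY = 86400.0
    TAGGING_EFFICIENCY = 0.99
    BRANCHING_59KEV = 0.36
    SOLID_ANGLE_RATIO = 0.031
    EFFECTIVE_EXPOSURE = 0.13   # fraction of time the source transits a tube
    PHOTOPEAK_EFFICIENCY = 0.6
    DEADTIME_RATIO = 0.9        # assumed midpoint of the 0.8-1 range

    events_per_day = (ACTIVITY_BQ * SECONDS_PER_DAY * TAGGING_EFFICIENCY
                      * BRANCHING_59KEV * SOLID_ANGLE_RATIO * EFFECTIVE_EXPOSURE
                      * PHOTOPEAK_EFFICIENCY * DEADTIME_RATIO)
    daily_accuracy = 1.0 / math.sqrt(events_per_day)

    print(f"59.6 keV photopeak events per day and tube: {events_per_day:.0f}")
    print(f"daily statistical accuracy: {daily_accuracy * 100:.1f} %")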
The selection of coincident events can be done offline or in real-time with a coincidence time resolution of 100 ns, to reduce telemetry. We expect 3 random coincident events per day and detector, which is completely negligible compared to the 10^4 real coincidences.
These calibrations will be performed every day to monitor aging and radiation dose. Additionally, the Crab broadband spectrum will allow cross-checking and monitoring the shape of the energy responses of the instrument. The Geant4 simulator will model the Crab observations with respect to the on-ground calibrations and allow modeling of the gradual change of the spectral response.
§ DETECTOR INTEGRATION AND PLATFORM REQUIREMENTS
As the space environmental background is highly orbit-dependent (Sect. <ref>), a suitable orbit is important to achieve the science goals. The best orbit is an equatorial orbit that never enters the South Atlantic Anomaly (SAA), but flight opportunities are not frequent. The very popular Sun-synchronous orbit for CubeSat is not well suited for our detector as it passes frequently in the SAA and suffers from continuous Solar radiation.
A typical low Earth orbit (LEO) has an altitude of 300-500km and an inclination of a few tens of degrees. Lower orbits (<400km) are preferable but the payload would deorbit rapidly. 500km is the minimum altitude for a free flier without propulsion. In the following, we will assume an orbital altitude of 500km and an inclination of 42^∘. Since our instrument needs to continuously scan outer space, pointing to the zenith is required.
As the mission platform will constrain the available resources (size, mass and power consumption) and determine the performance of the detector, we considered a 12U CubeSat implementation and a station-based modular design.
§.§ CubeSat version
We have integrated our detector into a 12-Unit (U) CubeSat payload, translating to 2*2*3 U (one U corresponds to 10*10*10 cm^3). Such a configuration is shown in Fig. <ref>, where the transparent pink box symbolizes 12 U as a reference. It allows the placement of 18 tubes (each includes a collimator and a spectrometer), all of which have the same dimensions: 28cm in height and 35mm in external diameter.
The tubes are placed along a dual-ring structure. The 6 inner tubes will be shaded by the 12 outer ones and get less background from the sides, allowing a study of the background systematics. Compact electronics (Sect. <ref>) will be placed beneath the wheel system in the center to power and communicate with the motor and spectrometers; the total power consumption is ∼30W.
The necessary platform modules (e.g., power, communication and orbit control) could occupy the corners and sides or another 4-U. This very conservative configuration with 18 tubes is chosen to bring a lot of redundancy to fight systematic effects. A smaller number of tubes (as low as 4 tubes as seen in Fig. <ref>) is possible if considering only statistical effects. Smaller CubeSat (4U, 8U) could probably reach the scientific goals if sacrificing redundancy but could require longer exposure to understand systematical effects due to the space environment.
§.§ Station-based version
Another configuration for a Station-based platform is shown in Fig. <ref>. It is based on four groups, each containing four tubes that are twice the size of the CubeSat version, with a height of 500mm and an inner diameter of 50mm. This provides a collecting area four times bigger than of the CubeSat version. Four groups of obturators are placed on the top, each including a symmetrical sector (opening angle is 90^∘). Neighboring obturators are counter-rotating to compensate for angular momentum. The wheel system is simplified as it needs unidirectional rotation.
§.§ Technical Readiness
The readout electronics are expected to be ready in early 2023 by adapting that of POLAR-2. A prototype of the detector unit will be built right after, to characterize its spectral response and detection efficiency. The overall design will then be finalized, integrating a test model of the detector for comprehensive qualification tests including motor, thermal, vibration, radiation and geometry, lasting until the end of 2023. As soon as a launch opportunity is determined, a flight model will be built by integrating with the platform interfaces and tested. Overall, a flight model can be expected to be delivered in 24-36 months.
§ SIMULATED PERFORMANCE
§.§ Detector spectral responses
In this section, we consider the CubeSat configuration (Sec. <ref>) to evaluate the performance (the station-based version (Sec. <ref>) has better statistics thanks to a bigger photon collecting area). The spectral responses and the background are generated using the Geant4 simulation package (version 10.6.2, <cit.>). Geant4 integrates comprehensively the relevant physics processes.
The mass model of a single tube unit (obturator, collimator, and spectrometer) and the related physical processes have been implemented in Geant4. Monoenergetic photons (covering 10-1000keV with reasonable steps) have been injected with different incident angles and shield coverages of the obturators. Fig. <ref> shows the effective areas for open and closed tubes as a function of incident angle θ and energy. The right panel indicates that the low energy threshold of the closed tubes on-axis is ∼100keV. The left plot indicates that the opening angle of the open tubes below 100keV is ∼6^∘. The energy response matrices are also obtained as a function of incident angle and energy.
Spectral responses for the full detector (made of 18 tubes) were also calculated. Tubes of the inner ring are shielded by those of the outer ring, resulting in a non-uniform (or more direction-dependent) response. The instrument has more triggers on the external sides of the outer-ring tubes, providing a localization capability for luminous transient sources like GRBs. Such an idea has been widely applied by multiple instruments (e.g., we have done that for POLAR <cit.>) and will be developed for the instrument presented here in the future. A conservative estimate of the localization accuracy is a few degrees. For the CXB, the instrument response remains symmetrical around the zenith.
§.§ Expected count rate
The Burst Alert Telescope (BAT) onboard the Swift observatory has successfully carried on an all-sky hard X-ray survey at 14-195keV and detected 1632 sources in 105 months <cit.>. The source catalog is available on-line[<https://heasarc.gsfc.nasa.gov/W3Browse/all/xray.html>]. We used that catalog to predict the expected count rates from the sources. The CXB count rate was calculated by convolving spectral templates <cit.> with the simulated detector responses.
The derived count rate (10-100keV) distribution over the sky is shown in Fig. <ref> in the unit of the CXB rate (0.129 counts/s/tube) with a bin size corresponding to the FoV of a tube. On average one source is contributing per sky bin. Many bright point sources on the galactic plane, could easily be filtered out.
We also plot the count spectra of the CXB and of five luminous sources in Fig.<ref>, for a 2-year exposure time and a CubeSat mission with 18 tubes (note that only the open tubes, i.e. half of them, contribute to these counts). The CXB will be detected with about 100 times more counts than the brightest sources (which could easily be excluded).
§.§ Background estimation
We assume that the instrument will fly in a Low-Earth Orbit (LEO), where the space environmental background is the main concern. In our background studies, we have included the standard Shielding Physics List in Geant4, including electromagnetic physics, hadronic physics and radioactive decay physics (delayed background).
§.§.§ Cosmic rays
When approaching the Earth, low-energy cosmic rays (CRs) are deflected by the geomagnetic field. Higher-energy ones interact with the atmosphere, creating secondary particles that produce noise in the detector. The South Atlantic Anomaly (SAA), caused by the offset of the Earth’s magnetic center from the geographic center, features a weaker geomagnetic field and an increased particle (mainly protons and electrons) environment <cit.>.
Even though the instrument will be switched off in the SAA to protect the detector, the delayed background originating from the radioactivity induced by the SAA particles must be taken into account. A light flying unit (a single CubeSat) will develop less induced background than, e.g., an instrument on a space station.
We estimated the primary particle background spectra, based on the works by <cit.> and <cit.>, for an orbit at a typical altitude of 500km and inclination of 42^∘. The majority of the primary CRs are protons with a spectrum that can be approximated by a power-law <cit.>. We used an orbital averaged spectrum extracted from ESA's SPace ENVironment Information System [<https://www.spenvis.oma.be>] (SPENVIS). Other significant particles are electrons and positrons and we used a spectral model developed by <cit.>, often used for background simulation of space instruments.
The spectra of secondary particles depend on the geomagnetic latitude <cit.>. By averaging results obtained by AMS-01 <cit.>, an averaged spectra of secondary protons could be extracted. The spectra of secondary electrons and positrons are provided by <cit.>. Only a very small portion of them could be mistaken as photons as the low-energy ones are stopped by the beryllium window and cannot reach the detector.
§.§.§ Albedo gamma rays
The Earth's atmosphere radiates albedo gamma rays outwards, produced either by the reflection of the CXB (minor contribution) or by the interaction between CRs and the atmosphere. Their spectra are provided by <cit.>. Since they come from below the instrument, their influence is highly correlated with the angle between the Earth and the FoV, which can be used to filter them out. These photons are rather soft, and only the high-energy ones will be able to go through the tube shield.
§.§.§ Delayed background
The delayed background originates from the radioactivity mainly induced by the trapped particles in the SAA. Geant4 is able to simulate the amount of induced radiation hitting the instrument from the fraction of the time spent in the SAA. The radioactive isotope production and decay properties can be characterized. The level of delayed background increases with time and gradually saturates. This background will develop characteristic lines (mainly 511 keV). The material of the detector and of the platform has to be chosen to minimize radiogenic materials.
§.§.§ Background rates
The spectra of the different components aforementioned, either adapted from the AMS measurements or from SPENVIS, are shown in Fig. <ref>. They are all considered to be isotropic inputs to the instrument simulator, the mass model of which is constructed by taking into account the geometry defined in Sect. <ref>. The anticipated background rates are shown in Fig. <ref>. Table <ref> lists the rate of different background components in two energy ranges.
§.§ CXB measurement
We are evaluating here the accuracy of the CXB normalization determination in the energy band 10-100 keV for an increasing number of tubes and exposure time.
For data selection purposes the sky is divided into 3072 pixels (healpix binning with N_ side = 16). Sky pixels with |b|<10^∘ or covering the Magellanic clouds or including (non-AGN) sources in the Swift-BAT catalog <cit.> are excluded. Most remaining sources at high galactic latitude are nearby CVs, stars and unidentified sources, which are the bright components of the GRXE population. Pixels with a count rate significantly above their neighbor will also be excluded. As shown in Fig. <ref>, globally about 23% of the sky will be disregarded.
Photons from unresolved galactic X-ray sources and from the instrumental background need to be subtracted from the observations. The statistical accuracy of the CXB measurement can be estimated as P_sta = (√(C+B)+U)/C, where C, B and U represent the total numbers of counts detected in all open tubes from all the considered sky pixels from the CXB, from the instrumental background together with known galactic X-ray sources, and from the unresolved Galactic Ridge X-ray Emission (GRXE), respectively.
The GRXE is made of numerous unresolved faint sources like coronally active binaries, cataclysmic variables, etc., and extends up to a galactic latitude of |b|=20^∘. It has a constant spectral shape, softer than the CXB, and an intensity (measured by INTEGRAL) scaling with the stellar mass <cit.> and in particular with the COBE/DIRBE Zodi-Subtracted Mission Average Maps at 4.9μ m [<https://lambda.gsfc.nasa.gov/product/cobe/dirbe_zsma_data_get.html>]. Convolving this map for the non-discarded pixels and the GRXE spectral template with the response of the detector indicates U∼ 0.291 counts/s/tube in total for 10^∘ < |b| < 20^∘, which corresponds on average to 0.47% of the CXB rate. The GRXE count rate is very low (< 0.067 counts/s/tube) for |b|>20^∘, allowing a very good separation of the CXB and of the GRXE when fitting the full sky.
As the detector tubes switch between the open and closed states thanks to the obturators, the instrument background (about B∼ 4.0 counts/s/tube in the range of 10-100 keV, see Sec. <ref>) can be determined and its evolution precisely modeled (at least within 0.1% in the band 10-100keV).
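The sketch below (Python) transcribes the statistical-accuracy estimate P_sta = (√(C+B)+U)/C for an assumed set of mission parameters. The number of open tubes, the retained sky fraction and the residual GRXE term are illustrative assumptions; the actual estimate accumulates C, B and U pixel by pixel as described above, so the numbers printed here only indicate the scaling with exposure time.

    import math

    CXB_RATE = 0.129       # counts/s per open tube, 10-100 keV
    BKG_RATE = 4.0         # counts/s per tube, instrumental background
    N_TUBES_OPEN = 9       # on average half of the 18 tubes are open
    SKY_FRACTION = 0.77    # fraction of sky pixels kept after filtering
    GRXE_RESIDUAL = 0.001  # assumed residual GRXE term, as a fraction of C

    def cxb_accuracy(mission_days):
        t = mission_days * 86400.0 * SKY_FRACTION
        C = CXB_RATE * N_TUBES_OPEN * t
        B = BKG_RATE * N_TUBES_OPEN * t
        U = GRXE_RESIDUAL * C
        return (math.sqrt(C + B) + U) / C

    for days in (30, 180, 365, 730):
        print(f"{days:4d} days -> statistical accuracy {cxb_accuracy(days) * 100:.2f} %")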
Fig. <ref> gives the resulting statistical accuracy of the CXB normalization as a function of the number of detector modules and mission time.
The CXB normalization measurement also depends on systematic uncertainties in the various spectral components considered above and on the absolute flux calibration of the instrument. As shown in Sect. <ref>, the latter will be calibrated in orbit by tagged radioactive sources with an accuracy reaching 1% every day. Accumulating calibration events over time allows improving the accuracy for longer effective exposure times. Ideally, a 100-day exposure to the calibration source will result in 10^6 calibration events, i.e. a statistical accuracy of 0.1%. While in orbit, the spectral responses (especially the absolute detection efficiency) could change because of the SAA or Solar activity, which will result in time-dependent responses. A dual ring placement of the detector tubes offers a window to study the systematics of such changes day by day and tube by tube. Changes in the response on orbital timescales, for instance following the SAA passage, will be calibrated. Data with significant changes (both on the time interval and tube array) will be discarded, resulting in a loss of exposure time (or collecting area). Therefore the accuracy of the absolute calibration is limited by the accuracy one can achieve on ground and by the systematic variations in orbit. The very large redundancy provided by 18 tubes, and a data set representing orders of magnitude more events than what the statistical error requires, will allow studying and correcting all the possible new systematics due to the space environment. Therefore the final uncertainty will be limited by the ground calibration performance, which we estimate at 1%.
Given these simulations, we expect that a CubeSat mission with 18 detector tubes operating for more than two years would allow to measure the CXB normalization with an accuracy of ∼1%.
In the 100-1000 keV band, the subtraction of the instrumental background would leave 2% uncertainty on the CXB normalization for an 18-tube mission running for two years (based on Table <ref>). However, the collimators gradually become transparent above 100 keV; therefore, all the known non-AGN sources and unresolved galactic components would contaminate the CXB measurement, each category introducing an uncertainty comparable to that of the instrumental background. The efficiency of detection of the 511 keV line will be very well calibrated on ground using a tagged β^+ source. However, in space, each detector will develop a very large 511 keV background line due to activation. This line can be used for energy calibration but it will be very difficult to maintain an absolute efficiency calibration of every tube at high energy. The uncertainty on the CXB normalization in the 100-1000 keV range is therefore estimated at a level of ∼10%.
§ SUMMARY AND DISCUSSION
The CXB is made of the superposition of the emission of celestial sources, mostly AGN. Numerous space missions have measured the CXB spectrum, and a few of them specifically surveyed the AGN population. As a result, the CXB is nearly (>93%) resolved into point-like AGN at soft X-rays (below 10 keV), a percentage that decreases with increasing energy. Accurate measurement of the CXB spectrum and normalization is crucial to study the population of AGN, their obscuration, reflection, average spectra and ultimately the history of accretion in the Universe. The uncertainty on the CXB normalization (∼15%) is one of the main sources of difficulty affecting the CXB modeling.
We propose a detector to determine the CXB normalization with percent-level accuracy. The detector consists of an array of tubes with collimated spectrometers and rotating obturators modulating the signals, allowing a precise extraction of the CXB photons from the background. We present here a preliminary design of the detector, which could be accommodated on various platforms (16-U CubeSat, small satellite, space station).
The 16-U CubeSat option has been used to simulate the instrument performance with Geant4 taking into account the point sources and instrumental background to assess their respective count rates and the resulting accuracy on the CXB normalization. In two years, the CubeSat mission is able to measure it with an accuracy ∼1% in the range 10-100keV ultimately limited by the quality of the calibration performed before the launch. This is a significant improvement compared to the current measurements.
§ DECLARATIONS
§.§ Author's Contribution
R. Walter and N. Produit initialized this project. H. Li performed the simulation and analysis. F. Hubert contributed to the design of the wheel system and CAD drawings. All authors contributed to the instrumental design, manuscript drafting, and reviewing.
§.§ Funding
We acknowledge the support of the Swiss National Science Foundation.
§.§ Conflicts of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
§.§ Consent to participate
Not applicable.
§.§ Consent for publication
Not applicable.
§.§ Code availability
Not applicable.
|
http://arxiv.org/abs/2307.06131v1 | 20230712123459 | Evaluating DNS Resiliency and Responsiveness with Truncation, Fragmentation & DoTCP Fallback | [
"Pratyush Dikshit",
"Mike Kosek",
"Nils Faulhaber",
"Jayasree Sengupta",
"Vaibhav Bajpai"
] | cs.NI | [
"cs.NI"
] |
Evaluating DNS Resiliency and Responsiveness with Truncation, Fragmentation & DoTCP Fallback
Pratyush Dikshit1,
Mike Kosek2,
Nils Faulhaber2,
Jayasree Sengupta1, and
Vaibhav Bajpai1
1CISPA Helmholtz Center for Information Security, Germany
2Technical University of Munich, Germany
===========================================================================================================================================================================================================
Since its introduction in 1987, the DNS has become one of the core components of the Internet. While it was designed to work with both TCP and UDP, DNS-over-UDP (DoUDP) has become the default option due to its low overhead. As new Resource Records were introduced, the sizes of DNS responses increased considerably. This expansion of the message body has led to more frequent truncation and IP fragmentation in recent years, and large UDP responses make DNS an easy vector for amplifying denial-of-service attacks, which can reduce the resiliency of DNS services. This paper investigates the resiliency, responsiveness, and usage of DoTCP and DoUDP over IPv4 and IPv6 for 10 widely used public DNS resolvers. In these experiments, these aspects are investigated from the edge and from the core of the Internet to represent the communication of the resolvers with DNS clients and authoritative name servers. Overall, more than 14M individual measurements from 2527 RIPE Atlas Probes have been analyzed, highlighting that most resolvers show similar resiliency for both DoTCP and DoUDP. While DNS Flag Day 2020 recommended a buffer size of 1232 bytes, we find that 3 out of 10 resolvers mainly announce very large EDNS(0) buffer sizes both from the edge as well as from the core, which potentially causes fragmentation. In reaction to large response sizes from authoritative name servers, we find that resolvers do not fall back to the usage of DoTCP in many cases, bearing the risk of fragmented responses. As the message sizes in the DNS are expected to grow further, this problem will become more urgent in the future.
DNS, DNS-over-TCP, DNS-over-UDP, Response Time, Failure Rate, EDNS(0)
§ INTRODUCTION
The Domain Name System (DNS), which is responsible for the resolution of hostnames to IP addresses, has become one of the most widely used components on the Internet. Hostnames (domain names) are organized in a tree structure that is hierarchically separated into zones. The resolution of domain names is realized by different components such as stub resolvers, recursive resolvers, and authoritative Name Servers (NSes). While authoritative NSes are responsible for the authoritative mapping of domains in a zone to their IP addresses, stub and recursive resolvers cache and deliver such information from the NSes to the clients via a DNS request <cit.> (RFC 1034 <cit.>). DNS communication supports both major transport protocols on the Internet, namely the Transmission Control Protocol (TCP) (RFC 793 <cit.>) and the User Datagram Protocol (UDP) (RFC 768 <cit.>). Due to its comparably low overhead, UDP has become the default transport protocol for DNS. The UDP message body is restricted to 512 bytes (RFC 1035 <cit.>). However, the increase in deployment of DNS Security (DNSSEC) and IPv6 (RFC 7766 <cit.>) has resulted in larger message sizes, thereby leading to two important developments in the protocol. Firstly, DNS-over-TCP (DoTCP) was declared to be mandatory for hosts (RFC 5966 <cit.>) as it enables a larger message body by default. Secondly, Extension Mechanisms for DNS (EDNS) were introduced to augment the capabilities of the DNS protocol in terms of message size expansion (RFC 2671 <cit.>). With the new EDNS capability, DNS replies can continue to be delivered as UDP datagrams even when the response is larger than 512 bytes. Stipovic et al. in <cit.> examine the level of compatibility for a number of public DNS servers for some popular Internet domains while exploring the behavior of some contemporary DNS implementations such as Microsoft Windows 2012, 2016 and 2019 as well as Linux-based BIND with regard to EDNS. However, using too large UDP buffer sizes can cause IP fragmentation in certain networks, thereby reducing resiliency in DNS communication <cit.>. To avoid fragmentation, the DNS Flag Day 2020[<http://www.dnsflagday.net/2020/>], an association of DNS software maintainers and service providers, recommended the usage of a default buffer size of 1232 bytes. DoTCP is a useful measure against fragmentation and can increase DNS resiliency by allowing fallback options. Resolvers should also avoid fragmentation by using the recommended default EDNS(0) buffer size of 1232 bytes. To this end, our paper puts forward three goals: a) to evaluate DoTCP support (both over IPv4 and IPv6) and its usage across several DNS resolvers, b) to analyze the responsiveness/latency over DoTCP and DoUDP for IPv4 and IPv6, and c) to investigate which buffer sizes are currently used in DNS traffic around the globe.
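As an illustration of the mechanisms discussed above, the sketch below (Python, using the dnspython library) sends a DoUDP query that advertises the DNS Flag Day 2020 EDNS(0) buffer size of 1232 bytes and falls back to DoTCP when the response is truncated (TC-bit set). The resolver address and query name are placeholders chosen for the example.

    import dns.flags
    import dns.message
    import dns.query

    RESOLVER = "8.8.8.8"     # example public resolver
    QNAME = "example.com"

    # DoUDP query advertising an EDNS(0) buffer size of 1232 bytes.
    query = dns.message.make_query(QNAME, "A", use_edns=0, payload=1232)
    response = dns.query.udp(query, RESOLVER, timeout=5)

    if response.flags & dns.flags.TC:
        # Truncated response: retry the same query over TCP (DoTCP fallback).
        response = dns.query.tcp(query, RESOLVER, timeout=5)

    for rrset in response.answer:
        print(rrset)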
In pursuit of these goals, we evaluate the behavior of the resolvers from two different vantage points. Firstly, DoTCP adoption, responsiveness, and EDNS(0) configuration are analyzed from the edge, where the interaction between recursive resolvers and DNS clients running on the RIPE Atlas probes is measured. To scope DNS requests to the Edge of the network, we perform DNS queries for a domain that is likely cached by all resolvers, unlike in previous studies <cit.>. Secondly, the interaction of recursive resolvers with authoritative NSes is further studied. To allow DNS requests to leave the edge and move into the Core of the network, we provision dedicated NSes for a custom-crafted domain whose resolution is requested from the DNS resolvers. Using this methodology (see §<ref>), we study failure rates, response times, EDNS options, and the usage of DoTCP and DoUDP, as well as the EDNS(0) configurations, both from the edge and from the core (except for the response time analysis), which gives detailed insights into the potential resiliency of DNS communication on the Internet, as depicted in Figure <ref>. We perform measurements over IPv4 and IPv6 <cit.>. Our main findings (see §<ref>) are –
§.§.§ Resiliency from the edge
We observe that DoTCP requests (4.01%) tend to fail less often than DoUDP requests (6.3%) over IPv4. Contrarily, in the case of IPv6, we find higher failure rates over both transport protocols (DoTCP 10%, DoUDP 9.61%). The analysis of response times for Public and Probe resolvers confirms the pattern of approximately doubled median response times for DoTCP compared to DoUDP in both IP versions. We also observe that several public DNS resolvers still lack adoption (<3.5%) of the 1232B buffer size recommended by the DNS Flag Day.
§.§.§ Resiliency from the Core
We find that DoTCP requests on public resolvers exhibit failure rates of 9.09% over IPv4, compared to a higher failure rate of 11.53% over IPv6. Surprisingly, we find that RIPE Atlas measurements ended successfully even after receiving a response with the TC-bit set, indicating a lack of proper fallback to DoTCP in many probes. Moreover, communication between resolvers and the authoritative NSes uses an EDNS(0) buffer size of 512 bytes much less frequently (IPv4 0.24%, IPv6 0.13%) than what is advertised to the RIPE Atlas probes (IPv4 27.41%, IPv6 26.04%). All DNS resolvers use EDNS(0) in most of the cases (>99.84%). We also see other DNS options such as Cookie (4.80% IPv4, 7.91% IPv6) and EDNS Client Subnet (ECS) (1.81% IPv4, 1.49% IPv6) advertised by the public resolvers, while Google mostly uses ECS (14.24% IPv4, 12.53% IPv6).
§.§.§ DoTCP Usage Rates
We observe that when 2KB responses are received from the NSes, all resolvers that mainly use canonical scenarios (see §<ref>) use TCP in their last request in >95% of the cases. In situations where 4KB responses are received, we observe that almost all resolvers use TCP in the vast majority of measurements over both IP versions (>98%).
This paper builds upon our previous study <cit.>. In this paper, we additionally add significant background information (see §<ref>) related to the monitoring and performance evaluation of DNS query-response over both TCP and UDP transport protocols. Since DNS response times can be a critical metric when using DoTCP fallback, we conduct further measurements comparing DoTCP and DoUDP response times from the edge of the network (see §<ref>). Subsequently, when evaluating from the core, we present additional insights by including a detailed analysis of DoTCP and DoUDP response times for public resolvers (see §<ref>).
Additionally, we perform a deep dive by measuring the number of successful responses that do not contain any valid sections for the DNS queries (see §<ref>). Notably, our investigation reveals instances where RIPE Atlas measurements have terminated successfully despite receiving a response with the TC-bit set, thereby indicating a lack of proper fallback to DoTCP across multiple probes. Towards the end, we discuss the limitations of our study and highlight future research directions in §<ref>, followed by concluding statements in §<ref>.
§ BACKGROUND AND RELATED WORK
§.§ DNS Measurement
To measure DNS failure rates, DNS performance, and the buffer sizes used, several studies have been conducted in the last few years. Some of them are discussed here.
§.§.§ Fragmentation
With the increased message sizes, DNS messages can exceed the MTU of many networks. Giovane et al. <cit.> analyze the fragmentation rates of DNS queries to the .nl top-level domain, showing that fewer than 10k of 2.2B DNS responses observed at authoritative NSes are fragmented. Although fragmentation is in general fairly rare in DNS communication, the consequences can have negative effects on the resiliency and connectivity of Internet applications (RFC 8900 <cit.>). Herzberg and Shulman <cit.> presented an attack allowing attackers to spoof Resource Records (RRs) in fragmented DNS responses, by which they can hijack domains or nameservers under certain conditions. Following a similar procedure, Shulman and Waidner <cit.> showed the opportunity to predict the source port used by the client. Both of these approaches belong to the class of DNS cache poisoning attacks, one of the most common and dangerous attack classes on the DNS. However, cache poisoning attacks are also possible when DNS messages are not fragmented <cit.><cit.><cit.>. The aforementioned studies nevertheless show the additional security risk caused by fragmented responses, which potentially exposes the DNS user to several other types of attacks. Koolhaas et al. <cit.> analyzed the behavior of different EDNS(0) buffer sizes in 2020. It was shown that the likelihood of a failing DNS query increases with growing buffer sizes. For a size of 1500 bytes, the default MTU of Ethernet (RFC 2464 <cit.>), which causes fragmentation of most large DNS messages, DNS queries to stub resolvers failed in 18.92% of the cases over IPv4 and in 26.16% over IPv6. As countermeasures, in 2017, Cao et al. <cit.> presented an ”Encoding scheme against DNS Cache Poisoning Attacks”. Berger et al. <cit.> presented a way of detecting DNS cache poisoning attacks in 2019. Even though Dai et al. <cit.> later showed that DNS cache poisoning attacks are also possible against DoTCP, this emphasizes the importance of DoTCP as a fallback option to the usage of DoUDP. Herzberg and Shulman <cit.> recommend keeping the indicated buffer size less than or equal to 1500 bytes. As a consequence, Weaver et al. summarize a list of recommendations to stakeholders in the DNS ecosystem in <cit.>. These include the proposition for stub resolvers as well as authoritative nameservers to stick to buffer sizes of 1400B and below. The study conducted in <cit.> in 2020 yielded detailed recommendations for the EDNS(0) buffer size configuration of authoritative name servers and stub resolvers, dependent on the used IP version and network type. The recommendations were adopted at the DNS Flag Day 2020, claiming that ”defaults in the DNS software should reflect the minimum safe size which is 1232 bytes”. The aforementioned aspects emphasize the need for DNS resolvers to adopt the buffer size recommendations as fast as possible. Encrypted DNS protocol implementations such as DNS-over-TLS (RFC 7858 <cit.>) and DNS-over-HTTPS (RFC 8484 <cit.>) also counter the problems of fragmentation, as TCP is used as the transport protocol <cit.>. They are, however, not yet widely enough adopted to obsolete standard DNS implementations <cit.><cit.><cit.>. To investigate the progress of DNS resolvers in implementing the new standards, measurements analyzing the buffer sizes used by DNS resolvers are therefore performed from different standpoints. As DoTCP support is another important requirement for DNS resolvers to avoid truncation and fragmentation, DNS failure rates over TCP and UDP are analyzed in this paper.
Additionally, the DoTCP-fallback behavior of the resolvers is studied to see in which cases they make use of TCP. As furthermore response time is the main disadvantage when using DoTCP instead of DoUDP, we are also interested in comparing the two implementations with regard to this aspect.
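To make the interplay between EDNS(0) buffer sizes, truncation, and DoTCP fallback concrete, the following minimal sketch issues a query that advertises the recommended 1232-byte buffer size and retries over TCP when the TC bit is set. It uses the dnspython library with placeholder resolver and domain values; these choices are assumptions for illustration only and are not part of the measurement toolchain used in this paper.

```python
import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"    # placeholder public resolver address
DOMAIN = "example.com"  # placeholder query name

# Build a query advertising the DNS Flag Day 2020 buffer size of 1232 bytes.
query = dns.message.make_query(DOMAIN, "TXT", use_edns=0, payload=1232)

# First attempt over UDP, the default DNS transport.
response = dns.query.udp(query, RESOLVER, timeout=5)

# If the answer did not fit into the advertised buffer, the server sets the
# TC bit and the client is expected to fall back to DNS-over-TCP.
if response.flags & dns.flags.TC:
    response = dns.query.tcp(query, RESOLVER, timeout=5)

print("EDNS version:", response.edns)  # -1 means the response carries no EDNS(0)
if response.edns >= 0:
    print("Buffer size advertised by the server:", response.payload)
print("Answer RRs:", len(response.answer))
```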
§.§.§ Response Times and Failure Rates
The first large-scale study on DNS performance and failure rates was performed by Danzig et al. in 1992, resulting in several important recommendations to reduce DNS traffic and latency <cit.>. Ten years later, the study "DNS Performance and the Effectiveness of Caching" by Jung et al. <cit.> analyzed DNS traffic of the MIT Laboratory for Computer Science and KAIST (Korea Advanced Institute of Science and Technology) over several months and stated a failure rate of 36% (23% timeouts, 13% other errors). Ager et al. <cit.> compared DNS response times between local DNS resolvers of ISPs and public resolvers like Google Public DNS and OpenDNS. It was found that local resolvers generally outperformed public resolvers, but Google and OpenDNS showed faster responses in certain cases due to ISP caching issues. Several other measurements have been undertaken to observe DNS performance over IPv4 from different standpoints <cit.><cit.><cit.>. Additionally, Doan et al. in <cit.> observed that the public resolvers answered faster over IPv6 than over IPv4. In 2022, Moura et al. <cit.> investigated the fallback capabilities of DNS resolvers to utilize DoTCP by manipulating the TC-bit in responses from a controlled authoritative name server. They analyzed the order of incoming requests over different transport protocols and introduced the distinction between canonical (UDP request followed by TCP request) and non-canonical scenarios. Evaluating the order of incoming requests, it was concluded that an estimated 2.7% (optimistic estimation) to 4.8% (pessimistic estimation) of the examined resolvers were incapable of falling back to DoTCP usage. In the same year, Kosek et al. <cit.> conducted the first study comparing DNS response times and failure rates based on the underlying transport protocols using RIPE Atlas for ten public resolvers and probe resolvers. 8% of the queries over UDP and TCP failed, with a very high DoTCP failure rate of 75.0% for probe resolvers. The response times of DoTCP were generally higher than those of DoUDP, with large differences. The queried domains were unique and thereby uncached by all of the participating resolvers. To get a preferably broad and unbiased comparison of the different DNS resolvers over the particular underlying protocols, we perform DNS queries both to a domain that is very likely cached on each resolver (google.com) and to uncached ones. Additionally, we make sure that the uncached domains are administered by an authoritative name server under our control. Using a cached domain allows a detailed estimation of the latencies in the direct communication between resolvers and client software without the additional time needed for a recursive lookup. The DNS queries for uncached domains force recursive resolvers to forward them to our server. Observing the incoming requests offers the opportunity to analyze the communication between recursive resolvers and name servers, for example, the usage of DoTCP on the same path, in detail.
§.§.§ EDNS Options
Van den Broek et al. <cit.> analyzed more than 8 million DNS queries to an authoritative name server in 2014. Around 75% of the queries used EDNS(0). Additionally, it was observed that 36% used a UDP buffer size higher than 1232 bytes, likely causing fragmentation. Measurements after the DNS Flag Day 2020 recommendations show that DNS resolvers still seem to lack adoption of the default buffer size of 1232 bytes. Based on the analysis of 164 billion queries to authoritative name servers, Moura et al. stated that many resolvers ”announce either small (512 bytes) or large (4096 bytes) EDNS(0) buffer sizes, both leading to more truncation, and increasing the chances of fragmentation/packets being lost on the network”.
As the DNS Flag Day 2020 recommendations have not been out in the community for a very long time, a regular examination of the adoption rates of DNS software is reasonable and necessary.
§ METHODOLOGY
To extend the previous studies by Huber and Kosek et al. <cit.>, we utilize the identical target resolvers for our measurements. We query the ten public resolvers listed in Table <ref>, along with the configured probe resolvers. It is worth mentioning that at the time of the initial measurements, Comodo Secure DNS did not have a known IPv6 address, resulting in measurements conducted solely over IPv4.
§.§ Probe Selection
This paper employs the RIPE Atlas measurement network to conduct the measurements. To avoid potential load issues occurring in the first two probe versions <cit.><cit.>, we choose only probes of version 3 or 4 that are hosted with a hometag <cit.>. The chosen probes must support IPv4, IPv6, or both. A scan conducted on December 20, 2021, reveals the availability of 2527 probes with these attributes. Out of these, 1137 probes are IPv6 capable, while all of them support IPv4. These probes are distributed across 671 different Autonomous Systems (ASs) with varying densities in different regions: 70% in Europe, 18% in North America, 6% in Asia, 3% in Oceania, 1% in Africa, and 1% in South America. Before commencing the actual measurement series, which includes analyzing DNS resolvers from the edge and the core (as depicted in Figure <ref>), examining DoTCP usage, EDNS(0) configuration, and DoTCP fallback, we evaluated 4343 probe resolvers associated with the 2443 participating probes, resulting in an average of 1.78 resolvers per probe.
§.§ From the edge
For the purpose of analyzing the behavior of different DNS resolvers from the edge, we programmatically configured RIPE Atlas measurements specifically targeting the resolvers listed in Table <ref>. These measurements encompassed DNS resolution over both IPv4 and IPv6, utilizing both TCP and UDP transport protocols. In this study, we treated the DNS resolvers as black boxes, focusing on the direct communication between DNS client programs and recursive resolvers. Consequently, we did not examine the specific transport protocols or buffer sizes employed during communication with authoritative name servers. The measurements from the edge involved requesting "A" records for the widely-used domain "google.com". By choosing a heavily-cached domain, we aimed to minimize recursive resolution or unexpected errors, thereby emphasizing the communication between client programs and resolvers. The perceived response times by the probes should thus correspond to the duration of a simple UDP/TCP request and response. As mentioned earlier, the RIPE Atlas probes collected essential information during the measurements, including response times, error messages, and the UDP buffer sizes advertised by each resolver. This data was subsequently retrieved using the RIPE Atlas measurement API. To validate the collected response times and gain insights into the resolvers' distribution across the Internet, simultaneous Traceroute measurements were conducted. It is anticipated that the measured round-trip time (RTT) of a Traceroute to the resolver closely aligns with the response time of a DoUDP request when the queried domain is cached on the resolver.
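As a rough local counterpart to these RIPE Atlas measurements, the sketch below times a DoUDP and a DoTCP lookup of the cached domain against a single public resolver. It relies on the dnspython library with placeholder values, which is an assumption for illustration rather than the actual measurement code; with a cached name, the DoTCP median should be roughly twice the DoUDP median due to the additional TCP handshake.

```python
import time
import dns.message
import dns.query

RESOLVER = "1.1.1.1"   # placeholder public resolver
DOMAIN = "google.com"  # heavily cached domain, as in the edge measurements

def timed_lookup(transport: str) -> float:
    """Return the response time of a single lookup in milliseconds."""
    query = dns.message.make_query(DOMAIN, "A", use_edns=0, payload=1232)
    start = time.perf_counter()
    if transport == "udp":
        dns.query.udp(query, RESOLVER, timeout=5)
    else:
        dns.query.tcp(query, RESOLVER, timeout=5)
    return (time.perf_counter() - start) * 1000.0

udp_ms = sorted(timed_lookup("udp") for _ in range(10))
tcp_ms = sorted(timed_lookup("tcp") for _ in range(10))

print(f"median DoUDP: {udp_ms[len(udp_ms) // 2]:.1f} ms")
print(f"median DoTCP: {tcp_ms[len(tcp_ms) // 2]:.1f} ms")
print(f"DoTCP/DoUDP ratio: {tcp_ms[len(tcp_ms) // 2] / udp_ms[len(udp_ms) // 2]:.2f}")
```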
§.§ From the core
This evaluation allows us to analyze resolver behavior when interacting with authoritative NS (see Figure <ref>). In this experiment, we use uncached domain names controlled by authoritative NS under our supervision. We analyze the resolvers' DNS configuration using two customized authoritative NSes. These NSes encode incoming DNS requests, including transport protocol and requester IP address, for later analysis. By observing the EDNS section of requests reaching the authoritative NS, we gain insights into the resolvers' EDNS configuration and potential usage of options like Cookie or Client Subnet.
§.§.§ DoTCP Usage and EDNS(0) Configuration
In the Core analysis, we utilize the RIPE Atlas network to evaluate the transport protocols and EDNS configuration employed by DNS resolvers. Our measurements focus on a target domain managed by our NSes to ensure uncached responses. To achieve uniqueness, each DNS request is modified with the probe ID and timestamp. By examining the IP addresses of the resolvers, we gain insights into their distribution across continents and AS networks. Additionally, observing the transport protocols used enables us to gather information on DoTCP usage in the Core.
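A minimal sketch of the uniqueness scheme is shown below; the zone name is a placeholder for the custom-crafted domain served by our authoritative NSes, and the exact label format is an assumption for illustration.

```python
import time

ZONE = "example-measurement.net"  # placeholder for the custom domain under our control

def unique_query_name(probe_id: int) -> str:
    """Prepend the probe ID and a timestamp so that no resolver can answer from its cache."""
    timestamp_ms = int(time.time() * 1000)
    return f"p{probe_id}-t{timestamp_ms}.{ZONE}"

# Every probe thus queries a name that has never been resolved before.
print(unique_query_name(1004619))
```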
§.§.§ DoTCP Fallback
The Core measurement focuses on observing the DoTCP fallback behavior of public DNS resolvers. Large responses, consisting of 72 AAAA records (>2KB response) for one server and 145 AAAA records (>4KB response) for the other, are returned by the authoritative NSes. By including different RR types (A, AAAA, and TXT) in each measurement, we aim to investigate the resolvers' reaction to both response sizes simultaneously. Given that the previous experiment revealed resolvers requesting both NSes equally, approximately 50% of the requests are expected to receive 4KB and 2KB responses respectively. As large responses cannot be handled by UDP due to fragmentation issues, resolvers are anticipated to fallback to using DoTCP. The analysis focuses on whether the resolvers continue to utilize UDP or switch to DoTCP, providing insights into potential resiliency risks. Multiple requests from resolvers to the authoritative NS are expected, such as one over UDP followed by a fallback request over TCP. To accurately map incoming requests to the RIPE Atlas measurements, the domains queried by each probe are made unique using the aforementioned technique of prepending probe-specific information.
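The oversized responses are produced simply by filling the test names with enough AAAA records to exceed 2KB and 4KB, respectively. The sketch below generates such a record set in plain zone-file syntax; the zone name and the documentation-prefix addresses are placeholders, and the exact record counts needed in practice depend on name lengths and EDNS options.

```python
ZONE = "example-measurement.net."  # placeholder zone under our control

def aaaa_records(label: str, count: int) -> str:
    """Generate `count` AAAA records for one owner name in zone-file syntax.

    Roughly 72 records push the response beyond 2KB and 145 beyond 4KB,
    since each AAAA RR adds a 16-byte address plus per-record overhead.
    """
    lines = []
    for i in range(count):
        addr = f"2001:db8::{i + 1:x}"  # IPv6 documentation prefix, placeholder addresses
        lines.append(f"{label}.{ZONE} 300 IN AAAA {addr}")
    return "\n".join(lines)

zone_snippet = aaaa_records("big", 72)
print(zone_snippet.splitlines()[0])
print(f"... {len(zone_snippet.splitlines())} records in total")
```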
§ RESULTS
We evaluate the results of the measurement from the edge concerning failure rates, response times, and EDNS(0) buffer sizes. Afterward, we analyze EDNS(0) configuration and DoTCP fallback from the core.
§.§ Probes
A comprehensive examination of all RIPE Atlas probes reveals the presence of 2527 probes possessing the desired attributes. All probes exhibit compatibility with IPv4, while 1137 probes exhibit the additional capability of conducting measurements over IPv6. The geographic distribution of the probes, as well as the locations of the authoritative name servers, is visually depicted in Figure <ref>. Notably, a concentrated density of probes is observed in North America and Europe, which serves as the primary origin for the RIPE Atlas community. Specifically, Europe accounts for 70% of the probes, followed by North America with 18%, Asia with 6%, Oceania with 3%, Africa with 1%, and South America with 1%. The distribution of these probes among the Autonomous Systems is detailed in Table <ref>, highlighting the ten Autonomous Systems housing the majority of probes. The probes from Comcast, AT&T, and UUNET are exclusively situated in North America, while the remaining Autonomous Systems are primarily distributed in Europe. Furthermore, it should be noted that certain Autonomous Systems mentioned in Table <ref> possess only a small number of IPv6-capable probes. The mapping between Autonomous System numbers and their respective names is obtained from IPtoASN[<https://iptoasn.com>].
§.§.§ Probe Resolvers
Before conducting the actual measurement, preliminary test measurements are performed to gather address information regarding the locally configured resolvers on each probe. It is important to note that the resolvers can possess either publicly accessible IP addresses or private ones. For instance, a probe may have Google Public DNS configured, in addition to a DNS resolver exclusively accessible through its local network. Among the probes, 41.93% have public IP addresses. Out of all the registered resolvers, 97.03% are associated with their corresponding IPv4 addresses, while the remaining 2.97% are linked to their IPv6 addresses. Additionally, it is crucial to acknowledge that during a RIPE Atlas measurement employing the utilized probe resolver parameter set, all probe resolvers are requested to resolve the relevant domain name. Consequently, both IPv4 and IPv6 DNS resolvers contribute to the measurement, regardless of the IP version mandated by the RIPE user.
§.§ Evaluation from the edge
This section analyzes the failure rates of both transport protocols at the edge to assess DNS resilience. Response times of resolvers are compared, and the performance of public resolvers is evaluated using DNS response times and Traceroute round-trip times. The adoption of the DNS Flag Day recommendations by individual resolvers is analyzed through the EDNS(0) buffer sizes announced to the RIPE Atlas probes.
§.§.§ Failure Rates
Following Kosek et al. <cit.>, failed measurements are defined as those with no DNS response at the probe. Over IPv4, public resolvers show lower failure rates for DoTCP (4.01%) compared to DoUDP (6.3%), indicating higher resiliency of DoTCP (Figure <ref>). However, probe resolvers present a different scenario, with the DoTCP failure rate exceeding that of DoUDP by 74.15%. DoUDP failures are solely due to Timeouts (5000ms), while the DoTCP failures of public resolvers are primarily caused by Timeouts (42.75%), READ-ERROR (33.91%), CONNECT-ERROR (23.24%), and TCP-READ (0.09%). Bad address (99.17%) is the main cause of DoTCP failures for probe resolvers. Overall, probe resolvers exhibit significantly higher DoTCP failure rates across continents. In case of IPv6, for public resolvers, we find lower resiliency over both transport protocols (DoTCP 10%, DoUDP 9.61%). Most public resolvers exhibit failure rates between 6.77% and 9.25%, see Figure <ref>. UncensoredDNS shows by far the worst DoTCP and DoUDP resiliency.
To analyze the adoption of the DNS Flag Day 2020 recommendations by the public resolvers from the edge, we evaluate the EDNS(0) buffer sizes which the individual resolvers announce to the RIPE Atlas probes. Table <ref> summarizes the buffer sizes that have been observed in the UDP measurements. For all resolvers except Quad9, the differences in the percentages of the announced buffer sizes between IPv4 and IPv6 are fairly low (≤3.5%). The buffer sizes advertised by Cloudflare, OpenNIC, UncensoredDNS, and Quad9 (55.47% IPv4, 62.09% IPv6) conform to the DNS Flag Day 2020 recommendation of a default buffer size of 1232B in most cases. Neustar, Comodo, OpenDNS, and Yandex mainly use 4096 bytes. In 23.55% of the Quad9 DNS responses over IPv4, EDNS(0) is not used at all, leaving clients with the default DoUDP message size limit of 512 bytes. This first view from the edge shows that several public DNS resolvers still lack adoption of the DNS Flag Day 2020 recommendations. To see whether this also holds for the communication with authoritative NSes, we conducted another experiment from the core.
§.§.§ Response Times
RIPE Atlas employs a measurement methodology to assess the response time (RT) of DNS requests. It measures the duration from the initiation of the measurement until a valid DNS response is received at the probe. DoUDP enables immediate transmission of requests to resolvers without the need to establish a TCP connection like DoTCP. Considering the cached nature of the "google.com" domain, DoUDP requests to resolvers with efficient cache management are expected to have response times equivalent to the probe-resolver round-trip time. In contrast, DoTCP measurements involve a three-way handshake, resulting in response times roughly twice as long as DoUDP. To ensure a fair comparison, only probe-resolver pairs with successful responses over both TCP and UDP are considered. The depicted response times in Figures <ref> and <ref> are obtained through a two-tiered approach, calculating the median response time for each probe-resolver combination and presenting the median of all probes.
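The two-tiered aggregation can be reproduced with a few lines of pandas, as in the hypothetical post-processing sketch below; the column names and example values are assumptions and do not correspond to the actual result files.

```python
import pandas as pd

# Assumed layout after parsing the RIPE Atlas results: one row per successful
# query with probe ID, resolver, transport protocol, and response time.
raw = pd.DataFrame({
    "probe":     [1, 1, 1, 2, 2, 2],
    "resolver":  ["Cloudflare", "Cloudflare", "Google", "Cloudflare", "Google", "Google"],
    "transport": ["udp", "tcp", "udp", "udp", "udp", "tcp"],
    "rt_ms":     [12.0, 25.5, 14.2, 30.1, 18.4, 40.2],
})

# Tier 1: median response time per probe-resolver-transport combination.
per_probe = (raw.groupby(["resolver", "transport", "probe"])["rt_ms"]
                .median()
                .reset_index())

# Tier 2: median over all probes, i.e. the values reported in the figures.
per_resolver = per_probe.groupby(["resolver", "transport"])["rt_ms"].median()
print(per_resolver)
```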
IPv4.
The analysis of response times for Public and Probe resolvers confirms the expected pattern of approximately doubled median response times for DoTCP compared to DoUDP. Probe resolvers, due to their close physical proximity to the probing device, exhibit faster response times than Public resolvers.
Among the Public DNS resolvers examined, Cloudflare, Google, and Quad9 demonstrate the lowest median response times for both transport protocols. Yandex shows relatively long response times for both DoTCP (104.5ms) and DoUDP (51.3ms). Neustar, on the other hand, exhibits the longest median DoTCP response time (1035.8ms) in this experiment, along with high DoTCP failure rates, suggesting inadequate implementation.
When comparing response times by continent, Public resolvers generally respond fastest to DoTCP and DoUDP requests from European (DoTCP 52.7ms, DoUDP 24.1ms) and North American (DoTCP 54.9ms, DoUDP 27.1ms) probes.
The increased response times observed across various continent/resolver combinations for both transport protocols can be attributed to the sparser distribution of resolver Points-of-Presence (PoPs) in those continents. This is evident as DoTCP response times are consistently approximately twice as high as DoUDP response times, indicating longer Round-Trip Times (RTT) due to greater distances between probes and resolvers. Probe resolvers demonstrate relatively low response times for both DoTCP (6.4ms - 32.2ms) and DoUDP (3.2ms - 15.3ms). For Public resolvers, probes from the Orange Autonomous System (AS) exhibit the highest overall median DoTCP response time (167.2ms) and also show elevated DoTCP response times for the CleanBrowsing, Comodo, and OpenNIC resolvers. Overall, most ASes, primarily operating in North America and Europe, do not exhibit significant anomalies in DoTCP and DoUDP response times.
Yandex performs poorly for requests from North American ASes (223.9-290ms) but slightly better for European ASes (71.9-103.6ms). Conversely, UncensoredDNS displays higher DoTCP response times for European ASes (117.8-214.6ms) compared to North American ASes (36.6-145.9ms). Notably, UncensoredDNS consistently exhibits higher DoUDP response times than DoTCP, including a median DoUDP response time over seven times higher for requests from UUNET.
IPv6. The majority of Public resolvers show similar median response times for both DoTCP and DoUDP compared to their IPv4 counterparts. However, there is a notable decrease in the median response time for DoTCP requests to Neustar (50.2ms) over IPv6. UncensoredDNS and Yandex still exhibit relatively high DoTCP response times. The improved performance of Neustar over DoTCP and UncensoredDNS over DoUDP contributes to lower overall median response times for public resolvers across both transport protocols. It is important to mention that probe resolvers are not considered in the IPv6 analysis. The analysis based on continents for Public resolvers reveals results similar to those observed in IPv4 measurements. Notably, the DoTCP response time for Africa is more than 70ms higher in IPv6 compared to IPv4. This can be attributed to the relatively poorer DoTCP performance of Cloudflare, Google, and OpenDNS resolvers for requests originating from Africa over IPv6. Analyzing DNS response times over IPv6 by Autonomous System reveals higher variations compared to IPv4. Generally, probes from UUNET (21.1ms) and KPN (15.6ms) receive the fastest DoTCP responses, with most Public resolvers displaying their minimal DoTCP response times for these two ASes. CleanBrowsing shows outliers in DoTCP response times for probes from the Orange (191.4ms), UUNET (296.6ms), and KPN (349.5ms). UncensoredDNS shows relatively high response times for requests from all Autonomous Systems (62.2ms-214.1ms) except for UUNET (20.5ms).
In IPv4, Cloudflare and Google DNS resolvers demonstrate the most stable DoTCP and DoUDP response times across all continents. Other Public resolvers generally have significantly higher DoTCP response times for at least one, and often multiple, continents, particularly in Africa (322.5ms) and South America (245.8ms) where they take the longest to respond. Similar patterns are observed for DoUDP response times. Whereas, in IPv6 measurements, Europe exhibits the lowest median response times for DoTCP and DoUDP, while Africa exhibits the highest. Cloudflare seems to have fewer IPv6-capable Points-of-Presence distributed in Africa, resulting in increased round-trip times (RTTs) due to long distances.
Combination - IPv4 and IPv6. The evaluation of the measured DNS response times over both transport protocols and IP versions as a CDF emphasizes the results summarized above and allows their comparison from a different standpoint. Instead of the aggregated median RTs of each probe shown in Figures [<ref>-<ref>], Figure <ref> shows the accumulation of all recorded probe-medians based on the transport protocol and IP version used. Figure <ref> confirms the extremely high DoTCP response times of Neustar DNS over IPv4: around 80% of the DoTCP requests take more than one second to be answered. The comparably bad performance of Yandex and UncensoredDNS can be seen for all combinations of transport protocol and IP version, especially for UncensoredDNS over UDP and IPv4 (more than 90% of the requests have an RT of more than 150ms). Furthermore, the response times of CleanBrowsing over both transport protocols increase on average when IPv6 is used instead of IPv4. This observation is not reflected by the median response times, as the effect shows mainly for the slowest 25% of requests. Figure <ref> also confirms that Cloudflare and Google exhibit the most stable DNS response times of all Public resolvers.
§.§.§ Traceroute
RIPE Atlas does not allow Traceroute measurements within the probe networks, and the focus of this paper is primarily on the performance, resilience, and security of Public resolvers while Probe resolvers are excluded from the subsequent analysis. Ideally, the resolvers should exhibit DoUDP response times that are roughly similar to the RTTs measured through Traceroute; however, not exactly the same, as ICMP (used in Traceroute) requires slightly less overhead for transportation than UDP. Figure <ref> presents the cumulative distribution function (CDF) of median response times and round-trip times for all Public resolvers over each probe, validating these expectations for both IPv4 and IPv6.
DoTCP over IPv6 aligns more closely with the anticipated ratio of response times to round-trip times. The response time/round-trip time ratio (RT/RTT ratio) represents the quotient of the median response time and the median RTT between the probe and the resolver. Ideally, resolvers should have an RT/RTT ratio of 1 for DoUDP and 2 for DoTCP. Figure <ref> presents the RT/RTT ratios per resolver, transport protocol, and IP version. For Neustar over DoTCP and IPv4, the observed ratio exceeds 2 for over 80% of the probes. This further highlights that the very high response times of the resolver are not due to a sparsely distributed global network of Points-of-Presence but rather an inadequate DoTCP implementation at various PoPs. The same conclusion can be drawn for UncensoredDNS over both TCP and UDP, as shown in Figures <ref>. OpenDNS and Google exhibit higher RT/RTT ratios compared to other Public resolvers over both transport protocols and IP versions, indicating that their relatively fast DNS response times are more attributed to a well-distributed global network rather than exceptionally efficient request processing.
§.§ Evaluation from the core
§.§.§ Failure rate
Figure <ref> shows that the failure rates for DoTCP requests over IPv4 from the core using public resolvers are higher in the current measurement series (9.09%) compared to the previous measurements (4.01%). CleanBrowsing, Cloudflare, Google, OpenDNS, OpenNIC, and Yandex demonstrate high resiliency with DoTCP failure rates ranging from 1.34% to 2.62%. In contrast, Quad9 shows higher failure rates from multiple regions and ASes. When using IPv6, the overall failure rates for all public resolvers slightly increase to 11.53%. More READ-errors (19.67%) and fewer Timeouts (29.99%) are observed compared to the measurements from the edge. Quad9 and UncensoredDNS exhibit the highest failure percentages. Figure <ref> shows that, from all continents, the DoTCP failure rates for the public resolvers are higher than those presented in Figure <ref>. However, more than 88% of the measurements to the public resolvers over both IP versions and transport protocols receive a valid DNS response. This shows that there are no significant problems with our NSes and yields enough data to reliably analyze the resolvers' DoTCP usage and EDNS(0) configuration.
§.§.§ EDNS(0) Configurations
To examine the usage of EDNS(0) configurations by resolvers communicating with authoritative NSes for uncached domains, we present a distribution overview in Table <ref>. Most resolvers exhibit a preferred AS, with Cloudflare, Google, Neustar, OpenDNS, and Yandex DNS primarily using their own ASs for over 94% of resolutions. Public resolvers universally employ DoUDP, emphasizing the importance of proper EDNS(0) buffer sizes in the core. Commonly used buffer sizes include 1400, 1410, and 1452 bytes. Additional buffer size details are displayed in Table <ref>.
§.§.§ EDNS Options
In Table <ref>, a comprehensive listing of options employed by DNS resolvers in communication with name servers is provided.
EDNS(0) is utilized by all DNS resolvers in the majority of cases (>99.84%). Among the advertised options by public resolvers, Cookie (4.80% IPv4, 7.91% IPv6) and EDNS Client Subnet (ECS) (1.81% IPv4, 1.49% IPv6) are notable but Google predominantly employs ECS (14.24% IPv4, 12.53% IPv6). However, other options including client subnet information, are transmitted in less than 0.24% of requests. RFC 7871 <cit.> specifies that NSes should include ECS with matching parameters in their response. Google indicates that if name servers do not support ECS, Google public DNS may refrain from sending ECS queries to them. This suggests that Google's usage of ECS could be higher if servers appropriately handle the requests. Nevertheless, multiple Google resolvers within the core, identifiable by their IP addresses, transmit subnet information to our servers. The question of Google's ECS usage rate when communicating with NSes that correctly respond remains open for further investigation.
§.§.§ Valid/ Invalid responses
As it was observed that the transport protocol used by the probes does not affect the usage of DoTCP/ DoUDP in the communication between resolvers and the NSes, all measurements in this experiment are carried out over DoUDP.
Overall, 11,637,539 individual measurements are conducted based on unique domain names. We furthermore observe that the NS returning 2KB responses receives more requests (5,642,439) than the one returning 4KB (2,395,455). Moreover, some domains are requested on both NSes (2,733,540 results). It is important to note that in many cases where a successful DNS response was indicated, the response did not contain an answer section (in 53.52% of cases) as presented in Table <ref>. Among these cases, the vast majority (97.35%) were truncated responses (TC-bit set), while 2.22% denoted a server failure.
For the remaining 0.43%, no clear reason for the missing answer section could be identified. The truncation of responses can be attributed to the limitation imposed by RIPE Atlas on UDP buffer sizes, which are capped at 4096 bytes; this explains the generally high number of truncated responses.
OpenDNS consistently provided a valid answer section in most cases (97.65%), while others included an answer section in less than 64.44% of their responses.
Surprisingly, RIPE Atlas measurements ended successfully even after receiving a response with the TC-bit set, indicating a lack of proper fallback to DoTCP in many probes. In Table <ref> these cases are taken into consideration and the respective failure rates of the RIPE Atlas measurements are presented. Additionally, there were cases where certain domains were never requested at any of the servers, contributing to a failed DNS response (21.16% IPv4, 17.27% IPv6).
Other resolvers had a failure rate of over 6.53% for IPv6 requests that were not forwarded to authoritative name servers. This behavior may be attributed to some resolvers blacklisting our authoritative name server due to the receipt of large responses.
§.§.§ Canonical/ Non-canonical requests
We begin our analysis by classifying incoming requests as canonical and non-canonical according to Moura et al.'s work <cit.>. We then evaluate the DoTCP usage rates of the resolvers to assess their response to large buffer sizes. Additionally, we introduce a scenario consisting of only a single incoming UDP request.
To focus on the resolvers' reactions to response sizes of 2KB and 4KB, we only consider results that can be directly matched to one of the servers and their respective response sizes.
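A sketch of this classification step is given below: the requests belonging to one measurement are ordered by arrival time and their transport sequence is mapped to a scenario label. The function name and label strings are illustrative assumptions rather than the labels used in the tables.

```python
from typing import List

def classify_scenario(transports: List[str]) -> str:
    """Map the ordered transport protocols of one measurement to a scenario label.

    A single UDP request followed by a single TCP request is the canonical
    fallback scenario; everything else is treated as non-canonical.
    """
    seq = "".join("U" if t.lower() == "udp" else "T" for t in transports)
    if seq == "UT":
        return "canonical"
    if seq == "U":
        return "single UDP request (no DoTCP fallback)"
    return f"non-canonical ({seq})"

# Example sequences as they might be reconstructed from the NS logs.
print(classify_scenario(["udp", "tcp"]))         # canonical
print(classify_scenario(["udp"]))                # no fallback observed
print(classify_scenario(["udp", "udp", "tcp"]))  # non-canonical retry pattern
```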
Table <ref> displays the resolvers' usage of different scenarios when communicating with the 2KB name server. Notably, CleanBrowsing, Cloudflare, Google, OpenDNS, and UncensoredDNS predominantly send a UDP message followed by a TCP message. As indicated in Table <ref>, the resolvers advertise EDNS(0) buffer sizes of 1452B or less, demonstrating the expected fallback behavior. Table <ref> presents the usage of different scenarios by the resolvers in response to 4KB name server replies.
Quad9 demonstrates more non-canonical responses to 4KB compared to 2KB responses.
§.§.§ TCP Usage
To assess TCP usage, we examined the presence of DoTCP requests within the query sequence reaching the name servers. Table <ref> shows the DoTCP usage rates of resolvers when receiving 2KB responses. Resolvers primarily employing canonical scenarios consistently utilize TCP in their final request, including Quad9 (99.69% IPv4, 99.70% IPv6).
Yandex and Comodo rarely use DoTCP with 2KB responses in the last request. When receiving 4KB responses (see Table <ref>), almost all resolvers employ TCP in the majority of measurements for both IP versions (>98.67%). However, a notable number of measurements lack TCP usage by several resolvers (up to 1.33%), indicating possible fragmentation between the name server and resolver.
§ LIMITATIONS AND FUTURE WORK
Approximately 88% of our probe measurements are concentrated in North America and Europe, limiting the generality of DNS resiliency observations to other regions. To address this limitation, we provide response times categorized by continent. However, it is important to note that observations for continents with fewer probes have smaller sample sizes, which hinders drawing reliable conclusions. Similarly, when analyzing response times of specific autonomous systems, particularly over IPv6, the sample size remains relatively low. The study of EDNS(0) options focuses on the communication between different resolvers and our custom authoritative NSes. Therefore, the usage numbers may not accurately represent the capabilities of the resolvers and their EDNS(0) options in general. The observations reveal various non-canonical sequences employed by DNS resolvers in response to large response sizes. Further investigation is required to fully understand the behavior of different resolvers, including their adjustment of announced EDNS(0) buffer sizes when receiving large responses.
While our study focused on the unencrypted DNS protocols DoUDP and DoTCP, the recently standardized encrypted DNS protocol DNS-over-QUIC (DoQ) (RFC 9250) <cit.><cit.><cit.><cit.> inherently solves fragmentation by means of the QUIC protocol (RFC 9000) <cit.> while also supporting increased DNS message sizes. However, DoQ adoption is currently scarce <cit.>; yet, DNS-over-QUIC is a promising candidate to supersede both DoUDP and DoTCP in the future, thereby warranting a detailed investigation once DoQ adoption rises.
§ CONCLUSION
We conducted measurements analyzing DoTCP resiliency, responsiveness, and deployment from the edge and the core over IPv4 and IPv6. Additionally, the EDNS(0) configurations of ten public resolvers were studied. Issuing more than 14M individual DNS requests using 2527 globally distributed RIPE Atlas probes, we performed multiple experiments and conclude that most resolvers show similar resiliency for both DoTCP and DoUDP, while 3 out of 10 resolvers mainly announce very large EDNS(0) buffer sizes, which potentially causes fragmentation. The analysis of DoTCP and DoUDP performance revealed significant regional variations for both IP versions. Notably, requests originating from Africa or South America exhibited the highest median response times, which highlights the need for further investigation and optimization in such regions. Particularly over IPv4, Cloudflare and Google emerged as the Public resolvers with the most consistent and stable response times across all continents. In reaction to large response sizes from authoritative name servers, we find that resolvers do not fall back to the usage of DoTCP in many cases, bearing the risk of fragmented responses. As message sizes in the DNS are expected to grow further, this problem will become more urgent in the future.
§ ACKNOWLEDGMENT
This work was supported by the Volkswagenstiftung Niedersächsisches Vorab (Funding No. ZN3695).
|
http://arxiv.org/abs/2307.04360v1 | 20230710061554 | Mean-field analysis of load balancing principles in large scale systems | [
"Illés Horváth",
"Márton Mészáros"
] | math.PR | [
"math.PR",
"60"
] |
Mean-field analysis of load balancing principles in large scale systems
Illés Horváth, Márton Mészáros
================================================================
Load balancing plays a crucial role in many large scale systems. Several different load balancing principles have been proposed in the literature, such as Join-Shortest-Queue (JSQ) and its variations, or Join-Below-Threshold. We provide a high level mathematical framework to examine heterogeneous server clusters in the mean-field limit as the system load and the number of servers scale proportionally. We aim to identify both the transient mean-field limit and the stationary mean-field limit for various choices of load balancing principles, compute relevant performance measures such as the distribution and mean of the system time of jobs, and conduct a comparison from a performance point of view.
§ INTRODUCTION
For large scale service systems, where service resources (e.g. computing capacity) are distributed to several service units, load balancing plays a crucial role in distributing the total load of the system to ensure better overall service for the incoming tasks (jobs).
There are many different types of load balancing principles. Static load balancing does not take into account the state of the system, instead aiming for a balanced distribution based purely on the incoming jobs. Static load balancing is in general easy to set up, requires minimal overhead communication and performs well when the incoming jobs have some regular patterns.
However, in most systems the incoming jobs have some level of random variability. This situation is generally better handled by load balancing policies which take into account the current state of the system. Scheduling decisions may be based on different types of information, depending on what is available. In general, one of the most important parameters is the current load of the servers, as it is generally desirable to maintain a balanced load among all servers. If available, further information taken into account may include any of the following:
* the servers may be heterogeneous, with faster and slower servers;
* job and server types may be important in case the servers are heterogeneous and certain servers can serve certain types of jobs more efficiently;
* job sizes may be used to compute current server load more precisely;
* in some cases, physical location may play a role;
* there may be bottlenecks other than computing capacity in the system (e.g. bandwidth).
In many real-life systems, such information may not be available, but even if it is, there is a tradeoff: a complicated load balancing policy that requires too much communication and computation may generate a significant overhead cost, slowing down the entire system. Hence it is in general desirable to stick to simple load balancing policies. In the present paper, we provide a mathematical framework that does not include communication overhead costs. Such aspects can be addressed in the modeling in several ways; however, these are highly scenario-dependent, and as such, we decided to keep the model high-level.
We will discuss load balancing policies based exclusively on the queue length of servers. Job types, physical location and other bottlenecks will not play a role. We allow a heterogeneous server cluster, where there are several different types of servers, and the model can also incorporate processor sharing, where a server can serve multiple jobs simultaneously.
The server cluster model of the present paper will be described by a density-dependent Markov population model. As the system size goes to infinity, the mean-field limit of density-dependent Markov population models has been examined in the literature for both the transient regime (up to a finite time horizon) and in the stationary regime.
The transient limit object is deterministic and can be described as the solution a system of ordinary differential equations (ODEs) in case the Markov transition rates are Lipschitz-continuous <cit.>, or as the solution of a differential inclusion in case the transition rates are discontinuous <cit.>. Overall, these results are relatively straightforward to apply for the model in the present paper.
For the stationary regime, for Lipschitz-continuous transition rates, it is known that in the mean-field limit, the stationary distribution of the finite system concentrates on the unique asymptotically stable solution (attractor) of the limit system of ODEs <cit.>. Similar results available for the discontinuous setting, but only in case the attractor lies inside a domain where the transition rates are continuous <cit.>. We are not aware of any general results in case the attractor is at a discontinuity point of the transition rates, which happens to be the case for several of the load balancing policies discussed in the present paper.
The contributions of the paper are the following:
* Providing a high-level mathematical framework for modelling load balancing systems that accommodates several different load balancing principles.
* Identification of the mean-field limit in both the transient and stationary regime.
* Computation of the mean service time and also the service time distribution in the stationary mean-field limit. Computation techniques need to be adapted for discontinuities; these modified formulas are, to the best of our knowledge, novel.
* Numerical comparison of the various load balancing principles via simulation and theoretical computations for the mean-field limit.
All of the above is carried out for a fairly general setting, where the server cluster can be heterogeneous, and we will also allow a varying service rate, depending on the number of jobs in a given server. We will focus mostly on first-in-first-out (FIFO) service principle, but note that all calculations are straightforward to derive for limited processor sharing (LPS), where a server can serve multiple jobs simultaneously.
Rigorous proofs are not the main focus of the paper. We do refer to relevant rigorous results from the literature in cases where they are available, but only provide heuristic arguments for the novel cases. That said, numerical analysis does support the heuristic computations of the paper.
The codes used for the simulations and analytic calculations throughout the paper are available at <cit.>.
The rest of the paper is structured as follows: the rest of this section is dedicated to an overview of load balancing in the literature (Section <ref>), and to the necessary mathematical background in queueing theory (Section <ref>) and population processes (Section <ref>). Section <ref> describes the general setup of the server cluster we are interested in. Section <ref> describes the various load balancing principles. Section <ref> contains numerical experiments and comparison of the various load balancing principles, and Section <ref> concludes the work. The Appendix addresses a few related questions not strictly part of the main body of work, and also some further details.
§.§ Load balancing principles
One of the classic dynamic load balancing policies is Join-Shortest-Queue (JSQ), where the incoming job is assigned to the server with the shortest queue (lowest number of jobs) <cit.>. The upside of this method is that it offers very even balancing for homogeneous server clusters. However, it requires up-to-date knowledge of all server states, which may require a significant communication overhead.
Due to this, several variants of JSQ have been in use: for JSQ(d), the incoming job is scheduled to the shortest queue from among d servers, selected at random. This offers less balanced load distribution, but also requires less communication. d=1 corresponds to random assignment with no load balancing, and d equal to the total number of servers corresponds to JSQ; as d is increased, it offers better balancing but also more overhead communication. Interestingly, already for d=2, the resulting load balancing policy has certain asymptotic optimality properties <cit.>, often referred to as the power-of-2 (or power-of-d) policies. As a consequence, d is often selected relatively low, such as d=2 or d=5.
For Join-Idle-Queue (JIQ), the incoming job is scheduled to an idle server at random; if there are no idle servers, the assignment is random among all servers. Once again, this offers less balanced load distribution and less communication overhead than JSQ, but, similar to JSQ(d), has some nice asymptotic optimality properties. Mean-field analysis has been carried out for JIQ in <cit.>.
Another related load balancing policy is Join-Below-Threshold (JBT), which associates a threshold with each server; servers below their threshold are considered available and servers at or above their threshold are full. Jobs will be dispatched to a server randomly from among all available servers. This policy again offers less balancing than JSQ, but still offers protection against overloaded servers, and requires communication only when a server switches between available and full. For a full mean-field analysis and cluster optimization of JBT, we refer to <cit.>.
§.§ Birth-death processes and queues
The jobs arriving to and leaving a server's queue can be modelled with a birth-death process (Markov-queue).
For technical simplicity, we resort to finite queues, with the maximal queue length denoted by B and state space of a single queue Ω = {0,1,2,…,B}.
We assume Markov arrivals, that is, jobs arrive according to a Poisson process, and Markov service, that is, the time it takes to serve a job (once service has started) is exponentially distributed.
There are multiple service principles. For First-In-First-Out (FIFO) service principle, the server always serves the first job of a queue, while the other jobs wait. Whenever the first job has finished service, the server immediately starts serving the next job in the queue. For Limited Processor Sharing (LPS), the server can work on multiple jobs simultaneously. The maximum number of jobs served simultaneously is called the multi-programming level (MPL); further jobs in the queue wait and enter service in a manner similar to FIFO. We allow the service rate to depend on the number of jobs in the queue (this is particularly relevant for LPS, where multiple jobs can be served jointly for more efficient service overall). The choice of service principle has no effect on the queue length changes (no matter which job is served, queue length decreases by 1), but it does affect the system time of individual jobs. We will mostly focus on FIFO.
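A single server in isolation, fed by a Poisson arrival stream, is thus a finite birth-death chain with a constant arrival rate and queue-length-dependent total service rates. The following standalone sketch simulates such a chain (it is not the simulation code referenced later in the paper); the example rate values are assumptions.

```python
import random

def simulate_queue(lam: float, mu: list, B: int, T: float) -> float:
    """Simulate a finite birth-death queue up to time T and return the
    time-average queue length.

    lam : arrival rate
    mu  : total service rates, mu[i] for queue length i (mu[0] = 0)
    B   : buffer size; arrivals to a full queue are lost
    """
    t, q, area = 0.0, 0, 0.0
    while t < T:
        arr_rate = lam if q < B else 0.0
        total = arr_rate + mu[q]
        dt = random.expovariate(total)
        area += q * min(dt, T - t)
        t += dt
        if t >= T:
            break
        if random.random() < arr_rate / total:
            q += 1   # arrival
        else:
            q -= 1   # service completion of one job
    return area / T

# Example: buffer size 10, constant total service rate 1, arrival rate 0.8.
mu_curve = [0.0] + [1.0] * 10
print(simulate_queue(lam=0.8, mu=mu_curve, B=10, T=100_000.0))
```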
§.§ Density-dependent population processes
In this section, we present mathematical background and framework for density-dependent Markov population processes.
A density-dependent Markov population process has N interacting components, each of which is in a state from a finite set of local states S. The global state of the system is defined as the total number of individuals in each state, that is, a vector X^N∈{0,1,…,N}^|S| with X^N_1+…+X^N_|S|=N. The normalized global state of the system can be defined as x^N=X^N/N, so x^N∈ [0,1]^|S| with x^N_1+…+x^N_|S|=1.
Each component acts as a continuous time Markov chain. The rate of the transition from i ∈ S to j ∈ S is r_ij^N (for i ≠ j). The rates are assumed to be density-dependent, that is
r_ij^N = r_ij(x^N)
for some function r_ij:[0,1]^|S|→[0,∞). In the classic setup defined by Kurtz <cit.>, the functions r_ij are usually assumed to be Lipschitz-continuous and independent of N. With this setup, x^N(t) is a continuous-time Markov chain. We define the mean-field equation of the system as follows:
d/dt v_i(t)=∑_j∈ S v_j(t)r_ji(v(t)), i∈ S,
where
r_ii:=-∑_j∈ S, j≠ ir_ij,
and
x_i^N(0)→ v_i(0) (for i=1,…, |S|), in probability as N→∞.
Lipschitz-continuity guarantees existence and uniqueness of the solution of (<ref>). The following result of Kurtz states mean-field convergence in the transient regime <cit.>:
Assuming r_ij (i,j∈ S) are Lipschitz-continuous and
x_i^N(0)→ v_i(0) (i∈{1,…,|S|}) in probability,
then for any T>0 and any ϵ>0 we have
lim_N →∞P( sup_t ∈ [0,T]‖𝐱̅^N(t) - 𝐯(t)‖ > ϵ) = 0.
Kurtz also proved that the standard deviation of x^N is of order 1/√(N) <cit.>.
An important concept related to Theorem <ref> is asymptotic independence, also known as propagation of chaos, stating that as N→∞, the evolution of two distinct queues is asymptotically independent. This is due to the fact that the evolution of a queue depends only on the global state, which is asymptotically deterministic.
We also have stationary mean-field convergence.
Given the following assumptions:
* r_ij are Lipschitz-continuous,
* the Markov process x^N(t) has a unique stationary distribution π^N for each N, and
* (<ref>) has a unique stable attractor (ν_1,…,ν_|S|),
we have that the probability measure π^N on S converges in probability to the Dirac measure concentrated on ν.
Theorems <ref> and <ref> have been generalized in several directions during recent years. Benaïm and Le Boudec elaborated a framework applicable for a wider range of stochastic processes, which also allows the r_ij functions to have a mild dependency on N <cit.>.
The condition on Lipschitz-continuity can also be weakened. For discontinuous r_ij's, (<ref>) turns into a differential inclusion. A formal setup for differential inclusions is quite technical, and is omitted from the present paper. For a fully detailed setup, we refer to <cit.>, specifically Theorems 4 and 5, and <cit.>, Theorem 3.5 and Corollary 3.9 for a corresponding version of Theorem <ref>.
For a corresponding version of Theorem <ref> for discontinuous transition rates, we refer to <cit.>, where the main additional condition is that the unique attractor lies inside a domain where the r_ij are continuous.
The applicability of Theorems <ref> and <ref> will be addressed more in Section <ref>.
From Theorem <ref> it also follows that
lim_N→∞E(π^N)= ν,
so ν can be used as an approximation for E(π^N) for large N. E(π^N) here is basically an |S|-dimensional vector of distributions, which converges to a constant |S|-dimensional vector in distribution. The limit point can be interpreted as a distribution on S, and is the stable attractor ν.
§ SERVER CLUSTERS
The server cluster model examined in the present paper consists of N servers, each with a finite buffer, and a single common dispatcher. Jobs arrive to the dispatcher according to a Poisson process with rate Nλ (that is, the average arrival rate is λ per server). Each arriving job is instantly dispatched to one of the N servers; that is, the dispatcher maintains no queue.
The cluster may have K different server types. We assume K is fixed, independent from N.
The servers within each type are identical. Buffer sizes are denoted by B^(k) for each type k∈{1,…,K}. We assume service times are exponentially distributed; for each server type, the service rate can be constant or it may depend on the current queue length of the server.
Service rates are denoted by μ_i^(k), where i∈{ 0,1,…,B^(k)} is the queue length, and k∈{ 1,2,…,K} denotes the type of the server. For a given k∈{1,…,K}, μ_0^(k),…,μ_B^(k)^(k) is also referred to as the service rate curve. (μ_0^(k)=0, but we still include it in the notation.)
For each service rate curve, it is natural to assume that the total rate increases with the queue length, but the per-job rate decreases with the queue length:
μ_1^(k)≤μ_2^(k)≤μ_3^(k)≤…, μ_1^(k)≥μ_2^(k)/2≥μ_3^(k)/3≥… k∈{ 1,2,…,K}
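The condition above is straightforward to check for a candidate service rate curve, as in the small sketch below; the example curve, meant to resemble an LPS server whose total rate saturates, is an assumption.

```python
def valid_rate_curve(mu: list) -> bool:
    """Check that the total rate is non-decreasing and the per-job rate
    mu_i / i is non-increasing, for mu = [mu_0, mu_1, ..., mu_B] with mu_0 = 0."""
    total_ok = all(mu[i] <= mu[i + 1] for i in range(1, len(mu) - 1))
    per_job_ok = all(mu[i] / i >= mu[i + 1] / (i + 1) for i in range(1, len(mu) - 1))
    return total_ok and per_job_ok

# Assumed example: an LPS-type server whose total rate saturates at 4 parallel jobs.
print(valid_rate_curve([0.0, 1.0, 1.8, 2.4, 2.8, 2.8, 2.8]))  # True
print(valid_rate_curve([0.0, 1.0, 2.5, 2.0]))                 # False: total rate decreases
```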
Due to the finite buffer sizes, data loss may occur whenever a job is dispatched to a full queue. The probability of a job loss will be typically very low (due to load balancing), but it is still something that we will address in due course.
The server cluster is a density-dependent population process, where the state of a server is simply the number of jobs in its queue. The global state will be denoted by
X_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K ),
where X_i^(k),N(t) is the number of servers with i jobs in its queue at time t. We will mostly use its normalized version
x^N(t)=x_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K),
where
x_i^(k),N(t)=X_i^(k),N(t)/N.
The number of servers of type k is denoted by N_k and the ratio of each server type is denoted by
γ_k^N=N_k/N, k=1,…,K.
γ_k^N may depend on N, but we will assume they converge to some fixed values γ_k as N→∞. We also want the system to be stable, so
λ < ∑_k=1^K γ_k^N μ_B^(k)^(k).
(Actually, due to the finite buffer size assumption, the system is technically always stable, but we will nevertheless assume (<ref>).)
The evolution of x^N(t) can be formally defined using Poisson representation. Let
P_i→ (i+1),k(t), 0≤ i≤ B^(k)-1, k=1,…,K
P_i→ (i-1),k(t), 1≤ i≤ B^(k), k=1,…,K
denote independent Poisson processes with rate 1. P_i→ (i+1),k(t) corresponds to arrivals to queues of type k with length i, and P_i→ (i-1),k(t) corresponds to jobs leaving queues of type k with length i.
The Poisson representation of x^N(t) is
x_i^(k),N(t)= x_i^(k),N(0) + 1/N P_(i-1)→ i,k(N ∫_0^t λ f^(k)_i-1(x^N(s)) ds)
-1/N P_i→ (i+1),k(N ∫_0^t λ f^(k)_i(x^N(s)) ds)
+1/N P_(i+1)→ i,k(N ∫_0^t μ^(k)_i+1 x_i+1^(k),N(s) ds)
-1/N P_i→ (i-1),k(N ∫_0^t μ^(k)_i x_i^(k),N(s) ds),
where f_i^(k)(x^N(t)) is the probability that a newly arriving job enters a queue of length i and type k.
The
{f_i^(k)(x^N(t)):0≤ i≤ B_k, k=1,…,K}
functions are going to be collectively called the dispatch functions. The dispatch functions depend on the load-balancing principle, which will be addressed later. Formally, f_i^(k) are defined on the normalized state x^N(t), which are all contained in the domain
{x:x∈ℝ^∑_k=1^K (B^(k)+1), x_j^(k)≥ 0, ∑_k=1^K ∑_j=0^B^(k)x_j^(k)=1}.
The four possible changes in the number of queues of length i which appear in (<ref>) correspond to:
* a job arriving to a queue of length i-1;
* a job arriving to a queue of length i;
* a job leaving a queue of length i+1;
* a job leaving a queue of length i.
On the border of the domain (<ref>), certain changes cannot occur. There is no service in empty queues:
μ_0^(k)=0 (k=1,…,K),
and no arrival to full queues:
f^(k)_B^(k)(.)≡ 0 (k=1,…,K).
We are interested in server clusters of various N sizes and especially the limit object as N→∞, that is, the mean-field limit (in accordance with Section <ref>). We first define the general mean-field equations corresponding to (<ref>):
v^(k)_i(t)= v^(k)_i(0)+∫_0^t λ f^(k)_i-1(v(s)) ds -∫_0^t λ f^(k)_i(v(s)) ds
+∫_0^t μ^(k)_i+1 v_i+1^(k)(s) ds -∫_0^t μ^(k)_i v_i^(k)(s) ds
in integral form, or, equivalently,
d/dt v_i^(k)(t)=λ f^(k)_i-1(v(t))-λ f_i^(k)(v(t)) +μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k)_i(t)
in differential form. An empty initial cluster corresponds to the initial condition
v_i^(k)(0)=
{[ γ_k for i=0,; 0 otherwise. ].
Theorem <ref> applies to this system whenever the f_i^(k) functions are Lipschitz-continuous. It turns out that the conditions of the general version of Theorem <ref> are mild enough so that transient mean-field convergence holds for all the discontinuous choices of f_i^(k) in the present paper, but this is not checked rigorously.
For the stationary case, we denote the stationary distribution
ν=(ν_i^(k)),i=0,…,B^(k), k=1,…,K
(similar to the notation of Section <ref>). Theorem <ref> applies whenever f_i^(k) are Lipschitz-continuous. In the discontinuous setting, the most relevant question is whether the f_i^(k) functions are continuous at the unique fixed point ν or not. If ν lies inside a region where f^(k)_i are Lipschitz-continuous, then the conclusion of Theorem <ref> applies. However, when the f_i^(k) functions are discontinuous at ν, Theorem <ref> does not apply; in fact, little is known in this case rigorously. Based on this, it makes sense to distinguish the following two cases:
* the functions f_i^(k) are Lipschitz-continuous at ν, or
* the functions f_i^(k) are discontinuous at ν.
When the functions f_i^(k) are Lipschitz-continuous at ν, the equations for the mean-field stationary distribution can be obtained from (<ref>) by setting d/dt v^(k)_i(t)=0:
0=λ f^(k)_i-1(v(t))-λ f^(k)_i(v(t))
+μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k)_i(t)
i∈{1,…,B^(k)-1} , k∈{ 1,…,K }
which are equivalent to the dynamic balance equations
μ^(k)_i ν^(k)_i =λ f^(k)_i-1(ν), i∈{1,…,B^(k)} , k∈{ 1,…,K } .
We also have equations for the ratio of each server type:
∑_i=0^B^(k)ν^(k)_i=γ_k, k∈{1,…,K }.
Together, (<ref>) and (<ref>) provide algebraic equations for ν.
We also propose another approach to obtain ν numerically, by solving the transient equations (<ref>) and taking the solution at a large enough point in time. (This assumes convergence to a single asymptotically stable solution, which we do not aim to prove rigorously.)
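As a sketch of this numerical approach (reusing the drift function from the earlier snippet; the horizon T and the parameter values below are illustrative only), one can integrate the transient equations with a standard ODE solver and read off the state at a large time:

from scipy.integrate import solve_ivp
import numpy as np

B = 10
lam = 0.95
mu = np.array([0.0] + [1.0] * B)       # constant unit service rate, mu[0] = 0

v0 = np.zeros(B + 1)
v0[0] = 1.0                            # initially empty cluster

T = 200.0                              # "large enough": increase until the answer stops changing
sol = solve_ivp(lambda t, v: mean_field_drift(v, lam, mu, f_random),
                (0.0, T), v0, rtol=1e-8, atol=1e-10)
nu_approx = sol.y[:, -1]               # approximation of the mean-field stationary distribution
print(nu_approx, nu_approx.sum())      # the components should sum to 1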
When the f^(k)_i are discontinuous at ν, more considerations are needed to derive the dynamic balance equations. This will be addressed separately for each load balancing principle.
Further remarks.
The assumption that both arrival and service are Markovian means that the entire system is a Markov (population) process, which keeps the setup fairly simple. Interestingly, the same mean-field limit would be obtained for any arrival process as long as the arrivals average out in the mean-field limit; to be more precise, for any arrival process for which the Functional Strong Law of Large Numbers holds (see e.g. Theorem 3.2.1 in <cit.>).
In case the monotonicity condition (<ref>) does not hold, mean-field convergence may fail. <cit.> contains specific examples where (<ref>) has multiple fixed points; stable fixed points correspond to quasi-stationary distributions of the population process for any finite N. The solution of (<ref>) will converge to one of the stable fixed points (depending on the initial condition). However, for any finite N, the population process will spend very long periods of time near one of the quasi-stationary points, switching between these points infinitely often.
§.§ Mean system time
A wide variety of parameters can be considered to describe the efficiency of such a system. A natural choice is the mean system time: the average time a job spends in the system between its arrival and service. We aim to calculate the mean system time H in the stationary mean-field regime. We note that the mean system time is a somewhat artificial object here since technically there are no individual jobs in the mean-field limit. It may be helpful to think of the mean-field limit as the case when N is extremely large.
One way to compute H is via Little's Law
H=L/λ_e,
where L is the mean queue length in the system, and λ_e is the effective arrival rate (which excludes jobs not entering the system due to job loss). From the mean-field stationary distribution ν, L is easily computed, while λ_e depends on the load balancing policy, but is typically also straightforward to compute. Little's law can actually be applied to each server type separately for more detailed information; this is addressed in Appendix <ref>.
Here we propose a different method to compute H, which gives even more detailed information, and will be useful later on. Let H_i,j^(k) denote the mean time until service for a job that is in position i in a queue of type k with j jobs total (so 1≤ i ≤ j ≤ B^(k), 1≤ k≤ K).
In the case of constant service rates, H^(k)_i,j= i/μ^(k) holds. For non-constant service rate curves however, the service rate may change due to later arrivals, so we need to keep track of both the length of the queue and the position of the job within it. We will derive a system of linear equations using total expectation and the Markov property. For simplicity, we assume FIFO service principle in the following calculations, but due to Little's law, this assumption does not affect the value of H.
The idea is that we are following a tagged job at position i of a queue of type k with total queue length j; the equations are based on the possible changes in that queue, with the environment kept fixed due to the stationary mean-field regime.
H_i,j^(k) = 1/(λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k))+
(λ f_j^(k)(ν)/ν_j^(k))/(λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)) H_i,j+1^(k)+
μ_j^(k)/(λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)) H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1),
H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)),
H_1,j^(k) =1/(λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k))+(λ f_j^(k)(ν)/ν_j^(k))/(λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)) H_1,j+1^(k)
(1≤ j≤ B^(k)-1),
H_1,B^(k)^(k) =1/μ_B^(k)^(k).
(<ref>) makes use of the standard one step argument. We focus on a single queue of a given type k in the mean-field limit while assuming the environment to be stationary, and look for the next possible change in that queue. Jobs arrive to type k servers of queue length j with a rate of Nλ f_j^(k)(ν), and each job will be sent to one of Nν_j^(k) servers, so the arrival rate at a specific queue will be
Nλ f_j^(k)(ν)/Nν_j^(k)=λf_j^(k)(ν)/ν_j^(k),
while the service rate is μ_j^(k), so the rate of any change for a queue of length j is λf_j^(k)(ν)/ν_j^(k)+μ_j^(k). The change will either increase or decrease the length of the queue by 1, and we can apply total expectation.
For full queues (j=B^(k)), arrival is not possible, that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K.
In order to solve (<ref>), we first obtain the mean-field stationary distribution ν, which can be calculated either from the balance equations (<ref>) when possible, or by numerically solving the transient mean-field equations (<ref>) and setting t large enough. Once ν is obtained, (<ref>) is just a system of linear equations for H_i,j^(k), which can actually be solved separately for each k for 1≤ k ≤ K. Once (<ref>) is solved, the mean system time H is just a linear combination of the values H_j,j^(k) according to the probabilities with which a job will be scheduled to a queue of length j-1 of a k-type server, that is,
H=(1/∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)) ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν) H_j,j^(k).
The normalizing factor in (<ref>) addresses job loss, as we only want to consider the mean system time of jobs which actually enter the system. Job loss probability is equal to
1-∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν).
(<ref>) and (<ref>) are only valid if the dispatch functions f_i^(k) are continuous at ν. In other cases, we may need to tweak the formulas. We will provide the corresponding versions of (<ref>) and (<ref>) on a case-by-case basis whenever the functions f_i^(k) are discontinuous at ν. These versions will be heuristic in the sense that no formal rigorous proof will be provided, but the results nevertheless agree with the results from simulations.
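To make the procedure concrete, here is a minimal sketch for a homogeneous cluster (K=1) with dispatch functions continuous at ν; it assembles the linear system above, solves it, and evaluates the weighted average defining H. The inputs lam, mu, nu and the dispatch function f are assumed to be available, for instance from the numerical procedure sketched earlier, and the guard for vanishing ν_j is a pragmatic choice of ours rather than part of the original derivation.

import numpy as np

def mean_system_time(lam, mu, nu, f):
    # Solve the linear system for H_{i,j} (homogeneous cluster, FIFO) and return the mean system time H.
    B = len(nu) - 1
    fnu = f(nu)
    # per-queue arrival rate a_j = lam * f_j(nu) / nu_j (set to 0 where nu_j vanishes)
    a = np.array([lam * fnu[j] / nu[j] if nu[j] > 1e-12 else 0.0 for j in range(B + 1)])

    idx = {}
    for j in range(1, B + 1):
        for i in range(1, j + 1):
            idx[(i, j)] = len(idx)
    n = len(idx)
    A = np.eye(n)
    b = np.zeros(n)
    for (i, j), r in idx.items():
        if j == B:
            b[r] = 1.0 / mu[B]                            # H_{i,B} = 1/mu_B + H_{i-1,B-1}
            if i >= 2:
                A[r, idx[(i - 1, B - 1)]] -= 1.0
        else:
            tot = a[j] + mu[j]
            b[r] = 1.0 / tot
            A[r, idx[(i, j + 1)]] -= a[j] / tot           # an arrival moves the tagged job to state (i, j+1)
            if i >= 2:
                A[r, idx[(i - 1, j - 1)]] -= mu[j] / tot  # a departure moves it to state (i-1, j-1)

    H = np.linalg.solve(A, b)
    # weighted average of H_{j,j}, normalized so that lost jobs are excluded
    w = np.array([fnu[j - 1] for j in range(1, B + 1)])
    return (w * np.array([H[idx[(j, j)]] for j in range(1, B + 1)])).sum() / w.sum()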
§.§ System time distribution
In this section, we calculate the system time distribution for a random job. Here, the service principle is actually important; we will present the calculation for FIFO service principle here. The calculations need to be modified for LPS service principle; the corresponding equations are provided in Appendix <ref>.
Let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. Its Laplace-transform is defined as
H̃_i,j^(k)(s)=∫_0^∞ h_i,j^(k)(t)e^-stdt.
The following system of equations is the corresponding version of (<ref>) for the Laplace-transforms instead of the means. Total expectation also applies to Laplace-transforms, and we use the fact that the Laplace-transform of a deterministic zero time is 1 and the Laplace-transform of λ e^-λ t is λ/(s+λ) to obtain
H̃_i,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i-1,j-1^(k)(s)) (2≤ i≤ j≤ B^(k)),
H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)) (1≤ j≤ B^(k)).
The corresponding version of (<ref>) is
H̃(s)= ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s).
Once again, (<ref>) and (<ref>) are valid when the functions f_i^(k) are continuous at ν. In other cases, we may need to tweak the formulas on a case-by-case basis.
The system time distribution can then be computed in the following manner:
* We first compute the mean-field stationary distribution ν. This can be done either by solving the balance equations (<ref>), or by numerically solving the mean-field transient equations (<ref>), and setting a large enough t.
* Once ν is available, (<ref>) is a system of linear equations for H̃_i,j^(k)(s) that is straightforward to solve.
* Then H̃(s) is computed from (<ref>).
* Finally, H̃(s) is transformed back to time domain.
Due to (<ref>), H̃(s) is a rational function, whose inverse Laplace transform can be computed numerically. For numerical inverse Laplace transformation methods, we refer to <cit.>.
We note that this approach to compute H̃(s), while explicit, has its limitations, as the formula for H̃(s) can get complicated for even moderately large K and B^(k) values. We address the feasibility further in Section <ref>.
Job losses occur only upon arrival, that is, all jobs that actually enter the system will be served, so h_i,j^(k)(t) is a proper probability density function with
∫_0^∞ h_i,j^(k)(t) dt=1.
However, if
∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)<1,
then H̃(s) is the Laplace-transform of a nonnegative function whose integral is equal to 1-∑_k=1^K f^(k)_B^(k)(ν) where
∑_k=1^K f^(k)_B^(k)(ν)
is the job loss probability, so in this sense, job losses are included in (<ref>). The corresponding normalized version of (<ref>) is
1/1-∑_k=1^K f^(k)_B^(k)(ν)∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s),
which is the Laplace-transform of a proper pdf whose integral is 1.
Depending on the load balancing principle, job losses may or may not be possible in the mean-field limit. This will be addressed specifically for each load balancing principle (For a finite system, job losses are always possible due to the finite buffers and fluctuations in either the job arrival or service speed.)
§ LOAD BALANCING PRINCIPLES
The load balancing principle describes the method the dispatcher uses to distribute the arriving jobs between the servers. It is quite important in large scale systems where the resources such as computing capacity are distributed between a large number of individual servers, and can make a big difference in the efficiency of the system.
The general goal of load balancing is to avoid long queues, directing incoming jobs to shorter queues instead.
There are several load balancing principles in use. Static policies do not consider the state of the system, only focusing on the incoming jobs. One example would be the round-robin load balancing policy, where incoming jobs are directed to the next server cyclically. Static load balancing principles are generally easy to operate, as they require minimal communication with the servers. Out of the principles observed in this paper, Random assignment falls into this category.
Dynamic principles, which take into account the current state of the system, can be more efficient. In real clusters, there is a trade-off: complicated policies require more communication and computation, generating a higher overhead communication cost, but provide better balancing. That said, in the mathematical framework we present, the cost of communication overhead is not modeled. Including the cost of overhead communication to provide an analytical framework for more realistic models is subject to further research.
In some systems it may be possible to reassign jobs that have been already assigned to new servers. It might also be possible that several servers “team up” to serve a single job. In our setting, we do not explore these options, and stick to a scenario where all jobs are assigned to a single server immediately upon arrival. On the other hand, in addition to the usual FIFO service principle, the framework does allow for limited processor sharing (LPS), where a single server can serve multiple jobs simultaneously.
In this paper we will examine 5 load balancing principles:
* Random assignment, where jobs are distributed randomly. With this principle, there is no actual load balancing. This principle will serve mostly as a baseline for comparison.
* Join-Idle-Queue, where jobs are directed to idle queues if possible. A relatively recent idea <cit.>, further explored in <cit.>.
* Join-Shortest-Queue, where jobs are directed to the server with the fewest number of jobs waiting in queue. One of the earliest load balancing policies that has been widely used for decades <cit.>. It provides very even balancing, but at the cost of high overhead communication, as the dispatcher needs to keep track of the queue length in every single server at all times.
* Join-Shortest-Queue(d), where jobs are directed to the server with the fewest number of jobs waiting in queue from among d servers selected randomly. Also referred to as power-of-d, this is a version of JSQ that aims to reduce communication overhead at the cost of less strict balancing. It has been thoroughly explored, and has certain asymptotical optimality properties already for d=2 <cit.>.
* Join-Below-Threshold, where jobs are directed to servers with a queue length below a prescribed threshold <cit.>.
All of the above principles are based on natural intuitions that aim towards directing jobs to shorter queues, but they differ in the details and execution of doing so. In this section, we overview these load balancing principles from the literature. We present a high-level mathematical framework based on the Poisson representation of Section <ref> that is applicable to all of them, with the only difference being the f_i^(k)(.) functions.
For each load balancing policy, we identify f_i^(k)(.), then write the mean-field equations corresponding to (<ref>). We also identify the mean-field stationary distribution ν whenever available explicitly.
In case the f_i^(k)(.) functions are discontinuous at ν, we also rewrite the formulas (<ref>) and (<ref>) so that they can be used to compute the mean system time, and rewrite the formulas (<ref>) and (<ref>) for system time distribution.
§.§ Random assignment
This is the most simple principle that we observe, and it does not lead to any balancing. With this setup the queues basically operate, and thus can be analyzed independently of each other. For random assignment,
f_i^(k)(x)=x^(k)_i, k∈{ 1,…,K} ,
and accordingly, the mean-field equation is
v_i^(k)(t)= v_i^(k)(0)+∫_0^t λ v^(k)_i-1(s) ds -∫_0^t λ v^(k)_i(s) ds
+∫_0^t μ^(k)_i+1 v^(k)_i+1(s) ds -∫_0^t μ^(k)_i v^(k)_i(s) ds.
The mean-field balance equations, obtained from (<ref>), are
μ_i^(k)ν_i^(k)=λν_i-1^(k) k∈{ 1,…,K} , i∈{ 1,…,B} .
Solving (<ref>) gives the mean-field stationary distribution
ν^(k)_i=c_k∏_j=1^iλ/μ^(k)_j, i∈{ 0,…,B^(k)} ,
with the c_k's coming from (<ref>). This is in accordance with the queues being independent.
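A direct sketch of this product form, normalized within each server type so that the probabilities sum to γ_k; the parameter values in the example are illustrative only.

import numpy as np

def random_assignment_nu(lam, mu_k, gamma_k):
    # Stationary distribution of one server type under Random assignment.
    # mu_k[i] = service rate of this type at queue length i (mu_k[0] = 0), gamma_k = ratio of this type.
    B = len(mu_k) - 1
    weights = np.ones(B + 1)
    for i in range(1, B + 1):
        weights[i] = weights[i - 1] * lam / mu_k[i]    # prod_{j <= i} lam / mu_j
    return gamma_k * weights / weights.sum()           # normalization: sum_i nu_i = gamma_k

# Single-type example with constant unit service rate.
print(random_assignment_nu(0.95, np.array([0.0] + [1.0] * 10), 1.0))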
Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is possible for Random assignment, but is taken into account by the formulas (<ref>) and (<ref>).
§.§ Join-Idle-Queue
For Join-Idle-Queue (JIQ), incoming jobs are assigned to an idle server at random. If none of the servers are idle, a server is selected at random.
For JIQ, using the notation
y_0=∑_k=1^K x_0^(k),
we have
f^(k)_i(x)=
{[ x_i^(k)/y_0 if i=0, y_0 >0,; 0 if i>0, y_0 >0,; x_i^(k) if y_0=0. ].
This system has been addressed in <cit.> for constant service rate curve and a homogeneous cluster.
The structure of the mean-field stationary distribution ν depends on the relation between λ and ∑_k=1^K γ_k μ_1^(k). We address three cases separately.
§.§.§ JIQ, subcritical case
When
λ<∑_k=1^K γ_k μ_1^(k),
there will always be idle queues in the mean-field stationary limit, so all jobs will be directed to idle queues. ν is concentrated on queues of length 0 and 1. From (<ref>) we have
μ_1^(k)ν_1^(k)=λν_0^(k)/∑_k=1^K ν_0^(k).
We do not have an explicit solution to (<ref>), but it can be solved numerically, and numerical experiments suggest a single fixed point ν. In this region, the functions f_i are continuous, so (<ref>) and (<ref>) can be used to compute the mean system time H:
H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H_1,1^(k),
and (<ref>) and (<ref>) can be used to compute the entire Laplace-transform of the system time distribution.
For subcritical JIQ, in the mean-field limit, there will be no job loss.
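Although no closed form is available, the fixed point is easy to obtain numerically. The following sketch uses scipy.optimize.fsolve for a two-type example; the parameter values are illustrative only and do not correspond to the tables used later.

import numpy as np
from scipy.optimize import fsolve

lam   = 0.7
gamma = np.array([0.5, 0.5])      # ratios of the two server types
mu1   = np.array([1.0, 0.8])      # service rates at queue length 1; lam < gamma @ mu1, so subcritical

def residual(nu0):
    # nu0[k] = stationary fraction of idle type-k servers; nu1[k] = gamma[k] - nu0[k]
    nu1 = gamma - nu0
    return mu1 * nu1 - lam * nu0 / nu0.sum()

nu0 = fsolve(residual, 0.5 * gamma)   # start from "half of each type idle"
nu1 = gamma - nu0
print(nu0, nu1)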
§.§.§ JIQ, critical case
For
λ=∑_k=1^K γ_k μ_1^(k),
the mean-field stationary distribution is concentrated on queues of length 1, so we simply have
ν_1^(k)= γ_k, k∈ (1,…,K).
The functions f_i^(k) are discontinuous at ν, so (<ref>) and (<ref>) do not apply. Instead, in the dynamic balance, whenever a queue of length 1 finishes service, a new job will enter immediately. With this, we can write the equivalent of (<ref>) for JIQ:
H_i,j^(k) = 1/μ_j^(k)+H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)),
H_1,j^(k) =1/μ_j^(k) (1≤ j≤ B^(k)-1),
As we can see, it is essentially equivalent to (<ref>) in this case, because the discontinuity would only affect the arrival rate, which is multiplied by 0 in every relevant term.
In the mean-field limit, all jobs go to queues of length 0 (which will then stay at length 1 for a positive amount of time), and there are no queues with 2 or more jobs. Accordingly, instead of (<ref>), we have
H=∑_k=1^K μ_1^(k)ν_1^(k)/λ H_1,1^(k).
For the Laplace transforms, we have
H̃_i,j^(k)(s) = μ_j^(k)/s+μ_j^(k)H̃_i-1,j-1^(k)(s), (2≤ i≤ j≤ B^(k)),
H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (1≤ j≤ B^(k)-1),
and
H̃(s)=∑_k=1^K μ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s).
For critical JIQ, in the mean-field limit, there will be no job loss.
§.§.§ JIQ, supercritical case
In case λ>∑_k=1^K γ_k μ_1^(k), there will be no idle queues, so ν_0^(k)=0 for k∈ (1,…,K). We note that f_i^(k) are discontinuous at any point with ∑_k=1^Kν_0^(k)=0 and ∑_k=1^Kν_1^(k)>0; an intuitive explanation of this discontinuity is the following. Whenever a server with a single job finishes service, it will become idle. In the mean-field limit, a job will enter the idle queue instantly, so once again, we do not observe idle queues for any positive amount of time. However, similar to the λ=∑_k=1^K γ_k μ_1^(k) case, a positive percentage of all incoming jobs will go to an idle queue. To compute this percentage, we once again observe that in the mean-field stationary distribution, service from queues of length 1 has to be balanced out completely by arrivals to idle queues.
The total service rate in queues of type k of length 1 is μ_1^(k)ν_1^(k), which is thus completely balanced out by an equal amount of arrivals. The remaining arrival rate (λ-∑_k=1^K μ_1^(k)ν_1^(k)) is distributed randomly. For longer queues, there are no discontinuities. Accordingly, the dynamic balance equations are
(λ-∑_k=1^K μ_1^(k)ν_1^(k))ν_i^(k) = μ_i+1^(k)ν_i+1^(k), i∈(1,…,B^(k)-1).
The system (<ref>) is nonlinear, but can be solved numerically. Then we can write a modified version of (<ref>) for the calculation of H^(k)_i,j. For this, we introduce
z_0=∑_k=1^K μ_1^(k)ν_1^(k),
dubbed the upkeep, which is the rate of service in servers with queue length 1, balanced out instantly by new arrivals. Essentially, the difference between (<ref>) and the original balance equations (<ref>) is the presence of this upkeep term in the case when the dispatch functions are discontinuous at the mean-field stationary distribution ν.
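For a homogeneous cluster (K=1) the system reduces to a one-dimensional root-finding problem in ν_1: given ν_1, the remaining rate λ-z_0 determines ν_2,…,ν_B recursively, and ν_1 is then fixed by the normalization. The sketch below does this with scipy.optimize.brentq; the service rate curve is illustrative only and assumes μ_1 ≤ λ so that the remaining rate stays nonnegative.

import numpy as np
from scipy.optimize import brentq

B   = 10
lam = 1.25
mu  = np.array([0.0] + [1.0 + 0.1 * i for i in range(B)])   # mu[1] = 1.0 < lam: supercritical

def nu_from_nu1(nu1):
    # Build nu_1, ..., nu_B from (lam - mu_1 nu_1) nu_i = mu_{i+1} nu_{i+1}; nu_0 = 0 in the limit.
    rest = lam - mu[1] * nu1            # remaining arrival rate lam - z_0
    nu = np.zeros(B + 1)
    nu[1] = nu1
    for i in range(1, B):
        nu[i + 1] = rest * nu[i] / mu[i + 1]
    return nu

nu1 = brentq(lambda x: nu_from_nu1(x).sum() - 1.0, 1e-9, 1.0)   # normalization: sum_i nu_i = 1
nu = nu_from_nu1(nu1)
z0 = mu[1] * nu[1]                      # the upkeep
print(nu, z0)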
According to JIQ policy, the remaining arrival rate λ-z_0 is distributed randomly for the rest of the system. Accordingly, (<ref>) becomes
H_i,j^(k) = 1/(λ-z_0)+μ_j^(k)+
(λ-z_0)/(λ-z_0)+μ_j^(k)H_i,j+1^(k)+
μ_j^(k)/(λ-z_0)+μ_j^(k)H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1),
H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)),
H_1,j^(k) =1/(λ-z_0)+μ_j^(k)+(λ-z_0)/(λ-z_0)+μ_j^(k)H_1,j+1^(k) (1≤ j≤ B^(k)-1),
H_1,B^(k)^(k) =1/μ_B^(k)^(k).
To obtain the mean system time H, instead of (<ref>), we now have
H=∑_k=1^Kμ_1^(k)ν_1^(k)/λ H_1,1^(k)+ (1-∑_k=1^Kμ_1^(k)ν_1^(k)/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H_j,j^(k)
since ∑_k=1^K μ_1^(k)ν_1^(k)/λ is the portion of the arrival rate that is used to balance out the service in queues of length 1 and the remaining portion of the incoming rate is distributed randomly.
The corresponding equations for the Laplace transforms are
H̃_i,j^(k)(s) = (λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)(
(λ-z_0)/(λ-z_0)+μ_j^(k)H̃_i,j+1^(k)(s)+
μ_j^(k)/(λ-z_0)+μ_j^(k)H̃_i-1,j-1^(k)(s))
(2≤ i≤ j≤ B^(k)-1),
H̃_i,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k)H̃_i-1,B^(k)-1^(k)(s) (2≤ i≤ B^(k)),
H̃_1,j^(k)(s) =(λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)((λ-z_0)/(λ-z_0)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)/(λ-z_0)+μ_j^(k)) (1≤ j≤ B^(k)-1),
H̃_1,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k),
and
H̃(s)=∑_k=1^Kμ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s)+ (1-z_0/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H̃_j,j^(k)(s).
In general, for the supercritical JIQ case, job loss is possible, and is taken into account by the formula (<ref>).
§.§ Join-Shortest-Queue
For Join-Shortest-Queue (JSQ), incoming jobs are assigned to the shortest queue from among all queues; in case of multiple shortest queues of the same length, one is selected randomly.
For JSQ,
f^(k)_i(x)=
{[ 0 if ∃ i'<i ∃ k': x_i'^(k')>0,; 0 if ∑_k=1^K x_i^(k)=0,; x_i^(k)/∑_k=1^K x_i^(k) otherwise. ].
For the stationary mean-field analysis, let i_0 denote the smallest i for which
∑_k=1^Kγ_kμ^(k)_i≥λ.
Such an i exists if the stability condition (<ref>) holds. Then the mean-field stationary distribution ν will be concentrated on queues of length i_0 and i_0-1: starting from an arbitrary point, queues shorter than i_0-1 will receive the entire load of arrivals, which is larger than they can process, so these queues will “fill up” to level i_0-1, while queues longer than i_0 do not receive any load at all, so these queues will go down, until they reach level i_0.
The upkeep term is very similar to the JIQ case. The total service rate in queues of length (i_0-1) is
z_0=∑_k=1^Kμ^(k)_i_0-1ν^(k)_i_0-1,
which is completely balanced out by an equal amount of arrivals. In case i_0=1, z_0=0, so there is no upkeep, and all queues are of length 0 or 1; in this case, JSQ is equivalent to either subcritical or critical JIQ. When i_0>1, there is an actual upkeep. We assume i_0>1 for the rest of this section.
The remaining arrival rate (λ-z_0) goes to queues of length i_0-1, with the queue type k chosen at random with probabilities proportional to ν^(k)_i_0-1. For each server type k, these arrivals are balanced out by the service in queues of type k and length i_0, leading to the balance equations
μ^(k)_i_0ν^(k)_i_0 =(λ-z_0) ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1 k∈ (1,…,K),
which, along with (<ref>), give a (nonlinear) system of equations for ν, which can be solved numerically.
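For a homogeneous cluster the two-point distribution can be written down directly, since ν_{i_0-1}+ν_{i_0}=1 and the balance relations reduce to μ_{i_0-1}ν_{i_0-1}+μ_{i_0}ν_{i_0}=λ. A small sketch follows; the service rate curve in the example is illustrative, chosen so that μ_3 and μ_4 match the values quoted in the numerical section, and the heterogeneous case still requires a numerical solve.

import numpy as np

def jsq_stationary_homogeneous(lam, mu):
    # Two-point mean-field stationary distribution for homogeneous JSQ.
    # mu[i] = service rate at queue length i (mu[0] = 0, non-decreasing); stability lam < mu[B] assumed.
    B = len(mu) - 1
    i0 = next(i for i in range(1, B + 1) if mu[i] >= lam)    # smallest i with mu_i >= lam
    nu = np.zeros(B + 1)
    nu[i0] = (lam - mu[i0 - 1]) / (mu[i0] - mu[i0 - 1])      # from mu_{i0-1} nu_{i0-1} + mu_{i0} nu_{i0} = lam
    nu[i0 - 1] = 1.0 - nu[i0]
    return i0, nu

# With lam = 1.25, mu_3 = 1.2 and mu_4 = 1.3 this gives i0 = 4 and nu_3 = nu_4 = 0.5.
i0, nu = jsq_stationary_homogeneous(1.25, np.array([0.0, 1.0, 1.1, 1.2, 1.3, 1.4]))
print(i0, nu[i0 - 1], nu[i0])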
Whenever a server with queue length i_0-1 finishes service, it will become the single shortest queue and receives a new arrival instantly. Rate (λ-z_0) remains for the rest of the system, which will be directed entirely to queues of length i_0-1. To ease notation, we also introduce
y_0=∑_k=1^Kν^(k)_i_0-1.
Then
H_i,j^(k) = H_i,j+1^(k) (1≤ i≤ j<i_0-1),
H_1,i_0-1^(k) = 1/((λ-z_0)/y_0 + μ_i_0-1^(k)) +
((λ-z_0)/y_0)/((λ-z_0)/y_0 + μ_i_0-1^(k)) H_1,i_0^(k),
H_i,i_0-1^(k) = 1/((λ-z_0)/y_0 + μ_i_0-1^(k)) +
((λ-z_0)/y_0)/((λ-z_0)/y_0 + μ_i_0-1^(k)) H_i,i_0^(k) +
μ_i_0-1^(k)/((λ-z_0)/y_0 + μ_i_0-1^(k)) H_i-1,i_0-2^(k) (2≤ i ≤ i_0-1),
H_1,j^(k) =1/μ_j^(k) (i_0-1< j≤ B^(k)),
H_i,j^(k) =1/μ_j^(k)+H_i-1,j-1^(k) (i_0-1< j≤ B^(k), 1≤ i ≤ j).
The first equation in (<ref>) addresses the fact that if a server has fewer than i_0-1 jobs in it, it will immediately fill up to i_0-1 jobs. We also adjust the effective arrival rate to λ -z_0, similarly to JIQ. If i_0=1, the f_i^(k) are continuous at ν, so we can use (<ref>) instead of (<ref>). If i_0=2, there will of course not be any equation with the condition (2≤ i ≤ i_0-1).
If the functions f_i^(k) are continuous at ν, we can use (<ref>) to calculate the mean system time. In case i_0=1, ν lies in the interior of a domain where the functions f^(k)_i are continuous, so this is the case, and (<ref>) simplifies to
H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H^(k)_1,1.
On the other hand, if i_0>1, the functions f_i are not continuous at ν, and (<ref>) is not applicable; instead, we have
H = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λ H^(k)_i_0-1,i_0-1 +
(1-z_0/λ)
∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H^(k)_i_0,i_0 .
The corresponding equations for the Laplace transforms are
H̃_i,j^(k)(s) = H̃_i,j+1^(k)(s)
(1≤ i≤ j<i_0-1),
H̃_1,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0+ μ_i_0-1^(k) *
(
μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)+
(λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_1,i_0(s))
H̃_i,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0 + μ_i_0-1^(k) *
((λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i,i_0(s) +
μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i-1,i_0-2(s)) (2≤ i ≤ i_0-1)
H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (i_0-1< j≤ B^(k)),
H̃_i,j^(k)(s) =μ_j^(k)/s+μ_j^(k)*H̃_i-1,j-1(s) (i_0-1< j≤ B^(k), 1≤ i ≤ j),
and
H̃(s) = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λH̃^(k)_i_0-1,i_0-1(s) +
(1-z_0/λ)
∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H̃^(k)_i_0,i_0(s).
Since y_0 and z_0 are straightforward to compute from ν, (<ref>) is still a linear system of equations for H̃_i,j^(k)(s), which is not any more difficult to solve than (<ref>).
For JSQ, there is no job loss in the mean-field limit. (We emphasize that this is due to the stability condition (<ref>), which we assume in all cases.)
§.§ Join-Shortest-Queue(d)
JSQ(d) is a version of JSQ where the dispatcher first selects d servers randomly, and dispatches the incoming job to the shortest from among the d queues.
If we set d=1, we get Random assignment, and if we set d=N, we get JSQ. The f_i^(k) functions are continuous for any finite d. Appendix <ref> addresses the case d→∞.
For JSQ(d), we introduce the auxiliary variables
y_i^(k),N=∑_j=i^B^(k)x_j^(k),N, z_i^N=∑_k=1^K y_i^(k),N,
and then inclusion-exclusion shows
f^(k),N_i(x^N)=
x_i^(k),N/∑_k=1^K x_i^(k),N×
[z_i^N(z_i^N-1/N)…(z_i^N-d-1/N)
-z_i+1^N(z_i+1^N-1/N)…(z_i+1^N-d-1/N)].
The above version of f^N_i(.) is N-dependent, but converges to
f_i^(k)(x)=x_i^(k)/∑_k=1^K x_i^(k)((z_i)^d-(z_i+1)^d).
Due to the dependency on N, we refer to <cit.>, where this type of dependence on N is allowed. Also, both f_i^(k),N and f_i^(k) are continuous. Overall, the conclusions of Theorems <ref> and <ref> apply.
The mean-field balance equations are
λν_i^(k)/∑_k=1^K ν_i^(k)((∑_k=1^K∑_j=i^B^(k)ν_j^(k))^d-(∑_k=1^K∑_j=i+1^B^(k)ν_j^(k))^d)
=μ_i^(k)ν_i^(k).
Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is possible for JSQ(d), but will be typically small enough to be negligible in practice.
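As an illustration, the limiting dispatch function above can be coded in a few lines for a homogeneous cluster and plugged into the generic drift function sketched earlier; the factory-style wrapper below is our choice and not taken from the released code.

import numpy as np

def make_f_jsq_d(d):
    # Limiting JSQ(d) dispatch function for a homogeneous cluster (K = 1):
    # f_i(x) = z_i^d - z_{i+1}^d, where z_i = x_i + x_{i+1} + ... + x_B is the tail sum.
    def f(x):
        x = np.asarray(x, dtype=float)
        z = np.concatenate([np.cumsum(x[::-1])[::-1], [0.0]])   # z[i] = sum_{j >= i} x_j, with z[B+1] = 0
        fx = z[:-1] ** d - z[1:] ** d
        fx[-1] = 0.0        # if all d sampled servers are full, the job is lost, so f_B = 0
        return fx
    return f

f_jsq2 = make_f_jsq_d(2)    # can be passed as the argument f of mean_field_drift above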
§.§ Join-Below-Threshold
Join-Below-Threshold (JBT) sets a threshold M_k which may depend on the server type k; servers of type k with queue length <M_k are considered available and servers of type k with queue length ≥ M_k are considered full. Jobs are dispatched to a random available server. If there are no available servers, jobs are dispatched at random among all servers.
JBT is commonly used in conjunction with limited processor sharing (LPS) for servers which can serve multiple jobs simultaneously in an efficient manner. This is reflected in an increasing service rate curve μ_i^(k). If μ^(k)_i were to start decreasing for large i, this is countered by setting the threshold M_k at the maximum point. M_k is referred to as the multi-programming level (MPL), and is the number of jobs served simultaneously in a single server, while further jobs wait in queue. Overall, this setup ensures the service rate curve μ^(k)_i is increasing up to M_k and constant for M_k≤ i≤ B^(k).
If we set the threshold to 1, we get the JIQ principle, and if we set it to B^(k), we get Random assignment.
We introduce the auxiliary variable
y= ∑_k=1^K∑_j=0^M_k-1x^(k)_j,
which is the ratio of available servers.
For JBT,
f_i^(k)(x)=
{[ 0 if y>0, i≥ M_k,; x^(k)_i/y if y>0, i<M_k,; x^(k)_i if y=0. ].
The mean-field balance equations are
μ^(k)_i ν^(k)_i =λν_i-1^(k)/y, i∈{1,…,M_k-1} , k∈{ 1,…,K },
with ν_i^(k)=0 for i>M_k.
For a full, detailed mean-field analysis of JBT, we refer to <cit.>. Apart from the stability condition (<ref>) and monotonicity condition (<ref>), it is usually also assumed that
λ<∑_k=1^K γ_kμ_M_k^(k),
which is a stability condition stronger than (<ref>), ensuring that the evolution of the transient mean-field limit eventually enters and then never leaves the region where no queues are longer than the threshold. On this domain, the functions f_i^(k) are continuous, and the mean-field stationary solution ν is unique and also inside this domain. An efficient numerical method to compute ν is provided in <cit.>.
As a side note, <cit.> also shows examples where (<ref>) does not hold, and there are multiple attractors in the mean-field system corresponding to quasi-stationary states of a system with a finite N, and mean-field convergence fails completely.
If (<ref>) and (<ref>) hold, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is not possible for JBT.
§ NUMERICAL EXPERIMENTS
We conducted several numerical experiments. These are by no means exhaustive, but should nevertheless display some interesting properties and allow for some numerical comparison of the various load balancing methods.
For several parameter setups, we examined simulations for various choices of N, and also computed the mean-field limit (N=∞). Simulations were done in Python and symbolic computations were done in Wolfram Mathematica. The codes for both are available at <cit.>. For the symbolic calculations, numerical inverse Laplace transform was used, for which packages are available at <cit.>.
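To keep the presentation self-contained, the following is a minimal toy simulator, not the released code, for a homogeneous cluster under JSQ(d); the parameters at the bottom are illustrative. It simulates the continuous-time Markov chain directly with exponential clocks and returns the empirical queue-length distribution at a given time; it is written for clarity rather than speed.

import numpy as np

rng = np.random.default_rng(0)

def simulate_jsq_d(N, lam, mu, B, d, T):
    # Toy CTMC simulation: N servers, Poisson(N*lam) arrivals, queue-length-dependent service rates mu[.].
    q = np.zeros(N, dtype=int)                 # q[i] = current queue length of server i (cluster starts empty)
    t = 0.0
    while t < T:
        service_rates = mu[q]                  # per-server service rate (mu[0] = 0 for idle servers)
        total = N * lam + service_rates.sum()
        t += rng.exponential(1.0 / total)
        if rng.random() < N * lam / total:
            # arrival: sample d distinct servers, join the shortest of them
            cand = rng.choice(N, size=d, replace=False)
            target = cand[np.argmin(q[cand])]
            if q[target] < B:
                q[target] += 1                 # otherwise the job is lost
        else:
            # departure: pick a busy server with probability proportional to its service rate
            server = rng.choice(N, p=service_rates / service_rates.sum())
            q[server] -= 1
    return np.bincount(q, minlength=B + 1) / N

B = 10
mu = np.array([0.0] + [1.0] * B)
print(simulate_jsq_d(1000, 0.95, mu, B, d=2, T=50.0))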
Section <ref> displays transient mean-field convergence as N is increased. Also, as t is increased, each system will converge to its stationary state.
Section <ref> compares the mean service times for both simulations and the mean-field settings.
Section <ref> addresses service time distributions.
§.§ Homogeneous transient mean-field diagrams
In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=1000 and N=10000 servers, resulting from simulations.
We will focus on homogeneous clusters with K=1 (also dropping (k) from the notation). The maximal queue length B=B^(k) will be set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>) (in fact, the system load can be computed as λ/μ_B in a homogeneous cluster).
Figures <ref>–<ref> display simulation results for the transient evolution of the homogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the number of servers is N=1000 for the plot on the left and N=10000 for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves.
§.§.§ Random
Figure <ref> displays the transient evolution with Random load balancing policy. A significant ratio of queues is longer throughout; overall, the Random load balancing principle is rather inefficient, and serves mostly as a baseline. Later we will see the effect of more efficient load balancing principles on the same systems.
The fluctuations of the simulations decrease as N is increased. Actually, as mentioned after Theorem <ref>, the fluctuations are guaranteed to be of order 1/√(N) for x^N (or, equivalently, order √(N) for X^N). However, the constant factor can be different for the various load balancing principles. For Random assignment, the fluctuations are relatively mild.
Convergence to stationarity can also be observed: as time increases, the smooth graphs converge to the mean-field stationary distribution. That said, for any fixed finite N, the order of the fluctuations will not go to 0 as time is increased.
§.§.§ JIQ
Figure <ref> displays the transient evolution with JIQ load balancing policy for λ=0.95 and λ=1.25.
Figures <ref> and <ref> have λ=0.95 (with other parameters according to Table <ref>), which is subcritical due to λ=0.95<μ_1=1 (see Section <ref>), so the system stabilizes on queues of length 0 and 1.
Figures <ref> and <ref> have λ=1.25>μ_1=1, which is supercritical, so the system starts out by filling up all empty queues in a sharp manner. After this initial period, no empty queues are present anymore, and the dynamic dispatch is distributed among queues of length 1 through 10 randomly. Similar to Random policy, once again longer queues are present in the system.
§.§.§ JSQ(2) and JSQ(5)
Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Already for d=2, the result is markedly different from Random assignment. This is a known phenomenon, referred to as power-of-2 <cit.>. The ratio of longer queues diminishes more rapidly with the queue length than for either Random or JIQ policy.
Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. Here, most of the queues will be of length 3 and 4, with the ratio of either shorter or longer queues much smaller. We also note that the dispatch function is continuous, so the transient mean-field limit functions are smooth, although they change rather sharply.
§.§.§ JSQ
Figure <ref> displays the transient evolution with JSQ load balancing policy. Here, all of the queues will be of length 3 and 4 after the system fills up. At any point in time, there are only 2 different queue lengths present, starting from lengths 0 and 1, switching to 1 and 2, then 2 and 3, then 3 and 4 as the system fills up. We also note that the dispatch function is discontinuous, so the transient mean-field limit functions have breaking points at switches to new queue length pairs.
The stationary mean-field limit is ν_3=ν_4=0.5 due to
λ=1.25=μ_3+μ_4/2=1.2+1.3/2.
For any finite N, when a job in a queue of minimal length finishes service, a shorter queue will appear for a brief but positive time. In the mean-field limit, such queues are filled back instantly.
We also note that the fluctuations are considerably larger than for either Random or JIQ. An intuitive explanation is that the higher level of control provided by JSQ will generally focus any fluctuations in either the arrival or service on a single queue length: if the arrivals outweigh the service for a short period of time, the surplus arrivals will all go to servers of minimal queue length. Overall, the strict control introduces a positive correlation between the length of the queues, resulting in larger fluctuations (which are, once again, of order 1/√(N), but with a higher constant factor). Principles with less strict control generally distribute this fluctuation among several different queue lengths, resulting in smaller fluctuations.
§.§.§ JBT
Figure <ref> displays the transient evolution with JBT load balancing policy. The MPL parameter is set to 5. In this setup, the system reaches stability before hitting the MPL threshold (and accordingly, the mean-field system reaches its attractor before the discontinuity point, so the functions remain continuous). This is the intended usage of JBT.
§.§ Heterogeneous transient mean-field diagrams
In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=10000 servers, resulting from simulations.
We will focus on heterogeneous clusters with K=2. The maximal queue length B=B^(k) will be set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>).
The parameter choices in Table <ref> are motivated by an actual real-life scenario: in many shopping centers, there are two types of checkouts: checkouts served by an employee (service rate 1 in Table <ref>), with a separate queue for each such checkout, and self-service checkouts. A single self-service checkout is typically slightly slower (service rate 0.8 in Table <ref>) than a checkout served by an employee, but this is countered by the fact that there is a batch of self-service checkouts for each queue (the batch size is 5 for Table <ref>).
Of course, in actual shopping centers, the number of queues may or may not be high enough to warrant a mean-field approach; that said, as we will see later, some derived performance measures are well-approximated by the mean-field limit already for smaller system sizes.
Figures <ref>–<ref> display simulation results for the transient evolution of the heterogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the ratio of type 1 servers with various queue lengths for the plot on the left and the ratio of type 2 servers with various queue lengths for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves.
§.§.§ Random
Figure <ref> displays the transient evolution with Random load balancing policy. A significant ratio of queues is longer throughout; in fact, servers of type 1 are overloaded, as can be seen from the fact that the majority of queues of type 1 has length 10 (equal to the buffer size) or close. In a heterogeneous system, with poor load balancing, it is possible that some server types are overloaded even though the system as a whole is subcritical.
§.§.§ JIQ
Figure <ref> displays the transient evolution with JIQ load balancing policy.
JIQ does not offer a considerable improvement over Random, as once again longer queues are present in the system. This also means that servers of type 1 are overloaded, which also results in significant data loss. On the other hand, servers of type 2 are subcritical.
§.§.§ JSQ(2) and JSQ(5)
Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Servers of type 1 are still overloaded, in which case JSQ(2) does not offer a considerable improvement over either Random or JIQ. The system (particularly servers of type 1) goes through an initial build-up period, starting from empty and converging to stationarity with the majority of queues full (length equal to buffer size 10) or close.
Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. In this case, the better load balancing results in both server types being subcritical; for server type 1, the typical queue lengths are 5 and 6, while for server type 2, the typical queue lengths are 4 and 5. Data loss is practically negligible in this case.
§.§.§ JSQ
Figure <ref> displays the transient evolution with JSQ load balancing policy. The build-up period is much sharper (in fact, the mean-field limit curves are nondifferentiable at the changes in minimal queue length), with both server types eventually reaching a state where all queue lengths are either 4 or 5. Fluctuations around the mean-field limit are relatively mild for N=10000 servers.
§.§.§ JBT
Figure <ref> displays the transient evolution with JBT load balancing policy. MPL parameters are 1 for server type 1 and 5 for server type 2. JBT load balancing policy suits the type of heterogeneous system described by Table <ref> particularly well: the MPL settings allow to fully utilize the service capacity of each server type without allowing queues longer than necessary. In fact, JBT can outperform JSQ for heterogeneous systems, as we will see in the next section.
§.§ Mean system times
The main performance measure we are going to examine is the mean system time, that is, the average time a job spends between arrival and finishing service.
First we examine the homogeneous system described by the parameter settings in Table <ref> for simulations for various system sizes ranging from N=10 to N=10000 and also the mean-field limit, with the various load balancing principles from Section <ref>. Table <ref> lists the mean system times from both simulations, and calculated from the mean-field limit using equations (<ref>) and (<ref>) (or in the discontinuous cases, their corresponding versions listed in Section <ref>). We note that despite long running times, the simulation results still may have an inherent small random variation.
JSQ is the most effective principle, which is unsurprising (although we do emphasize that in practice, JSQ comes with a heavy overhead communication burden which was not modelled here).
JSQ(d) is more effective with a higher d, but already for d=2, it is significantly better than Random, which is once again known as the power-of-2 (or power-of-d) <cit.>.
We note that jobs lost are not included in the averages in Table <ref>; in order to give a more complete picture, we mention that the theoretical job loss probability for Random policy (with the same parameters as per Table <ref>) is 0.0438, and for JIQ it is 0.0136 (for JSQ(2), JSQ(5), JSQ and JBT, job loss is negligible). Job loss probabilities for the simulations are not included in the paper, we just mention that they closely match the theoretical values.
Overall, based on Table <ref>, the mean-field approximation for the mean system times is exceedingly accurate already for small values of N.
Next we address the heterogeneous system described by the parameter settings in Table <ref>.
As long as N is finite, there are fluctuations which do not vanish even as time increases and the systems converge to their stationary limit. As expected, fluctuations are bigger for smaller values of N. For smaller values of N, the mean system time is generally above the mean-field mean system time; an intuitive explanation for this is that the limited number of servers offers less `room' to balance out short periods of overflow (coming from the natural fluctuations of arrivals and service), causing the system to operate with longer queues for said short periods.
Once again, in order to compare the mean system time for the various load balancing principles, it is important to take into account that some of these principles operate with significant data loss: for random, the theoretical job loss probability is 0.285, for JIQ, it is 0.251, and for JSQ(2), it is 0.104.
Table <ref> shows that, similar to the homogeneous case (Table <ref>), the mean-field approximation for the mean system times is very accurate for both smaller and larger choices of N (and for JSQ(5), JSQ and JBT, job loss is negligible). The only exception is JBT for N=12; for very small system sizes and system load close to critical (1.6/1.75 according to the parameters in Table <ref>), even a small burst in the arrivals can push the entire system over the threshold, at which point it switches to Random, and stays there for significant periods of time.
§.§ System time distributions
In this section we examine the theoretical probability density function of the system time in the mean-field limit for some setups and compare it with empirical distributions (histograms) from simulations for finite N.
The theoretical distributions are calculated using equations (<ref>) and (<ref>) (or in discontinuous cases their counterparts described in Section <ref>), and inverse Laplace transformation (ILT). The system (<ref>) can be solved explicitly, and the solution is a rational function (in the Laplace transform domain).
However, depending on the value of K and B^(1),…,B^(K), the solution for H̃(s) from (<ref>) can be infeasible already for moderately large values of K and B. In general, the formula for H̃(s) is relatively simple if only few of the H̃_i,j^(k)'s are nonzero, which is typically the case for JSQ. For other load balancing principles, where all H̃_i,j^(k)'s are nonzero, the explicit formula for H̃(s) from (<ref>) is infeasible already for K=2 and B^(1)=B^(2)=10.
Due to this, the parameters for this setup were the homogeneous system from Table <ref> with λ=1.25. We also set B=5, to make the ILT less complicated. Just as an example, for JSQ, with the above parameters, we have
H̃(s)=(24 s+65)^4/5 (2 s+5)^3 (10 s+13)^4.
H̃(s) can be computed for the other load balancing principles as well, but the explicit formulas are far more complicated, and are omitted from the paper.
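As a sketch of the last step, the rational function above can be inverted numerically, for instance with mpmath; the choice of package and inversion method is ours, and the paper's own scripts may differ.

import mpmath as mp

def H_tilde(s):
    # The explicit JSQ Laplace transform quoted above (lam = 1.25, B = 5).
    # Sanity check: H_tilde(0) = 1, consistent with no job loss under JSQ.
    return (24 * s + 65) ** 4 / (5 * (2 * s + 5) ** 3 * (10 * s + 13) ** 4)

for t in [0.5, 1.0, 2.0, 4.0, 8.0]:
    h = mp.invertlaplace(H_tilde, t, method='talbot')   # numerical inverse Laplace transform
    print(t, h)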
Figure <ref> displays the theoretical pdf of the system time in the mean-field limit with a red curve, while the blue histograms are from simulations with N=1000 servers. Each system was run long enough to reach the stationary regime, and only jobs arriving during this period were considered. The theoretical pdf's are normalized as per (<ref>).
In general, all histograms match the theoretical pdf's well. For random assignment and JIQ (which is supercritical with the given parameters), the system time is less concentrated (e.g. it has a higher variance). JSQ is the only one where the system time density is 0 at time t=0; for all other load balancing principles, it is possible that a job starts service immediately, which corresponds to a positive density at t=0. For JSQ(2) and JSQ(5), the match between the theoretical and numerical distributions is slightly less perfect than for others (although still very good); the exact reason for this is subject to further research.
§ CONCLUSION AND OUTLOOK
In this paper we examined the mean-field transient and stationary convergence of systems with several different load-balancing principles based on queue length.
While no rigorous proof was presented, the simulations suggest that mean-field convergence holds even for discontinuous f_i^(k) dispatch functions. We have provided formulas to compute the stationary mean-field limit, and also the mean system time in the mean-field stationary regime. In addition to that, the entire service time distribution could also be calculated with the help of the Laplace transform, adapting (<ref>) and (<ref>) for the Laplace transforms of the system times. We have also examined the mean system time numerically for several parameter setups.
There is a lot of possibility for further work in this topic. One direction would be to provide mathematically rigorous proofs for versions of Theorems <ref> and <ref> for some of the discussed systems with discontinuous dispatch functions.
Another direction is scenarios where further information is available (e.g. job size); in such cases, that information can be used to estimate the load of each queue more precisely and design other load balancing principles.
Yet another direction is to add a geometrical dimension to the server cluster, with the load balancing principle taking into account the distance of the arriving job to the queues (e.g. as in a shopping center, where customers are more likely to choose a queue physically closer to their arrival point).
We could also make the model more realistic, even if more complicated, by considering the dispatcher's communication overhead cost. However, we expect the communication overhead cost to be highly dependent on actual system settings, and as such, it seems difficult to incorporate it in a high level model in a general manner.
Another direction is to allow different job types, where certain job types can be served more efficiently by certain server types.
All in all, this is a vast topic that has a lot of potential for further development.
§ LITTLE'S LAW
In a heterogeneous system, Little's law applies to the entire system in the mean-field stationary regime, and also applies to each server type separately. It is valid regardless if the dispatch functions are continuous or not, but requires some consideration for discontinuous dispatch functions. In this section, we provide the proper formulas for each load balancing principle.
Let λ^(k) denote the effective arrival rate to servers of type k, and L^(k) denote the average queue length in servers of type k (k=1,…, K). Using these, we can compute the mean system time for a job in a server of type k via Little's law as
H^(k)=L^(k)/λ^(k).
For any load balancing principle,
L^(k)=(∑_i=0^B^(k) iν_i^(k))/(∑_i=0^B^(k)ν_i^(k)).
The formula for λ^(k) is different for continuous and discontinuous dispatch functions. For dispatch functions continuous at ν (this case includes Random, JSQ(d), JBT and also subcritical JIQ and JSQ with i_0=1), the formula for λ^(k) is
λ^(k)=λ(∑_i=0^B^(k)-1f_i^(k)(ν))/(∑_i=0^B^(k)ν_i^(k)).
For supercritical JIQ, we have
λ^(k)=μ_1^(k)ν_1^(k)+(λ-z_0)
∑_i=1^B^(k)-1ν_i^(k)/∑_i=0^B^(k)ν_i^(k),
and for JSQ with i_0>1, we have
λ^(k)=μ_i_0-1^(k)ν_i_0-1^(k)+(λ-z_0)
ν_i_0-1^(k)/∑_k=1^Kν_i_0-1^(k)/ν_i_0-1^(k)+ν_i_0^(k).
§ SYSTEM TIME DISTRIBUTION FOR LPS SERVICE PRINCIPLE
This section is a counterpart of Section <ref>; we provide formulas to compute the system time distribution for limited processor sharing (LPS) service principle.
For LPS, each server type has a parameter called the multi-programming level (MPL); the server can serve a number of jobs up to the MPL simultaneously, dividing its service capacity evenly, while further jobs wait in a FIFO queue.
Once again, let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. M^(k) denotes the multi-programming level of queues of type k. The order of jobs is irrelevant among jobs already in service; that is, for fixed k and j, h_i,j^(k)(t) is constant for i≤min(j,M^(k)). Accordingly, in the formulas we will write h_1,j^(k)(t) instead of h_i,j^(k)(t) for i≤min(j,M^(k)). For jobs that are not yet in service (i> M^(k)), their position within the queue is still relevant.
For LPS, when the tagged job is in service, three types of changes can occur to its queue: an arrival, the tagged job finishing service, or another job finishing service. In the last case, it does not matter whether the finished job is ahead of or behind the tagged job. When the tagged job is not yet in service, only two types of changes can occur: an arrival, or another job finishing service. We also use once again that arrival is not possible when the queue is full (j=B^(k)), that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K.
The corresponding version of (<ref>) is as follows:
H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)(M^(k)-1)/M^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s)+
μ_j^(k)/M^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k))
(1≤ i≤ M^(k)≤ j≤ B^(k)),
H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)(j-1)/j/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s)+
μ_j^(k)/j/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k))
(1≤ i ≤ j< M^(k)),
H̃_M^(k)+1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_M^(k)+1,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s))
( j≤ B^(k)),
H̃_i,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i-1,j-1^(k)(s))
(M^(k)+1<i≤ j≤ B^(k)).
Once again, (<ref>) and (<ref>) are applicable to compute H̃(s) when the dispatch functions f_i^(k) are continuous at ν. In other cases, the formulas may need to be modified.
§ PARTIAL CONTROL
We highlight a situation dubbed partial control. In such a system, some of the jobs are not subject to the load balancing policy, and will simply be dispatched randomly. A real life example for partial control would be directing traffic via cooperating navigation apps in cars: each car with a cooperating navigation app is subject to load balancing, but drivers without the app select routes not subject to the same load balancing.
Assume we have a system with a load balancing policy corresponding to some dispatch functions f_i^(k)(x). Load balancing only has partial control: for each job, with some fixed probability 0<p≤ 1, the job will be dispatched according to the load balancing policy, but with probability (1-p), it will be dispatched randomly. In this case, the corresponding dispatch functions are simply
f̂_i^(k)(x) = p f_i^(k)(x) + (1-p)x_i^(k).
Figure <ref> shows transient plots with JSQ load balancing principle with low (p=0.3) and high (p=0.8) levels of control. System parameters are according to Table <ref> with λ = 1.25 and N=10000. With a low level of control, the transient behaviour is closer to the case of random assignment, with longer queues also present. For low control, the minimal stationary queue length is 2, lower than the minimal stationary queue length 3 in case of full control JSQ, as the system needs to balance fewer controlled jobs (e.g. the upkeep is lower). For high control (p=0.8), the minimal stationary queue length remains 3, but once again, longer queues are also present.
§ CONVERGENCE OF JSQ(D) TO JSQ AS D→∞
This section shows an interesting visualisation of JSQ(d)'s “convergence” to JSQ as d→∞. Figure <ref> displays the solutions of the transient mean-field equations for various choices of d. In practice, JSQ(d) is quite close to JSQ already for moderately large values of d.
We note that the mean-field transient solutions are smooth for JSQ(d) for any choice of d, but not for JSQ.
|
http://arxiv.org/abs/2307.04693v1 | 20230710164634 | COMEX: A Tool for Generating Customized Source Code Representations | [
"Debeshee Das",
"Noble Saji Mathews",
"Alex Mathai",
"Srikanth Tamilselvam",
"Kranthi Sedamaki",
"Sridhar Chimalakonda",
"Atul Kumar"
] | cs.SE | [
"cs.SE",
"cs.AI"
] |
COMEX: A Tool for Generating Customized Source Code Representations
Debeshee Das1§,
Noble Saji Mathews1§,
Alex Mathai2,
Srikanth Tamilselvam2,
Kranthi Sedamaki1,
Sridhar Chimalakonda1 and
Atul Kumar2
1
Indian Institute of Technology Tirupati, India
2 IBM Research, India
{debesheedas, elbonleon, alexmathai98, srikanthtamilselvam, skranthi4444, sridhar.chimalakonda, atulkumar}@gmail.com
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================
[1]Authors have contributed equally
Learning effective representations of source code is critical for any Machine Learning for Software Engineering (ML4SE) system. Inspired by natural language processing, large language models (LLMs) like Codex and CodeGen treat code as generic sequences of text and are trained on huge corpora of code data, achieving state of the art performance on several software engineering (SE) tasks. However, valid source code, unlike natural language, follows a strict structure and pattern governed by the underlying grammar of the programming language. Current LLMs do not exploit this property of the source code as they treat code like a sequence of tokens and overlook key structural and semantic properties of code that can be extracted from code-views like the Control Flow Graph (CFG), Data Flow Graph (DFG), Abstract Syntax Tree (AST), etc. Unfortunately, the process of generating and integrating code-views for every programming language is cumbersome and time consuming. To overcome this barrier, we propose our tool - a framework that allows researchers and developers to create and combine multiple code-views which can be used by machine learning (ML) models for various SE tasks. Some salient features of our tool are: (i) it works directly on source code (which need not be compilable), (ii) it currently supports Java and C#, (iii) it can analyze both method-level snippets and program-level snippets by using both intra-procedural and inter-procedural analysis, and (iv) it is easily extendable to other languages as it is built on tree-sitter - a widely used incremental parser that supports over 40 languages. We believe this easy-to-use code-view generation and customization tool will give impetus to research in source code representation learning methods and ML4SE. The demonstration of our tool can be found at <https://youtu.be/GER6U87FVbU>.
Representation Learning, Static Analysis
§ INTRODUCTION
Source code representation learning is the task of effectively capturing useful syntactic and semantic information
embedded in source code <cit.>. It forms the backbone of ML pipelines for various SE tasks such as code classification, bug prediction, code clone detection and code summarization. Therefore, representing source code for use in ML models, with minimal loss of important information is an active research area <cit.>. It is important to note that source code is different from natural language as it follows an unambiguous structure and pattern, usually adhering to a strict underlying grammar. Hence, while creating representations for source code, it is important to infuse information from this unique structural aspect. To address this, many works including GraphCodeBERT<cit.> and GREAT<cit.> have explored leveraging code-views as a means to learn source code representations. Unfortunately, the process of generating code-views for multiple programming languages and customizing them for various SE tasks is often a time consuming process.
Most available tools are (a) positioned for analysis on compiled or compilable code (and not incomplete or uncompilable source code), (b) are specific for a single language, and (c) are not able to support both intra-procedural and inter-procedural analysis.
To address these concerns, we propose COMEX - a framework that (a) works directly on source code to generate and combine multiple code-views, (b) supports Java and C# (with planned support for other languages) and (c) works for both method-level and program-level snippets using intra-procedural and inter-procedural analysis. Since it is based on a single parser package (tree-sitter[<https://tree-sitter.github.io/tree-sitter/>]), it can be extended to new languages without additional dependencies.
As of today, most state-of-the-art models like CodeGen <cit.> and Codex <cit.> treat source code like free flowing text. Though this assumption helps simplify the required data pre-processing, it loses out on many structural aspects of code. Recently, works like NSG <cit.> have shown the benefits of using code structure. NSG leverages weak supervision using a syntax tree to generate full-length syntactically valid method bodies. Their results showcase that using this technique, even a small model (63 million parameters) can outperform LLMs like Codex (12 billion parameters).
To fuel research on similar grounds, we hope that with this package, we have lowered the entry barrier for researchers to easily integrate and leverage code-views while learning source code representations.
§ RELATED WORK
Several ML4SE works leverage code-views such as the AST <cit.>, the CFG <cit.>, the DFG <cit.>, and their combinations (CDFG <cit.>), to learn better code representations and improve performance on downstream SE tasks <cit.>.
Unfortunately, most available tools that create such views are specific to a single language.
SOOT <cit.>, a popular static analysis tool for Java, requires the input Java code to be compilable and all definitions to be available. But many existing research datasets are mostly method-level datasets with incomplete snippets and definitions <cit.>. Although python_graphs <cit.>, a framework for generating program graphs for Python, provides a composite "program graph" with combined information from various typical code-views, it does not provide users the flexibility to combine, reduce or customize the typical code-views as supported by COMEX. Joern is an open-source static analysis tool often used
as a source for intermediate graph representations of code <cit.>, with support for Java, Python, C, C++, etc., providing code-views without a means to customize, combine, or easily extend to other languages. It has limited support for inter-procedural control-flow and data-flow analysis, and for interactive exploration and visualization[https://galois.com/blog/2022/08/mate-interactive-program-analysis-with-code-property-graphs/]. COMEX overcomes these limitations by providing support for the generation of code-views through static code analysis even for non-compilable code, both at the function and program level, supporting out-of-the-box composition of views and easy extension to new languages without introducing further language-specific parser dependencies.
§ THE COMEX PACKAGE
COMEX is open-sourced[<https://github.com/IBM/tree-sitter-codeviews>] and also made available as a Python package[<https://pypi.org/project/comex/>]. Additionally, we have exposed a command-line interface that allows users to conveniently specify the input code snippet, the output format types (dot, json, png) and any required customizations or combinations of different code-views.
An overview of COMEX is depicted in Fig. <ref>. As can be seen, COMEX starts with a code snippet and a user-defined configuration as input. The snippet is then passed through a tree-sitter parser to generate a concrete syntax tree (CST). An enhanced symbol table is created by processing the CST, and both of these together are used to create a CFG. Using the CFG, we implement reaching definition analysis (RDA) to generate the DFG. It is important to note that for the CFG and DFG we implement both intra-procedural and inter-procedural analysis. In what follows, we elaborate on the details of the different code-views that we make available through COMEX.
§.§ Abstract Syntax Tree
We generate an AST by filtering some of the CST nodes provided by tree-sitter. Trivial nodes such as semicolons (;) and braces ({,}) are dropped, while non-trivial nodes
such as field_access or method_invocation are retained.
We also provide customizations for the AST like (i) a collapsed AST and (ii) a minimized AST. A ‘collapsed AST’ is one where all occurrences of the same variable are collapsed into one node. Whereas, in a ‘minimized AST’, certain node types can be ‘blacklisted’ based on the purpose of the code representation. The rationale behind these customizations is to provide smaller ASTs without losing out on critical information. This results in fewer AST nodes, thus reducing graph sizes which helps make Graph Neural Network (GNN) <cit.> approaches to source code representation learning computationally tractable.
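As an illustration of this node-filtering step, the Python sketch below prunes a tree-sitter CST into a reduced AST. It assumes the standard tree-sitter Node attributes (type, is_named, children); the particular set of trivial node types and the blacklist shown here are illustrative choices and do not reproduce COMEX's exact configuration.

TRIVIAL_TYPES = {";", ",", "{", "}", "(", ")"}   # example punctuation node types

def prune_cst(node, blacklist=frozenset()):
    """Return a list of (type, children) tuples: the retained subtrees under node."""
    kept_children = []
    for child in node.children:
        kept_children.extend(prune_cst(child, blacklist))
    keep = (node.is_named
            and node.type not in TRIVIAL_TYPES
            and node.type not in blacklist)
    if keep:
        return [(node.type, kept_children)]
    # Dropped node: splice its retained children upwards instead of losing them.
    return kept_children

# Usage (assuming `tree` was produced by a tree-sitter parser):
# ast = prune_cst(tree.root_node)
# minimized_ast = prune_cst(tree.root_node, blacklist={"modifiers"})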
§.§ Control-Flow Graph
Statement-level control-flow - Using the tree-sitter generated CST and the enhanced symbol table, we proceed to create our CFG code-view. A typical CFG consists of a network of basic blocks, where each block is a set of instructions that execute sequentially with no intermediate control jump. Hence, constructing a CFG is usually a two-step process, where we first identify the basic blocks and then determine the control-flow edges between them.
However, in COMEX, we choose to produce a statement-level CFG that maps the control-flow between statements (and not blocks). This is useful for certain ML-based approaches and for generating the DFG, as elaborated in §<ref>.
The CFG for both Java and C# is a statement-level approximation of control-flow.
Inter-procedural control-flow - We support inter-procedural control-flow by statically analyzing all class definitions, object reference declarations, abstraction and inheritance specifications, method and constructor signatures and overloading. Fig. <ref> shows a code snippet with two class definitions, ClassA (A) and ClassB (B), apart from the Main class (C). The CFG edges are highlighted in red. The diagram depicts the change of control-flow during object instantiation to the corresponding class definition via “constructor_call" edges D (29 → 1) and E (30 → 6). As an explicit constructor is available for ClassB, the control flows through the constructor before returning to the site of instantiation via the “class_return" edge F (8 → 30) . In case of method or constructor overloading, the function signatures are compared to determine the control-flow edges. When methods are called on object references, they are linked with the corresponding definition by matching the function signatures and available static references within the corresponding class. Nested function calls are also handled by tracking and mapping back all statically available signatures of function calls and their definitions.
§.§ Data-Flow Graph
Using the CFG generated in (§<ref>), we perform data-flow analysis to create our DFG code-view.
One of the fundamental techniques in data-flow analysis is Reaching Definition Analysis (RDA)
where we identify the set of definitions that may reach a program point, i.e., the definitions that may affect the value of a variable at that point. A statement-level DFG is then generated using this information. The RDA-based implementation addresses many of the significant drawbacks that we found in the data-flow extraction logic used by GraphCodeBERT <cit.>, such as the lack of inter-procedural analysis, incorrect handling of scope information, as well as data-flow through loops. It should be noted that the RDA-based analysis is inherently more computationally expensive.
In addition to method level analysis, we also support an out-of-the-box program-level DFG via a two-phase RDA. The first phase is the typical RDA algorithm for each method, followed by another iteration of RDA that also takes into consideration the inter-procedural control-flow. This implementation helps track changes made to variables that are passed as parameters via method invocations. This is only performed for non-primitive data-types since primitive data-types are passed by value in Java and C#. A full-blown alias analysis, which precisely determines all possible aliasing relationships can be challenging and computationally expensive. We hence support a partial alias analysis technique that approximates the possible memory references in a program. We also provide two additional data-flow relations - “LastDef" and “LastUse". Enabling “LastDef" results in edges that link between re-definitions of variables as well as edges between declarations and definitions of variables. Similarly, “LastUse" links the current use of a variable to the last program point where it was read. These relationships help add more edges in those method-level snippets that mainly use global variables which are not defined in the method body.
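At its core, this is the textbook iterative reaching-definitions computation. The Python sketch below shows the standard worklist formulation over a statement-level CFG; it is a generic illustration of the algorithm rather than COMEX's actual implementation, and the simple dict-based interfaces are assumptions of the example.

from collections import defaultdict, deque

def reaching_definitions(stmts, succ, defs):
    """Standard worklist RDA over a statement-level CFG.

    stmts : iterable of statement ids
    succ  : dict stmt_id -> list of successor stmt_ids (CFG edges)
    defs  : dict stmt_id -> set of variables written by the statement
    Returns IN[s]: the set of (var, def_site) pairs reaching statement s.
    """
    pred = defaultdict(list)
    for s in stmts:
        for t in succ.get(s, []):
            pred[t].append(s)

    gen = {s: {(v, s) for v in defs.get(s, set())} for s in stmts}
    IN, OUT = {s: set() for s in stmts}, {s: set() for s in stmts}

    work = deque(stmts)
    while work:
        s = work.popleft()
        IN[s] = set().union(*(OUT[p] for p in pred[s])) if pred[s] else set()
        # KILL: any incoming definition of a variable that is redefined here.
        kill = {(v, d) for (v, d) in IN[s] if v in defs.get(s, set())}
        new_out = gen[s] | (IN[s] - kill)
        if new_out != OUT[s]:
            OUT[s] = new_out
            work.extend(succ.get(s, []))
    return IN

# A data-flow edge d -> s can then be added whenever (v, d) is in IN[s]
# and statement s uses variable v.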
§.§ Combinations and Customizations
In addition to generating code-views, COMEX can also combine and customize multiple code-views into a single graph.
For example, a combination of CFG and DFG would generate the two code-views separately and then combine them based on unique node identifiers, as shown in Fig. <ref>. Additionally, as we use just one parser package, we are able to implement this feature using a single module (CombinedDriver) that works seamlessly across all languages. COMEX is currently capable of generating over 15 different customized representations[Please refer to https://github.com/IBM/tree-sitter-codeviews/blob/main/List_Of_Views.pdfList-Of-Views.pdf in the repository for a complete list].
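Because all views share unique node identifiers, combining them amounts to taking the union of their edge sets over a common node set. The Python sketch below illustrates this with networkx; it is not the CombinedDriver module itself, whose internals may differ.

import networkx as nx

def combine_views(nodes, views):
    """Combine several code-views that share unique node identifiers.

    nodes : dict node_id -> attribute dict (e.g. label, source span)
    views : dict view_name -> list of (src_id, dst_id) edges for that view
    Returns a MultiDiGraph carrying every edge, tagged with its view of origin.
    """
    g = nx.MultiDiGraph()
    for node_id, attrs in nodes.items():
        g.add_node(node_id, **attrs)
    for view_name, edges in views.items():
        for src, dst in edges:
            g.add_edge(src, dst, view=view_name)
    return g

# Example: a tiny CFG+DFG combination over three statements.
nodes = {1: {"label": "int x = 0;"}, 2: {"label": "x = x + 1;"}, 3: {"label": "print(x);"}}
views = {"cfg": [(1, 2), (2, 3)], "dfg": [(1, 2), (2, 3)]}
combined = combine_views(nodes, views)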
§ DISCUSSION AND LIMITATIONS
COMEX was tested for robustness by generating and validating the code-views obtained on large datasets popularly used for benchmarking ML-based SE tasks (CodeNet <cit.>, CodeSearchNet <cit.> and <cit.>). Many of these datapoints have missing definitions and are not compilable, but their code-views were successfully generated as long as they were free of syntax errors. However, because we support non-compilable input code snippets, we are unable to provide the highly accurate alias analysis that is usually possible only for compilable code. Instead, we provide a partial alias analysis. Among the aforementioned datasets, only <cit.> has C# datapoints, which is why we expect our implementation of Java code-views to be more robust than our C# implementation.
§ CONCLUSION AND FUTURE WORK
In source code representation learning research, there are many notable works that exploit code-specific properties like control-flow, data-flow, read-write dependencies, etc., in addition to treating code as regular natural language text. To this end, we believe that COMEX will enable researchers and developers in this domain to extract and customize structural information from code-views for new methods of representation learning. COMEX provides a framework which can be extended to support more code-views and their combinations, and it can be easily extended to many other popular languages like Python and C++, which can spur research in ML4SE and effective source code representation learning.
IEEEtran
|
http://arxiv.org/abs/2307.05397v1 | 20230711155751 | On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models | [
"Marija Ivanovska",
"Vitomir Štruc"
] | cs.CV | [
"cs.CV"
] |
^1Faculty of electrical engineering, University of Ljubljana
E-mail: {marija.ivanovska, vitomir.struc}@fe.uni-lj.si.com
On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models
Marija Ivanovska^1, Vitomir Štruc^1
Received / Accepted
=============================================================================================
Abstract
The detection of malicious Deepfakes is a constantly evolving problem that requires continuous monitoring of detectors to ensure they are able to detect image manipulations generated by the latest emerging models. In this paper, we present a preliminary study that investigates the vulnerability of single-image Deepfake detectors to attacks created by a representative of the newest generation of generative methods, i.e. Denoising Diffusion Models (DDMs). Our experiments are run on FaceForensics++, a commonly used benchmark dataset consisting of Deepfakes generated with various techniques for face swapping and face reenactment. The analysis shows that reconstructing existing Deepfakes with only one denoising diffusion step significantly decreases the accuracy of all tested detectors, without introducing visually perceptible image changes.
§ INTRODUCTION
With the rapid development of digital technologies, the generation of fake images and videos has become an almost effortless process. Although these methods have numerous benefits in the entertainment industry, they can as well be used for malicious purposes. Such examples are Deepfakes, where the face of a target person is altered or used as a replacement for another person's face in order to fabricate certain scenarios <cit.>. The manipulated data can then be exploited to spread misinformation, harm the victim or manipulate public opinion. The development of accurate Deepfake detection algorithms is therefore crucial for the prevention of possible violations.
Over the years, various machine learning algorithms have been proposed to automatically detect manipulated data <cit.>. These algorithms usually look for inconsistencies in lighting and shadows, visual artifacts, or other fingerprints left by generative models during the creation of Deepfakes. Nevertheless, the detectors can be prone to certain attacks, generated to intentionally fool the detector into misclassifying fake images as real. These attacks are usually created by adding small perturbations to existing Deepfake images <cit.>. More advanced attacking methods, however, learn to embed attacks directly into the Deepfake generation process, in order to generate Deepfakes that are more challenging to detect <cit.>.
The recent emergence of a new generation of models, the so-called Denoising Diffusion Models (DDMs), has raised great concern about the spread of fake data, as they have proved capable of generating even more realistic and convincing fakes than their predecessors, Generative Adversarial Networks (GANs) <cit.>.
Motivated by this threat, we investigate the capability of DDMs to attack Deepfake detection systems by simply reconstructing existing Deepfake images with predetermined noising and denoising steps (Figure <ref>). In our study, we limit our experiments to generative algorithms for face swapping and face reenactment.
To the best of our knowledge, we are the first to investigate the potential exploitation of DDMs in this context.
In this paper we make the following contributions: i) we test the capability of an out–of–the–box diffusion–based model, to generate attacks on Deepfake detection systems; ii) we evaluate the modified Deepfakes in terms of visual image quality; iii) we test the vulnerability of different types of single–image Deepfake detectors, to modified Deepfakes.
§ RELATED WORK
Deepfake detection. Recent single-image algorithms for Deepfake detection are predominantly based on different deep learning methods. Naive approaches usually train a CNN that learns to detect Deepfakes by classifying examples of real and fake data. Xception <cit.> and MesoNet <cit.> are among the most popular in this category. To ensure the CNN has captured discriminative features, Nguyen et al. and Wang et al. <cit.> both utilize explicit modeling of specific Deepfake artifacts in the spatial space. Luo et al. <cit.>, on the other hand, propose image analysis in the frequency space, as real and fake data typically have different frequency spectra. A similar approach is presented by Liu et al. <cit.>. Although very accurate when applied to a closed-set problem, these algorithms fail to generalize well to out-of-distribution samples and samples generated by unknown Deepfake techniques. To address this problem, Li et al. <cit.> and Shiohara et al. <cit.> have proposed self-supervised algorithms that do not rely on specific Deepfake datasets but rather learn from simulated image manipulations to compensate for the absence of the fake data subset.
Attacks on Deepfake detectors.
Many works have shown that Deepfake detection methods can be prone to certain types of carefully crafted attacks. These attacks are in general divided into two main categories, i.e. white-box and black-box attacks. The former are designed with full knowledge of the architecture and the parameters of the targeted Deepfake detector, while the latter involve trial and error to approximate the model under attack. Hussain et al. <cit.> and Gandhi et al. <cit.> perform both types of attacks by optimizing perturbations added to the Deepfake images so that they are later classified as real. Neekhara et al. <cit.> take a similar approach in black-box settings and additionally study the transferability of the generated attacks across different Deepfake detectors. Apart from image-space perturbations, Carlini et al. <cit.> also investigate the possible implementation of attacks in the latent space of the generative Deepfake model, so that it yields adversarial images. A novel type of black-box attack is introduced by Liu et al. in <cit.>, where post-processing of pre-generated Deepfakes is performed by removing detectable traces left by the Deepfake generation pipeline. The resulting Deepfakes are therefore more authentic and challenging to detect. For this purpose, a GAN-based method, TR-Net, is developed and optimized to recognize and remove predefined types of traces.
§ DIFFUSION–BASED DEEPFAKE ATTACKS
Our Deepfake attacks are generated using DifFace <cit.>, a recent diffusion-based model for blind face restoration. Note that the model we use was neither designed nor optimized for attacking Deepfake detection methods. Our aim is to test the capabilities of the selected model as a ready-to-use generator for black-box attacks. The attack generation consists of two stages. In the first stage, also known as the forward process, a selected Deepfake image y_0 is gradually corrupted by adding Gaussian noise 𝒩(0, σ^2𝐈), following a non-homogeneous Markov chain. In the second stage, known as the reverse process, the Deepfake that has been corrupted with s noising steps (x_s) is sequentially denoised by a parametrized generative model D_θ(x,σ). In our study, D_θ is a pretrained approximator that has been optimized to minimize the Kullback-Leibler (KL) divergence between the designed distribution p(x_s|y_0) and its target distribution q(x_s|x_0), where x_0 is the restored version of the initial input image y_0. A high-level overview of the generation of Deepfake attacks is presented in Figure <ref>. For more details about DifFace, the reader is referred to the original paper <cit.>.
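For intuition, the sketch below shows a generic DDPM/DDIM-style noise-and-reconstruct loop of the kind described above. It is a simplified illustration only: the denoiser interface, the noise schedule and the deterministic reverse update are placeholder assumptions, and DifFace's actual sampling procedure differs in its details.

import torch

def reconstruct(y0, denoiser, alphas_cumprod, s):
    """Corrupt Deepfake y0 with s forward noising steps, then denoise back.

    y0             : image tensor in [-1, 1], shape (1, 3, H, W)
    denoiser       : callable predicting the noise eps_theta(x_t, t) (assumed interface)
    alphas_cumprod : 1-D tensor of cumulative products of the noise schedule
    s              : number of forward noising steps (the attack strength)
    """
    # Forward process: q(x_s | y0) = N(sqrt(abar_s) * y0, (1 - abar_s) * I)
    abar_s = alphas_cumprod[s - 1]
    eps = torch.randn_like(y0)
    x = abar_s.sqrt() * y0 + (1.0 - abar_s).sqrt() * eps

    # Reverse process: step back from t = s to t = 1 (deterministic DDIM-style
    # update for brevity; a stochastic DDPM update would re-add noise each step).
    for t in range(s, 0, -1):
        abar_t = alphas_cumprod[t - 1]
        abar_prev = alphas_cumprod[t - 2] if t > 1 else torch.tensor(1.0)
        eps_hat = denoiser(x, t)
        x0_hat = (x - (1.0 - abar_t).sqrt() * eps_hat) / abar_t.sqrt()
        x = abar_prev.sqrt() * x0_hat + (1.0 - abar_prev).sqrt() * eps_hat
    return x  # the reconstructed (attacked) Deepfake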
§ EXPERIMENTS
Datasets. In our study we experiment with FaceForensics++ (FF+) <cit.>, a commonly used benchmark for evaluation of Deepfake detection methods. The dataset consists of 1000 real YouTube videos, each available in three different qualities. We use only the highest–quality, raw data. FF+ Deepfakes are generated by three different face-swapping techniques, i.e. DeepFakes, FaceSwap and FaceShifter, and two face reenactment methods, i.e. Face2Face and NeuralTextures. Our study is limited to single–image Deepfake detectors, so we chose to extract only every 10th frame from each sequence. The detection and cropping of face areas is performed with pretrained MTCNN.
All extracted Deepfakes are then reconstructed with DifFace <cit.> 6 times, each time using a different value for the total number of diffusion timesteps: 1, 10, 25, 50, 75 and 100. Noise levels representing individual values are visualized in Figure <ref>. In our experiments we follow the predetermined FF+ train, validation and test split.
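A minimal sketch of the frame-extraction and face-cropping step described above is given below, using OpenCV for frame extraction and the MTCNN implementation from facenet_pytorch for face cropping; the margin, output size and sampling interval are illustrative choices and do not necessarily reproduce the exact setup used here.

import cv2
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(image_size=256, margin=20, post_process=False)

def extract_faces(video_path, every_n=10):
    """Yield cropped face tensors from every n-th frame of a video."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            face = mtcnn(Image.fromarray(rgb))   # None if no face is detected
            if face is not None:
                yield idx, face
        idx += 1
    cap.release()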
Experimental details. For the generation of attacks, we use the publicly available official implementation of DifFace and associated pretrained weights[https://github.com/zsyOAOA/DifFace]. Generated attacks are then utilized to test the vulnerability of three different Deepfake detection models: Xception <cit.>, Face X-Ray <cit.> and SRM[https://github.com/crywang/face-forgery-detection] <cit.>. The naive classifier Xception and the frequency–based detector SRM are both trained from scratch, using real and unmodified Deepfake training samples. We train independent detectors for each FF+ Deepfake method. The evaluation of the self–supervised detector Face X-Ray is performed with a pretrained model[https://github.com/wkq-wukaiqi/Face-X-Ray]. All trained detectors are tested separately on regular FF+ Deepfakes and Deepfake attacks generated with different noise levels.
Evaluation metrics. Before testing the vulnerability of Deepfake detectors to the generated attacks, we assess the visual quality of the reconstructed Deepfakes by comparing them to their corresponding unmodified images. Their perceived quality is estimated with the Structural Similarity Index Measure (SSIM). For the comparison of higher-level image features we use the Learned Perceptual Image Patch Similarity (LPIPS), based on a pretrained SqueezeNet.
Finally, the preservation of the identity represented with unmodified Deepfakes is calculated with Cosine Similarity Index Measure (CSIM) based on identity vectors extracted by AdaFace.
The accuracy of Deepfake detectors is evaluated in terms of True Positive Rate (TPR), representing the percentage of detected Deepfakes. To ensure fair comparison of individual experimental runs and to simulate a real–world scenario, each detection method is first evaluated on the testing subset of real and unmodified fake images. The threshold at the Equal Error Rate (EER) point is then also used for the classification of attacks.
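The evaluation protocol can be summarized in a few lines of Python. The sketch below derives the EER threshold from detector scores on real and unmodified fake images using scikit-learn's ROC utilities and then reports the TPR on attacked Deepfakes; the variable names and the score convention (higher score = more likely fake) are assumptions of the example.

import numpy as np
from sklearn.metrics import roc_curve

def eer_threshold(scores_real, scores_fake):
    """Threshold at the Equal Error Rate point (higher score = 'fake')."""
    y = np.concatenate([np.zeros(len(scores_real)), np.ones(len(scores_fake))])
    s = np.concatenate([scores_real, scores_fake])
    fpr, tpr, thr = roc_curve(y, s)
    i = np.argmin(np.abs(fpr - (1.0 - tpr)))   # point where FPR ~= FNR
    return thr[i]

def true_positive_rate(scores_attack, threshold):
    """Fraction of attacked Deepfakes still classified as fake."""
    return float(np.mean(np.asarray(scores_attack) >= threshold))

# Usage:
# thr = eer_threshold(scores_on_real, scores_on_unmodified_fakes)
# tpr_attack = true_positive_rate(scores_on_attacked_fakes, thr)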
§ RESULTS
Quality assessment of Deepfake attacks. The generation of Deepfake attacks with the selected approach introduces inevitable image changes. The quantitative evaluation of their visual quality is given in Table <ref>. As can be seen from the calculated SSIM and LPIPS values, reconstructing Deepfakes with up to s=25 noising steps does not significantly change their structure. A higher number of noising steps, on the other hand, modifies the initial image more drastically, which in turn decreases the CSIM value, indicating that the identity of the face has been degraded to some extent. The qualitative analysis of the attacks also confirms these findings. As can be seen in Figure <ref>, there are no obvious, perceivable differences between unmodified Deepfakes (s=0) and attacks denoted by s=1, s=10 and s=25. We observe that higher levels of noise in general produce much more realistic images. Moreover, with s=100, the diffusion model has been found to be capable of completely removing typical Deepfake inconsistencies, such as double eyebrows, unnatural shadows, sharp facial stitching borders, etc. This noise level, however, often modifies the appearance (size, shape, color) of individual facial parts.
Evaluation of Deepfake detectors. The percentage of successfully detected Deepfakes is first calculated on regular real and fake images from FaceForensics++ to obtain the EER threshold for the binary classification of samples. With this threshold, we then calculate the True Positive Rate of the attacked detection models when they are evaluated on the modified subsets of Deepfakes. The obtained results are visualized in Figure <ref>. We observe that only one denoising iteration with the DDM severely affects the detectors' accuracies. In general, discriminative methods (Xception and SRM) are far more prone to the generated attacks in comparison to the self-supervised method (Face X-Ray). We also note that as the number of noising steps s increases, the TPR value begins to improve to a certain extent. We hypothesize that at these noise levels, the DDM starts introducing artifacts that can in some cases be recognized by Deepfake detectors. Nevertheless, further research is necessary to clarify this phenomenon.
§ CONCLUSION
Recently introduced Denoising Diffusion Models (DDMs) have shown impressive capabilities in generating highly realistic and convincing images. Here, we investigate their potential employment as ready-to-use generators for black-box Deepfake attacks. Our experiments are performed on Deepfakes from FaceForensics++. We attack three different single-image Deepfake detection methods, i.e. Xception, Face X-Ray and SRM. Our study shows that all tested detectors are highly vulnerable to even minor image modifications applied by the DDM.
ieee
|
http://arxiv.org/abs/2307.03968v1 | 20230708125450 | Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation | [
"Y. K. Negi",
"N. Balakrishnan",
"S. M. Rao"
] | cs.CE | [
"cs.CE",
"cs.NA",
"math.NA"
] |
Impact of noise on inverse design: The case of NMR spectra matching
O. Anatole von Lilienfeld
August 12, 2023
===================================================================
In this paper, we propose a new multi-level power series solution method for solving a large surface and volume electric field integral equation-based H-Matrix.
The proposed solution method converges in a fixed number of iterations and is solved at each level of the H-Matrix computation. The solution method
avoids the computation of the full matrix, as it can be solved independently at each level, starting from the leaf level. The solution at each level can be used as the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation, combined with the power series solution, gives results as accurate as the full H-Matrix iterative solver method. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the O(NlogN) solution complexity.
Method of Moments (MoM), H-Matrix, surface electric field integral equation,volume electric field integral equation.
§ INTRODUCTION
With the use of ever-increasing frequencies for various defence and civilian applications, the electrical size of electromagnetic scattering/radiation problems has grown drastically <cit.>. Solving these electrically large problems numerically to obtain fast and
accurate results is the biggest challenge in the Computational Electromagnetics (CEM) community. Also, with the increase in computing power and memory,
the need for large-scale solution algorithms has grown even more. Out of the various numerical methods in CEM, the most popular methods are:
a) the Finite Difference Time Domain (FDTD) <cit.> method in the time domain and b) the Method of Moments (MoM) <cit.> and Finite Element
Method (FEM) <cit.> in the frequency domain. Traditionally, the frequency domain methods have been more popular than the time domain methods
as most of the early experimental results were available in the frequency domain and validating the computational results was convenient and easy.
Out of the various frequency domain methods, MoM-based methods are highly accurate and flexible for modeling irregular structures. The MoM matrix can be computed with the Surface Electric Field Integral Equation (S-EFIE) for solving Perfect Electrical Conductor (PEC) problems with a surface mesh, and the Volume Electric Field Integral Equation (V-EFIE) <cit.> for solving inhomogeneous dielectric problems with a volume mesh. Further, the MoM leads to a smaller number of unknowns compared to FEM and is free from grid dispersion error. However, the MoM matrix is a full matrix compared to a sparse matrix for the FEM method. Hence, the solution of large-size problems with MoM in electromagnetics requires high matrix memory and computation time due to the dense matrix. Note that the MoM dense matrix computation, matrix-vector product, and storage costs all scale as O(N^2) for N unknowns. Solving the dense matrix with an iterative solver requires N_itr O(N^2) operations for N_itr iterations, with O(N^2) being the matrix-vector multiplication cost. With a direct solver, the complexity grows as O(N^3). Various fast solver algorithms like the Multi-Level Fast Multipole Algorithm (MLFMA) <cit.>, Adaptive Integral Method (AIM) <cit.>, FFT <cit.>, IE-QR <cit.>,
and Hierarchical Matrix (H-Matrix) <cit.> have been proposed to overcome the MoM limitations of high memory and computation cost.
Fast solvers reduce the matrix memory, matrix fill time, and matrix-vector product time to O(NlogN). The reduced matrix-vector product time
improves the solution time to N_itr O(NlogN) for N_itr iterations with various iterative solution methods like Bi-Conjugate Gradient
(BiCG) or Generalized Minimum Residual (GMRES).
Fast solvers are built on the compressibility property of the far-field interaction matrices. The compression of the far-field matrices can be done
using analytical matrix compression methods like MLFMA or AIM, and also with numerical matrix compression methods like H-Matrix. Compared to
analytical compression methods, numerical compression methods are easy to implement and are kernel independent. All the fast solvers depend on the
iteration count of the iterative solution methods. The convergence of the iterations depends on the condition number of the computed MoM matrix,
and further, for a large number of unknowns, the convergence iteration count also increases. The high iteration count can be mitigated by using various
preconditioning techniques like ILUT, Null-Field, and Schur's complement method based preconditioners <cit.>. The matrix preconditioner improves
the condition number of the matrices and reduces the iteration count of the overall matrix solution. Despite the improvement in solution time, the use
of preconditioners comes with the overhead of preconditioner computation time and extra preconditioner solution time for each iteration. Also, for
the solving of a large number of unknowns, the iteration count may still be high.
Recently there has been a trend in the CEM community for the development of an iteration-free fast solver method for solving problems with a large
number of unknowns. Various fast direct solvers <cit.> have been proposed to overcome the iteration dependency of the solution process.
These direct solvers are based on LU decomposition and compression methods. The methods are complex to implement and give quadratic scaling
for complex real-world problems.
In this work, we propose a Multi-Level (ML) fast matrix solution method based on the power series <cit.>. The proposed method exploits the
property of ML matrix compression of the H-Matrix. The matrix is solved for each level using the matrix computation of the leaf level only, and the
matrix solution can be terminated at the desired level as per the required accuracy. Our experimental results show that we get good accuracy even for the
lowest level solution. The method relies on matrix-vector multiplication at each level and using the solution of the lowest level saves matrix computation
time and memory requirement for the overall matrix solution.
The rest of the paper is organized as follows. Section II gives a summary of MoM computation for S-EFIE and V-EFIE, section III covers H-Matrix
computation for S-EFIE and V-EFIE. The derivation of the proposed ML power series solver is given in section IV. The numerical results of the
proposed method, and conclusion are discussed in sections V, and VI.
§ METHOD OF MOMENTS
MoM is a popular and efficient integral equation based method for solving various electromagnetic radiation/scattering problems. MoM can be formulated using the Electric Field Integral Equation (EFIE) for both surface and volume modeling. Surface modeling can be done using the Rao Wilton Glisson (RWG) <cit.> triangle basis function, whereas volume modeling can be done using the Schaubert Wilton Glisson (SWG) <cit.> tetrahedral basis function. For dielectric modeling, V-EFIE, unlike S-EFIE, is an integral equation of the second kind and is therefore better conditioned and more stable. V-EFIE can also model inhomogeneous bodies more efficiently than the surface EFIE. In this work, we use the RWG basis function for PEC surface S-EFIE modeling and the SWG basis function for volume V-EFIE modeling. The governing surface/volume EFIE for a conductor/dielectric scattering body illuminated by an incident plane wave states that the total electric field (E^total) on the scattering surface/volume is the sum of the incident electric field (E^inc) and the scattered electric field (E^scatt).
E^total=E^inc+E^scatt.
The scattered electric field is due to the surface current on the PEC surface or the volume polarization current in the dielectric media and is given as:
E^scatt=-jωA(r)- ∇ϕ(r).
In the above equation, A(r) is the magnetic vector potential, describing the radiation of the current, and ϕ(r) is the electric potential, describing the associated bound charge. Applying the boundary condition for a PEC structure, the S-EFIE can be written as:
E^inc=jωA(r)+ ∇ϕ(r).
Similarly, the V-EFIE can be written for a dielectric inhomogeneous body as:
E^inc=D(r)/ϵ(r) + jωA(r) + ∇ϕ(r).
In the above equation, D(r) is the electric flux density and ϵ(r) is the dielectric constant of the scattering volume medium. The surface current in equation (3) for the PEC structure is expanded with the RWG function, and similarly, in equation (4) for the dielectric volume structure, the polarization current and charge are modeled with the SWG basis function. Performing Galerkin testing on each term and integrating over the surface/volume, the final system of equations reduces to the linear system below:
[Z]x=b.
In the above equation, Z is the dense MoM matrix, b is the known incident plane wave excitation vector, and x is the vector of unknown coefficients to be computed. The dense matrix leads to high matrix computation cost, memory requirements, and solution time complexity. In the next section, we discuss the implementation of the H-Matrix for the mitigation of the high cost of the conventional MoM matrix.
§ H-MATRIX
The high cost of MoM limits its application to a few λ problem sizes. This limitation of MoM can be overcome by incorporating fast solvers. Most of the fast solvers work on the principle of compressibility of the far-field matrices. For the implementation of a fast solver, the mesh of geometry is divided into blocks using an oct-tree or binary-tree division process and terminated at the desired level with a limiting edge or face count in each block. The non-far-field interaction blocks at the lowest level are considered near-field blocks and are in the dense matrix form. The compression of the far-field block matrix at each level can be done analytically or numerically. The system of equations in equation (5) can now be written as the sum of near-field and far-field matrix form as:
[Z_N+Z_F]x=b.
In the above equation, Z_N is the near-field block matrix and Z_F contains the compressed far-field block matrices of the MoM fast solver matrix. Numerical compression of far-field matrices is easy to implement and is kernel-independent. A few of the popular fast solvers using numerical compression methods are IE-QR and the H-Matrix. In this work, we have implemented the H-Matrix for ML matrix compression. For the ML compression computation, the mesh is divided into ML binary-tree-based subgroups. The H-Matrix works on the computation of a far-field matrix for the interaction blocks satisfying the admissibility condition given in equation (7). The admissibility condition states that, for far-field computation, η times the distance between the observation cluster (Ω_t) and the source cluster (Ω_s) should be greater than or equal to the minimum diameter of the observation or source cluster, where η is the admissibility control parameter, and its value is taken as 1.0.
η dist(Ω_t,Ω_s) ≥ min(diam(Ω_t),diam(Ω_s)).
The far-field matrix block compression is performed in such a way that a block is compressed at a given level only if its parent interaction block has not already been computed at the level above. Matrix compression at each level is carried out using the Adaptive Cross Approximation (ACA) <cit.> <cit.> method. The method exploits the rank deficiency property of the far-field matrix blocks. A low-rank far-field sub-block Z_sub with m rows and n columns is decomposed into approximate U_(m× k) and V_(k× n) matrices, where k is the numerical rank of the low-rank far-field sub-block such that k<<min(m,n). In this work, for memory savings, we only compute half of the H-Matrix <cit.> by making the computation process symmetric, and to maintain the accuracy of the H-Matrix, we use re-compressed ACA <cit.> for far-field block compression. The solution of the iterative solver is iteration count dependent, and further, the convergence iteration count depends on the condition number of the matrix. Also, as the number of unknowns increases, the iteration count for convergence increases. In the next section, we discuss our proposed method, whose solution process is independent of the iteration count and does not require the far-field blocks at every level.
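For illustration, the admissibility test of equation (7) can be implemented in a few lines of Python given axis-aligned bounding boxes of the two clusters; the box-based diameter and distance estimates used below are a common simplification and are an assumption of this sketch.

import numpy as np

def admissible(box_t, box_s, eta=1.0):
    """Check the admissibility condition eta*dist(t,s) >= min(diam(t), diam(s)).

    box_t, box_s : (min_corner, max_corner) pairs of 3-D axis-aligned bounding
                   boxes for the observation and source clusters.
    """
    (t_lo, t_hi), (s_lo, s_hi) = box_t, box_s
    t_lo, t_hi, s_lo, s_hi = map(np.asarray, (t_lo, t_hi, s_lo, s_hi))
    diam_t = np.linalg.norm(t_hi - t_lo)
    diam_s = np.linalg.norm(s_hi - s_lo)
    # Distance between the two boxes (0 if they overlap).
    gap = np.maximum(0.0, np.maximum(s_lo - t_hi, t_lo - s_hi))
    dist = np.linalg.norm(gap)
    return eta * dist >= min(diam_t, diam_s)

# Blocks that fail this test at the leaf level form the dense near-field matrix;
# blocks that pass it are compressed (e.g. with ACA) at the coarsest admissible level.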
§ MULTI-LEVEL POWER SERIES SOLUTION
The full H-Matrix is a combination of near-field and far-field block matrices. The far-field compressed block matrices are computed for various levels, and in equation (6), the far-field matrix (Z_F) can be further decomposed into the different matrix levels as below:
[Z_F]=[Z_F1]+[Z_F2]+[Z_F3].
In the above equation, the far-field matrix Z_F1 corresponds to level 1, Z_F2 to level 2, and Z_F3 to level 3. Level 3 forms the leaf level of the binary tree and level 1 the top level of the tree. Fig. 1 shows the H-Matrix layout for a two-dimensional strip. In Fig. 1, the light gray boxes represent the Z_F1 far-field matrix at level 1, the dark gray boxes Z_F2 at level 2, the large white boxes Z_F3 at level 3, and the black boxes the near-field dense matrices. For illustrative purposes, the near-field matrix has a diagonal block form for a two-dimensional strip. Real-world problems are three-dimensional in structure, giving a non-diagonal block near-field matrix. To implement our ML power series solution method, we must diagonalize the near-field block matrix. The near-field matrix in equation (6) is diagonalized using the diagonal scaling coefficient [α], as computed in <cit.>, such that the scaled diagonal block near-field matrix can be given as:
[Z̃_N]=[α][Z_N].
Expanding equation (8) and scaling it with the scaling coefficients [α] gives:
[α][Z_N+Z_F1+Z_F2+Z_F3]x=[α]b.
[Z̃_N]x+[α][Z_F1]x+[α][Z_F2]x+[α][Z_F3]x=b̃.
In the above equation, b̃ is the [α]-scaled vector b. The equation can be further simplified as:
x+ [Z̃_N]^-1[α][Z_F1]x+[Z̃_N]^-1[α][Z_F2]x
+[Z̃_N]^-1[α][Z_F3]x= [Z̃_N]^-1b̃.
Let [Z̃_N]^-1[α][Z_F1]=[U_1], [Z̃_N]^-1[α][Z_F2]=[U_2] and [Z̃_N]^-1[α][Z_F3]=[U_3] equation (12) can further be simplified as
x+ [U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
[I+ U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
x+[I+ U_1]^-1[U_2]x +[I+ U_1]^-1[U_3]x
=[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+ U_1]^-1[U_2]=[V_2] and [I+ U_1]^-1[U_3]
=[V_3] equation (15) can further be simplified as
x+ [V_2]x+[V_3]x = [I+ U_1]^-1 [Z̃_N]^-1b̃.
x+[I+ V_2]^-1[V_3]x=[I+ V_2]^-1[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+V_2 ]^-1 [V_3 ]=[W_3] and equation (17) can be written as
x+[W_3]x=[I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
x=[I+W_3 ]^-1 [I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
In the above equations [I+W_3 ]^-1,[I+ V_2 ]^-1 and [I+ U_1 ]^-1 can be solved independently at each level using a power series solution method with the expansion as below:
[I+ U_1 ]^-1=[I+ [Z̃_N]^-1[α][Z_F1]]^-1.
[I+V_2 ]^-1=[I+[I+U_1 ]^-1 [U_2 ]]^-1
=[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1 [Z̃_N]^-1[α][Z_F2]]^-1.
[I+W_3 ]^-1=[I+[I+V_2 ]^-1 [V_3 ]]^-1
=[I+[I+[I+U_1 ]^-1[U_2 ]]^-1[I+U_1 ]^-1[U_3 ]]^-1
=[I+[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1 [α][Z_F2 ]]^-1
[I+[[Z̃_N]^-1 [α][Z_F1]]^-1[Z̃_N]^-1[α][Z_F3 ]]^-1.
From equations (20), (21), and (22), it can be observed that the solution of these equations is dependent on that level and the lower levels of the binary tree block interaction matrix. At each level, the inverse of the matrix system equation can be efficiently computed by using a fast power series solution<cit.>. The fast power series iterative solution converges in two fixed iterations. The solution process only depends on the matrix-vector product of the H-Matrix, thus retaining the complexity of O(NlogN)<cit.>. The ML solution can be computed at the desired level per the required accuracy. Our results show that the solution at the leaf level gives an accurate result leading to time and memory savings.
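To make the solution procedure concrete, the following numpy sketch applies equation (19) with every inverse replaced by a truncated power (Neumann) series, mirroring the fixed two-iteration convergence noted above. Dense arrays stand in for the compressed H-Matrix blocks, and the sketch is an illustration of the algebra rather than the implementation used in this work; in the leaf-level variant reported later, the list of far-field blocks would contain only the leaf-level matrix.

import numpy as np

def neumann_apply(matvec, v, n_terms=2):
    """Approximate [I + A]^{-1} v by the truncated series v - Av + A(Av) - ...

    matvec  : function computing A @ x (in practice an H-Matrix product)
    n_terms : number of correction terms; valid when the norm of A is below 1.
    """
    result, term = v.copy(), v.copy()
    for _ in range(n_terms):
        term = -matvec(term)
        result = result + term
    return result

def ml_power_series_solve(ZN_tilde_inv, alpha, ZF_levels, b):
    """Apply x = [I+W3]^-1 [I+V2]^-1 [I+U1]^-1 ZN~^-1 b~ level by level."""
    b_tilde = alpha @ b
    x = ZN_tilde_inv @ b_tilde
    solved_ops = []                       # operators whose [I+.]^-1 is already available
    for ZF in ZF_levels:                  # e.g. [Z_F1, Z_F2, Z_F3], top level first
        prefix = list(solved_ops)
        def level_op(v, ZF=ZF, prefix=prefix):
            w = ZN_tilde_inv @ (alpha @ (ZF @ v))     # U_l v
            for op in prefix:                         # e.g. V_2 = [I+U_1]^{-1} U_2
                w = neumann_apply(op, w)
            return w
        x = neumann_apply(level_op, x)
        solved_ops.append(level_op)
    return x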
§ NUMERICAL RESULTS
In this section, we show the accuracy and efficiency of the proposed method. The simulations are carried out on a system with 128 GB of memory and an Intel Xeon E5-2670 processor, using the double-precision data type. The H-Matrix computation is done with an ACA matrix compression error tolerance of 1e-3 <cit.> and solved with the GMRES iterative solver with a convergence tolerance of 1e-6 <cit.>. For a compressed or dense matrix [Z], if we want to expand [I+Z]^-1 in a power series, the necessary and sufficient condition for convergence is |Z|<1, and we choose 0.1 for our simulations <cit.>. The conductor and dielectric geometries with dielectric constant ϵ_r are meshed with an element size less than λ/10 and λ/(10√(ϵ_r)), respectively. To show the accuracy of the proposed method, the RCS results are compared with the full H-Matrix iterative solver <cit.>. In the following subsections, we demonstrate the far-field memory, computation time, and solution time savings of our proposed ML power series solution on different examples.
§.§ PEC square plate
To show the accuracy and efficiency on a PEC object, in this subsection we consider a square plate of size 15.0 λ along the x and y axes, meshed with 67,200 unknown edges. The square plate mesh is divided with binary tree division up to level 6. The PEC S-EFIE H-Matrix is solved with the ML power series solution method and with the H-Matrix iterative solver. The ML power series converges in 2 iterations, and the iterative solver converges in 686 iterations. Only the far-field matrix at leaf level 6 is computed for the ML power series solution, ignoring the far-field computation from levels 1 to 5 of the binary tree.
Fig. 2 shows the bi-static RCS of the PEC square plate, and it can be observed that the ML power series solution matches that of the H-Matrix iterative solver. Table 1 shows the savings in memory, computation, and solution time of the ML power series solution method compared with the conventional H-Matrix-based iterative solver.
§.§ Dielectric slab
To show the accuracy and efficiency for a considerable-size dielectric problem, in this subsection we consider a dielectric slab elongated along the y-axis with 10.0 λ length, 1.0 λ width, and 0.1 λ thickness, and dielectric constant (ϵ_r=2.0), meshed with 120,080 tetrahedral faces. The ML power series converges in 2 iterations, and the regular H-Matrix iterative solver converges in 33 iterations.
The dielectric slab mesh is divided with binary tree division till level 10. Only the far-field matrix at leaf level 10 is computed for the ML power series solution. The accuracy of the method for a Bi-static RCS is shown in Fig. 3. Table 2 shows the significant matrix memory, matrix fill and solution time savings of the ML power series solution compared to the conventional H-Matrix-based iterative solver.
§.§ Dielectric hollow cylinder
In this subsection, we consider a dielectric hollow cylinder elongated along the y-axis with a size of 6.0λ length, 0.4λ outer radii, and 0.05λ thickness with a dielectric constant (ϵ_r=2.0), meshed with 158,830 tetrahedral faces. The ML power series converges in 2 iterations, and the H-Matrix iterative solver converges in 24 iterations.
The hollow cylinder mesh is partitioned with a binary tree division till level 8, and for the ML power series solution only the far-field matrix at leaf level 8 is computed. Fig. 4. shows the close match in the bi-static RCS computed using the ML power series method and that with regular H-Matrix iterative solver. Table 3 shows the memory and time saving of the ML power series solution compared to the conventional H-Matrix iterative solver.
§ CONCLUSION
It can be observed from the illustrative examples in the previous sections that our proposed ML power series solution method gives considerable matrix memory, fill and solve time saving for significant size problems. The solution method is as accurate as the H-Matrix iterative solver. The savings may not be substantial for small-size mesh structures. Still, the method will give significant savings for large-size problems taken up for illustration and for complex and sizeable electrical problems like antenna arrays and complex composite structures. Also, the technique is entirely algebraic in nature and can apply to fast analytical solver-based methods like AIM and MLFMA. The matrix block in each level can be computed independently, and the solution of the method only depends on the matrix-vector product of the system matrix. Hence, the proposed method is amenable to efficient parallelization.
ACESJournal
Yoginder Kumar Negi
obtained the B.Tech degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi, India, in 2005, M.Tech degree in Microwave Electronics from Delhi University, New Delhi, India, in 2007 and the PhD degree in engineering from Indian Institute of Science (IISc), Bangalore, India, in 2018.
Dr Negi joined Supercomputer Education Research Center (SERC), IISc Bangalore in 2008 as a Scientific Officer. He is currently working as a Senior Scientific Officer in SERC IISc Bangalore. His current research interests include numerical electromagnetics, fast techniques for electromagnetic application, bio-electromagnetics, high-performance computing, and antenna design and analysis.
B. Narayanaswamy
received the B.E. degree (Hons.) in Electronics and Communication from the University of Madras, Chennai, India, in 1972, and the Ph.D. degree from the Indian Institute of Science, Bengaluru, India, in 1979.
He joined the Department of Aerospace Engineering, Indian Institute of Science, as an Assistant Professor, in 1981, where he became a Full Professor in 1991, served as the Associate Director, from 2005 to 2014, and is currently an INSA Senior Scientist at the Supercomputer Education and Research Centre. He has authored over 200 publications in the international journals and international conferences. His current research interests include numerical electromagnetics, high-performance computing and networks, polarimetric radars and aerospace electronic systems, information security, and digital library.
Dr. Narayanaswamy is a fellow of the World Academy of Sciences (TWAS), the National Academy of Science, the Indian Academy of Sciences, the Indian National Academy of Engineering, the National Academy of Sciences, and the Institution of Electronics and Telecommunication Engineers.
Sadasiva M. Rao
obtained his Bachelors, Masters, and Doctoral degrees in electrical engineering from Osmania University, Hyderabad, India, Indian Institute of Science, Bangalore, India, and University of Mississippi, USA, in 1974, 1976, and 1980, respectively. He is well known in the electromagnetic engineering community and included in the Thomson Scientifics Highly Cited Researchers List.
Dr. Rao has been teaching electromagnetic theory, communication systems, electrical circuits, and other related courses at the undergraduate and graduate level for the past 30 years at various institutions. At present, he is working at Naval Research Laboratories, USA. He published/presented over 200 papers in various journals/conferences. He is an elected Fellow of IEEE.
|
http://arxiv.org/abs/2307.04157v2 | 20230709121343 | DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer | [
"Dan Ruta",
"Gemma Canet Tarrés",
"Andrew Gilbert",
"Eli Shechtman",
"Nicholas Kolkin",
"John Collomosse"
] | cs.CV | [
"cs.CV"
] |
A threshold model of plastic waste fragmentation: new insights into the distribution of microplastics in the ocean and its evolution over time
Pascale Fabre
August 12, 2023
==============================================================================================================================================
Figure 1. Deformable style transfer using DIFF-NST, compared to baselines: NNST <cit.>, CAST <cit.>, NeAT <cit.>, and PARASOL <cit.>. Our DIFF-NST method performs style transfer with much stronger style-based form alteration - matching the shapes and structures to those in the style image, not just the colors and textures. More in Fig <ref>. Zoom for details.
Neural Style Transfer (NST) is the field of study applying neural techniques to modify the artistic appearance of a content image to match the style of a reference style image.
Traditionally, NST methods have focused on texture-based image edits, affecting mostly low level information and keeping most image structures the same. However, style-based deformation of the content is desirable for some styles, especially in cases where the style is abstract or the primary concept of the style is in its deformed rendition of some content.
With the recent introduction of diffusion models, such as Stable Diffusion, we can access far more powerful image generation techniques, enabling new possibilities.
In our work, we propose using this new class of models to perform style transfer while enabling deformable style transfer, an elusive capability in previous models.
We show how leveraging the priors of these models can expose new artistic controls at inference time, and we document our findings in exploring this new direction for the field of style transfer.
§ INTRODUCTION
Neural Style Transfer (NST) aims at re-rendering the content of one image with the distinctive visual appearance of a second style image, typically an artwork. Most prior work has focused on low level style, represented as colors and textures. However, artistic style covers a broader gamut of visual properties, including purposeful geometric alterations to the depicted content, often called form <cit.>.
We introduce a novel NST approach that considers not only low level color and texture changes but also higher level style-based geometric alterations to the depicted content. We aim to maintain the object structure so that it resembles the original content image and remains identifiable as such, but with style-based deformations of the content reflecting the artist's original intent as they depicted their original subject matter in the exemplar artwork image. Such content deformations have been more challenging to achieve, given the need for a higher level spatial semantic understanding of subject and/or scene information <cit.>.
Learning priors regarding the interplay of artistic style, semantics, and intentional deviations from photo-realistic geometry is non-trivial and not generally a part of NST pipelines. However, recent diffusion-based image generation literature has made impressive progress in modeling various visual concepts <cit.>, accurately modeling how objects fit into the world around them.
We leverage these extensively learned priors in our work, adapting them to NST. In our DIFF-NST model, we adapt them to function without text prompts in an exemplar-based setting, similar to more traditional NST. A text-less, exemplar-based approach is desirable for some stylistic edits, as textual prompts would require extensive descriptions of the style, which may be difficult or impossible to articulate fully. We build the first NST model to make significant high level edits to content images. We compare our work to several baselines and show state-of-the-art user preference in user studies.
§ RELATED WORK
The seminal work of Gatys' Neural Style Transfer (NST) <cit.> enabled neural techniques for transferring the artistic style of a reference artwork onto an unstylized depiction of some content - typically a photograph. Follow-up works created feed-forward, optimization-free approaches to achieve this <cit.>. Other techniques for NST emerged, such as optimal transport <cit.>, hyper-networks <cit.>, and Neural Neighbours <cit.>. Attention-based techniques later emerged <cit.>, with further follow-up improvements to contrastive losses <cit.>, and scaling to high resolution with improvements to robustness and detail propagation <cit.>. Deformation in style transfer has been explored in previous work <cit.> based on detecting shared keypoints between the style and content, and is thereby limited to cases where both depict a shared subject. Regarding fine-grained representation spaces for artistic style, ALADIN <cit.> introduced the first such representation, trained over their fine-grained BAM-FG dataset. This was later evolved into ALADIN-ViT <cit.>, using a Vision Transformer <cit.> for stronger expressivity, and later into ALADIN-NST <cit.>, which strengthens the disentanglement between content and style by swapping BAM-FG <cit.> for a fully disentangled, synthetic dataset.
Within the generative image domain, sizeable text-to-image diffusion models such as Dall-e 2 <cit.>, Parti <cit.>, Imagen <cit.>, and e-Diffi <cit.> have recently made significant advances in image generation fidelity and control, enabling free-form text prompts as an input control vector for guiding image synthesis, with unprecedented quality. These models are trained on large datasets and require prohibitive amounts of computation. Latent Diffusion Models <cit.> introduced the concept of applying the diffusion process to a smaller, latent representation of images rather than operating in pixel space like the previous works. This dramatically reduces the compute requirements for training and, more importantly, inference. Stability AI <cit.> democratized comprehensive open access to such models by open sourcing weights for an LDM trained on a subset of the LAION <cit.> dataset.
Much follow-up research has been enabled by and built on these pre-trained weights, known as the Stable Diffusion model. Due to the still prohibitive training costs, several works have studied the personalization of existing pre-trained model weights for new concepts, such as Dreambooth <cit.>, Textual Inversion <cit.>, and Custom Diffusion <cit.>. Other works have studied new ways to control these models for tasks such as subject-oriented editing <cit.>, or have focused on more general image editing based on text-based prompt changes <cit.>. However, most of these techniques aim at semantic changes or require text-based prompt changes. Text-less, exemplar-based stylistic edits have not commonly been explicitly explored with diffusion models. Recently, PARASOL <cit.> has used an ALADIN-ViT style embedding to perform style-based image generation, with some capability of maintaining content structure.
§ METHOD
To push beyond the traditional boundaries of texture-only style transfer, we wish to leverage the significant learned model priors such as Stable Diffusion <cit.>, having been trained on large amounts of data, with typically inaccessible amounts of compute. In our approach, as shown in Figure <ref> we freeze the pre-trained weights and train several modules of fully connected layers in each UNet self-attention block. We interleave pre-extracted content noise used for shapes and composition and the style attention values from the style image. These are used across reverse diffusion timesteps, generating a final stylized image using content and style information extracted from the interleaved data.
§.§ Preliminary analysis of style information in attention space
Prior work <cit.> has shown that early diffusion timesteps affect an image's global structural and compositional information, whereas later timesteps affect local fine details. Inspired by this, we set out to determine which timesteps of the diffusion process control style and which control content.
Given a lack of research around exemplar-based Neural Style Transfer with diffusion models, we use a prompt-based model, prompt-to-prompt <cit.>, to carry out this visualization. We use ChatGPT <cit.> to generate 20 content prompts, and we further define 10 style modifier prompts. With the prompt-to-prompt pipeline (operating over the Stable Diffusion LDM weights), we use the content prompts to generate reference content images, and we combine each content prompt with each style modifier prompt to re-generate the content images with the different explicitly defined styles still using prompt-to-prompt <cit.>. At the end of the process, we have 20 reference content example images and 200 "stylized" images. During the generation process, we extract attention values for analysis. We average the differences between the content example images' attention values and each of their 10 stylized variants', at each timestep. Fig 1 in the supplementary materials visualizes the average differences between these attention values at the diffusion timesteps. The red indicates a larger difference between the original content image and its stylized versions. Given that the structural and compositional information of the example content and their "stylized" counterparts is similar, we can infer that the stylistic differences relate to the higher attention discrepancies found at the later timesteps. This preliminary exploratory experiment clarifies the different effects of diffusion timesteps across the LDM generation process.
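A minimal sketch of this per-timestep bookkeeping is given below, assuming the self-attention tensors have already been hooked out of the UNet and stored per timestep; the dictionary layout and tensor names are illustrative, not the actual prompt-to-prompt internals:

```python
import torch

def attention_difference_profile(content_attn, stylized_attns):
    """Average absolute difference between the content generation's attention
    values and those of its stylized variants, reported per diffusion timestep.

    content_attn:   dict {timestep: attention tensor} from the content-only prompt.
    stylized_attns: list of such dicts, one per style-modified prompt.
    """
    profile = {}
    for t, base in content_attn.items():
        diffs = [(attn[t] - base).abs().mean() for attn in stylized_attns]
        profile[t] = torch.stack(diffs).mean().item()
    return profile
```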
An additional preliminary experiment using these prompt-to-prompt images is an analysis of where the style information is captured in the LDM activations. We explicitly focus on the attention mechanism, where 𝒬, 𝒦, and 𝒱 values are used in the attention process <cit.>. We generate a base non-stylized image with the content prompt and then stylized variants with style modifier prompts. We extract attention values from the content-only prompt generation and replace the attention values of the stylized generation with those from the content-only generation. Doing so re-generates the original, non-stylized image. However, in our analysis, we observe that interpolating between the 𝒱 self-attention values of the content/style-modified generations (while using only the original content values for the rest) can provide control over the stylization strength. From this experiment, we can infer that most, if not all, style information is captured from just the 𝒱 self-attention values in the LDM. We visualize examples of this style interpolation in the supplementary materials.
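The interpolation itself is a one-line change inside scaled dot-product attention. The toy sketch below, written in plain tensor code rather than the actual LDM attention module, shows the operation we refer to; `alpha` plays the role of the stylization strength:

```python
import torch.nn.functional as F

def attention_with_interpolated_v(q, k, v_content, v_style, alpha):
    """Scaled dot-product attention in which only the V values are interpolated
    between the content-only and style-modified generations.

    q, k:               (batch, tokens, dim) queries/keys.
    v_content, v_style: V values from the two generations, same shape layout as k.
    alpha:              stylization strength in [0, 1].
    """
    v = (1.0 - alpha) * v_content + alpha * v_style
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v
```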
§.§ DIFF-NST real image inversion
Our work aims to perform style transfer of existing real user-provided images. As such, the re-styled synthesized image must stay faithful to the provided content image in terms of overall composition and structure. This means we must edit the image rather than re-generate a semantically similar approximation. We invert the content image through the LDM, similar to previous works such as prompt-to-prompt <cit.> and diffusion disentanglement <cit.>. This inversion process extracts the predicted noise at each timestep, as predicted by the UNet modules. To reconstruct the same image using an LDM, this content noise can be injected into the reverse diffusion process, replacing the LDM noise predictions at multiple timesteps. The more timesteps the noises are applied to, the better the reconstruction fidelity, with less freedom of input from the LDM. As shown in the diffusion disentanglement work <cit.>, applying changes to the diffusion values from an earlier timestep allows more significant change in image structure.
Similar to these previous works, we use 50 time steps for the forward (inversion) and reverse (re-generation) diffusion processes. However, unlike these previous works, we interleave this noise starting from an earlier time, step 5, rather than 16, to improve reconstruction quality. We apply noise until step 45 instead of 50 to allow the model to self-correct some artifacts. Also, unlike prior work, we do not set the LDM predicted noises to zero for timesteps where pre-extracted content noises are not injected into the diffusion process. We aim to allow the model to generate new details to leverage its learned priors.
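A schematic of this reverse pass is sketched below. The `unet` and `scheduler_step` callables are placeholders for the actual LDM noise predictor and sampler update (their signatures are assumptions, not the real API); the loop only illustrates the interleaving window and the prompt-less, unconditional-only execution described in this section:

```python
def stylized_reverse_diffusion(latent, content_noises, unet, scheduler_step,
                               num_steps=50, start=5, stop=45):
    """Reverse diffusion in which the UNet's own noise prediction is overridden
    by pre-extracted content noises on timesteps [start, stop).

    content_noises: per-timestep noise tensors from inverting the content image.
    unet, scheduler_step: hypothetical stand-ins for the LDM noise predictor
                          and the sampler update rule.
    """
    for t in range(num_steps):
        noise_pred = unet(latent, t)           # unconditional branch only, no text prompt
        if start <= t < stop:
            noise_pred = content_noises[t]     # interleave pre-extracted content noise
        latent = scheduler_step(latent, noise_pred, t)
    return latent
```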
A notable trait of image-to-image and image-inversion with diffusion models is that color information is not disentangled from overall image structure across timesteps, as it is with feature activation across layers of a VGG model, for example. Thus, color information must be explicitly handled before inversion. Similar to previous works <cit.>, we pre-adjust the color of the content image through mean and covariance matching. We do this dynamically during training before inversion.
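One way to realise this pre-adjustment is a per-channel mean and covariance transfer in pixel space, as in the sketch below; this is a standard formulation, and the exact implementation used in training may differ in detail:

```python
import torch

def match_color(content, style, eps=1e-5):
    """Shift and rotate the content image's RGB distribution so that its mean
    and covariance match those of the style image. Inputs are (3, H, W) floats."""
    c = content.reshape(3, -1)
    s = style.reshape(3, -1)
    mu_c, mu_s = c.mean(dim=1, keepdim=True), s.mean(dim=1, keepdim=True)
    cov_c = (c - mu_c) @ (c - mu_c).T / c.shape[1] + eps * torch.eye(3)
    cov_s = (s - mu_s) @ (s - mu_s).T / s.shape[1] + eps * torch.eye(3)

    def sqrtm(m):  # matrix square root of a symmetric PSD matrix
        vals, vecs = torch.linalg.eigh(m)
        return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.T

    transfer = sqrtm(cov_s) @ torch.linalg.inv(sqrtm(cov_c))
    out = transfer @ (c - mu_c) + mu_s
    return out.reshape_as(content).clamp(0.0, 1.0)
```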
A final consideration is that we aim to perform prompt-less execution of LDMs, given our use of exemplar images for both content and style. As such, we only need to use the model's unconditional capabilities. Latent Diffusion Models execute two iterations of their model: one with no prompt conditioning and one with prompt conditioning. The output of both branches is joined at every time step via the classifier free guidance (CFG). This exposes prompt control via this adjustable strength. Given that we aim not to use any text prompts anywhere in the process, we, therefore, altogether disable the prompt-conditioned branch of the model execution and use only the un-conditional branch for both inversion and reverse diffusion. The process would function the same if the text prompt were fixed to a generic prompt throughout or if CFG was zero, but this approach saves on compute.
§.§ Attention manipulation
We train a set of MLPs across each self-attention module in the LDM UNet blocks. We do not wish to re-train or fine-tune the LDM weights due to large compute/financial requirements. Instead, we train several smaller modules to hijack part of the LDM process, similar to how content noises are injected into the diffusion process. We directly target the attention process's 𝒱 values, generating brand new values for the remaining process to use. We chose the 𝒱 values following our initial exploratory experiments with existing text-prompt-based diffusion image editing techniques such as prompt-to-prompt, where we observed that interpolation between 𝒱 values only is enough to induce stylistic changes between content prompts and style-modified prompts.
Before our reverse diffusion process, similar to the real content image inversion to collect the noise predictions for reconstruction, we additionally invert and fully reconstruct the real style image through the LDM. This time, instead of collecting the predicted noises, we collect the predicted attention 𝒱 values at every location and timestep and interleave them into the reverse diffusion process. Here, the MLPs generate the new 𝒱 values based on an input consisting of the current 𝒱 values, the corresponding 𝒱 values at the same location and timestep of the style image, and the ALADIN style code of the style image, which we also pre-extract. We use both the style attention values and ALADIN, as this provides both global and local style information. Using only the attention values induces a similar style transfer. Anecdotally, however, using both sources of style information leads to a higher overall perceived quality of style transfer. We use the more recent ALADIN-NST <cit.> variant of ALADIN, as it is more disentangled, capturing less content information. This helps to avoid semantic content creeping into the stylized image from the style image, as shown in Fig <ref>.
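A sketch of one such per-block module is given below; the hidden width, depth, and ALADIN code dimension are illustrative assumptions rather than the trained configuration:

```python
import torch
import torch.nn as nn

class VReplacementMLP(nn.Module):
    """Generates new self-attention V values from the current (content) V values,
    the style image's V values at the same block and timestep, and a global
    ALADIN style code."""

    def __init__(self, v_dim, aladin_dim=256, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * v_dim + aladin_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, v_dim),
        )

    def forward(self, v_content, v_style, aladin_code):
        # v_content, v_style: (batch, tokens, v_dim); aladin_code: (batch, aladin_dim)
        code = aladin_code.unsqueeze(1).expand(-1, v_content.shape[1], -1)
        return self.net(torch.cat([v_content, v_style, code], dim=-1))
```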
A final consideration is that we only apply this attention manipulation process to the UNet decoder/upscaling layers, as per ControlNet <cit.>. Similar to their findings, we notice no perceivable differences in the output quality, but the VRAM consumption and compute costs are lower.
§.§ Training process
Diffusion models are typically trained one random timestep at a time, given the nature of focusing the training on noise predictions at individual timesteps. In our case, however, such timestep-localized deltas are not as easy to isolate. We can only guide our model during training based on the final de-noised output image. Moreover, well known existing style losses have been designed to operate in pixel space. They are, therefore, not directly applicable to latent space - though this may be an area of potential future study.
Therefore, we build our training process around unrolling the entire diffusion process, from starting to ending timesteps. We then decode the latent values into pixel space, where we can finally apply standard NST losses amongst the stylized and real style images from our style dataset. We opt to keep these style learning losses similar to previous works to reduce variables and uncertainty from our work. We follow a similar training objective to recent works such as NeAT <cit.>, ContraAST <cit.>, and CAST <cit.> - described in detail in Sec <ref>. We can report some negative results in using the LDM UNet as a noised feature extractor for computing a VGG-like style loss to avoid the unrolling process - the features extracted by the UNet did not accurately model the image style features.
§.§ Training objective
We train our model using well explored training objectives from traditional NST methods to focus solely on the model technique - we most similarly follow training objectives resembling those of NeAT <cit.>, ContraAST <cit.>, and CAST <cit.>. Between style and stylized images, we use a VGG <cit.> style loss (Eq. <ref>), identity loss (Eq. <ref>), contrastive loss (Eq. <ref>), sobel-guided patch discriminator (Eq. <ref>), domain-level discriminator (Eq. <ref>), and ALADIN loss (Eq. <ref>). Between the stylized and content images, we use a perceptual loss (Eq. <ref>), contrastive loss (Eq. <ref>), and identity loss (Eq. <ref>). We use Sobel guidance for the patch discriminator, as per NeAT.
Equation <ref> shows the VGG style loss, with μ and σ representing the mean and standard deviation of extracted feature maps, I_s a style image from the style dataset S, I_c a content image from the content dataset C after the color adjustments, and I_sc the stylized image.
ℒ_s := λ_vgg( ∑_i=1^L ‖μ(ϕ_i(I_sc)) - μ(ϕ_i(I_s))‖_2 + ‖σ(ϕ_i(I_sc)) - σ(ϕ_i(I_s))‖_2 )
Eq <ref> represents the domain-level adversarial loss, as per ContraAST <cit.>, learning to discriminate between generated stylized images and real artworks. Here, a discriminator 𝒟 operates over the stylized image, following our model M modules. Eq <ref> details standard perceptual loss, where ϕ_i represents the pre-trained VGG-19 layer index.
ℒ_adv := λ_adv( 𝔼_I_s ∼ S[log(𝒟(I_s))] + 𝔼_I_c ∼ C, I_s ∼ S[log(1-𝒟(M(I_s, I_c)))] )
ℒ_percep := λ_percep( ‖ϕ_conv4_2(I_sc) - ϕ_conv4_2(I_c)‖_2 )
Eqs <ref> and <ref> show MSE identity losses between the reconstructed images and the style or content images, respectively. Eq <ref> shows the ALADIN loss, with 𝒜 representing the ALADIN model.
ℒ_id_s := λ_identity(‖I_ss - I_s‖_2)
ℒ_id_c := λ_identity(‖I_cc - I_c‖_2)
ℒ_aladin := λ_aladin(‖𝒜(I_sc) - 𝒜(I_s)‖_2)
Eqs <ref> and <ref> show contrastive losses as detailed in Sec 4.1, similar to <cit.> and <cit.>, where l_s and l_c are extracted style/content embeddings respectively, using a projection head, and τ is the temperature hyper-parameter. The contrastive losses are applied over the averaged attention values per timestep.
ℒ_s_contra := λ_c( -log(exp(l_s(s_i c_j)^T l_s(s_i c_x) / τ) / (exp(l_s(s_i c_j)^T l_s(s_i c_x) / τ) + ∑exp(l_s(s_i c_j)^T l_s(s_m c_n) / τ))) )
ℒ_c_contra := λ_c( -log(exp(l_c(s_i c_j)^T l_c(s_y c_j) / τ) / (exp(l_c(s_i c_j)^T l_c(s_y c_j) / τ) + ∑exp(l_c(s_i c_j)^T l_c(s_m c_n) / τ))) )
The ℒ_p term defined in Eq <ref> is our patch discriminator D_patch loss, guided by Sobel Maps (SM).
ℒ_p = λ_patch( 𝔼_I_s ∼ S[-log(D_patch(crop(I_sc, SM_sc), crops(I_s, SM_s)))] )
Our final combined loss objective is shown in <ref> where each term is weighted by their respective λ term. The loss weights are as follows: λ_vgg = 0.5, λ_adv = 5, λ_percep = 6, λ_identity = 100, λ_aladin = 10, λ_c = 1, λ_patch = 10, λ_1 = 0.25, λ_2 = 0.75.
ℒ_final := ℒ_s + ℒ_adv + ℒ_percep + ℒ_id_s + ℒ_id_c + ℒ_aladin + ℒ_s_contra + ℒ_c_contra + λ_1 ℒ_p_simple + λ_2 ℒ_p_complex
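Assembled in code, and assuming each individual term has been computed as an unweighted scalar, the objective reduces to the weighted sum below; the dictionary keys are our own shorthand, and the two patch terms carry λ_patch · λ_1 and λ_patch · λ_2 respectively:

```python
def combined_loss(losses):
    """Weighted sum of the training objective terms; `losses` maps shorthand
    names to unweighted scalar tensors."""
    weights = {
        "style_vgg": 0.5, "adv": 5.0, "percep": 6.0,
        "id_s": 100.0, "id_c": 100.0, "aladin": 10.0,
        "contra_s": 1.0, "contra_c": 1.0,
        "patch_simple": 10.0 * 0.25, "patch_complex": 10.0 * 0.75,
    }
    return sum(weights[name] * losses[name] for name in weights)
```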
§ EXPERIMENTS AND EVALUATION
Neural style transfer using diffusion models is a nascent sub-field of research. As such, very few works study this new direction, much less via prompt-less techniques. Despite not being strictly an NST model, PARASOL <cit.> is currently the only suitable method we can baseline against. We additionally compare against three recent "traditional" NST techniques, NNST <cit.>, NeAT <cit.>, and CAST <cit.>. These techniques have focused on texture-based style transfer, and as such, their stylized outputs contain a much closer match between the textures of the style and stylized images. This is reflected in metrics such as SIFID <cit.>, used in the NST literature so far, which precisely measure such correlations.
The unrolled approach of training diffusion models does incur a high computation cost. Our technique can train over an LDM at 512px resolution on a GPU with 48GB VRAM at batch size 1. We use gradient accumulation over 8 steps to raise the effective batch size to 8. Inference at 512px fits on 24GB VRAM. We train our model for 3 weeks on a single A100. Like NeAT <cit.>, we use the BBST-4M dataset they introduce, due to its great variety of style data, covering not just the fine-art imagery more commonly found in other datasets. Because both our method and NeAT were trained on BBST-4M, we aim to use a test set with no overlap with the training data. We use the test set from ALADIN-NST <cit.>, which was collected as a test set not overlapping with previous datasets such as BBST-4M. The test set contains 100 content and 400 style images, resulting in 40,000 stylized images. We collect quantitative metrics in Table 1, measuring SIFID <cit.> and Chamfer for style and color consistency with the style image respectively, and LPIPS <cit.> for structure consistency with the content. Due to long-running generation times for our method and those of multiple baselines, we randomly sub-sample and use 5,000 images.
Table 1: Quantitative metrics. Lower is better. ↓

Model             LPIPS ↓   SIFID ↓   Chamfer ↓
NeAT <cit.>       0.624     0.880     24.970
CAST <cit.>       0.632     1.520     43.864
NNST <cit.>       0.633     2.007     53.328
PARASOL <cit.>    0.716     3.297     105.371
DIFF-NST (Ours)   0.656     2.026     45.777
Table 2: User studies for our model, for individual ratings (out of 5) and 5-way preferences (%). Higher is better. ↑

Model             Content Rating ↑   Style Rating ↑   Content Preference ↑   Style Preference ↑
NeAT <cit.>       3.271              2.952            32.222                 26.000
CAST <cit.>       3.031              2.863            16.756                 16.133
NNST <cit.>       2.937              2.712            21.200                 17.778
PARASOL <cit.>    2.301              2.257            12.400                 9.556
DIFF-NST (Ours)   2.751              2.973            17.422                 30.533
We present a qualitative random sample of stylizations in Fig <ref> and the supplementary materials. We visualize stylizations using our method, the closest technically related work PARASOL <cit.>, and some traditional NST techniques.
The most impactful ablation to report on is experimenting with the style embedding used alongside the style attention values. We show some comparative examples in Fig <ref>, having tested the regular ALADIN-ViT style embedding and the more disentangled ALADIN-NST variant. The ViT variant introduces some content features from the style image into the stylized image when these features have strong activations - most commonly occurring with faces. Though rare, we mitigate this issue using a fully disentangled style embedding, ALADIN-NST.
§.§ User studies
We undertake a pair of user studies to gauge real life human preference amongst our method and the baselines. First, we carry out an individual rating exercise, measuring the content fidelity between the content image and the stylized image, and separately measuring the style consistency with the style image. Second, we carry out a 5-way comparison, where we ask workers to select their preferred result from randomly shuffled samples. We bin the ratings in the individual exercise into five levels, and we explicitly instruct what each rating level should represent. We include the definitions in the supplementary material. We randomly sub-sample 750 stylized samples from the test set and compare our method against each baseline on Amazon Mechanical Turk (AMT). We collect and average our responses over 5 different workers for each comparison, and show our results in Table 2.
The results indicate that workers are scoring our DIFF-NST method low on the content information, in both the ratings and preference studies. This is a positive result, as it highlights our technique's more substantial content deformation. The only model which scored lower is PARASOL. However, as seen in our visual comparison figures, PARASOL tends to make significant conceptual changes to the depicted content. It is not so much a technique for style transfer as it is for style-inspired re-generation of similar semantic content. The results for our style-focused experiments indicate that workers prefer our method to baselines in both individual ratings and 5-way preference studies, which signifies a successful transfer of style while still deforming the content.
§.§ Inference controls
One key strength of our diffusion-based NST method is control over the structural deformation of the depicted content with respect to the style image. The reference content information is injected into the diffusion process by applying, at each timestep, noises pre-extracted from the content image inversion. With diffusion models, the early timesteps strongly affect the significant structural components of the image, whereas the later timesteps affect lower level textural information. Therefore, by varying the starting timestep at which these pre-extracted content noises are applied, we can adjust, at inference time, how much the style should deform the content structure. This effect is difficult to evaluate quantitatively, but we show two examples in Fig <ref>.
An alternative vector of inference-time control is varying the diffusion timesteps in which our method's attention replacement happens. By stopping at earlier timesteps, less style information is injected into the diffusion process, reducing the stylization strength. Unlike reducing content noise injection, this approach maintains the content structure better and more directly targets the style properties instead of structure. We show examples of this second approach in Fig <ref>, using the same example images as in Fig <ref> for clarity.
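Both controls reduce to two loop bounds in the reverse diffusion sketch shown earlier; the sketch below is again schematic, with `unet_with_style` a hypothetical stand-in for the UNet with our attention-replacement modules attached:

```python
def controlled_stylization(latent, content_noises, unet_with_style, scheduler_step,
                           num_steps=50, content_start=5, content_stop=45,
                           style_stop=50):
    """content_start: starting the content-noise injection later allows stronger
                      style-driven deformation of the structure.
    style_stop:       stopping the attention (V) replacement earlier weakens the
                      stylization while better preserving the content structure."""
    for t in range(num_steps):
        noise_pred = unet_with_style(latent, t, apply_style=(t < style_stop))
        if content_start <= t < content_stop:
            noise_pred = content_noises[t]
        latent = scheduler_step(latent, noise_pred, t)
    return latent
```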
§ LIMITATIONS AND CONCLUSIONS
One limiting factor of our approach is that textures are not matched to the style image with as much detail and fidelity as traditional NST approaches. This can, however, be alleviated by introducing a conventional NST approach into the pipeline as a post-processing step.
Though rarely, due to the one-to-one mapping between the content and style attention values, some structure from the style image occasionally creeps into the stylized image. We can report negative results from experimenting with Neural Neighbours <cit.> in attention space, which resolved this issue, but only at the cost of worse overall stylization quality. This is an area of potential future improvement.
One of the principal challenges with our method has been computation due to the unrolled nature of the reverse diffusion process during training. Future work can explore the adaptation of the style training objective to the latent space instead of pixel space, enabling non-unrolled training.
§ BROADER IMPACT
Neural techniques for artistic image editing and generation offer new tools and capabilities for skilled artists to take their work further than before. However, this does make the field easier to enter as a novice. As such, existing novice-level artists may find more competition in this space, reducing work opportunities. As digital art emerged, it offered new capabilities to artists with new tools, to the detriment of some artists using physical mediums. Neural techniques can similarly open up new genres of art while reducing some opportunities for some existing digital artists.
§ PROMPT-TO-PROMPT ANALYSIS
The base content captions partially generated using ChatGPT for the prompt-to-prompt analysis experiments are:
* A squirrel eating a burger
* A hamster on a skateboard
* A toy next to a flower
* A car driving down the road
* A giraffe in a chair
* A bear wearing sunglasses
* An octopus in a space suit
* A hedgehog getting a haircut
* A sloth running a marathon
* A cat posing like napoleon
* A dog with a beard, smoking a cigar
* A bee flying underwater next to fish
* A fish with a hat, playing a guitar
* A bird with a bowtie, playing a saxophone
* A turtle with a top hat, playing a piano
* A frog with a cowboy hat, playing a banjo
* A mouse with a sombrero, playing a trumpet
* A snake with a beret, playing a violin
* A rabbit with a fedora, playing a cello
* A squirrel with a baseball cap, playing a drum
The style modifiers are:
* A van gogh painting of
* A graphite sketch of
* A neon colourful pastel of
* A minimal flat vector art illustration of
* A watercolour painting of
* A psychedelic inverted painting of
* A pop-art comic book panel of
* A neoclassical painting of
* A cubist abstract painting of
* A surreal dark horror painting of
We visualize results from the preliminary prompt-to-prompt analysis experiments, in Fig <ref>. The figure shows the first content prompt for the base content, with the subsequent rows interpolating towards style-modified prompts using style prompt modifiers 1, 4, 2, and 8. Although not directly relevant to our study, it was also interesting to note that the stylization strength could be pushed beyond the default strength by pushing the interpolation into over-drive, similar to the technique presented in NeAT <cit.>.
§ ADDITIONAL DETAILS ON USER STUDIES
We carried out two user studies: an individual rating exercise with defined rating levels, and a 5-way preference comparative exercise. For each, we executed the experiments once for the content, and once for the style.
Our content-focused rating exercise asks the following question: "A photo has been re-generated with a different style. Please rate the structure details of the new image, 1 to 5 as follows:", where we next define the expected judgement criteria for each rating level as follows:
* The structure is different
* The structure slightly resembles the photo
* The structure mostly resembles the photo
* The structure is the same
* The structure is the same, including small details
Our style focused rating exercise asks the following question: "A photo has been transformed into the style of the artwork. Please rate the quality of the style, 1 to 5 as follows:", where the rating definitions are:
* The style is not recognisable
* The style is recognisable
* The colours match
* The textures match
* The shapes match
The 5-way comparative study presents the following question for the content-focused experiment: "A photo has been re-generated with a different style in 5 ways. Please select the highest quality reconstruction of the photo's structure details", and the following for the style-focused experiment: "A photo has been re-generated with a different style in 5 ways. Please select the most similar artistic style to the artwork"
The workers were fairly compensated. We used 5 different workers for each stylized image, for each question.
|
http://arxiv.org/abs/2307.03956v1 | 20230708112126 | The annealed parabolic Anderson model on a regular tree | [
"Frank den Hollander",
"Daoyi Wang"
] | math.PR | [
"math.PR"
] |
[1]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
[2]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
The annealed parabolic Anderson model
on a regular tree
F. den Hollander
[1]
D. Wang
[2]
August 12, 2023
=========================================================
We study the total mass of the solution to the parabolic Anderson model on a regular tree with an i.i.d. random potential whose marginal distribution is double-exponential. In earlier work we identified two terms in the asymptotic expansion for large time of the total mass under the quenched law, i.e., conditional on the realisation of the random potential. In the present paper we do the same for the annealed law, i.e., averaged over the random potential. It turns out that the annealed expansion differs from the quenched expansion. The derivation of the annealed expansion is based on a new approach to control the local times of the random walk appearing in the Feynman-Kac formula for the total mass. In particular, we condition on the backbone to infinity of the random walk, truncate and periodise the infinite tree relative to the backbone to obtain a random walk on a finite subtree with a specific boundary condition, employ the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs, and afterwards let the truncation level tend to infinity to obtain an asymptotically sharp asymptotic expansion.
MSC2010: 60H25, 82B44, 05C80.
Keywords: Parabolic Anderson model, Feynman-Kac formula, regular tree, double-exponential random potential, backbone of random walk, annealed Lyapunov exponent, variational formula.
Acknowledgment:
The research in this paper was supported by the Netherlands Organisation for Scientific Research through NWO Gravitation Grant NETWORKS-024.002.003.
§ INTRODUCTION AND MAIN RESULTS
Section <ref> provides background and motivation, Section <ref> lists notations, definitions and assumptions, Section <ref> states the main theorems, while Section <ref> places these theorems in their proper context.
§.§ Background and motivation
The parabolic Anderson model (PAM) is the Cauchy problem
∂_t u(x,t) = Δ_𝒳 u(x,t) + ξ(x) u(x,t) , t>0, x ∈𝒳,
where t is time, 𝒳 is an ambient space, Δ_𝒳 is a Laplace operator acting on functions on 𝒳, and ξ is a random potential on 𝒳. Most of the literature considers the setting where 𝒳 is either ^d or ^d with d ≥ 1, starting with the foundational papers <cit.>, <cit.>, <cit.> and further developed through a long series of follow-up papers (see the monograph <cit.> and the survey paper <cit.> for an overview). More recently, other choices for 𝒳 have been considered as well:
(I)
Deterministic graphs (the complete graph <cit.>, the hypercube <cit.>).
(II)
Random graphs (the Galton-Watson tree <cit.>, <cit.>, the configuration model <cit.>).
Much remains open for the latter class.
The main target for the PAM is a description of intermittency: for large t the solution u(·,t) of (<ref>) concentrates on well-separated regions in , called intermittent islands. Much of the literature focusses on a detailed description of the size, shape and location of these islands, and on the profiles of the potential ξ(·) and the solution u(·,t) on them. A special role is played by the case where ξ is an i.i.d. random potential with a double-exponential marginal distribution
ℙ(ξ(0) > u) = ^-^u/ϱ, u ∈ℝ,
where ϱ∈ (0,∞) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and represents a class of its own.
In the present paper we consider the case where 𝒳 is an unrooted regular tree . Our focus will be on the asymptotics as t→∞ of the total mass
U(t) = ∑_x ∈ u(x,t).
In earlier work <cit.>, <cit.> we were concerned with the case where 𝒳 is a rooted Galton-Watson tree in the quenched setting, i.e., almost surely with respect to the random tree and the random potential. This work was restricted to the case where the random potential is given by (<ref>) and the offspring distribution of the Galton-Watson tree has support in \{1} with a sufficiently thin tail. In the present paper our focus will be on the annealed setting, i.e., averaged over the random potential. We derive two terms in the asymptotic expansion as t→∞ of the average total mass
⟨ U(t) ⟩ = ∑_x ∈⟨ u(x,t) ⟩,
where ⟨·⟩ denotes expectation with respect to the law of the random potential. It turns out that the annealed expansion differs from the quenched expansion, even though the same variational formula plays a central role in the two second terms.
The derivation in the annealed setting forces us to follow a different route than in the quenched setting, based on various approximations of that are more delicate than the standard approximation of ^d (see <cit.>). This is the reason why we consider regular trees rather than Galton-Watson trees, to which we hope to return later. A key tool in the analysis is the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, which is recalled in Appendix <ref>.
§.§ The PAM on a graph
§.§.§ Notations and definitions
Let G = (V,E) be a simple connected undirected graph, either finite or countably infinite, with a designated vertex called the root. Let Δ_G be the Laplacian on G, i.e.,
(Δ_G f)(x) = ∑_y∈ V:{x,y}∈ E [f(y) - f(x)], x ∈ V, f V→,
which acts along the edges of G. Let ξ := (ξ(x))_x ∈ V be a random potential attached to the vertices of G, taking values in . Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
∂_t u(x,t) = (Δ_G u)(x,t) + ξ(x) u(x,t), x ∈ V, t>0,
u(x,0) = δ_𝒪(x), x ∈ V.
The quantity u(x,t) can be interpreted as the amount of mass at time t at site x when initially there is unit mass at . The total mass at time t is U(t) = ∑_x ∈ V u(x,t). The total mass is given by the Feynman-Kac formula
U(t) = _(^∫_0^t ξ(X_s) s),
where X=(X_t)_t ≥ 0 is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and _ denotes the law of X given X_0=. Let ⟨·⟩ denote expectation with respect to ξ. The quantity of interest in this paper is the average total mass at time t:
⟨ U(t) ⟩ = ⟨_(^∫_0^t ξ(X_s) s)⟩.
§.§.§ Assumption on the potential
Throughout the paper we assume that the random potential ξ = (ξ(x))_x ∈ V consists of i.i.d. random variables with a marginal distribution whose cumulant generating function
H(u) = log⟨^uξ()⟩
satisfies the following:
[Asymptotic double-exponential potential]
There exists a ϱ∈ (0,∞) such that
lim_u→∞ u H”(u) = ϱ.
[Double-exponential potential]
A special case of (<ref>) is when ξ() has the double-exponential distribution in (<ref>), in which case
H(u) = logΓ(ϱ u + 1)
with Γ the gamma function.
By Stirling's approximation, (<ref>) implies
H(u) = ϱ u log(ϱ u) - ϱ u + o(u), u →∞.
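Spelled out for the double-exponential case, with Stirling's formula log Γ(z+1) = z log z − z + O(log z) and z = ϱ u, this reads:

```latex
H(u) = \log\Gamma(\varrho u + 1)
     = \varrho u \log(\varrho u) - \varrho u + O(\log(\varrho u))
     = \varrho u \log(\varrho u) - \varrho u + o(u),
\qquad u \to \infty.
```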
Assumption <ref> is more than enough to guarantee existence and uniqueness of the non-negative solution to (<ref>) on any discrete graph with at most exponential growth (as can be inferred from the proof in <cit.>, <cit.> for the case G=^d). Since ξ is assumed to be i.i.d., we have from (<ref>) that
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))]),
where
ℓ_t(x) = ∫^t_0 1{X_s =x } s, x ∈ V, t≥ 0,
is the local time of X at vertex x up to time t.
§.§.§ Variational formula
The following characteristic variational formula is important for the description of the asymptotics of ⟨ U(t)⟩. Denote by (V) the set of probability measures on V. For p ∈(V), define
I_E(p) = ∑_{x,y}∈ E( √(p(x)) - √(p(y)) )^2,
J_V(p) = - ∑_x ∈ V p(x) log p(x),
and set
χ_G(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞).
The first term in (<ref>) is the quadratic form associated with the Laplacian, which is the large deviation rate function for
the empirical distribution
L_t = 1/t∫_0^t δ_X_s s = 1/t∑_x ∈ Vℓ_t(x) δ_x ∈(V)
(see e.g. <cit.>). The second term in (<ref>) captures the second order asymptotics of ∑_x ∈ V H(tp(x)) as t →∞ via (<ref>) (see e.g. <cit.>).
§.§.§ Reformulation
The following lemma pulls the leading order term out of the expansion and shows that the second order term is controlled by the large deviation principle for the empirical distribution.
[Key object for the expansion]
If G=(V,E) is finite, then
⟨ U(t) ⟩ = ^H(t) + o(t) _(^-ϱ t J_V(L_t)),
t →∞.
where J_V is the functional in (<ref>) and L_t is the empirical distribution in (<ref>).
Because ∑_x ∈ Vℓ_t(x) = t, we can rewrite (<ref>) as
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))])
= ^H(t) 𝔼_𝒪(exp{t ∑_x∈ V1/t[H((ℓ_t(x)/t) t) - (ℓ_t(x)/t) H(t)]}).
Assumption <ref> implies that H has the following scaling property (see <cit.>):
lim_t→∞1/t [H(ct) - cH(t)] = ϱ c log c uniformly in c ∈ [0,1].
Hence the claim follows.
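As an illustration (a check, not part of the proof): in the double-exponential case, inserting the expansion H(u) = ϱ u log(ϱ u) − ϱ u + o(u) gives, for fixed c ∈ (0,1],

```latex
\frac{1}{t}\bigl[H(ct) - cH(t)\bigr]
  = \frac{1}{t}\Bigl[\varrho c t \log(\varrho c t) - \varrho c t
      - c\,\bigl(\varrho t \log(\varrho t) - \varrho t\bigr)\Bigr] + o(1)
  = \varrho c \log c + o(1),
\qquad t \to \infty,
```

in line with the right-hand side of the scaling property above.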
§.§ The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
In this section we specialise to the case where G= = (E,V), an unrooted regular tree of degree d +1 with d ≥ 2 (see Fig. <ref>). The main theorem of our paper is the following expansion.
[Growth rate of the total mass]
For any d ≥ 4, subject to Assumption <ref>,
1/tlog⟨ U(t) ⟩ = ϱlog(ϱ t) - ϱ - χ_(ϱ) + o(1), t →∞,
where χ_(ϱ) is the variational formula in (<ref>) with G=.
The proof of Theorem <ref> is given in Sections <ref>–<ref> and makes use of technical computations collected in Appendices <ref>–<ref>.
The main properties of the key quantity
χ_(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
are collected in the following theorem (see Fig. <ref>).
[Properties of the variational formula]
For any d ≥ 2 the following hold:
(a) The infimum in (<ref>) may be restricted to the set
_^↓(V) = {p ∈(V) argmax p = ,
p is non-increasing in the distance to }.
(b) For every ϱ∈ (0,∞), the infimum in (<ref>) restricted to _^↓(V) is attained, every minimiser p is such that p>0 on V, and ∂ S_R = ∑_∂ B_R()p(x), R∈_0, satisfies
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ,
where B_R() is the ball of radius R centred at .
(c) The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with
lim_ϱ↓ 0χ_(ϱ) = d-1, lim_ϱ→∞χ_(ϱ) = d+1.
The proof of Theorem <ref> is given in Appendix <ref> (see Fig. <ref>).
§.§ Discussion
1.
Theorem <ref> identifies the scaling of the total mass up to and including terms that are exponential in t. The first two terms in the right-hand side of (<ref>) are the same as those of 1/t H(t) (recall (<ref>)). The third term is a correction that comes from the cost for X in the Feynman-Kac formula in (<ref>) to create an optimal local time profile somewhere in , which is captured by the minimiser(s) of the variational formula in (<ref>).
2.
For the quenched model on a rooted Galton-Watson tree we found in <cit.>, <cit.> that
1/tlog U(t) = ϱlog(ϱ t ϑ/loglog t)
- ϱ - χ(ϱ) +o(1), t →∞,
×-a.s.,
where is the law of the potential, is the law of , ϑ is the logarithm of the mean of the offspring distribution, and
χ_(ϱ) = inf_⊂χ_(ϱ)
with χ_(ϱ) given by (<ref>) and the infimum running over all subtrees of . This result was shown to be valid as soon as the offspring distribution has support in \{1} (i.e., all degrees are at least 3) and has a sufficiently thin tail. The extra terms in (<ref>) come from the cost for X in the Feynman-Kac formula in (<ref>) to travel in a time of order o(t) to an optimal finite subtree with an optimal profile of the potential, referred to as intermittent islands, located at a distance of order ϱ t/loglog t from , and to subsequently spend most of its time on that subtree. In this cost the parameter ϑ appears, which is absent in (<ref>). It was shown in <cit.> that if ϱ≥ 1/log (d_min+1), with d_min the minimum of the support of the offspring distribution, then the infimum in (<ref>) is attained at the unrooted regular tree with degree d_min+1, i.e., the minimal unrooted regular tree contained in , for which ϑ = log d_min. Possibly the bound on ϱ is redundant.
3. In view of Lemma <ref> and the fact that Assumption <ref> implies (<ref>), we see that the proof of Theorem <ref> amounts to showing that, on = (V,E),
lim_t→∞1/tlog_(^-ϱ t J_V(L_t)) = - χ_(ϱ).
We achieve this by deriving asymptotically matching upper and lower bounds. These bounds are obtained by truncating outside a ball of radius R, to obtain a finite tree _R, deriving the t→∞ asymptotics for finite R, and letting R→∞ afterwards. For the lower bound we can use the standard truncation technique based on killing X when it exits _R and applying the large deviation principle for the empirical distribution of Markov processes on finite graphs derived in <cit.>. For the upper bound, however, we cannot use the standard truncation technique based on periodisation of X beyond radius R, because is an expander graph (see <cit.> for a list of known techniques on ^d and ^d). Instead, we follow a route in which is approximated in successive stages by a version of _R with a specific boundary condition, based on monitoring X relative to its backbone to infinity. This route allows us to use the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, but we need the condition d ≥ 4 to control the specific boundary condition in the limit as R →∞ (see Remark <ref> for more details). The reason why the approximation of by finite subtrees is successful is precisely because in the parabolic Anderson model the total mass tends to concentrate on intermittent islands.
4. Theorem <ref> shows that, modulo translations, the optimal strategy for L_t as t→∞ is to be close to a minimiser of the variational formula in (<ref>) restricted to _^↓(V). Any minimiser is centred at , strictly positive everywhere, non-increasing in the distance to , and rapidly tending to zero. The following questions remain open:
(1)
Is the minimiser p unique modulo translation?
(2)
Does p(x) satisfy lim_|x| →∞ |x|^-1logp̅(x) = -∞, with |x| the distance between x and ?
(3)
Is p radially symmetric?
(4)
Is ϱ↦χ_(ϱ) analytic on (0,∞)?
We expect the answer to be yes for (1) and (2), and to be no for (3) and (4).
§ PROOF OF THE MAIN THEOREM: LOWER BOUND
In this section we prove the lower bound in Theorem <ref>, which is standard and straightforward. In Section <ref> we obtain a lower bound in terms of a variational formula by killing the random walk when it exits _R. In Section <ref> we derive the lower bound of the expansion by letting R→∞ in the variational formula.
§.§ Killing and lower variational formula
Fix R∈ℕ. Let _R be the subtree of =(V,E) consisting of all the vertices that are within distance R of the root and all the edges connecting them. Put V_R=V_R(_R) and E_R = E(_R). Let τ_R = inf{t ≥ 0 X_t ∉ V_R} denote the first time that X exits _R. It follows from (<ref>) that
⟨ U(t) ⟩≥_(exp[∑_x∈ V_R
H(ℓ_t(x))]1{τ_R>t}).
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≥^H(t) + o(t) _[^-ϱ t J_V(L_t)1{τ_R>t}]
with J_V the functional defined in (<ref>). As shown in <cit.> (see also <cit.>), the family of sub-probability distributions _(L_t ∈· , τ_R>t), t ≥ 0, satisfies the LDP on ^R(V) = {p ∈(V) supp(p) ⊂ V_R} with rate function I_E, with I_E the functional defined in (<ref>). This is the standard LDP for the empirical distribution of Markov processes. Therefore, by Varadhan's Lemma,
lim_t→∞1/tlog_[^-ϱ t J_V(L_t)1{τ_R>t}] = - χ^-_R(ϱ)
with
χ^-_R(ϱ) = inf_p ∈^R(V) [I_E(p) +ϱ J_V(p)],
where we use that p ↦ J_V(p) is bounded and continuous (in the discrete topology) on ^R(V). Note that
lim_t →∞1/tlog_(τ_R>t) = - inf_p∈^R(V) I_E(p) < 0,
which is non-zero because any p ∈^R(V) is non-constant on V. The expression in (<ref>) is the same as (<ref>) with G=, except that p is restricted to V_R.
§.§ Limit of the lower variational formula
Clearly, R ↦χ^-_R(ϱ) is non-increasing. To complete the proof of the lower bound in Theorem <ref>, it remains is to show the following.
lim sup_R→∞χ^-_R(ϱ) ≤χ_(ϱ).
Pick any p ∈(V) such that I_E(p)<∞ and J_V(p)<∞. Let p^ R be the projection of p onto V_R, i.e.,
p^ R(x) =
p(x), x ∈ int(V_R),
∑_y ≥ x p(y), x ∈ ∂ V_R,
where y ≥ x means that y is an element of the progeny of x in . Since p^ R∈^R(V), we have from (<ref>) that χ^-_R(ϱ) ≤ I_E(p^ R) + ϱ J_V(p^ R). Trivially, lim_R→∞ I_E(p^ R) = I_E(p) and lim_R→∞ J_V(p^ R) = J_V(p), and so we have lim sup_R→∞χ^-_R(ϱ) ≤ I_E(p) + ϱ J_V(p). Since this bound holds for arbitrary p ∈(V), the claim follows from (<ref>).
§ PROOF OF THE MAIN THEOREM: UPPER BOUND
In this section we prove the upper bound in Theorem <ref>, which is more laborious and requires a more delicate approach than the standard periodisation argument used on ^d . In Section <ref> we obtain an upper bound in terms of a variational formula on a version of _R with a specific boundary condition. The argument comes in four steps, encapsulated in Lemmas <ref>–<ref> below:
(I)
Condition on the backbone of X (Section <ref>).
(II)
Project X onto a concatenation of finite subtrees attached to this backbone that are rooted versions of _R (Section <ref>).
(III)
Periodise the projected X to obtain a Markov renewal process on a single finite subtree and show that the periodisation can be chosen such that the local times at the vertices on the boundary of the finite subtree are negligible (Section <ref>).
(IV)
Use the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.> to obtain a variational formula on a single subtree (Section <ref>).
In Section <ref> we derive the upper bound of the expansion by letting R→∞ in the variational formula.
§.§ Backbone, projection, periodisation and upper variational formula
§.§.§ Backbone
For r ∈_0, let τ_r be the last time when X visits ∂ B_r(), the boundary of the ball of radius r around . Then the sequence = (X_τ_r)_r ∈_0 forms the backbone of X, running from to infinity.
[Condition on a backbone]
For every backbone and every t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))])
= 𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = ).
By symmetry, the conditional expectation in the right-hand side does not depend on the choice of . Indeed, permutations of the edges away from the root do not affect the law of ∑_x∈ V() H(ℓ_t(x)).
Turn the one-sided backbone into a two-sided backbone by adding a second backbone from to infinity. By symmetry, the choice of this second backbone is arbitrary, say '. Redraw by representing ' ∪ as and representing the rest of as a sequence of rooted trees ^∗ = (^∗_x)_x ∈ hanging off (see Fig. <ref>). In ^∗_x, the root sits at x and has d-1 downward edges, while all lower vertices have d downward edges.
Let X^=(X^_t)_t ≥ 0 be the random walk on ^ and (ℓ^_t(x))_x ∈^ the local times of X^ at time t.
[Representation of as a backbone with rooted trees]
For every and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = )
= 𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞).
Simply redraw as ^.
Note that X^ is a Markov process whose sojourn times have distribution EXP(d+1) and whose steps are drawn uniformly at random from the d+1 edges that are incident to each vertex.
§.§.§ Projection
For R ∈\{1}, cut into slices of length R, i.e.,
= ∪_k∈ (z + (kR+I)), I={0,1,…,R-1},
where z is to be chosen later. Apply the following two maps to ^ (in the order presented):
(i)
For each k ∈, fold ^∗_z+(kR+(R-1)) onto ^∗_z+(k+1)R by folding the d-1 edges downwards from the root on top of the edge in connecting z+(kR+(R-1)) and z+(k+1)R, and putting the d infinite rooted trees hanging off each of these d-1 edges on top of the rooted tree ^*_z+(k+1)R hanging off z+(k+1)R. Note that each of the d infinite rooted trees is a copy of ^*_z+(k+1)R.
(ii)
For each k ∈ and m ∈{0,1,…,R-2}, cut off all the infinite subtrees trees in ^∗_z+(kR+m) whose roots are at depth (R-1)-m. Note that the total number of leaves after the cutting equals
(d-1) ∑_m=0^R-2 d^(R-2)-m = (d-1) d^R-2 (1-d^-(R-1))/(1-d^-1) = d^R-1 - 1,
which is the same as the total number of leaves of the rooted tree ^*_R of depth R-1 (i.e., with R generations) minus 1 (a fact we will need below).
By doing so we obtain a concatenation of finite units
_R=(_R[k])_k ∈
that are rooted trees of depth R-1 (see Fig. <ref>). Together with the two maps that turn ^ into _R, we apply two maps to X^:
(i)
All excursions of X^ in the infinite subtrees that are folded to the right and on top are projected accordingly.
(ii)
All excursions of X^ in the infinite subtrees that are cut off are replaced by a sojourn of X^_R in the tadpoles that replace these subtrees (see Fig. <ref>)
The resulting path, which we call X^_R = (X^_R_t)_t ≥ 0, is a Markov renewal process with the following properties:
* The sojourn times in all the vertices that are not tadpoles have distribution EXP(d+1).
* The sojourn times in all the tadpoles have distribution ψ, defined as the conditional distribution of the return time τ of the random walk on the infinite rooted tree ^* given that τ<∞ (see <cit.> for a proper definition).
* The transitions into the tadpoles have probability d/d+1, the transitions out of the tadpoles have probability 1 (because of the condition X^_∞ = + ∞).
* The transitions from z + (kR+(R-1)) to z+(k+1)R have probability d/d+1, while the reverse transitions have probability 1/d+1.
Write (ℓ^ _R_t(x))_x ∈ V__R to denote the local times of X^_R at time t.
[Projection onto a concatenation of finite subtrees]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞)
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
| X^_R_∞ = + ∞).
The maps that are applied to turn X^ into X^_R are such that local times are stacked on top of each other. Since H defined in (<ref>) is convex and H(0)=0, we have H(ℓ) + H(ℓ') ≤ H(ℓ+ℓ') for all ℓ,ℓ' ∈_0, which implies the inequality.
§.§.§ Periodisation
Our next observation is that the condition {X^_R_∞ = + ∞} is redundant.
[Condition redundant]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] | X^_R_∞ = + ∞)
= 𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] ).
The event {X^_R_∞ = + ∞} has probability 1 because on the edges connecting the units of _R (see Fig. <ref>) there is a drift downwards. To see why, note that 1/(d+1) < 1/2 < d/(d+1) because d ≥ 2, and use that a one-dimensional random walk with drift is transient to the right <cit.>.
Since _R is periodic, we can fold X^_R onto a single unit _R, to obtain a Markov renewal process X^_R on _R (see Fig. <ref>) in which the transition from the top vertex to the right-most bottom vertex has probability 1/d+1, while the reverse transition has probability d/d+1. Clearly, the sojourn time distributions are not affected by the folding and therefore remain as above. Write (ℓ^ _R_t(x))_x ∈ V(_R) to denote the local times of X^_R at time t.
[Periodisation to a single finite subtree]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]).
The periodisation again stacks local time on top of each other.
Before we proceed we make a crucial observation, namely, we may still choose the shift z ∈{0,1,…,R-1} of the cuts of the two-sided backbone (recall Fig. <ref>). We will do so in such a way that the local time up to time t spent in the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of a unit in _R
= all vertices marked by ∙ in Fig. <ref>
is at most t/R. After the periodisation these vertices are mapped to the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of _R
= all vertices marked by ∙ in Fig. <ref>.
[Control on the time spent at the boundary]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
1_{1/t∑_x ∈∂_ _Rℓ^_R_t(x) ≤ 1/R}).
For different z the sets of vertices making up ∂_R correspond to disjoint sets of vertices in ^ (see Fig. <ref>). Since ∑_x ∈^ℓ^_t(x) = t for all t ≥ 0, it follows that there exists a z for which ∑_x ∈∂_Rℓ^_t(x) ≤ t/R. Therefore the upper bound in Lemma <ref> can be strengthened to the one that is claimed.
§.§.§ Upper variational formula
Lemmas <ref>–<ref> provide us with an upper bound for the average total mass (recall ((<ref>)) on the infinite tree in terms of the same quantity on the finite tree-like unit _R with a specific boundary condition. Along the way we have paid a price: the sojourn times in the tadpoles are no longer exponentially distributed, and the transition probabilities into and out of the tadpoles and between the top vertex and the right-most bottom vertex are biased. We therefore need the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.>, which we can now apply to the upper bound.
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≤^H(t) + o(t) 𝔼_𝒪(^-ϱ t J_V(_R)(L^ _R_t)
1_{L^_R_t(∂_ _R) ≤ 1/R})
with J_V the functional defined in (<ref>). The following lemma controls the expectation in the right-hand side.
[Scaling of the key expectation]
For every R ∈\{1},
lim_t→∞1/tlog_(^-ϱ t J_V(_R)(L^_R_t) 1_{L^_R_t(∂_ _R) ≤ 1/R}) = - χ^+_R(ϱ),
where
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)},
with
I^†_E(_R)(p) = inf_β∈ (0,∞)inf_q ∈(V(_R))[K(β q) + K(p |β q)],
where
K(β q) = sup_q̂∈(V(_R))∑_x ∈ V(_R)β q(x) log(q̂(x)/∑_y ∈ V(_R)π_x,yq̂(y)),
K(p |β q) = ∑_x ∈ V(_R)β q(x) (λ_x)(p(x)/(β q(x))),
with
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ,
where ψ_x=ψ when x is a tadpole, ψ_x = EXP(d+1) when x is not a tadpole, and π_x,y is the transition kernel of the discrete-time Markov chain on V(_R) embedded in X^_R.
Apply the large deviation principle derived in <cit.>, which we recall in Proposition <ref> in Appendix <ref>.
The expression in (<ref>) is similar to (<ref>) with G=_R, except that the rate function I_E(_R) in (<ref>) is more involved than the rate function I_E in (<ref>).
§.§ Limit of the upper variational formula
The prefactor ^H(t)+o(1) in Lemma <ref> accounts for the terms ϱlog(ϱ t)-ϱ in the right-hand side of (<ref>) (recall <ref>). In view of Lemma <ref>, in order to complete the proof of the upper bound in Theorem <ref> it suffices to prove the following lemma.
For any d ≥ 4, lim inf_R→∞χ^+_R(ϱ) ≥χ_(ϱ).
The proof is given in Appendix <ref> and relies on two steps:
* Show that, for d ≥ 4,
I^†_E(_R)(p) ≥ I^+_E(_R)(p) + O(1/R)
with I^+_E(_R) a rate function similar to the standard rate function I_E(_R) given by (<ref>).
* Show that, for d ≥ 2,
χ^ +_R(ϱ) = inf_p ∈(V(_R))p(∂_ _R) ≤ 1/R{I^+_E(_R)(p) + ϱ J_V(_R)(p)}
satisfies
lim inf_R→∞χ^ +_R(ϱ) ≥χ_(ϱ).
§ LARGE DEVIATION PRINCIPLE FOR THE LOCAL TIMES OF MARKOV RENEWAL PROCESSES
The following LDP, which was used in the proof of Lemma <ref>, was derived in <cit.>, and generalises the LDP for the empirical distribution of a Markov process on a finite state space derived in <cit.>. See <cit.> for the definition of the LDP.
Let Y=(Y_t)_t ≥ 0 be the Markov renewal process on the finite graph G=(V,E) with transition kernel (π_x,y)_{x,y}∈ E and with sojourn times whose distributions (ψ_x)_x ∈ V have support (0,∞). For t > 0, let L_t^Y denote the empirical distribution of Y at time t (see (<ref>)). Then the family (ℙ(L^Y_t ∈·))_t>0 satisfies the LDP on 𝒫(V) with rate t and with rate function I^†_E given by
I^†_E(p) = inf_β∈ (0,∞)inf_q ∈(V)[K(β q) + K(p |β q)]
with
K(β q) = sup_q̂∈(V)∑_x ∈ Vβ q(x) log(q̂(x)/∑_y∈ Vπ_x,yq̂(y)),
K(p |β q ) = ∑_x ∈ Vβ q(x) (λ_x)(p(x)/(β q(x))),
where
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ.
The rate function I^†_E consists of two parts: K in (<ref>) is the rate function of the LDP on (V) for the empirical distribution of the discrete-time Markov chain on V with transition kernel (π_x,y)_{x,y}∈ E (see <cit.>), while K in (<ref>) is the rate function of the LDP on (0,∞) for the empirical mean of the sojourn times, given the empirical distribution of the discrete-time Markov chain. Moreover, λ_x is the cumulant generating function associated with ψ_x, and λ_x is the Legendre transform of λ_x, playing the role of the Cramér rate function for the empirical mean of the i.i.d. sojourn times at x. The parameter β plays the role of the ratio between the continuous time scale and the discrete time scale.
§ SOJOURN TIMES: CUMULANT GENERATING FUNCTIONS AND LEGENDRE TRANFORMS
In Appendix <ref> we recall general properties of cumulant generating functions and Legendre transforms, in Appendices <ref> and <ref> we identify both for the two sojourn time distributions arising in Lemma <ref>, respectively.
§.§ General observations
Let λ be the cumulant generating function of a non-degenerate sojourn time distribution ϕ, and λ be the Legendre transform of λ (recall (<ref>)). Both λ and λ are strictly convex, are analytic in the interior of their domain, and achieve a unique zero at θ = 0, respectively, α=α_c with α_c= ∫_0^∞τϕ(τ). Furthermore, λ diverges at some θ_c ∈ (0,∞] and has slope α_c at θ=0. Moreover, if the slope of λ diverges at θ_c, then λ is finite on (0,∞).
The supremum in the Legendre transform defining (λ)(α) is uniquely taken at θ=θ(α) solving the equation
λ'(θ(α)) = α.
The tangent of λ with slope α at θ(α) intersects the vertical axis at (-λ)(α), i.e., putting
μ(α) = λ(θ(α))
we have
μ(α) = α (λ)'(α)-(λ)(α).
(See Fig. <ref>.) Note that by differentiating (<ref>) we get
μ'(α) = α(λ)”(α),
which shows that α↦μ(α) is strictly increasing and hence invertible, with inverse function μ^-1.
Note that by differentiating the relation (λ)(α) = αθ(α)-λ(θ(α)) we get
(λ)'(α) = θ(α).
A further relation that is useful reads
(λ)' ∘μ^-1 = λ^-1,
which follows because μ = λ∘θ by (<ref>) and (λ)' = θ by (<ref>).
§.§ Exponential sojourn time
If ϕ=EXP(d+1), then the cumulant generating function λ(θ) = log∫_0^∞^θτψ(τ) is given by
λ(θ) =
log(d+1d+1-θ), θ < d+1,
∞, θ≥ d+1.
To find (λ)(α), we compute
∂/∂θ[αθ - log(d+1d+1 - θ)] = α - 1/d+1-θ,
∂^2/∂θ^2[αθ - log(dd+1-θ)] = - 1/(d+1-θ)^2 < 0.
Hence the supremum in (<ref>) is uniquely taken at
θ(α) = d+1 - 1α, α > 0,
so that
(λ)(α) = α (d+1) -1 - log[α (d+1)], α>0.
Thus, λ and λ have the shape in Fig. <ref>, with θ_c = d+1 and α_c = 1/d+1, and with lim_θ↑θ_cλ(θ) = ∞ and lim_θ↑θ_cλ'(θ) = ∞.
Note that μ has domain (0,∞) and range ℝ.
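Both formulas are easy to confirm numerically. The following sketch (illustrative only; the value d=4, the θ-grid and the α-values are arbitrary choices, not part of the original text) maximises αθ-λ(θ) over a fine grid and compares the result with the closed-form expression α(d+1)-1-log[α(d+1)].

import math

d = 4                                                  # illustrative choice of d
lam = lambda th: math.log((d + 1) / (d + 1 - th))      # cumulant generating function of EXP(d+1)

def legendre_numeric(alpha, grid=100000):
    # maximise alpha*theta - lam(theta) over a grid of theta-values in (-20, d+1)
    best = -float("inf")
    for i in range(1, grid):
        th = -20.0 + (20.0 + d + 1) * i / grid - 1e-9
        best = max(best, alpha * th - lam(th))
    return best

for alpha in (0.05, 1 / (d + 1), 1.0, 3.0):
    closed = alpha * (d + 1) - 1 - math.log(alpha * (d + 1))
    print(alpha, round(legendre_numeric(alpha), 4), round(closed, 4))

The two columns agree (up to the grid resolution), and the value vanishes at α = 1/(d+1) = α_c, as it should.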
§.§ Non-exponential sojourn time
For ϕ=ψ the computations are more involved. Let ^*=(E,V) be the infinite rooted regular tree of degree d+1. Write for the root. Let X = (X_n)_n ∈_0 be the discrete-time simple random walk on ^*=(E,V) starting from . Write τ_ to denote the time of the first return of X to . Define r = ℙ_(τ_<∞). It is easy to compute r by projecting X on _0: r is the return probability to the origin of the random walk on _0 that jumps to the right with probability p = d/(d+1) and to the left with probability q = 1/(d+1), which equals q/p (see <cit.>). Thus, r= 1/d.
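The value r = 1/d can also be confirmed by a short Monte Carlo simulation of the projected walk on the nonnegative integers (an illustrative sketch; the trial count and the early-exit threshold are arbitrary choices, and they introduce only a negligible truncation bias).

import random

def estimated_return_probability(d, trials=20000):
    # biased walk on {0,1,2,...}: the first step goes from 0 to 1; from k >= 1 it
    # moves right with probability d/(d+1) and left with probability 1/(d+1)
    p_right = d / (d + 1)
    returns = 0
    for _ in range(trials):
        pos = 1
        while 0 < pos < 60:            # from distance 60 the walk essentially never returns
            pos += 1 if random.random() < p_right else -1
        returns += (pos == 0)
    return returns / trials

for d in (2, 3, 4):
    print(d, round(estimated_return_probability(d), 3), "vs 1/d =", round(1 / d, 3))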
For y ∈^*, define h_y = ℙ_y(τ_ <∞). Then h_y can be explicitly calculated, namely,
h_y =
d^-|y|, y∈^*∖{},
1, y= .
Note that h is a harmonic function on ^* ∖, i.e., h_y = ∑_z∈^*π_y,z h_z, y∈^*∖. We can therefore consider the Doob-transform of X, which is the random walk with transition probabilities away from the root given by
σ̌_y,z =
d/d+1, z=y^↑,
1/d1/d+1, z≠ y^↑, {y,z}∈ E,
0, else,
y ∈^*∖{},
and transition probabilities from the root are given by
σ̌_,z =
1/d, {,z}∈ E,
0, else.
Thus, the Doob-transform reverses the upward and the downward drift of X.
Recall from Lemma <ref> that ψ is the distribution of τ_ conditional on {τ_<∞} and on X leaving at time 0.
Let λ(θ) = log∫_0^∞^θτψ(τ). Then
^λ(θ)
= (d+1-θ)/2 [1- √(1- 4d/(d+1-θ)^2) ], θ∈ (-∞,θ_c],
∞, else,
with θ_c = (√(d)-1)^2. The range of exp∘λ is (0,√(d) ], with the maximal value is uniquely taken at θ=θ_c.
To compute the moment-generating function of τ_, we consider the Doob-transform of X and its projection onto ℕ_0. Let p_2k = P(τ_ = 2k). It is well-known that (see <cit.>)
G^p,q(s) = (s^τ_|τ_ <∞) = ∑_k ∈ s^2k p_2k = 1/2p[1- √(1-4pqs^2)], |s| ≤ 1.
Therefore we have
^λ(θ) = (^θτ_)
= ∑_k ∈ p_2k [(^θ EXP(d+1))]^2k-1
= ∑_k ∈ p_2k(d+1/d+1 - θ)^2k-1
= (d+1 -θ/d+1) G^p,q(s)
with
p = 1d+1, q = dd+1, s = d+1/d+1-θ.
Inserting (<ref>) into (<ref>), we get the formula for λ(θ). From the term in the square root we see that λ(θ) is finite if and only if θ≤θ_c = d+1-2√(d) = (√(d)-1)^2.
There is no easy closed form expression for (λ)(α), but it is easily checked that λ and λ have the shape in Fig. <ref>, with θ_c = (√(d)-1)^2 and α_c = ∫_0^∞τψ(τ)<∞, and with λ(θ_c) = log√(d)<∞ and λ'(θ_c)=∞, i.e., there is a cusp at the threshold θ_c, implying that λ is finite on (0,∞). It follows from (<ref>) that
lim_α→∞1/α (λ)(α) = lim_α→∞θ(α) = θ_c.
The function λ^-1∘log = (exp∘λ)^-1 is given by
(exp∘λ)^-1(β) = d+1 - β -d/β, β∈ (0,√(d) ].
The range of (exp∘λ)^-1 is (-∞,θ_c], with the maximal value θ_c uniquely taken at β = √(d).
We need to invert exp∘λ in (<ref>). Abbreviate χ = d+1-θ/2. Then
β = χ[1-√(1-d/χ^2) ] ⟹ χ = β^2+d/2β ⟹ θ = d+1 - β^2 + d/β.
Note that (√(d),∞) is not part of the domain of (exp∘λ)^-1, even though the right-hand side of (<ref>) still makes sense (as a second branch). Note that μ has domain (0,∞) and range (-∞,log√(d) ] (see Fig. <ref>).
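The inversion can be verified numerically as well: composing θ ↦ e^{λ(θ)} from the lemma above with β ↦ d+1-β-d/β returns θ for every θ ≤ θ_c. A minimal sketch (the value d=4 and the θ-values are illustrative choices):

import math

d = 4
theta_c = (math.sqrt(d) - 1) ** 2

def exp_lambda(theta):
    # e^{lambda(theta)} for the sojourn-time distribution psi, valid for theta <= theta_c
    chi = (d + 1 - theta) / 2.0
    return chi * (1.0 - math.sqrt(1.0 - d / chi ** 2))

for theta in (-3.0, -1.0, 0.0, 0.5, theta_c):
    beta = exp_lambda(theta)
    print(round(theta, 4), round(beta, 4), round(d + 1 - beta - d / beta, 4))   # last column = theta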
§ ANALYSIS OF THE VARIATIONAL PROBLEM ON THE INFINITE REGULAR TREE
In this appendix we prove Theorem <ref>. Appendix <ref> formulates two theorems that imply Theorem <ref>, Appendix <ref> provides the proof of these theorems. Recall the definition of (V), I_E(p) and J_V(p) from (<ref>). Set
χ_(ϱ) = inf_p ∈_(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
where _(V) = {p ∈(V) argmax p = }. Since (V), I_E and J_V are invariant under translations, the centering at is harmless.
§.§ Two properties
For every ϱ∈ (0,∞) the infimum in (<ref>) is attained, and every minimiser p is strictly positive, non-increasing in the distance to the root, and such that
∑_R∈_0∂ S_R log (R+1) ≤ (d+1)/ϱ,
∂ S_R = ∑_x ∈∂ B_R()p(x),
where B_R() is the ball of radius R around .
The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with lim_ϱ↓ 0χ_(ϱ) = d-1 and lim_ϱ→∞χ_(ϱ) = d+1.
Theorems <ref>–<ref> settle Theorem <ref>. Their proof uses the following two lemmas.
For every ϱ∈ (0,∞), the infimum in (<ref>) may be restricted to p ∈_(V) such that J_V(p) ≤ (d+1)/ϱ.
Let δ_∈_(V) denote the point measure at . Then, for all ϱ∈ (0,∞),
χ_(ϱ) ≤ I_E(δ_) + ϱ J_V(δ_) = (d+1) + ϱ× 0 = d+1.
Since I_V ≥ 0, we may restrict the infimum in (<ref>) to p with J_V(p) ≤d+1/ϱ.
For every ϱ∈ (0,∞), there exists a c(ϱ) >0 such that the infimum in (<ref>) may be restricted to p∈𝒫_(V) such that J_V(p) ≥ c(ϱ).
Since J_V(p) = 0 if and only if p = δ_ is a point measure, it suffices to show that δ_ is not a minimiser of χ_(ϱ). To that end, for y ∈ V compute
∂/∂ p(y)[I_E(p) + ϱ J_V(p)] = 1 - ∑_z∼ y√(p(z)/p(y)) - ϱlog p(y) -ϱ.
Because p()>0, it follows that the right-hand side tends to -∞ as p(y) ↓ 0 for every y ∼. Hence, no p ∈_(V) with p(y) = 0 for some y ∼ can be a minimiser of (<ref>), or be the weak limit point of a minimising sequence. In particular, δ_ cannot.
§.§ Proof of the two properties
First observe that (V) and J_V are invariant under permutations, i.e., for any p ∈(V) and any relabelling π of the vertices in V, we have π p ∈(V) and J_V(π p)=J_V(p). The same does not hold for I_E, but we can apply permutations such that I_E(π p) ≤ I_E(p).
1.
Pick any p ∈(V). Pick any backbone = {x_0, x_1,⋯} that runs from x_0 = to infinity. Consider a permutation π that reorders the vertices in such that {(π p)(x)}_x ∈ becomes non-increasing. Together with the reordering, transport all the trees that hang off as well. Since π p is non-increasing along , while all the edges that do not lie on have the same neighbouring values in p and in π p, we have
I_E(π p) ≤ I_E(p).
Indeed,
1/2 [I_E(p) - I_E(π p)] = ∑_k ∈_0√((π p)(x_k) (π p)(x_k+1))
- ∑_k ∈_0√(p(x_k)p(x_k+1)),
where we use that p(x_0) = (π p)(x_0) (because p(x_0) ≥ p(x_k) for all k∈) and ∑_k∈ p(x_k) = ∑_k∈ (π p)(x_k). The right-hand side of (<ref>) is ≥ 0 by the rearrangement inequality for sums of products of two sequences <cit.>. In fact, strict inequality in (<ref>) holds unless p is constant along . But this is impossible because it would imply that p() = 0 and hence p(x) = 0 for all x ∈ V. Thus, p and being arbitrary, it follows from (<ref>) that any minimiser or minimising sequence must be non-increasing in the distance to . Indeed, if it were not, then there would be a along which the reordering would lead to a lower value of I_E+ϱ J_V. Hence we may replace (<ref>) by
χ_(ϱ) = inf_p ∈_^↓(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
with _^↓(V) defined in (<ref>).
2.
Let p ∈_^↓(V). Estimate
J_V(p) = ∑_R ∈_0∑_x ∈∂ B_R() [-p(x)log p(x)]
≥∑_R ∈_0∑_x ∈∂ B_R()[-p(x)log(1/(R+1))],
where we use that p(x) ≤ 1/(R+1) for all x ∈∂ B_R(). Hence
J_V(p) ≥∑_R ∈_0∂ S_R log(R+1)
with ∂ S_R = ∑_x ∈∂ B_R() p(x). By Lemma <ref>, J_V(p) ≤d+1/ϱ, and so
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ.
The computation in (<ref>) shows that any p for which there exist z ∼ y with p(z)>0 and p(y)=0 cannot be minimiser nor a weak limit point of a minimising sequence. Hence all minimisers or weak limit points of minimising sequences are strictly positive everywhere.
3.
Take any minimising sequence (p_n)_n∈ of (<ref>). By (<ref>), lim_R→∞∑_x ∉ B_R() p_n(x) = 0 uniformly in n∈, and so (p_n)_n∈ is tight. By Prokhorov's theorem, tightness is equivalent to (p_n)_n∈ being relatively compact, i.e., there is a subsequence (p_n_k)_k∈ that converges weakly to a limit p∈_^↓(V). By Fatou's lemma, we have lim inf_k→∞ I_E(p_n_k) ≥ I_E(p) and lim inf_k→∞ J_V(p_n_k) ≥ J_V(p). Hence
χ_(ϱ) = lim_k →∞ [I_E(p_n_k) + ϱ J_V(p_n_k)] ≥ I_E(p) + ϱ J_V(p).
Hence p is a minimiser of (<ref>).
The proof uses approximation arguments.
1.
We first show that ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz. Pick ϱ_1 < ϱ_2. Let p̅_ϱ_1 be any minimiser of (<ref>) at ϱ_1, i.e.,
χ_(ϱ_1) = I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1).
Estimate
[I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1)]
= [I_E(p̅_ϱ_1) + ϱ_2 J_V(p̅_ϱ_1)] - (ϱ_2 - ϱ_1)J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) (d+1)/ϱ_1,
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≤ (ϱ_2-ϱ_1) (d+1)/ϱ_1.
Similarly, let p̅_ϱ_2 be any minimiser of (<ref>) at ϱ_2, i.e.,
χ_(ϱ_2) = I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2).
Estimate
[I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2)]
= [I_E(p̅_ϱ_2) + ϱ_1 J_V(p̅_ϱ_2)] + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) c(ϱ_2),
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≥ c(ϱ_2)(ϱ_2 - ϱ_1).
2.
Because χ_(ϱ) ≤ d+1 for all ϱ∈ (0,∞), it follows that lim_ϱ→∞χ_(ϱ) ≤ d+1. To obtain the reverse inequality, let p_ϱ be any minimiser of (<ref>) at ϱ. By Lemma <ref>, we may assume that J_V(p_ϱ) ≤d+1/ϱ. Hence lim_ϱ→∞ J_V(p_ϱ) = 0, and consequently lim_ϱ→∞p_ϱ= δ_ weakly. Therefore, by Fatou's lemma, lim_ϱ→∞χ_(ϱ) = lim_ϱ→∞ [I_E(p_ϱ) + ϱ J_V(p_ϱ)] ≥lim inf_ϱ→∞ I_E(p_ϱ) ≥ I_E(δ_) = d+1.
3.
To prove that lim_ϱ↓ 0χ_(ϱ) ≤ d-1, estimate
χ_(ϱ) ≤inf_p ∈_^↓(V)(p) ⊆ B_R() [I_E(p)+ϱ J_V(p)],
R ∈_0.
Because
sup_p ∈_^↓(V)(p) ⊆ B_R() J_V(p) = J_V(p_R) = log |B_R()|,
R ∈_0,
with
p_R(x) =
|B_R()|^-1, x ∈ B_R(),
0, else,
it follows that
lim_ϱ↓ 0χ_(ϱ)
≤inf_p ∈_^↓(V)(p) ⊆ B_R() I_E(p)
≤ I_E(p_R), R ∈_0.
Compute (recall (<ref>)),
I_E(p_R) = |∂ B_R+1()|/|B_R()|, R ∈_0.
Inserting the relations
|∂ B_R()| = {[ 1, R=0,; (d+1)d^R-1, R ∈, ].
|B_R()| = ∑_R'=0^R |∂ B_R'()| = 1 + d+1/d-1(d^R-1),
R ∈_0,
we get
I_E(p_R) = (d-1) (d+1)d^R/((d+1)d^R-2).
Hence lim_R→∞ I_E(p_R) = d-1, and so lim_ϱ↓ 0χ_(ϱ) ≤ d-1.
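The closed-form expression and its limit are easy to confirm with a few lines of arithmetic (an illustrative sketch; d is an arbitrary choice). The second printed column is the equivalent form d-1+2/|B_R()|, which follows from the displayed formula by elementary algebra.

d = 4
for R in (1, 2, 5, 10, 20):
    ball = 1 + (d + 1) * (d ** R - 1) // (d - 1)                    # |B_R()|
    closed = (d - 1) * (d + 1) * d ** R / ((d + 1) * d ** R - 2)    # I_E(p_R)
    print(R, round(closed, 6), round(d - 1 + 2 / ball, 6))
print("limit:", d - 1)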
4.
To prove that lim_ϱ↓ 0χ_(ϱ) ≥ d-1, note that because J_V ≥ 0 we can estimate
lim_ϱ↓ 0χ_(ϱ) ≥inf_p ∈_^↓(V) I_E(p).
It therefore suffices to show that
inf_p ∈_^↓(V) I_E(p) ≥ d-1,
i.e., (p_R)_R ∈_0 is a minimising sequence of the infimum in the left-hand side. The proof goes as follows. Write (recall (<ref>))
I_E(p) = 1/2 ∑_x,y ∈ Vx ∼ y(√(p(x)) - √(p(y)) )^2
= 1/2 ∑_x,y ∈ Vx ∼ y[p(x) + p(y) - 2 √(p(x)p(y)) ]
= (d+1) - ∑_x,y ∈ Vx ∼ y√(p(x)p(y)).
Since is a tree, each edge can be labelled by the end-vertex that is farthest from . Hence the sum in the right-hand side can be written as
∑_x ∈ V ∖ 2√(p(x)p(x^↓)),
where x^↓ is the unique neighbour of x that is closer to than x. Since 2√(p(x)p(x^↓))≤ p(x) + p(x^↓), it follows that
∑_x ∈ V ∖ 2√(p(x)p(x^↓))≤∑_x ∈ V ∖ p(x) + ∑_x ∈ V ∖ p(x^↓)
= [1-p()] + 1.
Therefore
I_E(p) ≥ d - 1 + p(),
which settles the claim.
§ LARGE DEVIATION ESTIMATE FOR THE LOCAL TIME AWAY FROM THE BACKBONE
In this appendix we derive a large deviation principle for the total local times at successive depths of the random walk on ^ (see Fig. <ref>). This large deviation principle is not actually needed, but serves as a warm up for the more elaborate computations in Appendix <ref>.
For k∈_0, let V_k be the set of vertices in ^ that are at distance k from the backbone (see Fig. <ref>). For R ∈, define
[ ℓ^R_t(k) = ∑_x ∈ V_kℓ^_t(x), k = 0,1,…,R,; ℓ_t^R = ∑_k > R∑_x∈ V_kℓ^_t(x), k= R+1, ]
and
L_t^R = 1/t ((ℓ_t(k))_k=0^R, ℓ^R_t).
Abbreviate V^*_R = {0,1,…,R,R+1}.
For every R ∈, (L_t^R)_t ≥ 0 satisfies the large deviation principle on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = [√((d-1)p(0))-√(dp(1)) ]^2 + ∑_k=1^R-1[√(p(k))-√(dp(k+1)) ]^2
+ [√(p(R)+p(R+1)) - √(dp(R+1)) ]^2.
By monitoring the random walk on the tree in Fig. <ref> and projecting its depth on the vertices 0,1,…,R, respectively, R+1, we can apply the LDP in Proposition <ref> (see Fig. <ref>).
1.
The sojourn times have distribution EXP(d+1) at vertices k=0,1,…,R and distribution ψ at vertex k=R+1. The transition probabilities are
[ π_0,0 = 2/(d+1), π_0,1 = (d-1)/(d+1),; π_k,k+1 = 1/(d+1), π_k,k-1 = d/(d+1), k = 1,…,R,; π_R+1,R = 1. ]
Proposition <ref> therefore yields that (L_t^R)_t ≥ 0 satisfies the LDP on on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = (d+1) ∑_k=0^R p(k) + inf_v V^*_R → (0,∞)sup_u V^*_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C,
where
A = ∑_k=1^R v(x) {1+log(du(k-1)+u(k+1)/u(k) p(k)/v(k))},
B = v(0) {1+log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0))},
C = v(R+1) {log(u(R)/u(R+1))-(λ)(p(R+1)/v(R+1))}.
Here we use (<ref>) to compute A and B, and for C we recall that λ is the Legendre transform of the cumulant generation function λ of ψ computed in Lemma <ref>.
2.
We compute the infimum of L(u,v) over v for fixed u.
∙ For k=1,…,R,
∂ A/∂ v(k) = log(du(k-1)+u(k+1)/u(k) p(k)/v(k)),
⟹v̅_u(k) = p(k) du(k-1)+u(k+1)/u(k).
The second derivative is 1/v(k)>0.
∙ For k=0,
∂ B/∂ v(0) = log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0)),
⟹v̅_u(0) = p(0) 2u(0)+(d-1)u(1)/u(0).
The second derivative is 1/v(0)>0.
∙ For k=R+1, the computation is more delicate. Define (recall (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(R+1)/u(R) ≤√(d). Compute
∂ C/∂ v(R+1) = μ(p(R+1)/v(R+1)) - log(u(R+1)/u(R)),
⟹v̅(R+1) = p(R+1)/α_u(R+1)
with α_u(R+1) solving the equation
log(u(R+1)/u(R)) = μ(α_u(R+1)).
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(R+1) = μ^-1(log(u(R+1)/u(R))).
Putting (<ref>)–(<ref>) together, we get
L(u) = inf_v V^*_R → (0,∞) L(u,v)
= - ∑_k=1^R A_u(k) - B_u + C_u
with
A_u(k) = du(k-1)+u(k+1)/u(k) p(k), k = 1,…,R,
B_u = 2u(0)+(d-1)u(1)/u(0) p(0),
and
C_u = p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - log(u(R+1)/u(R))]
= p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - μ(α_u(R+1))]
= p(R+1) (λ)^'(α_u(R+1))
= p(R+1) ((λ)^'∘μ^-1)(log(u(R+1)/u(R))).
In (<ref>) in Appendix <ref> we showed that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we showed that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], C_u(R+1) is only defined when u(R+1)/u(R) ≤√(d), in which case
C_u = p(R+1) S(u(R+1)/u(R)).
▸ u(R+1)/u(R) > √(d). In this case ∂ C/∂ v(R+1)>0, the infimum is taken at v̅(R+1)=0, and hence (recall (<ref>))
C_u = p(R+1) (√(d)-1)^2 = p(R+1) S(√(d)).
Note that the right-hand side does not depend on u. The expressions in (<ref>)–(<ref>) can be summarised as
C_u = p(R+1) S(√(d)∧u(R+1)/u(R)).
3.
Next we compute the supremum over u of
L(u) = L(u,v̅_u) = - A_u - B_u + C_u.
with A_u = ∑_k=1^R A_u(k). We only write down the derivatives that are non-zero.
∙ For k=2,…,R-1,
- ∂ A_u/∂ u(k) = - p(k+1) d/u(k+1) - p(k-1) 1/u(k-1) + p(k) du(k-1)+u(k+1)/u(k)^2.
∙ For k=1,
- ∂ A_u/∂ u(1) = - p(2) d/u(2) + p(1) du(0)+u(2)/u(1)^2,
- ∂ B_u/∂ u(1) = - p(0) d-1/u(0).
∙ For k=R,
- ∂ A_u/∂ u(R) = - p(R-1) 1/u(R-1) + p(R) du(R-1)+u(R+1)/u(R)^2,
∂ C_u/∂ u(R) = p(R+1) [u(R+1)/u(R)^2 - d/u(R+1)]
1_{u(R+1)/u(R)≤√(d)}.
∙ For k=0,
-∂ A_u/∂ u(0) = - p(1) d/u(1),
-∂ B_u/∂ u(0) = p(0) (d-1)u(1)/u(0)^2.
∙ For k=R+1,
-∂ A_u/∂ u(R+1) = - p(R) 1/u(R),
∂ C_u/∂ u(R+1) = p(R+1) [-1/u(R) + du(R)/u(R+1)^2]
1_{u(R+1)/u(R)≤√(d)}.
All the first derivatives of A_u+B_u+C_u are zero when we choose
u̅(0) = √((d-1)p(0)), u̅(k) = √(d^kp(k)), k = 1,…,R,
u̅(R+1) = √(d^R+1 p(R)p(R+1)/p(R)+p(R+1)).
All the second derivatives are strictly negative, and so u̅ is the unique maximiser.
4.
Inserting (<ref>) into (<ref>), we get
L(u̅) = L(u̅,v̅_u̅) = - ∑_k=2^R-1 A_u̅(k)
- [A_u̅(1) + B_u̅] - A_u̅(R) + C_u̅
= -∑_k=2^R-1√(dp(k)) [√(p(k-1)) + √(p(k+1)) ]
- [2√(d(d-1)p(0)p(1)) + 2p(0) + √(dp(1)p(2)) ]
- [√(dp(R-1)p(R)) + √(p(R)/p(R)+p(R+1)) √(dp(R)p(R+1)) ]
+ p(R+1) S(√(dp(R+1)/p(R)+p(R+1)) ).
Recalling (<ref>), (<ref>) and (<ref>), and rearranging terms, we find the expression in (<ref>).
Note that I^†_R has a unique zero at p given by
p(0) = 1/2, p(k) = 1/2 (d-1)d^-k, k = 1,…,R, p(R+1) = 1/2 d^-R.
This shows that the fraction of the local time typically spent a distance k away from the backbone decays exponentially fast in k.
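A direct check that this profile is indeed a zero of I^†_R (an illustrative sketch; d and R are arbitrary choices): it is a probability vector, and every square in the expression for I^†_R above vanishes.

import math

d, R = 4, 6
p = {0: 0.5}
for k in range(1, R + 1):
    p[k] = 0.5 * (d - 1) * d ** (-k)
p[R + 1] = 0.5 * d ** (-R)

total = sum(p.values())
rate = (math.sqrt((d - 1) * p[0]) - math.sqrt(d * p[1])) ** 2
rate += sum((math.sqrt(p[k]) - math.sqrt(d * p[k + 1])) ** 2 for k in range(1, R))
rate += (math.sqrt(p[R] + p[R + 1]) - math.sqrt(d * p[R + 1])) ** 2
print(round(total, 12), round(rate, 12))    # expected output: 1.0 0.0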
§ ANALYSIS OF THE UPPER VARIATIONAL FORMULA
In this appendix we carry out the proof of the claims in Section <ref>, namely, we settle (<ref>) in Appendix <ref> and (<ref>) in Appendix <ref>. The computations carried out in Appendix <ref> guide us along the way.
§.§ Identification of the rate function for the local times on the truncated tree
To identify the rate function I^†_E(_R) in Lemma <ref>, we need to work out the two infima between braces in (<ref>). The computation follows the same line of argument as in Appendix <ref>, but is more delicate. We will only end up with a lower bound. However, this is sufficient for the upper variational formula.
To simplify the notation we write (recall Fig. <ref>):
(V_R,E_R) = vertex and edge set of _R without the tadpoles,
= top vertex of V_R,
⋆ = right-most bottom vertex of V_R,
∂ V_R = set of vertices at the bottom of V_R,
= set of tadpoles,
_x = tadpole attached to x ∈∂ V_R\⋆.
Note that ∂ V_R consists of ⋆ and the vertices to which the tadpoles are attached. Note that int(V_R) = V_R ∖∂ V_R includes .
1.
Inserting (<ref>) in Appendix <ref> into (<ref>)–(<ref>), we get
I^†_E(_R)(p) = (d+1) ∑_x∈ V_R p(x)
+ inf_β∈ (0,∞)inf_q ∈(V_R)sup_q∈(V_R) L(β,q,q| p)
with
L(β,q,q| p) = - A - B - C - D,
where
A = ∑_x ∈int(V_R)β q(x){1+log(∑_y ∼ xq(y)/q(x)p(x)/β q(x))},
B = ∑_x ∈∂ V_R\⋆β q(x){1+log(q(x^↑)
+ d q(_x)/q(x)p(x)/β q(x))},
C = β q(⋆) {1+log(q(⋆^↑) + d q()/q(⋆)p(⋆)/β q(⋆))},
D = ∑_x ∈β q(x){log(q(x^↑)/q(x))
- (λ)(p(x)/β q(x)) },
with λ the Legende transform of the cumulant generating function of ψ (recall (<ref>)) and x^↑ the unique vertex to which x is attached upwards. (Recall that y ∼ x means that x and y are connected by an edge in E_R.) Note that A,B,C each combine two terms, and that A,B,C,D depend on p. We suppress this dependence because p is fixed.
2.
Inserting the parametrisation q = u/u_1 and q = v/v_1 with u,v V_R → (0,∞) and putting β q = v, we may write
I^†_E(^R)(p) = (d+1) ∑_x∈ V_R p(x) + inf_v V_R → (0,∞)sup_u V_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C - D,
where
A = ∑_x ∈int(V_R) v(x){1+log(∑_y ∼ xu(y)/u(x)p(x)/v(x))},
B = ∑_x ∈∂ V_R \⋆ v(x){1+log(u(x^↑)
+ d u(_x)/u(x)p(x)/v(x))},
C = v(⋆) {1+log(u(⋆^↑) + d u()/u(⋆)p(⋆)/v(⋆))},
D = ∑_x ∈v(x){log(u(x^↑)/u(x)) - (λ)(p(x)/v(x)) }.
Our task is to carry out the supremum over u and the infimum over v in (<ref>).
3.
First, we compute the infimum over v for fixed u. (Later we will make a judicious choice for u to obtain a lower bound.) Abbreviate
A_u(x) = ∑_y ∼ xu(y)/u(x) p(x), x ∈int(V_R),
B_u(x) = u(x^↑) + d u(_x)/u(x) p(x), x∈∂ V_R\⋆,
C_u(⋆) = u(⋆^↑) + d u()/u(⋆) p(⋆).
∙
For z ∈ V_R, the first derivatives of L are
z ∈int(V_R) ∂ L(u,v)/∂ v(z) = -log(A_u(z)/v(z)),
z ∈∂ V_R\⋆ ∂ L(u,v)/∂ v(z) = -log(B_u(z)/v(z)),
z = ⋆ ∂ L(u,v)/∂ v(z) = -log(C_u(z)/v(z)),
while the second derivatives of L equal 1/v(z)>0. Hence the infimum is uniquely taken at
x ∈int(V_R) v̅(x) = A_u(x),
x ∈ V_R \⋆ v̅(x) = B_u(x),
x = ⋆ v̅(x) = C_u(x).
∙ For z ∈, the computation is more delicate. Define (see (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(x)/u(x^↑) ≤√(d):
Abbreviate α_u(z) = p(z)/v(z). For z ∈,
∂ L(u,v)/∂ v(z) = log(u(z)/u(z^↑))
+ (λ)(p(z)/v(z)) - p(z)/v(z) (λ)^'(p(z)/v(z))
= log(u(z)/u(z^↑)) - μ(α_u(z)),
∂^2 L(u,v)/v(z)^2 =p^2(z)/v^3(z) (λ)^”(p(z)/v(z)) >0,
where we use that λ, being a Legendre transform, is strictly convex. Hence the infimum is uniquely taken at
v̅(x) = p(x)/α_u(x), x ∈,
with α_u(x) solving the equation
log(u(x)/u(x^↑))
= μ(α_u(x)), x ∈.
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(x) = μ^-1(log(u(x)/u(x^↑))), x ∈.
Putting the above formulas together, we arrive at (recall (<ref>))
L(u) = inf_v V_R → (0,∞) L(u,v)
= - ∑_x ∈int(V_R) A_u(x) - ∑_x∈∂ V_R\⋆ B_u(x) - C_u(⋆)
+ ∑_x ∈ D_u(x)
with (recall (<ref>))
D_u(x) = - p(x)/α_u(x)[log(u(x^↑)/u(x)) - (λ)(α_u(x))]
= p(x)/α_u(x)[(λ)(α_u(x)) - μ(α_u(x))]
= p(x) (λ)^'(α_u(x))
= p(x) ((λ)^'∘μ^-1)(log(u(x)/u(x^↑))).
In (<ref>) in Appendix <ref> we show that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we show that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], D_u(x) is only defined when u(x)/u(x^↑) ≤√(d), in which case
D_u(x) = p(x) S(u(x)/u(x^↑)), x ∈.
▸ u(x)/u(x^↑) > √(d): In this case ∂ L(u,v)/∂ v(z) > 0, the infimum is uniquely taken at v̅(x)=0, and
D_u(x) = p(x) (√(d)-1)^2 = p(x) S(√(d)), x ∈,
where we use (<ref>). Note that the right-hand side does not depend on u.
4.
Next, we compute the supremum over u. The first derivatives of L are
z ∈int(V_R) \ ∂ L(u)/∂ u(z)
= ∑_y ∼ z u(y)/u^2(z) p(z) - ∑_y ∼ z1/u(y) p(y),
z = ∂ L(u)/∂ u()
= ∑_y ∼ u(y)/u()^2 p() -∑_y: y^↑ = 1/u(y)p(y)
- d/u(⋆) p(⋆),
z = ⋆ ∂ L(u)/∂ u(⋆)
= -1/u() p() + u(⋆^↑) + du()/u(⋆)^2 p(⋆),
z ∈∂ V_R \⋆ ∂ L(u)/∂ u(z)
= -1/u(z^↑) p(z^↑) + u(z^↑)+du(_z)/u(z)^2 p(z)
+ [u(_z)/u(z)^2 - d/u(_z)]p(_z)
1_{u(z)/u(z^↑)≤√(d)},
z ∈ ∂ L(u)/∂ u(z)
= -d/u(z^↑) p(z^↑)
+ [-1/u(z^↑) +du(z^↑)/u(z)^2] p(z)
1_{u(z)/u(z^↑)≤√(d)}.
The second derivatives of L are all <0. The first line in (<ref>) can be rewritten as
∑_y ∼ z u(y) [p(z)/u^2(z) - p(y)/u^2(y)],
which is zero when
u̅(x) = √(p(x)), x ∈ V_R.
Given the choice in (<ref>), the fifth line in (<ref>) is zero when
u̅(x) = √(dp(x^↑)p(x)/dp(x^↑)+p(x)), x ∈.
Indeed, the derivative is strictly negative when the indicator is 0 and therefore the indicator must be 1. But the latter is guaranteed by (<ref>)–(<ref>), which imply that
u̅(x)/u̅(x^↑) = √(dp(x)/dp(x^↑)+p(x))≤√(d), x ∈.
Given the choice in (<ref>)–(<ref>), also the fourth line in (<ref>) is zero. Thus, only the second and third line in (<ref>) are non-zero, but this is harmless: ,⋆ carry a negligible weight in the limit as R →∞ because of the constraint p(∂ V_R ∪) ≤ 1/R in Lemma <ref> (recall (<ref>)).
Inserting (<ref>)–(<ref>) into (<ref>) and using (<ref>), (<ref>), we get the following lower bound:
sup_u V_R → (0,∞) L(u)
≥ - ∑_x ∈int(V_R) A_u̅(x)
- ∑_x∈∂ V_R\⋆ B_u̅(x)
- C_u̅(⋆) + ∑_x ∈ D_u̅(x)
= - ∑_x ∈int(V_R)∑_y ∼ x√(p(y)p(x))
- ∑_x∈∂ V_R \⋆√(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))
-√(p(⋆))(√(p(⋆^↑))+ d√(p()))
+ ∑_x ∈ p(x) (d+1-√(d)[√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) ]).
5.
Using the relation (d+1) p(x) = ∑_y∼ x p(x), x∈int(V_R), we get from (<ref>) that
I^†_E(^R)(p) ≥ K^1_R(p) + K^2_R(p)
with
K^1_R(p)
= ∑_x ∈int(V_R)∑_y ∼ x[p(x) - √(p(x)p(y)) ]
= ∑_{x,y}∈E_R(√(p(x)) - √(p(y)) )^2
+ [p()-√(p()p(⋆)) ] - ∑_x∈∂ V_R[ p(x) - √(p(x)p(x^↑)) ]
and
K^2_R(p)
= ∑_x∈∂ V_R \⋆[(d+1) p(x) - √(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))]
+ (d+1) p(⋆)-√(p(⋆))(√(p(⋆^↑)) + d√(p()))
+ ∑_x ∈ p(x) [d+1-√(d) (√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) )].
The first sum in the right-hand side of K^1_R(p) equals the standard rate function I_E_R(p) given by (<ref>), with
E_R = E_R ∖{,⋆}
the set of edges in the unit _R without the tadpoles and without the edge {,⋆} (i.e., E_R = E(^*_R); recall Fig. <ref>). Rearranging and simplifying terms, we arrive at
I^†_E(^R)(p) ≥ I_E_R(p)+ K^3_R(p)
with
K^3_R(p) = S_∂ V_R \⋆(p) + S_,⋆(p) + S_(∂ V_R \⋆) ∪(p),
where
S_∂ V_R \⋆(p)
= d ∑_x∈∂ V_R \⋆ p(x),
S_,⋆(p)
= (√(p()) - √(p(⋆)))^2 + (d-1)[p(⋆) - √(p()p(⋆)) ],
S_(∂ V_R \⋆) ∪(p)
= - ∑_x∈∂ V_R \⋆ p(x) d√(dp(_x)/dp(x)+p(_x))
+ ∑_x∈∂ V_R \⋆ p(_x) (d+1-√(d) [√(p(_x)/d p(x) + p(_x))
+ √(d p(x) + p(_x)/p(_x)) ]).
6.
Since √(p()p(⋆)) ≤ 1/2 [p()+p(⋆)], the boundary constraint ∑_x∈∂ V_R ∪ p(x) ≤ 1/R implies that S_∂ V_R \⋆(p) + S_,⋆(p) = O(1/R). The same constraint implies that the first sum in S_(∂ V_R \⋆) ∪(p) is O(1/R). Hence
K^3_R(p) = O(1/R) + ∑_x∈∂ V_R \⋆ p(x) F(p(_x)p(x))
with
F(w) = w (d+1-√(d) [√(w/d+w) + √(d+w/w) ]).
The map w ↦ F(w) is continuous on (0,∞) with
F(w) = {[ √(w) + (d+1)w + O(w^3/2), w ↓ 0,; [(d+1)-2√(d) ] w + O(w^-1), w →∞. ].
From this we see that if d ≥ 4, then there exists a C ∈ (1,∞) such that
F(w)+C ≥(1-√(w) )^2, w ∈ [0,∞).
Hence we have the lower bound
K^3_R(p)
≥ O(1/R) + ∑_x∈∂ V_R \⋆
p(x) [-C + (1-√(p(_x)p(x)) )^2]
= O(1/R) + ∑_x∈∂ V_R \⋆(√(p(x))-√(p(_x)) )^2.
Via (<ref>)–(<ref>), it follows that
I^†_E(^R)(p) ≥ O(1/R) + I_E_R(p), R ∈,
with I_E_R(p) the standard rate function given by (<ref>), with
E_R = E_R ∪[∪_x ∈∂ V_R ∖⋆{x,_x}]
the set of edges in the unit _R that is obtained from the unit _R by removing the edge {,⋆} (i.e., E_R = E(_R); recall Fig. <ref>). This completes the proof of (<ref>).
The condition d ≥ 4 is needed only in (<ref>). For d=2,3 we have F(w)+C ≥θ_c(1-√(w) )^2 with θ_c = d+1-2√(d)∈ (0,1). Consequently, the edges {x,_x}, x ∈∂ V_R∖⋆, carry a weight that is smaller than that of the edges in , which may cause the optimal p to stick to the boundary as R→∞, in which case we do not have (<ref>).
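Both statements can be probed numerically (a rough sketch, assuming F as displayed above; the grids and the values of d are illustrative choices). The quantity sup_w [(1-√w)^2 - F(w)], evaluated on a grid of w-values, stabilises at a finite constant for d=4 but keeps growing with the range of the grid for d=2, in line with the remark above.

import math

def F(w, d):
    return w * (d + 1 - math.sqrt(d) * (math.sqrt(w / (d + w)) + math.sqrt((d + w) / w)))

def needed_C(d, w_max, steps=100000):
    # smallest C on this grid with F(w) + C >= (1 - sqrt(w))^2 for 0 < w <= w_max
    best = 0.0
    for i in range(1, steps + 1):
        w = w_max * i / steps
        best = max(best, (1 - math.sqrt(w)) ** 2 - F(w, d))
    return best

for d in (4, 2):
    print("d =", d, [round(needed_C(d, w_max), 2) for w_max in (10, 100, 1000)])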
§.§ Limit of the upper variational formula
Note that
_R ⊆,
with the infinite tree. Consequently,
I_E_R(p) = I_E()(p) - (d-1) ∑_x ∈∂ V_R ∖⋆ p(x),
∀ p ∈(V()) (p) = V(_R),
where the sum compensates for the contribution coming from the edges in that link the vertices in ∂ V_R ∖⋆ to the vertices one layer deeper in that are not tadpoles. Since this sum is O(1/R), we obtain (recall (<ref>))
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)}
≥ O(1/R) + inf_p ∈(V())(p) = V(_R),
p(∂__R) ≤ 1/R{I_E()(p) + ϱ J_V()(p)}
≥ O(1/R) + χ_(ρ),
where the last inequality follows after dropping the constraint under the infimum and recalling (<ref>). This completes the proof of (<ref>).
A2016
A. Astrauskas,
From extreme values of i.i.d. random fields to extreme eigenvalues of finite-volume Anderson Hamiltonian,
Probab. Surv. 13, 156–244, 2016.
AGH2016
L. Avena, O. Gün, M. Hesse,
The parabolic Anderson model on the hypercube,
Stoch. Proc. Appl. 130, 3369–3393, 2020.
DV75
M.D. Donsker and S.R.S. Varadhan,
Asymptotic evaluation of certain Markov process expectations for large time,
Comm. Pure Appl. Math. (I) 28, 1–47, 1975; (II) 28, 279–301, 1975; (III) 29, 389–461, 1976; (IV) 36, 183–212, 1983.
FM1990
K. Fleischmann, S.A. Molchanov,
Exact asymptotics in a mean field model with random potential,
Probab. Theory Relat. Fields 86, 239–251, 1990.
G1977
J. Gärtner,
On large deviations from the invariant measure,
Theory Probab. Appl. 22, 24–39, 1977.
GdH1999
J. Gärtner, F. den Hollander,
Correlation structure of intermittency in the parabolic Anderson model,
Probab. Theory Relat. Fields 114, 1–54, 1999.
GM1990
J. Gärtner, S.A. Molchanov,
Parabolic problems for the Anderson model I. Intermittency and related problems,
Commun. Math. Phys. 132, 613–655, 1990.
GM1998
J. Gärtner, S.A. Molchanov,
Parabolic problems for the Anderson model II. Second-order asymptotics and structure of high peaks,
Probab. Theory Relat. Fields 111, 17–55, 1998.
HLP1952
G.H. Hardy, J.E. Littlewood, G. Pólya,
Inequalities,
Cambridge Mathematical Library (2nd. ed.), Cambridge University Press, 1952.
dHLDP2000
F. den Hollander,
Large Deviations,
Fields Institute Monographs 14, Providence RI, American Mathematical Society, 2000.
dHKdS2020
F. den Hollander, W. Konig, R.S. dos Santos,
The parabolic Anderson model on a Galton-Watson tree,
in Out of Equilibrium 3: Celebrating Vladas Sidoravicius
(eds. M.E. Vares, R. Fernandez, L.R. Fontes, C.M. Newman),
Progress in Probability 77, Birkhäuser, 2021, pp. 591–635.
dHW2021
F. den Hollander, D. Wang,
The parabolic Anderson model on a Galton-Watson tree revisited,
J. Stat. Phys. 189, paper no. 8, 1–30, 2022.
LP2016
R. Lyons, Y. Peres,
Probability on Trees and Networks,
Cambridge Series in Statistical and Probabilistic Mathematics 42,
Cambridge University Press, New York, 2016.
K2016
W. König,
The Parabolic Anderson Model,
Pathways in Mathematics, Birkhäuser, 2016.
MZ2016
M. Mariani, L. Zambotti,
Large deviations for the empirical measure of heavy-tailed Markov renewal processes,
Adv. Appl. Probab. 48, 648–671, 2016.
S1976
F. Spitzer,
Principles of Random Walk (2nd ed.),
Graduate Texts in Mathematics, Springer, 1976.
|
http://arxiv.org/abs/2307.04027v1 | 20230708182951 | Slow-roll inflation and growth of perturbations in Kaniadakis Cosmology | [
"Gaetano Lambiase",
"Giuseppe Gaetano Luciano",
"Ahmad Sheykhi"
] | gr-qc | [
"gr-qc",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.04335v2 | 20230710041331 | The tree-child network problem for line trees and the shortest common supersequences for permutations are NP-hard | [
"Laurent Bulteau",
"Louxin Zhang"
] | math.CO | [
"math.CO",
"05A16, 05C30, 92D15"
] |
Reconstructing phylogenetic networks presents a significant and complex challenge within the fields of phylogenetics and genome evolution.
One strategy for reconstruction of phylogenetic networks is to solve the phylogenetic network problem, which involves inferring phylogenetic trees first and subsequently computing the smallest phylogenetic network that displays all the trees. This approach capitalizes on exceptional tools available for inferring phylogenetic trees from biomolecular sequences. Since the vast space of phylogenetic networks poses difficulties in obtaining comprehensive sampling, researchers have switched their attention to inferring tree-child networks from multiple phylogenetic trees, where in a tree-child network
each non-leaf node must have at least
one child that is a tree node (i.e. indegree-one node).
We prove that the tree-child network problem for multiple line trees remains NP-hard by a reduction from the shortest common supersequence problem for permutations and by proving that the latter is NP-hard.
§ INTRODUCTION
Recent genomic studies have highlighted the significant roles of recombination and introgression in genome evolution <cit.>. Consequently, there has been an increasing use of phylogenetic networks to model the evolution of genomes with the presence of recombination, introgression and other reticulate events <cit.>. A phylogenetic network is a rooted directed acyclic graph (DAG) that represents taxa (genomes, individuals, or species) as its leaves and evolutionary events (speciation, recombination, or introgression) as its internal nodes. Over the past three decades, substantial progress has been made in understanding the theoretical aspects of phylogenetic networks <cit.> (see also <cit.>).
The space of phylogenetic networks is vast, making it challenging to perform comprehensive sampling. As a result, popular methods like maximum likelihood and Bayesian approaches, commonly used for phylogeny reconstruction, are not efficient enough for reconstructing phylogenetic networks containing a large number of reticulate events on more than 10 taxa <cit.>. This has prompted researchers to focus on inferring phylogenetic networks with specific combinatorial properties <cit.>. Popular classes of phylogenetic networks include galled trees <cit.>, galled networks <cit.>, and tree-child networks <cit.>, which can be enumerated and counted efficiently <cit.>. Furthermore, researchers are also investigating the parsimonious inference of phylogenetic networks from multiple trees, aiming to infer a network with the smallest hybridization number (HN) that display all the trees <cit.>. The HN, a generalization of the number of reticulate nodes in binary phylogenetic networks, quantifies the complexity of the network (refer to Section <ref> for more details). Notably, a scalable method has been recently developed to compute a tree-child network with the minimum HN from binary trees <cit.>.
Inference of an arbitrary phylogenetic network with the smallest HN is known to be NP-hard, even in the case of two input trees <cit.> and in the case tree-child networks are inferred <cit.>.
In this paper,
we prove that the problem remains NP-hard even for inferring tree-child networks from line trees.
§ BASIC CONCEPTS AND NOTATION
Let X be a set of taxa. In this paper, a phylogenetic network on X is a rooted DAG such that:
* The root is of indegree 0 and outdegree 1. There is at least one directed path from the root to every other node.
* The leaves (which are of indegree 1 and outdegree 0) are labeled one-to-one with the taxa.
* All nodes except for the leaves and the root are either a tree node or a reticulate node. The tree nodes are of indegree 1 and outdegree 2, whereas the reticulate nodes are of indegree more than 1 and outdegree 1.
In a phylogenetic network, a node u is said to be below another v if there exists a directed path from v to u.
A phylogenetic network is binary if every reticulate node is of indegree 2.
A binary phylogenetic tree is a binary phylogenetic network that does not have any reticulate nodes. In this paper, a binary phylogenetic tree is simply mentioned as a binary tree. A line tree is a binary tree in which all internal nodes but the out-degree-1 root have at least one child that is a leaf.
An important parameter of a phylogenetic network is the hybridization number (HN).
It is defined as the sum over all the reticulation nodes of the indegree of that node minus the number of the reticulate nodes. Note that for a binary phylogenetic network B, each reticulate node has indegree 2 and thus the HN of B is equal to the number of the reticulate nodes of B.
A tree-child network is a phylogenetic network in which every non-leaf node has at least one child that is either a tree node or a leaf.
Let Σ be an n-letter alphabet, and ℓ be a new letter not in Σ. For a permutation P=p_1p_2⋯ p_n on Σ, we use T(P) to denote the line tree on Σ∪{ℓ} that has the node set Σ∪{r, v_i, ℓ | 1≤ i≤ n} and the directed edge set { (r, v_1), (v_i, v_i+1), (v_i, p_i), (v_n, ℓ), (v_n, p_n) | 1≤ i≤ n-1} (left, Figure <ref>).
Let v be a node of indegree 1 and outdegree 1 in a directed graph. Then, there is a unique edge (u, v) entering
v and a unique edge (v, w) leaving v. We contract v by removing v and
replacing (u, v) and (v, w) with a new edge (u, w).
For a sequence Q=q_1q_2⋯ q_m on Σ, we use N(Q) to denote the one-component tree-child network on Σ∪{ℓ} that is obtained by applying degree-2 node contraction to the DAG consisting of the node set
Σ∪{r, ℓ, v_i, r_j | 1≤ i≤ m, 1≤ j≤ n}
and the directed edge set E_1∪ E_2, where E_1={ (r, v_1), (v_i, v_i+1), (v_m, ℓ), | 1≤ i≤ m-1}∪{(r_j, a_j) | a_j∈Σ} and E_2 contains (v_i, r_j) if q_i=a_j for every possible i and j (right, Figure <ref>).
Clearly, the HN of N(Q) is m-n.
§.§ The tree-child network problem
A binary tree is displayed in a tree-child network if it can be obtained from
the network by (i) deletion of all but one incoming edge for each reticulate
node and subsequently (ii) contraction of all indegree-1 and out-degree-1 nodes.
We focus on
how to infer a tree-child network with the minimum HN that display all the input trees. This problem is formally defined as:
The Tree-Child Network (TCN) Problem
Input A set of binary trees on X.
Output A tree-child network with the minimum HN that displays all the trees.
§.§ The shortest common supersequence problem
A string on an alphabet is a supersequence of another if the latter can be obtained from the former by the deletion of 0 or more letters. A string is a common supersequence of multiple strings if it is a supersequence of every string.
The length of a string is the total number of the occurrences of the letters in the string.
A common supersequence is a shortest common supersequence (SCS) if it has the smallest length, over all the common supersequences of the strings.
The SCS problem is formally defined as:
Input A set of strings on an alphabet.
Output A SCS of the strings.
The SCS problem is a fundamental NP-complete problem <cit.>.
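For two strings the problem is easy: |SCS(a,b)| = |a|+|b|-|LCS(a,b)|, so a shortest common supersequence of two strings can be found with the standard longest-common-subsequence dynamic program; the hardness only arises when many strings must be covered simultaneously. A minimal sketch (illustrative only):

def scs_length(a, b):
    # |SCS(a, b)| = |a| + |b| - |LCS(a, b)| via the classical LCS dynamic program
    m, n = len(a), len(b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                lcs[i][j] = lcs[i - 1][j - 1] + 1
            else:
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
    return m + n - lcs[m][n]

print(scs_length("eadbc", "caebd"))   # two permutations used in a later example; prints 8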
§ TREE-CHILD NETWORK INFERENCE VIA LINEAGE TAXA STRINGS
Let X be a set of n taxa and T_i (1≤ i≤ k) be k binary trees on X. The minimum tree-child networks that display all the k trees can be constructed from the lineage taxon strings (LTSs) of the taxa under an ordering on
X <cit.>. In this section, we shall restate the construction process on which our main result will be based.
Consider an ordering π on X.
For any x, x'∈ X, we write
x<_π x' if x is less than x' under π. For a node u of a tree on X, we use min_π(u) to denote the smallest of the taxa below u. We label the root with the smallest taxon under π and each non-root internal node u with the larger of min_π(u') and min_π(u''), where u' and u'' are the two children of u.
In this way, the root and the remaining n-1 internal nodes are uniquely labeled with a taxon. Moreover, the leaf f is below the unique internal node w that had been labeled with f. As a result, there exists a path P_wf from w to f. The LTS of the taxon f consists of the taxon labels of the inner nodes in P_wf.
For example, if the alphabet ordering
(i.e. a<b<c<d<e<ℓ) is used, in the tree in Figure <ref>, the root is labeled with a; v_1 to v_5 are labeled with e, d, b, c, ℓ, respectively. Therefore,
the LTSs of a, b, c are edb, c, ℓ, respectively, whereas the LTSs of d, e, ℓ are the empty string.
Let π be π_1<π_2 <⋯ <π_n. Note that
X={π_1, π_2, ⋯, π_n}.
We further assume that β_1, β_2, ⋯, β_n are n sequences satisfying the following conditions:
(C1) For each i<n, β_i is a string on {π_i+1, ⋯, π_n};
(C2) β_n is the empty sequence.
It is proved in <cit.> that the following algorithm outputs a tree-child network containing all T_i, written as
N(π, {β_i}^n_i=1), whose HN is equal to
∑_1≤ i≤ n|β_i |.
Tree-Child Network Construction <cit.>
1. (Vertical edges) For each β_i, define a
path P_i with |β_i| +2 nodes:
h_i, v_i1, v_i2, ⋯, v_i|β_i|, π_i,
where β_n is the empty sequence.
2. (Left–right edges)
Arrange the n paths
from left to right as P_1, P_2, ⋯, P_n.
If the m-th symbol of β_i is π_j, we add an
edge (v_im, h_j) for each i and each m.
3. For each i>1, contract h_i if h_i is of indegree 1.
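The three steps can be sketched in a few lines of code (an illustrative sketch, not the implementation of <cit.>; the node names and the symbol 'l' standing for ℓ are our own conventions). Run on the ordering a<b<c<ℓ<d<e and the SCS column of Example 2 in the next section, it reports an HN of 5, the value quoted there.

def build_tcn(order, beta):
    # order: the taxa sorted according to pi; beta[x]: the string beta_i attached to taxon x
    edges = []
    for x in order:
        # step 1: vertical path h_x -> v_x1 -> ... -> v_x|beta_x| -> leaf x
        path = ["h_" + x] + ["v_%s%d" % (x, m) for m in range(1, len(beta[x]) + 1)] + [x]
        edges += list(zip(path, path[1:]))
        # step 2: left-right edge from v_xm into h_y when the m-th symbol of beta_x is y
        edges += [("v_%s%d" % (x, m), "h_" + y) for m, y in enumerate(beta[x], start=1)]
    # step 3: contract h_x (x not the first taxon) whenever it has indegree 1
    for x in order[1:]:
        h = "h_" + x
        parents = [u for (u, v) in edges if v == h]
        if len(parents) == 1:
            child = next(v for (u, v) in edges if u == h)
            edges = [e for e in edges if h not in e] + [(parents[0], child)]
    indeg = {}
    for _, v in edges:
        indeg[v] = indeg.get(v, 0) + 1
    hn = sum(c - 1 for c in indeg.values() if c >= 2)   # hybridization number
    return edges, hn

order = ["a", "b", "c", "l", "d", "e"]
beta = {"a": "ecb", "b": "dcel", "c": "l", "l": "ed", "d": "", "e": ""}
print(build_tcn(order, beta)[1])                        # prints 5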
Consider k binary trees T_j on X.
We write α_ji for the LTS of π_i in T_j for each i≤ n and each j≤ k. Then, for each j, α_j1, α_j2, ⋯, α_jn satisfy the conditions (C1) and (C2). Moreover, let β_i be a SCS of α_1i, α_2i, ⋯, α_ki for each i.
The sequences
β_1, β_2, ⋯, β_n also satisfy the conditions (C1) and (C2).
Let T_j (1≤ j≤ k) be k trees on X and let N be a tree-child network on X that displays all the trees.
If N has the minimum HN, there exists a permutation π such that
N=N(π, {β_i}^n_i=1), where β_i is a SCS of the LTSs α_ji of π_i in the input trees T_j and whose HN is ∑_1≤ i≤ n|β_i|.
The proof of Theorem <ref> appears in Section A of the Supplemental Methods of <cit.>. Since there could be multiple SCSs for a set of sequences, Theorem <ref> implies that the TCN problem have multiple solutions.
§ EQUIVALENCE OF THE TCN AND SCS PROBLEMS
By Theorem <ref>, we know that the TCN problem can be solved using multiple SCS sub-problems, with the LTS of each taxa. Aiming at a reduction from SCS to TCN, we now show that for any instance of SCS where all input strings are permutations, an instance of TCN can be built such that a single taxa has a non-trivial LTS in each tree, and such that each such LTS is exactly one of the input permutations.
Consider an instance of the SCS problem consisting of k permutations P_i (1≤ i≤ k) on Σ.
By Theorem <ref>, all the tree-child networks with the smallest HN that display all the T(P_i) can be obtained from the LTS of taxa under a permutation.
Consider an ordering π: π_1< π_2 <⋯ <π_n <π_n+1 on Σ∪{ℓ}. We have the following two cases.
Case 1: ℓ=π_t, where t>1.
For each i, we let P_i=p_i1p_i2⋯ p_in.
If p_in >_πℓ, the LTS of the leaf ℓ ends with p_in and thus is nonempty in T(P_i).
But, the LTS of the leaf p_in is empty in T(P_i).
If p_in<_πℓ, the LTS of ℓ is empty. But, the LTS of p_in contains ℓ and thus is nonempty in T(P_i).
In general, define β_i1=π_1. For each j≥ 1 such that β_ij=p_ix < min{p_in, ℓ},
define β_i (j+1)=min_π{ p_i(x+1), ⋯, p_in, ℓ}.
We obtain a sequence:
β_i1=π_1, β_i2, ⋯, β_iw_i=
min{p_in, ℓ}.
Then, in L(P_i), the LTS of β_ij end with β_i(j+1) and thus are nonempty under π for each j<w_i; the LTS of β_iw_i ends with
ℓ if β_w_i=p_in and p_in if β_w_i=ℓ under Π. It is also true that the LTS is empty for any other taxon under π. Moreover, we have the following fact.
Let the LTS of β_ij be
S_ij in T_i under π: π_1<π_2<⋯ <π_n+1, where ℓ≠π_1. Then, for each i,
P_i=
S_i1[1, |S_i1|-1]β_i1S_i2[1, |S_i2|-1]β_i2⋯
S_i(w_i-1)[1, |S_i(w_i-1)|-1]β_i(w_i-1)S'_iw_i,
S'_iw_i={[ S_iw_i β_iw_i=ℓ; S_iw_i[1, |S_iw_i|-1]β_iw_i β_iw_i≠ℓ ].
where S_it[1, |S_it|-1] denotes the string obtained by removal of the last letter of S_it for each possible t
and the right-hand side is the concatenation of the strings and letters.
Example 1. For the line tree in the left panel in Figure <ref>, the corresponding permutation is P:edabc on the alphabet {a, b, c, d, e}. Under the ordering
a<b<c<d<e<ℓ,
β_1=a, β_2=b,
β_3=c, whose LTSs are edb, c, ℓ, respectively.
Proposition <ref> is verified by ed a· b· c=P, where
the symbol '·' is added to indicate different parts of P for clearness.
Let the LTS of β_ij be
S_ij in T_i under π.
Fix a π_j for some 1≤ j≤ n+1. If the LTS of π_j is empty for every i, define Q_j to be the empty string. If S_ij is nonempty only for the indices i_1, i_2, ⋯, i_s, we define Q_j to be the string obtained from a shortest common supersequence W_j of S_i_1j, S_i_2j, ⋯, S_i_sj by removing the last letter of W_j.
Note that different SCS of the strings give different Q_j of the same length.
Example 2. Consider the ordering π: a<b<c<ℓ<d<e for the tree lines trees in Figure <ref>. The LTSs of the taxa under π in the three trees are listed in the following table, from which we obtain a tree-child network of an HN of 5 (right, Figure <ref>).
Taxon LTS in T(P_1) LTS in T(P_2) LTS in T(P_3) SCS
a eb cb cb ecb
b dc eℓ ℓ dceℓ
c ℓ ϵ ϵ ℓ
ℓ ϵ d ed ed
d ϵ ϵ ϵ ϵ
e ϵ ϵ ϵ ϵ
Here, ϵ denotes the empty string. The LTSs of b are dc, eℓ and ℓ in T(P_1), T(P_2), T(P_3), respectively.
A SCS of dc, eℓ, ℓ is dceℓ, from which we obtain Q_b=dec. Similarly, we obtain
Q_a=ec and Q_ℓ=e and that Q_c, Q_d and Q_e are empty.
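The labelling procedure takes a particularly simple form on line trees. The sketch below (illustrative only; ℓ is written as 'l', and the three permutations are the ones identified in the continuation of this example below) computes the LTSs of T(P) under an ordering and reproduces the rows of the table, e.g. dc, eℓ and ℓ for the taxon b.

def line_tree_lts(P, order):
    # LTSs of the taxa of the line tree T(P) (leaves P[0..n-1] plus the extra leaf 'l')
    # under the ordering given by the string `order`
    rank = {c: i for i, c in enumerate(order)}
    n = len(P)
    min_below = {n + 1: "l"}                            # below v_n, besides p_n, only the leaf l
    for i in range(n, 0, -1):
        min_below[i] = min(P[i - 1], min_below[i + 1], key=rank.get)
    label = {i: max(P[i - 1], min_below[i + 1], key=rank.get) for i in range(1, n + 1)}
    leaf_at = {P[i - 1]: i for i in range(1, n + 1)}
    leaf_at["l"] = n                                    # the leaf l hangs off v_n
    labelled_at = {label[i]: i for i in range(1, n + 1)}
    labelled_at[order[0]] = 0                           # the root is labelled with the smallest taxon
    return {f: "".join(label[i] for i in range(labelled_at[f] + 1, leaf_at[f] + 1))
            for f in list(P) + ["l"]}

order = "abclde"                                        # the ordering a < b < c < l < d < e
for P in ("eadbc", "caebd", "cabed"):
    print(P, line_tree_lts(P, order))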
Assume that Q_h_1, Q_h_2, ⋯, Q_h_j are all the nonempty strings defined from π by the method described above, where π_h_1<_ππ_h_2<_π⋯ <_ππ_h_j.
If π_h_j=ℓ, we set
Q=Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1 W_h_j.
If π_h_j <_πℓ, then ℓ must appear in Q_h_j if it is not removed. In this case,
we set Q to the string obtained from
Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1Q_h_jπ_h_j by deleting the occurrences of ℓ.
Since |Q| is equal to or less than the sum of the lengths of the SCS of the LTSs of the π_i in the k line trees T(P_j) (1≤ j≤ k), the HN of N(Q) is equal to or less than the HN of N_π.
On the other hand, by Proposition 1, Q is a common supersequence of P_1, P_2, ⋯, P_k.
Thus, |Q| ≥ |SCS(P_1, P_2, ⋯, P_k)|.
Therefore, the HN of the one-component tree-child network N(Q) is not less than that of N(SCS(P_1, P_2, ⋯, P_k)).
Example 2 (Continued).
For the trees in Figure <ref>,
Q=Q_aa· Q_bb· Q_cc· W_ℓ=eca· deℓ b· c· ed. After removing ℓ, we obtain
Q'=ecadebced, which is also a supersequence of eadbc, caebd, and cabed.
The one-component tree-child network N(Q') is shown in Figure <ref>.
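That Q' is indeed a common supersequence of the three permutations can be confirmed mechanically (an illustrative two-line check):

def is_supersequence(sup, s):
    it = iter(sup)
    return all(ch in it for ch in s)      # greedy left-to-right matching

print(all(is_supersequence("ecadebced", p) for p in ("eadbc", "caebd", "cabed")))   # True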
Case 2: ℓ=π_1. By definition, the LTS is P_i for ℓ and the empty string for π_i for every i>1. In this case, we obtain a tree-child network
N(SCS(P_1, P_2, ⋯, P_k)).
Taken together, the discussions of the two cases imply the following result.
Let N be the tree-child network constructed from T(P_1), T(P_2), ⋯, T(P_k) by applying the algorithm with an ordering π: π_1<π_2<⋯ < π_n+1.
It has the smallest HN if and only if ℓ is the smallest element under π, where N=N(SCS(P_1, P_2, ⋯, P_k)).
Propositions 1 and 2 imply the following result.
Let X be a set of taxa such that |X|=n+1 and let T be a set of line trees on X in which there is a common lowest leaf ℓ. There is a tree-child network displaying all the trees of T with q reticulations if and only if the permutations on X∖{ℓ} that correspond to the line trees have a SCS of length n+q.
§ NP-HARDNESS OF THE SCS PROBLEM FOR PERMUTATIONS
The SCS problem is NP-hard for permutations
SCS is already known to be NP-hard when all input strings consist of 2 distinct characters <cit.>; let us denote this variant 2-SCS (we further need the trivial constraint that no character appears in every input string). We thus provide a reduction as follows: consider an instance 𝒮 of 2-SCS with m length-2 strings over a size-n alphabet X={x_1,…,x_n}, and an integer k. Let N=n+k+1, and create a size-N set Y={y_1,…,y_N} of separators. In the context of strings, we also write X and Y for the strings x_1… x_n and y_1… y_N, respectively. For any string ab∈𝒮 (with a,b∈ X and a≠ b), we write X_-ab for the subsequence of x_1x_2… x_n obtained by removing a and b, and S_ab = a b · Y · X_-ab. Note that each S_ab is a permutation of X∪ Y. Let us write 𝒮' = {S_ab, ab∈𝒮} and k'=k+N+n. We now prove the following equivalence that completes the reduction.
Strings in 𝒮 have a common supersequence T of size k
⇔ Strings in 𝒮' have a common supersequence T' of size k'
⇒
Build T' = T · Y · X. String T' is a length-k' string, and it is a supersequence of any S_ab for ab∈ S' (since T is a supersequence of ab and X is a supersequence of X_-ab).
⇐
Pick such a string T'. It contains at least one occurrence of Y as a subsequence. Let P,R be the matching prefix and suffix of T' (i.e. T'=P· R) such that R is the smallest suffix containing Y as a subsequence.
Let T be the subsequence of P obtained by removing all separator characters.
We have |P|≤ k'-N = k+n <N, so P cannot contain an entire copy of Y. Hence, for any S_ab = ab· Y· X_-ab∈𝒮', we have that ab is a subsequence of P and X_-ab is a subsequence of R.
Overall, P, and also T, are common supersequences of all ab∈𝒮, and R is a common supersequence of all X_-ab. In order to bound their sizes, note that R contains each character of X and Y at least once, so |R|≥ N+n.
Hence, T has size at most k'-N-n=k, and is a common supersequence of 𝒮.
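The construction used in this proof is purely syntactic and can be written down directly (an illustrative sketch; the toy instance at the bottom is our own choice and not part of the proof).

def reduce_2scs(X, S, k):
    # X: alphabet x_1..x_n of the 2-SCS instance; S: its length-2 strings; k: the length bound
    n = len(X)
    N = n + k + 1
    Y = ["y%d" % j for j in range(1, N + 1)]            # the separator characters
    S_prime = []
    for ab in S:
        a, b = ab[0], ab[1]
        X_minus = [x for x in X if x not in (a, b)]
        S_prime.append([a, b] + Y + X_minus)            # S_ab = ab . Y . X_{-ab}
    return S_prime, k + N + n                           # the permutations and the new bound k'

X = ["a", "b", "c"]
S = ["ab", "bc", "ca"]       # toy 2-SCS instance; e.g. "abca" is a common supersequence of length 4
S_prime, k_prime = reduce_2scs(X, S, k=4)
print(k_prime, [len(p) for p in S_prime])               # every S_ab is a permutation of X ∪ Y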
The TCN problem is NP-hard even for line trees.
Proof. The statement is derived from Theorem 2 and Theorem 3.
Open Problem Does the TCN problem remain NP-hard for two line trees?
The TCN problem for three line trees was studied by Van Iersel et al. in <cit.>.
§ ACKNOWLEDGEMENTS
LX Zhang was partially supported by Singapore
MOE Tier 1 grant R-146-000-318-114 and Merlin 2023. He thanks Yufeng Wu for useful discussion in the early stage of this work.
10
albrecht2012fast
Benjamin Albrecht, Celine Scornavacca, Alberto Cenci, and Daniel H Huson.
Fast computation of minimum hybridization networks.
Bioinformatics, 28(2):191–197, 2012.
bordewich2007computing
Magnus Bordewich and Charles Semple.
Computing the minimum number of hybridization events for a consistent
evolutionary history.
Discrete Applied. Math., 155(8):914–928, 2007.
cardona2009metrics2
Gabriel Cardona, Mercè Llabrés, Francesc Rosselló, and
Gabriel Valiente.
Metrics for phylogenetic networks II: Nodal and triplets metrics.
IEEE/ACM-TCBB, 6(3):454–469, 2009.
cardona2020counting
Gabriel Cardona and Louxin Zhang.
Counting and enumerating tree-child networks and their subclasses.
Journal of Computer and System Sciences, 114:84–104, 2020.
elworth2019advances
RA Leo Elworth, Huw A Ogilvie, Jiafan Zhu, and Luay Nakhleh.
Advances in computational methods for phylogenetic networks in the
presence of hybridization.
In Bioinformatics and Phylogenetics, pages 317–360. Springer,
2019.
Fontaine_15
Michael C Fontaine, James B Pease, Aaron Steele, and et al.
Extensive introgression in a malaria vector species complex revealed
by phylogenomics.
Science, 347(6217):1258524–1258524, 2015.
garey1979computers
Michael R Garey and David S Johnson.
Computers and intractability.
Freeman San Francisco, 1979.
gogarten2005horizontal
J Peter Gogarten and Jeffrey P Townsend.
Horizontal gene transfer, genome innovation and evolution.
Nature Reviews Microbiol., 3(9):679–687, 2005.
gusfield2014book
Dan Gusfield.
ReCombinatorics: the algorithmics of ancestral recombination
graphs and explicit phylogenetic networks.
MIT press, 2014.
huson2009computing
Daniel H Huson, Regula Rupp, Vincent Berry, Philippe Gambette, and Christophe
Paul.
Computing galled networks from real data.
Bioinformatics, 25(12):i85–i93, 2009.
huson2010book
Daniel H Huson, Regula Rupp, and Celine Scornavacca.
Phylogenetic networks: concepts, algorithms and applications.
Cambridge University Press, 2010.
koblmuller2007reticulate
Stephan Koblmüller, Nina Duftner, Kristina M Sefc, Mitsuto Aibara, Martina
Stipacek, Michel Blanc, Bernd Egger, and Christian Sturmbauer.
Reticulate phylogeny of gastropod-shell-breeding cichlids from lake
tanganyika–the result of repeated introgressive hybridization.
BMC Evol. Biol., 7(1):1–13, 2007.
koonin2001horizontal
Eugene V Koonin, Kira S Makarova, and L Aravind.
Horizontal gene transfer in prokaryotes: quantification and
classification.
Annual Rev. Microbiol., 55(1):709–742, 2001.
linz2019attaching
Simone Linz and Charles Semple.
Attaching leaves and picking cherries to characterise the
hybridisation number for a set of phylogenies.
Adv. Applied Math., 105:102–129, 2019.
lutteropp2022netrax
Sarah Lutteropp, Céline Scornavacca, Alexey M Kozlov, Benoit Morel, and
Alexandros Stamatakis.
Netrax: accurate and fast maximum likelihood phylogenetic network
inference.
Bioinformatics, 38(15):3725–3733, 2022.
Marcussen_14
Thomas Marcussen, Simen R Sandve, Lise Heier, Manuel Spannagl, Matthias
Pfeifer, The International Wheat Genome Sequencing Consortium, Kjetill S
Jakobsen, Brande BH Wulff, Burkhard Steuernagel, Klaus FX Mayer, and Odd-Arne
Olsen.
Ancient hybridizations among the ancestral genomes of bread wheat.
Science, 345(6194):1250092–1250092, 2014.
mirzaei2015fast
Sajad Mirzaei and Yufeng Wu.
Fast construction of near parsimonious hybridization networks for
multiple phylogenetic trees.
IEEE/ACM Trans. Comput. Biol. Bioinform., 13(3):565–570, 2015.
pickrell2012inference
Joseph Pickrell and Jonathan Pritchard.
Inference of population splits and mixtures from genome-wide allele
frequency data.
Nat Prec, 2012.
solis2016inferring
Claudia Solís-Lemus and Cécile Ané.
Inferring phylogenetic networks with maximum pseudolikelihood under
incomplete lineage sorting.
PLoS genetics, 12(3):e1005896, 2016.
steel2016phylogeny
Mike Steel.
Phylogeny: discrete and random processes in evolution.
SIAM, 2016.
timkovskii1989complexity
VG Timkovskii.
Complexity of common subsequence and supersequence problems and
related problems.
Cybernetics, 25:565–580, 1989.
van2022practical
Leo van Iersel, Remie Janssen, Mark Jones, Yukihiro Murakami, and Norbert Zeh.
A practical fixed-parameter algorithm for constructing tree-child
networks from multiple binary trees.
Algorithmica, 84(4):917–960, 2022.
van2023three
Leo van Iersel, Mark Jones, and Mathias Weller.
When three trees go to war.
hal.science, 2023.
wang2001perfect
Lusheng Wang, Kaizhong Zhang, and Louxin Zhang.
Perfect phylogenetic networks with recombination.
Journal of Computational Biology, 8(1):69–78, 2001.
wu2010close
Yufeng Wu.
Close lower and upper bounds for the minimum reticulate network of
multiple phylogenetic trees.
Bioinformatics, 26(12):i140–i148, 2010.
yamada2020improved
Kohei Yamada, Zhi-Zhong Chen, and Lusheng Wang.
Improved practical algorithms for rooted subtree prune and regraft
(rSPR) distance and hybridization number.
J. Comput. Biol., 27(9):1422–1432, 2020.
zhang2018bayesian
Chi Zhang, Huw A Ogilvie, Alexei J Drummond, and Tanja Stadler.
Bayesian inference of species networks from multilocus sequence data.
Molecular biology and evolution, 35(2):504–517, 2018.
zhang2019clusters
Louxin Zhang.
Clusters, trees, and phylogenetic network classes.
In Bioinformatics and Phylogenetics: Seminal Contributions of
Bernard Moret. Springer, 2019.
zhang2019
Louxin Zhang.
Generating normal networks via leaf insertion and nearest neighbor
interchange.
BMC Bioinform., 20(20):1–9, 2019.
zhang2023fast
Louxin Zhang, Niloufar Abhari, Caroline Colijn, and Yufeng Wu.
A fast and scalable method for inferring phylogenetic networks from
trees by aligning lineage taxon strings.
Genome Research, 33:gr–277669, 2023.
|
http://arxiv.org/abs/2307.04507v1 | 20230710120118 | Improving Factuality of Abstractive Summarization via Contrastive Reward Learning | [
"I-Chun Chern",
"Zhiruo Wang",
"Sanjan Das",
"Bhavuk Sharma",
"Pengfei Liu",
"Graham Neubig"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries. Code and human evaluation results will be publicly available at <https://github.com/EthanC111/factuality_summarization>.
§ INTRODUCTION
One major challenge in current abstractive summarization models is how to generate more factual summaries that are consistent with the source text <cit.>. Various approaches have been proposed to address this challenge, including augmenting the model input <cit.>, performing post-processing <cit.>, and modifying the learning algorithms <cit.>.
In particular, learning-based methods possess the advantage of not requiring modification to the existing model architecture or the addition of new modules.
In the meantime, with the growing interest in aligning learning objectives with evaluation criteria of interest, utilizing feedback of automatic evaluation metrics <cit.> or human preferences <cit.> as rewards for fine-tuning abstractive summarization models has gained substantial attention. These methods learn to optimize rewards using techniques such as reinforcement-learning (RL) <cit.>, minimum risk training (MRT) <cit.>, and contrastive reward learning (CRL) <cit.>.
Given the benefits of learning-based methods in improving the factuality of abstractive summarization, and recent advancements in factuality metrics for detecting factual inconsistencies in generated summaries, it is of interest to apply reward learning so that summarization models learn from the feedback of factuality metrics and thereby produce more factual summaries. We aim to investigate the following questions in this paper - Q1: Can contrastive reward learning effectively utilize existing factuality metrics to improve the factuality of abstractive summarization models? Q2: Can the improvement in factuality be reflected in human evaluation studies?
In this paper, we propose a contrastive reward learning framework that enables abstractive summarization models to directly learn from feedback of factuality metrics in a sample-efficient manner. In contrast to other contrastive learning frameworks <cit.>, our proposed framework does not rely on the complex construction of negative samples. Instead, similar to <cit.>, all candidate summaries used for contrastive learning are generated from pretrained sequence-to-sequence models <cit.> using diverse beam search <cit.>. Additionally, our framework also incorporates the use of quality metrics to provide more fine-grained information on the ranking (positive / negative) of candidate summaries. Specifically, we investigate learning from the rewards of two factuality metrics: BARTScore <cit.>
and DAE <cit.>. Through automatic and human evaluation studies, we demonstrate that our framework enables summarization models to generate significantly more factual summaries.
§ CONTRASTIVE LEARNING FROM FACTUALITY REWARDS
§.§ Contrastive Learning for Abstractive Summarization
Abstractive Summarization
Given a source document D, the summarization model learns a generative model g_θ, that converts the source document D into a summary S:
S = g_θ(D)
MLE Loss
Given a training sample pair {D,S^r} consisting of a source document D and a reference summary S^r (note that S^r consists of L tokens, S^r = {s^r_1, ⋯, s^r_j, ⋯, s^r_L}), the MLE loss ℒ_mle aims to maximize the likelihood of the reference summary S^r given the source document D:
ℒ_mle = log p_g_θ(S^r | D) = ∑_j = 1^Llog p_g_θ(s^r_j | D, s^r_<j)
where s^r_<j = {s^r_0, ⋯, s^r_j-1} and s^r_0 is a pre-defined start token.
Despite its effectiveness in enforcing generated summaries to align with the reference summaries, the MLE loss is not aware of the quality (evaluated by some quality metric M) of the generated summaries. To address this issue, we introduce a contrastive loss <cit.>.
Contrastive Loss
Given a training sample pair {D,S^r}, and candidate summaries S_i, S_j generated from a pre-trained model given D, with M(S_i) > M(S_j) for all i < j [M can be a reference-free metric (e.g., BARTScore, DAE) or a reference-based metric (e.g., ROUGE). If M is reference-free, then M(S_i) = M(S_i, D); if M is reference-based, then M(S_i) = M(S_i, S^r)], the contrastive loss is defined as:
ℒ_ctr = ∑_i ∑_j > imax (0, f(S_j) - f(S_i) + λ_ij)
Note that λ_ij = (j - i) ×λ is the rank difference between two candidates times a constant λ (usually set as 1) [The magnitude of contrastive loss can be directly regulated through the weight of contrastive loss γ, so we simply set λ equal to 1.] and that f(S) is the length-normalized estimated log-probability given by:
f(S) = ∑_t=1^llog p_g_θ(s_t|D, S_<t)/|S|^α
where α is a constant.
Intuitively, the contrastive loss penalizes any discoordination between the length-normalized estimated log-probability and the quality metric evaluation (i.e., when f(S_j) > f(S_i) but M(S_i) > M(S_j)). The quality metric M could be any evaluation criteria, including automatic evaluation metrics <cit.>, or human preferences <cit.>.
Combined Loss
The combined loss used for fine-tuning is described by <ref>.
ℒ_com=ℒ_mle+γℒ_ctr
where ℒ_mle is the MLE loss given in <ref>, ℒ_ctr is the contrastive loss given in <ref>, and γ is the weight of the contrastive loss. Summarization models fine-tuned with ℒ_com are referred to as CRL-COM.
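To make the objective concrete, the following is a minimal PyTorch-style sketch of f(S), ℒ_ctr, and ℒ_com; the function names, tensor shapes, and batching conventions are illustrative assumptions rather than the released implementation.

import torch

def length_normalized_logprob(token_logprobs: torch.Tensor, mask: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # f(S) = (sum_t log p(s_t | D, S_<t)) / |S|^alpha, computed per candidate.
    # token_logprobs, mask: (num_candidates, max_len); mask is 1.0 on real tokens, 0.0 on padding.
    lengths = mask.sum(dim=-1)
    return (token_logprobs * mask).sum(dim=-1) / lengths.pow(alpha)

def contrastive_loss(f_scores: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    # f_scores: (num_candidates,) scores f(S_i), with candidates already sorted so that
    # M(S_1) > M(S_2) > ... under the chosen quality metric M (ROUGE, BARTScore, or DAE).
    n = f_scores.size(0)
    loss = f_scores.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            margin = (j - i) * lam  # lambda_ij = (j - i) * lambda
            loss = loss + torch.clamp(f_scores[j] - f_scores[i] + margin, min=0.0)
    return loss

def combined_loss(mle_loss: torch.Tensor, ctr_loss: torch.Tensor, gamma: float = 100.0) -> torch.Tensor:
    # L_com = L_mle + gamma * L_ctr; the experiments below use gamma = 100.
    return mle_loss + gamma * ctr_loss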
§.§ Reward from Factuality Metrics
We use two factuality metrics as quality metrics M for use in the contrastive loss described in <ref>.
The BARTScore <cit.> factuality score is the log-likelihood of the summary given the source, computed with a reference-free version of BARTScore.
DAE <cit.> is computed as the softmax output for the least-factual dependency arc inside the sentences of the summary.
These two metrics were chosen for relative computational efficiency, as they are evaluated many times in the training process.
[In contrast, QA-based factuality metrics are computationally inefficient <cit.>. As a result, they are less feasible for use in reward-learning settings.]
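For illustration, a BARTScore-style factuality reward can be sketched as the average log-likelihood of the summary given the source under a seq2seq model; the function name and the commented checkpoint choice below are assumptions for the sketch, not necessarily the exact scorer configuration used here.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def bartscore_style_reward(source: str, summary: str, model, tokenizer) -> float:
    # Average log-likelihood of the summary tokens given the source (higher = more faithful under this proxy).
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return -out.loss.item()  # loss is mean token-level cross-entropy, so -loss is the mean log-probability

# Usage sketch (the checkpoint is an assumption, not necessarily the one used for the paper's scores):
# tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
# model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
# reward = bartscore_style_reward(document_text, candidate_summary, model, tokenizer)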
§ EXPERIMENTS
§.§ Experimental Setup
Driven by the two research questions presented in the introduction, we train two kinds of factuality-driven summarization models, namely CRL-COM (B) and CRL-COM (D), trained with contrastive reward learning using BARTScore and DAE as quality metrics, respectively. A baseline summarization model CRL-COM (R) is also trained with contrastive reward learning using ROUGE as the quality metric. Note that commonly used n-gram based metrics, including ROUGE <cit.>, have been shown to have a low correlation with human evaluations, particularly with respect to factuality <cit.>. Thus, we focus on evaluating the factuality of CRL-COM (B) and CRL-COM (D) compared to CRL-COM (R), with the hypothesis that CRL-COM (B) and CRL-COM (D) should be capable of generating more factual summaries than CRL-COM (R).
Datasets:
We use two abstractive summarization datasets – the CNN/Daily Mail (CNNDM) dataset <cit.> and the XSUM dataset <cit.>. CNNDM summaries tend to be more extractive and consist of multiple sentences, while XSUM summaries are more abstractive and consist of a single sentence.
Models:
Following the setting outlined in <cit.>, we fine-tuned a pre-trained BART model <cit.> on the CNNDM dataset and a pre-trained PEGASUS <cit.> model on the XSUM dataset.
Implementation and Fine-tuning Details:
The combined loss (with weight of the contrastive loss γ = 100) described in <ref> is used to fine-tune the pre-trained models. Following the few-shot fine-tuning paradigm of <cit.>, we sampled 1000 training samples from each dataset for few-shot fine-tuning. A constant learning rate of 10^-5 and 10^-4 was applied to the fine-tuning process for the CNNDM and XSUM datasets, respectively, in order to facilitate fast convergence. For each dataset, we fine-tuned three models using three different quality metrics: ROUGE (R), BARTScore (B), and DAE (D), designated as CRL-COM (R), CRL-COM (B), and CRL-COM (D), respectively.
During validation, we employed the same quality metric used for fine-tuning for early stopping.
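For reference, the fine-tuning setup above can be collected in a small configuration sketch; the field names are illustrative assumptions, while the values are those stated in the text.

crl_com_finetuning_config = {
    "num_train_samples_per_dataset": 1000,       # few-shot fine-tuning subset
    "contrastive_loss_weight_gamma": 100,
    "learning_rate": {"cnndm": 1e-5, "xsum": 1e-4},
    "base_model": {"cnndm": "BART", "xsum": "PEGASUS"},
    "quality_metrics": ["ROUGE", "BARTScore", "DAE"],  # -> CRL-COM (R) / (B) / (D)
    "early_stopping_metric": "same quality metric used for fine-tuning",
}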
Automatic Evaluation
Each model is evaluated on three metrics: ROUGE (with variants ROUGE-1, ROUGE-2, ROUGE-L), BARTScore, and DAE.
Human Evaluation
To objectively evaluate the factual consistency of the generated summaries from each model, we randomly sampled 100 samples from CNNDM and 200 samples from XSUM for human evaluation. We assess each summary from three different perspectives: Factuality (FAC), Coherence (COH), and Relevance (REL), with a particular emphasis on factuality. The assessment follows guidelines similar to those in <cit.>. The evaluation guidelines provided to the annotators are listed in <ref>. An expert annotator is involved in the human evaluation studies.
§.§ Results and Analysis
Contrastive reward learning can enforce models to learn from feedback of factuality metrics
Driven by Q1, we observe that results from automatic evaluation presented in <ref> indicate that contrastive reward learning enables abstractive summarization models to develop in a direction that aligns with existing factuality metrics.
Learning from factuality metrics improves factuality of abstractive summarization.
Driven by Q2, we observe that results from human evaluation presented in <ref> indicate that on both datasets, CRL-COM (B) and CRL-COM (D) exhibit superior performance in terms of factuality compared to CRL-COM (R). This suggests that while learning from factuality metrics such as BARTScore and DAE may potentially result in sacrificing the performance of the models on ROUGE scores, the resulting models can generate more factually consistent summaries. In other words, summaries with higher BARTScore or DAE scores but lower ROUGE scores tend to be more factually consistent with the source article compared to those with lower BARTScore or DAE scores but higher ROUGE scores. This further supports the assertion that BARTScore and DAE are effective at capturing factual information.
Learning from factuality metrics did not sacrifice coherence and relevance.
According to human evaluations, the summaries generated by CRL-COM (B) and CRL-COM (D) showed comparable coherence and relevance to those generated by CRL-COM (R). This suggests that BARTScore and DAE have comparable abilities to ROUGE in terms of measuring coherence and relevance.
§ RELATED WORK
§.§ Factuality Metrics for Abstractive Summarization
Various factuality metrics assess the factual consistency between a summary and its corresponding source document. QA-based factuality metrics leverage question generation (QG) models to generate questions from the summary and question answering (QA) models to answer those questions, given both the source and summary <cit.>. Factuality is then evaluated based on the alignment between the answers from the source and summary. Another class of metrics, entailment-based factuality metrics <cit.>, evaluates whether all the information in the summary is entailed by the source document. Recent studies on leveraging pre-trained language model as evaluation <cit.> also achieve competitive performance on evaluating factuality.
§.§ Improving Factuality of Abstractive Summarization via Contrastive Learning
Several contrastive learning frameworks have been proposed to enable models to learn factuality from positive samples (such as reference summaries) and negative samples (such as edited reference summaries and system generated summaries); examples include CLIFF <cit.> and CO2Sum <cit.>. The two are similar in nature, but CO2Sum employs more sophisticated methods for negative sample construction.
§ CONCLUSION
In this work, we present a simple contrastive reward learning framework that enforces abstractive summarization models to learn from feedback of existing factuality metrics. Empirical studies demonstrate the effectiveness of this approach, showing that abstractive summarization models that learn from factuality metric feedback through contrastive reward learning can generate more factual summaries without sacrificing coherence or relevance. This suggests that further advancements in the reward learning paradigm and factuality metrics can facilitate the development of more factually consistent abstractive summarization models.
§ LIMITATIONS
While we have included two distinct datasets (CNNDM and XSUM) in our experiments, more non-news datasets could be included in future studies. Other possibilities for future work include comparing the capability of RL-based reward learning and contrastive reward learning in improving the factuality of abstractive summarization models.
§ ETHICS STATEMENT
Even though some of the investigated systems may achieve a high level of factuality on the CNNDM dataset, this does not guarantee that they can be used as off-the-shelf factual consistent summarization models. Thorough evaluation should be conducted before using these models in high-stakes settings to ensure their reliability.
§ ACKNOWLEDGEMENTS
We would like to thank Yixin Liu for helpful discussion on BRIO. We would also like to thank Tanya Goyal for helpful discussion on DAE.
|
http://arxiv.org/abs/2307.07388v1 | 20230714145733 | A metric uniformizing model for the Quasi-Fuchsian space | [
"Christian El Emam"
] | math.DG | [
"math.DG",
"53C15, 20H10, 30F40, 30F10, 30F60, 51H25"
] |
A metric uniformizing model for the Quasi-Fuchsian space
Christian El Emam: Department of Mathematics, University of Luxembourg, 6 avenue de la Fonte, L-4364 Esch-Sur-Alzette, Luxembourg. [email protected]
We introduce and study a novel metric uniformizing model for the quasi-Fuchsian space (S), defined through a class of ℂ-valued bilinear forms on S, called Bers metrics, which coincide with hyperbolic Riemannian metrics along the Fuchsian locus.
By employing this approach, we present a new model of the holomorphic tangent bundle of (S) that extends the metric model for 𝒯(S) defined by Berger and Ebin, and give an integral representation of the Goldman symplectic form and of the holomorphic extension of the Weil-Petersson metric to (S), with a new proof of its existence and non-degeneracy. We also determine new bounds for the Schwarzian of Bers projective structures, extending Kraus's estimate.
Lastly, we use this formalism to shed new light on the description of the derivative of the Schwarzian map and of the affine structures on 𝒯(S), and to give alternative proofs of several classic results in quasi-Fuchsian theory, such as McMullen's reciprocity theorem.
§ INTRODUCTION
Let S be a closed oriented surface of genus g≥ 2. The quasi-Fuchsian space (S) is a distinguished neighborhood of the space of Fuchsian representations inside the (π_1(S), (2,))-character variety. Quasi-Fuchsian hyperbolic manifolds and the space (S) have been widely studied in the last century. One of the most powerful theorems in this area is Bers Simultaneous Uniformization Theorem (see Theorem <ref>), showing that (S) is biholomorphic to 𝒯(S)× T(S), with 𝒯(S) being the Teichmüller space of S.
In this paper, we develop a new metric perspective on the geometry of the quasi-Fuchsian space. In fact, Bers Theorem suggests the definition of a family of ℂ-valued bilinear forms on S, that we call Bers metrics (see Sections <ref> and <ref>), and a way to associate to each of them a quasi-Fuchsian holonomy that extends the holonomy assignment of hyperbolic Riemannian metrics to Fuchsian representations. One useful aspect of this approach is that there is a natural way to deform Bers metrics into new Bers metrics by adding holomorphic quadratic differentials, inducing a natural way to deform quasi-Fuchsian representations (see Theorem <ref> and Theorem <ref>).
This approach comes with several consequences:
* We give a new metric model for the holomorphic tangent bundle of (S), which extends the metric model for the tangent bundle of the space of hyperbolic metrics up to isotopy as in <cit.> and <cit.> (see Remark <ref>).
* We give an integral description for the Goldman symplectic form on (S) and of the holomorphic extension of the Weil-Petersson metric to (S) (which coincides with the one showed in <cit.>), and provide a new proof of its holomorphicity and non-degeneracy.
* We present a new bound for the Schwarzian of Bers projective structures. More precisely, we give a lower bound for the distance from the boundary of any point inside the open bounded subset 𝒮_[c]^+⊂([c]) of the Schwarzian derivatives of Bers projective structures for [c]∈𝒯(S).
* We give new proofs and interpretations of a few well-known results, about the differential of the Schwarzian map, the description of the affine structures on 𝒯(S) from Bers embeddings, and McMullen's quasi-Fuchsian reciprocity Theorem.
§.§ The metric uniformizing approach for quasi-Fuchsian space
The complex Lie group (2,) acts on ^3 by isometries and by Möbius maps on its visual boundary ≅∂^3.
We say that a discrete and faithful representation ρπ_1(S)→(2,) is quasi-Fuchsian if its limit set Λ_ρ⊂ is a Jordan curve on , hence its complementary is the disjoint union of two topological disks. The reader can find a short introduction to this topic in Section <ref>.
The orientation induced on Λ_ρ gives the disks a natural orientation, leading to the identification of one of them, which we denote by Ω_ρ^+, with the complex disk 𝔻, and of the other one, which we denote by Ω_ρ^-, with the complex disk with the opposite orientation 𝔻.
Let 𝒞(S) denote the space of complex structures on the oriented surface S. The most important theorem about quasi-Fuchsian representations is Bers Simultaneous Uniformization Theorem (<cit.>), that we recall in the following version.
For all (c_1, c_2)∈, there exist:
* a unique quasi-Fuchsian representation ρπ_1(S)→(2, ℂ) up to conjugacy,
* a unique ρ-equivariant holomorphic diffeomorphism
f_1= f_1(c_1,c_2) (S, c_1) →Ω_ρ^+ ,
* a unique ρ-equivariant antiholomorphic diffeomorphism
f_2=f_2(c_1,c_2) (S, c_2) →Ω_ρ^- .
where Ω_ρ^+⊔Ω_ρ^-=∖Λ_ρ.
This correspondence defines a biholomorphism
𝔅(S).
In the paper, we will mostly see 𝔅 as an identification.
Now, for each c∈𝒞(S), the uniformization theorem provides a (π_1(S),(2,))-equivariant biholomorphism f (S, c)→ H⊂ from its universal cover to the upper half plane, and the hyperbolic metric in the conformal class of c is the pull-back through f of the Riemannian metric on the hyperbolic space in its half-plane model, namely
f^*( 1/Im(z)^2 dz· dz̄ )= -4/(f-f̄)^2 df· df̄
which, being π_1(S)-invariant, defines a hyperbolic metric on S.
Bers Theorem suggests a way to extend this construction, assigning to each element of , a symmetric 2-tensor on S in the following way:
g →Γ (Sym_2(TS, ℂ))
(c_1, c_2) ↦ g(c_1, c_2):= -4/(f_1-f_2)^2 df_1· df_2
where f_1=f_1(c_1,c_2) and f_2=f_2(c_1,c_2) are as in the statement of Bers Theorem <ref>, and Γ (Sym_2(TS, ℂ)) denotes the space of smooth sections of ℂ-valued symmetric 2-forms on TS (in other words, Sym_2(TS, ℂ)= Sym_2(TS, ℝ) ⊕ i Sym_2(TS, ℝ)). One can easily show that the tensors defined this way are well-posed on S, namely they depend neither on the choice of the affine chart of (containing the images of f_1 and f_2), nor on the conjugacy class of the quasi-Fuchsian representation in the construction. In addition, it is clearly π_1(S)-invariant, defining a symmetric -valued tensor on S. Finally, if c_1=c_2=:c, then g(c,c) is just the hyperbolic metric in the conformal class of c.
We call Bers metrics the bilinear forms in the form g(c_1,c_2) as in (<ref>).
In fact, Bers metrics admit an intrinsic characterization: they correspond to the connected component of positive complex metrics of constant curvature -1 that contains hyperbolic Riemannian metrics (see Section <ref> below).
In a sense, Bers metrics allow to uniformize in a way that extends the classic uniformization of complex structures with hyperbolic Riemannian metrics.
We denote the inverse map of (c_1,c_2)↦ g(c_1, c_2) as
(c_+, c_-){Bers metrics}→ ,
so (c_+, c_-)(g(c_1, c_2))=(c_1, c_2).
We define the holonomy of g=g(c_1, c_2) as [g]=([c_1], [c_2])∈(S).
A useful aspect of this approach is that (pointwise) holomorphic deformations of Bers metrics induce holomorphic deformations of the holonomy in (S) (see Lemma <ref>). In fact, the first part of the paper discusses an interesting type of holomorphic deformation, which is the following.
Let (c) denote the space of holomorphic quadratic differentials for the Riemann surface (S,c). Then, if g=g(c_1,c_2) is a Bers metric, for all q_1∈(c_1) and q_2∈(c_2) small enough,
g+q_1 and g+q_2
are Bers metrics as well. This comes with several interesting consequences, as explained in the following theorems (in the text, they follow from Proposition <ref>, Proposition <ref>, Theorem <ref>)
Let ∈ and consider the Bers metric g=g(c_1, c_2).
There exists an open subset U_g⊂(c_1), 0∈ U_g, such that, for all q_1∈ U_g, g+q_1 is a Bers metric.
The map
U_g →
q_1 ↦ [g+q_1]
gives a holomorphic local parametrization of the Bers slice {[c_1]}×𝒯(S), which is essentially independent from the choice of c_1 and c_2 in their isotopy classes [c_1]∈𝒯(S) and [c_2]∈𝒯(S).
Let ∈ and consider the Bers metric g=g(c_1, c_2).
There exists an open subset V_g⊂(c_2), 0∈ V_g, such that, for all q_2∈ V_g, g+q_2 is a Bers metric.
The map
V_g →
q_2 ↦ [g+q_2]
gives a holomorphic local parametrization of the Bers slice 𝒯(S)×{[c_2]}, which is essentially independent from the choice of c_1 and c_2 in their isotopy classes [c_1]∈𝒯(S) and [c_2]∈𝒯(S).
As a result, we get a uniformizing metric model of the holomorphic tangent bundle to quasi-Fuchsian space given by
T_([c_1],[c_2] )= {q_1+q_2 | q_1∈(c_1), q_2∈(c_2) },
which is a vector subspace of the space of smooth sections of Sym_2(TS, ℂ) and which extends the metric model for the tangent bundle of 𝒯(S) on the Fuchsian locus (see Remark <ref>).
Let g=g(c_1, c_2) be a Bers metric.
* There exists an open subset U_g⊂(c_1), 0∈ U_g, such that, for all q_1∈ U_g, g+q_1 is a Bers metric.
* For all q_1∈ U_g, c_+(g+q_1)= c_1.
* Let Φ_1,Φ_2∈Diff_0(S) and consider the Bers metric g= g(Φ_1^*(c_1), Φ_2^*(c_2)), then
[g+q_1]= [g+ ϕ_1^*(q_1)]
for all q_1∈(c_1) for which g+q_1 and g+ ϕ_1^*(q_1) are Bers metrics.
As a result, every q_1∈([c_1]) defines a well posed vector field on {[c_1]}×𝒯(S) by d/dt_| t=0 [g+tq_1]
* The map
U_g →{[c_1]}×𝒯(S)
q_1 ↦ [g+q_1]
is a local holomorphic parametrization of the Bers slice. As a result, a model for the holomorphic tangent bundle of {[c_1]}×𝒯(S) is given by
T_([c_1], [c_2])({[c_1]}×𝒯(S)) ≅([c_1])
Let g=g(c_1, c_2) be a Bers metric.
* There exists an open subset V_g⊂(c_2), 0∈ V_g, such that, for all q_2∈ V_g, g+q_2 is a Bers metric.
* For all q_2∈ V_g, c_-(g+q_2)= c_2.
* Let Φ_1,Φ_2∈Diff_0(S) and consider the Bers metric g= g(Φ_1^*(c_1), Φ_2^*(c_2)), then
[g+q_2]= [g+ ϕ_2^*(q_2)]
for all q_2∈(c_2) for which g+q_2 and g+ ϕ_2^*(q_2) are Bers metrics.
As a result, every q_2∈([c_2]) defines a well posed vector field on 𝒯(S) ×[c_2] by d/dt_| t=0 [g+tq_2]
* The map
V_g →𝒯( S)×{[c_2]}
q_2 ↦ [g+q_2]
is a local holomorphic parametrization of the Bers slice. As a result, a model for the holomorphic tangent bundle of 𝒯(S)×{[c_2]} is given by
T_([c_1], [c_2])(𝒯(S)×{[c_2]}) ≅([c_2])
§.§ Metric deformation and Schwarzian deformation
Bers Theorem associates to each pair (c_1,c_2)∈ two projective structures (f_1(c_1, c_2), ρ) and (f_2(c_1, c_2), ρ), for the complex structures c_1 and c_2 respectively.
Consider the maps
𝐒𝐜𝐡𝐰_+{Bers metrics} →𝐇𝐐𝐃(S)
g(c_1, c_2) ↦ Schw(f_1(c_1, c_2) )
and
𝐒𝐜𝐡𝐰_-{Bers metrics} →𝐇𝐐𝐃(S)
g(c_1, c_2) ↦ Schw(f_2(c_1, c_2))
where Schw denotes the Schwarzian derivative (see Section <ref>).
The following theorem (Theorem <ref> in the paper) shows the relation between the metric deformation of g with g+q_1 and g+q_2 and the deformation of the Schwarzian of the corresponding Bers' projective structures.
Let g=g(c_1,c_2) be a Bers metric.
Then,
𝐒𝐜𝐡𝐰_+(g+q_1)= 𝐒𝐜𝐡𝐰_+(g) -1/2 q_1
for all q_1∈(c_1) such that g+q_1 is a Bers metric, and
𝐒𝐜𝐡𝐰_-(g+q_2) = 𝐒𝐜𝐡𝐰_-(g) -1/2q_2
for all q_2∈(c_2) such that g+q_2 is a Bers metric.
§.§ The holomorphic extension of Weil-Petersson metric
A few extensions of the Weil-Petersson metric to (S) have been studied in literature.
We mention, for instance, that in <cit.> and <cit.>, the authors define and study a Riemannian extension.
In <cit.>, Loustau and Sanders show that the Weil-Petersson metric extends to a holomorphic Riemannian metric (see Section <ref> for the definition) on the quasi-Fuchsian space, whose real part is therefore a pseudo-Riemannian metric on (S). We show that the metric formalism allows to give an alternative proof of the existence and uniqueness of the holomorphic extension of the Weil-Petersson metric, providing a new explicit description of it.
Let us define it. Let (c_1, c_2)∈ and consider the corresponding Bers metric g=g(c_1, c_2).
The metric g induces a -bilinear form on the bundle Sym_2(T S)⊕ i Sym_2(T S) given by
<α, β>_g= ∑_i,j,k, ℓ=1^2 α(∂_i, ∂_j)β(∂_k,∂_ℓ) g^ikg^jℓ .
where {∂_1, ∂_2} is any local basis for TS and g^ij is the inverse matrix of the representative matrix g_ij.
Bers metrics come with a natural notion of area form compatible with the orientation (see Remark <ref>), namely
dA_g= -2i/(f_1-f_2)^2df_1∧ df_2 .
One can therefore define a -bilinear form on T_([c_1],[c_2])(S)= {q_1+q_2 | q_1∈(c_1), q_2∈(c_2) } by considering
α, β_g= 1/8∫_S <α, β>_g dA_g
The following theorem follows from Proposition <ref>, Theorem <ref>, Corollary <ref>, and Corollary <ref>.
The bilinear form _g only depends on g through its holonomy, and it defines a well-posed holomorphic Riemannian metric on (S) which coincides with the Weil-Petersson metric on the Fuchsian locus and which is invariant under the action of the mapping class group.
By uniqueness of the holomorphic extension (see Corollary <ref>), this coincides with the holomorphic extension introduced in <cit.>.
Explicitly, _g can be seen as follows:
* q_1, q'_1_g=0, for all q_1, q'_1∈(c_1).
* q_2, q'_2_g=0 for all q_2, q_2'∈(c_2) .
* Let g=ρ dzdw, q_1= dz^2 ∈(c_1), q_2=ψ dw^2∈(c_2), then
q_1, q_2_g=1/4 i ∫_S ·ψ/ρdz∧ dw
As a consequence we get an integral description of the Goldman symplectic form ω_G on (S) given by
ω_G(q_1, q_2) _|([c_1], [c_2])= 2 i ∫_S ·ψ/ρdz∧ dw
with Bers slices being Lagrangian (see Proposition <ref>).
§.§ Bounds for the Schwarzian of Bers projective structures
The maps 𝐒𝐜𝐡𝐰_+ and 𝐒𝐜𝐡𝐰_- defined above in (<ref>) and (<ref>) descend to maps from (S), namely
Schw_+(S) →(S)
([c_1], [c_2]) ↦ [Schw(f_1(c_1,c_2) )]
Schw_-(S) →(S)
([c_1], [c_2]) ↦ [Schw(f_2(c_1, c_2) )]
,
where [𝐒𝐜𝐡𝐰_+(g(c_1, c_2))]= Schw_+([c_1], [c_2]) and [𝐒𝐜𝐡𝐰_-(g(c_1, c_2))]= Schw_-([c_1], [c_2])
Both maps are known to be biholomorphisms onto their images (see for instance <cit.>).
In particular, for all given ([c_1], [c_2])∈𝒯(S)×𝒯(S),
𝒮^+_[c_1]= Schw_+({[c_1]}×𝒯(S))⊂([c_1])
𝒮^-_[c_2]:= Schw_-( 𝒯(S) ×{[c_2]})⊂([c_2])
are open subsets.
The shape of these open subsets is not clear in general. By the classic Kraus-Nehari theorem (<cit.>, <cit.>), we know that 𝒮^+_[c_1] (resp. 𝒮^-_[c_2]) contains the ball of radius 1/2 and is contained in the ball of radius 3/2, both centered at zero with respect to the infinity norm associated with the hyperbolic metric on ([c_1]) (resp. ([c_2])) (see Equation (<ref>)). By the works of Lempert <cit.> and Markovic <cit.>, 𝒯(S) cannot be biholomorphic to a convex domain in ^6g-6, hence 𝒮^+_[c_1] and 𝒮^-_[c_2] are not convex.
The metric formalism for T(S) allows to give explicit lower bounds on the radius of balls inside 𝒮^+_[c_1] and 𝒮^-_[c_2], as shown in the following Theorem (Corollary <ref> in the text, with finer estimates in Lemma <ref> and Theorem <ref>).
Let ∈, let g=g(c_1,c_2) denote the corresponding Bers metric, and let g_0=g(c_1, c_1) be the Riemannian hyperbolic metric in the conformal class of c_1.
Let
0< R:= 1/2min_S ( (1- |∂_z w/∂_z w| ) | dA_g /dA_g_0| )
where z and w are any local coordinates for c_1 and c_2 respectively and dA_g and dA_g_0 are the area forms of g and g_0.
Then,
B_∞(Schw_+([c_1], [c_2]), R)⊂𝒮^+_[c_1]
where B_∞ denotes the norm-infinity ball (with respect to the hyperbolic metric) on ([c_1]).
For the case c_1=c_2 we get R=1/2, which coincides with Kraus's Theorem in <cit.>.
Observe that the definition of R in (<ref>) depends on the choice of c_1 and c_2 in their isotopy classes [c_1]∈𝒯(S), [c_2]∈𝒯(S): in fact it would be interesting to determine for each c_1∈𝒞(S) what choice of c_2 in its isotopy class [c_2] maximizes the estimate R (also see Remark <ref>).
§.§ Relations with the renormalized volume and McMullen's reciprocity
Let (c_1, c_2) ∈, and g=g(c_1, c_2).
A classic model for the cotangent bundle of Teichmller space is given by the bundle of holomorphic quadratic differentials, hence T_[c_1]𝒯(S)≅([c_1]) and T_[c_2]𝒯(S)≅([c_2]). In the identification of T_[c]𝒯(S) with the space of Beltrami differentials Belt(c) for [c], the cotangent model works as follows: if X=βdz/dz and q= dz^2, then the 2-form ·β dz∧ dz defines a well-posed 2-form on S and
q(X)= Re ( ∫_S ·β idz∧ dz) .
Consider the functions Schw_+( [c_1],∙)𝒯(S)→([c_1]) and Schw_-(∙, [c_2])𝒯(S)→([c_2]). Since the targets are vector spaces, the derivatives are holomorphic quadratic differentials too.
A classic result in quasi-Fuchsian geometry is the following.
[McMullen's quasi-Fuchsian reciprocity]
Let ∈, X∈ T_[c_1]𝒯(S), Y∈ T_[c_2]𝒯(S). Then,
( ∂_Y Schw_+( [c_1],∙)) (X) = ( ∂_X Schw_-(∙, [c_2])) (Y) .
McMullen's reciprocity can actually be seen quite simply using the holomorphic extension of the Weil-Petersson metric on (S), providing an alternative description of the metric itself.
Let X∈ T_[c_1]𝒯(S) and Y∈ T_[c_2]𝒯(S).
Denote
𝐗= (X,0)∈ T_[c_1], [c_2](𝒯(S)×𝒯(S))
𝐘= (0, Y)∈ T_[c_1], [c_2](𝒯(S)×𝒯(S)) .
Let g=g(c_1, c_2)
( ∂_Y Schw_+( [c_1],∙)) (X) = - 𝐗, 𝐘_g = ( ∂_ X Schw_-(∙, [c_2])) (Y) .
As a consequence
Im(𝐗, 𝐘_g) = - 2∂_𝐗∂_𝐘 (𝒱_Ren)
where 𝒱_Ren:(S)→ is the renormalized volume function.
As in Remark <ref>, we fix the notation q_X= dz^2 with b_X= ∂_w z/ρdw/dw, and q_Y = ψdw^2 ∈(c_2) with b_Y=ψ∂_zw/ρdz/dz.
Recalling Equations (<ref>) and (<ref>).
∂_Y( Schw_+( g)) (b_X) = ( d/dt_|0 Schw_+(g+tq_Y) ) (b_X) = -1/2 q_Y (b_X)=
= -1/2∫_S ·ψ∂_zw/ρi/2 dz∧ dz= -i/4∫_S ψ/ρ dz∧ dw= -𝐗, 𝐘_g .
In the same fashion,
∂_X( Schw_-( g)) (b_Y) = ( d/dt_|0 Schw_-(g+tq_X) ) (b_Y) = -1/2 q_ X (b_Y)=
=-1/2∫_S ψ·∂_w z/ρi/2 dw ∧ dw
= -i/4∫_S ψ/ρ dz∧ dw= -𝐗, 𝐘_g .
As a consequence, we get an alternative description of the imaginary part of _g in terms of the Hessian of the renormalized volume.
Let 𝐗∈ T_[c_1](𝒯(S)×{[c_2]}) and 𝐘∈ T_[c_2]({[c_1]}×𝒯(S) ). Then,
Re(𝐗, 𝐘_[g]) = - 2∂_𝐗∂_𝐘 (𝒱_Ren)
where 𝒱_Ren:(S)→ is the renormalized volume function.
As a result,
𝐗, 𝐘_[g]= - 2∂_𝐗∂_𝐘 (𝒱_Ren)+ 2i ∂_i𝐗∂_𝐘 (𝒱_Ren)
By the work of [missing reference: KS], Schw_+(g) (b_X)= 4∂_X 𝒱_Ren. As a consequence, ∂_X( Schw_+( g)) (b_Y)= ∂_Y( Schw_-( g)) (b_X) = 4∂_X∂_Y (𝒱_Ren), which concludes the proof.
§ AKNOWLEDGEMENTS
We would like to thank Francesco Bonsante for some very useful technical suggestions. We also thank Andrea Tamburelli, Andrea Seppi, and Filippo Mazzoli for some interesting and inspiring conversations on immersions into (2,), which indirectly helped us develop some of the topics of this paper. We also thank Jean-Marc Schlenker for his useful mini-course at the University of Luxembourg, which covered some of the background ideas of this paper. Finally, we thank Nathaniel Sagman for his suggestions on the state of the art.
§ FUNDING
The author has been supported by the FNR OPEN grant CoSH (O20/14766753/CoSH).
§ ESSENTIAL BACKGROUND AND NOTATION
Throughout the whole paper, S denotes a closed oriented surface of genus g≥ 2. We will use S to denote the surface with opposite orientation.
§.§ Notation
Geometric structures (such as complex structures, Bers metrics, quadratic differentials, and tensors in general) on S will often be identified with their π_1(S)-invariant lifts on the universal covering space S.
§.§ The Teichmüller space
Let 𝒞(S) denote the space of complex structures on S that are compatible with its orientation.
Let Diff_+(S) denote the group of orientation-preserving diffeomorphisms of S, let Diff_0(S) denote the group of diffeomorphisms of S that are isotopic to the identity, and define the mapping class group of S as MCG(S):=Diff_+(S)/Diff_0(S).
The Teichmüller space 𝒯(S)=𝒞(S)/Diff_0(S) is the space of complex structures on S up to isotopy. We denote the equivalence class of c with [c]. The mapping class group MCG(S) acts naturally on 𝒯(S).
We say that a representation ρπ_1(S)→(2,) is (orientation-preserving and) Fuchsian if it is discrete and faithful and its extension to the boundary ∂π_1(S)≅∂S→∂^2 is an orientation-preserving homeomorphism. The Fuchsian space Fuch(S) is the space of conjugacy classes of Fuchsian representations by elements in (2,).
The Uniformization Theorem provides a correspondence between 𝒯(S) and the Fuchsian space Fuch(S), obtained as follows. H⊂ denoting the upper half-plane, for all c∈𝒞(S) there exists a biholomorphism (S, c) → H which is unique up to conjugation with elements of (2,), and which is equivariant for a Fuchsian ρπ_1(S)→(2,): the map sending c to the conjugacy class of ρ induces a bijection between 𝒯(S) and Fuch(S) called holonomy map which assigns by pull-back a smooth structure on 𝒯(S).
By pulling-back the hyperbolic metric of ^2 in the upper-half plane model through the uniformizing map, we get a bijection between 𝒯(S) and the space Met_-1(S) of hyperbolic metrics on S up to isotopy, whose inverse is just the map sending a Riemannian metric to its conformal class.
§.§ Beltrami coefficients
The content of Sections <ref> and <ref> are well-known to experts. There are several surveys on these topics, e.g. <cit.>,<cit.>, <cit.>, <cit.>, etc.
We recall here some essential aspects of Beltrami equations and Beltrami differentials leading to the description of the tangent bundle to Teichmüller space.
Fix c_0∈𝒞(S), let ρ_0 be its Fuchsian holonomy, and denote Γ=ρ_0(π_1(S))⊂(2,).
We define the subspace L^∞(Γ)⊂ L^∞(H,) given by Lebesgue-integrable functions μ H→ℂ such that μ(γ (z))= γ'(z)/γ'(z)μ(z) for all γ∈Γ: equivalently, μdz/dz (which can be seen as an unusual [but traditional] expression for μ dz⊗∂_z) is Γ-invariant on S. Also denote L^∞_1(Γ) as the intersection of L^∞(Γ) with the unit ball of L^∞(H,).
Observe that L^∞(Γ) can be seen as the tangent space in zero of L^∞_1(Γ): we call Γ-Beltrami coefficients (or just Beltrami coefficients) elements in L^∞_1 (Γ) and denote them with μ, while we call Γ-Beltrami differentials (or just Beltrami differentials) elements in L^∞(Γ) and denote them with β=μdz/dz.
By the classic theory of Beltrami equations (see <cit.>, <cit.>), for all μ such that μ_∞<1, there exists a unique quasi-conformal map f^μ→ such that:
* ∂ f^μ/∂z= μ∂ f^μ/∂ z almost everywhere on H,
* the restriction of f^μ to (-H) is holomorphic,
* f^μ(0)=0 and f^μ(1)=1
Moreover, f^μ is equivariant for some homomorphism ρ∘ρ_0^-1Γ→(2,), with ρπ_1(S)→(2,) being a Fuchsian representation. Hence f^μ induces a quasi-conformal map between the Riemann surfaces H/Γ→H/(ρ∘ρ_0^-1)(Γ).
Define Belt_1(ρ_0) as the quotient of L^∞_1(Γ) defined by the relation μ∼μ' equivalently if (with reference to the construction above) f^μ≡ f^μ' on or f^μ and f^μ' are equivariant for the same Fuchsian representation ρπ_1(S)→(2,).
Moreover, these maps give the bijections Belt_1(ρ_0)Fuch(S)𝒯(S) which are in fact diffeomorphisms[Notice that the space Belt_1(ρ_0) only depends on ρ_0 through its image, but the holonomy map Belt_1(ρ_0)→Fuch(S) depends on ρ_0, motivating our notation Belt_1(ρ_0) instead of Belt_1(Γ).].
Under this identification, the tangent space T_[c_0]𝒯(S) can be seen as the quotient of L^∞(Γ) with the kernel of the differential of the projection L^∞_1(Γ)→Belt_1(ρ_0), as we describe in the next section.
§.§ Tangent and cotangent bundle to Teichmüller space
Let c∈𝒞(S). A holomorphic quadratic differential on (S,c) is a holomorphic section of the square of the holomorphic cotangent bundle. We will denote the complex vector space of holomorphic quadratic differentials for c with (c). One can see (the lift to the universal cover of) a holomorphic quadratic differential on S as a π_1(S)-invariant tensor dz^2 on S, where z is a global holomorphic coordinate for (the lift of) c on S, and S→ℂ is c-holomorphic.
We also denote ([c])=⋃_c∈ [c](c)Diff_0(S), which is naturally a complex vector space with the natural identification (c)≅([c]). By Riemann-Roch Theorem, ([c]) has complex dimension 3g-3.
Recall the notation of the previous paragraph. The identification Belt_1(ρ_0)𝒯(S) defined above actually allows to give a description of the holomorphic tangent and cotangent space to 𝒯(S) in [c_0] as follows.
Let ρ_0 be the Fuchsian representation corresponding to c_0∈𝒞(S), and regard L^∞(ρ_0(π_1(S))) as the tangent space to L^∞_1(ρ_0(π_1(S))) in zero.
For all q= dz^2∈(c_0), β=μdz/dz∈L^∞(ρ_0(π_1(S))), the 2-form μ dz∧ dz on S is π_1(S)-invariant, so one can consider the pairing defined by
q(β):=∫_S ·μ i/2 dz∧ dz .
The vector subspace
𝒩={β∈L^∞(ρ_0(π_1(S))) | q(β)=0 for all q∈(c_0) }.
is precisely the kernel of the differential of the quotient map L^∞_1(ρ_0(π_1(S)))→Belt_1(ρ_0) in zero.
As a result, the tangent space in [c_0] to Teichmüller space, canonically identified with T_[0]Belt_1(ρ_0), can be seen as the finite-dimensional vector space
T_[c_0]𝒯(S)≅Belt([c_0]):= L^∞(ρ_0(π_1(S)))𝒩 .
Finally, denote with h_0 the hyperbolic metric in the conformal class of c_0. The map
(c_0) →Belt([c_0])
q ↦[ q/h_0]= [ dz/α dz]
where h_0=α dz dz, is a linear isomorphism <cit.>, and it gives a nice set of representatives for the elements of Belt([c_0]). The Beltrami differentials of the form q/h_0 as above are called harmonic Beltrami differentials.
Since q(q/h_0)>0 for all q∈(c_0), we get that the pairing on ([c_0])×Belt([c_0]) given by q([β]):=q(β) as in Equation (<ref>) is well-posed and non-degenerate, thus we get the identification T^*_[c_0] 𝒯(S)≅([c_0]).
Both ([c_0]) and Belt([c_0]) are naturally complex vector spaces, providing a quasi-complex structure on T 𝒯(S) and consistently on T^*𝒯(S). As we will see in Section <ref>, this quasi-complex structure is integrable, and 𝒯(S) has a natural structure of complex manifold.
We will denote with HQD(S) the space of quadratic differentials which are holomorphic with respect to some complex structure on S compatible with the orientation (this is an infinite-dimensional Banach manifold) and with (S)= HQD(S)Diff_0(S) the vector bundle of rank 6g-6 on 𝒯(S) which is isomorphic to its cotangent bundle.
§.§ Complex projective structures
The complex Lie group (2,) acts by isometries on the 3-dimensional hyperbolic space ^3 and on its visual boundary at infinity by Möbius maps.
A (complex) projective structure on S is a ((2,),)-structure, namely a maximal atlas of charts to open subsets of with changes of coordinates being restrictions of Möbius maps. By the classic theory of (G,X)-structures, this is equivalent to giving a developing map devS→ that is equivariant for some representation ρπ_1(S)→(2,), called the holonomy of the projective structure. A nice survey on complex projective structures is <cit.>.
The space of projective structures up to isotopy, that we will denote with 𝒫(S), is a complex manifold, and the forgetful map 𝒫(S)→𝒯(S) that maps a projective structure to the complex structure induced by the atlas is smooth (in fact holomorphic as we see in the next section).
Let 𝒳(π_1(S), (2,)) denote the (2,)-character variety, which is the GIT quotient (or, from the topological viewpoint, the greatest Hausdorff quotient) of the space of representations into (2,) by conjugation, namely {ρπ_1(S)→(2,)}(2,). The character variety 𝒳(π_1(S), (2,)) is naturally a complex algebraic variety, and the holonomy map
hol𝒫(S) →𝒳(π_1(S), (2,))
[(dev,ρ)] ↦ [ρ]
is a local biholomorphism.
A projective structure on S can be encoded in a tensor on S by taking the Schwarzian derivative of the developing map, let us define it. Given an open subset Ω⊂ and a holomorphic map FΩ→, we denote its Schwarzian derivative as the holomorphic quadratic differential
Schw(f):= ( (f''(z)/f'(z))'-1/2(f''(z)/f'(z))^2 ) dz^2 .
The key properties of the Schwarzian derivative are that Schw(f)=0 if and only if f is Möbius, and the chain rule Schw(f∘ g)=g^*(Schw(f)) +Schw(g). Given a projective structure (f,ρ) on S, denoting with c its complex structure, the properties of the Schwarzian derivative imply that Schw(f) is a π_1(S)-invariant holomorphic quadratic differential on (S,c), hence an element of (c).
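For instance, a direct computation illustrates the first property: for f(z)=(az+b)/(cz+d) with ad-bc=1 one has
f'(z)= 1/(cz+d)^2 , f''(z)= -2c/(cz+d)^3 , f''(z)/f'(z)= -2c/(cz+d) ,
so that (f''/f')'(z)= 2c^2/(cz+d)^2 = 1/2 (f''(z)/f'(z))^2, and indeed Schw(f)=0.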
Denoting with 𝒫([c]) the isotopy classes of projective structures with complex structure in [c], the map
𝒫([c]) →([c])
[(f,ρ)] ↦ [Schw(f)]
is a biholomorphism.
§.§ Quasi-Fuchsian representations and Bers Simultaneous Uniformization
Let ρπ_1(S)→(2,) be a representation. We define its limit set Λ_ρ as the topological boundary of the orbit of any point in .
We say that a discrete and faithful representation ρπ_1(S)→(2,) is quasi-Fuchsian if its limit set is a Jordan curve on . The quasi-Fuchsian space is the space of conjugacy classes of quasi-Fuchsian representations, namely
(S)={ρπ_1(S)→(2,) quasi-Fuchsian representation}(2,)
which is an open smooth subset of the character variety 𝒳(π_1(S),(2,)), hence a smooth complex manifold.
Let ρ be a quasi-Fuchsian representation. The orientation on S induces an orientation on ∂π_1(S) and hence on Λ_ρ, which in turn gives a natural orientation to the connected components of ∖Λ_ρ: one of the two discs, that we name Ω_ρ^+, is therefore identified with the complex disk 𝔻, while the other one, that we name Ω_ρ^-, is identified with 𝔻, the complex disk with opposite orientation. The representation ρ defines a free and properly discontinuous action of π_1(S) on both Ω_ρ^+ and Ω_ρ^-, defining two Riemann surfaces on S and S respectively.
Bers Simultaneous Uniformization Theorem, as stated in Theorem <ref>, allows to build a diffeomorphism
𝔅𝒯(S)×𝒯(S)(S),
(which in the rest of the paper will often be seen as an identification) associating to every pair ([c_1],[c_2]) a unique [ρ]∈(S) such that there exists an (essentially unique) projective structure [(f_1,ρ)]∈𝒫(S) with f_1(S)=Ω_ρ^+ and induced complex structure [c_1], and an (essentially unique) projective structure [(f_2, ρ)]∈𝒫(S) with f_2(S)=Ω_ρ^- and induced complex structure [c_2].
The diffeomorphism 𝔅 in Equation (<ref>) is in fact a biholomorphism in the following sense. The map 𝔅 maps each 𝒯(S)×{[c_2]} (resp. each {[c_1]}×𝒯(S)) to a complex submanifold. The complex structure on 𝒯(S) (resp. 𝒯(S)) defined by pull-back by 𝔅 is independent from the choice of [c_2]∈𝒯(S) (resp. [c_1]∈𝒯(S)), and it is consistent with the almost complex structure on 𝒯(S) mentioned in the end of Section <ref>.
We call Bers slices both the subsets of the form
([c_1],∙):={[c_1]}×𝒯(S) ,
(∙, [c_2]):=𝒯(S)×{[c_2]} ,
and their images through 𝔅.
The projective structures [(f_1,ρ)] and [(f_2, ρ)] defined from 𝔅^-1([ρ]) are called Bers projective structures.
Teichmüller space 𝒯(S) is biholomorphic to an open subset of ℂ^3g-3. With respect to this complex structure, the projections Belt(S)→𝒯(S) (with fibers Belt([c])) and (S)→𝒯(S) have a natural structure of holomorphic vector bundles and are isomorphic to the holomorphic tangent and cotangent bundles respectively. The forgetful map 𝒫(S)→𝒯(S) is holomorphic as well, and the action of the mapping class group MCG(S) on 𝒯(S) is by biholomorphisms.
§.§ The Weil-Petersson metric
The description through harmonic Beltrami differentials makes it possible to define a Riemannian metric on Teichmüller space 𝒯(S), called the Weil-Petersson metric, as follows. Let z be a global holomorphic coordinate for c on S, q_1=_1 dz^2 and q_2=_2 dz^2 denote elements in (c) and h_0=α_0 dzdz be the hyperbolic metric in the conformal class c. Then, for all [q_1/h_0], [q_2/h_0]∈Belt([c]) the Weil-Petersson Riemannian metric is defined as
⟨[q_1/h_0], [q_2/h_0]⟩ _WP := q_1( q_2/h_0) =Re(∫_S _1 ·_2/α_0i/2 dz∧ dz) .
The Weil-Petersson metric _WP and the complex structure determine on 𝒯(S) an MCG(S)-invariant Kähler manifold structure, together with the symplectic form
ω([q_1/h_0], [q_2/h_0]):= [iq_1/h_0], [q_2/h_0]_WP = Im(∫_S _1 ·_2/ρ_0i/2 dz∧ dz) .
In the identification of 𝒯(S) with Fuch(S), the symplectic form ω coincides up to a multiplicative factor 8 with the Goldman symplectic form ω_G=8ω, a natural symplectic form arising in the character variety of a wide class of real and complex Lie groups G (see <cit.>). When G is a complex Lie group, the Goldman symplectic form is -bilinear and holomorphic with respect to the natural complex structure on the character variety. This class of Lie groups includes (2,), and the restriction of its Goldman symplectic form to Fuch(S) coincides with the Goldman symplectic form for (2,).
§.§ Holomorphic Riemannian metrics
We now recall some essential aspects of holomorphic Riemannian metrics; a more detailed treatment can be found in <cit.>.
Let 𝕄 be a complex manifold. Holomorphic Riemannian metrics on complex manifolds can be seen as a natural -bilinear analog of pseudo-Riemannian metrics: in fact, a holomorphic Riemannian metric on 𝕄 is a nowhere-degenerate holomorphic section of the space of symmetric ℂ-bilinear forms on the holomorphic tangent bundle.
A trivial example is ℂ^N with the holomorphic Riemannian metric on Tℂ^N ≅ℂ^N ×ℂ^N defined by
v, w_ℂ^N=∑_k=1^N v_k w_k.
To see a couple of less trivial examples, observe that the metric above descends to the holomorphic Riemannian metric on the compact manifold ℂ^nℤ^n+iℤ^n, or consider the complex Killing form on a complex semisimple Lie group (see for instance <cit.> for (2,)). Another example that we will mention in the paper is given by the space of oriented geodesics of ^3, which admits a unique (2,)-invariant holomorphic Riemannian metric of constant curvature -1 (see Section <ref> and <cit.>).
The real and the imaginary part of a holomorphic Riemannian metric on 𝕄=𝕄^n are pseudo-Riemannian metrics of signature (n,n). To see this, observe that if (e_1,… e_n) is an orthonormal basis for a holomorphic Riemannian metric on T_p𝕄, then the bases (e_1,…, e_n, ie_1, …, ie_n) and (√(i) e_1, …, √(i) e_n, i√(i) e_1, …, i√(i) e_n) are orthonormal for Re and Im respectively.
Holomorphic Riemannian metrics come with a notion of Levi-Civita connection: namely there exists a unique affine connection,
DΓ(T𝕄) →Γ(End_ℝ(T𝕄))
X ↦ D X
being torsion free and compatible with the metric, namely dX,Y= DX, Y+X, DY. The affine connection D coincides with the Levi-Civita connection of both the real and the imaginary part of the metric.
The metric and its Levi-Civita connection determine a Riemann curvature tensor R(X,Y,Z, W)= D_X D_Y Z -D_YD_X Z- D_[X,Y] Z, W which is -multilinear.
Finally, let V<T_p 𝕄 be a complex vector space with _ V=2 and such that the restriction of to V is non-degenerate, namely there exist two -linearly independent vectors X,Y∈ V such that X,Y 0. Then, we can define the sectional curvature K(V) as K(V):=R(X,Y,Y,X)/X,XY,Y-X,Y^2, whose definition is independent from the choice of the linearly independent vectors X,Y in V.
Similarly as for pseudo-Riemannian metrics, for all n≥ 2 and k∈ℂ there exists a unique n-dimensional simply-connected, complete holomorphic Riemannian manifold of constant sectional curvature k (<cit.>).
§ BERS METRICS
§.§ Hyperbolic complex metrics and Bers Theorem
As we already mentioned, throughout the paper, we will assume that S is a closed connected oriented surface of genus greater than or equal to 2. We denote with S the surface with opposite orientation.
We need to recall some essential technical tools in order to state Theorem <ref>, which will be used in some of the proofs of this paper. Most of the content of this subsection follows from <cit.>, with a few changes in the notation.
A complex metric on S is a smooth section g of Sym_2(TS) + i Sym_2(TS) such that g is non-degenerate, namely for all p∈ S the ℂ-bilinear extension of g_p to the complexified tangent space g_pℂ T_pS×ℂ T_pS→ℂ is a non-degenerate symmetric bilinear form. This notion includes Riemannian metrics on S.
We equip the set of complex metrics on S with the C^∞ topology.
With a simple linear algebra argument, one has that for each p∈ S the set {X∈ℂ T_pS | g_p(X,X)=0} is the union of two complex lines in ℂ T_pS, corresponding to two points of
ℙ(ℂ T_pS ). We will call these two points the isotropic directions of g, and we call isotropic vectors the vectors X∈ TS such that g(X,X)=0.
Observe that, for each p∈ S, ℙ(ℂ T_pS)∖ℙ (T_pS) is homeomorphic to a 2-sphere from which we remove an equator, so it has two connected components homeomorphic to disks.
A positive complex metric is a complex metric g such that:
* there are no non-zero isotropic vectors on TS, namely g(v,v)=0 with v∈ TS if and only if v=0;
* the two isotropic directions of g in ℙ (ℂ T_p S) are points in ℙ(ℂ T_pS)∖ℙ(T_pS) that lie in different connected components.
One can see that a Riemannian metric g is a positive complex metric: denoting with z a complex coordinate around p∈ S for g, its isotropic directions are Span_(∂_z) and Span_(∂_z), which lie in different connected components of ℙ(ℂ T_pS)∖ℙ(T_pS).
Every complex metric g has a unique Levi-Civita connection, namely a torsion free affine connection ∇Γ(ℂ TS)→Γ( End(ℂ TS) ) such that
d_X (g(Y, Z) )= g(∇_X Y, Z) + g(Y, ∇_X Z)
for all X, Y, Z∈Γ(ℂ TS). This induces the definition of a ℂ-multilinear curvature tensor R_g and a curvature K_g S→ℂ.
Complex metrics of constant curvature K_g=-1 have a particularly interesting geometric meaning in terms of immersions inside 𝔾=×∖Δ, which can be interpreted as the space of (maximal, oriented, unparametrized) geodesics of ^3.
The complex manifold can be equipped with a holomorphic Riemannian metric (see Section <ref>) that can be defined as follows: let (U,z) be an affine chart for , then in the chart (U× U∖Δ, z× z=(z_1,z_2)) the metric can be written as
_= -4/(z_1-z_2)^2dz_1· dz_2 .
This description is independent from the affine chart (U,z). This holomorphic Riemannian metric is invariant under the diagonal action of (2,ℂ) on and has constant sectional curvature -1. The isometry group () of is generated by the diagonal action of (2,ℂ) and by the diagonal swap (z_1, z_2)↦ (z_2, z_1).
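For instance, invariance under the inversion z↦ 1/z can be checked directly: d(1/z_k)= -dz_k/z_k^2 and (1/z_1-1/z_2)^2= (z_1-z_2)^2/(z_1^2 z_2^2), so
-4/(1/z_1-1/z_2)^2 d(1/z_1)· d(1/z_2) = -4 z_1^2 z_2^2/(z_1-z_2)^2·dz_1· dz_2/z_1^2 z_2^2 = -4/(z_1-z_2)^2 dz_1· dz_2 ;
together with the immediate invariance under affine maps z↦ az+b, this gives the invariance under the diagonal Möbius action.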
Given a smooth immersion σ=(σ_1, σ_2)S→= ×∖Δ, with σ_1, σ_2 local diffeomorphisms with opposite orientations, the tensor σ^*(_) is a positive complex metric of constant curvature -1.
Conversely, given a positive complex metric g of constant curvature -1, there exists a unique immersion σ=(σ_1, σ_2)S→ such that σ_1 and σ_2 are local diffeomorphisms with opposite orientations and that g=σ^*. Such immersion σ is unique up to post-composition with an element of ().
As a consequence of the uniqueness of the isometric immersion, if g is π_1(S)-invariant, then the isometric immersion σ is equivariant for a representation ρπ_1(S)→().
A class of equivariant immersions of S into is suggested by Bers Simultaneous Uniformization Theorem. In the rest of the paper, we will use the notation as in its statement in Theorem <ref>.
For all (c_1, c_2)∈, there exists:
* a unique quasi-Fuchsian representation ρπ_1(S)→(2, ℂ) up to conjugacy,
* a unique ρ-equivariant holomorphic diffeomorphism f_1= f_1(c_1,c_2) (S, c_1) →Ω_+(ρ),
* a unique ρ-equivariant antiholomorphic diffeomorphism f_2=f_2(c_1,c_2) (S, c_2) →Ω_-(ρ).
This defines a biholomorphism (S).
The two functions (c_1, c_2)↦ f_1(c_1, c_2) and (c_1, c_2)↦f_2(c_1, c_2) are continuous when the targets are regarded as the topological spaces of developing maps of projective structures for S and S respectively, equipped with the C^∞-topology.
Each (c_1,c_2)∈ defines therefore a ρ-equivariant immersion
(f_1, f_2)S→×∖Δ=: ,
therefore the pull-back positive complex metric can be locally written as
g(c_1, c_2)=(f_1,f_2)^*(_) = -4/(f_1-f_2)^2 df_1· df_2
We therefore have a continuous map
g →{positive complex metrics with constant curvature -1}
↦ g := -4/(f_1- f_2)^2 df_1· df_2 .
We call Bers metrics the positive complex metrics obtained by this construction, namely the ones in the image of Equation (<ref>).
The holonomy of the Bers metric g(c_1,c_2) is the element ∈(S).
Observe that, if f_2=f_1, with f_1 onto the upper half plane H⊂ℂ, then
g(c_1, c_1)= 1/(Im(f_1))^2df_1· df_1
is the hyperbolic Riemannian metric on S in the conformal class of c_1.
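Indeed, writing f̄_1 for the complex conjugate of f_1 (so that here f_2= f̄_1 takes values in the lower half-plane), one has (f_1-f̄_1)^2= (2i Im(f_1))^2= -4 Im(f_1)^2, so -4/(f_1-f̄_1)^2 df_1· df̄_1 = df_1· df̄_1/Im(f_1)^2, the pull-back by f_1 of the hyperbolic metric of the upper half-plane.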
<cit.>
The space of Bers metrics is an open connected component of the space of positive complex metrics of constant curvature -1, which in turn is an open subset of the space of complex metrics of constant curvature -1.
Let us now make a few remarks about Bers metrics that will help us handle them.
It is simple to construct the continuous inverse of (<ref>).
Given a Bers metric g we can recover the original complex structures from the isotropic directions of the ℂ-bilinear extension to ℂ TS, in the following way.
Assume v∈ℂT_pS, then observe that
g(v,v) =0 -4/(f_1-f_2)^2df_1(v)df_2(v)=0
df_1(v)=0 or df_2(v)=0 .
Denote with z and w the local coordinates for c_1 and c_2 respectively. Then df_1(v)=0 if and only if v∈ Span_ℂ (∂ _z ), and df_2(v)=0 if and only if v∈ Span_ℂ (∂ _w ).
As a result, g(v,v)=0 if and only if v∈ Span_ℂ(∂ _z) ∪ Span_ℂ (∂ _w ).
So the two complex structures c_1 and c_2 correspond to the unique quasicomplex structures J_1, J_2∈ End(TS) whose (-i)-eigenspaces are the two isotropic directions of g. We denote this inverse map as
(c_+, c_-){Bers metrics}→ ,
so c_+(g(c_1, c_2))=c_1 and c_-(g(c_1, c_2))=c_2.
Given a Bers metric g=g(c_1,c_2), the complex metric g, defined by g (v,w)= g(v,w) for all v,w∈ TS, is a Bers metric too. We can see this as follows.
Assume g corresponds to the immersion (f_1, f_2)S→ and can therefore be written as
g=-4/(f_1-f_2)^2 df_1· df_2 ,
where, with a little abuse, we identify here f_1 and f_2 with their composition with any affine chart on . Then,
g= -4/(f_2-f_1)^2 df_2· df_1 ,
which means that g is the complex metric of constant curvature -1 corrisponding to the immersion (s∘f_2, s∘ f_1), where s is any orientation reversing involution of ^3 (e.g. the symmetry with respect to a totally geodesic submanifold ^2⊂^3). If f_1 and f_2 are ρ-equivariant, then s∘f_2, and s∘ f_1 are (s∘ρ∘ s)-equivariant, with s∘ρ∘ s being quasi-Fuchsian since its limit set is the image through s of the limit set of ρ.
One finally observes that if the data (c_1, c_2)∈ T(S) produces, through Bers theorem, the holonomy ρ and the embeddings f_1 and f_2, then, by uniqueness, the data (c_2, c_1) corresponds to the holonomy s∘ρ∘ s and to the embeddings s∘f_2, s∘ f_1. We therefore conclude that g is a Bers metric with g= g(c_2, c_1)
Finally, we remark that the ℂ-bilinear extension of g to ℂ TS satisfies g(X,Y)= g(X, Y), for all X,Y∈ℂ TS.
Let g=g(c_1,c_2) be a hyperbolic complex metric, and let z and w be local coordinates for c_1 and c_2 on an open subset U⊂ S, then we can write g=ρ dz· dw, with ρ U→ℂ^*.
As for Riemannian metrics, one can define the area form of g as each of the two nowhere-vanishing 2-forms defined by ±√(g(∂_x_1, ∂_x_1)g(∂_x_2, ∂_x_2)- (g(∂_x_1, ∂_x_2))^2) dx_1∧ dx_2, where (x_1,x_2) is any smooth coordinate chart. Since is simply connected, there is a consistent choice of an area form for each Bers metric depending on the orientation of S: if locally g=ρ dzdw, then its area form consistent with the orientation of S is
dA_g= i/2ρ dz ∧ dw .
§.§ Deforming Bers metrics with quadratic differentials
Recall that a useful notation we use in this paper is to consider, for each c∈𝒞(S), a global holomorphic coordinate z on S for the lifted complex structure c. In the same fashion, holomorphic quadratic differentials for c can be seen as π_1(S)-invariant holomorphic quadratic differentials dz^2 on S.
Also, we will often write Bers metrics on S as g=ρ dz dw (see Remark <ref>) identifying them with their π_1(S)-invariant lift to the universal cover S, with z and w being global coordinates on S and ρS→ℂ^*.
Let (c_1, c_2)∈𝒞(S)×𝒞(S), and let g=g(c_1,c_2) the corresponding Bers metric.
* For all q_1∈(c_1), g+q_1 is a complex metric of constant curvature -1, and
the subset U_g={q_1∈(c_1) | g+q_1 is a Bers metric} is an open star-shaped subset centered in 0 of (c_1).
Moreover, g+q_1 is a Bers metric if and only if it is a positive complex metric, and, for all q_1∈ U_g, c_+(g+q_1)=c_1.
* For all q_2∈(c_2), g+q_2 is a complex metric of constant curvature -1, and the subset V_g={q_2∈(c_2) | g+q_2 is a Bers metric} is an open star-shaped subset centered in 0 of (c_2).
Moreover, g+q_2 is a Bers metric if and only if it is a positive complex metric, and, for all q_2∈ V_g, c_-(g+q_2)=c_2.
* Let q_1= dz^2∈(c_1). Define β∈ End(ℂ TS) such that q_1=g(β·, ·) and denote ∇ the Levi-Civita connection of g.
β is self-adjoint for g since q_1 is symmetric.
* We prove that β satisfies Codazzi equation d^∇β=0.
Recall that by Remark <ref> g(∂_z, ∂_z)=0 and, since the bilinear form is non-degenerate, g(∂_z, v)=0 for some v∈ TS implies v∈ Span_(∂_z).
Therefore, since g(β(∂_z), ·) = q(∂_z, ·)=0, we have β(∂_z)=0.
On the other hand, g(β, ∂_z)=q(·,∂_z)=0, so β∈ Span_ℂ (∂_z⊗ dz).
Finally, g(β(∂_z),∂_z )= implies β= /g(∂_z, ∂_z)∂_z⊗ dz.
As a result,
(d^∇β)(∂_z, ∂_z)= ∇_∂_z (β(∂_z) ) - ∇_∂_z (β(∂_z)) - β([∂_z, ∂_z]) =
= ∇_∂_z (β(∂_z)) =
= ∂_z( /g(∂_z, ∂_z)) ∂_z + /g(∂_z, ∂_z)∇_∂_z∂_z =
= -/(g(∂_z, ∂_z))^2·∂_z (g(∂_z, ∂_z)) ∂_z + /g(∂_z, ∂_z)∇_∂_z∂_z .
By Equation (<ref>), g(∇∂_z, ∂_z)=1/2 d(g(∂_z, ∂_z))=0, so ∇_v∂_z∈ Span_ℂ(∂_z) for all v∈ TS. Therefore, (d^∇β)(∂_z, ∂_z)∈ Span_ℂ(∂_z), so (d^∇β)(∂_z, ∂_z)=0 if and only if g((d^∇β)(∂_z, ∂_z), ∂_z)=0. Finally:
g((d^∇β)(∂_z, ∂_z), ∂_z)= -/(g(∂_z, ∂_z))^2·∂_z (g(∂_z, ∂_z)) g(∂_z, ∂_z) + /g(∂_z, ∂_z)g( ∇_∂_z∂_z, ∂ z )=
=-/g(∂_z, ∂_z)·∂_z (g(∂_z, ∂_z)) +/g(∂_z, ∂_z)g( ∇_∂_z∂_z, ∂ z )=
= - /g(∂_z, ∂_z) g(∂_z, ∇_∂_z∂_z ) =
= - /g(∂_z, ∂_z) g(∂_z, ∇_∂_ z ∂_z) = 0
where the last steps follow from (<ref>) and by the fact that ∇ is torsion free.
* Consider α= id+1/2β. We prove that g(α·, α·)= g+q_1.
Indeed, β(∂_z)=0 and β(∂_z)∈ Span_(∂_z) implies that tr(β)=0 and det(β)=0, so from its characteristic polynomial we have that β^2=0, implying that
g(α·, α·)= g(α^2·, ·)= g+g(β·, ·)= g+q_1 .
* g+q_1 is a (nowhere degenerate) complex metric. In fact, denoting g=ρ dzdw and recalling that ρ 0 and ∂_zw0, for all a,b∈ℂ
0=(g+q_1)(a∂_z+b ∂_z, ·)= 1/2aρ dw + 1/2 a ρ∂_z w dz + a dz + 1/2 b ρ∂_zw dz
implies a=0 and b=0, so g+q_1 is non-degenerate.
* We prove that g+q_1 has constant curvature -1.
First of all, notice that d^∇α=0 and that det(α)= 1 since det(β)=tr(β)=0.
Using d^∇α=0 one can see that the affine connection on TS defined by
∇_X Y = α^-1(∇_X α(Y) )
is compatible with the complex metric g+q_1=g(α·, α·) and is torsion-free, so ∇ is the Levi-Civita connection of g+q_1. As a consequence, one can immediately check that the corresponding curvature tensor R is related to the curvature tensor R of ∇ via
R(X,Y) Z = α^-1 (R(X,Y) α(Z)) .
Finally, using that g has curvature -1 and the standard symmetries of the curvature tensor, the curvature K of g+q_1 is such that
K= (g+q_1) (R(X,Y)Y, X ) /(g+q_1)(X,X)· (g+q_1)(Y,Y) - ( (g+q_1)(X,Y) )^2 =
= g (R(X,Y) α(Y), α (X) ) /g(α(X),α(X) )g(α(Y),α(Y)) - (g(α(X),α(Y)) )^2 =
= ((α))g(R(X,Y)Y, X)/((α))^2 ( g(X,X)g(Y,Y) - (g(X,Y) )^2 ) = 1/(α)· (-1)=-1
* By Theorem <ref>, there exists an open subset U_g⊂(c_1) containing zero such that g+q_1 is a Bers metric for all q_1∈(c_1).
* We observe that, for all q_1∈(c_1), (g+q_1)(∂_z, ∂_z)=g(∂_z, ∂_z)+ q_1(∂_z,∂_z)=0.
As a result, by Remark <ref>, we see that c_+(g+q_1)= c_1 for all q_1∈ U_g.
* We prove that g+q_1 is Bers if and only if it positive and that U_g is star-shaped subset at 0∈(c_1).
Let q_1∈(c_1). A quick calculation shows that, for all t∈ℝ, the vector field
V_t:=∂_w - t q_1(∂_w, ∂_w)/2g(∂_w, ∂_z)∂_z∈Γ( ℂ TS)
is nowhere zero and isotropic for g+tq_1. By means of the stereographic projection, one can see that for all p∈ S the map
ℝ →ℙ(ℂ T_pS)
t ↦ [V_t]_p
is a monotone parametrization of a circle minus a point, namely {[a q_1(∂ w, ∂ w)/2g(∂_w, ∂_z)∂_z + b∂_w] | a,b∈ℝ, b≠ 0}.
Since Span_(∂_z)∈ℙ(ℂT_p S)∖ℙ(T_pS) is an isotropic direction for g+tq_1 for all p∈ S and t∈ℝ, and since V_t and ∂_z are everywhere linearly independent on S for all t∈ℝ, we deduce that g+tq_1 is a positive complex metric if and only if, for all p∈ S, Span_(V_t)∈ℙ(ℂT_p S) lies in the connected component of ℙ(ℂT_p S)∖ℙ(T_pS) which does not include Span_(∂_z): for each p∈ S, this condition is satisfied for t lying in an interval containing 0; thus, by taking the intersection over all p∈ S, we deduce that the set of t∈ℝ for which g+tq_1 is a positive complex metric is an interval containing 0. By Theorem <ref>, since Bers metrics define a connected component of the space of positive complex metrics, we conclude that g+q_1 is Bers if and only if it is positive. Finally, U_g is open because being positive is an open condition in the space of complex metrics.
* To prove the analog result for the metrics g+q_2, one can see that the proof of the first part of the statement can be adapted word for word to a proof of the second part.
Alternatively, we show how to prove the second part by using the first part and by conjugating complex metrics.
Given a complex metric g, we can define its conjugate g on TS as g(v,w)=g(v,w), so its ℂ-bilinear extension to ℂ TS satisfies g(X,Y)= g(X, Y).
One can easily check that, if ∇ is the Levi-Civita connection of g, then the Levi-Civita connection ∇ of g is ∇_X Y= ∇_XY for all X,Y∈Γ(ℂTS). As a result, one gets that the curvature tensors R of ∇ and R of ∇ are related by R(X,Y)Z= R(X, Y) Z, and
K_g= g(R(X,Y)Y,X)/g(X,X)g(Y,Y)- (g(X,Y))^2= g(R(X, Y)Y, X)/g(X, X)g(Y, Y) - (g(X, Y))^2 = K_g .
We therefore get that a complex metric has constant curvature -1 if and only if its conjugate does.
Assume now that g=g(c_1, c_2) is a Bers metric. Then, by Remark <ref>, g= g(c_2, c_1).
Therefore, g+ q_2 is a complex metric with constant curvature -1 for all q_2∈(c_2), because, by the first part of the proof, its conjugate g + q_2 is.
Moreover, by Remark <ref>, g+q_2 is a Bers metric if and only if its conjugate g+q_2 is, so it is a Bers metric for all the q_2 lying in a connected subset V_g⊂(c_2) which is star-shaped at zero. Finally, using again Remark <ref>, since c_+(g+q_2)= c_2 for all q_2∈(c_2) for which g +q_2 is a Bers metric, c_-(g+q_2) =c_2 for all q_2∈ V_g.
Let g=g(c_1, c_2), q_1∈ U_g⊂(c_1) as in Proposition <ref>. There are a few interesting computations we can make to understand the new Bers metric g+q_1.
Lift the tensors to the universal cover S, denote g=ρ_0 dz dw, and q_1= dz^2.
By construction of Bers metrics, there exists a smooth function ρS→ℂ^*, and an orientation-reversing (π_1(S),(2,))-equivariant map ηS→ with open image and which is a diffeomorphism onto its image, such that
ρ_0 dz dw + dz^2 = g+q_1 = ρ dz dη .
Since the symmetric product between 1-forms is non-degenerate, we get that
ρ dη= ρ_0 dw + dz .
From Equation (<ref>), we get a few interesting remarks:
* ρ= ρ_0 ∂_zw/∂_zη and ρ∂_z η= ρ_0∂_zw+ (from evaluating in ∂_z and in ∂_z respectively).
* ρ∂_w η =∂_w z and ρ∂_wη=ρ_0+∂_w z (from evaluating in ∂_w and in ∂_w respectively).
* ∂_z η/∂_zη= ρ_0 ∂_zw+/ρ_0 ∂_zw (from item (<ref>)).
* ∂_wη/∂_wη = ∂_w z/ρ_0 + ∂_w z (from item (<ref>)).
We get similar calculations from the study of g+q_2, with q_2=ψdw^2∈ V_g⊂(c_2).
In fact, we get that ρ_0 dz dw + ψdw^2= ρdηdw, so
ρdη= ρ_0 dz +ψdw
and similarly:
* ρ= ρ_0 ∂_w z/∂_w η and ρ∂_wη= ρ_0∂_wz+ψ (from evaluating in ∂_w and in ∂_w respectively).
* ρ ∂_zη= ψ∂_zw and ρ∂_z η= ρ_0 + ψ∂_zw (from evaluating in ∂_z and in ∂_z respectively).
* ∂_wη/∂_w η= ρ_0 ∂_wz+ψ/ρ_0 ∂_w z (from item <ref>).
* ∂_zη/∂_z η = ψ∂_zw/ρ_0+ ψ∂_zw (from item <ref>).
Let g=g(c_1, c_2) be a Bers metric, z,w coordinates for c_1, c_2, and g=ρ dzdw.
For all q_1∈(c_1), the infinitesimal deformation of c_-(g+tq_1) ∈𝒞(S) at t=0 is given by the Beltrami differential ∂_w z /ρ_0 d w/d w, where q_1= dz^2.
Similarly, for all q_2∈(c_2), the infinitesimal deformation of c_+(g+tq_2) ∈𝒞(S) at t=0 is given by the Beltrami differential ψ∂_zw/ρ_0 dz/d z, where q_2=ψdw^2.
Let us reconsider the methods of Remark <ref>.
For q_1∈(c_1), q_1= dz^2, we have
ρ dzdw +t dz^2=g+tq = ρ_t dzdw_t
for some π_1(S)-equivariant orientation-reversing open embedding w_tS→, with w= w_0.
For each sufficiently small t, we have that c_-(g+t q_1)∈𝒞(S) is the complex structure induced by w_t.
By item (<ref>) in Remark <ref>, we obtain that
∂_ww_t dw/∂_ww_t dw = t∂_w z/ρ_0 + t∂_w zdw/dw ,
so we conclude that d/dt_|t=0∂_ww_t d w /∂_ww_t dw= ∂_w z /ρ_0 d w/d w.
The proof for the case g+tq_2 is totally symmetric from item <ref> in Remark <ref>.
The last goal of this section is to show that the holonomy of the deformation g(c_1, c_2)+q_1 only depends on the isotopy classes [c_1] and [c_2], as stated in Proposition <ref>. In order to prove it, we need a few technical computations that will also play a key role in Section <ref>.
A Bers metric g on S induces a bilinear form on Sym_2(ℂ TS) as follows. Let (X_1,X_2) be any linear basis for ℂT_pS, denote g_i,j = g(X_i, X_j) and (g^i,j) = (g_i,j)^-1: then, for all α,β∈ Sym_2(ℂ T_pS), define
<α, β>_g = ∑_i,j,k, ℓ=1,2α(X_i,X_j)β(X_k, X_ℓ) g^i,k g^j,ℓ
The definition of <α, β>_g is independent of the choice of the basis {X_1, X_2}.
Let g=g(c_1, c_2) be a Bers metric, and let q_1∈(c_1), q_2∈(c_2). Then,
<q_1, q_2>_g dA_g= 2i ψ/ρ dz∧ dw
where g=ρ dzdw, q_1= dz^2, and q_2=ψ dw^2.
Consider on S the basis {∂_z, ∂_w}.
Then, 0=q_1(∂_z, ∂_z)=g(∂_z, ∂_z)=g(∂_w,∂_w)= q_2(∂_w,∂_w). Recalling the description of the area form as in Equation (<ref>), we get:
<q_1, q_2>_g dA_g = q_1(∂_w, ∂_w) q_2(∂_z , ∂_z ) 1/(g(∂_z, ∂_w))^2 dA_g=
= (∂_w z)^2 ψ(∂_zw)^2/1/4ρ^2 (∂_zw)^2 (∂_wz)^2·i/2ρ dz∧ dw=
=2iψ/ρdz∧ dw .
Let g=g(c_1, c_2) be a hyperbolic complex metric, let ϕ_1,ϕ_2∈Diff_0(S), and consider g'= g(ϕ_1^*(c_1), ϕ_2^*(c_2)).
Then, the 2-forms
<q_1, q_2>_g dA_g and <ϕ_1^*(q_1), ϕ_2^*(q_2)>_g' dA_g'
on S differ by an exact form.
In other words,
∫_S ψ/ρ dz∧ dw= ∫_S ϕ_1^*() ϕ_2^*(ψ)/ρ'ϕ_1^*(dz)∧ϕ_2^*(dw)
where g'= ρ' ϕ_1^*(dz) ϕ_2^*(dw).
Let (f_1,f_2)= (f_1 (c_1, c_2),f_2) be the pair of developing maps as in Bers Theorem (Theorem <ref>).
Observe that, for all ϕ_1,ϕ_2∈Diff_0(S), f_1 (ϕ_1^*(c_1), ϕ_2^*(c_2))=f_1∘ϕ_1 and f_2(ϕ_1^*(c_1), ϕ_2^*(c_2))= f_2∘ϕ_2.
Therefore, considering the global coordinates on S given by z=f_1, w=f_2, we have that
g(ϕ_1^*(c_1), ϕ_2^*(c_2)) =-4/((z∘ϕ_1)-(w∘ϕ_2))^2 (ϕ_1^*dz)· (ϕ_2^*dw)
We denote ρ(ϕ_1, ϕ_2)= -4/((z∘ϕ_1)-(w∘ϕ_2))^2, so that g(ϕ_1^*(c_1), ϕ_2^*(c_2))= ρ(ϕ_1, ϕ_2)(ϕ_1^*dz)· (ϕ_2^*dw).
Now, since pulling back a 2-form through an isotopy does not change its integral, we have that
∫_S ϕ_1^*() ϕ_2^*(ψ)/ρ(ϕ_1,ϕ_2)ϕ_1^*(dz)∧ϕ_2^*(dw) =
∫_S (ϕ_1^-1)^* ((∘ϕ_1) (ψ∘ϕ_2)/ρ(ϕ_1, ϕ_2)ϕ_1^*(dz)∧ϕ_2^*(dw)) =
= ∫_S · (ψ∘ϕ_2∘ϕ_1^-1 )/-4/(z- (w∘ϕ_2∘ϕ_1^-1 ))^2 dz∧ (ϕ_2∘ϕ_1^-1)^*(dw)
= ∫_S · (ψ∘ϕ_2∘ϕ_1^-1 ) /ρ(id, ϕ_2∘ϕ_1^-1) dz∧ (ϕ_2∘ϕ_1^-1)^*(dw) .
As a result, in the statement of the Lemma we can assume without loss of generality that ϕ_1=id and that we are only deforming c_2 with an isotopy.
Also, observe that in order to prove the statement it is sufficient to show the infinitesimal version of it, namely that for all smooth path ϕ_t∈Diff_0(S), ϕ_0=id, the 2-form
d/dt_|t=0( -1/4· (ψ∘ϕ_t) (z-(w∘ϕ_t))^2 dz∧ϕ_t^*(dw) )
is a π_1(S)-invariant exact form on S; the Lemma then follows by the generality of the initial g and by the fact that the form depends continuously on t and that S is compact, so that integration and differentiation with respect to t commute.
Denote X= d/dt _t=0ϕ_t, X being a tangent π_1(S)-invariant vector field on S, and denote X=α∂_w +β∂_w. Using the standard properties of the Lie derivative ℒ, we have that
d/dt_|t=0( · (ψ∘ϕ_t) (z-(w∘ϕ_t))^2 dz∧ϕ_t^*(dw))=
= (∂_X ψ) (z-w)^2 dz∧ dw- 2ψ(z-w)(∂_X w) dz ∧ dw +
+ ψ(z-w)^2 dz∧ℒ_X(dw)=
= dz∧( β (z-w)^2 dψ- 2 ψ(z-w) β d w +ψ(z-w)^2 dβ)=
= dz ∧ d((z-w)^2 ψβ)=
= - d((z-w)^2ψβ dz )
where the last step follows from the fact that dz is closed since it is c_1-holomorphic.
We only have to prove that the 1-form ω:= (z-w)^2ψβ dz is π_1(S)-invariant.
Let γ∈π_1(S), and denote ρ(γ)= [ a b; c d ]∈(2,ℂ). Then the π_1(S) invariance of X, c_1, c_2, q_1 and q_2 implies:
* γ^*(dz)= 1/(cz+d)^2dz
* (γ(z))= (cz+d)^4
* ψ(γ(w))= (cw+d)^4 ψ
* β(γ(w))= β(w)·1/(cw +d)^2 .
Hence,
γ^*ω = ( az+b/cz+d -aw+b/cw +d)^2 (cz+d)^4 (cw +d)^4 ψ1/(cw +d)^2β1/(cz+d)^2 dz =
=( (az+b) (cw+d) - (aw +b) (cz+d) )^2 ψβ dz =
= (z-w)^2ψβ dz =ω ,
and this concludes the proof.
Let g=g(c_1, c_2) and g'=g(ϕ_1^*(c_1), ϕ_2^*(c_2)) be two Bers metrics with the same holonomy, ϕ_1, ϕ_2∈Diff_0(S).
Then, for all q_1∈(c_1), q_2∈(c_2)
[g+tq_1]= [g'+tϕ_1^* (q_1)]
and
[g+tq_2] =[g'+ tϕ_2^*(q_2)]
for all the values of t∈ℝ for which (g+tq_1), (g'+tϕ_1^* (q_1)), (g+tq_2), (g'+ tϕ_2^*q_2) are Bers metrics.
We prove it for the case g+tq_1; the proof for g+tq_2 proceeds in the very same fashion.
Denote z'=z∘ϕ_1, w'= w∘ϕ_2 and '=∘ϕ_1, and g'=ρ_0' dz'dw'.
By Lemma <ref>, we know that d/dt_|t=0 c_-(g+tq_1)= ∂_w z /ρ_0 d w/d w and d/dt_|t=0 c_-(g'+tϕ_1^*(q_1))= ' ∂_w' z' /ρ'_0 d w'/d w'.
As seen in Section <ref>, the two infinitesimal deformations determine the same tangent vector in T_[c_2]𝒯(S)≅Belt(c_2) if and only if their pairing with ([c_2]) coincide, namely if and only if for all q_2=ψdw^2∈(c_2), we have
∫_S ∂_w z/ρ_0ψ dw∧ dw = ∫_S ' ∂_w' z'/ρ'_0ψ' dw'∧ dw'.
Since dz= ∂_w z dw+ ∂_w z dw and dz'= ∂_w' z' dw'+ ∂_w' z' dw', Equation (<ref>) is equivalent to
∫_S /ρ_0ψ dz∧ dw = ∫_S ' /ρ'_0ψ' dz'∧ dw'.
which is true by Lemma <ref>.
We can finally conclude that, for all q_1∈(c_1), we have a well-defined vector field along the Bers slice ([c_1], ∙) given at every [g] by d/dt_|t=0 [c_- (g+tq_1)]. If [g]=[g'], then the two paths [g+tq_1] and [g'+tϕ_1^*(q_1)] are both integral curves for this tangent vector field as long as they are defined, so they must coincide as long as they are defined.
§ THE SCHWARZIAN MAP AND THE METRIC MODEL FOR T (S)
§.§ The main statement
Let g=g(c_1, c_2) be a Bers metric.
The following is in the notation of Bers Theorem (Theorem <ref>).
As mentioned in the Introduction, we define 𝐒𝐜𝐡𝐰_+(g)∈(c_1) as the Schwarzian derivative of the projective structure defined by the developing map f_1(c_1, c_2) with the quasi-Fuchsian holonomy induced by the pair (c_1,c_2)∈, and 𝐒𝐜𝐡𝐰_-(g)∈(c_2) as the Schwarzian derivative of the projective structure defined by the developing map f_2(c_1, c_2) with the quasi-Fuchsian holonomy induced by (c_1,c_2)∈.
Observe that 𝐒𝐜𝐡𝐰_+ and 𝐒𝐜𝐡𝐰_- are continuous functions of (c_1,c_2), since both the functions f_1(c_1, c_2) and f_2(c_1, c_2) depend smoothly on (c_1,c_2).
The aim of Section <ref> is to prove the following Theorem.
Let g=g(c_1,c_2) be a Bers metric, let U_g⊂(c_1) and V_g⊂(c_2) be the maximal open connected neighbourhoods of 0 such that g+q_1 and g+q_2 are Bers metrics for all q_1∈ U_g and q_2∈ V_g as in Proposition <ref>.
Then, for all q_1∈ U_g,
𝐒𝐜𝐡𝐰_+(g+q_1) = 𝐒𝐜𝐡𝐰_+(g)- 1/2 q_1
and, for all q_2∈ V_g,
𝐒𝐜𝐡𝐰_-(g+q_2) = 𝐒𝐜𝐡𝐰_-(g) -1/2q_2
As a result, the maps
U_g →([c_1], ∙) V_g→(∙, [c_2])
q_1 ↦ [g+q_1] q_2↦ [g+q_2]
are holomorphic embeddings and define the following holomorphic model for the holomorphic tangent bundle of (S)
T_([c_1], [c_2])(S)≅([c_1])⊕([c_2])
with the induced almost-complex structure defined by the multiplication times i=√(-1).
We will refer to the model in (<ref>) as the metric model for the holomorphic tangent bundle of (S).
Before proving Theorem <ref>, let us see some remarks concerning this result.
The notation T_([c_1], [c_2])(S)≅([c_1])⊕([c_2]) can be understood as
([c_1])⊕([c_2])={q_1+q_2 | q_1∈(c_1), q_2∈(c_2) }< Γ( Sym_2(TS,)) .
In fact, the space of Bers metrics can be seen as a Banach submanifold of Γ(Sym_2( T S)), because, by Theorem <ref>, it is contained in the open subset of positive complex metrics and it is the preimage of -1 under the curvature function. By Theorem <ref>, the differential of its projection to (S) maps the subspace (c_1)⊕(c_2) of the tangent space at g(c_1,c_2) bijectively onto T_([c_1], [c_2])(S).
By definition, each left Bers slice (∙, [c_2]) is biholomorphic to 𝒯(S), while each right Bers slice ([c_1], ∙) is biholomorphic to 𝒯(S): Lemma <ref> gives a description of the differential of these maps, namely
(c_1)≅ T_[c_2]([c_1], ∙) T_[c_2]𝒯(S)≅Belt(c_2)
q_1= dz^2 ↦[ ∂_w z/ρdw/dw] ,
and
(c_2)≅ T_[c_1](∙, [c_2]) T_[c_1]𝒯(S)≅Belt(c_1)
q_2=ψ dw^2 ↦[ ψ∂_zw/ρdz/d z]
With respect to the metric model for the holomorphic tangent bundle of (S) in (<ref>), we can write the differential of the Fuchsian map 𝒯(S)→Fuch(S)⊂(S) as:
T_[c]𝒯(S) T_([c],[c])Fuch(S)< T_([c],[c])(S)
[q/g_0] ↦ q+q
where g_0=g(c,c) and q∈(c).
To see this, just observe that the composition with the differential of the projection of (S) to the Bers slice (∙, [c]) =𝒯(S)×{[c]} (resp. ([c], ∙)={[c]}×𝒯(S)) gives [q/g_0 ]↦ [q] (resp.[q/g_0 ]↦ [q]) which coincides with the inverse of map (<ref>) (resp. the inverse of the conjugate of map (<ref>)) for c=c_1=c_2.
This metric model for the holomorphic tangent bundle of (S) extends the metric model for Teichmüller space, identified through the uniformization theorem with the space Met_-1(S) of hyperbolic Riemannian metrics up to isotopy, studied in <cit.> and <cit.>. We recall it briefly.
Let h_0 be a hyperbolic metric, and let ḣ denote an infinitesimal variation of it with hyperbolic metrics, namely ḣ=d/dt_t=0 h_t∈ Sym_2(TS). Then one can see the tangent bundle of Met_-1 (S) as
T_[h_0]Met_-1(S)={ḣ∈ Sym_2(TS) | tr_h_0(ḣ)=0 and div_h_0 (ḣ)=0 }
where div denotes the divergence.
A classic computation shows that tr_h_0(ḣ)=0 and div_h_0 (ḣ)=0 if and only if ḣ is the real part of a holomorphic quadratic differential for the complex structure c_0 induced by h_0. We therefore get:
T_[h_0]Met_-1(S)={ q+q | q∈(c_0) } .
which coincides with the tangent bundle to Fuch(S) seen as a submanifold of (S), as shown in Remark <ref>.
§.§ Bers metrics and Epstein metrics at infinity
In order to prove Theorem <ref>, we need to recall some aspects of the theory of immersions of surfaces into ^3 and their data at infinity: in fact, this will help us in the proof of Theorem <ref> to show that the set of Bers metrics g for which Equations (<ref>) and (<ref>) hold is non-empty, since it contains hyperbolic Riemannian metrics. Moreover, the relation between Epstein metrics at infinity and Bers metrics turns out to be quite interesting.
Epstein (<cit.>) developed a theory of metrics at infinity induced by immersions of surfaces in hyperbolic space. Krasnov and Schlenker <cit.> gave a notion of second fundamental forms at infinity, with the pair of first and second fundamental forms at infinity determining uniquely the immersion in ^3.
Assume as usual that S is closed, oriented, with genus greater than or equal to 2.
Let σS→^3 be a (π_1(S), (2,))-equivariant immersion, and let N be a normal vector field along σ compatible with the orientation on S. The first fundamental form of σ is the pull-back metric, the shape operator is the -self-adjoint endomorphism B TS→ TS defined by B(X)=-D^^3_X N, while the second fundamental form is the symmetric bilinear form on TS defined by = (B·, ·). The tensors , B and are π_1(S)-invariant, and
the immersion σ is uniquely determined by the data (,) (or equivalently by (, B)) up to post-composition with automorphisms of ^3. The data (, B) satisfy the Gauss equation K_I=-1+det(B) and the Codazzi equation d^∇_I B=0.
We say that σ is nearly-Fuchsian if it is a π_1(S)-equivariant immersion such that the eigenvalues of B have modulus less than 1, or equivalently such that - and + are both positive definite.
As a result of Gauss curvature and of the compactness of S, is complete and negatively curved, σ is a proper embedding, and the visual boundary of - homeomorphic to a circle - embeds continuously into ∂^3, its image being the limit set of the holonomy representation ρπ_1(S)→(2,ℂ) of σ, hence ρ is quasi-Fuchsian. The reader can find an extended treatment of nearly-Fuchsian immersions in <cit.>.
We say that a quasi-Fuchsian representation ρ is nearly-Fuchsian if there exists a ρ-equivariant nearly-Fuchsian embedding of S into ^3.
An oriented equivariant immersion σS→^3 determines two equivariant maps at infinity σ_+S→∂^3 and σ_-S→∂^3, defined by σ_+(x)= lim_t→+∞exp_^3 (tN(x)), and σ_-(x)= lim_t→-∞exp_^3 (tN(x)),
N being the normal vector field. One can show that σ is nearly-Fuchsian if and only if σ_+ and σ_- are both local diffeomorphisms: in fact, σ nearly-Fuchsian implies that σ_+ and σ_- are diffeomorphisms onto their images, which correspond to the two connected components Ω_+ and Ω_- of the complementary of the limit set of the holonomy inside ∂^3.
To sum up, denoting with 𝐜 the complex structure of ∂^3, the two complex structures σ_+^*(𝐜) and σ_-^*(𝐜) induce two complex structures on S with different orientations, c_1 and c_2 respectively, and, with the formalism of Bers Theorem (Theorem <ref>),
σ_+= f_1(c_1, c_2) σ_-= f_2(c_1, c_2) .
In particular σ defines a Bers metric g_σ, and one can show (see Proposition 4.12 <cit.>, where the notation for the shape operator is opposite) that
g_σ= ((id-iJB)·, (id-iJB)·) ,
J being the complex structure of S compatible with the orientation induced by .
We also notice that, if s∈(^3) is an orientation reversing involution of ^3, then s∘σ is (s∘ρ∘ s)-equivariant and it has immersion data (, -B), so s∘σ is nearly-Fuchsian as well and one can easily see that
g_s∘σ= g_σ
Finally, we define the immersion data at +∞ of σ as
^*_+= 1/2((id+B)· ,(id+B)· )
B^*_+= (id+B)^-1(id-B)
^*_+=^*_+(B^*_+·, ·)= 1/2((id-B)· ,(id+B)· ) ,
and similary the immersion data at -∞ of σ as
^*_-= 1/2((id-B)· ,(id-B)· )
B^*_-= (id-B)^-1(id+B)
^*_-=^*_-(B^*_-·, ·)= 1/2((id+B)· ,(id-B)· ) .
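For instance (writing I and II for the first and second fundamental forms of σ, a notation used only in this example), for a totally geodesic immersion one has B=0, and the formulas above give
I^*_±= 1/2 I , B^*_±= id , II^*_±= 1/2 I ,
consistently with the Gauss equation at infinity recalled below: in this case K_I=-1, so the curvature of I^*_±=I/2 equals 2K_I=-2=-tr(B^*_±).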
The following properties hold.
* ^*_+, ^*_-, ^*_+ and ^*_- are positive definite and symmetric. In particular, B^*_+ is self-adjoint for ^*_+, and B^*_- is self-adjoint for ^*_-.
* ^*_+ is conformal to c_1 and ^*_- is conformal to c_2 (<cit.>).
*
If s∈(^3) is an orientation-reversing involution, then the immersion data at +∞ of s∘σ are (^*_-,B^*_-, ^*_-), while the immersion data at -∞ of s∘σ are (^*_+, B^*_+, ^*_+).
* The data (^*_+, B^*_+) and (^*_-, B^*_-) satisfy the following Gauss and Codazzi equations at infinity:
Codazzi equation at infinity: d^∇_^*_+B^*_+=0 d^∇_^*_-B^*_-=0
Gauss equation at infinity: tr(B^*_+)=-K^*_+ tr(B^*_-)= -K^*_-
where K^*_+ and K^*_- denote the curvature of ^*_+ and ^*_- respectively (<cit.>).
* Assume _0 is a Riemannian metric conformal to c_1, B_0 is a _0-self-adjoint tensor such that _0=_0(B_0·, ·) is positive definite, and assume that (_0, _0) satisfy the equations d^∇__0B_0=0 and tr_I_0 B_0=-K_I_0; then there exists a unique nearly-Fuchsian immersion σ whose immersion data at +∞ are _0, B_0, and _0.
The analogue result holds when prescribing the data at -∞. To see this, one can just prescribe the same data at +∞, get the corresponding nearly-Fuchsian immersion σ, and then consider s∘σ, with s any orientation-reversing involution in (^3) (see step <ref>).
Now, let σ be a nearly Fuchsian immersion, with immersion data (, ), and with immersion data at +∞ given by (^*_+,^*_+). For all q_1∈(c_1) for which ^*_+ +Re(q_1) is positive definite, the pair (^*_+, ^*_+ + Re(q_1)) satisfies Gauss and Codazzi equations at +∞ (Equations (<ref>), (<ref>)), so it is the immersion data at +∞ of a nearly-Fuchsian immersion σ.
For the scope of this paper, the most interesting result concerning immersion data at infinity is the following theorem.
If σ and σ are nearly-Fuchsian immersions with immersion data at +∞ (_+^*, _+^*) and (_+^*, _+^*+Re(q_1)), then the two projective structures induced by σ_+ and σ_+ satisfy:
Schw(σ_+) = Schw(σ_+) -q_1 .
Let us reconnect to Bers metrics.
Let σ be a nearly-Fuchsian immersion with data at +∞ given by (^*_+, ^*_+), then the induced Bers metric is
g_σ= 2^*_+ - i (^*_+(B^*_+·, J^*_+·) + ^*_+(J^*_+·, B^*_+·))
where J^*_+ is the quasicomplex structure on ^*_+ compatible with the orientation.
Reversing the definitions in Equation (<ref>), we can recover the immersion data for σ.
* B^*_+ +B B^*_+ = id-B, so
B=(id-B^*_+)(id+B^*_+)^-1
* = 2 ^*_+ ( (id+B)^-1·, (id+B)^-1· ). Observe that
(id+B)^-1 =( (id+B^*_+)(id+B^*_+)^-1 + (id-B^*_+)(id+B^*_+)^-1 )^-1=
=(2 (id+B^*_+)^-1)^-1= 1/2 (id+B^*_+) ,
so
= 1/2^*_+( (id+B^*_+)·, (id+B^*_+)· ).
* using that (id+B) and (id-B) commute, we get that
= (B·, ·)= 1/2^*_+ ((id-B^*_+)·, (id+B^*_+)· )
Now, let J and J^*_+ be the quasicomplex structures of and ^*_+ respectively and compatible with the orientation on S. Then, J and J^*_+ are conjugate, and J=(id+B^*_+)^-1J^*_+(id+B^*_+): indeed,
((id+B^*_+)^-1J^*_+(id+B^*_+)·, (id+B^*_+)^-1J^*_+(id+B^*_+)·)=
1/2^*_+ (J^*_+(id+B^*_+)·, J^*_+(id+B^*_+)· )=1/2^*_+ ((id+B^*_+)·, (id+B^*_+)·)= .
By applying Equation (<ref>), we get that
g_σ = ((id-iJB)·, (id-iJB)·)=
= 1/2^*_+ ( ( (id+B^*_+) -iJ^*_+(id-B^*_+))·, ( (id+B^*_+) -iJ^*_+(id-B^*_+))·)=
= 1/2^*_+ ( (id+B^*_+) ·, (id+B^*_+) ·) -1/2^*_+ ( (id-B^*_+)·, (id-B^*_+)·) -
-i/2^*_+ ( (id+B^*_+)·,J^*_+(id-B^*_+)·) - i/2^*_+ ( J^*_+(id-B^*_+)·, (id+B^*_+) ·)=
= 2 ^*_+(B^*_+·, ·) - i(^*_+(J^*_+·, B^*_+·) +^*_+(B^*_+·, J^*_+·))=
= 2^*_+ - i(^*_+(J^*_+·, B^*_+·) +^*_+(B^*_+·, J^*_+·))
If σ and σ are nearly-Fuchsian immersions with data at +∞ given by (^*_+, ^*_+) and (^*_+, ^*_++Re(q_1)) respectively with q_1∈(c_1), then the corresponding Bers metrics g_σ and g_σ satisfy
g_σ = g_σ + 2q_1 .
Similarly, if σ and σ are nearly-Fuchsian immersions with data at -∞ given by (^*_-, ^*_-) and (^*_-, ^*_-+Re(q_2)) respectively with q_2∈(c_2), then the corresponding Bers metrics g_σ and g_σ are related by
g_σ = g_σ + 2q_2 .
We start proving the first part of the statement concerning metrics at +∞.
Since Re(q_1) is symmetric for ^*_+, there is a ^*_+-self-adjoint endomorphism β TS→ TS such that Re(q_1)= ^*_+(β·, ·) = ^*_+(·, β·) .
So the data at +∞ for σ are:
^*_+ = ^*_+ B_+^*= B_+^*+ β _+^*= _+^* + ^*_+(β·, ·) .
Let J^*_+ be the quasicomplex structure of _+^* compatible with the orientation, then q_1(J^*_+·, ·)=q_1(·, J^*_+·)=iq_1, and
Im(q_1)= -Re(iq_1)= -Re (q_1(J_+^*·, ·))= -(Re q_1)(J_+^*·, ·)= -_+^*(J_+^*·, β·)
= -Re (q_1 (·, J_+^*·) )=-(Re q_1) (·, J_+^*·) = -_+^*(β·, J_+^*·) .
Using Lemma <ref>,
g_σ =2 _+^* - i (^*_+(B^*_+·, J^*_+·) + ^*_+(J^*_+·, B^*_+·))=
= 2_+^* +2 _+^*(β·, ·) - i (^*_+( B^*_+·, J^*_+·) + ^*_+(J^*_+·, B^*_+·)) -i _+^*(β·, J_+^*·) -i _+^*(J_+^*·, β·) =
= g_σ + 2 Re(q_1) + 2i Im(q_1)=
=g_σ + 2q_1 .
To prove the analogue result at -∞, we can compose σ with an orientation-reversing involution s of ^3. In fact, by Remark <ref>.<ref> and by the first part of the proof we get that σ_s∘σ= σ_s∘σ+2 q_2, so by Remark <ref> we get
g_σ= g_σ +2 q_2
and this concludes the proof.
§.§ Proof of Theorem <ref>
We are almost ready to prove Theorem <ref>. We need to recall a fact from <cit.>, that we state here in a much more specific version than the general one in the source.
Let Λ be a complex manifold, and let {g_λ}_λ∈Λ be a family of Bers metrics such that, for all p∈ S, v∈ T_pS the map
Λ →ℂ
λ ↦ g_λ(v,v)
is holomorphic. Then, the induced holonomy map
Λ → QF(S)
λ ↦ [g_λ]
is holomorphic.
For all g=g(c_1,c_2) Bers metric, we say that g has the Property (P) if: there exists an open subset U⊂(c_1), U depending on g, such that, for all q_1∈ U, g+q_1 is a Bers metric and
𝐒𝐜𝐡𝐰_+(g+q_1)=𝐒𝐜𝐡𝐰_+(g)-1/2 q_1 .
Observe that property (P) for g is apparently a bit weaker than the one described in the first part of the statement of the Theorem, since we are requiring that Equation (<ref>) holds for holomorphic quadratic differentials in a neighborhood of 0, and not a priori in the maximal open subset of (c_1) of the q_1's for which g+q_1 is a Bers metric. Proving that the two properties are equivalent is in fact the first step of the proof.
* We prove that, if g satisfies Property (P), then, for every q_1∈ U_g, 𝐒𝐜𝐡𝐰_+(g+q_1) = 𝐒𝐜𝐡𝐰_+(g)- 1/2 q_1.
Let U_g⊂(c_1) be the maximal subset such that g+q_1 is Bers for all q_1∈ U_g, then by Proposition <ref> U_g is open and connected.
Observe that, by definition of the holomorphic vector bundle (c_1)→ (S,c_1), for all p∈ S, v∈ T_pS the map
U_g →ℂ
q_1 ↦ (g+q_1)(v,v)= g(v,v) + q_1(v,v)
is holomorphic.
As a result, by Lemma <ref>, we know that the map
U_g → QF(S)
q_1 ↦ [g+q_1]
is holomorphic, so, as a consequence of the holonomy map of projective structures (<ref>) and of the Schwarzian derivative (<ref>), the map
U_g →(c_1)
q_1 ↦𝐒𝐜𝐡𝐰_+(g+q_1)
is holomorphic. Since g satisfies the Property (P), we know that the map q_1↦𝐒𝐜𝐡𝐰_+(g+q_1) is equal to 𝐒𝐜𝐡𝐰_+(g)-1/2 q_1 in a neighbourhood of 0∈(c_1), hence, using that U_g is connected (Proposition <ref>) and the uniqueness of analytic continuation, we conclude that 𝐒𝐜𝐡𝐰_+(g+q_1) = 𝐒𝐜𝐡𝐰_+(g)- 1/2 q_1 for all q_1∈ U_g.
* We prove that, if g has property (P), then every Bers metric with the same holonomy as g has property (P). As a consequence (P) is a well-defined property on (S).
Let U⊂(c_1) be the open subset realizing property (P) for g.
By Proposition <ref>, if ϕ_1,ϕ_2∈Diff_0(S) then, defining g'=g(ϕ_1^*(c_1), ϕ_2^*(c_2)), there is an open subset U'⊂ U⊂(c_1) such that g'+(ϕ_1^*(q_1)) is a Bers metric for all q_1∈ U' and [g+q_1]= [g'+(ϕ_1^*(q_1))]. Moreover, we have that g+q_1= g(c_1, ĉ_2) and g'+(ϕ_1^*(q_1))= g(ϕ_1^*(c_1), ĉ_2'), for some isotopic ĉ_2, ĉ_2'∈𝒞(S).
With the notation of Bers Theorem (Theorem <ref>), the projective structures defined by the developing maps f_1(c_1, ĉ_2) and f_1(ϕ_1^*(c_1), ĉ_2 ') are isotopic, with the latter being the pull-back of the former via ϕ_1.
As a result, for all ϕ_1^*(q_1)∈ϕ_1^*(U')⊂(ϕ_1^*(c_1)),
𝐒𝐜𝐡𝐰_+(g'+ϕ_1^*(q_1)) = (ϕ_1)^*(𝐒𝐜𝐡𝐰_+(g+q_1) )= (ϕ_1)^*(𝐒𝐜𝐡𝐰_+(g) )- 1/2(ϕ_1)^*(q_1)=
= 𝐒𝐜𝐡𝐰_+(g') - 1/2(ϕ_1)^*(q_1) ,
so g' has Property (P).
* We prove that for all [c_1]∈𝒯(S), the set of elements of the Bers slice ([c_1], ∙) satisfying property (P) is non-empty.
In fact, every Bers metric with nearly-Fuchsian holonomy satisfies property (P).
If g=g(c_1,c_2) is the induced Bers metric of a nearly-Fuchsian immersion with immersion data at +∞ given by (_+^*, _+^*) (as in Lemma <ref>), then, combining Proposition <ref> and Theorem <ref>, we get that U={q_1∈(c_1) | _+^* +Re(q_1)>0} is an open neighbourhood of 0∈(c_1) such that, for all q_1∈ U,
𝐒𝐜𝐡𝐰_+(g+ q_1)- 𝐒𝐜𝐡𝐰_+(g) = -1/2 q_1 .
Since Fuchsian totally geodesic immersions are nearly-Fuchsian, Bers metrics with Fuchsian holonomy satisfy property (P).
* We show that if g=g(c_1,c_2) satisfies property (P), then property (P) is satisfied in a neighbourhood of [g] in ([c_1], ∙).
Assume g satisfies property (P), then the map
(c_1)⊃ U→ 𝒫(S, [c_1]) →([c_1])
q_1↦ [f_1(g+q_1)] ↦ [Schw(f_1(g+q_1)) ]= [Schw(f_1(g))]- 1/2 [q_1]
is an embedding for U small enough, so (recalling Section <ref>) the map q_1↦ [f_1(g+q_1)] is an embedding into 𝒫(S), so the holonomy map q_1↦ [g+q_1] is an embedding as well: moreover, the image lies inside the Bers slice ([c_1], ∙), so by a dimensional argument, the holonomy map defines a biholomorphism
U →([c_1], ∙)
q_1 ↦ [g+q_1]
onto its image, which is open. So we can now show that the image of this map gives us a neighborhood of [g] in ([c_1],∙) over which property (P) holds: for all q_1∈ U, choosing U'⊂(c_1) such that U'+q_1⊂ U, we have that for every q_1'∈ U'
𝐒𝐜𝐡𝐰_+((g+q_1)+q_1')= 𝐒𝐜𝐡𝐰_+(g)-1/2 q_1 -1/2 q_1'= 𝐒𝐜𝐡𝐰_+(g+q_1)-1/2 q_1' .
* If g does not satisfy the Property (P), then in a neighborhood of [g] in ([c_1], ∙) the Property (P) is not satisfied.
From Proposition <ref>, we know that we have a well-defined map
(c_1)×𝒞(S) →{complex metrics of constant curvature -1}
(q_1, c_2) ↦ g(c_1,c_2)+q_1 .
Since the function in Equation (<ref>) is continuous, this map is continuous as well when the target is endowed with the topology of uniform convergence. By Theorem <ref>, the preimage of the space of Bers metrics is an open subset 𝔘 of the domain, 𝔘=∪_c_2∈𝒞(S) U_g(c_1,c_2)×{c_2}.
Observe that also (q_1, c_2)↦𝐒𝐜𝐡𝐰_+(g(c_1, c_2)+q_1) is a continuous function, since it is a composition of continuous functions.
As a result, if (q_1,c_2) in 𝔘 is such that 𝐒𝐜𝐡𝐰_+(g(c_1,c_2)+q_1)≠𝐒𝐜𝐡𝐰_+(g(c_1,c_2))-1/2 q_1, then this also holds in a neighborhood of (q_1, c_2). Finally, since open neighborhoods of c_2 in 𝒞(S) project to open neighborhoods of [c_2] in 𝒯(S), we have proved the following: for every holonomy in a neighborhood of [g] in ([c_1], ∙), there exists a Bers metric g' with that holonomy and an element q_1'∈(c_1) such that g'+q'_1 is a Bers metric but 𝐒𝐜𝐡𝐰_+(g'+q'_1)≠𝐒𝐜𝐡𝐰_+(g')-1/2 q'_1, implying that the Property (P) does not hold for g'.
* Since Bers slices are connected, we conclude that 𝐒𝐜𝐡𝐰_+(g+q_1)= 𝐒𝐜𝐡𝐰_+(g)-1/2 q_1 for all Bers metric g=g(c_1,c_2) and for all q_1∈ U_g.
* We prove the analogue result for g+q_2 as in Equation (<ref>).
As suggested by Remark <ref>, this follows quite easily by conjugating. Indeed, for all g=g(c_1,c_2) Bers metric and for all q_2∈(c_2) for which g+q_2 is Bers, we have that g and g +q_2 are Bers metrics, so
𝐒𝐜𝐡𝐰_+(g +q_2)=𝐒𝐜𝐡𝐰_+(g)-1/2 q_2: recalling the notation of Bers Theorem <ref>, the definition of 𝐒𝐜𝐡𝐰_+ and 𝐒𝐜𝐡𝐰_-, and Remark <ref>, we have that
𝐒𝐜𝐡𝐰_+(g)= Schw(f_1(c_2, c_1) )= Schw( f_1(c_2, c_1))= Schw( f_2 (c_1,c_2))= 𝐒𝐜𝐡𝐰_- (g)
where Schw(dev)= Schw(dev) follows from the fact that ∂_v dev=∂_v (dev). We therefore conclude that
𝐒𝐜𝐡𝐰_-(g+q_2)= 𝐒𝐜𝐡𝐰_-(g)-1/2q_2 .
§ THE HOLOMORPHIC EXTENSION OF THE WEIL-PETERSSON METRIC
The Weil-Petersson metric can be defined in the following way.
Let c∈𝒞(S), and let h be the hyperbolic metric in the conformal class, so h= ρ_0 dzdz with z being a holomorphic coordinate for c.
For all [c]∈𝒯 (S), we have a canonical choice of representative for each class in Belt(c), given by harmonic Beltrami differentials, namely Beltrami differentials of the form q/h where q∈(c). This can be seen also from the second correspondence in Remark <ref> taking (c_1, c_2)=(c, c). The Weil-Petersson metric in T_[c]𝒯(S)≅ Belt(c) can therefore be defined as
β, β'_WP:= Re( ∫_S ·' /ρ^2 dA_h ) = Re( i/2∫_S ·' /ρ dz∧ dz)
where β= q/h= /ρdz/dz and β'= q'/h= '/ρdz/dz are harmonic Beltrami differentials.
We are now ready to define the complex bilinear form on QF(S).
§.§ The bilinear form on (S)
We are finally able to use the metric model for the holomorphic tangent bundle on (S) to define a holomorphic Riemannian metric on it.
Recall that in Equation (<ref>) we showed that the Bers metric g defines a bilinear form on Sym_2(T S)⊕ i Sym_2(T S) given by
<α, β>_g= ∑_i,j,k, ℓ=1^2 α(X_i, X_j)β(X_k,X_ℓ) g^i,kg^j,ℓ
for any local basis {X_1, X_2} for TS.
We can therefore define the bilinear form on (c_1)⊕(c_2) as
α, β_g= 1/8∫_S <α, β>_g dA_g
where g=g(c_1,c_2) and dA_g is the area form compatible with the orientation as in Equation (<ref>).
Let g=g(c_1,c_2) be a Bers metric.
* q_1, q'_1_g=0, for all q_1, q'_1∈(c_1).
* q_2, q_2'_g=0 for all q_2, q_2'∈(c_2) .
* Let g=ρ dzdw, q_1= dz^2 ∈(c_1), q_2=ψ dw^2∈(c_2), then
q_1, q_2_g=1/4 i ∫_S ·ψ/ρdz∧ dw
* if g'=g(ϕ_1^*(c_1), ϕ_2^*(c_2 )), with ϕ_1,ϕ_2∈Diff_0(S), then
q_1, q_2_g = ϕ_1^*(q_1),ϕ_2^*( q_2)_g' .
As a result, the bilinear form defined in Equation (<ref>) gives a well-defined ℂ-bilinear form on T (S).
* The restriction of the bilinear form to the Fuchsian locus is real-valued and coincides with the Weil-Petersson metric
* Consider the frame X_1:=∂_z and X_2:=∂_w; we have that q_1(X_i, X_j)≠ 0 if and only if i=j=2, but g(∂_z,∂_z)=0 hence g^22≡0: by Equation (<ref>) we conclude that <q_1,q_1'>_g=0.
* As in the previous step.
* Follows directly by Lemma <ref> and Equation (<ref>).
* This follows from the previous step and from Lemma <ref>.
* Recall the description of the differential of the Fuchsian map 𝒯(S)→Fuch(S)⊂(S) in Remark <ref>. We therefore see that the pushed-forward Weil-Petersson metric on Fuch(S) (defined as in Section <ref>) is given by
q+q, q+q_WP= ( ∫_S /ρ_0·i/2 dz∧ dz) .
Using the previous steps, for every hyperbolic metric g_0=g(c,c) on S we have q+q, q+q_WP = 2q,q_g_0 = q+q, q+q_g_0.
§.§ The bilinear form is holomorphic Riemannian
The aim of this section is to prove that is in fact a holomorphic Riemannian metric.
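Let us briefly recall the notion involved: a holomorphic Riemannian metric on a complex manifold M is a symmetric ℂ-bilinear form on the holomorphic tangent bundle of M which is non-degenerate at every point and holomorphic, in the sense that ⟨X,Y⟩ is a holomorphic function for every pair of local holomorphic vector fields X and Y. Accordingly, the two things to check are non-degeneracy and holomorphicity.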
We fix a framework that will be useful for further calculations.
Fix (c_1^0, c_2^0)∈.
For all q_1∈([c_1^0]) we define L_q_1 as the global holomorphic vector field on (S) defined as follows:
* L_q_1 is tangent to the foliation of left Bers slices {([c_1], ∙) }_[c_1]∈𝒯(S);
* along ([c_1^0], ∙), L_q_1 coincides with the vector field generated by q_1, namely d/dt_|t=0 [g(c^0_1, c_2)+tq_1];
* for all [c_1]∈𝒯(S), the differential of the Bers correspondence ([c_1], ∙)≅𝒯(S) maps L_q_1 to the same holomorphic vector field α_q_1∈Γ (𝒯(S)).
Similarly, for all q_2∈([c_2^0]) we define R_q_2 as the global holomorphic vector field on (S) defined as follows:
* R_q_2 is tangent to the foliation of right Bers slices {( ∙, [c_2]) }_[c_2]∈𝒯(S);
* along ( ∙, [c_2^0]), R_q_2 coincides with the vector field generated by q_2, namely [c_1]↦d/dt_|t=0 [g(c_1, c_2^0)+tq_2];
* for all [c_2]∈𝒯(S), the differential of the Bers correspondence ( ∙, [c_2])≅𝒯( S) maps R_q_2 to the same holomorphic vector field β_q_2∈Γ (𝒯 (S)).
We recall that every q_1∈(c_1) defines a nowhere vanishing holomorphic vector field on the Bers slice ([c_1], ∙) through d/dt_|t=0 [g(c^0_1, c_2^0)+tq_1], and similarly every q_2∈(c_2) defines a nowhere vanishing holomorphic vector field on the Bers slice (∙, [c_2]).
Fix a ℂ-linear bases {q_1^1, … q_1^6g -6} for (c_1) and {q_2^1, …q_2^6g -6} for (c_2).
Since Bers slices are biholomorphic copies of Teichmüller space, each q^k_1∈(c_1^0) defines a holomorphic vector field α_k on 𝒯(S)≅([c_1^0], ∙) and each q_2^j∈(c_2^0) defines a holomorphic vector field β_j on 𝒯(S)≅(∙, [c_2^0]). (Note that these vector fields depend a lot on the choice of the starting point (c_1^0, c_2^0).)
Finally, we can define a basis of linearly independent holomorphic vector fields on (S) (which depend on the choice of (c_1^0, c_2^0)∈) as follows:
* we define L_k as the holomorphic vector field on (S) that is tangent to all the left Bers slices (∙, [c_2]) over which it coincides with the vector field α_k on 𝒯( S) under the Bers identification (∙, [c_2])≅𝒯( S)×{[c_2]};
* we define R_j as the holomorphic vector field on (S) that is tangent to all the right Bers slices ([c_1], ∙) over which it coincides with the vector field β_j on 𝒯( S) under the Bers identification ([c_1], ∙)≅{[c_1]}×𝒯(S);
With the above notation:
* [ L_ q_1, R_q_2]≡ 0.
* [R_q_2, R_q_2']≡ 0.
* [L_q_1, L_q_1']≡ 0.
* L_q_1, L_q_1'≡R_q_2, R_q_2'≡ 0 because Bers slices are isotropic.
Let q_1∈(c_1^0) and q_2∈(c_2^0).
With the above notation, L_q_1, R_q_2 is constant on ([c_1^0],∙)∪(∙, [c_2^0]), and it coincides with 1/2 q_1(β_q_2)= 1/2q_2(α_q_1).
Denote L=L_q_1 and R=R_q_2.
Let c_2∈𝒞(S), let g=g(c_1^0, c_2)=ρ dz_0 dw, where z_0 is a complex coordinate for c_1^0 and w for c_2.
On ([c_1^0], ∙), L coincides with q_1=: dz_0^2. On the other hand, R in ([c_1^0], [c_2]) is tangent to (∙, [c_2]), so in the metric model (<ref>) for (S) it coincides in ([c_1^0], [c_2]) with some holomorphic quadratic differential: we denote
R_| ([c_1^0], [c_2])=: τ=σdw^2∈(c_2)<T_([c_1^0], [c_2])(S) .
Recall that, on the Bers slice (∙, [c_2])≅𝒯(S), the vector field R coincides with the vector field β_q_2∈Γ (𝒯(S))=Γ (TBelt( S ) ): under the identification in Remark <ref>, we get that β_q_2_|c_1^0= [ σ∂_z_0w/ρdz_0/dz_0]∈Belt(c_1^0). Finally,
L, R _|([c_1^0], [c_2]) = q_1,τ_g= 1/4 i ∫_S σ/ρ dz_0∧ dw =
= 1/2∫_S σ∂_z_0w/ρi/2 dz_0∧ dz_0 =1/2 q_1(β_q_2)
for all c_2∈𝒞(S).
In the same fashion, one gets that L, R coincides with 1/2q_2(α_q_1) on every point of (∙, [c_2]).
As a result, one gets that 1/2 q_1(β_q_2)= 1/2q_2(α_q_1) which is the value of L, R in ([c_1^0], [c_2^0]).
Observe that the fact that 1/2 q_1(β_q_2)= 1/2q_2(α_q_1), together with Theorem <ref>, leads to an alternative proof of McMullen's quasi-Fuchsian reciprocity (see Proposition <ref>).
The bilinear form on (S) is a holomorphic Riemannian metric.
We have to prove that is non-degenerate and holomorphic.
The fact that the metric is non-degenerate follows directly from Lemma <ref>: for all ([c_1^0], [c_2^0])∈ one can evaluate the metric on each pair of vector fields L_q_1 and R_q_2 as defined above; the pairing on Belt([c])×([c]) being non-degenerate (see Section <ref>) then implies that the bilinear form on (S) is non-degenerate as well.
To prove holomorphicity, let X,Y be two holomorphic vector fields on (S). Then the restriction of X,Y on each Bers slice ([c^0_1], ∙) is holomorphic, because it can be written as a linear combination through holomorphic functions of the terms L_q_1, R_q_2, which are constant on the slice ([c^0_1], ∙). Similarly X,Y is holomorphic on each left Bers slice (∙, [c_2^0]). Finally, X,Y is holomorphic on (S) since its derivative with respect to any antiholomorphic vector is zero.
Let us also spell out the holomorphicity in more detail. We need to check that for every pair of holomorphic local vector fields on 𝒯(S)×𝒯(S), their inner product via gives a holomorphic function.
Every holomorphic local vector field on (S)= is a linear combination, through holomorphic coefficients, of pull-backs of local holomorphic vector fields on 𝒯(S) and 𝒯(S) via the projections on the two components. Since the Bers slices correspond to isotropic directions for the metric (Proposition <ref>, items (1)-(2)), we only need to check that: for all fixed ([c^0_1],[c^0_2])∈, for every holomorphic vector field μ on U_1⊂𝒯(S) and every holomorphic vector field μ' on U_2⊂𝒯(S), with [c^0_1]∈ U_1, [c^0_2]∈ U_2, p_1^*(μ), p_2^*(μ') is a holomorphic function on U_1× U_2, where p_1 and p_2 are the projections to the components of .
Moreover, it is sufficient to check that the restriction of p_1^*(μ), p_2^*(μ') to each Bers slice {[c_1]}× U_2 and U_1 ×{[c_2] } is holomorphic: indeed, this implies that the derivative of p_1^*(μ), p_2^*(μ') with respect to each vector of the antiholomorphic vector bundle of U_1× U_2 is zero.
Let μ be a holomorphic vector field on a neighbourhood U_1 of [c_1]∈𝒯(S) and μ' be a holomorphic vector field on a neighbourhood U_2 of [c_2]∈𝒯(S).
Through the decomposition T(S) ≡ T (), consider the holomorphic vector fields L=(μ, 0) and R=(0, μ') on (S).
We want to prove that R,L is a holomorphic function in a neighbourhood of [c_1]×[c_2] in 𝒯(S)×{[c_2]}. With a totally equivalent argument, one shows that it is also holomorphic in a neighborhood of [c_1]×[c_2] in {[c_1]}×𝒯(S).
Denote g_0=g(c_1,c_2), and fix a linear basis {q_2^k}_k=1^6g -6 for (c_2), with q_2^k=ψ_k dw^2.
To lighten the notation, we will denote elements of (c_2) by α.
Let Ω be the image of the holomorphic local parametrization
ζ(c_2)⊃ V_g →(∙, [c_2])
α ↦ [g+α] ,
as in Theorem <ref>, so Ω is an open neighbourhood of ([c_1], [c_2]).
Observe that:
* L is tangent to the Bers slice (∙, [c_2]), so, with reference to the holomorphic local parametrization ζ, L can be seen over Ω as
L_ζ(α) = ∑_k=1^6g -6γ_k(ζ(α))·q_2^k
for some local holomorphic functions γ_k on Ω⊂(∙,[c_2]).
* R is transverse to the slice (∙, [c_2]), and in every point it corresponds to the same infinitesimal deformation of [c_2]∈𝒯(S).
Let z_α be a holomorphic coordinate for c_+(g+α), and denote g+α= ρ_α dz_α dw.
By Lemma <ref>, we get that - for each α∈ V_g⊂(c_2) - there exists
β_α= _α dz_α^2 ∈(c_+(g+α))
such that
μ' ∼d/dt_|t=0 (c_- (g+α + t β_α ) )= _α∂_w (z_α)/ρ_αdw/dw
as Beltrami differentials, defining the same element in Belt(c_2).
By Remark <ref>, in the model T_([c_1],[c_2])(S)= (c_1)⊕(c_2) we have that
R_ζ(α) =β_α∈(c_+(g+α)) .
Finally,
R, L_[g+α] = ∫_S _α·∑_k=1^6g -6γ_k(ζ(α))·ψ_k/ρ_α dz_α∧ dw=
=∑_k=1^6g -6γ_k(ζ(α)) ∫_S_α·ψ_k/ρ_α dz_α∧ dw=
= ∑_k=1^6g -6γ_k(ζ(α)) ∫_S
ψ_k·_α∂_w (z_α)/ρ_α dw∧ dw =
= ∑_k=1^6g -6q_2^k(μ') ·γ_k(ζ(α)) .
which is holomorphic (recalling that each q_2^k(μ') is a constant on Ω).
§.§ Non-degeneracy
The bilinear form is non-degenerate.
In other words, we want to check that for every Bers metric g(c_1,c_2) and every q_1+q_2∈(c_1)⊕(c_2) with q_1≠ 0 or q_2≠ 0, the 1-form q_1+q_2,· is non-zero.
Observe that it is sufficient to prove that q_1,·≠0 and q_2,·≠ 0: indeed, if q_1,q'_2≠0 and q_2,q'_1≠ 0 for some q'_1∈(c_1) and q'_2∈(c_2) , then for some t∈ℝ
q_1 +q_2, q'_1 +tq'_2= tq_1, q'_2 + q_2, q'_1 0 .
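To spell out this elementary step (writing ⟨·,·⟩ for the pairing): the two pure terms ⟨q_1, q'_1⟩ and ⟨q_2, q'_2⟩ vanish because Bers slices are isotropic for the pairing, so the right-hand side above is a non-constant affine function of t, and any t∈ℝ other than -⟨q_2, q'_1⟩/⟨q_1, q'_2⟩ makes it non-zero.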
Also, observe that, in the usual notation g=ρ dzdw, q_1=_1 dz^2, q_2 =ψ dw^2,
q_2,q_1 _g = 1/2 i∫_S ψ/ρ dz∧ dw=1/2 i ∫_S ψ/ρ dw∧ dz = q_2,q_1_g
so, q_2,·_g≠ 0 if and only if q_2,·_g≠ 0: we deduce that to prove non-degeneracy it suffices to show that for all g=g and for all q_1∈(c_1), q_1≠ 0, we have that q_1,·_g≠ 0.
By Lemma <ref>, we know that
μ=d/dt_|t=0( c_-(g+tq_1) )= ∂_w z /ρ_0 d w/d w ,
and, by Theorem <ref> we know that q_1↦ [g+q_1] provides a local holomorphic parametrization of ([c_1], ∙), so the Beltrami differential ∂_w z /ρ_0 d w/d w is non-zero.
As a consequence, there exists q_2∈(c_2) such that q_2(μ)≠ 0, so
0≠ i/2q_2(μ)= i/2∫_S ∂_w z /ρψdw∧ dw = i/2∫_S ψ/ρ dz∧ dw = q_1, q_2_g
where q_2=ψdw^2.
§.§ Uniqueness of holomorphic extensions
We prove that there is a unique extension of the Weil-Petersson metric to the quasi-Fuchsian space. This is a simple consequence of the complex-analyticity of the metric, but we show it explicitly.
Let
Λ(S)→Ω⊂ℂ^6g -6
be a biholomorphism such that the image of the Fuchsian locus is Ω∩ℝ^6g -6. There are several ways to choose Λ: for instance, fixing a geodesic lamination λ for S, QF(S) can be parametrized by an open subset of the vector space of ℂ/2π iℤ-valued cocycles along λ through the so-called shear-bend coordinates, with ℝ-valued cocycles corresponding to Fuchsian representations: see <cit.> for further details.
Let Ω⊂ℂ^N be an open connected subset intersecting ℝ^N.
If fΩ→ℂ is a holomorphic function such that f_|Ω∩ℝ^N≡ 0, then f≡ 0.
This is a simple consequence of the fact that f is complex-analytic.
For all x^0∈Ω∩ℝ^N, f can be locally written as a convergent power series f(z)=∑_α∈ℕ^N a_α(z-x^0)^α, and the fact that its restriction to Ω∩ℝ^N vanishes identically implies that all the coefficients a_α are zero. So f≡ 0 in a neighborhood of Ω∩ℝ^N, which implies that f≡ 0 on Ω because f is holomorphic and Ω is connected.
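A minimal one-variable illustration of the role of the hypothesis: for N=1 and Ω=ℂ, the function f(z)=sin(π z) vanishes on ℤ⊂ℝ, which has empty interior in ℝ, and is not identically zero; if instead f vanishes on all of Ω∩ℝ=ℝ, then all of its derivatives at 0 vanish, hence f≡ 0.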
A (p,q)-tensor on the Fuchsian locus Fuch(S) extends to at most one -bilinear holomorphic (p,q)-tensor on (S).
Denote with _^n the standard holomorphic Riemannian metric on ^n defined by v,w_^n=∑_k=1^n v_k w_k. This metric induces a nondegenerate holomorphic symmetric bilinear form (that we will still denote with _^n) on the tensor bundle ⊗^p T^n defined by v^1⊗…⊗ v^p, w^1⊗…⊗ w^p_^n=v^1,w^1_^n· … ·v^p,w^p_^n.
Assume there exist two holomorphic extensions of a (p,q)-tensor on Fuch(S). By pushing them forward through Λ, we get two holomorphic (p,q)-tensors on Ω⊂ℂ^6g -6 which coincide on Ω∩ℝ^6g -6≠∅. Denote by α the difference between them, hence α≡ 0 on Ω∩ℝ^6g -6. Then, for all j,k=1,…, 6g -6, the vector fields ∂_x_j and ∂_x_k are holomorphic on Ω, and α(∂_x_n_1,…, ∂_x_n_q), ∂_x_m_1⊗…⊗∂_x_m_p≡ 0 on Ω∩ℝ^6g -6. By Lemma <ref>, the holomorphic function α(∂_x_n_1,…, ∂_x_n_q), ∂_x_m_1⊗…⊗∂_x_m_p is everywhere zero on Ω for any choice of the indices, hence α≡ 0 on Ω since {∂_x_j}_j is a ℂ-linear basis and _ is non-degenerate.
There exists a unique holomorphic Riemannian metric on (S) which extends the Weil-Petersson metric on the Fuchsian locus. Hence coincides with the holomorphic Riemannian metric defined in <cit.>.
The holomorphic Riemannian metric on (S) is invariant under the diagonal action of the mapping class group on (S)≅.
The mapping class group acts on 𝒯(S) preserving the Weil-Petersson metric and the complex structure. As a consequence, the diagonal action on is by biholomorphisms as well and preserves the Weil-Petersson metric on the Fuchsian locus: the action must therefore be by isometries for the unique holomorphic extension of the Weil-Petersson metric to (S).
§.§ The Goldman symplectic form
The relation between the Goldman symplectic form on the character variety of (2,) and the Weil-Petersson metric (see Section <ref>) allows us to give an integral description of the analogous form for (2,) on the quasi-Fuchsian locus through its relation with the holomorphic Riemannian metric, as stated in the following Proposition.
The (2,)-Goldman symplectic form ω_G vanishes on the Bers slices of (S), and at each point ([c_1], [c_2]) it is such that
ω_G([q_1], [q_2])= 8i [q_1], [q_2]=2i ∫_S ·ψ/ρdz∧ dw
where z and w are local complex coordinates for c_1 and c_2, q_1= dz^2, q_2=ψdw ^2, g=ρ dzdw.
We underline that, while the integral description is new, the fact that Bers slices are Lagrangian for ω_G is well known, and also the relation between ω_G and has been previously discussed in <cit.>.
Nevertheless, we give a full new proof of the whole statement.
Using T_([c],[c])Fuch(S)={q+q | q∈(c)} (as seen in Remark <ref>) and the description of the Goldman symplectic form with respect to the Weil-Petersson metric (see Section <ref>), we have
ω_G(q+q, q'+q') = 8iq+ iq, q'+q ' = 8iq-q, q'+ q'
for all q∈(c).
Through this and the fact that ω_G is -bilinear, we get a full description of ω_G on the holomorphic tangent bundle to (S) along the Fuchsian locus: indeed, for all q,q'∈(c),
4ω_G(q, q')= ω_G(q+ q - i(iq+ iq), q'+ q' - i(iq'+ iq'))=
= ω_G(q+q, q'+q') -iω_G(q+q, iq'+iq') -
- iω_G(iq+ iq, q'+q ') - ω_G(iq+iq, iq'+iq')=
= 8iq+iq, q'+q'-8iiq+iq, iq'+iq'-
-8i-q-q, q'+q' --q-q, iq'+iq' = 32i q, q'=0
and in a similar fashion one gets that
4ω_G(q, q')= ω_G(q+ q - i(iq+ iq), q'+ q' + i(iq'+ iq'))
= 32iq, q' .
By Lemma <ref>, ω_G is the unique holomorphic extension of its restriction on the Fuchsian locus. Since is holomorphic, such an extension can be obtained as follows: for all X_1, X_2 holomorphic vector fields on (S) tangent to the foliation of right Bers slices {(∙, [c_2])}_[c_2]∈𝒯(S) and Y_1, Y_2 holomorphic vector fields on (S) tangent to the foliation of left Bers slices {([c_1], ∙)}_[c_1]∈𝒯(S), one defines ω_G(X_1, Y_1)= 8i X_1,Y_1, ω_G(Y_1, X_1)=-8i X_1, Y_1, ω_G(X_1, X_2)=ω_G(Y_1, Y_2)=0. We therefore get the statement, with the integral description following from Proposition <ref>.
§.§ A formula for the curvature
We conclude this section with a few computations on the metric .
In order to make computations, we will use the same notation as in Section <ref>.
Fix ∈. Fix a ℂ-linear basis {q_1^1, … q_1^6g -6} for (c_1) and {q_2^1, …q_2^6g -6} for (c_2).
and denote
L_k= L_q_1^k R_j=R_q_2^j .
We will also use extensively the Koszul formula:
2∇_X Y, Z= ∂_X(Y, Z)+∂_Y (X,Z) - ∂_Z(X,Y)-[Y,X], Z-[X,Z],Y -[Y,Z],X
where X,Y,Z are vector fields on (S) and ∇ is the Levi-Civita connection of .
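Since all the vector fields to which we apply it below pairwise commute, the bracket terms always vanish; as a minimal reminder (writing ⟨·,·⟩ for the metric), the identity we actually use is
2⟨∇_X Y, Z⟩= ∂_X⟨Y, Z⟩+∂_Y⟨X,Z⟩ - ∂_Z⟨X,Y⟩ ,
so that ⟨∇_X Y, Z⟩=0 as soon as the three products ⟨Y,Z⟩, ⟨X,Z⟩ and ⟨X,Y⟩ are constant in the directions of X, Y and Z respectively.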
Finally, we can define a basis of linearly independent vector fields on (S) (which depend on the choice of ∈) as follows:
* we define L_k as the pull-back of the vector field q_2^k through the projection on the first component pr_1(S)→𝒯(S).
* we define R_k as the pull-back of the vector field q_1^k through the projection on the second component pr_2(S)→𝒯(S).
The following hold for all j,k=1, …, 6g-6.
* L_k coincides with the vector field q_2^k on (∙, [c_2]).
* R_k coincides with the vector field q_1^k on ([c_1], ∙).
* [L_k, R_j]≡ 0.
* [L_k, L_j]≡ 0.
* [R_k, R_j]≡ 0.
* L_k, L_j≡R_k, R_j≡ 0 because Bers slices are isotropic.
* L_k, R_j are constant on the Bers slices ([c_1], ∙) and (∙, [c_2]). This can be proved by following word for word the computations in the proof of Theorem <ref>, where it is shown that L_k, R_j coincides with the evaluation of q_1^j in the Beltrami differential corresponding to the infinitesimal deformation in T_[c_1]𝒯(S) given by q_2^k.
* ∇_L_i L_k=0 on ([c_1], ∙) for all i,k=1,…, 6g-6.
* ∇_R_i R_j=0 on (∙, [c_2]) for all i,j=1, …, 6g-6.
* The vector field ∇_L_kR_j= ∇_R_jL_k is tangent to the Bers slices ([c_1], ∙) and (∙, [c_2]) and it is zero in ([c_1],[c_2]).
All the proofs follow straightforwardly from the Koszul formula (<ref>), from Remark <ref>, and from Lemma <ref>, which imply that any two vector fields among L_1, …, L_6g-6 and R_1, …, R_6g -6 commute and that their inner product is constant on ([c_1], ∙) ∪(∙, [c_2]), hence its derivative vanishes along each of these two Bers slices.
We only prove the last statement explicitly.
∇_L_kR_j, L_i= ∂_L_k R_j, L_i - ∂_L_iR_j, L_k≡ 0 ,
so ∇_L_kR_j is tangent to the Bers slice ([c_1], ∙). By the analogous argument, one gets that the vector field ∇_R_jL_k= ∇_L_kR_j is also tangent to (∙, [c_2]), so it must be zero in .
Let g=g, let U_g={q_1∈(c_1) | g+q_1 is a Bers metric}, and V_g={q_2∈(c_2) | g+q_2 is a Bers metric}.
Consider on U_g and V_g the trivial affine connection inherited from the fact that (c_1) and (c_2) are finite-dimensional vector spaces, linearly isomorphic to ℝ^6g-6.
The maps
u U_g →([c_1], ∙)
q_1 ↦ [g+q_1]
v V_g →(∙, [c_2])
q_2 ↦ [g+q_2]
.
are affine diffeomorphisms onto their images when (S) is endowed with the Levi-Civita connection ∇ of .
By construction of the vector fields R_j and L_k, the statement is equivalent to proving that ∇_R_k R_j≡0 on (∙, [c_2]) and ∇_L_k L_j≡ 0 on ([c_1], ∙), which follow from Lemma <ref>.
With the notations above, the curves t↦ [g+tq_1] are geodesics.
We can now give some explicit computations for the curvature tensor of . Recall that the sectional curvature of a holomorphic Riemannian metric is defined only on complex vector subspaces V<T of complex dimension 2 such that the restriction of to V is non-degenerate.
We keep using the notation above. Fix ∈, and consider the global vector fields R_1, … R_6g-6, L_1, … L_6g-6.
In the following, R denotes the curvature tensor of , namely (X,Y,X,Y)=∇_X∇_Y X - ∇_Y ∇_X X- ∇_[X,Y] X,Y.
On the whole (S), for each j,k=1,…, 6g -6,
(L_k, R_j, L_k, R_j)= -∂_L_k∂_R_jL_k, R_j + ∇_L_k L_k, ∇_R_jR_j - ∇_L_kR_j, ∇_L_kR_j .
In particular, on ([c_1], ∙)∪(∙, [c_2]),
(L_k, R_j, L_k, R_j)= -∂_L_k∂_R_jL_k, R_j .
In the following, we denote X=L_k and Y=R_j
(X,Y,X,Y) =∇_X ∇_Y X, Y -∇_Y∇_X X, Y =
=∂_X(∇_YX, Y)- ∇_Y X, ∇_XY -∂_Y∇_X X, Y+∇_X X, ∇_Y Y=
=∂_X(∇_YX, Y)- ∇_Y X, ∇_XY -∂_Y∂_XX, Y+ ∂_Y X, ∇_XY+∇_X X, ∇_Y Y.
Now, observe that since X,X=Y,Y=0, the Koszul formula shows immediately that ∇_YX, Y=0= X, ∇_XY. As a result, we have proved the first equality.
To prove the second equality, by Lemma <ref> we have ∇_L_k L_k≡ 0 on ([c_1], ∙) and ∇_R_j R_j≡ 0 on (∙, [c_2]), so ∇_L_k L_k, ∇_R_jR_j=0 on ([c_1], ∙)∪(∙, [c_2]). Finally, by the last statement of Lemma <ref>, ∇_L_kR_j is tangent to ([c_1], ∙) and (∙, [c_2]), so it is isotropic.
The sectional curvature K(L_k, R_j), defined when L_k,R_j≠ 0, in ([c_1], ∙)∪(∙, [c_2]) is given by
K (L_k, R_j)= - ∂_L_k∂_R_jL_k, R_j/L_k, R_j ^2= ∂_L_k∂_R_jlog(L_k, R_j)
The Schwarzian maps Schw_+([c_1], ∙)→([c_1]) and Schw_-(∙, [c_2])→([c_2]) are affine homeomorphisms onto their images.
The Schwarzian map Schw_+([c_1], ∙)→([c_1]) can be locally seen around each point ∈ as the composition of the inverse of the local parametrization U_g→([c_1], ∙) as in Proposition <ref> and the immersion
U_g →([c_1])
q_1 ↦ [Schw_+(g+q_1)]
which is affine by Theorem <ref>.
The statement for Schw_- follows in the same fashion.
§ BOUNDS FOR THE SCHWARZIAN OF BERS PROJECTIVE STRUCTURES
Let ∈. Denote
𝒮^+_[c_1]:= Schw_+(([c_1], ∙))⊂(c_1) and
𝒮^-_[c_2]:= Schw_-( (∙, [c_2]) )⊂(c_2) .
One of the key observations of this paper (in Proposition <ref>) is that, given a Bers metric g=g(c_1, c_2), for all q_1∈(c_1) small enough g+q_1 is a Bers metric. Theorem <ref> motivates the question of understanding better what "small enough" means: if g+q_1 is a Bers metric, then Schw_+([g])-1/2 [q_1]∈𝒮^+_[c_1].
In this section, we elaborate on this remark, and use the metric formalism to get a lower bound for the distance of a point in 𝒮^+_[c_1] (resp. 𝒮^-_[c_2]) from its boundary in ([c_1]) (resp. ([c_2])).
Let g=ρ dz dw= g(c_1, c_2) be a Bers metric, and q_1= dz^2∈(c_1).
The following statements are equivalent.
* g+q_1 is a Bers metric.
* g+q_1 has no non-zero isotropic vectors in the real tangent space.
* |ρ∂_z w +φ|<|ρ∂_z̄w|
We prove that (1) ⟺ (2) and (2)⟺(3).
(1) ⇒ (2) As seen in Section <ref>, Bers metrics are positive complex metrics, which satisfy (2).
(2) ⇒ (1) By Proposition <ref>, we know that g+q_1 is a Bers metric if and only if it is a positive complex metric. Assume that g+q_1 is not a positive complex metric, and assume by contradiction that g+q_1 has no non-zero isotropic vectors in the real tangent bundle. In some point of S the isotropic directions of g+q_1 lie in the same connected component of ℙ(ℂ T S)∖ℙ( T S); on the other hand, in a zero of q_1 the isotropic directions coincide with those of g, hence they lie in opposite connected components. By a continuity argument, there exists a point where an isotropic direction lies in the complex span of a non-zero vector of the real tangent bundle TS: since complex metrics are ℂ-bilinear, there must be an isotropic non-zero vector in TS for g+q_1.
(2) ⟺ (3)
We have that g+q_1=(ρ dw + φ dz)· dz. Observe that
g+q_1 does not satisfy (2) ⟺
⟺ ∃ p∈S ∃α∈ℂ^* s.t. in p: (g+q_1)(α∂_z+ ᾱ∂_z̄, α∂_z+ ᾱ∂_z̄)=0 ⟺
⟺ ∃ p∈S ∃α∈ℂ^* s.t. in p: (ρ dw + φ dz)(α∂_z+ ᾱ∂_z̄)=0 ⟺
⟺ ∃ p∈S ∃α∈ℂ^* s.t. in p: ρ∂_zw + φ = -ᾱ/αρ∂_z̄w ⟺
⟺ ∃ p∈S s.t. in p: |ρ∂_zw +φ|=|ρ∂_z̄w| .
So, (2) holds if and only if |ρ∂_zw +φ|≠|ρ∂_z̄w| on S. Since |∂_zw|<|∂_z̄w|, we have that |ρ∂_zw +φ|<|ρ∂_z̄w| in the zeros of q_1, so the inequality (3) must hold on the whole surface if and only if g+q_1 satisfies (2).
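As a computational aside, the elementary manipulation behind the equivalence (2)⟺(3) can be checked symbolically. The following Python/SymPy sketch is our own illustration: φ denotes the coefficient of q_1, w_z and w_zbar the derivatives of w in z and z̄, and it merely expands (g+q_1)(V,V) for V=α∂_z+ᾱ∂_z̄ and confirms the factorisation used above.

import sympy as sp

rho, phi, w_z, w_zbar, alpha = sp.symbols('rho phi w_z w_zbar alpha', complex=True)

# V = alpha*d_z + conj(alpha)*d_zbar, hence dz(V) = alpha and dw(V) = alpha*w_z + conj(alpha)*w_zbar.
dzV = alpha
dwV = alpha * w_z + sp.conjugate(alpha) * w_zbar

# (g + q_1)(V, V) with g = rho dz dw and q_1 = phi dz^2, evaluated as quadratic forms.
quad = rho * dzV * dwV + phi * dzV**2

# It factors as alpha * ( alpha*(rho*w_z + phi) + conj(alpha)*rho*w_zbar ), so for alpha != 0
# it vanishes at a point iff |rho*w_z + phi| = |rho*w_zbar| there.
candidate = alpha * (alpha * (rho * w_z + phi) + sp.conjugate(alpha) * rho * w_zbar)
print(sp.expand(quad - candidate))   # prints 0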
The maps 𝐒𝐜𝐡𝐰_+ and 𝐒𝐜𝐡𝐰_- defined above in Equations (<ref>) and (<ref>) descend to better known maps (that with a little abuse we will denote with the same name) from (S), namely
Schw_+(S) →(S)
([c_1], [c_2]) ↦ [Schw(f_1(c_1,c_2) )]
Schw_-(S) →(S)
([c_1], [c_2]) ↦ [Schw(f_2(c_1, c_2) )]
,
where [Schw_+(g(c_1, c_2))]= Schw_+([c_1], [c_2]) and [Schw_-(g(c_1, c_2))]= Schw_-([c_1], [c_2])
Since the Schwarzian maps are holomorphic, for all given ([c_1], [c_2])∈𝒯(S)×𝒯(S),
𝒮^+_[c_1]:= Schw_+({[c_1]}×𝒯(S))⊂(c_1) and
𝒮^-_[c_2]:= Schw_-( 𝒯(S) ×{[c_2]})⊂(c_2)
are open subsets.
Let (c_1,c_2)∈. Let z and w be local holomorphic coordinates for c_1 and c_2 respectively, and denote g=g(c_1,c_2)=ρ dz dw .
If q_1∈(c_1) is such that
|q_1(∂_z, ∂_z)|< |ρ|/2( |∂_z̄ w|-|∂_z w| ) ,
then Schw_+([c_1], [c_2])+ [q_1] ∈𝒮^+_[c_1].
Analogously, if q_2∈(c_2) is such that
|q_2 (∂_w, ∂_w)| <|ρ|/2( |∂_w̄ z|-|∂_w z| ) ,
then Schw_-([c_1], [c_2])+[q_2] ∈𝒮^-_[c_2]
Before proving this statement, observe that both Inequalities (<ref>)-(<ref>) do not depend on the local choice of the coordinates z and w for c_1 and c_2 respectively. In fact, the right-hand side of (<ref>) can be rewritten in several equivalent ways, since (recalling that |∂_z̄ w|=|∂_z w̄| and |∂_z w|= |∂_z̄ w̄|) the reader can easily check that
|ρ|/2( |∂_z̄ w|-|∂_z w| ) =
= |g(∂_z, ∂_z̄)|- 1/2 |g(∂_z, ∂_z)| =
= (1- |∂_z w/∂_z̄ w| ) |g(∂_z, ∂_z̄)| .
We prove the part of the statement regarding the image of Schw_+.
Denote q_1= φ dz^2∈(c_1). From Lemma <ref>, we know that g-2 q_1 is Bers if and only if |ρ∂_z w -2φ|<|ρ∂_z̄w|.
If |φ|< |ρ|/2( |∂_z̄ w|-|∂_z w| ), then
|ρ∂_z w -2φ|≤ |ρ∂_z w|+2|φ|< |ρ∂_z w|+ |ρ∂_z̄ w|-|ρ∂_z w|= |ρ∂_z̄ w| ,
so g-2q_1 is a Bers metric.
By Theorem <ref>
[𝐒𝐜𝐡𝐰_+(g- 2 q_1)]= [𝐒𝐜𝐡𝐰_+(g) + q_1]= Schw_+([c_1], [c_2]) +[q_1]
is therefore an element of 𝒮^+_[c_1].
We can now prove the part of the statement regarding Schw_- by conjugating the metrics. Let q_2=ψdw^2. By Remark <ref>, g = g(c_2, c_1)=ρ dwdz, and g+q_2 is a complex metric of constant curvature -1. As in the proof of the first part of the theorem, if
|g (∂_w, ∂_w)|= |ψ|= |ψ|= |g(∂_w, ∂_w)| < |ρ|/2(|∂_w z|-|∂_w z|)= | ρ|/2(|∂_w z|-|∂_w z|) ,
then g -2q_2 is a Bers metric. By Remark <ref> and by Theorem <ref>, g-2q_2 is a Bers metric and [𝐒𝐜𝐡𝐰_-(g-2q_2)] = [𝐒𝐜𝐡𝐰_-(g) +q_2]=Schw([c_1], [c_2])+[q_2] lies in 𝒮^-_[c_2].
A key remark is that the right-hand sides in the Inequalities (<ref>)-(<ref>) are strictly positive terms, as a consequence of the fact that z and w̄ induce the same orientation. This actually allows us to estimate the radius of a ball with center in Schw_+([c_1], [c_2])∈([c_1]) which is contained in 𝒮^+_[c_1]. Let us make this precise.
The space (c_1) has a natural L^∞-norm defined as follows: let g_0= g(c_1, c_1) be the hyperbolic metric on S corresponding to g, then for q_1∈(c_1) we define
‖ q_1‖_∞:= ‖ 1/2Re(q_1)‖_g_0,∞= ‖φ/ρ_0‖_∞= ‖ q_1(∂_z, ∂_z)/2 g_0(∂_z, ∂_z̄)‖_∞
where g_0=ρ_0 dz dz̄ and q_1= φ dz^2. This norm on (c_1) clearly determines a well-posed norm on ([c_1]) which is independent from the representative of the isotopy class.
Let (c_1, c_2)∈.
Let g=g(c_1, c_2) be the corresponding Bers metric, and let g_0=g(c_1, c_1) be the Riemannian hyperbolic metric corresponding to c_1.
Observe that
1/2( (1- |∂_z w/∂_z̄ w| ) | g(∂_z, ∂_z̄) /g_0(∂_z, ∂_z̄)| )
is a well-posed function on S: indeed, it is independent from the choice of the local coordinates z and w for c_1 and c_2 respectively and g_0(∂_z, ∂_z̄) vanishes nowhere. Moreover, this function is strictly positive, as a result of the facts that w̄ induces the same orientation as z and that g(∂_z, ∂_z̄)≠ 0, since g(∂_z̄, ∂_z̄)=0 and g(∂_z̄, ·) is non-degenerate.
Let ∈, g=g, let g_0=g(c_1, c_1) be the hyperbolic metric uniformizing c_1, and denote with dA_g and dA_g_0 respectively their area forms.
Let
R:= 1/2min_S ( (1- |∂_z w/∂_z̄ w| ) | dA_g /dA_g_0| )
where z and w are any local coordinates for c_1 and c_2 respectively.
Then,
B_∞(Schw_+([c_1], [c_2]), R)⊂𝒮^+_[c_1]
where B_∞ denotes the ball with respect to ·_∞ on ([c_1]).
Let q_1= φ dz^2 ∈(c_1) with ‖ q_1‖_∞<R.
Observe that, denoting g=ρ dzdw, one has
dA_g= i/2ρ dz∧ dw= i/2ρ∂_z̄w dz∧ dz̄= i g(∂_z, ∂_z̄)dz∧ dz̄ ,
and similarly dA_g_0= i g_0(∂_z, ∂_z̄) dz∧ dz̄.
Hence dA_g/dA_g_0 = g(∂_z, ∂_z̄ )/g_0(∂_z, ∂_z̄), and in every point p∈ S
| φ/2g_0(∂_z, ∂_z̄)|_|p≤‖ q_1‖_∞ < min_S (1/2(1- |∂_z w/∂_z̄ w| ) | g(∂_z, ∂_z̄) /g_0(∂_z, ∂_z̄)| )≤
≤ 1/2(1- |∂_z w/∂_z̄ w| ) | g(∂_z, ∂_z̄) /g_0(∂_z, ∂_z̄)|_|p
hence
|φ|< (1- |∂_z w/∂_z̄ w| ) |g(∂_z, ∂_z̄)|
everywhere on S.
By Theorem <ref> and Remark <ref>, Schw_+([c_1], [c_2])+ [q_1]∈𝒮^+_[c_1].
Corollary <ref> clearly has an analogue for Schw_- that can be proved in the same fashion.
Let ∈, g=g and g_0=g(c_2,c_2).
Defining
R:= min_S 1/2( (1- |∂_w z/∂_w̄ z| ) | dA_g / dA_g_0| ) ,
where z and w are holomorphic coordinates for c_1 and c_2 respectively, then
B_∞(Schw_-([c_1], [c_2]), R )⊂𝒮^-_[c_2].
The sharpness of Inequality (<ref>) and the description of R and R in Equations (<ref>) and (<ref>) seem to depend a lot on the choice of the representatives (c_1, c_2) in the (double) isotopy class of ([c_1], [c_2]). It would be interesting to determine the optimal choice of c_1 and c_2 in their isotopy classes that maximizes R .
A useful remark in this direction might be that dA_g=dA_g+q_1. To see this, just observe that g(∂_z,∂_z)= (g+q_1)(∂_z,∂_z), so dA_g and dA_g+q_1 have the same description in (<ref>).
This statement comes with a few remarks (we focus on the part of statement for Schw_+, but remarks for Schw_- follow in the same fashion):
* The right-hand side of (<ref>) can be written in several equivalent ways, in fact
|ρ|/2( |∂_w z|-|∂_wz| ) =
= 1/2 |g(∂_z, ∂_z)| - |g(∂_z, ∂_z)|=
= 1/2(1- |∂_z w/∂_z w| ) |g(∂_z, ∂_z)| .
* Inequality (<ref>) is independent from the choice of the coordinates z and w for c_1 and c_2 respectively.
* Lifting the complex structures and the metric to the universal cover S, let z and w be global coordinates on S. Then, the right-hand side in Inequality (<ref>) is strictly positive, so it has a positive minimum on any fundamental domain for π_1(S).
* Denote with g_0=ρ_0 dzdz be the Riemannian hyperbolic metric corresponding to the complex structure c_1 and consider the corresponding standard norm induced in ([c_1]) given by q_1 := q_1(∂_z, ∂_z)/g_0(∂_z, ∂_z)_∞. Denote
0< R:= min_S ( (1- |∂_z w/∂_z w| ) | g(∂_z, ∂_z) /g_0(∂_z, ∂_z)| ) ,
then Theorem <ref> implies that
B_·(Schw_+([c_1], [c_2]), R)⊂𝒜_[c_1]
* The sharpness of Inequality (<ref>) and the description of R in Equation (<ref>) seem to depend a lot on the choice of the representatives (c_1, c_2) in the (double) isotopy class of ([c_1], [c_2]).
§ REVISITING CLASSIC RESULTS WITH THE METRIC FORMALISM
§.§ The Schwarzian map is affine
The Schwarzian maps
Schw_+([c_1], ∙)→([c_1]) and Schw_-(∙, [c_2])→([c_2])
are affine diffeomorphisms onto their images.
The Schwarzian map Schw_+([c_1], ∙)→([c_1]) can be locally seen around each point ∈ as the composition of the inverse of the local parametrization U_g→([c_1], ∙) as in Proposition <ref> and the immersion
U_g →([c_1])
q_1 ↦ [𝐒𝐜𝐡𝐰_+(g+q_1)]
which is affine by Theorem <ref>.
The thesis follows in the same fashion for Schw_-.
§.§ McMullen's quasi-Fuchsian reciprocity and the differential of the Schwarzian map
Let (c_1, c_2) ∈, and g=g(c_1, c_2).
A classic model for the cotangent bundle of Teichmüller space is given by the bundle of holomorphic quadratic differentials, hence T_[c_1]𝒯(S)≅([c_1]) and T_[c_2]𝒯(S)≅([c_2]). In the identification of T_[c]𝒯(S) with the space of Beltrami differentials Belt(c) for [c], the cotangent model works as follows: if X=β dz̄/dz and q= φ dz^2, then the 2-form φ·β dz∧ dz̄ defines a well-posed 2-form on S and
q(X)= ∫_S φ·β i dz∧ dz̄ .
Let ([c_1], [c_2])∈.
Recall the Schwarzian maps Schw_+( [c_1],∙)𝒯(S)→([c_1]) and Schw_-(∙, [c_2])𝒯(S)→([c_2]). Since the targets are vector spaces, their differentials are mapped in each point to ([c_1]) and ([c_2]) too
A classic result in quasi-Fuchsian geometry is the following.
[McMullen's quasi-Fuchsian reciprocity <cit.>]
Let ∈, X∈ T_[c_1]𝒯(S), Y∈ T_[c_2]𝒯(S). Then,
( ∂_Y Schw_+( [c_1],∙)) (X) = ( ∂_X Schw_-(∙, [c_2])) (Y)
where we used the identifications ([c_1])≅ T^*_[c_1]𝒯(S) and ([c_2])≅ T^*_[c_2]𝒯(S).
McMullen's reciprocity can actually be seen using the holomorphic extension of the Weil-Petersson metric on (S), providing an alternative description of the derivative of the Schwarzian maps and of the holomorphic Riemannian metric itself.
For all X∈ T_[c_1]𝒯(S) and Y∈ T_[c_2]𝒯(S), denote
𝐗= (X,0)∈ T_([c_1], [c_2])(𝒯(S)×𝒯(S))
𝐘= (0, Y)∈ T_([c_1], [c_2])(𝒯(S)×𝒯(S)) .
Let g=g(c_1, c_2)
( ∂_Y Schw_+( [c_1],∙)) (X) = - ⟨𝐗, 𝐘⟩_[g] = ( ∂_ X Schw_-(∙, [c_2])) (Y) ,
in particular, one gets McMullen's quasi-Fuchsian reciprocity.
As a consequence
Im(⟨𝐗, 𝐘⟩_g) = - 2∂_𝐗∂_𝐘 (𝒱_Ren)
where 𝒱_Ren:(S)→ is the renormalized volume function.
We remark that a similar result is shown inside a proof of Theorem 5.13 in <cit.>.
With respect to the metric model for T(S) in (<ref>), 𝐗 corresponds to q_X=:ψdw^2 ∈(c_2)<T_([c_1], [c_2])(S), and 𝐘 corresponds to q_Y=: dz^2 ∈(c_1)<T_([c_1], [c_2])(S).
By Remark <ref>, X and Y correspond to the Beltrami differentials β_ X=ψ∂_zw/ρdz/dz and β_Y= ∂_w z/ρdw/dw, respectively.
Recalling Equations (<ref>) and (<ref>).
∂_Y( 𝐒𝐜𝐡𝐰_+( g)) (β_X) = ( d/dt_|0𝐒𝐜𝐡𝐰_+(g+tq_Y) ) (β_X) = -1/2 q_Y (β_X)=
= -1/2∫_S ·ψ∂_zw/ρi/2 dz∧ dz= -i/4∫_S ψ/ρ dz∧ dw= -𝐗, 𝐘_g .
In the same fashion,
∂_X( 𝐒𝐜𝐡𝐰_-( g)) (β_Y) = ( d/dt_|0𝐒𝐜𝐡𝐰_-(g+tq_X) ) (β_Y) = -1/2 q_ X (β_Y)=
=-1/2∫_S ψ·∂_w z/ρi/2 dw ∧ dw
= -i/4∫_S ψ/ρ dz∧ dw= -𝐗, 𝐘_g .
As a consequence, we get an alternative description of the imaginary part of in terms of the Hessian of the renormalized volume 𝒱_Ren:(S)→ (see <cit.>).
Let 𝐗∈ T_[c_1](𝒯(S)×{[c_2]}) and 𝐘∈ T_[c_2]({[c_1]}×𝒯(S) ). Then,
Re(⟨𝐗, 𝐘⟩_[g]) = - 4∂_𝐗∂_𝐘 (𝒱_Ren)
In other words,
⟨𝐗, 𝐘⟩_[g]= - 4∂_𝐗∂_𝐘 (𝒱_Ren)+ 4i ∂_i𝐗∂_𝐘 (𝒱_Ren)
Let use the notation in the proof of Proposition <ref>. By the work of Krasnov and Schlenker (<cit.>), Re(Schw_+([g]) (β_X))= 4∂_X 𝒱_Ren. As a consequence
Re(∂_X( Schw_+( [g])) (β_Y))= Re (∂_Y( Schw_-( [g])) (β_X))= 4∂_X∂_Y (𝒱_Ren),
and the result follows by Proposition <ref>.
http://arxiv.org/abs/2307.06265v2 | 20230712160544 | PDE-Based Parameterisation Techniques for Planar Multipatch Domains | ["Jochen Hinz", "Annalisa Buffa"] | math.NA | ["math.NA", "cs.NA"] |
Jochen Hinz (corresponding author, [email protected]) and Annalisa Buffa ([email protected]), Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.
This paper presents a PDE-based parameterisation framework for addressing the planar surface-to-volume (StV) problem of finding a valid description of the domain's interior given no more than a spline-based description of its boundary contours. The framework is geared towards isogeometric analysis (IGA) applications wherein the physical domain is comprised of more than four sides, hence requiring more than one patch. We adopt the concept of harmonic maps and propose several PDE-based problem formulations capable of finding a valid map between a convex parametric multipatch domain and the piecewise-smooth physical domain with an equal number of sides. In line with the isoparametric paradigm of IGA, we treat the StV problem using techniques that are characteristic for the analysis step. As such, this study proposes several IGA-based numerical algorithms for the problem's governing equations that can be effortlessly integrated into a well-developed IGA software suite.
We augment the framework with mechanisms that enable controlling the parametric properties of the outcome. Parametric control is accomplished by, among other techniques, the introduction of a curvilinear coordinate system in the convex parametric domain that, depending on the application, builds desired features into the computed harmonic map, such as homogeneous cell sizes or boundary layers.
Parameterisation Techniques, Isogeometric Analysis, Elliptic Grid Generation
§ INTRODUCTION
Isogeometric analysis (IGA) <cit.> is a variant of the finite element method (FEM) that was conceived in an effort to bridge the gap between the geometrical and the numerical aspects of the computational science and engineering (CSE) workflow. In computer-aided design (CAD), the physical domain is represented by its bounding surface ∂ using the field's de facto standard of NURBS / spline-based parametric descriptions. The analysis step, on the other hand, relies on a geometric format based on simplices and / or relatively basic polytopes (quadrilaterals, hexahedra, …) which form the building blocks for finding a description of the domain _h, where ∂_h is a (typically piecewise linear) collocation of ∂. Most CSE workflows operate in the order ∂→∂_h →_h, wherein the surface to volume (StV) problem ∂_h →_h is referred to as the meshing step. The conversion from a spline-based description of ∂ to a simplistic representation of _h is regarded as a major robustness bottleneck <cit.>. Furthermore, in many applications it is desirable to translate analysis results provided by, for instance, FEM back to appropriate changes in ∂, which may be nontrivial, due to the differing geometrical formats.
To address these concerns, IGA employs the NURBS / spline-based modelling tools that are characteristic for CAD as a basis for both the geometrical modelling and the numerical analysis aspects of the CSE workflow. In IGA, the parametric description of ∂ is immediately forwarded to a routine that solves the StV problem ∂→, which becomes the IGA analogue of the classical meshing step. The operator that maps the parametric domain onto is then utilised to perform a pullback of the governing equations into , where the same set of splines is employed as a basis for standard FEM techniques. Besides its promise of reducing the conversion overhead, numerical simulation based on IGA is showing promising results as spline-based FEM discretisations have been demonstrated to perform better than their classical (Lagrangian) counterparts on a range of benchmark problems <cit.>.
While IGA has matured into a prolific numerical method with encouraging applications within fields such as computational electromagnetism <cit.> and fluid dynamics <cit.>, compared to their classical meshing counterparts, spline-based methods for addressing the StV step ∂→ are still under-represented. On the one hand, this is partially explained by the relative novelty of IGA compared to, for instance, classical FEM techniques. On the other hand, while retaining the CAD-based representation in the StV step has culminated in an entirely novel class of approaches that exploit the higher-order continuity of spline basis functions, such approaches also come with novel challenges. For instance, verifying whether a spline-based map : → is indeed nondegenerate is a more complicated endeavour than in the piecewise linear case <cit.>. Furthermore, spline-based representations largely rule out generalisations of classical meshing algorithms that directly operate on the mesh's vertices, such as the advancing front method <cit.>, as spline-based parameterisations do generally not cross their control points.
As a result, the majority of existing techniques are based on blending the (typically four) segments of ∂ into the interior <cit.>, (constrained and unconstrained) parameterisation quality optimisation <cit.> and PDE-based approaches <cit.>. While methods from all three categories, depending on the type of the geometry, show promising results, the majority have only been studied in the singlepatch setting, i.e., when the parametric domain is given by the unit quadrilateral. For complex domains , a single quadrilateral may be too restrictive, which is why existing methods may have to be combined with segmentation algorithms that divide into smaller pieces, which are then parameterised from the unit quadrilateral one-by-one.
Another challenge is associated with computational differentiability: in order to form a closed design loop, the entire CSE pipeline, including the StV step, may have to be differentiated with respect to a set of design parameters. In the presence of segmentation, this may be challenging or impossible since segmentation may not be continuous in the provided boundary data. Furthermore, segmentation may take place with little regard to parameterisation quality metrics, which include the patch interfaces in the mutlipatch setting.
To address the limitations of the singlepatch setting, this paper introduces a PDE-based parameterisation framework that is compatible with multipatch domains ⊂ℝ^2. The idea is to introduce a multipatch covering of an appropriately-chosen convex, polygonal parametric domain ⊂ℝ^2 and to construct a nondegenerate mapping operator : →⊂ℝ^2 by approximately solving a PDE problem in over a spline basis defined on the multipatch topology. The underlying PDE problem approximates a map whose inverse is comprised of a pair harmonic functions in , wherein the boundary correspondence ^-1|_∂ = ∂ becomes the Dirichlet boundary condition. We propose two different PDE-based formulations along with various IGA-based discretisations which are then studied in detail.
A major appeal of this framework is the fact that the patch interfaces establish themselves as part of the PDE solution and need not be strongly imposed using, for instance, segmentation. The parameterisation, including the interfaces, is continuous in the boundary data and straightforwardly differentiable.
For control over the parametric properties of the computed parameterisation, we augment the framework with a mechanism that changes the properties of : → by mapping inversely harmonically into a parametric domain with a curvilinear, instead of a Cartesian coordinate system. This coordinate transformation is accomplished by the introduction of a so-called controlmap 𝐬: →. We propose several techniques for constructing controlmaps for various desired parameterisation features, such as boundary layers and boundary orthogonality. As the controlmap is defined globally (i.e., over the entire parametric domain ), control over the parametric properties includes the image of the patch interfaces under the mapping.
The choice to seek : → as the solution of a PDE problem is in line with the isoparametric paradigm of IGA: we handle both the geometrical as well as the analysis steps using IGA techniques. As a result, the proposed algorithms are straightforwardly integrated into a well-developed IGA software suite, reducing the code bloat resulting from relying on external tools in the StV step.
§.§ Notation
This paper denotes vectors in boldface. The i-th entry of a vector is denoted by x_i. Similarly, the ij-th entry of a matrix is denoted by A_ij. Let 𝐲: →ℝ^m and : →ℝ^n. Vectorial derivatives are taken along the second axis and we interchangeably employ the denotation
∂_𝐲≡∂𝐲/∂, with [∂𝐲/∂]_ij = ∂ y_i/∂ x_j,
where the vector derivative ∂_𝐲 maps into ℝ^m × n. The associated Nabla operator satisfies ∇_𝐲 = (∂_𝐲)^T.
Furthermore, we frequently work with vector spaces 𝒱. By default, we employ the abuse of notation
𝒱^n = 𝒱×⋯×𝒱_n terms
and similarly for tensorial spaces, i.e., 𝒱^n × n. Analogously, vectorial Sobolev spaces are denoted by H^s(Ω, ℝ^n), where Ω is the associated domain. For finite-dimensional spaces 𝒱_h, {𝒱_h } refers to its canonical (spline) basis which we assume to be clear from context.
Let 𝒱 be defined over the domain . We define 𝒱≡𝒱∩ H^1_0() as the subspace of functions from 𝒱 that have zero trace in ∂.
By Int(), we denote the interior of a closed domain , while denotes the closure of an open domain .
§.§ Problem Statement
Let ⊂ℝ^2 be an open, simply connected Lipschitz domain whose boundary ∂ is parameterised by
an even number K = 2n, n ∈ℕ^≥ 2 of open (spline) curves C_k ⊂ℝ^2, oriented in counterclockwise direction. We have
∂ = ⋃_k ∈{1, …, K}C_k, where i ≠ j C_i ∩ C_j = ∅.
We assume that the C_k are parameterised in the positive direction from the open unit interval by the spline maps 𝐟^k: (0, 1) →ℝ^2 with 𝐟^k ∈ C^1((0, 1), ℝ^2) and nonvanishing tangent.
Furthermore, let ⊂ℝ^2 be a convex, polygonal parametric domain with K sides L_k ⊂ℝ^2 oriented in counterclockwise direction. We have
∂ = ⋃_k ∈{1, …, K}L_k, where i ≠ j L_i ∩ L_j = ∅.
Each L_k is parameterised in the positive direction on ∂ by an affine map 𝐥_k: (0, 1) →ℝ^2 of the form:
𝐥_k(s) = ^k + 𝐭^k s, with {^k, 𝐭^k}⊂ℝ^2.
Assigning the C_k to the L_k in ascending order induces the boundary correspondence 𝐅: ∂→∂ that satisfies
𝐅|_L_k = C_k, or equivalently 𝐅∘𝐥_k = 𝐟_k,
and we assume that 𝐅: ∂→∂ parameterises a Jordan curve in ℝ^2.
We assume that is covered by a quadrangulation 𝒬 of a total of N_p patches _i, i.e.,
𝒬 = {_1, …, _N_p}, with = Int( ⋃__i ∈𝒬_i ) and i ≠ j _i ∩_j = ∅.
Each _i is the image of the reference patch ^□ = (0, 1)^2 under the diffeomorphic bilinear map ^i: ^□→_i. The facets of the quadrangulation are denoted by Γ, while boundary facets are denoted by Γ^B := {L_1, …, L_K} and interior facets by Γ^I := Γ∖Γ^B.
For boundary patches _i, the associated map ^i restricted to the side of ∂^□ that maps onto L_k, is given either by 𝐥_k or 𝐥_k(1 - s), depending on the orientation along ∂. We denote the set of boundary patches by 𝒬^B.
The facets between pairs of neighbouring patches are denoted by γ_ij and the collection of interior facets is given by
Γ^I = ⋃_(i, j) ∈ F^Iγ_ij, with F^I := {(i, j) | Int(_i∩_j ) is an open line segment in }.
Given no more than a boundary correspondence 𝐅: ∂→∂ that satisfies aforementioned assumptions, this paper deals with the spline-based StV problem ∂→. More precisely, let 𝒱_h ⊂ H^1() be a finite-dimensional vector space and let
𝒰_h^𝐅 = {𝐯∈𝒱_h^2 | 𝐯 = 𝐅 on ∂}, with 𝒱_h such that 𝒰^𝐅_h ≠∅.
The purpose of this paper is providing a framework for finding a nondegenerate mapping operator _h: → with _h ∈𝒰^𝐅_h. Denoting the Cartesian coordinate functions in by = (ξ_1, ξ_2)^T, we call a map : → nondegenerate (NDG) if
0 ≤inf_∈ det J() ≤sup_∈ det J() ≤∞, where J(𝐱) := ∂𝐱/∂
denotes the Jacobian matrix of 𝐱: → in . Similarly, we call a map uniformly nondegenerate (UNDG) if
0 < c < inf_∈ det J() ≤sup_∈ det J() < C < ∞.
Clearly, uniform nondegeneracy of _h ∈𝒱_h^2 is favoured over nondegeneracy by most applications but imposes stronger requirements on 𝐅: ∂→∂ that are discussed in Section <ref>.
Letting 𝒱_h be spanned by {ϕ_1, …, ϕ_N}, the mapping operator takes the form:
_h(ξ_1, ξ_2) = ∑_i ∈ℐ_I𝐜^i ϕ_i(ξ_1, ξ_2) + ∑_j ∈ℐ_B𝐜^j ϕ_j(ξ_1, ξ_2),
where 𝐜^k ∈ℝ^2, ∀ k ∈ℐ_I ∪ℐ_B, while ℐ_I and ℐ_B refer to the index-sets of vanishing and nonvanishing functions on ∂, respectively. With (<ref>) in mind, the purpose of this paper is properly selecting the 𝐜^i while the 𝐜^j follow from the boundary correspondence and are therefore held fixed.
Besides nondegeneracy, this paper aims for mechanisms that allow for control over the parametric properties of _h: → while the framework should be implicitly differentiable, i.e., provide maps that are a continuous function of the supplied data, namely the boundary control points 𝐜^j, j ∈ℐ_B.
§.§ Related Work
As stated in the introduction, existing techniques for the StV problem ∂→ are predominantly based on blending the curves C_k that make up ∂ into the interior, selecting the 𝐜^i ∈ℐ_I via an optimisation problem (with or without added constraints) and PDE-based methods. So far, most methods have only been studied in the singlepatch setting.
Interpolation-based methods, such as transfinite interpolation <cit.>, are a class of approaches that had already been conceived before the onset of IGA. Such approaches attempt to parameterise the interior of ∂ by taking the map _h: →ℝ^2 as a linear combination of the 𝐟_k: (0, 1) → C_k times a set of (typically polynomial) blending functions defined in . In IGA, the most widely-used method is the bilinearly blended Coons' patch <cit.>, a computationally inexpensive and often sufficiently powerful approach for singlepatch geometries. More advanced variants, such as Lagrange and Hermite interpolation, furthermore allow for control over the map's derivatives on ∂ using blending functions of polynomial degree p ≥ 2. For an overview of interpolation techniques over the unit quadrilateral, see <cit.>. Generalisations to n-sided convex, polygonal domains have been made in <cit.> and <cit.>, wherein the construction of appropriate blending functions becomes the main objective. As blending is based on polynomial constructions, interpolated surfaces can typically be equivalently expressed in local constructions based on splines. While computationally inexpensive and often highly effective in practical applications, interpolation-based methods provide no guarantee of nondegeneracy and the resulting maps are therefore often folded.
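For concreteness, a minimal Python sketch of the bilinearly blended Coons' patch on the unit quadrilateral is given below. It is our own illustration: the four boundary-curve callbacks and the quarter-annulus example are placeholders, and, as discussed above, the construction carries no nondegeneracy guarantee.

import numpy as np

def coons_patch(f_s, f_n, f_w, f_e):
    """Bilinearly blended Coons' patch from four boundary curves.

    f_s, f_n: maps xi -> R^2 for the south (eta=0) and north (eta=1) sides,
    f_w, f_e: maps eta -> R^2 for the west (xi=0) and east (xi=1) sides.
    The curves are assumed to match at the four corners."""
    def x(xi, eta):
        ruled_xi  = (1 - eta) * f_s(xi) + eta * f_n(xi)
        ruled_eta = (1 - xi) * f_w(eta) + xi * f_e(eta)
        bilinear = ((1 - xi) * (1 - eta) * f_s(0.0) + xi * (1 - eta) * f_s(1.0)
                    + (1 - xi) * eta * f_n(0.0) + xi * eta * f_n(1.0))
        return ruled_xi + ruled_eta - bilinear
    return x

# Example: a quarter annulus with radii 1 and 2 (the corners match by construction).
f_s = lambda t: np.array([1.0 + t, 0.0])                                # segment on the x-axis
f_n = lambda t: np.array([0.0, 1.0 + t])                                # segment on the y-axis
f_w = lambda t: np.array([np.cos(np.pi/2*t), np.sin(np.pi/2*t)])        # radius-1 arc
f_e = lambda t: 2.0*np.array([np.cos(np.pi/2*t), np.sin(np.pi/2*t)])    # radius-2 arc
x = coons_patch(f_s, f_n, f_w, f_e)
print(x(0.5, 0.5))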
The second class of approaches minimises one or a positive sum of several quality cost functions over the 𝐜^i, i ∈ℐ_I. Compared to the classical literature, optimisation-based techniques have received more interest within the IGA-realm. Quality criteria are largely based on heuristics and optimisation typically seeks for maps with orthogonal isolines, homogeneous cell sizes or reduced cell skewness <cit.>. Convex optimisation formulations are based on the length, Liao and uniformity functionals <cit.>, while nonconvex formulations are often based on a combination of the area, orthogonality and skewness functionals <cit.>. While computationally more demanding, nonconvex optimisation has a lower tendency to yield degenerate maps and allows for a wider range of quality criteria <cit.>. Further examples include the Teichmüller map <cit.> and the variational harmonic method <cit.>, which, given a sufficiently regular boundary correspondence, both approximate a bijection and are thus inherently less prone to yielding a degenerate map. In <cit.>, the most-commonly employed cost functions are studied in a THB-spline setting.
Penalisation methods enforce nondegeneracy by adding the Jacobian determinant J(_h) to the cost function's denominator thus creating a barrier that urges the optimiser to seek for local minima from within set of bijective maps. Examples include the Winslow and modified Liao functionals <cit.> whose minimisation has to be initialised with a nondegenerate map to avoid division by zero. To relax this requirement, <cit.> introduces a regularisation that enables degenerate initial iterates to converge to a valid map in the proximity of the original formulation's global minimiser. However, the radius of convergence remains small and a suitable initial iterate requires solving another optimisation problem first.
Another way of enforcing nondegeneracy is adding constraints that constitute a sufficient condition for bijectivity. A linear constraint is proposed in <cit.>. If convex cost functions are utilised, the problem remains convex. A nonconvex constraint is proposed in <cit.> wherein the map's (scalar) Jacobian determinant is expressed in a spline space 𝒱_h^J that contains it. If the determinant's weights with respect to {𝒱_h^J } are all positive, the map is valid and the iterate is deemed feasible. Expanding J(_h) over {𝒱_h^J } is furthermore a widely-used technique to test for nondegeneracy. As both constraints constitute sufficient but not necessary conditions for bijectivity, they may be too restrictive in practice. Furthermore, finding a feasible initial iterate may be nontrivial or impossible because for complex geometries, the feasible search space may be empty.
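As a complement to the sufficient coefficient-based tests of det J(x_h) mentioned above, the sketch below (ours, not part of any of the cited methods) performs only a crude, sampling-based necessary check: it evaluates a finite-difference Jacobian determinant on a grid, so a negative sample certifies degeneracy while a positive minimum does not prove bijectivity.

import numpy as np

def min_det_jacobian(x, n=64, h=1e-6, domain=((0.0, 1.0), (0.0, 1.0))):
    """Estimate min det(J) of a map x: (xi1, xi2) -> R^2 by central differences
    on an n-by-n sample grid of the parametric domain."""
    (a1, b1), (a2, b2) = domain
    xi1 = np.linspace(a1 + h, b1 - h, n)
    xi2 = np.linspace(a2 + h, b2 - h, n)
    dets = []
    for s in xi1:
        for t in xi2:
            dx_d1 = (x(s + h, t) - x(s - h, t)) / (2 * h)   # column dx/dxi1
            dx_d2 = (x(s, t + h) - x(s, t - h)) / (2 * h)   # column dx/dxi2
            dets.append(dx_d1[0] * dx_d2[1] - dx_d1[1] * dx_d2[0])
    return min(dets)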
Optimisation-based approaches are readily generalised to the multipatch setting by minimising the same cost functions over the polygonal domain rather than the unit square. Hereby, the patch interface control points become degrees of freedom in the formulation. Multipatch optimisation is employed in <cit.> where a suitable topology is chosen through a construction based on patch adjacency graphs. In <cit.>, a time-dependent formulation is proposed which evolves the initial _h(t=0), that maps strictly into the interior of ∂, to a map with the prescribed boundary correspondence. At each time-iteration, multipatch optimisation is utilised to warrant the parametric quality of the intermediate map.
To the best of our knowledge, penalised or constrained optimisation problems have only been studied in the singlepatch setting.
The third class of approaches seeks the 𝐜^i, i ∈ℐ_I by (approximately) solving a PDE problem. In the majority of cases, the PDE stems from the requirement that the mapping inverse _h^-1: → be a pair of harmonic functions in . A justification for this is provided by the Radó-Kneser-Choquet theorem which states that a harmonic map _h^-1: → is diffeomorphic in , provided is convex. Thus, requiring the mapping inverse to be harmonic allows for treating a wider range of geometries as the parametric domain can be chosen freely. In <cit.> harmonic maps are approximated by employing a boundary element method (BEM) <cit.>. The method creates a large number of pairs (_i, _i), with _i ∈ (0, 1)^2 and _i ∈ which are then utilised to fit a THB-spline map in the least-squares sense with added regularisation terms. The same BEM is adopted in <cit.> where multiply-connected domains are mapped inversely harmonically into punctured auxiliary domains, using a template segmentation approach to select an appropriate multipatch layout for fitting a map to the point pairs.
Another class of methods seeks (inversely) harmonic maps by approximately solving the equations of Elliptic Grid Generation <cit.> (EGG) in using variational techniques. EGG stems from a pullback of the inverse harmonicity requirement into where a (nonlinear) equation for the forward map is derived. In <cit.> this formulation is employed to approximate harmonic maps on quadrilateral parametric domains. As the EGG equations are of second order, a spline space 𝒱_h ⊂ H^2() is employed. In <cit.>, the equations are tackled with THB-splines combined with a posteriori refinement techniques to repair degeneracies stemming from insufficient numerical accuracy. The same publication proposes control mechanisms capable of tuning the map's parametric properties by introducing a suitable coordinate transformation in . An attempt to generalise IGA-EGG to the multipatch setting is made in <cit.> where a mixed form that reduces the regularity requirement from H^2() to H^1() is proposed by introducing auxiliary variables for the map's Jacobian. While successful in practice, the formulation significantly increases the computational costs since the basis is likely subject to an inf-sup requirement <cit.>, requiring the auxiliary space to be p- or h-refined with respect to the primal basis.
Finally, <cit.> proposes a PDE-based approach that employs the equations of nonlinear elasticity with applications to both singlepatch and multipatch domains.
§ THEORY
This paper proposes a framework for computing parameterisations _h: → based on harmonic maps. In the following, we present an in-depth discourse on harmonic maps as well as finite element techniques for elliptic equations in nonvariational form, which shall be adopted to formulate discretisations in Section <ref>.
§.§ Harmonic Maps
The motivation to seek the map _h: → as the inverse of a map that is harmonic in stems from the following famous result:
The harmonic extension of a homeomorphism from the boundary of a Jordan domain ⊂ℝ^2 onto the boundary of a convex domain ⊂ℝ^2 is a diffeomorphism in .
For proofs, we refer to <cit.>. It should be noted that the convexity of is a sufficient, but not a necessary condition. Furthermore, the same result is no longer true in ℝ^3 <cit.>.
Theorem <ref> has inspired many numerical approaches for finding a nondegenerate _h: → that approximates a map whose inverse is harmonic. Besides the nondegeneracy guarantee, this is furthermore explained by the regularity of harmonic maps which generally serve the map's quality from a numerical standpoint.
Numerical approaches go back to the pioneering works of Winslow <cit.>. Letting = (x_1, x_2)^T, and defining the metric tensor
G_ij() = g_ij with g_ij = ∂_ξ_i·∂_ξ_j,
Winslow's original approach seeks the map : → as result of the following minimisation problem
1/2∫_tr( G^-1) d→min_, s.t. () = 𝐅^-1 on ∂.
Letting 𝒰^𝐅 = {𝐯∈ H^1(, ℝ^2) | 𝐯 = 𝐅 on ∂}, a pullback leads to
1/2∫_tr(G)/det Jd→min_∈𝒰^𝐅,
The minimisation of (<ref>) is highly impractical since the domain of the integrand is the set of all ∈𝒰^𝐅 that satisfy det J() > 0 (almost everywhere). As such, minimisation has to be initialised with a nondegenerate initial map, which is generally hard to find.
An alternative formulation is based on the harmonicity requirement's classical form:
Δ^-1 = 0 in , s.t. ^-1 = 𝐅^-1 on ∂,
where the Laplace operator is to be understood component-wise. A pullback leads to
Δ_ = 0 in , s.t. = 𝐅 on ∂,
where Δ_ denotes the Laplace-Beltrami operator.
The pullbacks from both (<ref>) and (<ref>) inherently assume that ^-1: → is invertible, thus potentially rendering the problems ill-posed in case is not convex. However, assuming convexity of , we may multiply the two-component PDE from (<ref>) by T: →ℝ^2 × 2, with T = (det J )^2 J() since det T does not vanish in the interior. The result is a two-component PDE for : → which can be classified as a quasilinear second-order elliptic PDE in nondivergence form <cit.>:
i ∈{1, 2}: A(∂_) H(x_i) = 0, s.t. = 𝐅 on ∂,
where
H(y)_ij = ∂^2 y/∂ξ_i ∂ξ_j denotes the Hessian in , while A(∂_) := [ g_22 -g_12; -g_12 g_11 ]
and A B denotes the Frobenius inner product between two matrices.
The multiplication by T: →ℝ^2 × 2 removes det J from the original formulation's denominator, allowing schemes based on (<ref>) to be initialised with degenerate initial maps. On the other hand, minimisation based on (<ref>) most likely yields a nondegenerate map, while this may not hold for approaches based on (<ref>), due to the scheme's truncation error.
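The pointwise ingredients of the nondivergence-form residual are simple to assemble. The following sketch is our own illustration: it evaluates A(∂_ξ x), the scaling γ(·) = tr(·)/‖·‖_F² used later, and the Frobenius products A H(x_i) from a given Jacobian and the two Hessians; the optional shift μ anticipates the eigenspectrum shift A_μ = A + μ I introduced for the iterative schemes below.

import numpy as np

def ndf_residual(J, H1, H2, mu=0.0):
    """Pointwise residual of the nondivergence-form system A(dx) : H(x_i) = 0.

    J is the 2x2 Jacobian dx/dxi, H1 and H2 are the 2x2 Hessians of x_1 and x_2
    with respect to xi. With g_ij = (J column i) . (J column j),
    A = [[g22, -g12], [-g12, g11]]; mu > 0 adds the shift A + mu*I."""
    G = J.T @ J
    A = np.array([[G[1, 1], -G[0, 1]], [-G[0, 1], G[0, 0]]]) + mu * np.eye(2)
    gamma = np.trace(A) / np.sum(A * A)          # Cordes-type scaling tr(A)/||A||_F^2
    return gamma * np.array([np.sum(A * H1), np.sum(A * H2)])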
A further possibility is basing the scheme on the harmonicity requirement's weak form. More precisely, with 𝒱 = H^1():
find ^-1∈𝒱^2, s.t. ∫_∇ϕ∇^-1d = 0, ∀ϕ∈𝒱^2 and ^-1 = 𝐅^-1 on ∂,
which translates to an equation for : → via a pullback.
This paper presents algorithms for approximating _h ≈ based on formulations (<ref>) and (<ref>). As the former is in nondivergence form, in the following we give a brief summary on the finite element treatment of nondivergence form equations.
§.§ Nondivergence form equations
The finite element treatment of nondivergence form (NDF) equations is a relatively recent development with first contributions due to Lakkis and Pryer <cit.>. NDF-equations are of the form
B H(u) + lower order terms = f a.e. in , s.t. u = g on ∂.
Here, f ∈ L^2(), while B: → L^∞(, ℝ^2 × 2) is uniformly elliptic, i.e., there are constants 0 < c_1 ≤ c_2 < ∞ such that
c_1 ≤inf_𝐲∈ℝ^2, ‖𝐲‖ = 1𝐲^T B 𝐲≤sup_𝐲∈ℝ^2, ‖𝐲‖ = 1𝐲^T B 𝐲≤ c_2, a.e. in .
The set of all symmetric and uniformly elliptic B: → L^∞(, ℝ^2 × 2) is referred to as SPD^2 × 2().
In the following, we take g = 0 and disregard lower order terms for convenience.
For g = 0 and ⊂ℝ^2 convex, it can be shown that u ∈ H^2() ∩ H^1_0() as long as B satisfies the so-called Cordés condition <cit.>. In ℝ^2 the Cordés condition is implied by (<ref>). Defining
γ(B) := tr(B) / ‖ B ‖_F^2,
where ‖·‖_F denotes the Frobenius norm √(A A) of a matrix, finite element discretisations are based on the following Petrov-Galerkin formulation of the problem's strong form:
find u ∈ H^2() ∩ H^1_0() s.t. ∫_γ(B) τ(ϕ) ( B H(u) - f ) d = 0, ∀ϕ∈𝒱,
for some suitably-chosen test space 𝒱.
Here, τ: 𝒱→ L^2() is a suitably-chosen operator that warrants coercivity of the associated bilinear form over finite-dimensional subspaces 𝒱_h ⊂𝒱. The (optional) scaling γ( · ) guarantees that γ(B) B resembles the identity matrix ℐ^2 × 2 and simplifies the analysis of numerical schemes based on (<ref>). The choices of τ: 𝒱→ L^2() for 𝒱 = H^2() ∩ H^1_0() are τ_NS(v) = Δ v and τ_LS(v) = B H(v) <cit.>, while for 𝒱 = H^1_0(), τ_ID(v) = v <cit.>. To enable discretisations over finite element spaces 𝒱_h ⊂ H^1(), mixed-FEM formulations of (<ref>) are introduced in <cit.> while <cit.> propose C^0 discontinuous Galerkin schemes that introduce interior penalty terms over the facets of the FEM mesh, acting on the discrete solution's normal gradient.
As spaces 𝒱_h resulting from local spline-based constructions over multipatch topologies are generally only in H^1(), this paper adopts the mixed formulations based on Gallistl <cit.> and Lakkis-Pryer <cit.> as well as the C^0-DG formulation from <cit.> and applies them to linearisations of (<ref>). In the case of C^0-DG, penalty terms can be restricted to the interior patch interfaces γ_ij∈Γ^I of .
§ NUMERICAL SCHEMES
In this section we propose several numerical schemes for finding approximate solutions of the inverse harmonicity formulations based on both (<ref>) and (<ref>). Given and a suitable multipatch covering (see Subsection <ref>), we denote by Ξ_i = (Ξ_i, 1, Ξ_i, 2) the pair of local (open) knotvectors associated with the i-th patch along with the associated canonical spline space 𝒱_h, i⊂ H^2(^□). We denote by 𝒱_h^disc⊂ L^2() the space that results from a push-forward of the local spaces 𝒱_h, i, i.e,
𝒱_h^disc = ⋃_i ∈{1, …, N_p }{ v_h ∘ (𝐦^i)^-1 | v_h ∈𝒱_h, i}.
Then, we define the subspace 𝒱_h := 𝒱_h^disc∩ C^0() and assume that the local spline bases' knotvector tuples Ξ_i are selected in such a way that the canonical basis {𝒱_h } of 𝒱_h forms a partition of unity on that is compatible with 𝐅: ∂→∂ in the sense that the set 𝒰^𝐅_h = {𝐯∈𝒱_h^2 | 𝐯 = 𝐅 on ∂}≠∅.
§.§ NDF discretisations
The discretisations of (<ref>) are based on a variation of the Petrov-Galerkin formulation from (<ref>). Here, we restrict ourselves to the choices τ∈{τ_NS, τ_ID}.
In the following, we propose iterative solution strategies targeting a variational form of (<ref>). For the sake of a unified presentation, we let 𝒰^𝐅 := {𝐯∈ H^2(, ℝ^2) | 𝐯 = 𝐅 on ∂} and we introduce the form ℒ: SPD^2 × 2() ×𝒰^𝐅×𝒰^0→ℝ with
ℒ(B, , ϕ) := ∫_τ(ϕ_i) B H(x_i) d,
where we sum over repeated indices. For the time being, we assume that the data is sufficiently regular for the problem and its linearisations to be well-posed over 𝒰^𝐅 for the test space 𝒰_test = 𝒱_test^2 with 𝒱_test = H^1_0() (τ = τ_ID) and 𝒱_test = H^2() ∩ H^1_0() (τ = τ_NS), respectively. The linearisations are then modified for compatibility with the C^0()-nature of spline spaces over multipatch topologies while discretisations follow readily from replacing vector spaces by their finite-dimensional counterparts.
In what follows, we shall substitute various flavours of A( · ) (cf. Subsection <ref>) scaled by γ( · ) (cf. Subsection <ref>) into (<ref>). Besides being customary in NDF-discretisations, we have noticed the scaling to have a positive effect on the iterative schemes' radii of convergence and the required number of iterations.
§.§.§ Fixed-Point Iteration
The most elementary linearisation is based on a fixed-point iteration which freezes A( · ) of (<ref>) in the previous iterate ^k and seeks : → as the limit k →∞ of the recursion
i ∈{1, 2}: A(∂_^k) H(x^k+1_i) = 0, s.t. ^k+1 = 𝐅 on .
We note that A(∂_) may equivalently be written in the form
A(∂_) = C^T C, with C(∂_) = [ ∂ x_2/∂ξ_2 -∂ x_2/∂ξ_1; - ∂ x_1/∂ξ_2 ∂ x_1/∂ξ_1 ].
Since C( · ) has the same characteristic polynomial as J = ∂_, we conclude that A( · ) ∈SPD^2 × 2() whenever : →ℝ^2 is UNDG. As such, the uniform ellipticity requirement (<ref>) is violated for degenerate intermediate maps. To circumvent this issue, we introduce the stabilisation A_μ( · ) := A( · ) + μℐ^2 × 2, with 0 < μ < 1 and base a numerical scheme on the following linearised classical form:
i ∈{1, 2}: A_μ(∂_^k) H(x^k+1_i) - μΔ x^k_i = 0, s.t. ^k+1 = 𝐅 on .
With A_μ^k := A_μ(∂_^k) and γ_μ^k := γ(A^k_μ) (cf. equation (<ref>)), a variational formulation seeks : → as the limit k →∞ of the recursion
find ^k+1∈𝒰^𝐅 s.t. ℱ_μ( ^k+1, ^k, ϕ) = 0, ∀ϕ∈𝒰^0,
where
ℱ_μ(^k+1, ^k, ϕ) := ℒ(γ_μ^k A_μ^k, ^k+1, ϕ) - μℒ(γ_μ^k ℐ^2 × 2, ^k, ϕ).
In practice, we take μ = 10^-4. Here, a reasonable stopping criterion terminates the recursion as soon as ‖^k+1 - ^k‖ / ‖^k‖ < ε, where a suitable norm depends on the augmented scheme (with C^0-support).
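A driver for this fixed-point iteration is conceptually simple. The sketch below is schematic: the callback solve_linearised, which is assumed to assemble and solve the linear problem (<ref>) with A frozen at the previous iterate and eigenvalue shift μ, stands in for the actual IGA backend and is not part of any existing library.

import numpy as np

def fixed_point(solve_linearised, x0, mu=1e-4, tol=1e-8, maxiter=50):
    """Fixed-point driver for the stabilised NDF iteration.

    solve_linearised(x_prev, mu) returns the new vector of interior control points;
    x0 is e.g. the forward-Laplace initial guess. The loop terminates once the
    relative update drops below tol."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxiter):
        x_new = solve_linearised(x, mu)
        rel = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-30)
        x = x_new
        if rel < tol:
            return x, k + 1
    return x, maxiter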
§.§.§ Newton Approach
As in the fixed-point iteration, a linearisation based on Newton's method needs to be adjusted for the possibility of encountering iterates ^k with A(∂_^k) ∉SPD^2 × 2. As such, we again employ the eigenspectrum shift A( · ) → A_μ( · ) and base a Newton scheme on the residual form 𝒩_μ: 𝒰^𝐅×𝒰^0→ℝ, with
𝒩_μ(, ϕ) := ℒ(γ_μ^ A_μ^, , ϕ), where A_μ^ := A_μ(∂_) and γ_μ^ := γ(A_μ^).
Given some intermediate iterate ^k∈𝒰^𝐅, the Newton scheme computes the increment ∂^k ∈𝒰^0 from:
find ∂^k ∈𝒰^0, s.t. 𝒩_μ^'(^k, ϕ, ∂^k) = - 𝒩_μ(^k, ϕ), ∀ϕ∈𝒰^0,
wherein 𝒩_μ^'( · , · , 𝐯) denotes the Gateaux derivative of 𝒩_μ( · , · ) with respect to its first argument in the direction of 𝐯∈𝒰^0. The new iterate becomes ^k+1 = ^k + κ∂^k, where the optimal value of κ∈ (0, 1] is estimated using a line search routine.
Contrary to the fixed-point iteration, for μ > 0 the root of 𝒩_μ( · , · ) generally differs from that of μ = 0. As such, the eigenspectrum shift constitutes a regularisation rather than a stabilisation. Therefore, μ needs to be taken small and we utilise μ = 10^-5 in practice. While in discretisations based on (<ref>), the value of μ can be reduced to μ = 0 in an outer loop, in practice this is usually not necessary. In fact, schemes based on (<ref>) converge in the vast majority of cases even for μ = 0 and the stabilisation with μ > 0 merely improves convergence behaviour for severely folded initial iterates.
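The globalised Newton loop with line search can be sketched as follows. The callbacks residual and solve_newton_step are assumptions of this sketch (they stand for the backend routines assembling 𝒩_μ and solving with its Gateaux derivative), and the simple backtracking rule shown here is only one possible way of estimating the step size κ.

import numpy as np

def newton_backtracking(residual, solve_newton_step, x0, tol=1e-10, maxiter=20):
    """Globalised Newton driver as used for the NDF and weak-form discretisations.

    residual(x) returns the assembled residual vector, solve_newton_step(x) returns
    the increment dx solving N'(x) dx = -N(x). The step size kappa is halved until
    the residual norm decreases."""
    x = np.asarray(x0, dtype=float)
    for it in range(maxiter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, it
        dx = solve_newton_step(x)
        kappa = 1.0
        while kappa > 1e-4 and np.linalg.norm(residual(x + kappa * dx)) >= np.linalg.norm(r):
            kappa *= 0.5                      # backtracking line search
        x = x + kappa * dx
    return x, maxiter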
§.§.§ Hessian recovery approach
Having discussed the linearisations of the continuous variational formulation of (<ref>), we can proceed to concrete discretisations. Clearly, for bases 𝒱_h ⊂ H^2(), discretisations follow from replacing 𝒰^𝐅→𝒰_h^𝐅. For compatibility with spaces 𝒱_h ⊂ H^1(), in the following, we extend the linearisations from subsections <ref> and <ref> with the weak Hessian recovery approach proposed in <cit.>. In what follows, we assume that τ( · ) = τ_ID( · ).
Assuming sufficient regularity of u: →ℝ and Φ: →ℝ^2 × 2(), the Hessian recovery approach is based on the following integration by parts formula:
∫_ H(u) Φ d = - ∫_∇ u · (∇·Φ) d + ∫_∂∇ u ·( Φ𝐧) dΓ,
wherein 𝐧: ∂→ℝ^2 denotes the outward normal vector on ∂, while the divergence ∇· ( · ) applied to Φ: →ℝ^2 × 2 is taken row-wise. We introduce 𝒰^𝐅 := {𝐯∈ H^1(, ℝ^2) | 𝐯 = 𝐅 on ∂} and 𝒲 := H^1(, ℝ^2 × 2) as well as X := (, H) ∈𝒰^𝐅×𝒲^2 and Σ := (ϕ, Φ) ∈𝒰^0×𝒲^2. Analogous to (<ref>), we base a numerical scheme on the form ℒ^H: SPD^2 × 2×(𝒰^𝐅×𝒲^2 ) ×(𝒰^0×𝒲^2 ), with
ℒ^H(B, X, Σ) = ∫_ϕ_i B H_i d + ∫_( H_i Φ_i + ∇ x_i · (∇·Φ_i ) ) d - ∫_∂∇ x_i · (Φ_i 𝐧) dΓ,
wherein we sum over repeated indices. Note that here, elements Q ∈𝒲^2 are of the form Q = (Q_1, Q_2) ∈𝒲×𝒲 (and are therefore indexed in the same way as vectors). Letting, again, A_μ^k := A(∂_^k), the fixed-point iteration is based on
find X^k+1∈𝒰^𝐅×𝒲^2 s.t. ℱ^H_μ(X^k+1, X^k, Σ) = 0 ∀Σ∈𝒰^0×𝒲^2,
where
ℱ^H_μ(X^k+1, X^k, Σ) := ℒ^H(γ^k_μ A_μ^k, X^k+1, Σ) - μℒ^H(γ^k_μℐ^2 × 2, X^k, Σ).
Following <cit.>, a discretisation replaces 𝒰^𝐅→𝒰_h^𝐅 and 𝒲→𝒲_h := 𝒱_h^2 × 2, with 𝒱_h ⊂ H^1().
Similarly, the Newton approach is based on
𝒩^H_μ(X, Σ) := ℒ^H(γ_μ^ A_μ^, X, Σ)
and seeks the increment ∂ X^k ∈𝒰^0×𝒲^2 as in (<ref>) by taking the Gateaux derivative of 𝒩_μ^H( · , · ) with respect to its first argument. We discretise in the same way as in the fixed-point iteration.
The Hessian recovery approach increases the problem's cardinality from ∼ 2 dim(𝒱_h ) to ∼ 10 dim(𝒱_h). However, we note that ℒ^H( A(∂_), · , · ) is nonlinear only in the first term on the right hand side of (<ref>). As such, the linearisation's bilinear form only needs to be reassembled partially and an efficient implementation can, in fact, operate on the Schur complement of the matrix's constant blocks, making the cardinality increase manageable in practice. For a more in-depth discourse on an efficient implementation, we refer to <cit.>.
§.§.§ Rotation-free approach
This approach adopts the formulation proposed by Gallistl et al. in <cit.>. Here, we restrict ourselves to the choice τ( · ) = τ_NS( · ). Furthermore, for reasons that shall become apparent shortly, we focus exclusively on the fixed-point linearisation.
Rather than directly solving for : →, the rotation-free approach, in simple terms, is based on a formulation which seeks the map's Jacobian J := ∂_ξ. In order for J ∈ H^1(, ℝ^2 × 2) to be the gradient of a two-component function, it requires the rows of J to be rotation-free for which it introduces a suitable Lagrange multiplier. We introduce the space
𝒲^𝐅 := { (𝐯_1, 𝐯_2) ∈ H^1(, ℝ^2) × H^1(, ℝ^2) | the tangential trace of 𝐯_i equals ∂_𝐭𝐅_i in },
where ∂_𝐭 ( · ) denotes the tangential derivative along ∂. Furthermore, we introduce
Q := {q ∈ L^2() | _∂ q d = 0 }.
With Y := (J, 𝐩) ∈ W^𝐅× Q^2 and Σ := (Φ, 𝐪) ∈𝒲^0× Q^2 , the continuous formulation is based on the operator ℒ^rot: SPD^2 × 2× (𝒲^𝐅× Q^2) × (𝒲^0× Q^2) →ℝ, with
ℒ^rot(B, Y, Σ) = ∫_( ∇·Φ_i ) B ∂_ξJ_i d + ∫_(∇×Φ_i ) 𝐩_i d + ∫_(∇×J_i ) 𝐪_i d
and ∇×𝐯 := ∂_ξ_2𝐯_1 - ∂_ξ_1𝐯_2.
The fixed-point linearisation reads:
find Y^k+1∈𝒲^𝐅× Q^2 s.t. ℱ_μ^rot(Y^k+1, Y^k, Σ) = 0, ∀Σ∈𝒲^0× Q^2,
with
ℱ_μ^rot(Y^k+1, Y^k, Σ) := ℒ^rot(γ^k_μ A^k_μ, Y^k+1, Σ) - μℒ^rot(γ^k_μℐ^2 × 2, Y^k, Σ),
where now A^k_μ := A(J^k) + μℐ^2 × 2, with the i-th row of J^k given by J_i^k and, as before, γ_μ^k := γ(A^k_μ).
For the operator utilised in the discretisation, we have to make two adjustments. Firstly, depending on 𝐅: ∂→∂, the discrete space 𝒲_h^𝐅 will generally be empty. As such, we implement the boundary condition weakly. Secondly, as the discrete root Y^k+1_h ∈𝒲_h^𝐅× Q_h^2 of (<ref>) generally does not satisfy ∇×J_h, i = 0 (pointwise) for i ∈{1, 2}, the discrete problem requires stabilisation. Here, we follow the stabilisation proposed in <cit.>. Defining
ϵ(B) := min_tr(B)^2/ B ^2_F - 1, λ( · ) := 2 + √(αϵ( · ))/2 and σ( · ) := √(1 - λ( · )2),
with 0 < α < 1, the stabilisation consists of adding
𝒦_stab(B, J, Φ) := σ(B) ∫_(∇×Φ_i ) (∇×J_i ) (summing over repeated indices)
to the right-hand-side of (<ref>). Here, we use α = 0.9 and in practice, we approximate ϵ(B) ≈ϵ_h(B) by taking the minimum over all evaluations in the abscissae of the quadrature scheme used to compute the integrals. Finally, the stabilised operator ℒ^rot, stab: SYM^2 × 2×𝒲× Q^2 →ℝ, along with the weak imposition of the boundary data reads:
ℒ_η^rot, stab(B, Y, Σ) := ℒ^rot(B, Y, Σ) + 𝒦_stab(B, J, Φ) + η∑_L_j ∈Γ^B1/h_j∫_L_j(∂_t𝐅 - J 𝐭̂) ·( Φ𝐭̂) dΓ,
where 𝒲 := H^1(, ℝ^2 × 2) while h_i denotes the average diameter of all knot spans on L_i ⊂∂ and 𝐭̂ denotes the unit tangent along ∂. The factor η > 0 needs to be taken sufficiently large and in practice, we utilise η = 10^3.
The discrete problem is subject to the same inf-sup condition as the Stokes problem <cit.>. Here, we utilise the subgrid space pair <cit.>. If 𝒲_h = 𝒱_h^2 × 2 and Q_h is constructed by a modification of some finite-dimensional 𝒰_h ⊂ H^1() (to incorporate the zero average condition), this implies that 𝒱_h is uniformly h-refined with respect to 𝒰_h. In practice, the space 𝒰_h results from removing every other knot in the knotvectors utilised to construct the primal space 𝒱_h. As such, the cardinality of the problem is ∼ 4.5 ×dim(𝒱_h) and computational efficiency can be greatly improved by only reassembling the nonlinear part of the fixed-point iteration's global matrix equation. The scheme, in its current form, is not compatible with Newton's method due to the required stabilisation.
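The discrete quantity ε_h(B) entering the stabilisation above is obtained by a pointwise minimum over the quadrature abscissae; a minimal sketch (ours, with B sampled as a list of 2x2 arrays) reads:

import numpy as np

def epsilon_h(B_samples):
    """Discrete Cordes-type quantity: minimum of tr(B)^2/||B||_F^2 - 1 over the
    matrices sampled at the quadrature abscissae (cf. the definition of eps(B))."""
    return min(np.trace(B)**2 / np.sum(B * B) - 1.0 for B in B_samples)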
It remains to be said that the map : → can be recovered by solving:
find ∈𝒰^𝐅, s.t. ∫_(∂_ - J ) ∂_ϕ d = 0, ∀ϕ∈𝒰^0,
and similarly for the discrete counterpart.
§.§.§ C^0-DG approach
Having presented two approaches in mixed form, we now proceed to an approach based on the C^0-DG formulation from <cit.>. A C^0-DG formulation is particularly appealing as it completely avoids auxiliary variables. Furthermore the discrete basis is (by assumption) sufficiently regular in the interior of patches for penalisation to be restricted to the interior patch facets γ_ij∈Γ^I. As opposed to mixed formulations, the C^0-DG approach employs the patchwise exact Hessian while weakly imposing continuity of the map's Jacobian across interior interfaces. Here, we restrict ourselves to the choice τ( · ) = τ_NS( · ). The operator from (<ref>) is adjusted as follows: ℒ→ℒ^DG_η, with ℒ_η^DG: SPD^2 × 2×𝒰^𝐅×𝒰^0→ℝ satisfying
ℒ^DG_η(B, , ϕ) := ∑_k=1^N_p∫__kΔϕ_i B H(x_i) d + η∑_γ_jl∈Γ^I1/h(γ_jl)∫_γ_jl [[ ∇ x_i ]] [[ ∇ϕ_i ] ] dΓ.
Here, [ [ 𝐯 ] ] denotes the (entry-wise) jump term of 𝐯⊗𝐧∈ L^2(γ_ij, ℝ^2 × 2), with 𝐧 the unit outer normal on γ_ij in arbitrary but fixed direction while h(γ_ij) denotes the average diameter of all knot spans on the facet γ_ij. The penalisation parameter η > 0 has to be chosen sufficiently large. In practice, facing geometries with characteristic length scales of 𝒪(1), we utilise η = 10.
The fixed-point iteration as well as the Newton approach are adapted to this formulation simply by replacing ℒ→ℒ^DG_η in ℱ_μ( · , · , · ) and 𝒩_μ( · , · ), respectively, which is not repeated here for the sake of brevity.
§.§ Regularised weak form discretisation
This scheme is based on the weak inverse harmonicity requirement from (<ref>). In what follows, we let 𝒰^𝐅 = {𝐯∈ H^1(, ℝ^2) | 𝐯 = 𝐅 on } while 𝒰^𝐅_bij := {𝐯∈𝒰^𝐅 | 𝐯 is uniformly nondegenerate}. Noting that ^-1() = in , a pullback of the weak inverse harmonicity requirement leads to:
find ∈𝒰^𝐅_bij s.t. ℒ^W(, ϕ) = 0, ∀ϕ∈𝒰^0,
with ℒ^W: 𝒰^𝐅_bij×𝒰^0→ℝ given by
ℒ^W(, ϕ) := ∫_∂_ϕ A(∂_)/det Jd
and A( · ) as in (<ref>). The formulation based on (<ref>) can be regarded as the Galerkin method of the weak inverse harmonicity formulation while Winslow's original approach constitutes the associated Ritz-Galerkin method.
The appearance of J in the denominator, as in Winslow's original approach, prohibits the substitution of degenerate maps, hence the requirement to restrict the domain of ℒ^W(·, ϕ) to 𝒰^𝐅_bij instead of 𝒰^𝐅. However, this limits the scope of algorithms based on (<ref>) to improving the parametric quality of an already (uniformly) nondegenerate map. In order to attenuate this harsh requirement, we employ the regularisation proposed in <cit.>, whose original purpose was regularising the Winslow function by replacing
det J →ℛ_ε(det J), where ℛ_ε(x) := (x + √(4 ε^2 + x^2))/2.
We denote the regularised operator by ℒ_ε^W(·, ·), whose domain is restored to 𝒰^𝐅×𝒰^0. The asymptotic behaviour of (<ref>) reads
lim_x → -∞ℛ_ε(x) = 0 and lim_x →∞ℛ_ε(x) = x, with ℛ_ε(0) = ε.
For ε >0, ℛ_ε∈ C^1(ℝ) and ℛ_ε(x) > 0 ∀ x ∈ℝ. As such, the regularisation can be combined with a gradient-based algorithm acting on (<ref>), such as Newton's method, again replacing 𝒰^𝐅→𝒰^𝐅_h for a discretisation. Heuristically, ε = 10^-4 is a reliable choice as it dramatically increases the radius of convergence and even extends it into the set of degenerate initial iterates in practice. The regularisation is a convenient means to urge a globalised Newton-based root finder to decrease the size of the Newton step in case the updated iterate accidentally leaves the set of NDG maps. In the absence of regularisation, the division by zero typically causes a numerical algorithm to diverge, even when initialised with an NDG initial map.
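A direct transcription of the regularisation, which also illustrates the limiting behaviour stated above, is given below (plain NumPy, our own sketch):

import numpy as np

def regularised_det(detJ, eps=1e-4):
    """Regularisation R_eps(x) = (x + sqrt(4*eps^2 + x^2)) / 2 applied to det J.

    R_eps is smooth and strictly positive for eps > 0: it behaves like x for large
    positive x, tends to 0 for x -> -inf and equals eps at x = 0, which is what
    allows the weak-form scheme to be started from (mildly) degenerate maps."""
    x = np.asarray(detJ, dtype=float)
    return 0.5 * (x + np.sqrt(4.0 * eps**2 + x**2))

# quick check of the limiting behaviour
print(regularised_det([-1e6, 0.0, 1e6], eps=1e-4))   # approximately [0, 1e-4, 1e6]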
The value of ε > 0 can be reduced to ε = 0 in an outer loop, in which case it is almost guaranteed that the resulting map is nondegenerate since for ε→ 0, ℛ_ε^-1( · ) acts as a barrier term, as in Winslow's original approach.
While the discrete root of (<ref>) substituted into the Winslow functional typically yields a value slightly greater than Winslow's global minimum over 𝒰_h^𝐅, we have noticed (<ref>) to converge faster and more reliably than Winslow's original (regularised) formulation. Furthermore, it is plausible to assume that for ε→ 0, (<ref>) has a unique root, while the discretisation of Winslow's approach may produce local minima.
Compared to the NDF discretisations (cf. Subsection <ref>), the radius of convergence is small. As such, the method is best initialised with one of the NDF discretisations' solutions, for which it typically converges in no more than 5 Newton iterations. Furthermore, it provides a convenient way of untangling a degenerate map produced by an NDF discretisation without the need to recompute the map over a refined space. Convergence failure of (<ref>) may furthermore indicate that the set 𝒰_h, bij^𝐅 is empty, thus making refinement mandatory.
§.§ Boundary correspondence requirements
Without aspirations to provide formal proofs, this section discusses the requirements that , and the boundary correspondence 𝐅: → have to satisfy in order for the variational formulations of Section <ref> to be well-posed.
As A(∂_), as defined in (<ref>), has the same characteristic polynomial as the map's metric tensor G_ij = ∂__i·∂__j, it is plausible to assume that a necessary condition for well-posedness of the NDF discretisations is that the harmonic map ^-1: → satisfies
0 < inf_∈Ω det J(^-1) ≤sup_∈Ω det J(^-1) < ∞.
While Theorem <ref> guarantees that ^-1: → is diffeomorphic in , it provides no guarantee that the map is differentiable on the closure of . Failure to satisfy (<ref>) will cause A( · ) to no longer be uniformly elliptic in the exact solution, which may render the problem ill-posed in this case. While this section's algorithms may nevertheless succeed in finding the discrete problem's root, we may expect to encounter conditioning issues in the linearisation's bilinear forms in a refinement study. For J(^-1) to stay uniformly bounded on the closure, we require that 𝐅^-1: ∂→∂ maps the convex corners of ∂ onto the convex corners of ∂ while the smooth segments of ∂ are mapped onto straight line segments of ∂ where we furthermore require that ∂_t 𝐅^-1 be continuous in the vertices that are mapped onto vertices 𝐯_ij∈∂ connecting two sides L_i and L_j of ∂ without creating a corner (and similarly for 𝐅: ∂→∂).
Clearly, mapping a straight line segment of L_i ∪L_j ⊂∂ onto two sides C_i ∪C_j ⊂∂ with a convex corner in the shared vertex will cause J() →∞ in the vertex connecting L_i and L_j. Similarly, mapping the same vertex onto a vertex of ∂ that creates a concave corner will cause a singularity, i.e., J() → 0. While discrete approximations typically remain UNDG, this behaviour will be observable in a refinement study, see Figure <ref>. Since the weak-form operator ℒ^W_ε: 𝒰^𝐅×𝒰^0→ℝ may map into ℝ despite division by zero on ∂, even for ε→ 0, it may allow for boundary correspondences that exhibit less regularity. Notwithstanding the well-posedness of the formulation, we regard discrete maps _h: → that approximate a map : → with a singularity on ∂ as undesirable from a numerical perspective.
In what follows, we refer to a boundary correspondence 𝐅: ∂→∂ that satisfies above requirements as a diffeomorphic boundary correspondence. Section <ref> augments this section's formulations with a mechanism that allows for the creation of diffeomorphic boundary correspondences even when ∂ has no corners.
§.§ Choosing an initial guess
Both the NDF discretisations (cf. Subsection <ref>) and the weak form discretisation (cf. Subsection <ref>) are nonlinear and need to be initialised with a suitable initial map _h^0. On a single patch, NDF-type discretisations are typically initialised using the bilinearly-blended Coons' patch approach. However, since blending leads to potentially complicated polynomial constructions in the multipatch case, we initialise the NDF discretisations by a map ^0 whose components are harmonic in (with the prescribed boundary correspondence). The map ^0 is then approximated by discretising the Laplace equation in the usual way. As is generally nonconvex, the initial map is typically degenerate. With this initial guess, the Newton schemes reliably converge after typically 5 iterations, while the fixed-point iteration requires ∼ 15 iterations.
The weak form discretisation has a smaller radius of convergence and using a harmonic map (in ) as initialisation typically leads to convergence failure. As such, this scheme is initialised with the solution of an NDF scheme, for which it reliably converges after typically 5 Newton iterations (using ϵ = 10^-4 in the regularisation). The value of ε can then be gradually reduced in an outer loop. In practice, this is rarely necessary.
§.§ Numerical Experiments
In this section, we apply the algorithms from Section <ref> to a number of benchmark test cases to experimentally determine the scheme's convergence rates. All schemes, as well as all control mechanisms from Section <ref> have been implemented in the open-source finite element library Nutils <cit.>.
As a first experiment we are considering the bat shaped geometry from Figure <ref> along with the parametric domain from Figure <ref>.
The geometry is a piecewise C^∞ curvilinear polygon whose sides C_i are quadratic polynomials. We therefore expect the harmonic map ^-1: → to satisfy ^-1∈ H^s(, ℝ^2) with 2 ≤ s ≤ 3. The boundary correspondence 𝐅: ∂→∂ (suitably extended into the interior) can be expressed exactly in any finite-dimensional space 𝒱_h with polynomial degree p ≥ 2, thanks to its piecewise quadratic nature. We are estimating the convergence rate for all three NDF discretisations (cf. Subsection <ref>) as well as the weak form discretisation from Subsection <ref> and Winslow's original approach (cf. Subsection <ref>). The NDF discretisations are initialised with the forward-Laplace initial guess (cf. Subsection <ref>) while the weak form discretisation is initialised with the C^0-DG approach's solution and Winslow's minimisation with the weak form discretisation's solution. This is repeated for several levels of h-refinement, each splitting each univariate knotspan in half (without changing the boundary correspondence). For the rotation-free approach, we perform a fixed-point iteration while all other linearisations are based on Newton's method. The fixed point linearisation converges after typically 12 iterations while the Newton approach requires typically 5 for the NDF discretisations and another 3 for the weak form discretisation. The convergence rate is estimated in the H^1()-norm. Denoting three consecutive solutions by _h, _h/2 and _h/4, respectively, the convergence rate is estimated as
κ≈log_2 ( ‖_h - _h/2‖_H^1()/‖_h/2 - _h/4‖_H^1()),
and we utilise the last three levels of refinement to estimate κ in the above. We perform a refinement study assigning a uniform knotvector without internal knot repetitions to each interior and boundary facet whereby the coarsest knotvector contains three interior knots.
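To make the estimator concrete, the following Python sketch evaluates κ from the H^1-discrepancies of three consecutive refinement levels. The discrepancies are assumed to have been computed elsewhere (e.g. by quadrature), and both the function name and the example numbers are illustrative placeholders rather than part of the implementation used in this work.

import numpy as np

def convergence_rate(e_coarse: float, e_fine: float) -> float:
    # kappa = log2( ||x_h - x_{h/2}||_{H^1} / ||x_{h/2} - x_{h/4}||_{H^1} )
    return float(np.log2(e_coarse / e_fine))

# hypothetical discrepancies that would correspond to a rate close to 2
print(convergence_rate(1.0e-2, 2.6e-3))  # ~1.94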
Table <ref> contains the approximate convergence rates of this section's discretisations for various values of the polynomial degree p ≥ 2 while Figure <ref> plots the relative H^1-norm discrepancy between the various refinement level's solutions and the exact solution which is approximated by minimising the Winslow function over a spline space that is one level of refinement ahead of the maximum refinement level in the plots (here: h/16). The table and plots suggest that all discretisations perform similarly well, a notable exception being the C^0-DG approach from Section <ref> which consistently produces the largest relative H^1-norm discrepancy. We note however that the C^0-DG approach is the computationally least expensive method that can be initialised with a degenerate map. The approximate convergence rates improve for larger values of p ≥ 2, eventually reaching saturation for p = 5. This is not surprising given that it is plausible to assume that the largest attainable H^1(, ℝ^2) convergence rate is bounded by the value of 2 ≤ s ≤ 3 associated with the harmonic map ^-1∈ H^s(, ℝ^2). A notable exception is the outcome for p = 3, which consistently ranks below all other choices. Figure <ref> depicts the convergence behaviour of the C^0-DG and weak form discretisations with one additional level of refinement. Applying the convergence rate estimator to the last three consecutive levels of refinement yields κ(ℒ^DG) ≈ 2.18, κ(ℒ^W) ≈ 2.43 and κ(Winslow) ≈ 2.48.
This suggests that the results of Table <ref>, which correspond to refinement levels that are of practical interest, are in the non-asymptotic convergence regime while the asymptotic convergence rate (which corresponds to a practically undesirably large number of DOFs) is slightly better than the table suggests. However, given that the choice p=2 produces a convergence log-plot comprised of nearly straight lines, the results of Table <ref> suggest that the convergence rate is slightly below the maximally attainable rate of κ( · ) = 2 for p=2 in this case, most likely as a result of the nonlinearity. Overall, the results suggest that the two NDF discretisations in mixed form perform similarly well while slightly outperforming the C^0-DG approach at the expense of higher computational costs. Of all the discretisations presented in Section <ref>, the regularised weak form discretisation consistently produces the best results. In fact, it performs only marginally worse than Winslow's original approach, while converging significantly faster and more reliably and furthermore avoiding local minima in practice. In our practical experience the C^0-DG approach, despite being outperformed by the discretisations in mixed form, suffices for the purpose of finding a nondegenerate or nearly nondegenerate initial iterate for initialising the regularised weak form discretisation in the vast majority of cases. As such, a combination of the two methods constitutes the best trade-off between robustness, solution quality and computational costs.
All NDF discretisations converge very reliably in typically 5 iterations when initialised with the forward Laplace initial guess from Section <ref>, making them suitable for use in autonomously operating workflows. In autonomous workflows, combining an NDF discretisation with a posteriori refinement in case nondegeneracy does not carry over to the numerical approximation constitutes the most robust choice. Here, a mixed-form discretisation becomes a viable choice thanks to the better convergence rate. While the computational costs are higher, they remain manageable when operating on the Schur complement of the bilinear form's constant blocks. Overall, the Hessian recovery approach tends to be the better choice in this case, despite the problem's larger cardinality compared to the rotation-free approach, since it can be combined with Newton's method. Accelerating the convergence of the rotation-free approach constitutes a topic for future research. For this, it may be possible to adopt a multipatch generalisation of the preconditioned Anderson acceleration approach from <cit.> which is highly effective in the singlepatch case.
As a final example, we are considering the screw geometry from Figure <ref> which is mapped inversely harmonically into the parametric domain from Figure <ref>, which also shows the knotspans of the bicubic knotvectors with maximum regularity on each individual patch. Since the boundary correspondence is itself a piecewise bicubic spline with maximum regularity, we are only considering the choice p=3 here. The convex corners of are mapped onto the convex corners of in the counterclockwise direction. Table <ref> contains the approximate convergence rates for the various numerical schemes while Figure <ref> shows the approximate H^1-norm distance to the exact solution, as before.
From the table and plot we may largely draw the same conclusions as in the previous example with the C^0-DG approach being outperformed by the other methods while the weak form discretisation fares the best, even slightly outperforming Winslow's method in this example.
§ CONTROL MECHANISMS
§.§ Techniques for Parametric Control
The parameterisations generated by Winslow's original approach (<ref>) or its PDE-based counterparts perform well on a wide range of benchmark problems <cit.>. However, as individual applications may require parameterisations with specific features, such as boundary layers in flow problems or homogeneous cell-sizes in problems subject to a CFL-condition, the techniques from Section <ref> may be too rigid. Clearly, choosing the multipatch covering 𝒬 based on the application's specific needs may provide relief. However, in practice, this may prove too restrictive since the covering remains bilinear, which does not, for instance, allow for the creation of boundary layers.
Parametric control can be achieved in two main ways:
* Augmenting the standard inverse Laplace problem with a nonhomogeneous diffusivity.
* Mapping inversely harmonically into a parametric domain with a curvilinear rather than a Cartesian coordinate system.
Let ϕ: _1 →ℝ^2 satisfy:
i ∈{1, 2}: ∇·( D ∇ϕ_i ) = 0 in _1, s.t. ϕ = 𝐅 on ∂_1.
For point 1., we state the following theorem <cit.>:
Let D ∈SPD^2 × 2(_1) be uniformly elliptic and let ϕ∈ H^1(_1, ℝ^2) ∩ C^0(_1, ℝ^2) be the weak solution of (<ref>). If 𝐅 is diffeomorphic between ∂_1 and ∂_2 and _2 is convex, then ϕ: _1 →_2 satisfies ∂_ϕ≥ 0 a.e. in _1.
Under stronger regularity requirements on D, _1 and _2, Theorem <ref> can be extended to uniform nondegeneracy ∂_ϕ > 0 (a.e. in _1). For details we refer to <cit.>. This means in particular that for merely essentially bounded D ∈SPD^2 × 2(), we need to account for the possibility of ∂_→ 0 or ∂_→∞ in the interior of , which may require stabilisation. Taking _1 = and _2 =, it is reasonable to assume that Theorem <ref> also applies to, for instance, the weak-form approach from Section <ref>, even though it exchanges the dependencies, i.e., () →(). A limitation of introducing a nonhomogeneous diffusivity is that it is currently unknown whether the inverted problem can be cast into a form that does not contain the Jacobian determinant in the denominator, as in the NDF-discretisation from Section <ref>. However, the NDF-discretisations remain highly practical since they can compute a nondegenerate reference solution to initialise an iterative scheme with D ≠ℐ^2 × 2 based on the weak-form discretisation.
For point 2., a coordinate transformation is conveniently accomplished by introducing a controlmap : →. As such, we now allow the target domain of ^-1: → to be a parametric surface, too. In what follows, differential operators receive a subscript to indicate differentiation w.r.t. various coordinate systems. For instance, ∇→∇_ to indicate differentiation w.r.t. the entries of : →.
The introduction of : → furthermore enables creating boundary correspondences 𝐅^→: ∂→∂ that are diffeomorphic between and when has no corners by, for instance, choosing to be the unit disc. For Theorem <ref> to apply to the pair (, ), we require to be convex. We denote the map that maps inversely harmonically into the domain ^ = () by ^(): →. The same map can be converted to the original coordinate system via a pullback. We employ the abuse of notation ^() instead of (()) to indicate a change of coordinate system and assume that the reader is aware of the compositions involved.
[Figure: summary of the dependencies between , , , and .]
In the IGA-setting, parametric control via : → is conveniently achieved by reinterpreting the PDE-based formulations over the Cartesian coordinate system = (ξ_1, ξ_2)^T as problems posed over ^, with the curvilinear coordinate system induced by () = (r_1(ξ_1, ξ_2), r_2(ξ_1, ξ_2))^T. We may then use basic differential geometry identities to express the associated integrals in the original coordinate system via a pullback. As such, the operators from Section <ref> now receive an additional -dependence. However, this does not change the nature of the equations as long as the ∘^i remain diffeomorphisms.
As in point 1., a broad class of reparameterisation methods follows from seeking the controlmap as the solution of (<ref>). More precisely, given a reference controlmap : →, we may find a new controlmap (): → by solving (<ref>) with _1 = _2 = while selecting a diffusivity D that builds desired features into the solution. The map (): →^ then follows from a pullback and a diffeomorphic boundary correspondence 𝐅^→: ∂→ is given by the identity.
If (): → is the identity on ∂, the effect of the coordinate transformation, induced by (): →, on ^(): → can be predicted by noting that ^() = ^∘(). All the dependencies between , , , and the local coordinate system in Ω^□ are summarised in Figure <ref>.
As before, depending on the regularity of the diffusivity, the solutions may contain singularities J → 0 or unbounded growth, i.e., J →∞. While the vanishing or diverging of J is typically avoided by discrete approximations, this behaviour will be observable in a refinement study. For the () ∘^i to be diffeomorphisms, this means that for ∈ H^1(, ℝ^2), jumps in the Jacobian ∂_() may only occur on the (∂_i). As such, we require the diffusivity to be patchwise continuous.
Given a reference controlmap : →, the most general approach combines methods 1. and 2., leading to the coupled system
find ((), ()) s.t. for i ∈{1, 2 }: ∇_·(D^∇__i ) = 0 and ∇_·(D^∇__i ) = 0 in , with = 𝐅^→ and = 𝐅^→ on ∂,
where, typically, 𝐅^→() =. Here, the practically useful dependencies are D^ = D^(, ) and D^ = D^(, ). Note that the first equation is inverted and the unknown becomes the differential operator ∇_ as a function of (): →. The system is associated with a global operator comprised of two separate operators, one for each equation
ℒ^, (, , ϕ_1, ϕ_2, D^, D^) = ℒ^(, ϕ_1, D^, ) + ℒ^(, ϕ_2, D^, ).
For D^ = ℐ^2 × 2, the operator ℒ^(·, ·, ·, ·) can be based on any of the operators from Section <ref>. For D^𝐱≠ℐ^2 ×2 we restrict ourselves to the weak-form discretisation. With = (), the regularised weak-form operator becomes (c.f. equation (<ref>)):
ℒ^W_ε(, ϕ, D^, ) = ∫_(C(∂_) ∇_ϕ) (Q^T(∂_, ∂_) D^(, ) Q(∂_, ∂_) )/ℛ_ε( Q(∂_, ∂_) ) d,
with
Q(∂_, ∂_) := C(∂_) (∇_) and C( · ) as in (<ref>).
The operator corresponding to the second part ∇_·(D^∇_) = 0 reads
ℒ^(, ϕ, D^, ) = ∫_∂_ϕ ( ∂_ D^(, )) d.
We note that the coordinate transformation ∂_→∂_() in the NDF operators from Section <ref> reintroduces the Jacobian determinant ∂_ in the denominator upon pullback of the equations from into . As such, an iterative algorithm has to be initialised with a nondegenerate controlmap ^0: → when ^: → and : → are coupled via D^ or D^. In this case, we recommend basing the scheme on (<ref>) instead. In the classical literature, parametric control via , rather than through a pullback, is accomplished by introducing additional terms in (<ref>) <cit.>. While this formulation enables removing ∂_ from the denominator, it is not applicable if ∉ H^2(, ℝ^2) since it requires second-order derivative information of (), making it unsuited for this paper's use-cases. While it may be possible to reduce the regularity requirements of : → in a way analogous to Section <ref>, this is beyond the scope of this paper.
Given a reference controlmap : →, the controlmap : → is conveniently built from a push-forward of the same finite-dimensional space 𝒱_h used to represent the map _h: →. As mentioned before, substituting a degenerate intermediate controlmap : →, produced by an iterative root-finding algorithm applied to the coupled system, may cause problems due to division by zero. In practice this is avoided by initialising the scheme by the tuple (^, ) (i.e., the solution for D^ = D^ = ℐ^2 × 2 over the reference controlmap : →) which is computed using one of the NDF-discretisation from Section <ref>. The barrier term in (<ref>) then prevents intermediate iterates (^, )^i from leaving the set of nondegenerate maps. As before, a discretisation takes the test functions (ϕ_1, ϕ_2) from the finite-dimensional space 𝒰_h^0×𝒰_h^0 and finds the root using Newton's method. In practice, the coupled scheme converges reliably for a wide range of diffusivities D^, D^ when initialised with (^, ).
Depending on the choice of D^ and D^, the solution of the coupled system may no longer be uniformly nondegenerate (even for boundary correspondences that lead to UNDG maps for D = ℐ^2×2). To avoid singularities, we shall often introduce a stabilisation on the patch vertices. As such, let Γ^v = {𝐯^1, …, 𝐯^N_v}⊂ be the set of patch vertices shared by at least two patches, i.e.,
Γ^v := {𝐯∈ | ∃ (i, j) ∈{1, …, N_p}×{1, …, N_p} s.t. _i ∩_j = {𝐯}}.
The singularities are avoided by introducing an appropriate regularisation. For this purpose, we introduce the Gaussian blending functions
g_i^κ() := A_i exp(-(κ/d_i^min‖ - (𝐯^i)‖ )^2 ), with d_i^min := min{‖𝐯^i - 𝐯^j‖ | j ∈{1, …, N_p}∖{i}}
and κ > 0. Here, the A_i are chosen such that
∀𝐯^j ∈Γ^v: ∑_i=1^N_v g_i^κ((𝐯^j)) = 1.
Let D be the diffusivity in question and let 𝒟_i = {D_i^1, …, D_i^q } be the set containing the limits
D_i^j := lim_→𝐯^i D(()) s.t. ∈_j for each patch _j with _j ∩{𝐯^i} = {𝐯^i }.
We define D_i as the average of the D_i^j ∈𝒟_i, i.e.,
D_i := 1/|𝒟_i|∑_D_i^j ∈𝒟_i D_i^j.
The regularisation ensures that the regularised D^κ(D) ∈SPD^2 × 2() is single-valued in the (𝐯^i) by replacing
D →D^κ(D) := ( 1 - ∑_i=1^N_v g_i^κ) D + ∑_i=1^N_v g_i^κD_i.
The decay rate κ > 0 in (<ref>) tunes the degree of regularisation and is relatively insensitive to the characteristic length-scale of Ω^ thanks to the scaling by d_i^min. It should be noted that other regularisations exist besides (<ref>).
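As an illustration of how the regularisation can be evaluated pointwise, consider the following Python sketch. Since the extracted formulas above omit some norm symbols, the precise Gaussian argument, the normalisation of the A_i (realised here through a small linear solve enforcing the partition-of-unity condition at the vertex images) and all function and variable names are assumptions rather than the exact definitions or implementation; in particular, distances between the vertex images are used throughout for simplicity.

import numpy as np

def blending_amplitudes(vertex_points, kappa):
    # vertex_points: (N_v, 2) array of vertex images; returns the amplitudes A_i and d_i^min
    dists = np.linalg.norm(vertex_points[:, None, :] - vertex_points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    d_min = dists.min(axis=1)                                   # d_i^min per vertex
    # M[j, k]: value of the k-th (unscaled) Gaussian at vertex j; the diagonal equals one
    M = np.exp(-(kappa * np.where(np.isinf(dists), 0.0, dists) / d_min[None, :]) ** 2)
    A = np.linalg.solve(M, np.ones(len(vertex_points)))         # sum_i g_i = 1 at every vertex
    return A, d_min

def regularised_diffusivity(r, D_at_r, vertex_points, D_vertex, A, d_min, kappa):
    # blend the raw 2x2 diffusivity D_at_r with the single-valued vertex averages D_vertex
    g = A * np.exp(-(kappa * np.linalg.norm(r - vertex_points, axis=1) / d_min) ** 2)
    return (1.0 - g.sum()) * D_at_r + np.einsum('i,ijk->jk', g, D_vertex)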
To better see what the effect of reparameterising under a nonhomogenous diffusivity D is, we note that (<ref>) is the Euler-Lagrange equation of
min_ϕ: _1 →_21/2∫__1tr(∇_ϕ^T D ∇_ϕ) d s.t. ϕ = 𝐅 on ∂_1.
The interpretation as a minimisation problem is helpful in predicting the effect of D on ϕ: _1 →_2.
§.§ Patch interface removal
The images of local (in ^□) isolines under the ^() ∘^i will generally form a (possibly steep) angle across patch interfaces when joined together on . In certain applications, it may be desirable to decrease or largely remove the steep interface angles. As : → is diffeomorphic in , a controlmap : → that removes steep angles will, by extension, remove them in the recomputed map ^: →. As such, interface removal can be regarded as an a priori step since it requires no prior knowledge of : →.
We would like to accomplish
∀γ_jk∈Γ^I: [[ (∂_^⊥())]] = 0 on (γ_jk),
wherein [[ · ]] now denotes the ordinary entry-wise jump term while ∂_^⊥ denotes the directional derivative transversal to (γ_ij) (i.e., either ∂_μ_1 or ∂_μ_2 on (γ_jk^+) and (γ_jk^-)). Requirement (<ref>) can be weakly enforced by utilising in (<ref>) the diffusivity
D^(, ) = D^_Γ^I() := ∂_μ_1⊗∂_μ_1 + ∂_μ_2⊗∂_μ_2 on (Ω̂_i).
Meanwhile, if D^ = ℐ^2 × 2, (<ref>) is decoupled and the map (): → can be computed from a degenerate initial guess using an NDF-discretisation. The diffusivity from (<ref>) urges : → to map the patchwise isolines () smoothly across patch interfaces.
As the magnitude of ∂_μ_i depends on ∘^i: ^□→(_i) (and may therefore be subject to considerable changes between patches), the best results are obtained by a normalisation, i.e.,
D^(, ) = D^_Γ^I() := ∂_μ_1⊗∂_μ_1 + ∂_μ_2⊗∂_μ_2 on (Ω̂_i), with ∂_μ_i := ∂_μ_i/‖∂_μ_i‖.
Note that
tr(ℐ^2 × 2) = tr(D^_Γ^I() ) = 2.
The normalisation has a similar effect as minimising (<ref>) while now suppressing jumps in the normalised transversal component of the (γ_jk) with γ_jk∈Γ^I, i.e., we are penalising jumps in the transverse direction but not the direction's magnitudes.
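A pointwise Python sketch of this normalised diffusivity is given below; the two tangent vectors are assumed to be supplied by an evaluation of the reference controlmap's patchwise geometry, and the function name is a placeholder.

import numpy as np

def interface_removal_diffusivity(dr_dmu1, dr_dmu2):
    # build D = t1 t1^T + t2 t2^T from the normalised patchwise tangent vectors
    t1 = dr_dmu1 / np.linalg.norm(dr_dmu1)
    t2 = dr_dmu2 / np.linalg.norm(dr_dmu2)
    D = np.outer(t1, t1) + np.outer(t2, t2)
    assert np.isclose(np.trace(D), 2.0)   # tr(D) = 2, as noted above
    return D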
We are considering the screw geometry from Figure <ref> and perform interface removal with normalisation. Figure <ref> shows the resulting reparameterisation. As a measure of the degree of interface removal, we utilise the following value
L_Γ^2() = ∑_γ_jk∈Γ^I∫_(γ_jk) [[ ∂_^⊥()]] ^2 dΓ,
where ∂_^⊥ denotes the normalised directional derivative transverse to (γ_jk). With
L_Γ(_h^)/L_Γ(_h)≈ 0.0998
the technique is highly effective.
As stated in Theorem <ref>, methods based on (<ref>) do not exclude singularities in : →^. While singularities are in practice avoided by discrete approximations, for merely essentially bounded D^∈SPD^2 × 2(), we may expect
inf_∂_→ 0 or sup_∂_→∞
in a refinement study. For patchwise continuous D^∈SPD^2 × 2(), singularities (and unbounded gradients), if present, are located in the (𝐯^i) with 𝐯^i ∈Γ^v. The creation of singularities can be avoided by employing the stabilisation from (<ref>). We are considering the rectangular parametric domain = along with an irregular multipatch covering depicted in Figure <ref>.
We perform a refinement study of (normalised) interface removal with and without regularisation, initially assigning a uniform cubic knotvector with three internal knots to each side L_i ∈Γ^B and facet γ_ij∈Γ^I. Each refinement h → h/2 halves the knotvector's knotspans. We are monitoring the value of ν_Γ( · ) := L_Γ( · ) / L_Γ() for the original : → and its regularised counterpart ^reg: →, as well as the values min J_𝐯( · ) and max J_𝐯( · ) which are the minimum and maximum values of ∂_( · ) over all patch vertices (𝐯^i). As ∂_( · ) is not single-valued in the 𝐯^i ∈Γ^v, we define this value as the minimum / maximum of taking the limit on each adjacent patch. Table <ref> contains the reference values in the absence of regularisation while Table <ref> contains the corresponding values for the regularisation D^→D^κ(D^) with κ = 9. Furthermore, Figure <ref> shows the controlmap with and without regularisation after the last refinement level. Table <ref> clearly demonstrates that min J_𝐯( · ) and max J_𝐯( · ) shrink / grow unboundedly in the absence of regularisation while Table <ref> demonstrates that regularisation prevents further shrinkage / growth under refinement.
As expected, the regularisation also prevents monotone decrease of ν_Γ(^reg), eventually settling for a value of ∼ 0.4 L_Γ() relative to the reference value of Figure <ref>. Meanwhile, the corresponding value shrinks unboundedly in the absence of regularisation.
For larger values of κ, we expect the discretisation to settle for a lower value of ν_Γ(^reg) at the expense of reducing / increasing the values of min J_𝐯(^reg) and max J_𝐯(^reg). Table <ref> contains the outcomes for regularisation with κ = 18.
Indeed, the table confirms this expectation, settling for a value of ∼ 0.3 L_Γ() while roughly doubling the shrinkage / growth of ∂_(^reg) compared to κ = 9.
We conclude that the proposed regularisation is an effective means to tune the degree of interface removal at the expense of cell size homogeneity reduction. In practice, an appropriate choice of the decay rate κ > 0 is furthermore relatively insensitive to the average distance between the 𝐯^i ∈Γ^v, thanks to the scaling by d_i^min in (<ref>).
§.§ Cell size homogenisation
A popular measure for the parameterisation's cell size homogeneity is the Area functional
L_Area() = ∫_( J() )^2 d,
which measures the variance of J() over , with smaller values indicating better homogeneity. In the multipatch setting, it is more natural to measure the homogeneity on each individual patch and summing over all patches
L_Area() := ∑_i=1^N_p∫_^□( ∂_ (() ∘^i) () )^2 d,
wherein = (μ_1, μ_2)^T denotes the free coordinate functions in ^□. Direct minimisation of (<ref>) over the map's inner controlpoints leads to a nonconvex problem which is furthermore prone to yielding degenerate maps.
In the context of the coupled system (<ref>), there are two main ways to achieve homogenisation without having to resort to nonconvex optimisation:
* Designing a diffusivity D^(, ^) that contracts / expands the cell sizes of : ^→^ wherever the cell sizes of ^: ^→ are large / small, while taking D^(, ^) = ℐ^2 × 2.
* Picking D^ = D^() (i.e., D^ has no dependency on ^) while designing a diffusivity D^ = D^(^) that encourages cell size homogenisation.
As for method 1., we notice that for D^ = ℐ^2 × 2, the solution of the inverse Laplace problem is merely a property of the shapes and as well as the diffeomorphic boundary correspondence 𝐅^→: ∂→∂. As such, a controlmap : → that is the identity on ∂ computes the composition ^() = ^∘(). Therefore, we may require () to contract cells in (_i) wherever ∂_^() is large and vice versa. We may also choose to penalise based on ∂_^() or ∂_^() to reduce the Area functional in a different coordinate system if desired.
To cast this problem into the form of (<ref>), we contract the cells of () by penalising the value of tr(G^→), where G^→ denotes the metric between the coordinate systems induced by : →^ and : ^→^. This is accomplished by introducing D^(, ^) = σ(^) ℐ^2 × 2, where σ(^) assumes large values in regions where contraction is desired and vice-versa. For instance
σ^k(^)() = (∂_^())^k on (_i),
where larger values of k > 0 lead to a more drastic homogenisation. As such, we are solving the coupled system (<ref>) with D^ = ℐ^2 × 2 and
D^(^) = σ^k(^) ℐ^2 × 2.
The contraction under σ^k() has a similar effect as operating on (∂_)^2 directly while being inherently less prone to yielding degenerate discrete maps.
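A minimal pointwise sketch of this diffusivity in Python reads as follows; `jacobian_det` stands for a hypothetical callable returning the Jacobian determinant of the reference map at a point, and the default k = 2 merely mirrors the example discussed below.

import numpy as np

def homogenisation_diffusivity(jacobian_det, r, k=2.0):
    # sigma^k is large where the reference map's cells are large, encouraging contraction there
    sigma_k = jacobian_det(r) ** k
    return sigma_k * np.eye(2)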
While a formal proof is lacking, it is plausible to assume that the coupled system is well-posed under this choice of D^ since for any bijective : ^→^ (and given a diffeomorphic boundary correspondence 𝐅^→), the coupled system (<ref>) approximates a UNDG map ^: ^→ that satisfies σ^k(^) > 0 (a.e.) such that D^(^) ∈SPD^2 × 2(^). However, a root-finding algorithm may diverge in case the Newton increment accidentally causes ^ to leave the set of UNDG maps. In practice, this is avoided by the barrier property of (<ref>) and the scheme converges reliably using Newton's method with line search for a wide range of choices k > 0 when the scheme is initialised with the tuple (^, ) (i.e., the reference solution and reference controlmap).
We are again considering the screw geometry depicted with the bilinearly covered parametric domain before reparameterisation in Figure <ref>. Here, the reference solution corresponds to () =.
Figure <ref> shows the geometry along with the associated controlmap () after reparameterisation with k=2. Denoting the reparameterised maps by _h^k, we define
ν_Area^k := L_Area(_h^k)/L_Area(_h^0) and ν_ J^k := sup∂__h^k/inf∂__h^k,
where the latter is approximated by sampling over the abscissae of a dense Gauss-Legendre quadrature scheme.
Table <ref> contains the values of ν_Area^k and ν_ J^k for various values of k ∈ [0, 3]. The table clearly demonstrates that the methodology has the desired effect, reaching saturation for larger values of k. Furthermore, all parameterisations with k > 0 significantly reduce the anisotropy of ∂__h^k. We mention that the diffusivity is merely essentially bounded since σ^k(^) is generally patchwise discontinuous. This may lead to singularities or unbounded growth (for an example of a scalar elliptic problem in which a diffusivity that is a scaling times the identity creates a singularity, see <cit.>). However, in practice singularities are avoided and the problem requires no stabilisation. A possible explanation is that cell size homogenisation counteracts the tendency to generate singularities for discrete approximations.
As a second example, we are considering the geometry depicted in Figure <ref> (right) along with the parametric domain given by a regular six-sided polygon. Unlike the geometry from Figure <ref>, this geometry has no corners. As such, we take to be the unit disc where the boundary correspondence 𝐅^→: ∂→∂ is chosen such that the induced correspondence 𝐅^→: ∂→∂ is diffeomorphic. The interior of is parameterised by applying the bilinearly-blended Coons' patch approach to each patch of individually. The reference parameterisation ^(): → and the associated reference controlmap : → are depicted in Figure <ref>.
In this example, we are combining the techniques from this section with those from Section <ref>. We employ the reference parameterisation : → in combination with the diffusivity from (<ref>) to map onto itself. This results in the new reference controlmap : → that removes the patch interfaces. The new parameterisation of and the associated new reference map ^: → are depicted in Figure <ref>. Here, we do not stabilise using (<ref>) since the discrete approximation remains uniformly nondegenerate with acceptable behaviour in the vicinity of the patch vertices.
Table <ref> contains the values of
ν^k_Area := L_Area(^k_h)/L_Area(^_h) and ν^k_Γ := L_Γ(_h^k)/L_Γ(^_h), with L_Γ( · ) as in (<ref>),
for various k ∈ [0, 4]. Figure <ref> shows the parameterisation of the geometry for k = 0 and k=4. The table clearly demonstrates a monotonous reduction of ν^k_Area (eventually reaching saturation for larger values of k) at the expense of a slight increase in the value of ν^k_Γ.
For more precise control over the expansion / contraction of cells, it can be helpful to decompose the diffusivity into the scaling σ^k(^) times the sum of two symmetric rank one tensors, i.e.,
D(^) = 2/(a_i + 1)σ^k(^) ( a_i 𝐯̂^i, 1⊗𝐯̂^i, 1 + 𝐯̂^i, 2⊗𝐯̂^i, 2), on (Ω̂^i), where a_i > 0.
Here, 𝐯̂^i, 1 and 𝐯̂^i, 2 have length one and are not parallel. Note that
tr(2/(a_i + 1)( a_i 𝐯̂^i, 1⊗𝐯̂^i, 1 + 𝐯̂^i, 2⊗𝐯̂^i, 2) ) = 2, as before.
If the 𝐯̂^i, j are patchwise discontinuous, we note that the diffusivity may require stabilisation. Taking a_i large will force : → to predominantly slide in the direction of 𝐯^i, 1 on (Ω̂^i).
For large values of k, cell size homogenisation can lead to a considerable degree of cell skewness close to the boundary. As an example, Figure <ref> (center) shows the homogenisation of a geometry with reference parameterisation depicted in Figure <ref> (left) using (<ref>) under D^(^) = σ^k(^) ℐ^2 × 2 with k=3.5. This effect can be avoided by setting 𝐯̂^i, j = ∂__j, where 𝐯̂^i, 1 = ∂_^⊥ for boundary patches, while taking the a_i large close to the boundary of . The result for k = 3.5 is depicted in Figure <ref> (right). With ν^k_Area = 0.515 for the former and ν^k_Area = 0.518 for the latter, the homogenisation is only marginally less effective under (<ref>). However, the latter completely avoids cell skewness close to the boundary while sacrificing some regularity across the patch interfaces.
In contrast to method 1., method 2. requires D^ to be a function of only and, for convenience, we assume D^ = ℐ^2 × 2 such that () =. The cell size homogenisation is now encouraged through a proper choice of D^(, ^). Similar to method 1., we take D^(, ^) = ω() ℐ^2 × 2, for some ω() > 0. Method 2. has the advantage of decoupling the system from (<ref>). This reduces the problem size of the iterative root-finding algorithm, which now computes only ^ instead of the tuple (^, ), with () = ().
This choice of D^(, ^) encourages the contraction of () isolines in wherever ω() is large and vice-versa. Exchanging the dependency () →(), the isolines will now be contracted in regions where ω() is small. Inspired by method 1., we define the family of monitor functions
ω(^)^k() := (∂_^() )^-k on (_i).
As such, we are solving the decoupled system with D^(^) = ω^k(^) ℐ^2 × 2. Clearly, for a root-finding algorithm to converge, the value of ∂_^() has to stay positive.
As before, the barrier property of (<ref>) prevents intermediate iterates from leaving the set of UNDG maps and the scheme converges reliably for a wide range of k > 0 when initialised with the solution of one of the NDF formulations. We are considering the geometry with reference parameterisation from Figure <ref>. The same figure shows the bilinearly covered parametric domain.
We are monitoring the values of ν^k_Area and ν^k_ J (cf. (<ref>)) for k ∈{0, …, 8}. Table <ref> contains the associated values, while Figure <ref> depicts the homogenised parameterisations for three different values of k.
The table clearly demonstrates that the methodology is highly effective at homogenising the cell sizes in the local coordinate systems. We also observe a significant reduction in the anisotropy of ∂__h^k, which is reduced from the initial ν_ J^k = 54.7 to ν_ J^k = 6.05 for k=8.
§.§ Grid Adaptation
In various applications it can be desirable to contract the map's isolines in regions where a large value of a function or its gradient is assumed. Given a function f: →ℝ^+, with f ∈ C^∞(), the clustering of isolines can be achieved by designing diffusivities that contract the map's cell sizes in regions where f: →ℝ^+ is large (and vice-versa). The most basic choice is D^(, ) = ℐ^2 × 2 and D^(, ) = σ() ℐ^2 × 2. As in Section <ref>, this choice decouples (<ref>) and the first equation can be regarded as the Euler-Lagrange equation of
min_: →∫_σ() tr( G^→) d s.t. = (𝐅^→)^-1 on ∂.
As before, upon exchanging the dependencies () →(), the isolines will be contracted in regions where σ() is small. To contract cells in the vicinity of large function values of f: Ω→ℝ^+, we design a suitable monitor function σ(·). A possible choice is given by <cit.>:
σ() = 1/(ν_1 f()^k + ν_2), or σ() = 1/(ν_1 ‖∇ f()‖^k + ν_2) for gradient penalisation.
Here, ν_2 > 0 avoids division by zero in case f → 0 and the parameters ν_1 > 0 and k > 0 tune the degree of penalisation. A numerical scheme is best initialised with the nondegenerate reference solution (i.e., the solution for σ() = 1).
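In Python, these monitor functions could be sketched as follows; `f` and `grad_f` stand for user-supplied callables and the default parameter values mirror the example that follows, rather than being prescribed by the method itself.

import numpy as np

def monitor(x, f, nu1=1.0, nu2=0.01, k=1.0):
    # contracts cells where f is large; nu2 avoids division by zero as f -> 0
    return 1.0 / (nu1 * f(x) ** k + nu2)

def monitor_gradient(x, grad_f, nu1=1.0, nu2=0.01, k=1.0):
    # gradient-penalisation variant
    return 1.0 / (nu1 * np.linalg.norm(grad_f(x)) ** k + nu2)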
We are considering the screw geometry with reference parameterisation from Figure <ref>. Here, we take () = and =. We would like to contract cells based on the function value of a ring-shaped function f ∈ C^∞() using ν_1 = 1, ν_2 = 0.01 and k = 1. Figure <ref> depicts the result of reparameterising under D^() along with an arrow plot showing the movement of a select number of points with respect to the reference map.
The figure shows a strong contraction of cells in the vicinity of large function values, clearly demonstrating that the methodology has the desired effect. The cell contraction can be increased by increasing the value of ν_1 or k. However, strong penalisation can have unpredictable effects on the cells, in particular close to the patch vertices.
In practice, this can be avoided by performing patch interface removal using the techniques of Section <ref>. We are considering the same example as before while now performing interface removal with stabilisation in . Figure <ref> shows the images of locally drawn isolines in along with the result of performing cell contraction using the same parameters.
Compared to Figure <ref>, isolines crossing the patch interfaces exhibit less erratic behaviour and align well with the ring-shaped function.
To demonstrate that the proposed technique (in combination with interface removal) is effective when applied to geometries with fewer symmetries than in Figure <ref>, we refer to Figure <ref>.
§.§ Boundary Orthogonality
Many applications favour a parameterisation in which locally drawn -isolines intersect the boundary ∂Ω at a right angle. Unfortunately, it is not possible to simultaneously impose Dirichlet and Neumann data on the inverted elliptic equations. As such, boundary orthogonality has to be enforced through an appropriate coordinate transformation in the parametric domain , wherein we assume = for convenience. Furthermore, we assume that each _i coincides on ∂ with exactly one of the edges L_k ∈Γ^B. Here, we give a multipatch generalisation of the singlepatch method proposed in <cit.>. Given the reference solution _h: → over the original bilinearly covered parametric domain , we introduce the maps
_h^i() := _h ∘^i, for boundary patches _i ∈𝒬^B
and we denote _h^i(^□) := Ω_i ⊂Ω. Without loss of generality, we may assume that each _i is oriented such that μ_1 and μ_2 correspond to the directions tangential and transversal to ∂Ω, respectively. Denoting the eastern, western, southern and northern segments of ∂^□ by γ_e, γ_w, γ_s and γ_n, respectively, we may furthermore assume the orientation is such that γ_n is mapped onto C_k ⊂∂Ω under _h^i. We denote the associated sides of _i under the map _h^i by Γ_e^i, Γ_w^i, Γ_s^i and Γ_n^i = C_k ⊂∂Ω. We are seeking a function f_i: Ω_i →ℝ that satisfies homogeneous Neumann boundary conditions on Γ_n^i. For this, we solve
Δ f_i = 0 in _i, s.t. f_i(()) = μ_1 on Γ_e^i∪Γ_w^i∪Γ_s^i and ∂ f_i/∂𝐧 = 0 on Γ_n^i,
where 𝐧 denotes the unit outward normal vector on ∂Ω_i. A pullback leads to
Δ__h^i f_i = 0 in ^□, s.t. f_i() = μ_1 on γ_e∪γ_w∪γ_s and ∂ f_i/∂𝐧 = 0 on γ_n,
whose discretisation imposes the Neumann data through partial integration in the usual way. The restriction q_i(μ_1) := f_i |_γ_n will be a monotone function over μ_1 ∈ [0, 1] thanks to the imposed Dirichlet data and the maximum principle. Given a diffeomorphism ^i: ^□→^□ that satisfies ^i |_γ_n = q_i and ∂_μ_2^i_1 = 0 on γ_n, a map ^i: ^□→Ω_i that maps inversely harmonically into Ω^□ with the coordinate system induced by ^i, will map local μ_1 isolines onto isolines in Ω_i that intersect C_k = ∂Ω∩Ω_i at a right angle, thanks to the Neumann data we imposed on Γ_n^i. Two possible choices are given by
^i_1() = q_i(μ_1) and ^i_1() = t_i() with t_i() := (1 + 2 μ_2) (1 - μ_2)^2 μ_1 + (3 - 2 μ_2) μ_2^2 q_i(μ_1),
while ^i_2() = μ_2. Here, the former maps straight μ_1 isolines onto straight q_i(μ_1) isolines in ^□ while the latter maps the same isolines onto curves that start at = (μ_1, 0) and end in = (q_i(μ_1), 1) while intersecting γ_n at a right angle. Note that the latter furthermore satisfies ^i() = on γ_s.
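For concreteness, both choices can be written as small Python functions; `q` stands for the restriction q_i obtained from the Neumann problem above and is assumed to be available as a callable, and the function names are placeholders.

def tilde_mu1_direct(mu1, mu2, q):
    # first choice: map mu_1 directly onto q_i(mu_1), independently of mu_2
    return q(mu1)

def tilde_mu1_blend(mu1, mu2, q):
    # second choice: cubic blend equal to mu_1 at mu_2 = 0 and q_i(mu_1) at mu_2 = 1,
    # with vanishing mu_2-derivative at mu_2 = 1 (as required on gamma_n)
    return (1 + 2 * mu2) * (1 - mu2) ** 2 * mu1 + (3 - 2 * mu2) * mu2 ** 2 * q(mu1)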
With this choice of ^i, a map : → that maps inversely harmonically into with the coordinate system induced by the controlmap : → that satisfies
() = ^i ∘^i, or equivalently () = ^i ∘^i ∘ (^i)^-1 on _i ∈𝒬^B,
will now map the images of local μ_1-isolines under ^i onto isolines in _i that intersect ∂Ω at a right angle. The controlmap : → that leads to boundary orthogonality is hence known for boundary patches _i ∈𝒬^B. For the choice ^i_1() = t_i(), the controlmap : → can be taken as the identity on patches _k ∉𝒬^B.
For the choice ^i_1 = q_i(μ_1), on the other hand, the partially-known controlmap induces a reparameterisation of the interior facets γ_ij∈Γ^I with _i ∈𝒬^B or _j ∈𝒬^B. As such, the original bilinear parameterisations of patches _k ∉𝒬^B will no longer be conforming to the images of boundary patches under (). In this case, we may require the controlmap to satisfy () |_γ_ij = for facets γ_ij∈Γ^I that are not associated with one of the boundary patches. With that, the map : → is now known on the boundary patches and on all interior facets. The interior of the remaining patches _k ∉𝒬^B can now be parameterised from the curves |_γ_ij, γ_ij∈Γ^I one-by-one using, for instance, the bilinearly-blended Coons' patch approach. The result is a controlmap : → that leads to boundary orthogonality and is furthermore conforming across all interior facets.
The controlmap can be projected onto 𝒱_h^2, where we note that for ^i_1 = t_i(), 𝒱_h has to be patchwise bicubic in order for the projection to be exact.
We are again considering the screw geometry whose reference controlmap and parameterisation are depicted in Figure <ref>. As a measure for the degree of boundary orthogonalisation, we utilise
L_⊥^2() := ∑__i ∈𝒬^B ∫_∂∩_i(∂_^⊥·∂_^∥)^2 dΓ,
wherein ∂_μ^∥( · ) and ∂_μ^⊥( · ) denote the normalised tangential and transverse derivatives with respect to on _i ∈𝒬^B, respectively.
Figure <ref> shows the reparameterisations of under : → for the choices ^i_1() = q_i(μ_1) and ^i_1() = t_i(), respectively, while Figure <ref> shows the associated parameterisations of after recomputation under : →.
With
L_⊥(^)/L_⊥(^) = 0.172 for _1^i() = q_i(μ_1) and L_⊥(^)/L_⊥(^) = 0.169 for _1^i() = t_i(),
both choices are similarly effective, wherein the small discrepancy is explained by differing truncation errors. Figure <ref>, left, reveals that the choice _1^i() = q_i(μ_1) leads to a strong clustering of cells close to the patch interfaces, which may not be desirable. This is not the case for Figure <ref>, right. Meanwhile, the former maps straight isolines in ^□ onto straight isolines on the _i ∈𝒬^B, a property that can be exploited when combining boundary orthogonalisation with the creation of a boundary layer (see Section <ref>).
§.§ Boundary layers
Many applications in computational fluid dynamics deal with PDE-problems whose solutions are known to create a steep gradient in the vicinity of the boundary ∂. To capture important features of the solution, such applications favour parameterisations with a dense clustering of cells in the vicinity of ∂Ω. This can be achieved by introducing a controlmap : → that clusters locally-drawn -isolines close to ∂, while potentially sacrificing some cell density in the interior. We assume that = is a regular 2n-sided polygon with radius one, centered at = (0, 0)^T. As before, we assume that μ_1 = const refers to the local -direction transverse to ∂ for boundary patches. A convenient way to create a boundary layer is utilising a diffusivity of the form
D^() = (1 - exp(-μ^2)) ^k ( ⊗) + νℐ^2 × 2,
for some μ≫ 1, ν≪ 1 and k > 0. Here, := / while the prefactor (1 - exp(-μ^2)) in (<ref>) avoids the singularity at the origin. Taking ν small urges the controlmap : → to map points ∈ in radially outward direction with a tight clustering close to ∂. If the locally-drawn μ_1 isolines of boundary patches _i ∈𝒬^B align with straight rays drawn from the origin in radially outward direction as in Figure <ref>, the diffusivity from (<ref>) will furthermore largely preserve the intersection angle of the μ_1 isolines with the boundary ∂Ω (w.r.t. the choice D^ = ℐ^2 × 2). Since the controlmap creation is an a priori step, the methodology is compatible with all operators from Section <ref>.
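Because the extracted formula omits some symbols, the following Python sketch spells out one consistent reading of the diffusivity, with the radial dependence entering through the distance to the origin and the associated unit vector; the default parameters mirror the example below and the function name is hypothetical.

import numpy as np

def boundary_layer_diffusivity(r, mu=30.0, k=2.0, nu=0.005):
    rad = np.linalg.norm(r)
    if rad == 0.0:
        return nu * np.eye(2)          # the prefactor removes the rank-one term at the origin
    rhat = r / rad
    return (1.0 - np.exp(-mu * rad ** 2)) * rad ** k * np.outer(rhat, rhat) + nu * np.eye(2)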
We are considering the screw geometry with the reference parameterisation from Figure <ref>. Figure <ref> shows the reparameterisation using (<ref>) with μ = 30, k = 2 and ν = 0.005.
The figure shows a strong clustering of transverse isolines close to the boundary. The clustering intensity can be increased by increasing the value of k.
As a second example, we are considering the female screw geometry with reference controlmap and parameterisation shown in Figure <ref>. While ≠ is now given by the unit disc, the boundary patches are oriented such that the μ_1 transverse isolines align with straight rays that intersect the origin, as required. As such, we replace → in (<ref>) and expect the diffusivity to largely preserve the intersection angle of transverse isolines with ∂ and ∂. Figure <ref> shows the result of boundary layer creation under (<ref>) using the same parameters as in Figure <ref>.
In both examples, the layer density is intensified / reduced close to the inward-facing and protruded parts of ∂Ω, respectively. This effect may be counteracted by introducing a position-dependent clustering parameter, i.e., k → k(). However, this is beyond the scope of this paper.
As a final example, we are combining boundary layer creation with boundary orthogonality. For this purpose, we assume that we are in possession of a controlmap : → that orthogonalises transverse μ_1 isolines for boundary patches _i ∈𝒬^B. Assuming again that the orientation is chosen such that the transverse direction is given by μ_1 = const for the _i ∈𝒬^B, we restrict ourselves to the choice _1^i() = q_i(μ_1) (cf. (<ref>)) such that : → maps straight μ_1 isolines onto straight μ_1 isolines for boundary patches, as in Figure <ref> (left). For the purpose of boundary layer creation, we may compose the ^i() = (q_i(μ_1), μ_2)^T with a function of the form ^i() = (μ_1, f_i(μ_1, μ_2))^T, where f_i(μ_1, 0) = 0, f_i(μ_1, 1) = 1 and ∂_μ_2 f_i(μ_1, μ_2) > 0. We note that the map ^i ∘^i: ^□→^□ still satisfies ∂_μ_2 (^i ∘^i )_1 = 0 on γ_n such that boundary orthogonality is preserved. The purpose of f_i is to create a boundary layer near μ_2 = 1, i.e., ∂_μ_2 f_i(μ_1, μ_2) at μ_2 = 1 should be small. A function that satisfies the aforementioned requirements is given by
f_i(μ_1, μ_2) = (1 - exp(-d_i(μ_1) μ_2))/(1 - exp(-d_i(μ_1))), for some d_i(μ_1) > 0,
where larger values of d_i(μ_1) create a (locally) stronger boundary layer. Note that we have
∂_μ_2 f_i(μ_1, μ_2) |_γ_n := f^'_i, n(μ_1) = d_i(μ_1) exp(-d_i(μ_1))/(1 - exp(-d_i(μ_1))).
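A small Python sketch of the layer profile and its slope at μ_2 = 1 may make the role of d_i more tangible; the example value of d is arbitrary.

import numpy as np

def f_layer(mu2, d):
    # boundary-layer profile: 0 at mu_2 = 0, 1 at mu_2 = 1, steep near mu_2 = 1 for large d
    return (1.0 - np.exp(-d * mu2)) / (1.0 - np.exp(-d))

def f_layer_slope_at_one(d):
    # derivative of the profile at mu_2 = 1; decreases as d grows
    return d * np.exp(-d) / (1.0 - np.exp(-d))

print(f_layer(0.0, 5.0), f_layer(1.0, 5.0), f_layer_slope_at_one(5.0))  # 0.0, 1.0, ~0.034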
Thanks to boundary orthogonalisation, the local tangent bundle of ^ is diagonal in the basis that is spanned by the unit tangent and normal vectors on ∂Ω, i.e.,
∂_^ = a_i 𝐭⊗𝐭 + b_i 𝐧⊗𝐧, on L_k^i := _i∩∂Ω for _i ∈𝒬^B,
where a_i follows from the boundary correspondence 𝐅: ∂→∂ while b_i is a property of the parameterisation of the interior. We would like to create a new controlmap : → whose associated map ^: → satisfies
∂_^ = a_i 𝐭⊗𝐭 + k 𝐧⊗𝐧 on L_k^i := _i∩∂Ω for _i ∈𝒬^B,
where k > 0 is a user-specified parameter. Requiring : → to be of the form
() = ^i ∘^i ∘^i ∘ (^i)^-1 on _i ∈𝒬^B,
it is clear that the recomputed map will satisfy
. ∂_^() |_γ_n = a_i(μ_1) 𝐭⊗𝐭 + b_i(μ_1) f_i, n^'(μ_1) 𝐧⊗𝐧 on L_k^i⊂∂ for _i ∈𝒬^B.
Given that each f_i, n^' = f_i, n^'(d_i), we may find the d_i(μ_1) by minimising the nonlinear cost function
∑_{i | _i ∈𝒬^B } ∫_L_k^i (b_i f^'_i, n -k )^2 dΓ→min_d,
where b_i and f^'_i, n are now taken as functions over ∈ L_k^i⊂∂ while d: ∂→ℝ satisfies
d() = d_i ∘ (𝐦^i)^-1 on L_k^i⊂∂.
A discretisation then constructs d() from the linear span of the ϕ_h ∈𝒱_h that are nonvanishing on ∂ and finds the minimum over this subspace of C^0(∂) using Newton's method. Upon completion, the function d() is known on ∂ and the restriction to L_k^i⊂∂ can be expressed as a function over γ_n ⊂∂^□ via the (𝐦^i)^-1. The canonical extension from γ_n into ^□ then defines ^i() = (μ_1, f_i(μ_1, μ_2))^T and we define : → by (<ref>) for boundary patches while requiring () = () for interior patches. Since d() ∈ C^0(∂), the controlmap defined in this way will be conforming across all interior interfaces. Finally, to relieve the computational burden, the controlmap : → is expressed in 𝒱_h^2 via an L^2() projection.
We are considering the screw geometry with orthogonalised reference parameterisation from Figure <ref> (left) and associated controlmap : → from Figure <ref> (left). Using this reference parameterisation, we are creating a boundary layer in which the value of k > 0 in (<ref>) is given by b / 72, where b is the average value of all b_i over all L_k^i in (<ref>). Figure <ref> shows the resulting parameterisation along with the controlmap : →, while Figure <ref> shows a zoom-in on two different segments on the boundary.
As can be seen, the methodology preserves the boundary orthogonalisation while now providing precise control over the density of the boundary layer, which is largely maintained along the entire boundary. As before, the boundary layer density is tuned by the value of k > 0. Since the method performs a sequence of algebraic operations on the boundary patches, unlike in the previous methodology, the interior patches remain unchanged. As such, the best results are obtained when the boundary patches cover the majority of ∂ with only a handful of small patches in the interior.
§ CONCLUSION
We have presented a PDE-based parameterisation framework for planar multipatch domains based on the concept of harmonic maps. For this, we presented a total of four different numerical approaches capable of computing valid parameterisations for a wide range of piecewise smooth Lipschitz domains bounded by a collection of spline curves. We presented three different algorithms in nondivergence form, two of which are in mixed form and one based on C^0-DG. Furthermore, we presented one approach based on the inverse harmonicity requirement's weak form. We concluded that the NDF-discretisations in mixed form performed similarly well in the essayed benchmark problems while consistently exhibiting slightly better convergence rates than the C^0-DG approach. On the other hand, we concluded that the C^0-DG approach is the computationally least expensive approach that can be initialised with a degenerate initial iterate. The experiments demonstrated that the weak form discretisation converges reliably when initialised with the solution of one of the NDF-discretisations while performing only marginally worse than or on par with Winslow's original approach. Since the C^0-DG approach is usually sufficiently close to the discrete root, we concluded that a combination with the weak form discretisation constitutes a computationally feasible and effective means to compute a uniformly nondegenerate map for the geometries considered in this work. Hereby, the combination of the two approaches substantially reduces the need for a posteriori refinement in case the NDF-solution is degenerate, thanks to the weak form's barrier property.
We have augmented the parameterisation framework with mechanisms that allow for control over the map's parametric properties. Hereby, we presented techniques capable of incorporating many of the commonly-desired parametric features into the computed maps, such as homogeneous cell sizes and boundary layers. For combining harmonic maps with parametric control, we mainly employed the weak form discretisation and concluded that its barrier property is an effective means of maintaining uniform nondegeneracy, even when confronted with coordinate transformations that induce extreme cell size heterogeneity, such as in Figures <ref> and <ref>.
Utilising an only essentially bounded diffusivity for the purpose of inducing a coordinate transformation via a controlmap, while enabling many novel ways of controlling the outcome, is associated with a number of potential robustness bottlenecks, such as the possibility of creating singularities in the interior of the domain. Here, we proposed a stabilisation via Gaussian blending functions on the quadrangulation's vertices. However, a more thorough investigation of the effect of the controlmap's reduced regularity on the computed maps constitutes a topic for future research. Furthermore, given that the parametric domain is typically given by a convex polygon, we see great potential in the use of computationally inexpensive algebraic methods to create a controlmap that builds desired features into the harmonic map. This constitutes another topic for further research.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the support of the Swiss National Science Foundation through the project ‘‘Design-through-Analysis (of PDEs): the litmus test’’ n. 40B2-0 187094 (BRIDGE Discovery 2019). Jochen Hinz is grateful for the proof reading and feedback he received from Rafael Vázquez.
Furthermore, the authors are grateful for the coding help they received from the Nutils core development team.
|
http://arxiv.org/abs/2307.04967v1 | 20230711020037 | Detecting Tidal Features using Self-Supervised Representation Learning | ["Alice Desmons", "Sarah Brough", "Francois Lanusse"] | astro-ph.GA | ["astro-ph.GA", "astro-ph.IM"] |
Detecting Tidal Features using Self-Supervised Representation Learning

Alice Desmons (UNSW), Sarah Brough (UNSW), Francois Lanusse (CEA)

UNSW: School of Physics, University of New South Wales, NSW 2052, Australia
CEA: AIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, F-91191 Gif-sur-Yvette, France

Correspondence: Alice Desmons <[email protected]>

Keywords: Machine Learning, ICML
Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers. Their properties can answer questions about the progenitor galaxies involved in the interactions. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. The previous state of the art method achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination.
§ INTRODUCTION
The currently accepted model of the Universe, known as the Lambda Cold Dark Matter (ΛCDM) Cosmological Model, postulates that galaxies evolve through a process which is referred to as the `hierarchical merger model’, wherein the growth of the universe's highest-mass galaxies is dominated by merging with lower-mass galaxies (e.g. ). During the merging process, the extreme gravitational forces involved cause stellar material to be pulled out from the galaxies, forming diffuse non-uniform regions of stars in the outskirts of the galaxies, known as tidal features. These tidal features contain information about the merging history of the galaxy, and can thus be used to study the galaxy evolution process.
In order to draw accurate and statistically robust conclusions about this evolution process, we require a large sample of galaxies exhibiting tidal features. One thing that makes this difficult is the extremely low surface brightness of tidal features, which can easily reach μ_r≥ 27 mag arcsec^-2. With the next generation of wide-field optical imaging surveys reaching new limiting depths, such as the Vera C Rubin Observatory's Legacy Survey of Space and Time (LSST; ) which is predicted to reach μ_r∼ 30.1 mag arcsec^-2 <cit.>, assembling a statistically significant sample of galaxies with tidal features is becoming more feasible. One challenge associated with surveys like LSST, due to commence in 2024 and run for 10 years, is the amount of data predicted to be released, with LSST predicted to output over 500 petabytes of imaging data including billions of galaxies <cit.>. Current tidal feature detection and classification is primarily achieved through visual identification (e.g. ), but this amount of data is virtually impossible to classify visually by humans, even using large community based projects such as Galaxy Zoo <cit.>, and hence we are in urgent need of a tool that can automate this classification task and isolate galaxies with tidal features.
With the promising recent results of machine learning in galaxy classification tasks (e.g. ), we turn to machine learning to construct a model which can take galaxy images as input, convert them into representations - low-dimensional maps which preserve the important information in the image - and output a classification based on whether the galaxy possesses tidal features. We use a recently developed machine learning method that is essentially a middle-point between supervised and unsupervised learning, known as Self-Supervised machine Learning (SSL; ). Such models do not require labelled data for the training of the encoder, which learns to transform images into meaningful low-dimensional representations, but can perform classification when paired with a linear classifier and a small labelled dataset. Instead of labels, SSL models rely on augmentations to learn under which conditions the output low-dimensional representations should be invariant. These types of models have been successfully used for a variety of astronomical applications (e.g. ). Compared to supervised models, self-supervised models are also much easier to adapt to perform new tasks, and apply to datasets from different astronomical surveys <cit.>, making this kind of model perfect for our goal of applying the tool developed using HSC-SSP data to future LSST data.
§ METHODS
§.§ Sample Selection
The dataset used for this work is sourced from the Ultradeep (UD) layer of the HSC-SSP Public Data Release 2 (PDR2; ) for deep galaxy images. We use the Ultradeep field, which spans an area of 3.5 deg^2 and reaches a surface brightness depth of μ_r∼ 28.0 mag arcsec^-2 as it reaches depths faint enough to detect tidal features.
We assemble an unlabelled dataset of ∼44,000 galaxies by parsing objects in the HSC-SSP PDR2 database using an SQL search and only selecting objects which have at least 3 exposures in each band and have i-band magnitudes 15 < i < 20 mag. We set a faint magnitude limit of 20 mag to ensure that objects are bright enough for tidal features to be visible. We access the HSC-SSP galaxy images using the ‘Unagi’ Python tool <cit.> which, given a galaxy’s right ascension and declination, allows us to create multi-band ‘HSC cutout’ images of size 128 × 128 pixels, or 21 × 21 arcsecs, centred around each galaxy. Each cutout is downloaded in five (g, r, i, z, y) bands.
For the training of the linear classifier we require a small labelled dataset of galaxies with and without tidal features. We use the HSC-SSP UD PDR2 dataset assembled by <cit.> composed of 211 galaxies with tidal features and 641 galaxies without tidal features. These galaxies were selected from a volume-limited sample from the cross-over between the Galaxy and Mass Assembly survey <cit.> and HSC-SSP with spectroscopic redshift limits 0.04 ≤ z ≤ 0.2 and stellar mass limits 9.50 ≤ log_10(M_⋆/M_⊙) ≤ 11.00 and have i-band magnitudes in the range 12.8 < i < 21.6 mag. To increase the size of our tidal feature training sample we classified additional galaxies from our HSC-SSP PDR2 unlabelled dataset of ∼ 44,000 objects, according to the classification scheme outlined in <cit.>. Our final labelled sample contains 760 galaxies, 380 with tidal features, labelled 1, and 380 without, labelled 0. We split our labelled dataset into training, validation, and testing datasets composed of 600, 60, and 100 galaxies respectively.
§.§ Image Pre-processing and Augmentations
Before the images are augmented and fed through the model we apply a pre-processing function to normalise the images. The augmentations we use for this project are:
* Orientation: We randomly flip the image across each axis (x and y) with 50% probability.
* Gaussian Noise: We sample a scalar from 𝒰(1,3) and multiply it with the median absolute deviation of each channel (calculated over 1000 training examples) to get a per-channel noise σ_c. We then introduce Gaussian noise sampled from σ_c × 𝒩(0,1) for each channel.
* Jitter and Crop: For HSC-SSP images we crop the 128 × 128 pixel image to the central 109 × 109 pixels before randomly cropping the image to 96 × 96 pixels. Random cropping means the image centre is translated, or `jittered', along each respective axis by i, j pixels where i, j ∼ 𝒰(-13,13) before cropping to the central 96 × 96 pixels. A schematic code sketch of these augmentations is given after this list.
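As a guide, the following is a minimal numpy sketch of the three augmentations described above; it assumes the images are 128 × 128 × 5 arrays and that mad holds the per-channel median absolute deviations, and it is only a schematic reading of the procedure, not the code actually used for this work.

  import numpy as np

  rng = np.random.default_rng()

  def augment(img, mad):
      # Orientation: flip each axis independently with 50% probability
      if rng.random() < 0.5:
          img = img[::-1, :, :]
      if rng.random() < 0.5:
          img = img[:, ::-1, :]
      # Gaussian noise: per-channel sigma_c = U(1,3) * MAD_c
      sigma_c = rng.uniform(1.0, 3.0) * np.asarray(mad)
      img = img + sigma_c * rng.standard_normal(img.shape)
      # Jitter and crop: translate the centre by (i, j) ~ U(-13, 13) pixels,
      # then keep the 96 x 96 region around the jittered centre
      ci = 64 + rng.integers(-13, 14)
      cj = 64 + rng.integers(-13, 14)
      return img[ci - 48:ci + 48, cj - 48:cj + 48, :]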
§.§ Model Architecture
The model we utilise to perform classification of tidal feature candidates consists of two components: a self-supervised model used for pre-training, and a `fine-tuned' model used for classification. All models described below are built using the TensorFlow framework <cit.>.
§.§.§ The Self-Supervised Architecture
For our task of classifying tidal feature candidates we use a type of self-supervised learning known as Nearest Neighbour Contrastive Learning of visual Representations (NNCLR; ). We closely follow <cit.> in designing the architecture and training process for our model. The model was compiled using the Adam optimiser <cit.> and trained for 25 epochs on our unlabelled dataset of ∼ 44,000 HSC-SSP PDR2 galaxies.
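To illustrate the core idea of NNCLR, the following is a schematic TensorFlow sketch of its contrastive loss, in which the projection of one augmented view is replaced by its nearest neighbour from a support queue before the usual InfoNCE loss is computed; the function name and tensor shapes are our own illustrative choices and do not reproduce the exact implementation used here.

  import tensorflow as tf

  def nnclr_loss(z1, z2, queue, temperature=0.1):
      # z1, z2: (batch, dim) projections of two augmented views of the same images
      # queue: (queue_size, dim) support set of recent projections
      z1 = tf.math.l2_normalize(z1, axis=1)
      z2 = tf.math.l2_normalize(z2, axis=1)
      queue = tf.math.l2_normalize(queue, axis=1)
      # nearest neighbour of each z1 in the support queue (cosine similarity)
      nn = tf.gather(queue, tf.argmax(tf.matmul(z1, queue, transpose_b=True), axis=1))
      # InfoNCE: the positive for nn[i] is z2[i]; other batch members act as negatives
      logits = tf.matmul(nn, z2, transpose_b=True) / temperature
      labels = tf.range(tf.shape(z1)[0])
      return tf.reduce_mean(
          tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True))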
§.§.§ The Fine-tuned Architecture
The fine-tuned model is a simple linear classifier which takes galaxy images as input and converts them to representations using the pre-trained self-supervised encoder. These representations are passed through a `Dense' layer with a sigmoid activation, which outputs a single number between 0 and 1. This fine-tuned model was compiled using the Adam optimiser <cit.> and a binary cross entropy loss. It was trained for 50 epochs using the labelled training set of 600 HSC-SSP galaxies. Training was completed within ∼ 1 minute using a single GPU.
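A minimal Keras sketch of this fine-tuned classifier is given below, assuming encoder is the pre-trained self-supervised encoder (kept frozen) and train_images, train_labels are the labelled training data; the actual training script may differ in detail.

  import tensorflow as tf

  encoder.trainable = False  # freeze the pre-trained self-supervised encoder
  finetuned = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(96, 96, 5)),
      encoder,
      tf.keras.layers.Dense(1, activation="sigmoid"),
  ])
  finetuned.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC(name="roc_auc")])
  finetuned.fit(train_images, train_labels, epochs=50)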
§.§.§ The Supervised Architecture
To draw conclusions about the suitability of self-supervised models for the detection and classification of tidal features, we compare our results with those of a fully supervised model. We do not construct this model from scratch, but instead use the published model designed by <cit.> to classify merging galaxies. The output layer was changed from two neurons with softmax activation, to a single neuron with sigmoid activation. The network was compiled using the Adam optimiser <cit.> with the default learning rate and loss of the network was determined using binary cross entropy. We additionally changed the input image dimension from 64 × 64 pixels with three colour channels to 96 × 96 pixels with five colour channels to ensure extended tidal features remain visible. We train this fully supervised network from scratch using the labelled training set of 600 HSC-SSP galaxies.
§.§ Model Evaluation
To evaluate our model performance we use the true positive rate (also known as recall or completeness) and false positive rate (also known as fall-out or contamination). The true positive rate (TPR) ranges from 0 to 1 and is defined as the fraction of galaxies correctly classified by the model as having tidal features with respect to the total number of galaxies with tidal features. The false positive rate (FPR) also ranges from 0 to 1 and is defined as the fraction of galaxies incorrectly classified by the model as having tidal features with respect to the total number of galaxies without tidal features.
In addition to using the TPR for a given FPR to evaluate our model, we also use the area under the receiver operating characteristic (ROC) curve, or ROC AUC, to evaluate performance.
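These quantities can be computed directly from the classifier outputs; a short scikit-learn sketch is shown below, where y_true are the 0/1 labels and y_score the sigmoid outputs (both names are illustrative).

  import numpy as np
  from sklearn.metrics import roc_auc_score, roc_curve

  auc = roc_auc_score(y_true, y_score)           # area under the ROC curve
  fpr, tpr, thresholds = roc_curve(y_true, y_score)
  tpr_at_fpr02 = np.interp(0.2, fpr, tpr)        # completeness at 20% contamination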
§ RESULTS
§.§ Self-Supervised vs. Supervised Performance
Figure <ref> illustrates the testing set ROC AUC for a supervised and self-supervised network as a function of the number of labels used in training for our HSC-SSP dataset. Each point represents the ROC AUC averaged over ten runs using the same training, validation, and testing sets for each run. We average the ROC AUC over the 10 runs and remove outliers further than 3σ from the mean. Our SSL model maintains high performance across all amounts of labels used for training, having ROC AUC = 0.911 ± 0.002 when training on the maximum number of labels and only dropping to ROC AUC = 0.89 ± 0.01 when using only 50 labels for training. The supervised model also maintains its performance regardless of label number, but only reaches ROC AUC = 0.867 ± 0.004 when training on the maximum number and ROC AUC = 0.83 ± 0.01 when using 50 labels for training.
This figure not only shows that an SSL model can be used for the detection of tidal features with good performance, but also that it performs consistently better than the supervised network regardless of the number of training labels. We also calculated the average TPR reached by the self-supervised model on the testing set for a given FPR = 0.2, averaging over 10 runs and removing outliers. When training using 600 labels, the model reaches TPR = 0.94 ± 0.01, and this only drops to TPR = 0.90 ± 0.01 when using a mere 50 labels for training.
§.§ Detection of Tidal Features
One advantage of self-supervised models over supervised models is the ability to use just one labelled example to find examples of similar galaxies from the full dataset. By using just one image from our labelled tidal feature dataset as a query image, and the encoded 128-dimensional representations from the self-supervised encoder, we can perform a similarity search that assigns high similarity scores to images which have similar representations to the query image. This is demonstrated in Figure <ref> where we select a random galaxy with tidal features from our training sample and perform a similarity search with the 44,000 unlabelled HSC-SSP galaxies. In Figure <ref> the query image is shown on the right alongside the 24 galaxies which received the highest similarity scores. This figure shows the power of self-supervised learning, where using only a single labelled example, we can find a multitude of other tidal feature candidates.
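A schematic version of this similarity search is sketched below: cosine similarities are computed between the 128-dimensional representation of the query galaxy and those of the unlabelled sample (variable names are our own, for illustration).

  import numpy as np

  def most_similar(query_rep, all_reps, k=24):
      # query_rep: (128,) representation of the query galaxy
      # all_reps: (N, 128) representations of the unlabelled sample
      q = query_rep / np.linalg.norm(query_rep)
      r = all_reps / np.linalg.norm(all_reps, axis=1, keepdims=True)
      scores = r @ q                         # cosine similarity scores
      return np.argsort(scores)[::-1][:k]    # indices of the k most similar galaxies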
We can also visualise how the model organises the galaxy images in representation space, by using Uniform Manifold Approximation and Projection (UMAP; ) which reduces the encoded representations to an easier to visualise 2 dimensional projection. Figure <ref> illustrates this 2D projection, created by binning the space into 100 × 100 cells and randomly selecting a sample from that cell to plot in the corresponding cell location. We also enquire whether the scores given to galaxies by the linear classifier are related to the galaxies' positions in the UMAP projection, by colouring the UMAP plot according the scores given to each galaxy by the linear classifier, shown in the right panel of Figure <ref>. We find that the majority of galaxies which were assigned a high classifier score, indicating a high likelihood of tidal features, are located on the left side of the UMAP projection plot. This reinforces the idea that the encoded representations contain meaningful information about tidal features.
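The projection itself can be obtained with the umap-learn package; a minimal sketch, assuming reps is the (N, 128) array of encoded representations, is:

  import umap

  projection = umap.UMAP(n_components=2).fit_transform(reps)
  # projection has shape (N, 2); it can then be binned into a 100 x 100 grid
  # and coloured by the linear-classifier score, as in the figure described above.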
§ DISCUSSION AND CONCLUSIONS
In this work, we have shown that SSL models composed of a self-supervised encoder and linear classifier can not only be used to detect galaxies with tidal features, but can do so reaching both high completeness (TPR = 0.94 ± 0.01) for low contamination (FPR = 0.20) and high area under the ROC curve (ROC AUC = 0.911 ± 0.002). This means that such models can be used to isolate the majority of galaxies with tidal features from a large sample of galaxies, thus drastically reducing the amount of visual classification needed to assemble a large sample of tidal features. One major advantage of this model over other automated classification methods is that this level of performance can be reached using only 600 labelled training examples, and only drops mildly when using a mere 50 labels for training, maintaining ROC AUC = 0.89 ± 0.01 and TPR = 0.90 ± 0.01 for FPR = 0.2. This makes SSL models easy to re-train on data from different surveys with minimal visual classification needed. Following <cit.>, we emphasise the usefulness of being able to perform a similarity search using just the self-supervised encoder and one example of a galaxy with tidal features to find other galaxies with tidal features from a dataset of tens of thousands of galaxies.
The comparison that can be carried out between the results obtained here and other works is limited by the scarcity of similar studies. There is only one study focusing on the detection of tidal features using machine learning, namely the work of <cit.> who used a supervised network to identify galaxies with tidal features from the Wide layer of the Canada-France-Hawaii Telescope Legacy Survey <cit.>. <cit.> found that their method outperformed other automated methods of tidal feature detection, reaching 76% completeness (or TPR) and 22% contamination (or FPR). Our SSL model, trained on 600 galaxies, performs considerably better, reaching a completeness of 96% for the same contamination percentage. Most importantly, our model consistently outperforms a fully supervised model trained on the same data, reaching ROC AUC = 0.911 ± 0.002, while the fully supervised model only reaches a maximum ROC AUC of 0.864 ± 0.004.
The code used to create, train, validate, and test the SSL model, along with instructions on loading and using the pre-trained model as well as on training the model with different data, can be downloaded from GitHub[<https://github.com/LSSTISSC/Tidalsaurus>].
|
http://arxiv.org/abs/2307.05478v1 | 20230711175959 | Chemistry of multiple stellar populations in the mono-metallic, in situ, bulge globular cluster NGC 6388 | [
"Eugenio Carretta",
"Angela Bragaglia"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Carretta and Bragaglia
Detailed abundances of NGC 6388
E. Carretta, [email protected]
INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Gobetti
93/3, I-40129 Bologna, Italy
We present the homogeneous abundance analysis for a combined sample of
185 giants in the bulge globular cluster (GC) NGC 6388. Our results are used to
describe the multiple stellar populations and differences or analogies with
bulge field stars. Proton-capture elements indicate that a single class of
first-generation polluters is sufficient to reproduce both the extreme and
intermediate parts of the anti-correlations among light elements O, Na, Mg, and
Al, which is at odds with our previous results based on a much smaller sample.
The abundance pattern of other species in NGC 6388 closely tracks the trends
observed in bulge field stars. In particular, the α-elements, including
Si, rule out an accreted origin for NGC 6388, confirming our previous results
based on iron-peak elements, chemo-dynamical analysis, and the age-metallicity
relation. The neutron-capture elements are generally uniform, although the
[Zr/Fe] ratio shows an intrinsic scatter, correlated to Na and Al abundances.
Instead, we do not find enhancement in neutron-capture elements for stars whose
photometric properties would classify NGC 6388 as a type II GC. Together with
the homogeneity in [Fe/H] we found in a previous paper, this indicates we need
to better understand the criteria to separate classes of GCs, coupling
photometry, and spectroscopy. These results are based on abundances of 22
species (O, Na, Mg, Al, Si, Ca, Ti, Sc, V, Cr, Mn, Fe, Co, Ni, Zn, Y, Zr, Ba,
La, Ce, Nd, and Eu) from UVES spectra sampling proton-, α-,
neutron-capture elements, and Fe-peak elements. For 12 species, we also obtain
abundances in a large number of giants (up to 150) from GIRAFFE spectra.
Chemistry of multiple stellar populations in the mono-metallic,
in situ, bulge globular cluster NGC 6388. Based on observations collected at ESO telescopes under
programmes 073.D-0211 and 073.D-0760, 381.D-0329, 095.D-0834, and
099.D-0047.
Full Tables A.1, A.2, A.3, and A.5 are only available at the CDS via anonymous ftp to
cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/??/??.
Eugenio Carretta1
Angela Bragaglia1
§ INTRODUCTION
The high-mass, high-metallicity globular cluster (GC) NGC 6388, located in the
bulge of the Milky Way (MW) had been poorly studied using high resolution
spectroscopy, despite its relevance. The small apparent angular diameter and the
large contamination by field stars made it difficult to observe a large number
of cluster members (see, e.g. Wallerstein et al. 2007). This has been amended
with the use of multi-object spectrographs (such as FLAMES at the ESO VLT; see
Carretta et al. 2007a, Lanzoni et al. 2013) and large surveys such as APOGEE
(Mészáros et al. 2020: M20). Additionally, a more efficient choice of candidate
members can be provided by the Gaia mission (see, e.g. the
astrometric analysis in Vasiliev and Baumgardt 2021).
This paper presents all the data from our series on NGC 6388 `reloaded', aimed
at providing a detailed description of the chemical properties of
multiple stellar populations in this GC. We started by using the existing data in
the ESO archive and added new observations purposely designed to complete
the set of setups needed to characterise the chemical composition. In
particular, we sought to include all the light elements involved in the
(anti-)correlations typical of multiple stellar populations detected in GCs
(Carretta et al. 2009a,b, 2010a, Gratton et al. 2012a, 2019, Bastian and Lardo
2018).
In Carretta and Bragaglia (2018), we analysed UVES spectra of 24 giants (adding
17 new stars to our original set of seven in Carretta et al. 2007a). The
resulting pattern of proton-capture elements seemed to imply the necessity of
two classes of polluters (the stars that produced the characterising set of
light elements in GCs) to explain the anti-correlations between O and Na, Mg, and
Al.
The next step (Carretta and Bragaglia 2019) was to discuss the abundance of Mg,
Ca, and Sc for the full set of 185 stars observed with both UVES and GIRAFFE.
By comparing the abundance ratios in NGC 6388 to those of field stars of similar
metallicity, we detected no significant differences for Ca and Sc (at variance
with NGC 2808 and a few other massive GCs: see Carretta and Bragaglia 2021),
whereas lighter proton-capture species, such as Si, showed clear variations
correlated to Mg depletion and Al enhancement. Together with prediction from
stellar nucleosynthesis, these observations allowed us to pinpoint the range of
internal temperature for the first generation (FG) polluters in NGC 6388
(∼100-150 MK), with some difference between the cases of the massive stars
and asymptotic giant branch (AGB) stars.
We then concentrated on the metal abundance of this cluster and presented
(Carretta and Bragaglia 2022a) the atmospheric parameters for our new data set.
We also compared our results to literature results and addressed the issue of a metallicity
spread. The latter is predicted as a possible property of a separate class of
GCs (type II, see below), defined mainly on photometric grounds. However, we did
not find any significant spread in [Fe/H], not even considering stars belonging
to the so-called anomalous red giants in the chromosome map, a diagnostic based
on Hubble Space Telescope UV photometry used to define the hypothesised class of
type II GCs (see Milone et al. 2017). Either an intrinsic spread in iron (and
neutron-capture elements, see below) is not a necessary condition to belong to
the so-called type II GCs, or NGC 6388 is not a type II GC, as claimed.
Finally, in Carretta and Bragaglia (2022b), we addressed the recent claim that
NGC 6388 may have been accreted to the MW, based on the proposed under-abundance
of Sc, V, and Zn in four stars (Minelli et al. 2021a). We instead demonstrated
how our derived abundances, for a much larger and significant sample of cluster
stars, are fully compatible with those of the MW bulge stars and clusters of
similar metallicity. Together with chemo-dynamical considerations and the
placement of NGC 6388 firmly on the in situ branch of the age-metallicity
diagram, our observations conclusively indicate an in situ origin for this GC.
In the present paper, we complete the analysis of our dataset,
presenting the abundances of 22 species in NGC 6388 from high resolution optical
spectroscopy, which permitted a full and homogeneous chemical characterisation of
this GC. We discuss our results regarding the multiple stellar population
phenomenon, and similarities and differences
with the underlying bulge stellar populations.
The paper is organised as follows. A brief summary of data selection and
observations is provided in Sect. 2, whereas the abundance analysis and error
budget are described in Sect. 3. Results for proton capture and the heavier
elements (α-capture, Fe-peak, and neutron-capture elements) are discussed
in Sect. 4 and Sect. 5, respectively. A summary of the properties of NGC 6388 is
provided in Sect. 6.
§ SUMMARY OF DATA SELECTION AND OBSERVATIONS
Details on the full procedure we followed to select member stars to be observed
in NGC 6388, heavily contaminated by bulge and disc stars, are discussed at
length in Carretta and Bragaglia (2022a). We exploited previously made
spectroscopic observations (mainly for kinematics) in NGC 6388 to select radial
velocity (RV) members. The large value of the systemic RV (80 km s^-1,
Harris 2010) allowed us to identify bona fide cluster members. Culling our
targets from the programmes 073.D-0760 (PI Catelan), 381.D-0329 (PI Lanzoni),
and 095.D-0834 (PI Henault-Brunet), we analysed good quality archive data (taken
with GIRAFFE high resolution setups HR13 and HR21 or with UVES) or acquired new
observations using the same setups (programme 099.D-0047).
Our observing strategy was to obtain GIRAFFE HR13 spectra (to derive atmospheric
parameters and abundances of the light elements O, Na, Mg, Si) and HR21 spectra
(to derive Al abundances) for as many stars as possible in NGC 6388. Moreover,
new UVES/FLAMES spectra were acquired for 12 giants. The new observations were
performed from April to August 2017. The wavelength coverage is 6120-6405 Å for HR13, 8484-9001 Å for HR21, and 4800-6800 Å for UVES/FLAMES. The median
S/N values are 93, 116, and 50, respectively.
Coordinates, magnitudes, original samples (or new observations), and RVs are
listed in Carretta and Bragaglia (2022a) for the 12 and 150 stars with UVES and
GIRAFFE spectra, respectively. Similar data for 24 giants with previously analysed
UVES/FLAMES spectra can be found in Carretta and Bragaglia
(2018).
We consider all stars in our sample to be good candidates as members of NGC 6388
based on the combination of RV and metallicity (presented in previous papers),
taking advantage of our derived very small spread in [Fe/H] (0.04 dex from 185
stars as compared to 0.074 dex from 9 stars in M20). As already discussed in
CB22a, for a few stars membership is more dubious because of their proper
motions (measured by Gaia) or because they fall outside the tidal radius (if we
take the value in Harris 2010, but not if we consider the much larger value in
Baumgardt's database, which would be outside the upper panel in
Fig. <ref>). Additionally, the referee noted that six of our targets
are not considered cluster members in APOGEE DR17. However, as shown in
Fig. <ref>
(upper panel) all stars in our sample fall well within the 3σ velocity
dispersion profile of the best-fit King model (for details, see Lanzoni et al.
2013, their Fig. 10 and Table 3). Additionally, if we consider the six candidate
non-members, we see that the abundances of elements not involved in the multiple
populations phenomenon are the same as the bulk of the cluster sample
(Fig. <ref>,
lower panel). Furthermore, we find that four of them belong to the second
generation, based on their Na and O abundances (see Sect. 4), which decreases
the chance of them being field interlopers. In the following, we treat all
stars as members, bearing in mind that further studies (e.g. better astrometric
data in future Gaia data releases) will help settle the matter.
§ ABUNDANCE ANALYSIS AND ERROR BUDGET
The same procedure adopted for GCs previously analysed in our FLAMES survey (see
Carretta et al. 2006, 2009a,b) was
used here to derive atmospheric parameters and abundances. The metallicity of
NGC 6388 is reported and discussed in Carretta and Bragaglia (2022a). As usual,
our final effective temperatures T_ eff were derived in two steps. First estimates
were obtained from the calibrations of Alonso et al. (1999, 2001). Afterwards,
these estimates were refined using a relation between the values from the
first step and K
magnitudes of the stars. Surface gravities were then obtained by using the above
temperatures, adopting distance modulus (m-M)_V=16.14 and reddening
E(B-V)=0.37 from Harris (2010), bolometric
corrections from Alonso et al. (1999), and masses of 0.90 M_⊙ and M_
bol,⊙=4.75.
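For reference, the photometric gravities follow from the standard relation log g = log g_⊙ + log(M/M_⊙) + 4 log(T_eff/T_eff,⊙) + 0.4(M_bol - M_bol,⊙); the short Python sketch below assumes V magnitudes, the quoted apparent distance modulus, bolometric corrections BC_V from the Alonso et al. calibration, and solar values log g_⊙ = 4.44 and T_eff,⊙ = 5770 K (these two solar values are not stated above and are assumed here only for illustration).

  import numpy as np

  def photometric_logg(teff, v_mag, bc_v, mass=0.90, dm_v=16.14,
                       logg_sun=4.44, teff_sun=5770.0, mbol_sun=4.75):
      m_bol = v_mag - dm_v + bc_v           # absolute bolometric magnitude
      return (logg_sun + np.log10(mass) + 4.0 * np.log10(teff / teff_sun)
              + 0.4 * (m_bol - mbol_sun))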
For the abundance analysis we used equivalent widths (EW) measured with the ROSA package (Gratton 1988), after correcting those measured on GIRAFFE spectra
to the system given by EWs from UVES spectra. The microturbulent velocities
v_t were derived by minimising the slope of the relation between Fe i
abundances and expected line strength (Magain 1984). Models with appropriate
atmospheric parameters whose abundances matched those derived from Fe i
lines were then interpolated within the Kurucz (1993) grid of model
atmospheres. Adopted atmospheric parameters and derived abundances of Fe are
listed in Table 2 of Carretta and Bragaglia (2022a).
The absence of trends as a function of T_ eff and the good agreement
between Fe i and Fe ii and between Ti i and Ti ii
are a good test of the reliability of the adopted scale of atmospheric
parameters.
For the other elements, we proceeded as in our FLAMES survey.
Oxygen abundances were obtained
from the forbidden [O i] line at 6300 Å (more rarely also from the
6363 Å line) after cleaning the telluric lines as described in Carretta et
al. (2007a). To correct the Na abundances for non-LTE effects, we adopted the
prescriptions from Gratton et al. (1999).
Finally, references for the hyperfine structure corrections applied to Sc, V,
Mn, and Co can be found in Gratton et al. (2003).
Average abundances are given in Table <ref> for the 12 stars
observed with UVES and the 150 stars with GIRAFFE spectra. We also list the
combination of the abundances of the present sample from UVES with those
previously analysed in Carretta and Bragaglia (2018) to show how good the
consistency is between the two studies that we merged together.
In this table, abundance ratios of neutral species are computed with respect to Fe i abundances, whereas abundance ratios of ionised species are computed using singly ionised iron. Solar reference abundances are
reported in the penultimate column of Table <ref>, with their
sources listed in the last column.
Our procedure to estimate star to star errors due to uncertainties in the
adopted atmospheric parameters and in EW measurements is described
in detail in Carretta et al. (2009a) for UVES and Carretta et al. (2009b) for
GIRAFFE. For the present work, the outcomes are summarised in
Table <ref> and Table <ref> for
abundances obtained from UVES and GIRAFFE spectra, respectively. For the sake of
completeness, we also report the values relative to iron in these tables (see
Carretta and Bragaglia 2022a, Tables 4 and 5).
We varied one parameter at a time by the amount listed in the first row and
repeated the abundance analysis for all stars. The averages provide the
sensitivities of abundance ratios to changes in the atmospheric parameters
(i.e. Δ [el/Fe]/Δ (par)) and are listed in the main body
of the tables. Star-to-star (internal) errors and systematic errors in each
parameter are in the second and third rows. The typical internal errors in
abundances due to the measurements of EWs (0.02 dex and 0.03 dex for UVES and
GIRAFFE, respectively) are estimated as the average rms scatter in Fe
abundance divided by the square root of the typical number of measured Fe lines
(100 and 20 lines from UVES and GIRAFFE spectra, respectively).
Finally, the total internal and
systematic errors in the derived abundances are obtained by summing in
quadrature the contributions of individual error sources, weighted according to
the errors relative to each parameter.
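Schematically, this quadratic sum works as in the short sketch below, where the sensitivities and parameter uncertainties are purely illustrative numbers standing in for the entries of the error tables.

  import numpy as np

  # (sensitivity over the adopted step) scaled by (internal error / step), in dex
  contributions = {
      "Teff": 0.05 * (4.0 / 50.0),   # e.g. 0.05 dex per 50 K, 4 K internal error
      "logg": 0.02 * (0.04 / 0.20),
      "vt":   0.03 * (0.08 / 0.10),
  }
  ew_error = 0.02                    # dex, typical EW contribution (UVES)
  total_internal = np.sqrt(sum(c ** 2 for c in contributions.values()) + ew_error ** 2)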
Abundances for individual stars are listed in the Appendix A. For tables
containing elements from the whole UVES+GIRAFFE sample, only an excerpt is
provided as a guidance of the content. The complete tables can be found at CDS,
Strasbourg.
§ PROTON-CAPTURE ELEMENTS
Sample and observations in NGC 6388 were purposely tailored to measure the
largest set of elements involved in proton-capture reactions resulting in the
network of correlations and anti-correlations observed in GC stars.
Starting from the lightest species (O, Na, Table A.1) up to the heavier Ca
(Table A.2) and Sc (Table A.3), almost all the elements of interest are sampled (potassium is still missing).
We also show how, in NGC 6388, these elements define the classical set of
anti-correlations and correlations typical of massive GCs, with a couple of
notable exceptions that we discuss below.
§.§ The Na-O anti-correlation
The main chemical signature of multiple populations in any GC is evident in
Figure <ref>, where we show the classical anti-correlation
between Na and O abundances in 183 stars of NGC 6388. Stars are grouped (and
colour-coded) according to their chemical compositions following the PIE scheme
as defined in Carretta et al. (2009b).
The stars with primordial composition (P, in red in Figure <ref>) are
those with the lowest Na abundances, comprised between
[Na/Fe]_ min[Conservatively, we excluded from this estimate
the three stars with the lowest Na values, which seem to be outliers separated
from the bulk of stars along the anti-correlation.] (=-0.1 dex) and
[Na/Fe]_ min+0.3 dex. This choice
guarantees the interception of almost the totality of stars with pristine
composition, since 0.3 dex is typically a 3σ star-to-star error in
spectroscopic abundance analysis. The remainder of stars, with chemical
composition altered by FG polluters (whatever they were) are dubbed second-generation (SG) stars and divided into two subgroups: those with intermediate
composition I (if the ratio [O/Na]>-0.9 dex, in blue in
Figure <ref>) and those with extreme E composition ([O/Na]<-0.9 dex,
green points).
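The PIE separation can be written compactly as in the following sketch, which takes arrays of [Na/Fe] and [O/Fe] and mirrors the criteria above (the handling of the few Na outliers excluded from the estimate of the minimum [Na/Fe] is left out for brevity).

  import numpy as np

  def classify_pie(na_fe, o_fe, na_min=None):
      na_min = np.nanmin(na_fe) if na_min is None else na_min
      o_na = o_fe - na_fe                                   # [O/Na]
      return np.where(na_fe < na_min + 0.3, "P",
             np.where(o_na > -0.9, "I", "E"))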
Although more sophisticated methods of statistical cluster analysis (Valle
et al. 2021) only retrieve the main blocks of FG and SG stars, our finer
separation into I and E stars is not arbitrary, but motivated by the existence
of long tails in [O/Na] distributions observed in a number of GCs, such as NGC 2808
and NGC 5904 (M 5) (see Carretta et al. 2009b). In turn, the fractions of stars
within each group are correlated to physical properties of GCs, including the total
mass (see Table 4 and Figure 14 in Carretta et al. 2010a). We also double checked
the PIE attribution using a k-means algorithm. We retrieved three groups almost
identical to the ones adopted here, with minor differences at the group edges,
corroborating the adoption of the same procedure of all papers of our series on
Na-O anti-correlations for the sake of homogeneity.
In the present work, we confirm the fractions we found in Carretta
and Bragaglia (2018) for NGC 6388, reducing the associated Poisson errors by half thanks to the
much enlarged present sample. A third of stars show a composition with O and Na
levels identical to those of field Galactic stars of similar
metallicity (P=29± 4%). The remaining SG stars are then split into two
components, the bulk consisting of stars with intermediate composition
(I=54± 5%). Finally, we confirm once again that in NGC 6388 there is a
noticeable fraction of stars showing an extremely modified composition
(E=16± 3%). These computations are made over 184 stars, because although
two stars are lacking O abundances, one of them (l63p307) has [Na/Fe]=+0.102
dex, a value that firmly places it in the P population.
The fraction E also includes the two stars with the highest Na abundances
([Na/Fe]>1.2 dex). They are members of NGC 6388 according to both the RV and
metallicity measured on our spectra and Gaia astrometry (Vasiliev and
Baumgardt 2021). They lie nicely on the locus of the Na-O anti-correlation
and are also the stars with the lowest O abundances in our sample ([O/Fe]<-0.6
dex, see Table <ref>). We note that even if all the six
stars flagged as possible non-members in APOGEE DR17 were excluded, our
results on multiple stellar populations and their respective composition in
NGC 6388 would not change. Their exclusion would only change the fraction of P, I, and E
stars within their associated Poissonian errors.
Formally, we observe a monotonic increase of the metal abundance along the
sequence PIE. The average [Fe/H] of the three groups rises from
-0.493 dex for 55 P stars to -0.485 dex and -0.481 dex for I (100 objects)
and E stars (30 objects), respectively. Although statistically not significant,
this progression is what is expected following an increasing helium abundance in SG
stars of the I and E groups. The effect is to increase the strength of metallic
lines (see Böhm-Vitense 1979), and that of neutral lines more than that of ionised
lines. This is clearly confirmed by the iron difference between P and E stars,
which we find to be 0.006 dex for Fe ii, half of that derived from Fe i lines. Although the combination of tiny differences in abundance between the
groups and internal errors associated with iron prevents a stronger statement,
these results qualitatively conform to expectations.
§.§ Horizontal and asymptotic giant branch stars
Our sample in NGC 6388 includes a small number of stars that are not
on their first ascent on the red giant branch (RGB), since our selection
criterion for membership was exclusively based on RVs (see Sect.2).
This provides the opportunity to homogeneously compare the behaviour of
multiple stellar populations in different evolutionary phases. The main
spectroscopic feature, the anti-correlation of Na-O abundances, is examined in
Fig. <ref>. In the panels of the left column, we highlight the
abundances of ten red HB stars (RHB, red filled circles) and 17 asymptotic
giant branch stars (AGB, blue filled circles) on the overall Na-O
anti-correlation obtained for the whole sample. While there are no differences in
the metal abundances (the average [Fe/H] values agree within 0.003 dex), as
expected, the different patterns for proton-capture species are evident.
This second finding is also hardly surprising.
At odds with the PIE
classification of RGB stars (fractions 25, 56, and 19%, respectively),
when we only consider RHB stars we find that
the vast majority (70%) are in the P component, with only 30% falling in the I group of SG stars (just the reverse proportion with
respect to RGB stars). Moreover, no RHB star is found with the highly modified composition
typical of the E component.
Our findings agree with the well-known scenario where stars with
different He contents are located on distinct portions of the HB. Spectroscopic
detections of He in HB stars are hard to obtain, but since He variations
are correlated to star to star abundance variations in light
elements (see Gratton et al. 2004, 2012a, 2019; Bastian and Lardo 2018) it is
much easier to prove this scenario by analysing the pattern of other elements,
including Na and O (see Gratton et al. 2011, 2012b, 2013, 2014, 2015, Villanova
et al. 2009, 2012, Marino et al. 2011a). The almost complete segregation of
Na-poor/O-rich stars on the red extreme of the HB is a natural consequence of
the strong correlation linking the chemical variations in proton-capture
species in GC stars and the highest temperature that may be reached on the ZAHB
in each GC, as found by Carretta et al. (2007b). In turn, these observations agree with the prediction by D'Antona et al. (2002) that variations in
helium (or their proxies, variations in light elements) are a key ingredient to
explain the HB morphology. In NGC 6388, all the analysed HB stars belong to the
RHB, so they behave almost like a simple stellar population, with the [O/Na]
ratio peaking at a high value (see Fig. <ref>, right column, middle
panel).
The lower panels in Fig. <ref> show the situation for the 17 AGB
stars in our sample. They are 11% of the RGB stars in our sample, so we are
confident that we did not miss a significant fraction of objects in this evolutionary
phase, since AGB stars are rarer (see discussion in Gratton et al. 2010 and
their Table 1).
From the lower left panel in Fig. <ref> and the number counts, we
see that the fraction of SG stars with intermediate composition (the I
component) is similar in RGB and AGB stars (56% and 53%, respectively). The
largest differences are for the primordial P component (25% for RGB and 41%
for AGB stars) and, in particular, for the extreme E component with only one AGB
star classified as such (18% for RGB and 6% for AGB stars).
This agrees with previous findings. Gratton et al. (2010) discussed the well-established lack of CN-strong objects among AGB stars in GCs (e.g. Norris et al. 1981,
Campbell et al. 2010) and correlated it with the HB properties, in particular
with the expectation that the less massive HB stars do not even begin their AGB
phase (AGB manqué). That means that the most He-rich (Na-rich and O-poor)
stars in GCs, those we would classify as E, do not reach the AGB. More recent
results, sometimes contradictory, and further discussions on the different
fractions of FG and SG in RGB and AGB stars can be found, for instance, in Marino et
al. (2017), Wang et al. (2017), and MacLean et al. (2018).
Although the distributions of both RHB and AGB stars peak at higher values than
RGB stars (right column in Fig. <ref>), the average [O/Na] for
AGB stars is shifted to a lower value than for RHB stars. A look at the panels in
the left column shows that the effect is not due to different Na abundances
(mean [Na/Fe]=+0.141 rms=0.205 dex in RHB and +0.275 rms=0.206 dex in AGB),
but rather to an actual difference in the average O content (mean
[O/Fe]=+0.199 rms=0.140 dex in RHB and -0.023 rms=0.173 dex in AGB). We used
Student's and Welch's tests with the null hypothesis that the groups of RHB and
AGB stars are extracted from a distribution with the same means. This
hypothesis cannot be rejected in the case of Na (two-tail probability p=0.114,
25 d.o.f.), whereas it can be safely rejected regarding the [O/Fe] average
content (p=1.2× 10^-3), confirming that we are probably observing a real
difference. The above discussion also confirms well-known
behaviours in NGC 6388 due to the global multiple population phenomenon, without the need to
invoke systematic or model-dependent effects.
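These tests are standard two-sample comparisons; an illustrative scipy call, with ofe_rhb and ofe_agb standing for the [O/Fe] arrays of the two groups, is:

  from scipy import stats

  t_student, p_student = stats.ttest_ind(ofe_rhb, ofe_agb)               # equal variances assumed
  t_welch, p_welch = stats.ttest_ind(ofe_rhb, ofe_agb, equal_var=False)  # Welch's test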
A further difference concerns the radial distribution of stars in different
evolutionary phases. In Fig. <ref>, we plot the cumulative distributions
of radial distances for stars in our sample, together with the results of a
Kolmogorov-Smirnov test to ascertain statistical differences between different
groups. We find that RHB stars are more externally distributed in the GC than
both RGB and AGB stars. Due to the limited size of the RHB and AGB samples, we
prefer not to expand further on this. On the other hand, we do not find any
relevant difference in the concentration of stars in the P, I, and E groups.
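The comparison of the cumulative radial distributions relies on the standard two-sample Kolmogorov-Smirnov test; schematically, with r_rhb and r_rgb the arrays of projected radial distances of the two groups:

  from scipy.stats import ks_2samp

  ks_statistic, p_value = ks_2samp(r_rhb, r_rgb)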
§.§ The peculiar Na-O anti-correlation of NGC 6388
The inter-quartile range of the [O/Na] ratio was proposed by Carretta (2006) as a
robust estimate of the extension of the Na-O anti-correlation because it is
more insensitive to outliers than other traditional indicators (e.g. the intrinsic
spread as measured by the rms scatter). From the 183 stars in NGC 6388 with
measured O and Na abundances, we obtained IQR[O/Na]=0.638, which agrees
with the previous value (0.644) derived from a sample of only 49 stars in
Carretta and Bragaglia (2018). We then confirm that NGC 6388
belongs to the group of massive GCs whose extent of the Na-O anti-correlation is
too short with respect to what is expected on the basis of their total mass.
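For reference, the inter-quartile range is computed as in the following sketch, with o_na the array of [O/Na] ratios of the member stars:

  import numpy as np

  q25, q75 = np.nanpercentile(o_na, [25, 75])
  iqr_o_na = q75 - q25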
In Fig. <ref>, we show the IQR[O/Na]-mass relation from our FLAMES survey
(see Carretta et al. 2010a), where we adopt the cluster total absolute magnitude
M_V as a model-independent proxy of the present-day GC total mass. When we
plot the 25 GCs of our FLAMES survey homogeneously analysed, we note that
NGC 6388 falls in a group of four GCs standing out of the main trend, the other
ones being NGC 6441, NGC 104 (47 Tuc), and NGC 7078 (M 15).
This behaviour is not dependent on our abundance analysis, since it is confirmed
by literature data such as 104 giants for NGC 104 (Cordero et al. 2014) or 18
giants in M 15 (Sneden et al. 1997).
At present, we have no explanation for these four GCs having an extension
shorter than expected for the Na-O anti-correlation, given their rather large
total mass.
§.§ The other proton-capture elements
We determined the abundances of other elements (Mg, Al, Si, Ca, and Sc) involved
in the network of proton-capture reactions acting to modify the primordial
chemical composition of GC stars. The mutual relations among these species in
NGC 6388 are illustrated in Fig. <ref>. Internal errors can be found in
Table <ref> and Table <ref>.
The trends visible in this figure point out once again that the whole
pattern we see among light elements in GC stars is clearly due to some form of
nucleosynthesis, because the variations in the chemical composition are not
randomly distributed, but follow well-known evolutions. There could be some
residual uncertainties related to the statistical grouping of stars, but the
three stellar populations are consistently ordered according to all abundances.
Enhancement in Al is accompanied by a depletion in Mg, Na is lower when O is
higher, and so on.
These trends are well summarised in Fig. <ref>, where we represent the
average abundances of different elements in the three components P, I, and E in
NGC 6388. In this figure, we plot the average differences with respect to the P
group, so that the trends are a direct representation of the changes in the
chemical composition due to the formation of multiple populations with respect
to the floor of primordial abundances in the proto-GC.
Not surprisingly, we retrieve the large enhancements in Na and Al contents,
paralleled by a noticeable O depletion, and a smaller decrease in Mg
abundance. Both the Si-Al correlation and the Mg-Si anti-correlation act as a
precision thermometer that probes the inner temperature reached by the FG
polluters. The small but steady increase in the [Si/Fe] ratio when progressing
from P to I and to E component (and the simultaneous decrease in [Mg/Fe]) has
been explained (e.g. Karakas and Lattanzio 2003; Yong et al. 2005) by
the leakage from the Mg-Al cycle on ^28Si. This occurs when the two
reactions ^27Al(p,γ)^28Si and ^27Al(p,α)^24Mg
switch in relevance at a well-defined temperature of ∼ 65 MK (see Arnould et
al. 1999, their Figure 8). Concerning NGC 6388, this means that whatever the polluters
were, in the early evolution of the primordial population of the cluster they
were able to reach such a temperature.
In Carretta and Bragaglia (2019), we investigated the
high extreme of the temperature range possibly reached by the FG polluters
in order to reproduce the pattern of heavier proton-capture species, such as Sc and
Ca. The idea was to ascertain in NGC 6388 the presence (or lack) of the
anti-correlations found between the abundances of both Sc and Ca with those of
Mg in the massive cluster NGC 2808 (Carretta 2015). Their existence in
NGC 2808 was interpreted in the same framework of proton-capture reactions at
very high temperature used by Ventura et al. (2012) to explain the unique
pattern of the K-Mg anti-correlation in NGC 2419 (see Cohen and Kirby 2012,
Mucciarelli et al. 2012). The variations of Sc and Ca in NGC 2808 and the
anti-correlation K-Mg, later found by Mucciarelli et al. (2015), showed that this
extreme regime of H-burning can be traced even in more normal GCs than NGC 2419.
However, in Carretta and Bragaglia
(2019) we were able to show that the abundances of Sc and Ca in NGC 6388 cannot
be distinguished from those of field stars of similar metallicity.
The latter represent fairly well the unpolluted, primordial population of stars
where only the effects of supernova nucleosynthesis can be tracked. In turn,
these observations mean that the FG polluters in NGC 6388 were not able to
reach the temperature of about 150 MK, above which seed species including Ar and K
start to be affected, allowing the production of elements such as Sc and Ca (see
Prantzos et al. 2017). The constraints from the Si-Al variations and the
evidence of Sc and Ca being largely unaffected would pinpoint a narrower
range of 100-120 MK for the temperature reached in the candidate
polluters if these can be identified with massive AGB stars (see D'Antona et
al. 2016). In Fig. <ref>, we note a slight enhancement in Sc and Ca in
the extreme E fraction of stars in NGC 6388, but it is not significant.
The scenario and the conclusions discussed above are strongly supported by the statistics shown in Table <ref>, where we list the parameters of
linear regressions through the different relations among light elements shown in
Fig. <ref> and Fig. <ref>. We tested the level of
significance for each regression, reporting the
results in the last two columns of Table <ref>. The two-tail probabilities listed
show that all the relations involving O, Na, Mg, Al, and Si in NGC 6388 are real
with a high level of significance. On the contrary, the two anti-correlations
between Mg and Sc, and Mg and Ca are not significant.
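The regressions and their two-tail probabilities can be reproduced with a standard least-squares fit; an illustrative scipy call for one of the relations (here [Al/Fe] versus [Mg/Fe], with array names of our choosing) is:

  from scipy.stats import linregress

  fit = linregress(mg_fe, al_fe)
  # fit.slope, fit.intercept, fit.rvalue and the two-sided p-value fit.pvalue
  # correspond to the quantities listed in the table.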
We can then confirm, based on robust
evidence, that in NGC 6388 the FG polluters, whatever they were, likely reached
a maximum inner temperature restricted to a narrow range between 100 and 120-150
MK, as found in Carretta and Bragaglia (2019).
§.§ Light elements, polluters, and dilution
In Carretta and Bragaglia (2018), we tried to ascertain how many classes or kinds
of polluters may have been contributing to the chemical budget of NGC 6388. We
performed this exercise using the same approach adopted in Carretta et al.
(2012) to study the discrete components in NGC 6752.
We started with the simple dilution model (illustrated, e.g. in Carretta et al.
2009b), where we reproduce the chemical pattern of the various stellar
populations by mixing the composition of the E group with different fractions of
primordial gas, which is simply represented by the composition of the P
component. This means that the I population would be obtained by mixing a
fraction dil of matter with E-like composition together with a fraction whose
composition is P-like. It follows that if only a class of polluters was in
action, then the value of dil should be the same for all elements:
dil = [A(X)_I - A(X)_P] / [A(X)_E - A(X)_P] = [A(Y)_I - A(Y)_P] / [A(Y)_E - A(Y)_P] ,
where A(X) and A(Y) are the abundances in number of atoms of elements X and
Y.
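In practice, dil is evaluated from the mean abundances (by number of atoms) of the three groups; a one-line sketch for a single element is:

  def dilution_factor(a_p, a_i, a_e):
      # a_p, a_i, a_e: mean abundances, in number of atoms, of the P, I and E groups
      return (a_i - a_p) / (a_e - a_p)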
The results reached in Carretta and Bragaglia (2018) were not conclusive. The
main obstacle was the availability of Al abundances only for the
limited sample of 24 stars with UVES spectra. In turn, this provided two
different possible divisions in the P, I, and E groups, depending on whether the groups
were based on the Al-O or the Na-O plane. As a consequence, we could not
entirely exclude the possibility that more than a single class of polluters
could be necessary to produce the observed composition of the I component.
With the present large sample of stars for
which homogeneous abundances of several proton-capture species are obtained in
NGC 6388, this problem is solved. For instance, our sample of stars with derived Al abundances is
increased by a factor 6.7, with respect to Carretta and Bragaglia
(2018). Thus, we repeated the exercise.
We found that in the chemical planes Al-O, Na-O, Al-Mg, and Si-Mg a simple
dilution model, anchored to the mean values for P and E stars, now nicely passes
through the average value of the intermediate I population, as shown in
Fig. <ref> and at variance with what was found in Carretta and
Bragaglia (2018).
We thus conclude that the hints of multiple classes of polluters seen in
Carretta and Bragaglia (2018) were likely due to the limited size of the
available samples (especially Al abundances). Our present results are compatible
with the existence of a single class of FG polluters able to reproduce the
observed characteristics of multiple stellar populations in NGC 6388, when their
ejecta were mixed with variable amount of pristine gas.
A summary of the above exercise is given in Fig. <ref>, where we
plot the resulting dilution factors dil for each considered species
(indicated by the atomic number). The value corresponding to Mg is indicated
with a different symbol (empty circle) to stress the large associated error. The
same behaviour was already noted in Carretta and Bragaglia (2018) and
tentatively attributed to the small variations in the abundance of Mg with
respect to the primordial value. However, we note that star-to-star variations
in the Si content are also small, yet its derived dilution factor is compatible
with the others species. At present, we do not have a satisfactory explanation
for the behaviour of the dilution of Mg.
The values we found for dil are 0.41± 0.09, 0.30± 0.06,
0.14± 0.38, 0.30± 0.06, and 0.48± 0.16 for O, Na, Mg, Al, and Si,
respectively. Apart from Mg, all these values lie within ± 1σ from the
average values labelled in Fig. <ref>, a strong indication for a unique
class of polluters acting in NGC 6388.
§ THE CHEMICAL COMPOSITION OF NGC 6388 IN CONTEXT
§.§ α-capture elements
Thanks to the spectral range of UVES spectra and to the large size of our
overall dataset, we were able to analyse the full set of α-elements in
NGC 6388 for a large number of stars. Both products of hydrostatic burning (O,
Mg) and explosive nucleosynthesis (Si, Ca, Ti) were analysed. Their run as a
function of the effective temperature is summarised in Fig. <ref>
(red and blue points for stars with GIRAFFE and UVES spectra, respectively).
Internal error bars for this figure are listed in
Table <ref> and Table <ref> for
abundances derived from UVES and GIRAFFE spectra, respectively.
No dependence of abundance on T_ eff is present for any of these
elements, over a range of luminosity of more than four magnitudes.
Since the key work by Tinsley (1979), abundance ratios have been used as probes of the birth properties of stars. The interplay between star
formation and stellar lifetimes (driven by the mass) means that elements from
different nucleosynthetic sites can be combined into abundance ratios useful to
uncover the primordial environment where the observed stars were born.
In particular, the α-Fe plane is a privileged indicator, since a
similarity in this plane for two stellar populations indicates a similar
chemical evolution of the systems. However, the metallicity at which the so-called knee due to the onset of Type Ia SNe occurs depends on the system's
stellar mass and on the efficiency of star formation in the progenitor galaxy
(see, e.g. the review by Tolstoy et al. 2009). Owing to the impact of these
abundance patterns on the clues for the origin of NGC 6388, our results on
α-elements deserve a detailed discussion.
The average
amount of α-elements in NGC 6388 is overabundant with respect to the
solar level, which is compatible with a chemical composition dominated by the
contribution of type II SNe acting before a significant number of SN Ia started
to add increasing amounts of iron and lower the ratio [α/Fe]. This
evidence, together with the time delay expected for the bulk explosions of SN
Ia, is also consistent with GCs in the bulge being old and
nearly coeval to halo GCs, with similar levels of α-element
overabundance.
The average [Si/Fe] ratio reflects the typical overabundance with respect to the
solar values found also in the majority of GCs and for the other
α-elements in Fig. <ref>. We found a mean [Si/Fe] of 0.31 dex from
150 stars with GIRAFFE spectra and 0.35 dex from 34 stars observed with
UVES, after combining the results for the 12 new stars of the present study with
the sample analysed in Carretta and Bragaglia (2018). We thus confirm the high
value we found in our first analysis for NGC 6388 (Carretta et al. 2007a), which is at
odds with the low values derived by the APOGEE DR16 (<[Si/Fe]>=-0.03± 0.1
dex, as reported in Horta et al. 2020) and DR17 (<[Si/Fe]>=+0.045± 0.061
dex), from 53 stars flagged as members of NGC 6388 (Abdurro'uf et al. 2022).
Conversely, in NGC 6388 we found a value (∼ 0.07 dex, see
Table <ref>) for the average [Ca/Fe] ratio lower than for
other α-elements. However, this is fully consistent with
our previous analyses (Carretta et al. 2007a, Carretta and Bragaglia 2018).
Gratton et al. (2006) observed a similarly low value in the massive bulge
cluster NGC 6441, and they advanced the hypothesis that this deficiency in Ca
could be an artefact of the analysis, due to using strong lines in bright and
cool giants. However, this explanation hardly applies to our sample, which
includes stars spanning a range of about 1700 K in T_ eff.
A comparison of the abundance of α-elements in NGC 6388 from both optical
and infrared studies is provided in Fig. <ref>. To our derived
abundances we superimposed the four stars reanalysed by Minelli et al. (2021a)
(large black circles) and the about 50 stars flagged as members of NGC 6388 in
the APOGEE DR17 (filled green triangles). Looking at this Figure, clear offsets
are present between the abundances from infrared and optical spectra, with both
optical analyses giving consistent results (at least for the α-elements,
see Carretta and Bragaglia 2022b and the next section). In particular, Mg and Si
abundances from APOGEE are lower than we find here, whereas a good agreement is
found for Ca abundances. For Ti, we are instead seeing a larger spread in APOGEE
abundances.
Part of the offsets may be explained by differences in the adopted solar
reference abundances. They are not given in the DR17 paper, but assuming that they are
not changed from DR16, we used the solar values provided by Smith et al. (2021)
for DR16 to compute the corrections needed to bring infrared data on our solar
scale (respectively +0.19, +0.11, +0.13, and -0.01 dex for Mg, Si, Ca, and Ti).
The overall agreement would improve for Mg and Si, but it would worsen for Ca.
For a more quantitative comparison, in Table <ref> we report the averages
and rms scatters for α-elements from the present and other studies:
Minelli et al. (2021a); Wallerstein et al. (2007), who studied eight cool giants
in NGC 6388 (we report the values from their analysis with photometric
gravities); and the mean values from APOGEE DR17. As a further comparison, we
added the average values derived from UVES and GIRAFFE spectra by Gratton
et al. (2006, 2007, respectively) in NGC 6441, often considered a twin GC of
NGC 6388, with similar origin and characteristics. Finally, the last two
columns provide the values for NGC 6388 derived by M20, both from all available
stars and by selecting stars with high S/Ns according to their criteria.
The values of [Mg/Fe] and [Si/Fe] from APOGEE DR17 are
lower than those of the analyses based on optical spectra. They are also
lower than the values given by M20, in particular when only high-S/N stars are
used in their reanalysis with BACCHUS of an earlier SDSS/APOGEE release.
On the contrary, the Ca level seems to be actually low in these
GCs, with the notable exception of NGC 6441 from GIRAFFE spectra (see, however,
the warning in Carretta and Bragaglia 2021, 2022b about the time on targets for
NGC 6441 and the relative low S/N in that cluster).
In Fig. <ref>, we also compare the average abundances of Mg, Si, Ca,
and Ti from the present work with the abundances in the GCs homogeneously
analysed in our FLAMES survey (see Carretta et al. 2006 and following studies,
referenced in Table B.1), represented by blue filled circles with error bars.
These data have been complemented by four metal-rich GCs (three bulge clusters and one
disc cluster) from Muñoz and collaborators.
As reference, in the same figure we also plot field stars of the Galaxy, both in
the halo and disc components (open grey circles) and in the bulge (cyan open
circles), from several literature studies.
For the sake of clarity, all references to the data used are provided in
Appendix B (Tables B.1 and B.2), where we also give the identification of each
GC and its [Fe/H] value. Finally, we also plot abundances for the two more
massive dwarf spheroidal galaxies associated with the MW, one still distinct
(Fornax: Letarte et al. 2010, Lemasle et al. 2014) and the other already
accreted and disrupting in the MW (Sagittarius, Minelli et al. 2021b). These
stars are used as a robust benchmark for the pattern of α-elements in
external systems of lower total mass than the Galaxy. The combination of
lower star formation and chemical evolution shows up in a lower metallicity knee
and resulting underabundance of these elements, once such systems were possibly
accreted into the main Galaxy, as is currently happening to Sgr.
The average abundance of GCs nicely follows the pattern of α-elements of
field MW stars, a plateau at low metallicity followed by a decrease when the
metallicity increases after the `knee', signalling the major onset of SN Ia.
Like other metal-rich GCs ([Fe/H] ≳ -0.7 dex), NGC 6388 follows this trend well.
In particular, the average [Si/Fe] ratios in NGC 6388 and in NGC 6441 are in
very good agreement with the mean level shown by halo and disc GCs in the MW and
with the abundance of bulge field stars, both giants (as in GCs) and dwarfs, as
shown in detail in Fig. <ref>, where we display some of the studies
used in Fig. <ref>, in a restricted range around the mean metallicity
of NGC 6388.
Calcium abundances in NGC 6388 seem to lie at the lower envelope of the field
bulge star distribution (Fig. <ref>). Overall, NGC 6388 and
NGC 6441, together with the other bulge GCs (with the exception of NGC 6440,
Muñoz et al. 2017) seem to be more compatible with the disc stars.
However, the mean ratio [Ca/Fe] in NGC 6388 is still consistent with several
studies focusing on bulge stars (Fig. <ref>), again taking into
account small offsets related to the abundance analysis.
Furthermore, we remind the reader that in NGC 6388, Ca is not involved in the network of
proton-capture reactions (Carretta and Bragaglia 2021).
This result is quantified in Fig. <ref> and Table <ref>;
among all relations concerning proton-capture elements, the only ones resulting
as statistically not significant are those involving Ca and Sc.
Among the species examined in Fig. <ref>, only Ti does not show the
typical decline at high metallicity. We remark that this element is not a
classical α-element
and is located at the boundary between α
and Fe-peak species. Moreover, Ti abundances are only available for less than a
half of the GC sample, our FLAMES survey being focused on species (such as Mg, Si,
and sometimes Ca) involved in the multiple population phenomenon.
For the stars observed with UVES, both neutral and singly ionised lines were
available. We found, on average, a difference of [Ti/Fe]ii - [Ti/Fe]i
= -0.051 dex, with rms=0.096 dex, which is not significant.
From this section, our results and the similarity between NGC 6388 and both the
bulge and disc field of the MW suggest a high-mass environment for the progenitor of
this cluster. Coupled with the chemo-dynamical evidence presented in a previous
paper (CB22b), this is indicative of a formation within a massive component of
the Galaxy, likely the bulge itself. The high metallicity of NGC 6388 clearly
excludes this GC from being associated with a metal-poor structure recently reported
from APOGEE data by Horta et al. (2021) and considered possibly accreted in the
early MW before the presently observed main Galactic bulge (including its
GC population and NGC 6388) was formed.
§.§ Fe-peak elements
Together with iron, we derived the abundances of seven species of
the iron-peak group in NGC 6388: [Sc/Fe]ii, [V/Fe], [Cr/Fe]i, [Mn/Fe],
[Ni/Fe], [Co/Fe], and [Zn/Fe]. Most abundances are derived from UVES spectra of
the 12 stars newly observed in this work, supplemented by the sample of Carretta
and Bragaglia (2018). However, abundances of Sc and Ni are also measured for the
larger sample of stars with GIRAFFE HR13 spectra, using on average 2 and
11 lines for Sc and Ni, respectively.
The derived abundances present no dependence on the temperature of stars over a
range of about 1700 K (see Fig. <ref>), particularly evident in the
cases of Sc and Ni, and are roughly clustered within ± 0.10 dex from the
solar value, apart from V, which shows a slight overabundance, and Mn, which
shows a clear deficiency, also reflected by the pattern of field stars at
similar metallicity (see below).
In Carretta and Bragaglia (2022b) the average abundances of Sc, V, and Zn from
the present work were used to check whether the level of these iron-peak elements
could be used for chemically tagging NGC 6388 (accreted or formed in situ).
Comparing the abundance of stars in NGC 6388 to a large ensemble of field stars
(both disc and bulge stars) at similar metallicity, we were able to exclude a
significant difference between cluster and field stars for all three species
under scrutiny, thus rejecting the accretion origin.
The in situ nature of NGC 6388 is supported and strengthened by the abundance of
the other elements of the iron group derived here. In
Fig. <ref>, we compare the mean abundances of Cr, Mn, Co, and Ni
obtained for NGC 6388 to several samples of field stars in the Milky Way,
together with the average abundances for a number of GCs from our FLAMES survey
and from Muñoz and collaborators. No significant difference is found
between cluster and field MW stars.
The same pattern is followed well by other classical, less massive bulge GCs,
with the possible exception of Co, whose level seems to exceed the locus defined
by field bulge and disc stars. This is particularly evident for NGC 6528
(Muñoz et al. 2018).
However, when NGC 6441 (Gratton et al. 2006) is also considered among the bulge
populations, we stress again that, in general, the iron-peak elements perform
poorly in picking out objects presumably of accreted origin.
§.§ Neutron-capture elements: Zr and Ba
We obtained the abundances of the neutron-capture elements Y, Zr, Ba, La, Ce,
and Nd, sampling the first and second peak of species produced mainly by the
s-process. We also measured Eu, typically produced by the r-process in
the presence of higher neutron densities. Most abundances were obtained from UVES
spectra, due to their larger spectral coverage. However, we were also able to derive
Ba and Zr abundances from GIRAFFE HR13 spectra by measuring the Ba ii
6141 Å line in all of the 150 stars and up to a maximum of five Zr i
lines in 138 stars. We first concentrate on these two elements and defer the
analysis of all others to the next sub-section.
Sources of atomic parameters for these Ba and Zr lines can be found in Table 8
of Gratton et al. (2007), with the exception of the log gf value for the
Ba ii line, taken instead from Sneden et al. (2003). Due to the
strength of this line and the consequent dependence on microturbulent velocity,
abundances of Ba were derived using the relation as a function of surface
gravity for v_t (Worley et al. 2013) and a constant metallicity value (-0.48
dex) for all stars. This expedient allows us to avoid spurious trends of
abundances as a function of the microturbulent velocity (see e.g. Carretta et
al. 2015). The absence of trends with effective temperature for all derived
elements is shown in Fig. <ref>.
Large samples of stars with abundances of s-process elements are necessary to
explore another defining property of the type II GCs.
Together with an enhancement in metallicity [Fe/H], the metal-rich
component is also proposed to be enriched in neutron-capture elements. This
characteristic may manifest in a range of variations, going from the small
amounts observed in NGC 1851 (Carretta et al. 2011) up to the large excesses
detected in ω Cen (Johnson and Pilachowski 2010; Marino et al. 2011b).
In Carretta and Bragaglia (2022a), we demonstrated that NGC 6388 is a typical
mono-metallic GC, whose intrinsic spread in [Fe/H] is fully compatible with
uncertainties derived from abundance analysis. Hence, finding here that neither
Ba (measured in 185 stars) nor Zr (in 168 stars) show evidence of enhancement
in part of the stars of this GC does not come as a surprise.
As a first test, we split our dataset at [Fe/H]=-0.488 dex (the mean
metallicity of the GIRAFFE sample). Then, we compared the cumulative
distribution of Ba (and Zr) abundances for all stars more metal rich and more
metal poor than this value using a Kolmogorov-Smirnov test. For both elements we
obtain a K-S probability p ∼ 0.30. This means that it is statistically not
possible to safely reject the null hypothesis that the two distributions are
extracted from the same parent population.
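As an illustration of this test, a minimal sketch using scipy is given below; the arrays are synthetic stand-ins generated only to make the snippet runnable, not the measured abundances (which are tabulated in Appendix A).
```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# synthetic stand-ins for the per-star measurements (illustration only, not the real data)
feh = rng.normal(-0.488, 0.07, 150)     # [Fe/H] of the GIRAFFE stars
ba_fe = rng.normal(0.10, 0.15, 150)     # [Ba/Fe]
zr_fe = rng.normal(-0.20, 0.15, 150)    # [Zr/Fe]

split = -0.488                          # mean [Fe/H] of the GIRAFFE sample
rich = feh > split

for label, abund in (("Ba", ba_fe), ("Zr", zr_fe)):
    # two-sample Kolmogorov-Smirnov test between the metal-rich and metal-poor halves
    stat, pval = ks_2samp(abund[rich], abund[~rich])
    print(f"[{label}/Fe]: K-S statistic = {stat:.3f}, p = {pval:.2f}")
# a large p, as obtained above for NGC 6388 (p ~ 0.30), means the null hypothesis
# of a common parent population cannot be rejected
```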
A second test involves the pseudo-colour maps constructed using HST photometry
(Milone et al. 2017). Figure <ref> shows the chromosome map derived
by us using the public data available from the HST archive (Nardiello et al.
2018; see Carretta and Bragaglia 2022a for details). In the figure, the stars in
our sample falling in the small central region covered by the HST are indicated by
larger symbols, coloured according to the Ba (upper panel) and Zr i (lower panel) abundances derived in the present work. There is no significant
difference between stars scattered to the red in the pseudo-colour map and the
other stars in this photometric plane.
The direct implication of these tests is that NGC 6388 does not qualify as a
type II GC either for some enhancement in metallicity or in the amount of
s-process elements. In turn, the red RGB stars scattered in the pseudo-colour
maps obtained from HST photometry are simply not yet explained by any observable
change or alterations in their chemical composition.
On the other hand, we found
statistically significant correlations between the Zr abundance and a few
light elements that are enhanced in the proton-capture reactions typical of the
FG polluters. In Fig. <ref> and Fig. <ref>, we show the ratio
[Zr/Fe]i as a function of [Na/Fe] and [Al/Fe].
We tested the level of significance for a linear regression between Zr and both
Na and Al: the two-tail probabilities (p=2.4× 10^-5 and p=0.033 for
Na and Al, respectively) allow us to conclude that the observed relations are real,
with a high level of significance.
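The regression test itself is standard; a minimal sketch (again with synthetic stand-in arrays rather than the tabulated [Na/Fe] and [Zr/Fe]i values) is:
```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
na_fe = rng.normal(0.3, 0.2, 138)                       # stand-in [Na/Fe] values
zr_fe = -0.2 + 0.3 * na_fe + rng.normal(0, 0.1, 138)    # stand-in [Zr/Fe]i with a built-in slope

res = linregress(na_fe, zr_fe)
# res.pvalue is the two-tailed probability of the measured slope under the
# null hypothesis of no correlation
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, two-tail p = {res.pvalue:.1e}")
```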
This is not the first time a similar correlation has been found. To our
knowledge, the first paper to discuss correlations between elements enhanced in
SG stars (such as Na) and others based on a large enough number of stars was Yong
et al. (2013) on NGC 6752. They used differential analysis to derive very
precise abundances (at 0.01 dex level) and found positive correlations, generally
statistically significant, between many elements and Na. This could indicate
that the same stars that enhanced Na over primordial values were also
responsible for the increase in the other elements, among them some
neutron-capture ones (unfortunately, they did not measure Zr). Their
conclusion was that the abundance trends are real and discussed three
potential mechanisms to explain them (besides the possibility of systematic
errors in stellar parameters, which were regarded very unlikely): a star-to-star
variation in CNO abundance, or in He (which is strictly connected to Na in
multiple populations), or inhomogeneous early chemical evolution (i.e.
metallicity variations). They favoured a combination of the last two, but
encouraged similar studies on other clusters to try and clarify this issue.
At odds with those findings, Schiappacasse-Ulloa & Lucatello (2023) did not
find a correlation between Na and neutron-capture elements in the same cluster.
They analysed about 160 stars in NGC 6752, from the main-sequence turn-off up to
the RGB bump, deriving abundances of elements from different nucleosynthetic
chains, among which we find Na, Y, and Ba. While they saw a mild Na-Y correlation, this
is not statistically significant. Also, the Ba abundance does not correlate with Na
abundance, and they concluded that the stars that enhanced the Na level did not
contribute Y and Ba.
Finally, Kolomiecas et al. (2022) derived the Na and Zr abundances of about 240
RGB stars in 47 Tuc, finding a statistically significant positive correlation
between them. Their conclusion was that some amount of Zr should have been
produced by the same primordial stars that enriched the SG stars in Na.
Unfortunately, this cannot be attributed unequivocally to a single class of
polluters, either AGBs or massive stars, or a combination of the two. We follow
Yong et al. (2013) and Kolomiecas et al. (2022) in encouraging the extension of this
kind of analysis to more elements and more clusters.
§.§ Neutron-capture elements: Overall pattern in NGC 6388
The average abundances in NGC 6388 are compared to those of field disc and bulge
stars, as well as to the mean abundances of previous analyses of GCs, in
Fig. <ref> and Fig. <ref> (symbols are as in
Fig. <ref>). The neutron-capture elements in NGC 6388 seem to be consistent
with those of bulge field stars of similar metallicity. Some offsets seem to
exist with respect to the disc component around [Fe/H]∼ -0.5 dex, but not
enough to be very significant, maybe with the exception of Zr. However, the
deficiency in [Zr/Fe] (middle panel of Fig. <ref>) is also
shared by NGC 6441 and two other more metal-rich GCs. Together with the mean
abundances of the four metal-poor GCs, the overall pattern could be explained
by the decreasing abundances of Zr (produced in the main s-process component
in AGB stars) as the metallicity increases due to the continuous injection of
fresh iron from SN Ia (e.g. Tinsley 1979).
Another possibility is the one highlighted by Kobayashi et al. (2020); the
contribution from nucleosynthetic sources with an intrinsically long time delay
is more affected by the star formation timescale. Inefficient star formation (as
in the halo) may give a higher level of neutron-capture elements relative to iron.
On the other hand, rapid star formation implies a smaller contribution of
ejecta from longer lived, lower mass AGB stars, which are producers of elements from the
main s-process component. However, this second alternative would apply to both
the light and heavy s-process elements (Zr, La, Ce) produced in the
main s-process, whereas there is some evidence that NGC 6388 and
other bulge GCs show the same level of neutron-capture elements as seen in halo
metal-poor GCs, at least for La, Ba, and Nd.
Concerning the r-process, in the bottom panel of Fig. <ref>
we trace the run of [Eu/Fe] as a function of metallicity. The pattern of
constant values in the halo phase, followed by a gradual decrease as [Fe/H]
increases, is explained well by the scenario pioneered by Tinsley (1979),
pointing to the origin of Eu in the same sites (massive stars) where the
α-elements are produced. After the knee in [Fe/H] is reached in the
main progenitor galaxy, the Eu production would remain essentially flat, but the
[Eu/Fe] ratio is lowered by the increasing contribution of SN Ia. When our
sample of clusters, mainly constituted by metal-poor GCs, is complemented by
metal-rich GCs, it is easy to appreciate that the above scenario is satisfied
by both field and GC stars. The interplay between enrichment from core-collapse
and thermonuclear SNe was essentially the same, regardless of the star formation
occurring in GCs or in the general field.
Finally, in Fig. <ref> we use the relative strengths of the r-process
and s-process to probe the relative contribution from high-mass stars (mainly
responsible for yields of the weak component of the s-process and the
r-process) and from low- or intermediate-mass AGB stars (main s-process).
In this figure, the pure r-process element Eu is compared to Y, chosen as
reference element for the s-process. Again, by plotting the ratio [Eu/Y] as a
function of the metallicity (liberally used as a chemical `chronometer') we are
exploiting the different mass ranges, and therefore different evolutionary
timescales, of the involved stars to give a general picture of the enrichment
process in field and GC stars.
At low metallicities, both GC and field stars show high [Eu/Y] ratios,
approaching the scaled Solar System pure r-process level, with scarce or
no contribution from the s-process in AGB stars. As lower mass stars with longer
evolutionary timescales appear on the enrichment scene, an increase in the
production of s-process elements lowers the [Eu/Y] ratio. The decrease seems
to happen in lockstep both in field and GC stars until about [Fe/H]∼ -0.5.
At this metallicity, our present analysis does confirm the earlier results
presented in Carretta et al. (2007a) for NGC 6388 and in Gratton et al. (2006) for NGC 6441. In
particular, NGC 6388 seems to have a [Eu/Y] higher than observed in field stars
of similar metallicities. Although the larger errors associated with NGC 6441 make
its ratio still compatible with the field stars, the ratio in NGC 6388 is in
better agreement with the ratio in metal-poor GCs, where the contribution of AGB
stars to the s-process was not yet relevant.
In Carretta et al. (2007a), we put forward the hypothesis that this excess in [Eu/Y]
could be explained by an enhanced contribution of massive stars to the
enrichment in the bulge, since light s-process elements such as Y can be also
produced in the weak s-process component within the He-burning core of massive
stars (Couch et al. 1974, Travaglio et al. 2004). This idea would also explain
the high [Eu/Y] ratios measured in metal-rich bulge field stars (Gratton et al.
2006, cyan circles in Fig. <ref>). However, we note that the disc GC
NGC 5927 (Mura-Guzmán et al. 2018) also shares this excess ratio; therefore, a larger sample of GCs at high metallicity is required to answer the question.
A more quantitative approach is summarised in Fig. <ref>, where
we compare the average abundances of neutron-capture elements derived from UVES
spectra in NGC 6388 to the ratios of r- and s- elements estimated by
Simmerer et al. (2004) in the Solar System. For this comparison we followed the
approach suggested by Raffaele Gratton and employed in Carretta et al. (2015)
for NGC 6093. To reproduce the pattern of neutron-capture elements in NGC 6388,
our best fit must consider the sum of two contributions: a solar-scaled
r-process contribution, with a scaling by -0.27 dex, and a solar-scaled
s-process contribution, with a scaling by -0.51 dex.
Taking into account the derived metallicity for NGC 6388 ([Fe/H]=-0.48 dex),
these scaling factors imply abundance ratios of [r/Fe]=+0.21 dex and
[s/Fe]=-0.03 dex.
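A sketch of the kind of two-parameter fit involved is given below; the r-process fractions and observed ratios are placeholder numbers chosen only to demonstrate the procedure (the actual inputs are the Simmerer et al. 2004 solar fractions and the UVES abundances tabulated in Appendix A).
```python
import numpy as np
from scipy.optimize import minimize

# placeholder solar r-process fractions and observed ratios (illustration only;
# the real inputs are the Simmerer et al. 2004 fractions and the measured UVES ratios)
f_r = np.array([0.28, 0.19, 0.15, 0.25, 0.44, 0.97])
f_s = 1.0 - f_r
obs = np.array([0.00, -0.10, 0.10, 0.15, 0.20, 0.45])   # [X/Fe]-like ratios, in dex

def model(p):
    dr, ds = p   # scalings (in dex) of the solar r- and s-process components
    return np.log10(f_r * 10**dr + f_s * 10**ds)

def chi2(p):
    return np.sum((obs - model(p)) ** 2)

best = minimize(chi2, x0=[0.0, 0.0])
print("r-process scaling = %+.2f dex, s-process scaling = %+.2f dex" % tuple(best.x))
```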
The excess of elements produced by the r-process is very similar to the one we
obtain from the α-elements. From GIRAFFE and UVES spectra, we derive mean
values of [α/Fe]=+0.22 dex and +0.23 dex (regardless of inclusion or
exclusion of Mg in the average). The excess in r-process elements with respect
to the solar value may be interpreted as an iron deficiency due to the fact that
we see almost exclusively the contribution of massive stars, whereas the one from SN Ia
is missing. On the other hand, our data show that it is necessary to also
consider a significant contribution from the s-process, which scales almost exactly as Fe, to explain the observations in NGC 6388 well.
§ SUMMARY AND CONCLUSIONS
We present the homogeneous spectroscopic analysis of a large sample of stars in
NGC 6388. Concerning the proton-capture elements, we find that all
stars observed in NGC 6388 nicely trace the typical correlations and
anti-correlations that are the unique trademark of GCs. The exceptions are the
heaviest species (Ca, Sc) involved in the network of proton-capture reactions in
H-burning at high temperature. No statistically significant variation is found for these two
species between FG and SG stars, confirming qualitative results shown in Carretta and
Bragaglia (2019) for NGC 6388.
Star-to-star variations in Si, correlated to abundance changes in Al and
anti-correlated to Mg depletions, support leakage from the Mg-Al cycle on Si.
In turn, this requires temperatures as high as about 65 MK in the FG polluters.
A simple dilution model is compatible with a single class of polluters
injecting processed matter into the intra-cluster medium at early times in
the proto-GC. Mixing this polluted material (typical of the composition of the
SG group E) with different amounts of pristine gas (whose composition is
represented by the P group of stars), we can obtain a good agreement for the
composition of the intermediate SG group for all the involved species.
The extent of the Na-O anti-correlation, the privileged, unambiguous signature of multiple
stellar populations in GCs, seems to be too short in NGC 6388 with respect
to its large total mass. Together with a few other massive GCs (47 Tuc, M 15,
NGC 6441), NGC 6388 lies slightly below the main trend in the IQR[O/Na] versus M_V
relation describing the dependence of the extent of Na-O anti-correlation as a
function of cluster total mass (Carretta 2006; Carretta et al. 2010a).
The inventory of α-element abundances in NGC 6388 is very compatible with
the chemical pattern of field stars in the Milky Way, both in the disc and
bulge. The abundance ratios in NGC 6388 participate in the classical trend
defined by the interplay between star formation and lifetimes of the main
stellar nucleosynthesis sites, namely type II and type Ia SNe. The resulting
plateau at low metallicity, followed by a knee and the decrease of the [α/Fe]
ratio at increasing [Fe/H] is followed by both field and GC stars, including
NGC 6388.
We found no evidence of a low level of Si, as derived by infrared APOGEE data.
Therefore, we cannot support an extragalactic origin for NGC 6388, as suggested
by Horta et al. (2020). We note that all the studies using optical spectroscopy
converge on finding normal, high values of [Si/Fe] for NGC 6388. It seems that
there could be some offsets between optical and infrared analyses due to still
poorly understood effects concerning Si. A low value of Ca is instead derived
for NGC 6388, as well as for bulge stars in a similar metallicity regime,
regardless of whether optical or infrared spectra are used.
The average abundances of elements of the iron group in NGC 6388 closely follow
the pattern of chemical enrichment typical of field stars in the Milky Way. We
then confirm and strengthen the results by Carretta and Bragaglia (2022b):
NGC 6388 is clearly not of extragalactic origin, but likely formed in situ in
the Galactic bulge, and the iron-peak species can only be used to trace the GCs
of the Sagittarius dwarf, whose content in such elements is typically lower than
in the autochthonous stars of our Galaxy. Consistently, both these elements and
the high Si level, normal for old GCs, point toward the in situ origin of
NGC 6388.
We do not detect any signature of enhancement in neutron-capture elements in a
fraction of stars of NGC 6388. In particular, there is no significant difference
in the abundance of Ba and Zr between FG stars, SG stars, and the stars scattered to
the red of the RGB. This evidence corroborates the fact that NGC 6388 is not a GC
of a distinct (type II) class and also leaves the red RGB stars unexplained, at
least from a chemical perspective.
Statistically significant correlations are found between Zr abundance and both
Na and Al abundances. Similar results are also found in a few other cases and
warrant being extended to a larger sample of GCs.
The excess of the r-process element Eu in NGC 6388 is consistent with the values
of more metal-poor old GCs and similar to the ratio of α-elements,
showing the contribution of massive stars coupled to the small injection of Fe
from thermonuclear SNe at the epoch of cluster formation. The overall pattern of
neutron-capture elements from high-resolution UVES spectra shows, however, that
it is necessary to also consider a contribution from the s-process in this GC.
This research made use of the products of the Cosmic-Lab project funded by
the European Research Council. We thank E. Dalessandro for helpful discussions.
This research has made use of the SIMBAD database (in particular Vizier),
operated at CDS, Strasbourg, France, of the NASA's Astrophysical Data System,
and of TOPCAT (http://www.starlink.ac.uk/topcat/).
This paper makes use of the data collected by the HST Treasury Program GO 13297.
We acknowledge funding from PRIN INAF 2019 “Building up the halo: chemo-dynamical tagging in the age of large surveys”, PI Lucatello.
[] Abdurro'uf, Accetta, K., Aerts, C., et al. 2022, ApJS, 259, 35
[] Adibekyan, V.Zh., Sousa, S.G., Santos, N.C. et al. 2012, A&A, 545,
A32
[] Alonso, A., Arribas, S., Martinez-Roger, C. 1999, A&AS, 140, 261
[] Alonso, A., Arribas, S., Martinez-Roger, C. 2001, A&A, 376, 1039
[] Alves-Brito, A., Melèndez, J., Asplund, M., Ramìrez, I.,
Yong, D. 2010, A&A, 513, A35
[] Anders, E., Grevesse, N. 1989, GeCoA, 53, 197
[] Arnould, M., Goriely, S., Jorissen, A. 1999, A&A, 347, 572
[] Barbuy, B., Hill, V., Zoccali, M. et al. 2013, A&A, 559, A5
[] Bastian, N., Lardo, C. 2018, ARA&A, 56, 83
[] Battistini, C., Bensby, T. 2016, A&A, 586, A49
[] Beirão, P., Santos, N.C., Israelian, G., Mayor, M. 2005, A&A,
438, 251
[] Bensby, T., Feltzing, S., Lundström, I., Ilyin, I. 2005, A&A,
433, 185
[] Bensby, T., Feltzing, S., Oey, M.S. 2014, A&A, 562, A71
[] Bensby, T., Feltzing, S., Gould, A. et al. 2017, A&A, 605, A89
[] Böhm-Vitense, E. 1979, ApJ, 234, 521
[] Bragaglia, A., Carretta, E., Gratton, R.G. et al. 2001, AJ, 121, 327
[] Bragaglia, A., Carretta, E., D'Orazi, V. et al. 2017, A&A, 607, A44
[] Brewer, J.M., Fischer, D.A., Valenti, J.A., Piskunov, N. 2016,
ApJS, 225, 32
[] Carretta, E. 2006, AJ, 131, 1766
[] Carretta, E. 2015, ApJ, 810, 148
[] Carretta, E., Bragaglia, A. 2018, A&A, 614, A109
[] Carretta, E., Bragaglia, A. 2019, A&A, 627, L7
[] Carretta, E., Bragaglia, A. 2021, A&A, 646, A9
[] Carretta, E., Bragaglia, A. 2022a, A&A, 659, A122
[] Carretta, E., Bragaglia, A. 2022b, A&A, 660, L1
[] Carretta, E., Gratton R.G., Bragaglia, A., Bonifacio, P.,
Pasquini, L. 2004, A&A, 416, 925
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2006, A&A, 450, 523
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2007a, A&A, 464, 967
[] Carretta, E., Recio-Blanco, A., Gratton, R.G., Piotto, G.,
Bragaglia, A. 2007b, ApJ, 671, L125
[] Carretta, E., Bragaglia, A., Gratton, R.G., Lucatello, S. 2009a,
A&A, 505, 139
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2009b,
A&A, 505, 117
[] Carretta, E., Bragaglia, A., Gratton, R.G., D'Orazi, V., Lucatello,
S. 2009c, A&A, 508, 695
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2010a, A&A, 516, 55
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2010b, ApJ, 712, L21
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2010c, A&A, 520, 95
[] Carretta, E., Lucatello, S., Gratton, R.G., Bragaglia, A., D'Orazi,
V. 2011, A&A, 533, 69
[] Carretta, E., Bragaglia, A., Gratton, R.G., Lucatello, S.,
D'Orazi, V. 2012, ApJ, 750, L14
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2013, A&A, 557, A138
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2014a, A&A, 561,
A87
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2014b, A&A, 564,
A60
[] Carretta, E., Bragaglia, A., Gratton, R.G. et al. 2015, A&A, 578,
A116
[] Carretta, E., Bragaglia, A., Lucatello, S. et al. 2017, A&A, 600, A118
[] Chen, J., Ferraro, F. R., Cadelano, M., et al. 2021, Nature Astronomy, 5, 1170
[] Cohen, J.G., Kirby, E.N. 2012, ApJ, 760, 86
[] Couch, R.G., Schmiedekamp, A.B., Arnett, W.D. 1974, ApJ, 190, 95
[] Cordero, M.J., Pilachowski, C.A., Johnson C.I. et al. 2014, ApJ, 780, 94
[] D'Antona, F., Caloi, V., Montalbán, J., Ventura, P., Gratton, R.
2002, A&A, 395, 69
[] D'Antona, F., Vesperini, E., D'Ercole, A. et al. 2016, MNRAS, 458, 2122
[] da Silveira, C.R., Barbuy, B., Friaça, A.C.S. et al. 2018,
A&A, 614, A149
[] Delgado Mena, E., Tsantaki, M., Adibekyan, V. Zh. et al. 2017,
A&A, 606, A94
[] Duong, L., Asplund, M., Nataf, D.M. et al. 2019, MNRAS, 486, 3586
[] Duong, L., Asplund, M., Nataf, D.M., Freeman, K.C., Ness, M. 2019,
MNRAS, 486, 5349
[] Ferraro, F. R., Carretta, E., Corsi, C. E., et al. 1997, A&A, 320, 757
[] Forsberg, R., Jönsson, H., Ryde, N., Mattucci, F. 2019, A&A,
631, A113
[] Gratton, R.G. 1988, Rome Obs. Preprint Ser., 29
[] Gratton, R.G., Carretta, E., Eriksson, K., Gustafsson, B. 1999,
A&A, 350, 955
[] Gratton, R.G., Bonifacio, P., Bragaglia, A., et al. 2001,
A&A, 369, 87
[] Gratton, R.G., Carretta, E., Claudi, R., Lucatello, S.,
Barbieri, M. 2003, A&A, 404, 187
[] Gratton, R.G., Sneden, C., Carretta, E. 2004, ARA&A, 42, 385
[] Gratton, R.G., Lucatello, S., Bragaglia, A. et al. 2006, A&A,
455, 271
[] Gratton, R.G., Lucatello, S., Bragaglia, A. et al. 2007, A&A,
464, 953
[] Gratton, R.G., Lucatello, S., Carretta, E. et al. 2011, A&A, 534, 123
[] Gratton, R.G., Carretta, E., Bragaglia, A. 2012a, A&ARv, 20, 50
[] Gratton, R.G., Lucatello, S., Carretta, E. et al. 2012b, A&A, 539, 19
[] Gratton, R.G., Lucatello, S., Sollima, A. et al. 2013, A&A, 549, A41
[] Gratton, R.G., Lucatello, S., Sollima, A. et al. 2014, A&A, 563, A13
[] Gratton, R.G., Lucatello, S., Sollima, A. et al. 2015, A&A, 573, A92
[] Gratton, R.G., Bragaglia, A., Carretta, E. et al. 2019, A&ARv, 27, 8
[] Harris, W. E. 2010, arXiv:1012.3224
[] Horta, D., Schiavon, R.P., Mackereth, J.T. et al. 2020, MNRAS, 493,
3363
[] Horta, D., Schiavon, R.P., Mackereth, J.T. et al. 2021, MNRAS, 500, 1385
[] Ishigaki, M.N., Chiba, M., Aoki, W. 2012, ApJ, 753, 64
[] Ishigaki, M.N., Aoki, W., Chiba, M. 2013, ApJ, 771, 67
[] James, G., François, P., Bonifacio, P. et al. 2004a, A&A, 427, 825
[] James, G., François, P., Bonifacio, P. et al. 2004b, A&A, 414,
1071
[] Johnson, C.I., Pilachowski, C.A. 2010, ApJ, 722, 1373
[] Johnson, C.I., Rich, R.M., Kobayashi, C., Kunder, A., Koch, A. et
al. 2014, AJ, 148, 67
[] Jönsson, H., Ryde, N., Schultheis, M., Zoccali, M. 2017, A&A,
598, A101
[] Karakas, A.I., Lattanzio, J..C. 2003, PASA, 20, 279
[] Kobayashi, C., Karakas, A.I., Lugaro, M. 2020, ApJ, 900, 179
[] Kolomiecas, E., Dobrovolskas, V., Kuc̆inskas, A., Bonifacio,
P., Korotin, S. 2022, A&A, 660, A46
[] Kurucz, R.L. 1993, CD-ROM 13, Smithsonian Astrophysical
Observatory, Cambridge
[] Lai, D.K., Bolte, M., Johnson, J.A. et al. 2008, ApJ, 681, 1524
[] Lanzoni, B., Mucciarelli, A., Origlia, L. et al. 2013, ApJ, 769,
107 (L13)
[] Lemasle, B., de Boer, T.J.L, Hill, V. et al. 2014,
A&A, 572, 88
[] Letarte, B., Hill, V., Tolstoy, E. et al. 2010, A&A, 523, 17
[] Lomaeva, M., Jönsson, H., Ryde, N., Schulteis, M., Thorsbro, B.
2019, A&A, 625, A141
[] Lucey, M., Hawkins, K., Ness, M. et al. 2019, MNRAS, 488, 2283
[] Lucey, M., Hawkins, K., Ness, M. et al. 2022, MNRAS, 509, 122
[] Magain, P. 1984, A&A, 134, 189
[] Marino, A.F., Milone, A., Piotto, G., et al. 2011b, ApJ, 731, 64
[] Marino, A.F., Villanova, S., Milone, A.P. et al. 2011a, ApJ, 730,
L16
[] Massari, D., Koppelman, H.H., Helmi, A. 2019, A&A, 630, L4
[] Mészáros, S., Masseron, T., García-Hernández, D.A. et al.
2020, MNRAS, 492, 1641 (M20)
[] Milone, A.P., Marino, A.F., Renzini, A. et al. 2018, MNRAS, 481,
5098
[] Minelli, A., Mucciarelli, A., Massari, D., et al. 2021a, ApJL, 918, L32
[] Minelli, A., Mucciarelli, A., Romano, D., et al.
2021b, ApJ, 910, 114
[] Mucciarelli, A., Bellazzini, M., Ibata, R. et al. 2012, MNRAS, 426, 2889
[] Mucciarelli, A., Bellazzini, M., Merle, T. et al. 2015, ApJ, 801, 68
[] Muñoz, C., Villanova, S., Geisler, D. et al. 2017, A&A, 605,
A12
[] Muñoz, C., Geisler, D., Villanova, S. et al. 2018, A&A, 620,
A96
[] Muñoz, C., Villanova, S., Geisler, D. et al. 2020, MNRAS, 492,
3742
[] Mura-Guzmán, A., Villanova, S., Muñoz, C. 2018, MNRAS,
474, 4541
[] Myeong, G.C., Vasiliev, E., Iorio, G. 2019, MNRAS, 488, 1235
[] Nardiello, D., Libralato, M., Piotto, G. et al. 2018, MNRAS, 481,
3382
[] Neves, V., Santos, N.C., Sousa, S.G., Correia, A.C.M., Israelian,
G. 2009, A&A, 497, 563
[] Prantzos, N., Charbonnel, C., Iliadis, C. 2017, A&A, 608,
A28
[] R Core Team (2022). R: A language and environment for statistical
computing. R Foundation for Statistical Computing, Vienna, Austria.
URL https://www.R-project.org/
[] Reddy, B.E., Tomkin, J., Lambert, D.L., Allende Prieto, C. 2003,
MNRAS, 340, 304
[] Reddy, B.E., Lambert, D.L., Allende Prieto, C 2006, MNRAS, 367,
1329
[] Reggiani, H., Melèndez, J., Kobayashi, C., Karakas, A., Placco,
V. 2017, A&A, 608, A46
[] Roederer, I.U., Presto, G.W., Thompson, I.B. et al. 2014, AJ, 147,
136
[] Schiappacasse-Ulloa, J., Lucatello, S. 2023, MNRAS, 520, 5938
[] Simmerer, J., Sneden, C., Cowan, J.J. et al. 2004, ApJ, 617, 1091
[] Smith, V.V., Bizyaev, D., Cunha, K. et al. 2021, AJ, 161, 254
[] Sneden, C., Kraft, R.P., Shetrone, M.D. et al. 1997, AJ, 114, 1964
[] Sneden, C., Cowan, J.J., Lawler, J.E. et al. 2003, ApJ, 591, 936
[] Tinsley, B.M. 1979, ApJ, 229, 1046
[] Tolstoy, E., Hill, V., Tosi, M. 2009, ARA&A, 47, 371
[] Travaglio, C., Gallino, R., Arnone, E. et al. 2004, ApJ, 601, 864
[] Valle, G., Dell'Omodarme, M., Tognelli, E. 2022, A&A, 658, A141
[] Vasiliev, E., Baumgardt, H. 2021, MNRAS, 505, 5978
[] Ventura, P., D'Antona, F., Di Criscienzo, M. et al. 2012, ApJ, 761, L30
[] Villanova, S., Piotto, G., Gratton, R.G. 2009, A&A, 499, 755
[] Villanova, S., Geisler, D., Piotto, G., Gratton, R.G. 2012, ApJ,
748, 62
[] Wallerstein, G., Kovtyukh, V.V., Andrievsky, S.M. 2007, AJ, 133,
1373
[] Worley, C.C., Hill, V., Sobeck, J., Carretta, E. 2013, A&A, 553,
A47
[] Yong, D., Grundahl, F., Nissen, P.E., Jensen, H.R., Lambert, D.L.
2005, A&A, 438, 875
[] Yong, D., Mélendez, J., Grundahl, F. et al. 2013, MNRAS, 434,
3452
§ ABUNDANCES OF INDIVIDUAL STARS
Abundances for individual stars in NGC 6388 are listed in
Table <ref> for proton-capture elements,
Table <ref> for α-elements, Table <ref> for
elements of the Fe-group, and Table <ref> for species from
neutron-capture processes measured on UVES spectra. Finally, in
Table <ref> we list the abundances of the neutron-capture elements
Zr i and Ba ii that could be derived also from GIRAFFE spectra. For
this table, as well as for tables relative to light elements, α-elements
and Fe-peak elements (Tables A.1, A.2, A.3, and A.5), only an excerpt is
provided here as a guide to their content. Complete tables can be found at CDS
Strasbourg.
§ REFERENCES FOR STUDIES LISTED IN FIGURES <REF>,
<REF>, <REF>, AND <REF>
In Figures <ref>, <ref>, <ref>, and
<ref> we compare the average abundance ratios
derived in NGC 6388 for some species to a sample of GCs and field stars, both in the disc and in
the bulge, from a number of studies.
Not all the elements were available for GC stars, whereas all species were
sampled in the abundance analyses of comparison field stars in the Milky Way.
The GCs in our FLAMES survey, ordered by increasing metallicity, are listed in
Table <ref> (we give the [Fe/H] value from UVES spectra used
in the plots, from Carretta et al. 2009c or from individual papers, in parentheses).
We note that some elements were analysed in the same study for many GCs, whereas
other elements were obtained in the individual papers.
To GCs homogeneously analysed in our FLAMES survey (or with similar procedures),
we added four metal-rich GCs from the same group of investigators (see
Table <ref>). For field Milky Way stars (in halo, disc, or bulge components), we used many studies, as detailed
in the same table.
|
http://arxiv.org/abs/2307.07575v1 | 20230714183904 | A Quantitative Approach to Predicting Representational Learning and Performance in Neural Networks | [
"Ryan Pyle",
"Sebastian Musslick",
"Jonathan D. Cohen",
"Ankit B. Patel"
] | cs.LG | [
"cs.LG",
"cs.NE"
] |
A key property of neural networks (both biological and artificial) is how they learn to represent and manipulate input information in order to solve a task. Different types of representations may be suited to different types of tasks, making identifying and understanding learned representations a critical part of understanding and designing useful networks. In this paper, we introduce a new pseudo-kernel based tool for analyzing and predicting learned representations, based only on the initial conditions of the network and the training curriculum. We validate the method on a simple test case, before demonstrating its use on a question about the effects of representational learning on sequential single versus concurrent multitask performance. We show that our method can be used to predict the effects of the scale of weight initialization and training curriculum on representational learning and downstream concurrent multitasking performance.
§ INTRODUCTION
One of, if not the, most fundamental question in neural networks research is how representations are formed through learning. In machine learning, this is important for understanding how to construct systems that learn more efficiently and generalize more effectively <cit.>. In cognitive science and neuroscience, this is important for understanding how people acquire knowledge <cit.>, and how this impacts the type of processing (e.g., serial and control-dependent vs. parallel and automatic) used in performing task(s) <cit.>. One important focus of recent work has been on the kinds of inductive biases that influence how learning impacts representations (e.g., weight initialization, regularization in learning algorithms, etc. <cit.>) as well as training curricula <cit.>. This is frequently studied using numerical methods, by implementing various architectural or learning biases and then simulating the systems to examine how these impact representational learning <cit.>. Recently, <cit.> introduced a novel analytic approach to this problem, which can be used to predict important inductive bias properties from the network's initialization. Here we extend this approach by combining it with a neural tangent kernel-based analysis in order to qualitatively predict the kinds of representations that are learned, and the consequences this has for processing. We provide an example that uses a neural network model to address how people acquire simple tasks, and the extent to which this leads to serial, control-dependent versus parallel, automatic processing and multitasking capability <cit.>. We expand upon theoretical results introduced in <cit.>, combining them with a neural tangent kernel analysis <cit.>, which allows for prediction of the inductive bias (and resulting representations learned by the network and downstream task performance) from the initial conditions of the network and the training regime. In the remainder of this section, we provide additional background that motivates the example we use. Then, in the sections that follow, we describe the analysis method, its validation in a benchmark setting, and the results of applying it to a richer and more complex example.
Shared versus separated representations and flexibility versus efficiency.
One of the central findings from machine learning research using neural networks is that cross-task generalization (sometimes referred to as transfer learning) can be improved by manipulations that promote the learning of shared representations—that is, representations that capture statistical structure that is shared across tasks <cit.>. One way to do so is through the design of appropriate training regimens and/or learning algorithms (e.g., multi-task learning and/or meta-learning; <cit.>). Another is through the initialization of network parameters; for example, it is known that small random initial weights help promote the learning of shared structure, by forcing the network to start with what amounts to a common initial representation for all stimuli and tasks and then differentiate the representations required for specific tasks and/or stimuli under the pressure of the loss function <cit.>. Interestingly, while shared representations support better generalization and faster acquisition of novel but similar tasks, this comes at a cost of parallel processing capacity, a less commonly considered property of neural networks that determines how many distinct tasks the system can perform at the same time—that is, its capacity for concurrent multitasking <cit.>. Note that our use of the term “multitasking” here should not be confused with the term “multi-task” learning: the former refers to the simultaneous performance of multiple tasks, while the latter refers to the simultaneous acquisition of multiple tasks. These are in tension: if two tasks share representations, they risk making conflicting use of them if the tasks are performed at the same time (i.e., within a single forward-pass); thus, the representations can be used safely only when the tasks are executed serially. This potential for conflict can be averted if the system uses separate representations for each task, which is less efficient but allows multiple tasks to be performed in parallel. This tension between shared vs. separated representations reflects a more general tradeoff between the flexibility afforded by shared representations (more rapid learning and generalization) but at the expense of serial processing, and the efficiency afforded by separated, task-dedicated representations (parallel processing; i.e., multitasking) but at the costs of slower learning and poorer generalization (i.e., greater rigidity; <cit.>). While this can be thought of as analogous to the tension between interpreted and compiled procedures in traditional symbolic computing architectures, it has not (yet) been widely considered within the context of neural network architectures in machine learning.
Shared versus separated representations and control-dependent versus automatic processing.
The tension between shared and separated representations also relates to a cornerstone of theory in cognitive science: the classic distinction between control-dependent and automatic processing <cit.>. The former refers to “intentional,” “top-down,” processes that are assumed to rely on control for execution (such as mental arithmetic, or searching for a novel object in a visual display), while the latter refers to processes that occur with less or no reliance on control (from reflexes, such as scratching an itch, to more sophisticated processes such as recognizing a familiar object or reading a word). A signature characteristic of control-dependent processes is the small number of such tasks that humans can perform at the same time—often only one—in contrast to automatic processes that can be performed in parallel (motor effectors permitting). The serial constraint on control-dependent processing has traditionally been assumed to reflect limitations in the mechanism(s) responsible for control itself, akin to the limited capacity imposed by serial processing in the core of a traditional computer <cit.>. However, recent neural network modeling work strongly suggests an alternative account: that constraints in control-dependent processing reflect the imposition of serial execution on processes that rely on shared representations <cit.>. That is, constraints associated with control-dependent processing reflect the purpose rather than an intrinsic limitation of control mechanisms. This helps explain the association of control with flexibility of processing <cit.>: flexibility is afforded by shared representations, which require control to insure they are not subject to conflicting use by competing processes. It also explains why automaticity—achieved through the development of task-dedicated representations—takes longer to acquire and leads to less generalizable behavior <cit.>. Together, these explain the canonical trajectory of skill acquisition from dependence on control to automaticity: When people first learn to perform a novel task (e.g., to type, play an instrument, or drive a car) they perform it in a serial, control-dependent manner, that precludes multitasking. Presumably this is because they exploit existing representations that can be “shared” to perform the novel task as soon as possible, but at the expense of dependence on control. However, with extensive practice, they can achieve efficient performance through the development of separated, task-dedicated representations that diminish reliance on control and permit performance in parallel with other tasks (i.e., concurrent multitasking; <cit.>).
These ideas have been quantified in mathematical analyses and neural network models, and fit to a wide array of findings from over half a century of cognitive science research <cit.>. However, the specific conditions that predispose to, and regulate the formation of shared versus separated representations are only qualitatively understood, and theoretical work has been restricted largely to numerical analyses of learning and processing in neural network models. Some methods have sought to quantify the degree of representation sharing between two tasks in terms of correlations between activity patterns for individual tasks <cit.> in order to predict multitasking capability. Other methods, that quantify the representational manifold of task representations <cit.> have been applied to characterize multitasking capability <cit.>. However, while these methods provide a snapshot of representation sharing at a given point in training, they do not provide direct or analytic insight into the dynamics of learning shared versus separated representations, nor how the inductive bias of a system may affect the representations learned.
In this article, we expand upon the ideas introduced in <cit.> to show how the initial condition of a network and the training regime to which it will be subjected can predict the
implicit bias and thus the kinds of representations it will learn (e.g., shared vs. separated) and the corresponding patterns of performance it will exhibit in a given task setting. This offers a new method to analyze how networks can be optimized to regulate the balance between flexibility and efficiency. The latter promises to have relevance both for understanding how this is achieved in the human brain, and for the design of more adaptive artificial agents that can function more effectively in complex and changing environments.
Task structure and network architecture.
For the purposes of illustration and analysis, we focus on feedforward neural networks with three layers of processing units that were trained to perform sets of tasks involving simple stimulus-response mappings.
Each network was comprised of an input layer, subdivided into pools of units representing inputs along orthogonal stimulus dimensions (e.g., representing colors, shapes, etc.), and an additional pool used to specify which task to perform. All of the units in the input layer projected to all of the units in the hidden layer, which all projected to all units in the output layer, with an additional projection from the task specification input pool to the output layer. The output layer, like the input layer, was divided into pools of units, in this case representing outputs along orthogonal response dimensions (e.g., representing manual, verbal, etc.).
Networks were trained in an environment comprised of several feature groups (e.g., shape, size, etc.) and response groups (e.g., verbal, manual, etc.) corresponding to the stimulus and response dimensions along which the pools of input and output units of the networks were organized. Each network was trained to perform a set of tasks, in which each task was defined by a one-to-one mapping from the inputs in one pool (i.e., along one stimulus feature dimension) to the outputs in a specified pool (i.e., along one response dimension), ignoring inputs along all of the other feature dimensions and requiring null outputs along all of the other response dimensions.[This corresponds to the formal definition of tasks in a task space as described in <cit.>.] Training and testing could be performed for one task at a time (“single task” conditions), by specifying only that task in the task input pool and requiring the correct output over the task-relevant response dimension and a null response over all others; or with two or more tasks in combination (“multitasking” conditions), in which the desired tasks were specified over the task input units, and the network was required to generate correct responses over the relevant response dimensions and null responses for all others. In all cases, an input was always provided along every stimulus dimension, and the network had to learn to ignore those that were not relevant for performing the currently specified task(s). In each case, the question of interest was how initialization and learning impacted the final connection weights to and from the hidden layer, the corresponding representations the networks used to perform each task, and the patterns of performance in single task and multitasking conditions. Specifically, we were interested in the extent to which the networks learned shared versus separated representations for sets of tasks that shared a common feature dimension; and the extent to which the analytic methods of interest were able to predict, from the initial conditions and task specifications, the types of representations learned, and the corresponding patterns of performance (e.g., speed of learning and multitasking capability). We evaluated the evolution of representations over the course of learning in two ways: using the analytic techniques of interest, and using a recently developed visualization tool to inspect these, each of which we describe in the two sections that follow.
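To make this setup concrete, a minimal sketch of the input encoding and feedforward architecture just described is given below (the dimensions, variable names, and output nonlinearity are illustrative choices of ours, not the exact settings used in the experiments reported later).
```python
import torch
import torch.nn as nn

G_IN, G_OUT, M, H = 3, 3, 4, 100   # stimulus groups, response groups, features per group, hidden units
T = G_IN * G_OUT                   # one task unit per (stimulus group, response group) pairing

class TaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stim_hidden = nn.Linear(G_IN * M, H)   # stimulus pools -> hidden
        self.task_hidden = nn.Linear(T, H)          # task units -> hidden
        self.hidden_out = nn.Linear(H, G_OUT * M)   # hidden -> output pools
        self.task_out = nn.Linear(T, G_OUT * M)     # additional task -> output projection

    def forward(self, stim, task):
        h = torch.sigmoid(self.stim_hidden(stim) + self.task_hidden(task))
        return torch.sigmoid(self.hidden_out(h) + self.task_out(task)), h

# an input is provided along every stimulus dimension (one-hot feature per pool),
# while the task pool specifies which mapping(s) to perform
stim = torch.zeros(1, G_IN * M)
stim[0, [0, M + 2, 2 * M + 1]] = 1.0
task = torch.zeros(1, T)
task[0, 0] = 1.0                   # single-task condition; activate two units for multitasking
output, hidden = TaskNet()(stim, task)
```
Returning the hidden activations alongside the output makes it straightforward to feed them to the representational analyses described next.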
§ UNDERSTANDING AND VISUALIZING LEARNING DYNAMICS
§.§ Analyzing Learning Dynamics Using the Neural Tangent Kernel
§.§.§ The Neural Tangent Kernel (NTK)
Understanding the learning dynamics of a neural network (NN) can be done using a framework known as the NTK.
This is based on a kernel function K(x,x') that represents the 'similarity' of inputs x and x' (from here on, we use x from the training set and x' from the test set); that is, how much influence each individual sample x from the training set has on the output decision of the NN on a test sample x'. This is embodied in a kernel expansion <cit.>.
Using gradient descent (GD) on a NN with scalar learning rate η and a P × 1 vector of real parameters θ, the parameter update can be written as
θ(t+1) = θ(t) - ηdL(θ)/dθ.
Taking the gradient flow approximation η→ 0 (e.g. as the step size approaches 0, resulting in a continuous flow rather than discrete steps) we have
θ̇ = - dL(θ)/dθ
Assuming our loss depends only on the network output ŷ, we can rewrite this as a sum over N training samples 𝒟 := { (x_m, y_m) }:
θ̇ = - dL/dŷ · dŷ/dθ = - ∑_m ∈𝒟 dL/dŷ_m · dŷ_m/dθ := ∑_m ∈𝒟 ϵ_m ϕ_m,
where ϵ_m ∈ℝ is the loss sensitivity and ϕ_m is the P × 1 vector of NTK features (e.g. dŷ_m/dθ_p for each parameter p) for sample x_m.
How does the actual NN prediction function change with learning over time? We can answer this by taking total time derivatives yielding
ŷ̇(θ) = dŷ(θ)/dθ^T θ̇ = - dŷ(θ)/dθ^T dL(θ)/dŷ · dŷ(θ)/dθ,
where the kernel function K(x,x';θ) := dŷ(x;θ)/dθ^T dŷ(x';θ)/dθ. Note that this means the network's time evolution is governed by a kernel function, made up of the NTK at time t with parameters θ=θ(t) and kernel weights dL(θ)/dŷ.
Expanding as a sum over the training set we have
ϵ̇_n(t) = - ∑_m ∈𝒟 K_mn(t) ϵ_m(t) , ∀ n ∈𝒟
where we have assumed a mean squared error loss for L, which results in the loss sensitivities becoming the prediction errors ϵ_m(t) := -dL(θ)/dŷ_m = y_m - ŷ_m(t). The coefficients coupling the errors are K_mn(t) := ϕ_m(t) ·ϕ_n(t), also known as the elements of the N × N NTK matrix K(t) = [K_mn(t)].
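To make these quantities concrete, a minimal sketch of how the NTK features ϕ_m and the empirical NTK matrix can be computed for a small scalar-output network with automatic differentiation is given below (PyTorch is our choice here; any autodiff framework would do).
```python
import torch

def ntk_features(model, x):
    """phi(x) = d yhat(x)/d theta, flattened across all parameters (scalar-output model)."""
    y_hat = model(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(y_hat, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def ntk_matrix(model, X):
    """N x N empirical NTK, K_mn = phi(x_m) . phi(x_n), at the current parameters."""
    Phi = torch.stack([ntk_features(model, x) for x in X])
    return Phi @ Phi.T

# toy usage with illustrative dimensions
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
X = torch.randn(8, 4)
K = ntk_matrix(net, X)   # fixed in the kernel regime, time-dependent in the adaptive regime
```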
If the model is close to linear in the parameters θ (i.e., we are in the so-called kernel or lazy training regime <cit.>, where the NN's basis functions are fixed for all time), then the NTK will not change much during training, allowing the entire learning to be readily interpretable as a linear kernel machine <cit.>. In this case, each update to the model is fully interpretable under kernel theory, with each data point influencing how the model evolves (see Fig. <ref>).
But what if the model is not close to the linear/lazy/kernel regime (i.e., it is in the so-called adaptive regime?[All modern NNs perform best in the adaptive regime, and there is a significant performance gap between kernel and adaptive regime models <cit.>. The power of the adaptive regime is that it allows the NN to learn, based on the data, the basis functions that are most useful. In contrast, the kernel regime has fixed basis functions solely determined by architecture and initialization, with no dependence on the data or the task being learned.] In this case the NN's basis functions do change over time, rendering the NTK function time-dependent (in which case we denote it as K(x,x',t)). Tracking the NTK along the learning trajectory/path of the model parameters θ(t) yields the Path-integrated NTK <cit.>.
§.§.§ Path-Integrated NTK (PNTK)
Let the PNTK be denoted by P(x,x';t). Then all NNs trained in a supervised setting with gradient descent result in a (pseudo-)kernel machine[The base NTK results in a true kernel method, as the predictions ŷ can be written to solely depend on a weighted sum of the training data, with fixed kernel weights. In contrast, the PNTK is a pseudo-kernel: the path-weighted loss term brings a dependence on the test point (x') to the kernel weights so that they are no longer constant, violating a requirement to be a true kernel.] of the form
ŷ(x) = ∑_m ∈𝒟 P(x,x_m) + ŷ_0(x),
where the initial predictions ŷ_0(x) := ŷ(x,t=0) and P is the PNTK pseudo-kernel of the form <cit.>:
P(x,x') := ∫_0^t L'(y(x'), ŷ(x', t')) dŷ(x,t')/dθ · dŷ(x',t')/dθ dt' = ∫_0^t L'(y(x'), ŷ(x', t')) K(x,x',t') dt'
At intermediate learning times we have
ŷ(x,t) = ŷ(x,t=0) - ∫_0^t dt' ∑_m ∈𝒟 L'(y_m, ŷ_m(θ(t'))) K(x,x_m;θ(t')),
where our loss function is L and the loss sensitivity L' is taken with respect to the predictions ŷ (i.e., L' = dL/dŷ).
Thus, the PNTK shows that the NN predictions can be expressed in terms of the NTK, weighted by the loss sensitivity function, along the entire learning path/trajectory.
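In practice the path integral can be approximated by summing the loss-weighted NTK over discrete gradient-descent steps; a minimal sketch for a scalar-output model trained with full-batch SGD on a sum-of-squares loss is given below (variable names are ours).
```python
import torch

def ntk_features(model, x):   # phi(x) = d yhat(x)/d theta, as in the NTK sketch above
    grads = torch.autograd.grad(model(x.unsqueeze(0)).squeeze(), list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def train_and_accumulate_pntk(model, X, y, X_test, lr=1e-2, steps=500):
    """Return P[i, m], the discrete PNTK: sum over steps of lr * eps_m(t) * K(x_test_i, x_m, t)."""
    P = torch.zeros(len(X_test), len(X))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            eps = y - model(X).squeeze(-1)                    # prediction errors at time t
        Phi_tr = torch.stack([ntk_features(model, x) for x in X])
        Phi_te = torch.stack([ntk_features(model, x) for x in X_test])
        P += lr * (Phi_te @ Phi_tr.T) * eps.unsqueeze(0)      # loss-weighted NTK contribution
        opt.zero_grad()
        loss = 0.5 * ((y - model(X).squeeze(-1)) ** 2).sum()
        loss.backward()
        opt.step()
    return P   # row i decomposes the change in test prediction i over the training samples
```
Summing row i of P recovers, approximately, the total change in the corresponding test prediction over training, matching the kernel-expansion view given above.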
In this article, we use the NTK and PNTK to analyze how representations in the hidden layer of the network evolve during learning of the task(s). The NTK allows a decomposition of the instantaneous changes in the predictions of the NN over training inputs and training time (e.g. how ̇̂y can be decomposed over x_m ∈𝒟 at any point in time), whereas the PNTK provides an integral over those predicted effects over a specified time window (typically from 0 to t). We can analyze the NTK and PNTK, grouped in various ways (e.g., by input dimensions, tasks, and/or output dimensions) in order to fully characterize how a NN undergoing training acquires representations of the relevant information over the course of learning.
§.§ Visualizing Learning Dynamics Using M-PHATE
We used Multislice PHATE (or M-PHATE; <cit.>) to visualize the evolution of representational manifolds over the course of learning. M-PHATE is a dimensionality reduction algorithm for time-series data, that extends the successful PHATE algorithm <cit.> to visualize internal network geometry, and that can be used to capture temporal dynamics. It does so by using longitudinal (time series) data to generate a multislice graph, and then uses PHATE to dimensionally reduce the pairwise affinity similarity kernel of the graph. By applying this to the hidden unit activations of neural networks, it can be used to visualize the representational manifolds over the course of training. Furthermore, by grouping analyses—for example, according to feature dimensions, tasks, or response dimensions—M-PHATE can be used to reveal how representations evolve that are sensitive to these factors.
It is important to note that the M-PHATE analysis works by analyzing the hidden layer activities. In contrast, the NTK kernel computes similarities using the whole network's gradients. This means that the results of the two methods are not directly comparable, although qualitative comparisons can be made.
§ VALIDATION OF NTK AND PNTK ANALYSES OF LEARNING IN SIMPLE NETWORKS
We begin with a validation of the NTK and PNTK analyses, by using them to predict the evolution of representations in a linear network that is tractable to standard analytic methods, the results of which can be used as a benchmark. Specifically, we use the results from Saxe, McClelland, and Ganguli (henceforth SMG; <cit.>), which show that for a linear NN with a bottleneck hidden layer, the network will learn representations over that layer that correspond to singular values of the weight matrix, in sequence, each learned with a rate proportional to the magnitude of the singular value, up until the dimensionality of the bottleneck layer. Here, we show that NTK and PNTK analyses can be used to qualitatively predict this behavior from the initial conditions of the network (i.e., its initial weights and training set).
We take a simple case for illustrative purposes, using a network with an input dimensionality of 4, a bottleneck (hidden layer) dimensionality of 3, and an output dimensionality of 5. A target linear transformation W_target is generated, and then used to generate random training data y = W_target x, where x is randomly drawn from white noise. We call this the SMG task, and replicate their results, showing behavior qualitatively similar to that reported in their previous work (Fig <ref>).
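A minimal NumPy sketch of this benchmark is given below (our own re-implementation, with an arbitrary seed and learning rate): it draws a random W_target, trains the linear bottleneck network with gradient descent, and records the singular values of the end-to-end map W2 W1.
```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, N = 4, 3, 5, 200
W_target = rng.standard_normal((d_out, d_in))
X = rng.standard_normal((N, d_in))                 # white-noise inputs
Y = X @ W_target.T                                 # targets y = W_target x

W1 = 1e-3 * rng.standard_normal((d_hid, d_in))     # small random initial weights
W2 = 1e-3 * rng.standard_normal((d_out, d_hid))

eta, sv_history = 0.05, []
for step in range(4000):
    err = Y - X @ (W2 @ W1).T                      # N x d_out prediction errors
    gW2 = -(err.T @ (X @ W1.T)) / N                # gradients of (1/2) * mean squared error
    gW1 = -(W2.T @ err.T @ X) / N
    W1 -= eta * gW1
    W2 -= eta * gW2
    sv_history.append(np.linalg.svd(W2 @ W1, compute_uv=False))
# the singular values of W2 W1 rise toward those of (the rank-3 approximation of)
# W_target one mode at a time, larger modes first, as described above
```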
What can a PNTK analysis tell us over and above the original analysis? We consider two approaches to answering this question. The first exploits the fact that we are considering a simple linear system, and thus inputs can be broken down by input dimension. This allows us to compute a PNTK value that predicts how a change in each individual input dimension affects each individual output dimension across examples, which can then be compared to the true W_target (Fig <ref>). We focus on two time points: t=190, that falls in the middle of the period during which the first singular vector is being acquired; and t=770, that falls in the middle of the period during which the second singular vector is being acquired. The PNTK analysis confirms that the first singular vector of W_target is being learned at t=190, the second is being learned at t=770, and the first singular vector is well learned (e.g. learning acquired is significant) by t=190,[What we mean by a singular vector 'is being learned' at time t is that it is the top singular vector of the NTK(t), while 'is well learned' means that it singular value (of PNTK(t)) is significant (compared to the maximum from W). This means that a well learned singular vector can also continue to be learned, while a singular vector being learned may or may not be well learned at a particular time. See Fig <ref> markers for a visualization relating to this] while both are well learned (e.g. the first is also remembered) by t=770.
The foregoing analysis, although easy and clear, is limited to linear systems. To assess the applicability of our analyses to non-linear networks, in our second approach we numerically compare the singular vectors of the full NTK and PNTK (at specific time points t=190,770) against the projections of the data x into the coordinate system spanned by the singular vectors of W_target. This allows us to compare how much each data point contributes to the NTK or PNTK compared to how much it would contribute if it was perfectly learning the modes of W_target, an approach that works for linear or nonlinear systems. These results are comparable to those of the analysis of the linear system (Fig <ref>), providing support for the generality of the NTK and PNTK analyses to non-linear networks.
Finally, we conduct another analysis using the PNTK, examining how the learning of each mode changes progressively over time. The PNTK allows us to examine the patterns that are being learned at each time point, by examining the eigenvectors of the NTK. Consistent with the previous work <cit.>, the analysis confirms that eigenvectors are learned one at a time, with larger ones learned first. More interestingly, secondary learning (i.e., the mode that is being acquired at the second fastest rate, corresponding to the second eigenvalue/vector pair of the NTK) reveals relevant patterns, with transitions occurring at times dictated by the primary learning switching singular vectors (e.g., when the primary singular vector is learned and primary learning switches to the second singular vector, the secondary learning switches from the second singular vector to the third singular vector; Fig <ref>).
These results align with changes in the performance of the network. Fig <ref> shows the primary learning of eigenvectors and eigenvalues over time along with model performance. The two are closely linked: changes in the NTK singular values anticipate the model's singular vectors aligning with the true task, with accompanying decreases in task loss.
In summary, the PNTK analysis is consistent with the results reported by SMG for a linear network, using a technique that is extendable to non-linear networks. It is important to emphasize that the PNTK analysis is predictive, in that the results are derived only from the conditions of the network at the time point to which the analysis is applied, yet they predict the representational organization of the network following subsequent learning given the training regime. Of course, the SMG theory was also predictive, but it only applies in the linear regime, whereas the PNTK can readily be expanded to nonlinear applications. It also reveals interesting new patterns, particularly in the secondary learning dynamics of the network, implying that some groundwork for learning higher modes is in place before their primary learning begins.
§ APPLICATION OF NTK AND PNTK ANALYSES TO LEARNING AND PERFORMANCE IN NONLINEAR NETWORKS
In the preceding section, we validated the use of NTK and PNTK analyses of learning dynamics in simple networks. Here, we explore using this technique to examine representational learning and its relationship to network performance in a more complex nonlinear network, addressing how initial conditions influence the development of shared versus separated representations and the impact of this on parallel task execution (i.e., concurrent multitasking).
§.§ Model
§.§.§ Network Architecture
The network used for all further experiments is shown in Fig. <ref> (cf. <cit.>). It had two sets of input units: a stimulus set, x_1, used to represent stimulus features; and a task set, x_2, used to indicate which task(s) should be performed on a given trial. The stimulus set was further divided into g_1 pools of units, each of which was used to represent an independent stimulus dimension comprised of m features along each dimension. Accordingly, each pool was comprised of m units, with each feature represented as a one-hot input pattern over the group. The task set was comprised of single pool with a number of units equal to the number of tasks that could be specified (see below), and one unit used to represent each task. Both sets of input units projected to a single set of hidden units, with connection weights w_1 and w_2 for the stimulus set and task set, respectively. The H units in the hidden layer used a sigmoidal activation function ϕ, producing an activation vector of
h = ϕ(w_1 x_1 + w_2 x_2 + b_1)
where b_1 = - 2 is a fixed bias, ensuring that the units were inhibited if no input was provided (see Section <ref>). All of the hidden units, as well as the input units in the task set, projected to all of the units in the output layer, with weights v_1 and v_2, respectively. The output layer, like the stimulus set of the input layer, was divided into g_2 pools of units, each of which was used to represent an independent response dimension comprised of m responses along each dimension. Thus, each pool was comprised of m units, with each response represented as a one-hot pattern over the set. Like the hidden layer, the output layer used a sigmoidal activation function ϕ, along with output bias b_2 = b_1, leading to a final output of
y = ϕ(v_1 h + v_2 x_2 + b_2)
or, in terms of inputs only
y = ϕ(v_1 ϕ(w_1 x_1 + w_2 x_2 + b_1) + v_2 x_2 + b_2)
Note that task input x_2 appears twice here, as there are two independent pathways involving x_2 (one projecting to the hidden layer and the other to the output layer).
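The forward pass can be summarized by the following NumPy sketch (our own illustration; the helper names are assumptions, and the specific sizes, 12 stimulus units, 12 task units, 200 hidden units, and 12 output units, are taken from the configuration described below).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x1, x2, w1, w2, v1, v2, b1=-2.0, b2=-2.0):
    """Forward pass of the task network: stimulus input x1 and task input x2,
    with the task input projecting to both the hidden and output layers."""
    h = sigmoid(w1 @ x1 + w2 @ x2 + b1)   # hidden layer
    y = sigmoid(v1 @ h + v2 @ x2 + b2)    # output layer
    return y, h

# Standard initialization: uniform weights in [-0.1, 0.1].
rng = np.random.default_rng(0)
init = lambda *shape: rng.uniform(-0.1, 0.1, size=shape)
w1, w2, v1, v2 = init(200, 12), init(200, 12), init(12, 200), init(12, 200)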
§.§.§ Task Environment
As described above, a task was defined as a one-to-one mapping from the features along a single stimulus dimension to the responses of a single output dimension, reflecting response mappings in classic multitasking paradigms. This corresponded to the mapping of the m input units from the specified pool g_1 of the stimulus group x_1 to the associated m output units in the specified pool g_2 of the output units. In aggregate, this yielded g_1 · g_2 tasks, and thus the task set x_2 had that many units. For a given trial, a single feature unit was activated in each pool of the stimulus set x_1 (i.e., each of the g_1 input pools had one of its m units activated). For performance of a single task, only a single unit was activated in the task set x_2. The network was then required to activate the output unit in the response pool corresponding to the input unit activated in the stimulus pool specified by the task unit activated in x_2, and to suppress activity of all other output units. For example, task 1 consisted of mapping the m units of input pool g_1 = 1 to the m units of the output pool g_2 = 1; as only one of the m units was active, the task consisted of mapping the active mth element of g_1 to the mth element of g_2, while outputting null responses elsewhere. For multitasking performance, two or more units in the task set were activated, and the network was required to activate the response corresponding to the input for each task specified, and suppress all other output units. Multitasking was restricted to only those tasks that shared neither an input set nor output set (see <cit.> for a more detailed consideration of “legal multitasking”).
We report results for a network that implemented four stimulus dimensions and three response dimensions (i.e., stimulus pools g_1 = 4 and output pools g_2 = 3).[We chose a different number of stimulus and response pools to be able to distinguish partitioning of the hidden unit representations according to input versus output dimensions, or both.] This yielded a total of 12 tasks (x_2 = 12). Since each stimulus dimension had three features, and each response dimension had three possible responses (i.e., m = 3), in total the network had 24 input units (x_1 = 12 stimulus input, and x_2 = 12 task input units) and 12 output units, as well as H = 200 hidden units.[The number of hidden units was chosen to avoid imposing a representational bottleneck on the network, thus allowing it the opportunity to learn separate representations for the mappings of each of the twelve tasks. This was done to insure that any tendencies for the network to learn lower dimensional representation were more likely to reflect factors of interest (viz., initialization and/or training protocols) and could not be attributed to limited representational resources.]
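A sketch of how a single-task trial can be generated in this environment is given below; the task-indexing convention (input pool = task // g_2, output pool = task % g_2) is our own assumption, introduced only for illustration.

import numpy as np

def make_trial(task, rng, g1=4, g2=3, m=3):
    """Build one single-task trial: stimulus input x1, task input x2, target y."""
    in_pool, out_pool = task // g2, task % g2       # assumed ordering of the 12 tasks
    features = rng.integers(m, size=g1)             # one active feature per stimulus pool
    x1 = np.zeros(g1 * m)
    x1[np.arange(g1) * m + features] = 1.0          # one-hot pattern within each pool
    x2 = np.zeros(g1 * g2)
    x2[task] = 1.0                                  # one-hot task unit
    y = np.zeros(g2 * m)
    y[out_pool * m + features[in_pool]] = 1.0       # copy the cued feature; all else silent
    return x1, x2, y

x1, x2, y = make_trial(task=0, rng=np.random.default_rng(0))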
§.§.§ Initialization and Training
In the experiments, we manipulated the initialization of all connection weights between a standard initialization condition (random uniform distribution in [-.1, .1]) and a large initialization condition (random uniform distribution in [-1, 1]). All biases were set to b_i=-2 to encourage learning of an attentional scheme over the task weights in which activation of a task input unit placed processing units to which it projected in the hidden and output layers in a more sensitive range of their nonlinear processing functions.[This exploits the nonlinearity of the activation functions to implement a form of multiplicative gating without the need for any additional specialized attentional mechanisms; see <cit.> for relevant discussions.] No layer-specific normalization (e.g., by batch) was used. For all experiments, tasks were always sampled uniformly from all available tasks. However, in each experiment we manipulated whether training was restricted to performance of only one task at a time (single task condition) or required performance of multiple tasks simultaneously (multitasking condition), as described in the individual experiments below. In all cases, the network was trained with stochastic gradient descent (SGD) using a base learning rate of .01 for 10000 epochs, with parameters found via hyper-parameter search.[As the PNTK analyzes a specific network instantiation, all NTK-based and MPHATE-based visualizations (except where otherwise noted) are based on one trial. We re-ran each experiment at least five times to confirm that there were no major qualitative changes in the results and to generate correlation metrics over multiple experiments. Experimental results are averaged over 10 trials.]
§.§ Predicting Impact of Standard vs. Large Initialization on Representational Learning
§.§.§ Effect of Initialization on Representational Learning
It has previously been observed that lower initial weights promote the formation of representational sharing among tasks that share the same input and/or output dimensions <cit.>, consistent with other findings from work in machine learning <cit.>. Here, we sought first to validate this finding in the present architecture, and visualize it using M-PHATE, and then evaluate the extent to which it could be predicted by the PNTK analysis. To do so, we compared the standard initialization with the large initialization under the single task training condition.
Fig <ref> shows an M-PHATE plot of how the patterns of activity over the hidden units evolve over the course of training. The top row shows this for the standard initialization condition, with each panel showing the patterns of hidden unit activity grouped (averaged) along different dimensions. There is clear structure in the groupings, which reflects the sharing of representations for tasks that use the same inputs or outputs. For example, grouped by task (the leftmost panel), there are four clusters of three tasks each, with each cluster comprised of tasks that share the same inputs, confirmed by examining the individual elements. Similarly, grouped by input (the second panel from the left), there are three clusters of four tasks each, with each cluster corresponding to within-group position across the four pools. Notice that the grouping here gradually decoheres over time; as the clusters are transient, we show a zoomed-in inset of this phenomenon in Fig <ref> (inset).
Finally, grouped by output (the rightmost panel), there are three clusters of tasks that share the same output.
Both effects can be seen in the Task by Inputs grouping (third panel from the left), in which three clusters appear early in training (dark dots), each of which is comprised of tasks that share the same output, followed later (lighter dots) by a separation into subclusters of tasks that share the same inputs. The early organization by outputs, followed later by organization by inputs, is consistent with the tendency, in multilayered networks without layer-specific weight normalization, for weights closest to the output layer to experience the steepest initial gradients and therefore the earliest effects of learning.[To confirm this account, and rule out the possibility that early organization by outputs was because the lower dimensional structure of the outputs (three dimensions) made it easier to learn than the input structure (four dimensions), we conducted the same experiment on a network that had fewer input dimensions (three) than output dimensions (four), and observed similar results (see Appendix, Section <ref>).]
These observations corroborate the general principle that lower initial weights promote representational sharing in environments for which tasks share structure.
The lower panels in Fig <ref> show the results for the large initialization condition. These are in stark contrast to those for the standard initialization condition, showing little if any structure: each task develops its own representations that are roughly equidistant from the others, irrespective of shared inputs and/or outputs. The one deviation from this pattern is for grouping by output (rightmost panel), in which three clusters emerge, once again presumably reflecting the early and strong influence of the gradients on weights closest to the output of a multilayered network in the absence of any layer-specific form of weight normalization.
Together, the observations above provide strong confirmation that, in this network architecture as in many others, lower initial weights promote the development of shared representations for tasks that share structure (i.e., input or output dimensions), and showcase the utility of M-PHATE for clearly and concisely visualizing such qualitative effects. In the sections that follow, we apply NTK analyses to show that downstream performance can be predicted quantitatively from the initial conditions.
§.§.§ Use of NTK to Predict Representational Learning
First, we conduct an NTK analysis of the standard initialization network, following initialization but prior to training (i.e., T=0), to assess whether this can anticipate the effects of learning. The output of the NTK analysis is a 500x500 matrix, corresponding to the 500 training inputs across all tasks. The M-PHATE observations shown in Fig <ref> suggest that, over the course of training, the representations learned by the network's hidden units came to be clustered according to both the four input dimensions (shared across tasks) and their mapping to each of the three output dimensions (according to task). To determine whether this organization was predicted by, and can be observed in the NTK analysis, we carried out two variants of this analysis: one in which individual training passes were sorted by the current input (aggregated over tasks), and the other sorted by task (aggregated over inputs). In both cases, the NTK analysis is calculated per output, yielding a total of nine NTK analyses (three units m per output set g). For comparison, we also carried out the NTK analysis without sorting. In contrast to the sorted analyses (shown in Fig <ref>), the unsorted analyses did not reveal any discernible structure (see Fig <ref> in Appendix).
Fig <ref> shows the results of the NTK analyses for the primary eigenvector (i.e., the one with the largest eigenvalue) in the standard initialization and high initialization condition at T=0, sorted as described above. These reveal clustering effects in the standard initialization that correspond closely to those that emerged in the hidden units over training, as observed in the M-PHATE plots in Fig <ref>. Each plot shows the weighting for each training pattern on the eigenvector. When these are sorted by input features (left panels), the pattern of weightings for a given response (m) is the same across output pools (g_2) in the standard init case, consistent with the use of the same input representations across all three tasks. Complementing this, when training patterns are sorted by task (right panels), the pattern of weightings for a given response is different for each output pool in the standard init case, indicating the role of the task units in selecting which of the four input dimensions should be mapped to that output pool for each given task. Critically, this structure within the standard init is visible in the NTK analysis prior to training (i.e., at T=0) in the standard initialization condition but not the large initialization condition. Further details of other secondary analysis are provided in the Appendix.
In the standard-initialization condition, structure is visible in the NTK analysis before any training occurs. This raises the question: to what extent is this specifically predictive of the effects that emerge during training? To address this, we analyzed the extent to which the groupings from the initial NTK analysis predicted the structure observed in the M-PHATE hidden representations over the course of training (that is, the extent to which the NTK analysis conducted at T=0, integrating only the first time step, predicted M-PHATE clustering for t>0). For the NTK analysis grouped by input, the NTK groups code for subgroup position within each task, as seen in Fig <ref>. This can easily be expressed by the M-PHATE grouping as well. For the NTK analysis grouped by task, the grouping clearly aligns with the output pool relevant for each task, as seen in Fig <ref>. However, as previously discussed, the M-PHATE analysis uses the hidden layer activations, and thus cannot include output dimension information. Instead, we predict that the M-PHATE analogously uses the relevant input pool.
Based on these grouping schemes, we can test the extent to which the organization predicted from the NTK plots prior to training (at T=0) predicts the patterns of clustering that emerge in the M-PHATE plot at different points during training (T ≥ 1). We measured the correspondence between these measures using the Adjusted Rand distance metric between group memberships, as shown in Fig <ref>. The results indicate that the PNTK accurately predicts M-PHATE structure when examined both by task and by inputs. The task-grouped predictions reach a complete match in grouping by the end of training. The input-grouped predictions also align cleanly with the trajectory of representational structure in the M-PHATE analysis, reaching a perfect match in grouping early in training when structure is clearly observed in the M-PHATE analyses, followed by a diminution of the effect that parallels the dissolution of structure observed in the M-PHATE analyses.
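This correspondence can be computed in a few lines of scikit-learn; in the sketch below, ntk_labels holds the cluster assignments predicted from the NTK at T=0 and mphate_labels[t] those extracted from the M-PHATE embedding at training time t (both assumed to be precomputed; the names are ours). The Adjusted Rand index is a similarity, so the corresponding distance is one minus this value.

from sklearn.metrics import adjusted_rand_score

# Alignment between the NTK-predicted grouping and the M-PHATE grouping over training.
alignment = [adjusted_rand_score(ntk_labels, mphate_labels[t])
             for t in range(len(mphate_labels))]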
§ PREDICTING THE EFFECT OF INITIALIZATION AND TRAINING CURRICULUM ON PROCESSING
The results reported above affirm the usefulness of the NTK and PNTK analyses in predicting representational learning, both in linear and non-linear networks. They also reaffirm the premise that initialization with small random weights favors representational sharing among tasks that share common structure (e.g., input and/or output dimensions). In this section, we report a further evaluation of this effect, and the ability of the NTK and PNTK analyses to predict not only the effects of initialization on representational learning, but also on performance. Specifically, building on previous work <cit.>, we tested the hypotheses that: i) insofar as the standard initialization condition favors representational sharing, it should be associated with faster learning of new single tasks (due to more effective generalization), but at the cost of a compromised ability to acquire parallel processing capacity (i.e., poorer concurrent multitasking capability) relative to the large initialization condition; and ii) this can be predicted from the NTK analysis at the start of training.[While it is plausible that this should be the case, given that NTK and PNTK predict patterns of representational learning and the latter determine performance, nevertheless since neither of these relationships is perfect, it is possible that NTK and/or PNTK predict a different component of the variance in representational learning than is responsible for performance.] To test this, we evaluated the acquisition of multitasking performance via fine tuning of networks first trained on single task performance in each of the two initialization conditions. For each comparison, we generated two networks from the same random initialization, with the large initialization having layer 1 weights that were uniformly multiplied by a factor of 10. We posited that although a neural network trained in the large initialization condition (and therefore biased to learn separated representations) would take longer to learn during initial single task training, it would be faster to acquire the capacity for concurrent multitasking during subsequent fine tuning on that ability, as compared to a network trained in the standard initialization condition (and hence biased to learn shared representations). Critically, we also tested the extent to which the NTK and PNTK analyses, carried out on the network before initial single task training, could accurately predict end-of-training generalization loss and even the impact of fine tuning on concurrent multitasking performance that occurred after the initial training.
§.§ Effects of Initialization on Acquisition of Multitasking Capability
To test the hypotheses outlined above, we first trained networks on single task performance in the standard and large initialization conditions. We then followed this with “fine-tuning” of each resulting network on concurrent multitasking, using the same number of training examples and epochs in each case. Specifically, we trained each network to simultaneously execute, with equal probability, either 1, 2, or 3 tasks, randomly sampled from all valid combinations (i.e., non-overlapping inputs or outputs) of the selected number of tasks.
Fig <ref> shows the results for networks in the standard and large initialization conditions, both during initial single task training (left panels) and during subsequent fine tuning on concurrent multitasking performance (right panels). The upper panels show a direct comparison of the mean and standard deviation of generalization losses. While the basic effects are observed here, overall performance varied across different pairs of networks, as a function of the particular pattern of initial weights assigned to them (which was the same for each pair, and simply scaled differently for the standard and large initialization conditions). The lower panels of Fig <ref> show the mean of direct comparisons between each pair of networks, which controls for differences in overall performance across the pairs. As predicted (and consistent with previous results <cit.>), the standard initialization condition led to better generalization performance over the course of single task training, due to the development of shared representations (see Fig <ref>, upper panels). However, those shared representations presented an obstacle to the subsequent acquisition of concurrent multitasking performance, presumably because separated (task dedicated) representations had to now be learned de novo. Conversely, networks in the large initialization condition exhibited poorer overall generalization performance during single task training, due to a bias toward the learning of more separated representations (as predicted by the PNTK analyses; see Fig <ref>, lower panels), but they were better predisposed for the subsequent acquisition of concurrent multitasking performance, as is clearly observed in the right panel of Fig <ref>.
§.§ Predicting performance from the PNTK analysis
Next, we tested the extent to which a PNTK analysis applied to the network prior to initial single task training could predict performance during subsequent multitask tuning. To do so, we: i) created ten pairs of networks, each with a different set of weights generated for the standard and large initialization conditions (as described above); ii) applied k-means clustering with 12 groups to the multidimensional PNTK analysis of each network prior to training; and iii) quantified the clustering quality using the silhouette method <cit.>, with a higher value reflecting a greater degree of sharing among representations. We then trained each network using the procedure described above, initially on single tasks, and then on multitask fine tuning. Finally, for each pair of networks, we correlated the silhouette score for the network in each condition with the difference in generalization performance between the two conditions (standard - large initialization) at the end of each training phase. If representational structure predicted by the PNTK analyses was responsible for generalization performance after each phase of training, then the correlations should be negative following single-task and positive following multitask fine tuning. This is because the standard initialization should generate higher silhouette scores (more shared representations) and correspondingly lower test loss relative to the large initialization condition after single task training (hence the negative correlation), but higher relative test loss following multitask fine tuning (hence the positive correlation); and, conversely, the large initialization should generate lower silhouette scores (more separated representations) and correspondingly higher test loss relative to the standard initialization condition after single task training (and thus, again, a negative correlation), but lower relative test loss following multitask fine tuning (again a positive correlation).
The results are consistent with these predictions: the correlation was r=-.881 following single task training, and r=.973 following multitask fine tuning. This suggests that the PNTK analysis, carried out prior to any training, was able to reliably predict theoretically anticipated effects of initialization on generalization performance in the network observed after two distinct phases of training.
This not only confirms that the PNTK analysis, carried out prior to any training, is able to predict the kinds of representations learned by the network in response to different initializations, but also shows that it can be used to predict the patterns of generalization performance associated with those representations, as observed in the network after two distinct phases of training.
[More specifically, the analysis involved the following: Arrange the data for the 20 networks (10 each of standard and large initializations) into rows, labeled by condition (standard versus large) and with columns containing the computed silhouette score, final generalization loss following single task training, and final generalization loss following multitask fine tuning for each network. Then, add two additional columns that contained the performance of each network relative to the other member of its pair (i.e., in the other initialization condition; see Section <ref> for the motivation for analyzing relative rather than absolute performance) after each phase of training. The values for relative performance were calculated as the difference in test loss between that condition and the other condition within the same pair (e.g., for the standard initialization: final loss of standard initialization - final loss of large initialization; for the large initialization: final loss of the large initialization - final loss of the standard initialization). Finally, compute the correlation over networks of the silhouette scores with the relative generalization scores following single task training, and with the relative generalization scores following multitask fine tuning.]
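A minimal sketch of this pipeline is given below; the array names and shapes are our own assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def clustering_quality(pntk_features, n_clusters=12, seed=0):
    """Silhouette score of a k-means clustering of the (flattened) PNTK output."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(pntk_features)
    return silhouette_score(pntk_features, labels)

# sil: length-20 array of silhouette scores (one per network);
# rel_loss: length-20 array of relative generalization losses after a training phase.
r = np.corrcoef(sil, rel_loss)[0, 1]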
§ DISCUSSION AND CONCLUSIONS
In this article, we show how a novel analysis method that examines the gradients of the network at the outset of training (determined by its initial weights and training curriculum), can be used to predict features of the representations that are subsequently learned through training, as well as the impact these have on network performance.
Specifically, we show that NTK and PNTK analyses predict the extent to which
a standard initialization scheme (small random weights) biases learning toward a generalizable code that groups similar inputs together (i.e., using shared representations), whereas large weight initialization predisposes the network to learn distinct representations for each task configuration (i.e., separated representations). We also confirm that, whereas the bias toward shared representations leads to improved generalization performance in the single task setting, this leads to destructive interference that impairs multitasking performance when more than one task is performed concurrently. Conversely, the large initialization scheme favors the formation of distinct task-specific representations, that facilitates the acquisition of multitasking capability.
Importantly, we show that these effects (i.e., the eventual task or input groupings) can be predicted from the NTK eigenvectors as early as the first iteration of training, indicating that the types of representation that will be learned and their consequences on performance can be characterized before training has begun.
Here, we focused on relatively simple networks, tasks, and training regimes. Evaluating the extent to which our results extend to more complex network architectures, tasks and forms of training remains an important direction for future research. We expect that this technique may be a fruitful way to advance the probing and understanding of inductive biases that influence the learning and use of representations in neural networks <cit.>. This, in turn, may help lay the foundation for a better understanding of the respective costs and benefits of representation sharing in both biological and artificial neural network architectures.
The work presented here, and previous theoretical work on which it builds <cit.>, suggests that sharing representations between tasks limits a network's capacity for multitasking. This has received empirical support in neuroscientific research. For example, neuroimaging studies have provided evidence that the multitasking capability of human participants is inversely related to representational overlap between tasks <cit.>, and that improvements in multitasking capability are accompanied by increases in representational separation <cit.>. The present work may aid in the development of methods that allow neuroscientists to predict the learning of shared versus separate representations before it occurs. Such methods would open new paths for the quantification of individual differences or the effective evaluation of training procedures for human multitasking. Along similar lines, the work presented here may help inform the design of more effective artificial systems, by providing an efficient means of predicting the impact that initial conditions and training curriculum will have on downstream parallelization of performance.
§ APPENDIX
§.§ Derivation from SMG
For the simple Saxe, McClelland, Ganguli (SMG) model, the PNTK can be further broken down not only by output dimension but also by input dimension. Thus, the core PNTK update is based off an NTK generated by
NTK[i,j,k,l] = < ∑_i,l( dy_l(x_j[i])/dθ · dL/dy_l(x_j) ), dy_l(x_k[i])/dθ >
where i refers to input dimension, j to training input index, k to testing input index, and l to output dimension. Thus y_l(x) is the lth dimension of output y given input x, and x[i] refers to using an input generated from only the ith input dimension of input x (e.g. for x = [1,2,3], x[1] = [1,0,0]. This works because we have a linear system so ∑_i f(x[i]) = f(x)).
The Path NTK (PNTK) is accumulated from the NTK via the update
PNTK -= NTK_t * η
for learning rate η. This PNTK gives the total influence (throughout training) that dimension i of training input j has on the output dimension l of a response to a testing example k. Note that summing over i,j will just give the network's (learned) response to testing example k across all outputs.
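For the linear SMG model, this accumulation can be sketched as follows, mirroring the PNTK update above in per-example form (building on the variables of the earlier SMG sketch and omitting, for brevity, the additional split by input dimension); the Jacobian layout and the 1/N loss scaling are our own conventions.

def jacobian(x, W1, W2):
    """dy/dtheta for the linear net y = W2 W1 x, with theta = (W1, W2) flattened."""
    n_out, n_hid = W2.shape
    J = np.zeros((n_out, W1.size + W2.size))
    h = W1 @ x
    for l in range(n_out):
        J[l, :W1.size] = np.outer(W2[l], x).ravel()              # dy_l/dW1[a,b] = W2[l,a] x[b]
        J[l, W1.size + l*n_hid:W1.size + (l+1)*n_hid] = h        # dy_l/dW2[l,:] = W1 x
    return J

PNTK = np.zeros((n_train, n_train, n_out))       # training example j, test point k, output l
for t in range(2000):
    H = X @ W1.T
    err = (H @ W2.T - Y) / n_train               # dL/dy for MSE (up to a constant factor)
    gW2, gW1 = err.T @ H, (err @ W2).T @ X       # same gradients as in the earlier sketch
    J_all = np.stack([jacobian(x, W1, W2) for x in X])   # per-example Jacobians, (N, n_out, P)
    for j in range(n_train):
        gj = err[j] @ J_all[j]                   # sum_l dy_l(x_j)/dtheta * dL/dy_l(x_j)
        PNTK[j] -= lr * (J_all @ gj)             # influence of x_j on every output at every x_k
    W2 -= lr * gW2
    W1 -= lr * gW1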
Since we are working in a simple linear system, we want to use the PNTK to understand how W_target is learned. The actual learned W can be estimated from the PNTK - the PNTK gives the actual effect of example j on future outputs, which only happens through a modification of W. Thus, by normalizing out by the magnitude of x_k[i], and summing the response over all training j, we can use the PNTK to estimate the learned change in W:
δ W[i,l] = ∑_j PNTK[i,j,k,l]/x_k[i]
for any k. This PNTK estimate of the learned change in W should closely track the true W_target if our PNTK theory is correct.
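Assuming a per-input-dimension path kernel PNTK4 with index order (i, j, k, l) as above and test inputs x_test (both names are ours), the estimate reads:

# delta_W_est[i, l] = sum_j PNTK4[i, j, k, l] / x_test[k][i], for any test index k
delta_W_est = PNTK4[:, :, k, :].sum(axis=1) / x_test[k][:, None]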
Given that we are using a very simple linear system, we can simplify some of the preceding results. Our system is a linear system, given by y(x) = x W_1 W_2, where x is a vector of dimension 4, y is a vector of dimension 5, W_1 is of dimension 4 × 3 and W_2 is of dimensions 3 × 5, i.e., we have a bottleneck of dimensionality 3. We generate targets according to y = x W_target, where W_target is of dimensions 4 × 5, and we use MSE loss. Note that dy_l(x[i])/dθ is a vector of size 27: 12 parameters from W_1, followed by another 15 from W_2. Because we split up the input and activate only a single input and output dimension, only a small portion of these parameters receive gradient at once - specifically, the lth column of W_2 for output activation l, and the ith row of W_1 for the input activation of i, i.e., only 6 parameters for any particular combination
dy_l(x[i])/dW_1[i,:] = x[i] W_2[:,l]
dy_l(x[i])/dW_2[:,l] = x[i] W_1[i,:]
and thus the entire set of non-zero parameters is
dy_l(x[i])/dθ = x[i] [W_2[:,l], W_1[i,:]]
We can then attempt to simplify the core NTK equation. The additional complexity comes from the fact that we are summing over the indexes of i,l. Starting from the base equation:
NTK[i,j,k,l] = < ∑_i,l( dy_l(x_j[i])/dθ · dL/dy_l(x_j) ), dy_l(x_k[i])/dθ >
Substituting in our previous simplification:
NTK[i,j,k,l] = <∑_i,l( x_j[i] * [W_2[:,l],W_1[i,:]] * dL/dy_l(x_j)) , x_k[i] * [W_2[:,l],W_1[i,:]] >
Note that the notation [W_2[:,l], W_1[i,:]] implies that all parameters outside of those specified are 0. Thus, we only need to worry about a subset of the parameters within the i,l sum. The sum therefore need not be over all i,l combinations, but only over those such that either i or l matches the NTK index. Another way of putting it is:
NTK[i,j,k,l] = < [ ∑_i x_j[i] * dL/dy_l(x_j) * W_2[:,l], ∑_l x_j[i] * dL/dy_l(x_j) * W_1[i,:] ] , x_k[i] * [W_2[:,l], W_1[i,:]] >
Finally, we can convert from the NTK to the change in W
Δ W[i,l] = ∑_j NTK[i,j,k=0,l] / x_k=0[i]
Note k=0 is a generic choice - any value would work here. The idea is that NTK[i,j,k,l] measures the influence of training x_j[i] on output y_l[x_k]. However, the influence comes about through W, so the magnitude of the actual effect will linearly depend on the input, e.g. x_k. Putting the previous 2 equations together:
Δ W[i,l] = ∑_j < [ ∑_i x_j[i] * dL/dy_l(x_j) * W_2[:,l], ∑_l x_j[i] * dL/dy_l(x_j) * W_1[i,:] ] , x_k=0[i] * [W_2[:,l], W_1[i,:]] > / x_k=0[i]
We can make a few further simplifications. To start, we are using MSE error, so
dL/dy_l(x_j) = (2/ND) ϵ_j[l]
where N is the number of training data, D is the output dimensionality, and ϵ_j is the residual (i.e., error) between the current and target y_l(x_j), for a final update of:
Δ W[i,l] = ∑_j < [ ∑_i x_j[i] * (2/ND) ϵ_j[l] W_2[:,l], ∑_l x_j[i] * (2/ND) ϵ_j[l] W_1[i,:] ] , x_k=0[i] * [W_2[:,l], W_1[i,:]] > / x_k=0[i]
which we can rewrite a bit to more closely match the standard SGD formulation as:
Δ W[i,l] = ∑_j < [ ∑_i x_j[i] * W_2[:,l] * (2/ND) ϵ_j[l], ∑_l x_j[i] * W_1[i,:] * (2/ND) ϵ_j[l] ] , [W_2[:,l], W_1[i,:]] >
Looking at the true updates made by the SGD equation:
Δ W[i,l] = ∑_j dL/dy_l(x_j) · dy_l(x_j)/dW[i,l] = ∑_j (2/N) ϵ_j[l] dy_l(x_j)/dW[i,l]
but we don't have W[i,l] directly to update. In reality, W = W_1 W_2, so an update to W[i,l] depends upon W_1[i,:] W_2[:,l], i.e., their dot product.
Δ W[i,l] = < Δ W_1[i,:], Δ W_2[:,l] > + <Δ W_1[i,:], W_2[:,l]> + < W_1[i,:], Δ W_2[:,l]>
Assuming that our updates are fairly small and W >> Δ W, we can ignore the cross term,
Δ W[i,l] = <Δ W_1[i,:], W_2[:,l]> + < W_1[i,:], Δ W_2[:,l]>
Note that, e.g., Δ W_1[i,:] will be proportional to dL/dy(x_j) · dy(x_j)/dW_1[i,:] - the version calculated above is for a single input index i and output index l, whereas this one is more general. Since we have a linear system, we can get these by summation over the appropriate index (note that Δ W_1[i,:] will need to be summed over l but not i, and the reverse for Δ W_2[:,l]). This leads to
Δ W[i,l] = ∑_j ( < ∑_i W_2[:,l] x_j[i] (2/ND) ϵ_j[l], W_2[:,l] > + < W_1[i,:], ∑_l W_1[i,:] x_j[i] (2/ND) ϵ_j[l] > )
which is just a reformulation of the previous equation, showing that the PNTK exactly recovers the linear updates from SGD.
§.§ Further Experiments and Analyses
§.§.§ Unsorted NTK
The sorting (by either task grouping or input grouping) is essential to reveal any structure within the NTK. As a control, we here introduce the time 0 NTK for a standard initialized network trained on single tasking (Fig <ref>), which shows no discernible patterns.
§.§.§ Full Trial Results for Standard Single Task Training
§.§.§ Alternative Setup - LR Tuning
The above comparison between the standard and large initializations shows a period of time during single task training where large initialization has better single task generalization performance, in contrast to our theory. This persists for a fairly long time, necessitating a long analysis (10000 iterations) in order for the standard initialization to fully converge. Part of the reason for this is that the larger weights of the large initialization condition lead to an 'effectively' higher learning rate—higher weights lead to more backpropagation of error throughout the model. We also consider an alternative approach (called the Learning Rate Tuning) where we increase the learning rate of the standard initialization in order to approximately match the starting learning rates of both initializations (learning rate set to .3 for the standard initialization case, iterations reduced to 2500 across both cases).
Fig <ref> shows the results of this variant. Once again, both networks were able to learn to multitask, but the large-initialization case both learned to multitask more quickly and with improved generalization (testing loss). The improved learning rate for the standard initialization eliminated the transient area where the large initialization was better in the single tasking case, while not affecting the dominance of the large init in the multitasking case. We again focus on the per-trial differences in generalization loss in Fig <ref> bottom row, which again shows a strong positive result (standard initialization better) in single task training, and a strong negative result (large initialization better) in multi-task fine-tuning.
§.§.§ Correlation Analysis - Difference Results on LR Tuned, Combined Data
An extension of the correlation analysis, using both the LR Tuned dataset and a combined dataset (both the baseline and LR Tuned data to give a total of 20 trials).
For the LR Tuned dataset, we find a correlation of r=-.850 for the training case, and a correlation of r=.952 for the multitasking fine tuning regime.
For the combined dataset, we find a correlation of r=-.757 for the training case, and a correlation of r=.932 for the multitasking fine tuning regime.
These results are broadly consistent with the base (non LR tuned) experimental results. In the training case, higher clustering (i.e., the standard initialization) is strongly correlated with lower delta performance (i.e., superior generalization). The situation is again strongly reversed for the multitasking fine tuning case.
§.§.§ Correlation Analysis - Baseline
We repeat both experimental setups a total of 10 times, then use k-means clustering with 12 groups over the multi-dimensional PNTK output, and then calculate the clustering quality using the silhouette method <cit.>. We then calculated the correlations between the mean silhouette score for both the standard and large initializations and the downstream fine-tuning test loss (i.e., final generalization performance). We initially report across both categories (standard and LR-tuned), before providing a per-category breakdown.
In the single task training regime, we get a correlation of r=-.421 between clustering silhouette (a measure of how well the NTK eigenvectors are clustered, which is higher for the standard init) and final generalization loss, i.e., the standard init's shared representations are correlated with generalization ability. Correlations were r=-.224 for the standard training setup, and r=-.637 for the LR-tuning training setup.
In the multitasking training regime, we get a correlation of r=.894 between clustering silhouette and final generalization loss, i.e., the standard init's shared representations are very strongly correlated with reduced generalization ability, as predicted. Correlations were r=.947 for the standard training setup, and r=.875 for the LR-tuning training setup.
Using base values rather than differences dramatically reduces the power in the single task training regime, due to the aforementioned overall scale difference between different trials.
§.§.§ Full Training Details
In Fig <ref>, we show the basic generalization (test loss) performance of our model across the std and large initializations for both the single-training and multi-fine tuning tasks, across two task setups (long time and learning rate tuned). Here, we additionally examine the training performance (training loss) of our models, where the dashed line corresponds to the training loss. As expected, training losses are lower than testing losses, as our model can near-exactly fit seen data. In the long time case, the large initialization's larger weights allow it to learn faster even in the single task training case (top left) compared to the std initialization, although this does not correspond to increased generalization. In the learning rate matched case, the standard initialization learns better, particularly for fine tuning on multitasking (bottom left) compared to the large initialization, but again this does not lead to increased test performance. This shows that in various settings, higher training performance does not necessarily lead to better generalization ability, as the network instead over-fits; the important consideration is the networks' inductive biases, particularly their representations.
§.§ Effects of Initialization on Acquisition of Multitasking Capability
§.§.§ Architectural Changes
We also briefly compared the effects of an architectural change, namely swapping from a task setup with 4 input groups and 3 output groups to one with 3 input groups and 4 output groups. In this case, there are still 12 possible tasks. In the original task, we saw that learning by inputs preceded tasks. We hypothesized that the early organization by outputs followed later with organization by inputs is consistent with the fact that in a multilayered network, absent any form of weight normalization, the weights closest to the output layer experience the steepest initial gradients and therefore the earliest effects of learning. However, it could also be due to the fact that the output dimensionality was lower, so we reversed the dimensions to confirm that the effect was unchanged, as seen in Fig <ref>.
|
http://arxiv.org/abs/2307.03905v1 | 20230708053355 | A novel high-order linearly implicit and energy-stable additive Runge-Kutta methods for gradient flow models | [
"Xuelong Gu",
"Wenjun Cai",
"Yushun Wang"
] | math.NA | [
"math.NA",
"cs.NA"
] |
A novel high-order linearly implicit and energy-stable additive Runge-Kutta methods for gradient flow models
Xuelong Gu, Wenjun Cai, Yushun Wang
July 8, 2023
========================================================================
This paper introduces a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes for general gradient flows, utilizing the scalar auxiliary variable (SAV) approach and the additive Runge-Kutta (ARK) methods. We provide a rigorous proof of energy stability, unique solvability, and convergence. The proposed schemes generalize some recently developed high-order, energy-stable schemes and address their shortcomings.
On the one hand, the proposed schemes can incorporate existing SAV-RK type methods after judiciously selecting the Butcher tables of ARK methods <cit.>. The order of a SAV-RKPC method can thus be confirmed theoretically by the order conditions of the corresponding ARK method. Several new schemes are constructed based on our framework, which prove to be more stable than existing SAV-RK type methods. On the other hand, the proposed schemes are not limited to a specific form of the nonlinear part of the free energy and can achieve high order with fewer intermediate stages compared to the convex splitting ARK methods <cit.>.
Numerical experiments demonstrate the stability and efficiency of the proposed schemes.
Energy-stable schemes, Scalar auxiliary variable approach, Additive Runge-Kutta methods, Linearly implicit schemes.
§ INTRODUCTION
Phase field models are versatile mathematical models widely used in physics, materials science, and mathematics to simulate various physical phenomena, including the diffusion of two-phase interfaces, phase transitions in materials, and mechanical properties <cit.>. These models are useful for describing the different phases of a material, as well as the phase transitions and microstructural changes that occur in non-equilibrium states.
The phase field model is usually represented as a gradient flow of a free energy functional ℱ(u) as follows:
∂ u/∂ t= 𝒢δℱ/δ u, (𝐱, t) ∈Ω× (0, T],
with the initial condition u(𝐱, 0) = u_0(𝐱), where u is a state variable, Ω⊂ℝ^n represents the computational domain, δℱ/δ u denotes the variational derivative of ℱ to u, and 𝒢 is a non-positive mobility operator. Classical phase field models include the Allen-Cahn (AC) equation <cit.>, the Cahn-Hilliard (CH) equation <cit.>, the molecular beam epitaxy (MBE) equation <cit.>, etc <cit.>. A significant aspect of (<ref>) is that the system preserves the following energy dissipation law when appropriate boundary conditions are imposed on u.
d ℱ/dt = (δℱ/δ u, ∂ u/∂ t) = (δℱ/δ u, 𝒢δℱ/δ u) ≤ 0.
Due to the nonlinearity of (<ref>), its analytical solution is typically intractable. Therefore, developing efficient and stable numerical schemes is imperative. One approach is constructing schemes that inherit a discrete counterpart of (<ref>), known as energy-stable methods <cit.>. As demonstrated in <cit.>, energy-stable methods can prevent numerical oscillations and unphysical solutions, and have thus been the focus of extensive research over the past few decades. Classical energy-stable methods include convex splitting (CS) methods <cit.> and discrete variational derivative (DVD) methods <cit.>, among others. CS and DVD methods are fully implicit and thus require solving a nonlinear system at each time step. To improve computational efficiency, researchers have suggested linearly implicit or explicit energy-stable schemes, such as stabilized semi-implicit methods <cit.>, exponential time difference methods <cit.>, and the leapfrog methods <cit.>.
The numerical methods discussed above are exclusive to particular gradient flow models and can not be effortlessly adapted to others. This status quo did not change until the energy quadratization (EQ) methods <cit.> were proposed. EQ methods provide an elegant platform for constructing linearly implicit schemes, but they involve solving linear systems with variable coefficients at each time step. In <cit.>, Shen et al. proposed scalar auxiliary variable (SAV) methods. Besides their unconditional stability, SAV methods require only the solution of a linear system with constant coefficients in each step. Furthermore, SAV approaches provide a universal framework for developing linearly implicit energy-stable schemes that can be extended to a variety of complex models <cit.>. Due to these advantages, SAV methods have received attention and are promoted in <cit.>.
However, the above methods are limited to second-order accuracy, which may not accommodate high precision requirements. The nonlinearity of phase field models makes it difficult to develop high-order energy-stable schemes. In <cit.>, the authors present high-order energy-stable schemes by combining additive Runge-Kutta (ARK) methods with CS techniques (CS-ARK). To guarantee energy stability, these approaches impose stringent criteria on the coefficients of the ARK methods, necessitating a large number of intermediate stages even for a second-order scheme. Thus, the currently identified energy-stable CS-ARK methods are limited to third-order. In <cit.>, energy-stable schemes based on the Hamiltonian boundary value or discrete gradient methods are presented. These schemes are fully implicit and thus computationally expensive. Akrivis et al. introduced in <cit.> novel linearly implicit schemes based on a combination of SAV and RK (SAV-RK) approaches. For explicit discretization of nonlinear terms, they incorporated extrapolation techniques to predict solutions at specified time levels. The resulting methods are referred to as SAV-RKEX. However, excessive interpolation points lead to highly oscillatory interpolation polynomials, resulting in inaccurate predictions. Li et al. developed SAV-RKPC methods in <cit.> to obtain a more accurate prediction of numerical solutions at intermediate stages, significantly improving the stability and accuracy of SAV-RKEX methods. Nevertheless, such a technique increases the computational costs, and there is no theoretical guarantee of the necessary number of iterations to achieve adequate accuracy.
In this paper, we propose a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes, combining the SAV approach with the ARK methods. The proposed methods overcome the limitations of both CS-ARK and SAV-RK methods and can be applied to gradient flow systems with general nonlinear functionals. On the one hand, to guarantee energy stability, the proposed methods require only the algebraic stability of the implicit part of ARK methods. This enables the methods to achieve high accuracy and energy stability with fewer intermediate stages. On the other hand, our approach can be regarded as a novel prediction correction technique that avoids the imprecision of extrapolation techniques used in the SAV-RKEX method and does not require iterative procedures for prediction in SAV-RKPC. Thus, the proposed approach guarantees both efficiency and stability. Additionally, our framework can accommodate all SAV-RK type integrators with some appropriate modifications, enabling us to theoretically analyze the consistency of SAV-RKPC(or EQ) methods proposed in <cit.> by exploiting the order conditions of ARK methods.
The overall structure of the remaining contexts is summarized below. In Section <ref>, we briefly overview the ARK and SAV methods. In Section <ref>, we reformulate the gradient flow model into an equivalent one and propose our new algorithms. Then, we prove the unconditional energy stability and solvability of the proposed methods. Moreover, we demonstrate the order condition of SAV-RKPC methods by regarding it as an ARK method. The numerical examples and comparisons are made in Section <ref>. Finally, we conclude the whole work in Section <ref>.
§ OVERVIEW OF ARK METHODS AND SAV REFORMULATION OF GRADIENT FLOWS
In this section, we briefly overview the additive Runge-Kutta (ARK) methods. Some basic notations and concepts are also presented. By incorporating a scalar auxiliary variable, the original gradient flow model is transformed into an equivalent one (known as the SAV reformulation). The reformulated system preserves the quadratic energy and provides an elegant platform for developing high-order and linearly implicit unconditionally energy-stable numerical methods.
§.§ ARK methods
We provide an overview of ARK methods, which are commonly used to solve the initial value problem for the following additive partitioned system:
u_t(𝐱, t) = f(u) + g(u), u(𝐱, 0) = u_0(𝐱).
Here, the right-hand side of (<ref>) is subdivided with respect to stiffness, nonlinearity, dynamical behavior, etc. Before we proceed, it is helpful to introduce the Butcher notations for two s-stage RK methods.
[ c A; b^T ]
=
[ c_0 a_00 ⋯ a_0s-1; c_1 a_10 ⋯ a_1s-1; ⋮ ⋮ ⋯ ⋮; c_s-1 a_s-1 0 ⋯ a_s-1 s-1; b_0 ⋯ b_s-1 ]
, [ ĉ Â; b̂^T ]
=
[ ĉ_0 â_00 ⋯ â_0s-1; ĉ_1 â_10 ⋯ â_1s-1; ⋮ ⋮ ⋯ ⋮; ĉ_s-1 â_s-1 0 ⋯ â_s-1 s-1; b̂_0 ⋯ b̂_s-1 ]
,
where A, Â∈ℝ^s × s, b, b̂∈ℝ^s, and c = A 1, ĉ = Â1 with 1 = (1, 1, ⋯, 1)^T∈ℝ^s.
[Explicit RK (ERK) methods]
A RK method is explicit if a_ij = 0 for j ≥ i.
[Diagonally implicit RK (DIRK) methods]
A RK method is diagonally implicit if a_ij = 0 for j > i and there exists 0 ≤ i ≤ s-1 such that a_ii≠ 0.
[algebraically stable RK method <cit.>]
Let us consider the symmetric matrix with entries M_ij = b_i a_ij + b_j a_ji - b_i b_j. A RK method is algebraically stable if its coefficients satisfy the following two stability criteria; a simple numerical check of these conditions is sketched below.
* b_i ≥ 0, ∀ i = 1, 2, ⋯, s,
* M is positive semi-definite.
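The two criteria can be verified numerically for a given tableau; the following helper is our own illustration and is not part of the schemes developed later.

import numpy as np

def is_algebraically_stable(A, b, tol=1e-12):
    """Check b_i >= 0 and positive semi-definiteness of M_ij = b_i a_ij + b_j a_ji - b_i b_j."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    M = b[:, None] * A + b[None, :] * A.T - np.outer(b, b)
    return bool(np.all(b >= -tol) and np.all(np.linalg.eigvalsh(M) >= -tol))

# Example: the implicit midpoint rule (A = [[1/2]], b = [1]) is algebraically stable.
print(is_algebraically_stable([[0.5]], [1.0]))   # True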
We partition the time interval uniformly with a step size of τ and denote the time grid points as t_n = n τ. Let N_t = [T/τ]. Assume that u^n has been computed in advance. The ARK methods then update u^n+1 through two steps.
First, the intermediate stages u_ni (i = 0, 1, ⋯, s-1) are computed from
u_ni = u^n + τ∑_j=0^s-1 a_ij f(u_nj) + τ∑_j=0^s-1â_ij g(u_nj),
Then, we update the solution by
u^n+1 = u^n + τ∑_i=0^s-1 b_i f(u_ni) + τ∑_i=0^s-1b̂_i g(u_ni).
It is worth mentioning that the above ARK methods have been employed to develop energy-stable schemes for phase field models in <cit.> and maximum bound principle methods for the AC equations in <cit.>.
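To illustrate the two-step structure, the following sketch implements one generic ARK step for the common implicit-explicit case in which the first part is linear, f(u) = Lu, and is paired with a diagonally implicit tableau while g is treated explicitly. This specialization, as well as the function and variable names, is our own illustrative choice; the tableaus are user-supplied.

import numpy as np

def imex_ark_step(u, tau, L, g, A_im, A_ex, b_im, b_ex):
    """One ARK step for u' = L u + g(u): L u is treated with the (diagonally
    implicit) tableau (A_im, b_im), g with the explicit tableau (A_ex, b_ex)."""
    s, n = len(b_im), len(u)
    I = np.eye(n)
    K_f = np.zeros((s, n))                       # stage values of L u_ni
    K_g = np.zeros((s, n))                       # stage values of g(u_ni)
    for i in range(s):
        rhs = u + tau * sum(A_im[i][j] * K_f[j] + A_ex[i][j] * K_g[j] for j in range(i))
        u_i = np.linalg.solve(I - tau * A_im[i][i] * L, rhs)   # only a linear solve per stage
        K_f[i], K_g[i] = L @ u_i, g(u_i)
    return u + tau * (np.asarray(b_im) @ K_f + np.asarray(b_ex) @ K_g)

# A one-stage, first-order example: the linear part is treated implicitly.
L = np.array([[-2.0]])
u_next = imex_ark_step(np.array([1.0]), 0.1, L, lambda u: -u**3,
                       A_im=[[1.0]], A_ex=[[0.0]], b_im=[1.0], b_ex=[1.0])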
We emphasize that each ARK method can be considered as a partitioned Runge-Kutta (PRK) method <cit.>. Specifically, let us introduce an equivalent reformulation of (<ref>) as follows:
{ u̇_f(𝐱, t) = f(u), u̇_g(𝐱, t) = g(u),
u(𝐱, t) = u_f(𝐱, t) + u_g(𝐱, t),
.
It is straightforward to see that (<ref>) is equivalent to (<ref>) if the consistent initial condition u_f(𝐱, 0) + u_g(𝐱, 0) = u^0(𝐱) is imposed. By employing a PRK method to (<ref>) and eliminating the intermediate variables u_f, u_g, we readily obtain the ARK method as mentioned above.
By Remark <ref>, we can readily infer that an ARK method has an order of p if the corresponding PRK method has an order of p, as a ARK method is essentially a PRK method applied to the extended systems (<ref>). Adrian et al. conducted an extensive study on generalized ARK methods in <cit.> and provided a comprehensive list of their order conditions. Table <ref> summarizes the order conditions of ARK methods up to the third-order for convenience.
§.§ Gradient flow systems and their SAV reformulation
A gradient flow model can be expressed generally as
u_t(𝐱, t) = 𝒢δℱ/δ u, 𝐱∈Ω,
where u is a state variable, 𝒢∈ℝ^d × d is a negative semi-definite mobility operator, and δℱ/δ u is the variational derivative of the free energy functional ℱ with respect to u. The triple (u, 𝒢, ℱ) uniquely specifies a gradient flow system. When appropriate boundary conditions are imposed on u, system (<ref>) dissipates the free energy as follows:
d ℱ/dt = ( δℱ/δ u, ∂ u/∂ t) = ( δℱ/δ u, 𝒢δℱ/δ u) ≤ 0,
where (u, v) = ∫_Ω u v d𝐱, ∀ u, v ∈ L^2(Ω) is the inner product. Moreover, we denote by u = √((u, u)) the corresponding norm.
For illustration, let us assume a free energy functional of the form:
ℱ(u, ∇ u) = 1/2(u, ℒu) + (F(u, ∇ u), 1),
where ℒ is a linear, self-adjoint, and positive definite operator, F represents a bulk energy bounded below. The SAV approach introduces a new scalar variable such that
q(t) = √( (F(u, ∇ u), 1) + C ),
where C is a sufficiently large positive constant to guarantee that the square root in (<ref>) makes sense. The energy functional (<ref>) can be rewritten into a quadratic form as
ℱ(u, q) = 1/2(u, ℒu) + q^2 - C.
Let W(u) = √( (F(u, ∇ u), 1) + C) for simplicity. The model (<ref>) is reformulated into an equivalent system using the SAV approach <cit.>, as shown below:
{
u_t = 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u ),
q_t = (δ W/δ u, u_t ) + ( δ W/δ∇ u, ∇ u_t ),
.
equipped with the consistent initial conditions
u(𝐱, 0) = u_0(𝐱), q(0) = √((F(u_0, ∇ u_0), 1) + C).
Taking the inner products on both sides of the first and second equations of (<ref>) with ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u and 2q, respectively, and then combining the resulting equations, it is straightforward to confirm that system (<ref>) admits the following energy dissipation law.
d/dtℱ(u, q) = ( ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u, 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u )) ≤ 0.
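As a concrete illustration of how the quadratized reformulation is used in practice, the following NumPy sketch advances the 1D Allen-Cahn equation (𝒢 = -I, ℒ = -Δ, periodic boundary conditions) with the classical first-order SAV scheme; each step requires only FFT-based solves of a constant-coefficient linear system. The double-well potential, grid, time step, and initial data are illustrative assumptions, and this first-order scheme is a building block rather than the high-order SAV-ARK schemes constructed below.

import numpy as np

N, Lx = 256, 2.0 * np.pi
dx = Lx / N
x = dx * np.arange(N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)            # Fourier wavenumbers
tau, C = 1e-2, 1.0                                   # time step and SAV constant

F  = lambda u: 0.25 * (u**2 - 1.0)**2                # double-well bulk energy (assumed)
dF = lambda u: u**3 - u
inner = lambda f, g: dx * np.sum(f * g)              # discrete L2 inner product
Ainv  = lambda f: np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + tau * k**2)))  # (I - tau*Lap)^{-1}

u = 0.05 * np.cos(x)                                 # initial state (assumed)
q = np.sqrt(inner(F(u), np.ones(N)) + C)             # auxiliary variable q(0)

for n in range(1000):
    b = dF(u) / np.sqrt(inner(F(u), np.ones(N)) + C) # 2*dW/du evaluated at u^n
    g = u - tau * b * (q - 0.5 * inner(b, u))        # right-hand side after eliminating q^{n+1}
    Ainv_g, Ainv_b = Ainv(g), Ainv(b)
    s = inner(b, Ainv_g) / (1.0 + 0.5 * tau * inner(b, Ainv_b))   # s = (b, u^{n+1})
    u_new = Ainv_g - 0.5 * tau * s * Ainv_b
    q += 0.5 * (s - inner(b, u))                     # update of the auxiliary variable
    u = u_new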
§ HIGH-ORDER LINEARLY IMPLICIT AND ENERGY-STABLE SCHEMES
§.§ Construction of time integrators
Let us further reformulate (<ref>) as follows:
{
v_t = 𝒢( ℒv + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ),
u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]),
q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ),
.
equipped with the initial conditions
u(𝐱, 0) = v(𝐱, 0) = u_0(𝐱), q(0) = √((f(u_0(𝐱), ∇ u_0(𝐱)), 1) + C).
We first demonstrate the equivalence between the reformulated system (<ref>), (<ref>) and the original system (<ref>).
Suppose that ℒ is a linear, self-adjoint, and positive definite operator. The reformulation (<ref>) and the initial condition (<ref>) are equivalent to (<ref>).
According to the definition of q (<ref>) and introducing v(t) = u(t), it is evident that the original system (<ref>) implies (<ref>). We will now demonstrate that the combination of (<ref>) and (<ref>) leads to (<ref>). Subtracting the second equation from the first equation of (<ref>) yields
u_t - v_t = 𝒢( ℒ u - ℒ v ).
Taking the discrete inner product with ℒ u - ℒ v on both sides of the above equation produces
1/2d/dt(ℒ(u - v), u - v ) = (𝒢ℒ(u - v), ℒ(u - v)) ≤ 0.
Due to the positive definiteness of ℒ and (<ref>), we conclude that
u(t) = v(t), ∀ 0 ≤ t ≤ T.
Inserting (<ref>) into the third equation of (<ref>), we obtain
q_t = ( δ W/δ u[v] , v_t ) + ( δ W/δ∇ u[v], ∇ v_t ) = d W[v]/dt.
Combining (<ref>), (<ref>), and (<ref>) results in
q = W[v] = W[u].
Finally, it holds from the definition of W that
2q δ W/δ u = δ F/δ u, 2q ∇·δ W/δ∇ u = ∇·δ F/δ∇ u.
Substituting the above results into (<ref>) yields (<ref>), which completes the proof.
The positive definiteness of ℒ is reasonable for most phase field models. For the CH equation with Neumann or periodic boundary conditions, we have ℒ = -Δ and 𝒢 = Δ. The mass conservation law guarantees the invertibility of ℒ. A similar argument applies to the MBE equation. For the AC equation, we have ℒ = -Δ and 𝒢 = -I. Although ℒ is only positive semi-definite in this case, we can introduce a stabilization parameter κ and equivalently recast the AC equation as
u_t = - ( (κ I - Δ) u - (κ u + f(u)) ) := - (ℒ_κ u + f_κ (u)).
Then, ℒ_κ = κ I - Δ is positive definite.
The extension of (<ref>) results in a more complex system (<ref>). However, this reformulation provides an elegant platform for developing high-order, linearly implicit, and energy-stable schemes, as will be demonstrated in subsequent contexts. It should be noted that the equivalent reformulation of (<ref>) is not unique, and other similar reformulations can be employed to develop numerical schemes through the frameworks described in this paper. For simplicity, we only consider (<ref>) in this section.
System (<ref>) is an extension of the original SAV approach (<ref>) proposed in <cit.>. Some other SAV approaches have recently gained popularity, including the exponential SAV approach <cit.> and the generalized SAV approach <cit.>. In <cit.>, Ju et al. have also introduced a novel exponential SAV approach to preserve both MBP and EDL for the AC equations. These approaches can also be extended similarly to (<ref>) and discretized by the methods outlined in subsequent contexts to obtain high-order and energy-stable schemes. For simplicity, we will only use the original SAV approach for illustrations.
Assume that u^n, v^n, and q^n have already been determined. The SAV-ARK methods are outlined below:
[SAV-ARK]
The intermediate variables v_ni, u_ni, and q_ni are solved from
{
v_ni = v^n + τ∑_j=0^s-1 (a_ijv̇^ℒ_nj +â_ijv̇^𝒩_nj) ,
u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj,
v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢 ( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ),
u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ),
q̇_ni = (δ W/δ u[v_ni], u̇_ni ) + ( δ W/δ∇ u[v_ni], ∇u̇_ni ).
.
Then, the solution at t_n+1 is
v^n+1 = v^n + τ∑_i=0^s-1 b_i(v̇_ni^ℒ + v̇_ni^𝒩), u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni.
We note here that linearly implicit schemes can be obtained by carefully choosing the RK coefficients in Algorithm <ref>. One effective choice is to discretize u and q with DIRK methods and v with ERK methods. These methods will be referred to as SAV-DIARK methods in what follows.
It is important to emphasize that by introducing z = (v, u, q)^T, Algorithm <ref> can be regarded as an ARK method as follows:
z_ni = z^n + τ∑_j=0^s-1 (a_ijΦ(z_nj) + â_ijΨ(z_nj) ),
z^n+1 = z^n + τ∑_i=0^s-1 b_i ( Φ(z_ni) + Ψ(z_ni) ),
where
Φ(z) =
(
[ 𝒢ℒ u; 𝒢( ℒu + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], u̇) + ( δ W/δ∇ u, ∇u̇) ]), Ψ(z) =
(
[ 𝒢( 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ); 0; 0 ]).
This allows us to easily derive the order conditions of the proposed schemes by the order conditions of ARK methods.
To further simplify and improve the stability of Algorithm <ref>, we introduce the following modified SAV-ARK (SAV-MARK) scheme.
[SAV-MARK]
The intermediate variables v_ni, u_ni, q_ni are solved from
{
v_ni = u^n + τ∑_j=0^s-1 ( a_ijv̇^ℒ_nj + â_ijv̇^𝒩_nj) ,
u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj,
v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]),
u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ),
q̇_ni = (δ W/δ u[v_ni], u̇_ni) + ( δ W/δ∇ u[v_ni], ∇u̇_ni).
.
Then, the solution at t_n+1 is
u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni.
In contrast to Algorithm <ref>, Algorithm <ref> does not require updating the variable v at integer time steps. This modification not only reduces computational costs but also improves the stability of the scheme in practice. Additionally, thanks to (<ref>), this modification does not affect the accuracy of Algorithm <ref>.
§.§ Energy stability and solvability
Suppose the RK methods employed on u in Algorithms <ref> and <ref> are algebraically stable. Then, SAV-ARK and SAV-MARK methods are unconditionally energy-stable in the sense
ℱ(u^n+1, q^n+1) ≤ℱ(u^n, q^n), 0 ≤ n ≤ N_t - 1.
By the definition of (<ref>) and the self-adjointness of ℒ, we can derive
1/2 (u^n+1, ℒ u^n+1) - 1/2(u^n, ℒ u^n)
= τ∑_i=0^s-1 b_i (u̇_ni, ℒ u^n)
+ τ^2/2∑_i=0^s-1∑_j=0^s-1 b_ib_j (u̇_ni, ℒu̇_nj).
Substituting u^n = u_ni - τ∑_j=0^s-1a_iju̇_nj into the
above equation and observing that
∑_i = 0^s-1∑_j = 0^s-1 b_i a_ij(u̇_ni, ℒu̇_nj) = ∑_i=0^s-1∑_j=0^s-1 b_ja_ji(u̇_ni, ℒu̇_nj),
we obtain
1/2 (u^n+1, ℒu^n+1) - 1/2(u^n, ℒ u^n)
= τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni) - τ^2/2∑_i = 0^s-1∑_j=0^s-1 M_ij (u̇_ni, ℒu̇_nj)
≤τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni).
The last inequality is a result of the positive definiteness of M and ℒ. Using a similar procedure, we have
(q^n+1)^2 - (q^n)^2 ≤ 2τ∑_i=0^s-1 b_i q_niq̇_ni.
Taking the discrete inner products of the sixth and last equations of (<ref>) with ℒu_ni and 2q_ni, respectively, and adding the obtained results together yield
(u̇_ni, ℒu_ni) + 2 q_niq̇_ni = (𝒢μ_ni, μ_ni) ≤ 0,
where μ_ni = ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]. The desired result is thus obtained by combining (<ref>)–(<ref>) with the condition b_i ≥ 0.
The proposed approach uses the quadratic energy (as depicted in equation (<ref>)) instead of the original one. When higher-order time discretization is applied to (<ref>), the resulting quadratic energy becomes a high-order approximation of the original energy. Although the SAV method may be criticized for this weakness, recent studies have attempted to overcome it. For example, Jiang et al. introduced the relaxed SAV approach in <cit.> to connect the modified and original energy at a discrete level, and <cit.> proposed an alternating approach that combines the SAV and Lagrange multiplier methods to preserve the original energy. Our technique can also be utilized to develop higher-order schemes based on these approaches.
It should be noted that Theorem <ref> guarantees the boundedness of the numerical solutions {u^n}_n=0^N_t under the energy norm ‖·‖_ℒ, where ‖u‖_ℒ := √((ℒu, u)). However, the solutions {v^n}_n=0^N_t obtained from Algorithm <ref> may not be bounded. Hence, Algorithm <ref> is expected to be more stable in practical applications since it does not involve the update of v^n.
Let us now concentrate on the solvability of SAV-MDIARK methods. Notice that the proof for SAV-DIARK methods is similar, and we omit it here.
Assume that the mobility matrix satisfies 𝒢 = - ℬ^* ℬ and the RK coefficients a_ii≥ 0 in Algorithm <ref>. The semi-discrete SAV-MDIARK scheme is then uniquely solvable when the time step is sufficiently small. Here, ℬ is a linear operator, ℬ^⋆ represents its adjoint.
Since we are considering the DIRK method, the scheme to solve the intermediate variable v_ni can be reformulated as follows:
v_ni = u^n + τ a_ii𝒢ℒ v_ni + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + a_ijv̇_nj^𝒩).
Notably, we can solve the above system successively for i from 0 to s-1, where the only unknown in each step is v_ni. Combining the self-adjointness of ℒ and the assumption on 𝒢, one readily obtains the decomposition 𝒢ℒ = -𝒜^⋆𝒜. Therefore, the solution of v_ni can be regarded as the minimizer of the convex functional defined by:
𝒮[v] = 1/2 (‖v‖^2 + τ a_ii‖𝒜 v‖^2) - (u^n + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + â_ijv̇_nj^𝒩), v).
Therefore, the unique solvability of v_ni is straightforward. Then, we prove the solvability of the system coupled by u_ni and q_ni. Let f_ni = δ W/δ u[v_ni] - ∇·δ W/δ∇ u[v_ni]. Thanks to the fact that q_ni is independent of space, it can be updated by
q_ni = q^n + τ∑_j=0^i-1a_ijq̇_nj + τ a_ii (𝒜f_ni, 𝒜 u^1_ni ) /1 + 2τ a_iiℬ f_ni^2 - τ a_ii (𝒜f_ni, 𝒜 u_ni^2) ,
where u^1_ni and u_ni^2 are defined by
u_ni^1 = argmin _u1/2 (‖u‖^2 + τ a_ii‖𝒜 u‖^2) - (u^n + τ∑_j=0^i-1 a_iju̇_nj, u ),
u_ni^2 = argmin _u1/2 (‖u‖^2 + τ a_ii‖𝒜 u‖^2) - 2τ a_ii (𝒢 f_ni, u ).
Since the time step is assumed to be sufficiently small, the solvability of the system follows immediately.
§ THEORETICAL ANALYSIS
§.§ Estimates of the global error
In this section, we present global error estimates for the semi-discrete SAV-MARK methods. To simplify the presentation, we consider only the classical L^2 gradient flow, i.e., 𝒢 = -1, ℒ = -Δ and
ℱ(u) = 1/2∇ u^2 + ∫_Ω F(u) d𝐱.
Without loss of generality, our subsequent analysis is based on the following assumptions 𝒜1–𝒜3:
𝒜1: The implicit component of the ARK method is algebraically and diagonally stable.
𝒜2: The exact solution of the system is sufficiently smooth in both space and time.
𝒜3: The nonlinearity F(·) is twice differentiable.
The SAV-MARK scheme for the AC equation is given by
v_ni = u^n + τ∑_j = 0^s-1 (a_ijΔ v_nj - 2 a_ij q_nj W^'(v_nj)), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj,
q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj,
u^n+1 = u^n + ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + ∑_i=0^s-1 b_i q̇_ni,
where
u̇_ni = Δ u_ni - 2 q_ni W^'(v_ni) , q̇_ni = (W^'(v_ni), u̇_ni), W^'(u) = F^'(u)/ 2 √(∫_Ω F(u) dx + C_0 ).
The major obstacle in establishing error estimates for the SAV-MARK method is obtaining an a priori L^∞ bound for the intermediate stages v_ni. To address this issue, previous works truncated the nonlinearity to a globally Lipschitz function with compact support. This technique is reliable when the continuous solution is bounded and the numerical solution is sufficiently close to it. Here, we will adopt a similar approach. Let U(𝐱, t) be the exact solution to the L^2 gradient flow and Q(t) = √(∫_Ω F(U(𝐱, t)) d𝐱 + C ). We define
M_u = U(𝐱, t)_C([0, T]; L^∞ (Ω)), Ṁ_u = U̇(𝐱, t)_C([0, T]; L^∞(Ω)), M_q = max_0 ≤ t ≤ T|Q(t)|.
The constants above are well-defined by assumption 𝒜2 and the definition of Q(t). We set ℬ = M_u + 1 and let
W^'_ℬ(s) = W^'(s) ρ(s/ℬ),
where ρ(s) is a smooth function with compact support, such that
ρ(s) =
{ 1, 0 ≤ |s| ≤ 1,
∈ [0, 1], 1 ≤ |s| ≤ 2,
0, |s| ≥ 2.
.
One readily confirms that W^'_ℬ(·) is globally Lipschitz continuous, and
W^'_ℬ (s) = W^'(s), ∀ 0 ≤ |s| ≤ℬ,
|W^'_ℬ(s)| ≤ L_1, | W^'_ℬ (r) - W^'_ℬ (s) | ≤ L_2 |r - s|.
Following <cit.>, we introduce reference solutions 𝒱_ni, 𝒰_ni, 𝒬_ni, 𝒰^n and 𝒬^n, such that
𝒱_ni = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ𝒱_nj - 2 a_ij𝒬_nj W^'_ℬ (𝒱_nj)),
𝒰_ni = U(t_n) + τ∑_j=0^s-1 a_ij𝒰̇_nj, 𝒰̇_ni = Δ𝒰_nj - 2 𝒬_nj W^'_ℬ(𝒱_nj),
𝒬_ni = Q(t_n) + τ∑_j=0^s-1 a_ij𝒬̇_nj, 𝒬̇_ni = (W^'_ℬ(𝒱_ni), 𝒰̇_ni).
These reference solutions play important roles in obtaining global estimates for the SAV-MARK methods.
Suppose that the time step satisfies
τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1},
where the constants above will be specified in the subsequent derivations. We have the following estimates for the intermediate solutions 𝒱_ni
𝒱_ni_L^∞≤ M_u + 1/2, 0 ≤ n ≤ N_t, 0 ≤ i ≤ s-1.
Moreover,
∑_i= 0^s-1 (|Q(t_ni) - 𝒬_ni| + U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni) ≤ c_3 τ^2,
∑_i=0^s-1Δ (U(t_ni) - 𝒱_ni)≤ c_4 τ.
Since W^'_ℬ(U(t_ni)) = W^'(U(t_ni)), the exact solutions satisfy
U(t_ni) = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ U(t_nj) - 2 a_ij Q(t_nj) W^'_ℬ (U(t_nj))) + η^v_ni,
U(t_ni) = U(t_n) + τ∑_j=0^s-1 a_ijU̇(t_nj) + η_ni^u, U̇_ni = Δ U(t_ni) - 2 Q(t_ni) W^'_ℬ(U(t_ni)),
Q(t_ni) = Q(t_n) + τ∑_j=0^s-1 a_ijQ̇(t_nj) + η^q_ni, Q̇(t_ni) = (W^'_ℬ(U(t_ni)), U̇(t_ni)),
where
∑_i=0^s-1 (η_ni^v + η_ni^u + |η_ni^q|) ≤ c_1 τ^2.
Subtracting the second and sixth equations of (<ref>) from that of (<ref>) yields
U(t_ni) - 𝒱_ni = τ∑_j=0^s-1 ( a_ijΔ (U(t_nj) - 𝒱_nj) - 2 a_ijξ_nj ) + η_ni^v,
U(t_ni) - 𝒰_ni = τ∑_j=0^s-1 a_ij (U̇(t_nj) - 𝒰̇_nj) + η_ni^u,
Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_nj) - 𝒬̇_nj) + η_ni^q,
where
U̇(t_ni) - 𝒰̇_ni = Δ (U(t_ni) - 𝒰_ni) - 2 ξ_ni, ξ_ni = Q(t_ni) W^'_ℬ(U(t_ni)) - 𝒬_ni W^'_ℬ (𝒱_ni),
Q̇(t_ni) - 𝒬̇_ni = (W^'_ℬ (U(t_ni)) - W^'_ℬ(𝒱_ni), U̇(t_ni)) + ( W^'_ℬ(𝒱_ni), U̇(t_ni) - 𝒰̇_ni ).
There is no difficulty in confirming that
ξ_ni ≤ M_u U(t_ni) - 𝒱_ni + L_1 |Q(t_ni) - 𝒬_ni|,
|Q̇(t_ni) - 𝒬̇_ni| ≤Ṁ_u L_2 U(t_ni) - 𝒱_ni + L_1 U̇(t_ni) - 𝒰̇_ni.
According to Assumption 𝒜1, there exists a positive definite diagonal matrix H = diag{h_0, h_1, ⋯, h_s-1}, such that M = H A+ A^TH is positive definite. Therefore, we can find a sufficiently small constant l, such that
M_l = (m_ij^l) = A^-TM A^-1 - 2 l H = A^-TH + H A^-1 - 2 l H
is positive definite. Moreover, let M_d = H A^-1, M_s = H A^-1A. Then,
0 ≤ 2 l ∑_i=0^s-1h_i U(t_ni) - 𝒱_ni^2 - 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni)
= 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj)
- 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni)
= 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, η_nj^v) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj)
- 4 τ∑_i,j = 0^s-1m^s_ij (U(t_ni) - 𝒱_ni, ξ_nj)
≤ 2 λ_d ∑_i=0^s-1U(t_ni) - 𝒱_ni∑_i=0^s-1η_ni^v - λ_l ∑_i=0^s-1U(t_ni) - 𝒱_ni^2
+ 4 λ_s (Ṁ_u L_2 + L_1) τ∑_i=0^s-1 U(t_ni) - 𝒱_ni ( ∑_i=0^s-1 |Q(t_ni) - 𝒬_ni| + ∑_i=0^s-1 U(t_ni) - 𝒱_ni ),
where λ_α and λ_α, α = d,l,s,h are the maximum and minimum eigenvalues of M_d, M_l, M_s, and H, respectively. Consequently,
∑_i=0^s-1U(t_ni) - 𝒱_ni≤4 s λ_s (Ṁ_u L_2 + L_1)/λ_lτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| ) + 2 s λ_d/λ_l∑_i=0^s-1η_ni^v.
Following the same procedure, we can derive
∑_i=0^s-1U(t_ni) - 𝒰_ni≤4 s λ_h (Ṁ_u L_2 + L_1)/λ_lτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| ) + 2 s λ_h/λ_l∑_i=0^s-1η_ni^u.
Combining (<ref>) with the second equation of (<ref>), we have
∑_i=0^s-1U̇ (t_ni) - 𝒰̇_ni≤4s λ_d λ_h (Ṁ_u L_2 + L_1)/λ_l λ_h∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| )
+ s λ_d (2 λ_h + λ_l)/λ_lλ_hτ^-1∑_i=0^s-1η_ni^u.
Subtracting the fourth equation of (<ref>) with that of (<ref>) gives
Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_ni) - 𝒬̇_ni) + η_ni^q.
Repeating to use the above technique and combining (<ref>) and (<ref>) then result in
∑_i=0^s-1|Q(t_ni) - 𝒬_ni| ≤2s λ_h (λ_h + 2s λ_d λ_h L_1)(Ṁ_u L_2 + L_1) /λ_l λ_hτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni|)
+ 2s^2 λ_h λ_d (2 λ_h + λ_l) L_1 /λ_l^2 λ_h ∑_i=0^s-1 (η_ni^u + |η_ni^q|).
Adding (<ref>), (<ref>) and (<ref>) together yields
∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| )
≤ c_2 τ∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| ) + c_3/2τ^2.
It follows by setting τ≤ (2c_2)^-1 that
∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| ) ≤ c_3 τ^2.
Using equation (<ref>), we then demonstrate the boundedness of 𝒱_ni for sufficiently small τ. Inserting (<ref>) into the first equation of (<ref>) infers
∑_i=0^s-1Δ (U(t_ni) - 𝒱_ni)≤λ_d + 2 λ_s (M_u + L_1)/λ_h τ ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| + η_ni^v ) ≤ c_4 τ.
The Sobolev inequality f_L^∞≤ cf_H^2 and the triangle inequality give us
𝒱_ni_L^∞ ≤U(t_ni)_L^∞ + U(t_ni) - 𝒱_ni_L^∞
≤ M_u + c U(t_ni) - 𝒱_ni_H^2≤ M_u + 2c(c_3 + c_4) τ.
The estimate for 𝒱_ni in Lemma <ref> then follows by setting τ≤ (4c(c_3 + c_4))^-1. This completes the proof.
Using Taylor's formula and Lemma <ref>, one readily confirms that when the time step satisfies the condition of Lemma <ref>, the reference solutions further satisfy
U(t_n+1) = U(t_n) + τ∑_i=0^s-1 b_i 𝒰̇_ni + η_n+1^u, Q(t_n+1) = Q(t_n) + τ∑_i=0^s-1 b_i 𝒬̇_ni + η_n+1^q,
with
η_n+1^u_H^1 + η_n+1^q≤ c_5 τ^p+1.
We proceed to prove the convergence of the modified scheme obtained by replacing the nonlinear term W^'(·) in (<ref>) with W^'_ℬ(·). For clarity, we retain the original notation for the solution of this modified scheme.
Our proof demonstrates that v_ni_L^∞≤ M_u + 1 for sufficiently small time steps. Consequently, W^'_ℬ(v_ni) = W^'(v_ni), which indirectly confirms the convergence of the SAV-MARK method (<ref>).
Let
𝒥_ni = 𝒱_ni - v_ni, ℰ_ni = 𝒰_ni - u_ni, 𝒟_ni = 𝒬_ni - q_ni.
Define solution errors
E^n+1 = U(t_n+1) - u^n+1, D^n+1 = Q(t_n+1) - q^n+1.
Let c_⋆ =((3c_5^2 + c_11)Texp(2c_12T))^1/2, and the time step
τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1, (2c_6)^-1, (4c(c_⋆ c_7 + c_8))^-1/p-1, (2c_12)^-1}.
Then, the SAV-MARK method is convergent in the sense
E^n + |D^n | ≤ c_⋆τ^p, 0 ≤ n ≤ N_t.
We will complete the proof by mathematical induction. As SAV-MARK is a one-step method, it is enough to prove the result for n = l+1 while assuming it holds for n = l. Let n = l. Subtracting (<ref>) and (<ref>) from (<ref>), we get
𝒥_li = E^l + τ∑_j=0^s-1 (a_ijΔ𝒥_lj - 2 a_ijζ_lj),
ℰ_li = E^l + τ∑_j=0^s-1 a_ijℰ̇_lj, 𝒟_li = D^l + τ∑_j=0^s-1 a_ij𝒟̇_lj,
E^l+1 = E^l + τ∑_i=0^s-1 b_i ℰ̇_li + η_l+1^u, D^l+1 = D^l + τ∑_i=0^s-1 b_i 𝒟̇_li + η_l+1^q,
where
ℰ̇_li = Δℰ_li - 2ζ_li, ζ_li = 𝒬_li (W^'_ℬ(𝒱_li) - W^'_ℬ(v_li) ) + 𝒟_li W^'_ℬ(v_li),
𝒟̇_li = (W^'_ℬ (𝒱_li) - W^'_ℬ(v_li), 𝒰̇_li) + (W^'_ℬ (v_li), ℰ̇_li).
Based on the proof of Lemma <ref>, we can conclude that |𝒬_li| ≤ℳ_q and |𝒰̇_li| ≤ℳ̇_u. Applying the properties of W^'_ℬ(·) then yields
ζ_li≤ (ℳ_q L_2 + L_1) (𝒥_li + |𝒟_li| ), |𝒟̇_li| ≤ (ℳ̇_u L_2 + L_1)( 𝒥_li + ℰ̇_li ).
Furthermore, using (<ref>) and the same technique employed in Lemma <ref>, we can still arrive at
∑_i=0^s-1𝒥_li ≤2 s λ_s (ℳ_q L_2 + L_1)/λ_lτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) + 2s λ_d/λ_lE^l,
∑_i=0^s-1ℰ_li ≤2 s λ_h (ℳ_q L_2 + L_1)/λ_lτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) +2s λ_h/λ_lE^l,
∑_i=0^s-1 |𝒟_li| ≤s λ_h (λ_d λ_h + 2 s λ_d λ_h L_1)(ℳ̇_u L_2 + L_1) /λ_l^2 λ_hτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| )
+ s^2 λ_h λ_d (2λ_h + λ_l) (ℳ̇_u L_2 + L_1)/λ_l^2 λ_h |D^l|,
∑_i=0^s-1ℰ̇_li ≤2s λ_d λ_h(ℳ_q L_2 + L_1)/λ_l λ_h∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) + s λ_d (λ_l + 2 λ_h)/λ_l λ_hτ^-1E^l.
Consequently,
∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) ≤ c_6 τ∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) + c_7/2(E^l + |D^l|).
The restriction τ≤ (2c_6)^-1 and the induction produce
∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) ≤ c_⋆ c_7 τ^p.
Combining the above estimate with first equation of (<ref>) then yields
Δ𝒥_li≤ c_8 τ^p-1,
where c_8 = λ_d c_⋆ c_7 + s λ_d c_⋆ + 2 λ_s (ℳ_q L_2 + L_1)c_⋆ c_7/λ_h.
Employing the inequalities ∇ f^2 ≤fΔ f and f_L^∞≤ c f_H^2, it can be shown that if τ≤ (4c(c_⋆ c_7 + c_8))^-1/p-1,
v_li_H^2 ≤𝒱_li_H^2 + 𝒥_li_H^2≤𝒱_li_H^2 + 2(c_⋆ c_7 + c_8) τ^p-1≤ c_9 ,
v_li_L^∞ ≤𝒱_li_L^∞ + 2c(c_⋆ c_7 + c_8) τ^p-1≤ M_u + 1.
Let us now provide estimates for E^l+1 and D^l+1. Taking the difference between E^l+1^2 and E^l^2 and using the fourth equation of (<ref>) yields
E^l+1^2 - E^l^2 = 2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj)
+ 2 (E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) + η_l+1^u^2.
Next, we estimate each of the terms on the right-hand side of (<ref>) individually. Based on the second equation of (<ref>) and the algebraic stability condition, we deduce
2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj) + 2 τ∑_i=0^s-1 b_i ∇ℰ_li^2
= - τ^2 ∑_i=0^s-1∑_j=0^s-1 m_ij (ℰ̇_li, ℰ̇_lj) + 2τ∑_i=0^s-1 b_i (ℰ_li, ℰ̇_li) + 2 τ∑_i=0^s-1 b_i ∇ℰ_li^2
≤ 4(ℳ_q L_2 + L_1 + 1) τ∑_i=0^s-1 b_i (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2).
Using the Cauchy-Schwarz inequality and ab ≤τ/2 a^2 + 1/(2τ) b^2 yields
(E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) = E^lη_l+1^u + τ∑_i=0^s-1 b_i (Δℰ_li - 2ζ_li, η^u_l+1)
≤τ/2E^l^2 + 1/2τη_l+1^u^2 + τ∑_i=0^s-1 b_i (∇ℰ_li∇η_l+1^u + 2 ζ_liη_l+1^u)
≤τ/2E^l^2 + τ/2∑_i=0^s-1 b_i ∇ℰ_li^2 + 2(ℳ_q L_2 + L_1) τ∑_i=0^s-1 b_i (𝒥_li^2 + |𝒟_li|^2) + 2 c^2_5 τ^2p+1.
Inserting (<ref>) and (<ref>) into (<ref>) infers
E^l+1^2 + τ∑_i=0^s-1 b_i ∇ℰ_li^2 ≤ (1 + τ )E^l^2
+ 8(ℳ_q L_2 + L_1 + 1)τ∑_i=0^s-1 b_i (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) + 3c_5^2 τ^2p+1.
Analogously,
|D^l+1|^2 ≤ (1 + c_9 τ) |D^l|^2 + c_10τ∑_i=0^s-1 (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) + c_11τ^2p+1 .
Moreover,
∑_i=0^s-1 (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) ≤ ( ∑_i=0^s-1𝒥_li + ℰ_li + |𝒟_li| )^2
≤ 2 c_7^2 ( E^l^2 + |D^l|^2 ).
Collecting (<ref>), (<ref>), (<ref>) produces
E^l+1^2 + |D^l+1|^2 ≤ (1 + c_12τ) (E^l^2 + |D^l|^2) + (3c_5^2 + c_11) τ^2p+1.
We observe that c_5, c_11, c_12 are independent of c_⋆ and discrete parameters according to the derivations. By selecting τ≤ (2c_12)^-1 and applying the discrete Gronwall inequality, we can derive the desired result with c_⋆ = ((3c_5^2 + c_11)Texp(2c_12T))^1/2. Therefore, the proof is completed.
§.§ Relationships with the SAV-RK methods
In <cit.>, Li et al. developed high-order unconditionally energy-stable schemes based on SAV techniques and RK methods. To obtain arbitrarily high-order and linearly implicit schemes, they proposed an iterative procedure to obtain a sufficiently accurate prediction of u, which is then used to discretize the nonlinear terms. In this section, we demonstrate that every SAV-RK method can be viewed as an ARK method applied to an appropriate reformulation of (<ref>). This new perspective enables us to systematically investigate the order conditions of existing works by utilizing the order conditions of ARK approaches. Applying their SAV-RKPC(M) method to gradient flows leads to the following algorithm.
[SAV-RKPC(M)]
Given a fundamental RK method with coefficients (A, b, c), the intermediate variables are calculated by the prediction-correction procedure as
1. Prediction: We initialize u_ni^(0) = u^0, q_ni^(0) = q^0. Let M be a positive integer. Then, we iteratively compute u_ni^(m) and q_ni^(m) for m = 0 to M-1 by
{ u_ni^(m+1) = u^n + τ∑_j=0^s-1 a_iju̇_nj^(m+1), q_ni^(m+1) = q^n + τ∑_j=0^s-1a_ijq̇_nj^(m+1)
u̇_ni^(m+1) = 𝒢( ℒu_ni^(m+1) + 2q_ni^(m)δ W/δ u [u_ni^(m)] - 2 q_ni^(m)∇·δ W/δ∇ u[u_ni^(m)] ),
q̇_ni^(m+1) = (δ W/δ u [u_ni^(m+1)], u̇_ni^(m+1)) + (δ W/δ∇ u[u_ni^(m+1)], ∇u̇_ni^(m+1)).
.
If max_i u_ni^(m+1) - u_ni^(m)_∞≤ TOL, we stop the iterations and set u_ni^⋆ = u_ni^(m+1). Otherwise, we set u_ni^⋆ = u_ni^(M).
2. Correction: For the predicted u_ni^⋆, we compute the intermediate stages u̇_ni and q̇_ni as follows:
{ u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_i^n = q^n + τ∑_j=0^s-1 a_ijq̇_nj,
u̇_ni = 𝒢 ( ℒu_ni + 2q_niδ W/δ u[u_ni^⋆] - 2q_ni∇·δ W/δ∇ u[u_ni^⋆] ),
q̇_ni = ( δ W/δ u [u_ni^⋆], u̇_ni) + ( δ W/δ∇ u [u^⋆_ni], ∇u̇_ni ),
.
and then update u^n+1, q^n+1 by
u^n+1 =u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni.
We now show that Algorithm <ref> can be regarded as an ARK method for the following alternative reformulation of (<ref>).
{
w_t = 𝒢( ℒv + 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ),
v_t = 𝒢( ℒv + 2r δ W/δ u[v] - 2r ∇·δ W/δ∇ u[v] ),
u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]),
r_t = (δ W/δ u[v], v^ℒ_t) + (δ W/δ u[w], v^𝒩_t) + ( δ W/δ∇ u[v], ∇ v_t^ℒ) + ( δ W/δ∇ u[w], ∇ v^𝒩_t ),
q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ),
.
where
v_t^ℒ = 𝒢ℒ v, v^𝒩_t = 𝒢 (2 r δ W/δ u[v] - 2 r δ W/δ∇ u[v]).
Let us explain the equivalence between (<ref>) and (<ref>). Subtracting the second from the first equation of (<ref>) and investigating the initial condition, we obtain:
v(t) = w(t), ∀ 0 < t ≤ T.
Substituting this formula into the fourth equation of (<ref>), subtracting the third equation of (<ref>) from the second and the fifth equation of (<ref>) from the fourth results in
u_t - v_t = 𝒢( ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) ),
q_t - r_t = ( δ W/δ u[v], u_t - v_t) + ( δ W/δ∇ u[v], ∇ u_t - ∇ v_t ).
Taking the inner products on both sides of the first and the second equations in (<ref>) with ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) and 2(q - r), respectively, and adding the resulting equations together yields:
d/dt( 1/2 (u - v, ℒ(u - v)) + (q - r)^2 ) ≤ 0.
This implies u(t) = v(t), q(t) = r(t). The remaining steps follow the proof of Lemma <ref>, which we omit here for brevity.
Let z = (w, v, u, r, q)^T. We split the reformulated system (<ref>) as follows
z_t = Φ_1(z) + Φ_2(z) + Φ_3(z) + Φ_4(z),
where
Φ_1(z) =
(
[ 0; 𝒢ℒ v; 𝒢 ( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], v^ℒ_t ) + ( δ W/δ∇ u[v], ∇ v_t^ℒ ); ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ) ]), Φ_2(z) =
(
[ 0; 𝒢 ( 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ); 0; ( δ W/δ u[v], w^𝒩_t ) + ( δ W/δ∇ u[v], ∇ w_t^𝒩 ); 0 ]),
Φ_3(z) =
(
[ 𝒢ℒ v; 0; 0; 0; 0 ]), Φ_4(z) =
(
[ 𝒢 ( 2 r δ W/δ𝐮[𝐯] - 2 r ∇·δ W/δ∇𝐮[𝐯] ); 0; 0; 0; 0 ]).
Employing four different RK methods to (<ref>) yields the following SAV-ARKII method
{z_ni = z^n + τ∑_j=0^s-1( a_ijΦ_1(z_nj) + a_ijΦ_2(z_nj) + a_ijΦ_3 (z_nj) + a_ijΦ_4(z_nj) ),
z^n+1 = z^n + τ∑_i=0^s-1 b_i (Φ_1 (z_ni) + Φ_2(z_ni) + Φ_3 (z_ni) + Φ_4(z_ni) ).
.
Furthermore, we rewrite the above scheme componentwise and employ the techniques outlined in Section <ref> to modify the obtained scheme, ultimately resulting in the SAV-MARKII method shown below.
[SAV-MARKII]
We solve the intermediate stages from
{ w_ni = u^n + τ∑_j=0^s-1 ( a_ijv̇_nj^ℒ + a_ijv̇^𝒩_nj ),
v_ni = u^n + τ∑_j=0^s-1 (a_ijv̇_nj^ℒ + a_ijv̇^𝒩_nj), r_ni = r^n + τ∑_j=0^s-1( a_ijṙ_nj^ℒ + a_ijṙ_nj^𝒩),
u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj,
v̇_ni^ℒ = 𝒢ℒ v_ni, ṙ_ni^ℒ = ( δ W/δ u[v_ni], v̇_ni^ℒ) + (δ W/δ∇ u[v_ni], ∇v̇_ni^ℒ ),
v̇_ni^𝒩 = 𝒢 ( 2r_niδ W/δ u[v_ni] - 2r_niδ W/δ∇ u[v_ni] ) , ṙ_ni^𝒩 = ( δ W/δ u[v_ni], v̇_ni^𝒩) + (δ W/δ∇ u[v_ni], ∇v̇_ni^𝒩 ),
u̇_ni = 𝒢(ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_niδ W/δ∇ u[v_ni]) , q̇_ni = (δ W/δ u [v_ni], u̇_ni ) + (δ W/δ∇ u [v_ni], ∇u̇_ni).
.
Then, we update
u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni.
Consider a SAV-RKPC(M) method associated with the fundamental RK method (A, b, c) of stage s. Then, it can be regarded as a SAV-MARKII method with the tableaux
[ 𝐜 𝐀; 𝐛^T ]
=
[ 0 O O; 1_M ⊗ c O I_M ⊗ A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜 𝐀; 𝐛^T ]
=
[ 0 O O; 1_M ⊗ c I_M ⊗ A O; 0^T (𝐞_M ⊗ b)^T ],
[ 𝐜 𝐀; 𝐛^T ]
=
[ 1_M ⊗ c O I_M ⊗ A; c O A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜 𝐀; 𝐛^T ]
=
[ 1_M ⊗ c I_M ⊗ A O; c A O; 0^T (𝐞_M ⊗ b)^T ],
where I_s represents the identity matrix, 𝐞_M = (0, 0, ⋯, 1)^T, and ⊗ denotes the Kronecker product. Notice that, we have w_n,i+ms = v_n,i+(m+1)s and r_n,i+ms = q_n,i+(m+1)s in Algorithm <ref>. In addition, the intermediate stages of Algorithm <ref> and <ref> are related as follows:
(u̇_ni^(m), q̇_ni^(m), q_ni^(m), u_ni^(m)) = (v̇^ℒ_n,i+ms+v̇^𝒩_n,i+ms, q̇_n,i+ms, q_n,i+ms, v_n,i+ms), u_ni^⋆ = v_n,i+Ms.
By Theorem <ref>, the consistency error of the SAV-RKPC(M) method can be investigated straightforwardly through the order conditions of the generalized ARK methods. Readers are referred to <cit.> for convenience. Taking the fourth-order Gauss SAV-RKPC (SAV-GRK4PC) method used in <cit.> as an example, the SAV-GRK4PC(1), SAV-GRK4PC(2), and SAV-GRK4PC(3) methods achieve second-, third-, and fourth-order accuracy, respectively, which agrees with the numerical experiments reported in <cit.>.
Although we have demonstrated that the SAV-GRK4PC(3) achieves fourth-order accuracy, it is advisable to carry out additional iterative steps in practical computations to guarantee the stability of the proposed method.
§ NUMERICAL EXPERIMENTS
In this section, we demonstrate the effectiveness of our methods in solving the 2D AC, CH, and MBE equations. The spatial domain is Ω = (x_L, x_R) × (y_L, y_R), and periodic boundary conditions are employed in all examples. To guarantee both accuracy and efficiency, we use the Fourier pseudo-spectral method for spatial discretization. Let N_x and N_y be positive integers. The spatial domain is uniformly partitioned with step sizes h_x = x_R - x_L/N_x and h_y = y_R - y_L/N_y. We define Ω_N = { (x_i, y_j) |x_i = x_L + i h_x, y_j = y_L + j h_y }, and 𝕄_N denotes the space of periodic grid functions on Ω_N. We use the notations ∇_N, ∇_N ·, and Δ_N to represent discrete gradient, divergence, and Laplace operators to the Fourier pseudo-spectral method, respectively. Readers are referred to <cit.> for details. Given u, v ∈𝕄_N, the discrete L^2 inner product, discrete L^2 and L^∞ norms are
(u, v)_N = h_x h_y ∑_j=0^N_x-1∑_k=0^N_y-1 u_jk v_jk, u_N = √((u, u)_N), u_∞ = max_0 ≤ j ≤ N_x-1 0 ≤ k ≤ N_y-1 |u_jk|.
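For later reference, these discrete quantities, together with the pseudo-spectral Laplacian Δ_N, can be realized with a few lines of NumPy. This is only a sketch of the standard FFT-based implementation on a uniform periodic grid; the function names are ours.

import numpy as np

def discrete_inner(u, v, hx, hy):
    """Discrete L2 inner product (u, v)_N on the uniform grid Omega_N."""
    return hx * hy * np.sum(u * v)

def discrete_norms(u, hx, hy):
    """Discrete L2 and L-infinity norms used in the refinement tests."""
    return np.sqrt(discrete_inner(u, u, hx, hy)), np.max(np.abs(u))

def spectral_laplacian(u, hx, hy):
    """Fourier pseudo-spectral Laplacian Delta_N u for periodic boundary conditions."""
    Nx, Ny = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(Nx, d=hx)
    ky = 2.0 * np.pi * np.fft.fftfreq(Ny, d=hy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    return np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(u)).real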
§.§ AC equation
To validate the convergence results presented in Theorem <ref>, we consider the following AC equation
u_t = ε^2 Δ u + u - u^3,
which can be obtained by setting 𝒢 = -1 and ℱ[u] = ∫_Ωε^2/2 |∇ u|^2 + 1/4 (u^2 - 1)^2 d𝐱 in (<ref>).
Employing the Fourier spectral method to (<ref>), the fully discrete system of the AC equation is to find (u_ni, v_ni, q_ni) ∈𝕄_N ×𝕄_N ×ℝ and (u^n+1, q^n+1) ∈𝕄_N ×ℝ, such that
v_ni = u^n +τ∑_j=0^s-1 (a_ijΔ_N v_nj - 2 â_ij q_nj W^'(v_nj)), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj,
q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni,
where
u̇_ni = Δ_N u_ni - 2q_ni W^' (v_ni), q̇_ni = (W^'(v_ni), u̇_ni)_N, W^'(u) = F^'(u)/2√( (F(u), 1 )_N +C_0 ).
It is worth mentioning that the discrete operator Δ_N satisfies a summation-by-parts formula. By following the procedures outlined in the proofs of Theorems <ref> and <ref>, we can confirm the energy stability and solvability of the above fully discrete scheme.
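To illustrate how one step of this fully discrete scheme can be organized in practice, the sketch below advances (u^n, q^n) for the AC equation. It exploits the fact that the (u_ni, q_ni) stage system decouples through the scalar q_ni, in the spirit of the solvability argument above, and solves the resulting Helmholtz-type systems in Fourier space. The diffusion coefficient ε^2 is written explicitly, kx2ky2 denotes the precomputed array of squared wavenumbers, and a diagonally implicit tableau A together with a strictly lower-triangular tableau Â (here Ah) is assumed; all names are ours and the snippet is a sketch rather than the reference implementation.

import numpy as np

def sav_mdiark_step(u, q, A, Ah, b, tau, kx2ky2, hx, hy, eps2, C0=1.0):
    """One step of the fully discrete SAV-MDIARK scheme for the 2D AC equation."""
    s = len(b)
    ip = lambda f, g: hx * hy * np.sum(f * g)                      # (f, g)_N
    lap = lambda f: np.fft.ifft2(-kx2ky2 * np.fft.fft2(f)).real    # Delta_N f
    solve = lambda f, a: np.fft.ifft2(np.fft.fft2(f) / (1.0 + a * eps2 * kx2ky2)).real
    Wp = lambda w: (w**3 - w) / (2.0 * np.sqrt(hx * hy * np.sum(0.25 * (w**2 - 1.0)**2) + C0))

    v_st, q_st, udot, qdot = [], [], [], []
    for i in range(s):
        aii = A[i, i]
        # v-stage: implicit (diagonal) in the Laplacian, explicit in the nonlinearity
        rv = u + tau * sum(A[i, j] * eps2 * lap(v_st[j]) for j in range(i)) \
               - 2.0 * tau * sum(Ah[i, j] * q_st[j] * Wp(v_st[j]) for j in range(i))
        vi = solve(rv, tau * aii)
        Wi = Wp(vi)
        # (u, q)-stage: the scalar q_i is eliminated first, then u_i is recovered
        ru = u + tau * sum(A[i, j] * udot[j] for j in range(i))
        rq = q + tau * sum(A[i, j] * qdot[j] for j in range(i))
        u1, u2 = solve(ru, tau * aii), solve(Wi, tau * aii)
        qi = (rq + tau * aii * ip(Wi, eps2 * lap(u1))) / (
             1.0 + 2.0 * tau * aii * ip(Wi, Wi)
                 + 2.0 * (tau * aii)**2 * ip(Wi, eps2 * lap(u2)))
        ui = u1 - 2.0 * tau * aii * qi * u2
        udi = eps2 * lap(ui) - 2.0 * qi * Wi
        qdi = ip(Wi, udi)
        v_st.append(vi); q_st.append(qi); udot.append(udi); qdot.append(qdi)

    u_new = u + tau * sum(b[i] * udot[i] for i in range(s))
    q_new = q + tau * sum(b[i] * qdot[i] for i in range(s))
    return u_new, q_new

For instance, the SAV-MDIARK(2,2,2) pair listed in the appendix can be passed as A = [[γ, 0], [1-2γ, γ]], Ah = [[0, 0], [1, 0]], b = [1/2, 1/2].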
We set the computational domain as Ω = (0, 1)^2, the parameter as ε = 0.01, and the initial condition as u_0 = 0.1 sin (2 π x) sin (2 π y). Since the exact solution is unavailable, we use the solution obtained by the SAV-MDIARK(5,6,4) method with N = 512 and τ = 10^-4 at the final time T = 1 as a reference. Then, a refinement test in time is conducted with N = 128 and different time steps τ = 0.1 × 2^-k (k = 1,2,3,4,5). Figure <ref> displays the discrete L^2-norm error of the solution at T = 1 computed by various methods as a function of the time step size in the logarithmic scale. All methods achieve their expected orders of accuracy.
§.§ CH equation
We consider the following Cahn-Hilliard model for immiscible binary fluids
u_t = λΔ (-ε^2 Δ u + u^3 - u),
where λ is a mobility parameter, and ε represents the width of the diffuse interface. The corresponding free energy functional is
ℱ[u] = ∫_Ωε^2/2|∇ u|^2 + 1/4(u^2 - 1)^2 d𝐱.
We introduce an auxiliary variable q = √(1/4∫_Ω (u^2 - 1 -κ)^2 d𝐱 + C|Ω|), where κ is a stabilized parameter. The energy functional (<ref>) is transformed into
ℱ[u, q] = ε^2/2∇ u^2 + κ/2u^2 + q^2 - κ^2 + 2κ + 4C/4 |Ω|.
(<ref>) is then reformulated into an equivalent model, as shown below
{
u_t = λΔ( -ε^2 Δ u + κ u + f_κ(u) q),
q_t = 1/2(f_κ(u), u_t).
.
We perform convergence tests in time by considering (<ref>) in the spatial domain Ω = (0, 2π)^2 with parameters λ = 0.01 and ε = 1. As the exact solution of (<ref>) is not available, we construct a manufactured solution ϕ(x, y, t) = sin(x)sin(y)cos(t) to (<ref>) by introducing a nonhomogeneous source term on the right-hand side of (<ref>). We use 128 × 128 wave numbers for the spatial discretization. Subsequently, (<ref>) is integrated using various methods until T = 1 with different time steps τ = 0.2 × (2k)^-1 (k = 1,2,3,4,5,6,7,8). The numerical solution at the final time is recorded to evaluate errors in the refinement tests.
Figure <ref> plots the L^2 and L^∞ errors of different methods against the time step in a logarithmic scale. All the methods achieve the expected convergence rate. Among the second-order schemes, the SAV-MDIARK(2,2,2) method exhibits higher accuracy than the SAV-MCNRK2 method. Additionally, the SAV-MDIARK(2,2,2) scheme with γ = 3 + √(3)/6 performs better than with γ = 1/4; although the latter preserves the dissipative rate, the former is more stable in practice. Among the third-order schemes, the SAV-MDIARK(4,5,3) exhibits the highest accuracy and unexpectedly results in superconvergence in this test. This phenomenon can be attributed to the smoothness of the prescribed solution. Further accuracy tests of this method will be conducted in subsequent examples. When investigating the fourth-order schemes, we present the results of both the SAV-MARK methods and their corresponding SAV-ARK methods. Notably, the convergence rate of the SAV-MARK methods is consistent with that of the SAV-ARK methods, confirming that the modified Algorithm <ref> possesses the same accuracy as Algorithm <ref>.
To thoroughly investigate the performance of the proposed schemes, we consider the CH equation (<ref>) with the initial condition
ϕ_0(x, y) = 0.05 ( cos(6 π x)cos(8 π y) + (cos(8 π x)cos(6 π y))^2 + cos(2 π x - 10 π y)cos(4 π x - 2π y)).
We specify the spatial domain Ω = (0, 2π)^2 and set the parameters in (<ref>) as λ = 1, ε = 0.01. The spatial discretization is carried out using 128 × 128 Fourier modes. Several methods are employed to solve the governing system until the final time T = 0.1. It should be noted that, due to the chosen initial condition (<ref>), the solution of (<ref>) undergoes rapid changes at the beginning. Therefore, if a method is not stable, it will fail to capture the solution accurately when a large time step size is used.
As a benchmark, Figure <ref> illustrates the snapshot obtained by the SAV-MDIARK(5,6,4) method with a step size of τ = 1 × 10^-5. During the test, the time step is progressively reduced until the correct solution snapshot is obtained, and the maximum step size that yields the correct solution profile for each method is recorded.
To facilitate comparisons, we display numerical results for several existing methods in Figure <ref>, including the SAV-CN method, the fully implicit second-order convex splitting scheme (CS2), and the SAV-GRK4PC(5). It can be seen that the SAV-CN method fails to produce a correct result at a large time step, while the convex splitting scheme is capable of producing an accurate result with a relatively large time step. Due to the high precision and stability achieved through multiple iterations, the SAV-GRK4PC(5) can also compute a correct solution with a larger time step.
The numerical results obtained by the proposed schemes are presented in Figure <ref>. It is evident that our second- and third-order schemes achieve accurate results at larger step sizes compared to SAV-CN and CS2 methods and even outperform the SAV-GRK4PC(5) method. Among these methods, the SAV-MDIARK(5,4,3) method performs the best by yielding the correct solution at a step size of τ = 5.2 × 10^-4. Although the proposed fourth-order methods require smaller step sizes to obtain accurate results, their step sizes remain competitive with those used in other publications, despite considering only the order during their construction.
In addition to verifying the effectiveness of the proposed schemes through profiles and step sizes, we also present the evolution of the following discrete free energy
ℱ^n_N = ε^2/2∇_N u^n_N^2 + 1/4(((u^n)^2 - 1)^2, 1)_N.
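A possible NumPy evaluation of this discrete energy, under the same periodic Fourier setting as above (the function name and the explicit appearance of ε are ours), reads:

import numpy as np

def discrete_free_energy(u, hx, hy, eps):
    """Discrete CH free energy F_N^n used to monitor the dissipation."""
    Nx, Ny = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(Nx, d=hx)
    ky = 2.0 * np.pi * np.fft.fftfreq(Ny, d=hy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    uh = np.fft.fft2(u)
    ux = np.fft.ifft2(1j * KX * uh).real       # pseudo-spectral gradient
    uy = np.fft.ifft2(1j * KY * uh).real
    grad_sq = hx * hy * np.sum(ux**2 + uy**2)  # ||grad_N u||_N^2
    bulk = hx * hy * np.sum(0.25 * (u**2 - 1.0)**2)
    return 0.5 * eps**2 * grad_sq + bulk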
It is worth noting that although the above methods have only been proven to dissipate the quadratic energy, we still monitor the original discrete energy in our experiments. Figure <ref> summarizes the evolution of the free energy for different numerical schemes under different time steps. It can be observed that the SAV-CN method fails to dissipate the original energy at larger step sizes due to its lower precision and weaker stability, whereas all our methods decrease the discrete free energy monotonically. This indicates that the proposed methods are robust and unconditionally energy-stable, as predicted by the theoretical results.
§.§ MBE equation
To further display the accuracy and robustness of the proposed schemes, let us consider the following MBE model
u_t = - λ (δΔ^2 u - ∇· f(∇ u)),
which is the L^2 gradient flow with respect to the following free energy functional
ℱ[u] = ∫_Ωδ/2 |Δ u|^2 + F(∇ u) d𝐱.
In (<ref>) and (<ref>), u represents the height function of a thin film in a co-moving frame, δ is a positive constant, and f = F^'. If we set F(∇ u) = -1/2ln(1 + |∇ u|^2), (<ref>) is usually called the MBE equation without slope selection. Correspondingly, (<ref>) is called the MBE equation with slope selection when taking F(∇ u) = 1/4(|∇ u|^2 - 1)^2. Introducing the SAV q = √(1/4∫_Ω (|∇ u|^2 - 1 - κ)^2 d𝐱 + C|Ω| ), the free energy is then modified into
ℱ[u, q] = δ/2Δ u^2 + κ/2∇ u^2 + q^2 - κ^2 + 2κ + 4C/4|Ω|.
Correspondingly, (<ref>) is reformulated into
{
u_t = -λ( δΔ^2 u - κΔ u - ∇· f_κ(∇ u) q ),
q_t = 1/2 (f_κ(∇ u), ∇ u_t),
.
where f_κ(∇ u) = (|∇ u|^2 - 1 - κ)∇ u/√(1/4∫_Ω ( |∇ u|^2 - 1 - κ )^2 d𝐱 + C|Ω| ).
We remark here that although the nonlinearity of the MBE equation without slope selection appears to be unbounded, the SAV can still be introduced as q = √(κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2) + C|Ω| ). Due to the Lipschitz continuity of F, there is no difficulty in confirming that
κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2) > 0,
as soon as κ≥1/8.
We again begin with a refinement test in time. We specify the computational domain Ω = (0, 2π)^2 and consider a classical example with the initial condition
ϕ_0(x, y) = 0.1(sin3xsin5y + sin5xsin5y),
which was studied in <cit.> to observe morphological instability due to the nonlinear interaction. The parameters are λ = 1 and δ = 0.1. Since the exact solution of (<ref>) is not available, the SAV-MDIARK(5,6,4) method is employed to compute a reference solution of (<ref>) using 256 × 256 Fourier modes and a step size of τ = 5 × 10^-6. Then the refinement test in time is carried out by varying the temporal step size τ = 2^3-k× 10^-4 (k = 0,1,⋯,6). The spatial domain is discretized using 128 × 128 Fourier modes. The discrete L^2 and L^∞ errors between the reference and numerical solutions at T = 0.1 are recorded.
Figure <ref> displays the solution error at T=0.1 as a function of the step size in the logarithmic scale. All methods attain their corresponding convergence rates. Moreover, the superconvergence of SAV-MDIARK(4,5,2) disappears in this setting, suggesting that the results appearing in Figure <ref> are coincidental.
Then, we simulate (<ref>) under the same initial condition until T=30. Figure <ref> displays the height profiles computed by the SAV-MDIARK(5,6,4) method with τ = 5 × 10^-3 at different times t=0,0.05,2.5,5.5,8,30. The results agree with those reported in <cit.>. We remark that the simulation results of the other schemes are indistinguishable and are thus omitted due to space limitations.
Figure <ref> summarizes the evolution of the free energy from t = 0 to t = 15 computed by different methods with different time steps. Notice that the energy curves for the fully implicit backward differentiation formula (BDF) methods, which are recognized for their good stability, are also plotted for comparison. For the third- and fourth-order schemes, the energy curves predicted by the proposed methods are comparable with those predicted by the BDF methods. Moreover, among the second-order schemes, the proposed methods provide a more accurate energy prediction than the BDF2 method when τ = 3.125× 10^-2. This suggests that our methods are comparable to the fully discrete BDF methods in terms of stability. However, it should be noted that our methods are linearly implicit and only require the solution of a linear system at each step. Table <ref> lists the CPU times for these methods when conducting the above experiments with the time step τ = 1× 10^-2. Although the ARK methods require solving for more intermediate stages, particularly for the higher-order schemes, our proposed methods are more efficient than the BDF methods.
§ CONCLUSION
Combining the SAV approach and ARK methods, we develop a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable methods for general gradient flows. The proposed schemes are rigorously proved to be unconditionally energy-stable, uniquely solvable, and convergent. We also reveal that each SAV-RKPC method can be regarded as a SAV-ARK method, and the order of the SAV-RKPC methods is then confirmed theoretically using the order conditions of ARK methods. Numerical examples demonstrate the efficiency and robustness of the proposed methods.
§ ACKNOWLEDGMENTS
This work is supported by the National Key Research and Development
Project of China (2018YFC1504205), the National Natural Science Foundation of China (12171245, 11971242).
§ EXAMPLES OF SOME SAV-ARK METHODS
In this section, we list the SAV-ARK methods utilized above. We refer to a SAV-DIARK (or SAV-MDIARK) method with an s-stage implicit part, an r-stage explicit part, and order p as SAV-DIARK(s,r,p) (respectively, SAV-MDIARK(s,r,p)).
§.§ SAV-DIARK(2,2,2)
A =
[ γ 0; 1-2γ γ ],
b =
[ 1/2 1/2 ]^T,
Â =
[ 0 0; 1 0 ],
b̂ = b.
The algebraic-stability matrix of the implicit part of the above method reads
M = (γ - 1/4)
[ 1 -1; -1 1 ].
Therefore, the implicit part of the method is algebraically stable if and only if γ≥1/4.
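This condition is easy to verify numerically. The snippet below checks b_i ≥ 0 and the positive semi-definiteness of M = diag(b)A + A^T diag(b) - bb^T, the standard algebraic-stability matrix assumed here; it is only an illustrative check.

import numpy as np

def is_algebraically_stable(A, b, tol=1e-12):
    """Check b_i >= 0 and M = diag(b) A + A^T diag(b) - b b^T >= 0."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    M = np.diag(b) @ A + A.T @ np.diag(b) - np.outer(b, b)
    return bool(np.all(b >= -tol) and np.all(np.linalg.eigvalsh(M) >= -tol))

gamma = (3.0 + np.sqrt(3.0)) / 6.0                       # any gamma >= 1/4 passes
A = np.array([[gamma, 0.0], [1.0 - 2.0 * gamma, gamma]])
b = np.array([0.5, 0.5])
print(is_algebraically_stable(A, b))                     # True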
§.§ SAV-DIARK(2,3,3)
A =
[ 0 0 0; 0 3 + √(3)/6 0; 0 -√(3)/3 3 + √(3)/6 ],
b =
[ 0 1/2 1/2 ]^T,
Â =
[ 0 0 0; 3 + √(3)/6 0 0; -3 + √(3)/6 3 - √(3)/3 0 ],
b̂ = b.
The eigenvalues of the diagonally implicit part are [1.0774, 0, 0, 0].
§.§ SAV-DIARK(3,4,3)
A =
[ 0 0 0 0; 0 σ 0 0; 0 1/2 - σ σ 0; 0 2σ 1 - 4σ σ; ],
b =
[ 0 μ 1-2μ μ ]^T,
Â =
[ 0 0 0 0; σ 0 0 0; 0 1/2 0 0; 0 9 μσ-3 μ-3 σ+1/3 μ (2 σ-1) 1-σ- 9 μσ-3 μ-3 σ+1/3 μ (2 σ-1) 0; ],
b̂ = b,
where
σ = √(3)/3cos(π/18) + 1/2, μ = 1/6 (2σ - 1)^2.
Then, the eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0].
§.§ SAV-DIARK(5,6,4)
A = [ 0 0 0 0 0 0; 0 3/8 0 0 0 0; 3/8 0 3/16 0 0 0; 0 0 0 σ 0 0; 0 0 0 1/2 - σ σ 0; 0 0 0 2σ 1 - 4σ σ; ],
b =
[ 0 0 0 μ 1-2μ μ ]^T,
Â =
[ 0 0 0 0 0 0; 3/8 0 0 0 0 0; 0 9/16 0 0 0 0; 25/162 μ -104 σμ^2 +6 μ^2+20 μ/108 μ^2-90 μ+9 112 σμ^2 +36 μ^2-37 μ/324 μ^2-270 μ+27 0 0 0; 0 0 1/2 0 0 0; 0 56 σμ^2 -2 μ^2-12 μ/36 μ^2-30 μ+3 16 σμ^2 -4 μ^2+3 μ/36 μ^2-30 μ+3 0 0 0; ],
b̂ = b.
The eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0, 0, 0].
§.§ SAV-GARK(4,5,4)
A =
[ 0 0 0 0 0; 0 1/4 0 0 0; 1/4 0 1/4 0 0; 0 0 0 1/4 1/4-√(3)/6; 0 0 0 1/4+√(3)/6 1/4 ],
b =
[ 0 0 0 1/2 1/2 ]^T,
Â =
[ 0 0 0 0 0; 1/4 0 0 0 0; 0 1/2 0 0 0; 1/6 0 1/3-√(3)/6 0 0; 1/6 0 1/3+√(3)/6 0 0; ],
b̂ = b.
The implicit part of the above method is based on the Gauss RK method (see <cit.>). The eigenvalues of A are [0, 0, 0, 0, 0].
sav_rk_extra
G. Akrivis, B. Li, and D. Li.
Energy-decaying extrapolated RK-SAV methods for the Allen–Cahn and
Cahn–Hilliard equations.
SIAM J. Sci. Comput., 41(6):A3703–A3727, 2019.
001
M. Ambati, T. Gerasimov, and L. De Lorenzis.
A review on phase-field models of brittle fracture and a new fast
hybrid formulation.
Comput. Mech., 55:383–405, 2015.
burrage_efficiently_1982
K. Burrage.
Efficiently implementable algebraically stable Runge–Kutta
methods.
SIAM J. Numer. Anal., 19(2):245–258, 1982.
burrage_stability_1979
K. Burrage and J. C. Butcher.
Stability criteria for implicit Runge–Kutta methods.
SIAM J. Numer. Anal., 16(1):46–57, 1979.
intr_ac
J. W. Cahn and S. M. Allen.
A microscopic theory for domain wall motion and its experimental
verification in Fe-Al alloy domain growth kinetics.
J. Phys. Colloq, 38:C7–51–C7–54, 1977.
intr_ch
J. W. Cahn and J. E. Hilliard.
Free energy of a nonuniform system. I. Interficial free energy.
J. Chem. Phys., 28:258–267, 1958.
mbe_leapfrog
L. Chen, J. Zhao, and Y. Gong.
A novel second-order scheme for the molecular beam epitaxy model
with slope selection.
Commun. Comput. Phys., 25(4):1024–1044, 2019.
mbe_etd3
K. Cheng, Z. Qiao, and C. Wang.
A third order exponential time differencing numerical scheme for
No-Slope-Selection epitaxial thin film model with energy stability.
J. Sci. Comput., 81:154–185, 2019.
lag
Q. Cheng, C. Liu, and J. Shen.
A new Lagrange multiplier approach for gradient flows.
Comput. Methods Appl. Mech. Engrg., 367:113030, 2020.
GSAV1
Q. Cheng, C. Liu, and J. Shen.
Generalized SAV approaches for gradient systems.
J. Comput. Appl. Math., 394:113532, 2021.
intr_other1
Q. Cheng, X. Yang, and J. Shen.
Efficient and accurate numerical schemes for a hydro-dynamically
coupled phase field diblock copolymer model.
J. Comput. Phys., 341:44–60, 2017.
tang_splitting
Y. Cheng, A. Kurganov, Z. Qu, and T. Tang.
Fast and stable explicit operator splitting methods for phase-field
models.
J. Comput. Phys., 303:45–65, 2015.
intr_mbe
S. Clarke and D. Vvedensky.
Origin of reflection high-energy electron-diffraction intensity
oscillations during molecular-beam epitaxy: A computational modeling approach.
Phys. Rev. Lett, 58:2235–2238, 1987.
du_2019
Q. Du, L. Ju, X. Li, and Z. Qiao.
Maximum principle preserving exponential time differencing schemes
for the nonlocal Allen-Cahn equation.
SIAM J. Numer. Anal., 57:875–898, 2019.
du_2021
Q. Du, L. Ju, X. Li, and Z. Qiao.
Maximum bound principles for a class of semilinear parabolic
equations and exponential time-differencing schemes.
SIAM Rev., 63:317–359, 2021.
du_jsc_analysis
Q. Du, L. Ju, and J. Lu.
Analysis of fully discrete approximations for dissipative systems
and application to time-dependent nonlocal diffusion problems.
J. Sci. Comput., 78(3):1438–1466, 2019.
convex_splitting1
C. Elliot and A. Stuart.
The global dynamics of discrete semilinear parabolic equations.
SIAM J. Numer. Anal., 30:1622–1663, 1993.
intr_crystal
M. Elsey and B. Wirth.
A simple and efficient scheme for phase field crystal simulation.
ESIAM: M2AN, 47:1413–1432, 2013.
grad_stable_ch
D. J. Eyre.
Unconditionally Gradient Stable Time Marching the Cahn-Hilliard
Equation.
Mater. Res. Soc. Sympos. Proc., 529:39–46, 1998.
cn_ab
X. Feng, T. Tnag, and J. Yang.
Stabilized crank-nicolson/adams-bashforth schemes for phase field
models.
East Asian J. Appl. Math., 3(1):59–80, 2013.
dvd2
D. Furihata.
A stable and conservative finite difference scheme for the
Cahn-Hilliard equation, 2001.
dvd
D. Furihata and T. Matsuo.
Discrete Variational Derivative Method: A Structure-Preserving Numerical Method for Partial Differential Equations.
Chapman and Hall/CRC, 1st edition, 2010.
svm
Y. Gong, Q. Hong, and Q. Wang.
Supplementary variable method for thermodynamically consistent
partial differential equations.
Comput. Methods Appl. Mech. Engrg., 381:113746, 2021.
gong_nls
Y. Gong, Q. Wang, Y. Wang, and J. Cai.
A conservative Fourier pseudo-spectral method for the nonlinear
Schrödinger equation.
J. Comput. Phys., 328:354–370, 2017.
ieq_gong
Y. Gong, J. Zhao, and Q. Wang.
Arbitrarily high-order linear energy stable schemes for gradient
flow models.
J. Comput. Phys., 419:109610, 2020.
ieq1
F. Guillén-González and G. Tierra.
On linear schemes for a Cahn–Hilliard diffuse interface model.
J. Comput. Phys., 234:140–171, 2013.
ac_hbvm
F. Guo and W. Dai.
Arbitrarily high-order accurate and energy-stable schemes for
solving the conservative Allen–Cahn equation.
Numer. Methods Partial Differential Eq., 39:187–212, 2022.
002
Z. Guo and P. Lin.
A thermodynamically consistent phase-field model for two-phase flows
with thermocapillary effects.
J. Fluid Mech., 766:226–271, 2015.
hairer_book
E. Hairer, C. Lubich, and G. Wanner.
Geometric Numerical Integration: Structure-Preserving Algorithms
for Ordinary Differential Equations.
Springer-Verlag, Berlin, 2nd edition, 2006.
hou_leapfrog
T. Hou, D. Xiu, and W. Jiang.
A new second-order maximum-principle preserving finite difference
scheme for Allen-Cahn equations with periodic boundary conditions.
Appl. Math. Lett., 104:106256, 2020.
sav_ns_err
F. Huang and J. Shen.
Stability and error analysis of a class of high-order IMEX scheme
for Navier–Stokes equations with periodic boundary conditions.
SIAM J. Numer. Anal., 59:2926–2954, 2021.
dvd_high
J. Huang.
Energy stable schemes for gradient flows based on the DVD method.
arXiv:2210.11960v1, 2022.
dvd1
T. Ide.
Some energy preserving finite element schemes based on the discrete
variational derivative method.
Appl. Math. Comput., 175:277–296, 2006.
relaxation_sav
M. Jiang, Z. Zhang, and J. Zhao.
Improving the accuracy and consistency of the scalar auxiliary
variable (SAV) method with relaxation.
J. Comput. Phys., 456:110954, 2022.
ESAV_AC
L. Ju, X. Li, and Z. Qiao.
Stabilized Exponential-SAV cchemes preserving energy dissipation law
and maximum bound principle for the Allen–Cahn type equations.
J Sci Comput, 92(2):66, 2022.
ju_jcp_if
L. Ju, X. Li, Z. Qiao, and J. Yang.
Maximum bound principle preserving integrating factor Runge-Kutta
methods for semilinear parabolic equations.
J. Comput. Phys., 439(110405):18, 2021.
ju_mbe
L. Ju, X. Li, Z. Qiao, and H. Zhang.
Energy stability and error estimates of exponential time
differencing schemes for the epitaxial growth model without slope selection.
Math. Comp., 87(312):1859–1885, 2017.
mbe_model1
B. Li and J.-G. Liu.
Thin film epitaxy with or without slope selection.
Eur. J. Appl. Math., 14:713–743, 2003.
mbe_model2
B. Li and J.-G. Liu.
Stability analysis of large time‐stepping methods for epitaxial
growth models.
SIAM J. Numer. Anal., 44:1759–1779, 2006.
sav_li
D. Li and W. Sun.
Linearly implicit and high-order energy-conserving schemes for
nonlinear wave equations.
J. Sci. Comput., 83:17, 2020.
sav_nlsw
X. Li, Y. Gong, and L. Zhang.
Linear high-order energy-preserving schemes for the nonlinear
Schrödinger equation with wave operator using the scalar auxiliary variable
approach.
J. Sci. Comput., 88:25, 2021.
sav_ns_err2
X. Li, J. Shen, and Z. Liu.
New SAV-pressure correction methods for the Navier-Stokes equations:
stability and error analysis.
Math. Comp., 92(340):141–167, 2022.
liao_bdf
H. Liao, T. Tang, and T. Zhou.
On energy stable, maximum-principle preserving, second-order BDF
scheme with variable steps for the Allen-Cahn equation.
SIAM J. Numer. Anal., 58:2294–2314, 2020.
ESAV
Z. Liu and X. Li.
The Exponential Scalar Auxiliary Variable (E-SAV) Approach for Phase
Field Models and Its Explicit Computing.
SIAM J. Sci. Comput., 42:B630–B655, 2020.
relaxation_lag
Z. Liu and X. Li.
A novel lagrange multiplier approach with relaxation for gradient
flows.
arXiv:2210.02723v1 [math.NA], 2022.
lu_lsp
L. Lu, Q. Wang, Y. Song, and Y. Wang.
Local structure-preserving algorithms for the molecular beam epitaxy
model with slope selection.
Am. J. Math, 26:4745–4765, 2021.
003
W. Marth, S. Aland, and A. Voigt.
Margination of white blood cells: a computational approach by a
hydrodynamic phase field model.
J. Fluid Mech., 790:389–406, 2016.
qiao_mixed_fe
Z. Qiao, T. Tang, and H. Xie.
Error analysis of a mixed finite element method for the molecular
beam epitaxy model.
SIAM J. Numer. Anal., 53:184–205, 2015.
ark_general
A. Sandu and M. Günther.
A generalized–structure approach to additive Runge–Kutta methods.
SIAM J. Numer. Anal., 53(1):17–42, 2015.
sav_shen
J. Shen, J. Xu, and J. Yang.
The scalar auxiliary variable (SAV) approach for gradient flows.
J. Comput. Phys., 353:407–416, 2018.
sav_shen_siam
J. Shen, J. Xu, and J. Yang.
A new class of efficient and robust energy stable schemes for
gradient flows.
SIAM Rev., 61:474–506, 2019.
csrk
J. Shin, H. G. Lee, and J.-Y. Lee.
Unconditionally stable methods for gradient flow using convex
splitting Runge-Kutta scheme.
J. Comput. Phys., 347:367–381, 2017.
tan_jcp_msrksav
Z. Tan and H. Tang.
A general class of linear unconditionally energy stable schemes for
the gradient flows.
J. Comput. Phys., 464(111372):32, 2022.
tang_imex
T. Tang and J. Yang.
Implicit-explicit scheme for the Allen-Cahn equation preserves the
maximum principle.
J. Comput. Math., 34:471–481, 2016.
intr_other2
C.-H. Teng, I.-L. Chern, and L. Ming-Chih.
Simulating binary fluid-surfactant dynamics by a phase field model.
Discrete Contin. Dyn. Syst. Ser. B, 4(17):1289–1307, 2012.
004
A. Wheeler, W. Boettinger, and G. McFadden.
Phase-field model for isothermal phase transitions in binary
alloys.
Phys. Rev. A, 45:7424–7438, 1992.
sav_ch3
J. Yang, J. Wang, Z. Tan, and J. Kim.
Efficient IMEX and consistently energy-stable methods of
diffuse-interface models for incompressible three-component flows.
Comput. Phys. Commun., 282, 2023.
ieq3
X. Yang.
Linear, first and second-order, unconditionally energy
stable numerical schemes for the phase field model of homopolymer blends.
J. Comput. Phys., 327:294–316, 2016.
ieq4
X. Yang and L. Ju.
Efficient linear schemes with unconditional energy stability for
the phase field elastic bending energy model.
Comput. Methods Appl. Mech. Eng., 135:691–712, 2017.
ieq2
X. Yang, J. Zhao, Q. Wang, and J. Shen.
Numerical approximations for a three components Cahn–Hilliard
phase-field model based on the invariant energy quadratization method.
Math. Models Methods Appl. Sci., 27(11):1993–2030, 2017.
imex_fac
H. Zhang, J. Yan, X. Qian, X. Gu, and S. Song.
On the preserving of the maximum principle and energy stability of
high-order implicit-explicit Runge-Kutta schemes for the space-fractional
Allen-Cahn equation.
Numer. Algorithms, 88(3):1309–1336, 2021.
GSAV2
Y. Zhang and J. Shen.
A generalized SAV approach with relaxation for dissipative
systems.
J. Comput. Phys., 464:111311, 2022.
sav_chhs_err
N. Zheng and X. Li.
Error analysis of the SAV Fourier-spectral method for the
Cahn-Hilliard-Hele-Shaw system.
Adv. Comput. Math., 47(71), 2021.
|
http://arxiv.org/abs/2307.07538v1 | 20230714133607 | Energy stable and conservative dynamical low-rank approximation for the Su-Olson problem | ["Lena Baumann", "Lukas Einkemmer", "Christian Klingenberg", "Jonas Kusch"] | math.NA | ["math.NA", "cs.NA", "35L65, 35Q49, 65M12, 65M22"] |
Lena Baumann (University of Wuerzburg, Department of Mathematics, Wuerzburg, Germany); Lukas Einkemmer (University of Innsbruck, Numerical Analysis and Scientific Computing, Innsbruck, Austria); Christian Klingenberg (University of Wuerzburg, Department of Mathematics, Wuerzburg, Germany); Jonas Kusch (Norwegian University of Life Sciences, Scientific Computing, Ås, Norway)
Computational methods for thermal radiative transfer problems exhibit high computational costs and a prohibitive memory footprint when the spatial and directional domains are finely resolved. A strategy to reduce such computational costs is dynamical low-rank approximation (DLRA), which represents and evolves the solution on a low-rank manifold, thereby significantly decreasing computational and memory requirements. Efficient discretizations for the DLRA evolution equations need to be carefully constructed to guarantee stability while enabling mass conservation.
In this work, we focus on the Su-Olson closure and derive a stable discretization through an implicit coupling of energy and radiation density. Moreover, we propose a rank-adaptive strategy to preserve local mass conservation. Numerical results are presented which showcase the accuracy and efficiency of the proposed method.
thermal radiative transfer, Su-Olson closure, dynamical low-rank approximation, energy stability, mass conservation, rank adaptivity
§ INTRODUCTION
Numerically solving the radiative transfer equations is a challenging task, especially due to the high dimensionality of the solution's phase space. A common strategy to tackle this issue is to choose coarse numerical discretizations and mitigate numerical artifacts <cit.> which arise due to the insufficient resolution, see e.g. <cit.>. Despite the success of these approaches in a large number of applications, the requirement of picking user-determined and problem dependent tuning parameters can render them impracticable.
Another approach to deal with the problem's high dimensionality is the use of model order reduction techniques. A reduced order method which is gaining a considerable amount of attention in the field of radiation transport is dynamical low-rank approximation (DLRA) <cit.> due to its ability to yield accurate solutions while not requiring an expensive offline training phase. DLRA's core idea is to represent and evolve the solution on the low-rank manifold of rank r functions. Past work in the area of radiative transfer has focused on asymptotic–preserving schemes <cit.>, mass conservation <cit.>, stable discretizations <cit.>, imposing boundary conditions <cit.> and implicit time discretizations <cit.>. A discontinuous Galerkin discretization of the DLRA evolution equations for thermal radiative transfer has been proposed in <cit.>.
A key building block of efficient, accurate and stable methods for DLRA is the construction of time integrators which are robust irrespective of small singular values in the solution <cit.>. Three integrators which move on the low-rank manifold while not being restricted by its curvature are the projector-splitting (PS) integrator <cit.>, the basis update & Galerkin (BUG) integrator <cit.>, and the parallel integrator <cit.>. Since the PS integrator evolves one of the required subflows backward in time, the BUG and parallel integrator are preferable for diffusive problems while facilitating the construction of stable numerical discretization for hyperbolic problems <cit.>. Moreover, the BUG integrator allows for a basis augmentation step <cit.> which can be used to construct conservative schemes for the Schrödinger equation <cit.> and the Vlasov–Poisson equations <cit.>.
In this work we propose an energy stable and mass conservative DLRA scheme for the thermal radiative transfer equations using the Su-Olson closure. The main novelties of this paper are:
* A stable numerical scheme for thermal radiative transfer: We show that conventional IMEX schemes fail to guarantee energy stability and therefore propose a scheme which advances radiation and energy implicitly in a coupled fashion.
* A mass conservative and rank-adaptive integrator: We employ the basis augmentation step from <cit.> as well as the conservative truncation step from <cit.> to guarantee local mass conservation and rank adaptivity. In contrast to <cit.> we do not need to impose conservation through a modified L-step equation, but solely use the basis augmentation strategy from <cit.>.
Moreover, we demonstrate numerical experiments which underline the derived stability and conservation properties of the proposed method while showing significantly reduced computational costs and memory requirements.
This paper is structured as follows: After the introduction in Section <ref>, we review the background on thermal radiative transfer and dynamical low-rank approximation in Section <ref>. In Section <ref> we present the evolution equations for the thermal radiative transfer equations when using the rank-adaptive BUG integrator. Section <ref> discretizes the resulting equations in angle and space. The main method is presented in Section <ref> where a stable time discretization is proposed. We discuss local mass conservation of the scheme in Section <ref>. Numerical experiments are demonstrated in Section <ref>.
§ BACKGROUND
§.§ Thermal radiative transfer
In this work, we study radiation particles moving through and interacting with a background material. By absorbing particles, the material heats up and emits new particles which can in turn again interact with the background. This process is described by the thermal radiative transport equations
1/c∂_t f(t,x,μ) + μ∂_x f(t,x,μ) = σ(B(t,x)-f(t,x,μ)),
∂_t e(t,x) = σ(⟨ f(t,x,·)⟩_μ-B(t,x)),
where we omit boundary and initial conditions for now. This system can be solved for the radiation density (also called angular flux) f(t,x,μ) and the internal energy e(t,x) of the background medium. Here, x∈ D⊂ℝ is the spatial variable and μ∈ [-1,1] denotes the directional (or velocity) variable. The opacity σ encodes the rate at which particles are absorbed by the medium and we use brackets ⟨·⟩_μ, ⟨·⟩_x to indicate an integration over the directional domain and the spatial domain, respectively. Moreover, the speed of light is denoted by c and the energy equilibrium is denoted by B(T). It often is described by the Stefan-Boltzmann law
B(T) = acT^4,
where a is the radiation constant and T is the temperature. Different closures exist to determine a relation between the temperature T and the internal energy e. The Su-Olson closure e(T) = B(T) yields a linear system for f and B, which reads
1/c∂_t f(t,x,μ) + μ∂_x f(t,x,μ) = σ(B(t,x)-f(t,x,μ)),
∂_t B(t,x) = σ(⟨ f(t,x,·)⟩_μ-B(t,x)).
This system has been solved analytically and serves as a common benchmark for numerical considerations <cit.>. Constructing numerical schemes to solve the above equation is challenging. First, the potentially stiff opacity term has to be treated by an implicit time integration scheme. Second, for three-dimensional spatial domains the computational costs and memory requirements of finely resolved spatial and angular discretizations become prohibitive. To tackle the high dimensionality, we choose a dynamical low-rank approximation which we introduce in the following.
§.§ Dynamical low-rank approximation
The core idea of dynamical low-rank approximation is to represent and evolve the solution to a given equation ∂_t f(t,x,μ) = F(f(t,x,μ)) on the manifold of rank r functions
ℳ_r = { f∈ L^2(D× [-1,1]) : f(x,μ) = ∑_i,j=1^r X_i(x)S_ij V_j(μ) with invertible 𝐒 = (S_ij)∈ℝ^r× r,
X_i ∈ L^2(D), V_j∈ L^2([-1,1]) and ⟨ X_i, X_j⟩_x = δ_ij, ⟨ V_i, V_j⟩_μ = δ_ij}
at every time point t. That is, for a given t, we have
f(t,x,μ) = ∑_i,j=1^r X_i(t,x)S_ij(t) V_j(t,μ).
To restrict the solution dynamics onto the low-rank manifold ℳ_r, we need to determine the corresponding tangent space which at position f under the Gauge conditions ⟨Ẋ_i, X_j⟩_x = 0, ⟨V̇_i, V_j⟩_μ = 0 reads
𝒯_fℳ_r = { δ f∈ L^2(D× [-1,1]) : δ f(x,μ) = ∑_i,j=1^r δ X_i(x)S_ij V_j(μ) + X_i(x)δ S_ij V_j(μ) + X_i(x)S_ijδ V_j(μ)
with δ S_ij∈ℝ,δ X_i ∈ L^2(D), δ V_j∈ L^2([-1,1]) and ⟨δ X_i, X_j⟩_x = 0, ⟨δ V_i, V_j⟩_μ = 0 }.
Having defined the low-rank manifold and its corresponding tangent space, we now wish to determine f(t,·,·)∈ℳ_r such that ∂_t f(t,·,·) ∈𝒯_fℳ_r and ‖∂_t f(t,·,·) - F(f(t,·,·))‖_L^2(D× [-1,1]) is minimized. That is, one wishes to determine f such that
⟨∂_t f(t,·,·) - F(f(t,·,·)), δ f ⟩_x,μ = 0 for all δ f∈𝒯_fℳ_r.
The orthogonal projector onto the tangent plane 𝒯_fℳ_r can be explicitly given as
P(f)F(f) = ∑_j=1^r ⟨ V_j, F(f) ⟩_μ V_j - ∑_i,j=1^r X_i ⟨ X_i V_j, F(f) ⟩_x,μ V_j + ∑_i=1^r X_i ⟨ X_i, F(f) ⟩_x.
With this definition at hand, we can reformulate (<ref>) as
∂_t f(t,x,μ) = P(f(t,x,μ))F(f(t,x,μ)).
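For readers who prefer a discrete analogue, the following short Python sketch (our illustration; the variable names are ours and not part of the original presentation) applies the corresponding matrix projector P_Y(G) = G V V^⊤ - U U^⊤ G V V^⊤ + U U^⊤ G at a factorization Y = U S V^⊤ with orthonormal U and V; its three terms mirror the three sums in the projector above.

import numpy as np

def tangent_projection(U, V, G):
    # P_Y(G) = G V V^T - U U^T G V V^T + U U^T G, the matrix analogue of the
    # three terms of the orthogonal projector onto the tangent space at Y = U S V^T
    GV = G @ V
    return GV @ V.T - U @ (U.T @ GV) @ V.T + U @ (U.T @ G)

# sanity check: tangent vectors are left unchanged by the projector
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((40, 5)))
V, _ = np.linalg.qr(rng.standard_normal((30, 5)))
T = U @ rng.standard_normal((5, 5)) @ V.T
assert np.allclose(tangent_projection(U, V, T), T)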
To evolve the solution in time according to the above equation is not trivial. Indeed standard time integration schemes suffer from the curvature of the low-rank manifold, which is proportional to the smallest singular value of the low-rank solution <cit.>. Three integrators which move along the manifold without suffering from its high curvature exist: The projector–splitting integrator <cit.>, the BUG integrator <cit.>, and the parallel integrator <cit.>. In this work, we will use the basis-augmented extension to the BUG integrator <cit.> which we explain in the following.
The rank-adaptive BUG integrator <cit.> updates and augments the bases { X_i}, { V_j} in parallel in the first two steps. In the third step, a Galerkin step is performed for the augmented bases followed by a truncation step to a new rank r_1. In detail, to evolve the distribution function from f(t_0,x,μ) = ∑_i,j=1^r X_i^0(x) S_ij^0V_j^0(μ) at time t_0 to f(t_1,x,μ) = ∑_i,j=1^r X_i^1(x) S_ij^1V_j^1(μ) at time t_1 = t_0 + Δ t the integrator performs the following steps:
* K-Step: Write K_j(t,x) = ∑_i=1^r X_i(t,x) S_ij(t) such that f(t,x,μ)=∑_j=1^r K_j(t,x) V_j^0(μ). Update and augment the basis X_i^0(x) with i=1,...,r to X_i^1(x) with i=1,...,2r by solving
∂_t K_j(t,x) = ⟨ V_j^0, F(∑_i=1^r K_i(t,x) V_i^0) ⟩_μ, K_j(t_0,x) = ∑_i=1^r X_i^0(x) S_ij^0.
Retrieve X_i^1(x) through Gram Schmidt such that [K_j(t_1,x), X_i^0]=∑_i=1^2rX_i^1(x) R_ij^1. Note that R_ij^1 is discarded after this step. Compute M_ki = ⟨X_k^1, X_i^0 ⟩_x.
* L-Step: Write L_i(t,μ) = ∑_j=1^r S_ij(t)V_j(t,μ) such that f(t,x,μ)=∑_i=1^r X_i^0 L_i(t,μ). Update and augment the basis V_j^0(μ) with j=1,...,r to V_j^1(μ) with j=1,...,2r by solving
∂_t L_i(t,μ) = ⟨ X_i^0, F(∑_j=1^r X_j^0 L_j(t,μ)) ⟩_x, L_i(t_0, μ) =∑_j=1^r S_ij^0 V_j^0(μ).
Retrieve V_j^1(μ) through Gram-Schmidt such that [L_i(t_1,μ), V_j^0(μ)] = ∑_j=1^2rV_j^1(μ) R_ij^2. Note that R_ij^2 is discarded after this step. Compute N_ℓ j = ⟨V_ℓ^1, V_j^0⟩_μ.
* S-step: Update S_ij^0 with i,j=1,...,r to S_ij^1 with i,j=1,...,2r by solving
Ṡ_ij(t) = ⟨X_i^1 V_j^1, F(∑_ℓ,k=1^2rX_ℓ^1 S_ℓ k(t) V_k^1) ⟩_x,μ, S_ij(t_0) = M_ik S_kℓ^0 N_jℓ.
* Truncation: Let S_ij^1 be the entries of the matrix 𝐒^1. Compute the singular value decomposition of 𝐒^1 = 𝐏Σ𝐐^⊤ with Σ = diag(σ_j). Given a tolerance ϑ, choose the new rank r_1 ≤ 2r as the minimal number such that
(∑_j=r_1+1^2rσ_j^2)^1/2≤ϑ.
Let 𝐒^1 with entries S_ij^1 be the r_1 × r_1 diagonal matrix with the r_1 largest singular values and let 𝐏^1 with entries P_ij^1 and 𝐐^1 with entries Q_ji^1 contain the first r_1 columns of 𝐏 and 𝐐, respectively. Set X_i^1(x) = ∑_ℓ=1^2r X_ℓ^1(x) P_ℓ i^1 for i=1,...,r_1 and V_j^1(μ) = ∑_ℓ=1^2r V_ℓ^1(μ) Q_ℓ j^1 for j=1,...,r_1.
The updated solution after one time step is then given by f(t_1,x,μ) = ∑_i,j=1^r_1 X_i^1(x) S_ij^1 V_j^1(μ). Note that the augmentation step is not restricted to the old basis; we will exploit this flexibility when constructing our scheme.
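To make the four steps concrete, the following Python sketch (ours, for illustration only) performs one step of the rank-adaptive BUG integrator for a generic matrix ODE dY/dt = F(Y) with Y ≈ X S V^⊤, using explicit Euler updates in the K-, L- and S-steps; in an actual solver F would of course be evaluated in factored form rather than on the full matrix.

import numpy as np

def bug_step(X, S, V, F, dt, tol):
    # K-step: update and augment the spatial basis
    K = X @ S
    K = K + dt * F(K @ V.T) @ V
    Xhat, _ = np.linalg.qr(np.hstack([K, X]))           # R is discarded
    M = Xhat.T @ X
    # L-step: update and augment the directional basis
    L = V @ S.T
    L = L + dt * F(X @ L.T).T @ X
    Vhat, _ = np.linalg.qr(np.hstack([L, V]))           # R is discarded
    N = Vhat.T @ V
    # S-step: Galerkin update in the augmented bases
    S_aug = M @ S @ N.T
    S_aug = S_aug + dt * Xhat.T @ F(Xhat @ S_aug @ Vhat.T) @ Vhat
    # truncation: smallest rank whose discarded singular values stay below tol
    P, sig, Qt = np.linalg.svd(S_aug)
    tails = np.sqrt(np.cumsum((sig ** 2)[::-1]))[::-1]  # tails[j] = ||sig[j:]||_2
    r1 = max(1, next((j for j, t in enumerate(tails) if t <= tol), len(sig)))
    return Xhat @ P[:, :r1], np.diag(sig[:r1]), Vhat @ Qt.T[:, :r1]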
§ DYNAMICAL LOW-RANK APPROXIMATION FOR SU-OLSON
Let us now derive the evolution equations of the rank-adaptive BUG integrator for the thermal radiative transfer equations with Su-Olson closure. To simplify notation, all derivations are performed for the essentially one-dimensional slab geometry. However, the derivation trivially extends to higher dimensions. We start from the thermal radiative transfer equations (<ref>) with Su-Olson closure e(T) = B(T), which yields
∂_t f(t,x,μ) + μ∂_x f(t,x,μ) = σ(B(t,x)-f(t,x,μ)),
∂_t B(t,x) = σ(⟨ f(t,x,·)⟩_μ-B(t,x)).
Note that we leave out the speed of light by doing a rescaling of time τ = t/c and in an abuse of notation use t to denote τ in the remainder. For the dynamical low-rank approximation let us write
f(t,x,μ) ≈∑_i,j^r X_i(t,x)S_ij(t) V_j(t,μ).
In the following, we use Einstein's sum convention when not stated otherwise to ensure compactness of notation. We start with considering the evolution equations for the particle density (<ref>). By evolving the solution along the subspaces X_i(t,x)S_ij(t) V_j(t_0,μ) and X_i(t_0,x)S_ij(t) V_j(t,μ) we can derive the evolution equations of the K- and L-step of the BUG integrator:
K-step:
Let us rewrite f as
f (t,x,μ) =K_j(t,x)V_j^0(μ) with K_j(t,x) = X_i(t,x) S_ij(t),
where { V_j^0 } denotes the set of orthonormal basis functions for the velocity space that shall be kept fixed in this step. Plugging (<ref>) into (<ref>) and projecting onto V_k^0(μ) we obtain
∂_t K_k(t,x) = -∂_x K_j(t,x)⟨ V^0_k, μ V_j^0⟩_μ + σ( B(t,x) ⟨ V^0_k⟩_μ - K_k(t,x)).
L-step:
For the L-step, let us write f as
f(t,x,μ) =X_i^0(x) L_i (t,μ) with L_i(t,μ) = S_ij(t) V_j(t,μ),
where {X_i^0 } denotes the set of spatial orthonormal basis functions that shall be kept fixed in this step. Plugging (<ref>) into (<ref>) and projecting onto X_k^0(x) yields
∂_t L_k(t,μ) = -μ⟨ X_k^0, d/dx X_i^0⟩_x L_i(t,μ) + σ( ⟨ X_k^0, B(t,·)⟩_x - L_k(t,μ)).
Lastly, we derive the augmented Galerkin step of the rank-adaptive BUG integrator. We denote by X_i^1 the time-updated spatial basis augmented with the old basis X_i^0; the augmented directional basis V_j^1 is constructed in the corresponding way. Then, the augmented Galerkin step is constructed according to:
S-step:
For the S-step of the dynamical low-rank approximation we use the initial condition S_ij(t_0) = ⟨X_i^1, X_ℓ^0 ⟩_x S_ℓ k(t_0) ⟨V_j^1, V_k^0⟩_μ and evolve the solution along the subspace
f(t,x,μ) = ∑_i,j = 1^2rX_i^1(x) S_ij(t)V_j^1(μ).
Inserting this representation into (<ref>) and testing against X_k^1 and V_ℓ^1 gives
Ṡ_kℓ(t)= - ⟨X_k^1, d/dxX_i^1 ⟩_x S_ij(t) ⟨V_ℓ^1, μV_j^1⟩_μ + σ(⟨X_k^1, B(t,·) ⟩_x ⟨V_ℓ^1 ⟩_μ - S_kℓ(t)).
For equation (<ref>) we obtain by inserting the updated dynamical low-rank approximation of f
∂_t B(t,x) = σ(X_i^1(x) S_ij(t)⟨V_j^1⟩_μ-B(t,x)).
§ ANGULAR AND SPATIAL DISCRETIZATION
Having derived the K-, L- and S-step of the rank-adaptive BUG integrator, we can now proceed with discretizing in angle and space. For the angular discretization, we describe V^0_j(μ), V^1_j (μ) and L_i(t,μ) by a modal representation as follows
V^0_j(μ) ≃∑_n = 0^N-1 V^0_nj P_n(μ), V^1_j(μ) ≃∑_n = 0^N-1V^1_nj P_n(μ), L_i(t,μ) ≃∑_n = 0^N-1 L_ni(t) P_n(μ),
where P_n are the normalized Legendre polynomials. When using this modal representation in the evolution equations of DLRA, we can use the fact that defining the flux matrix 𝐀∈ℝ^N× N as A_mn :=⟨ P_m, μ P_n ⟩_μ lets us write flux matrices as
⟨ V^0_k, μ V_j^0⟩_μ = V^0_kmA_mnV^0_jn.
The evolution equations with angular discretization then read
∂_t K_k(t,x) = -∂_x K_j(t,x) V_nj^0 A_mn V_mk^0 + σ( B(t,x) V_0k^0 - K_k(t,x)),
L̇_mk(t) = - ⟨ X_k^0, d/dx X_i^0⟩_x L_ni (t) A_mn + σ( ⟨ X_k^0, B(t,·)⟩_x δ_m0 - L_mk(t) ),
Ṡ_kℓ(t) = - ⟨X_k^1, d/dxX_i^1 ⟩_x S_ij(t) V_nj^1 A_mnV_mℓ^1 + σ( ⟨X_k^1, B(t,·) ⟩_x V_0 ℓ^1 - S_kℓ(t)).
For the angular discretization of (<ref>) we get
∂_t B(t,x) = σ(X_i^1(x) S_ij(t) V_0j^1-B(t,x)) .
To derive a spatial discretization we choose a spatial grid x_1 < ⋯ < x_n_x with equidistant spacing Δ x. The solution in a given cell p is then represented according to
X_pk(t) ≈1/Δ x∫_x_p^x_p+1 X_k(t,x) dx,
K_pk(t) ≈1/Δ x∫_x_p^x_p+1 K_k(t,x) dx,
B_p(t) ≈1/Δ x∫_x_p^x_p+1 B(t,x) dx .
Spatial derivatives are approximated and stabilized through the tridiagonal stencil matrices 𝐃^x ≈∂_x and 𝐃^xx≈Δ x∂_xx with entries
D_p,p± 1^x= ± 1/2Δ x , D_p,p^xx= -1/Δ x , D_p,p± 1^xx= 1/2Δ x .
Then, 𝐃^x, 𝐃^xx∈ℝ^n_x × n_x.
Moreover, we denote the Roe matrix as |𝐀| = 𝐐|𝐌|𝐐^⊤, where 𝐀 = 𝐐𝐌𝐐^⊤ with 𝐐 orthogonal and 𝐌 = diag(σ_1,...,σ_N).
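For illustration, the Python sketch below (not taken from the reference implementation; all names are ours) assembles the flux matrix A_mn = ⟨ P_m, μ P_n ⟩_μ for the normalized Legendre polynomials by Gauss–Legendre quadrature, the Roe matrix |𝐀| via the eigendecomposition, and the stencil matrices 𝐃^x and 𝐃^xx defined above.

import numpy as np
from numpy.polynomial import legendre as leg

def flux_matrix(N):
    # A_mn = <P_m, mu P_n>_mu for L2-normalized Legendre polynomials on [-1, 1]
    mu, w = leg.leggauss(2 * N)                 # quadrature exact for these integrands
    P = np.stack([leg.legval(mu, np.eye(N)[n]) * np.sqrt((2 * n + 1) / 2)
                  for n in range(N)])
    return np.einsum('q,mq,q,nq->mn', w, P, mu, P)

def roe_matrix(A):
    # |A| = Q |M| Q^T from the eigendecomposition A = Q M Q^T
    evals, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.abs(evals)) @ Q.T

def stencils(nx, dx):
    # D^x: central first-derivative stencil, D^xx: stabilization stencil
    off = np.ones(nx - 1)
    Dx = (np.diag(off, 1) - np.diag(off, -1)) / (2 * dx)
    Dxx = (np.diag(off, 1) + np.diag(off, -1) - 2 * np.eye(nx)) / (2 * dx)
    return Dx, Dxx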
We then obtain the spatially and angular discretized matrix ODEs
K̇_pk(t) = -D^x_qp K_pj(t) V_nj^0 A_mn V_mk^0 + D^xx_qp K_pj(t) V_nj^0 |A|_mn V_mk^0 + σ( B_p(t) V_0k^0 - K_pk(t)),
L̇_mk(t) = - A_mn L_ni(t) X_pi^0 D_qp^x X_qk^0 + |A|_mn L_ni(t) X_pi^0 D_qp^xx X_qk^0 + σ( δ_m0 B_p(t) X_pk^0 - L_mk(t)),
Ṡ_kℓ(t) = - X_pk^1 D^x_pqX_qi^1 S_ij(t) V_nj^1 A_mnV_mℓ^1 + X_pk^1 D^xx_pqX_qi^1 S_ij(t) V_nj^1 |A|_mnV_mℓ^1 + σ(X_pk^1 B_p(t) V_0 ℓ^1 - S_kℓ(t)).
Lastly, we obtain from (<ref>) for the energy equilibrium B the spatially discretized equation
Ḃ_p(t) = σ(X_pi^1 S_ij(t) V_0j^1-B_p(t)) = σ(u_p0^1(t)-B_p(t)),
where we use the notation X_pi^1 S_ij(t) V_mj^1 =: u_pm^1(t). We can now show that the semi-discrete time-dependent system (<ref>) is energy stable. For this, let us first note the following properties of the chosen spatial stencils:
Let y,z ∈ℝ^n_x. Then the stencil matrices fulfill the following properties:
y_p D_pq^x z_q = -z_p D_pq^x y_q , z_p D_pq^x z_q = 0 , y_p D_pq^xx z_q = z_p D_pq^xx y_q.
Moreover, let 𝐃^+∈ℝ^n_x × n_x be defined as
D_p,p^+= - 1/√(2Δ x) , D_p,p + 1^+= 1/√(2Δ x) .
Then, z_p D_pq^xx z_q = - (D_pq^+ z_q)^2.
The assertions follow directly by plugging in the definitions of the stencil matrices:
y_p D_pq^x z_q = ± 1/2Δ x y_p (z_p+1 - z_p-1) = ± 1/2Δ x (y_p-1 z_p - y_p+1z_p) = -z_p D_pq^x y_q,
z_p D_pq^x z_q = - z_p D_pq^x z_q = 0,
y_p D_pq^xx z_q = -1/Δ x y_p z_p + 1/2Δ x (y_p z_p+1 + y_p z_p-1) = -1/Δ x y_p z_p + 1/2Δ x (y_p-1 z_p + y_p+1 z_p) = z_p D_pq^xx y_q,
z_p D_pq^xx z_q = -1/2Δ x(z_p^2-2z_p z_p+1+z_p+1^2)= -1/2Δ x(z_p-z_p+1)^2 =: - (D_pq^+ z_q)^2.
With these properties at hand, we can now show dissipation of the energy
E(t) := 1/2‖𝐮^1(t) ‖_F^2 + 1/2‖𝐁(t) ‖_E^2,
where ‖·‖_F denotes the Frobenius norm and ‖·‖_E denotes the Euclidean norm.
The semi-discrete time-continuous system consisting of (<ref>) is energy stable, that is Ė(t) ≤ 0.
Let us start from the S-step in (<ref>)
Ṡ_kℓ(t) = - X_pk^1 D^x_pqX_qi^1 S_ij(t) V^1_nj A_mnV^1_mℓ + X_pk^1 D^xx_pqX_qi^1 S_ij(t) V^1_nj |A|_mnV^1_mℓ + σ( X_pk^1 (x) B_p(t) V_0 ℓ^1 - S_kℓ(t)).
We multiply with X_α k^1 V_βℓ^1 and introduce the projections P^X,1_α p = X_α k^1 X_pk^1 and P^V,1_m β = V_m ℓ^1 V_βℓ^1. With the notation X_qi^1 S_ij(t) V^1_nj = u_qn^1(t) we get
u̇_αβ^1(t) = - P^X,1_α p D^x_pq u_qn^1(t) A_mn P^V,1_m β +P^X,1_α p D^xx_pq u_qn^1(t) |A|_mn P^V,1_m β + σ( P^X,1_α p B_p(t) δ_0m P^V,1_m β - u_αβ^1(t)).
Next, we multiply with u_αβ^1(t) and sum over α and β. Note that
P^X,1_α p u_αβ^1 = u_p β^1 and P^V,1_m β u_p β^1 = u_pm^1.
This leads to
1/2d/dt‖𝐮^1(t) ‖^2_F = - u_pm^1(t) D^x_pq u_qn^1(t) A_mn + u_pm^1(t) D^xx_pq u_qn^1(t) |A|_mn + σ( u_pm^1(t) B_p(t) δ_0m - ‖𝐮^1(t) ‖^2 ).
Recall that we can write 𝐀 = 𝐐𝐌𝐐^⊤ with 𝐌 = (σ_1,...,σ_N). Inserting this representation gives
1/2d/dt‖𝐮^1(t) ‖^2_F = - u_pm^1(t) D^x_pq u_qn^1(t) Q_nkσ_k Q_mk + u_pm^1(t) D^xx_pq u_qn^1(t) Q_nk|σ_k| Q_mk + σ( u_pm^1(t) B_p(t) δ_0m - ‖𝐮^1(t) ‖^2 )
= - σ_k u_pk^1(t) D^x_pqu_qk^1(t) + |σ_k| u_pk^1(t) D^xx_pqu_qk^1(t) + σ( u_pm^1(t) B_p(t) δ_0m - ‖𝐮^1(t) ‖^2 ),
where u_pk^1(t) = u_pm^1(t)Q_mk.
With the properties of the stencil matrices we get
1/2d/dt‖𝐮^1(t) ‖^2_F = -( D^+_pq u_qm^1(t) |A|_mn^1/2)^2 + σ(u_p0(t) B_p(t) - ‖𝐮^1(t) ‖^2_F).
Next we consider equation (<ref>). Multiplication with B_p(t) and summation over p gives
1/2d/dt‖𝐁(t)‖^2_E = σ(u_p0(t)B_p(t)-‖𝐁(t)‖^2_E).
For the total energy of the system it holds that E(t)= 1/2‖𝐮^1(t) ‖_F^2 + 1/2‖𝐁(t)‖^2_E. Adding the evolution equations (<ref>) and (<ref>) we get
d/dt E(t) = - ( D^+_pq u_qm^1(t) |A|_mn^1/2)^2 + σ(u_p0^1(t) B_p(t) - ‖𝐮^1(t) ‖_F ^2) + σ(u_p0^1(t)B_p(t)-‖𝐁(t)‖_E^2 )
= - ( D^+_pq u_qm^1(t) |A|_mn^1/2)^2 - σ( (u_p0^1(t)-B_p(t))^2 + (u_pm^1(t))^2 (1-δ_m0)),
where we rewrote ‖𝐁(t) ‖_E^2 = B_p(t)^2 and ‖𝐮^1(t) ‖^2_F = (u_pm^1(t))^2. This expression is strictly negative which means that E is dissipated in time. Hence, the system is energy stable.
§ TIME DISCRETIZATION
Our goal is to construct a conservative DLRA scheme which is energy stable under a sharp time step restriction. Constructing time discretization schemes which preserve the energy dissipation shown in Theorem <ref> while not suffering from the potentially stiff opacity term is not trivial. In fact a classical IMEX scheme potentially will increase the total energy, which we demonstrate in the following.
§.§ Naive time discretization
We start from system (<ref>) which still depends continuously on the time t. For the time discretization we choose a standard IMEX scheme and perform a splitting of energy and radiation transport equation. That is, we use an explicit Euler step for the transport part of the evolution equations, treat the energy equilibrium B explicitly and use an implicit Euler step for the scattering term. Note that the scheme describes the evolution from time t_0 to time t_1 = t_0 +Δ t but holds for all further time steps equivalently. This yields the fully discrete scheme
K_pk^1 = K_pk^0 -Δ t D^x_qp K_pj^0 V_nj^0 A_mn V_mk^0 + Δ t D^xx_qp K_pj^0 V_nj^0 |A|_mn V_mk^0 + σ( Δ t B_p^0 V_0k^0 - Δ t K_pk^1 ),
L_mk^1 = L_mk^0 - Δ t X_qk^0 D_qp^x X_pi^0 L_ni^0 A_mn + Δ t X_qk^0 D_qp^xx X_pi^0 L_ni^0 |A|_mn + σ(Δ t X_pk^0 B_p^0 δ_m0 - Δ t L_mk^1 ).
The time updated bases X_pk^1 and V_pk^1 are then retrieved through a QR-decomposition of the augmented bases [K_pk^1, X_pk^0] and [ L_pk^1, V_pk^0] according to the rank-adaptive BUG integrator <cit.>. Lastly, we perform a Galerkin step for the augmented bases according to
S_kℓ^1 = S_kℓ^0 - Δ t X_pk^1 D^x_pqX_qi^1 S_ij^0 V_nj^1 A_mnV_mℓ^1 + Δ t X_pk^1 D^xx_pqX_qi^1 S_ij^0 V_nj^1 |A|_mnV_mℓ^1
+ σ(Δ t X_pk^1 B_p^0 V_0 ℓ^1 - Δ t S_kℓ^1 ),
where S_kℓ^0 := X_pk^1 X_pi^0 S_ij^0 V_mj^0 V_mℓ^1. The energy is then updated via
B_p^1 = B_p^0 + σΔ t (X_pi^1 S_ij^1 V_0j^1-B_p^1).
However, this numerical method has the undesirable property that it can increase the total energy during a time step. In Theorem <ref> we show this analytically. This behavior is, obviously, completely unphysical.
There exist initial value pairs (u^0, B^0) and time step sizes Δ t such that the naive scheme (<ref>) results in (u^1, B^1) for which the energy increases, i.e. for which E^1 > E^0.
Let us multiply the S-step (<ref>) with X_α k^1 V_βℓ^1 and sum over α and β. Again we make use of the projections P^X,1_α p = X_α k^1 X_pk^1 and P^V,1_m β = V_m ℓ^1 V_βℓ^1. With the definition of S_kℓ^0 we obtain for u_αβ^1 := X_α k^1 S_kℓ^1 V_βℓ^1
u_αβ^1 = u_pm^0 - P^X,1_α pΔ t D^x_pq u_qn^0 A_mn P^V,1_m β + P^X,1_α pΔ t D^xx_pq u_qn^0 |A|_mn P^V,1_m β + σ( Δ t P^X,1_α p B_p^0 δ_m0 P^V,1_m β - Δ t u_αβ^1 ).
Let us choose a constant solution in space, i.e., B^1_j = B^1 and u^1_jk = u^1δ_k0 for all spatial indices j. The scalar values B^1 and u^1 are chosen such that B^1 = u_0^1 + α where
0<α < σΔ t/1+ σΔ t + σ^2 Δ t^2 + 1/2σ^3 Δ t^3 u_0^1.
We can now verify that we obtain our chosen values for B^1_j and u^1_jk after a single step of (<ref>) when using the initial condition
B^0 = B^1 + σΔ t α = u_0^1+α (1 + σΔ t),
u_0^0 = u_0^1+σΔ t (u^1_0 - B^0) = u_0^1-σΔ t α (1 + σΔ t).
To show this, note that since the solution is constant in space, all stencil terms drop out and we are left with
u_αβ^1 = u_pm^0 + σ( Δ t P^X,1_α p B_p^0 δ_m0 P^V,1_m β- Δ t u_αβ^1 ).
Since B_p^0 is constant in space and δ_m0 lies in the span of our basis, we know that all projections in the above equation are exact. Plugging the initial values (<ref>) into (<ref>) we then directly obtain u_αβ^1. Similarly, by plugging (<ref>) into (<ref>), we obtain B^1_p.
Then, we square both of the initial terms (<ref>) to get
(B^0)^2 = (B^1)^2 + 2σΔ t α B^1 + σ^2Δ t^2 α^2 = (B^1)^2 + 2σΔ t α (u_0^1 + α) + σ^2Δ t^2 α^2,
(u_0^0)^2 = (u_0^1)^2-2σΔ t α u_0^1 (1 + σΔ t) + σ^2 Δ t^2 α^2 (1 + σΔ t)^2.
Adding these two terms and multiplying with 1/2 yields
E^1 = E^0 + σ^2 Δ t^2α u_0^1 - σΔ t α^2- 1/2σ^2 Δ t^2α^2 - 1/2σ^2 Δ t^2α^2 (1+σΔ t)^2.
Note that E^1 > E^0 if
σΔ tu_0^1 - α- 1/2σΔ tα - 1/2σΔ tα (1+σΔ t)^2 > 0.
Rearranging gives
α < σΔ t/1+ σΔ t + σ^2 Δ t^2 + 1/2σ^3 Δ t^3 u_0^1.
This is exactly the domain α is chosen from. Hence, we have E^1 > E^0, which is the desired result.
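The counterexample constructed in the proof is easy to check numerically: for a spatially constant state all stencil terms vanish and the naive update reduces to two scalar recursions. The following Python snippet (ours; the particular values of σ, Δt and α are an arbitrary admissible choice) reproduces the energy increase.

sigma, dt = 1.0, 0.5
u1_target = 1.0                                        # the value u_0^1 aimed for in the proof
alpha = 0.5 * sigma * dt * u1_target / (1 + sigma * dt + (sigma * dt) ** 2
                                        + 0.5 * (sigma * dt) ** 3)

# initial data from the proof (constant in space, so all stencil terms drop out)
B0 = u1_target + alpha * (1 + sigma * dt)
u0 = u1_target - sigma * dt * alpha * (1 + sigma * dt)

# one step of the naive scheme: B treated explicitly, absorption implicitly
u1 = (u0 + sigma * dt * B0) / (1 + sigma * dt)          # recovers u1_target
B1 = (B0 + sigma * dt * u1) / (1 + sigma * dt)          # recovers u1_target + alpha

E0 = 0.5 * (u0 ** 2 + B0 ** 2)
E1 = 0.5 * (u1 ** 2 + B1 ** 2)
print(E0, E1, E1 > E0)                                  # the energy increases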
§.§ Energy stable space-time discretization
To construct an energy stable time integration scheme, we write the original equations in two parts followed by a basis augmentation and correction step. First, we evolve the basis functions according to
K_pk^⋆ = K_pk^0 -Δ t D^x_qp K_pj^0 V_nj^0 A_mn V_mk^0 + Δ t D^xx_qp K_pj^0 V_nj^0 |A|_mn V_mk^0,
L_mk^⋆ = L_mk^0 - Δ t X_qk^0 D_qp^x X_pi^0 L_ni^0 A_mn + Δ t X_qk^0 D_qp^xx X_pi^0 L_ni^0 |A|_mn.
We compute the augmented and time updated bases via QR-decompositions 𝐗^⋆𝐑 = [𝐊^⋆, 𝐗^0] and 𝐕^⋆𝐑 = [𝐋^⋆, 𝐕^0].
With S_αβ^0 = X^⋆_jαX_jℓ^0S_ℓ m^0V_km^0V_kβ^⋆ we then solve the S-step equation
S_αβ^⋆ = S_αβ^0 - Δ t X_pα^⋆ D^x_pq X_qi^⋆S_ij^0 V_nj^⋆ A_mn V_m β^⋆ + Δ t X_pα^⋆ D^xx_pq X_qi^⋆S_ij^0 V_nj^⋆ |A|_mn V_m β^⋆.
Second, we solve the coupled terms for the zeroth order moment and the energy according to
u_j0^1 = X_jℓ^0S_ℓ m^0 V_0m^0 - Δ tD^x_jiX_in^⋆S_nm^0 V_ℓ m^⋆ A_0ℓ+Δ tD^xx_jiX_in^⋆S_nm^0 V_ℓ m^⋆ |A|_0ℓ + σΔ t(B_j^1-u_j0^1),
B_j^1 = B_j^0 + σΔ t(u_j0^1-B_j^1).
Following <cit.> we perform the opacity update only on 𝐋= 𝐕^⋆𝐒^⋆ according to
L_mk^⋆, scat = 1/(1+Δ t σ) L_mk for k ≠ 0
and retrieve the factorized basis and coefficients via a QR-decomposition 𝐕^⋆, scat𝐒^⋆, scat, ⊤ = 𝐋^⋆, scat.
Defining 𝐮_0^1 =(u_j0^1)_j we augment the basis matrices according to
𝐗^1 = qr([𝐮_0^1, 𝐗^⋆]), 𝐕^1 = qr([𝐞_1, 𝐕^⋆, scat]).
Third, the coefficient matrix is updated via
𝐒^1 = 𝐗^1,⊤𝐗^⋆𝐒^⋆, scat𝐕^⋆, scat, ⊤ (𝐈 - 𝐞_1𝐞_1^⊤)𝐕^1 + 𝐗^1,⊤𝐮_0^1𝐞_1,⊤𝐕^1 ∈ℝ^(2r + 1)× (2r + 1).
Lastly, we truncate the rank 2r + 1 solution to a new rank r_1 using a suited truncation strategy such as proposed in <cit.> or the conservative truncation strategy of <cit.>.
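The implicit coupling of the zeroth moment and the energy above amounts to a 2×2 linear system per spatial cell and can be solved in closed form. The following Python fragment is a sketch of this sub-step (the array rhs_u, our notation, collects the explicit terms on the right-hand side of the first equation of the coupled update).

import numpy as np

def coupled_update(rhs_u, B0, sigma, dt):
    # Solve, cell by cell,
    #   (1 + sigma*dt) * u1 - sigma*dt * B1 = rhs_u
    #  -sigma*dt * u1 + (1 + sigma*dt) * B1 = B0
    # i.e. the implicit coupling of radiation (zeroth moment) and energy.
    a = 1.0 + sigma * dt
    det = a ** 2 - (sigma * dt) ** 2          # = 1 + 2*sigma*dt > 0
    u1 = (a * rhs_u + sigma * dt * B0) / det
    B1 = (sigma * dt * rhs_u + a * B0) / det
    return u1, B1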
We show that the given scheme is energy stable and start with the following Lemma.
Let us denote u_jk^1 := X_jα^1 S_αβ^1 V_kβ^1. Under the time step restriction Δ t ≤Δ x it holds
Δ t/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-(D_ji^+ u_ik^1|A|_kℓ^1/2)^2 ≤ 0.
Following <cit.>, we employ a Fourier analysis which allows us to write the stencil matrices 𝐃^x,xx,+ in diagonal form. Let us define
𝐄∈ℂ^n_x× n_x with entries
E_kα = √(Δ x)exp(iαπ x_k), k,α = 1,...,n_x
with i∈ℂ being the imaginary unit. Then, the matrix 𝐄 is orthonormal, i.e., 𝐄𝐄^H = 𝐄^H𝐄 = 𝐈 (the uppercase H denotes the complex transpose) and it diagonalizes the stencil matrices:
𝐃^x,xx,+𝐄 = 𝐄Λ^x,xx,+ .
The matrices Λ^x,xx,+ are diagonal with entries
λ^x_α,α = 1/2Δ x(e^iαπΔ x-e^-iαπΔ x) = i/Δ xsin(ω_α) ,
λ_α,α^xx = 1/2Δ x(e^iαπΔ x-2+e^-iαπΔ x) = 1/Δ x(cos(ω_α)-1) ,
λ_α,α^+ = 1/√(2Δ x)(e^iαπΔ x-1) = 1/√(2Δ x)(cos(ω_α) + isin(ω_α)-1) ,
where we use ω_α:=απΔ x. Moreover, recall that we can write 𝐀 = 𝐐𝐌𝐐^⊤ where 𝐌 = diag(σ_1,⋯,σ_N). We then have with u_jk = E_jℓu_ℓ mQ_m k
Δ t/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-(D_ji^+ u_ik^1|A|_kℓ^1/2)^2
= Δ t/2|λ^x_jju_jk^1σ_k - λ^xx_jju_jk^1|σ_k||^2-|λ^+_jju_jk^1|σ_k|^1/2|^2
≤ [Δ t( |σ_k|^2/Δ x^2·|1-cos(ω_j)|) - |σ_k|/Δ x·|1-cos(ω_j)|] (u_jk^1)^2.
To ensure negativity, we must have
Δ t( |σ_k|^2/Δ x^2·|1-cos(ω_j)|) ≤|σ_k|/Δ x·|1-cos(ω_j)|.
Hence, for Δ t ≤Δ x/|σ_k| equation (<ref>) holds. Since |σ_k| ≤ 1, we have proven the Lemma.
We can now show energy stability of the proposed scheme:
Under the time step restriction Δ t ≤Δ x, the scheme (<ref>) is energy stable, i.e.,
‖𝐁^1‖_E^2 + ‖𝐗^1𝐒^1𝐕^1,⊤‖_F^2 ≤‖𝐁^0‖_E^2 + ‖𝐗^0𝐒^0𝐕^0,⊤‖_F^2.
First, we multiply (<ref>) with B_j^1 and sum over j. Then,
(B_j^1)^2 = B_j^0B_j^1 + σΔ t (u_j0^1 B_j^1-(B_j^1)^2).
Let us note that
B_j^0B_j^1 = (B_j^1)^2/2 + (B_j^0)^2/2 - 1/2(B_j^1-B_j^0)^2.
Hence,
1/2(B_j^1)^2 = 1/2(B_j^0)^2 - 1/2(B_j^1-B_j^0)^2 + σΔ t (u_j0^1 B_j^1-(B_j^1)^2).
To obtain a similar expression for (u_jk^1)^2, we multiply (<ref>) with X_j α^⋆ V_k β^⋆ and sum over α and β. For simplicity of notation, let us define u_jk ^⋆ := X_j α^⋆S_αβ^⋆ V_k β^⋆ and u_jk ^0 := X_j α^⋆S_αβ^0 V_k β^⋆ as well as the projections P^X_j p := X_j α^⋆X_pα^⋆ and P^V_k m := V_k β^⋆ V_mβ^⋆. Then, we obtain the system
u_j k ^⋆ = u_j k ^0 - Δ t P^X_j p D^x_pq u_qn^0 A_mn P^V_k m + Δ t P^X_j p D^xx_pq u_qn^0 |A|_mn P^V_k m.
Next, we define u_jk^1 := X_jα^1 S_αβ^1 V_kβ^1 and note that by construction we have that
u_jk^1 = u_jk^⋆ (1-δ_k0)/(1+σΔ t) + u_j0^1 δ_k0.
Hence, plugging in the schemes for u_j k ^⋆ and u_j0^1, that is, (<ref>) and (<ref>) we get
(1+σΔ t)u_jk^1 = ( u_j k ^0 - Δ t P^X_j p D^x_pq u_qn^0 A_mn P^V_k m + Δ t P^X_j p D^xx_pq u_qn^0 |A|_mn P^V_km) (1-δ_k0)
+ ( X_jℓ^0S_ℓ m^0 V_0m^0 - Δ tD^x_jiX_in^⋆S_nm^0 V_ℓ m^⋆ A_0ℓ+Δ tD^xx_jiX_in^⋆S_nm^0 V_ℓ m^⋆ |A|_0ℓ + σΔ t B_j^1) δ_k0.
Let us note that P^V_km P^X_jp u_pm^1 = u_jk^1 for k≠ 0. Hence, multiplying the above equation with u_jk^1 and summing over j and k gives
1/2(u_jk^1)^2 = 1/2(u_jk^0)^2 - 1/2(u_jk^1-u_jk^0)^2- Δ t u_jk^1D^x_jiu_iℓ^0 A_kℓ+Δ tu_jk^1D^xx_jiu_iℓ^0 |A|_kℓ
+ σΔ tu_jk^1(B_j^1δ_k0-u_jk^1).
Let us now add the zero term Δ tu_jk^1D^x_jiu_iℓ^1 A_kℓ and add and subtract
Δ tu_jk^1D^xx_jiu_iℓ^1 |A|_kℓ. Then,
1/2(u_jk^1)^2 = 1/2(u_jk^0)^2 - 1/2(u_jk^1-u_jk^0)^2- Δ tu_jk^1D^x_ji(u_iℓ^0-u_iℓ^1) A_kℓ
+ Δ tu_jk^1D^xx_ji(u_iℓ^0-u_iℓ^1) |A|_kℓ+Δ tu_jk^1D^xx_jiu_iℓ^1 |A|_kℓ
+ σΔ tu_jk^1(B_j^1δ_k0-u_jk^1).
In the following, we use Young's inequality which states that for a,b∈ℝ we have a· b ≤a^2/2 + b^2/2. We now apply this to the term
-Δ tu_jk^1D^x_ji(u_iℓ^0-u_iℓ^1) A_kℓ+Δ tu_jk^1D^xx_ji(u_iℓ^0-u_iℓ^1) |A|_kℓ
≤1/2 (u_iℓ^0-u_iℓ^1)^2 + Δ t^2/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2.
Hence, using u_jk^1D^xx_jiu_iℓ^1 |A|_kℓ=-(D_ji^+ u_ik^1|A|_kℓ^1/2)^2 we get
1/2(u_jk^1)^2 ≤1/2(u_jk^0)^2 + Δ t^2/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-Δ t(D_ji^+ u_ik|A|_kℓ^1/2)^2
+ σΔ tu_jk^1(B_j^1δ_k0-u_jk^1).
As for the continuous case, we add (<ref>) and (<ref>) to obtain a time update equation for E^0 := 1/2(u_jk^0)^2 + 1/2(B_j^0)^2:
E^1≤ E^0 + Δ t^2/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-Δ t(D_ji^+ u_ik^1|A|_kℓ^1/2)^2
+ σΔ t(u_j0^1B_j^1-(u_jk^1)^2) - 1/2(B_j^1-B_j^0)^2 + σΔ t(u_j0^1 B_j^1-(B_j^1)^2)
≤ E^0 + Δ t^2/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-Δ t(D_ji^+ u_ik^1|A|_kℓ^1/2)^2
- σΔ t(B_j^1-u_jk^1)^2 - 1/2(B_j^1-B_j^0)^2 .
With Lemma <ref> we have that
Δ t/2(D^x_jiu_jk^1A_kℓ - D^xx_jiu_jk^1|A|_kℓ)^2-(D_ji^+ u_ik^1|A|_kℓ^1/2)^2 ≤ 0
for Δ t ≤Δ x. Since the truncation step is designed to not alter the zero order moments, we conclude that E^1 ≤ E^0 and the full scheme is energy stable under the time step restriction Δ t ≤Δ x.
§ MASS CONSERVATION
Besides being energy stable, using a conservative truncation step ensures local conservation of mass. That is, we use the truncation strategy following <cit.> which works as follows:
* Compute 𝐊 = 𝐗^1 𝐒^1 and split it into two parts 𝐊 = [𝐊^cons, 𝐊^rem], where 𝐊^cons is the first column and 𝐊^rem consists of the remaining columns of 𝐊.
Analogously, split 𝐕^1 = [𝐕^cons, 𝐕^rem], where 𝐕^cons is the first column and 𝐕^rem consists of the remaining columns of 𝐕^1.
* Derive 𝐗^cons = 𝐊^cons / ‖𝐊^cons‖ and 𝐒^cons = ‖𝐊^cons‖.
* Perform a QR-decomposition of 𝐊^rem to obtain 𝐊^rem = 𝐗^rem𝐒^rem.
* Compute the singular value decomposition of 𝐒^rem = 𝐔 Σ 𝐖^⊤ with Σ = (σ_j). Given a tolerance ϑ, choose the new rank r_1 ≤ 2r as the minimal number such that
(∑_j=r_1+1^2rσ_j^2)^1/2≤ϑ.
Let 𝐒^rem be the r_1 × r_1 diagonal matrix with the r_1 largest singular values and let 𝐔^rem and 𝐖^rem contain the first r_1 columns of 𝐔 and 𝐖, respectively. Set 𝐗^rem = 𝐗^rem𝐔^rem and 𝐕^rem = 𝐕^rem𝐖^rem.
* Set 𝐗 = [𝐗^cons, 𝐗^rem] and 𝐕 = [𝐞_1, 𝐕^rem]. Perform a QR-decomposition of 𝐗 = 𝐗^1 𝐑^1 and 𝐕 = 𝐕^1 𝐑^2.
* Set
𝐒^1 = 𝐑^1 [ 𝐒^cons 0; 0 𝐒^rem ]𝐑^2,⊤.
The updated solution at time t_1 = t_0+ Δ t is then given by 𝐮^1 = 𝐗^1𝐒^1𝐕^1,⊤.
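A compact Python sketch of this truncation step (ours; the variable names are not from the original implementation) reads:

import numpy as np

def conservative_truncation(X1, S1, V1, tol):
    # Truncate X1 @ S1 @ V1.T while leaving the zeroth-order moment untouched.
    K = X1 @ S1
    K_cons, K_rem = K[:, :1], K[:, 1:]                 # first column carries m = 0
    S_cons = np.array([[np.linalg.norm(K_cons)]])
    X_cons = K_cons / S_cons[0, 0]
    X_rem, S_rem = np.linalg.qr(K_rem)
    U, sig, Wt = np.linalg.svd(S_rem)
    tails = np.sqrt(np.cumsum((sig ** 2)[::-1]))[::-1]
    r1 = max(1, next((j for j, t in enumerate(tails) if t <= tol), len(sig)))
    X_rem, V_rem = X_rem @ U[:, :r1], V1[:, 1:] @ Wt.T[:, :r1]
    # reassemble, re-orthonormalize, and rotate the coefficient matrix accordingly
    Xn, R1 = np.linalg.qr(np.hstack([X_cons, X_rem]))
    e1 = np.zeros((V1.shape[0], 1)); e1[0, 0] = 1.0
    Vn, R2 = np.linalg.qr(np.hstack([e1, V_rem]))
    S_blk = np.block([[S_cons, np.zeros((1, r1))],
                      [np.zeros((r1, 1)), np.diag(sig[:r1])]])
    return Xn, R1 @ S_blk @ R2.T, Vn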
Then, the scheme is conservative:
The scheme (<ref>) is locally conservative. That is, for the scalar flux at time t_n denoted by Φ_j^n = X_jℓ^nS_ℓ m^n V_0m^n, where n∈{0,1} and u_jk^0 = X_jℓ^0S_ℓ m^0 V_km^0 it fulfills the conservation law
Φ^1_j = Φ^0_j - Δ tD^x_ji u_iℓ^0 A_0ℓ+Δ tD^xx_jiu_iℓ^0 |A|_0ℓ + σΔ t(B_j^1-Φ^1_j),
B_j^1 = B_j^0 + σΔ t(Φ^1_j-B_j^1).
From the conservative truncation step and the basis augmentation (<ref>) we know that
Φ^1_j = X_jℓ^1 S_ℓ mV_0m = u_j0^1.
Hence, with (<ref>) and (<ref>) we get that
Φ^1_j = X_jℓ^0S_ℓ m^0 V_0m^0 - Δ tD^x_jiX_in^⋆S_nm^0 V_ℓ m^⋆ A_0ℓ+Δ tD^xx_jiX_in^⋆S_nm^0 V_ℓ m^⋆ |A|_0ℓ + σΔ t(B_j^1-Φ^1_j),
B_j^1 = B_j^0 + σΔ t(Φ^1_j-B_j^1).
Since the basis augmentation with 𝐗^0 and 𝐕^0 ensures X_jℓ^0S_ℓ m^0 V_0m^0 = X_in^⋆S_nm^0 V_ℓ m^⋆ = u_iℓ^0, the local conservation law (<ref>) holds.
Hence, equipped with a conservative truncation step, the energy stable algorithm presented in (<ref>) conserves mass locally. To give an overview of the algorithm, we visualize the main steps in Figure <ref>.
§ NUMERICAL RESULTS
In this section we give numerical results to validate the proposed DLRA algorithm. The source code to reproduce the presented numerical results is openly available, see <cit.>.
§.§ 1D Plane source
We consider the thermal radiative transfer equations as described in (<ref>) on the spatial domain D = [-10,10]. As initial distribution we choose a cutoff Gaussian
u(t=0,x) = max(10^-4, 1/√(2πσ_IC^2)exp(-(x-1)^2/2σ_IC^2)),
with standard deviation σ_IC=0.03. Particles are initially centered around x=1 and move in all directions μ∈ [-1,1]. The initial value for the energy equilibrium is set to B^0=1 and we start the computations with a rank of r=20. The opacity σ is set to the constant value of 1. Note that this setting is an extension of the so-called plane source problem, which is a common test case for the radiative transfer equation <cit.>. In the context of dynamical low-rank approximation it has been studied in <cit.>. We compare the solution of the full coupled-implicit system without DLRA, which reads
u_jk^1 = u_jk^0-Δ tD^x_jiu_iℓ^0 A_kℓ+Δ tD^xx_jiu_iℓ^0 |A|_kℓ + σΔ t(B_j^1δ_k0-u_jk^1)
B_j^1 = B_j^0 + σΔ t(u_j0^1-B_j^1)
to the presented energy stable mass conservative DLRA solution from (<ref>). The total mass at any time t_n shall be defined as m^n = Δ x ∑_j (u_j0^n + B_j^n). As computational parameters we use n_x = 1000 cells in the spatial domain and N=500 moments to represent the directional variable. The time step size is chosen as Δ t = CFL·Δ x with a CFL number of CFL = 0.99. In Figure <ref> we present the full solution f(x,μ), the scalar flux Φ = ⟨ f⟩_μ and the temperature T at the end time t_end=8. Further, the evolution of the rank r in time, and the relative mass error |m^0-m^n|/‖ m^0‖ are shown. One can observe that the DLRA scheme captures well the solution of the full system. For a chosen tolerance of ϑ = 10^-1‖Σ‖_2 the rank increases up to r=24 before it reduces again. The relative mass error is of order 𝒪(10^-14). Hence, our proposed scheme is mass conservative up to machine precision.
§.§ 1D Su-Olson problem
For the next test problem we add a source term Q(x) to the previously investigated equations leading to
∂_t f(t,x,μ) + μ∂_x f(t,x,μ) = σ(B(t,x)-f(t,x,μ))+ Q(x),
∂_t B(t,x) = σ(⟨ f(t,x,·)⟩_μ-B(t,x)).
In our example we use the source function Q(x) = χ_[-0.5,0.5](x)/a with a= 4 σ_SB/c being the radiation constant depending on the Stefan-Boltzmann constant σ_SB and the speed of light c. Again we
consider the spatial domain D = [-10,10] and choose the initial condition
u(t=0,x) = max(10^-4, 1/√(2πσ_IC^2)exp(-(x-1)^2/2σ_IC^2)),
with standard deviation σ_IC=0.03 and particles moving in all directions μ∈ [-1,1]. The initial value for the energy equilibrium is set to B_0 = 50, and the initial value for the rank to r=20. The opacity σ is again chosen to have the constant value of 1. As computational parameters we use n_x = 1000 cells in the spatial domain and N=500 moments to represent the directional variable. The time step size is chosen as Δ t = CFL·Δ x with a CFL number of CFL = 0.99. The isotropic source term generates radiation particles flying through and interacting with a background material. The interaction is driven by the opacity σ. In turn, particles heat up the material, leading to a travelling temperature front, also called a Marshak wave. Again, this travelling heat wave can lead to the emission of new particles from the background material, generating a particle wave. At the time t_end= 3.16 these waves can be seen in Figure <ref>, where we display numerical results for the full solution f(x,μ), the scalar flux Φ = ⟨ f⟩_μ and the temperature T. We compare the solution of the full coupled-implicit system, which differs from (<ref>) by an additional source term, to the presented energy stable, mass conservative DLRA solution from (<ref>). Further, the evolution of the rank in time is presented for a tolerance parameter of ϑ = 10^-2‖Σ‖_2. Note that due to the source term there is no mass conservation in this example.
§.§ 2D Beam
To demonstrate the computational benefits of the presented method we extend it to a two-dimensional setting. The set of equations becomes:
∂_t f(t,𝐱,Ω) + Ω·∇_𝐱 f(t,𝐱,Ω) = σ(B(t,𝐱)-f(t,𝐱,Ω)),
∂_t B(t,𝐱) = σ(⟨ f(t,𝐱,·)⟩_Ω-B(t,𝐱)).
For the numerical experiments let 𝐱=(x_1,x_2) ∈ [-1,1] × [-1, 1], Ω =(Ω_1, Ω_2, Ω_3) ∈𝒮^2 and σ=0.5. The initial condition of the two-dimensional beam is given by
f(t=0,𝐱,Ω) = 10^6·1/2πσ_x^2exp(-‖𝐱‖^2/2σ_x^2) ·1/2πσ_Ω^2exp(-(Ω_1 - Ω^⋆ )^2 + (Ω_3 - Ω^⋆ )^2/2σ_Ω^2),
with Ω^⋆ = 1/√(2), σ_x = σ_Ω = 0.1. The initial value for the energy equilibrium is set to B^0=1, the initial value for the rank to r=100. The total mass at any time t_n shall be defined as m^n = Δ x_1 Δ x_2 ∑_j (u_j0^n + B_j^n ). We perform our computations on a spatial grid with N_CellsX=500 points in x_1 and N_CellsY=500 points in x_2. Further, we use a polynomial degree of n_PN=29 corresponding to 900 expansion coefficients in angle. The time step size is chosen as Δ t = CFL·Δ x with a CFL number of CFL=0.7.
We compare the solution of the two-dimensional full system corresponding to (<ref>) to the two-dimensional DLRA solution corresponding to (<ref>). In Figure <ref> we show numerical results for the scalar flux Φ = ∫_𝒮^2 f(t,𝐱, ·) dΩ and the temperature T at the time t=0.5. For this setup the computational benefit of the DLRA method is significant as the run time compared to the solution of the full problem is reduced by a factor of approximately 8 from 20023 seconds to 2509 seconds.
For the evolution of the rank r in time and the relative mass error |m^0-m^n|/‖ m^0‖ we consider a time interval up to t=1.5. In Figure <ref> one can observe that for a chosen tolerance parameter of ϑ = 5 · 10^-4‖Σ‖_2 the rank increases but does not approach its allowed maximal value of 100. Further, the relative mass error remains constant over time, confirming the mass conservation property of the DLRA method.
§ CONCLUSION AND OUTLOOK
We have introduced an energy stable and mass conservative dynamical low-rank algorithm for the Su-Olson problem. The key points leading to these properties are treating both equations in a coupled-implicit way and using a mass conservative truncation strategy. Numerical examples in both 1D and 2D validate the accuracy of the DLRA method. Its efficiency can especially be seen in the two-dimensional setting. For future work, we propose to implement the parallel integrator of <cit.> to further enhance the efficiency of the DLRA method. Moreover, we expect the conclusions drawn for this Su-Olson system to carry over to the Boltzmann-BGK system and the DLRA algorithm presented in <cit.>, regarding stability and an appropriate choice of the time step size.
§ ACKNOWLEDGEMENTS
Lena Baumann acknowledges support by the Würzburg Mathematics Center for Communication and Interaction (WMCCI) as well as the Stiftung der Deutschen Wirtschaft. The work of Jonas Kusch was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 491976834.
§ AUTHOR CONTRIBUTION STATEMENT (CREDIT)
Lena Baumann: analysis of energy stability, conceptualization, implementation, plotting, simulation of numerical tests, validation, visualization, writing - original draft
Lukas Einkemmer: analysis of energy stability, conceptualization, initial idea of numerical scheme, proofreading and corrections, supervision
Christian Klingenberg: conceptualization, proofreading and corrections, supervision
Jonas Kusch: analysis of energy stability, conceptualization, implementation, initial idea of numerical scheme, simulation/setup of numerical tests, supervision, visualization, writing - original draft
|
http://arxiv.org/abs/2307.04034v1 | 20230708191132 | Robust Universal Inference | [
"Beomjo Park",
"Sivaraman Balakrishnan",
"Larry Wasserman"
] | stat.ME | [
"stat.ME"
] |
Robust Universal Inference
Beomjo Park, Sivaraman Balakrishnan, and Larry Wasserman
Department of Statistics & Data Science
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213.
August 12, 2023
In statistical inference, it is rarely realistic that the hypothesized statistical model is
well-specified, and consequently it is important to understand the effects of misspecification on inferential procedures.
When the hypothesized statistical model is misspecified, the natural target of inference is a projection of the data generating distribution onto the model.
We present a general method for constructing valid confidence sets for such projections, under weak regularity conditions, despite possible model misspecification.
Our method builds upon the universal inference method of <cit.> and
is based on inverting a family of split-sample tests of relative fit. We study settings in which our methods yield either exact or approximate,
finite-sample valid confidence sets for various projection distributions. We study rates at which the resulting confidence sets shrink around the target of inference and complement these results
with a simulation study.
§ INTRODUCTION
One of the broad goals of statistical inference is to draw conclusions about a population from a sample of the population. This goal is typically facilitated by the use of a
statistical model 𝒫, a collection of distributions, which the statistician hypothesizes will contain a useful approximation to the data generating distribution.
The well-specified case is when the data generating distribution P^* belongs to 𝒫, and the misspecified case is when this does not necessarily hold.
In the misspecified case, the target of inference is usually
a projection distribution. Formally, given a divergence ρ which maps a pair of distributions to ℝ_+, we can define the (forward) projection[We tacitly assume that the projection exists and is unique. When the projection is not unique our inferential guarantees always hold for any (arbitrary) fixed choice of the projection P̅. Characterizing the existence of a projection distribution (for f-divergences) has received some attention in past work <cit.>.] of the distribution P^*
onto the statistical model 𝒫 as:
P̅ := argmin_P ∈𝒫 ρ(P^* ‖ P).
The general goal of our paper is to construct uniformly valid confidence sets for P̅ assuming only weak regularity conditions
on the distribution P^* and the statistical model 𝒫.
We let X_1,…, X_n be an i.i.d sample from a distribution P^* ∈𝒬 defined on ℝ^d, where 𝒬⊇𝒫 is a class of distributions
satisfying weak regularity conditions. We wish to construct (honest)
confidence sets, C_α(X_1,…, X_n)
such that,
inf_P^* ∈𝒬ℙ_P^*(P̅∈ C_α(X_1,…, X_n)) ≥ 1 - α.
In parametric statistical models, in the well-specified case,
the likelihood-ratio test, and confidence sets
obtained from asymptotically Gaussian estimators, are the main inferential tools for constructing hypothesis tests and confidence intervals. In the misspecified case,
one can develop analogous tools for constructing tests and intervals for the Kullback-Leibler (KL) projection parameter,
using sandwich estimates for the variance <cit.>.
The validity of these methods, in both the well-specified and misspecified cases, relies on large sample asymptotic theory and requires that the statistical
model 𝒫 and the sampling distribution satisfy strong regularity conditions.
In recent work <cit.>, we introduced a procedure (described in more detail in Section <ref>)
based on data-splitting to construct uniformly, finite-sample valid likelihood-ratio confidence sets under no regularity conditions. This work showed that,
in the well-specified setting, sample-splitting can yield practical, finite-sample valid inference, even for irregular statistical models, often at a surprisingly small statistical price.
The challenges of inference under weak regularity conditions are exacerbated in the misspecified setting.
In contrast to the well-specified case where the target of inference is unambiguous, in the misspecified case there are many natural targets of inference. Each choice of
the divergence ρ in (<ref>), yields a different target and in most cases these targets will have drastically different properties.
This in turn poses significant
challenges in constructing a unified framework for statistical inference in the misspecified setting. Under weak regularity conditions, the KL projection distribution
can be an unstable inferential target, wherein small perturbations to the data-generating distribution P^* can lead to dramatic shifts in the target P̅. From a theoretical standpoint,
this makes finite-sample valid inference for the KL projection distribution
challenging, unless strong regularity conditions are imposed. From a practical
standpoint, these instabilities can make the KL projection distribution an undesirable target, and in these cases it is essential to develop a flexible family of methods that can target
other (more stable) projection distributions.
To address these challenges, we develop
a re-interpretation of the universal inference method <cit.> as inverting a particular family of pairwise likelihood-ratio tests. This interpretation
brings into focus the key building block of universal inferential methods – pairwise hypothesis tests. Building on this insight
we show that one can develop robust universal inference procedures by inverting appropriate families of robust pairwise tests. We then study
the design and properties of robust pairwise tests, and relate them to the coverage and size properties of our proposed robust universal inference method.
§.§ Related Work
Asymptotic statistical inference, in both the well-specified and misspecified cases, is a topic
of classical interest.
Some entry points to the vast literature on this topic include the reference books <cit.>. Results in this literature <cit.>
typically leverage strong regularity conditions to determine the asymptotic distribution of a point estimate (such as the Maximum Likelihood Estimator (MLE)), and use the asymptotic distribution of the estimate
to construct (asymptotically valid) confidence sets.
Our work is motivated in part by a recent line of work <cit.>, and more classical work <cit.>,
where sample-splitting is used to avoid the strong regularity conditions typically needed for valid statistical inference.
The focus on statistical inference under weaker regularity conditions, despite model misspecification, is the central theme of work in robust statistics <cit.>.
One of the best understood methods for constructing robust estimators is to select, from a set of candidates, one which wins a carefully set up tournament – an idea which goes back to <cit.>, and
others. At the heart of these tournament estimators are pairwise selectors, which attempt to robustly select one of a pair of candidates, which provide a better relative fit to the sampling distribution.
These robust pairwise tests have been used to great effect in robust estimation, and our work highlights their usefulness in constructing assumption-light confidence sets.
§.§ Outline
The rest of this paper is organized as follows. Section <ref> provides some background. We briefly introduce the universal inference procedure, and develop a new perspective on it.
Section <ref> motivates the methods we study in this paper by pinpointing some of the failures of universal inference.
Section <ref> describes a general strategy to construct confidence sets for projection distributions, and highlights the importance of designing tests of relative fit.
Section <ref> highlights some important examples where we are able to build on prior work in order to design exact and approximate tests of relative fit for different choices of the underlying divergence measure.
Section <ref> studies the size of the resulting confidence sets.
Section <ref> demonstrates some of the strengths of our proposed inference methods
based on illustrative numerical examples. We conclude in Section <ref> with a brief discussion of future work.
§ BACKGROUND
We let X_1,…, X_n be an i.i.d sample from a distribution P^* ∈𝒬 defined on ℝ^d, and
we let 𝒫 denote our working statistical model.
Throughout the paper, the collection of distributions 𝒬 will be quite general, typically only satisfying some weak regularity conditions.
§.§ Universal Inference
Our starting point is our prior work <cit.> which
introduced a procedure
based on data-splitting to construct uniformly, finite-sample valid confidence sets under weak regularity conditions.
Importantly, the validity guarantees of universal inference
require the statistical model to be well-specified. The universal inference procedure is to:
* Split the data 𝒟 := {X_1,…,X_n} into two sets 𝒟_0 and 𝒟_1.
* On the set 𝒟_1 calculate any estimate P̂_1 (e.g., P̂_1 could be the MLE in the model 𝒫).
* Assume that the distributions in 𝒫 have densities (denoted with lower-case symbols) with respect to a dominating measure λ. We let ℒ_0(P) denote the likelihood of the distribution P evaluated on the samples in 𝒟_0:
ℒ_0(P) := ∏_i ∈𝒟_0 p(X_i),
and define ℒ_0(P̂_1) analogously.
Then construct the confidence set,
C_α(X_1,…,X_n) = { P ∈𝒫: ℒ_0(P)/ℒ_0(P̂_1) ≥α}.
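As a simple illustration of this construction (a toy example of ours, not taken from <cit.>), the following Python sketch computes the universal confidence set for the mean of a unit-variance Gaussian model, with the MLE on 𝒟_1 as the pilot estimate and a grid search over candidate means.

import numpy as np
from scipy.stats import norm

def split_lrt_interval(x, alpha=0.1, seed=0):
    # split the data, fit the pilot on D_1, and keep every candidate mean theta
    # whose split likelihood ratio satisfies L_0(theta) / L_0(theta_hat) >= alpha
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    theta_hat = d1.mean()                              # pilot estimate (MLE on D_1)
    grid = np.linspace(x.mean() - 3, x.mean() + 3, 1201)
    log_ratio = np.array([norm.logpdf(d0, loc=th).sum() for th in grid])
    log_ratio -= norm.logpdf(d0, loc=theta_hat).sum()
    kept = grid[log_ratio >= np.log(alpha)]
    return kept.min(), kept.max()

x = np.random.default_rng(1).normal(0.3, 1.0, size=200)
print(split_lrt_interval(x))                           # approximate confidence interval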
In the well-specified case, <cit.> show (in their Theorem 1) that,
under no additional regularity conditions, C_α is a finite-sample
valid 1 - α confidence set for
the distribution P^*.
§.§ A Re-Interpretation of Universal Inference
To motivate the main proposal of this paper it is useful to re-visit (and generalize)
the procedure described above, via the lens of inverting a family of hypothesis tests. The basic idea is classical, and is sometimes referred
to as the duality between confidence sets and hypothesis tests.
Formally, given samples X_1,…, X_n ∼ P^*,
suppose we have a family of tests
ϕ_P: {X_1,…,X_n}↦{0,1}
for testing the null hypothesis H_0: P^* = P. Here the test function
ϕ_P takes the value 1 to indicate a rejection of the null hypothesis and takes the value 0 otherwise.
If the family of tests is valid, i.e. they control the Type I error,
𝔼_P [ϕ_P(X_1,…,X_n) ]≤α, ∀ P ∈𝒫,
then the following confidence set,
C_α(X_1,…,X_n) := { P ∈𝒫: ϕ_P = 0 },
is uniformly valid when the statistical model is correctly specified, i.e.
inf_P^* ∈𝒫ℙ_P^*(P^* ∈ C_α(X_1,…, X_n)) ≥ 1 - α.
Although this is a general recipe for constructing valid confidence sets, it does not provide the statistician much guidance in designing
tests which might lead to small confidence sets.
Universal inference is based on the idea that one can use a separate sample
to construct an accurate estimate P̂_1. We can then construct our family of tests, on the remaining samples, to have high power in distinguishing
the sampling distribution from this pilot estimate. Formally, we could choose our family of tests
to have high power to distinguish the hypotheses:
H_0: P^* = P, versus
H_1: P^* = P̂_1.
This use of a separate sample to construct a pilot estimate,
simplifies the design of the tests to invert considerably since now we can focus on
tests that have strong guarantees for distinguishing this simple null-versus-simple alternative. Indeed, universal inference uses
the likelihood-ratio test for distinguishing these hypotheses, resulting in tests ϕ_P of the form:
ϕ_P = 𝕀[ ℒ_0(P̂_1)/ℒ_0(P) > t(P, P̂_1) ],
for a choice of the threshold t(P, P̂_1) which ensures that the condition in (<ref>) is satisfied. Although it is possible to determine optimal thresholds in the likelihood-ratio tests above, this can be practically cumbersome since these thresholds depend on both the pilot estimate P̂_1 and the null
hypothesis P under consideration. The work of <cit.> further shows that the universal threshold t(P, P̂_1) = 1/α suffices to ensure the condition in (<ref>). To summarize,
one can view the universal inference confidence set (<ref>) as arising by inverting a family of likelihood-ratio tests designed to distinguish
each candidate distribution P from a pilot estimate P̂_1.
We emphasize that the universal inference procedure, and its reinterpretation described above rely crucially on correct model-specification to ensure validity.
For instance, inverting a family of tests that satisfies (<ref>) is no longer meaningful when the model is misspecified.
However, the testing interpretation suggests that one might develop novel variants of the universal inference procedure which
are useful despite model-misspecification, by formulating appropriate robust hypotheses and designing robust tests for distinguishing them. We make these ideas
precise in Section <ref>.
§.§ Divergences
Throughout this paper, we make frequent use of different divergences between pairs of
probability distributions. We briefly introduce them here. We let P and Q be distributions defined on ℝ^d with
densities p and q with respect to a common dominating measure λ.
The Hellinger distance is defined as:
H(P,Q) = 1/√(2)( ∫ (√(p) - √(q))^2 dλ)^1/2,
and the Kullback-Leibler (KL) divergence is defined as:
KL(P ‖ Q) =
∫(log p/q) dP, if P is dominated by Q,
∞, otherwise.
The family of density power divergences <cit.> is defined for a parameter β≥ 0 as,
D_β (P ‖ Q) =
∫{ q^1+β - ( 1+1/β) q^β p + 1/β p^1+β} d λ, if β > 0,
KL(P ‖ Q), if β = 0,
where D_0 = KL is defined by taking the limit β→ 0.
Finally, the family of Integral Probability Metrics (IPMs) <cit.>
is defined as
d_ℱ(P, Q) = sup_f ∈ℱ| 𝔼_P (f) - 𝔼_Q (f) |
where ℱ is a symmetric class (i.e., f ∈ℱ⟹ - f ∈ℱ) of real-valued bounded measurable functions on the domain of P and Q.
Important special cases of IPMs include the Total Variation distance (TV, where ℱ is the collection of functions with sup-norm at most 1), the Wasserstein-1 distance (where ℱ is the collection of 1-Lipschitz functions) and
the Maximum Mean Discrepancy (MMD, where ℱ is the unit ball of a Reproducing Kernel Hilbert Space with kernel k).
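To fix ideas, the short Python sketch below (ours, purely illustrative) evaluates the Hellinger distance, the KL divergence, and the density power divergence between two distributions on a finite support; for densities the sums would be replaced by integrals against the dominating measure λ.

import numpy as np

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def kl(p, q):
    if np.any((q == 0) & (p > 0)):
        return np.inf                        # P is not dominated by Q
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def dpd(p, q, beta):
    # density power divergence D_beta(P || Q); beta -> 0 recovers the KL divergence
    if beta == 0:
        return kl(p, q)
    return np.sum(q ** (1 + beta) - (1 + 1 / beta) * q ** beta * p
                  + (1 / beta) * p ** (1 + beta))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.25, 0.25, 0.5])
print(hellinger(p, q), kl(p, q), dpd(p, q, beta=0.5))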
§ FAILURES OF UNIVERSAL INFERENCE
To provide some motivation and intuition for the methods we propose in this paper, it is useful to understand some of the failures of the universal inference framework
when the statistical model is misspecified, and the target of inference is the KL projection.
§.§ Unbounded Likelihood-Ratios
The behavior of likelihood-ratio based methods can be sensitive to the tail behavior of likelihood-ratios.
The following simple example illustrates that under model misspecification, universal inference can fail to cover the KL projection parameter. These pathologies
do not arise when the statistical model is correctly specified, and the challenges in this example arise due to an interplay between poorly behaved likelihood-ratios and model misspecification.
This example also serves to highlight the fact that the KL projection parameter can in some cases be an undesirable inferential target. We let Ber(p) denote the Bernoulli distribution with parameter p.
Suppose we observe
X_1,…,X_n ∼ P^* := Ber(ϵ_n) for some non-negative 0 < ϵ_n < (1-α)/n.
We use the statistical model 𝒫 = {Ber(p) : p ∈{0, 1/2 }}.
Suppose we consider the pilot estimator to be the MLE,
P̂_1 = argmax_P ∈𝒫 ℒ_1(P).
Then, for all sufficiently large n ≥ n_0, where n_0 only depends on α, the split LRT confidence set in (<ref>), with an equal sized split into 𝒟_0 and 𝒟_1,
fails to cover the KL projection at the nominal level.
The proof is in Appendix <ref>. The intuition is however clear. In this example the KL projection distribution is Ber(1/2).
For ϵ_n ≪ 1/n, with high probability the samples X_1,…,X_n are all 0. Consequently, the MLE with high probability
will be Ber(0). Furthermore, the split sample likelihood ℒ_0 will be much higher for Ber(0) than Ber(1/2), and consequently Ber(1/2)
will not be included in the universal set.
In this example likelihood-ratios are unbounded and as a consequence the KL divergence is an unstable function of the model parameters, i.e.
when ϵ_n = 0, KL(Ber(ϵ_n) ‖ Ber(0)) is 0, but is ∞ for any ϵ_n > 0. In such cases,
the finite-sample (log)-likelihood-ratio is a poor estimate of the population KL divergence, and this poses significant challenges for
finite-sample valid inference. From a practical standpoint, a more reasonable inferential target could be a different, stabler projection distribution
(e.g., the Hellinger or TV projection distribution) and we address this in Sections <ref> and <ref>.
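The failure in the Bernoulli example above can be reproduced with a short Monte Carlo experiment; the sketch below (ours, with an arbitrary admissible choice of n and ϵ_n) estimates how often the split LRT set contains the KL projection Ber(1/2), and the resulting coverage is far below the nominal level.

import numpy as np

def bernoulli_loglik(p, data):
    ones = data.sum()
    if p == 0:
        return 0.0 if ones == 0 else -np.inf
    return ones * np.log(p) + (len(data) - ones) * np.log(1 - p)

def covers_projection(n, eps, alpha, rng):
    # one run of the split LRT for the model {Ber(0), Ber(1/2)} under Ber(eps)
    x = rng.binomial(1, eps, size=n)
    d1, d0 = x[: n // 2], x[n // 2:]
    p_hat = max([0.0, 0.5], key=lambda p: bernoulli_loglik(p, d1))   # pilot: MLE on D_1
    # Ber(1/2) is retained iff L_0(1/2) / L_0(p_hat) >= alpha
    return bernoulli_loglik(0.5, d0) - bernoulli_loglik(p_hat, d0) >= np.log(alpha)

rng = np.random.default_rng(0)
n, eps, alpha = 1000, 1e-5, 0.1                 # eps << (1 - alpha) / n, as in the example
coverage = np.mean([covers_projection(n, eps, alpha, rng) for _ in range(2000)])
print(coverage)                                  # essentially zero, far below 1 - alpha = 0.9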
§.§ Failure Despite Bounded Likelihood-Ratios
In the previous example it is clear that unbounded likelihood-ratios can result in pathologies which are challenging to address with finite-sample valid inference.
However, even when all likelihood-ratios in the model are well-behaved, universal inference can fail to cover the KL projection parameter. It is important to note that except under the stringent
condition that the underlying model is convex (see Section 6 of <cit.>), universal inference has no guaranteed coverage when the model is misspecified.
Suppose we obtain X_1,…,X_n ∼ P^* := Ber(0.5 + ϵ_n) for some small, positive 0 <ϵ_n ≤ c/n, where c > 0 is a small positive universal constant.
Our hypothesized model consists of two distributions, 𝒫 = {Ber(p) : p ∈{1/4, 3/4 }}.
Suppose we take the pilot estimator to be the MLE (<ref>).
Then, for all sufficiently large n (depending only on α) the split LRT confidence set in (<ref>), with an equal sized split into 𝒟_0 and 𝒟_1,
fails to cover the KL projection at the nominal level.
We give a formal proof in Appendix <ref>. The KL projection distribution is Ber(3/4). We show that the pilot estimate with probability near 1/2 will
be the distribution Ber(1/4), and further with probability near 1/2 the KL projection Ber(3/4) will have a much smaller split sample likelihood than P̂_1. As a direct consequence,
universal inference will fail to cover the projection distribution Ber(3/4).
In contrast to the previous example, this example is much less pathological. All the relevant likelihood-ratios are bounded, and the log-likelihood is a consistent
estimate of the KL divergence. However, even in this relatively benign example universal inference fails.
We show in Section <ref> that a simple modification to the universal inference procedure fixes this issue when the relevant likelihood-ratios are bounded, and ensures correct coverage.
In order to focus on the main issues, we have illustrated the failure of universal inference when the pilot estimator is the MLE. Indeed, part of the appeal of universal inference is that its
coverage guarantees hold, in the well-specified case for any pilot estimate (including the MLE).
Though we do not pursue this here, it is straightforward to extend these examples to show that both failures persist irrespective of
how the pilot is chosen, i.e. the failures of universal inference that we highlight are driven by the second stage (of constructing the confidence set) and not by the first stage (of constructing a reasonable pilot estimate).
These examples set the stage for the methodological development of the rest of the paper. To address problems of the first type we recommend targeting a different
projection parameter (for instance, the TV or Hellinger projection, in Sections <ref> and <ref>), and to address problems of the second type we develop methods which guarantee coverage
of the KL projection parameter when the likelihood-ratios are uniformly upper bounded or more generally have finite 2 + ξ moments for some ξ > 0 (see Section <ref>).
§ ROBUST UNIVERSAL INFERENCE
In this section, we present a simple but powerful pair of general results which yield exact and approximate universal confidence sets. The workhorse of these results
are tests of relative fit which we first briefly introduce before showing how these tests can be inverted to derive robust confidence sets.
§.§ Tests of Relative Fit
Suppose that we are given samples X_1,…, X_n ∼ P^*, together with a pair of candidate distributions (P_0, P_1) ∈𝒫^2, and a divergence measure ρ.
With this setup in place, we now consider a family of tests ϕ_P_0, P_1 to distinguish the hypotheses:
H_0: ρ(P^* ‖ P_0) ≤ρ(P^* ‖ P_1), versus
H_1: ρ(P^* ‖ P_0) > ρ(P^* ‖ P_1).
We refer to the tests ϕ_P_0, P_1 as exact tests of relative fit.
Notice that in contrast to the classical setting, where we hypothesize that one of the distributions (P_0, P_1) truly generated
the samples, in the misspecified setup
this assumption is no longer tenable.
Instead, we hypothesize that one of the distributions (P_0, P_1) is closer to the data generating distribution.
In general, the two hypotheses are no longer simple hypotheses and we need to take some care in designing the family
of tests ϕ_P_0, P_1. The design of tests of relative fit (and closely related variants) have a rich history and form the basis for a class of tournament-based robust estimators
<cit.>.
For divergences like the Total Variation and the Hellinger distance, designing exact tests of relative fit can require strong regularity conditions akin to those that would be required
to estimate these divergences. Surprisingly, in these cases, it is still possible to design approximate tests of relative fit
under weak regularity conditions. More formally, suppose that for some ν≥ 1, we can design a test for the following null hypothesis:
H_0: νρ( P_0) ≤ρ( P_1).
We refer to tests for this hypothesis as approximate tests of relative fit ϕ_P_0, P_1,ν. Under the null hypothesis, the distribution P_0 is closer than P_1 to by a factor ν≥ 1, which can
ease the design of valid tests for this hypothesis.
Robust tests for null hypotheses of the form in (<ref>) (for the Hellinger distance) were introduced by <cit.> and are discussed in detail in the work of <cit.>. In the context
of estimation these approximate tests yield what are known as non-sharp oracle inequalities. In the context
of inference, as we explore further in Section <ref>, inverting approximate relative fit tests will yield weaker guarantees.
In Section <ref> we consider the design of tests of relative fit in concrete settings, but now proceed to study the implications of designing such tests
for the construction of robust confidence sets.
§.§ Exact and Approximate Robust Universal Confidence Sets
We now propose to construct a confidence set by inverting a family of tests of relative fit. This is similar in spirit to
the procedure described in Section <ref>.
§.§.§ Exact Robust Universal Confidence Sets
Suppose, for every ∈, the family of tests of relative fit ϕ_P_0, P_1 is valid, i.e. it controls the Type I error:
_[ϕ_P_0, P_1(X_1,…,X_n) ]≤α, ∀ (P_0, P_1) ∈_0
where _0 = { (P_0, P_1) ∈^2: ρ( P_0) ≤ρ( P_1)}.
Then, for any fixed P_1 ∈, the confidence set we construct is the set of candidates P_0 which we fail to reject:
C_α,n≡ C_α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, P_1 (X_1,…,X_n) = 0 }.
The following result shows that irrespective of the choice of P_1 the above construction yields a valid confidence set for the projection distribution:
For any fixed P_1 ∈, C_α,n is a uniformly valid (1-α) honest confidence set for the projection .
For any ∈,
_(∉ C_α,n )
= _( ϕ_, P_1 = 1 )
= _( ϕ_, P_1 ) ≤α
using (<ref>) since (, P_1) ∈_0 for any choice of P_1 ∈.
As in the well-specified case discussed earlier, this general result does not provide any guidance on how to choose P_1.
We follow the idea of universal inference and first construct an accurate estimate of from a separate sample _1 and then construct the family of split tests of relative fit ϕ_P_0, from the remaining samples _0. We call the resulting confidence set the exact Robust Universal Confidence set:
C_α,n≡ C_α (X_1,…, X_n) := {P_0∈: ϕ_P_0, (_0) = 0}.
Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n is a uniformly valid confidence set for , meaning that
inf_∈_ (∈ C_α, n) ≥ 1 - α.
The proof is straightforward noticing that conditional on _1, the claim reduces to the claim of Proposition <ref>. Concretely, for any ∈,
_ (∉ C_α, n)
=_ (ϕ_, )
= __1[ __0(ϕ_, (_0) | _1) ]
≤__1 (α) = α.
The robust confidence set will often contain both the pilot estimate as well as the projection distribution
(see Proposition <ref> in Appendix <ref> for a formal statement). This is similar to the classical universal inference procedure which in the well-specified case will often contain both the
pilot estimate and the true sampling distribution. In universal inference this suggests that in order to obtain small confidence sets,
we should aim to design to be a good estimate of the true sampling distribution . On
the other hand in the misspecified case, this suggests that we should design to be a good estimate of the projection . Specifically, our pilot estimate should
be tailored to the divergence measure ρ. We investigate the choice of and its effect on the size of the resulting confidence set further in Section <ref>.
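To make the inversion concrete, the following minimal Python sketch (our own illustration; the function names and the two-point Bernoulli instantiation are illustrative choices, not code from an existing implementation) splits the sample, fits a pilot on _1, and retains every candidate that the chosen test of relative fit fails to reject on _0:

import numpy as np
from scipy import stats

def robust_universal_set(model, fit_pilot, reject, X, alpha=0.05, seed=0):
    # split the sample, fit the pilot on D1, and invert the relative-fit tests on D0
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    D1, D0 = X[idx[:len(X) // 2]], X[idx[len(X) // 2:]]
    pilot = fit_pilot(D1)
    return [P0 for P0 in model if not reject(P0, pilot, D0, alpha)]

# toy instantiation: a two-point Bernoulli model with the split log-likelihood-ratio
# statistic compared against a studentized normal threshold
model = [0.25, 0.75]                                     # Bernoulli success probabilities
loglik = lambda p, x: x * np.log(p) + (1 - x) * np.log(1 - p)

def fit_pilot(D1):                                       # MLE over the two-point model
    return max(model, key=lambda p: loglik(p, D1).sum())

def reject(p0, p1, D0, alpha):
    T = loglik(p1, D0) - loglik(p0, D0)                  # T_i = log(p1_hat(X_i) / p0(X_i))
    se = T.std(ddof=1) / np.sqrt(len(T)) + 1e-12
    return T.mean() > stats.norm.ppf(1 - alpha) * se

X = (np.random.default_rng(1).random(200) < 0.5).astype(float)
print(robust_universal_set(model, fit_pilot, reject, X))

Any of the tests of relative fit designed in Section <ref> can be plugged in through the reject argument; only the pairwise test changes, not the inversion step.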
§.§.§ Approximate Robust Universal Confidence Sets
In some cases, e.g., for the Hellinger distance and the TV distance, designing exact robust tests will require some (potentially strong) regularity conditions.
However, in these cases one can design approximate tests of relative fit straightforwardly.
Suppose, for any ∈, the family of approximate tests of relative fit ϕ_P_0, P_1, ν controls the Type I error, i.e. satisfies (<ref>) with _0 = { (P_0, P_1) ∈^2 : νρ( P_0) ≤ρ( P_1)} for some ν≥ 1. We additionally make the mild assumption that our tests of relative fit do not reject (with probability at least 1-α) when comparing the relative fit of a distribution to itself, i.e.:
sup_∈_ [ϕ_P,P,ν] ≤α for any fixed P∈.
This condition will be true for all the tests we introduce in Section <ref>.
Let be any estimate of from _1.
Then, the approximate robust universal confidence set, akin to (<ref>), is obtained by inverting the family of valid split tests ϕ_P_0, , ν constructed from the remaining samples _0:
C_ν,α,n≡ C_ν,α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, , ν (_0) = 0 }.
This confidence set may not cover the projection distribution . We therefore relax our goal: instead, we aim to cover an approximate projection distribution.
More formally, we relax the target of inference to be the ν-approximate projection set _ν defined as
_ν = {P∈: ρ( P) ≤νρ() }.
If a set C is a ν-approximate confidence set, we define its coverage by
_(Q∈ C for some Q ∈_ν) =
_(_ν∩ C ≠∅).
Figure <ref> shows a schematic diagram to illustrate the notion of approximate coverage. When ν = 1, i.e. we invert an exact test, we guarantee that with probability at least 1 - α, the set
C_ν,α,n contains . On the other hand, when ν > 1 we only guarantee that the intersection of C_ν,α,n with the collection of ν-approximate projections (in cyan) is non-empty.
The set _ν is a collection of distributions that are as close to as (up to a factor ν). The approximate confidence set guarantee
is most meaningful when ν is close to 1, or when the model misspecification is not too extreme, i.e. ρ() is small.
Let ∈ be any estimate of based on _1. Suppose that our approximate relative fit tests are valid, and satisfy the condition in (<ref>).
Then, the approximate robust universal confidence set C_ν,α,n is a uniformly valid ν-approximate confidence set for :
inf_∈_ (_ν∩ C_ν,α, n∅) ≥ 1 - α.
Fix any ∈. Let the event E = {∈_ν}. On the event E, (<ref>) implies
_ (∉ C_ν,α,n | E) = __1 (__0 (ϕ_, ,ν (_0) | _1, E) | E) ≤α.
On the complement of E, i.e., ∉_ν, _0 contains (, ). Thus, an analogous argument to that in the proof of Theorem <ref> can be used.
Combining the two results, we obtain that, for all ∈,
_ (_ν∩ C_ν,α, n = ∅)
≤_ (∉ C_ν,α, n | E) (E) + _ (∉ C_ν,α, n | E^∁) (E^∁)
≤α.
As in the construction of the exact robust universal confidence set, one should aim to choose the pilot estimate as close as possible to .
In the exact setting, the choice of the pilot estimate does not affect the validity of the resulting set and only affects its size. However, in
constructing an approximate robust universal set,
if we can ensure the pilot is accurate, then our approximate validity guarantees improve. Concretely, for some sequence κ_n we define:
(κ_n) := {P∈ : ρ( P) ≤ρ() + κ_n}.
If we can ensure that the pilot estimate is contained in (κ_n) with probability at least 1 - β for some sequence κ_n,
then we can show that the constructed confidence set C_ν, α,n will intersect (κ_n) with high probability. For instance, if κ_n → 0
as n grows, then rather than simply intersecting the set of approximate projections _ν, we can now show that C_ν,α,n intersects a shrinking neighborhood
around . More formally we have the following result (we omit its proof since it follows the same arguments as in Theorem <ref>):
Let (κ_n_1) be defined as in (<ref>), and suppose that our pilot is accurate, i.e. we can ensure that with probability at least 1 - β, ∈(κ_n_1).
Suppose further that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then:
inf_∈_((κ_n_1) ∩ C_ν,α, n∅) ≥ 1 - α - β.
In this section, we have shown that inverting exact or approximate tests of relative fit yield robust exact or approximate confidence sets despite model-misspecification. We now turn our attention
to the design and analysis of these tests.
§ DESIGNING TESTS OF RELATIVE FIT
Our proposed method relies on designing valid tests of relative fit.
In this section,
we design exact tests of relative fit in KL and the density power divergences, and design approximate tests
for the Hellinger, TV and IPM-based divergences.
§.§ Kullback-Leibler Divergence
To design an exact test of relative fit for the KL divergence we make a simple observation
that there is a natural plug-in estimator of the difference in KL divergences. We can rewrite
the difference in KL divergences as:
( P) - () =
∫logp_1/p
where p and p_1 are the density of P and with respect to a common dominating measure. When
we obtain samples from this suggests the following
log split likelihood ratio test:
ϕ_P = [ 1/n_0∑_i∈_0 T_i (P,) > t_α (P, ) ],
T_i(P, ) ≡ T(X_i; P, ) = logp_1 (X_i)/p (X_i),
where _0 is an index set of _0 and t_α (P, ) is chosen to ensure validity. This test
was called the relative information fit test (RIFT) and studied in the work of <cit.> to study the relative
goodness-of-fit of two candidate estimates. In our paper,
we invert the same test in order to construct a robust universal confidence set.
When the variance of T_i(P, ) (conditional on 𝒟_1) is finite, we can derive the asymptotic
distribution (conditional on 𝒟_1)
of the log split likelihood ratio via the CLT.
Let T_n_0 (P,) = ∑_i∈_0 T_i(P, ) / n_0.
Conditional on _1 and assuming that the variance _ [T(P_0, P_1)] < ∞, for any (P_0,P_1) ∈^2,
√(n_0)( T_n_0 (P,) - _ T (P, ) ) ⇝(0, s_P^2 )
as n_0 →∞
where s_P^2 ≡ s_P^2 (_1) = _ [T_1^2] - _ T_1^2 can be estimated by ŝ_P^2 = 1/n_0∑_i∈_0 (T_i(P, ) - T_n_0)^2,
and ⇝ denotes convergence in distribution (conditional on 𝒟_1).
When assessing distributions P that are very similar to the pilot , it might be the case that s_P^2 is vanishingly small. Consequently, it is possible that ŝ_P/s_P does not converge in probability to 1, and the CLT with estimated variance ŝ_P^2 need not hold. Following <cit.>
we modify each T_i(P,) by adding a small amount of independent Gaussian noise, i.e. we replace each T_i(P, ) above by T_i(P, ) + δ Z_i where Z_1,…,Z_n_0∼ N(0,1),
for some small positive constant δ > 0 (we use δ = 0.01 but note that this has no practical effect and this modification simply eases the theoretical analysis). We denote the resulting
statistic by T_n_0,δ(P, ) and the corresponding empirical standard deviation by s_P,δ.
Then, we define the KL Relative Divergence Fit () set as
_, n≡_α, n () = {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)}
where z_α is a 1-α quantile of standard normal distribution. The following result provides asymptotic and non-asymptotic guarantees for the set _, n.
Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P_0,P_1 := _ |T(X; P_0, P_1) - _T(X; P_0, P_1)|^2+ξ are finite, for any (P_0,P_1) ∈^2, then
inf_∈_ (∈_, n) ≥ 1 - α - C n^-ξ/2,
where C < C' (1 + sup_(P_0,P_1) ∈𝒫^2 M_P_0,P_1) /δ^(2+ξ) for a universal constant C'.
We give a formal proof in Appendix <ref>. The claim follows as a consequence of the Berry-Esseen bound for the studentized statistic <cit.>. Some care is required
to handle the degeneracy (discussed above) when the variance of the summands can be small and to handle the randomness in the pilot estimate .
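As a concrete illustration, the sketch below (with hypothetical function names, and a Gaussian location grid chosen purely for simplicity) computes the set _, n: it forms the split log-likelihood ratios against the pilot, adds the small δ-perturbation, and keeps every candidate whose studentized mean falls below z_αŝ_P,δ/√(n_0):

import numpy as np
from scipy import stats

def kl_rdf_set(X, grid, alpha=0.05, delta=0.01, seed=0):
    # KL relative-divergence-fit set over a grid of N(theta, 1) candidates
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    D1, D0 = X[idx[:len(X) // 2]], X[idx[len(X) // 2:]]
    theta1 = D1.mean()                       # pilot: Gaussian-location MLE on D_1
    n0 = len(D0)
    Z = rng.standard_normal(n0)              # the small delta-corruption
    kept = []
    for theta in grid:
        T = (stats.norm.logpdf(D0, loc=theta1) - stats.norm.logpdf(D0, loc=theta)
             + delta * Z)                    # T_i = log(p_hat1(X_i)/p_theta(X_i)) + delta Z_i
        if T.mean() <= stats.norm.ppf(1 - alpha) * T.std(ddof=1) / np.sqrt(n0):
            kept.append(theta)
    return theta1, np.array(kept)

# data from a contaminated Gaussian; the model {N(theta, 1)} is misspecified
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.0, 1.0, 190), rng.normal(0.0, 5.0, 10)])
pilot, conf = kl_rdf_set(X, grid=np.linspace(-1.0, 1.0, 201))
print(round(float(pilot), 3), len(conf))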
We can now revisit the failures of universal inference discussed in Section <ref>. Recall that Example <ref> illustrates the instability of the KL projection because likelihood ratios may not be bounded.
The KL set does not resolve this weakness since the KL set uses the same split likelihood ratio statistic as for the universal confidence set <cit.> and its 2 + ξ
moment is not uniformly bounded in Example <ref>. However, the KL set does resolve the failure highlighted in Example <ref>.
Assume the same model as in
Example <ref>. Suppose we take the pilot estimator to
be the MLE. The KL set (<ref>), with an equal sized split into _0 and _1, covers the KL projection at the nominal level asymptotically.
This result follows directly from Theorem <ref>, since in this example all of the relevant log likelihood ratios are uniformly upper bounded.
It is worth noting that both the standard universal set, and the set _, n are based on essentially the same split likelihood ratio statistic,
and it is perhaps surprising that the standard universal set fails but _, n succeeds in guaranteeing coverage.
Despite being based on the same statistic, the two sets use very different thresholds. It is easy to see that one can rewrite the split
LRT confidence set in universal inference <cit.> as:
_sLRT= {P∈ : T_n_0 (P,) ≤log (1/α)/n_0}.
The threshold used in (non-robust) universal inference decays at the fast rate of order O(1/n_0) compared to that of the robust universal confidence set _, n
whose threshold decays at the rate O(1/√(n_0)). When the model is misspecified the (non-robust) universal set shrinks too rapidly leading to the failure highlighted in Example <ref>.
The confidence set _, n is constructed by approximating the distribution of the test statistic in (<ref>).
When likelihood ratios are uniformly upper bounded it is straightforward to construct finite-sample valid sets via an exponential tail bound.
For example, the finite-sample exact robust universal confidence set based on the Hoeffding bound is:
_HF,B,n = {P∈ : T_n_0 (P,) ≤ B√(log(1 / α)/2n_0)},
where B is such that |T_i (P_0, P_1) - 𝔼_ T(P_0,P_1)| ≤ B for all (P_0,P_1)∈^2. In this case we assume that the
upper bound B is known to the statistician. One can generalize this construction in various ways. When the statistic is assumed to only have finite
variance one can use Chebyshev's inequality to construct a finite-sample valid set. When in addition to boundedness the statistic might have small variance
one can use empirical Bernstein-type inequalities to construct finite-sample valid confidence sets. We explore these further in Appendix <ref>.
We compare the empirical performance of _, n and these finite-sample valid sets in Section <ref>.
§.§ Density Power (DP) Divergences
We can construct an exact test of relative fit for the family
of DP divergences following the same strategy as in KL case.
Let
T_n_0(P, )
= _β (_n_0 P) - _β (_n_0)
= ∫{ p^1+β - p_1^1+β}λ̣- ( 1+1/β) 1/n_0∑_i∈_0[ p^β - p_1^β] (X_i)
:= 1/n_0∑_i∈_0 T_i(P, ),
where _n_0 is the empirical measure constructed from _0.
The split statistics T_i(P, ) encode the difference in average β-powered densities (penalized with L_1+β norm) rather than the log-likelihood ratio evaluated on the sample _0 when β > 0.
Then, conditional on 𝒟_1, _ T(P,) = _β ( P) - _β (). We define the DP set _,n exactly as in (<ref>),
and observe that the analogue of Theorem <ref> holds (with an identical proof) for _,n.
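To illustrate, for Gaussian candidates the integral term has the closed form ∫ N(x; μ, σ^2)^(1+β) dx = (1+β)^(-1/2) (2πσ^2)^(-β/2), so the split statistics are cheap to evaluate. The sketch below is our own illustration: the median/MAD-based pilot is a crude stand-in for a minimum-DP-divergence estimator, and the heavy-tailed data and grid are arbitrary choices.

import numpy as np
from scipy import stats

def power_integral(sigma, beta):
    # closed form of  int N(x; mu, sigma^2)^(1 + beta) dx  (it does not depend on mu)
    return (1.0 + beta) ** -0.5 * (2.0 * np.pi * sigma ** 2) ** (-beta / 2.0)

def dp_split_statistics(D0, mu, sigma, mu1, sigma1, beta):
    # T_i(P, pilot) with P = N(mu, sigma^2) and pilot = N(mu1, sigma1^2)
    const = power_integral(sigma, beta) - power_integral(sigma1, beta)
    pb = stats.norm.pdf(D0, mu, sigma) ** beta
    p1b = stats.norm.pdf(D0, mu1, sigma1) ** beta
    return const - (1.0 + 1.0 / beta) * (pb - p1b)

def dp_set(X, grid, sigma=1.0, beta=0.25, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    D1, D0 = X[idx[:len(X) // 2]], X[idx[len(X) // 2:]]
    mu1 = np.median(D1)                                   # crude robust pilot location
    s1 = 1.4826 * np.median(np.abs(D1 - mu1)) + 1e-12     # crude robust pilot scale
    kept = []
    for mu in grid:
        T = dp_split_statistics(D0, mu, sigma, mu1, s1, beta)
        if T.mean() <= stats.norm.ppf(1 - alpha) * T.std(ddof=1) / np.sqrt(len(D0)):
            kept.append(mu)
    return mu1, np.array(kept)

X = np.random.default_rng(2).standard_t(df=3, size=400)   # heavy-tailed data, misspecified model
pilot, kept = dp_set(X, grid=np.linspace(-1.0, 1.0, 201))
print(round(float(pilot), 3), len(kept))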
Recall that the KL set was unable to resolve the instability problem
in Example <ref>. This is because the likelihood ratios in this model can blow up. On the other hand the DP set relies on the statistics in (<ref>), which are bounded for any β > 0, provided the relevant
densities are well-defined.
Formally, we have the following result:
Suppose we have the same model as in
Example <ref>.
For sufficiently large n, for any pilot estimator , the DP set _,B,n defined as in (<ref>) with B=1 + 1/β, with an equal sized split into _0 and _1, covers the DP projection at the nominal level.
A formal proof can be found in Appendix <ref>. The key observation is that the DP projection is (0) for a sufficiently large sample size for any fixed β > 0. The DP projection in this example is more stable than the KL projection (1/2), considering that ϵ_n is much closer to 0 than 1/2.
Consequently, we show that the DP set will cover the target of inference (0) with high probability. We emphasize that the MLE is also (0) with high probability, yet both universal split LRT and KL set based on the MLE fail to cover the KL projection due to the instability of the population projection distribution.
§.§ Hellinger Distance
The Hellinger distance (or the difference in Hellinger distances) does not lend itself to a natural plug-in estimator. The usual method of estimating the Hellinger distance proceeds instead via some type of non-parametric density estimation, which in turn
requires additional smoothness assumptions. Since our goal in this paper is to design assumption-light methods, we instead relax the target of inference. This in turn opens the door for designing approximate tests of relative fit.
Our strategy will be to modify the ρ-estimator[The name “ρ-estimator” comes from the standard symbol used for the Hellinger affinity.] <cit.>
which is a density estimator tailored to the Hellinger loss.
Define the split ρ-test statistic
T_n_0 (P, ) := Δ (P, ) + 1/n_0∑_i∈_0ψ( √(p_1/p) (X_i) ),
Δ (P_0, ) = 1/√(2)[^2(P_0, P) - ^2(, P) ],
where P = (P + ) / 2 and ψ: [0,∞] ↦ [-1,1] is a non-decreasing Lipschitz function satisfying ψ (x) = - ψ (1/x).
The choice of ψ we adopt throughout this paper, is to take ψ(u) = (u-1)/√(1+u^2) which
comes from work on the ρ-estimator <cit.>.
The function
ψ is a bounded transformation of the likelihood ratio, and due to this boundedness the split ρ-test statistic is tightly concentrated around its expectation.
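For concreteness, the sketch below (our own illustration; the integration range, the guard against underflow, and the Gaussian candidates are arbitrary numerical choices) evaluates the split ρ-statistic for a pair of univariate densities, computing the Hellinger terms by numerical integration:

import numpy as np
from scipy import stats
from scipy.integrate import quad

def psi(u):
    # bounded, non-decreasing transform with psi(u) = -psi(1/u)
    return (u - 1.0) / np.sqrt(1.0 + u ** 2)

def hellinger_sq(p, q, lo=-30.0, hi=30.0):
    # H^2(P, Q) = 1 - int sqrt(p q), for densities given as callables on [lo, hi]
    affinity, _ = quad(lambda x: np.sqrt(p(x) * q(x)), lo, hi)
    return 1.0 - affinity

def split_rho_statistic(D0, p, p1):
    # T_{n0}(P, pilot) = Delta(P, pilot) + average of psi(sqrt(p1/p)) over D0
    pbar = lambda x: 0.5 * (p(x) + p1(x))                 # the midpoint density
    delta = (hellinger_sq(p, pbar) - hellinger_sq(p1, pbar)) / np.sqrt(2.0)
    ratio = np.sqrt(p1(D0) / np.maximum(p(D0), 1e-300))   # guard against underflow
    return delta + psi(ratio).mean()

# candidate N(1, 1) against a pilot N(0, 1), with data drawn from N(0, 1):
# large positive values are evidence against the candidate
rng = np.random.default_rng(0)
D0 = rng.normal(0.0, 1.0, 200)
p = lambda x: stats.norm.pdf(x, 1.0, 1.0)
p1 = lambda x: stats.norm.pdf(x, 0.0, 1.0)
print(split_rho_statistic(D0, p, p1))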
The following proposition, which follows directly from Proposition 11 of <cit.>, characterizes the expectation of the split ρ-statistic.
For any P^*, P_0, P_1,
(2 + √(2)) _T_n_0 (P_0,P_1) ≤(3 + 2√(2)) ^2 (, P_0) - ^2 (, P_1).
This proposition ensures that _T_n_0(P_0, P_1) is negative for any ∈ when the null hypothesis H_0 : (3+2√(2)) ^2 (, P_0) ≤^2 (, P_1) is true. This proposition in
turn suggests that T_n_0(P_0, ) could be a useful statistic for designing an approximate test of relative fit in the Hellinger distance with ν = √(3+2√(2)).
We define the Hellinger Relative Distance fit () set _,n exactly analogous to the
KL set (<ref>) (obtained from a δ-corrupted version of the statistics T_n_0(P, )).
The following result follows by combining Theorems <ref> and <ref>, and noticing that the split statistic is uniformly upper bounded.
Let ν = √(3 + 2√(2)). For any 𝒬,
inf_∈_ (_ν∩_, n∅) ≥ 1 - α - C/√(n),
where C < C'/δ^3 (for a universal constant C').
We are now in a position to revisit Example <ref>. In Proposition <ref>, we showed that changing the target of inference to DP projection could address the failure of universal inference.
In a similar vein, targeting the Hellinger projection resolves the failure, but interpreting the resulting guarantee requires some nuance, as the set may not cover the exact Hellinger projection, and is only guaranteed to cover
a ν-approximate projection.
In the case of Example <ref>, it turns out that for sufficiently small values of ϵ the ν-approximate Hellinger projection set is a singleton (and equal to the exact Hellinger projection). As highlighted earlier, when
the amount of model-misspecification is not too large the distinction between the ν-approximate projection set and the exact projection can be small.
Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the Minimum Hellinger Distance estimator <cit.>,
= _P ∈ (_n_1 P).
For sufficiently large n (> 20), the Hellinger set _,n, with an equal sized split into _0 and _1, covers the Hellinger projection ≡(0) at the nominal level asymptotically.
A formal proof is provided in Appendix <ref>. It will turn out that in this example
the ν-approximate Hellinger projection is exactly the Hellinger projection when ϵ≤ 0.05, and is the entire model , otherwise. This means that for larger values of ϵ, approximate validity is trivial, yet vacuous, as the target of inference can be any distribution in . This highlights the downside of targeting the ν-approximate projection set: when the model-misspecification is severe the resulting guarantees might be vacuous.
§.§ Integral Probability Metrics (IPMs)
Our proposal for a ν-approximate test of relative fit for IPMs is inspired by the work of <cit.> and <cit.>, where
a similar idea was used to design robust density estimates. Recall the definition of the IPM,
_(P_0, P_1) = sup_f ∈( _P_0 (f) - _P_1 (f) ).
Associated with any pair of distributions is a so-called witness function f^*_(P,Q) = sup_f ∈ ( _P (f) - _Q (f) ), which
witnesses the largest mean discrepancy between the two distributions.
The split test statistic is then defined by:
T_n_0 (P, ) = ∫ f^*_(P, )P̣ + /2 - 1/n_0∑_i ∈_0 f^*_(P, ) (X_i).
The usefulness of this statistic is highlighted by the following characterization of the expectation of the statistic.
For any P^*, P_0, P_1,
2 _ T (P_0,P_1) ≤ 3 (, P_0) - (, P_1).
See Appendix <ref> for a formal proof. For the TV IPM this result appears in the work of <cit.> and <cit.>, and our result generalizes their argument
to other IPMs. Proposition <ref> ensures that _ T(P,Q) is negative for all ∈ under the null hypothesis in (<ref>) with ν=3.
We can construct _ by inverting the IPM approximate relative fit test, to obtain an identical guarantee to the one in Corollary <ref> (now with ν = 3).
To further illustrate the construction of IPM approximate relative fit tests we consider three widely used IPMs—total variation distance, Wasserstein distance, and maximum mean discrepancy—where the witness functions are more explicit.
Total Variation Distance.
Suppose ρ(P_0 P_1) = (P_0, P_1) where is the total variation distance. This is an IPM over the function class = {f : f≤ 1}. An equivalent definition is (P_0, P_1) = sup_A | P_0 (A) - P_1(A) | = P_0 () - P_1 () where = {p_0 > p_1} is the Yatracos set with maximal discrepancy between P_0 and P_1. The witness function is f^*_(P_0, P_1) (x) = (x ∈) - 1/2. An immediate advantage of targeting the
TV projection is that the witness function f^* is uniformly bounded.
Given samples , consider the following test statistic which referred to as the split Scheffé statistic:
T_n_0 (P,) = P () + ()/2 - _n_0(), _n_0 () = 1/n_0∑_i∈_0 (X_i ∈)
where is redefined to be = {p > p_1}.
The split Scheffé statistic, as the name suggests, is a sample-split analogue of the Scheffé estimate that was originally proposed in <cit.> building upon the work of <cit.>.
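The split Scheffé statistic is straightforward to compute once the Yatracos set is identified. The sketch below is our own illustration (the Gaussian candidates and grid-based integration are arbitrary choices); since the summands are bounded, the Hoeffding threshold with B = 1 yields a finite-sample valid TV set.

import numpy as np
from scipy import stats

def split_scheffe_statistic(D0, p, p1, grid):
    # A = {p > p1} is the Yatracos set; T = (P(A) + P1(A))/2 - empirical frequency of A
    on_A = p(grid) > p1(grid)
    dx = grid[1] - grid[0]
    P_A = np.sum(p(grid)[on_A]) * dx          # P(A) by a Riemann sum on the grid
    P1_A = np.sum(p1(grid)[on_A]) * dx        # P1(A) by a Riemann sum on the grid
    emp_A = np.mean(p(D0) > p1(D0))           # F_{n0}(A)
    return 0.5 * (P_A + P1_A) - emp_A

# candidate N(1, 1), pilot N(0, 1), data from N(0.2, 1)
rng = np.random.default_rng(0)
D0 = rng.normal(0.2, 1.0, 300)
p = lambda x: stats.norm.pdf(x, 1.0, 1.0)
p1 = lambda x: stats.norm.pdf(x, 0.0, 1.0)
grid = np.linspace(-10.0, 10.0, 4001)
print(split_scheffe_statistic(D0, p, p1, grid))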
Wasserstein Distance.
Suppose ρ(P_0 P_1) = _1 (P_0, P_1) is the 1-Wasserstein distance (or Kantorovich metric). The associated function class is = {f: Lf≤ 1 } where Lf := sup{ |f(x) - f(y) | / x - y : x y } is the Lipschitz semi-norm.
Although the ideas are much more general, we limit our discussion to univariate distributions on a compact support, i.e, = [0,b]. In this case, the witness function is explicit and easy to describe <cit.>.
Define the comparison function s(t; P_0, P_1) = 𝟙( F_P_1(t) > F_P_0 (t) ) - 𝟙( F_P_0 (t) > F_P_1 (t) ) ∈{0, ± 1 },
where F_P denotes the CDF of P.
The witness function is
f^*_(P_0, P_1) (x) = ∫_0^x s(t; P_0, P_1) dt <cit.>.
A direct application of the split statistic (<ref>) yields
T_n_0 (P,) = 1/2∫ s(t; P, ) ( _n_0 (t) - (F_P (t) + F_(t))/2 ) dt,
where _n_0 (t) = 1/n_0∑_i∈_0𝟙(X_i ≤ t) is the empirical distribution function. This particular split statistic is a sample-split analogue of the ℓ-estimator <cit.>.
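A minimal sketch of this statistic for distributions on [0,1] is given below (an illustration only; the Beta candidates and the grid resolution are arbitrary choices, and s is the comparison function defined above, evaluated on the same grid used to discretize the integral):

import numpy as np
from scipy import stats

def split_wasserstein_statistic(D0, F_p, F_p1, grid):
    # s(t) = 1{F_p1 > F_p} - 1{F_p > F_p1}, integrated against F_n0 - (F_p + F_p1)/2
    Fp, Fp1 = F_p(grid), F_p1(grid)
    s = np.sign(Fp1 - Fp)
    F_emp = np.mean(D0[None, :] <= grid[:, None], axis=1)   # empirical CDF on the grid
    dx = grid[1] - grid[0]
    return 0.5 * np.sum(s * (F_emp - 0.5 * (Fp + Fp1))) * dx

# candidate Beta(2, 2), pilot Beta(1, 1) (uniform), data drawn from the uniform on [0, 1]
rng = np.random.default_rng(0)
D0 = rng.uniform(0.0, 1.0, 300)
F_p = lambda t: stats.beta.cdf(t, 2.0, 2.0)
F_p1 = lambda t: stats.beta.cdf(t, 1.0, 1.0)
grid = np.linspace(0.0, 1.0, 2001)
print(split_wasserstein_statistic(D0, F_p, F_p1, grid))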
Maximum Mean Discrepancy.
Suppose that is a unit ball of the reproducing kernel Hilbert space (RKHS) ,
with kernel k(x,y), and RKHS norm ·_ℋ,
i.e., = {f: ‖f‖_ℋ≤ 1}. Then the corresponding IPM (<ref>) is called the Maximum Mean Discrepancy <cit.>. It was shown by <cit.> that the analytic witness function is f^*_(P, ) = (μ_P - μ_)/‖μ_P - μ_‖_ℋ, where μ_P(·) := 𝔼_P [k(X,·)] is the mean embedding of P.
The split statistic T_n_0 (P, ) in this case
reduces to an average of the (negative) witness function - _n_0 (f^*_(P, ) ) if the kernel k(·,·) is symmetric. In this case, the sign of the split statistic captures, in expectation, whether the population is closer to P or based on mean embeddings.
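As an illustration, the sketch below approximates the MMD split statistic with a Gaussian kernel, estimating the mean embeddings and the RKHS norm by Monte Carlo samples drawn from the candidate and the pilot; the kernel bandwidth, sample sizes, and this Monte Carlo device are our own choices made only for the sake of a runnable example.

import numpy as np

def gauss_kernel(a, b, h=1.0):
    # kernel matrix k(a_i, b_j) = exp(-(a_i - b_j)^2 / (2 h^2)) for one-dimensional inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / h) ** 2)

def split_mmd_statistic(D0, sample_P, sample_P1, h=1.0):
    # witness f* = (mu_P - mu_P1) / ||mu_P - mu_P1||_H, with both mean embeddings
    # approximated by Monte Carlo samples from the candidate P and the pilot
    def mu_diff(x):
        return (gauss_kernel(x, sample_P, h).mean(axis=1)
                - gauss_kernel(x, sample_P1, h).mean(axis=1))
    norm_sq = (gauss_kernel(sample_P, sample_P, h).mean()
               + gauss_kernel(sample_P1, sample_P1, h).mean()
               - 2.0 * gauss_kernel(sample_P, sample_P1, h).mean())
    witness = lambda x: mu_diff(x) / np.sqrt(max(norm_sq, 1e-12))
    integral = 0.5 * (witness(sample_P).mean() + witness(sample_P1).mean())  # int f* d(P + P1)/2
    return integral - witness(D0).mean()

# candidate P = N(1, 1), pilot = N(0, 1), data from N(0, 1): the statistic tends to be
# positive, reflecting that the data sit closer to the pilot's mean embedding
rng = np.random.default_rng(0)
D0 = rng.normal(0.0, 1.0, 300)
print(split_mmd_statistic(D0, rng.normal(1.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000)))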
§.§ Unified Sufficient Conditions for any Divergence Measure
In this section we unify some of the treatment of the previous sections by giving conditions on split test statistics which ensure
the exact and approximate validity of the resulting confidence sets.
Given data , we consider tests of the form:
ϕ_P_0, P_1, ν = ( T_n(P_0,P_1) > t_α(P_0,P_1)).
We assume that the test statistic satisfies the following two additional conditions:
T is anti-symmetric, i.e.,
T(X; P_0, P_1) = - T(X; P_1, P_0) for all P_0, P_1 ∈.
There exists some fixed, positive numbers ν, c_1 ≥ 1 such that for all ∈, and any fixed P_0, P_1 ∈,
c_1 _ T (; P_0, P_1) ≤νρ ( P_0) - ρ ( P_1).
Assumption <ref> ensures that _ T (; P_0, P_1) is always negative for all ∈ when the null hypothesis (<ref>) is true. For instance, Propositions <ref> and <ref> establish the analogue of Assumption <ref> for Hellinger and IPM projection, respectively.
Now, we may define ρ-set _ρ,n as in KL set (<ref>) by inverting the test based on (a δ corrupted version of) the statistic T:
_ρ, n := {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)}
If the test statistic is bounded, i.e. T(X;P_0,P_1) ≤ B for any pair of distributions P_0,P_1 ∈𝒫^2 then
we can define the finite-sample ρ-set as in (<ref>):
_ρ,B,n = {P∈ : T_n_0 (P, ) ≤ B√(log(1 / α)/2n_0)}
The following general result holds:
Suppose that the test statistic satisfies Assumptions <ref> and <ref>.
* Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P,Q := _ |T(X; P, Q) - _T(X; P, Q)|^2+ξ are finite, for any (P,Q) ∈^2, then
inf_∈_ (∈_ρ, n) ≥ 1 - α - C n^-ξ/2,
where C < C' (1 + sup_P,Q M_P,Q) /δ^(2+ξ) for a universal constant C'.
* Suppose that T(X; P,Q) ≤ B, then:
inf_∈_ (_ν∩_ρ,B, n∅) ≥ 1 - α.
The proof of the validity claims follow the same structure as the proof of Theorem <ref>. The crucial Assumption <ref> distills out the key property of the test statistics that is useful in ensuring asymptotic or
finite-sample validity. With these general validity results in place, we now turn our attention to studying the size of the resulting robust universal sets.
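Before doing so, we note that the sufficient conditions above suggest a single generic implementation: any split statistic satisfying Assumptions <ref> and <ref> can be passed to the following sketch (our own skeleton with illustrative argument names, not an existing implementation), which inverts either the studentized CLT threshold or, when a uniform bound B is available, the Hoeffding threshold.

import numpy as np
from scipy import stats

def rho_set(D0, candidates, pilot, T, alpha=0.05, B=None, delta=0.01, seed=0):
    # T(x_array, P, Q) returns the per-observation statistics T(x_i; P, Q); it is assumed
    # to be anti-symmetric in (P, Q) and to satisfy the drift condition of Assumption 2.
    # If a uniform bound B is supplied, the finite-sample Hoeffding threshold is used;
    # otherwise the studentized CLT threshold with delta-corruption is used.
    rng = np.random.default_rng(seed)
    n0 = len(D0)
    Z = rng.standard_normal(n0)
    kept = []
    for P in candidates:
        Ti = np.asarray(T(D0, P, pilot), dtype=float)
        if B is not None:
            threshold = B * np.sqrt(np.log(1.0 / alpha) / (2.0 * n0))
        else:
            Ti = Ti + delta * Z
            threshold = stats.norm.ppf(1 - alpha) * Ti.std(ddof=1) / np.sqrt(n0)
        if Ti.mean() <= threshold:
            kept.append(P)
    return kept

Plugging in the KL, DP, Hellinger, Scheffé, or Wasserstein statistics from the previous subsections yields the corresponding exact or approximate sets.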
§ SIZE OF ROBUST UNIVERSAL CONFIDENCE SETS
In the well-specified setting, for statistical models which satisfy classical regularity conditions, <cit.> showed that the Hellinger diameter of the split LRT confidence set depends on
two factors: the size of determined by its (local) Hellinger bracketing entropy, and the closeness of to in the Hellinger distance. In a similar vein, in this section we show that
the size of the universal sets, under certain regularity conditions, can be upper bounded by two factors: roughly, measuring the quality of the pilot estimate, and the size of statistical model.
In the misspecified setting, we would like the robust universal set to shrink around its target at a fast rate.
To measure the (directed) divergence between two sets measured in a divergence ρ and with respect to outside of , we define the ρ_^-divergence motivated by the directed Hausdorff distance.
For a given divergence ρ and a collection of distributions S_1 ⊂, we define an ϵ-fattening of S_1 by:
S_1 ⊕ϵ := ∪_Q ∈ S_1{P ∈ : ρ ( P) ≤ρ ( Q) + ϵ}.
Now given two collections of distributions S_0, S_1 ⊂, we define the ρ_^-divergence by
ρ^_ (S_0, S_1) = inf{ϵ≥ 0 : S_0 ⊆ S_1 ⊕ϵ}.
ρ^_ (S_0, S_1) is the minimum ϵ-fattening of S_1 with reference to which contains S_0.
To express the rate at which the robust universal sets shrink,
we use the Rademacher complexity of ℱ_T, 𝒫, a function class which depends on the test statistic of choice, and the statistical model 𝒫. Concretely, we define,
ℱ_T, 𝒫 := {f: f(x) := T(x; P,Q), P,Q ∈𝒫}.
We denote the Rademacher complexity of this class by ℜ_n(ℱ_T, 𝒫):
ℜ_n(ℱ_T, 𝒫) := 𝔼[ sup_f ∈ℱ_T, 𝒫1/n∑_i=1^n R_i f(X_i)],
where R_i are i.i.d. Rademacher random variables.
In some of the cases we have considered in this paper, under additional regularity conditions the complexity measure
ℜ_n(ℱ_T, 𝒫), can be related to a complexity measure of the underlying model 𝒫 using a standard contraction argument <cit.>:
Suppose that , and the pilot estimate are distributions supported on some compact set 𝒞, with density with respect to the Lebesgue measure which are upper and lower bounded by constants.
Then, for the test statistics introduced in Sections <ref>,<ref> and <ref>, ℜ_n(ℱ_T, 𝒫) ≲ℜ_n(𝒫).
Finally, to characterize the quality of the pilot estimator , we say that the
is an η_n-consistent estimator if
ρ () - ρ() = O_ (η_n),
where we use the standard big O in probability notation to indicate stochastic boundedness.
With these preliminaries in place, we have the following result for the size
of the ρ-set obtained by inverting a finite-sample valid relative fit test. The proof will be given in Appendix <ref>.
Suppose that (<ref>) holds and sup_(P, Q)∈^2 |T(P, Q) - 𝔼 T(P,Q)| ≤ B.
Fix any projection distribution , and recall the collection _ν in (<ref>).
Then the robust universal confidence set _ρ,B,n in (<ref>), for an equal sized split into 𝒟_0 and 𝒟_1,
satisfies for any ∈,
ρ_^( _ρ,B,n, _ν) ≤ O_( η_n + ℜ_n(ℱ_T, 𝒫) + B√(log(1/α)/n)).
Theorem <ref> states that the directed ρ_^-divergence between the exact robust universal confidence set and its target shrinks to zero at the prescribed rate, since _ν is a singleton {} when ν = 1. One can no longer show such a result for the ν-approximate robust universal confidence set even with an infinite number of observations. This is because, conditional on _1, the split test ϕ_P, , ν is guaranteed to achieve (exponentially) small Type 2 error uniformly over ∈ only for distributions P which are at least νρ() away from .
Nevertheless, Theorem <ref> characterizes the rate at which _ρ,B,n shrinks to _ν.
Theorem <ref> also shows how the size of the set depends on the choice of . When possible we should
choose a pilot estimate which converges to the target at a fast rate to ensure that the term η_n is sufficiently small. A sensible choice is often a minimum distance estimator <cit.> which is not only a consistent estimator of under some regularity conditions but is also robust to some misspecification in its corresponding distance <cit.>.
§ SIMULATIONS
In this section, we evaluate our proposed exact and approximate robust universal confidence sets in two particular setups—Overdispersion and Contamination—and demonstrate the advantages of the methods
we propose.
§.§ Overdispersion
Overdispersion is a classic example of model misspecification where the true distribution has larger variance than what can be represented by the hypothesized model. Specifically, consider a case of count data generated from the negative binomial distribution with mean 𝔼_ (X):= θ^* and variance 𝕍_ (X) = κθ^*, where the positive constant κ represents the dispersion ratio. Suppose a statistician hypothesizes a Poisson model 𝒫_Θ = {Poi(θ) : θ∈ℝ_+} to best describe . Since the mean and the variance are the same for the Poisson distribution (implicitly assuming κ=1), the dispersion ratio κ captures the severity of the model misspecification. Figure <ref> shows ρ (Poi(θ)) for the KL, Hellinger, and TV divergences across the dispersion ratio. Notice that the KL projection is the true mean θ^* (= 10) regardless of the dispersion ratio, whereas the Hellinger and TV projections get smaller as the true variance is more inflated.
The split LRT is sensitive to the misspecification. As highlighted in Section <ref>, the split LRT confidence set (_sLRT) may fail to cover the KL projection unlike the KL set (_) even with the same choice of θ_1 and the same log split likelihood-ratio statistic. Figure <ref> contrasts the performance of _sLRT and _ based on 1000 replicates of 200 simulated observations. In computing the confidence sets, the observations are equally split in half and we choose θ_1 to be the sample mean (which is the MLE) of the first half samples. As the misspecification gets more severe (larger κ), the empirical coverage of KL projection parameter (θ̃) decreases for _sLRT. When the dispersion ratio becomes larger than 3, _sLRT fails to achieve the nominal 95% coverage whereas _ maintains the validity regardless of how severe the misspecification is. Both the center and the right panel depict the size of the estimated confidence set varying over the dispersion ratio but from a different perspective. The former is based on the maximal excess KL divergence from the KL projection (which can be at most twice the KL-diameter of the set) whereas the latter is based on the L_2 distance over the parameter space. It is not surprising that compared to _, _sLRT is smaller in the L_2 sense and is closer to in an excess divergence sense.
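A miniature version of this experiment can be coded in a few lines; the sketch below (with our own hypothetical parameter choices, e.g. 500 replicates and κ = 4) checks, per replicate, whether the KL projection θ^* is retained by the split LRT threshold log(1/α)/n_0 and by the studentized KL-set threshold:

import numpy as np
from scipy import stats

def one_replicate(kappa, theta_star=10.0, n=200, alpha=0.05, delta=0.01, rng=None):
    # returns (split-LRT covers, KL-set covers) for the KL projection theta_star
    if rng is None:
        rng = np.random.default_rng()
    r = theta_star / (kappa - 1.0)
    lam = rng.gamma(shape=r, scale=theta_star / r, size=n)   # gamma-Poisson mixture
    X = rng.poisson(lam).astype(float)                       # mean theta*, variance kappa * theta*
    D1, D0 = X[: n // 2], X[n // 2:]
    theta1 = D1.mean()                                       # Poisson MLE pilot from D1
    n0 = len(D0)
    T = stats.poisson.logpmf(D0, theta1) - stats.poisson.logpmf(D0, theta_star)
    slrt_covers = T.mean() <= np.log(1.0 / alpha) / n0       # split-LRT threshold, order 1/n0
    Td = T + delta * rng.standard_normal(n0)
    kl_covers = Td.mean() <= stats.norm.ppf(1 - alpha) * Td.std(ddof=1) / np.sqrt(n0)
    return slrt_covers, kl_covers

rng = np.random.default_rng(0)
res = np.array([one_replicate(kappa=4.0, rng=rng) for _ in range(500)])
print("split-LRT coverage:", res[:, 0].mean(), "KL-set coverage:", res[:, 1].mean())

With the dispersion ratio well above one, the split-LRT coverage typically drops below the nominal level while the KL set remains valid, mirroring the behavior described above.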
Beyond KL projection
Unlike the KL projection, the Hellinger and TV projections are different for different degrees of overdispersion. Our target of inference for the Hellinger and TV distances is the ν-approximate projection rather than the projection itself, as seen in the left panel of Figure <ref>. When the dispersion ratio κ≥ 6, the ν-approximate target for both the Hellinger and TV distances includes any θ∈ℝ_+, and thus the approximate coverages are trivially 100%. Once again
this highlights that the approximate projection is a meaningful target only when the model misspecification is not too severe.
Figure <ref> summarizes the performance of approximate sets regarding Hellinger (_) and TV distance (_) based on 1000 replicates of 200 simulated observations. We choose the minimum distance estimator for θ_1 for both _ and _. Both _ and _ yield 100% empirical coverage—defined as a proportion of the confidence set that intersects _ν—across all dispersion ratios except almost well-specified case (0.01% dispersion) with 97.4% and 99.1% coverage, respectively. This conservatism is expected because for these divergences we have relaxed our target of inference to be the set
of ν-approximate projections.
Nevertheless, this does not mean that the Hellinger and TV sets are vacuously large. The center and right panel of Figure <ref> show the diameter of the set in Hellinger or TV distance sense, or Euclidean sense. The size of the set increases as the misspecification exacerbates regardless of distance measure. In general, _ is larger than _. _ behaves closer to _ under slight to moderate overdispersion and to _ as the overdispersion becomes severe.
Comparison between asymptotic and finite sample valid sets
Figure <ref> compares the various TV sets when the data-generating distribution is a 32% variance-inflated negative binomial—Berry-Esseen (_), Hoeffding bound (_HF), empirical Bernstein bound <cit.>, and empirical Bentkus bound <cit.>. See Appendix <ref> for explicit forms of each confidence set. In all cases, we choose the same minimum TV distance estimator θ_1. The KL set dominates all finite-sample valid confidence sets considered in this section, despite its validity relying on asymptotics. The finite-sample valid sets are too conservative (and yield a meaningless set equal to the entire model) when only a few observations are available (n ≤ 50).
Although our paper does not primarily focus on obtaining the tightest finite-sample valid confidence set, leveraging the variance _(X) can often be beneficial when constructing the confidence set. In this example, _EBS and _EBK outperform _HF since the Bernstein and Bentkus bounds are more sensitive to the variance.
§.§ Contamination
Consider the following contaminated data generating distributions
which are mixtures of Gaussians. This simulation setup is used in the work of <cit.>.
_1 = 0.99 N(0, 1) + 0.01 N(0, 30^2) (Symmetric)
_2 = 0.94 N(0, 1) + 0.01 N(20, 20^2) + 0.05 N(-30, 20^2) (Asymmetric)
_3 = 0.7 N(2, 1) + 0.2 N(-2, 1) + 0.1 N(0, 30^2) (Heavily Asymmetric)
For each case, we denote _ to be an uncontaminated distribution that does not include the outlying noise distributions.
Consider a location-scale family of Gaussian distribution 𝒫_Θ = {N(μ, σ^2 ) : (μ, σ)∈Θ} as a working model. (See Appendix <ref> for additional simulations for
a location family with fixed scale.) Our goal is to evaluate the empirical performance—coverage and size—of robust universal confidence sets for the (approximate) projection of the various contaminated distributions onto 𝒫.
Figure <ref> shows the mean and standard deviation of the projection distribution with respect to the KL, DP, Hellinger and TV distances along with the mean and standard deviation of the contaminated and uncontaminated distributions. The KL projection parameter is the same as the parameters of contaminated distribution in all cases.
The DP projection parameters, get closer to uncontaminated parameters as the β parameter increases.
The Hellinger projection is the closest to the uncontaminated parameters among all projections we considered, however, the size of _ν is much larger than that of approximate TV projection. The set _ν for both Hellinger and TV distance is quite large for the heavily misspecified case (Case 3).
Practically, we recommend targeting DP projection with a reasonable choice of β (> 0.05) for this heavily misspecified case.
Figure <ref> illustrates the empirical coverage and size of split LRT and ( and ) sets based on 1000 replications. For split LRT and KL sets, we choose θ̂_1 to be the quasi-MLE, whereas, for the
DP set, we use the minimum DP divergence estimator. The split LRT fails to cover KL projection in all cases whereas sets achieve the nominal coverage with large enough sample sizes. The DP sets show superior coverage than KL set across all sample sizes. Such a target coverage improvement is more evident in the smaller sample sizes below 200, and as β gets larger, i.e., the DP set targets a more stable projection. Regardless of what divergence measure ρ is of interest, the size of the confidence set with reference to ρ shrinks to zero as the sample size increases. Again, the extremal values of _^ (_, ) for sample sizes below 500 highlight the instability of KL projection.
Figure <ref> shows the maximal ρ-distance of and set from based on 1000 replications along with the ρ(_ν), a set of ρ-distance from to approximate projection _ν. ρ(_ν) illustrates the same phenomena as in Figure <ref> but with respect to each distance. Theoretically, we can only claim the shrinkage of set up to _ν. This can be seen in Figure <ref> for both Hellinger and TV set as the maximum excess distance from reaches νρ(_ν) with large enough samples. sets shrink beyond _ν in this example: the Hellinger set converges to a set quite close to with large enough sample size, while the TV set converges to a set around which does not appear to shrink with sample size.
§ DISCUSSION
In this paper,
we presented a general method for constructing uniformly valid exact and approximate confidence sets for
various projection distributions
under weak regularity conditions in the presence of possible model misspecification.
We demonstrated that the universal inference procedure <cit.> can fail catastrophically
even in simple examples, under fairly benign model-misspecification. We then showed
that the robust universal inference framework can address these failures, providing methods which are robust and can
meaningfully target different projection distributions.
Although data splitting plays an essential role in constructing an assumption-light universal confidence set, it also introduces inefficiency and algorithmic randomness, since only a random subset of the observations is used in constructing the split statistics. This can be partially addressed with crossfitting, where we average the split statistic with the one obtained after swapping the roles of _0 and _1. In contrast to the well-specified
setting where the validity of the crossfit set is immediate, more care is needed under model-misspecification. We investigate the validity of the crossfit set in Appendix <ref>.
The papers <cit.> study many variants of universal inference (including constructing confidence sequences instead of confidence sets, to combining multiple sample-splits) and investigating these
variants in the context of the robust universal inference framework of this paper would be interesting.
Finally, our paper brings to the forefront the role of pairwise tests of fit (and relative fit) together with sample-splitting, in designing broadly applicable inference methods. We expect this basic insight to
have further implications in other contexts, for instance, in designing universal inference procedures in other settings where likelihood-based methods are inapplicable.
§ ACKNOWLEDGEMENTS
This work was partially supported by funding from the NSF grants DMS-1713003, DMS-2113684 and
CIF-1763734, as well as an Amazon AI and a Google Research Scholar Award to SB. The authors are grateful to Arun Kuchibhotla, Aaditya Ramdas and Ian Waudby-Smith for
helpful discussions regarding finite-sample valid confidence sets.
plainnat
§ PROOFS FROM SECTION <REF>
§.§ Example <ref>
Note that the KL projection = ( 1/2 ). Consider the event E where all of the observed samples X_1,…,X_n are 0.
We can see that,
_(E) = 1 - _( ∑_i=1^n X_i > 0 ) ≥ 1 - _[ ∑_i=1^n X_i ] = 1 - n ϵ_n.
Now, on the event E, it is clear that the MLE = (0).
Let us denote the split-sample universal set by C_α(X_1,…,X_n), where we assume for simplicity that 𝒟_0 and 𝒟_1 each have n/2 samples.
We then have,
_(∉ C_α(X_1,…,X_n)| E)
= _(ℒ_0()/ℒ_0() ≤α | E)
= _(1/2^n/2≤α | E) = 1,
for n ≥ 2 log_2(1/α). As a consequence, we can upper bound the coverage of the universal set by,
_(∉ C_α(X_1,…,X_n))
≥_(E) _(∉ C_α | E) ≥ 1 - n ϵ_n.
Thus, we see that if 0 < ϵ_n ≤β/n for some β > 0, and n ≥ 2 log_2(1/α) then the universal set has coverage at most β. Choosing β < (1 - α)
we see that the universal set fails to have its advertised coverage.
§.§ Example <ref>
The KL projection is ( 3/4 ). For simplicity we suppose that n is even, and that 𝒟_0 consists of the first n/2 samples and 𝒟_1 consists of the remaining samples.
For a constant β > 0, let us consider the events E_0, E_1 defined as,
E_0 = ( ∑_i=1^n/2 X_i < n/4 - β√(n))
E_1 =( ∑_i=n/2^n X_i < n/4 ).
When events E_0 and E_1 hold we can see that the universal set C_α(X_1,…,X_n) fails to cover . In more detail, on the event E_1 the MLE, is (1/4) and thus,
_(∉ C_α(X_1,…,X_n) | E_0, E_1) = _(ℒ_0()/ℒ_0() ≤α | E_0, E_1)
≤_(1/3^2β√(n)≤α) = 1,
provided that n ≥ (log_3(1/α))^2/(4β^2). Thus, it suffices to show that E_0 and E_1 happen with sufficiently large probability.
Using the fact that the Total Variation distance between the n-fold product measures,
((1/2)^n, (1/2 + ϵ_n)^n) ≤ n ϵ_n,
we can reason instead about the probability of the events E_0 and E_1 when drawing samples from (1/2), and account for the difference using the Total Variation. Combining this fact with the standard
Berry-Esseen bound applied to Bernoulli sums, together with some simple algebraic manipulations, we obtain that for some universal constant C > 0,
P(E_0 ∪ E_1) ≥ P(Z < 0) × P(Z < -2√(2)β) - 2 C/√(n) - n ϵ_n.
Thus, choosing ϵ_n ≪ 1/n, and β to be a sufficiently small constant, when n is sufficiently large, we obtain that,
P(E_0 ∪ E_1) ≥1/8,
and thus that,
P(∉ C_α(X_1,…,X_n)) ≥ 1/8.
§ PROOFS FROM SECTION <REF>
In this section, we formally verify the claim that the universal set typically includes both the pilot and the projection distribution.
We first define the ρ-diameter of the set C as _ρ (C) = sup_P_a, P_b ∈ Cρ(P_a P_b).
Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n defined in (<ref>) has diameter at least ρ() with
-probability at least 1 - 2α:
inf_∈_(_ρ (C_α,n) ≥ρ() ) ≥ 1 - 2 α.
Following a similar argument to that in the proof of Theorem <ref>, notice that for any ∈, _ (∉ C_α,n | _1) ≤α. Together with a union bound, we obtain that
both and are included in the set C_α,n with -probability at least 1- 2α (conditionally on _1), and on this event, the diameter of the set is at least ρ() in expectation.
§ PROOFS FROM SECTION <REF>
§.§ Proof of Theorem <ref>
We first work conditional on the sample 𝒟_1 used to construct the pilot estimate .
Let us define
M_P,δ,ξ := 𝔼_ [|T_i(P,) + Z_i δ - 𝔼_ T(P,)|^2+ξ | 𝒟_1].
Due to the added Gaussian noise, the variance M_P,δ,0 is always strictly positive (i.e., larger than δ^2).
By Minkowski's inequality, conditional on _1, we have
M_P,δ,ξ≤[ (_| T_i (P, ) - _ T_i (P, )|^2+ξ | _1)^1/2+ξ
+ δ( |Z|^2+ξ)^1/2+ξ]^2+ξ.
This means that for assumed , there exists a universal constant C_M such that (conditionally on _1) the 2+ξ moment of corrupted statistic T_i (P, ) + δ Z_i is uniformly bounded by C_M for all P∈.
Conditionally on _1, the generalized Berry-Esseen bound
for the studentized statistic <cit.> yields that, for a universal constant C',
sup_t| ℙ_(√(n_0)( T_n_0,δ (P,) - _ T (P, ) )/s_P,δ≥ t | 𝒟_1) - P(Z ≥ t)|
≤C' M_P,δ,ξ/n_0^ξ/2δ^2+ξ≤ C n_0^-ξ/2,
where C = C' C_M δ^-(2+ξ).
This holds in particular for ∈𝒫.
Consequently, we see that,
inf_∈_ (∈_, n) = inf_∈𝔼_ [ _ (∈_, n | 𝒟_1)]
≥ 1 - sup_∈𝔼_ [ _ (∉_, n | 𝒟_1)]
≥ 1 - [α - C n^-ξ /2],
as claimed.
§.§ Proof of Proposition <ref>
Recall that X_i iid∼(ϵ_n) for ϵ_n ≤ (1-α)/n and our hypothesized model is ={(p): p∈{0, 1/2}}. For a fixed β >0,
_β((ϵ_n) (p) )
= C + (p^1+β + (1-p)^1+β) - (1 + 1/β) [ϵ_n p^β + (1-ϵ_n) (1-p)^β]
where C = ∑_x∈0,1ϵ_n^(1+β)x (1-ϵ_n)^(1+β)(1-x). The DP divergences from to the elements of the working model are
_β((ϵ_n) (0) )
∝ 1 - (1 + 1/β) (1-ϵ_n)
= (1 + 1/β) ϵ_n - 1/β
_β((ϵ_n) (1/2) )
∝ - (1/2)^β / β.
Therefore, the DP projection is
=
(0), if ϵ_n ≤ (1 -(1/2)^β) / (1 + β),
(1/2), otherwise.
Since ϵ_n < (1-α)/n, the projection will be (0) for any β >0, provided that n ≥ (1-α) (1 + β) / (1- (1/2)^β).
Now we turn our attention to constructing the DP set. For any fixed (P,Q) ∈^2, the split statistic is uniformly bounded, i.e., |T_i (P,Q) - _ T (P,Q)| ≤ 1 + 1/β since
T_i (P, Q)
= ∑_x∈{0,1}[ (p^x (1-p)^1-x)^1+β - (q^x (1-q)^x)^1+β]
- ( 1+1/β) [ (p^X_i (1-p)^1-X_i)^β - q^X_i (1-q)^1-X_i] (X_i).
By Hoeffding's inequality, _,1+1/β,n ensures nominal coverage for any estimator , since we have that:
_(∉_,1+1/β,n)
= _(_( T_n_0 (,) > β + 1/β√(log(1/α)/2 n_0) | _1 )) ≤α.
§.§ Proof of Proposition <ref>
Note that Hellinger projection is (0) for n>6 (as long as ϵ_n < 0.146) since
^2((ϵ_n), (0))
= 1 - √(1 - ϵ_n), ^2((ϵ_n), (1/2))
= 1 - √(ϵ_n / 2) - √((1-ϵ_n) / 2).
Similarly, the ν-approximate Hellinger projection set is
(0) if ϵ_n < 0.051, or the entire model otherwise. Hereafter we only consider n > 20, where _ν reduces to the exact Hellinger projection.
The minimum Hellinger Distance Estimator (MHDE) is
_p∈{0, 1/2}^2 (_n_1, (p))
= _p∈{0, 1/2}√(p X_n_1) + √((1-p) (1 - X_n_1))
where X_n_1 = ∑_i∈_1 X_i / n_1.
Thus,
= (0) if X_n_1 < 0.5-1/(2√(2)) ≈ 0.146, and (1/2) otherwise.
This implies that the advertised coverage is guaranteed when X_n_1 < 0.146. Otherwise, Corollary <ref> ensures the asymptotic (approximate) validity.
§.§ Proof of Proposition <ref>
The proof follows directly by the triangle inequality.
2 _ T (P_0, P_1)
= _P_0 f^*_(P_0, P_1) + _P_1 f^*_(P_0, P_1) - 2 _ f^*_(P_0, P_1)
= 2 [ _ f^*_(P_0, P_1) - _ f^*_(P_0, P_1)] - _P f^*_(P_0, P_1) - _P_1 f^*_(P_0, P_1)
= 2 [ _ f^*_(P_1, P_0) - _P f^*_(P_1, P_0)] - _ (P_0, P_1)
≤ 2 _ (, P_0) - _ (P_0, P_1)
≤ 2 _ (, P_0) - [_(, P_1) - _(, P_0)]
(by the triangle inequality)
= 3 _(, P_0) - _(, P_1)
§ PROOFS FROM SECTION <REF>
§.§ Proof of Theorem <ref>
Recall that the exact robust universal confidence set based on the Hoeffding bound is
_ρ,σ,n = { P∈ : T_n_0 (P,) ≤ B √(log (1/α)/2 n_0)}.
We denote t_α,n := B √(log (1/α)/2 n_0) throughout the proof, and use C to denote _ρ,B,n.
Throughout the proof, we fix a projection distribution and assume an equal split between _0 and _1.
Denote δ_ν (P, Q) = ρ( P) - νρ( Q) for any P, Q ∈.
We want to show that, for fixed κ > 0, for some finite M > 0,
_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤κ,
where ϵ̃_n = ℜ_n(_T,𝒫) ∨ t_α,n and for all n large enough.
Let the event E be δ_1 (, ) ≤ (M/ν) η_n which happens with probability at least 1-κ/2. Then,
_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) )
≤_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) | E ) + κ/2
= _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) - νδ_1(, ) | E ) + κ/2
≤_( sup_P ∈δ_ν(P, ) > M ϵ̃_n | E ) + κ/2.
Thus, it suffices to show that conditional on E, with -probability at most κ/2, all P∈ such that δ_ν(P, ) > M ϵ̃_n are not included in . Hereafter we condition on event E.
Let _ϵ := {P ∈ : δ_ν(P, ) > ϵ}.
From Assumption <ref>, we have that
_( ∀ P∈_ϵ, P ∈_ρ,B,n | _1 )
= _( sup_P∈_ϵT_n_0(, P) ≥ - t_α,n | _1 ),
≤_( sup_P∈_ϵ[T_n_0(, P) - _ T(, P)] ≥ϵ - t_α,n | _1 ).
where the inequality is from noticing that conditional on _1,
sup_P∈_ϵ [- _ T(, P)] ≥sup_P∈_ϵδ_ν(P, ) ≥ϵ by Assumption <ref>.
To ease the notation, denote the centered statistic as T_P := T_n_0(, P) - _ T(, P).
Since |T(,P)| ≤ B, changing any single X_i can change sup_P∈_ϵT_P by at most 2B/n_0. By McDiarmid's inequality, we have that
_(sup_P∈_ϵT_P ≥ϵ - t_α,n | _1)
≤exp( - n(ϵ - t_α,n - _[sup_P∈_ϵT_P] )^2/2 B^2).
Now we focus on bounding _ [sup_P∈_ϵ |T_P|] (which is greater than _ [sup_P∈_ϵT_P ]).
Let _T,𝒫 = {T(·; , P) : P ∈}. The symmetrization lemma <cit.> states that
_Xsup_f∈_T,𝒫1/n_0| ∑_i=1^n_0[f (X_i) - _ f(X_i)] |
≤ 2 _X,εsup_f∈_T,𝒫|1/n_0∑_i=1^n_0 R_i f(X_i)| := 2 _n_0 (_T,𝒫)
where R_i are iid Rademacher random variables.
§ FINITE-SAMPLE VALID CONFIDENCE SET FOR BOUNDED TEST STATISTIC
Suppose the split statistics are uniformly bounded, i.e., |T_i (P)| ≤ B for all i. Classic Cramér-Chernoff bounds yield finite-sample valid exact (ν = 1) or approximate (ν > 1) confidence sets.
_HF is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where
_HF = {P ∈ : T_n_0 (P) ≤√(B^2/2n_0log(1/α))}.
Typically, Hoeffding's bound does not scale with the variance which results in a conservative confidence set. Confidence set based on Bernstein's inequality is given as follows.
_BS is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where
_BS = { P∈ : T_n_0(P) ≤√(2 S^2 log (1/α)/n_0 + B^2/9( log (1/α)/n_0)^2) + B log (1/α)/3 n_0}
where S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()].
However, _BS above requires knowledge of to compute S. Empirical Bernstein bounds <cit.> address this issue.
Denote T̃_i (P,Q) = (T (X_i; P, Q)+ B) / (2B). _EBS is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where
_EBS = { P∈ : ∑_i=1^n_0λ_i T̃_i (P, )≤log(1/α) + ∑_i=1^n_0 v_i ψ_E (λ_i) },
v_i = (T̃_i (P, ) - T̃_i - 1 (P, ))^2, ψ_E(λ) = - (log(1-λ) - λ), and
λ_i = √(2log(1/α)/n_0 Ŝ_i - 1^2)∧ c, Ŝ_i^2 = 1/4 + ∑_l=1^i (T̃_l - T̃_l)^2/i + 1, T̃_i = 1/i+1∑_l=1^i T̃_l,
for some c ∈ (0,1).
When the variance or an upper bound of the variance is known, Bentkus's bound <cit.> is sharper than any Cramér-Chernoff type bounds. See <cit.> for details. Define a Bernoulli random variable G = G(S^2, B) as
( G = B ) = S^2/S^2 + B^2 := p_SB, ( G = - S^2/B) = 1 - p_SB
_BK is a valid 1-α confidence set for the exact (ν = 1) or approximate (ν > 1) projection, where
_BK = { P∈ : T_n_0(P,) ≤ q(α) }
where q(α) is the solution to
P_2 ( u; ∑_i∈_0 G_i )
:= inf_t ≤ u_( ∑_i∈_0 G_i - t )_+^2/(u -t )_+^2 = α,
and S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()].
As in the case of Bernstein's bound (<ref>), Bentkus's bound (<ref>) requires prior knowledge of to compute the variance S. The empirical Bentkus's bound <cit.> addresses this by taking the union bound on variance over-estimation and the Bentkus's inequality. Following <cit.> define the over-estimator of S as, for δ∈[0,1],
S_n (δ) = √(S_n_0^2 + g_2,n_0 (δ)) + g_2,n_0(δ), S_n^2 = 1/⌊ n / 2 ⌋∑_i=1^⌊ n / 2 ⌋(T_2i - T_2i-1)^2/2,
where g_2,n(δ) := B (√(2) n)^-1√(⌊ n / 2 ⌋)Φ^-1 (1- 2 δ / e^2) and Φ is the cdf of a standard Gaussian.
_EBK is a valid 1-α confidence set for the exact (ν = 1) or approximate (ν > 1) projection, where for some δ∈[0,1],
_EBK = { P∈ : T_n_0(P,) ≤ q(α - δ) }
where q(α - δ) is the solution to
P_2 ( u; ∑_i∈_0 G_i ( S^2_*(δ), B ) ) = α - δ.
with S_* (δ) := min_1 ≤ i ≤ n_0S_i (δ).
In Section <ref>, we choose δ = α/3 to construct the empirical Bentkus's bound-based TV set.
§ CROSSFIT SET
Although universal inference is valid for any choice of pilot , let us assume we choose such that sup_P ∈‖T(·; P, P_1) - T(·; P, )‖_L_2() = o(1).
For any fixed P ∈, consider the following decomposition:
_n_0 T(·; P, P_1) - _ T(·; P, )
= (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + _[T(·; P, P_1) - T(·; P, )] + (_n_0 - _) T(·; P, ).
The first term is the empirical process which is o_ (1/√(n_0)) applying Lemma 2 of <cit.>. The second part is the bias which is o(1) from our choice of . The last term yields the CLT.
Now let T_n_1 (P; P_0) := ∑_i∈_1 T(X_i; P, P_0) /n_1 where we change the role of _0 and _1. Define a cross-fitting estimator as
T_n^× (P)
= n_1 T_n_1 (P; P_0) + n_0 T_n_0 (P; P_1)/n.
The quantity n (T^× (P) - _ T(·; P, )) has the following decomposition:
n_0 (_n_0 - _) [T(·; P, P_1) - T(·; P, )]
+ n_1 (_n_1 - _) [T(·; P, P_0) - T(·; P, )]
+ n_0 _ [T(·; P, P_1) - T(·; P, )] + n_1 _ [T(·; P, P_0)- T(·; P, )]
+ n (_n - _) T(·; P, ).
Similarly, both empirical process terms in the first line are o_ (1/√(n)), and the bias terms in the second line are o (1). Thus, we are left with the same CLT term. The decomposition implies that as long as one chooses a “good” candidate estimator, the cross-fit estimator also provides asymptotically (uniformly) valid inference on .
Construct a cross-fit ρ-set as follows:
C^×_ρ,α, n = {P∈ : T^× (P) ≤ z_αŝ_P^×/√(n)}.
where ŝ_P^× 2 = [_ (T (X; P, P_1)) + _ (T (X; P, P_0) )] / 2 is a consistent estimator of _ (T(X; P, )).
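A sketch of the resulting procedure for a KL-type statistic is given below (our own illustration; the Gaussian location model, the grid, and the function names are arbitrary choices):

import numpy as np
from scipy import stats

def crossfit_set(X, grid, loglik, fit, alpha=0.05, seed=0):
    # swap the roles of D0 and D1, average the two split statistics, and pool the variances
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    D0, D1 = X[idx[: len(X) // 2]], X[idx[len(X) // 2:]]
    th0, th1 = fit(D0), fit(D1)                       # one pilot per half
    n0, n1 = len(D0), len(D1)
    kept = []
    for theta in grid:
        T0 = loglik(th1, D0) - loglik(theta, D0)      # statistic on D0, pilot from D1
        T1 = loglik(th0, D1) - loglik(theta, D1)      # statistic on D1, pilot from D0
        Tx = (n0 * T0.mean() + n1 * T1.mean()) / (n0 + n1)
        s2 = 0.5 * (T0.var(ddof=1) + T1.var(ddof=1))  # pooled variance estimate
        if Tx <= stats.norm.ppf(1 - alpha) * np.sqrt(s2 / (n0 + n1)):
            kept.append(theta)
    return np.array(kept)

rng = np.random.default_rng(1)
X = rng.normal(0.3, 1.0, 400)
kept = crossfit_set(X, np.linspace(-1.0, 1.0, 201),
                    lambda th, x: stats.norm.logpdf(x, loc=th), fit=lambda D: D.mean())
print(len(kept), "grid points kept")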
§ ADDITIONAL RESULTS AND TECHNICAL DETAILS ON NUMERICAL STUDIES
§.§ Computational detail
We adopt a heuristic search method for finding a confidence set in a multivariate parameter space. For brevity, we explain the procedure in 2 dimensions, but the procedure can be straightforwardly extended to higher dimensions. We use the observation that when _1 is close to θ̃, i.e., ρ( P__1) ≤νρ() as seen in the proof of Theorem <ref>, T_n_0 (_1) = 0 for split statistics that satisfy Assumptions <ref> and <ref>. Therefore, we construct a star-convex confidence set that always includes _1. We construct rays originating from _1, i.e., R_ω = {θ∈Θ : r_ω^⊤ (θ - _1) = 0, r ≥ 0 } where r_ω = (r sinω, - r cosω) for angle ω∈ [- π, π]. For each ω, we find a root of the evidence function (θ) = T_n_0 (θ) - t_α (θ) using Brent's method <cit.> on R_ω, with the radius r varying from 0 (corresponding to θ=_1) to some r_0 > 0 such that the corresponding θ_0 satisfies (θ_0) > 0.
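A minimal sketch of this ray search is given below (our own illustration; the evidence function is supplied by the user, and the toy circular set is used only to check the root-finding logic):

import numpy as np
from scipy.optimize import brentq

def boundary_by_rays(evidence, theta1, n_rays=64, r_max=5.0):
    # evidence(theta) = T_n0(theta) - t_alpha(theta); it is negative inside the set,
    # so along each ray from theta1 the boundary is a sign change of the evidence
    boundary = []
    for omega in np.linspace(-np.pi, np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(omega), np.sin(omega)])
        g = lambda r: evidence(theta1 + r * direction)
        if g(0.0) >= 0.0:                       # pilot on/outside the set: degenerate ray
            boundary.append(np.array(theta1, dtype=float))
            continue
        if g(r_max) <= 0.0:                     # the set extends beyond the search radius
            boundary.append(theta1 + np.nan * direction)
            continue
        r_star = brentq(g, 0.0, r_max)          # Brent's method on the bracketing interval
        boundary.append(theta1 + r_star * direction)
    return np.array(boundary)

# toy check: the circular set {theta : ||theta - (1, 2)|| <= 1} is recovered exactly
ev = lambda th: float(np.sum((th - np.array([1.0, 2.0])) ** 2) - 1.0)
pts = boundary_by_rays(ev, theta1=np.array([1.0, 2.0]))
print(np.allclose(np.linalg.norm(pts - np.array([1.0, 2.0]), axis=1), 1.0))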
§.§ Gaussian contamination - Location family
Consider a Gaussian location family = {(θ, 1) : θ∈} where the variance is fixed to that of uncontaminated distributions. Figure <ref> shows the projection parameters along with those of contaminated and uncontaminated distributions. The mean of contaminated distribution and that of uncontaminated distributions are the same for Cases 1 and 3 but not for Case 2. This leads to the interesting observation that forward KL projection is the closest to the uncontaminated distribution in Case 3 unlike location-scale family in Figure <ref>, Section <ref>.
Figure <ref> summarizes the performance of confidence sets targeting the forward KL or DP projection over 1000 replications. Clearly, split LRT fails to attain the nominal coverage even for a large enough sample size. All other sets achieve the nominal coverage for moderate to large sample size. _ are shorter than _ and even than the invalid split LRT set for Cases 2 and 3.
|
http://arxiv.org/abs/2307.05285v1 | 20230711142710 | Turing patterns in a 3D morpho-chemical bulk-surface reaction-diffusion system for battery modeling | [
"Massimo Frittelli",
"Ivonne Sgura",
"Benedetto Bozzini"
] | math.NA | [
"math.NA",
"cs.NA",
"65M60, 65M50, 65N40, 65P40"
] |
A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems (Supported by Scientific Research Project of Tianjin Municipal Education Commission (2022ZD007).)
Chenzheng Guo (Email: [email protected]),
Jing Zhao (Corresponding author. Email: [email protected]),
Qiao-Li Dong (Email: [email protected])
College of Science, Civil Aviation University of China, Tianjin 300300, China
======================================================================================================================================================================================================================================
In this paper we introduce a bulk-surface reaction-diffusion (BSRD) model in three space dimensions that extends the DIB morphochemical model to account for the electrolyte contribution in the application, in order to study structure formation during discharge-charge processes in batteries. Here we propose to approximate the model by the Bulk-Surface Virtual Element Method on a tailor-made mesh that proves to be competitive with fast bespoke methods for PDEs on Cartesian grids. We present a selection of numerical simulations that accurately match the classical morphologies found in experiments. Finally, we compare the Turing patterns obtained by the coupled 3D BS-DIB model with those obtained with the original 2D version.
§ KEYWORDS
Batteries; Metal electrode; Electrodeposition; Bulk-Surface Reaction-Diffusion Systems; Bulk-Surface Virtual Element Method; Turing Patterns
§ MSC 2020
65M60; 65M50; 65N40; 65P40
§ INTRODUCTION
The formation of spatio-temporal structures in electrodeposition is a relevant physical phenomenon, as it impacts several applications, ranging from the durability and efficiency of batteries to electroplating <cit.>. The onset of spatio-temporal structures on the cathodic surface was proven to be initiated by a Turing morphogenetic mechanism, where the physics are modeled by a suitable reaction-diffusion system (RDS), known as DIB model, whose spatial domain is the electrodic surface <cit.>. In the DIB model, the spatial domain is assumed to be fixed and does not change over time, as the growth/corrosion effects are fully modeled by the dynamics of the system. By tweaking the parameters of the DIB model, it is possible to successfully simulate spatial <cit.> or spatio-temporal patterns <cit.> of various morphological classes that are experimentally observed under appropriate physical and chemical conditions; these include spatial patterns such as spots, holes, stripes, labyrinths, and spiral waves. The effectiveness of the DIB model has justified the development of extensions and ameliorations, such as the introduction of cross-diffusion <cit.> and the generalization of the spatial domain to be a curved surface <cit.>.
As it stands, one of the limitations of the DIB model is that it does not fully account for the effects of non-uniform electrolyte concentration in a neighborhood of the electrode. Experimentally, such a non-uniform concentration is induced by the spatial structures arising on the electrode and, in turn, affects further structure development. In this regard, the electrode-electrolyte system has a two-way coupling that, in the long run, can drastically affect the resulting morphological class. In this paper we propose the bulk-surface DIB (BS-DIB) model in three space dimensions to fill this gap. In the proposed model, the surface represents the electrode (where the electrodeposition takes place), while the 3D bulk models the electrolyte. The physical two-way coupling mentioned above causes the proposed model to take the form of a coupled bulk-surface reaction-diffusion system (BS-RDS) <cit.>.
For domains of general shape, different numerical methods were developed for the spatial approximation of BS-RDSs, such as the Bulk-Surface Finite Element Method (BS-FEM) <cit.>, the Cut Finite Element Method <cit.>, unfitted finite element methods <cit.>, and meshless kernel methods <cit.>, just to mention a few. In all of these methods, the spatially discrete problem takes the form of a large ODE system, whose dimension is equal to the number of spatial degrees of freedom. Thus, the high level of spatial resolution required by RDSs and BS-RDSs, together with the curse of dimensionality (3D), makes the numerical approximation of the BS-DIB model a challenging computational task. In the present context, where the bulk domain is a cube, a bespoke tensorized technique called the Matrix-Oriented Finite Element Method (MO-FEM) <cit.> can be exploited to take advantage of the special geometry and drastically reduce the computational times. However, it is worth noting that the BS-DIB model exhibits spatial patterns only in a neighborhood of the surface, hence a uniform spatial discretization is computationally inefficient. For this reason, we exploit the geometric flexibility of the Bulk-Surface Virtual Element Method (BS-VEM) <cit.> to adopt a graded cubic mesh that is highly refined close to the surface and much coarser away from the surface. Such a mesh is simultaneously graded and entirely composed of cubic-shaped elements. Such a combination entails the presence of hanging nodes and edges, which are naturally handled by the BS-VEM and are not admissible in the BS-FEM. Compared to the MO-FEM, the BS-VEM on such a mesh exhibits shorter computational times at an equal level of spatial refinement in a neighborhood of the surface (where high spatial accuracy is actually required) and produces patterns of the same morphological class. It needs to be noted that Turing patterns are highly sensitive to initial conditions, which are bound to be different between the MO-FEM and the BS-VEM since the spatial meshes are different, hence obtaining the same morphological class with both methods is a sensible benchmark.
We present a wide range of numerical simulations for both the 2D DIB and the 3D BS-DIB models with equal parameters, to showcase the effect of bulk-surface coupling. From the experiments we draw the following conclusions. First, the BS-DIB model appears to have a larger Turing region than the DIB model. In fact, for several choices of the parameters outside the Turing region of the DIB model, only the BS-DIB model exhibits spatial patterns. Second, when the DIB model exhibits spatial patterns, the BS-DIB still exhibits patterns, but of a different morphological class, thereby further highlighting the impact of the bulk-surface coupling. A rigorous analysis of the Turing instability for the BS-DIB model will be addressed in future work.
The structure of the paper is as follows. In Section <ref> we introduce the BS-DIB model, we give its physical interpretation and we analyse the stability of a relevant spatially uniform equilibrium in the absence of diffusion. In Section <ref> we recall the BS-VEM and we present a bespoke graded polyhedral mesh that allows the BS-VEM to outperform the MO-FEM when solving the BS-DIB model. In Section <ref> we present an extensive list of numerical experiments that empirically demonstrate the effect of the bulk-surface coupling on pattern formation. Conclusions are drawn in Section <ref>.
§ THE BULK-SURFACE DIB MODEL ON THE CUBE
Let Ω = [0,L]^3 be a cube of edge L>0, let Γ := [0,L]^2×{0} be the bottom face of Ω, let Γ_T = [0,L]^2×{L} be the top face of Ω and let Γ_L = ∂Ω∖ (Γ∪Γ_T) be the union of the lateral faces of Ω. Let T>0 be the final time.
The bulk-surface DIB model seeks to find four functions b,q:Ω× [0,T] →ℝ and η,θ:Γ× [0,T] →ℝ that fulfil the following system of four PDEs:
ḃ - Δ b = f_1(b) in Ω;
q̇ - d_ΩΔ q = f_2(q) in Ω;
η̇ - Δ_Γη = f_3(b,η,θ) on Γ;
θ̇ - d_ΓΔ_Γθ = f_4(q,η,θ) on Γ,
complemented with the following boundary conditions for the bulk variables b and q
∇ b ·𝐧 = - f_3(b,η,θ)ψ_η on Γ;
∇ q ·𝐧 = - f_4(q,η,θ)ψ_θ on Γ;
∇ b ·𝐧 = 0 on Γ_L;
∇ q ·𝐧 = 0 on Γ_L;
b = b_0 on Γ_T;
q = q_0 on Γ_T,
and the following boundary conditions for the surface variables η and θ:
∇η·𝐧 = 0 on ∂Γ;
∇θ·𝐧 = 0 on ∂Γ.
In (<ref>), Δ is the Laplace operator in Ω, while Δ_Γ is the Laplace-Beltrami operator on Γ (which coincides with the two-dimensional Laplacian since Γ is flat), d_Ω and d_Γ are diffusion coefficients, and b_0,q_0 ∈ℝ.
In the equations for the bulk species b and q, the kinetics f_1,f_2 are defined as follows:
f_1(b) := -k_b(b-b_0);
f_2(q) := -k_q(q-q_0);
b represents the concentration of the electroactive cations (precursors of metal that is electrodeposited during the recharge cycle), present exclusively in the bulk.
q represents the bulk concentration of an additive species that is adsorbed at the cathode as a way of controlling shape change with the coverage degree expressed by the variable θ.
b_0=b_bulk and q_0=q_bulk represent the “bulk concentrations”, which prevail at equilibrium, when the bulk is homogeneous. The physical meaning of the terms (b-b_0) and (q-q_0), with k_b,k_q>0, is first-order homogeneous reaction kinetics describing the tendency of the reagent to re-establish the equilibrium concentration. This can be considered a very simple model of a situation in which b, q are the concentrations of the species involved in the electrodic reaction in the electroactive form, which is generated by the decomposition of some precursor (e.g. a metallic ion with a ligand that keeps the ion in solution in non-electroactive form, from which the electroactive species forms by decomposition of the complexed one): a lower-than-equilibrium local concentration of b, q (e.g. by cathodic consumption) generates new b, q (by decomposition of the complexed form), while a higher-than-equilibrium concentration (e.g. by anodic injection) generates a consumption (e.g. by reaction with the ligand, yielding the non-electroactive form).
In the surface equations for the surface species η and θ, the kinetics f_3 and f_4 are defined as follows:
f_3(b,η,θ) := ρ [A_1b(1-θ)η - A_2η^3 - B(θ-α)];
f_4(q,η,θ) := ρ[q(1+k_2η)(1-θ)(1-γ(1-θ)) -D/C(1+k_3η)θ (1+γθ)],
and correspond to the DIB model source terms, modified to account for the bulk contributions (i.e. the electrolyte physics), as explained below.
The physical meaning of f_3 is that the Butler-Volmer type electrokinetic term is scaled by the concentration of the electroactive species at the surface b_|Γ. This corresponds to first-order phenomenological kinetics and it includes naturally mass-trasport effects from the bulk to the surface (i.e. the mass-transport of the electroactive species b present in the bulk and reacting electrochemically at the surface). It is worth noting that the term -A_2η^3 in the 2D form of DIB represents collectively all hindrances to η (growth) resulting from the establishment of high values of η. The most straightforward interpretation of such hindrances is mass-transport limitation that – in a Gileadi-type framework – can be approximately accounted for with a negative cubic correction to the I-V curve. As expounded above, in the bulk-surface context mass-transport limitations can be naturally accounted for by multiplying the term linear in η by the surface concentration b_|Γ of the reactant b present in the bulk and describing the electroactive species. Nevertheless, in the bulk-surface version of DIB it is worth retaining the cubic term in η, because this can account for other hindrances to metal growth (i.e. beyond mass-transport from the bulk to the reactive surface) appearing at high metal plating rates, such as cathodic passivation.
The physical meaning of f_4 is that - coherently with the Langmuir adsorption model with monomolecular adsorption reaction - the adsorption term is directly proportional to the amount of the bulk species. In the case of a heterogeneous bulk phase, the relevant value of the bulk form is that in a neighborhood of the surface (commonly, referred to as the “catholyte”): q_|Γ. In (<ref>)-(<ref>)-(<ref>)-(<ref>), d_Ω, d_Γ,ψ_η,ψ_θ, b_0,q_0,k_b,k_q,k_2,k_3, ρ,α,γ, A_1,A_2,B,C,D are positive coupling parameters.
The coupling BCs for the bulk equations at the interface between the “growing surface Γ” and the “bulk Ω” can thus naturally be written as in (2)_1, (2)_2, indicating that the flux of bulk species to the surface is opposite to their consumption rates at the surface. More specifically, the physical meaning of the form of Eqs. (2)_1, (2)_2 is that, even though b and q coming from the bulk are consumed at the interface (i.e. in correspondence with their values b_|Γ and q_|Γ) to yield η and θ only in a specific term of f_3 and f_4, the negative terms of these equations have the effect of injecting b and q into the bulk. Cases in which θ > α (i.e. the adsorbate enhances electrodeposition, e.g. by resonant tunnelling effects) can be regarded as a special case of interfacial b consumption, accounted for through the BCs. Thus the net formation rate of η and θ is proportional to the fluxes of b and q to the surface.
∇ b ·𝐧, ∇ q ·𝐧 denote the gradients normal to the boundary (surface) Γ, while ψ_η, ψ_θ are constants the role of which is to adjust the dimensionality of the equations and the physical meaning of which is explained below. ψ_η converts adatoms (i.e. the surface species generated by the reaction (consumption) of b at the surface) into morphological units (the quantities actually described by η). Thus one can write: η=ψ_η b and ψ_η can be regarded as a constant as far as the density of morphological units (ca. step) is proportional to the adatom density. ψ_θ expresses an isotherm, since it connects a bulk concentration (q) into a surface density (θ), hence, in as far as the isotherm can be linearised, we can write: θ = ψ_θ q.
The BCs for b and q are:
i) non-linear coupling BCs on the surface Γ, implying coupling with η and θ; ii) Dirichlet BCs on the face of Ω opposite to Γ that is located far enough from the “bottom face” where reaction takes place, so that concentration gradients induced by reactivity have died out here; iii) zero Neumann BCs (zero flux) on the residual faces of Ω (see Figure 1). The values of the bulk variables are thus set to their equilibrium values b_0,q_0 respectively.
The BCs for the modifed DIB model on ∂Γ are also zero Neumann BCs.
If b_0=q_0 = 1, the following is a spatially homogeneous equilibrium for the system (<ref>)-(<ref>):
(b^*,q^*, η^*, θ^*) = (1,1,0,α).
The initial conditions are prescribed as follows:
b(𝐱,0) = b_0;
q(𝐱, 0) = q_0;
η(𝐱, 0) = r_η(𝐱);
θ(𝐱, 0) = r_θ(𝐱),
where r_η and r_θ are random spatial data that fulfil
r_η(𝐱) ∈η_e+[0, 1e-2] ∀𝐱∈Γ;
r_θ(𝐱) ∈θ_e+[-1e-2, 1e-2] ∀𝐱∈Γ.
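As a small illustration (our own sketch, with placeholder array sizes), the initial data can be sampled as follows; here we take (η_e, θ_e) to be the equilibrium values (0, α), an assumption since η_e and θ_e are not specified explicitly above.

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
N_Omega, N_Gamma = 55_000, 129 * 129  # placeholder numbers of bulk and surface nodes

b_init     = np.ones(N_Omega)                                  # b(x, 0) = b_0 = 1
q_init     = np.ones(N_Omega)                                  # q(x, 0) = q_0 = 1
eta_init   = 0.0   + 1e-2 * rng.random(N_Gamma)                # r_eta in eta_e + [0, 1e-2]
theta_init = alpha + 1e-2 * (2.0 * rng.random(N_Gamma) - 1.0)  # r_theta in theta_e + [-1e-2, 1e-2]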
The model (<ref>)-(<ref>) is a generalisation of the DIB model in 2D, which takes the form
η̇ - Δ_Γη = f(η,θ) in Q;
θ̇ - d_θΔ_Γθ = g(η,θ) in Q;
∇η·𝐧 = 0 on ∂ Q;
∇θ·𝐧 = 0 on ∂ Q,
where Q⊂ℝ^2 is a compact 2D domain, while f(η,θ) = f_3(1,η,θ) and g(η,θ) = f_4^γ(1,η,θ), see <cit.>. After <cit.>, the parameter values are fixed as follows:
d_Ω = 1;
d_Γ = 20;
k_2 = 2.5;
k_3 = 1.5;
ρ = 1;
α = 0.5;
A_1 = 10;
D = C(1-α)(1-γ+γα)/(α(1+γα)),
while (B,C) are bifurcation parameters.
The novel parameters instead (the ones that do not appear in the 2D DIB model (<ref>)) are fixed as follows:
b_0 = 1;
q_0 = 1;
k_b = 1;
k_q = 1.
Finally, the other parameters will be changed in a representative series of cases that will be discussed below.
If F=(f_1,f_2,f_3,f_4) and ξ=(b, q,η,θ)^T, it is easy to show that
F(ξ^*)=0 if ξ^*=(b^*, q^*, η^*,θ^*) = (b_0, q_0, 0, α),
when in the BS-DIB model all parameters different from (C, B) are kept fixed at the previous values. Note that the part of this equilibrium concerning the variables η,θ coincides with the equilibrium of the original DIB (2D) model.
In the next section we shall analyze the stability of such equilibrium in the special case γ =0, when the condition on the parameter D in (<ref>) boils down to
γ=0, D=q_0 C(1-α)/α.
§.§ Stability in absence of diffusion
To study the arising of the diffusion-driven or Turing instability in the BS-DIB model, it is necessary to prove that the equilibrium (<ref>) is stable in absence of diffusion. In fact, the model (<ref>), deprived of diffusion and linearized around the equilibrium (<ref>) is
ξ_t = J(ξ^*) (ξ-ξ^*),
where ξ = (b, q, η, θ)^T and J is the Jacobian of the kinetics evaluated at the equilibrium (<ref>). The matrix J has the following structure
J = [ J_Ω 0; J_h J_Γ ],
where, if γ = 0, the blocks of J are as follows:
J_Ω :=
[ f_1,b f_1,q; f_2,b f_2,q ] =
[ - k_b 0; 0 -k_q ];
J_h := ρ[ f_3,b f_3,q; f_4,b f_4,q ] =
ρ[ A_1 (1-θ)η 0; 0 C(1+k_2η)(1-θ) ];
J_Γ :=
[ f_3,η f_3,θ; f_4,η f_4,θ ] =
ρ[ b_0 A_1 (1-α) -B; q_0 C(k_2-k_3)(1-α) -q_0 C/α ].
Thanks to the diagonal structure of J_Ω it holds that
det(J - λ I) = (λ + k_b)(λ + k_q) det(J_Γ - λ I).
It follows that two eigenvalues of J are λ_1 = -k_b < 0 and λ_2 = -k_q < 0. We are left to determine when the eigenvalues of J_Γ are negative. This happens if and only if Trace J_Γ < 0 and det J_Γ > 0. Now:
Trace J_Γ = ρ(b_0 A_1(1-α) - q_0 C/α) < 0 ⟺ C > (b_0/q_0) A_1α(1-α),
and
det J_Γ = ρ^2 (BC q_0 (k_2-k_3)(1-α) - A_1 C b_0 q_0 (1-α)/α) > 0 ⟺ B > A_1 b_0/(α(k_2-k_3)).
We obtain the following result.
If γ = 0, the equilibrium (<ref>) is stable in the absence of diffusion if and only if
B > A_1 b_0/(α(k_2-k_3)) ∧ C > (b_0/q_0) A_1α(1-α).
In addition, if A_1, k_2, k_3, α are as in (<ref>), the condition (<ref>) specializes to
B > 20 b_0 ∧ C > 2.5 b_0/q_0.
Furthermore, if b_0 = q_0 = 1, the condition (<ref>) further reduces to
B > 20 ∧ C > 2.5.
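As a quick numerical sanity check of these conditions (our own sketch, not from the original), one can build J_Γ with the parameter values A_1 = 10, k_2 = 2.5, k_3 = 1.5, α = 0.5, ρ = 1, b_0 = q_0 = 1 fixed above and inspect its eigenvalues for a few choices of (B,C):

import numpy as np

A1, k2, k3, alpha, rho, b0, q0 = 10.0, 2.5, 1.5, 0.5, 1.0, 1.0, 1.0

def J_Gamma(B, C):
    # Surface block of the Jacobian at the equilibrium, with gamma = 0
    return rho * np.array([
        [b0 * A1 * (1 - alpha),            -B],
        [q0 * C * (k2 - k3) * (1 - alpha), -q0 * C / alpha],
    ])

for B, C in [(25.0, 3.0), (15.0, 3.0), (25.0, 2.0)]:
    eigs = np.linalg.eigvals(J_Gamma(B, C))
    print(f"B={B}, C={C}: eigenvalues {np.round(eigs, 3)}, stable={bool(np.all(eigs.real < 0))}")

Only the first pair, which satisfies B > 20 and C > 2.5, yields eigenvalues with negative real part.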
Of course, as far as the conditions for Turing pattern formation are concerned, a specific study has to be carried out considering the full Jacobian J of Eq. (10). This is an important topic in its own right, which nevertheless has no impact on the analysis presented in this research. Since treating this problem exhaustively would be beyond the scope of the present work, we leave it to a subsequent publication.
Our present approach is thus to solve numerically the BS-DIB model for a representative selection of parameter couples (B,C) generating the whole set of Turing pattern morphologies, as described in <cit.>. Moreover, we shall fix the model parameters in the bulk as in (<ref>). In this scenario, our aim is to tune the coupling parameters ψ_η, ψ_θ to study from the numerical point of view the effect of coupling with the bulk on the morphological structure of the Turing patterns in the classes studied in <cit.>.
We recall that the numerical approximation of reaction-diffusion systems in 3D is not straightforward, because the pattern requires a very fine 3D mesh that provides sufficient spatial resolution and a long-time integration to reach the asymptotic steady state. For this reason, we shall apply the BS-VEM studied in <cit.> for the space discretization of the 3D domain and surface, and the IMEX Euler method as the time solver. Hence, in the 3D case, if the cubic domain is approximated with a Cartesian grid, at least a million unknowns at each time iteration are required. The usual implementation will thus end up with a sequence of linear systems where the coefficient matrix for each species is sparse, but prohibitively large. An efficient strategy to deal with this issue is the matrix-oriented (MO) approach <cit.> where, thanks to the Cartesian structure of the numerical grid, the fully discrete problem is transformed into a sequence of Sylvester matrix equations, which are solved in the spectral space. However, since the BS-DIB model produces spatial patterns only in a neighborhood of the surface Γ, we devise a tailor-made graded polyhedral bulk-surface mesh where the BS-VEM proves to be a competitive alternative, since such a graded mesh avoids unnecessary refinement (and degrees of freedom) away from the surface Γ. One of the major advantages of this fact is that the BS-VEM, thanks to the flexibility of polyhedral meshes, can still be used on domains of general shape, where MO techniques might not apply.
§ THE BULK SURFACE VIRTUAL ELEMENT METHOD FOR THE BS-DIB MODEL
Formulating a Bulk-Surface Virtual Element Method (BSVEM) <cit.> for the model (<ref>) requires several steps. We start by rewriting the model (<ref>) in such a way that the boundary conditions lend themselves to a BSVEM discretization.
§.§ Step 1: Rewriting the model with homogeneous boundary conditions
In the presence of non-zero Dirichlet boundary conditions, it is well-known <cit.> that it is first necessary to rewrite the PDE problem in such a way that the Dirichlet conditions are homogeneous. To this end, we define the following auxiliary variables and kinetics:
:= b - b_0;
:= q - q_0;
_1() := f_1( + b_0);
_2() := f_2( + q_0);
_3(, η,θ) := f_3( + b_0, η, θ);
_4(, η,θ) := f_4( + q_0, η, θ).
With the above definitions, the model becomes
- Δ = _1() in Ω;
- d_ΩΔ = _2() in Ω;
η̇ - Δ_Γη = _3(,η,θ) on Γ;
θ̇ - d_ΓΔ_Γθ = _4(,η,θ) on Γ,
which, this time, is conveniently endowed with completely homogeneous boundary conditions:
∇·𝐧 = - _3(,η,θ)ψ_η on Γ;
∇·𝐧 = - _4(,η,θ)ψ_θ on Γ;
∇·𝐧 = 0 on Γ_L;
∇·𝐧 = 0 on Γ_L;
= 0 on Γ_T;
= 0 on Γ_T,
§.§ Step 2: Weak formulation
To write a discrete formulation of the auxiliary problem (<ref>)-(<ref>), we define the space of trivariate spatial functions that ensure the well-posedness of the model (<ref>) and fulfil the boundary conditions (<ref>):
H^1_B(Ω) := {u ∈ H^1(Ω) | u_|Γ_T = 0 ∧ u_|Γ ∈ H^1(Γ)}.
The dual space of H^1_B(Ω) will be denoted by H^-1_B(Ω). Following <cit.>, the weak formulation of (<ref>)-(<ref>) is: find ,∈ L^2([0,T]; H^1_B(Ω)) and η,θ∈ L^2([0,T]; H^1(Γ)) with , ∈ L^2([0,T]; H^-1_B(Ω)) and η̇, θ̇∈ L^2([0,T]; H^-1(Γ)) such that
∫_Ωφ + ∫_Ω∇·∇φ = ∫_Ω_1()φ - ψ_η∫_Γ_3(, η, θ)φ;
∫_Ωφ + d_Ω∫_Ω∇·∇φ = ∫_Ω_2()φ - d_Ωψ_θ∫_Γ_4(, η, θ)φ;
∫_Γη̇ψ + ∫_Γ∇_Γη·∇_Γψ = ∫_Γ_3(,η, θ)ψ;
∫_Γθ̇ψ + d_Γ∫_Γ∇_Γθ·∇_Γψ = ∫_Γ_4(,η, θ)ψ,
for all φ∈ L^2([0,T]; H^1_B(Ω)) and ψ∈ L^2([0,T]; H^1(Γ)).
§.§ Step 3: Spatially discrete formulation
We will now describe the spatial formulation obtained by the BSVEM following <cit.>. The choice of the BSVEM to solve the model (<ref>) is motivated by the possibility of using graded meshes that make the BSVEM particularly competitive in this case by avoiding unnecessary refinement away from the surface Γ. The choice of a convenient mesh will be illustrated in the next Section. For now, we illustrate the BSVEM for arbitrary meshes.
Let us decompose the bulk Ω as the union of non-overlapping polyhedra, Ω = ∪_E∈ℰ_h E. If ℱ_h is the set of the faces of ℰ_h that are contained in Γ, then we can write Γ = ∪_F∈ℱ_h F. For a face F∈ℱ_h, the boundary space 𝔹(∂ F) is defined by
𝔹(∂ F) := {v ∈𝒞^0(∂ F) | v_e ∈ℙ_1(e) ∀ e ∈edges(F)}.
The preliminary space of a face F is defined by
𝕍(F) := {v ∈ H^1(F) | v_|∂ F∈𝔹(∂ F) ∧Δ v ∈ℙ_1(F)}.
The H^1 projector on faces Π^∇_F: 𝕍(F) →ℙ_1(F) is defined, for any v ∈𝕍(F) by
∫_F ∇ (v - Π^∇_F v) ·∇ p = 0 ∀ p∈ℙ_1(F) ∧∫_F (v-Π^∇_F v) = 0.
Then, the enhanced VEM space on the face F is defined by
𝕍(F) := {v ∈𝕍(F) | ∫_F (v - Π^∇_F v)p = 0 ∀ p∈ℙ_1(F)}.
For a polyhedron E∈ℰ_h, the boundary space 𝔹(∂ E) is defined by
𝔹(∂ E) := {u ∈𝒞^0(∂ E) | u_|F∈𝕍(F) ∀ F ∈faces(E)}.
At this point, the preliminary VEM space on E is defined by
𝕍(E) := {u ∈ H^1(E) | u_|∂ E∈𝔹(∂ E) ∧Δ u ∈ℙ_1(E)}.
The H^1 projector Π^∇_E: 𝕍(E) →ℙ_1(E) on the polyhedron E is defined, for each u∈𝕍(E), by
∫_E ∇ (u - Π^∇_E u) ·∇ p = 0 ∀ p∈ℙ_1(E) ∧∫_E (u-Π^∇_E u) = 0.
Finally, the enhanced VEM space on the polyhedron E is defined by
𝕍(E) := {u ∈𝕍(E) | ∫_E (u - Π^∇_E u)p = 0 ∀ p∈ℙ_1(E)}.
It is well-known that the degrees of freedom in 𝕍(F) and 𝕍(E) are the pointwise values on vertices, see <cit.>. The global VEM spaces are defined by matching the degrees of freedom across elements. To this end, let 𝕊_Γ and 𝕊_Ω be the 1-skeleton of Γ and the 2-skeleton of Ω, respectively, defined by
𝕊_Γ := ⋃_F ∈ℱ_h∂ F, 𝕊_Ω := ⋃_E ∈ℰ_h∂ E.
The global VEM spaces 𝕍_Γ and 𝕍_Ω are then defined as
𝕍_Γ := {v∈ H^1(Γ) | v ∈𝒞^0(𝕊_Γ) ∧ v_|F∈𝕍(F) ∀ F ∈ℱ_h};
𝕍_Ω := {u ∈ H^1(Ω) | u∈𝒞^0(𝕊_Ω) ∧ u_|E∈𝕍(E) ∀ E ∈ℰ_h ∧ u(x,y,L) = 0 ∀ (x,y) ∈ [0,L]^2}.
Notice that the space 𝕍_Ω reflects the homogeneous Dirichlet boundary conditions of the continuous counterpart H^1_B(Ω). To obtain a spatially discrete counterpart of the weak formulation (<ref>), we need suitable discrete bilinear forms. Following <cit.>, for all F∈ℱ_h, E∈ℰ_h, v,w ∈𝕍(F) and u,z∈𝕍(E), we define
m_F(v,w) := ∫_F Π^0_F v Π^0_F w + h_F^2 ⟨(v-Π^0_F v), (w-Π^0_F w) ⟩;
a_F(v,w) := ∫_F ∇Π^∇_F v ·∇Π^∇_F w + ⟨(v-Π^0_F v), (w-Π^0_F w) ⟩;
m_E(u,z) := ∫_E Π^0_E u Π^0_E z + h_E^3 ⟨(u-Π^0_E u), (z-Π^0_E z) ⟩;
a_E(u,z) := ∫_E ∇Π^∇_E u ·∇Π^∇_E z + h_E⟨(u-Π^0_E u), (z-Π^0_E z) ⟩,
where h_F and h_E are the diameters of F and E, respectively. Let m_h^Γ, a_h^Γ : 𝕍_Γ×𝕍_Γ→ℝ and m_h^Ω, a_h^Ω : 𝕍_Ω×𝕍_Ω→ℝ be the corresponding global forms. Furthermore, let I_Γ: 𝒞^0(Γ) →𝕍_Γ and I_Ω: 𝒞^0(Ω) →𝕍_Ω be the Lagrangian interpolant operators. The spatially discrete formulation is finally given by: find B,Q: 𝕍_Ω× [0,T] and Λ,Θ : 𝕍_Γ× [0,T] →ℝ such that
m_h^Ω(Ḃ, Φ) + a_h^Ω(B, Φ) = m_h^Ω(I_Ω_1(B), Φ) - ψ_η m_h^Γ(I_Γ_3(B, Λ, Θ), Φ);
m_h^Ω(Q̇, Φ) + d_Ω a_h^Ω(Q, Φ) = m_h^Ω(I_Ω_2(Q), Φ) - d_Ωψ_θ m_h^Γ(I_Γ_4(Q, Λ, Θ), Φ);
m_h^Γ(Λ̇, Ψ) + a_h^Γ(Λ, Ψ) = m_h^Γ(I_Γ_3(B,Λ, Θ), Ψ);
m_h^Γ( Θ̇, Ψ) + d_Γ a_h^Γ( Θ, Ψ) = m_h^Γ(I_Γ_4(Q,Λ, Θ), Ψ),
for all Φ : 𝕍_Ω× [0,T] →ℝ and Ψ : 𝕍_Γ× [0,T] →ℝ. If N_Γ := dim 𝕍_Γ and N_Ω := dim 𝕍_Ω, let {ψ_i}_i=1^N_Γ and {φ_i}_i=1^N_Ω be the Lagrangian bases of 𝕍_Γ and 𝕍_Ω, respectively. We express the numerical solution (B,Q,Λ,Θ) in the Lagrange bases:
B( x,t) = ∑_i=1^N_Ω b_i(t) φ_i( x), ( x,t) ∈Ω× [0,T];
Q( x,t) = ∑_i=1^N_Ω q_i(t) φ_i( x), ( x,t) ∈Ω× [0,T];
Λ( x,t) = ∑_i=1^N_Γλ_i(t) ψ_i( x), ( x,t) ∈Γ× [0,T];
Θ( x,t) = ∑_i=1^N_Γθ_i(t) ψ_i( x), ( x,t) ∈Γ× [0,T],
where b_i(t), q_i(t), λ_i(t), θ_i(t) are unknown time-dependent coefficients, which are collected in column vectors b(t), q(t) ∈ℝ^N_Ω, η(t), θ(t) ∈ℝ^N_Γ. Following <cit.>, we substitute (<ref>)-(<ref>) into the spatially discrete formulation (<ref>), and we obtain the following ODE system in vector form:
M_Ωḃ + A_Ω b = M_Ω_1( b) - ψ_η R M_Γ_3( b, η, θ);
M_Ωq̇ + d_Ω A_Ω q = M_Ω_2( q) - d_Ωψ_θ R M_Γ_4( q, η, θ);
M_Γη̇ + A_Γη = M_Γ_3( b, η, θ);
M_Γθ̇ + d_Γ A_Γθ = M_Γ_4( q, η, θ),
where the stiffness matrices A_Ω∈ℝ^N_Ω× N_Ω, A_Γ∈ℝ^N_Γ× N_Γ, the lumped mass matrices M_Ω∈ℝ^N_Ω× N_Ω, M_Γ∈ℝ^N_Γ× N_Γ and the reduction matrix R∈ℝ^N_Ω× N_Γ are defined as follows:
(A_Ω)_ij := a_h^Ω(φ_i, φ_j) (M_Ω)_ij := m_h^Ω (φ_i, φ_j), i,j=1,…, N_Ω;
(A_Γ)_ij := a_h^Γ(ψ_i, ψ_j), (M_Γ)_ij := m_h^Γ(ψ_i, ψ_j), i,j=1,…, N_Γ;
R = [ I_N_Γ; 0 ],
where I_N_Γ is the identity of dimension N_Γ.
§.§ Step 4: Fully discrete formulation in vector form
Following <cit.>, we discretize in time the ODE system (<ref>) with the IMEX Euler scheme. Let τ > 0 be the timestep and let N_T = ⌈T/τ⌉ be the number of timesteps. For all n = 0, …, N_T - 1, the fully discrete solution ( b^(n), q^(n), η^(n), θ^(n)) is found as follows:
(M_Ω + τ A_Ω) b^(n+1) = M_Ω b^(n) + τ(M_Ω f_1^(n) - ψ_η R M_Γ f_3^(n));
(M_Ω + d_Ωτ A_Ω) q^(n+1) = M_Ω q^(n) + τ(M_Ω f_2^(n) - ψ_θ d_Ω R M_Γ f_4^(n));
(M_Γ + τ A_Γ) η^(n+1) = M_Γη^(n) + τ M_Γ f_3^(n);
(M_Γ + d_Γτ A_Γ) θ^(n+1) = M_Γθ^(n) + τ M_Γ f_4^(n),
where
f_1^(n) := _1( b^(n));
f_2^(n) := _2( q^(n));
f_3^(n) := _3( b^(n), η^(n), θ^(n));
f_4^(n) := _4( q^(n), η^(n), θ^(n)).
The fully discrete formulation (<ref>) is composed of four linear algebraic systems that can be solved independently of each other at each time step. Of these four linear systems, two have dimension N_Ω, while the other two have dimension N_Γ. If the cube Ω is discretised with a Cartesian mesh with N_x ∈ℕ gridpoints along each dimension, then N_Ω = N_x^3, which makes the linear systems in (<ref>) computationally prohibitive to solve. An extremely efficient approach to address this issue is the so-called Matrix-Oriented Finite Element Method (MOFEM) <cit.>, which exploits the Cartesian structure of the grid to translate the linear systems in (<ref>) into tensor equations of much lower size. We will show a numerical solution to our problem carried out with MOFEM in Section <ref>. However, we will show that, given the particular nature of the considered PDE problem, an even more efficient solver is given by the BSVEM on a bespoke mesh.
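A minimal sketch of one possible implementation of this time loop (our own code, not from the paper) is reported below. It assumes that the matrices M_Ω, A_Ω, M_Γ, A_Γ and R have already been assembled as SciPy sparse matrices, and that a user-supplied callback kinetics evaluates the hatted kinetics at the current iterate, restricting the bulk variables to the surface nodes where needed. Since the four system matrices are constant in time, they are factorized once outside the loop, so that each time step only requires forward/backward substitutions with the precomputed factors.

import scipy.sparse.linalg as spla

def imex_euler(M_O, A_O, M_G, A_G, R, kinetics,
               d_O, d_G, psi_eta, psi_theta,
               b, q, eta, theta, tau, n_steps):
    """IMEX Euler: implicit (constant) diffusion part, explicit kinetics."""
    solve_b     = spla.factorized((M_O + tau * A_O).tocsc())
    solve_q     = spla.factorized((M_O + d_O * tau * A_O).tocsc())
    solve_eta   = spla.factorized((M_G + tau * A_G).tocsc())
    solve_theta = spla.factorized((M_G + d_G * tau * A_G).tocsc())
    for _ in range(n_steps):
        f1, f2, f3, f4 = kinetics(b, q, eta, theta)   # kinetics at the current time step
        b_new     = solve_b(M_O @ b + tau * (M_O @ f1 - psi_eta * (R @ (M_G @ f3))))
        q_new     = solve_q(M_O @ q + tau * (M_O @ f2 - psi_theta * d_O * (R @ (M_G @ f4))))
        eta_new   = solve_eta(M_G @ eta + tau * (M_G @ f3))
        theta_new = solve_theta(M_G @ theta + tau * (M_G @ f4))
        b, q, eta, theta = b_new, q_new, eta_new, theta_new
    return b, q, eta, theta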
§.§ Bespoke BS-VEM mesh for the BS-DIB model
Since the domain of the model problem (<ref>) is a cube, it would be natural to choose an efficient numerical method that exploits the structure of Cartesian grids, such as the Matrix-Oriented Finite Element Method (MOFEM) <cit.>. By choosing the model parameters and the timestep as in Table <ref>, Experiment D1, and a Cartesian grid of 128× 128 × 128 ≈ 2.1e+6 nodes, MOFEM produces the numerical solution shown in Fig. <ref>. In keeping with the physical meaning of the parameter choice, we can observe that the bulk components (b,q) exhibit spatial patterns only in the proximity of the surface Γ, and become approximately constant away from Γ. This suggests that a uniform Cartesian cubic grid would be unnecessarily fine away from Γ. Hence, we apply the BS-VEM with a graded cubic mesh that is highly refined close to Γ and gradually becomes coarser as the distance from Γ increases. Such a graded polyhedral mesh is depicted in Fig. <ref>(a). This grid is composed of two layers of 128× 128 cubic elements close to Γ, and five layers of gradually larger “false cubes”. Such a false cube is depicted in Fig. <ref>(b) and is actually an enneahedron: a polyhedron with nine faces and thirteen vertices. Specifically, such an enneahedron is a cube the bottom face of which is split into four smaller square faces (highlighted in purple in Fig. <ref>(b)). The proposed graded mesh provides a finer discretisation for the surface Γ (129× 129 nodes on Γ) than the Cartesian grid (100× 100 nodes on Γ) used for the MOFEM and, at the same time, has far fewer nodes (approximately 5.5e+4 versus 1e+6), resulting in a discrete problem of much smaller size and shorter computational times (approximately 89 minutes for MO-FEM and 39 minutes for BS-VEM).
§ NUMERICAL EXPERIMENTS
We shall present seven numerical experiments to compare the DIB model (<ref>) with the novel BSDIB model (<ref>). The seven experiments differ from each other in the choices of the model parameters. The first four experiments, called T1 through T4, show that for various choices of the bifurcation parameters, the BSDIB model shows Turing patterns while the DIB model does not. This seems to suggest that the BSDIB model has a larger Turing space than the DIB model. As mentioned above, a theoretical analysis of the Turing space for the BSDIB model is outside the scope of this work. The remaining three experiments, called D1 through D3, show that when the DIB model exhibits Turing patterns, the BSDIB model still exhibits Turing patterns, but not necessarily of the same morphological class. All the experiments are carried out on a cubic domain of edge length L=50 on the polyhedral mesh described in Section <ref>. The final time T and the timestep τ also differ for each experiment according to the stiffness of the problem and the timescale of the dynamics. A recap of the numerical experiments and the respective parameters is given in Table <ref>.
§ CONCLUSIONS
We have introduced a BSRD model in 3D, which we have called the BS-DIB model, for electrodeposition. Compared to the previous DIB model in 2D, the new model fully accounts for the non-uniform electrolyte concentration in a neighborhood of the electrodic surface. The two-way coupling between bulk and surface substantially influences the long-term behavior of the system and in particular the morphological class of the Turing patterns obtained as asymptotic steady state solutions. Specifically, we find that the bulk-surface coupling has two main effects. First, we observe empirically that the BS-DIB model possesses a larger Turing region in the parameter space than the DIB model. Second, when the parameters are chosen in the Turing space of the DIB model, the BS-DIB model still exhibits spatial patterns, but of a different morphological class, i.e. the bulk-surface coupling affects the morphological class of the attained patterns.
The BS-DIB model is posed on a cubic domain, so it lends itself to efficient numerical solvers specifically devised for Cartesian grids, such as the MO-FEM. Moreover, since the BS-DIB model exhibits spatial patterns only in a neighborhood of the surface, we have adopted the BS-VEM on a graded mesh that is highly refined close to the surface and much coarser away from the surface. Such a graded mesh combines the advantages of (i) being composed of equal elements of cubic shape, which significantly speeds up matrix assembly and improves matrix structure, and (ii) having far fewer degrees of freedom than a uniform Cartesian grid with the same level of refinement close to the surface. For this reason, the BS-VEM on a graded mesh proves to be more computationally efficient than the MO-FEM and is thus the spatial method of choice throughout this work.
As opposed to the MO-FEM, which is confined to structured geometries such as Cartesian grids, the BS-VEM can handle domains of general shape, thereby facilitating the simulation of real case studies.
A theoretical Turing instability analysis of the BS-DIB model is beyond the scope of this work. These aspects form part of our current investigations.
§ ACKNOWLEDGMENTS AND FUNDING
The work of MF was funded by Regione Puglia (Italy) through the research programme REFIN-Research for Innovation (protocol code 901D2CAA, project number UNISAL026) and by the research project “Metodi numerici innovativi per lo studio delle batterie” (INdAM-GNCS project CUP_ E55F22000270001).
The work of IS is supported by the MIUR Project PRIN 2020, “Mathematics for Industry 4.0”, Project No. 2020F3NCPX, by the ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union – NextGenerationEU, Project code CN00000013 and by the research project “Tecniche avanzate per problemi evolutivi: discretizzazione, algebra lineare numerica, ottimizzazione” (INdAM-GNCS project CUP_ E55F22000270001).
MF and IS are members of the INdAM-GNCS activity group (Italian National Group of Scientific Computing). MF is a member of SIMAI.
§ CONFLICT OF INTEREST
The authors declare no conflict of interest.
bozzini2013spatio
B.Bozzini, D.Lacitignola and I.Sgura. Spatio-temporal organization in alloy electrodeposition: a morphochemical mathematical model and its experimental validation,
Journal of Solid State
Electrochemistry,
17(2) (2013), 467––469.
10.1007%2Fs10008-012-1945-7
dautilia_matrix
M.C.D'Autilia, I.Sgura and V.Simoncini. Matrix-oriented discretization methods
for reaction-diffusion PDEs: Comparisons and applications,
Computers & Mathematics with Applications,
79(7) (2020), 2067––2085.
10.1016%2Fj.camwa.2019.10.020
bsvem_parabolic
M.Frittelli, A.Madzvamuse and I.Sgura. The bulk-surface virtual element method for
reaction-diffusion PDEs: analysis and applications,
Communications in Computational Physics,
33(3) (2023), 733––763.
10.4208%2Fcicp.oa-2022-0204
lsfem
M.Frittelli, A.Madzvamuse, I.Sgura and C.Venkataraman. Preserving invariance properties of reaction-diffusion systems on stationary surfaces,
IMA Journal of Numerical Analysis,
39(1) (2019), 235––270.
10.1093/imanum/drx058
mofem_2d
M.Frittelli, and I.Sgura. Matrix-oriented FEM formulation for reaction-diffusion PDEs on a large class of 2D domains,
Applied Numerical Mathematics,
Accepted for publication
bsfem
A.Madzvamuse and A.W.Chung. The bulk-surface finite element method for reaction-diffusion systems on stationary volumes,
Finite Elements in Analysis and Design,
108 (2016), 9––21.
10.1016/j.finel.2015.09.002
madzvamuse_bsrds
A.Madzvamuse, A.W.Chung and C. Venkataraman. Stability analysis and simulations of coupled bulk-surface reaction–diffusion systems,
Proceedings of the Royal Society A,
471(2175) (2015), 20140546.
10.1098/rspa.2014.0546
amospaper
I. Sgura, A. S. Lawless, and B. Bozzini. Parameter estimation for a morphochemical reaction-diffusion model of electrochemical pattern formation,
Inverse Problems in Science and Engineering,
27(5) (2019), 618––647.
10.1080/17415977.2018.1490278
simoncini_siam_review
V.Simoncini. Computational methods for linear matrix equations,
SIAM Review,
58(3) (2016), 377––441.
10.1137/130912839
mgq
I. Sgura, L. Mainetti, F. Negro, M. G. Quarta and B. Bozzini. Deep-learning based parameter identification enables rationalization of battery material evolution in complex electrochemical systems,
Journal of Computational Science,
66 (2016), 101900.
10.1016/j.jocs.2022.101900
ahmad
B. Ahmad, A. Alsaedi, F. Brezzi, L.D. Marini, and A. Russo. Equivalent projectors for virtual element methods,
CAMWA,
66 (3) (2013), 376-391.
10.1016/j.camwa.2013.05.015
lacitignola2015
D. Lacitignola, B. Bozzini and I. Sgura.
Spatio-temporal organization in a morphochemical electrodeposition model: Hopf and Turing instabilities and their interplay,
European Journal of Applied Mathematics,
26 (2) (2015), 143-173.
10.1017/S0956792514000370
DIB_cross_diff
D. Lacitignola, B. Bozzini, Peipmann, R. and I. Sgura.
Spatio-temporal organization in a morphochemical electrodeposition model: Hopf and Turing instabilities and their interplay,
Applied Mathematical Modelling,
57 (2018), 492-513.
10.1016/j.apm.2018.01.005
DIB_sphere
D. Lacitignola, B. Bozzini, Frittelli, M. and I. Sgura.
Turing pattern formation on the sphere for a morphochemical reaction-diffusion model for electrodeposition,
Communications in Nonlinear Science and Numerical Simulation,
48 (2017), 484-508.
10.1016/j.cnsns.2017.01.008
cutFEM
P. Hansbo, M.G. Larson, S. Zahedi.
A cut finite element method for coupled bulk-surface problems on time-dependent domains,
Computer Methods in Applied Mathematics,
307 (2016), 96-116.
10.1016/j.cma.2016.04.012
unfitted_bsfem
K. Deckelnick, C.M. Elliott, T. Ranner.
Unfitted finite element methods using bulk meshes for surface partial differential equations,
SIAM Journal on Numerical Analysis,
52(4) (2014), 2137-2162.
10.1137/130948641
kernel_bspde
M. Cheng and L. Ling.
Kernel-based meshless collocation methods for solving coupled bulk-surface partial differential equations,
Journal of Scientific Computing,
81 (2019), 375-391.
10.1007/s10915-019-01020-2
mofem3d
M. Frittelli and I. Sgura. The matrix-oriented finite element method in three space dimensions,
Work in progress
quarteroni_book
A.Quarteroni and S.Quarteroni,
Numerical models for differential problems. Springer, 2009.
10.1007/978-88-470-1071-0
|
http://arxiv.org/abs/2307.07661v1 | 20230714235135 | Diagram Systems and Generalized Finite Type Theories | [
"Cole Hugelmeyer"
] | math.GT | [
"math.GT",
"57M27"
] |
We present a category theoretical generalization of the Goussarov theorem for finite type invariants, relating generating sets for generalized finite type theories with diagram systems for the corresponding topological objects. We will demonstrate this correspondence through a few examples including the standard finite type theory and its relationship with clasp diagrams, the finite type theory of delta moves and a new diagram system called looms, and the finite type theory of combinatorial structures we call virtual transverse knots. The finite type theory of delta moves may have applications to unknotting number, and the theory of virtual transverse knots leads to many interesting and difficult conjectures.
§ INTRODUCTION
In this paper, we establish a relationship between generalized finite type theories of knot-like structures and diagrammatic combinatorial systems for representing those structures. By inventing suitable new combinatorial diagram systems for representing knots and knot-like structures, we can compute the universal abelian groups of generalized finite type theories. In the existing literature, there are two prominent examples of this correspondence. Gauss diagrams correspond to the finite type theory of virtual knots, and clasp diagrams correspond to the classical finite type theory of ordinary knots <cit.> <cit.>. In addition to building the beginnings of a generalized framework for this correspondence, we will add two new items to this list.
A delta move is a transformation of knots where a strand passes through a clasp. The finite type theory of delta moves is a generalized finite type theory which is similar to the classical finite type theory of ordinary knots, except that delta moves play the role of crossing changes. Variations of the finite type theory of delta moves that we develop in this paper have been studied in the existing literature. In <cit.>, it was shown that any delta-finite type invariant of rank n which is additive under connect sum is also an ordinary finite type invariant of rank 2n, and in <cit.>, a variation called doubled delta moves were studied, and the finite type theory was shown to not be finitely generated, as even the rank zero doubled delta move invariants fully classify knots up to S-equivalence. We will show that the finite type theory of delta moves relates to a diagram system we call looms. We will prove that the universal abelian group of delta move finite type invariants of rank n is finitely generated, and we will also make conjectures about the potential applications of these new invariants. In particular, looms are naturally graded by unknotting number, and this induces a natural filtration by unknotting number on the universal abelian group of rank n delta move finite type invariants. If this filtration turns out to be nontrivial, these invariants could yield lower bounds for unknotting number.
Second, we will introduce the notion of a virtual transverse knot and we will construct a finite type theory for these objects using a representation system called braided Gauss diagrams. Virtual transverse knots are puzzling because many of the simplest questions one may ask about them are extremely difficult to answer. For instance, we cannot yet distinguish any nontrivial pair of transverse knots as virtual transverse knots. One notable aspect of the finite type theory of virtual transverse knots is that the natural presentation is not finite, as there are an infinite number of braided Gauss diagrams with a given number of chords. We will give a different presentation which is finite. With a computer program, we calculate the dimensions over various finite fields for the universal vector space of virtual transverse finite type invariants up to rank 5.
§ DIAGRAM SYSTEMS AND GENERALIZED FINITE TYPE THEORIES
We will begin by presenting a category theoretical framework that describes the relationship between diagram systems and finite type theories. We introduce the notion of the universal finite type group for a cubical complex, and then we introduce the notion of a diagram system for a cubical complex. We prove that given a diagram system, we obtain a simplified, and often finite, set of generators for the rank n finite type group.
Let [n] denote the set {1,...,n}. The combinatorial n-cube, C_n, is the set of all functions [n]→{0,1}, which we call binary sequences. For a binary sequence b, we write |b| to denote the total number of 1s within it.
We define the cube category, 𝐂𝐛, to be the category where the objects are the combinatorial n-cubes for all n≥ 0, and the arrows are the functions a: C_m→ C_n for which there exists an ordered pair, (f,s), where f:[m]→[n] is injective, s: [n]∖ im(f) →{0,1}, and which has the following two properties:
1) For any b∈ C_m, and any i∈ [m], we have b(i) = a(b)(f(i)).
2) For any b∈ im(a), and any i∈ [n]∖ im(f), we have b(i) = s(i).
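As a small worked example (ours, not from the original): take m = 1, n = 2, and the pair (f,s) with f(1) = 2 and s(1) = 1. The corresponding arrow a: C_1→ C_2 sends b∈ C_1 to the binary sequence a(b) with a(b)(1) = s(1) = 1 and a(b)(2) = b(1), so that properties 1 and 2 hold; geometrically, a identifies C_1 with the edge of the square C_2 on which the first coordinate is frozen at 1.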
A cubical complex is defined to be a contravariant functor 𝐂𝐛^op→𝐒𝐞𝐭. Given a cubical complex X, we call X(C_n) the set of n-cells of the complex, and we write it as X_n.
It should be noted that this is not quite the standard definition of a cubical complex, as cells have directed edges and these directions must be preserved by glueing maps, but for our purposes this definition is the most convenient. We will not be interested in the topology of a cubical complex. Rather, we are concerned with their finite type theories.
Let X be a cubical complex. We define the rank n finite type group of X to be the abelian group U_n(X) with the following presentation:
The generators are given by the 0-cells of X.
The relations are indexed by c∈ X_n+1, and are given by the following formula.
∑_f∈ Hom(C_0,C_n+1) (-1)^|f(∅)|· X(f^op)(c)
Geometrically, this sum is the alternating sum over the corners of any (n+1)-cube. For x∈ X_0, we will use x interchangeably to denote the 0-cell, and the element of U_n(X) corresponding to that 0-cell.
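For example (a worked special case in our own notation for the corner maps): in the case n = 1, a 2-cell c∈ X_2 has four corners, and the corresponding relation in U_1(X) reads

X(f_00^op)(c) - X(f_01^op)(c) - X(f_10^op)(c) + X(f_11^op)(c) = 0,

where f_b: C_0→ C_2 denotes the arrow whose pair (f,s) has f empty and s = b∈ C_2, so that f_b(∅) = b. For X = K^cc this is the familiar condition that the alternating sum of the four resolutions of a knot with two labeled singular points vanishes in U_1(K^cc).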
If X and Y are cubical complexes, and η is a natural transformation from X to Y, then η(C_0) is a map X_0→ Y_0. This induces a homomorphism U_n(X)→ U_n(Y) because the relations of U_n(X) are mapped to relations of U_n(Y) by η(C_n+1). Thus, we can think of U_n as a functor from the category of cubical complexes, with natural transformations as morphisms, to the category of abelian groups.
We write K^cc to denote the cubical complex of crossing changes for knots. An element of K_n^cc is an isotopy class of singular knots with n self-intersections which are labeled 1 through n, along with a choice of bijection σ_i: {0,1}→{1,-1} for each singularity. For a map a:C_m → C_n with corresponding ordered pair (f,s), the map K^cc(a^op) resolves the singularities corresponding to a label i∈ [n]∖ im(f) into crossings with sign given by σ_i(s(i)), and for singularities with label i∈ im(f), it replaces the label with f^-1(i). Checking functoriality of K^cc: 𝐂𝐛^op→𝐒𝐞𝐭 is fairly trivial, so this has been left to the reader.
We then have that U_n(K^cc) is the universal abelian group for rank n finite type invariants of knots. The abelian group of rank n finite type invariants of knots with coefficients in an abelian group G is naturally isomorphic to the group of homomorphisms from U_n(K^cc) to G. The group U_n(K^cc) is the natural target space for the universal rank n finite type invariant.
It should be noted that, even if we used a fixed choice of σ_i, rather than letting it depend on the singularity, we would still get the same finite type group U_n(K^cc). The reason we include these extra bits of information is that they are necessary for us to be able to find a diagram system for the cubical complex, which is a notion we will soon define. There may be many topologically distinct cubical complexes with the same finite type theory, as when there are multiple cubes with the same corners, it will be the same as if there was only one such cube from the perspective of the abelian group quotient.
Here, we define something we will call the cubical complex of a generating set of a monoid. We will build upon this example further in this section to help explain the concepts we introduce. To begin, let M be a monoid, and let S be a generating set of M. We then define a cubical complex Y where the cubes correspond to inserting or deleting generators within a word. More precisely, an n-cell of Y_n consists of a function C_n→ M of the form
b↦ x_0y_1^b(σ(1))x_1y_2^b(σ(2))x_2...y_n^b(σ(n))x_n
where x_0,...,x_n are elements of M, y_1,...,y_n are elements of S, and σ: [n]→[n] is a permutation. To define how this maps the morphisms, we say that if f:C_m→ M is of the above form, and a: C_n→ C_m is a cube map, then Y(a^op)(f) = f∘ a.
In this case, we have that U_n(Y) is isomorphic to the quotient of the monoid ring given by ℤ[M]/J^n+1, where J is the two-sided ideal generated by elements of the form 1-y, for y∈ S. To see why this is the case, we observe that when we expand the product
x_0(1-y_1)x_1(1-y_2)x_2...(1-y_n+1)x_n+1
we get an alternating sum of the corners of an (n+1)-cell of Y. Thus, the generators of J^n+1 coincide with the relations of U_n(Y).
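For instance (a worked special case added for illustration), when n = 1 the expansion reads

x_0(1-y_1)x_1(1-y_2)x_2 = x_0x_1x_2 - x_0y_1x_1x_2 - x_0x_1y_2x_2 + x_0y_1x_1y_2x_2,

which is exactly the alternating sum of the four corners of the 2-cell b↦ x_0y_1^b(1)x_1y_2^b(2)x_2 (taking σ to be the identity permutation).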
Let 𝐈𝐧𝐣 be the category of injective functions between finite sets.
A diagram category is a category D equipped with a functor F_D:D→𝐈𝐧𝐣 such that the following three properties hold.
1) F_D is faithful.
2) If a is an object of D, then for every subset S⊆ F_D(a), there is an arrow f: b→ a of D such that im(F_D(f)) = S.
3) If f_1: b_1→ a and f_2: b_2→ a are such that im(F_D(f_1)) = im(F_D(f_2)), then there is an isomorphism g: b_1→ b_2 such that f_1 = f_2 g.
The objects of a diagram category are called diagrams, and for a diagram a, the cardinality |F_D(a)| is called the order of a and is written |a|. This will always be a finite number because the objects of 𝐈𝐧𝐣 are required to be finite sets.
Whenever S⊆ F_D(a), we will use a_S to denote the object of D, defined up to isomorphism, for which there exists an arrow f:a_S→ a with im(F_D(f)) = S. We call a_S the subdiagram of a corresponding to S. We will write ι_S to denote the arrow a_S→ a, which is defined up to isomorphism in the overcategory of a.
A Gauss diagram is a set of chords on a circle, each of which has a specified direction and a specified sign. They represent virtual knots, where the chords correspond to crossings, the directions of the chords designate over-crossings and under-crossings, and the signs of the chords designate the signs of the crossings. Gauss diagrams form a diagram category GD where the arrows are subdiagram inclusions and rotational symmetries, and F_GD takes a Gauss diagram to its set of chords.
Given an alphabet, A, of symbols, words in that alphabet can be given the structure of a diagram category which we call W(A). An object of W(A) is just a sequence of symbols from A, and F_W(A) takes any word to the set of symbols that comprise it, where repeated symbols of the same kind are considered different elements of the set. A morphism a: w_1→ w_2 of W(A) is a way to map w_1 into w_2 as a subword, where the letters of w_1 appear in order within w_2, but not necessarily consecutively.
If D is a diagram category and a is an object of D, and if H⊆ S ⊆ F_D(a), then the arrow ι_H lifts uniquely up ι_S to give us a map ι_(H,S): a_H→ a_S with ι_Sι_(H,S) = ι_H.
Let Q⊆ F_D(a_S) be the inverse image Q = (F_D(ι_S))^-1H. Then we have an arrow ι_Q: (a_S)_Q→ a_S, and we see that ι_Sι_Q: (a_S)_Q→ a has im(F_D(ι_Sι_Q)) = H = im(F_D(ι_H)). Therefore, by axiom 3 of diagram categories, we have an isomorphism g: a_H → (a_S)_Q such that ι_H = ι_Sι_Qg. We may then simply set ι_(H,S) = ι_Qg to get the desired lifting. This lifting will be unique by axiom 2, as any such lifting will have Q as its image under F_D.
Given a diagram category D, we construct a cubical complex, denoted D^. Intuitively speaking, a cube of this complex is given by taking all diagrams in-between a given diagram and one of its subdiagrams. More formally, we define the elements of D^_n to be the set of all isomorphism classes of ordered pairs (g,ϕ) where g:p→ q is an arrow of D, and ϕ is a bijection F_D(q) ∖(F_D(g))→ [n]. An isomorphism between two such pairs is an isomorphism of arrows between the first components that preserves the labelings from the second components. That is to say, if we have (g_1,ϕ_1) with g_1: p_1→ q_1 and (g_2,ϕ_2) with g_2:p_2→ q_2, then an isomorphism between these pairs is a pair of isomorphisms h_p: p_1→ p_s and h_q:q_1→ q_2 with g_2h_p = h_qg_1 and ϕ_2F_D(h_q) = ϕ_1. Next, we need to specify how a map a:C_m → C_n acts on a pair (g,ϕ) with g:p→ q. Suppose that the ordered pair corresponding to a, as in the definition of the cube category, is (f,s). We then define D^ (a^op)(g,ϕ) by the following formula.
D^ (a^op)(g,ϕ) = (ι_ ((F_D(g)) ∪ϕ^-1s^-1{1},F_D(q) ∖ϕ^-1s^-1{0}), f^-1ϕ|_ϕ^-1((f)) F_D(ι_F_D(q) ∖ϕ^-1s^-1{0}))
This formula might be somewhat difficult to unpack, so it may not be obvious that it is functorial. A proof of functoriality is given below.
Since the isomorphism classes of objects of D are naturally in bijective correspondence with the elements of D_0^, we will usually abuse notation and write a to refer to the 0-cell corresponding to the isomorphism class of (1_a,∅), whenever a is an object of D.
For a diagram category D, the map D^: 𝐂𝐛^op→𝐒𝐞𝐭 is functorial.
Let a: C_m→ C_n with corresponding ordered pair (f_a,s_a), and let n: C_k→ C_m with corresponding ordered pair (f_b,s_b). We wish to establish that whenever (g,ϕ) is as above with g:p→ q, we have D^((ab)^op)(g,ϕ) = D^(b^op)D^(a^op)(g,ϕ). We will to this term by term. For the first term of the ordered pair, we see that the pair (f_ab,s_ab) corresponding to ab has f_ab = f_af_b and s_ab is defined piecewise as s_a in [n]∖(f_a) and s_bf_a^-1 in f_a([m]∖(f_b)). Therefore, the inverse images s_ab^-1{i} for i∈{0,1} are equal to ϕ^-1s_a^-1{i}∪ϕ^-1f_as_b^-1{i}. Thus, we have
((D^((ab)^op)(g,ϕ))_1 = ι_((F_D(g))∪ϕ^-1s_ab^-1{1}, F_D(q) ∖ϕ^-1s_ab^-1{0})
= ι_((F_D(g))∪ϕ^-1s_a^-1{1}∪ϕ^-1f_as_b^-1{1}, F_D(q) ∖ (ϕ^-1s_a^-1{0}∪ϕ^-1f_as_b^-1{0})) = (D^(b^op)D^(a^op)(g,ϕ))_1
Next, we check the second coordinate. We have
((D^((ab)^op)(g,ϕ))_2 = f_ab^-1ϕ|_ϕ^-1(f_ab) F_D(ι_F_D(q)∖ϕ^-1s_ab^-10)
= f_b^-1f_a^-1ϕ|_ϕ^-1(f_ab) F_D(ι_F_D(q)∖ (ϕ^-1s_a^-1{0}∪ϕ^-1f_as_b^-1{0}))
and
f_a^-1ϕ|_ϕ^-1(f_ab) =ϕ'|_ϕ'^-1(f_b)
where ϕ' = f_a^-1ϕ|_ϕ^-1((f_a)) F_D(ι_F_D(q) ∖ϕ^-1s_a^-1{0}).
Therefore, since ϕ' = (D^(a^op)(g,ϕ))_2, we have
((D^((ab)^op)(g,ϕ))_2 = f_b^-1ϕ'|_ϕ'^-1((f_b)) F_D(ι_F_D(q)∖ (ϕ^-1s_a^-1{0}∪ϕ^-1f_as_b^-1{0}))
= (D^(b^op)D^(a^op)(g,ϕ))_2
We define a diagram system, (D,ε), for a cubical complex X to be a diagram category D, along with a natural transformation ε: D^→ X, such that ε(C_0): D^_0→ X_0 is a surjection. We write ε_n = ε(C_n).
Let M be a monoid with generating set S, and let Y be the cubical complex of this generating set. We have a diagram system for Y given by (W(S),ε), where ε: W(S)^→ Y is given by letting ε_n(g,ϕ)(b) be obtained by taking the word that is the codomain of g, and removing the letters in the set ϕ^-1b^-1{0}. In this way, words from the generating set form a diagram system for the cubical complex of that generating set.
If X is a cubical complex with a diagram system (D,ε), then U_n(X) is generated by elements of the form ε_0(a), where |a| ≤ n.
Let x∈ X_0. We wish to show that x is equivalent modulo the rank n relations to a linear combination of elements of the form ε_0(a) with |a| ≤ n. By surjectivity, we may choose an object of the diagram category, r, so that ε_0(r) = x. Let m = |r|. If m ≤ n, then we are done. For the case m>n, we proceed by induction. Suppose that every element of the form ε_0(b) with |b| ≤ m-1 can be expressed as a linear combination of elements of the form ε_0(a) with |a| ≤ n. Then, we just need to show that x can be expressed as a linear combination of elements of the form ε_0(b) with |b| ≤ m-1. To do this, let S be any subset of F_D(r) with |S| = n+1, which must exist because m>n. Then, let g: q→ r be an arrow of the diagram category with (F_D(g)) = F_D(r) ∖ S, and let ϕ: S→ [n+1] be a bijection. We then have an element c of D_n+1^ corresponding to (g,ϕ). The relation corresponding to ε_n+1(c) will then have one of its terms equal to ε_0(r), and all of its other terms will be of the form ε_0(b) with |b| ≤ m-1. This gives us the desired linear combination.
The idea behind this theorem is that one good way to compute the finite type theory of a cubical complex is to find a diagram system for that cubical complex. This will yield a nice set of generators. For example, in the case of the cubical complex of a generating set for a monoid and its diagram system of words, the above theorem corresponds to the fact that ℤ[M]/J^n+1 is generated as an abelian group by words of length at most n. For more complicated cubical complexes, like K^cc, this reduction in the space of generators is extremely useful.
We also have the following useful fact about the finite type theories of diagram categories.
Let D be a diagram category, and let Ab(D_≤ n) be the free abelian group on isomorphism classes of diagrams of order at most n. There is an isomorphism
s: U_n(D^) → Ab(D_≤ n)
given by mapping a diagram to the sum of all subdiagrams of order at most n. That is to say,
s(a) = ∑_S⊆ F_D(a), |S| ≤ n a_S
Note that one isomorphism class of object may appear multiple times in this sum if it is a subobject in multiple ways.
We wish to prove that s is a well-defined homomorphism of abelian groups, and that it is an isomorphism. To prove that it is a homomorphism, we need to prove that it maps relations to zero. Relations for U_n(D^) can be indexed by pairs (Q,a) with Q⊆ F_D(a) and |F_D(a) ∖ Q| = n+1, and are given by the following formula.
R_(Q,a) = ∑_Q⊆ H ⊆ F_D(a) (-1)^|H| a_H
Applying s, we get
s(R_(Q,a)) = s(∑_Q⊆ H ⊆ F_D(a) (-1)^|H| a_H) = ∑_Q⊆ H ⊆ F_D(a) (-1)^|H|∑_S⊆ H, |S| ≤ n a_S
Swapping the sums, this becomes
∑_S⊆ F_D(a), |S| ≤ n∑_S∪ Q ⊆ H ⊆ F_D(a) (-1)^|H|a_S
However, ∑_S∪ Q ⊆ H ⊆ F_D(a) (-1)^|H|a_S can only be nonzero if S∪ Q = F_D(a), but we know that |F_D(a) ∖ Q| = n+1 and |S| ≤ n, so this is impossible. Thus, s(R_(Q,a)) = 0, so the relations map to zero.
To prove that s is an isomorphism, we construct an inverse s_-. We define
s_-(a) = ∑_S⊆ F_D(a) (-1)^|F_D(a) ∖ S| a_S
Which gives us a homomorphism Ab(D_≤ n)→ U_n(D^). It is now just a matter of checking that this is indeed an inverse for s. We will check that both ss_- and s_-s are identities.
ss_-(a) = s( ∑_H⊆ F_D(a) (-1)^|F_D(a) ∖ H| a_H ) = ∑_H⊆ F_D(a) (-1)^|F_D(a) ∖ H|∑_S⊆ H a_S
Swapping the sums, we get
ss_-(a) = ∑_S⊆ F_D(a)∑_S⊆ H ⊆ F_D(a) (-1)^|F_D(a) ∖ H| a_S
But ∑_S⊆ H ⊆ F_D(a) (-1)^|F_D(a) ∖ H| a_S is zero unless S = F_D(a), so ss_-(a) = a_F_D(a) = a.
Finally, we check s_-s(a).
s_-s(a) = s_-(∑_H⊆ F_D(a), |H| ≤ n a_H) = ∑_H⊆ F_D(a), |H| ≤ n∑_S⊆ H (-1)^|H ∖ S| a_S
We are working in U_n(D^), so we may add in terms as long as they are relations. Furthermore, if |H|> n, then ∑_S⊆ H(-1)^|H∖ S| a_S is a relation. Therefore, we have
s_-s(a) = ∑_H⊆ F_D(a), |H| ≤ n∑_S⊆ H (-1)^|H ∖ S| a_S + ∑_H⊆ F_D(a), |H| > n∑_S⊆ H (-1)^|H ∖ S| a_S
= ∑_H⊆ F_D(a)∑_S⊆ H (-1)^|H ∖ S| a_S
Swapping the sums, we get
s_-s(a) = ∑_S⊆ F_D(a)∑_S⊆ H ⊆ F_D(a) (-1)^|H ∖ S| a_S
But ∑_S⊆ H ⊆ F_D(a) (-1)^|H ∖ S| a_S can only be nonzero if S = F_D(a), so we have s_-s(a) = a.
Thus, when X is a cubical complex, and (D,ε) is a diagram system for X, we only need to compute the subspace
s(ker(U_n(ε_0)))⊆ Ab(D_≤ n)
and this will give us a presentation for U_n(X), as we have an isomorphism
Ab(D_≤ n)/ s(ker(U_n(ε_0))) ≃ U_n(X)
This presentation can usually be computed if we have some system of equivalence moves for our diagrams for which X_0 is the moduli space. In the case of Gauss diagrams, these equivalence moves are just Reidemeister moves, and the relations are then just the subdiagram sums of Reidemeister moves. Things get more complicated when the equivalence moves cannot be easily stated in terms of local modifications of the diagrams, as is the case with the diagram system of looms that we will later define.
A clasp diagram consists of the following data:
1) A chord diagram with a finite set C of unoriented chords on the circle.
2) A total ordering on C called the height ordering.
3) A function s: C→{1,-1} called the sign function.
Clasp diagrams form a diagram category CL where the morphisms are subdiagram inclusions, and F_CL takes a diagram to its set of chords. There is a map from clasp diagrams to knots, which represents a knot by starting with a circular unknot, then adding clasps. We add the clasps along the chords of the diagram, with relative heights given by the height ordering, and clasp sign given by the sign function. This map induces a functor k: CL^→ K^cc so that (CL,k) is a diagram system for K^cc. There are equivalence moves for clasp diagrams, modulo which we get the set of knots. A presentation for U_n(K^cc) can then be computed by analyzing these equivalence moves, as was done in <cit.>.
Gauss diagrams form a diagram system for the cubical complex of virtual knots, where the cubes consist of ways to switch a set of virtual crossings to real crossings. For a treatment of the resulting finite type theory, see <cit.>.
Finally, it is worth noting that if we have a diagram category D, as well as a surjective function π: D_0^→ S from isomorphism classes of objects of D to some set S whose elements we wish to understand, then we can construct a cubical complex X by X_0 = S and X_n =D_n^ for n>0, where for a: C_0→ C_n, we let X(a^op) = π D^(a^op). We then have invariants for elements of S in U_n(X) for all n. Using Theorem 2 we can deduce that U_n(X) is isomorphic to Ab(D_≤ n) / R, where R is defined to be the subgroup of Ab(D_≤ n) generated by elements of the form s(x) - s(y) for which π(x) = π(y). If we have some system of equivalence moves on isomorphism classes of objects of D for which S is the moduli space, then we can find a generating set for R indexed by those equivalence moves.
§ LOOMS AND THE FINITE TYPE THEORY OF DELTA MOVES
A loom is a sequence of symbols from the alphabet { +_i^j, -_i^j, 0_i^j, |_±}_i, j∈ℤ_>0, ±∈{+,-} with the following properties.
1) If there are n symbols from {|_±}_±∈{+,-}, then each symbol of the form σ_i^j with σ∈{+,-,0} has 1 ≤ i ≤ n.
2) If there are m symbols from { +_i^j, -_i^j, 0_i^j}_i, j∈ℤ_>0, then for every j from 1 to m, there is exactly one symbol with superscript j.
The symbols of the form σ_i^j with σ∈{+,-,0} are called thread symbols; the subscript is called the target, and the superscript is called the height. The symbols |_± are called bar symbols, and their subscript is called their sign.
Looms encode knots. To construct a knot from a loom, we start with a circular unknot and then we add n vertical parallel clasps, corresponding to the bar symbols, with the appropriate signs. Then, we add in loops around these clasps, coming up from the bottom, corresponding to the thread symbols. The target determines which clasp the loop goes around, the kind of loop we add depends on the thread symbol, and the vertical position of the loop is given by its height. The height also determines how high up along the clasp the loop goes. See the example below.
As we can see in the figure, the loops corresponding to + symbols have a positive twist, the loops corresponding to the - symbols have a negative twist, and the loops corresponding to 0 symbols have no twist. The relative heights of the loops, and how high up they reach along the bars, is determined by their height numbering.
For a loom ℓ, we write t(ℓ) to denote the set of thread symbols of ℓ. Given s∈ t(ℓ), we can create a new loom, denoted ℓ∖{s}, by deleting the thread symbol s and decrementing all heights greater than the height of s by one. Given a subset S⊆ t(ℓ), we can create a loom ℓ∖ S by deleting each thread symbol in S and decrementing the remaining heights appropriately.
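The deletion operation is pure bookkeeping, and it may help to see it spelled out. The following minimal sketch is our own illustration, with an encoding chosen purely for exposition: a bar symbol is stored as ('bar', sign) and a thread symbol as ('thread', sign, target, height). Since heights are distinct, a subset S⊆ t(ℓ) can be specified by the set of heights of its elements.

# Our own illustrative encoding of looms as lists of symbols, in order.
def delete_threads(loom, heights):
    """Delete the thread symbols whose heights lie in `heights`, and
    decrement the remaining heights so that they are again 1, ..., m'."""
    doomed = set(heights)
    kept = [s for s in loom if not (s[0] == 'thread' and s[3] in doomed)]
    def new_height(h):
        # each surviving height drops by the number of deleted heights below it
        return h - sum(1 for d in doomed if d < h)
    return [('thread', s[1], s[2], new_height(s[3])) if s[0] == 'thread' else s
            for s in kept]

# Example: two bars and three threads; delete the thread of height 2.
example = [('bar', +1), ('thread', +1, 1, 2), ('bar', -1),
           ('thread', 0, 2, 1), ('thread', -1, 1, 3)]
print(delete_threads(example, {2}))
# [('bar', 1), ('bar', -1), ('thread', 0, 2, 1), ('thread', -1, 1, 2)]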
We write k(ℓ) to denote the knot represented by ℓ.
Given a loom, ℓ, we define four new looms, the left and right, positive and negative stabilizations: ℓ|_-, ℓ|_+, |_-ℓ_+1, |_+ℓ_+1, where ℓ_+1 denotes increasing every numerical subscript by one. It is easy to check that all four of these are indeed looms. We say two looms are stabilization equivalent if one can be transformed to the other by a sequence of stabilizations and destabilizations. We write ≃ to denote stabilization equivalence, and we write LM to denote the set of looms up to stabilization equivalence. We give LM the structure of a diagram category as follows: A morphism a: ℓ_1→ℓ_2 is a specified set S_a⊂ t(ℓ_2) such that ℓ_1≃ℓ_2∖ S_a. The map t then induces our functor F_LM = t:LM→𝐈𝐧𝐣.
It is clear that stabilizations do not modify the resulting knot in any way, as they just add a reducible loop to the side of the knot diagram. Generally, when we talk about a “loom”, we really mean a stabilization equivalence class of looms. We will make the distinction when it is relevant.
A delta move is a transformation of knots where we pass a strand over a clasp. It can also be thought of as the “forbidden” Reidemeister move where three strands cross over each other in a cyclically symmetric way. See the figure.
We will now define K^Δ, the cubical complex of delta moves. Our preferred definition for this cubical complex is not as obvious as one might think. If we just consider the 2-cells of the complex, there are two kinds of commutative behaviors for delta moves which we wish to include in the complex, depicted in the figure below.
To define K^Δ, we first define a caterpillar band on a knot. If we have a knot γ: S^1→ℝ^3, a rank n caterpillar band is a smooth embedding β: ([0,1]×[0,n+1])→ℝ^3 with the property that β^-1(im(γ)) = [0,1]×{0,...,n+1}. That is to say, it is a band which the knot passes through laterally n times. See the figure below.
We require that the orientation of γ is consistent with the counterclockwise orientation of the boundary of β where it coincides with that boundary at the pair of arcs β([0,1]×{0,n+1}).
We define a caterpillar knot to be a knot equipped with several disjoint caterpillar bands. The band crossings of a caterpillar knot are the places where the knot passes through the interior of one of the caterpillar bands. We see that the total number of band crossings is equal to the sum of the ranks of the caterpillar bands. We define a labeled caterpillar knot to be a caterpillar knot where the band crossings are labeled 1 through n, and each caterpillar band has a specified sign in {+1,-1}. The total number of band crossings, n, is called the rank of the labeled caterpillar knot.
Finally, we define K^Δ_n to be the set of rank n labeled caterpillar knots up to isotopy, such that every caterpillar band has rank at least 1. The corners of these n-cells will then correspond to the knots obtained by applying clasp surgery to the bands based on their sign, and then pushing the strands that go through the band crossings off of the clasp in all 2^n possible ways. This gives us an n-cube of knots where the edges correspond to delta moves. To define this complex through our categorical language, let a: C_m→ C_n be a cube map with corresponding ordered pair (f,s). We define K^Δ(a^op) to be the transformation on labeled caterpillar knots which is specified as follows.
1) Band crossings with label i∈ [n]∖(f) are pushed off of their corresponding bands in the direction of their band's normal vector if s(i) = 0, and in the direction opposite to their band's normal vector if s(i) = 1. See below. (We use the standard convention that the normal vector of a surface points towards us when the surface's orientation is counterclockwise.)
2) Band crossings with label i∈(f) have their label replaced with f^-1(i).
3) If after (1) and (2), there are any caterpillar bands of rank zero, replace those bands with clasps of the corresponding sign, as depicted below.
This describes our cubical complex K^Δ.
The map k from looms to knots can be extended to a natural transformation
k: LM^→ K^Δ
It is clear how k acts on the 0-cells, but we need to specify how k maps LM^_n→ K^Δ_n. To do this, let (g,ϕ) be a pair representing an element of LM^_n. We have g: ℓ_1→ℓ_2 so that S_g is a set of n thread symbols, and we have that ϕ: S_g→ [n] is a bijection. Let B be the set of bar symbols of ℓ_2 such that some thread symbol of S_g has that bar symbol as a target. Then, we construct a labeled caterpillar knot k(g,ϕ) by replacing the clasps corresponding to the bar symbols in B with bands, and making the loops corresponding to the thread symbols in S_g go through those bands as band crossings.
To verify the naturality of this map, let a: C_m→ C_n be a cube map with corresponding ordered pair (f,s). We want to show that kLM^_n(a^op) = K_n^Δ(a^op). Using the definition of the cubical complex of a diagram category, we have that LM^_n(a^op)(g,ϕ) is equal to the isomorphism class of the following ordered pair.
(ι_ ( (t(g)) ∪ (sϕ)^-1{1},t(ℓ_2) ∖ (sϕ)^-1{0}), f^-1ϕ|_ϕ^-1((f)) t(ι_t(ℓ_2) ∖ (sϕ)^-1{0}))
Applying k to this, we get a labeled caterpillar knot which differs from k(g,ϕ) in the following ways:
1) The loops corresponding to thread symbols in S_g with label i such that s(i) = 0 are no longer present.
2) The loops corresponding to thread symbols in S_g with label i such that s(i) = 1 no longer represent band crossings, and instead go around the corresponding band or clasp as they normally would.
3) The bands corresponding to bar symbols for which no thread symbol in ϕ^-1(im(f)) has that bar symbol as a target are now clasps instead of bands.
4) Any label in the image of f has been replaced by its inverse image under f.
We can now see that the result is exactly the labeled caterpillar knot K_n^Δ(a^op)(k(g,ϕ)). Thus, k is a natural transformation.
(LM,k) is a diagram system for K^Δ.
To prove this theorem, all we need to do is prove that every knot can be represented by a loom. The proof of this is quite complex, however, so it has been left to the end of this section. We will actually prove the following stronger theorem.
If a knot has unknotting number n, then it can be represented by a loom with exactly n bar symbols.
For now, though, we will focus on using this fact to prove the following theorem.
For all n, the group U_n(K^Δ) is finitely generated.
Let L_m be the full subcategory of LM consisting of only those looms which are stabilization equivalent to a loom with at most m bar symbols. This is a diagram category as well. From Theorem 2, we can deduce that the sequence of abelian group homomorphisms
U_n(L_1^)→ U_n(L_2^) → U_n(L_3^) → ...
is a sequence of injective maps that form a filtration for U_n(LM^). Furthermore, there are only finitely many stabilization equivalence classes of looms in L_m with at most n thread symbols. Therefore, Theorem 1 tells us that U_n(L_m) is finitely generated for all n and m. Thus, we have constructed a filtration for U_n(LM^) consisting of finitely generated free abelian groups. Let A_n,m denote the image of U_n(L_m^) in U_n(K^Δ) under k. This produces a sequence of finitely generated abelian groups A_n,1⊆ A_n,2⊆ A_n,3⊆... U_n(K^Δ) such that ⋃_m = 1^∞ A_n,m = U_n(K^Δ). Therefore, given any knot x∈ K^Δ_0, we must have that x is in A_n,m for some m. In fact, we claim that x is in A_n,u(x) where u(x) is the unknotting number of x. To see why this is true, we can apply Theorem 4. Any knot x can be represented by a loom with u(x) bar symbols, and therefore must be in A_n,u(x). Furthermore, Theorem 1 tells us that U_n(K^Δ) is generated by knots that can be represented by looms with at most n thread symbols. Therefore, to prove that A_n,n = U_n(K^Δ), it suffices to prove that every knot represented by a loom with at most n thread symbols has unknotting number at most n. This is easy to prove, because in a loom with at most n thread symbols, there are at most n bar symbols which are the target of some thread symbol. By undoing the clasps corresponding to those bar symbols, it completely undoes the knot, giving the unknot in at most n crossing changes. Thus, we have shown that A_n,n = U_n(K^Δ), so U_n(K^Δ) is finitely generated. More precisely we have shown that U_n(K^Δ) is generated by knots that can be represented by a loom with at most n thread symbols and at most n bar symbols, of which there are finitely many. This gives a concrete, but very large, upper bound on the dimension of U_n(K^Δ).
Explicitly computing the abelian groups U_n(K^Δ) is an active area of research being pursued by the author. There is no obvious system of local equivalence moves for looms, so we have to study nonlocal operations. It is difficult to find a method to calculate these abelian groups which is computationally plausible, especially since the number of looms with n thread symbols and n bar symbols grows extremely quickly in n. For instance, when n=2, the number of such looms is already 1728. However, computing U_n(K^Δ) for small n does not seem to be an intractable problem. It is a promising avenue for future research, especially considering the following observations.
In our proof that U_n(K^Δ) is finitely generated, we defined a filtration A_n,1⊆ A_n,2⊆ ... and we proved that A_n,n = U_n(K^Δ). It should be noted that if this filtration is nontrivial in the sense that A_n,1≠ U_n(K^Δ), then U_n(K^Δ) is not generated by knots of unknotting number 1. Given the structure of looms, it seems highly likely that the filtration will indeed be nontrivial. This would mean that delta move finite type invariants would give us lower bounds on unknotting number. We make the following conjecture.
For some n, we have that A_n,1≠ U_n(K^Δ). Therefore, there exists a nontrivial delta move finite type invariant that vanishes on knots of unknotting number 1. Thus, delta move finite type invariants yield nontrivial lower bounds for unknotting number.
The remainder of this section is devoted to proving Theorems 3 and 4. To give a rough sketch of the argument, we define something called a W-twisted loom, which is a system for representing knots that depends on a choice of comb diagram, a kind of knot diagram like structure which is similar to a Morse link presentation. We prove that the set of knots representable by W-twisted looms does not depend on the choice of comb diagram W. Then, we prove that every knot can be represented by a W-twisted loom for some choice of W. Therefore, we know that every knot can be represented by a loom, because we can reproduce the theory of ordinary looms from the theory of E-twisted looms for the trivial comb diagram E.
A comb diagram of rank n consists of a sequence of symbols from the alphabet
{⊂_i, ⊃_i, x_i, x_i^-1, |_i }_i∈_>0
where we think of the symbols ⊂_i as creation operators, creating strands in positions i and i+1, and we think of the symbols ⊃_i as annihilation operators, cobording away the strands in positions i and i+1. We think of x_i as a crossing where the strand in position i crosses up over the strand in position i+1 as we go from left to right, and x_i^-1 as the opposite crossing. The symbols |_i should be thought of as representing a band attached to the i-th strand, going under all the strands above it and up to infinity. We have various requirements on what constitutes a valid sequence for a comb diagram. They are as follows.
1) The sequence must be valid as a Morse link presentation, where we start with a single strand in position 1, and we end with a single strand in position 1. Furthermore, the resulting 1-manifold must have a single connected component. We orient it from its left endpoint to its right endpoint.
2) There must be exactly n bar symbols, and they must appear on strands that are oriented from left to right, rather than those that are oriented backwards.
3) There must be a sequence of substring modifications from the following list that reduces our word to the one consisting of |_1 repeated n times. Here, ∅ denotes the empty sequence, and 2_i≥ j denotes 2 if i≥ j and 0 otherwise.
x_i⊃_i ↔⊃_i, ⊂_ix_i ↔⊂_i, ⊂_i⊃_i+1↔∅, ⊂_i+1⊃_i ↔∅
x_ix_i+1x_i ↔ x_i+1x_ix_i+1, x_ix_i^-1↔∅, x_i^-1x_i ↔∅, |_i+1x_i ↔ x_i|_i, |_ix_i^-1↔ x_i^-1|_i+1
x_i⊃_i+1↔ x_i+1^-1⊃_i, x_i^-1⊃_i+1↔ x_i+1⊃_i, ⊂_i+1x_i ↔⊂_ix_i+1^-1, ⊂_i+1x_i^-1↔⊂_ix_i+1
x_ix_j ↔ x_jx_i if j∉{i-1,i,i+1}, x_i|_j ↔ |_jx_i if j∉{i,i+1}
x_i ⊂_j ↔⊂_j x_i+ 2_i≥ j if j≠ i+1, ⊃_j x_i ↔ x_i + 2_i≥ j⊃_j if j≠ i+1
|_i⊂_j ↔⊂_j |_i + 2_i≥ j, ⊃_j|_i ↔ |_i + 2_i≥ j⊃_j, ⊂_i+2⊃_i↔⊃_i⊂_i ↔⊂_i⊃_i+2
⊃_i⊂_j ↔⊂_j+ 2_j≥ i⊃_i + 2_i≥ j if i≠ j
Note that |_i+1x_i^-1 = x_i^-1|_i and |_ix_i = x_i|_i+1 are NOT valid moves, as this would require a strand to cross through the band. Furthermore, commutations like |_i|_j = |_j|_i are not valid moves. The bars are essentially fixed in place.
We write E_n to denote the comb diagram consisting of |_1|_1...|_1 with n bar symbols. We call this the rank n trivial comb diagram. The rank of a comb diagram is the number of bar symbols that appear in it.
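As a quick sanity check on condition 1, the following sketch (our own illustration, reflecting our reading of the index conventions) tracks only the number of strands as the word is read from left to right: it verifies that every symbol acts on strands that actually exist and that we end with a single strand. It does not check the connectedness requirement of condition 1, the orientation requirement of condition 2, or condition 3.

# A comb diagram word is encoded as a list of (kind, i) pairs with kind in
# {'cup', 'cap', 'x', 'xinv', 'bar'} for the symbols ⊂_i, ⊃_i, x_i, x_i^-1, |_i.
def strand_count_ok(word):
    strands = 1                          # we start with a single strand in position 1
    for kind, i in word:
        if kind == 'cup':                # ⊂_i creates strands in positions i and i+1
            if not 1 <= i <= strands + 1:
                return False
            strands += 2
        elif kind == 'cap':              # ⊃_i cobords away strands i and i+1
            if not 1 <= i <= strands - 1:
                return False
            strands -= 2
        elif kind in ('x', 'xinv'):      # x_i, x_i^-1 cross strands i and i+1
            if not 1 <= i <= strands - 1:
                return False
        elif kind == 'bar':              # |_i is a band on strand i
            if not 1 <= i <= strands:
                return False
        else:
            return False                 # unknown symbol
    return strands == 1                  # and we must end with a single strand

# The trivial comb diagram E_3 = |_1 |_1 |_1 passes the check.
print(strand_count_ok([('bar', 1), ('bar', 1), ('bar', 1)]))   # True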
Let W be a rank n comb diagram. We define a W-twisted loom to be a sequence of symbols from the alphabet
{⊂_i, ⊃_i, x_i, x_i^-1, |_i, +_i,t^h, -_i,t^h, 0_i,t^h}_i,t,h∈ℤ_>0
such that the following properties hold.
1) If we delete all symbols from {+_i,t^h, -_i,t^h, 0_i,t^h}_i,t,h∈ℤ_>0, then we are left with W.
2) Every symbol of the form σ_i,t^h with σ∈{+,-,0} has 1 ≤ t ≤ n.
3) If there are m symbols of the form σ_i,t^h with σ∈{+,-,0}, then each superscript from 1 to m appears exactly once.
The symbols from {+_i,t^h, -_i,t^h, 0_i,t^h}_i,t,h∈ℤ_>0 are called thread symbols, as with looms. The additional subscript i is the strand of the comb diagram on which the thread symbol sits, while t is the target and h is the height, as with ordinary looms. It should be noted that, unlike ordinary looms, we do not have a sign associated to the clasps.
Rather than representing knots, twisted looms represent banded unknots. To obtain a banded unknot from a W-twisted loom, we add clasps to the comb diagram in accordance with the thread symbols, just like with ordinary looms. These clasps always go over any strands from the comb diagram, and they link with the bands far away from the comb diagram, so as to not get tangled with it.
For a W-twisted loom ℓ, we write β(ℓ) to denote the banded unknot represented by ℓ.
Let σ and τ be symbols in {+,-,0}, and suppose that we have a W-twisted loom of the form ...σ_i,t^hτ_j,s^w..., then if i< j and w < h, or j < i and h < w, we have
β(...σ_i,t^hτ_j,s^w...) = β(...τ_j,s^wσ_i,t^h...)
and if i< j and h < w, we have
β(...σ_i,t^hτ_j,s^w...) = β(...τ_j,s^w+2 0_i,s^w+3-_i,s^w+1σ_i,t^h 0_i,s^w+_i,s^w+4...)^[w+1,∞]+4
where the superscript [w+1,∞]+4 means that each thread symbol in the ellipsis with height in the interval [w+1,∞] has its height incremented by 4, to prevent heights from coinciding.
Lastly, if j>i and w>h we have
β(...σ_i,t^hτ_j,s^w...) = β(... -_j,t^h+40_j,t^hτ_j,s^w +_j,t^h+10_j,t^h+3σ_i,t^h+2 ...)^[h+1,∞] + 4
Thus, if we have a sequence of thread symbols on the i-th strand followed by a sequence of thread symbols on the j-th strand, we can transform this into a sequence of thread symbols on the j-th strand followed by a sequence of thread symbols on the i-th strand while keeping the represented banded unknot constant.
We can simply inspect the knots corresponding to the designated sequences and check that they are isotopic. In the first case, when i< j and w < h, or j < i and h < w, there is no conflict between the loops from the thread symbols, so they commute. In the other cases, the thread symbol on the lower strand has the smaller height, so if we were to simply commute the thread symbols, the loops would pass through each other. The additional terms compensate for this. See the isotopy depicted in figure <ref>.
For the case when j>i and w>h, we have the mirror image of the isotopy depicted in the figure.
If ℓ_1 and ℓ_2 are W-twisted looms that differ by one of the following substring modifications, then β(ℓ_1) = β(ℓ_2). Assume σ represents some symbol in {+,-,0}.
|_iσ_j,t^h ↔σ_j,t^h|_i if i≠ j, x_iσ_j,t^h ↔σ_j,t^hx_i if j∉{i,i+1}
⊃_i σ_j,t^h ↔σ_j+2_j≥ i,t^h⊃_i, ⊂_iσ_j+2_j≥ i,t^h ↔σ_j,t^h⊂_i
A thread symbol on the j-th strand corresponds to a loop that goes over all the strands with position >j and which does not cross any of the strands with position <j. Thus, the loop will not interact with any of the features of the comb diagram in the other strands. This implies the above commutation relations.
If ℓ_1 and ℓ_2 are W-twisted looms that differ by one of the following substring modifications, then β(ℓ_1) = β(ℓ_2).
-_i,t^h⊃_i ↔ 0_i+1,t^h ⊃_i, 0_i,t^h⊃_i ↔ +_i+1,t^h ⊃_i, ⊂_i +_i,t^h ↔⊂_i0_i+1,t^h , ⊂_i 0_i,t^h ↔⊂_i-_i+1,t^h
Furthermore, we have
β(...+_i,t^h⊃_i...) = β(... +_i+1,t^h0_i+1,t^h+20_i+1,t^h+1⊃_i ...)^[h+1,∞]+2
β(... ⊂_i -_i,t^h ...) = β(... ⊂_i 0_i+1,t^h+10_i+1,t^h+2-_i+1,t^h ...)^[h+1,∞]+2
β(...-_i+1,t^h⊃_i...) = β(... 0_i,t^h+10_i,t^h+2-_i,t^h⊃_i ...)^[h+1,∞]+2
β(... ⊂_i +_i+1,t^h ...) = β(... ⊂_i +_i,t^h0_i,t^h+20_i,t^h+1 ...)^[h+1,∞]+2
where, as before, the superscript [h+1,∞]+2 denotes incrementing heights in the designated interval by 2 to prevent any two symbols from having the same height.
Thus, if we have a sequence of thread symbols on the i-th strand before a ⊃_i or after a ⊂_i, we can transform it into a sequence on the (i+1)-th strand, and vice-versa.
It is easy to see that sliding a loop around a bend introduces a half-twist in the indicated direction. Thus, the first part of the lemma is obvious. We then just have to show that a loop with two positive half-twists can be represented by +_i,t^h0_i,t^h+20_i,t^h+1, and a loop with two negative half-twists can be represented by 0_i,t^h+10_i,t^h+2-_i,t^h. These are mirror images of each other, so just demonstrating one of these isotopies will suffice. The desired isotopy is depicted in the figure below.
If ℓ_1 and ℓ_2 are W-twisted looms that differ by one of the following substring modifications, then β(ℓ_1) = β(ℓ_2). Assume σ represents some symbol in {+,-,0}.
σ_i,t^hx_i ↔ x_iσ_i+1,t^h, σ_i+1,t^hx_i^-1↔ x_i^-1σ_i,t^h
Furthermore, we have
β(...σ_i+1,t^hx_i...) = β(...x_iσ_i,t^h+10_i+1,t^h +_i+1,t^h+2...)^[h+1,∞]+2
β(...x_iσ_i,t^h...) = β(...σ_i+1,t^h+10_i,t^h+2-_i,t^hx_i...)^[h+1,∞]+2
β(...x_i^-1σ_i+1,t^h...) = β(... -_i+1,t^h+20_i+1,t^hσ_i,t^h+1x_i^-1 ...)^[h+1,∞]+2
β(...σ_i,t^hx_i^-1...) = β(... x_i^-1+_i,t^h0_i,t^h+2σ_i+1,t^h+1 ...)^[h+1,∞]+2
Thus, if we have a sequence of thread symbols before an x_i symbol, we can transform it into a sequence of thread symbols after that x_i symbol, and vice-versa.
It is easy to see that σ_i,t^hx_i ↔ x_iσ_i+1,t^h and σ_i+1,t^hx_i^-1↔ x_i^-1σ_i,t^h do not change the banded unknot, since the thread symbol is just sliding over the upper strand of the crossing so it will not get tangled with the lower strand. For the remaining four equivalence moves of the lemma, it suffices to prove the first two because the last two are their mirror images. We demonstrate these moves through the following isotopies.
We can now prove the following theorems.
The set of banded unknots representable by W-twisted looms is equal to the set of banded unknots representable by E_n-twisted looms, where n is the rank of W.
By definition, any rank n comb diagram has a sequence of equivalence moves taking it to E_n. Therefore, it suffices to prove that if W and W' are comb diagrams that differ by one of the equivalence moves, and if a banded unknot B is representable by a W-twisted loom ℓ, then B is also representable by a W'-twisted loom ℓ'.
We claim that there will always be a sequence of transformations of W-twisted looms, from the previous four lemmas, moving all the thread symbols of ℓ out from in between the symbols of the equivalence move that we want to apply. This would allow us to apply the equivalence move to obtain the desired W'-twisted loom. With the large number of different equivalence moves, this may seem like a daunting task, but it is actually quite simple.
First, we define an arc of our comb diagram to be a maximal interval in the corresponding 1-manifold where all the crossings are over-crossings. Intuitively, the arcs are the connected lines in the drawing of the knot diagram, when we draw under-crossings as gaps. From the previous four lemmas, it is easy to see that if we choose a point in each arc of W, we can apply transformations to ℓ to localize the thread symbols around those points, without changing the corresponding banded unknot. Therefore, the only equivalence moves of comb diagrams that we need to concern ourselves with are x_ix_i+1x_i ↔ x_i+1x_ix_i+1 and x_ix_i^-1↔∅↔ x_i^-1x_i, because these are the only two moves that trap an entire arc of the comb diagram between their symbols. However, from Lemma 6, we see that we can simply move all thread symbols out to the right of these moves by commuting them with the x_i symbols. Thus, we can apply any equivalence move of comb diagrams.
Let B be a banded unknot with oriented bands that are combinatorially parallel. There exists a comb diagram W and a W-twisted loom ℓ such that B = β(ℓ).
We claim that B has a diagram as in figure <ref>, such that if we take all the strands that go underneath the bands and push them up through the bands, then we get an isotopically trivial banded unknot. If this is true, then we can find a twisted loom representing B by switching the crossings under the bands to over-crossings, taking the comb diagram for that, and then adding 0_i,t^h thread symbols where the under crossings were, thereby obtaining a twisted loom that represents B.
Now, we just need to prove that such a diagram exists for B. To do this, first take an interval on the unknot containing exactly one end of each band of B, and let X denote the union of the interval with the bands. We have that X is contractible, so for a small open neighborhood N around X, we can isotope everything outside N to be far away. We can also make it so that the strands we would need to pass through the bands to trivialize B are close enough to those bands to be inside N, and without loss of generality, we may assume these strands go along the underside of the bands. Thus, when we push everything outside N far away, the only strands that remain close to the bands are those we wish to pass through them to trivialize B. This gives us a diagram in the desired form.
If a knot has unknotting number n, then it can be obtained from some clasp surgery on a parallel-banded unknot with n bands.
Let γ: S^1×[0,1]→ℝ^3 be a generic homotopy from our knot to the unknot that has n self-intersections at times t_1,t_2,...,t_n. Furthermore, assume this homotopy fixes a basepoint of S^1. Then, at each time t_i we can attach a band that stays on the knot for the rest of the homotopy, encoding the clasp that we would need to add to undo the crossing change. We are free to move the endpoints of this band along the knot however we like, as long as they do not coincide with the endpoints of the bands we have already added. If, whenever a band appears, we immediately move its endpoints to be close to the basepoint, then the final result will be n bands on the unknot that are combinatorially parallel, and if we do clasp surgery on these bands, we obtain the original knot.
We can finally prove Theorems 3 and 4.
Given a knot K of unknotting number n, Lemma 7 lets us represent it as a clasp surgery on a parallel banded unknot, Theorem 7 lets us represent that parallel banded unknot by a W-twisted loom, and Theorem 6 then lets us represent it by an E_n-twisted loom. Finally, we see that clasp surgery on a E_n-twisted loom gives us an ordinary loom with n bar symbols, so we have proven that K is representable by a loom with n bar symbols.
We have a natural transformation k:LM^→ K^Δ, and Theorem 4 gives us that k_0: LM^_0→ K^Δ_0 is surjective, so (LM,k) is a diagram system for K^Δ.
§ VIRTUAL TRANSVERSE KNOTS AND BRAIDED GAUSS DIAGRAMS
The n-strand virtual braid group VB_n is the group generated by “crossings” σ_1,...,σ_n-1 and “virtual crossings” v_1,...,v_n-1 subject to the following relations.
1) v_i^2 = e.
2) s_it_i+1s_i = s_i+1t_is_i+1 when we replace (s,t) by (σ, σ), (v, σ), or (v, v), but not by (σ, v).
3) s_it_j = t_js_i where s and t can each be either σ or v, and |i-j|>1.
We write W_n to denote words of symbols from the set {σ_1,...,σ_n-1,σ_1^-1,...,σ_n-1^-1,v_1,...,v_n-1} and W_n^+ to denote words of symbols from the set {σ_1,...,σ_n-1,v_1,...,v_n-1}. We also write W_n^1 and W_n^1+ for the subsets of W_n and W_n^+, respectively, consisting of words whose braid closure has a single component.
A virtual transverse knot is defined to be an equivalence class of 1-component virtual braid closures modulo positive stabilization. Two braids β∈VB_n and β'∈VB_n+1 are related by positive stabilization if, whenever w∈ W_n^1 is a word representing β, the word σ_nw∈ W_n+1^1 represents β'. Thus, virtual transverse knots are just virtual braids modulo conjugation and positive stabilization.
Transverse knots can be considered ordinary braids modulo conjugation and positive stabilization, so every transverse knot gives us a virtual transverse knot. We make the following conjecture: no two distinct transverse knots become the same virtual transverse knot. This appears to be a difficult problem. The standard proof that the map from knots to virtual knots is injective relies on the uniqueness of the fundamental quandle of a knot, which is a quite nontrivial fact of 3-manifold topology. There is no hope of adapting this proof to the transverse case, so something new is needed. The set of virtual transverse knots will be written VTK, and the set of transverse knots will be written TK.
There is also a relevant class of knots between virtual transverse knots and virtual knots, which we call braided virtual knots. These are simply virtual braids modulo positive and negative stabilization. To get ordinary virtual knots, we would also need to mod out by virtual stabilizations, namely stabilizations where the added crossing is virtual.
Let S^1 denote the unit complex numbers.
A braided Gauss diagram is defined to be a triple (n,C,s) where n is a positive integer, C is a finite set of disjoint ordered pairs of elements of S^1 such that for any pair (x,y)∈ C we have x^n=y^n, and s: C→{+1, -1} is a function assigning a sign to each element of C. The elements of C are called chords, and the number n is called the braid index. s is called the sign function. A braided chord diagram is a braided Gauss diagram for which each chord has a positive sign. Chords are thought of as arrows going from the first term in the ordered pair to the second term. The boundary of a braided Gauss diagram is defined to be the set of points in S^1 which are not the endpoint of any chord. Sub-diagrams of a braided Gauss diagram are obtained by restricting to subsets of C.
Let X = (n,C,s) and X' = (n,C',s') be braided Gauss diagrams. We say X and X' are equivalent if there exists a homotopy h: S^1×[0,1]→ S^1 so that
1) h(x,0) = x for all x∈ S^1, and the map x↦ h(x,t) is a homeomorphism for all t∈ [0,1].
2) For all t∈ [0,1] and all (x,y)∈ C, we have (h(x,t))^n = (h(y,t))^n.
3) C' = {(h(x,1),h(y,1)): (x,y)∈ C}, and s'(h(x,1),h(y,1)) = s(x,y) for all (x,y)∈ C.
Let BG denote the set of braided Gauss diagrams up to equivalence, and let BG^+ denote the set of braided chord diagrams up to equivalence. For both notations, a subscript of n restricts the set to diagrams of braid index n. Both of these sets can be given the structure of a diagram category where arrows are subdiagram inclusions and rotational symmetries, and F_BG takes a diagram to its set of chords.
When we speak about braided Gauss diagrams, we will generally mean equivalence classes of braided Gauss diagrams. When we refer to braided Gauss diagrams with specified points on their boundaries, we consider them up to equivalences where the homotopy preserves the specified points in the same way it preserves the endpoints of the chords.
It is sometimes convenient to think of braided Gauss diagrams as Gauss diagrams with a metric on their boundary such that, for each chord, the paths between the two endpoints have integer length. The metric we choose on S^1 for this to work is the uniform metric for which the total length of the circle is the braid index.
There is a function W_n^1→ BG_n and a function W_n^1+→ BG_n^+ for all n, where the 1-manifold for the virtual braid closure is mapped to the circle, and for each crossing, there is a chord from the over-crossing to the under-crossing with sign equal to the sign of the crossing. These maps will be denoted w↦ [w]. The map W_n^1→ VTK factors through the map W_n^1→ BG_n, so we may talk about braided Gauss diagrams as representing virtual transverse knots. In particular, virtual transverse knots are equivalent to braided Gauss diagrams modulo what we will call braided Reidemeister moves. The braided Reidemeister moves can be listed as follows.
1) If w∈ W_n and i is any index such that σ_iσ_i+1σ_i w∈ W_n^1, then [σ_iσ_i+1σ_i w]↔ [σ_i+1σ_iσ_i+1 w] is a type 1 braided Reidemeister move.
2) If w∈ W_n^1 and i is an index, then [w]↔ [σ_iσ_i^-1w] and [w]↔ [σ_i^-1σ_iw] are type 2 braided Reidemeister moves.
3) If w∈ W_n^1, then σ_nw∈ W_n+1^1, and [w]↔ [σ_nw] is a type 3 braided Reidemeister move.
Using these equivalence moves, we can find relations for a finite type theory of virtual transverse knots. We have a map π: BG^_0→ VTK, so we are in a situation like we discussed at the end of section 2.1. We define U_n(VTK) to be the abelian group generated by braided Gauss diagrams of at most n chords, modulo the following relations, where a term is considered zero if it has more than n chords.
1) For any w ∈ W_n, and any index i, such that σ_iσ_i+1σ_i w∈ W_n^1, we have the relation
[σ_iσ_i+1v_iw] + [v_iσ_i+1σ_iw] + [σ_iv_i+1σ_iw] + [σ_iσ_i+1σ_iw]
- [σ_i+1σ_iv_i+1w] - [v_i+1σ_iσ_i+1w] - [σ_i+1v_iσ_i+1w] - [σ_i+1σ_iσ_i+1w] = 0
2)For any w∈ W_n^1, and any index i, we have the relations
[wσ_iσ_i^-1] + [wσ_iv_i] + [wv_iσ_i^-1] = 0
and
[wσ_i^-1σ_i] + [wσ_i^-1v_i] + [wv_iσ_i] = 0
3) For any w∈ W_n^1, we have the relation
[σ_nw] + [v_nw] - [w] = 0
where σ_nw and v_nw are regarded as elements of W_n+1^1.
A braided Gauss diagram x is then represented as an element of U_n(VTK) by taking the sum of its subdiagrams of at most n chords, s(x). We can easily see that the relations we have defined are just s(x)-s(y) when x and y differ by one of the braided Reidemeister moves.
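To illustrate, note that at the level of braid words a subdiagram of [w] keeps some subset of the real crossings and turns the rest into virtual crossings. The following sketch is our own illustration (the string encoding and the use of a Counter for formal sums are choices made purely for exposition, and no attempt is made to identify equivalent braided Gauss diagrams): it forms this word-level subdiagram sum, discarding terms with more than n chords.

from collections import Counter
from itertools import combinations

# A braid word is a tuple of strings such as ('s1', 's1inv', 'v2'), where
# 's<i>' and 's<i>inv' stand for the crossings sigma_i^{+-1} and 'v<i>' for v_i.
def virtualize(symbol):
    # Replace a real crossing sigma_i^{+-1} by the virtual crossing v_i.
    if symbol.startswith('s'):
        index = symbol[1:-3] if symbol.endswith('inv') else symbol[1:]
        return 'v' + index
    return symbol

def subdiagram_sum(word, n):
    """Formal sum of the words obtained from `word` by keeping at most n of
    its real crossings and virtualizing the rest."""
    real_positions = [k for k, sym in enumerate(word) if sym.startswith('s')]
    total = Counter()
    for size in range(min(n, len(real_positions)) + 1):
        for kept in combinations(real_positions, size):
            total[tuple(sym if k in kept else virtualize(sym)
                        for k, sym in enumerate(word))] += 1
    return total

print(subdiagram_sum(('s1', 's1inv'), 2))
# each of the four words v1 v1, s1 v1, v1 s1inv, s1 s1inv appears once

The relations listed above are exactly differences of such sums for words related by a braided Reidemeister move, with the common terms cancelling.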
If we wish to actually compute the abelian groups U_n(VTK), we run into an immediate problem. There are infinitely many braided Gauss diagrams with a given number of chords, as the braid index may be arbitrarily large. We will solve this problem by giving a presentation for U_n(VTK), different from the one above, which is finite.
Let CD be the set of ordinary chord diagrams with directed edges. We define a map CD→ BG by taking a chord diagram with n chords to the corresponding braided Gauss diagram of braid index 2n for which the distance between any two adjacent chord endpoints is exactly one, and all the chords have positive sign. We call braided Gauss diagrams unitary when they can be obtained in this way. Equivalently, we can consider the unitary braided Gauss diagrams to be those that can be represented in such a way that there is exactly one 2n-th root of unity between each adjacent pair of chord endpoints, where 2n is the braid index and n is the number of chords.
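Concretely, in an encoding of our own choosing, a chord diagram with n chords given as (tail, head) pairs of endpoint indices 0, ..., 2n-1 in circular order can be placed as a unitary braided Gauss diagram by putting the k-th endpoint at angle 2π(k + 1/2)/(2n); consecutive endpoints are then separated by exactly one (2n)-th root of unity, and every chord (x,y) satisfies x^2n = y^2n = -1, as the definition of a braided Gauss diagram requires.

import cmath, math

def unitary_placement(chords):
    """Endpoints on the unit circle of the unitary braided Gauss diagram
    (braid index 2n, all chords positive) attached to a chord diagram whose
    2n endpoints are labelled 0, ..., 2n-1 in circular order."""
    m = 2 * len(chords)                                  # braid index
    def point(k):
        return cmath.exp(2j * math.pi * (k + 0.5) / m)
    return [(point(tail), point(head)) for tail, head in chords]

# A single chord: its endpoints sit at angles pi/2 and 3*pi/2.
for x, y in unitary_placement([(0, 1)]):
    print(x, y, x**2, y**2)    # x**2 == y**2 == -1, up to rounding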
Let u: Ab(CD_≤ n)→ U_n(VTK) be the homomorphism from the free abelian group on chord diagrams with at most n chords to the finite type group of virtual transverse knots, given by mapping a chord diagram to its corresponding unitary braided Gauss diagram.
The map u is surjective. Thus, U_n(VTK) is finitely generated.
Later in the section, we will prove this theorem, and give an explicit finite set of generators for ker(u). For now, though, we will discuss some numerical results. We wrote a computer program which uses the presentation with unitary braided Gauss diagrams to compute the vector spaces U_n(VTK)⊗(ℤ/p) for small n and any prime p.
p | dim((ℤ/p)⊗ U_2(VTK)) | dim((ℤ/p)⊗ U_3(VTK)) | dim((ℤ/p)⊗ U_4(VTK)) | dim((ℤ/p)⊗ U_5(VTK))
2 | 3 | 9 | 31 | 117
3 | 3 | 8 | 27 | 106
5 | 3 | 8 | 27 | 104
7 | 3 | 8 | 27 | 104
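The paper does not describe the program itself, but the linear-algebra core of such a computation is presumably of the following form (a sketch under our own assumptions): once the generators are enumerated and each relation is expanded as an integer vector in the free module on those generators, the dimension of U_n(VTK)⊗(ℤ/p) is the number of generators minus the rank of the relation matrix over ℤ/p.

def rank_mod_p(rows, p):
    """Rank over the field Z/p of the matrix with the given integer rows."""
    rows = [[x % p for x in row] for row in rows]
    rows = [row for row in rows if any(row)]
    if not rows:
        return 0
    ncols = max(len(row) for row in rows)
    rows = [row + [0] * (ncols - len(row)) for row in rows]
    rank = 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, p)        # modular inverse (p prime)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def quotient_dim(num_generators, relation_rows, p):
    """dim over Z/p of (free module on the generators) / (span of the relations)."""
    return num_generators - rank_mod_p(relation_rows, p)

# Toy example: three generators and the single relation g1 - g2 = 0 over Z/5.
print(quotient_dim(3, [[1, -1, 0]], 5))   # 2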
The simplest example of two distinct virtual transverse knots which represent the same virtual knot is given by the unknots [σ_1v_2] and [v_1σ_2]. These are distinguished in (ℤ/2)⊗ U_2(VTK).
We were not able to find a pair of virtual transverse knots of the same braided virtual knot type and self linking number that are distinguished by these invariants. Therefore, we make the following conjecture.
If x and y are virtual transverse knots with the same braided virtual knot type and self-linking number, then they represent the same element of U_n(VTK) for all n.
This conjecture makes virtual transverse knots an interesting case study for the purposes of finite type theory, as they give us an example of a kind of structure that finite type invariants seem incapable of seeing. There is a long-standing and very difficult question of whether the universal finite type invariant of knots is a complete invariant. In order to approach this problem, we need to further develop an understanding about exactly what type of things finite type theories can see, and what type of things they cannot see. We posit that studying the finite type theory of virtual transverse knots may be helpful in understanding such questions.
The extent to which we understand Conjecture <ref> is the following easy fact.
If x and y are transverse knots with the same knot type and self-linking number, then they represent the same element of U_n(VTK) for all n.
x and y will become the same transverse knot if we negatively stabilize them enough times. However, it is possible to use the finite type relations to rewrite an element of U_n(VTK) in terms of a linear combination of various negative stabilizations. Repeatedly doing this will allow us to rewrite both x and y as a linear combination of transverse knots that we know to be the same.
However, we also believe the following conjecture.
Negative stabilization is not a unique operation on virtual transverse knots. That is to say, there are two ways to negatively stabilize some virtual transverse knot such that the two results are no longer the same virtual transverse knot.
Thus, we expect the argument in the proof of Proposition <ref> to not apply to virtual transverse knots in general. This makes the apparent triviality of the finite type theory somewhat mysterious.
We will now work towards proving the surjectivity of u and showing how to find a finite generating set for its kernel. This will comprise the remainder of this section.
Let U_n^+(VTK) be the quotient of Ab(BG^+_≤ n), the free abelian group on braided chord diagrams with at most n chords, by the following relations.
1) For any w ∈ W_n^+, and any index i, such that σ_iσ_i+1σ_i w∈ W_n^1+, we have the relation
[σ_iσ_i+1v_iw] + [v_iσ_i+1σ_iw] + [σ_iv_i+1σ_iw] + [σ_iσ_i+1σ_iw]
- [σ_i+1σ_iv_i+1w] - [v_i+1σ_iσ_i+1w] - [σ_i+1v_iσ_i+1w] - [σ_i+1σ_iσ_i+1w] = 0
2) For any w∈ W_n^1+, we have the relation
[σ_nw] + [v_nw] - [w] = 0
where σ_nw and v_nw are regarded as elements of W_n+1^1+.
There is a homomorphism ϕ: U_n^+(VTK)→ U_n(VTK) given by the inclusion map from braided chord diagrams into braided Gauss diagrams. Every relation of U_n^+(VTK) is also a relation of U_n(VTK), so this map is a well-defined homomorphism.
The homomorphism ϕ is an isomorphism of abelian groups.
The inverse to ϕ, which we will call ψ, can be obtained by replacing each negative chord with an alternating sum of adjacent positive chords. To be more precise, we apply the transformation
[wσ_i^-1w']↦∑_k = 1^n(-1)^k[w(v_iσ_i)^kv_iw']
repeatedly until there are no more negative chords. The type 2 relations for U_n(VTK) map to zero under this transformation, and the other relations are preserved, so it is a well defined homomorphism. It is immediate that ψϕ is the identity, and ϕψ is the identity because [wσ_i^-1w'] and ∑_k = 1^n(-1)^k[w(v_iσ_i)^kv_iw'] are always equivalent modulo type 2 relations.
A numbered chord diagram is a chord diagram with directed chords, equipped with a choice of nonnegative integer for each section of the boundary of the diagram between chord endpoints. We define U_n(NCD) to be the group generated by numbered chord diagrams of at most n chords, subject to the three types of relations depicted in figures <ref>, <ref>, and <ref>.
We can construct a homomorphism w: U_n^+(VTK)→ U_n(NCD) by taking a braided chord diagram X, and then choosing a representative of its equivalence class where the m-th roots of unity in S^1 never coincide with chord endpoints, where m is the braid index. Then, the numbers for the corresponding numbered chord diagram are the number of roots of unity that lie in each of the sections of the boundary of the diagram. Although this numbered chord diagram will not be uniquely determined by the equivalence class of X, it will be determined up to type 3 relations in U_n(NCD). This is because whenever we move a chord endpoint past an m-th root of unity, we also move the other endpoint of that chord past an m-th root of unity, and this corresponds to applying a type 3 relation to the resulting numbered chord diagram. We then see that w: U_n^+(VTK)→ U_n(NCD) is a well-defined homomorphism because it maps type 1 relations in U_n^+(VTK) to type 1 relations in U_n(NCD), and similarly with type 2 relations.
Let Ab(NCD_≤ n) be the free abelian group on numbered chord diagrams of at most n chords, and let Ab(CD_≤ n) be the free abelian group on chord diagrams of at most n chords. We will define a map v: Ab(NCD_≤ n)→ Ab(CD_≤ n) by the following construction. Take a numbered chord diagram X, and label the segments of its boundary I_1,...,I_2k, where k is the number of chords. Then, we let a_1,...,a_2k be the numbers of the numbered chord diagram corresponding to each of those segments. Then, we define X(b_1,...,b_2k) to be the chord diagram obtained by adding in b_i small counterclockwise pointing isolated chords into the interval I_i for all i. If X(b_1,...,b_2k) has more than n chords, we set it to zero. Finally, we define the map v: Ab(NCD_≤ n)→ Ab(CD_≤ n) by the formula.
v(X) = ∑_b_1= 0^n∑_b_2= 0^n...∑_b_2k = 0^n (∏_i = 1^2kf(a_i,b_i)) X(b_1,...,b_2k)
where f is defined recursively by the following requirements.
1) We have f(0,b) = C_b, where C_b is the b-th Catalan number.
2) We have f(a,0) = 1 for all a.
3) If b>0, then f(1,b) = 0.
4) If a>1 and b>0, then f(a,b) = f(a-1,b) - f(a-2,b-1).
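The recursion above is easy to tabulate; the following short sketch (our own illustration) implements the four rules verbatim and, as a consistency check, f(0, b) returns the Catalan numbers.

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f(a, b):
    if a == 0:
        return comb(2 * b, b) // (b + 1)   # rule 1: the b-th Catalan number C_b
    if b == 0:
        return 1                           # rule 2
    if a == 1:
        return 0                           # rule 3
    return f(a - 1, b) - f(a - 2, b - 1)   # rule 4

print([f(0, b) for b in range(6)])   # [1, 1, 2, 5, 14, 42]
print([f(2, b) for b in range(6)])   # [1, -1, -1, -2, -5, -14]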
Next, we let R denote the subgroup of Ab(NCD_≤ n) generated by the relations of type 1, 2, and 3. Thus, Ab(NCD_≤ n)/R = U_n(NCD). We define U_n(CD) to be Ab(CD_≤ n)/v(R). Thus, we have an induced map v: U_n(NCD)→ U_n(CD).
Let X be a braided Gauss diagram of braid index m, and let I be an interval between chord endpoints of X such that no m-th roots of unity are in I. Then, let X_i,j denote the braided chord diagram of braid index m+i+j obtained from X by adding j units of distance to I and then stabilizing i times in I. We claim that
X_0,i = ∑_j = 0^n f(i,j)X_j,j+1
in U_n(VTK).
Note that X_j,j+1 is locally unitary in I, because we can position the isolated chords from the stabilizations with exactly one root of unity separating each one, as well as one root of unity on either side separating them from the endpoints of I.
We proceed by induction. For the base case i = 0, we see that type 2 relations in U_n^+(VTK) give us X_i,j = X_i+1,j + X_i,j+1 for all i and j. If we repeatedly apply this relation starting at X_0,0 and stopping when any given term becomes X_j,j+1, then we see that the resulting coefficient for X_j,j+1 will be the number of paths in ℤ^2, starting at (0,0) and ending at (j,j), always going either up or right, and never going above the diagonal. This is one common definition for the Catalan numbers.
For the case i = 1, the equation is trivially true because the sum only has one nonzero term, which is X_0,1.
For the inductive step, suppose i>1, and suppose the equation is true for all smaller i. Then, a type 2 relation gives us X_0,i = X_0,i-1 - X_1,i-1. Now, let Y= X_1,1. Then, position the stabilization in X_1,1 so that there are no (m+2)-th roots of unity between the stabilization and the left endpoint of I. Then, using this interval to define Y_i,j as we did for X_i,j, we have that X_i,j = Y_i-1,j-1. Therefore, we have X_0,i = X_0,i-1 - X_1,i-1 = X_0,i-1 - Y_0,i-2. From our inductive assumption, we can now apply the formula to the two latter terms in this equation. We have
X_0,i = ∑_j = 0^n f(i-1,j)X_j,j+1 - f(i-2,j)Y_j,j+1
= ∑_j = 0^n f(i-1,j)X_j,j+1 - f(i-2,j-1)X_j,j+1 = ∑_j = 0^n f(i,j)X_j,j+1
This proves the lemma.
Earlier we defined u: Ab(CD_≤ n)→ U_n^+(VTK), which takes chord diagrams to their unitary representatives. We claim that u(v(r)) = 0 for any r∈ R, and thus there is an induced map u: U_n(CD)→ U_n^+(VTK).
We will check each type of relation, and verify that they indeed map to zero in U_n^+(VTK).
Using Lemma <ref>, it is easy to see that type 1 relations in R map to zero under uv. The designated intervals that have numbering zero in figure <ref> will map under v to linear combinations with Catalan number coefficients, which are then equivalent in U_n^+(VTK) to sections of the boundary where the chords are close to each other. This turns the type 1 relations of R into linear combinations of type 1 relations of U_n^+(VTK).
Next, we check the type 2 relations of R. We claim these map to zero under v. We see that it suffices to show that the recursively defined function f from the definition of v has the following property for all triples of nonnegative integers (a_1,a_2,b).
f(a_1+a_2,b+1) = f(a_1+a_2+1,b+1) + ∑_b_1 + b_2 = b f(a_1,b_1)f(a_2,b_2)
First, note that when a_1 = 1 this formula reduces to the recurrence relation for f, so it suffices to prove
∑_b_1 + b_2 = b f(a_1,b_1)f(a_2+1,b_2) = ∑_b_1 + b_2 = b f(a_1+1,b_1)f(a_2,b_2)
for all triples of nonnegative integers (a_1,a_2,b). To prove this, we use induction on b. Our base case is b = 0, in which case all the terms are 1, so the equation is true. For our inductive step, suppose we know this equation is true for all b < b_0. We wish to prove it for b_0. We have
∑_b_1 + b_2 = b_0 f(a_1,b_1)f(a_2+1,b_2) - ∑_b_1 + b_2 = b_0 f(a_1+1,b_1)f(a_2,b_2)
= ∑_b_1 + b_2 = b_0 f(a_1,b_1)(f(a_2,b_2)- f(a_2-1,b_2-1)) - ∑_b_1 + b_2 = b_0 (f(a_1,b_1)- f(a_1-1,b_1-1))f(a_2,b_2)
= ∑_b_1 + b_2 = b_0f(a_1-1,b_1-1)f(a_2,b_2) - ∑_b_1 + b_2 = b_0 f(a_1,b_1)f(a_2-1,b_2-1)
= ∑_b_1 + b_2 = b_0-1f(a_1-1,b_1)f(a_2,b_2) - ∑_b_1 + b_2 = b_0-1 f(a_1,b_1)f(a_2-1,b_2) = 0
which completes the inductive step.
Finally, we wish to prove that type 3 relations r∈ R are mapped to zero under uv. By Lemma <ref>, we see that uv(r) can be transformed into a relation where we move an m-th root of unity through both the front and back endpoints of a chord, where m is the braid index. This is a valid transformation of braided chord diagrams, simply corresponding to a homotopy of the diagram. Thus, uv(r)= 0.
We have defined the following three maps.
u:U_n(CD)→ U_n^+(VTK), w: U_n^+(VTK)→ U_n(NCD), v: U_n(NCD)→ U_n(CD)
We claim that all three of these maps are isomorphisms, and the compositions vwu, uvw, and wuv, are all identity maps.
v: U_n(NCD)→ U_n(CD) is an isomorphism because U_n(CD) was defined as the quotient Ab(CD_≤ n)/v(R), and the map v: Ab(NCD_≤ n) → Ab(CD_≤ n) is surjective since it takes numbered chord diagrams where all the numbers are one to their corresponding chord diagrams.
Thus, we only have to check that the compositions vwu and uvw are identities. The map vwu can easily be seen to be an identity because it takes a chord diagram to its unitary representative, which then maps to a numbered chord diagram where all the numbers are one, which then maps back to the original diagram. Finally, uvw is the identity by Lemma <ref>. We map a braided chord diagram to the numbered chord diagram that counts roots of unity in each boundary section, which then maps to a linear combination of unitary diagrams which can be reduced by the relations of U_n^+(VTK) to the original diagram.
We can now prove Theorem <ref>.
We know that the map Ab(CD_≤ n)→ U_n(VTK) is surjective because we have proven that the induced map U_n(CD)→ U_n(VTK) is an isomorphism.
Thus, we finally know that U_n(VTK) is finitely generated, because it is isomorphic to the finitely generated abelian group U_n(CD). However, if we actually want to compute this abelian group, we also need a finite set of relations. The problem is that the relations of U_n(CD) are defined to be v(R) where R is the infinite dimensional space of relations in U_n(NCD). We need to find a finite dimensional subspace R̃⊆ R so that v(R) = v(R̃). One obvious such choice is the following.
Let R̃ be the subspace of R generated by the following subset of relations.
1) Type 1 relations of R for which all numbers in the diagram are 1, except where they are specified to be zero in Figure <ref>.
2) Type 3 relations of R for which at most two of the numbers in the diagrams are 0, and the rest are 1.
Thus, Ab(CD_≤ n)/v(R̃) gives us a finite presentation for the abelian group U_n(VTK). It is this presentation that we used in our computer program to compute U_n(VTK)⊗(ℤ/p).