Efficient Cost Modeling of Space-filling Curves

Jianzhong Qi

A space-filling curve (SFC) maps points in a multi-dimensional space to one-dimensional points by discretizing the multi-dimensional space into cells and imposing a linear order on the cells. This way, an SFC enables the indexing of multi-dimensional data using a one-dimensional index such as a B^+-tree. Choosing an appropriate SFC is crucial, as different SFCs have different effects on query performance. Currently, there are two primary strategies: 1) deterministic schemes, which are computationally efficient but often yield suboptimal query performance, and 2) dynamic schemes, which consider a broad range of candidate SFCs based on cost functions but incur significant computational overhead. Despite these strategies, existing methods cannot efficiently measure the effectiveness of SFCs under heavy query workloads and numerous SFC options. To address this problem, we propose means of constant-time cost estimation that can enhance existing SFC selection algorithms, enabling them to learn more effective SFCs. Additionally, we propose an SFC learning method that leverages reinforcement learning and our cost estimation to choose an SFC pattern efficiently. Experimental studies offer evidence of the effectiveness and efficiency of the proposed means of cost estimation and SFC learning.

§ INTRODUCTION

Indexing is essential to enable efficient query processing on increasingly massive data, including spatial and other low-dimensional data. In this setting, indices based on space-filling curves (SFCs) are used widely. For example, Z-order curves (ZC, see Figures <ref> and <ref>) <cit.> are used in Hudi <cit.>, RedShift <cit.>, and SparkSQL <cit.>; lexicographic-order curves (LC, see Figure <ref>) are used in PostgreSQL <cit.> and SQL Server <cit.>; and Hilbert curves (HC) <cit.> are used in Google S2 <cit.>. Arguably, the most important type of query in this setting is the range query, which also serves as a foundation for other queries, including kNN queries.

The most efficient query processing occurs when the data needed for a query result is stored consecutively, or when the data is stored in a few data blocks. Thus, the storage organization, i.e., the order in which the data is stored, profoundly affects the cost of processing a query. When indexing data using SFC-based indices, the choice of which SFC to use for ordering the data is therefore important.

Different range queries benefit differently from different SFCs. In Figure <ref>, three SFCs on the same data space are shown along with three queries. The fewer disconnected segments of an SFC that need to be accessed to compute a query, the better. To compute q_1, the SFC in Figure <ref> is preferable because only a single segment needs to be accessed. Put differently, the needed data may reside in a single block or in consecutive blocks. In contrast, the SFCs in Figures <ref> and <ref> map the needed data to two and four segments, respectively.

Next, we observe that no single SFC is optimal for all queries. While the SFC in Figure <ref> is good for q_1, it is suboptimal for q_2 and q_3. It is thus critical to select the right SFC for a given query (or query workload).
This in turn calls for efficient means of estimating the cost of computing a query using a particular SFC (without query execution) to guide SFC selection.

Existing studies <cit.> provide cost estimations based on counting the number of clusters (continuous curve segments) covered by a query. However, their calculations rely on curve segment scans that require O(V) time, where V is proportional to the size of a query. Given a workload of n queries and m candidate SFCs, O(n·m·V) time is needed to choose an SFC. This is expensive given large n and m (e.g., a k×k grid can form m=(k^2)! candidate SFCs), thus jeopardizing the applicability of the cost model.

In this paper, we provide efficient means of SFC cost estimation such that a query-optimal SFC can be found efficiently. Specifically, we present algorithms that compute the cost of a query in O(1) time. After an O(n)-time initialization, the algorithms compute the cost of n queries in O(1) time for each new SFC to be considered. This means that given m candidate SFCs, our algorithms can find the optimal SFC in O(m) time, which is much smaller than O(n·m·V) and thus renders SFC cost estimation practical.

Our algorithms are based on a well-chosen family of SFCs, the bit-merging curves (BMC) <cit.>. A BMC maps multi-dimensional points by merging the bit sequences of the point coordinates (i.e., column indices) from all d dimensions (detailed in Section <ref>). We consider BMCs for two reasons: (1) BMCs generalize the ZC and LC used in real systems <cit.>, so algorithms that find optimal BMCs can be integrated seamlessly into real systems. (2) The space of BMCs is large. For example, in a 2-dimensional space (d=2), where each dimension uses 16 bits (ℓ=16) for a column index, there are k = 2^ℓ columns in each dimension of the grid. This yields about 6×10^8 (i.e., (d·ℓ)!/(ℓ!)^d) candidate BMCs. An efficient cost model enables finding a query-efficient SFC in this large space.

Our algorithms model the cost of a range query based on the number and lengths of curve segments covered by the query, which in turn relate to the differences between the curve values of the end points of the curve segments. We exploit the property that the curve values of a BMC come from merging the bits of the column indices. This property enables deriving a closed-form equation that computes the lengths of the curve segments for n queries in O(d·ℓ) = O(1) time (given that d and ℓ are constants). The property also enables pre-computing d look-up tables that allow computing the number of curve segments in O(d·ℓ) = O(1) time. Thus, we achieve constant-time SFC cost estimation.

We show the applicability of the cost estimation algorithms by incorporating them into the state-of-the-art learned BMC-based structure, the BMTree <cit.>. The BMTree computes empirical query costs by executing a query workload on the dataset to be indexed. Even with its dataset sampling strategy to reduce the computational costs of query cost estimation, the original SFC learning algorithm of the BMTree takes seven hours (cf. BMTree-SP in Figure <ref>) to index a dataset of 100 million points (with only 100,000 sampled points for query cost estimation). Our cost estimation algorithms bring this time down to 57 seconds (cf. BMTree-GC in Figure <ref>) with little impact on query efficiency.
Furthermore, we develop an SFC learning algorithm that uses reinforcement learning (RL) techniques to find the optimal BMC. Importantly, the reward calculation in RL leverages our closed-form cost estimation equation and pre-computed look-up tables, thus making the entire learning process highly efficient. This enables the RL agent to converge rapidly to near-optimal solutions while navigating the state space.

In summary, the paper makes the following contributions: (1) We propose algorithms for efficient range query cost estimation when using BMC-based indices on multi-dimensional datasets. The algorithms can compute the cost of a range query in O(1) time as well as the cost of a workload of n queries in O(1) time, after a simple scan over the queries. (2) We generalize the applicability of the cost estimation to existing state-of-the-art SFC learning methods based on BMCs, enhancing the learning efficiency of such methods. (3) We propose an efficient BMC learning algorithm that leverages the proposed cost estimation. (4) We evaluate the cost estimation and BMC learning algorithms on both real and synthetic datasets, finding that (i) our cost estimation consistently outperforms baselines by up to 10^5 times in efficiency, (ii) our cost estimation accelerates the reward calculation of the BMTree by 400x with little impact on query efficiency, and (iii) our BMC learning algorithm has lower learning and query costs than competing SFC learning algorithms, including the BMTree.

The rest of the paper is organized as follows. Section <ref> covers related work. Section <ref> presents preliminaries, and Section <ref> details our cost estimations. Section <ref> presents our BMC learning algorithm, and Section <ref> reports the experimental results. Section <ref> concludes the paper.

§ RELATED WORK

Space-filling curves. SFCs find use in many fields, including indexing <cit.>, data mining <cit.>, and machine learning <cit.>. An SFC maps multi-dimensional data values to one-dimensional values, which are then indexed using a one-dimensional index, e.g., the B^+-tree. Two popular SFCs, ZC <cit.> and HC <cit.>, are deployed in practical data systems <cit.>. Bit-merging curves (BMCs, detailed in Section <ref>) are a family of SFCs, where the curve value of a grid cell is formed by merging the bits of the cell's column indices from all d dimensions. To better order the data points for specific query workloads, QUILTS <cit.> provides a heuristic method to design a series of BMCs and selects the optimal one. A recent technique, the Bit Merging Tree (BMTree) <cit.>, learns piece-wise SFCs (i.e., BMCs) by using a quadtree <cit.>-like strategy to partition the data space and selecting different BMCs for different space partitions.

Cost estimation for space-filling curves. To learn an optimal SFC, cost estimation is employed to approximate the query costs without actually computing the queries. Two studies <cit.> offer theoretical means of estimating the number of curve segments covered by a query range. They do not offer empirical results or guidance on how to construct a query-efficient SFC index. QUILTS formulates the query cost 𝒞_t for a BMC index over a set of queries as 𝒞_t=𝒞_g·𝒞_l, where 𝒞_g is a global cost and 𝒞_l is a local cost. The global cost is, for each query, the length of a continuous BMC segment that fully covers a query range q minus the length of the BMC segments in q. The idea is to count the number of segments outside q that may need to be visited to compute the queries.
The local cost is the entropy of the relative lengths of the BMC segments outside q counted in the global cost, which reflects how uniformly the lengths of such segments are distributed. However, computing these two costs relies on the accumulated length of the curve segments outside q, which is expensive to compute. Given n range queries, it takes O(n·c_t) time to compute 𝒞_t, where O(c_t) is the average estimation cost per query. Further, these costs can only be used to estimate the query costs of a given BMC index; they do not enable an efficient search for a query-efficient BMC index.

The BMTree estimates query costs using data points sampled from the target dataset. Such cost estimations are expensive for large datasets and many queries. For example, BMTree curve learning over a dataset of 100 million points (with 100,000 sampled points) and 1,000 queries can take more than seven hours (cf. BMTree-SP in Figure <ref>). While using a smaller sampled dataset and fewer queries may reduce the learning time, the resulting curve may cause suboptimal query performance (cf. BMTree-SP-6/8/10 in Figure <ref>). LMSFC <cit.>, another recent proposal, learns a parameterized SFC (which is effectively a BMC) using Bayesian optimization <cit.>. Like the BMTree, LMSFC uses a sampled dataset and a query workload for query cost estimation and thus has the same issues as the BMTree. Our study aims to address these issues by providing a highly effective and efficient cost estimation.

Space-filling curve-based indices. The Hilbert R-tree <cit.> is a classic SFC-based index structure. It uses an HC to map and order multi-dimensional data, based on which an R-tree is built. This simple structure has been shown to be competitive in many follow-up studies. A recent study further achieves worst-case optimal range query processing by adding an extra rank space-based mapping step over the input data before the Hilbert R-tree is built <cit.>. Another index, the Instance-Optimal Z-Index <cit.>, uses a quadtree-like strategy to recursively partition the data space. It creates four sub-spaces of a (sub-)space, which may be of different sizes. The four sub-spaces are each ordered by ZCs of different sizes and follow a `Z' or an `N' shape. At the bottom level of the space partitioning hierarchy, the ZCs of sub-spaces that come from different parent sub-spaces are connected following the order of the ZC that traverses the parent sub-spaces. This way, a curve is formed that traverses all bottom-level sub-spaces, and the data points are indexed in that order. In the recent wave of machine learning-based optimization for indices <cit.>, SFCs have been used to order and map multi-dimensional data points to one-dimensional values, such that one-dimensional learned indices (e.g., RMI <cit.>) can be applied. ZM <cit.> and RSMI <cit.> are representative proposals. As the BMTree <cit.> work shows, different learned SFCs can be plugged into these index structures to (possibly) improve their query performance. Our cost estimations can be applied to further enhance the SFC learning process as discussed above, and they are orthogonal to these studies.
§ PRELIMINARIES

We start with core concepts underlying BMCs and list frequently used symbols in Table <ref>.

§.§ BMC Definition

A BMC maps multi-dimensional points by merging the bit sequences of the coordinates (i.e., column indices) from all d dimensions into a single bit sequence that becomes a one-dimensional value <cit.>. Figure <ref> plots three BMC schemes, which are represented by YXYX, YXXY, and YYXX. Here, the ordering of the X's and Y's specifies how the bits from dimensions x and y are combined to obtain a BMC σ. The coordinates from each dimension have two bits, i.e., the bit length ℓ of each dimension is 2. The merged bit sequence (i.e., the curve value in binary form) has d·ℓ = 4 bits. The bit length ℓ is determined by the grid resolution, which is a system parameter. We use the same ℓ for each dimension to simplify the discussion, and we use the little-endian bit order, i.e., the rightmost bit has the lowest rank (cf. Figure <ref>). For simplicity, we call the column indices of a point p in a cell (or the cell itself) the coordinates of p (or the cell).

BMC value calculation. Given a BMC σ, we compute the curve value of a point p = (x_1, x_2, …, x_d) using function ℱ_σ(p):

ℱ_σ(p) = ∑_i=1^d ∑_j=1^ℓ α_i^j · 2^γ_i^j

Let x_i be the dimension-i coordinate of p. In the equation, α_i^j ∈ {0,1} is the jth (j ∈ [1, ℓ]) bit of x_i, and γ_i^j is the rank of α_i^j in the BMC, where

∑_j=1^ℓ α_i^j · 2^(j-1) = x_i

Note that the order among the bits from the same dimension does not change when the bits are merged with those from the other dimensions to calculate ℱ_σ(p), i.e., for bits α_i^j and α_i^(j+1), we have γ_i^j < γ_i^(j+1). For ease of discussion, we use examples with up to three dimensions x, y, and z. Figure <ref> calculates ℱ_σ(p) for p=(2,1,7) given σ = XYZXYZXYZ. Here, α_3^1=1 is the first bit value in dimension z, and the rank of the first (i.e., rightmost) Z bit in σ is zero, which means γ_3^1=0. To calculate the curve value of a point for a given σ, we derive each α_i^j and γ_i^j based on x_i and σ, respectively.

BMC monotonicity. The BMC value calculation process implies that any BMC is monotonic. Given p_1=(x_1,1, …, x_1,d) and p_2=(x_2,1, …, x_2,d), then ∀ i ∈ [1, d] (x_1,i ≤ x_2,i) → ℱ_σ(p_1) ≤ ℱ_σ(p_2). Given x_1,i ≤ x_2,i, we have ∑_j=1^ℓ α_1,i^j · 2^(j-1) ≤ ∑_j=1^ℓ α_2,i^j · 2^(j-1) based on Equation <ref>. The order among the bits of x_1,i and x_2,i does not change when they are used to calculate ℱ_σ(p_1) and ℱ_σ(p_2), respectively. Thus, ∑_j=1^ℓ α_1,i^j · 2^γ_1,i^j ≤ ∑_j=1^ℓ α_2,i^j · 2^γ_2,i^j. Since this holds for any i ∈ [1, d], we have ∑_i=1^d ∑_j=1^ℓ α_1,i^j · 2^γ_1,i^j ≤ ∑_i=1^d ∑_j=1^ℓ α_2,i^j · 2^γ_2,i^j, i.e., ℱ_σ(p_1) ≤ ℱ_σ(p_2).

§.§ Range Querying Using a BMC

Next, we present concepts on range query processing with BMCs that will be used later to formulate query cost estimation. Given a d-dimensional dataset D and a range query q = [x_s,1, x_e,1] × [x_s,2, x_e,2] × … × [x_s,d, x_e,d], where [x_s,i, x_e,i] denotes the query range in dimension i, query q returns all points p=(x_1, x_2, …, x_d) ∈ D that satisfy ∀ i ∈ [1,d] (x_s,i ≤ x_i ≤ x_e,i). As mentioned earlier, computing a query q using different BMCs can lead to different costs. To simplify the discussion for determining the cost of a query, we use the following corollary. Given p_s=(x_s,1, …, x_s,d) and p_e=(x_e,1, …, x_e,d), any query q is bounded by the curve value range [ℱ_σ(p_s), ℱ_σ(p_e)]. Corollary <ref> follows directly from the monotonicity of BMCs (Theorem <ref>).
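To make the curve value function ℱ_σ above concrete, the following is a minimal Python sketch of the bit-merging computation, assuming that a BMC σ is given as a string such as "XYZXYZXYZ" whose rightmost character has rank 0. The function and variable names are ours and not taken from the paper's implementation; the dimension labels and the example point p=(2,1,7) come from the discussion above.

```python
def bmc_value(sigma: str, point: dict[str, int]) -> int:
    """Compute the BMC value F_sigma(p) by merging the coordinate bits.

    sigma: e.g., "XYZXYZXYZ"; the rightmost character has rank 0.
    point: coordinate per dimension label, e.g., {"X": 2, "Y": 1, "Z": 7}.
    """
    value = 0
    used = {dim: 0 for dim in point}              # bits of each dimension placed so far
    for rank, dim in enumerate(reversed(sigma)):  # rank 0 is the rightmost position
        j = used[dim]                             # this position holds the (j+1)-st bit of x_dim
        bit = (point[dim] >> j) & 1               # alpha_dim^{j+1}
        value += bit * (1 << rank)                # alpha * 2^{gamma}
        used[dim] += 1
    return value

# For sigma = "XYZXYZXYZ" and p = (x, y, z) = (2, 1, 7),
# this interpretation yields 107 (binary 001101011).
print(bmc_value("XYZXYZXYZ", {"X": 2, "Y": 1, "Z": 7}))
```

Because the bits of each coordinate keep their relative order when merged, the monotonicity stated above follows directly from this construction.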
To simplify the discussion, we use a point p and the cell that encloses p interchangeably and rely on the context for disambiguation.

Query section <cit.>. A continuous curve segment in a query q is called a query section. We denote a query section s with end points p_i and p_j by [ℱ_σ(p_i), ℱ_σ(p_j)]. Intuitively, each query section translates to a one-dimensional range query [ℱ_σ(p_i), ℱ_σ(p_j)] on a B^+-tree index on dataset D. Thus, the number of query sections in [ℱ_σ(p_s), ℱ_σ(p_e)] determines the cost of q. In Figure <ref>, there are three query sections s_1, s_2, and s_3, with s_2 = [ℱ_σ(p_i), ℱ_σ(p_j)] = [36, 39]. By definition, a point (cell) immediately preceding p_i or succeeding p_j must be outside q; otherwise, it would be part of the query section. For example, p_i-1 (ℱ_σ(p_i-1)=35) and p_j+1 (ℱ_σ(p_j+1)=40) in Figure <ref> are outside q. The number of query sections in q varies across different BMCs, e.g., the same q as in Figure <ref> has four query sections in Figure <ref>.

Directed edge <cit.>. Query sections are composed by connecting a series of points (cells). A pair of consecutive points p_i and p_j forms a directed edge (denoted by e) if the curve values of p_i and p_j differ by one under a given σ, i.e., ℱ_σ(p_j) - ℱ_σ(p_i)=1. As each point is represented through a binary value, the difference occurs because ℱ_σ(p_i) = (prefix)01...1 with K trailing 1s and ℱ_σ(p_j) = (prefix)10...0 with K trailing 0s, where the last K (K≥0) bits are changed from 1 to 0 and the (K+1)st bit is changed from 0 to 1. We use the binary form of two pairs of integers that form directed edges to illustrate this concept, one for K>0 and the other for K=0. First, suppose that the binary representations of ℱ_σ(p_i)=15 and ℱ_σ(p_j)=16 are 001111 and 010000, respectively. In this case, four bits starting from the right (i.e., K=4) are changed from 1 to 0, and the fifth bit is changed from 0 to 1. The remaining (leftmost) bit 0 is the shared prefix. Second, if the binary forms of ℱ_σ(p_i)=16 and ℱ_σ(p_j)=17 are 010000 and 010001, respectively, only the first bit (from the right) is changed from 0 to 1, i.e., no bits (K=0) are changed from 1 to 0, and the shared prefix is 01000.

We now explain why the number of directed edges (denoted by ℰ_σ(q)) plus the number of query sections (denoted by 𝒮_σ(q)) in a given query q yields the number of distinct points (denoted by 𝒱(q)) in q. The intuition is that if q consists of a single section s, i.e., the curve stays completely inside s and 𝒮_σ(q)=1, then there are 𝒱(q)-1 directed edges connecting the start point p_s and the end point p_e of s. In other words, we obtain ℰ_σ(q) + 𝒮_σ(q) = 𝒱(q) - 1 + 1 = 𝒱(q). This also holds with multiple sections, because each time the curve exits a query section s_i and later enters the next section s_i+1, the last point in s_i becomes disconnected (minus one directed edge), but one new query section is added (plus one query section) when the curve enters s_i+1. This leads to the following equation:

ℰ_σ(q) + 𝒮_σ(q) = 𝒱(q)

While 𝒱(q) is independent of σ, the values of ℰ_σ(q) and 𝒮_σ(q) depend on σ. For example, in Figure <ref> (σ=XYXYXY), there are 3 query sections (𝒮_σ(q)) and 5 directed edges (ℰ_σ(q)) in q; in Figure <ref> (σ=YXYXYX), there are 4 query sections and 4 directed edges in q. Both figures have 𝒱(q)=8 points in q. Equation <ref> is key in computing the local cost (Section <ref>) of a query.

§ EFFICIENT BMC COST ESTIMATION

Consider a range query q with start point p_s and end point p_e, and assume that dataset D has been indexed with a B^+-tree using BMC σ.
A simple query algorithm accesses the range [ℱ_σ(p_s), ℱ_σ(p_e)] using the B^+-tree and filters out any false positives not included in q. The query cost of q then relates to the length of [ℱ_σ(p_s), ℱ_σ(p_e)] and the number of false positives in the range. The number of false positives in turn relates to the number of query sections in q. Thus, we define the cost of q (when using BMC σ), denoted by 𝒞_σ(q), as a combination of the length of [ℱ_σ(p_s), ℱ_σ(p_e)] (called the global cost, 𝒞_σ^g(q)) and the number of query sections (called the local cost, 𝒞_σ^l(q)) in q. Empirically, we find that the product of the global and the local costs best differentiates the query performance of different BMC indices, which helps identify query-optimal BMC indices (i.e., the goal of our study). Hence, we define 𝒞_σ(q) as:

𝒞_σ(q) = 𝒞_σ^g(q) · 𝒞_σ^l(q)

Note that a commonly used alternative query algorithm is to break q into query sections and perform a range query on the B^+-tree for each such section. In this case, the local cost applies directly. The global cost, on the other hand, applies implicitly, because a larger range [ℱ_σ(p_s), ℱ_σ(p_e)] implies a higher cost to examine and uncover the query sections in the range. Note also that the cost model of QUILTS <cit.> uses the product of a global and a local cost. However, its definitions of the global and local costs, described in Section <ref>, differ from ours. Next, we present efficient algorithms for computing the global and local costs in Sections <ref> and <ref>, respectively.

§.§ Global Cost Estimation for BMC

As mentioned above, we define the global cost of query q as the length of [ℱ_σ(p_s), ℱ_σ(p_e)]. The global cost 𝒞_σ^g(q) of query q under BMC σ is the length of the curve segment from p_s to p_e:

𝒞_σ^g(q) = ℱ_σ(p_e) - ℱ_σ(p_s) + 1 = ∑_j=1^d ∑_k=1^ℓ (α_e,j^k - α_s,j^k) · 2^γ_j^k + 1

Efficient computation. Following the definition, given a set Q of n queries, their total global cost can be calculated by visiting every query q ∈ Q and adding up 𝒞_σ^g(q). This naive approach takes time proportional to the number of queries. To reduce the time cost without loss of accuracy, we rewrite the global cost as a closed-form function for efficient computation:

𝒞_σ^g(Q) = ∑_i=1^n 𝒞_σ^g(q_i)
         = ∑_i=1^n ∑_j=1^d ∑_k=1^ℓ (α_i,e,j^k - α_i,s,j^k) · 2^γ_j^k + n
         = ∑_j=1^d ∑_k=1^ℓ [ ∑_i=1^n (α_i,e,j^k - α_i,s,j^k) ] · 2^γ_j^k + n
         = ∑_j=1^d ∑_k=1^ℓ A_j^k · 2^γ_j^k + n

Here, q_i ∈ Q; α_i,s,j^k and α_i,e,j^k denote the kth bits of the coordinates of the lower and the upper end points of q_i in dimension j, respectively; and A_j^k = ∑_i=1^n (α_i,e,j^k - α_i,s,j^k) is BMC independent and can be calculated once, for any BMC, by scanning the n range queries in Q to compute the gap between p_e and p_s on the kth bit of the jth dimension. Only the term 2^γ_j^k is BMC dependent and must be calculated for each curve, because γ_j^k represents the rank of the kth bit from dimension j in a BMC (cf. Section <ref>). If the BMC σ is changed, e.g., from XYXYXY to XYXYYX, then γ_1^1=1 and γ_2^1=0 are changed to γ_1^1=0 and γ_2^1=1, respectively.

Algorithm costs. The above property helps reduce the cost of computing the global cost when multiple candidate BMCs are given. For example, when learning the best BMC from a large number of candidate BMCs (see Section <ref>), each BMC is evaluated individually in each iteration (Algorithm <ref>). Without an efficient cost model, computing the global cost takes O(m·n·d·ℓ) time for m candidate BMCs over n queries (based on Equation <ref>).
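For concreteness, the following is a minimal Python sketch of the closed-form global cost above: the BMC-independent terms A_j^k are computed once over the workload, after which the total global cost of each candidate BMC is obtained by a pass over its d·ℓ bit positions. The function names, the string representation of a BMC (e.g., "XYXYXY"), and the dictionary-based query format are our own assumptions, not the paper's implementation; the small workload is purely illustrative.

```python
def precompute_A(queries, dims, ell):
    """One O(n) scan: A[j][k] = sum_i (k-th bit of upper end - k-th bit of lower end)."""
    A = {dim: [0] * ell for dim in dims}
    for q in queries:                       # q = {"X": (x_s, x_e), "Y": (y_s, y_e), ...}
        for dim in dims:
            lo, hi = q[dim]
            for k in range(ell):
                A[dim][k] += ((hi >> k) & 1) - ((lo >> k) & 1)
    return A

def global_cost(sigma, A, n_queries):
    """Total global cost of one BMC over the workload, O(d * ell) per BMC."""
    total = n_queries                       # the "+ n" term
    used = {dim: 0 for dim in A}
    for rank, dim in enumerate(reversed(sigma)):   # rank = gamma of this bit position
        k = used[dim]                       # which bit of this dimension sits at this rank
        total += A[dim][k] * (1 << rank)    # A_j^k * 2^{gamma_j^k}
        used[dim] += 1
    return total

# Usage: precompute once, then score many candidate BMCs cheaply.
queries = [{"X": (0, 4), "Y": (2, 3)}, {"X": (5, 6), "Y": (0, 2)}]   # illustrative workload
A = precompute_A(queries, dims=("X", "Y"), ell=3)
for sigma in ("XYXYXY", "YXYXYX", "XYXYYX"):
    print(sigma, global_cost(sigma, A, len(queries)))
```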
Based on our proposed closed-form method (Equation <ref>), after an initial O(n)-time scan over the n queries (to compute A_j^k), the total global cost over the n queries can be calculated in O(m·d·ℓ) time for m candidate BMCs, i.e., O(m) time given a constant number of dimensions d and a constant number of bits ℓ per dimension.

§.§ Local Cost Estimation for BMC

The local cost measures the degree of segmentation of the curve in [ℱ_σ(p_s), ℱ_σ(p_e)], which indicates the number of false-positive data blocks that are retrieved unnecessarily and need to be filtered. We define the local cost as the number of query sections, following existing studies <cit.> that use the term "number of clusters" for the same concept. The local cost 𝒞_σ^l(q) of query q under BMC σ is the number of query sections in q, i.e., 𝒮_σ(q).

Intuition. Recall that 𝒱(q) is the number of distinct points in q. We assume one data point per cell and that every B data points are stored in a block. A point is a true positive if it (and its cell) is in query q and a false positive if it is outside q but is retrieved by the query. If q has only one query section, the largest number of block accesses is ⌊(𝒱(q)-2)/B⌋ + 2, i.e., only the first and the last blocks can contain false positives (each accessed block contains at least one true-positive point). In this case, the precision of the query process is at least 𝒱(q)/(𝒱(q) + 2·(B-1)). Following the same logic, if there are n_s query sections, then in the worst case each query section incurs two excess block accesses, each for a block containing only one true-positive point. The largest number of block accesses is ⌊(𝒱(q)-2·n_s)/B⌋ + 2·n_s, and the precision is 𝒱(q)/(𝒱(q) + 2·n_s·(B-1)). The number of excess block accesses grows linearly with n_s. Thus, we use n_s to define the local cost. In Figure <ref>, we order points based on BMCs σ_1 and σ_2 and place the points in blocks with B=4. There are 14 true positives (i.e., 𝒱(q)=14). There is only one query section under σ_1, which leads to a precision of 14/(5×4)=70% for 5 block accesses, whereas σ_2 has three query sections (due to a different curve). The number of block accesses is then 7, and the precision drops to 14/(7×4)=50%.

Efficient computation. A simple way to compute the local cost of an arbitrary range query is to count the number of query sections by traversing the curve segment from p_s to p_e, but this is also time-consuming. To reduce the cost, we rewrite Equation <ref> as:

𝒮_σ(q) = 𝒱(q) - ℰ_σ(q)

Given a query q and the grid resolution of the data space, it is straightforward (i.e., taking O(d) = O(1) time) to compute the number of cells in q (i.e., 𝒱(q)). Then, our key insight is that 𝒮_σ(q) can be computed by counting the number of directed edges, i.e., ℰ_σ(q), which can be done in O(1) time as detailed below. Thus, 𝒮_σ(q) can be computed in O(1) time.

§.§.§ Rise and Drop Patterns

To compute ℰ_σ(q) efficiently, we analyze how the bit sequence of a BMC changes from one point to another along a directed edge. A directed edge is formed by two consecutive points whose (binary) curve values share the same prefix, while the remaining bits are changed. We observe that different directed edges have the same shape when they share the same pattern in their changed bits, even if their prefixes differ. In Figure <ref>, consider edges e_1 = (5, 6) = [000101, 000110] and e_2 = (13, 14) = [001101, 001110]. Both edges are in query q, as indicated by the red rectangle, and they share the same `\' shape because the two rightmost bits in both cases change from "01" to "10".
However, in Figure <ref>, edge (1, 2) = [000001, 000010] is not in q, and its prefix ("0000") differs from that of e_1 and e_2 above. A query q can only contain directed edges of a few different shapes. In Figure <ref>, edge (31, 32) = [011111, 100000] is not in q, and the pattern of its changed bits differs from that of e_1 and e_2.

Note that the bits of the curve values come from the coordinates (i.e., column indices) of the two end points of a directed edge. By analyzing the bit patterns of the column indices spanned by a query q in each dimension, we can count the number of directed edges that can appear in q. To generalize, recall that for a directed edge from p_i to p_j, we must have ℱ_σ(p_i) = (prefix)01...1 with K trailing 1s and ℱ_σ(p_j) = (prefix)10...0 with K trailing 0s (K ≥ 0), where the K rightmost bits are changed from 1 to 0, while the (K+1)st rightmost bit is changed from 0 to 1. The bits of ℱ_σ(p_i) and ℱ_σ(p_j) come from those of the column indices of p_i and p_j. Thus, the K+1 rightmost bits changed from ℱ_σ(p_i) to ℱ_σ(p_j) must also come from those of the column indices. In particular, there must be one dimension whose column index contributes k (1 ≤ k ≤ K+1) changed bits, one of which is the bit that changes from 0 to 1, while the remaining dimensions contribute only bits that change from 1 to 0.

Our key observation is that the bit-changing patterns across the column indices of a dimension depend only on the column indices themselves, making them BMC independent. By pre-computing the numbers of bit-changing patterns that can form the (K+1)-bit change of a directed edge, we can efficiently derive the number of directed edges given a query q and a BMC. We summarize the bit-changing patterns that form a directed edge with two basic patterns: a rise pattern and a drop pattern.

A rise pattern ℛ_b^k of a directed edge from p_i to p_j represents a k-bit (k ≥ 1) change from the dimension-b coordinate of p_i (i.e., x_i,b) to that of p_j (i.e., x_j,b), where the rightmost k-1 bits are changed from 1 to 0 and the kth bit (from the right) is changed from 0 to 1, i.e., x_i,b = (prefix)01...1 with k-1 trailing 1s and x_j,b = (prefix)10...0 with k-1 trailing 0s.

A drop pattern 𝒟_b^k of a directed edge from p_i to p_j represents a rightmost k-bit (k ≥ 0) 1-to-0 change from the dimension-b coordinate of p_i (i.e., x_i,b) to that of p_j (i.e., x_j,b), i.e., x_i,b = (prefix)1...1 with k trailing 1s and x_j,b = (prefix)0...0 with k trailing 0s.

Given a dimension whose coordinates use ℓ bits, there can be ℓ different rise patterns, i.e., k ∈ [1, ℓ], and ℓ+1 different drop patterns, i.e., k ∈ [0, ℓ]. Note the special case where k=0, i.e., 𝒟_b^0, indicating no bit value drop in dimension b.

In Figure <ref>, consider the directed edge from p_i to p_j, where ℱ_σ(p_i)=1 (000001) and ℱ_σ(p_j)=2 (000010), i.e., the `\' segment at the bottom left. The x-coordinate changes from 000 in p_i to 001 in p_j (i.e., rise pattern ℛ_x^1). The y-coordinate changes from 001 in p_i to 000 in p_j (i.e., drop pattern 𝒟_y^1). Thus, this directed edge can be represented by a combination of ℛ_x^1 and 𝒟_y^1, denoted by ℛ_x^1 ⊕ 𝒟_y^1. The same combination also applies to other directed edges, such as the one from ℱ_σ(p_i)=13 to ℱ_σ(p_j)=14, which is another `\'-shaped segment. Other directed edges may use a different combination, e.g., ℛ_x^3 ⊕ 𝒟_y^3 for the one from ℱ_σ(p_i)=31 to ℱ_σ(p_j)=32, and ℛ_x^2 ⊕ 𝒟_y^2 for the one from ℱ_σ(p_i)=39 to ℱ_σ(p_j)=40. Figure <ref> shows the rise patterns ℛ_x^k in dimension x and the drop patterns 𝒟_y^k in dimension y.
Combining a rise pattern and a drop pattern from these patterns forms a directed edge shown in red in the figure. Similarly, Figure <ref> shows the rise patterns ℛ_y^k in dimension y and the drop patterns 𝒟_x^k in dimension x; combining a rise pattern and a drop pattern from these patterns forms a black directed edge. The pattern combination operator `⊕' applied to two (rise or drop) patterns means that the (K+1)-bit change of a directed edge is formed by the two patterns.

Note also that while the rise and drop patterns of a dimension are BMC independent, which patterns can be combined to form a directed edge is BMC dependent, because different BMCs order the bits from the different dimensions differently. For example, consider σ = X^3Y^3X^2Y^2X^1Y^1 (i.e., XYXYXY). From the right to the left of σ, the first rise pattern is ℛ_x^1. It can only be combined with drop pattern 𝒟_y^1, as there is just one bit, Y^1, from dimension y to the right of X^1. Similarly, ℛ_x^2 and ℛ_x^3 can each be combined with 𝒟_y^2 and 𝒟_y^3, respectively, i.e., all 1-bits to the right of X^2 and X^3 must be changed to 0, according to the bit-changing pattern of a directed edge. In general, for each dimension, there are only ℓ valid combinations of a rise and a drop pattern, and this number generalizes to d·ℓ in a d-dimensional space given a BMC. Next, ℰ_σ(q) can be calculated by counting the numbers of valid rise and drop patterns in q. For example, when d=2:

ℰ_σ(q) = ∑_i=1^ℓ (𝒩(ℛ_x^i) · 𝒩(𝒟_y^r_y) + 𝒩(ℛ_y^i) · 𝒩(𝒟_x^r_x))

Here, 𝒩(·) counts the number of times that a pattern occurs in q, and r_y (r_x) is a parameter that depends on the drop pattern that can be combined with ℛ_x^i (ℛ_y^i) under σ. In Figure <ref>, for q = [0,4] × [2,3], there are two ℛ_x^1, one ℛ_x^2, and one ℛ_x^3, i.e., 𝒩(ℛ_x^1)=2, 𝒩(ℛ_x^2)=1, and 𝒩(ℛ_x^3)=1. Next, there are one 𝒟_y^1, zero 𝒟_y^2, and zero 𝒟_y^3 that are valid to match with these rise patterns, i.e., 𝒩(𝒟_y^1)=1, 𝒩(𝒟_y^2)=0, and 𝒩(𝒟_y^3)=0. Similarly, 𝒩(ℛ_y^1)=1, and ℛ_y^1 can be matched with 𝒟_x^0, where 𝒩(𝒟_x^0)=5. Recall that 𝒟_x^0 is the special case with no bit value drop; it is counted as the length of the query range in dimension x. Overall, ℰ_σ(q) = 2×1 + 1×5 = 7. Thus, there are 10 - 7 = 3 query sections in q according to Equation <ref>, which is consistent with the figure.

Efficient counting of rise and drop patterns. A rise pattern ℛ_b^k represents a change in the dimension-b coordinate from x_i,b = a·2^k + (2^(k-1) - 1) to x_j,b = a·2^k + 2^(k-1) (a ≥ 0 ∧ a ∈ ℕ). Here, a·2^k is the prefix, while 2^(k-1) - 1 (i.e., 01...1 with k-1 trailing 1s) and 2^(k-1) (i.e., 10...0 with k-1 trailing 0s) represent the changed bits. Then, given the query range [x_s,b, x_e,b] in dimension b, the pattern can be counted by calculating ⌊(x_e,b - 2^(k-1))/2^k⌋ - ⌈(x_s,b - (2^(k-1) - 1))/2^k⌉ + 1, i.e., a bound on the number of different values of a, which takes O(1) time. Similarly, a drop pattern 𝒟_b^k represents a change from x_i,b = a·2^k + (2^k - 1) to x_j,b = a·2^k + 0 (a ≥ 0 ∧ a ∈ ℕ). Here, a·2^k is the prefix, while 2^k - 1 (i.e., 1...1 with k 1s) and 0 (i.e., 0...0 with k 0s) represent the changed bits. We can count the pattern by calculating ⌊(x_e,b + 1)/2^k⌋ - ⌈x_s,b/2^k⌉, again in O(1) time.

Generalizing to d dimensions. As mentioned at the beginning of this subsection, a directed edge can be decomposed into a rise pattern in one dimension and drop patterns in the remaining d-1 dimensions. We call the set of all drop patterns in the d-1 dimensions a drop pattern collection.
For a directed edge in a d-dimensional space, a drop pattern collection 𝒟^k' represents the bit combination of the d-1 drop patterns: 𝒟^k' = ⊕_i=1,i≠b^d 𝒟_i^k_i with k' = ∑_i=1,i≠b^d k_i = K - (k-1), where b is the dimension with the rise pattern ℛ_b^k, and we overload `⊕' as the pattern combination operator over drop patterns. We note that when d=2, 𝒟^k' is simply the drop pattern of the single remaining dimension, so the two notations are interchangeable. For simplicity, we call 𝒟^k' a drop pattern when the context eliminates any ambiguity.

Now, in a d-dimensional data space, a directed edge can be formed by combining one rise pattern and d-1 drop patterns, i.e., ℛ_b^k ⊕ 𝒟^k' = ℛ_b^k ⊕ (⊕_i=1,i≠b^d 𝒟_i^k_i) with k' = ∑_i=1,i≠b^d k_i. Equation <ref> is then rewritten as:

ℰ_σ(q) = ∑_j=1^d ∑_i=1^ℓ 𝒩(ℛ_j^i) · 𝒩(𝒟^r)

Here, the value of parameter r depends on the drop patterns that can be combined with ℛ_j^i under σ.

§.§.§ Pattern Tables

We have shown how to compute the local cost of a query efficiently. Given a set Q of n range queries (q_i ∈ Q), their total local cost based on Definition <ref> is:

𝒞_σ^l(Q) = ∑_i=1^n 𝒞_σ^l(q_i) = ∑_i=1^n 𝒱(q_i) - ∑_i=1^n ℰ_σ(q_i)

This cost takes O(n) time to compute. Given m BMCs, computing their respective total local costs 𝒞_σ^l(Q) takes O(m·n) time. As ∑_i=1^n 𝒱(q_i) is independent of the BMCs, it can be computed once by an O(n)-time scan over Q. The computational bottleneck for m BMCs is then the computation of ∑_i=1^n ℰ_σ(q_i).

We eliminate this bottleneck by introducing a look-up table, called a pattern table, that stores pre-computed numbers of rise-and-drop pattern combinations forming directed edges at different locations, which are BMC independent. Since each directed edge is a combination of a rise pattern in some dimension b and d-1 drop patterns, we proceed to show how to pre-compute d pattern tables, each recording the rise patterns of one dimension. The pattern table for dimension b, denoted by 𝑇𝑎𝑏𝑙𝑒^b, contains ℓ rows, each corresponding to a rise pattern in the dimension, and ℓ·(d-1)+1 columns, each corresponding to a drop pattern in the other d-1 dimensions. As shown in Table <ref>, the value in row i and column j is the product of the numbers of rise pattern ℛ_b^i and drop pattern 𝒟^j. There is a total of ℓ·(d-1)+1 drop patterns in the d-1 dimensions because there are ℓ·(d-1) bits in those dimensions, i.e., k' ∈ [0, ℓ·(d-1)] for 𝒟^k'. Further, since the rise and drop patterns correspond only to the bit sequences in each dimension and not to the curve values, the values in the pattern tables can be computed once for a given set of queries Q and then be reused across local cost estimations for different BMCs. Algorithm <ref> summarizes the steps to compute pattern table 𝑇𝑎𝑏𝑙𝑒^b based on its definition.

In Figure <ref>, we show two queries q_1 and q_2, and the corresponding pattern tables 𝑇𝑎𝑏𝑙𝑒^x and 𝑇𝑎𝑏𝑙𝑒^y are shown in Tables <ref> and <ref>, respectively. In the tables, we use `+' to denote summing up the pattern table cell values (i.e., 𝒩(ℛ_b^i)·𝒩(𝒟^j), where 𝒩(𝒟^j) is 𝒩(𝒟_x^j) or 𝒩(𝒟_y^j)) computed for q_1 and q_2. For example, in q_1, 𝒩(ℛ_x^1)=2 (the two ℛ_x^1 are labeled for q_1 in Figure <ref>) and 𝒩(𝒟_y^0)=2 (the value range of q_1 in dimension y is 2). Meanwhile, in q_2, 𝒩(ℛ_x^1)=1 (one ℛ_x^1 is labeled for q_2 in Figure <ref>) and 𝒩(𝒟_y^0)=3 (the value range of q_2 in dimension y is 3).
Thus, in 𝑇𝑎𝑏𝑙𝑒^x, the cell 𝑇𝑎𝑏𝑙𝑒^x[1][0] (corresponding to ℛ_x^1 ⊕ 𝒟_y^0) is the sum of 𝒩(ℛ_x^1)·𝒩(𝒟_y^0) over q_1 and q_2, i.e., 4 + 3.

§.§.§ Local Cost Estimation with Pattern Tables

Next, we describe how to derive the number of directed edges (and hence the total local cost) from the d pattern tables for n queries. Algorithm <ref> shows how to compute the local cost using the pattern tables. Each dimension j is considered for the rise patterns (Line 2). Then, we consider each rise pattern in the dimension, i.e., each row i in 𝑇𝑎𝑏𝑙𝑒^j (Line 3). We locate the corresponding drop pattern (i.e., the table column index) based on i and the given BMC σ (Line 4). Then, we add the cell value to the number of directed edges ℰ_σ (Line 5). Note that all ℓ rise patterns in each dimension are considered, because a BMC has ℓ bits in each dimension, each of which can be the bit that changes from 0 to 1. We return the total local cost by subtracting the total number of directed edges from the total number of cells in Q.

Continuing the example above, given BMC XYXYXY, from 𝑇𝑎𝑏𝑙𝑒^x we read the cells (ℛ_1^1, 𝒟_2^1), (ℛ_1^2, 𝒟_2^2), and (ℛ_1^3, 𝒟_2^3), i.e., the cells with wavy lines. Similarly, we read the cells with wavy lines from 𝑇𝑎𝑏𝑙𝑒^y. These cells sum up to 6, which is the number of directed edges (segments with arrows) in Figure <ref>. Similarly, the cells relevant to BMC YXYXYX are underlined, which yields a total of nine directed edges in Figure <ref>.

Algorithm costs. In general, for each rise pattern, the total number of possible drop pattern combinations is (ℓ+1)^(d-1) based on Definition <ref>. The time complexity of generating the d pattern tables is thus O(d·ℓ·(ℓ+1)^(d-1)), where d is the number of dimensions, ℓ is the number of rows per table, and (ℓ+1)^(d-1) is the accumulated number of drop pattern combinations (which equals ℓ+1 when d=2). After initialization, the look-up time complexity over the pattern tables is O(d·ℓ) = O(1), i.e., we retrieve ℓ cells from each of the d tables. We generate d pattern tables, each with ℓ·(ℓ+1)^(d-1) keys. Thus, the space complexity of the pattern tables is O(d·ℓ·(ℓ+1)^(d-1)). For example, when d=3 and ℓ=32, all the tables take 1.6 MB (1.2 MB for keys and 0.4 MB for values).
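To tie the local cost machinery together, the following is a minimal Python sketch for d=2 (with ℓ=3 for readability) that counts rise and drop patterns with the closed-form formulas above, builds the two pattern tables over a query workload, and then evaluates the local cost of a BMC by ℓ table look-ups per dimension. It follows the formulas above rather than the exact pseudo-code of the paper's algorithms; the function names, the dictionary-based query format, and the one-point-per-cell assumption (as in the intuition above) are our own.

```python
import math

ELL = 3  # bits per dimension (kept small here; the paper defaults to ell = 20)

def count_rises(lo, hi, k):
    """Number of rise patterns R^k within the 1-d query range [lo, hi], in O(1) time."""
    return max(0, (hi - 2**(k - 1)) // 2**k
                  - math.ceil((lo - (2**(k - 1) - 1)) / 2**k) + 1)

def count_drops(lo, hi, k):
    """Number of drop patterns D^k within [lo, hi]; D^0 equals the range length."""
    return max(0, (hi + 1) // 2**k - math.ceil(lo / 2**k))

def build_pattern_tables(queries):
    """BMC-independent pre-computation over the workload (one scan, d = 2)."""
    tables = {"X": [[0] * (ELL + 1) for _ in range(ELL + 1)],   # rows: R_x^i, cols: D_y^j
              "Y": [[0] * (ELL + 1) for _ in range(ELL + 1)]}   # rows: R_y^i, cols: D_x^j
    for q in queries:                                           # q = {"X": (lo, hi), "Y": (lo, hi)}
        for rise_dim, drop_dim in (("X", "Y"), ("Y", "X")):
            for i in range(1, ELL + 1):
                n_rise = count_rises(*q[rise_dim], i)
                for j in range(ELL + 1):
                    tables[rise_dim][i][j] += n_rise * count_drops(*q[drop_dim], j)
    return tables

def local_cost(sigma, tables, total_cells):
    """Local cost of one BMC: total cells minus directed edges, via 2 * ell look-ups."""
    edges = 0
    for dim in ("X", "Y"):
        other = "Y" if dim == "X" else "X"
        seen = {"X": 0, "Y": 0}
        for ch in reversed(sigma):             # scan bit positions from rank 0 upwards
            if ch == dim:
                i = seen[dim] + 1              # rise pattern R_dim^i sits at this rank
                edges += tables[dim][i][seen[other]]   # matching drop D_other^{seen[other]}
            seen[ch] += 1
    return total_cells - edges

# The running example: q = [0,4] x [2,3] has 10 cells; under XYXYXY there are 3 query sections.
queries = [{"X": (0, 4), "Y": (2, 3)}]
tables = build_pattern_tables(queries)
cells = sum((q["X"][1] - q["X"][0] + 1) * (q["Y"][1] - q["Y"][0] + 1) for q in queries)
print(local_cost("XYXYXY", tables, cells))   # prints 3, matching the example above
```

Because the tables are built once per workload, scoring an additional candidate BMC only requires the look-up loop in local_cost, which is the constant-time behavior analyzed above.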
There are two approaches: (a) run a random swap (i.e., exploration) each time and keep the result if it reduces the query cost, and (b) select a position that leads to the largest query cost reduction each time (i.e., exploitation). Using either approach yields local optima. We integrate both approaches by leveraging deep reinforcement learning (DRL) to approach a global optimum, since DRL aims to maximize a long-term objective <cit.> and balance exploration and exploitation.BMC learning formulation. We formulate BMC learning as a DRL problem: (1) State space 𝒮, where a state (i.e., a BMC) σ_t ∈𝒮 at time step t is a vector ⟨σ_t[d·ℓ],σ_t[d·ℓ-1],…,σ_t[1]⟩, and σ_t[i] is the ith bit.For example, if σ_t=XYZ, σ_t[3]=X, σ_t[2]=Y, and σ_t[1]=Z.(2) Encoding function ϕ(·), which encodes a BMC to fit the model input. We use one-hot encoding. For example, X, Y, and Z can be encoded into [0,0,1], [0,1,0], and [1,0,0], respectively, and XYZby [0,0,1,0,1,0,1,0,0]. (3) Action space 𝒜, where an action a ∈𝒜 is the position of a bit to swap. When the ath bit is chosen, we swap it with the (a+1)st bit (if a+1 ≤ d·ℓ). Thus, 𝒜={a∈ℤ:1≤ a≤ d·ℓ - 1}. (4) Reward r: 𝒮×𝒜×𝒮→ r, which is the query cost reduction when reaching a new BMC σ_t+1 from σ_t.Since an oracle is unavailable, we use our cost model to estimate the query cost of a BMC.The reward r_t at step t is calculated as r_t =(𝒞_σ_t - 𝒞_σ_t+1) / 𝒞_σ_1, where 𝒞_σ_t=𝒞_σ_t^g(Q)·𝒞_σ_t^l(Q) is the cost of σ_t estimated by Equation <ref> and Algorithm <ref>.(5) Parameter ϵ, which balancesexploration and exploitation to avoid local optima.Based on this formulation, we use deep Q-learning <cit.> in ouralgorithm to learn a query-efficient BMC index.The algorithm. We summarizein Algorithm <ref> where the inputσ_1 can be any initial BMC, e.g., a ZC.The key idea ofis to learn a policy π: 𝒮→𝒜 that guides the position selection for a bit swap given a status, to maximize a value function 𝚀^*(ϕ(σ_t), a) (i.e., the reward) at each step t. Such a policy π can be learned by training a model (a deep Q-network, DQN) with parameters θ over existing“experience” (previously observed state transitions and their rewards), which is used to predict the position a to maximize the value function (i.e., max_a𝚀^*(ϕ(σ_t), a; θ)). After a number of iterations, the learned BMC σ_opt^* is expected to approach σ_opt, which is returned as the algorithm output. We initialize a storage MQ to store the latest N_MQ bit-swapping records (i.e., the experience, Line 1).We learn to approach σ_opt with M episodes and T steps per episode (Lines 2 and 3). In each episode, we start with σ_1 encoded by ϕ(·). To select a swap position a_t at step t, we generate a random number in [0, 1], if the number is greater than ϵ, we randomly select a position a_t, otherwise, we set a_t as the position with the highest probability to obtain amaximal reward, i.e.,max_a𝚀^*(ϕ(σ_t), a; θ) (Line 4). 
The prediction is based on the current state σ_t and the model weights θ. We execute a_t (𝙴(σ_t, a_t) at Line 5) and compute reward r_t using our cost model (Line 6). We record the new transition in MQ and train the DQN (i.e., update θ) over data sampled from MQ (Lines 7 and 8). The training uses gradient descent to minimize the loss function L_t(θ_t) = 𝔼_ϕ(σ),a∼ρ(·)[(y_t - Q(ϕ(σ), a; θ_t))^2], where y_t is the target of iteration t and ρ(·) is the action distribution <cit.>. We use σ_opt^* to record the new BMC from each swap (Line 9), which is returned at the end (Line 10).

Figure <ref> illustrates the algorithm with ℓ=3 and three queries q_1, q_2, and q_3. The initial BMC σ_1 = YXXYYX has an (estimated) query cost of 𝒞_1=175 (Figure <ref>). We select position a_1=3 and swap the 3rd and the 4th bits to get σ_2 = YXYXYX, and the cost decreases to 𝒞_2=90 (Figure <ref>). Next, we select position a_2=1 and swap the 1st and the 2nd bits to get σ_3 = YXYXXY with cost 𝒞_3=48 (Figure <ref>). We store all intermediate results in memory MQ for learning the DQN model in Figure <ref>, where we show the BMCs without encoding. Figure <ref> shows the cost ratios, i.e., 𝒞_t/𝒞_1, which decrease as t increases (Figures <ref> to <ref> show three of the steps). The learned BMC approaches the optimum in this process.

Algorithm cost. The algorithm runs T·M iterations, each of which involves three key operations: bit-swap position prediction, reward calculation (cost estimation), and model training. Their costs are O(1), O(𝒞_t), and O(𝕋_θ), respectively. The total time cost is then O(T·M·(1 + 𝒞_t + 𝕋_θ)). Here, T·M is a constant, while O(𝕋_θ) is determined by the model structure. Our cost estimation yields O(𝒞_t) = O(1), thus enabling an efficient BMC search.

§ EXPERIMENTS

We aim to evaluate (1) the efficiency and (2) the effectiveness of the proposed cost estimation algorithms, as well as (3) our BMC learning algorithm vs. other SFCs, including learning-based ones.

§.§ Experimental Settings

Our cost estimation algorithms (i.e., GC and LC) and our BMC learning algorithm are implemented in Python (available at <https://anonymous.4open.science/r/LearnSFC-B6D8>). The learning of BMCs is supported by TensorFlow. We run the experiments on a desktop computer running 64-bit Ubuntu 20.04 with a 3.60 GHz Intel i9 CPU, 64 GB RAM, and a 500 GB SSD.

Datasets. We use two real datasets: OSM <cit.> and NYC <cit.>. OSM contains 100 million 2-dimensional location points (2.2 GB). NYC contains some 150 million yellow taxi transactions (8.4 GB). After cleansing incomplete records, we retain the pick-up locations (2-dimensional points) of 100 million records. Additionally, we follow the study of the state-of-the-art competitor, the BMTree <cit.>, and use two synthetic datasets, each with 100 million points: UNI and SKEW, which follow uniform and skewed distributions, respectively.

Queries. We again follow the BMTree study and generate synthetic query workloads. Specifically, 1,000 synthetic queries are used for SFC learning, while 2,000 queries are generated separately for testing. The queries are of uniform size and follow the distributions of their respective datasets. To assess our cost estimation algorithms (Sections <ref> and <ref>), we employ square queries, since the query shape does not impact the cost estimation time.

Evaluation metrics. The core evaluation metrics are (1) the cost estimation time, (2) the average number of block accesses per query when using different SFC orderings for query processing (in PostgreSQL), and (3) the SFC learning time.

Parameter settings.
Table <ref> summarizes the parameter values used, with default values in bold. In the table, n denotes the number of queries; δ denotes the edge length of a query; d denotes the data dimensionality; and N denotes the dataset cardinality. We randomly sample from the datasets described above to obtain datasets of different cardinalities. For SFCs, a key parameter is the number of bits ℓ, which impacts the curve value mapping efficiency substantially. To evaluate the cost estimation efficiency, we restrict ℓ to 18, beyond which a naive local cost baseline becomes computationally infeasible. In the later experiments, we set ℓ=20 following the BMTree to balance the computational costs of curve value mapping and cost estimation. The BMTree has two additional parameters: the dataset sampling rate ρ used to form a subset for query cost estimation, and the depth h of the space partitioning.

§.§ Cost Estimation Efficiency

We first evaluate the efficiency of our algorithms (excluding initialization) for computing the global cost GC and the local cost LC (Algorithm <ref>), which are based on Equations <ref> and <ref>. We use IGC and ILC to denote the initialization steps of the two costs, respectively. As there are no existing efficient algorithms to compute these costs, we compare with baseline algorithms based on Equations <ref> and <ref>, denoted by NGC and NLC. We vary the number of queries n, the query size (via δ), and the number of bits ℓ. We run experiments for 2- to 4-dimensional spaces. Due to page limits, we focus on the 2-dimensional space (the algorithms' comparative results are similar for d ∈ {3, 4}). As the cost estimation is data independent, a dataset is not needed to study its efficiency. The queries are generated at random locations.

§.§.§ Efficiency of GC

Figures <ref> and <ref> show the impact of n and δ, respectively. Since GC takes O(d·ℓ) time to compute (after the initialization step), its running time is unaffected by n and δ. NGC takes O(n·d·ℓ) time. Its running time grows linearly with n and is unaffected by δ, as shown in the figures. Figure <ref> shows that the running times of GC and NGC both increase with ℓ, which is consistent with their time complexities. Since the relative performance of our algorithm and the baseline is stable when ℓ is varied, we use a default value of 10 instead of the maximum value of 18, as mentioned earlier, to streamline this set of experiments. Figure <ref> shows the impact of d. Here, we report the performance gain (i.e., the running time of NGC over that of GC) instead of the absolute running times, which are of different scales when d is varied, making the relative performance difficult to observe. We see that GC is faster than NGC by 24x. Overall, GC is consistently faster than NGC, with up to more than an order of magnitude performance gain, which confirms the high efficiency of GC.

§.§.§ Efficiency of LC

Figures <ref> to <ref> show the running times of computing the local costs. The performance patterns of LC and NLC are similar to those observed above for GC and NGC, and they are consistent with the cost analysis in Section <ref>. The performance gains of LC are even larger, as its pre-computed pattern tables enable extremely fast local cost estimation.
As Figure <ref> shows, LC outperforms NLC by five orders of magnitude when d = 4.

§.§.§ Initialization Costs of GC and LC

Table <ref> shows the running times of IGC and ILC, which increase with n, because the initialization steps need to visit all range queries to compute the partial global costs and to prepare the pattern tables, respectively. These running times are smaller than those of NGC and NLC, confirming the efficiency of the proposed cost estimation algorithms. Similar patterns are observed when varying δ, ℓ, and d; these results are omitted for brevity. We do not report results for n=2^0 (i.e., n=1), as no initialization is needed for a single query.

§.§ Effectiveness of Cost Estimation

We next explore the applicability and effectiveness of our GC and LC cost estimations by using them to replace the built-in cost estimation of the state-of-the-art SFC learning algorithm, the BMTree. We denote the resulting variants by BMTree-GC and BMTree-LC. The original BMTree uses a data sampling-based empirical cost estimation method, which we denote by BMTree-SP. We report the time cost of reward calculation for the three variants, as their other steps are the same. After the SFCs are learned by the three variants, we build a B^+-tree with each SFC in PostgreSQL to index the input dataset. We measure the average number of block accesses reported by PostgreSQL for processing each of the queries described earlier.

§.§.§ Varying the Dataset Cardinality

We start by varying the dataset cardinality N from 10^4 to 10^8. Figure <ref> shows the results on the OSM dataset (the results on the other datasets show similar patterns and are omitted for brevity; the same applies below). BMTree-GC and BMTree-LC have constant reward calculation times, since GC and LC are computed in constant time. In comparison, the reward calculation time of BMTree-SP increases linearly with the dataset cardinality, as BMTree-SP builds intermediate index structures based on sampled data points for query cost estimation. When N increases, the number of sampled data points also increases. At N = 10^8 (the default sampling rate is ρ = 0.001, i.e., BMTree-SP is run on a sampled set of 10^5 points), the reward calculation time of BMTree-SP (more than 7 hours) is 36x and 474x higher than those of BMTree-LC (737 s) and BMTree-GC (57 s), respectively.

In terms of the query costs, the indices built using all three algorithms require more block accesses as N increases, which is expected. Importantly, all three algorithms incur similar numbers of block accesses for the same N value. This suggests that the GC and LC cost estimations can be applied to improve the curve learning efficiency of the BMTree without adverse effects on query efficiency. In general, BMTree-LC offers lower query costs than BMTree-GC. Thus, applications that are more sensitive to query costs may use BMTree-LC, while those that are more sensitive to index building costs may use BMTree-GC.

§.§.§ Varying the Number of Queries

Next, we vary the number of queries used in curve learning, n, from 100 to 2,000. We see that BMTree-LC and BMTree-GC consistently outperform BMTree-SP by one and two orders of magnitude in terms of reward calculation time, respectively (Figure <ref>). We note that the computation times of BMTree-LC and BMTree-GC now vary with n, which differs from what was reported in Figures <ref> and <ref>. This happens because the BMTree uses different BMCs in different sub-spaces to accommodate different data and query patterns.
As there are more queries, more patterns may need to be considered, resulting in more BMCs, each of which requires a separate GC and LC cost estimation. Thus, the cost estimation times grow with the number of queries n. Meanwhile, the query costs of the three algorithms are again close, e.g., 9,199, 9,248, and 10,462 block accesses for BMTree-LC, BMTree-SP, and BMTree-GC, respectively, when n is 1,500. The higher query cost of BMTree-GC shows that while GC is extremely simple and efficient, it may not find the most query-efficient curves, which underlines the importance of the LC cost estimation algorithm. We further observe a slight drop in the number of block accesses as n increases. Intuitively, using more queries for curve learning can lead to curves that better suit the query workload.

§.§.§ Varying the Sampling Rate and the Depth of the BMTree

Two alternative approaches to improving the curve learning efficiency of the BMTree are (1) to reduce its data sampling rate ρ and (2) to reduce the depth of its space partitioning h. In this set of experiments, we study how these two parameters impact the reward calculation time and the query cost of the resulting SFCs. In particular, we vary ρ from 10^-4 to 10^-2 (a total of 9 values, cf. Table <ref>), and we vary h from 5 to 10. Figure <ref> plots the results on the SKEW and OSM datasets. BMTree-SP has three result polylines: BMTree-SP-6, BMTree-SP-8, and BMTree-SP-10, each of which uses a different h value, while the points on each polyline represent the results of different ρ values (points on the right come from larger ρ values). BMTree-GC and BMTree-LC are plotted with one polyline each, as they are not impacted by ρ. The points on these polylines represent the results of different values of h (points on the right correspond to larger h values). We see that a larger h value tends to lead to lower query costs, while it also yields a longer reward calculation time. Powered by the LC cost estimation algorithm, BMTree-LC reduces the reward calculation time by at least an order of magnitude while achieving the same level of query costs (i.e., its polyline lies at the bottom left of the figure). BMTree-GC can also be very fast at reward calculation, while it may suffer in query performance.

§.§ Query Efficiency with BMC Learning

We proceed to study the BMC learning efficiency of our algorithm and the query efficiency of the indices built using the learned BMCs.

Competitors. We compare with five different SFC-based ordering techniques. (1) QUILTS <cit.> orders data points by a BMC derived by a curve design method, as described in Section <ref>. We implement it according to its paper, as the source code is unavailable. (2) ZC <cit.> orders data points by their Z-curve values. (3) HC <cit.> orders data points by their Hilbert curve values. (4) LC, which is also called the C-curve, orders data points lexicographically by their dimension values <cit.>. (5) The BMTree <cit.> orders data points by multiple BMCs in different sub-spaces. We use its released code (with h=8 and ρ = 0.001 to balance the reward calculation time and the query costs, cf. the `⋆' points on BMTree-SP-8 in Figure <ref>). We cannot compare with the recent learned SFC, LMSFC <cit.>, because its source code and some implementation details are unavailable.
We do not compare with RSMI <cit.> as it has been shown to be outperformed by the BMTree <cit.>.For all techniques, we use the curves obtained to order the data points and build B^+-trees in PostgreSQL for query processing, and we report the average number of block accesses as before. §.§.§ Overall Results   Figure <ref> shows the average number of block accesses on all four datasets. outperforms all competitors consistently. On SKEW, the advantage ofover the BMTree is the most pronounced. It reduces the average number of block accesses by 28x (111 vs. 3,084) and by 6x (111 vs. 674) in comparison with the BMTree and QUILTS, respectively.On NYC, the advantage ofover the BMTree is the least, yet it still requires only 2,638 block accesses which is fewer than that of the BMTree at 3,448. These results suggest thatis highly efficient at reducing the query costs across diverse datasets.LC is the worst, which is expected as LC curves fail to preserve the data locality. The BMTree and QUILTS outperform LC, ZC, and HC on real data such as NYC, where they benefit more from the query based optimizations. However, there are no consistent results across the different datasets. We conjecture that fine-tuning of the parameter values of h and ρ may be needed for the BMTree over each different dataset. Such fine-tuning is not required by . §.§.§ Varying the Dataset Cardinality   We further study the impact of dataset cardinality N. Figure <ref> shows the results. Like before, theaverage number of block accesses increases with N, which is expected.is again the most efficient in terms of query costs, needing at least 39% fewer block accesses than the BMTree (4.0 vs. 6.6 when N = 10^4), and the advantage is up to 74%(1,044 vs. 4,131 when N = 10^7).We report the SFC learning times of the BMTree andwhen varying N in Table <ref>.We see thatis much faster than the BMTree at SFC learning and that the advantage grows with N. This is because the cost estimation (i.e., reward calculation) in the BMTree is much slower than that in , as shown in the last subsection. The cost estimation time dominates when there are more data points for the BMTree, while the cost estimation time ofremains constant when varying N. LC, ZC, and HC are not learned, and they do not take any learning time. QUILTS takes less than 1 second, as it only considers a few curve candidates (which are generated based on query shapes) using a cost model. We have used our cost estimation algorithms in our implementation of QUILTS, as the original cost model is prohibitively expensive.§.§.§ Varying the Aspect Ratio of Queries   Figure <ref> shows the query costs when varying the query aspect ratio.Here,shows a stronger advantage over the competitors on queries that are “stretched”, while LC also better suits the queries that are long and thin (16:1) which is intuitive. When the aspect ratio is 1:1, , QUILTS, and ZC share almost the same query performance because they all tend to form a `' shape to fit square queries.The BMTree is again outperformed by , because of its less flexible learning scheme (i.e., learning for only up to h bits), whilecan learn a BMC scheme with all ℓ bits (ℓ = 20 by default).§.§.§ Varying the Edge Length of Queries   Figure <ref> shows that the average number of block accesses grows with the query edge length, as expected.Here,again outperforms the competitors consistently, further showing the robustness of . § CONCLUSIONS AND FUTURE WORK   We studied efficient cost estimation for a family of SFCs, i.e., the BMCs. 
Our cost algorithms can compute the global and the local query costs of BMCs in constant time given n queries and after an O(n)-time initialization. We extended these algorithms to the state-of-the-art curve learning algorithm, the BMTree, which originally measured the effectiveness of SFCs by querying the data points to be indexed. Experimental results show that the proposed algorithms are capable of reducing the cost estimation time of the BMTree by over an order of magnitude with little or no impact on the query efficiency of the learned curves. We further proposed a reinforcement learning-based curve learning algorithm. The resulting learned BMCs are shown to achieve lower query costs than those of the BMTree and other baselines under nearly all settings tested. In future work, it is of interest to design cost estimation algorithms for non-BMCs, e.g., HC, and to use learning-based techniques to build more efficient multi-dimensional indices.
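For readers who want to experiment with the curve orderings compared above, the Z-curve value used by the ZC baseline is obtained by interleaving the bits of a point's column indices. The following Python sketch is ours and purely illustrative (the function name and the two-dimensional, 16-bit setting are assumptions, not the authors' released code):

```python
def z_value(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of two column indices, most significant bit first."""
    value = 0
    for i in reversed(range(bits)):
        value = (value << 1) | ((x >> i) & 1)  # i-th bit of x
        value = (value << 1) | ((y >> i) & 1)  # then the i-th bit of y
    return value

# Example: sort points by their Z-curve value before bulk-loading a B+-tree.
points = [(3, 5), (2, 2), (7, 1)]
points.sort(key=lambda p: z_value(*p))
```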
http://arxiv.org/abs/2312.16355v1
{ "authors": [ "Guanli Liu", "Lars Kulik", "Christian S. Jensen", "Tianyi Li", "Jianzhong Qi" ], "categories": [ "cs.DB" ], "primary_category": "cs.DB", "published": "20231226233546", "title": "Efficient Cost Modeling of Space-filling Curves" }
Conversational Question Answering with Reformulations over Knowledge Graph

Lihui Liu (University of Illinois at Urbana-Champaign, {lihuil2, blaineh2, htong}@illinois.edu), Blaine Hill^*, Boxin Du (Amazon, {boxin, feiww}@amazon.com), Fei Wang^†, Hanghang Tong^*
===================================================================================================

Copyright © 2024 by SIAM. Unauthorized reproduction of this article is prohibited.

Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art methods of ConvQA often struggle with inexplicit question-answer pairs. These inputs are easy for human beings to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model that utilizes question reformulations generated by large language models (LLMs) to improve ConvQA performance. It adopts a teacher-student architecture where a teacher model learns question representations using human writing reformulations, and a student model learns to mimic the teacher model's output via reformulations generated by LLMs. The learned question representation is then used by an RL model to locate the correct answer in a KG. Extensive experimental results show that our model outperforms state-of-the-art ConvQA models.

Keywords: Knowledge graph conversational question answering; Reinforcement learning

§ INTRODUCTION

Knowledge graphs (KGs) are collections of nodes (representing real-world entities, events, and objects) and edges (denoting relationships between nodes). Knowledge graph question answering (KGQA) has long been a focus of study, with the goal of answering queries using information from a KG. However, traditional KGQA approaches often only consider single-shot questions, rather than the iterative nature of real-world conversation. Conversational question answering (ConvQA) addresses this gap by allowing users to interact with a QA system conversationally. ConvQA systems have had much success, as seen in Apple's Siri, Amazon's Alexa and OpenAI's ChatGPT. ConvQA involves a multi-turn process consisting of users iteratively asking natural language questions, a system deciphering both the conversation context and underlying queries, and the system returning natural language answers. Some models create rich, human-like responses <cit.>; these methods are known as `dialogue' conversation models. For ConvQA over KGs, in contrast, a corresponding entity in the KG is sufficient to answer the input question; we call these `non-dialogue' conversation models. In this paper, we focus on the non-dialogue ConvQA task as shown in Example 1. Once the system obtains an answer embedding, it is sufficient to map this to the corresponding node in the KG
and forgo the need to train a natural language decoder  <cit.> to create rich, human-like responses  <cit.> for the benefit of the user: ConvQA models which include this decoder are known as 'dialogue' models, while models that do not are known as 'non-dialogue' models.In this paper, the authors focus on the non-dialogue ConvQA task as shown in Example 1.0.8 Example 1:q_1: Who is the author that wrote the book Moby-Dick?Reformulation1: Author of the book?Reformulation2: Who wrote Moby Dick?a^1: Herman Melvilleq_2: When was he born?Reformulation1: His birthdate is?Reformulation2: When was Herman Melville born?a^2: 1 August 1819q_3: And where is he from originally?Reformulation1: His place of birth?Reformulation2: Where did he grow up?a^3: Manhattanq_4:How about his wife?Reformulation1: Where is Herman Melville's wife from?Reformulation2: Herman Melville's wife's place of birth?a^4: Bostonq_5: Did they make a movie based on the book?a^5: yes In general, a conversation is typically initiated with a well-formedquestion (i.e., q_1) followed by inexplicit follow-up questions (e.g., q_2 - q_5). The initial question (q_1) often includes a central topic entity of interest ("Moby-Dick"), while the topic entities of follow-up questions (q_2 - q_5) are not explicitly given. Additionally, the topic entity of the conversation may shift over time (e.g., inquiring about the birth time of Herman Melville in q_2).To operate ConvQA over KG, different methods have previously been proposed. For instance, Magdalena et al. in  <cit.> use named entity recognition (NER) methods to detect potential KG topic entities in the conversation and employ multi-agent reinforcement learning starting from these entities to find answers; the performance of this method is largely dependent on the quality of the detected entities. Philipp et al. in  <cit.> propose finding a conversation-related subgraph and using heuristic-based methods to identify the answer within the subgraph. The subgraph is expanded as new questions are asked. Endri et al. in  <cit.> use contrastive learning to make KG entity embeddings dissimilar from one another, enabling the model to separate correct answers from incorrect answers.Despite the above achievements, inexplicit input data hinders a ConvQA system's ability to find correct answers  <cit.>.Two common linguistic phenomena which undermine the semantic completeness of a query in the conversation are: anaphora and ellipsis  <cit.>.Anaphora refers to the phenomenon of an expression that depends on an expression in the previous context. For example, in Example 1, the word “he" in q_2 refers to a^1.Meanwhile, ellipsis refers to the phenomenon of the omission of expressions in the previous context. For example, the complete form q_4 should be "Where is Herman Melville's wife from?".To address this issue, several methods aim to learn a reformulation of the input query, rewriting the original question in a more meaningful way. 
Then, one can search for the answer using this new reformulation with existing techniques  <cit.>.Despite many of the existing question rewriting models have shown potential to enhance ConvQA performance, as demonstrated by prior research <cit.>, their generated reformulations fall short compared to human-generated reformulations  <cit.>.In this paper, we present , a new reinforcement learning (RL) model for non-dialogue conversational question answering (ConvQA) with large language model (LLM) generated reformulations.First, we fine-tune existing LLMs, GPT2  <cit.> and Bart  <cit.>, to generated high quality reformulations, using human writing reformulations as the ground truth. Second, to further increase the convQA performance, we propose a teacher-student architecture to achieve near human-level performance [Human-level performance refers to the ability to find answers based on real human writing reformulations during testing.]. Specifically,(1) directly trains a teacher model with human writing reformulations in the training data, and (2) indirectly trains a student model with LLMs generated reformations to mimic the teacher model's output so that it can approach human-level performance. Note that the human writing reformulations only exist in the training and validation data. Between turns and prior to the start of QA, identifying the topic entity is a necessity.performs this by examining any previous current topic entities and (re)evaluates based on a feed forward neural network (NN) classifier. Ifdetermines that the topic entity has likely changed, it selects the topic entity via a second NN.Lastly, to locate an answer, a RL model walks over the KG, sampling actions from a policy network to guide the direction of the walk and identify candidate answers. Our experiments demonstrate the effectiveness ofand its superiority over the state-of-the-art conversational question answering baselines.The main contributions of this paper are: * Analysis. We demonstrate that although LLMs are good question reformulators, their performance lags behind human-level performance.* Algorithm. We propose a RL based modelwhich utilizes the question reformulations to improve the QA performance. The proposed teacher-student model can help us achieve near human-level performance with LLMs generated reformulations. * Empirical Evaluations. The experimental results on several real-world datasets demonstrate that the proposedconsistently achieves state-of-the-art performance. § PROBLEM DEFINITION Table  <ref> gives the main notation used throughout this paper.Uppercase letters are used for matrices, sets or constant value (e.g., C, T). Bold lowercase letters are for vectors or embedding (e.g., 𝐫_i) and lowercase letters (e.g., s, a_i) for scalars or variables. A KG can be denoted as 𝒢=(𝒱, ℛ, ℒ) where 𝒱 = {v_1, v_2, ..., v_n} is the set of nodes/entities, ℛ = {r_1, r_2, ..., r_m} is the set of relations and ℒ is the list of triples. Each triple in the KG can be denoted as (h, r, t) where h ∈𝒱 is the head (i.e., subject) of the triple, t ∈𝒱 is the tail (i.e., object) of the triple and r ∈ℛ is the edge (i.e., relation, predicate) of the triple which connects the head h to the tail t.The embedding of a node or relation type is represented by bold lowercase letters, e.g., 𝐞_i, 𝐫_i. 
Each triple/edge (h, r, t) in the KG has a unique edge embedding which is denoted as 𝐮_r.Conversational question answering over a KG aims to iteratively answer multiple related questions from the users.Unlike dialog question answering which wants the chatbot to imitate the response of a human, ConvQA over KG only requires the model to return entities in the knowledge graph. We formally define the key terminologies used in this paper as follows. Conversation. A conversation C with T turns is made up of a sequence of questionsq_1, q_2, ..., q_T and their corresponding answers Ans = { a^1, a^2, ..., a^T }, such that C = ⟨(q_1, a^1), (q_2, a^2), ..., (q_T , a^T )⟩. Example 1 in Introduction contains T = 5 turns.We assume that q_1 is well-formed, and all other q_t are inexplicit.Question. Each question q_t is a sequence of words q_t = (w_1^t, . . . , w_Ω_t^t), where Ω_t is the number of words in q_t.We assume that each question can be mapped to a unique relation r_q_t in the KG and make no assumptions on the grammatical correctness of q_t. Topic Entity. We assume that each q_t has a topic/central entity v_q_t which the user wants to ask about. We assume that the topic entity of q_1 is given in the training data, while the topic entities for other questions q_2, ..., q_T are not given.For example, for the five questions in Example 1, their topic entities are Moby Dick, Herman Melville, Herman Melville, Moby Dick and Moby Dick, respectively.The topic entity of q_1 is presumed the main topic entity which is denoted as v_q_1. Answer. Each answer a^t to question q_t is a (possibly multiple, single, or null-valued) set of entities in the KG. We assume that all the answer entities exist in the KG, except true or false questions.Reformulation. A reformulation is a sentence which expresses the same information as the input question, but in a different way. We assume in the training data, each question has multiple reformulations. Turn. Each question in C, including its reformulations and corresponding answers, constitutes a turn t_i. Each turn t_i contains a question q_i, the answer a^i and reformulations of q_i. Based on the above, we formally define the problem of ConvQA over KG as:Conversational Quesion Answering over Knowledge Graph Given: (1) A knowledge graph G, (2) the training set of conversations where each question contains multiple human writing reformulations, (3) the test set of conversations where no question reformulation is provided; Output: (1) The trained model, (2) the answer for each question in each conversation of the test set. §.§ Preliminaries: Reinforcement Learning The RL problem can usually be formulated as Markov Decision Processes (MDPs). An MDP is defined by M = (S, A, R, P, γ), where S is the state space and A is the action space. R: S × A ⟶ R is the reward function from the environment which maps a state-action pair to a scalar which denotes how much reward the agent can receive,and P: S × A ⟶ S is the transition function which defines the probability of transiting from a state-action pair to the next state. γ is a discount factor where γ∈ [0, 1]. 
When modeling the KGQA problem as an RL task, the agent will learn a policy to find the correct answer in the KG, and make decisions guided by the policy function when it receives new queries.§ PROPOSED METHODDue to anaphora and ellipsis, current ConvQA methods often rewrite input queries to generate more understandable reformulations.In this paper, we follow this idea by fine-tuning two existing LLMs, GPT2 and Bart, to generate reformulations.Despite GPT2 and Bart are good reformulation generators, their performance still lags behind human-level performance. To further improve the performance, we propose a teacher-student architecture.The teacher model learns the question representation by using human writing reformulations, while the student model takes reformulations generated by LLMs as input, and tries to mimic the output of the teacher model, so that it can achieve the same performance as the teacher model despite using the LLMs generated reformulations. In each iteration, our model uses the conversation history and the current query to identify the current topic entity, and an RL agent travels the KG starting from the topic entity to find the answer. This process is repeated for anumber of turns until the conversation is completed. Figure  <ref> illustrates the framework of the proposed . We will describe the details of each component in the following subsections.§.§ Student Model: LLMs Reformulation Encoder. The architecture is show in Figure  <ref>. A - Context Encoder.Given a question q_i = (w_1^i, w_2^i, . . . , w_Ω_t^i), we first add two indicator tokens ([CLS] and <s>) to the beginning and end of the question context to signify its boundary. Then, we pass the processed question context through a pre-trained BERT  <cit.> to extract contextual embeddings for each token: [𝐡_CLS, 𝐰_1, ..., 𝐰_Ω_t, 𝐡_𝐬] = BERT([CLS], w_1, . . . , w_Ω_t, <s>)where 𝐡_CLS is the embedding of the [CLS] token and 𝐡_𝐬 is the embedding of the <s> token. The context question embedding is obtained from the transformation of 𝐡_CLS and𝐡_𝐬, where FFN is a feed forward neural network.𝐡_q_i = FFN(𝐡_CLS || 𝐡_𝐬)C - Context Fusion. During training, each input question has multiple corresponding reformulations generated. For each reformulation, we use the Context Encoder to obtain its context embedding. To merge the reformulation information, we stack the embeddings of the N reformulations and the original question context embedding to create a sequence with N+1 embeddings.We treat this sequence as the embedding of a language sentence and pass it through a Transformer Encoder  <cit.> to merge them together.𝐌_𝐪_𝐢 = TRANSFORMER([𝐡_q_i | 𝐡_Ref_q_i^1 | ... | 𝐡_Ref_q_i^n])[0]where 𝐌_𝐪_𝐢 is the query embedding after merging the reformulations.D - Integrating Conversational History. Another problem in ConvQA is that the user’s inputs are often ambiguous, hampering a system's ability to give accurate answers. This is illustrated in Example 1 q_3 “And where is he from originally?". It is impossible to identify the antecedent to the pronoun `he' without any conversational history. Consequently, conversational history is vital to the success of . We use an LSTM to encode all the conversational history which is given below. 𝐥_𝐪_𝐢 = LSTM(𝐌_𝐪_𝐢)the output of the LSTM 𝐥_𝐪_𝐢 will be treated as the query embedding and be used by other components. 
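The student encoder described above (Context Encoder, Context Fusion and the conversational-history LSTM) can be sketched in PyTorch as follows. This is a minimal illustration assuming a pretrained BERT-style module is supplied; the layer sizes, number of fusion layers and all names are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class StudentQuestionEncoder(nn.Module):
    """Sketch: Context Encoder -> Context Fusion -> history LSTM.

    `text_encoder` is any module mapping (input_ids, attention_mask) to token
    embeddings of shape (batch, seq_len, dim), e.g. a frozen BERT.
    """

    def __init__(self, text_encoder: nn.Module, dim: int = 768):
        super().__init__()
        self.text_encoder = text_encoder
        self.ffn = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        fusion_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=2)  # Context Fusion
        self.history = nn.LSTM(dim, dim, batch_first=True)               # conversation history

    def encode_context(self, input_ids, attention_mask):
        tok = self.text_encoder(input_ids, attention_mask)    # (B, L, dim)
        h_cls, h_s = tok[:, 0], tok[:, -1]                    # [CLS] and <s> token embeddings
        return self.ffn(torch.cat([h_cls, h_s], dim=-1))      # context embedding h_{q_i}

    def forward(self, question, reformulations, history_state=None):
        # `question` and each element of `reformulations` are (input_ids, attention_mask) pairs.
        embs = [self.encode_context(*question)] + [self.encode_context(*r) for r in reformulations]
        stacked = torch.stack(embs, dim=1)                    # (B, N + 1, dim)
        m_q = self.fusion(stacked)[:, 0]                      # merged query embedding M_{q_i}
        l_q, history_state = self.history(m_q.unsqueeze(1), history_state)
        return l_q.squeeze(1), history_state                  # l_{q_i}, used by the RL agent
```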
Note that the reformulations used here are generated by LLMs.§.§ Teacher Model: Human Writing Reformulation Encoder Reformulations have been used by various methods to improve the performance of QA systems by creating more understandable queries. For instance, in  <cit.>, Christian et al. use a Seq2Seq-based reinforcement learning agent to transform input questions into machine-readable reformulations. In  <cit.>, Svitlana et al. propose a Transformer Decoder-based model for question rewriting.According to a study in  <cit.>, most question reformulation methodsonly improve the performance about 2-3%.Despite LLMs have exploded in popularity for all sorts of natural language tasks, the ConvQA performance based on LLMs generated reformulations is still upper bounded by human reformulations  <cit.>.To further improve the student model's performance, we propose a teacher-student approach where a teacher network is trained on human writing reformulations. The teacher network has the same network structure as the student model, but uses human writing reformulations as the input. An example is given in Figure  <ref>. During the training process, our goal is to make the output question embedding of the student model as close as possible to the output of the teacher model in the embedding space. The distance between the student's and teacher's output is measured using the L2 distance:L = ∑_q_i ∈ C [d(Υ_q_i, 𝐥_q_i)],where Υ_q_i is the output of the teacher network for input question q_i, and l_q_i is the output of the student network for the same input with reformulations. By minimizing this distance, we can ensure that the student network is producing output that is similar to that of the teacher model, even when it only has access to the synthetic reformulations. Note that the teacher model is pretrained and fixed when we train the student model. During the testing phase, given a question, we first use LLMs to generate multiple reformulations for it, then the student model is used to encode the input question with LLMs generated reformulations. The performance of directly using the teacher model on the test data is slightly inferior to our model due to the different data distribution of human writing reformulations compared to the reformulations generated with LLMs.§.§ Inferring the Topic Entity During a conversation, the topic entity may change over time. To accurately answer questions, we determine the current topic entity based on the conversation history and the current question. We use a multi-layer perception (MLP) to determine whether the topic entity unchanged. If the classifier predicts that the topic entity of the current question is not the answer to the previous question, we set the topic entity to the main topic entity v_q_1. The classifier consists of a feed forward neural network (FFN) with ReLU activation functions and a classification layer. The classification layer uses softmax on a 2D output to calculate the cross-entropy loss. Here we use the index format of PyTorch to show that the probability of v_q_i = a^i-1 is equal to the 2nd element of the 2D output. Pr(v_q_i = a^i-1)= MLP_classifier(FNN(𝐥_q_i))[1]A - Pretrain Topic Entity Selector. The goal of the Topic Entity Selector is to identify the correct topic entity within the conversation, which is an input to the RL model. 
In order to stabilize the training process for the RL model, we pre-train the parameters of the classifier using binary cross-entropy lossℒ_1 =-[ylog(Pr(v_q_i = a^i-1)) +(1 - y)log(1 - Pr(v_q_i = a^i-1)))] §.§ Question Answering After obtaining the topic entity, the next step is to find the correct entity to answer the user. We formulate this problem as a Markov decision process (MDP) which is defined bya 5-tuple (S, A, R, P, γ),where S is the state space, A is the action space, P is the state transition function and R denotes the reward function. States. Intuitively, we want a state to encode the question, the current position of the agent in the KG, and the search history information.At the ith step, the state s_i ∈ S is definedas a triple s_t = (n_i, 𝐥_q, 𝐠_i), n_i is the current entity where the agent is at; 𝐥_q is the question embedding generated by the previous method; and 𝐠_i refers to the search history information. (n_i, 𝐠_i) can be viewed as state-dependent information while (𝐥_q) is the global context shared by all states. Actions. The set of possible actions A_s from a state s_t = (n_t, 𝐥_q, 𝐠_t) consists of all outgoing edges of the vertex n_t in the KG. Formally, A_s = {(𝐫_i, 𝐮_r_i, 𝐞_e') | (n_t, r_i, e') ∈ G}.This means an agent at each state has the option to select which outgoing edge it wishes to take having knowledge of the label of the edge r_i and destination vertex e'. Note that different from most of the existing methods  <cit.> which only use 𝐀_s = (𝐫_i, 𝐞_e'), we also use the unique edge embedding 𝐮_r_i.To allow the agent to have the option of ending a search, a self-loop edge is added to every entity. In addition, we also include the inverse relationship of a triple in the graph.Transition. The transition function is defined as δ : S × A ⟶ S, which represents the probability distribution of the next states δ(s_t+1 | s_t, a_t). In the current states_t, the agent aims to choose proper actions a_t and then reach the next states_t+1 = (n_t+1, 𝐥_q, 𝐠_t+1).The n_t and g_t are updated, while the query and answer remains the same.Rewards. The model will receive the reward of R_b(s_t) = 1 if the current location is the correct answer and 0 otherwise. We setγ = 1 during the experiments.§.§ Policy Network The search policy is parameterized using state information and global context, including the search history. Specifically, every entity and relation in 𝒢 is assigned a dense vector embedding 𝐞∈ℝ^d and 𝐫∈ℝ^d respectively. The action a_t = (𝐫_r_i, 𝐮_r_i, 𝐞_e') ∈ A_t is represented as the concatenation of the relation embedding, the unique edge embedding and the end node embedding.The search history (n_1=v_q_i, r_1, n_2, ..., n_t) ∈ H consists of the sequence of observations and actions taken up to step t, and can be encoded using an LSTM:𝐠_0 = LSTM(0, [𝐞_v_q_i || 𝐥_q_i])𝐠_t = LSTM(𝐠_t-1, 𝐚_t-1), t > 0where 𝐥_q_i is the question embedding to form a start action with 𝐞_v_q_i. The action space is encoded by stacking the embeddings of all actions in A_t:𝐀_t ∈ℝ^|A_t|× 3d. And the policy network π is defined as:π_θ(a_t |s_t) = δ(𝐀_t ×𝐖_2ReLU(𝐖_1 [𝐧_t || 𝐥_q_i || 𝐠_t]))where δ is the softmax operator. §.§ Knowledge-Based Soft Reward Due to the weak supervision in ConvQA, the agent will receive a positive reward until it arrives at the target entity. Such delayed and sparse rewards significantly slow the convergence. 
To address the issue of weak supervision and sparsity of rewards in ConvQA, we assign a soft reward to entities other than the target answer to measure the similarity between them.This helps to speed up convergence and mitigate incompleteness in the KG. Specifically, the soft reward is used to measure the similarity between the current entity n_t identified by our model and the ground truth answer a^t.We use ComplEx  <cit.> to learn the initial entity embedding and relation embedding for all nodes and edges in the knowledge graph. The probability that n_t is the correct answer is calculated by Pr(n_t | 𝐥_q_i, v_q_i, 𝒢) = Re(<𝐥_q_i, 𝐞_n_t, 𝐞_v_q_i>) We propose the following soft reward calculating strategyR(s_t) = R_b(s_t) + (1 - R_b(s_t))Pr(n_t | 𝐥_q_i, v_q_i, 𝒢) Namely, if the destination n_t is a correct answer according to 𝒢, the agent receives reward 1. Otherwise the agent receives a fact score weighted by a pretrained distribution: Pr(n_t | 𝐥_q_i, v_q_i, 𝒢). §.§ TrainingGiven a set of conversations, we want to return the best possible answers a^*, maximizing a reward a^* = argmax_a ∑_C ∑_T R(a^i|q_i). The reward is computed with respect to the question q_i while the answer is provided in the train dataset. The goal is to maximize the expected reward of the answer returned under the policy E_a_1, ..., a_T ∼π_θ[R(s_t)]. Since it is difficult to compute the expectation, we use Monte Carlo sampling to obtain an unbiased estimate: E_a_1, ..., a_T ∼π_θ[R(s_t)] ≈1/N∑_i=1^N ∑_j=1^T R(s_t) π_θ(a_t|s_t)In the experiment, we approximate the expected reward by running multiple rollouts for each training example. The number of rollouts is fixed, We set this number to 20. We use REINFORCE  <cit.> to compute gradients for training.▽_θ E_a_1, ..., a_T ∼π_θ[R(s_t)] = ∑_i=1^T ▽_θπ_θ(a_t|s_t) R(s_t)≈1/N∑_i=1^N∑_j=1^T R(s_t) ▽_θlog( π_θ(a_t|s_t))Additionally, to encourage diversity in the paths sampled by the policy during training, we add an entropy regularization term to our cost function, as proposed in  <cit.>.H_π_θ(., s) = - ∑_a ∈ A_sπ_θ(a|s) logπ_θ(a|s)H_π, θ is added to the update to ensure better exploration and prevent the agent from getting stuck in local optima. This final objective is:E_a_1, ..., a_T ∼π_θ[R(s_t)] + λ H_π_θ(., s)After training, in the testing phase, given a query, we rank all the entities in the KG based on their probability of being the correct answer. We let the policy network keep top-k most likely paths according to beam search, and we rank them according to their possibilities. For all the other entities which are not in the top-k candidates, we use ComplEx  <cit.> to rank them according to Eq. (<ref>).§ EXPERIMENTS In this section, we evaluate the performance of the proposedalgorithm on several public datasets. Our aim is to answer the following questions: (1) How accurate is the proposedalgorithm for ConvQA? (2) How efficient is the proposedalgorithm? The code and datasets will be made publically available upon acceptance of the paper.§.§ Experimental Setting We use two datasets in the experiments: ConvQuestions  <cit.> and ConvRef  <cit.>. ConvQuestions contains a total of 6,720 conversations, each with 5 turns. ConvRef contains a total of 6,720 conversations, each with 5 turns. All the conversations in ConvQuestions and ConvRef belong to one of the five domains: "Books", "Movies", "Soccer", "Music", and "TV Series". 
Each domain contains 1344 training conversations, 448 validation conversations and 448 test conversations.The details of these datasets can be found in Appendix.Both ConvQuestions and ConvRef use Wikidata [<https://www.wikidata.org/wiki/Wikidata:Database_download>] as their background KG. However, the full Wikidata KG is extremely large, containing approximately 2 billion triples. Therefore, in the experiment, we sample a subset of triples from Wikidata. We first take the overlapped entities between the Wikidata and the QA datasets, and then we further obtain all the one-hop neighbours of these overlapped entities. The one-hop neighbors are retrieved from both the original data dump and also the entities' corresponding online Wikidata websites.We compare the performance of our method, , with four baselines: Convex  <cit.>: this method detects answers to conversational utterances over KGs by first extracting a subgraph, then identifying answers in the subgraph.Conqer  <cit.>: this is the current state-of-the-art baseline. It uses RL with reformulations to find answers in the KG.OAT  <cit.>: this Transformer-based model takes a JSON-like structure as input to generate a Logical Form (LF) grammar that can model a wide range of queries on the graph. It finds answers by applying the LF. Focal Entity  <cit.>: this is a novel graph-based model to find answer by graph neural network. Two LLMs are used to generate reformulations for the input query, which are GPT2  <cit.> and Bart  <cit.>. For each input question, we generate multiple reformulations and use attention mechanism to aggregate them inside the model.We adopt the following ranking metrics which are also employed by the previous baselines: (1) Precision at the top rank (P@1); (2) Mean Reciprocal Rank (MRR) is the average across the reciprocal of the rank at which the first context path was retrieved; (3) Hit ratio at k (H@k/Hit@k) is the fraction of times a correct answer was retrieved within the top-k positions.The details of all the datasets and experiment environment can be found in Appendix.§.§ Main Results In this subsection, we teston conversational question answering tasks and compare it with other baseline methods. A - Overall performance on ConvQA datasets. Table  <ref> compares the results ofwith baselines on the ConvQuestions and ConvRef datasets.As we can see,outperforms the previous baselines in the H@5 and MRR metrics On ConvQA. For H@5,performs 4.5% better than CONQUER and 20% better than CONVEX. In terms of MRR, CONVEX has the lowest performance, which is 13.7% worse than .CONQUER has the second highest performance, but it is also 1% lower than . For OAT, because its source code is not available, we directly adopt its results from  <cit.>. We can find that it has the highest P@1 compared to other methods. However, its MRR is 7.3% points lower than that of . For Focal Entity, it has the second highest P@1 and the third highest MRR. On the ConvRef dataset,also has similar performance. It achieves the highest Hit@5, which is 5% better than that of CONQUER. also has the second highest P@1 and MRR compared with other baselines.Due to the unavailability of the OAT source code and the failure to run Focal Entity on the ConvRef dataset, we are unable to include their results in our analysis of the ConvRef dataset.B - Performance on different domains. We further investigate the ranking performance ofacross different domains for both benchmarks. 
Table  <ref> illustrates detailed ranking results for H@3, H@5 and [email protected] the results show,outperforms other baselines on most domains in the ConvQuestions benchmark. On average, it achieves a 13.7% improvement in H@8 and 6.6% improvement in H@5 compared to the second highest baseline CONQUER. It performs slightly worse on H@3, with an average of 0.58% lower than CONQUER. The relatively poor performance on the Books domain is likely due to the presence of “yes/no" questions and queries regarding the plot, making it difficult for the topic entity selector to accurately determine the topic entity, posing a challenge for the model's predictions. Similar results are also observed on the ConvRef dataset.§.§ Ablation Studies and Efficiency ResultsIn this subsection, we show the effectiveness of by ablation Studies. A - The effectiveness of the reformulations. In this subsection, we demonstrate the effectiveness of using different reformulations in the model.Two large language models are used to generate reformulations: GPT2 and Bart. When not using reformulations, the output of the Question Encoder is treated as the input of the LSTM directly. Table  <ref> shows the experiment results. As we can see, using reformulations can indeed increase the ConvQA performance most of the time. The performance of using GPT2 reformulations is very similar to that of using Bart reformulations. Using human writing reformulations has the best performance. We further test the effectiveness of the proposed teacher-student model, shown in Table  <ref>. If we only use the reformulations generated by LLMs, the performance is about 3% lower than that of teacher-student model. If we train the model on human writing reformulations while tests on generated reformulations, the performance is about 1.3% lower than . C - Efficiency. Figure  <ref> shows the training time and test time of different methods on ConvRef dataset. As we can see,has the shortest training and test time. While Conv has the longest training and test time.Despite the short training time of , it can still achieve better or comparable ConvQA performance compared to other baselines. E - A successful case of using reformulation. Here we present a successful instance demonstrating the efficacy of reformulation in improving comprehension. Regarding question Q3, its lack of explicitness makes it challenging for most methods to infer the user's intent to inquire about 'where was Dan Brown born?' However, leveraging reformulation facilitates the model in swiftly identifying the correct answer. Q1: Who is the author of the book Inferno?A1: Dan BrownQ2: And when was the author born?A1: 1964-06-22T00:00:00ZQ3: And where?Without reformulations, the method gives the wrong answer.With reformulations "In which city was he born?", the model can find the correct answer "Exeter". § RELATED WORK§.§ Conversational Question AnsweringVarious approaches have been used to develop ConvQA systems.For instance, in  <cit.>, the authors employed RL to train an agent that reformulates input questions to aid the system's understanding. In  <cit.>, an encoder-decoder model is used to transform natural language questions into logical queries for finding answers. In  <cit.>, a Transformer model is used to generate logical forms and graph attention is introduced to identify entities in the query context.Other systems, such as Google's Lambda <cit.>, Apple's Siri, and OpenAI's ChatGPT, are also pursuing this task. 
§.§ Knowledge Graph Question Answering

While knowledge graph question answering has been researched for some time, many of the existing methods primarily focus on answering single-turn questions <cit.> or complex questions <cit.>. For example, Zhang et al. <cit.> use a KG as the environment and propose an RL-based agent model that navigates the KG in order to find answers to input questions. Similarly, in <cit.>, the authors use RL models to find paths in the KG for answering input queries. Other studies, such as <cit.>, integrated RL with other methods to create more human-like systems. Some other works try to use RL to tackle multi-turn conversations. For example, in <cit.>, the authors proposed a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. In <cit.>, instead of using a random walk agent, an adaptive path generator is developed with several atomic operations to sequentially generate relation paths until the agent reaches the target entity. However, only a few of these studies have attempted to utilize reformulations to enhance KGQA performance, which is the focus of this paper.

§.§ Question Rewriting

Question rewriting aims to reformulate an input question into a more salient representation. This can improve the accuracy of search engine results or make a question more understandable for a natural language processing (NLP) system. In <cit.>, a unidirectional Transformer decoder is proposed to automatically rewrite a user's input question to improve the performance of a conversational question answering system. In <cit.>, the authors proposed a Seq2Seq model to rewrite the current question according to the conversational history, and also introduced a new dataset named CANARD. In <cit.>, query rewrite rules are mined from a background KG and a query rewrite operator is used to generate a new question. Unlike these previous techniques, our method trains a teacher-student model with both human writing reformulations and LLM-generated reformulations. This approach helps to avoid the negative impact of low-quality generated reformulations.

§ CONCLUSION

In this paper, a model that creatively combines question reformulation and reinforcement learning over a knowledge graph (KG) is proposed to attain accurate multi-turn conversational question answering. It utilizes a teacher-student distillation approach and reinforcement learning to find answers in a KG. Experimental results demonstrate that it surpasses existing methods on various conversational question answering benchmark datasets.
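As a compact illustration of the training objective described in the Proposed Method section (the Monte Carlo REINFORCE estimate with entropy regularization), here is a minimal PyTorch-style sketch. It is our own illustrative code with assumed tensor shapes and an assumed regularization weight, not the authors' released implementation.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor,
                   entropies: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
    """log_probs, entropies: (num_rollouts, T); rewards: (num_rollouts,) soft rewards."""
    traj_log_prob = log_probs.sum(dim=1)            # sum_t log pi_theta(a_t | s_t) per rollout
    policy_term = (rewards * traj_log_prob).mean()  # Monte Carlo estimate of the expected reward
    entropy_term = entropies.mean()                 # encourages diverse paths
    return -(policy_term + lam * entropy_term)      # minimized by a standard optimizer
```

Minimizing this loss with gradient descent ascends the entropy-regularized expected-reward objective.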
http://arxiv.org/abs/2312.17269v1
{ "authors": [ "Lihui Liu", "Blaine Hill", "Boxin Du", "Fei Wang", "Hanghang Tong" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231227000305", "title": "Conversational Question Answering with Reformulations over Knowledge Graph" }
On the non-parabolicity of Sol_3

L. Bonorino, G. Nunes, A. Ramos^*, J. Ripoll^*, L. Sauer and M. Telichevesky^*
(^*) The third, the fourth and the sixth authors were partially supported by CNPq/Brazil.

January 14, 2024
=======================================================================

We prove that Sol_3, the isometry group of the Minkowski plane, is non-parabolic with respect to any left-invariant metric.

Keywords: Lie groups; parabolicity; Sol_3.

§ INTRODUCTION.

A complete Riemannian manifold M is called parabolic if any (entire) positive superharmonic function of M is constant; if the contrary happens then M is called non-parabolic. Although recently the more general notion of p-parabolicity (see <cit.>) has attracted the attention of mathematicians, there are still some interesting open problems related to parabolicity (and non-parabolicity) of manifolds. For instance, Green-Wu <cit.> conjectured that if the sectional curvature K of a Hadamard manifold M satisfies K ≤ -C/r^2, r ≥ r_0 > 0, for some positive constant C, where r = r(p), p ∈ M, is the Riemannian distance of p to a given point of M, then there exist bounded, nonconstant harmonic functions on M, which, in particular, implies non-parabolicity. Recently, L. Priebe and R. Soares proved that if the Ricci curvature of M is nonnegative and decays to 0 at most exponentially, then M is parabolic (in fact, that M is p-parabolic for any p > 1, see <cit.>). Also, a theorem of S. T. Yau <cit.> proves that if M has nonnegative Ricci curvature then any positive harmonic function is constant. Since ℝ^3 is non-parabolic, we cannot weaken the decay condition of the Priebe-Soares result to nonnegative Ricci curvature or, equivalently, replace harmonic by superharmonic in Yau's result. However, it is not known if this decay condition is sharp.

There are many conditions for proving the parabolicity of a Riemannian manifold M, closely connected with the behavior of the sectional (or Ricci) curvature K of M. These conditions are difficult to apply when K changes sign on unbounded domains of M with a uniform variation bounded from below, i.e., with K^+, |K^-| ≥ k > 0, where K^+ = max{K, 0} and K^- = min{K, 0}. An interesting class of such manifolds are Lie groups endowed with left-invariant metrics (see Theorems 2.4, 2.5 and the comments after Theorem 2.5 in <cit.>). From what is known, as we may see below, the general idea is that such Riemannian manifolds are non-parabolic, but since there are few general conditions to test parabolicity, it seems necessary to carry out an ad hoc study of these manifolds. In <cit.>, I. Holopainen proves that the Heisenberg groups, when endowed with a left-invariant metric, are always non-parabolic. Holopainen's paper seems to be one of the few works treating explicitly the parabolicity problem on a special and well-known family of Lie groups with a left-invariant metric (recall that a Lie group endowed with a left-invariant metric is called a metric Lie group, see <cit.>).

In the present manuscript, we prove that the group Sol_3, the isometry group of the Minkowski plane, is non-parabolic with respect to any left-invariant metric:

Let G be a metric Lie group.
If G is isomorphic to Sol_3, then G is non-parabolic.

When endowed with a special left-invariant metric (namely the metric which contains the largest number of symmetries), the group Sol_3 is one of the eight Thurston geometries: ℝ^3, ℍ^3, 𝕊^3, ℍ^2×ℝ, 𝕊^2×ℝ, SL(2,ℝ), Nil_3, Sol_3. Here, ℝ^n, 𝕊^n and ℍ^n are the space forms of dimension n, SL(2,ℝ) denotes the universal covering of the special linear group of 2×2 matrices, and Nil_3 is the 3-dimensional Heisenberg group. With the unique exception of 𝕊^2×ℝ (which is parabolic), all Thurston geometries are metric Lie groups. The Heisenberg group is non-parabolic by Holopainen's work; ℝ^3, ℍ^3, ℍ^2×ℝ are well known to be non-parabolic, and Theorem <ref> places Sol_3 as yet another non-parabolic Thurston geometry defined by a metric Lie group. The parabolicity (or non-parabolicity) of the remaining case, SL(2,ℝ), seems to be unknown.

The proof of Theorem <ref> uses an elementary direct approach, carried out in Section <ref>: it consists in constructing an explicit example of a positive, entire, nonconstant harmonic function in Sol_3, by choosing a special one-parameter subgroup Γ of isometries of Sol_3 and studying a certain elliptic partial differential equation in the quotient space Sol_3/Γ that comes from the Laplacian operator of Sol_3. We observe that Theorem 1, proving the existence of a nonconstant positive harmonic function in Sol_3, proves indeed that Sol_3 does not satisfy the Liouville property, according to J. Kazdan <cit.>.

§ THE SEMIDIRECT PRODUCT REPRESENTATION OF Sol_3.

Next, we introduce some definitions and state some facts that will be used in the proof of Theorem <ref>. First, let A ∈ M_2(ℝ) be a 2×2 real matrix and let, for each z ∈ ℝ, e^{Az} be its exponential, acting on ℝ^2 via left multiplication. The semidirect product ℝ^2 ⋊_A ℝ is the Lie group (ℝ^3, *), where * is the operation defined by

( p_1, z_1) * ( p_2, z_2) = ( p_1 + e^{A z_1} p_2, z_1 + z_2).

If we denote e^{Az} = ( a_11(z)  a_12(z) ; a_21(z)  a_22(z) ), the group operation of ℝ^2 ⋊_A ℝ in coordinates can be expressed as

(x_1, y_1, z_1) * (x_2, y_2, z_2) = (x_1 + a_11(z_1) x_2 + a_12(z_1) y_2, y_1 + a_21(z_1) x_2 + a_22(z_1) y_2, z_1 + z_2).

In order to regard ℝ^2 ⋊_A ℝ as a metric Lie group, we next present the canonical left-invariant metric of ℝ^2 ⋊_A ℝ (see Meeks-Pérez <cit.> for more details). The canonical left-invariant metric on ℝ^2 ⋊_A ℝ is such that {∂_x, ∂_y, ∂_z} is an orthonormal basis at the origin (0,0,0). In coordinates, the vector fields

E_1 = a_11(z) ∂_x + a_21(z) ∂_y,  E_2 = a_12(z) ∂_x + a_22(z) ∂_y,  E_3 = ∂_z

form an orthogonal frame of left-invariant vector fields extending {∂_x, ∂_y, ∂_z} at the origin. Using this notation, it is possible to obtain a one-parameter family of metric semidirect products ℝ^2 ⋊_A ℝ which, up to rescaling, will serve as models for the group Sol_3 endowed with any left-invariant metric.

Let Sol_3 be the group of isometries of the Minkowski plane. Then, if g is any left-invariant metric on Sol_3, there exists a ≥ 0 such that the metric Lie group (Sol_3, g) is, up to homothety, isomorphic and isometric to ℝ^2 ⋊_A ℝ (endowed with its canonical left-invariant metric), where

A = ( 1  a ; 0  -1 ).

The proof of Proposition <ref> is carried out in Section 2.7 of <cit.> with a minor distinction. There, the authors prove that, when endowed with a left-invariant metric, Sol_3 is isomorphic and isometric to a semidirect product ℝ^2 ⋊_B ℝ, where B = ( 0  c_1 ; 1/c_1  0 ) for some c_1 ≥ 1. The stated representation with the matrix A of (<ref>) follows after observing that A and B as above are congruent (which makes ℝ^2 ⋊_A ℝ and ℝ^2 ⋊_B ℝ both isomorphic and isometric) when a = (c_1^2 - 1)/c_1. Henceforward, Sol_3 will denote the metric Lie group modelled by ℝ^2 ⋊_A ℝ, where a ≥ 0 and A is given by (<ref>).

§ THE PROOF OF THEOREM <REF>.
In this section, we show that Sol_3 is non-parabolic by exhibiting an explicit example of an entire, positive, nonconstant harmonic function. First, we prove Theorem <ref> in the case when a = 0, that is, when Sol_3 is modelled by ℝ^2 ⋊_A ℝ, where A is

A = ( 1  0 ; 0  -1 ).

We note that this is the most well-known model for Sol_3, which makes it one of the eight Thurston geometries. Explicitly, this model is (ℝ^3, ds^2), where

ds^2 = e^{-2z} dx^2 + e^{2z} dy^2 + dz^2.

We note that the frame {E_1 = e^z ∂_x, E_2 = e^{-z} ∂_y, E_3 = ∂_z} consists of left-invariant vector fields which are unit and everywhere orthogonal.

Our next construction is to produce an entire, positive, nonconstant harmonic function u: Sol_3 → ℝ, which will not depend on the variable x. Recall that if u is smooth, being harmonic is equivalent to Δ u = 0, where Δ denotes the Laplacian operator on Sol_3, which can be written in coordinates as

Δ u = e^{2z} u_xx + e^{-2z} u_yy + u_zz.

We find a function as described above after constructing a Riemannian submersion P: Sol_3 → ℍ^2 and defining u = w ∘ P, where w: ℍ^2 → ℝ is a suitable function. Consider the right-invariant (and thus Killing) vector field ∂_x, whose flow acts on Sol_3 via the 1-parameter group of isometries Γ = {(x,y,z) ↦ (x+t, y, z)}_{t ∈ ℝ}. Let M = {(0,y,z) ∈ Sol_3 | y, z ∈ ℝ}, endowed with the induced ambient metric. The next claim presents two key properties for our construction.

M is isometric to the hyperbolic plane ℍ^2 and the map π: Sol_3 → M defined by π(x,y,z) = (0,y,z) is a Riemannian submersion.

To see that M is isometric to ℍ^2, just note that the ambient metric restricted to M is simply e^{2z} dy^2 + dz^2 and the map (0,y,z) ∈ M ↦ (y, e^{-z}) ∈ ℝ^2_+ = {(x,y) ∈ ℝ^2 | y > 0} is an isometry between M and the half-space model for ℍ^2, (ℝ^2_+, (dx^2 + dy^2)/y^2). The fact that π is a Riemannian submersion follows from observing that the fibers are horizontal lines {(t, y_0, z_0) | t ∈ ℝ}, so ker(dπ) is generated by ∂_x and ker(dπ)^⊥ is generated by {∂_y, ∂_z}. The fact that π leaves the third coordinate unchanged then makes the restriction of dπ to ker(dπ)^⊥ an isometry.

If v: ℍ^2 → ℝ is a smooth function, we let v̄ = v ∘ P: Sol_3 → ℝ denote the lift of v to Sol_3 by P. For a given p ∈ Sol_3, let Γ(p) denote its orbit with respect to the action of Γ and let H denote the mean curvature vector of Γ(p), which is always orthogonal to the fibers of π. Since H is Γ-invariant, its projection dπ(H) defines a vector field in M, which we denote by J, so J = dP(H) is a vector field in ℍ^2. Under these conditions, the same proof as in <cit.> applies to find that Δ v̄ = 0 if and only if v satisfies

Δ v - ⟨∇ v, J⟩ = 0,

where Δ and ∇ respectively denote the Laplacian and the gradient in ℍ^2, and ⟨·,·⟩ is the hyperbolic metric. Next, we find an explicit expression for J in coordinates, and we refer to <cit.> for the Riemannian connection of the semidirect product model of Sol_3.

For any p = (0,y,z) ∈ M, J = ∂_z. In particular, J is the unit tangent field to a family of geodesics of the hyperbolic metric on M, all issuing from the same point p^* at infinity (as in Figure <ref>).

First, note that E_1 = e^z ∂_x is a unit vector field tangent to the orbits Γ(p), which are all orthogonal to the vertical planes 𝒫_c = {(c,y,z) ∈ Sol_3 | y, z ∈ ℝ}. Therefore, if (·)^⊤ denotes the orthogonal projection onto T𝒫_c,

H = ( ∇_{E_1} E_1 )^⊤ = (E_3)^⊤ = E_3 = ∂_z.

To finish the proof of the claim, just note that dπ(∂_z) = ∂_z|_{ℍ^2}, so the orbits of the flow of J are vertical lines {(0, y_0, t) | t ∈ ℝ}, which are geodesics as described before.

For a function u: ℍ^2 → ℝ, let L(u) = Δ u - ⟨∇ u, J⟩. The next step in the proof of Theorem <ref> is to find a smooth, positive, nonconstant solution u on ℍ^2 to the linear partial differential equation L(u) = 0.
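Before continuing, the reduction just derived (harmonicity of the lift on Sol_3 is equivalent to L(u) = 0 on ℍ^2) can be sanity-checked symbolically. The following sympy sketch is only a verification aid written by us, not part of the original argument; it uses the coordinate expression above for the Laplacian of Sol_3 (with a = 0), the induced metric e^{2z}dy^2 + dz^2 on M, and J = ∂_z.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
u = sp.Function('u')(y, z)                 # a function on Sol_3 that does not depend on x

# Laplacian of Sol_3 for the metric e^{-2z}dx^2 + e^{2z}dy^2 + dz^2 (a = 0 model):
lap_sol = sp.exp(2*z)*sp.diff(u, x, 2) + sp.exp(-2*z)*sp.diff(u, y, 2) + sp.diff(u, z, 2)

# Laplace-Beltrami operator of M with metric e^{2z}dy^2 + dz^2, computed from the metric:
sqrt_g = sp.exp(z)
lap_M = (sp.diff(sqrt_g*sp.exp(-2*z)*sp.diff(u, y), y) + sp.diff(sqrt_g*sp.diff(u, z), z))/sqrt_g

drift = sp.diff(u, z)                      # <grad u, J> with J = d/dz

print(sp.simplify(lap_sol - (lap_M - drift)))   # prints 0: Delta(u o pi) = 0  <=>  L(u) = 0
```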
Notice that J is orthogonal to the horocycles having p^* as point at infinity. More precisely, choosing any horocycle ℋ of this family and denoting by s the signed distance function to ℋ (pointing towards the concave side defined by ℋ in ℍ^2), then J = ∇ s. Furthermore, it is well-known that Δ s ≡ 1 in ℍ^2.

Write w: ℍ^2 → ℝ, w(p) = e^{s(p)/2}, and observe that ∇ w = (1/2) w ∇ s. Since |∇ s| = 1 = Δ s,

Δ w = (1/4) w |∇ s|^2 + (1/2) w Δ s = (3/4) w.

To finish the construction, choose any point o ∈ ℍ^2 and let r: ℍ^2 → ℝ be the distance function to o, i.e., r(p) = d(p,o) in ℍ^2. Let v: ℍ^2 → ℝ be the radial eigenfunction of the Laplacian of ℍ^2 related to its first eigenvalue λ_1 = 1/4 satisfying v(o) = 1. More specifically, v satisfies

Δ v + (1/4) v = 0.

It is well known that v is a non-constant, positive function. Finally, define u: ℍ^2 → ℝ by u = vw. Therefore, by (<ref>) and (<ref>),

Δ u = v Δ w + w Δ v + 2⟨∇ v, ∇ w⟩ = (3/4) vw - (1/4) vw + w ⟨∇ v, ∇ s⟩ = (1/2) vw + w J(v).

On the other hand, since J(w) = (1/2) w,

J(u) = v J(w) + w J(v) = (1/2) vw + w J(v).

Combining equations (<ref>) and (<ref>) we obtain that Δ u - J(u) = 0, therefore u is a positive non-constant solution to (<ref>). Thus, as already explained, this implies that the function ū: Sol_3 → ℝ defined as ū(x,y,z) = u(π(x,y,z)) = u(0,y,z) is a positive, nonconstant harmonic function in Sol_3, proving that Sol_3 is non-parabolic when endowed with the canonical metric defined by the matrix A as in (<ref>).

To prove Theorem <ref> in the general case, fix a > 0. Let

A = ( 1  a ; 0  -1 ),

and consider Sol_3 as the semidirect product ℝ^2 ⋊_A ℝ endowed with its canonical left-invariant metric. In this case, e^{Az} = ( e^z  p(z) ; 0  e^{-z} ), where p(z) is a smooth function of z. The metric on Sol_3 now is given by

ds^2 = e^{-2z} dx^2 + ((p(-z))^2 + e^{2z}) dy^2 + dz^2 + p(-z) e^z dx dy,

and the Laplacian operator of Sol_3 is

Δ u = (e^{2z} + p(z)^2) u_xx + 2 p(z) e^{-z} u_xy + e^{-2z} u_yy + u_zz.

Since the function ū defined in (<ref>) does not depend on the variable x, it also satisfies (<ref>), which concludes the proof of the theorem.

[GW] R. E. Greene and H. Wu. Function theory on manifolds which possess a pole, volume 699 of Lecture Notes in Mathematics. Springer, Berlin, 1979.
[G] A. Grigor'yan, Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds. Bulletin (New Series) of the AMS, 36 (2), 135–249.
[H] I. Holopainen, Positive solutions of quasilinear elliptic equations on Riemannian manifolds. Proc. London Math. Soc. 65 (3), 651–672 (1992).
[K] J. Kazdan, Parabolicity and the Liouville Property on Complete Riemannian Manifolds, Seminar on New Results in Nonlinear Partial Differential Equations, Aspects of Mathematics, V. 10, 153–166, 1987.
[MP] W. H. Meeks and J. Pérez, Constant mean curvature surfaces in metric Lie groups. Geometric analysis: partial differential equations and surfaces, 25–110, Contemp. Math., 570, Amer. Math. Soc., Providence, RI, 2012.
[M] J. W. Milnor, Curvatures of left invariant metrics on Lie groups. Adv. Math. 21 (1976), 293–329.
[PS] L. Priebe, R. Soares, The p-parabolicity under a decay assumption on the Ricci curvature. Preprint, arXiv:2310.12257, 2023.
[RT] J. B. Ripoll and F. Tomi. Group invariant solutions of certain partial differential equations. Pacific Journal of Mathematics, v. 315, p. 235–254, 2021.
[SY] R. Schoen and S. Yau. Lectures on Harmonic Maps. Conference Proceedings and Lecture Notes in Geometry and Topology, v. II, p. 394, 1997.
[Y] S. T. Yau, Harmonic functions on complete Riemannian manifolds, Comm. Pure Appl.
Math., 28 (1975), 201–228.

Instituto de Matemática e Estatística, Universidade Federal do Rio Grande do Sul, Brazil. Email address: [email protected]
Instituto de Física e Matemática, Universidade Federal de Pelotas, Brazil. Email address: [email protected]
Instituto de Matemática e Estatística, Universidade Federal do Rio Grande do Sul, Brazil. Email address: [email protected]
Instituto de Matemática e Estatística, Universidade Federal do Rio Grande do Sul, Brazil. Email address: [email protected]
Instituto de Física e Matemática, Universidade Federal de Pelotas, Brazil. Email address: [email protected]
Instituto de Matemática e Estatística, Universidade Federal do Rio Grande do Sul, Brazil. Email address: [email protected]
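As a closing sanity check on the computations of Section 3 (again our own verification sketch, not part of the paper), one can verify in the upper half-plane model that the function s = -log t (the signed distance to the horocycle {t = 1}, pointing towards the concave side) satisfies |∇ s| = 1 and Δ s = 1, and that w = e^{s/2} satisfies Δ w = (3/4) w for the hyperbolic metric (du^2 + dt^2)/t^2.

```python
import sympy as sp

u, t = sp.symbols('u t', positive=True)

def lap_hyp(f):
    """Laplace-Beltrami operator of the half-plane metric (du^2 + dt^2)/t^2."""
    return t**2*(sp.diff(f, u, 2) + sp.diff(f, t, 2))

s = -sp.log(t)                 # signed distance to the horocycle {t = 1}
grad_norm_sq = t**2*(sp.diff(s, u)**2 + sp.diff(s, t)**2)
w = sp.exp(s/2)                # w = e^{s/2} = t^(-1/2)

print(sp.simplify(grad_norm_sq - 1))                    # 0 :  |grad s| = 1
print(sp.simplify(lap_hyp(s) - 1))                      # 0 :  Delta s = 1
print(sp.simplify(lap_hyp(w) - sp.Rational(3, 4)*w))    # 0 :  Delta w = (3/4) w
```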
http://arxiv.org/abs/2312.16302v1
{ "authors": [ "Leonardo Bonorino", "Giovanni Nunes", "Álvaro Ramos", "Jaime Ripoll", "Lisandra Sauer", "Miriam Telichevesky" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20231226191341", "title": "On the non-parabolicity of $Sol_3$" }
On rainbow Turán Densities of Trees

Seonghyuk Im, Jaehoon Kim, Hyunwoo Lee, Haesong Seo
Department of Mathematical Sciences, KAIST, South Korea, and Extremal Combinatorics and Probability Group (ECOPRO), Institute for Basic Science (IBS)
Email: {seonghyuk, jaehoon.kim, hyunwoo.lee, hss21}@kaist.ac.kr

January 14, 2024
=======================================================================

For a given collection 𝒢 = (G_1,…, G_k) of graphs on a common vertex set V, which we call a graph system, a graph H on a vertex set V(H) ⊆ V is called a rainbow subgraph of 𝒢 if there exists an injective function ψ: E(H) → [k] such that e ∈ G_ψ(e) for each e ∈ E(H). The maximum value of min_i{|E(G_i)|} over n-vertex graph systems having no rainbow subgraph isomorphic to H is called the rainbow Turán number ex_k^∗(n, H) of H. In this article, we study the rainbow Turán density π_k^∗(T) = lim_n →∞ ex_k^∗(n, T)/\binom{n}{2} of a tree T. While the classical Turán density π(H) of a graph H lies in the set {1-1/t : t ∈ ℕ}, the rainbow Turán density exhibits different behaviors as it can even be an irrational number. Nevertheless, we conjecture that the rainbow Turán density is always an algebraic number. We provide evidence for this conjecture by proving that the rainbow Turán density of a tree is an algebraic number. To show this, we identify the structure of extremal graphs for rainbow trees. Moreover, we further determine all tuples (α_1,…, α_k) such that every graph system (G_1,…,G_k) satisfying |E(G_i)| > (α_i + o(1))\binom{n}{2} contains all rainbow k-edge trees. In the course of proving these results, we also develop the theory on the limit of graph systems.

§ INTRODUCTION

One of the central topics in extremal combinatorics is to determine the maximum number of edges of a graph on n vertices that does not contain a copy of a given graph H as a subgraph. Such a maximum is the extremal number of H on n vertices and is denoted by ex(n, H). Turán <cit.> in 1941 first determined the extremal number of the complete graph K_r.
Erdős, Stone and Simonovits <cit.> found a surprising connection between ex(n,H) and the chromatic number χ(H) of H by proving that for every graph H, its extremal number is ex(n, H) = (1 - 1/(χ(H)-1) + o(1))\binom{n}{2}. Since then, the extremal numbers and their variations have been broadly studied, providing elaborate theories and a deep understanding of the relations between various global statistics and local structures of graphs. See <cit.> for a general survey.

One very popular variation of the extremal number is the rainbow generalization. In other words, we want to determine the "largest" n-vertex graph systems that do not contain a copy of a multigraph H as a rainbow subgraph. Depending on the context, different measures for the "size" have been introduced: the sum ∑_i |E(G_i)| <cit.>, the minimum min_i |E(G_i)| <cit.>, the product ∏_i |E(G_i)| <cit.>, the minimum of the spectral radii min_i λ_1(G_i) <cit.>, and other measures <cit.>. Surprisingly, this problem depends heavily on the choice of measure.

To maximize the sum ∑_{i=1}^k |E(G_i)| of the numbers of edges of a graph system without a rainbow copy of H, there are the following two constructions giving natural lower bounds. One is that all the G_i's are the same Turán graph; the other is that |E(H)|-1 of the graphs are complete and all the other graphs are empty. Keevash, Saks, Sudakov and Verstraëte <cit.> proved that one of these two constructions is extremal (i.e., a maximizer) when H = K_r, generalizing the Turán theorem. They also proved that if H is 3-color-critical, i.e., χ(H)=3 and H has an edge e with χ(H-e)=2, then the same conclusion holds. This extends the result of Simonovits <cit.> stating that the Turán graph is extremal when H is k-color-critical for some k ≥ 3. Recently, Chakraborti, Liu, Seo, the second and the third authors <cit.> generalized the previous result to every 4-color-critical graph H and almost all k-color-critical graphs H for k ≥ 5. Furthermore, they proved that if H is not k-color-critical for any k ≥ 3 and n is sufficiently large, then neither of the constructions is extremal.

By enforcing the graphs in 𝒢 to be pairwise edge-disjoint matchings and letting k be large, the maximum of ∑_{i=1}^k |E(G_i)| becomes the maximum number of edges in a properly edge-colored graph with no rainbow subgraph H. Such a variation has been extensively studied in <cit.>.

Regarding the minimum min_i |E(G_i)|, define the rainbow extremal number ex_k^∗(n, H) of a multigraph H to be the maximum of min_{1 ≤ i ≤ k} |E(G_i)| among all rainbow H-free graph systems 𝒢 = (G_1, …, G_k) on n vertices. Aharoni, DeVos, de la Maza, Montejano and Šámal <cit.> proved that ex_3^∗(n, K_3) = ⌊(26-2√7)/81 · n^2⌋. The fact that (26-2√7)/81 is an irrational number larger than the classical Turán density 1/4 signifies the difference between the rainbow setting and the classical setting, and renders the rainbow extremal number more intriguing. In general, the problem gets more complicated, and there are only a few known results on the rainbow extremal number ex_k^∗(n,H). As a corollary of Keevash, Saks, Sudakov and Verstraëte <cit.>, we have ex_k^∗(n, K_3) ≤ n^2/4+o(n^2) when k ≥ 4. Babiński and Grzesik <cit.> recently determined the value of the rainbow extremal number ex_k^∗(n, P_3) of the 3-edge path up to an additive o(n^2) error term for every k ≥ 3. However, many cases, including the complete graph on at least four vertices, are still wide open.
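For orientation, the constant above is easy to evaluate numerically; the following minimal Python check (included purely for illustration) confirms that it strictly exceeds the classical Turán density 1/4 of the triangle.

from math import sqrt

rainbow_density = (26 - 2 * sqrt(7)) / 81   # rainbow Turan density of K_3 with 3 colours
classical_density = 1 / 4                   # classical Turan density of K_3

print(round(rainbow_density, 5))            # 0.25566
print(rainbow_density > classical_density)  # True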
§.§ Main results

The rainbow Turán density of a multigraph H is defined as follows:

π_k^∗(H) := lim_{n→∞} ex_k^∗(n, H)/\binom{n}{2}.

The first natural question is whether the limit exists. Unlike the case of maximizing ∑_{i=1}^k |E(G_i)|, the standard proof (cf. the survey of Keevash <cit.>) does not directly work. So we first prove the existence of this limit.

For every multigraph H and k ≥ |E(H)|, the limit π^∗_k(H) = lim_{n→∞} ex_k^∗(n, H)/\binom{n}{2} exists.

The classical Turán density π(H) = lim_{n→∞} ex(n, H)/\binom{n}{2} always lies in the set {1-1/t : t∈ℕ}. This is no longer true for the rainbow Turán density, as π_3^*(K_3) = (26-2√7)/81. On the other hand, the irrational number (26-2√7)/81 was obtained in <cit.> from a polynomial optimization problem regarding the sizes of certain vertex sets. Based on this example, it is reasonable to conjecture that all rainbow Turán densities π_k^*(H) can be obtained from such optimization problems, and that they are algebraic numbers. We prove this conjecture for all trees T by analyzing the structure of extremal graph systems. Roughly speaking, for a given tree T, the vertex set V(𝒢) of an extremal graph system 𝒢 = (G_1, …, G_k) can be partitioned into a finite number of parts, where each G_i is a union of cliques and complete bipartite graphs on and between those parts. In addition, the number of parts depends only on the tree T. See Theorem <ref> for the full statement. Hence the rainbow Turán density π^∗_k(T) is a solution to a polynomial optimization problem. As a consequence, it can be proven to be an algebraic number using basic tools from real algebraic geometry. In summary, we prove the following.

For a tree T, the rainbow Turán density π_k^∗(T) can be computed by solving finitely many polynomial optimization problems. Moreover, π_k^∗(T) is an algebraic number.

Such a polynomial optimization may have exponentially many variables in terms of |T|, so it is not practical to compute π_k^∗(T) via this optimization even if the tree T is small. Nevertheless, we use the structure of extremal graph systems to verify that the rainbow Turán density of a tree T is maximized when T is a star.

If T is a k-edge tree, then π_k^∗(T) ≤ π_k^∗(K_{1,k}) = ((k-1)/k)^2.

We further prove that one can completely determine the tuples (α_1, …, α_k) ∈ [0, 1]^k such that for every ε>0, a graph system 𝒢 on n vertices has a rainbow copy of every k-edge tree whenever 1/n ≪ ε and |E(G_i)| > (α_i+ε)\binom{n}{2} holds for all i ∈ [k]. Note that the determination of all such tuples was raised as a question in <cit.> for the case of rainbow triangles.

Let α_1, …, α_k be nonnegative reals such that ∑_{i=1}^k (1-√(α_i)) < 1. For every ε>0, there exists an integer n_0 such that for every n ≥ n_0 and every k-edge tree T, a graph system 𝒢 = (G_1, …, G_k) on n vertices with |E(G_i)| ≥ (α_i + ε)\binom{n}{2} contains a rainbow copy of T.

We note that the condition ∑_{i∈[k]} (1-√(α_i)) < 1 is tight. Let α_1, …, α_k be nonnegative reals such that ∑_{i∈[k]} (1-√(α_i)) ≥ 1. One can choose subsets V_1, …, V_k ⊆ [n] such that |V_i| = ⌈(1-√(α_i))n⌉ and ⋃_{i∈[k]} V_i = [n]. Let G_i^n be the graph on [n] whose edge set is that of the complete graph on [n] ∖ V_i. Then the graph system 𝒢^n = (G_1^n, …, G_k^n) does not contain a rainbow copy of K_{1,k}, and |E(G_i^n)| · \binom{n}{2}^{-1} → α_i as n tends to infinity.

Finally, we show the following three basic properties of the rainbow Turán density.

For every bipartite multigraph H and k ≥ |E(H)| = ℓ, we have π_k^∗(H) ≤ (ℓ-1)/k.

For a given multigraph H, denote by sim(H) the graph obtained by replacing each multiple edge with a single edge. We call it the simplification of H.
For every multigraph H, we have lim_{k→∞} π_k^∗(H) = 1 - 1/(χ(sim(H))-1). In other words, π_k^∗(H) converges to the Turán density of its simplification as k tends to infinity.

For every multigraph H and every ε>0, there exists δ>0 such that if an n-vertex graph system 𝒢 of order k has at least (π_k^∗(H) + ε)\binom{n}{2} edges, then it has at least δn^{|V(H)|} rainbow copies of H.

§.§ The limit of graph systems

To prove our main theorems, we develop the theory on the limit of graph systems. For two measurable functions f, g:X → ℝ from a measure space X, we write f ≡ g if f = g holds almost everywhere. For a measurable subset A ⊆ X, we say f ≡ g on A if f = g holds almost everywhere on A. We use the standard Lebesgue measure on ℝ^n. A symmetric measurable function W:[0, 1]^2 → [0, 1] is called a graphon. Let 𝒲_0 be the space of all graphons. An n-vertex graph G can be interpreted as a graphon by partitioning [0, 1] into n intervals I_1=[0, 1/n), I_2=[1/n, 2/n), …, I_n=[(n-1)/n, 1] and by setting its value to be constantly 1 on I_i × I_j if ij ∈ E(G) and constantly 0 otherwise. It is well known that a graphon can be interpreted as the limit of a sequence of graphs. Furthermore, there is a metric, called the "cut-distance", between graphons. By identifying graphons with cut-distance zero, we obtain a compact metric space 𝒲̃_0. Also, we have a family of continuous functions, called the "subgraph density" functions, which determines the metric topology of 𝒲̃_0. We further explain the theory of graphons and related definitions in Section <ref>.

One may expect that a natural generalization of graph systems of order k is the set of k-tuples of graphons 𝐖 = (W_1, …, W_k) ∈ 𝒲_0^k. However, this definition is not appropriate for extending theorems on graphons to the graph system setting. Consider two graph systems 𝒢_1^n = (G_1^n, G_2^n) and 𝒢_2^n = (G_3^n, G_4^n), where G_1^n, G_3^n, G_4^n are mutually independent binomial random graphs G(n, 1/2) and G_2^n is the complement of G_1^n. Let H be the multigraph with two vertices and two parallel edges. Then 𝒢_1^n has no rainbow copies of H, while 𝒢_2^n contains \binom{n}{2}/4 rainbow copies of H in expectation. On the other hand, both 𝒢_1^n and 𝒢_2^n component-wise converge to 𝐖 = ((1/2)·1_{[0,1]^2}, (1/2)·1_{[0,1]^2}) with high probability. It follows that the "rainbow H-density" is not a continuous function on the product space 𝒲_0^k. Even if we avoid using multigraphs, the same problem appears when we count the number of, for instance, induced copies of K_{1,2}. In this fashion, we also need to encode the information regarding the intersections of the graphs as follows:

A graphon system of order k is a tuple of graphons 𝐖 = (W_I)_{I ⊆ [k]} with W_∅ ≡ 1. When I = {i} ⊆ [k] is a singleton, we simply write W_i = W_{i}. For graphons W_1, …, W_k, let span(W_1, …, W_k) be the classical graphon system 𝐖 of order k defined by W_I = ∏_{i∈I} W_i.

As we view a graph as a graphon, a graph system 𝒢 = (G_1, …, G_k) can be regarded as a graphon system by letting G_I = ⋂_{i∈I} G_i. In particular, as a graph is a graphon taking values only in {0,1}, the graphon system span(G_1, …, G_k) corresponds to the graph system 𝒢. Throughout this paper, we identify a graph system 𝒢 with the corresponding graphon system span(G_1, …, G_k) when there is no confusion. In the previous example, 𝒢_1^n converges to the graphon system 𝐖 with W_1, W_2 ≡ 1/2 and W_{1,2} ≡ 0, whereas 𝒢_2^n converges to the graphon system 𝐖 with W_1, W_2 ≡ 1/2 and W_{1,2} ≡ 1/4.
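This difference is easy to observe experimentally. The following minimal Python simulation (for illustration only; the value n = 400 is an arbitrary choice) estimates, for both sequences above, the density of pairs covered by both colors—precisely the information recorded by the coordinate W_{1,2} of the limit.

import random

def binomial_random_graph(n, p=0.5):
    """Edge set of G(n, p) on the vertex set {0, ..., n-1}."""
    return {frozenset((u, v)) for u in range(n) for v in range(u + 1, n) if random.random() < p}

n = 400
all_pairs = {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)}

G1 = binomial_random_graph(n)
G2 = all_pairs - G1                                            # complement of G1
G3, G4 = binomial_random_graph(n), binomial_random_graph(n)    # independent copies

# Fraction of pairs lying in both graphs of the system, i.e. rainbow copies of H.
print(len(G1 & G2) / len(all_pairs))   # exactly 0 for the first system
print(len(G3 & G4) / len(all_pairs))   # about 1/4 for the second system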
In <Ref> and <Ref>, we discuss the definitions of the homomorphism density and the induced homomorphism density of a graphon system. Graphon systems form a compact metric space with respect to a suitable metric, called the cut-distance (see <Ref>). The metric topology is determined by a family of continuous functions, called the (rainbow) homomorphism densities (see <Ref> and <Ref>). However, the set of graph systems is not dense in this space. For example, if W_{1,2} > W_1 on a set of positive measure in [0,1]^2, then we will never be able to find a sequence of graph systems (G_1^n, G_2^n) whose limit is (W_∅, W_1, W_2, W_{1,2}), because we always have G_{1,2}^n ⊆ G_1^n. The next definition provides the complete characterization of graphon systems that are limits of graph systems.

Let 𝐖 = (W_I)_{I ⊆ [k]} be a graphon system of order k. For each I ⊆ [k], we define W̃_I inductively by:
* W̃_{[k]} = W_{[k]},
* W̃_I = W_I - ∑_{J ⊋ I} W̃_J if I ≠ [k].
The graphon system 𝐖 is called an admissible graphon system if W̃_I(x, y) ≥ 0 for almost every (x,y) ∈ [0,1]^2 and for every I ⊆ [k].

A graphon system 𝐖 = (W_I)_{I ⊆ [k]} is classical if W_I = ∏_{i∈I} W_{i} holds almost everywhere for each I ⊆ [k].

Observe that a classical graphon system 𝐖 = (W_I)_{I ⊆ [k]} satisfies W̃_I = ∏_{i∈I} W_i ∏_{i∉I} (1-W_i) ≥ 0 for all I ⊆ [k], so a classical graphon system is an example of an admissible graphon system. However, the space of admissible graphon systems is strictly larger than the space of classical graphon systems, and the set of classical graphon systems is not compact. The previous example 𝒢_1^n = (G_1^n, G_2^n) gives a sequence of classical graphon systems whose limit is not classical. In the following theorem, 𝒲_0^(k) is the space of graphon systems of order k equipped with the metric δ_□, cf. Section <ref>.

The space of admissible graphon systems is the closure of the set of graph systems in (𝒲_0^(k), δ_□). In other words, every sequence of graph systems has a subsequence convergent to an admissible graphon system, and every admissible graphon system is a limit of a sequence of graph systems.

Finally, we note some benefits of developing the theory on the limit of graph systems. First, the computation in the proof of <Ref> becomes clearer with this theory. Second, as we note in Remark <ref>, the limit theory allows us to easily generalize Theorem <ref> and the results on the structure of extremal graphon systems to various settings, including:
* maximizing the product ∏_{i=1}^k |E(G_i)|/\binom{n}{2}^k;
* maximizing min_{1 ≤ i ≤ k} λ_1(G_i)/n; and
* generalized Turán problems: maximizing the rainbow density of H_1 in rainbow H_2-free graph systems.

Organization. In Section <ref>, we introduce the theory of graphons and generalize it to graphon systems. In Section <ref>, we prove Theorem <ref> determining the closure of the set of graph systems. In Section <ref>, we prove several properties of the rainbow Turán density, including <Ref> and <Ref>–<ref>. In Sections <ref> and <ref>, we investigate the structure of extremal graphon systems having no rainbow trees and prove <Ref> and <ref>, respectively. In Appendices <ref>–<ref>, we collect extensions of theorems on a graphon to a graphon system which can be proved by similar arguments.
§ PRELIMINARIES AND NOTATIONSThroughout this paper, we assume that all the subsets of ℝ^n and the functions between them that we deal with are measurable.§.§ Colored subgraphsIn this section, we define some notions for edge-colored graphs.We also define homomorphisms between two multigraphs that preserve pre-colorings, and extend the rainbow Turán problem to a more general pre-colored Turán problem. Let H be a multigraph.For each uv∈V(H)2, denote by E(H)_uv the set of edges of H between u and v. In particular, E(H)_uv = ∅ when u and v are not adjacent in H. If H is clear in the context, we simply write E_uv.For a set X⊆ E(H) of edges, a map ψ: X→ [k] is called a pre-coloring of H if ψ is injective on E_uv∩ X for each uv∈V(H)2. Denote by dom(ψ)=X the domain of ψ. If ψ is injective on its domian, ψ is called a rainbow pre-coloring of H. If dom(ψ)=E(H), we call ψ a coloring of H. A tuple (H, ψ) of a multigraph and itspre-coloring is called a pre-coloring tuple. If ψ is a rainbow pre-coloring, (H, ψ) is called a rainbow pre-coloring tuple; if ψ is a (resp. rainbow) coloring, (H,ψ) is called a (resp. rainbow) coloring tuple. As above, a pre-coloring tuple is a multigraph some of whose edges are already colored.Using the definition below, we can specify a certain subgraph of G whose edges are colored consistently with a given pre-coloring tuple (H,ψ). For multigraphs G and H, a multigraph homomorphism (f, f):H → G is a tuple of a graph homomorphism between vertex sets f:V(H) → V(G) and a map f:E(H) → E(G) such that f(E(H)_uv)⊆ E(G)_f(u)f(v) for all uv∈V(H)2. A multigraph homomorphism is an edge-preserving homomorphism if f is injective. If there exists a multigraph homomorphism (f,f) with bijective f, the graph G is called an edge-preserved homomorphic image of H.For two pre-coloring tuples (H, ψ_1) and (G, ψ_2), a color homomorphism is an edge-preserving homomorphism (f,f): H → G such that dom(ψ_2) = f(dom(ψ_1)) and ψ_1(e) = ψ_2(f(e)) for every e ∈dom(ψ_1). A tuple (G, ψ_2) is called a color homomorphic image of (H, ψ_1) if there exists a color homomorphism (f, f) between them with f bijective. Recall the definition of the rainbow Turán density in (<ref>). We wish to extend this definition to pre-coloring tuples.For a pre-coloring tuple (resp., rainbow pre-coloring tuple) (H, ψ), a graph system 𝒢=(G_1, …, G_k) on V has (H, ψ) as a colored subgraph (resp., rainbow subgraph) if there exist a multigraph H' on the vertex set V(H')⊆ V together with a multigraph isomorphism f:H → H' and a (resp., injective) function ψ': E(H') → [k] such thate∈ E(G_ψ'(e)) for each e∈ E(H') and ψ(e) = ψ'(f(e)) for all e∈dom(ψ).For a family ℱ of rainbow pre-coloring tuples,is said to be rainbow ℱ-free if it does not have (H, ψ) as a rainbow subgraph for every (H, ψ) ∈ℱ. When ℱ consists of only one element (H,ψ), we say thatis rainbow (H, ψ)-free. If dom(ψ)=∅, the graph systemis said to be rainbow H-free. Let ℱ be a family of rainbow pre-coloring tuples. We define the rainbow extremal number of ℱ byex_k^∗(n, ℱ) : = max{min_1 ≤ i ≤ k |E(G_i)||(G_1, …, G_k) is a rainbow ℱ-free graph system on[n] }.The rainbow Turán density of ℱ is defined by π_k^∗(ℱ) = lim_n →∞ex_k^∗(n, ℱ)/n2.When ℱ={(H, ψ)}, then we write ex_k^∗(n, H, ψ) and π_k^∗(H, ψ), respectively.We often omit ψ when dom(ψ) = ∅. We will prove the existence of π_k^∗(H, ψ) for every pre-coloring tuple (H,ψ) in Theorem <ref>.§.§ Preliminaries on graphonsIn this section, we describe the theory on graphons developed by Lovász <cit.>. 
A graphon is a symmetric function W:[0, 1]^2 → [0, 1]. Denote by 𝒲_0 the set of graphons in L^2([0, 1]^2). For a measure-preserving map φ:[0, 1] → [0, 1] and a graphon W, define W^φ(x, y) := W(φ(x), φ(y)). For a bounded symmetric function W:[0, 1]^2 → ℝ, its cut-norm is defined by

‖W‖_□ := sup_{S, T ⊆ [0, 1]} |∫_{S × T} W|.

The cut-distance δ_□(W, U) between two graphons W and U is defined by

δ_□(W, U) = inf_φ ‖W - U^φ‖_□,

where the infimum is taken over all measure-preserving invertible maps φ:[0, 1] → [0, 1]. Let 𝒲̃_0 be the quotient space of 𝒲_0 obtained by identifying graphons U and W whenever δ_□(U, W) = 0.

(𝒲̃_0, δ_□) is a compact metric space.

After taking the quotient, the following homomorphism density, which generalizes the number of copies of a graph, becomes a continuous function on the space (𝒲̃_0, δ_□). For a graph H, the homomorphism density of H is defined by

t_H(W) = ∫_{[0, 1]^{|V(H)|}} ∏_{e=uv ∈ E(H)} W(x_u, x_v) ∏_{v ∈ V(H)} dx_v.

For graphons W and U, we have |t_H(W) - t_H(U)| ≤ |E(H)| δ_□(U, W). In particular, t_H is a well-defined continuous function on (𝒲̃_0, δ_□).

Moreover, the following generalization of the above theorem will be used later when we discuss graphon systems. For a simple graph H and two tuples of graphons (W_uv)_{uv ∈ E(H)} and (U_uv)_{uv ∈ E(H)}, we have

|∫_{[0, 1]^{|V(H)|}} ( ∏_{uv ∈ E(H)} W_uv(x_u, x_v) - ∏_{uv ∈ E(H)} U_uv(x_u, x_v) ) ∏_{v ∈ V(H)} dx_v| ≤ ∑_{uv ∈ E(H)} ‖W_uv - U_uv‖_□.

§.§ Graphon systems

We now generalize the previous concepts to graphon systems. We first introduce the cut-distance between two graphon systems. For a graphon system 𝐖 = (W_I)_{I ⊆ [k]}, set 𝐖^φ := (W_I^φ)_{I ⊆ [k]} for a measure-preserving map φ:[0,1] → [0,1]. The cut-norm of a tuple 𝐖 = (W_1, …, W_s) of bounded symmetric functions on [0,1]^2 is defined by

‖𝐖‖_□ = sup_{S, T ⊆ [0, 1]} ∑_{i=1}^s |∫_{S × T} W_i dx dy|.

For two graphon systems 𝐖 = (W_I)_{I ⊆ [k]} and 𝐔 = (U_I)_{I ⊆ [k]}, let d_□(𝐖, 𝐔) = ‖𝐖-𝐔‖_□. The cut-distance is defined by

δ_□(𝐖, 𝐔) = inf_φ d_□(𝐖, 𝐔^φ) = inf_φ sup_{S, T ⊆ [0, 1]} ∑_{I ⊆ [k]} |∫_{S × T} W_I - U_I^φ|,

where the infimum is taken over all measure-preserving invertible maps φ:[0,1] → [0,1]. Let 𝒲_0^(k) be the quotient space of graphon systems of order k obtained by identifying 𝐖 with 𝐔 whenever δ_□(𝐖, 𝐔) = 0. Note that 𝒲_0^(k) is different from the product of 2^k-1 copies of 𝒲̃_0 because the same measure-preserving bijection φ is applied to all the U_I simultaneously. It is easy to see that

(1/2^k) ∑_{I ⊆ [k]} ‖W_I‖_□ ≤ ‖𝐖‖_□ ≤ ∑_{I ⊆ [k]} ‖W_I‖_□

holds for any tuple 𝐖 = (W_I)_{I ⊆ [k]} of bounded symmetric functions. Hence our definition of the cut-norm may be replaced by ∑_{I ⊆ [k]} ‖W_I‖_□. However, the cut-distance between graphon systems cannot be bounded by the sum of the cut-distances of the components.

We now define the colored homomorphism density and the rainbow homomorphism density of pre-coloring tuples. For a (resp., rainbow) coloring tuple (H, ψ), the (resp., rainbow) (H, ψ)-density of a graphon system 𝐖 = (W_I)_{I ⊆ [k]} is defined by

t^*_{(H, ψ)}(𝐖) = ∫_{[0,1]^{|V(H)|}} ∏_{uv ∈ \binom{V(H)}{2}} W_{ψ(E_uv)}(x_u, x_v) ∏_{v ∈ V(H)} dx_v.

For a rainbow pre-coloring tuple (H, ψ) with 𝐝𝐨𝐦(ψ) ⊊ E(H), we define t^*_{(H, ψ)}(𝐖) = ∑_{ψ̃} t^*_{(H, ψ̃)}(𝐖), where the sum is taken over all injective functions ψ̃:E(H) → [k] extending ψ. When 𝐝𝐨𝐦(ψ) = ∅, we simply write t^*_H(𝐖). We are now ready to state the following theorems. As the proof of Theorem <ref> is similar to the graphon case, we postpone it to Appendix <ref>.

The space (𝒲_0^(k), δ_□) is a compact metric space.
For two graphon systems 𝐖=(W_I)_I⊆ [k] and 𝐔=(U_I)_I⊆ [k] of order k and a rainbow pre-coloring tuple (H,ψ),|t^*_(H, ψ)(𝐖) - t^*_(H, ψ)(𝐔)| ≤ |E(H)| δ_□(𝐖, 𝐔).In particular, t_(H, ψ) is a well-defined continuous function on (𝒲_0^(k), δ_□). For any measure-preserving invertible map φ:[0, 1] → [0, 1], we have t^*_(H, ψ)(𝐔) = t^*_(H, ψ)(𝐔^φ). By applying Theorem <ref> to (W_φ(uv))_uv∈ E(H) and (U_φ(uv))_uv∈ E(H) and taking the infimum over all φ, we get the result.§.§ Real algebraic geometryIn this section, we state a theorem which will be useful for proving <Ref>.<cit.> A real closed field is a field K satisfying the following equivalent conditions: * K can be ordered, and there is no nontrivial algebraic extension that extends an order;* K is not algebraically closed, but K[√(-1)] is algebraically closed;* K has a unique ordering such that every nonnegative element is the square and every polynomial of odd degree over K has a root in K.The field ℝ of real numbers and the field ℝ_alg = ℚ∩ℝ of real algebraic numbers are examples of real closed fields. There is a meta-theorem for real closed fields analogous to the Lefschetz principle for algebraically closed fields of characteristic 0. Let K be a real closed extension of a real closed field F, i.e., a field extension that is real closed. Let Φ be a formula without any free variable, written with a finite number of conjunctions, disjunctions, negations and existential quantifiers on variables, where atomic formulas are formulas of the kind f(x_1, …, x_n) ≤ 0 for some polynomial f over F. Then Φ holds in F if and only if it holds in K. § ADMISSIBLE GRAPHON SYSTEMSIn this section, we prove <Ref>. For a finite partition 𝒫 = (I_1, …, I_t) of [0, 1] and a graphon W, let W_𝒫 be a step graphon defined byW_(x_0, y_0) = 1/μ(I_x_0)μ(I_y_0)∫_I_x_0× I_y_0 W(x,y) dxdy,if μ(I_x_0), μ(I_y_0) are both nonzero, where μ is the standard Lebesgue measure and I_x_0 (resp. I_y_0) is the part I_i containing x_0 (resp. y_0); if the measure of I_x_0 or I_y_0 is zero, we take W_(x_0,y_0)=0. For a graphon system 𝐖 = (W_I)_I ⊆ [k], we let 𝐖_𝒫 = ((W_I)_𝒫)_I ⊆ [k].In order to prove <Ref>, we first show that the set of all admissible graphon systems is a compact subset of the metric space (𝒲_0^k, δ_□). This is straightforward from the subsequent theorem because it is easy to check that the set of admissible graphon systems satisfies the two conditions of the theorem.theoremcompact Let X be a nonempty subset of 𝒲_0^(k) satisfying: * if 𝐖∈ X, then 𝐖_𝒫∈ X for every finite partition 𝒫 of [0, 1];* if a sequence of graphon systems (𝐖^n)_n∈ℕ in X component-wise converges to a graphon system 𝐖 almost everywhere, then 𝐖∈ X.Then X is a compact subset of (𝒲_0^k, δ_□). We provide the proof of this theorem in Appendix <ref>. The remaining part is to show that an admissible graphon system is the limit of a sequence of graph systems. For an admissible graphon system 𝐖=(W_I)_I ⊆ [k] of order k and an n-element tuple S = (x_1, …, x_n) ∈[0, 1]^n, we define two tuples of weighted graphs on n vertices as follows.  For each I ⊆ [k], consider a weighted graph W_I[S] on vertex set [n] with the edge weight W_I(x_i,x_j) on the edge ij. Let 𝐖[S]= (W_I[S])_I⊆ [k].The tuple of weighted graphs obtained by deleting all the loops from each graph in 𝐖[S] is denoted by 𝐇_S(n, 𝐖).Note that W_∅[S] is always a complete graph with loops where all edge weights are 1 as W_∅≡ 1. 
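To make this sampling step concrete, the following minimal Python sketch (illustrative only; the particular classical graphon system it uses is an arbitrary choice, not one from the paper) draws S = (x_1, …, x_n) uniformly from [0,1]^n and assembles the loopless weighted graphs of 𝐇_S(n, 𝐖).

import random

# A toy classical graphon system of order 2: W_1(x, y) = x*y and W_2(x, y) = 1 - max(x, y),
# with W_I = prod_{i in I} W_i.  Both formulas are illustrative choices only.
W_single = {1: lambda x, y: x * y, 2: lambda x, y: 1.0 - max(x, y)}

def W(I, x, y):
    value = 1.0                      # the empty product, so W_emptyset is constantly 1
    for i in I:
        value *= W_single[i](x, y)
    return value

def H_S(n):
    """Sample S uniformly from [0,1]^n and return the loopless weighted graphs of H_S(n, W)."""
    S = [random.random() for _ in range(n)]
    subsets = [frozenset(I) for I in ([], [1], [2], [1, 2])]    # all I subseteq [2]
    H = {I: {} for I in subsets}
    for u in range(n):
        for v in range(u + 1, n):                               # u = v would be a loop; deleted
            for I in subsets:
                H[I][(u, v)] = W(I, S[u], S[v])                  # edge weight W_I(x_u, x_v)
    return S, H

S, H = H_S(5)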
Also, note that a weighted graph can be viewed as a step graphon whose value on each step is the corresponding edge weight, so a tuple of weighted graphs can be viewed as a system of step graphons. Let ℍ(n, 𝐖) = 𝐇_S(n,𝐖) be the random variable obtained by choosing a tuple S ∈ [0,1]^n uniformly at random. It is easy to see that the resulting random sample 𝐇 = ℍ(n, 𝐖) is an admissible graphon system with probability 1.

For an admissible system of weighted graphs 𝐇 = (H_I)_{I ⊆ [k]}, define a random graph system 𝔾(𝐇) of order k as follows. Considering each H_I as a graphon, <Ref> yields new graphons H̃_I, which can again be considered as weighted graphs. For each uv ∈ \binom{[n]}{2}, consider an independent random variable I_uv that takes the value I ⊆ [k] with probability H̃_I(uv), the edge weight of uv in the weighted graph H̃_I. This random variable is well defined as 𝐇 is admissible, i.e., H̃_I(uv) ∈ [0, 1] and ∑_{I ⊆ [k]} H̃_I(uv) = 1 for each uv ∈ \binom{[n]}{2}. We define a random graph system 𝔾(𝐇) = (G_i)_{i∈[k]} by letting uv ∈ E(G_i) if and only if i ∈ I_uv. This yields a random graph system in which the probability that uv ∈ ⋂_{i∈I} E(G_i) equals ∑_{J ⊇ I} H̃_J(uv) = H_I(uv) for all I ⊆ [k]. We denote by 𝔾(n,𝐖) the random variable 𝔾(ℍ(n,𝐖)). Note that there are two levels of randomness: a choice of S yielding 𝐇 = ℍ(n,𝐖), and choices of I_uv yielding 𝔾(n,𝐖) = 𝔾(𝐇). When k=1, these definitions are identical to those of ℍ and 𝔾 for graphons defined in <cit.>.

As for graphons, we can show the following theorem stating that our random graph system 𝔾(n, 𝐖) is very close to 𝐖 in cut-distance with high probability. We supply the proof of this theorem in Appendix <ref>.

With probability 1-o(1), δ_□(𝐖, 𝔾(n, 𝐖)) ≤ (10^3 · 8^k)/√(log n).

By Theorem <ref>, the set of admissible graphon systems is compact. For an admissible graphon system 𝐖 of order k and ε>0, there exists n ∈ ℤ_{>0} such that δ_□(𝐖, 𝔾(n, 𝐖)) < ε with high probability by <Ref>. Therefore, one can choose a sequence of graph systems of order k that converges to 𝐖.

§ RAINBOW TURÁN DENSITY

We start this section by providing an elementary proof of <Ref>. Let π = 1 - 1/(χ(sim(H))-1). Fix ε>0 and suppose that k ≥ ⌈2ε^{-1}|E(H)|⌉+1. We claim that π_k^∗(H) ≤ π+ε. Since π_k^∗(H) ≥ π for any k, this concludes the proof. Let n be a sufficiently large integer and (G_1, …, G_k) be a graph system on n vertices with |E(G_i)| ≥ (π + ε)\binom{n}{2} for every i ∈ [k]. Let G' be the graph on the same vertex set such that uv is an edge of G' if and only if uv ∈ E(G_i) for at least |E(H)| indices i ∈ [k]. Then we have

k·(π + ε)\binom{n}{2} ≤ ∑_{i∈[k]} |E(G_i)| ≤ k·|E(G')| + (|E(H)|-1)·\binom{n}{2}.

Thus we get |E(G')| ≥ (π + ε/2)\binom{n}{2}. As n is sufficiently large, the graph G' contains sim(H). Choosing colors greedily then yields a rainbow copy of H in (G_1, …, G_k), thereby establishing the inequality π_k^∗(H) ≤ π+ε.

We now use the theory on graphon systems to prove <Ref>, <Ref> and <Ref>. For this purpose, we need a colored version of the graph removal lemma. As suggested in Fox <cit.>, his proof can be adapted to demonstrate the colored version. For completeness, we provide a proof of this lemma in Appendix <ref>, following the strategy outlined in <cit.>.

Let (H,ψ) be a rainbow pre-coloring tuple. For a graph system 𝒢 = (G_1, …, G_k) on V, a rainbow copy of (H, ψ) in 𝒢 is a subgraph H' of the complete graph on V together with an injective map ψ':E(H') → [k] such that the following holds: there exists an isomorphism ϕ:H → H' such that ψ'∘ϕ:E(H) → [k] is an extension of ψ and e ∈ E(G_{ψ'(e)}) for every e ∈ E(H').

Let (H,ψ) be a rainbow pre-coloring tuple.
For every ε>0, there exists δ>0 such that if a graph system 𝒢 = (G_1, …, G_k) has at most δ |V(𝒢)|^|V(H)| rainbow copies of (H, ψ), then one can delete at most ε n^2 edges in total to make 𝒢 rainbow (H, ψ)-free.The same strategy can be employed to establish the next theorem.Let (H,ψ) be a rainbow pre-coloring tuple. For every ε>0, there exists δ>0 such that if a graph system 𝒢 = (G_1, …, G_k) has at most δ |V(𝒢)|^|V(H)| rainbow copies of color homomorphic images of (H, ψ), then we can delete at most ε n^2 edges in total to make 𝒢 = (G_1, …, G_k) have no rainbow copies of color homomorphic images of (H, ψ).We are now ready to derive <Ref>. Because the set of admissible graphon systems form a compact subset of 𝒲^(k)_0, the following theorem immediately implies <Ref>.We haveπ_k^∗(H, ψ) = sup_t^*_(H, ψ)(𝐖)=0min_1 ≤ℓ≤ k t_K_2(W_ℓ), where the supremum is taken over all admissible graphon systems 𝐖 of order k with t^*_(H, ψ)(𝐖)=0. Let π denote the right-hand side of the desired equality. As the functions t^*_(H, ψ)(𝐖) and min_1 ≤ℓ≤ k t_K_2(W_ℓ) are continuous, by compactness there exists a graphon system 𝐖 of order k that attains the supremum.Fix ε>0 and choose δ>0 small enough.By Theorem <ref> and Theorem <ref>, for sufficiently large n, there exists a graph system 𝒢 such that t^*_(H, ψ)(𝒢) ≤δ and min_1 ≤ℓ≤ k t_K_2(G_ℓ) ≥π-δ. Then by Theorem <ref>, one can delete at most ε |V(𝒢)|^2 edges from 𝒢 to make it rainbow (H, ψ)-free. Since t_K_2(G_ℓ) = 2|E(G_ℓ)|/|V(𝒢)|^2, we have ex_k^∗(n, H, ψ) ·n2^-1≥π-δ-2ε.To show the opposite inequality, again fix ε>0. Suppose that lim sup_n →∞ex_k^∗(n, H, ψ) ·n2^-1≥π + 2ε. Then there exists an increasing sequence of integers (n_i) such that for each i, there exists an n_i-vertex graph system 𝒢^i that is rainbow (H, ψ)-free with min_1 ≤ℓ≤ k |E(G_ℓ^i)| ·n_i2^-1≥π+ε. We may assume that n_1 is large enough that min_1 ≤ℓ≤ k t_K_2(G_ℓ^i) ≥π+ε/2.We assert that there exists δ = δ(ε)>0such that t^*_(H, ψ)(𝒢^i) ≥δ holds for all i∈ℕ. If not, then we can take a subsequence (n_s_i) of (n_i) with t^*_(H, ψ)(𝒢^s_i) → 0. By compactness, there exists a further subsequence that converges to an admissible graphon system 𝐖. Then 𝐖 satisfies that t^*_(H, ψ)(𝐖)=0 but min_1 ≤ℓ≤ k t_K_2(W_ℓ) ≥π+ε/2, which contradicts the definition of π. In conclusion, there exists δ>0 such that t^*_(H, ψ)(𝒢^i) ≥δ for each i∈ℕ.On the other hand, as 𝒢^i is rainbow (H, ψ)-free, 𝒢^i can only contain degenerate copies of (H, ψ). Therefore, t^*_(H, ψ)(𝒢^i) = O(1/n_i), which leads to a contradiction.The following corollary confirms <Ref>. For every ε>0, there exists δ>0 such that if an n-vertex graph system 𝒢 has at least (π_k^∗(H, ψ) + ε)n2 edges, then it has at least δ n^|V(H)| copies of (H, ψ). As 𝒢 cannot be made rainbow (H, ψ)-free by deleting ε n^2/2 edges, the conclusion comes from Theorem <ref>.We note that <Ref> can be extended to an arbitrary continuous function h:W_0^(k)→ℝ. More precisely, the limit of the minimum of h(𝒢) among all graph systems of order k on n vertices without (H, ψ) is equal to the infimum of h(𝐖) among all graphon systems of order k with t_(H, ψ)^∗(𝐖)=0. 
The proof simply follows the proof of <Ref>, together with the fact that, by the uniform continuity of h, deleting ε'|V(𝒢)|^2 edges changes the value of h by at most ε when ε' ≪ ε. One can further show that the rainbow Turán density of a finite family of pre-coloring tuples exists as well.

For every rainbow pre-coloring tuple (H, ψ),

π_k^∗(H, ψ) = π_k^∗({all color homomorphic images of (H, ψ)}).

Let π denote the right-hand side of the desired equality. Since (H, ψ) is a color homomorphic image of itself, π_k^∗(H, ψ) ≥ π is clear. To show the opposite inequality, suppose that π_k^∗(H, ψ) ≥ π+ε for some fixed ε > 0. Then for a sufficiently large n, there exists a rainbow (H, ψ)-free graph system 𝒢^n on n vertices such that |E(G_i^n)| ≥ (π + ε/2)\binom{n}{2} for each i ∈ [k]; moreover, 𝒢^n contains at most o(n^{|V(H)|}) rainbow copies of color homomorphic images of (H, ψ), as every such copy other than a copy of (H, ψ) itself spans fewer than |V(H)| vertices. By Theorem <ref>, one can delete at most εn^2/4 edges from 𝒢^n to make it have no rainbow copies of color homomorphic images of (H, ψ). Consequently, the resulting graph system has at least (π + ε/4)\binom{n}{2} edges in each color and does not contain color homomorphic images of (H,ψ), which contradicts the definition of π.

Consequently, this theorem yields the following corollary, which immediately implies <Ref>.

Let H be a bipartite multigraph with ℓ edges equipped with a rainbow pre-coloring ψ. For k ≥ ℓ, we have π^∗_k(H,ψ) ≤ π^∗_k(two vertices with ℓ parallel edges) = (ℓ-1)/k.

The first inequality comes from Theorem <ref>. To show the second equality, we first show the lower bound. Let F be the multigraph with two vertices and ℓ parallel edges. Let 𝐖 = (W_I)_{I ⊆ [k]} be the graphon system defined by

W_I ≡ 1 if I = ∅;  W_I ≡ \binom{k-|I|}{ℓ-1-|I|}/\binom{k}{ℓ-1} if 0 < |I| < ℓ;  W_I ≡ 0 otherwise.

In particular, if |I|=1, then W_I ≡ \binom{k-1}{ℓ-2}/\binom{k}{ℓ-1} = (ℓ-1)/k. Since W_I ≡ 0 for every I ⊆ [k] with |I| ≥ ℓ, we have t^*_F(𝐖) = 0. Finally, we can check that 𝐖 is admissible by a direct computation: W̃_I ≡ 1/\binom{k}{ℓ-1} if |I| = ℓ-1 and W̃_I ≡ 0 otherwise.

For the upper bound, let 𝐖 = (W_I)_{I ⊆ [k]} be an admissible graphon system with t^*_F(𝐖) = 0. Then W_I ≡ 0 for all I ⊆ [k] with |I| = ℓ. As W_I = ∑_{J ⊇ I} W̃_J and W̃_J ≥ 0, we have W̃_J ≡ 0 for every J with |J| ≥ ℓ, and thus W_I ≡ 0 for all I ⊆ [k] with |I| ≥ ℓ. Because ∑_{I ⊆ [k]} W̃_I = W_∅ ≡ 1, we obtain

∑_{i=1}^k t_{K_2}(W_i) = ∑_{i=1}^k ∑_{I ⊆ [k], i ∈ I} t_{K_2}(W̃_I) = ∑_{I ⊆ [k], |I| ≤ ℓ-1} |I|·t_{K_2}(W̃_I) ≤ ∑_{I ⊆ [k]} (ℓ-1) t_{K_2}(W̃_I) ≤ ℓ-1.

Therefore, there exists an index i ∈ [k] such that t_{K_2}(W_i) ≤ (ℓ-1)/k.

This implies that for any rational number r ∈ (0,1), there exist a multigraph H and an integer k such that π_k^*(H) = r.

§ EXTREMAL STRUCTURES OF GRAPH SYSTEMS HAVING NO RAINBOW TREE

In this section, we show that an extremal graph system having no rainbow tree T admits a certain rigid structure.
We first collect some observations and definitions.For every graph H equipped with a rainbow pre-coloring ψ and graphon system 𝐖=(W_I)_I ⊆ [k], we have t^*_(H, ψ)(𝐖) = t^*_(H, ψ)(span(W_1, …, W_k)).For a tree T with a root u ∈ V(T) and a rainbow coloring ψ:E(T)→ [k], the rooted density of (T, u, ψ) on a tuple of graphons 𝐖 = (W_1, …, W_k) at x ∈ [0, 1] is rt^*_(T, u, ψ)(𝐖, x) = ∫_[0, 1]^|V(T)|-1∏_vw ∈ E(H): u ≠ v, w W_ψ(vw)(x_v, x_w)×∏_uv ∈ E(H) W_ψ(uv)(x, x_v) ∏_v ∈ V(T), v ≠ u dx_v.When ψ is a rainbow pre-coloring, define rt^*_(T, v, ψ)(𝐖, x) =∑_ψ̃ rt^*_(T,v, ψ̃)(𝐖), where the sum is taken over all injective functions ψ̃:E(T) → [k] extending ψ.For a graphon W, we define the degree of W at x byd_W(x) := ∫_[0, 1] W(x, y) dy.The rooted density and the degree are defined almost everywhere by the Fubini theorem.A function f:W^(k)_0 →ℝ is said to be monotone if f(𝐖) ≤ f(𝐔) whenever W_I ≤ U_I almost everywhere for every I ⊆ [k]. A function f:W^(k)_0 →ℝ is said to be simple if f(𝐖) = f(span(W_1, …, W_k)) for any graphon system 𝐖=(W_I)_I ⊆ [k] of order k. Recall that for a graphon system 𝐖 and a partition 𝒫 of [0, 1], we define 𝐖_𝒫 := ((W_I)_𝒫)_I ⊆ [k]. A graphon system 𝐖 with 𝐖≡𝐖_𝒫 for some partition 𝒫 of [0,1] is called a step graphon with steps in 𝒫. If |𝒫|=m, then we say 𝐖 has at most m steps.Let T be a tree and (T, ψ) be a rainbow pre-coloring tuple. Then there is a constant m≤ 2^2|V(T)| satisfying the following: for every continuous monotone simple function h:W^(k)_0 →ℝ, there exists an admissible graphon system 𝐖 such that * t^*_(T, ψ)(𝐖)=0;* 𝐖 is a maximizer of h among all admissible graphon systems of order k with (T, ψ)-density zero;* 𝐖 is a step graphon system with at most m steps and with value either 0 or 1 almost everywhere.For each edge e = uv ∈ E(T), T-e is a disjoint union of two trees. Denote them by T^e_κ where κ∈ T^e_κ for κ∈{ u,v }. For any injective map ψ̂:E(T) → [k], letψ̂^e_v be the restriction of ψ̂ to E(T^e_v). Let 𝒯 be the family of tuples (T^e_v, v, ψ̂^e_v) for all v∈ V(T), e ∈ E(T) with v ∈ e and ψ̂ is a rainbow extension of ψ to E(T).As the set of admissible graphon systems is compact and t^*_(T, ψ) is continuous, one can choose an admissible graphon system 𝐖 that maximizes h among all admissible graphon systems 𝐖 with t^*_(T, ψ)(𝐖)=0. Among all such maximizers, choose one that maximizes ∑_t=1^k t_K_2(W_t).For each tuple (T', v, ψ̂) ∈𝒯, consider the partition supp(rt^*_(T', v, ψ̂)(𝐖, ∗)) and [0, 1] ∖supp(rt^*_(T', v, ψ̂)(𝐖, ∗)) of [0, 1]. Let 𝒫={V_1, …, V_m} be the common refinement of all such bipartitions over all choices of (T', v, ψ̂)∈𝒯, then m≤ 2^2|V(T)|. We claim that 𝐖 is a step 0-1 function with steps in 𝒫 having at most 2^2|V(T)| steps.Suppose not. Then there are indices ℓ∈ [k] and i,j ∈ [m] such that W_ℓ is not constantly 1 and not constantly 0 almost everywhere on V_i × V_j. As t^*_(T, ψ)(𝐖)=0, we have∑_(uv, ψ̂)∫_(V_i × V_j) ∪ (V_j × V_i) rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖, x) · W_ℓ(x, y) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖, y) dxdy=0,where the sum is taken over all uv∈ E(T) and all rainbow extensionsψ̂:E(T)→ [|E(T)|] of ψ with ψ̂(uv)=ℓ.Because of the definition of 𝒫, if rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖, x) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖, y) > 0 for a pair (x,y) ∈ V_i × V_j, then the same holds for almost every (x,y) ∈ V_i × V_j. 
Hence, as W_ℓ is not 0 almost everywhere on V_i × V_j, we have rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖, x) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖, y) = 0 for almost every (x,y) ∈ (V_i × V_j) ∪ (V_j ∪ V_i).Define a new graphon system 𝐖' = span(W_1, …, W_ℓ', …, W_k) where W_ℓ' is equal to W_ℓ except that it has value 1 on almost every point in (V_i × V_j) ∪ (V_j × V_i). Then for each rainbow extension ψ̂ of ψ, if ℓ∉ψ̂(E(T)), then t^*_(T, ψ̂)(𝐖')=t^*_(T, ψ̂)(𝐖); if ℓ∈ψ̂(E(T)), let uv ∈ E(T) be the edge with ψ̂(uv)=ℓ. As two functions ψ̂^uv_v and ψ̂^uv_u does not use the color ℓ, we have rt^*_(T^e_v, v, ψ̂^e_v)(𝐖, x) = rt^*_(T^e_v, v, ψ̂^e_v)(𝐖', x)andrt^*_(T^e_u, u, ψ̂^e_u)(𝐖, y) = rt^*_(T^e_u, u, ψ̂^e_u)(𝐖', y).Therefore, we have t^*_(T, ψ̂)(𝐖') = ∫_[0, 1]^2 rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖', x) · W_ℓ'(x, y) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖', y) dxdy= ∫_[0, 1]^2 ∖ ((V_i × V_j) ∪ (V_j × V_i)) rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖, x) · W_ℓ(x, y) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖, y) dxdy + ∫_(V_i × V_j) ∪ (V_j × V_i) rt^*_(T^uv_v, v, ψ̂^uv_v)(𝐖, x) · rt^*_(T^uv_u, u, ψ̂^uv_u)(𝐖, y) dxdy= 0.Therefore, t^*_(T, ψ)(𝐖')=0. As h is monotone, h(𝐖)≤ h(𝐖') and ∑_t=1^k t_K_2(W_t)<∑_t=1^k t_K_2(W'_t), a contradiction. Therefore, 𝐖 is a step graphon system, and it has at most 2^|V(T)| steps by definition. By using some facts from real algebraic geometry, we deduce <Ref>. For the notions in the following proof, see Section <ref>. For a rainbow pre-coloring tuple (T, ψ) with T a tree, π_k^∗(T, ψ) can be computed by solving finitely many polynomial optimization problems. In particular, π_k^∗(T, ψ) is an algebraic number.Consider graph systems 𝒢=(G_1, …, G_k) with |V(𝒢)| = m ≤ 2^|V(T)|, possibly with loops. Let 𝔾_T be the collection of all such graph systems with t_(T, ψ)(𝒢) =0. For each graph system 𝒢∈𝔾_T, we associate classical graphon systems 𝐖 with m steps I_1, …, I_m of sizes x_1, …, x_m ∈ [0,1] with ∑_i x_i = 1 such that W_i ≡ 1 on I_ℓ× I_ℓ' if and only if ℓ and ℓ' are adjacent in G_i, and W_i ≡ 0 on I_ℓ× I_ℓ' otherwise. By Theorem <ref>, one of these graphon systems is a maximizer.For a fixed 𝒢∈𝔾_T, the maximum possible value of min_i ∈ [k] t_K_2(W_i) among all associated graphon systems can be computed by solving a polynomial optimization problemsup_x_1 + ⋯ + x_m = 1 x_i ≥ 0min{ f_1(x_1,…, x_m), …, f_k(x_1,…,x_m) },for certain (homogeneous) polynomials f_1, …, f_k of degree 2 with coefficients 0 or 1, where each x_i represents the length of the i-th step. Here, each f_i corresponds to the edge density of W_i. By taking the maximum of those numbers over all 𝒢∈𝔾_T, we can compute π_k^∗(T, ψ). It suffices to show that the optimization problem results in an algebraic number.Let Σ = { (x_i) ∈ℝ^m | x_1+⋯+x_m = 1, x_i ≥ 0fori ∈ [m] } be the constraint set. Let f(x) = min{ f_1(x), …, f_r(x) }. As Σ is compact, the supremum can be attained at a point x_0 ∈Σ. By the transfer principle (<Ref>), the supremum taken over the semialgebraic set ℝ_alg^m ∩Σ can be achieved at another point x_1 ∈ℝ_alg^m ∩Σ. It is clear that f(x_0) ≥ f(x_1). For any n ∈ℕ, choose p_n ∈ℚ^m with |x_0-p_n| < 1/n. Again by the transfer principle, one can choose y_n ∈ℝ_alg^m ∩Σ with |y_n-p_n| < 1/n, so that y_n converges to x_0. As f(y_n) converges to f(x_0), we have f(x_0) ≤ f(x_1). Therefore, f(x_0) = f(x_1) and it is an algebraic number.Let Σ⊂ℝ^n be a compact semi-algebraic set defined by polynomials with rational coefficients. 
That is, there exist polynomials g_ij∈ℚ[x_1, …, x_n] such thatΣ = ⋃_i⋂_j{ x ∈ℝ^n : g_ij(x) 0pt(<)=UrFTS 0 },where g(x) 0pt(<)=UrFTS 0 means that either g(x) ≤ 0 or g(x) = 0 holds. Let f_1, …, f_m ∈ℚ(x_1, …, x_n) be rational polynomials with rational coefficients that are defined in a neighborhood of Σ. Then the maximum valuesup_x ∈Σmin{ f_1(x), …, f_m(x) },is an algebraic number.Assume that the maximum is achieved in the first component, i.e., x_0 ∈⋂_j{ x ∈ℝ^n: g_1j(x) 0pt(<)=UrFTS 0 }. Let U ⊂⋂_g_1j(x_0) < 0{ x: g_1j(x) < 0 } be a small open neighborhood of x_0. By reindexing, say g_1, …, g_r are the collection of g_1j's with g_1j(x_0) = 0.Suppose that f_1(x_0) = ⋯ = f_m'(x_0) < f_l(x_0) for l > m'. Shrinking U, f_k(x) < f_l(x) on U for k ≤ m' < l. Consider the ℚ-variety V = (f_1(x) = ⋯ = f_m'(x), g_1(x) = ⋯ = g_r(x) = 0) defined in a Zariski open set where the f_i are defined. Let V' = V_ℝ(ℝ) ⊂ℝ^n be the corresponding algebraic set. Notice that x_0 ∈ V' ∩ U ⊂Σ. By induction on the dimension of V' at x_0, we may assume that x_0 is contained in the regular locus of V'. [Real algebraic geometry, Proposition 3.3.14] Consequently, x_0 is contained in the regular locus of V_ℂ because regularity is equivalent to geometric regularity over a field of characteristic 0. [Stacks project, Lemma 33.12.2] We further assume that U does not meet any other irreducible components of V'. Let M = (V' ∖Sing(V')) ∩ U be the corresponding real manifold.Suppose that f(x) = f_1(x) is not constant along M. Let n-c = _x_0 V' ≤ n. Without loss of generality, the first c × c minor of the Jacobian matrix[ ∂ h_1/∂ x_1(x_0)⋯ ∂ h_c/∂ x_1(x_0);⋮⋱⋮; ∂ h_1/∂ x_c(x_0)⋯ ∂ h_c/∂ x_c(x_0) ],is nonzero, where the h_i are defining equations of V. Shrinking U further, this c × c minor is nonzero on U. By means of the Lagrange multiplier, at the maximizer x_0 of f on M, we haveDf(x_0) = ∑_iλ_i Dh_i(x_0),for some λ_i∈ℝ. The above Jacobian condition implies that Df(x_0) is in fact the linear combination of Dh_1(x_0), …, Dh_c(x_0). Collect all the points x ∈ M at which Df(x) becomes a linear combination of Dh_i(x). This condition can be described by vanishing of all the (c+1) × (c+1) minors of the Jacobian matrix[ ∂ f/∂ x_1(x) ∂ h_1/∂ x_1(x)⋯ ∂ h_c/∂ x_1(x);⋮⋮⋱⋮; ∂ f/∂ x_n(x) ∂ h_1/∂ x_n(x)⋯ ∂ h_c/∂ x_n(x) ],which are again algebraic equations over ℚ. Consider the subvariety W defined by those equations in V, then _x_0 W_ℝ(ℝ) < _x_0 V'. Otherwise, the Jacobian condition implies that Df(x) is normal to M for x ∈ M, which yields the constancy of f, whence the contradiction. By induction hypothesis on _x_0 V', we may assume that f(x) is constant along M.There is 1-1 correspondence between irreducible components of V_ℚ and those of V_ℂ, as over an algebraically closed field, irreducibility is equivalent to geometric irreducibility. [Stacks project, Lemma 33.8.8] Let V^0 be the irreducible component of V_ℚ whose base change to ℂ contains x_0. For a general (c-1)-plane L ⊂𝔸_ℚ^n, the projection π_L|_V^0:V^0→𝔸_ℚ^n-c from L is finite and dominant. Hence a general fiber is a nonempty finite set. We claim that π_L_ℂ:ℂ^nℂ^n-c is an open map with respect to the Euclidean topology. It suffices to show that a projection π_p:ℂ^nℂ^n-1 from a point p is an open map, as π_L_ℂ is obtained by a successive composition of projections from a point. Say p = (0,…,0,1), then π_p:ℂ^n-1×ℂℂ^n-1 is defined by (z',z_n) ↦ z'/(1-z_n), where z' = (z_1,…,z_n-1). 
Thus π_p can be described as the projection map ℂ^n-1×ℂ→ℂ^n-1 composed with a homeomorphism (z',z_n) ↦ (z'(1-z_n),z_n) from ℂ^n-1×ℂ∖{1} to itself. Whence the claim.Let V' = V_ℂ^0(ℂ). Since V' is smooth at x_0 and f is constant along V' ∩ U ⊂ℝ^n, by the identity theorem for analytic functions, f is constant along V'. Also, there is an open neighborhood U_0 of x_0, an open set 0 ∈ W ⊂ℂ^n-c and a smooth (in fact algebraic because regular embedding is a local complete intersection) submersive map f:U_0 → W such that f^-1(0) = V' ∩ U_0. Take U_0 so small that π_L_ℂ is defined over U_0. Since π_L_ℂ|_V' is dominant, the complement of its image has dimension <n-c, hence is nowhere dense. Consequently, one can choose y ∈π_L_ℂ(U_0) ∩π_L(V^0(ℚ)), so that π_L_ℂ|_V'^-1(y) consists of ℚ-points. For any x_1 ∈π_L_ℂ|_V'^-1(y) ∩ U_0, we infer that f(x_0) = f(x_1) is algebraic.§ PROOF OF <REF>We verify <Ref> in this section. For a graphon W, define γ(W) := 1 - √(t_K_2(W)).In fact, γ(W) measures the possible maximum measure of a subset A⊆ [0,1] such that there exists a graphon W' with t_K_2(W') = t_K_2(W) and W'(x,y)=0 for all x∈ A× [0,1]. In terms of graphs, this is equivalent to `the maximum number of isolated vertices' of a graph with |V(G)| vertices and |E(G)| edges. The following observation captures the above intepretation of γ(W). Let W be a graphon. Let I(W) be the set of points x∈ [0, 1] at which W has zero degree.γ(W) ≥μ(I(W)).We have t_K_2(W) ≤∫_([0, 1]∖ I(W))^2 W(x, y)dxdy ≤ (1 - μ(I(W)))^2,and consequently μ(I(W)) ≤ 1 - √(t_K_2(W)) = γ(W). First, we determine the Turán density of the star K_1,k.Let 𝐖 = span(W_1, …, W_k) be a graphon system. If ∑_i∈ [k]γ(W_i) < 1, then t^*_K_1,k(𝐖) > 0. As a consequence, for every positive integer k, the rainbow turan density of the k-edge star is π^∗_k(K_1,k) = (1 - 1/k)^2.Let A = [0, 1]∖⋃_i∈ [k] I(W_i). By <Ref>, the inequality ∑_i∈ [k]μ(I(W_i)) ≤∑_i∈ [k]γ (W_i) < 1 holds, which implies μ(A) > 0. Label the leaves of K_1,k by 1, …, k and the center by k+1. Let ψ be a rainbow pre-coloring of K_1,k that colors the edge i(k+1) by i for each i∈ [k]. Then t^*_K_1,k(𝐖)≥ t^*_(K_1,k, ψ)(𝐖) = ∫_[0,1]^k+1∏_i∈ [k]W_i(x_i, x_k+1) ≥∫_[0, 1]∏_i∈ [k]d_W_i(x_k+1)≥∫_A ∏_i∈ [k]d_W_i(x_k+1) > 0.The last inequality holds because μ(A) > 0 and d_W_i > 0 on A for each i ∈ [k]. This confirms the first statement.Assume that π^∗(K_1,k) > (1 - 1/k)^2. By <Ref> and <Ref>, there is a graphon system 𝐖' = span(W'_1, …, W'_k) such that t^*_K_1,k(𝐖') = 0 and t_K_2(W'_i) > (1 - 1/k)^2 for each i ∈ [k]. Considering ∑_i∈ [k]γ (W'_i) < 1, we have t^*_K_1,k(𝐖') > 0 by the first statement. This is a contradiction, and thus π^∗(K_1,k) ≤(1 - 1/k)^2.To complete the proof, it is enough to find a graphon system 𝐖' = span(W'_1, …, W'_k) such that t^*_K_1,k(𝐖') = 0 and t_K_2(W'_i) ≥(1 - 1/k)^2 for each i∈ [k].Set A_i =(0, 1) ∖ [i-1/k, i/k]. Define a step graphon W'_i by letting W'_i(x, y) = 1 if (x, y)∈ A_i^2 and 0 otherwise. Then we have t_K_2(W'_i) = μ(A_i^2) = (1 - 1/k)^2. Let 𝐖' = span(W'_1, …, W'_k). For any point x∈ [0, 1], there is an index i∈ [k] with d_W'_i(x) = 0, so t^*_K_1,k(𝐖') = 0. Therefore, π^∗(K_1,k) = (1 - 1/k)^2.Now we show that π^*_k(T)≤ (k-1/k)^2 holds for all k-edge star T. We intend to analyze how the rainbow Turán density changes when we remove a star from a tree, ensuring that the remaining graph is also a tree. For this, the subsequent concept will be beneficial. Let T be a tree which is not a star. 
A leaf-star S of T is a subgraph of T that satisfies the following: (1) S is a star,(2) if v is a center of S, then there is a vertex u∈ N_T(v) ∖ S such that every u'∈ N_T(v)∖{u} is a leaf of both S and T.A leaf-star S of T has size ℓ if |N_T(v)∖{u}| = ℓ. If we want to emphasize u and v, we call S an (u, v)-leaf-star.Later in the proof of <Ref>, we will handle two cases separately where S has exactly k-2 edges or less than k-2 edges, where |E(T)|=k. The next two lemma will be encapsulate the proof of the former case where T is a star with one edge subdivided. Let 𝐖 = span(W_1, W_2, W_3) be a graphon system with γ(W_1) + γ(W_2) + γ(W_3) < 1. Let P_3=wxyz be a 3-edge path, and let ψ:{wx}→ [1] be its rainbow pre-coloring. Then t^*_(P_3, ψ)(𝐖) > 0.Suppose that we are given a graphon system 𝐖 = span(W_1, W_2, W_3) with t^*_(T, ψ)(𝐖) = 0. For each e = (e_1, e_2, e_3) ∈{0, 1}^3, let A_e be the set of points v ∈ [0,1] such that d_W_i(v) > 0 precisely when e_i = 1 for each i∈ [3]. Then {A_e: e∈{0,1}^3 } forms a partition of the interval [0,1] up to a measure zero set. Then we haveW_i ≤∑_e: e_i = 11_A_e× A_e + ∑_e,e':e ≠ e' e_i = e'_i = 11_(A_e× A_e') ∪ (A_e'× A_e),almost everywhere. For simplicity, write A_e_1 + 2e_2 + 4e_3 to denote A_(e_1, e_2, e_3). For instance, A_5= A_(1,0,1). This definition yields a structure on each graphon W_i by forcing it to be zero on certain region A_j× A_ℓ. The structure is depicted in Figure 1 by omitting certain colors in some regions indicating that W_i is zero there. In particular, this structure yields the following claim. W_2 ≡ 0 and W_3 ≡ 0 on (A_7× A_7) ∪ (A_6× A_7) ∪ (A_7× A_6). Also, W_2 ≡ 0 on (A_3× (A_6 ∪ A_7)) ∪ ((A_6 ∪ A_7)× A_3) and W_3 ≡ 0 on (A_5× (A_6 ∪ A_7)) ∪ ((A_6 ∪ A_7)× A_5).Let X = (A_3 ∪ A_7) × (A_6 ∪ A_7). Then0 = t^*_(P_3, ψ)(𝐖) ≥∫_X d_W_1(x)W_2(x, y)d_W_3(y) dx dy.Since d_W_1(x) and d_W_3(y) are strictly positive for (x,y) ∈ X, we have W_2 ≡ 0 on X. As W_2 is symmetric, the statement for W_2 follows. A similar argument also applies to W_3. Let m_i := μ(A_i) for each i ∈{0, 1, …, 7}. BecauseW_1≤1_(A_1 ∪ A_3 ∪ A_5 ∪ A_7)× (A_1 ∪ A_3 ∪ A_5 ∪ A_7), W_2≤1_(A_2∪ A_3)× (A_2∪ A_3) + 1_(A_2× A_6)∪ (A_6× A_2) + 1_(A_2× A_7)∪ (A_7× A_2) + 1_A_6× A_6,W_3≤1_(A_4∪ A_5)× (A_5∪ A_4) + 1_(A_4× A_6)∪ (A_6× A_4) + 1_(A_4× A_7)∪ (A_7× A_4) + 1_A_6× A_6,we obtain the inequalities below.γ(W_1)≥ f_1 := 1 - m_1 - m_3 - m_5 - m_7,γ(W_2)≥ f_2 := 1 - √((m_2 + m_3)^2 + 2m_2(m_6 + m_7) + m_6^2),γ(W_3)≥ f_3 := 1 - √((m_4 + m_5)^2 + 2m_4(m_6 + m_7) + m_6^2). Suppose on the contrary that there is a graphon system 𝐖 = span(W_1, W_2, W_3) such that γ(W_1) + γ(W_2) + γ(W_3) < 1 and t^*_(P_3,ψ)(𝐖) = 0. Our goal is to show that F := f_1 + f_2 + f_3 ≥ 1. Viewing F as a (continuous) function of m_1,…,m_7, its domain is{ (m_1, …, m_7) ∈ℝ^7| ∑_i∈ [7] m_i ≤ 1andm_i ≥ 0 . },which is a compact set. Henceforth, the minimum is attained, say at a point (a_1, …, a_7). It is clear that ∑_i∈ [7] a_i = 1 and a_1 = 0. Let α, β, and γ be the values of f_1, f_2, and f_3 at the point (a_1, …, a_7), respectively. By the assumption, α + β + γ < 1. We may assume that a_6 = 0.Assume a_6 > 0. Putting a_3' = a_3 + a_6 and a_6' = 0, the difference between the values of F isF(a_1, a_2, a_3', a_4, a_5, a_6', a_7) - F(a_1, …, a_7) ≤ -a_6 + √((a_4 + a_5)^2 + 2a_4(a_6 + a_7) + a_6^2) - √((a_4 + a_5)^2 + 2a_4a_7) = -a_6(1 - a_6 + 2a_4/√((a_4 + a_5)^2 + 2a_4(a_6 + a_7) + a_6^2) + √((a_4 + a_5)^2 + 2a_4a_7)) ≤ -a_6(1 - a_6 + 2a_4/(a_6+a_4) + a_4) = 0,provided that a_6+2a_4 ≠ 0. 
If a_6+2a_4 = 0, the difference is bounded by -a_6 ≤ 0. By the minimality, we may assume that a_6 = 0.We may further assume that a_2 , a_4 > 0.Assume a_2 = 0.Then we have γ(W_1) ≥ 1-a_3-a_5-a_7, γ(W_2) ≥ 1-a_3 ≥ a_4 + a_5 + a_7 and γ(W_3) ≥ 1-a_4-a_5 ≥ a_3. Thus, γ(W_1) + γ(W_2) + γ(W_3) ≥ 1+a_4 ≥ 1, a contradiction.By symmetry, we would get a contradiction if a_4 = 0.It follows that both a_2 and a_4 should be strictly positive. We may assume that a_7 < 1/3.Assume a_7 ≥1/3. Then2(1 - β + γ/2)^2≤ (1 - β)^2 + (1 - γ)^2 = (a_2+a_3)^2+2a_2a_7+(a_4+a_5)^2+2a_4a_7 ≤ (a_2+a_3+a_4+a_5)^2 + 2a_7α = (1-a_7)^2+2a_7α.Because α + β + γ < 1, we have (1 + α)^2/2 < 2(1 - β + γ/2)^2.On the other hand, as a_7 ≥1/3, we infer that (1 - a_7)^2 + 2a_7α≤max{ 2α, 4 + 6α/9}. Therefore (1+α)^2/2 < max{2α, 4 + 6α/9}, a contradiction as 0 ≤α≤ 1. By the above claims, either a_2 + a_3 > 1/3 or a_4 + a_5 > 1/3 holds. Without loss of the generality, we assume a_2 + a_3 > 1/3 > a_7. Putting a_3' = a_3 + a_2 and a_2' = 0, the difference between the values of F isF(a_1,a_2',a_3',a_4,…,a_7) - F(a_1, …, a_7) = -a_2 + √((a_2 + a_3)^2 + 2a_2a_7) - (a_2 + a_3)= -a_2 (1 - 2a_7/√((a_2 + a_3)^2 + 2a_2a_7) + (a_2+a_3)) ≤ -a_2(1 - 2a_7/2(a_2+a_3)) < 0.The last inequality follows because a_2 > 0 and a_2 + a_3 > a_7. This contradicts the minimality and verifies the lemma. Using the above lemma for the 3-edge path, we can prove the following lemma dealing with the one-edge-subdivided star. Let 𝐖 = span(W_1, …, W_k) be a graphon system. Let T be a k-edge tree that has an (u, v)-leaf-star S of size k-2. Let C be a subset of [k] of size k-2 and let ψ:E(S) → C be a rainbow pre-coloring of T. If ∑_i∈ [k]γ(W_i) < 1, then t^*_(T, ψ)(𝐖) > 0.Note that the induced graph T[V(T)∖ V(S)] is a single edge. Let V(S) = {v, x_1, x_2, … ,x_k-2}. Suppose that there exists a graphon system such that the statement is not true. Among such graphon systems, we choose 𝐖 = span(W_1, …, W_k) at which t^*_K_2(𝐖) = ∑_i∈ [k] t_K_2(W_i) is maximized. Thus ∑_i ∈ [k]γ(W_i) < 1 but t^*_(T, ψ)(𝐖) = 0.We may assume C = [k-2] and ψ(vx_i) = i. Let α_i := γ(W_i) for i∈ [k-2] and β := γ(W_k-1), γ := γ(W_k). Let A = ⋂_i ∈ [k-2]supp(d_W_i) and let W'_1 = 1_A × A. Let 𝐖' = span(W_1', W_2' := W_k-1, W_3':=W_k) be a graphon system and P_3 be a path xyzw of length 3. As μ(A) ≤ 1 - ∑_i ∈ [k-2]γ(W_i), we haveγ(W_1') + γ(W_2') + γ(W_3') ≤∑_i ∈ [k]γ(W_i) < 1.Thus t^*_(P_3, ψ')(𝐖')>0 by <Ref>, where ψ' is the pre-coloring of P_3 with 𝐝𝐨𝐦(ψ') = xy and ψ'(xy) = 1. Sincet^*_(P_3, ψ')(𝐖) = ∫_[0,1]^3 d_W_1'(x) (W_k-1(x, y)W_k(y, z)+W_k(x, y)W_k-1(y, z)) dxdydz>0,we infer that∫_[0,1]^2 W_k-1(x, y)W_k(y, z)+W_k(x, y)W_k-1(y, z)dydz>0,on a subset of A that has a positive measure. Therefore, t^*_(T, ψ)(𝐖) = ∫_[0, 1]^3∏_i ∈ [k-2] d_W_i(x) (W_k-1(x, y)W_k(y, z)+W_k(x, y)W_k-1(y, z)) dxdydz>0,from the definition of A, which is a contradiction. As the previous lemmas handle the star and the one-edge-subdivided star, we now have to consider the other trees. For those threes, thefollowing two observations will be helpful for us.Let T be a tree that is not a star. By taking the longest path in T, one can easily see that T has at least two leaf-stars.Let T be a tree with two distinct leaf-star S and S' where T - S' has more than two edges. Then there are at least three edges which are not incident to leaves of T or is contained in S. We now prove that for a given graphon system 𝐖 with t^*_(T, ψ)(𝐖) = 0 admits a structure depicted in Figure 2. In particular, this yields the next lemma. 
Let T be a k-edge tree that is not a star and S be a leaf-star of T that is centered at v and has leaves v_1, …, v_m. Let u be the non-leaf neighbor of v. Let ψ be a rainbow pre-coloring of T such that ψ(vv_i)=i for each i∈ [m] and ψ(uv)=k. Let T'=T-S and ψ' be the restriction of ψ to E(T'). Let 𝐖 be a graphon system of order k with t^*_(T, ψ)(𝐖) = 0. Let X = ⋂_i∈ [m]supp(d_W_i) ⊆ [0, 1] and Y = supp(rt^*_(T', u, ψ')(𝐖, ∗)).Let A = X ∩ Y, B = X ∖ Y, C = Y ∖ X,D = [0,1] ∖ (A ∪ B ∪ C),andW'_k = 1_(B ∪ D)^2 ∪ (C ∪ D)^2 ∪ (A × D) ∪ (D × A).Then we have the following: * W_k ≤ W'_k almost everywhere,* t^*_(T, ψ)(span(W_1, …, W_k-1, W'_k)) = 0. Because ψ is a rainbow pre-coloring, the image of ψ' does not contain [m]∪{k}. We have 0=t^*_(T, ψ)(𝐖)= ∫_[0,1]^m+2(∏_i∈ [m]W_i(x_v_i, x_v)) W_k(x_v, x_u) rt^*_(T', u, ψ')(x_u) dx_vdx_u∏_i ∈ [m] dx_v_i = ∫_[0,1]^2(∏_i∈ [m] d_W_i(x_v))W_k(x_v, x_u) rt^*_(T', u, ψ')(x_u) dx_vdx_u,so the integrand is constantly zero almost everywhere. Observe that(∏_i∈ [m]d_W_i(x_v))rt^*_(T', v, ψ')(x_u)>0,for (x_v, x_u) ∈ (A ∪ B) × (A ∪ C). As W_k is symmetric, W_k ≡ 0 on ((A ∪ B) × (A ∪ C)) ∪ ((A ∪ C) × (A ∪ B)). Hence W_k ≤W'_k almost everywhere. On the other hand, if W_k'(x, y) ≠ 0 for some x,y ∈ [0,1], then either d_W_1(x)=0 or rt^*_(T', v, ψ')(y)=0 holds as W_k' ≡ 0 on A × A. Therefore,d_W_1(x_v) W_k'(x_v, x_u)rt^*_(T', v, ψ')(x_u) ≡ 0.This implies that t^*_(T, ψ)(span(W_1, …, W_k-1, W'_k)) = 0.We are now ready to prove <Ref> in terms of a graphon system.Indeed, we prove the following stronger statement: Let 𝐖 = span(W_1, …, W_k) be a graphon system. Let T be a tree with k edges and L = {e_1, …, e_ℓ} be the set of all edges incident to leaves of T except the edges of some leaf-star S of T. Let C ⊆ [k] be a subset of size ℓ. Let ψ be a rainbow pre-coloring of T such that 𝐝𝐨𝐦(ψ) = L and ψ(L) = C. If ∑_i∈ [k]γ(W_i) < 1, then t^*_(T, ψ)(𝐖) > 0. We will use induction on k. Without loss of generality, let C = [ℓ] and let ψ(e_i) = i for each i ∈ [ℓ]. By <Ref>, we may assume that T is not a star. Then by <Ref>, there is another leaf-star S' of T distinct from S. Let S' be an (u,v)-leaf star. Suppose that S' has size k-2. Then <Ref> applied to T and S' yields t_(T,ψ)^∗(𝐖) > 0. Hence we may assume that the size of S' is not equal to k-2. Since there are two distinct leaf-stars S and S' of T, we have |L| ≤ k-3 by <Ref>.Suppose on the contrary that there is a graphon system 𝐖 = span(W_1, …, W_k) such that t^*_(T, ψ)(𝐖) = 0 but ∑_i∈ [k]γ(W_i) < 1. Without loss of generality, let E(S') = {e_1, …, e_m}. Let α := ∑_i∈ [m]γ(W_i) < 1. Since we have at least three colors not in C, we may assume that β:= γ(W_k) < 1 -α/3.Let ψ^∗ be a rainbow pre-coloring of T extending ψ such that 𝐝𝐨𝐦(ψ^∗) = L∪{uv} and ψ^∗(uv) = k. Then t^*_(T, ψ^∗)(𝐖) ≤ t^*_(T, ψ)(𝐖) = 0, so t^*_(T, ψ^∗)(𝐖) = 0. By applying <Ref> with ψ = ψ^∗, one obtains a partition of the interval [0, 1] into four parts A, B, C, D and a step graphon W'_k as described in <Ref>. Because W_k ≤ W'_k almost everywhere, we have ∑_i∈ [k-1]γ(W_i) + γ(W'_k) ≤∑_i∈ [k]γ(W_i) < 1,and γ(W'_k) ≤γ(W_k) = β.Let a = μ(A), b = μ(B), c = μ(C) and d = μ(D). Since1 - α≤μ(A∪ B) = μ( ⋂_i∈ [m]supp(d_W_i) ),we infer thata+b ≥ 1 - α.By the definition of W'_k,γ(W'_k) = 1 - √((b+d)^2 + 2d(a+c) + c^2)≤γ(W_k) = β.Consequently, we have (b+d)^2 + 2d(a+c) + c^2 ≥ (1-β)^2.b+d ≥ 1 - α - β.Suppose b+d < 1-α-β. 
Let f: ℝ^4 →ℝ be the function defined byf(x_1, x_2, x_3, x_4) = (x_2 + x_4)^2 + 2x_4(x_1+x_3) + x_3^2.By Equations (<ref>) and (<ref>), there exists a tuple (a, b, c, d) such that * a+b+c+d = 1 and a, b, c, d≥ 0,* a+b = 1 - α,* b + d = μ(B) + μ(D) < 1 - α - β,* f(a, b, c, d) ≥ (1 - β)^2,* (a,b,c,d) maximizes f among all such tuples (a, b, c, d).For (ii), putting a' = a-t, b' = b-t', c' = c+t and d' = d+t' for some t,t' ≥ 0, the tuple (a',b',c',d') satisfies the conditions (i) and (iii). Considering f(a',b',c',d') - f(a,b,c,d) = 2t'(a+c)+2tc+t^2 ≥ 0,we may decrease a+b to obtain (ii).First, assume that b and c are both non-zero. Let t := min{b, c}. Putting a' = a + t, b' = b - t, c' = c - t and d' = d + t, the tuple (a', b', c', d') satisfies (i)-(iii). Since f(a', b', c', d') - f(a, b, c, d) = 2ta + t^2 > 0,this contradicts the maximality. Hence at least one of b and c is zero. b = 0. In this case, we have (1 - β)^2 ≤ f(a, b, c, d) = d^2 + 2d(a+c) + c^2 = α^2 + 2d(1 - α).Then we have 1 - α - β > d ≥(1 - β)^2 - α^2/2(1 - α) and d ≤α. From these, we infer(1 - β)^2 ≤ 2α - α^2, (1 - β)^2 < 3α^2 - 4α + 2αβ - 2β + 2.By reformulating (<ref>), we have (1 - α)^2 + (1 - β)^2 ≤ 1. As β < 1 - α/3, the inequality (1 - α)^2 + (2 + α/3)^2 ≤ 1 holds, which yields 2/5≤α≤ 1. A reformulation of (<ref>) is β(β - 2α) < (3α-1)(α - 1). Since β < 1 - α/3≤ 2α, the left-hand side decreases as β increases. For β < 1-α/3, the inequality (α - 1)(7α - 1)/9 < (3α - 1)(α - 1) holds. This yields α < 2/5, whence a contradiction. c = 0. In this case, (i) and (ii) implies d = α. Hence (iii) yields 0 ≤ b < 1 - α - β - d = 1 - 2α - β. Thus2α + β < 1.Moreover, since f(a, b, c, d) = (α + b)^2 + 2α a = (α + b)^2 + 2α(1 - α - b) = 1 - (1 - α)^2 + b^2 ≥ (1 - β)^2,we get(1 - α)^2 + (1 - β)^2 - 1≤ b^2 < (1 - 2α - β)^2.By reformulating (<ref>), we have 3α^2 - 2α + 4αβ > 0, which gives α > 0 and 3α + 4β > 2. Because β < 1 - α/3, by the equation (<ref>), we have 3α + 4β < 2, whence a contradiction. Consider a graphon W_k+1 = 1_(B∪ D)^2. By <Ref>, we have γ(W_k+1) ≤α + β. Let 𝐖' = span(W_m+1, …, W_k-1, W_k+1)be a graphon system. Then ∑_m+1 ≤ i≤ k-1γ(W_i) + γ(W_k+1) < 1.Let T' := T - (S'-v). Let ψ' be a rainbow pre-coloring of T' such that 𝐝𝐨𝐦(ψ') = (L∩ E(T')) ∪{uv} and ψ'(e) = ψ(e) for every e∈ L∩ E(T') and ψ'(uv) = k+1. By induction hypothesis,t^*_(T', ψ')(𝐖') = ∫_[0,1] d_W_k+1(x)rt^*_(T'-v, u, ψ'|_T'-v)(𝐖',x) dx>0.Thus d_W_k+1(x)rt^*_(T'-v, u, ψ'|_T'-v)(𝐖',x)>0 on a set of positive measure.Let S” be the subtree of T consisting of v and all of its neighborhoods and ψ” be the restriction of ψ^∗ on E(S”). Thenrt^*_(S”, u, ψ”)(span(W_1, …, W_k-1, W'_k), x) = ∫_[0, 1]∏_i=1^m d_W_i(y) W'_k(y, x) dy.Recall that B ⊆supp(∏_i=1^m d_W_i). Thus by <Ref>, we have ∏_i=1^m d_W_i(y) W'_k(y, x)>0 whenever x, y ∈ B.Hence rt^*_(S”, u, ψ”)(x)>0 almost everywhere on B. Similarly, as A ∪ B = supp(∏_i=1^m d_W_i), we have ∏_i=1^m d_W_i(y) W'_k(y, x)>0 whenever x ∈ D and y ∈ A ∪ B.The condition ∑_i ∈ [m]γ(W_i)<1 implies that the measure of A ∪ B is strictly positive, so rt^*_(S”, u, ψ”)(x)>0 almost everywhere on D. From these, we infer thatrt^*_(S”, u, ψ”)(x)rt^*_(T'-v, u, ψ'|_T'-v)(x)>0,on a set of positive measure.Therefore, t^*_(T, ψ^∗)(span(W_1, …, W_k-1, W'_k))> 0, a contradiction. This completes the proof.For a nonnegative reals α_1, …, α_k ∈ [0, 1], if min_i ∈ [k]α_i > (k-1/k)^2 for each i ∈ [k], then ∑_i=1^k (1-√(α_i)) < 1. So by <Ref>, we obtain ex_k^*(T) ≤(k-1/k)^2 for every k-edge tree T. 
By <Ref>, π_k^*(K_1, k) = (k-1/k)^2 so it completes the proof.§ ACKNOWLEDGEMENTSI, JK, and HL are supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government(MSIT) No. RS-2023-00210430.SI and HL are supported by the Institute for Basic Science (IBS-R029-C4). HS is supported by Institute for Basic Science (IBS-R032-D1).10Aharoni20 Ron Aharoni, Matt DeVos, Sebastián González Hermosillo de la Maza, Amanda Montejano, and Robert Šámal. A rainbow version of Mantel's theorem. Adv. Comb., pages Paper No. 2, 12, 2020.alon2023 Noga Alon, Matija Bucić, Lisa Sauermann, Dmitrii Zakharov, and Or Zamir. Essentially tight bounds for rainbow cycles in proper edge-colourings. arXiv:2309.04460, 2023.Babinski22 Sebastian Babiński and Andrzej Grzesik. Graphs without a rainbow path of length 3. arXiv:2211.02308.BCR1998 Jacek Bochnak, Michel Coste, and Marie-Françoise Roy. Real Algebraic Geometry, volume 12 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer Berlin, Heidelberg, 1998.Chakraborti22 Debsoumya Chakraborti, Jaehoon Kim, Hyunwoo Lee, Hong Liu, and Jaehyeon Seo. On a rainbow extremal problem for color-critical graph. arXiv:2204.02575.Das2013 Shagnik Das, Choongbum Lee, and Benny Sudakov. Rainbow Turán problem for even cycles. European J. Combin., 34(5):905–915, 2013.Durrett Rick Durrett. Probability: Theory and Examples, volume 31 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, fourth edition, 2010.Erdos70 Pál Erdős. On the graph theorem of Turán. Mat. Lapok, 21:249–251, 1970.Erdos66 Pál Erdős and Miklós Simonovits. A limit theorem in graph theory. Studia Sci. Math. Hungar., 1:51–57, 1966.Erdos46 Pál Erdös and Arthur Harold Stone. On the structure of linear graphs. Bull. Amer. Math. Soc., 52:1087–1091, 1946.Ergemlidze2019 Beka Ergemlidze, Ervin Győri, and Abhishek Methuku. On the rainbow Turán number of paths. Electron. J. Combin., 26(1):Paper No. 1.17, 12, 2019.falgasravry2023a Victor Falgas-Ravry, Klas Markström, and Eero Räty. Minimum degree conditions for rainbow triangles. arXiv:2305.12772, 2023.falgasravry2023 Victor Falgas-Ravry, Klas Markström, and Eero Räty. Rainbow variations on a theme by Mantel: extremal problems for Gallai colouring templates. arXiv:2212.07180, 2023.Fox2011 Jacob Fox. A new proof of the graph removal lemma. Ann. of Math. (2), 174(1):561–579, 2011.Frankl22 Peter Frankl. Graphs without rainbow triangles. arXiv:2203.07768.Frankl22a Peter Frankl, Ervin Győri, Zhen He, Zequn Lv, Nika Salia, Casey Tompkins, Kitti Varga, and Xiutao Zhu. Extremal results for graphs avoiding a rainbow subgraph. arXiv:2204.07567.Furedi2013 Zoltán Füredi and Miklós Simonovits. The history of degenerate (bipartite) extremal graph problems. In Erdös centennial, volume 25 of Bolyai Soc. Math. Stud., pages 169–264. János Bolyai Math. Soc., Budapest, 2013.Guo22 Mingyang Guo, Hongliang Lu, Xinxin Ma, and Xiao Ma. Spectral radius and rainbow matchings of graphs. arXiv:2205.03516.Janzer2023 Oliver Janzer. Rainbow Turán number of even cycles, repeated patterns and blow-ups of cycles. Israel J. Math., 253(2):813–840, 2023.janzer2022 Oliver Janzer and Benny Sudakov. On the turán number of the hypercube. arXiv:2211.02015, 2022.Keevash11 Peter Keevash. Hypergraph Turán problems. In Surveys in combinatorics 2011, volume 392 of London Math. Soc. Lecture Note Ser., pages 83–139. Cambridge Univ. Press, Cambridge, 2011.Keevash2007 Peter Keevash, Dhruv Mubayi, Benny Sudakov, and Jacques Verstraëte. 
Rainbow Turán problems. Combin. Probab. Comput., 16(1):109–126, 2007.Keevash04 Peter Keevash, Mike Saks, Benny Sudakov, and Jacques Verstraëte. Multicolour Turán problems. Adv. in Appl. Math., 33(2):238–262, 2004.kim2022 Jaehoon Kim, Joonkyung Lee, Hong Liu, and Tuan Tran. Rainbow cycles in properly edge-colored graphs. arXiv:2211.03291, 2022.Lovasz2012 László Lovász. Large networks and graph limits, volume 60 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2012.Simonovits68 Miklós Simonovits. A method for solving extremal problems in graph theory, stability problems. In Theory of Graphs (Proc. Colloq., Tihany, 1966), pages 279–319. 1968.tomon2022 István Tomon. Robust (rainbow) subdivisions and simplicial cycles. arXiv:2201.12309, 2022.Turan41 Paul Turán. Eine Extremalaufgabe aus der Graphentheorie. Mat. Fiz. Lapok, 48:436–452, 1941.§ PROOF OF THE REGULARITY LEMMA AND <REF>Throughout the appendix, we again assume that all the subsets of ℝ^n and the functions between them that we deal with are measurable. We first remind the simple fact that the stepping operator is a contraction in the cut-norm.Let W:[0, 1]^2 →ℝ be a bounded symmetric function and 𝒫 be a finite partition of [0, 1] into sets. Then ‖ W_𝒫‖_□≤‖ W ‖_□. Let W:[0, 1]^2 →ℝ be a bounded symmetric function, 𝒫 a finite partition of [0, 1] into sets, and 𝒬 a refinement of 𝒫. Then ‖ W_𝒬‖_□≤‖ W_𝒫‖_□.Together with the definition of the cut-norm of graphon systems (<Ref>), the above corollary straightforwardly generalizes to graphon systems. Let W:[0, 1]^2 →ℝ be a bounded symmetric function and k ≥ 1 be an integer. Then there is a step function U with at most k steps such that d_□ (W,U) ≤2/√(log k)‖ W ‖_2.In fact, this lemmatogether with the following lemma implies that there exists a partition 𝒫 with at most k parts with d_□ (W,W_𝒫) ≤4/√(log k)‖ W ‖_2.Let 𝐖=(W_1, …, W_k) and 𝐔=(U_1, …, U_k) be tuples of symmetric bounded functions. If each U_i is a step function with steps in 𝒫, then d_□ (𝐖,𝐖_𝒫) ≤ 2d_□(𝐖, 𝐔). Since 𝐔=𝐔_𝒫, we haved_□ (𝐖,𝐖_𝒫) ≤ d_□ (𝐖,𝐔) +d_□ (𝐔,𝐖_𝒫) = d_□ (𝐖,𝐔) +d_□ (𝐔_𝒫,𝐖_𝒫) ≤ 2 d_□ (𝐖,𝐔).We now generalize the weak regularity lemma to a tuple of graphons.m_1, m_2 ∈ℤ_>0 be integers with m_2 ≥ k m_1. Let 𝐖 = (W_1, …, W_k) be a tuple of bounded symmetric functions and 𝒬 be a partition of [0, 1] into m_1 parts.Then there exists a refinement 𝒫 of 𝒬 with at most m_2 parts such that d_□ (𝐖, 𝐖_𝒫) ≤ k^1/28 ∑_i ∈ [k]‖ W_i ‖_2/√(log (m_2/m_1)).We choose the largest integer t such that t^k≤ m_2/m_1. From the choice, we have log t ≤ k^-1log(m_2/m_1) ≤(t+1)and t ≥ 2. We now apply Lemma <ref> and Lemma <ref> for each W_i. Then we obtain a partition 𝒫_i with at most t parts for each i ∈ [k] such that d_□ (W_i, (W_i)_𝒫_i) ≤4/√(log t)‖ W_i ‖_2.Let 𝒫 be the common refinement of all such partitions 𝒫_i's and 𝒬, then it has at most m_1 t^k ≤ m_2 parts. Then by Lemma <ref>, d_□ (𝐖, 𝐖_𝒫) ≤∑_i ∈ [k] d_□ (W_i, (W_i)_𝒫) ≤4 ∑_i ∈ [k]‖ W_i ‖_2/√(log t)≤ k^1/28 ∑_i ∈ [k]‖ W_i ‖_2/√(k log (t+1))≤ k^1/28 ∑_i ∈ [k]‖ W_i ‖_2/√(log (m_2/m_1)). Note that following the proof of Lemma <ref> leads to a better bound in Lemma <ref>. However, Lemma <ref> is already adequate for establishing our main results.To prove Theorem <ref>, we also need basic facts about martingale.A sequence of random variables (X_n)_n ≥ 0 is called a martingale if 𝔼[X_t | X_0, …, X_t-1] = X_t-1 for every t ≥ 1. Let (X_n)_n ≥ 0 be a martingale. If sup_n 𝔼[|X_n|] < ∞, then (X_n) converges almost everywhere. 
We are now ready to prove Theorem <ref>. It suffices to prove that X is sequentially compact. Let (𝐖^n) be a sequence of graphon systems in X, and let W_I^i be the component of 𝐖^i corresponding to the subset I ⊆ [k].For each n, we apply Lemma <ref> iteratively to obtain partitions 𝒫_1^n = {[0, 1]}, 𝒫_2^n, … such that(a) 𝒫_t+1^n is a refinement of 𝒫_t^n;(b) d_□ (𝐖^n, (𝐖^n)_𝒫_t^n) < 1/t;(c) 𝒫^n_t has exactly m_t parts, while this number m_t only depends on t.By allowing an empty part in each 𝒫^n_t, one can choose such partitions.Let 𝐖_t^n = (𝐖^n)_𝒫_t^n∈ X and W_ t, I^n be the component of 𝐖_t^n corresponding to I ⊆ [k]. By applying appropriate measure-preserving bijections to both 𝒫_t^n and 𝐖_t^n for each n and ignoring sets of measure zero, we may assume that all the partitions 𝒫_t^n consist of intervals. Note that after replacing, the equality (𝐖_t^n)_𝒫_t'^n = 𝐖_t'^n still holds for t'<t.We claim that there exists a subsequence (n_i) such that for any t, the sequence (𝐖_t^n_i) converges to some 𝐔_t almost everywhere. By the standard diagonalization argument, it suffices to show that for each fixed t, there exists a subsequence (𝐖^n_i) such that (𝐖_t^n_i) converges to some 𝐔_t almost everywhere. Fix t. Let ℓ_n(i) be the length of the i-th interval of 𝒫_t^n. Let f_n(i, j, I) be the value of W_t, I^n on the square (i-th interval of 𝒫_t^n) × (j-th interval of 𝒫_t^n). If one of those intervals is empty, then take any value in [0, 1]. By compactness of [0, 1]^m_t× ([0, 1]^m_t^2)^2^k, there exists a subsequence (a_s) such that ℓ_a_s(i) and f_a_s(i, j, I) converge for every i, j ∈ [m_k] and I ⊆ [k]. Then each sequence (W_t, I^a_s)_s converges to some step function U_t,I almost everywhere and for each fixed t, so (𝐖_t^a_s)_s converges to some 𝐔_t almost everywhere. By passing to a subsequence, we may assume that for each fixed t, the sequence (𝐖_t^n) converges to 𝐔_t almost everywhere.The pointwise limit 𝐔_t is a graphon system in X by the second condition of Theorem <ref>. Also, the limit of ℓ_n_a(i) defines a partition 𝒫_t consisting of steps of 𝐔_t. As (𝐖_ t^n)_𝒫_t'^n = 𝐖_t'^n for t'<t, we infer that (𝐔_t)_𝒫_t'≡𝐔_t' for t'<t.We claim that the sequence (𝐔_t) converges to a graphon system 𝐔 almost everywhere. Fix I ⊆ [k]. Choose (x, y) ∈ [0, 1]^2 uniformly at random and consider a sequence (U_t, I(x, y))_t. Since (𝐔_t)_𝒫_t'≡𝐔_t' for t'<t, the sequence (U_t, I(x, y))_t is a martingale. Clearly, its expectation is bounded by 1, so the sequence (U_t, I(x, y))_t converges almost everywhere by the martingale convergence theorem <ref>. Therefore, (𝐔_t) converges to some 𝐔 almost everywhere. Again, each 𝐔_t is in X, so is 𝐔. By the bounded convergence theorem, the pointwise convergence implies the convergence in L^1, and thus the convergence in δ_□-norm.Note thatδ_□(𝐖^n_i, 𝐔) ≤δ_□(𝐖^n_i, 𝐖_t^n_i) + δ_□(𝐖_t^n_i, 𝐔_t) + δ_□(𝐔_t, 𝐔).From this, we conclude that for any given ε > 0, there exist sufficiently large t and i such that δ_□(𝐖^n_i, 𝐔)<ε by the condition (b) and the above claims. Therefore, (𝐖^n) converges to a graphon system 𝐔∈ X, which completes the proof.§ PROOF OF <REF>We need several lemmas below for the proof of <Ref>.Let W:[0,1]^2 → [-1,1] be a symmetric function.Let S be a set of random n points in [0, 1]. 
Then with probability at least 1-4exp(-√(n)/10), we have- 3/n≤‖ W[S] ‖_□- ‖ W ‖_□≤8/n^1/4.For any ε>10/√(n) and any n-vertex weighted graph H with edge weights in [0,1], ℙ(d_□ (𝔾(H), H) >ε)<exp ( -ε^2 n^2/100),Let f be a function from the set of tuples 𝐇=(H_I)_∅⊊ I ⊆ [k] of weighted graphs to ℝ.Suppose that |f(𝐇)-f(𝐇')| ≤ 1 whenever there exists a vertex v such that for each ∅≠ I ⊆ [k], H_I and H'_I differ only in edges incident to v. Then for every t ≥ 0, we haveℙ(f(ℍ(n, 𝐖)) ≥𝔼[f(ℍ(n, 𝐖))] + √(2tn)) ≤ e^-t.When f is defined on the set of graph systems of order k (by letting G_I = ∩_i ∈ I G_i),ℙ(f(𝔾(n, 𝐖)) ≥𝔼[f(𝔾(n, 𝐖))] + √(2tn)) ≤ e^-t. A partition 𝒫 of [0,1] is said to be equitable if all the parts have the same measure.lemmaequitable Let 𝐖 = (W_I)_I ⊆ [k] be a graphon system of order k. For any integer m ∈ℤ_> 1, there exists an equitable partition 𝒫 of [0, 1] into m parts such thatd_□ (𝐖, 𝐖_𝒫) ≤ 2^k ( 50 × 4^k/√(log m) + 2k/√(m)).The proof of <Ref> is the standard application of Azuma's inequality, so we omit the proof. The following lemma will be useful to prove<Ref>. Let 𝐖 = (W_1, …, W_k) be a tuple of bounded symmetric functions and 𝒬 be a partition of [0, 1] into m parts.Then for given m'>m, there exists an equitable partition 𝒫 into m' parts such that d_□(𝐖, 𝐖_𝒫) ≤ 2d_□(𝐖, 𝐖_𝒬) + 2km/m'. We partition each part of 𝒬 into parts of measure 1/m' except at most one part.We collect all the exceptional parts and partition their union into parts of measure 1/m'.Let 𝒫 be the resulting partition.Let ℛ be the common refinement of 𝒫 and 𝒬.Then 𝐖_ℛ and 𝐖_𝒫 differ only in the exceptional parts and there are at most m exceptional parts.Hence we have d_□ (𝐖_ℛ, 𝐖_𝒫) ≤ 2km/m' and d_□ (𝐖_𝒫, 𝐖) ≤ d_□ (𝐖_ℛ, 𝐖) + 2km/m'.From Lemma <ref>, we have d_□ (𝐖_ℛ, 𝐖) ≤ 2d_□ (𝐖_𝒬, 𝐖), which concludes the proof. Let m' = ⌊√(m)⌋ and 𝐖=(W_I)_I ⊆ [k] be a graphon system of order k. We first apply the Lemma <ref> with m_1 = 1 and m_2 = m' to get a partition 𝒬 of [0,1] into at most m' parts with d_□ (𝐖,𝐖_𝒬) ≤ 2^5k/28/√(log m').Then by applying Lemma <ref> to 𝐖 and 𝒬, we get a desired equitable partition 𝒫 into m parts. The final ingredient for the proof of <Ref> is the following version of the sampling lemma. Let 𝒫 be an equitable partition with m parts and 𝐖 = (W_I)_I ⊆ [k] be a graphon system of order k such that every W_I is a step graphon with steps in 𝒫.Let S be a tuple in [0, 1]^n chosen uniformly at random. Then 𝔼[δ_□(𝐖, 𝐖[S])] ≤2^k+2√(m)/√(n). Let P_i be the i-th part of 𝒫. Note that we can consider the weighted graph W_I[S] as a step graphon by regarding vertices in [n] as disjoint subsets of [0,1].This allows us to view both 𝐖 and 𝐖[S] as step graphons with m steps.Each step of 𝐖 has measure exactly 1/m, while each step of 𝐖[S] has length ℓ_i = 1/n |S ∩ P_i|. Observe that n ·ℓ_i is identical with the binomial random variable Binom(n, 1/m). Write S = (x_1, …, x_n) as a tuple. Choose a measure preserving bijection φ:[0,1] → [0,1] such that the parts of 𝐖^φ are intervals. Define a reordering π:[n] → [n] as follows. First, there exists a reordering π':[n] → [n] such that S ∩ P_i = { x_π'^-1(n(ℓ_1 + … + ℓ_i) + 1), …, x_π'^-1(n(ℓ_1 + … + ℓ_i + ℓ_i+1))}.Second, we define π”:[n] → [n] as follows: if nℓ_i ≤⌊im/n⌋ - ⌊(i-1)m/n⌋, define π”(n(ℓ_1 + … + ℓ_i) + j) = ⌊(i-1)m/n⌋ + j,for 1 ≤ j ≤ nℓ_i; otherwise, define π”(n(ℓ_1 + … + ℓ_i) + j) = ⌊(i-1)m/n⌋ + j,for 1 ≤ j ≤⌊im/n⌋ - ⌊(i-1)m/n⌋, and send all the other remaining indices arbitrarily to the indices not mapped to yet. Set π = π”∘π'. 
Let φ':[0,1] → [0,1] be a measure preserving bijection such that 𝐖[S]^φ' = 𝐖[S^π], where S^π = (x_π(1), …, x_π(n)). Then the set { x ∈ P_i × P_j: W_I^φ(x) ≠ (W_I[S])^φ'(x) }has measure at most |ℓ_i - 1/m| + |ℓ_j - 1/m| for each pair i, j ∈ [m]. Because the random variable ℓ_i are independent and identically distributed, we infer that𝔼[‖ W_I - (W_I[S])^φ^-1∘φ'‖_1] ≤ 4𝔼[ ∑_i=1^m | ℓ_i - 1/m| ] = 4m 𝔼[ | ℓ_1 - 1/m| ] ≤ 4m √(𝔼[ | ℓ_1 - 1/m|^2 ]) = 4 √(m-1/n)≤4√(m)/√(n).As ‖ W ‖_□≤‖ W ‖_1 for every bounded symmetric function W, this concludes the proof.First, we aim to bound the expectation of the distance. Then we will apply Lemma <ref>.We may assume that n is sufficiently large. Let 𝐖 be a graphon system of order k. Let m = ⌈ n^1/4⌉. By Lemma <ref>, there exists an equitable partition 𝒫 of [0, 1] into m parts such that d_□ (𝐖, 𝐖_𝒫) ≤200 × 8^k/√(log n). Let S be a tuple in [0,1]^n chosen uniformly at random. Note that 𝐇_S(n, 𝐖) and 𝐖[S] have the same distribution except for the weights of loops.Thus, 𝔼[d_□ (𝐇_S(n, 𝐖), 𝐖[S])] ≤2^k/n. Then by Lemma <ref>, with probability at least 1-4exp(-n^1/8/20), the following holds for each I⊆ [k].| d_□ (W_I[S], (W_I)_𝒫[S]) - d_□ (W_I, (W_I)_𝒫) | ≤8/n^1/4.Thus we have𝔼[| d_□ (W_I[S], (W_I)_𝒫[S]) - d_□ (W_I, (W_I)_𝒫) |] ≤( 1-4exp(-n^1/8/20) ) ·8/n^1/4 + 4exp( -n^1/8/20) ≤10/n^1/4.These inequalities together with (<ref>) imply that𝔼[d_□ (𝐖[S], 𝐖_𝒫[S]) ]≤𝔼[| d_□ (𝐖[S], 𝐖_𝒫[S]) - d_□ (𝐖, 𝐖_𝒫) |] + d_□ (𝐖, 𝐖_𝒫) ≤∑_I ⊆ [k]𝔼[|d_□ (W_I[S], (W_I)_𝒫[S]) - d_□ (W_I, (W_I)_𝒫) |] + d_□ (𝐖, 𝐖_𝒫) ≤300 × 8^k/√(log n).Finally, by Lemma <ref>, we have 𝔼[δ_□(𝐖_𝒫, 𝐖_𝒫[S])] ≤2^k+2/n^3/4≤100 × 2^k/√(log n).Therefore,𝔼[δ_□(𝐖, 𝐖[S])]≤δ_□(𝐖, 𝐖_𝒫) + 𝔼[δ_□( 𝐖[S] , 𝐖_𝒫[S])] + 𝔼[δ_□(𝐖_𝒫, 𝐖_𝒫[S])]≤400 × 8^k/√(log n).To estimate 𝔼[δ_□(𝐖, 𝔾(n, 𝐖))], observe that for a random sample 𝐇 of ℍ(n, 𝐖) and i ≠ j ∈ [n], the event that ij is an edge of G_I = ⋂_t ∈ I G_t have probability exactly the weight of the edge ij of H_I. Also, it is independent from the event that i'j' is an edge of G_I whenever {i', j'}≠{i, j}. Thus by Lemma <ref> applied to each index I ⊆ [k], the union bound yields that d_□ (𝔾(𝐇), 𝐇) ≤1/√(log n) with probability at least 1-2^k ·exp(-n^2/(100× 2^2klog n)). We also have that d_□ (𝔾(𝐇), 𝐇) ≤ 2^k for any case. Therefore,𝔼[d_□ (𝔾(𝐇), 𝐇) ] ≤( 1-2^k ·exp(-n^2/100× 2^2klog n) ) ×1/√(log n) + ( 2^k ·exp( -n^2/100× 2^2klog n) ) × 2^k ≤2/√(log n).By the law of total probability, we have 𝔼[d_□ (𝔾(ℍ(n, 𝐖)), ℍ(n, 𝐖))] ≤2/√(log n). Finally, the triangle inequality together with the above inequalities imply𝔼[δ_□(𝐖, 𝔾(n, 𝐖))] ≤500 × 8^k/√(log n).One can easily check that the function f(𝐇):=nδ_□(𝐇, 𝐖)/2^k+1 defined on the tuple of weighted graphs satisfies the condition of Lemma <ref>. Therefore Lemma <ref> finishes the proof. We finally note that by applying the common refinement argument in the proof of <Ref> and <ref> to Lemma 9.3 of <cit.>, we can prove the following version of the multi-color graph regularity lemma for graph systems. Note that a partition X_1, …, X_m of a finite set X is called equitable if ||X_i|-|X_j|| ≤ 1 for every 1 ≤ i<j ≤ m.Let 𝒢 be a weighted graph system of order k and m be a positive integer. Then there exists an equitable partition 𝒫 of vertices into m parts such that δ_□(span(𝒢), span(𝒢)_𝒫) ≤50 × 2^3k/√(log m). § PROOF OF <REF> In order to prove <Ref>, we need the following two lemmas.For each n∈ℕ, let W^n:[0, 1]^2 → [-1, 1] be abounded symmetric function. If ‖ W^n ‖_□→ 0 as n →∞, then for every Z ∈ L^1([0, 1]), we have ‖ Z W^n ‖_□→ 0. 
In particular, ∫_[0, 1]^2 Z W^n dxdy → 0 and ∫_S W^n dxdy → 0 for every set S ⊆ [0, 1]^2. Recall that we identify a graph system 𝒢 with the graphon system span(𝒢). Let 𝒢^n be a sequence of graph system such that span(𝒢^n) →𝐖 as n →∞ for a graphon system 𝐖 and |V(𝒢^n)| →∞.Then there exists a reordering of vertices on 𝒢^n such that d_□ (span(𝒢^n), 𝐖) → 0. We claim that if (𝒢^n) and (ℋ^n) are two sequences of systems of (weighted) graphs with |V(𝒢^n)| = |V(ℋ^n)| →∞ and δ_□(𝒢^n, span(ℋ^n)) → 0, there exists a reordering 𝒢'^n of the vertices of 𝒢^n such that d_□ (𝒢'^n, ℋ^n ) → 0. Indeed, by <Ref>, there exists a reordering 𝒢'^n of 𝒢^n such that d_□ (𝒢'^n, 𝐖) ≤ d_□ (𝒢'^n, 𝐖_𝒫_n) + d_□ (𝐖_𝒫_n, 𝐖) ≤3d_□ (𝒢'^n, 𝐖_𝒫_n) → 0,where 𝒫_n is the partition of [0, 1] corresponding to 𝒢'^n.To prove the claim, we apply <Ref> to 𝒢^n and ℋ^n with m=|V(𝒢^n)|^1/3. Then we have equitable partitions 𝒫_n and 𝒬_n of vertices into m parts such that d_□(𝒢^n, (𝒢^n)_𝒫_n) ≤50 × 8^k/√(log m) and d_□(ℋ^n, (ℋ^n)_𝒬_n) ≤50 × 8^k/√(log m). By the triangle inequality, d_□ ((𝒢^n)_𝒫_n, (ℋ^n)_𝒬_n) ≤ d_□ (𝒢^n, ℋ^n) + 100 × 8^k/√(log m).Choose a measure preserving bijection φ_n:[0, 1] → [0, 1] such thatd_□ (((𝒢^n)_𝒫_n)^φ_n, (ℋ^n)_𝒬_n) ≤δ_□((𝒢^n)_𝒫_n, (ℋ^n)_𝒬_n) + 1/√(log m).Let P_i be the subset of [0, 1] that corresponds to the i-th part of 𝒫_n, and similarly define Q_i for i ∈ [m]. For each i, j ∈ [m], let X_i, j be the measure of φ_n(P_i) ∩ Q_j. We now define a reordering of the vertices of 𝒢^n and ℋ^n as follows. For each i, j ∈ [m], associate vertices of j-th part of 𝒬_n one-to-one to ⌊ X_ij |V(𝒢^n)|⌋ vertices of the i-th part of 𝒫_n. For the remaining vertices, we arbitrarily associate them to the remaining unmatched vertices of 𝒬_n. It is possible as ∑_j ∈ [m] X_ij |V(𝒢^n)| is the size of i-th part P_i. Let φ'_n be a measure-preserving map that corresponds to this vertex reordering. Then as the difference between ⌊ X_ij |V(𝒢^n)|⌋/|V(𝒢^n)| and X_ij is at most 1/|V(𝒢^n)| for each i, j ∈ [m], we have ‖((𝒢^n)_𝒫_n^φ'_n)_I - ((𝒢^n)_𝒫_n^φ_n)_I ‖_1 ≤m^2/|V(𝒢^n)|,for every I ⊆ [k]. Therefore, we have d_□ ((𝒢^n)^φ'_n, span(ℋ^n))≤ d_□ (((𝒢^n)_𝒫_n)^φ'_n,(ℋ^n)_𝒬_n) + 100 × 8^k/√(log m)≤ d_□ (((𝒢^n)_𝒫_n)^φ_n, (ℋ^n)_𝒬_n)+ d_□ (((𝒢^n)_𝒫_n)^φ'_n, ((𝒢^n)_𝒫_n)^φ_n) + 100 × 8^k/√(log m)≤δ_□((𝒢^n)_𝒫_n, (𝒢^n)_𝒬_n) + 200 × 8^k/√(log m)→ 0,as n →∞. We may assume that (H,ψ) is a coloring tuple, H has no isolated vertices, and |E(H)| ≥ 2. When H is a graph with two vertices and parallel edges, then ε = δ works by deleting one edge from each rainbow copy of (H, ψ). So we can assume that t = |V(H)| ≥ 3.Suppose that the statement is false. There exists ε>0 such that for every i>0, there is a graph system 𝒢^i = (G_1^i, …, G_k^i) with at most 1/i |V(𝒢^i)|^t rainbow copies of (H, ψ) but cannot be made rainbow (H, ψ)-free by deleting at most ε |V(𝒢^i)|^2 edges. Let n_i = |V(𝒢^i)|. Since 1/i n_i^t≥ε n_i^2 and t ≥ 3, we have n_i →∞ as i →∞. As t^*_(H, ψ)(𝒢^i) ≤1/i + O(1/n_i), we have t^*_(H, ψ)(𝒢^i) → 0 as i →∞.By passing to a subsequence, we may assume that (𝒢^i)_i ≥ 1 converges to an admissible graphon system 𝐖 as i →∞. Then t^*_(H, ψ)(𝐖) = 0 by continuity. By <Ref>, we may assume that d_□ (𝒢^i, 𝐖) → 0. Let S_I = { (x, y) ∈ [0, 1]^2 | W_I(x,y)>0}. Then by Lemma <ref>, for each I ⊆ [k], we havelim_i →∞∫_[0, 1]^2 (1-1_S_I)G_I^i dxdy = ∫_[0, 1]^2 (1-1_S_I) W_I dxdy =0,where G_I^i = ∏_j ∈ I G_j^i for I ≠∅ and G_∅^i ≡ 1. 
Therefore, we can choose an index i such that∫_[0, 1]^2 (1-1_S_I)G_I^i dxdy = ∫_[0, 1]^2 (1-1_S_I)G_I^i dxdy < ε/2^k+2|E(H)|,for every I ⊆ [k].Now, we delete edges from 𝒢^i to make it rainbow (H,ψ)-free. For each vertex v of 𝒢^i, let J_v ⊂ [0, 1] be the interval corresponding to v. For each I ⊆ [k], if there exists an edge uv ∈⋂_j ∈ I G_j^i such that μ(S_I ∩ (J_u × J_v)) < 1/n_i^2( 1-1/4|E(H)|),then delete uv from G_j^i for some j ∈ I.We first claim that this deletion yields a (H,ψ)-free graph system. Suppose that 𝒢^i contains a rainbow copy of (H, ψ) after deleting edges. Let v_1, …, v_t be the vertices of H and for each p∈ [t], let u_p be the vertex of 𝒢^i that corresponds to v_p in the copy of (H, ψ). Then for each v_pv_q ∈ E(sim(H)), we have μ(S_I ∩ (J_u_p× J_u_q)) ≥1/n_i^2(1-1/4|E(H)|),where I = ψ(E_v_pv_q). Thus∫_[0, 1]^t∏_v_pv_q ∈ E(sim(H))1_S_ψ(E_v_pv_q)(x_p,x_q)∏_p∈ [t] dx_p ≥∫_J_u_1×⋯× J_u_t∏_v_pv_q ∈ E(sim(H))1_S_ψ(E_v_pv_q)(x_p,x_q)∏_p∈ [t] dx_p≥1/n_i^t - ∑_v_pv_q ∈ E(sim(H))1/n_i^t-2μ((J_u_p× J_u_q)∖ S_ψ(E_v_pv_q)) ≥1/n_i^t - |E(H)| ·1/n_i^t·1/4|E(H)| = 3/4n_i^t >0.On the other hand, we have t^*_(H, ψ)(𝐖)=0. This implies that ∏_v_pv_q ∈ E(sim(H)) W_ψ(E_v_pv_q)(x_p, x_q) = 0,for almost every (x_1, …, x_t) ∈ [0, 1]^t. Hence∫_[0, 1]^t∏_v_pv_q ∈ E(sim(H))1_S_ψ(E_v_pv_q)(x_p,x_q) ∏_p∈ [t] dx_p=0,which is a contradiction. Therefore, 𝒢^i becomes (H, ψ)-free after deletion.We now show that there are at most ε n_i^2 edges deleted. Let e_I be the number of edges uv ∈⋂_j ∈ I G_j^i such that μ(S_I ∩ (J_u × J_v)) < 1/n_i^2( 1-1/4|E(H)|).Thenε/2^k+2|E(H)| > ∫_[0, 1]^2 (1-1_S_I)G_I^i dxdy ≥ e_I 1/n_i^21/4|E(H)|,which gives e_I < ε n_i^2 / 2^k. Therefore, summing over all choices of I⊆ [k], we have deleted at most ε n_i^2 edges. This contradicts the choice of 𝒢^i, which completes the proof.§ INDUCED DENSITYIn this section, we introduce the induced density of graphon systems and prove the inverse counting lemma (<Ref>).We note that results in this section are not necessary for our main results but they give additional reasoning as to why introducing a set index is a more natural way to define the graphon system.For a graph system 𝒢=(G_1, …, G_k) on V and a coloring tuple (H, ψ), an induced copy of (H, ψ) in 𝒢 is a graph H' on a vertex set V(H')⊆ V together with a bijection ϕ:V(H) → V(H') such that ϕ(u)ϕ(v) ∈ E(G_i) if and only if i ∈ψ(E_uv).Note that the induced copy of (H, ψ) is only defined for coloring tuples, i.e., dom(ψ)=E(H). We define the induced homomorphism density for a graphon system. This generalizes the definition in <cit.> as W_1 and W_∅ correspond to W and (1-W), respectively, when the graphon system is a graphon, i.e. it has the order 1.For a coloring tuple (H, ψ) with |V(H)|=t and a graphon system 𝐖=(W_I)_I ⊆ [k] of order k, the induced homomorphism density of (H, ψ) is defined by t^*_ind, (H, ψ)(𝐖) = ∫_[0, 1]^t∏_1 ≤ i < j ≤ tW_ψ(E_ij)(x_i,x_j) ∏_i∈ [t] dx_i.Since each W_I can be written as a linear combination of W_J, the continuity of this function comes from <Ref>.For every coloring tuple (H, ψ), we have t^*_ind, (H, ψ)(𝐖) = t^*_ (H, ψ)(𝐖) - ∑_(H', ψ') t^*_ind, (H', ψ')(𝐖),where the sum is taken over all coloring tuples (H', ψ') satisfyingE(H') ⊋ E(H), V(H')=V(H) and ψ'|_E(H) = ψ. We havet^*_ (H, ψ)(𝐖) = ∫_[0, 1]^|V(H)|∏_ uv∈V(H)2 W_ψ(E_uv)(x_u,x_v) ∏_v∈ V(H) dx_v= ∫_[0, 1]^|V(H)|∏_ uv∈V(H)2∑_J ⊇ψ(E_uv)W_J(x_u,x_v) ∏_v ∈ V(H) dx_v= ∑_(H', ψ')∫_[0, 1]^|V(H)|∏_ uv∈V(H)2W_ψ'(E(H')_uv)(x_u,x_v) ∏_v∈ V(H) dx_v= ∑_(H', ψ')t^*_ind, (H', ψ')(𝐖). 
Again, the sum is taken over all coloring tuples (H', ψ') satisfyingE(H') ⊋ E(H), V(H')=V(H) and ψ'|_E(H) = ψ. This proves the lemma. We now prove the inverse counting lemma using the notion of induced density.For a sufficiently large n and two systems of graphons 𝐖=(W_I)_I ⊆ [k] and 𝐔 = (U_I)_I ⊆ [k], if the inequality |t^∗_(F, ψ)(𝐖)-t^*_(F, ψ)(𝐔)| < 2^-n^2k holds for every coloring tuple (F, ψ) with |V(F)|=n, thenδ_□(𝐖, 𝐔) < 2000 × 8^k/√(log n). By repeatedly applying Lemma <ref>, we can express t^*_ind, (F, ψ)(𝐖) as a sum of at most (2^k)^n2 terms of t^*_(F', ψ')(𝐖) for some coloring tuples (F', ψ') extending (F, ψ).So, we have|t^*_ind, (F, ψ)(𝐖)-t^*_ind, (F, ψ)(𝐔)| < 2^-n^2k× (2^k)^n2 = 2^-kn+12,for every coloring tuple (F, ψ) with |V(F)|=n.Consider a coloring tuple (F, ψ) as a graph system of order k on n vertices in the natural way. From this point of view, defineℱ_𝐖 = {(F, ψ)||V(F)|=n, δ_□(𝐖, (F, ψ))< 1000 × 8^k/√(log k).},and similiarly define ℱ_𝐔. If ℱ_𝐖∩ℱ_𝐔≠∅, we are done by the triangle inequality. Suppose on the contrary that ℱ_𝐖∩ℱ_𝐔 = ∅. Then by <Ref>,∑_(F, ψ) ∈ℱ_𝐖 t^*_ind, (F, ψ)(𝐖) = ∑_(F, ψ) ∈ℱ_𝐖ℙ(𝔾(n, 𝐖) = (F, ψ)) ≥ 1-o(1),and∑_(F, ψ) ∈ℱ_𝐖 t^*_ind, (F, ψ)(𝐔) ≤∑_(F, ψ) ∉ℱ_𝐔 t^*_ind, (F, ψ)(𝐔) ≤ o(1).Thus for sufficiently large n,∑_(F, ψ) ∈ℱ_𝐖( t^*_ind, (F, ψ)(𝐖) - t^*_ind, (F, ψ)(𝐔) ) ≥1/2.As a consequence, there exists (F, ψ) ∈ℱ_𝐖 such thatt^*_ind, (F, ψ)(𝐖) - t^*_ind, (F, ψ)(𝐔) ≥1/2|ℱ_𝐖| ≥ 2^-kn2-1 >2^-kn+12,which is a contradiction.Combining <Ref> with <Ref>, one can conclude that a sequence of graphon systems 𝐖^n converges to 𝐖 in δ_□-norm if and only if t_(F, ψ)(𝐖^n) converges to t_(F, ψ)(𝐖) for every pre-coloring tuple (F, ψ). The first one is the induced H-removal lemma. For a system of weighted graphs 𝐇, a weighted induced copy of a coloring tuple (F, ψ) in 𝐇 is an ordered tuple (a_1, …, a_t) of t=|V(F)| distinct vertices such that H_φ(E_a_ia_j)(x_a_i, x_a_j)>0 for every ij∈[t]2. An induced copy of (F, ψ) in a graph system 𝒢 is defined to be a weighted induced copy of (F, ψ) by viewing 𝒢 as a system of weighted graphs with 0-1 weight.Let (F, ψ) be a coloring tuple and 𝐖 = (W_I)_I ⊆ [k] be an admissible graphon system of order k with t_ind, (F, ψ)(𝐖)=0. Then 𝐇 = ℍ(n, 𝐖) has no weighted induced copy of (F, ψ) with probability 1. Let t be the number of vertices of F. We consider the following random process. First, select n points x_1, …, x_n in [0, 1] uniformly at random. Then select t distinct points x_a_1, …, x_a_t uniformly at random among them.Since x_a_1, …, x_a_t are distributed uniformly at random in [0, 1], t_ind, (H, ψ)(𝐖)=0 implies that the value of ∏_ij∈[t]2W_ψ(E_ij)(x_a_i, x_a_j) is zero with probability 1. Equivalently, if we choose a random sample 𝐇 and randomly select k distinct vertices, then it does not form a weighted induced copy of (F, ψ) with probability 1. This implies that 𝐇 has no weighted induced copy of (F,ψ) with probability 1. Let (F, ψ) be a coloring tuple and t=|V(F)|. For every ε>0, there exists δ>0 such that if an n-vertex graph system 𝒢 of order k contains at most δ n^t induced copies of (F, ψ), then by adding or deleting at most ε n^2 edges, one can make 𝒢 induced (F, ψ)-free. Suppose not. There is ε>0 such that for every i ∈ℤ_>0, there exists a graph system 𝒢^i=(G_1^i, …, G_k^i) with n_i vertices that has at most 1/in_i^t induced copies of (F, ψ) but cannot be made induced (F, ψ)-free by changing at most ε n_i^2 edges. As we know 1/in_i^t ≥ 1, we have n_i →∞ as i →∞. 
By passing to a subsequence, we may assume that (𝒢^i)_i ≥ 1 converges to an admissible graphon system 𝐖 = (W_I)_I ⊆ [k]. As t_ind, (F, ψ)(𝒢^i) ≤1/i + O(1/n_i), we have t_ind, (F, ψ)(𝐖)=0. By <Ref>, we may assume that d_□ (𝒢^i, 𝐖) → 0. Consider a random weighted graph system with n_i vertices ℍ(n_i, 𝐖). Then by Lemma <ref>, it has no induced copy of (F, ψ) with probability 1; by <Ref> and Lemma <ref>, δ_□( ℍ(n_i, 𝐖),𝐖 ) < 2000 × 8^k/√(log n_i) with probability 1-o(1). Thus one can choose a sequence of systems of weighted graphs (𝐇^i)_i ≥ 1 that has no induced copy of (F, ψ) and δ_□(𝐇^i,𝐖) → 0 as i→∞, where 𝐇^i= (H_I^i)_I⊆ [k]. Again by <Ref>, we may assume that d_□ (𝐇^i, 𝐖) → 0 as i→∞. Let T_I = {(x, y) ∈ [0, 1]^2 | W_I(x,y)>0}. As the value of W_I is in [0, 1] up to a measure zero set, we may consider W_I as a graphon. Then <Ref> implies that lim_i →∞∫_[0, 1]^2 (1-1_T_I)G^i_I dxdy = ∫_[0, 1]^2 (1-1_T_I) W_I dxdy =0, and lim_i →∞∫_[0, 1]^2 (1-1_T_I)H^i_I dxdy = ∫_[0, 1]^2 (1-1_T_I) W_I dxdy =0. Hence we can choose an index i such that ∫_[0, 1]^2 (1-1_T_I)G^i_I dxdy<ε/20 k × 2^k, ∫_[0, 1]^2 (1-1_T_I)H^i_I dxdy<ε/20 k × 2^k, for every I ⊆ [k]. Now, after fixing i∈ℕ, we claim that one can add or delete at most ε n_i^2 edges from 𝒢^i to make it contain no induced copy of (F,ψ), which yields a contradiction. For each uv∈[n_i]2, let I_uv={ j∈ [k]: uv∈ G_j^i } be the set of colors of the edge uv in 𝒢^i. Choose an element x_u from the interval corresponding to the vertex u independently. If H^i_I_uv(x_u, x_v) = 0, then we select an index set J_uv⊆ [k] with H^i_J_uv(x_u, x_v)>0. Since ∑_I ⊆ [k]H^i_I ≡ 1, such an index set exists with probability 1. Then we delete uv from G_ℓ^i whenever ℓ∉ J_uv; add uv whenever ℓ∈ J_uv. If H^i_I_uv(x_u, x_v) > 0, we set J_uv=I_uv. After the modification, J_uv becomes the set of colors of the edge uv in 𝒢^i and H^i_I_uv(x_u, x_v) > 0 for each edge uv. Since 𝐇^i has no induced copy of (F, ψ), neither does 𝒢^i. We only need to count the number of edges added or removed. For each I ⊆ [k], let e_I be the number of pairs of two distinct vertices u, v ∈ V(𝒢^i) such that I_uv=I and H^i_I_uv(x_u, x_v)=0. We infer that ε/10 k × 2^k>∫_[0, 1]^2 (1-1_T_I)G^i_I dxdy + ∫_[0, 1]^2 (1-1_T_I)H^i_I dxdy = 1/n_i^2 e_I. Hence ∑_I ⊆ [k] e_I <ε n_i^2/k, whence the claim. Next, we examine the behavior of π_k^∗(H) as k →∞. A similar statement was proved in <cit.> when one is interested in the sum ∑_G_i∈𝒢 |E(G_i)| of the numbers of edges in the graph system, but the situation becomes more complicated for the minimum min_G_i∈𝒢 |E(G_i)|. For any multi-graph H, the rainbow Turán density π^∗_k(H) converges to the Turán density of its simplification as k →∞, i.e., lim_k →∞π^∗_k(H) = 1-1/χ(sim(H))-1. Let π = 1-1/χ(sim(H))-1. Taking 𝐖=(W_I)_I⊆[k] to be the graphon system obtained by letting W_1=⋯=W_k=T_χ(sim(H))-1(n) be the same n-vertex balanced complete (χ(sim(H))-1)-partite Turán graph, it easily follows that π_k^∗(H) ≥π for every k. We now claim that for every ε>0 and every k ≥ |E(H)|, if π^∗_k(H) = π+ε, then π^∗_2k+|E(H)|(H) < π^∗_k(H) - ε/20|E(H)|. Since π^∗_k(H) is a monotone non-increasing function of k, the claim shows that lim_k →∞π^∗_k(H) = π as desired. In order to prove the claim, suppose that π^∗_k(H) = π+ε and let 𝐖=(W_I)_I ⊆ [m] be an admissible graphon system of order m=2k+|E(H)| with t^*_H(𝐖)=0 and t_K_2(W_i) ≥π+ε - ε/20|E(H)| for each i ∈ [m]. Choose arbitrary 2k distinct indices α_1, …, α_k, β_1, …, β_k ∈ [m] and define a graphon system 𝐔 = (U_I)_I ⊆ [k] as follows.
For each I ⊆ [k], let U_I = ∑_J ⊆ [m], |J ∩{α_i, β_i}| ≥ 1for alli ∈ IW_J.In particular, we have U_i = W_α_i+W_β_i-W_{α_i, β_i}. As 𝐖 is admissible, each U_I takes a value in [0,1] up to a measure zero set, hence it is a graphon by changing its value on a measure zero set. To see that 𝐔 is an admissible graphon system, observe thatU_I ≡∑_J ⊆ [m], |J ∩{α_i, β_i}| ≥ 1for alli ∈ I, |J ∩{α_j, β_j}| =0for allj ∉ IW_J. Assume that t_K_2(U_i) > π+ for every i ∈ [k] and for some > 0. Then t^*_H(𝐔)>0, so one can find a rainbow coloring ψ of H such thatt^*_(H, ψ)(𝐔) = ∫_[0,1]^|V(H)|∏_ uv ∈ E(sim(H)) U_ψ(E_uv)∏_v ∈ V(H) dx_v>0.By replacing each U_ψ(E_uv) by a summation of W_J, we can find a collection {J_uv⊆ [m]: uv∈ E(sim(H))} of subsets such that∫_[0,1]^|V(H)|∏_uv ∈ E(sim(H))W_J_uv(x_u, x_v) ∏_v ∈ V(H) dx_v>0,and |J_uv∩{α_i, β_i}| ≥ 1 for each i ∈ψ(E_uv).We now construct a rainbow coloring ψ':E(H) → [m] as follows. For each uv ∈ E(sim(H)), let ψ(E_uv) = {i_1, …, i_m}. For all uv ∈ E(sim(H)) and j ∈ [m], choose one from {α_i_j,β_i_j} that is contained in J_uv. If both of α_i_j and β_i_j are in J_uv, choose any of them.Define ψ' by sending elements of E_uv to the chosen indices. Since ψ is a rainbow coloring and α_1, …, α_k, β_1, …, β_k are distinct, ψ' is a rainbow coloring. Then we have t^*_H(𝐖) ≥ t^*_(H, ψ')(𝐖)= ∫_[0,1]^|V(H)|∏_ uv ∈ E(sim(H)) W_ψ'(E_uv)∏_v ∈ V(H) dx_v=∫_[0,1]^|V(H)|∏_uv ∈ E(sim(H))( ∑_I_uv: ψ'(E_uv) ⊆ I_uv⊆ [m]W_I_uv) ∏_v ∈ V(H) dx_v ≥∫_[0,1]^|V(H)|∏_ uv ∈ E(sim(H))W_J_uv(x_u, x_v) ∏_v ∈ V(H) dx_v>0,which is a contradiction as 𝐖 is chosen to satisfy t^*_H(𝐖)=0. This contradiction leads to the existence of an index i ∈ [k] such thatt_K_2(U_i) = t_K_2(W_α_i) + t_K_2(W_β_i) - t_K_2(W_{α_i, β_i}) ≤π+. We now choose a maximal collection of pairs of colors {(α_1,β_1), …, (α_s,β_s)} in [m] such that α_j,β_j are all distinct and t_K_2(W_α_i) + t_K_2(W_β_i) - t_K_2(W_{α_i, β_i}) > π+ for every i ∈ [s]. Recall that the choices of α_1, …, α_k, β_1, …, β_k were arbitrary in the above argument. So, we know that s is smaller than k. Let I_0 := [m] ∖{α_1, …, α_s, β_1,…, β_s }. By the maximality,every two-element set {α, β}⊆ I_0 satisfiest_K_2(W_α) + t_K_2(W_β) - t_K_2(W_{α,β}) ≤π+ε. Let I' be an arbitrary subset of I_0 of size |E(H)|. For any i ∈ I', we haveW_i - ∑_j ∈ I' (W_i - W_{i, j}) = ∑_J:i ∈ JW_J - ∑_j ∈ I'∑_J: i ∈ J, j ∉ JW_J ≤∑_J ⊇ I'W_J = W_I'.The penultimate inequality holds because if J does not contain I', then one of the following holds: either i∉ J or i ∈ J but j ∉ J for some other j ∈ I'. As we have assumed t_K_2(W_i)≥π +- /20|E(H)|, we knowt_K_2(W_I')≥ t_K_2(W_i) - ∑_j ∈ I'( t_K_2(W_i) - t_K_2(W_{ i,j }) )(<ref>)≥ t_K_2(W_i) -∑_j ∈ I'( π+ε - t_K_2(W_j) ) ≥ (|E(H)|+1) ( π +- /20|E(H)|) - |E(H)|(π+)≥π + /2 > π,which gives t_H(W_I')>0. Hence, for any rainbow coloring ψ:E(H) → [m] with ψ(E(H)) = I', we obtaint^*_H(𝐖) ≥ t^*_(H, ψ)(𝐖)= ∫_[0,1]^|V(H)|∏_ uv ∈ E(sim(H)) W_ψ(E_uv)∏_v ∈ V(H) dx_v ≥∫_[0,1]^|V(H)|∏_ uv ∈ E(sim(H)) W_I'∏_v ∈ V(H) dx_v= t_H(W_I') >0,as W_J ≥ W_I' for every J ⊆ I'. This is a contradiction. This proves π^∗_2k+|E(H)|(H) < π^∗_k(H) - /20|E(H)|, hence finishes the proof of the theroem.
In this study, we explore an innovative approach for neural network optimization, focusing on the application of gradient sampling techniques, similar to those in StochGradAdam, during the pruning process. Our primary objective is to maintain high accuracy levels in pruned models, a critical challenge in resource-limited scenarios. Our extensive experiments reveal that models optimized with gradient sampling techniques are more effective at preserving accuracy during pruning compared to those using traditional optimization methods. This finding underscores the significance of gradient sampling in facilitating robust learning and enabling networks to retain crucial information even after substantial reduction in their complexity. We validate our approach across various datasets and neural architectures, demonstrating its broad applicability and effectiveness. The paper also delves into the theoretical aspects, explaining how gradient sampling techniques contribute to the robustness of models during pruning. Our results suggest a promising direction for creating efficient neural networks that do not compromise on accuracy, even in environments with constrained computational resources. Keywords: Neural Networks, Optimization, Neural Processing § INTRODUCTION The rapid evolution of neural networks has been instrumental in advancing numerous applications across various fields. However, the increasing size and complexity of these models pose significant challenges, particularly in environments with constrained computational resources. This has led to an intensified focus on network pruning, a technique essential for streamlining DNNs by removing redundant weights <cit.>. Pruning not only enhances computational efficiency but also facilitates the deployment of neural networks in resource-limited settings. Despite its advantages, a primary concern with pruning is the potential loss of accuracy. This accuracy loss occurs because pruning can inadvertently remove weights that are crucial for the network's performance <cit.>. Addressing this challenge necessitates innovative approaches that can efficiently prune networks while preserving their accuracy. In this context, the application of advanced optimization techniques, such as StochGradAdam <cit.>, which incorporates gradient sampling methods, shows great promise. StochGradAdam, by selectively using a portion of gradients and setting others to zero, offers an effective way to maintain accuracy during pruning. This approach contrasts with traditional optimization methods like Adam, RMSprop, and SGD, where a more significant accuracy drop is often observed post-pruning <cit.>. Our research explores the efficacy of StochGradAdam in conjunction with pruning techniques across various architectures, including ResNet 56, 110, and 152 <cit.>. These models, known for their depth and complexity, provide a robust platform for evaluating the effectiveness of our proposed approach in maintaining accuracy during pruning. In addition to empirical evaluation, our study embarks on a comprehensive analysis to illuminate the mechanisms by which gradient sampling techniques such as those employed by StochGradAdam preserve accuracy in substantially pruned networks.
We examine the interplay between pruning rates and optimizer efficacy, offering insights into the optimization dynamics that underpin StochGradAdam's performance.As reflected in Table <ref>, prior to pruning, ResNet models optimized with StochGradAdam exhibit higher accuracy than those optimized with Adam—83.95% vs. 80.28% for ResNet-56, 85.64% vs. 82.70% for ResNet-110, and 82.02% vs. 81.61% for ResNet-152. After a 50% pruning rate is applied, models optimized with StochGradAdam maintain significantly higher accuracy levels than their Adam-optimized counterparts—62.84% vs. 33.12% for ResNet-56, 76.67% vs. 44.85% for ResNet-110, and 76.23% vs. 54.68% for ResNet-152.These results highlight the efficacy of StochGradAdam in optimizing neural networks prior to pruning, not merely in preserving a high degree of accuracy post-pruning but also in ensuring that the models remain adaptable and efficient for deployment in environments where computational resources are limited. This research adds a valuable perspective to the discourse on neural network optimization, emphasizing the integration of sophisticated optimization techniques with pruning strategies to cultivate both efficient and accurate neural networks.Building on this foundation, the subsequent sections of this paper delve into the intricacies of our research. We initiate our discourse with a detailed analysis of the methodologies employed, followed by an in-depth discussion of the experimental setup and results. In doing so, we pave the way for future research and potential advancements in this domain. Henceforth, the paper commences with a comprehensive exploration of StochGradAdam's role in fortifying neural networks against the adversities of pruning, steering us towards the ultimate goal of achieving optimal balance between efficiency and accuracy in deep learning models.§ RELATED WORKS §.§ Neural PruningThe concept of neural network pruning, vital for reducing model complexity while retaining performance, has become increasingly relevant with the surge in deep learning applications. Han et al. <cit.> pioneered the "Deep Compression" technique, integrating pruning, quantization, and Huffman coding, to significantly reduce model size without loss of accuracy. Liu et al. <cit.> furthered this field with their network slimming approach that prunes channels in convolutional layers, highlighting the effectiveness of structured pruning.The flexibility of unstructured pruning, where individual weights are removed, has been explored for its capability to fine-tune model pruning. Molchanov et al. <cit.> introduced a Taylor expansion-based criterion for this purpose, focusing on the importance of each weight in the loss function. The “Lottery Ticket Hypothesis” by Frankle and Carbin <cit.> shed light on the possibility of training pruned networks to match the accuracy of unpruned ones, provided the correct subset of weights is identified early.Structured pruning, which targets entire filters or channels for removal, aligns better with hardware efficiency. Li et al. <cit.> demonstrated substantial computational efficiency improvements with filter pruning. Yu et al.'s <cit.> neuron importance score propagation (NISP) further elucidated the process of identifying and preserving important neurons in pruning.Magnitude-Based Pruning: In this research, we use magnitude-based pruning for post-pruning process. There are several supporting related works:Davis Blalock et al. 
provide a comprehensive review of neural network pruning and highlight that Magnitude-Based Pruning is one of the simplest yet effective methods to reduce computational complexity without severely impacting the model's accuracy <cit.>. This method's effectiveness is primarily attributed to its ability to identify and eliminate redundant or less important connections within a neural network. By doing so, the pruned network requires fewer computational resources, which is particularly beneficial for deployment on devices with limited computational capabilities or where energy efficiency is a concern.Song Han et al. demonstrate that learning both weights and connections for efficient neural networks through pruning can lead to significant reductions in storage requirements and improvements in computing efficiency <cit.>. Their work shows that it's possible to decrease the size of a neural network by a significant factor without affecting its accuracy, making the model more efficient and faster in both training and inference.Pavlo Molchanov et al. explore the importance estimation for neural network pruning and provide methods that improve upon simple Magnitude-Based Pruning, indicating the foundational role that magnitude-based methods play in the field <cit.>. They emphasize that while magnitude-based criteria are simple, they form the basis for more sophisticated pruning methods and are surprisingly effective compared to other, more complex strategies.Zhuang Liu et al. delve into the value of network pruning and suggest that, in many cases, networks can be made significantly leaner without a drop in performance <cit.>. Their work reiterates the potential of Magnitude-Based Pruning in achieving this leaner architecture, highlighting the method's utility in real-world applications where model efficiency is paramount.Lastly, Namhoon Lee et al. provide a signal propagation perspective for pruning neural networks at initialization, discussing how early and simple pruning methods can predict the final performance of a network <cit.>. Their findings suggest that even early in training, Magnitude-Based Pruning can effectively identify and eliminate unnecessary weights, leading to both efficient and effective models.In summary, Magnitude-Based Pruning has been shown to effectively reduce the computational complexity, improve the speed and efficiency, and maintain or even enhance the generalization ability of neural networks. Its simplicity, coupled with its substantial impact on performance, makes it a widely adopted method in optimizing neural networks for various applications. We use this Magnitude-Based Pruning for post-prune the neural networks weights optimized with StochGradAdam<cit.>. §.§ Neural Networks OptimizationThe advent of advanced optimization techniques like Adam, developed by Kingma and Ba <cit.>, and the exploration of gradient descent methods by Ruder <cit.> have revolutionized DNN training. These methods have been crucial in adapting the learning process to the specific needs of each parameter in a model. A distinctive feature of our research is the integration of gradient sampling techniques with the pruning process, an approach that is not widely explored in existing literature. While Zhang et al. <cit.> have underscored the critical role of optimization strategies in the context of weight pruning, our methodology extends this concept by incorporating gradient sampling techniques, such as StochGradAdam <cit.>. 
This approach, which involves selectively using portions of gradients, is innovative in its application to pruning. By selectively utilizing gradients, StochGradAdam demonstrates an enhanced ability to maintain accuracy, a notable challenge when traditional optimization methods are employed for pruning tasks. This enhanced performance is partly attributed to the creation of gradient masks, a key feature of StochGradAdam, which enables a more nuanced and effective learning process. Our adaptation of this technique in the pruning context signifies a novel contribution to the field, offering potential improvements in accuracy retention during pruning, compared to conventional methods like RMSProp and Adam.Practical Implications of Pruning and Optimization: Our study is informed by seminal works in the field of neural network optimization and pruning. We introduce a novel integration of gradient sampling optimization, exemplified by techniques like StochGradAdam, with the pruning process. This integration is aimed at maintaining model accuracy after significant parameter reductions, offering a fresh perspective on neural network optimization. * Generalization and Pruning: We recognize that pruning inherently reduces a model's capacity, affecting its generalization ability. Our approach leverages this understanding, proposing that gradient sampling can help retain the essential learning capabilities of a network even when it's reduced in size. * Enhancing Model Robustness through Gradient Sampling: Introducing randomness through gradient sampling in the training process can enhance a model's robustness. This is especially beneficial during pruning, allowing the network to maintain important functionalities with fewer parameters. * Impact on Accuracy After Pruning: We investigate how gradient sampling affects a model's accuracy after pruning. We suggest that the selective use of gradients may help preserve the network's representational capacity, aiming to mitigate the usual decline in accuracy associated with conventional pruning methods. * Optimization Dynamics in Pruning: We explore how integrating gradient sampling influences the optimization trajectory during pruning. Our goal is to achieve a more effective optimization process that not only speeds up convergence but also ensures that the pruned model maintains high performance. In essence, our study contributes to the evolving discussion on neural network pruning and optimization by proposing the innovative concept of gradient sampling optimization. This approach is designed to enhance the efficiency of neural networks and ensure accuracy retention, especially in resource-constrained environments. By combining gradient sampling techniques with pruning strategies, our methodology addresses a critical gap in neural network optimization, offering a valuable solution to the challenge of balancing efficiency and performance.§ METHODThis research investigates the efficacy of combining gradient sampling optimization, as implemented in StochGradAdam, with neural network pruning to maintain high model accuracy while reducing complexity. Our methodology comprises two primary stages: training the neural network using StochGradAdam and then applying pruning to the trained model as shown in Figure <ref>.§.§ Training with StochGradAdamThe initial stage of our methodology involves training the neural network using StochGradAdam, a variant of the Adam optimizer enhanced with gradient sampling techniques. 
This approach is detailed in Algorithm <ref>, where we outline the essential steps in implementing StochGradAdam.The StochGradAdam optimizer commences with predefined parameters, such as the learning rate, beta factors for momentum and velocity, and a sampling rate that dictates the share of gradients incorporated in each update step. Its fundamental update rule is delineated by<cit.>:𝐰_t+1 = 𝐰_t - αm_corr_t/√(v_corr_t) + ϵ,wherein α signifies the learning rate, m_corr_t is the bias-adjusted average of the gradients, and v_corr_t represents the bias-adjusted average of the squared gradients.This optimizer also introduces a gradient sampling step. In this step, a probabilistic mask generated from a uniform distribution is applied to each batch of gradients, effectively allowing for a selective inclusion of gradients. This selective process is essential for the adaptive characteristic of the optimizer.Given a gradient vector 𝐠, the optimizer's goal is to ascertain which components of 𝐠 will be utilized in the update. This selection is conducted using a stochastic mask Ω. Each element of Ω is determined by a uniform random variable 𝒰(0,1):Ω_i =1if 𝒰(0,1) < s, 0otherwise,where i indexes the components of 𝐠, and s denotes the predetermined threshold that governs the proportion of gradients that are activated for updates. Upon generating the stochastic mask Ω, the next step involves computing the sampled gradient ϕ through an element-wise product with the gradient vector 𝐠:ϕ_i = Ω_i · g_i,for each component i, which ranges from 1 to d. This yields the sampled gradient as:ϕ = Ω∘𝐠,where ∘ symbolizes the element-wise multiplication. This operation ensures that only the gradient components flagged by the mask Ω are factored into ϕ.Moreover, the moving average of the gradients, indicated by m, is updated at each iteration by an exponential decay method. This technique blends a portion of the previous moving average with the current sampled gradient:m_t = β_1 m_t-1 + (1 - β_1) ϕ,where β_1 governs the rate of decay for the moving average. The adjusted decay rate at iteration t, denoted by β_1^t, is defined as:β_1^t = β_1 ·decay^t,with β_1 balancing the influence of prior gradients—higher values weight the past more heavily, whereas lower values lean towards recent data.The bias-corrected moving average m at the t^th iteration is given by:m_corr_t = m_t/1 - β_1^t,where 1 - β_1^t acts as a normalization constant to correct for the initial estimation bias. Likewise, v denotes the moving average of the squared gradients. Its updating formula is articulated as:v_t = β_2 v_t-1 + (1 - β_2) ϕ^2,where β_2 represents the exponential decay rate for the moving average of the squared gradients. Like β_1, β_2 is also time-adjusted, denoted by β_2^t, and is calculated as:β_2^t = β_2 ·decay^t,The operation ⊙ enables the element-wise squaring of each gradient component. The bias-corrected form of v at iteration t is:v_corr_t = v_t/1 - β_2^t,In this manner, both momentum (m) and velocity (v) are updated using the sampled gradients, integral to the adaptive learning nature of the optimizer. Governed by the beta parameters, these updates undergo bias correction to rectify the initial zero inclination. StochGradAdam then employs these corrected values to finely tune the network weights, ensuring a robust and efficient training pathway that preludes the subsequent phase of pruning.§.§ Pruning ProcessIn our study, we explore a pruning technique to reduce the complexity of neural networks while maintaining high accuracy. 
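Before detailing this technique, the training update of the previous subsection can be made concrete. The following is a minimal NumPy sketch of a single sampled-gradient step; it is only an illustration of the equations above, not the reference StochGradAdam implementation: the class name and state layout are ours, the time-adjusted decay of β_1 and β_2 is omitted, and ϵ is placed in the denominator following the usual Adam convention.

import numpy as np

class SampledGradAdamSketch:
    """Illustrative single-tensor optimizer following the sampled-gradient
    update described above (not the reference StochGradAdam code)."""

    def __init__(self, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, sampling_rate=0.8):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.eps, self.s = eps, sampling_rate
        self.m = None   # moving average of sampled gradients
        self.v = None   # moving average of squared sampled gradients
        self.t = 0      # step counter

    def step(self, w, g, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        if self.m is None:
            self.m, self.v = np.zeros_like(w), np.zeros_like(w)
        self.t += 1
        # Stochastic mask: keep each gradient component with probability s.
        omega = (rng.uniform(size=g.shape) < self.s).astype(g.dtype)
        phi = omega * g                                  # sampled gradient
        # Exponential moving averages of the sampled gradient and its square.
        self.m = self.beta1 * self.m + (1.0 - self.beta1) * phi
        self.v = self.beta2 * self.v + (1.0 - self.beta2) * phi ** 2
        # Bias correction and parameter update, as in Adam.
        m_hat = self.m / (1.0 - self.beta1 ** self.t)
        v_hat = self.v / (1.0 - self.beta2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

In a full training loop, one such update would be applied per mini-batch, with a fresh mask drawn independently at every step.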
The technique is Magnitude-Based Pruning, which targets weights with smaller magnitudes for removal. This method is integrated with the gradient sampling optimization used in StochGradAdam, allowing us to evaluate their effectiveness in preserving model accuracy post-pruning.§.§.§ Magnitude-Based PruningMagnitude-based pruning approach is designed to reduce the complexity of neural networks by selectively removing weights based on their magnitudes <cit.>. This method specifically targets weights at the lowest percentiles, under the assumption that these weights are less critical for the network's performance. The process is systematic and involves several key steps to ensure precise and effective pruning <cit.>: Determining the Pruning Threshold To determine the pruning threshold θ, we employ a statistical approach that involves calculating the percentile of the network's weights distribution. The process is as follows:* Let W be the set of absolute values of all weights in the neural network.* Arrange W in ascending order. Let this ordered set be W_sorted.* For a given percentile P (e.g., 40%), the threshold θ is the value in W_sorted at the position that corresponds to P% of the length of W. Mathematically, it is calculated as:θ = W_sorted(⌈P/100× |W| ⌉),where |W| is the number of weights in W, and ⌈·⌉ denotes the ceiling function, ensuring that the index is rounded up to the nearest integer.Weights Pruning Apply the pruning rule, defined mathematically as:w' = 0,if|w| < θ, w,otherwise,where w denotes the individual weights of the neural network <cit.>. This approach effectively reduces the model size while aiming to retain the most significant weights, thereby maintaining the network's performance <cit.>. The precise calculation of the pruning threshold and the systematic removal of the least significant weights ensure that the pruning process is both efficient and effective. § ANALYSIS §.§ Differential Impact on Weight MagnitudesIn the StochGradAdam algorithm, weights are updated according to the following rule <cit.>:w_t+1 = w_t - α· (Ω⊙ g_t),where Ω⊙ g_t is the element-wise product of a stochastic Bernoulli mask Ω and the gradient g_t. Each element Ω_i,t· g_i,t is a random variable, reflecting the stochastic nature of the update. The Bernoulli mask introduces variability in the updates, effectively making the update process selective. This selectivity has a differential impact on the weights, contingent on the magnitude and consistency of their gradients. For each weight component w_i, the expected update under the stochastic regime is given by:E[Ω_i,t· g_i,t] = s · g_i,t,where s is the success probability of the Bernoulli distribution, representing the expected sparsity of the updates. Weights associated with consistently large gradients (indicative of important features or directions in the loss landscape) are expected to receive larger updates on average, due to the larger expected value of their stochastic updates. Conversely, weights associated with smaller gradients are expected to receive smaller updates on average, potentially leading to their eventual diminishment or pruning.To understand the implications of this update mechanism, consider the dynamics over multiple iterations. The cumulative impact on a weight can be analyzed by summing the expected updates over time:E[w_i,T] = w_i,0 - α∑_t=1^T s · g_i,t.This expression highlights that the overall expected change in a weight is directly proportional to the sum of its gradients over time, scaled by α and s. 
Additionally, the variance in the updates contributes to the divergence in weight magnitudes. The variance of the stochastic update for each weight is:Var(Ω_i,t· g_i,t) = s(1-s)g_i,t^2.This variance reflects the uncertainty in the updates due to the stochastic nature of Ω. Over time, this can lead to a divergence in the magnitudes of weights, with some becoming significantly larger and others smaller, depending on their respective gradient signals.The stochastic gradient sampling in StochGradAdam leads to a differential impact on weight magnitudes. Weights associated with strong, consistent gradient signals are likely to be preserved or enhanced, while those with weaker or less consistent signals are more likely to be diminished. This inherent bias in the update mechanism towards important features can facilitate more efficient learning and align well with magnitude-based pruning strategies, where the less significant weights are pruned away. §.§ Expected Allocation of Weight MagnitudesIn the StochGradAdam algorithm, the expected allocation of weight magnitudes is affected by the stochastic nature of the gradient updates due to the Bernoulli mask Ω. This effect is significant as it leads to differential updating of weights, depending on the magnitude and consistency of their corresponding gradients.Consider the weight update rule over T iterations focusing on the absolute magnitude of weights:E[|w_i,T|] = |w_i,0| - α∑_t=1^T E[|Ω_i,t· g_i,t|],Here, |w_i,0| is the initial absolute magnitude of weight w_i, and α is the learning rate. The term E[|Ω_i,t· g_i,t|] reflects the expected absolute update to the weight at each iteration.The expected absolute value of the stochastic update for each weight component w_i is influenced by the Bernoulli mask Ω_i,t and the gradient g_i,t:E[|Ω_i,t· g_i,t|] = s · |g_i,t|,This equation captures the average effect of the stochastic updates, indicating that the expected absolute update is directly proportional to the absolute gradient, scaled by the probability s of the Bernoulli mask.Accumulating these expected updates over time reflects how the expected magnitude of weights changes in relation to the strength and consistency of their gradients:E[|w_i,T|] = |w_i,0| - α∑_t=1^T s · |g_i,t|.This formulation shows that the expected magnitude of the weight after T iterations is influenced by a cumulative effect of scaled gradient magnitudes. The direct proportionality between E[|w_i,T|] and s · E[|g_i,t|] underlines an important aspect of the StochGradAdam algorithm: weights corresponding to stronger, more consistent gradients tend to be preserved or even enhanced, while those with weaker or more sporadic gradients tend to diminish over iterations:E[|w_i,T|] ∝ s ·∑_t=1^T |g_i,t|.This proportional relationship confirms the expected allocation of weight magnitudes due to stochastic gradient sampling. It highlights the role of the Bernoulli mask in selectively reinforcing or diminishing weights based on the observed gradient signals.The expected allocation of weight magnitudes in StochGradAdam provides insight into the algorithm's capacity to adaptively focus the model's complexity on more relevant features. It also underscores the potential of stochastic gradient sampling in promoting sparsity and efficiency in the model by differentially adjusting the weight magnitudes according to the importance of the features they represent. 
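Before examining how these dynamics interact with the pruning threshold, the magnitude-based procedure of the previous section can be made concrete. The sketch below, under the assumption that the network weights are available as a list of NumPy arrays, computes the percentile threshold θ and zeroes the weights below it; the function name is ours.

import numpy as np

def magnitude_prune(weights, percentile=40.0):
    # Zero all weights whose absolute value lies below the given percentile,
    # with the threshold computed over the pooled absolute weights as above.
    all_abs = np.concatenate([np.abs(w).ravel() for w in weights])
    w_sorted = np.sort(all_abs)
    idx = int(np.ceil(percentile / 100.0 * all_abs.size)) - 1   # 1-based position -> 0-based index
    theta = w_sorted[max(idx, 0)]
    return [np.where(np.abs(w) < theta, 0.0, w) for w in weights]

For example, magnitude_prune([W1, W2], percentile=50.0) removes roughly half of the parameters across the two layers while leaving the larger-magnitude weights untouched.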
§.§ Implications for Magnitude-Based PruningThe StochGradAdam algorithm's approach to gradient updates inherently creates a favorable setting for magnitude-based pruning. By selectively applying updates based on the strength and consistency of gradient signals, the algorithm naturally differentiates weights in their significance and magnitude, aligning with the goals of pruning strategies.The stochastic nature of updates, characterized by the Bernoulli mask Ω, leads to a non-uniform impact on weights. The variance and expectation of the updates are critical in determining which weights grow in magnitude and which diminish, setting the stage for effective pruning:Δ w_i,t = -α (Ω_i,t· g_i,t), Var(Δ w_i,t) = α^2 s(1-s)g_i,t^2,This variance and the cumulative impact over time contribute to a spectrum of weight magnitudes, facilitating the identification of less important weights for pruning. The probability of a weight's retention post-pruning is directly influenced by its final magnitude relative to the pruning threshold θ. This probability is determined by the weight's distribution, shaped by the stochastic and cumulative nature of the updates:P(w_i,T≥θ) = ∫_θ^∞ f(w_i,T) dw_i,T,Understanding this distribution is paramount in setting an effective pruning threshold that balances model complexity with performance.In practice, the implementation of pruning in the context of StochGradAdam requires careful consideration of the distribution of weight magnitudes, the selection of θ, and potentially the timing of pruning. Different strategies might be employed, such as pruning periodically throughout training or after the model has converged.The StochGradAdam algorithm's differential impact on weight magnitudes makes it particularly suitable for magnitude-based pruning strategies. By understanding and leveraging the nature of stochastic updates, practitioners can design more effective pruning strategies, leading to leaner, more efficient models without significantly sacrificing performance. § RESULTSPrior to embarking on the pruning process, we conducted an initial training phase using the CIFAR-10 dataset to evaluate the learning efficacy of two distinct optimizers: StochGradAdam and Adam. CIFAR-10, a benchmark dataset comprising 60,000 32x32 color images across 10 classes, serves as an ideal proving ground for gauging the performance of neural network optimization due to its balanced variety of input features and complexity.Experimental Setting: In this critical phase of our research, we meticulously set up our experiments to evaluate the efficacy of StochGradAdam in neural network pruning, specifically focusing on its performance compared to the conventional Adam optimizer. Our experimental setup is detailed as follows: * Hardware Configuration: All experiments were conducted using an NVIDIA RTX 4080, providing a robust platform for deep learning computations.* Batch Size: We maintained a batch size of 128 across all experiments to ensure consistency and comparability.* Training Epoch: We set the number of epochs of 200 epochs across all experiments* Optimizers: * StochGradAdam: * Learning Rate (lr): 0.01* Sampling Rate: 0.8* Adam: * Learning Rate (lr): 0.01 By maintaining equivalent learning rates across both optimizers, we ensured a fair comparison while isolating the impact of StochGradAdam's unique sampling rate on the overall performance. 
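For reference, the comparison setting above can be collected in a small configuration object. This is only a sketch of our reading of the setup; the key names are our own and do not correspond to any released code.

experiment_config = {
    "dataset": "CIFAR-10",
    "hardware": "NVIDIA RTX 4080",
    "models": ["ResNet-56", "ResNet-110", "ResNet-152"],
    "batch_size": 128,
    "epochs": 200,
    "optimizers": {
        "StochGradAdam": {"lr": 0.01, "sampling_rate": 0.8},
        "Adam": {"lr": 0.01},
    },
}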
Our primary objective was to observe the improvements in pruning efficiency and accuracy preservation attributed to the stochastic nature of gradient selection in StochGradAdam. The subsequent results section delves into the performance metrics and insights drawn from these experimental setups, illustrating the comparative advantages of using StochGradAdam over traditional optimization methods in the specific context of neural network pruning.Training Results: In assessing the optimization capabilities of StochGradAdam, we directed our investigation towards its application within various ResNet architectures during the learning phase. The results, delineated in Figure <ref>, provide a clear comparison of test accuracy between the StochGradAdam and the traditional Adam optimizer across different depths of ResNet models throughout 200 training epochs.The comparative analysis reveals that StochGradAdam achieves superior test accuracy relative to Adam, a trend that is consistently observed across ResNet-56, ResNet-110, and ResNet-152 models. This improvement in accuracy by StochGradAdam highlights its potential in effectively handling the complex optimization landscapes presented by deep neural networks.According to the Table <ref>, the side-by-side comparison reveals a clear edge for StochGradAdam in terms of test accuracy across all examined models. Specifically, ResNet-56 trained with StochGradAdam achieved a test accuracy of 83.95%, surpassing Adam's 80.28% by a notable margin. In deeper networks, the trend persisted with ResNet-110 reaching 85.64% accuracy under StochGradAdam, compared to 82.70% with Adam. The largest model, ResNet-152, also demonstrated StochGradAdam's efficiency, recording a test accuracy of 82.02%, which is higher than Adam's 81.61%. These figures not only affirm the superior performance of StochGradAdam but also underscore its robustness in effectively managing the complexities of deeper neural network architectures.Weight Analysis: The histograms presented in Figure <ref> showcase the weight distributions across all layers for ResNet architectures under different optimization strategies trained during 200 epochs. For ResNet-56, ResNet-110, and ResNet-152, trained with the conventional Adam optimizer, the distributions are tightly centered around zero. This concentration suggests a regularization effect where most weights are kept small, aligning with the typical expectations from an Adam optimization<cit.>.In stark contrast, the ResNet model trained with the StochGradAdam optimizer exhibits a notably wider spread in the distribution of weight values. This observation is in harmony with our theoretical analysis, which posits that StochGradAdam, through its stochastic update rule, would induce a differential impact on the weights. The wider distribution signifies that weights associated with more significant features—those with consistently large gradients—have been accentuated, while those with smaller gradient contributions have been diminished. Such a distribution is indicative of the selective reinforcement and attenuation performed by StochGradAdam and mirrors the expected behavior of fostering a more expressive and feature-focused model.The implications of these findings are twofold. First, they validate the theoretical propositions regarding the behavior of StochGradAdam in practice. 
Second, they provide empirical evidence that supports the use of StochGradAdam in settings where the differential importance of features must be captured distinctly within the weight distributions, particularly in magnitude-based pruning scenarios. The broader weight magnitudes endorse the algorithm's potential in enhancing model robustness and feature representation, which can be pivotal in achieving superior generalization in deep learning models. Pruning Results: Following the comparative analysis (Table <ref>) and the observed performance advantage of StochGradAdam in training phases, we extend our examination to the domain of network pruning. Our findings substantiate that models trained with StochGradAdam not only exhibit initial superior test accuracy but also maintain a markedly higher performance post-pruning compared to those optimized with Adam.When the models are subjected to a 50% reduction in parameters, the resilience of StochGradAdam becomes particularly evident. The pruned ResNet-56 model, for example, achieves a test accuracy of 62.84%, more than twice the 33.12% managed by its Adam-trained equivalent. Similarly, for the more complex ResNet-110 and ResNet-152 architectures, StochGradAdam continues to demonstrate its superior optimization capability with post-pruning accuracies of 76.67% and 76.23% respectively, significantly outperforming the Adam optimizer which results in accuracies of 44.85% and 54.67%. These results unequivocally indicate that StochGradAdam confers enhanced robustness to neural networks, ensuring that they remain highly accurate even when pruned, thereby setting a new benchmark for optimization strategies in the context of model compression.Figure <ref> shows confusion matrices for pruned neural networks at various pruning rates (0%, 30%, 40%, 50%, and 60%) correlate with our study's findings on the robustness of StochGradAdam optimization in the pruning process of neural networks. These matrices offer a visual assessment of each model's classification performance, utilizing optimizers such as Adam and StochGradAdam, with the diagonal elements representing correct predictions. It is evident from the matrices that as the pruning rate increases, models optimized with StochGradAdam maintain a higher density of correct classifications along the diagonal, compared to those optimized with Adam.This observation aligns with the empirical results presented in our research, which highlight StochGradAdam's superior performance in maintaining accuracy during and after the pruning process. The matrices serve as a practical illustration of this performance difference. For instance, in the case of a 50% pruning rate, the models optimized with StochGradAdam exhibit a significantly lesser decline in performance. This is characterized by a higher concentration of correct predictions in the confusion matrix's diagonal, indicating a robustness to pruning that is not as pronounced in models optimized with Adam.The preservation of accuracy by StochGradAdam, as visually supported by the confusion matrices, offers insightful implications for neural network optimization strategies, particularly in resource-constrained environments where model compactness and efficiency are crucial. The ability of StochGradAdam to retain crucial information and maintain essential network functionalities post-pruning positions it as a promising optimization technique for developing efficient neural networks without substantial performance degradation. 
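The pruning-rate sweep reported above can be reproduced in outline by re-applying the magnitude criterion at increasing percentiles and measuring test accuracy after each step. The helper below is a sketch: prune_fn stands for a magnitude-pruning routine such as the one sketched earlier, and evaluate is a placeholder for a standard test-set accuracy computation.

def accuracy_vs_pruning(weights, prune_fn, evaluate, rates=(0, 30, 40, 50, 60)):
    # Apply magnitude pruning at each rate (in percent) and record test accuracy.
    results = {}
    for rate in rates:
        pruned = prune_fn(weights, rate) if rate > 0 else weights
        results[rate] = evaluate(pruned)
    return results

Running this sweep for models trained with StochGradAdam and with Adam yields the accuracy-versus-pruning-rate comparison summarized by the confusion matrices discussed above.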
§ DISCUSSIONThe overarching findings from our experimental and comparative analyses illuminate the exceptional robustness of StochGradAdam when applied to neural network pruning. As evidenced in Table <ref>, models trained using StochGradAdam not only commence with an initially superior test accuracy but, crucially, they sustain a remarkably higher performance following significant parameter reductions. This sustained performance advantage is vividly apparent when models undergo a 50% pruning rate, at which point the resilience of StochGradAdam becomes starkly evident.For example, the pruned ResNet-56 model retains a test accuracy of 62.84%, which is substantially higher than the 33.12% managed by the Adam-optimized counterpart, indicating more than a mere preservation of accuracy—it signifies a doubling of performance retention. This pattern is consistent across more complex architectures such as ResNet-110 and ResNet-152, where StochGradAdam achieves post-pruning accuracies of 76.67% and 76.23%, respectively, far outstripping the Adam-optimized models which only reach 44.85% and 54.67%.These empirical results are not simply numerical victories; they signify a paradigm shift in the optimization strategies within the realm of model compression. StochGradAdam ensures that neural networks remain highly accurate even after aggressive pruning, challenging and potentially redefining the existing benchmarks for pruning methodologies.The implications of this study are manifold. Firstly, it suggests that the adoption of advanced gradient sampling techniques can play a pivotal role in the future of neural network optimization, especially in the context of model compression. Secondly, it provides an empirical foundation for integrating sophisticated optimization techniques like StochGradAdam with pruning strategies, which can yield networks that are both efficient and highly accurate—a critical requirement in the age of ubiquitous and resource-constrained computing.Moreover, these findings underscore the importance of considering optimizer choice as a fundamental aspect of network design, particularly when aiming for models that can maintain high accuracy in a compressed state. As we move forward, the insights garnered from StochGradAdam's performance could inform the development of new, even more, effective pruning strategies that further balance the trade-off between model size, computational efficiency, and accuracy retention. §.§ LimitationWhile our study has showcased the formidable capabilities of StochGradAdam, particularly in optimizing Residual Networks such as the ResNet branches, it is important to acknowledge the limitations inherent in our experimental scope. The remarkable performance enhancements attributed to StochGradAdam have been predominantly demonstrated within the confines of ResNet architectures, which, by their nature, may be particularly amenable to the optimizer's gradient sampling techniques.This focus on ResNet models presents a constraint on the generalizability of our findings. ResNet architectures are characterized by their unique residual connections, which facilitate the training of deeper networks by alleviating the vanishing gradient problem. It is conceivable that these structural properties synergize well with StochGradAdam's approach, leading to the pronounced performance gains we observed. 
However, the extent to which these benefits translate to other architectures—such as densely connected networks, convolutional networks without residual connections, or recurrent neural networks—remains less clear.Thus, while the study confirms StochGradAdam's efficacy in the context of ResNet architectures, further research is warranted to explore its performance across a broader spectrum of neural network designs. Investigations into other architectures would not only validate the optimizer's versatility but could also reveal new insights into the optimizer's behavior in different topological contexts.In future work, we aim to extend our analysis to include a variety of network types, thus providing a more comprehensive evaluation of StochGradAdam's performance. This will enable us to ascertain whether the advantages observed with ResNet architectures are ubiquitous across different network structures, or if they are a product of specific architectural compatibilities.§ CONCLUSIONSIn this study, we have explored the robustness of neural network pruning techniques with a particular focus on the StochGradAdam method. Our extensive experiments demonstrate that StochGradAdam significantly enhances the robustness and performance of pruned neural networks compared to traditional techniques. Starting with superior initial test accuracy, networks trained with StochGradAdam maintain high performance levels even after substantial pruning. This resilience to parameter reduction underscores the efficiency and potential of StochGradAdam in neural network optimization.Moreover, our findings illuminate the critical balance between model compactness, computational efficiency, and accuracy retention, which is paramount in the development of lightweight yet powerful neural architectures. While our research primarily focuses on ResNet architectures, the implications of our results are broad, suggesting that advanced pruning techniques can be universally beneficial across various neural network designs.As we advance, it will be crucial to explore the application of techniques like StochGradAdam in a wider array of architectures and domains. Further research should also investigate the integration of such methods with other model compression techniques to enhance performance and efficiency even further.In conclusion, our study contributes to the growing field of neural network pruning by confirming the effectiveness and robustness of StochGradAdam. This work lays the groundwork for future explorations into more sophisticated and effective pruning strategies, driving forward the possibilities for efficient and robust neural networks.plain
http://arxiv.org/abs/2312.16020v1
{ "authors": [ "Juyoung Yun" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231226121922", "title": "Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks" }
S2M: Converting Single-Turn to Multi-Turn Datasets for Conversational Question Answering Baokui Li^a;†, Sen Zhang^a;†, Wangshu Zhang^b, Yicheng Chen^b, Changlin Yang^b, Sen Hu^b, Teng Xu^b, Siye liu^b, and Jiwei Li^c;*^aSchool of Software Technology, Zhejiang University ^bAnt Group ^cCollege of Computer Science and Technology, Zhejiang University ==============================================================================================================================================================================================================================================================================fancy A general asynchronous alternating iterative model is designed, for which convergence is theoretically ensured both under classical spectral radius bound and, then, for a classical class of matrix splittings for 𝖧-matrices. The computational model can be thought of as a two-stage alternating iterative method, which well suits to the well-known Hermitian and skew-Hermitian splitting (HSS) approach, with the particularity here of considering only one inner iteration. Experimental parallel performance comparison is conducted between the generalized minimal residual (GMRES) algorithm, the standard HSS and our asynchronous variant, on both real and complex non-Hermitian linear systems respectively arising from convection-diffusion and structural dynamics problems. A significant gain on execution time is observed in both cases.Asynchronous iterations; alternating iterations; Hermitian and skew-Hermitian splitting; non-Hermitian problems; parallel computing § INTRODUCTION Many applications in scientific computing and engineering lead to the following system of linear equations, Ax=b, A∈ℂ^n× n, b∈ℂ^n. Let A = M - N and A = F - G be two splittings of A with M and F being nonsingular. The alternating iterative scheme for solving (<ref>) is defined as follows, {[Mx^k+1/2 = Nx^k + b,;Fx^k+1 = Gx^k+1/2 + b, ]. which can be viewed as a stationary iterative scheme with an iteration matrix F^-1GM^-1N. Well-known early examples include the symmetric successive over-relaxation (SSOR) method <cit.> and the alternating direction implicit (ADI) methods <cit.>. In <cit.> the convergence of some alternating iterations were analyzed by eliminating the intermediate solution term x^k+1/2 from (<ref>); see also <cit.>. Recently, there has been growing interest in studies of the Hermitian and skew-Hermitian splitting (HSS) method <cit.> for solving (<ref>) when A is non-Hermitian. Let α>0 be a given constant. The HSS method can be written in the form {[(α I + H)x^k+1/2 = (α I - S)x^k + b,;(α I + S)x^k+1 = (α I - H)x^k+1/2 + b, ]. where H=(A+A^𝖧)/2 and S=(A-A^𝖧)/2 are the Hermitian and skew-Hermitian parts of A, respectively, and I is the identity matrix. Here, A^𝖧 denotes the conjugate transpose of A. This method can be obtained from (<ref>) by defining [M := α I + H,;F := α I + S. ] It was proved in <cit.> that when H is positive definite, namely, A is non-Hermitian positive definite, HSS converges unconditionally to the unique solution x^* for any initial guess x^0. The linear subsystems, however, especially the one involving α I + S, may still be difficult to solve, therefore much attention has been devoted to the inexact implementation. More precisely, the tolerances for the inner iterative solvers may be relatively relaxed, while good convergence properties can still be retained according to numerical experiments; see <cit.>. 
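For concreteness, the exact HSS iteration recalled above can be written in a few lines of NumPy using dense solves. This is purely illustrative: for large problems the two shifted systems would instead be handled by inexact inner iterative solvers, as discussed above.

import numpy as np

def hss_iteration(A, b, alpha, x0, n_iter=100, tol=1e-6):
    # Exact HSS sweeps with dense solves, following the two half-steps above.
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)                 # Hermitian part
    S = 0.5 * (A - A.conj().T)                 # skew-Hermitian part
    I = np.eye(n, dtype=A.dtype)
    x = x0.copy()
    for _ in range(n_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x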
The HSS iterative scheme has been generalized to other splitting methods, as well as their preconditioned variants, for handling various problems in scientific computing; see, e.g., <cit.>. There is also a number of studies on the optimal selection of α; see <cit.>. The iterative scheme (<ref>) can be equivalently written in a residual-updating form, which achieves a higher accuracy at the cost of more computational effort; see <cit.> for a detailed discussion.Parallel computing could be extremely useful when A has large dimension. In practice, the high cost of synchronization relative to that of computation is currently the major bottleneck in high-performance distributed computing systems, which motivates redesigning of parallel iterative algorithms. One of the most interesting approaches, arising from basic relaxation methods, is the so-called asynchronous iterations <cit.>. Asynchronous iterative scheme gives a full overlapping of communication and computation. Every process has the flexibility to work at their own pace without waiting for the data acquisition. A major difference between synchronous and asynchronous iterations lies in their predictability properties. The former produces deterministic sequence of iterations, while the latter enables nondeterministic behaviors. In <cit.> the first convergence result was established for the solution of linear systems, which was followed by the investigation of general fixed-point iterative models; see <cit.>. In recent years, with the advent of very high-performance computing environment, asynchronous iterative scheme has gained much popularity. The study of asynchronous domain decomposition methods, in both time and space domains, becomes an increasingly active area of research; see, e.g., <cit.>. Another area that has seen growth in the last decades is the asynchronous convergence detection; see <cit.> and the references therein.In this paper we focus on the asynchronous formulation of alternating iterations. In Section <ref>, we recall some general tools and the asynchronous iterations theory used for the formulation and the convergence analysis of our asynchronous alternating scheme. Section <ref> presents the main contribution where we formulate our asynchronous alternating scheme and sufficient conditions for its convergence. Section <ref> is devoted to numerical experiments on a parallel computing platform, featuring both a real three dimensional convection-diffusion problem and a complex two dimensional structural dynamic problem. Finally, Section <ref> gives our conclusions. § GENERALITIES§.§ 𝖧-matrix and 𝖧-splitting In a general manner, let 𝒜_i,j denote the entry of a matrix 𝒜 on its i-th row and j-th column, and let x_i denote the i-th entry of a vector x. Comparisons <, ≤, >, ≥ and = between two matrices or vectors (of same shapes) are entrywise. The absolute value (or module) |𝒜| of a matrix or a vector 𝒜 is entrywise. The spectral radius of a matrix 𝒜 is designated by ρ(𝒜). In expressions like 𝒜 < 0 and like x < 0 with 𝒜 and x being a matrix and a vector, respectively, 0 indicates a matrix and a vector, respectively, with all entries being 0. 
I stands for the identity matrix.We recall now few general tools later used for the convergence analysis of the proposed asynchronous iterative method.A square matrix 𝒜 is an 𝖬-matrix if and only if∃ α∈ℝ: α I - 𝒜≥ 0, α > ρ(α I - 𝒜).The comparison matrix ⟨𝒜⟩ of a matrix 𝒜is defined as⟨𝒜⟩_i,i := |𝒜_i,i|, ⟨𝒜⟩_i,j := -|𝒜_i,j|,ij.A square matrix 𝒜 is an 𝖧-matrix if and only if its comparison matrix ⟨𝒜⟩ is an 𝖬-matrix. A square matrix 𝒜 is an 𝖧-matrix if and only if∃ u > 0 : ∀ i,|𝒜_i,i| u_i > ∑_ji |𝒜_i,j| u_j. This is directly implied by Theorem 5' in <cit.>. A splitting 𝒜 = ℳ - 𝒩 of a matrix 𝒜 consists of identifying a nonsingular matrix ℳ and the resulting matrix 𝒩 = ℳ - 𝒜, so as to define a relaxation operator ℳ^-1𝒩 = I - ℳ^-1𝒜.A splitting 𝒜 = ℳ - 𝒩 is an 𝖧-splitting if and only if ⟨ℳ⟩ - |𝒩| is an 𝖬-matrix. Let 𝒜 = ℳ - 𝒩 be an 𝖧-splitting. Then, we have ρ(|I - ℳ^-1𝒜|) < 1.This directly follows from Proof of Theorem 3.4 (c) in <cit.>. Let 𝒜 be a square matrix. Then, we haveρ(|𝒜|) < 1 ∃ w > 0 :𝒜_∞^w < 1, 𝒜_∞^w := max_i1/w_i∑_j |𝒜_i,j| w_j.§.§ Asynchronous iterations Consider, again, the linear system (<ref>), a splitting A = M - N of the matrix A and the resulting iterative schemex^k+1 = (I - M^-1 A) x^k + M^-1 b = x^k + M^-1(b - A x^k).Assume a distributionA = [ A^(1); A^(2); ⋮; A^(m) ],b = [ b^(1); b^(2); ⋮; b^(m) ],M = [ M^(1) 0 ⋯ 0; 0 M^(2) ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ⋯ 0 M^(m) ]of both the system and the splitting of A. Note that the problem (<ref>) can also corresponds to an augmented system resulting from a domain decomposition with overlapping subdomains, i.e., some rows in a submatrix A^(s_1) are possibly replicated in another submatrix A^(s_2), s_1, s_2 ∈{1, …, m}. A classical parallel relaxation is then given byx^(s),k+1= x^(s),k + M^(s)^-1(b^(s) - A^(s)[ x^(1),k ⋯ x^(m),k ]^𝖳) ∀ s ∈{1, …, m}, = x^(s),k + M^(s)^-1(b^(s) - ∑_q=1^m A^(s,q) x^(q),k) ∀ s ∈{1, …, m}with A^(s) = [ A^(s,1) ⋯ A^(s,m) ]. The first feature of asynchronous iterations is the free steering (see, e.g., <cit.>), where, at each iteration k, a random subset Ω_k⊂{1, …, m} of block-components can be updated. It is convenient to state a natural assumption,card{k ∈ℕ : s ∈Ω_k} = ∞∀ s ∈{1, …, m},which is implemented by the fact that no block-component stops being updated until convergence is globally reached. The second feature consists of modeling communication delays implying that at an iteration k+1, a block-component s_1 ∈Ω_k is possibly updated using a block-component s_2 ∈{1, …, m} computed at a random previous iteration δ_s_1(s_2, k) ≤ k. It yields the parallel iterative scheme x^(s),k+1 = {[ x^(s),δ_s(s,k) + M^(s)^-1(b^(s) - ∑_q=1^m A^(s,q) x^(q),δ_s(q,k)) ∀ s ∈Ω_k,; x^(s),k ∀ s ∉Ω_k, ]. where, as well, another natural assumption is made, stating thatlim_k →∞δ_s_1(s_2,k) = ∞∀ s_1, s_2 ∈{1, …, m}. An asynchronous iterative method (<ref>) converges from any initial guess x^0, with any sequence {Ω_k}_k ∈ℕ and any functions δ_1 to δ_m if and only if ρ(|I - M^-1 A|) < 1. The model (<ref>) was later generalized by Baudet <cit.> to arbitrary fixed-point iterations x^(s),k+1 = {[f^(s)(x^(1),δ_s,1(1,k), …, x^(m),δ_s,1(m,k),.; .…, x^(1),δ_s,p(1,k), …, x^(m),δ_s,p(m,k))∀ s ∈Ω_k,;x^(s),k∀ s ∉Ω_k, ]. where the update of a block-component s ∈Ω_k at an iteration k depends on p ∈ℕ versions, δ_s,1(q,k) to δ_s,p(q,k), of each block-component q ∈{1, …, m}. Let us denote by max (x, y) the vector given by(max (x, y))_i := max{x_i, y_i}with x and y being two vectors of same size. 
Let X := (X_1, …, X_p) and Y := (Y_1, …, Y_p) denote collections of p vectors, i.e.,X_t^ = [ X_t^(1) ⋯ X_t^(m) ]^𝖳,Y_t^ = [ Y_t^(1) ⋯ Y_t^(m) ]^𝖳,t ∈{1, …, p}. An asynchronous iterative method (<ref>) converges from any initial guess x^0, with any sequence {Ω_k}_k ∈ℕ and any functions δ_1,1 to δ_m,p if there exists a square matrix 𝒫 such that 𝒫≥ 0, ρ(𝒫) < 1 and∀ X, Y, |f(X) - f(Y)| ≤𝒫max(|X_1 - Y_1|, …, |X_p - Y_p|). § ASYNCHRONOUS ALTERNATING ITERATIONS§.§ Computational scheme Consider, now, the alternating scheme (<ref>) which results inx^k+1= (I - F^-1 A) x^k+1/2 + F^-1 b = (I - F^-1 A) (I - M^-1 A) x^k + (I - F^-1 A) M^-1 b + F^-1 b = (I - F^-1(M + F - A) M^-1 A) x^k + F^-1(M + F - A) M^-1 b.Then, according to Theorem <ref>, such an induced parallel scheme is asynchronously convergent if ρ(|I - F^-1(M + F - A) M^-1 A|) < 1, which is shown, in the next section, to be achieved under usual convergence conditions on the splittings A = M - N and A = F - G. Nevertheless, asynchronous relaxation based on such an operator cannot be implemented using the alternating form (<ref>), since the said operator is induced by strictly synchronizing x^k+1/2 and x^k+1.Consider, then, an equivalent formulation of the alternating scheme (<ref>),{[y^k := x^k + M^-1(b - A x^k),;x^k+1= y^k + F^-1(b - A y^k), ].and assume that F is distributed as M, i.e.,F = [ F^(1) 0 ⋯ 0; 0 F^(2) ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ⋯ 0 F^(m) ].Parallel asynchronous alternating methods are thus given by the computational scheme {[y^(s),k := x^(s),δ_s(s,k); + M^(s)^-1(b^(s) - ∑_q=1^m A^(s,q) x^(q),δ_s(q,k)) ∀ s ∈{1, …, m},;x^(s),k+1= {[ y^(s),δ_s(s,k); + F^(s)^-1(b^(s) - ∑_q=1^m A^(s,q) y^(q),δ_s(q,k))∀ s ∈Ω_k,;x^(s),k∀ s ∉Ω_k. ]. ]. Assuming that the identity matrix I is distributed as A, i.e.,I = [ I^(1,1) ⋯ I^(1,m); ⋮ ⋱ ⋮; I^(m,1) ⋯ I^(m,m) ],it yieldsx^(s),k+1= ∑_q=1^m(I^(s,q) - F^(s)^-1 A^(s,q)) y^(q),δ_s(q,k) + F^(s)^-1 b^(s) = ∑_q=1^m(I^(s,q) - F^(s)^-1 A^(s,q)) (∑_r=1^m(I^(q,r) - M^(q)^-1 A^(q,r)) x^(r),δ_q(r,δ_s(q,k)).. + M^(q)^-1 b^(q)) + F^(s)^-1 b^(s),which actually lies in the framework of the generalized model (<ref>) with, here, p = m, since each update of a block-component depends on m versions of the other block-components. Considering, then, a collection X = (X_1, …, X_m) of m vectors, the corresponding mapping f is given byf^(s)(X) := ∑_q=1^m(I^(s,q) - F^(s)^-1 A^(s,q)) (∑_r=1^m(I^(q,r) - M^(q)^-1 A^(q,r)) X_q^(r).. + M^(q)^-1 b^(q)) + F^(s)^-1 b^(s) = ∑_q=1^m P_q^(s) X_q + (I^(s) - F^(s)^-1 A^(s)) M^-1 b + F^(s)^-1 b^(s),f(X) := ∑_q=1^m P_q X_q + (I - F^-1 A) M^-1 b + F^-1 bwith P_q^(s) := (I^(s,q) - F^(s)^-1 A^(s,q)) (I^(q) - M^(q)^-1 A^(q)),q,s ∈{1, …, m}, and P_q := [ P_q^(1) ⋯ P_q^(m) ]^𝖳,q ∈{1, …, m}.§.§ Convergence conditions We analyze, now, sufficient conditions for the convergence of our asynchronous alternating iterative scheme (<ref>). To the best of our knowledge, Lemma <ref>, Proposition <ref> and Corollary <ref> are new. Proposition <ref> and Corollary <ref> highlight how combining properties of the operators I - F^-1 A and I - M^-1 A imply a resulting contracting operator (I - F^-1 A)(I - M^-1 A). 
Our main results consist of Theorem <ref> and Corollary <ref> where the same combined conditions are shown to be sufficient for the convergence of asynchronous alternating methods (<ref>), despite the induced, slightly different, iterations operator.Let, first, 𝒜 be a matrix with arbitrary shape, let w be a vector with as many entries as the number of columns in 𝒜, and let v be a vector with as many entries as the number of rows in 𝒜, and with no 0 entry. Let τ(𝒜, w, v) denote the vector given by the row-sumsτ_i(𝒜, w, v) := (τ(𝒜, w, v))_i := 1/v_i∑_j|𝒜_i,j| w_j∀ i.Note, then, that, for a square matrix 𝒜,𝒜_∞^w = max_iτ_i(𝒜, w, w),w > 0. Let 𝒜 and ℬ be matrices with shapes such that 𝒜ℬ is calculable. Let u > 0, v > 0 and w be vectors with dimensions such that τ(𝒜, u, v) and τ(ℬ, w, u) are calculable. Then, we haveτ(ℬ, w, u) < [ 1 1 ⋯ 1 ]^𝖳τ(𝒜ℬ, w, v) < τ(𝒜, u, v). Let us index rows and columns of 𝒜 by i and j, respectively, and columns of ℬ by l. We haveτ_i(𝒜ℬ, w, v) := 1/v_i∑_l|(𝒜ℬ)_i,l| w_l= 1/v_i∑_l|∑_j𝒜_i,jℬ_j,l| w_l≤1/v_i∑_l∑_j|𝒜_i,jℬ_j,l| w_l = 1/v_i∑_l∑_j1/u_j|𝒜_i,j| |ℬ_j,l| u_j w_l = 1/v_i∑_j(1/u_j∑_l|ℬ_j,l| w_l) |𝒜_i,j| u_j = 1/v_i∑_jτ_j(ℬ, w, u) |𝒜_i,j| u_j.It yields that if τ_j(ℬ, w, u) < 1 for all j, thenτ_j(ℬ, w, u) |𝒜_i,j| u_j< |𝒜_i,j| u_j∀ j∀ i, 1/v_i∑_jτ_j(ℬ, w, u) |𝒜_i,j| u_j< 1/v_i∑_j|𝒜_i,j| u_j∀ i, 1/v_i∑_l|(𝒜ℬ)_i,l| w_l≤ 1/v_i∑_jτ_j(ℬ, w, u) |𝒜_i,j| u_j< 1/v_i∑_j|𝒜_i,j| u_j∀ i, τ_i(𝒜ℬ, w, v) < τ_i(𝒜, u, v) ∀ i,which concludes the proof. LetQ := [0 I - M^-1 A; I - F^-1 A0 ].We haveρ(|Q|) < 1 ρ(|I - F^-1(M + F - A) M^-1 A|) < 1. According to Lemma <ref>,ρ(|Q|) < 1 ∃ W > 0 :Q_∞^W < 1.According to the two blocks of Q, take W = [ W_1 W_2 ]^𝖳. Then, we have both{[ τ(I - M^-1 A, W_2, W_1) <[ 1 1 ⋯ 1 ]^𝖳,; τ(I - F^-1 A, W_1, W_2) <[ 1 1 ⋯ 1 ]^𝖳. ].Lemma <ref> therefore ensuresτ((I - F^-1 A) (I - M^-1 A), W_2, W_2) < τ(I - F^-1 A, W_1, W_2) < [ 1 1 ⋯ 1 ]^𝖳,which leads to (I - F^-1 A) (I - M^-1 A)_∞^W_2 < 1. Recall that(I - F^-1 A) (I - M^-1 A) = I - F^-1(M + F - A) M^-1 A.Lemma <ref> finally ensures ρ(|I - F^-1(M + F - A) M^-1 A|) < 1, which concludes the proof. if A is an 𝖧-matrix, then{[ ⟨ M ⟩ - |M - A| =⟨ A ⟩,; ⟨ F ⟩ - |F - A| = ⟨ A ⟩ ]. ρ(|I - F^-1(M + F - A) M^-1 A|) < 1. Considering that A is an 𝖧-matrix, take u > 0 like in Lemma <ref>, so as to have|A_i,i| u_i > ∑_ji |A_i,j| u_j∀ i.We also have⟨ M ⟩ - |M - A| = ⟨ A ⟩∀ i,{[ |M_i,i| - |M_i,i - A_i,i| =|A_i,i|,; - |M_i,j| - |M_i,j - A_i,j| = - |A_i,j| ∀ ji, ].and, then,{[ |M_i,i| u_i - |M_i,i - A_i,i| u_i =|A_i,i| u_i,; - |M_i,j| u_j - |M_i,j - A_i,j| u_j =- |A_i,j| u_j∀ ji. ].It yields that, ∀ i,|M_i,i| u_i - ∑_ji |M_i,j| u_j - |M_i,i - A_i,i| u_i - ∑_ji |M_i,j - A_i,j| u_j= |A_i,i| u_i - ∑_ji |A_i,j| u_j > 0,which implies, with F also satisfying ⟨ F ⟩ - |F - A| = ⟨ A ⟩, that the matrixA := [ M A - M; A - F F ]is an 𝖧-matrix, according to Lemma <ref>. Define, then,M := [ M 0; 0 F ],and note that ⟨M⟩ - |M - A| = ⟨A⟩, which implies, by Definition <ref>, that ⟨M⟩ - |M - A| is an 𝖬-matrix, hence, by Definition <ref>, A = M - (M - A) is an 𝖧-splitting. Lemma <ref> therefore ensures that ρ(|M^-1(M - A)|) < 1, and one can verify thatM^-1(M - A) = [ 0 I-M^-1A; I-F^-1A 0 ].Proposition <ref> therefore finally applies, which concludes the proof. LetQ := [0 I - M^-1 A; I - F^-1 A0 ].An asynchronous alternating method (<ref>) converges from any initial guess x^0, with any sequence {Ω_k}_k ∈ℕ and any functions δ_1 to δ_m if ρ(|Q|) < 1.Consider two collections, X = (X_1, …, X_m) and Y = (Y_1, …, Y_m), of m vectors. 
We have|f(X) - f(Y)| = |∑_q=1^m P_q(X_q - Y_q)|≤∑_q=1^m|P_q| max(|X_1 - Y_1|, …, |X_m - Y_m|).Consequently, according to Theorem <ref>, an asynchronous alternating method (<ref>) is convergent if ρ(∑_q=1^m|P_q|) < 1. Recall, then, that according to Lemma <ref>,ρ(|Q|) < 1 ∃ W > 0 :Q_∞^W < 1.According to the two blocks of Q, take W = [ W_1 W_2 ]^𝖳. Then, we have both{[ τ(I - M^-1 A, W_2, W_1) <[ 1 1 ⋯ 1 ]^𝖳,; τ(I - F^-1 A, W_1, W_2) <[ 1 1 ⋯ 1 ]^𝖳, ].implying, as well,τ(I^(q) - M^(q)^-1 A^(q), W_2, W_1^(q)) < [ 1 1 ⋯ 1 ]^𝖳∀ q ∈{1, …, m}.Lemma <ref> therefore ensures, with s ∈{1, …, m},τ((I^(s,q) - F^(s)^-1 A^(s,q)) (I^(q) - M^(q)^-1 A^(q)), W_2, W_2^(s)) < τ(I^(s,q) - F^(s)^-1 A^(s,q),.. W_1^(q), W_2^(s)).Recall that P_q^(s) := (I^(s,q) - F^(s)^-1 A^(s,q)) (I^(q) - M^(q)^-1 A^(q)),q,s ∈{1, …, m}. Then, we haveτ(P^(s)_q, W_2, W_2^(s)) < τ(I^(s,q) - F^(s)^-1 A^(s,q), W_1^(q), W_2^(s)), τ(|P^(s)_q|, W_2, W_2^(s)) < τ(I^(s,q) - F^(s)^-1 A^(s,q), W_1^(q), W_2^(s)), ∑_q=1^mτ(|P^(s)_q|, W_2, W_2^(s)) < ∑_q=1^mτ(I^(s,q) - F^(s)^-1 A^(s,q), W_1^(q), W_2^(s)), τ(∑_q=1^m|P^(s)_q|, W_2, W_2^(s)) < τ(I^(s) - F^(s)^-1 A^(s), W_1^, W_2^(s)), τ(∑_q=1^m|P^_q|, W_2, W_2^) < τ(I - F^-1 A, W_1^, W_2^), < [ 1 1 ⋯ 1 ]^𝖳,which leads to ∑_q=1^m|P^_q|_∞^W_2 < 1. By Lemma <ref>, we therefore satisfy ρ(∑_q=1^m|P^_q|) < 1, which concludes the proof. An asynchronous alternating method (<ref>) converges from any initial guess x^0, with any sequence {Ω_k}_k ∈ℕ and any functions δ_1 to δ_m if A is an 𝖧-matrix and{[ ⟨ M ⟩ - |M - A| =⟨ A ⟩,; ⟨ F ⟩ - |F - A| =⟨ A ⟩. ]. This follows in the same way as Corollary <ref>. Let 𝒟(𝒜) denote the diagonal matrix obtained from the diagonal of a matrix 𝒜. For practical applications of Corollary <ref>, let Λ be a diagonal real matrix such that Λ_i,i≥ 1∀ i. We straightforwardly haveℳ = Λ𝒟(𝒜) ⟨ℳ⟩ - |ℳ - 𝒜| = ⟨𝒜⟩. In regard to the HSS splitting, if A is a real matrix with 𝒟(A) ≥ 0, and splitting matrices M and F are given byM := 𝒟(α I + H),F := 𝒟(α I + S), α≥max_i A_i,i,then we have bothM = α I + 𝒟(A) ≥𝒟(A),F = α I ≥𝒟(A),which satisfy M = Λ_M𝒟(A),F = Λ_F𝒟(A), where Λ_M and Λ_F are two diagonal real matrices with entries greater than or equal to 1. § IMPLEMENTATION ASPECTS The two alternating iterations of the HSS method require the solution of two secondary problems involving the coefficient matrices α I + H and α I + S, respectively. In practice, as pointed out in, e.g., <cit.>, these problems are inexactly solved by means of iterative algorithms. A general description for both HSS and inexact HSS (IHSS) can be given by Algorithm <ref>.We can then designate by, e.g, HSS(CG, GMRES) an IHSS algorithm with the conjugate gradient (CG) method <cit.> for solving the shifted Hermitian problem and the generalized minimal residual (GMRES) method <cit.> for solving the shifted skew-Hermitian one.Asynchronous HSS iterations necessarily belong to the class of IHSS algorithms since they obviously require the inner solvers to be asynchronous too, which further reduces such an approach to the subclass of IHSS with inner splittings. Taking, then, e.g., a splitting α I + H = M - N, the solution, at each outer iteration k, of(α I + H) y^k = b - A x^kcan be given by several inner iterations y^k,l+1 = y^k,l + M^-1 (b - A x^k - (α I + H) y^k,l), where l is the inner iteration variable. 
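For moderate problem sizes, the sufficient condition ρ(|Q|) < 1 can be checked numerically for this diagonal choice of M and F. The following dense-algebra sketch is for illustration only, since it forms Q explicitly and computes its eigenvalues.

import numpy as np

def async_hss_condition(A, alpha):
    # Return rho(|Q|) for M = diag(alpha*I + H) and F = diag(alpha*I + S).
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    I = np.eye(n, dtype=A.dtype)
    M = np.diag(np.diag(alpha * I + H))
    F = np.diag(np.diag(alpha * I + S))
    zero = np.zeros((n, n), dtype=A.dtype)
    Q = np.block([[zero, I - np.linalg.solve(M, A)],
                  [I - np.linalg.solve(F, A), zero]])
    return float(np.max(np.abs(np.linalg.eigvals(np.abs(Q)))))

A returned value below one certifies, by the theorem above, asynchronous convergence of the alternating scheme for the given α.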
Furthermore, when dealing with two-stage asynchronous iterations, one should particularly take advantage of the possibility to use the inner solution vector y^k,l+1 with any value of l, given that asynchronous relaxation is very likely to benefit from each newly updated data. We refer the reader to, e.g., <cit.> for more insights into the so called “asynchronous iterations with flexible communication”. Moreover, analysis of matrix splittings for two-stage asynchronous iterations reveals that convergence of such methods can be guaranteed for any number of inner iterations (see, e.g., <cit.>). According, therefore, to efficiency aspects related to flexible communication ideas, it is of some interest, in the end, to simply consider only one iteration of (<ref>). If, in particular, we also consider as initial guess y^k,0 := 0, then we can definey^k := y^k,1 = M^-1 (b - A x^k),so as to finally havex^k+1/2 = x^k + M^-1 (b - A x^k),which falls under the general alternating scheme (<ref>) that has been considered in our theoretical analysis. Such a specialization of Algorithm <ref> is given by Algorithm <ref>, where M^-1 and F^-1 are preconditioners of α I + H and α I + S, respectively.Note that Algorithm <ref> needs to be specifically implemented instead of just using Algorithm <ref> with calls of relaxation-based inner solvers with maximum number of iterations set to 1. Indeed, on pure computer science aspects, avoiding inner function calls and loops can result in a very significant execution time saving, which even makes HSS(M^-1, F^-1) possibly competitive, in practice, with, e.g., HSS(CG, GMRES), as we shall see in Section <ref>.From Algorithm <ref>, iterative scheme (<ref>), programming models <cit.> and convergence detection approach <cit.>, asynchronous parallel implementation of HSS iterations is obtained as described by Algorithm <ref>, where the communication routines start with “Com” and are blocking by default. Their non-blocking counterparts are designated by “ICom” with the letter “I” standing for “immediate”, similarly to the Message Passing Interface (MPI) standard.The routines ComSum and IComSum are used to compute dot product r^𝖧 r with r = b - A x by global reduction operation∑_q=1^mr^(q)^𝖧 r^(q),r^(q) = b^(q) - A^(q) x.They can readily be replaced by MPI routines MPI_Allreduce and MPI_Iallreduce, respectively. The object ComRequest and the routine ComTest are therefore analogous to MPI_Request and MPI_Test. Such a simple way to reliably use the classical loop stopping criterion r > εb in case of asynchronous iterations is due to <cit.>. It also allows for considering a counter, k, of the number of global convergence tests. On the other hand, the data exchange routine IComSendRecv has to be a bit constructed using, e.g., MPI routines MPI_Isend and MPI_Irecv. Briefly, the routine IComSendRecvInit triggers non-blocking requests for message sending (x^(s)) and reception (x^(q), qs), and fills up the components x^(q), qs, of the vector x with any arbitrary values. Note that both storage and communication of components x^(q), qs, should actually be limited to values which are necessary for computing the product A^(s) x, according to the nonzero entries in A^(s). The subsequent calls to the routine IComSendRecv then check completion of previous requests, update x with received data and trigger new instances of the completed requests. 
Further details can be found in, e.g., <cit.>.§ NUMERICAL EXPERIMENTS §.§ Problems and overall settingsNumerical experiments have been conducted on two kinds of problem. The first one consists of a three-dimensional (3D) convection-diffusion equation, -Δ u + c·∇ u = f with Ω=[0,1]×[0,1]×[0,1] and Dirichlet boundary conditions. Discretization has been achieved using seven-point centered differences for both convection and diffusion terms. A fixed value, 20, has been used for all elements in the three-dimensional vector c as convection parameter. The entries of the exact discrete solution, x^*, have been taken randomly in [0,1) and the right-hand side has then been constructed as b=Ax^*.The second kind of problem consists of a 2D structural dynamics equation (see, e.g., <cit.>), [(-ω^2 L + K) + i(ω C_v + C_h)] x = b, where L and K denote the mass and stiffness matrices, respectively; C_v and C_h denote the viscous and hysteretic damping matrices, respectively; ω denotes the circular frequency. The values of the matrices and the parameters have been taken from <cit.>. The matrix K is the five-point finite difference discretization of a diffusion term on the unit square [0,1]×[0,1] with Dirichlet boundary conditions. The other matrices have been set as L=I, C_v=10I, C_h=μ K, where μ=0.02, and I denotes the n × n identity matrix. The circular frequency ω has been set to π. The right-hand side has been taken as b=(1+i)Aq with q being a vector of 1, to ensure that all entries of x^* equal 1+i.In the following, parallel execution times (wall-clock), numbers of iterations, k, and final residual errors, r, are reported for the GMRES <cit.>, the IHSS <cit.> (Algorithms <ref> and <ref>) and the asynchronous IHSS methods (Algorithm <ref>), with a stopping criterion set so as to haver = b-Ax^*/b < 10^-6.In case of asynchronous execution, minimum and maximum numbers of local iterations, k_min and k_max, respectively, are considered since there is not global iterations k. Both for synchronous and asynchronous HSS(M^-1, F^-1) (respectively, Algorithms <ref> and <ref>), we tookM := 𝒟(α I + H),F := 𝒟(α I + S).All of the tests have been entirely implemented in the Python language, using NumPy, SciPy Sparse and MPI4Py <cit.> modules.A comparison with some results in <cit.> about the problem (<ref>) (Example 4.2 in <cit.>) is reported in Table <ref> for single-process execution of full GMRES, GMRES(restart), and HSS(CG, GMRES(restart)) with inner residual threshold set to 10^-10 in order to compare with an “exact” HSS.The experimentally optimal value of α, according to <cit.>, was considered for each problem size n (α = 0.12 for n = 64^2, and α = 0.07 for n = 128^2). We recall that the experiments in <cit.> were run in MATLAB on a personal computer consisting of a 2.66 GHz Intel Core Duo central processing unit (CPU) and 1.97 GB of random access memory (RAM). Our single-process tests, here, have been performed on a computational cluster node consisting of a 2.40 GHz Intel Xeon Skylake CPU and 174 GB of RAM. Same numbers of iterations are obtained for our implementation of HSS(CG, GMRES(10)), where both CG and GMRES's tolerances were set to 10^-10, and the HSS experimented in <cit.> with direct inner solvers. 
Same result is observed for full GMRES too, while very slight differences appear for the restarted GMRES.The remaining tests, which involve multi-process execution, have been performed on cluster nodes consisting of 2 × 12-cores 2.30 GHz Intel Xeon Haswell CPU (24 cores per node) and 48 GB of RAM (2 GB per core). The nodes are interconnected through a 56 Gb/s fourteen data rate (FDR) Infiniband network, on which the SGI MPT library is used as implementation of the MPI standard. §.§ Results on the 3D convection-diffusion problem §.§.§ Optimal parameters The 3D convection-diffusion test case (<ref>) was run on an obtained discrete problem with n = 100^3 unknowns, using from p = 48 to p = 192 processor cores (one MPI process per core).Table <ref> shows execution times for various values of the restart parameter of GMRES.This allows us to choose the value 10 as the experimentally optimal one, however, performances for a restart value of 20 were quite similar.We therefore looked for performance variation of HSS(CG, GMRES(10)) according to its parameter α and the inner residual threshold ε_in set for both CG and GMRES(10). Convergence was obtained from ε_in = 10^-2, which also demonstrated more efficiency than lower thresholds, as shown in Table <ref>.Quite surprisingly, the number of outer iterations even slightly increased when switching from 10^-2 to 10^-6.While a restart value of 10 resulted in the most efficient executions of the GMRES solver, it does not necessarily prove to be the best choice for HSS(CG, GMRES(restart)) as well. Handling a combination of three parameters, α, ε_in and GMRES' restart, is clearly a major drawback of HSS(CG, GMRES(restart)), especially if, additionally, the number of processes (and so, possibly, the load per process) might have an impact too. Our two-stage-splitting-based HSS(M^-1, F^-1) with single inner iteration takes the set of parameters back to α, as in the case of exact HSS. Moreover, as mentioned in Section <ref>, avoiding inner solver function calls and loops might constitute an attractive feature, considering pure computer science aspects. This is shown here by comparing Tables <ref> and <ref>.For p = 192 processes, best execution times of HSS(CG, GMRES(10)) and HSS(M^-1, F^-1) are, respectively, 665 and 136 seconds. Note that the former performed 1949 inner iterations while the latter converged in 2576 inner iterations (2 × 1288 outer iterations since there is one inner iteration using M^-1 and another one using F^-1). Such a surprisingly quite small gap in convergence speed confirms the possibility to achieve a faster solver in execution time by avoiding inner function calls and loops. Still, an important drawback for HSS(M^-1, F^-1) is that it turned divergent for α≤ 2.0.Finally, Table <ref> shows that α = 3.0 was experimentally optimal for the asynchronous HSS(M^-1, F^-1) too. And here as well, divergence has been observed for α≤ 2.0.§.§.§ Performance comparison Using experimentally obtained optimal parameters, a performance comparison on p = 48 to p = 192 cores is summarized here in Table <ref>, where we dropped off the HSS(CG, GMRES(10)) due to memory limits exceeded for p ≤ 120.One can see a significant gain by asynchronous HSS(M^-1, F^-1, 3.0), which was, e.g., at p = 192 processor cores, about 20 times faster (in execution time) than both GMRES(10) and synchronous HSS(M^-1, F^-1, 3.0). 
While the second-stage splittings using preconditioners M^-1 and F^-1 were introduced here to achieve a fully asynchronous version of HSS, such a gap between the performances of synchronous and asynchronous HSS(M^-1, F^-1, 3.0) in a homogeneous high-speed computational environment shows that there is a true advantage in resorting to asynchronous iterations, which is not due to possible programming biases introduced by this particular implementation of HSS.§.§ Results on the 2D structural dynamics problem §.§.§ Optimal parameters The complex 2D structural dynamics test case (<ref>) was run on an obtained discrete problem with n = 350^2 unknowns, using from p = 24 to p = 54 processor cores (one MPI process per core).Table <ref> shows execution times for various values of the restart parameter of GMRES.This allows us to choose the value 30 as the experimentally optimal one, however, performances for restart values of 20 to 50 were quite similar.Both HSS(CG, GMRES(30)) and HSS(M^-1, F^-1) failed to converge within two hours of execution on p = 48 cores for various values of their parameters, which made them unpractical for the current test case.Nevertheless, asynchronous HSS(M^-1, F^-1) took reasonable times to converge, and Table <ref> shows an experimentally optimal α = 2.0. Divergence was observed for α≤ 1.0.§.§.§ Performance comparison Using experimentally obtained optimal parameters, a performance comparison on p = 24 to p = 54 cores is summarized in Table <ref>.Again, a significant gain is obtained by asynchronous HSS(M^-1, F^-1, 2.0), which was, e.g., at p = 48 processor cores, about 20 times faster than GMRES(30), similarly to the real 3D convection-diffusion test case. Here as well an even more important performance gap is observed between asynchronous and synchronous HSS(M^-1, F^-1, 2.0) which did not terminate within 7200 seconds. This confirms, for the complex test case as well, the benefit purely from asynchronous iterations. § CONCLUSION Asynchronous alternating iterations are revealed here as a practical breakthrough in improving computational time of parallel solution of non-Hermitian problems, compared to the well-known GMRES and HSS methods. Classical asynchronous convergence conditions are investigated for a general practical parallel scheme of alternating iterations. In particular, it can result in a two-stage variant of the HSS method with one inner iteration for each of the outer alternating ones. Performance experiments have been conducted for such an asynchronous variant which has significantly outperformed both the GMRES and the classical HSS methods, both on a real convection-diffusion and a complex structural dynamics problem.§ ACKNOWLEDGEMENT The paper has been prepared with the support of the “RUDN University Program 5-100”, the French national program LEFE/INSU, the project ADOM (Méthodes de décomposition de domaine asynchrones) of the French National Research Agency (ANR), and using HPC resources from the “Mésocentre” computing center of CentraleSupélec and École Normale Supérieure Paris-Saclay supported by CNRS and Région Île-de-France.abbrv
http://arxiv.org/abs/2312.16505v1
{ "authors": [ "Guillaume Gbikpi-Benissan", "Qinmeng Zou", "Frédéric Magoulès" ], "categories": [ "math.NA", "cs.DC", "cs.NA" ], "primary_category": "math.NA", "published": "20231227101940", "title": "Asynchronous iterations of HSS method for non-Hermitian linear systems" }
ANN vs SNN: A case study for Neural Decoding in Implantable Brain-Machine InterfacesZhou Biyan* Member, IEEE, Pao-Sheng Vincent Sun* Member, IEEE,andArindam Basu 0000-0003-1035-8770, Senior Member, IEEEB. Zhou and P.S.V. Sun have contributed equally. All authors are with the Department of Electrical Engineering, City University of Hong Kong. (e-mail: [email protected])The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 11200922).January 14, 2024 ===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================While it is important to make implantable brain-machine interfaces (iBMI) wireless to increase patient comfort and safety, the trend of increased channel count in recent neural probes poses a challenge due to the concomitant increase in the data rate. Extracting information from raw data at the source by using edge computing is a promising solution to this problem, with integrated intention decoders providing the best compression ratio. In this work, we compare different neural networks (NN) for motor decoding in terms of accuracy and implementation cost. We further show that combining traditional signal processing techniques with machine learning ones deliver surprisingly good performance even with simple NNs. Adding a block Bidirectional Bessel filter provided maximum gains of ≈ 0.05, 0.04 and 0.03 in R^2 for ANN_3d, SNN_3D and ANN models, while the gains were lower (≈ 0.02 or less) for LSTM and SNN_streaming models. Increasing training data helped improve the R^2 of all models by 0.03-0.04 indicating they have more capacity for future improvement. In general, LSTM and SNN_streaming models occupy the high and low ends of the pareto curves (for accuracy vs. memory/operations) respectively while SNN_3D and ANN_3D occupy intermediate positions. Our work presents state of the art results for this dataset and paves the way for decoder-integrated-implants of the future.implantable-brain machine interface, intention decoder, spiking neural networks, low-power. List of Abbreviations- § INTRODUCTION Implantable Brain-Machine Interfaces (iBMI) (Fig. <ref>(a)) are a promising class of assistive technology that enables the reading of a person's intent to drive an actuator<cit.>. It holds promise to enable paralyzed patients to perform activities of daily living with partial or total autonomy<cit.>. While the first applications were in motor prostheses to control a cursor on a computer screen<cit.>, or wheelchairs<cit.>, or robotic arms<cit.>, recent studies have shown remarkable results for speech decoding<cit.>, handwritten text generation<cit.> and therapies for other mental disorders<cit.>. The majority of clinical iBMI systems have a wired connection from the implant to the outside world <cit.>. However, this entails a risk of infection leading to the increasing interest in wireless neural interfaces <cit.>. Another trend in the field has been the continual increase in the number of electrodes<cit.> to increase the number of simultaneously recorded neurons (Fig. 
<ref>(b)) which can increase the accuracy of decoding user intent and enable dexterous control. Recently developed Neuropixels technology has pushed the number of recorded neurons to ≈ 1000. However, this makes it a problem for wireless implants due to the conflicting requirements of high data rate and low power consumption<cit.>. Hence, there are efforts to compress the neural data on the sensor by extracting information from it by edge computing (Fig. <ref>(c)). Different degrees of computing can be embedded in the implant ranging from spike detection, sorting to decoding<cit.>. Ideally, decoding on implant can provide the maximum compression <cit.> with the added benefit of patient privacy since the data does not need to leave the implant—only motor commands are sent out. Traditional decoders have used methods from statistical signal processing such as Kalman Filters and their variants<cit.>. With the rapid growth of Artificial Neural Networks (ANN) and variants for many different applications, it is natural to explore the usage of such techniques for motor decoding and several such works have recently been published <cit.>.To fit on the implant, the decoder has to be extremely energy and area efficient along with being accurate. Brain-inspired SNN are supposed to be more energy-efficient due to their event-driven nature. They are also expected to be better at modeling signals with temporal dynamics due to their inherent “stateful” neurons with memory. However, detailed comparisons between SNN and ANN variants with controlled datasets and benchmarking procedures have been lacking. A recent effort<cit.> has put together a benchmarking suite to address this gap and one chosen task is that of motor decoding. We use the same dataset for benchmarking and show additional results for more control cases.Neurobench showed that streaming SNNs provide a good tradeoff in terms of accuracy vs computes while other methods could have similar memory footprint. Further, it was shown that expanding traditional ANN with temporal memory (ANN-flat) at the input could drastically increase accuracy albeit at the cost of increased operations. Hence, we ask the question – will ANN models augmented with memory be better than SNNs in terms of the tradeoff between accuracy and cost (memory, operations/energy)? We make the following novel contributions in this paper: * We compare SNNs with ANNs with memory augmentation at hidden state (by using LSTM) and output (by incorporating a traditional Bessel filter from signal processing).* We show that combining NNs with traditional signal processing methods such as filtering drastically improves performance at minimial cost of additional operations or memory.* We show the effect of increasing training data that shows which models have potential for improvements in future.* We show the effect of testing with better curated data. The rest of the paper is organized as follows. The following section discusses some of the related works while Section <ref> describes the dataset, models and pre-processing used in this work. Section <ref> presents the results comparing different models in terms of their performance-cost tradeoff using pareto curves. This is followed by a section <ref> that discusses the main results and provides additional control experiments. Finally, we summarize our findings and conclude in the last section. 
§ RELATED WORKS AND CONTRIBUTIONThe current work on designing decoders for motor prostheses can be divided into two broad categories–those using traditional signal processing methods and more recent ones based on machine learning.§.§ Traditional Signal Processing Decoders An early decoder used in BMI system is the linear decoder, such as population vector (PV) algorithm <cit.>. Optimal linear estimators (OLE), generalized from PV algorithm, has comparable performance in closed-loop BMI systems, Whereas Bayesian algorithms perform better <cit.>. Inspired by estimation and communication theory, Wiener filter improved linear decoders by combining neuron history activation <cit.>.Kalman filter has an outstanding ability to cope with dynamic and uncertain environments and is suited in real-time applications. That makes Kalman filter one of the most widely used decoding algorithms in iBMI systems. However, the conventional Kalman filter is only optimal for linear variables and Gaussian noise <cit.>. Many variants of Kalman filter have been proposed to be applied to different applications or environments, such as decoding for cursor movement <cit.>, predicting the movement for clinical devices <cit.>, controlling the robotic arms <cit.>, speech decoding <cit.>. §.§ Machine Learning Decoders Machine learning is widely used in various applications due to its powerful ability to process complex data. An SVM decoder could be trained to analyze rhythmic movements of Quadriplegia patients <cit.>, or motor control of paralyzed limbs <cit.>. Recently, ANNs have attracted much attention among machine learning algorithms and have made great progress in BMI decoding. ELM-based intelligent intracortical BMI ( i^2 BMI ) achieves an outstanding performance compared to traditional signal processing decoders <cit.>. A multi-layer ANN is trained to decode the finger movement running in a real-time BMI system, which outperforms a Kalman filter <cit.>.Recurrent neural networks (RNN) were introduced since they are more skilled at capturing the relations between two variables using a hidden state with memory. For instance, there have been studies on decoding speech <cit.> and on brain representation for handwriting <cit.>.Long-term decoding achieved higher performance by using LSTM and Wiener filter <cit.>. To decode speech for a paralyzed person, a natural-language model and Viterbi decoder are used <cit.>.Neuromorphic algorithms have emerged as an energy-efficient decoder and an effective tool for data compression <cit.>. SNN is a brain-inspired neural network popular in neuromorphic applications due to its low energy. It can achieve nearly the same accuracy as ANN but with less than 10% memory access and computation of ANN <cit.>. Similarly, it was found that the SNN decoders could use far fewer computes compared to ANN, but with a performance penalty in accuracy, for the motor prediction of primates in the Neurobench benchmark suite<cit.>.In this work, we show that the combination of traditional signal processing and machine learning algorithms results in the best decoders for iBMI systems.§ METHODOLOGYList of notations used in this section * N_i: i^th layer's neuron count* N_ch: No. 
of Input Probes* x_i: Computed feature from i-th probe* T_W: Bin window duration* m: Number of sub-windows in a bin* St: Stride size* s: Sparsity* d: Dropout rate* f_GT: Ground truth label frequency (=1/T_GT) §.§ Dataset The primate reaching dataset chosen for this paper was gathered and released by <cit.>, with the six files chosen for Neurobench <cit.> being the files of interest. These six files are recordings of two non-human primates (NHP) (Indy and Loco), where each NHP accounts for three files (more details about this choice in <cit.>).This dataset contains microelectrode array (MEA) recordings of the NHP's brain activity while it is moving a cursor to the target location, as seen in <Ref>. The finger velocity is sampled at f_GT=250 Hz resulting in ground truth labels at a fixed interval of T_GT=4 ms. The target position changes once the monkey successfully moves the cursor to the intended target. We refer to this action as a reach. The dataset contains a continuous stream of the brain's activity from one MEA with N_ch=96 probes (Indy) or two MEAs with N_ch=192 probes (Loco). In this work, we ignore sorted spikes since it has been shown that spike detection provides sufficient information for decoding<cit.> and is more stable over time. Hence, the number of probes N_ch is the input feature dimension N_0 for the neural network models (except ANN_3D) that will be discussed in the following subsection.The six files of interests are:* indy_20160622_01* indy_20160630_01* indy_20170131_02* loco_20170131_02* loco_20170215_02* loco_20170301_05Training NN models on time series-based data requires the data to be split apart into separate segments. In analogy with keyword spotting<cit.>, each segment of neural data should correspond to separate keywords. By using the target positions in this dataset, we can separate the spike data into segments based on indices in the target position array where there is a change in values, as illustrated in <Ref>. Such consecutive indices forms the beginning and end of a reach, and then we can split the time series into training, validation, and test sets based on the number of reaches. The split ratio used in this paper follows that of Neurobench<cit.>, which is 50% for the training set, and 25% each for validation and test sets. The total number of reaches recorded in each file can be seen in <Ref>. §.§ Network Models To explore the potential of various neural network models as the neural decoder, five different model architectures with and without memory are tested: ANN, ANN_3D, SNN_3D, Streaming SNN and LSTM, which can be seen in <Ref>. These five models use popular NN architectures and have memory at the input layer or hidden layer. Every model except for LSTM has two versions of varying complexity (explained in section <ref>) where complexity refers to the model size indicating the number of neurons. The larger model is henceforth referred to as the base model while the smaller model is dubbed the tiny variant. It was found that networks deeper than 3 layers performed poorly and hence deeper models were excluded from this study. * ANN or ANN_2DThe ANN model has an architecture of N_ch-N_1-N_2-2, with rectified linear unit (ReLU) as the activation function for the first two-layers as well as batch normalization to improve upon the accuracy obtained by the model. Note that N_0=N_ch indicates one feature extracted from each probe obtained by summing the neural spikes over a fixed duration of T_W as described in <Ref>. 
Also, N_3=2 corresponds to predicting the X and Y velocities. A dropout layer with a dropout rate of 0.5 is also added to the first two layers to help regularize the model. In analogy with the naming convention of ANN_3D introduced next, this model can also be referred to as ANN_2D due to the shape of the input weight tensor.
* ANN_3D or ANN_flat
The architecture of the ANN_3D or ANN_flat model is m× N_ch-N_1-N_2-2, i.e., it shares an identical architecture with ANN, except at the input layer. This model divides the T_W duration of the input bin window into m sub-windows and creates an m-dimensional feature from each probe by summing spikes in each sub-window. This mode of input is further explained in <Ref>. The input is then flattened across the sub-windows, yielding a final input dimension of N_ch× m; hence, the number of weights/synapses in the first layer is m times larger than in ANN. It is referred to as ANN_flat in <cit.>; we refer to it as ANN_3D here in keeping with the shape of the input weight tensor, which we feel is more intuitive.
* LSTM
The LSTM model contains a single LSTM layer of dimension N_LSTM, followed by a fully-connected layer of dimension 2. The input of the model shares the same pre-processor as ANN (summing spikes in a bin-window of duration T_W); however, it uses a different T_W. The input is first normalized with a layer normalization before passing through the rest of the network.
* SNN_3D or SNN_flat
The SNN_3D shares a similar architecture with ANN (N_ch-N_1-N_2-2), with the following differences: 1) instead of using a standard activation like ReLU, the SNN_3D model uses the leaky integrate-and-fire (LIF) neuron after every fully-connected layer, 2) the input is first passed through layer normalization, similar to LSTM, due to the recurrent nature of LIF, and 3) at the final layer there is a scaling layer applied to the output LIF neurons. The LIF neurons are governed by the following set of equations:

U[t] = β U[t-1] + W X[t] - S_out[t-1] θ, with β = e^-Δ t/τ,

S_out[t] = 1 if U[t] > U_thr, and 0 otherwise,

θ = 0 (no reset), β U[t-1] + W X[t] (reset-to-zero), or U_sub (reset-by-subtraction),

where U[t] and X[t] are the membrane potential of the LIF neuron and the input at the t-th time step respectively, W is the synaptic weight of the fully-connected layer, β is the decay rate, S_out[t] is the output spike, U_thr is the membrane potential threshold, U_sub is the subtracted value if the reset mechanism is reset-by-subtraction, and θ is the reset mechanism. The LIF neurons for all layers share the same U_thr and β. The first two layers use the reset-to-zero mechanism while the last layer does not use any reset, allowing the final output neurons to accumulate membrane potential to predict the velocity of the primate's movement. For every stride of 4 ms, the membrane voltages are reset and the integration is restarted with fresh input to produce the next output.
The input for the SNN_3D model is different from ANN and identical to ANN_3D; however, unlike the ANN_3D model, the spike counts in the m sub-windows spanning T_W are input to the LIF neuron using only one weight/synapse but over m time steps. Due to the reset of the LIF neurons after every prediction, overlapping bin-windows (for T_W>st) cause the SNN_3D to process the same input spikes for multiple predictions.
* SNN_Streaming
The SNN_Streaming model also consists of three fully-connected layers (N_ch-N_1-N_2-2), with LIF neurons (see Eq. <ref>) in each layer. Unlike SNN_3D, every LIF layer has its own unique U_thr and β.
Just like SNN_3D, the first two layer uses reset-to-zero while the last layer does not reset its membrane potential. In this model, T_W=T_GT=St=4 ms and hence does not require any additional pre-processing as seen in <Ref>; hence, it is called a streaming mode since inputs can stream in directly and continuously to this model. §.§ Input Spike ProcessingThe spikes generated by the NHP's neurons are sparse in nature. While SNNs are designed with sparsity in mind, standard neural networks are not and hence require a feature extraction step from the raw spike data. Also, from the biological viewpoint, it is generally assumed that short term firing rates are important for motor control. Hence, we calculate firing rates, r_i(t_k) at the sample time t_k from the spike waveforms P_i=∑_t_s,iδ (t - t_s,i) on the i-th probe (1≤ i ≤ N_ch) using the following equation:r_i(t_k)=∫_t_k-T_W^t_kP_i(t)dtwhere t_k+1-t_k=T_GT is the sampling time, t_s,i denote neural spike times on the i-th probe and T_W is the bin window duration. Three different pre-processing methods were used in this paper: the summation method, the sub-window method and the streaming method as illustrated in <Ref>. For all of them, the stride size, st is identical to the sampling duration, which is T_GT=4 ms. They differ in the choice of T_W and how to present the firing information in the bin window to the network as described next.* Summation Method (used in ANN and LSTM):This is the simplest case where the firing rate in a bin window with duration of T_W is directly used as a feature and input to the NN. We define the input feature vector x(t_k) as follows:x(t_k)= [x_0(t_k), x_1(t_k), ..., x_N_ch(t_k)] x_i(t_k)= r_i(t_k)This method is depicted in <Ref>(a). This method is used by ANN and LSTM models, where ANN uses T_W=200ms while LSTM uses T_W=32ms. Efficient implementation of such firing rate calculation with overlapping windows are shown in <cit.>. * Sub-Window Method (used in ANN_3D and SNN_3D):Similar to the summation method, the sub-window method uses information over the latest T_W bin window. However, instead of summing all the spikes, it provides firing rate information at an even shorter time-scale (or with finer resolution) of T_W/m. Thus, the feature computed from the i-th probe itself becomes a vector x_i(t_k)=[r_i^1(t_k),r_i^2(t_k)...r_i^m(t_k)] with m components corresponding to firing rates in each of the m sub-windows (duration of integration in Eq. <ref> is reduced to T_W/m). The sub-window method is illustrated in <Ref>(b) and is used by the ANN_3D and SNN_3D models with T_W=200ms and m=7. The feature vector x(t_k) for ANN_3D is defined according to <Ref> as follows:x(t_k)= [x_0(t_k),x_1(t_k)..x_Nch(t_k)]where the dimension of x(t_k) is N_ch× m. For the SNN_3D, the firing rates in each sub-window are given as input feature to the SNN , which has m time steps. Thus the input feature vector for the SNN in the j-th time step (1≤ j ≤ m) is given by:x_j(t_k)= [r_j^1(t_k),r_j^2(t_k)...r_j^N_ch(t_k)]where the dimension of x_j(t_k) is N_ch. Note that `j' indexes time steps here and the SNN output at j=m is the prediction of motor velocity for sample time t_k. * Streaming Method (used in SNN_Streaming): The streaming method, as the name suggests, processes the incoming spike data as a continuous stream as seen in <Ref>(c). In this case, T_W=st=T_GT=4 ms implying no overlap between consecutive windows. 
This allows for a direct interface between the probes and the model, without the need of adding additional compute cost to our network like the two methods mentioned before. The input feature vector x(t_k) is given by the following equation:x(t_k)= [u(r_0(t_k)),u(r_1(t_k)),...u(r_N_ch(t_k))]where u() denotes the Heaviside function. Hence, the resulting SNN can replace multiply and accumulate (MAC) operations by selective accumulation (AC) operations.§.§ Filter: Adding memory at outputMost of the NN models (with the exception of LSTM and SNN_Streaming) introduced in Section <ref> operate on a window or chunk of inputs; providing these windows in any order would result in the same prediction. However, in real life the motor output is a smooth signal with a continuous trajectory. To understand this, we plot in Fig. <ref> the frequency content of ground truth trajectories of a sample 2-sec waveform and compare it with predicted trajectories of two models from <cit.>. It is clear that the predictions have much higher frequency content indicating ground truth trajectories are smoother. In signal processing, this can be rectified by using a filter, which amounts to adding a memory of the past output. Among the different possible filters, a Bessel filter is chosen because of its linear phase response which is good for maintaining arbitrary waveform shapes. Three different filtering methods are tested in this work. First, we use forward (Fwd) filtering, which can achieve real-time filtering, but cannot be zero-phase. On the contrary, bidirectional (Bid) filtering can effectively eliminate phase distortions, but it is generally applicable to offline filtering since the whole waveform is needed before processing begins. To achieve a compromise, block bidirectional filter with a sliding window is applied, such that only a latency penalty of half window size is applicable. In this work, a window size of 16 was chosen to limit the latency to 32 ms while the order of filters and their cutoff frequencies were varied between 1-4 and 0.05-0.5 respectively to find the optimum for each model.§.§ Metrics In order to evaluate the performance of the models comprehensively in terms of cost vs performance, three metrics are used: (1) number of operations, (2) memory footprint, and (3) accuracy. Three types of operations are considered for (1) – multiply, add and memory read (since the energy for memory access often dominates the energy for computations<cit.>). For most NNs, each synaptic operation comprises a multiply and add (MAC) while for SNN_Streaming, we only have additions (AC). The number of operations is used as proxy for power/energy in this work since the actual energy ratio between these three operations depends on bit-width, process node and memory size; more accurate energy evaluations will be the subject of future work. For (2), memory footprint is evaluated from model size where every parameter is stored using a 32-bit float number. For (3), R-Squared score is a commonly used metrics for regression tasks<cit.>, which is defined by <Ref>: R^2 = 1-∑_i=1^n ( y_i-ŷ_̂î )^2/∑_i=1^n ( y_i-y̅ )^2where the label and predictions are showing as y_i and ŷ_̂î respectively while y̅ is the mean of labels. For motor prediction, separate R_X^2 and R_Y^2 are computed for predicting X and Y velocities respectively and the final R^2 is an average of the two.Another set of important metrics for NN hardware are throughput and latency. 
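Before turning to those, the block bidirectional filtering described above can be made concrete with a short sketch. This is only an illustration, not the exact implementation used here: it assumes scipy.signal, the window of 16 samples used in this work, and illustrative variable and function names.

```python
import numpy as np
from scipy.signal import bessel, filtfilt

# 2nd-order low-pass Bessel filter; Wn is a normalized cutoff (1.0 = Nyquist),
# within the 0.05-0.5 range searched in this work.
b, a = bessel(N=2, Wn=0.05, btype="low")

def block_bid_filter(pred, win=16):
    """Filter predicted velocities `pred` of shape (T, 2) with a sliding block.

    Each retained sample sits at the centre of a zero-phase (forward-backward)
    filtered block, so the latency penalty is about win/2 samples,
    i.e. 32 ms at the 4 ms output rate used here.
    """
    pred = np.asarray(pred, dtype=float)
    out = pred.copy()
    half = win // 2
    for t in range(win, len(pred) + 1):
        block = filtfilt(b, a, pred[t - win:t], axis=0)  # zero-phase filter the block
        out[t - half] = block[half]                      # keep only the centre sample
    return out
```

Forward filtering corresponds to replacing filtfilt with a causal lfilter on the incoming stream, and full bidirectional filtering to running filtfilt once over the whole recording.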
Returning to throughput and latency, we have not considered them here since the NN models considered are small enough that the total time taken to evaluate a prediction is dominated by the input data accumulation time shown in Section <ref>. However, we do touch upon this point later in Section <ref>.

§.§ Training & Testing Details

All models are trained for 50 epochs using the SNNTorch framework, with a learning rate of 0.005, a dropout rate between 0.3 and 0.5, and an L2-regularization value between 0.005 and 0.2. AdamW is chosen as the optimizer, Mean Squared Error (MSE) is used as the loss function, and a learning-rate scheduler (cosine annealing schedule) is applied after every epoch. For ANN, ANN_3D, and SNN_3D, data is shuffled with a batch size of 512 during training. For SNN_3D, the membrane potential is reset every batch, while for the membrane potential in SNN_streaming and the hidden states in LSTM the reset occurs at the beginning of each reach. The distribution of reach durations shows that most reaches complete in less than 4 s, while some reaches are much longer, presumably due to the NHP not attending to the task. Similar to <cit.>, reaches that exceed 8 seconds in length are removed to improve the training performance. Leaky integrate-and-fire neurons are used in the SNNs, where the threshold and β are learned during training and the arctan surrogate gradient is applied. The membrane potential of neurons in the last layer ceases to reset to enable regression. The velocity predicted by the SNNs is obtained by scaling the membrane potential of the output neurons with a learnable constant parameter. For validation and testing, data is input to the models in chronological order, and reset mechanisms only occur at the beginning. Filters are employed exclusively during inference.

§ RESULTS

To comprehensively examine the capability of the different models, we performed multiple experiments and evaluated the models using the metrics mentioned in <Ref>. All the results except memory access are obtained from the Neurobench harness <cit.>, which performs automated evaluation of the models; memory access is estimated from theoretical weight-fetch counts, multiplying the number of weights per layer by the experimentally observed sparsity. The findings are presented pictorially using two pareto plots, the first comparing the accuracy versus operations trade-off and the second comparing accuracy versus memory footprint (e.g., see Figures <ref> and <ref>). For the pareto plots shown in this section, the following colour scheme is used:
* blue markers are base models without filtering (corresponding to results from prior work in <cit.> for ANN, ANN_3D and SNN_3D)
* red markers are models using forward (Fwd) filtering
* green markers are models using bidirectional (Bid) filtering
* orange markers are models using block Bid filtering
* markers with a dark border are tiny variants of the base models
A tabular summary of all the experiments performed for our base models can be found in <Ref>.

§.§ Model Size Search

As mentioned earlier, it was found that networks deeper than 3 layers performed poorly and hence deeper models were excluded from this study. The number of neurons in each of the two hidden layers was determined by searching within a certain range (N_0=N_ch and N_3=2 are fixed). We used the ANN to do this search due to its simple network structure and resulting fast training. Each model complexity is characterized by its number of neurons. The results obtained by varying N_1 and N_2 are shown in <Ref>.
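The search itself is a simple sweep over candidate hidden sizes. A minimal sketch of the loop is given below; train_ann, evaluate_r2 and count_params are hypothetical helpers wrapping the training recipe above, so this is an outline rather than the code used to produce the figure.

```python
# Candidate (N_1, N_2) pairs for the two hidden layers; N_0 = N_ch and N_3 = 2 are fixed.
candidates = [(16, 16), (16, 32), (32, 32), (32, 48), (48, 48), (64, 64)]

results = []
for n1, n2 in candidates:
    model = train_ann(hidden=(n1, n2), epochs=50, lr=5e-3,
                      optimizer="AdamW", loss="MSE", batch_size=512)
    results.append({"arch": f"{n1}-{n2}",
                    "params": count_params(model),
                    "R2": evaluate_r2(model, split="validation")})

# Keep only architectures that are not dominated: a point stays on the pareto
# front if no other point has equal-or-better R2 with fewer parameters.
pareto = [r for r in results
          if not any(o["R2"] >= r["R2"] and o["params"] < r["params"] for o in results)]
```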
The text shown in the figure represents the different network architectures (N_1-N_2 combinations) tested. As expected, the R^2 initially increases with an increasing number of neurons but starts decreasing after the number of neurons reaches a certain value, due to overfitting. The best trade-off between R^2 and complexity is given by the networks lying on the pareto curve. Therefore, the two models with N_1-N_2 values of 32-48 and 16-32 were selected as the `base' and `tiny' variants respectively for ANN. The same variant sizes were near optimal for ANN_3D and SNN_3D (we do not show these tradeoff curves for brevity), while for SNN_Streaming the base and tiny variants correspond to N_1-N_2 values of 32-48 and 16-48 respectively.

§.§ K-Fold Cross Validation

It is important to verify that the results do not vary significantly regardless of how the data is split. Hence, K-fold cross-validation is used to test all six files for three models (ANN, ANN_3D and SNN_3D). We divided the data into five parts, randomly selecting four parts for training, while the remaining part was divided into validation and testing. The means and standard deviations of R^2 for the 5-fold experiment are shown in Table <ref>. The low variance of the results for all three cases implies that using a single data split for our experiments is reasonable and will give dependable results. As a comparison, the results in Table <ref> do show that, without filtering, the decoding accuracy of SNN_3D is the best and that of ANN is the worst, with ANN_3D between the two. Hence, we use the single data split from <cit.> described earlier for the rest of the results.

§.§ Baseline result and effect of filtering

The pareto plots showing the baseline comparison can be seen in <Ref>. The filter order and cutoff frequency are chosen separately for the different filtering methods and different models. For ANN, ANN_3D, SNN_3D and SNN_Streaming, the filter order and cutoff frequency are 4 and 0.05 for bidirectional filtering, while for block Bid filtering the order is set to 2. For forward filtering, the parameters are 1 and 0.15 for order and cutoff frequency respectively. For LSTM, however, they are set to 2 and 0.07 for all filtering methods. These optimal parameters were selected based on results on the validation set by doing a grid search over filter orders 1-4 and cut-off frequencies in the range 0.05-0.5, as mentioned earlier in Section <ref>. In terms of the models that form the pareto front of operations vs. accuracy (Fig. <ref>(a)), we observe that all of them use bidirectional filtering to achieve their high accuracy. While this is impractical in real-time decoding, these results can be taken as a gold standard for the Neurobench suite at this time since they represent the highest reported accuracy so far. Along the pareto front, the two SNN variants, SNN_3D and SNN_Streaming, show the biggest difference in terms of operations required (≈ 100x) and accuracies (≈ 0.022), while LSTM and ANN_3D are at intermediate positions on the pareto front. In terms of memory usage (Fig. <ref>(b)), the pareto front is dominated by the two types of SNNs. The ANN_3D models have the highest memory usage due to their input dimension being expanded by m times to m× N_ch; the weights in the first layer dominate the memory footprint since N_0>>N_1,N_2,N_3. If we ignore models with Bid filtering, since it is not applicable in real time, the next pareto frontier is dominated by models with block Bid filtering.
Both these results point to the extreme efficiency of combining NN models with traditional filtering for motor decoding. In that case, the pareto frontier for operations vs. accuracy consists of LSTM at the high end and SNN_streaming at the low end with ANN_3D in the middle. For the case of memory vs. accuracy, LSTM and SNN_streaming retain their positions at the top and bottom of the pareto curve, while SNN_3D is in the intermediate part. Figure <ref> plots the actual trajectory of a ground truth reach waveform, a prediction from ANN_3D, and a filtered version. It can be seen how the filtered waveform is smoother and resembles the more natural motion of the primate's finger.Looking deeper at the effects of filtering, we see that Fwd filtering provided very little gains to most of the NN models. Using the block Bid filter provided maximum gains of ≈ 0.05, 0.04 and 0.03 in R^2 for ANN_3d, SNN_3D and ANN models, while the gains were lower (≈ 0.02 or less) for LSTM and SNN_streaming models. This is intuitively understandable given that recurrent models have an inbuilt longer memory. Also, we see that tiny variant of ANN with block Bid filtering achieves similar accuracy of ≈ 0.61 as the tiny or base SNN_streaming without filtering at similar memory and ≈ 8X computations. This confirms our initial hypothesis that adding memory to ANN models can indeed make their performance similar to SNNs. However, the performance of SNN_streaming also improves with the addition of the filter making this combination a great choice for decoding with very low computational and memory resources.§.§ 80% vs 50% Training Split To assess the performance of models when the training data increases, we increase the baseline training data from 50% to 80% as done in <cit.>. The results are listed in Table <ref> and plotted in <Ref>. As expected, the R^2 of all models is generally higher by 0.03-0.04 compared to the 50% baseline training data, which shows a high capacity for future improvement with more. Similar to Fig. <ref>, Fig. <ref> also has models with Bid or block Bid filtering on the pareto curve. After the data increases, LSTM and ANN_3D show the greatest accuracy improvement with an increase of 0.04 in R^2. In addition, LSTM becomes one of the frontiers in the Pareto plot for both memory footprint and operations displacing SNN_3D off the pareto curve in Fig. <ref>(a) for operations.§ DISCUSSION This section discusses additional control experiments and gives an outlook for future improvements. §.§ Effect of Reach RemovalAs mentioned in Section <ref>, some of the reaches in the dataset spanned a much longer duration (sometimes longer than 200 seconds) than the rest which mostly were less than 4 seconds. These reaches (longer than 8 seconds) were removed from training since the NHP was likely unattentive in these cases. However, they were not removed from the testing data and hence, we explored how much improvement in performance is obtained by better curating the test dataset. These results are presented in the <Ref> and we can observe that the R^2 increases by ≈ 0.01 with the baseline 50% split–the improvement can be much more if other files from <cit.> are selected. This underlines the effectiveness and necessity of careful data selection from the recordings in <cit.> while training and testing models. §.§ Latency of Filters Latency between input and output is important for real-time applications with closed-loop operation such as motor decoding. 
The total time, T_tot, taken to produce an output by a NN decoder is given by T_tot=St+T_comp where St is the stride to capture the new input data (=T_GT=4 ms in this work) and T_comp is the time taken to process the computations in the neural network. Given the very fast and energy-efficient In-memory computing (IMC) approaches to implement NN models prevalent now<cit.> and the small networks considered in this paper,we can assume T_comp<<St making the throughput almost entirely dependent on St, i.e. time taken to capture new neural input spikes. Note that the bin window, T_W does not add any extra penalty on latency of output generation; however, after every change of target, the prediction will be inaccurate for a time related to T_W to allow enough relevant input to fill up the bin window. However, output filtering may induce an extra penalty on the latency. Bid filters produced best results as seen in the earlier section; however, they cannot be employed in real-time applications since they need to store the raw data in memory first and then apply forward filtering two times in opposite directions. The block Bid filter is chosen as a compromise where the filter window is used to determine the length or block of samples that are filtered at one time, and the predicted point is located at the center of the sample window. Thus, the latency introduced by the block Bid filter is theoretically equal to half the length of the filter window. In this paper, window size was fixed to 16 samples resulting in a latency of 32 ms. While this is acceptable in motor decoding, other applications may require lower latency. One possibility is to use analog Bessel filters which do not require such Bid filteringand this will be a focus of future work. §.§ Future Directions The main reason for low energy consumption in SNN is due to the benefits of sparse activations. However, our experiment shows the sparsity may harm accuracy. We proposed two types of SNN models in this paper–one is SNN_3D, which has no sparsity due to the layer normalization, and another one is SNN_streaming, which has a relatively higher sparsity. Interestingly, the low power characteristic of SNN is not reflected in the first SNN model, whereas it has relatively higher accuracy. This points to the need for future research into data normalization techniques which can still retain sparsity of activations. Another reason for the high accuracy of SNN_3D was its reset of membrane potential after every T_W. This implies the membrane potential during training and testing start at exactly the same value for any sequence of inputs making it easier for the network to recognize similar patterns of input. For SNN_streaming, since there is no regular reset mechanism, the membrane voltages during training and testing may be quite different which may hurt accuracy. Mitigating this issue with initial condition of streaming SNNs will be a part of future work.We see different models along the pareto curve having different strengths. For example, models with block Bid filtering have high accuracy but high latency. Using multiple models to produce a combined output may be a useful strategy. For example, switching from a model with block Bid filter to one without a filter right after a change of target/context will help in balancing latency and accuracy.Finally, all the weights used in this work used float 32 as the default precision. However, there is a significant amount of work to quantize the models for more efficient inference. 
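As a rough illustration of what is available off the shelf (the models in this work were not quantized), post-training dynamic quantization in PyTorch stores the weights of fully-connected layers as int8, which alone would shrink the dominant part of the footprint by roughly 4x:

```python
import torch
import torch.nn as nn

# Hypothetical float32 decoder with the base ANN topology used here (96-32-48-2).
model_fp32 = nn.Sequential(
    nn.Linear(96, 32), nn.ReLU(),
    nn.Linear(32, 48), nn.ReLU(),
    nn.Linear(48, 2),
)

# Dynamic quantization: Linear weights become int8, activations stay in float.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

n_weights = sum(p.numel() for p in model_fp32.parameters())
print("float32 weights:", 4 * n_weights, "bytes")
print("int8 weights (approx.):", n_weights, "bytes")
```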
Applying these approaches of quantization aware training should allow us to reduce the model footprint significantly in the future. § CONCLUSIONScaling iBMI systems to tens of thousands of channels in the future as well as removing the connecting wires would require significant compression of data on the device to reduce wireless datarates. Integrating the signal processing chain up to the neural decoder offers interesting opportunities to maximize compression. In this context, this work explores the usage of different neural network models and combines them with traditional signal filtering techniques to explore accuracy vs cost trade-offs where the cost is measured in terms of memory footprint and number of operations. Adding Bessel filtering improves the performance of all five NN models with Bidirectional (Bid) and block Bid filtering generating the state-of-the-art results in offline and online filtering respectively. In general, LSTM and SNN_streaming models occupy the high and low ends of the pareto curves (for accuracy vs. memory/operations) respectively while SNN_3D and ANN_3D occupy intermediate positions.§ ACKNOWLEDGMENTWe acknowledge useful discussions with the Motor decoding group in Neurobench.IEEEtran 10url@samestyle milin_natureZhang, M., Tang, Z., Liu, X. & Spiegel, J. Electronic neural interfaces. Nature Electronics. 3, 191-200 (2020)bmi_cursorPandarinath, C. & Al. High performance communication by people with paralysis using an intracortical brain-computer interface. ELife. pp. e18554 (2017)camilo_plosLibedinsky, C., So, R. & Al Independent mobility achieved through a wireless brain-machine interface. PLOS One. 11 (2016)bmi_armAjiboye, W. & Al. Restoration of reaching and grasping move- ments through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. The Lancet. 10081 pp. 1821-1830 (2017)speech_decode_nature1Metzger, S. & Et.al. A high-performance neuroprosthesis for speech decoding and avatar control. Nature. 620 pp. 1037-1046 (2023)speech_decode_nature2Willett, F. & Et.al. A high-performance speech neuroprosthesis. Nature. 620 pp. 1031-1036 (2023)Brain2TextWillett, F., Avansino, D., Hochberg, L., Henderson, J. & Shenoy, K. High-performance brain-to-text communication via handwriting. Nature. 593, 249-254 (2021,5), https://doi.org/10.1038/s41586-021-03506-2MentalBMIBasu, I., Yousefi, A., Crocker, B., Zelmann, R., Paulk, A., Peled, N., Ellard, K., Weisholtz, D., Cosgrove, G., Deckersbach, T., Eden, U., Eskandar, E., Dougherty, D., Cash, S. & Widge, A. Closed-loop enhancement and neural decoding of cognitive control in humans. Nature Biomedical Engineering. 7, 576-588 (2021,11), https://doi.org/10.1038/s41551-021-00804-yNeuralinkMusk, E. An Integrated Brain-Machine Interface Platform With Thousands of Channels. Journal Of Medical Internet Research. 21, e16194 (2019,10), https://doi.org/10.2196/16194nurmikko_wirelessYin, B. & Al. An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates. Journal Of Neural Engineering. 10 (2013)electrodeScalingStevenson, I. & Kording, K. How advances in neural recording affect data analysis. Nature Biomedical Engineerng. 14 pp. 139-142 (2011,1)murmann_natureChen, N. & Al. Power-saving design opportunities for wireless intracortical brain–computer interfaces. Nature Biomedical Engineering. 4 pp. 984-996 (2020)Basu2017Basu, A., Yi, C. & Enyi, Y. Big Data Management in Neural Implants: The Neuromorphic Approach. 
Emerging Technology And Architecture For Big-data Analytics. pp. 293-311 (2017)ShoebiBMIShaikh, S., So, R., Sibindi, T., Libedinsky, C. & Basu, A. Towards Intelligent Intracortical BMI (i²BMI): Low-Power Neuromorphic Decoders That Outperform Kalman Filters. IEEE Transactions On Biomedical Circuits And Systems. 13, 1615-1624 (2019)datasetMakin, J., O'Doherty, J., Cardoso, B., M. & Sabes, P. Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm. Journal Neural Engineering. 15 (2018)willsey2022realWillsey, M., Nason-Tomaszewski, S., Ensel, S., Temmar, H., Mender, M., Costello, J., Patil, P. & Chestek, C. Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder. Nature Communications. 13, 6899 (2022)Yik2023Yik, J., Ahmed, S., Ahmed, Z., Anderson, B., Andreou, A., Bartolozzi, C., Basu, A., Blanken, D., Bogdan, P., Bohte, S. & Others NeuroBench: Advancing neuromorphic computing through collaborative, fair and representative benchmarking. ArXiv Preprint ArXiv:2304.04640. (2023)stevenson_2013Stevenson, I. Tracking advances in neural recording. Statistical Neuroscience Lab., https://stevenson.lab.uconn.edu/scaling/georgopoulos1986neuronalGeorgopoulos, A., Schwartz, A. & Kettner, R. Neuronal population coding of movement direction. Science. 233, 1416-1419 (1986)koyama2010comparisonKoyama, S., Chase, S., Whitford, A., Velliste, M., Schwartz, A. & Kass, R. Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control. Journal Of Computational Neuroscience. 29 pp. 73-87 (2010)kim2008neuralKim, S., Simeral, J., Hochberg, L., Donoghue, J. & Black, M. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. Journal Of Neural Engineering. 5, 455 (2008)hochberg2012reachHochberg, L., Bacher, D., Jarosiewicz, B., Masse, N., Simeral, J., Vogel, J., Haddadin, S., Liu, J., Cash, S., Van Der Smagt, P. & Others Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 485, 372-375 (2012)sharma2016usingSharma, G., Friedenberg, D., Annetta, N., Glenn, B., Bockbrader, M., Majstorovic, C., Domas, S., Mysiw, W., Rezai, A. & Bouton, C. Using an artificial neural bypass to restore cortical control of rhythmic movements in a human with quadriplegia. Scientific Reports. 6, 33807 (2016)friedenbergneuroprostheticFriedenberg, D., Schwemmer, M., Landgraf, A., Annetta, N., Bockbrader, M., Bouton, C. & Others Neuroprosthetic-enabled control of graded arm muscle contraction in a paralyzed human. Sci Rep. 2017; 7: 8386. ZZ_FRM_SPDZhang, Z. & Constandinou, T. Firing-rate-modulated spike detection and neural decoding co-design. Journal Of Neural Engineering. 20, 036003 (2023,5), https://dx.doi.org/10.1088/1741-2552/accecespeechBMIMoses, D., Metzger, S., Liu, J., Anumanchipalli, G., Makin, J., Sun, P., Chartier, J., Dougherty, M., Liu, P., Abrams, G., Tu-Chan, A., Ganguly, K. & Chang, E. Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria. New England Journal Of Medicine. 385, 217-227 (2021,7), https://doi.org/10.1056/nejmoa2027540Schuman2022Schuman, C., Kulkarni, S., Parsa, M., Mitchell, J., Date, P. & Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nature Computational Science. 2, 10-19 (2022,1)Liao2022Liao, J., Widmer, L., Wang, X., Di Mauro, A., Nason-Tomaszewski, S., Chestek, C., Benini, L. & Jang, T. 
An Energy-Efficient Spiking Neural Network for Finger Velocity Decoding for Implantable Brain-Machine Interface. 2022 IEEE 4th International Conference On Artificial Intelligence Circuits And Systems (AICAS). pp. 134-137 (2022)o2017nonhumanO’Doherty, J., MB, M., Makin, J. & Sabes, P. Nonhuman primate reaching with multichannel sensorimotor cortex electrophysiology. (https://zenodo.org/records/583331)yi_2016Chen, Y., Yao, E. & Basu, A. A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces. IEEE Trans. On Biomedical Circuits And Systems. 10, 679-692 (2016)horowitz20141Horowitz, M. 1.1 computing's energy problem (and what we can do about it). 2014 IEEE International Solid-state Circuits Conference Digest Of Technical Papers (ISSCC). pp. 10-14 (2014)abu_nvmSebastian, A. & Al. Memory devices and applications for in-memory computing. Nature Nanotechnology. 15, 529-544 (2020)marvin_sramZhang, C. & Al. Challenges and trends of SRAM-based computing-in-memory for AI edge devices. IEEE Trans. On CAS-I. 68, 1773-1786 (2021)Zhou Biyanreceived her Bachelors degree in Electrical Engineering from Hohai University and her Masters degree in Electrical and Electronic Engineering from Nanyang Technological University. She's currently pursuing her PhD at the City University of Hong Kong, focusing on spiking neural network, neural decoder and brain machine interfaces.Pao-Sheng Vincent Sunreceived his Bachelors in Computer Engineering and Masters in Electical and Electronic Engineering from the University of Western Australia. After graduation, he has worked in the automotive industry developing next generation smart lighting devices and setting up business analytics using machine learning with cloud computing. He is currently working towards his PhD at the City University of Hong Kong, with primary focus on computer vision based neuromorphic computing, spiking neural network and deep learning for edge application. Arindam BasuArindam Basu received the B.Tech and M.Tech degrees in Electronics and Electrical Communication Engineering from the Indian Institute of Technology, Kharagpur in 2005, the M.S. degree in Mathematics and PhD. degree in Electrical Engineering from the Georgia Institute of Technology, Atlanta in 2009 and 2010 respectively. Dr. Basu received the Prime Minister of India Gold Medal in 2005 from I.I.T Kharagpur. He is currently a Professor in City University of Hong Kong in the Department of Electrical Engineering and was a tenured Associate Professor at Nanyang Technological University before this. He is currently an Associate Editor of the IEEE Sensors journal, Frontiers in Neuroscience, IOP Neuromorphic Computing and Engineering, and IEEE Transactions on Biomedical Circuits and Systems. He has served as IEEE CAS Distinguished Lecturer for the 2016-17 period. Dr. Basu received the best student paper award at the Ultrasonics symposium, in 2006, the best live demonstration at ISCAS 2010, and a finalist position in the best student paper contest at ISCAS 2008. He was awarded MIT Technology Review's TR35 Asia Pacific award in 2012 and inducted into Georgia Tech Alumni Association's 40 under 40 class of 2022.
http://arxiv.org/abs/2312.15889v1
{ "authors": [ "Biyan Zhou", "Pao-Sheng Vincent Sun", "Arindam Basu" ], "categories": [ "cs.LG", "cs.HC", "cs.NE", "q-bio.NC" ], "primary_category": "cs.LG", "published": "20231226054039", "title": "ANN vs SNN: A case study for Neural Decoding in Implantable Brain-Machine Interfaces" }
Sublattice-selective inverse Faraday effect in ferrimagnetic rare-earth iron garnet Takuya Satoh January 14, 2024 =================================================================================== Recently normalizing flows have been gaining traction in text-to-speech (TTS) and voice conversion (VC) due to their state-of-the-art (SOTA) performance. Normalizing flows are unsupervised generative models. In this paper, we introduce supervision to the training process of normalizing flows, without the need for parallel data. We call this training paradigm AutoEncoder Normalizing Flow (AE-Flow). It adds a reconstruction loss forcing the model to use information from the conditioning to reconstruct an audio sample. Our goal is to understand the impact of each component and find the right combination of the negative log-likelihood (NLL) and the reconstruction loss in training normalizing flows with coupling blocks. For that reason we will compare flow-based mapping model trained with: (i) NLL loss, (ii) NLL and reconstruction losses, as well as (iii) reconstruction loss only. Additionally, we compare our model with SOTA VC baseline. The models are evaluated in terms of naturalness, speaker similarity, intelligibility in many-to-many and many-to-any VC settings. The results show that the proposed training paradigm systematically improves speaker similarity and naturalness when compared to regular training methods of normalizing flows. Furthermore, we show that our method improves speaker similarity and intelligibility over the state-of-the-art. Index Terms: voice conversion, many-to-many voice conversion, many-to-any voice conversion, zero-shot voice conversion, normalizing flows, FlowVC, CopyCat.§ INTRODUCTION Voice conversion is the task of transforming speech from a source voice tosound as though it was spoken by the desired target voice <cit.>. In other words, we want to change the speaker identity in speech while preserving linguistic information. There are two main data paradigms in voice conversion: the use of parallel <cit.> and non-parallel training data <cit.>. The former assumes access to parallel training data, i.e. recordings that differ only in speaker identity. Such data allows the mapping between speakers to be learned with supervision. Unfortunately, real parallel data does not exist. We could use signal processing techniques such as dynamic time warping <cit.> to match the recordings on the frame level, but the quality of such transformed recordings is questionable. The latter paradigm does not require parallel data and utilizes unsupervised learners such as normalizing flows.In this paper, we investigate normalizing flows following their recent success in text-to-speech (TTS)  <cit.> and voice conversion (VC) <cit.>. Flow-based generative models learn mapping from the input data to a latent vector <cit.>. This mapping is done through a sequence of invertible transformations using the change of variables rule to obtain a valid probability distribution allowing for exact sampling and density evaluation. They explicitly maximize the likelihood of the prior distribution resulting in a stable convergence. Additional conditioning can be provided to the flow steps via coupling blocks to help maximize the likelihood or to achieve additional control over the signal generation <cit.>. Coupling blocks are constructed in such a way that the information present in the conditioning should be removed when encoding to the latent space and added when decoding from the latent space. 
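As an illustration of this idea, a single conditional affine coupling step can be sketched as below. This is a generic sketch rather than the architecture of any particular system: the conditioning (e.g., a speaker embedding) is concatenated to the unchanged half of the input before predicting the scale and shift applied to the other half.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Minimal affine coupling block with extra conditioning (e.g. a speaker embedding)."""

    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        # Predicts a log-scale and shift for the second half of x
        # from the first half of x and the conditioning vector.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):                      # encoding direction: x -> z
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        zb = (xb - t) * torch.exp(-log_s)            # conditioning information is removed
        log_det = -log_s.sum(dim=-1)                 # exact log |det Jacobian|
        return torch.cat([xa, zb], dim=-1), log_det

    def inverse(self, z, cond):                      # decoding direction: z -> x
        za, zb = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(torch.cat([za, cond], dim=-1)).chunk(2, dim=-1)
        xb = zb * torch.exp(log_s) + t               # conditioning information is added back
        return torch.cat([za, xb], dim=-1)
```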
That mechanism allows to control the speech features of interest by changing the conditioning information between encoding and decoding procedures. However, the above is true only if the model learns the contribution of the conditioning to the speech sample.Unfortunately, flow-based generative models do not perfectly disentangle speaker identity from the audio sample, which may come from an inability to fully utilize speaker embedding conditioning <cit.>. We observe potential speaker information leakage to the latent space by training a speaker classifier on the average pooled latent space across time dimension. A two-layer perceptron classifier achieves 29% accuracy on a 118-speaker test set indicating that the conditioning might not be fully utilised.In this work, we propose a new training paradigm of normalizing flows that adds supervision to enforce the use of conditioning and improves speaker similarity between the target speaker recording and audio signal generated by the VC model. The proposed paradigm is called AutoEncoder Normalizing Flow (AE-Flow), which is a normalizing flow VC model trained as an autoencoder with an additional reconstruction loss, e.g. L1 loss. During speech generation, we decode from the sampled prior distribution assuming that all necessary information is provided via coupling blocks. This mitigates source speaker leakage and speeds up inference as we can omit the encoding step. We hypothesise that this approach enforces the use of conditioning.We apply the proposed training paradigm to FlowVC model <cit.> that has demonstrated the state-of-the-art quality. We study the balance between NLL and L1 reconstruction losses in training normalizing flows. Moreover, we compare our model with state-of-the-art voice conversion model. The methods are evaluated in terms of naturalness, speaker similarity, intelligibility in many-to-many and many-to-any VC settings. The experiments show that the proposed training paradigm systematically improves speaker similarity and naturalness when compared to regular training methods of normalizing flows. Furthermore, we show that our method improves speaker similarity and intelligibility over the state-of-the-art. § METHOD§.§ Normalizing FlowsNormalizing flows aim to approximate an unknown true data distribution p(x) from a set of observations {x}_i=1^N. The flow-based generative model learns a bijective transformation f_θ(.) (where θ denotes neural network's parameters) that maps a latent space with tractable distribution p_ι(z) to x: z ∼ p_ι(z), x=f_θ(z) ∧z=f^-1_θ(x).Here we assume that p_ι is a standard normal distribution 𝒩(0, I). What is more,the normalizing flow f_θ is composed of a sequence of K invertible transformations f^-1_θ=f_θ_1^-1∘ f_θ_2^-1∘…∘ f_θ_K^-1. One of the major properties of normalizing flows is that they can directly model the density from Equation <ref> under the change of variable theorem. We can compute the exact log-likelihood for a given data point x as:log p_θ(x) = log p_ι(z) + ∑_i=1^Klog|∂f_θ_i^-1(x_i)/∂ x_i| ,where ∂f_θ_i^-1(x_i)/∂ x_i is the Jacobian of f_i^-1(x_i). Given that we can directly compute log p_θ(x), the normalizing flow is optimized via negative log-likelihood. This makes the training more stable compared to optimizing adversarial loss in Generative Adversarial Networks (GANs) or the lower bound of the NLL for Variational Autoencoders (VAEs) <cit.>. 
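In code, Equation <ref> amounts to adding the prior log-density of z to the accumulated log-determinants of the individual steps. The following minimal sketch assumes flow steps with the interface of the coupling block shown earlier and a standard normal prior:

```python
import math
import torch

def flow_log_likelihood(x, cond, steps):
    """log p_theta(x) for a flow composed of invertible `steps` (applied as f^-1)."""
    z, total_log_det = x, 0.0
    for step in steps:                       # f_theta^-1 = f_K^-1 o ... o f_1^-1
        z, log_det = step(z, cond)           # each step returns its log |det Jacobian|
        total_log_det = total_log_det + log_det
    log_prior = (-0.5 * (z ** 2 + math.log(2 * math.pi))).sum(dim=-1)   # log N(z; 0, I)
    return log_prior + total_log_det

# Training then minimizes the NLL: loss = -flow_log_likelihood(x, cond, steps).mean()
```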
We can also exactly compute the log-likelihood of a given sample x. To better maximize the log-likelihood and obtain control over speaker identity, we use conditional normalizing flows and closely follow the architecture of FlowVC <cit.>, shown in Figure <ref>. Conditioning is provided via a coupling layer. The encoding to z looks as follows:

ψ = { ph_source, vuv_source, f0_source}, z = f_θ^-1(x; spk_source, ψ),

where spk_source is a pre-trained mean speaker embedding corresponding to the source speaker <cit.>, and ph_source is the phoneme conditioning coming from the phoneme encoder. We extract the phoneme sequence and durations as in <cit.>. Further, vuv_source is a binary value denoting whether a frame is voiced or unvoiced, and f0_source is a sentence-level, mean-normalised, interpolated log-f0. The f0 normalization is applied to remove speaker identity (i.e., relating to the speaker's average f0) from sentence prosody, thus separating the f0 conditioning from the speaker embedding conditioning. Finally, to perform voice conversion, we first sample the latent z from the prior distribution, see Equation <ref>. Then, to generate a mel-spectrogram x_gen in the target voice, we use the mean speaker embedding of the target speaker spk_target and the other features extracted from the source speech, see Equation <ref>:

z ∼ p_ι(z), x_gen = f_θ(z; spk_target, ψ).

§.§ AutoEncoder Normalizing Flow

Normalizing flows have many useful properties such as exact log-likelihood estimation, stable convergence and a meaningful latent representation. Passing speaker embedding conditioning through the coupling blocks allows the model to learn how to add or remove speaker information in the stream of flows. Unfortunately, flow-based generative models do not perfectly disentangle speaker identity from the audio sample, see Section <ref>. We hypothesise that by adding supervision we could strengthen the signal from the conditioning and improve the speaker similarity of a flow-based mapping model. In this section, we introduce AutoEncoder Normalizing Flow, a new paradigm for training normalizing flows with additional losses. This work focuses on the L1 reconstruction loss, but the approach could be generalized to other losses such as L2 or adversarial loss. AE-Flow first encodes a mel-spectrogram x to the latent space z, see Equation <ref>. Then, z' is sampled from a normal distribution, see Equation <ref>, and decoded back to a mel-spectrogram to obtain x_gen, see Equation <ref>:

z = f_θ^-1(x; spk_source, ψ), z' ∼ p_ι(z), x_gen = f_θ(z'; spk_source, ψ).

Notice that the conditioning for the encoding and decoding is exactly the same. The sampling step is necessary: if we did not exchange z for z' before decoding, then x_gen=x since f_θ(f^-1_θ(x))=x, and the additional loss would be meaningless. Finally, we can write the objective function of the AE-Flow:

1/N ∑_i=1^N [ -(1-λ) log p_θ(x^(i)) + λ ‖ x_gen^(i) - x^(i) ‖_1 ],

where λ is a hyperparameter that controls the balance between the losses.

§ EXPERIMENTAL SETUP

§.§ Dataset

We use Amazon's internal high-quality dataset. US English professional voice talents were asked to read the provided text in a recording studio. The training set has 118 gender-balanced speakers. There are approximately 91k utterances with an average recording length of 3.9 s. A sampling rate of 24 kHz was used for all recordings, from which 80-dimensional mel-spectrograms were extracted using a frame shift of 12.5 ms. For generating audio samples for evaluation, the Universal Neural Vocoder was used <cit.>.
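The exact feature extraction pipeline is internal, but an equivalent 80-band mel-spectrogram with a 12.5 ms frame shift at 24 kHz can be sketched with librosa; the FFT and window sizes below are illustrative assumptions rather than the settings actually used:

```python
import librosa
import numpy as np

def extract_mel(path, sr=24000, n_mels=80, hop_ms=12.5):
    wav, _ = librosa.load(path, sr=sr)
    hop = int(sr * hop_ms / 1000)                     # 300 samples = 12.5 ms at 24 kHz
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=1024, win_length=1024,    # assumed analysis settings
        hop_length=hop, n_mels=n_mels,
    )
    return np.log(mel + 1e-5).T                       # (frames, 80) log-mel features
```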
To evaluate the systems, we use two datasets:
* S to S (seen source speaker to seen target speaker) - 5 male and 5 female speakers randomly chosen from the training data. For each source speaker we take 20 different utterances not seen during training. Then we create all (source speaker, target speaker, utterance) combinations and randomly choose 600 of them for evaluation.
* S to U (seen source speaker to unseen target speaker) - we use the same source speakers as in the S to S dataset. There are 4 male and 4 female target speakers unseen during training. Finally, we create a dense conversion mapping from seen to unseen speakers and randomly choose 600 combinations for evaluation.

§.§ Evaluated Systems

To study the proposed training paradigm, we compare the following models: AE-Flow, FlowVC, ND-Flow and CopyCat <cit.>. The AE-Flow uses both the NLL and L1 reconstruction losses. To select the reconstruction loss weight parameter λ (see Equation <ref>), we considered λ∈{0.5, 0.9, 0.99} and chose the best performing λ=0.99 based on internal subjective preference tests. FlowVC, a normalizing-flow model trained only with the NLL loss, is used as the baseline approach (λ=0 in Equation <ref>). We introduce ND-Flow, a noise-decoding flow that uses the same architecture as AE-Flow and FlowVC, but is trained to map Gaussian noise to the target mel-spectrogram using only the L1 loss (λ=1 in Equation <ref>). We trained our models for up to 100 epochs with a batch size of 64 and the Adam optimizer <cit.> on two Tesla V100 16GB GPUs with PyTorch 1.10.2+cu102 <cit.> and a frozen random seed. Finally, we include the CopyCat model as a state-of-the-art non flow-based VC baseline. Throughout this work, we refer to source speaker recordings as Source and non-parallel target speaker recordings as Target.

§.§ Evaluation Protocol

To measure performance and compare the voice conversion models, we use the following metrics:
* Speaker similarity: MUSHRA evaluation <cit.>, where listeners are given the following instruction: “Please listen to the speaker in the reference sample first. Then rate how similar the speakers in each system sound compared to the reference speaker”. Two different recordings from the target speaker are included: one as the reference sample and the other as one of the systems to be rated, as an upper-anchor. In addition, the source speech recording is included among the systems to be rated, as a lower-anchor.
* Naturalness: MUSHRA evaluation where listeners are given the following instruction: “Please rate the audio samples in terms of their naturalness”. The recording from the source speaker is included among the systems to be rated as an upper-anchor.
* Word Error Rate (WER): To measure the intelligibility of the converted speech we conduct a WER analysis. It is computed by comparing the original text of an utterance with the transcription of the converted speech obtained by an ASR system using a pre-trained Kaldi TDNN chain model <cit.>.
For each MUSHRA evaluation there were 240 testers with 20 ratings per tester. Significant differences between systems were detected using paired t-tests with Holm-Bonferroni correction applied. All reported significant differences are for p-value ≤ 0.05.
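This significance test is straightforward to reproduce with standard tools; a sketch with illustrative names is given below, where each entry of scores holds the per-utterance ratings of one system on the same set of utterances:

```python
from itertools import combinations
from scipy import stats

def pairwise_holm(scores, alpha=0.05):
    """Paired t-tests between all systems with Holm-Bonferroni correction."""
    pairs = list(combinations(scores.keys(), 2))
    pvals = [stats.ttest_rel(scores[a], scores[b]).pvalue for a, b in pairs]

    # Holm-Bonferroni step-down: sort p-values and compare the k-th smallest
    # against alpha / (m - k); once one comparison fails, all larger ones fail.
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, rejected, stop = len(pvals), {}, False
    for rank, i in enumerate(order):
        if not stop and pvals[i] <= alpha / (m - rank):
            rejected[pairs[i]] = True        # significant difference
        else:
            stop = True
            rejected[pairs[i]] = False
    return rejected
```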
§ EXPERIMENTAL RESULTS

§.§ Comparison of Flow-based Approaches

In this experiment we compare three normalizing flow models: FlowVC (no reconstruction loss), AE-Flow (both NLL and L1 reconstruction loss) and ND-Flow (only L1 reconstruction loss), to assess how the balance between the NLL and L1 reconstruction losses affects the speaker similarity and naturalness of the generated speech. Evaluation results are presented in Table <ref>. Considering speaker similarity, AE-Flow and ND-Flow improve upon FlowVC, both in the seen-to-seen and the seen-to-unseen case. This shows that adding the L1 loss to the NLL objective improves speaker similarity. There is no statistically significant difference between AE-Flow and ND-Flow in either case. During informal listening, we observed that the L1 objective regularizes the model and prevents extreme behaviour. As an example, FlowVC occasionally makes the generated sample sound too high-pitched when converting from a female to a female voice, thus diverging from the target speaker. This behaviour is less noticeable in AE-Flow and ND-Flow. We hypothesise that the reason is the "averaging" nature of the L1 loss, which prevents the extreme changes in pitch occasionally occurring in samples generated by FlowVC.

Regarding naturalness, the models with the additional L1 reconstruction loss outperform the model trained only with the NLL loss. In the S to S case, only the comparison of FlowVC and Source is statistically significant. The power of the test was too low to reject the hypothesis that samples generated by AE-Flow and ND-Flow are on par with real recordings in terms of naturalness. In the many-to-any case, both AE-Flow and ND-Flow statistically improve upon FlowVC.

This study shows that adding the L1 objective improves speaker similarity and naturalness when training normalizing flow models. The lack of a statistical difference between AE-Flow and ND-Flow prevents us from conclusively comparing those models. It is worth mentioning that ND-Flow training is almost two times faster than AE-Flow, as only the decoding step needs to be performed, see Equation <ref>.

§.§ Comparison to SOTA VC Approach

Further MUSHRA evaluations were conducted to compare the flow-based generative models with the non flow-based SOTA baseline. The MUSHRA scores are used to assess the speaker similarity and naturalness of the generated samples in the S to S and S to U scenarios, see Section <ref>. The evaluation results are presented in Table <ref>. Considering speaker similarity, the flow-based VC models scored higher than the CopyCat baseline. This shows that flow-based models are on par with or superior to the CopyCat model in terms of speaker similarity, as also found in <cit.>. However, statistical significance was achieved only between AE-Flow and CopyCat in the S to S case. This shows that our AE-Flow method improves upon the SOTA non flow-based voice conversion model in terms of speaker similarity. The naturalness results show no statistically significant difference between the flow-based models and CopyCat in the S to S case. It is worth mentioning that the MUSHRA evaluators could not distinguish between AE-Flow and real recordings. In the zero-shot experiment the CopyCat model outperforms FlowVC, but its comparison to AE-Flow is not statistically significant. The experiment shows that the flow-based models can achieve performance similar to non flow-based SOTA methods in voice conversion. The results also suggest that our method outperforms CopyCat in speaker similarity and naturalness in some settings.
§.§ Word Error Rate Analysis

In this section, we study the intelligibility of the generated samples in both the S to S and the S to U case. In Table <ref> we gather word error rate scores for FlowVC, AE-Flow, ND-Flow, CopyCat, and the reference Source recordings. All flow-based models outperform CopyCat with statistical significance. However, there is no statistically significant difference between FlowVC, AE-Flow and ND-Flow. This may suggest that the architecture, rather than the training loss, has the most significant impact on intelligibility.

§ DISCUSSION

Our experiments show that the addition of a reconstruction loss improves upon the standard NLL training of normalizing flows. However, the question arises why not use the L1 loss only, as there is no statistical difference between AE-Flow and ND-Flow. The dataset used in this paper was created by professional voice actors with recordings of similar style. The low variance in a speaker-conditioned distribution may mitigate the perception of the "averaging" effect of the L1 loss. It is possible that when training on a larger range of recording conditions, with some speakers recorded in studio quality whilst others are recorded in more ambient surroundings using lower-quality microphones, the advantage of AE-Flow over FlowVC and ND-Flow would become more noticeable. We leave this topic for future work, but suggest optimising the balance between the NLL and L1 losses for a given dataset. Another direction for future work is the use of L2, adversarial or other losses in the AE-Flow training setup. The proposed method is general and not constrained to the L1 loss.

§ CONCLUSIONS

In this paper we have proposed a new training paradigm for flow-based generative models, called AutoEncoder Normalizing Flows, that introduces supervision to the training procedure without the need for parallel data. We have comprehensively evaluated our method and the baselines in terms of speaker similarity, naturalness and intelligibility in many-to-many and many-to-any voice conversion scenarios. The results show that adding the L1 reconstruction loss to the normalizing flow training objective improves both the speaker similarity and the naturalness of the generated samples. Our method also improves upon the non flow-based SOTA CopyCat model in terms of intelligibility and speaker similarity. Moreover, our training method can easily be generalized to other supervised objectives such as the L2 loss and adversarial loss.
http://arxiv.org/abs/2312.16552v1
{ "authors": [ "Jakub Mosiński", "Piotr Biliński", "Thomas Merritt", "Abdelhamid Ezzerg", "Daniel Korzekwa" ], "categories": [ "cs.SD", "cs.LG", "eess.AS" ], "primary_category": "cs.SD", "published": "20231227122921", "title": "AE-Flow: AutoEncoder Normalizing Flow" }
𝖢𝗎𝗋𝗅 dt d x ddt dd d ∂_t ∂_s φ σ θ 𝒞 𝒟 ℰ ℛ 𝒫 𝒬 𝒮 𝐑𝐓 𝐍𝐃 Ra 𝐃 𝕂 𝐕 Βη 0 𝐇 𝐻 𝐖 𝐙 V ŁL 𝐋 𝒯 𝒲 𝒰 𝒫 𝒱_h 𝒬_h ℍ σ τ tr ψ in on RemarkRemark[section] equationsection TheoremTheorem PropositionProposition LemmaLemma DefinitionDefinition 1 .001 Nonlinear predator-prey cross-diffusion–fluid system with two chemicals M. Bendahmane,F. Karami, D. Meskine, J. Tagoudjeu and M. Zagour mode = title]Mathematical analysis and multiscale derivation of a nonlinear predator-prey cross-diffusion–fluid system with two chemicals [email protected] 1 Institut de Mathématiques de Bordeaux, Université de Bordeaux, 33076 Bordeaux Cedex, France [email protected] 2 École Supérieure de Technologie d'Essaouira, Université Cadi Ayyad, B.P. 383 Essaouira El Jadida, Essaouira, Morocco [email protected] [email protected] 3 École Nationale Supérieure Polytechnique de Yaoundé, Universite de Yaoundé I, B.P 8390 Yaoundé, Cameroun [email protected] 4 Euromed Research Center, Euromed University of Fes, Rte Principale Fès Meknès, 30000 Fès, Morocco A nonlinear cross-diffusion–fluid system with chemical terms describing the dynamics of predator-prey living in a Newtonian fluid is proposed in this paper. The existence of a weak solution for the proposed macro-scale system is proved based on the Schauder fixed-point theory, a priori estimates, and compactness arguments. The proposed system is derived from the underlying description delivered by a kinetic-fluid theory model by a multiscale approach. Finally, we discuss the computational results for the proposed macro-scale system in two-dimensional space. Chemical cross-diffusion–fluid; Kinetic–fluid theory;Schauder fixed-point theory; Pattern formation; Finite-volume method; Finite-element method. [ Mohamed Zagour^4 January 14, 2024 ====================§ INTRODUCTION As it is known, cross-diffusion mathematical models have been helpful to predict many interesting features such as pattern-formation, dynamics segregation phenomena, and competition between interacting populations. Several models have been proposed and studied in the literature for competing species living outside the fluid medium. Originally, the classic ecological models began with the study of two interacting species (see, e.g., <cit.> for more details). Next, some cross-diffusion models of three and multiple interacting species <cit.>were proposed. The author in <cit.> proposed a model with two interacting species living in a stationary fluid governed by the augmented Brinkman system. Recently, the author in <cit.> generalized the aforesaid model to a nonlocal cross-diffusion with multiple species living in a Newtonian fluid governed by the incompressible Navier-Stokes.Indeed, the motivation comes from the fact that many species are living in a fluid. Consequently, their dynamic is affected by the presence of the fluid. Compare with the previously cited articles, in the present paper we propose a nonlinear predator-prey cross-diffusion–fluid with two chemicals. The predator and prey species present the ability to orientate their movement towards the concentration of the chemical secreted by the other species. The problem is presented as a system of two parabolic equations describing the evolution of the predator and prey species and two elliptic equations for the concentration of the chemicals coupled with the incompressible Navier-Stokes. 
In order to state our problem, let consider Ω∈ℝ^d, d = 1,2, 3, a simply connected domain saturated with a Newtonian incompressible fluid, where also predator and prey species and two chemical substances are present. The physical scenario of interest can be described by the following nonlinear macro-scale system in T := (0, T)×Ω for a fixed time T > 0 written in a non-dimensional form {[∂_t n_1+U·∇ n_1- (d_1(n_1)∇ n_1) +(χ_1(n_1)∇ w_1 )=F_1(n_1,n_2),; ; ∂_t n_2+U·∇n_2- (d_2(n_2)∇ n_2)+ (χ_2(n_2)∇ w_2 )=F_2(n_1,n_2),;; U·∇w_1-Δ w_1+α_1 w_1=β_1 n_2,; ; U·∇w_2-Δ w_2+α_2 w_2=β_2 n_1,; ; ∂_t U -νΔ U+ k(U·∇)U+∇ p+Q(n_1,n_2) ∇ϕ = , U=0. ]. We augment our proposed macro-scale system with the following boundary conditions (d_i(n_i)∇ n_i-χ_i( n_i)∇ w_i)·η=0,∇ w_i η=0, U=,on Σ_T=(0,T]×∂Ω and the initial conditions n_i(t=0,x)=n_i,0(x), U(t=0,x)=U_0(x)for x∈Ω for i=1,2. Here n_1 and n_2 denote population densities of the predator and the prey, respectively, w_1 and w_2 represent concentrations of the (chemical) signals produced by n_2 and n_1 respectively; U is the fluid velocity, p is the fluid pressure; d_1 and d_2 are the nonlinear diffusion functions; χ_i are thenonlineartactic functions; α_i, β_i for i = 1,2 are positive constants. Tactic coefficients play a major role from a modeling point of view. Indeed, one can find in nature that the movement of biological species is oriented by chemical gradients, where the predator moves towards the prey. Different types of situations can occur depending on the ability of predator and prey to direct their movement towards these chemical gradients. A typical example is the following: the tactic coefficients: χ_1> 0 and χ_2 < 0 model the situation where the prey avoids the predator by moving away from its signal gradient, while the predator follows the prey by following a higher concentration of the chemical w_2. Finally, F_1 and F_2 are Lotka-Voltera reaction terms given by F_1(n_1,n_2)=n_1(a_1-b_1n_1-c_1n_2),F_2(n_1,n_2)=n_2(a_2-c_2n_2+b_2n_1), where a_1,a_2,b_1,b_2,c_1 and c_2 are the positive coefficients of intra-specific competition and inter-specific competition. Let us mention that macro-scale system (<ref>) indicates that the predator is attracted by the chemical signal w_1 of the prey n_1, while the prey is repelled by the chemical signal w_2 produced by the predator. Note that the equations for prey and predator odors are elliptical rather than parabolic. This is justified in cases where odor diffusion occurs on a much faster time scale than the movement of individuals, which is reasonable in a variety of ecological settings. Note that we refer to w_1 and w_2 as chemical signals which can be interpreted more generally as potentials representing the possibility of an animal being detected from a distance, for example by visual means. However, for example, these quantities can model chemical odors. The coupling in our system (<ref>) appears through the convection term U·∇ n_i, U·∇ w_i and the external force Q(n_1,n_2)∇ϕ. In the absence of the fluid i.e. (U=), system (<ref>) reduces to chemotaxis chemicals system. Among others in <cit.> the authors proved global existence and asymptotic behavior of solutions. Systems of two biological species with kinetic interaction have been considered in <cit.>, where the stability of homogeneous steady states is obtained for one chemical (see <cit.>). 
Competitive systems of two biological species and a chemical with non-constant coefficients have been considered in <cit.> where the authors establish sufficient conditions for the existence of solutions and its asymptotic dynamics. For the one species case with time and space dependence coefficients and growth term we refer to the reader to <cit.>. Moreover, systems of two biological species with chemotactic abilities have been studied. For instance, in <cit.> the competitive system is studied and the global existence and asymptotic behavior are obtained for positive and bounded initial data. While in <cit.>, the reduced system is studied for constant coefficients in the competitive case. Several numerical methods have been used for solving nonlinear predator-prey and competitive systems. For instance, in <cit.> authors solve a two species system using a moving mesh finite elements in one dimension. Also, a particle method and the the meshless method of the Generalized Finite Differences have been applied respectively in <cit.>. In this paper we address a multiscale derivation approach of the proposed macro-scale model from kinetic theory model based on the micro-macro decomposition method. We start by rewriting the kinetic theory model as a coupled system of microscopic and macroscopic equations. Next, the proposed macro-scale model is derived by low order asymptotic expansions in terms of a small parameter. This approach has been applied to the micro-macro application in different fields. For instance, chemotaxis phenomena <cit.>, a time-dependent SEIRD reaction diffusion <cit.>, and patterns formation induced by cross-diffusion in a fluid <cit.>. Note that this technique motivated the design numerical tools that preserve the asymptotic property <cit.>. Specifically, these methods design the uniform stability and consistency of numerical schemes in the limit along the transition from kinetic regime to macroscopic regime. This paper is organized as follows: Section <ref> is devoted to establish the existence of weak solutions of the proposed nonlinear cross-diffusion–fluid system (<ref>). The proof is based on Schauder fixed-point theory, a priori estimates, and compactness arguments. In Section <ref>, we present our kinetic–fluid theory model and its properties. According to a multiscale approach based on the micro-macro decomposition method, we obtain an equivalent micro-macro formulation. This leads to derive our proposed macro-scale system (<ref>). In Section <ref>, we investigate the computational analysis of cross-diffusion–fluid system (<ref>) in two dimensional space. We provide several numerical simulations with two cases: in the first case, we ignore the fluid effect (U=) by using finite volume method. in the second one, we consider the full system (<ref>) using finite element method. § MATHEMATICAL ANALYSIS Let Ω be a bounded, open subset of ^d, d=2,3 with asmooth boundary ∂Ω and |Ω| is the Lebesgue measure of Ω. We denote by ^̋1(Ω) the Sobolev space of functions u:Ω→for which n∈Ł^2(Ω) and ∇ n ∈Ł^2(Ω ;^d).For 1≤ p ≤ +∞, ∥·∥_Ł^p(Ω) is the usual norm in Ł^p(Ω).If X is a Banach space, a<b and 1≤ p ≤ +∞, Ł^p(a,b;X) denotes the space of all measurable functions n : (a,b) ⟶ X such that ∥ n(·)∥_X belongs to Ł^p(a,b). Now, we introduce basic spaces in the study of the Navier-Stokes equation. Let the spaces 𝒱, anddefined as: 𝒱= {U∈𝒟(Ω), U=0} , = 𝒱^^̋1_0(Ω), = 𝒱^Ł^2(Ω). 
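The symbols naming the two closures in the definitions just above appear to have been stripped during extraction. Under the standard notation for the incompressible Navier-Stokes setting, and consistent with how the blank symbols are used later in the paper, the definitions would read as follows; the names 𝐕 and 𝐇 for the two closures are our assumption for the missing macros.

```latex
% Assumed reconstruction of the garbled space definitions (the names V and H are guesses
% for the stripped macros; the closures are taken in H^1_0 and L^2 respectively).
\mathcal{V} = \left\{\, \mathbf{U} \in \mathcal{D}(\Omega)^{d} : \operatorname{div}\mathbf{U} = 0 \,\right\},
\qquad
\mathbf{V} = \overline{\mathcal{V}}^{\,H^{1}_{0}(\Omega)^{d}},
\qquad
\mathbf{H} = \overline{\mathcal{V}}^{\,L^{2}(\Omega)^{d}}.
```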
The coupled system of interest (<ref>) can be written as for i=1,2 {[ ∂_t n_i+ U·∇ n_i-(d_i(n_i ) ∇ n_i+χ_i(n_i)∇ w_i )= F_i(n_1,n_2),in Ω_T,; ;U·∇ w_1-Δ w_1+α_1w_1=β_1 n_2,in Ω_T,; ;U·∇ w_2-Δ w_2+α_2w_2=β_2 n_1,in Ω_T,; ; ∂_t U -νΔ U+(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0,in Ω_T,; ; n_i(t=0, x)=n_i0( x), U(t=0,)=U_0( x),in Ω,; ;U=(d_i(n_i)∇ n_i+ χ_i(n_i)∇( w_i)) η=0,on Σ_T. ]. In the proof of the existence of weak solutions, we will use the following assumptions.We assume that for i ∈{1,2}, the function d_n_i: →^+ is continuous and satisfying the following:d_i≤ d_n_i(r)≤d̅_̅i̅∀ r∈ ∀ i∈{1,2} where d_i and d̅_̅i̅ are strictly positive constants.For the reaction terms F_i, they are continuous functions and there exists a constant C_F such that ∀ n_1,n_2≥ 0, F_1(0,n_2) ≥ 0, F_2(n_1,0) ≥ 0∑_i=1^2F_i(n_1,n_2) n_i≤C_F(1+ n_1^2+ n_2^2). Regarding the function Q, we assume it is a continuous function and there exists constant C_Q>0 such that Q(n_1,n_2)≤ C_Q(1+n_1+n_2)for all n_1,n_2∈. Moreover, we assume that∇ϕ∈( Ł^d+2(Ω) )^d ϕstands for the gravitational potential produced by the action of physical forces on the species.Finally, we assume that initial conditions aren_i,0≥ 0, n_i,0∈Ł^2(Ω), U_0∈. Now we define what we mean by weak solution ofthe system (<ref>). We also supply our main existence result. We say that(n_1,n_2,w_1,w_2,U) is a weak solution to problem (<ref>), ifn_i is nonnegative, n_i ∈ L^∞(_T) ∩ L^2(0,T; ^̋1(Ω))∩ C(0,T;L^2()),∂_t n_i∈Ł^2(0,T;(H^1())^'),w_i∈L^∞(0,T;W^2,p(Ω)) for all p>1,U ∈Ł^2(0,T; ) ∩ C([0, T];),∂_t U∈Ł^1(0,T;^'), and the following identities hold ∫_0^T⟨∂_t n_i, ψ_i ⟩_(H^1)^',H^1 dt -∬_Ω_TU·∇ n_iψ_idx dt + ∬_Ω_T d_i( n_i )∇ n_i·∇ψ_idx dt + ∬_Ω_Tχ_i(n_i)∇ w_i·∇ψ_idx dt = ∬_Ω_T F_i( n_1,n_2)ψ_idx dt, -∬_Ω_TU·∇ w_1 φ_1dx dt + ∬_Ω_T∇ w_1 ∇φ_1dx dt = ∬_Ω_T (β_1n_2-α_1w_1)φ_1dx dt, -∬_Ω_TU·∇ w_2 φ_2dx dt + ∬_Ω_T∇ w_2 ∇φ_2dx dt = ∬_Ω_T (β_2n_1-α_2w_2)φ_2dx dt,∫_0^T⟨∂_t U,Ψ⟩_^', dt +ν∫_Ω∇ U : ∇Ψ dx dt+ ∬_Ω_T (U ·∇) U·Ψ dx dt+ ∬_Ω_T Q( n_1,n_2) ∇ϕ·Ψ dx dt =, for all test functions ψ_i, φ_i ∈L^2(0,T; ^̋1(Ω)) and Ψ∈Ł^2(0,T; ), for i=1,2. Assume that conditions (<ref>) and (<ref>) hold. If n_i,0∈ L^∞(Ω) with 0≤ n_i,0≤ u_i,m a.e. in Ω for i=1,2, then the problem (<ref>) has a weak solution in the sense of Definition <ref>.Our proof is based on approximation systems to which we can applythe Schauder fixed-point theorem to prove the convergence to weaksolutions of the approximations. Let us now put our own contributions into aperspective. Our proof is based on introducing the following system for i=1,2 {[ ∂_t n_i+ U·∇ n_i-(d_i(n_i ) ∇ n_i+χ_i,(n_i)∇ w_i )= F_i,(n_1,n_2),in Ω_T,; ;U·∇ w_1-Δ w_1+α_1w_1=β_1 n_2,in Ω_T,; ;U·∇ w_2-Δ w_2+α_2w_2=β_2 n_1,in Ω_T,; ; ∂_t U -νΔ U+(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0,in Ω_T,; ; n_i(t=0, x)=n_i0( x), U(t=0,)=U_0( x),in Ω,; ;U=,∇ w_i·η(d_i(n_i)∇ n_i+ χ_i(n_i)∇( w_i)) η=0,on Σ_T, ]. for each fixed ε >0, where n_iis a fixed function. Herein F_i,(r_1,r_2)=F_i(r_1,r_2)/1+εF_i(r_1,r_2)andχ_i,(r)=χ_i(r)/1+εχ_i(r),. To prove Theorem <ref> we firstprove existence of solutions to the problem (<ref>) byapplying the Schauder fixed-point theorem (in an appropriatefunctional setting), deriving a priori estimates, and then passingto the limit in the approximate solutions using monotonicity andcompactness arguments. Having proved existence to the system(<ref>), the goal is to send the regularization parameterε to zero in sequences of such solutions to fabricate weaksolutions of the original systems (<ref>). 
Againconvergence is achieved by a priori estimates and compactnessarguments. §.§ The fixed-point method In this section we prove, for each fixed ε> 0, the existence ofsolutions to the fixed problem (<ref>), by applying theSchauder fixed-point theorem.For technical reasons, we need to extend the functionF_i, so that it becomes defined for all (r_1,r_2)∈×. We do this by setting F_i,(r_1,r_2)= {[ F_i,(r_1,0),,; F_i,(0,r_2),,; F_i,(0,0),. ]. Since we use Schauder fixed-point theorem, we need to introduce thefollowing closed subset of the Banach space L^2(_T): ={(n_1,n_2)∈ L^2(_T;^2): 0≤ n_1(t,x),n_2(t,x)≤ M, (t,x)∈_T }, where M is a positive constant to be fixed in Lemma <ref> below. §.§ Existence result to the fixed problem In this section, we omit the dependence of the solutions on theparameter ε. With (n_1, n_2)∈ fixed, let w_i and U be the unique solutions of the system ∂_t U -νΔ U+(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0, in Ω_T,U·∇ w_1-Δ w_1+α_1 w_1=β_1 n_2, in Ω_T,U·∇ w_2-Δ w_2+α_2 w_2=β_2 n_1, in Ω_T, U(t=0,)=U_0( x), in Ω, U=, ∇ w_i·η=0, on Σ_T, for i=1,2. Given the functions w_i and U, let n_i be the uniquesolution of the quasilinear parabolic problem {[ ∂_t n_i+ U·∇ n_i-(d_i(n_i ) ∇ n_i+χ_i,(n_i)∇ w_i )= F_i,(n_1,n_2), U=0, in Ω_T,;n_i(t=0, x)=n_i,0( x), in Ω,; (d_i(n_i)∇ n_i+ χ_i,(n_i)∇( w_i)) ·η=0, on Σ_T, ]. for i=1,2. In (<ref>)- (<ref>), U_0 andn_i,0 are functions satisfying the hypothesis of Theorem <ref> for i=1,2.Observe that for any fixed (n_1, n_2) ∈, problem (<ref>)is a pure Navier-Stokes equation coupled weakly to a an elliptic equation for w_i for i=1,2,so we have immediately the following lemma (see for e.g. <cit.>). IfU_0∈, then the system (<ref>) has a unique solution (U, w_i)∈Ł^2(0,T; ) × L^∞(0,T;W^2,p(Ω)) for i=1,2, for all p>1.We have the following lemma for the quasilinear problem (<ref>): Ifn_i,0∈ L^∞(Ω), then, for any ε >0, there exists a unique weak solution n_i∈ L^∞ (_T)∩ L^2(0,T;H^1(Ω)) to problem (<ref>) for i=1,2.§.§ The fixed-point method In this subsection, we introduce a map Γ:→ such thatΓ( n_1,n_2)=(n_1,n_2), where (n_1,n_2) solves (<ref>),i.e., Γ is the solution operator of (<ref>)associated with the coefficient (n_1,n_2) and the solution w_i and Ucoming from (<ref>) for i=1,2. By using the Schauder fixed-pointtheorem, we prove that the map Γ has a fixed point for (<ref>)-(<ref>).First, let us show that Γ is a continuous mapping. For this,we let (n_1, κ,n_2,κ)_κ be a sequence inand(n_1,n_2) ∈ be such that (n_1,κ,n_2,κ)→(n_1,n_2) in L^2(_T;^2) as κ→∞. Define(n_1,κ,n_2,κ)=Γ(n_1, κ,n_2,κ), i.e.,n_1,κ,n_2,κ is the solution of (<ref>) associated with(n_1, κ,n_2,κ) and the solutions w_i,κ and U_κ of(<ref>) for i=1,2. The goal is to show that (n_1, κ,n_2,κ) converges to Γ(n_1,n_2) in L^2(_T).We start with the following lemma where the proof can be found in (<cit.> and in <cit.> Lemma 4.3) so we omit it IfU_0∈, then the solution (U_κ,w_i,κ)_κto the system (<ref>) is uniformly bound in Ł^2(0,T; ) × L^∞(0,T;W^2,p(Ω)) for i=1,2, for all p>1. Moreover, ∂_t U_κ is uniformly bounded in Ł^1(0,T; ^'). The solution (n_1, κ,n_2,κ)_κ to problem (<ref>) satisfies (i)There exists a constant M> 0 such that 0≤n_1, κ(t,x),n_2,κ(t,x)≤ M(t,x)∈_T. (ii)The sequence (n_1, κ,n_2,κ)_κ is bounded in L^2(0,T;H^1(Ω,R^2))∩ L^∞(0,T;L^2(Ω,^2)). (iii)The sequence (U_κ)_κ is bounded in L^2(0,T;H^1(Ω,R^3))∩ L^∞(0,T;L^2(Ω,^3)). (iv)The sequence (n_1, κ,n_2,κ)_κ is relatively compact in L^2(_T,^2). (v)The sequence (U_κ)_κ is relatively compact in Ł^2(_T). (i) Nonnegativity. 
Multiplying (<ref>) by -n_i,κ^-=n_i,κ-n_i,κ/2 and integrating over Ω, we get [1/2d/ dt∫_Ωn_i,κ^-^2 dx - ∫_Ω U_κ·∇ n_i,κn_i,κ^- dx +d_i∫_Ω∇ n_i,κ^-^2dx; =∫_Ωχ_(n_i,κ)∇ w_i,n·∇ n_i,κ^- dx-∫_Ω F_i,(n_1,κ,n_2,κ) n_i,κ^- dx, ] for i=1,2. Recall that U_κ=0 in _T and U_κ=0 on Σ_T, so we have for i=1,2 ∫_Ω U_κ·∇ n_i,κn_i,κ^- dx =1/2∫_Ω U_κ·∇n_i,κ^2 dx =-1/2∫_Ω U_κ n_i,κ^2 dx +1/2∫_∂Ωn_i,κ^2U_κ·η dσ=0. Using this and since χ_(s)=0, F_i,(s_1,s_2)=0 for s,s_1≤ 0, s_2∈, and according to the positivity of the third term of the left-hand side, we obtain 1/2d/ dt∫_Ωn_i,κ^-^2 dx≤ 0for i=1,2. Since the data n_i,0 is is nonnegative, we deduce that n_i,κ^-=0 for i=1,2. Boundedness in L^1 and L^∞. To obtain the L^1 bound of n_i,κ for i=1,2, we integrate the equation (<ref>) over Ω, to deduce d/ dt∑_i=1,2∫_Ω n_i,κ dx =∑_i=1,2∫_Ω F_i,(n_1,κ,n_2,κ)dx≤∑_i=1,2∫_Ωn_i,κ(a_i-b_i n_1,κ-c_i n_2,κ)dx≤∑_i=1,2 a_i ∫_Ωn_i,κ dx≤max{a_1,a_2}∑_i=1,2∫_Ωn_i,κ dx where we have used the nonnegativity of n_i,κ and ∫_Ω U_κ·∇n_i,κ dx=-∫_Ω n_1,κU_κ dx=0, for i=1,2. An application of Grönwall inequality to (<ref>), we obtain for i=1,2 n_i,κ_L^∞(0,T;L^1())≤ C, for some constant C>0. In the next step we prove L^∞ bound of n_i,κ for i=1,2. We multiply (<ref>) for i=1 by (n_i,κ)^p-1 and integrate over Ω. The result is 1/pd/ dt∫_Ωn_1,κ^p dx +(p-1)d_1∫_Ω (n_1,κ)^p-2∇n_1,κ^2dx +p-1/p+m_1-1∫_Ω∇w_1,κ·∇(n_1,κ)^p+m_1-1dx +b_1 ∫_Ω (n_1,κ)^p+1dx≤∫_Ω n_1,κ (n_1,κ)^p-1 dx +∫_Ω U_κ·∇n_1,κ(n_1,κ)^p-1 dx +∫_Ω d_1(n_1,κ)∇n_1,κ·∇ (n_1,κ)^p-2dx +∫_Ωχ_1,(n_1,κ)∇w_1,κ·∇(n_1,κ)^p-1dx ++b_1 ∫_Ω (n_1,κ)^p+1dx =∫_Ω F_1,(n_1,κ,n_2,κ)(n_1,κ)^p-1dx +b_1 ∫_Ω (n_1,κ)^p+1dx≤ a_1 ∫_Ω (n_1,κ)^pdx. Herein, we have used ∫_Ω U_κ·∇n_1,κ(n_1,κ)^p-1 dx =1/p∫_Ω U_κ·∇(n_1,κ)^p dx =-1/p∫_ΩU_κ (n_1,κ)^p dx=0. We observe that ∫_Ω (n_1,κ)^p-2∇ n_1,κ^2dx =4(p-1)/p^2∫_Ω∇ (n_1,κ)^p/2^2 dx. Moreover, from the equation of w_1,κ in (<ref>) and U_κ=0 in _T, we deduce ∫_Ω∇ w_1,κ·∇ (n_1,κ)^p+m_1-1dx =-α_1∫_Ωw_1,κ(n_1,κ)^p+m_1-1dx +β_1∫_Ωn_1,κ (n_1,κ)^p+m_1-1dx≥ -α_1∫_Ωw_1,κ(n_1,κ)^p+m_1-1dx. Now, we use (<ref>)-(<ref>) and Young inequality to deduce from (<ref>) (recall that 1 ≤ m_i<2 for i=1,2) d/ dt∫_Ωn_1,κ^p dx +4(p-1)/p∫_Ω∇ (n_1,κ)^p/2^2 dx +b_1 ∫_Ω (n_1,κ)^p+1dx≤ a_1 p ∫_Ω (n_1,κ)^pdx +α_1p(p-1)/p+m_1-1∫_Ωw_1,κ(n_1,κ)^p+m_1-1dx≤C(a_1,p,m_1)(1+θ/2 ∫_Ωn_1,κ^p+1 dx +θ/2 ∫_Ωn_1,κ^p+1 dx+∫_Ωw_1,κ^p+1/2-m_1 dx)≤C(a_1,p,m_1)(1+θ∫_Ωn_1,κ^p+1 dx +∫_Ωw_1,κ^p+1 dx), where C(a_1,p,m_1)>0 is a constant depending on a_1, p and m_1. An application of Gagliardo-Nirenberg-Sobolev inequality and Young inequality to ∫_Ωn_1,κ^p+1 dx and ∫_Ωn_1,κ^p dx, respectively, we get from (<ref>) and (<ref>) d/ dt∫_Ωn_1,κ^p dx +4(p-1)/p∫_Ω∇ (n_1,κ)^p/2^2 dx +b_1 ∫_Ω (n_1,κ)^p+1dx≤C(a_1,p,m_1,)(1+θ∫_Ωn_1,κdx ×(∫_Ωn_1,κ^p dx +∫_Ω∇ (n_1,κ)^p/2^2 dx) +∫_Ωw_1,κ^p+1 dx)≤C̃(a_1,p,m_1,)(1+θ∫_Ωn_1,κ^p dx +θ∫_Ω∇ (n_1,κ)^p/2^2 dx +∫_Ωw_1,κ^p+1 dx), for some constants C(a_1,p,m_1,), C̃(a_1,p,m_1,)>0 depending on a_1, p, m_1 and . We choose θ sufficiently small to deduce from (<ref>) d/ dt∫_Ωn_1,κ^p dx +C ∫_Ω (n_1,κ)^p+1dx ≤C̃(a_1,p,m_1,) (1+∫_Ωw_1,κ^p+1 dx), for some constant C>0. To control the integral in the right-side, we multiply the equation of w_1,κ in (<ref>) by (w_1,κ)^p-1, we use U_κ=0 in _T and Gagliardo-Nirenberg-Sobolev inequality to get ∫_Ω (w_1,κ)^p+1dx ≤ C(p,) (∫_Ωn_1,κdx ×(∫_Ωn_1,κ^p dx +∫_Ω∇ (n_1,κ)^p/2^2 dx))≤ C(p,,β_1)(∫_Ωn_1,κ^p dx +∫_Ωn_1,κ (w_1,κ)^p-1dx)≤C̃(p,,β_1)(∫_Ωn_1,κ^p dx +∫_Ω (w_1,κ)^pdx)≤Ĉ(p,,β_1)(1+∫_Ωn_1,κ^p dx +θ∫_Ω (w_1,κ)^pdx), for some constants C,C̃,Ĉ>0. 
Again we θ sufficiently small to obtain from (<ref>) ∫_Ω (w_1,κ)^p+1dx ≤C(p,,β_1)(1+∫_Ωn_1,κ^p dx), for some constant C>0. Observe that from (<ref>) and (<ref>), we deduce d/ dt∫_Ωn_1,κ^p dx +C ∫_Ω (n_1,κ)^p+1dx ≤C(a_1,p,m_1,,β_1,) (1+∫_Ωn_1,κ^p dx), for some constant C>0. Therefore an application of Grönwall inequality, we arrive to n_1,κ_L^p()≤ C(a_1,p,m_1,β_1,)for all t∈ (0,T), for some constant C>0. The consequence of (<ref>) and the well-known Moser–Alikakos iteration procedure (see for e.g. <cit.>) is the uniform L^∞-bound n_1,κ_L^∞()≤ C(a_1,p,m_1,,β_1,)for all t∈ (0,T), for some constant C>0. (ii) We multiply the equation (<ref>) by n_i,κ and integrate over Ω to obtain [ 1/2d/ dt∫_Ωn_i,κ^2 dx + ∫_Ω U_κ·∇ n_i,κn_i,κ dx +d_i∫_Ω∇ n_i,κ^2 dx; ; =∫_Ωχ_(n_i,κ)∇ w_i,n·∇ n_i,κ dx-∫_Ω F_i,(n_1,κ,n_2,κ) n_i,κ dx. ] Exploiting the boundedness of n_i,κand U_κ=0 in _T, we get ∫_Ω U_κ·∇ n_i,κn_i,κ dx=0, and that the second and the third integrals of the right-hand side are bounded independently of κ, for i=1,2 . Then by Young inequality 1/2d/ dt∫_Ωn_i,κ^2 dx +C_2 ∫_Ω∇ n_i,κ^2 dx ≤ C_3, for some constants C_2,C_3>0 independent of κ. This completes the proof of (ii). (iii) In this step, we multiply the equation (<ref>) by U_κ and integrate over Ω to obtain [ 1/2∫_Ω |U_κ(τ,x)|^2dx +ν∫_0^τ∫_Ω | ∇ U_κ|^2dx dt+ ∫_0^τ∫_Ω (U_κ·∇) U_κ· U_κ dxdt; 4cm + ∫_0^τ∫_Ω Q(n_1,κ,n_ 2,κ) ∇ϕ· U_κ dxdt=1/2∫_Ω |U_0(x)|^2dx. ] Observe that, since U_κ=0 and U_κ= on ∂Ω, we get ∫_Ω(U_κ·∇) U_κ· U_κ dx= 1/2∫_Ω∇ (U_κ)^2 U_κ dx=-1/2∫_Ω(U_κ) (U_κ)^2dx+1/2∫_∂Ω U_κ (U_κ)^2^Tη=. Using this to deduce from (<ref>) [ 1/2∫_Ω |U_κ(τ, x)|^2dx+ ν∫_0^τ∫_Ω | ∇ U_κ|^2dx dt ≤1/2∫_Ω |U_0()|^2dx-∫_0^τ∫_Ω Q(n_1,κ,n_ 2,κ) ∇ϕ· U_κ dxdt . ] Using(<ref>) and Young inequality, we have [I: =| ∫_0^τ∫_Ω Q(n_1,κ,n_ 2,κ) ∇ϕ· U_κ dxdt|;≤ C_Q(|Ω_τ|+ ∫_0^τ∫_Ω∇ϕ· U_κ dxdt); ≤ C_Q(|Ω_τ|+1/2∫_0^τ∫_Ω| ∇ϕ|^d+2 dx dt+1/2∫_0^τ∫_Ω|U_κ|^2 dx dt). ] Using this and exploiting the assumption ∇ϕ∈ (Ł^2(Ω))^d to deduce from (<ref>) 1/2∫_Ω |U_κ(τ,)|^2dx+ ν∫_0^τ∫_Ω | ∇ U_κ|^2dx dt≤C_Q/2∫_0^τ∫_Ω|U_κ (t,)|^2 dx dt + 1/2∫_Ω |U_0()|^2dx+ C̃_̃Q̃(T, _0,|Ω|,ϕ). An application of Gronwall's inequality, we obtain ∫_Ω |U_κ(τ,)|^2dx ≤ Cfor all τ∈(0,T), for some constant C>0. Consequently, we deduce from this and (<ref>) max_0<τ<T ∫_Ω |U_κ(τ,)|^2dx +ν∫_0^T∫_Ω| ∇ U_κ|^2dx≤ C, for some constant C>0. Therefore,we deduce that U_κ is uniformly bounded in Ł^∞(0,T; )∩Ł^2(0,T; ). (iv) We multiply the equation (<ref>) by φ∈ L^2(0,T;H^1(Ω)) and we use the boundedness of n_i,κ in L^∞ and U_κ in L^2, the result is [∫_0^T⟨∂_t n_i,κ,φ⟩ dt≤∬__TφU_κ· n_κ dx dt +∬__T d_i(n_i,κ) ∇n_i,κ·∇φ dx dt;+∬__Tχ_i,(n_i,κ) ∇w_i,κ·∇φ dx dt +∬__T F_i,(n_1,κ,n_1,κ) φ dx dt;≤ C_4 U_κ_L^2(_T)∇φ_L^2(_T) +d∇ n_i,κ_L^2(_T)∇φ_L^2(Ω_T); +χ_i,(n_i,κ)_L^∞(_T)∇ w_i,κ_L^2(_T)∇φ_L^2(_T);+ C_5∑_i=1,2n_i,κ_L^2(_T)φ_L^2(_T); ≤ C_6φ_L^2(0,T;H^1(Ω)), ] for i=1,2 and for some constants C_4,C_5,C_6>0 independent of κ and .We obtain the bound ∂_t n_i,κ_L^2(0,T;(H^1(Ω))')≤ C. Then, (iv) is a consequence of (ii),the uniform boundedness of (∂_t n_i,κ )_κ in L^2(0,T;(H^1(Ω)') and Aubin-Simon compactness theorem (see for e.g. <cit.>). (v) Finally, using Lemma <ref> and again Aubin-Simon compactness theorem, the space {U_κ∈ L^2(0,T;);∂_t U_κ∈ L^2(0,T;^')}is compactly embedded in L^2(0,T;)̋. This concludes the proof of Lemma <ref>.Now we have the following classical result (see <cit.>). 
There exists a function w_i ∈ L^2(0,T;H^1(Ω)) such that the sequence (w_i,κ)_κ converges strongly to w_i in L^2(0,T;H^1(Ω)) for i=1,2.Summarizing our findings so far, from Lemma <ref>,<ref> and <ref>, there existfunctions n_i,w_i,U ∈ L^2(0,T;H^1(Ω)) such that, up to extractingsubsequences if necessary (for i=1,2), * n_i,κ→ n_i in L^2(_T) strongly, * w_i,κ → w_i in L^2(0,T;H^1(Ω)) strongly, * U_κ→ U in L^2(0,T; ^̋1) strongly, and from this the continuity of Γ onfollows.We observe that, from Lemma <ref>, Γ() isbounded in the set ℰ={ n_i ∈ L^2(0,T;H^1(Ω)):∂_t n_i ∈ L^2(0,T;(H^1(Ω))') }, for i=1,2.By the results of <cit.>, ℰ↪L^2(_T) is compact, thus Γ is compact. Now, by theSchauder fixed point theorem, the operator Γ has a fixedpoint n_i, such thatΓ(n_i,)=n_i, for i=1,2. Then there exists a solution(n_i,, w_i,,U_ε) of ∫_0^T⟨∂_t n_i,, ψ_i ⟩_(H^1)^',H^1 dt -∬_Ω_TU_·∇ n_i, ψ_idx dt + ∬_Ω_T d_i( n_i, )∇ n_i,·∇ψ_idx dt + ∬_Ω_Tχ_i,(n_i,)∇ w_i,·∇ψ_idx dt = ∬_Ω_T F_i,( n_1,,n_2,)ψ_idx dt, -∬_Ω_TU_·∇ w_1,φ_1dx dt + ∬_Ω_T∇ w_1,∇φ_1dx dt = ∬_Ω_T (β_1n_2,-α_1w_1,)φ_1dx dt, -∬_Ω_TU_·∇ w_2,φ_2dx dt + ∬_Ω_T∇ w_2,∇φ_2dx dt = ∬_Ω_T (β_2n_1,-α_2w_2,)φ_idx dt,∫_0^T⟨∂_t U_,Ψ⟩_^', dt +ν∫_Ω∇ U_ : ∇Ψ dx dt+ ∬_Ω_T (U_·∇) U_·Ψ dx dt + ∬_Ω_T Q( n_1,,n_2,) ∇ϕ·Ψ dx dt =, for all test functions ψ_i, φ_i ∈L^2(0,T; ^̋1(Ω)) and Ψ∈Ł^2(0,T; ), for i=1,2.§.§ Existence of weak solutions We have shown in Section <ref> that the problem(<ref>) admits a solution (n_1,ε,n_2,ε,w_1,ε,w_2,ε, U_). The goal inthis section is to send the regularization parameter ε to zeroin sequences of such solutions to obtain weak solutions of theoriginal system (<ref>), (<ref>) and (<ref>). Note that, for each fixedε>0, we have shown the existence of a solution(n_1,ε,n_2,ε) to (<ref>) such that for i=1,2 0≤ n_i,ε(t,x)≤ M, for a.e. (t,x)∈_T where M>0 is a constant not depending on .Taking ψ_i=n_i,ε,φ_i=w_i,ε, Ψ_i=U_εas test functions in (<ref>) and working exactly as in Lemma <ref>,we obtain for i=1,2 sup_0 ≤ t ≤ T∫_Ωn_i,ε(t,x)^2 dx + ∬__T∇ n_i,ε^2 dx dt ≤ C,∬__T∇ n_i,ε^2 dx dt ≤ C,∬__T∇ w_i,ε^2 dx dt ≤ C,sup_0 ≤ t ≤ T∫_ΩU_ε(t,x)^2 dx + ∬__T∇ U_ε^2 dx dt ≤ C, for some constant C>0 independent of ε.Working exactly as the proof of (iv) in Lemma <ref>, we get easily for i=1,2 ∂_t n_i,ε_L^2(0,T;(H^1(Ω))')+ ∂_t U_ε_Ł^1(0,T; ^')≤ C, for some constant C>0 independent of ε. Then, by (<ref>)-(<ref>) and standard compactnessresults (see <cit.>) we can extract subsequences, which we do not relabel,such that, as ε goes to 0, [ n_i,→ n_i ⋆ L^∞(_T),;n_i,→ n_i L^2(0,T;H^1(Ω)),;n_i,→ n_i L^2(_T),; ∂_tn_i,→∂_t n_i L^2(0,T;(H^1(Ω))'),;w_i,→ w_i L^2(0,T;H^1(Ω)),; U_→ U⋆Ł^∞(0,T; ),;U_→ UŁ^2(0,T; ),;U_→ UŁ^∞(0,T; ), ] for i=1,2. From the compact embedding L^∞(Ω) ⊂ (H^1(Ω))', we also have thatn_i, is a Cauchy sequence in C(0,T;(H^1(Ω))') for i=1,2. Moreover,with the convergences (<ref>) and the weak-⋆ convergence of n_i, to n_iin L^∞(_T), we obtain n_i,→ n_i . With the above convergences, we pass to the limit in (<ref>) to obtain the weak formulation (<ref>) in the sense of Definition <ref>. In the following step, we define the operator B such that B(U):=(U·∇)U for U∈Ł^2(0,T; ). Note that we can write the equation of U in (<ref>) in the following form d/dt⟨ U,Ψ⟩=⟨ - νΔ U+B(U)+Q(n_1,n_2)∇ϕ,Ψ⟩,∀Ψ∈. Since the operator -Δ:→^' is linear and continuousand U∈Ł^2(0,T;), we deduce easily that -Δ U∈Ł^2(0,T;^'). Moreover,Q(n_1,n_2)∇ϕ∈Ł^2(0,T;^') and the operator b(U,U,w)=⟨ B(U),w⟩ is trilinear continuous on . 
Furthermore, we exloit ∥ B(U)∥_^'≤∥ U ∥_ to deduce B(U)∈Ł^1(0,T,^') and consequently we arrive to ∂_t U∈Ł^1(0,T,^'). In the final step we are interested of the recuperation of the pressure p.For this we set I_1(t)=∫_0^tU(s) ds, I_2(t)=∫_0^t(U·∇)U(s) ds, I_3(t)=∫_0^tQ(n_1,n_2)(s)∇ϕ ds. It is clear that I_1, I_2, I_2 ∈ C(0,T;(^̋1(Ω))^').Integrating (<ref>) over [0,T] yields ⟨ U(t)-U_0-νΔ I_1(t)+I_2(t)+I_3(t),Ψ⟩=,∀ t∈[0,T], ∀ψ∈. An application of the Rham Theorem (see <cit.> for more details), there exists P(t)∈Ł^2_0(Ω) such that U(t)-U_0-νΔ I_1(t)+I_2(t)+I_3(t)+∇ P=,for each t∈[0,T], where Ł^2_0(Ω)={w∈Ł^2(Ω),∫_Ωw dx=0}.This implies that ∇ P∈ C(0,T;^̋-1(Ω)) and thus P∈ C(0,T;Ł^2_0(Ω)).Finally, a derivation with respect to t in the sense of distributions, we obtain∂_t U-νΔ U+(U·∇)U+Q(n_1,n_2) ∇ϕ+ ∇ p=, where p=∂_t P∈ W^-1,∞(0,T;Ł^2_0(Ω)). § MULTISCALE DERIVATION TOWARD CHEMOTAXIS-CHEMICALS IN A FLUID This section is devoted to the derivation of chemotaxis-chemicals–fluid systemwith predator prey terms from an kinetic–fluid model usingthe micro-macro decomposition technique inspiring from <cit.>. We start by presenting the kinetic–fluid model and its properties. Next, an equivalent appropriate system on the basis of the micro-macro decomposition method is obtained. Then, our system (<ref>) is derived. We consider the case where the set for velocity is a sphere of radius r>0, V=rS^d-1. The kinetic–fluid model is given as follows {[ ε∂_tf_1+ v ·∇_xf_1^ε= 1/ε𝒯_1[f_2^ε](f_1^ε ) +G_1(f_1^ε,f_2^ε,w_1,w_2,v,U),; ;ε∂_tf_2^ε+ v ·∇_xf_2= 1/ε𝒯_2[f_1^ε](f_2^ε) +G_2(f_1^ε,f_2^ε,w_1,w_2,v,U),; ; U·∇w_1-Δ w_1+α_1 w_1=β_1 ∫_Vf_2^ε dv,; ;U·∇w_2-Δ w_2+α_2 w_2=β_2∫_Vf_1^ε dv,; ; ∂_t U -νΔ U+ k(U·∇)U+∇ p+Q(∫_Vf_1^ε dv,∫_Vf_2^ε dv) ∇ϕ = , U=0,;; f_i^ε(t=0,x,v)=f^ε_i,0(x,v), U(t=0,x)=U_0(x), ]. where f_1(t,x,v) and f_2(t,x,v) are the distribution functions describing the statistical evolution of predator and prey species, where t > 0, x∈ℝ^d, and v∈ V are time, position, and velocity, respectively, 𝒯_i is a stochastic operator representing a random modification of the direction of the predator and prey, and the operator G_i describes the gain-loss balance of theses species. To apply the micro-macro decomposition method by low order asymptotic expansions in term of the mean free path ε, the following assumptions are needed. The turning operator 𝒯_i is decomposed as follows: 𝒯_1[f_2^ε](f_1^ε)= ℒ_1(f_1^ε)+ε 𝒯_1^2[f_1^ε](f_2^ε), 𝒯_2[f_1^ε](f_2^ε)= ℒ_2(f_2^ε)+ε 𝒯_2^2[f_2^ε](f_1^ε), where ℒ_1, (ℒ_2) represents the dominant part of the turning kernel and is assumed to be independent of f_2^ε, (f_2^ε) respectively. Herein, we omit the dependence on ε in the functions f_1^ε and f_2^ε. The operators 𝒯_i fori,j=1,2 are given by 𝒯^j_i(f_i)= ∫_V(T^j_i(v^*,v)f_i(t, x, v^*) - T^j_i (v,v^*)f_i(t, x, v) )dv^*, where T_i^j is the probability kernel for the new velocity v∈ V given that the previous velocity was v^*. We assume that T_i^1=σ_i/V. Then, 𝒯_i^1(g)=-σ_i g. Remark that the operators ℒ_i(g) and 𝒯^j_i satisfy ∫_V ℒ_i(g) dv=∫_V 𝒯^j_i(g) dv=0, i,j=1,2. Moreover, there exists a bounded velocity distribution M_i(v)>0 for i=1,2 independent of t and x such that T_i^1 (v,v^* ) M_i(v^*) = T_i^1 (v^*,v ) M_i(v), holds. We consider the following choice M_i(v) = 1/V. Note that the flow produced by these equilibrium distributions vanishes and M_i are normalized, i.e. ∫_V v M_i(v)dv=0, ∫_V M_i(v)dv =1, i=1,2. The other probability kernel T_i^2 is given by T_i^2[f_2](v,v^*)=σ_iD_i M_i/f_i(1+d_i(f_i)) v·∇ (f_i/M_i). 
Second, the interaction operators G_i satisfy the following properties G_i(f_1,f_2,w_1,w_1,v,U)= G_i^1(f_1,f_1,w_1,w_1,v,U)+ ε G_i^2(f_1,f_2),where G_i^1(f_1,f_2,w_1,w_2,v,U) =d σ_i/r^2 V(f_i U+χ_i(∫_Vf_idv)∇ w_i). Note that∫_V G_i^1(f_1,f_2,w_1,w_2,v,U)dv =0,i = 1,2. We define the interactions operators G_2^1 and G_2^2 by G_1^2(f_1, f_2)= 1/|V|f_1(a_1-b_1f_1-c_1f_2), G_2^2(f_1, f_2)= 1/|V|f_2(a_2-b_2f_1-c_2f_2). Using the same arguments as in <cit.>, we find that the operator ℒ_i has the following properties. The following properties of the operator ℒ_i for i=1,2 holds true: i) The operator ℒ_i is self-adjoint in the space Ł^2(V ,dξ M_i).ii) For f∈Ł^2, the equation ℒ_i(g) =f has a unique solution g ∈Ł^2(V, dξ M_i), satisfying ∫_V g(ξ)dξ = 0 ⟺∫_V f(ξ)dξ =0. iii)The equation ℒ_i(g) =ξ M_i(ξ), has a unique solution denoted by θ_i(ξ) for i=1,2. iv)The kernel of ℒ_i is N(ℒ_i) = vect(M_i(ξ)) for i=1,2. We denote the integral with respect to the variable v will be denoted by ⟨ . ⟩. The main idea of the micro-macro method is to decompose the distribution function f_i for i=1,2 as follows f_i(t,x,v)=M_i(v) n_i(t,x) + εg_i(t,x,v), where n_i(t,x)= ⟨ f_i(t,x,v)⟩:=∫_Vf_i(t,x,v) dv. This implies that ⟨ g_i ⟩=0 for i=1,2. Inserting f_i in kinetic–fluid model (<ref>) and using the above assumptions and properties of the interaction and the turning operators, one obtains {[ ∂_t (M_1 (v)n_1)+ ε∂_t g_1 + 1/εv M_1(v) ·∇ n_1 + v ·∇ g_1 = 1/εℒ_1(g_1);; 0.8cm+𝒯_1[f_2](f_1)+1/εG^1_1(f_1,f_2,w_1,w_2,v,U)+G^2_1(f_1,f_2),;; ∂_t (M_2 (v)n_2)+ ε∂_t g_2 + 1/εv M_2(v) ·∇ n_2 + v ·∇ g_2 = 1/εℒ_2(g_2);; 0.8cm+𝒯_2[f_1](f_2)+1/εG^2_1(f_1,f_2,w_1,w_2,v,U)+G^2_2(f_1,f_2),;; U·∇w_1-Δ w_1+α_1 w_1=β_1 n_2,; ;U·∇w_2-Δ w_2+α_2 w_2=β_2n_1,; ;∂_t U -νΔ U+ k(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0. ]. In order to separate the macroscopic density n_i(t,x) and microscopic quantity g_i(t,x,v) for i=1,2 one has to use the projection technique. For that, let consider P_M_i the orthogonal projection onto N(𝒯_i), for i=1,2. It follows P_M_i(h)= ⟨ h⟩ M_i, h∈Ł^2(V ,dvM_i),i=1,2. Now, inserting the operators I -P_M_i into Eq. (<ref>), and using known properties for the projection P_M_i, = 1,2 yields the following micro-macro formulation {[∂_t g_1 + 1/ε^2 v M_1(v) ·∇ n_1+ 1/ε(I-P_M_1)(v ·∇ g_1) =1/ε^2ℒ_1(g_1); +1/ε𝒯_1[f_2](f_1)+1/ε^2G_1^1(f_1,f_2,w_1,w_2,v,U)+ 1/ε(I-P_M_1) G_1^2(f_1,f_2),;;∂_t n_1+⟨ v ·∇ g_1 ⟩ =⟨ G^2_1(f_1,f_2)⟩,;;∂_t g_2 + 1/ε^2 v M_2(v) ·∇ n_2+ 1/ε(I-P_M_2)(v ·∇ g_2) =1/ε^2ℒ_2(g_2); +1/ε𝒯_2[f_1](f_2)+1/ε^2G_2^1(f_1,f_2,w_1,w_2,v,U)+ 1/ε(I-P_M_2) G_2^2(f_1,f_2),;;∂_t n_2+⟨ v ·∇ g_2 ⟩ =⟨ G^2_2(f_1,f_2)⟩,;; U·∇w_1-Δ w_1+α_1 w_1=β_1 n_2,; ;U·∇w_2-Δ w_2+α_2 w_2=β_2n_1,; ;∂_t U -νΔ U+ k(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0. ]. The following proposition states that the micro-macro formulation (<ref>) is equivalent tokinetic-fluid model (<ref>) i) Let (f_1,f_2,w_1,w_2,U,p) be a solution of nonlocal kinetic-fluid model (<ref>). Then(n_1,n_2,g_1,g_2,w_1,w_2,U,p) (where n_i=⟨ f_i ⟩ and g_i= 1ε(f_i-M_i n_i)) is a solution to coupled system (<ref>) associated with the following initial data for i=1,2 n_i(t=0)=n_i,0 =⟨ f_i,0⟩, g_i(t=0)=g_i,0=1ε(f_i,0-M_i n_i,0),andU(t=0)=U_0, ii) Conversely, if (n_1,n_2,g_1,g_2,w_1,w_2,U,p) satisfies system (<ref>) associated with the following initial data (n_1,0,n_2,0, g_1,0,g_2,0,U_0) such that ⟨ g_i,0⟩=0 for i=1,2. Then (f_1,f_2,w_1,w_2,U,p) (where f_i=M_i n_i+ε g_i)is a solution to nonlocal kinetic-fluid model (<ref>) with initial data f_i,0=M_i n_i,0+ε g_i,0 and one has n_i=⟨ f_i ⟩ and ⟨ g_i⟩=0, for i=1,2. 
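For later reference, the cell problem in item iii) above can be solved explicitly. This is a short check under our reading that ℒ_i is the relaxation operator determined by T_i^1 = σ_i/|V| (which is what makes the kernel property iv) hold), together with the normalization M_i = 1/|V| and V = rS^{d-1}; the last step uses the isotropy relation ∫_V v⊗v dv = (r²/d)|V| I. It recovers the expressions for θ_i and for the macroscopic diffusion coefficient used in the derivation below.

```latex
% Short check under the stated assumptions (relaxation form of L_i, M_i = 1/|V|, V = r S^{d-1});
% to be placed in a display environment with amsmath loaded.
\begin{gathered}
\mathcal{L}_i(g) = \sigma_i\bigl(\langle g\rangle M_i - g\bigr), \qquad
\mathcal{L}_i(\theta_i) = v\, M_i,\quad \langle\theta_i\rangle = 0
\;\Longrightarrow\;
\theta_i(v) = -\frac{1}{\sigma_i}\, v\, M_i(v), \\
-\,\bigl\langle v\otimes\theta_i\bigr\rangle
= \frac{1}{\sigma_i}\int_V v\otimes v\, M_i(v)\, dv
= \frac{1}{\sigma_i\,|V|}\cdot\frac{r^2}{d}\,|V|\, I
= \frac{r^2}{d\,\sigma_i}\, I .
\end{gathered}
```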
Next, in order to develop asymptotic analysis of system (<ref>), 𝒯_i and G_i^j assumed to satisfy the following asymptotic behavior ε→ 0 𝒯_1[M_2n_2+ε g_2]=𝒯_1[M_2n_2]+O(ε),𝒯_2[M_1n_1+ε g_1]=𝒯_2[M_1n_1]+O(ε), G_i^1(M_1n_1 +ε g_1, M_2n_2 +ε g_2,w_1,w_2,v,U)= G_i^1(M_1n_1, M_2n_2,w_1,w_2,v,U )+ O(ε), and G_i^2(M_1n_1 +ε g_1, M_2n_2 +ε g_2)= G_i^2(M_1n_1, M_2n_2 )+ O(ε),for i=1,2. Using assumptions (<ref>), (<ref>) and (<ref>), the following equations for g_i can be obtained from (<ref>) g_i =ℒ_i^-1(v M_i ·∇ n_i-ℒ_i^-1(G_i^1(M_1 n_1, M_2 n_2,w_1,w_2,v,U))+O(ε),i=1,2. Finally, inserting (<ref>) into the second and the fourth equations in (<ref>), yields macro–fluid model {[ ∂_t n_i + ( β_i(n_i)+Γ_i(n_1,n_2,w_1,w_2,U) -D_i ·∇ n_i)= H_i(n_1,n_2) +O(ε),;; U·∇w_1-Δ w_1+α_1 w_1=β_1 n_2,; ;U·∇w_2-Δ w_2+α_2 w_2=β_2n_1,; ;∂_t U -νΔ U+ k(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0, ]. where D_i, β_i, Γ_i and H_i are given, respectively, as follows D_i =- ⟨ v ⊗θ_i(v) ⟩ =r^2/d σ_i, with θ_i is given by θ_i= - 1/σ_ivM_i(v) β_1(n_1)=-⟨θ_1n_1 M_1𝒯_1^2[n_1](M_2) ⟩=D_1/n_1(1+d_1(n_1))·∇ n_1, β_2(n_2)=-⟨θ_2n_2 M_2𝒯_2^2[n_2](M_1) ⟩=D_2/n_2(1+d_2(n_2))·∇ n_2, Γ_i(n_1,n_2,w_1,w_2,U)=-⟨θ_i M_iG_i^1(M_1 n_1, M_2 n_2,w_1,w_2,v,U) ⟩=n_iU+χ_i(n_i)∇ w_i, and H_i(n_1,n_2)= ⟨ G_i^2(M_1 n_1, M_2 n_2) ⟩=F_i(n_1,n_2), fori=1,2. Finally, collecting the previous results with U=0 and (<ref>), yields the macro-scale system(<ref>) of the order O(ε) {[∂_t n_1+U·∇ n_1- (d_1(n_1) ∇n_1) +(χ_1(n_1)∇ w_1 )=F_1(n_1,n_2)+O(ε),; ; ∂_t n_2+U·∇n_2- (d_2(n_2) ∇n_2)+ (χ_2(n_2)∇ w_2 )=F_2(n_1,n_2)+O(ε),; ;U·∇w_1-Δ w_1+α_1 w_1=β_1 n_2,; ;U·∇w_2-Δ w_2+α_2 w_2=β_2 n_1,; ; ∂_t U -νΔ U+ k(U·∇)U+ ∇ p+Q(n_1,n_2) ∇ϕ = , U=0. ]. § COMPUTATIONAL ANALYSIS IN TWO DIMENSIONS We investigate computational analysis of nonlinear cross-diffusion–fluid with chemicals system (<ref>) in two dimensional space for two interacting populations; for instance, phytoplankton and zooplankton. First, we numerically demonstrate the cross-diffusion with chemicals in the absence of the fluid (U=) by using the finite-volume method. Second, we consider the full system (U≠). We show the effect of external forces (obstacle inside the domain and the force of gravity) on the dynamics of fluid flow and simultaneously on the behavior of interacting populations by using the finite-element method. §.§ Cross-diffusion with chemicals in the absence of fluid We investigate two dimensional space computational analysis of nonlinear cross-diffusion with chemicals system (<ref>) using finite volume method. For that, we consider a family 𝔗_h of admissible meshes of the domain Ω consisting of disjoint open and convex polygons called control volumes, see <cit.>. In the rest of this subsection, we shall use the following notation: the parameter h is the maximum diameter of the control volumes in 𝔗_h. K is a generic volume in 𝔗, |K|is the 2-dimensional Lebesgue measure of K and N(K) is the set of the neighbors of K. Moreover, for all L ∈ N(K), we denote by σ_K,L the interface between K and L where L is a generic neighbor of K. η_K,L is the unit normal vector to σ_K,L outward to K. For an interface σ_K,L, |σ_K,L| will denote its 1-dimensional measure. d_K,L denotes the distance between x_K and x_L, where the points x_K and x_L are respectively the center of K and L. On the other hand, we assume that a discrete function on the mesh 𝔗_h is a set (g_K)_K∈𝔗 and we identify it with the piece-wise constant function g_h on Ω such that g_h|_K = g_K. 
Furthermore, we consider an admissible discretization of (0,T)×Ω consisting of an admissible mesh 𝔗_h of Ω and of a time step size Δ t_h > 0 (both Δ t_h and the size max_K∈ t_hdiam(K) tend to zero as h → 0). Next, we define the discrete gradient ∇_hg_h as the constant per diamond T_K,L function by (∇_hg_h)|_𝔗_K,L=∇_K,Lg_h:=g_L-g_K/d_K,Lη_K,L. Finally, we define the average of source terms F_i,K^k+1 by F_i,K^k+1=F_i(n_1(t^k,x),n_2(t^k,x)), for i=1,2. And we make the following choice to approximate the diffuse terms n_i,K,L^k+1=min{n_i,K^k+1^+,n_i,L^k+1^+}, where n_i,J^k+1^+=max(0,n_i,J^k+1) for i=1,2 and J=K,L. The computation starts from the initial cell averages n_i,0^K=1/|K|∫_Kn_i,0(x) dx for i=1,2. In order to advance the numerical solution from t^k to t^k+1 = t^k + Δ t, we use the following implicit finite volume scheme: determine n^k+1_i,K for K∈𝔗, i = 1, 2 such that {[ |K|n^k+1_1,K-n^k_1,K/Δ t-d_1∑_L ∈ N(K)|σ_K,L|/d_K,L(n_1,L^k+1-n_1,K^k+1)+∑_L ∈ N(K)|σ_K,L|/d_K,L[χ_1(n_1,K,L^k+1)(w_1,L^k+1-w_1,K^k+1)] =|K| F_1,K^k+1,;|K|n^k+1_2,K-n^k_2,K/Δ t-d_2∑_L ∈ N(K)|σ_K,L|/d_K,L(n_2,L^k+1-n_2,K^k+1)+∑_L ∈ N(K)|σ_K,L|/d_K,L[χ_2(n_2,K,L^k+1)(w_2,L^k+1-w_2,K^k+1)]=|K| F_2,K^k+1,; ∑_L ∈ N(K)|σ_K,L|/d_K,L(w_1,L^k+1-w_1,K^k+1)+α_1|K|w_1,K^k+1=β_1|K|n_2,K^k,;∑_L ∈ N(K)|σ_K,L|/d_K,L(w_2,L^k+1-w_2,K^k+1)+α_2|K|w_2,K^k+1=β_2|K|n_1,K^k ]. for all K∈𝔗_h,k∈ N_h.We consider implicitly the homogeneous Neumann boundary condition. To solve the corresponding nonlinear system arising from the implicit finite volume scheme (<ref>), we have used the Newton method. Note that the linear systems involved in Newton's method are solved by the GMRES method. For the numerical simulations, we consider uniform mesh giving by a Cartesian grid N_x = N_y = 256 and we take the following parameters a_1 =10, a_2 = 0.1, b_1 = b_2 = 2, c_1 = 0.4, c_2 = 0.01. The corresponding diffusion coefficients are given by d_i= α_i = 1, for i = 1, 2. The chemotactic sensitivity parameters are chosen by β_1=20, β_2=100. §.§.§Example 1: species the interacting via chemical substance For this numerical test, the chemotactic coefficients are χ_1 =2>0, and χ_2=-0.8<0. This matches well cross-diffusion phenomena where the predator directs its movement towards the prey, while the movement of the prey is against the presence of the predator.For the initial condition, the prey and predatorare concentrated in small pockets at a four spatial point (see Figure <ref>). In Figure <ref>, we display the numerical solution for each species at four different simulated times. Initially, at time t = 0.05, we can observe the effect of the chemotaxis for the predator feeling their prey, and the prey feeling the presence of the predator. At time t = 0.1. We notice the rapid movement of the predator towards the regions occupied by the prey. The prey moves to the regions where the predator is not located. At time t = 0.5, it is clearly seen that the predator occupy almost the entire area, while the prey moves toward (running away) the area where the predator is not located. §.§.§ Example 2: prey do not interact via chemical substances In this Example, we consider χ_1 = 2 and χ_2= 0. This means that we do not consider chemotactic movement of the prey. The predator and prey are concentrated in small pockets at a one spatial point (see Figure <ref>). We show in Figure <ref> the numerical solution for each species at four different simulations time. 
We notice the rapid movement of the predator spreads out to the areas where the prey is located, while the prey presents isotropic and homogeneous diffusion (due to the choice of the tactic coefficient). §.§.§ Example 3: spatial patterns formation We assume that the densities of species are a random perturbation around the stationary state (n_1^*,n_2^*). Consequently, the initial data are given by n_1(0,x)=n_1^*+n_1(x)_δ, n_2(0,x)=n_2^*+n_2(x)_δ,x∈Ω, where J(x)_δ∈[0,1] is a uniform distributed variable for J=n_1, n_2. The stationary state is given by <cit.> (n_1^*,n_2^*)=(a_2c_1-a_1c_2/b_2c_1-b_1c_2,a_2b_1-a_1b_2/b_1c_2-b_2c_1), where a_1=0.61,a_2=0.52,b_1=0.4575,b_2=0.31,c_1=9.5,c_2=8.2. In Figure <ref>, we observe islands of high concentration of preys are formed. This reflects the phase separation triggered by preys avoiding predator. §.§ Cross-diffusion with chemicals in the presence of fluid In this subsection, we demonstrate the external action effect on the dynamic the fluid medium, consequently on the evolution of the prey and predator densities. The spatial domain Ω corresponds to a rectangle (0, 10) × (0, 4) and contains two obstacles; see Figure <ref>. We consider system (<ref>) with the following initial and boundary conditions: {[ U(0,)=U_0, n_1(0,)=n_1,0,n_2(0,)=n_2,0, in Ω,;∂ n_1/∂η=∂ n_2/∂η=0, on Γ_1∪Γ_2∪Γ_3∪Γ_4,;n_1= n_2= 0, U(x,y)=(u=0,v=0)^T, on Γ_5∪Γ_6,;U(x,y)=(∂ u/∂η=0 , 0)^T, on Γ_1∪Γ_3,; U(x,y)=(0, ∂ v/∂η=0)^T, on Γ_2,;U(x,y)=(2 y (1-y), 0)^T, on Γ_4. ]. Here, all computations have been implemented using the software package FreeFem++ <cit.>. The code uses a finite element method based on the weak formulation of cross-diffusion with chemicals system (<ref>) in an iterative manner as follows: * Solve Navier-Stokes equations and the incompressibility condition (<ref>)_5with the Characteristic Galerkin method. We mention that we have used a classical Taylor-Hood element technique, i.e. the fluid velocity U is approximated by P2 finite elements and the pressure p is approximated by P1 finite elements. * Approximate the densities n_1 and n_2 by P2 finite elements and solve firstly Eq. (<ref>)_1, then Eq. (<ref>)_2 and finally Eqs. (<ref>)_3,4. We mention that we have used UMFPACK package and θ-scheme with θ=0.49 We recall that ∇ϕ= V_s(ρ_s - ρ_f ) gz⃗ , where V_s and ρ_s are, respectively, the volume and the density of species, ρ_f is the fluid density, and g is the gravitational force. The vector -∇ϕis the resultant of gravitational forces (P⃗ = -ρ_s V_s gz⃗ ) and the Archimedes thrust (F⃗a⃗ = ρ_fV_s gz⃗ ). In our tests, the populations are denser than the fluid and therefore a gravitational flow is created in the direction of the vector -z. We consider two cases: In the first case, we illustrate the behavior of cross-diffusion–fluid with chemicals system (<ref>) in the absence of gravitational force; that is, ∇ϕ=(0, 0). In Figure <ref>, we display the numerical simulations of the densities n_1 and n_2 of the two interacting populations and the dynamics of the fluid flow presented by the fluid velocity U and the pressure p. Initially, we observe the cross-diffusion effect; that is to say the predator directs its movement towards the region occupied by the prey, while the prey moves toward the area where the predator is not located. Next, we notice that the prey and the predator are transported in the direction of the fluid. 
Moreover, we observe that the fluid flow is not influenced by the presence of the populations in the medium; however, it is affected by the presence of the obstacle in the domain. In the second case, we assume the presence of gravitational force; that is ∇ϕ=(0,-1). Thus, we obtain the strong coupling system (<ref>). In Figure <ref>, we provide the numerical simulations of the two densities and the dynamics of the fluid flowpresented by the fluid velocity U and the pressure p. Clearly, we observe that the densities and the fluid are influenced by the presence of gravitational force. In addition, we observe also the effect of the presence of the two obstacles. § CONCLUSION AND PERSPECTIVES In this paper, a nonlinear chemotaxis–fluid system with chemical terms describing two interacting species living in a Newtonian fluid governed by the incompressible Navier-Stokes equations has been proposed. The existence of weak solutions of the proposed macro-scale system has been proved. The proof is based on Schauder fixed-point theory, a priori estimates, and compactness arguments. This system was derived from a new nonlinear kinetic–fluid model according to multiscale approach based on the micro-macro decomposition method. Several numerical simulations in two dimensional space were provided. Specifically, we showed that prey has a tendency to keep away from predator and at the same time predator has a tendency to get closer to prey. In addition, the phenomenon of pattern formation and the effects of external forces (gravity and spatial domains with two obstacles) on the dynamics of fluid flow and on the behavior of the predator-prey were demonstrated. Locking ahead, a possible perspective consists in extending the proposed macro-scale system to multiple species (e.g. three species as in <cit.>), improving our deterministic system to a stochastic system to take into account the environmental noise. Another interesting development would be the numerical analysis of the multiscale micro-macro decomposition method in two-dimensional space, for instance see <cit.>. § ACKNOWLEDGMENTThis work was done while MB visited ESTE of Essaouira at the University of Cadi Ayyad, Morocco, and he is grateful for the hospitality. cas-model2-names
http://arxiv.org/abs/2312.16092v1
{ "authors": [ "Mostafa Bendahmane", "Fahd Karami", "Driss Meskine", "Jacques Tagoudjeu", "Mohamed Zagour" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20231226153322", "title": "Mathematical analysis and multiscale derivation of a nonlinear predator-prey cross-diffusion--fluid system with two chemicals" }
http://arxiv.org/abs/2312.16701v1
{ "authors": [ "Guillaume Bal", "Jeremy Hoskins", "Solomon Quinn", "Manas Rachh" ], "categories": [ "math-ph", "cs.NA", "math.AP", "math.MP", "math.NA" ], "primary_category": "math-ph", "published": "20231227195703", "title": "Integral formulation of Dirac singular waveguides" }
AppendicessectionSec.Secs. sectionSectionSections tableTableTables tableTab.Tabs.6276CVPR 2024 Semantic-aware SAM for Point-Prompted Instance Segmentation Zhaoyang Wei^1 Equal contribution., Pengfei Chen^1*, Xuehui Yu^1*, Guorong Li^1,Jianbin Jiao^1, Zhenjun Han^1 Corresponding authors. ([email protected]) 1University of Chinese Academy of Sciences January 14, 2024 =============================================================================================================================================================================================================Single-point annotation in visual tasks, with the goal of minimizing labelling costs, is becoming increasingly prominent in research. Recently, visual foundation models, such as Segment Anything (SAM), have gained widespread usage due to their robust zero-shot capabilities and exceptional annotation performance. However, SAM's class-agnostic output and high confidence in local segmentation introduce semantic ambiguity, posing a challenge for precise category-specific segmentation.In this paper, we introduce a cost-effective category-specific segmenter using SAM. To tackle this challenge, we have devised a Semantic-Aware Instance Segmentation Network (SAPNet) that integrates Multiple Instance Learning (MIL) with matching capability and SAM with point prompts. SAPNet strategically selects the most representative mask proposals generated by SAM to supervise segmentation, with a specific focus on object category information. Moreover, we introduce the Point Distance Guidance and Box Mining Strategy to mitigate inherent challenges: group and local issues in weakly supervised segmentation. These strategies serve to further enhance the overall segmentation performance. The experimental results on Pascal VOC and COCO demonstrate the promising performance of our proposed SAPNet, emphasizing its semantic matching capabilities and its potential to advance point-prompted instance segmentation. The code will be made publicly available. § INTRODUCTIONInstance segmentation seeks to discern pixel-level labels for both instances of interest and their semantic content in images, a crucial function in domains like autonomous driving, image editing, and human-computer interaction. Despite impressive results demonstrated by various studies <cit.> , the majority of these high-performing methods are trained in a fully supervised manner and heavily dependent on detailed pixel-level mask annotations, thereby incurring significant labeling costs. To address this challenge, researchers are increasingly focusing on weakly supervised instance segmentation, leveraging cost-effective supervision methods, such as bounding boxes <cit.>, points <cit.>, and image-level labels <cit.>. Recently, visual foundation models, such as Segment Anything (SAM), have been widely employed by researchers for their exceptional generalization capabilities and impressive annotation performance. Numerous studies based on SAM, such as <cit.> have emerged, building upon the foundations of SAM to further enhance its generalization capabilities and efficiency. However, these efforts have predominantly focused on improving the annotation performance of SAM. One limitation arises from SAM's lack of classification ability, resulting in class-agnostic segmentation results that fail to accurately segment specific categories as desired. 
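To make this limitation concrete, the snippet below shows roughly what point-prompting SAM looks like with the public segment_anything package, as we understand its interface: each annotated point yields several class-agnostic masks with quality scores, and simply keeping the arg-max score gives a label-free mask that often covers only a part (or a group) of the intended object. The checkpoint path, image file and annotated points are placeholders.

```python
# Naive point-prompt baseline: keep SAM's highest-scoring mask per annotated point.
# Sketch only -- checkpoint and image are placeholders; the returned scores are
# class-agnostic, so the top-scoring mask need not match the annotated category.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")   # placeholder checkpoint
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
predictor.set_image(image)

# One annotated point per instance: (x, y) plus its category label.
annotations = [((230, 180), "person"), ((410, 200), "dog")]

pseudo_masks = []
for (x, y), category in annotations:
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]], dtype=np.float32),
        point_labels=np.array([1]),      # 1 = foreground point
        multimask_output=True,           # SAM returns masks at several granularities
    )
    best = int(np.argmax(scores))        # class-agnostic score; may select an object part
    pseudo_masks.append((category, masks[best]))
```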
To tackle the inherent semantic ambiguity in SAM and achieve specific-category segmentation, we propose integrating weak annotations with SAM, employing point annotations as prompts to imbue semantic information into SAM's outputs. A straightforward approach involves leveraging SAM's intrinsic scoring mechanism, selecting the top-scoring mask as the corresponding label for each category. However, when annotating object points are fed into the SAM, its category-agnostic characteristic tends to assign higher scores to parts of the object, resulting in generated mask annotations that fail to encompass the object as a whole. In Fig. <ref> orange dashed box, we aim to obtain the `person' mask annotation, but SAM predicts the proposals of `clothes', `clothes+trousers' and 'person'. Relying solely on the score SAM provides is insufficient, as the highest score corresponds to `clothes' (col-2), which does not meet our specific needs.To address this challenge, we have proposed SAPNet, a semantically-aware instance segmentation network designed for high-quality, end-to-end segmentation.In this study, we design a proposal selection module (PSM) using the Multiple Instance Learning (MIL) paradigm to choose proposals that align closely with the specified semantic label. However, the MIL-based method relies on the classification score, often leading to group and local predictions <cit.>. In Fig. <ref> green dashed box, the group issue is evident, where two objects of the same category are often both included when they are in close proximity. It also illustrates the local issue, where the MIL classifier frequently predicts the most discriminative region instead of the entire object.To overcome these limitations, we have introduced Point Distance Guidance (PDG) and Box Mining Strategies (BMS). Specifically, we penalize the selection results by calculating the Euclidean distances between the annotated points of identical categories enclosed within the proposals. Additionally, for more localized proposals, we filter out higher-quality proposals from their corresponding bags and dynamically merge them in scale. By fully exploiting the positional clues to prevent local and group prediction, we aim to select the proposal that most effectively represents the object category in refinement stage. The primary contributions of this work can be outlined as follows:1) We introduce SAPNet, an end-to-end semantic-aware instance segmentation network based on point prompts. SAPNet combines the visual foundation model SAM with semantic information to address its inherent semantic ambiguity, facilitating the generation of semantically-aware proposal masks.2) We incorporate Point Distance Guidance (PDG) and Box Mining Strategies (BMS) to prevent local and group predictions induced by MIL-based classifiers in both the proposal selection and refinement stages.3) SAPNet achieves state-of-the-art performance in Point-Prompted Instance Segmentation (PPIS), significantly bridging the gap between point-prompted and fully-supervised segmentation methods on two challenging benchmarks (COCO and VOC2012).§ RELATED WORKWeakly-Supervised Instance Segmentation (WSIS) offers a practical approach for accurate object masks using minimal supervision. It spans a range of annotations, from image labels to bounding boxes. Research has focused on narrowing the performance gap between weakly and fully-supervised methods, primarily through box-level <cit.> and image-level annotations <cit.>. 
Box-based methods have explored structural constraints to guide the segmentation, as seen in BBTP <cit.> and BoxInst <cit.>, and applied structural constraints to drive segmentation, treating it as a multiple-instance learning task or enforcing color consistency based on CondInst <cit.>. These approaches, while innovative, can complicate training and sometimes neglect the object's overall shape due to their focus on local features and proposal generation, like MCG <cit.>. Conversely, the proposal-free methods, like IRN <cit.>, rely on class relationships for mask production but can falter in accurately separating instances. To preserve object integrity, recent methods such as Discobox <cit.> and BESTIE <cit.> integrate advanced semantic insights into instance segmentation using pairwise losses or saliency cues <cit.>. However, semantic drift remains an issue, with mislabeling or missed instances resulting in inferior pseudo labels <cit.> compromising segmentation quality.Pointly-Supervised Detection and Segmentation (PSDS) cleverly balances minimal annotation costs with satisfactory localization accuracy. By introducing point annotations, WISE-Net <cit.> , P2BNet <cit.>and BESTIE <cit.> improve upon weakly supervised methods that suffer from vague localizations. That only slightly increases the costs (by about 10%) and is almost as quick as the image-level annotation, but that is far speedier than more detailed bounding box or mask annotations. Such precision allows for tackling semantic bias, as seen in methods like PointRend <cit.>, which utilize multiple points for improved accuracy, despite requiring additional bounding box supervision. Recent advancements in point-supervised instance segmentation, employed by WISE-Net and Point2Mask <cit.>, show that even single-point annotations can yield precise mask proposals. WISE-Net skillfully localizes objects and selects masks, while BESTIE enhances accuracy using instance cues and self-correction to reduce semantic drift. Attnshift <cit.> advances this by extending single points to reconstruct entire objects. Apart from their complexity, these methods have yet to fully demonstrate their effectiveness, indicating ongoing challenges in harnessing single-point annotations for image segmentation and presenting clear avenues for further research.Prompting and Foundation Models. Prompt-based learning enables pretrained foundation models to adapt to various tasks using well-crafted prompts. SAM <cit.>, a prominent example in computer vision, exemplifies robust zero-shot generalization and interactive segmentation across multiple applications. Additionally, SAM-based models like Fast-SAM <cit.> increases speed, HQ-SAM <cit.> improves segmentation quality, and Semantic-SAM <cit.> optimizes performance by training on diverse data granularities. Foundational models, pre-trained on large datasets, help improve generalization in downstream tasks, especially in data-scarce scenarios. Basing on SAM, Rsprompter <cit.> utilizes SAM-derived pseudo labels for improved remote sensing segmentation, meanwhile, adaptations for medical imaging and video tracking are explored in A-SAM <cit.> and Tracking Anything <cit.>. Further, <cit.> and <cit.> have integrated SAM with Weakly Supervised Semantic Segmentation networks to refine pseudo labels. Our research builds upon these innovations, transforming point annotations into mask proposals in instance segmentation to significantly enhancing performance. 
§ METHODOLOGY §.§ OverviewThe overview of our method is illustrated in Fig. <ref>, SAPNet comprises of two branches: one dedicated to the selection and refinement of mask proposals to generate pseudo-labels and the other employing solov2 head <cit.> for instance segmentation supervised by the generated pseudo labels. The central focus of our approach is the pseudo-label generation branch, exclusively utilized during the training phase, which includes the PSM, PNPG, and PRM modules. Following the initial proposal inputs, the PSM module employs multi-instance learning and a point-distance penalty to identify semantically rich proposals. Subsequently, coupled with selected proposals from the PSM stage, the PNPG module generates quality positive-negative bags to mitigate background and locality issues, emphasizing the primary regions of interest. Then, the PRM module processes these bags, which selects refined proposals from positive bags to improve final box quality. Ultimately, the mask mappings derived from these box proposals are utilized to guide the segmentation branch. This guarantees the acquisition of high-quality category-specified mask proposals to supervise the segmentation branch. §.§ Proposal Selection Module SAM's limited semantic discernment causes category-agnostic labeling, leading to inconsistent proposal quality for the same objects. Employing these proposals directly for segmentation supervision could introduce noise and impair performance. Our goal is to design a category-specific segmenter, which needs to select the most semantically representative proposals for robust supervision.Motivated by the insights from WSDDN <cit.> and P2BNet <cit.>, our proposal selection module employs multi-instance learning and leverages labeling information to prioritize high-confidence proposals for segmentation.In the training phase, we leverage SAM<cit.> solely to generate category-agnostic proposals. To avoid excessive memory use and slow training, we convert them into box proposals using the minimum bounding rectangle, and combine with depth features F ∈ℝ^H × W × D from the image I ∈ℝ^H × W, serve as input to the PSM. Utilizing our designed MIL loss, PSM precisely predicts each proposal's class and instance details. It selects the highest-scoring proposal as the semantically richest bounding box for each object, effectively choosing higher quality mask proposals.Given an image I with N point annotations Y_n = { (p_i, c_i) }_i=1^N, where p_i is the coordinate of the annotated point and c_i is the class index. We transform each class-informative point p_i into M semantic mask proposals, which is further converted to a semantic proposal bag B_i ∈ℝ^M × 4. As illustrated in Fig. <ref>, after passing through a 7x7 RoIAlign layer and two fully-connected layers, features F_i ∈ℝ^M × H × W × D are extracted from proposal bag B_i. Like in <cit.> and <cit.>, the features F serve as input for the classification branch and instance branch, using fully-connected layer f and f' to generate 𝐖_cls∈ℝ^M × K and 𝐖_ins∈ℝ^M × K. A softmax activation function over K classand M instance dimensions yields the classification scores 𝐒_cls∈ℝ^M × K and instance scores 𝐒_ins∈ℝ^M × K. 𝐖_cls = f(𝐅);[𝐒_cls]_mk = e^[𝐖_cls]_mk/∑_k=1^K e^[𝐖_cls]_mk. 𝐖_ins = f'(𝐅);[𝐒_ins]_mk = e^[𝐖_ins]_mk/∑_m=1^M e^[𝐖_ins]_mk.where [·]_mk is the value in row m and column k of matrix.Point Distance Guidance. 
SAM and MIL struggle with distinguishing adjacent objects of the same category, often merging two separate objects into one and giving high score. To combat this, we incorporate instance-level annotated point information and introduce a spatially aware selection with a point-distance penalty mechanism.To address the challenge of overlapping objects and thereby enhance model optimization, we propose a strategy specifically aimed at penalizing instances of object overlap. For each m-th proposal within the set B_i, we define t_mj=1 to denote an overlap with any proposal in another identical class bag B_j; otherwise, t_mj=0. The penalty imposed increases in proportion to the distance of the overlapping objects from the proposal in question. This penalty, W_dis, is represented using the Euclidean distance between the annotated points of the overlapping proposals. Subsequently, the reciprocal of W_dis is then passed through a sigmoid function to compute the distance score 𝐒_dis for the proposal.[𝐖_dis]_im=∑_j=1,j ≠ i^Np_i-p_j * t_mj.[𝐒_dis]_im = (1 / e^-(1/[𝐖_dis]_im))^d.where [·]_im is the value at the row i and column m in the matrix, and d is the exponential factor.PSM Loss.The final score 𝐒 of each proposal is obtained by computing the Hadamard product of the classification score, the instance score, and the distance score, while the score 𝐒 for each proposal bag B_i is obtained by summing the scores of the proposals in B_i. The MILloss of the PSM is constructed using the form of binary cross-entropy, and it is defined as follows: 𝐒=𝐒_cls⊙𝐒_ins⊙𝐒_dis∈ℝ^M × K; 𝐒= ∑_m=1^M [𝐒]_m ∈ℝ^K. ℒ_psm = CE(𝐒, 𝐜) =-1/N∑_n=1^N∑_k=1^K𝐜_k log(𝐒_k) + (1-𝐜_k)log(1-𝐒_k)where 𝐜∈{0, 1}^K is the one-hot category's label. Utilizing the MILloss, the PSM module skillfully identifies each proposal's category and instance. The module selects the proposal with the highest score, marked as 𝐒, for a specific object and identifies a bounding box enriched with semantic information. §.§ Positive and Negative Proposals GeneratorTo further refine the selection of more accurate bounding boxes, we employ PNPG module based on box_psm selected via PSM. That consists of two components: PPG and NPG. The PPG is designed to generate a richer set of positive samples, enhancing bag's quality. Concurrently, the NPG is responsible for generating negative samples, which are crucial for assisting model training. These negative samples, including background samples for all objects and part samples for each, are crucial in resolving part issues and ensuring high-quality bounding box selection. The positive sample set B^+ produced by PPG and the negative sample set 𝒰 generated by NPG are utilized for training the subsequent PRM.Positive Proposals Generator (PPG). Within this phase, to implement adaptive sampling for the identified bounding box, we capitalize on the box_psm derived from the PSM stage, coupled with the point distance penalty score 𝐒_dis attributed to each proposal. To further elaborate, for each box_psm (denoted as b_x^*, b_y^*, b_w^*, b_h^*) isolated during the PSM phase, its dimensions are meticulously recalibrated leveraging a scale factor v and its associated within-category inclusion score 𝐒_dis to generate an augmented set of positive proposals (b_x, b_y, b_w, b_h). 
The formulation is defined as follows:b_w = (1 ± v / 𝐒_dis) · b^*_w,b_h = (1 ± v / 𝐒_dis) · b^*_h, b_x = b^*_x ± (b_w - b^*_w)/2 ,b_y = b^*_y ± (b_h - b^*_h)/2.These newly cultivated positive proposals are carefully integrated into the existing set B_i to enhance the positive instances' pool. Such enhancements are pivotal in optimizing the training of the forthcoming PRM.Negative Proposals Generator(NPG). MIL-based selection within a single positive bag may overemphasize the background noise, leading to inadequate focus on the object. To solve this, we create a negative bag from the background proposals post-positive bag training, which helps MIL maximize the attention towards the object.Considering the image dimensions, we randomly sample proposals according to each image's width and height, for negative instance sampling. We assess the Intersection over Union (IoU) between these negatives and the positive sets, filtering out those below a threshold T_neg1.Additionally, to rectify MIL localization errors, we enforce the sampling of smaller proposals with an IoU under a second threshold, T_neg2, from inside box_psm based on its width and height, that is scored highest in PSM, as negative examples. These negative instances, partially capturing the object, drive the model to select high-quality bounding boxes that encompass the entire object. The PNPG is systematically elaborated upon in Algorithm<ref>. §.§ Proposals Refinement ModuleIn the PSM phase, we employ MIL to select high-quality proposals from bag B^+. However, as shown in Fig. <ref>, the box_psm outcomes derived solely from a single-stage MIL are suboptimal and localized. Inspired by PCL <cit.>, we consider refining the proposals in a second phase. However, in contrast to most WSOD methods which choose to continue refining using classification information in subsequent stages, we have established high-quality positive and negative bags, and further combined both classification and instance branches to introduce the PRM module to refine the proposals, aiming to obtain a high-quality bounding box.The PRM module, extending beyond the scope of PSM, focuses on both selection and refinement. It combines positive instances from the PPG with the initial set, forming an enriched B^+. Simultaneously, it incorporates the negative instance set 𝒰 from NPG, providing a comprehensive foundation for PRM. This integration leads to a restructured MIL loss in PRM, replacing the conventional CELoss with Focal Loss for positive instances. The modified positive loss function is as follows:ℒ_pos= 1/N∑_i=1^N< 𝐜^T_i, 𝐒_i > · FL(𝐒^*_i, 𝐜_i).where FL is the focal loss <cit.>, 𝐒^*_i and 𝐒_i represent the bag score predicted by PRM and PSM, respectively. < 𝐜^T_i, 𝐒_i > represents the inner product of the two vectors, meaning the predicted bag score of the ground-truth category. Enhancing background suppression, we use negative proposals and introduce a dedicated loss for these instances. Notably, these negative instances pass only through the classification branch for instance score computation, with their scores derived exclusively from classification. The specific formulation of this loss function is detailed below: β =1/N∑_i=1^N< 𝐜^T_i, 𝐒_i >, ℒ_neg = - 1/|𝒰|∑_𝒰∑_k=1^Kβ· ([𝐒^cls_neg]_k)^2log(1-[𝐒^cls_neg]_k).The PRM loss consists of the MIL loss ℒ_pos for positive bags and negative loss ℒ_neg for negative samples, ,ℒ_prm = αℒ_pos+(1-α) ℒ_neg,where α=0.25 by default.Box Mining Strategy. 
MIL's preference for segments with more foreground presence and SAM's tendency to capture only parts of an object often bring to final bounding boxes, box_prm, the `local' issue of MIL inadequately covers the instances. To improve the bounding box quality, we introduce a box mining strategy that adaptively expands box_selectfrom proposal selection in PRM, by merging it with the original proposals filter, aiming to address MIL's localization challenges.The Box Mining Strategy (BMS) consists of two primary components: (i) We select the top k proposals from the positive proposal bag B^+, to create a set G. We evaluate the proposals in G against box_select based on IoU and size, using a threshold T_min1. Proposals larger than box_select and with an IoU above T_min1 undergo dynamic expansion through IoU consideration, which allows for the adaptive integration with box_select. That mitigates the 'local' issue and maintains the bounding box's consistentcy to the object's true boundaries. (ii) Frequently, issues related to locality can lead to an exceedingly low IoU between proposals and box_select. Nonetheless, the ground truth box can fully encompass the box_part. Therefore, when component (i) conditions are unmet, if a proposal can entirely encapsulate box_select, we reset the threshold T_min2. Proposals surpassing this threshold adaptively merge with box_select to generate the final box_prm,used to yield Mask_prm. These two components collectively form our BMS strategy. A detailed procedure of this approach will be delineated in Algorithm<ref>.Loss Function. After acquiring the final supervision masks, Mask_prm and the filtered Mask_sam in Multi-mask Proposals Supervision(MPS) <ref>, we use them together to guide the dynamic segmentation branch. To comprehensively train SAPNet, we integrate the loss functions from the PSM and PRM, culminating in the formulation of the total loss for our model, denoted as L_total. The aggregate loss function, L_totalcan be articulated as:ℒ_total=ℒ_mask + ℒ_cls + λ·ℒ_psm+ ℒ_prmwhere, ℒ_Dice is the Dice Loss <cit.>, ℒ_cls is the Focal Loss<cit.>, and λ is set as 0.25.§ EXPERIMENT§.§ Experimental SettingsDatasets. We use the publicly available MS COCO<cit.> and VOC2012SBD <cit.> datasets for experiments. COCO17 has 118k training and 5k validation images with 80 common object categories. VOC consists of 20 categories and contains 10,582 images for model training and 1,449 validation images for evaluation.Evaluation Metric.We use mean average precision mAP@[.5,.95] for the MS-COCO. The { AP,AP_50,AP_75,AP_Small,AP_Middle,AP_Large} is reported for MS-COCO and for VOC12SBD segmentation, and we report AP_25, 50, 75. The mIoU_box is the average IoU between predicted pseudo-boxes and GT-boxes in the training set. It measures SAPNet's ability to select mask proposals without using the segmentation branch.Implementation Details. In our study, we employed the Stochastic Gradient Descent (SGD) optimizer, as detailed in <cit.>. Our experiments were conducted using the mmdetection toolbox <cit.>, following standard training protocols for each dataset. We used the ResNet architecture <cit.>, pretrained on ImageNet <cit.>, as the backbone. For COCO, batch size was set at four images per GPU across eight GPUs, and for VOC2012, it was four GPUs. More details of the experiment are in <ref>§.§ Experimental ComparisonsTab. 
<ref> shows the comparison results between our method and previous SOTA approaches <cit.> on COCO.In our experiments, we provide SAM with both the labeled points and the annotations generated by the point annotation enhancer <cit.>. SAM then utilizes these inputs to generate subsequent mask proposals for selection and supervision. For fair comparison, we design two baselines: the top-1 scored mask from SAM and MIL-selected SAM mask proposals are used as SOLOv2 supervision, respectively. Tab. <ref> shows our method substantially surpasses these baselines in performance.Comparison with point-annotated methods. Our approach achieves a 31.2 AP performance with a ResNet-50 backbone, surpassing all previous point-annotated methods, including BESTIE on HRNet-48 and AttnShift on Vit-B. Our model exhibits significant improvements under a 1x training schedule, with a 13.5 AP increase when compared to the previous SOTA method, BESTIE. Furthermore, under a 3x training schedule, SAPNet outperforms AttnShift, which relies on large model training, with 13.4 AP, improvements. Importantly, our method is trained end-to-end without needing post-processing, achieving SOTA performance in point-annotated instance segmentation.Comparison with other annotation-based methods.Our SAPNet has significantly elevated point annotation, regardless of point annotation's limitations in annotation time and quality compared to box annotation. Utilizing a ResNet-101 backbone and a 3x training schedule, SAPNet surpasses most box-annotated instance segmentation methods, achieving a 1.4 AP improvement over BoxInst. Moreover, SAPNet's segmentation performance nearly matches the mask-annotated methods, effectively bridging the gap between point-annotated and these techniques.Segmentation performance on VOC2012SBD. Tab. <ref> compares segmentation methods under different supervisions on the VOC2012 dataset. SAPNet reports an enhancement of 7.7 AP over the AttnShift approach, evidencing a notable advancement in performance. Thereby, it significantly outstrips image-level supervised segmentation methods. Additionally, SAPNet surpasses box-annotated segmentation methods, such as BoxInst by 3.4 AP_50 and DiscoBox by 32.6 AP_50. Further, our point-prompted method achieves 92.3% of the Mask-R-CNN. §.§ Ablation StudiesMore experiments have been conducted on COCO to further analyze SAPNet's effectiveness and robustness.Training Stage in SAPNet. The ablation study of the training stage is given in Tab. <ref>.We trained solov2 using the top-1 scored mask provided by SAM and compared it to the two training strategies of SAPNet. In the two-stage approach, the segmentation branch and multiple-mask supervision of SAPNet are removed. Instead, we use the selected mask to train a standalone instance segmentation model, as described by <cit.>. The end-to-end training method corresponds to the architecture illustrated in Fig. <ref>. Our findings indicate that our method is more competitive than directly employing SAM (31.2 AP vs 24.6 AP), and the visualization of Fig. <ref> shows us this enhancement. Moreover, the end-to-end training strategy boasts a more elegant model structure and outperforms the two-stage approach in overall efficiency (31.2 AP vs 30.18 AP).Effect of Each Component. Given the limited performance of SAM-top1, we opted for the single-MIL as our baseline. With a preliminary selection using MIL1, we have achieved a segmentation performance of 26.8 AP.i) Point Distance Guidance. 
We updated the proposal scores from the existing MIL by integrating the PDG module into the foundational MIL selection. This approach successfully segments adjacent objects of the same category, improving the segmentation performance by 0.7 points (27.5 vs 26.8).ii) MIL2. Building on the previous step, we incorporate a second MIL selection module to refine the initially selected boxes, resulting in a performance increment of 0.2 points. iii) PNPG. For MIL2, we devised the positive-negative sample sets, aiming to enhance the input quality for the PRM module and use the negative samples to suppress background. This adjustment leads to a segmentation performance boost of 2 points (29.7 vs 27.7). iv) BMS. Within the PRM, we refine the selected boxes using BMS, pushing the segmentation performance up by 1.1 points (30.8 vs 29.7). v) MPS. Utilizing MPS for segmentation branch supervision yields a 0.4-point performance improvement. Threshold of BMS.For point refinement, there are two constraints (described in Sec. <ref>). T_min1 and T_min2 are thresholds of the Box Mining Strategy.In Tab. <ref>, it shows that the two constraints together to obtain performance gain. After multiple experiments, we have found that there is a significant performance improvement when T_min1 and T_min2 are set to 0.6 and 0.3, respectively.Components of PNPG. Tab. <ref> presents the results of a dissected ablation study on the Positive and Negative Proposals Generator(PNPG), illustrating the respective impacts of the positive and negative examples on the model's performance. It is evident that the construction of negative examples plays a significant role in enhancing model efficacy. Furthermore, the beneficial effects of both positive and negative examples are observed to be cumulative.Performance Analysis.As presented in Tab. <ref>, we conducted a statistical analysis to further validate SAPNet's capability to address the issue of part selection and compare the outcomes selected by the single-MIL with those obtained by SAPNet in the absence of segmentation branch integration. Specifically, the part problem generated by the single-MIL, where MIL is inclined to select proposals with a higher proportion of foreground, is exemplified in Fig. <ref>. On this premise, we initially establish an evaluative criterion R_v = area_mask/area_box, which is the ratio of the mask area to the bounding box area. Subsequently, we compute R_v_i for each proposal within the proposal bag corresponding to every instance across the entire COCO dataset and select the maximum R_v_max to compute the mean value over the dataset, which is then designated as the threshold T_rv. Ultimately, we identify the ground truth R_v_gt and objects where R_v_max exceeds T_rv and calculates the discrepancy between R_v values selected by single-MIL and SAPNet. The description is as follows:Gap_single = Rv_single - Rv_gt,Gap_our = Rv_our - Rv_gt.Tab. <ref> shows that the proposed SAPNet mitigates the locality issue faced by the single-MIL. Furthermore, the boxes selected via SAPNet exhibit a substantially higher IoU with GT than those selected by the single-MIL.§ CONCLUSIONIn this paper, we propose SAPNet, an innovative end-to-end point-prompted instance segmentation framework. 
SAPNet transforms point annotations into category-agnostic mask proposals and employs dual selection branches to elect the most semantic mask for each object, guiding the segmentation process.To address challenges such as indistinguishable adjacent objects of the same class and MIL's locality bias, we integrate PDG and PNPG, complemented by a Box Mining Strategy for enhanced proposal refinement. SAPNet uniquely merges segmentation and selection branches under multi-mask supervision, significantly enhancing its segmentation performance. Extensive experimental comparisons on VOC and COCO datasets validate the SAPNet's effectiveness in point-prompted instance segmentation.ieeenat_fullname Appendix § VISUALIZATION AND ABLATION STUDIES ON COCOVisualization. Our proposed SAPNet significantly mitigates the semantic ambiguity inherent in SAM. In Figure <ref>, revealing SAM's limitations in discerning semantic significance from point prompts (the category indicated by each point). With SAPNet's refinement, each object is equipped with a mask that accurately represents its semantic category. The fidelity of masks selected by SAPNet is distinctly higher than the top-1 masks from SAM. To illustrate the advantages of SAPNet over the single-MIL approach, Fig. <ref> and Fig. <ref> contrast the outcomes on local segmentation problems and the scenario involving proximate objects of the same class, respectively. Post-selection and refinement via SAPNet markedly alleviate the local segmentation issues, with the chosen masks encompassing the entirety of the objects. Furthermore, by incorporating point distance guidance, SAPNet achieves commendable segmentation results even with adjacent objects of the same class, successfully isolating the masks pertinent to each specific object.Ablation studies for negative proposals. In table <ref>, we evaluate the effect of different threshold settings on the final segmentation performance of SAPNet when negative examples are generated in the NPG on the coco dataset, and it can be seen that the two threshold pairs have less effect on the final segmentation performance with different settings, and the module is robust to hyperparameters.§ DETECTION AND SEGMENTATION PERFORMANCE OF SAPNETDetection performance on COCO17 and VOC2012SBD. As shown in table <ref>, we conducted an extensive comparison of our proposed methodology against a range of detection methods, encompassing full, image-level, and point supervision, utilizing the COCO and VOC datasets. On the COCO dataset, our method demonstrates a notable improvement over the current SOTA P2BNet <cit.>, with a substantial increment of 10.4 AP, culminating in a score of (32.5 AP vs 22.1 AP). Moreover, under a training schedule extended to 3x, the detection efficacy of SAPNet equates to that of the fully-supervised FPN <cit.>. Within the detection of the VOC dataset, our approach exceeds the previous SOTA by 8.0 AP_50, approximating 91% efficacy of the fully-supervised FPN, also under a 3x training schedule. We observe that the image-level methods significantly underperform the point-supervised methods on the challenging COCO dataset, achieving only 36% of the performance of the fully-supervised approaches. This distinctly accentuates the advantageous that point-supervised methods, optimizing the trade-off between the economy of annotation efforts and the robustness of detection performance. § SEGMENTATION BRANCH Multi-mask Proposals Supervision. We utilize SOLOv2<cit.> as our segmentation branch for its efficiency. 
Alongside using Mask_prm from the box_prm mapping for segmentation supervision, we integrated a proposal filter, which ranks proposals in H ∈ℝ^N × M based on PRM scores, extracting initial masks to yield Mask_sam. Combined with Mask_prm, these guide the segmentation network. The loss function is as follows:ℒ_mask= ℒ_Dice(Mask_pre,Mask_prm) + γ·ℒ_Dice(Mask_pre,Mask_sam)where ℒ_Dice is the Dice Loss <cit.>, γ is set as 0.25.Inference. For SAPNet's inference process, only the segmentation branch is retained after training, which is identical to the original instance segmentation model<cit.>. Given an input image, mask predictions are directly produced via an efficient Matrix-NMS technique. The pseudo-mask selection procedures of PSM, PRM, and other MIL-based modules only introduce computational overhead during training; they are entirely cost-free during inference. § VISUALIZATION AND ABLATION STUDIES ON COCO Implementation Details. On COCO and VOC2012SBD datasets, the initial learning rates were 1.5 × 10^-2 and 2 × 10^-3, respectively, reduced by a factor of 10 at the 8th and 10th epochs. In PDG, the exponential factor d was set to 0.015. For NPG, the IoU threshold T_neg1 and T_neg2 were 0.3 and 0.5, respectively. For BMS, the k was set to 3. From the mask bag, we selected masks with the highest, medium, and lowest scores outside of Mask_prm for MPS to accelerate the convergence of segmentation. Training spanned 12 epochs. Single-scale evaluation (1333 × 800) was used for the 1x schedule. For the 3x schedule, a multi-scale training approach was adopted, with the image's short side resized between 640 and 800 pixels (in 32-pixel increments) and MPS will be removed . Inference was conducted using single-scale evaluation.Visualization. The proposed SAPNet significantly mitigates SAM's semantic ambiguity. Fig. <ref> reveals SAM's limitations in discerning semantic significance from point prompts (the category indicated by each point). With SAPNet's refinement, each object is equipped with a mask that accurately represents its semantic category. The fidelity of masks selected by SAPNet is distinctly higher than that of top-1 masks from SAM.To illustrate the advantages of SAPNet over the single-MIL approach, Fig. <ref> and Fig. <ref> contrast the outcomes of the local segmentation problems and the scenario involving proximate objects of the same class, respectively. With the chosen masks encompassing the entirety of the objects, post-selection and refinement via SAPNet markedly alleviate the local segmentation issues. Furthermore, by incorporating point distance guidance, SAPNet achieves commendable segmentation results even with the adjacent objects of the same class, successfully isolating the masks pertinent to each specific object.Fig. <ref> shows additional instance segmentation results of SAPNet on the COCO dataset, demonstrating superior segmentation performance both in the case of individual large objects and in denser scenes. Our method exhibits outstanding capabilities in segmenting singular large targets as well as operating effectively in complex, crowded environments.Ablation studies for negative proposals. In Tab. <ref>, we evaluate the effect of different threshold settings on SAPNet's final segmentation performance when the negative examples are generated in the NPG on the COCO dataset. 
We find that the two threshold pairs have less effect on the final segmentation performance with different settings, and the module is robust to hyperparameters.§ DETECTION AND SEGMENTATION PERFORMANCE OF SAPNETDetection performance on COCO17 and VOC2012SBD. As shown in Tab. <ref>, utilizing the COCO and VOC datasets, we conduct an extensive comparison of the proposed methodology against a range of detection methods, encompassing fully, image-level, and point annotation. On the COCO dataset, our method demonstrates a notable improvement over the current SOTA P2BNet <cit.>, with a substantial increment of 10.4 AP and culminating in a score of (32.5 AP vs 22.1 AP). Moreover, under a training schedule extended to 3x, the detection efficacy of SAPNet equates to that of the fully-annotated FPN <cit.>. Also, under a 3x training schedule, within the detection of the VOC dataset, our approach exceeds the previous SOTA by 8.0 AP_50, approximating 91% efficacy of the fully-annotated FPN. We observe that the image-level methods significantly underperform that of the point-annotated methods on the challenging COCO dataset, achieving only 36% of the performance of the fully-annotated approaches. That distinctly accentuates the advantageous that point-prompted methods, optimizing the trade-off between the economy of annotation efforts and the robustness of detection performance.
http://arxiv.org/abs/2312.15895v1
{ "authors": [ "Zhaoyang Wei", "Pengfei Chen", "Xuehui Yu", "Guorong Li", "Jianbin Jiao", "Zhenjun Han" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226055644", "title": "Semantic-aware SAM for Point-Prompted Instance Segmentation" }
a]Ranveer Kumar Singh, b]Madhav Sinha [a]New High Energy Theory Center, Department of Physics and Astronomy, Rutgers University, 126 Frelinghuysen Rd., Piscataway NJ 08855, USA [b]Department of Physics and Astronomy, Rutgers University, 126 Frelinghuysen Rd., Piscataway NJ 08855, USA [email protected] [email protected] Frenkel, Lepowsky, and Meurman constructed a vertex operator algebra (VOA) associated to any even, integral, Euclidean lattice.In the language of physics, these are examples of chiral conformal field theories (CFT). In this paper, we define non-chiral vertex operator algebra and some associated notions. We then give a construction of a non-chiral VOA associated to an even, integral, Lorentzian lattice and construct their irreducible modules. We obtain the moduli space of such modular invariant non-chiral VOAs based on self-dual Lorentzian lattices of signature (m,n) assuming the validity of a technical result about automorphisms of the lattice. We finally show that Narain conformal field theories in physics are examples of non-chiral VOA. Our formalism helps us to identify the chiral algebra of Narain CFTs in terms of a particular sublattice and break its partition function into sum of characters.Non-Chiral Vertex Operator Algebra Associated To Lorentzian Lattices And Narain CFTs [====================================================================================§ INTRODUCTIONThe study of vertex (operator) algebras started with the work of Borcherds <cit.> and Frenkel, Lepowsky, and Meurman <cit.> in relation to Monstrous Moonshine and two dimensional conformal field theory. The first non-trivial examples of the theory were constructed starting from an even, integral Euclidean lattice, see <cit.> for a more physical construction. These are examples of what are called chiral conformal field theory in physics, see Subsection <ref> for an introduction. There are ample examples of non-chiral conformal field theories which cannot be described mathematically in the language of vertex operator algebra. One large class of such theories is the Narain CFTs based on Lorentzian lattices.In this paper, we define the notion of non-chiral vertex operator algebra and study various related notions. Our definition is based on the notion[See also <cit.> for some earlier related but different discussion on non-chiral VOAs.] of full field algebras introduced by Huang and Kong in <cit.> and full vertex algebras introduced by Moriwaki in <cit.>. We use formal calculus as well as complex analysis to formulate our axioms. We replace the Jacobi identity axiom of vertex (operator) algebras by a locality axiom which is general enough to imply the duality and hence operator product expansion of vertex operators. Various well-known examples of non-chiral conformal field theories in physics are examples of our definition. More concretely, we construct a class of examples of non-chiral VOAs based on Lorentzian lattices which cover the moduli space of Narain CFTs as examples of our definition. We then define modules and intertwiners of non-chiral VOAs on the lines of<cit.>. In the rest of this section, we describe a dictionary between non-chiral VOAs of this paper and the notion of (non-chiral) conformal field theory in physics.§.§ The dictionary from non-chiral VOA to non-chiral CFTsIn this section, we give a dictionary between conformal field theories as generally understood in physics and the non-chiral vertex operator algebra definition in this paper. 
This dictionary, for the case of chiral conformal field theory, is well-known to experts. We extend it to the case of non-chiral VOA. We start by describing the defining data of a conformal field theory in physics.§.§.§ Physical definition of a CFTWe begin with the Belavin-Polyakov-Zamalodchikov (BPZ) definition of a conformal field theory <cit.>. We follow <cit.> for this exposition. A bosonic CFT is an inner product space ℋ which[Physically, one always has a Hilbert space.]is decomposable as a direct sum of tensor product ℋ=⊕_h,h h-h∈V(h,c)⊗V(h,c) ,of irreducible highest weight modules of Vir_c×Vir_c̅, where where Vir_cand Vir_c̅ are two copies of the Virasoro algebra with central charge c,c :[L_m,L_n]=(m-n) L_m+n+c/12 m(m^2-1) δ_m+n,0, [L̅_m,L̅_n]=(m-n) L̅_m+n+c̅/12 m(m^2-1) δ_m+n,0,[L_m,L̅_n]=0 ,such that the following are satisfied : * Identity Property : There is a unique vector |0⟩∈ V(0,c)⊗V(0,c) which is invariant under the sl(2) ×sl(2)-subalgebra of Vir_c×Vir_c̅ generated by L_0,L̅_0,L_± 1, and L̅_± 1.*For each vector α∈ℋ there is an operator ϕ_α(z,z̅) acting on ℋ, parameterized by z ∈. Also, for every operator ϕ_α there exists a conjugate operator ϕ_α^∨, partially characterized by the requirement that the operator product expansion (OPE) of ϕ_α andϕ_α^∨ contains a descendant of the identity operator. * L_n Property :For α∈ V(h,c)⊗V(h,c) a highest weight state of the (Vir_c×Vir_c̅)-action, we have [L_n, ϕ_α(z, z̅)]=(z^n+1d/d z+h(n+1) z^n) ϕ_α(z,z) , [L̅_n, ϕ_α(z, z̅)]=(z̅^n+1d/d z̅+h̅(n+1) z̅^n) ϕ_α(z,z) ,for real numbers h and h̅.*Duality Property : The inner products ⟨ 0|ϕ_α_1(z_1, z̅_1) …ϕ_α_n(z_n, z̅_n)| 0⟩ exist for |z_1|>…>|z_n|>0 and admit an unambiguous real-analytic continuation, independent of ordering[For Fermionic fields, one has to keep track of signs while commuting them past each other.], to ^n∖{z_1,…,z_n=0,∞;z_i=z_j}.* Modular invariance property : The torus partition function and correlation functions, given in terms of traces exist and are modular invariant.These axioms do not characterise a CFT but are necessary for a well defined CFT. A particular subset of operators which only depend only on z or z̅ are of interest. Such operators are called chiral (z dependent) and anti-chiral (z̅ dependent) operators. The set of chiral and anti-chiral operators form an algebra which we denote by 𝒜 and 𝒜 respectively. For any CFT, 1∈𝒜⊗𝒜,T(z)∈𝒜 and T(z̅)∈𝒜 where T and T are the holomorphic and anti-holomorphic stress tensor with modesL_n and L_n respectively. Let {𝒪^i(z)} be a basis of 𝒜. The OPE of chiral operators takes the form 𝒪^i(z) 𝒪^j(w)=∑_k c_i j k/(z-w)^h_i j k𝒪^k(w) ,for some coefficients c_ijk where h_ijk=h_i+h_j-h_k. By the usual contour integral manipulations we can write the above OPE as the algebra of modes:[𝒪_n^i, 𝒪_m^j] =∑_k c_i j k(n, m) 𝒪_n+m^k,where 𝒪^j(z)=∑_m𝒪_m^jz^-m-h_j.Similar OPE holds for anti-chiral operators. This is called the chiral algebra (anti-chiral algebra) of the CFT. In the following, we will only speak of the chiral algebra but all statements hold for anti-chiral algebra equally well. The chiral algebra of any CFT contains the (universal enveloping algebra of) Virasoro algebra since the stress tensor is always a chiral operator. Other examples of chiral algebra include the affine-Kac Moody algebra <cit.> and the W_3 algebra <cit.>. Only the zero mode the of chiral operator 𝒪(z) commute with the Hamiltonian (L_0+L̅_0) and are hence called the symmetry-generating algebra. 
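Since all later constructions are phrased in terms of such mode algebras, it may help to record a small self-contained consistency check (ours, not part of the original discussion). The sketch below encodes the Virasoro brackets displayed above with the central element C treated as an extra basis vector, sets c = 1 (the check is linear in c), and verifies the Jacobi identity [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 on a window of modes.

```python
# Minimal sketch: verify the Jacobi identity for the Virasoro bracket
# [L_m, L_n] = (m - n) L_{m+n} + (1/12) m (m^2 - 1) delta_{m+n,0} C   (c set to 1).
from fractions import Fraction
from itertools import product

def L(n):
    return {n: Fraction(1)}      # basis vector L_n; the key "C" labels the central element

def bracket(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if a == "C" or b == "C":             # C is central: [L_m, C] = 0
                continue
            coef = ca * cb
            out[a + b] = out.get(a + b, Fraction(0)) + coef * (a - b)
            if a + b == 0:                       # central term
                out["C"] = out.get("C", Fraction(0)) + coef * Fraction(a**3 - a, 12)
    return {k: v for k, v in out.items() if v != 0}

def add(*terms):
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, Fraction(0)) + v
    return {k: v for k, v in out.items() if v != 0}

for m, n, p in product(range(-4, 5), repeat=3):
    jacobi = add(bracket(L(m), bracket(L(n), L(p))),
                 bracket(L(n), bracket(L(p), L(m))),
                 bracket(L(p), bracket(L(m), L(n))))
    assert jacobi == {}, (m, n, p)
print("Jacobi identity verified for all modes m, n, p in [-4, 4].")
```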
In the inner product space (<ref>), one can talk about subspaces ℋ_i which form irreducible representations of the chiral algebra 𝒜. For this reason the full chiral algebra is sometimes also called the spectrum-generating algebra. We can thus decompose the physical Hilbert space ℋ as:ℋ=⊕_i, i̅ N_i, i̅ℋ_i ⊗ℋ_i̅,where ℋ_i,ℋ_i̅ are irreducible representations of 𝒜,𝒜 respectively and N_i, i̅∈_0 =∪{0}, the set of non-negative integers, is the number of times ℋ_i⊗ℋ_i̅ appears in ℋ. For the index value i,i=0, we take ℋ_0 and ℋ_0 to be the subspace of ℋ which contains states corresponding to 𝒜 and 𝒜 respectively modulo the null states. Hence, N_i, 0=δ_i, 0, N_0, i̅ =δ_0, i̅. Let us now describe the relation between the two decompositions (<ref>) and (<ref>). A general state |h,h̅⟩∈ℋ is called a Virasoro primary of conformal dimension (h,h̅)if L_0|h,h̅⟩=h|h,h̅⟩L̅_0|h,h̅⟩=h̅|h,h⟩ L_n|h,h̅⟩=L̅_n|h,h̅⟩=0, n>0.This follows from the OPE T(z)ϕ(w,w̅)=h/(z-w)^2ϕ(w,w̅)+∂_wϕ(w,w̅)/z-w+ hol. , T(z̅)ϕ(w,w̅)=h̅/(z̅-w̅)^2ϕ(w,w̅)+∂_w̅ϕ(w,w̅)/z̅-w̅+ anti-hol. ,where ϕ(w,w) is the operator corresponding to the |h,h̅⟩. One can identify the state |h,h⟩ with |h,h̅⟩≡lim_z,z̅→ 0ϕ(z,z)|0⟩ .We will consider |h,h⟩ as the tensor product of states |h⟩,|h⟩: |h,h⟩≡ |h⟩⊗|h⟩. The Verma modules V(h,c) and V(h̅,c̅) is given byV(h,c):=Span_{L_-n_1… L_-n_k|h⟩: n_1,…,n_k>0,k∈}, V(h̅,c̅):=Span_{L̅_-n_1…L̅_-n_k|h⟩: n_1,…,n_k>0,k∈}.The commutators (<ref>) and the Virasoro primary conditionmakes the Verma module V(h,c)⊗V(h,c̅) into a (Vir_c×Vir_c̅)-representation. In general, this representation is reducible and one has to quotient out singular or null states [A singular or null state is a vector which is not a highest weight vector but is annihilated by L_n,L̅_n, n>0, see <cit.> for a detailed discussion.] from these Verma modules to make them irreducible. We will assume that this has already been done and that V(h,c)⊗V(h,c̅) is an irreducible (Vir_c×Vir_c̅)-module.Note that from the OPE(<ref>) we must have h̅=0 (h=0) for a chiral (anti-chiral field). Now we can identify ℋ_0 ⊗ℋ_0 as ℋ_0 ⊗ℋ_0≅(⊕_h∈SV(h,c))⊗(⊕_h̅∈SV(h̅,c̅))⊂ℋ,where 𝒮:={h∈:|h⟩⊗ |0⟩≡|h,0⟩∈ℋ} , 𝒮:={h̅∈:|0⟩⊗|h̅⟩≡|0,h̅⟩∈ℋ} .A chiral primary is a Virasoro primary |h,h̅⟩ satisfying 𝒪_m |h,h̅⟩=𝒪_m |h,h̅⟩=0, m>0 ,and is an eigenstate of 𝒪_0 and 𝒪_0 for every 𝒪∈𝒜 and 𝒪∈𝒜, the operator corresponding to it will be called the chiral primary field. Then each irreducible factor in (<ref>) is a subspace of a Verma module constructed over a chiral primary by the modes of every chiral and anti-chiral operator. One can then talk about characters of the CFT defined for each irreducible factor ℋ_i ⊗ℋ_i:χ_i,i(τ,τ)=Tr_ℋ_i ⊗ℋ_iq^L_0-c/24q̅^L̅_0-c̅/24,q=e^2π iτ,  q̅=e^-2π iτ̅ ,where τ∈ℍ:={x+iy∈:y>0} is in the upper half plane.From the fact that ℋ_i ⊗ℋ_i is built over some highest vector |h_i,h̅_i⟩, we see that the character has the form χ_i,i(τ,τ)=q^h_i-c/24q̅^h̅_i-c̅/24∑_n,m=0^∞ a(n)a̅(m)q^nq^m ,for some integers a(n),a̅(m). Thus we can separate the character into chiral and anti-chiral characters:χ_i,i(τ,τ)=χ_i(τ) χ_i(τ̅)where χ_i(τ)=q^h_i-c/24∑_n=0^∞ a(n)q^n,χ_i̅(τ̅)=q̅^h̅_i-c̅/24∑_m=0^∞a̅(m)q^m .The partition function of the CFT is then given by sum over chiral characters Z(τ,τ̅):=Tr_ℋq^L_0-c/24q̅^L̅_0-c̅/24=∑_i,i̅N_i,i̅χ_i(τ)χ_i̅(τ̅) .Modular invariance of the partition function implies that the chiral characters form a weight-zero weakly holomorphic vector valued modular form [See for example <cit.> for the definition of vector valued modular forms.]. 
Using this property of chiral characters, one can classify CFTs with finitely many primary fields and given central charge <cit.>. A CFT is called a chiral CFT if the partition function and its correlation functions decompose into a product of an analytic function of τ and an analytic function of τ̅. In such a case, we say that the CFT admits a holomorphic factorisation. A non-chiral conformal field theory does not admit a holomorphic factorisation. §.§.§ Non-chiral vertex operator algebrasWe now briefly describe the notion of non-chiral vertex operator algebras studied in this paper. We refer to Section <ref> for precise and detailed discussions. Just as vertex operator algebras, we start with a vector space V which is (×)-graded. We havethe vertex operator map Y_V:V⟶End(V){x,x̅} where x,x̅ are two formal variables. We require the existence of a vacuum vector 1 and conformal vectors ω,ω̅ such thatY_V(1,x,x̅)=1 and the coefficients of the formal series for Y_V(ω,x, x̅),Y_V(ω̅,x,x̅) satisfies two copies of the Virasoro algebra. The vertex operators are required to satisfy certain translation and L(0) axioms similar to VOAs. It turns out that that the Jacobi identity for VOAs is difficult to formulate for non-chiral VOAs <cit.>. The appropriate locality axiom is motivated from full field algebras of <cit.>. Roughly speaking, locality of vertex operators says that the matrix elements of the product Y_V(v,z_1,z̅_1)Y_V(w,z_2,z̅_2) and Y_V(v,z_1,z̅_1)Y_V(w,z_2,z̅_2), defined when |z_1| > |z_2| and |z_2| > |z_1| respectively, are equal to the same function, which is multi-valued and analytic in z_1, z̅_1, z_2, z̅_2 and single valued when z̅_1,z̅_2 are complex-conjugates of z_1,z_2, defined on^4 minus a diagonal subset. This version of locality turns into a statement of analytic continuation of matrix elements for chiral vertex operators and allows us to use contour integration for manipulating the modes of the vertex operators. Moreover, locality allows us to prove duality of vertex operators which in turn gives us the operator product expansion of the product of two vertex operators. Modules and intertwiners of a non-chiral VOA are then defined analogous to modules of a VOA.§.§.§ The DictionaryLet us now describe the dictionary between non-chiral VOA and its modules and a non-chiral CFT. Given a bosonic CFT, its chiral and anti-chiral algebra is a non-chiral VOA according to our definition. Note that our notion of non-chiral VOA allows for more general structure in the sense that the chiral algebra of a CFT always has the structure of a tensor product ℋ_0 ⊗ℋ_0 as described above but non-chiral VOAs are allowed to have more general vector spaces. Thechiral and anti-chiral operators are identified with the vertex operators of the VOA. For general non-chiral VOA, chiral and anti-chiral vertex operators form only a subsector of the set of vertex operators, see Definition <ref> and Theorem <ref> below.Next, the irreducibles ℋ_i ⊗ℋ_i must be identified with modules of the VOA. Again in our generic construction we allow for the modules to have more general structure rather than a tensor product. The chiral primary field corresponding to an irreducible ℋ_i ⊗ℋ_i is identified with the intertwiner 𝒴^ (i,i)_(i,i̅)(0,0)(w,x,x̅) of type (i,i) (i,i̅)(0,0) where (0,0) indicates the VOA as a module for itself.The state-operator correspondence for the space (<ref>) corresponds to the vertex operator map for the VOA and its modules along with the intertwining operators of type (i,i) (i,i̅)(0,0). 
The full dictionary is summarised in Table <ref>. We hope to expand the dictionary to include fusion rules and rationality on the CFT side to the notion of tensor product of modules and (strong) regularity of non-chiral VOAs in a future publication. §.§ Lorentzian Lattice Vertex Operator Algebra (LLVOA)One of the main constructions of this paper is a concrete example of a non-chiral VOA based on an even integral Lorentzian lattice. The construction is similar in spirit to Euclidean lattice VOAs but differs in that it is inherently non-chiral and does not admit holomorphic factorisation thus providing the first non-trivial example of a non-chiral VOA. We call it the Lorentzian lattice vertex operator algebra (LLVOA). The modules for a non-chiral VOA introduced in this paper can be defined analogous to modules for usual VOAs. Infact, we are able to construct modules for the LLVOA in one to one correspondence with certain cosets of the lattice.Collecting the VOA and its modules, we define the partition function of the non-chiral CFT (see Definition <ref>) thus obtained. We find that the modules of the LLVOA constructed using the cosets of the lattice give rise to a modular invariant partition function. We further attempt to classify all possible modular invariant non-chiral CFT based on even, integral self-dual Lorentzian lattices of a given signature. This leads us to a conjecture about automorphisms of Lorentzian lattices which we prove for signature (m,m) but are unable to prove for general signature (m,n) with m≠ n. Following the physics terminology, we call the equivalence classes of non-chiral CFTs based on Lorentzian lattices as the moduli space of LLVOAs. As expected from physical arguments, the moduli space in signature (m,n) is given by (see Theorem <ref>)ℳ_m,n≅O(m,n,)/O(m,)×O(n,)×O(m,n,). The paper is organised as follows: in Section <ref>, we introduce the notion of non-chiral VOA and prove some elementary consequences of the definition. Then in Section <ref>, we construct the LLVOA and prove that it is an example of a non-chiral VOA. In Section <ref> we define the notion of modules and intertwining operators and prove some important consequences and results required later. In Section <ref> we constructthe modules of LLVOAs. We formulate a precise conjecture about automorphism of Lorentzian lattices and under that assumption, prove that the moduli space of LLVOAs in signature (m,n) is given by (<ref>). Finally in Section <ref> we review the physical construction of Narain CFTs and comment on their relation to LLVOAs. Appendix <ref> deals with the independence of central extensions of lattices on the chosen basis of the lattice. Appendix <ref> contains some technical results about modules of Heisenberg algebras and Appendix <ref> contains the proof of Conjecture <ref> for the special case m=n. § NON-CHIRAL VERTEX OPERATOR ALGEBRA§.§ Formal calculusWe begin by collecting some notatations about formal calculus. The reader is referred to <cit.> and <cit.> for more details. Let x be a formal variable. For a vector space V, we define the following: V[x] = { ∑_n ∈_0 v_n x^n : v_n ∈ V,where only finitely many v_n≠ 0}, V[[x]] = { ∑_n ∈_0 v_n x^n : v_n ∈ V },V{ x} = {∑_n ∈𝔽 v_n x^n : v_n ∈ V },where _0=∪{0} andis an arbitrary field of characteristic not 2. We will mostly be interested in the case = or . 
For a complex number s∈ℂ and formal variables x_1,x_2, we will define (x_1+x_2)^s:=∑_n=0^∞s nx_1^s-nx_2^n ,where the binomial coefficient is defined as s n=s(s-1)⋯ (s-n+1)/n!.Note, in a series like this, we will always have non-negative integral powers of the second variable.With this formula, one can check that (1-x_1/x_2)^sx_2^s =(x_2-x_1)^s.If we replace x_1,x_2 by complex variables z_1,z_2 then by definition <cit.> (z_1-z_2)^s:=exp(slog(z_1-z_2)),(z_2-z_1)^s:=exp(slog(z_2-z_1)).Using the fact thatlog(1-z)=-∑_n=0^∞z^n/n, |z|<1,and the identity [This identity can be proven by using the relation (1-x)^s=exp(slog (1-x)) for any real x with |x|<1 and s∈.] (-1)^ks k=∑_ℓ=1^k(-s)^ℓ/ℓ!∑_n_1+…+n_ℓ=k n_1,…,n_ℓ≥ 11/n_1⋯ n_ℓ,it is easy to see that [Another way of proving this identity is to use Taylor's theorem.] (z_1-z_2)^s=exp(slog(z_1-z_2))=∑_n≥ 0s n(-1)^nz_1^s-nz_2^n, |z_1|>|z_2| ,(z_2-z_1)^s=exp(slog(z_2-z_1))=∑_n≥ 0s n(-1)^nz_2^s-nz_1^n, |z_2|>|z_1| ,which is consistent with the definition (<ref>).Let f(x) = ∑ v_n x^n∈ V [[x, x^-1]], then we have the formal version of Taylor's theorem:e^x_0d/dx f(x)=f(x + x_0) ,One can prove this by expanding both sides andcomparing terms of equal power of x,x_0. As before, we need to expand the RHS in non-negative integral powers of x_0. §.§ Definition of non-chiral VOALet V=∐_(h,h̅)∈×V_(h,h̅), be an ×-graded complex vector space vector space. Let V=∏_(h,h̅)∈×V_(h,h̅),denote the algebraic completion of V. Let V'=∐_(h,h̅)∈×V'_(h,h̅),be the contragradient of V where V'_(h,h̅) is the dual of V_(h,h̅). A series ∑ f_n in V is said to be absolutely convergent if for every f'∈ V' the series ∑ |⟨ f',f_n⟩| is convergent. Here, ⟨ f',f_n⟩=f'(f_n)∈ is just the action of the linear functional on f' on f_n.A non-chiral vertex operator algebra of central charge (c,c̅) is a quintuple (V,Y_V,ω,ω̅,1) where V is an ×-graded complex vector space and Y_V is a linear map, called the vertex operator map,Y_V: V⊗ V⟶ V{x^± 1,x̅^± 1}, u⊗ v⟼ Y_V(u,x,x̅)v ,or equivalently a mapY_V: ^××^×⟶Hom(V⊗ V,V)(z,z̅)⟼ Y_V(·,z,z):u⊗ v⟼ Y_V(u,z,z̅)v,which is multi-valued and analytic if z,z are independent complex variables and single valued when z is the complex conjugate of z. The vertex operator Y_V(u,x,x̅) is expanded as a formal power seriesY_V(u,x,x̅)=∑_m,n∈u_m,nx^-m-1x̅^-n-1∈End(V){x^± 1,x̅^± 1},and when u∈ V_(h,h̅), it can also be expanded asY_V(u,x,x̅)=∑_m,n∈x_m,n(u)x^-m-hx̅^-n-h̅∈End(V){x^± 1,x̅^± 1},so that x_m,n(u)=u_m+h-1,n+h-1, m,n∈.We call x_m,n(u) the modes of the vertex operators Y_V(u,x,x̅).The degree (h,h̅) is called the conformal weight of u∈ V_(h,h̅) and we write wt(u)=h,wt(u)=h̅.The vector 1∈ V_(0,0) is called the vacuum vector and ω∈ V_(2,0),ω̅∈ V_(0,2) are chiral and anti-chiral conformal vectors respectively. This data is required to satisfy the following properties: *Identity property: The vertex operator corresponding to the vacuum vector acts as identity, i.e.Y_V(1, x, x̅) u = u, ∀  u ∈ V. *Grading-restriction property: For every (h,h)∈×, dim(V_(h,h̅))<∞,and there exists M∈, such that V_(h,h̅)=0, forh<Mor h<M. *Single-valuedness property: For every homogenous subspace V_(h,h̅) h-h̅∈ℤ. * Creation property: For any v∈ V lim_x,x̅→ 0Y_V(v,x,x̅)1=v ,that is Y_V(v,x,x̅)1 involves only non-negative powers of x,x̅ and the constant term is v. 
*Virasoro property: The vertex operators Y_V(ω,x,x̅) and Y_V(ω̅,x,x̅), called conformal vertex operators, have Laurent series in x,x̅ given by T(x):=Y_V(ω,x,x̅)=∑_n∈L(n)x^-n-2,T̅(x):=Y_V(ω̅,x,x̅)=∑_n∈L̅(n)x̅^-n-2,where L(n),L̅(n) are operators which satisfy the Virasoro algebra with central charge c,c̅ respectively:[L(m),L(n)]=(m-n) L(m+n)+c/12 m(m^2-1) δ_m+n,0,[L̅(m),L̅(n)]=(m-n) L̅(m+n)+c̅/12 m(m^2-1) δ_m+n,0,and [L(m),L̅(n)]=0. *Grading property: The operator (L(0),L̅(0)) is the gradingoperator on V, that is for v∈ V_(h,h̅)L(0)v=hv,L(0)v=h̅v. *L(0)-property : [L(0), Y_V(u , x, x̅)]=x ∂/∂ xY_V(u ,x, x̅)+Y_V(L(0) u , x, x̅),[L̅(0), Y_V(u , x, x̅)]=x̅∂/∂x̅Y_V(u ,x, x̅)+Y_V(L̅(0) u , x, x̅). *Translation property: For any u∈ V [L(-1), Y_V(u , x, x̅)]=Y_V(L(-1) u , x, x̅)=∂/∂ x Y_V(u , x, x̅),[L̅(-1), Y_V(u , x, x̅)]=Y_V(L̅(-1) u , x, x̅)=∂/∂x̅ Y_V(u , x, x̅). * Locality property:For u_1, …, u_n∈ V, there is an operator-valuedfunction [We thank Yi-Zhi Huang for discussions on this point.]m_n (u_1, ... , u_n; z_1, z̅_1, ... , z_n, z̅_n) ,defined on [The matrix elements of this operator are called correlation functions in Physics.]{(z_1,…,z_n,z̅_1,…, z̅_n) ∈^2n |z_i,z̅_i≠ 0, z_i ≠ z_j,z̅_i≠z̅_j},which is multi-valued and analytic when z̅_1, ..., z̅_n are viewed asindependent variables and is single-valued when z̅_1, ..., z̅_n are equal to the complex conjugates of z_1, ... , z_n. Moreover, for any permutation σ∈ S_n, the product of vertex operators Y_V(u_σ(1) , z_σ(1), z̅_σ(1))⋯Y_V(u_σ(n) , z_σ(n), z̅_σ(n)) ,is the expansion of m_n(u_1,…,u_n;z_1, z̅_1,…, z_n, z̅_n ) in the domain |z_σ(1)|>|z_σ(2)|>…>|z_σ(n)|>0. Here, z̅_σ(1), ..., z̅_σ(n) are complex conjugates of z_σ(1), ... , z_σ(n) respectively. If a function m_n satisfying above properties exists, we say that the vertex operatorsY_V(u_1 , z_1, z̅_1),…, Y_V(u_n , z_n, z̅_n) are mutually local with respect to each other. We will often denote the non-chiral VOA by (V,Y_V) or simply by V.For a homogeneous vector u∈ V_(h,h̅), the sum in (<ref>) and (<ref>) runs only over the set {(m,n)∈^2|  m-n∈}. To see this, first note that the L(0)-property <ref> implies the commutator [L(0),x_m,n(u)]=-mx_m,n(u),[L̅(0),x_m,n(u)]=-nx_m,n(u).Equivalently, [L(0),u_m,n]=(h-m-1)u_m,n,[L̅(0),u_m,n]=(h̅-n-1)u_m,n.This implies that wt x_m,n(u)=-m,wt x_m,n(u)=-n,wt u_m,n=h-m-1,wt u_m,n=h̅-n-1.The single-valuedness property <ref> implies that m-n∈ in both the sums. We will thus write the expansions of the vertex operators as Y_V(u,x,x̅) =∑_m,n∈ (m-n)∈u_m,nx^-m-1x̅^-n-1∈End(V){x^± 1,x̅^± 1}=∑_m,n∈ (m-n)∈x_m,n(u)x^-m-hx̅^-n-h̅.The single-valuedness property <ref> implies that the vertex operators (<ref>) is single-valued. To prove this, we must show thatY_V(u,z,z̅)=Y_V(u,e^2π iz,e^-2π iz̅).From Remark <ref>, we have Y_V(u,e^2π iz,e^-2π iz̅) =∑_m,n∈ (m-n)∈u_m,nz^-m-1z̅^-n-1e^2π i(-m+n)=Y_V(u,z,z̅).For v∈ V_(h,h̅), if in the expansion (<ref>) the index runs over m∈-h, n∈-h̅, then assuming that z,z̅ are independent complex variables, we can use Cauchy's residue theorem to writex_r,s(v)=1/(2π i)^2∮∮ dzdz̅ Y_V(v,z,z)z^r+h-1z̅^s+h̅-1 ,where the contour of the integration is a circle around z=0 and z̅=0 respectively. The creation property implies also the injectivity condition, i.e. Y_V(v, x, x̅) = 0impliesv = 0,forv ∈ V. §.§ Some consequences of the definitionWe now prove some consequences of the definition.For any v∈ V, we haveY_V(v,x,x̅)1= e^x̅ L̅(-1)e^x L(-1)v .We use the translation property <ref> and Taylor's theorem (<ref>). 
For another formal variable x_0,x̅_0, Taylor's theorem gives Y_V(e^xL(-1)e^x̅L̅(-1)v,x_0,x̅_0)=e^xd/dxe^x̅d/dx̅Y_V(v,x_0,x̅_0)=Y_V(v,x+x_0,x̅+x_0) .Now applying this operator on 1, taking limit x_0, x̅_0 → 0 we get lim_x_0,x̅_0→ 0Y_V(e^xL(-1)e^x̅L̅(-1)v,x_0,x̅_0)1=lim_x_0,x̅_0→ 0Y_V(v,x+x_0,x̅+x_0)1,and then using (<ref>) we obtain (<ref>).For any v∈ V we havee^x_2L(-1) e^x̅_2L̅(-1)Y_V(v,x_1,x̅_1) e^-x_2L(-1) e^-x̅_2L̅(-1)=Y_V(v,x_1+x_2,x̅_1+x̅_̅2̅) . Using the BCH formula (<ref>) and translation property <ref>, we have e^x_2L(-1)Y_V(v,x_1,x̅_1) e^-x_2L(-1)= ∑_n=0^∞[(x_2L(-1))^n , Y_V(v,x_1,x̅_1)]/n!, =∑_n=0^∞1/n!x_2^n∂^n/∂ x_1^nY_V(v,x_1,x̅_1), = e^x_2∂/∂ x_1Y_V(v,x_1,x̅_1) , =Y_V(v,x_1+x_2,x̅_1) ,where in the last step we used Taylor's theorem (<ref>). Similarly we have e^x̅_2L̅(-1)Y_V(v,x_1,x̅_1) e^-x̅_2L̅(-1)=Y_V(v,x_1,x̅_1+x̅_2).Since L(-1) and L̅(-1) commute, the result follows. We now prove skew-symmetry which will be useful in proving the duality of vertex operators. For any u,v∈ V, we have Y_V(u,z,z̅)v=e^zL(-1)e^zL̅(-1)Y_V(v,-z,-z̅)u. Using Lemma locality property <ref>, <ref> and Lemma <ref>, we have Y_V(u,z,z̅)Y_V(v,z',z̅')1 ∼ Y_V(v,z',z̅')Y_V(u,z,z̅)1=Y_V(v,z',z̅')e^z̅ L̅(-1)e^z L(-1)u=e^z̅ L̅(-1)e^z L(-1)Y_V(v,z'-z,z̅'-z̅)u .Now taking z',z'→ 0 and using the creation property, we obtain the required result. The following proposition shows the uniqueness of vertex operators. The proof is on the lines of <cit.>.Let U:V⟶ V{x,x} be a linear operator which is local with respect to every other vertex operator, in the sense of Property <ref>, and satisfies U(x,x̅)1= e^x̅ L̅(-1)e^x L(-1)v,for some v∈ V, then U(z,z̅)=Y_V(v,z,z̅) ,for a non-zero complex number z. For any w∈ V, from Lemma <ref>and locality property <ref> we have U(z_1,z̅_1)e^z̅_2L̅(-1)e^z_2 L(-1)w =U(z_1,z̅_1)Y_V(w,z_2,z̅_2)1, ∼ Y_V(w,z_2,z̅_2)U(z_1,z̅_1)1, =Y_V(w,z_2,z̅_2)e^z̅_1L̅(-1)e^z_1 L(-1)v , =Y_V(w,z_2,z̅_2)Y_V(v,z_1,z̅_1)1,∼ Y_V(v,z_1,z̅_1)Y_V(w,z_2,z̅_2)1 ,where ∼ indicates equality up to analytic extension in the sense of Property <ref>. Now taking z_2,z̅_2→ 0 we obtain, U(z_1,z̅_1)w =Y_V(v,z_1,z̅_1)w.As the two operators in (<ref>) are equal for all w ∈ V, they are equal as operators. We now prove the duality of vertex operators. For any v,w∈ V we have Y_V(v,z_1,z̅_1)Y_V(w,z_2,z_2)=Y_V(Y_V(v,z_1-z_2,z̅_1-z̅_2)w,z_2,z̅_2) ,in the domain |z_1|>|z_2|>|z_1-z_2|>0, where the RHS is defined byY_V(Y_V(v,z_1-z_2,z̅_1-z̅_2)w,z_2,z̅_2)=∑_m,n∈ (m-n)∈Y_V(v_m,n· w,z_2,z̅_2)(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1. The proof is on the lines of <cit.>.For any u∈ V, we have Y_V(v,z_1,z̅_1) Y_V(w,z_2,z̅_2)e^z̅_3L̅(-1)e^z_3 L(-1)u=Y_V(v,z_1,z̅_1)Y_V(w,z_2,z̅_2)Y_V(u,z_3,z̅_3)1, ∼ Y_V(u,z_3,z̅_3)Y_V(v,z_1,z̅_1)Y_V(w,z_2,z̅_2)1, =Y_V(u,z_3,z̅_3)Y_V(v,z_1,z̅_1)e^z̅_2L̅(-1)e^z_2 L(-1)w , =Y_V(u,z_3,z̅_3)e^z̅_2L̅(-1)e^z_2 L(-1)Y_V(v,z_1-z_2,z̅_1-z_2)w , =Y_V(u,z_3,z̅_3)Y_V(Y_V(v,z_1-z_2,z̅_1-z_2)w,z_2,z̅_2)1,∼ Y_V(Y_V(v,z_1-z_2,z̅_1-z_2)w,z_2,z̅_2)Y_V(u,z_3,z̅_3)1,where we used Lemma <ref>, Lemma <ref>, and Locality property <ref>. Now taking z_3,z_3→ 0 and using Proposition <ref>, we obtain the duality relation. Note that the sum on the RHS of (<ref>) converges. 
Indeed for any u∈ V, using skew-symmetry [We thank Yi-Zhi Huang for clarification on this point.](Lemma <ref>) we have Y_V(Y_V(v, z_1-z_2,z̅_1-z̅_2)w,z_2,z̅_2)u=∑_m,n∈ (m-n)∈Y_V(v_m,n· w,z_2,z̅_2)(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1u =∑_m,n∈ (m-n)∈e^z̅_2L̅(-1)e^z_2 L(-1)Y_V(u,-z_2,-z̅_2)v_m,n· w(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1=e^z̅_2L̅(-1)e^z_2 L(-1)Y_V(u,-z_2,-z̅_2)∑_m,n∈ (m-n)∈v_m,n· w(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1=e^z̅_2L̅(-1)e^z_2 L(-1)Y_V(v,-z_2,-z̅_2)Y_V(u,z_1-z_2,z̅_1-z̅_2)w .Since the RHS of the last line is well defined in |z_2|>|z_1-z_2|, the operator Y_V(Y_V(v,z_1-z_2,z̅_1-z̅_2)w,z_2,z̅_2) is well defined in |z_2|>|z_1-z_2|.Proposition <ref> shows that a product of two vertex operators can be written as a sum of single vertex operator:Y_V(v,z_1,z̅_1)Y_V(w,z_2,z_2)= ∑_m,n∈ (m-n)∈Y_V(v_m,n· w,z_2,z̅_2)(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1.In physics, we usually ignore the non-singular terms in the expansion above and call it the operator product expansion. The sum in the operator product expansion has finitely many terms with negative powers of (z_1-z_2) and (z̅_1-z̅_2). To see this, we first expand the vertex operator Y_V(v,x,x̅) for v∈ V_(h,h̅) asY_V(v,x,x̅)=∑_m,n∈ (m-n)∈x_m,n(v)x^-m-hx̅^-n-h̅,Since wt x_m,n(v)=-m,wt x_m,n(v)=-n,for w∈ V_(h',h') we have x_m,n(v)· w∈ V_(h'-m,h'-n).Due to the grading-restriction property <ref>, there exists M∈ℤ such that x_m,n(v)· w=0, m,n>M.Thus the operator product expansion is upper truncated.As a consequence of locality property <ref> and duality of vertex operators (<ref>), the product of multiple vertex operators exists. Indeed for u_i∈ V, the product of operator Y_V(u_1,z_1,z_1)⋯ Y_V(u_n,z_n,z_n) ,exists in the domain {(z_1,…,z_n)∈^n:|z_1|>⋯>|z_n|>0},except for poles at the diagonal set {z_i=z_j,z_i=0}. Let us prove this for n=3. By duality (<ref>) we see that Y_V(u_2,z_2,z_2)Y_V(u_3,z_3,z_3)=Y_V(Y_V(u_2,z_2-z_3,z_2-z_3)u_3,z_3,z_3) ,in the domain |z_2|>|z_3|>|z_2-z_3|>0. Next, by locality property <ref>Y_V(u_1,z_1,z_1)Y_V(Y_V(u_2,z_2-z_3,z_2-z_3)u_3,z_3,z_3) ,exists in the domain |z_1|>|z_3|. Thus the product Y_V(u_1,z_1,z_1)Y_V(u_2,z_2,z_2)Y_V(u_3,z_3,z_3),exists in the domain complete ?The operator product expansion of the conformal vertex operatorT(x) with itself is given byT(x_1)T(x_2)=c/21/(x_1-x_2)^4+2 T(x_2)/(x_1-x_2)^2+1/(x_1-x_2)∂/∂ x_2T(x_2)+G_1(x_1,x_2)T(x̅_1)T̅(x̅_2)=c̅/21/(x̅_1-x̅_2)^4+2T̅(x̅_2)/(x̅_1-x̅_2)^2+1/(x̅_1-x̅_2)∂/∂x̅_2T̅(x_2)+G_2(x̅_1,x̅_2)T(x_1)T̅(x̅_2)=G_3(x_1,x̅_2),where G_1(x_1,x_2),G_3(x_1,x_2)∈End(V)[[x_2^± 1,(x_1-x_2)]],G_2(x̅_1,x̅_2)∈End(V)[[x̅_2^± 1,(x̅_1-x̅_2)]]. The proof is straightforward using the Virasoro algebra (<ref>), see <cit.> for more details.The following proposition is a strightforward generalisation of <cit.>.Let V be a vector space and A(z_1,z̅_1),B(z_2,z̅_2)∈End[[z_1^± 1,z_1^± 1,z_2^± 1,z_2^± 1]]. Then the following statements are equivalent: * There is an identity in End[[z_1^± 1,z_1^± 1,z_2^± 1,z_2^± 1]][A(z_1,z̅_1),B(z_2,z̅_2)]=∑_i,j=0^M-1C_i,j(z_2,z_2)/i!j!∂_z_2^iδ(z_1-z_2)∂_z̅_2^jδ(z̅_1-z̅_2)for some C_i,j(z_2,z_2)∈End[[z_2^± 1,z_2^± 1]].* A(z_1,z̅_1)B(z_2,z̅_2)   (resp.  B(z_2,z̅_2)A(z_1,z̅_1)) equals ∑_i,j=0^M-1C_i,j(z_2,z_2)/(z_1-z_2)^i+1(z̅_1-z̅_2)^j+1+ A(z_1,z̅_1)B(z_2,z̅_2)where (z_1-z_2),(z̅_1-z̅_2) is expanded in positive integral powers of z_2/z_1,z̅_2/z̅_1 respectively (resp.  
z_1/z_2,z̅_1/z̅_2).* A(z_1,z̅_1)B(z_2,z̅_2) andB(z_2,z̅_2)A(z_1,z̅_1) converges to the expression (<ref>) in the domains |z_1|>|z_2| and |z_2|>|z_1| respectively.Of particular interest are the chiral and anti-chiral vertex operators. A vector u∈ V is called a chiral (anti-chiral) vector if the corresponding vertex operator Y_V(u,x,x̅) belongs in End(V){x^± 1} (End(V){x̅^± 1}) or equivalently only depends on z (z̅). Such vertex operators will be called chiral (anti-chiral) vertex operators. From the translation property <ref> we see that the vertex operator corresponding to v is chiral if and only if L̅(-1)v=0 and anti-chiral if and only if L(-1)v=0. The algebra of the modes of chiral (anti-chiral) vertex operators is called the chiral (anti-chiral) algebra in physics, see Corollary <ref>. In the locality property <ref> involving a chiral (resp. anti-chiral) vertex operator Y_V(u_1,z_1)(resp. Y_V(u_1,z̅_1)) and another vertex operator Y_V(u_2,z_2,z̅_2),we will often denote the function m by R(Y_V(u_1,z_1)Y_V(u_2,z_2,z̅_2))(resp. R(Y_V(u_1,z̅_1)Y_V(u_2,z_2,z̅_2)))so that R(Y_V(u_1,z_1)Y_V(u_2,z_2,z̅_2))= Y_V(u_1 , z_1) Y_V(u_2 , z_2, z̅_2)for |z_1|>|z_2|, Y_V(u_2 , z_2, z̅_2) Y_V(u_1 , z_1)for |z_2|>|z_1|and R(Y_V(u_1,z̅_1)Y_V(u_2,z_2,z̅_2))= Y_V(u_1 , z̅_1) Y_V(u_2 , z_2, z̅_2)for |z_1|>|z_2|, Y_V(u_2 , z_2, z̅_2) Y_V(u_1 , z̅_1)for |z_2|>|z_1|respectively.In physics, this is called radial ordering. Here z_2,z̅_2 are complex conjugates of each other. Let u,v be homogeneous chiral andanti-chiral vector. Then the associated chiral and anti-chiral vertex operator has an expansion of the form Y_V(u,x)=∑_n∈x_n(u)x^-n-(wt u-wt u)∈End(V)[[x^± 1]],Y_V(v,x̅)=∑_n∈x̅_n(v)x̅^-n-(wt v-wt v)∈End(V)[[x̅^± 1]], where x_n(u):=x_n-wt u,-wt u(u),x̅_n(v):=x_-wt v,n-wt v(v). From the expansion (<ref>),we see that Y_V(u,x,x̅) will be independent of x̅ if and only if x_m,n(u)=0unlessn=-wt u .But as m-n∈ we then have x_m,n(u)=0unlessn=-wt u, m∈ - wt u.This gives us the required expansion.The proof for anti-chiral vector v is similar. The fact that Y_V(u,x)∈End(V)[[x^± 1]],Y_V(v,x̅)∈End(V)[[x̅^± 1]] follows from the single-valuedness property <ref>. By the above lemma, for chiral and anti-chiral vertex operators, the requirements in Remark <ref> is satisfied and hence we can writex_n(u)= 1/2π i∮ dz Y_V(u,z)z^n+(wt u-wt u)-1, x̅_n(v)= 1/2π i∮ dz̅ Y_V(v,z̅)z̅^n+(wt v-wt v)-1,where u,v are chiral and anti-chiral vectors respectively and the contour of integration is a circle around z=0,z̅=0 respectively.We now derive the commutator of the modes of two vertex operators and the Borcherd's identity <cit.>. Let v∈ V_(h_1,h̅_1),v∈ V_(h_2,h̅_2) with h_1,h̅_1∈. Then we have the following Borcherd's identity:∑_jIn particular, we have the commutator[x_m,n(u),x_r,s(v)]=∑_p≥ -h_1+1 q≥ -h̅_1+1m+h_1-1 p+h_1-1n+h̅_1-1 q+h̅_1-1x_m+r+h_2-p,n+s+h̅_2-q(x_p,q(u)· v). We will follow the usual contour integration procedure, see for example <cit.>. Let r_1>r_2>r_3>0 be real numbers. Let C^a_x denote a circle of radius a centered around x. Let f(z_1,z_2,z_1,z_2) be a function rational function analytic in z_1,z_2 and anti-analytic in z_1,z̅_2 with poles only at z_1=0,z_2=0,z_1=z_2,z_1=0,z_2=0,z_1=z_2. The integrals∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_1_z_1dz_1∮_C^r_1_z̅_1dz̅_2  Y_V(u,z_1,z̅_1)Y_V(v,z_2,z̅_2)f(z_1,z_2,z_1,z_2) ∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_3_z_1dz_1∮_C^r_3_z̅_1dz̅_2  Y_V(v,z_2,z̅_2)Y_V(u,z_1,z̅_1)f(z_1,z_2,z_1,z_2)are well-defined. 
By the locality property <ref> and the OPE we see that ∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_1_z_1dz_1∮_C^r_1_z̅_1dz̅_1  Y_V(u,z_1,z̅_1)Y_V(v,z_2,z̅_2)f(z_1,z_2,z_1,z_2)-∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_3_z_1dz_1∮_C^r_3_z̅_1dz̅_1  Y_V(v,z_2,z̅_2)Y_V(u,z_1,z̅_1)f(z_1,z_2,z_1,z_2)=∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_1_z_1-C^r_3_z_1dz_1∮_C^r_1_z̅_1-C^r_3_z̅_1dz̅_1  R(Y_V(u,z_1,z̅_1)Y_V(v,z_2,z̅_2))f(z_1,z_2,z_1,z_2)=∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^r_1_z_1-C^r_3_z_1dz_1∮_C^r_1_z̅_1-C^r_3_z̅_1dz̅_1  Y_V(Y_V(u,z_1-z_2,z̅_1-z̅_2)v,z_2,z̅_2)f(z_1,z_2,z_1,z_2)=∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2∮_C^δ_z_2dz_1∮_C^δ_z̅_2dz̅_1  ∑_p,q∈Y_V(x_p,q(u)· v,z_2,z̅_2)(z_1-z_2)^-p-h_1×(z̅_1-z̅_2)^-q-h_1f(z_1,z_2,z_1,z_2),where δ is some small real number. If we now choose f=z_1^m+h_1-1z̅_1^n+h̅_1-1z̅_1^n+h̅_1-1z_2^r+h_2-1z̅_2^s+h̅_2-1 then using (<ref>) the LHS is[There is a factor of (2π i)^2 which cancels on both sides, so we ignore it.][x_m,n(u),x_r,s(v)] while Cauchy's residue theorem gives the RHS to be∮_C^r_2_z_2dz_2∮_C^r_2_z̅_2dz̅_2 ∑_p≥ -h_1+1 q≥ -h̅_1+1m+h_1-1 p+h_1-1n+h̅_1-1 q+h̅_1-1 Y_V(x_p,q(u)· v,z_2,z̅_2)× z_2^m-p+r+h_2-1z̅_2^n-q+s+h̅_2-1where we used the identity ∮_C^δ_z_2dz_1 z_1^m+h_1-1/(z_1-z_2)^p+h_1=m+h_1-1 p+h_1-1z_2^m-pand analogous identity for the z̅_1 integral. Finally, using (<ref>), the RHS becomes ∑_p≥ -h_1+1 q≥ -h̅_1+1m+h_1-1 p+h_1-1n+h̅_1-1 q+h̅_1-1x_m+r+h_2-p,n+s+h̅_2-q(x_p,q(u)· v).Using f=z_1^m+h_1-1z̅_1^n+h̅_1-1z̅_1^n+h̅_1-1z_2^r+h_2-1z̅_2^s+h̅_2-1(z_1-z_2)^k(z_1-z_2)^ℓwe obatin the Borcherd's identity. Derive the formula! Let u_i∈ V_(h_i,h̅_i) and u_j ∈ V_(h_j,h_j')( v_i∈ V_(h_i',h̅'_i) and v_j∈ V_(h_j',h̅'_j)) behomogeneous chiral (resp. anti-chiral) vectors with corresponding vertex operatorsY_V(u_i,x)=∑_n∈x_n(u_i)x^-n-(h_i-h̅_i),Y_V(u_j,x)=∑_n∈x_n(u_j)x^-n-(h_j-h̅_j),Y_V(v_i,x̅)=∑_n∈x̅_n(v_i)x̅^-n-(h̅'_i-h'_i), Y_V(v_j,x̅)=∑_n∈x̅_n(v_j)x̅^-n-(h̅'_j-h'_j).Then the vectorsx_p(u_i)· u_j andx̅_p(v_k)· v_ℓ are chiral and anti-chiral vectors respectively. Further, we have [x_n(u_i),x_k(u_j)]=∑_p≥ -(h_i-h̅_i)+1n+(h_i-h̅_i)-1 p+(h_i-h̅_i)-1x_k+n(x_p(u_i)· u_j), [x̅_n(v_i),x̅_k(v_j)]=∑_p≥ -(h̅'_i-h'_i)+1n+(h̅'_i-h'_i)-1 p+(h̅'_i-h'_i)-1x̅_k+n(x̅_p(v_i)· v_j) , [x_n(u_i),x̅_k(v_j)]=0 .In particular,[L(n),x_k(u_i)]=∑_p≥ -1n+1 p+1x_k+n(L(p)· u_i),[L(n),x̅_k(v_i)]=0 ,[L̅(n),x̅_k(v_i)]=∑_p≥ -1n+1 p+1x̅_k+n(L̅(p)· v_i) , [L̅(n),x_k(u_i)]=0 .More generally, for m∈ andm_+∈_≥ 0 we have the Borcherd's identity:∑_r≥ 0m r( (-1)^r x_n+m-r(u_i)x_k+r(u_j)-(-1)^m+rx_k+m-r(u_j)x_n+r(u_i) ) =∑_p≥ 1-(h_i-h_i)n+(h_i-h_i)-1 p+(h_i-h_i) -1x_k+n+m + h̅_i - h̅_j(x_p+m(u_i)· u_j) , ∑_r≥ 0m r( (-1)^r x̅_n+m-r(v_i)x̅_k+r(v_j)-(-1)^m+rx̅_k+m-r(v_j)x̅_n+r(v_i) ) =∑_p≥ 1-(h̅'_i-h'_i)n+(h̅'_i-h'_i)-1 p+(h̅'_i-h'_i)-1x̅_k+n + h'_i - h'_j(x̅_p+m(v_i)· v_j), ∑_r≥ 0m_+ r( (-1)^rx_n+m_+-r(u_i)x̅_k+r(v_j)-(-1)^m_++rx̅_k+m_+-r(v_j)x_n+r(u_i) ) = 0.* For u∈ V_(h,0),v∈ V_(0,h̅),the corresponding vertex operators only depend on x,x̅ respectively:Y_V(u,x,x̅)=∑_m∈u_mx^-m-1, Y_V(v,x,x̅)=∑_n∈v_nx̅^-n-1.In particular,x_m,n(u)=0, x_r,s(v)=0 forn≠ 0,r=0. We will denote x_m,0(u),x_0,s(v) simply by x_m(u),x̅_s(v) respectively.We first show that x_p(u_i)· u_j andx̅_p(v_k)· v_ℓ are chiral and anti-chiral vectors respectively. Indeed by the translation property <ref> [L̅(-1),x_p(u_i)]=0,which implies that L̅(-1)· (x_p(u_i)· u_j)=x_p(u_i)·L̅(-1)u_j=0 .Similarly L(-1)· (x̅_p(v_k)· v_ℓ)=0 .Now, we will follow the usual contour integration procedure, see for example <cit.>. 
First note that Y_V(u_i,z_1)Y_V(u_j,z_2),Y_V(u_j,z_2)Y_V(u_i,z_1), R(Y_V(u_i,z_1)Y_V(u_j,z_2)) ,are single-valued and analytic in z_1,z_2 since their partial derivative with respect to z̅_1, z̅_2 is zero. So we can use Cauchy's residue theorem to integrate over z_1,z_2 on any contour. Now let r_1>r_2>r_3>0 be real numbers. Let C^a_i(z) denote a contour in the variable z_i, in counterclockwise direction, of radius a and centered around z. Further,C_i^r := C_i^r(0). Let f(z_1,z_2) be a rational function analytic in z_1,z_2 with poles only at z_1=0,z_2=0,z_1=z_2. The integrals∮_C^r_2_2dz_2∮_C^r_1_1dz_1  Y_V(u_i,z_1)Y_V(u_j,z_2)f(z_1,z_2)and ∮_C^r_2_2dz_2∮_C^r_3_1dz_1  Y_V(u_j,z_2)Y_V(u_i,z_1)f(z_1,z_2) ,are well-defined. By the locality property <ref> and the OPE (<ref>), we see that ∮_C^r_2_2dz_2∮_C^r_1_1dz_1  Y_V(u_i,z_1)Y_V(u_j,z_2)f(z_1,z_2)-∮_C^r_2_2dz_2∮_C^r_3_1dz_1 Y_V(u_j,z_2)Y_V(u_i,z_1)f(z_1,z_2)=∮_C^r_2_2dz_2∮_C^r_1_1-C^r_3_1dz_1  R(Y_V(u_i,z_1)Y_V(u_j,z_2))f(z_1,z_2)=∮_C^r_2_2dz_2∮_C^δ_1(z_2)dz_1  Y_V(Y_V(u_i,z_1-z_2)u_j,z_2)f(z_1,z_2)=∮_C^r_2_2dz_2∮_C^δ_1(z_2)dz_1  ∑_p∈Y_V(x_p(u_i)· u_j,z_2)(z_1-z_2)^-p-(h_i-h̅_i)f(z_1,z_2),where δ is some small real number, see <cit.> for details of the change in contour. If we now choose f=z_1^n+(h_i-h̅_i)-1z_2^k+(h_j-h̅_j)-1, then using (<ref>) the LHS is[There is a factor of (2π i)^2 which cancels on both sides, so we ignore it.][x_n(u_i),x_k(u_j)] while Cauchy's residue theorem gives the RHS to be∮_C^r_2_2dz_2 ∑_p≥ -(h_i-h̅_i)+1n+(h_i-h̅_i)-1 p+(h_i-h̅_i)-1 Y_V(x_p(u_i)· u_j,z_2) z_2^k+(h_j-h̅_j)+n-p-1,where we used the identity ∮_C^δ_1(z_2)dz_1 z_1^n+(h_i-h̅_i)-1/(z_1-z_2)^p+(h_i-h̅_i)=n+(h_i-h̅_i)-1 p+(h_i-h̅_i)-1z_2^n-p.Note, that it is necessary that (h_i-h̅_i)∈, which is true by the single valuedness property <ref>, for (<ref>) to hold. Finally, using (<ref>) and the fact that x_p(u_i).u_j ∈ V_h_j - p+ h̅_i, h̅_i + h̅_j,the RHS becomes ∑_p≥ -(h_i-h̅_i)+1n+(h_i-h̅_i)-1 p+(h_i-h̅_i)-1x_k+n(x_p(u_i)· u_j).The second commutator is similar. To prove the third commutator, note that since∂_z̅_1R(Y_V(u_i,z_1)Y_V(v_j,z̅_2))= ∂_z_2R(Y_V(u_i,z_1)Y_V(v_j,z̅_2))= 0,R(Y_V(u_i,z_1)Y_V(v_j,z̅_2)) cannot have any dependence on (z_1-z_2) or (z_1-z_2). Moreover, from the proof of the OPE in Proposition <ref>, we see that it cannot also have (z_1-z̅_2) dependence as well. This implies that the contour integral on the RHS of (<ref>) vanishes and we get [x_m(u_i),x̅_n(v_j)]=0 .The three Borcherd's identity follow by using f_1=z_1^n+(h_i-h̅_i)-1z_2^k+(h_j-h̅_j)-1(z_1-z_2)^m , f_2=z̅_1^n+(h̅'_i-h'_i)-1z̅_2^k+(h̅'_i-h'_i)-1(z̅_1-z̅_2)^m , f_3=z_1^n+(h_i-h̅_i)-1z̅_2^k+(h̅_j-h'_j)-1(z_1-z̅_2)^m,where for the second Borcherd's identity, we need to integrate against dz̅_1,dz̅_2 on the curves C_z̅_1^r_1,C_z̅_2^r_2 respectively and for the third Borcherd's identity, we need to integrate against dz_1,dz̅_2 on the curves C_z_1^r_1,C_z̅_2^r_2 respectively. The vertex operators corresponding to V_(h,0),V_(0,h) are called chiral and anti-chiral vertex operators respectively. The algebra of the modes of these vertex operators is the chiral algebra in physics. For n=0,-1 in (<ref>) we obtain the L(0)-property <ref> and the translation property <ref> of chiral vertex operators. Note that we already used these properties in proving the OPE. The commutator of the modes of chiral and anti-chiral vertex operators is closed. 
The algebra in (<ref>) thus obtainedis called the chiral and anti-chiral algebra respectively of the non-chiral VOA (V,Y_V).Let (V,Y_V) be a non-chiral VOA with central charge (c,c̅). The graded dimension or character of V is defined byχ_V(τ,τ̅):=Tr_V q^L(0)-c/24q̅^L(0)-c̅/24=∑_(h,h̅)∈×(dim V_(h,h̅))q^h-c/24q̅^h̅-c̅/24,where q=e^2π iτ, q̅=e^-2π iτ̅ and τ∈ℍ:={τ=x+iy : y>0}.Note that the single-valuedness property implies thatχ_V(τ+1,τ+1)=χ_V(τ,τ̅) ifc-c̅=24k ,for some integer k. Let (V_1,Y_V_1,ω_V_1,ω̅_V_1,1_V_1), (V_2,Y_V_2,ω_V_2,ω̅_V_2,1_V_2) be two non-chiral VOAs with the same central charge.Then amap f: V_1 → V_2 is called a non-chiral VOA homomorphism if it is a grading-preserving linear map such thatf(Y_V_1(u, x,x̅) v)=Y_V_2(f(u), x,x̅) f(v)for u, v ∈ V_1 ,or equivalently,f(u_n,m· v)=f(u)_n,m f(v)foru, v ∈ V_1,n,m ∈ℝ,and such thatf(1_V_1)=1_V_2, f(ω_V_1)=ω_V_2, f(ω̅_V_1)=ω̅_V_2.An isomorphism of non-chiral VOAs is a bijective homomorphism. An endomorphism of a non-chiral VOA V is a homomorphism from V to itself, and an automorphism of V is a bijective endomorphism. In particular, an automorphism can be defined as a linear isomorphism f : V → V such thatf ∘ Y_V(v, x,x̅) ∘ f^-1=Y_V(f(v), x,x̅)forv ∈ V , f(ω)=ω, f(ω̅)=ω̅.It follows that f is grading-preserving and f(1_V)=1_V. It is easy to see that the graded dimension of isomorphic non-chiral VOAs are identical.§ LORENTZIAN LATTICE VERTEX OPERATOR ALGEBRA (LLVOA)In this section, we will construct a non-chiral vertex operator algebra corresponding to an even, integral Lorentzian lattice Λ⊂^m,n. In the first subsection, we recall some basic facts about Lorentzian lattices and set up the notations for the rest of the paper. We also record some results we will need later. In the next subsection, we gather the ingredients needed to construct a non-chiral vertex operator algebra, i.e. we will construct a vector space V_Λ associated to the lattice, a vertex operator map Y_V_Λ for this vector space, a vacuum 1, and conformal vectors ω_L, ω_R. In the last subsection, we will prove that(V_Λ, Y_V_Λ,ω_L, ω_R, 1 ) is a non-chiral VOA, which we will call the Lorentzian lattice vertex operator algebra (LLVOA).§.§ Lorentzian latticesWe begin with some basic definitions. Let ^m be the Euclidean space equipped with a symmetric bilinear form ⟨·,·⟩_m. Let ℝ^m,n denote the (m+n)-dimensional vector space ^m+n equipped with the symmetric bilinear formx∘x':=⟨x⃗,x⃗'⟩_m-⟨y⃗,y⃗'⟩_n,where x=(x^1,…,x^m,y^1,…,y^n)≡(x⃗,y) ,and similarly x'. We will omit the subscript on ⟨·,·⟩_m to make the notation lighter. * A d=(m+n)-dimensional Lorentzianlattice of signature (m,n) is a subset Λ⊂^m,n which is also a free ℤ-module spanned by m+n vectors λ_j∈ℝ^m,n, 1 ≤ j ≤ m+n, linearly independent in ℝ^m,n.More explicitly Λ={∑_j=1^m+n n_jλ_j: n_j∈ℤ}.{λ_j}_j=1^m+n is called an integral basis of Λ. When n=0 we call Λ a Euclidean lattice. We will simply refer them as lattices when we do not need to specify their signature.* The dual lattice of a lattice Λ, denoted by Λ^⋆, is defined as Λ^⋆={x'∈^m,n: x∘x' ∈ℤ ∀ x∈Λ}.The lattice Λ is said to be integral if Λ⊆Λ^⋆, i.e. x∘y∈ℤ for all x, x' ∈Λ and self-dual if Λ=Λ^⋆. The lattice Λ is said to be even if x∘x=||x⃗||^2-||y⃗||^2∈2,for all x=(x⃗,y⃗) ∈Λ, where ||x⃗||^2:=⟨x,x⟩. 
* A generator matrix for Λ is an (m+n)×(m+n) matrix such that the -span of its rows is Λ.* A lattice homomorphism of two lattices f:Λ⟶Λ̃ of the same signature is simply a -module morphism which also preserves the bilinear form:f(x)∘ f(x')=x∘x',∀  x,x'∈Λ.A bijective lattice homomorphism is called a lattice isomorphism. Two lattices are said to be isomorphic if there exists a lattice isomorphism between them.* An automorphism of the lattice Λ is a lattice isomorphism from the Λ to itself. The group of all automorphisms (the group operation being composition) is called the automorphism group of Λ and denoted by Aut(Λ). A generator matrix for the lattice Λ in (<ref>) is given by 𝒢_Λ=[ λ_1^1 λ_1^2 ⋯ λ_1^m+n; ⋮ ⋮ ⋯ ⋮; λ_m+n^1 λ_m+n^1 ⋯ λ_m+n^m+n ],where λ_i=(λ_i^1,…,λ_i^m+n) is a basis vector of Λ. It is not hard to show that two generator matrices 𝒢_Λ,𝒢'_Λ generate the same lattice if and only if they are related by an (m+n)× (m+n) unimodular matrix[A matrix U is called unimodular if det(U)=± 1.] U∈GL(m+n,):𝒢_Λ=U𝒢'_Λ.Indeed U is the change of basis matrix between the primed and unprimed generator matrices since it is invertible and since it is also integral, it preserves the lattice. If we take the symmetric bilinear form ⟨·,·⟩ on ^m,^n to be the standard inner product, that is,⟨x,x'⟩=∑_i=1^mx^ix'^i ,where x=(x^1,…,x^m)∈^m and similarly x' and analogous inner product on ^n, then a lattice isomorphism between lattices of signature (m,n) can be identified with an element of O(m,n,) where O(m,n,) is the group of matrices, A, satisfying A^Tg_m,nA=g_m,n, g_m,n=[1_m0;0 -1_n ].We have the following theorem: <cit.> An even, self-dual lattice of signature (m,n) exists if and only if (m-n)≡ 0 8. Moreover, there is a unique such lattice when n≥ 1 up to an O(m,n,) transformation. The canonical choice of an even, self-dual lattice of signature ^m,n, denoted by II_m,n, is II_m,n={(a_1,…,a_m+n)∈^m,n:a_i∈ or a_i∈+1/2, ∑_i=1^m+na_i∈2}.A generator matrix for this lattice is 𝒢_II_m,n=[ [cccc|cccccc] 1 0 ⋯ 0 0 0 ⋯ 0 0-1; 0 1 ⋯ 0 0 0 ⋯ 0 0-1; ⋮ ⋮ ⋯ ⋮ ⋮ ⋮ ⋯ ⋮ ⋮ ⋮; 0 0 ⋯ 1 0 0 ⋯ 0 0-1; 0 0 ⋯ 0 1 0 ⋯ 0 0-1; 0 0 ⋯ 0 0 1 ⋯ 0 0-1; ⋮ ⋮ ⋯ ⋮ ⋮ ⋮ ⋯ ⋮ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ 1 0-1; 0 0 ⋯ 0 0 0 ⋯ 0 0 2; 1/2 1/2 ⋯ 1/2 1/2 1/2 ⋯ 1/2 1/2 1/2 ]. We will use this lattice to elucidate many of the notations which we now introduce.Consider a d-dimensional even, integral, Lorentzian lattice Λ⊂ℝ^m,n with Lorentzian inner product, denoted as before by ∘, where m+n = d. We will often write a vector λ∈Λ as λ=(α^λ,β^λ), where α^λ∈^m and β^λ∈^n. Then we can write λ_1∘λ_2=⟨α^λ_1,α^λ_2⟩-⟨β^λ_1,β^λ_2⟩∈ℤ.Note that in general ⟨α^λ_1,α^λ_2⟩,⟨β^λ_1,β^λ_2⟩∉ℤ.We define the -modulesΛ_1={α^λ | λ=(α^λ,β^λ)∈Λ for some β^λ∈^n}⊂^m,Λ_2={β^λ | λ=(α^λ,β^λ)∈Λ for some α^λ∈^m}⊂^n.Let {λ_i ≡ (α^λ_i, β^λ_i)}_i=1^d be a basis of Λ. Then it is easy to see that Λ_1=Span_{α^λ_i}_i=1^d,Λ_2=Span_{β^λ_i}_i=1^d.Note that in general Λ_1 and Λ_2 are not lattices, they are just finitely generatedmodules possibly with non-trivial torsion. For the lattice II_m,n in (<ref>), it is easy to see that(II_m,n)_1=^m⋃(+1/2)^m,(II_m,n)_2=^n⋃(+1/2)^n.We further identify even, integral, Euclidean sublattices of Λ as follows:Λ_1^0:={ (α,0)∈Λ | α∈^m},Λ_2^0:={ (0,β)∈Λ | β∈^n}. These can be identified naturally with submodules of Λ_1 and Λ_2 respectively. Clearly, Λ_1^0,Λ_2^0are sublattices of Λ since any submodule of a finitely generated free module is free. We also introduce the notation Λ_0 :=Λ_1^0⊕Λ_2^0.Note that the direct sum of Λ_1^0 and Λ_2^0 is meaningful as they are two ℤ-modules. 
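To illustrate these notations in the simplest case, consider the even, self-dual lattice II_1,1⊂ℝ^1,1 of (<ref>). Every a∈ℤ∪(ℤ+1/2) occurs as the first coordinate of some lattice vector (for instance (a,-a)∈ II_1,1 when a∈ℤ and (a,a+1)∈ II_1,1 when a∈ℤ+1/2), so (II_1,1)_1=ℤ∪(ℤ+1/2), and likewise for (II_1,1)_2. On the other hand, (a,0)∈ II_1,1 forces a∈ℤ and a=a+0∈2ℤ, so (II_1,1)_1^0≅ 2ℤ, (II_1,1)_2^0≅ 2ℤ and (II_1,1)_0=2ℤ⊕2ℤ. In particular (II_1,1)_0 is a proper submodule of II_1,1: for example (1,1)∈ II_1,1 but (1,1)∉(II_1,1)_0.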
For the lattice II_m,n in (<ref>), (II_m,n)_0=(II_m,n)_1^0⊕(II_m,n)_2^0 is easily seen to be generated by 𝒢_(II_m,n)_0:=([𝒢_m 0_m× n; 0_n× m 𝒢_n, ])where 𝒢_m:=[100⋯0 -1;010⋯0 -1;⋮⋮⋮⋯⋮⋮;000⋯1 -1;000⋯02 ]_m× m,and 𝒢_n is defined similarly. It is useful to characterize the automorphisms of Λ. We will take the symmetric bilinear form on ^m,^n to be the standard inner product for brevity. We have the following important result.Let Λ∈^m,n be an integral Lorentzian lattice. ThenAut(Λ)≅O_Λ(m,n,) where O_Λ(m,n,):={A∈GL(m+n,):𝒢_Λ^-1 A𝒢_Λ∈O(m,n,)}.Choose an integral basis {λ_i} of Λ. Then the group of -module automorphisms of Λ can be identified with GL(m+n,). Now given any λ,λ'∈Λ there exists coulumn vectors n,n'∈^m+n such that λ=n^T𝒢_Λ and λ'=n^'T𝒢_Λ. Any module automorphism A∈GL(m+n,) acts by A(λ)=n^TA𝒢_Λ.For A to preserve inner product, we must have A(λ)∘ A(λ') =n^TA𝒢_Λ g_m,n𝒢_Λ^TA^Tn' , =n^T𝒢_Λ g_m,n𝒢_Λ^Tn' ,which implies A𝒢_Λ g_m,n𝒢_Λ^TA^T=𝒢_Λ g_m,n𝒢_Λ^T .Since {λ_i} is a basis for ^m,n and A must preserve the inner products of λ_i's, we must have that A=𝒢_Λ O𝒢_Λ^-1 for some O∈O(m,n,). §.§ Construction of the LLVOALet Λ⊂^m,n be a d=(m+n)-dimensional Lorentzian lattice. We denote by [Λ] the group algebra of the lattice Λ and denote the element λ∈Λ embedded in [Λ] by e^λ. The multiplication in [Λ] is defined[Technically speaking, [Λ] is the group algebra of the formal group e^Λ:={ e^λ:λ∈Λ} with group multiplication given by e^λ_1· e^λ_2 =e^λ_1 + λ_2, i.e. [Λ] =[ e^Λ]] by e^λ_1· e^λ_2= e^λ_1+λ_2.Define the vector space h_i:=Λ_i⊗_, i=1,2 ,and extend the bilinear form on Λ_i toh_i -linearly. Here Λ_i is defined as in (<ref>).and h:=h_1⊕h_2.Note that h_1=Span_{α_1,…,α_d} h_2=Span_{β_1,…,β_d}. We define the Lie algebra ĥ :=( ⊕_r, s∈(h_1⊗ t^r)⊕(h_2⊗t̅^ s))⊕(k⊕k̅). Introduce the notation α(r):=α⊗ t^r,β(s):=β⊗t̅^ s,α∈h_1, β∈h_2.The non-zero Lie bracket on ĥ is [ α(r_1), α'(r_2) ]=r_1⟨α, α' ⟩ δ_r_1+r_2,0 k,[β(s_1), β'(s_2) ] =s_1⟨β, β' ⟩ δ_s_1+s_2,0 k̅. Further, for the basis {λ_i}_i=1^d of Λ, we defineα_i(m) := α^λ_i(m) , β_i(m) := β^λ_i(m)Then a general element α^λ(m) of h_1⊗ t^m can be written asα^λ(m) = ∑_i=1^d c_iα_i(m), whereλ = ∑_i=1^d c_iλ_i. Note that ĥ=ĥ_1^⋆⊕ĥ_2^⋆⊕ĥ_1^0⊕ĥ_2^0 ,where ĥ_1^⋆,ĥ_2^⋆ are the standard Heisenberg algebras associated to the abelian Lie algebras h_1,h_2 respectively <cit.> and ĥ_1^0:=h_1⊗ t^0≅h_1,ĥ_2^0:=h_2⊗t̅^0≅h_2.Define ĥ^-:=(⊕_r,s < 0(h_1⊗ t^r)⊕(h_2⊗t̅^ s)), ĥ^0:=(h_1⊗ t^0)⊕ (h_2⊗t̅^0) ⊕k⊕k̅,ĥ^+:=(⊕_r,s > 0(h_1 ⊗ t^r)⊕(h_2⊗t̅^ s) ).Note that ĥ=ĥ^-⊕ĥ^0⊕ĥ^+. We now define the space V_Λ := S(ĥ^-) ⊗[Λ_1^0⊕Λ_2^0] ,where Λ_1^0 and Λ_2^0 is defined as in (<ref>) and for any Lie algebra g,S(g) is the symmetric algebra for g, and [Λ_1^0⊕Λ_2^0] ≡[Λ_0] is considered as a subspace of [Λ].The space V_Λ is generated by elements of the form ( α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k) ·β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅) )⊗ e^(α,β)for m_i,m̅_i> 0,k,k̅≥ 0,(α,β)∈Λ_0, α_i ∈h_1, and β_i ∈h_2. The spaceV_Λ is a natural module of ĥ^-. We define theaction of ĥ^0 on [Λ ], and hence on [Λ_0], by α' (0) e^λ =⟨α' , α^λ⟩e^λ,β'(0) e^λ = ⟨β ' , β^λ⟩e^λ,where α'(0) ∈ĥ_1^0, β'(0) ∈ĥ_2^0. The central elements k and k̅ act on [Λ] as identity.Let ĥ^+ act on [Λ] by0. We can extend the action of these subspaces of ĥ to V_Λ by using the Lie bracket given in (<ref>). This makes V_Λintoan ĥ-module. We define a ℤ-bilinear map ϵ: Λ×Λ→ℤ, which acts on the basisas <cit.> ϵ(λ_i, λ_j)= λ_i ∘λ_ji>j ,0 i ≤ j ,where {λ_i}_i=1^m+n is an integral basis of Λ. The action of ϵ on general vectors is defined by the -bilinearity of ϵ. 
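Although ϵ is not symmetric, its failure of symmetry is measured by the bilinear form modulo 2: from (<ref>) one has ϵ(λ_i,λ_j)+ϵ(λ_j,λ_i)=λ_i∘λ_j for i≠ j, while ϵ(λ_i,λ_i)=0 and λ_i∘λ_i∈2ℤ because Λ is even. By ℤ-bilinearity it follows that ϵ(λ,μ)+ϵ(μ,λ)≡λ∘μ (mod 2) for all λ,μ∈Λ. This observation underlies the commutation relation e_λ e_μ=(-1)^λ∘μ e_μ e_λ established below, and it is precisely here that evenness and integrality of Λ enter.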
ConsiderΛ̂=_2×Λ, with the multiplication on it given by (θ, λ) · (τ, λ') =(θτ (-1)^ϵ(λ, λ'), λ + λ' ).We now consider the _2 central extension of the lattice Λ:0⟶_2⟶Λ̂⟶Λ⟶ 0. We denote elements (1,λ),(θ,0)∈Λ̂ by e_λ=(1,λ) and θ=(θ,0) respectively. Then it is easy to check that(θ,λ)=θe_λ= e_λθ,and e_λ e_μ=(-1)^ϵ(λ,μ)e_λ+μ .Using the above relation, it can be shown that (see Lemma <ref> for proof) e_λ e_μ=(-1)^λ∘μe_μe_λ .This property requires that the lattice be even and integral. Note that we chose an integral basis of Λ to define the central extension. In Appendix <ref> we show that a cocycle ϵ̃ defined analogous to (<ref>) for a different choice of basis is cohomologous to ϵ, and hence gives rise to an isomorphic central extension. Λ̂ acts on [Λ] as follows(θ,λ') e^λ=θ(-1)^ϵ(λ',λ)e^λ + λ'.In particular for (θ,λ') = (1, λ' ) =e_λ', we have e_λ'e^λ= (-1)^ϵ(λ',λ)e^λ + λ'.Note that the same cocycle ϵ restricted to Λ_0ϵ:Λ_0×Λ_0⟶,defines a central extension Λ̂_0:=_2×Λ_0⊂Λ̂:0⟶_2⟶Λ̂_0⟶Λ_0⟶ 0.Moreover, the action (<ref>) restricted to Λ̂_0 makes [Λ_0] into a Λ̂_0-module.This makes V_Λ into a Λ̂_0-module where Λ̂_0 acts only on [Λ_0]. Let x,x̅ be formal variables. For any vector λ = (α^λ, β^λ), define the operators x^α^λ , x̅^β^λ by the following actions x^α^λ (u ⊗ e^λ ' )= x^⟨α^λ,α^λ'⟩ (u ⊗ e^λ') , x̅^β^λ (u ⊗ e^λ')=x̅^⟨β^λ,β^λ'⟩(u ⊗ e^λ') ,where u∈ S(ĥ^-),λ'∈Λ_0. Note that x^α^λ,x̅^β^λ acts as x^α^λ(0),x̅^β^λ(0). We can define the central extension Λ̂ of Λ by exactly the same construction as above. The action of Λ̂ on [Λ] can also be defined in exactly the same way. For λ=(α^λ,β^λ) ∈Λ_0, define the vertex operators Y_V_Λ( e^λ,x,x̅):= [exp(-∑_r<0α^λ(r)/rx^-r)exp(-∑_r>0α^λ(r)/rx^-r)..exp(-∑_r<0β^λ(r)/rx̅^-r)exp(-∑_r>0β^λ(r)/rx̅^-r)] e_λx^α^λx̅^β^λ .From the Lie bracket in (<ref>), it is easy to show [α^λ(r), β^λ(s)] = 0for all r,s ∈ℤ, so that the order of exponentials with α^λ(r) and β^λ(r) does not matter. For a formal variable x, we introduce the notation α^λ(x) =∑_r > 0α^λ(r) x ^-r-1_α^λ(x)^+ + ∑_r < 0α^λ(r) x ^-r-1_α^λ(x)^- +α^λ(0) x^-1 ,Similarly, we can also define β^λ(x̅). We define the formal integration as the map by∫ dx x^r=x^r+1/r+1, n≠ -1 .We can then write the vertex operator as Y_V_Λ( e^λ,x,x)=exp(∫ dx α^λ(x)^-) exp(∫ dx α^λ(x)^+) × exp(∫ dx̅ β^λ(x)^-)exp(∫ dx̅ β^λ(x)^+) e_λx^α^λx^β^λ. For a general vector v of the form (<ref>), the vertex operator is defined as Y_V_Λ(v,x,x̅)=∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r - 1 α_r(x)/dx^m_r-1)(1/(m̅_s-1)!d^m̅_s-1β_s(x̅)/dx̅^m̅_s-1) Y_V_Λ( e^λ,x,x̅),where the normal orderingis defined as α^λ(p)α^λ'(q) = α^λ'(q)α^λ(p) =α^λ(p)α^λ'(q)p≤ q ,α^λ'(q)α^λ(p)p≥ q , α^λ(p)e_λ' =e_λ' α^λ(p) = e_λ' α^λ(p) , x^α^λe_λ' =e_λ'x^α^λ =e_λ'x^α^λ, and similarly for β^λ and x̅^β^λ. The vertex operator for general vectors in V_Λ is defined by linear extension to all of V_Λ.Using the central extension (<ref>),(<ref>) can be used to define vertex operators even if e^λ∈[Λ]. These vertex operators will act on vectors of the form (<ref>) with e^λ∈[Λ] rather than [Λ_0]. This will be crucial when we construct module vertex operators and intertwining operators on the modules of V_Λ.The vacuum vector is given by 1=e^0. The conformal vector is constructed below, see (<ref>).§.§ Proof of axiomsWe now prove that (V_Λ, Y_V_Λ,ω_L, ω_R, 1 ) is a non-chiral VOA. Proof of identity property <ref>: From the definition (<ref>), it is clear that Y_V_Λ(1,x,x̅)= e_0=1. 
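In more detail: for λ=0 we have α^0=β^0=0, so all the exponentials in (<ref>) reduce to the identity and x^α^0, x̅^β^0 act as x^0=x̅^0=1; moreover ϵ(0,λ')=0 by ℤ-bilinearity, so by (<ref>) e_0 fixes every e^λ' and hence acts as the identity on V_Λ.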
Proof of grading-restriction property <ref>: The grading on V_Λ is given by defining the conformal weight of vector v of the form (<ref>) byh_v=⟨α,α⟩/2+∑_i=1^km_i,h_v=⟨β,β⟩/2+∑_j=1^k̅m̅_j ,where m_i and m̅_j are positive integers appearing in (<ref>).Note that for e^λ with λ=(α,β)∈Λ_0, we have (α,0) ∈Λ, (α,0) ∘ (α,0) = ⟨α, α⟩∈ 2 _+. Then we have that h_v,h_v≥ 0 so that V_(h,h̅)=0 for h or h<0, i.e. M=0 in (<ref>). Similarly, ⟨β, β⟩∈ 2 _+, which implies that both h_v and h_v are positive integers [Note that this argument also works for a general e^λ∈[Λ] with λ∈Λ since ⟨α,α⟩,⟨β,β⟩≥ 0 even in this case.]. We will now show that dim(V_(h,h̅))< ∞.Note that Λ_1^0 and Λ_2^0 are lattices. It suffices to show that there exist only finitely many vectors of the form (<ref>), satisfying the conditions in (<ref>). We first show that for any h,h̅∈ℝ the number of distinct λ = (α, β) ∈Λ_0 satisfying ⟨α,α⟩≤ 2 hand ⟨β,β⟩≤ 2 h̅ ,where α∈Λ_1^0, β∈Λ_2^0,can be only finitely many.Consider the setsX_1 = {α∈Λ_1^0 | ⟨α, α⟩≤ 2h}, X_2 ={β∈Λ_2^0 | ⟨β, β⟩≤ 2h̅},which have finite cardinality, say N_1 and N_2, due to the fact that Λ_1^0 and Λ_2^0 are discrete. Then the set X ={λ=(α,β)∈Λ_0 | ⟨α, α⟩≤ 2h,  ⟨β, β⟩≤ 2h̅}is finite because the mapX⟶ X_1× X_2λ=(α,β)⟼ (α,β)is injective. More precisely # X≤ N_1N_2.Now, as there are only finitely many combinations of positive integers { m_i }_i = 1^k and {m̅_i }_i = 1^k̅ such that h_v - ⟨α,α⟩/2 = ∑_i=1^km_i,h_v - ⟨β,β⟩/2 = ∑_j=1^k̅m̅_j ,hence there are only finitely many generating vectors possible, which implies that dim(V_h, h̅) < ∞.Proof of single-valuedness property <ref>: For the general vector v of the form (<ref>) we have h_v-h̅_v =⟨α,α⟩-⟨β,β⟩/2+∑_i=1^km_i-∑_j=1^k̅m̅_j=λ∘λ/2+∑_i=1^km_i-∑_j=1^k̅m̅_j∈ℤ,where we used the fact that Λ is an even Lorentzian lattice.Proof of creation property <ref>:We want to show that for any state v ∈ V_Λ, lim_x,x̅→ 0 Y_V_Λ(v,x, x̅)1 = v.Let us first consider the case when v =e^λ, then the Y_V_Λ operator is given in (<ref>). One then has to expand the exponentials, we ignore the terms when α^λ(n) and β^λ(n) have n > 0, as they annihilate [Λ_0]. The two exponentials that remain will only have positive powers of x andx̅, which vanish when we take the limit. Hence lim_x ,x̅→ 0Y_V_Λ( e^λ , x, x̅)1 =e_λ·1.Here, we used the fact that 1= e^0 so that the action of x^α^λ (x̅^β^λ) on this is by identity, since ⟨α^λ, 0 ⟩=0 (⟨β^λ, 0 ⟩=0 ). Hence lim_x ,x̅→ 0Y_V_Λ( e^λ , x, x̅)1 =e_λe^0 = (-1)^ϵ (0,λ) e^λ =e^λ,where we have used (<ref>), (<ref>) and that e_λ = (1, λ).We now prove (<ref>) for a general vector v of the form (<ref>). The normal ordering in the definition (<ref>) and the fact that ĥ^+ annihilates [Λ_0] forces the product to take the formd^m_r-1α_r(x)/dx^m_r-1→∑_p_r≤ -m_r(m_r-1)!α_r(p_r)x^-p_r-m_r,and d^m̅_s-1β_s(x̅)/dx̅^m̅_s-1→∑_q_s≤ -m̅_s(m̅_s-1)!β_s(q_s)x̅^-q_s-m̅_s.Thus we have∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r-1α_r(x)/dx^m_r-1)(1/(m̅_s-1)!d^m̅_s-1β_s(x̅)/dx̅^m̅_s-1) Y_V_Λ( e^λ,x,x̅)1→∑_p_1≤ -m_1 ... p_k≤ -m_kα_1(p_1)α_2(p_2)…α_k(p_k)x^-(p_1+… p_k)-(m_1+…+m_k)×∑_q_1≤ -m̅_1 ... q_k̅≤ -m̅_k̅β_1(q_1)β_2(q_2)…β_k̅(q_k̅)x̅^-(q_1+… q_k̅)-(m̅_1+…+m̅_k̅)Y_V_Λ( e^λ,x,x̅)1.When we take x,x̅→ 0 only the p_r=-m_r,q_s=-m̅_s terms in the sum survives. Combining this fact with the proof of (<ref>) for v= e^λ, we getlim_x,x̅→ 0 Y_V_Λ(v,x,x̅)1=(α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k)β_1(-m_1)·β_2(-m_2)⋯β_ℓ(-m_k))⊗ e^λ =v,where we also use the fact that Y_V_Λ( e^λ,x,x̅) can only contribute terms with x^n and x̅^m, where n and m are greater than 0. 
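As a concrete instance of the creation property, take v=α_1(-1)⊗ e^0 with α_1∈h_1. By (<ref>), Y_V_Λ(v,x,x̅)1=α_1(x)Y_V_Λ( e^0,x,x̅)1: the modes α_1(r) with r>0 annihilate e^0, α_1(0) acts by ⟨α_1,0⟩=0, and of the remaining sum ∑_r<0α_1(r)x^-r-1 e^0 only the r=-1 term survives the limit x,x̅→ 0, giving back α_1(-1)⊗ e^0=v.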
are certain integer depending on the lattice Λ. The existence of such vectors is proved in Appendix <ref>. Note that there exists μ_i,ν_i∈h such that u_i=α^μ_i, v_j=β^v_j, i=1,…,m, j=1,…,n. Proof of Virasoro property <ref>: Theconformal vector is given byω_Λ := 1/2∑_i = 1^dim(h_1)( u_i(-1)^2 )⊗1+1/2∑_i = 1^dim(h_2)( v_i(-1)^2)⊗1≡ω_L + ω_R,where u_i∈Λ_1⊗_, v_i∈Λ_2⊗_ are orthonormal basis of h_1 and h_2 respectively[Note that a different choice of orthonormal basis will give isomorphic LLVOAs. Indeed if {u_i'} and {v_j'} are orthonormal bases of h_1 and h_2 respectively, different from {u_i} of and {v_j}. Then the map f:V_Λ⟶ V_Λ which acts trivially on [Λ_0] and maps u_i↦ u_i', v_i↦ v_i' is a non-chiral VOA isomorphism.]: ⟨ u_i,u_j⟩=δ_i,j,⟨ v_i,v_j⟩=δ_i,j. Since an integral basis of Λ is also a basis of ^m,n, it is clear that dim(h_1)=m and dim(h_2)=n. One can check that the conformal vertex operator is given by Y_V_Λ(ω,x,x̅)= Y_V_Λ(ω_L,x,x̅)+Y_V_Λ(ω_R,x,x̅)=∑_p∈L_Λ(p)x^-p-2+∑_p∈L_Λ(p)x̅^-p-2,where the Virasoro generators are given by <cit.> L_Λ(p)=1/2∑_i=1^m∑_k∈ u_i(k)u_i(p-k) L̅_Λ(p)=1/2∑_i=1^n∑_k∈ v_i(k)v_i(p-k).Using the Lie brackets[ u_i(p), u_j(q)]=pδ_i,j δ_p+q,0 k, [v_i(p), v_j(q)] =pδ_i,j δ_p+q,0 k̅,one can show that the Virasoro generators indeed satisfy the Virasoro algebra (<ref>) with central charge m=dim(h_1), n=dim(h_2) respectively, see <cit.>. Proof of grading property <ref>: From (<ref>) and normal ordering (<ref>), we have L_Λ(0)=1/2∑_i=1^m∑_r∈ u_i(r)u_i(-r)= ∑_i=1^m∑_r>0u_i(-r)u_i(r)+1/2u_i(0)^2, L̅_Λ(0)=1/2∑_i=1^n∑_r∈ v_i(r)v_i(-r)= ∑_i=1^n∑_r>0v_i(-r)v_i(r)+1/2v_i(0)^2.Then for v∈ V_Λ of the form (<ref>), using the Lie bracket (<ref>) and the action (<ref>), we have L_Λ(0)v =∑_j=1^k[⋯(m_j∑_i=1^m⟨ u_i,α_j⟩ u_i(-m_j))⋯]⊗ e^λ+1/2∑_i=1^m⟨ u_i,α⟩^2v, =∑_j=1^km_j[⋯α_j(-m_j)⋯]⊗ e^λ+1/2∑_i=1^m⟨⟨ u_i,α⟩ u_i,α⟩ v, =[∑_j=1^km_j+⟨α,α⟩/2]v ,where we used the fact that {u_i} is an orthonormal basis of h_1. Similarly L̅_Λ(0)v=[∑_j̅=1^k̅m̅_j̅+⟨β,β⟩/2]v . Proof of L(0)-property <ref>: Let (α, β) ∈Λ_0. Then[L_Λ(0), α(x)^±]=1/2∑_r ∈ℤ_±∑_s ∈ℤ∑_i=1^m[ u_i(s) u_i(-s), α(r)] x^-r-1=1/2∑_r ∈ℤ_±∑_i=1^m(∑_s ≥ 1[u_i(-s) u_i(s), α(r)]+[u_i(0)^2, α(r)]. .+∑_s ≤-1[u_i(s) u_i(-s), α(r)]) x^-r-1=1/2∑_r ∈ℤ_±∑_s ≠ 0(s α(-s) δ_s+r, 0-n α(s)) δ_r-s, 0 x^-r-1=∑_r ∈ℤ_±∑_s ≠ 0 n α(-s) δ_s+r, 0 x^-r-1=-∑_r ∈ℤ_± r α(r) x^-r-1,where we used∑_i=1^m[u_i(-s) u_i(s), α(r)] =∑_i=1^m u_i(-s)[u_i(s), α(r)]+[u_i(-s), α(r)] u_i(s)=∑_i=1^m(s⟨ u_i, α⟩δ_s+r, 0 u_i(-s)-s⟨ u_i, α⟩δ_r-s, 0 u_i(s))=s α(-s) δ_s+r, 0-s α(s) δ_r-s, 0 .Rearranging terms, we get [L_Λ(0), α(x)^±]=x d α(x)^±/d x+α(x)^±.Similarly[L̅_Λ(0), β(x̅)^±]=x̅d/d x̅β(x̅)^±+β(x̅)^± .Note that the same proof also shows that [L_Λ(0), α(x)]=x d/d xα(x)+α(x)[L̅_Λ(0), β(x̅)]=x̅d/d x̅β(x̅)+β(x̅).Next using (<ref>) we have [L_Λ(0), ∫ d x α(x)^±] =∫ d x[L_Λ(0), α(x)^±]=∫ d x(x d/d xα(x)^±+α(x)^±)=x α(x)^±,where we used integration by parts for the formal integration. 
Similarly[L̅_Λ(0), ∫ d x̅ β(x̅)^±]=x̅β(x̅)^±.By BCH formula (<ref>) we haveL_Λ(0) exp(∫ d x α(x)^±) =exp(-∫ d x α(x)^±) ∑_n=0^∞1/n ![(∫ d x  α(x)^±)^n, L_Λ(0)]=exp(∫ d x α(x)^±) L_Λ(0)+∑_n=1^∞1/n ![(-∫ d x α(x)^±)^n, L_Λ(0)].By (<ref>) and the fact that [α(r), α(s)]=0 for r, s ≤ 0 or r, s ≥ 0, we get[L_Λ(0), exp(∫ d x α(x)^±)]=x exp(∫ d x α(x)^±) α(x)^±.Similarly[L̅_Λ(0), exp(∫ d x̅ β(x)^±)]=x̅exp(∫ d x̅ β(x̅)^±) β(x̅)^± .Finally, it is clear that for λ^'=(α^', β^') and u∈ S(h^-) we have[L_Λ(0),e_λ x^α](u ⊗ e^λ^') =(-1)^ϵ(λ, λ^')(⟨α+α^', α+α^'⟩/2-⟨α^', α^'⟩/2) x^⟨α, α^'⟩(u ⊗ e^λ+λ^')=(⟨α, α⟩/2+⟨α, α^'⟩)(-1)^ϵ(λ,λ^')x^⟨α, α^'⟩(u ⊗ e^λ+λ^') =(⟨α, α⟩/2 e_λ x^α+ e_λ x d/d x x^α)(u ⊗ e^λ^')Putting all this together, we obtain[L_Λ(0), Y_V_Λ( e^λ, x, x̅)]=x d/d x[exp(∫ dx α(x)^-) exp(∫ dx α(x)^+)] ×exp(∫ dx̅ β(x̅)^-) exp(∫ dx̅ β(x̅)^+)e_λ x^αx̅^β +exp(∫ dx α(x)^-) exp(∫ dx α(x)^+) ×exp(∫ dx̅ β(x̅)^-) exp(∫ dx̅ β(x̅)^+) ×(⟨α, α⟩/2 e_λ x^α+ e_λ x d/d x x^α)=x d/d x Y_V_Λ( e^λ, x, x̅)+⟨α, α⟩/2 Y_V_Λ( e^λ, x, x̅) .Similarly[L̅_Λ(0), Y_V_Λ( e^λ, x, x̅)]=x̅d/d x̅ Y_V_Λ( e^λ, x, x̅)+⟨β, β⟩/2 Y_V_Λ( e^λ, x, x̅) .For general vertex operators, we observe that[L_Λ(0), d^r/d x^rα(x)] =d^r/d x^r(x d/d xα(x)+α(x))=d^r-1/d x^r-1(d/d xα(x)+x d^2/d x^2α(x))+d^r/d x^rα(x)=x d^r/d x^rα(x)+(r+1) d^r/d x^rα(x).This implies that for a general vector of the form (<ref>) we have[L_Λ(0), Y_V_Λ(v, x, x̅)] =x d/d x Y_V_Λ(v, x, x̅)+(∑_i=1^k m_i+⟨α, α⟩/2) Y_V_Λ(v, x, x̅)=x d/d x Y_V_Λ(v, x, x̅)+Y_V_Λ(L_Λ(0) v, x, x̅) .Similarly[L̅_Λ(0), Y_V_Λ(v, x, x̅)]=x̅d/d x̅ Y_V_Λ(v, x, x̅)+Y_V_Λ(L̅_Λ(0) v, x, x̅) . Proof of translation property <ref>: Observe that[L_Λ(-1),α(-r)] = 1/2∑_i=1^m∑_s∈ [u_i(s)u_i(-1-s) , α(-r) ] =1/2∑_i=1^m∑_s∈[ u_i( -1 - s) u_i(s) , α(-r)]=1/2∑_i=1^m∑_s∈( u_i( -1 - s) [ u_i(s) , α(-r)] + [ u_i( -1 - s) , α(-r)] u_i(s) ) =1/2∑_i=1^m∑_s∈(s δ_s - r, 0⟨ u_i , α⟩ u_i( -1 - s)- (1 + s ) δ_r + s + 1, 0⟨ u_i , α⟩ u_i(s)) =1/2(rα(-1 - r)- (- r)α( - 1 - r) ) = rα(- r- 1).Using the above commutator, it is easy to see thatL_Λ(-1)( α_1(-m_1)⋯α_k(-m_k)β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅) )⊗ e^(α,β)=∑_i=1^m( α_1(-m_1)⋯α_k(-m_k)β_1(-m̅_1)⋯β_k̅(-m̅_k̅))⊗ L_Λ(-1)e^(α,β)+∑_i=1^km_i( α_1(-m_1)⋯α_i(-1-m_i)⋯α_k(-m_k)β_1(-m̅_1)⋯β_k̅(-m̅_k̅) )⊗ e^(α,β).Now since L_Λ(-1)e^(α,β) =1/2∑_i=1^m∑_s∈ u_i(s)u_i(-1-s) e^(α,β)=∑_i=1^mu_i(0)u_i(-1)e^(α,β)=∑_i=1^m⟨ u_i,α⟩ u_i(-1)e^(α,β)=α(-1) e^(α,β),hence we get the action of L_Λ(-1) on generating vectors v of the form (<ref>) to be:L_Λ(-1)v =∑_i=1^km_i( α_1(-m_1)⋯α_i(-1-m_i)⋯α_k(-m_k)β_1(-m̅_1)⋯β_k̅(-m̅_k̅) )⊗ e^(α,β)+(α(-1) α_1(-m_1)⋯α_i(-1-m_i)⋯α_k(-m_k)β_1(-m̅_1)⋯β_k̅(-m̅_k̅) )⊗ e^(α,β).The proof of the translation property now follows from exact same calculation as in <cit.>.Following <cit.>, one can show that ∂/∂ xY_V_Λ(v,x,x̅)=Y_V_Λ(L(-1)v,x,x̅) ∂/∂x̅Y_V_Λ(v,x,x̅)=Y_V_Λ(L̅(-1)v,x,x̅).Then §.§.§ Proof of locality of vertex operatorsIn this section, we will prove the locality of two vertex operators and defer the proof of product of multiple vertex operators to Appendix <ref>. The vertex operators Y_V_Λ( e^λ,x,x̅) for λ∈Λ_0 satisfy the locality property <ref>. 
More precisely there exists, multi-valued, operator-valued functions f(z_1, z_2 ) and g(z̅_1, z̅_2) analytic in z_1,z_2 and z̅_1, z̅_2 respectively with possible singularities at {(z_1,z_2) ∈^2|z_1, z_2 ≠ 0, z_1 ≠ z_2}, such that f(z_1, z_2) g(z̅_1, z̅_2) is single-valued when z̅_1, z̅_2 are the complex conjugates of z_1,z_2 respectively and equals Y_V_Λ( e^λ, z_1, z̅_1) Y_V_Λ( e^λ', z_2, z̅_2)when| z_1 | > | z_2 | , Y_V_Λ( e^λ', z_2, z̅_2) Y_V_Λ( e^λ, z_1, z̅_1)when| z_2 | > | z_1 | . We will closely follow the proofs of results in <cit.>. We begin by proving that[α(x_1),α'(x_2)]=⟨α,α'⟩[(x_2-x_1)^-2-(-x_2+x_1)^-2] [β(x̅_1),β'(x̅_2)]=⟨β,β'⟩[(x̅_2-x̅_1)^-2-(-x̅_2+x̅_1)^-2]where λ=(α,β),λ'=(α',β'). We have[α(x_1), α'(x_2)] =∑_r, s ∈ℤ[α(r), α'(s)] x_1^-r-1 x_2^-s-1 =∑_r, s ∈ℤ⟨α, α'⟩ rδ_r+s, 0x_1^-r-1 x_2^-s-1 =-⟨α, α'⟩∑_s ∈ℤ s x_1^s-1 x_2^-s-1 =-⟨α, α'⟩∂/∂ x_1∑_s ∈ℤ x_1^s x_2^-s-1 =-⟨α, α'⟩∂/∂ x_1((x_1-x_2)^-1-(-x_2+x_1)^-1)=⟨α, α'⟩((x_1-x_2)^-2-(-x_2+x_1)^-2),Note that this commutator is also true for complex variables x_1=z_1,x_2=z_2. Indeed from (<ref>)α(z_1)α'(z_2) =α(z_1)α(z_2) +⟨α,α'⟩∑_s∈sz_1^-s-1z_2^s-1=α(z_1)α(z_2) +⟨α,α'⟩/(z_1-z_2)^2, |z_1|>|z_2|,and α'(z_2)α(z_1) =α'(z_2)α(z_1) -⟨α',α⟩∑_s∈sz_2^-s-1z_1^s-1=α'(z_2)α(z_1) -⟨α',α⟩/(z_2-z_1)^2, |z_2|>|z_1|.It is easy to see that α(z_1)α'(z_2)=α'(z_2)α(z_1) ,which gives us the commutator. In particular[α(z_1), α'(z_2)]=0.The other Lie bracket in (<ref>) can be proved similarly.Using (<ref>), we can show that[we will use exp and e interchangeably. ] [α' (x_2)^-,e^∫α(x_1)^+ d x_1]=⟨α, α'⟩/x_1-x_2 e^∫α(x_1)^+ d x_1 .Integrating both sides of (<ref>) gives us [-∫α'. . (x_2)^-dx_2,e^∫α(x_1)^+ d x_1] =(⟨α, α'⟩log(x_1-x_2)-⟨α, α'⟩log x_1)e^∫α(x_1)^+ d x_1 = ⟨α, α' ⟩(log(x_1 - x_2) - log(x_1))e^∫α(x_1)^+ d x_1 = ⟨α, α'⟩log(1-x_2/x_1)e^∫α(x_1)^+ d x_1 .One can write analogous formulas for [β'(x̅_1)^±,exp(∫β(x_2)^∓)]. Using the BCH identity exp(X) Y exp(-X) = ∑_s=0^∞[(X)^s, Y]/s !,where [X^s, Y] = [X … ,[X, [X_stimes ,Y]] … ],[X^0, Y ] ≡ Y,we get exp(-∫ dx_2 α'(x_2)^-)exp(∫ dx_1 α(x_1)^+) exp(∫ dx_2 α'(x_2)^-)=(1-x_2/x_1)^⟨α,α'⟩exp(∫ dx_1 α(x_1)^+).To show the locality of vertex operator, we will also require the identities x_1^α e_λ'= x_1^⟨α, α' ⟩ e_λ 'x_1^α,x̅_2^β e_λ'= x̅_2^⟨β, β' ⟩ e_λ 'x̅_2^β,the first of which is shown belowx_1^αe_λ' (u ⊗ e^λ”)= (-1)^ϵ(λ',λ”)x_1^α (u ⊗e^λ' + λ”) = (-1)^ϵ(λ',λ”)x_1^⟨α, α' ⟩ + ⟨α, α”⟩ (u ⊗e^λ' + λ”)e_λ'x_1^α(u ⊗ e^λ”)= x_1^⟨α, α”⟩ e_λ' (u ⊗ e^λ”)=(-1)^ϵ(λ', λ”)x_1^⟨α, α”⟩ (u ⊗ e^λ' + λ”).We now have[We will often write ∫ dx α(x)=∫α(x) to simplify the expressions.] 
Y_V_Λ( e^λ,x_1,x̅_1)Y_V_Λ( e^λ ' ,x_2,x̅_2)=exp(∫α(x_1)^-)exp(∫α(x_1)^+)exp(∫β(x̅_1)^-)exp(∫β(x̅_1)^+) e_λ x_1^αx_1^βexp(∫α'(x_2)^-)exp(∫α'(x_2)^+)exp(∫β'(x̅_2)^-)exp(∫β'(x̅_2)^+)e_λ'x_2^α'x_2^β'.Now, utilizing (<ref>) and the fact that e_λ, x_1^α, and x̅_1^βcommute with exponential of integrals=exp(∫α(x_1)^-)[exp(∫α(x_1)^+)exp(∫α'(x_2)^-)]exp(∫α'(x_2)^+)exp(∫β(x̅_1)^-)[exp(∫β(x̅_1)^+)exp(∫β'(x̅_2)^-)]exp(∫β'(x̅_2)^+)e_λ x_1^αx_1^β e_λ'x_2^α'x_2^β' .After which we use (<ref>) and (<ref>) to write=(1-x_2/x_1)^⟨α,α'⟩(1-x_2/x̅_1)^⟨β,β'⟩x_1^⟨α,α'⟩x_1^⟨β,β'⟩exp(∫α(x_1)^-)exp(∫α'(x_2)^-)exp(∫α(x_1)^+)exp(∫α'(x_2)^+)exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^-)exp(∫β(x̅_1)^+)exp(∫β'(x̅_2)^+)e_λ e_λ'x_1^αx_1^β x_2^α'x_2^β' .Finally we use (<ref>) to collect terms to get=(x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩exp(∫α(x_1)^-)exp(∫α'(x_2)^-)exp(∫α(x_1)^+)exp(∫α'(x_2)^+)exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^-)exp(∫β(x̅_1)^+)exp(∫β'(x̅_2)^+) e_λ e_λ'x_1^αx_1^β x_2^α'x_2^β'≡(x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩F(x_1, x_2) F̅(x̅_1, x̅_2 ),where we used (<ref>), (<ref>) and (<ref>) and F(x_1, x_2) F̅(x̅_1, x̅_2 ) contains the operator part of Y_V_Λ( e^λ,x_1,x̅_1)Y_V_Λ( e^λ ' ,x_2,x̅_2). Similarly we have Y_V_Λ( e^λ',x_2,x̅_2)Y_V_Λ( e^λ,x_1,x̅_1)=(x_2-x_2)^⟨α,α'⟩(x_2-x_1)^⟨β,β'⟩exp(∫α'(x_2)^-)exp(∫α(x_1)^-)exp(∫α'(x_2)^+)exp(∫α(x_1)^+)exp(∫β'(x̅_2)^-)exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^+)exp(∫β(x̅_1)^+) (-1)^λ∘λ' e_λ e_λ'x_1^αx_1^β x_2^α'x_2^β' = (-1)^λ∘λ'(x_2-x_1)^⟨α,α'⟩(x_2-x_1)^⟨β,β'⟩ F(x_1, x_2) F̅(x̅_1, x̅_2 ),where we used (<ref>). Note that (-x_1+x_2)^⟨α,α'⟩=(x_2-x_1)^⟨α,α'⟩ when[Recall that when s∈, (-x_1+x_2)^s is to be expanded in positive integral powers of x_2 as in (<ref>).]⟨α,α'⟩≥ 0. To prove locality, we take complex variables x_1=z_1,x_2=z_2 andx̅_1=z_1,x̅_2=z_2. Note that when we plug complex variable in place of formal variable, we must consider (x_1-x_2)^s as a formal series so that Y_V_Λ( e^λ,z_1,z̅_1)Y_V_Λ( e^λ ' ,z_2,z̅_2)=(∑_p≥ 0(-1)^pz_1^⟨α,α'⟩-pz_2^p) (∑_q≥ 0(-1)^qz̅_1^⟨β,β'⟩-qz̅_2^q)× F(z_1, z_2) F̅(z̅_1, z̅_2 ),and similarly Y_V_Λ( e^λ ' ,z_2,z̅_2)Y_V_Λ( e^λ,z_1,z̅_1). To complete the proof of locality, consider the operator valued functions f(z_1,z_2)=exp(⟨α,α'⟩log(z_1-z_2))F(z_1,z_2) , g(z̅_1,z̅_2)=exp(⟨β,β'⟩log(z̅_1-z̅_2))F̅(z̅_1,z̅_2) .Then by (<ref>) for |z_1|>|z_2| we see that f(z_1,z_2) g(z̅_1,z̅_2)=(∑_p≥ 0(-1)^pz_1^⟨α,α'⟩-pz_2^p)(∑_q≥ 0(-1)^qz̅_1^⟨β,β'⟩-qz̅_2^q) F(z_1, z_2) F̅(z̅_1, z̅_2 ).For |z_2|>|z_1| we have f(z_1 ,z_2) g(z̅_1,z̅_2)=exp(⟨α,α'⟩log(-(z_2-z_1)))exp(⟨β,β'⟩log(-(z̅_2-z̅_1)))= e^iπ(⟨α,α'⟩-⟨β,β'⟩)exp(⟨α,α'⟩log(z_2-z_1))exp(⟨β,β'⟩log(z̅_2-z̅_1))F(z_1,z_2)F̅(z̅_1,z̅_2)= (-1)^λ∘λ'(∑_p≥ 0(-1)^pz_2^⟨α,α'⟩-pz_1^p)(∑_q≥ 0(-1)^qz̅_2^⟨β,β'⟩-qz̅_1^q)F(z_1, z_2) F̅(z̅_1, z̅_2 ),where we used the fact that in the principal branch of logarithm to writelog(-z)=log|z|+i(π+Arg(z)),log(-z̅)=log|z|-i(π+Arg(z))with -π<π+Arg(z)<π. From the calculations above, it is easy to see that the following formal commutativity axiom holds for the vertex operators: there exists K,K∈ such that (x_1-x_2)^K(x_1-x_2)^K[Y_V_Λ( e^λ,x_1,x̅_1),Y_V_Λ( e^λ ' ,x_2,x̅_2)]=0.Indeed we have [Y_V_Λ( e^λ,x_1,x̅_1),. 
.Y_V_Λ( e^λ ' ,x_2,x̅_2)]=((x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩..-(-1)^λ∘λ'(x_2-x_1)^⟨α,α'⟩(x_2-x_1)^⟨β,β'⟩)F(x_1, x_2) F̅(x̅_1, x̅_2 )= ((x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩..-(-x_2+x_1)^⟨α,α'⟩(-x_2+x_1)^⟨β,β'⟩)F(x_1, x_2) F̅(x̅_1, x̅_2 )Since (α,β),(α',β')∈Λ_0, we have⟨α,α'⟩,⟨β,β'⟩∈ and we can choose K,K̅∈ large enough such that K+⟨α,α'⟩∈,K̅+⟨β,β'⟩∈.We then get (x_1-x_2)^K(x_1-x_2)^K [Y_V_Λ( e^λ,x_1,x̅_1),Y_V_Λ( e^λ ' ,x_2,x̅_2)] =((x_1-x_2)^K+⟨α,α'⟩. .(x_1-x_2)^K+⟨β,β'⟩. - .(-x_2+x_1)^K+⟨α,α'⟩(-x_2+x_1)^K+⟨β,β'⟩)F(x_1, x_2) F̅(x̅_1, x̅_2 ) =((x_1-x_2)^K+⟨α,α'⟩. .(x_1-x_2)^K+⟨β,β'⟩. -.(x_1-x_2)^K+⟨α,α'⟩(x_1-x_2)^K+⟨β,β'⟩)F(x_1, x_2) F̅(x̅_1, x̅_2 ) =0. We now prove the locality for general vertex operators.The vertex operators Y_V_Λ(v,x,x̅), where v is the general vector of V_Λ, satisfy the locality property <ref>. More precisely there exists, multi-valued, operator-valued functions f(z_1, z_2 ) and g(z̅_1, z̅_2) analytic in z_1,z_2 and z̅_1, z̅_2 respectively with possible singularities at {(z_1,z_2) ∈^2|z_1, z_2 ≠ 0, z_1 ≠ z_2}, such that f(z_1, z_2) g(z̅_1, z̅_2) is single-valued when z̅_1, z̅_2 are the complex conjugates of z_1,z_2 respectively and equals Y_V_Λ(v, z_1, z̅_1) Y_V_Λ(w, z_2, z̅_2)when| z_1 | > | z_2 |, Y_V_Λ(w, z_2, z̅_2) Y_V_Λ(v, z_1, z̅_1)when| z_2 | > | z_1 |. We will prove the locality for the spanning set of vectors of the form (<ref>). Explicitly, we will prove the locality for vertex operators of the formY_V_Λ(v,x,x̅)= ∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r-1α_r(x)/dx^m_r-1)(1/(m̅_s-1)!d^m̅_s-1β_s(x̅)/dx̅^m̅_s-1) Y_V_Λ( e^λ,x,x̅)Y_V_Λ(w,x,x̅)= ∏_p=1^ℓ∏_q=1^ℓ̅(1/(n_p-1)!d^n_p-1α_p'(x)/dx^n_p-1)(1/(n̅_q-1)!d^n̅_q-1β_q'(x̅)/dx̅^n̅_q-1) Y_V_Λ( e^λ,x,x̅),see <cit.> for a similar calculation.Following the exact same steps as in the proof of <cit.> with appropriate modifications, we can show that[α' (x_1)^+,e^∫α(x_2)^- d x_2]=(⟨α, α'⟩/x_1-x_2-⟨α, α'⟩/x_1)e^∫α(x_1)^- d x_1.Differentiating on both the sides of (<ref>) with respect to x_1 we obtain[1/s!d^sα' (x_1)^+/dx_1^s, e^∫α(x_2)^- d x_2]=(-1)^s(⟨α, α'⟩/(x_1-x_2)^s+1-⟨α, α'⟩/x_1^s+1)e^∫α(x_1)^- d x_1.Differentiating both sides of (<ref>) with respect to x_2 we obtain[1/s!d^sα'(x_2)^-/dx_2^s ,e^∫α(x_1)^+ d x_1]=⟨α, α'⟩/(x_1-x_2)^s+1 e^∫α(x_1)^+ d x_1.Analogous formula holds for [β'(x̅_1)^±,exp(∫β(x_2)^∓)]. In addition, we needα(0) e_λ'x^α'=⟨α,α'⟩ e_λ'x^α'+ e_λ'x^α'α(0),λ'=(α',β').This follows from the following calculation: for u∈ S(ĥ^-), λ'=(α',β'),λ”=(α”,β”) we haveα(0)e_λ' x^α'(u ⊗ e^λ”) =(-1)^ϵ(λ', λ”) x^⟨α', α”⟩⟨α, α'+α”⟩(u ⊗ e^λ'+λ”) =(-1)^ϵ(λ', λ”) x^⟨α', α”⟩⟨α, α'⟩(u ⊗ e^λ'+λ”)+(-1)^ϵ(λ', λ”) x^⟨α', α”⟩⟨α, α”⟩(u ⊗ e^λ'+λ”) =⟨α, α'⟩e_λ' x^α'(u ⊗ e^λ”)+ e_λ' x^α'α(0)(u ⊗ e^λ”) .Analogous formulasfor β(0) e_λ'x̅^β' isβ(0)e_λ'x̅^β' =⟨β , β' ⟩e_λ 'x̅^β ' +e_λ'x̅^β'β(0),which can be proved as follows:β(0) e_λ'x̅^β'(u ⊗ e^λ”) =(-1)^ϵ(λ', λ”)x̅^⟨β', β”⟩ ⟨β , β' + β”⟩(u ⊗ e^λ'+λ”)=((-1)^ϵ(λ', λ”)x̅^⟨β', β”⟩ ⟨β , β' ⟩(u ⊗ e^λ'+λ”)+(-1)^ϵ(λ', λ”)x̅^⟨β', β”⟩ ⟨β ,β”⟩(u ⊗ e^λ'+λ”) )= ⟨β ,β' ⟩e_λ'x̅^β'(u ⊗ e^λ”) +e_λ'x̅^β'β(0) (u ⊗ e^λ”).Let us now consider the product of two vertex operators, as in (<ref>). 
Using the normal ordering from (<ref>) we haveY_V_Λ(v,x_1,x̅_1)Y_V_Λ(w,x_2,x̅_2)=exp(∫α(x_1)^-)exp(∫β(x̅_1)^-)×∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r-1α_r(x_1)/dx_1^m_r-1)(1/(m̅_s-1)!d^m̅_s-1β_s(x̅_1)/dx̅_1^m̅_s-1)×exp(∫α(x_1)^+)exp(∫α'(x_2)^-)exp(∫β(x̅_1)^+) exp(∫β'(x̅_2)^-)e_λ x_1^αx_1^β×∏_p=1^ℓ∏_q=1^ℓ̅(1/(n_p-1)!d^n_p-1α_p'(x_2)/dx_2^n_p-1)(1/(n̅_q-1)!d^n̅_q-1β_q'(x̅_2)/dx̅_2 ^n̅_q-1) ×exp(∫α'(x_2)^+)exp(∫β'(x̅_2)^+)e_λ'x_2^α'x_2^β',where we have used that e_λ x_1^αx̅_1^β commutes with the exponential of integrals and the exponential of α and β commute with each other. Now, using (<ref>) we get Y_V_Λ(v,x_1,x̅_1)Y_V_Λ(w,x_2,x̅_2)=(1-x_2/x_1)^⟨α,α'⟩(1-x̅_2/x̅_1)^⟨β,β'⟩exp(∫α(x_1)^-)exp(∫β(x̅_1)^-)×∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r-1α_r(x_1)/dx_1^m_r-1)(1/(m̅_s-1)!d^m̅_s-1β_s(x̅_1)/dx̅_1^m̅_s-1)×exp(∫α'(x_2)^-)exp(∫β'(x̅_2)^-) exp(∫α(x_1)^+)exp(∫β(x̅_1)^+)e_λ x_1^αx_1^β×∏_p=1^ℓ∏_q=1^ℓ̅(1/(n_p-1)!d^n_p-1α_p'(x_2)/dx_2^n_p-1)(1/(n̅_q-1)!d^n̅_q-1β_q'(x̅_2)/dx̅_2 ^n̅_q-1) ×exp(∫α'(x_2)^+)exp(∫β'(x̅_2)^+)e_λ'x_2^α'x_2^β'.Further, using (<ref>), (<ref>), (<ref>), and (<ref>) successively on the product in normal order,(<ref>), and the formal variable identity (<ref>) we getY_V_Λ(v,x_1,x̅_1)Y_V_Λ(w,x_2,x̅_2)=(x_1-x_2)^⟨α,α'⟩(x̅_1-x̅_2)^⟨β,β'⟩×exp(∫α(x_1)^-)exp(∫α'(x_2)^-)exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^-)×∏_r=1^k[1/(m_r-1)!d^m_r-1α_r(x_1)/dx_1^m_r-1+(-1)^m_r-1(⟨α',α_r⟩/(x_1-x_2)^m_r-⟨α',α_r⟩/x_1^m_r)]×∏_s=1^k̅[1/(m̅_s-1)!d^m̅_s-1β_s(x̅_1)/dx̅_1^m̅_s-1+(-1)^m̅_s-1(⟨β',β_s⟩/(x̅_1-x̅_2)^m̅_s-⟨β',β_s⟩/x̅_1^m̅_s)] ×∏_p=1^ℓ[1/(n_p-1)!d^n_p-1α_p'(x_2)/dx_2^n_p-1-⟨α,α_p'⟩/(x_1-x_2)^n_p-(-1)^n_p-1⟨α,α_p'⟩/x_2^n_p]×∏_q=1^ℓ̅[1/(n̅_q-1)!d^n̅_q-1β_q'(x̅_2)/dx̅_2 ^n̅_q-1-⟨β,β_q'⟩/(x̅_1-x̅_2)^n̅_q-(-1)^n̅_q-1⟨β,β_q'⟩/x̅_2^n̅_q]×exp(∫α(x_1)^+) exp(∫β(x̅_1)^+)exp(∫α'(x_2)^+)exp(∫β'(x̅_2)^+)× e_λ e_λ' x_1^αx_1^β x_2^α'x_2^β'.Next we have Y_V_Λ(w,x_2,x̅_2)Y_V_Λ(v,x_1,x̅_1)=(x_2-x_1)^⟨α,α'⟩(x̅_2-x̅_1)^⟨β,β'⟩(-1)^λ∘λ'×exp(∫α(x_1)^-)(∫α'(x_2)^-)exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^-)×∏_p=1^ℓ[1/(n_p-1)!d^n_p-1α_p'(x_2)/dx_2^n_p-1+(-1)^n_p-1(⟨α',α_p⟩/(x_2-x_1)^n_p-⟨α',α_p⟩/x_2^n_p)]×∏_q=1^ℓ̅[1/(n̅_q-1)!d^n̅_q-1β_q'(x̅_2)/dx̅_2 ^n̅_q-1+(-1)^n̅_q-1(⟨β',β_q⟩/(x̅_2-x̅_1)^n̅_q-⟨β',β_q⟩/x̅_2^n̅_q)] ×∏_r=1^k[1/(m_r-1)!d^m_r-1α_r(x_1)/dx_1^m_r-1-⟨α',α_r⟩/(x_2-x_1)^m_r-(-1)^m_r-1⟨α',α_r⟩/x_1^m_r]×∏_s=1^k̅[1/(m̅_s-1)!d^m̅_s-1β_s(x̅_1)/dx̅_1^m̅_s-1-⟨β',β_s⟩/(x̅_2-x̅_1)^m̅_s-(-1)^m̅_s-1⟨β',β_s⟩/x̅_1^m̅_s]×exp(∫α(x_2)^+) exp(∫β(x̅_2)^+)exp(∫α'(x_1)^+)exp(∫β'(x̅_1)^+)× e_λ e_λ' x_1^αx_1^β x_2^α'x_2^β',where we used (<ref>). Now, let us take x_1, x_2, x̅_1 and x̅_2 to be complex numbers z_1, z_2, z̅_1 and z̅_2 respectively. Then we can rewrite (<ref>) asY_V_Λ(w,z_2,z̅_2)Y_V_Λ(v,z_1,z̅_1)=(z_2-z_1)^⟨α,α'⟩(z̅_2-z̅_1)^⟨β,β'⟩(-1)^λ∘λ'×exp(∫α(z_1)^-)(∫α'(z_2)^-)exp(∫β(z̅_1)^-)exp(∫β'(z̅_2)^-)×∏_r=1^k[1/(m_r-1)!d^m_r-1α_r(z_1)/dz_1^m_r-1-⟨α',α_r⟩/(z_2-z_1)^m_r-(-1)^m_r-1⟨α',α_r⟩/z_1^m_r]×∏_s=1^k̅[1/(m̅_s-1)!d^m̅_s-1β_s(z̅_1)/dz̅_1^m̅_s-1-⟨β',β_s⟩/(z̅_2-z̅_1)^m̅_s-(-1)^m̅_s-1⟨β',β_s⟩/z̅_1^m̅_s]×∏_p=1^ℓ[1/(n_p-1)!d^n_p-1α_p'(z_2)/dz_2^n_p-1+(-1)^n_p-1(⟨α',α_p⟩/(z_2-z_1)^n_p-⟨α',α_p⟩/z_2^n_p)]×∏_q=1^ℓ̅[1/(n̅_q-1)!d^n̅_q-1β_q'(z̅_2)/dz̅_2 ^n̅_q-1+(-1)^n̅_q-1(⟨β',β_q⟩/(z̅_2-z̅_1)^n̅_q-⟨β',β_q⟩/z̅_2^n̅_q)] ×exp(∫α(z_2)^+) exp(∫β(z̅_2)^+)exp(∫α'(z_1)^+)exp(∫β'(z̅_1)^+)× e_λ e_λ' z_1^αz_1^β z_2^α'z_2^β'.Here it is important that we understand (z_1-z_2)^s as the power series since we obtained it by replacing x_1→ z_1,x_2→ z_2 in (x_1-x_2)^s which is a formal series. In this step we have used the fact that the two normal ordered products commute. 
To see this, note that the normal ordered product can be written as the product without normal order plus a multiple of the central element k,k̅ using (<ref>). Then since α(z_1) and α'(z_2) commute by (<ref>), hence their derivatives and normal ordered products commute too.Now the operators in (<ref>) and (<ref>) are the same. Thus locality follows if we can show that the functions appearing in (<ref>) and (<ref>) are the expansions of a single smooth function in the domains |z_1|>|z_2| and |z_2|>|z_1| respectively. We have already proved in Proposition <ref> that the functions (z_1-z_2)^⟨α,α'⟩(z̅_1-z̅_2)^⟨β,β'⟩ and(-1)^λ∘λ'(z_2-z_1)^⟨α,α'⟩(z̅_2-z̅_1)^⟨β,β'⟩, understood as power series as explained above, are the expansion of the function exp(⟨α,α'⟩log(z_1-z_2))exp(⟨β,β'⟩log(z̅_1-z̅_2)).It remains to prove that the functions appearing in the normal ordered products are also expansions of a single smooth function. It can easily be checked that the functions (-1)^m_r-1(⟨α',α_r⟩/(z_1-z_2)^m_r-⟨α',α_r⟩/z_1^m_r), |z_1|>|z_2|,and -⟨α',α_r⟩/(z_2-z_1)^m_r-(-1)^m_r-1⟨α',α_r⟩/z_1^m_r, |z_2|>|z_1| ,are the expansions of the function (-1)^m_r-1(⟨α',α_r⟩/exp(m_rlog(z_1-z_2))-⟨α',α_r⟩/z_1^m_r) ,in the respective domainsexcept for poles at z_1=z_2 and z_1=0. A similar calculation as in Remark<ref> shows that formal commutativity holds for general vertex operators:(x_1-x_2)^K(x_1-x_2)^K[Y_V_Λ(v,x_1,x̅_1),Y_V_Λ(w,x_2,x̅_2)]=0 ,with v,w∈ V_Λ. The proof of locality goes through even if we take vertex operators corresponding to vectors of the form (<ref>) with e^λ∈[Λ] (see Remark <ref> for definition of such vertex operators). This requires Λ to be an integral Lorentzian lattice. It is worth noting that formal commutativity fails to hold for general vertex operators since ⟨α,α⟩,⟨β,β⟩∉ in general. have equal analytic continuation since the others are similar. It is now clear that the analytic function f(z_1,z_2)=(-1)^m_r-1⟨α',α_r⟩[exp(-m_rlog(z_1-z_2))-exp(-m_rlog z_1)]is given by (<ref>) in |z_1|>|z_2| and by (<ref>) is The graded dimension of the LLVOA can be easily computed. Using the structure of the vector space V_Λ and the general discussion in <cit.>, we find that χ_V_Λ(τ,τ̅)=1/η(τ)^mη(τ)^n∑_(α,β)∈Λ_0q^⟨α,α⟩/2q̅^⟨β,β⟩/2,where η(τ) is the Dedekind eta function η(τ)=q^1/24∏_n=1^∞(1-q^n).We now give explicit examples of isomorphisms and automorphisms of the LLVOA. Let (V_Λ,Y_Λ) and (V_Λ̃,Y_Λ̃) be LLVOAs corresponding to lattices Λ,Λ̃⊂^n,m. Suppose there exists an isomorphism f:Λ⟶Λ̃ which restricts to norm preserving isomorphismf:Λ_i^0⟶Λ̃_i^0, i=1,2 (see (<ref>) for notations). Then the two LLVOAs are isomorphic (V_Λ,Y_Λ)≅(V_Λ̃,Y_Λ̃).Suppose (V_Λ,Y_Λ) and (V_Λ̃,Y_Λ̃) are isomorphic and let φ:V_Λ⟶ V_Λ̃ be a grading preserving isomorphism satisfying (<ref>) and (<ref>). For (α,β)∈Λ, let α̃(-1)β̃(-1)1_V_Λ̃=φ(α(-1)β(-1)1_V_Λ),  for someα̃∈Λ_1,  β̃∈Λ̃_2.The form of φ(α(-1)β(-1)1_V_Λ) is dictated by the fact that a non-chiral VOA homomorphism is grading preserving. Indeed the conformal weights of the LHS must be (1,1) and the only choices of such vectors in V_Λ̃ are linear combinations of Why not a linear combination of two vectors in (<ref>) α̃(-1)β̃(-1)1_V_Λ̃ore^(α̃,β̃)   with  ⟨α̃̃̃,α̃̃̃⟩/2=1,  ⟨β̃̃̃,β̃̃̃⟩/2=1.Suppose φ(α(-1)β(-1)1_V_Λ)=e^(α̃,β̃). 
Then using (<ref>) for u=α(-1)β(-1)1_V_Λ,v=e^(α',β') and m,n=0 we obtain ⟨α,α'⟩⟨β,β'⟩φ(e^(α',β'))=(e^(α̃,β̃))_0,0φ(e^(α',β')).But it is clear from the explicit form of Y_V_Λ(e^(α̃,β̃),x,x̅) in (<ref>) that (<ref>) cannot be true.We now define the map f:Λ⟶Λ̃(α,β)⟼ (α̃,β).f is a -module isomorphism because φ is -vector space isomorphism. Finally for λ=(α,β)∈Λ, let e^λ=φ(e^λ),λ=(α̃,β̃)∈Λ_0.The form of φ(e^λ) is constrained by (<ref>) and (<ref>) as above. The define Since φ is grading preserving, we must have ⟨α,α⟩/2=⟨α̃,α̃⟩/2,⟨β,β⟩/2=⟨β̃,β̃⟩/2.Suppose f:Λ⟶Λ̃ is an isomorphism satisfying the hypothesis. Using this we also define f:Λ_i⟶Λ̃_i, i=1,2α^λ↦α^f(λ),  β^λ↦β^f(λ).It must be checked that the above map is well defined, let us consider there exists λ', such that α^λ = α^λ', which implies λ - λ' ∈Λ_2^0.As f is Λ_i^0 preserving, f(λ) - f(λ') = f(λ - λ') ∈Λ_2^0, due to which we have α^f(λ) = α^f(λ'), and hence the map f : Λ_1^0→Λ_1^0 is well defined. Similar arguments can be used to show the map f: Λ_2^0→Λ_2^0 is well defined. We then extend f to h_1,h_2 by -linearityand then to ĥ_1,ĥ_2, by mapping k and k̅ back to themselves. Then we extend it to V_Λ by f(α_1(- m_1)·α_2(-m_2)⋯α_k(-m_k)β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅)⊗ e^(α,β))= f(α_1)(-m_1)· f(α_2)(-m_2)⋯ f(α_k)(-m_k) f(β_1)(-m̅_1)· f(β_2)(-m̅_2)⋯ f(β_k̅)(-m̅_k̅) ⊗ e^(f(α),f(β)).Since f(0)=0, we see that f(1_V_Λ)=1_V_Λ. Next, since f is an isomorphismIt is easy to check that f is an isomorphism of the two LLVOAs.Let (V_Λ,Y_Λ, ω_L, ω_R, 1_V_Λ) and (V_Λ̃,Y_Λ̃, ω̃_L, ω̃_R, 1_V_Λ̃)be LLVOAs corresponding to lattices Λ,Λ̃⊂^m,n. Suppose Λ and Λ̃ are related by anO(m,)×O(n,)-transformation, then the two LLVOAs are isomorphic (V_Λ,Y_Λ)≅(V_Λ̃,Y_Λ̃). Suppose f:Λ⟶Λ̃ is the isomorphism relating Λ and Λ̃, then for any λ = (α^λ, β^λ) ∈Λf(α^λ, β^λ) = (O_1 ·α^λ, O_2 ·β^λ ) ,where O_1 and O_2 lie in O(m,) and O(n, ) respectively. Further from the action (<ref>), it is clear that f(Λ_1^0) = Λ̃_1^0 and f(Λ_2^0) = Λ̃_2^0 and further that the restrictions to Λ_1^0,Λ_2^0 are norm-preserving isomorphisms. Using f we can define the maps f_i : Λ_i ⟶Λ̃_i ,   i = 1,2 , α^λ↦ O_1 ·α^λ , β^λ↦ O_2 ·β^λ,which are norm-preserving maps when we consider Λ_1 and Λ̃_1 (Λ_2 and Λ̃_2) as subspaces of ^m (^n). Since an integral basis of Λ and Λ̃ is also a basis of ^m,n, it is clear that the dimension dim(h_1)=dim(h̃_1)=m,dim(h_2)=dim(h̃_2)=n ,whereh_i=Λ_i⊗_,h̃_i=Λ̃_i⊗_, i=1,2 ,this implies that the central charges of the two LLVOAs are the same. We then extend f_1 : h_1 →h̃_1 and f_2: h_2 →h̃_2 by -linearityand observe that the bilinear form on h_i's are preserved under this map. We then extend f_1 and f_2 to ĥ_1,ĥ_2, by mapping k and k̅ back to themselves.Consider the orthonormal bases, which were chosen when defining the conformal vectors ω_L,ω_R,ω̃_L, and ω̃_R, i.e. {u_i}_i=1^m,{v_i}_i=1^n and {ũ_i}_i=1^m,{ṽ_i}_i=1^n of h_1,h_1 and h_2,h_2 respectively so that ω_L= 1/2∑_i = 1^m( u_i(-1)^2 )⊗1_V_Λ,ω_R=1/2∑_i = 1^n( v_i(-1)^2)⊗1_V_Λω̃_L= 1/2∑_i = 1^m( ũ_i(-1)^2 )⊗1_V_Λ̃,ω̃_R=1/2∑_i = 1^n( ṽ_i(-1)^2)⊗1_V_Λ̃.Define the isomorphism of complex vector spacesh_1⟶h_1, h_2⟶h_2 u_i↦ũ_i, v_j↦ṽ_j, i=1,…,m,  j=1,…,n.Denote by α̃∈h̃_1,β̃∈h_2 the image of α∈h_1,β∈h_2 under the above map. 
Define the map ψ:V_Λ⟶ V_Λ̃ by ψ(α_1(- m_1)·α_2(-m_2)⋯α_k(-m_k)β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅)⊗ e^(α,β))= α̃_1(-m_1)·α̃_2(-m_2)⋯α̃_k(-m_k)β̃_̃1̃(-m̅_1)·β̃_̃2̃(-m̅_2)⋯β̃_k̅(-m̅_k̅) ⊗ e^(f_1(α),f_2(β)).Clearly ψ(1_V_Λ)=1_V_Λ , ψ(ω_i) = ω̃_i ,where i = L, R.Since f_i is norm preserving, from (<ref>) it is clear that ψ is grading preserving. We now check that (<ref>) is satisfied. Let us first check (<ref>) for u=e^λ' and v of the form (<ref>). From the definition (<ref>) we see that Y_V_Λ( e^λ',x,x̅)v=[exp(-∑_s<0α^λ'(s)/sx^-s)exp(-∑_s>0α^λ'(s)/sx^-s)exp(-∑_s<0β^λ'(s)/sx̅^-s)..exp(-∑_s>0β^λ'(s)/sx̅^-s)] e_λ'x^α^λ'x̅^β^λ'v=(-1)^ϵ(λ',λ)x^⟨α^λ',α^λ⟩x̅^⟨β^λ',β^λ⟩[exp(-∑_s<0α^λ'(s)/sx^-s)exp(-∑_s>0α^λ'(s)/sx^-s)..exp(-∑_s<0β^λ'(s)/sx̅^-s)exp(-∑_s>0β^λ'(s)/sx̅^-s)]α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k) ·β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅)⊗ e^λ'+(α,β).Now expanding the exponentials, we obtain a linear combination of terms of the form α^λ'(n_1)·α^λ'(n_2)⋯α^λ'(n_p) ·β^λ'(n̅_1)·β^λ'(n̅_2)⋯β^λ'(n̅_p̅)α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k) ·β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅)⊗ e^λ'+(α,β)(-1)^ϵ(λ',λ)x^⟨α^λ',α^λ⟩x̅^⟨β^λ',β^λ⟩x^ℓx̅^ℓ̅ ,with n_i,n_i∈,p,p≥ 0 and the sum is over ℓ,ℓ̅ with ℓ=-∑_i=1^pn_i,ℓ=-∑_i=1^p̅n̅_i .Now we can use the Heisenberg algebra (<ref>) to commute the operators α^λ'(n_i),β^λ'(n̅_i) with n_j,n̅_j>0 past other operators and then annihilate e^λ'+(α,β). The result will be a vector of the form (<ref>) with some factors of the form ⟨α^λ',α^λ'⟩, ⟨α^λ',α_i⟩ and ⟨β^λ',β^λ'⟩,⟨β^λ',β_j⟩. Thus under the map ψ in (<ref>), we see that ψ(Y_V_Λ( e^λ',x,x̅)v) is a linear combination of terms as on the right hand side of (<ref>) with factors of the form(-1)^ϵ(λ',λ)x^⟨α^λ',α^λ⟩x̅^⟨β^λ',β^λ⟩,⟨α^λ',α^λ'⟩,⟨α^λ',α_i⟩,⟨β^λ',β^λ'⟩,⟨β^λ',β_j⟩=(-1)^ϵ(λ',λ)x^⟨ f_1(α^λ'),f_1(α^λ)⟩x̅^⟨ f_2(β^λ'),f_2(β^λ)⟩,⟨ f_1(α^λ'),f_1(α^λ')⟩,⟨ f_1(α^λ'),f_1(α_i)⟩⟨ f_2(β^λ'),f_2(β^λ')⟩,⟨ f_2(β^λ'),f_2(β_j)⟩,where by equality we mean the first term in L.H.S is equal to the first term on R.H.S, and so on. Further, we used the fact that f_1,f_2 are norm preserving maps to show this equality. Consider now the LLVOA obtained from the lattice Λ but with the central extension Λ̂ of Λ constructed from the cocycle ϵ̃ using the integral basis {f(λ_i)}_i=1^m+n of Λ̃ where {λ_i}_i=1^m+n is the integral basis of Λ used to define the cocycle for Λ̂. By Proposition <ref>, central extensions corresponding to cocycles defined using different bases of Λ are equivalent, thus the LLVOA constructed using those central extensions are isomorphic. Hence, we may assume that the cocycle ϵ for the central extension Λ̂ is defined by (<ref>) using the basis {f(λ_i)}_i=1^m+n of Λ̃. Now, following the same calculation as above and using the map ψ, it is clear that Y_V_Λ(e^f(λ'),x,x̅)ψ(v) is given by the exact same linear combinations terms of the form of the right hand side of (<ref>) as for Y_V_Λ(e^f(λ'),x,x̅)ψ(v) but with factors (-1)^ϵ̃(f(λ'),f(λ))x^⟨ f_1(α^λ'),f_1(α^λ)⟩x̅^⟨ f_2(β^λ'),f_2(β^λ)⟩,⟨ f_1(α^λ'),f_1(α^λ')⟩, ⟨ f_1(α^λ'),f_1(α_i)⟩⟨ f_2(β^λ'),f_2(β^λ')⟩,⟨ f_2(β^λ'),f_2(β_j)⟩.Now, consider any λ' = ∑_ic_i λ_i,λ = ∑_i d_i λ_i ∈Λ, observe thatϵ̃(f(λ'), f(λ))= ∑_i,j = 1^m + n c_i d_j ϵ̃(f(λ_i),f(λ_j) ) = ∑_i,j = 1^m + n c_i d_jϵ̃(f(λ_i),f(λ_j) )= ∑_i,j = 1^m + n c_i d_jϵ(λ_i,λ_j ) = ϵ(λ', λ) ,where the third equality follows from the fact that f is norm-preserving. 
Using the fact that ϵ(λ',λ)=ϵ̃(f(λ'),f(λ)) we see that the factors in (<ref>) are equal to the factors on the R.H.S of (<ref>), and henceY_V_Λ(ψ(e^λ'),x,x̅)ψ(v)=ψ(Y_V_Λ( e^λ',x,x̅)v).When we take u to be a more general vector, the corresponding vertex operator has products of operators which can again be expanded and dealt with as above. This completes the proof of the proposition. Two Lorentzian lattices Λ,Λ' related by an O(m,)×O(n,)-transformation have isomorphic LLVOAs based on them. Here O(m,) is the group of real orthogonal m× m matrices preserving the bilinear form ⟨·,·⟩ and acts on Λ_1 and similarly O(n,) acts on Λ_2 (see (<ref>) for notations). Let f∈Aut(Λ) be such that f(Λ_i^0)=Λ_i^0, i=1,2. Then f can be extended to an automorphism of the LLVOA associated to Λ. For any α∈Λ_1,β∈Λ_2, let λ_1=(α,β_1)∈Λ and λ_2=(α_2,β)∈Λ. Define (see (<ref>) for notation)f(α)=α^f(λ_1), f(β)=β^f(λ_2).This can be extended -linearly to define an automorphism of h_1,h_2 and hence ĥ_1,ĥ_2. This defines an automorphism of S(ĥ^-). We define the map ψ(e^λ)=e^f(λ),λ∈Λ_0.Then define ψ:V_Λ⟶ V_Λ analogous to (<ref>) which acts as identity on the factors α_i(-m_i) and β_j(-m_j) and as (<ref>) on [Λ_0]. It can be checked that ψ defines an automorphism of the LLVOA V_Λ by following the same calculation as in Theorem <ref>.From Corollary <ref>, automorphisms of the lattice which preserve Λ_i^0, i=1,2 can be extended to automorphisms of the LLVOA. We then propose the following conjecture. Any automorphism f∈Aut(Λ) preserves Λ_i^0, i=1,2. We prove this conjecture for m=n in Appendix <ref>. Although we were not able to prove this conjecture for m≠ n, there are physical reasons to believe it: the T-duality group in string theory acts by automorphisms of a reference Lorentzian lattice <cit.>. By definition, T-duality must preserve the chiral and anti-chiral algebra of the CFT. In our formalism, the chiral and anti-chiral algebra is identified with the algebra of modes of the chiral and anti-chiral vertex operators of the non-chiral VOA (see Table <ref>); thus the automorphisms of the reference lattice must act as automorphisms of the LLVOA. This physical consideration supports the conjecture. We will assume the truth of this conjecture and derive the moduli space of LLVOAs later in Section <ref> below.§ MODULES AND INTERTWINING OPERATORS §.§ Modules We now define modules of a non-chiral VOA. Let (V,Y_V,ω,ω̅,1) be a non-chiral VOA. A module for V is a tuple (W,Y_W) where W is an (×)-graded complex vector space, Y_W is a linear map, called the module vertex operator map, Y_W: V⊗ W⟶ W{x^± 1,x̅^± 1}u⊗ w⟼ Y_W(u,x,x̅)wor equivalently a mapY_W: ^××^×⟶Hom(V⊗ W,W)(z,z̅)⟼ Y_W(·,z,z̅):u⊗ w⟼ Y_W(u,z,z̅)w,which is multi-valued and analytic if z,z̅ are independent complex variables and single valued when z̅ is the complex conjugate of z. As before the vertex operator Y_W(u,x,x̅) for u∈ V_(h,h̅) is expanded as a formal power series Y_W(u,x,x̅) =∑_m,n∈ (m-n)∈u^W_m,nx^-m-1x̅^-n-1=∑_m,n∈ (m-n)∈x^W_m,n(u)x^-m-hx̅^-n-h̅∈End(W){x^± 1,x̅^± 1}.The following properties must be satisfied: *Identity property: The vertex operator corresponding to the vacuum vector acts as identity, i.e.Y_W(1, x, x̅) w = w, ∀  w ∈ W. *Grading-restriction property: For every[Note that h,h̅ are not complex conjugates of each other. We will explicitly specify this when this is the case. ] (h,h̅)∈×, dim(W_(h,h̅))<∞,there exists M∈, such that W_(h,h̅)=0, for Re(h)<M or Re(h̅)<M. 
*Single-valuedness property: For every homogenous subspace W_(h,h̅): h-h̅∈ℤ.* Creation property: For any v∈ V lim_x,x̅→ 0Y_V(v,x,x̅)1=v.*Virasoro property: The vertex operators Y_W(ω,x,x̅) and Y_W(ω̅,x,x̅), called conformal vertex operators, have Laurent series in x,x̅ given by Y_W(ω,x,x̅)=∑_n∈L^W(n)x^-n-2,Y_W(ω̅,x,x̅)=∑_n∈L̅^W(n)x̅^-n-2,where L^W(n),L̅^W(n) are operators which satisfy the Virasoro algebra (<ref>) with central charge c,c̅ respectively.*Grading property: For w∈ W_(h,h̅)L^W(0)w=hw,L^W(0)w=h̅w. *L^W(0)-property : [L^W(0), Y_W(u , x, x̅)]=x ∂/∂ xY_W(u ,x, x̅)+Y(L(0) u , x, x̅),[L̅^W(0), Y_W(u , x, x̅)]=x̅∂/∂x̅Y_W(u ,x, x̅)+Y_W(L̅(0) u , x, x̅). *Translation property: For any u∈ V [L^W(-1), Y_W(u , x, x̅)]=Y_W(L(-1) u , x, x̅)=∂/∂ x Y_W(u , x, x̅),[L̅^W(-1), Y_W(u ; x, x̅)]=Y_W(L̅(-1) u , x, x̅)=∂/∂x̅ Y_W(u , x, x̅). * Locality and Duality property: The module vertex operators must be local, that is given n module vertex operators Y_W(u_i,z_i,z_i), i=1,…,n, there exists an operator-valued function m_n(u_1,…,u_n,z_1,…,z_n,z_1,…,z_n) satisfying the requirements in Property <ref> of Definition <ref>. Moreover, for u_1, u_2∈ V,Y_W(u_1 , z_1, z̅_1) Y_W(u_2 , z_2, z̅_2) , Y_W(u_2 , z_2, z̅_2) Y_W(u_1 , z_1, z̅_1) , Y_W(Y_V(u_1 , z_1-z_2, z̅_1-z̅_2)u_2 , z_2, z̅_2),are the expansions of a function m(u_1,u_2,z_1, z̅_1, z_2, z̅_2 )in the sets given by |z_1|>|z_2|>0, |z_2|>|z_1|>0, and |z_2|>|z_1-z_2|>0, respectively, where z̅_1, z̅_2 are the complex conjugates of z_1 and z_2 respectively. Also m is an End(W)-valued function, linear in u_1,u_2, defined on {(z_1,z_2) ∈^2|z_1, z_2 ≠ 0, z_1 ≠ z_2},multi-valued and analytic when z̅_1,z̅_2 are viewed as independent variables and is single-valued when z̅_1, z̅_2 are equal to the complex conjugates of z_1,z_2 respectively. We say that the module vertex operators Y_W(u_1 , z_1, z̅_1) and Y_W(u_2 , z_2, z̅_2) satisfy locality and duality with respect to each other if they satisfy (<ref>).The module for non-chiral VOA defined here is related to the notion of module in <cit.> and <cit.>, and ordinary module in <cit.>. The equality of only the first two expressions of (<ref>) is the usual locality of two module vertex operators and the equality of first and third expressions in (<ref>) is called duality of module vertex operators. From Proposition <ref>, we see that locality implies duality for vertex operators of a non-chiral VOA, while for module vertex operators, Proposition <ref> below gives a sufficient condition for locality to imply duality in terms of existence of a certain intertwining operator (see Definition <ref>). Chiral and anti-chiral module vertex operators are defined analogous to chiral and anti-chiral vertex operators. For v∈ V_Λ with conformal weights (h,h̅),we will expand the module vertex operator Y_W(v,x,x) as in (<ref>). Y_W(v,x,x) =∑_m,n∈ (m-n)∈v_m,n^Wx^-m-1x^-n-1=∑_m,n∈ (m-n)∈x^W_m,n(v)x^-m-hx^-n-h̅∈End(W){x,x̅}. As in Lemma <ref>, for chiral and anti-chiral vectors u∈ V_(h,h̅),v∈ V_(h',h̅'), we expand the module vertex operators as Y_W(u,x)=∑_m∈x^W_m(u)x^-m-(h-h̅),Y_W(v,x)=∑_m∈x̅^W_m(v)x^-m-(h̅'-h').The proof of Theorem <ref> goes through even for module vertex operators. 
We record the result for later reference.Let u_i∈ V_(h_i,h̅_i) and v_i∈ V_(h_i',h̅'_i) behomogeneous chiral and anti-chiral vectors respectively with corresponding vertex operators Y_W(u_i,x)=∑_n∈x^W_n(u_i)x^-n-(h_i-h̅_i), Y_W(v_j,x̅)=∑_n∈x̅^W_n(v_i)x̅^-n-(h̅'_i-h'_i).Then we have [x^W_n(u_i),x^W_k(u_j)]=∑_p≥ -(h_i-h̅_i)+1n+(h_i-h̅_i)-1 p+(h_i-h̅_i)-1x^W_k+n(x_p(u_i)· u_j),[x̅^W_n(v_i),x̅^W_k(v_j)]=∑_p≥ -(h̅'_i-h'_i)+1n+(h̅'_i-h'_i)-1 p+(h̅'_i-h'_i)-1x̅^W_k+n(x̅_p(v_i)· v_j),[x^W_n(u_i),x̅^W_k(v_j)]=0.In particular,[L^W(n),x^W_k(u_i)]=∑_p≥ -1n+1 p+1x^W_k+n(L^W(p)· u_i),[L^W(n),x̅^W_k(v_i)]=0,[L̅^W(n),x̅^W_k(v_i)]=∑_p≥ -1n+1 p+1x̅^W_k+n(L̅^W(p)· v_i),[L̅^W(n),x^W_k(u_i)]=0.More generally, for m∈ we have the Borcherd's identity∑_r≥ 0m r((-1)^r x^W_n+m-r(u_i)x^W_k+r(u_j)-(-1)^m+rx^W_k+m-r(u_j)x^W_n+r(u_i))=∑_p≥ 1-(h_i-h_i)n+(h_i-h_i)-1 p+(h_i-h_i)-1x^W_k+n+m+h̅_i-h_j(x_p+m(u_i)· u_j), ∑_r≥ 0m r((-1)^r x̅^W_n+m-r(v_i)x̅^W_k+r(v_j)-(-1)^m+rx̅^W_k+m-r(v_j)x̅^W_n+r(v_i))=∑_p≥ 1-(h̅'_i-h'_i)n+(h̅'_i-h'_i)-1 p+(h̅'_i-h'_i)-1x̅^W_k+n+m+h'_i-h'_j(x̅_p+m(v_i)· v_j), ∑_r≥ 0m r((-1)^rx^W_n+m-r(u_i)x̅^W_k+r(v_j)-(-1)^m+rx̅^W_k+m-r(v_j)x^W_n+r(u_i))=0.The graded dimension or character of a module W of a non-chiral VOA is defined similar to that of the VOA:χ_W(τ,τ̅)=Tr_W q^L^W(0)-c/24q̅^L^W(0)-c̅/24=∑_(h,h̅)∈×(dim W_(h,h̅))q^h-c/24q̅^h̅-c̅/24.Let (W,Y_W) be a module of a non-chiral VOA V. A V-submodule of W is a vector subspace W_1⊂ W such that the vertex operator map restricts to a map on W_1:Y_W: V⊗ W_1⟶ W_1{x,x̅}u⊗ w⟼ Y_W(u,x,x̅)wand is a V-module in its own right. A V-module is called irreducible if it has no non-zero proper submodules. Irreducible modules are also called simple modules. The following proposition is a simplification of <cit.>.Let (W,Y_W) be an irreducible module of a non-chiral VOA (V,Y_V). Then for any non-zero vectors v∈ V and w∈ W, Y_W(v,x,x̅)w≠ 0.Or equivalently there exists m,n∈ such that x^W_m,n(v)· w≠ 0.Since W is irreducible, we have W=Span_{x^W_m_1,n_1(v_1)x^W_m_2,n_2(v_2)⋯ x^W_m_k,n_k. (v_k)· w : v_i∈ V_(h_i,h̅_i),h_i,h̅_i∈,.n_i,m_i∈,i=1,…,k, k∈_0}.If not, the RHS will define an invariant subspace of W hence contradicting the irreducibility of W. Suppose now that Y_W(v,x,x̅)w=0. Thenby the locality property <ref> we see that Y_W(v,z_1,z̅_1)Y_W(u,z_2,z̅_2)w=0 ,for an arbitrary u∈ V. But this implies that Y_W(v,x,x̅)=0. Moreover, since v≠ 0 by duality it implies that Y_W(·,x,x̅)≡ 0 which is a contradiction.Direct sum of two V-modules is another V-module with the obvious definition of vertex operator map. A homomorphism between two V-modules (W_1,Y_W_1) and (W_2,Y_W_2) is a grading preserving linear map f:W_1⟶ W_2 satisfying f(Y_W_1(v,x,x̅)w)=Y_W_2(v,x,x̅)f(w),∀  v∈ V,w∈ W_1.The notion of isomorphisms and automorphisms are defined analogous to the non-chiral VOA. Again, isomorphic modules have identical graded dimension.A semi-simple V-module is a V-module isomorphic to the direct sum of finitely many simple V-modules.§.§ Intertwining operatorsIn this section, we define intertwining operators and study some of their properties.Let (V,Y_V,ω,ω,1) be a non-chiral vertex operator algebra and let (W_i, Y_i), (W_j, Y_j) and (W_k, Y_k) be three V-modules. 
An intertwining operator of type ([ W_i; W_j W_k ]) is a linear map𝒴:W_j ⊗ W_k ⟶ W_i{x,x} w_(j)⊗ w_(k)↦𝒴(w_(j), x,x̅)w_(k),or equivalently a map𝒴: ^××^×⟶Hom(W_j⊗ W_k,W_i) (z,z)⟼𝒴(·,z,z):w_(j)⊗ w_(k)⟼𝒴(w_(j),z,z̅)w_(k),which is multi-valued and analytic if z,z are independent complex variables and single valued when z is the complex conjugate of z. The intertwining operator Y(w_(j),x,x̅) is expanded as𝒴(w_(j), x,x̅)=∑_n,m∈(w_(j))_n,mx^-n-1x̅^-m-1∈Hom(W_k, W_i){x,x}.The following properties must be satisfied: *L(0)-property: For any w_(j)∈ W_j [L(0), 𝒴(w_(j) , x, x̅)]=x ∂/∂ x𝒴(w_(j) ,x, x̅)+Y_j(L^W_j(0) w_(j) , x, x̅),[L̅(0), 𝒴(w_(j) , x, x̅)]=x̅∂/∂x̅𝒴(w_(j) ,x, x̅)+Y_j(L̅^W_j(0) w_(j) , x, x̅),where the commutator on the LHS is understood to be [L(0), 𝒴(w_(j) , x, x̅)]=L^W_i(0)𝒴(w_(j) , x, x̅)-𝒴(w_(j) , x, x̅)L^W_k(0), [L̅(0), 𝒴(w_(j) , x, x̅)]=L̅^W_i(0)𝒴(w_(j) , x, x̅)-𝒴(w_(j) , x, x̅)L̅^W_k(0). *Translation property: For any w_(j)∈ W_j[L(-1), 𝒴(w_(j) , x, x̅)]=𝒴(L^W_j(-1) w_(j) , x, x̅)=∂/∂ x𝒴(w_(j) , x, x̅),[L̅(-1), 𝒴(w_(j) , x, x̅)]=𝒴(L̅^W_j(-1) w_(j) , x, x̅)=∂/∂x̅𝒴(w_(j) , x, x̅)where the commutativity is understood as above.*Locality property: The module vertex operators and the intertwiner must be local, that is given vectors u_1,…,u_n-1∈ V, w_(j)∈ W_j, there exists an operator-valued function m_n(u_1,…,u_n-1,w_(j),z_1,…,z_n,z_1,…,z_n) satisfying the requirements in Property <ref> of Definition <ref>. Here, the product of vertex operators in (<ref>) is replaced by Y_i(u_σ(1) , z_σ(1), z̅_σ(1))⋯ Y_i(u_σ(a-1) , z_σ(a-1), z̅_σ(a-1))𝒴(w_(j) , z_a, z̅_a) Y_k(u_σ(a+1) , z_σ(a+1), z̅_σ(a+1))⋯Y_k(u_σ(n) , z_σ(n), z̅_σ(n)) . We will denote the intertwining operator by𝒴_j k^ior 𝒴_W_j W_k^ W_i,when we need to indicate its type. The vertex operator map Y_V(·, x,x̅) acting on a non-chiral VOA V is an example of an intertwining operator of type ([ V; V V ]) and Y_W(·, x,x̅) acting on a V-module W is an example of an intertwining operator of type ([ W; V W ]).Following the proof of (<ref>) and using the L(0)-property <ref> along with the grading-restriction property <ref> of modules, one can show the following lower truncation property for intertwiners:for w_(j)∈ W_j and w_(k)∈ W_k,(w_(j))_n,m w_(k)=0forn,msufficiently large.Let (V,Y_V) be a non-chiral VOA and (W,Y_W) be a V-module. Suppose there exists an intertwining operator of type 𝒴^ W_WV where (V,Y_V) is considered as a module for itself. Suppose further that the intertwining operator satisfies lim_x,x̅→ 0𝒴^ W_WV(w,x,x̅)1=w, w∈ W .Then the locality property of module vertex operators implies the duality property (see Remark <ref> for terminology):Y_W(u,z_1,z̅_1)Y_W(v,z_2,z̅_2)=Y_W(Y_V(u,z_1-z_2,z̅_1-z̅_2)v,z_2,z̅_2), u,v∈ V .In particular, we have the OPE:Y_W(u,z_1,z̅_1)Y_W(v,z_2,z_2)= ∑_m,n∈Y_W(u_m,n· v,z_2,z̅_2)(z_1-z_2)^-m-1(z̅_1-z̅_2)^-n-1,where u_m,n∈End(V) defined using the expansion (<ref>). The proof is analogous to the proof of Proposition <ref>. Let w∈ W be an arbitrary vector. First note that (<ref>) along with the translation property <ref> implies that Lemma <ref> is true for the intertwiner 𝒴^ W_WV(w,x,x̅):𝒴^ W_WV(w,x,x̅)1= e^x̅ L̅(-1)e^x L(-1)w. Moreover, the translation property <ref> of module vertex operator Y_W implies that Lemma <ref> is true for Y_W:e^x_2L(-1) e^x̅_2L̅(-1)Y_W(u,x_1,x̅_1) e^-x_2L(-1) e^-x̅_2L̅(-1)=Y_W(u,x_1+x_2,x̅_1+x̅_̅2̅), u∈ V. 
Then we have Y_W(u,z_1,z̅_1) Y_W(v,z_2,z̅_2)e^z̅_3L̅(-1)e^z_3 L(-1)w=Y_W(u,z_1,z̅_1)Y_W(v,z_2,z̅_2)𝒴^ W_WV(w,z_3,z̅_3)1=𝒴^ W_WV(w,z_3,z̅_3)Y_V(u,z_1,z̅_1)Y_V(v,z_2,z̅_2)1=𝒴^ W_WV(w,z_3,z̅_3)Y_V(Y_V(u,z_1-z_2,z̅_1-z̅_2)v,z_2,z̅_2)1=Y_W(Y_V(u,z_1-z_2,z̅_1-z̅_2)v,z_2,z̅_2)𝒴^ W_WV(w,z_3,z̅_3)1,where we used the locality property of the intertwiner 𝒴^ W_WV and the duality of vertex operators in Proposition <ref>. Now taking the limit z_3,z̅_3→ 0 gives the required result. §.§ Non-chiral CFT and modular invarianceGiven a non-chiral VOA (V,Y_V) with the set of all isomorphism classes of simple modules {(W_i,Y_W_i)}, one can construct a non-chiral CFT by taking the non-chiral VOA and a subset of the simple modules[A crucial requirement in choosing what subset of simple modules to include is to make sure that the OPE of appropriate intertwiners between modules closes in the sense that the right hand side of the OPE contains intertwiners and vertex operators for modules which are included in the subset we choose. We will explore these fusion rules for intertwiners in a future work. For the purposes of this paper, we will work with the simplistic definition given below.]. Note that one is allowed to choose copies of the same module. A non-chiral VOA (V,Y_V) along with a subset {(W_α,Y_W_α)}_α∈ I of simple modules, with possibly (W_α,Y_W_α)≅ (W_β,Y_W_β) for some α,β∈ I, will be called a non-chiral CFT. Let V be a rational non-chiral VOA. Then there are finitely many isomorphism classes of simple modules of V.Two non-chiral CFTs are said to be equivalent if the underlying non-chiral VOAs and their simple modules are isomorphic. Note that the isomorphism is allowed to permute the (non-trivial) modules but not the non-chiral VOA which is considered as a module for itself.Let {W_α}_α∈ I, with W_0≅ V being the non-chiral VOA V considered as a module for itself, be a non-chiral CFT. The torus partition function of the non-chiral CFT is defined by Z_V(τ,τ̅):=∑_i∈ Iχ_W_i(τ,τ̅) .It is clear that equivalent non-chiral CFTs have identical partition function.A non-chiral CFT is called modular invariant if its torus partition function is modular invariant:Z_V(γτ,γτ̅)=Z_V(τ,τ̅),γ∈SL(2,) ,where γτ=aτ+b/cτ+d,γτ̅=aτ̅+b/cτ̅+d,γ=[ a b; c d ]∈SL(2,).Given a non-chiral CFT, its torus partition function need not be modular invariant. Modular invariance is a physical requirement and it puts strong constraints on which modules of a non-chiral VOA are allowed in constructing the non-chiral CFT.§ MODULI SPACE OF NON-CHIRAL CFTS OVER LORENTZIAN LATTICES §.§ Construction of modules of LLVOALet [μ] = [(μ_1, μ_2)] ∈Λ/Λ_0 be a coset. We will construct a module for the LLVOA corresponding to this coset. First observe that there is a one-to-one correspondence between cosets Λ/Λ_0 and [Λ]/[Λ_0] given by the map [μ]↦[μ+Λ_0]=e^μ·[Λ_0]=Span_{e^μ+λ : λ∈Λ_0}.Using the coset [μ+Λ_0], define the vector space W_μ:=S(ĥ^-)⊗[μ+Λ_0].Note that W_μ is generated by elements of the form w :=( α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k)β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅) )⊗ e^(μ_1, μ_2)+(α,β)for m_i,m̅_i̅> 0,k,k̅≥ 0,(α,β)∈Λ_0. The vertex operator map is exactly the same as for (Y_V_Λ,V_Λ):Y_W_μ(·,x,x̅)=Y_V_Λ(·,x,x̅). Note that Y_W_μ(·,x,x̅) acts on W_μ for which the required action of ĥ^0,k,k̅ on [μ+Λ_0] is defined in (<ref>). 
The action of x^α,x̅^β is defined exactly the same as in (<ref>) and the action of Λ̂_0 on [μ+Λ] is given in (<ref>).The grading on W_μ is given by defining the conformal weights of w in (<ref>) to be h =⟨μ_1 + α,μ_1 +α⟩/2+∑_i=1^km_i,h =⟨μ_2 +β,μ_2 +β⟩/2+∑_j=1^k̅m̅_j . In view of (<ref>), the modes x^W_μ_m,n(u) of the vertex operator Y_W_μ(u,x,x̅) is the same as the mode x_m,n(u) of Y_V_Λ(u,x,x̅) but now acting on W_μ, see Remark <ref>. For every [μ]∈Λ/Λ_0, the tuple (W_μ,Y_W_μ) is a V_Λ-module. We prove the properties in Section <ref> to show (W_μ,Y_W_μ) is a V_Λ-module. The proof of Properties <ref> through <ref>, except Property <ref> are exactly the same as in the case of the LLVOA, which we had shown in Subsection <ref>. Now, it can be seen that h and h̅in (<ref>) are both positive numbers, as m_i are positive integers and the bilinear forms, inherited from ^m and ^n, on Λ_1 and Λ_2 are positive definite. Hence, M = 0 for Property <ref>. To show dim(W_(h, h̅)) < ∞, we show that for any h,h̅∈ℝ the number of distinct λ = (α, β) ∈Λ_0 satisfying ⟨μ_1 + α, μ_1 + α⟩≤ 2 hand ⟨μ_2 +β,μ_2 + β⟩≤ 2 h̅ ,where α∈Λ_1, β∈Λ_2 ,can be only finitely many. We basically mimic the proof for the grading restriction property for the LLVOA, noting that μ_1 + Λ_1 and μ_2 + Λ_2 are also discrete. The proof of the locality of module vertex operators is same as for the LLVOA case. We prove the duality property. We will show that there exists an intertwining operator of type 𝒴^W_μ_W_μ V_Λ satisfying the hypothesis of Proposition <ref>. Indeed, for w∈ W_μ of the form (<ref>), consider the operator 𝒴^ W_μ_W_μ V_Λ(w,x,x̅)=∏_r=1^k∏_s=1^k̅(1/(m_r-1)!d^m_r - 1 α_r(x)/dx^m_r-1) (1/(m̅_s-1)!d^m̅_s-1β_s(x̅)/dx̅^m̅_s-1)𝒴^ W_μ_W_μ V_Λ(e^μ+(α,β),x,x̅) , where 𝒴^ W_μ_W_μ V_Λ(e^μ+(α,β),x,x̅)=Y_V_Λ(e^μ+(α,β),x,x̅).The operators appearing in the intertwiner above act on V_Λ in the obvious way. The axioms of intertwiners along with the hypothesis of Proposition <ref> for𝒴^ W_μ_W_μ V_Λ follows from the general proofs in Subsection <ref>. We now show that these modules are irreducible. Any V_Λ-module (W,Y_W) is also an (ĥ_1^⋆⊕ĥ_2^⋆)-module, where ĥ_i^⋆ are Heisenberg algebras associated to h_i.For any α∈h_1, consider α(-1)⊗1∈ V_Λ. The corresponding vertex operator isY_W(α(-1)⊗1,x,x̅):=α^W(x) :=∑_n∈α^W(n)x^-n-1.This implies that W is also an ĥ_1^⋆-module. Similarly considering the vector β(-1)⊗1∈ V_Λ and its vertex operator, we see that W is also an ĥ_2^⋆-module.When W=W_μ is the module of the LLVOA corresponding to the coset [μ]∈Λ/Λ_0, then α^W(x)=α(x) and α^W(n)=α(n), see Remark <ref>. For [μ]∈Λ/Λ_0, the V_Λ-module (W_μ,Y_W_μ) is irreducible. Suppose W⊂ W_μ is a V_Λ-submodule. Then W is also an (ĥ_1^⋆⊕ĥ_2^⋆)-module. By Theorem <ref>W≅ S(ĥ_1^-)⊗ S(ĥ_2^-)⊗Ω_W≅ S(ĥ^-)⊗Ω_W,where Ω_W is the vacuum space of W, see Appendix <ref> for definition. Since the vacuum space of W_μ is [μ+Λ_0] we have Ω_W⊂[μ+Λ_0]=⊕_λ∈μ+Λ_0e^λ.Since W is invariant under α(0),β(0) for all α∈h_1,β∈h_2 and e^λ are eigenspaces for α(0),β(0), we must have Ω_W=[M] ,for some non-empty subspace M⊂μ+Λ_0. Finally note that for any λ∈Λ_0 we havee_λ= exp(-∫ dx α^λ(x)^-) exp(-∫ dx̅ β^λ(x)^-)Y_V_Λ( e^λ,x,x)×exp(-∫ dx α^λ(x)^+)exp(-∫ dx̅ β^λ(x)^+)x^-α^λx^-β^λ.Noting thatx^-α^λ, x^-β^λ acts as x^-α^λ(0), x^-β^λ(0) respectively,we see that W must be invariant under e_λ for all λ∈Λ_0. This means that M=μ+Λ_0 since { e_λ : λ∈Λ_0} acts transitively on [μ+Λ_0]. The graded dimension for the module W_μ for any [μ]∈Λ/Λ_0 can be easily calculated. 
As for the LLVOA, we obtain χ_W_μ(τ,τ̅)=1/η(τ)^mη(τ̅)^n∑_(α,β)∈μ+Λ_0q^⟨α,α⟩/2q̅^⟨β,β⟩/2.Using (<ref>) and (<ref>), we see that the partition function of the non-chiral CFT consisting of the LLVOA (V_Λ,Y_V_Λ) and its modules [We stress that these are not all the modules of the LLVOA. But if we want the non-chiral CFT to be modular invariant, we need to restrict to this set of modules, see Theorem <ref>.] {(W_μ,Y_W_μ)}_[μ]∈Λ/Λ_0 is given by Z_V_Λ(τ,τ̅) =∑_[μ]∈Λ/Λ_0χ_W_μ(τ,τ̅)=1/η(τ)^mη(τ̅)^n∑_(α,β)∈Λq^⟨α,α⟩/2q̅^⟨β,β⟩/2. §.§ Moduli space of modular invariant non-chiral CFTs over Lorentzian latticesGiven a Lorentzian lattice Λ⊂^m,n, we have constructed a non-chiral vertex operator algebra based on Λ and constructed a set of its irreducible modules. In general, these non-chiral CFTs, consisting of the LLVOA and its irreducible modules, are not modular invariant. To construct a modular invariant non-chiral CFT we restrict to even self-dual lattices and only consider the irreducible modules constructed here which are in 1-1 correspondence with the cosets Λ/Λ_0. Indeed, we have the following theorem. Let Λ⊂^m,n be an even self-dual lattice such that m-n≡ 0 mod 24. Then the non-chiral CFT consisting of the LLVOA V_Λ and its modules {W_μ}_[μ]∈Λ/Λ_0 is a modular invariant non-chiral CFT. Let us first show that the central charges of the LLVOA are (m,n). Every even self-dual Lorentzian lattice has a generator matrix which is an O(m,n,)-matrix. Let [Λ]=[ A B; C D ]∈O(m,n,)be a generator matrix of Λ, where A is an m× m matrix, B is an m× n matrix, C is an n× m matrix and D is an n× n matrix. It is easy to show that |det A|=|det D|≥ 1.Thus A and D are invertible and hence the rows of A and D are linearly independent in ^m and ^n respectively. This means that dim(h_1)=m and dim(h_2)=n, so the central charges are indeed (m,n). The partition function of the non-chiral CFT in the statement of the theorem is given by [We are using a different notation for the partition function to emphasize modular invariance.] (<ref>):Z^mod_V_Λ(τ,τ̅) :=1/η(τ)^mη(τ̅)^n∑_(α,β)∈Λq^⟨α,α⟩/2q̅^⟨β,β⟩/2=1/η(τ)^mη(τ̅)^nΘ_Λ(τ,τ̅) ,where Θ_Λ(τ,τ̅) is the Siegel-Narain theta function <cit.> associated to the lattice Λ. Invariance under τ→τ+1,Z^mod_V_Λ(τ+1,τ̅+1)=Z^mod_V_Λ(τ,τ̅),follows from m-n≡ 0 mod 24 becauseη(τ+1)=e^2π i/24η(τ) .Invariance under the modular transformation τ→ -1/τ, Z^mod_V_Λ(-1/τ,-1/τ̅) =e^-iπ(m-n)/4/√(|det 𝒢_Λ|)τ^m/2τ̅^n/2(-iτ)^-m/2(iτ̅)^-n/21/η(τ)^mη(τ̅)^n∑_(α,β)∈Λ^⋆q^⟨α,α⟩/2q̅^⟨β,β⟩/2=Z^mod_V_Λ(τ,τ̅) ,follows from the fact that Λ is unimodular and self-dual. Here we used the modular transformation of the Dedekind eta function:η(-1/τ)=√(-iτ)η(τ) ,and the modular transformation of the Siegel-Narain theta function <cit.>.We now want to classify all non-chiral CFTs based on Lorentzian lattices of signature (m,n) up to isomorphism. Following the physics convention, we call the set of isomorphism classes the moduli space of modular invariant non-chiral CFTs over Lorentzian lattices and denote it by ℳ_m,n. Under the assumptions of Theorem <ref>, the non-chiral CFTs based on Λ,Λ̃ are isomorphic. Let (W_μ,Y_W_μ)_[μ]∈Λ/Λ_0 and (W̃_μ,Ỹ_W̃_μ)_[μ]∈Λ̃/Λ̃_0 be the isomorphism classes of irreducible modules of the corresponding LLVOAs(V_Λ,Y_V_Λ) and (V_Λ̃,Y_V_Λ̃).By Theorem <ref> the two LLVOAs are isomorphic. It now suffices to show that for 0≠ [μ]∈Λ/Λ_0, there exists 0≠ [ν]∈Λ̃/Λ̃_0 such that (W_μ,Y_W_μ)≅(W̃_ν,Ỹ_W̃_ν) .Pick a representative μ∈[μ] and let ν=f(μ). 
Then define the map φ:W_μ⟶W̃_ν( α_1(-m_1)·α_2(-m_2)⋯α_k(-m_k)β_1(-m̅_1)·β_2(-m̅_2)⋯β_k̅(-m̅_k̅) )⊗ e^(μ_1, μ_2)+(α,β)↦( α̃_1(-m_1)·α̃_2(-m_2)⋯α̃_k(-m_k)β̃_1(-m̅_1)·β̃_2(-m̅_2)⋯β̃_k̅(-m̅_k̅) )⊗ e^(f_1(μ_1+α),f_2(μ_2+β)),where f_i:Λ_i⟶Λ̃_i,  i=1,2 is defined as in (<ref>) and extended to f_i:ĥ_i⟶ĥ̃_i by -linearity. Here ĥ̃_i is constructed as in (<ref>) and (<ref>) for Λ̃. This map is grading preserving since f_1,f_2 are norm-preserving on Λ_1,Λ_2 respectively. One can now show that φ defines an isomorphism of modules of the isomorphic LLVOAs by following the same calculations as in the proof of Theorem <ref>.Two Lorentzian lattices Λ,Λ' related by an O(m,)×O(n,)-transformation have isomorphic non-chiral CFTs based on them. It is known that all even self-dual Lorentzian lattices of signature (m,n) are related by an O(m,n,) transformation <cit.>. Thus the set of all non-chiral CFTs based on Lorentzian lattices in signature (m,n) can be identified with O(m,n,). But in view of Theorem <ref>, many of the lattices determine isomorphic non-chiral CFTs. Moreover, for any non-chiral CFT based on Λ, from Corollary <ref>, one can identify a discrete subgroup of O(m,n,) which acts as automorphisms of the CFT. If we believe the truth of Conjecture <ref>, then we can identify this discrete subgroup of O(m,n,) as the automorphism group of the lattice. More precisely, from Theorem <ref> it is easy to see that the discrete subgroup is isomorphic to 𝒢_ΛO_Λ(m,n,)𝒢_Λ^-1⊂O(m,n,). This subgroup, somewhat inaccurately but conventionally <cit.>, is denoted by O(m,n,). Thus we have the following theorem.Assuming Conjecture <ref>, the moduli space ℳ_m,n of modular invariant non-chiral CFTs based on Lorentzian lattices of signature (m,n) is isomorphic toℳ_m,n≅O(m,n,)/O(m,)×O(n,)×O(m,n,),where O(m,)×O(n,) acts on O(m,n,) by right multiplication and O(m,n,) acts by left multiplication.Choose a reference Lorentzian lattice Λ_ref with generator matrix 𝒢_ref. Then the set of all Lorentzian lattices can be identified with O(m,n,) under the map[Recall that in our convention, lattice vectors are written as rows rather than columns.] 𝒪↦𝒢_ref𝒪.From the above discussion, the non-chiral CFTs based on 𝒢_ref𝒪 and 𝒢_ref𝒪O with O∈O(m,)×O(n,) are isomorphic. Thus we must quotient out by the right action of O(m,)×O(n,)on O(m,n,). Also automorphisms of Λ_ref, which form a group isomorphic to a discrete subgroup of O(m,n,) denoted by O(m,n,), act by right multiplication on 𝒢_ref. So we must quotient by the left action of O(m,n,) on O(m,n,). This gives the required structure of the moduli space. In deriving the moduli space of non-chiral CFTs based on Lorentzian lattices, we imposed modular invariance as a requirement. This was crucial in restricting the lattices to self-dual ones. If we lift the modular invariance requirement, we obtain more general non-chiral CFTs recently discussed in <cit.> and called generalised Narain theories [We thank Masahito Yamazaki for discussion on this point. ].At a general point in the moduli space (<ref>), the sublattice Λ_0 is trivial and the LLVOA is simply S(ĥ^-)≅ S(ĥ_1^-)⊗ S(ĥ_2^-). The chiral and anti-chiral algebra is then generated by the vertex operators u_i(x)=∑_r∈u_i(r)x^-r-1,v_j(x̅)=∑_r∈v_j(r)x̅^-r-1, i=1,… m,  j=1,…,n ,corresponding to states {u_i(-1)·1}_i=1^m and {v_i(-1)·1}_i=1^n where {u_i},{v_i} are orthonormal bases of ĥ_1,ĥ_2 respectively. All other chiral vertex operators are given by products of derivatives of u_i(x),v_j(x̅) (see (<ref>)). 
In physics, these are called Kac-Moody currents and their modes generate the U(1)^m×U(1)^n Kac-Moody algebra in the non-chiral CFT. Thus at a generic point in the moduli space, the chiral and anti-chiral algebra is extended from Virasoro to the U(1)^m×U(1)^n Kac-Moody algebra. At certain points in the moduli space where Λ_0≠ 0, the chiral and anti-chiral algebra is further extended to some enhanced symmetry algebra [We thank Anatoly Dymarsky for discussions on this point.].It would be interesting to identify the chiral and anti-chiral algebra at these special points in the moduli space with known algebras. § NARAIN CFTNarain CFTs are a large class of conformal field theories which are constructed by compactifying free bosons on a torus and coupling them to a background antisymmetric B-field. Narain CFTs naturally appear in string theory when we perform toroidal compactification of strings in multiple directions. In this section, we will describe these CFTs and explain how they provide physical examples of the non-chiral VOA we constructed in Section <ref>. We restrict to the m=n case for this discussion. §.§ Construction of Narain CFTsWe describe the construction of Narain CFTs. The exposition is based on <cit.> and <cit.>. Let Γ⊂^n be an n-dimensional Euclidean lattice and 2πΓ be the rescaled lattice. Let 𝕋^n≡^n/(2πΓ) be the n-dimensional torus obtained by imposing the equivalence relation x∼x' if and only if x-x'∈ 2πΓ.We then consider n bosons X^μ, μ=1,…,n on a two dimensional surface (worldsheet)moving on the torus 𝕋^n (target space). Alternatively, X^μ can be considered as coordinates on the torus 𝕋^n. Note that X^μ∼ X^μ+2π e^μ,e⃗∈Γ.Let us parameterize the worldsheet by σ,t. Then the action for the CFT is given by S=1/4πα'∫ dt ∫ dσ  (Ẋ^2-X'^2-2B_μνẊ^μ X'^ν),where Ẋ^2=∑_μ=1^nẊ^μẊ^μ, X'^2=∑_μ=1^nX'^μ X'^μ,with the dot indicating the derivative with respect to t and the prime the derivative with respect to σ; B_μν is an anti-symmetric matrix and α' is a coupling constant (called the Regge slope in string theory).The equation of motion for X^μ is given by Ẍ^μ-X^''μ=0 ,which is the wave equation in 2 dimensions with solutions X^μ(σ,t)=X_L^μ(t+σ)+X_R^μ(t-σ) . Here X_L, X_R are called the left moving and right moving components. We now take the σ coordinate on the worldsheet to be periodic, σ∼σ+2π, so that X^μ(t,σ+2π)=X^μ(t,σ)+2π e^μ,e∈Γ.The periodicity implies that we have a Fourier expansion of the form X_L^μ(t+σ)=x^μ/2+α^'p_L^μ/2(t+σ)+i/2∑_n ≠ 0a_n^μ/n e^-i n(t+σ),X_R^μ(t-σ)=x^μ/2+α^'p_R^μ/2(t-σ)+i/2∑_n ≠ 0b_n^μ/n e^-i n(t-σ),where α^'/2(p⃗_L-p⃗_R)=e⃗∈Γ,so thatX^μ(t, σ) =X_L^μ(t+σ)+X_R^μ(t-σ)=x^μ+α^'/2(p_L^μ+p_R^μ) t+e^μσ+i/2∑_n ≠ 0(a_n^μ/n e^-i n(t+σ)+b_n^μ/n e^-i n(t-σ)) ,satisfies the periodicity (<ref>). Note that the total momentum, given byP^μ=1/2 πα^'∫_0^2 π d σ(Ẋ^μ-B^μν X_ν^'),must be a vector of the dual lattice Γ^⋆ defined asΓ^⋆:={e⃗∈ℝ^n |e⃗·e⃗^⃗'⃗∈ℤ, ∀ e⃗^⃗'⃗∈Γ},since X(t, σ) is only defined up to arbitrary shifts by 2 πe⃗ for e⃗∈Γ. We havep_L^μ=(α^' P^μ+(B^μν+δ^μν) e_ν)/α^', p_R^μ=(α^' P^μ+(B^μν-δ^μν) e_ν)/α^'.Thus the set of vectors[We again take the lattice vectors to be rows in ^n,n.] λ=(p⃗_L, p⃗_R) ∈ℝ^n, n forms a lattice Λ⊂ℝ^n, n. Moreover λ∘λ:=p_L^2-p_R^2=(4/α^')P⃗·e⃗.We now fix α^'=2, so that λ∘λ∈ 2 ℤ. Next, for any λ, λ^'∈Λ,λ∘λ^'=p⃗_L ·p⃗_L^'-p⃗_R ·p⃗_R^' =P⃗^'·e⃗+P⃗·e⃗^'∈ℤ.So Λ is an even[Note that an even lattice is necessarily integral.] Lorentzian lattice. In fact, Λ is self-dual <cit.>. 
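For instance, in the simplest case n=1 with B=0 and Γ=Rℤ (so that Γ^⋆=R^-1ℤ), writing P=m/R and e=wR with m,w∈ℤ and using α'=2, the above formulas give p_L=m/R+wR/2 and p_R=m/R-wR/2, so that λ∘λ=2mw∈2ℤ and λ∘λ'=mw'+m'w∈ℤ; this is the familiar even self-dual momentum-winding lattice of a single compact boson at radius R, and exchanging m↔ w while replacing R by 2/R=α'/R reproduces the same set of vectors up to the reflection p_R↦ -p_R (T-duality).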
A generator matrix for Λ in the coordinates <cit.>λ=(α, β), α=(p⃗_L+p⃗_R)/√(2),   β=(p⃗_L-p⃗_R)/√(2),is 𝒢_Λ=1/√(2)([ 2 γ^⋆ 0; γ B γ ]),where γ and γ^⋆=(γ^-1)^T are the generator matrices for Γ and Γ^⋆ respectively.Upon quantisation, we impose the commutators [a_n^μ,a_m^ν]=nδ_n+m,0δ^μν, [b_n^μ,b_m^ν]=nδ_n+m,0δ^μν, [a_n^μ,b_m^ν]=0,[x^μ,p_L^ν]=[x^μ,p_R^ν]=iδ^μν.For every (p⃗_L, p⃗_R) ∈Λ we have a primary operator given byV_p_L, p_R(z, z̅)= :e^i p⃗_L ·X⃗_L(z)+i p⃗_R ·X⃗_R(z̅):,where z=e^i(it+σ), z̅=e^i(it-σ) which is obtained by Wick rotating t → i t. The normal ordering is defined via :a_n^μ a_m^ν: = :a_m^ν a_n^μ: = a_m^ν a_n^μ for m≤ n and a_n^μ a_m^ν for m≥ n, :a^μ_m p_L^ν: = :p_L^ν a_m^μ: = a_m^μ p_L^ν , :x^μ a_n^ν: = :a_n^ν x^μ: = a_n^ν x^μ, :x^μ p_L^ν: = :p_L^ν x^μ: = x^μ p_L^ν,and similarly for b_n^μ,p_R^μ. The Virasoro generators are given by L_n=1/2∑_m∈ :a_m· a_n-m:,L̅_n=1/2∑_m∈ :b_m· b_n-m:.It is an easy exercise to show that L_n,L̅_n satisfy the Virasoro algebra with central charge (c,c̅)=(n,n).The OPE of these primary operators takes the form <cit.> V_p_L, p_R(z, z̅)V_p_L', p_R'(w, w̅)∼ (z-w)^p_L· p_L'(z̅-w̅)^p_R· p_R'V_p_L+p_L', p_R+p_R'(w, w̅) .From the OPE we see that as the first vertex operator circles around the second, it picks up a factor of e^2π i(p_L· p_L'-p_R· p_R'). So for the OPE to be single valued, one requires the lattice Λ to be integral.The torus partition function of the theory isZ(τ, τ̅)=1/|η(τ)|^2n∑_(p⃗_L, p⃗_R) ∈Λ q^p⃗_L^2 / 2q̅^p⃗_R^2 / 2,q=e^2 π i τ, q̅=e^-2 π i τ̅,where η(τ) is the Dedekind eta function η(τ)=q^1/24∏_n=1^∞(1-q^n) ,and τ is the modulus of the torus. Here τ, τ̅ are complex conjugates of each other. The partition function (<ref>) is modular invariant since Λ is self-dual. This construction gives a CFT for any even, self-dual Lorentzian lattice Λ⊂^n,n of signature (n,n); this is called the Narain CFT associated to the lattice Λ. It is easy to see that the partition function is invariant under an orthogonal action on p⃗_L,p⃗_R separately. Thus two Narain CFTs associated to lattices Λ,Λ' are equivalent if Λ,Λ' are related by the right action of O(n,)×O(n,), where O(n,)⊂GL(n,) acts on ^n and preserves the Euclidean inner product. Next, any two Lorentzian lattices are related by the left action of O(n,n,), where O(n,n,)⊂GL(2n,) acts on ^n,n and preserves the Lorentzian inner product of signature (n,n). But note that if two lattices are related by an O(n,n,)-transformation then the two lattices are the same. Thus the moduli space of Narain CFTs is given by the quotientO(n,n,)/O(n,)×O(n,)×O(n,n,),where the first two factors in the denominator act on the right and relate physically equivalent lattices to each other while the last factor acts on the left.It is immediate to see from the dictionary between conformal field theory in physics and our notion of non-chiral CFT described in Subsection <ref> that Narain CFTs are simply non-chiral CFTs based on a Lorentzian lattice of signature (n,n). The moduli space structure (<ref>) is then a special case of Theorem <ref>. The general case m≠ n also appears in toroidal compactification of heterotic string theory, see <cit.> for details. Our general result in Theorem <ref> is a more mathematical statement of the moduli space of Narain compactifications for general signature, see <cit.> for more details from a physical viewpoint. Finally we note that <cit.> gives a construction of Lorentzian lattices of signature (n,n) from stabiliser codes. Our construction of the LLVOA then completes the parallel with the construction of VOAs from codes. 
The code-Narain CFT correspondence has been explored extensively recently <cit.>. It would be interesting to study its implications for LLVOAs. §.§ From Quantum Stabilizer Codes to LLVOAsIn this subsection, we will discuss a prescription to obtain an LLVOA starting from a quantum stabilizer code. Quantum stabilizer codes are an important class of quantum error correcting (QEC) codes. Let us begin with the definition of a QEC code; see <cit.> for more details.For a Hilbert space H, let C ⊂ Hbe a subspace withprojection operator P. Then the subspace C is said to be a quantum error correcting code with respect to error operations [There is an elaborate definition of error operations, see <cit.>. But for the purpose of this paper, {E_i} can be assumed to be a subset of the set of unitary operators on H.] {E_i}if and only ifPE_i^†E_jP = α_ijP ,where α_ij are ℂ-numbers such that [α_ij] is a non-zero Hermitian matrix. We call C a code subspace.Under these conditions, an operation ℛ can be developed that can detect and correct the errors for a state in the subspace C. Our Hilbert space is usually the n-qubit Hilbert space, i.e. H = ℋ_n =(^2)^⊗ n. If the dimension of the code subspace C is 2^k, then the QEC code is said to be of type [[n,k]]. To describe the theory of quantum stabilizer codes, we need the Pauli group G_1, defined by[σ_0 is the2 × 2 identity matrix andσ_1, σ_2, σ_3 are the Pauli X, Y, Z matrices respectively.]G_1 = {±σ_0, ± iσ_0, ±σ_1, ± iσ_1, ±σ_2, ± iσ_2, ±σ_3, ± iσ_3 }.The general Pauli group G_n consists of n-fold tensor products of Pauli matrices, with multiplicative factors ± 1, ± i; it acts naturally on the n-qubit Hilbert space ℋ_n. For a subgroup S of G_n, there is a linear subspace V^S ⊂ℋ_n such that all elements of V^S remain fixed by the action of S. S is called the stabilizer and V^S is called the vector space stabilized by S.Now, it can be shown that for V^S to be non-trivial, S must have the following properties: i) -I ∉ S; ii) S is an abelian subgroup of G_n. The logical qubits lie in V^S. To describe k logical qubits, the stabilizer subgroup has n-k generators, because of the following lemma. Let S=⟨ g_1,…,g_n-k⟩ be a subgroup of G_n. It is an abelian subgroup of G_n iff g_i and g_j commute for any 1 ≤ i,j ≤ n-k. Further, for S abelian and -I ∉ S, V^S is a 2^k-dimensional subspace.A proof of the above Lemma can be found in <cit.>. Here, ⟨ g_1,…,g_n-k⟩ is an independent set of generators of S, i.e. if any generator g_i is removed, then ⟨ g_1,…,g_i-1,g_i,g_i+1,…,g_n-k⟩≠⟨ g_1,…,g_i-1,g_i+1,…,g_n-k⟩.Further, if g_i are the elements of a stabilizer subgroup S with stabilized subspace V^S, then for any unitary transformation U one can define a new stabilizer subgroup S' with elements of the form Ug_iU^†, whose stabilized subspace is U V^S.Stabilizer codes are quantum error correcting codes, as illustrated by the following theorem. Let S be the stabilizer for the stabilized subspace V^S, let N(S) be the normalizer of S in G_n, and let {E_j} be a set of operators in G_n such that for all j, kE_j^†E_k ∉ N(S)-S.Then {E_j} is a correctable set of errors for the quantum stabilizer code.When we say that {E_j} is a correctable set of errors for the quantum stabilizer code, we do so in the spirit of (<ref>): V^S is the quantum error correcting code and the E_j's are the error operations. 
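For example, for n=2 the subgroup S=⟨σ_1⊗σ_1, σ_3⊗σ_3⟩ is abelian and does not contain -I, and the stabilized subspace V^S is one-dimensional, spanned by the Bell state (|00⟩+|11⟩)/√2; this is a [[2,0]] stabilizer code, of the k=0 type that enters the lattice construction below.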
Theorem <ref> guarantees that the matrix [α] corresponding to the code space and the error operations, as in (<ref>), is a non-zero Hermitian matrix. Again, a proof can be found in <cit.>. We will now review <cit.>, where quantum stabilizer codes are mapped to classical codes over 𝔽_4, which are then mapped to Narain CFTs. Consider a stabilizer group S with generators of the form g(ν) = ϵ (σ_ν_1⊗σ_ν_2⊗⋯⊗σ_ν_n ),where ν_i ∈{0,1,2,3} and ϵ = ± 1, chosen such that g^2 = 1 (a factor ϵ = ± i would give g^2 = -1, since a tensor product of Pauli matrices squares to the identity).Corresponding to each ν, (α, β) = (α_1,…,α_n,β_1,…,β_n ) ∈ (ℤ_2)^2n can be found such that the generators becomeg(α,β) := ϵ i^α.β ( (σ_x)^α_1⊗ (σ_x)^α_2⋯⊗ (σ_x)^α_n)( (σ_z)^β_1⊗ (σ_z)^β_2⋯⊗ (σ_z)^β_n) ,where ϵ∈{± 1}. Now it can be checked that g(α,β)g(α',β') = g(α',β')g(α,β) if and only if α .β' - α'.β≡ 0 mod 2,which is needed as elements of S must commute. The sign in the second equation can always be flipped as we are working mod 2. Corresponding to the n-k generators of the stabilizer group, n-k binary vectors (α,β) are obtained.The Gray map ρ : (ℤ_2)^2 →𝔽_4, given by ρ(0,0) = 0,ρ(1,1) = 1,ρ(1,0) = ω ,ρ(0,1) = ω̅ ,is an isomorphism under addition and can be used to convert these binary vectors into codewords over 𝔽_4; we denote its inverse map by σ. Using ρ, a map ρ_n : (ℤ_2)^2n→ (𝔽_4)^n can be defined by ρ_n(α,β) = (ρ(α_1,β_1),ρ(α_2,β_2),…,ρ(α_n,β_n)) .Starting from the generators of the stabilizer code S, we have thus obtained elements of (_4)^n: given n-k generators, we obtain n-k vectors (α,β)∈(ℤ_2)^2n, and hence n-k elements of (𝔽_4)^n.Using these elements as a basis, we construct an additive code over _4 by taking their additive span[The additive span is different from the linear span: in the additive span the basis elements are only added, and scaling with elements of the field 𝔽_4 is not allowed. By taking the additive span, one effectively gets a vector space over 𝔽_2.], which is in one-to-one correspondence with the elements of the stabilizer S.It can be shown that α .β' + α'.β = 0 if and only if ∑_i(c̅_i.d_i + c_i.d̅_i) = 0,where c = ρ_n(α,β), d = ρ_n(α',β'). From (<ref>), it can be seen that for c_1 ≠ c_2, c_1 is orthogonal to c_2 with respect to the inner productx.y = ∑_i(x̅_i y_i + x_i y̅_i );x_i,y_i ∈𝔽_4 .If c_1 = c_2, then the fact that x + x = 0in 𝔽_4 implies that c.c = 0; hence the additive code obtained is self-orthogonal (i.e. C ⊂ C^⊥).However, the codes of interest here are only additive codes which are self-dual, and not just self-orthogonal. For this to happen, one must start with a stabilizer code which has k = 0, i.e. the stabilizer group has n generators and V^S is one-dimensional. Such additive self-dual codes over 𝔽_4 form a family called 4^H+. Stabilizer codes will be called real if all the generators in Equation (<ref>) are real, i.e. the number of σ_2 factors in each generator is even. Real self-dual codes over 𝔽_4 lie in the family 4^H+_R. [In <cit.> it is claimed that any code is equivalent to a real code.] Finally, the lattice corresponding to a quantum stabilizer code will be constructed.For a stabilizer S with one-dimensional subspace V^S, a self-dual additive code 𝒞_4 over 𝔽_4 was defined above. Using the inverse σ of the Gray map, we convert codewords over 𝔽_4 to get binary codewords. 
Using these binary codewords, the lattice is defined as followsΛ(S) = {v/√(2) |v ∈ℤ^2n, v ≡ (α, β) mod 2,(α, β) = σ_n(c)  for some  c ∈𝒞_4 }.It can be shown that under the Lorentzian metric, the lattice corresponding to a stabilizer code S is self-dual when one starts with a self-dual additive code, or equivalently a stabilizer code with k = 0. If one only considers real self-dual additive codes, then this is a one-to-one correspondence. The Narain CFT associated to an even, integral Lorentzian lattice Λ⊂^n,n can be understood as a special case of the LLVOA. For physical consistency, the lattice is taken to be self-dual. Since any two Lorentzian lattices are related by an O(n,n,)-transformation, there exists a matrix 𝒪_Λ∈O(n,n,) such that the generator matrix of Λ is given by [Λ]=𝒪_Λ1,where 1 is the generator matrix of the Lorentzian lattice Span_{e_1,…,e_2n} and {e_i}_i=1^2n is the standard basis of ^2n.It is proved in <cit.> that any such lattice is equivalent via an O(n,)×O(n,) transformation to a lattice with generator matrix of the form (<ref>):[Λ]=1/√(2)([ 2 γ^⋆ B γ; 0 γ ])Thus in the notation of (<ref>) we see that Λ_1=2Γ^⋆,Λ_2=Γ+B·Γwhere Γ is the lattice with generator matrix γ and B·Γ:={B_ijα^j | α∈Γ}. Clearly then m=dim h_1=n, n=dim h_2=n,and we have reproduced the central charge of the Narain CFT from the LLVOA. For Narain CFTs, the generator matrices of the Lorentzian lattices have a specific form, as discussed in <cit.>. For these lattices, m = n = d/2 with d even. Further, it is always possible to find d/2 orthonormal vectors u_j and v_j; hence the CFT has central charge (d/2, d/2).§ CONCLUSION AND FUTURE DIRECTIONSIn this paper, we have initiated a mathematically rigorous study of non-chiral VOAs and presented the construction of a non-trivial example of our definition, namely the Lorentzian lattice vertex operator algebra and its modules. We also showed the relevance of the construction by demonstrating that Narain CFTs which appear in string compactifications are physical examples of our construction. In this section, we sketch some future directions of the study.(1) Rationality, Regularity, and C_2-Cofiniteness: We have defined the notion of non-chiral VOA. It is natural to introduce the notion of rationality, regularity, and strong regularity as in the theory of vertex operator algebras.A non-chiral VOA V is called rational if every V-module is semisimple. It is called regular if it is C_2-cofinite and rational. V is said to be of cft-type ifV=1⊕⊕_h,h̅>0V_(h,h̅).V is called strongly regular if it is regular and of cft-type.One would also like to define the notion of C_2-cofiniteness and see if C_2-cofiniteness and rationality imply regularity as in <cit.>. The main result that we would like to prove is the theorem of Zhu <cit.>: the graded dimensions of the irreducible modules of a C_2-cofinite VOA with certain additional properties transform in a representation of SL(2,). To be more precise, the set of graded characters is a vector-valued modular form. We would like to prove a similar theorem for non-chiral VOA in the generalised sense of a bi-modular form:χ_W_i(aτ+b/cτ+d,aτ̅+b/cτ̅+d)=∑_jρ(γ)_ijχ_W_j(τ,τ̅),γ=[ a b; c d ]∈SL(2,)where ρ(γ)_ij is the representation matrix of γ. Furthermore, we would like to classify all irreducible modules of the LLVOA on the lines of Dong <cit.> and obtain conditions on the Lorentzian lattice so that the associated LLVOA is rational, regular and strongly regular. 
The first result in this direction is due to Katrin Wendland <cit.>, see also <cit.> for some recent progress from a physical viewpoint. (2) Modular Tensor Categories and the Verlinde Conjecture: In conformal field theory in physics, one expects the Verlinde conjecture <cit.> to hold even for non-chiral CFTs. In <cit.>, Moore and Seiberg showed that the Verlinde conjecture follows from the axioms of a rational conformal field theory which they defined. Their axioms had an important associativity assumption. Huang <cit.> established the associativity (axiom in <cit.>) from the axioms of a vertex operator algebra and its modules. It required the introduction of the notion of a tensor product of modules <cit.>. Huang also proved the Verlinde conjecture using the definition of tensor product of modules of a VOA and the associativity theorem.We would like to define a similar notion of tensor product of modules for non-chiral VOAs and then prove the associativity theorem and the Verlinde conjecture. Additionally, one of the main results of <cit.> was the realisation that conformal field theories can be understood as a generalisation of group theory: the chiral (anti-chiral) algebra and its modules, along with intertwining operators, form a category called a modular tensor category <cit.>. Huang proved that vertex operator algebras and their modules give examples of braided tensor categories <cit.>. We would like to follow a similar approach and establish these results for non-chiral VOAs. Acknowledgments. The authors would like to thank Anatoly Dymarsky, Yi-Zhi Huang, Gregory Moore, Ananda Roy, Siddhartha Sahi, Hubert Saleur, Ashoke Sen, and Masahito Yamazaki for some useful correspondence and discussions. We especially thank Anatoly Dymarsky, Yi-Zhi Huang, and Hubert Saleur for comments on the manuscript and raising some interesting questions. We also thank Runkai Tao for proof-reading the manuscript. The work of R.K.S is supported by the US Department of Energy under grant DE-SC0010008.§ LATTICE CENTRAL EXTENSIONSIn this appendix, we collect some results about central extensions and refer the reader to <cit.> for more details. Recall that a central extension of G by A is a short exact sequence 0⟶ A ι⟶Ĝ⟶ G⟶ 0 ,such that ι(A) is contained in the center Z(Ĝ) of Ĝ. Here the surjective (projection) map Ĝ⟶ G is denoted by g↦g̅. We sometimes call Ĝ the central extension of G by A. Two central extensions Ĝ and Ĝ' are said to be equivalent if there exists an isomorphism ψ:Ĝ⟶Ĝ' compatible with the inclusions of A and the projections to G, i.e. ψ∘ι=ι and the image of ψ(g) in G equals g̅ for every g∈Ĝ, so that the obvious diagram formed by the two short exact sequences commutes.
It turns out that equivalence classes of central extensions is classified by the group cohomology H^2(G,_2) which is the quotient of 2-cocycles by 2-coboundaries (<cit.>). We define the commutator map c:G× G⟶_2 of a central extension Ĝ by the relation(-1)^c(a̅,b̅)=aba^-1b^-1, a,b∈Ĝ.The following proposition will be useful.<cit.> Two central extensions of G by _2 are equivalent if and only if their commutator maps are the same.Let us now consider the Lorentzian lattice Λ as an abelian group. Let Λ̂ be the central extension of Λ by _2 determined by the cocycle ϵ defined in (<ref>).The commutator function c : Λ×Λ→_2 for the central extension Λ̂ in (<ref>)is given by c(μ_1, μ_2) = μ_1 ∘μ_22. We use the notation e_λ = (1, λ), like in Section <ref>. Using the group operation (<ref>), it can be shown thate_μ_1 e_μ_2=(-1)^ϵ(μ_1,μ_2)e_μ_1+μ_2,e_μ_2 e_μ_1=(-1)^ϵ(μ_2,μ_1)e_μ_1+μ_2.Using the above equations, the commutator function is seen to be (-1)^c(μ_1, μ_2) = e_μ_1 e_μ_2 e_μ_1^-1 e_μ_2^-1=(-1)^ϵ(μ_1, μ_2)- ϵ(μ_2, μ_1) .If we take the basis of the lattice to be given by {λ_i }_i = 1^m+n, then we have say μ_1 = ∑_i = 1^m+n c_iλ_i ,μ_2 = ∑_i = 1^m+n d_iλ_i, c_i,d_i∈. Using this, we can write (-1)^ϵ(μ_1, μ_2)- ϵ(μ_2, μ_1) =(-1)^∑_i, j= 1^m + n c_i d_j ϵ(λ_i, λ_j)- c_i d_j ϵ(λ_j, λ_i)=∏_i, j = 1^m+n(-1)^ c_i d_j (ϵ(λ_i, λ_j)-ϵ(λ_j, λ_i)).Using (<ref>), each term in the above product can be written as (-1)^c_i d_j (ϵ(λ_i, λ_j) - ϵ(λ_j, λ_i) ) =(-1)^c_i d_jλ_i ∘λ_j, i > j, (-1)^-c_i d_jλ_i ∘λ_j=(-1)^c_i d_jλ_i ∘λ_j, i < j, 1= (-1)^c_i d_jλ_i ∘λ_i,i = j,where the equality in i < j case follows as (-1)^n = (-1)^-n, when n ∈, and the equality in case i = j follows as the lattice is even. Hence, we can simplify (<ref>) as(-1)^ϵ(μ_1, μ_2)- ϵ(μ_2, μ_1)= ∏_i, j = 1^m+n (-1)^c_i d_jλ_i ∘λ_j = (-1)^μ_1 ∘μ_2 = (-1)^c(μ_1, μ_2).Hence, we conclude c(μ_1, μ_2) = μ_1 ∘μ_2 2.Let {λ̃_i}_i=1^m+n be a basisof Λ different from {λ_i}_i=1^m+n. Then the cocycle ϵ̃:Λ×Λ⟶_2 defined as in (<ref>) ϵ̃(λ̃_i,λ̃_j)=λ̃_i∘λ̃_j,i>j, 0, otherwise,and extended to Λ by -bilinearity, determines a central extension equivalent to the one determined by ϵ. From Lemma <ref>, the commutator maps of ϵ and ϵ̃ are the same. Thus by Proposition <ref>, the two central extensions are equivalent. § LOCALITY OF PRODUCT OF MULTIPLE VERTEX OPERATORSIn this appendix, we show that that the product of multiple vertex operators of LLVOA and its modules exists and is local in the sense of the locality property <ref>.We first prove the existence and locality for product of three vertex operators. 
For three vectors e^λ, e^λ ', e^λ”∈[Λ], we want to calculate the product Y_V_Λ( e^λ,x_1,x̅_1)Y_V_Λ( e^λ' ,x_2,x̅_2)Y_V_Λ( e^λ”,x_3,x̅_3).From (<ref>) and (<ref>), we have Y_V_Λ( e^λ,x_1,x̅_1)Y_V_Λ( e^λ ' ,x_2,x̅_2)Y_V_Λ( e^λ”,x_3,x̅_3) =(x_2-x_3)^⟨α',α”⟩(x_2-x_3)^⟨β',β”⟩Y_V_Λ( e^λ,x_1,x̅_1)exp(∫α'(x_2)^-)exp(∫α”(x_3)^-)exp(∫α'(x_2)^+)exp(∫α”(x_3)^+)exp(∫β'(x̅_2)^-)exp(∫β”(x̅_3)^-)exp(∫β'(x̅_2)^+)exp(∫β”(x̅_3)^+)e_λ' e_λ”x_2^α'x_2^β' x_3^α”x_3^β”= (x_2-x_3)^⟨α',α”⟩(x_2-x_3)^⟨β',β”⟩exp(∫α(x_1)^-)exp(∫α(x_1)^+)exp(∫β(x_1)^-)exp(∫β(x_1)^+) e_λx_1^αx_1^βexp(∫α'(x_2)^-)exp(∫α”(x_3)^-)exp(∫α'(x_2)^+)exp(∫α”(x_3)^+)exp(∫β'(x̅_2)^-)exp(∫β”(x̅_3)^-)exp(∫β'(x̅_2)^+)exp(∫β”(x̅_3)^+)e_λ' e_λ”x_2^α'x_2^β' x_3^α”x_3^β”.Now using (<ref>) and (<ref>) we get Y_V_Λ( e^λ,x_1,x̅_1)Y_V_Λ( e^λ ' ,x_2,x̅_2)Y_V_Λ( e^λ”,x_3,x̅_3)=(x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩(x_1-x_3)^⟨α,α”⟩(x_1-x_3)^⟨β,β”⟩(x_2-x_3)^⟨α',α”⟩(x_2-x_3)^⟨β',β”⟩exp(∫α(x_1)^-) exp(∫α'(x_2)^-)exp(∫α”(x_3)^-)exp(∫α(x_1)^+)exp(∫α'(x_2)^+)exp(∫α”(x_3)^+)exp(∫β(x_1)^-)exp(∫β'(x̅_2)^-)exp(∫β”(x̅_3)^-)exp(∫β(x_1)^+)exp(∫β'(x̅_2)^+)exp(∫β”(x̅_3)^+) e_λe_λ' e_λ”x_1^αx_1^βx_2^α'x_2^β' x_3^α”x_3^β”≡(x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩(x_1-x_3)^⟨α,α”⟩(x_1-x_3)^⟨β,β”⟩(x_2-x_3)^⟨α',α”⟩(x_2-x_3)^⟨β',β”⟩ F(x_1,x_2,x_3)G(x_1,x_2,x_3).Substituting complex variables x_1=z_1,x_2=z_2 and x_3=z_3, it is easy to show that the right hand side of (<ref>) is the expansion of the function exp(⟨α,α'⟩log(z_1-z_2))exp(⟨β,β'⟩log(z̅_1-z̅_2))exp(⟨α,α”⟩log(z_1-z_3))exp(⟨β,β”⟩log(z̅_1-z̅_3))exp(⟨α',α”⟩log(z_2-z_3))exp(⟨β',β”⟩log(z̅_2-z̅_3))F(z_1,z_2,z_3)G(z̅_1,z̅_2,z_3)in the region |z_1|>|z_2|>|z_3|. Following the same arguments as in Section <ref>, we see that the three vertex operators satisfy locality for transpositions (12),(23)∈ S_3. For the transposition (13)∈ S_3, we get a sign (-1)^λ∘λ'+λ'∘λ”+λ∘λ” from the exchange of operators e_λ”e_λ' e_λ→ e_λe_λ' e_λ”. This sign can be used to show that the functions appearing in (<ref>) after the permutation z_1↔ z_3 is the expansion of the function (<ref>) in the region |z_3|>|z_2|>|z_1|. Since S_3 is generated by the three permutations (12),(23),(13), the locality property holds for any permutation in S_3.The proof for an arbitrary number of vertex operator is entirely similar.We now prove locality for general vectors of the form (<ref>). 
We first want to calculate Y_V_Λ(u,x_1,x̅_1)Y_V_Λ(v,x_2,x̅_2)Y_V_Λ(w,x_3,x̅_3),with vectors u,v,w of the form (<ref>):u =( α_1(-l_1)·α_2(-l_2)⋯α_k(-l_k) ·β_1(-l̅_1)·β_2(-l̅_2)⋯β_k̅(-l̅_k̅) )⊗ e^(α,β), v =( α'_1(-m_1)·α'_2(-m_2)⋯α'_ℓ(-m_ℓ) ·β'_1(-m̅_1)·β'_2(-m̅_2)⋯β'_ℓ̅(-m̅_ℓ̅) )⊗ e^(α',β'),w =( α”_1(-n_1)·α”_2(-n_2)⋯α”_m(-n_m) ·β”_1(-n̅_1)·β”_2(-n̅_2)⋯β”_m̅(-n̅_m̅) )⊗ e^(α”,β”),and we further use the notation that λ = (α, β), λ' = (α', β'), and λ” = (α”, β”).Using (<ref>), we have Y_V_Λ(u,x_1,x̅_1)Y_V_Λ(v,x_2,x̅_2)Y_V_Λ(w,x_3,x̅_3)=(x_1-x_2)^⟨α,α'⟩(x̅_1-x̅_2)^⟨β,β'⟩Y_V_Λ(u,x_1,x̅_1)×exp(∫α'(x_2)^-)exp(∫α”(x_3)^-)exp(∫β'(x̅_2)^-)exp(∫β”(x̅_3)^-)×∏_r=1^ℓ[1/(m_r-1)!d^m_r-1α'_r(x_2)/dx_2^m_r-1+(-1)^m_r-1(⟨α”,α'_r⟩/(x_2-x_3)^m_r-⟨α”,α'_r⟩/x_2^m_r)]×∏_s=1^ℓ̅[1/(m̅_s-1)!d^m̅_s-1β'_s(x̅_2)/dx̅_2^m̅_s-1+(-1)^m̅_s-1(⟨β”,β'_s⟩/(x̅_2-x̅_3)^m̅_s-⟨β”,β'_s⟩/x̅_2^m̅_s)] ×∏_p=1^m[1/(n_p-1)!d^n_p-1α_p”(x_3)/dx_3^n_p-1-⟨α',α_p”⟩/(x_2-x_3)^n_p-(-1)^n_p-1⟨α',α_p”⟩/x_3^n_p]×∏_q=1^m̅[1/(n̅_q-1)!d^n̅_q-1β_q”(x̅_3)/dx̅_3 ^n̅_q-1-⟨β',β_q”⟩/(x̅_2-x̅_3)^n̅_q-(-1)^n̅_q-1⟨β',β_q”⟩/x̅_3^n̅_q]×exp(∫α'(x_2)^+) exp(∫β'(x̅_2)^+)exp(∫α”(x_3)^+)exp(∫β”(x̅_3)^+)× e_λ' e_λ” x_2^α'x_2^β' x_3^α”x_3^β”.Now using the general expression for vertex operator (<ref>) and the relations (<ref>), (<ref>), (<ref>), and (<ref>) successively on the product in normal order,along with relations (<ref>),(<ref>) and the formal variable identity (<ref>), we obtain Y_V_Λ(u,x_1,x̅_1)Y_V_Λ(v,x_2,x̅_2)Y_V_Λ(w,x_3,x̅_3)=(x_1-x_2)^⟨α,α'⟩(x_1-x_2)^⟨β,β'⟩(x_1-x_3)^⟨α,α”⟩(x_1-x_3)^⟨β,β”⟩(x_2-x_3)^⟨α',α”⟩(x_2-x_3)^⟨β',β”⟩×exp(∫α(x_1)^-)exp(∫α'(x_2)^-)exp(∫α”(x_3)^-)×exp(∫β(x̅_1)^-)exp(∫β'(x̅_2)^-)exp(∫β”(x̅_3)^-)×∏_i=1^k[1/(l_i-1)!d^l_i-1α_i(x_1)/dx_1^l_i-1+(-1)^l_i-1(⟨α',α_i⟩/(x_1-x_2)^l_i-⟨α',α_i⟩/x_1^l_i+⟨α”,α_i⟩/(x_1-x_3)^l_i-⟨α”,α_i⟩/x_1^l_i)]×∏_j=1^k̅[1/(l̅_j-1)!d^l̅_j-1β_j(x̅_1)/dx̅_1^l̅_j-1+(-1)^l̅_j-1(⟨β',β_j⟩/(x̅_1-x̅_2)^l̅_j-⟨β',β_j⟩/x̅_1^l̅_j+⟨β”,β_j⟩/(x̅_1-x̅_3)^l̅_j-⟨β”,β_j⟩/x̅_1^l̅_j)] ×∏_r=1^ℓ[1/(m_r-1)!d^m_r-1α'_r(x_2)/dx_2^m_r-1+(-1)^m_r-1(⟨α”,α'_r⟩/(x_2-x_3)^m_r-⟨α”,α'_r⟩/x_2^m_r-⟨α,α'_r⟩/x_2^m_r)-⟨α,α'_r⟩/(x_1-x_2)^m_r]×∏_s=1^ℓ̅[1/(m̅_s-1)!d^m̅_s-1β'_s(x̅_2)/dx̅_2^m̅_s-1+(-1)^m̅_s-1(⟨β”,β'_s⟩/(x̅_2-x̅_3)^m̅_s-⟨β”,β'_s⟩/x̅_2^m̅_s-⟨β,β'_s⟩/x̅_2^m̅_s)-⟨β,β'_s⟩/(x̅_1-x̅_2)^m̅_s] ×∏_p=1^m[1/(n_p-1)!d^n_p-1α_p”(x_3)/dx_3^n_p-1-⟨α',α_p”⟩/(x_2-x_3)^n_p-⟨α,α_p”⟩/(x_1-x_3)^n_p-(-1)^n_p-1(⟨α',α_p”⟩/x_3^n_p+⟨α,α_p”⟩/x_3^n_p)]×∏_q=1^m̅[1/(n̅_q-1)!d^n̅_q-1β_q”(x̅_3)/dx̅_3 ^n̅_q-1-⟨β',β_q”⟩/(x̅_2-x̅_3)^n̅_q-⟨β,β_q”⟩/(x̅_1-x̅_3)^n̅_q-(-1)^n̅_q-1(⟨β',β_q”⟩/x̅_3^n̅_q+⟨β,β_q”⟩/x̅_3^n̅_q)]×exp(∫α(x_1)^+)exp(∫α'(x_2)^+)exp(∫α”(x_3)^+)×exp(∫β(x̅_1)^+) exp(∫β'(x̅_2)^+)exp(∫β”(x̅_3)^+)× e_λ e_λ' e_λ” x_1^αx_1^βx_2^α'x_2^β' x_3^α”x_3^β”.To prove locality, we substitute formal variables with complex numbers x_i→ z_i,x̅_i→z̅_i, i=1,2,3. 
We can write Y_V_Λ(u,z_1,z̅_1)Y_V_Λ(v,z_2,z̅_2)Y_V_Λ(w,z_3,z̅_3)=f(z_1,z_2,z_3,z̅_1,z̅_2,z_3) ×exp(∫α(z_1)^-)exp(∫α'(z_2)^-)exp(∫α”(z_3)^-)×exp(∫β(z̅_1)^-)exp(∫β'(z̅_2)^-)exp(∫β”(z̅_3)^-)×∏_i=1^k[1/(l_i-1)!d^l_i-1α_i(z_1)/dz_1^l_i-1+f_1,i(z_1,z_2,z_3)] ∏_j=1^k̅[1/(l̅_j-1)!d^l̅_j-1β_j(z̅_1)/dz̅_1^l̅_j-1+g_1,j(z̅_1,z̅_2,z̅_3)] ×∏_r=1^ℓ[1/(m_r-1)!d^m_r-1α'_r(z_2)/dz_2^m_r-1+f_2,r(z_1,z_2,z_3)] ∏_s=1^ℓ̅[1/(m̅_s-1)!d^m̅_s-1β'_s(z̅_2)/dz̅_2^m̅_s-1+g_2,s(z̅_1,z̅_2,z_3)] ×∏_p=1^m[1/(n_p-1)!d^n_p-1α_p”(z_3)/dz_3^n_p-1+f_3,p(z_1,z_2,z_3)] ∏_q=1^m̅[1/(n̅_q-1)!d^n̅_q-1β_q”(z̅_3)/dz̅_3 ^n̅_q-1+g_3,q(z̅_1,z̅_2,z_3)]×exp(∫α(z_1)^+)exp(∫α'(z_2)^+)exp(∫α”(z_3)^+)×exp(∫β(z̅_1)^+) exp(∫β'(z̅_2)^+)exp(∫β”(z̅_3)^+) e_λ e_λ' e_λ” z_1^αz_1^βz_2^α'z_2^β' z_3^α”z_3^β” ,in the domain |z_1|>|z_2|>|z_3|, where f(z_1,z_2,z_3,z̅_1,z̅_2,z_3) =exp(⟨α,α'⟩log(z_1-z_2))exp(⟨β,β'⟩log(z̅_1-z̅_2))exp(⟨α,α”⟩log(z_1-z_3))exp(⟨β,β”⟩log(z̅_1-z̅_3))exp(⟨α',α”⟩log(z_2-z_3))exp(⟨β',β”⟩log(z̅_2-z̅_3)), f_1,i(z_1,z_2,z_3)=(-1)^l_i-1(⟨α',α_i⟩/exp(l_ilog(z_1-z_2))-⟨α',α_i⟩/exp(l_ilog z_1)..+⟨α”,α_i⟩/exp(l_ilog(z_1-z_3))-⟨α”,α_i⟩/exp(l_ilog z_1))g_1,j(z̅_1,z̅_2,z̅_3)=(-1)^l̅_j-1(⟨β',β_j⟩/exp(l̅_jlog(z̅_1-z̅_2))-⟨β',β_j⟩/exp(l̅_jlogz̅_1)..+⟨β”,β_j⟩/exp(l̅_jlog(z̅_1-z̅_3))-⟨β”,β_j⟩/exp(l̅_jlogz̅_1)), f_2,r(z_1,z_2,z_3)=(-1)^m_r-1(⟨α”,α'_r⟩/exp(m_rlog(z_2-z_3))-⟨α”,α'_r⟩/exp(m_rlog z_2)-⟨α,α'_r⟩/exp(m_rlog z_2))-⟨α,α'_r⟩/exp(m_rlog(z_1-z_2)), g_2,s(z̅_1,z̅_2,z̅_3)=(-1)^m̅_s-1(⟨β”,β'_s⟩/exp(m̅_slog(z̅_2- z̅_3))-⟨β”,β'_s⟩/exp(m_rlog(z̅_2))-⟨β,β'_s⟩/exp(m_rlog(z̅_2))) -⟨β,β'_s⟩/exp(m̅_slog(z̅_1-z̅_2)),f_3,p(z_1,z_2,z_3)=-⟨α',α_p”⟩/exp(n_plog(z_2-z_3))-⟨α,α_p”⟩/exp(n_plog(z_1-z_3))-(-1)^n_p-1(⟨α',α_p”⟩/exp(n_plog z_3)+⟨α,α_p”⟩/exp(n_plog z_3)),g_3,q(z̅_1,z̅_2,z̅_3)=-⟨β',β_q”⟩/exp(n̅_qlog(z̅_2-z̅_3))-⟨β,β_q”⟩/exp(n̅_qlog(z̅_1-z̅_3))-(-1)^n̅_q-1(⟨β',β_q”⟩/exp(n̅_qlogz̅_3)+⟨β,β_q”⟩/exp(n̅_qlogz̅_3)).One can check that under any permutation σ∈ S_3 of the vertex operators, the product is given by the right hand side of (<ref>) in the domain |z_σ(1)|>|z_σ(2)|>|z_σ(3)|.For more than three vertex operators, the locality can be proved in a similar way.§ HEISENBERG ALGEBRAS AND THEIR REPRESENTATIONSIn this appendix, we prove some properties of Heisenberg algebras which is used in the paper. See <cit.> for definitions. We start with some results about Lie algebras. §.§ PreliminariesLet U be a left R-module and D=End_R(U) be the ring of R-linear endomorphisms of U. One can consider U as a left D-module by defining φ· u=φ(u),φ∈ D,  u∈ U .The D-linear (in)dependence of a subset of U is defined in an obvious way. A D-linear operator on U is a map f:U⟶ U such that f(φ· u-v)=φ· f(u)-f(v)=φ(f(u))-f(v),φ∈ D,  u,v∈ U . (Jacobson Density Theorem, <cit.>) Let U be a simple left R-module and write D=End_R(U). Let α be any D-linear operator on U and let X ⊆ U be any finite D-linearly independent subset. Then there exists an element r ∈ R such that rx=α x for all x ∈ X.Finally, we will need a generalisation of Schur's lemma to countably-infinite dimensional representations. (Dixmier's Lemma, <cit.>) Suppose V is a vector space overof countable dimension and that S⊂End(V) acts irreducibly. If T ∈End(V) commutes with every element of S, then T is a scalar multiple of the identity operator. Let g_1,g_2 be two complex Lie algebras (possibly countably infinite dimensional). Then every irreducible (g_1⊕g_2)-module which contains an irreducible g_1 or g_2-module is isomorphic to the tensor product of two irreducible g_1,g_2-modules. 
Conversely, the tensor product of irreducible g_1,g_2-modules of countable dimension is an irreducible (g_1⊕g_2)-module. Let V be an irreducible (g_1⊕g_2)-module and suppose Y⊆ V is an irreducible g_2-module. X=Hom_g_2(Y,V) is the space of g_2-linear maps from Y to V, i.e.X ={ϕ : Y → V| ϕ(g_2· v-w)=g_2·ϕ(v)-ϕ(w), ∀ g_2 ∈g_2,v,w∈ Y }.Then X is a g_1-module under the bilinear map (g_1,ϕ)↦ g_1·ϕ: y↦ g_1· (ϕ(y)) . It is easy to see that X⊗ Y is a (g_1⊕g_2)-module with the action(g_1,g_2)· (ϕ⊗ y)=(g_1·ϕ_g)⊗ y+ϕ⊗ (g_2· y).Moreover the natural map from X⊗ Y to Vϕ⊗ y↦ϕ(y) ,is a (g_1⊕g_2)-module map. We first show the map (<ref>) is non-zero. Take ϕ = id:Y → V, which clearly lies in X, then id⊗ y ↦id(y) = y. Hence, the (non-zero) image of this (g_1⊕g_2)-module map, (<ref>), is a (g_1⊕g_2)-module. As V is an irreducible (g_1⊕g_2)-module, the image is entire V, and the map (<ref>) is surjective. Finally, we show this map is injective. Indeed suppose ϕ⊗ v↦ϕ(v)=0. Then it suffices to show that either v=0 or ϕ=0. Suppose v≠ 0. By g_2-linearity of ϕ, we see that0=g_2·ϕ(v)=ϕ(g_2· v),∀ g_2∈g_2 .By irreducibility of Y, we see that ϕ=0. This implies that V≅ X⊗ Y as (g_1⊕g_2)-modules, and that the latter is also an irreducible module. As X⊗ Y is an irreducible (g_1⊕g_2)-module, X must be an irreducible g_1 module. Similar arguments apply when Y is an irreducible g_1-module. To prove the converse, we follow <cit.>. We want to show that if X,Y are irreducible g_1,g_2-modules, then X⊗ Y with (g_1⊕g_2)-action defined by (g_1,g_2)· (x⊗ y)=(g_1· x)⊗ y+x⊗ (g_2· y) ,is an irreducible (g_1⊕g_2)-module. Suppose M⊂ X⊗ Y be a non-trivial (g_1⊕g_2)-submodule. It suffices to show that M contains non-trivial pure tensors and then the irreducibility of X,Y will imply that M=X⊗ Y. Let us assume that x ⊗ y is a pure tensor in M. The irreducibility of X,Y guarantees that g_1 · x= X and g_2 · y = Y, which guarantees that x ⊗ Y and X ⊗ y ⊂ M. Consider any pure tensor x̃⊗ỹ∈ V, we know that x̃⊗ y belongs in M. Applying only g_2 on this vector, we can show thatx̃⊗ỹ∈ M, which implies that M = X ⊗ Y, since pure tensors span X ⊗ Y.We now show that M contains a pure tensor. Since any g-module is also a U(g)-module, our strategy is to produce a pure tensor from an arbitrary vector in M using the action of an element of U(g_1⊕g_2). Start with an arbitrary non-zero vector v=∑_i=1^n x_i ⊗ y_i ∈ M .Without loss of generality, we may choose {x_i} to be linearly independent over ℂ and y_i ≠ 0 for all i. By Dixmier's Lemma (or Schur's lemma when X,Y are finite dimensional), we have End_U(g_1)(X)≅ℂ,End_U(g_2)(Y)≅ℂ.In the statement of Jacobson Density Theorem, choose U = X, R = U(g_1), hence D = End_U(g_1)(X) =, due to (<ref>). Thus, let us take {x_1, x_2,…,x_n }, which is a finite D-linearly independent subset. Consider a - linear map α, such that α x_i= x_1 i=1, 0 i>1.We can then use Jacobson Density Theorem to conclude that there exists u ∈ U(g_1) such thatα x_i = u x_i , ∀1 ≤ i ≤ n. Hence, for u ⊗ 1 ∈ U(g_1) ⊗ U(g_2)≅ U(𝔤_1 ⊕𝔤_2),(u ⊗ 1) v=x_1 ⊗ y_1 ,which is a pure tensor.§.§ Modules of direct sum of Heisenberg algebrasWe now prove some basic results about the modules of direct sum of Heisenberg algebras. We will use the notations of <cit.> in this section. For a -graded vector space g=⊕_n∈g_n ,we will write g^+:=⊕_n>0g_n,g^-:=⊕_n<0g_n,g^0:=g_0 .Let g_1,g_2 be two Heisenberg algebras with central elements k and k̅ respectively. 
Analogous to the ℭ_k condition for a module of a Heisenberg algebra (see <cit.>), we definethe ℭ_k , k̅ condition for an (×)- graded module V of the direct sum g_1 ⊕g_2 of two Heisenberg algebras g_1,g_2. We say that V satisfies the ℭ_k , k̅ condition if * k and k̅ act on V by multiplication with k and k̅ respectively. * There exist M_1,M_2 ∈, such that V_m,n = 0, if either m > M_1 and n > M_2. The vacuum space of V, denoted by Ω_V⊂ V, is a subspace of non-zero vectors such that any v ∈Ω_V satisfies(g_1^+⊕g_2^+ )· v = 0.Let M(k),M(k̅) denote the unique (up to isomorphism) irreducible module of g_1,g_2 respectively. In particular, we have <cit.> M(k)≅ S(g_1^-), M(k̅)≅ S(g_2^-) .Then we have the following proposition. Let g_1,g_2 be two Heisenberg algebras with central elements 𝐤,𝐤̅ respectively. Then the following are true: * M(k)⊗ M(k) is the unique (up to isomorphism) irreducible (g_1⊕g_2)-module satisfying condition C_k,k.* The vacuum space Ω_M(k)⊗ M(k) is one-dimensional and Ω_M(k)⊗ M(k)=(1_M(k)⊗ 1_M(k̅)) . * For any (g_1⊕g_2)-module satisfying condition C_k,k, the(g_1⊕g_2)-module generated by a vacuum vector is equivalent to M(k)⊗ M(k).(1) Any (g_1⊕g_2)-module is in particular a g_2-module and hence completely reducible by <cit.>. Thus it contains an irreducible g_2-module. By Proposition <ref>, every such irreducible (g_1⊕g_2)-module is isomorphic to a tensor product X⊗ Y where X, Y are irreducible g_1,g_2-module respectively. By <cit.> we have X⊗ Y≅ M(k)⊗ M(k). (2) Since M(k),M(k̅) satisfies the conditions C_k,C_k̅ of <cit.>, it is clear that M(k)⊗ M(k) satisfies the condition C_k,k. The vacuum space of M(k)⊗ M(k) is easily seen to be the tensor product of the vacuum spaces of M(k) and M(k) respectively and hence is one-dimensional.(3) Let V be any (g_1⊕g_2)-module satisfying condition C_k,k. Let v∈ V and consider the (g_1⊕g_2)-submodule of V{(g_1,g_2)· v : (g_1,g_2)∈g_1⊕g_2}.It is easily seen that this gives an irreducible (g_1⊕g_2)-module and is isomorphic to M(k)⊗ M(k)by (1). We now have the following theorem.Any (g_1⊕g_2)-module is completely reducible and is isomorphic to copies of M(k)⊗ M(k). More precisely, for any such module V, the (well-defined) canonical linear map f : U(g_1⊕g_2) ⊗ _U(b_1⊕b_2)Ω_V⟶ V , u⊗ v↦ u· v, u∈ U(g_1⊕g_2),v∈Ω_V ,is a (g_1⊕g_2)-module isomorphism. In particular, the linear map M(k)⊗ M(k)⊗_Ω_V≅ U(g_1^-⊕g_2^-)⊗_Ω_V⟶ V , u⊗ v↦ u· v, u∈ U(g_1^-⊕g_2^-),v∈Ω_V,defines a (g_1⊕g_2)-module isomorphism, Ω_V now regarded as a trivial (g_1⊕g_2)-module.We closely follow <cit.> for this proof.First, we show the f in (<ref>) is an injective map. Note that from the action of f, it is clear that f is injective on 1⊗Ω_V↪ U(g_1⊕g_2)⊗_U(b_1⊕b_2)Ω_V. Let K be the kernel of f. Then it is easy to see that K is a (g_1 ⊕g_2)-submodule of U(g_1⊕g_2)⊗_U(b_1⊕b_2)Ω_V and has a grading induced from U(g_1⊕g_2)⊗_U(b_1⊕b_2)Ω_V. Thus K satisfies the condition ℭ_k,k̅. It follows that it has a vacuum vector, say v∈ K. But then v∈Ω_V because the vacuum space of U(g_1⊕g_2)⊗_U(b_1⊕b_2)Ω_V is precisely Ω_V. Since f(v)=0, it contradicts the fact that f is injective on Ω_V.We now prove the surjectivity of f. Suppose V/Im(f)≠ 0. Since Im(f) is a (g_1 ⊕g_2)-submodule of V, V/Im(f) is naturally a (g_1 ⊕g_2)-module and satisfies the condition C_k,k̅ since V does. Then there exists a vacuum vector w∈ V/Im(f). Let w=[v] for some v∈ V. It follows that v∉Im(f) and x_1i· v,x_2i· v∈Im(f), i∈_+,since x_1i· [v],x_2i· [v]=0. 
Moreover, due to the C_k, k̅ property, there exists i_0,j_0∈_+ such that x_1i· v,x_2j· v=0  for all  i>i_0,j>j_0 .We will now show that there exists t∈Im(f) such that x_ki· t= x_ki· v,  for all  i∈_+andk ∈{ 1,2}.This will imply that t-v is a vacuum vector but t-v∈ V∖Ω_V,since t-v∉Im(f) and Ω_V⊂Im(f), which is a contradiction. To this end, choose a basis {ω_γ}_γ∈Γ (Γ an index set) of Ω_V. Then by injectivity of f and the first isomorphism theorem Im(f)≅∐_γ∈ΓU(g_1⊕g_2)⊗_U(b_1⊕b_2)ω_γ.Let s^1_iγ,s^2_jγ be the component of x_1i· v,x_2j· v respectively under this decomposition. Then for any i,i',j,j'∈_+ we havex_1ix_1i'· v=x_1i'x_1i· v, x_2ix_2i'· v=x_2i'x_2i· v.as the Lie Bracket is zero on g_1^+ and g_2^+. This implies that for all γ∈Γ x_1i· s^1_i'γ=x_1i'· s^1_iγ, x_2j· s^2_j'γ=x_2j'· s^2_jγ.It is also clear from (<ref>) that s^1_iγ=s^2_jγ=0,for all  i>i_0,j>j_0 . Moreover there is a finite subset Γ_0⊂Γ such thats^1_iγ=s^2_jγ=0,for all  γ∈Γ∖Γ_0 ,since any vector in Im(f), in particular x_1i· v and x_2i· v, has a finite decomposition in (<ref>). Now, fix a γ∈Γ_0 and identify U(g_1⊕g_2)⊗_U(b_1⊕b_2)ω_γ as the polynomial algebra over generators y_1i,y_2i. Then (<ref>) implies that ∂/∂ y_1is^1_i'γ=∂/∂ y_1i's^1_iγ,∂/∂ y_2js^2_j'γ=∂/∂ y_2j's^2_jγ, i,i',j,j'∈_+ .Since s^1_iγ,s^2_jγ=0 for i>i_0,j>j_0, it is clear from (<ref>) that s^1_iγ,s^2_jγ lie in the polynomial algebra generated by finitely many y_1i,y_2j respectively for i≤ i_0,j≤ j_0. Thus there exists s^1,s^2 in these algebras such that k∂/∂ y_1is^1=s^1_iγ,k̅∂/∂ y_2is^2=s^2_iγ,for i≤ i_0,j≤ j_0 and hence for all i,j∈_+. We then take t^1_γ=s^1,t^2_γ=s^2 and put t:=∑_γ∈Γ(t^1_γ+t^2_γ) .Note that x_2j· t^1_γ=x_1i· t^2_γ=0 and hence (<ref>) holds. Finite dimensionality of V_(h,h̅) Assuming that Λ_1,Λ_2 are discrete, we show that dimV_(h,h̅)<∞. It suffices to show that there exist only finitely many vectors of the form (<ref>), satisfying the conditions in (<ref>). We first show that for any h,h̅∈ℝ the number of distinct λ = (α, β) ∈Λ satisfying ⟨α,α⟩≤ 2 hand ⟨β,β⟩≤ 2 h̅ ,where α∈Λ_1, β∈Λ_2,can be only finitely many.Consider the setsX_1 = {α∈Λ_1  | ⟨α, α⟩≤ 2h} X_2 ={β∈Λ_2 | ⟨β, β⟩≤ 2h̅}which have finite cardinality, say N_1 and N_2, due to the assumption that Λ_1 and Λ_2 are discrete. Then the set X ={λ=(α,β)∈Λ | ⟨α, α⟩<2h,  ⟨β, β⟩ < 2h̅}is finite because the mapX⟶ X_1× X_2λ=(α,β)⟼ (α,β)is injective. More precisely # X≤ N_1N_2.Now, as there are only finitely many combinations of positive integers { m_i }_i = 1^k and {m̅_i }_i = 1^k̅ such that h_v - ⟨α,α⟩/2 = ∑_i=1^km_i,h_v - ⟨β,β⟩/2 = ∑_j=1^k̅m̅_j ,hence there are only finitely many generating vectors possible, which implies that dim(V_h, h̅) < ∞.This is true as there are only finitely many α and β satisfying the property see Figure <ref>. We apply the Gram-Schmidt procedure, modifying it appropriately. Start with the basis {λ_i}_i=1^d of h. Without the loss of generality assume that[If not, permute the basis such that this is true.] α^λ_1≠ 0 and define Step 1 :μ'_1=λ_1, Step 2 :μ'_2=λ_2-⟨α^λ_1,α^λ_2⟩/⟨α^λ_1,α^λ^1⟩ λ_1.If α^μ_2'≠ 0, then define Step 3 : μ_3'=λ_3-⟨α^μ_2',α^λ_3⟩/⟨α^μ_2',α^μ_2'⟩ μ_2'-⟨α^μ_1',α^λ_3⟩/⟨α^μ_1',α^μ_1'⟩ μ_1'If α^μ_2'=0,then discard μ_2' and go back to Step 2 and define μ_3'=λ_3-⟨α^μ_1',α^λ_3⟩/⟨α^μ_1',α^μ_1'⟩ μ_1'.Repeat the procedure until all λ_i is exhausted. Let c be the number of nonzero α^μ_i'. 
Relabel the vectors μ_i' corresponding to the m nonzero α^μ_i' as μ_j”, j=1,… m and define μ_j=μ_j”/⟨α^μ_j',α^μ_j'⟩, j=1,… m,and put u_j=α^μ_j, u_j=1,… m.Then it is easy to check that ⟨ u_i,u_j⟩=δ_i,j. We can repeat the above procedure with α^λ_i replaced by β^λ_i to construct n number of nonzero v_j. Action of operators List of formulae e_λ e_μ=(-1)^ϵ(λ,μ)e_λ+μ. e_λ'e^λ= (-1)^ϵ(λ',λ)e^λ + λ'. x^α^λ (u ⊗ e^λ ' )= x^⟨α^λ, α^λ'⟩ (u ⊗ e^λ') x̅^β^λ (u ⊗ e^λ')=x̅^⟨β ' , β⟩(u ⊗ e^λ') α' (0) e^λ =⟨α' , α⟩e^λ β'(0) e^λ =± ⟨β', β⟩e^λ § PROOF OF CONJECTURE <REF> FOR M=N CASESuppose Λ⊂^m,m is a Lorentzian lattice. We want to show that for any f∈Aut(Λ) we have f(Λ_1^0)=Λ_1^0, f(Λ_2^0)=Λ_2^0.We will prove this in a series of results. Let Λ and Λ̃ be two Lorentzian lattice related by an O(m,)×O(m,) transformation O, i.e.𝒢_Λ̃= 𝒢_Λ O.Then we have Aut(Λ̃)=OAut(Λ)O^-1.Moreover Aut(Λ) preserves Λ_i^0 if and only if Aut(Λ̃) preserves Λ̃_i^0, where i ∈{ 1,2 }.Let f∈Aut(Λ), then f̃=OfO^-1∈Aut(Λ̃). Thus OAut(Λ)O^-1⊆Aut(Λ̃). The reverse containment is similar.Now suppose Aut(Λ̃) preserves Λ̃_1^0. For any (α,0)∈Λ_1^0, we have f(α,0)=(O^-1f̃O)(α,0) for some f̃∈Aut(Λ̃). Since O,O^-1 preserves ⟨α,α⟩ and ⟨α,α⟩ for any (α,β)∈Λ it is clear that f(α,0)=(α',0)∈Λ_1^0.Similarly, f preserves Λ_2^0. The converse is analogous.Let Λ⊂^m,m be any Lorentzian lattice, then their exists an O(m,)×O(m,) transformation which relates Λ to a Lorentzian lattice Λ_S with generator matrix of the form 𝒢_Λ_S=[γ^⋆γ^⋆; γB+1/2 γB-1/2 ],where B is an anti-symmetric matrix, γ is the generator matrix for a lattice Γ in ^m, and γ^⋆ is the generator matrix for the dual lattice Γ^⋆. We will use the result of <cit.> for the proof of this theorem. Let {e_i}_i=1^2m be the standard basis of ^m,m. Then we change the basis of ^m,m from standard basis to the basis {f_i}_i=1^2m:f_i=e_i+e_m+i/√(2),f_m+i =e_i-e_m+i/√(2),where i ∈{1,…,m }. Let 𝒢_Λ and 𝒢_Λ be the generator matrix for Λ in the {e_i} and {f_i} basis respectively. Now from Appendix C of <cit.>, by an O(m,)×O(m,) transformation, we can transform Λ into the lattice Λ_S with generator matrix[Note that in our convention, the rows of the generator matrix are basis for the lattice while in <cit.> it is the columns.] 𝒢_Λ_S=1/√(2)([ 2 γ^⋆ 0; γ B γ ]),in the {f_i} basis for some antisymmetric matrix B. Changing the basis of ^m,m back to the standard basis {e_i} amounts to right multiplying the generator matrix (<ref>) by [1/√(2)1/√(2);1/√(2) -1/√(2) ],where 1 is the m× m identity matrix. This gives the generator matrix in (<ref>). We now have the following important theorem.Let Λ_S⊂^m,m be the Lorentzian lattice with generator matrix 𝒢_Λ_S given in (<ref>). Then Aut(Λ_S) preserves (Λ_S)_i^0, i=1,2.Let {α_i}_i=1^m be an integral basis of Γ and {α^⋆_i}_i=1^m be the basis of Γ^⋆ dual to {α_i}_i=1^m, i.e.∑_k=1^m(α_i)^k(α^⋆_j)^k=δ_i^j , ∑_k=1^m(α^⋆_k)^i(α_k)^j=δ_i^j. Here the superscript j over the vector denotes the jth component of the vector in the standard basis of ^m. A general vector λ∈Λ_S can be written as λ=(α,β) where α^j= ∑_i=1^m m_i(α^⋆_i)^j+1/2∑_i=1^m(∑_k=1^mB_jk(α_i)^k+(α_i)^j)n_i, β^j= ∑_i=1^m m_i(α^⋆_i)^j+1/2∑_i=1^m(∑_k=1^mB_jk(α_i)^k-(α_i)^j)n_i,and m_i,n_i∈. 
In vector notation we have α⃗=m⃗^Tγ^⋆+n⃗^T(γB+1/2),β⃗=m⃗^Tγ^⋆+n⃗^T(γB-1/2).We then have ⟨α,α⟩ =∑_i,j=1^m[m_im_j⟨α^⋆_i,α^⋆_j⟩+2 ×1/2m_in_j∑_k,ℓ=1^mB_kℓ(α_i^⋆)^k(α_j)^ℓ +2 ×1/2m_in_j⟨α^⋆_i,α_j⟩..+1/4∑_k,ℓ,p = 1 ^mn_in_pB_jk(α_i)^kB_j,ℓ(α_p)^ℓ + 2 ×1/4∑_k, ℓ = 1 ^m n_i n_ℓ B_jk (α_i)^k (α_l)^j + 1/4n_in_j⟨α_i,α_j⟩]=m⃗^T𝐠^-1m+∑_i,j,k,ℓ,p,q=1^mm_in_j(α^⋆_i)^p(α^⋆_q)^p(α_q)^kB_kℓ(α_j)^ℓ + m⃗^Tn+1/4∑_i,j,k,ℓ,p,u,v,s,t = 1 ^mn_in_p(α^⋆)^j_u(α_u)^sB_sk(α_i)^k(α^⋆)^j_v(α_v)^tB_tℓ(α_p)^ℓ+1/4n⃗^T𝐠n=m⃗^T𝐠^-1m+m^T𝐛𝐠^-1n+m⃗^Tn-1/4n^T𝐛𝐠^-1𝐛n+1/4n⃗^T𝐠n=m⃗^T𝐠^-1m+m^T𝐛𝐠^-1n+m⃗^Tn+1/4n⃗^T(𝐠-𝐛𝐠^-1𝐛)n,where m⃗,n⃗∈^m are column vectors, 𝐠_ij=⟨α_i,α_j⟩ and 𝐠^⋆_ij=𝐠^-1_ij=⟨α^⋆_i,α_j^⋆⟩ are the Gram matrices of Γ and Γ^⋆ respectively and 𝐛_ij=(α_i)^kB_kℓ(α_j)^ℓ is the antisymmetric matrix B in the {α_i} basis of ^m. In the last step we used (<ref>) and the fact that B and 𝐛 are antisymmetric. In terms of matrices we have𝐠=γγ^T,𝐠^⋆=γ^⋆(γ^⋆)^T=(γ^T)^-1γ^-1=𝐠^-1,𝐛=γ Bγ^T, B=γ^-1𝐛γ^⋆.Similarly we have ⟨β,β⟩=m⃗^T𝐠^-1m+m^T𝐛𝐠^-1n-m⃗^Tn+1/4n⃗^T(𝐠-𝐛𝐠^-1𝐛)n.From <cit.>, we have that the following transformations generate the group O(m,m,): (1)  m⃗↔n⃗ (2)  m⃗→m⃗-𝐍n⃗,  n⃗→n⃗. (1)  m⃗↔n⃗ and2𝐠^-1 ↔1/2(𝐠-𝐛 𝐠^-1𝐛)𝐛𝐠^-1 ↔-𝐠^-1𝐛, (2)   m⃗→m⃗-𝐍n⃗and 𝐛→𝐛+2 𝐍,where 𝐍 is an arbitrary anti-symmetric matrix with integer entries. These transformations must be understood in the following way: a transformation on the lattice can be implemented in two ways, first by action on the integer coordinates (m⃗^T,n⃗^T) and second, by acting on the generator matrix 𝒢_Λ_S from the left. The transformations in (<ref>) is a composition of both of these. To show that these transformations indeed generate the automorphism group O_Λ_S(m,m,) we need to show that these transformations leave the inner product invariant and preserves the lattice. It is easy to check that under these transformations the ⟨α,α⟩ and ⟨β,β⟩ are preserved <cit.>. In particular these transformations preserve the Lorentzian inner product. We now show that these transformations preserve the lattice. Let us start with (1). It is clear that m↔n⃗ preserves the lattice. We now check that the transformation on the generator matrix preserves the lattice. First note that from (<ref>) we get[Note that we are assuming that b and B are invertible in writing this transformation but one can also write the transformation in a nonsingular way even when b is not invertible <cit.>. The manipulations below will also be modified in a nonsingular way.] 𝐠^-1↔1/4(𝐠-b𝐠^-1b) , 𝐛↔ 4(b-𝐠b^-1𝐠)^-1.From these transformations, we want to find how γ and γ^⋆ transform. Using the fact that 𝐛 is antisymmetric, we guess the transformation to beγ↔± 2(γ±𝐛γ^⋆)^⋆=:±γ_±,γ^⋆↔±1/2(γ±𝐛γ^⋆)=:±γ^⋆_±. It is easy to check that (<ref>) reproduces the first equation in (<ref>). These are all the solutions to the first equation in (<ref>) but as we will show below, only two of them preserve the lattice. We also see that under the second equation of (<ref>) we have B=γ^-1𝐛γ^⋆↔(γ+𝐛γ^⋆)^T(𝐛-𝐠𝐛^-1𝐠)^-1(γ+𝐛γ^⋆)=:B' .We claim that the following two transformations on the generator matrix preserve the lattice: (i)   𝒢_Λ_S=[γ^⋆γ^⋆; γB+1/2 γB-1/2 ]→𝒢^(i)_Λ_S:=[ γ^⋆_+-γ^⋆_-; γ_+B'+1/2 -γ_- B'-1/2 ], (ii)   𝒢_Λ_S=[γ^⋆γ^⋆; γB+1/2 γB-1/2 ]→𝒢^(ii)_Λ_S:=[ -γ^⋆_+γ^⋆_-; -γ_+B'+1/2 γ_- B'-1/2 ].We now do some manipulations to prove our claim. 
We have γ_+^⋆=1/2(γ+bγ^⋆) =1/2γ(B+1). Similarly γ_-^⋆=-1/2γ(B-1). Next we have γ_+B'+1/2 =2(γ+ bγ^⋆)^⋆[(γ+bγ^⋆)^T(b-𝐠b^-1𝐠)^-1(γ+bγ^⋆)+1/2]=(b-𝐠b^-1𝐠)^-1(γ+bγ^⋆)+(γ+bγ^⋆)^⋆. Observe that (b-𝐠b^-1𝐠)^-1(γ+bγ^⋆) =(b-𝐠b^-1𝐠)^-1(γ+γ B)=(b-𝐠b^-1𝐠)^-1γ(1+B)=(γ^-1b+γ^-1𝐠b^-1𝐠)^-1(1+B)=(B γ^T-γ^Tb^-1γγ^T)^-1(1+B)=(B γ^T-(γ B)^-1γγ^T)^-1(1+B)=((B-B^-1) γ^T)^-1(1+B)=γ^⋆(B-B^-1)^-1(1+B). Next note that B-B^-1=(1+B)(1-B^-1). Then we get (b-𝐠b^-1𝐠)^-1(γ+bγ^⋆) =γ^⋆(1-B^-1)^-1 =γ^⋆((B-1) B^-1)^-1 =-γ^⋆ B(1-B)^-1. We also have (γ+bγ^⋆)^⋆=(γ(1+B))^⋆ =γ^⋆(1-B)^-1. Putting all this together we get γ_+B'+1/2=γ^⋆. Similarly we have γ_-B'-1/2=γ^⋆[(B-B^-1)^-1(1-B)-(1+B)^-1]. We now use B-B^-1=-(1-B)(1+B^-1) to get γ_-B'-1/2=-γ^⋆. Thus under (1) of (<ref>) the vector α transforms as α⃗=m⃗^Tγ^⋆+n⃗^T(γB ±1/2) →1/2n⃗^T(γ+bγ^⋆)+2 m⃗^T(γ+ bγ^⋆)^⋆[(γ+bγ^⋆)^T(b-𝐠b^-1𝐠)^-1(γ+bγ^⋆)+1/2]=1/2n⃗^T(γ+bγ^⋆)+m⃗^T[(b-𝐠b^-1𝐠)^-1(γ+bγ^⋆)+(γ+bγ^⋆)^⋆], that is, α⃗ →1/2n⃗^Tγ(B+1)+m⃗^T[-γ^⋆ B(1-B)^-1+γ^⋆(1-B)^-1]=1/2n⃗^Tγ(B+1)+m⃗^T[γ^⋆(1-B)(1-B)^-1]=1/2n⃗^Tγ(B+1)+m⃗^Tγ^⋆. Similarly we see that β⃗ →1/2n⃗^T(γ-bγ^⋆)+m⃗^T[(b-𝐠b^-1𝐠)^-1(γ-bγ^⋆)+(γ-bγ^⋆)^⋆]=1/2n⃗^Tγ(1-B)+m⃗^Tγ^⋆[(B-B^-1)^-1(1-B)-(1+B)^-1]. We now use B-B^-1=-(1-B)(1+B^-1) to get β⃗ →1/2n⃗^Tγ(1-B)-m⃗^Tγ^⋆[B(1+B)^-1+(1+B)^-1]=1/2n⃗^Tγ(1-B)-m⃗^Tγ^⋆ = - β⃗. Thus under the transformation of the generator matrix in (1) of (<ref>) we see that (α⃗,β⃗)→ (α⃗,-β⃗). Under the second transformation we see that B_ij→ B_ij'=(α_k^⋆)^i(𝐛+2𝐍)_kℓ(α_j)^ℓ, or in matrix notation B→ B+2(γ^⋆)^T𝐍γ^⋆; we will see below that the lattice is preserved under this transformation as well. From (<ref>), (<ref>), (<ref>) and (<ref>) we see that 𝒢^(i)_Λ_S= [ γB+1/2 γB-1/2;γ^⋆γ^⋆ ] =[ 0 1; 1 0 ]𝒢_Λ_S. Since ([ 0 1; 1 0 ]) has determinant -1 and is integral, the transformation (i) of (<ref>) preserves the lattice. Similarly we have 𝒢^(ii)_Λ_S= [ -γB+1/2γB-1/2;-γ^⋆ γ^⋆ ] =[0 -1; -10 ]𝒢_Λ_S and hence it also preserves the lattice. The full transformation (1) of (<ref>) acting on the integer coordinates as well as the generator matrix can be written as a transformation acting only on the integer coordinates as follows: (i)  (m⃗^T,n⃗^T)→ (m⃗^T,n⃗^T), (ii)  (m⃗^T,n⃗^T)→ (-m⃗^T,-n⃗^T). This generates a ℤ_2 subgroup of the automorphism group O_Λ_S(m,m,ℤ). Obviously these transformations preserve ⟨α,α⟩ and ⟨β,β⟩ and hence the full Lorentzian norm. One can check directly that the other choices in (<ref>) do not preserve the lattice. Now, we will show that the second transformation in (<ref>) preserves the lattice too. Using the transformation b→b + 2 N, we obtain γ B =𝐛γ^⋆→𝐛γ^⋆ + 2Nγ^⋆= γ B+ 2Nγ^⋆. We can input the above relation into the generator matrix to see how it transforms: 𝒢_Λ_S=[γ^⋆γ^⋆; γB+1/2 γB-1/2 ]→𝒢'_Λ_S:=[ γ^⋆ γ^⋆; γB+1/2 + Nγ^⋆ γB-1/2 + Nγ^⋆ ]. From the above equation it follows that 𝒢'_Λ_S =[ 1 0; 𝐍 1 ]𝒢_Λ_S. Again, since ([ 1 0; 𝐍 1 ]) is unimodular and integral, this transformation preserves the lattice. The transformation on the integer coordinates in (2) of (<ref>) can also be nicely represented as (m⃗^T, n⃗^T) →((m⃗-𝐍n⃗)^T, n⃗^T) = (m⃗^T +n⃗^T𝐍 , n⃗^T) = (m⃗^T, n⃗^T)[ 1 0; N 1 ], where we have used that N is an anti-symmetric integral matrix. To get the complete transformation on the integer coordinates while keeping the generator matrix fixed, we compose these two transformations to see that (m⃗^T, n⃗^T) → (m⃗^T, n⃗^T)[ 1 0; N 1 ] [ 1 0; N 1 ] =(m⃗^T + 2 n⃗^TN, n⃗^T). As N is integral, we see that the transformed vector lies on the lattice again. Checking that this transformation preserves the norm is a straightforward computation. 
Thus all automorphisms of Λ_S will preserve ⟨α,α⟩ and ⟨β,β⟩. Now suppose (α,0)∈(Λ_S)_1^0 maps to (α',β')∈Λ_S under some automorphism. Since ⟨β',β'⟩=0 and the norm on ℝ^m is positive definite, we conclude that β'=0 and the automorphism preserves (Λ_S)_1^0. Similarly, (Λ_S)_2^0 is also preserved under any automorphism of Λ_S. Combining Theorem <ref>, Theorem <ref> and Lemma <ref> proves Conjecture <ref> for the m=n case. We remark that the methods of this Appendix clearly do not apply to the m≠ n case. One needs new tricks to prove the general result.
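As a purely numerical sanity check of the block form of 𝒢_Λ_S and of the two lattice-preserving transformations used in the proof, the sketch below verifies that the Lorentzian Gram matrix of 𝒢_Λ_S equals the off-diagonal identity block matrix (so Λ_S is even and self-dual) and that the matrices [ 0 1; 1 0 ] and [ 1 0; 𝐍 1 ] are integral and unimodular. The concrete γ and B are arbitrary choices, and the "1/2" entries are read as γ/2; this is only an illustration under those assumptions, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4

# Assumed concrete data: an invertible integer gamma (rows generate Gamma)
# and an antisymmetric B; gamma_star generates the dual lattice Gamma*.
gamma = np.eye(m) + np.diag(np.ones(m - 1), 1)
gamma_star = np.linalg.inv(gamma).T
A = rng.integers(-2, 3, size=(m, m)).astype(float)
B = A - A.T

# Generator matrix of Lambda_S, reading the "1/2" blocks as gamma/2.
G = np.block([[gamma_star, gamma_star],
              [gamma @ B + gamma / 2, gamma @ B - gamma / 2]])

# Lorentzian metric on R^{m,m}.
eta = np.block([[np.eye(m), np.zeros((m, m))],
                [np.zeros((m, m)), -np.eye(m)]])

# Gram matrix in the Lorentzian metric: the form [[0, 1], [1, 0]] shows that
# Lambda_S is an even self-dual Lorentzian lattice.
gram = G @ eta @ G.T
expected = np.block([[np.zeros((m, m)), np.eye(m)],
                     [np.eye(m), np.zeros((m, m))]])
assert np.allclose(gram, expected)

# The two basis changes used above are integral with determinant +-1,
# hence they map Lambda_S to itself.
N = rng.integers(-2, 3, size=(m, m))
N = N - N.T
S1 = np.block([[np.zeros((m, m)), np.eye(m)],
               [np.eye(m), np.zeros((m, m))]])
S2 = np.block([[np.eye(m), np.zeros((m, m))],
               [N, np.eye(m)]])
for S in (S1, S2):
    assert np.allclose(np.round(S), S)            # integral entries
    assert np.isclose(abs(np.linalg.det(S)), 1.0)  # unimodular
print("Lambda_S is even self-dual; both transformations preserve it.")
```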
http://arxiv.org/abs/2312.16296v1
{ "authors": [ "Ranveer Kumar Singh", "Madhav Sinha" ], "categories": [ "hep-th", "math.QA" ], "primary_category": "hep-th", "published": "20231226190137", "title": "Non-Chiral Vertex Operator Algebra Associated To Lorentzian Lattices And Narain CFTs" }
[ [===== Our work aims to reconstruct a 3D object that is held and rotated by a hand in front of a static RGB camera. Previous methods that use implicit neural representations to recover the geometry of a generic hand-held object from multi-view images achieved compelling results in the visible part of the object. However, these methods falter in accurately capturing the shape within the hand-object contact region due to occlusion. In this paper, we propose a novel method that deals with surface reconstruction under occlusion by incorporating priors of 2D occlusion elucidation and physical contact constraints. For the former, we introduce an object amodal completion network to infer the 2D complete mask of objects under occlusion. To ensure the accuracy and view consistency of the predicted 2D amodal masks, we devise a joint optimization method for both amodal mask refinement and 3D reconstruction. For the latter, we impose penetration and attraction constraints on the local geometry in contact regions. We evaluate our approach on HO3D and HOD datasets and demonstrate that it outperforms the state-of-the-art methods in terms of reconstruction surface quality, with an improvement of 52% on HO3D and 20% on HOD. Project webpage: https://east-j.github.io/ihor. § INTRODUCTION3D object reconstruction from images has many applications in fields such as robotic manipulation and AR/VR. A handy and low-cost way to obtain 3D models is to rotate an object in hand in front of a camera and reconstruct the 3D objects from a captured video, which is the focus of this work. However, in-hand 3D object reconstruction in this setting poses several challenges, such as the lack of prior knowledge of the object shape, the estimation of the relative poses between the camera and the object, and particularly, the occlusion caused by the hand-object interaction.Implicit neural representations, combined with volume rendering techniques <cit.>, have proven to be remarkably effective in reconstructing 3D geometry from multi-view images without requiring any prior knowledge of the object. Several in-hand object reconstruction works based on these representations <cit.> have achieved compelling results in the visible part of the object. However, their performance degrades significantly when objects are heavily occluded by the hand as these methods optimize 3D object models to fit the observed images only.In this paper, we argue that dealing with object surface reconstruction under occlusion demands the incorporation of additional priors beyond direct observation. Humans are capable of intuitively elucidating objects under occlusion. Some works <cit.> therefore explore large-scale data to learn the capability of occlusion elucidation for images with amodal mask completion. However, leveraging this 2D elucidation capability for multi-view 3D reconstruction is challenging as the amodal masks may be inaccurate and inconsistent across multiple views, especially in the heavily occluded areas. To address this issue, we add a semantic amodal mask head to the implicit 3D reconstruction neural network and refine the masks by jointly optimizing the parameters of both networks.Though the completed amodal mask can help to constrain a rough global shape of an object, it may not reconstruct local surfaces well, as small changes in the 3D local surfaces might not render apparent changes in the 2D masks. 
On the other hand, we humans can feel the object shapes and manipulate objects by hands without seeing them and an attempt has been made in <cit.> that robotic hands can accomplish similar tasks with only simple tactile information (touch object surfaces or not) collected by tactile sensors attached on robotic hands. With this inspiration, we propose to infer the occluded local object surfaces by reasoning about the physical contact between objects and hands: the reconstructed hands and objects must not intersect with each other and must be in contact to enforce friction so that objects will not fall due to gravity. To this end, we introduce penetration and attraction penalties to guide the inference of the occluded surface in contact areas with hands. By incorporating the 2D occlusion elucidation and the physical contact priors, we propose a novel in-hand 3D object reconstruction method based on implicit representations from a monocular RGB video sequence. We evaluate our method on two datasets HO3D <cit.> and HOD <cit.>. The experiments show that our method can accurately reconstruct objects in both visible and invisible parts and significantly outperforms the state-of-the-art methods in terms of reconstruction quality. Our contributions can be summarized as follows: * We propose a novel method for implicit hand-held object reconstruction that first leverages priors of 2D occlusion elucidation and physical contact constraints.* For the 2D occlusion elucidation prior, we introduce an amodal mask head and a joint optimization method for both amodal mask refinement and 3D object reconstruction to ensure the accuracy and view consistency of the predicted amodal masks.* For the physical contact prior, we devise penetration loss and attraction loss to regularize the occluded object surface.* We conduct extensive experiments on HO3D and HOD datasets and demonstrate that our approach outperforms state-of-the-art methods in terms of surface quality, with an improvement of 52% on HO3D and 20% on HOD. § RELATED WORKS Multi-view 3D Reconstruction. Recovering 3D geometry from multi-view images has a history in computer graphics and computer vision. Traditional methods <cit.> involve SFM <cit.> for camera estimation, dense point clouds via MVS, and Poisson reconstruction <cit.> for meshing. Recently, there is a growing trend to use MLPs to represent 3D appearance and geometry. For instance, NeRF <cit.> combines the volume rendering with implicit functions by minimizing observed-rendered differences. Inspired by NeRF <cit.> and SDF <cit.>, NeuS <cit.> and VolSDF <cit.> advance surface quality by replacing the density field with the signed distance fields. We experimentally find these methods tend to produce poor results in the invisible part due to insufficient observation information. Our method is specially designed to handle this occlusion in hand-object interaction scenes. 3D Hand-held Objects Reconstruction. The 3D reconstruction of manually manipulated objects is a very challenging task due to the heavy occlusion and the variety of objects. To simplify the reconstruction task, several methods <cit.> reduce the reconstruction to a 6DoF pose estimation. Some other works rely on additional depth information <cit.> or point cloud <cit.> to address this challenge. Recent learning-based approaches attempt to directly infer the representations of hands and objects from a monocular RGB image.  <cit.> utilizes AtlasNet <cit.> to recover the object meshes, limited to reconstructing simple objects.  
<cit.> use implicit functions to predict the object shape. However, these learning-based methods rely heavily on the dataset, and the reconstructed meshes lack details. In contrast, our method only needs RGB images as supervision and does not need any prior knowledge of the objects. Furthermore, our method excels in recovering object meshes with more details. Most related to our work,  <cit.> reconstructs a hand-held object from a monocular RGB video, leveraging the differentiable SDF rendering technique.  <cit.> treats the interacting hand and object as a whole and separates them using an estimated semantic class of each vertex.  <cit.> focus only on the object part. Therefore, these methods do not take occlusion into account, resulting in incomplete surfaces in the hand-occluded part of the object. In contrast, we incorporate physical contact constraints and 2D amodal priors, leading to substantial improvements in the quality of object reconstruction. Occlusion Handling. As hands/humans are often severely occluded by objects, several approaches aim to recover the content of the occluded parts. The first approach utilizes temporal information:  <cit.> feed filtered reliable 2D keypoints to 2D and 3D temporal convolutional networks that enforce temporal smoothness to produce a complete 3D pose. The second approach utilizes spatial attention:  <cit.> propose a feature injection mechanism for occlusion-robust 3D hand mesh reconstruction. The third applies amodal masks to perceive the invisible part. An amodal mask captures the ability to perceive entire objects despite partial occlusion, which has the potential to make computers more human-like in handling occlusion. Our work is related to this third line. Prior studies <cit.> have employed amodal masks to aid in recovering occluded 2D content from images. In our method, we leverage amodal masks to significantly enhance the optimization of neural implicit fields, thereby introducing a novel means of improving reconstruction in occluded regions. However, simply applying the initial masks suffers from two issues: (1) some of them may be incorrect, and (2) they are not multi-view consistent. To address these issues, we use a semantic head to refine the masks, resulting in improved reconstruction quality.§ METHODS Our objective is to reconstruct the 3D object from a video sequence {I_k}_k=0,...,N, where a hand holds a rigid object and rotates it in front of a static RGB camera. In our problem, hand poses are assumed to be fully constrained by the object. Therefore, hand poses are the same across different frames; only the global translation and rotation of the hand may differ. We adopt the widely used 3D parametric hand model MANO <cit.> to represent the hand. MANO generates a hand mesh from two sets of parameters: shape parameters β∈ℝ^10 control the hand shape, and pose parameters θ∈ℝ^16×3 represent the rotations of 16 joints. We estimate the MANO parameters along with the relative rotation R∈ SO(3) and translation T∈ℝ^3 between the hand and the camera from the RGB frames. Thus, the hand mesh of frame k can be defined as H_k={MANO(β, θ), R_k, T_k}, where the subscript k indicates the k-th frame, and β, θ are shared for all the frames. The 3D object shape is represented by an SDF-based implicit function f. By mapping a query 3D point to a signed distance from the object surface, we can extract the zero-level set as the object surface. As the object shape reconstruction is supervised by image sequences, we represent the object appearance by an extra implicit function c.
Both f and c are optimized through volume rendering to minimize the differences between the input images I_k and the rendered images Î_k. However, employing this technique alone leads to incomplete reconstruction results due to the absence of observations in the occluded region (the hand-object contact area). To address these challenges, we incorporate amodal masks and physical contact guidance into the neural rendering framework to constrain the reconstruction of the invisible parts. An overview of our method is shown in fig_pipeline. Given a monocular RGB input video of a moving hand-held object, our method reconstructs the hand-held object without any prior on the object category. As in <cit.>, we assume that the object is firmly grasped, enabling us to jointly predict the object motion with the hand estimator. We apply the SDF-based implicit function to represent the object. To improve the reconstruction quality in contact areas, the 2D occlusion elucidation and physical contact priors are leveraged. First, we utilize amodal masks to detect and supervise these regions. We ensure the consistency and quality of the amodal masks by refining them using an additional semantic head after the implicit neural network. Moreover, we apply contact constraints, which require that the object does not intersect with the hand and is in close proximity when they make contact. §.§ 3D Hand Reconstruction sec_hand The first step of our framework is to perform hand pose estimation. We employ a learning-based approach to achieve a robust initialization and further optimize the hand model by fitting it to 2D keypoints. Previous research <cit.> has demonstrated that directly fitting MANO to 2D keypoints is highly non-linear and very sensitive to the initial parameters. Therefore, we first utilize the pre-trained monocular hand reconstruction model HandOccNet <cit.> to estimate the hand model parameters of each frame and then average them to obtain a more robust initialization. The hand model can be optimized by minimizing the difference between the 2D keypoint detections and the reprojection of the 3D joints: θ_k^* = min_θ_k(∑_i=1^16||π(J_3d^i(θ_k))-J_2d^i|| + λ_reg E_reg), where π(·) denotes the projection operation, and J_3d and J_2d represent the 3D and 2D joint locations, respectively. The last term E_reg=||β||^2_2 + ||θ||^2_2 is for regularization. We use Mediapipe <cit.> to obtain the 2D keypoints. Inspired by  <cit.>, we add an additional term into the optimization over all k frames to force temporal smoothness: ∑_k ||θ_k - θ_k-1||. Jointly optimizing the energy function may be unstable. Therefore, we first optimize the relative rotation R and translation T, followed by the optimization of the MANO parameters. After optimization, we can transfer the RGB video sequence into multi-view images in hand-centric coordinates, with the hand wrist serving as the origin.
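Before moving on to the object representation, the hand-fitting objective above can be summarized by the following minimal PyTorch-style sketch. The helpers `mano_joints_3d` and `project` (returning the 16 MANO joints for a given pose and their camera projection under R_k, T_k) are hypothetical placeholders, and the loss weights are illustrative rather than the values used in the paper.

```python
import torch

def fit_hand_sequence(theta_init, beta, R, T, kpts_2d,
                      lam_reg=1e-3, lam_temp=1.0, iters=200):
    """Refine per-frame MANO poses against 2D keypoint detections.

    theta_init: (K, 16, 3) initial poses (e.g., from HandOccNet, averaged)
    beta:       (10,) shared shape parameters
    R, T:       per-frame rotation/translation (optimized in a prior stage)
    kpts_2d:    (K, 16, 2) 2D keypoint detections (e.g., from Mediapipe)
    """
    theta = theta_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=1e-2)
    for _ in range(iters):
        opt.zero_grad()
        joints_3d = mano_joints_3d(beta, theta)   # (K, 16, 3), hypothetical helper
        joints_2d = project(joints_3d, R, T)      # (K, 16, 2), hypothetical helper
        e_proj = (joints_2d - kpts_2d).norm(dim=-1).sum()          # reprojection term
        e_reg = beta.square().sum() + theta.square().sum()         # E_reg
        e_temp = (theta[1:] - theta[:-1]).norm(dim=(-2, -1)).sum() # temporal smoothness
        loss = e_proj + lam_reg * e_reg + lam_temp * e_temp
        loss.backward()
        opt.step()
    return theta.detach()
```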
§.§ Object Reconstruction sec_obj A key to our framework is learning a hand-centric signed distance function (SDF) representation, enabling the learning of a consistent 3D shape and appearance of the object. It is learned per sequence and does not require pre-training. To optimize the SDF, we adopt the NeuS method <cit.> and its volume rendering technique, while also integrating the amodal mask and contact constraints. §.§.§ SDF-based Implicit Representation.sec_sdfl We represent the geometry and appearance by two MLP networks, a geometry network f:ℝ^3→ℝ and a color network c:ℝ^3×𝕊^2→ℝ^3. Given a 3D point x, the geometry network maps it to the SDF value f(x), and the color network takes x along with the view direction v as inputs and outputs the color c(x, v). The object surface 𝒮 is then extracted as the zero-level set of the SDF: 𝒮 = {x | f(x) = 0}. For each pixel, we sample a set of points along the corresponding camera ray, denoted as {x_i=o+t_i v | t_i∈[t_n, t_f]}, where x_i are the sampled points, o is the camera position, v is the viewing direction, and t_n, t_f denote the bounds of the sampled ray. Then we can get the rendered color as: ĉ = ∑_i T_iα_i c(x_i, v), eq_render where T_i=∏_j=1^i-1(1-α_j) is the discrete accumulated transmittance, and α_i=1-exp(-∫_t_i^t_i+1ρ(t)dt) denotes the discrete opacity value. ρ(t) is the opaque density transferred from the SDF as defined in  <cit.>. §.§.§ Amodal for Shape Completion.sec_amodal The amodal completion network targets segmenting the invisible part of the object to offer an understanding of its complete shape, which can then be utilized to supervise the object geometry. Modern 2D amodal segmentation models <cit.> trained on large labeled datasets can provide reasonable predictions. However, in our case, we require amodal masks for a range of categories that may not be present in the training dataset. Thus, we complete the hand and object segmentation maps into object amodal masks. This simultaneous input of hand and object segmentation maps is not restricted by object categories and effectively captures the patterns of hand-object interaction. With this observation, we utilize a simple hourglass network <cit.> to estimate the amodal mask M̂, ignoring the category information. Specifically, as fig_amodal shows, we first obtain the segmentation maps of the hand and object from an off-the-shelf method  <cit.>. Then, the segmentation maps of the hand and object are concatenated and fed into the hourglass network to generate the amodal results. The network is trained on ObMan <cit.>. ObMan is a large-scale hand-object interaction dataset, wherein ground-truth amodal masks M can be obtained by rendering the 3D models. The cross-entropy loss L_CE(·) is applied to supervise the predictions: L_amodal = L_CE(M̂, M). Mask Refinement with View Consistency. Since the amodal mask of each frame is predicted independently, they lack multi-view consistency and are often inaccurate, especially when the object is heavily occluded, as shown in fig_refine(a). To resolve these inconsistencies and refine the masks, we use an additional semantic head. As demonstrated in  <cit.>, the semantic neural field can naturally leverage the multi-view consensus to improve the accuracy of segmentation. Given a 3D point x, the semantic head predicts a logit s(x) = MLP_s(x), where MLP_s is also an MLP network. Similar to the color, we adopt volume rendering to convert the semantic logits into 2D semantic maps denoted as Ŝ, as presented in eq_render: Ŝ = ∑_i T_iα_i s(x_i). We then use a softmax to compute the probabilities and supervise them with the predicted amodal masks M̂ using the classification loss: L_seg = L_CE(Ŝ, M̂). The semantic head is trained together with the implicit neural network. After several iterations, we obtain refined amodal masks M̂_refined by thresholding the probabilities of Ŝ, which are then used to supervise the geometry again.  fig_refine(b) presents examples of refined masks, which demonstrate the effectiveness in improving the accuracy and consistency of the masks.
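For reference, the rendered color ĉ, the accumulated opacity used by the mask loss, and the semantic map Ŝ above all reuse the same per-sample weights T_iα_i. The sketch below shows this shared accumulation for a single ray; it takes the per-sample opacities α_i as given, since deriving them from the SDF via the NeuS logistic density is abstracted away here.

```python
import torch

def composite(alphas, colors, sem_logits):
    """Discrete volume rendering along one ray.

    alphas:     (N,)   per-sample opacities alpha_i (derived from the SDF density)
    colors:     (N, 3) c(x_i, v)
    sem_logits: (N, C) s(x_i)
    Returns the rendered color, accumulated weight (opacity), and semantic map.
    """
    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:1]), 1.0 - alphas[:-1]]), dim=0)
    weights = trans * alphas                          # T_i * alpha_i
    color = (weights[:, None] * colors).sum(dim=0)    # c_hat
    opacity = weights.sum()                           # supervised by the mask loss
    semantics = (weights[:, None] * sem_logits).sum(dim=0)  # S_hat
    return color, opacity, semantics
```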
§.§.§ Hand-Object Contact Constraints.sec_constraint We leverage the constraints that govern objects interacting in physical contact. In particular, when grasping an object, there is no interpenetration between the hand and the object, and contacts occur at the surfaces of both. We express these contact constraints as a differentiable loss L_contact that can be easily applied in the neural rendering framework. Penetration. To prevent penetration between the hand and the object, we define a penetration loss L_P. Following  <cit.>, we penalize any hand mesh vertex v that is predicted to have a negative SDF value by the geometry network f, which can be formulated as: L_P = ∑_v∈ H ||max(-f(v), 0)||. Attraction. We further define an attraction loss L_A to encourage contact. We sample rays only from pixels with an amodal mask value of 1. By calculating the surface intersections of these rays, we can obtain the object surface point p, along with its corresponding normal n, in the contact area. Then we cast a ray along the direction n and find the nearest point q at which it intersects the hand mesh. We determine whether the object is in contact with the hand based on the distance d between the surface point p and the intersection point q. The process is illustrated in  fig_pipeline. For object surface points in contact with the hand (the distance d smaller than a threshold ε), we first encourage the object surface to be close to the hand surface. This involves ensuring that f(q) approaches 0, denoted as L_A = ||f(q)||. However, we find that this constraint for points in contact only usually does not reconstruct the surface we desire, as shown in fig_cs. This can be attributed to: 1) object surface points near the hand but not in contact are usually occluded in all images, lacking constraints for the reconstruction; 2) the hard threshold results in abrupt surface constraint changes near contact and non-contact regions; 3) the hard threshold is sensitive to the hand reconstruction quality. Consequently, we also introduce constraints for object surface points near the contact regions but not in contact by encouraging the SDF values to be a function of the distance between a surface point and the hand surface. Therefore, our overall attraction loss L_A is defined as: L_A = ||f(q)|| if d < ε, and L_A = ||f(q) - tanh(-d/δ)|| if d ≥ ε, where ε, δ are the hyper-parameters. In our study, we empirically set ε=0.001, δ=0.5. Finally, we employ the surface smoothness regularization <cit.> in the contact region, which encourages the normal n_p to be similar to the normal n_p̃ of a neighboring point p̃: L_S = ∑ ||n_p - n_p̃||_2. Our final contact loss can be formulated as: L_contact = λ_P L_P + λ_A L_A + λ_S L_S. §.§ Training In the training stage, we employ multiple loss functions to optimize the neural implicit field. Specifically, during training, we sample m rays and their corresponding reference colors C_i and amodal mask values M_i. We use the amodal masks obtained by mask refinement as the ground truth for the corresponding calculation. For each ray, we sample n points. The color loss L_color is defined as: L_color = 1/m ∑_i||Ĉ_i - C_i||. We apply the Eikonal loss <cit.> and a mask loss to regularize the SDF: L_eik = 1/(mn) ∑_k,i(||∇ f(x_k,i)||_2-1)^2, L_mask = L_CE(Ô_i, M_i), where Ô_i=∑_kT_kα_i,k is the sum of the weights along the ray. The overall training loss is: L = L_color + λ_mask L_mask + λ_eik L_eik + λ_seg L_seg + λ_contact L_contact, where λ_mask=10, λ_eik=0.1, λ_seg=0.1, λ_contact=5 are set empirically.
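A compact sketch of the contact terms and the combined objective is given below. `sdf` stands for the geometry network f, `hand_verts` for the MANO mesh vertices, and the hand-side intersection points and their distances to the object surface points are assumed to be precomputed as described above. Evaluating the object SDF at the hand-side intersection points follows the reading adopted in the formulation above, and the per-term contact weights are illustrative defaults, not values taken from the paper.

```python
import torch

def penetration_loss(sdf, hand_verts):
    # Penalize hand vertices that receive a negative object SDF value.
    return torch.relu(-sdf(hand_verts)).sum()

def attraction_loss(sdf, hand_pts, dists, eps=0.001, delta=0.5):
    """hand_pts: intersection points on the hand mesh along the surface normals;
    dists: distances between object surface points and those intersections."""
    vals = sdf(hand_pts)
    target = torch.where(dists < eps,
                         torch.zeros_like(dists),        # in contact: pull SDF to 0
                         torch.tanh(-dists / delta))      # near contact: distance-dependent target
    return (vals - target).abs().sum()

def smooth_loss(normals, neighbor_normals):
    # Encourage similar normals within the contact region.
    return (normals - neighbor_normals).norm(dim=-1).sum()

def total_loss(l_color, l_mask, l_eik, l_seg, l_p, l_a, l_s,
               w_mask=10.0, w_eik=0.1, w_seg=0.1, w_contact=5.0,
               w_p=1.0, w_a=1.0, w_s=1.0):
    l_contact = w_p * l_p + w_a * l_a + w_s * l_s
    return l_color + w_mask * l_mask + w_eik * l_eik + w_seg * l_seg + w_contact * l_contact
```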
Optimize Camera Poses. The estimated camera poses derived from the hand poses may not be accurate due to occlusion between the hand and the object, leading to a significant degradation in the quality of the reconstruction. Pose refinement has been explored in previous NeRF-based models <cit.>. We incorporate this to effectively optimize the poses jointly with the object representation. § EXPERIMENTS In this section, we first present the hand-object interaction datasets and evaluation metrics. Subsequently, we compare our method to state-of-the-art approaches and provide ablation results. §.§ Experimental Setups Implementation Details. We use the same network architecture as NeuS <cit.>, following them to normalize all cameras within a unit sphere and to initialize the network parameters so that the SDF approximates a unit sphere. For training the model, we use the Adam optimizer <cit.> with a learning rate of 5e-4 and sample 1024 rays per batch for a total of 100k iterations. The training takes about 14 hours in total on a single NVIDIA RTX3090 GPU. Our implementation is based on PyTorch. Datasets. To evaluate our method, we perform experiments on HO3D <cit.> and HOD <cit.>. * HO3D is a dataset that contains RGBD videos of a hand interacting with YCB objects <cit.>, with 3D annotations of both the hand and the object. We select the 5 sequences in which the objects are firmly grasped by the users for our experiments. * HOD aims to reconstruct hand-held objects from RGB sequences and contains 35 objects. However, only 14 ground-truth scanned meshes are available for evaluation. In the experiments, we use 500 frames from each sequence of HO3D and all provided frames of HOD. Evaluation Metrics. We evaluate both the quality of the object reconstruction and the relationship between the hand and the object. First, we use Marching Cubes <cit.> to extract the object mesh from the SDF. Following prior research, we then evaluate the object reconstruction quality using the Chamfer Distance (CD). As the reconstructed result and the ground-truth mesh are in different coordinates, we follow <cit.> to normalize each mesh to unit size and apply ICP to register the reconstructed mesh with the ground-truth mesh. For evaluating the relationship between the hand and the object, we report the Intersection Volume (Vol) in cm^3 between the hand mesh and the object mesh, similar to  <cit.>. Comparison Baselines. In our evaluation, we compare our method with several existing approaches, including (1) HHOR <cit.>, which addresses the same problem as ours; additionally, we present the results of HHOR with post-processing (denoted as HHOR^**) using MeshLab <cit.> to remove unnecessary parts and fill holes; (2) NeuS <cit.>, which serves as the foundation for our method; since the reconstruction quality of NeuS is greatly influenced by the accuracy of the camera poses, we also report its results on HO3D using ground-truth camera poses (denoted as NeuS^*) for a fair comparison, as HOD does not offer ground-truth data; and (3) IHOI <cit.>, a learning-based single-image hand-held object reconstruction method that is pre-trained on sequences of HO3D and other datasets. We evaluate IHOI on each frame of the sequence and report the average results. As the results of NeuS and HHOR are not watertight, we do not report the intersection volume metric for them. §.§ Comparisons With the State-of-the-Art Methods We evaluate the reconstructed 3D meshes on HO3D and HOD. Averaged quantitative results are presented in table1. Please refer to the Supp. for more detailed results. Comparison Results on HO3D. We visualize the reconstructed objects in  fig_ho3dres. The learning-based method IHOI can predict the coarse shape of the object, but it typically loses the finer details of the object surface when compared to neural rendering methods.
Inaccurate camera poses significantly decrease the reconstruction quality of NeuS, but when ground-truth poses are used(NeuS*), it achieves similar reconstruction quality to HHOR in the visible part of the object. However, both NeuS and HHOR struggle to handle occlusion, which leads to incomplete surface reconstructions. While HHOR^** (HHOR with post-processing) can use Poisson Reconstruction to aid in filling the holes, the resulting object reconstruction may contain obvious artifacts. This is because the Poisson Reconstruction can not correctly fill the surface for the missing part when a large part of the object is occluded. In contrast, our method can recover detailed object meshes in both the visible and invisible parts without any post-processing. By analyzing quantitative results on HO3D, our method significantly outperforms the comparison methods in both 3D reconstruction and hand-object relationships. Other volume rendering-based methods, NeuS with ground-truth poses and HHOR achieve better performance of reconstruction than the learning-based method IHOI. However, they still struggle in reconstructing complete geometry. When HHOR^** can obtain complete surfaces with 0.591 CD, our approach achieves even lower 0.282 CD values, demonstrating an improvement of 52%. In terms of intersection volume, our approach outperforms IHOI and significantly surpasses HHOR^**. This can be attributed to the evident hand artifacts in HHOR^** that lead to increased volume, highlighting the effectiveness of our integrated contact constraints.Comparison Results on HOD. HOD contains objects with more complex shapes and textures, but less hand occlusion. For reconstruction quality, our method continues to surpass the state-of-the-art methods. Compared to HHOR^**, our method improves by 20%. Furthermore, visualizations in  fig_hodres highlight that other results still contain obvious artifacts, whereas our outcomes appear more reasonable. Our method can reconstruct a complete and detailed object mesh regardless of whether the hand-grasping type involves weak or heavy occlusion. Note that the learning-based method heavily relies on the learned prior, and therefore does not work well for objects beyond the training dataset. They cannot recover the shape accurately. Regarding the hand-object relationship, our method outperforms HHOR^**, emphasizing contact constraints' importance in less occluded scenarios. Though IHOI results in lower intersection volume, their predicted object shapes are absolutely inaccurate. Conversely, our method can reconstruct detailed object meshes with a reasonable hand-object relationship. §.§ Ablation StudiesTo evaluate the effectiveness of our proposed components, we perform experiments on HO3D across four distinct settings: (1) NeuS; (2) NeuS with amodal masks; (3) NeuS with amodal masks and contact constraints; (4) Ours: NeuS with amodal masks, contact constraints, and mask refinement.According to table2, incorporating amodal masks significantly improves reconstruction quality, reducing the CD value by 0.646. Visualizations in fig_ablation demonstrate successful recovery of overall shape, indicating that amodal masks effectively fuse observations for complete reconstruction. With contact constraints, we can further improve the geometry quality with 0.115 CD value decrement. As demonstrated in fig_ablation, the utilization of _contact effectively reduces the ambiguities caused by occlusion in the contact region, resulting in a smoother surface. 
Moreover adding mask refinement can remove the wrongly estimated part caused by inaccurate amodal masks, leading to a 0.051 CD value reduction. The visualization results illustrate that the mask refinement effectively removes wrong results at the object boundaries. These results demonstrate the effectiveness of our proposed components. §.§.§ Different _contact Design.We analyze the results with various _contact designs in table_cs. We conduct experiments without attraction or penetration loss. Incorporating only penetration loss minimizes intersection volume, yet its reconstruction quality lags behind other methods. Conversely, solely applying attraction loss increases intersection volume while enhancing reconstruction. To balance reconstruction quality and intersection volume, we simultaneously apply these two losses in our method. Furthermore, we conducted a comparison by substituting our formulated _A with constraints on object surface points in contact only (denoted as _A^-). Our approach reaches lower CD values and intersection volume, demonstrating the efficacy of guidance in the vicinity of the hand but not in the contact area. § CONCLUSIONS AND FUTURE WORKIn this paper, we present a framework for reconstructing the 3D generic objects in hand using a monocular RGB video. The key insights of our method are to incorporate the amodal masks and physical contact guidance for dealing with surface reconstruction under occlusion. On several datasets, we have demonstrated state-of-the-art results compared with existing methods. In the future, we aim to speed up the training process by integrating hybrid neural representations such as  <cit.>, and relax the assumption of fixed grasping by inferring the object pose <cit.>. § ACKNOWLEDGMENTSThis work was supported in part by NSFC under Grants (62103372, 62088101, 62233013), the Fundamental Research Funds for the Central Universities (226-2023-00111), and the OPPO Research Fund.PART:*Supplementary MaterialtocpartSupplementary Material § NETWORK ARCHITECTUREWe present the implicit neural network architecture in  fig_network. We use a similar network architecture as NeuS, which consists of three MLPs to encode SDF (x), semantic logits (x), and color c(x). We use positional encoding for the input spatial point x and view direction d. The SDF function consists of eight hidden layers, with a skip connection linking the input to the fourth layer's output. For the Semantic function, we added an extra layer to compute the logit. As for the color function, it has four hidden layers and takes the spatial point x, view direction d, normal vector n(x), and feature vector from the SDF function output as inputs. § QUANTITATIVE RESULTS ON INDIVIDUAL OBJECTSWe conduct reconstruction experiments on overall 19 objects from HO3D and HOD datasets, and compare our method with SOTA methods. We show quantitative results on each individual object in  table_detail. § ROBUSTNESS AGAINST HAND PREDICTION QUALITYWe utilize the provided ground truth hand poses of HO3D for an assessment of our method's robustness. As depicted in  table_hand, our predictions yield comparable results to those derived from the ground truth. This achievement can be primarily attributed to the high accuracy of our predicted hand poses, further enhanced by simultaneous optimization, leading to improved motion precision.§ RESULTS FOR AMODAL COMPLETIONWe evaluate the amodal completion on the HO3D dataset, as shown in  table_amodal and  fig_amodal1. 
To evaluate the quality of the completed masks, we adopt Intersection over Union (IoU) as our metric. Our completion network achieves high mIoU values, while the refinement further improves the results. Moreover,  fig_amodal2 demonstrates that our method is capable of accurately obtaining amodal masks for objects with complex shapes in the HOD dataset. § RESULTS FOR HAND RECONSTRUCTION We assess the quality of our hand reconstruction results on 5 sequences from the HO3D dataset. §.§ Camera-relative Motion Evaluation We evaluate the relative rotation R and translation T between the hand and the camera. Since the 3D model and poses are retrieved up to a 3D similarity transformation, we apply the Umeyama method <cit.> to align the estimated motion with the ground truth and calculate the Absolute Trajectory Error (ATE) using the evo tools <cit.>. Absolute Trajectory Error. The ATE is a widely adopted metric in SLAM used for investigating the global consistency of a trajectory. We denote the pose as P=[R|T]. The ATE is based on the absolute relative pose between two poses P_ref,i, P_est,i∈ SE(3) at frame index i: E_i=P_ref,i^-1 P_est,i. We use the full relative pose E_i for the calculation: ATE_i = ||E_i-I_4×4||_F. We report the mean value over 500 frames, comprising both the originally predicted results and the results obtained after pose refinement. The results are shown in  table_cam,  fig_motion, and  fig_motion2, which demonstrate that the simultaneous optimization of camera poses and the radiance field improves the pose accuracy. §.§ Hand Pose Evaluation We evaluate the hand pose MANO(θ, β). We compute the PA-MPJPE and PA-MPVPE between the predicted hand mesh and the ground truth. The PA-MPJPE quantifies the average per-joint position error in Euclidean distance (mm), while the PA-MPVPE measures the corresponding error over the mesh vertices. The quantitative results are shown in  table3. Our method can accurately reconstruct hand poses by fusing video observations.
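For completeness, the ATE computation described above takes only a few lines; the sketch below assumes the reference and estimated trajectories have already been aligned (e.g., with the Umeyama method) and are given as 4x4 homogeneous matrices.

```python
import numpy as np

def absolute_trajectory_error(poses_ref, poses_est):
    """poses_ref, poses_est: (K, 4, 4) aligned SE(3) poses; returns the mean ATE."""
    errs = []
    for P_ref, P_est in zip(poses_ref, poses_est):
        E = np.linalg.inv(P_ref) @ P_est                 # relative pose E_i
        errs.append(np.linalg.norm(E - np.eye(4), "fro"))  # ATE_i = ||E_i - I||_F
    return float(np.mean(errs))
```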
http://arxiv.org/abs/2312.16425v1
{ "authors": [ "Shijian Jiang", "Qi Ye", "Rengan Xie", "Yuchi Huo", "Xiang Li", "Yang Zhou", "Jiming Chen" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227061925", "title": "In-Hand 3D Object Reconstruction from a Monocular RGB Video" }
Quantum Secure Protocols for Multiparty Computations Tapaswini Mohanty, Vikas Srivastava, Sumit Kumar Debnath, Pantelimon Stănică Tapaswini Mohanty is with the Department of Mathematics, National Institute of Technology, Jamshedpur 831 014, India (e-mail: [email protected]). Vikas Srivastava is with the Department of Mathematics, National Institute of Technology, Jamshedpur 831 014, India (e-mail: [email protected]). Sumit Kumar Debnath is with the Department of Mathematics, National Institute of Technology, Jamshedpur 831 014, India (e-mail: [email protected], [email protected]). Pantelimon Stănică is with the Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA 93943, USA; (email: [email protected]). January 14, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Secure multiparty computation (MPC) schemes allow two or more parties to conjointly compute a function on their private input sets while revealing nothing but the output. Existing state-of-the-art number-theoretic-based designs face the threat of attacks through quantum algorithms. In this context, we present secure MPC protocols that can withstand quantum attacks. We first present the design and analysis of an information-theoretic secure oblivious linear evaluation (OLE), namely qOLE in the quantum domain, andshow that our qOLE is safe from external attacks. In addition, our scheme satisfies all the security requirements of a secure OLE. We further utilize qOLE as a building block to construct a quantum-safe multiparty private set intersection (MPSI) protocol. quantum computing, multiparty computation, private set intersection, oblivious linear evaluation § INTRODUCTIONSecure multiparty computation (MPC) is crucial in safeguarding sensitive data. It allows two or more parties to jointly do the calculations on their private data without revealing anything but the output. Thus, MPC guarantees security features like privacy and confidentiality. The oblivious evaluation of a function is one of the most important primitives in cryptographic designs. In the work of Rabin <cit.>, the idea of oblivious transfer (OT) was introduced. OT considers the setting where there are two parties: sender and receiver. The sender has two bits s_0 and s_1, and the receiver can only learn one of the bits s_b depending on his choice of bit b. Later, it was shown in <cit.> that OT can be used for oblivious evaluation of any cryptographic function. Over the past three decades, considerable advancements have been made in the design of generic OT-based MPC protocols. However, it is worth noting that specific types of functions can be evaluated more efficiently using direct constructions, bypassing the need for MPC. Considering this view, Naor et al. <cit.> designed the oblivious polynomial evaluation (OPE). 
It is a useful primitive which solves the problem of obliviously evaluating a polynomial P on an input α. To be more precise, OPE is a two-party protocol between two distrustful parties, where one party (say Bob) holds a private polynomial P(x), and another party (say Alice) has a private input α. The goal of a secure OPE protocol is that Alice obtains P(α) and nothing else while Bob learns nothing. Oblivious linear evaluation (OLE) (which is just a special case of OPE) has been studied as an important primitive in secure MPC schemes for garbled arithmetic circuits <cit.>. Instead of obliviously evaluating a polynomial P, we restrict ourselves to a linear function f(x) = ax + b in OLE. As noted in <cit.>, OLE can also be used as an important building block in the design of Private Set Intersection (PSI) and its variants. We aim to work on this line of research in the quantum domain. PSI is an efficient and secure MPC protocol that facilitates clients to compute the intersection of private sets, ensuring that the confidential information of one party is not known to the others. Multiparty PSI (MPSI) is a naturally generalized version of the PSI protocol employed to find common elements in multiple sets without exposing and leaking information about the data except for the intersection. There exist many OLE and MPSI protocols <cit.> but, as known, their security relies on some number theoretic hardness assumptions <cit.> such as integer factorization problem and discrete logarithm problem. Because of Shor's algorithm <cit.>, these schemes face a huge security threat, and once big enough quantum computers are available, these existing state-of-the-art designs will become obsolete. It has led cryptographers worldwide to find alternative ways to construct MPC protocols such as OLE and MPSI. One line of research involves developing protocols based on mathematical problems resistant to quantum attacks, forming what we call post-quantum cryptography (PQC) <cit.>. However, PQC does not provide long-term security. Quantum cryptography (QC) offers a logical solution to nullify the future threat. It is because QC provides long-term protection and safety from the threat of quantum attacks. Therefore, it is need of hour to make use of QC for the construction of secure MPC protocols.Related works: Several quantum two-party private set intersection (PSI) protocols have been developed over the last decade. In 2016, Shi et al. <cit.> introduced a quantum protocol for calculating the intersection of private sets. However, their design had a vulnerability that allowed the server to manipulate the intersection results unilaterally. To address this issue, Cheng et al. <cit.> involved a passive third party in protocol <cit.> to ensure fairness. Shi et al. <cit.> later designed a protocol for the oblivious set member decision problem, which could be applied to private set intersection and union. In 2018, Maitra et al. <cit.> proposed a two-party computation for set intersections involving rational players. Recently, Debnath et al. <cit.> developed quantum two-party PSI protocols that utilized single-photon quantum resources, enhancing feasibility. Liu et al. <cit.> presented a novel QPSI protocol based on the quantum Fourier transform, and an improved version employing the Hadamard gate was later introduced by Liu et al. <cit.>. Furthermore, Liu et al. <cit.> developed a quantum private set intersection cardinality (PSI-CA) protocol based on a bloom filter, and Wang et al. 
<cit.> proposed a quantum protocol for PSI-CA and union cardinality using entanglement swapping. Shi et al. <cit.> designed a quantum PSI-CA protocol with privacy-preserving condition queries. In 2020, Liu et al. <cit.> introduced a quantum secure MPSI-CA protocol utilizing quantum transformation, measurements, and parallelism. Shi et al. <cit.> developed a novel quantum protocol for MPSI-CA and provided corresponding quantum circuits, highlighting the advantages of quantum computing parallelism and measurement randomness. Mohanty et al. <cit.> designed a special variant of PSI (called threshold PSI) which outputs an intersection if and only if the intersection size is greater than a given threshold value. In 2023, Imran <cit.> proposed a quantum MPSI by providing a framework for transforming the PSI problem into a problem of computing the greatest common divisor (GCD). Additionally, in 2023 Shi et al. <cit.> designed a scheme forXOR computation of multiple private bits. In the following, they used it to create a secure multiparty logical AND, which in turn was used to design a quantum secure MPSI.Our contribution: The major contributions of this paper are the following: * We first present an efficient and information theoretic secure oblivious linear evaluation (namely qOLE). To the best of our knowledge, there is only one design of OLE (Santos et al. <cit.>) in the current-state-of-the-art of the quantum domain. We show that our protocol is more efficient and practical compared to <cit.>. qOLE involves three entities: Alice, Bob, and a third party TP. It is made up of three phases, namely, key generation phase ( qOLE.Kg), initialization phase (qOLE.Int), and computation phase (qOLE.Comp). Bob is in possession of a private linear function f(x)=ax+b defined over ℤ_p. On the other hand, Alice holds α∈ℤ_p and aims to obtain f(α), obliviously. In the key generation phase qOLE.Kg, Alice, Bob, and TP shares secret keys with each other by utilizing a quantum key distribution (QKD). In the initialization phase (qOLE.Int), TP selects a random linear function S(x)=a_1x+b_1 and prepares the corresponding quantum state |S(x)⟩. The state |S(x)⟩ is transferred to Bob in the quantum encrypted form over a quantum channel. Additionally, TP randomly selects d, computes g=S(d), and sends the quantum encrypted forms of |d⟩ and |g⟩ to Alice through quantum communication. In the final computation phase (qOLE.Comp), Alice sends l=α-d to Bob. In the following, Bob computes V(x)=f(x+l)+S(x) and sends it to Alice. Ultimately, Alice uses V(x) to obtain f(α). All the communication between Alice and Bob in the computation phase takes over a quantum communication channel. * Based on qOLE, we design a secure quantum protocol for MPSI (namely qMPSI). qMPSI uses qOLE as the fundamental building block. Thus, we also show that quantum secure OLEs can be used to design MPSI protocols. In fact, the design of qMPSI is generic, in the sense that it can be instantiated with any information theoretic secure OLE. § PRELIMINARIES§.§ Quantum EncryptionWe describe in detail the procedure of the quantum one time pad (qOTP) <cit.> among an encryptor Alice and decryptor Bob. It consists of three algorithms: KeyGen, Enc and Dec which are discussed below: * K← KeyGen(n). On input a positive integer n (length of message), KeyGen outputs a κ(≤2n) bit key K which is shared between Alice and Bob by using some quantum key distribution protocol (e.g., <cit.>). * E_K(|P⟩)← Enc(|P⟩, K). 
On input the quantum message |P⟩=⊗^n_i=1|p_i⟩ with |p_i⟩=a_i|0⟩+b_i|1⟩ and |a_i|^2+|b_i|^2=1, and the key K, Alice encrypts the message |P⟩in the following way: Computes ⊗^n_i=1X^K_2i|p_i⟩, i.e., if the (2i)-th bit of K is 1 then operates the unitary operator X on the i-th qubit of the message |P⟩; otherwise does nothing, for i =1, …, n. Executes E_K(|P⟩)=⊗^n_i=1Z^K_2i-1X^K_2i|p_i⟩by operating Z on the i-th qubit of X^K_2i|p_i⟩ if the (2i-1)-th bit of K is 1 for i= 1, … , n. * |P⟩← Dec(E_K(|P⟩)). Bob decrypts E_K(|P⟩) by performing the following steps: To get X^K_2i|p_i⟩, operates the unitary operator Z on the i-th qubit of E_K(|P⟩)=⊗^n_i=1Z^K_2i-1X^K_2i|p_i⟩, if the (2i-1)-th bit of K is 1, for i= 1, … , n. Computes |P⟩ by operating the unitary operator X on the i-th qubit of the X^K_2i|p_i⟩, if the (2i)-th bit of K is 1, else does nothing. § PROPOSED QUANTUM OBLIVIOUS LINEAR EVALUATIONWe now discuss the design and analysis of quantum secure OLE (namely, qOLE). The protocol qOLE involves three parties: Alice, Bob, and a third party TP,and consists of three phases: key generation phase ( qOLE.Kg), initialization phase (qOLE.Int), and computation phase (qOLE.Comp). Bob holds a private linear function denoted by f(x)=ax+b defined over ℤ_p. Alice has a private input α∈ℤ_p and aims to obtain f(α), obliviously. In the key generation phase qOLE.Kg, Alice, Bob, and TP share secret keys with each other by utilizing a quantum key distribution (QKD). In the next phase (qOLE.Int), TP selects a random linear function S(x) and sends it to Bob through the quantum channel. Additionally, TP randomly selects d, computes g=S(d), and sends d and g to Alice through quantum communication. In the final computation phase (qOLE.Comp), Alice sends l=α-d to Bob. In the following, Bob computes V(x)=f(x+l)+S(x) and sends it to Alice. Ultimately, Alice uses V(x) to obtain f(α). All the communication between Alice and Bob in the computation phase takes over a quantum communication channel. qOLE.Kg: In the key generation phase, TP shares the keys K_A, K_B with Alice and Bob, respectively, using a QKD (for example, see <cit.>). Similarly, Alice and Bob also share a secret key K_ABbetween themselves by utilizing a QKD. qOLE.Int:In this phase, the following steps are to be executed: * In the initialization phase, TP randomly selects a linear function S(x)=a_1 x+b_1. For ease of notation, we write this polynomial as S(x)=(a_1,b_1). Let a_1, b_1 denote the bit strings corresponding to a_1 and b_1 respectively. TP converts these bit strings into qubits and prepares a quantum state |S(x)⟩=(|a_1⟩,|b_1⟩) corresponding to the linear function S(x). Here, |a_1⟩ and |b_1⟩ are sequences of photons, i.e., |a_1⟩={|a_11⟩,…, |a_1 log_2p⟩}, |b_1⟩={|b_11⟩,…, |b_1 log_2p⟩} and a_1j,b_1j∈{0,1}. TP encrypts |S(x)⟩ using quantum OTP with key K_B and gets |S'(x)⟩. * In the following, TP inserts some decoy particles (randomly chosen from {|0⟩, |1⟩, |+⟩, |-⟩}) into |S'(x)⟩ and obtains |S”(x)⟩. TP notes the position and initial states of the decoy photons and sends |S”(x)⟩ to Bob. * TP randomly selects d and computes S(d)=g (say). TP first converts d and g into binary string. Later, it prepares the quantum states |d⟩ and |g⟩ corresponding to d and g respectively. In the following, it encrypts |d⟩ and |g⟩ using quantum OTP with key K_A and gets |d'⟩, |g'⟩.TP also inserts some decoy photons (randomly chosen from {|0⟩, |1⟩, |+⟩, |-⟩}) into |d'⟩, |g'⟩ and gets |d”⟩, |g”⟩. 
TP notes the position and initial states of the decoy photons and sends|d"⟩, |g"⟩ to Alice. qOLE.Comp: The following steps are to be performed between Alice and Bob in the computation phase: * On receiving |d”⟩, |g”⟩, Alice first checks for eavesdropping. TP provides the position and initial states of the decoy particles. Alice aborts the protocol if the error rate exceeds the predefined threshold value; otherwise, proceeds to the next step. * Alice discards all decoy photons and gets |d'⟩, |g'⟩. Then Alice decrypts |d'⟩, |g'⟩ using K_A and gets |d⟩, |g⟩. Alice measures the quantum state and obtains d, g. * Alice computes l=α-d and converts it into binary string and then into a sequence of photons |l⟩. She encrypts | l ⟩ using a quantum OTP with K_AB. To avoid eavesdropping, she adds some decoy particles randomly chosen from {|0⟩, |1⟩, |+⟩, |-⟩} into |l'⟩, and obtains |l”⟩. Alice notes the position and initial states of the decoy particles, and sends |l”⟩ to Bob. * On receiving |S”(x)⟩ from TP, Bob and TP checks for eavesdropping; if the error rate exceeds the predefined threshold value then they abort the protocol; otherwise, they proceed to the next step. Bob also checks for eavesdropping with Alice. If the error rate exceeds a predefined threshold then they abort the protocol, otherwise they proceed to the next step. Bob discards all decoy photons from |S”(x)⟩, |l”⟩ and gets|S'(x)⟩, |l'⟩. Bob decrypts|S'(x)⟩, |l'⟩ using keys K_B, K_AB, respectively, and gets |S(x)⟩, |l⟩. He measures the state |S(x)⟩, |l⟩ using acomputational basis and obtainsS(x) and l. * Bob computes V(x)=f(l+x)+S(x). He converts V(x) sequence of photons|V(x)⟩. In the following, |V(x)⟩ is encrypted using a quantum OTP with key K_AB to produce |V'(x)⟩. Bob inserts some decoy photons from {|0⟩, |1⟩, |+⟩, |-⟩} into |V'⟩ and prepares |V”⟩. Finally, |V”⟩ is sent to Alice. * On receiving |V”⟩ from Bob,Alice checks for eavesdropping. If the error rate is more than the the threshold, she aborts the protocol; otherwise, she proceeds. After discarding all the decoy photons from |V”(x)⟩, she obtains |V'(x)⟩. Alice decrypts|V'(x)⟩ using the key K_AB to get |V(x)⟩. Now |V(x)⟩ is measured using the computational basis to obtainV(x). Next,Alice computes V(d)-g=f(l+d)+S(d)-g=f(α). Correctness: It is very easy to see that qOLE is correct, i.e., at the end of the protocol Alice correctly obtains f(α). This follows becauseV(d)-g=f(l+d)+S(d)-g=f(α-d+d)+S(d)-g=f(α)+g-g=f(α).§.§ Toy ExampleLet p=8 and suppose Bob uses f(x)=2x+3 and Alice picks α=4∈ℤ_8. Assume that TP shares K_A=011101010011 with Alice and K_B=101001110001 with Bob. Let the secret key of Alice and Bob be K_AB=111001000101. TP chooses S(x)=3x+1 randomly. He picks d=2 and computes g=S(d)=7. Note that, a_1=3, b_1=1 so a_1=011, b_1=001, therefore, |S(x)⟩=|0⟩|1⟩|1⟩,|0⟩|0⟩|1⟩. Now, |S'(x)⟩=E_K_B(|S(x)⟩)=Z^1X^0(|0⟩) Z^1X^0(|1⟩) Z^0X^1(|1⟩) Z^1X^1(|0⟩) Z^0X^0(|0⟩) Z^0X^1(|1⟩). i.e., |S'(x)⟩=|0⟩ (-|1⟩) |0⟩ (-|1⟩)|0⟩ |0⟩. Suppose that decoy particles are added at the 2^nd,5^th,6^th and 9^th positions. Let |S”(x)⟩=|0⟩ |+⟩ (-|1⟩) |0⟩ |0⟩ |-⟩ (-|1⟩)|0⟩ |0⟩|0⟩.Note that |d⟩,|g⟩=|0⟩|1⟩|0⟩, |1⟩|1⟩|1⟩ so |d'⟩,|g'⟩=E_K_A(|d⟩, |g⟩)=Z^0X^1(|0⟩) Z^1X^1(|1⟩) Z^0X^1(|0⟩) Z^0X^1(|1⟩) Z^0X^0(|1⟩) Z^1X^1(|1⟩) i.e., |d'⟩,|g'⟩=|1⟩|0⟩|1⟩, |0⟩|1⟩|0⟩. Let the positions of the decoy photons be 1,2,6 and 9. Therefore, |d”⟩,|g”⟩=|1⟩ |1⟩|1⟩|0⟩|1⟩ |+⟩ |0⟩|1⟩ |+⟩ |0⟩. On receiving |S”(x)⟩ from TP, Bob checks for eavesdropping and discards all the decoy photons. 
After this step, Bob gets |S'(x)⟩=|0⟩ (-|1⟩) |0⟩ (-|1⟩)|0⟩ |0⟩. Later, he computes |S(x)⟩=D_K_B(|S(x)⟩)= X^0Z^1(|0⟩) X^0Z^1(-|1⟩) X^1Z^0(|0⟩)X^1Z^1(-|1⟩)X^0Z^0(|0⟩)X^1Z^0(|0⟩), i.e., |S(x)⟩=|0⟩ |1⟩ |1⟩ |0⟩ |0⟩ |1⟩. After measurement, Bob gets a_1=011, b_1=001, i.e., S(x)=3x+1.On receiving |d”(x)⟩,|g”⟩ from TP, Alice checks for eavesdropping and discards all the decoy photons. Alice gets |d'⟩,|g'⟩=|1⟩|0⟩|1⟩, |0⟩|1⟩|0⟩. In the next step, Alice computes |d⟩,|g⟩=D_K_A(|d'⟩,|g'⟩)=X^1Z^0(|1⟩) X^1Z^1(|0⟩) X^1Z^0(|1⟩), X^1Z^0(|0⟩) X^0Z^0(|1⟩) X^1Z^1(|0⟩) i.e., |d⟩,|g⟩=|0⟩|1⟩|0⟩, |1⟩ |1⟩ |1⟩. Alice does the measurements and obtains d=2, and g=7. Now Alice computes l=α-d=2, converts it into qubits and gets|l⟩=|0⟩|1⟩|0⟩. Alice computes |l'⟩=E_K_AB(|l⟩)=Z^1X^1(|0⟩) Z^1X^0(|1⟩) Z^0X^1(|0⟩) i.e.,|l'⟩=(-|1⟩) (-|1⟩) |1⟩. Let the positions of the decoy photons be 1,2 and 4, then |l”⟩=|+⟩ |-⟩ (-|1⟩) |0⟩ (-|1⟩) |1⟩ Alice sends |l”⟩ to Bob.Bob checks for eavesdropping and discards all decoy photons and gets |l'⟩=(-|1⟩) (-|1⟩) |1⟩. Bob computes |l⟩=D_K_AB(|l'⟩)= X^1Z^1(-|1⟩) X^0Z^1(-|1⟩) X^1Z^0(|1⟩), i.e., |l⟩=|0⟩ |1⟩ |0⟩. He measures the quantum state and gets l=2. Now Bob computes V(x)=f(l+x)+S(x)=f(2+x)+3x+1=2(2+x)+3+3x+1=5x convert it into qubits |V(x)⟩=|1⟩|0⟩|1⟩|0⟩|0⟩|0⟩, computes|V'(x)⟩=E_K_AB(|V(x)⟩)=Z^1X^1(|1⟩) Z^1X^0(|0⟩) Z^0X^1(|1⟩) Z^0X^0(|0⟩) Z^0X^1(|0⟩) Z^0X^1(|0⟩), i.e., |V'(x)⟩=|0⟩|0⟩|0⟩ |0⟩|1⟩|1⟩. Let the positions of the decoy photons be 3,4,5, and 8, then|V”(x)⟩=|0⟩|0⟩ |0⟩|1⟩|0⟩|0⟩ |0⟩|+⟩|1⟩|1⟩. Bob send |V”(x)⟩ to Alice.On receiving |V”(x)⟩ from Bob, Alice checks for eavesdropping and discards all the decoy photons. Alice gets|V'(x)⟩=|0⟩|0⟩|0⟩ |0⟩|1⟩|1⟩. Now, she computes |V(x)⟩=D_K_AB(|V'(x)⟩)=X^1Z^1(|0⟩) X^0Z^1(|0⟩) X^1Z^0(|0⟩) X^0Z^0(|0⟩) X^1Z^0(|1⟩) X^1Z^0(|1⟩) i.e., |V(x)⟩=|1⟩|0⟩|1⟩ |0⟩|0⟩|0⟩ Alice measures and gets V(x)=5x. Finally, she computes V(d)-g=V(2)-7=2-7=-5≡3 8 to obtain f(α=4)=3.§.§ Security Analysis §.§.§ External attacksIn this setting, an external attacker wants to obtain some information α, f(x), or f(α) by interfering the communication channel. The attacker only gets |S”(x)⟩, |d”⟩,|g”⟩, |l”⟩, and |V”(x)⟩. As the attacker does not know the actual position of the decoy particles, he therefore cannot obtain|S'(x)⟩, |d'⟩,|g'⟩, |l'⟩, and |V'(x)⟩. We now discuss the situation when an attacker applies theentangled measurement attack on a decoy photon, say |ϕ⟩∈{|0⟩, |1⟩,|+⟩, |-⟩}. On receiving the decoy particle |ϕ⟩ he prepares an ancillary qubit |0⟩_a and operates an oracle operator 𝒰_f|x⟩|y⟩↦|x⟩|y⊕ f(x)⟩.Case I:If |ϕ⟩ is |0⟩ or |1⟩,𝒰_f|ϕ⟩|0⟩_a =|0⟩|f(0)⟩_aif |ϕ⟩=|0⟩,|1⟩|f(1)⟩_aif |ϕ⟩=|1⟩.Case II:If |ϕ⟩ is |+⟩ or |-⟩,𝒰_f|ϕ⟩|0⟩_a=𝒰_f|0⟩|0⟩_a±𝒰_f|1⟩|0⟩_a/√(2) = 1/√(2)|0⟩±|1⟩/√(2)⊗|f(0)⟩_a± |f(1)⟩_a/√(2).By the above analysis, we see that if |ϕ⟩∈{|0⟩, |1⟩}, then the external attacker can guess correctly, but if|ϕ⟩∈{|+⟩, |-⟩}, then the success probability is 1/2. In addition, all these photons are non orthogonal so these states are indistinguishable. Therefore, an outsider fails to obtain any information.Intercept and resend attack: During these kinds of attacks, an adversary intercepts and resends the sequence of photons |S”(x)⟩ to Bob by intercepting the stream of photons |S”(x)⟩ sent by TP to Bob. TP had added the decoy photons before sending |S”(x)⟩ to Bob. Now, the adversary is unaware of the actual position and the state of decoy photons. Note that decoy photons are randomly chosen out of {|0⟩, |1⟩, |+⟩, |-⟩}. 
Therefore, any external interception can be detected with a probability 1-(3/4)^δ. When δ 0, the probability of detecting an eavesdropping converges to 1.Trojan Horse attacks: qOLE may be susceptible to two types of Trojan horse attacks: delayed photon attacks and invisible photon attacks. These attacks involve manipulating photons during transmission to compromise the security of the protocol. To counter these attacks, participants can incorporate specific quantum optical devices, such as wavelength quantum filters and photon number splitters, during the protocol execution. To address the issue of invisible photons that may arise during transmission, a wavelength quantum filter can be employed to filter out these photons, ensuring that only legitimate photons are considered. Similarly, for delayed photons that may be present, photon number splitters can be utilized to split each legitimate photon, thereby detecting any delayed photons. This enables the participants to identify and mitigate the effects of delayed photon attacks.§.§.§ Internal AttackTo consider internal attacks, we assume that qOTP is information theoretic secure <cit.>. In addition, the following results hold. Alice cannot obtain f(x). Alice cannot obtain f(x) because of two reasons. Firstly, she does not know the actual position of decoy particles. Secondly, the computation of f(x) requires the knowledge of S(x), but she cannot obtain S(x), since obtaining S(x) from |S”(x)⟩ implies a break in the information theoretic security of qOTP. Bob cannot obtain α and f(α). Similar to the prior theorem, Bob can not obtain α and f(α) because of two reasons. Firstly, he also does not know the actual position of decoy particles. Secondly, the computation of α and f(α) requires the knowledge of d and g. It is not possible to obtain any information about d and g because gaining any knowledge about d and g from |d”⟩ and |g”⟩ implies a break in the information theoretic security of qOTP. TP cannot obtain f(x), α, and f(α). To obtain any knowledge f(x), α, and f(α), TP needs information about V(x) and l. The information theoretic security of qOTP makes it impossible for TP to obtain V(x) and l from |V”(x)⟩ and |l”⟩.§.§ Efficiency and ComparisonIn this section, we will discuss the communication and computational overhead of our proposed design qOLE. The linear function f(x)=ax+b is defined over ℤ_p. TP sends 2log_2p qubits to Bob and sends log_2p qubits to Alice. In addition, Alice sends log_2p qubits to Bob and Bob sends log_2p qubits to Alice. Therefore, the total communication cost is 𝒪(log_2p).We now discuss the quantum computation required for the execution of qOLE.A total of 𝒪(log_2p) qubits are needed to be prepared by Alice, Bob and TP. In addition, Pauli operators X and Z are used for doing the quantum computation. Projective measurements of 𝒪(log_2p) single qubits are required during the initialization and computation phase.To the best of our knowledge, there is only one OLE protocol in the quantum domain. Santos et al. <cit.> in 2022 developed the first OLE protocol in the quantum domain. Similar to qOLE, <cit.> does not rely on quantum oblivious transfer. Unlike qOLE, the design presented in <cit.> uses high-dimensional quantum states to obliviously compute the linear function f(x). qOLE in contrast used only single photons which are very easy to prepare and operate. <cit.> utilize the Heisenberg-Weyl operators, while qOLE only needs two dimensional quantum gates like X gate and Z gate. 
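Since all post-measurement processing in qOLE is ordinary modular arithmetic, the computation phase can be sanity-checked with a few lines of classical code. The sketch below is only an illustration of the identity V(d)-g=f(α): it strips out every quantum ingredient of the protocol (qubit encoding, qOTP encryption, decoy photons and eavesdropping checks) and simply replays the arithmetic of qOLE.Int and qOLE.Comp over ℤ_p, reproducing the toy example of Section 3.1 together with a batch of random instances.

```python
import random

def qole_classical_sketch(a, b, alpha, p, a1=None, b1=None, d=None):
    """Classical arithmetic behind qOLE: Bob holds f(x)=a*x+b, Alice holds alpha.

    TP's random choices (S(x)=a1*x+b1 and d) can be fixed to reproduce the
    toy example; otherwise they are drawn uniformly from Z_p.
    """
    # TP: pick S(x) = a1*x + b1 and d; send S to Bob and (d, g=S(d)) to Alice.
    a1 = random.randrange(p) if a1 is None else a1
    b1 = random.randrange(p) if b1 is None else b1
    d = random.randrange(p) if d is None else d
    g = (a1 * d + b1) % p

    # Alice: l = alpha - d (mod p), sent to Bob.
    l = (alpha - d) % p

    # Bob: V(x) = f(x + l) + S(x) = (a + a1) x + (a*l + b + b1), sent to Alice.
    Va = (a + a1) % p
    Vb = (a * l + b + b1) % p

    # Alice: f(alpha) = V(d) - g (mod p).
    return (Va * d + Vb - g) % p

# Toy example of Section 3.1: p=8, f(x)=2x+3, alpha=4, S(x)=3x+1, d=2 -> f(4)=3 mod 8.
assert qole_classical_sketch(a=2, b=3, alpha=4, p=8, a1=3, b1=1, d=2) == (2 * 4 + 3) % 8

# Random checks of the identity V(d) - g = f(alpha).
p = 8
for _ in range(100):
    a, b, alpha = (random.randrange(p) for _ in range(3))
    assert qole_classical_sketch(a, b, alpha, p) == (a * alpha + b) % p
print("correctness identity V(d) - g = f(alpha) holds")
```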
To summarize, qOLE is more practical and efficient when compared to the design presented in <cit.>. qOLE only uses single photons quantum resources, and thus, has the potential to be implemented in the near future. A comparative summary of qOLE with <cit.> is given in Table <ref>.§ PROPOSED QUANTUM MPSIIn this section, we present the construction and analysis of an MPSI protocol (namely, qMPSI) in the quantum domain. Our design utilizes the OLE protocol (qOLE) described in Section <ref>. We first give a high level overview of the protocol, followed by a detailed explanation.A high level overview. In the design of qMPSI, there are m parties A_1, A_2… A_m, each having private sets S_A_1, S_A_2, …, S_A_m, respectively. These sets are defined over ℤ_p with |S_A_j|=n.In the preparation stage, each A_i forms a polynomial corresponding to their private sets. Withoutloss of generality, we set A_1 to be initiator of the protocol. At the end of the execution of qMPSI, A_2 computes the desired intersection and announces it for everybody. We assume that all the parties involved in our protocol are semi-honest. qMPSI.PrepStage: There are m parties A_1, A_2, …, A_m with private sets S_A_1, S_A_2, …, S_A_m. Corresponding to their private sets S_A_j, each of the A_j define a polynomial P_A_j of degree n such that P_A_j(γ)=0 for all γ∈ S_A_j. Let Z be the set of all zeros of polynomials P_A_1, P_A_2, …, P_A_m and 𝒮 be the set of polynomials whose zeros are not in Z. Next, each party A_j, j=1,2,…, m, randomly chooses polynomials r_A_j,r_j∈𝒮 of degree at most n. The sets A_j, j=1,2,…,m, mask their private polynomial P_A_j with r_A_j and gets P'_A_j . A_1 chooses a polynomial u_A_1 of degree n, computes P_1=P'_A_1+u_A_1. A_1,A_2,…,A_m, andselects α_1,α_2, …, α_3n+1. qMPSI.Intersection: The following steps are performed to compute the intersection: * A_1 and A_2 jointly execute the qOLE protocol with the help of TP. The input of A_1 and A_2 to qOLE is (P^i_1,r^i_1) and P'^i_A_2, respectively. The output of the protocol is P^i_2=P'^i_A_2r^i_1+P^i_1, where P'^i_A_2=P'_A_2(α_i), P^i_1=P_1(α_i) and r^i_1=r_1(α_i). A_1 and A_2 execute qOLEprotocol, for i=1,2,…,3n+1. * For j=3,…,m, similarly, A_j-1 and A_jjointly execute the qOLE protocol with the help of TP. The input of A_i-1 to qOLE is (P^i_j-1,r^i_j-1) while the input of A_j to qOLE is P'^i_A_j. The output of the protocol is P^i_j=P'^i_A_jr^i_j-1+P^i_j-1, where P'^i_A_j=P'_A_j(α_i), P^i_j-1=P_j-1(α_i) and r^i_j-1=r_j-1(α_i). A_j-1 and A_j execute this qOLE protocol for i=1,2,…,3n+1. * A_2, A_3, …, A_m share a common polynomial u with each other. * A_m computes R^i=P^i_m+u^i=P'^i_A_mr^i_m-1+P^i_m-1+u^i, for i=1,2,…,3n+1, where u^i=u(α_i) and sends the resulting values to A_1. * A_1 subtract u^i_A_1 from R^i, for all i=1,2,…,3d+1 and sends it to A_2. A_2 subtracts u^i for all i=1,2,…, 3n+1 and interpolates these 3n+1 points and gets the desired intersection polynomial P_∩. At the end of the protocol, A_2, outputs all γ∈ S_A_2 for which P_∩(γ)=0. A_2 announces the intersection. §.§ CorrectnessThe correctness of qMPSI follows in a straightforward manner from the correctness of qOLE. A_2 received R^i-u^i_A_1 with i=1,2,…,3n+1 from A_1. 
A_2 subtracts u^i for all i=1,2,…, 3n+1 and interpolates these 3n+1 points and gets the desired intersection polynomial P_∩, whereP_∩ =P_A_mr_A_mr_m-1+P_A_m-1r_A_m-1r_m-2+⋯ +P_A_2r_A_2r_1+P_A_1r_A_1.We know that P_A_j, for j=1,2,…,m is the polynomial corresponding to the set S_A_j, for j=1,2,…,m, i.e., for j=1,2,…,m if x∈ S_A_j, then P_A_j(x)=0 and r_A_j, r_j are nonzero and belongs to Z, for all j=1,2,…,m.If x∈ S_A_1∩ S_A_2∩…∩ S_A_m, then x∈ S_A_j for all j=1,2,…,m, soP_A_j(x)=0 for all j=1,2,…,m, which renders P_∩(x)=0.If P_∩(x)=0, then [P_A_mr_A_mr_m-1+P_A_m-1r_A_m-1r_m-2+⋯+P_A_2r_A_2r_1+P_A_1r_A_1](x)=0, soP_A_j(x)=0 for all j=1,2,…,m as for j=1,2,…,m, r_A_j(x)≠0 and r_j(x)≠0, and therefore x∈ S_A_j for all j=1,2,…,m i.e., x∈ S_A_1∩ S_A_2∩…∩ S_A_m. §.§ Security Analysis A_1 cannot learn anything about the private sets of other parties. A_1 has P_A_1, r_A_1, u_A_1, P_1, and r_1. At the end, A_m sends P^i_m=P^i_A_mr^i_A_mr^i_m-1+P^i_A_m-1r^i_A_m-1r^i_m-2+⋯+P^i_A_2r^i_A_2r^i_1+P^i_A_1r^i_A_1+u^i_A_1+u^i, for i=1,2…,3n+1 to A_1. A_1 subtracts P^i_A_1r^i_A_1+u^i_A_1 from P^i_m, for i=1,2,…,3n+1 and gets P^i_A_mr^i_A_mr^i_m-1+P^i_A_m-1r^i_A_m-1r^i_m-2+⋯+P^i_A_2r^i_A_2r^i_1+u^i, for i=1,2,…,3n+1. Since A_1 does not know u^i for i=1,2,…,3n+1, therefore A_1 cannot get any information about the private set of other parties and also cannot obtain the intersection polynomial for computing the intersection S_A_2∩ S_A_3∩…∩ S_A_m. qMPSI protocol is collusion resistant provided A_1 and TP are non-collusive parties. Suppose A_2, A_3,…,A_k-1,A_k+1,…,A_m want to know some information about the private set S_A_k of A_k. If they can extract P^i_A_k for all i=1,2,…,3n+1, they can get P_A_k and from P_A_k they may obtain S_A_k. From P^i_m=P^i_A_mr^i_A_mr^i_m-1+P^i_A_m-1r^i_A_m-1r^i_m-2+⋯+P^i_A_2r^i_A_2r^i_1+P^i_A_1r^i_A_1+u^i_A_1+u^i they subtract their inputs and gets P^i_A_kr^i_A_kr^i_k-1+P^i_A_1r^i_A_1+u^i_A_1+u^i for all i=1,2,…,3n+1. As they only know r^i_k-1 for all i=1,2,…,3n+1, therefore cannot obtain P^i_A_k for any i=1,2,…,3n+1. Hence, they cannot obtain any information about the private set of A_k. TP cannot get any information about the private set of any party. TP only initializes the qOLE protocol. We already showed that any external party cannot obtain anything about anyone's private set and TP cannot obtain any information during the qOLE protocol. When A_m sends P^i_m=P^i_A_mr^i_A_mr^i_m-1+P^i_A_m-1r^i_A_m-1r^i_m-2+⋯+P^i_A_2r^i_A_2r^i_1+P^i_A_1r^i_A_1+u^i_A_1+u^i to A_1 and A_1 sends P^i_A_mr^i_A_mr^i_m-1+P^i_A_m-1r^i_A_m-1r^i_m-2+⋯+P^i_A_2r^i_A_2r^i_1+P^i_A_1r^i_A_1+u^i to A_2, TP may intercept to gain the intersection of the private sets of A_1,A_2,…,A_m. Since TP does not know u^i_A_1 and u^i for any i=1,2,….3n+1, therefore it obtains nothing about the private sets or intersection of the private sets.§.§ Efficiency Analysis and ComparisonIn this section, we will discuss the communication and computational overhead of our proposed design qMPSI. The private polynomials are defined over ℤ_p. qOLE protocol is used as a building block in the deisgn of qMPSI. qOLE is executed (m-1)(3n+1) number of times during the execution of qMPSI, where m is the number of parties and n is the size of the private set. Therefore, the communication and computation costs of qMPSI are 𝒪(mnlog_2p) and 𝒪(mnlog_2p), respectively.To the best of our knowledge, there are two MPSI protocol in the quantum domain. Shi et al. <cit.> in 2023 developed a quantum MPSI which ensures perfect security. Shi et al. 
<cit.> introduced a semi-honest edge server and two non-collusive fog nodes, and design a secure and feasible edge-assisted quantum protocol for MPSI. The design of <cit.> utilized secure multiparty XOR and secure multiparty logical AND. Imran et al. <cit.> also designed a secure MPSI in the quantum domain. It uses exact the quantum period-finding algorithm (EQPA) as a subroutine. They constructed a quantum multiparty private set intersection (PSI) by transforming the PSI problem into the problem of computing the GCD. The protocol given in <cit.> uses rather complicated quantum operators. Since it uses Shor's algorithm and other quantum heavy resources, it is not practical to realize on a large scale with existing quantum computing hardware capabilities. The communication complexity of <cit.> is higher than the communication complexity of qMPSI. The design of <cit.> is comparable to the design of qMPSI in terms of quantum resources used and quantum operators employed. Also, qMPSI has a slight edge over <cit.> in communication complexity. The communication complexity of <cit.> increases at a quadratic rate in terms of the number of parties involved. qMPSI is based on qOLE which is seen as a more fundamental building block for secure multiparty computation. qMPSI is generic in the sense that any quantum secure OLE can be used to instantiate the protocol. The results of the comparison are summarized in Table <ref>.§ CONCLUSIONIn this paper, quantum secure protocols for secure multiparty computation (MPC) were designed. The previously introduced number theoretic based MPC protocols are not secure because quantum algorithms like Shor's algorithm <cit.> can be employed to break their security. Firstly, a design of the information theoretic secure quantum secure oblivious linear evaluation (qOLE) is proposed. Next, qOLE is used to develop a quantum secure multiparty private set intersection (namely qMPSI). We believe that it is worth investigating anddesigning other types of MPC protocols in the quantum domain.§ DECLARATIONS §.§.§ Conflict of interestThe authors state that they have not known competing financial interests or personal connections that may seem to have influenced the work described in this study.§.§.§ Data availabilityData sharing is not applicable to this article as no new data were generated or analyzed to support this research.IEEEtran [ < g r a p h i c s > ]Tapaswini Mohanty is currently working as a Ph.D. student in the Department of Mathematics, NIT Jamshedpur. He completed his Masters degree from Institute of Science, BHU in 2019. Her research interests include Quantum Cryptography, and Private Set Operations. [ < g r a p h i c s > ]Vikas Srivastava is working as research scholar in the Department of Mathematics,National Institute of Technology, Jamshedpur, India. He has completed his B.S.-M.S. dual degree in Mathematics from Indian Institute of Science Education and Research (IISER), Mohali, India in 2017. His research interests include cryptography, network security and blockchain technology.[ < g r a p h i c s > ]Sumit Kumar Debnath received a M.Sc. degree in Mathematics from IIT Kharagpur in 2012, and also a Ph.D. degree in Cryptology and Network Security from the Department of Mathematics, IIT Kharagpur in 2017. He is currently an Assistant Professor at the Department of Mathematics, National Institute of Technology, Jamshedpur, India.He is a life member of the Cryptology Research Society of India (CRSI). 
His research interests include multivariate cryptography, lattice-based cryptography, network security and blockchain.He has published more than 28 papers in international journals and conferences in his research areas. [ < g r a p h i c s > ]Pantelimon Stănică received a Masters of Arts in Mathematics from Bucharest University, Romania in 1992. He received a Ph.D./Doctorate in Algebra from the Institute of Mathematics of the Romanian Academy in 1998 also a Ph.D. degree in Mathematics from State University of New York at Buffalo in 1998. He is currently an Professor and Manager of the Secure Communication program in the Department of Applied Mathematics at Naval Postgraduate School, Monterey, CA 93943, USA. He has published more than 150 papers in refereed journals. He has also published more than 35 papers in refereed conference proceedings. He has also won the 2021 George Boole International Prize for considerable contributions to the theory of Boolean functions.
http://arxiv.org/abs/2312.16318v1
{ "authors": [ "Tapaswini Mohanty", "Vikas Srivastava", "Sumit Kumar Debnath", "Pantelimon Stanica" ], "categories": [ "quant-ph", "cs.CR" ], "primary_category": "quant-ph", "published": "20231226195329", "title": "Quantum Secure Protocols for Multiparty Computations" }
X Modality Assisting RGBT Object Tracking Zhaisheng Ding January 14, 2024 ========================================= Learning robust multi-modal feature representations is critical for boosting tracking performance. To this end, we propose a novel X Modality Assisting Network (X-Net) to shed light on the impact of the fusion paradigm by decoupling the visual object tracking into three distinct levels, facilitating subsequent processing. Firstly, to tackle the feature learning hurdles stemming from significant differences between RGB and thermal modalities, a plug-and-play pixel-level generation module (PGM) is proposed based on self-knowledge distillation learning, which effectively generates X modality to bridge the gap between the dual patterns while reducing noise interference. Subsequently, to further achieve the optimal sample feature representation and facilitate cross-modal interactions, we propose a feature-level interaction module (FIM) that incorporates a mixed feature interaction transformer and a spatial-dimensional feature translation strategy. Ultimately, aiming at random drifting due to missing instance features, we propose a flexible online optimized strategy called the decision-level refinement module (DRM), which contains optical flow and refinement mechanisms. Experiments are conducted on three benchmarks to verify that the proposed X-Net outperforms state-of-the-art trackers.RGBT tracking, knowledge distillation, multistage fusion, refinement mechanism. § INTRODUCTIONVisual object tracking aims to accurately predict the bounding box of an object by leveraging the rich information from dual modalities. This approach capitalizes on the complementary cues provided by both RGB and thermal images, enabling it to operate seamlessly around the clock<cit.>. Benefiting from the leapfrog development of deep learning and thermal imaging techniques, extensive remarkable RGBT trackers have been studied and proposed, which greatly improved the precise and success rate of object tracking, while also bolstering the application of artificial intelligence (AI) in various practical domains such as autonomous driving, military reconnaissance, and intelligent security systems<cit.>.RGB images excel at capturing informative color features and rich texture details but are easily susceptible to environmental illumination conditions. In contrast, thermal images exhibit high stability and are insensitive to variation illumination. This type of image exhibits strong anti-interference resilience to challenging environmental factors, i.e. haze, rain and snow, but lacks the details of the objects. The complementary feature combination of RGB and thermal images can effectively improve the classification accuracy of hard samples, thus upgrading the accuracy of tracking<cit.>. Therefore, how to effectively extract and utilize the complementary information emerges as a paramount considerations in RGBT tracking<cit.>.Early traditional-based algorithms were based on manually extracted features, which are unable to adapt to complex environments<cit.>. Inspired by the successful application of deep learning (DL) methods, many RGBT trackers begin to use embedded feature fusion modules to select valid information adaptively<cit.>. The DL-based methods, which can be broadly categorized into two branches: discriminative trackers and generative ones, usually have attribute challenge capability and are applied in a wider range of scenarios, such as drastic appearance changes, motion blur and occlusion. 
The Siamese network-based trackers are a common type of generative trackers. Guo et al<cit.> introduced a Siamese network to achieve efficient RGBT object tracking performance, which efficiently obtains the unimodal features from multimodalities by an elaborate feature fusion module. In order to fully capture cross-modal information, Zhang et al proposed a SiamCDA<cit.> tracker based on SiamRPN++<cit.>, which employs the deep network to extract the deep potential features of RGB and thermal images. Wang et al proposed a siamese-based transformer RGBT tracker to achieve feature extraction and global information interaction<cit.>. Hou et al proposed MTNet to explore modality-specific cues and achieve satisfactory results<cit.>. The Siamese network-based trackers achieve a tracking speed beyond the real-time requirement. Nevertheless, the generative trackers tend to disregard the significance of shared features across modalities, thereby posing challenges in attaining high accuracy and strong robustness. The discrimination-based RGBT trackers have long attracted attention due to their high tracking accuracy. For instance, Li et al<cit.> proposed a typical kind of discriminative multi-adapter network (MANet) for RGBT tracking, which can excavate the potential value of complementary features between modalities and instance perception information through multiple convolution layers of different kernel sizes. MANet exhibits significant potential in extracting shared features. However, the repetitive utilization of large convolutional kernels for feature extraction compromises network efficiency. Tu et al<cit.> proposed a multi-modal multi-margin metric learning (M^5L) tracker to exploit the structural information of hard samples and achieve quality-aware fusion of complementary features. However, the performance of the M^5L method in tracking precise and success rate is unsatisfactory, making it difficult to reach the state-of-the-art (SOTA). The mentioned methods aim to address the challenge of mining and utilizing correlation clues among different modalities, while also enhancing the anti-interference ability of the trackers. Nevertheless, there is still considerable potential for further improvement.In summary, existing RGBT trackers still face several challenges. Firstly, there are difficulties in assessing the significance of target features present in both RGB images and infrared thermal images. Secondly, there exist challenges in efficiently exploring the correlations between different modalities of features. Lastly, existing tracking strategies lack flexibility. To handle these issues, we propose a novel X modality assisting network, termed as X-Net to reasonably adopt the interdependencies across multi-modalities. Specifically, X-Net aims to enhance tracking performance through three key improvements: pixel-level generation, feature-level interaction and decision-level refinement. As illustrated in Fig. 1, the discriminative trackers utilize a deep network to extract the deep features from both RGB and thermal images. These fused features are then fed into the classifier to predict the target’s position. The initial stage of X-Net involves generating pixel-level RGB and thermal images to obtain the X modality, which aggregates the significant complementary features of the source images. 
Following feature extraction and interaction, we perform prediction refinement after the classifier with the objective of enhancing tracking performance.This paper presents the main contributions in a four-fold manner. * We propose a novel X modality assisting network (X-Net) to improve tracking performance through three key improvements: pixel-level generation, feature-level interaction and decision-level refinement.* We incorporate a lightweight pixel-level generation module (PGM) into X-Net. PGM utilizes a self-knowledge distillation to generate pixel-level feature aggregation maps. This approach effectively leverages the capabilities of a high-performance image fusion network to capture modality-shared features.* We present the feature interaction module (FIM) in conjunction with the spatial-dimensional feature translation strategy (SFTS) and the mixed feature interaction transformer (MFIT) to address variation scaling and explore cross-modal global correlations.* We design the decision-level refinement module (DRM), which combines optical flow and refinement mechanisms to optimize tracking results by the flexible tracking strategy. Comprehensive experiments verify that the proposed X-Net is superior to the state-of-the-art trackers on three RGBT benchmarks. § RELATED WORKThe field of RGBT tracking has witnessed significant advancements in recent years, with a key milestone being the development and release of RGBT tracking datasets. Numerous researchers have endeavored to develop deep learning-based RGBT tracking algorithms aimed at achieving high performance, which are reviewed from two perspectives as follows. §.§ RGBT Tracking MethodsRGBT tracking methods can be classified into generative trackers and discriminative trackers based on their tracking mechanism. A generative RGBT tracker employs a probabilistic model to capture the correlation and joint distribution between thermal infrared and RGB pixels to implement object tracking. Siamese network-based trackers are popular in RGBT visual tracking relying on their high computing efficiency. For instance, SiamFT<cit.> applied the SiamFC<cit.> as a baseline to extract the features of multi-modalities and then fused the features by the hand-designed modality weights allocation strategy. On this basis, DSiamMFT<cit.> utilized the multi-level semantic features effectively and obtained a higher accuracy than SiamFT. DuSiamRT<cit.> adopted the channel attention module to obtain the fusion feature from the template images. SiamCDA enhanced the recognition ability of the deep features and optimized the robustness of the tracker by reducing the unimodal differences, which is based on the advanced tracker, i.e., SiamRPN++. SiamCSR<cit.> combined the channel-spatial attention mechanism and SiamRPN++ network to implement the RGBT tracking. The Siamese network-based trackers generally have high computing efficiency and fully meet the requirements of practical applications. However, this kind of tracker is poor in processing heterogeneous RGBT data, which limits its tracking performance.Unlike the generative RGBT tracking methods, the discriminative RGBT trackers fuse the complementary information of visible and infrared images to build a discriminant model to track the target. Generally, most of the discriminative RGBT trackers adopt the attention mechanism to strengthen the characterization of the target, which effectively classifies the target and background region. 
mfDiMP<cit.> proposed a well-designed target prediction network based on DiMP<cit.>, and utilized the discriminant loss for end-to-end training. MANet combined attention mechanism and expanded MDNet<cit.> into a multi-channel network to extract the common and inherent features of multi-modalities. On this basis, MANet++<cit.> enhanced the features in each modality before the fusion module to obtain better performance than MANet does. MIRNet<cit.> combined the self-attention and cross-attention modules to achieve a robust RGBT tracking performance. Maintaining the discrimination performance of modal features can help trackers effectively distinguish the target region from the background. Therefore, many trackers usually retain the features of RGB and TIR modes. For instance, M^5L tracker introduced an attention-based fusion module to effectively integrate quality perception across source images, which improved the accuracy of object tracking. DMCNet<cit.> proposed a novel dual-gated mutual conditional network to take full advantage of the discriminant information of the multi-modalities while suppressing the influence of noise. ADRNet<cit.> and APFNet<cit.> focus on extracting prominent characteristics of different images and blending them to suit different attribute challenges. AGMINet<cit.> presented an asymmetric global and local mutual integration network to mine heterogeneous features. §.§ RGBT information FusionEffective fusion of RGB and thermal information is a vital issue in RGBT tracking <cit.>. Various methods have been proposed, including element-wise summation, concatenation and content-dependency weighting-based fusion strategies. However, most existing fusion strategies overlook the feature differences between input RGB and thermal images, in fact, have distinct imaging mechanisms and exhibit significant differences in their features<cit.>. Directly fusing weighted unimodal RGB and thermal features leads to reduced discriminability in the fused features and decreased subsequent tracking performance. To fully leverage cross-modal features in RGB and thermal images, many RGBT information fusion methods such as Swinfusion<cit.>, SeAFusion<cit.> and CMFA_Net<cit.> are proposed via different deep networks and obtain convincing results. All of these methods utilize the complementary characteristics of infrared images and RGB images, employing meticulously designed feature fusion modules for feature fusion. Furthermore, it would be beneficial to explore and learn feature interaction strategies from high-performing RGBT fusion networks. several algorithms employ knowledge distillation learning strategies, drawing inspiration from advanced feature fusion strategies of existing methods, to guide the decelopment of their own RGBT information fusion techniques. For example, HKDnet<cit.> utilized a heterogeneous knowledge distillation framework to facilitate the simultaneous fusion and super-resolution of thermal and visible images, which includes a high-resolution image fusion network referred to as the teacher network, as well as a low-resolution image fusion and super-resolution network known as the student network. The primary function of the teacher network is to perform fusion on high-resolution input images and guide the student network in acquiring the abilities of fusion and super-resolution, leading to achieving impressive visual effects while accurately preserving the natural texture details. 
CMD<cit.> tracker introduces a multi-path selection distillation module that guides a simple fusion module to learn more precise multi-modal information from a meticulously crafted fusion mechanism. The performance of an RGBT tracker can significantly deteriorate due to the attention to its feature expression capabilities.In RGBT object tracking, discriminative trackers have longer runtimes compared to generative trackers. However, the generative trackers effectively utilize feature interactions between different modalities, resulting in more powerful and accurate characteristics. Efficient acquisition of intrinsic and shared features in multi-modal images, effectively combining and enhancing key features, as well as correct strategies for target tracking, constitute the primary tasks of discriminative trackers. These challenging tasks motivate us to propose an advanced architecture to enhance tracking performance.§ METHODOLOGY §.§ Network Architecture The pipeline of the X-Net is illustrated in Fig. 2. Concretely, we employ the tailored lightweight VGG-M network with dilated convolution as the backbone and introduce the initial three dilated convolution layers to extract the deep features of the RGB, thermal and X modality generated by the plug-and-play pixel-level generation module (PGM). Subsequently, we propose a feature-level interaction module (FIM) combining a spatial-dimensional feature translation strategy (SFTS) and a mixed feature interaction transformer (MFIT) to obtain an optimal feature expression. Ultimately, we proposed a decision-level refinement module (DRM) to determine the re-tracking strategy according to the confidence score and motion offset. §.§ X Modality Assisting NetworkPixel-level generation module (PGM). Drawing inspiration from the strengths of infrared and visible image fusion methods, we develop a plug-and-play pixel feature representation module using self-knowledge distillation learning. The PGM integrates object cues from multi-modalities directly and reduces noise interference effectively, as shown in Fig. 3. In the process of distillation learning, the teacher network implements an advanced infrared and visible image fusion method, termed SeAFusion<cit.>, while our PGM interacts with it as the student network for learning. The training process of PGM is shown in Algorithm 1. Notably, in comparison to the teacher network, the PGM has a smaller model size, fewer parameters, and lower computational costs, all while maintaining performance levels similar to that of the teacher network. The advantages of self-distillation learning empower the PGM to efficiently extract modality-shared features while simultaneously preserving rich texture details and highlighting thermal object cues.Feature-level interaction module (FIM). The fused image X serves as a crucial cornerstone of multi-modal representation, which synthesizes the complementary content of RGB-T images and can effectively mitigate noise interference. Subsequently, the source images RGB, T and X are fed into the feature extraction module to obtain its deep features D_rgb,D_t,D_x∈ℝ^H × W × C, where H, W and C denote the height, width and channel number of the image, respectively. After that, we propose the FIM to strengthen the feature representation without increasing the number of learning parameters of the network. SFTS module divides the deep features D_rgb and D_t into four slices along the channel plane. 
Subsequently, these slices are shifted by a single pixel in each of the four directions of height and width, which are defined as follows:

D_rgb[:, 2:W, 1:C/4] ← D_rgb[:, 1:W-1, 1:C/4]
D_rgb[:, 1:W-1, C/4+1:C/2] ← D_rgb[:, 2:W, C/4+1:C/2]
D_rgb[2:H, :, C/2:3C/4] ← D_rgb[1:H-1, :, C/2:3C/4]
D_rgb[1:H-1, :, 3C/4:C] ← D_rgb[2:H, :, 3C/4:C]

D_t[2:H, :, 1:C/4] ← D_t[1:H-1, :, 1:C/4]
D_t[1:H-1, :, C/4+1:C/2] ← D_t[2:H, :, C/4+1:C/2]
D_t[:, 2:W, C/2:3C/4] ← D_t[:, 1:W-1, C/2:3C/4]
D_t[:, 1:W-1, 3C/4:C] ← D_t[:, 2:W, 3C/4:C]

The SFTS module misaligns the elements by this spatial shift to enrich the detailed textures of an object. To prevent excessive interference caused by object shifting, the SFTS module shifts elements by just one pixel in space. However, in cases where two objects are in close spatial proximity, their features may intersect or overlap, leading to tracking failures. As a solution, we propose the mixed feature interaction transformer (MFIT) to fuse the features produced by the SFTS, mitigating the interference resulting from the spatial displacement.

The MFIT is based on self-attention and cross-modal attention. Initially, the features D_rgb and D_t are reshaped to obtain their own Query {q^rgb, q^t∈ℝ^HW × C}, Key {k^rgb, k^t∈ℝ^HW × C} and Value {v^rgb, v^t∈ℝ^HW × C}, respectively. Subsequently, the cross-modal attention maps are calculated by the following formulas:

D_rgb-t = softmax( q^t(k^rgb)^T/√(d_k)) v^rgb
D_t-rgb = softmax( q^rgb(k^t)^T/√(d_k)) v^t

where {D_rgb-t, D_t-rgb∈ℝ^H × W × C} stands for a pair of attention maps, and d_k denotes the scaling factor, set as 1. Ultimately, the attention maps are summed and concatenated with the deep fused features D_x to obtain the informative fusion maps D_f∈ℝ^H × W × 2C:

Att^rgbt = D_rgb-t + D_t-rgb
D_f = concat(D_x, Att^rgbt)

The feature visualization of the FIM is depicted in Fig. 4. It is evident that the inclusion of FIM allows the tracker to effectively attend to small objects, ensuring that it can focus on minute target regions. In contrast, the absence of FIM may lead to a failure to concentrate on these small target areas.

The fused feature maps are cropped by RoIAlign and then fed into the binary classifier to obtain a coarse location. When confronted with abrupt drifts between adjacent frames, the local search strategy employed by the base tracker proves to be ineffective. Furthermore, the bounding box regression of the base tracker is learned only on the initial frame, which makes it difficult to adjust the target scale dynamically in ensuing frames.

Decision-level refinement module (DRM). To further enhance the tracking performance, we propose the DRM to optimize bounding boxes and estimate motion offsets, which implements different bounding box regression strategies based on the confidence score C_S. When C_S < 0, DRM applies the Lucas-Kanade optical flow<cit.> repositioning rule based on pyramid layering to obtain the target motion offset. The final target position is then obtained by adding this motion offset to the target location produced by the Gaussian-sampling local search. If C_S > 0, the refinement network is initialized to consistently improve the accuracy of the predicted bounding box. In particular, we fine-tune the plug-and-play Alpha Refine<cit.> component on the RGBT dataset, which can refine the bounding boxes accurately with its exceptional spatial awareness capability.
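For illustration, a compact PyTorch-style sketch of the FIM is given below. It is not the released X-Net implementation: the learned Q/K/V projections, the single attention head, the zero-padding of vacated border positions after the one-pixel shifts, and the assignment of shift directions to channel slices in the usage example are assumptions of this sketch; only the overall structure (channel-wise SFTS shift, cross-modal attention maps D_rgb-t and D_t-rgb with scaling factor d_k = 1, their summation and concatenation with D_x) follows the formulas above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sfts_shift(feat, order):
    """SFTS: split the channels into four slices and shift each slice by one pixel.

    feat: (B, C, H, W). `order` gives one direction per slice, where "w+" moves
    content towards larger width indices, "w-" towards smaller ones, and
    "h+"/"h-" do the same along the height axis; vacated border positions are
    zero-filled (an assumption of this sketch).
    """
    chunks = list(torch.chunk(feat, 4, dim=1))
    for i, direction in enumerate(order):
        c = chunks[i]
        if direction == "w+":
            c = F.pad(c, (1, 0, 0, 0))[..., :, :-1]
        elif direction == "w-":
            c = F.pad(c, (0, 1, 0, 0))[..., :, 1:]
        elif direction == "h+":
            c = F.pad(c, (0, 0, 1, 0))[..., :-1, :]
        elif direction == "h-":
            c = F.pad(c, (0, 0, 0, 1))[..., 1:, :]
        chunks[i] = c
    return torch.cat(chunks, dim=1)


class MFIT(nn.Module):
    """Mixed feature interaction transformer: single-head cross-modal attention
    between the shifted RGB and thermal features, summed and concatenated with
    the deep features D_x of the X modality."""

    def __init__(self, channels):
        super().__init__()
        # Learned Q/K/V projections are an assumption; the text only states that
        # the features are reshaped into Query, Key and Value.
        self.q_rgb, self.k_rgb, self.v_rgb = (nn.Linear(channels, channels) for _ in range(3))
        self.q_t, self.k_t, self.v_t = (nn.Linear(channels, channels) for _ in range(3))
        self.d_k = 1.0  # scaling factor set to 1, as in the text

    def forward(self, d_rgb, d_t, d_x):
        b, c, h, w = d_rgb.shape
        rgb = d_rgb.flatten(2).transpose(1, 2)  # (B, HW, C)
        t = d_t.flatten(2).transpose(1, 2)
        # D_rgb-t = softmax(q^t (k^rgb)^T / sqrt(d_k)) v^rgb
        att_rgb_t = torch.softmax(self.q_t(t) @ self.k_rgb(rgb).transpose(1, 2) / self.d_k ** 0.5, dim=-1) @ self.v_rgb(rgb)
        # D_t-rgb = softmax(q^rgb (k^t)^T / sqrt(d_k)) v^t
        att_t_rgb = torch.softmax(self.q_rgb(rgb) @ self.k_t(t).transpose(1, 2) / self.d_k ** 0.5, dim=-1) @ self.v_t(t)
        att = (att_rgb_t + att_t_rgb).transpose(1, 2).reshape(b, c, h, w)  # Att^rgbt
        return torch.cat([d_x, att], dim=1)  # D_f with 2C channels


if __name__ == "__main__":
    b, c, h, w = 1, 8, 16, 16
    d_rgb, d_t, d_x = (torch.randn(b, c, h, w) for _ in range(3))
    d_rgb = sfts_shift(d_rgb, ("w+", "w-", "h+", "h-"))  # shift pattern used for D_rgb
    d_t = sfts_shift(d_t, ("h+", "h-", "w+", "w-"))      # shift pattern used for D_t
    print(MFIT(c)(d_rgb, d_t, d_x).shape)                # torch.Size([1, 16, 16, 16])
```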
§.§ Network Training During the offline training phase, we utilize the VGG-M architecture as our backbone and implement a multi-domain learning strategy. We randomly sample 8 frames and extract 32 positive samples and 96 negative samples per frame using Intersection over Union (IoU) and a Gaussian distribution; these samples are employed to construct a mini-batch. We optimize our network using the AdamW optimizer, set the learning rate of the convolution layers to 10^-4 and that of the fully connected layers to 10^-4, and train for 200 epochs. §.§ Online Tracking Algorithm 2 illustrates the complete online tracking process. Concretely, the fully connected (FC) layers of the backbone are updated while the other parameters remain fixed. At the start, X-Net is initialized on the first frame of the sequence with the target position. Gaussian sampling is performed around the target in the first frame to obtain 500 positive samples S_1^+ with IoU > 0.7 and 5000 negative samples S_1^- with IoU < 0.5 w.r.t. the ground truth. The FC layers undergo 50 epochs of fine-tuning on these samples, with a learning rate of 10^-4 for FC6 and 10^-3 for the remaining layers. A total of 1000 samples are chosen for training the bounding box regressor, with the objective of obtaining a precise bounding box that encompasses the target. To mitigate the impact of tracking inefficiency and inaccurate predictions on the tracking boxes of subsequent video frames, the regressor is trained only on the initial frame and the corresponding bounding box; the regression model then adjusts the tracking coordinates of subsequent frames to achieve more accurate tracking. Thereafter, both short- and long-term updates are conducted using a set of 50 positive samples S_t^+ and 200 negative samples S_t^-. A Gaussian sampling technique centered at Z_t-1^n is utilized to generate 256 candidate regions Z_t^i for the t-th frame. The trained network evaluates the positive scores f^+(Z_t^i) and negative scores f^-(Z_t^i) for each of these candidate samples. Subsequently, the confidence scores of the candidates are sorted, and the top 5 candidates with the highest scores are selected and averaged to obtain Z_m^*. Finally, the DRM is employed to refine the bounding box. § EXPERIMENTAL RESULTS X-Net is implemented on the PyTorch 1.10 platform and runs on one NVIDIA RTX 3090 GPU with 24 GB of memory. §.§ Datasets and Metrics In this paper, we conduct comparative experiments with high-performance competitors on three RGBT benchmarks, namely GTOT<cit.>, RGBT234<cit.>, and LasHeR<cit.>. Following established practices in the field, we employ two standard metrics, namely Precision Rate (PR) and Success Rate (SR), to demonstrate the superiority of the proposed method. We set the precision threshold to 5 pixels for GTOT and 20 pixels for RGBT234/LasHeR, considering the diverse image resolutions of the different datasets. §.§ Evaluation on GTOT Dataset 1) Overall performance. The proposed tracker is compared with 11 state-of-the-art RGBT methods, i.e., MIRNet, APFNet, AGMINet, MFGNet<cit.>, SiamCDA, HDINet<cit.>, MSIFNet<cit.>, RPCF<cit.>, M2GCI<cit.>, EDFNet<cit.> and DMSTM<cit.>. As reported in Table 1, X-Net exhibits superior tracking capability compared to the state-of-the-art trackers, with a PR of 93.1%, surpassing the compared trackers by approximately 0.4%-12.8%. Moreover, with regard to SR, X-Net outperforms the compared trackers, obtaining an impressive SR of 76.7%.
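For reference, the two metrics used above can be computed as in the following sketch. This is a generic illustration of the common protocol (precision from center-location errors at a fixed pixel threshold, success as the area under the success plot over IoU thresholds); the exact evaluation follows the official benchmark toolkits, and all names here are ours.

```python
import numpy as np

def precision_rate(center_errors, threshold):
    """PR: fraction of frames whose predicted center lies within `threshold`
    pixels of the ground-truth center (5 px for GTOT, 20 px for RGBT234/LasHeR)."""
    errors = np.asarray(center_errors, dtype=float)
    return float((errors <= threshold).mean())

def success_rate(ious, num_thresholds=21):
    """SR: area under the success plot, i.e. the mean over overlap thresholds in
    [0, 1] of the fraction of frames whose IoU exceeds each threshold."""
    ious = np.asarray(ious, dtype=float)
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    return float(np.mean([(ious > t).mean() for t in thresholds]))
```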
The results demonstrate that X-Net is capable of attaining exceptional performance. 2) Challenge-based performance. To evaluate the superiority and robustness of the designed methodology, we conduct challenge-based performance tested on the GTOT dataset and compared with 11 excellent RGBT trackers, including CAT<cit.>, APFNet, ADRNet, HMFT<cit.>, JMMAC<cit.>, MaCNet, MIRNet, BACF<cit.>, ECO<cit.>, RT-MDNet<cit.> and SiamDW<cit.>. The visualization of the results of the attribute challenge is shown in Fig. 5, and the lines represent the Euclidean distance between the predicted centroid point of the compared methods and the ground truth. It can be observed that the prediction of X-Net is always closer to the ground truth bounding box than the comparisons, regardless of whether facing scale variation (SV) and fast motion (FM) challenges. The tracking performance of 7 challenges of deformation (DEF), large scale variation (LSV), occlusion (OCC), fast motion, low illumination (LI), thermal crossover (TC) and small object (SO) is shown in Fig. 6. Clearly, the proposed method excels in the challenges of OCC, LSV, LI, TC and SO both in PR and SR, which indicates X-Net is exceptional resistance to interference and versatility across diverse scenarios. Besides, X-Net demonstrates optimal or near-optimal performance in addressing the DEF and FM challenges, showcasing accurate target tracking capabilities despite the target undergoing various transformations. In conclusion, the proposed tracker X-Net exhibits a distinct octagonal shape in the radar chart visualization of the GTOT dataset, providing a more comprehensive representation in comparison to the other methods. The results demonstrate the capability of X-Net to effectively address multiple challenges and achieve outstanding performance, highlighted by its remarkable robustness.3) Qualitative comparison. In order to comprehensively showcase the benefits of X-Net, a qualitative comparison of the tracking results on GarageHover, BlackSwan1 and FastmotorNig sequences is conducted with 10 exemplary trackers, as shown in Fig. 7. Specifically, the GarageHover sequence reveals the inability of the ECO and BACF methods to track the object in low illumination conditions. The tracking boxes of MIRNet are noticeably inaccurate. In the case of the BlackSwan1 sequence, the zoomed-in section of the tracking results for the 130th frame is presented in the bottom black box of the third row in the image, which illustrates the BACF, ECO, RT-MDNet, APFNet and MIRNet methods struggle to track the object when faced with background interference. In scenarios where tracking occurs during nighttime with rapid target movement (FastMotorNig), conventional methods commonly encounter challenges, such as imprecise tracking or even misidentifying targets. In contrast, the proposed RGBT tracker consistently achieves stable and accurate tracking performance. §.§ Evaluation on RGBT234 Dataset 1) Overall performance. The RGBT234 dataset is applied to evaluate the performance comprehensively, as reported in Table 2 and Table 3. It is evident that the X-Net outperforms the comparisons both in the PR and SR metrics. The proposed method surpasses the EDFNet method by 2.5% and 4.2% in terms of PR and SR, respectively. Significantly, X-Net demonstrates an improvement of approximately 17.1%/14% in PR/SR, compared to RPCF.2) Attribute-based performance. 
The challenging attributes of the RGBT234 dataset can be categorized into 12 types, i.e., no occlusion (NO), partial occlusion (PO), heavy occlusion (HO), low illumination (LI), low resolution (LR), thermal crossover (TC), DEF, fast motion (FM), scale variation (SV), motion blur (MB), camera movement (CM) and background clutter (BC). Based on the metric values presented in Table 2 and Table 3, X-Net achieves the highest levels of accuracy and success in the challenging attributes such as NO, PO, TC, DEF, SV, MB, CM and BC. X-Net surpasses the RT-MDNet-based method AGMINet and attains PR/SR gains of 0.3%/2.8, 2.8%/2.8%, 6.3%/5.2%, 2.9%/4.1%, 4.4%/6.3%, 2.4%/2.7%, and 2.7%/2.3% for the HO, TC, DEF, SV, MB, and CM challenges, respectively. Besides, X-Net exhibits outstanding performance, with impressive PR/SR scores of 90.5%/66.7%, 86.9%/64.4%, 80.6%/60.2%, and 81.7%/59.8% when confronted with common attribute challenges including HO, TC, MB and CM, respectively. In the absence of no occlusions, the proposed tracker secures an elevated accuracy and success rate of 94.9%/72.7%. However, in the LI and BC challenges, X-Net exhibits inferior performance compared to AGMINet. This can be attributed to the ability of AGMINet to extract multi-scale information at each layer of features, allowing robust target localization even in the presence of low illumination and background clutter scenarios. The process of extracting features at each layer leads to an increase in computation complexity. X-Net exhibits exceptional competence in addressing attribute challenges, facilitating effective object tracking in diverse scenarios.3) Qualitative comparison. Qualitative comparison experiments are conducted on 9 video sequences, including aftertree, baby, carafetertree, dog1, manypeople, nightthreepeople, people1, soccer2 and threeman2, by comparing with 9 state-of-the-art trackers, as illustrated in Fig. 8. To facilitate the demonstration, we present the results using local magnification. Explicitly, in the aftertree sequence, the target person is effectively tracked in all comparisons for the first 140 frames. However, at 312th frame, when occluded by trees, the majority of trackers, specifically the ECO and HMFT methods, display tracking errors or inaccuracies. In the baby sequence, the tracked target encounters attributes such as scale variation, occlusion, and thermal overlap as it approaches. The effects of tracking by different trackers are particularly notable. For example, most of the trackers have already encountered issues of tracking the wrong target at 290th frame, and only MIRNet, ADRNet and X-Net are able to track the target at 537th frame. The target areas tracked by the MIRNet and ADRNet methods are significantly larger than the intended tracking area, while X-Net achieves the highest accuracy. Both camera movement and scale variation attributes are present in the sequences featuring caraftertree and dog1, leading to the difficulty of correctly tracking the target. The proposed method demonstrates accurate tracking of a car or dog, even during motion and partial occlusion. In the manypeople, people1 and threeman2 sequences, the target faces challenges associated with occlusion and thermal overlap during the tracking process. Additionally, significant interference from surrounding persons adversely affects the performance of the trackers. The results from the two sequences demonstrate that X-Net consistently and accurately tracks the target without being affected by interference. 
When dealing with a rapidly moving soccer target, it becomes evident that most trackers struggle to accurately track the soccer within the first 20 frames. However, the proposed X-Net method proves to be the exception, successfully and precisely tracking the soccer by the 144th frame. While the other tracking methods being compared produce inaccurate results, X-Net consistently delivers precise and reliable tracking results.In summary, the proposed method has consistently achieved exceptional tracking results on the challenging RGBT234 dataset, further confirming its superior tracking performance and robustness in various challenging scenarios. In medium-to-long video sequence tracking, X-Net surpasses the comparison methods, showcasing exceptional stability. §.§ Evaluation on LasHeR DatasetTo further validate the robustness of X-Net, the model is trained on the RGBT234 dataset and evaluated on the testing subset of LasHeR. The comparison results of X-Net against 13 existing trackers, i.e., APFNet, DMCNet, MaCNet, mfDiMP, MANet, CAT, DAPNet<cit.>, MANet++, DAFNet<cit.>, FANet<cit.>, CMR<cit.>, SGT++ and SGT<cit.>, is shown in Fig. 9, which demonstrates the proposed method performs optimal in terms of PR and SR. Specifically, X-Net demonstrates superior performance over the SGT method with improvements of 18.1% and 20.4% in PR and SR metrics, respectively. Additionally, it achieves 4.1% and 10.2% higher PR and SR metrics compared to the MANet++ algorithm. In conclusion, the proposed X-Net demonstrates competitive tracking performance on LasHeR datasets, confirming its effectiveness, robustness and ability to handle attribute challenges across diverse datasets. §.§ Efficiency AnalysisTracking efficiency is critical for trackers and serves as a fundamental metric for their evaluation. In order to conduct a comprehensive evaluation of X-Net, we benchmark its speed against six trackers on the GTOT dataset, as displayed in Fig. 10. It can be observed that X-Net demonstrates strong competitiveness in terms of performance and speed. Specifically, X-Net achieves a tracking speed of 21±3 fps, which are 7 and 9.5 times faster than those of the MANet and DMCNet, respectively. Notably, X-Net outperforms both DAFNet and MIRNet in terms of PR/SR metrics, even though it shows marginally slower execution speed.Moreover, we compare two crucial network metrics, namely Flops and Params, to measure the complexity of the proposed model. Table 4 displays that the proposed method has lower parameters compared with the comparison methods, indicating the minimal space complexity of X-Net. It can be concluded that the X-Net model proposed in this study demonstrates lower Flops compared to DMCNet, showcasing its competitive advantage.§.§ Ablation StudyTo validate the feasibility of each contribution, we implement four variants and test them on the GTOT and RGBT234 dataset, i.e., X-Net-v1 is a base model, which fuses RGB and thermal features via simple element addition. X-Net-v2 incorporates the PGM into the baseline network. X-Net-v3 combines the PGM and FIM into the baseline tracker. X-Net is the final version equipped with all of the contributions. The ablation results of different variants are reported in Table 5, and we can draw the following conclusions: 1) Each proposed improvement significantly enhances the performance of the tracker. 2) By effectively leveraging the fused features from dual modalities, the PGM greatly improves the tracking precision and success rate of the baseline network. 
3) The FIM effectively utilizes cues between modality space features to enhance the perception of modality, thereby improving the accuracy of localization. 4) The proposed DRM not only prevents misalignment but also contributes to achieving optimal tracking results.§ CONCLUSIONA high-performance X-Net is proposed to effectively address the RGBT tracking task. By integrating three meticulously crafted and impactful design modules, including PGM, FIM and DRM, X-Net demonstrates a significant enhancement in tracking performance. Firstly, the PGM is proposed to integrate object cues from multi-modalities directly and reduce noise interference effectively, which is a plug-and-play pixel feature representation module via self-knowledge distillation learning. Secondly, to effectively address the scale changes and construct the cross-modal communication relationship of multi-modal deep features, we propose the FIM combining a spatial-dimensional shift strategy and a mixed feature interaction transformer. Finally, the DRM is proposed to determine the re-tracking strategy by adopting a refinement strategy and optical flow algorithm. Experimental results on three benchmark datasets demonstrate that the proposed X-Net outperforms the state-of-the-art trackers. In further work, our study aims to investigate the crucial cues provided by multi-modal features that impact tracking performance, and develop the knowledge distillation-based theoretical framework in the field of RGBT tracking.§ ACKNOWLEDGMENTSThis work is supported by the National Natural Science Foundation of China (Nos. 62266049). "Famous teacher of teaching" of Yunnan 10000 Talents Program. Key project of Basic Research Program of Yunnan Province (No. 202101AS070031). General project of National Natural Science Foundation of China (No. 81771928). 1 IEEEtran 1 Q. Xu, Y. Mei, J. Liu and C. Li, "Multimodal Cross-Layer Bilinear Pooling for RGBT Tracking," IEEE Transactions on Multimedia, vol. 24, pp. 567-580, 2022. 2 J. Peng, H. Zhao, and Z. Hu, "Dynamic fusion network for RGBT tracking," IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 4, pp. 3822-3832, 2022. 3 A. Lu, C. Qian, C. Li, J. Tang, and L. Wang, "Duality-gated mutual condition network for RGBT tracking," IEEE Transactions on Neural Networks Learning Systems, pp. 1-14, 2022. 4 J. Qiu, R. Yao, Y. Zhou, P. Wang, Y. Zhang, and H. Zhu, "Visible and Infrared Object Tracking via Convolution-Transformer Network With Joint Multimodal Feature Learning," IEEE Geoscience Remote Sensing Letters, vol. 20, pp. 1-5, 2023. 5 Z. Cheng, A. Lu, Z. Zhang, C. Li, and L. Wang, "Fusion Tree Network for RGBT Tracking," in 2022 18th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Madrid, Spain, 2022, pp. 1-8. 6 M. Feng and J. Su, "Learning Multi-Layer Attention Aggregation Siamese Network for Robust RGBT Tracking," in IEEE Transactions on Multimedia, pp. 1-15, 2023. 7 C. Li, C. Zhu, J. Zhang, B. Luo, X. Wu, and J. Tang, "Learning Local-Global Multi-Graph Descriptors for RGB-T Object Tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 10, pp. 2913-2926, 2019. 8 K. Song, W. Zhang, W. Lu, Z. J. Zha, X. Ji, and Y. Li, "Visual Object Tracking via Guessing and Matching," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 11, pp. 4182-4191, 2020. 9 D. Guo, J. Wang, Y. Cui, Z. Wang, and S. 
Chen, "SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 6268-6276. 10 T. Zhang, X. Liu, Q. Zhang, and J. Han, "SiamCDA: Complementarity- and Distractor-Aware RGB-T Tracking Based on Siamese Network," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1403-1417, 2022. 11 B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, "Siamrpn++: Evolution of siamese visual tracking with very deep networks," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4282-4291. 60 F. Wang, W. Wang, L. Liu, et al. "Siamese transformer RGBT tracking," Appl Intell, vol. 53, pp. 24709–24723, 2023. 61 R. Hou, B. Xu, T. Ren and G. Wu, "MTNet: Learning Modality-aware Representation with Transformer for RGBT Tracking," 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, 2023, pp. 1163-1168. 12 C. L. Li, A. Lu, A. H. Zheng, Z. Tu, and J. Tang, "Multi-Adapter RGBT Tracking," in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 2262-2270. 13 Z. Tu, C. Lin, W. Zhao, C. Li, and J. Tang, "M 5 l: multi-modal multi-margin metric learning for RGBT tracking," IEEE Transactions on Image Processing, vol. 31, pp. 85-98, 2021. 14 X. Zhang, P. Ye, S. Peng, J. Liu, K. Gong, and G. Xiao, "SiamFT: An RGB-Infrared Fusion Tracking Method via Fully Convolutional Siamese Networks," IEEE Access, vol. 7, pp. 122122-122133, 2019. 15 L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, "Fully-Convolutional Siamese Networks for Object Tracking," Cham, 2016, pp. 850-865: Springer International Publishing. 16 X. Zhang, P. Ye, S. Peng, J. Liu, and G. Xiao, "DSiamMFT: An RGB-T fusion tracking method via dynamic Siamese networks using multi-layer feature fusion," Signal Processing: Image Communication, vol. 84, p. 115756, 2020. 17 C. Guo, D. Yang, C. Li, and P. Song, "Dual Siamese network for RGBT tracking via fusing predicted position maps," The Visual Computer, vol. 38, no. 7, pp. 2555-2567, 2022. 18 C. Guo and L. Xiao, "High Speed and Robust RGB-Thermal Tracking via Dual Attentive Stream Siamese Network," in IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, 2022, pp. 803-806. 19 L. Zhang, M. Danelljan, A. Gonzalez-Garcia, J. v. d. Weijer, and F. S. Khan, “Multi-Modal Fusion for End-to-End RGB-T Tracking,” in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 2252-2261. 20 G. Bhat, M. Danelljan, L. V. Gool, and R. Timofte, "Learning Discriminative Model Prediction for Tracking," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6181-6190. 21 H. Nam, and B. Han, “Learning Multi-domain Convolutional Neural Networks for Visual Tracking,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4293-4302. 22 H. Zhang, L. Zhang, L. Zhuo, and J. Zhang, "Object tracking in RGB-T videos using modal-aware attention network and competitive learning," Sensors, vol. 20, no. 2, p. 393, 2020. 23 R. Hou, T. Ren, and G. Wu, "MIRNet: A Robust RGBT Tracking Jointly with Multi-Modal Interaction and Refinement," in 2022 IEEE International Conference on Multimedia and Expo (ICME), 2022, pp. 1-6. 24 M. Wang, H. Cai, Y. Dai, and M. 
Gong, "Dynamic Mixture of Counter Network for Location-Agnostic Crowd Counting," in 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 167-177. 25 P. Zhang, D. Wang, H. Lu, and X. Yang, "Learning Adaptive Attribute-Driven Representation for Real-Time RGB-T Tracking," International Journal of Computer Vision, vol. 129, no. 9, pp. 2714-2729, 2021. 26 Y. Xiao, M. Yang, C. Li, L. Liu, and J. Tang, "Attribute-Based Progressive Fusion Network for RGBT Tracking," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 3, pp. 2831-2838, 2022. 27 J. Mei, Y. Liu, C. Wang, D. Zhou, R. Nie, and J. Cao, "Asymmetric Global–Local Mutual Integration Network for RGBT Tracking," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-17, 2022. 53 W. Xia and D. Zhou and J. Cao and Y. Liu and R. Hou, "CIRNet: An improved RGBT tracking via cross-modality interaction and re-identification," Neurocomputing, vol. 439, pp. 327-339, 2022. 59 R. Hou, D. Zhou, R. Nie, D. Liu, L. Guo and C. Yu, "VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion," in IEEE Transactions on Computational Imaging, vol. 6, pp. 640-651, 2020. 54 J. Ma, L. Tang, F. Fan, J. Huang, X. Mei and Y. Ma, "SwinFusion: cross-domain long-range learning for general image fusion via swin transformer," IEEE/CAA Journal of Automatica Sinica, vol. 9, pp. 1200-1217, 2022. 55 L. Tang, J. Yuan and J. Ma, "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network," information fusion, vol. 82, pp. 28-42, 2022. 56 Z. Ding, H. Li, D. Zhou, H. Li, Y. Liu and R. Hou, "CMFA_Net: A cross-modal feature aggregation network for infrared-visible image fusion," infrared physics and technology, vol. 118, pp. 103905-, 2021. 57 W. Xiao, Y. Zhang, H. Wang, F. Li and H. Jin, "Heterogeneous knowledge distillation for simultaneous infrared-visible image fusion and super-resolution," in IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-15, 2022. 58 T. Zhang, H.Guo, Q. Jiao, Q. Zhang and J.Han, "Efficient RGB-T tracking via cross-modality distillation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023), 2023, pp. 5404-5413. 28 L. Tang, J. Yuan, and J. Ma, "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network," Information Fusion, vol. 82, pp. 28-42, 2022. 29 N. Sharmin and R. Brad, "Optimal Filter Estimation for Lucas-Kanade Optical Flow," vol. 12, no. 9, pp. 12694-12709, 2012. 30 B. Yan, X. Zhang, D. Wang, H. Lu, and X. Yang, "Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5285-5294. 31 C. Li, H. Cheng, S. Hu, X. Liu, J. Tang, and L. Lin, "Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking," IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5743-5756, 2016. 32 C. Li, X. Liang, Y. Lu, N. Zhao, and J. Tang, "RGB-T object tracking: Benchmark and baseline," Pattern Recognition, vol. 96, p. 106977, 2019. 33 C. Li et al., "LasHeR: A large-scale high-diversity benchmark for RGBT tracking," IEEE Transactions on Image Processing, vol. 31, pp. 392-404, 2021. 34 X. Wang, X. Shu, S. Zhang, B. Jiang, Y. Wang, Y. Tian, and F. J. a. p. a. Wu, “MFGNet: Dynamic modality-aware filter generation for RGB-T tracking,” arXiv preprint arXiv:.10433, 2021. 35 J. 
Mei, D. Zhou, J. Cao, R. Nie, and Y. Guo, "Hdinet: Hierarchical dual-sensor interaction network for rgbt tracking," IEEE Sensors Journal vol. 21, no. 15, pp. 16915-16926, 2021. 36 X. Xiao, X. Xiong, F. Meng, and Z. J. S. Chen, “Multi-scale feature interactive fusion network for rgbt tracking,” Sensors, vol. 23, no. 7, pp. 3410, 2023. 37 Y. Wang, X. Wei, X. Tang, K. Yu, and L. Luo, "RGBT tracking using randomly projected CNN features," Expert Systems with Applications, vol. 223, p. 119865, 2023. 38 K. Yan, C. Wang, D. Zhou, and Z. Zhou, "RGBT Tracking via Multi-stage Matching Guidance and Context integration," Neural Processing Letters, pp. 1-15, 2023. 39 K. Yan, J. Mei, D. Zhou, and L. Zhou, "External-attention dual-modality fusion network for RGBT tracking," The Journal of Supercomputing, pp. 1-22, 2023. 40 F. Zhang, H. Peng, L. Yu, Y. Zhao, B. Chen, and Measurement, "Dual-Modality Space-Time Memory Network for RGBT Tracking," IEEE Transactions on Instrumentation, vol. 72, pp. 1-12, 2023. 41 C. Li, L. Liu, A. Lu, Q. Ji, and J. Tang, "Challenge-aware RGBT tracking," in European Conference on Computer Vision, 2020, pp. 222-237. 42 P. Zhang, J. Zhao, D. Wang, H. Lu, and X. Ruan, "Visible-thermal UAV tracking: A large-scale benchmark and new baseline," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8886-8895. 43 P. Zhang, J. Zhao, C. Bo, D. Wang, H. Lu, and X. Yang, "Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking," IEEE Transactions on Image Processing, vol. 30, pp. 3335-3347, 2021. 44 H. Kiani Galoogahi, A. Fagg, and S. Lucey, "Learning background-aware correlation filters for visual tracking," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 1135-1143. 45 M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, "Eco: Efficient convolution operators for tracking," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 6638-6646. 46 I. Jung, J. Son, M. Baek, and B. Han, "Real-Time MDNet," Computer Vision – ECCV 2018. pp. 89-104. 47 Z. Zhang and H. Peng, "Deeper and wider siamese networks for real-time visual tracking," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4591-4600. 48 Y. Zhu, C. Li, B. Luo, J. Tang, and X. Wang, “Dense Feature Aggregation and Pruning for RGBT Tracking,” in Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 2019, pp. 465–472. 49 Y. Gao, C. Li, Y. Zhu, J. Tang, T. He, and F. Wang, "Deep Adaptive Fusion Network for High Performance RGBT Tracking," in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 91-99. 50 Y. Zhu, C. Li, J. Tang, and B. Luo, "Quality-Aware Feature Aggregation Network for Robust RGBT Tracking," IEEE Transactions on Intelligent Vehicles, vol. 6, no. 1, pp. 121-130, 2021. 51 C. Li, C. Zhu, Y. Huang, J. Tang, and L. Wang, "Cross-modal ranking with soft consistency and noisy labels for robust RGB-T tracking," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 808-823. 52 J. Hyun, M. Kang, D. Wee, and D.-Y. Yeung, "Detection recovery in online multi-object tracking with sparse graph tracker," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 4850-4859.
http://arxiv.org/abs/2312.17273v1
{ "authors": [ "Zhaisheng Ding", "Haiyan Li", "Ruichao Hou", "Yanyu Liu", "Shidong Xie", "Dongming Zhou", "Jinde Cao" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227053854", "title": "X Modality Assisting RGBT Object Tracking" }
Microscopic Optical Potentials from Chiral Forces and Ab Initio Nuclear Densities

C. Giusti^1, M. Vorabbi^2, P. Finelli^3

^1 INFN, Sezione di Pavia, Via A. Bassi 6, I-27100 Pavia, Italy
^2 Department of Physics, University of Surrey, Guildford, GU2 7XH, UK
^3 Dipartimento di Fisica e Astronomia, Università degli Studi di Bologna and INFN, Sezione di Bologna, Via Irnerio 46, I-40126 Bologna, Italy

We derived microscopic optical potentials (OPs) for elastic nucleon-nucleus scattering within the framework of chiral effective field theories at the first-order term of the spectator expansion of the Watson multiple-scattering theory and adopting the impulse approximation. Our OPs are derived by folding ab initio nuclear densities with a nucleon-nucleon (NN) t matrix computed with a consistent chiral interaction. The results of our OPs are in good agreement with the experimental data. Recent achievements of our work are reviewed in this contribution. § INTRODUCTION   The optical potential (OP) provides a successful tool to describe elastic nucleon-nucleus (NA) scattering. Its use can be extended to inelastic scattering and to the calculation of the cross section of a wide variety of nuclear reactions. The basic idea is to describe the NA interaction with an effective complex and energy-dependent potential <cit.>. The imaginary part accounts for the flux lost from the elastic channel to open inelastic and reaction channels, while the energy dependence and nonlocalities account for the underlying many-nucleon dynamics. Phenomenological and microscopic approaches have been used to derive an OP. Phenomenological OPs are obtained assuming an analytical form and a dependence on a number of adjustable parameters for the real and imaginary parts that characterize the shape of the nuclear density distribution and that vary with the nucleon energy and the nuclear mass number. The values of the parameters are determined through a fit to elastic pA scattering data. Global OPs, available for a wide range of nuclei and energies <cit.>, are quite successful in the description of elastic scattering data and are usually adopted for the calculation of the cross section of many nuclear reactions. Microscopic OPs are the result of a microscopic calculation and not of a fitting procedure and are, therefore, more theoretically founded, but in principle require the solution of the full many-body problem for the incident nucleon and all the nucleons of the target nucleus, which is a tremendous task, often beyond current computing capabilities. Some approximations are needed to reduce the problem to a tractable form, and the reliability of the OP depends on the reliability of the adopted approximations. In general, one would expect that a microscopic OP can be less able to describe elastic NA scattering data than a phenomenological OP, but it can have a greater predictive power when applied to situations for which experimental data are not yet available. We believe that the derivation of a microscopic OP, starting from NN and three-nucleon (3N) interactions, where the approximations and uncertainties of the model are reduced as much as possible, is mandatory to provide reliable predictions for a wide range of nuclei.
This is particularly important for nuclei away from stability, whose study represents a frontier in nuclear science over the coming years and which will be probed at new rare-isotope beam facilities worldwide <cit.>. In a series of papers over the last years <cit.> we derived microscopic OPs for elastic (anti)nucleon-nucleus scattering from chiral nuclear interactions. Our OPs have been obtained at the first-order term of the spectator expansion of the Watson multiple-scattering theory <cit.> and adopting the impulse approximation (IA). The idea was to start from a relatively simple model and, with subsequent steps, improve and extend the model. An overview of the latest achievements of our work is presented in this contribution. In Section 2 we outline the theoretical framework used to calculate our microscopic OPs. Our latest achievements and their main findings are discussed in Section 3. Our conclusions and perspectives are drawn in Section 4. § THEORETICAL FRAMEWORK In this section we outline only the main steps of the derivation of our microscopic OPs. More details can be found in <cit.>. The standard approach to the elastic scattering of a nucleon from a target nucleus of A particles is the separation of the full Lippmann-Schwinger (LS) equation for the transition operator,

T = V ( 1 + G_0 (E) T ) ,

into two parts, i.e. an integral equation for T,

T = U ( 1 + G_0 (E) P T ) ,

where U is the optical potential operator, and an integral equation for U,

U = V ( 1 + G_0 (E) Q U ) .

In the above equations V is the external interaction, G_0 (E) the free Green's function for the (A+1)-nucleon system, and P and Q = 1 - P projection operators, where P selects the elastic channel. A consistent framework to compute U and T is provided by the spectator expansion, which is based on the multiple-scattering theory <cit.>. We retain only the first-order term, corresponding to the single-scattering approximation, where only one target nucleon interacts with the projectile. Moreover, we adopt the IA, where nuclear binding on the interacting target nucleon is neglected <cit.>. The adopted approximations reduce the complexity of the original many-body problem to a form where, in practice, we have to solve only two-body equations. After some manipulations, the OP is obtained as a folding integral of the two main ingredients of the model, the target density and the NN t matrix, as

U (q, K; E) = ∑_N=p,n ∫ dP η(q, K, P) t_NN[ q, 1/2 ( (A+1)/A K + √((A-1)/A) P ); E ] ρ_N( P + √((A-1)/A) q/2, P - √((A-1)/A) q/2 ) ,

where q and K represent the momentum transfer and the average momentum, respectively. Here P is an integration variable, t_NN is the NN t matrix and ρ_N is the one-body nuclear density matrix. The parameter η is the Möller factor, which imposes the Lorentz invariance of the flux when we pass from the NA to the NN frame in which the t matrices are evaluated, and E is the energy at which the t matrices are evaluated. In our first papers <cit.> the use of local neutron and proton densities from a relativistic mean-field model <cit.> gives the OP in a factorized form, the so-called optimum factorization approximation, as the product of the density and the t matrix, thus avoiding the calculation of the folding integral of Eq. (<ref>). For the NN interaction in t_NN we used two versions of chiral potentials at fourth order (N^3LO) in the chiral expansion <cit.> in Ref. <cit.> and at fifth order (N^4LO) <cit.> in Ref. <cit.>. We studied the chiral convergence of the potentials in reproducing elastic pA scattering data.
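Before turning to the results, the structure of the folding expression above can be illustrated by a schematic numerical sketch: for a given (q, K), the OP is a sum of proton and neutron contributions, each a quadrature over the integration variable P, with the NN t matrix and the nonlocal one-body density supplied as user-provided callables. The grid, the placeholder functions, and all names below are ours; the actual calculations in the cited papers use dedicated few-body and structure codes.

```python
import numpy as np

def folding_potential(q, K, E, t_NN, rho, A, eta, grid):
    """Schematic quadrature for U(q, K; E) = sum_N \int dP eta(q,K,P) t_NN(...) rho_N(...).

    t_NN(N, q, kappa, E) and rho(N, p1, p2) are user-supplied callables returning
    complex numbers; `grid` is a list of (P_vector, weight) quadrature nodes in 3D.
    """
    a_plus = (A + 1) / A
    a_minus = np.sqrt((A - 1) / A)
    U = 0.0 + 0.0j
    for N in ("p", "n"):                       # proton and neutron contributions
        for P, w in grid:
            kappa = 0.5 * (a_plus * K + a_minus * P)   # NN momentum argument
            p1 = P + a_minus * q / 2.0                 # density-matrix arguments
            p2 = P - a_minus * q / 2.0
            U += w * eta(q, K, P) * t_NN(N, q, kappa, E) * rho(N, p1, p2)
    return U

# The "optimum factorization" used in the earlier works corresponds to pulling
# t_NN out of the integral, so that U reduces to a product of t_NN and the density.
```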
The results show that it is mandatory to use chiral potentials at least at N^3LO. Lower-order potentials are unable to describe the shape and the magnitude of the scattering observables of elastic pA scattering. The results obtained with chiral potentials at N^4LO are neither better nor worse then those obtained with chiral potentials at N^3LO. In Ref. <cit.> we compared the performances of our OPs and those of a successful phenomenological OP <cit.> in the description of the experimental data over a wide range of nuclei, including isotopic chains, ina proton-energy range between 150 and 330 MeV. The agreement of our OPs with the data is sometimes worse and sometimes better, but overall comparable to the agreement given by the phenomenological OP, in particular, it is better for energies close and above 200 MeV. The OP model was improved in Ref. <cit.>, where the folding integral of t_NN and a microscopic nonlocal density obtained with the ab initio no-core shell model <cit.> (NCSM) approach, utilizing NN and 3N chiral interactions, was calculated. The same chiral NN interaction employed to calculate the nuclear density is used to calculate t_NN. This guarantees the consistency of the theoretical framework and improves the soundness of the numerical predictions of the OP model. The same approach, with the same nonlocal NCSM density, was extended to elastic scattering of antiprotons off several target nuclei <cit.>. In the calculations of t_N̅N the first N̅N chiral interaction at N^3LO <cit.> was used. Our results are in good agreement with the existing experimental data <cit.>.§ RECENT ACHIEVEMENTS The OPs of Ref. <cit.> uses NN and 3N interactions to calculate the target density, the structure part of the OP, while the dynamic part, t_NN, includes only the NN interaction. Even if we can argue that the impact of 3N forces is more important in the density, since reproducing the nuclear radii is essential for a proper description of the diffraction minima in the differential cross section, a more consistent OP would require the use of the same NN and 3N potentials both in the dynamic and in the structure parts.Unfortunately, the exact treatment of the 3N interaction is a very hard task that is beyond our present capabilities.Many-nucleon forces can be divided into genuine contributions, arising from the nuclear Hamiltonian, and induced terms, coming from the process of solving the nuclear many-body problem. Genuine contributions enter directly into the definition of the nuclear Hamiltonian in terms of the active degrees of freedom chosen to describe the nuclear systems. Recently, with a suitable approximation, we have investigated the role of genuine 3N forces in the dynamic part of the OP already at the level of the single-scattering approximation between the projectile and the target nucleon <cit.>. The pure 3N force is approximated by a density-dependent NN force, obtained by averaging the third nucleon momenta over the Fermi sphere, that is added as a medium correction of the bare NN force used to calculate the t matrix. We constructed the density-dependent NN force following the procedure proposed in Ref. <cit.>.Even if the 3N force is treated in an approximate way, this method extends our previous OP model and allows a direct comparison of our present and previous results. 
A few examples of the impact of genuine 3N forces in the dynamic part of our OPs are shown in Figure <ref>, where the calculated differential cross section and analyzing power (A_y), as a function of the center-of-mass (c.m.) scattering angle, are displayed for elastic proton scattering off ^12C atenergies between 122 and300 MeV and compared with the experimental data. All the results are obtained with the same one-body ab initio density matrix from the NCSM approach using NN and 3N chiral interactions. The red bands show the results obtained with t_pN calculated with the pN chiral interaction at N^4LO <cit.> supplemented by a density-dependent NN interaction where the matter density ρ has been varied between reasonable values, going from surfacelike to bulklike densities. The blue lines correspond to ρ=0 fm^-3, i.e. only the pN interaction is considered in the calculation oft_pN.The effects of genuine 3N forces turn out to be negligible for the cross section, where all curves basically overlap and are in reasonable agreement with the experimental data, and somewhat larger for A_y, where the 3N contribution improves the description of the empirical data.Our microscopic OP has been extended to nonzero spin nuclei <cit.>. The extension requires some changes in the derivation of the OP and in the formalism. The main difference is that the density of a nonzero spin target displays an additional dependence on the initial and final third component of the spin which is then propagated to the OP and calculations get more and more involved and time consuming with the increasing value of the target spin.Calculations have been performed for the differential cross section and the analyzing power of elastic proton scattering off a set of nuclei with different values of the spin in their ground state, between J =1/2 and 3,and the results have been compared with the available data <cit.>. A couple of examples are shown in Figure <ref>, for elastic proton scattering off ^13C (with spin and parity quantum number J^π =1/2^-) and ^7Li (J^π =3/2^-) at200 MeV.The effects of genuine 3N forces are small on the cross section and a bit larger on A_y . The bands, indicating the differences due to different values of the matter density, are thin for the cross section a bit larger for A_y. The impact of 3N forces is comparable to what obtained for spin-zero targets. The agreement with data is generally satisfactory and of the same quality as in the case of spin-zero nuclei.The use of the ab initio NCSM methodfor the nuclear densitymakes the theoretical framework more microscopic and consistent, producing OPs that are quite successful in the description of the available data.The main limitation of the NCSM is that, due to the prohibitive scaling of this approach for heavier systems, it can be used only for nuclei with A not greater than 16, while in general and, in particular, for the study of nuclei away from stability, microscopic OPs are required for a wider range of nuclei. It is therefore necessary to resort to many-body approaches with better scaling with respect to the mass number that allow reaching medium-mass and heavy nuclear targets. We have begun exploiting the self-consistent Green's function (SCGF) theory <cit.>, which presents better scaling of computational requirements with respect to the mass number and allows us to reach heavier systems, currently up to A≃ 140, providing fully nonlocal densities for the target. 
The density matrix has been computed using the SCGF approach and its ADC(n) algebraic diagrammatic construction truncation scheme at different orders n. The standard Dyson formulation of SCGF has been used for closed-shell nuclei and its Gorkov extension for semi-magic open shells <cit.>. The densities have been computedwith NN and 3N chiral forces derived within the chiral effective field theory. Several chiral interactions are available,which are able to reproduce with a high precision NN phaseshifts and deuteron and triton properties. However, constraining the interactions to only few-body observables often fails to reproduce binding energies and radii of larger nuclei simultaneously with the empirical nuclear matter saturation point. Recently, it has been found that proper saturation can be recovered if light to medium mass nuclei are also used to determine the Hamiltonian <cit.>. The possibility to simultaneously account for energies and radii of medium-mass nuclei motivated us to adopt the NNLO_ sat interaction <cit.> in the calculation of the densities. In particular, an accurate reproduction of the target radius is extremely important for a good description of the diffraction minima of the cross section <cit.>.We investigated the dependence of the scattering observables on details ofab initio SCGF calculations and on the chiral potential used in the NN t matrix. Calculations are performed for Ca and Ni isotopes and the results are compared with available experimental data <cit.>. Our results indicate that the SCGF input is stable and scattering observables are well converged with respect to the model space, 3N forces, and many-body truncation already at the ADC(2) level.Our OPs give a good description of the experimental differential cross sections. An example is given in Figure <ref>, where the experimental cross sections for elasticproton scattering off ^40Ca at 65, 80, 135, and 182 MeV are compared with the results of our OPs obtained using the Gorkov SCGF at second order, GkvADC(2), and with the NNLO_ sat interaction. Our OPsdescribe the experimental data at all energies considered, in particular, we notice the remarkable agreement at 65 MeV, an energy that can be considered at the limit of validity of the IA adopted in our OP model. The agreement with data gets somewhat worse, as usual, for larger values of the scattering angle. Figure <ref> displays the differential cross section and analyzing power as a function of the c.m. scattering angle for protons off ^48Ca at 201 MeV and^58Ni at 178 MeV. The experimental data are compared with the results obtained using NNLO_ sat and N^4LO <cit.> chiral interactions int_NN. The two interactions produce significant differences in both shape and size of the cross section and analyzing power. Both results give a reasonable description of the experimental cross section, although the agreement is generally better with NNLO_ sat. Larger differences are found for A_y, where both interactions describe the shape and the position of the experimental minima, but only NNLO_ sat gives a remarkably good description of their depth. More results, for different isotopes and at different proton energies, confirm these findings <cit.>. Overall, the agreement found between our results and the experimental data is remarkably good and makes our approach to the OP comparable to the other existing approaches on the market. We note that the NN and 3N chiral interactions are the only input in the calculation of our microscopic OPs. 
§ CONCLUSIONS AND PERSPECTIVES Few years ago we started a project to obtain microscopic OPs for elastic (anti)nu­cle­on-nucleus scattering within the framework ofchiral effective field theories. Our OPs were derived at the first-order term of the spectator expansion of the Watson multiple-scattering theory and adopting the IA. They are obtained as a folding integral of the target density and the NN t matrix. The results of our OPs are in reasonably good agreement with the experimental data, for both elastic proton and antiproton-nucleus scattering.In this contribution we have reported recent achievements of our project.When the OP is computed with a nonlocal density from the ab initio NCSM, NN and 3N interactions are consistently included in the structure part. The exact treatment of the 3N force in the dynamic part involves multiple scattering that would make the calculation too difficult. The impact of genuine 3N forces has been evaluated, already at the level of thesingle-scattering approximation, averaging them over the Fermi sphere and thus defining a density-dependent NN interaction which acts as a medium correction of the bare NN potential and which is then added to the bare NN potential in the calculations of theNN t matrix. The effect of this 3N force is generally very small on the cross sections but can be sizeable on polarization observables.Of course a more complete treatment of 3N forces would be required.The extension of our microscopic OPs to nonzero spin targets provides a good description of the data, of the same quality as the one obtained for zero spin targets, and allows us to give reliable predictions for a wider range ofstable and unstable nuclei.The use of ab initio densities from the SCGF theory allows us to extend our OP to heavier targets. The combination of the spectator model and SCGF theories offers good opportunities for the physics of radioactive beams, in particular, toward the solution of the long-standing issue of the lack of consistency between structure and reactions in the interpretation of data.The SCGF theory can provide two-nucleon spectral densities <cit.>, which are the basis for extending the OP model to the next term of the spectator expansion. At energies where the IA may become questionable, the self-energy computed through SCGF theory is itself a viable ab initio OP <cit.>. Future work in these directions would allow us to extend the energy range of applicability of the microscopic OP. § ACKNOWLEDGEMENTSThe work reported in this contribution has been obtained in collaboration with C. Barbieri, M. Gennari, R. Machleidt, P. Navrátil, and V. Somà. We thank all of them. 99FESHBACH1958357 H. Feshbach, Annals of Physics, 5 (1958) 357-390.hodgson1963 P. Hodgson, The Optical Model of Elastic Scattering, Clarendon Press, Oxford (1963).HebbornC. Hebborn et al., J. Phys. G: Nucl. Part. Phys. 50 (2023) 060501.KD A. J. Koning and J. P. Delaroche, Nucl. Phys. A713 (2003) 231-310.Vorabbi1 M. Vorabbi, P. Finelli, and C. Giusti, Phys. Rev. C93 (2016) 034619.Vorabbi2 M. Vorabbi, P. Finelli, and C. Giusti, Phys. Rev. C96 (2017) 044001.Vorabbi3 M. Vorabbi, P. Finelli, and C. Giusti, Phys. Rev. C98 (2018) 064602.Vorabbi4 M. Gennari, M. Vorabbi, A. Calci, and P. Navrátil, Phys. Rev. C97 (2018) 034619.Vorabbi5 M. Vorabbi et al., Phys. Rev. Lett. 124 (2020) 054606.Vorabbi6 M. Vorabbi et al., Phys. Rev. C103 (2021) 024604.Vorabbi7 M. Vorabbi et al.,Phys. Rev. C105 (2022) 014621.Vorabbi8 M. Vorabbi et al., arXiv:2309.04226.Watson K. M. Watson, Phys. Rev. 
105 (1957) 1388-1398. KMT A. K. Kerman, H. McManus, and R. M. Thaler, Annals Phys. 8 (1959) 551-635.Nik1 T. Nikšić et al., 185 (2014) 1808-1821.EM D. R. Entem and R. Machleidt, Phys. Rev. C68 (2003) 041001.EGM E. Epelbaum, W. Glöckle, and U.-G. Meißner, Nucl. Phys. A747 (2005) 362-424.EKM E. Epelbaum, H. Krebs, and U.-G. Meißner, Eur. Phys. J A51 (2015) 53; Phys. Rev. Lett. 115 (2015) 122301.EMN D. R. Entem, R. Machleidt, and Y. Nosyk, Phys. Rev. C96 (2017) 024004.barrett B. R. Barrett, P. Navrátil, and J. P. Vary, Progress in Particle andNuclear Physics, 69 (2013) 131-181.DHM L.-Y. Dai, J. Haidenbauer, and U.-G. Meißner,JHEP 07 (2017) 78.HoltJ. W. Holt, N. Kaiser, and W. Weise, Phys. Rev. C81 (2010) 024002.Navratil2007P. Navrátil, Few Body Syst 41 (2007) 117-140. Gysbers2019 P. Gysbers et al., Nat. Phys. 15 (2019) 428-431.PhysRevC.21J.R. Comfort et al., Phys. Rev. C21 (1980) 2147-2161.PhysRevC.27H.O. Meyer et al., Phys. Rev. C27 (1983) 459-469.PhysRevC.23H.O. Meyer et al., G.L. Moake, and P.P. Singh,Phys. Rev. C23 (1983) 616-622.PhysRevC.81A. Okamoto et al., Phys. Rev. C81 (2010) 054604.PhysRevC.31J.H. Osborne et al., Phys. Rev. C31 (1985) 1569-1572.PhysRevC.43C.W. Glover et al., Phys. Rev. C43 (1991) 1664-1676.SCGF1 C. Barbieri and A. Carbone, Lect. Notes Phys. 936 (2017) 572.SCGF2 A. Cipollone, C. Barbieri, and P.Navrátil, Phys. Rev. Lett111 (2013) 062501.SCGF3 V. Somà, C. Barbieri, and T.Duguet, Phys. Rev.C89 (2014) 024323.SCGF4 A.Cipollone, C. Barbieri, and P. Navrátil,Phys. Rev.C92 (2015) 014306.SCGF5 C. Barbieri, T.Duguet, and V. Somà, Phys. Rev.C105 (2022) 044330.NNLOSAT A. Ekström et al., Phys. Rev. Lett.91 (2015) 051301. PRC102 W.G.Jiang et al., Phys. Rev.C102 (2020) 054301.PRL125 P. Arthuis et al., Phys. Rev. Lett.125 (2020) 182501.PhysRevC.26 H. Sakaguchi et al., Phys. Rev.C26 (1982) 944-960.NPA366 T. Noro et al., Nucl. Phys.A366 (1981) 189-201.PhysRevC.23a A. Nadasen et al., Phys. Rev.C23 (1981) 1023-1043.PhysRevC.26a P. Schwandt et al., Phys. Rev.C26 (1982) 55-64.AF19 A. Johansson, U. Svanberg, and P.E. Hodgson, Arkiv Fysik 19 (1961).PhysRevC.49 A.E. Feldman et al., Phys. Rev.C49 (1994) 2068-2085.NPA365 A. Ingemarsson, T. Johansson, and G. Tibell, Nucl. Phys.A365 (1981) 426-456.2NSF C. Barbieri et al., Phys. Rev.C70 (2004) 014606.Idini A. Idini, C. Barbieri, and P.Navrátil, Phys. Rev. Lett123 (2019) 092501.
http://arxiv.org/abs/2312.16157v1
{ "authors": [ "Carlotta Giusti", "Matteo Vorabbi", "Paolo Finelli" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20231226183404", "title": "Microscopic Optical Potentials from Chiral Forces and Ab Initio Nuclear Densities" }
In this study, we aim to investigate the problem of large-scale, large-vocabulary disease classification for radiologic images, which can be formulated as a multi-modal, multi-anatomy, multi-label, long-tailed classification problem. Our main contributions are threefold: (i) on dataset construction, we build up an academically accessible, large-scale diagnostic dataset that encompasses 5568 disorders linked with 930 unique ICD-10-CM codes, containing 39,026 cases (192,675 scans); (ii) on model design, we present a novel architecture that can process an arbitrary number of input scans from various imaging modalities and is trained with knowledge enhancement to leverage rich domain knowledge; (iii) on evaluation, we initialize a new benchmark for multi-modal multi-anatomy long-tailed diagnosis, on which our method shows superior results. Additionally, our final model serves as a pre-trained model and can be finetuned to benefit diagnosis on various external datasets. § INTRODUCTION In the ever-evolving landscape of clinical medicine, the advent of radiology techniques, such as X-ray, CT, MRI, and ultrasound, has truly revolutionized the medical field, offering a non-invasive yet deeply revealing perspective of the human body for disease diagnosis and management. These imaging techniques are now at the cusp of a new era with the integration of artificial intelligence (AI). Recent literature highlights the significant potential of developing diagnostic models in the medical field; these developments can generally be cast into two categories: one is the specialist model, which has already shown success in identifying and managing a wide array of diseases <cit.>. However, a notable limitation of these models is their specialization, as they often focus on a narrow range of disease categories, target limited anatomical regions, and are based on specific imaging modalities. This specialization restricts their ability to fully address the diverse and complex cases encountered in real-world clinical settings. On the other extreme, there is an emerging trend towards developing Generalist Medical Artificial Intelligence (GMAI) models <cit.>. Taking inspiration from the breakthroughs in natural language processing and computer vision, these models aim to amalgamate data from diverse sources, including various imaging methods, patient histories, and current medical research, to offer more comprehensive diagnostic and treatment solutions. Nonetheless, the development of GMAI models faces formidable challenges, such as the need for substantial computational power, meticulously curated multimodal datasets covering an extensive range of medical conditions and patient demographics, and advanced models equipped to tackle unique medical intricacies, like extremely unbalanced data distribution and the need for domain-specific expertise. In this paper, we consider the problem of large-scale, large-vocabulary disease classification for radiologic images, marking a transition phase between specialist and generalist models.
Specifically, compared to existing specialists,we aim to initiate the research for developing a computational model that can handle multi-modal, multi-anatomy, and multi-label disease diagnosis, in the face of extremely unbalanced distribution,embracing a wider scope of diagnoses across various anatomical regions and imaging modalities. In contrast to generalist models, our investigation offers a more feasible and targeted playground for exploring sophisticated algorithms in academic labs, offering opportunities for detailed error analysis, which is often impractical in the development of large-scale generalist models due to prohibitive computational costs.Overall, we make contributions from three aspects, namely, a large-scale open dataset and its construction pipeline, preliminary model architecture exploration, and an evaluation benchmark.On dataset construction, we build up an academically accessible, large-scale diagnostic dataset derived fromRadiopaedia <cit.>. Each sample is associated to the International Classification of Diseases, i.e., ICD-10-CM <cit.>, indicating the diagnostic category, for example, `S86' refers to `Injury of muscle, fascia and tendon at lower leg level'. The dataset naturally displays an unbalanced distribution, varying from 1 to 964 cases in each disease category. Additionally, each case within the dataset involves multiple multi-modal scans, with the number of modalities ranging from 1 to 5, the number of images per case spanning between 1 and 30. As a result, we construct a long-tailed, multi-scan medical disease classification dataset, as shown in Figure <ref>, with 39,026 cases (192,675 scans) across 7 human anatomy regions and 9 diverse modalities covering 930 ICD-10-CM codes, 5568 disorders[Disorders encompass a range of conditions such as abnormality, syndromes, injuries, poisonings, signs, symptoms, findings and diseases. Diseases are specific pathological conditions characterized by a set of identifiable signs and symptoms.], termed as Radiopaedia3D Diagnosis Dataset (RP3D-DiagDS). We will release all the data, complete with corresponding disorders, ICD-10-CM codes, and detailed definitions.On architecture design, we demonstrate a new model that supports both 2D and 3D input from various modalities, together with a transformer-based fusion module for comprehensive diagnosis. Specifically, for visual encoding, we explore two variants of backbones, namely, ResNet-based, and ViT-based to perform 2D or 3D unified encoding. Then we fuse the multi-scan information with a transformer-based fusion module, treating each image embedding as an input token.At training time,we adopt knowledge-enhanced training strategy <cit.>, specifically, we leverage the rich domain knowledge to pre-train a knowledge encoder with natural language and use it to guide the visual representation learning for disease diagnosis.On evaluation, we carry out a series of ablation studies on the effectiveness of different training configurations, for example, visual backbones (ViT or ResNet),augmentation implementation,and depth of 3D input volumes. Then, we evaluate the model on our proposed benchmark of multi-modal multi-scan long-tailed multi-label diagnosis and demonstrate the superiority of our proposed methods on it. Furthermore, our trained model showed strong transferring abilities. By fine-tuning, it can benefit numerous external datasets, regardless of their image dimensions, imaging modalities, and shooting anatomies. 
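To make the architecture sketched above more concrete — per-scan visual encoding followed by a transformer-based fusion module that treats each scan embedding as a token and outputs multi-label predictions — a minimal PyTorch sketch is given below. It only illustrates the overall dataflow under assumptions: the visual encoder is abstracted as a callable, the knowledge-enhancement component is omitted, and all hyper-parameters and names are illustrative rather than the configuration actually trained in this work.

```python
import torch
import torch.nn as nn

class MultiScanFusionClassifier(nn.Module):
    """Fuse a variable number of per-scan embeddings and predict multi-label disorders."""

    def __init__(self, scan_encoder, embed_dim=768, num_classes=930,
                 num_layers=4, num_heads=8):
        super().__init__()
        self.scan_encoder = scan_encoder        # maps a batch of scans (B, ...) -> (B, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, scans, pad_mask=None):
        # scans: (B, S, ...) with S scans per case; pad_mask: (B, S), True = padding
        B, S = scans.shape[:2]
        tokens = torch.stack(
            [self.scan_encoder(scans[:, s]) for s in range(S)], dim=1)   # (B, S, D)
        tokens = torch.cat([self.cls_token.expand(B, -1, -1), tokens], dim=1)
        if pad_mask is not None:
            cls_pad = torch.zeros(B, 1, dtype=torch.bool, device=pad_mask.device)
            pad_mask = torch.cat([cls_pad, pad_mask], dim=1)
        fused = self.fusion(tokens, src_key_padding_mask=pad_mask)
        return self.head(fused[:, 0])            # multi-label logits from the CLS token
```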
§ RELATED WORK

§.§ Disease Classification Datasets

Open-source datasets play a crucial role in the development of AI for medical image analysis. Unlike natural images, the curation of medical image datasets presents unique challenges due to factors such as privacy concerns and the requirement of specialized domain knowledge for annotation. In the literature, there have been a number of open-source datasets for disease diagnosis. Notably, large-scale datasets such as NIH ChestX-ray <cit.>, MIMIC-CXR <cit.>, and CheXpert <cit.> stand out for their comprehensive collection of annotated X-ray images, facilitating extensive research and advancements in automated disease classification. However, it is important to note three key limitations. First, these open-source diagnosis datasets mainly consist of chest X-rays <cit.> and thus of 2D images. There are only a few 3D datasets available, with a limited number of volumes <cit.>. Second, a significant portion of these large-scale datasets focuses solely on binary classification of specific diseases <cit.>. Third, existing large-scale datasets contain a broad range of disease categories at different granularities <cit.>; for example, in PadChest <cit.>, disorders exist at different levels, including infiltrates, interstitial patterns, and reticular interstitial patterns. Such variation in granularity poses a great challenge for representation learning. In summary, these datasets fall short of meeting real-world clinical needs, which often involve complex, multimodal, and multi-image data from a single patient. Therefore, constructing a dataset that mirrors the intricacies of actual clinical scenarios is necessary.

§.§ Specialist Diagnosis Models

The prevailing paradigm in earlier diagnostic models is a specialized model trained on a limited range of disease categories. These models focus on specific imaging modalities and are targeted towards particular anatomical regions. Specifically, ConvNets are widely used in medical image classification due to their outstanding performance; for example, <cit.> have demonstrated excellent results on identifying a wide range of diseases. Recently, the Vision Transformer (ViT) has garnered immense interest in the medical imaging community, and numerous innovative approaches <cit.> have emerged, leveraging ViTs as a foundation for further advancements in this field.

§.§ Generalist Medical Foundation Models

The other stream of work is generalist medical foundation models <cit.>. These models represent a paradigm shift in medical AI, aiming to create versatile, comprehensive AI systems capable of handling a wide range of tasks across different medical modalities by leveraging large-scale, diverse medical data. MedPaLM M <cit.> reaches performance competitive with or exceeding the state-of-the-art (SOTA) on various medical benchmarks, demonstrating the potential of generalist foundation models in disease diagnosis and beyond. RadFM <cit.> is the first medical foundation model capable of processing 3D multi-image inputs, demonstrating the versatility and adaptability of generalist models in processing complex imaging data. However, the development of GMAI models requires substantial computational power, which is often impractical for exploring sophisticated algorithms in academic labs.

§.§ Long-tailed Classification Strategy

Medical image diagnosis inherently faces long-tail challenges, as the prevalence of common diseases is usually substantially higher than that of rare diseases.
A straightforward solution to the imbalance problem is to perform re-sampling during training <cit.>, e.g., over-sampling the tail classes or under-sampling the head classes. However, this approach often triggers over-fitting in the tail classes, resulting in insufficient training for the head classes. Loss re-weighting is another widely used solution to tackle the long-tailed distribution problem <cit.>. Focal loss <cit.> refines cross-entropy by assigning lower weights to easily learnable data, thereby prioritizing challenging or misclassifiable instances. These approaches have primarily been explored on relatively small datasets with a limited number of categories. In this paper, our objective is to initiate the study of large-scale, long-tailed, multi-scan medical disease classification.

§ DATASET CONSTRUCTION

In this section, we present the details of our dataset, RP3D-DiagDS, which follows a construction procedure similar to that of RP3D <cit.>. Specifically, cases in our dataset are sourced from the Radiopaedia website <cit.> – a growing peer-reviewed educational radiology resource that allows clinicians to upload 3D volumes to better reflect real clinical scenarios. Additionally, all privacy issues have already been resolved by the clinicians at uploading time. It is worth noting that, unlike RP3D <cit.>, which contains paired free-form text descriptions and radiology scans for visual-language representation learning, here we focus on multi-modal, multi-anatomy, and multi-label disease diagnosis (classification) under an extremely unbalanced distribution. Overall, the proposed dataset contains 39,026 cases with 192,675 images from 9 diverse imaging modalities and 7 human anatomy regions; note that each case may contain multiple scans. The data covers 5,568 different disorders, which have been manually mapped to 930 ICD-10-CM <cit.> codes.

§.§ Data Collection

Here, we describe the procedure of our data curation process, shown in Figure <ref>. Specifically, we collect three main components from each case on the Radiopaedia webpage, namely, `Patient data', `Radiology images', and `Articles' (an example webpage is shown in Figure <ref> (a)). `Patient data' includes brief information about the patient, for example, age, gender, etc. `Radiology images' denotes a series of radiology examination scans. Figure <ref> (a) provides a statistical analysis of the number of images within one single case. `Articles' contains links to related articles named after the corresponding disorders, which are treated as diagnosis labels and have been meticulously peer-reviewed by experts of the Radiopaedia Editorial Board[<https://radiopaedia.org/editors>].

To start with, we can collect 50,970 cases linked with 10,670 articles from the Radiopaedia website. However, naively adopting the article titles as disorders may lead to ambiguities from three aspects: (i) not all articles are related to disorders; (ii) article titles can be written at different granularities, for example, “pneumonia” and “bacterial pneumonia” can be referred to as different disorders, though they should ideally be arranged in a hierarchical structure; (iii) normal cases from Radiopaedia are not balanced in modalities and anatomies. Next, we discuss a three-stage procedure to alleviate the above-mentioned challenges.

Article Filtering. We leverage GPT-4 <cit.> to automatically filter the article list, keeping only those that refer to disorders.
Specifically, taking inspiration from the self-consistency prompts <cit.>, we design two different query prompts with similar meanings, as shown in Figure <ref> (a). An article name is labeled as disorder if GPT-4 gives consistent positive results from both prompts, while for those GPT-4 gives inconsistent results, we do manually check them. To measure the quality for filtering, we randomly sample a portion of data for manual checking to control its quality. The confusion metrics are shown in Figure <ref> (c).The 100% precision score indicates our auto-filtering strategy,can strictly ensure the left ones to be disorders.Eventually, 5342 articles can pass the first auto-criterion and 226 pass the second manually checking, resulting in 5,568 disorder classes, as shown in Figure <ref> (b). Cases not linked to any articles are excluded, ultimately yielding 38,858 cases. Mapping disorders to ICD-10-CM. Upon getting disorder classes, they may fall into varying hierarchical granularity levels, here, we hope to map them to internationally recognized standards. For this purpose, we utilize the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes, which is a customized version of the ICD-10 used for coding diagnoses in the U.S. healthcare system. The ICD was originally designed as a health care classification system, providing a system of diagnostic codes for classifying diseases, including nuanced classifications of a wide variety of signs, symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or disease. Specifically, we hire ten medical PhD students to manually map the article titles into ICD-10-CM codes. After cross-checking by a ten-year clinician, we have mapped the 5,568 disorders into the corresponding ICD-10-CM codes. We unify various disorders into the first hierarchical level of ICD-10-CM code tree, resulting in 930 ICD-10-CM classes. For example, both “J12.0 Adenoviral pneumonia” and“J12 Viral pneumonia” are mapped to the code “J12 Viral pneumonia, not elsewhere classified”, dismissing the ambiguities caused by diagnosis granularity. Note that, we will provide the ICD-10-CM codes along with the original 5,568 classes, each accompanied by its corresponding definition, thus, the dataset can be employed for training diagnosis or visual-language models.Adding Normal Cases. Despite there are normal cases on Radiopaedia[<https://radiopaedia.org/articles/normal-imaging-examples>], covering most of anatomies and modalities, we additionally collect more normal cases from MISTR[<https://mistr.usask.ca/odin/>], with images available for research under a CC-BY-NC-SA 4.0 license.The expanded normal cases include 168 cases.The distribution of modalities and anatomical regions will be shown in the data statistics section later. Finally, we get 39,026 cases containing 192,675 images labeled by 5,568 disorder classes and 930 ICD-10-CM classes and will continually maintain the dataset, growing the case number. §.§ Dataset StatisticsIn this section, we provide detailed analysis on our proposed dataset,from three aspects, namely, modality coverage, anatomy coverage, and disease coverage. Analysis on Modality Coverage. RP3D-DiagDS comprises images from 9 modalities, namely,computed tomography (CT), magnetic resonance imaging (MRI),X-ray, Ultrasound, Fluoroscopy, Nuclear medicine, Mammography,DSA (angiography), and Barium Enema. 
Each case may include images from multiple modalities, to ensure precise and comprehensive diagnosis of disorders. Overall, approximately 19.4% of the cases comprise images from two modalities, while around 2.9% involve images from three to five modalities. The remaining cases are associated with image scans from a single modality. The distribution of modalities among all abnormal samples is illustrated in Figure <ref> (a). The modalities of normal cases follow similar distributions with the abnormal cases, as shown in Figure <ref> (b). Analysis on Anatomy Coverage. RP3D-DiagDS comprises images from various anatomical regions, including the head and neck, spine, chest, breast, abdomen and pelvis, upper limb, and lower limb, providing comprehensive coverage of the entire human body. The statistics are shown in Figure <ref> (c) and <ref> (d).Analysis on Disease Coverage. For both disorder and disease classification, each case can correspond to multiple disorders, resulting in RP3D-DiagDS a multi-label classification dataset. As shown in Figure <ref> (b), the distributions exhibit extremely long-tailed pattern, rendering such 2D & 3D image classification problem as a long-tailed multi-label classification task. We define the `head class' category with case counts greater than 100, the `body class' category with case counts between 30 and 100, and the `tail class' category with case counts less than 30.§ METHOD In this section, we aim to initiate a preliminary investigation on computational architectures for large-scale, long-tailed disease diagnosis on radiology, specifically, we start by defining the problem of case-level multi-label classification in Sec. <ref>, then we elaborate the architecture in Sec. <ref>, and knowledge-enhanced training strategy in Sec. <ref>.§.§ Problem FormulationWe consider one case with its disease labels, represented as 𝒳 = {x_1, …, x_S, y_1, …, y_c},where x_i denotes a specific scan (2D or 3D) under a certain radiologic examination, for example, MRI, CT or X-ray within 𝒳, S denotes the scan number for the case and c represents the total diagnostic labels. Note that, there may be multiple scans from the same or different modalities for each patient.To illustrate this concept, consider a case where a patient undergoes a physical examination,including a chest CT and a chest X-ray as shown in Figure <ref>. In this instance, S=2 reflects the two scans involved in this case. If the patient is diagnosed with nodule and pneumothorax, the labels for these diseases are set to 1, reflecting their presence in the patient's diagnosis.Our goal is to train a model that can solve the above multi-class, multi-scan diagnosis problem:𝒴 = Φ(𝒳) = Φ_cls(Φ_fuse(Φ_visual(x_1), Φ_visual(x_2), ⋯, Φ_visual(x_S))) ∈ℛ^c,the model is composed of a visual encoder Φ_visual, a fusion module Φ_fuse and a classifier Φ_cls.§.§ Architecture Our proposed architecture consists of two key components:(i) a visual encoder that processes 2D or 3D input scan; (ii) a transformer-based fusion module, merging all information to perform case-level diagnosis. §.§.§ Visual EncoderWe consider two popular variants of the visual encoder, namely, ResNet <cit.> and ViT <cit.>, as shown in Figure <ref> (a). The visual encoding progress can generally be formulated as:v_i = Φ_visual(x_i) ∈ℛ^d.where x_i ∈ℛ^C × H × W × (D) denotes the input scan, C, H, W refer to the image channel, height and width of the input scan respectively. D is optional, and only available for 3D input scans. 
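To make the case-level formulation 𝒴 = Φ_cls(Φ_fuse(Φ_visual(x_1), …, Φ_visual(x_S))) concrete, the following PyTorch-style sketch assembles a per-scan encoder, a transformer-based fusion module with a learnable `[cls]' token (both described in the next subsections) and a linear classifier. The module choices, the dimensions and the omission of the modality embeddings are illustrative assumptions, not the exact released implementation.

```python
# Minimal sketch of the case-level pipeline: per-scan encoding, fusion, classification.
# All concrete choices (backbone, embed_dim, number of heads) are illustrative assumptions.
import torch
import torch.nn as nn

class CaseLevelDiagnosis(nn.Module):
    def __init__(self, visual_encoder: nn.Module, embed_dim: int = 768,
                 num_classes: int = 931, num_fusion_layers: int = 6):
        super().__init__()
        self.visual_encoder = visual_encoder          # Phi_visual: maps one scan (batch of 1) to a (1, d) embedding
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_fusion_layers)   # Phi_fuse
        self.classifier = nn.Linear(embed_dim, num_classes)                        # Phi_cls

    def forward(self, scans: list) -> torch.Tensor:
        # scans: list of S tensors, each one 2D or 3D scan of a single case (batch dimension 1)
        tokens = torch.stack([self.visual_encoder(x) for x in scans], dim=1)  # (1, S, d)
        tokens = torch.cat([self.cls_token, tokens], dim=1)                   # prepend the [cls] token
        fused = self.fusion(tokens)[:, 0]                                     # keep the [cls] output
        return self.classifier(fused)                                         # (1, c) class logits
```

In the multi-label setting of the paper, the returned logits would be passed through a sigmoid and trained with a binary cross-entropy loss over the c disorder or ICD-10-CM classes.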
The main challenge for architecture design comes from the requirement to process scans in both 2D and 3D formats. Here, we train separate normalisation modules to convert the 2D or 3D inputs into feature maps of same resolution, and further passed into shared encoding module.ResNet. For 3D scans, they are first fed into 3D ResNet,followed by average pooling on depth to aggregate the information of the extra dimension; while for 2D scans, they are fed to the 2D ResNet to perform the same down-sampling ratio on height and width as for 3D. After normalization, both 2D and 3D scans are transformed into feature maps with same resolution, f_i ∈ℛ^d_res× h × w, where d_res is the intermediate feature dimension and h,w denote the normalized size. Then, the feature maps are passed into a shared ResNet to get the final visual embedding.ViT.For 3D scans, we convert the input volume into a series of non-overlapped 3D cubes, and pass them into MLP projection layers, to get vector embeddings; while for 2D scans, the input scan is broken into 2D patches,and projected into vector embeddings with another set of MLP layers. As ViT enables to handle sequences of variable tokens, we can now pass the resulting vectors into a shared ViT to get the final visual embedding. For position embedding, we adopt two sets of learnable position embeddings, one for 2D input, the other for 3D, further indicating the network about the input dimenstion. §.§.§ Fusion ModuleFor case-level diagnosis, we propose to aggregate information from multiple scans with a trainable module. As shown in Figure <ref> (b), we adopt transformer encoders, specifically, we first initialize a set of learnable modality embeddings, denoted as { p_1, …, p_M}, where M denotes the total number of possible imaging modalities. Given certain visual embedding (v_i) from modality j, we first add the corresponding modality embedding (p_j) to it, indicating which radiologic modality it is from. Then we feed all visual embeddings from one case into the fusion module, and output the fused visual embedding v_fuse∈ℛ^d, from the “[cls]” token, similar to paper <cit.>, denoted as:v_fuse = Φ_fuse(Φ_visual(x_1), Φ_visual(x_2), ⋯, Φ_visual(x_S)) ∈ℛ^d.Till here, we have computed the case-level visual embedding, which can be passed into classifier for disease diagnosis. Here, we adopt a knowledge-enhanced training strategy <cit.>,that has shown to be superior in long-tailed recognition problem,as detailed in the following section.§.§ Knowledge-enhanced TrainingWith knowledge enhancement, we hope to leverage the rich domain knowledge in medicine to enhance the long-tailed classification. Our key insight is that the long-tailed diseases may fundamentally have some shared symptoms or radiologic pathologies with the head classes, which could be encoded in text format. In detail, we first pre-train a text encoder with medical knowledge,termed as knowledge encoder, where names of similar disorder or diseases are projected to similar embeddings, for example, `lung disease' is closer to `pneumonia' in the embedding space than`brain disease'. Then we freeze the text embeddings, and use them to guide the training of vision encoder, as shown in Figure <ref> (c). Knowledge Encoder Pre-training. We leverage several extra knowledge bases to pre-train a knowledge encoder, including Radiopaedia, ICD10-CM, and UMLS. 
Specifically, for each disorder term, we collect its definitions, radiologic features from Radiopaedia articles, synonyms, clinical information, and hierarchy structure from ICD10-CM, as well as definitions from UMLS. We aim to train the text encoder with the following considerations:* Synonyms. If two terms are identified as similar synonyms,we expect them to be close in text embedding space, like `Salmonella enteritis' and `Salmonella gastroenteritis'. * Hierarchy. In the context of hierarchical relationships, we expect that if a disease is a fine-grained classification of another disease, its embeddings should be closer than those of unrelated diseases. For example, `J93, Pneumothorax and air leak' should exhibit closer embeddings with `J93.0, Spontaneous tension pneumothorax' than with `J96.0, Acute respiratory failure'.* Descriptions.For terms associated with descriptions, radiologic features or clinical information, we expect their embeddings to be close in the text embedding space.For example, `Intestinal Neuroendocrine Tumor' and `A well-differentiated, low or intermediate grade tumor with neuroendocrine differentiation that arises from the small or large intestine'.We start from an off-the-shelf text encoder, namely, MedCPT-Query-Encoder <cit.>, and adopt contrastive learning for further finetuning. Given a target medical terminology name encoded by the text encoder, denoted as f_tar, its corresponding medical texts,i.e., synonyms, containing terminologies or related descriptions are treated as positive cases. Similarly, we encode them with the text encoder, denoted as f^+, and other non-related text embedding is treated as negative cases f^-. Note that, we keep positive and negative cases consistent in format, e.g., when the positive case is related to description sentence, the negative cases are always some non-related description sentences instead of the short name words. The final objective can be formulated as:ℒ_knowledge = -loge^f_tar^T · f^+/τ/∑_n=1^Ne^f_tar^T · f^-_n/τ,where τ refers to the temperature, and N denotes total sampled negative cases. By optimizing the contrastive loss we can further finetune the text encoder, resulting in a knowledge encoder, termed as Φ_k Knowledge-guided Classification. After training the knowledge encoder, we use it to encode the disorder/disease names into text embeddings, for example, denoting the names as {T_1, T_2, …, T_c}, where T_j is a disorder/disease name like “pneumonia” or “lung tumor”. We embed these free texts with the knowledge encoder as:t_j = Φ_k(T_j) ∈ℛ^d,where t_j denotes the text embedding.The resulting text embeddings are used for case-level diagnosis:p = v_fuse· t ∈ℛ^c,where p is the final result.We use classical binary cross entropy (BCE) loss as the final training objective, denoted as:ℒ = - ∑_i=1^c𝒴_i ·log(p_i) + (1 - 𝒴_i) ·log(1 - p_i).§ EXPERIMENTS In this section, we will introduce our evaluation settings.Specifically, we first establish a benchmark for case-level multi-modal, multi-scan, long-tailed disorder/disease diagnosis.Second, we treat RP3D-DiagDS as a large-scale dataset for pre-training, and evaluate its transferring ability to various existing datasets. §.§ Long-tailed Classification on RP3D-DiagDSWith the proposed dataset, we consider the problem as a multi-label classification task under a long-tailed distribution. In this section, we provide an overview of the training and evaluation protocols employed in our study. 
We first introduce the dataset split, followed by details of evaluation metrics for assessment.§.§.§ Dataset SplitWe randomly split our dataset into training (train and validation) and test sets in a (7:1):2 ratio. We treat the class with [100:] positive cases as head classes, [30,100) cases as medium classes, and [:30) cases as tail classes. Consequently, from a total of 38,858 cases, the training set comprises 27,201 cases, and the test set includes 7,772 cases. Following on the dataset construction procedure,we perform classification task on two class sets at different granularities: (i) 5568 disorders + normal; (ii) 930 ICD-10-CM codes + normal. On different sets, we will have different head/medium/tail class set following our definition.Note that, the normal class is always treated as one of the head classes.Next, we will only talk about the abnormal classes.Disorders.At the disorder level, there are 85 head classes, 470 medium classes, and 5014 tail classes. Notably, as the case number of some tailed classes is extremely small, some classes may only appear in training or test sets, but not both. This issue only happens to the tail classes. It is important to note that our knowledge encoder enables to get embedding for unseen diseases, ensuring that evaluation is not affected by such challenge. As a result, there are 5,305/3,399 classes in our training and testing split, respectively.ICD-10-CM.At ICD-10-CM level, disorders with similar meanings have been mapped to same codes, for example, `tuberculosis',`primary pulmonary tuberculosis' all correspond to `A15 respiratory tuberculosis'. This merge operation results in more cases in one class, consequently more head classes.Specifically, we have 165 head classes, 230 medium classes, and 536 tail classes. Similarly, some tail classes may only be in training or testing split, resulting in 902/759 classes for our final training and testing splits, respectively. It is important to point that while we ensure the inclusion of head and medium classes in both the training and test sets, some tail classes are inevitably exclusive from certain divisions, which is treated as our future work to collect more samples of such disorders. We report the results separately for head/medium/tail classes, while focusing primarily on the head classes.§.§.§ Evaluation MetricsIn this section, we describe the evaluation metrics in detail.Note that, the following metrics can all be calculated per class.For multi-class cases, we all use macro-average on classes to report the scores by default. For example, “AUC” for multi-class classification denotes the “Macro-averaged AUC on classes”. AUC. Area Under Curve <cit.> denotes the area under ROC (receiver operating characteristic) curve.This has been widely used in medical diagnosis, due to its clinical meanings and robustness in unbalanced distribution.AP.Average Precision (AP) is calculated as the weighted mean of precisions at each threshold. Specifically, for each class, we rank all samples according to the prediction score, then shift the threshold to a series of precision-recall points and draw the precision-recall (PR) curve, AP equals the area under the curve.This score measures whether the unhealthy samples are ranked higher than healthy ones.We report Mean Average Precision (mAP), which is the average of AP of each class.F1 and MCC.F1 score is the harmonic mean of the precision and recall.It is widely used in diagnosis tasks.MCC <cit.> denotes theMatthews Correlation Coefficient metric. 
It ranges from -1 to 1 and can be calculated as follows:
MCC = (TN×TP - FN×FP)/√((TP + FP)(TP+FN)(TN+FP)(TN+FN))
Both metrics require a specific decision threshold, which we choose by maximizing F1 on the validation set, following former papers <cit.>.

Recall@FPR. We also report the recall scores at different false positive rates (FPR), i.e., sample points from the class-wise ROC curves. Specifically, we report the [email protected], [email protected] and [email protected] scores, denoting the recall scores at FPRs of 0.01, 0.05 and 0.1, respectively.

§.§ Transfer Learning to External Datasets

In addition to evaluating on our own benchmark, we also consider transferring our final model to other external datasets, demonstrating its transfer ability under image distribution shift and label space change. In the following sections, we start by introducing the external datasets, which cover various medical imaging modalities and anatomies. Then we detail the fine-tuning settings.

§.§.§ External Datasets

We choose the external evaluation datasets according to the following principles:

* Imaging Modalities. We hope to cover most radiologic modalities in our external evaluation, e.g., 2D X-ray, 2D CT/MRI slices and 3D CT/MRI scans, demonstrating that our model can benefit all of them;

* Human Anatomies. We hope to cover many human anatomies, e.g., brain, head and neck, chest, spine, abdomen and limb, demonstrating that our model can benefit all of them;

* Label Space. We hope to cover two cases, i.e., seen and unseen classes. Seen classes refer to those labels that have appeared in RP3D-DiagDS. Conversely, unseen extra classes denote those not included.

As a result, we pick the following datasets and report AUC scores to compare with other works on these external evaluations.

* CXR14 <cit.> is a widely-used chest X-ray (2D) diagnosis dataset containing 112,120 frontal-view X-ray images of 30,805 unique patients (collected from 1992 to 2015) with 14 finding labels. We follow its official split and evaluate the SOTA <cit.> on this split.

* VinDr-Spine <cit.> is a spine X-ray (2D) diagnosis dataset comprising 10,469 images from 5,000 studies. We follow K-Diag <cit.> to use the 8 unique finding labels and the official split.

* VinDr-Mammo <cit.> is a mammography (2D) diagnosis dataset comprising 20,000 images (5,000 four-view scans). Each scan was manually annotated with a 5-level BI-RADS score. We view this as a multi-class classification task with the official split.

* ADNI <cit.> is a 3D brain MRI dataset focused on Alzheimer's disease comprising 112,141 images, including AD, MCI, CN and other classifications. We randomly split it into 63,846/15,962 images for training and testing and reproduce the state-of-the-art method on it.

* MosMedData <cit.> is a 3D chest CT dataset on 5-level COVID-19 grading comprising 1,110 images. We randomly split it into 888/222 images for training and testing and reproduce the state-of-the-art method on it.

§.§.§ Fine-tuning Diagnosis

Our final model can serve as a pre-trained model and be fine-tuned on each downstream task to improve the final performance, demonstrating the merit of our dataset. Specifically, for datasets with single-image input, we simply adopt the pre-trained visual encoder module, i.e., discarding the fusion module, while for datasets with multi-image input, we use all pre-trained modules. In both cases, the final classification layer is trained from scratch.
In addition to using all available external training data, we also consider to use 1%, 10%, 30% portion data for few-shot learning.§.§ Implementation DetailsAt training time, we consider two diagnose tasks in different granularities:Disorder-level classification (5569 classes) and ICD10-level classification (931 Classes). The image input will all be resized to 512×512 × D in height, width and depth respectively. For 3D scans, the depth is treated as a factor for ablation study,and will be discussed in experiment section. In vision encoder, we adopt two separated modules for normalization and a shared module to compute the final embedding. The detailed architecture design will also be further discussed in our ablation study. In fusion module, we use a 6-layer transformer encoder with learnable `[cls]' token for final prediction. For augmentation, we adopt Gaussian Noise, Contrast Adjustment, Affine Variation,and Elastic Deformation, implemented from the MONAI <cit.> package.For optimization, we utilize the AdamW optimizer with a cosine learning rate curve, setting the maximum learning rate to lr=1× e^-5, with an adjustable batch size ranging from 4 to 32 depending on the input image depth and model scale to avoid out-of-memory error. The total training duration spans 100 epochs, with the initial 5 epochs for warm-up.At fine-tuning stage, we adopt the similar model architecture and optimization setting.§ RESULTS We conduct an ablation study to identify the optimal architecture and hyper-parameters for our model. Then, we present the evaluation results, focusing on the ICD-10-CM classification and disorders classification.Lastly, we use our dataset for large-scale pre-training, and finetune it on various external datasets, to demonstrate the model's transferring ability. §.§ Ablation StudyTo explore the optimal model architecture and parameter configurations, including the fusion strategy, visual encoder architecture, the depth of 3D scans, and augmentation, we conduct a series of ablation studies on a subset of original dataset, comprising 200 disorder categories with most cases, termed as SubSet@200. In the default experiment setting,we use 16 as the 3D scan depth, and ResNet as visual backbone, without knowledge enhancement and augmentation strategy. While conducting ablation study on certain component, we keep other setting unchanged.§.§.§ Visual Encoder Architecture In this section, we investigate different backbone architectures for diagnosis, specifically, we compare the ResNet-based and the ViT-based models. As shown in Table <ref>, “Seperated” denotes the separated visual encoder to perform 2D and 3D normalization and “Shared” denotes the shared encoder part for both 2D and 3D scans. We make two observations, (i) ResNet-based architecture performs better than ViT-based ones, (ii) increasing the capacity of shared encoder is more beneficial, e.g., ResNet-34+ResNet-18 vs. ResNet-18+ResNet-34. (iii) Deeper ResNet structure has very limited improvement, e.g., ResNet-34+ResNet-34 vs. ResNet-18+ResNet-34.§.§.§ Image Dimension Here, we aim to investigate the effect of input resolution, i.e., increasing the depth of input volumes.Due to the constraints of GPU physical memory, we experiment with 16, 24, and 32 as the depths for 3D input volume. We employ trilinear interpolation to resample 3D scans to the same size. Table <ref> illustrates the performance by varying the depths of 3D images. 
It can be observed that an increase in depth brings a clear performance gain, suggesting that detailed depth information is critical for performing diagnosis. Consequently, with more slices, the model tends to yield more favorable results.

§.§.§ Augmentation

Here, we evaluate the effectiveness of augmentation on our dataset. Specifically, we adopt four augmentation strategies with 15% probability each, namely, Gaussian Noise, Contrast Adjustment, Affine Variation, and Elastic Deformation. As a result, about half of the training samples receive at least one augmentation in each training batch. As shown in Table <ref>, adopting data augmentation yields a notable performance improvement. This improvement is particularly evident on AUC and AP when employing the ViT as the visual backbone, although the ResNet-based model still performs better.

§.§.§ Summary

In conclusion, we find that the ResNet-based model with the augmentation strategy and a unified 3D scan depth of 32 is the most suitable setting for our long-tailed case-level multi-modal diagnosis task; it will be used for training on the entire dataset in the following sections.

§.§ Evaluation on RP3D-DiagDS

In this section, we train our model on the entire RP3D-DiagDS training set and evaluate it on the test split, as shown in Table <ref>; the AUC curve comparison is shown in Figure <ref>. Specifically, we carry out the experiments at two levels, i.e., disorder and ICD-10-CM classes. We denote the fusion module as FM for short and knowledge enhancement as KE. In cases without FM or KE, we adopt max pooling on the predictions of the different images from the same case (see the supplementary material for more details), serving as a baseline. Then, we add the fusion module and knowledge enhancement step by step to improve the model's performance.

Discussion on Fusion Module. To demonstrate the effectiveness of the fusion module, we start with a baseline without FM or KE. As shown in Table <ref>, adding the fusion module greatly improves the results on Head, Medium and Tail classes at both the disorder and ICD-10-CM levels, showing the critical role of case-level information fusion in the diagnosis task. These results align well with our expectations. In clinical practice, the examination of one modality is often insufficient for a diagnosis. A thorough and meticulous diagnostic process typically involves an integrated review of all test results. Each test is weighted differently, depending on how its results correspond with other tests. Our fusion model adeptly mirrors this comprehensive approach, demonstrating its effectiveness in simulating the nuanced process of clinical diagnosis.

Discussion on Knowledge Enhancement. As shown in Table <ref>, incorporating knowledge enhancement further promotes the final diagnosis performance on both disorder and ICD-10-CM classes, validating the assumption that a better text embedding space trained with rich domain knowledge can greatly help visual representation learning.

Discussion on ROC Curves. The solid lines in Figure <ref> represent the median AUC value. This value is derived from a process where, for each category, 1000 random samples are taken and the median AUC value is calculated. This procedure is repeated multiple times (1000), with the final curve representing the median of these values. The ROC curve is illustrated by these solid lines. Accompanying each solid line is a shaded area, which denotes the 95% confidence interval (CI).
For each type of implementation, the portion of the shaded area lying below the solid line is significantly larger than the portion above it. This implies that the AUC value represented by the solid line is generally higher than the class-wise average across different data splits. As shown in the figure, the highest AUC value reaches 0.96 and the lowest is 0.86; this pattern suggests that there exist a few categories that are more challenging to learn than others.

The AUC Score Comparison on Various External Datasets. For each dataset, we carry out experiments with different training data portions, denoted as 1% to 100% in the table. For example, 30% represents that we use 30% of the data in the downstream training set for finetuning our model or for training from scratch. “SOTA” denotes the best performance of former works (with the corresponding reference) on each dataset. We mark the gap between ours and training from scratch after the up-arrows (↑) in the table.

Dataset      | 1% Scratch / Ours      | 10% Scratch / Ours     | 30% Scratch / Ours     | 100% Scratch / Ours    | SOTA
VinDr-Mammo  | 57.05 / 58.44 (↑1.39)  | 58.22 / 59.21 (↑0.99)  | 62.10 / 63.11 (↑1.01)  | 76.25 / 78.53 (↑2.28)  | 77.50* <cit.>
CXR14        | 76.85 / 79.08 (↑2.23)  | 77.93 / 81.72 (↑3.79)  | 78.52 / 82.39 (↑3.87)  | 79.12 / 83.38 (↑4.26)  | 82.50* <cit.>
VinDr-Spine  | 79.35 / 81.58 (↑2.23)  | 85.02 / 86.64 (↑1.62)  | 86.90 / 87.21 (↑0.31)  | 87.35 / 87.73 (↑0.38)  | 88.90* <cit.>
MosMedData   | 52.63 / 61.33 (↑8.70)  | 60.72 / 63.36 (↑2.64)  | 64.23 / 69.57 (↑5.34)  | 71.24 / 75.39 (↑4.15)  | 68.47 <cit.>
ADNI         | 55.39 / 59.41 (↑4.02)  | 60.32 / 64.19 (↑3.87)  | 63.26 / 65.77 (↑2.51)  | 82.40 / 84.21 (↑1.81)  | 79.34 <cit.>
* The numbers are borrowed from the referred papers. We strictly align with them in the train and test split.

§.§ External Evaluation

In this section, we explore the transfer ability of our final model, where we finetune the model on different downstream datasets across various imaging modalities, anatomies and target classes. As shown in Table <ref>, we observe significant performance improvements on various datasets and data portions compared to models trained from scratch. Additionally, in most cases, our model also surpasses former SOTAs significantly without any task-specific designs, e.g., architectures or loss functions, demonstrating that RP3D-DiagDS can also serve as a superior large-scale supervised pre-training dataset for the medical domain.
§ LIMITATION

Despite the effectiveness of our proposed dataset and architecture, there remains room for improvement. First, on model design, in the fusion step we could use several image tokens to represent a scan rather than a single pooled vector, as the latter may lead to excessive loss of image information during fusion; the model size could also be further increased to investigate the effect of model capacity. Second, new loss functions should be explored to tackle such a large-scale, long-tailed disorder/disease classification task. Third, in the disorder-to-ICD-10-CM mapping process, the annotations are made at the class-name level, i.e., only disorder names are provided, so that some ambiguous classes cannot be mapped to strictly corresponding ICD-10-CM codes. Although we mark out these classes in our shared data files, the mapping could be more accurate if more case-level information were provided. We treat these as future work.

§ CONCLUSION

In this paper, we focus on solving the problem of multi-modal, multi-label, long-tailed case-level diagnosis. Specifically, we propose a new large-scale diagnosis dataset, namely RP3D-DiagDS, with 39,026 cases (192,675 scans) labeled with detailed disorders covering 930 ICD-10-CM codes. On model design, we propose one unified architecture that supports both 2D and 3D input, together with a fusion module to integrate information from multiple scans of one patient. Additionally, we adopt a knowledge-enhanced training strategy, leveraging rich medical domain knowledge to improve the radiologic diagnosis performance. Our final trained model also shows strong transfer ability to various external datasets regardless of their imaging modalities, scanned anatomies and target classes. We believe this work can serve as a transition phase between specialist and generalist models, offering a more feasible and targeted playground for exploring sophisticated algorithms in academic labs and providing a unique opportunity for detailed error analysis, which is often impractical in the development of large-scale generalist models due to prohibitive computational costs.

§ DATA AND CODE AVAILABILITY

Our dataset RP3D-DiagDS can be found at <https://huggingface.co/datasets/QiaoyuZheng/RP3D-DiagDS> and our code can be found at <https://github.com/qiaoyu-zheng/RP3D-Diag>.

§ SUPPLEMENTARY

In this part, we discuss the baseline implementation in detail. Since most existing architectures cannot perform case-level diagnosis, we propose three parameter-free methods to obtain a case-level prediction from a scan-level classifier:

* Random Picking. We randomly select one image scan per case. This is based on the belief that each scan in a case should encompass the essential information required for an accurate diagnosis.

* Max Pooling. We adopt max pooling on all the images in a case to get the final prediction. This is based on the principle that if there is an image within a case that can illustrate the presence of an abnormality, we should consider it an unhealthy case.

* Mean Pooling. We replace max pooling with mean pooling. This is rooted in the notion that relying solely on the diagnosis from a single image is insufficient, and our goal is to comprehensively leverage all the scan information in a case.

Subsequently, we present a comparative analysis of the outcomes obtained through these strategies, focusing on the two classification standards, ICD-10-CM and disorders, in order to pick out the most powerful baseline for comparison.
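A minimal sketch of these three parameter-free strategies is given below, assuming a scan-level classifier that outputs per-class probabilities for every image of a case; the function name and tensor shapes are ours.

```python
# Parameter-free case-level aggregation of scan-level predictions (illustrative sketch).
import torch

def aggregate_case(scan_probs: torch.Tensor, strategy: str = "max") -> torch.Tensor:
    """scan_probs: (S, c) per-scan class probabilities of one case; returns (c,) case-level scores."""
    if strategy == "random":        # Random Picking: keep one scan drawn uniformly at random
        idx = torch.randint(scan_probs.shape[0], (1,)).item()
        return scan_probs[idx]
    if strategy == "max":           # Max Pooling: any scan showing an abnormality flags the case
        return scan_probs.max(dim=0).values
    if strategy == "mean":          # Mean Pooling: average the evidence over all scans of the case
        return scan_probs.mean(dim=0)
    raise ValueError(f"unknown strategy: {strategy}")
```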
As shown in Table <ref> and Table <ref>, “Max Pooling” outperforms the other strategies in most cases. Thus, in the main body, we adopt max pooling as our baseline for comparison.
http://arxiv.org/abs/2312.16151v2
{ "authors": [ "Qiaoyu Zheng", "Weike Zhao", "Chaoyi Wu", "Xiaoman Zhang", "Ya Zhang", "Yanfeng Wang", "Weidi Xie" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226182048", "title": "Large-scale Long-tailed Disease Diagnosis on Radiology Images" }
On rainbow Turán Densities of Trees Seonghyuk ImDepartment of Mathematical Sciences, KAIST, South Korea Email:{seonghyuk, jaehoon.kim, hyunwoo.lee, hss21}@kaist.ac.kr Extremal Combinatorics and Probability Group (ECOPRO), Institute for Basic Science (IBS). Jaehoon Kim[1] Hyunwoo Lee[1] [2]Haesong Seo[1] January 14, 2024 ================================================================================================================================================================================================================================================================================We propose a new model for the coherent forecasting of both the implied volatility surfaces and the underlying asset returns. In the spirit of <cit.> who are interested in the dependence of volatility indices (e.g. the VIX) on the paths of the associated equity indices (e.g. the S&P 500), we first study how implied volatility can be predicted using the past trajectory of the underlying asset price. Our empirical study reveals that a large part of the movements of the at-the-money-forward implied volatility for up to two years maturities can be explained using the past returns and their squares. Moreover, we show that up to four years of the past evolution of the underlying price should be used for the prediction and that this feedback effect gets weaker when the maturity increases. Building on this new stylized fact, we fit to historical data a parsimonious version of the SSVI parameterization (, ) of the implied volatility surface relying on only four parameters and show that the two parameters ruling the at-the-money-forward implied volatility as a function of the maturity exhibit a path-dependent behavior with respect to the underlying asset price. Finally, we propose a model for the joint dynamics of the implied volatility surface and the underlying asset price. The latter is modelled using a variant of the path-dependent volatility model of Guyon and Lekeufack and the former is obtained by adding a feedback effect of the underlying asset price onto the two parameters ruling the at-the-money-forward implied volatility in the parsimonious SSVI parameterization and by specifying a hidden semi-Markov diffusion model for the residuals of these two parameters and the two other parameters. Thanks to this model, we are able to simulate highly realistic paths of implied volatility surfaces that are arbitrage-free.§ INTRODUCTIONOne of the many reasons of the success of the Black-Scholes model (, ) is the existence of a one-to-one correspondence between the price C(K,T) of an European call option with strike K and maturity T and the volatility σ of the geometric Brownian motion modelling the dynamics of the underlying asset price (S_t)_t≥0 provided that (S_0-Ke^-rT)^+ < C(K,T) < S_0 (r is the constant risk-free rate) which is guaranteed by absence of arbitrage opportunities. When this condition is satisfied, the unique parameter σ satisfying C_BS(K,T,σ)=C(K,T), where C_BS denotes the Black-Scholes call option price, is called the implied volatility of the call option. By the put-call parity, the implied volatility of the put option is equal to the one of the call option with same maturity and strike. 
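To make this definition concrete, the sketch below recovers the implied volatility by numerically inverting the Black-Scholes call price with Brent's method; it assumes a flat risk-free rate and no dividends, and is only an illustration of the one-to-one correspondence discussed above.

```python
# Implied volatility by numerical inversion of the Black-Scholes call price (sketch, no dividends).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(S0, K, T, r, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_volatility(C, S0, K, T, r, lo=1e-6, hi=5.0):
    # Well-defined as long as (S0 - K e^{-rT})^+ < C < S0, which excludes static arbitrage.
    return brentq(lambda sigma: bs_call_price(S0, K, T, r, sigma) - C, lo, hi)

# Example: recover a 20% volatility from its own Black-Scholes price.
price = bs_call_price(100.0, 105.0, 0.5, 0.01, 0.20)
print(implied_volatility(price, 100.0, 105.0, 0.5, 0.01))   # ~0.20
```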
Although the implied volatility does not add any new information with respect to the option price, it is commonly used to quote option prices on the markets mainly because it allows to easily compare the value of two options with different underlying assets while the option price heavily depends on the underlying asset price level, making the comparison more difficult. If the Black-Scholes model was an accurate description of financial markets, the implied volatility should be the same for all options on a given asset regardless of the maturity and the strike. The computation of the implied volatility from market option prices shows that the implied volatility actually depends on the maturity and the strike which invalidates the Black-Scholes model. The so-called implied volatility surface (IVS) (K,T)↦σ_BS(K,T) permits to fully describe the option prices on a given asset.It is also well-known that the level and the shape of the IVS varies with time. To be able to jointly model the time evolution of the IVS and the underlying asset price is key for applications covering asset allocation, risk management and hedging. First, such a model allows to backtest or study the P&L distribution of an investment stragegy involving options and the underlying asset. One can think for example of the strategy consisting in buying a stock and a put of strike K_1 and selling a put of strike K_2 with K_2<K_1 but with same maturity (this is called a put spread). This strategy protects the investor against a drop in the underlying asset price down to the K_2 threshold in exchange to a lower premium in comparison to just buying a put of strike K_1. By extension, the modelling of the IVS and the underlying asset price makes it possible to optimize an asset allocation strategy involving options. Another application relates to the design and the backtesting of hedging strategies for financial products (e.g. volatility swaps, options on the VIX, etc.) having a volatility risk which is measured by the Black-Scholes vega. To complete this non-exhaustive list, let us finally mention that an IVS-underlying model can also be useful in the insurance industry for: * computing the equity volatility distribution over a one-year horizon to estimate the capital requirement within Solvency II internal models and* assessing the time value of options and guarantees within insurance contracts and analyzing the underlying hedging strategies of long-term life insurance contracts embedding path-dependent options.§.§ Literature reviewInspired by the market models of <cit.> and <cit.> for the interest rates term structure, <cit.> and <cit.> independently proposed a modelling framework for the joint dynamics of the IVS and the underlying asset price where both are solutions of stochastic differential equations (SDEs) where the drift and volatility coefficients are only functions of the time, the maturity and the strike or moneyness. In particular, no-arbitrage conditions on the drift are derived to guarantee the absence of arbitrage opportunities under the risk-neutral probability. A similar approach is adopted by <cit.>. More empirical studies include the papers from <cit.> and <cit.>. The former applies a principal component analysis (PCA) to historical implied volatilities grouped in three maturity buckets and identifies two factors explaining 78% of the smiles variation while the latter applies a common PCA and identifies three factors explaining more than 98% of the variations. 
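In practice, such PCA-based factor analyses of the IVS amount to a few lines of code; the sketch below assumes a panel with one row per date and one column per (maturity, moneyness) grid point and is only meant to illustrate the type of decomposition used in these studies, not their exact procedure.

```python
# Sketch: principal component analysis of daily implied volatility observations.
# The data layout (one row per date, one column per grid point) is an assumption for the example.
import numpy as np
from sklearn.decomposition import PCA

def ivs_factors(iv_panel: np.ndarray, n_factors: int = 3):
    """iv_panel: (n_dates, n_gridpoints) matrix of implied volatilities or their daily variations."""
    pca = PCA(n_components=n_factors)          # PCA centers the data internally
    scores = pca.fit_transform(iv_panel)       # factor time series, shape (n_dates, n_factors)
    loadings = pca.components_                 # factor shapes over the surface, (n_factors, n_gridpoints)
    return scores, loadings, pca.explained_variance_ratio_
```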
To deal with the fact that the study of the dynamics of the IVS is a three-dimensional problem (time, maturity and strike), <cit.> use a Karhunen-Loève decomposition instead of a PCA. They show that the dynamics of IVSs can be well summarized by three orthogonal factors which can be interpreted as the level, the orientation (i.e. a positive shock of this factor increases the volatilities of out-of-the-money calls while decreasing those of out-of-the-money puts) and the convexity of the surface. The associated principal components exhibit persistence (i.e. autocorrelation) and mean reversion close to the one of an AR(1) process. Therefore, Cont and da Fonseca suggest to model each of the principal component as an Ornstein-Uhlenbeck process. <cit.> extend this model by specifying the dynamics of the underlying asset price which shares noise terms with the dynamics of the IVS allowing in particular to account for the correlation between the underlying price and the volatility surface level. A second extension is provided by <cit.> which allows to limit the number of scenarios with static arbitrages by resampling from a given set of IVSs scenarios using smaller weights for scenarios with arbitrages. Another way to address this modelling problem in the litterature is to resort to parametric or semi-parametric factors models, see e.g. <cit.>, <cit.> or <cit.>. More recently, machine learning techniques such as GANs or neural SDEs have also been used to generate realistic simulations of implied volatility surfaces, see e.g. <cit.>, <cit.> and <cit.>.In this paper, we develop a new joint model of the IVS and the underlying asset price. Instead of specifying the IVS as the solution of a given SDE or as a linear combination of several factors (whether parametric, semi-parametric or non-parametric), we propose to consider a parameterization of the IVS whose parameters evolution depends on the path of the underlying asset price. The chosen parameterization is the celebrated SSVI parameterization of <cit.> that is known to well reproduce observed IVSs and guarantees the absence of static arbitrage under mild conditions. This modelling paradigm consisting in making dynamic the parameters of a model fitting market data at some point in time is similar to the one of <cit.> who developed a very general mathematical framework for designing consistent dynamic market models. In <cit.>, the authors provide a practical implementation of this framework for IVSs allowing to simulate IVSs that are free of both static and dynamic arbitrage. Moreover, they use these simulations of IVSs to find the portfolio with smallest variance for a portfolio consisting of n options of same maturity but different strikes. In the same vein, <cit.> used a SVI model whose parameters are stochastic processes to model the dynamics of the entire IVS. A convolutional LSTM (Long Short-Term Memory) neural network is used to learn the joint dynamics of these parameters and the underlying forward price. There is one main difference between our approach and the ones of these papers and the literature in general. In our approach, we introduce an explicit modelling of the impact of the underlying asset price onto the level and the shape of the IVS in the spirit of <cit.> who focus on volatility indices and realized volatility (hence not on IVS). 
Indeed, in the above litterature, the dependence structure between the IVS and the underlying asset price is generally captured through simple assumptions such as a Gaussian copula, common noise terms or using the short-term implied volatility as a term in the underlying asset stochastic volatility dynamics. Moreover, we model the underlying price using the path-dependent volatility framework of <cit.> which exhibits high statistical consistency and captures multiple historical stylized facts (leverage effect, volatility clustering, weak and strong Zumbach effects). Before giving more details on our approach, we find useful to dedicate a section to Guyon and Lekeufack's main results. §.§ Guyon and Lekeufack's path-dependent volatility model<cit.> showed that the level of the volatility of major equity indices is essentially explained by the past variations of these equity indices, or in other words, they showed that volatility is mostly path-dependent. To be more specific, they consider two measures of the volatility: the value of an implied volatility index such as the VIX and an estimator of the realized volatility over one day using intraday observations of the equity index. We recall that an implied volatility index is a measure of the expected future variance of a given underlying index (for example the S&P 500 for the VIX) at a given horizon T. Mathematically, the expected future variance writes 𝔼[1/T∫_0^T σ_t^2 dt ] where σ is the instantaneous volatility of the underlying index and 𝔼 here denotes the expectation under the risk-neutral probability. The expected future variance can be estimated from the prices of traded calls and puts on the underlying index using the <cit.> formula. We refer for example to the documentation of the VIX (, ) or the VSTOXX (, ) for more details. Note that Guyon and Lekeufack only use short-term implied volatility indices (the horizon T is below 30 days) since they are interested in the modelling of the instantaneous volatility. Let us now introduce the model that they calibrate for both measures of volatility. Let (S_t)_t≥ 0 be the price process of an equity index and Volatility_t be one of the two above-mentioned measures of volatility. The Path-Dependent Volatility (PDV) model from the empirical study of <cit.> writes as follows:Volatility_t = β_0+β_1 R_1,t + β_2Σ_t.The features R_1,t and Σ_t are defined on a time grid (t_i)_i∈ℕ as follows: * R_1,t is a trend feature given by:R_1,t = ∑_t_i ≤ t K_1(t-t_i) r_t_iwhere r_t_i= (S_t_i-S_t_i-1)/S_t_i-1 and K_1:ℝ_+→ℝ_+ is a decreasing kernel weighting the past returns. This feature allows to capture the leverage effect, i.e. the fact that volatility tends to rise when prices fall.* Σ_t is an activity or volatility feature given by:Σ_t = √(∑_t_i ≤ t K_2(t-t_i) r_t_i^2)where K_2 is also a decreasing kernel. This feature allows to capture the volatility clustering phenomenon, i.e. the fact that periods of large volatility tend to be followed by periods of large volatility, and periods of small volatility tend to be followed by periods of small volatility. 
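A direct implementation of these two features from a daily return series could look as follows; the kernel used here is the time-shifted power law introduced in the next paragraph, the cut-off and normalization conventions follow the text, and the variable names are ours.

```python
# Sketch: trend feature R_1 and activity feature Sigma at the most recent date,
# using the time-shifted power-law (TSPL) kernel defined in the next paragraph.
import numpy as np

def tspl_weights(alpha, delta, cutoff_days, dt=1 / 252):
    lags = np.arange(cutoff_days) * dt              # lags t - t_i, most recent first
    w = 1.0 / (lags + delta) ** alpha
    return w / (w.sum() * dt)                       # normalization Z such that sum K * dt = 1

def pdv_features(returns, alpha1, delta1, alpha2, delta2, cutoff_days):
    """returns: 1D array of daily returns r_{t_i}, oldest first, with len(returns) >= cutoff_days."""
    k1 = tspl_weights(alpha1, delta1, cutoff_days)
    k2 = tspl_weights(alpha2, delta2, cutoff_days)
    window = returns[-cutoff_days:][::-1]           # most recent return first, to match the lags
    r1 = np.sum(k1 * window)                        # trend feature R_1
    sigma = np.sqrt(np.sum(k2 * window**2))         # activity feature Sigma
    return r1, sigma
```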
In order to capture both the short and long memory of volatility, they propose a time-shifted power law (TSPL) for the two kernels K_1 and K_2:K_j(τ) = Z_α_j,δ_j/(τ+δ_j)^α_j, j=1,2,with Z_α_j,δ_j the normalization constant such that ∑_t-C≤ t_i ≤ t K_j(t-t_i)Δ= 1 where Δ= 1/252 (business days frequency) and C is an hyperparameter (called the cut-off lag later in the paper) controlling at which point the sums in R_1 and Σ are truncated.In order to measure to which extent the two features of the PDV model allow to explain the variations of the volatility, they use the R^2 score whose formula is recalled below:R^2(y,ŷ) = 1-∑_i=1^n(y_i-ŷ_i)^2/∑_i=1^n(y_i-y̅_n)^2where y=(y_i)_1≤ i≤ n are the observed data, ŷ= (ŷ_i)_1≤ i≤ n are the predicted data and y̅_n=1/n∑_i=1^ny_i. When they calibrate the PDV model on implied volatility indices data, they obtain R^2 scores over tested indices that are above 87% on the train set (January 1, 2000 to December 31, 2018) and above 80% on the test set (January 1, 2019 to May 15, 2022), which shows that the PDV model explains a large part of the variability observed in the volatility dynamics. In Figure <ref>, we reproduce two graphs from their paper that indicate quite clearly the linear relationship between the two features and the VIX. When calibrated on realized volatility data, the performance of the PDV model is reduced: the R^2 score is about 70% on the train set and 60% on the test set. §.§ ContributionsThe first contribution of the present paper is an empirical study of the dependence of implied volatility on the past movements of the underlying asset price for options on the S&P 500 and options on the Euro Stoxx 50. This empirical study is inspired by the one of <cit.> but there are several differences. First and foremost, we work on implied volatility instead of implied volatility indices: the former represents the price of an option and is determined by supply and demand while the latter represent measures of the expected future variance (see previous section) and are determined as linear combinations of prices of calls and puts covering the liquid strikes and the two time-to-maturities that are the closest to 30 days. Both are therefore close only if we consider implied volatilities of 1-month maturity options. Since we consider maturities up to 24 months, our study can be seen as an extension of the one of <cit.>. Second, we analyze the influence of the cut-off lag of the kernel on the performance of the PDV model. Finally, we add a regularization term in the calibration of the model and study its impact. Our study also differs from the one of <cit.> because they focus mostly on the frequency with which call (resp. put) prices move in the same direction (resp. opposite direction) as the underlying asset price but they do not try to exhibit a functional relationship between the two and they do not use the past path of the underlying asset price.The second contribution is to propose a parsimonious version of the Surface Stochastic Volatility Inspired (SSVI) parameterization (, ) of the IVS which relies only on four parameters and provides a reasonable replication of the market IVSs for a wide range of dates. This parsimonious SSVI parameterization is free of static arbitrage provided that a simple inequality constraint involving two parameters is satisfied. Moreover, it is consistent with the well-known power-law decay of the at-the-money-forward (denoted by ATM in the sequel for the sake of simplicity) skew (see e.g. <cit.>). 
We also show that the two parameters governing the ATM implied volatility curve as a function of the maturity can be well explained by the past path of the underlying asset price.Our final contribution is to introduce a new model for the joint dynamics of the underlying asset price and the implied volatility surface that allows performing Monte Carlo simulations under the real-world probability. This model is obtained by specifying the time evolution of the four parameters of the parsimonious SSVI parameterization for the IVS and combining it with a variant of the PDV model of <cit.> for the underlying price. The dynamics of the two parameters governing the ATM implied volatility curve contains a functional dependence on the past path of the underlying price, which allows embedding in the model the feedback effect that we observe on historical data. Moreover, the residuals of these two parameters along with the two other parameters of the parsimonious SSVI parameterization are modelled using a hidden semi-Markov process. Together with the model specification, we also provide a calibration methodology for all the parameters that are involved in the dynamics. Ultimately, we show through sample paths and quantile envelopes that the IVSs simulated with our model are highly realistic.This paper is organized as follows: in Section <ref>, we start with the empirical study of the dependence of implied volatility on the past movements of the underlying asset price. Then, we present the SSVI parameterization and its parsimonious version as well as some calibration results in Section <ref>. Finally, Section <ref> is dedicated to the introduction of our new path-dependent SSVI model for simulating implied volatility surfaces and the underlying asset price. § EMPIRICAL STUDY OF THE JOINT DYNAMICS OF THE IMPLIED VOLATILITY AND ITS UNDERLYING INDEX §.§ Data sets We consider two data sets from Refinitiv[<www.lseg.com>] of daily implied volatility surfaces corresponding to options on the S&P 500 index and the Euro Stoxx 50 index respectively. These data sets start on March 8, 2012 and end on December 30, 2022. They contain the ATM implied volatilities for maturities ranging from 1 month to 24 months with a monthly timestep. For the same range of maturities, the data sets also contain the implied volatilities for Black-Scholes deltas in the following range: ±0.1, ±0.15, ±0.2, ±0.25, ±0.3, ±0.35, ±0.4, ±0.45 (positive deltas correspond to calls while negative deltas correspond to puts). As a reminder, the Black-Scholes delta corresponds to the sensitivity of the option price with respect to the price of the underlying asset. Its formula is recalled below:Δ^BS = ϵ𝒩(ϵ d_1)withd_1 = lnS_0/K+(r+σ^2/2)T/σ√(T)where 𝒩 is the cumulative normal distribution function, ϵ=1 for call options and -1 for put options, K is the strike, r the constant risk-free rate, T the maturity and σ the Black-Scholes implied volatility. In the sequel of this section, we only focus on the ATM implied volatilities but, in Sections <ref> and <ref>, the away-from-the-money implied volatilities will also be used. Note that in practice, options with the above maturities cannot be traded every day on the market. The mapping between the quotes of the options that are actually traded and the quotes in our database is at the discretion of the data provider.
For example, on the Chicago Board Options Exchange where calls and puts on the S&P 500 are traded, the following options can be traded at day t: * Weekly expiry options: options expiring every business day between t and t+28 business days. Note that before 2022, there were only Monday-, Wednesday- and Friday-expiring options. * End-of-Month options: options expiring the last business day of the month for up to twelve months after t. * Monthly expiry options: options expiring the third Friday of the month for a given range of future months up to 5 years after t. A similar decomposition can be found for options written on the Euro Stoxx 50 on the Eurex but with differences in the expiry dates (for example, there are only Weekly expiry options that expire on Fridays).Along with these two IVS data sets, we also have daily time series of the S&P 500 and Euro Stoxx 50 indices. The S&P 500 time series starts on January 2, 1980 while the Euro Stoxx 50 time series starts on December 31, 1986 and both end on December 30, 2022. Note that since we will use at most 12 years of past returns to predict the implied volatility, the full time series are not used in the following study. To measure the out-of-sample performance of the tested model, we split the two data sets into a train set and a test set: the train set spans the period from March 8, 2012 to December 31, 2020 and the test set spans the period from January 1, 2021 to December 30, 2022 so that approximately 80% of the data is used for the train set and 20% for the test set. In addition, we will also consider a blocked cross-validation in Section <ref>. The 1-month ATM implied volatility along with the underlying asset price are represented in Figure <ref> for both data sets.§.§ Calibration methodology The PDV model (<ref>) with the TSPL kernel relies on 7 parameters, namely (α_1,δ_1,α_2,δ_2) the parameters of the two TSPL kernels K_1 and K_2 (Equation (<ref>)) and (β_0,β_1,β_2), respectively the intercept, the sensitivity to the trend feature and the sensitivity to the volatility feature. These 7 parameters are calibrated specifically for each maturity using the following steps (which are identical to the ones implemented by <cit.> to which we refer for more details[See also the code provided with their paper: <https://github.com/Jordylek/VolatilityIsMostlyPathDependent>]): * We compute four exponentially weighted moving averages (EWMA) with respective spans of 10, 20, 120 and 250 days of the underlying index returns. Then, we run a ridge regression of the ATM implied volatility on the four EWMAs and we fit the TSPL kernel K_1 on the optimal linear combination of the exponential kernels which provides us with initial guesses for α_1 and δ_1. The use of a ridge regression instead of a lasso regression is justified by the fact that we do not need to maximize the number of zeros (i.e. minimize the number of exponential kernels) in view of the subsequent fit of a TSPL kernel. By running a ridge regression of the ATM implied variance on four EWMAs of the underlying index squared returns and fitting the TSPL kernel K_2, we obtain similarly initial guesses for α_2 and δ_2.* Initial guesses for β_0, β_1 and β_2 are then obtained using a linear regression of the ATM implied volatility on the features R_1 and Σ where α_1, δ_1, α_2 and δ_2 are fixed to the values estimated at step 1.
* Starting from these initial guesses, the 7 parameters are jointly calibrated by solving the following minimization problem using the function with the trust-region reflective algorithm from the Python package:[ min_(α_1,δ_1,α_2,δ_2,β_0,β_1,β_2) ∈ℝ^7 ∑_t∈𝒯_train (IV^mkt_t - β_0-β_1R_1,t-β_2Σ_t)^2; s.t. α_j, δ_j≥ 0 for j ∈{1,2}; R_1,t=∑_t-C≤ t_i ≤ tZ_α_1,δ_1/(t-t_i+δ_1)^α_1 r_t_i; Σ_t= √(∑_t-C≤ t_i ≤ tZ_α_2,δ_2/(t-t_i+δ_2)^α_2 r_t_i^2) ]where 𝒯_train is the set of dates in the train set, IV^mkt_t is the market ATM implied volatility observed at time t for some fixed maturity and C is a cut-off lag. §.§ Numerical results §.§.§ Performance of the PDV model We start by calibrating the PDV model (<ref>) using the methodology described in Section <ref>. Note that the computation of the features R_1 and Σ requires truncating the sums at some point parameterized by C. We use for the moment the previous 1,000 business days (i.e. C=1000), consistently with the choice of <cit.>, but we will discuss later the influence of this hyperparameter. The performance of the model is measured using the R^2 score (the definition is recalled in Equation (<ref>)) which allows assessing how much of the variance of the implied volatility is explained by the model. The results are presented in Figure <ref>. For the S&P 500, we obtain R^2 scores between 85% and 93% on the train set and between 62% and 77% on the test set. For the Euro Stoxx 50, we obtain R^2 scores between 85% and 90% on the train set, between 70% and 81% for the first 15 maturities on the test set and between 50% and 70% for the last maturities. These results indicate that a large part of the movements of the ATM implied volatility can be explained by the past movements of the underlying asset price. In this regard, they extend those of <cit.> to ATM implied volatility data. We also notice that the R^2 scores are overall decreasing with the option maturity: this is quite natural as we expect long-term options to be less sensitive to the variations of the underlying asset price than short-term options. This observation is also consistent with the results of <cit.> who noticed that "the longer an option’s remaining life, the more likely its price goes in the opposite direction with the underlying asset" suggesting that there is more exogeneity in the evolution of the prices of long-term options than in those of short-term options. The two following subsections deepen the analysis of Figure <ref>. §.§.§ Comment on the gap between the scores on the train and the test sets We observe a gap of approximately 22% for the S&P 500 and 19% for the Euro Stoxx 50 between the R^2 scores on the train set and the test set. Such gaps are usually symptomatic of overfitted models. However, if we keep only one feature to reduce the complexity of the model, be it the trend feature R_1 or the volatility feature Σ, the R^2 scores are lower (especially with the trend feature) and the gap between the train and the test sets widens as shown in Figure <ref> for the S&P 500 (similar results are obtained for the Euro Stoxx 50). Another way to deal with overfitting is to add a regularization term in the objective function. Such a technique is implemented in the following section but does not reduce the gap between the performance on the train and the test sets.
In light of these two arguments, we consider that the observed gap is not the result of an overfitted model but rather a consequence of the fact that the train set is small (only 8 years of data) and that the test set is of a peculiar nature. Indeed, the test set corresponds to the post-Covid-19 period which is characterized by a lot of uncertainty related to the Russia-Ukraine war, inflation, the rise of interest rates, etc. which may have affected the extent to which the volatility reacts to the underlying index movements. For the S&P 500, the difference between the evolution on the train set and the test set is very clear: apart from the crash of March 2020, the S&P 500 has experienced a constant increase with very little variation on the train set while the test set is characterized by a bull market followed by a bear market with high volatility. Note that the difference between the periods is however less clear for the Euro Stoxx 50. This claim is supported by Figure <ref> representing the S&P 500 and the Euro Stoxx 50 1-month implied volatilities against the two features R_1 and Σ on the test set. The shape of the cloud of data points (green dots) indicates a linear relationship with respect to the trend and volatility features which argues in favor of the validity of the PDV model. However, the majority of these data points are above the plane fitted on the train set (especially for the S&P 500 which is consistent with the larger gap between the R^2 score on the train and the test sets) which indicates that the implied volatility has reacted more strongly to the underlying index movements in the post-Covid-19 period. To quantify this observation, we compute the ratio D of the signed distances to the absolute distances between the observed implied volatilities and the predicted implied volatilities:D = ∑_t=1^T IV_t^mkt -β_0-β_1R_1,t-β_2Σ_t /∑_t=1^T | IV_t^mkt -β_0-β_1R_1,t-β_2Σ_t |.We obtain a ratio of 18.4% for the S&P 500 and 4.1% for the Euro Stoxx 50 on the test set which is consistent with the observation that more observed implied volatilities are above the predicted value than below. §.§.§ Comparison with the scores of <cit.> In Figure <ref>, we also represent (with green and red crosses) the R^2 scores obtained when calibrating the PDV model on the VIX and the VSTOXX (which are the volatility indices of the S&P 500 and the Euro Stoxx 50 respectively) using the same historical time period (from March 8, 2012 to December 30, 2022). They allow a consistent comparison between our scores and those of <cit.>. Note that the scores are represented at the same abscissa as the scores obtained on the 1-month implied volatilities. This choice is motivated by the fact that both the VIX and the VSTOXX are measures of the 30-day expected variance of their respective underlying index.
We observe that the R^2 scores on the test set for the volatility indices are:* below the scores for the 1-month implied volatility and * below the scores reported by <cit.> for the same indices (the difference being only the train and test periods: their train and test sets respectively span the periods from January 1, 2000 to December 31, 2018 and from January 1, 2019 to May 15, 2022 while our train and test sets respectively span the periods from March 8, 2012 to December 31, 2020 and from January 1, 2021 to December 30, 2022).The first observation can be explained by the fact that the 1-month ATM options can be traded on the market while the VIX and the VSTOXX are calculated as a non-linear combination of prices of calls and puts covering the liquid strikes and the two time-to-maturities that are the closest to 30 days (see <cit.> and <cit.> for more details). Thus, the effect of the underlying asset movements is intuitively less direct on the VIX and the VSTOXX than on the 1-month implied volatility. The second observation can be understood in the light of the arguments that have been put forward to explain the gap between the R^2 scores on the train and the test sets.§.§.§ Influence of the cut-off lag We mentioned at the beginning of Section <ref> that we truncated the sums in the expressions of R_1 and Σ after the previous C=1,000 business days. In the following, we study the impact of the hyperparameter C, which we call the cut-off lag. Remark that if the cut-off lag is too small, there is a risk of losing some information from the past, while if the cut-off lag is too big, there is a risk of capturing some information that is actually not relevant to predict the implied volatility. First, let us point out that there is a priori no reason to use the same cut-off lag for R_1 and Σ. Therefore, we consider two different cut-off lag hyperparameters C_R_1 and C_Σ. In order to measure their influence on the performance of the model, we run a 10-fold cross-validation. Before describing this procedure in more detail, we introduce a third hyperparameter λ that penalizes large values of the kernel parameters α_1, δ_1, α_2 and δ_2 during the calibration. More specifically, we add an L^2 penalization term in the objective function so that Equation (<ref>) becomes:[ min_(α_1,δ_1,α_2,δ_2,β_0,β_1,β_2) ∈ℝ^7 ∑_t∈𝒯_train (IV^mkt_t - β_0-β_1R_1,t-β_2Σ_t)^2+λ(∑_j=1^2 α_j^2 + ∑_j=1^2 δ_j^2 ); s.t. α_j, δ_j≥ 0 for j ∈{1,2};]where R_1,t = ∑_t-C_R_1≤ t_i ≤ tZ_α_1,δ_1/(t-t_i+δ_1)^α_1 r_t_i and Σ_t = √(∑_t-C_Σ≤ t_i ≤ tZ_α_2,δ_2/(t-t_i+δ_2)^α_2 r_t_i^2). The introduction of this penalization is motivated by the fact that we want to avoid overfitting as mentioned in Section <ref>. Note that this modified objective function is minimized using the function with the L-BFGS-B algorithm from the Python package. Let us now describe the 10-fold cross-validation. For each maturity, the train set is split into 10 adjacent folds of the same size (222 days each) and for each triplet (C_R_1,C_Σ,λ)∈{5,10,25,50,100,250,500,1000,1500,2000,2500,3000}^2×{10^-6,10^-5,…,10^-1} and for all i∈{1,…,10}, we calibrate on all folds except fold i and we compute the R^2 score on fold i. This procedure corresponds to the so-called blocked cross-validation (see e.g. <cit.>). Then, we average the R^2 scores over the 10 folds so that we obtain one score per triplet (C_R_1,C_Σ,λ). In Table <ref>, we present the triplet (C_R_1,C_Σ,λ) leading to the best average R^2 score for each maturity.
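As an illustration, the sketch below implements the penalized objective above and minimizes it under the bound constraints α_j, δ_j ≥ 0. The use of scipy.optimize.minimize is an assumption on our part (the text only names the L-BFGS-B algorithm), and the data, initial guesses and hyperparameter values are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def tspl_weights(alpha, delta, cutoff, dt=1.0 / 252.0):
    """TSPL kernel on `cutoff` past business-day lags, normalized so that
    sum(K * dt) = 1 (the constant Z_{alpha,delta})."""
    tau = np.arange(1, cutoff + 1) * dt
    k = 1.0 / (tau + delta) ** alpha
    return k / (k.sum() * dt)

def pdv_prediction(theta, returns, c_r1, c_sigma):
    """beta_0 + beta_1 R_1,t + beta_2 Sigma_t on every date of the sample."""
    a1, d1, a2, d2, b0, b1, b2 = theta
    k1 = tspl_weights(a1, d1, c_r1)
    k2 = tspl_weights(a2, d2, c_sigma)
    pred = np.empty(len(returns))
    for t in range(len(returns)):
        p1 = returns[max(0, t - c_r1 + 1): t + 1][::-1]      # most recent first
        p2 = returns[max(0, t - c_sigma + 1): t + 1][::-1]
        r1 = np.sum(k1[: len(p1)] * p1)
        sig = np.sqrt(np.sum(k2[: len(p2)] * p2 ** 2))
        pred[t] = b0 + b1 * r1 + b2 * sig
    return pred

def penalized_objective(theta, returns, iv, c_r1, c_sigma, lam):
    a1, d1, a2, d2 = theta[:4]
    resid = iv - pdv_prediction(theta, returns, c_r1, c_sigma)
    return np.sum(resid ** 2) + lam * (a1 ** 2 + d1 ** 2 + a2 ** 2 + d2 ** 2)

# Illustrative data and hyperparameters (C_R1, C_Sigma, lambda).
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=600)
iv = 0.2 + rng.normal(0.0, 0.01, size=600)
x0 = np.array([1.5, 0.05, 1.2, 0.1, 0.05, -1.0, 1.0])
bounds = [(0, None)] * 4 + [(None, None)] * 3             # alpha_j, delta_j >= 0
res = minimize(penalized_objective, x0, args=(rets, iv, 100, 1000, 1e-4),
               method="L-BFGS-B", bounds=bounds)
```

The blocked cross-validation then simply repeats this minimization on the union of all folds except one and evaluates the R^2 score on the held-out fold, for every triplet of the grid.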
First, it is remarkable that the cut-off lags C_R_1 are all below 100 days except for the largest maturities. Second, the cut-off lags C_Σ are above 1,000 days for all maturities except the first one for the S&P 500 and the first two for the Euro Stoxx 50. Looking at the average R^2 scores for all tested triplets, we observed that the fitting quality is very sensitive to the cut-off lag C_Σ of the volatility feature while the two other hyperparameters C_R_1 and λ have a smaller influence (especially λ which can explain the wide range of values obtained for this hyperparameter in Table <ref>). Choosing too small a cut-off lag C_Σ can lead to very low R^2 scores, especially for the largest maturities. For example, for C_Σ=500, we obtain R^2 scores that are even below 0 for the Euro Stoxx 50, as illustrated in Figure <ref>. Actually, any value of C_Σ strictly below 1000 in the grid {5,10,25,50,100,250,500,1000,1500,2000,2500,3000} yields overall poor results such as those exhibited in Figure <ref>, and this regardless of the value of C_R_1. This indicates that the squared returns up to 1,000 business days in the past are paramount for predicting the implied volatility, particularly for the largest maturities.In order to verify that this conclusion is not an artefact of a bad model calibration, we present in Figure <ref> the correlation between the implied variance and the squared daily returns for all lags between 0 and 3,000 days both on the train and the test sets. Note that the estimated correlation ρ is presented along with a 95% confidence interval derived from the transformation z=artanh(ρ) introduced by <cit.>. Indeed, this transformation is approximately normally distributed when the samples come from a bivariate normal distribution. Although this is not the case here, we consider it as a reasonable proxy of the uncertainty around the correlation estimator. On the train set (blue curve), the graphs show a slow decrease of the correlation with the lag, with several spikes at some specific lags that become larger when the maturity increases. A first spike can be seen around 250 days, i.e. 1 year, especially for the Euro Stoxx 50. Then a smaller spike can be seen around 500 days, i.e. 2 years. A third spike appears around 750 days, i.e. 3 years, which is characterized by a slower decay than the previous spikes as it only fades around 1,250 days. This third spike is even higher than the previous spikes for large maturities. After these three spikes, we observe again smaller spikes around 1,750 days and 2,500 days and again a big spike around 2,750 days, that is, a spike almost every year. These observations are consistent with the sensitivity of the model to the cut-off lag C_Σ and the values obtained with the cross-validation that are presented in Table <ref>. Note that these observations have the advantage of not depending on any assumption. Thus, we can consider the long-range dependence of the implied volatility on the past squared returns as a stylized fact of our implied volatility data. An empirical study of a larger set of underlying assets could reveal whether this is a universal property of implied volatility data. To our knowledge, this stylized fact has never been reported in the literature. Let us however mention the work of <cit.> who calibrate a 3-factor model on implied volatility data and show a long-range dependence in the level and absolute returns of the factor loading series.
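The lagged correlations and their Fisher-transform confidence intervals described above can be computed as in the minimal sketch below (not the authors' code); the synthetic data at the end are only there to make the example self-contained.

```python
import numpy as np
from scipy import stats

def lagged_corr_with_ci(implied_var, sq_returns, lag, conf=0.95):
    """Correlation between the implied variance at t and the squared return at
    t - lag, with a confidence interval based on Fisher's z = artanh(rho)."""
    x = np.asarray(sq_returns, dtype=float)
    y = np.asarray(implied_var, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]                 # pair iv[t] with r^2[t - lag]
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    rho = float(np.corrcoef(x, y)[0, 1])
    z = np.arctanh(rho)
    se = 1.0 / np.sqrt(n - 3)                    # standard error of Fisher's z
    q = stats.norm.ppf(0.5 * (1 + conf))
    lo, hi = np.tanh(z - q * se), np.tanh(z + q * se)
    return rho, (float(lo), float(hi))

# Example on synthetic data; on market data one would loop over lag = 0, ..., 3000.
rng = np.random.default_rng(2)
r2 = rng.normal(0.0, 0.01, size=4000) ** 2
iv_var = 0.04 + 5.0 * np.convolve(r2, np.ones(250) / 250, mode="same")
rho, ci = lagged_corr_with_ci(iv_var, r2, lag=250)
```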
A possible explanation of this phenomenon is that options on widespread equity indices and with relatively large maturities are presumably traded by long-term investors such as asset managers, pension funds, sovereign funds, etc. who have a low rebalancing frequency of their portfolios and consequently, who base their investment decisions on the previous years' returns of the underlying asset rather than the previous days' returns. On the other hand, options with shorter maturities are presumably less traded by long-term investors so that the movements of the implied volatility are more influenced by short-term investors such as hedge funds who base their investment decisions on recent data. This is in line with Figures <ref> and <ref> as well as with the smaller values of C_Σ for the smallest maturities in Table <ref>. In Figure <ref>, we also present the correlations between the implied volatility and the daily returns for all lags between 0 and 3,000 days both on the train and the test sets. In this case, the correlations fade very quickly with the lag and we do not observe material spikes. This is consistent with the smaller values of C_R_1 in Table <ref>.So far, we have only described the correlations on the train set but as already discussed extensively, the test set is quite different and the correlations on this set (in orange in Figures <ref> and <ref>) are therefore also distinct from those calculated on the train set. In particular in Figure <ref>, we observe negative correlations at the 250-day lag which can be understood as a consequence of the fact that the implied volatility was decreasing in 2021 due to the leverage effect while one year earlier the squared returns were increasing with the Covid-19 crisis. Conversely, the implied volatility increased in 2022 again due to the leverage effect while one year earlier the squared returns were decreasing with the post-Covid-19 bull market. Given the particular profile of the correlation structure, it is natural to consider a variation of the PDV model that captures the spikes. We have considered two variations of the PDV model capturing the spike at the 3-year lag as it represents the largest spike. These two variations consist in adding a third feature which is the same as the volatility feature Σ except that the kernel weights specifically one period in the past. The two kernels that we have considered are the following:K_2'(τ) =(aτ-δ)^+exp(-λτ)andK_2”(τ) =exp(-(τ-μ)^2/σ^2)where a, δ, λ, μ and σ are non-negative parameters. The calibration of these alternative models only provides a small improvement of the R^2 scores on the train set and even a deterioration of the R^2 scores on the test set, likely due to the specificity of the test set mentioned earlier. As a consequence, these alternative models are disregarded in the sequel. §.§.§ Study of the calibrated parameters We conclude this empirical study by analyzing the calibrated parameters of the PDV model. In order to obtain comparable model parameters between maturities, we retain a single triplet (C_R_1,C_Σ,λ) for all maturities. This triplet is selected as follows. For each maturity, we compute the average R^2 score over the 10 folds of each triplet and then, we average these scores over all maturities.
Finally, for each triplet (C_R_1,C_Σ,λ), we average the obtained score with those of the triplets (C_R_1',C_Σ',λ') such that C'_Σ=C_Σ and either C_R_1' is the closest value above or below C_R_1 in the grid {5,10,25,50,100,250,500,1000,…,2500,3000} or λ' is the closest value above or below λ in the grid {10^-6,10^-5,…,10^-1}. For example, the score of the triplet (50,500,10^-3) is averaged with those of the triplets (25,500,10^-3), (100,500,10^-3), (50,500,10^-4) and (50,500,10^-2). The triplet that is chosen for all maturities is the one achieving the highest score through this procedure. This procedure aims at selecting a triplet whose performance is not too sensitive to a modification of C_R_1 or λ and is introduced because we observed that if we consider only the average score over all maturities, the performance of the obtained triplet was very sensitive to these two hyperparameters (unlike most triplets as underlined earlier) and was quite bad on the test set. This instability is probably due to the small size of the train set which is divided into 10 folds in the blocked cross-validation. With this procedure, we obtain (C_R_1,C_Σ,λ)=(100,1000,10^-4) for the S&P 500 and (C_R_1,C_Σ,λ)=(10,1000,10^-3) for the Euro Stoxx 50. In Figure <ref>, we show the R^2 scores obtained on the train and the test sets with these hyperparameters. We notice overall a small deterioration in comparison with Figure <ref> where (C_R_1,C_Σ,λ)=(1000,1000,0). This deterioration can be attributed to the fact that the hyperparameters are optimized on the train set only which differs in several ways from the test set.In Figures <ref> and <ref>, we plot the evolution of the calibrated parameters associated with the R^2 scores presented in Figure <ref> as a function of the maturity. Regarding the TSPL kernel parameters (α_1, δ_1, α_2, δ_2), we observe overall a decreasing trend except for δ_2 for which there is no clear trend. This decreasing trend for α_1 and α_2 indicates that far-away past returns explain more and more of the ATM implied volatility movements as the maturity increases. Note that, for the largest maturities, we obtain values of α that become even lower than 1 (except for α_1 for the S&P 500) which is the critical value below which the integral of the TSPL kernel diverges in continuous time. The decreasing behavior of δ_1 indicates that, as α_1 decreases, it still matters to keep a large weight for the more recent returns. Let us now end the study with the analysis of the parameters β_0, β_1 and β_2. First, we notice that we have β_1<0 and β_2>0 without imposing any constraint on these parameters. Therefore, a positive (resp. negative) trend in the underlying asset price tends to be followed by a decrease (resp. increase) of the implied volatility (which is consistent with the negative correlation observed by <cit.>) while the increase (resp. decrease) of the underlying asset volatility (measured by the squared returns) tends to be followed by an increase (resp. decrease) of the implied volatility. Moreover, the three parameters for the small maturities are of the same order of magnitude as those calibrated by <cit.>. Regarding the evolution of β_0, we obtain an overall increase with the maturity which reflects the fact that, on average, the level of ATM implied volatility increases with the maturity.
The parameter β_1, which can be interpreted as the influence of the trend feature on the implied volatility, is getting closer to 0 with the maturity, implying that the implied volatility for long maturities becomes less reactive to the trend in the returns of the underlying asset price. Finally, the parameter β_2, which can be interpreted as the influence of the volatility feature on the implied volatility, is mainly decreasing with the maturity, so it seems that the implied volatility for long maturities becomes less reactive to the volatility of the underlying index. The empirical study conducted in this section allowed us to exhibit the dependence of the ATM implied volatility on the past path of the underlying asset price for two major financial indices. We showed that this dependence decreases with the maturity but remains material even for the largest maturities. Moreover, the feedback effect of the underlying price onto the ATM implied volatility has a very long memory: up to 4 years of the past evolution of the underlying price should be used to predict the ATM implied volatility for the largest maturities. At this stage, it is natural to ask whether these conclusions still hold for away-from-the-money implied volatilities. Instead of reproducing the empirical study for each maturity and strike (which would significantly increase the dimension of the study), we study the performance of the PDV model in explaining the evolution of the calibrated parameters of the SSVI parameterization of <cit.>. The following section is dedicated to the presentation of this parameterization. § CALIBRATION OF MARKET IMPLIED VOLATILITIES WITH THE SSVI PARAMETERIZATION The purpose of this section is to introduce the SSVI parameterization and to present some calibration results of this model on the historical implied volatility data that we considered in Section <ref>. We start with some reminders about static arbitrage as the ability to generate arbitrage-free implied volatility surfaces (IVSs) is one of our motivations for considering the SSVI parameterization.§.§ Static arbitrages An IVS is free from static arbitrage if there is no arbitrage opportunity by static trading of call and put options with prices given by inserting their implied volatility in the Black-Scholes formula. The formal definition of absence of static arbitrage is provided below.[Absence of static arbitrage] Let us denote by C_BS(K,T,σ) the Black-Scholes price of a European call option of strike K and maturity T when the constant volatility is σ. An IVS (K,T)↦σ_BS(K,T) is free of static arbitrage if there exists a non-negative martingale, say (S_t)_t≥ 0, on some filtered probability space (Ω,ℱ,ℙ) such that C(K,T) := C_BS(K,T,σ_BS(K,T)) = 𝔼[e^-rT(S_T-K)^+] for all K,T ≥ 0 and where r is the risk-free interest rate. <cit.> provides sufficient conditions under which an IVS is free of static arbitrage. Consider the total implied variance defined by w(k,T) = σ_BS^2(k,T)T where σ_BS(k,T) is the Black-Scholes implied volatility associated with the log-strike[We recall that the log-strike k of a vanilla option of strike K and forward price F=S_0e^rT is defined as k = log(K/F)] k and the maturity T.
If w:ℝ×ℝ_+ →ℝ_+ satisfies the following conditions: * w(·,T) is of class C^2 for all T≥ 0,* w(k,T) >0 for all (k,T)∈ℝ×ℝ_+^*,* for each (k,T)∈ℝ×ℝ_+^*, (1-k∂_k w(k,T)/2w(k,T))^2-∂_k w(k,T)^2/4(1/w(k,T)+1/4)+∂^2_kkw(k,T)/2≥ 0, * w(k,·) is non-decreasing for each k∈ℝ,* -k/√(w(k,T))+√(w(k,T))/2 →-∞ as k→ +∞ for all T>0 and,* w(k,0)=0 for all k∈ℝ, then the total implied variance surface w is free of static arbitrage. An IVS is said to be free of butterfly arbitrage if conditions (iii) and (v) are satisfied and it is said to be free of calendar spread arbitrage if condition (iv) is satisfied. To our knowledge, this terminology is due to <cit.>.§.§ The SSVI parameterization Devised at Merrill Lynch in 1999 and publicly disseminated by <cit.>, the Stochastic Volatility Inspired (SVI) parameterization is a popular parameterization of the implied volatility smile. To be more precise, it is a parameterization of the total implied variance that we defined in Theorem <ref>. The standard formulation of the SVI parameterization is the so-called raw SVI parameterization and is presented below.[Raw SVI] For a given maturity T>0, the raw SVI parameterization writes:w(k,T) = a_T +b_T(ρ_T(k-m_T) +√((k-m_T)^2+σ_T^2))where a_T∈ℝ, b_T≥ 0, |ρ_T | < 1, m_T∈ℝ and σ_T>0. Moreover, the parameters must satisfy a_T+b_Tσ_T√(1-ρ^2_T)≥ 0 to ensure that the total implied variance remains positive for all k∈ℝ.The popularity of this parameterization is mainly due to its tractability and its ability to fit market implied volatilities quite well. Moreover, it features nice properties such as consistency with Lee's moment formula <cit.> or the fact that it corresponds exactly to the large-maturity limit of the Heston implied volatility smile <cit.>. In their paper, <cit.> proposed an extension of the SVI parameterization to address two issues of this parameterization. First, the SVI parameterization is not a parameterization of the full total implied variance surface but only of a slice k↦ w(k,T) for a fixed maturity T since the 5 parameters are all maturity-dependent. Second, at the time of the publication of their paper, it seemed impossible to find conditions on the SVI parameters that guarantee the absence of butterfly arbitrage (the problem has now been solved by <cit.>). The extension of the SVI parameterization that they propose to address these issues is called the surface SVI (SSVI) and is defined below. Let φ be a smooth function from ℝ_+^* to ℝ_+^* such that the limit lim_T→ 0θ_T φ(θ_T) exists in ℝ where θ_T:=σ_BS^2(0,T)T is the ATM total implied variance. The SSVI is the surface defined by:w(k,T) = θ_T/2(1+ρφ(θ_T)k+√((φ(θ_T)k+ρ)^2+(1-ρ^2))).By abuse of notation, we use the same notation σ_BS for the implied volatility as a function of the strike or the implied volatility as a function of the log-strike. While the SVI parameterization requires 5 parameters for each slice of the IVS, the SSVI relies on the function φ and the parameter ρ∈(-1,1), that do not depend on the maturity, as well as one parameter θ_T for each maturity that depends on the maturity but could be considered as set prior to the calibration since the ATM total implied variance for the traded maturities can be directly observed on the market. Note that <cit.> propose to consider a maturity-dependent ρ parameter in order to improve the calibration accuracy for very short maturities (typically below 1 month).
Since our database does not contain short-term data, this extension of the SSVI is not investigated in this paper.For a fixed T>0, the corresponding raw SVI parameterization is given by (a_T,b_T,ρ_T,m_T,σ_T)=(θ_T/2(1-ρ^2),θ_Tφ(θ_T)/2,ρ,-ρ/φ(θ_T),√(1-ρ^2)/φ(θ_T)).Gatheral and Jacquier provide sufficient conditions for the SSVI to be free of arbitrage. These conditions are presented in the following theorem. The SSVI is free of static arbitrage if the following conditions are satisfied: * ∂_T θ_T ≥ 0 for all T>0;* 0≤∂_θ(θφ(θ)) ≤1/ρ^2(1+√(1-ρ^2))φ(θ) for all θ >0;* θφ(θ)(1+|ρ|)< 4 for all θ >0;* θφ(θ)^2(1+|ρ|) ≤ 4 for all θ >0. The conditions (i) and (ii) actually are necessary and sufficient conditions for the absence of calendar spread arbitrage for the SSVI. The condition (iii) with a non-strict inequality is a necessary condition for the absence of butterfly arbitrage but condition (iv) is only a necessary condition if θφ(θ)(1+|ρ|) = 4.The conditions (ii), (iii) and (iv) can be weakened as follows:* 0≤∂_θ(θφ(θ))|_θ=θ_T≤1/ρ^2(1+√(1-ρ^2))φ(θ_T) for all T >0;* θ_T φ(θ_T)(1+|ρ|)< 4 for all T >0;* θ_Tφ(θ_T)^2(1+|ρ|) ≤ 4 for all T >0.Note that these conditions are not necessarily equivalent to the ones in Theorem <ref> since T↦θ_T is not necessarily a bijection from ℝ_+^* to ℝ_+^*. A natural question at this stage is how to choose the function φ in order to both achieve a good fit to market data and satisfy the above conditions. The authors propose three examples of parametric form for φ: * the Heston-like parameterization φ(θ):=1/λθ(1-1-e^-λθ/λθ) with λ >0,* the power-law parameterization φ(θ):= η/θ^γ with η>0 and 0< γ < 1, and * the modified power-law parameterization φ(θ):= η/θ^γ(1+θ)^1-γ with η>0 and 0<γ < 1. For simplicity of reference to these parameterizations, we abbreviate the SSVI with the Heston-like parameterization to SSVI-HL, the SSVI with the power-law parameterization to SSVI-PL and the SSVI with the modified power-law parameterization to SSVI-MPL. The following propositions translate the sufficient conditions of Theorem <ref> for these three parameterizations. Their proof can be found in Appendix <ref>. The SSVI-HL is free of static arbitrage if ∂_T θ_T ≥ 0 for all T>0 and λ≥ (1+|ρ|)/4.Assuming that ∂_T θ_T ≥ 0, we have the following cases: * If γ∈ (0,1/2), there exists θ_1^*,θ_2^*>0 such that the SSVI-PL is free of static arbitrage if θ_T < θ_1^* ∧θ_2^* for all T>0.* If γ∈ (1/2,1), there exists θ_1^*,θ_2^*>0 such that the SSVI-PL is free of static arbitrage if θ_2^* < θ_T < θ_1^* for all T>0.* If γ = 1/2 and η^2(1+|ρ|)≤ 4, there exists θ_1^* such that the SSVI-PL is free of static arbitrage if θ_T < θ_1^* for all T>0. Assuming that ∂_T θ_T ≥ 0, we have the following cases: * If γ∈ (0,1/2) and η (1+|ρ|) ≤ 4, there exists θ^*>0 such that the SSVI-MPL is free of static arbitrage if, for all T>0, θ_T≥θ^*.* If γ∈ (1/2,1), the SSVI-MPL is free of static arbitrage for η(1+|ρ|) ≤ 4 and (1-2γ)φ(1-2γ)^2(1+|ρ|)≤ 4. * If γ=1/2, the SSVI-MPL is free of static arbitrage for η^2 (1+|ρ|) ≤ 4.Note that different sufficient conditions could be found for guaranteeing the absence of static arbitrage by restricting the maturity T to some subset of ℝ_+^*. Since it is not very satisfying to have an IVS parameterization that could be arbitrable for some maturities, we looked as much as possible for conditions that do not restrict the values of T. 
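To make the parameterization concrete, the sketch below evaluates the SSVI total implied variance for the three choices of φ introduced above, together with the sufficient no-arbitrage condition of the Heston-like case. The function names and the numerical values are illustrative only.

```python
import numpy as np

# Minimal sketch of the SSVI total implied variance and the three choices of phi.
def phi_heston_like(theta, lam):
    return (1.0 - (1.0 - np.exp(-lam * theta)) / (lam * theta)) / (lam * theta)

def phi_power_law(theta, eta, gamma):
    return eta / theta ** gamma

def phi_modified_power_law(theta, eta, gamma):
    return eta / (theta ** gamma * (1.0 + theta) ** (1.0 - gamma))

def ssvi_total_variance(k, theta, rho, phi_val):
    """w(k, T) for one maturity, with theta the ATM total implied variance."""
    return 0.5 * theta * (1.0 + rho * phi_val * k
                          + np.sqrt((phi_val * k + rho) ** 2 + 1.0 - rho ** 2))

def heston_like_arbitrage_free(lam, rho):
    """Sufficient condition of the SSVI-HL proposition stated above."""
    return lam >= (1.0 + abs(rho)) / 4.0

k = np.linspace(-0.4, 0.4, 9)              # log-strikes
theta, rho, eta, gamma = 0.04, -0.7, 0.8, 0.5
w = ssvi_total_variance(k, theta, rho, phi_modified_power_law(theta, eta, gamma))
iv = np.sqrt(w / 1.0)                      # implied vols for a maturity T = 1 year
```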
For γ =1/2, the SSVI-PL and the SSVI-MPL parameterizations induce a power-law decay of the ATM volatility skew for small maturities, which is a well-known stylized fact of implied volatility surfaces (see e.g. <cit.>). Indeed, recalling that the ATM volatility skew is defined by ∂_k σ_BS(0,T), it is straightforward to show that ∂_k σ_BS(0,T)=1/2√(T)ρ√(θ_T)φ(θ_T) for the SSVI parameterization (no matter the choice for φ). Therefore, we have that ∂_k σ_BS(0,T) = ρη/2√(T) for the SSVI-PL and ∂_k σ_BS(0,T) = ρη/2√(T)√(1+θ_T) = ρη/2√(T) + o(1/√(T)) for the SSVI-MPL assuming that lim_T→ 0θ_T = 0 which is a natural assumption: an ATM option with zero time to expiry has no value.§.§ Calibration results and introduction of the parsimonious SSVI model In this section, we show to what extent the SSVI parameterization can replicate the historical IVSs that we presented in Section <ref>. As already noted by <cit.>, IVSs from data providers are not necessarily arbitrage-free because of interpolations of actual market quotes. Procedures such as the ones of <cit.> or <cit.> allow detecting arbitrages in a finite set of prices of European call options given the forward prices. Our database does not contain short rates data or forward prices data so we cannot use these procedures. Since calendar spread arbitrages are equivalent to the total implied variance being non-decreasing in maturity, we remove the IVSs (we recall that our data sets contain one IVS per business day) such that there is at least one crossing between the linearly interpolated total implied variances of two adjacent maturities. IVSs with butterfly arbitrages (if any) are not removed as the condition in terms of total implied variance is much more complicated to verify (see condition (iii) of Theorem <ref>). In total, 6.4% (resp. 5.6%) of the IVSs are removed from the S&P 500 (resp. Euro Stoxx 50) data set. We start by comparing the three parametric forms of the function φ that we introduced in the previous section. For this purpose, we calibrate the SSVI for each day of our data sets without calendar spread arbitrages and for each parametric form of φ by solving the following minimization problem using the function with the SLSQP algorithm from the Python package:[ min_Θ=((θ_T_i)_i=1,…,M,ρ,Π_φ) ∑_i=1^M∑_k∈𝒦_T_iϕ(k) (σ_Mkt(k,T_i) - σ_SSVI(k,T_i;Θ))^2;s.t. θ_T_1 ≥ 0; θ_T_i+1 ≥ θ_T_i for 1≤ i ≤ M-1; (ρ,Π_φ) ∈ C_φ. ]We used the following notations: * Π_φ is the vector of the parameters in φ: only λ for the SSVI-HL and (η,γ) for the SSVI-PL and the SSVI-MPL. * T_1<T_2<… < T_M is the set of maturities and 𝒦_T is the set of log-strikes for the maturity T. * ϕ(k) is the standard normal density function evaluated at the log-strike k. This weighting function gives more weight to the replication of the implied volatilities that are close to the money. Another choice based on the Black-Scholes vega has been considered but we did not notice any improvement.* σ_Mkt is the market implied volatility. * σ_SSVI(·,·;Θ) is the SSVI implied volatility associated with the parameter vector Θ=((θ_T_i)_i=1,…,M,ρ,Π_φ).* C_φ is given by:–C_φ = { (ρ,λ) ∈ (-1,1)×ℝ_+^* |λ≥ (1+|ρ|)/4 } for the SSVI-HL,–C_φ = {(ρ,η,γ) ∈ (-1,1)×ℝ_+^*× (0,1) |η^2(1+|ρ|) ≤ 4 if γ = 1/2 } for the SSVI-PL and,–C_φ = {(ρ,η,γ)∈(-1,1)×ℝ_+^*× (0,1) |[ η (1+|ρ|) ≤ 4 if γ < 1/2; η (1+|ρ|) ≤ 4 and (1-2γ)φ(1-2γ)^2(1+|ρ|)≤ 4 if γ >1/2;η^2(1+|ρ|) ≤ 4 if γ = 1/2 ].}for the SSVI-MPL.
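A minimal sketch of this daily calibration for the SSVI-MPL is given below. The use of scipy.optimize.minimize is an assumption on our part (the text only names the SLSQP algorithm), the change of variable on the ATM total variances anticipates the one described just after, and only the γ = 1/2 constraint of C_φ is encoded for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def calibrate_ssvi_mpl(log_strikes, maturities, market_iv, x0):
    """One-day SSVI-MPL calibration sketch. market_iv[i] is the vector of implied
    vols quoted for maturity maturities[i] on the log-strike grid log_strikes[i].
    Parameter vector x: (theta_tilde_1, ..., theta_tilde_M, rho, eta, gamma)."""
    M = len(maturities)

    def unpack(x):
        thetas = np.cumsum(x[:M])            # increments theta_tilde -> levels theta
        return thetas, x[M], x[M + 1], x[M + 2]

    def objective(x):
        thetas, rho, eta, gamma = unpack(x)
        err = 0.0
        for i, T in enumerate(maturities):
            th = thetas[i]
            phi = eta / (th ** gamma * (1.0 + th) ** (1.0 - gamma))
            k = np.asarray(log_strikes[i])
            w = 0.5 * th * (1.0 + rho * phi * k
                            + np.sqrt((phi * k + rho) ** 2 + 1.0 - rho ** 2))
            err += np.sum(norm.pdf(k) * (np.asarray(market_iv[i]) - np.sqrt(w / T)) ** 2)
        return err

    bounds = [(0.0, None)] * M + [(-0.999, 0.999), (1e-6, None), (0.01, 0.99)]
    cons = [{"type": "ineq",                                  # eta^2 (1 + |rho|) <= 4
             "fun": lambda x: 4.0 - (x[M + 1] ** 2) * (1.0 + abs(x[M]))}]
    return minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
```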
The constraints in the above minimization problem do not guarantee the absence of static arbitrage for the power-law and the modified power-law parameterizations since only the constraints involving ρ, η and γ are included as well as the non-decreasing property of T↦θ_T. The reason for this choice is the fact that there is no closed-form expression for θ^*, θ_1^* and θ_2^* so the addition of the constraints involving these terms would strongly complicate the numerical optimization. By the end of this section, the study will be restricted to the SSVI-MPL parameterization with γ=1/2 (for which there is no constraint involving θ^*, θ_1^* or θ_2^*) so this choice has no impact.We use in practice the change of variable (θ̃_T_i)_i=1,…,M where θ̃_T_1=θ_T_1 and θ̃_T_i = θ_T_i-θ_T_i-1 for i ∈{2,…,M} since it allows transforming the second set of inequality constraints in the minimization problem (<ref>) into bound constraints: θ̃_T_i≥ 0 for 2≤ i ≤ M. The initial guess for each parameter is provided in Table <ref>. The average relative errors (without weighting) between the market implied volatilities and the SSVI implied volatilities for each day of our data sets are presented in Figure <ref>. It appears clearly that the Heston-like parameterization performs very poorly in comparison to the two other parameterizations. This results from the fact that, in the Heston-like parameterization, the function φ is bounded from above by 1/2 (see Appendix <ref>) which strongly constrains the shape of the IVS. The power-law and the modified power-law parameterizations achieve essentially the same accuracy which is very stable over time. In particular, we do not observe a decrease of the fitting quality during the Covid-19 crisis. In Figure <ref>, we illustrate how the SSVI-MPL fits the S&P 500 (resp. the Euro Stoxx 50) total implied variances for a day where the calibrated average relative error is equal to 1.19% (resp. 1.18%) corresponding to the mean of the average relative errors across the whole S&P 500 (resp. Euro Stoxx 50) data set. At this stage, let us recall that our final objective is to design a model to jointly simulate arbitrage-free IVSs and the price of the underlying asset by simulating the evolution of the SSVI parameters as a function of the path of the underlying asset price. However, the number of parameters involved is so large that a model for their joint evolution would be too complicated. Therefore, we need to make the SSVI model more parsimonious. We propose the two following simplifications: * We consider only the modified power-law parameterization with γ fixed to 1/2 because we observe that γ is close to this value over the two data sets (more than 84% of the calibrated γ's lie within the [0.4,0.6] interval). Moreover, this choice has the advantage of guaranteeing that the full surface (without any restriction on the ATM total implied variance) is free of static arbitrage on the sole condition that η^2(1+|ρ|) < 4. Finally, according to Remark <ref>, setting γ=1/2 implies a power-law decay of the ATM volatility skew which is a known stylized fact.* We assume that θ_T = aT^p where a,p≥ 0, which considerably reduces the number of parameters while ensuring that the no-arbitrage constraint on the ATM total variance is always satisfied (∂_Tθ_T ≥ 0).
This parametric form is inspired by the calibrated vectors (θ_T_i)_i=1,…,M that exhibit almost a linear behavior with the maturity T.In Figure <ref>, we show how this parametric form fits the S&P 500 ATM total variances for several dates (the fit is similar for the Euro Stoxx 50 so it is not shown here). Note that the parameters a and p that have been used in this figure are those calibrated on the whole IVS so the quality of the fit is reduced in comparison to a calibration on the ATM volatilities only. Despite this, the fit is overall satisfying although the concave shapes of the ATM total variances in Figure <ref> cannot be well reproduced. The new model obtained after these simplifications is thereafter called the parsimonious SSVI model. [Parsimonious SSVI] The parsimonious SSVI is the parameterization of the total implied variance surface defined by:w(k,T) = θ_T/2(1+ρφ(θ_T)k+√((φ(θ_T)k+ρ)^2+(1-ρ^2))).where θ_T = aT^p and φ(θ) = η/√(θ(1+θ)) with a,p≥ 0 and η>0. In Figure <ref>, the average relative errors between the market implied volatilities and the SSVI implied volatilities for each day of our data sets obtained for the parsimonious SSVI model are compared to those obtained for the SSVI model with the modified power-law parameterization. As expected, the calibration accuracy is reduced for the parsimonious SSVI. However, it remains overall quite close to the SSVI-MPL calibration in view of the reduction of the number of parameters: 4 parameters for the parsimonious SSVI versus 27 for the SSVI-MPL. The mean of the average relative errors across the whole S&P 500 (resp. Euro Stoxx 50) data set increases from 1.19% (resp. 1.18%) to 1.65% (resp. 1.60%). In Figure <ref>, we show the impact of each simplification on the calibration accuracy for the S&P 500 (it is similar for the Euro Stoxx 50). It appears very clearly that the parametric form for θ_T is the assumption leading to the largest deterioration of the fit to market implied volatilities, which is consistent with the fact that this assumption is the one limiting the most the number of degrees of freedom of the model.§ PATH-DEPENDENT SSVI MODEL The present section is dedicated to the introduction of a new model for the joint dynamics of an implied volatility surface and the underlying asset price. The calibration results presented in Section <ref> demonstrate the ability of a particular case of the SSVI parameterization - the parsimonious SSVI - to fit historical implied volatility surfaces reasonably well while guaranteeing the absence of static arbitrage with only 4 parameters. As a consequence, we propose to specify our model as a dynamic version of the parsimonious SSVI: each parameter of the parsimonious SSVI is considered as a stochastic process whose dynamics remains to be determined. One option to jointly model the evolution of the parsimonious SSVI parameters and the underlying asset would be to introduce a correlation between the random noises driving each process. However, in view of the empirical study conducted in Section <ref> which indicates that there is a feedback effect of the past returns and the past squared returns of the underlying index price onto the level of the ATM implied volatility, we prefer another option. Instead of using a correlation, the idea is to explicitly model the response of each of the 4 parameters to the evolution of the underlying asset price.
To this end, we measure the extent to which the trend feature and the volatility feature of the path-dependent volatility (PDV) model presented in Section <ref> explain the variations of the 4 parameters of the parsimonious SSVI model. This study is presented in Section <ref> below. Then, Section <ref> introduces a variant of the PDV model of Guyon and Lekeufack and the dynamics of the parsimonious SSVI parameters. Finally, Sections <ref> and <ref> detail the calibration and the simulation of the model.§.§ Path-dependency of the parsimonious SSVI parameters The calibration of the parsimonious SSVI in Section <ref> provides the daily evolutions of the parameters a, p, ρ and η. Based on these daily evolutions, we can calibrate the PDV model (<ref>) where we replace Volatility_t in Equation (<ref>) by each parameter of the parsimonious SSVI. Note that we consider the logarithm of p instead of p in the PDV model since we observed that this provides a better fit. The calibration methodology is the same as the one presented in Section <ref>. Moreover, similarly to the study in Section <ref>, for each parameter of the parsimonious SSVI, we run a 10-fold blocked cross-validation on the train set to determine the optimal hyperparameters C_R_1, C_Σ and λ in the grid {5,10,25,50,100,250,500,1000,1500,2000,2500}^2×{10^-6,10^-5,…,10^-1}. The R^2 scores obtained on the train and the test sets for each parameter and for each index are reported in Table <ref>. On the one hand, these results show that the time evolution of the parameter a is well explained by the evolution of the underlying asset price both on the train and the test sets, which is in line with the results obtained for the ATM implied volatility since a captures the ATM total variance level (whose evolution is similar to the one of the ATM implied volatility and as such we expect the study conducted in Section <ref> to be still valid for the ATM total variance). The same observation holds for p. However, the fact that the PDV model works well for p was not anticipated as it parameterizes how the ATM total variance increases with the maturity and it is not homogeneous to the ATM total variance level. Thus, we emphasize that this is a key finding. On the other hand, the R^2 scores for the parameters ρ and η are small on the train set and negative on the test set (except for ρ on the Euro Stoxx 50). This indicates that the trend and the volatility features are not related to these two parameters. These observations come as no surprise: ρ and η parameterize respectively the orientation and the convexity of the implied volatility smile as illustrated in Figure <ref>. It is therefore less clear how the past variations of the underlying asset price could impact these parameters. <cit.> and <cit.> considered a third feature given by R_1^2 1_{R_1≥ 0} in the PDV model in order to achieve a satisfying joint SPX/VIX fit. Adding this feature to explain the variations of our 4 parameters does not improve the R^2 scores.An idea to explain the variations of ρ and η is to consider skewness and kurtosis features. Indeed, <cit.> showed a cumulant expansion formula for the Bachelier implied volatility that explains the presence of the volatility smile and its shape using the skewness and the kurtosis of the underlying asset price distribution. Independently, <cit.> showed a similar formula for the Black-Scholes implied volatility.
This formula writes: σ_BS(k,T) ≃σ(1-μ_3/6d-μ_4/24(1-d^2))where σ, μ_3 and μ_4 are respectively the standard deviation, the skewness and the kurtosis of the log-return logS_T/S_0 under the risk-neutral probability and d=-k/σ+σ/2. Replacing d by its expression in Equation (<ref>) yields:σ_BS(k,T) ≃σ + k/6μ_3-1/12μ_3σ^2 -1+k/24μ_4σ + k^2/24×μ_4/σ + 1/96μ_4σ^3.By analogy, we may consider the following regression model: X_t = β_0 + β_1 Σ_t + β_2 𝒮_t + β_3 𝒮_t Σ_t^2 + β_4𝒦_t Σ_t + β_5𝒦_t/Σ_t + β_6 𝒦_t Σ_t^3where X_t is the value of either ρ or η at time t, Σ_t is the volatility feature (<ref>) of the PDV model and 𝒮_t, 𝒦_t are respectively skewness and kurtosis features defined as:𝒮_t= ∑_t_i≤ t K_3(t-t_i)r_t_i^3/Σ_t^3, 𝒦_t= ∑_t_i≤ t K_4(t-t_i)r_t_i^4/Σ_t^4.Note that the three kernels K_2, K_3 and K_4 are assumed to be TSPL kernels. For the sake of simplicity, the cut-off lag is set to 1000 for all kernels and the penalization λ is set to zero. The R^2 scores resulting from the calibration of this regression model are presented in Table <ref>. Note that we present both the scores obtained using the standard splitting of the data sets described in Section <ref> and the average scores obtained using a 5-fold blocked cross-validation[We considered only 5 folds to limit the computational cost.]. We remark that the scores are negative on the test set for all instances except for the parameter η on the S&P 500 data set when using the standard splitting, but it becomes negative with the 5-fold blocked cross-validation. To verify that these negative scores are not the consequence of an overfitted model, we tested the 2^6-1 non-empty combinations of the 6 features in Equation (<ref>) but we mostly obtained negative scores on the test set. For the instances where the score was positive on the test set, we ran a 5-fold blocked cross-validation which systematically gave negative scores on the test set. Therefore, we conclude that these 6 features are not relevant for predicting the variations of the parameters ρ and η.Beyond the interpretation of ρ as a parameter that controls the orientation of the volatility smile, it is also possible to interpret it as the correlation between the two Brownian motions in the Heston model (the so-called "spot-vol" correlation) using the convergence of the Heston model towards the SVI parameterization <cit.>. Because of this link, it is reasonable to study whether one can explain the variations of the parameter ρ using the correlation between the underlying price and its volatility. As a measure of the volatility, we use the daily realized volatility estimates from <cit.> spanning the period from January 1, 2000 to December 31, 2021. In the same spirit as the features of the PDV model, we introduce a correlation feature based on a TSPL kernel:Γ_t = ∑_t_i≤ t K(t-t_i)(r_t_i-r̅_t_i)(σ_t_i-σ̅_t_i)/√(∑_t_i≤ tK(t-t_i)(r_t_i-r̅_t_i)^2∑_t_i ≤ t K(t-t_i)(σ_t_i-σ̅_t_i)^2).where r̅_t_i=1/C+1∑_k=0^C r_t_i-k, σ_t is the above-mentioned volatility and σ̅_t_i=1/C+1∑_k=0^C σ_t_i-k.
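For reference, the kernel-weighted correlation feature Γ_t can be evaluated as in the short sketch below (not the authors' code). The centring is done over the same window as the kernel, which is a simplifying assumption, and the kernel shape and data are illustrative.

```python
import numpy as np

# Minimal sketch of the correlation feature Gamma_t between the returns and a
# realized-volatility proxy, using an arbitrary decreasing kernel.
def correlation_feature(returns, vols, kernel):
    """returns[-1] and vols[-1] are the most recent observations; kernel[j]
    weights the observation lagged by j days."""
    C = min(len(kernel), len(returns), len(vols))
    r = np.asarray(returns, dtype=float)[-C:][::-1]      # most recent first
    s = np.asarray(vols, dtype=float)[-C:][::-1]
    k = np.asarray(kernel, dtype=float)[:C]
    r_c, s_c = r - r.mean(), s - s.mean()                # centred over the window
    num = np.sum(k * r_c * s_c)
    den = np.sqrt(np.sum(k * r_c ** 2) * np.sum(k * s_c ** 2))
    return float(num / den)

rng = np.random.default_rng(3)
rets = rng.normal(0.0, 0.01, size=1500)
vols = 0.2 + np.abs(np.cumsum(rng.normal(0.0, 0.002, size=1500)))
kern = 1.0 / (np.arange(1, 1001) / 252.0 + 0.1) ** 1.5   # TSPL-shaped weights
gamma_t = correlation_feature(rets, vols, kern)
```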
The calibration of the model ρ_t = β_0 + β_1 Γ_t with a cut-off lag C fixed to 1000 and a penalization λ fixed to 0 again gives a very low R^2 score both on the train and test sets leading us to the conclusion that there does not seem to be any link in practice between the parameter ρ in the SSVI and the correlation between the underlying price and its volatility.§.§ Specification of the model for the underlying asset price and the IVSBased on the analysis in the previous section, we provide in this section the dynamics of the four parameters in the parsimonious SSVI model. Since two of these parameters depend on the past path of the underlying asset price, we also need a model for the dynamics of the underlying asset price in order to be able to simulate IVSs over time. Because we want these simulations to be realistic, the model for the underlying asset price should also be as realistic as possible. We opt for the PDV model of Guyon and Lekeufack (more precisely, a variant of their model as we will see) as it allows to replicate almost all historical stylized facts of equity prices (leverage effect, volatility clustering, weak and strong Zumbach effects). The asset price (S_t)_t≥ 0 is assumed to evolve as follows:{[ dS_t/S_t= σ_t dW^S_t;σ_t=| β_0^σ + β_1^σ R^σ_1,t+β_2^σΣ_t^σ +ε^σ_t |;R_1,t^σ=∫_-∞^t Z_α_1^σ,δ_1^σ/(t-u+δ_1^σ)^α_1^σ×dS_u/S_u;Σ_t^σ= √(∫_-∞^t Z_α_2^σ,δ_2^σ/(t-u+δ_2^σ)^α_2^σ×(dS_u/S_u)^2) ].where σ_t is the instantaneous or spot volatility, (W^S_t)_t≥ 0 is a Brownian motion, Z_α,δ = (∫_-∞^t du/(t-u+δ)^α)^-1 and ε^σ_t is a residual allowing to account for the fact that the PDV model does not perfectly explain the variations of the spot volatility σ_t. This specification is similar to the one proposed by <cit.> except that we do not approximate the TSPL kernels by linear combinations of exponential kernels. Guyon and Lekeufack make this approximation to recover a Markovian model that is very fast to simulate. We choose not to follow suit since we already achieve reasonable simulation times as we will show in the numerical experiments. Besides, Guyon and Lekeufack propose to consider multiplicative residuals instead of additive residuals, i.e. they specify the dynamics of the spot volatility as σ_t = κ_t(β_0^σ + β_1^σ R^σ_1,t+β_2^σΣ_t^σ) where (κ_t)_t≥ 0 is an Ornstein-Uhlenbeck process or an exponential Ornstein-Uhlenbeck process. The choice of the latter process has the advantage of guaranteeing the positivity of the spot volatility provided that β_0^σ + β_1^σ R^σ_1,t+β_2^σΣ_t^σ≥ 0 which is always the case in the simulations for the estimated parameters as already underlined by Guyon and Lekeufack. Again, we propose not to follow suit since we observed that the volatility of the simulated underlying returns was too low in comparison to the historical volatility when using multiplicative residuals (see Figure <ref>). The use of additive residuals helps to increase it although it is still slightly lower (see Figure <ref>). To further improve the replication of the historical volatility, one could replace the increments of the Brownian motion W^S with random variables having fatter tails. This modification of the model is however not investigated in this paper. 
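To illustrate how the above dynamics can be simulated, the sketch below performs a simple Euler-type discretization of the spot equation with additive residuals: at each step the features are rebuilt from the simulated past returns, the volatility is obtained from the PDV relation, and a new return is drawn. All parameter values, the pre-history convention (zero-padded returns) and the residual draws are illustrative assumptions, not calibrated quantities.

```python
import numpy as np

def tspl(alpha, delta, cutoff, dt=1.0 / 252.0):
    tau = np.arange(1, cutoff + 1) * dt
    k = 1.0 / (tau + delta) ** alpha
    return k / (k.sum() * dt)                        # normalization constant Z

def simulate_pdv(n_steps, s0, betas, k1, k2, eps, dt=1.0 / 252.0, seed=0):
    """k1, k2: TSPL kernel weights (lag of 1 day first); eps: residual draws."""
    rng = np.random.default_rng(seed)
    b0, b1, b2 = betas
    cutoff = len(k1)
    rets = np.zeros(n_steps + cutoff)                # zero-padded pre-history
    s = np.empty(n_steps + 1)
    s[0] = s0
    vols = np.empty(n_steps)
    for t in range(n_steps):
        past = rets[t: t + cutoff][::-1]             # most recent simulated return first
        r1 = np.sum(k1 * past)                       # trend feature R_1
        sig_feat = np.sqrt(np.sum(k2 * past ** 2))   # volatility feature Sigma
        vol = abs(b0 + b1 * r1 + b2 * sig_feat + eps[t])
        vols[t] = vol
        ret = vol * np.sqrt(dt) * rng.standard_normal()
        rets[t + cutoff] = ret
        s[t + 1] = s[t] * (1.0 + ret)
    return s, vols

k1 = tspl(1.5, 0.05, 1000)
k2 = tspl(1.2, 0.10, 1000)
eps = np.random.default_rng(1).standard_normal(2520) * 0.01   # placeholder residuals
path, vol_path = simulate_pdv(2520, 100.0, (0.04, -0.1, 0.7), k1, k2, eps)
```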
The model that we retain for ε^σ is provided in the next section.Second, since both parameters a and p in the parsimonious SSVI model exhibit a path-dependent behavior with respect to the underlying asset price, we propose the following dynamics for these parameters:{[a_t= κ^a_t(β_0^a+β_1^aR_1,t^a+β_2^aΣ_t^a);p_t=κ^p_t exp(β_0^p+β_1^pR_1,t^p+β_2^pΣ_t^p);R_1,t^i=∫_-∞^t Z_α_1^i,δ_1^i/(t-u+δ_1^i)^α_1^i×dS_u/S_u for i ∈{a,p};Σ_t^i= √(∫_-∞^t Z_α_2^i,δ_2^i/(t-u+δ_2^i)^α_2^i×(dS_u/S_u)^2) for i ∈{a,p} ].where κ^a and κ^p are time-dependent multiplicative factors capturing the variations in a and p that are not due to the past movements in the underlying asset price. Note that we consider different TSPL kernel parameters for σ, a and p. Choosing to have common features R_1 and Σ for σ, a and p with parameter-specific β's is also an option but it requires calibrating the PDV model simultaneously on the three time series and it would probably reduce the R^2 scores in comparison to the ones obtained with a calibration of the PDV model for each of the three variables.The four quantities whose dynamics have yet to be specified are the two multiplicative factors κ^a and κ^p as well as the two parameters ρ and η of the parsimonious SSVI model. The historical evolution of these four quantities (see Figure <ref> in the following section) reveals that there are some periods where the four parameters become simultaneously more volatile and take more extreme values. The most striking example of this is the period from May 2016 to July 2017 for the S&P 500. Although this period is difficult to associate with any major event on the financial markets, it is not an artifact of the parsimonious SSVI calibration (although the fitting error is larger on this period as one can see in Figure <ref>). Indeed, the raw implied volatility data exhibit significant changes in the shape of the IVS during this period as illustrated in Figure <ref>. In order to capture this phenomenon within the modelling, we propose to consider a hidden semi-Markov model with two states. While the time spent in a given state is exponentially distributed in a true Markov model, a semi-Markov model allows one to choose the distribution of the sojourn time in each state, thus making it possible to produce long sojourn times. Let us denote by (I_t)_t≥ 0 a semi-Markov process with two states (1 and 2) and let us set X_t = (κ^a_t, κ^p_t, ρ_t, η_t). The random variable I_t can be interpreted as the unobserved economic regime or state in which the process X is at date t. The dynamics of (X_t)_t≥ 0 is specified as follows:dX_t = diag(N_I_t) (M_I_t-X_t)dt + diag(√(f(X_t)))Γ_I_t dW_t^Xwhere for i∈{1,2}, * N_i is a vector of size 4 representing the mean-reversion speed of X in the regime i,* M_i is a column vector of size 4 representing the mean-reversion level of X in the regime i,* Γ_i is a lower triangular matrix of size (4,4) such that Γ_iΓ_i^T is the covariance matrix of the Brownian terms driving X in the regime i,and, * diag(Y) = [ Y^1 0; ⋱; 0 Y^4 ] for Y = (Y^1,Y^2,Y^3,Y^4) ∈ℝ^4,* √(f(X_t)) = (√(κ_t^a),√(κ_t^p),√((1-ρ_t)(1+ρ_t)),√(η_t)),* (W_t^X)_t≥ 0 is a 4-dimensional Brownian motion.Note that conditionally on {I_t=i, ∀ t≥ 0}, (κ_t^a)_t≥ 0, (κ_t^p)_t≥ 0 and (η_t)_t≥ 0 are Cox-Ingersoll-Ross (CIR) processes while (ρ_t)_t≥ 0 is a Jacobi process lying between -1 and 1.
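A single Euler-Maruyama step of this regime-switching dynamics can be sketched as below. The two-regime parameter values are illustrative only, and the truncation of the CIR-type components and the clipping of the Jacobi component are simplifying assumptions used to keep the discretized state in its admissible domain.

```python
import numpy as np

# Minimal sketch of one Euler-Maruyama step for X_t = (kappa_a, kappa_p, rho, eta).
def euler_step_x(x, regime, N, M, Gamma, dt=1.0 / 252.0, rng=None):
    rng = rng or np.random.default_rng()
    n_i, m_i, g_i = N[regime], M[regime], Gamma[regime]
    f = np.array([x[0], x[1], (1.0 - x[2]) * (1.0 + x[2]), x[3]])   # f(X_t)
    drift = n_i * (m_i - x) * dt
    noise = np.sqrt(np.maximum(f, 0.0)) * (g_i @ rng.standard_normal(4)) * np.sqrt(dt)
    x_new = x + drift + noise
    x_new[0] = max(x_new[0], 0.0)                  # keep CIR-type components non-negative
    x_new[1] = max(x_new[1], 0.0)
    x_new[3] = max(x_new[3], 0.0)
    x_new[2] = np.clip(x_new[2], -0.999, 0.999)    # keep the Jacobi component in (-1, 1)
    return x_new

# Illustrative two-regime parameters (4 components each).
N = [np.array([5.0, 5.0, 2.0, 3.0]), np.array([20.0, 20.0, 8.0, 12.0])]
M = [np.array([1.0, 1.0, -0.7, 0.8]), np.array([1.2, 1.1, -0.6, 1.0])]
Gamma = [0.1 * np.eye(4), 0.4 * np.eye(4)]
x = np.array([1.0, 1.0, -0.7, 0.8])
x = euler_step_x(x, regime=0, N=N, M=M, Gamma=Gamma)
```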
The choice of CIR processes for κ^a and κ^p is motivated by the fact that these two quantities should be positive to guarantee the positivity of a and p respectively which in turn ensures the no-arbitrage constraint ∂_T θ_T ≥ 0. Similarly, η should be positive for the modified power-law parameterization φ(θ)=η/θ^γ(1+θ)^1-γ to also be positive. Finally, ρ should be in (-1,1) by definition of the SSVI parameterization, hence the use of a Jacobi process. §.§ Model calibrationThis section details a calibration methodology for all the parameters involved in the path-dependent SSVI model whose dynamics has been specified in the previous section. Starting with the spot volatility σ, the features R_1^σ and Σ^σ in Equation (<ref>) are discretized and truncated as follows:R_1,t = ∑_t-C_R_1^σ≤ t_i ≤ tZ_α_1^σ,δ_1^σ/(t-t_i+δ_1^σ)^α_1^σ r_t_i and Σ_t = √(∑_t-C_Σ^σ≤ t_i ≤ tZ_α_2^σ,δ_2^σ/(t-t_i+δ_2^σ)^α_2^σ r_t_i^2).where r_t_i = S_t_i-S_t_i-1/S_t_i-1. The parameters (α_1^σ,δ_1^σ,α_2^σ,δ_2^σ,β_0^σ,β_1^σ,β_2^σ) are then estimated using the approach described in Section <ref> (except that we use the full data set instead of splitting it into a train set and a test set). As a proxy of the spot volatility, we use the daily realized volatility estimates of <cit.> since we consider that they represent the best proxy of instantaneous volatility that we have access to. Note that we use only the past returns until time t to predict the realized volatility at time t+Δ where Δ = 1 day since the return at time t depends on the volatility at time t (this is also the approach implemented by , ). The cut-off lags C_R_1^σ and C_Σ^σ and the penalization λ^σ are estimated upstream using a 5-fold blocked cross-validation following the methodology described in Section <ref>. From this first calibration, we deduce the historical time series of ε^σ as the differences between the "true" realized volatilities and the predicted realized volatilities. Since this historical time series present a very small autocorrelation and its empirical distribution exhibit a right fat tail, we model the residuals ε^σ as i.i.d. non-central t-distributed random variables with noncentrality parameter c, number of degrees of freedom k, location μ and scale γ:ε^σ_t d=μ + γY+c/√(V/k) where Y is a standard normal random variable and V is an independent chi-square random variable with k degrees of freedom. The four parameters (μ,γ,c,k) of the non-central t-distribution are estimated using a numerical maximization of the log-likelihood.The historical time series of the parameters a, p, ρ and η of the parsimonious SSVI model are obtained in Section <ref>. We again rely on the approach described in Section <ref> to calibrate the α's, δ's and β's in Equation (<ref>). This actually corresponds to the calibration performed in Section <ref> with the difference that we do not split the data set in train and test sets and we only consider only until December 31, 2021 as the realized volatility data set ends at this date and we want to be consistent across the calibrations on different data sets. We deduce the historical time series of κ^a and κ^p from this calibration. Let us denote by (t_k := kΔ)_k=0,…,n the time grid on which the process X=(κ^a,κ^p,ρ,η) is observed. Note that Δ=1/252 (1 business day) in our case. The corresponding observations of X are denoted by x_t_0,…,x_t_n. Endowed with the historical evolution of the vector X, we calibrate the hidden semi-Markov model (<ref>) using the EM algorithm described by <cit.>. 
To this end, we consider a semi-Markov chain where the sojourn time d_i in state i∈{1,2} follows a Zipf distribution:d_i(u):= ℙ(I_t_k+u+1≠ i ,I_t_k+v = i∀ v∈{2,…,u}| I_t_k+1=i, I_t_k≠ i) = 1/H_U,s_i1/u^s_i u=1,…,U where H_U,s=∑_k=1^U k^-s is the U-th generalized harmonic number of order s. Note that we consider a discrete semi-Markov chain for simplicity but its continuous equivalent (a truncated Pareto distribution) could be chosen instead if one wanted to be able to simulate the model for any discretization time step. The parameter U is fixed to 5000, which seems to be a safe cut-off, and the parameters (s_i)_i=1,2 are included in the set of parameters estimated by the EM algorithm. We also include the initial distribution (ξ_i:=ℙ(I_t_1=i| X_t_0=x_t_0))_i=1,2 in the set of estimated parameters. In total, the following parameters are estimated by the EM algorithm: (N_i^j)_i=1,2 j=1,…,4, (M_i^j)_i=1,2 j=1,…,4, (Γ_i^j,k)_i=1,2 j,k=1,…,4, (s_i)_i=1,2 and (ξ_i)_i=1,2. In order to find a good starting point for the EM algorithm, we start by estimating independently each component of X with the EM algorithm, so that the estimation of the model (<ref>) can be decomposed into two steps: * estimation of the parameters of each of the four components of X and* estimation of the complete model starting from the parameters estimated in the first step or their means over the four components. The initialization of the EM algorithm for the component j∈{1,2,3,4} of X in the first step is described below: * Initialization of (N_i^j)_i=1,2, (M_i^j)_i=1,2 and the CIR or Jacobi volatilities (γ_i^j:= √(∑_k=1^4 (Γ_i^j,k)^2))_i=1,2: The K-means clustering algorithm with K=2 is applied to the historical time series of X^j to infer the state of each data point. Then, <cit.>'s MLE estimator is used to estimate the parameters (N_i^j)_i=1,2, (M_i^j)_i=1,2 and (γ_i^j)_i=1,2.* Initialization of (s_i)_i=1,2: We set s_1 = 1.5 and s_2=2. * Initialization of (ξ_i)_i=1,2: We set ξ_i=1 if the state identified by the K-means clustering algorithm at t_1 is i and 0 otherwise. In the presentation of his EM algorithm for estimating hidden semi-Markov chains, <cit.> considers a non-parametric observable process X whose samples X_t_0, …, X_t_n are independent conditionally on the hidden semi-Markov chain I. In our case, the distribution of X is parametric and the samples are not independent conditionally on I. This implies that the maximization step has to be adapted to our specific setting. For this purpose, we rely on the approach described by <cit.>, who discretize the dynamics of a constant elasticity of variance (CEV) model using the Euler-Maruyama scheme to get explicit formulas for the parameter estimators. Once the parameters of each component of X have been estimated, we initialize the EM algorithm for the complete model (<ref>) as follows: * Initialization of (N_i^j)_i=1,2 j=1,…,4, (M_i^j)_i=1,2 j=1,…,4: We use the parameters calibrated in the first step. * Initialization of (Γ_i^j,k)_i=1,2 j,k=1,…,4: We average the 4 smoothed probabilities ℙ(I_t_k=i| X_t_0 = x_t_0, …, X_t_n=x_t_n) for k∈{1,…,n} obtained in the first step, which gives us an approximation of what is the most likely state for all dates. Denoting by t_0^i,…,t_m^i the dates where the most likely state is i, we compute the residuals of each component j for each regime i as ε_t_k+1^i^i,j := X_t_k+1^i^j-X_t_k^i^j-N_i^j(M_i^j-X_t_k^i^j)Δ/γ_i^j √(Δ f(X_t_k^i^j)). Note that we only consider the pairs of dates (t_k^i,t_k+1^i) that are consecutive, i.e.
such that t_k+1^i-t_k^i=Δ, in order to put aside the pairs of dates between which the most likely state is no longer i. Finally, we set the initial value of the matrix (Γ_i^j,k) as the lower triangular matrix in the Cholesky decomposition of the matrix (Ĉ_i^jkγ_i^jγ_i^k)_j,k=1,…,4, where Ĉ_i^jk is the empirical correlation between ε^i,j and ε^i,k. * Initialization of (s_i)_i=1,2 and (ξ_i)_i=1,2: We average the values obtained in the first step for each component. In the multivariate case, it is no longer possible to get explicit formulas for the estimators of (N_i^j)_i=1,2 j=1,…,4, (M_i^j)_i=1,2 j=1,…,4 and (Γ_i^j,k)_i=1,2 j,k=1,…,4 in the maximization step of the EM algorithm. However, since X_t_k given X_t_k-1=x_t_k-1 and I_t_k=i has a multivariate normal distribution when discretizing the SDE (<ref>) with the Euler-Maruyama scheme, we can compute the conditional density explicitly. Therefore, we rely on a numerical optimization procedure for the maximization step of the parameters of X. §.§ Numerical results The aim of this section is to provide some evidence of the consistency of the proposed model with historical IVS data. In Figure <ref>, we start by showing the historical evolution of the parameters κ^a, κ^p, η and ρ that compose the hidden semi-Markov process X as well as the most likely state (obtained using the smoothed probabilities ℙ(I_t_k=i| X_t_0 = x_t_0, …, X_t_n=x_t_n)) for each date after the calibration. These graphs show that the periods of high volatility are correctly identified by the model. Using the calibrated parameters, we first simulate trajectories of X with a daily time step conditionally on the path of the S&P 500 index between August 26, 2004 and December 31, 2021 and the path of the Euro Stoxx 50 index between September 27, 2006 and December 31, 2021 (the period from March 8, 2012 to December 31, 2021 corresponds to the one being used for the calibration of the parsimonious SSVI parameters and the period before is the one required for computing the features R_1 and Σ). The sojourn times are simulated using the function of the Python package and they are assumed to be independent of all other random sources. The CIR processes κ^a, κ^p and η are simulated using the explicit scheme E(0) of <cit.> and with a discretization time step given by Δ/100 with Δ=1/252 to ensure that the discretization error remains limited given that the estimated volatility and mean-reversion speed are large. Lastly, the Jacobi process is simulated using the full truncation Euler scheme of <cit.> with the same discretization time step. In Figure <ref>, we compare the historical evolution in time of the ATM implied volatility curve as a function of the maturity (in the sequel, we refer to this curve as the IVS ATM term structure) with that of a sample path of the path-dependent SSVI model. We observe that the historical and the simulated paths are visually very close in terms of the level, the amplitude of the variations, the regularity and the overall shape. Moreover, the spikes of the implied volatility due to a drop in the underlying asset price are well reproduced. Then, we simulate the complete path-dependent SSVI model, i.e. the underlying asset price is also simulated according to Equations (<ref>).
This adds two random sources, namely W^S and ε^σ, which we correlate to W^X as follows: * We compute the residuals associated with the dynamics of the underlying asset price as:ε^S_t = logS_t/S_t-Δ + 1/2σ_t^2Δ/σ_t√(Δ) where S_t and σ_t denote here the historical values of the underlying asset price and the realized volatility respectively.* We estimate the empirical correlations between ε^S (resp. ε̃^σ=ϕ^-1(F_(μ,γ,c,k)(ε^σ)) with ϕ^-1 the inverse of the standard normal cumulative distribution function and F_(μ,γ,c,k) the cumulative distribution function of a non-central t-distribution with parameters (μ,γ,c,k)) and the components of the vector (ε^κ^a,ε^κ^p,ε^ρ,ε^η) where the residuals ε^κ^a, ε^κ^p, ε^ρ and ε^η are calculated according to Equation (<ref>). Note that we assume that the correlations between ε^S (resp. ε̃^σ) and the residuals of X do not depend on the state of the hidden semi-Markov chain. * By combining these correlation estimates with the correlations between the components of X estimated by the EM algorithm in the two states of the semi-Markov chain, we obtain the correlation matrix of the vector (ε^S,ε̃^σ,ε^κ^a,ε^κ^p,ε^ρ,ε^η) in each state. Although the empirical correlation between ε^S and ε̃^σ lies at -25% for the S&P 500 and -19% for the Euro Stoxx 50, we set it to zero to avoid the introduction of a decreasing trend in the underlying price paths. The obtained estimation of the correlation matrix in each state is positive definite both for the S&P 500 and the Euro Stoxx 50, which allows us to use the Cholesky decomposition to correlate the random variables. Since the model depends on the past evolution of the underlying asset price, we initialize our simulations using the evolution of the S&P 500 between August 26, 2004 and March 8, 2012 and the evolution of the Euro Stoxx 50 between September 27, 2006 and March 8, 2012. In Figure <ref>, we show the evolution of the ATM term structure of two IVS sample paths obtained through the procedure described above. Again, we obtain a very convincing evolution which shows that the dynamics of the underlying asset price is also realistic. Note that there is nothing in the model or in the simulation that guarantees that the no-arbitrage condition η^2(1+ρ)≤ 4 is satisfied. Nevertheless, over 1000 simulations over 11 years with a daily time step, we only have 0.27% (resp. 0.006%) of the pairs (ρ_t,η_t) that do not satisfy this condition for the S&P 500 (resp. the Euro Stoxx 50). Besides, let us recall that it is only a sufficient condition for absence of static arbitrage in the SSVI parameterization. Therefore, the IVSs that do not satisfy it are not necessarily arbitrable. In order to guarantee the absence of arbitrage, one can set η to √(4/(1+|ρ|)) when the no-arbitrage condition is not satisfied in the simulation. In Figure <ref>, we provide the quantile envelopes of the ATM implied volatility for the maturities 1 month, 12 months and 24 months using this adjustment. These graphs demonstrate that the range of simulated values is reasonable in view of the historical path. The decreasing trend at the beginning of each graph results from the initialization of the simulations with the historical underlying price path. In terms of computational cost, running 1000 simulations over a horizon of 11 years with a daily time step (and a finer time step for X as discussed earlier) takes approximately 15 minutes for 24 maturities and 11 strikes on a computer equipped with an Intel Core i7-11850H, 16 cores, 2.5GHz.
Two-thirds of the time is needed to simulate the random variables and the last third to diffuse the processes. The model is implemented in the Python programming language. § CONCLUDING REMARKS Using historical time series of implied volatility surfaces for the S&P 500 and the Euro Stoxx 50, we have shown empirically that a large part of the variability of the at-the-money-forward (ATM) implied volatility for maturities ranging from 1 month to 24 months can be explained by two features, namely the weighted average of the underlying asset past returns and the weighted average of the past squared returns. As the maturity increases, the part of variability explained by these two features decreases but remains important. Surprisingly, up to four years of the past evolution of the underlying asset have an impact on the prediction of the implied volatility. Thus, our empirical study extends that of <cit.>, which focused on implied volatility indices and realized volatility. In Section <ref>, we have then introduced a parsimonious version (see Definition <ref>) of the SSVI parameterization of <cit.> that depends on four parameters only (a, p, ρ and η) and that still achieves a reasonable fit to the implied volatility surfaces in our two data sets. This parsimonious version is essentially obtained by considering a parametric form of the ATM total variance, thus avoiding having one parameter per maturity. Moreover, it ensures the well-known power-law decay of the ATM volatility skew. In the last section, we demonstrate that the variations of the two parameters a and p ruling the ATM implied volatility in the parsimonious SSVI parameterization can also be largely explained by the two features that we mentioned earlier. Based on this observation, we introduce a new model for the joint dynamics of the underlying asset price and the full implied volatility surface (there is no restriction on the range of maturities and strikes that one wants to project) embedding the path-dependency of the implied volatility with respect to the underlying price. On the one hand, the underlying asset price is modelled using the path-dependent volatility model of <cit.> with additive residuals (i.e. the part of the variability that is not explained by the two features) modelled by i.i.d. random variables distributed according to a non-central t-distribution. On the other hand, the residuals of the parameters a and p and the parameters ρ and η are modelled through a semi-Markov diffusion which makes it possible to reproduce the periods of high volatility of these parameters that we observe historically. Extensive details on how to calibrate and simulate this new model are provided. Finally, we show the high consistency of the sample paths of this model with historical data and that there is a very small number of arbitrages, which can be easily removed so that all simulated IVSs are arbitrage-free. The study of the impact of this new model for applications in asset management, risk management and hedging is left for future research. § ACKNOWLEDGEMENTS The authors are grateful to Julien Guyon for fruitful discussions. § PROOFS OF PROPOSITIONS <REF>, <REF> AND <REF> The three proofs rely on the following lemma. For all ρ∈ [-1,1], we have: f(ρ) = 1/ρ^2(1+√(1-ρ^2)) ≥ 1. Since √(1-ρ^2)≥ 0, f(ρ) ≥1/ρ^2. The result follows from the fact that we assume |ρ| ≤ 1. §.§ Proof of Proposition <ref> Let us start by verifying condition (ii) of Theorem <ref>. We have ∂_θ(θφ(θ)) = e^-λθ(e^λθ-λθ-1)/λ^2θ^2≥ 0.
Moreover, we can rewrite φ as φ(θ) = λθ -1+e^-λθ/λ^2θ^2. Since 1/ρ^2(1+√(1-ρ^2)) ≥ 1 according to Lemma <ref>, it is enough to check that:e^-λθ(e^λθ-λθ-1) ≤λθ -1+e^-λθ ⇔e^-λθ(2+λθ)+λθ-2 ≥ 0 to satisfy condition (ii). Let us set ψ(θ) = e^-λθ(2+λθ)+λθ-2. We have ψ'(θ)= -λ e^-λθ(1+λθ)+λ and ψ”(θ) = λ^3θ e^-λθ. The second derivative of ψ being non-negative on ℝ_+, ψ' is non-decreasing on ℝ_+ and is bounded from below by lim_θ→ 0ψ'(θ) = 0. Therefore, ψ is also non-decreasing on ℝ_+ and is bounded from below by lim_θ→ 0ψ(θ) = 0. We deduce that condition (<ref>) is satisfied. Hence, 0≤∂_θ(θφ(θ)) ≤φ(θ) and condition (ii) is satisfied. Let us now consider conditions (iii) and (iv). The function φ is non-increasing on ℝ_+ since φ'(θ) = 2e^-λθ/2coshλθ/2/λ^2θ^3(-λθ +2tanhλθ/2) ≤ 0 because tanh x ≤ x on ℝ_+. Thus, φ is bounded from above by φ(0)=1/2. Consequently, φ(θ)^2<φ(θ) and we only need to verify condition (iii). Since ∂_θ( θφ(θ))≥ 0, the function θ↦θφ(θ) is non-decreasing and it is bounded from above by the limit lim_θ→ +∞θφ(θ) = 1/λ. We conclude that condition (iii) is satisfied provided that 1+|ρ|/λ≤ 4. §.§ Proof of Proposition <ref> We have ∂_θ(θφ(θ))=(1-γ)φ(θ), thus 0< ∂_θ(θφ(θ)) < φ(θ) and condition (ii) of Theorem <ref> is satisfied since 1/ρ^2(1+√(1-ρ^2)) ≥ 1 by Lemma <ref>. Let us now consider conditions (iii) and (iv). We define ψ_1(θ) = θφ(θ)(1+|ρ|)-4 and ψ_2(θ) = θφ(θ)^2(1+|ρ|)-4. The function ψ_1 is clearly increasing with ψ_1(0) = -4 and lim_θ→ +∞ψ_1(θ) = +∞ so there exists θ_1^* >0 such that ψ_1(θ_1^*) = 0. The monotonicity of the function ψ_2 depends on the value of γ as ψ_2'(θ) = (1+|ρ|)(1-2γ)φ(θ)^2: * If γ∈ (0,1/2), then ψ_2 is strictly increasing with ψ_2(0)=-4 and lim_θ→ +∞ψ_2(θ) = +∞ so there exists θ_2^*>0 such that ψ_2(θ_2^*) = 0. * If γ∈ (1/2,1), then ψ_2 is strictly decreasing with lim_θ→ 0ψ_2(θ) = +∞ and lim_θ→ +∞ψ_2(θ) = -4 so there exists θ_2^*>0 such that ψ_2(θ_2^*) = 0. * If γ = 1/2, then ψ_2 is constant and equal to η^2(1+|ρ|)-4. Proposition <ref> follows by combining the conditions such that ψ_1(θ)< 0 and ψ_2(θ)≤ 0 and by using Remark <ref>. §.§ Proof of Proposition <ref> We have ∂_θ(θφ(θ)) = 1-γ/1+θφ(θ), thus 0< ∂_θ(θφ(θ)) < φ(θ) and condition (ii) of Theorem <ref> is satisfied since 1/ρ^2(1+√(1-ρ^2)) ≥ 1 by Lemma <ref>. This also shows that θ↦θφ(θ) is strictly increasing. Since lim_θ→ +∞θφ(θ) = η, we deduce that condition (iii) is equivalent to η(1+|ρ|)≤ 4. Finally, for condition (iv), we have:∂_θ(θφ(θ)^2) = η^2/θ^2γ(1+θ)^3-2γ(1-θ-2γ). Therefore, we have the following cases: * If γ∈ (1/2,1), then θ↦θφ(θ)^2 is strictly decreasing on ℝ_+ with lim_θ→ 0θφ(θ)^2 = +∞ and lim_θ→ +∞θφ(θ)^2 = 0. Thus according to Remark <ref>, if η(1+|ρ|)≤ 4 then the SSVI is free of static arbitrage if θ_T ≥θ^* for all T>0 where θ^* satisfies θ^*φ(θ^*)^2 = 4/(1+|ρ|). * If γ∈ (0,1/2), then θ↦θφ(θ)^2 is strictly increasing on (0,1-2γ) and then strictly decreasing on (1-2γ,+∞), thus it is bounded from above by (1-2γ)φ(1-2γ)^2. We deduce that the SSVI is free of static arbitrage for η(1+|ρ|)≤ 4 and (1-2γ)φ(1-2γ)^2(1+|ρ|)≤ 4.* If γ = 1/2, then θ↦θφ(θ)^2 is strictly decreasing on ℝ_+ and bounded from above by η^2. We deduce that the SSVI is free of static arbitrage for η(1+|ρ|)≤ 4 and η^2(1+|ρ|) ≤ 4, which is equivalent to η^2(1+|ρ|) ≤ 4 since for η < 1, we have η(1+|ρ|)≤ 4 for all ρ∈ [-1,1].
http://arxiv.org/abs/2312.15950v1
{ "authors": [ "Hervé Andrès", "Alexandre Boumezoued", "Benjamin Jourdain" ], "categories": [ "q-fin.CP" ], "primary_category": "q-fin.CP", "published": "20231226083129", "title": "Implied volatility (also) is path-dependent" }
Dual-Functional Artificial Noise (DFAN) Aided Robust Covert Communications in Integrated Sensing and Communications Runzhe Tang, Long Yang, Senior Member, IEEE, Lv Lu, Member, IEEE, Zheng Zhang, Graduate Student Member, IEEE, Yuanwei Liu, Fellow, IEEE, and Jian Chen, Member, IEEE R. Tang, L. Yang, L. Lu, Z. Zhang, and J. Chen are with the State Key Laboratory of Integrated Services Networks, Xidian University, Xi’an 710071, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected];). Y. Liu is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: [email protected]). January 14, 2024 This paper investigates covert communications in an integrated sensing and communications system, where a dual-functional base station (called Alice) covertly transmits signals to a covert user (called Bob) while sensing multiple targets, one of which acts as a potential watcher (called Willie) that maliciously eavesdrops on the legitimate communications. To shelter the covert communications, Alice transmits additional dual-functional artificial noise (DFAN) with a varying power not only to create uncertainty at Willie’s signal reception and thus confuse Willie, but also to sense the targets simultaneously. Based on this framework, the weighted sum of the sensing beampattern mean square error (MSE) and cross correlation is minimized by jointly optimizing the covert communication and DFAN signals subject to the minimum covert rate requirement. The robust design considers both cases of imperfect Willie’s CSI (WCSI) and statistical WCSI. Under the worst-case assumption that Willie can adaptively adjust the detection threshold to achieve the best detection performance, the minimum detection error probability (DEP) at Willie is analytically derived in closed form. The formulated covertness constrained optimization problems are tackled by a feasibility-checking based difference-of-convex (DC) relaxation algorithm utilizing the S-procedure, the Bernstein-type inequality, and the DC method. Simulation results validate the feasibility of the proposed scheme and demonstrate the covertness performance gains achieved by the proposed design over various benchmarks.
Beamforming design, covert communications, integrated sensing and communications (ISAC). § INTRODUCTION The ubiquitous deployment of radar and communication (R&C) systems leads to explosively growing demands for wireless resources (i.e., spectral and spatial resources). To fully exploit the potential of limited wireless resources as well as to certify the applications of simultaneous R&C functions, a new paradigm denoted as integrated sensing and communications (ISAC) is proposed, which can provide R&C functions on a single hardware platform with a single waveform <cit.>. Due to the R&C integration and coordination gains brought by ISAC technology, it has been envisioned as a key enabler for both next-generation wireless networks and radar systems <cit.>, <cit.>. Benefiting from the development of ISAC technology, the communication-based network design is shifted to ISAC networks, which enables communication-based networks to provide new high-accuracy sensing services such as unmanned aerial vehicles (UAVs) <cit.>, <cit.>, industrial Internet-of-things (IoT) <cit.> and intelligent traffic monitoring <cit.>. However, ISAC networks are confronted with severe security issues due to not only the broadcast characteristic of wireless communications but also the fact that sensed targets can potentially wiretap the confidential information-bearing signal transmitted to the legitimate receivers, which poses a new security concern <cit.>. Although focusing power toward the target direction can facilitate sensing performance, the confidential information-bearing signal power for illuminating the target should be confined to prevent eavesdropping. Against this background, physical layer security (PLS) could be a viable approach to secure ISAC systems <cit.>, <cit.>. Different from conventional cryptographic techniques, which encrypt confidential data prior to transmission, PLS exploits the intrinsic randomness of noise and fading channels to degrade legitimate information leakage <cit.>. There have been extensive works investigating the secrecy issue of ISAC systems in terms of the PLS <cit.>. Compared with the PLS technology, covert communication aims to shelter the communication itself, which can achieve a higher level of security <cit.>, <cit.>. In certain circumstances, protecting the content of communications using existing PLS techniques is not sufficient as the communication itself is required to ensure a low probability of detection by adversaries, which motivates the studies of covert communications in ISAC networks <cit.>, <cit.>. In <cit.>, the authors proposed a covert beamforming design for ISAC systems, where both perfect warden (called Willie)’s channel state information (WCSI) and imperfect WCSI are investigated.In <cit.>, a robust transceiver design for a covert ISAC system with imperfect channel state information (CSI) is proposed, considering both bounded and probabilistic CSI error models. However, these existing works all assumed a weak Willie detection strategy, where the likelihood function of Willie was treated as the detecting performance metric. In other words, Willie judges whether a confidential signal transmission exists or not by the ratio of its likelihood function. Due to the detection threshold variation being unavailable at Willie, the aforementioned covert ISAC systems design lacks robustness, which motivates our investigation into the robust design of covert ISAC systems. 
For an arbitrary covert communication system, the availability of CSI at the base station (BS) (called Alice) is fundamental in covert communications to conceal the signal from being detected by a potential watcher (called Willie) <cit.>. Considering a covert ISAC system working in the tracking mode, the instantaneous CSI of the Alice-covert user (called Bob) link is available at the Alice as the CSI of Bob can be obtained via conventional channel acquisition techniques. However, the exact prior information of targets (the number of targets and their initial angle estimation) is usually unknown because the initial angle estimation of targets contains a certain degree of uncertainty <cit.>. As illustrated above, it is of utmost importance to consider imperfect WCSI in the covert ISAC system, which is consistent with the WCSI hypotheses in previous studies on covert ISAC systems <cit.>, <cit.>. On the other hand, due to the intrinsic ambiguity in the estimation of Willie is inevitable in ISAC systems (e.g. estimation of distance, angle, and velocity), the worst-case scenario should be considered that only the statistical WCSI is available at Alice for robust covert ISAC system design. Note that different from the weak Willie detection strategy considered in <cit.> and <cit.>, the detection threshold variation is available at Willie both in the imperfect WCSI and the statistical WCSI scenario, thus the minimum detection error probability (DEP) at Willie is investigated for robust design of covert ISAC systems. To achieve covert communications and ensure a negligible probability of being detected by Willie, uncertainty should be created to confuse Willie (e.g. noise <cit.>, channel uncertainty <cit.>, full-duplex receiver <cit.> and artificial noise (AN) <cit.>). In the ISAC system, the joint sensing and communication mechanism sheds light on the new secure design that the additional sensing function can serve as a support to facilitate the provision of security <cit.>, which raises the reflection on the potential interplay between ISAC and covert communications. To elaborate, the AN can not only be utilized to create uncertainty at Willie’s signal reception to confuse Willie but also can be harnessed to sense targets simultaneously. On the one hand, when there exists confidential transmission between Alice and Bob, the AN can help to achieve covert communication and facilitate sensing performance. On the other hand, when there is no confidential transmission between Alice and Bob, the AN can function like a dedicated sensing signal to facilitate sensing performance. To elaborate, in the ISAC system design, due to the degree-of-freedom (DoF) of conventional radar being limited by the number of transmit antennas, the dedicated sensing signal can be introduced to exploit the full DoF of radar function, especially in the cases where the number of users is smaller than the number of antennas that may lead to significant distortion of radar beampattern <cit.>. §.§ Contributions and Organization In this paper, we investigate covert communications in an ISAC system, which consists of a dual-functional multi-antenna BS Alice, a single-antenna covert user Bob, and multiple sensing targets, where Alice transmits additional dual-functional artificial noise (DFAN) with a varying power to create uncertainty at Willie’s signal reception to shelter the covert communications and sense targets simultaneously. 
Unlike the weak Willie detection strategy considered in <cit.> and <cit.>, for the robustness of covert ISAC systems, this work considers that the detection threshold variation is available at Willie, thus the minimum DEP at Willie is investigated. Moreover, the statistical WCSI scenario is also considered. The main contributions of this work are summarized below: * We investigate a covert ISAC system where sensing targets may act as a potential Willie. To shelter the covert communications, Alice transmits additional DFAN with varying power not only to confuse Willie but also to harness the DFAN to sense the targets simultaneously. * The robust covert ISAC design considers not only imperfect WCSI (i.e., bounded and Gaussian WCSI errors) but also statistical WCSI. Under both WCSI hypotheses, the worst-case scenario is considered that Willie can adaptively adjust the detection threshold to achieve the best detection performance. On this basis, the minimum DEP at Willie is derived in the analytical expressions. Moreover, in the statistical WCSI case, the closed-form expressions of the average minimum DEP are derived to further investigate the covertness of the system. * The weighted sum of the sensing beampattern mean square error (MSE) and cross correlation is minimized by jointly optimizing the covert communication and DFAN signals subject to the minimum covert rate requirement. Firstly, in the imperfect WCSI scenario, the formulated covert rate constrained and outage probability constrained optimization problems are tackled utilizing S-procedure, Bernstein-type inequality, and the difference-of-convex (DC) relaxation method. Next, in the statistical WCSI scenario, the closed-form expressions of the average minimum DEP are derived in the first place and then the formulated optimization problem is solved by the DC relaxation method. Generally, a feasibility-checking based DC algorithm design is proposed to solve the formulated optimization problems, which ensures the initial value of 𝐖_1 in the feasible region of the problem and guarantees the convergence of the algorithm. * Simulation results validate the feasibility of the proposed scheme. The results show that: On the one hand, to achieve covert communication in an ISAC system, DFAN is preferred rather than single-functional AN. On the other hand, when the covert rate or the sensing performance requirement is low, it is preferable to adopt the statistical WCSI hypothesis to improve the robustness of the covert ISAC design, while achieving comparable performance. The remainder of this paper is organized as follows. Section II presents the system model. In Sections III and IV the covert communication performance analysis is illustrated in the first place and the covert ISAC optimization problems are accordingly formulated and then solved under the imperfect and statistical WCSI scenario, respectively. Our numerical results and discussions are presented in Section V, and Section VI concludes this paper. Notations: Boldface capital 𝐗 and lower-case letter 𝐱 denote matrix and vector, respectively. For any N× M-dimensional matrix 𝐗∈ℂ^N× M, 𝐗^T and 𝐗^H denote the transpose and Hermitian conjugate operations. Similarly, rank(𝐗), Tr(𝐗), 𝐗, 𝐗_F represent the rank value, trace value, spectral norm operation, and Frobenius norm operation. 𝐗≽0 denotes that 𝐗 is a positive semidefinite matrix, while 𝐱∼𝒞𝒩(μ,𝐗) denotes that 𝐱 is a circularly symmetric complex Gaussian (CSCG) vector with mean μ and covariance matrix 𝐗. 
For a matrix 𝐗, 𝐗^-1 denotes the inverse matrix operation. For any vector 𝐱, |x| and 𝐱 denote the modulus of the scalar x and the Euclidean norm of the vector 𝐱, respectively. 𝔼(·) is the statistical expectation operation. § SYSTEM MODEL As shown in Fig. 1, we consider covert communications in an ISAC system, which consists of a dual-functional N-antenna BS called Alice, a single-antenna covert user called Bob, and M sensing targets indexed by M = { T_1,⋯, T_M} (M ≤ N). A challenging surveillance scenario is considered, where the sensing target T_m_s is assumed to be the potential warden called Willie, which aims to maliciously wiretap the communications between Alice and Bob. To achieve covert communication between Alice and Bob, Alice transmits additional DFAN to create uncertainty at Willie’s signal reception. Besides confusing Willie, the DFAN is also harnessed to sense the M targets simultaneously. The DFAN is independent of the covert communication signal, and the total transmit power of the DFAN is denoted by P_ A. We assume that P_ A follows a uniform distribution within the range [ P_ A,min,P_ A,max], i.e., f_P_ A(x) = 1/(P_ A,max-P_ A,min), P_ A,min≤ x ≤P_ A,max, where P_ A,max denotes the maximum transmit power budget for the DFAN. It is assumed that Willie knows the distribution of P_ A. However, the instantaneous value of P_ A is not available at Willie. Due to the uncertainty introduced by the DFAN, Willie cannot tell whether the power fluctuation of its received signal is due to the variation of the ongoing covert communication or the variation of the DFAN. In addition to assisting in the covert signal transmissions between Alice and Bob, i.e., guaranteeing a low probability of being detected by Willie, the DFAN is also exploited to facilitate the sensing function. The channel coefficients from Alice to Bob and Willie are defined as 𝐡_ b∈ℂ^N× 1 and 𝐡_ w∈ℂ^N× 1, respectively. For small-scale fading, the Alice-Bob channel 𝐡_ b is assumed to be Rician fading. Moreover, the Alice-Willie channel is assumed to be Rayleigh fading, i.e., 𝐡_ w∼ C N( 0,𝐈). The CSI availability is illustrated below. It is assumed, as a premise, that the covert ISAC system works in the tracking mode, where prior information on the targets (the number of targets M and their initial angle estimates θ̂_m) is known. We assume that Willie can estimate the instantaneous CSI of 𝐡_ w and Alice knows the instantaneous CSI of the Alice-Bob link. Moreover, the WCSI availability is divided into two scenarios, i.e., with imperfect WCSI and statistical WCSI. Furthermore, in the imperfect WCSI scenario, both bounded WCSI errors and Gaussian WCSI errors are investigated. The WCSI model is elaborated as follows. 1) Bounded WCSI errors: In this scenario, due to the potential cooperation between Alice and Willie, the coarse WCSI is available at Alice, which can be utilized to help Bob avoid Willie’s monitoring.
The bounded CSI error model is considered, i.e., 𝐡_ w = 𝐡̂_ w + Δ𝐡_ w, Δ𝐡_ w≤ε_ w, where Δ𝐡_ w denotes the error vector, ε_ w denotes the maximum threshold of the bounded CSI error, and 𝐡̂_ w denotes the estimated CSI of Willie, i.e., sensing target T_m_s, at Alice. 2) Gaussian WCSI errors: In this scenario Willie may be a mobile target and there is no cooperation between Alice and Willie. Alice needs to detect active transmission from Willie or/and capture Willie’s leaked signals (the radiometer detector is adopted by Willie and involuntary signal leakage of radiometers is unavoidable) to acquire the Gaussian WCSI. To elaborate, the WCSI is subject to Gaussian errors, i.e., 𝐡_ w = 𝐡̂_ w + Δ𝐡_ w=𝐡̂_ w + γ_ w^1/2𝐞_ w, Δ𝐡_ w∼ C N( 0,γ_ w), where γ_ w = γ_ w^1/2𝐞_ w is the covariance matrix of WCSI error and 𝐞_ w denotes the independent CSCG random vector following the distribution 𝐞_ w∼ C N( 0,𝐈). Moreover, the outage probability for covertness constraints is defined as ρ _c. 3) Statistical WCSI: To guarantee the robustness of the covert ISAC system, the worst scenario is considered that only the statistical WCSI is available at Alice.Specifically, the channel from Alice to Willie is denoted as 𝐡_ w=√(l_ w)𝐠_ w, where 𝐠_ w∼𝒞𝒩( 0_N × 1,Ω _ w) is the small-scale Rayleigh fading channel coefficient of the Alice-Willie link and the positive-semidefinite matrix Ω _ w represents the spatial correlation matrix. The statistical characteristics of 𝐡_ w is firstly depicted and then the robust covert ISAC system design is further investigated according to it. §.§ Transmission Scheme Alice transmits DFAN to confuse Willie on the detection of the covert communication between Alice and Bob. The transmit DFAN signal at Alice is defined as x_ AN(k)=√(P_ A) x_ A(k), where x_ A(k)^2 = 1 should be satisfied, with k ∈{ 1,⋯,K} denotes the index of the signal symbol. Let H_0 denote the null hypothesis which indicates that Alice is not transmitting private data stream to Bob, while H_1 denotes the alternate hypothesis which indicates an ongoing covert transmission from Alice to Bob. To elaborate, in H_0, the DFAN signal x_ AN(k) is transmitted to Willie, which aims at simultaneously sensing targets and confusing Willie. In H_1, in addition to the DFAN signal x_ AN(k), Alice transmits the covert communication signal 𝐰_1s_ b(k) to Bob. We use 𝐰_1∈ℂ^N× 1 to denote the corresponding transmit information beamforming vector in hypothesis H_1. The information symbol s_ b(k) is assumed to be statistically independent as well as with zero mean and unit power. Moreover, the DFAN signal x_ AN(k) is assumed to be independent with the information symbol and with the covariance matrix 𝐓 = 𝔼[x_ AN(k) x_ AN^H(k)]≽0. Therefore, from Willie’s perspective, Alice’s transmitted signal is given by x(k) = {[ x_ AN(k), H_0,; 𝐰_1s_ b(k) +x_ AN(k), H_1. ]. Due to the DFAN signal is exploited for multibeam transmission, hence the covariance matrix 𝐓 is assumed to be of a general rank with 0 ≤ rank(𝐓)=n ≤ N. By exploiting the eigenvalue decomposition of 𝐓, the DFAN signal x_ AN(k) can be decomposed into n linearly and statistically independent sensing beams, i.e., 𝐓 = ∑_i = 1^n λ _i𝐱_i𝐱_i^H= ∑_i = 1^n 𝐰_ a,i𝐰_ a,i^H , where λ _i∈ℝ is the eigenvalue and 𝐱_ i∈ℂ^N× 1 is the corresponding eigenvector. The vector 𝐰_a,i = √(λ _i)𝐱_i is the transmit beamformer for x_ AN(k). 
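The eigenvalue decomposition of the DFAN covariance into per-beam precoders can be sketched as follows (a small numerical illustration with our own names, assuming a Hermitian positive semidefinite 𝐓; eigenvalues below a numerical tolerance are discarded).

import numpy as np

def dfan_beams(T, tol=1e-9):
    # T = sum_i w_i w_i^H with w_i = sqrt(lambda_i) x_i from the eigen-decomposition of T.
    eigvals, eigvecs = np.linalg.eigh(T)
    return [np.sqrt(lam) * eigvecs[:, k] for k, lam in enumerate(eigvals) if lam > tol]

# Self-check of the decomposition:
# assert np.allclose(sum(np.outer(w, w.conj()) for w in dfan_beams(T)), T)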
In H_1, inspired by SIC technologies, the sensing interference caused by the introduced DFAN signal x_ AN(k) is canceled to facilitate the covert communication between Alice and Bob. Specifically, information is embedded into part of the DFAN signal and the information-embedded DFAN signal is treated as virtual communication signals on top of the real communication signals. Thus, the DFAN signal in H_1 can be rewritten as x_ AN(k)= ∑_i ∈ D𝐰_a,ix_ v,i(k) +x̅_ AN(k), where D = { 1,....,D} (1 ≤ D ≤ n) denotes the set of virtual communication signals and the symbols {x_ v,i(k)} _i ∈ D are independent, with zero mean and unit power. It is assumed that {x_ v,i(k)} _i ∈ D are independent of x̅_ AN(k). Hence, the covariance matrix 𝐓̅ of the rest of the DFAN signal x̅_ AN(k) is given by 𝐓̅= 𝔼[ x̅_ AN(k)x̅_ AN^H(k)]=∑_i = D + 1^n 𝐰_a,i𝐰_a,i^H . §.§ Covert Communications As illustrated in the last subsection, Alice transmits a DFAN signal x_ AN(k) and a covert signal s_ b(k) to Bob. Here, Alice’s interference signal x_ AN(k) is exploited as a cover for Bob’s covert signal s_ b(k). Then, in H_1, the signal received at Bob is given by y_b(k) = 𝐡_ b^H(𝐰_1s_ b(k)+x_ AN(k))+ n_ b(k), where n_ b denotes the additive white Gaussian noise (AWGN) at Bob with zero mean and variance σ _ b^2. Thus the covert communication rate at Bob is given by R_ b = log _2(1 + | 𝐡_ b^H𝐰_1|^2/𝐡_ b^H𝐓𝐡_ b + σ _ b^2), where 𝐡_ b^H𝐓𝐡_ b denotes the sensing interference power induced by the DFAN. Note that in H_0, the signal received at Bob is given by y_b(k) = 𝐡_ b^H x_ AN(k)+ n_ b(k). In H_0, the received signal at Bob is treated as sensing noise, which cannot be canceled due to the randomness of the DFAN. In this scenario, the DFAN can also be exploited not only to aid the secure communications between Alice and other legitimate users but also to sense the legitimate users simultaneously, which can possibly reduce the tracking overhead without the requirement for CSI feedback and the associated quantization and feedback errors. On the other hand, Willie tries to detect whether there exist covert communications between Alice and Bob or not by carrying out the Neyman-Pearson test based on his received signal sequence y_w(k) for k ∈{ 1,⋯,K}. Hence, as per (<ref>), the received signal at Willie can be expressed as y_w(k) = {[𝐡_ w^H x_ AN(k)+n_ w(k),H_0,; 𝐡_ w^H ( 𝐰_1s_ b(k) +x_ AN(k))+n_ w(k),H_1, ]. where n_ w denotes the AWGN at Willie with zero mean and variance σ _ w^2. Based on the two hypotheses, Willie is assumed to adopt a radiometer for the binary detection. Note that the optimal test for Willie to minimize the detection error probability is the likelihood ratio test on the grounds of the Neyman-Pearson criterion. However, the instantaneous WCSI is not available at Alice, which makes it difficult to directly analyze the detection performance at Willie. Using the average received power at Willie (i.e., ς = 1/K∑_k = 1^K | y_w[ k ]|^2) as the test statistic, the decision rule is given by ςD_0D_1≷ Γ, where Γ >0 is Willie’s detection threshold, and D_1 and D_0 are the binary decisions in favor of H_1 and H_0, respectively. The case of infinite blocklength is considered, i.e., K →+ ∞, where lim_n →∞χ _2n^2/2n = 1 holds according to the Strong Law of Large Numbers. Thus the average received power at Willie can be obtained as ς={[ P_ A| 𝐡_ w^H x_ A|^2+ σ _ w^2,H_0,; | 𝐡_ w^H 𝐰_1|^2 +P_ A| 𝐡_ w^H x_ A|^2+ σ _ w^2, H_1, ].
where the prior probabilities of hypothesesH_0 and H_1 are assumed to be equal for simplicity. Then, the detection error probability at Willie, ξ, is defined as ξΔ = P_ FA + P_ MD, where P_ FA = P(D_1| H_0) denotes the false alarm probability, P_ MD = P(D_0| H_1) denotes the miss detection probability, and 0 ≤ξ≤ 1. To be specific, ξ= 0 implies that Willie can perfectly detect the covert signal without error, while ξ= 1 implies that Willie cannot make a correct detection at the time, i.e., a blind guess. Based on the decision rule, we can derive the analytical expressions of P_ FA and P_ MD under three typical WCSI availability hypotheses which are elaborated before. The detailed analysis of Willie’s DEP will be elucidated in section 3. In addition, ξ^ * ≥1-ϵ is generally adopted as the covertness constraint in covert communications, where ϵ is a small value to determine the required covertness level and ξ^ * denotes the minimum DEP. The false alarm probability P_ FA and miss detection probability P_ MD at Willie are given by (<ref>) and (<ref>), respectively. P_ FA = {[ 1, Γ≤φ _2 + σ _ w^2,; p_1,φ _2 + σ _w^2 < Γ < φ _1 + φ _2 + σ _ w^2,; 0,Γ≥φ _1 + φ _2 + σ _ w^2, ]. where φ _1=P_ A,max| 𝐡_ w^H x_ A|^2, φ _2=| 𝐡_ w^H 𝐭_0|^2 and p_1= 1 - Γ - φ _2 - σ _ w^2/φ _1. P_ MD = {[ 0, Γ≤φ _3 + σ _ w^2,; p_2,φ _3 + σ _w^2 < Γ < φ _1 + φ _3 + σ _ w^2,; 1,Γ≥φ _1 + φ _3 + σ _ w^2, ]. where φ _3=| 𝐡_ w^H( 𝐰_1 + 𝐭_1) |^2 and p_2= Γ - φ _3 - σ _ w^2/φ _1. Based on Proposition 1, the detection error probability ξ at Willie is divided into two scenarios. Firstly, when φ _1 + φ _2 < φ _3, ξ can be derived as ξ = {[ 1, Γ≤φ _2 + σ _ w^2,; p_1,φ _2 + σ _w^2 < Γ < φ _1 + φ _2 + σ _ w^2,;0,φ _1 + φ _2 + σ _w^2 ≤Γ≤φ _3 + σ _ w^2,;p_2,φ _3 + σ _w^2 ≤Γ≤φ _1 + φ _3 + σ _ w^2,; 1,Γ > φ _1 + φ _3 + σ _ w^2. ]. In this case, the minimum value of ξ is 0, which means covert communication is unattainable. The reasons are that φ _1 + φ _2 < φ _3 indicates the power of the mix of the covert communication signal and the dedicated sensing signal in H_1 exceeds the power budget, which is decided by the mix of the AN and the dedicated sensing signal in H_0. Due to the uncertainty created by the AN cannot cover the fluctuation of the signal in H_1, Willie can perfectly tell whether the covert communication occurs or not, which is unacceptable in covert communications. Next, when φ _1 + φ _2≥φ _3, ξ can be derived as ξ = {[ 1, Γ≤φ _2 + σ _ w^2,; p_1,φ _2 + σ _w^2 < Γ <φ _3 + σ _ w^2,; p_3, φ _3 + σ _w^2 ≤Γ≤φ _1 + φ _2 + σ _ w^2,; p_2,φ _1 + φ _2 + σ _w^2 ≤Γ≤φ _1 + φ _3 + σ _ w^2,; 1,Γ > φ _1 + φ _3 + σ _ w^2. ]. where p_3=1+φ _2 - φ _3/φ _1. It can be derived that ξ is a monotonically decreasing function of Γ in the range [ φ _2 + σ _ w^2 ,φ _3 + σ _ w^2 ] and a monotonically increasing function in the range [ φ _1+φ _2 + σ _ w^2 , φ _1 + φ _3 + σ _ w^2 ], thus the optimal value of ξ, denoted by ξ^ *, lies in the range [φ _3 + σ _ w^2 ,φ _1+φ _2 + σ _ w^2 ] and is given as ξ^ *=1+φ _2 - φ _3/φ _1. In the ISAC system, ξ^ * can be directly utilized as the metric of covertness requirement. We assume that the likelihood functions of the received signals of Willie under H_0 andH_1 are expressed as ℙ_0 and ℙ_1, respectively. Specifically, as per (<ref>), ℙ_0 and ℙ_1 are respectively given by ℙ_0 = 1/πλ _0exp ( - | y_w|^2/λ _0), ℙ_1 = 1/πλ _1exp ( - | y_w|^2/λ _1), where λ _0Δ = | 𝐡_ w^H𝐭_0|^2 + σ _w^2 and λ _1Δ = | 𝐡_ w^H𝐰_1|^2 + | 𝐡_ w^H𝐭_1|^2 + σ _w^2. 
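For one channel realization, the minimum DEP of the radiometer detector characterized above can be evaluated directly; the sketch below (illustrative only, with our own function name; t_0 and t_1 stand for the aggregate DFAN-related components under H_0 and H_1) also flags the degenerate case φ_1 + φ_2 < φ_3 in which covert communication is unattainable.

import numpy as np

def willie_min_dep(h_w, x_a, w1, t0, t1, P_A_max):
    # phi_1 = P_A,max |h_w^H x_A|^2, phi_2 = |h_w^H t_0|^2, phi_3 = |h_w^H (w_1 + t_1)|^2
    phi1 = P_A_max * np.abs(np.vdot(h_w, x_a)) ** 2
    phi2 = np.abs(np.vdot(h_w, t0)) ** 2
    phi3 = np.abs(np.vdot(h_w, w1 + t1)) ** 2
    if phi1 + phi2 < phi3:
        return 0.0                        # Willie detects the transmission perfectly
    return 1.0 + (phi2 - phi3) / phi1     # optimal threshold lies in [phi_3 + sigma^2, phi_1 + phi_2 + sigma^2]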
In covert communications, Willie aims to minimize its total detection error probability ξ to detect the presence of the covert transmission. The optimal test that minimizes ξ is the likelihood ratio test, which is given by ℙ_1 /ℙ_0 D_0D_1≷ 1. As per (<ref>), the optimal detection threshold and the corresponding minimum detection error probability ξ^ * at Willie can be derived, which is elaborated in section 4.In addition, ξ^ * ≥1-ϵ is generally adopted as the covertness constraint in covert communications, where ϵ is a small value to determine the required covertness level. However, the direct analysis on the derived ξ^ * is challenging. To overcome this predicament, we resort to Kullback–Leibler (KL) divergences of the likelihood functions to achieve covert communication with the given ϵ. To be specific, lower bounds on ξ^ * can be given as ξ ^ * ≥ 1 - √( D(ℙ_0| ℙ_1.)) , ξ ^ * ≥ 1 - √( D(ℙ_1| ℙ_0.)), where D(ℙ_0| ℙ_1.) denotes the KL divergence from ℙ_0 to ℙ_1, and D(ℙ_1| ℙ_0.) denotes the KL divergence from ℙ_1 to ℙ_0. D(ℙ_0| ℙ_1.) and D(ℙ_1| ℙ_0.) are respectively given as D(ℙ_0| ℙ_1.)=lnλ _1/λ _0 + λ _0/λ _1 - 1,D(ℙ_1| ℙ_0.)=lnλ _0/λ _1 + λ _1/λ _0 - 1. Considering the covertness constraint ξ^ * ≥1-ϵ, we can derive that D(ℙ_0| ℙ_1.)≤ 2 ϵ^2,D(ℙ_1| ℙ_0.)≤ 2 ϵ^2. We note that the transformed constraints are more stringent constraints than ξ^ * ≥1-ϵ. Hence in this work we adopt (<ref>) as the required covertness constraint. The derived ξ^ * in section 4 can be utilized as a theoretical benchmark to evaluate the covert performance in both the perfect WCSI and imperfect WCSI scenario. §.§ Radar Sensing To facilitate sensing design, we define the beampattern gain P(θ ) as the transmit signal power distribution at sensing angle θ∈[- π/2,π/2]. P(θ ) in H_1 is given by P (θ ) = 𝔼(| 𝐚^H(θ )(𝐰_1s_ b +x_ AN)|^2) = 𝐚^H(θ )(𝐖_1+𝐓)𝐚(θ) , where 𝐚(θ ) denotes the steering vector given by 𝐚(θ ) = [1,e^j2πd/λsin (θ ),...,e^j2π (N - 1)d/λsin (θ )]^T, d denotes the antenna spacing and λ denotes the carrier wavelength. Note that 𝐓 denotes the DFAN signal covariance matrix. We assume that the sensing system works in the tracking mode and has prior information on targets (the number of targets M and their initial angle estimation θ̂_m) are known. Thus the beampattern is expected to have the dominant peaks in the target directions. Given the estimated angles of M sensing targets, the desired beampattern can be defined as a square waveform at the target directions given by P^ * (θ )= {[ 1,| θ- θ̂_m| ≤Δθ/2,; 0, otherwise, ]. where Δθ is the desired beam width. Let {θ _s} _s = 1^S denote the S sample angles covering the detector’s angular range θ∈[- π/2,π/2]. Thus in H_1, the MSE between the obtained sensing beampattern and the desired sensing beampattern, which is denoted as F(η,𝐖_1,𝐓), can be given by F(η_1,𝐖_1,𝐓)=1/S∑_s = 1^S | η_1P^ * (θ _s) - 𝐚^H(θ _s)(𝐖_1+𝐓)𝐚(θ _s)|^2, where η denotes the scaling vector. To evaluate the radar sensing performance thoroughly, besides the sensing beampattern MSE, cross correlation among different transmit signal directions should also be considered. To elaborate, the cross correlation is given as L(𝐖_1,𝐓)=2/M^2 - M∑_p = 1^M ∑_q = p + 1^M | 𝐚^H(θ̂_p )(𝐖_1+𝐓)𝐚(θ̂_q ) |^2 . Therefore, the weighted sum of the sensing beampattern MSE and cross correlation is considered as the sensing performance metric, which is given as L(η_1,𝐖_1,𝐓)=F(η,𝐖_1,𝐓)+w_c L(𝐖_1,𝐓), where w_c is a pre-determined weighting factor. 
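The sensing metric just defined can be evaluated directly for candidate covariance matrices; the following sketch (illustrative, with our own helper names, assuming a half-wavelength uniform linear array and Hermitian 𝐖_1 + 𝐓) computes the weighted sum of the beampattern MSE and the cross-correlation term.

import numpy as np

def steering(theta, n, d_over_lambda=0.5):
    # Uniform linear array steering vector a(theta).
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))

def sensing_metric(W1, T, sample_angles, desired, eta, target_angles, w_c):
    R = W1 + T
    n = R.shape[0]
    # beampattern MSE against the scaled desired beampattern eta * P*(theta_s)
    gains = np.array([np.real(steering(t, n).conj() @ R @ steering(t, n)) for t in sample_angles])
    mse = np.mean(np.abs(eta * np.asarray(desired) - gains) ** 2)
    # mean cross correlation between distinct target directions
    cross, pairs = 0.0, 0
    for p in range(len(target_angles)):
        for q in range(p + 1, len(target_angles)):
            ap, aq = steering(target_angles[p], n), steering(target_angles[q], n)
            cross += np.abs(ap.conj() @ R @ aq) ** 2
            pairs += 1
    cross = cross / pairs if pairs else 0.0
    return mse + w_c * cross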
Note that in H_0 the sensing performance metric is similar to that elaborated for H_1; the DFAN can be harnessed to sense targets while confusing Willie. In this work, we focus on the sensing beampattern design in H_1. To elaborate, by optimizing 𝐓 and 𝐰_ 1, the weighted sum of the sensing beampattern MSE and cross correlation can be minimized and thus the optimal sensing performance can be derived under covert rate constraints. § COVERT COMMUNICATIONS UNDER IMPERFECT WCSI In this section, we first delve into the worst-case covert communication performance in the considered covert ISAC system under the imperfect WCSI scenario (including bounded WCSI errors and Gaussian WCSI errors). To elaborate, Willie’s DEP is analyzed in the first place. Since the worst case is considered, in which Willie adopts the optimal detection threshold Γ ^ * to minimize the DEP and achieve the best detection performance, the optimal detection threshold Γ ^ * for Willie is further derived. Thus the minimum DEP at Willie is derived in analytical form. Then the covert communication rate constrained beampattern optimization problems are formulated under bounded WCSI errors and Gaussian WCSI errors. The formulated problems are non-convex and thus difficult to solve; they are tackled by leveraging the S-procedure, the Bernstein-type inequality (BTI) and the difference-of-convex (DC) relaxation method. In particular, the DC relaxation method is used to guarantee the rank-one property of 𝐖_1 by extracting a rank-one solution from the obtained high-rank matrix. §.§ Covert Communications under Bounded WCSI Errors Firstly, the covert communication performance is analyzed. To elaborate, based on the decision rule that ςD_0D_1≷ Γ, the analytical expressions of P_ FA and P_ MD can be respectively given by P_ FA =Pr(P_Aρ _1 + σ _ w^2 > Γ ) = {[1,Γ<σ _ w^2,; 1 - Γ- σ _ w^2 - P_ A,minρ _1/(P_ A,max - P_ A,min)ρ _1,σ _ w^2 < Γ<Δ _1,; 0,Γ>Δ _1, ]. P_ MD =Pr(P_ A ρ _1 + ρ _2 + σ _ w^2 < Γ )= {[0, Γ<Δ _2,; Γ- ρ _2 - σ _ w^2 - P_ A,minρ _1/(P_ A,max - P_ A,min)ρ _1,Δ _2 < Γ<Δ _3,; 1,Γ>Δ _3, ]. where ρ _1 = | 𝐡_ w^H x_ A|^2, Δ _1=P_ A,maxρ _1 + σ _ w^2, ρ _2 = | 𝐡_ w^H𝐰_1|^2, Δ _2= ρ _2 + σ _ w^2, and Δ _3=P_ A,maxρ _1 + ρ _2 + σ _ w^2. As per (<ref>), the DEP at Willie can be derived. When Δ _1 < Δ _2, the minimum value of ξ is 0, and covert communication cannot be achieved in this case. When Δ _1 > Δ _2, the DEP at Willie is given by ξ= {[ 1,Γ< Δ _2,;1 - ρ _2/(P_ A,max - P_ A,min)ρ _1,Δ _2 < Γ< Δ _1,; Γ- ρ _2 - σ _ w^2 - P_ A,minρ _1/(P_ A,max - P_ A,min)ρ _1,Δ _1 < Γ< Δ _3,; 1,Γ> Δ _3. ]. In the range [ Δ _1,Δ _3], ξ is a monotonically increasing function of Γ. Thus, the minimum DEP lies in the range [ Δ _2,Δ _1], i.e., ξ ^ *= 1 - ρ _2/(P_ A,max - P_ A,min)ρ _1. Note that Δ _1 > Δ _2 should be satisfied as a prerequisite to achieving covert communication. Considering the covertness constraint ξ^ * ≥1-ϵ and Δ _1 > Δ _2, we can respectively derive that 𝐡_ w^H(𝐖_1 - ϵ (P_ A,max - P_ A,min)𝐓/P_ A)𝐡_ w≤ 0, 𝐡_ w^H(𝐖_1 - P_ A,max𝐓/P_ A)𝐡_ w≤ 0. Notice that due to ϵ∈[ 0,1], the inequality ϵ (P_ A,max - P_ A,min)≤P_ A,max always holds, hence the covertness constraint (<ref>) is omitted. Our objective is to jointly optimize the covert communication signal and the DFAN signal such that the weighted sum of the sensing beampattern MSE and cross correlation is minimized in H_1, subject to the covert communication requirements under bounded WCSI errors.
Based on the covertness analyses, the covert communication rate constrained beampattern optimization problem is formulated as (P1.1): min_η_1,𝐖_1,𝐓 L(η_1,𝐖_1,𝐓) s.t. (21), R_ b≥R_min, Tr(𝐖_1)≤P_ t-P_ A,max, rank(𝐖_ 1) = 1, where P_ t denotes the total transmit power budget and R_min denotes the covert communication rate threshold. (<ref>) is the covertness constraint in the scenario with bounded WCSI errors. Notice that problem (P1.1) is non-convex due to the non-convex covertness constraint (<ref>) and the covert communication rate constraint (<ref>), and the quadratic form of the beamformers makes the constraint (<ref>) non-convex. Hence problem (P1.1) is difficult to solve. To solve the formulated problem (P1.1), firstly, the derived analytical covertness constraint (<ref>) is tackled by leveraging the S-procedure. To elaborate, according to the considered bounded WCSI model given in (<ref>), we can derive that Δ𝐡_ w^HΔ𝐡_ w≤ε_ w^2. Then the S-procedure is introduced to convexify the covertness constraint (<ref>), which is given in the following lemma. (S-procedure): Suppose that f_i(𝐱) = 𝐱^H𝐀_i𝐱 + 2 Re{𝐛_i^H𝐱}+ c_i,i = 1,2, where 𝐱∈ℂ^N× 1, 𝐀_i∈ℂ^N× N, 𝐛_i∈ℂ^N× 1 and c_i∈ℝ. The condition f_1≤ 0 ⇒f_2≤ 0 holds if and only if there exists a variable λ such that λ[ [ 𝐀_1 𝐛_1; 𝐛_1^H c_1 ]] - [ [ 𝐀_2 𝐛_2; 𝐛_2^H c_2 ]]≽ 0. According to Lemma 1, the covertness constraints are equivalent to [ [λ_1 𝐈 - 𝐒_1 - 𝐒_1𝐡̂_ w; - 𝐡̂_ w^H𝐒_1 - λ_1ε _ w^2 - 𝐡̂_ w^H𝐒_1𝐡̂_ w ]]≽ 0, [ [λ_2 𝐈 - 𝐒_2 - 𝐒_2𝐡̂_ w; - 𝐡̂_ w^H𝐒_2 - λ_2ε _ w^2 - 𝐡̂_ w^H𝐒_2𝐡̂_ w ]]≽ 0, where 𝐒_1=𝐖_1 - ϵ (P_ A,max - P_ A,min)𝐓/P_ A, 𝐒_2=𝐖_1 - P_ A,max𝐓/P_ A, and λ_1,λ_2≥0 are the new auxiliary optimization variables. Next, to guarantee the rank-one property of 𝐖_ 1, we adopt the DC relaxation method to extract the rank-one solution from the high-rank matrix <cit.>. In particular, (<ref>) can be reformulated as Tr(𝐖_ 1^H(𝐈 - 𝐰_ l𝐰_ l^H)) ≤ϱ _𝐰_l, where 𝐰_l∈ℂ^N × 1 denotes the leading eigenvector of 𝐖_ 1 derived in the previous iteration, and ϱ _𝐰_l→ 0 is the penalty factor. Thus the reformulated problem can be given as (P1.2): min_η_1,𝐖_1,𝐓 L(η_1,𝐖_1,𝐓) s.t. (25), (26), 𝐡_ b^H𝐖_1𝐡_ b≥(2^R_min- 1)(𝐡_ b^H𝐓𝐡_ b + σ _ b^2), Tr(𝐖_1)≤P_ t-P_ A,max. Problem (P1.2) is a convex quadratic semidefinite programming (QSDP) problem. By utilizing off-the-shelf convex programming numerical solvers such as CVX <cit.>, (P1.2) can be optimally tackled. §.§ Covert Communications under Gaussian WCSI Errors Firstly, the covert communication performance is analyzed. The DEP at Willie can be derived similarly to the mathematical manipulations in the bounded WCSI errors scenario, which is thus omitted here. In the Gaussian WCSI errors scenario, the outage probability should be considered. Thus, based on the covertness constraint (<ref>), the outage probability constraint for the covertness of the covert ISAC system can be given as Pr( 𝐡_ w^H𝐒_1𝐡_ w≤ 0) ≥ 1 - ρ _c, where 𝐒_1=𝐖_1 - ϵ (P_ A,max - P_ A,min)𝐓/P_ A. To tackle the outage probability constraint (<ref>), the BTI is harnessed <cit.>, which is given in the following lemma. (BTI): For any 𝐀∈ℂ^N × N, 𝐛∈ℂ^N × 1, c ∈ℝ, 𝐱∼ C N(0,𝐈) and ρ∈[ 0,1], if there exist x and y such that Tr(𝐀) - √(2ln (1/ρ))x+ ln(ρ )y + c ≥ 0, √(𝐀_F^2 + 2𝐛^2)≤ x, y𝐈 + 𝐀≽ 0,y ≥ 0, the following inequality holds true: Pr(𝐱^H𝐀𝐱 + 2 Re{𝐱^H𝐛}+ c≥ 0)≥ 1-ρ. Recall that 𝐡_ w =𝐡̂_ w + γ_ w^1/2𝐞_ w.
As per Lemma 2, (<ref>) can be equivalently transformed to the following inequalities Tr(𝐀_ w) - √(2ln (1/ρ_c ))x+ ln(ρ_c )y + c_ w≥ 0, √(𝐀_ w_F^2 + 2𝐛_ w^2)≤ x, y𝐈 + 𝐀_ w≽ 0,y ≥ 0, where 𝐀_ w=γ_ w^1/2(-𝐒_1)γ_ w^1/2, c_ w=𝐡̂_ w^H(-𝐒_1)𝐡̂_ w and𝐛_ w=γ_ w^1/2(-𝐒_1)𝐡̂_ w. Similar to the bounded WCSI errors case, the non-convex rank-one constraint of 𝐖_ 1 is also tackled by the DC relaxation method. Thus based on the covertness analyses, the covert communication rate constrained beampattern optimization problem is formulated as (P2): min_η_1,𝐖_1,𝐓 L(η_1,𝐖_1,𝐓) s.t.(<ref>),(<ref>),(<ref>),𝐡_ b^H𝐖_1𝐡_ b≥(2^R_min- 1)𝐡_ b^H𝐓𝐡_ b + σ _ b^2, Tr(𝐖_1)≤P_ t-P_ A,max,(26), where (<ref>) denotes the outage-constrained constraints in the scenario with Gaussian WCSI errors, where the outage probability should be confined to a certain threshold ρ_c. Problem (P2) is a convex QSDP problem, which can be optimally tackled. § COVERT COMMUNICATIONS UNDER STATISTICAL WCSI In this section, firstly, we dig into the worst-case covert communication performance in the considered covert ISAC system under the statistical WCSI scenario. To elaborate, Willie’s DEP is first analyzed and the optimal detection threshold, Γ ^ *, for Willie is further derived considering the worst case that Willie can adopt the optimal detection threshold Γ ^ * to minimize DEP to achieve the best detection performance. After the minimum DEP at Willie is derived in the analytical expressions, the closed-form expressions of the average minimum DEP are further derived to investigate the covertness of the system. Then the covert communication rate constrained beampattern optimization problem is formulated. The formulated problem is non-convex and thus difficult to solve, which is tackled by leveraging the DC relaxation method. §.§ Covert Communications under statistical WCSI Errors Firstly, the covert communication performance is thoroughly analyzed. We first derive the analytical expressions for P_ FA and P_ MD in closed form. By analyzing the derived analytical expression of DEP, the optimal detection threshold Γ ^ * and the minimum DEP ξ ^ * are obtained. Considering the fact that the instantaneous CSI of channel 𝐡_ w is not available at Alice, we adopt the minimum DEP averaging over variable t _ A as the covertness metric of the considered covert ISAC system. To elaborate, the analytical expressions for P_ FA and P_ MD can be respectively given as (<ref>) and(<ref>), at the bottom of the next page, respectively. P_ FA =Pr(P_At _ A + σ _ w^2 > Γ ) = {[ 1,Γ<σ _ w^2,; 1 - Γ- σ _ w^2 - P_ A,mint _ A/(P_ A,max - P_ A,min)t _ A,σ _ w^2 < Γ<Δ _ A,; 0,Γ>Δ _ A, ]. where Δ _ A=P_ A,maxt _ A+σ _ w^2, t _ A=| 𝐡_ w^H x_ A|^2, t _𝐰_1=| 𝐡_ w^H𝐰_1|^2 λ _𝐰_1=𝐰̅^H𝐰̅ and 𝐰̅=√(l_ w)Ω _ w^1/2𝐰_1. Please refer to Appendix A. As illustrated in section 1, 𝐡_ w=√(l_ w)𝐠_ w, where 𝐠_ w∼𝒞𝒩( 0_N × 1,Ω _ w) is the small-scale Rayleigh fading channel coefficient of the Alice-Willie link. Covert Communication performance analysis under As per(<ref>) and(<ref>), the DEP at Willie is given by ξ= {[1, Γ<σ _ w^2,; 1 + λ _𝐰_1(e^ - Γ- σ _ w^2/λ _𝐰_1 - 1)/p_α+ p_βe^ - Γ- σ _ w^2/λ _𝐰_1,σ _ w^2 < Γ<Δ _ A,;1-λ _𝐰_1e^ - Γ- σ _ w^2/λ _𝐰_1/p_α(e^t _ AP_ A,max/λ _𝐰_1 - e^t _ AP_ A,min/λ _𝐰_1),Γ> Δ _ A, ]. where p_α=(P_ A,max - P_ A,min)t _ A and p_β=P_ A,min/P_ A,max - P_ A,min. We can derive that when σ _ w^2 < Γ<Δ _ A, the DEP at Willie is a decreasing function with respect to (w.r.t.) Γ. However, when Γ> Δ _ A, the DEP at Willie is a increasing function w.r.t. Γ. 
Hence the optimal Γ, denoted by Γ ^ *, is given by Γ ^ * =P_ A,maxt _ A+σ _ w^2. Thus the optimal ξ ^ * = 1 + λ _𝐰_1(e^ - P_ A,maxt _ A/λ _𝐰_1 - 1)/p_α+ p_βe^ - P_ A,maxt _ A/λ _𝐰_1. Similar to Appendix A, we can prove that t _ A∼ exp(λ _ A), where λ _ A=𝐰_ A^H𝐰_ A and 𝐰_ A=√(l_ w)Ω _ w^1/2 x_ A. By averaging ξ ^ * over t _ A, we can get the the average result of the minimum DEP as ξ̅^ * =𝔼_t_ A(ξ̅^ * )=∫_0^ + ∞( 1 + λ _𝐰_1(e^ - P_ A,maxt _ A/λ _𝐰_1 - 1)/p_α. . + p_βe^ - P_ A,maxt _ A/λ _𝐰_1) 1/λ _ Ae^ - t_ A/λ _ Adt_ A=1+ν∫_0^ + ∞e^ - (P_ A,max/λ _𝐰_1 + 1/λ _ A)t_ A - e^ - t_ A/λ _ A/t_ Adt_ A+p_β∫_0^ + ∞1/λ _ Ae^ - (P_ A,max/λ _𝐰_1 + 1/λ _ A)t_ Adt_ A=1+νlnμ+p_βμ, where ν=λ _𝐰_1/(P_ A,max - P_ A,min)λ _ A and μ= λ _𝐰_1/P_ A,maxλ _ A+λ _𝐰_1. To further explore the derived closed-form expression of ξ̅^ *, we denote π=P_ A,max/P_ A,min and τ= λ _ AP_ A,max/λ _𝐰_1, thus we can rewrite ξ̅^ * as ξ̅^ *(τ)=1-( π/π- 1ln (τ+ 1)/τ-1 /π- 11/τ+ 1). The first-order derivative of ξ̅^ *(τ) w.r.t. τ is given by dξ̅^ * (τ )/dτ = - π/(π- 1)(1 + τ )^2τ ^2( τ- (1 + τ )ln (1 + τ ) +(1 + 1/π)τ ^2 - τ (1 + τ )ln (1 + τ ) ). Since when τ > 0, τ- (1 + τ )ln (1 + τ ) and (1 + 1/π)τ ^2 - τ (1 + τ )ln (1 + τ )<0 always holds. Therefore we can derive that dξ̅^ * (τ )/dτ> 0. Recall that τ= λ _ AP_ A,max/λ _𝐰_1, we observe that ξ̅^ * is an increasing function w.r.t. P_ A,max, which indicates that increasing P_ A,max can degrade the wiretap performance of Willie. Note that in (<ref>)π is a constant w.r.t. different P_ A,max and P_ A,min, which has no influence on the monotonicity of ξ̅^ *(τ). Moreover, it can be derived that when P_ A,max→ +∞, ξ̅^ *=1. Despite the potential trade-off between the performance metrics of sensing and covert communications, by appropriately setting the value of τ, both the requirement of sensing and covert communications can be satisfied simultaneously. §.§ Robust Beamforming Optimization Design under Gaussian WCSI Errors Considering the statistical WCSI, we delve into the robust beamforming optimization design. Consistent with the objective proposed under imperfect WCSI, to minimize the weighted sum of the sensing beampattern MSE and cross correlation in H_1, the covert communication signal and the DFAN signal are jointly optimized subject to the covert communication requirements under statistical WCSI. According to the covert communication performance analysis, the expression of DEP’s average result is derived as ξ̅^ *(τ) in (<ref>). Moreover, the following discussion of its monotonicity indicates that ξ̅^ * is an increasing function w.r.t. τ. Considering the general covertness constraint ξ̅^ * ≥1-ϵ, firstly we define f(τ)=π/π- 1ln (τ+ 1)/τ-1 /π- 11/τ+ 1 for ease of expression. Thus the covertness constraint can be transformed to f(τ) ≤ϵ and further rewritten as τ≥τ _ϵ, where f(τ _ϵ)=ϵ. Recall that τ= λ _ AP_ A,max/λ _𝐰_1, where λ _𝐰_1=𝐰̅^H𝐰̅=l_ w𝐰_1^HΩ _ w𝐰_1 and λ _ A=𝐰_ A^H𝐰_ A=l_ w x_ A^HΩ _ w x_ A. Thus, the covertness constraint can be given as x_ A^HΩ _ w x_ AP_ A,max≥τ _ϵ𝐰_1^HΩ _ w𝐰_1. Due to constraint (<ref>), the increasing of transmit power is confined, which cannot always increase the covert communication rate. The covertness constraint (<ref>) can be further transformed as Tr(Ω _ w𝐓/ . -P_ A) ≥τ _ϵ Tr(Ω _ w𝐖_1)/P_ A,max. 
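Before moving to the optimization problem, the closed-form average minimum DEP ξ̅^* = 1 + ν ln μ + p_β μ derived above can be verified against the defining integral by numerical quadrature. A minimal sketch, using hypothetical values of λ_{w_1}, λ_A, P_A,max and P_A,min:

```python
import numpy as np
from scipy.integrate import quad

lam_w1, lam_A, P_max, P_min = 0.8, 1.3, 10.0, 1.0   # hypothetical placeholder values
p_beta = P_min / (P_max - P_min)
nu = lam_w1 / ((P_max - P_min) * lam_A)
mu = lam_w1 / (P_max * lam_A + lam_w1)

def integrand(t):
    xi_star = (1 + lam_w1 * (np.exp(-P_max * t / lam_w1) - 1) / ((P_max - P_min) * t)
                 + p_beta * np.exp(-P_max * t / lam_w1))
    return xi_star * np.exp(-t / lam_A) / lam_A      # averaging over t_A ~ Exp(mean lam_A)

numeric, _ = quad(integrand, 1e-12, np.inf)          # tiny lower limit avoids the 0/0 at t = 0
closed_form = 1 + nu * np.log(mu) + p_beta * mu
print(numeric, closed_form)                          # the two values should agree
```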
After applying the DC relaxation method, the covert communication rate constrained beampattern optimization problem is formulated as (P3): min_η_1,𝐖_1,𝐓 L(η_1,𝐖_1,𝐓),s.t.(<ref>),𝐡_ b^H𝐖_1𝐡_ b≥(2^R_min- 1)𝐡_ b^H𝐓𝐡_ b + σ _ b^2, Tr(𝐖_1)≤P_ t-P_ A,max, (<ref>), where (<ref>) denotes the covertness constraint in the statistical WCSI scenario. Note that in the covertness constraint (<ref>), f(τ _ϵ)=ϵ should be firstly satisfied to get τ _ϵ according to the closed-form expressions of the average minimum DEP derived in section 3. Problem (P3) is a convex QSDP problem, which can be optimally tackled utilizing CVX. § FEASIBILITY-CHECKING BASED DC ALGORITHM DESIGN In this section, a feasibility-checking based DC algorithm design is proposed to solve the optimization problems formulated in the last section, which ensures the initial value of 𝐖_1, denoted by 𝐖_0, in feasible region of the problem and guarantee the convergence of the algorithm. As per the covertness constraints (21) and (44) derived in the imperfect WCSI scenario and the statistical scenario, respectively, we can see that the covertness of the covert ISAC system is influenced by the value of P_ A in both cases. However, due to the fact that P_ A is the instant power of the DFAN, P_ A cannot be optimized due to its randomness to create uncertainty at Willie. In the formulated optimization problems, due to the DC relaxation method being adopted to tackle the rank-one constraint, the initial value of 𝐖_1 should be first set before the iterations. Thus it is utmost to choose a proper initial value of 𝐖_1. To elaborate, the constraints (21) and (44) are recast respectively as P_ A,min≤ϵ (P_ A,max - P_ A,min) 𝐡_ w^H𝐓𝐡_ w/𝐡_ w^H𝐖_1𝐡_ w, P_ A,min≤P_ A,max Tr(Ω _ w𝐓)/τ _ϵ Tr(Ω _ w𝐖_1). Likewise, the covert rate constraint can be reformulated as P_ A,min≤𝐡_ b^H𝐖_1𝐡_ b-σ _ b^2/(2^R_min- 1)𝐡_ b^H x_ A x_ A^H𝐡_ b. Thus the feasibility-checking problem for the imperfect WCSI case and the statistical WCSI case can be formulated respectively given as (P4) and (P5) as follows. (P4): Find𝐖_1 s.t.(26),(<ref>),(<ref>), Tr(𝐖_1)≤P_ t-P_ A,max, (P5): Find𝐖_1 s.t.(26),(<ref>),(<ref>),Tr(𝐖_1)≤P_ t-P_ A,max, where in the imperfect WCSI case, i.e., (P4), the problem can be further reformulated in the bounded and Gaussian WCSI scenarios, which are omitted here for brevity. After checking the feasibility of the initial value of 𝐖_1, the covert communication signal covariance matrix 𝐖_1 and the DFAN signal covariance matrix 𝐓 are jointly optimized such that the weighted sum of the sensing beampattern MSE and cross correlation is minimized in H_1, subject to the covert communication requirements. Based on the procedures above, the overall algorithm for optimal transmit beampattern design is summarized in Algorithm 1, where L_ th denotes the predefined threshold of the reduction of the objective function. § SIMULATION RESULTS In this section, we provide numerical results to evaluate the performance of our proposed DFAN design for covert ISAC systems. Unless specified otherwise, the simulation parameters are set as follows. Alice is equipped with N = 10 transmit antennas and the normalized spacing between two adjacent antennas is set as d/λ = 0.5. The large-scale path loss is modeled as l_xy =ζ _0(d_0/ . - d)^α for all channels, where ζ _0 is the path loss at the reference distance d_0 = 1 m, α is the path-loss exponent, and d is the link distance. The large-scale path loss of the Alice-Bob link and the Alice-Willie link are denoted as l_ b and l_ w, respectively. 
For small-scale fading, the Alice-Bob channel is assumed to be Rician fading, i.e., 𝐆 = √(κ/1 + κ)Ĝ+ √(1/1 + κ)𝐆̃, where κ denotes the Rician factor. Ĝ denotes the line-of-sight (LoS) component modeled as the product of the steering vectors of the receive arrays and the transmit arrays. 𝐆̃ denotes the non-line-of-sight (NLoS) component modeled as the Rayleigh fading. The Alice-Willie channel is assumed to be Rayleigh fading, i.e., 𝐡_ w∼ C N( 0,𝐈). The distance between Alice and M targets/Bob is set as 50 m.The path loss at the reference distance of 1 meter, i.e., ζ _0, is set as ζ _0=-30 dB. The path-loss exponents of the Alice-Bob channel and the Alice-Willie channel are set as α_ b=α_ e=2.5. The noise power at Bob and Willie is set as σ _b^2=σ _e^2=-80 dBm. The Rician factor κ is set as 5. The outage probability is set as ρ _c=0.05. For the Bounded and Gaussian WCSI error model, we set ε_ w=0.1| 𝐡̂_ w| and γ_ w=0.01𝐡̂_ w^2𝐈_N/N, respectively. Besides concealing the signal transmission to Bob located in the direction of 0^∘, Alice also senses M=4 targets in the directions of -45^∘, -20^∘, 20^∘ and 45^∘, thus θ̂_m is given as θ̂_m={-45^∘,-20^∘, 20^∘, 45^∘}. We choose S = 180 angles for {θ _s} and the desired beam width Δθ is set as 10^∘. The covertness constant is set as ϵ= 0.1, P_A,max and P_A,min are set as 10 w and 1 w, respectively. Moreover, the instant power of the DFAN is set as 5 w. The convergence performance of the proposed feasibility-checking based DC algorithm over a specific channel realization when R_ min = 8 bps/Hz is shown in Fig. 2. We can observe that the sensing beampattern error can quickly converge to a stable value. Moreover, as the number of iterations increases, the penalty factor is gradually reduced to almost zero, which guarantees that the derived 𝐖_1 is nearly rank-one. The fast convergence also shows the efficiency of the proposed algorithm, which is because the feasibility problem is firstly solved to derive a initial value of 𝐖_1 in the feasible region of the problem. Thus the convergence of the algorithm is guaranteed and accelerated. Figures 3(a)-(c) show the covert rate versus P_b,max under bounded, Gaussian and statistical WCSI, in which the acceptable maximum sensing beampattern error L(η_1,𝐖_1,𝐓) is set as 20; 30; 40 and 50, where P_b,max denotes the power allocated for covert communication. It is observed that when P_b,max increases, the covert rate first increases while finally converging to a certain value. The reasons are that as P_b,max increases, more transmit power can be allocated to information signals towards Bob, leading to the increment of the covert rate. However, the covertness constraint becomes more stringent with the increase of P_b,max, and the transmit power allocated to information signals towards Bob is confined to guarantee covert communication. It is also observed that smaller acceptable maximum sensing beampattern error leads to a larger covert rate, which is due to that higher sensing performance requirements limit the available design DoF of the information signal and more power should be allocated for sensing. Moreover, it is observed that in the statistical WCSI scenario, the rate of convergence is relatively slow compared to that in the bounded and Gaussian WCSI scenarios. This is because compared to the other two cases, the covertness constraint is more stringent in the statistical scenario, resulting in the decrease of the available design DoF of the DFAN. 
Thus more power should be allocated for sensing to compensate for the sensing performance deterioration of the DFAN. Comparing the converged value of the three considered scenarios under the same sensing performance requirement, it can be obtained that the gap between the covert rate achieved in the statistical scenario and that achieved in the other two cases is narrow. This shows the robustness of our proposed DFAN covert ISAC system design that the joint sensing and covert communication performance in the statistical scenario is comparable to the other two cases. Fig. 4 depicts the covert rate versus P_A,max under statistical WCSI, where x denotes the x-axis value and P_A is set as P_A=( P_A,max-1)w. It is observed that when P_A,max increases the covert rate increases at first and then decreases. This is because the trade-off between covertness and covert rate induced by the variation of power allocation between P_A,max and the information signal. As P_A,max increases, the covertness constraint is relaxed, and thus more power can be allocated to the information signal to improve the covert rate. However, the increment of P_A,max cannot always improve the covert rate, which is due to the fact that less power is allocated to the information signal when the covertness constraint is further relaxed. Furthermore, it can also be observed that generally increasing the total transmit power can significantly increase the covert rate. Nonetheless, when the total transmit power is sufficiently large, the covertness constraint is stringent. This causes serious deficiency of the power allocated to the information signal and thus degrades the covert rate. Fig. 5 plots the sensing beampattern error versusR_ min. To verify the effectiveness of the proposed DFAN covert ISAC system design, the following benchmark schemes are considered for comparison: •Sensing only: Both the DFAN and the information signal are exploited to minimize the sensing beampattern errors without covert rate requirement. •Single functional AN: The AN is only harnessed to confuse Willie to achieve covert communication without sensing function. Thus in H_0, the system has no sensing function. In H_1, the information signal is fully exploited to achieve covert communication and sense targets simultaneously. •Without AN: Due to the lack of AN, covert communication cannot be achieved in this case, the information signal is fully exploited to sense targets, i.e., minimizing the sensing beampattern errors, while satisfying the minimum non-covert communication rate requirement. From Fig. 5 we can observe that the proposed DFAN design achieves significantly lower sensing beampattern errors than that of the single functional AN design benchmark scheme within the regime of R_ min. Moreover, when R_ min is relatively small, the sensing beampattern errors achieved by the proposed DFAN design approach that of the Without AN benchmark and the sensing only benchmark under both imperfect and statistical WCSI. The reasons are that for the single functional AN design, the information signal is fully exploited to achieve covert communication and sense targets simultaneously, which highly restricts the available design DoF of the information signal that even a small covert rate requirement leads to great sensing beampattern errors. 
However, the proposed DFAN design harnesses the AN to assist in sensing, which shares the sensing function with the information signal, thus can unleash its potential to achieve a distinct higher covert rate than the single functional AN design. Moreover, in H_0, the single functional AN design has no sensing function, which further limits its application scenarios. Due to the fact that no covertness is required in the Without AN design, the sensing beampattern errors are much close to the sensing only benchmark and are obviously lower than that achieved by the schemes with covertness requirements, especially when R_ min is relatively large.Furthermore, in contrast to the Without AN benchmark and the sensing only benchmark, the proposed DFAN design achieves comparable sensing performance even in the statistical WCSI scenario when R_ min is relatively small. Also, among the three cases considered in the proposed DFAN design, the sensing performance of the statistical WCSI case approaches that of the imperfect WCSI cases when R_ min is small. This shows the robustness of our proposed DFAN design and is consistent with the analyses in Fig. 3. Combining the observations and analyses in Fig. 3 and Fig. 5, two guidelines are given below: 1) To achieve covert communication in an ISAC system, DFAN is preferred rather than single-functional AN. 2) When the covert rate or the sensing performance requirement is low, it is preferable to adopt the statistical WCSI hypothesis to improve the robustness of the covert ISAC design, while achieving a comparable performance as shown in Fig.3 and Fig.5. Figs. 6 and 7 depict the transmit beampattern under bounded WCSI and the detailed analyses corresponding to the three considered scenarios. Considering a worse channel setting, the Rician factor κ is set as κ=1 to guarantee the robustness of the proposed DFAN signal scheme. Two benchmark schemes are considered for comparison: •Dedicated sensing signal: In this scheme the dedicated sensing signal is utilized to facilitate sensing performance, where the sensing interference induced by the dedicated sensing signal can be canceled by successive interference cancellation (SIC) technologies. However, in this scenario covert communication cannot be achieved. To elaborate, due to the fact that dedicated sensing signal is exploited for multibeam transmission, the covariance matrix 𝐓 is assumed to be of a general rank with 0 ≤ rank(𝐓)=n ≤ N. By exploiting the eigenvalue decomposition of 𝐓, the dedicated sensing signal x_ D(k) can be decomposed into n linearly and statistically independent sensing beams, i.e., 𝐓 = ∑_i = 1^n λ _i𝐱_i𝐱_i^H= ∑_i = 1^n 𝐰_ a,i𝐰_ a,i^H , where λ _i∈ℝ is the eigenvalue and 𝐱_ i∈ℂ^N× 1 is the corresponding eigenvector. The vector 𝐰_a,i = √(λ _i)𝐱_i is the transmit beamformer for x_ D(k). The dedicated sensing signal in H_1 can be rewritten as x_ D(k)= ∑_i ∈ D𝐰_a,ix_ v,i(k) +x̅_ D(k), where D = { 1,....,D} (1 ≤ D ≤ n) denotes the set of virtual communication signals and the symbols {x_ v,i(k)} _i ∈ D are independent as well as with zero mean and unit power. It is assumed that {x_ v,i(k)} _i ∈ D are independent with x̅_ D(k). Hence, the covariance matrix 𝐓̅ of the rest of the dedicated sensing signal x̅_ D(k) is given by 𝐓̅ = 𝔼[ x̅_ D(k)x̅_ D^H(k)]=∑_i = D + 1^n 𝐰_a,i𝐰_a,i^H . The signal received at the Bob is given by R_ b = log _2(1 + | 𝐡_ b^H𝐰_1|^2/𝐡_ b^H𝐓̅𝐡_ b + σ _ b^2). We also assume that only one beam of the dedicated sensing signal is embedded with information, i.e., D= 1. 
•Ideal interference cancellation: Bob can perfectly eliminate the sensing interference, which serves as a performance upper bound of the considered scenario. From Fig.7, we can observe that compared with the other benchmark schemes, the proposed DFAN signal scheme achieves a sensing beampattern with comparable performance, with peaks in the target directions and small power leakage in the undesired directions. Moreover, the sensing beampattern errors of the proposed DFAN signal scheme, the dedicated sensing signal scheme, and the ideal interference cancellation scheme are given as 9.215, 7.490, and 7.523, respectively. This is because the dedicated sensing signal can be fully leveraged to facilitate sensing and communication performance without covertness constraints, leading to a smaller sensing beampattern error compared to the other two schemes. Moreover, the relatively narrow gap between the proposed DFAN signal scheme and the other two schemes further demonstrates the effectiveness of the proposed DFAN signal design. The sensing performance difference between the proposed DFAN signal scheme and the dedicated sensing signal scheme shows that the DoF of the dedicated sensing signal can be fully exploited to facilitate communication performance. However, the randomness in the DFAN signal to achieve covert communication will inevitably degrade the covert communication performance, which unveils the trade-off between covertness and covert communication performance. We dig deep into the three considered scenarios in Fig.6, it can be observed that in the ideal interference cancellation scheme and the dedicated sensing signal scheme, the information signal beampattern achieves only one dominant peak in the LoS direction. The reasons are that the interference is canceled by SIC technologies in the dedicated sensing signal scheme. However, in terms of the proposed DFAN signal scheme, in the information signal beampattern, to overcome the interference caused by the DFAN, the information signal peaks in several sensing directions and the LoS direction to satisfy the covertness requirement. Furthermore, it can be observed that in the three scenarios, the information signals are all confined in the LoS direction or the desired sensing directions, which is due to the sensing beampattern errors, i.e., the gap between the desired beampattern and the joint information and sensing signal beampattern are minimized. Moreover, it can be observed in the three cases that the sensing signal beampattern achieves a non-zero value in the direction of 0 degree. For the proposed DFAN signal scheme and the ideal interference cancellation scheme, the reasons are that the sensing signal is also constrained by the covertness of the covert ISAC system. A non-zero value in the direction of 0 degree, i.e., the LoS direction, is utilized to conceal the signal transmission of Bob. However, in the dedicated sensing signal scheme, a non-zero value in the direction of 0 degree is due to the SIC constraint. § CONCLUSION In this paper, we studied covert communications in an ISAC system, where the DFAN was harnessed to confuse Willie and sense the targets simultaneously. The robust design considered not only the imperfect WCSI but also the statistical WCSI, where the worst-case scenario was also considered that Willie can adaptively adjust the detection threshold to achieve the best detection performance, and the minimum DEP at Willie was derived in closed form. 
Moreover, in the statistical WCSI case, the closed-form expressions of the average minimum DEP are derived. The formulated sensing beampattern error minimization problems were tackled by a feasibility-checking based DC algorithm utilizing S-procedure, Bernstein-type inequality, and the DC relaxation method. Simulation results validated the feasibility of the proposed scheme. The results also revealed that: 1) To achieve covert communication in an ISAC system, DFAN is preferred rather than single-functional AN. 2) When the covert rate or the sensing performance requirement is low, it is preferable to adopt the statistical WCSI hypothesis to improve the robustness of the covert ISAC design, while achieving comparable performance. § APPENDIX A: DERIVATION OF DETECTION ERROR PROBABILITY According to the decision rule ςD_0D_1≷ Γ and the uniform distribution of variable P_ A, the false alarm probability of Willie can be derived as P_ FA =Pr(P_At _ A + σ _ w^2 > Γ ) = {[ 1,Γ<σ _ w^2,; 1 - Γ- σ _ w^2 - P_ A,mint _ A/(P_ A,max - P_ A,min)t _ A,σ _ w^2 < Γ<Δ _ A,; 0,Γ>Δ _ A. ]. To further analyze the miss detection probability of Willie, due to only the statistical CSI of 𝐡_ w being available at Willie, we define t _𝐰_1=| 𝐡_ w^H𝐰_1|^2 and λ _𝐰_1=𝐰̅^H𝐰̅, where 𝐰̅=√(l_ w)Ω _ w^1/2𝐰_1. It can be obtained that t _𝐰_1 is an exponential random variable whose PDF is f_t_w_1(x) = 1/λ _w_1 e ^ - t_w_1/λ _w_1. Thus the false alarm probability of Willie can be derived as (<ref>) at the bottom of the previous page. Before thorough problem formulation under these two scenarios, 𝐭_0 is predetermined in H_0 and can be derived by solving the following problem: (P0): min_𝐭_0∑_s = 1^S | η P^ * (θ _s) - 𝐚^H(θ _s)𝐓_0𝐚(θ _s)|^2s.t. Tr(𝐓_0)≤P_t𝐓_0=∑_i = 1^K 𝐰_c,i𝐰_c,i^H. In the scenario with perfect WCSI, the covert communication rate constrained beampattern optimization problem is formulated as (P1): min_η,𝐭_0,𝐓̅_1,𝐰_ 1,{𝐰_1,i} _i ∈ DF(η,𝐓̅_1,𝐰_ 1,{𝐰_1,i} _i ∈ D) s.t. D(ℙ_0| ℙ_1.)= D(ℙ_1| ℙ_0.)=0,R_ b≥R_min,| 𝐡_ b^H𝐰_1,i|^2≥| 𝐡_ b^H𝐰_1|^2,∀ i ∈ D, Tr(𝐓) =Tr(𝐓_0)= P_ t,𝐓̅_1≽0,∑_s = 1^S | η P^ * (θ_s) - 𝐚^H(θ _s)𝐓_0𝐚(θ_s)|^2≤ε, where P_ t denotes the total transmit power budget, R_min denotes the covert communication rate threshod, and ε denotes the sensing performance threshold in H_0. Notice that problem (P1) is non-convex due to the non-convex covert communication rate constraint (<ref>) and the quadratic form of the beamformers makes the constraints (<ref>), (<ref>), and (<ref>) non-convex. Hence problem (P1) is difficult to solve. The solution to problem (P1) will be elaborated in section 3. Next, in the scenario with imperfect WCSI, the covert communication rate constrained beampattern optimization problem is formulated as (P2): min_η,𝐭_0,𝐓̅_1,𝐰_ 1,{𝐰_1,i} _i ∈ DF(η,𝐓̅_1,𝐰_ 1,{𝐰_1,i} _i ∈ D) s.t. D(ℙ_0| ℙ_1.)≤ 2 ϵ^2 or D(ℙ_1| ℙ_0.)≤ 2 ϵ^2,R_ b≥R_min,| 𝐡_ b^H𝐰_1,i|^2≥| 𝐡_ b^H𝐰_1|^2,∀ i ∈ D, Tr(𝐓)=Tr(𝐓_0)= P_ t,𝐓̅_1≽0,Δ𝐡_ w≤ε_ w, ∑_s = 1^S | η P^ * (θ_s) - 𝐚^H(θ _s)𝐓_0𝐚(θ_s)|^2≤ε. Compared to problem (P1), constraints (<ref>) and (<ref>) make problem (P2) even more challenging to solve. Despite the difficulty, the solution to problem (P2) will be elaborated in section 4. We should note that due to the covertness constraint in (<ref>) and (<ref>), the sensing function and covert communication performance will degrade corresponding to the decrease of λ _0. As per (<ref>), λ _0 is determined by the value of 𝐭_0, which is subject to constraints (<ref>) and (<ref>) in H_0, i.e., limited by ε. 
Thus the sensing function and covert communication performance in H_0 and H_1 are highly correlated, which makes problems (P1) and (P2) challenging to solve. Moreover, in (<ref>) and (<ref>), we consider equality constraints so that all of the available transmit power can be utilized to facilitate the sensing performance and assist covert communications.
Runzhe Tang, Long Yang, Lv Lu, Zheng Zhang, Yuanwei Liu, and Jian Chen, "Dual-Functional Artificial Noise (DFAN) Aided Robust Covert Communications in Integrated Sensing and Communications," arXiv:2312.16621 [cs.IT], December 2023.
Detection of Asymmetry in the Narrow Fe Kα Emission Line in MCG-5-23-16 with Chandra
Jon M. Miller 0000-0003-2869-7682

A triangular partition is a partition whose Ferrers diagram can be separated from its complement (as a subset of ^2) by a straight line. Having their origins in combinatorial number theory and computer vision, triangular partitions have been studied from a combinatorial perspective by Onn and Sturmfels, and by Corteel et al. under the name plane corner cuts, and more recently by Bergeron and Mazin. In this paper we derive new enumerative, geometric and algorithmic properties of such partitions. We give a new characterization of triangular partitions and the cells that can be added or removed while preserving the triangular condition, and use it to describe the Möbius function of the restriction of Young's lattice to triangular partitions. We obtain a formula for the number of triangular partitions whose Young diagram fits inside a square, deriving, as a byproduct, a new proof of Lipatov's enumeration theorem for balanced words. Finally, we present an algorithm that generates all the triangular partitions of a given size, which is significantly more efficient than previous ones and allows us to compute the number of triangular partitions of size up to 10^5.

Keywords: triangular partition, corner cut, balanced word, Young's lattice.

Mathematics subject classification: 05A17, 05A15, 05A19, 05A16.

§ INTRODUCTION

An integer partition is said to be triangular if its Ferrers diagram can be separated from its complement by a straight line. Triangular partitions and their higher-dimensional generalizations have been studied from different perspectives during the last five decades. They first appeared in the context of combinatorial number theory <cit.>, where they were called almost linear sequences. Later, the closely related notion of digital straight lines became relevant in the field of computer vision <cit.>. From a combinatorial perspective, triangular partitions were first studied by Onn and Sturmfels <cit.>, who defined them in any dimension and called them corner cuts. Soon after, Corteel et al. <cit.> found an expression for the generating function for the number of plane corner cuts. More recently, motivated by work of Blasiak et al. <cit.> generalizing the shuffle theorem for paths under a line, Bergeron and Mazin <cit.> coined the term triangular partitions and studied some of their combinatorial properties.

In this paper we obtain further enumerative, geometric, poset-theoretic, and algorithmic properties of triangular partitions.

The paper is structured as follows. In Section <ref> we give basic definitions and summarize some of the previous work on triangular partitions. In Section <ref> we give a simple alternative characterization of triangular partitions, as those for which the convex hull of the Ferrers diagram and that of its complement (as a subset of ^2) have an empty intersection. We also characterize which cells can be added to or removed from the Young diagram while preserving triangularity. In Section <ref> we study the restriction of Young's lattice to triangular partitions. It was shown in <cit.> that this poset is a lattice. Here we completely describe its Möbius function, and we provide an explicit construction of the join and the meet of two triangular partitions.
In Section <ref>, we introduce a new encoding of triangular partitions in terms of balanced words, and use it to implement an algorithm which computes the number of triangular partitions of every size n≤ N in time (N^5/2). This allows us to produce the first 10^5 terms of this sequence, compared to the 39 terms that were known previously. In Section <ref>, refining the approach from <cit.>, we obtain generating functions for triangular partitions with a given number of removable and addable cells. In Section <ref>, we provide a formula for the number of triangular partitions whose Young diagram fits inside a square (or equivalently, inside a staircase), which involves Euler's totient function. As a byproduct, we obtain a new combinatorial proof of a formula of Lipatov <cit.> for the number of balanced words.We conclude by discussing some generalizations of triangular partitions in Section <ref>, and proposing possible directions for future research.§ BACKGROUND §.§ Triangular partitionsA partition λ is a weakly decreasing sequence of positive integers, often called the parts of λ. We will write λ=(λ_1,λ_2,…,λ_k), or λ=λ_1λ_2…λ_k when there is no confusion. We call |λ|=λ_1+λ_2+…+λ_k the size of λ. If |λ|=n, we say that λ is a partition of n.We useto denote the set of positive integers. The Ferrers diagram of λ is the set of lattice points{(a,b)∈^2| 1≤ b ≤ k, 1≤ a≤λ_b}. The Young diagram of λ is the set of unit squares (called cells) whose north-east corners are the points in the Ferrers diagram; see the examples in Figure <ref>. We identify each cell with its north-east corner, so we will also use the term cell to refer to points in the Ferrers diagram. In particular, we say that a cell lies above, below or on a line when the north-east corner does. Additionally, we will often identify λ with its Ferrers diagram and with its Young diagram, and use notation such as c=(a,b)∈λ.For a partition λ=λ_1λ_2…λ_k, we call λ_1 its width, and k its height. The partition (k,k-1,… ,2,1) will be referred to as the staircase partition of height k, and denoted by σ^k.The conjugate of a partition λ, obtained by reflecting its Ferrers diagram along the diagonal y=x, will be denoted by λ'.Identifying λ with its Ferrers diagram, we define its complement to be the set ^2∖λ.Now we can state the definition of our main objects of study. A partition τ = τ_1τ_2…τ_k is triangular if there exist positive real numbers r and s such thatτ_j = ⌊r - jr/s⌋,for 1≤ j≤ k, and k = ⌊ s - s/r⌋.In other words, τ is triangular if its Ferrers diagram consists of the points in ^2 that lie on or below the line that passes through (0,s) and (r,0).This line, which has equation x/r+y/s=1, will be denoted by _r,s in the rest of the paper. We say that _r,s is a cutting line for τ, or that it cuts off τ. A vector v∈_>0^2 is called a slope vector of τ if it is perpendicular to a cutting line for τ. Unlike in the definition given in <cit.>, here we do not allow τ to have parts equal to 0, hence the condition on k.Denote by Δ(n) the set of triangular partitions of n, and by Δ=⋃_n≥0Δ(n) the set of all triangular partitions. Throughout the paper, we will often use τ to denote a triangular partition. §.§ Enumeration Corteel et al. <cit.> prove that the set of slope vectors of any given nonempty triangular partition τ is an open cone. For the different partitions in Δ(n), these cones are disjoint. To count triangular partitions of n, Corteel et al. find the number of separating rays between these open cones. 
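Before continuing with the enumeration, the definition above is easy to make concrete in code. The first function below lists the parts of the triangular partition cut off by the line _r,s, following the floor formulas of the definition; the second tests triangularity of an arbitrary partition using the arm-and-leg criterion t_λ^- < t_λ^+ of Bergeron and Mazin, which is recalled later in this section. Both are minimal sketches, and the function names and example values are ours.

```python
from math import floor

def partition_under_line(r, s):
    """Parts of the triangular partition cut off by the line x/r + y/s = 1 (r, s > 0)."""
    k = floor(s - s / r)
    return [floor(r - j * r / s) for j in range(1, k + 1)]

def is_triangular(parts):
    """Test t_lambda^- < t_lambda^+ over all cells of the Young diagram."""
    if not parts:
        return True
    conj = [sum(1 for p in parts if p >= i) for i in range(1, parts[0] + 1)]
    t_minus, t_plus = 0.0, 1.0
    for j, part in enumerate(parts, start=1):
        for i in range(1, part + 1):
            arm = part - i               # cells to the right of (i, j) in its row
            leg = conj[i - 1] - j        # cells above (i, j) in its column
            t_minus = max(t_minus, leg / (arm + leg + 1))
            t_plus = min(t_plus, (arm + 1) / (arm + leg + 1))
    return t_minus < t_plus

print(partition_under_line(9.5, 5.2))                         # e.g. [7, 5, 4, 2]
print(is_triangular([8, 6, 5, 3, 1]), is_triangular([2, 2]))  # True, False
```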
Each ray can be identified by a pair of relatively prime numbers (a,b) that determine its slope. If the cone adjacent to such a ray from the right corresponds to a triangular partition τ, then there is a line L perpendicular to (a,b) so that τ consists of the points instrictly below L, along with the leftmost m points in L∩^2, for some m. Letting k=|L∩^2|, and letting i+1 be the x-coordinate of the leftmost cell in L∩^2, and j+1 be the y-coordinate of the rightmost cell in L∩^2, the line L can be encoded by the parameters a,b,i,j,k. Then, τ can be decomposed as shown in Figure <ref>, and its size n is equal toN_Δ(a,b,k,m,i,j) = (k - 1)((a + 1)(b + 1)/2 - 1) + k - 12ab + ij + i(k - 1)a + j(k - 1)b + T(a,b,j) + T(b,a,i) + m,where T(a,b,j) = ∑_r = 1^j(⌊ rb/a⌋ + 1).Corteel et al. <cit.> deduce that the generating function of triangular partitions with respect to their size can be expressed as follows. G_Δ(z) ∑_n≥0|Δ(n)|z^n = 1/1 - z + ∑_(a, b) = 1∑_0≤ j < a 0≤ i < b∑_1≤ m < kz^N_Δ(a,b,k,m,i,j). The term 1/1 - z in the above generating function accounts for the open cone that lies to the left of the leftmost separating ray, along with the empty partition. Exploiting the idea of identifying each triangular partition of n with a pair of relatively prime numbers, Corteel et al. <cit.> obtain the following bounds. There exist positive constants C and C' such that, for all n > 1,C nlog n < |Δ(n)| < C'nlog n. §.§ Addable and removable cellsLet λ = λ_1…λ_k be a partition, and let c = (i,j) be a cell of its Young diagram. Define the arm length and the leg length of c to be (c) = λ_j - i and (c) = λ'_i-j, that is, the number of cells to the right of c in its row, and above c in its column, respectively. Bergeron and Mazin <cit.> give the following characterization of triangular partitions. A partition λ is triangular if and only if t_λ^- < t_λ^+, where t_λ^- = max_c∈λ(c)/(c) + (c) + 1, and t_λ^+ = min_c∈λ(c) + 1/(c) + (c) + 1. In this case, (t, 1 - t) is a slope vector of λ if and only if t_λ^- < t < t_λ^+. Bergeron and Mazin <cit.> also define a moduli space of lines, where each point (r,s)∈_>0^2 represents the line _r,s. The set of lines that pass through the lattice point (a,b)∈^2 are then represented in the moduli space by the hyperbola _a,b = {(r,s)∈_>0^2| (r - a)(s - b) = ab}. Every (r,s) that lies above (respectively, below) _a,b represents a line which cuts off a triangular partition τ whose Young diagram contains (respectively, does not contain) the cell (a,b). Therefore, we can interpret crossing the hyperbola _a,b as “adding” or “removing” the cell (a,b). It is provedin <cit.> that there is a natural bijection between the connected components of _>0^2∖⋃_(a,b)∈^2_a,b and the set Δ of triangular partitions, as shown in Figure <ref>. This interpretation motivates the following definition.A cell of a triangular partition τ is removable if removing it from τ yields a triangular partition. A cell of the complement ^2∖τ is addable if adding it to τ yields a triangular partition. The following results are proved in <cit.>.Every nonempty triangular partition has either one removable cell and two addable cells, two removable cells and one addable cell, or two removable cells and two addable cells. Let τ be a triangular partition and let c∈τ be a removable cell. Then τ has a cutting line _r,s such that c is the only point of τ on _r,s. Let τ be a triangular partition with two removable and two addable cells. 
Then, the line containing the two removable cells is parallel to the line containing the two addable cells. Bergeron and Mazin considered the posetof triangular partitions ordered by containment of their Young diagrams; equivalently, the restriction of Young's lattice to the subset of triangular partitions.The lower portion of the Hasse diagram of this poset appears in Figure <ref>. We write the order relation as τ≤ν, which by definition is equivalent to τ⊆ν, so we use both notations interchangeably. The covering relations incan be described as follows. Let τ,ν∈ such that τ<ν. Then, τ⋖ν if and only if τ is obtained from ν by removing exactly one cell. In particular,is ranked by the size of the partitions. The following property follows from the above description of the moduli space of lines. The posethas a planar Hasse diagram, and it is a lattice. We conclude this section with some definitions from <cit.> that will be useful later on. The diagonal of a triangular partition τ, denoted by ∂_τ, is the set of cells in the segment whose endpoints are the removable cells of τ (with the convention that if there is only one removable cell, then the diagonal is just that cell). The slope of this line will be called the diagonal slope. The interior of τ is τ^∘ = τ∖∂_τ. It is shown in <cit.> that if τ is a triangular partition, then so is τ^∘. Additionally, if ∂_τ contains k≥2 cells, then the Hasse diagram of the interval [τ^∘,τ] is a polygon with 2k sides. It follows that the Hasse diagram oftiles a region of the plane with 2k-gons for k≥2.§ CHARACTERIZATIONS OF TRIANGULAR PARTITIONS Bergeron and Mazin's <cit.> characterization of triangular partitions, given in Lemma <ref> above, requires computing some quotients of arm and leg lengths for all the cells in the partition. In this section, we introduce an alternative and arguably simpler characterization of triangular partitions in terms of convex hulls, along with various ways to identify removable and addable cells. We then use these to describe an algorithm which determines if an integer partition is triangular and finds its removable and addable cells.The convex hull of a set S⊆^2 will be denoted by (S). See Figure <ref> for an illustration of the next proposition.A partition λ is triangular if and only if (λ)∩(^2∖λ) = ∅.If λ is triangular, there exist r,s∈_>0 so that all the points in λ lie on or below the line _r,s, and all the points in ^2∖λ lie above this line. It follows that all the points in(λ) must lie on or below _r,s, and all the points in (^2∖λ) must lie above this line. We conclude that the intersection of these two convex hulls is empty.To prove the converse, suppose that (λ)∩(^2∖λ) = ∅ and that λ is not empty. Convex hulls are closed sets, and (λ) is bounded, hence it is compact. By the hyperplane separation theorem, two disjoint nonempty closed convex sets, one of which is compact, have a hyperplane separating them. Therefore, there exists a line separating λ from its complement, which means that λ is triangular. In the rest of this section, the term vertex is used in the sense of a 0-dimensional face of a polygon; in particular, (τ) may have lattice points in its boundary that are not vertices.If(λ)∩(^2∖λ)≠∅, then one of these two convex hulls must have a vertex that belongs to the other convex hull. Thus, Proposition <ref> is equivalent to the following statement, which will be useful in Section <ref>. 
A partition λ is triangular if and only if no vertex of (λ) belongs to (^2∖λ) and no vertex of (^2∖λ) belongs to (λ).§.§ Finding removable and addable cells This subsection gives characterizations of removable and addable cells of a triangular partition.In any triangular partition τ, its removable cells must be vertices of (τ), and its addable cells must be vertices of (^2∖τ).Suppose that c∈τ is removable, and let L be a cutting line of τ∖{c}. Then c is the only cell of τ that lies above L, which implies that c is a vertex of (τ). Similarly, if c'∈∖τ is addable, let L' be a cutting line of τ∪{c'}. Then c' is the only cell of ∖τ that lies weakly below L', which implies that c' is a vertex of (∖τ).Two cells in a triangular partition τ are removable if and only if they are consecutive vertices of (τ) and the line passing through them does not intersect (^2∖τ). Similarly, two cells in ∖τ are addable if and only if they are consecutive vertices of (^2∖τ) and the line passing through them does not intersect (τ).We will prove the statement for removable cells. The statement for addable cells can be proved analogously. Letc_1 = (a_1,b_1) and c_2 = (a_2,b_2)∈τ be two different cells with a_1≤ a_2, and let L be the line passing through them. To prove the forward direction, suppose that c_1 and c_2 are removable cells. Then, a_1≠ a_2, because otherwise the cell with lower y-coordinate would not be removable. By Lemma <ref>, both must be vertices of (τ). Suppose for contradiction that they are not consecutive vertices. Then there exists a vertex c_3 = (a_3,b_3) of (τ) which lies above L, and such that a_1<a_3<a_2.If a_2- a_3 ≤ a_3 - a_1, then any line that removes c_2 but not c_3 must lie above (2a_3 - a_2, 2b_3 - b_2). But this cell is not in τ, because c_1 lies strictly below and weakly to the left of it. This contradicts the fact that c_2 is removable.If, instead, a_2 - a_3 > a_3 - a_1, we can similarly reach a contradiction with the fact that c_1 is removable. It follows that c_1 and c_2 must be consecutive vertices of (τ).Next we show that L does not intersect (^2∖τ), by arguing that all the points in ^2∖τ lie strictly above L. Indeed, if there was a cell c'∈^2∖τ to the left of c_1 and lying weakly below L, then any line that removes c_2 but not c_1 would cut off a partition that contains c'. But that would contradict the fact that c_2 is removable. A similar argument shows that there cannot be a cell in ^2∖τ to the right of c_2 and lying weakly below L. Finally, any point in ^2 to the right of c_1, to the left of c_2 and weakly below L must belong to τ, since c_1 and c_2 must lie weakly below any cutting line.To prove the backward direction, suppose that c_1 and c_2 are consecutive vertices of (τ), and that L does not intersect (^2∖τ). In particular, a_1 ≠ a_2, because otherwise L would be vertical and it would intersect (^2∖τ). We have that (τ) lies weakly below L, and there are no cells of τ on L to the left of c_1 or to the right of c_2. Also, by hypothesis, (^2∖τ) lies strictly above L. Therefore, L is a cutting line for τ. Let us show that c_1 and c_2 are removable. Let (t, 1 - t) be the slope vector of L. There exists δ > 0 small enough so that the line passing through c_1 = (a_1,b_1) with slope vector (t - δ, 1 - t + δ) is also a cutting line for τ and does not touch any other point of ^2, and there exists a ε > 0 small enough so that the line passing through (a_1, b_1 - ε) with slope vector (t - δ, 1 - t + δ) is a cutting line for τ∖{c_1}. 
This proves that c_1 is removable, and a similar argument shows that so is c_2. An immediate consequence of Proposition <ref> is that a triangular partition can have no more than two removable cells and no more than two addable cells, agreeing with Lemma <ref>. Note that the cell (1,1) in a triangular partition τ is removable if and only if |τ| = 1.A cell c = (a,b) ≠ (1,1) in a triangular partition τ is its only removable cell if and only if it is a vertex of (τ) and both of the following conditions hold: * if a > 1, the line extending the edge of (τ) adjacent to c from the left intersects (^2∖τ) to the right of c;* if b > 1, the line extending the edge of (τ) adjacent to c from below intersects (^2∖τ) above c.The characterization for a cell in ^2∖τ to be the only addable cell of τ is analogous.We will prove the statement about removable cells. The statement about addable cells has a similar proof.To prove the forward direction, suppose that c is the only removable cell of τ. By Lemma <ref>, c is a vertex of (τ). By symmetry, it suffices to prove the first condition, namely the one that assumes a>1. Let c' be the vertex of (τ) adjacent to c from the left. By Proposition <ref>, the line through c and c' must intersect (^2∖τ), since c is the only removable cell. Let q be a point in this intersection. If q lies in the segment between c and c', then q∈(τ)∩(^2∖τ), contradicting Proposition <ref>. If q lies to the left of c', then c'∈(^2∖(τ∖{c})), since c' lies in the segment between c and q, but c' is also in the triangular partition τ∖{c}, again contradicting Proposition <ref>. We deduce that q lies to the right of c.To prove the backward direction, it suffices to show that no cell other than c is removable, since we know by Lemma <ref> that triangular partitions have at least one removable cell.Suppose that there is a removable cell c' to the left of c. Then c' must lie weakly below the line extending the side of (τ) adjacent to c from the left. By hypothesis, there is a point q∈(^2∖τ) to the right of c on this line. But then, any cutting line L for τ∖{c'} would have to pass below c' and weakly above c, and thus also weakly above q. Since q∈(^2∖τ), there would would be a point in ^2∖τ lying weakly below L, which is a contradiction.Similarly, there cannot be a removable cell to the right of c. Finally, any cell of τ with the same x-coordinate as c must have a lower y-coordinate because c is a vertex of (τ), and thus it cannot be removable. For a triangular partition τ, let t^- t^-_τ and t^+ t^+_τ as defined in Lemma <ref>. Next we show how to use these values to find the removable cell(s) of τ. Let m^-=max_(a,b)∈τ t^-a + (1 - t^-)b, let C^- be the set of cells in τ that attain this maximum, and let L^- be the line t^-x + (1 - t^-)y=m^-. Define m^+, C^+ and L^+ analogously. Let c^- be the rightmost cell in C^-, and let c^+ be the uppermost cell in C^+. See Figure <ref> for an example.Let τ be a triangular partition and let c^- and c^+ be as defined above. If c^- = c^+, then this is the only removable cell of τ. If c^-≠ c^+, then both of these cells are removable.We have c^-∈ L^- by construction. If there was a point in ^2∖τ lying weakly below L^- and to the left of c^-, then any cutting line for τ would have a slope vector (t, 1 - t) with t ≤ t^-, in contradiction with Lemma <ref>. 
Moreover, no point in ^2∖τ can lie strictly below L^- and to the right of c^-, because by the same lemma, there is a cutting line with slope vector (t, 1 - t) for any t^-<t<t^+.As a consequence, there exist δ,ε > 0 small enough so that the line passing through c^- -(0,δ) with slope vector (t^- + ε, 1 - t^- - ε) is a cutting line for τ∖{c^-}. Therefore, c^- is removable. An analogous argument shows that c^+ is removable as well.Finally, let us show that if c^- = c^+, then there are no more removable cells. Suppose that there is another removable cell c to the left of c^-, and let L be the line through c and c^-, which must be a cutting line by Proposition <ref>. Since c lies weakly below L^-, the slope vector (t, 1 - t) of L satisfies t≤ t^-, contradicting Lemma <ref>. A symmetric argument shows that there is no removable cell to the right of c^+ either.§.§ An algorithm to determine triangularity Next we consider the problem of determining whether a given partition λ is triangular. A method for this was given by Bergeron and Mazin <cit.>, as described in Lemma <ref>, but it requires computing two values t_λ^- and t_λ^+ for each cell in λ. Next we present a more efficient method, which also yields the removable and addable cells when the partition is triangular. We start with some preliminary results.Let λ= λ_1…λ_k be a partition. A cell in λ is called a corner cell if removing it from λ yields a partition, and a cell in ^2∖λ is called a complementary corner cell if adding it to λ yields a partition. Equivalently, a corner cell is of the form(λ_i,i) with either i = k or λ_i > λ_i + 1; and a complementary corner cell is of the form (λ_1 + 1,1), (1, k + 1), or (λ_i + 1, i) for 2≤ i≤ k such that λ_i - 1>λ_i. For any λ, the number of complementary corner cells is one more than the number of corner cells, which in turn equals the number of distinct parts of λ. Let (λ) be set of corner cells of λ, together with the cells (1,1),(λ_1,1),(1,k). Let '(λ) be set of complementary corner cells of λ, together with the cell (λ_1 + 1, k + 1).The next lemma will help us find the vertices of (λ) and (^2∖λ) efficiently.The following hold: * (λ) = ((λ)), * the vertices of (^2∖λ) are those of ('(λ)) except for (λ_1 + 1, k + 1). Clearly (λ)⊆λ, so ((λ))⊆(λ). For the reverse inclusion, note that every (a,b)∈λ that is not in (λ) can be expressed as a convex combination of other cells in λ, since either (a-1,b),(a+1,b)∈λ or (a,b-1),(a,b+1)∈λ. Thus, the vertices of (λ) must be in (λ), and so (λ)⊆((λ)), proving (a).To prove (b), let us first argue that every vertex of (^2∖λ) must also be a vertex of ('(λ)). Indeed, if c is a vertex of (^2∖λ), there is a line through c that leaves the rest of ^2∖λ strictly on one side; in particular, it leaves the rest of '(λ) strictly on one side. Thus c is a vertex of ('(λ)). Also, it is clear that c≠ (λ_1 + 1, k + 1), since this point is not a vertex of (^2∖λ).Finally, let us show that every vertex c of ('(λ)), other than (λ_1 + 1, k + 1), is also a vertex of (^2∖λ). This is clearly true for the vertices (1,k+1) and (λ_1+1,1), so suppose that c=(a,b) with a ≤λ_1 and b ≤ k. Then there there exists a line L through c that leaves the rest of '(λ) strictly on one side; more specifically, this linemust have negative slope and leave the rest of '(λ) strictly above it. To conclude that c is a vertex of (^2∖λ), it suffices to show that the rest of ^2∖λ also lies strictly above L. 
Indeed, if this was not the case, we could find some c'∈^2∖λ with c'≠ c, lying weakly below L, and such that its sum of coordinates is smallest among all the cells with this property. Then c' would be a complementary corner cell, in contradiction with the fact that all cells of '(λ) other than c lie strictly above L. The following lemma will help us implement a binary search for removable cells.Let τ be a triangular partition. Let c_1,c_2≠(1,1) be consecutive vertices of (τ), with c_1 to the left of c_2, and let L be the line through c_1 and c_2. If there is some c_3∈^2∖τlying weakly below L and to the left of c_1 (resp. right of c_2), then any removable cells of τlie weakly to the left of c_1 (resp. right of c_2).Similarly, let c'_1,c'_2 be consecutive vertices of (^2∖τ), with c'_1 to the left of c'_2, and let L' be the line through c'_1 and c'_2. If there is some c'_3∈τ lying weakly above L' and to the left of c'_1 (resp. right of c'_2), then any addable cellslie weakly to the left of c'_1 (resp. right of c'_2). Suppose that c_4∈τ is a removable cell other than c_1. Any cutting line for τ∖{c_4} must pass strictly below c_3 and c_4 and weakly above c_1. Since both c_3 and c_4 lie weakly below L, this implies that c_4 lies to the same side of c_1 as c_3. A similar argument proves the statement for c_3 to the right of c_2, as well as for addable cells. Our algorithm to determine if a partition λ is triangular starts by finding its corner cells. Then, it computes (λ) and (^2∖λ), and it performs a binary search on the edges of the boundary of (λ), using Proposition <ref> to look for a pair of removable cells. For each edge, it tries to find a point in ^2∖λ that lies below the line extending the edge, in order to apply Lemma <ref> and keep searching in the correct direction. If no pair of removable cells is found, the same procedure is applied to addable cells. Below is a more detailed description.Let λ be a partition of n into k parts, and let m its number of distinct parts, which is (min{k,√(n)}). The complexity of finding the corner cells in step 1 is (k). Graham's scan runs in linear time in m, because corner cells (resp. complementary corner cells) are found in counter-clockwise (resp. clockwise) order with respect to (1,1) (resp. (λ_1 + 1, k + 1)). Finally, steps 2 and 3 perform a binary search where each step runs a ternary search, and so they run in time ((log m)^2). In summary, our algorithm takes time (k) to read the input, and time (m) to run the rest of the steps, while the space used is (m). For comparison, an algorithm based on Lemma <ref> would run in time (n) while occupying (k) space.§ THE TRIANGULAR YOUNG POSETBergeron and Mazin <cit.> introduced the posetof triangular partitions ordered by containment of their Young diagrams. An interesting property of this poset, as explained in Section <ref>, is that it has a planar Hasse diagram. This property is used in <cit.> to deduce thatis a lattice, and it is ranked by the size of each partition. In this section we describe the Möbius function of , and we give explicit constructions for the meet and the join of any two elements.Our first result confirms Bergeron's conjecture (personal communication, 2022) that the Möbius function, which we denote by μ, only takes values in {-1,0,1}. Let τ,ν∈ such that τ≤ν. Thenμ(τ, ν) =1if either τ = ν or there exist ζ^1≠ζ^2 such that ν = ζ^1ζ^2 and τ⋖ζ^1,ζ^2,-1if τ⋖ν,0otherwise. 
Trivially, if τ = ν, then μ(τ,ν) = 1, and if τ⋖ν, then μ(τ,ν) = -1.Otherwise, consider two possibilities depending on how many elements in the interval [τ,ν] cover τ. This number has to be one or two, since τ has at most two addable cells by Lemma <ref>.If only one element ζ∈[τ,ν] covers τ, we will show that μ(τ,ν) = 0 by induction on m=|ν|-|ζ|. If m=1, then ζ⋖ν, and μ(τ,ν) = -μ(τ,τ) - μ(τ,ζ) = -1 + 1 = 0.If m>1, thenμ(τ,ν) = -∑_τ≤θ<νμ(τ,θ) = -(1 - 1 + 0 + …+ 0) = 0,using the induction hypothesis.If there are two elements ζ^1,ζ^2∈[τ, ν] that cover τ, let ζ=ζ^1ζ^2. If ζ=ν, then any θ∈[τ,ν] with θ<ν falls in one of the above cases, and soμ(τ,ν) = -∑_τ≤θ<νμ(τ,θ) = -(1 - 1 - 1 + 0 + …+ 0) = 1.If ζ<ν, we will show that μ(τ,ν) = 0 by induction on m=|ν|-|ζ|. If m=1, then ζ⋖ν, andμ(τ, ν) = -∑_τ≤θ<νμ(τ,θ) = -(1 - 1 - 1 + 0 + …+ 0 + 1) = 0.If m>1, thenμ(τ,ν) = -∑_τ≤θ<νμ(τ,θ) = -(1 - 1 - 1 + 0 + …+ 0 + 1 + 0 + …+ 0) = 0,using the induction hypothesis. As mentioned below Definition <ref>, the faces of the Hasse diagram ofare polygons with an even number of sides. We can interpret Theorem <ref> as stating that, if τ < ν and ν does not cover τ, then μ(τ, ν) equals 1 if [τ,ν] is one of the polygonal faces (equivalently, if τ = ν^∘), and 0 otherwise.Our next result explicitly characterizes the join and meet of elements of . See Figure <ref> for an example.For any τ,ν∈, we haveτν = ^2∩(τ∪ν) andτν = ^2∖(^2∩(^2∖(τ∩ν))). We will prove the statement for τν. The statement for τν has a similar proof.Let ζ =^2∩(τ∪ν).The partition τν is triangular, so it consists of the points in ^2 weakly below some cutting line. Therefore, since τν contains τ and ν, it must also contain every lattice point that is a convex combination of points in τ and ν. It follows that ζ⊆τν. On the other hand, it is clear that τ,ν⊆ζ. To prove that τν⊆ζ, it suffices to show that ζ is triangular. By Lemma <ref>, this will follow if we show that no vertex of (ζ) is in (^2∖ζ) and vice versa.Suppose that there is a vertex c of (^2∖ζ) (which must be a point in ^2∖ζ) such that c∈(ζ). Note that (ζ) = (τ∪ν), so c is a lattice point in (τ∪ν), implying that c∈ζ, which is a contradiction.Suppose now that there is a vertex c of (ζ) such that c∈(^2∖ζ). Since (ζ) = (τ∪ν), every vertex must be a point in τ∪ν, therefore either c∈τ or c∈ν. Let us assume that c∈τ without loss of generality. Since τ⊆ζ, we have (^2∖τ)⊇(^2∖ζ), and so c∈(^2∖τ). But this contradicts that τ is triangular. By repeatedly applying Proposition <ref>, one can show that, for any set of triangular partitions τ^1,…,τ^k, we have τ^1…τ^k = ^2∩(τ^1∪…∪τ^k) andτ^1…τ^k = ^2∖(^2∩(^2∖(τ^1∩…∩τ^k))). Recall that a non-minimum element in a lattice is called join-irreducible if it cannot be expressed as the join of other elements. The join-irreducible elements ofare the triangular partitions with one removable cell.A triangular partition with one removable cell covers only one element in , so it cannot be the join of two elements other than itself. On the other hand, a triangular partition τ with two removable cells is the join of the two triangular partitions that it covers.§ BIJECTIONS TO BALANCED WORDS AND EFFICIENT GENERATIONIn this section we will present two different encodings of triangular partitions in terms of factors of Sturmian words. The first one, which is hinted in <cit.>, is quite natural, and it will allow us to prove some enumeration formulas in Section <ref>. 
The second one encodes families of triangular partitions by one single balanced word, along with two other parameters, and it will be used in Section <ref> to implement efficient algorithms to count triangular partitions by their size. §.§ Sturmian words Sturmian words have applications in combinatorics, number theory, and dynamical systems; see <cit.> for a thorough study. In the following definition, a word w is called a factor of another word s if s = uwt for some words u and t.An infinite binary word s is Sturmian if, for every n≥1, the number of factors of s of length n equals n + 1. We will be interested in factors of Sturmian words, which have the following two useful characterizations. Let w = w_1… w_ℓ be a finite binary word over {0,1}. The following statements are equivalent: * w is a factor of some Sturmian word;* w is a balanced word, that is, for any h ≤ℓ and i,j ≤ℓ - h + 1, we have |∑_t = i^i + h - 1w_t - ∑_t = j^j + h - 1w_t| ≤ 1; * w is a mechanical word, that is, there exist real numbers 0 < α, β < 1 such that w_i = ⌊ iα + β⌋ - ⌊ (i - 1)α + β⌋ for 1≤ i≤ℓ.We denote bythe set of all words satisfying any of the above equivalent conditions, and by _ℓ the set of those of length ℓ. Condition (<ref>) states that, for any two consecutive subwords of w of the same length, the number of ones in these subwords differs by at most 1.Visualizing w as a lattice path P that starts at the origin and has ith step (1,0) if w_i = 0, and (1,1) if w_i = 1, the condition of w being a mechanical word is equivalent to P being the highest path with ℓ steps (1,0) and (1,1) starting at the origin and staying weakly below the line y = α x + β (see Figure <ref>). The following enumeration formula for balanced words is due to Lipatov <cit.>. Throughout the paper, we use φ to denote Euler's totient function. The number of balanced words of length ℓ is|_ℓ|=1 + ∑_i = 1^ℓ (ℓ - i + 1)φ(i).§.§ Wide and tall partitions The following definition will be useful for our encodings of triangular partitions by balanced words. A triangular partition is wide (respectively tall) if it admits a cutting line _r,s with r > s (respectively r < s). Denote the set of wide triangular partitions by . It follows from Lemma <ref>, and in fact was already noted in <cit.>,that the slopes of the cutting lines of any given triangular partition form an open interval. Thus, every triangular partition must be wide, tall, or both.It is immediate from the definition that a triangular partition τ is wide if and only if its conjugate τ' is tall. This is because _r,s is a cutting line for τ if and only if _s,r is a cutting line for τ'. Next we give alternative characterizations of wide partitions. We use the notation [k]={1,2,…,k}.For any triangular partition τ = τ_1…τ_k, the following are equivalent: * τ is wide,* τ_1 ≥ k,* the parts of τ are distinct.Separately, the following are equivalent:* τ is wide and tall,* τ_1=k,* τ is the staircase partition σ^k. Let us start by proving (b)⇒(c) by contrapositive. If (c) does not hold, there exists i∈[k-1] such that τ_i=τ_i+1. Let j denote this value. Any cutting line for τ must pass weakly above (j,i+1)∈τ and strictly below (j+1,i)∉τ. It follows that it must pass above (1,i+j) and below (i+j,1), so (1,i+j)∈τ but (i+j,1)∉τ. We conclude that k≥ i+j>τ_1, contradicting (b).Next we prove that (b')⇒(c'). Suppose that τ_1=k=τ_1'. Applying the fact that (b)⇒(c) to τ and to τ', it follows that both τ and τ' are partitions into distinct parts. 
This implies that τ is a staircase partition, since the upper-right boundary of its Young diagram cannot have two consecutive steps in the same direction. To prove (c)⇒(b), note that if the parts of τ are distinct, then τ_1 = τ_k + ∑_i = 1^k - 1(τ_i - τ_i + 1) ≥ k. The implication (c')⇒(b') is trivial.To prove (a)⇒(b) and (a')⇒(b'), let _r,s be a cutting line for τ. By Definition <ref>, τ_1 = ⌊ r - r/s ⌋ and k = ⌊ s - s/r ⌋. It follows that, if r>s, then τ_1≥ k, and if r<s, then τ_1≤ k. In particular, if τ is wide, then τ_1≥ k, and if τ is wide and tall, then τ_1=k.Next we prove that (b)⇒(a) and (b')⇒(a'). Suppose first that τ_1=k. Then τ=σ^k, which is wide and tall, so both (a) and (b) hold.Suppose now that τ_1>k=τ_1'. We claim that τ' cannot be wide in this case. Indeed, if it was, then the fact that (a)⇒(b) applied to τ' would imply that τ'_1≥τ_1, which is a contradiction. Since τ' is not wide, it must be tall, so (a) holds. §.§ First Sturmian encodingWe are now ready to describe the first encoding of wide triangular partitions by balanced words. Given τ = τ_1…τ_k∈, define the binary wordω(τ)=10^τ_1-τ_2-110^τ_2-τ_3-1…10^τ_k-1-τ_k-110^τ_k - 1.The fact that all the parts of τ are distinct (by Lemma <ref>) guarantees that the exponents are nonnegative. For example, we have ω(86531)=10110101, as illustrated in Figure <ref>.For every k,ℓ≥1,the map ω from equation (<ref>) is a bijection {τ=τ_1…τ_k∈|τ_1=ℓ}→{w=w_1… w_ℓ∈_ℓ|w has k ones and w_1=1}. Let τ = τ_1…τ_k∈. It is clear by construction that ω(τ) has length ℓ=τ_1, that it has k ones, and that it starts with 1. Let us show that this word is balanced.Let _r,s be a cutting line for τ with r>s. Consider the lattice path from (τ_1 + 1,0) to (1, k) which passes through the highest point of each column of the Ferrers diagram of τ from right to left (see Figure <ref>). Since the parts of τ are distinct, the steps of the path belong to {(-1,0), (-1,1)}, and since τ is a triangular partition, this is the highest such path that stays weakly below the line _r,s. By applying the vertical reflection x↦τ_1+1-x, the resulting path from (0,0) to (τ_1, k) is the highest path with steps in {(1,0), (1,1)} that stays weakly below the reflection of _r,s, which is the line (τ_1 + 1 - x)/r + y/s = 1, or equivalently,y=s/r x+s(1-τ_1+1/r).Note that 0 < s/r < 1. Moreover, 0 < s(1 - (τ_1 + 1)/r) < 1, because it is the ordinate of the cutting line _r,s at x = τ_1 + 1; this value must be positive because (τ_1, 1)∈τ and r > s, and it must be less than 1 because (τ_1 + 1,1)∉τ.By the equivalence between balanced words and mechanical words given in Proposition <ref>, this reflected path corresponds to the balanced word of length τ_1 obtained by encoding steps (1,0) with 0 and steps (1,1) with 1. This word is precisely ω(τ) by construction, proving that ω(τ)∈_ℓ.Finally, we prove that ω is a bijection by constructing its inverse. Let w∈_ℓ with w_1=1 and having k ones, and let 1 = i_1<… <i_k be the indices of these ones. Define a partition τ = τ_1…τ_k by τ_j = ℓ - i_j + 1 for j ∈[k], and note that τ_1=ℓ. By Proposition <ref>(c), w encodes a lattice path from (0,0) to (ℓ,k) with steps in {(1,0),(1,1)}, which is the highest path staying weakly below the line y = α x + β, for some 0<α,β<1. Applying the vertical reflection x↦τ_1+1-x, the resulting path from (ℓ + 1, 0) to (1, k) passes throughthe highest point of every column of the Ferrers diagram of τ, and it is the highest path weakly below the reflected line (which is the line through (ℓ + 1, β) having slope -α). 
This proves that τ is a wide triangular partition. By construction, the map that sends w to τ is the inverse of ω, proving that ω is a bijection. The bijection ω can be used to reduce the problem of determining whether a binary word is balanced to that of determining whether a partition is triangular, which we studied in Section <ref>. A naive algorithmto determine if a word is balanced using equation (<ref>) would take quadratic time in the length of the word. However, several linear-time algorithms are known, using a variety of tools: number-theoretic methods <cit.>, techniques from discrete geometry and a bijection similar to the one in our Lemma <ref> <cit.>, a recursivealgorithm <cit.>, and Lyndon words <cit.>.Next we give yet another linear-timealgorithm, based on the geometry of triangular partitions and the above encoding ω. To be able to apply the inverse of this map, we need our word to start with a one. This can be easily achieved by replacing zeros with ones and ones with zeros if needed, since this operation preserves the balanced property from equation (<ref>).For a binary word of length ℓ with k letters equal to the first one, reading the input takes time Θ(ℓ) and the rest of the algorithm takes time Θ(k). §.§ Second Sturmian encodingNext we give a different encoding of triangular partitions using balanced words, which appears to be new. Let ϵ denote the empty partition, and let ' be the set of wide triangular partitions with at least two parts. Let ^0 denote the set of balanced words that contain at least one zero.First we describe the possible sets that can be obtained by taking the differences of consecutive parts in a wide triangular partition. For τ = τ_1…τ_k∈', define(τ) = {τ_1 - τ_2, τ_2 - τ_3, …,τ_k-1 - τ_k }. For any τ = τ_1…τ_k∈',either (τ) = {d} or (τ) = {d, d + 1} for some d≥1 such that τ_k≤ d+1.Let _r,s be a cutting line for τ. By Definition <ref>, τ_i + 1 = ⌊r - (i + 1)r/s⌋ and τ_i = ⌊r - ir/s⌋ for any i∈[k-1], which implies that r/s - 1 < τ_i - τ_i + 1 < r/s + 1. It follows that the differences τ_i - τ_i + 1 can take at most two values, and so either (τ) = {d} or (τ) = {d, d + 1} for some d. By Lemma <ref>, the parts of τ are distinct, so d≥1. Moreover, since the point (1, k + 1) must lie above _r,s, we have τ_k < r/s + 1 < d + 2, hence τ_k ≤ d + 1. For τ = τ_1…τ_k∈', let min(τ)=τ_k, and let (τ) = min(τ). Finally, let (τ) = w_1… w_k-1 where, for i∈[k-1], we let w_i = τ_i - τ_i + 1 - (τ). Lemma <ref> guarantees that w_i∈{0,1} for all i. Our second encoding is given by the map χ = (min,,). See Figure <ref> for an example. The map χ = (min,,) is a bijection between ' and the set= {(m,d,w)∈××^0| m≤ d + 1;w1∈^0 if m = d + 1}.Its inverse is given by the mapξ(m,d,w_1… w_k-1) = τ_1…τ_k, whereτ_i =m + ∑_j = i^k-1(w_j+d)for i∈[k]. Additionally, given τ∈' with image χ(τ)=(m,d,w), its number of parts equals the length of w plus one, and its size is |τ|=km + k2d + ∑_i = 1^k-1 iw_i. We start by proving that χ(') ⊆. Let τ = τ_1…τ_k∈', and let w = w_1… w_k-1 = (τ). First, let us show that w∈^0. It is clear that w must have a 0, because if it consisted of only ones, we would have τ_i - τ_i + 1 = (τ) + 1 for all i∈[k - 1], by definition of , which would imply that (τ) = {(τ) + 1}, contradicting the definition of . To prove that w is balanced, we show that inequality (<ref>) holds for any h≤ k-1 and i,j ≤ k - h. 
Since w_t+(τ)=τ_t - τ_t + 1 for all t∈[k-1], we can rewrite the left-hand side as a telescoping sum|∑_t = i^i + h - 1w_t - ∑_t = j^j + h - 1w_t| =|∑_t = i^i + h - 1(w_t+(τ)) - ∑_t = j^j + h - 1(w_t+(τ))|== | ∑_t = i^i + h - 1(τ_t - τ_t + 1) - ∑_t = j^j + h - 1(τ_t - τ_t+1) |= |(τ_i - τ_i + h) - (τ_j - τ_j + h)|.By Definition <ref>, there exist r,s > 0 such that τ_t = ⌊ r - tr/s ⌋ for t∈[k], so we can express the above difference as|⌊ r - ir/s ⌋ - ⌊ r - (i + h)r/s ⌋- ⌊ r - jr/s ⌋ + ⌊ r - (j + h)r/s ⌋|,which is bounded above by 1, using that (a - 1) - b - d + (e - 1) <⌊ a⌋ - ⌊ b⌋ - ⌊ d ⌋ + ⌊ e⌋< a - (b - 1) - (d - 1) + e for any real numbers a,b,d,e.By Lemma <ref>,min(τ) ≤(τ) + 1. It remains to show that, if min(τ) = (τ) + 1, then w1∈^0. In this case, with the convention w_k = 1 and τ_k + 1 = 0, we have that w_t = τ_t - τ_t+ 1 - (τ) also for t=k, and so we can apply the above argument to the word w_1… w_k-1w_k=w1 to conclude that it belongs to ^0. This finishes the proof that χ(') ⊆.Next we show that ξ() ⊆'. Let (m,d,w)∈, where w=w_1… w_k-1, and let τ = τ_1…τ_k = ξ(m,d,w). Since w cannot be empty, we have k≥2.Consider first the case m ≤ d. Notice that w_1… w_k-1 is balanced if and only if w_k-1w_k-2… w_1 is balanced. By Proposition <ref>(c) applied to w_k-1w_k-2… w_1, there exist 0 < α, β < 1 such that w_k-i = ⌊β + iα⌋ - ⌊β + (i - 1)α⌋ for i∈[k - 1]. Then, for i∈[k],τ_i= m + ∑_j = i^k - 1 (w_j+d)= m + (k - i)d+ ∑_j = i^k - 1( ⌊β + (k - j)α⌋ - ⌊β + (k - j - 1)α⌋) = m + (k - i)d + ⌊β + (k - i)α⌋ - ⌊β⌋ = ⌊ m + β + (k - i)(d + α)⌋.Letting r = m + β + k(d + α) and s = r/(d + α), we have τ_i = ⌊ r - ir/s ⌋. Moreover, ⌊ s - s/r ⌋ = ⌊ (m + β - 1)/(d + α) ⌋ + k = k, using the assumption that m ≤ d. By Definition <ref>, this proves that τ is triangular with cutting line _r,s. Since r > s, we have τ∈'.Now consider the case m = d + 1. By the definition of , we have w' := w1∈^0. Arguing as above, there exist 0 < α', β' < 1 such that w'_k+1 - i' = ⌊β' + iα' ⌋ - ⌊β' + (i - 1)α' ⌋ for i∈[k]. Then,τ_i =m + ∑_j = i^k - 1 (w_j+d) = d+1+(k-i)d+∑_j = i^k - 1 w'_j =(k - i + 1)d + ∑_j = i^kw'_j = ⌊β' + (k - i + 1)(d + α')⌋.Letting r' = β' + (k + 1)(d + α') and s' = r'/(d + α'), we have τ_i = ⌊ r' - ir'/s' ⌋. Moreover, ⌊ s' - s'/r' ⌋ = ⌊ (β' - 1)/(d + α') ⌋ + k + 1 = k. This proves that τ is triangular with cutting line _r',s'. Since r' > s', we have τ∈'. This finishes the proof that ξ() ⊆'.Next we show that ξ and χ are inverses of each other. Let (m,d,w)∈, with w = w_1… w_k-1, and let τ = τ_1…τ_k = ξ(m,d,w).By definition of ξ, we have min(τ) = τ_k= m and (τ) = {w_i+d |i∈[k-1]}. Since w∈^0, there exists i∈[k-1] such that w_i = 0, and therefore (τ) = d. Additionally, the ith entry of (τ) is τ_i - τ_i + 1 - (τ) = d + w_i - d = w_i for i∈[k-1]. This proves that χ(ξ(m,d,w)) = (m,d,w).Now take any τ = τ_1…τ_k∈'. To show that ξ(χ(τ))=τ, let w=(τ), and note that the ith entry of ξ(χ(τ)) is min(τ) + ∑_j = i^k - 1(w_j+(τ)) = τ_k + ∑_j = i^k - 1(τ_j - τ_j + 1) = τ_i.Finally, to prove equation (<ref>) for any τ∈' with χ(τ)=(m,d,w), we use the definition of the inverse map ξ from equation (<ref>). Adding all the parts of τ, |τ| = |ξ(m,d,w)| = ∑_i = 1^k(m + ∑_j = i^k - 1(w_j+d)) = km + k2d + ∑_i = 1^k - 1 iw_i.Before we discuss how to use Theorem <ref> to generate triangular partitions, we finish this subsection describing another application of the above encoding. Let τ,ν∈' with k parts such that (τ) = (ν), and suppose that the values min(τ) - (τ) and min(ν) - (ν) are either both equal to 1 or both less than 1. 
Then, for every i∈[k], the cell c = (τ_i, i) is a removable cell of τ if and only if c' = (ν_i, i) is a removable cell of ν.Let χ(τ)=(m, d, w) and χ(ν)=(m', d', w), where χ is the map from Theorem <ref>. In the special case where w consists of only zeros, all the corner cells (τ_i,i) of τ lie on the cutting line x-m+d(y-k)=0, and all the corner cells of ν lie on the cutting line x-m'+d'(y-k)=0. It follows that the removable cells of τ are (τ_1,1) and (τ_k,k), and the removable cells of ν are (ν_1,1) and (ν_k,k). Thus, the result holds in this case. In the rest of the proof we will assume that w contains some one.By symmetry, it suffices to show that if c is removable in τ, then c' is removable in ν. By Lemma <ref>, if c is a removable cell of τ, there is a cutting line L that passes through c and so that all the other corner cells (τ_j,j) for j ilie strictly below L, and the cells (1,k+1) and (τ_j+1,j) for all jlie strictly above L. Writing the equation of L as x-τ_i+a(y-i)=0 for some positive real number a, these conditions are equivalent to-1<τ_j-τ_i+a(j-i)<0 for j≠ i, and 1-τ_i+a(k+1-i)>0.Let L' be the line with equation x-ν_i+(a+d'-d)(y-i)=0. This line passes through c'. We will show that all the other corner cells (ν_j,j) for j i lie strictly below L', and the cells (1,k+1) and (ν_j+1,j) for all j lie strictly above L', thus proving that L' is a cutting line for ν and c' is a removable cell. This is equivalent to showing that -1<ν_j-ν_i+(a+d'-d)(j-i)<0 for j≠ i, and1-ν_i+(a+d'-d)(k+1-i)>0. Equation (<ref>) follows from the analogous equation for τ_j, since the description of ξ from equation (<ref>) implies thatν_j-τ_j=m' + ∑_ℓ = j^k - 1(w_ℓ+d')- m - ∑_ℓ = j^k - 1(w_ℓ+d)=m'-m+(k-j)(d'-d),and soν_j-ν_i+(a+d'-d)(j-i)=τ_j-τ_i+(i-j)(d'-d)+(a+d'-d)(j-i)=τ_j-τ_i+a(j-i). To prove equation (<ref>) in the case m-d=m'-d'=1, we again use equation (<ref>) to write1-ν_i+(a+d'-d)(k+1-i)=m-d-(m'-d')+1-τ_i+a(k+1-i)=1-τ_i+a(k+1-i)>0. It remains to show that, in the case that both m-d and m'-d' are less than 1, the cell (1,k+1) also lies strictly above L'. By the above assumption, there is some j such that w_j=1. Then, the fact that the cell (ν_j,j) lies weakly below L' but the cell (ν_j+1+1,j+1) lies strictly above L' forces the slope of L' to be smaller (in absolute value) than the slope of the line between these two cells; equivalently, a+d'-d>ν_j-(ν_j+1+1)=d'. Thus, since the cell (ν_k+1,k)=(m'+1,k) lies strictly above L', so does the cell (m'+1-d',k+1), and, using that m'-d'≤0, the cell (1,k+1) must lie strictly above L'. As a consequence of the above result, we can determine the removable cells of any τ = τ_1…τ_k∈' by instead finding the removable cells of the smallest partition ν satisfying the conditions of Proposition <ref>.Letting w=(τ), this is the partition ξ(2, 1, w) if min(τ) - (τ) = 1, or ξ(1,1,w) otherwise, where ξ is given by equation (<ref>). After finding the removable cells of ν, we can use Proposition <ref> to determine those of τ. For triangular partitions τ that are not wide, we can apply the same procedure to τ', which must be wide, noting that a cell c = (a,b) is removable from τ if and only if c' = (b,a) is removable from τ'. For example, to find the removable cells of τ = (5^576,4^1037,3^1037,2^1036,1^1037) (where the exponents indicate the multiplicities of the parts), we take τ' = (4723, 3686, 2650, 1613, 576), which has min(τ') = 576, (τ') = {1036,1037}, and (τ') = 1036. 
Thus, (τ') = 1011, so we consider the much smaller partition ν = ξ(1,1,1011) = (8,6,5,3,1), whose only removable cell is (5,3). By Proposition <ref>, the only removable cell of τ' is (2650, 3), and so the only removable cell of τ is (3, 2650).§.§ Efficient generation Next we describe an efficient algorithm to generate triangular partitions, which relies on our new encoding of such partitions as balanced words from Section <ref>. At the time of writing this paper, the entry of the On-Line Encyclopedia of Integer Sequences <cit.> for the number triangular partitions of n only included values for n≤ 39. These are the terms that appear in <cit.>, where they were obtained using the generating function in Theorem <ref>. We have checked that an algorithm based on this generating function becomes very slow when trying to compute the next terms of the sequence, and it is impractical for large n.Theorem <ref> can be used to implement the following much more efficient algorithm, which can easily find the first 10^5 terms of the sequence.A C++ implementation of this algorithm is available at <cit.>. In a standard laptop computer, it yields the first 10^3 terms of the sequence |Δ(n)| in under one second, the first 10^4 terms in under ten seconds, and the first 10^5 terms in under one hour. Next we study the running time of our algorithm. By comparison, Onn and Sturmfels showed in <cit.> that it is possible to generate triangular partitions of n in time (n^4). The above algorithm finds |Δ(n)| for 1≤ n≤ N in time (N^5/2). Additionally, it can be modified to generate all (resp., all wide) triangular partitions of size at most N in time (N^3log N) (resp., (N^5/2log N)). For each word w of length ℓ≤√(2N) in the tree of balanced words, the algorithm makes (ℓ) comparisons to determine whether appending a 0 or a 1 produces a balanced word. In <cit.>, it is proved that the number of balanced words of length at most L is (L^4). Therefore, the total number of comparisons performed in step 1 is (N^5/2).In step 2, for each balanced word w containing at least one zero, the algorithm runs through pairs (d,m) such that (m,d,w)∈. For each one of these, the algorithm computes |τ| using equation (<ref>) and checks if |τ|≤ N in constant time. By Theorem <ref>, the number of triangular partitions of size at most N is (N^2log N). Additionally, the number of“incorrect” visited triplets (m,d,w) for which |τ| > N is also (N^2log N), since those with m=1 come from different words w (and we visit (N^2) of them), and those with m > 1 came after a “correct” triplet (m-1,d,w), i.e., corresponding to a triangular partition of size at most N (and there are (N^2log N) of these). We conclude that the algorithm runs in time (N^5/2).To output the list of triangular partitions, for each (m,d,w) that the algorithm finds, one can compute τ=ξ(m,d,w) using equation (<ref>) in time (√(N)), and its conjugate τ' in time (N). Since the number of triangular partitions that are computed is (N^2log N), this construction generates the wide ones in time (N^5/2log N), and their conjugates in time (N^3log N). In their proof of Theorem <ref>, Corteel et al. <cit.> show that (n/8)/3 ≤|Δ(n)| ≤(2n + 1), where (m) is the number of relatively prime pairs (a,b) such that ab < m. In fact, it is possible to slightly improve the lower bound to (n/2)/3, by tweaking the same argument that they use, but noting that if n > 2ab, then the set H((a,b)) defined in <cit.> contains at least two lattice points. 
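To make the counting scheme concrete, here is a compact Python sketch; it is an independent illustration, not the C++ implementation cited above. It grows balanced words by depth-first extension and, for each word containing a zero, sweeps over the pairs (m,d) admitted by the second Sturmian encoding, tallying wide partitions with at least two parts via the size formula |τ| = km + k(k-1)d/2 + ∑_i i w_i; conjugation then accounts for the tall partitions, with the staircases counted once. The naive quadratic balancedness test keeps the code short at the expense of the running time analyzed above.

```python
from math import isqrt

def is_balanced(w):
    # Windows of equal length must contain numbers of ones differing by at most 1.
    for h in range(1, len(w) + 1):
        sums = [sum(w[i:i + h]) for i in range(len(w) - h + 1)]
        if max(sums) - min(sums) > 1:
            return False
    return True

def count_triangular(N):
    """Return counts[n] = number of triangular partitions of n, for 1 <= n <= N."""
    wide2 = [0] * (N + 1)               # wide partitions with >= 2 parts, by size
    max_len = isqrt(2 * N)              # longer words only encode sizes exceeding N
    stack = [(0,), (1,)]
    while stack:
        w = stack.pop()
        k = len(w) + 1                  # number of parts of the encoded partition
        if 0 in w:                      # the encoding requires w to contain a zero
            base = sum(i * wi for i, wi in enumerate(w, start=1))
            tri = k * (k - 1) // 2
            d = 1
            while k + tri * d + base <= N:          # smallest size for this d (take m = 1)
                for m in range(1, d + 2):           # the encoding requires m <= d + 1
                    if m == d + 1 and not is_balanced(w + (1,)):
                        continue                    # w followed by 1 must stay balanced
                    size = k * m + tri * d + base
                    if size <= N:
                        wide2[size] += 1
                d += 1
        if len(w) < max_len:
            for b in (0, 1):
                if is_balanced(w + (b,)):
                    stack.append(w + (b,))
    counts = [0] * (N + 1)
    for n in range(1, N + 1):
        k = (isqrt(8 * n + 1) - 1) // 2
        staircase = 1 if k * (k + 1) // 2 == n else 0   # sigma^k is both wide and tall
        counts[n] = 2 * (wide2[n] + 1) - staircase      # +1 for the one-part partition (n)
    return counts

print(count_triangular(10)[1:])   # [1, 2, 3, 4, 6, 7, 8, 10, 12, 13]
```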
In Figure <ref>, the first 10^5 terms of the sequence |Δ(n)| are plotted against these bounds (2n + 1) and (n/2)/3.While the constants C and C' (as in Theorem <ref>) that result from these bounds are far from tight, Figure <ref> suggests that, for large n, the value of|Δ(n)|/(nlog n) oscillates between two decreasing functions that differ by about 0.05. § GENERATING FUNCTIONS FOR SUBSETS OF TRIANGULAR PARTITIONS Let us introduce notation for some subsets of triangular partitions. Let Δ_1 and Δ_2 denote the subsets of partitions with one removable cell and with two removable cells, respectively. Let Δ^1 and Δ^2 denote the subsets of partitions with one addable cell and with two addable cells, respectively. Let Δ_2^2=Δ_2∩Δ^2. Denote partitions of size n in each subset by Δ_1(n), Δ_2(n), Δ^1(n), Δ^2(n) and Δ_2^2(n).In this section we give generating functions for each of these sets, refining Theorem <ref>.To obtain a generating function for partitions in Δ_2, we modify the construction that Corteel et al. <cit.> used to prove Theorem <ref> for all triangular partitions.In our case, each τ∈Δ_2 is uniquely determined by the following parameters, as given by Definition <ref>: its diagonal slope (encoded by an irreducible fraction a/b), the number of cells in ∂_τ (denoted by k = |∂_τ|), the x-coordinate of the leftmost cell in ∂_τ (denoted by i+1), and the y-coordinate of the rightmost cell in ∂_τ(denoted by j+1).Decomposing τ as shown in Figure <ref> and counting cells as in Section <ref>, we see that the size of τ is N_Δ(a,b,k,k,i,j), with N_Δ as given by equation (<ref>). Bearing in mind that k≥2 because the diagonal contains both removable cells, we obtain the following generating function for Δ_2. The generating function for triangular partitions with two removable cells isG_Δ_2(z) = ∑_n≥0|Δ_2(n)|z^n = ∑_(a, b) = 1∑_0≤ j < a 0≤ i < b∑_k≥2z^N_Δ(a,b,k,k,i,j).Note that the term 1/1 - z that appeared in Theorem <ref> is not included here, since its purpose was to account for partitions with one part, but those have only one removable cell. Our next result shows how to obtain generating functions for all the other cases, in terms of the expressions in Theorem <ref> and Proposition <ref>. The generating functions for partitions in Δ_1, Δ^2, Δ^1, Δ_2^2 can be written in terms of G_Δ(z) and G_Δ_2(z) as follows:G_Δ_1(z)= G_Δ(z) - G_Δ_2(z)-1,G_Δ^2(z)= 1 - z/zG_Δ(z) + 1/zG_Δ_2(z) - 1/z,G_Δ^1(z)= 2z - 1/zG_Δ(z) - 1/zG_Δ_2(z) + 1/z,G_Δ_2^2(z)= 1 - 2z/zG_Δ(z) + 1 + z/zG_Δ_2(z) - 1/z. Equation (<ref>) is immediate since all nonempty triangular partitions have one or two removable cells, by Lemma <ref>.To prove equation (<ref>), we will count in two ways the number of edges between levels n and n+1 in the Hasse diagram of . Since partitions in Δ^1(n) are covered by one element, and partitions in Δ^2(n) are covered by two, the number of such edges is |Δ^1(n)| + 2|Δ^2(n)| = |Δ(n)| + |Δ^2(n)|. On the other hand, counting how many elements are covered by each partition of n+1, we see that the number of such edges is |Δ(n + 1)| + |Δ_2(n + 1)|. It follows that |Δ(n)| + |Δ^2(n)| = |Δ(n + 1)| + |Δ_2(n + 1)|for n≥1. In terms of generating functions, we obtainG_Δ^2(z)= ∑_n≥1|Δ^2(n)|z^n = ∑_n≥1|Δ(n + 1)|z^n + ∑_n≥1|Δ_2(n + 1)|z^n - ∑_n≥1|Δ(n)|z^n == 1/z(G_Δ(z) - 1-z) + 1/zG_Δ_2(z) - (G_Δ(z)-1) = 1 - z/zG_Δ(z) + 1/zG_Δ_2(z) - 1/z. Equation (<ref>) follows from equation (<ref>) and the fact that all triangular partitions have one or two addable cells, so G_Δ^1(z) = G_Δ(z) - G_Δ^2(z). 
Finally, to deduce equation (<ref>), we use the fact that partitions with two removable cells have one or two addable cells, and that Δ^1(n)⊆Δ_2(n) by Lemma <ref>, so G_Δ_2^2(z) = G_Δ_2(z) - G_Δ^1(z). From equation (<ref>), we can derive the equality2|Δ(n)|-|Δ^1(n)| = 2|Δ(n + 1)| - |Δ_1(n + 1)|for n≥1, from where we obtain the upper bound|Δ(n + 1)| - |Δ(n)| = 1/2(|Δ_1(n + 1)| - |Δ^1(n)|)≤1/2|Δ_1(n + 1)|. The expression for G_Δ_2 given in Proposition <ref> can be used to write an algorithm to find |Δ_2(n)|. We have computed the first 100 terms of this sequence using a MATLAB implementation of this algorithm, which is available at <cit.>.The first 50 terms of the sequences |Δ_1(n)| and |Δ_2(n)| appear in Table <ref>, and the first 100 terms are plotted in Figure <ref>. It appears to be the case that|Δ_2(n)|>|Δ_1(n)| for all n≥9, although we do not have a proof of this fact. It is interesting to note that both the local maxima of |Δ_1(n)| and the local minima of |Δ_2(n)| seem to occur precisely when n≡23. On the other hand, |Δ(n)| does not exhibit such periodic extrema.§ TRIANGULAR SUBPARTITIONS AND A COMBINATORIAL PROOF OF LIPATOV'S ENUMERATION FORMULA FOR BALANCED WORDS For τ∈Δ, let (τ)=|{ν∈Δ:ν⊆τ}| denote the number of triangular subpartitions of τ. In this section, after giving a general recurrence for these numbers, we will present explicit formulas for (τ) in some particular cases. We will also derive a new, combinatorial proof of Theorem <ref>.Recall from Definition <ref> that τ^∘ denotes the interior of τ. Let c^- and c^+ be the removable cells of τ, which are the leftmost and rightmost cells of ∂_τ. If τ has only one removable cell, then c^-=c^+.For any τ∈Δ(n) with n≥1, (τ) = (τ∖{c^-}) + (τ∖{c^+}) - (τ^∘) + 1. If c^-≠ c^+, then τ covers two elements in , namely τ∖{c^-} and τ∖{c^+}. By Proposition <ref>, the meet of these two elements is^2∖((^2∖τ)∪∂_τ) = τ∖∂_τ = τ^∘. The formula now follows by inclusion-exclusion.If τ has only one removable cell c, then (τ) = (τ∖{c})+1. But in this case τ^∘=τ∖{c} by definition.The above recurrence relation, along with the base case (ϵ)=1, allows us to compute (τ) for any τ∈Δ, although not very efficiently.In order to find explicit formulas for (τ) in some cases, let us consider the related problem of counting triangular partitions whose width is at most ℓ and whose height is at most h; equivalently, those whose Young diagram fits inside an h×ℓ rectangle. We denote by Δ^h×ℓ the set of such partitions. The next lemma, which is illustrated in Figure <ref>, shows how these two problems are related.Let h,ℓ≥1, and let ν∈Δ. Then ν∈Δ^h×ℓ if and only if ν⊆τ, where τ=τ_1…τ_h is the triangular partition given byτ_j = ⌊ℓ + 1 - ℓ(j - 1) + 1/h⌋for 1≤ j≤ h.The partition τ consists of the cells that lie weakly below the line hx+ℓ y=hℓ+h+ℓ-1, which implies that τ is triangular. Additionally, since τ_1=ℓ and τ has h parts, we have τ∈Δ^h×ℓ; hence the same is true for any subpartition of τ.It remains to show that any ν∈Δ^h×ℓ must satisfy ν⊆τ. Consider a cell c =(a, b) ∈ν, and suppose for the sake of contradiction that c∉τ. Thenha+ℓ b>hℓ+h+ℓ-1, and so ha+ℓ b≥ hℓ+h+ℓ, that is, (a, b) lies weakly above the line hx+ℓ y=hℓ+h+ℓ. This line passes through (1, h +1) and (ℓ + 1, 1), so any cutting line for ν must lie weakly above one of these two points, contradicting the assumption that ν∈Δ^h×ℓ. The above lemma states that we can always write Δ^h×ℓ as the set of triangular partitions contained in some suitable τ. However, the converse is not true. 
For example, if τ=31, the set of triangular subpartitions of τ is not of the form Δ^h×ℓ for any h,ℓ, since a rectangle containing τ must also contain the partition 32.Our next goal is to give a formula for (σ^ℓ), which, by Lemma <ref>, equals the number of triangular partitions that fit inside an ℓ×ℓ square. The proof uses the bijection ω from equation (<ref>). The following hold for ℓ≥1: * the number of triangular partitions of width exactly ℓ and height at most ℓ is |_ℓ|/2,* |Δ^ℓ×ℓ∖Δ^(ℓ-1)×(ℓ-1)|=(σ^ℓ)-(σ^ℓ-1)=|_ℓ|-1. By Lemma <ref>, a triangular partition of width ℓ has height at most ℓ if and only if it is wide. By allowing k to vary within [ℓ] in Proposition <ref>, we get a bijection between such partitions and balanced words of length ℓ that start with 1. By the definition in equation (<ref>), it is clear that the operation on binary words that replaces the ones with zeros and the zeros with ones preserves the property of being balanced.Therefore, the number of balanced words of length ℓ that start with with 1 is half of the total number of balanced words of length ℓ. This proves (a).To prove (b), note that partitions that fit inside an ℓ×ℓ square but not inside an (ℓ-1)× (ℓ-1) square must have height or width exactly equal to ℓ. By part (a), |_ℓ|/2 is the number of triangular partitions of width ℓ and height at most ℓ, and by conjugation, also the number of triangular partitions of height ℓ and width at most ℓ. By Lemma <ref>, the only partition that has width and height equal to ℓ is the staircase σ^ℓ. Thus,|Δ^ℓ×ℓ∖Δ^(ℓ-1)×(ℓ-1)|=2 |_ℓ|/2-1=|_ℓ|-1. On the other hand, we have |Δ^ℓ×ℓ|=(σ^ℓ) by Lemma <ref>, and so |Δ^ℓ×ℓ∖Δ^(ℓ-1)×(ℓ-1)|=(σ^ℓ)-(σ^ℓ-1). Lemma <ref>(a), combined with Theorem <ref>, implies that the number of triangular partitions of width exactly ℓ and height at most ℓ is|_ℓ|/2=1/2 + 1/2∑_i = 1^ℓ(ℓ - i + 1)φ(i). For ℓ≥0, |Δ^ℓ×ℓ|=(σ^ℓ) = 1+∑_i = 1^ℓℓ - i + 22φ(i).By Lemma <ref>(b), we have(σ^j)-(σ^j-1)=|_j|-1 for all j≥1. Summing over j∈[ℓ], including the empty partition, and using Theorem <ref>, we obtain(σ^ℓ)= 1+∑_j = 1^ℓ(|_j|-1)= 1+∑_j = 1^ℓ∑_i = 1^j(j - i + 1)φ(i) = 1+∑_i = 1^ℓφ(i)∑_j = i^ℓ(j - i + 1) == 1+∑_i = 1^ℓφ(i)∑_k = 1^ℓ - i + 1k = 1+∑_i = 1^ℓℓ - i + 22φ(i).The first few terms of the sequence |Δ^ℓ×ℓ|=(σ^ℓ) for ℓ≥0 are 1, 2, 5, 12, 25, 48, 83,…, which did not appear in <cit.> at the time of writing this paper.The above proof of Theorem <ref> relies on Lipatov's enumeration formula for balanced words (Theorem <ref>), and it does not give a conceptual understanding of why the terms ℓ - i + 22 and φ(i) appear. Theorem <ref>, first proved by Lipatov in <cit.>, has been rediscovered several times over the years, along with different proofs <cit.>. These proofs are quite technical, and do not easily provide a conceptual explanation of our formula for (σ^ℓ). In the rest of this section, we give a direct, combinatorial proof of Theorem <ref> that does not rely on Lipatov's formula. As an added benefit, our argument contributes a new proof of Lipatov's formula.Letdenote the set of partitions with all parts equal to 1, including the empty partition. We will encode partitions in Δ∖ by four integers, using ideas similar to those in <cit.>. For a nonempty triangular partition τ, let c= (a,b) be its rightmost removable cell. If a=1, there cannot be another removable cell to the left of c, so this is the only one. By Proposition <ref>, either b = 1 or the line containing the edge of (τ) adjacent to c from below must intersect (^2∖τ) above c, which implies that τ∈. 
If a > 1, consider the unique pair of relatively prime positive integers (d,e) such that (a - d, b + e)∈^2 ∖τ and (a - d', b + e')∉^2 ∖τ for any d',e'∈ with e'/d' < e/d (see Figure <ref>). Note that such a pair must always exist. For any τ∈Δ∖, define ϕ(τ) = (a,b,d,e).The map ϕ is a bijection between Δ∖ andQ={(a,b,d,e)∈^4| d < a, (d,e) = 1}. It is clear from the definition of ϕ that if τ∈Δ∖, then ϕ(τ)∈ Q.Let us first prove that ϕ is surjective. Given (a,b,d,e)∈ Q, let L be the line passing through (a,b) and (a-d,b+e), which has equation e(x-a) + d(y-b) = 0.Let L_ε be the line with equation (e - ε)(x-a) + d(y-b) = 0, where ε is a positive irrational number small enough so that there are no lattice points in the open region between L_ε and L in the first quadrant. Let τ be the triangular partition cut off by L_ε. We claim that ϕ(τ) = (a,b,d,e).Indeed, since ε is irrational, c = (a,b) is the only lattice point in L_ε. It follows that c is removable in τ, since a cutting line for τ∖{c} can be obtained with a small perturbation of L_ε. Clearly, (a - d, b + e)∈^2∖τ because this point lies above L_ε, and d < a. Additionally, for any d',e'∈ with e'/d' < e/d, we have (a - d', b + e')∉^2∖τ because there are no lattice points between L_ε and L. To show that c is the rightmost removable cell of τ, suppose for contradiction that there was another removable cell c' to the right of c. Then c' must lie weakly below L, since there are no lattice points (other than c) on L_ε or between L_ε and L. Any cutting line for τ∖{c'} would pass below c' and above c, forcing it to pass above (a - d, b + e), which is not in τ, reaching a contradiction. To prove that ϕ is injective, we will argue that if ν∈Δ∖ is such that ϕ(ν) = (a,b,d,e), then ν=τ. By the definition of ϕ, all points in ^2 lying strictly below L and weakly to the left of c=(a,b) belong to ν. By Lemma <ref>, there exists a cutting line passing through c, and the point (a - d, b + e) must lie above this line. Thus, all points in ^2 lying weakly to the left of c and weakly above L belong to ^2∖ν, while those lying weakly below L and to the right of c belong to ν. If b = 1, this implies that L_ε cuts off ν, and ν = τ follows. Otherwise, let L' be the line through c and the vertex of (ν) adjacent to c from the right. Since c is the rightmost removable cell of ν, the line L' must intersect (^2∖ν) by Proposition <ref>. Additionally, by Proposition <ref> and Lemma <ref>, this intersection must occur to the left of c, and so the point (a - d, b + e) must be weakly below L'. In other words, the slope of L', in absolute value, is greater than or equal to that of L. Since all of ν lies weakly below L', all the cells lying to the right of c and strictly above L must belong to ^2∖ν. It follows that L_ε cuts off ν, implying that ν = τ. Next we will determine the possible values of the image ϕ(τ)=(a,b,d,e) for partitions τ that fit inside an ℓ×ℓ square.The next two lemmas characterize, for given d,e≤ℓ with (d,e)=1, what are the possible values of a and b that are obtained. We treat the cases d<e and d≥ e separately, and they are illustrated in Figures <ref> and <ref>, respectively.For positive integers d,e,ℓ withd<e, define the triangleT_d,e,ℓ^<={(x,y)∈^2| x≥ d+1, y≥1, ex+dy≤ e+d(ℓ+1)}. Let τ∈Δ∖ with ϕ(τ)=(a,b,d,e), and suppose that d<e. Then τ∈Δ^ℓ×ℓif and only if (a,b)∈ T_d,e,ℓ^<.Let L be the line described in the proof of Lemma <ref>. 
As shown in that proof, a point belongs to ^2∖τ if and only if it lies strictly to the left of (a,b) and weakly above L or weakly to the right of (a,b) and strictly above L. Since d < e, requiring that (1, ℓ + 1) lies weakly above L forces (ℓ + 1, 1) to lie strictly above L.We deduce that τ∈Δ^ℓ×ℓ if and only if (1, ℓ + 1) lies weakly above L, that is, e + d(ℓ + 1) ≥ ae + bd. Noting thatb≥1 and a≥ d+1 (since d<a), this inequality issatisfied precisely when (a,b)∈ T_d,e,ℓ^<. For positive integers d,e,ℓ with e≤ d, define the triangleT_d,e,ℓ^≥={(x,y)∈^2| x≥ d+1, y≥1, ex+dy< e(ℓ+1)+d}.Note that the last inequality is strict, unlike in the definition of T_d,e,ℓ^<.Let τ∈Δ∖ with ϕ(τ)=(a,b,d,e), and suppose that e≤ d. Then τ∈Δ^ℓ×ℓ if and only if (a,b)∈ T_d,e,ℓ^≥.The proof is analogous to that of Lemma <ref>, except that since now e ≤ d, requiring that (ℓ + 1, 1) lies strictly above L forces (1, ℓ + 1) to lie strictly above L. We deduce that τ∈Δ^ℓ×ℓ if and only if (ℓ + 1, 1) lies strictly above L, that is, e(ℓ + 1) + d > ae + bd. Noting that b ≥ 1 and a≥ d + 1, this is equivalent to requiring(a,b)∈ T_d,e,ℓ^≥.To count the number of lattice points in the above triangles, it is convenient to pair upT_d,e,ℓ^< and T_e, e - d, ℓ^≥ for d<e. The next lemma could be proved by induction on ℓ, using some elementary number theory, but we prefer to instead present a geometric argument, illustrated in Figure <ref>, that provides more intuition for the resulting binomial coefficient.Let ℓ,d,e∈ such that 1≤ d < e ≤ℓ. Then,| T_d,e,ℓ^<∩^2 | + | T_e, e - d, ℓ^≥∩^2 | = ℓ - e + 22. Consider the affine transformation ^2→^2 given byu=-x-y+d+ℓ+3,v=x-e.This transformation is bijective, with inversex=v+e,y=-u-v+d-e+ℓ+3,and it preserves the lattice ^2. The image of T_e, e - d, ℓ^≥ under this transformation isT'_d, e, ℓ{(u,v)∈^2| v≥1, u+v≤ d-e+ℓ+2, eu+dv>d(ℓ+1)+e}. The triangles T_d,e,ℓ^< and T'_d,e,ℓ are disjoint, and their union is the triangle T_d, e, ℓ T_d,e,ℓ^<⊔ T'_d,e,ℓ ={(x,y)∈^2| x≥ d+1, y≥1, x+y≤ d-e+ℓ+2}.The lattice points in T_d, e, ℓ consist of horizontal rows of cardinality ℓ - e + 1, ℓ - e, …, 2, 1. Thus,| T_d,e,ℓ^<∩^2 | + | T_e, e - d, ℓ^≥∩^2 | = | T_d,e,ℓ^<∩^2 | + |T'_d, e, ℓ∩^2 | = | T_d, e, ℓ∩^2 | = ℓ - e + 22.We are now ready to give a self-contained proof of our formula for (σ^ℓ)=|Δ^ℓ×ℓ|. Let τ∈Δ∖ and ϕ(τ) = (a,b,d,e)∈ Q, as defined in Lemma <ref>. By Lemmas <ref> and <ref>, we have τ∈Δ^ℓ×ℓ if and only if d < e and (a,b)∈ T_d,e,ℓ^<, or d≥ e and (a,b)∈ T_d,e,ℓ^≥. Accounting for the ℓ+1 partitions in ∩Δ^ℓ×ℓ (including the empty partition), we get|Δ^ℓ×ℓ| = ℓ +1 + ∑_1≤ d < e ≤ℓ (d,e) = 1| T_d,e,ℓ^<∩^2 | + ∑_1≤ e' ≤ d' ≤ℓ (d',e') = 1|T_d',e',ℓ^≥∩^2 |. The bijection{(d,e)∈^2| d < e, (d,e) = 1 }→{(d',e')∈^2| d' > e', (d',e') = 1 }given by (d,e)↦ (e, e - d) allows us to combine the summations as|Δ^ℓ×ℓ| = ℓ +1 + | T_1,1,ℓ^≥∩^2 | + ∑_1≤ d < e ≤ℓ (d,e) = 1( | T_d,e,ℓ^<∩^2 | + | T_e, e - d, ℓ^≥∩^2 | ).The lattice points in the triangle T_1,1,ℓ^≥ consist of horizontal rows of cardinality ℓ - 1, ℓ - 2, … , 2, 1, and so |T_1,1,ℓ^≥| =ℓ2. Using Lemma <ref>, we obtain|Δ^ℓ×ℓ| = ℓ +1 + ℓ2 + ∑_1≤ d < e ≤ℓ (d,e) = 1ℓ - e + 22 = 1+ ∑_e = 1^ℓℓ - e + 22φ(e).We can now easily deduce Lipatov's enumeration formula for balanced words from our results. 
Indeed, by Lemma <ref>(b) and Theorem <ref>, |_ℓ|=1+(σ^ℓ)-(σ^ℓ-1)=1+∑_i = 1^ℓℓ - i + 22φ(i) - ∑_i = 1^ℓ - 1ℓ - i + 12φ(i) = 1+∑_i = 1^ℓ(ℓ - i + 1)φ(i),giving a combinatorial proof of Theorem <ref>.In the rest of this section, we show that similar formulas for the number of triangular partitions inside other rectangles can be derived from Theorem <ref>. However, we do not have a general formula for |Δ^h×ℓ| for arbitrary h and ℓ.For ℓ≥2,|Δ^ℓ×(ℓ-1)|= (σ^ℓ∖{(ℓ,1)})=1/2 + 1/2∑_i = 1^ℓ(ℓ - i + 1)^2φ(i). By Lemma <ref>, the triangular subpartitions of σ^ℓ∖{(ℓ,1)} are the triangular partitions that fit inside an ℓ×(ℓ-1) rectangle; equivalently, those that fit inside an ℓ×ℓ square and do not have width exactly ℓ. Those having width exactly ℓ are counted by equation (<ref>). Subtracting this formula from |Δ^ℓ×ℓ|, which is given by Theorem <ref>, we get|Δ^ℓ×(ℓ-1)| = 1+∑_i = 1^ℓℓ - i + 22φ(i) - 1/2 - 1/2∑_i = 1^ℓ(ℓ - i + 1)φ(i) = 1/2 + 1/2∑_i = 1^ℓ(ℓ - i + 1)^2φ(i).To give a formula for the number of partitions that fit inside an ℓ× (ℓ - 2) rectangle, we need the following lemma.For ℓ≥2, the number of triangular partitions of width ℓ-1 and height ℓ is ℓ - 1.A triangular partition τ of width ℓ-1 and height ℓ must contain cells (1, ℓ) and (ℓ-1, 1), and hence all the cells lying weakly below the line segment between these two cells. In particular, letting ν=(ℓ-1,ℓ-2,…,1,1), we have ν⊆τ. On the other hand, since τ∈Δ^ℓ×(ℓ-1),Lemma <ref> implies that τ⊆σ^ℓ∖{(ℓ,1)}. As a consequence, τ = ν∪ C, where C is a subset of the cells that belong to σ^ℓ∖{(ℓ,1)} but not to ν. Denote these cells by c_1,…,c_ℓ-2, where c_i=(i+1,ℓ-i).If c_i∈τ for some i∈[ℓ-2], then c_j∈τ for all j∈[i], since these cells lie on the line segment between c_i and (1,ℓ). Thus, C=∅ or C={c_1,c_2,…,c_i} for some i∈[ℓ-2]. In the latter case, it is easy to see that the partition ν∪ C is triangular, since we can find a cutting line through c_i with slope vector (1/2 + ε, 1/2 - ε) for small enough ε > 0. Since there are ℓ - 1 choices for C in total, the result follows. For ℓ≥3, |Δ^ℓ×(ℓ-2)|= 1-ℓ + 1/2∑_i = 1^ℓ((ℓ - i + 1)(ℓ - i) + 1)φ(i). Partitions in Δ^ℓ×(ℓ-2) are precisely those in Δ^ℓ×(ℓ-1) (which were counted in Corollary <ref>) whose width is not ℓ - 1.Partitions in Δ^ℓ×(ℓ-1) of width ℓ - 1 either have height at most ℓ-1 (counted in equation (<ref>) with ℓ-1 playing the role of ℓ), or they have height ℓ (counted in Lemma <ref>). Putting these formulas together,|Δ^ℓ×(ℓ-2)| = 1/2 + 1/2∑_i = 1^ℓ(ℓ - i + 1)^2φ(i)- (1/2 + 1/2∑_i = 1^ℓ - 1(ℓ - i)φ(i)) - (ℓ - 1)= 1-ℓ + 1/2∑_i = 1^ℓ((ℓ - i + 1)(ℓ - i) + 1)φ(i).§ FURTHER DIRECTIONS In this section we discuss possible generalizations of our work and avenues of further research. §.§ Triangular Young tableaux Given a partition λ of n, a standard Young tableau of shape λ is a filling of the cells of the Young diagram of λ with the numbers 1,2,…,n so that each number appears once, the rows are increasing from left to right, and the columns are increasing from bottom to top. The last two conditions are equivalent to the requirement that for all i∈[n], the cells with labels at most i form the Young diagram of a partition. One can consider the following triangular analogue of this notion. Let τ be a triangular partition of size n. A triangular Young tableau of shape τ is a filling of the cells of the Young diagram of τ with the numbers 1,2,…,n so that, for all i∈[n], the cells with labels at most i form the Young diagram of a triangular partition. 
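As a small computational check of this definition, the sketch below counts triangular Young tableaux of a two-row shape by brute force. It is an illustration of ours, not the authors' code; it relies on the elementary fact, used in the discussion that follows, that a two-row partition λ_1λ_2 is triangular exactly when λ_1 ≥ 2λ_2 - 1, and it compares the brute-force count with the closed product formula given below.

```python
from math import comb
from functools import lru_cache

def count_two_row_tableaux(t1, t2):
    # Count triangular Young tableaux of shape (t1, t2): add one cell at a time so
    # that every intermediate two-row shape (a, b) is again a triangular partition,
    # i.e., satisfies a >= 2*b - 1.
    @lru_cache(maxsize=None)
    def walk(a, b):
        if (a, b) == (t1, t2):
            return 1
        total = 0
        if a < t1 and a + 1 >= 2 * b - 1:      # append a cell to the bottom row
            total += walk(a + 1, b)
        if b < t2 and a >= 2 * (b + 1) - 1:    # append a cell to the top row
            total += walk(a, b + 1)
        return total
    return walk(0, 0)

def closed_form(t1, t2):
    # (t1 - 2*t2 + 2)/(t1 + 2) * binomial(t1 + t2 + 1, t2), the formula stated below
    return (t1 - 2 * t2 + 2) * comb(t1 + t2 + 1, t2) // (t1 + 2)

for t1, t2 in [(2, 1), (3, 1), (5, 2), (7, 3), (11, 6)]:
    assert t1 >= 2 * t2 - 1                    # the shape itself must be triangular
    assert count_two_row_tableaux(t1, t2) == closed_form(t1, t2)
```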
Similarly to how standard Young tableaux can be viewed as walks in Young's lattice, we can interpret triangular Young tableaux of shape τ as walks of length n in the Hasse diagram offrom the empty partition to τ.It is natural to ask if there is a triangular analogue of the hook-length formula counting standard Young tableaux of a given shape.Find a formula for the number of triangular Young tableaux of a given shape. We can solve this problem in the special case of two-row shapes.For a two-part triangular partition τ = τ_1τ_2, the number of triangular Young tableaux of shape τ isτ_1 - 2τ_2 + 2/τ_1 + 2τ_1 + τ_2 + 1τ_2.The proof of this result relies on the observation that an integer partition λ = λ_1λ_2 is triangular if and only if λ_1 ≥ 2λ_2 - 1. We deduce that triangular Young tableaux of shape τ = τ_1τ_2 are in bijection with lattice paths from (0,0) to (τ_1,τ_2), with steps (1,0) and (0,1), and staying weakly below the line x=2y-1. The bijection simply makes the ith step of the path be (1,0) (resp. (0,1)) if entry i is in the bottom (resp. top) row of the tableau. These paths can then be counted similarly to ballot paths to deduce the above formula.Using the Robinson–Schensted bijection, triangular Young tableaux with at most two rows can also be interpreted as 321-avoiding involutions with certain additional restrictions. §.§ Pyramidal partitions There is a higher-dimensional analogue of triangular partitions.A d-dimensional pyramidal partition is a finite subset of ^d that can be separated from its complement by a hyperplane in ℝ^d. These objects were first considered in <cit.>, and some bounds on their growth were given in <cit.>. By construction, 2-dimensional pyramidal partitions are the same as triangular partitions, whereas 3-dimensional pyramidal partitions can be viewed as a subset of plane partitions.One could ask which of the results from this paper generalize to pyramidal partitions. For example, the argument in the proof of Proposition <ref> also shows that a finite subset π⊂^d is a d-dimensional pyramidal partition if and only if (π)∩(^2∖π) = ∅. In addition, some experimentation suggests that the Möbius function of the poset of 3-dimensional pyramidal partitions ordered by containment only takes values in {-1,0,1}, as in the case of triangular partitions.However, some other properties of triangular partitions do not extend to higher dimensions.In work in progress, we have shown that for any fixed d≥3, there are d-dimensional pyramidal partitions with an arbitrary large number of removable or addable cells, in contrast with Lemma <ref>. It has also been observed by Vincent Pilaud (personal communication, 2023) that the poset of 3-dimensional pyramidal partitions is no longer a lattice. Understanding the intervals of this poset is an interesting avenue of research.§.§ Convex and concave partitions Two other generalizations of triangular partitions are convex partitions and concave partitions. Convex partitions are mentioned in OEIS entry A074658 <cit.>, due to Dean Hickerson, where they are counted for size up to 55. The notion of concave partitions appears in work of Blasiak et al. <cit.> in connection to Schur positivity.Convex (resp. concave) partitions can be defined as intersections (resp. unions) of triangular partitions, or equivalently as finite subsets of ^2 that lie below a convex (resp. concave) polygonal curve. 
In work in progress, we have shown that a partition is triangular if and only if it is convex and concave, and we have extended some of the results from Section <ref> to the lattices of convex and concave partitions.§.§ AcknowledgementsThe authors thank François Bergeron for introducing them to triangular partitions and for helpful discussions. SE was partially supported by Simons Collaboration Grant #929653. AG was partially supported by the mobility grants of CFIS-UPC, Generalitat de Catalunya and Gobierno de Navarra.
http://arxiv.org/abs/2312.16353v1
{ "authors": [ "Sergi Elizalde", "Alejandro B. Galván" ], "categories": [ "math.CO", "05A17 (Primary) 05A15, 05A19, 05A16, 68U05 (Secondary)" ], "primary_category": "math.CO", "published": "20231226231314", "title": "Triangular partitions: enumeration, structure, and generation" }
Inferring the Effect of a Confounded Treatment by Calibrating Resistant Population's Variance
Z. Qin and B. Karmakar, Department of Statistics, University of Florida, 102 Griffin-Floyd Hall, Gainesville, FL 32611 USA.
In a general set-up that allows unmeasured confounding, we show that the conditional average treatment effect on the treated can be identified as one of two possible values. Unlike existing causal inference methods, we do not require an exogenous source of variability in the treatment, e.g., an instrument or another outcome unaffected by the treatment. Instead, we require (a) a nondeterministic treatment assignment, (b) that conditional variances of the two potential outcomes are equal in the treatment group, and (c) a resistant population that was not exposed to the treatment or, if exposed, is unaffected by the treatment. Assumption (a) is commonly assumed in theoretical work, while (b) holds under fairly general outcome models. For (c), which is a new assumption, we show that a resistant population is often available in practice. We develop a large sample inference methodology and demonstrate our proposed method in a study of the effect of surface mining in central Appalachia on birth weight that finds a harmful effect.
Keywords: Causal inference, heterogeneous effect, non-ignorable treatment assignment, nonrandomized study, squared bias.
January 14, 2024
§ INTRODUCTION Although the no unmeasured confounders assumption <cit.> is central to the current research on causal inference from observational studies, the statistical methodology and empirical literature show diverging attitudes toward this assumption. A significant section of current methodological development assumes no unmeasured confounders and aims to find fine-tuned methods to estimate the overall or more specific effects. In contrast, nearly every published observational study discusses the possibility of bias in its inference due to unmeasured confounders. These discussions often follow standard analyses using linear and generalized linear models after matching on measured covariates or propensity weighting. The penultimate sentence of Rosenbaum and Rubin's commentary <cit.> on the 40th anniversary of their famous 1983 paper <cit.> reads, “It is important to control for measured covariates, but in any nonrandomized study for causal effects the key activity is the step from association to causation, where bias from unmeasured covariates remains a possibility.” When the no unmeasured confounders assumption is inaccurate, inferences from methods that work well under the assumption are generally unreliable and may have varying levels of sensitivity to potential unmeasured confounders. The no unmeasured confounders assumption, also called the ignorability assumption, states that the treatment and potential outcomes are conditionally independent given the measured confounders. The term conditional independence and the sentiment behind it are easily understood in English, which encourages arguments for whether the collected confounders are sufficient for an ignorable treatment assignment. However, a probabilist can caution us regarding the nuances of independence and conditional independence. For example, the assumption is often justified by collecting a large number of covariates.
But, in theory, conditioning on a subset of these covariates may make the treatment assignment ignorable, i.e., all confounders are measured, yet conditioning on all of those covariates may result in conditional dependence (see illustrative cases in <cit.>, problem 7.14). In the end, the ignorability assumption is usually untestable. In summary, absent detailed knowledge of the treatment assignment process, we cannot justify the ignorability assumption.Alternatives to the no unmeasured confounders assumption have been developed. Among them are sensitivity analysis methods assuming a bounded influence of unmeasured confounders <cit.> and instrumental variable (IV) methods using exogenous variability in treatment assignment <cit.>. Despite its growing literature, sensitivity analysis methods have seen limited adoption in empirical applications. This could be because sensitivity analysis procedures typically need to be derived separately for each inference method, and we need a better understanding of their relative performance across inference methods.A primary challenge in IV methods is finding a variable that qualifies as a good IV based on scientific understanding and statistical evidence. For specific problems, researchers have proposed instruments that are likely to be valid <cit.>. Note, however, that a valid instrument must also satisfy a conditional independence condition, among other conditions. We propose a new method for heterogeneous effect estimation from nonrandomized studies under the possibility of an unbounded amount of unmeasured confounding. Our method assumes much less and avoids any ignorability or exogenous variability conditions while costing us a bit of ambiguity in the inference.Specifically, we establish a two-point identification of the conditional average treatment effect on the treated (CATT) rather than a point identification. Thus, our method, when given an infinite amount of data, even with a biased treatment assignment,will give at most two possible values for the treatment effect. So, when given finite data, the method gives at most two possible confidence intervals, which may be combined into one confidence set, for the heterogeneous effect. The proposed method requires that there be a resistant population. This population is spared exposure to the treatment or, if exposed to it, is not affected by the treatment. The resistant population is used to calibrate the variability of control potential outcomes. Mathematically, the method needs an estimate of the conditional variance of the control potential outcomes given covariates. We show that resistant populations are available in various observational studies. §.§ Summary of our contributionsSection <ref> gives the identification result in Theorem <ref> using Assumption <ref>. These assumptions are discussed and examined in Section <ref>. There is some precedence to the idea of two-point identification. In a classification problem, <cit.> propose a flexible model that identifies either the probability of having the target label or the probability of not having the target label. In determining the optimal experimental design from a large class, <cit.> show that at least one of the stratified designs and the cluster-based design is optimal when the units do not interfere. Further, sensitivity analysis to unmeasured confounders identifies the treatment effect in an interval of values <cit.>.Our two-point identification proposes two treatment effect estimands. 
Section <ref> provides an algorithm, which involves several steps and numerical optimizations, for non-parametric estimation of our treatment effect estimands. Our final estimator is based on a few (constrained) local linear regression estimators. While the proposed framework in Section <ref> is open to other estimation methods,our local linear regression-based algorithm facilitates the derivation of the asymptotic distribution of the estimators and, hence, the inference of our treatment effect estimands. Section <ref> develops large sample theory and confidence intervals under technical Assumption <ref>. Interpreting the inference from the proposed method may require some change in perspective since it gives two estimates and two confidence intervals. The cost is that we do not know which of the two estimates is consistent, at least one of them is, or which of the two confidence intervals covers the treatment effect, at least one of them does. The benefit is we do not need to assume that all confounders are available during the analysis. Still, additional knowledge may be used to determine the right choice. For example, researchers routinely discuss the direction of possible bias due to unmeasured confounders in published observational studies. In practice, those discussions can inform to pick the smaller of the two estimates, with the corresponding confidence interval, when that bias is thought to be positive and vice versa. Section <ref> provides further remarks for practice. Section <ref> provides an estimation process for the average treatment effect on the treated (ATT) that removes the no unmeasured confounders assumption. This process requires an additional Assumption <ref>, but estimation accuracy for the ATT is much better than for the heterogeneous treatment effect function. Section <ref> validates our method and theory in simulation. Section <ref> presents an empirical study of the effect of surface mining in central Appalachia counties on infant birth weight using our proposed method. Section <ref> has further discussion.Code implementing our proposed methods and all data sets except the birth data for our empirical study are available at <https://github.com/bikram12345k/RPCOVA>.§ IDENTIFICATION OF THE CONDITIONAL AVERAGE TREATMENT EFFECT ON THE TREATEDSuppose (Y_i(1), Y_i(0), Z_i, X_i), for n units i=1,…,n, are drawn independently from a probability distribution, called the full data distribution. Potential outcomes Y_i(1) and Y_i(0) are univariate <cit.>,Z_i is a binary indicator for the treatment, and X_i is possibly multidimensional. We assume that SUTVA holds and we observe (Y_i, Z_i, X_i), with Y_i=Z_iY_i(1)+(1-Z_i)Y_i(0). The distribution of (Y_i, Z_i, X_i) is the observed data distribution. In an observational study, a parameter, perhaps of the full data distribution, is identifiable if it can be written as a function of the observed data distribution; any such function will be called an estimable function.Here we study the identification of the conditional average treatment effect on the treated using the observed data distribution without assuming an ignorable treatment assignment or any other exogenous variability.Specifically, we aim to estimate τ(x) := E(Y_i(1) - Y_i(0)| Z_i = 1, X_i = x),the conditional average treatment effect on the treated (CATT), under the following assumptions that we elaborate on in Section <ref> below.(Identification assumptions) (a)π(x) := (Z_i=1| X_i=x) ∈ (0,1). (b)Var(Y_i(0)| Z_i = 1, X_i=x) = Var(Y_i(1)| Z_i = 1, X_i=x). 
(c) σ_0^2(x) := Var(Y_i(0)| X_i=x) is known (later, assumed to be estimable). Let β(x) = E(Y_i| Z_i=1, X_i = x) - E(Y_i | Z_i=0, X_i = x) and σ^2(x) = Var(Y_i| X_i=x). Write Δ(β, τ)(x) := β(x) - τ(x) for the identification bias of the target function τ using the estimable function β. Under Assumption <ref>, {Δ(β, τ)(x)}^2 = β(x)^2 - (σ^2(x) - σ_0^2(x))/(π(x)(1-π(x))). Since the observed data distribution or Assumption <ref> does not identify the sign of Δ(β, τ)(x), Theorem <ref> provides a `two-point identification' of the conditional average treatment effect on the treated in the following sense. Let us abbreviate Δ(x)≡Δ(β, τ)(x). Define τ_-(x) = β(x) - |Δ(x)| and τ_+(x) = β(x) + |Δ(x)|. Then, under our identification assumptions, for a given x, τ(x) = τ_-(x) or τ(x) = τ_+(x), and both are estimable functions from the observed data distribution. We refer to this property as two-point identification of τ(x).
Building intuition. Without delving into technical details, we give an illustration here to build intuition for why the squared bias can be identified as in Theorem <ref>. Figure <ref> shows data from a simulated experiment where Y_i(1)=Y_i(0) is uniformly distributed over (-3,3). Hence, there is no treatment effect; τ(x)≡ 0. There are ten groups with identically distributed potential outcomes. Exactly half of the units in each group are assigned to the treatment. However, the treatment assignment is biased in the first five groups, with assignment probabilities proportional to the square of three plus the potential outcome, Pr(Z_i = 1 | Y_j(1), Y_j(0), j=1,…,50) = (3+Y_i(1))^2 / ∑_j (3+Y_j(1))^2, and is completely randomized in the last five groups. Thus, treatment has a spurious effect in the first five groups; β(x)> 0. Notice now, in each group, the variability of the black points, i.e., the treated units' outcomes, and that of the grey points, i.e., the control units' outcomes. In the unbiased groups, these two sets of points mix with each other; in fact, their theoretical variances are equal. Compared to this common variance, the variabilities of the points within both the treated and the controls seem smaller in the biased groups because these groups prefer the treatment when outcomes are higher. This phenomenon holds broadly, and Theorem <ref> mathematically connects the differences in the variabilities between the unbiased and biased cases to identify the magnitude of bias in effect estimation. Consider a technical illustration for two-point identification. Let (X_i, U_i, ϵ_i) be independent Normal(0, 1) random variables. Let Y_i(0) = X_i + |U_i| + ϵ_i and Pr(Z_i=1| X_i+U_i) = Φ(X_i+U_i), where Φ(·) is the standard normal distribution function. Finally, let Y_i(1) = X_i^2 + X_i + |U_i| + ϵ_i so that τ(x) = E( Y_i(1) - Y_i(0) | Z_i=1, X_i=x) = E( X_i^2 | Z_i=1, X_i=x)=x^2. Our observed data for unit i are (Y_i,Z_i,X_i). Figure <ref> shows the implication of Theorem <ref> in the identification of τ(·) as one of two possible curves. When x=0, β(0) = E(| U_i || Z_i = 1, X_i = 0) - E(| U_i || Z_i = 0, X_i = 0) is equal to τ(0). The identification bias, Δ(x)=E(| U_i || Z_i = 1, X_i = x) - E(| U_i || Z_i = 0, X_i = x), is negative for x>0 and positive for x<0; see the supplement for a proof. The bias occurs because of the unobserved confounder U_i in this simulation. The bottom-right panel of the figure calculates the absolute identification bias, |Δ(x)|, by taking the positive square root of the right-hand side of (<ref>). Consequently, we identify the CATT τ(x) by τ_+(x) when x>0 and by τ_-(x) when x<0.
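A short Monte Carlo sketch of this illustration follows. It is ours, not the authors' implementation (which uses constrained local linear regressions developed later); crude binning around a few values of x stands in for those regressions, and σ_0^2(x) is computed from the simulated control potential outcomes, playing the role of the resistant-population estimate required by assumption (c). At each x, one of the two reported values τ_-(x), τ_+(x) should be close to the true CATT x^2, mirroring the two curves in the figure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 400_000
X, U, eps = rng.normal(size=(3, n))
Z = rng.random(n) < norm.cdf(X + U)      # biased, non-ignorable treatment assignment
Y0 = X + np.abs(U) + eps                 # control potential outcome
Y1 = X**2 + X + np.abs(U) + eps          # treated potential outcome; true CATT is x^2
Y = np.where(Z, Y1, Y0)                  # observed outcome

def two_point(x, h=0.1):
    B = np.abs(X - x) < h                # crude binning instead of local linear regression
    beta = Y[B & Z].mean() - Y[B & ~Z].mean()
    pi, s2 = Z[B].mean(), Y[B].var()
    s0 = Y0[B].var()                     # sigma_0^2(x); an oracle here, in practice it is
                                         # estimated from a resistant population
    d2 = max(beta**2 - (s2 - s0) / (pi * (1 - pi)), 0.0)   # squared identification bias
    return beta - d2**0.5, beta + d2**0.5

for x in (-1.5, -0.75, 0.0, 0.75, 1.5):
    lo, hi = two_point(x)
    print(f"x={x:+.2f}  tau-={lo:+.3f}  tau+={hi:+.3f}  true CATT={x*x:.3f}")
```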
At the center, τ(0)=τ_+(0)=τ_-(0). §.§ Discussion of the identification assumptionsAssumption <ref>(a) is the well-known overlap assumption in the causal inference literature <cit.>. It is clear that when Assumption<ref>(a) fails, without additional assumptions,τ(x) cannot be partially identified in any bounded set. Assumption <ref>(b) says that, within the treated group, two potential outcomes have the same conditional variances.Specifically, Assumption <ref>(b) holds if we assume Y_i(1) - Y_i(0) = f(X_i) (or Y_i(1) - Y_i(0) |{Z_i = 1} = f(X_i)), which is saying that the treatment effect (on the treated) is a fixed effect and X_i includes all the effect modifiers. The effect modifiers describe the variability in the treatment effect when the effect is heterogeneous. There are practical benefits to assuming that all effect modifiers are observed because with incomplete information on effect modifiers, inference for a heterogeneous treatment effect would also give incomplete knowledge of the effect. However, Assumption 1(b) does not rule out a random treatment effect or require that all effect modifiers are observed. To see this write Y_i(z) = m_z(X_i) + ϵ_iz where m_z(X_i) = E(Y_i(z) | Z_i=1, X_i) and ϵ_iz = Y_i(z) - m_z(X_i) for z=0,1, so that the CATT τ(x) = m_1(x) - m_0(x). The equalities in the previous sentences are purely algebraic; they do not require any assumptions. In this case, Assumption <ref>(b) is requiring Var(ϵ_i0| Z_i = 1, X_i) = Var(ϵ_i1| Z_i = 1, X_i), i.e., equality of the residual variances from the two potential outcomes conditional on X_i. In some situations, domain knowledge does not allow Assumption <ref>(b).For example, consider Y_i(z) |{Z_i=1, X_i=x}∼ Poi(m_z(x)) for z=0, 1, where the outcomes are counting data with means associated with X_i. Under this model, the variances of Y_i(0) and Y_i(1) will not be equal unless m_0(x)=m_1(x). However, one could apply a variance stabilizing transformation <cit.>, a common tool of a statistician, to the data so that the assumption may be met. In this example, a square root transformation, i.e., √(Y_i) becoming our outcome of interest, should make the variances approximately equal. Like the conditional unconfoundedness assumption, Assumption <ref>(b) is counterfactual and hence generally untestable. In general, Assumption 1(b) is neither stronger nor weaker than the conditional unconfoundedness assumption. But if further given that Var(Y_i(0)| Z_i = 0, X_i=x) = Var(Y_i(1)| Z_i = 1, X_i=x), which is a factual and testable condition,Assumption <ref>(b) is equivalent to Var(Y_i(0)| Z_i = 0, X_i=x) = Var(Y_i(0)| Z_i = 1, X_i=x).The latter is saying that Y_i(0) has equal conditional variance in both the treated and control groups. This is also implied by the assumption Y_i(0) ⊥ Z_i | X_i, which is a version of the conditional unconfoundedness commonly assumed for identification of the CATT.The following example provides an illustration of when the no unmeasured confounders assumption does not hold yet Assumption 1(b) holds.Suppose we have observed some covariates X_i and there are unobserved covariates U_i1∼Unif(0,1) and U_i0∼ Unif(-1, 0) and let U_i = U_i0 + U_i1.Let the treatment be assigned (implicitly) according to U, such that P(Z_i = 1 | U_i = u, X_i=x) = 1 if u ≥ 0 and P(Z_i = 1 | U_i = u, X_i=x) = 0 otherwise. Then, letY_i(z) = m_z(X_i) + U_iz + ϵ_iz for some functions m_0(·) and m_1(·) and mean-zero random errors ϵ_i0, ϵ_i1. Here ϵ_0, ϵ_1 are independent of (X_i, U_i0, U_i1, Z_i). 
Note that,the CATT τ(x) = m_1(x) - m_0(x) + E(U_i1 - U_i0| Z_i=1, X_i=x), which depends on the conditional distribution of (U_i0, U_i1) given X_i. Since (U_i0, U_i1) ⊥̸Z_i | X_i, the potential outcomes Y_i(0), Y_i(1) are also dependent on Z_i when given X_i. Consequently, the conditional unconfoundedness assumption does not hold.The assumption that Y_i(0) ⊥ Z_i | X_i, usually assumed for identification of CATT, also does not hold.Sometimes, researchers may assume the conditional mean independence, i.e., E(Y_i(0)| X_i, Z_i=0)=E(Y_i(0)| X_i, Z_i=1), a milder assumption than the conditional independence.This assumption does not hold in this example either unless for particular choices of m_0(·) and m_1(·). On the other hand, the Assumption <ref>(b) holds in this example as long as Var(ϵ_i0) = Var(ϵ_i1).Assumption <ref>(c) is unique to our method and, to the best ofour knowledge, has not been made in the literature. Here we discuss many common situations where this assumption is fair. To guide practitioners, we alsopoint to a few cases where this assumption might fail. (a) Ignorable treatment assignment. For illustrative purposes, we start with a theoretically convenient but practically uninteresting case. Suppose Y_i(0)⊥ Z_i | X_i, which is part of the ignorable treatment assignment assumption <cit.>. Then σ_0^2(x)= Var( Y_i | Z_i=0, X_i=x) is estimable. Of course, two-point identification is uninteresting in this scenario because Y_i(0)⊥ Z_i | X_i along with assumption 1(a) and (b) implies point identification of τ(x) with β(x). However, nothing is lost by using Theorem <ref> because it gives Δ^2(x)=0; hence τ_±(x)=β(x). (b) Temporal variance stationarity. Suppose the treatment is introduced to the population at a particular point in time and the outcome is measured not too long after that. A big class of observational studies is of this form. Consider two illustrative examples: the effect of a get-out-the-vote campaign in a statewide election on voter turnout, and, the effect of a pain drug on productivity. A get-out-the-vote campaign is likely to be focused on districts where a higher benefit is expected. Thus, there is clear suspicion of confounding, which is difficult to remove by precisely measuring all the confounders. In such a study, we can estimate the conditional variance σ_0^2(x) from the district-level voter turnout data from the previous election(s) when there was no such campaign. Since the last election, there could be a general change in the rate of voter turnout, e.g., an increase in the average voter turnout. This would not affect our identification. We only need the variance of the voter turnout to remain the same as in the last election if the campaign had not been used in the current election in a counterfactual situation. Consider the second example. Doctors prescribe pain medication to patients who need them. Further, a patient decides to visit the doctor when they are conscious of the pain and feel a doctor's input will be beneficial. Thus, the confounding effect of these complex decisions may be difficult to untangle. Nevertheless, in this case, we can estimate σ^2_0(x), the conditional variance of individuals' productivity in the control group, based on their productivity on the day or week before they visited the doctors. Under temporal variance stationarity, the variance of the potential outcome in the control group is stationary before and after the treatment is introduced. 
Mathematically, this property is satisfied if Y_i(0) is a second-order stationary random process across a period around the time of exposure and outcome measurement <cit.>. However, second-order stationary processes assume a constant mean and a stationary auto-covariance over time. Temporal variance stationarity is thus more general than second-order stationarity. (c) Geographical variance stationarity. When the treatment pertains to long-term exposure or the outcome is measured a significant period after the exposure, temporal variance stationarity could be questionable. In some of those situations, the alternative may be geographical variance stationarity. Consider two examples: the effect of a voluntary professional development program on income after 5 years, and, the effect of surface mining for coal on the birth weight of babies <cit.>. Many employers offer voluntary professional development programs. We might be interested in the effect of professional development programs for early career individuals on an individual's income at age 45. We can estimate σ^2_0(x) from income data at age 45 of individuals who could not participate in such a program but are from the same locality and profession as those in the observational study. Surface mining in the Central Appalachia region of the eastern United States increased after 1989, partly resulting from the Clean Air Act Amendments of 1990, which made surface mining financially attractive <cit.>. We might be interested in estimating the effect on the birth weight of babies after a long period of mining activity in the region, say in 2010. In that case, birth weight data in 2010 from regions outside of the Appalachian Counties with mining permits can be used to estimate σ_0^2(x). We present our analysis for this study in Section <ref>. Similar to temporal variance stationarity, a second-order stationary process model for the control potential outcome over a geographical region, which includes the observational study as a smaller region, will justify our estimation of σ_0^2(x) under geographical variance stationarity. (d) Regression discontinuity. In a regression discontinuity design, the treatment assignment probability jumps at a threshold of a running variable <cit.>. Consider a fuzzy regression discontinuity design where the jump in the probability is less than 1, i.e., the treatment is not impossible before the threshold or assured after the threshold. We can estimate the treatment effect in such a design if the treatment assignment was ignorable near the threshold <cit.>. But this unconfoundedness assumption may fail. For example, consider studying the effect of a college scholarship on starting salary after graduation. A scholarship-granting institution may have a threshold on a college applicant's standardized test score that increases their scholarship chances. However, these institutions typically follow other discretionary practices, perhaps using their subjective judgment on applicants' motivations and abilities, in granting scholarships to certain applicants slightly below the threshold or refusing scholarships to some applicants slightly above the threshold of the test score. This process will create a fuzzy regression discontinuity but also potential confounding. We can use the two-point identification strategy in such cases by estimating σ^2_0(x) from the outcomes of the units whose running variable values make them ineligible for the treatment. 
In this example, college graduates from similar backgrounds who did not take a standardized test may be used for estimating σ^2_0(x). (e) Resistant population. Generally, we can estimate σ_0^2(x) if there is a `similar' population that was not exposed to the treatment. In the previous examples, we rationalized such populations. Note that we can also estimate σ_0^2(x) if that similar population was exposed to the treatment, yet the treatment is not expected to have an effect on this population. In the surface mining example, families could have moved out of the Central Appalachia region shortly after surface mining in the area started. Then, births in those families in later years are unlikely to be affected by the mining activity, as they were not exposed to the treatment for any significant amount of time. Birth data for this population can be used to estimate σ_0^2(x). We call a comparable population that was not exposed to the treatment or, if exposed, on which the treatment is not expected to have an effect, a resistant population. Notice that the observed and the resistant population need not have the same distribution of X_i, or even the same mean of Y_i(0), for us to calibrate σ^2_0(x) from the resistant population. Explicitly, let {(Y_i, X_i)}, for i=1,…, m, be a sample from the resistant population. Assumption <ref>(c) requires that Var(Y_i |X_i=x) = Var(Y_i(0)| X_i=x).[Alternatively, we can think of the resistant population and the target population as belonging to a single larger population. Then, Assumption <ref>(c) requires that Var[Y_i | X_i = x, S_i = 1] = Var[Y_i(0) | X_i = x, S_i=0], where S_i indicates the resistant population, while all quantities calculated for the observational study are understood to condition on S_i=0.] Now consider a couple of examples where one could make a mistake in the calibration of σ^2_0(x). First, σ^2_0(x) should not be estimated by an estimate of Var(Y_i| Z_i=0,X_i=x) without making the strong assumption that Var(Y_i(0)| Z_i=0,X_i)=Var(Y_i(0)| X_i). While this is a milder assumption than conditional ignorability of the treatment given X_i, it is nonetheless an untestable assumption and requires justification. Second, consider the professional development example from above. Such programs are likely aimed at individuals on the lower end of the income scale. While a resistant population here need not have low income, income distributions are typically more variable at higher income levels <cit.>. Thus, a resistant population should have outcome values similar to those of the observed population in cases such as these, where the variability is related to the mean of the outcome.§ INFERENCE FOR THE CONDITIONAL TREATMENT EFFECT§.§ Nonparametric estimation Consider (Y_i, Z_i, X_i), for i=1,…, n, i.i.d. from the observed data distribution. This and the following section provide consistent estimators of τ_±(x) and corresponding large sample confidence intervals. Our nonparametric estimation method uses local linear regression <cit.> to estimate the various components of τ_±(x). For simplicity, we assume here that X_i is a d-dimensional continuous variable. Our methods can be easily extended to incorporate discrete/categorical data following the existing literature on local linear methods. While standard local linear regression methods can give estimates of β(x) and σ^2(x), we have to ensure that when these estimates are combined to calculate Δ^2(x), the value is non-negative. To ensure this, we impose a constraint on the least squares problem in the local linear regression for calculating β(x).
Apart from ensuring the non-negativity of Δ̂^2(x), this has additional implications. First, the resultant estimate Δ̂^2(x) behaves differently when Δ(x)=0 vs. when Δ(x)>0. We provide asymptotic analysis under both of these conditions. To give some details, while Δ̂^2(x) (after appropriate centering and scaling) is asymptotically normal when Δ(x)>0, it is not normal in the case Δ(x)=0. We use these distributions to provide asymptotically valid confidence intervals for τ_±(x) in the next section. Second, the corresponding constrained optimization problem is non-convex. But it can be rewritten in a way that modern mathematical optimizers can solve the problem easily; we implemented the optimization using the Gurobi optimizer <cit.>. Suppose σ̂_0^2(x) and π̂(x) are estimators of σ_0^2(x) and π(x), respectively. Let K(·) denote a kernel function, a d-variate real-valued function, and let H_1, H_2, H_3 and H_4 be diagonal bandwidth matrices of size d. Our estimation steps are as follows. (I) Estimate m(x) = E( Y_i | X_i=x) as m̂(x)=μ̂_m, where (μ̂_m, ζ̂_m) = min_μ, ζ ∑_i=1^n (Y_i - μ - ζ^⊤ (X_i-x))^2 K( H_1^-1(X_i-x)). (II) Using m̂(x) from (I), let ê_i = Y_i-m̂(X_i). Now estimate σ^2(x) as σ̂^2(x)=μ̂_v, where (μ̂_v, ζ̂_v) = min_μ, ζ ∑_i=1^n (ê_i^2 - μ - ζ^⊤ (X_i-x))^2 K(H_2^-1(X_i-x)). (III) Finally, estimate β(x) as β̂_C(x)=μ̂_1-μ̂_0, where μ̂_z, z=0,1, come from the constrained optimization problem (μ̂_0, ζ̂_0, μ̂_1, ζ̂_1) = min_μ_0,ζ_0,μ_1,ζ_1 ∑_i=1^n Z_i(Y_i - μ_1-ζ_1^⊤(X_i-x))^2 K(H_3^-1(X_i-x)) + ∑_i=1^n (1-Z_i)(Y_i - μ_0-ζ_0^⊤(X_i-x))^2 K(H_4^-1(X_i-x)) subject to (μ_1-μ_0)^2 ≥ {σ̂^2(x) - σ̂_0^2(x)}/{π̂(x)(1-π̂(x))}. (IV) Calculate Δ̂^2(x) = {β̂_C(x)}^2 - {σ̂^2(x) - σ̂_0^2(x)}/{π̂(x)(1-π̂(x))}, and τ̂_±(x) = β̂_C(x) ± √(Δ̂^2(x)). Steps (I) and (II) are immediate generalizations of <cit.> on estimating a univariate conditional variance function to the multivariate case. As (<ref>) and (<ref>) are straightforward weighted least squares regressions, m̂(x) and σ̂^2(x) have explicit expressions in vector-matrix notation. However, β̂(x) in (III) does not have an explicit expression because of the constraint in (<ref>). The subscript C in β̂_C(x) is used to emphasize that the estimator is from the constrained optimization. We will call β̂_U(x) the estimate of β(x) from the corresponding unconstrained optimization. (A schematic implementation of steps (I)–(IV) is sketched below.) Estimation of σ^2_0(x) can follow steps (I) and (II) but with the resistant population data and their own bandwidths. Typically, σ^2_0(x) will be a simpler function than σ^2(x), since the latter includes additional variability from the treatment assignment. In the simplest case, σ^2_0(x) is a constant, i.e., the control potential outcomes are homoscedastic, which is a common model assumption in the literature. This common variance can be better estimated as the sample variance of the residuals from a regression of the resistant population's outcome data on x. The resistant population data may also be much larger than the observational study data. In that case, again, the error in estimating σ^2_0(x) using Steps (I) and (II) will be much smaller than that in estimating σ^2(x). §.§ Large sample confidence interval We provide large sample guarantees for the proposed estimators and then provide a method to calculate asymptotically valid confidence intervals. We start with our technical assumptions. Let m_z(x) = E ( Y_i | Z_i = z, X_i = x), z=0,1, denote the conditional means of Y_i in the treated and control groups, respectively.
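For concreteness, here is a minimal one-dimensional sketch of estimation steps (I)–(IV) above (in Python, assuming only numpy; the function names, the single common bandwidth, and the kernel choice are illustrative). Rather than passing the non-convex program in step (III) to a solver such as Gurobi, the sketch uses the fact, shown in the supplement, that the constrained solution either coincides with the unconstrained one or lies on the boundary of the constraint, and simply compares the two boundary fits.

import numpy as np

def kern(u):
    # Epanechnikov kernel; any symmetric kernel with bounded support would do.
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def loclin(y, X, x0, h):
    # Local linear regression of y on scalar X at x0; returns the fitted intercept.
    w = kern((X - x0) / h)
    D = np.column_stack([np.ones_like(X), X - x0])
    WD = D * w[:, None]
    coef, *_ = np.linalg.lstsq(WD.T @ D, WD.T @ y, rcond=None)
    return coef[0]

def boundary_fit(y, z, X, x0, h3, h4, c):
    # Joint treated/control local linear fit with the intercepts tied by mu1 - mu0 = c.
    w = np.where(z == 1, kern((X - x0) / h3), kern((X - x0) / h4))
    D = np.column_stack([np.ones_like(X), (X - x0) * z, (X - x0) * (1 - z)])
    WD = D * w[:, None]
    coef, *_ = np.linalg.lstsq(WD.T @ D, WD.T @ (y - c * z), rcond=None)
    resid = y - (D @ coef + c * z)
    return c, np.sum(w * resid**2)          # beta_C = c on this boundary; weighted SSE

def tau_pm(y, z, X, x0, h, sigma0_sq, pi_x):
    # y, X: float arrays; z: 0/1 treatment indicators; sigma0_sq, pi_x: external estimates.
    # Steps (I)-(II): residuals from m-hat, then local linear fit of squared residuals
    # (the O(n^2) loop is kept for clarity, not speed).
    m_at_X = np.array([loclin(y, X, xi, h) for xi in X])
    sig_sq = max(loclin((y - m_at_X)**2, X, x0, h), 0.0)
    s_hat = (sig_sq - sigma0_sq) / (pi_x * (1.0 - pi_x))
    # Step (III): unconstrained beta first; fall back to the constraint boundary if needed.
    beta_u = loclin(y[z == 1], X[z == 1], x0, h) - loclin(y[z == 0], X[z == 0], x0, h)
    if beta_u**2 >= s_hat:
        beta_c = beta_u
    else:
        root = np.sqrt(max(s_hat, 0.0))
        beta_c = min((boundary_fit(y, z, X, x0, h, h, c) for c in (root, -root)),
                     key=lambda t: t[1])[0]
    # Step (IV).
    delta_sq = max(beta_c**2 - s_hat, 0.0)
    return beta_c - np.sqrt(delta_sq), beta_c + np.sqrt(delta_sq)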
(Technical assumptions) (a) K(u_1,…,u_d) = ∏_i=1^dk(u_i), where k is a density function symmetric about zero with bounded support on the real line and finite fourth moment. H_i = h_i I_d, i=1,2,3,4 with h_1 ≍ h_2≍ h_3≍ h_4 ≍ h, for some h such that h→ 0 and nh^d→∞ as n→∞.(b) f(x), the density function of X_i, π(x) and σ^2(x) are positive and continuous. E(Y^r | X=x) and E(Y^r | X=x, Z=z) are continuous for r=3,4 and z=0,1. The second derivatives of m(x), m_0(x), m_1(x) and σ^2(x) are uniformly continuous on an open set containing x. E(Y^4) <∞. (c) π(x) p→π(x) and (nh^d)^-1/2 σ^2_0(x) p→ 0 as n→∞.Among these assumptions, (a) and (b) are standard assumptions adapted from the local linear regression literature. Regarding (c), any reasonable estimator π(x) should be consistent for π(x), while its second part is justified by Remark <ref>.Before stating our distributional convergence result, we introduce some notation. Let ϵ_i = ( Y_i - m(X_i) )/σ(X_i) for i=1,…, n and λ^2(x) = E{ (ϵ_i^2-1)^2 |X_i=x}.Also, denoteν_z^2(x) = Var(Y_i |Z_i = z, X_i = x), and let ξ_z,i= (Y_i-m_z(X_i) )/ν_z(X_i), andη_z(x) = E{ξ_z,iϵ_i^2 |Z_i = z, X_i = x} for z=0,1 and i=1,…, n.Write θ_K^d = ∫ K(u)^2 du. Finally, with h^d =(∑_j=2^4 |H_j|)/3, let |H_j|/h^d→α_j^d as n →∞ for j=2,3,4.Definev_Δ^2:= 4 θ_K^d β(x)^2 ( ν_1^2(x)/α_3^d π(x)f(x) + ν_0^2(x)/α_4^d (1-π(x))f(x))+ 4 θ_K^d β(x) σ^2(x)/π(x)(1-π(x)) f(x)( ν_0(x) η_0(x)/α_2^d/2α_4^d/2 - ν_1(x) η_1(x)/α_2^d/2α_3^d/2) + θ_K^d σ^4(x) λ^2(x)/α_2^d π(x)^2(1-π(x))^2 f(x),and v_τ,±^2:= θ_K^d τ_±(x)^2/Δ^2(x)( ν_1^2(x)/α_3^d π(x)f(x)+ ν_0^2(x)/α_4^d (1-π(x))f(x))+ θ_K^d τ_±(x) σ^2(x)/π(x)(1-π(x)) f(x)Δ^2(x)×( ν_0(x) η_0(x)/α_2^d/2α_4^d/2- ν_1(x) η_1(x)/α_2^d/2α_3^d/2) + θ_K^dσ^4(x) λ^2(x)/4α_2^dΔ^2(x)π(x)^2(1-π(x))^2 f(x). Suppose that Assumptions <ref> and<ref> hold.(a) When E(Y_i(0)| Z_i=1, X_i=x) = E(Y_i(0)| Z_i=0, X_i=x), assuming that nh^d+4→ 0 as n →∞,(nh^d)^1/2( Δ^2(x) + O(h^2)) d⟶1/2δ_0 + 1/2 |N(0, v_Δ^2)|,as n →∞,where δ_0 is the degenerate distribution at 0 and |N(0,v^2)| is the distribution of absolute value of a N(0,v^2) random variable. (b) When E(Y_i(0)| Z_i=1, X_i=x) ≠ E(Y_i(0)| Z_i=0, X_i=x), (nh^d)^1/2( Δ^2(x) -Δ^2(x) + O(h^2) )d⟶ N(0, v_Δ^2),as n →∞,and(nh^d)^1/2( τ_±(x) - τ_±(x) + O(h^2)) d⟶ N(0, v_τ,±^2),as n →∞.Suppose that Assumptions <ref> and<ref> hold. Then, at least one of the following is true:(a) τ_+(x)p→τ(x), as n →∞, (b) τ_-(x)p→τ(x), as n →∞.Next, to construct a confidence interval, note that one can constructconsistent estimators of v_Δ^2 and v_τ,±^2 by plugging in consistent estimators of its components; see Remark <ref> in Section <ref> for details.So, let v_Δ(x) be a consistent estimator of v_Δ and v_τ,±(x) be consistent estimators of v_τ,±(x).Then, using Theorem <ref>, if we knew that E(Y_i(0)| Z_i=1, X_i=x) ≠ E(Y_i(0)| Z_i=0, X_i=x), we could create confidenceintervals for τ_±(x) as[τ_± - Φ^-1(1-α/2)×v_τ,±(x)/√(nh^d), τ_± + Φ^-1(1-α/2)×v_τ,±(x)/√(nh^d)]which would have an asymptotic coverage rate of 100(1-α)% byTheorem <ref> (b).Unfortunately, we do not know whether in fact E(Y_i(0)| Z_i=1, X_i=x) = E(Y_i(0)| Z_i=0, X_i=x). Thus, we break down our confidence interval construction into multiple steps, where the first step uses an appropriate testing procedure to test for the nullhypothesis H_0: E(Y_i(0)| Z_i=1, X_i=x) = E(Y_i(0)| Z_i=0, X_i=x). It uses the above interval when the hypothesis is rejected. Algorithmically, the 100(1-α)% confidence intervals for τ_±(x) are constructed using the three steps below. 
Theorem <ref> states the asymptotic validity of our confidence statements based on the resulting confidence intervals.(I)Fix δ∈(0,1). Check for the inequality Δ^2(x) > v_Δ(x) / (nh^d)^.5(1-δ).(II)If the inequality holds, use the confidence interval in (<ref>).(III)If the inequality fails to hold, calculate the confidence interval for τ_±(x) as [β_U(x) - Φ^-1(1-α/2)×v_β,U(x)/√(nh^d), β_U(x) + Φ^-1(1-α/2)×v_β,U(x)/√(nh^d)],where v_β,U(x) is a consistent estimator ofv_β,U^2(x)=θ_K^d[ ν_1^2(x){α_3^d π(x)f(x)}^-1+ ν_0^2(x){α_4^d(1-π(x))f(x)}^-1]; Φ denotes the standard normal cumulative distribution function.The confidence interval in (<ref>) can be replaced by any other confidence interval for τ(x) under the no unmeasured confounders assumption. We took δ=1/3 in our implementation.Suppose that Assumptions <ref> and<ref> hold. Further assume that nh^d+4→ 0 as n →∞. Then, for the confidence intervals calculated following steps (I)–(III), at least one of the following is true: (a)the confidence interval for τ_+(x) has an asymptotic 100(1-α)% coverage for τ(x),(b)the confidence interval for τ_-(x) has an asymptotic 100(1-α)% coverage for τ(x). Simultaneous coverage. Above, we developed a method for estimation and confidence interval construction at a given x. One might want simultaneous confidence intervals for a range of values of x. Under assumptions stronger than our Assumption <ref>, one can derive convergence results similar to Theorem <ref>uniformly on a range of x; see, e.g., <cit.>. Then, following the above steps, one can provide uniform inference. We do not pursue this exercise in this paper.§.§ Some practical remarks Constant treatment effect. There are many practical benefits to a constant additive treatment effect assumption <cit.>. A constant treatment effect is often a convenient starting point for establishing causality. Additionally, in many situations, identification of a constant treatment effect has immediate practical use <cit.>.Our identification and estimation methods using resistant population variance calibration simplify considerably under a constant treatment effect, leading to practically handy formulas. For this discussion let μ_t=E(Y_i| Z_i=1), μ_c =E(Y_i| Z_i=0), σ_t^2=var(Y_i| Z_i=1), σ_c^2=var(Y_i| Z_i=0) and p=(Z_i=1). Then, some calculations using (<ref>) show that Δ^2 = σ_0^2/p(1-p)-{σ_t^2/1-p+σ_c^2/p}.Hence, τ_±=(μ_t-μ_c) ±{p(1-p)}^-1/2[σ_0^2-{pσ_t^2+(1-p)σ_c^2}]^1/2. We can estimate them byτ_± = (Y_t-Y_c) ±n/√(n_tn_c)√(σ_0^2 - S^2_pooled),where (Y_t-Y_c) is the difference of the sample average outcomes between the treatment and control group of sample sizes n_t and n_c respectively, and S^2_pooled, defined as in classical statistics, is the pooled sampled variance of the two groups, i.e., S^2_pooled={(n_t-1)S_t^2+(n_c-1)S_c^2}/(n_t+n_c-2) with S_t^2 and S_c^2, the corresponding sample variances. Estimators (<ref>) are valid irrespective of the observed or unobserved confounding as long as the treatment effect is constant. In practice, one might use max{0,σ_0^2 - S^2_pooled} instead of σ_0^2 - S^2_pooled under the square root in (<ref>).Next, notice that S^2_pooled and Y_t-Y_c are approximately independent for large samples. Hence, assuming σ_0^2 is calculated independently from the observational sample, the adjustment due to confounding in the observational study is approximately independent of the basic estimator Y_t-Y_c, which one could use if we had a completely randomized experiment. 
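For concreteness, a minimal sketch of the constant-effect estimators τ̂_± above (in Python, assuming numpy; the argument names are illustrative, and sigma0_sq is the variance calibrated from the resistant population; the truncation at zero under the square root follows the remark above):

import numpy as np

def tau_pm_constant(y, z, sigma0_sq):
    # y: observed outcomes, z: 0/1 treatment indicators, sigma0_sq: resistant-population variance.
    yt, yc = y[z == 1], y[z == 0]
    nt, nc, n = len(yt), len(yc), len(y)
    s2_pooled = ((nt - 1) * yt.var(ddof=1) + (nc - 1) * yc.var(ddof=1)) / (nt + nc - 2)
    adj = (n / np.sqrt(nt * nc)) * np.sqrt(max(sigma0_sq - s2_pooled, 0.0))
    diff = yt.mean() - yc.mean()
    return diff - adj, diff + adj          # (tau_minus, tau_plus)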
Further,we can get the estimated large sample variance of τ_± asS_t^2/n_t+S_c^2/n_c+n^2/4n_tn_c(σ_0^2 - S^2_pooled)×[var(σ_0^2) + n_t/n^2(M_4t-S_t^4)+n_c/n^2(M_4c-S_c^4)],where M_4t and M_4c are the fourth central sample moments of the treatment and control group outcomes, respectively. Thus, the confounding in the study leads to an increase in the standard error over the standard error of a two-sample t-test, {S_t^2/n_t+S_c^2/n_c}^1/2.While the above formulas do not use any covariates, additional covariate information can be used in estimation and inference. One way of covariate adjustment would be to use formulas (<ref>) and (<ref>) on residuals Y_i-g(X_i), for some function g(·), instead of on raw outcomes Y_i. A common choice for g is the linear function a^⊤ X_i.Two is less than infinity. There is a practical question of how one would use the inference resulting in two confidence intervals using the proposed method. We emphasize the benefit of the method that it liberates us from assuming treatment selection based on observables. A sensitivity analysis method also relaxes this assumption and, ideally, it gives bounds on the treatment effect estimates and confidence intervals <cit.> — although, to our knowledge, no formal sensitivity analysis method has been developed for heterogeneous treatment effects. However, an increasing amount of data affects these two methods differently. For an externally set value for the maximum amount of bias from unmeasured confounding, a sensitivity analysis gives a bound on the effect estimate. This bound also grows wider, and eventually to infinity, the more we relax the ignorability assumption. The proposed method, on the other hand, with a larger amount of data, pinpoints the treatment effect to two numbers.If no ex-ante knowledge can pick the correct number among the two, conservatively, the researcher can still estimate that the effect is between the interval of these two numbers. Similarly, being conservative, the two confidence intervals can be combined into a single interval, with the lower limit being the smallest of two lower limits and the upper limit being the largest of two upper limits. This will be a wider confidence interval but will not explode to infinity. Similar to sensitivity analysis, this combined interval will also become narrower the more likely the ignorability assumption is. Still, in contrast to sensitivity analysis, we would not need to know or specify the amount of bias from unmeasured confounding.Standard error calculation.Constructions of confidence intervals, e.g., (<ref>) and (<ref>), require estimation of the asymptotic variances, which contain quantities that can be or have been consistently estimated, such as π(x), f(x), β(x) and σ^2(x), as well as some quantities whose consistent estimation requires some more work, e.g., λ(x), η_0(x) and η_1(x).Here, we propose a method to consistently estimate the latter group of quantities so that we can estimate the asymptotic variances using plug-in estimators.Noting that λ^2(x) = E { (ϵ_i^2 - 1)^2 |X_i = x} = E {ϵ_i^4 |X_i = x} -1, and that λ(x) appears in the asymptotic variance formulas in terms of σ^4(x) λ^2(x),we can estimate the two components E {σ^4(X_i) ϵ_i^4 |X_i = x} andσ^4(x), separately. We already have an estimator σ^2(x) for the latter. 
Now we propose to estimate E {σ^4(X_i) ϵ_i^4 |X_i = x} using an Nadaraya–Watson type estimator 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) e_i^4/1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)),where e_i = Y_i - m(X_i), i=1,…,n are the residuals as in step (II) of the main estimation procedure and H_v = h_v I_d for a bandwidth h_v. We show in the supplement material that the proposed estimator is consistent under certain regularity conditions.For z=0 or 1,σ^2(x) ν_z(x) η_z(x) = E(σ^2(X_i) ν_z(X_i) ξ_zi (ϵ_i^2 - 1) |Z_i = z, X_i = x) = E(σ^2(X_i) ν_z(X_i) ξ_ziϵ_i^2 |Z_i = z, X_i = x).Let e_i^(z) = Y_i - m_z(X_i), i=1,…,n, be the residuals obtained in step (III). Then we similarly propose to estimate E(σ^2(X_i) ν_z(X_i) ξ_ziϵ_i^2 |Z_i = z, X_i = x) using the Nadaraya–Watson type estimator 1/nh_v^d∑_i: Z_i = zK(H_v^-1(X_i - x)) e_i^2 e_i^(z)/1/nh_v^d∑_i:Z_i=zK(H_v^-1(X_i - x)).The supplement shows that the resultant variance estimators are consistent under certain regularity conditions. Alternatively, other nonparametric methods that consistently estimate conditional expectation functions may be used to estimate these quantities. Bandwidth selection. There is a large literature on bandwidth selection in local linear regressions <cit.>. In our numerical results for estimation, we calculate each of the bandwidths separately using a leave-one-out cross-validation method. We used the <np> package <cit.> that implements the leave-one-out cross-validation to select the bandwidths.<cit.> showed that, under mild regularity assumptions, the relative difference between the leave-one-out cross-validation estimator of the bandwidth and the optimal bandwidth goes to zero,and that the local linear regression estimator using the cross-validated bandwidth attains the same asymptotic normality as the estimator using the oracle bandwidth. Although it is a two-step procedure to estimate the conditional variance σ^2(x), <cit.> claimed that the bias contributed by the error in the estimation of m(x) is of a higher order of infinitesimal compared to the bias of m(x) itself. This fact enables us to use the cross-validated bandwidth in the first step without introducing any significant bias to σ^2(x), as well as to cross-validate the bandwidth in the second step to consistently estimate the optimal bandwidth.Lastly, to enforce undersmoothing in our confidence interval calculations as required in Theorem <ref>, we crudely take a power, with an exponent slightly larger (or smaller, if the bandwidth was greater than one) than 1, of the cross-validation based bandwidths, h_1,h_2,h_3 and h_4. This method works well, as seen in our numerical results in Section <ref>. § TWO-POINT IDENTIFICATION AND INFERENCE FOR THE ATEThe method developed above gives a two-point inference for the conditional average treatment effect on the treated, τ(x). But we need to be aware of some practical challenges associated with any nonparametric inference method for τ(x). First, it is difficult to summarize and visually inspect the effect estimates and confidence intervals when the dimension of x is greater than two. Second, the methods are cumbersome in higher dimensions — they take longer time to compute and they are slow to converge. Many of these challenges are unavoidable in nonparametric inference for the CATT. 
To avoid these challenges, now we consider inference for the average treatment effect on the treated (ATT), which is a single number defined as ATT=E(Y_i(1)-Y_i(0) | Z_i = 1)=E(τ(X_i)).When the treatment has a highly heterogeneous effect, ATT hides a lot of information about the effect and can vary quickly when the distribution of X_i changes. Still, the ATT has been studied extensively in the causal inference literature.Our Assumption <ref> is not sufficient for two-point identification of the ATT because the bias can change sign with x. Recall that we cannot identify the direction of the bias Δ(β, τ)(x) just by Assumption <ref>. Instead, we provide a sufficient condition, Assumption <ref>, below that excludes the possibility of an interaction between the sign of Δ(β,τ)(x) and x and thus allows two-point identification of the ATT. Without this assumption, researchers could use domain knowledge to justify that the sign of Δ(β,τ)(x), i.e., the direction of the bias in estimating the conditional average treatment effect using the difference of two regression functions (β(x)), does not change with x before using the method proposed below for the ATT.(Identification assumptions for the ATT) (a) The support of X is a connected set.(b) Δ(β, τ)^2(x) is continuous on its support.(c) For all j=1,…,d, the functionx_j↦ E [ Δ(β,τ)^2(X_1,…,X_j-1,x_j, X_j+1,…, X_d) ] is positive on the support of the jth coordinate of X.Assumption <ref> undoubtedly puts restrictions on the observational study. But, on the positive side, this assumption is checkable in the sense that the assumption is on the observed data distribution.Assumptions <ref> and <ref> entail the identification of the magnitude of the bias as|ATT - Eβ(X)| = E |Δ(β,τ)(X)|.Thus, defineATT_- = Eβ(X)- E √(Δ(β,τ)^2(X)),and ATT_+ = Eβ(X)+ E √(Δ(β,τ)^2(X)).By Theorem <ref>,ATT is equal to either ATT_- or ATT_+. We can estimate ATT_± by separately estimating Eβ(X) and E |Δ(β,τ)(X)|. Various methods are available for the estimation of Eβ(X), including the inverse probability weighted estimator, augmented inverse probability weighted estimator,and nonparametric regression-based estimators.Particularly, n^-1∑_i=1^nβ_C(X_i) and n^-1∑_i=1^nβ_U(X_i) are nonparametric regression based estimators of Eβ(X). The same way, we can estimate E |Δ(β,τ)(X)| by n^-1∑_i=1^n{Δ^2(X_i)}^1/2.Asymptotic properties of the estimator. Estimators of ATT_± that are immediately available from our CATT estimation method are n^-1∑_i=1^nτ_±(X_i). The mean squared errors of n^-1∑_i=1^nτ_±(X_i) can be shown to decrease at the rate n^-1, which does not depend on d, under reasonable regularity conditions.Theorem S1 in the supplement gives a formal result for asymptotic normality of √(n)(n^-1∑_i=1^nτ_±(X_i)-ATT_±). The parametric rate of convergence of n^-1∑_i=1^nτ_±(X_i) is demonstrated by simulation later in Section <ref>.§ SIMULATION: DEMONSTRATION AND COMPARISON§.§ Estimation of the conditional average treatment effect on the treated We demonstrate the estimation of the conditional average treatment effect on the treated based on two-point identification using simulation. We consider three data-generating processes that vary in their specification of the CATT function and confounding effect. Throughout this subsection, we have n=3000 and d=1, where the univariate X_i is drawn from a standard normal distribution. 
In all three simulation models, let U_i be an unmeasured confounder distributed as Unif(0,1) and let π(x,u)=(Z=1| X_i=x,U_i=u).Simulation model 1 (Linear effect modification)Set log{π(x,u)/(1-π(x,u))} = xu+x/2+3, Y_i0 is normal with mean 4-6U_i+X_i and standard deviation 0.5, and τ(x)=x/2+3/2. Simulation model 2 (Quadratic effect modification)Set log{π(x,u)/(1-π(x,u))} = u+4u^2+x/2, Y_i0 is normal with mean 1+6U_i+X_i and standard deviation 0.5, and τ(x)=x^2-2x+1/2. Simulation model 3 (Cubic effect modification and heteroscedastic Y_i0)Set log{π(x,u)/(1-π(x,u))} = u+4u^2+x/2. As before, Y_i0 is normal with mean 1+4U_i√(|X_i|)+X_i and standard deviation 0.5, hence σ_0^2(x)=1/4+4|x|/3, and τ(x)=x^3/2.We compared our estimation method, as described in the four steps in Section <ref>, with state-of-the-art heterogeneous treatment effect estimation methods: causal forest <cit.>, Bayesian causal tree <cit.> and X-learner <cit.>. We estimated π(x) in our method using a one-layer feed-forward neural network with 8 hidden neurons. The results of the simulation study are presented in Figure <ref> for simulation models 1 and 2 and Figure <ref> for simulation model 3. While there are differences in the results of the three competing methods, in Figure <ref>, all of these methods give biased estimates of τ(x) because of the presence of the confounder U_i which is highly correlated with the outcome. The confounding effect is also visible in the estimates τ_+(x) and τ_-(x) as they are two distinct sets of estimated curves. The ambiguity between these two estimates is also visible from comparing the two models. In simulation model 1, τ_+(x) covers the true CATT while in model 2, τ_-(x) covers the true τ(x).Figure <ref>, which corresponds to simulation model 3, shows interesting effects of the unmeasured confounder. The identification bias |Δ(β,τ)(x)| is smaller for negative values of x and gets larger with larger, positive values of x. Thus, τ_-(x) and τ_+(x) are close for x values closer to -2 and are different for larger values of x. The confounding also affects the variability of the estimates τ_±(x), which is comparable to those of the competing methods when the confounding effect is small and is larger than those of the competing methods for larger confounding biases. However, the competing methods give biased estimates in the latter situation. §.§ Inference for the conditional average treatment effect on the treatedWe next assess the coverage of the proposed inference method and compare it against that of two frequentist methods, causal forest, and X-learner, using simulation. Our data-generating process generates (i) an unmeasured confounder U_i distributed as Unif(0,1), (ii) X_i as a 5-dimensional normal, independent of U_i, with mean 0 and variance I_5,(iii) Y_i0 = 1+2U_i + 5X_i + ϵ_iX_i1/2+η_i, where ϵ_i and η_i are independent standard normal random variables and independent of X_i, U_i, (iv) Y_i1 = Y_i0 + τ(X_i) where τ(X_i)=1+5(X_i+X_i1)/12, and (iv) Z_i based on (Z_i=1| X_i, U_i, ϵ_i, η_i) = [1+exp{-(X_i/2 + 4U_i -1.5)}]^-1. This logistic model for the treatment assignment ensures that the π(X_i) is reasonably away from 0 and 1. Additionally,σ^2_0(X_i) = X_i1^2/4 varies with the covariate.We evaluate the confidence intervals provided by the methods for τ(x) on a grid of points 400 fixed x values. In particular, we calculate the empirical coverage for τ(x) for each x on this grid using 200 simulated data sets. 
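A sketch of this data-generating process (in Python, assuming numpy) is given below. Wherever the display above uses the 5-dimensional X_i in a scalar position, the sketch substitutes the coordinate average X̄_i; that reading, together with the function name, seed, and example sample size, is our assumption rather than part of the original specification.

import numpy as np

def simulate_dgp(n, rng):
    U = rng.uniform(size=n)                               # unmeasured confounder
    X = rng.standard_normal((n, 5))                       # X_i ~ N_5(0, I_5)
    Xbar, X1 = X.mean(axis=1), X[:, 0]
    eps, eta = rng.standard_normal(n), rng.standard_normal(n)
    Y0 = 1 + 2 * U + 5 * Xbar + eps * X1 / 2 + eta        # control potential outcome
    tau = 1 + 5 * (Xbar + X1) / 12                        # heterogeneous treatment effect
    Y1 = Y0 + tau
    p = 1.0 / (1.0 + np.exp(-(Xbar / 2 + 4 * U - 1.5)))   # confounded assignment probability
    Z = (rng.uniform(size=n) < p).astype(int)
    Y = np.where(Z == 1, Y1, Y0)
    return Y, Z, X

Y, Z, X = simulate_dgp(4000, np.random.default_rng(1))    # one simulated data set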
To implement our method, we use the Bayesian Additive Regression Tree model <cit.> with its default specifications to estimate π(x), σ^2_0(x) and all the unknown functions in our asymptotic covariance formula. We use a resistant population of size n^1.1 to estimate σ^2_0(x) when our observational study sample size is n.The simulation results for the coverage rates for τ(x) and average CI lengths are given in Figure <ref>. The figure shows that the empirical coverage rate for the proposed estimator τ_-(x) is close to the target 0.95. However, there is a clear finite sample undercoverage. The other methods have very poor coverage. The lengths of the intervals are sometimes comparatively larger for the proposed estimator than the others. For example, when n=8000, the averages of the length of all the confidence intervals are 0.825 and 0.717 for the proposed method and causal forest, respectively, which are comparable; the average length is 0.098 for the X-learner. As noted in Remark 4, albeit under the no covariates case,wider intervals are expected for τ_-(x) compared to intervals for the estimators for the β(x).X-learner has poor coverage, and thus, its short intervals are misleading.§.§ Estimation of the average treatment effect on the treated Section <ref> showed that we can provide two-point identification of the average treatment effect on the treated under Assumption <ref> when the sign of the bias does not interact with the effect modifiers. Thus, we have two-point estimates of the ATT as ATT_± = n^-1∑_i=1^n τ_±(X_i). We compare the mean squared errors (MSEs) of ATT_± and two popular competing estimators across different sample sizes and dimensions of x in two data-generating models. These models specify (i) the unmeasured confounder U distributed as Unif(0,1), (ii) X_i is d-dimensional normal, independent of U_i, with mean 0 and variance I_d. In model 1, that has a relatively strong confounding effect, (iii) Y_i0 is normal with mean 4+4U_i^2+X_i and variance 1, (iv) (Z_i=1| X_i, U_i) = [1+exp{-(2X_iU_i+6U_i+.5X_i+1.5)}]^-1, and Y_i(1) = Y_i(0) + θ(X_i). In model 2, (iii') Y_i(0) = m_i+0.25η_i0 and Y_i(1) = m_i + θ(X_i)+0.25η_i1, where m_i=4+4U_i^2+dX_i + ϵ_iX_i/6 with ϵ_i and η_i are independent standard normal variables and independent ofX_i and U_i, and (iv') (Z_i=1| X_i, U_i) = [1+exp{-(U_i+X_i/2)}]^-1. Finally, (v) θ(X_i)=X_i1/2+3/2 for d=1 andθ(X_i)=X_i1(.5-1/d)+X_i/3+5/2 for d>1. Some comments regarding the models are in order. To understand the amount of unmeasured confounding, note that Kendall's partial correlation coefficient between Y_i0 and U_i given X_i is about 0.5 and between Z_i and U_i given X_i is about 0.3 for all d for model 1; these correlations are respectively .4 and .1 for model 2. Next, model 1 has a constant additive treatment effect, so the average treatment effect, ATE = E(Y_i(1)-Y_i(0))= ATT, while model 2 has different ATE and ATT, but our identification assumption holds. Finally, Model 1 has a constant value for σ^2_0(x), while in model 2, σ^2_0(x)=1.48+x̅^2/36. The estimators ATT_± are suitable in these simulation models since the identification bias does not change sign; it is always positive. Our first competing estimator fits a linear regression to the outcome on Z_i and X_i and estimates the ATT as the coefficient of Z_i. 
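A sketch of this benchmark (in Python, assuming numpy; an off-the-shelf routine such as R's lm or statsmodels' OLS would of course do the same):

import numpy as np

def att_ols(y, z, X):
    # Regress Y on an intercept, Z, and the covariates; the ATT estimate is the coefficient of Z.
    D = np.column_stack([np.ones(len(y)), z, X])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef[1]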
This fairly simple estimator is arguably the most popular choice of practitioners, even though it is not recommended by recent statistics literature because of its strong reliance on the linear model specification.Our second competing method is arguably the most flexible in terms of model specification. This is the doubly robust estimator using the augmented inverse probability weighted estimator that uses super learners for the outcome and assignment model <cit.>. We include a generalized additive model <cit.> and XGBoost, a nonparametric model based on generalized tree boosting <cit.>.Tables <ref> and <ref> report the results of our comparative study. Table <ref> shows that for model 1,ATT_- provides the most precise estimate whenn≥ 1000 and d=7 or n≥ 1500 and d=8. Further, in both tables, we see that the linear regression and doubly robust method are asymptotically biased because of the unmeasured confounding. Notice that MSEs of the competing estimators in these tables, and to some extent of ATT_+ in Table <ref>, are relatively constant across n as the variance is much smaller than the squared bias in these sample sizes. In contrast, the MSE of ATT_- decreases with increasing sample size. However, in Model 2, the MSEs of ATT_- are still relatively large because our nonparametric estimation method plugs in several nonparametric estimators. Even though consistent, some components have finite sample bias. Consequently, with a finite sample size, the plug-in estimator tends to exhibit a relatively higher MSE. Figure <ref> investigates the rate of the decrease of the MSE of ATT_- with the sample size. The figure shows that across different dimensions d=1,5 and d=6, the rate of MSE is proportional to n^-1. § EFFECT OF SURFACE MINING ON BIRTH WEIGHT We study the effect of substantial surface mining in the central Appalachian region on infant birth weight. Surface mining activities became prominent in some areas of central Appalachia starting in the 1990s due to technological developments in large-scale surface mining and amendments to the Clean Air Act in 1990 which made coal reserves in the area, which are low in sulfur content, financially attractive. Mountaintop removal mining, which involves mining coal upon steep terrains, often near residential areas, using explosives, is a majority among surface mining activities. The central Appalachian region is contained inside four states – Kentucky, Tennessee, Virginia, and West Virginia. Regulations only provided permissions for mining in 91 of the 181 central Appalachian counties. Births in those 91 counties form our observational study population, where 23 of those counties saw substantial surface mining. Births in the remaining 298 counties, in those four states, without mining permits, will provide data on our resistant population. We use data from the U.S. National Center for Health Statistics and combine it with U.S. census data. We include all data sets except the Birth data used in this study in the supplement to this paper; the Birth data is available upon request from <https://www.cdc.gov/nchs/nvss/dvs_data_release.htm>.Our outcome is birth weight in grams. If birth weight is affected by surface mining, we may see variability in this effect because of biological factors associated with birth. Thus, we use covariates: mother's age, number of prenatal visits, and number of months of prenatal care. The first variable accounts for the mother's biology and the latter two account for the quality of care during the pregnancy. 
Of course, we assume that surface mining does not affect these variables.Socioeconomic variables, e.g., income and father's education, may be associated with birth weight and thus are likely confounders. However, in the current analysis, we are not interested in how they might influence the possible effect of surface mining on infant birth weight. Thus, we do not include these socioeconomic variables in our analysis.Variability in birth weight in the areas of the same states with no mining permits can be argued to be a good choice for our resistant population variance. We reinforce this argument by matching our 91 observational study counties with the remaining 298 counties in a one-to-one match on several socioeconomic variables. The 28850 births in 2010 in these counties form our resistant population data. Table <ref> shows that this match makes the matched resistant counties similar to our observational study counties. We estimate the conditional effects and confidence intervals using the proposed methods in Sections <ref> and<ref>, respectively. Since our theory requires that the resistant population variance is estimated more precisely than the estimands in the study population, we took a 40% random sub-sample of our 34173 births in mining counties for our calculations, giving us a bit less than half the number of births in the resistant population. In our secondary check, this subsampling did not affect our estimates.The naive method that ignores any bias correction due to unmeasured confounding finds an estimated effect close to zero or positive of surface mining on birth weight; see Figure <ref>. If we were to assume a negative bias in the naive estimator due to unmeasured confounding, it would suggest an increase in birth weight because of surface mining. However, there is no apparent scientific support for an increase in birth weight due to surface mining. Thus, assuming a non-negative bias in the naive estimator, i.e., taking τ_-(x), Figure <ref> presents a summary of our bias-corrected estimation and inference. These results show often substantial estimated biases and typically negative estimated effects on birth weight. This finding of a harmful effect of surface mining complements <cit.> who use a binary indicator for low birth weight as the outcome, adjust for socioeconomic variables, and do not study effect heterogeneity. Our results show that while the mother's age and the number of prenatal visits do not appear to be effect modifiers, births that followed less prenatal care are affected more; this effect mitigates with longer prenatal care. Note, in Figure <ref>, the naive estimator, which estimates the treatment effect using the differences in the conditional means, sometimes being below the estimator that assumes a nonnegative bias is not unreasonable because the latter uses a separate penalized estimation method for the difference.§ DISCUSSION Analyses of variances have been vital tools for statistical analysis of controlled experiments. Although one wants inference regarding the difference in means or the main effect, comparisons of variabilities in different parts of the data are critically used in the analyses of (co-)variance. Fundamentally, our method builds on this idea (cf. Remark <ref>). The difference in the variance of the whole data to that of the resistant population turns out to be informative about the magnitude of the bias in the CATT due to unmeasured confounding. 
The proposed method may thus be termed Resistant Population Calibration Of VAriance, or RPCOVA, following suit of the classical abbreviations. Similar to the fact that the ANOVA table informs us of whether the treatment effect is nonzero but not the effect size, RPCOVA can inform us of the size of the bias but not the direction. RPCOVA gives two possible estimates of the treatment effect but fails to provide further information regarding which of these two is the right choice. In fortunate situations where there is no bias in the treatment assignment, these two estimates collapse into a single estimate, or either is a consistent estimate. In other situations, when there is consensus regarding the direction of bias from unmeasured confounders, we can choose the corresponding estimate – the smaller of the two when the bias is positive and the larger of the two when it is negative. In our empirical study, we ruled out a negative bias on scientific grounds. Even in other situations, it may be sufficient to know the two estimates. If establishing causality consists of a connected mesh of coherent arguments (<cit.>, <cit.>), then RPCOVA, even with its slight inconclusiveness, will substantially contribute to these arguments. Beyond developing the notion of two-point identification of the effect of a confounded treatment, this paper proposed and implemented an algorithm for nonparametric estimation and then provided easy-to-compute large sample confidence intervals. However, we could also aim to estimate better and faster, leaving further inference to resampling-based tools, perhaps. This would invariably lead to less restrictive technical conditions than those in Assumption <ref>. § SUPPLEMENTARY MATERIALIn the supplementary document, we present proofs of the theorems and some examples and remarks given in the main paper, provide some auxiliary results to the proofs and give an additional result regarding the parametric rate of convergence for ATT estimation. Code implementing our proposed methods and all data sets except the birth data for our empirical study are available at <https://github.com/bikram12345k/RPCOVA>.§ ACKNOWLEDGMENTS This work is supported in part by funds from the U.S. National Science Foundation.apamyheadings SUPPLEMENT SUPPLEMENT In this document, we present proofs of the theorems and some examples and remarks in the main paper, provide some auxiliary results to the proofs and give an additional result regarding the parametric rate of convergence for ATT estimation in Section <ref>. § PROOFS OF THE IDENTIFICATION RESULTS (THEOREMS <REF> AND <REF>)First writeσ^2(x)= Var(Y_i | X_i=x)= E{Var(Y_i | Z_i, X_i=x)} + Var{E(Y_i | Z_i, X_i=x)} = π(x) Var(Y_i(1) | Z_i=1, X_i=x) + (1-π(x)) Var(Y_i(0) | Z_i=0, X_i=x)+ (π(x)(1-π(x))) β(x)^2,andσ_0^2(x)= Var(Y_i(0) | X_i=x)= E{Var(Y_i(0) | Z_i, X_i=x)} + Var{E(Y_i(0) | Z_i, X_i=x)} = π(x) Var(Y_i(0) | Z_i=1, X_i=x) + (1-π(x)) Var(Y_i(0) | Z_i=0, X_i=x)+ (π(x)(1-π(x))) (β(x) - τ(x))^2.In the first expression, Var{E(Y_i | Z_i, X_i=x) = (π(x)(1-π(x))) β(x)^2 is true becauseVar{E(Y_i | Z_i, X_i=x)}=E[E(Y_i | Z_i, X_i=x)-{π(x) E(Y_i | Z_i=1, X_i=x) + (1-π(x)) E(Y_i | Z_i=0, X_i=x)}]^2 = π(x) {(1-π(x)) E(Y_i | Z_i=1, X_i=x) - (1-π(x)) E(Y_i | Z_i=0, X_i=x)}^2 + (1-π(x)) {π(x) E(Y_i | Z_i=1, X_i=x) - π(x) E(Y_i | Z_i=0, X_i=x)}^2 =(π(x)(1-π(x))) β(x)^2.And Var{E(Y_i(0) | Z_i, X_i=x)} = (π(x)(1-π(x))) (β(x) - τ(x))^2 can be proven similarly. 
Under Assumption <ref> (b),σ^2(x) - σ_0^2(x)= (π(x)(1-π(x))) {β(x)^2 - (β(x) - τ(x))^2}= (π(x)(1-π(x))) [β(x)^2 - {Δ(β, τ)(x)}^2]Then, the proof is completed by rearranging.Specifically, by solving the above quadratic equation in τ(x) we getτ(x) = β(x) ±{β(x)^2 - σ^2(x)-σ_0^2(x)/π(x)(1-π(x))}^1/2.Hence, the proof is complete.The proof follows from Theorem <ref> and the fact that under Assumption <ref> Δ(x) is either non-positive for all x or non-negative for all x. § PROOFS OF THE RESULTS IN SECTION <REF> We first prove the following lemma regarding the asymptotic joint distribution of the estimators.Under the technical assumptions <ref>, further assuming that σ_k^2 = ∫ u^2 k(u) du > 0 and that h_i / h →α_i > 0 for some α_i>0, i=2,3,4, then for the estimators m_1(x) and m_0(x) obtained in the unconstrained version of (<ref>) and σ^2(x) obtained in (<ref>), we have the following asymptotic joint distribution:√(n h^d)( [ m_1(x) - m_1(x) - 1/2α_3^2 h^2 σ_k^2 tr(m_1(x)) + o_p(h^2); m_0(x) - m_0(x) - 1/2α_4^2 h^2 σ_k^2 tr(m_0(x)) + o_p(h^2); σ^2(x) - σ^2(x) - 1/2α_2^2 h^2 σ_k^2 tr((σ^2)(x)) + o_p(h^2) ])d→ N_3( 0_3,([ θ_K^d ν_1^2(x)/α_3^dπ(x) f_1(x) 0 θ_K^d ν_1(x) σ^2(x) η_1(x) /α_2^d/2α_3^d/2 f(x); 0θ_K^d ν_0^2(x)/α_4^d (1-π(x)) f_0(x) θ_K^d ν_0(x) σ^2(x) η_0(x) /α_2^d/2α_4^d/2 f(x); θ_K^d ν_1(x) σ^2(x) η_1(x) /α_2^d/2α_3^d/2 f(x) θ_K^d ν_0(x) σ^2(x) η_0(x) /α_2^d/2α_4^d/2 f(x) θ_K^d σ^4(x) λ^2(x) /α_2^d f(x) ]) ),as n →∞, where m_z(x) = E ( Y_i | Z_i = z, X_i = x), ν_z^2(x) = Var ( Y_i |Z_i = z, X_i = x),ξ_1i = (Y_i - m_1(X_i))/ν_1(X_i),η_z(x) = E(ξ_0 (ϵ^2 - 1) |Z_i = z, X_i = x) for z=0,1, and λ^2(x) = E { (ϵ_i^2 - 1)^2 |X_i = x}.To solve the unconstrained optimization problem(μ̂_0, ζ̂_0, μ̂_1,ζ̂_1) =min_μ_0,ζ_0,μ_1,ζ_1∑_i=1^n Z_i ‖ Y_i - μ_1-ζ_1^⊤(X_i-x) ‖^2 K(H_3^-1(X_i-x)) +∑_i=1^n (1-Z_i) ‖ Y_i - μ_0-ζ_0^⊤(X_i-x) ‖^2 K(H_4^-1(X_i-x)),where H_3 = h_3 I_d and H_4 = h_4 I_d, it is equivalent to solve the following two problems separately,(μ̂_1,ζ̂_1^⊤) =min_μ_1,ζ_1∑_i=1^n Z_i ‖ Y_i - μ_1-ζ_1^⊤(X_i-x) ‖^2 K(H_3^-1(X_i-x)),and(μ̂_0, ζ̂_0^⊤) =min_μ_0,ζ_0∑_i=1^n (1-Z_i) ‖ Y_i - μ_0-ζ_0^⊤(X_i-x) ‖^2 K(H_4^-1(X_i-x)).Denote e_1 = (1, 0, …, 0)^⊤∈ℝ^d+1, Ω_1(x) = diag(Z_1 K(H_3^-1(X_1-x)), …, Z_n K(H_3^-1(X_n-x))) ∈ℝ^n × n, Ω_0(x) = diag((1-Z_1) K(H_4^-1(X_1-x)), …, (1-Z_n) K(H_4^-1(X_n-x)) )∈ℝ^n × n,andΓ(x) = ( [ 1 (X_1 - x)^⊤; … …; 1 (X_n - x)^⊤ ]) ∈ℝ^n × (d+1). Let's look at (<ref>) first.The standard procedure gives us the solution to (<ref>):(μ̂_1,ζ̂_1^⊤)^⊤ = [ Γ(x)^⊤Ω_1(x) Γ(x)]^-1Γ(x)^⊤Ω_1(x) Y,where Y = (Y_1, …, Y_n)^⊤∈ℝ^n. And we will be specifically interested inm_1(x) = μ̂_1 = e_1^⊤[ Γ(x)^⊤Ω_1(x) Γ(x)]^-1Γ(x)^⊤Ω_1(x) Y.Apply Taylor expansion to m_1(X_i) at x for each i, the vector Y can now be expressed asY = Γ(x) ( m_1(x), m_1(x)^⊤)^⊤ + 1/2( (X_i-x)^⊤m_1(x) (X_i-x) )_n × 1+ ( o(‖X_i-x‖^2) + ν_1(X_i) ξ_1i)_n × 1:= Γ(x) ( m_1(x), m_1(x)^⊤)^⊤ + 1/2Q(x) + R(x) + ( ν_1(X_i) ξ_1i)_n × 1,in which a notation of the form (a_i)_n × 1 stands for a column vector (a_1, …. a_n) ∈ℝ^n. Since m_1(x) = e_1⊤[ Γ(x)^⊤Ω_1(x) Γ(x) ]^-1[ Γ(x)^⊤Ω_1(x) Γ(x) ] ( m_1(x), m_1(x)^⊤)^⊤,we now are interested in m_1(x) - m_1(x) = e_1⊤[ Γ(x)^⊤Ω_1(x) Γ(x) ]^-1Γ(x)^⊤Ω_1(x)( 1/2Q(x) + R(x) + (ν_1(X_i) ξ_1i)_n × 1). 
Using standard results from density estimation, 1/n h_3^dΓ(x)^⊤Ω_1(x) Γ(x) = ( [ π(x) f_1(x) + o_p(1) π(x) h_3^2 σ_k^2 ḟ_̇1̇(x)^⊤ + o_p(h^2) 1_d^⊤; π(x) h_3^2 σ_k^2 ḟ_̇1̇(x) + o_p(h^2) 1_dπ(x) h_3^2 σ_k^2f_1(x) I_d + o_p(h^2) I_d ]).It follows from this that[ 1/n h_3^dΓ(x)^⊤Ω_1(x) Γ(x) ]^-1 = ( [(π(x) f_1(x))^-1 + o_p(1) - π(x)^-1 (f_1(x))^-2ḟ_̇1̇(x)^⊤ + o_p(1) 1_d^⊤; - π(x)^-1 (f_1(x))^-2ḟ_̇1̇(x) + o_p(1) 1_d (π(x) h_3^2 σ_k^2 f_1(x) )^-1I_d + o_p(h_3^-2) I_d ]). We then look at the Γ(x)^⊤Ω_1(x)( 1/2Q(x) + R(x) + (ν_1(X_i) ξ_1i)_n × 1) part. Firstly, we have1/n h_3^dΓ(x)^⊤Ω_1(x) Q(x)= 1/n h_3^d( [∑_i=1^n Z_i K(H_3^-1(X_i-x)) (X_i-x)^⊤m_1(x) (X_i-x); ∑_i=1^n Z_iK(H_3^-1(X_i-x)) (X_i - x) (X_i-x)^⊤m_1(x) (X_i-x) ]).In (<ref>),1/n h_3^d∑_i=1^n Z_i K(H_3^-1(X_i-x)) (X_i-x)^⊤m_1(x) (X_i-x)→ π(x) E_X | Z = 1[ 1/h_3^dK(H_3^-1(X_i-x)) (X_i-x)^⊤m_1(x) (X_i-x) ]= π(x) ∫1/h_3^dK(H_3^-1(y-x)) (y-x)^⊤m_1(x) (y-x) f_1(y) d y = π(x) h_3^2 ∫K(u) u^⊤m_1(x) u f_1(h_3 u + x) d u = π(x) h_3^2 ∫K(u) u^⊤m_1(x) u( f_1(x) + h_3 ḟ_1 (x) u + o(h_3) ) d u = π(x) h_3^2 σ_k^2 tr(m(x)) f_1(x) + o(h_3^3).Similarly, since ∫ u^4 K(u) du is assumed to be finite, 1/n h_3^d∑_i=1^n Z_i K(H_3^-1(X_i-x)) (X_i-x) (X_i-x)^⊤m_1(x) (X_i-x)→ π(x) E_X | Z = 1[ 1/h_3^dK(H_3^-1(X_i-x)) (X_i-x) (X_i-x)^⊤m_1(x) (X_i-x) ]= π(x) ∫1/h_3^dK(H_3^-1(y-x)) (y-x) (y-x)^⊤m_1(x) (y-x) f_1(y) d y =o(h_3^3) 1_d.Thus, (<ref>) becomes1/n h_3^dΓ(x)^⊤Ω_1(x) Q(x) = ( [ ph_3^2 σ_k^2 tr(m_1(x)) f_1(x) + o(h_3^3);o(h_3^3) 1_d ]) .Secondly,1/n h_3^dΓ(x)^⊤Ω_1(x) ( ν_1(X_i) ξ_1i)_n × 1= 1/n h_3^d( [∑_i=1^n Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i; ∑_i=1^n Z_iK(H_3^-1(X_i-x)) (X_i - x) ν_1(X_i) ξ_1i ]).For (<ref>), first recall that E (ξ_1i|Z_i = 1, X_i ) = 0and Var (ξ_1i|Z_i = 1, X_i ) = 1.So, E [ Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i] = π(x) E_X | Z = 1[ K(H_3^-1(X_i-x)) ν_1(X_i) E( ξ_1i|Z_i = 1, X_i) ] = 0,and, denoting θ_K^d = ∫ K^2(u) du,E [ ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i )^2 ]= π(x) E_X | Z = 1[ K^2(H_3^-1(X_i-x)) ν_1^2(X_i) ] = π(x) ∫K^2(H_3^-1(y-x)) ν_1^2(y) f_1(y) d y= π(x) h_3^d ( θ_K^d ν_1^2(x) f_1(x) + o(1) ).We also haveE [ Z_i K(H_3^-1(X_i-x)) (X_i - x) ν_1(X_i) ξ_1i] = π(x) E_X | Z = 1[ K(H_3^-1(X_i-x)) (X_i - x) ν_1(X_i) E( ξ_1i|Z_i = 1, X_i) ] = 0_d,and, for a vector a, denoting a^2 = a^⊤ a,E [ ( Z_i K(H_3^-1(X_i-x)) (X_i - x)ν_1(X_i) ξ_1i )^2 ]= π(x) E_X | Z = 1[ K^2(H_3^-1(X_i-x)) (X_i - x)^2 ν_1^2(X_i) ] = π(x) ∫K^2(H_3^-1(y-x)) (y - x)^2 ν_1^2(y) f_1(y) d y=h_3^d o(h_3) 1_d .Applying the central limit theorem to (<ref>), as n →∞, we get√(n h_3^d)( 1/n h_3^d∑_i=1^n Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i) d→ N (0,θ_K^d ν_1^2(x) f_1(x)),and√(n h_3^d)( 1/n h_3^d∑_i=1^n Z_i K(H_3^-1(X_i-x))(X_i-x) ν_1(X_i) ξ_1i) p→0_d,since the asymptotic variance is 0.As for the remainder term R(x),1/n h_3^dΓ(x)^⊤Ω_1(x)R(x) =1/n h_3^d( [∑_i=1^n Z_i K(H_3^-1(X_i-x)) o(‖X_i - x‖^2); ∑_i=1^n Z_iK(H_3^-1(X_i-x)) (X_i - x) o(‖X_i - x‖^2) ])=o(h_3^2) 1_d+1. Combining (<ref>), (<ref>) and (<ref>),m_1(x) - m_1(x) = e_1⊤[Γ(x)^⊤Ω_1(x) Γ(x) ]^-1Γ(x)^⊤Ω_1(x)( 1/2Q(x) + R(x) + (ν_1(X_i) ξ_1i)_n × 1) = 1/n h_3^d π(x) f_1(x)(∑_i=1^n Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i) + 1/n h_3^df_1^2(x)( ∂ f_1(x)/∂x)^⊤(∑_i=1^n Z_i K(H_3^-1(X_i-x))(X_i-x) ν_1(X_i) ξ_1i) + 1/2 h_3^2 σ_k^2 tr(m_1(x)) + o_p(h_3^2).Further, together with (<ref>) and (<ref>),√(n h_3^d)( m_1(x) - m_1(x) - 1/2 h_3^2 σ_k^2 tr(m_1(x)) + o_p(h_3^2) )d→ N (0, θ_K^d ν_1^2(x)/ f_1(x)) , asn →∞. 
Similarly, for the solution m_0(x) := μ̂_0 to the problem (<ref>), we have the following asymptotic distribution:√(n h_4^d)( m_0(x) - m_0(x) - 1/2 h_4^2 σ_k^2 tr(m_0(x)) + o_p(h_4^2) )d→ N (0, θ_K^d ν_0^2(x)/(1-π(x)) f_0(x)) , asn →∞. Moreover, if we look at the covariance between m_0(x) and m_1(x), since the covariance between the main terms are zero:cov ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i, (1-Z_i) K(H_4^-1(X_i-x)) ν_0(X_i) ξ_0i) =E ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i× (1-Z_i) K(H_4^-1(X_i-x)) ν_0(X_i) ξ_0i) -E ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i) E ( (1-Z_i) K(H_4^-1(X_i-x)) ν_0(X_i) ξ_0i) =0,it follows that, asymptotically,cov(m_0(x), m_1(x)) = 0. Fan and Yao (1998) derived the asymptotic distribution of σ^2(x) in (<ref>):√(n h_2^d)( σ^2(x) - σ^2(x) - 1/2 h_2^2 σ_k^2 tr((σ^2)(x)) + o_p(h_1^2+h_2^2))d→ N (0, θ_K^d σ^4(x) λ^2(x) /f(x)),as n →∞.Besides the asymptotic distribution of σ^2(x) itself, we are further interested in the asymptotic joint distribution of it and the other two estimators. According to the proof in Fan and Yao (1998), the donimating term of σ^2(x) is1/n h_2^d f(x)∑_i=1^nK(H_2^-1(X_i-x)) σ^2(X_i) (ϵ_i^2 - 1).Now we calculate the covariances. Firstly,cov ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i, K(H_2^-1(X_i-x)) σ^2(X_i) (ϵ_i^2 - 1) ) =E ( Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i×K(H_2^-1(X_i-x)) σ^2(X_i) (ϵ_i^2 - 1) )- 0 = E_X| Z=1[K(H_3^-1(X_i-x)) K(H_2^-1(X_i-x)) ν_1(X_i) σ^2(X_i) E (ξ_1i (ϵ_i^2 - 1) |Z_i = 1, X_i ) ] = ∫K(H_3^-1(y-x)) K(H_2^-1(y-x)) ν_1(y) σ^2(y) E (ξ_1 (ϵ^2 - 1) | Z_i = 1, X_i = y) f_1(y) dy = h_3^d ( ν_1(x) σ^2(x) η_1(x) f_1(x) ( ∫ K(u) K(h_2/h_3u) d u)^d + o(1) ),where η_1(x) = E(ξ_1i (ϵ_i^2 - 1) |Z_i = 1, X_i = x). Similarly we havecov ( (1-Z_i) K(H_4^-1(X_i-x)) ν_0(X_i) ξ_0i, K(H_2^-1(X_i-x)) σ^2(X_i) (ϵ_i^2 - 1) ) =(1-) h_4^d ( ν_0(x) σ^2(x) η_0(x) f_0(x) ( ∫ K(u) K(h_2/h_4u) u)^d + o(1) ),where η_0(x) = E(ξ_0 (ϵ^2 - 1) |Z_i = 0, X_i = x).Now we have proved the lemma. In order to prove Theorem <ref>, we also need the next two lemmas. Note that the constraint is equivalent to Δ(x) ≥ 0 and β̂_C^2(x) ≥ŝ(x) ≡σ^2(x) - σ_0^2(x)/π(x)(1-π(x)).For a strictly convex function f(x) if itsglobal minima x^⋆∉S where S is a union of two closed half spaces, then x^⋆⋆ which minimizes f(x) subject to x∈ S satisfies x^⋆⋆∈∂ S. (∂ S denotes the boundary of S.) If possible suppose x^⋆⋆∈ S^o, the interior of S. Then, f(x) > f(x^⋆⋆) for x in a small open ball around x^⋆⋆.Thus, x^⋆⋆ is a local minima. But, as f is strictly convex, this implies that x^⋆⋆ must be a global minima. This produces a contradiction. Suppose β̂^2_U(x) < ŝ(x). Then β̂^2_C(x) = ŝ(x), i.e., Δ^2(x)=0. Because our optimization function is convex, the result follows from the above lemma. Hence we have shown that, Δ^2(x)=0 if and only if β̂_U^2(x) ≤ŝ(x), otherwise β̂_U(x) = β̂_C(x). We prove part (b) first.When E(Y_i(0)| Z_i=1, X_i=x) ≠ E(Y_i(0)| Z_i=0, X_i=x), i.e., Δ^2(x) > 0, by consistency of the estimators β̂_U(x)^2 and ŝ(x), there is a small enough a>0 so that(β̂_U(x)^2 > ŝ(x)+a) ⟶ 1.Call the event inside the above probability as A_n noting that A_n saysΔ^2(x) > a and hence (A_n)→ 1, and under A_n, β̂_C(x) = β̂_U(x).Now, for any function f and set B write X_nC≡√(nh^d)( f(β̂_C(x), ŝ(x)) and X_nU≡√(nh^d)( f(β̂_U(x), ŝ(x)).(X_nC∈ B) = (X_nC∈ B| A_n)(A_n) +(X_nC∈ B| A_n^c)(A_n^c) = (X_nU∈ B| A_n)(A_n) +(X_nC∈ B| A_n^c)(A_n^c) = (X_nU∈ B) - (X_nU∈ B| A_n^c)(A_n^c) +(X_nC∈ B| A_n^c)(A_n^c)We get,(X_nC∈ B) - (X_nU∈ B) |≤ |((X_nC∈ B| A_n^c)-(X_nU∈ B| A_n^c))(A_n^c) | ≤ 2 (A_n^c). 
|These calculations imply that when Δ^2(x) > 0, lim_n→∞( √(nh^d)( f(β̂_C(x), ŝ(x)) ∈ B) =lim_n→∞( √(nh^d)( f(β̂_U(x), ŝ(x)) ∈ B),for all measurable function f and set B.Then, using Delta-method and Lemma <ref>, we get the asymptotic distributions of both Δ^2(x) and τ_±(x).Now we prove part (a). When E(Y_i(0)| Z_i=1, X_i=x) = E(Y_i(0)| Z_i=0, X_i=x), i.e., Δ^2(x) = 0,recall the fact that Δ^2(x)=0 if and only if β̂_U^2(x) ≤ŝ(x), otherwise β̂_U(x) = β̂_C(x).First, (√(nh^d)Δ^2(x) ≤ 0) = (√(nh^d)Δ^2(x) = 0) = (β̂^2_U(x) ≤ŝ(x)) = (√(nh^d)Δ^2_U(x) ≤ 0),where Δ^2_U(x) = β̂_U^2(x) - ŝ(x).Next, for a>0,(√(nh^d)Δ^2(x) ∈ (0, a)) = (√(nh^d)Δ^2_U(x) ∈ (0, a)).Finally, note that under Δ^2(x)=0, using Lemma <ref> and the Delta method√(nh^d)(Δ^2_U(x) + O(h^2)) ⟶ N(0, v_Δ^2(x)). Hence, under the assumption that nh^d+4→ 0,(√(nh^d)Δ^2(x) ≤ a) = (√(nh^d)Δ^2(x) = 0) + (√(nh^d)Δ^2(x) ∈ (0, a]) =(√(nh^d) (Δ^2_U(x) + O(h^2)) ≤ O(√(nh^d+4)))+ (√(nh^d) (Δ^2_U(x) + O(h^2)) ∈ (O(√(nh^d+4)), a+O(√(nh^d+4))])⟶1/2 + {(Φ(a/√(v_Δ^2(x))) - 1/2}. Thus, assuming nh^d+4→ 0,√(nh^d)Δ^2(x) ⟶1/2δ_0 + 1/2 | N(0, v_Δ^2(x))|.Thus the proof is complete. Note that, it is enough to show i) ( Δ^2(x)> v_Δ(x)/(nh^d)^.5(1-δ)) →0 when Δ^2(x) = 0 andii) ( Δ^2(x)> v_Δ(x)/(nh^d)^.5(1-δ)) →1 when Δ^2(x) > 0.For (i), when Δ^2(x) = 0 we use part (a) of Theorem <ref>. Rewrite,( Δ^2(x)> v_Δ(x)/(nh^d)^.5(1-δ))= ( √(nh^d)Δ^2(x)/v_Δ(x)> (nh^d)^δ/2).Since δ >0, (nh^d)^δ/2→∞. Then, asv_Δ(x) is a consistent estimator and √(nh^d)h^2 → 0 we get the above probability goes in limit to the probability of a standard normal being infinitely large, which goes to 0.For (ii), when Δ^2(x) > 0 we use part (b) of Theorem <ref>. Rewrite,( Δ^2(x) > v_Δ(x)/(nh^d)^.5(1-δ))= ( √(nh^d){Δ^2(x)-Δ^2(x)}/v_Δ(x)> (nh^d)^δ/2 - √(nh^d)Δ^2(x)/v_Δ(x)).Since δ < 1, (nh^d)^δ/2 - √(nh^d)Δ^2(x)/v_Δ(x)→ -∞ in probability. Then, asv_Δ(x) is a consistent estimator and √(nh^d)h^2 → 0 we get the above probability goes in limit to the probability of a standard normal being larger than negative infinity, which goes to 1. § PROOFS OF EXAMPLE <REF> AND REMARK <REF>Given X_i=0, then (Z_i=1| X_i + U_i, X_i=0) = Φ(U_i). So Pr(Z_i=1 | U_i=u) f(u) = ϕ(u) Φ(u), Pr(Z_i=1) = 1/2 and f(u | Z_i=1) = 2ϕ(u) Φ(u). Similarly f(u | Z_i=0) = 2ϕ(u) (1-Φ(u)). Then E(| U_i || Z_i = 1, X_i = 0) - E(| U_i || Z_i = 0, X_i = 0) = ∫_-∞^∞ 2 | u |ϕ(u) (2Φ(u) - 1) du = 0 (integral of an odd function).In general, given X_i=x, (Z_i=1| X_i + U_i, X_i=x) = Φ(U_i+x). Then Pr(Z_i=1 | X_i + U_i, X_i=x) f(u) = ϕ(u) Φ(u+x), Pr(Z_i=1) = Φ(x/√(2)) and f(u | Z_i=1) = ϕ(u) Φ(u+x) / Φ(x/√(2)). Similarly, f(u | Z_i=0) = ϕ(u) (1-Φ(u+x)) / (1-Φ(x/√(2))).Let V be a standard normal random variable that is independent of U.Φ(x/√(2)) E(| U_i || Z_i = 1, X_i = x)=∫_-∞^∞ |u| ϕ(u) Φ(u+x) du = E_U (|U| Φ(U+x)) = E_U, V (|U| 1{V ≤ U+x})=∫_-∞^∞∫_v-x^∞ |u| ϕ(u) ϕ(v) du dv.Since ∫_v-x^∞ |u| ϕ(u) du = ϕ(v-x) if v-x > 0 and∫_v-x^∞ |u| ϕ(u) du = √(2/π) - ϕ(v-x) if v-x < 0, ∫_-∞^∞∫_v-x^∞ |u| ϕ(u) ϕ(v) du dv = ∫_x^∞ϕ(v-x) ϕ(v) dv + ∫_-∞^x (√(2/π) - ϕ(v-x)) ϕ(v) dv = 1/4 √(π) exp(-x^2/4) erf(x/2) + √(2/π) - 1/4 √(π) exp(-x^2/4) (erf(x/2) + 1) = √(2/π)Φ(x) - 1/2 √(π) exp(-x^2/4) erf(x/2). Similarly,(1-Φ(x/√(2))) E(| U_i || Z_i = 0, X_i = x) = √(2/π)(1-Φ(x)) + 1/2 √(π) exp(-x^2/4) erf(x/2).Thus,Δ(x) = E(| U_i || Z_i = 1, X_i = x) - E(| U_i || Z_i = 0, X_i = x) = - exp(-x^2/4) erf(x/2)/2 √(π)Φ(x/√(2)) (1-Φ(x/√(2))).According to the standard kernel estimation theory, the denominator of (<ref>), as n →∞,1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) p→ f(x). 
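Returning briefly to the example just treated, its ingredients are simple enough to check by simulation before turning to the numerator. The sketch below (sample size and seed are arbitrary) draws from the model Z_i = 1{X_i + U_i + V_i > 0} with V_i a standard normal independent of U_i, which is equivalent to Pr(Z_i = 1 | X_i + U_i) = Φ(X_i + U_i); it reproduces Pr(Z_i = 1 | X_i = x) = Φ(x/√2) and confirms that Δ(0) = 0 while Δ(x) changes sign with x.

```python
import numpy as np
from scipy.stats import norm

def mc_example(x, n=2_000_000, seed=0):
    """Monte Carlo evaluation of the example model:
    U ~ N(0,1), Z = 1{x + U + V > 0} with V ~ N(0,1) independent,
    so that P(Z=1 | U, X=x) = Phi(x + U).
    Returns (P(Z=1 | X=x), Delta(x)) with
    Delta(x) = E(|U| | Z=1, X=x) - E(|U| | Z=0, X=x).
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal(n)
    Z = rng.random(n) < norm.cdf(x + U)
    p1 = Z.mean()
    delta = np.abs(U[Z]).mean() - np.abs(U[~Z]).mean()
    return p1, delta

for x in (-1.0, 0.0, 1.0):
    p1, delta = mc_example(x)
    # P(Z=1|X=x) should match Phi(x/sqrt(2)); Delta(0) should be ~0,
    # and Delta changes sign with x (the bias interacts with X here).
    print(f"x={x:+.1f}  P(Z=1|x)={p1:.4f}  "
          f"Phi(x/sqrt2)={norm.cdf(x / np.sqrt(2)):.4f}  Delta={delta:+.4f}")
```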
As for the numerator of (<ref>),1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) e_i^4= 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (Y_i - m̂(X_i))^4 = 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i) + σ(X_i) ϵ_i)^4 = 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i))^4 + 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) 3(m(X_i) - m̂(X_i))^3 (σ(X_i) ϵ_i) + 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) 6(m(X_i) - m̂(X_i))^2 (σ(X_i) ϵ_i)^2 + 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) 3(m(X_i) - m̂(X_i))(σ(X_i) ϵ_i)^3 + 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (σ(X_i) ϵ_i)^4. Look at the last term first. Denote g(x) = E((σ(X_i) ϵ_i)^4 |X_i = x) and G(x) = E((σ(X_i) ϵ_i)^8 |X_i = x). Standard techniques gives the following results:E(1/h_v^dK(H_v^-1(X_i - x)) (σ(X_i) ϵ_i)^4)= E( 1/h_v^dK(H_v^-1(X_i - x)) g(X_i)) = f(x) g(x) + h_v^2 σ_k^2 f(x) (1/2 tr(g(x)) + tr(f^⊤(x) g(x))) + o(h_v^2),andE(1/h_v^dK(H_v^-1(X_i - x)) (σ(X_i) ϵ_i)^4)^2= E( 1/h_v^2dK^2(H_v^-1(X_i - x)) G(X_i)) = 1/h_v^dθ_K f(x) G(x) + o(1/h_v^d).Hence, if as n →∞, nh_v^d →∞, we have the following results:Var( 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (σ(X_i) ϵ_i)^4 ) → 0,and thus1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (σ(X_i) ϵ_i)^4 p→ f(x) g(x) = f(x) E((σ(X_i) ϵ_i)^4 |X_i = x). For the first term in (<ref>), we will show that 1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i))^4 p→ 0.By Markov Inequality, we only need to show that, as n →∞,E[1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i))^4 ] → 0.Then, repeatedly using the Cauchy-Schwarz inequality,it is straightforward to show that the middle three terms are also of order o_p(1). Hence the numerator1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) e_i^4 p→ f(x) E((σ(X_i) ϵ_i)^4 |X_i = x).Combining (<ref>) and applying Slutsky's theorem, it is proved that the proposed estimator (<ref>) is a consistent estimator of E {σ^4(X_i) ϵ_i^4 |X_i = x}. Before proving (<ref>), we make the following regularity assumptions: (A1) The univariate kernel function k(u) is compactly supported on [-1, 1]. This means the d-variate kernel function K(u) is compactly supported on the d-dimensional cubic [-1,1]^d, which is contained in the d-dimensional ball of radius √(d).(A2) The univariate kernel function k(u) is bounded above by k_m.(A3) f(x), the density function of X satisfies that,for all x, there exists some compact set I_x of which x in the interior, and some finite constant M_1 > 0 such that,sup_u∈ I_x‖ (f(u)^-1, - f(u)^-2ḟ(u)^⊤ ) ‖≤ M_1. (A4) The conditional expectation m(x) is Lipschitz in x, i.e., there exists L > 0 such that, for all x and y in the support of X, | m(x) - m(y) |≤ L ‖x - y‖. (A5) For all x, there exists some compact set I_x of which x in the interior, and some finite constant M_2 > 0 such that, sup_u∈ I_x| E[(σ(X_i)) ϵ_i)^k |X_i = u] |≤ M_2,for k = 2,3, and 4.(A6) Pr(‖X_1 - x‖≤√(d) (h_2+h_v)) = (h_v h_2^4-4/d)^η for some η > d/5. When h_2=h_v=h, this simplifies to Pr(‖X_1 - x‖≤ 2 √(d) h) = h^η' for some η' > d-4/5.First, writem(x) = e_1^⊤[ Γ(x)^⊤Ω(x) Γ(x)]^-1Γ(x)^⊤Ω(x) Y =: ∑_i=1^n w_i(x) Y_i.Since e_1 = (1, 0, …, 0)^⊤∈ℝ^d+1,the vector (w_1(x), …, w_n(x)) is the product of the first row vector of[ Γ(x)^⊤Ω(x) Γ(x)]^-1 and the matrix Γ(x)^⊤Ω(x). 
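These weights can be formed explicitly, which is convenient both for computation and for checking the reproduction identities ∑_i w_i(x) = 1 and ∑_i w_i(x)(X_i − x) = 0 that are used repeatedly below. The sketch assumes an Epanechnikov product kernel and synthetic design points; both are illustrative choices.

```python
import numpy as np

def local_linear_weights(x0, X, h):
    """Weights w_i(x0) with m_hat(x0) = sum_i w_i(x0) Y_i for the local
    linear smoother: w(x0)' = e1' [G' W G]^{-1} G' W, where
    G = [1, (X_i - x0)'] and W = diag(K((X_i - x0)/h)).
    An Epanechnikov product kernel is used here only for illustration.
    """
    n, d = X.shape
    U = (X - x0) / h
    K = np.prod(np.clip(0.75 * (1.0 - U**2), 0.0, None), axis=1)
    G = np.hstack([np.ones((n, 1)), X - x0])
    GW = G.T * K                                    # G' W
    A = np.linalg.solve(GW @ G, GW)                 # [G' W G]^{-1} G' W
    return A[0]                                     # first row = the weights

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
x0 = np.array([0.1, -0.2])
w = local_linear_weights(x0, X, h=0.4)

print(w.sum())                                      # ~ 1   (reproduces constants)
print((w[:, None] * (X - x0)).sum(axis=0))          # ~ [0, 0] (reproduces linear terms)
```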
Asymptotically,[ 1/n h_2^dΓ(x)^⊤Ω(x) Γ(x) ]^-1 = ( [f(x)^-1 + o_p(1)- f(x)^-2ḟ(u)^⊤ + o_p(1) 1_d^⊤;- f(x)^-2ḟ(u) + o_p(1) 1_d (h_2^2 σ_k^2 f(x) )^-1I_d + o_p(h_2^-2) I_d ]).Denoting γ_i(x)^⊤ = (1, (X_i - x)^⊤) for i=1,…,n, then we can express1/nΓ(x)^⊤Ω(x) = 1/nh_2^d(K(H_2^-1(X_1 - x)) γ_1(x), …, K(H_2^-1(X_n - x)) γ_n(x)).Thus, for i = 1, …, n,w_i(x) = 1/nh_2^dK(H_2^-1(X_i - x)) {(f(x)^-1, - f(x)^-2ḟ(u)^⊤) γ_i(x) + o_p(1^⊤γ_i(x) )}Under assumptions (A1) - (A3),| w_i(x) |≤K_m^d/n h_2^d 1{‖X_i - x‖≤√(d) h_2 }√(dh_2^2 + 1) (M_1 + o_p(1)). Further, by setting Y_i = 1, i=1,…,n,∑_i=1^n w_i(x) becomes the first regression coefficient from the local polynomial regression of (1, …, 1) on X_i's. Hence, ∑_i=1^n w_i(x) = 1. Now we prove (<ref>).E[1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i))^4 ]=E[1/h_v^dK(H_v^-1(X_1 - x)) (m(X_1) - m̂(X_1))^4 ]=E[1/h_v^dK(H_v^-1(X_1 - x)) E[(m(X_1) - m̂(X_1))^4 |X_1,…,X_n] ],in whichE[(m(X_1) - m̂(X_1))^4 |X_1,…,X_n] =E[(∑_i=1^n w_i(x) (Y_i - m(X_1)) )^4 |X_1,…,X_n] = ∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n E[∏_l=1^4 w_j_l(x) (Y_j_l - m(X_1)) |X_1,…,X_n ].Furthermore, under assumptions (A4) and (A5), we could show that,as n →∞,E[∏_l=1^4 w_j_l(x) (Y_j_l - m(X_1)) |X_1,…,X_n ] ≲∏_l=1^4 | w_j_l(x) |.Thus, as n →∞,E[(m(X_1) - m̂(X_1))^4 |X_1,…,X_n]≲ ∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n ∏_l=1^4 | w_j_l(x) | ≲ ( 1/nh_2^d-1)^4 ∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n ∏_l=1^4 1{‖X_j_l - X_1 ‖≤√(d) h_2 },and1/h_v^dK(H_v^-1(X_1 - x)) E[(m(X_1) - m̂(X_1))^4 |X_1,…,X_n]≲ 1/h_v^d( 1/nh_2^d-1)^41{‖X_1 - x‖≤√(d) h_v }∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n ∏_l=1^4 1{‖X_j_l - X_1 ‖≤√(d) h_2 }= 1/h_v^d( 1/nh_2^d-1)^4 ∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n ∏_l=1^4 1{‖X_j_l - X_1 ‖≤√(d) h_2 } 1{‖X_1 - x‖≤√(d) h_v } ≤ 1/h_v^d( 1/nh_2^d-1)^4 ∑_j_1=1^n∑_j_2=1^n∑_j_3=1^n∑_j_4=1^n ∏_l=1^4 1{‖X_j_l - x‖≤√(d) (h_2 + h_v) } 1{‖X_1 - x‖≤√(d) h_2 }= 1/h_v^d( 1/h_2^d-1)^4 1{‖X_1 - x‖≤√(d) h_2 }(1/n∑_i=1^n 1{‖X_i - x‖≤√(d) (h_2+h_v) })^4≤ 1/h_v^d( 1/h_2^d-1)^4 1{‖X_1 - x‖≤√(d) (h_2+h_v) }(1/n∑_i=1^n 1{‖X_i - x‖≤√(d) (h_2+h_v) })^4,in whichE[ 1{‖X_1 - x‖≤√(d) (h_2+h_v) }(1/n∑_i=1^n 1{‖X_i - x‖≤√(d) (h_2+h_v) })^4 ] = ( Pr(‖X_1 - x‖≤√(d) (h_2+h_v)) )^5 + o(1),since the number of terms with repeated indexes are of a smaller order than n^4. Then we take the further expectation of the above and getE[1/nh_v^d∑_i=1^nK(H_v^-1(X_i - x)) (m(X_i) - m̂(X_i))^4 ] ≲ 1/h_v^d( 1/h_2^d-1)^4 ( Pr(‖X_1 - x‖≤√(d) (h_2+h_v)) )^4 Pr( ‖X_1 - x‖≤√(d) h_2 )≤ 1/h_v^d( 1/h_2^d-1)^4 ( Pr(‖X_1 - x‖≤√(d) (h_2+h_v)) )^5. Finally, assuming (A6), the above converges to 0 as n →∞ and (<ref>) is proved.Using the same techniques as in the proof of (<ref>), it can be shown that the proposed estimator (<ref>) is also consistent under corresponding regularity conditions. Specifically, assumptions corresponding to (A3) - (A6) need to be made being conditional on Z_i. § PROOF OF PARAMETRIC RATE OF CONVERGENCE FOR ATT ESTIMATIONWe noted in Section <ref> that when the sign of the bias does interact with X, we can provide a two-point identification of ATT. Consequently, we can estimate ATT_± by n^-1∑_i τ_±(X_i). In this section, we show that this estimator is √(n) consistent. Let q(x) = m_1(x)-m_0(x) + [ {m_1(x)-m_0(x)}^2 - {σ^2(x) -σ_0^2(x)}/{π(x)(1-π(x))} ]^1/2.Assume that π is estimated by a local linear regression of Z on x with kernel K and bandwidth h_4; H_4=h_4I_d. Let δ(x) = ( m_1(x)-m_1(x), m_0(x)-m_0(x), σ^2(x)-σ^2_0(x) - σ^2(x) + σ^2_0(x), π(x)-π(x))^⊤. q(x) is the plug-in estimator of q(x). 
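A minimal sketch of this plug-in construction is given below: from fitted values m̂_1(X_i), m̂_0(X_i), σ̂²(X_i), σ̂_0²(X_i) and π̂(X_i) (obtained, e.g., from the local linear fits described above), τ̂_+(X_i) = q̂(X_i) is formed pointwise and averaged. The truncation of the term under the square root at zero mirrors the constrained-estimator discussion of the previous section; the placeholder fitted values used in the demonstration are arbitrary.

```python
import numpy as np

def tau_plus(m1, m0, sigma2, sigma2_0, pi):
    """Plug-in tau_+ at each sample point, following
    q(x) = m_1(x) - m_0(x)
           + [ {m_1(x) - m_0(x)}^2
               - {sigma^2(x) - sigma_0^2(x)} / {pi(x)(1 - pi(x))} ]^{1/2}.
    The term under the square root is truncated at zero (Delta_hat^2 >= 0).
    All inputs are arrays of fitted values evaluated at the sample X_i.
    """
    beta = m1 - m0
    s = (sigma2 - sigma2_0) / (pi * (1.0 - pi))
    delta2 = np.clip(beta**2 - s, 0.0, None)
    return beta + np.sqrt(delta2)

def att_plus(m1, m0, sigma2, sigma2_0, pi):
    """ATT_+ estimate: the sample average n^{-1} sum_i tau_+(X_i)."""
    return tau_plus(m1, m0, sigma2, sigma2_0, pi).mean()

# Illustration with placeholder fitted values (in practice these come from
# the local linear fits m1_hat, m0_hat, the residual-based variance fits
# sigma2_hat, sigma2_0_hat, and the propensity fit pi_hat).
rng = np.random.default_rng(2)
n = 1000
m1 = 1.0 + 0.1 * rng.standard_normal(n)
m0 = 0.2 + 0.1 * rng.standard_normal(n)
sigma2 = 1.5 + 0.05 * rng.standard_normal(n)
sigma2_0 = 1.4 + 0.05 * rng.standard_normal(n)
pi = np.clip(0.5 + 0.1 * rng.standard_normal(n), 0.05, 0.95)

print(att_plus(m1, m0, sigma2, sigma2_0, pi))
```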
We are interested in the asymptotic distribution of √(n)∑_i {τ_+(X_i) - τ_+(X_i)} = √(n)1/n∑_i {q(X_i) - q(X_i)}. We define the residuals in estimating our functions below. Let e_l denote a d unit vector with 1 in coordinate l and 0 elsewhere.Based on the linear approximation of the kernel estimators, write R_1(x) ≡ e_1^⊤δ(x) - 1/n h_3^d π(x) f_1(x)(∑_i=1^n Z_i K(H_3^-1(X_i-x)) ν_1(X_i) ξ_1i), R_2(x) ≡ e_2^⊤δ(x) - 1/n h_4^d (1-)f_0(x)(∑_i=1^n (1-Z_i) K(H_4^-1(X_i-x)) ν_0(X_i) ξ_0i), R_3(x) ≡ e_3^⊤δ(x) - 1/n h_2^d f(x)∑_i=1^nK(H_2^-1(X_i-x)) σ^2(X_i) (ϵ_i^2 - 1).Similarly, R_4(x)≡ e_4^⊤δ(x) - 1/n h_5^d f(x)(∑_i=1^n K(H_5^-1(X_i-x)) (Z_i-π(X_i)) ).Let ζ_1i = Z_iY_i - m_1(X_i) and ζ_0i = (1-Z_i)Y_i - m_0(X_i). Let E_i(·) denote expectation with respect to unit i's data and _i(A) = E_i1(A). (assumptions for asymptotic normality for ATT_±) (a) ∑_i R_l(X_i)^2 = o_p(1) for l=1,…,4.(b) Eq'(X) ^4 < ∞ and the largest eigenvalue of q”(x) is bounded.(c) h_l=h for all l=1,…,5 and nh^2d→∞(d) max{E(σ^4(X_i)(ϵ_i-1)^4), E(ν_1^4(X_i)ξ_1i^4), E(ν_0^4(X_i)ξ_0i^4), E(ζ_1i^4),E(ζ_0i^4)} < ∞(e) f(·), π(·) and 1-π(·) are bounded away from 0 and σ(·) is bounded. k(·) is compactly supported on [-1/√(d),1/√(d)]^d and bounded.(f) (X_i-X_j≤ h)=o(nh^4d) and E_i {_j(X_i-X_j≤ 2h)}^2=o(min{√(n)h^4-2d), h^4d-6/√(n), n^7/2h^6d}).(g) m_1(·), m_0(·), σ(·) and π(·) are Hölder continuous of degree 2.Under Assumption <ref>, with iid data,√(n)(n^-1∑_iτ_+(X_i)-ATT_+) goes to a normal distribution with mean 0. Using Taylor series expansion√(n)1/n∑_i {q(X_i) - q(X_i)}= √(n)1/n∑_i q'(X_i)^⊤δ(X_i) + √(n)1/n∑_i δ(X_i)^⊤ q”(X_i^⋆)δ(X_i) Consider first √(n)1/n∑_i q'(X_i)^⊤δ(X_i) = √(n)1/n∑_i ∑_l=1^4 e_l^⊤ q'(X_i)e_l^⊤δ(X_i). We will show that this term is O_p(1). We show that √(n)1/n∑_i e_1^⊤ q'(X_i)e_1^⊤δ(X_i) = O_p(1) and the same argument will work for the other three terms.√(n)1/n∑_i e_1^⊤ q'(X_i)e_1^⊤δ(X_i) = √(n)1/n∑_i=1^n e_1^⊤ q'(X_i)/n h_3^d π(x) f_1(X_i)(∑_j=1^n Z_j K(H_3^-1(X_j-X_i)) ν_1(X_j) ξ_1j) + √(n)1/n∑_i e_1^⊤ f'(X_i) R_1(X_i).By Cauchy-Schwarz inequality,|√(n)1/n∑_i e_1^⊤ q'(X_i) R_1(X_i)|≤{1/n∑_i (e_1^⊤ q'(X_i))^2 }^1/2{∑_iR_1(X_i)^2 }^1/2.Hence the second term in the above expression is o_p(1) since ∑_iR_1(X_i)^2=o_p(1) and Eq'(X_i)^4<∞.Throughout the rest of the paper, we use C and L as generic constants which may vary from line to line.Next, let W_i=(X_i,Z_i,Y_i). Rewrite the fist term as√(n)1/n^2∑_i=1^n ∑_j=1^n u_1(W_i,W_j), whereu_1(W_i, W_j) = e_1^⊤ q'(X_i)/h_3^d π(x) f_1(X_i) Z_j K(H_3^-1(X_j-X_i)) ν_1(X_j) ξ_1j. Using the fact that E{u_1(W_i,W_j)| W_i} = 0, by V-statistic projection we get √(n)1/n^2∑_i=1^n ∑_j=1^n [u_1(W_i,W_j) -E{u_1(W_i,W)| W}]= O_p(√(n) [ {E u_1(W_i,W)^2}^1/2 + {E u_1(W_i,W_i)^2}^1/2 ]/n),where W is an independent copy of W_i. We now show that E u(W_i,W)^2=o(n) and E u(W_i,W_i)^2=o(n). Using the facts that, f, π and 1-π are bounded away from 0 and k is bounded, we get that, for j≠ iE u_1(W_i,W_j)^2≤ C E{ h_3^-d 1(X_i-X_j≤ h_3) e_1^⊤ q'(X_i) ν_1(X_j) ξ_1j}^2≤ C { E h_3^-4d 1(X_i-X_j≤ h_3)}^1/2{ E{e_1^⊤ q'(X_i) ν_1(X_j) ξ_1j}^4}^1/2 = C { E h_3^-4d 1(X_i-X_j≤ h_3)}^1/2e_1^⊤ q'(X_i)_4^2ν_1(X_j) ξ_1j_4^2.Since, q'(X_i)_4<∞ and ν_1(X_j) ξ_1j_4<∞, we have E u_1(W_i,W)^2=E u_1(W_i,W_j)^2 = o(n) by the fact that nh_3^2d→∞. 
Next, E u_1(W_i,W_i)^2≤ C E{ h_3^-d e_1^⊤ q'(X_i) ν_1(X_i) ξ_1j}^2≤ C h_3^-2de_1^⊤ q'(X_i)_4^2ν_1(X_i) ξ_1i_4^2.Since, q'(X_i)_4<∞ and ν_1(X_i) ξ_1i_4<∞, we have E u_1(W_i,W)^2=E u_1(W_i,W_j)^2 = o(n) by the fact thatnh_3^2d→∞.By parallel calculations for the other three terms, = √(n)1/n∑_i ∑_l=1^4 e_l^⊤ q'(X_i)e_l^⊤δ(X_i)= √(n)1/n∑_i=1^nE{u_1(W_i,W) + u_2(W_i,W)+u_3(W_i,W)+u_4(W_i,W)| W}+o_p(1),Which goes to a normal distribution by the CLT for average of iid random variables with bounded fourth moment and Slutsky's theorem.The remaining term is√(n)1/n∑_i δ(X_i)^⊤ q”(X_i^⋆)δ(X_i). Sinceq” is uniformly bounded, we will only need to show that∑_i δ(X_i)^⊤δ(X_i) = o_p(√(n)), or ∑_i {e_l^⊤δ(X_i)}^2 = o_p(√(n)) for l=1,2,3,4. By Markov inequality, it suffices to show √(n)E{e_l^⊤δ(X_i)}^2→ 0 for each l=1,...,4.The calculations for l=1,2 and 4 are similar. Thus, we only show the calculations for √(n)E{e_1^⊤δ(X_i)}^2 = √(n)E{m_1(X_i)-m(X_i)}^2 and √(n)E{e_3^⊤δ(X_i)}^2=√(n)E{σ^2(X_i)-σ^2(X_i)}^2.Recall the definition of w_i(x) from (<ref>) in the proof of Remark <ref> and the facts that ∑_j w_i(x) = 1 and ∑_j w_j(x)(X_i-x)=0. We use the fact that |w_j(x)| ≤ C (nh^d)^-11(X_j-x≤ h).Recall E(Z_iY_i| X_i, Z_i=1) = m_1(X_i) and ζ_1i = Z_iY_i - m_1(X_i). Since m_1(x) = ∑_j w_i(x) Z_iY_i, using the Hölder continuity property of m_1(·), for some d× dmatrices L_ij^m_1 which have uniformly bounded maximum eigenvaluesE{m_1(X_i)-m_1(X_i)}^2= E[ ∑_j w_j(X_i){Z_jY_j -m_1(X_i)}]^2= E[ ∑_j w_j(X_i){m_1(X_j) + ζ_1j-m_1(X_i)}]^2= E[ ∑_j w_j(X_i) { (X_i-X_j)^⊤ L_ij^m_1(X_i-X_j) + ζ_1j} ]^2 ≤ E{∑_j |w_j(X_i)| LX_i-X_j^2 }^2 + ∑_j E w_j(X_i)^2ζ_1j^2 ≤L^2h_3^4 E{∑_j |w_j(X_i)| }^2 + ∑_j E w_j(X_i)^2ζ_1j^2 ≤CL^2h_3^4 1/n^2h_3^2dE{∑_j 1( X_i-X_j≤ h_3 ) }^2+ C/n^2h_3^2d∑_j E 1( X_i-X_j≤ h_3 )ζ_1j^2≤CL^2h_3^4-2d[ n-1/nE_i {_j(X_i-X_j≤ h_3)}^2 + 1/n(X_i-X_j≤ h_3) ] +C/nh_3^2d{E ζ_1j^4}^1/2{(X_i-X_j≤ h_3) }^1/2.Since E ζ_1j^4<∞, (X_i-X_j≤ h_3)=o(nh_3^4d) and E_i {_j(X_i-X_j≤ h_3)}^2=o(√(n)h_3^4-2d), we get √(n)E{m_1(X_i)-m_1(X_i)}^2→ 0.Finally, √(n)E{e_1^⊤δ(X_i)}^2= √(n)E{σ^2(X_i)-σ^2(X_i)}^2 = √(n)E[∑_j w_j(X_i){e_j^2 - σ^2(X_i)} ]^2. Recall that h_l=h for all l=1,…, 5.Now, by Hölder continuity of m, and properties of w_i(·), for some d× d matrices L_ij^mwhich have uniformly bounded maximum eigenvaluesE{e_j^2| X_1,...,X_n}=E[ ∑_k w_k(X_j)((X_i-X_j)^⊤ L_ij^m(X_i-X_j) + {σ(X_k)ϵ_k-σ(X_j)ϵ_j} )| X_1,...,X_n]^2= σ^2(X_j) +∑_k w_k(X_j)^2σ^2(X_k) +{∑_k w_k(X_j)(X_i-X_j)^⊤ L_ij(X_i-X_j) }^2 = σ^2(X_j) +ψ_j(say).Let ϕ_j= e_i^2 - E{e_j^2| X_1,...,X_n}. Then, for some d× d matrices L_ij^σ^2which have uniformly bounded maximum eigenvaluesE[∑_j w_j(X_i){e_j^2 - σ^2(X_i)} ]^2= E[∑_j w_j(X_i){ψ_j + ϕ_j + (X_i-X_j)^⊤ L_ij^σ^2(X_i-X_j)} ]^2= E[∑_j w_j(X_i)ψ_j]^2 + E[∑_j w_j(X_i)ϕ_j]^2 + E[∑_j w_j(X_i){(X_i-X_j)^⊤ L_ij^σ^2(X_i-X_j)}]^2 +2 E[∑_j∑_k w_j(X_i)(X_i-X_j)^⊤ L_ij^σ^2(X_i-X_j)w_k(X_i)ψ_k] ≤E[∑_j w_j(X_i)ψ_j]^2 + E[∑_j w_j(X_i)ϕ_j]^2 + E[∑_j |w_j(X_i)|{LX_j-X_i^2}]^2 +2 E[∑_j∑_k |w_j(X_i)|LX_j-X_i^2|w_k(X_i)||ψ_k|].From here, 1) As shown in our previous calculations, √(n)E[∑_j w_j(X_i)X_j-X_i^2 ]^2→ 0 becauseE_i {_j(X_i-X_j≤ h)}^2=o(√(n)h^4-2d).2) √(n)E[∑_j w_j(X_i)ϕ_j]^2 ≤ C(√(n)h_2^2d)^-1{E ϕ_j^4}^1/2{(X_i-X_j≤ h) }^1/2. 
Which goes to 0 since E ϕ_j^4<∞ and (X_i-X_j≤ h)=o(nh^4d).For the following, we use the boundedness of σ^2(·).3) By our bound on |w_j(x)|,E[∑_j∑_k |w_j(X_i)|LX_j-X_i^2|w_k(X_i)||ψ_k|] ≤q CLh_2^21/n^2h^2dE{∑_j∑_k 1(X_i-X_j≤ h)1(X_k-X_j≤ h)|ψ_k|}= CLh_2^21/h^2dE{1(X_i-X_j≤ h)1(X_k-X_j≤ h)|ψ_k|} ≤CLh_2^21/nh^4dE{1(X_i-X_j≤ h)1(X_k-X_j≤ h)1(X_l-X_j≤ h)+ CL^2h_2^2h^4/n^2h^4dE{1(X_i-X_j≤ h)1(X_k-X_j≤ h){∑_l 1(X_l-X_j≤ h)}^2 ≤CLh_2^21+h^4+nh^4/nh^4d E_i{_j(X_i-X_j≤ 2h)}^2.Thus, √(n)E[∑_j∑_k |w_j(X_i)|LX_j-X_i^2|w_k(X_i)||ψ_k|]→ 0 since E_i{_j(X_i-X_j≤ 2h)}^2 = o(h^4d-6/√(n)).4) E[∑_j |w_j(X_i)||ψ_j|]^2 ≤ C h^-2d E{ 1(X_i-X_j≤ h)1(X_i-X_k≤ h)|ψ_j||ψ_k|}.Similar to the calculations in (3), √(n)E[∑_j |w_j(X_i)||ψ_j|]^2→ 0 since E_i{_j(X_i-X_j≤ 2h)}^2 = o(n^7/2h^6d).Thus, we have established the desired asymptotic normality of √(n)(n^-1∑_iτ_+(X_i)-ATT_+).
http://arxiv.org/abs/2312.16439v1
{ "authors": [ "Zikun Qin", "Bikram Karmakar" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20231227065830", "title": "Inferring the Effect of a Confounded Treatment by Calibrating Resistant Population's Variance" }
^2New York City College of Technology, The City University of New York, Brooklyn, NY 11201, USA ^3The Graduate School and University Center, The City University of New York, New York, NY 10016, USA ^4Missouri University of Science and Technology, Rolla, MO 65409, USA We present an experimental study of dissociative excitation in collisions of helium ions with nitrogen and oxygen molecules at collision energies of 0.7-10 keV. Absolute emission cross sections are measured and reported for the most prominent nitrogen and oxygen atomic and ionic lines over a wide spectral range, covering the vacuum ultraviolet (80-130 nm) and visible (380-800 nm) regions. Striking similarities between the processes realized in the He^++N_2 and He^++O_2 collision systems are observed. We also present polarization measurements for the He^++N_2 collision system. The emission of excited dissociation products was detected with an improved high-resolution optical spectroscopy method. This setup is combined with the retarding potential method and a high-resolution electrostatic energy analyzer to measure precisely the energy of the incident particles and their energy dispersion. The improved optical resolution allows us to measure cross sections on the order of 10^-19 cm^2 or lower. Excitations of N_2 and O_2 molecules due to helium ion impact and a polarization effect M. Gochitashvili^1, R. Lomsadze^1, R. Ya. Kezerashvili^2,3, I. Noselidze^1, and M. Schulz^4 January 14, 2024 =============================================================================================== § INTRODUCTION Nitrogen and oxygen molecules are main constituents of planetary atmospheres and are simple species of great interest for the interstellar medium <cit.>. First of all, nitrogen and oxygen molecules are very important constituents of planetary upper atmospheres and play a crucial role in atmospheric chemistry at mesospheric and thermospheric altitudes. The solar radiation results mainly from solar flares, the solar wind, coronal mass ejections, and solar prominences, and it includes electromagnetic radiation as well as energetic electrons, protons, and α-particles. These basic components of the solar radiation interact with the atmospheric nitrogen and oxygen molecules: the electromagnetic radiation with energies from a few tenths of eV to hundreds of MeV, electrons and protons with energies from a few tenths of eV to hundreds of MeV and up to GeV, and helium ions with energies up to 10 keV. Numerous processes in which atoms or atomic ions are formed with oxygen and nitrogen molecules as targets have attracted considerable interest. These processes play a crucial role in atmospheric chemistry at mesospheric and thermospheric altitudes. Ionic species are present in the upper atmospheres of planets, where they govern the chemistry of the ionospheres; atomic and molecular ions have also been detected in comet tails. Among the many inelastic processes involving oxygen ions, excitation, ionization, and charge exchange are relevant to the low-temperature edge plasma region of current thermonuclear fusion devices <cit.>. Oxygen is also one of the typical impurities in almost all laboratory plasmas <cit.>. The electromagnetic radiation and the corpuscular part of the solar radiation, i.e., electrons, protons, and helium ions <cit.> with an energy spectrum between a few tenths of eV and hundreds of MeV, head towards the Earth and interact with the atmosphere.
The corpuscular charged part of solar radiation is deflected by the Earth's magnetic field towards the poles and get scattered and absorbed by atmospheric atoms and molecules, particularly by nitrogen and oxygen. The accompanying ionization of various gases in the upper atmosphere is causing a luminous glow of the upper atmosphere, so called phenomenon - aurora. While there have been investigations of main characteristics of the aurora, there is still a lack of quantitative description of this phenomenon. Upon reaching the denser layers of the atmosphere, electrons and helium ions participate in various inelastic processes such as ionization, molecular excitation, and charge-exchange reactions on atmospheric gases, especially on nitrogen and oxygen molecules. Spectral analysis of the aurora shows that the ionized nitrogen molecules can radiate in the visible, infrared, and ultraviolet region.Observations of the excitation of B ^2Σ _u^+ and A ^2Π _u band system in the ionized nitrogen molecule N_2^+ indicate their presence in the aurora and dayglow <cit.>. These bands appear in the spectra of polar auroras and carry information on the collision processes in the upper atmosphere. Hence, during collisions, vibrationally exited N_2^+ ions and their radiative decay are accompanied by creation of electronic ground X ^2Σ _g^+ states.The study of the emission spectra gives the opportunity to determine the concentration and energy distribution of particles entering the upper layer of the atmosphere. To address this problem, it is necessary to determine with high precision absolute cross sections of various inelastic processes, such as ionization, excitation, and charge-exchange. However, determination of the absolute cross section, for example, of the excited B ^2Σ _u^+ band system is challenging. The number of experimental works in which this excited band system is measured is very limited, and they are usually related to the processes of excitation through electron collisions with nitrogen molecules <cit.>. However, in the case of electron impact, the experimentally determined cross section for formation of the N_2^+ ions in the A-state is only known within 50% because measurements of excitation cross sections involve various challenges. In particular, the lifetime of the nitrogen molecule ions in the A ^2Π _u state is about 10^-5 s <cit.>, and during measurements the quenching of excited particles (the transfer of the excited energy to other particles) is expected to occur.The dissociation of highly excited molecular states deserves a special interest and today is a subject of extensive research <cit.>, including experimental investigations adds new information. The products of molecular dissociation may stay in an excited state. Highly excited states may decay through pre-dissociation or autoionization channels with the formation of a neutral atom, electron, or ion <cit.>. For example, neutral fragments resulting from the dissociation of highly excited states were observed in Refs. <cit.>. The absolute cross section of luminescence due to the excited product of atomic oxygen dissociation in the wavelength range of 97–131 nm was measured for photoionization <cit.>.Follows in one, the information about the polarization is important for the accurate determination of absolute and relative photon emission cross sections. Results of polarization measurements allowed also to make some nontrivial conclusions related to the spatial distribution of the electron cloud. 
A reliable experimental determination of the polarization fraction, not only provides additional information about the details of excitation cross sections by determining relative populations of the degenerate magnetic sublevels, but also enables a comparison of available experimental data with calculations of total cross sections.Quantitatively, the polarization fraction can be analyzed in terms of alignment of the orbital momentum sublevels. The cross sections of population of magnetic sublevels provide detailed information on the excitation mechanism. Due to the different populations of magnetic sublevels within a certain (nl) subshell, the radiation can be polarized and, consequently, anisotropic.Numerous works are devoted to investigate the polarization of radiation in ion-atom and ion-molecule processes <cit.>. Usually in the polarization measurements the coincidences of photon and scattered particle are detected <cit.>.In the case of inelastic He^+-N_2 and He^+-O_2 collisions, the radiation spectrum is quite multitudinous. Therefore, the monochromator with high resolving power (∼0.2 nm) should be used. In order to amplify the optical registration sensitivity in addition to monochromator, the broad bandpass filters are used for isolating the optical lines. An anisotropic excitation mechanism is quite common in astrophysical plasmas and are readily reproduced in a laboratory environment <cit.>. Almost a century ago <cit.>, was shown that spectral line emissions originating from atoms or ions excited by particles whose velocity distribution is anisotropic, in general, is polarized. If a collisional excitation occurs by impact in a preferred direction, the magnetic sublevels of the excited states can be populated with non-statistical probabilities. When the state decays, the emitted electromagnetic radiation will be spatially anisotropic and partially polarized <cit.>.In Ref. <cit.>, authors studied the polarization of radiation emitted from ions excited by an electron beam impact inside the electron-beam ion trap. It demonstrated that polarization radiation of the emitted radiation is especially important when measurements are made with spectrometers in which the energy disperser is polarization selective. The polarization-sensitive measurements may also be used to detect resonance processes that are too weak to observe directly. Moreover, this gives information about the magnetic sublevels that would normally remain hidden in simple energy dispersive measurements.Applying conservation of angular momentum allowed calculations of the relative populations of the magnetic sublevels <cit.>. The magnetic sublevel population of autoionizing states of helium, excited by charged particle impact was determined in Refs. <cit.>. The degree of polarization of radiation emitted by atoms and ions following particle impact contains information on the excitation of magnetic sublevels with different projections M of orbital momentum <cit.>. In Ref. <cit.> [54], authors measured the degree of linear polarization in the extreme ultraviolet region and cross sections of excitation to individual magnetic sublevels. For 1s-2p single-electron excitation of helium, it was found the effects are stronger for the excitation of sublevels with M=0, than with M=± 1. In literature, considerable evidence exists that the molecular radiation may be strongly polarized for both due to discharge sources and when electron beam excitation is used <cit.>. In Ref. 
<cit.> authors predicted that the atomic or ionic radiation following dissociative excitation of molecules can be polarized. Several excited states of the N and N^+ were identified in the visible and near-infrared optical emission spectrum produced by electron impact excitation of the N_2 (X ^1Σ _g^+ ) <cit.>. During the dissociative process, one of the fragments (ion/atom) can be left in an excited state from which then it radiates. Fluorescence from the excited atomic fragment can be polarized, and the degree of this polarization related to the form of the anisotropy in the angular distribution of dissociation product <cit.>.In the last few years, experimental studies of ion collisions at low and medium energies (a few eV and keV) are undertaken using different techniques. For example, in the case of the interaction between He^+ ions and H_2, N_2, O_2, CO, NO molecules, collision spectroscopy methods <cit.>, and high resolution translational spectroscopy for the pairs H_2^+-H_2, H_2^+-Mg, H_2^+-Na, H_2^+-Cs, and H_2^+-Ar <cit.> are used. Emission from the dissociation products in the visible range revealed a rich spectrum of excited states in collisions of the different ions with O_2 and N_2 molecules <cit.>.In this work, we experimentally study dissociative excitation in collisions of helium ions with nitrogen and oxygen molecules in the ion energy range of 0.7–10 keV. One of the reason for the choice of the helium ion as a projectile is that highly excited molecular states of the oxygen and nitrogen molecular ion arise in this case, since the inelastic channels of charge exchange prevails in the respective range of collision energies and so the inner-shell electron of the molecules is captured <cit.>. Absolute emission cross sections are measured and reported for the most nitrogen and oxygen atomic and ionic lines in vacuum ultraviolet (80-130 nm) and visible (380-800 nm), spectral region. Measurements are performed by the optical spectroscopy method <cit.>. We report the measurements of the degree of linear polarization for the lines of the helium atom λ =388.9 nm for the transition 3p ^3P_0→ 2s ^3S and λ =587.6 nm, the transition 3d ^3D → 2p ^3P and nitrogen ion λ =500.1-500.5 nm, the transition 3d ^3F_0→ 3p ^3D due to the He^+-N_2 collision.This article is organized as follows. In Sec. II we present the experiment setup and measurement method. In Sec. III we present processes and emission spectral lines that are measured for the He^+ - N_2, He^+ - O _2 and e - N_2, e - O_2 collision systems and discuss the results of measurements. Finally, conclusions follow in Sec. IV.§ EXPERIMENT SETUP AND MEASUREMENT METHOD The radiation from excited particles was detected with high-resolution optical spectroscopy, which is the most precise method of identification of highly excited molecular states. The experimental setup and calibration procedure have been described in details in Refs. <cit.>. The resolution of the optics was greatly improved since then, therefore we were able to distinguish the excitation channels and measure the cross section on the order of 10^-19 cm^2 or lower. A schematic view of the experimental setup is shown in Fig. <ref>. A beam of He^+ ions extracted from the high frequency (20 MHz) discharge ion source are accelerated, collimatedand focused by an ion-optics system, which includes quadruple lenses and collimating slits, and mass-selected with a 60^0 magnetic sector field. Then the beam is directed into the collision chamber. 
In order to determine spectral sensitivity of emission detecting system we used the electron gun placed into the mass-analyzer chamber. The electron beam, after the collimation and additional focusing, was directed into the collision chamber.The radiation emitted as a result of the excitation of colliding particles is observed at 90^o with respect to the direction of the ion beam. A secondary-electron multiplier with a cooled cathode both in the integral and counting regimes detects the radiation. The spectroscopic analysis of the emission is performed by a visible monochromator with resolution of 40 nm/mm, and by means of a Seya–Namioka vacuum monochromator with a toroidal diffraction grating with a typical resolution of 0.05 nm that has a 1200 line/mm. The method also allowed us to measure the polarization of excitation, which itself is a powerful tool for establishing the mechanism for inelastic processes. A polarizer and a mica quarter-wave phase plate are placed in front of the entrance slit of the monochromator and the linear polarization of the emission is analyzed. For cancellation of the polarizing effect of the monochromator, the phase plate are placed after the polarizer and rigidly coupled to it.The measurements at low energy collisions required a precise determination of energy of helium ions as well as their energy dispersion. To avoid errors in the measurements of energy of the incident particles, we employed the retarding potential method and used the electrostatic analyzer with a resolving power of 700. Additionally, by measuring the energy of impacting particles we estimate the dispersion of energy provided by the high frequency ion source and electron gun.The helium ion current in the collision chamber is of the order 0.1-0.5 μA, while the electron current was 5-20 μA. The pressure of the target gas under investigation do not exceed 6×10^-4Torr, so that single collisions are considered. The system is pumped differentially using the oil-free diffusion pump. The residual gas pressure do not exceed 0.1×10^-6 Torr.The basic problem in finding the cross section is to determine the relative and absolute spectral sensitivity of the radiation-detecting system. This is done by measuring the output signal of photomultiplier due to the (0.0), (0.1), (0.2), (0.3), (0.4), (1.2), (1.3), and (1.4) bands in the first negative system of the ion N_2^+ (B ^2Σ _u^+-X ^2Σ _g^+ transition) and (4.0), (4.1), (6.2), (6.3), (2.0), (3.0), (5.1), and (5.2) bands of the Meinel system (A ^2Π _u^+-X ^2Σ _g^+ transition) <cit.> excited in collisions between the electrons (E_e=110 eV) and nitrogen molecules. The output signal is normalized to (0.1) band with the corresponding wavelength λ =427.8 nm. This line had the high intensity in this range. The relative spectral sensitivity of the recording system obtained in this way is compared with the relative excitation cross sections for the same bands, averaged over the experimental data reported in Refs. <cit.>. The absolute excitation cross-sections for the (0.1) band (λ =427.8 nm) are assumed to be 5.3×10^-18 cm^2 at the electron energy of 110 eV. We take this value from Ref. <cit.>. The relative and absolute uncertainty in our measurements are 5% and 15%, respectively. The accuracy of polarization measurements do not exceed ∼ 2%.§RESULTS OF EXPERIMENTAL MEASUREMENTS AND DISCUSSION The Sub-sections A and B provide an outline of the investigated processes for the He^+-N_2 and He^+ - O_2 collision systems and the emission spectral lines. 
In Sub-section C we present the results along with a discussion. §.§ He^+-N_2collision systemHe^+(1s)+N_2 ⟶He ^∗+N_2^+⟶He^∗(3d^3D)+N_2^+; ⟶ He^∗(3p^3P_0)+N_2^+; ⟶ He^∗(4d^3D)+N_2^+; ⟶ He^∗(4d^1D)+N_2^+,and He^+(1s)+N_2⟶ He(1s ^2)+N_2^+^∗∗when N_2^+^∗∗ ⟶N^∗(2p^4^4P )+N^+(1s^2 2s^2 2p^2^3 P);⟶N^∗(3s4P)+N^+(2p ^2^3P);⟶ N^∗(3s'^2D)+ N^+(2p^2^3P);⟶ N^∗(4p^2S^0)+N^+(2p^2^3P);⟶ N(1s^22s^22p^ 3^4S_3/2^0)+N^+* (1s^2 2s^2 2p^1 3p ^1 P);⟶ N(1s^2 2s^2 2p^3^4S_3/2^0 )+N ^+*(3p ^3D);⟶ N(1s^2 2s^2 2p^3^4S_3/2^0 )+N ^+*(3d ^3F^0);⟶ N(1s^2 2s^2 2p^3^4S_3/2^0 )+N ^+*(3s ^3P^0);⟶ N(1s^2 2s^2 2p^3^4S_3/2^0 )+N ^+*(4f F);⟶ N(1s^2 2s^2 2p^3^4S_3/2^0 )+N ^+*(2p^3^3D^0).The nitrogen atom and ion lines wavelength and corresponding transitions are the following 3c||Nitrogen atom 3cNitrogen ionWavelength, λ nm TransitionWavelength, λ nmTransition NI 493.5 4p ^2S^0 ⟶ 3s^ 2P NII648.2 3p ^1P ⟶ 3s^ 1P^0NI 124.3 3s^^' ^3D ⟶ 2p^34D^0 NII 504.5 3p ^3S ⟶ 3s^ 3P^0NI 120 3s ^4P ⟶ 2p^34S^0 NII500.5 3d ^3F^0 ⟶ 3p ^3DNI 113.4-113.5 2p^4 ^4P ⟶ 2p^34S^0 NII 567.6-567.9 3p ^3D ⟶ 3s ^3P^0NII 424.2 4f F ⟶ 3d ^3D^0NII 399.5 3p ^1D ⟶ 3s ^1P^0NII 108.4-108.6 2p^3 ^3D^0 ⟶ 2p^2 ^3P The helium atom lines wavelength and corresponding transitions are the following 3cHelium atom Wavelength, λ nm TransitionHeI 667.8 3d ^1D ⟶ 2p^ 1P^0HeI 587.6 3d ^3D ⟶ 2p^ 3P^0HeI 492.2 4d ^1D ⟶ 2p^ 1P^0HeI 447.2 4d ^3D ⟶ 2p^ 3P^0HeI 388.9 3p ^3P^0 ⟶ 2s^ 3S§.§ He^+ - O_2 collision systemHe^+(1s)+O_2 ⟶He ^∗+O_2^+⟶He^∗(1s 2p)+O_2^+; ⟶ He^∗(3p^3P_0)+O_2^+;and He^+(1s)+O_2 ⟶ He(2s^2 )+O_2^+**⟶ He(2s^2)+O^∗(3s ^'^3D^0+O^+(1s ^22s^22p^3^4 S_3/2^0 );⟶ He(2s^2)+O^* (4d ^3D^0)+O^+(2p^3 ^4S_3/2^0 );⟶ He(2s^2)+O^* (3d^3^3D^0)+O^+(2p^3 ^4S_3/2^0 );⟶ He(2s^2)+O^* (3s^^'^1D^0)+O^+(2p^3 ^4S_3/2^0 );⟶ He(2s^2)+O(1s^22s^22p^4^3P_2)+O^+*(1s^22s^2 2p^23d ^3D^0).The wavelength of the oxygen atom and ion, and helium atom lines and the corresponding transitions are the following: 3c||Oxygen atom 3cOxygen ion Wavelength, λ, nm TransitionWavelength, λ, nmTransition OI 99.0 3s^'^3D^0⟶2p^4 ^3P OII 83.4 2p^4 ^4P⟶2p^3 ^4S^0OI 97.4 4d^3D^0⟶ 2p^4 ^3P OI 102.6 3d^3D^0⟶2p^4 ^3P OI 115.2 3s^^'^1D^0⟶2p^4 ^1D 3cHelium atom Wavelength, λ, nm TransitionHeI 53.7 3p ^1P^0⟶1s^2 ^1SHeI 58.4 2p ^1P^0⟶1s^2 ^1S§.§Experimental results and discussion Figures <ref>a and <ref>b show the dependence of the emission spectra on the wavelength in the vacuum ultraviolet spectral range of 105-130 nm and the visible spectral range of 490 – 580 nm, respectively, for collisions of E=5 keV helium ions with nitrogen molecules. Figure <ref>a presents excitation spectra mostly for (with one exception of ionic line λ =108.4 nm) nitrogen atomic lines, while Fig. <ref>b demonstrates nitrogen ionic lines (with one exception of atomic nitrogen line, λ =493.5 nm). The energy dependences of the excitation cross sections for the same emission spectral lines are presented in Figs. <ref> a and <ref>b, respectively. The energy dependence of the helium atom emission cross sections in the process He^+- N_2 on the energy of helium ions are shown in Fig. <ref>c. The results for emission spectrum in the wavelength range of 80 – 105 nm in collisions of E=10 keV helium ions with oxygen molecules are presented in Fig. <ref>. The energy dependences of the emission cross section for oxygen atomic OI (99.0; 102.6; 115.2; 97.4 nm ) and oxygen ionic OII (83.4 nm) lines in He^+- O_2 collisions are given in Fig. <ref>a. 
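The absolute scale of these cross sections rests on the band calibration described in Sec. II, i.e., normalization to the N_2^+ first negative (0.1) band at 427.8 nm with the adopted value of 5.3×10^-18 cm^2 at an electron energy of 110 eV. The sketch below indicates schematically how a measured line signal is converted to an absolute emission cross section; the proportionality of the photon signal to detection efficiency, emission cross section, beam current, and target density is an assumption made here for illustration, and all numbers in the demonstration are hypothetical.

```python
# Schematic sketch of the absolute normalization described in Sec. II.
# Assumption (not spelled out in the text): the photon count rate of a line
# scales as S = eps(lambda) * sigma_em * I_beam * n_target over a common
# interaction length, so an unknown line is put on an absolute scale by
# comparison with the N2+ (0,1) reference band.

SIGMA_REF = 5.3e-18        # cm^2, N2+ first negative (0,1) band at 427.8 nm,
                           # electron impact at 110 eV (adopted reference value)

def absolute_cross_section(S_line, S_ref, eps_line_over_eps_ref,
                           I_line, I_ref, n_line, n_ref):
    """Emission cross section of a measured line relative to the reference.

    S_line, S_ref          : measured photon signals (same arbitrary units)
    eps_line_over_eps_ref  : relative spectral sensitivity of the detection
                             system at the two wavelengths (from the band
                             calibration of Sec. II)
    I_line, I_ref          : projectile currents in the two runs
    n_line, n_ref          : target gas densities in the two runs
    """
    return (SIGMA_REF * (S_line / S_ref) / eps_line_over_eps_ref
            * (I_ref / I_line) * (n_ref / n_line))

# Hypothetical numbers, for illustration only.
sigma = absolute_cross_section(S_line=1.2e4, S_ref=4.0e5,
                               eps_line_over_eps_ref=0.6,
                               I_line=0.3e-6, I_ref=10e-6,
                               n_line=1.0, n_ref=1.0)
print(f"{sigma:.2e} cm^2")
```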
From the presented results of measurements, special attention is given to the comparison of energy dependence of excitation of helium atom and excitation of nitrogen ion. For this reason, we measure the excitation functions in visible spectral region for lines of helium atom HeI (λ =388.9 nm, 3p ^3P⟶2s ^3S) and nitrogen ion NII (λ =500.1 – 500.5 nm, 3d ^3F→ 3p ^3D) as well as polarization measurements for the same lines. The results of these measurements are plotted in Fig. <ref> and <ref>, respectively.The analysis of the results shown in Figs. <ref> and <ref>, as well as those shown in Figs. <ref>, <ref> and <ref>, and polarization measurements presented in Fig. <ref> allows us to make some conclusion, related to the notable similarities of the processes considered in He^+-O _2, He^+-N_2, collision systems and the electron-impact ionization in e-N_2, e-O_2 collision systems studied in <cit.>.The striking similarities in He^+-O_2 and He^+-N_2 systems are the following: i. the strong dominance of quasi-resonant charge-exchange processes; ii. the dominance of endothermic processes for a similar energy defect; iii. the population of exothermic channels suggesting a strong dynamic effect. The dominance of electron-capture processes to direct – excitation processes, are also observed in <cit.>. In our case, the most similarities related to the intensity of lines, energy dependences and mechanisms in He^+-O_2 and He^+-N_2 collision systems are in detail discuss below.Let us first consider excitation processes of dissociation products (nitrogen atom and nitrogen ion) based on the results presented in Fig. <ref> and <ref>.In Fig. <ref>a, the curves corresponding to the two dominant atomic NI (120.0 nm, 3s ^4P→2p^3 ^4S) and ionic NII (108.4 nm, 2p^3 ^3D→ 2p^2 ^3P) lines for the He^++ N_2 collision system exhibit a similar shape. Moreover, the absolute values of the excitation cross sections for these lines are close to each other. These results suggest that the excitation mechanisms of the molecular states which dissociate into N^∗(3s ^4P) and N^+^∗(2p^3 ^3D) products are almost the same. In Ref. <cit.> authors made the same conclusion and shown that at relatively low energies, E≤ 3 keV, the charge exchange is the dominant process. According to this work, the 1s vacancy of the He^+ plays the determining role in different excitation processes. Namely, in the case of (HeN_2)^+ ionic system, the initial vacancy in the He (1s) orbital becomes an inner vacancy of the ionic atomic quasimolecule. Hence, core-excited molecular states can be formed. The formation of the excited products can be connected with the decay of these intermediate molecular states of N_2^+^∗. Specifically, the molecular state that is correlated with either the N(3s ^4P)+N^+(3P) or the N(3s ^2P)+ N^+ (^3P) channels can be produced by formation of a 2sσ _g hole in the ground state of the N_2 molecule <cit.>.As a consequence, one should observe emission of atomic NI (120.0 nm) and ionic NII (108.4 nm) nitrogen lines, respectively. Besides of these intense spectral lines, we observe also atomic NI (113.4 nm; 2p^4 ^4P→2p^3 ^4S^0) and ionic NII (91.6 nm; 2s2p^3 ^3P→2p^2 ^3p) and NII (77. 6 nm; 2s2p^3 ^1D→2p^2 ^1D), which are not shown in Figs. <ref>a and<ref>a, and NII (108.4 nm; 2p^3 ^3D→2p^3 ^3 P) lines (see spectral lines in Fig. <ref>a and energy dependences in Fig. <ref>a) that are formed by removal of the 2s electron from the inner electronic shell. 
The formation of these excited products can also be caused by the decay of the core – excited molecular states. Unfortunately, not all the molecular states that produce these lines in the dissociation processes can be identified. Therefore, a particular attention devoted to the formation of the highly excited molecular states. These highly excited molecular states can be formed by the removal of a 2sσ _g electron in the charge-exchange channel. For explanation of the excitation mechanism of this state we have used schematic MO correlation diagrams for the (HeN_2)^+ system from Ref. <cit.>. According to this diagram when two partners approach each other one inner 2σ _u electron fills the He(1s) vacancy, and the other 3σ _g or 1pu electron is promoted to a high Rydberg orbital. Hence, molecular states can be formed with ionic cores. In this cases, a highly excited Rydberg orbital should produce molecular states of N_2^+^∗ with ^2Σ _g^+ symmetry that differ by two electrons from the N_2 ground state <cit.>. It is also possible that formation of some excited atomic and ionic dissociation fragments can occur through the decay of the ^2Σ _g^+ core-excited molecular Rydberg state.The removal of the 2sσ _g electron of the N_2 molecule requires about 37 eV <cit.>. Therefore, the excitation of the inelastic channel He(1s^2) + N_2^+ (2σ _g^-1) in the charge-exchange process (ionization potential of He is 24.6 eV) is required to change the internal energy of the (HeN_2)^+ system by about 12.4 eV. So, energy-loss spectra in the range 10.5<Q<15.6 eV observed in <cit.> in the charge exchange channel might contain this process.Notable similarities of energy dependences for the He^+→N_2 collision system are shown in Fig. <ref>b. The energy dependences and absolute value of cross sections is the same for atomic ion lines of the nitrogen NII (567.9 nm) with transition N^+ (3p ^3D→N^+ (3s ^3P^0) and NII (500.5 nm) with transition N^+ (2p3d ^3F)→N^+ (2p3p ^3D). The optical excitation function for these two lines measured for the e-N_2 collision system are studied in Ref. <cit.>. In what follows, the term excitation function means optical emission excitation function. As shown in <cit.> in our study, the shapes of these two excitation function (5680 oA and 5001 oA) are virtually identical. The excitation of the relatively low intense atomic NI (113,4 nm, 2p^4 ^4 P→2p^3 ^4S) and NI (124,3 nm 3s^I ^2D→2p^3 ^2D) lines presented in Fig. <ref>a, can be associated with the direct one- and two-electron transitions due to the MO crossing, following MO promotion <cit.>.Proceeding in the same way as in the He^+-N_2 case, we measured the emission cross section for the He^+-O_2 collision system. Figure <ref>a shows the energy dependence of the emission cross section for atomic OI (99.0; 102.0; 115.2; 97.4 nm) and intense ionic OII (83.4 nm) oxygen lines. From these experimental data follows that the oxygen ion line OII (83.4 nm, the 2p^4 ^4P→2p^3 ^4S_0 transition) is the most intense for collisions with helium ions (Figs. <ref> and <ref>a). The molecular dissociation causing excited atomic and/or ionic fragments is due to the decay of a highly excited intermediate molecular state of the inner shell, where a collision-induced vacancy arises.In the case of the He^+-O_2 collision, when the particles come closer together, a 1s inner-shell vacancy of the helium atom turns into an inner-shell vacancy of a triatomic quasi-molecule. Accordingly, when the particles fall apart, an instable highly excited O_2^+ molecular ion arises. 
Specifically, the decay of the 2σ _g^-1vacancy in ^2Σ _g^- and^4Σ _g^- highly excited molecular states lead to the formation of an excited dissociation product with the intense oxygen ion line OII (83.4 nm) <cit.>.An energy of about 46.2 eV is necessary to remove the 2sσ _g electron from the inner shell of oxygen. Therefore, the excitation of this inelastic channel in the process of dissociative charge-exchange He^+(1s)+O_2→He^+ (1s^2)+O_2^+(2σ _g^-1)requires a change of the inner energy of the quasi-molecular system of (He,O_2)^+ roughly by 22 eV. This estimate is indirectly confirmed in Ref.<cit.>. It seems that a broad peak in the energy loss spectrum near 22 eV, typical for the charge-exchange, is related to this inelastic channel.Let us now consider the similarities of the excitation of He atom and nitrogen ion lines. Experimental data for the excitation functions for the helium atom HeI (λ =388.9 nm, 3p ^3P→2s ^3S) and nitrogen ion NII (λ =500.1-500.5 nm, 3d ^3F→3p ^3D) lines are presented in Fig. <ref>. The curves exhibit surprising resemblance: in the entire investigated energy region, both the absolute values of the emission cross sections and their energy dependence are close to each other. Such emission cross sections' behavior points out the existance of a strong correlation between the dissociation products: the excited helium atoms and nitrogen ions He^++N_2(X ^2Σ _g^+) → He^∗+N^++N, He^++N_2(X ^2Σ _g^+) → He+N_2^+^∗→He+N ^+^∗+N.Also, we suppose that the inelastic energy defects for these channels are close to each other. There are some additional arguments that substantiate the existence of correlation between the channels (<ref>) and (<ref>). In Ref. <cit.> electron-impact dissociative excitation of nitrogen molecules N_2 was investigated. The authors have observed the same emission line of N^+:λ =500.5 nm, corresponding to the transition 3d ^3F→3p ^3D. Because the incident particle electron has a small mass, the experimentally obtained threshold energy, 57 eV, for the appearance of this line nearly coincides with the corresponding energy defect for this process. So, for the threshold of 57 eV, after reduction by the ionization potential of the helium atom (24.6 eV), gives approximately 32 eV for the energy defect. In the energy loss spectrum plotted in <cit.>, for the charge-exchange channel, one can find a broad peak in the vicinity of ∼30 eV. Thus, 32 eV is located in this area and this fact is indirect evidence in favor of the close relationship between reactions (<ref>) and (<ref>) <cit.>. §.§ Polarization Polarization of the emission emerging from the excited ^3P-state of helium is connected to the relative populations of m_L=0 and m_L=± 1 sublevels. Expression for the first Stokes's parameter has been derived on the basis of the general approach developed in <cit.>. In the Appendix we present the details of these calculations, and the final formula for the linear polarization reads: P=ℑ_∥-ℑ_⊥/ℑ _∥+ℑ_⊥=15( σ _0-σ _1) /41σ _0+67σ _1,where ℑ_∥ and ℑ_⊥ are the intensities of radiation emitted in a direction perpendicular to the helium beam having electric vectors parallel and perpendicular to the beam direction, respectively. In Eq. (<ref>) σ _0 and σ _1 stand for cross sections for the population of sublevels with m_L=0 andm_L=± 1, respectively. Our experimental observation leads to the value of P∼ 20% at the energy range of 6.5-10 keV. From Eq. (<ref>) we obtain that the ratio σ _0/σ _1≈ 15. 
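The relation between the measured degree of polarization and the magnetic-sublevel cross sections can be inverted directly. In the sketch below, the formula derived in the Appendix for the 3p ^3P level is solved for the sublevel ratio r = σ_1/σ_0; for a 20% polarization of negative sign, as observed for the 388.9 nm line (see the polarization results and the Conclusions), this gives r of about 15, i.e., a strong preference for the m_L = ±1 sublevels, whereas equal (statistical) populations give zero polarization.

```python
def P_3p(sigma0, sigma1):
    """Linear polarization of the HeI 388.9 nm (3p 3P -> 2s 3S) line, using
    the expression derived in the Appendix: P = 15(s0 - s1) / (41 s0 + 67 s1)."""
    return 15.0 * (sigma0 - sigma1) / (41.0 * sigma0 + 67.0 * sigma1)

def sublevel_ratio(P):
    """Invert P_3p for r = sigma_1 / sigma_0, the population of the m_L = +-1
    sublevels relative to m_L = 0."""
    return (15.0 - 41.0 * P) / (15.0 + 67.0 * P)

print(P_3p(1.0, 1.0))      # 0.0 : statistical (equal) populations give no polarization
r = sublevel_ratio(-0.20)  # measured |P| ~ 20%, negative sign for the 388.9 nm line
print(r)                   # ~ 14.5, i.e. sigma_1 / sigma_0 of about 15
print(P_3p(1.0, r))        # consistency check: recovers P = -0.20
```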
Such a large value of the ratio indicates that m_L=± 1 sublevels of the excited helium atom are preferably populated. The latter means that the electron density formed in the He^∗ during the collision is oriented perpendicularly with respect to the incident beam direction. In Fig. <ref> we present the results of polarization measurements. The results are presented in a linear-linear plot in order to highlight the difference and similarities of polarization for the investigated emission lines. As shown, maximum negative degree of polarization is 20% at the energy 8 keV for the HeI (388.9 nm) line and that is ∼6% for the NII ( 500.1-500.5 nm), which is a dissociation product. As it is seen from Fig.<ref> the degrees of polarization for the studied emission lines change the sign at the nearly same energy ∼2.5 keV and appeared to be independent on the He^+ incident energy in the range of 5-10 keV for the N^+ and 8 to 10 keV for the He. A rise in the polarization as the energy decreases has also been noted for this transition in <cit.>. These authors find, that the polarization falls to a value about 5%. The latter is consisted with our result.For the polarization of radiation emitted by the nitrogen ion N^+ (dissociation product) and helium atom He^∗(3d ^3D) we use the same technique as for the derivation of (<ref>) and obtain the following expressions: P_N^+ = σ _0+2σ _1-3σ _3/3σ _0+6σ _1+6σ _2+5σ _3,P_He^∗ = σ _0+σ _1-2σ _2/3σ _0+6σ _1+5σ _2.We note that in this case of the excited N^+ ion and He^∗ atom it is complicated to trace any pronounced alignment of the radiating object. The reason is that expressions (<ref>) and (<ref>) contain not only σ _0 and σ _1, but also σ _2 and σ _3. σ _2 and σ _3 are the cross sections for the sublevel m_L=± 2 and m_L=± 3. Therefore, the unambiguous determination of branching ratios is a challenging task.The energy dependence of measured polarizations shows that the electronic orientation of the excited He atom changes at nearly 3 keV. One can suppose that because of the mentioned strong correlation between the channels of excitation of the He and N^+ the electronic orientation of the excited nitrogen ion would also change. This implies that the effect of molecular axis orientation with respect to the incident ion beam also changes as energy increases.§ CONCLUSIONS The striking similarities of processes realized in the He^++N_2 and He^++O_2 collision systems are reported. In collisions of helium ions with oxygen and nitrogen molecules the obtained intense atomic and ionic lines are related largely to charge-exchange processes <cit.>. In both cases, the exited dissociative products (oxygen ion/atom, nitrogen ion/atom) are form through the decay of the highly exited molecule state of the O_2^+^∗ and N_2^+^∗, respectively.We observe the similar shape of energy dependence, almost the same value of excitation cross sections, as well as common excitation mechanism for two dominant dissociative nitrogen atomic N^∗(3s ^4P) and nitrogen ionic N^+^∗(2p^3 ^3D) products in collision of He^++N_2. We found that these excited products can be formed by removal of the 2sσ _g electron in the charge-exchange channel. The notable similarities of energy dependences and absolute value of cross sections for the nitrogen ionic lines NII (567.9 nm) and NII (500.5 nm ) in He^++N_2 collisions are observed. 
The strong correlation between the excitation of the helium atomic HeI (388.9 nm, transition 3d ^3F⟶ 3p ^3D) and the nitrogen ionic NII (500.1-500.5 nm, transition 3d ^3F→3p ^3D) lines are revealed for the He^++N_2 collision system. The most intense oxygen ionic OII (83.4 nm) and atomic OI (99.0 nm) lines and week (about 10^-19 cm^2), double charged oxygen ion OIII (70.6 nm) lines are observed and identified in the collision of He^+ with O_2 molecules. In this case, the molecular dissociation causing the excited atomic and/or ionic fragments is due to the decay of a highly excited intermediate molecular state of the inner shell, where a collision-induced vacancy arises. The highly excited molecular states (^2Σ _g^- and ^4Σ _g^-) in He^++O_2 collisions are assigned and their leading role in explanation of mechanism for the intense oxygen ionic OII (83.4 nm) and atomic OI (99.0 nm) lines are explained. Energy dependence of the degree of linear polarization for the He atomic lines HeI (388.9 nm and 587.6 nm) and nitrogen ionic line NII (500.5 nm) are measured. Maximum negative (20%) and minimum (5%) positive value of degree of the linear polarization are revealed for dissociative products of the helium atom (388.9 nm) and helium atom and nitrogen ion (587.6 nm; 500.1 nm), respectively, at the same collision energy E=2.5 keV. The expression for the first Stock's parameter has been derived [35] and formulas for the degree of linear polarization of radiation emitted from the excited helium He^∗ (3P) atom, from the excited nitrogen ion N^+^∗ (^3F) and excited helium atom He^∗(^3D) were written. On the basis of polarization measurements, the cross section σ _0 and σ _1, related to the relative population of the helium He^∗ (3P) magnetic sublevels with m_L=0 and m_L=± 1, respectively, are calculated and the ratio σ _1/σ _0≈ 15 are revealed. Such a great value of ratio (σ _1/σ _0≈15) indicates: i. the sublevels of the excited helium ^3P state are preferable populated; ii. the electron density formed in the excited helium He^∗ atom during the collision is oriented perpendicular with respect to the incident beam direction. The most of the experimental data obtained for the He^++N_2 and He^++O_2 collision system are qualitatively interpreted in terms of the quasi-diatomic approximation.§ APPENDIX Below we derived the simple formula for the degree of polarization of radiation due to the atomic particle collision process. Presented calculations are based on the pioneering work of Macek and Jaecks <cit.>. Below we use the following quantum numbers: L, M_L, and J , are the electronic orbital, magnetic, and full electronic momentum quantum numbers, respectively, and S, I, and F are the electronic spin, nuclear spin, and full atomic momentum (electronic + nuclear) quantum numbers, respectively.In polarization experiments number of the photon and projectile coincidencesdN_c depends upon the position of the photon and particle detectors. The incoming beam axis is usually taken to be the z-axis, and the x-z plane is normal to the z-axis. The angular coordinates of the particle detector relative to this coordinates system are denoted by θ and φ, and the coordinates of the photon detector by ϑ and ϕ. The number dN_c is proportional to the linearly polarized light intensity, which is oriented at an angle β with respect to the z-axis, provided projectile particle is scattered in (θ,φ ) direction and is defined asdN_c = { A_00cos ^2β +A_11sin ^2β +( A_11-A_00) cos ^2βcos ^2θ +. 
√(2) A_01[ sin 2ϑcos ^2βcos (φ -ϕ )+sin 2βsinϑsin (φ -ϕ )] - .A_11[ ( cos 2βcos 2ϕ) cos 2(φ -ϕ )+sin 2βcosϕsin 2(φ -ϕ )] } dΩ dϖwhere dΩ and dϖ are the solid angles covered by the particle and photon detector, respectively. The coefficients A_ij are determined as: A_qq^^'=∑_JFJ^^'F^^'MLM^^'L^^'U( qq^^'M_LM_L^^'JFJ^^'F^^'LL_0) ⟨ a__MLa_M_L^^'⟩∫_0^Δ tdtexp[ -(γ +iω _JFJ^^'F^^'] .where U( qq^^'M_LM_L^^'JF J^^'F^^'LL_0)=(2J+1)(2J^^'+1)(2F+1)(2F^^'+1)(2L+1)/(2S+1)(2I+1) (-1)^L_0+q-M_L∑_χ =0,1,2(2χ +1)(-1)^χ× {L L χ J^^' J S} ^2{J^^' J χ F^^' F I} ^2{L L χ 1 1 L_0}(L L χ -M_L^^' M_L ν) (1 1 χ -q q^^' -ν)In Eq. (<ref>) ( )and {} denote the 3-jand 6-jsymbols, respectively, q and ω are the polarization vector component of the photon and frequency of the emitted light, respectevely, and 1/γ is the mean life of the excited atom. The excitation amplitudes contain all information related to the collision dynamics and they depend on angle θ only. Time integration in (<ref>) involves detection time interval 0-Δ t.In our experiment we do not fix scattered particles. This means that expression (<ref>) should be integrated over angles θ and φ. Furthermore, in our experimental condition the photon detector is installed in the direction perpendicular to the primary ion beam, i.e. ϕ =90^0. As to analyzer angle β, it was taken equal to 0^0 and 90 ^0. Therefore, only the following terms will contribute to the detected intensity ℑ ℑ∼( A_00cos ^2β +A_11sin ^2β) dϖ . In case when radiation from the helium atom is observed, nuclear spin I=0, so we have no hyperfine structure. Consequently, we can change ω _JFJ^^'F^^' is replaced by ω _JJ^^' . Further, since the mean life of the excited atom 1/γ and 1/ω _JJ^^' are much shorter then commonly employed resolution time, the time integral becomes: ∫_0^∞dtexp[ -(γ +iω _JJ^^'] = 1/γ +iω _JJ^^'≈{ 0, J≠ J^^' 1/γ, J=J^^'. . Now, (<ref>)–(<ref>) allow to find an exact value for the first Stocks parameter P=ℑ(β =0^0)-ℑ(β =90^0)/ℑ (β =0^0)+ℑ(β =90^0). Let's determine degree of polarization for He atom line (388.9 nm, transition ^3p→ ^3s ). For this case when q=q^^'=0, U( 00M_LM_L^^'JJJJ 10)= (2J+1)^4(-1)^-M_L∑_χ =0,1,2(2χ +1)(-1)^χ{1 1 χ J J 1} ^2{J J χ J J 0} ^2× {1 1 χ 1 1 0}(1 1 χ -M_L M_L 0) (1 1 χ 0 0 0) when q=q^^'=1 U( 11M_LM_L^^'JJJJ 10)= (2J+1)^4(-1)^-M_L∑_χ =0,1,2(2χ +1)(-1)^χ{1 1 χ J J 1} ^2{J J χ J J 0} ^2× {1 1 χ 1 1 0}(1 1 χ -M_L M_L 0) (1 1 χ -1 1 0)Finally, we obtain the following expression for the degree of polarization for the mentioned helium emission line: P=15(σ _0-σ _1)/41σ _0+67σ _1, where σ _0 and σ _1 are the cross-sections of population of magnetic sublevels with m_L=0 and m_L=± 1, respectively.99 1M M. R. Torr and D. G. Torr, The role of metastable species in the thermosphere. Rev. Geophysics. 20, 91 (1982).2M T. E. Cravens et al., Energetic ion precipitation at Titan. Geophysics. Res. Lett. 35, L03103 (2008).3M Pararicas, C., Mauk, b.H., Ratliff, J.M., Cohen, C., and Johnson, R.E., Geophysics. Res. Lett., 29, 18, (2002).4M H. Luna et al., Astrophys J. 628, 1086 (2005).5M R. K. Janev Atomic and Molecular Processes in Fusion Edge Plasmas, Plenum Press, New York, 1995.6M S. E. Huber, A. Mauracher, D. S.ub, I. Sukuba, J. Urban, D. Boyrodin, and M. Probst, J. Chem. Phys. 150, 024306 (2019).10m R. Ya. Kezerashvili and G. L. Matloff, Adv. Space Res.44, 859 (2009).11m L. Campbell, M.J. Brunger, P.J. Teubner, D.C. Cartwright, J. Electron Spec. and Related Phenomena 144, 119 (2005).12m K. J. Remick, R. K. Smith, D. Lummerzheim, J. Atmospheric, Solar-Terrest. 
Phys. 63, 295 (2001).13m A. Shinsuke, N. Ebizuka, et al. Astrophys. J. 618, L141 (2005).14m P. Jenniskens, C. O. Laux, and E. L. Schalle, Astrobiology4, 109 (2004).15m D. E. Shemansky and X. Liu, J. Geophys. Res., 110, A07307 (2005).16m A.W. Harrison, and A. Vallance Jones, J. Atmospheric Terrestrial Phys. 13, 291 (1959).17m M. Hollstein, D. C . Lorents, J. R. Peterson, and J. R. She Ridan, Can. J. Chem. 47, 1858 (1969).18m D. E. Shemansky and A. L. Broadfoot, J. Quant. Spectrosc. Radiat. Transfer, 11, 1385 (1971).19m L.G. Piper, B.D. Green, W.A. Lumbergand, S.J. Wolnik, J. Phys. B: At. Mol. Phys. 19, 3327 (1986).20m R.L. Gattinger and A. Vallance Jones, Can. J. Phys. 52, 2343 (1974 ); Can. J. Phys. 59, 480 (1981).21m L. Campbell, D. C. Cartwright, M. J. Brunger, and P. J. O. Teubner, J. Geophys. Res. 111, A09317 (2006).22m O. Yenen, D. H. Jaecks, and R. I. Martin, Phys. Rev. A35, 1517 (1987).23m P. Baltzer, W. Wannberg, L. Karlsson, et al., Phys. Rev. A45, 4374 (1992).24m A. V. Golovin, F. Heiser, C. J. K. Quayle, et al., Phys. Rev. Lett. 79, 4554 (1997).25m P Erman, A. Karawajczyk, E. Rachlew-Kallne, et al., J. Phys. B29, 5785 (1996).26m P. Erman, A. Karawajczyk, E. Rachlew-Kallne, et al., Phys. Scr. 49, 308 (1994).27m D. M. P. Holland, D. A. Shaw, S. M. McSweeney, et al., Chem. Phys. 173, 315 (1993).28m M. Kato, K. Kameta, T. Odagiri, et al., J. Phys. B 35 , 4383 (2002).29m M. Kato, T. Odagiri, K. Kameta, et al., J. Phys. B 36 , 3541 (2003).30m M. Ukai, S. Machida, K. Kameta, et al., Phys. Rev. Lett.74, 239 (1995).31m A. A. Cafolla, T. Reddish, and J. Comer, J. Phys. B 22 , L273 (1989).32m H. Liebel, A. Ehresmann, H. Schmoranzer, et al., J. Phys. B35, 895 (2002).33m D. Dowek, D. Dhuicq, J. Pommier, et al., Phys. Rev. A24, 2425 (1981).34m D. Dowek, D. Dhuicq, and M. Barat, Phys. Rev. A 28, 2838 (1983).35m J. H. Macek and D. H. Jaecks, Phys. Rev. A 4 2288, (1971).36m U. Fano and J. H. Macek, Rev. Mod. Phys. 45, 553 (1973).37m I. C. Malcolm, H. W. Dassen, and J.W. McConkey, J. Phys. B: Atom. Molec. Phys. 12, 1003 (1979).38m D. H. Jaecks, O. Yenen, M. Nataragan, and D. Mueller, Phys. Rev. Lett. 50 825 (1983).39m R. Hippler, M. Faust, R. Wolf, H. Kleinpoppen, and H. O. Lutz, Phys. Rev. A 31, 1399 (1985).40m R. Hippler, M .Faust, R.Wolf, H. Kleinpoppen, and H. O. Lutz, Phys. Rev. A 36, 4644 (1987).41m O. Yenen and D. H. Jaecks, Phys. Rev. A 32, 836, (1985).42m O. Yenen, D. H. Jaecks, and P. J. Martin, Phys. Rev. A35, 1517 (1987).43m C. Richter, D. Dowk, and J. C. Houver, J. Phys. B: At. Mol. Opt. Phys. 24, L213 (1991).44m R. Hippler, Phys. B At. Mol. Opt. Phys. 26, 1 (1993).45m B. Siegmann, R. Hippler, and H. O. Lutz, J. Phys. B: At. Mol. Opt. Phys. 31, L675 (1998).46m H. Tanuma, T. Hayakawa, C. Verzani, and H. Kano, H Watanabe, B. D. DePaola, and N Kobayashi, J. Phys. B: At. Mol. Opt. Phys. 33, 5091 (2000).47m H. Merabet, R. Bruch, S. Fulling, K. Bartschat, and A. L. Godunov, J. Phys. B: At. Mol. Opt. Phys. 36, 3383 (2003).48m M. R. Gochitashvili, R. V. Kvidzhinadze, N. R. Djaliashvili, and B.I.Kikiani, JTF 63 35. (1993).49m E. Stambulchik and Y. Maron, Phys. Rev. A, 65, 052726 (2002).50m J. R. Oppenheimer, Z. Phys. 43, 27 (1927).51m I. C. Percival and M. J. Seaton, Philos. Trans. R. Soc. London Ser. A 251,113 (1958).52m E. Takacs, E. S. Mheyer, J. D. Gillaspy, et al., Phys. Rev. A,54, Number 2, August (1996).53m D. W. O. Heddle and R. G. W. Keesing – Proc. Royal Soc. London. Series A, Math. and Phys. Sciences, 299, No. 1457 (Jun.14, 1967)212, (1967).54m A. L. Godunov, H. Merabet, J. 
H McGuire, R. Bruch, J. Hanni, and V. S. Schipakov, J. Phys. B: At. Mol. Opt. Phys. 34, 2575 (2001).55m A. L. Godunov, P. B. Ivanov, V. A. Schipakov, P. Moretto-Capelle, D. Bordenave-Montesquieu, and A. Bourdenave-Montesquieu, J. Phys. B: At. Mol. Opt. Phys. 33, 971 (2000).56m K. Blum, Density Matrix. Theory and Applications, New York, Plenum, 1981.61m J. Watson and R. J. Anderson, J. Chem. Phys. 66, 4025 (1977).62m R. J. Van Brunt and R. N. Zare, J. Chem. Phys. 48, 4304 (1968 ).63m A. R. Filippelli, F. A. Sharpton, and C. C. Lin, J. Chem. Phys. 76, 3597 (1982).64m R. J. Van Brunt and R. N. Zare, J. Chem. Phys. 48, Number 9, 1 May (1968).66m F. B. Yousif, B. G. Lindsay, F. R. Simpson, and C. I. Latimer, J, Phys. B; At. Mol. Phys. 20, 5079 (1987).67m D. Dowek, D. Dhuicq, V. Sidis, and M. Barat, Phys. Rev. A26, 746 (1982).68m D. P. De Bruijn, J. Neuteboom, and J. Loss, Chem. Phys.85, 233 (1984).69m I. Kuen, H. Story, and F. Howorka,Phys. Rev. A 28 119 (1983).70m M. R. Gochitashvili, V. A. Ankudinov, V. M. Lavrov, and B. I. Kikiani, Zh. Tech. Fiz. 49, 2338 (1979).71m M. R. Gochitashvili, R. V. Kvizhinadze, N. R. Jaliashvili, and B. I. Kikiani, 1993 Zh.72m D. H. Jaecks, O. Yenen, M. Natarajan, and D. Mueller, Phys. Rev. Lett. 50, 825 (1983).73m O. Yenen and D. H. Jaecks, Phys, Rev. A 32, 836 (1985).74m O. Yenen, D. H. Jaecks, and P. J. Marlin, 1987 Phys. Rev. A35, 1517 (1987).75m M, R. Gochitashvili, R. Ya. Kezerashvili, and R. A. Lomsadze, Phys. Rev. A 82, 022702 (2010)76m R. A. Lomsadze, M. R.Gochitashvili, R. Ya. Kezerashvili, N. O. Mosulishvili, and R. Phaneuf, Phys. Rev. A, 87, 042710 (2013).77m V. V. Skubenich, I. P. Zapesochni, Geomagnetizm and Aeronomia21, 481. (1981).78m W. R. Pendleton, R. R. O'Neil, J. Chem. Phys. 56, 6260 (1972).79m D. C. Cartwright, J. Chem. Phys. 58, 178 (1973).IJMP2021our R. A. Lomsadze, M. R. Gochitashvili, R. Ya. Kezerashviliy, and M. Schulz, Int. J. Mod. Phys. B 35, 2150104 (2021).80m K. C. Smyth, J. A. Schiavone, and R. Freund, J. Chem. Phys.59, 5225 (1973).81m H. Sambe and D. E. Ramaker, Chem. Phys. 107, 351 (1986).82m R. S. Freund, J. Chem. Phys. 54, 3125 (1971).83m Table of Molecules, National Institute of Standards and Technology, 2000.84m W. E. Lamb and T. H. Maiman, Phys. Rev. 105, 573 (1957).85m R. H. Hughes, R. B. Kay, and L. D. Weaver, Phys. Rev.129, 1630 (1963).86m A. R. Filippelli, F. A. Sharpton, C. C. Lin, and R. E. Murphy, J. Chem. Phys. 76, 3597 (1982).
Xiaoxiang Chai: Department of Mathematics, POSTECH, Pohang, Gyeongbuk, South Korea. [email protected], [email protected]

Xueyuan Wan: Mathematical Science Research Center, Chongqing University of Technology, Chongqing 400054, China. [email protected]

In odd dimensions, we prove a scalar curvature rigidity for parabolic convex polytopes in hyperbolic space, that is, polytopes enclosed by linear planes in the Poincaré upper half-space model and convex with respect to the conformally related flat metric. Our method is based on spinor techniques and relies on the recent smoothing constructions of Brendle-Wang. We also prove a Llarull type rigidity for bounded smooth parabolic convex domains and a dihedral rigidity for polytopal initial data sets with dominant energy conditions.

2020 Mathematics Subject Classification: 53C24, 52B11, 15A66.

Research of X. Chai has been partially supported by National Research Foundation of Korea grant No. 2022R1C1C1013511. Research of X. Wan is partially supported by the National Natural Science Foundation of China (Grant No. 12101093), the Natural Science Foundation of Chongqing (Grant No. CSTB2022NSCQ-JQX0008), and the Scientific Research Foundation of the Chongqing University of Technology.

Scalar curvature rigidity of parabolic convex polytopes in hyperbolic space
Xueyuan Wan
January 14, 2024
===========================================================================

§ INTRODUCTION

It is natural to seek a useful definition of scalar curvature for singular metric spaces, in particular, for manifolds with C^0 metrics. In <cit.>, Gromov proposed a comparison with polytopes to define a lower bound on the scalar curvature. Roughly speaking, a C^0 metric g has nonnegative scalar curvature at a point p if there does not exist a cube around p whose faces are strictly mean convex and whose dihedral angles are acute. This is now recognized as the Gromov dihedral rigidity conjecture and has become an interesting direction to explore in its own right.

The Euclidean version of the conjecture is as follows. Let P ⊂ℝ^n be a convex polytope and g be another metric on P of nonnegative scalar curvature. If the faces of P are mean convex, and the dihedral angles under the metric g are no greater than the dihedral angles under the flat metric, then the metric g is flat. This conjecture was confirmed by Li <cit.> via stable capillary minimal surfaces for certain types of polyhedra in dimension 3, and for cubes in dimensions up to seven. Spinorial techniques seem quite powerful in addressing the dihedral rigidity, for example, Wang-Xie-Yu <cit.>, Brendle-Wang <cit.>, <cit.>, Brendle-Chow <cit.>. In the realm of a negative lower scalar curvature bound, Gromov formulated a similar conjecture for parabolic cubes (see Definition <ref>) in the hyperbolic space. Based on the evaluation of mass integrals on exhausting polytopes in an asymptotically hyperbolic manifold (see <cit.>), the first named author and G. Wang <cit.> restated the Gromov dihedral rigidity conjecture in hyperbolic n-space to include more general polytopes, using the Poincaré half-space realization of the hyperbolic space, which we now recall.

The Poincaré half-space model of the hyperbolic n-space is given by the metric b = 1/(x^1)^2δ := 1/(x^1)^2 ((d x^1)^2 + ⋯ + (d x^n)^2) on ℝ_+^n = {(x^1, …, x^n) : x^1 > 0}. By convention, we have used δ to denote the flat metric on ℝ^n_+. The umbilic hypersurfaces in the upper half-space model are either linear hyperplanes or parts of spheres. We are mostly concerned with the scalar curvature rigidity of polytopes enclosed by linear hyperplanes in the Poincaré half-space model.
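As a quick consistency check, the model value of the scalar curvature can be read off from the standard conformal transformation law. Writing b = e^{2f}δ with f = -log x^1, one has |∇ f|_δ^2 = Δ_δ f = 1/(x^1)^2, and R_b = e^{-2f}( R_δ - 2(n-1)Δ_δ f - (n-1)(n-2)|∇ f|_δ^2 ) = (x^1)^2( 0 - 2(n-1)/(x^1)^2 - (n-1)(n-2)/(x^1)^2 ) = -n(n-1), which is precisely the scalar curvature lower bound imposed throughout the paper.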
Polytopes enclosed by linear hyperplanes are also polytopes in ℝ^n_+ in the flat metric. We give the following definition.

In the Poincaré upper half-space model, we say that a polytope P is a parabolic polytope if P is enclosed by linear planes. We say that a subset U is parabolic convex if U is convex with respect to the conformally related flat metric.

We recall here a well-known formula that describes the relationship between the Levi-Civita connections of conformally related metrics. If two metrics g̃ and g are conformally related via g̃=φ^2 g, where φ >0, then the two Levi-Civita connections ∇̃ and ∇ satisfy ∇̃_X Y-∇_XY=ω(X)Y+ω(Y)X-⟨ X,Y⟩_g V, where V is the dual vector of ω, i.e., ⟨ V,X⟩_g=ω(X), and ω=d(logφ).

Let a⃗ be a vector in ℝ^n of unit length with respect to the Euclidean metric δ. Then x^1a⃗ is a unit normal vector to the hyperplane {x·a⃗=c} with respect to the hyperbolic metric b. By (<ref>), ∇^b_X(x^1a⃗)=-dx^1(a⃗)X for any tangent vector X of the plane {x·a⃗=c}, where ∇^b denotes the Levi-Civita connection of the hyperbolic space (ℝ^n_+,b). Hence the mean curvature of the hypersurface {x·a⃗=c} with respect to x^1a⃗ is given by H_b=∑_a=1^n-1⟨ e_a,∇^b_e_a(x^1a⃗)⟩_b=-(n-1)dx^1(a⃗)≥ -(n-1).

Let Ω be a compact, convex polytope in (ℝ^n_+, δ) with non-empty interior. We may write Ω = ∩_i ∈ I{u_i ≤ 0}, where u_i, i ∈ I, is a finite collection of non-constant linear functions defined on ℝ^n_+. For each i ∈ I, we denote by N_i ∈𝕊^n - 1 the outward-pointing unit normal vector to the half-space {u_i ≤ 0} with respect to the Euclidean metric. Let g be another Riemannian metric which is defined on an open set containing Ω. For each i ∈ I, we denote by ν_i the outward unit normal vector to the half-space {u_i ≤ 0} with respect to the metric g. For adjacent faces F_i, F_j ⊂Ω, we call the angle γ_i j∈ (0, π) with cosγ_i j = - ⟨ν_i, ν_j ⟩ the dihedral angle. We add a bar to γ to indicate that the angle is computed with respect to the flat metric.

The hyperbolic version of the dihedral rigidity conjecture (see <cit.>) can now be stated as follows. Let n ≥ 3 and Ω be a compact, convex polytope in (ℝ^n_+, δ) with non-empty interior. Let g be another Riemannian metric on Ω. If the scalar curvature satisfies R_g ≥ - n (n - 1), each face F_i has mean curvature H_i ≥ - (n - 1) ⟨∂/∂ x^1, N_i ⟩_δ, and the dihedral angle γ_i j formed by any two adjacent faces F_i, F_j satisfies γ_i j≤γ̅_ij, then Ω is hyperbolic with umbilic faces whose mean curvature on the face F_i is given by H_i = - (n - 1) ⟨∂/∂ x^1, N_i ⟩_δ and with dihedral angles given by γ_i j = γ̅_ij.

We describe a special type of polytopes in hyperbolic n-space. We say that P ⊂ (ℝ_+^n, b) has a top (base) face if P ⊂{x^1 ≤ c} (P ⊂{x^1 ≥ c}) and one face of P lies in {x^1 = c} for some c > 0. We say that P ⊂ (ℝ^n_+, b) is a parabolic prism if P = {x ∈ (ℝ^n_+, b) : x^1 ∈ [a_1, a_2], (x^2, …, x^n) ∈ P' }, where 0 < a_1 < a_2 and P' is a polytope in ℝ^n - 1. If P' is a cube in ℝ^n - 1, we say that P is a parabolic cube. There are other cubes and prisms enclosed by linear hyperplanes; we reserve the names parabolic cubes and prisms for those with a top or base face lying on {x^1=c} for some c>0.

Li <cit.> showed the dihedral rigidity conjecture for parabolic cubes up to dimension seven based on the stability of free boundary constant mean curvature hypersurfaces. Using capillary boundary constant mean curvature surfaces, the first named author and G.
Wang <cit.> proved three-dimensional dihedral rigidity for certain prisms similar to the ones considered in <cit.> and for tetrahedra with a base face or a top face, which generalizes Li's approach <cit.> and removes several restrictions of <cit.>. Using spacetime harmonic functions, Tsang <cit.> studied the dihedral rigidity for three-dimensional cubical initial data sets, which include parabolic cubes as special cases. Wang-Xie <cit.> used spinor methods and proved the dihedral rigidity for polyhedral domains in hyperbolic space which are radially convex and have a top face, in particular, for parabolic cubes and prisms.

In this paper, we use spinor methods to establish scalar curvature rigidity results in the upper half-space model of the hyperbolic space. We develop a new connection and Dirac operator acting on the space of 2^[n/2]-tuples of spinors, and the index theory for the new Dirac operator on smooth domains follows directly from <cit.>.

Before stating our main result for parabolic convex polytopes, we recall two assumptions, the Matching Angle Hypothesis and the Acute Angle Hypothesis, coming from <cit.> and <cit.>, respectively.

We say that (Ω, g) in Conjecture <ref> satisfies the Matching Angle Hypothesis if cosγ_i j = - ⟨ N_i, N_j ⟩ for all pairs of adjacent faces F_i and F_j.

We say that (Ω, g) in Conjecture <ref> satisfies the Acute Angle Hypothesis if 0 < γ̅_i j≤π/2 for all pairs of adjacent faces F_i and F_j.

The works <cit.> developed an index theory for Dirac operators for manifolds with corners, while Brendle-Wang <cit.> rely on an index theory of Dirac operators on smooth domains instead.

Conjecture <ref> holds if (Ω, g) is odd-dimensional and satisfies either the Matching Angle Hypothesis or the Acute Angle Hypothesis.

A common drawback of these works <cit.> is that the model requires at least one top or base face. Furthermore, we would like to remark that the radial convexity in Wang-Xie <cit.> is a strong condition that requires the side faces of a polyhedral domain to have nonnegative second fundamental form. We do not impose these conditions.

As a byproduct, we also establish the following boundary analog of Llarull's theorem (see <cit.>) for parabolic convex domains in hyperbolic space. The theorem is in fact easier than the dihedral rigidity conjecture. Suppose that n≥ 3 is an odd integer, and Ω is a compact, strictly convex smooth domain in (ℝ^n_+,δ). Let b be the hyperbolic metric defined in (<ref>). Let g be a Riemannian metric on Ω satisfying:
* The scalar curvature R ≥ -n(n-1);
* The mean curvature on the boundary H ≥ H_b;
* The induced metrics σ:=g|_∂Ω≥σ̅:=b|_∂Ω.
Then (Ω, g) is hyperbolic and σ=σ̅.

This theorem answers a question of Gromov (see <cit.>; On Non-spin Manifolds and on σ<0) concerning the scalar curvature rigidity of geodesic balls in hyperbolic space for odd-dimensional spin manifolds. See also a result of the first named author with G. Wang <cit.> for parabolic convex rotationally symmetric sets in three-dimensional hyperbolic space.

Our methods apply to the case of an initial data set as well. We say that (Ω, g, q) is an initial data set if q is a symmetric 2-tensor on (Ω, g). We define two quantities, the energy density μ and the current density J, by 2 μ= R_g + (tr_g q)^2 - |q|_g^2, J = div_g q - d (tr_g q). We focus on the case when Ω is a polytope. We say that (Ω, g, q) satisfies the dominant energy condition if μ≥ |J| and the tilted dominant energy condition on the face F_i if H_i + cosθ_i tr_F_i q ≥sinθ_i |q (ν_i, ·)^⊤ |, where θ_i ∈ [0, π] is given by cosθ_i = ⟨∂/∂ x^1, N_i ⟩.
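As a simple illustration of these conditions, consider the model case where g is the hyperbolic metric b and q = g; such an initial data set arises from the umbilic hyperboloid slice of Minkowski space, for a suitable choice of unit normal. Then tr_g q = n, |q|_g^2 = n and R_g = -n(n-1), so 2μ = -n(n-1)+n^2-n = 0 and J = div_g q - d(tr_g q) = 0, and the interior condition μ≥ |J| holds with equality. Along a face F_i with Euclidean unit normal N_i, the hyperplane computation above gives H_i = -(n-1)cosθ_i, while tr_F_i q = n-1 and q(ν_i,·)^⊤ = 0, so the tilted condition on F_i also holds with equality. The initial data rigidity theorem below can thus be read as a statement about this borderline case.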
The condition (<ref>) was introduced by the first named author in <cit.>.In the case of initial data sets, we have the following rigidity result. Let n ≥ 3 be an odd number and Ω be a convex polytope in (ℝ^n_+,δ), g be a Riemannian metric on Ω and q be a symmetric 2-tensor (g and q are defined in an open set containing Ω). Assume that the n-dimensional polytopal initial data set (Ω, g, q) satisfies either the Matching Angle Hypothesis or the Acute Angle Hypothesis, and the dominant energy conditions (<ref>) in the interior of Ω and (<ref>) along every face F_i, then the equalities in (<ref>), (<ref>) are achieved and γ_ij=γ̅_ij along every pair F_i,F_j of faces. It is expected that a polytopal initial data set that satisfies the dominant energy conditions (<ref>), (<ref>) and the dihedral angle comparison γ_i j≤γ̅_i j can be locally embedded as a spacelike slice in Minkowski spacetime with the second fundamental form being q. However, the rigidity seems quite difficult to do due to the nature of our connection. We do have that the Gauss equation and Codazzi equation are satisfied in n-1 directions (see Proposition <ref>). We will address the further rigidity and scalar curvature rigidity of polytope in warped products in a future work. Our article is organized as follows: In Section <ref>, we develop a new connection and Dirac operator, we show that a boundary value problem involving the Dirac operator has Fredholm index at least 1. In Section <ref>, we review the smoothing constructions of Brendle and Wang. In Section <ref>, we finish the proof of Theorem <ref>. In Section <ref>, we prove the Llarull type rigidity Theorem <ref> for parabolic convex domains. In the last section, we discuss the dihedral rigidity for polytopal initial data sets.§ A BOUNDARY VALUE PROBLEM FOR DIRAC OPERATORS In this section, inspired by <cit.>, we define a Dirac operator acting on the space of m-tuples of spinors and consider a boundary value problem of the Dirac operator. In Section <ref>, we briefly recall the smoothing construction developed by Brendle and Wang which allows us to solve the Dirac equation on polytopes by taking limits.§.§ Dirac operators on a smooth domain In this subsection, we define a Dirac operator acting on the space of m-tuples of spinors.Let us fix an odd integer n≥ 3, We denote by Δ_n the space of spinors on the Euclidean space (R^n,δ), and denote by m=2^[n/2] the dimension of Δ_n. We fix an orthonormal basis {s̱_1,…,s̱_m} of Δ_n. For any real vector N, we define ω_Nαβ:=⟨ı c_δ(N)s̱_α,s̱_β⟩_δfor any 1≤α,β≤ m, where c_δ(·) denotes the Clifford action associated with the metric δ. Let Ω be a compact domain in R^n_+ with smooth boundary Σ=Σ. Let g be a Riemannian metric on Ω.We denote by S the spinor bundle over Ω and define the space of m-tuples of spinorsE=S⊕⋯⊕S_mtimes .For any real vector N, we define the following endomorphism ω_N:=(ω_Nαβ)∈End(E)by (ω_N s)_α:=∑_β=1^mω_Nαβs_β for any s=(s_1,⋯,s_m)∈E. Note that the Riemannian metric g induces a Hermitian inner product ⟨·,·⟩ on the spinor bundle S, so there is a natural inner product on the direct sum bundle E, we also denote it by ⟨·,·⟩. More precisely, for any s=(s_1,⋯,s_m) and t=(t_1,⋯,t_m) in E|_x (the fiber of E at x∈Ω), we define ⟨ s,t⟩:=∑_α=1^m ⟨ s_α,t_α⟩.By a direct calculation, we have the following proposition. 
If |N|^2_δ=1, then * (ı c_δ(N))^2 is an involution, i.e., (ı c_δ(N))^2=id_Δ_n.* ı c_δ(N) is self-adjoint, i.e., ⟨ı c_δ(N)·,·⟩_δ=⟨·, ı c_δ(N)·⟩_δ.* ω_N is an involution, i.e., ω^2_N=id_E.* ω_N is self-adjoint, i.e., ⟨ω_N·,·⟩=⟨·,ω_N ·⟩.The (1) and (2) are obvious by the definition of Clifford action.For any s=(s_1,…,s_m)∈E, we have that(ω^2_Ns)_α =∑_β,γ=1^mω_Nαβω_Nβγs_γ=∑_β,γ=1^m⟨ı c_δ(N)s̱_α,s̱_β⟩_δ⟨ı c_δ(N)s̱_β,s̱_γ⟩_δ s_γ=∑_β,γ=1^m⟨ı c_δ(N)s̱_α,s̱_β⟩_δ⟨s̱_β, ı c_δ(N)s̱_γ⟩_δ s_γ=∑_γ=1^m⟨ı c_δ(N)s̱_α,ı c_δ(N)s̱_γ⟩_δ s_γ=∑_γ=1^m⟨s̱_α,s̱_γ⟩_δ s_γ=∑_γ=1^mδ_αγs_γ=s_α.This proves (3). For any s,t∈E, we have that⟨ω_Ns,t⟩ =∑_α,β=1^m⟨ı c_δ(N)s̱_α,s̱_β⟩_δ⟨ s_β,t_α⟩=∑_α,β=1^m⟨ s_β,⟨s̱_β,ı c_δ(N)s̱_α⟩_δ t_α⟩=∑_α,β=1^m⟨ s_β,⟨ı c_δ(N)s̱_β,s̱_α⟩_δ t_α⟩=⟨ s,ω_Nt⟩,which proves (4). Letbe the spin connection on the spinor bundle S, the associated Dirac operator is D=∑_i=1^n c(e_i)_e_iwhere (e_1,…,e_n) is a local orthonormal frame of the tangent bundle TΩ of (Ω,g).Now we fix a unit vector of (R^n,δ),N_0=/ x^1.Setω_N_0=(⟨ı c_δ(N_0)(s̱_α),s̱_β⟩_δ)∈End(E).By Proposition <ref>, ω_N_0 is an involution and is self-adjoint.We define the following connection on the bundle E by _Xs=_X s+ı/2ω_N_0c(X)sfor any vector X and any local smooth section s=(s_1,…,s_m) of E. Here c(X)s:=(c(X)s_1,⋯,c(X)s_m). The associated Dirac operator is given byD =∑_i=1^n c(e_i)_e_i=∑_i=1^nc(e_i)(_e_i+ı/2ω_N_0c(e_i))=D-nı/2ω_N_0.Let N:Σ→ S^n-1 be the outward-pointing unit normal vector to Σ with respect to the Euclidean metric δ. We define the following map χ=ω_N c(ıν): E|_Σ→E|_Σ,where ν denotes the outward-pointing unit normal vector field of Σ with respect to g.The boundary Dirac operator is defined by D^Σ=c(ν)D+_ν+1/2H=∑_a=1^n-1c(ν)c(e_a)_e_a+1/2H,where {e_a}_1≤ a≤ n-1 denotes the local orthonormal frame of TΣ, andH:=∑_a=1^n-1⟨ e_a,_e_aν⟩_g denotes the mean curvature of the boundary Σ. For the operator χ, we have the following proposition.We have * χ is an involution.* χ is self-adjoint.* {ı c(X),χ}=2ω_N⟨ X,ν⟩.* {D^Σ,χ}=ı c((ω_N)^⊤), where ω_N:=∑_i=1^ndω_N(e_i)e_i=∑_i=1^nω_dN(e_i)e_idenotes the gradient vector field of ω_N (note that ω_N is not equal to ω_ N), andc((ω_N)^⊤):=∑_a=1^n-1(dω_N(e_a))c(e_a)=∑_a=1^n-1ω_dN(e_a)c(e_a).Here {A,B}=A∘ B+B∘ A for any two operators A,B. Since (ı c(ν))^2=id, one hasχ^2=ω_Nı c(ν)ω_Nı c(ν)=ω_N^2(ı c(ν))^2=idwhich gives (1). Since both ω_N andı c(ν) areself-adjoint, χ=ω_Nı c(ν) is self-adjoint. This shows (2). By the definition of χ, one has{c(X),χ} =c(X)χ+χ c(X)=ıω_N(c(X) c(ν)+ c(ν)c(X))=-2ıω_N⟨ X,ν⟩.This gives (3). Now we show (4) by direct calculations that{D^Σ,χ} =[D^Σ,ω_N]ı c(ν)+ω_N{D^Σ,ı c(ν)},where [D^Σ,ω_N]:=D^Σω_N-ω_ND^Σ. Note that [D^Σ,ω_N] =[c(ν)D+_ν,ω_N]=c(ν)c(ω_N)+dω_N(ν)=-c(ω_N)c(ν)-dω_N(ν)and {D^Σ,ı c(ν)}=0, so {D^Σ,χ} =(-c(ω_N)c(ν)-dω_N(ν))ı c(ν)=-c((ω_N)^⊤)c(ν)ı c(ν)=ı c((ω_N)^⊤).The proof is complete. Now we recall a definition from linear algebra, see also <cit.>.Let V and W be finite-dimensional vector spaces of the same dimension, each equipped with an inner product. The trace norm of a linear map L: V → W is defined by L_tr=sup _Q tr(Q L), where the supremum is taken over all linear isometries Q: W → V. Equivalently, L_tr can be characterized as the sum of the singular values of L.Next, we define the following operator B=ı c((ω_N)^⊤):E|_Σ→E|_Σ.We have the following proposition for the operator B.We have * B is self-adjoint,* χ and B commute,* |⟨Bs,s⟩|≤dN_tr|s|^2. From (<ref>), it follows thatB=∑_a=1^n-1ω_dN(e_a)c(ı e_a),and it is self-adjoint. This shows (1). 
We have thatχB =ω_Nı c(ν)ı c((ω_N)^⊤)=-ω_Nc(ν)c((ω_N)^⊤)On the other hand, Bχ =ı c((ω_N)^⊤)ω_Nı c(ν)=-c((ω_N)^⊤)ω_Nc(ν)=ω_Nc((ω_N)^⊤)c(ν)=-ω_N c(ν)c((ω_N)^⊤).Hence χB=Bχ. This shows (2) that χ and B are commutative.It remains to show (3). We fix a point x ∈Σ. Let λ_1, …, λ_n-1≥ 0 denote the singular values of the differential d N: T_x Σ→ T_N(x) S^n-1. We can find an orthonormal basis {e_1, …, e_n-1} of T_x Σ and an orthonormal basis {Ê_1, …, Ê_n-1} of T_N(x) S^n-1 such that d N(e_a)= λ_a Ê_a for each a=1, …, n-1. By (<ref>), one has|⟨Bs,s⟩|=|∑_a=1^n-1λ_a⟨ω_Ê_ac(ı e_a)s,s⟩|≤ (∑_a=1^n-1λ_a)|s|^2=dN_tr|s|^2.Similar to <cit.> and <cit.>, we obtainSuppose that s=(s_1,⋯,s_m)∈E is an m-tuple of spinors satisfying the boundary conditionχ s=s on Σ, then ∫_Ω(-|Ds|^2+|s|^2+1/4(R+n(n-1))|s|^2)≤-1/2∫_Σ (H+(n-1)dx^1(N)-dN_tr)|s|^2, where R denotes the scalar curvature of (Ω,g).Here we omit the volume elements of Σ and Ω with respect to the metric g for simplicity. For any smooth section s=(s_1,…,s_m) of E, one has∫_Ω (-|Ds|^2+| s|^2+R/4|s|^2) =∫_Σ⟨ (c(ν)D+_ν)s,s⟩=-1/2∫_Σ H|s|^2+∫_Σ⟨D^Σ s,s⟩where the boundary Dirac operator D^Σ is defined by (<ref>). Using the divergence theorem, we have∫_Ω|Ds-nı/2ω_N_0s|^2 =∫_Ω( |Ds|^2+n^2/4|s|^2+nı/2(⟨Ds,ω_N_0s⟩-⟨ω_N_0s,Ds⟩))=∫_Ω(|Ds|^2+n^2/4|s|^2)+nı/2∫_Σ⟨ω_N_0c(ν)s,s ⟩and ∑_i=1^n∫_Ω|_e_is+ı/2ω_N_0c(e_i)s|^2=∑_i=1^n∫_Ω(|_e_is|^2+1/4|s|^2 -ı/2(⟨_e_is,ω_N_0c(e_i)s⟩-⟨ω_N_0c(e_i)s,_e_is⟩))=∫_Ω(| s|^2+n/4|s|^2)-ı/2∫_Σ⟨ s,ω_N_0c(ν)s⟩=∫_Ω( | s|^2+n/4|s|^2)+ı/2∫_Σ⟨ω_N_0c(ν) s,s⟩.where the second equality is by the divergence theorem. From (<ref>) and (<ref>), we obtain ∫_Ω(-|Ds|^2+|s|^2+1/4(R+n(n-1))|s|^2)=-1/2∫_Σ (H|s|^2+(n-1)ı⟨ω_N_0c(ν)s,s⟩)+∫_Σ⟨D^Σ s,s⟩. When restricted on Σ, χ s=s by assumption, one has by Proposition <ref> (4) ⟨D^Σ s,s⟩ =⟨D^Σχ s,s⟩=⟨ -χD^Σ s,s⟩+⟨ı c((ω_N)^⊤)s,s⟩which follows that ⟨D^Σ s,s⟩=1/2⟨ı c((ω_N)^⊤)s,s ⟩=1/2⟨Bs,s⟩. On the other hand on Σ, one has by Proposition <ref> (3)ı⟨ω_N_0c(ν)s,s⟩ =ı⟨ω_N_0c(ν)χ s,s⟩=-ı⟨ω_N_0χ c(ν)s,s ⟩+ı⟨ω_N_0(-2ıω_N)s,s⟩= -⟨ω_N_0ω_N s,s⟩)+2⟨ω_N_0ω_N s,s⟩=⟨ω_N_0ω_N s,s⟩,which follows that ı⟨ω_N_0c(ν)s,s⟩=⟨ω_N_0ω_N s,s⟩.Note that ı⟨ω_N_0c(ν)s,s⟩ is real, in fact,øı⟨ω_N_0c(ν)s,s⟩ =⟨ s,ω_N_0ı c(ν)s⟩=⟨ı c(ν)ω_N_0 s,s⟩=ı⟨ω_N_0c(ν)s,s⟩.By (<ref>) and (<ref>), one hası⟨ω_N_0c(ν)s,s⟩ =1/2(⟨ω_N_0ω_N s,s⟩+⟨ s,ω_N_0ω_N s⟩)=1/2⟨ (ω_N_0ω_N+ω_Nω_N_0)s,s⟩=⟨ N_0,N⟩_δ |s|^2=dx^1(N)|s|^2,where the last equality by the definition of N_0=/ x^1 and the third equality by the following (ω_N_0ω_N+ω_Nω_N_0)_αγ =∑_β=1^mω_N_0αβω_Nβγ+ω_Nαβω_N_0βγ=-⟨ c_δ(N_0)c_δ(N)s̱_α,s̱_γ⟩-⟨ c_δ(N)c_δ(N_0)s̱_α,s̱_γ⟩=2⟨ N_0,N⟩_δδ_αγ.Substituting (<ref>) and (<ref>) into (<ref>), we obtain ∫_Ω(-|Ds|^2+|s|^2+1/4(R+n(n-1))|s|^2)=-1/2∫_Σ( (H+(n-1)dx^1(N))|s|^2-⟨Bs,s⟩).Using Proposition <ref> (3) we obtain (<ref>). The proof is complete.§.§ A boundary value problem Now we set F=E|_Σ, and write F=F^+⊕F^-,where F^+=(id-χ) and F^-=(id+χ). The same as<cit.> and <cit.>, one has Assume that n is an odd integer. Suppose that Ω is a compact, convex domain in ℝ^n with smooth boundary ∂Ω=Σ. Let g be a Riemannian metric on Ω. Suppose that N: Σ→ S^n-1 is homotopic to the Gauss map of Σ with respect to the Euclidean metric δ. Then the operatorH^1(Ω, ℰ) → L^2(Ω, ℰ) ⊕ H^1/2(Σ, ℱ^-),s ↦(𝒟 s-nı/2ω_N_0 s, s-χ s)is a Fredholm operator.Assume that n is an odd integer. Suppose that Ω is a compact, convex domain in ℝ^n with smooth boundary ∂Ω=Σ. Let g be a Riemannian metric on Ω. Suppose that N: Σ→ S^n-1 is homotopic to the Gauss map of Σ with respect to the Euclidean metric δ. 
Then the operatorH^1(Ω, ℰ) → L^2(Ω, ℰ) ⊕ H^1/2(Σ, ℱ^-),s ↦(𝒟 s-nı/2ω_N_0 s, s-χ s)has Fredholm index at least 1.§ APPROXIMATING COMPACT CONVEX POLYTOPES BY SMOOTH DOMAINS This section assumes that n≥ 3 is an odd integer. Let Ω be a compact, convex polytope with a non-empty interior. We write Ω=⋂_i∈ I{u_i≤ 0}⊂R^n_+, where I is a finite set and u_i are linear functions on R^n. After eliminating redundant inequalities, we may assume that the following condition is satisfied. For each i_0∈ I, the set{u_i_0>0}∩⋂_i∈ I\{i_0}{u_i≤ 0} is non-empty.§.§ Smoothing procedures of BrendleLet g be a Riemannian metric defined on an open set containing Ω. For each i∈ I, u_i denotes the gradient of u_i with respect to the metric g, and | u_i| denotes its norm with respect to the metric g. Hence ν_i= u_i/| u_i| is the unit normal vector field, with respect to g, to the level sets of u_i. For each i∈ I, we denote by N_i∈ S^n-1 the outward-pointing unit normal vector to the halfspace {u_i≤ 0} with respect to the Euclidean metric δ.For each λ>0, we define Ω_λ={∑_i∈ Ie^λ u_i≤ 1}⊂Ω.If λ is sufficiently large, then Ω_λ is a compact, convex domain in R^n_+ with smooth boundary Σ_λ=Σ_λ. The sets Ω_λ form an increasing family of sets. Moreover,⋃_λ>λ_0Ω_λ=⋂_i∈ I{u_i<0}. The outward-pointing unit normal vector to the domain Ω_λ with respect to the metric g is given byν=∑_i∈ Ie^λ u_i u_i/|∑_i∈ Ie^λ u_i u_i|=∑_i∈ Ie^λ u_i| u_i|ν_i/|∑_i∈ Ie^λ u_i| u_i|ν_i|.We define a map N:Σ_λ→ S^n-1 by N=∑_i∈ Ie^λ u_i| u_i|N_i/|∑_i∈ Ie^λ u_i| u_i|N_i|.The map N:Σ_λ→ S^n-1 is homotopic to the Gauss map of Σ_λ with respect to the Euclidean metric δ, see <cit.>.For any point x ∈Σ_λ. Let π: T_x Ω→ T_x Ω denote the orthogonal projection to the orthogonal complement of ν, and let P: ℝ^n →ℝ^n denote the orthogonal projection to the orthogonal complement of N. Then H-d N_tr≥ V_λ, where the function V_λ: Σ_λ→ℝ is defined byV_λ=λ∑_i ∈ I e^λ u_i|∇ u_i|^2|π(ν_i)|^2/|∑_i ∈ I e^λ u_i| ∇ u_i|ν_i|-λ∑_i ∈ I e^λ u_i|∇ u_i|^2|π(ν_i)||P(N_i)|/|∑_i ∈ I e^λ u_i| ∇ u_i|N_i| +∑_i ∈ I e^λ u_i(Δ u_i-(D^2 u_i)(ν, ν))/|∑_i ∈ I e^λ u_i| ∇ u_i|ν_i|-∑_i ∈ I e^λ u_i|∇(|∇ u_i|)||P(N_i)|/|∑_i ∈ I e^λ u_i| ∇ u_i|N_i| . We defineW_λ=V_λ+(n-1)dx^1(N).Moreover, we denote by W_λ,-=max{-W_λ, 0} the negative part of W_λ. Following<cit.> and <cit.>, we have the following two results.For each i ∈ I, we assume that the hypersurface {u_i=0} satisfies the condition H+(n-1)dx^1(N)≥ 0 at each point in Ω∩{u_i=0}. Moreover, we assume that the Matching Angle Hypothesis is satisfied. Let us fix an exponent σ∈[1, 3/2), and let B_r(p) denote a Euclidean ball of radius r ≤ 1. If λ r is sufficiently large, then(r^σ+1-n∫_Σ_λ∩ B_r(p) W_λ,-^σ)^1/σ≤ C λ r e^-(λ r)^1/8+C(λ r)^1/8-7/8 σ+C(λ r)^1-3/2 σ. For each i ∈ I, we assume that the hypersurface {u_i=0} satisfies the condition H+(n-1)dx^1(N)≥ 0 at each point in Ω∩{u_i=0}. Moreover, we assume that the Matching Angle Hypothesis is satisfied. Let us fix an exponent σ∈[1, 3/2). Thensup _p ∈ℝ^nsup _r ≤ 1(r^σ+1-n∫_Σ_λ∩ B_r(p) W_λ,-^σ)^1/σ→ 0 as λ→∞.§.§ Smoothing procedures of Brendle-Wang We address another smoothing procedure due to Brendle-Wang <cit.> when the Acute Angle Hypothesis (see Definition <ref>) is satisfied on (Ω, g). We briefly describe Brendle-Wang's smoothing.Brendle-Wang <cit.> constructed an inductive smoothing Ω̂ := Ω_λ_0, γ⊂Ω of Ω parametrized by λ_0 > 1 and γ∈ (0, 12). The parameter λ_0 is assumed to be large and γ is assumed to be small. 
Fix a smooth even function η : ℝ→ℝ such that η (z) = |z| for |z| ≥ 1 and η” (z) ≥ 0 for |z| ≤ 1. We put λ_k = γ^- kλ_0 and Ω = ∩_i = 0^q {u_i ≤ 0}. The smoothing Ω̂ is given in the following. We define a collection of smooth functions û_0, ⋯, û_q on ℝ^n so that û_0 = u_0 andû_k = 12 (û_k - 1 + u_k + 1λ_kη (λ_k (û_k - 1 - u_k)))for 1 ≤ k ≤ q. We define Ω̂ = {û_q ≤ 0} and Σ̂ = {û_q = 0}. The boundary Σ̂ can be decomposed byΣ̂ = (∪_0 ≤ k ≤ q F_k) ∪ (∪_0 ≤ j < k ≤ q E_j, k) ∪ (∪_0 ≤ i < j < k ≤ q G_i, j, k) .The expression of these subsets F_k, E_j, k and G_i, j, k are given explicitly in <cit.> and (<ref>) is just <cit.>. There exists a map N̂ : Σ̂→𝕊^n - 1 homotopic to the Euclidean Gauss map of Σ̂ and fix a exponent σ∈ (1, qq - 1). Suppose that γ∈ (0, M^- 1), where M is a constant in <cit.>. If λ_0 is sufficiently large depending on γ, thenr^σ + 1 - n∫_Σ̂∩ B_r (p) (max{dN̂_ - H - (n - 1) d x^1(N̂), 0})^σ≤ C γ^σ - q (σ - 1)for all 0 < r ≤ 1. Here C is independent of γ and λ_0.We set W = max{dN̂_ - H, 0} andŴ := max{dN̂_ - H - (n - 1) d x^1(N̂), 0} .It suffices to observe that the term (n - 1) d x^1 (N̂) is bounded and the approximation in Definition <ref> does not change Σ in F_k. That is, the smoothing does not change the faces away from the edges of Ω. The estimates on W in various other sets for instance E_j, k, G_i, j, k are bounded below by either a constant or a large constant depending on γ. So the estimates on Ŵ are formally the same with the estimates on W on these sets. The rest of the proof is the same with <cit.>. § SCALAR CURVATURE RIGIDITY OF PARABOLIC CONVEX POLYTOPES Let n ≥ 3 be an odd integer and Ω be a compact, convex polytope in ℝ^n_+ with non-empty interior. Let g be a Riemannian metric defined on an open set containing Ω.For each i ∈ I, we assume that H+(n-1)dx^1(N) ≥ 0 at each point in Ω∩{u_i=0}. Moreover, we assume that the Matching Angle Hypothesis is satisfied.Let U denote an Euclidean ball such that the closure of U is contained in the interior of Ω. Consider a sequence λ_l →∞, and note that U ⊂Ω_λ_l if l is sufficiently large. By Proposition <ref>, we can find an m-tuple of spinors s^(l)=(s_1^(l), …, s_m^(l)) with the following properties: * s^(l) is defined on Ω_λ_l;* Ds^(l)=𝒟 s^(l)-nı/2ω_N_0 s^(l)=0;* χ s^(l)=s^(l) at each point on Σ_λ_l;* s^(l) does not vanish identically.Standard unique continuation arguments imply that ∫_U ∑_α=1^m|s_α^(l)|^2 >0 if l is sufficiently large. By scaling, we can arrange that ∫_U ∑_α=1^m|s_α^(l)|^2=1 if l is sufficiently large. The same proof as in <cit.> and using Corollary <ref>, we have If t=(t_1,⋯,t_m) is any smooth section of E, then∫_Ω_λ_l|t|^2 +∫_Σ_λ_l|t|^2 ≤ C ∫_Ω_λ_l |t|^2 +C ∫_U|t|^2,where C is a constant that does not depend on l, and ∫_Σ_λ_l W_λ_l,-|t|^2 ≤ o(1) ∫_Ω_λ_l |t|^2+o(1) ∫_U|t|^2. By using the above lemma and Proposition <ref>, we obtainWe have∫_Ω_λ_l|s^(l)|^2 → 0as l →∞. Using Proposition <ref>, we obtain∫_Ω_λ_l |s^(l)|^2+1/4∫_Ω_λ_l(R+n(n-1))|s^(l)|^2 ≤-1/2∫_Σ_λ_l(H+(n-1)dx^1(N)-d N_tr)|s^(l)|^2 ≤1/2∫_Σ_λ_l W_λ_l,-|s^(l)|^2.On the other hand, Lemma <ref> gives∫_Σ_λ_l W_λ_l,-|s^(l)|^2≤ o(1) ∫_Ω_λ_l |s^(l)|^2+o(1) ∫_U |s^(l)|^2.Putting these facts together, we conclude that∫_Ω_λ_l |s^(l)|^2+1/4∫_Ω_λ_l(R+n(n-1))|s^(l)|^2≤ o(1) ∫_Ω_λ_l |s^(l)|^2+o(1) ∫_U |s^(l)|^2.for sufficiently large l. Since R+n(n-1)≥ 0 and ∫_U |s^(l)|^2=1, it follows that∫_Ω_λ_l |s^(l)|^2≤ o(1) ∫_U |s^(l)|^2=o(1).if l is sufficiently large. 
This completes the proof of Proposition <ref>.We have∫_Ω_λ_l |s^(l)|^2 ≤ C,where C is a constant that does not depend on l. This follows from Proposition <ref> together with Lemma <ref>. After passing to a subsequence if necessary, the sequence s^(l)=(s^(l)_1,…,s^(l)_m) converges in C^∞_loc to a -parallel spinor s=(s_1,…,s_m) which is defined on the interior of Ω. We have ∫_Σ_λ_l|s^(l)-s|^2→ 0 as l→∞. Proof. Using Lemma <ref>, we obtain∫_Σ_λ_l|s^(l)-s|^2 ≤ C ∫_Ω_λ_l |(s^(l)-s)|^2 +C ∫_U|s^(l)-s|^2,Since s=0, we conclude that∫_Σ_λ_l|s^(l)-s|^2 ≤ C ∫_Ω_λ_l |s^(l)|^2 +C ∫_U|s^(l)-s|^2.The assertion now follows from Proposition <ref> and the fact that s_α^(l)→ s_α in C_loc ^∞ for α=1, …, m. We have ∫_Σ_l|χ s-s|^2→ 0 as l→∞. Here, χ denotes the boundary operator on Σ_λ_l. The lemma follows from Proposition <ref> and χ s^(l)=s^(l) on Σ_λ_l. Since s is -parallel, s can be extended continuously to Ω. By the above lemma, s satisfies the boundary condition s=χ s on Σ:=Ω. In the next step, we show that {s_1,⋯,s_m} are linearly independent at each point of Ω. The condition s=0 is equivalent to _e_is+1/2ω_N_0c(ı e_i)s=0for any unit vector e_i, 1≤ i≤ n and s is also subject to the boundary conditionω_N c(iν)s=s. §.§ The principal curvatures of the boundaryThe principal curvatures of the faces are the first consequence of a nonzero s. Let (Ω,g) be a polytope in Theorem <ref> with the Matching Angle Hypothesis satisfied. Then Each face F_i is umbilic with principal curvature -⟨ N_0, N_i ⟩.Let e_i be an orthonormal frame such that e_n = ν and the second fundamental form h of Σ in Ω is diagonalized, that is, h_i j = κ_i δ_i j where i, j ≠ n. We assume that i,j ≠ n.∇_e_i (ω_N c (ν) s) =ω_N c (∇_e_iν) s + ω_N c (ν) ∇_e_i s =κ_i ω_Nc ( e_i) s - 12ω_N c (ν) ω_N_0 c ( e_i) swhere we again have used (<ref>) and (<ref>). Following from ω_N ω_N_0 + ω_N_0ω_N = 2 ⟨ N_0, N ⟩ I_m and c (e_i) c (ν) = - c (ν) c (e_i), we have∇_e_i (ω_N c (ν) s) =κ_i ω_N c ( e_i) s + 12 (2 ⟨ N_0, N ⟩ - ω_N_0ω_N) c ( e_i) c (ν) s =κ_i ω_N c ( e_i) s + ⟨ N_0, N ⟩ω_N c ( e_i) s - 12ω_N_0 c ( e_i) s. We differentiate (<ref>) in the direction e_i with i ≠ n, then∇_e_i (ω_Nc (ν) s) =∇_e_i s = - 12ω_N_0 c( e_i) s.Henceκ_i ω_N c ( e_i) s = -⟨ N_0, N ⟩ω_N c (e_i) s.Considering that s has at least one nonzero component, we see thatκ_i = - ⟨ N_0, N ⟩. §.§ Linear independence of spinor componentsNow we show that the components of s we obtained are linearly independent, and we start by defining the following subspace of C^m byL={c=(c_1,⋯,c_m)^⊤∈C^m:c^⊤(I_m+ω_N_0)s =0 c^⊤(I_m- ω_N_0)s=0 everywhere on Ω}.Here ⊤ denotes the transpose of a row vector in C^m. Note that _e_i(c^⊤(I_m±ω_N_0)s)±1/2c(ı e_i)(c^⊤(I_m±ω_0)s)=0,so if c^⊤(I_m+ω_N_0)s=0=c^⊤(I_m-ω_N_0)s holds at some point of Ω, then it holds everywhere on Ω. For anyvectorN_i∈{N_i,i∈ I}, and any c∈ L, then c^⊤ω_N_i(I_m±ω_N_0)s=c^⊤(ω_N_i± (2⟨ N_i,N_0⟩-ω_N_0ω_N_i))s=c^⊤(I_m∓ω_N_0)ω_N_is±⟨ N_i,N_0⟩ c^⊤((I_m+ω_N_0)+(I_m-ω_N_0))s=c^⊤(I_m∓ω_N_0)ω_N_is.Let x_0∈Σ be a point with the outward-pointing unit normal vector N_i. By boundary condition χ s=s, one has ω_N_is=c(ıν)sat x_0. By (<ref>) and (<ref>), one has at this point x_0c^⊤ω_N_i(I_m±ω_N_0)s =c^⊤(I_m∓ω_N_0)ω_N_is=c^⊤(I_m∓ω_N_0)c(ıν)s=c(ıν)(c^⊤(I_m∓ω_N_0)s)=0.Since _e_i(c^⊤ω_N_i(I_m±ω_N_0)s)±1/2c(ı e_i)(c^⊤ω_N_i(I_m±ω_0)s)=0,which follows that c^⊤ω_N_i(I_m±ω_N_0)s≡ 0on Ω. Hence ω_N_i^⊤c∈ L,∀ i∈ I.Since Span{N_i,i∈ I}=R^n, End(C^m) is spanned by the compositions of ω_N_i^⊤,i∈ I. Hence L is invariant under End(C^m), so L={0} or L=C^m. 
Note that s does not vanish everywhere, we obtain L={0}. Note that ı c_δ(N_0)∈End(Δ_n), which is Hermitian symmetric and(ı c_δ(N_0))^2=id. Denote by {s̱_α}_1≤α≤ m the eigenvectors of ı c_δ(N_0), then {s̱_α}_1≤α≤ m is an orthonormal basis of Δ_n and satisfies ı c_δ(N_0)s̱_α=λ_αs̱_α, 1≤α≤ m,where λ_α= 11≤α≤m/2 -1 m/2+1≤α≤ m.In fact, since c_δ(N_0)c_δ(ξ)=-c_δ(ξ)c_δ(N_0) for any ξ with ξ⊥ N_0, the isomorphism c_δ(ξ) interchanges the ± 1-eigenspaces of ı c_δ(N_0), so that the dimension of the two eigenspaces are equal.Then (ω_N_0s)_α=ω_N_0αβs_β=⟨ı c_δ(N_0)s̱_α,s̱_β⟩ s_β=λ_α s_α.Hence L can be written asL={c∈C^m:∑_α=1^m/2c_α s_α=0=∑_β=m/2+1^m c_β s_β everywhere on Ω}.Then L={0} if and only if each of the following two setsS_1={s_1,⋯,s_m/2}, S_2={s_m/2+1,⋯,s_m}is a linearly independent set on each point of Ω.Next, we will show that the two sets S_1 and S_2 are orthogonal. Firstly, For any c_1,c_2∈C^m, we have ⟨ c_1^⊤(I_m+ω_N)s,c^⊤_2(I_m-ω_N)s⟩=0along Σ, where N:Σ→ S^n-1 is the outward-pointing unit normal vector to Σ with respect to the Euclidean metric δ. Along the boundary Σ, one has⟨ c^⊤_1(I_m+ω_N)s,c^⊤_2(I_m-ω_N)s⟩= ⟨ c^⊤_1(I_m+ω_N)ω_N c(ıν)s,c^⊤_2(I_m-ω_N)ω_N c(ıν)s⟩=- ⟨ c^⊤_1(I_m+ω_N)s,c^⊤_2(I_m-ω_N)s⟩ which follows that ⟨ c_1^⊤(I_m+ω_N)s,c^⊤_2(I_m-ω_N)s⟩=0.For any vector X∈R^n and any a∈R, one has2(1-a^2)⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩-2⟨ c^⊤_1(I_m+ω_N_0)ω_X s,c^⊤_2(I_m-ω_N_0)ω_X s⟩=(1-a)⟨ c^⊤_1(I_m+ω_N_0)((1+a)I_m+ω_X)s,c^⊤_2(I_m-ω_N_0)((1+a)I_m-ω_X)s⟩ +(1+a)⟨ c^⊤_1(I_m+ω_N_0)((1-a)I_m-ω_X)s,c^⊤_2(I_m-ω_N_0)((1-a)I_m+ω_X)s⟩=(1-a)⟨ c^⊤_1(I_m+ω_N_0)(I_m+ω_aN_0+X)s,c^⊤_2(I_m-ω_N_0)(I_m-ω_aN_0+X)s⟩ +(1+a)⟨ c^⊤_1(I_m+ω_N_0)(I_m-ω_aN_0+X)s,c^⊤_2(I_m-ω_N_0)(I_m+ω_aN_0+X)s⟩.Recall that {N_i,i∈ I} is the set of all outward-pointing unit normal vectors to Σ. Denote X_i=N_i-⟨ N_i,N_0⟩ N_0.Then {X_i,i∈ I} spans the subspace N_0^⊥:={N∈R^n: ⟨ N,N_0⟩=0}. Denote a_i=⟨ N_i,N_0⟩, then 1-a_i^2=X_i^2. Hence 2X_i^2⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩ -2⟨ c^⊤_1(I_m+ω_N_0)ω_X_i s,c^⊤_2(I_m-ω_N_0)ω_X_i s⟩=(1-a_i)⟨ c^⊤_1(I_m+ω_N_0)(I_m+ω_N_i)s,c^⊤_2(I_m-ω_N_0)(I_m-ω_N_i)s⟩ +(1+a_i)⟨ c^⊤_1(I_m+ω_N_0)(I_m-ω_N_i)s,c^⊤_2(I_m-ω_N_0)(I_m+ω_N_i)s⟩.By Lemma <ref>, one hasX_i^2⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩=⟨ c^⊤_1(I_m+ω_N_0)ω_X_i s,c^⊤_2(I_m-ω_N_0)ω_X_i s⟩at the point x_0 with the outward-pointing unit normal vector N_i. Note that e_i⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩ =⟨ -1/2c(ı e_i)(c^⊤_1(I_m+ω_0)s),c^⊤_2(I_m-ω_N_0)s⟩ +⟨ (c^⊤_1(I_m+ω_0)s,1/2c(ı e_i)(c^⊤_2(I_m-ω_N_0)s)⟩=0,so ⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩ is a constant. On the other hand, ⟨ c^⊤_1(I_m+ω_N_0)ω_X_i s,c^⊤_2(I_m-ω_N_0)ω_X_i s⟩=⟨ c^⊤_1ω_X_i(I_m-ω_N_0) s,c^⊤_2ω_X_i(I_m+ω_N_0) s⟩is also a constant. Hence (<ref>) holds on every point of Ω. In particular, for any c_3∈C^m, taking c_1=ω_X_i^⊤c_3, one has ⟨ c^⊤_3(I_m-ω_N_0)ω_X_is,c^⊤_2(I_m-ω_N_0)s⟩=⟨ c^⊤_3(I_m-ω_N_0) s,c^⊤_2(I_m-ω_N_0)ω_X_i s⟩.Since N_0^⊥=Span{X_i,i∈ I}, and it follows that ⟨ c^⊤_3(I_m-ω_N_0)ω_Xs,c^⊤_2(I_m-ω_N_0)s⟩=⟨ c^⊤_3(I_m-ω_N_0) s,c^⊤_2(I_m-ω_N_0)ω_X s⟩.for any X∈ N_0^⊥. For any c_1∈C^m, by taking c_3=ω_X^⊤c_1, one hasX^2 ⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩=⟨ c^⊤_1(I_m+ω_N_0)ω_X s,c^⊤_2(I_m-ω_N_0)ω_X s⟩for any X∈ N_0^⊥.Let {E_1,⋯,E_n} be an orthonormal basis of (R^n,δ) with E_n=N_0. Let Γ=ı^n+1/2c_δ(E_1)⋯ c_δ (E_n)denote the chirality operator, thenΓ c_δ(v)=c_δ(v)Γ for any v∈R^n since n is odd. Hence we define (ω_Γ s)_α:=⟨Γs̅_α, s̅_β⟩ s_β, henceΓα=ı^-n+1/2ω_E_1⋯ω_E_n.A direct calculation shows thatω_Γω_E_a=ω_E_aω_Γ and ω^2_Γ=idfor any 1≤ a≤ n. 
In fact, one has(ω_Γω_E_a)_αβ =(-1)^-n+1/2⟨s̱_α,Γ c_δ(ı E_a)s̱_β⟩_δ=(-1)^-n+1/2⟨s̱_α,c_δ(ı E_a)Γs̱_β⟩_δ=(ω_E_aω_Γ)_αβ.Hence ω_Γ=± I_m, and soω_N_0=ω_E_n=±ı^-n+1/2ω_E_1⋯ω_E_n-1.Using (<ref>), we obtain⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩=⟨ c^⊤_1(I_m+ω_N_0)ω_E_1 s,c^⊤_2(I_m-ω_N_0)ω_E_1 s⟩=⟨ (ω_E_1^⊤ c_1)^⊤(I_m-ω_N_0) s,(ω_E_1^⊤ c_2)^⊤(I_m+ω_N_0)s⟩=⋯=⟨ (ω_E_n-1^⊤⋯ω_E_1^⊤ c_1)^⊤(I_m+ω_N_0) s,(ω_E_n-1^⊤⋯ω_E_1^⊤ c_2)^⊤(I_m-ω_N_0)s⟩=⟨ (ω_N_0^⊤ c_1)^⊤(I_m+ω_N_0) s,(ω_N_0^⊤ c_2)^⊤(I_m-ω_N_0)s⟩=-⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩which follows that ⟨ c^⊤_1(I_m+ω_N_0)s,c^⊤_2(I_m-ω_N_0)s⟩=0for any c_1,c_2∈C^m. This is equivalent to the two sets S_1 and S_2 are orthogonal to each other. Hence the setS_1∪ S_2={s_1,s_2,⋯,s_m}is linearly independent everywhere on Ω. We obtainThere is a basis of spinors {s_1, …, s_m} defined on Ω satisfying the boundary condition χ s=s on Σ and_e_is+1/2ω_N_0c(ı e_i)s=0for every i=1, …, n, where s=(s_1,⋯,s_m) is a smooth section of E. Now we can show Theorem <ref> with the Matching Angle Hypothesis is satisfied.Since the Matching Angle Hypothesis is satisfied and Proposition <ref> is proved, it remains to show thatR(e_i,e_j,e_k,e_l)=-(δ_ikδ_jl-δ_ilδ_jk) on Ω for every i,j,k,l=1,…,n, that is, g is a hyperbolic metric on Ω.From Proposition <ref>, there is a smooth non-zero section s=(s_1,…,s_m) of E defined on Ω and satisfies _e_ks+1/2ω_N_0c(ı e_q)s=0,where {s_1,⋯,s_m} is a basis of spinors. Hence0 =_e_k( _e_ls+1/2ω_N_0c(ı e_l)s)-( __e_ke_ls_α+1/2ω_N_0c(ı_e_ke_l)s) -1/2c(ı e_l)ω_N_0( _e_ks_α+1/2ω_N_0c(ı e_q)s)=_e_k_e_ls-__e_ke_ls+1/4c(e_l)c(e_q)s.This implies 0 =_e_k_e_ls-_e_l_e_ks-_[e_k,e_l]s -1/4∑_i,j=1^n(δ_ikδ_jl-δ_ilδ_jk)c(e_i)c(e_j)s.Hence0 =-1/4∑_i,j=1^n(- ⟨ R(e_k,e_l)e_i,e_j⟩+(δ_ikδ_jl-δ_ilδ_jk))c(e_i)c(e_j)s=-1/4∑_i,j=1^n(R(e_i,e_j,e_k,e_l)+(δ_ikδ_jl-δ_ilδ_jk))c(e_i)c(e_j)s.which follows that∑_i,j=1^n(R(e_i,e_j,e_k,e_l)+(δ_ikδ_jl-δ_ilδ_jk))c(e_i)c(e_j)s_μ=0for any 1≤μ≤ m. Since n is odd, the kernel of the spinor representation Cl(T_x Ω) →End(𝒮_x) is given by the (-1)-eigenspace of σ (<cit.>), where σ is the volume form σ=ı^n+1/2e_1⋯ e_n∈Cl(T_xΩ).Hence∑_i<j(R(e_i,e_j,e_k,e_l)+(δ_ikδ_jl-δ_ilδ_jk))(1+σ)e_i e_j=0which follows thatR(e_i,e_j,e_k,e_l)+(δ_ikδ_jl-δ_ilδ_jk)=0.So Theorem <ref> is proved under the Matching Angle Hypothesis. §.§ Orthogonality of spinor componentsNext, we can prove that s_μ and s_τ are orthogonal for any 1≤μ≠τ≤ m.For any 1≤α_1,α_2≤m/2 and any two real vectors X,Y, the Hessian of ⟨ s_α_1,s_α_2⟩ along X,Y is given by( ^2⟨ s_α_1,s_α_2⟩)(X,Y):=(_Y_X-__YX)⟨ s_α_1,s_α_2⟩=⟨_X s_α_1, _Ys_α_2⟩+⟨ (_Y_X-__YX) s_α_1,s_α_2⟩ +⟨_Y s_α_1,_Xs_α_2⟩+⟨s_α_1,(_Y_X-__YX)s_α_2⟩=⟨ -1/2c(ı X)s_α_1,-1/2c(ı Y)s_α_2⟩+⟨1/4c(ı X)c(ı Y)s_α_1,s_α_2⟩+ ⟨ -1/2c(ı Y)s_α_1,-1/2c(ı X)s_α_2⟩+⟨ s_α_1,1/4c(ı X)c(ı Y)s_α_2⟩=⟨ X,Y⟩·⟨ s_α_1,s_α_2⟩.Similarly, for any 1+m/2≤β_1,β_2≤ m, one has( ^2⟨ s_β_1,s_β_2⟩)(X,Y)=⟨ X,Y⟩·⟨ s_β_1,s_β_2⟩.Note that ⟨ s_α,s_β⟩=0 for any 1≤α≤m/2<β≤ m, so (<ref>) and (<ref>) imply that( ^2⟨ c_1^⊤ s,c_2^⊤ s⟩)(X,Y)=⟨ X,Y⟩·⟨ c_1^⊤ s,c_2^⊤ s⟩for any c_1,c_2∈C^m. Along the boundary Σ, we have_ν⟨ s_α_1,s_α_2⟩ =⟨ -1/2c(ıν)s_α_1,s_α_2⟩+⟨ s_α_1,-1/2c(ıν)s_α_2⟩=⟨ -1/2ω_N s_α_1,s_α_2⟩+⟨ s_α_1,-1/2ω_Ns_α_2⟩=⟨ -1/2ω_Nω_N_0 s_α_1,s_α_2⟩+⟨ s_α_1,-1/2ω_Nω_N_0s_α_2⟩=-⟨ N,N_0⟩_δ·⟨ s_α_1,s_α_2⟩Similarly, one has_ν⟨ s_β_1,s_β_2⟩=-⟨ N,N_0⟩_δ·⟨ s_β_1,s_β_2⟩.Hence we have_ν⟨ c_1^⊤ s,c_2^⊤ s⟩=-⟨ N,N_0⟩_δ·⟨ c_1^⊤ s,c_2^⊤ s⟩for any c_1,c_2∈C^m along the boundary Σ. 
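The final step in the proof of the next lemma, namely that ∇^2 f = f· g together with f = 0 and ∇ f = 0 along a face F forces f≡ 0 on Ω, can be justified by the following elementary observation (applied, when f is complex-valued, to its real and imaginary parts separately). Set u = f^2+|∇ f|^2; then ∇ u = 2f∇ f+2∇^2 f(∇ f,·) = 4f∇ f, so |∇ u|≤ 2u. Along any unit-speed path γ starting on F, the function h(t) = u(γ(t)) satisfies h(0) = 0 and |h'|≤ 2h, hence h≡ 0 by Gronwall's inequality; since Ω is connected, u≡ 0 and thus f≡ 0.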
We have ⟨c_1^⊤ s,c_2^⊤ s⟩=⟨c_1^⊤ω_X s,c_2^⊤ω_Xs⟩on Ω for any unit Euclidean vector X, where c_i∈C^m with ω_N_0^⊤ c_i=c_i, i=1,2. The identity (<ref>) is valid obviously for X=N_0, it suffices to prove for X normal to N_0.Along a face F with the Euclidean normal N, by Lemma <ref>, ⟨ c_1^⊤(I_m+ω_N)s,c_2^⊤(I_m-ω_N)s⟩=0. Using N=a N_0+b X with a^2+b^2=1, we see that(1-a^2)⟨ c_1^⊤ s,c_2^⊤ s⟩-b^2⟨ c_1^⊤ω_X s,c_2^⊤ω_X s⟩ -b(1+a)⟨ c_1^⊤ s,c_2^⊤ω_X s⟩+b(1-a)⟨ c^⊤_1ω_Xs,c_2^⊤ s⟩=0,which follows that ⟨ c_1^⊤ s,c_2^⊤ s⟩-⟨ c_1^⊤ω_X s,c_2^⊤ω_X s⟩=0along F since ⟨ c^⊤_1ω_Xs,c_2^⊤ s⟩=0=⟨ c_1^⊤ s,c_2^⊤ω_X s⟩.Denotef=⟨ c_1^⊤ s,c_2^⊤ s⟩-⟨ c_1^⊤ω_X s,c_2^⊤ω_X s⟩ We see that f=∇^F f=0 along F. By (<ref>), one has_ν f=-⟨ N,N_0⟩_δ f.Hence f=∇ f=0 along F. By (<ref>), we have ^2f=f· g and we conclude that f vanishes on all Ω. Repeating an argument earlier (in the proof (<ref>)), we have that (<ref>) holds for X normal to N_0, which concludes the proof. For any c_1, c_2 ∈ℂ^m, we have⟨c_1^⊤ s,c_2^⊤ s⟩=⟨c_1^⊤ω_X s,c_2^⊤ω_Xs⟩on Ω for any unit Euclidean vector X.We have shown that the corollary holds for ω_N_0^⊤ c_i=c_i, i=1,2, and also when ω_N_0^⊤ c_1= c_1, ω_N_0^⊤ c_2=-c_2. The case ω_N_0^⊤ c_i=-c_i, i=1,2 is proved similarly. For the general case, we set c_i=c_i^++c^-_i with ω_N_0^⊤ c_i^±= ± c_i^±. Hence⟨c_1^⊤ s,c_2^⊤ s⟩=⟨ (c_1^+)^⊤ s, (c_2^+)^⊤ s⟩+ ⟨ (c_1^-)^⊤ s, (c_2^-)^⊤ s⟩=⟨ (c_1^+)^⊤ω_X s, (c_2^+)^⊤ω_Xs⟩+ ⟨ (c_1^-)^⊤ω_X s, (c_2^-)^⊤ω_X s⟩=⟨ c_1^⊤ω_X s,c_2^⊤ω_X s⟩. So the result is proved. For any c_1, c_2 ∈ℂ^m, we have ⟨ c_1^⊤ s, c_2^⊤ω_X s⟩=⟨ c_1^⊤ω_X s,c_2^⊤ s⟩on Ω for any unit Euclidean vector X. It follows from self-adjointness of ω_X and ω_X^2=id. The matrix G:=(G_μτ)_1≤μ,τ≤ m,G_μτ:=⟨ s_μ, s_τ⟩,is a scalar nonzero multiple of the identity matrix. Note that |s_μ| is not a constant. Let c_i=c_μ^(i) with i=1,2 and μ=1, ⋯, m, let G_μτ=⟨ s_μ, s_τ⟩. We write carefully (<ref>) in components,⟨ c_1^⊤ s, c_2^⊤ω_X s⟩=∑_τ,μ,λ=1^m⟨c_μ^(1) s_μ, c_λ^(2)ω_X λτ s_τ⟩=∑_τ,μ,λ=1^mc_μ^(1) G_μτøω_X λτc̅_λ^(2)⟨ c_1^⊤ω_X s,c_2^⊤ s⟩=∑_τ,μ,λ=1^m⟨c_μ^(1)ω_X μτ s_τ, c_λ^(2) s_λ⟩=∑_τ,μ,λ=1^mc_μ^(1)ω_X μτ G_τλc̅_λ^(2) Since c_1 and c_2 are arbitrary, we know that G_μτøω_X λτ=ω_X μτ G_τλ. Since ω_X is self-adjoint and ω_X^2=id, we see øω_X λτ=ω_ Xτλ and G_μτω_Xτλ=ω_X μτ G_τλ. This says that G commutes with any ω_X where X is of unit Euclidean length. It follows that the matrix G commutes with every element of End(C^m). This implies that G is a scalar multiple of the identity (see <cit.>). §.§ Type I imaginary Killing spinors Since our model domain lives in the Poincaré half-space, we are also interested in writing the metric g of the polytope (Ω,g) in a similar way. To this end, we investigate the types of the components of s.From (<ref>) and (<ref>), we have that_e_is_μ±ı/2c(e_i)s_μ=0for any μ=1,⋯,m. Hence s_μ,1≤μ≤ m/2 are imaginary Killing spinors over Ω with Killing number -i2,s_μ,m/2< μ≤ m are imaginary Killing spinors over Ω with Killing number i2 using the convention of <cit.>. We shall only indicate the Killing number when necessary. By <cit.>, |s_μ|^4-||s_μ|^2|^2=|s_μ|^2·|ı s_μ∓ c(log|s_μ|^2)s_μ|^2is a nonnegative constant, where log|s_μ|^2 denotes the gradient vector field of log |s_μ|^2. According to the nonnegative constant, Baum <cit.> defined two types of imaginary Killing spinor. 
We call s_μ a Killing spinor of type I if |s_μ|^4-||s_μ|^2|^2=0 and a Killing spinor of type II if |s_μ|^4-||s_μ|^2|^2> 0.Hence by (<ref>), s_μ is of type I if and only if c(ξ)s_μ=±ı s_μ,where ξ=log|s_μ|^2.By Proposition <ref>, |s_μ|=|s_λ|, if s_μ is of type I (resp. type II), then s_λ is also of type I (reps. II) for any λ=1,⋯,m. For any vector field X, one has_Xξ=X-⟨ X,ξ⟩ξ.We have_Xξ =_X(log |s_μ|^2)=_X|s_μ|^2/|s_μ|^2=|s_μ|^2X/|s_μ|^2-_X |s_μ|^2|s_μ|^2/|s_μ|^4=X-_Xlog|s_μ|^2 ξ=X-⟨ X,ξ⟩ξ,where the third equality by (<ref>).By Lemma <ref>, we have the following characterization on the type I imaginary Killing spinor. Let (Ω,g) be an odd-dimensional polytope given in Conjecture <ref> and satisfying either the Matching Angle Hypothesis or the Acute Angle Hypothesis.For the imaginary Killing spinor s_μ, the following are equivalent: * s_μ is of type I.* all level sets of |s_μ|^2 are flat.* s_μ|_F is parallel.Denote by F={x∈Ω:|s_μ|^2=constant} the level set of |s_μ|^2. ThenTF={X∈ TΩ:X(|s_μ|^2)=0}={X∈ TΩ:⟨ X,ξ⟩=0}.For any X,Y,Z,W∈ TF, by Gauss-Codazzi equation, the Riemann curvature R^F of induced metric g|_F is given by R^F(X,Y,Z,W)=R(X,Y,Z,W)+⟨ B(X,Z),B(Y,W)⟩-⟨ B(X,W),B(Y,Z)⟩,where B(X,Y) is the second fundamental form, which is parallel to ξ_0=ξ/ξ and is given by ⟨ B(X,Y),ξ_0⟩=-⟨_Xξ_0,Y⟩.By Lemma <ref>, one has B(X,Y)=-⟨ X,Y⟩ξ_0/ξ.We have shown that (Ω,g) is hyperbolic by Theorem <ref>, we have thatR^F(X,Y,Z,W)=-(1-1/ξ^2)(⟨ X,Z⟩⟨ Y,W⟩-⟨ X,W⟩⟨ Y,Z⟩).Note that ξ^2=||s_μ|^2|^2/|s_μ|^4≤ 1,with equality holds if and only if s_μ is of type I. Hence we conclude that all level sets of |s_μ|^2 are flat if and only ifs_μ is of type I. By <cit.>, for any X∈ TF, one has_X^F(s_μ|_F) =_X s_μ+1/2c(_Xξ)c(ξ)s_μ=∓ı/2c(X)s_μ+1/2c(X)c(ξ)s_μ=∓1/2c(X)(ı s_μ∓ c(ξ)s_μ).Hence _X^F(s_μ|_F)=0 is equivalent to c(ξ)s_μ=±ı s_μ, that is, s_μ is of type I. The proof is complete. It is noted from <cit.> that a complete Riemannian manifold is isometric to a warped product if it admits an imaginary Killing spinor of type I. If Ω has a face F with N=± N_0, the boundary condition χ s=s gives ı c(ν)s_μ=± s_μ for any 1≤μ≤ m, which follows that s_μ is of type I for any 1≤μ≤ m. Note that we have in fact established that the Clifford multiplication by iν on s is represented by the endomorphism ω_N_0. For a general polytope Ω, if s_μ is of type I, we can find similarly that the Clifford multiplication by -i∇log|s_μ|^2 on s is represented by the endomorphism ω_N_0.Moreover, if s_μ is of type I, by <cit.>, (Ω,g) is locally a warped product with the level sets of |s_μ|^2 being the factor. There exists a coordinate transformation to the Poincaré half-space model say (<ref>). We see that |s_μ|^2 is a positive constant multiple of 1/x^1.In Poincaré half-space model, umbilic hypersurfaces are either part of linear hyperplanes or part of spheres. Along a face F_i, since ∇_ν |s_μ|^2=-⟨ N_0, N_i ⟩|s_μ|^2, we can see that F_i can only be part of linear hyperplanes. Because of this, it is reasonable to conjecture that s_μ is of type I for all 1≤μ≤ m. We have already shown that (Ω,g) is hyperbolic, using <cit.>, it follows automatically in dimensions 3 and 5 that s_μ is of type I. 
In dimension 3, for simple polyhedra such as parabolic cubes and prisms (also tetrahedra with a top/base face), we can recover the full shape of (Ω,g).In other dimensions, the conjecture that s_μ is of type I might be proved by showing that there does not exist a m2-dimensional subspace of the imaginary Killing spinors of type II, since we have shown that {s_μ}_1≤μ≤ m/2 is a subspace of dimension m/2.§.§ The case with acute angles As in the case when (Ω,g) satisfies the Matching Angle Hypothesis, using the smoothing in Definition <ref>, we can find a sequence of domains Ω̂^(l), a sequence of maps N̂^(l) : ∂Ω̂^(l)→𝕊^n - 1 homotopic to the Euclidean Gauss maps of ∂Ω̂^(l) withsup_p ∈ℝ^nsup_0 < r ≤ 1 r^σ + 1 - n∫_∂Ω̂^(l)∩ B_r (p) (max{dN̂^(l)_ - H - (n - 1) dx^1 (N̂), 0})^σ→ 0as l →∞ and s^(l)∈ℰ with the following properties: * s^(l) is defined on Ω_λ_l; * 𝒟̃ s^(l) =𝒟s^(l) - nı/2ω_N_0 s^(l) = 0; * χ^(l) s^(l) = s^(l) at each point on ∂Ω̂^(l); * s^(l) does not vanish identically.Here the chirality operator χ^(l) : ℰ|_∂Ω̂^(l)→ℰ|_Ω̂^(l) is defined asχ^(l) s = ω_N̂ c (ıν) susing the maps N̂^(l) : ∂Ω̂^(l)→𝕊^n - 1.Proceeding similarly with the quantity W_λ_l, - replaced byW^(l) : = max{dN̂^(l)_ - H - (n - 1) d x^1 (N̂),0}and we obtain that s^(l) converges in C_^∞ (Ω\∂Ω, ℰ) to an m-tuple of spinors s satisfying the Killing spinor equation∇_e_i s + 12ω_N_0 c (ıν) s= 0subject to the boundary conditionω_N c (ıν) s = salong the faces of Ω. Similarly, we can show that the components {s_α}_1 ≤α≤ m of s form an orthonormal basis everywhere. From here, we can finish the proof of Theorem <ref> under the Acute Angle Hypothesis. Showing that (Ω,g) is hyperbolic and Proposition <ref> are the same with the case of the Matching Angle Hypothesis being satisfied. It remains to determine the dihedral angles using the boundary condition χ s =s.Let F_i and F_j be two adjacent faces of Ω, we obtain that2 ⟨ν_j, ν_k ⟩⟨ s, s ⟩=⟨ c (ν_j) s, c (ν_k) s ⟩ + ⟨ c (ν_k) s, c (ν_i) s ⟩=⟨ω_N_j s, ω_N_k s ⟩ + ⟨ω_N_k s, ω_N_j s ⟩=⟨ (ω_N_kω_N_j + ω_N_jω_N_k) s, s ⟩= 2 ⟨ N_j, N_k ⟩|s|^2at each point Ω∩F̅_i ∩ F_j. In the last step, we have used ω_N_jω_N_i + ω_N_jω_N_i =2 ⟨ N_i, N_j ⟩I_m. Hence, we conclude that the dihedral angles are equal finishing our proof of the case when (Ω, g) satisfies the Acute Angle Hypothesis.§ A RIGIDITY THEOREM FOR A SMOOTH CONVEX DOMAINIn this section, we prove the Llarull type rigidity Theorem <ref>. First, we need the following elementary lemma from linear algebra. Let V and W be two finite-dimensional real vector spaces of the same dimension. The space W is equipped with an inner product, and V has two inner products G_1 and G_2. Let L: V → W be a linear isomorphism. If G_2 ≥ G_1, then L_tr,2≤L_tr,1. Equality is achieved if and only if G_2 = G_1. Let ℓ be the dimension of V. Let {e_i }_1 ≤ i ≤ℓ be a basis of V such that G_1 (e_i, e_j) = δ_ij and G_2 (e_i, e_j) = μ_i δ_ij. Since G_2 ≥ G_1, so μ_i ≥ 1 for all 1 ≤ i ≤ℓ. Let Q : W → (V, G_1), then SQ is an isometry from W to (V, G_2) where S : V → V is a linear map given by sending all e_i to 1√(μ_i) e_i.We fix an orthonormal basis {Ê_i }_1 ≤ i ≤ℓ of W, now we can view maps between W and V as ℓ×ℓ matrices, then a map Q : W → V given byÊ_i ↦ Q Ê_i := ∑_j Q_ij e_jis an isometry from W to (V, G_1) if and only if {Q_ij} is an orthogonal matrix which we still denote by Q. We set S = diag (1√(μ_1), …, 1√(μ_ℓ)), then SQ represents an isometry from W to (V, G_2). 
By Definition <ref>,L_tr, 1 = sup_Q ∈ O (ℓ)tr (QL),L_tr, 2 = sup_Q ∈ O (ℓ)tr (SQL) .Take an arbitrary orthogonal matrix Q ∈ O (ℓ), let λ_i be the i-th diagonal entry of QL, then the i-th diagonal entry of SQL is λ_i / √(μ_i). It follows from μ_i≥ 1 thattr (SQL) = ∑_i λ_i / √(μ_i)≤∑_i |λ_i | = tr (S' QL),where S' is a suitable diagonal matrix depending on Q with diagonal entries 1 or - 1 such that all the diagonal entries of S' QL are nonnegative. Note that S' Q is also an orthogonal matrix.By (<ref>), we have thatL_tr, 2 = sup_Q ∈ O (ℓ)tr (SQL) ≤sup_Q ∈ O (ℓ)tr (S' QL) =L_tr, 1 .Since L is a linear isomorphism, λ_i ≠ 0. We easily find that the equality holds if and only if μ_i = 1 for all i, that is, G_2 = G_1. Now we give the proof of the Llarull type rigidity result.By Proposition <ref>, there is a smooth section s=(s_1,⋯,s_m) of E satisfying: * s is defined on Ω;* Ds=𝒟 s-nı/2ω_N_0 s=0;* χ s=s at each point on the boundary Ω;* s does not vanish identically.Using Proposition <ref>, we have that0 =∫_Ω|𝒟 s|^2 ≥∫_Ω(|∇ s|^2+1/4(R+n(n-1))|s|^2) +1/2∫_Ω(H+(n-1) d x^1(N)-d N_tr)|s|^2 ≥1/2∫_∂Ω(H_b+(n-1) d x^1(N)-d N_tr, σ)|s|^2,where the last inequality is by the condition H≥ H_b. On the other hand, by (<ref>), one hasH_b=x^1(H_δ+(n-1) N( log1/x^1))=x^1 H_δ-(n-1)dx^1(N).Hence0 ≥1/2∫_∂Ω(x^1 H_δ-d N_tr,σ)|s|^2.Let σ_δ=.δ|_∂Ω, we see that by σ≥σ̅ that σ≥(1/x^1)^2 σ_δ. By Lemma <ref>, we have thatd N_tr,σ≤d N_tr,1/(x^1)^2σ_δ=x^1 d N_tr,σ_δ=x^1 H_δ,which forces ∇ s=0 and dN_tr,σ= dN_tr,σ̱. Since Ω is strictly convex, dN is an isomorphism on each point of Σ, which follows that σ=σ̱. The orthogonality of {s_1,⋯,s_m} is more straightforward than the case of polytope since we always have a point p ∈∂Ω such that the Euclidean normal N is N_0. Following the same proof of Theorem <ref>, we conclude that (Ω,g) is hyperbolic. § DIHEDRAL RIGIDITY OF POLYTOPAL INITIAL DATA SETS In this section, we will discuss the rigidity of convex polytopes under the dominant energy condition.Let Ω be a compact, convex polytope in R^n with non-empty interior. Let g be a Riemannian metric defined on an open set containing Ω, and let q be a symmetric (0,2)-tensor defined on an open set containing Ω. The energy density and current density are defined byμ=1/2(R-|q|^2+(tr_gq)^2), J=divq-tr_gq.(g,q) is called an initial data set on Ω. We assume that μ and J satisfy the dominant energy conditionμ-|J|≥ 0.Similar to (<ref>) and (<ref>), we define _Xs=_X s+ı/2ω_N_0c(q(X))swhere q(X) is a vector defined by ⟨ q(X),Y⟩=q(X,Y) for any vector Y. The associated Dirac operator is D=D-ı/2(tr_gq)ω_N_0. The boundary operator χ is defined by (<ref>), that is, χ=ω_N ı c(ν):E|_Σ→E|_Σ.The same proof as in Proposition <ref>, we haveLet Ω be a smooth domain in R^n with boundary Σ. Suppose that s=(s_1,⋯,s_m)∈E is an m-tuple of spinors satisfying the boundary conditionχ s=s on Σ, then ∫_Ω(-|Ds|^2+|s|^2+1/2(μ-|J|)|s|^2)≤ -1/2∫_Σ (H+cosθtr_Σ (q)-sinθ|q(ν)^⊤|_g-dN_tr)|s|^2, where R denotes the scalar curvature of (Ω,g) and cosθ:=dx^1(N), sinθ=|N_0^⊤|_δ is the norm of N_0^⊤ with respect to the Euclidean metric δ, N_0^⊤=N_0-⟨ N_0,N⟩_δ N, q(v)^⊤ denotes the orthogonal projection to TΣ, tr_Σ q:=tr_gq-q(ν,ν). 
For any s=(s_1,…,s_m)∈Γ(Ω,E), one has∫_Ω (-|Ds|^2+| s|^2+R_g/4|s|^2) =∫_Σ⟨ (c(ν)D+_ν)s,s⟩=-1/2∫_Σ H|s|^2+∫_Σ⟨D^Σ s,s⟩.Using the divergence theorem, we have∫_Ω |Ds-ı (tr_gq)/2ω_N_0s|^2 =∫_Ω |Ds|^2+(tr_gq)^2/4|s|^2+tr_gqı/2(⟨Ds,ω_N_0s⟩-⟨ω_N_0s,Ds⟩)=∫_Ω |Ds|^2+(tr_gq)^2/4|s|^2+ı/2∫_Σ⟨ (tr_gq)ω_N_0c(ν)s,s ⟩-ı/2∫_Ω⟨ c( (tr_gq))ω_N_0s,s⟩.and ∑_i=1^n∫_Ω |_e_is+ı/2ω_N_0c(q(e_i))s|^2=∑_i=1^n∫_Ω |_e_is|^2+|q(e_i)|^2/4|s|^2 -ı/2(⟨_e_is,ω_N_0c(q(e_i))s⟩-⟨ω_N_0c(q(e_i))s,_e_is⟩)=∫_Ω | s|^2+|q|^2_g/4|s|^2-ı/2∫_Σ⟨ s,ω_N_0c(q(ν))s⟩+ı/2∫_Ω⟨ s,ω_N_0c((_e_iq)(e_i))s⟩=∫_Ω | s|^2+|q|^2_g/4|s|^2+ı/2∫_Σ⟨ω_N_0c(q(ν)) s,s⟩-ı/2∫_Ω⟨ c(div_gq)ω_N_0s,s⟩.where the second equality is by the divergence theorem.Therefore, ∫_Ω -|Ds|^2+|s|^2+1/2μ|s|^2+ı/2∫_Ω⟨ c(J)ω_N_0s,s⟩=-1/2∫_Σ (H|s|^2+ı⟨ω_N_0c((tr_gq)ν-q(ν))s,s⟩)+∫_Σ⟨D^Σ s,s⟩,Note that the section s∈Γ(Ω,E) satisfies the following boundary value condition χ(s|_Σ)=s|_Σ. When restricted on Σ, ⟨D^Σ s,s⟩ =⟨D^Σχ s,s⟩ =⟨ -χD^Σ s,s⟩+⟨ı c((ω_N)^⊤)s,s⟩which follows that ⟨D^Σ s,s⟩=1/2⟨ı c((ω_N)^⊤)s,s ⟩=1/2⟨Bs,s ⟩. On the other hand, on Σ, ı⟨ω_N_0c(ν)s,s⟩ =ı⟨ω_N_0c(ν)χ s,s⟩=-ı⟨ω_N_0χ c(ν)s,s ⟩+ı⟨ω_N_0(-2ıω_N)s,s⟩= -⟨ω_N_0ω_N s,s⟩)+2⟨ω_N_0ω_N s,s⟩=⟨ω_N_0ω_N s,s⟩,which follows that ı⟨ω_N_0c(ν)s,s⟩=⟨ω_N_0ω_N s,s⟩.Note that ı⟨ω_N_0c(ν)s,s⟩ is real,so ı⟨ω_N_0c(ν)s,s⟩ =1/2(⟨ω_N_0ω_N s,s⟩+⟨ s,ω_N_0ω_N s⟩)=1/2⟨ (ω_N_0ω_N+ω_Nω_N_0)s,s⟩=⟨ N_0,N⟩_δ |s|^2, We note that ⟨ω_X c ( e_i) s, s ⟩ for any X and any e_i is a real number. Indeed, since the complex conjugate of ⟨ω_X c ( (e_i)) s, s ⟩ is itself by the self-adjoint properties of ω_X, c(ie_i) and that⟨ s, ω_X c ( e_i) s ⟩= ⟨ c ( e_i) ω_X s, s⟩ = ⟨ω_X c ( e_i) s,s ⟩ .So using ω_N c (ν) s = s, we see that⟨ω_N c ( e_i) s, s ⟩= ⟨ c (ν) c( e_i) s, s ⟩=⟨ c(ν) c( e_i) s, s ⟩ = 0.The above then gives that⟨ω_N_0 c ( q(ν)^⊤) s, s ⟩ = ⟨ω_N_0 - ⟨ N_0, N ⟩N c ( q (ν)^⊤) s, s ⟩ =⟨ω_N_0^⊤ c ( q(ν)^⊤) s, s ⟩.Therefore, (<ref>) becomes ∫_Ω -|Ds|^2+|s|^2+1/2μ|s|^2+ı/2∫_Ω⟨ c(J)ω_N_0s,s⟩=-1/2∫_Σ (H+(tr_gq-q(ν,ν))⟨ N_0,N⟩_δ)|s|^2+ı/2∫_Σ⟨ c(q(ν)^⊤)ω_N_0^⊤s,s⟩ +1/2∫_Σ⟨B s,s ⟩≤ -1/2∫_Σ (H+(tr_Σ q)dx^1(N)-|N_0^⊤|_δ|q(ν)^⊤|_g)|s|^2+1/2∫_Σ⟨B s,s ⟩,where the last equality is by the definition of N_0. By the definition of cosθ, we obtain∫_Ω(-|Ds|^2+|s|^2+1/2(μ-|J|)|s|^2)≤ -1/2∫_Σ (H+(tr_Σ q)cosθ-sinθ|q(ν)^⊤|_g)|s|^2+1/2∫_Σ⟨B s,s ⟩.By using Proposition <ref> (3), the proof is complete.With the dominant energy conditions (<ref>) and (<ref>), we give the proof for Theorem <ref>.In this case, Proposition <ref>, <ref> and Proposition <ref> hold. Following the same proof as in Proposition <ref>, we obtain an s = (s_1, …, s_m) satisfying∇_e_i s + 12ω_N_0 c( q(e_i)) s = 0and subject to the boundary conditionω_N c (ν) s = salong ∂Ω. Since each component of s is nonzero everywhere, using Proposition <ref>, (<ref>) and (<ref>), we see that the identities in (<ref>) and (<ref>) must be achieved under both Matching and Acute Angle Hypothesis. We can still show that the components of s are linearly independent as in Proposition <ref>. It might not hold that the components of s form an orthonormal basis. As for the dihedral angles, the proof is the same with Theorem <ref> under the Acute Angle Hypothesis.Note that our condition (<ref>) is weaker than the mean curvature condition H-|tr_gq-q(ν,ν)| ≥0 used in <cit.>. However, due to the nature of the connection (<ref>), it doesn't seem easy to obtain that the polytopal initial data set can be locally embedded as a spacelike slice of the Minkowski space. 
Nonetheless, we can prove the following that the Gauss and Codazzi equation are satisfied in at least n-1 directions.For any x∈Ω and X,Y∈ T_xΩ, there exists a (n-1)-dimensional linear subspace F_x⊂ T_xΩ such thatR(X,Y,V,W)+q(V,X)q(W,Y)-q(V,Y)q(W,X)=0 and (_Xq)(V,Y)-(_Yq)(V,X)=0for any V,W∈ F_x.We have a smooth section s=(s_1,…,s_m) of E defined on Ω and satisfies _e_ks+1/2ω_N_0c(ı q(e_k))s=0,and {s_1,⋯,s_m} is a basis of spinors. For any x∈Ω and let {e_i}_1≤ i≤ n be an orthonormal basis of (T_xΩ,g), one has0 =_e_k( _e_ls+1/2ω_N_0c(ı q(e_l))s)-( __e_ke_ls_α+1/2ω_N_0c(ı q(_e_ke_l))s) -1/2c(ı q(e_l))ω_N_0( _e_ks_α+1/2ω_N_0c(ı q(e_k))s)=_e_k_e_ls-__e_ke_ls+1/4c(q(e_l))c(q(e_k))s+1/2ω_N_0c(ı (_e_kq)(e_l))sThis implies 0 =_e_k_e_ls-_e_l_e_ks-_[e_k,e_l]s -1/4∑_i,j=1^n(q(e_k,e_i)q(e_l,e_j)-q(e_k,e_j)q(e_l,e_i))c(e_i)c(e_j)s +ı/2ω_N_0∑_j=1^n( (_e_kq)(e_j,e_l)-(_e_kq)(e_j,e_l))c(e_j)s.Hence0 =-1/4∑_i,j=1^n(- ⟨ R(e_k,e_l)e_i,e_j⟩+q(e_k,e_i)q(e_l,e_j)-q(e_k,e_j)q(e_l,e_i))c(e_i)c(e_j)s +ı/2ω_N_0∑_j=1^n( (_e_kq)(e_j,e_l)-(_e_kq)(e_j,e_l))c(e_j)s=-1/4∑_i,j=1^n(R(e_k,e_l,e_i,e_j)+q(e_k,e_i)q(e_l,e_j)-q(e_k,e_j)q(e_l,e_i))c(e_i)c(e_j)s +ı/2ω_N_0∑_j=1^n( (_e_kq)(e_j,e_l)-(_e_kq)(e_j,e_l))c(e_j)s. Therefore,(-AI_m+Bω_N_0)s= ( -∑_i<jA_ijc(e_i)c(e_j)+ω_N_0∑_j=1^n B_j c(ı e_j))s=0,where B_j:= (_e_kq)(e_j,e_l)-(_e_lq)(e_j,e_k), A_ij=R(e_k,e_l,e_i,e_j)+q(e_k,e_i)q(e_l,e_j)-q(e_k,e_j)q(e_l,e_i) andA=∑_i<jA_ijc(e_i)c(e_j), B=∑_j=1^n B_j c(ı e_j).Note that B is self-adjoint andB^2=∑_j=1^nB_j^2·id=|B|^2id∈End(S_x). SinceAs_α=Bs_α, As_β=-Bs_β for any 1≤α≤m/2<β≤ m,one has⟨ 2As_β,s_γ⟩ =⟨ As_β-Bs_β,s_γ⟩=⟨ s_β,-(A+B)s_γ⟩=0,which follows that As_β=-Bs_β∈Span{S_1}= Span{s_1,⋯,s_m/2}.Similarly,As_α=Bs_α∈Span{S_2}= Span{s_m/2+1,⋯,s_m}.Hence for any s∈{s_1,⋯,s_m}, one hasA^2s=-B^2s=-|B|^2swhich follows that(1+σ)(∑_i<j,k<lA_ijA_kle_i e_j e_k e_l+|B|^2)=0.By considering the coefficient of constant terms, we have-∑_i<jA_ij^2+|B|^2=0. Note that A_ij=-A_ji, i.e., (A_ij) is a real skew-symmetric matrix, there exists an orthonormal basis {E_i}_1≤ i≤ n such that ∑_i<jA_ije_ie_j=2λ_1 E_1E_2+2λ_2 E_3 E_4+⋯+2λ_r E_2r-1E_2r.By considering the coefficients of E_2i-1E_2i E_2j-1E_2j, i<j, in (<ref>), one concludes that λ_iλ_j=0i≠ j.This follows that ∑_i<jA_ije_ie_j=2λ_i_0E_2i_0-1E_2i_0for some 1≤ i_0≤ r and4λ_i_0^2=|B|^2.By (<ref>), one has(-BAI_m+|B|^2ω_N_0)s=0.On the other hand, (-ABI_m-|B|^2ω_N_0)s=-ω_N_0A(-AI_m+Bω_N_0)s=0.Therefore, (AB+BA)s=0,which follows that(1+σ)(∑_k∉{2i_0-1,2i_0}4λ_i_0ı B_k'E_2i_0-1E_2i_0 E_k)=0,where we write B=∑_j=1^n B_j' ı c(E_k) in term of the orthonormal basis {E_k}_1≤ k≤ n. It follows thatλ_i_0=0 orB'_i=0for i≠ 2i_0-1,2i_0 If λ_i_0=0, then |B|^2=0, this implies B'_i=0for i≠ 2i_0-1,2i_0. Hence ∑_j=1^n B_je_j=B_2i_0-1'E_2i_0-1+B_2i_0'E_2i_0. In one word, for any x∈Ω and for any vectors X,Y in T_xΩ, we can find an orthogonal basis {E_i}_1≤ i≤ n such that R(X,Y,E_i,E_j)+q(X,E_i)q(Y,E_j)-q(X,E_j)q(Y,E_i)=0for any {E_i,E_j}⊄Span{E_2i_0-1,E_2i_0}, and(_Xq)(W,Y)-(_Yq)(W,X)=0for W=B'_2i_0E_2i_0-1-B'_2i_0-1E_2i_0 and any W∈Span{E_i,i≠ 2i_0-1,2i_0}. By takingF_x=Span{B'_2i_0E_2i_0-1-B'_2i_0-1E_2i_0, E_i, i≠ 2i_0-1,2i_0}, the proof is complete. §.§ Principal curvatures of the boundaryNow we calculate the boundary mean curvature.Let e_i be an orthonormal frame such that e_n = ν and let h be the second fundamental form h of Σ in Ω. We assume that i, j ≠ n. 
We have that∇_e_i (ω_N c (ν) s) =ω_N c (∇_e_iν) s + ω_N c (ν) ∇_e_i s =κ_i ω_N c ( e_i) s - 12ω_N c (ν) ω_N_0 c ( q (e_i)) swhere we again have used (<ref>) and (<ref>). Following from ω_N ω_N_0 + ω_N_0ω_N = 2 ⟨ N_0, N ⟩ I_m and c (e_i) c (ν) = - c (ν) c (e_i), we have∇_e_i (ω_N c (ν) s) = h_i jω_N c ( e_j) s + 12^2 ω_N ω_N_0 (2 ⟨ q (e_i), ν⟩ + c (q (e_i)) c (ν)) s = h_i jω_N c ( e_j) s - q (e_i, ν) ω_N ω_N_0 s + 12 (2 ⟨ N_0, N ⟩ - ω_N_0ω_N) c ( q (e_i)) c (ν) s = h_i jω_N c ( e_j) s - q (e_i, ν) ω_N ω_N_0 s + ⟨ N_0, N ⟩ c ( q (e_i)) ω_N s - 12ω_N_0 c ( q (e_i)) s.We differentiate (<ref>) in the direction e_i with i ≠ n, then∇_e_i (ω_N c (ν) s) =∇_e_i s = - 12ω_N_0 c( q (e_i)) s.Henceh_i jω_N c ( e_j) s = q (e_i,ν) ω_N ω_N_0 s - ⟨ N_0, N ⟩ c( q (e_i)) ω_N s.Soh_i j c ( e_j) s = q (e_i, ν)ω_N_0 s - ⟨ N_0, N ⟩ c( q (e_i)) s.Considering that s has at least one nonzero component, we take products on both sides with c ( e_k) s, we see thath_i j⟨ c ( e_j) s, c ( e_k) s ⟩ = q (e_i, ν) ⟨ω_N_0 c ( e_k) s, s ⟩ - ⟨ N_0, N ⟩⟨ c ( q (e_i)) s, c ( e_k) s ⟩= q (e_i, ν) ⟨ω_N_0^⊤ c ( e_k) s, s ⟩ - ⟨ N_0, N ⟩ q (e_i, e_k) |s|^2+ ⟨ N_0, N ⟩⟨ c ( e_k) c (q (e_i) - ⟨ q (e_i), e_k ⟩ e_k) s, s ⟩ .As we have shown earlier that ⟨ω_N_0^⊤ c ( e_i) s, s ⟩ is a real number, now taking the real part givesh_i k |s|^2 = - ⟨ N_0, N ⟩ q (e_i, e_k) |s|^2 + q (e_i, ν)⟨ω_N_0^⊤ c ( e_k) s,s ⟩ .Around a point in Σ, we can take an orthonormal basis {e_i } of the tangent space of Σ such that {h_i k + ⟨ N_0, N ⟩ q_i k} is diagonalized. So we can see thatq (e_i, ν) ⟨ω_N_0^⊤ c( e_k) s, s ⟩ = 0for all i ≠ k. Assume that q (e_i, ν) ≠ 0 for some i, then ⟨ω_N_0^⊤ c ( e_k) s, s ⟩ = 0 for all k ≠ i. Either we have that ⟨ω_N_0^⊤ c ( e_i) s, s ⟩ = 0 or ⟨ω_N_0^⊤ c ( e_i) s, s ⟩≠ 0. In the latter case q (e_k, ν) = 0 for all k ≠ i which gives q (ν) = q (e_i, ν) e_i. In the former case, we take trace of (<ref>), we get H+⟨ N_0, N ⟩tr_Σ q =0, considering the tilted dominant energy condition, we would have that q(ν)=0 at this point if sinθ≠ 0. alpha
http://arxiv.org/abs/2312.16022v1
{ "authors": [ "Xiaoxiang Chai", "Xueyuan Wan" ], "categories": [ "math.DG", "gr-qc", "53C24, 52B11, 15A66" ], "primary_category": "math.DG", "published": "20231226122352", "title": "Scalar curvature rigidity of parabolic convex polytopes in hyperbolic space" }
[2020] Primary: 60H15, 60H30, 35G05; Secondary: 60G15, 46F05

In this article, we study the existence and uniqueness problem for linear Stochastic PDEs involving a bilaplacian operator. Our results on existence and uniqueness are obtained through an application of a Monotonicity inequality, which we also prove here. As an application of these results, we also obtain a probabilistic representation of the solution of a linear PDE involving the bilaplacian operator.

Stochastic PDEs involving a bilaplacian operator
Suprio Bhar, Barun Sarkar

§ INTRODUCTION Let 𝒮 be the space of rapidly decreasing smooth functions on ℝ, called the Schwartz space (due to L. Schwartz <cit.>), and let 𝒮′ denote its dual space, called the space of tempered distributions. Let (Ω, ℱ, (ℱ_t)_t ≥ 0, ℙ) be a filtered probability space satisfying the usual conditions. We consider the following Stochastic PDE (SPDE) in 𝒮′: dX_t = L(X_t)dt + A(X_t)· dB_t, t ≥ 0; X_0 = Ψ, and the associated PDE in 𝒮′: ∂/∂ t u_t = L(u_t), t ≥ 0; u_0 = Ψ, where,* Ψ∈𝒮′ is an ℱ_0-measurable random variable,* {B_t}_t≥0 is a 2-dimensional standard Brownian motion with components given by B_t :=[ B_t^1; B_t^2 ], * L: 𝒮′→𝒮′, A: 𝒮′→𝒮′×𝒮′ are linear differential operators defined as follows: L(ϕ) := -κ^2/2 ∂^4 ϕ + σ^2/2 ∂^2 ϕ - b ∂ϕ, and A(ϕ) := (A_1 (ϕ), A_2 (ϕ)), with A_1, A_2: 𝒮′→𝒮′, such that A_1(ϕ) := -σ∂ϕ, A_2(ϕ) := κ∂^2 ϕ,* κ, σ, b are real constants, * AX_t·dB_t = A_1 X_t dB^1_t + A_2 X_t dB^2_t. In this paper, we prove the existence and uniqueness of strong solutions of the Stochastic PDE (<ref>) (see Theorem <ref>) and of the PDE (<ref>) (see Theorem <ref>) in 𝒮′. Moreover, we also obtain a stochastic representation of the solution to the PDE (<ref>). Our proof relies heavily on the Monotonicity inequality for (L, A) (see Corollary <ref>), involving a bilaplacian operator. We now present a brief literature survey related to our discussion.* The bilaplacian operator appears in many practical models, for example, membrane models <cit.>, sandpile models <cit.>, and the references therein. * The Monotonicity inequality, originally introduced in <cit.>, has been of considerable interest in the study of Stochastic PDEs <cit.>. In the case when L is a second order linear differential operator, a Monotonicity inequality for (L, A) was first established for constant coefficient differential operators in <cit.> and then for some variable coefficients in <cit.>. A related inequality, called the Coercivity inequality, has also been part of the Stochastic PDE literature when using variational methods (see <cit.>). * Probabilistic representations of solutions to the heat equation in 𝒮′ have been studied in <cit.>; here we study probabilistic representations of solutions to a fourth order PDE. There are other well-known probabilistic representations of solutions to PDEs, for example the Feynman-Kac type representation for the Cauchy Problem <cit.> and the Kolmogorov Backward equation <cit.>.
* Stochastic PDEs, and in general Stochastic Analysis have applications to many areas of study such as Statistical Physics, Financial Mathematics and Mathematical Biology (see <cit.>).Our results have been stated (and proved) in one-dimensional setting, for example the tempered distributions we consider are on . It may be possible to extend these results to higher dimensions involving tempered distributions on ^d.This article is organized as follows. In Section <ref>, we describe the notations and main results. Section <ref> is devoted to the proof of Monotonicity inequality for (L, A) (see Theorem <ref> and Corollary <ref>) and in Section <ref>, we apply the Monotonicity inequality to obtain the existence and uniqueness of strong solution of the Stochastic PDE (<ref>) (see Theorem <ref>) and the PDE (<ref>) (see Theorem <ref>) along with the probabilistic representation of the said PDE.§ NOTATIONS AND MAIN RESULTS §.§ Topology on Schwartz spaceFor p ∈, consider the increasing norms ·_p, defined by the inner products (see <cit.>)[p]fg:=∑_k=0^∞(2k + 1)^2p[0]fh_k[0]gh_k , f,g∈.Here, [0]·· is the usual inner product in ℒ^2() and {h_k}_k = 0^∞ is an orthonormal basis for ℒ^2() given by the Hermite functionsh_k(t)=(2^k k! √(π))^-1/2exp{-t^2/2}H_k(t),where H_k's are the Hermite polynomials. We define the Hermite-Sobolev spaces _p, p ∈ as the completion ofin ·_p. Note that the dual space _p^' is isometrically isomorphic with _-p for p≥ 0. We also have = ⋂_p(_p,·_p), ^'=⋃_p>0(_-p,·_-p) and _0 = ℒ^2(). We state the natural inclusions involving these spaces. For 0<q<p,⊂_p ⊂_q ⊂ℒ^2() = _0 ⊂_-q⊂_-p⊂^'. We also recall the distributional derivative operator on the space of tempered distributions. Consider the derivative map denoted by ∂ = d/dx:→. We can extend this map by duality to ∂:^'→^' as follows: for ψ∈^', ∂ψ∈^' is defined by∂ψϕ := -ψ∂ϕ,∀ϕ∈.The operator ∂ acts on the basis of Hermite functions as follows (see <cit.>):∂h_n(x) = √(n/2)h_n-1(x) - √(n+1/2)h_n+1(x),∀ n ∈_+.For notational convenience, we adopt the notation that h_n ≡ 0 whenever n < 0. As a consequence of the above recurrence relation, we have that ∂ : _p →_p - 1/2 is a bounded linear operator for any p ∈. Moreover, ∂^2 : _p→_p - 1 and ∂^4 : _p→_p - 2 are also bounded linear operators for any p ∈. In particular, we have the following boundedness of the operators L and A (as in (<ref>) and (<ref>)). The operators A_1: _p →_p - 1/2, A_2: _p →_p - 1 and L:_p →_p - 2 is bounded for any p ∈. In particular, all these operators are bounded if considered as mappings from _p to _p - 2 for any p ∈. §.§ Monotonicity inequalityIn this subsection, we discuss new results concerning the Monotonicity inequality involving the operators (L, A). The next result is the first main result of this article. Fix p ∈. Then, there exists a constant C = C(p) > 0, such that - [p]ϕ∂^4ϕ + ∂^2ϕ_p^2 ≤ C ϕ_p^2,∀ϕ∈.Moreover, by density arguments, the inequality is true for all ϕ∈_p + 2. Note that the case p = 0 in the above theorem follows from an integration by parts argument. As a consequence of the above result and <cit.>, we obtain the following inequality involving L and A. Fix p ∈. Then, there exists a constant C = C(p, κ, σ, b) > 0, such that 2[p]ϕLϕ + Aϕ_HS(p)^2 ≤ C ϕ_p^2,∀ϕ∈,where Aϕ_HS(p)^2 := A_1ϕ_p^2 + A_2ϕ_p^2, ∀ϕ∈. Moreover, by density arguments, the inequality is true for all ϕ∈_p + 2. Proofs of these results are discussed in Section <ref>. §.§ Applications to Stochastic PDEsWe state the notion of solution used in this article. Let p ∈ and Ψ∈_p. 
We say that an (_t)_t ≥ 0 adapted _p valued continuous process {X_t}_t is a strong solution of the Stochastic PDE <ref> if it satisfies the equality X_t =Ψ + ∫_0^t L(X_s)ds + ∫_0^t A(X_s)· dB_s, t ≥ 0in some _q with q ≤ p. In this case, we say that {X_t}_t is an _p valued strong solution of (<ref>) with equality in _q. We have the following existence and uniqueness for the Stochastic PDE (<ref>). The proof is discussed in Section <ref>. Let p ∈ and Ψ∈_p such that Ψ_p^2<∞. Then, there exists a unique _p valued solution of the Stochastic PDE (<ref>) with equality in _p - 2.§.§ Probabilistic representation for the fourth-order PDEWe introduce the notion of solution for the PDE used in this article. Let p ∈ and Ψ∈_p. We say that an _p valued continuous {u_t}_t is a strong solution of the PDE <ref> if it satisfies the equality u_t =Ψ + ∫_0^t L(u_s)ds, t ≥ 0in some _q with q ≤ p. In this case, we say that {u_t}_t is an _p valued strong solution of (<ref>) with equality in _q. We have the following existence and uniqueness for the PDE (<ref>). To the best of our knowledge, the probabilistic representation for the solution is a new result. The proof is discussed in Section <ref>. Let p ∈ and the initial condition Ψ∈_p is deterministic. Then, there exists a unique _p valued strong solution {u_t}_t of the PDE (<ref>) with equality in _p - 2. Moreover, u_t =X_t, ∀ t ≥ 0, where {X_t}_t is as in Theorem <ref>.§ PROOF OF MONOTONICITY INEQUALITY (THEOREM <REF> AND COROLLARY <REF>)This section is devoted to the proof of Theorem <ref>. We require several lemmas for our final argument.Consider the linear operators U_m, m ∈_+ ondescribed on the basis of Hermite functions as follows: U_m h_n := h_n-m, ∀ n ≥ 0, m ≥ 0. We continue to use the notational convention that h_n ≡ 0 whenever n < 0. Arguing as in <cit.>, we conclude the following result. U_m, m ∈_+ are bounded linear operators on (, ·_p) and hence extends to bounded linear operators, again denoted by U_m, on _p.Iterating the relation (<ref>), we obtain the following result. We have, ∂^2h_n(x)= √(n(n-1))/2h_n-2(x) - 2n+1/2h_n(x) + √((n+1)(n+2))/2 h_n+2(x), ∂^3h_n(x)= √(n(n-1)(n-2))/2√(2)h_n-3(x)- 3n√(n)/2√(2)h_n-1(x)+ 3(n+1)√(n+1)/2√(2)h_n+1(x) - √((n+1)(n+2)(n+3))/2√(2) h_n+3(x), ∂^4h_n(x)= √(n(n-1)(n-2)(n-3))/4h_n-4(x) - (2n-1)√(n(n-1))/2h_n-2(x)+ 3n^2+ 3(n+1)^2/4h_n(x) - (2n+3)√((n+1)(n+2))/2h_n+2(x)+ √((n+1)(n+2)(n+3)(n+4))/4 h_n+4(x). We shall also require the following result. We choose an analytic branch of z ↦ z^2p in a domain containing the positive real axis. We consider the following functions f_j, j = 1, 2, 3, 4 defined on a neighbourhood of 0, say B(0, δ) = {z ∈: |z| < δ} for some δ > 0, sufficiently small. We takef_1(z):= ( 2 - 3z/2 + z)^2p + ( 2 + 5z/2 + z)^2p - 2,f_2(z):=2( 2 + 5z/2 + z)^2p - 1 - ( 2 + 9z/2 + z)^2p,f_3(z):= -( 2 - 3z/2 + z)^2p + 3( 2 + 5z/2 + z)^2p - 2,f_4(z):= ( 2 + 5z/2 + z)^2p - 1.Then, there exists analytic functions g_j, j = 1, 2, 3, 4 on B(0, δ) with g_j(0) ≠ 0, j = 1, 2, 3, 4 such that for all z ∈ B(0, δ), f_j(z) = z^2 g_j(z), j = 1, 2 and f_j(z) = zg_j(z), j = 3, 4. In particular, there exists a constant C > 0 such that for large positive integers k, we have|f_1(1/k)| ≤C/k^2,|f_2(1/k)| ≤C/k^2,|f_3(1/k)| ≤C/k, |f_4(1/k)| ≤C/k. The argument is similar to <cit.>.We note that f_1(0) = f_1^'(0) = f_2(0) = f_2^'(0) = f_3(0) = f_4(0) = 0 with f_1^''(0) ≠ 0, f_2^''(0) ≠ 0, f_3^'(0) ≠ 0 and f_4^'(0) ≠ 0. 
Consequently, we get the existence of g_j's such that for all z ∈ B(0, δ), f_j(z) = z^2 g_j(z), j = 1, 2 and f_j(z) = zg_j(z), j = 3, 4 with g_j(0) ≠ 0, j = 1, 2, 3, 4. Using the bounds for g_j's on B(0, δ/2) = {z ∈: |z| ≤δ/2}, we get (<ref>). We organize the proof by splitting it in three steps. In the first two steps we expand the terms [p]ϕ∂^4ϕ and ∂^2ϕ_p^2, respectively, on the basis of Hermite functions. In the third and final step, we combine all the intermediate computations and draw the conclusion.Step 1: We start with the term [p]ϕ∂^4ϕ. Any ϕ∈ can be expressed asϕ=∑_n=0^∞ϕ_nh_n, with ϕ_n∈, ∀ n. We use the notational convention that ϕ_n ≡ 0 whenever n < 0.By Lemma <ref>, we have ∂^4ϕ = ∑_n=0^∞ϕ_n (∂^4 h_n) = ∑_n=0^∞ϕ_n {√(n(n-1)(n-2)(n-3))/4h_n-4 - (2n-1)√(n(n-1))/2h_n-2 + 3n^2+ 3(n+1)^2/4h_n - (2n+3)√((n+1)(n+2))/2h_n+2 + √((n+1)(n+2)(n+3)(n+4))/4 h_n+4} = ∑_n=0^∞ϕ_n { U_4 A_4 h_n + U_2 A_2 h_n + U_0 A_0 h_n + U_-2 A_-2 h_n + U_-4 A_-4 h_n},where, the linear operators A_4, A_2, A_0, A_-2 and A_-4 onare described on the basis of Hermite functions as follows:A_4 h_n := √(n(n-1)(n-2)(n-3))/4 h_n, A_2 h_n := - (2n-1)√(n(n-1))/2 h_n, A_0 h_n := 3n^2+ 3(n+1)^2/4h_n, A_-2 h_n := - (2n+3)√((n+1)(n+2))/2 h_n, A_-4 h_n := √((n+1)(n+2)(n+3)(n+4))/4 h_n.Therefore,[p]ϕ∂^4ϕ = ∑_k,m=0^∞{[p]ϕ_kh_kϕ_m U_4 A_4 h_m + [p]ϕ_kh_kϕ_m U_2 A_2 h_m} + ∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_0 A_0 h_m + ∑_k,m=0^∞{[p]ϕ_kh_kϕ_m U_-2 A_-2 h_m + [p]ϕ_kh_kϕ_m U_-4 A_-4 h_m} Now, consider the terms of (<ref>) individually.*∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_4 A_4 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_kh_kh_i[]ϕ_m U_4 A_4 h_mh_i = ∑_i=0^∞(2i+1)^2p ϕ_iϕ_i+4 √((i+4)(i+3)(i+2)(i+1))/4 *∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_2 A_2 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_kh_kh_i[]ϕ_m U_2 A_2 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i ϕ_i+2(2i+3)√((i+2)(i+1))/2 *∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_0 A_0 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_kh_kh_i[]ϕ_m U_0 A_0 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i^2 3i^2+ 3(i+1)^2/4 *∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_-2 A_-2 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_kh_kh_i[]ϕ_m U_-2 A_-2 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i ϕ_i-2(2i-1)√(i(i-1))/2 = - ∑_l=0^∞ (2l+5)^2pϕ_l+2ϕ_l (2l+3)√((l+2)(l+1))/2[putting i-2=l]*∑_k,m=0^∞[p]ϕ_kh_kϕ_m U_-4 A_-4 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_kh_kh_i[]ϕ_m U_-4 A_-4 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i ϕ_i-4√((i-3)(i-2)(i-1)i)/4 = ∑_l=0^∞ (2l+9)^2pϕ_l+4ϕ_l √((l+1)(l+2)(l+3)(l+4))/4[putting i-4=l]Step 2: We now look at the term ∂^2ϕ_p^2. By Lemma <ref>, ∂^2ϕ= ∑_n=0^∞ϕ_n (∂^2 h_n)= ∑_n=0^∞ϕ_n {√(n(n-1))/2h_n-2 - 2n+1/2h_n + √((n+1)(n+2))/2 h_n+2} = ∑_n=0^∞ϕ_n {U_2 B_2 h_n + U_0 B_0 h_n + U_-2 B_-2 h_n},where, the linear operators B_2, B_0 and B_-2 onare described on the basis of Hermite functions as follows:B_2 h_n := √(n(n-1))/2 h_n, B_0 h_n := - 2n+1/2 h_n, B_-2 h_n:= √((n+1)(n+2))/2 h_n.Therefore,∂^2ϕ_p^2 = [p]∂^2ϕ∂^2ϕ = ∑_k,m=0^∞{[p]ϕ_k U_2 B_2 h_kϕ_m U_2 B_2 h_m + [p]ϕ_k U_0 B_0 h_kϕ_m U_0 B_0 h_m + [p]ϕ_kU_-2 B_-2 h_kϕ_m U_-2B_-2 h_m} + ∑_k,m=0^∞{[p]ϕ_k U_2 B_2 h_kϕ_m U_0 B_0 h_m + [p]ϕ_k U_2 B_2 h_kϕ_m U_-2 B_-2 h_m} + ∑_k,m=0^∞{[p]ϕ_k U_0 B_0 h_kϕ_m U_2 B_2 h_m + [p]ϕ_k U_0 B_0 h_kϕ_m U_-2 B_-2 h_m} + ∑_k,m=0^∞{[p]ϕ_k U_-2 B_-2 h_kϕ_m U_2 B_2 h_m + [p]ϕ_k U_-2 B_-2 h_kϕ_m U_0 B_0 h_m}Now, consider the terms of (<ref>) individually. *∑_k,m=0^∞[p]ϕ_k U_2 B_2 h_kϕ_m U_2 B_2 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_2 B_2 h_kh_i[]ϕ_m U_2 B_2 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i+2^2 (i+2)(i+1)/4 = ∑_l=0^∞ (2l-3)^2pϕ_l^2 l(l-1)/4We note here that the terms for l = 0 and l = 1 do not contribute to the above sum. However, we carry these terms for notational convenience. 
*∑_k,m=0^∞[p]ϕ_k U_0 B_0 h_kϕ_m U_0 B_0 h_m =∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_0 B_0 h_kh_i[]ϕ_m U_0 B_0 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i^2 (2i+1)^2/4 *∑_k,m=0^∞[p]ϕ_k U_-2 B_-2 h_kϕ_m U_-2 B_-2 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_-2 B_-2 h_kh_i[]ϕ_m U_-2 B_-2 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i-2^2 i(i-1)/4 = ∑_l=0^∞ (2l+5)^2pϕ_l^2 (l+2)(l+1)/4 *∑_k,m=0^∞[p]ϕ_k U_2 B_2 h_kϕ_m U_0 B_0 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_2 B_2 h_kh_i[]ϕ_m U_0 B_0 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i+2ϕ_i (2i+1)√((i+2)(i+1))/4 *∑_k,m=0^∞[p]ϕ_k U_2 B_2 h_kϕ_m U_-2 B_-2 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_2 B_2 h_kh_i[]ϕ_m U_-2 B_-2 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i+2√((i+2)(i+1))/2ϕ_i-2√(i(i-1))/2 = ∑_l=0^∞ (2l+5)^2pϕ_lϕ_l+4√((l+1)(l+2)(l+3)(l+4))/4 *∑_k,m=0^∞[p]ϕ_k U_0 B_0 h_kϕ_m U_2 B_2 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_0 B_0 h_kh_i[]ϕ_m U_2 B_2 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i ϕ_i+2(2i+1)√((i+2)(i+1))/4 *∑_k,m=0^∞[p]ϕ_k U_0 B_0 h_kϕ_m U_-2 B_-2 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_0 B_0 h_kh_i[]ϕ_m U_-2 B_-2 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i 2i+1/2ϕ_i-2√(i(i-1))/2 = - ∑_l=0^∞ (2l+5)^2pϕ_l+2ϕ_l (2l+5)√((l+2)(l+1))/4*∑_k,m=0^∞[p]ϕ_k U_-2 B_-2 h_kϕ_m U_2 B_2 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_-2 B_-2 h_kh_i[]ϕ_m U_2 B_2 h_mh_i = ∑_i=0^∞ (2i+1)^2pϕ_i-2√(i(i-1))/2ϕ_i+2√((i+2)(i+1))/2 = ∑_l=0^∞ (2l+5)^2pϕ_l ϕ_l+4√((l+1)(l+2)(l+3)(l+4))/4 *∑_k,m=0^∞[p]ϕ_k U_-2 B_-2 h_kϕ_m U_0 B_0 h_m = ∑_k,m,i=0^∞ (2i+1)^2p[]ϕ_k U_-2 B_-2 h_kh_i[]ϕ_m U_0 B_0 h_mh_i = - ∑_i=0^∞ (2i+1)^2pϕ_i-2√(i(i-1))/2ϕ_i 2i+1/2 = - ∑_l=0^∞ (2l+5)^2pϕ_l ϕ_l+2(2l+5)√((l+2)(l+1))/4Step 3: We now justify the bound on the expression `- [p]ϕ∂^4ϕ + ∂^2ϕ_p^2'. To do this, we first add up all the terms in (<ref>)-(<ref>) and (<ref>)-(<ref>) and then collect terms involving ϕ_l^2, ϕ_lϕ_l+2 and ϕ_lϕ_l+4 separately. We have, - [p]ϕ∂^4ϕ + ∂^2ϕ_p^2= ∑_l=0^∞ (2l+1)^2pϕ_l^2 [l^2/4{( 2l - 3/2l + 1)^2p + ( 2l + 5/2l + 1)^2p - 2}. + l/4{- ( 2l - 3/2l + 1)^2p + 3 ( 2l + 5/2l + 1)^2p - 2} + .1/2{( 2l + 5/2l + 1)^2p - 1}] + ∑_l=0^∞ (2l+1)^2pϕ_l ϕ_l+2√((l+1)(l+2))[1 - ( 2l + 5/2l + 1)^2p]+ ∑_l=0^∞(2l+1)^2p ϕ_lϕ_l+4 √((l+4)(l+3)(l+2)(l+1))/4[2( 2l + 5/2l + 1)^2p - 1 - ( 2l + 9/2l + 1)^2p]= ∑_l=0^∞(2l+1)^2p[]ϕh_l[ a_l []ϕh_l + b_l []ϕU_2 h_l + c_l []ϕU_4 h_l],where the sequences {a_l}_l = 0^∞, {b_l}_l = 0^∞ and {c_l}_l = 0^∞ are defined bya_l:= l^2/4{( 2l - 3/2l + 1)^2p + ( 2l + 5/2l + 1)^2p - 2}+ l/4{- ( 2l - 3/2l + 1)^2p + 3 ( 2l + 5/2l + 1)^2p - 2}+ 1/2{( 2l + 5/2l + 1)^2p - 1},b_l:= √((l+1)(l+2))[1 - ( 2l + 5/2l + 1)^2p],c_l:= √((l+4)(l+3)(l+2)(l+1))/4[2( 2l + 5/2l + 1)^2p - 1 - ( 2l + 9/2l + 1)^2p].In the expression for a_l, the term ( 2l - 3/2l + 1)^2p for l = 0 and l = 1 should be treated as 0 (see (<ref>)). By Lemma <ref>, the above sequences are bounded. Also, by Lemma <ref>, U_2 and U_4 are bounded linear operators on _p. Hence, the result follows from (<ref>). Note that 2[p]ϕLϕ + Aϕ_HS(p)^2 = κ^2 ( - [p]ϕ∂^4ϕ + ∂^2ϕ_p^2 ) + ( [p]ϕσ^2 ∂^2 ϕ - 2b ∂ ϕ + -σ∂ϕ_HS(p)^2 ).The result follows as a consequence of Theorem <ref> and <cit.>. § PROOFS OF THEOREM <REF> AND THEOREM <REF>Note that the operators A_1, A_2 and L leaveinvariant. Moreover, by Lemma <ref>, A_1, A_2 and L are bounded linear operators from _p to _p - 2.By Corollary <ref>, we have 2[p]ϕLϕ + Aϕ_HS(p)^2 ≤ C ϕ_p^2,∀ϕ∈and2[p-2]ϕLϕ + Aϕ_HS(p-2)^2 ≤ C ϕ_p - 2^2,∀ϕ∈.By <cit.>, the Stochastic PDE (<ref>) has a unique _p valued strong solution with equality in _p - 2. 
We note that the first monotonicity inequality is used to show the existence of a strong solution, and the second one for uniqueness. This proof is similar to <cit.>. Let {X_t}_t denote the solution obtained in the proof of Theorem <ref>. Here, we have used <cit.>, which in particular implies that 𝔼sup_t ∈ [0, T]X_t_p^2 < ∞, ∀ T > 0. Using the boundedness of the operator A (see Lemma <ref>), we have the martingale property of the process {∫_0^t A(X_s)· dB_s}_t. Setting u_t := 𝔼[X_t], ∀ t ≥ 0, and taking expectation in (<ref>), we conclude that {u_t}_t is a strong solution of the PDE (<ref>). To prove the uniqueness, let {ũ_t}_t be another _p valued strong solution with equality in _p - 2. Then, by the Monotonicity inequality (<ref>), we have u_t - ũ_t_p - 2^2 = 2∫_0^t [ [p - 2]u_s - ũ_sL(u_s - ũ_s)] ds ≤∫_0^t [ 2[p - 2]u_s - ũ_sL(u_s - ũ_s) + A(u_s - ũ_s)_HS(p - 2)^2] ds ≤ C∫_0^t u_s - ũ_s_p - 2^2 ds for some constant C > 0. The uniqueness then follows by Gronwall's inequality. Acknowledgement: Suprio Bhar acknowledges the support of the Matrics grant MTR/2021/000517 from the Science and Engineering Research Board (Department of Science & Technology, Government of India). Barun Sarkar acknowledges the support of SERB project SRG/2022/000991, Government of India.
http://arxiv.org/abs/2312.16550v1
{ "authors": [ "Suprio Bhar", "Barun Sarkar" ], "categories": [ "math.PR", "math.AP", "Primary: 60H15, 60H30, 35G05, Secondary: 60G15, 46F05" ], "primary_category": "math.PR", "published": "20231227122104", "title": "Stochastic PDEs involving a bilaplacian operator" }
Dynamic model of tissue electroporation on the basis of biological dispersion and Joule heating

[email protected] Institute of Biomedical Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
Institute of Biomedical Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
Institute of Biomedical Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
Department of Control, Automation and Computer Engineering, Federal University of Santa Catarina, Blumenau, SC, Brazil
Institute of Biomedical Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil

Electroporation is a complex, iterative, and nonlinear phenomenon that is often studied by numerical simulations. In recent years, tissue electroporation simulations have been performed using static models. However, the results of a static model simulation are restricted to a fixed protocol signature of the pulsed electric field. This paper describes a novel dynamic model of tissue electroporation that also includes tissue dispersion and temperature to allow time-domain simulations. We implemented the biological dispersion of potato tubers and thermal analysis in commercial finite element method software. A cell electroporation model was adapted to account for the increase in tissue conductivity. The model yielded twelve parameters, divided into three dynamic states of electroporation. Thermal analysis describes the dependence of tissue conductivity on temperature. The model parameters were evaluated using experiments with vegetal tissue (Solanum tuberosum) under electrochemotherapy protocols. The proposed model can accurately predict the conductivity of tissue under electroporation from 10 to 100. A negligible thermal effect was observed at 100, with a 0.89 increase. We believe that the proposed model is suitable for describing the electroporation current on a tissue scale and also for providing a hint of the effects on the cell membrane.

D. O. H. Suzuki January 14, 2024

§ INTRODUCTION Electroporation occurs when the transmembrane potential (TMP) of a biological cell exceeds a supraphysiological threshold by stimulation of short but intense pulsed electric fields (PEF). Excessive TMP causes local disturbances in the membrane structure. Current electroporation theory suggests that prepores are formed <cit.>. The prepores then expand and stabilise as hydrophilic pores. Pore formation increases cell membrane permeability, allowing non-permeable substances to cross the cell barrier <cit.>. Extracellular content can access the cytosol, or intracellular content can leak and even trigger apoptosis by losing homeostasis <cit.>. There are medical and industrial applications that use electroporation to improve or replace traditional processes <cit.>. The occurrence of electroporation depends on the distribution of the electric field, which in turn depends on the nonlinear electrical properties of the tissue. Opening the pores in the membrane changes the structure of the material and its electrical properties <cit.>. The system is interdependent and leads to a complex model. For this reason, electroporation is often studied in detail by computer simulations. Electrochemotherapy is a well-known application of electroporation to catalyse the membrane transport of chemotherapeutic drugs <cit.>.
The technique relies on standard protocols to ensure that the entire tumour is exposed above a minimum electric field threshold for electroporation to occur. The European Standard Operating Procedures for Electrochemotherapy (ESOPE) recommend a burst of eight rectangular pulses (monopolar, bipolar, or alternating) 100 long with a repetition rate between 1 and 5 <cit.>. Electrochemotherapy studies usually do not focus on the dynamics of pore formation, but rather on the final electric field distribution and outcomes. Static models are often used to reduce computational costs <cit.>. However, a static electroporation model is developed for a specific PEF signature and cannot be directly used to study different signatures. If the PEF signature is changed, the static model needs to be adjusted. The reason is that a PEF has a specific energy spectral density that affects the dynamics of electroporation, Joule heating, and the dispersive behaviour of biological media <cit.>. To overcome this limitation, dynamic models should be used. A dielectric dispersive medium is characterised by the dependence of its electrical properties on frequency. Biological tissue exhibits a strong dispersion from DC to hundreds of . There are four main dispersion bands in biological tissue: α (from DC to about 10), β (between 100 and 10), γ (at 20), and δ (between β and γ). Tissue electroporation uses square-wave PEF with a broad spectral distribution. For this reason, including biological dispersion is essential to simulate the biological medium accurately. Hence, a dynamic model of tissue electroporation should be developed on the basis of biological dispersion <cit.>. Proposals for dynamic tissue models have already been published <cit.>. Some address irreversible electroporation and consider the dynamics of the temperature rise and its effect on tissue conductivity, but they do not consider time-dependent changes in electrical properties due to electroporation <cit.>. On the other hand, three models do consider time-dependent changes in electrical properties due to electroporation <cit.>. Yet, they rely on some modelling simplifications: one did not consider biological dispersion <cit.>, while the other two consider only β-dispersion <cit.>. Nevertheless, it is known that for most electroporation PEF protocols, more than 95% of the spectral energy is below 100 <cit.>, which is in the α-dispersion band. Although the three models <cit.> can accurately describe the shape of the electric current during tissue electroporation, some of their parameters are adjusted according to the experimental setup, such as the voltage and the shape of the electrode. These adjustments hamper the use of the models when the experimental setup is changed, which is common practice in PEF research, because the input parameters must then be redefined. Furthermore, the three models were developed using custom numerical solvers and are not immediately implementable. In this paper, we present a novel dynamic model of tissue dielectric properties during PEF that accounts for three individual physical effects: electroporation, tissue dispersion, and temperature. We describe tissue dispersion using a multipole Debye function implemented in the time domain by the method of auxiliary differential equations. The electroporation effect was described using an electric-field-to-TMP relation. The time-dependent increase in tissue conductivity was evaluated by extrapolating a kinetic model of cell electroporation.
In addition, the conductivity dependence on temperature was included. We used in vitro potato tuber (Solanum tuberosum) to collect data and implement the model using commercial finite element method (FEM) software.§ METHODS§.§ Model DevelopmentWe solve the models using numerical simulations. We used the commercial software COMSOL Multiphysics (COMSOL Inc., Stockholm, Sweden). COMSOL is both a FEM and a computational fluid dynamic (CFD) solver with a bundle of built-in physical equations. We used COMSOL's electric current library, which solves electrophysics for low-frequency signals using the FEM. In low-frequency electrophysics, COMSOL use the principle of charge conservation and solve the equation of continuity shown in Eq. <ref> and the electric current density given by Eq. <ref>. ∇⃗·J⃗(t) = Q_jJ⃗(t) = σE⃗(t) + ∂D⃗(t)/∂ t + J⃗_e(t) where Q_j is the total charge density () and J⃗(t) is the electric current density () whose components are given by the material conductivity σ (), the electric field E⃗ () and the displacement field D⃗ (). J⃗_e is an arbitrary external electric current density assumed by COMSOL to allow inclusion of external effects. The Maxwell equations define the displacement field as shown in Eq. <ref>, where ϵ_0 is the vacuum permittivity and ε_r is the relative permittivity of the material. D⃗(t) = ϵ_0 ε_r E⃗(t) §.§.§ Biological Dispersion A biological medium is a dielectric that has regions susceptible to polarisation and consequently to charge relaxation times. In the frequency domain, the relaxation effect is called dispersion and can be represented as a complex relative permittivity function. There are several models for describing the dispersion behaviour in biological materials. The Debye dispersion can be implemented in the time domain using a set of auxiliary differential equations <cit.>. Eq. <ref> represents the multipole Debye model in the frequency domain. ε_r^* (ω) = ε_∞ + σ_s/j ωϵ_0 + ∑_k=1^NΔε_k/1 + ( j ωτ_k ) where σ_s is the static conductivity, ω is the angular frequency (), ε_∞ is the permittivity of the material at high frequency, Δε_k is the permittivity variation of the pole, and τ_k is the relaxation time of the pole (). k and N are the current pole number and the total number of poles, respectively. j is the imaginary unit.The implementation of the multipole Debye model in the time domain followed our previously proposed method <cit.>, in which the dispersive effect is contained in an external electric current density for each Debye pole (J⃗_e_k), as shown in Eq. <ref>. Each current density is solved using an auxiliary electric field (e⃗_⃗k⃗) as shown in Eq. <ref>, which is a delayed value of the input electric field (E⃗) with a time constant corresponding to the relaxation of the Debye pole as shown in Eq. <ref>. This set of equations is implemented in COMSOL Multiphysics with the domain ordinary differential equation (DODE) physics. J⃗(t) = σ_s E⃗(t) + ϵ_0ε_∞∂E⃗(t)/∂ t + ∑_k=1^NJ⃗_e_k(t)J⃗_e_k(t) =ϵ_0Δε_k/τ_k( E⃗(t) - e⃗_⃗k⃗(t) )τ_k ∂e⃗_⃗k⃗(t)/∂ t= E⃗(t) - e⃗_⃗k⃗(t) We have previously described the dielectric spectrum of Solanum tuberosum tissue with different numbers of Debye poles <cit.>. The dielectric dispersion from 40 to 10 can be parameterised with a 4-pole Debye dispersion model. The parameters are shown in Table <ref>.§.§.§ Electroporation Electroporation dynamics was described on the basis of a modified version of the Leguèbe et al. <cit.> cell model. 
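Before turning to the pore-state equations, the auxiliary-differential-equation form of the multipole Debye dispersion given above can be illustrated with a short time-domain sketch in Python for the idealised case of a spatially uniform field between parallel plates; the full model is solved on the FEM mesh in COMSOL. All numerical values below are placeholders and are not the fitted dispersion parameters listed in the table above.

import numpy as np

eps0 = 8.854e-12
# placeholder 4-pole Debye parameters (assumed values, not the fitted ones)
sigma_s = 0.03                                        # S/m, static conductivity
eps_inf = 80.0                                        # high-frequency relative permittivity
d_eps   = np.array([1.0e7, 1.0e5, 1.0e3, 1.0e2])      # Delta-epsilon of each pole
tau     = np.array([1.0e-2, 1.0e-4, 1.0e-6, 1.0e-8])  # relaxation times in s

dt = 1.0e-8
t  = np.arange(0.0, 200e-6, dt)

# uniform field of a single ESOPE-like 100-us, 250-V pulse across the 5-mm gap, with 1-us ramps
V, gap = 250.0, 5.0e-3
E    = np.interp(t, [0.0, 1e-6, 100e-6, 101e-6, t[-1]], [0.0, V/gap, V/gap, 0.0, 0.0])
dEdt = np.gradient(E, dt)

e_k = np.zeros_like(tau)   # auxiliary (delayed) fields of the poles
J   = np.zeros_like(t)     # total current density, A/m^2
for i in range(len(t)):
    J_poles = eps0 * d_eps / tau * (E[i] - e_k)       # dispersive current of each pole
    J[i] = sigma_s * E[i] + eps0 * eps_inf * dEdt[i] + J_poles.sum()
    # exact update of tau_k de_k/dt = E - e_k when E is held constant over the step
    e_k = E[i] + (e_k - E[i]) * np.exp(-dt / tau)
# J is the dispersive baseline onto which the electroporation and thermal
# corrections of the following subsections are added.

In the tissue model, the same set of equations is entered in COMSOL through the domain ordinary differential equation (DODE) interface, with the local electric field of each mesh element taking the place of the uniform V/gap field used in this sketch.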
We modelled membrane electroporation using three states: prepore (P_0), initial pore (P_1), and expanded pore (P_2). Each state contributes in a specific way to the increase in membrane conductivity. In the cell model, the pore state is related to the TMP. In a macro-tissue model, we cannot access TMP directly because we do not have the membrane geometric model. For this reason, we proposed to calculate the TMP based on the magnitude of the local electric field (E). The concentrations of the pore states grow and decay exponentially, P_0 and P_1 as a function of the electric field and P_2 as a function of P_1. The proposed pore formation system was described using a set of differential Eqs. <ref> – <ref>. The notation in brackets indicates the concentration of the state. d [P_0]/dt= β_0(E) - [P_0]/τ_0d [P_1]/dt= β_1(E) - [P_1]/τ_1d [P_2]/dt=[P_1] - [P_2]/τ_2 τ_0 is a constant time. τ_1 and τ_2 depend on whether the function is growing or declining, as represented in Eqs. <ref> and <ref>. β_0 and β_1 describe the maximum concentration for the states P_0 and P_1 as a function of the magnitude of the electric field and are expressed in Eqs. <ref> and <ref>, respectively. This means that both functions can vary between zero and one. Zero means that no prepore (β_0) or pore (β_1) is formed. One means that the tissue has reached saturation for each phenomenon. Note that saturation does not mean that no new prepores or pores are formed, but that their increasing number above a certain threshold no longer has significant influence on the conductivity of the tissue. The maximum number of pores is usually not considered in cell electroporation models. The asymptotic model <cit.>, for example, does not define a limit value for the pore density. τ_1 = {τ_1G(1 - 0.5[P_0])if β_1(E) - [P_1] ≥ 0τ_1D otherwise.τ_2 = {τ_2G if [P_1] - [P_2] ≥ 0τ_2D otherwise.β_0(E) = 1/1 + e^-(|E| - E_0)/Δ E_0β_1(E) = 1/1 + e^-(|E| - E_1)/Δ E_1 where E_0 and E_1 are the central values of each logistic function, and Δ E_0 and Δ E_1 shape the slope. τ_1G and τ_2G are the characteristic times of [P_1] and [P_2]. τ_1D and τ_2D are the relaxation times of [P_1] and [P_2].The increase in the number of pores (P_1) would probably decrease the number of prepores (P_0); the same is true for increasing the size of the pores (P_2) and their initial shape (P_1). We have not included this state interaction in our model definition because we are working with concentrations and not absolute numbers. It is not expected that all prepores lead to a pore and that all pores expand. According to <cit.>, about only 2.2% of the pores will expand. We believe that its implementation would have little effect on the final result of the model while increasing the number of parameters.Since we cannot directly deal with membrane conductivity in tissue simulation, the increase in conductivity was implemented in tissue dispersion. There are works that deal with the effects of electroporation on biological dispersion <cit.>. Impedance analysis before and after PEF stimulation showed an increase in tissue conductivity throughout the spectrum. Permittivity is also affected, but to a lesser extent. To increase conductivity across the spectrum, we increased the static conductivity of the biological dispersion (σ_s) according to Eq. <ref>. σ_P = σ_s + σ_P_0[P_0] + σ_P_1[P_1] + σ_P_2[P_2] where σ_P_0, σ_P_1, and σ_P_2 are the increasing coefficients of the states P_0, P_1, and P_2, respectively. [P_0], [P_1], [P_2] are evaluated using Eqs. 
<ref> – <ref>.§.§.§ Thermal Dependence The electrical conductivity of biological tissue increases with temperature <cit.>. The electric current flowing through the tissue generates Joule heating. We simulated the temperature development in the sample during the pulse burst. The equation for heat diffusion with Joule heating term is presented in Eq. <ref>. ρ c_p ∂ T/∂ t - ∇·( k ∇ T ) = J⃗·E⃗ where T is the temperature (), ρ is the density of the material (), c_p is the heat capacity of the material at constant pressure (), and k is the thermal conductivity (). J⃗ and E⃗ are the electric current density and electric field calculated with Eqs. <ref> and <ref>. The thermophysical properties of Solanum tuberosum <cit.> and the electrode used during the experiments (316L Stainless Steel <cit.>) are given in Table <ref>.Temperature influences the increase in electroporation conductivity (Eq. <ref>) because it affects all components of the tissue. The conductivity temperature coefficient was defined as χ = 1.7E-2  <cit.>. Thus, the conductivity is adapted as follows. σ_T = σ_P(1 + χ (T - T_0) ) where T_0 is the initial temperature.After including thermal and electroporation effects, Eq. <ref> is adapted to Eq. <ref> and used to describe the apparent conductivity in the simulator. J⃗(t) = σ_T E⃗(t) + ϵ_0ε_∞∂E⃗(t)/∂ t + ∑_k=1^NJ⃗_e_k(t)§.§ In vitro experiment Potato tubers (Solanum tuberosum) were brought from local growers. Growers were certified by the Brazilian Ministry of Agriculture, Livestock and Food Supply (MAPA) for organically grown products. Cylindrical fragments were cut using a 18.50 diameter stainless steel cutter. The cylindrical fragments were then cut into 5 tall samples. The samples were wrapped in paper towels to reduce denaturation and oxidation. Cutting was performed immediately before each experiment. The time between cutting and the end of the experiment was less than 30 minutes. The laboratory temperature was 20 (293.15).The samples were placed between two 30 mm diameter circular 316L stainless steel plates as shown in Fig. <ref>a and carefully fixed with a spring clamp. We subjected the samples to a PEF protocol according to the ESOPE guidelines <cit.>. The repetition rate was fixed at 5. Each sample was subjected to a PEF protocol and then replaced. The voltage was swept from 50 to 250 (50 steps) and from 300 to 500 (100 steps). Because the samples were 5 high, the equivalent electric field ranged from 10 to 50 (10 steps) and from 60 to 100 (20 steps).Data were collected using a Tektronix DPO2012B oscilloscope (Tektronix Inc, Oregon, USA) with a Tektronix TPP0100 voltage probe and a Tektronix A622 current probe. We post-processed the data using a Python script to determine the average, standard deviation, and confidence intervals for each protocol. §.§ In silico experiment Iterative simulations were used to evaluate the parameters. We used a 2D axisymmetric geometry to replicate the experimental setup. The sample geometry was a rectangle 9.25 long and 5 high. Rectangles 15.00 long and 1 high were placed on the top and bottom of the sample geometry to form the plate electrodes. All geometries were rotated 360 to form the cylindrical shape. Fig. <ref>b shows the final geometry. We implemented the biological dispersion (Eqs. <ref> – <ref>) with electroporation and thermal conductivities (Eqs. <ref> and <ref>) in the sample material. The conductivity and relative permittivity of the electrode were 1.74 and 1, respectively.The boundary conditions were defined as follows. 
A lateral boundary of the entire geometry was used for the axisymmetric rotation. For electrical analysis, the boundaries of one electrode were defined as terminal and those of the other as ground (Dirichlet boundary condition). For electrical and thermal analysis, the external boundaries of the geometry were defined as electrical and thermal insulating, respectively (Neumann boundary condition). The COMSOL's multiphysics module automatically provided the electrical information as input for thermal analysis, all domains were considered as Joule heating source. The initial temperature was 20.The input voltage followed the experimental signal. The mesh was created with the COMSOL mesh creation tool using finer resolution. The final mesh resulted in 736 domain elements. We used the intermediate generalised-α method to adjust the time step to improve convergence. The intermediate generalised-α method allows the solver to strictly decrease the time step. The maximum time step was established at 0.1 in the transition regions and at 1 otherwise.§ RESULTS The parameters of the dynamic model of Solanum tuberosum electroporation are shown in Table <ref>. Fig. <ref> presents the plot of the functions β_0 and β_1. Fig. <ref> shows the experimental and simulated electric currents for the used PEF (10 to 100). The experimental results were summarised as average, confidence interval (95%), and standard deviation. If PEF is higher than 20, the electric current has a nonlinear increase during PEF. Also, if PEF increases in magnitude, the maximum current values increase nonlinearly. The overshoot of the in vitro current in the opposite direction in the PEF transitions is not due to dispersion, temperature rise, or electroporation, but is a common parasitic effect in the experimental setup.We evaluated the evolution of dynamic states and the thermal increase in the centre of the sample (coordinates (0;0) in Fig. <ref>c and d). The dynamic evolution of the concentrations of prepore (P_0), initial pore (P_1), and expanded pore (P_2) using 20 and 100 is shown in Fig. <ref> (see Supplementary Information for the complete set of input electric fields). P_0 are created and decimate during the first 500 after the pulse rise and fall times. P_1 are created at a rate faster than its decimation. A higher magnitude of the PEF leads to more pore formation. P_2 has the slower dynamics and accumulates over pulses.The increase in temperature is shown in Fig. <ref>a. The total increase in temperature (Δ T = T - T_0) after the eight pulses was 0, 0.004, 0.025, 0.074, 0.1822, 0.3046, 0.5647, and 0.8862 for 10, 20, 30, 40, 50, 60, 80, and 100, respectively. The maximum temperature variation (0.8862) represents an increase in conductivity of 1.5%.Fig. <ref>b presents the evolution of the apparent conductivity for all input PEF. Measurement was taken in the centre of the sample. This curve is calculated through Eq. <ref>, where the contribution of electroporation and thermal effects to tissue conductivity is included.§ DISCUSSION Modelling complex biological systems is a challenging undertaking due to complex interactions, nonlinear dynamics, computational demand, and uncertainties inherent in these systems. Another difficulty is addressing parameters that correlate with the microphysical processes. Studies in single cells provide an opportunity to study the effects directly at the cell membrane <cit.>. On the other hand, tissue studies can only evaluate a macroscopic effect <cit.>. 
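To make the interplay between the three states and the apparent conductivity discussed above easier to follow, the state kinetics and the conductivity assembly of the Methods section can be prototyped outside the FEM environment for a spatially uniform field. The Python sketch below uses placeholder parameters (the fitted values of the parameter table are not restated here), so it reproduces the qualitative behaviour of the state traces and of the apparent conductivity rather than the reported results; the thermal correction is kept but is negligible for this burst, in line with the temperature rise reported above.

import numpy as np

# placeholder parameters (assumed values, not the fitted ones)
E0, dE0 = 50e3, 10e3               # V/m, centre and slope of beta_0
E1, dE1 = 20e3, 5e3                # V/m, centre and slope of beta_1
tau0 = 5e-6                        # s
tau1G, tau1D = 2e-5, 1e-4          # s, growth / decay times of P1
tau2G, tau2D = 5e-4, 2e-1          # s, growth / decay times of P2 (slow resealing)
sigma_s = 0.03                     # S/m, static conductivity
sP0, sP1, sP2 = 0.30, 0.10, 0.05   # S/m, conductivity coefficients of the states
chi, T0 = 1.7e-2, 293.15           # 1/K thermal coefficient (from the text) and initial temperature

def beta(E, Ec, dEc):
    return 1.0 / (1.0 + np.exp(-(abs(E) - Ec) / dEc))

# eight 100-us pulses at 5 Hz with |E| = 40 kV/m (i.e., 200 V across the 5-mm sample)
dt, t_end = 1e-6, 1.6
n = int(t_end / dt)
P0 = P1 = P2 = 0.0
T = T0                             # the reported rise is below 1 K, so T is kept constant here
sigma_app = np.zeros(n)
for i in range(n):                 # plain explicit Euler loop, deliberately unoptimised
    t = i * dt
    E = 40e3 if (t % 0.2) < 100e-6 else 0.0
    b0, b1 = beta(E, E0, dE0), beta(E, E1, dE1)
    tau1 = tau1G * (1.0 - 0.5 * P0) if b1 >= P1 else tau1D
    tau2 = tau2G if P1 >= P2 else tau2D
    P0 += dt * (b0 - P0) / tau0
    P1 += dt * (b1 - P1) / tau1
    P2 += dt * (P1 - P2) / tau2
    sigma_P = sigma_s + sP0 * P0 + sP1 * P1 + sP2 * P2
    sigma_app[i] = sigma_P * (1.0 + chi * (T - T0))
# sigma_app plays the role of the apparent conductivity at a fixed point of the sample.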
In electroporation, the increase in membrane conductivity (and thus the tissue conductivity) is one of the primary observable effects. Our tissue model subsumed the complex dynamics of electroporation into three main states P_0, P_1, and P_2, which correlate with the membrane-level hypothesis of pore creation and expansion. These states differ in characteristic and relaxation times, contribution to increase in conductivity, and dependence with the applied electric field <cit.>. Although the model was built on the basis of the cell electroporation model of Leguèbe et al. <cit.> we proposed a different approach. Our model has an extra pore-detailing state (three states instead of two). In our definition, P_0 should explain the instantaneous increase in conductivity associated with rapid opening of hydrophobic pores (so-called prepores), P_1 should explain the initial formation of hydrophilic pores (initial pores), and P_2 the final expansion of the pores (final pores). Because of that, the relation between each state is also slightly different, P_0 and P_1 depend on the electric field, and P_2 depends directly on P_1.We must note that the dynamics of electroporation is not yet fully understood <cit.>, and there are several hypotheses on how the phenomenon occurs <cit.>. On the tissue scale, we can only assess a combined effect, which limits our conclusions about the physical changes on the membrane. Although we assume that each state is explained mainly by the aforementioned reasons, one would expect pores to form during P_2 and pores to expand during P_1, for example. Further work should be done to validate the correlations between tissue and membrane effects for each state.The differences between the curves in Fig. <ref> show that the saturation of β_1 is reached at electric field magnitudes of about half the saturation threshold of β_0. We believe that β_1 is saturated at lower electric field magnitudes due to the spatial distribution in the cell membrane for prepore and pore formation. Prepores are usually less stable and require less energy to form. Pores, on the other hand, will require more energy to form. For this reason, pores are likely to occur in the poles of the cell, where the electric field perpendicularly strikes its structure <cit.>. In low electric field stimulation, prepores and pores would likely form on the cell poles. Under these conditions, the rate of pore formation would be higher and a small amount of prepores would almost saturate the number of initial pores. Increasing the electric field magnitude would increase the number of prepores along the cell structure, but those would not form an initial pore. The difference in energy threshold would hamper the formation of pores throughout the cell structure. This could explain why increasing the magnitude of the electric field after a certain threshold keeps increasing the conductivity at the beginning of the pulse but does not change the increase amount over the pulse. As mentioned, the mechanics of single cell electroporation is not yet fully understood, and conclusions about its causes on the tissue scale are speculative.The final pores cause larger flaws in the membrane than pre-pores or initial pores. Since the membrane is an insulating material, the flaw size leads the pre-pores to be less conductive than the initial and final pores. 
In fact, studies on cell suspensions under electroporation indicate an increase in conductivity over time during PEF stimulation, consistent with the higher conductivity of the final pores <cit.>. There is a lack of information on conductivity during initial pore formation. Pre-pores are expected to form spontaneously even if the cell is at resting potential <cit.>. However, our proposed model does not consider the absolute values, but the concentration of the individual states. This consideration explains why the conductivity of the pre-pores state (σ_P_0) is greater than that of the initial pores (σ_P_1) and the final pores (σ_P_2). In absolute terms, we expect the number of pre-pores to be greater than the number of pores and final pores (P_0 ≫ P_1 ≫ P_2), but the concentration analysis normalises these values. Therefore, the average increase in conductivity is reflected in a higher value for the conductivity coefficient of the pre-pores than for the initial and final pores (σ_P_0≫σ_P_1≫σ_P_2).The higher number of prepores formed in the first moments of the pulse accelerates the formation of the initial pores. This effect could be linked to theories that pores may be formed and expanded by coalescence of prepores <cit.>. Therefore, a higher concentration of P_0 would lead to a faster increase in P_1, as introduced in Eq. <ref>. P_1 and P_2 are also related. Here, we consider that the expansion of pores is proportional to the occurrence of initial pores. The difference is that the pores would take longer to expand and then longer to close. The timing of opening and closing of the pores should also be taken into account. There are differences in the mechanisms of opening and closing pores <cit.>. The mechanism of closing the pores is slower than that of opening them. Eqs. <ref> and <ref> adjust the characteristic or relaxation time of states P_1 and P_2 depending on whether it is a growing or a decaying curve.Fig. <ref> shows that our proposed model can describe the dynamics of the electric current during electroporation for tested input voltages. The solid foundation of the biological dispersion can be observed when 10 (50) is applied (Fig. <ref>a). At 10, the influences of electroporation are small, so the electric current is explained mainly by biological dispersion (see that there is no increase in apparent conductivity in Fig. <ref>b). Electroporation phenomena start to visually occur above 20 (Fig. <ref>b), when the electric current deviates from the natural waveform of the biological dispersion. This threshold for the occurrence of electroporation is assessed in the curves of Fig. <ref>, especially in the first dynamic dependence β_0. The β_0 shape is similar to the static model of Solanum tuberosum proposed by Ivorra et al. <cit.>. The authors have developed a static model applying a single pulse 400 long and evaluating instantaneous conductivity at 100. The similarity between β_0 and the static model is consistent, as P_0 has the greatest influence on the increase in tissue conductivity for the most magnitudes.The results of the thermal analysis in Fig. <ref>a show that the ESOPE protocol has a minimal temperature effect even at the higher electric field. The maximum increase was only 0.88, which is reflected in a change in conductivity of approximately 1.5. Although we did not perform an experimental analysis of the thermal rise, our thermal simulation results are similar to those of other studies for the first eight pulses <cit.>.Fig. 
<ref> explains the electroporation dynamics for 20 and 100. We can see that P_1 and P_2 concentrations increase from 20. Since we defined P_1 and P_2 as the states of hydrophilic pore formation, these states are expected to have the main influence on the increase in tissue permeability for macromolecules. Thus, cell permeability has already increased significantly at initial thresholds, where the effects of electroporation are not completely saturated. This confirms the statements in preclinical electrochemotherapy simulations that drug delivery is achieved with similar reliability after a certain threshold value of the electric field (the so-called reversible electroporation threshold) <cit.>. In further research, our aim is to better understand the effects of the dynamic states on tissue permeability.

The increase in apparent conductivity under the influence of thermal and pore-formation dynamics is shown in Fig. <ref>b. We can observe that P_1 and P_2 are more influential up to 30, while P_0 significantly influences the increase in conductivity for thresholds higher than 40. As previously mentioned, this phenomenon may be related to the formation of pre-pores throughout the cellular structure that do not have sufficient energy to form a final pore under higher-intensity stimuli. Thus, the states P_1 and P_2 determine the increase in conductivity for thresholds below 30, while P_0 has a greater influence for stimuli above 40.

We found characteristic times for the first two dynamics similar to those of Voyer et al. <cit.>. The authors analysed the electric current during the first pulse and used a similar set of equations to describe the first two dynamics. As only one pulse was analysed, they did not implement the relaxation between pulses or the third dynamic, which limits their model to a single-step analysis. In terms of the increase in conductivity, our results are of the same magnitude as those found by the static model of Solanum tuberosum by Ivorra et al. <cit.> and the dynamic model of Weinert et al. <cit.> evaluated in rabbit tissues. Weinert et al. compressed all electroporation dynamics into a single differential equation, which led to distinct dynamics of the increase in conductivity. We suspect that this compression resulted in parameter adjustments for voltage variations in their model (see Table 2 in <cit.>). A common factor among the three dynamic models of tissue electroporation proposed to date is the adjustment of parameters based on input variations <cit.>. In this sense, our model can accurately describe a wide range of input voltages with a single set of parameters.

The largest differences between the experimental average and the simulation results occur at 20 (Fig. <ref>b) and 50 (Fig. <ref>e). The difference at 20 arises because the simulated increase in conductivity is slightly faster than that observed experimentally. We could implement a new function for the characteristic time of P_1 to account for this difference. However, this would introduce new parameters to fit only the first two pulses of an input electric field. For simplicity, we prefer not to make this consideration. The difference at 50 is due to an overestimation of β_0 for this particular electric field. At 50, we are in the transition region of β_0, where small deviations in parameter definition would lead to large differences in electric current. It would be possible to better fit the result for 50 while increasing the deviation for the others.
Since the result for 50 is at the upper edge of the standard deviation and all other input electric fields are close to the experimental average, we considered this set of parameters as the best choice.§ CONCLUSION We have proposed a novel dynamic model that describes tissue dielectric properties during PEF while accounting for electroporation, tissue dispersion, and temperature. The model divides the electroporation phenomenon into three dynamic states: prepore, initial pore, and final pore formation. The states are associated with microscopic effects on the membrane. The model can accurately describe the electric current during PEF. We believe that our proposed model can improve the study of PEF for electroporation-based applications.§ ACKNOWLEDGEMENTSThis study was financed in part by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) and Conselho Nacional de Desenvolvimento Científico e Tecnológico – Brasil (CNPq).§ AUTHOR DECLARATIONS§.§ Conflict of InterestThe authors have no conflicts of interest to disclose. §.§ Author ContributionsR.G.: Conceptualisation, Methodology, Formal analysis, Investigation, Data Curation, Writing – Original Draft, Writing – Review & Editing, Visualisation. D.L.L.S.A.: Methodology, Investigation, Data Curation, Writing – Review & Editing. J.R.S.: Formal analysis, Investigation, Writing – Review & Editing. G.B.P.: Resources, Supervision, Writing – Review & Editing. D.O.H.S.: Resources, Supervision, Project administration, Writing – Review & Editing.§ DATA AVAILABILITYThe data that support the findings of this study are available from the corresponding author upon reasonable request.§ REFERENCES
0000-0002-9811-2443]Nicole M. Firestone Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA0000-0003-1530-8713]Eric Gawiser Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA0000-0002-9176-7252]Vandana Ramakrishnan Department of Physics and Astronomy, Purdue University, 525 Northwestern Ave., West Lafayette, IN 47906, USA0000-0003-3004-9596]Kyoung-Soo Lee Department of Physics and Astronomy, Purdue University, 525 Northwestern Ave., West Lafayette, IN 47906, USA0000-0001-5567-1301]Francisco Valdes NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA0000-0001-9521-6397]Changbom Park Korea Institute for Advanced Study, 85 Hoegi-ro, Dongdaemun-gu, Seoul 02455, Republic of Korea0000-0003-3078-2763]Yujin Yang Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea0000-0002-1328-0211]Robin Ciardullo Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA0000-0003-0570-785X]María Celeste Artale Instituto de Astrofisica, Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago RM, Chile Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA Department of Physics, University of Washington, Seattle, WA 98195, USA Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA0000-0001-6842-2371]Caryl Gronwall Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA0000-0002-4902-0075]Lucia Guaita Instituto de Astrofisica, Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago RM, Chile0000-0001-8221-8406]Stephen Gwyn Herzberg Astronomy and Astrophysics Research Centre, National Research Council of Canada, Victoria, British Columbia, Canada0000-0003-3428-7612]Ho Seong Hwang Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea SNU Astronomy Research Center, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea 0009-0003-9748-4194]Sang Hyeok Im Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea0000-0002-2770-808X]Woong-Seob Jeong Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea0009-0002-6186-0293]Shreya Karthikeyan Department of Astronomy, University of Maryland, College Park, MD 20742, USA0000-0002-1172-0754]Dustin Lang Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada0009-0008-4022-3870]Byeongha Moon Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea0000-0001-9850-9419]Nelson Padilla Instituto de 
Astronomía Teórica y Experimental (IATE), CONICET-UNC, Laprida 854, X500BGR, Córdoba, Argentina0000-0002-7712-7857]Marcin Sawicki Institute for Computational Astrophysics and Department of Astronomy and Physics, Saint Mary’s University, 923 Robie Street, Halifax, Nova Scotia, B3H 3C3, Canada0009-0007-1810-5117]Eunsuk Seo Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon, 34134, Republic of Korea Departamento de Ciencias Fisicas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago, Chile European Southern Observatory Las Condes, Región Metropolitana, Chile0000-0002-4362-4070]Hyunmi Song Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon, 34134, Republic of Korea0000-0001-6162-3023]Paulina Troncoso Iribarren Escuela de Ingeniería, Universidad Central de Chile, Avenida Francisco de Aguirre 0405, 171-0164 La Serena, Coquimbo, Chile Lyman-Alpha Emitting galaxies (LAEs) are typically young, low-mass, star-forming galaxies with little extinction from interstellar dust. Their low dust attenuation allows their Lyα emission to shine brightly in spectroscopic and photometric observations, providing an observational window into the high-redshift universe. Narrowband surveys reveal large, uniform samples of LAEs at specific redshifts that probe large scale structure and the temporal evolution of galaxy properties. The One-hundred-deg^2 DECam Imaging in Narrowbands (ODIN) utilizes three custom-made narrowband filters on the Dark Energy Camera (DECam) to discover LAEs at three equally spaced periods in cosmological history. In this paper, we introduce the hybrid-weighted double-broadband continuum estimation technique, which yields improved estimation of Lyα equivalent widths. Using this method, we discover 6339, 6056, and 4225 LAE candidates at z = 2.4, 3.1, and 4.5 in the extended COSMOS field (∼9 deg^2). We find that [O2] emitters are a minimal contaminant in our LAE samples, but that interloping Green Pea-like [O3] emitters are important for our redshift 4.5 sample. We introduce an innovative method for identifying [O2] and [O3] emitters via a combination of narrowband excess and galaxy colors, enabling their study as separate classes of objects. We present scaled median stacked SEDs for each galaxy sample, revealing the overall success of our selection methods. We also calculate rest-frame Lyα equivalent widths for our LAE samples and find that the EW distributions are best fit by exponential functions with scale lengths of w_0 = 55 ± 1, 65 ± 1, and 62 ± 1 Å, respectively. § INTRODUCTION The presence of significant Lyman Alpha (Lyα) emission in young, star forming galaxies was first theorized by <cit.>. Today, we understand Lyα Emitting galaxies (LAEs) as young, low-mass, low-dust, star-forming systems which have been identified as predecessors of Milky Way-type galaxies <cit.>. LAEs have prominent Lyα emission due to the recombination of hydrogen in their interstellar media (ISM) and, in some cases, scattering that occurs in the circumgalactic medium (CGM). In the ISM, ionization is driven by active star formation <cit.> or the presence of an active galactic nucleus (AGN) <cit.>. After ionization via either of the aforementioned processes, the Hydrogen undergoes recombination, producing Lyα radiation in significant quantities. 
Because LAEs are typically nearly dust-free <cit.>, the Lyα emission line formed through these processes does not experience severe extinction from interstellar dust and stands out as a prominent spectral feature. Between 2 ≲ z ≲ 5, the expansion of the universe redshifts this Lyα emission line feature from the rest-frame wavelength of 121.6 nm into the optical regime, making LAEs observable by ground-based telescopes.

After many years of unavailing searches for the fabled LAEs of <cit.>, the development of higher sensitivity telescopes and wider-field detectors in the mid-1990s brought with it some of the first notable LAE surveys (see <cit.> for a comprehensive review). One of the earliest successful LAE surveys was the Hawaii Survey, which used the 10m Keck II Telescope to conduct narrowband and spectroscopic searches for high equivalent width LAEs at 3 < z < 6 <cit.>. A few years later, the Large-Area Lyman Alpha (LALA) survey used the CCD Mosaic camera at the 4m Mayall telescope at Kitt Peak National Observatory and the low-resolution imaging spectrograph (LRIS) instrument at the Keck 10m telescope to discover and spectroscopically confirm z = 4.5 LAEs <cit.>. Shortly thereafter, the Subaru Deep Survey was conducted using narrowband imaging at z = 4.86 on the 8.2m Subaru Telescope <cit.>. Then, the Multiwavelength Survey by Yale-Chile (MUSYC) used the MOSAIC-II Camera at the CTIO 4m telescope <cit.> to study LAEs at z = 2.1 <cit.> and z = 3.1 <cit.>. In more recent years, the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) has taken the lead on spectroscopic LAE surveys <cit.>. Currently, the largest published narrowband-selected LAE samples have been discovered by the Systematic Identification of LAEs for Visible Exploration and Reionization Research Using Subaru HSC (SILVERRUSH), which used data from the Hyper Suprime-Cam (HSC) Subaru Strategic Program to discover LAEs over a wide range of redshifts <cit.>.

Large, uniform samples of LAEs have a wide range of uses for studies of galaxy formation, galaxy evolution, large scale structure, and cosmology. High-redshift LAEs (z ≳ 6) can be used to probe the Epoch of Cosmic Reionization (EoR), the era in which the neutral matter that existed after recombination became ionized by first-generation stars <cit.>. Additionally, LAEs serve as good tracers of the large scale structure of the universe <cit.>, allowing us to study the temporal progression of the galaxy distribution at different epochs <cit.>. Since LAEs are composed of baryonic matter residing in dark matter halos, we can also use them as tools to measure the relationship between baryonic matter and dark matter, i.e., galaxy bias <cit.>. This type of analysis helps us to understand how high-redshift galaxies grow into the systems we see today <cit.>. Lastly, we can use LAEs to study Star Formation Histories (SFHs) by fitting their rest-ultraviolet-through-near-infrared photometry <cit.>. This analysis allows us to characterize star formation episodes throughout the lifetimes of galaxies, which can help us to better understand the physical processes that contribute to star formation and quenching in LAEs and how they compare to those in higher-mass counterparts. Collectively, these scientific opportunities make LAEs a powerful observational tool for probing the high-redshift universe, offering many insights into the intricacies of galaxy formation, galaxy evolution, and cosmology. However, many of these studies require large, uniform samples of LAEs at well-separated periods in cosmological history.
One-hundred-deg^2 DECam Imaging in Narrowbands (ODIN) is a 2021-2024 NOIRLab survey program designed to discover LAEs using narrowband imaging <cit.>. ODIN's narrowband data is collected with the Dark Energy Camera (DECam) on the Víctor M. Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. This project utilizes three custom-made narrowband filters with central wavelengths 419 nm (N419), 501 nm (N501), and 673 nm (N673) to create samples of LAE candidates during the period of Cosmic Noon at redshifts 2.4, 3.1, and 4.5, respectively. ODIN's narrowband-selected LAEs allow us to view large snapshots of the universe 2.8, 2.1, and 1.4 billion years after the Big Bang, respectively. With ODIN, we expect to discover a sample of >100,000 LAEs in 7 deep wide fields down to a magnitude of ∼25.7 AB, covering an area of ∼100 deg^2. ODIN’s carefully chosen filters and unprecedented number of LAEs will enable us to create and validate samples of the galaxy population at three equally spaced eras in cosmological history. Using these data, we can trace the large scale structure of the universe, study the evolution of the galaxies' dark matter halo masses, and investigate the star formation histories of individual LAEs. In this paper, we introduce innovative techniques for selecting LAEs and reducing interloper contamination using ODIN data in the extended COSMOS field (∼9 deg^2), and introduce ODIN's inaugural sample of ∼17,000 LAEs at z = 2.4, 3.1, and 4.5. By generating this unprecedentedly large sample of LAEs with impressive sample purity, ODIN will be able to better understand galaxy formation, galaxy evolution, and the large scale structure of our universe with significantly improved statistical robustness. From these results, we will be able to bind together chapters of the evolutionary biography of our universe with what will be the largest sample of narrowband-selected LAEs to date. In Section <ref> we discuss the data acquisition and pre-processing. In Section <ref> we introduce the hybrid-weighted double-broadband continuum estimation technique and selection criteria for our emission line galaxy samples. In Section <ref> we introduce our final emission line galaxy samples and discuss their scaled median stacked SEDs and emission line equivalent width distributions. In Section <ref> we outline our conclusions and future work. Throughout this paper, we assume ΛCDM cosmology with h = 0.7, Ω_m = 0.27, and Ω_Λ = 0.73 and use comoving distance scales.§ DATA§.§ Images For ODIN's LAE selections, we require narrowband data as well as archival broadband data in the extended COSMOS field. The narrowband data for filters N419, N501, and N673 were collected using DECam on the Blanco 4m telescope at CTIO by the ODIN team <cit.>. Archival grizy broadband data were acquired from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) <cit.>. HSC-SSP data were collected using the wide-field imaging camera on the prime focus of the 8.2 m Subaru telescope <cit.>. HSC-SSP imaging in the COSMOS field includes two layers, Deep and Ultradeep <cit.>. Archival broadband data for the u-band were acquired from The CFHT Large Area u-band Deep Survey (CLAUDS) <cit.>. CLAUDS data were collected using the MegaCam mosaic imager on the Canada–France–Hawaii Telescope (CFHT) <cit.>, covering a smaller area than the HSC-SSP. The effective wavelength, seeing, depth, and extinction coefficients (see Section <ref>) in the COSMOS field for each filter are presented in Table <ref>. 
The grizy seeing is reported as the median seeing value for each COSMOS wide-depth stack. Since the COSMOS field includes two layers for the HSC broadband data, we present the parameters for both the Deep and Ultradeep regions separated by a slash when necessary. The transmission curves for all of these filters are presented in Figure <ref>.

§.§ Source Extractor Catalogs

In order to carry out source detection, we first divide the narrowband stack into “tracts” to match the grizy images from the HSC-SSP <cit.>. Each tract spans an area of ∼1.7 × 1.7 deg^2, with an overlap of one arcminute between tracts. We select sources from each tract image separately using the Source Extractor (SE) software <cit.> run in dual image mode, with one narrowband image as the detection band and the grizy plus remaining narrowband images as the measurement bands. This allows us to measure the source fluxes in identical apertures on all the frames. We measure the photometry in multiple closely spaced apertures, making it possible to interpolate the fractional flux enclosed within an aperture of any radius. While running SE, we filter each image with a Gaussian kernel with FWHM matched to the narrowband point spread function. We impose a detection threshold of 0.95σ, where σ is the fluctuation in the sky value of the narrowband image, and a minimum area of one pixel. These settings are optimized to detect faint point sources, which form the bulk of the LAE population. The specific value of the detection threshold is chosen to maximize the number of sources detected while still ensuring that the contamination of the source catalog by noise peaks remains below 1%. The extent of the contamination is estimated by running SE on a sky-subtracted and inverted (“negative”) version of the narrowband image. In this negative image, any true sources will be well below the detection threshold; any objects detected by SE are thus the result of sky fluctuations. So long as the sky fluctuations are Gaussian, i.e., the extent of the fluctuations above the mean is the same as that below, the number of sources detected in the negative image will be comparable to the number of false sources selected with a given detection threshold.

The COSMOS/N419 SE catalog is presented in Figure <ref>. Note that this plot excludes regions where there is no overlap between the DECam and HSC-SSP/CLAUDS frames. After acquiring archival data and creating a source catalog, we carry out a series of steps related to data pre-processing, which are outlined in Subsections <ref>-<ref>.

§.§ Galactic Dust Corrections

As radiation from an extragalactic source travels through the Milky Way, it encounters dust clouds that cause absorption and scattering. As a consequence, the observed radiation from those sources appears dimmer and redder than the intrinsic radiation. In order to account for this effect and recover the intrinsic emission from the sources, we apply Galactic dust corrections to the data. We estimate the amount of reddening that a source experiences by comparing its observed B-V color to its intrinsic B-V color, i.e., E(B-V). In order to calculate the E(B-V) value for each of our sources, we use the reddening map of <cit.>, as modified by <cit.>, along with a <cit.> reddening law. The resulting extinction coefficients for each filter, as interpolated from the DECam filter values presented by <cit.>, are presented in Table <ref>.
To implement Galactic dust corrections, we apply Equation <ref>, flux_corr = flux_obs × 10^(0.4 k E(B-V)), where flux_corr represents the Galactic dust corrected flux value, flux_obs represents the observed flux value, k represents the extinction coefficient for a particular filter, and E(B-V) represents the SFD reddening value for a particular source.

§.§ Aperture Corrections For Photometry

To fully account for the intrinsic brightness of each source, it is imperative that we also apply aperture corrections to our photometry. Each SE-generated source catalog produces flux density measurements for 12 different aperture diameters. Ideally, using the largest aperture available would yield the most accurate total flux measurements for the sources; however, the larger the aperture, the more noise is introduced by the background sky. On the other hand, if we use a smaller aperture we will underestimate the total flux densities of the sources, but reduce the noise in our data. In order to accurately report the flux densities of our sources and limit the noise in the data, we use smaller apertures for the flux density measurements and apply correction factors to estimate the total flux density of a source in each filter. These flux density corrections are also carried through to the magnitude values and all errors. In order to properly treat point sources and extended sources, we use a slightly different methodology for each class of objects.

To produce point source aperture correction factors, we examine the 2D integral of the point spread function (which we will henceforth refer to as the Curve of Growth) for each filter (see Figure <ref>). The Curves of Growth are constructed by plotting the median fractional flux density enclosed with respect to the largest aperture (5.0 arcsecond diameter), fracflux_n = f_n/f_5, for bright, unsaturated point sources as a function of aperture diameter for each filter. We classify bright, unsaturated point sources as sources that obey the following criteria:

* Respective magnitude between 18 and 19 (bright)
* SE flag parameter < 4 (unsaturated)
* Half-light radius ≤ 0.85 arcsecond (point source)

We choose the bright source magnitude range by finding the magnitudes for which the median fractional flux density levels out and the normalized median absolute deviation (NMAD) of the fractional flux density is close to zero in all filters. We choose to use the NMAD rather than the standard deviation because the NMAD is less sensitive to outliers. In order to omit sources with pixel saturation, we include only sources with the SE flag parameter < 4. For the purpose of aperture corrections, we treat objects with a half-light radius less than or equal to 0.85 arcseconds as point sources. After creating Curves of Growth with the subset of sources that obey these criteria, we convert the median fractional flux density for a particular aperture into a correction factor for each filter, corr = 1/(fracflux_n), such that corr × f_n = f_5. The Curves of Growth for the COSMOS/N419 SE catalog are presented as a representative example in Figure <ref>. As a supplemental test of robustness, we also ensure that the Curve of Growth for each filter does not change dramatically across the survey area.

To produce extended source aperture correction factors, we perform a regression analysis to determine a correction factor as a function of source half-light radius in the chosen 2 arcsecond aperture for each filter.
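As an illustration of the point-source corrections described above, the following minimal Python sketch derives an aperture correction factor from a Curve of Growth and applies the Galactic extinction correction of Equation <ref>. The array names, the choice of a 2 arcsecond measurement aperture, and the flux units are placeholders for illustration rather than the actual ODIN pipeline implementation.

import numpy as np

def curve_of_growth_correction(flux_aper_bright, flux_5arcsec_bright):
    # Median fractional flux of bright, unsaturated point sources in one
    # aperture relative to the 5 arcsec aperture, and the implied correction
    # factor corr such that corr * f_n ~ f_5.
    fracflux = np.median(flux_aper_bright / flux_5arcsec_bright)
    return 1.0 / fracflux

def deredden(flux_obs, k_filter, ebv):
    # Galactic dust correction: flux_corr = flux_obs * 10**(0.4 * k * E(B-V)).
    return flux_obs * 10.0 ** (0.4 * k_filter * ebv)

# Hypothetical usage for one filter:
# corr_2as   = curve_of_growth_correction(f2_bright_stars, f5_bright_stars)
# flux_total = deredden(corr_2as * f2_all_sources, k_N419, ebv_sfd)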
The extended-source regression allows us to limit contamination from uncorrected extended sources in the candidate sample. Ultimately, implementing these aperture corrections allows us to use a smaller aperture to better estimate the total flux of point sources without significantly biasing extended sources, while keeping the noise lower in the data. Additionally, at this step we reassign a magnitude of 40 to sources whose flux values are very low or negative.

§.§ Starmasking

The next step in the candidate selection pipeline is starmasking. Starmasking removes data that have been contaminated by saturated stars and by the effects of pixel over-saturation in the camera (CCD blooming). Starmasks were obtained from HSC-SSP <cit.>. We choose to use the g-band starmasks for this analysis because the individual masks were sufficiently sized for the narrowband images and did not contain spurious objects. Examples of CCD blooming and saturated stars from the COSMOS/N419 sample, as well as a visualization of the SE catalog after starmasking, are presented in Figure <ref> for reference.

§.§ Data Quality Cuts

At this point, we apply data quality cuts in order to eliminate any poor or problematic data that are not accounted for in the starmasks.

* f_ν ≠ 0: We ensure that the flux density f_ν for each source is non-zero in the narrowband and broadband filters chosen for each LAE selection (see Subsections <ref>-<ref> for details). This allows us to exclude sources with incomplete data.
* S/N_NB ≥ 5: We require that the narrowband signal-to-noise ratio for a source is greater than or equal to 5, eliminating sources that should not have entered the SE catalogs.
* We require that the SE external-flag parameter is equal to 0. This is a binary parameter, so a value of 0 indicates that all the pixels within a source's aperture have valid values and are unflagged, as opposed to a value of 1, which indicates that at least one pixel has no data or bad data in the external flag map <cit.>.
* Lastly, we require that the SE flag parameter is less than 4. This allows us to include sources whose aperture photometry is contaminated by neighboring sources and/or sources that had been deblended, while omitting sources with pixel saturation <cit.>.

§ EMISSION LINE GALAXY SELECTION

§.§ Improved Continuum Estimation Technique

By definition, a true LAE has excess Lyα emission when compared with the expected continuum emission at the Lyα wavelength. In order to select LAE candidates, we utilize narrowband and broadband filters to infer the presence of an emission line at the redshifted Lyα wavelength by looking for excess flux density in the narrowband. In order to measure this excess, we use a narrowband filter to capture the Lyα emission line and two broadband filters to estimate the continuum emission at the narrowband effective wavelength. If a source's narrowband flux density at this wavelength significantly exceeds the double broadband continuum estimate, then the source is an LAE candidate.

We estimate the continuum at the narrowband wavelength using two broadband filters by generating a weight for each filter according to Equation <ref>, λ_NB = w λ_a + (1-w) λ_b, where λ represents the effective wavelength of a filter, w is a weight, NB represents the narrowband filter, and `a' and `b' generically represent two broadband filters. Since the effective wavelengths of each broadband filter are used to solve for w, w will take on a value between 0 and 1 when used for an interpolation but can be outside of that range when extrapolation is needed.
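To make the weighting explicit, the broadband weight follows directly from solving Equation <ref> for w. The sketch below uses rounded, illustrative effective wavelengths rather than the measured values listed in the filter table:

def broadband_weight(lam_nb, lam_a, lam_b):
    # Solve lam_nb = w * lam_a + (1 - w) * lam_b for the weight w.
    # w lies within [0, 1] when the narrowband sits between the two broadbands
    # (interpolation) and outside that range otherwise (extrapolation).
    return (lam_nb - lam_b) / (lam_a - lam_b)

# Illustrative wavelengths only (nm); the actual effective wavelengths differ slightly.
w_N501 = broadband_weight(501.0, 480.0, 620.0)   # g and r bracket N501 -> interpolation (w ~ 0.85)
w_N419 = broadband_weight(419.0, 480.0, 620.0)   # g and r both redward of N419 -> extrapolation (w > 1)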
In order to use these weights to generate a double broadband continuum estimate, we begin by making the realistic assumption that continuum-only sources have a power-law flux distribution. In practice, this allows us to compute the double broadband continuum magnitude by linearly weighting the magnitudes from the two broadband filters. This weighted magnitude model is presented in Equation <ref>, where mag_a is the magnitude in the `a' broadband filter, mag_b is the magnitude in the `b' broadband filter, and mag_ab is the `ab' double broadband continuum magnitude at the effective wavelength of the narrowband.

mag_ab = w mag_a + (1-w) mag_b

However, we cannot trust magnitudes at low flux density S/N. We remedy this issue by using a simpler model for this subset of sources (<10% of the starmasked source catalog), in which we assume that continuum-only sources' flux density has a linear relationship to wavelength (as used in <cit.>). This weighted flux density model is presented in Equation <ref>, where f_a is the flux density in the `a' broadband filter, f_b is the flux density in the `b' broadband filter, and f_ab is the `ab' double broadband continuum flux density at the effective wavelength of the narrowband.

f_ab = w f_a + (1-w) f_b

We refer to this new method as hybrid-weighted double-broadband continuum estimation, in which we

* Treat sources with S/N ≥ 3 in both single broadbands by assuming a power-law flux density (i.e., the weighted magnitude model; Equation <ref>)
* Treat sources with S/N < 3 in either broadband by assuming a linear flux density (i.e., the weighted flux model; Equation <ref>)

After applying this method, we implement a global narrowband zero point correction by adjusting the narrowband photometry such that the median narrowband excess is equal to zero for continuum-only objects. This correction is small and generally less than 10%.

This new method has many advantages for ODIN's datasets. First, it allows a better estimate of the narrowband excess (equivalent width) of sources than is possible with a single broadband or a flux density weighted double broadband method. This is particularly advantageous for capturing dim LAEs. Additionally, it allows us to more effectively eliminate low redshift interlopers from the high redshift LAE candidates with minimal additional color cuts (see Subsections <ref> and <ref>). And lastly, it allows us to successfully use extrapolation (rather than interpolation) to estimate the continuum, which was not successful with a flux density weighted double broadband method. This makes it possible to avoid direct use of the u-band filter for the z = 2.4 LAE selection, which covers a smaller area and has more complex systematics than the g and r broadband filters (see Subsection <ref>). Therefore, the improved hybrid-weighted double-broadband continuum estimation technique allows us to reduce interloper contamination and select candidates over a larger area with more robust photometry.
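A minimal sketch of the hybrid scheme, assuming per-source numpy arrays of broadband magnitudes, flux densities, and signal-to-noise ratios, is given below. The 3σ switch and the two weighting models follow the description above; the AB zero point of 23.9 assumes flux densities in μJy, and the global narrowband zero-point correction is omitted.

import numpy as np

def hybrid_continuum_mag(mag_a, mag_b, flux_a, flux_b, snr_a, snr_b, w):
    # Weighted-magnitude (power-law) model for high-S/N sources.
    mag_model = w * mag_a + (1.0 - w) * mag_b
    # Weighted-flux-density (linear) model for low-S/N sources,
    # converted back to an AB magnitude (23.9 assumes microJansky units).
    flux_model = w * flux_a + (1.0 - w) * flux_b
    with np.errstate(invalid="ignore", divide="ignore"):
        mag_from_flux = -2.5 * np.log10(flux_model) + 23.9
    high_snr = (snr_a >= 3) & (snr_b >= 3)
    return np.where(high_snr, mag_model, mag_from_flux)

# Narrowband excess, before the global zero-point correction:
# excess_ab_nb = hybrid_continuum_mag(mag_g, mag_r, f_g, f_r, snr_g, snr_r, w) - mag_nb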
§.§ LAE Selection CriteriaUsing hybrid-weighted double-broadband continuum estimation, we apply the following selection criteria to isolate LAEs: * (ab - NB) ≥ (ab - NB)_minWe require the narrowband excess of the LAE candidates to exceed an equivalent width cut according to Equation <ref>, where λ_NB is the effective wavelength of the narrowband filter, λ_Lyα is the minimum rest-frame wavelength of the Lyα emission line, FWHM_NB is the full width at half maximum (FWHM) of the narrowband filter, and EW_0 is the rest-frame equivalent width of the Lyα emission line (which we take to be 20 Å).(ab - NB)_min = 2.5log_10[1+EW_0(λ_NB/λ_Lyα/FWHM_NB)]For the N419, N501, and N673 narrowband filters, this equivalent width cut corresponds to narrowband excesses of 0.71, 0.83, and 0.82 magnitudes, respectively. In Section <ref>, we will discuss a more complex process for equivalent width estimation based on these values. This cut allows us to limit the amount of low-redshift interlopers that have other emission lines in the narrowband filters. This cut is quite robust to small-equivalent width interlopers, such as [O2] emitting galaxies <cit.>, though some Green Pea-like [O3] emitters and AGN may still remain in the sample (see Subsections <ref>-<ref>).* (ab - NB) ≥ 3σ_(ab - NB) We require that candidates have a robust narrowband excess in order to avoid continuum-only objects being included due to the photometric uncertainties. Here, σ_(ab - NB) is calculated by propagating the errors in ab and NB.* (BB - NB) < -2.5log_10[ C_NB/C_BB] + 2σ_(BB-NB)We require that an object is at least as bright in the emission-line contributed broadband (BB) as a pure-emission-line LAE (infinite EW) would be, within 2 sigma given possible noise fluctuations. Here, C is given by Equation <ref>,C = ∫(c/λ^2)Tdλ/T_EL,where T is the filter transmission as a function of wavelength and T_EL is obtained by averaging the filter transmission over the narrowband filter transmission curve, which is used as a proxy for the LAE redshift probability distribution function. * R_50 < 1.38 We apply a half-light radius R_50 cut to exclude large, extended sources. We define this limit as twice the NMAD in the half-light radii for sources that satisfy the above criteria from the half-light radii of bright, unsaturated point sources. This allows us to eliminate highly extended low-redshift contaminants whose photometry is not sufficiently corrected to avoid spurious narrowband excess. * NB ≥ 20 We exclude sources with narrowband magnitude brighter than 20 in order to eliminate extremely bright contaminants, typically quasars or saturated stars. * NB < D_NB, 5σ We eliminate objects whose narrowband magnitude is dimmer than the median 5σ depth of the narrowband image D_NB, 5σ. For the N419, N501, and N673 narrowband filters, this magnitude corresponds to 25.5, 25.7, and 25.9 AB, respectively.Finally, we apply additional color cuts to some of our LAE samples, which are designed to eliminate the largest known remaining sources of contamination in each dataset and enhance the purity of our LAE samples. The sources of contamination and cuts as well as the double broadband choices for each filter-set are described below. §.§ Selection of z = 4.5 LAEs, z = 0.81 [O2] Emitters, andz = 0.35 [O3] Emitters Out of the three samples of LAE candidates, the N673 catalog is the most susceptible to low redshift emission line galaxy interlopers. 
This is because the EW distributions and luminosity functions of low redshift interlopers climb as a function of redshift. The two most notable interlopers are z ≈ 0.81 [O2] emitters and z ≈ 0.35 [O3] emitters, with the most challenging culprit being the [O3] emitters, since the [O3] emission lines tend to have larger EWs than the [O2] emission line. We choose our selection filters specifically to isolate and remove these interlopers with minimal color cuts. For our z = 4.5 LAE selection, we carry out hybrid-weighted double-broadband continuum estimation using the N673, g-band, and i-band filters (see Figure <ref> and Table <ref>). This combination of filters has significant advantages over using just N673 and the r-band. With the latter filter combination, not only do we have excess amounts of contamination from [O2] and [O3] emitters, but we also do not capture all dim LAE candidates. However, when using both the g-band and the i-band, we increase the number of dim LAE candidates selected, reduce contamination from lower-EW [O2] emitters, and experience the majority of our remaining contamination from Green Pea-like [O3] emitters (see Figures <ref> and <ref>). Green Pea galaxies are compact, extremely star-forming galaxies that are often thought of as low-z LAE analogs <cit.>.

In order to identify likely z = 0.81 [O2] emitter and z = 0.35 [O3] emitter interlopers in our data, we first carry out cross matches between the SE source catalog and archival spectroscopic/photometric redshift catalogs, as well as between the initial LAE candidate catalogs and archival spectroscopic/photometric redshift catalogs. We obtain archival spectroscopic redshift data from <cit.> and photometric redshifts from <cit.>. As illustrated in Figure <ref>, we find that objects in our source catalog that are matched to low-redshift z = 0.81 [O2] emitters and z = 0.35 [O3] emitters reside in specific, disjoint regions of grz color-color space. Furthermore, we find that the sources in these redshift ranges with higher estimated (gi - N673) equivalent widths occupy compact and distinct regions of grz color-color space. This can be seen in Figure <ref>, where the colorbar displays the estimated narrowband excess from 0 to the z = 4.5 LAE EW cutoff. In addition to examining the (gi - N673) excess of the objects, we also examine their (gr - N501) values (see Figure <ref>). Examining both of these excesses is helpful because ODIN's survey design ensures that the majority of oxygen-emitting z = 0.35 galaxies will have an [O3] emission line in the N673 filter and an [O2] emission line in the N501 filter. We find that the objects with the highest (gr - N501) color are also concentrated in the region where we predicted significant contamination from z = 0.35 galaxies (see Figure <ref>). This allows us to see that LAE selections at this redshift are strongly susceptible to z = 0.35 [O3] emitter interlopers and mildly susceptible to z = 0.81 [O2] emitter interlopers.

We also perform spectroscopic and photometric redshift cross-matches to our initial (gi-N673) selected LAE candidates. The spectroscopic cross-match confirms that the primary contaminants in our LAE candidate sample lie within a redshift range consistent with z = 0.35 [O3] interlopers and in the region of our grz color-color diagram where we predicted z = 0.35 [O3] contamination. Upon visual inspection of the subset of these sources with accessible spectra <cit.>, we find that they have emission line ratios similar to those of Green Pea-like z = 0.35 [O3] emitters (see Figure <ref>).
Our photometric redshift cross-match also shows high levels of contamination from sources with redshifts consistent with z = 0.35 [O3] emitters in this same region of color-color space. Both cross-matches yield minimal contamination from sources with redshifts consistent with z = 0.81 [O2] emitters. However, we place less weight on conclusions drawn from photometric redshifts due to their susceptibility to misclassification of high-EW emission line galaxies. These results suggest that z = 0.81 [O2] emitters and z = 0.35 [O3] emitters with high equivalent widths can be located in grz color-color space, eliminated from ODIN's z = 4.5 LAE sample, and set aside for independent analysis.

To further test our claims that the primary interloper contaminants in our sample of z = 4.5 LAE candidates are Green Pea-like z = 0.35 [O3] emitters and that these interlopers preferentially reside in a specific region of grz color-color space, we plot confirmed SDSS (Sloan Digital Sky Survey) Green Peas in the appropriate redshift range <cit.>. These Green Peas all have redshifts between 0.34 and 0.35 and correspond to objects with SDSS IDs 587732134315425958, 587739406242742472, and 587741600420003946 <cit.>. To place these Green Pea-like [O3] emitters in grz color-color space, we run their SDSS spectra through ODIN's filter set and obtain the flux density in each filter. This is accomplished using Equation <ref>, where f_ν is the flux density, S_λ is the galaxy's spectrum, T is the filter transmission as a function of wavelength, c is the speed of light, and λ is the wavelength:

f_ν = (1/c) ∫ S_λ T λ dλ / ∫ (T/λ) dλ.

We carry out these calculations by numerically integrating using Simpson's Rule and then convert the flux density values f_ν into AB magnitudes. We find that all of these SDSS Green Peas reside in the predicted region of grz color-color space (see Figure <ref>). As an additional check, we perform a similar analysis with a simulated z = 0.35 Green Pea-like galaxy spectrum and a simulated z = 0.81 [O2] emitter spectrum. We create these simulated spectra using the stellar population synthesis package <cit.>. For both simulations we use MESA Isochrones and Stellar Tracks (MIST) <cit.>, the MILES spectral library <cit.>, the DL07 dust emission library <cit.>, a Salpeter IMF <cit.>, and the Calzetti dust law <cit.>, and we turn on nebular emission and IGM absorption. Enabling nebular emission and IGM absorption tunes the stellar population to take on the properties of an observed galaxy. For the Green Pea-like galaxy, we also set the gas phase metallicity and the stellar metallicity parameters to -1. This metallicity adjustment fine-tunes the relative emission line strengths to match those of a typical Green Pea galaxy. For each spectrum, we compute the flux densities and AB magnitudes in ODIN's filter set using Equation <ref>. We find that each simulated galaxy resides within its anticipated region of grz color-color space (see Figure <ref>).

§.§.§ [O2] and [O3] Emitter Selection Criteria

Our analyses show that the regions in grz color-color space described in Subsection <ref> are useful for targeting high-EW z = 0.35 [O3] emitter and z = 0.81 [O2] emitter interlopers in our N673 dataset. We can see that the choice to carry out a (gi-N673) LAE candidate selection yields a remarkably low level of contamination from z = 0.81 [O2] emitters. These analyses also reveal that the most prominent source of contamination in the initial (gi-N673) LAE candidate sample is z = 0.35 Green Pea-like [O3] emitters.
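For reference, a small Python sketch of the synthetic photometry in Equation <ref> above — integrating a spectrum against a filter transmission curve with Simpson's rule and converting to an AB magnitude — is shown below. The wavelength grid, spectrum units (erg s^-1 cm^-2 Å^-1), and array names are assumptions made for illustration, not the exact implementation used here.

import numpy as np
from scipy.integrate import simpson

C_ANGSTROM = 2.998e18   # speed of light in Angstrom/s

def synthetic_fnu(wave, s_lambda, transmission):
    # f_nu = Int(S_lambda * T * lambda dlambda) / (c * Int(T / lambda dlambda)),
    # with wave in Angstroms and S_lambda in erg/s/cm^2/A (assumed units).
    numerator = simpson(s_lambda * transmission * wave, x=wave)
    denominator = C_ANGSTROM * simpson(transmission / wave, x=wave)
    return numerator / denominator       # erg/s/cm^2/Hz

def fnu_to_ab(fnu):
    return -2.5 * np.log10(fnu) - 48.60  # AB zero point for cgs f_nu

# Hypothetical usage: interpolate a Green Pea spectrum onto each filter's
# transmission curve, then form g-r and r-z colors for the color-color diagram.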
The identification of this interloper population allows us to apply a specific and minimal LAE selection cut in grz color-color space, along with a (gr - N501) color excess criterion, to eliminate bright z = 0.35 emission line galaxy interlopers (see Figure <ref>). These additional cuts not only significantly enhance the purity of our z = 4.5 LAE sample, but also allow us to set aside this unique class of bright z = 0.35 Green Pea-like [O3] emitters for future investigation. We therefore remove all sources that satisfy the following criteria from our z = 4.5 LAE candidate selection and reserve them for an [O3] emitter candidate sample:

* 0.4 ≤ (g-r) ≤ 0.85
* 0.05 ≤ (r-z) ≤ 0.55
* (gr - N501) ≥ 0.2

Finally, we reject three additional spectroscopically confirmed low-redshift interlopers from our LAE sample. Supplementally, we can also generate a sample of z = 0.81 [O2] emitter candidates by carrying out a (r-N673) selection and reserving objects that reside within the region of grz color-color space defined by the criteria below:

* -0.89 ≤ (g-r) - 1.2(r-z) ≤ -0.40
* 0.48 ≤ (g-r) + 0.56(r-z) ≤ 1.18

§.§ Selection of z = 3.1 LAEs and z = 0.35 [O2] Emitters

For our z = 3.1 LAE selection, we carry out hybrid-weighted double-broadband continuum estimation using the N501, g-band, and r-band filters (see Figure <ref> and Table <ref>). Since the [O3] emission lines occur at rest-frame wavelengths of 495.9 nm and 500.7 nm, only very low-z galaxies would have these emission lines at 501 nm. Because the EW distributions and luminosity functions of low redshift interlopers are lower at lower redshift, low redshift [O3] emitters do not pose a threat to the purity of our z = 3.1 LAE sample. Additionally, due to their low redshifts, we expect most of these objects to be eliminated by the half-light radius cut. Therefore, the most likely source of low redshift interloper contamination is z = 0.35 [O2] emitters. That being said, the EW of the [O2] emission line tends to be significantly smaller than the corresponding [O3] EW and the typical Lyα EW. In order to ensure that there is minimal contamination from z = 0.35 [O2] emitters, we utilize the N673 narrowband filter, which is designed to pick up the [O3] emission line for z = 0.35 galaxies (as discussed in the previous subsection). We find that the (gi - N673) color for the z = 3.1 LAE candidate sample is symmetrically distributed about a mean of -0.013. This shows that, as expected, if any [O2] contaminants do exist, they are not also bright in [O3]. We also find that the (gi - N673) color for the original z = 3.1 LAE candidate sample increases when restricted to the region of grz color-color space where we previously identified the population of z = 0.35 [O3] emitters in our N673-detected LAE sample. These conclusions imply that our LAE candidate sample does not contain noticeable contamination from z = 0.35 [O2] emitters, which is consistent with previous results from <cit.>. Lastly, we remove eight additional spectroscopically confirmed low-redshift interlopers from our LAE sample and confirm 4 LAE redshifts.

§.§ Selection of z = 2.4 LAEs

For our z = 2.4 LAE selection, we carry out hybrid-weighted double-broadband continuum estimation using the N419, g-band, and r-band filters (see Figure <ref> and Table <ref>). Rather than using broadband filters on either side of the narrowband filter to estimate our continuum (i.e., the u-band and r-band), we choose to use the g and r broadband filters to define the galactic continua.
This is advantageous because it makes it possible to select z = 2.4 LAE candidates without direct use of the u-band filter, which is shallower than the g and r-bands. This choice is also beneficial because the u-band data cover a smaller area and are plagued by more systematic issues than the g and r-bands.

Out of our three LAE candidate samples, the z = 2.4 LAE sample using our N419 filter is the least susceptible to low redshift emission line galaxy interlopers (with the exception of inevitable narrow and broad emission line AGN). It is nonetheless important to complete a thorough spectroscopic follow-up on this candidate sample to fully assess its purity. Lastly, we reject 27 additional spectroscopically confirmed low-redshift interlopers from our LAE sample and confirm 9 LAE redshifts.

§ RESULTS

Using our selection criteria, we find samples of 6,339 z = 2.4 LAEs, 6,056 z = 3.1 LAEs, and 4,225 z = 4.5 LAEs in the extended COSMOS field (∼9 deg^2). The number of candidates remaining after each step in the LAE selection pipeline is presented in Table <ref>. The samples correspond to LAE densities of 0.23, 0.22, and 0.15 arcmin^-2, respectively. We also find 776 z = 0.35 [O3] emitters and 398 z = 0.81 [O2] emitters. There are 21 spectroscopic redshift matches in the z = 0.35 [O3] emitter catalog and 3 in the z = 0.81 [O2] emitter catalog. All of these matches are in the corresponding redshift ranges except for one [O3] emitter candidate with z = 0.39. We present the color-magnitude LAE selection diagrams for all three redshifts in Figure <ref>, where the LAEs are displayed in color and a sub-sample of random field objects is shown in grey. We present the spatial distribution of LAEs in each sample in Figure <ref>. The latter plots show that there are no pronounced systematic effects impacting our LAE selection as a function of spatial position at any of the three redshifts. The overdense regions in these figures also suggest that there are unique structures in the LAE candidate populations, providing the starting point for a subsequent clustering analysis (B. Benda et al. in prep).

§.§ Scaled Median Stacked SEDs

Spectral Energy Distribution (SED) stacking is a technique used to represent generalized characteristics of a sample of objects. When creating a stacked SED, it is assumed that all galaxies in the sample have similar physical properties and that the properties of the stacked SED will match the physical properties of typical individual galaxies. As a consequence, every stacking method has the limitation that it cannot capture the diversity of properties in a galaxy sample. However, SED stacking can be a helpful tool for understanding sample purity, especially for objects with faint continuum emission and expected continuum breaks, such as LAEs. There are two primary classes of stacking: image stacking and flux stacking. Within each of these classes, there are three predominant stacking methods: mean, median, and scaled median. Mean stacking yields a good representative value if there are no outliers in the sample, but the result can be skewed if there is a wide spread in galaxy characteristics or contamination from AGN or low-z interlopers. Median stacking is less susceptible to outliers and contaminants, but it does not take into account the spectral shapes of all objects in a sample and is relatively inefficient.
<cit.> showed that the best simple stacking method for representing the SED properties of z = 2.1 LAEs is scaled median stacking, which has the added advantage that the influence of overall brightness variations is removed. In this study we follow in the footsteps of <cit.> and use flux scaled median stacking for our population SEDs. We outline the procedure for this method below.

In order to create scaled median stacked SEDs, we first find the median of the flux densities in our scaling filter, f̃_scale. Then, we create a scaling factor δ_i for each source by computing the ratio of the median flux density in our scaling filter f̃_scale to the flux density measurement of each source in our scaling filter f_scale,i:

δ_i = f̃_scale / f_scale,i

Next, we calculate the scaled flux density in a filter, [F_filt,i], by multiplying the flux density measurement in that filter f_filt,i by the scaling factor δ_i:

[F_filt,i] = f_filt,i × δ_i

Lastly, we use the median of the scaled flux densities of all sources in the filter to determine the filter's scaled median stacked flux density F̃_filt. By following this prescription for all of our filters, we create a scaled median stacked SED for each LAE sample. In addition to carrying out scaled median stacking, we also create median stacked and mean stacked SEDs for comparison. We find that the mean stacked SED yields flux densities that are much larger than those of our (scaled) median stacked SEDs. This confirms that mean stacking is highly susceptible to outliers and brightness variations in our LAE sample. In contrast, we find that our scaled median stacked SEDs and our median stacked SEDs do not yield drastically different results, though our scaled median stacked SEDs have smaller interquartile ranges. We also find that our scaled median stacked SEDs are robust to changes in the scaling filter for all filters except the u-band. This is not surprising. At z = 3.1 and 4.5, the u filter's bandpass lies partially or entirely blueward of the Lyman break, and even at z = 2.4, the flux recorded by the filter is strongly affected by the Lyα forest. Large stochastic differences in u-band flux are therefore expected <cit.>. We present the results of these stacked LAEs in color-magnitude space in Figure <ref>; u-band data have been excluded for the aforementioned reasons. Although Figure <ref> only shows the results for the z = 2.4 LAEs, the behavior is similar across all three redshifts. Overall, we find that the standard deviation in narrowband magnitude for the narrowband, g, r, i, z, and y scaled median stacked LAEs is 0.04 ± 0.01 magnitudes and the standard deviation in scaled median (ab - NB) color is 0.02 ± 0.02. The agreement among these values argues that our scaled median stacking methods are robust. These results also reinforce the conclusion that scaled median stacking is a defensible method for this analysis.

To form our LAE SEDs, we began by normalizing each galaxy's flux density to its measurement in the i-band; this filter does not contain a Lyα emission line nor any other strong spectral line feature at z = 0.35, 0.81, 2.4, 3.1, or 4.5, and its use minimizes the interquartile ranges of our flux density values. Additionally, we exclude objects with i-band magnitude ≥ 40 from our SEDs, since their very low i-band flux densities produce large scaling factors that artificially inflate the scaled flux densities of the other filters.
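The scaled median stacking prescription above reduces to a few array operations; a minimal sketch, assuming an (N_source × N_filter) flux-density matrix and treating the choice of scaling column as an input, is:

import numpy as np

def scaled_median_stack(flux, scale_col):
    # flux: (N_source, N_filter) array of flux densities.
    f_scale = flux[:, scale_col]
    delta = np.median(f_scale) / f_scale        # per-source scaling factor delta_i
    scaled = flux * delta[:, None]              # scale every filter of each source
    return np.median(scaled, axis=0)            # scaled median stacked SED

# e.g. stacked_sed = scaled_median_stack(flux_matrix, scale_col=i_band_index)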
We also do not include objects with no u-band data from the u-band stacks since the u-band covers a smaller area than the HSC filters used in the selection process. We can assess the overall success of our LAE selection by examining their stacked SEDs. In Figure <ref>, we present the i-band scaled median stacked SEDs for the z = 2.4, 3.1, and 4.5 LAE candidate samples. These SEDs contain the key features that we expect to find in LAE spectra. Firstly, there is clear evidence for absorption by the Lyα forest in all three SEDs. The Lyα forest is characterized by absorption from hydrogen gas clouds in between the observer and the galaxy. This absorption occurs from the Lyα line down to shorter wavelengths, so we expect the Lyα forest decrement to occur most distinctly in the broadband whose effective wavelength is immediately below the effective wavelength of the corresponding narrowband. Our SEDs reveal that the Lyα forest decrement is present in the u-band for z = 2.4, in N419 at z = 3.1, and in the r-band at z = 4.5. We do not see a clear decrement in the g-band for the z = 3.1 SED because in this case the g-band also includes the Lyα emission line. We also find that the Lyman break is present in our SEDs. The Lyman break is characterized by the complete absorption of ionizing photons by gas below the short-wavelength end of the Lyman series transitions, the Lyman limit. In the rest-frame, this limit corresponds to 91.2 nm. At a redshift of 2.4, we expect the Lyman limit to occur at ∼310 nm. Because this wavelength falls out of the transmission ranges of our broadband filters, we do not see evidence for (or against) the Lyman limit in our z = 2.4 LAE candidate SED. At a redshift of 3.1, we expect the Lyman limit to occur at ∼374 nm. This is close to both the effective wavelength and long-wavelength limit of the u-band (see Table <ref>). We see a strong effect from the Lyman break in the u-band for our z = 3.1 LAE SED. For the redshift 4.5 LAEs, we expect to find the Lyman limit at ∼502 nm. This is ∼20 nm longer than the effective wavelength and ∼50 nm shorter than the long-wavelength limit of the g-band (see Table <ref>). Therefore, in the g-band, N419, and N501 we see the partial effect of the Lyman break and in the u-band we see the full effect of the Lyman break. Across the three redshifts, the strong presence of the Lyα forest decrement and the Lyman break suggests the general success of our LAE selections.In Figure <ref>, we also present the i-band scaled median stacked SEDs for the z = 0.35 [O3] emitters, z = 0.81 [O2] emitters, and z = 4.5 LAEs. Since all of these samples were selected from the N673-detected SE catalog, comparing them offers valuable insight into the success of our interloper rejection/selection methods. We find that the z = 0.35 [O3] emitters and the z = 0.81 [O2] emitters are generally much brighter in the i-band than z=4.5 LAEs; this is consistent with their much smaller luminosity distances. Additionally, the z = 0.35 [O3] emitters have significant flux density in the N501 filter due to the presence of the redshifted [O2] emission. Furthermore, we find that the Green-Pea like galaxies have heightened flux density in the r-band due to the presence of the Hβ, [O3]λ4959, and [O3]λ5007 emission lines and in the z-band due to the presence of the Hα emission line (see Figure <ref>). Similarly, the z = 0.81 [O2] emitter systems have an excess of flux density in the z-band due to the presence of the Hβ, [O3]λ4959, and [O3]λ5007 emission lines (see Figure <ref>). 
Lastly, we find that both the z = 0.35 [O3] emitters and the z = 0.81 [O2] emitters have significant emission in the g-band and u-band, whereas the z = 4.5 LAEs exhibit the presence of a partial and full Lyman break in these filters. These features imply that our low-redshift emission line galaxy interloper rejection/selection methods are successful. §.§ Lyα Equivalent Width Distributions Now that we have shown our LAE samples have high levels of purity, we can use them to quantify the Lyα Equivalent Width (EW) distribution at each redshift. We define the EW as the width of a rectangle from zero intensity to the continuum level with the same area as the area of the emission line. Physically, the Lyα EW is related to the burstiness of LAEs since it compares the Lyα emission from O and B stars to the continuum emission from O, B, and A stars (with radiative transfer) <cit.>. Therefore, quantifying the Lyα EW distribution is helpful for comparing sample characteristics of LAEs.We derive the rest-frame Lyα equivalent width EW distribution for each LAE sample following the methodologies of <cit.> and <cit.>. For a detailed derivation, see the Appendix of <cit.>. First, we take the rest-frame equivalent width EW as EW_obs/(1+z), where the observed equivalent width EW_obs is defined as follows.EW_obs = A/B,where A and B are described in Equations <ref> and <ref>. A = Q_NB - Q_ab 10^ (ab - NB)/2.5B = w_BB T_EL,BB (c/λ_EL^2) 10^( (ab - NB)/2.5 )/∫ (c/λ^2) T_BB(λ) dλ - T_EL,NB (c/λ_EL^2)/∫ (c/λ^2) T_NB(λ) dλIn Equation <ref>, Q is the fraction of the continuum flux in a particular filter that is transmitted by the Lyα forest, ab is the double broadband magnitude and NB is the narrowband magnitude. We define Q using Equation <ref>, Q_filt = ∫ e^-τ_eff(λ) (c/λ^2) T(λ) dλ/∫ (c/λ^2) T(λ)dλ,where T(λ) is the filter transmission at a given wavelength and τ_eff(λ) is the effective opacity of HI. For this analysis, we use the Equation <ref> as an approximation for all observed wavelengths below the redshifted Lyα line <cit.>. τ_eff(λ) = 0.0036 ( λ/1216 Å) ^3.46In Equation <ref>, BB refers to the selection broadband that also has a flux contribution from the emission line and w_BB is the weight assigned to that broadband. For the (rg-N419) z = 2.4 LAE selection and the (gr-N419) z = 3.1 LAE selection, this broadband corresponds to the g-band. In the case of the (gi-N673) z = 4.5 LAE selection, neither of the broadband filters have a flux contribution from the emission line, so the first term in Equation <ref> vanishes entirely. T_EL is obtained by averaging the filter transmission over the narrowband filter transmission curve, which is used as a proxy for the LAE redshift probability distribution function. This is justifiable since the filter transmission curve is close to a top hat. λ_EL is the wavelength corresponding to the emission line, i.e., the narrowband effective wavelength. We fit the resulting Lyα EW distributions using an exponential distribution as shown in Equation <ref> and a Gaussian distribution as shown in Equation <ref>, where N is the number of LAEs in a given EW bin, C is a constant of the fit, EW is the rest-frame Lyα EW, and w_0 and σ_gauss are the respective scale lengths in Angstroms. N = C exp(-EW/w_0) N = C exp(-EW^2/2σ_gauss^2) We present these Lyα EW distributions and fits for the z = 2.4, 3.1, and 4.5 LAE samples in Figure <ref>. To obtain a robust fit, we choose to clip our distributions at a minimum EW of 40 Å. 
We also choose to exclude objects with EW above 400 Å since the highest equivalent widths are associated with galaxies that are extremely faint in the continuum, and thus poorly measured. This results in the exclusion of less than 1% of our LAE sample. Lastly, we choose to use 200 bins for the fits, corresponding to the minimum bin number for which the scale lengths for all three datasets become stable. We find that the exponential scale lengths for the three LAE samples are w_0 = 55 ± 1, 65 ± 1, and 62 ± 1 Å; and the Gaussian scale lengths are σ_gauss = 75 ± 1, 86 ± 2, and 81 ± 2 Å, respectively. The reduced χ^2 values for the exponential scale lengths are 1.8, 1.4, and 1.3; and the reduced χ^2 values for the Gaussian scale lengths are 3.6, 2.8, and 2.2, respectively. The reduced χ^2 values consistently favor an exponential fit over a Gaussian fit. Although there is significant variation in the literature results, the w_0 scale lengths are similar to previous findings and the σ_gauss are ∼10-40% lower than most previous findings <cit.>. Contamination from low-redshift emission-line or continuum-only galaxies tends to reduce the scale lengths; contamination from spurious objects tends to increase them, since the EWs are formally infinite when an "object" is only luminous in the narrow-band. We have made a careful effort to avoid all of these types of contamination, with our stacked SED analysis providing evidence against significant low-redshift contamination. Our finding that the EW scale-length is on the lower end of results in the literature makes it unlikely that we suffer from significant contamination from spurious objects. Gathering the full ODIN LAE sample and obtaining precise measurements of contamination rates from each type of interloper should result in even higher precision in measuring Lyα EW distributions.

§.§ LAEs with Measured EW ≥ 240 Å

Additionally, we investigate the objects with EW ≥ 240 Å. It has been speculated that a real LAE with EW in this regime could have a normal stellar population with a clumpy dust distribution or could be composed of young, massive, metal-poor stars or Population III stars; however, measurements of the short-lived He II λ1640 line and C IV λ1549 render the true composition of these systems ambiguous <cit.>. We find that there are 515, 566, and 263 LAEs in this regime at z = 2.4, 3.1, and 4.5, respectively. We seek to understand the likelihood that these objects are real and are not the result of noise. In order to accomplish this, we first truncate our EW distributions at 240 Å and forward model the scatter in our data using a bootstrapping method to see how many objects exceed an EW of 240 Å. We accomplish this by taking each observed object with EW < 240 Å, perturbing its double broadband magnitude and its narrowband magnitude by random values within one sigma of that object's noise, and then re-calculating EW. We then carry out this process multiple times until the average fraction of objects above 240 Å converges. Using this method, we find that 35%^+4_-0.3, 32%^+2_-0.04, and 40%^+0.2_-2 of the objects above 240 Å can be explained by noise, respectively. However, we find that objects with EW ≥ 240 Å tend to have higher noise in their double broadband magnitudes than objects with EW < 240 Å.
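The forward-modeling of the scatter can be sketched as follows (illustrative only, not the code used for this paper). Here `ab` and `nb` are hypothetical arrays of double-broadband and narrowband magnitudes, `ab_err` and `nb_err` their one-sigma uncertainties, and `rest_ew` a hypothetical function implementing the rest-frame EW of the equations above.

```python
# Illustrative sketch of the noise forward-modeling described above.
import numpy as np

def frac_scattered_above(ab, nb, ab_err, nb_err, rest_ew, cut=240.0,
                         n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    low = rest_ew(ab, nb) < cut                        # start from the EW < 240 A sample
    fracs = []
    for _ in range(n_iter):
        ab_p = ab[low] + rng.normal(0.0, ab_err[low])  # perturb within 1 sigma
        nb_p = nb[low] + rng.normal(0.0, nb_err[low])
        fracs.append(np.mean(rest_ew(ab_p, nb_p) >= cut))
    return np.mean(fracs), np.std(fracs)
```

As noted above, the high-EW objects have systematically larger double-broadband uncertainties, which motivates the second test described next.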
In order to account for this, we apply a similar noisification method where we instead take each observed object with EW < 240 Å and apply random noise values from the high EW sample to the double broadband magnitude and the narrowband magnitude, then re-calculate EW. Using this method, we find that 154%^+6_-0.2, 76%^+3_-0.4, and 179%^+6_-2 of the objects above 240 Å can be explained by noise, respectively. Although the bootstrapping method suggests that there may be objects with truly high EW in all three samples, the latter method implies that the high EW objects might be explained by the large fraction of the sample that is formally undetected in the broad-band imaging, leading to large uncertainties in EW. Follow-up spectroscopy is needed to find out how many of our LAEs truly have EW ≥ 240 Å. § CONCLUSIONS AND FUTURE WORK ODIN is a NOIRLab survey program designed to discover LAEs by combining data taken through three narrowband filters custom-made for the Blanco 4-m telescope's DECam imager <cit.> with archival broadband data from the HSC and CLAUDs. ODIN's narrowband filters, N419, N501, and N673, allow us to identify samples of LAEs at redshifts 2.4, 3.1, and 4.5, corresponding to epochs 2.8, 2.1, and 1.4 Gyrs after the Big Bang, respectively. When the ODIN survey is complete, we expect to discover >100,000 LAEs in 7 of the deepest wide-imaging fields up to a narrowband magnitude of ∼25.7 AB, covering an area of ∼100 deg^2. In this paper, we used data from ODIN's first completed field covering ∼9 deg^2 in COSMOS to introduce innovative techniques for selecting LAEs and other emission line galaxy samples using narrowband imaging. These include LAE samples at z = 2.4, 3.1, and 4.5, as well as samples of z = 0.35 [O3] emitters and z = 0.81 [O2] emitters. The main conclusions of this work are summarized below: * We developed a narrowband LAE selection method that utilizes a new technique to estimate emission line strength, the hybrid-weighted double-broadband continuum estimation technique. Using this technique, we treated sources with S/N ≥ 3 in both single broadbands by assuming a power law SED and treated sources with S/N < 3 in either broadband by assuming a linear spectral slope. This technique allowed us to better estimate expected continuum emission at the location of each narrowband filter by utilizing data from any two nearby broadbands. This method enabled the flexibility to choose optimal broadband filters that maximize the data area and quality and to avoid broadbands that may be heavily impacted by features in low redshift emission line interlopers.* Utilizing this new technique, we performed z = 2.4, 3.1, and 4.5 LAE candidate selections in the extended COSMOS field using broadband data from the HSC and narrowband data collected with DECam. We used the N419, r, and g-bands for our initial z = 2.4 LAE selection; the N501, g, and r bands for our initial z = 3.1 LAE selection; and the N673, g, and i bands for our initial z = 4.5 LAE selection.* We found that the main source of low redshift emission line contamination in our LAE samples was very bright z = 0.35 Green Pea-like galaxies. Our data also revealed that these galaxies occupy a compact and distinct region of grz color-color space. Moreover, since the ODIN survey was designed in anticipation of z = 0.35 contaminants, the filter bandpasses were designed to ensure that the majority of z = 0.35 emission line galaxies will have [O3] emission in the N673 narrowband filter and [O2] emission in the N501 narrowband filter. 
Despite having emission lines detectable in both the N673 and the N501 narrowband filters, our results suggested that these z = 0.35 bright Green Pea-like galaxies are only a strong source of contamination in our N673 z = 4.5 LAE selection. By taking advantage of the grz color criteria and the estimated N673 and N501 excess flux densities, we were able to identify and set aside a sample of 776 z = 0.35 Green Pea-like objects for further analysis. Although we did not find that z = 0.81 [O2] emitters are a notable source of contamination in our z = 4.5 LAE candidate sample, we found that they also occupy a compact and distinct region of grz color-color space and are selectable using the N673 and r-band filters. Thus, we also set aside a sample of z = 0.81 [O2] emitter galaxies for future analysis.* We found that there are 6,339, 6,056, and 4,225 LAEs at z = 2.4, 3.1, and 4.5, respectively, in the extended COSMOS field (∼9 deg^2). The samples imply LAE surface densities of 0.23, 0.22, and 0.15 arcmin^-2, respectively. These results were in agreement with the predictions outlined in <cit.>. We also defined samples of 776 z = 0.35 Green Pea-like galaxies and 398 z = 0.81 [O2] emitters.* We developed i-band flux density scaled median stacked SEDs for the z = 2.4, 3.1, and 4.5 LAE samples as well as the z = 0.35 Green Pea-like [O3] emitter and z = 0.81 [O2] emitter galaxy contaminants. We found that our z = 2.4, 3.1, and 4.5 LAE SEDs display clear features that are unique to LAEs such as the Lyα forest decrement and Lyman break.We found that our z = 0.35 Green Pea-like [O3] emitter and z = 0.81 [O2] emitter SEDs have features unique to their respective populations. Our stacked SEDs revealed broad consistency in each sample, implying that our samples have high levels of purity.* We calculated Lyα equivalent width distributions for the z = 2.4, 3.1, and 4.5 LAE samples. We found that the EW distributions are best fit by exponential functions with scale lengths of w_0 = 55 ± 1, 65 ± 1, and 62 ± 1 Å, respectively. These scale lengths are on the lower end of the values reported in the literature. The precision of these measurements should improve for the considerably larger LAE sample expected from the full ODIN survey.* We found that an impressive ∼ 10% of our LAE samples have measured rest-frame equivalent width ≥ 240 Å, providing possible evidence of non-standard IMFs or clumpy dust. However, deep spectroscopic follow-up is needed to ascertain how many of these equivalent widths are real vs. noise due to low continuum S/N.ODIN's LAE samples will allow us to quantify the temporal evolution of LAE clustering properties, bias, dark matter halo masses, and halo occupation fractions (B. Benda, in prep.). As HETDEX and DESI-II work to probe dark energy using LAEs, ODIN's improved understanding of which dark matter halos host LAEs can allow these groups to better simulate their systematics, and will have a direct impact on their measurements of cosmological constraints. Furthermore, ODIN's LAE sample will allow us to uncover properties of individual LAEs such as their stellar mass, star formation rate, dust attenuation, timing of stellar mass assembly, and the processes of star formation and quenching. Once completed, this work will help us to better understand the relationship between LAEs, their present day analogs, and their primordial building blocks.§ ACKNOWLEDGEMENTS This work utilizes observations at Cerro Tololo Inter-American Observatory, NSF’s NOIRLab (Prop. ID 2020B-0201; PI: K.-S. 
Lee), which is managed by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2233066 to NF. NF and EG would also like to acknowledge support from NASA Astrophysics Data Analysis Program grant 80NSSC22K0487 and NSF grant AST-2206222. NF would like to thank the LSSTDA Data Science Fellowship Program, which is funded by LSST Discovery Alliance, NSF Cybertraining Grant 1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work greatly. KSL and VR acknowledge financial support from the National Science Foundation under Grant No. AST-2206705 and from the Ross-Lynn Purdue Research Foundation Grant. BM and YY are supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (2019R1A2C4069803). LG and AS acknowledge recognition from Fondecyt Regular no. 1230591. HS acknowledges the support of the National Research Foundation of Korea grant, No. 2022R1A4A3031306, funded by the Korean government (MSIT). The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University. We thank Masami Ouchi for helpful comments on this paper.
http://arxiv.org/abs/2312.16075v1
{ "authors": [ "Nicole M. Firestone", "Eric Gawiser", "Vandana Ramakrishnan", "Kyoung-Soo Lee", "Francisco Valdes", "Changbom Park", "Yujin Yang", "Robin Ciardullo", "María Celeste Artale", "Barbara Benda", "Adam Broussard", "Lana Eid", "Rameen Farooq", "Caryl Gronwall", "Lucia Guaita", "Stephen Gwyn", "Ho Seong Hwang", "Sang Hyeok Im", "Woong-Seob Jeong", "Shreya Karthikeyan", "Dustin Lang", "Byeongha Moon", "Nelson Padilla", "Marcin Sawicki", "Eunsuk Seo", "Akriti Singh", "Hyunmi Song", "Paulina Troncoso Iribarren" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231226145039", "title": "ODIN: Improved Narrowband Ly$α$ Emitter Selection Techniques for $z$ = 2.4, 3.1, and 4.5" }
Kuramoto Oscillators: algebraic and topological aspects

Heather A. Harrington. Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK; Wellcome Centre for Human Genetics, University of Oxford, Oxford OX3 7BN, UK; Max Planck Institute of Molecular Cell Biology and Genetics, 01307 Dresden, Germany; Centre for Systems Biology Dresden, 01307 Dresden, Germany; Faculty of Mathematics, Technische Universität Dresden, 01062 Dresden, Germany. https://www.maths.ox.ac.uk/people/heather.harrington. Harrington supported by EPSRC EP/R018472/1, EP/R005125/1, EP/T001968/1, RGF 201074, UF150238, and a Royal Society University Research Fellowship.

Hal Schenck. Department of Mathematics, Auburn University, Auburn, AL 36849 and Mathematical Institute, University of Oxford, Oxford UK. http://webhome.auburn.edu/~hks0015/. Schenck supported by NSF DMS 2006410 and a Leverhulme Visiting Professorship.

Mike Stillman. Department of Mathematics, Cornell University, Ithaca, NY 14850 and Mathematical Institute, University of Oxford, Oxford UK. https://math.cornell.edu/michael-e-stillman. Stillman supported by NSF DMS 2001367 and a Simons Fellowship.

2010 Mathematics Subject Classification: 90C26, 90C35, 34D06, 35B35.

January 14, 2024

Abstract. We investigate algebraic and topological signatures of networks of coupled oscillators. Translating dynamics into a system of algebraic equations enables us to identify classes of network topologies that exhibit unexpected behaviors. Many previous studies focus on synchronization of networks having high connectivity, or of a specific type (e.g. circulant networks). We introduce the Kuramoto ideal; an algebraic analysis of this ideal allows us to identify features beyond synchronization, such as positive dimensional components in the set of potential solutions (e.g. curves instead of points). We prove sufficient conditions on the network structure for such solutions to exist. The points lying on a positive dimensional component of the solution set can never correspond to a linearly stable state. We apply this framework to give a complete analysis of linear stability for all networks on at most eight vertices. Furthermore, we describe a construction of networks on an arbitrary number of vertices having linearly stable states that are not twisted stable states.

§ INTRODUCTION Dynamics on networks is an active area of mathematical research, with wide applicability in various fields including physics, engineering, biology and neuroscience. The study of dynamics on networks often involves understanding how the structure of the network influences the dynamics of the system.
Dating back to the 17th century, the Dutch inventor and scientist Christiaan Huygens observed that two pendulum clocks hanging from a wall would synchronise their swing, which led to the study of coupled oscillators. Coupled oscillators appear in numerous applications, including biological and chemical networks <cit.>, <cit.>, power grids <cit.>, neuroscience <cit.>, spin glasses <cit.>, and wireless communications <cit.> (to name just a few). A much studied question involving systems of oscillators is that of synchronization: under what conditions do the oscillators operate in harmony? One particularly well known instance of this involves the flashing of fireflies <cit.>. Our study focuses on understanding the algebra of Kuramoto oscillators, and using algebraic methods to analyze aspects of networks beyond synchronization. We give a complete description of networks of coupled oscillators with at most eight vertices that have linearly stable solutions. Our analysis also leads us to a theorem characterizing sufficient conditions for a network to have positive dimensional components in the set of potential solutions.

§.§ Background on Kuramoto oscillators

One of the most investigated oscillator models is due to Kuramoto <cit.>. Let G be a graph with V vertices, representing the coupling of a system of oscillators, and for vertex v_i, let G_i denote the set of vertices adjacent to v_i. The Kuramoto model is the system of V equations below, where the phase θ_i is the angle at vertex i at time t, ω_i is the natural frequency of the i^th oscillator, and K is the coupling strength: θ̇_i = ω_i + K ∑_v_j ∈ G_i sin(θ_j-θ_i). Perhaps the most frequently studied case is the homogeneous model, where the system consists of identically coupled phase oscillators; this allows us to assume ω_i=0 and K=1, leading to the equations θ̇_i = ∑_v_j ∈ G_i sin(θ_j-θ_i). We say that θ={θ_1^*,…, θ_V^*} ∈ ℝ^V is an equilibrium if for all vertices v_i, 0 = ∑_v_j ∈ G_i sin(θ_j^*-θ_i^*). The homogeneous Kuramoto model has the simplest possible long-term behavior that one can hope to have in a dynamical system: the solutions of the associated differential equations always approach equilibrium states as time tends to infinity. No other long-term behavior is possible: no limit cycles, no quasiperiodic solutions, no chimera states, no chaos. Nothing but equilibrium points. Therefore, for the homogeneous Kuramoto model, the natural question boils down from dynamics to algebraic geometry: what is the structure of the set of equilibrium states for a network of identical Kuramoto oscillators? The equilibrium states can be expressed as solutions of a system of algebraic equations, hence we also refer to equilibrium points as solutions. Solutions such that θ_i^*-θ_j^* ∈ {0, π} are called standard. Our perspective yields new insights into the homogeneous Kuramoto model: for example, a continuous family of equilibrium states will correspond to a geometric object, and we will be interested in determining characteristics (for example, the dimension) of that object. Due to the rotational symmetry in the Kuramoto model, we always have at least a circle of equilibrium points. Changing coordinates so that θ_0 = 0 allows us to ignore the trivial rotational symmetry. This yields a dichotomy, with some genuine isolated equilibrium points, and others part of a continuous family, and we refer to points in such a family as positive dimensional solutions.
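Throughout, we only ever need to evaluate the right-hand side of the homogeneous system and test whether a candidate point is an equilibrium. The following is a minimal numerical sketch (for illustration only; it is not part of the Macaulay2 package described in Section 4), with a graph given by an edge list.

```python
# Sketch: evaluate the homogeneous Kuramoto right-hand side and test equilibria.
import numpy as np

def kuramoto_rhs(theta, edges):
    """d(theta_i)/dt = sum over neighbors j of sin(theta_j - theta_i)."""
    f = np.zeros_like(theta)
    for i, j in edges:
        f[i] += np.sin(theta[j] - theta[i])
        f[j] += np.sin(theta[i] - theta[j])
    return f

def is_equilibrium(theta, edges, tol=1e-9):
    return np.max(np.abs(kuramoto_rhs(theta, edges))) < tol

# The fully synchronized state is always an equilibrium, e.g. on a 4-cycle:
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_equilibrium(np.zeros(4), C4))   # True
```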
Of particular importance in understanding the long-term behavior of the system are those solutions which are stable. Let p be a solution to a system of first order ODEs. Then p is (locally asymptotically) stable if there exists a small open neighborhood N_ϵ(p) of p such that for any q ∈ N_ϵ(p), as the system moves forward in time from the point q, the solution converges back to the point p. A system is said to synchronize if the only stable solution occurs when the θ_i^* are all equal. What properties of G guarantee that a system synchronizes? The connectivity of G is defined as μ_c(G) = min_v ∈ G { deg(v)/(V-1) }, and in <cit.>, Taylor shows that if μ_c(G) ≥ .94 then the corresponding network synchronizes. Recent work of Ling-Xu-Bandeira <cit.> improves this bound to show synchronization when μ_c(G) ≥ .79. Non-standard stable solutions exist for certain configurations, such as when G is a cycle of length ≥ 5, which are the simplest avatars of the circulant matrices analyzed in <cit.>. For a system of ODEs given by θ̇_i = f_i, we study the related condition of linear stability: that for the Jacobian matrix

J(p) = [ ∂f_1/∂θ_1(p) ⋯ ∂f_1/∂θ_n(p); ⋮ ⋱ ⋮; ∂f_n/∂θ_1(p) ⋯ ∂f_n/∂θ_n(p) ]

all eigenvalues have negative real part; as the homogeneous Kuramoto model has symmetric Jacobian, this means all eigenvalues are negative. For a homogeneous Kuramoto system and solution p, it is easy to see that the rows of J(p) sum to zero, hence dim ker(J(p)) ≥ 1 and there is always one zero eigenvalue. In keeping with convention, we call a solution to Equation <ref> linearly stable if it has all but one eigenvalue negative, because one of the equations can be eliminated. In this paper, our goal is to understand the algebra of the solution sets to the Kuramoto equations appearing in Definitions <ref> and <ref> below, and the connection to the topology of the graph G. In <cit.>, the authors apply numerical algebraic geometry to the Kuramoto model and remark that the investigation of positive dimensional components is beyond the scope of their paper. Using a combination of numerical and symbolic methods we address this in Section 2. Our focus is on the following two questions: * What graphs admit a positive dimensional set of possible solutions? * What graphs admit exotic solutions: linearly stable solutions where the θ_i are not all equal? One common type of exotic solution is a twisted stable state, where there is a periodic shift in the angles. A cycle C_n with n ≥ 5 always has twisted stable solutions, but these can also arise for noncycles, as in Example <ref>.

§.§ Conventions

All graphs we work with are SCT graphs: graphs that are simple (no loops or multiple edges), connected, and two-connected (all vertices of degree ≥ 2). For a graph that does have a vertex v_0 of degree one, if v_0v_1 denotes the edge connecting v_0 to G, then choosing coordinates so θ_1 = 0 shows the only possible equilibrium values for θ_0 are {0, π}. Negative semidefinite matrices form a convex cone generated by rank one matrices, and a simple calculation shows that if G' = G ∖ v_0 has an exotic solution, so does G. We are investigating the other direction: when does adding a "peninsular" vertex to a graph introduce new exotic solutions?

§.§ Recent work

Some of the more frequently studied mathematical aspects of Kuramoto oscillators include (but are not limited to) the following: * Synchronization, stability, connectivity: <cit.>. * Special classes of graphs: random, dense, k-connected: <cit.>. * Graphs with exotic stable states: <cit.>.
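The linear-stability test just described reduces to an eigenvalue computation: at an equilibrium p, the Jacobian of the homogeneous system is a (negative) graph Laplacian weighted by cos(θ_j - θ_i) on each edge, a point made precise in Section 3. A short numerical sketch, again for illustration only and reusing the edge-list convention of the previous sketch:

```python
# Sketch of the linear stability test: all eigenvalues of the symmetric
# Jacobian negative except for the single obligatory zero eigenvalue.
import numpy as np

def kuramoto_jacobian(theta, edges):
    n = len(theta)
    J = np.zeros((n, n))
    for i, j in edges:
        w = np.cos(theta[j] - theta[i])   # dF_i/d(theta_j) = cos(theta_j - theta_i)
        J[i, j] += w
        J[j, i] += w
        J[i, i] -= w
        J[j, j] -= w
    return J

def is_linearly_stable(theta, edges, tol=1e-8):
    eig = np.linalg.eigvalsh(kuramoto_jacobian(theta, edges))  # ascending order
    return eig[-1] < tol and eig[-2] < -tol   # one ~zero eigenvalue, rest negative
```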
§.§ Results of this paperThe main advantage of an algebro-geometric approach is that it allows us to identify all solutions to the system of equations <ref>, in particular all linearly stable solutions.* In 2, we prove algebraic and algorithmic criteria for an SCT graph to have positive dimensional solutions to the system of equations <ref>. The significance of this is that any solution on a positive dimensional component cannot be linearly stable.We also prove that all standard solutions must lie on a specific irreducible algebraic variety, the Segre embedding of ^1 ×^n-1. This allows us to eliminate the 2^V-1-1 unstable standard solutions from consideration, simplifying the analysis of potential non-standard solutions..05in* In 3, we use numerical algebraic geometry to identify SCT graphs with V ∈{4,5,6,7,8} vertices which admit an exotic solution. There are, respectively, {3,11,61,507,7442} isomorphism classes of SCT graphs with V ∈{4,5,6,7,8}. Of the graphs on eight vertices, we find 81 having exotic solutions, and every one of these–with one exception–has an induced cycle of length at least five. In general, gluing a cycle on five or more vertices (which has an exotic solution) to an arbitrary graph G along a common edge does not preserve the exotic solution. We show an exotic solution exists for the graph G' obtained by gluing all vertices of a graph G to the two vertices of an edge of a five-cycle. In 4 we give examples of our computations illustrating several interesting phenomena, and in 5 we close with a number of questions for further research that are raised by our results.§.§ Encoding the graph G algebraicallyFor a system of Kuramoto oscillators on a graph G with V=n, label the vertices with { 0, …, n-1 }. We translate from the trigonometric relations of Equation <ref> to algebraic relations via the substitution x_i=sin(θ_i) and y_i=cos(θ_i). This translation yields constraints expressed as an ideal:I_θ = ⟨ x_0^2+y_0^2-1, …, x_n-1^2+y_n-1^2-1 ⟩.The solutions to these equations capture the relationssin^2(θ_i)+cos^2(θ_i)=1. We also need to encode the dynamics of the graph, described by a polynomial equation at each vertex of G:For v_i ∈ G, let G_i denote the set of vertices {v_i_1,…, v_i_k}adjacent to v_i. Then since sin(θ_j-θ_i)=sin(θ_j)cos(θ_i)-sin(θ_i)cos(θ_j)=x_jy_i-x_iy_j, at vertex v_i, we have the equationf_i=∑_v_j ∈ G_i x_jy_i-x_iy_jFor a graph G on vertex set {0,…, n-1}, define I_G = ⟨ f_0, …, f_n-1⟩, with the f_i as above.For the graph G on five vertices depicted below, the ideal I_G is given by[I_G= ⟨ x_2y_0+x_3y_0+x_4y_0-x_0y_2-x_0y_3-x_0y_4,;-x_2y_0-x_2y_1+x_0y_2+x_1y_2,;-x_3y_0-x_3y_1+x_0y_3+x_1y_3,;-x_4y_0-x_4y_1+x_0y_4+x_1y_4,; x_2y_1+x_3y_1+x_4y_1-x_1y_2-x_1y_3-x_1y_4⟩ ]The Kuramoto variety is the set of common solutions in ℂ^2n to the equations f_0 = f_1 = … f_n-1 = x_0^2 + y_0^2 - 1 = … = x_n-1^2 + y_n-1^2 - 1 = 0.The set of polynomials above defines an algebraic object, the Kuramoto oscillator idealI_K = I_θ + I_G,which is an algebraic encoding of the system of equations <ref>. The common zeros of all polynomials in the ideal I_K is the Kuramoto variety which we denote by (I_K).The next section is devoted to the study of algebraic properties of the ideal I_G. § ALGEBRA OF KURAMOTO OSCILLATORS: DETERMINANTAL EQUATIONSAs in 1, for a graph G and vertex v_i ∈ G, let G_i denote the set of vertices adjacent to v_i, and suppose G_i= {v_i_1, …, v_i_k}. 
To simplify notation, let [ x_i∙x_i_1+ ⋯ +x_i_k; y_i∙y_i_1+ ⋯ +y_i_k ] With notation as above, I_G = ⟨[ [x_0 x_0∙;y_0 y_0∙ ]], …, [ [x_n-1 x_n-1∙;y_n-1 y_n-1∙ ]] ⟩,and I_G has V-1 minimal generators (rather than the expected number V). Since [ [x_i x_i∙;y_i y_i∙ ]] = ∑_v_j ∈ G_i x_iy_j-x_jy_ithe first result follows from Definition <ref>. To see that I_G has V-1 minimal generators, first observe thatf_0+f_1+⋯ +f_n-1=0, so one of the f_i is redundant. The proof of Theorem <ref> shows there are no other dependencies, hence I_G has V-1 minimal generators. §.§ The Segre varietyIn classical algebraic geometry, the Segre variety (<cit.>, Exercise 13.14), is the image of the map^s ×^t 𝕊_s,t⟶^st+s+t,defined by [a_0:⋯:a_s] × [b_0:⋯:b_t] ↦ [a_0b_0: a_0b_1:⋯: a_0b_t:a_1b_0: ⋯ :a_sb_t]The Segre variety with s=1 and t=V-1 (henceforth written simply as Σ) plays an important role in the study of Kuramoto oscillators: we will see in 2.4 that all standard solutions lie on Σ. For ^1 ×^V-1, the target space of 𝕊_1,V-1 is ^2V-1, and the ideal I_Σ of polynomials vanishing on the image Σ is I_Σ = I_2[ [ x_0 x_1 ⋯ x_V-1; y_0 y_1 ⋯ y_V-1 ]],I_2 2 × 2 . Since I_Σ is generated by the V2 polynomials [ [ x_i x_j; y_i y_j ]],this means that the V-1 generators of I_G are sums of the generators of I_Σ.To analyze I_G more deeply, we need some algebraic geometry.§.§ Algebraic geometry interludeIn this section, we describe some key algebro-geometric quantities associated to the ideal I_G; see <cit.> or <cit.> for additional details.For I an ideal in a polynomial ring R = [x_1, …, x_m], the complementary dimension (or codimension) of I is (I) = m-(I),where (I) ⊂^m is the set of common zeros of the polynomials defining I.To gain geometric intuition for the definition above, notice thata minimal generator f ∈ I has solution set (f) which is a hypersurface. If I is minimally generated by {f_1,…, f_d} and (I)=d, then each hypersurface (f_i) drops the dimension of the solution space (I) by one, and I is called a complete intersection. In general, (I) is ≤ the number of generators of I, and the solution space (I) is called the variety of the ideal I. After dehomogenizing (setting some variable equal to one) the system of equations I_G, in order for the Kuramoto variety to consist of a finite number of solutions, the codimension of I_K must be 2V-1. Since there are exactly V equations in I_θ, a necessary condition for (I_K) to be finite is that (I_G) = V-1. By Lemma <ref>, I_G has exactly V-1 generators, so we have proved(I_K) has a finite set of solutions only if I_G is a complete intersection.But in general, I_G is not a complete intersection, as we will see in Example <ref>.An ideal P ⊊ R is called prime if fg ∈ P implies f ∈ P or g ∈ P (or both). For an ideal I ⊂ R, a prime ideal P containing I is a minimal prime of I if there is no other prime ideal Q such that I ⊆ Q ⊊ P. The set of common zeros (P) of a prime ideal P is irreducible:(P) (I_1) ∪(I_2), for any (I_i) ⊊(P). 
For I ⊂ R the variety (I) has a unique finite minimal irreducible decomposition(I) = ⋃_i=1^d (P_i), P_i .The P_i above are the minimal primes of I, and the (P_i) the irreducible components of (I).For the graph G on five vertices appearing in Example <ref>,the ideal I_G is the intersection I_G = P_1 ∩ P_2 ∩ P_3, where the P_i's are prime ideals, described below.[P_1= [ ⟨ y_0+y_1,; x_0+x_1,; x_2y_1+x_3y_1+x_4y_1-x_1y_2-x_1y_3-x_1y_4⟩ ]; ;P_2= [ ⟨ y_2+y_3+y_4,; x_2+x_3+x_4,; x_4y_3-x_3y_4, x_4y_0+x_4y_1-x_0y_4-x_1y_4, x_3y_0+x_3y_1-x_0y_3-x_1y_3⟩ ]; ;P_3=I_Σ<ref>. ]Lemma <ref> is the key to analyzing the irreducible decomposition: write the generators of I_G as [ I_G = ⟨[ [x_2 x_0 +x_1;y_2 y_0 +y_1 ]],[ [x_3 x_0 +x_1;y_3 y_0 +y_1 ]],[ [x_4 x_0 +x_1;y_4 y_0 +y_1 ]],; [ [ x_0 x_2+x_3+x_4; y_0 y_2+y_3+y_4 ]], [ [ x_1 x_2+x_3+x_4; y_1 y_2+y_3+y_4 ]] ⟩. ]Component (1) has codimension 3 < V-1 = 4, so I_G is not a complete intersection. It is easy to see this from the determinantal description of the generators above: when y_0+y_1=0 = x_0+x_1, the first three determinantal equations all vanish. Summing the two remaining generators we can use Equation <ref> to eliminate one of them; the resulting solutions are the zeros of Component (1).§.§ Low codimension components of I_GIt turns out that Example <ref> provides insight into obtaining a more general description of irreducible components of I_G. In <cit.>, Canale-Monzón define two vertices as twins if they have the same set of adjacent vertices. This suggests looking at triplets, quadruplets, and so on, so we define:A k-let is a set S of k distinct vertices of G such that v, w ∈ S ⟶ G_v = G_w.If G has a k-let with k ≥ 3, then (I_G) ≤ V-k+1. It suffices to prove that I_Gis contained in an ideal of codimension at most V-k+1. First, after relabelling, we may suppose our k-let is {v_0, …, v_k-1}, hence G_0 = G_1 = ⋯ G_k-1. So the determinantal equations for I_G take the form[I_G= ⟨[ [ x_0 x_0 ∙; y_0 y_0 ∙ ]],[ [ x_1 x_1 ∙; y_1 y_1 ∙ ]], …, [ [ x_k-1 x_k-1 ∙; y_k-1 y_k-1 ∙ ]], f_k,…, f_V-2⟩. ]Note that we have used Lemma <ref> to reduce to V-1 generators. Since the first k vertices are a k-let, for i ∈{0,…, k-1} the linear forms x_i ∙ are equal, and similarly for y_i ∙. Therefore the vanishing of the two linear forms {x_0 ∙, y_0 ∙} causes the first k-equations defining I_G to vanish. So I_G ⊂⟨ x_0 ∙, y_0 ∙, f_k, …, f_V-2⟩,an ideal with V-k+1 generators, and hence of codimension at most V-k+1. Example <ref> contains a 3-let, and V=5, soI_G is of codimension ≤ 5-3+1 = 3, hence ((I_G))≥ 7.Setting x_0 = 1 and y_0 = 0, and adding in the remaining four trigonometric equations x_i^2 + y_i^2 - 1 = 0,yields (at least) a one dimensional set of solutions. If G has a k-let with k ≥ 3, then (I_K) has positive dimension. §.§ The Segre variety and standard solutionsAll standard solutions lie on the Segre variety Σ with s=1 and t=V-1. This follows because by a change of variables we can assume that θ_0^*=0. The standard solutions satisfy θ_i^*-θ_j^* ∈{0,π}, so the x_i are all zero. Since Σ is defined by the ideal appearing in Equation <ref>, the inclusion of the standard solutions in Σ follows. The codimension of Σ is V-1 (see Chapter 2 of <cit.>), so adding the V equations of I_θ to I_Σ produces an ideal of codimension at most 2V-1. Using the change of variables θ_0^*=0 above eliminates the equation x_0^2+y_0^2-1 as well as the variables x_0 and y_0, yielding a system of 2V-2 equations in 2V-2 unknowns. 
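The positive dimensional components produced by the k-let theorem are easy to verify symbolically. Below is a sketch in sympy (the computations in this paper use Macaulay2; this is only an illustrative check) for the five-vertex graph of the earlier Example, whose vertices 2, 3, 4 form a 3-let with common neighborhood {0, 1}: imposing the two linear forms x_0 + x_1 and y_0 + y_1 kills the generators f_2, f_3, f_4.

```python
# Sketch: the 3-let {2,3,4} in the 5-vertex example forces f_2 = f_3 = f_4 = 0
# on the locus x_0 + x_1 = y_0 + y_1 = 0, so I_G lies in an ideal of codim <= 3.
import sympy as sp

x = sp.symbols('x0:5')
y = sp.symbols('y0:5')
edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]

def f(i):
    nbrs = [b if a == i else a for a, b in edges if i in (a, b)]
    return sp.expand(sum(x[j]*y[i] - x[i]*y[j] for j in nbrs))

subs = {x[0]: -x[1], y[0]: -y[1]}   # impose x_0 + x_1 = y_0 + y_1 = 0
print([sp.simplify(f(i).subs(subs)) for i in (2, 3, 4)])   # [0, 0, 0]
```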
For the next theorem, recall that a projective variety X ⊆ℙ^k also defines as an affine variety (known as the affine cone) in ℂ^k+1. The affine cone over Σ is an irreducible component of (I_G) ⊂ℂ^2V. We begin by completing the proof of Lemma <ref>: that is, I_G has one less minimal generator than the number of vertices. To streamline notation let n=V, so I_G hasgenerators (including the one non-minimal generator) {f_1,…, f_n}, which we write as a matrix product[ [ ∂(f_1)/∂ x_1⋯ ∂(f_1)/∂ x_n;⋮⋱⋮; ∂(f_n)/∂ x_1⋯ ∂(f_n)/∂ x_n ]] ·[ [ x_1; ⋮; x_n ]] = [ [ f_1; ⋮; f_n ]]Let [Y] denote the matrix on the left of Equation <ref>; the entries of [Y] are: [Y]_ij=-∑_v_j ∈ G_iy_j i=jy_j ijv_j ∈ G_i 0Let [Y]_ y=1 be the matrix obtained by setting {y_1 =y_2=⋯ =y_n=1}. Then[Y]_ y=1 = -L_G,where L_G is the graph Laplacian, which has rank n-1 (see, for example, Lemma 3.4.5 of <cit.>). From Lemma <ref> the rank of [Y] is at most n-1, and since rank drop is a Zariski closed condition, the argument above shows the rank of [Y] is exactly n-1. Hence there is exactly one dependency on the n generators of I_G, which concludes the proof of Lemma <ref>. To prove the theorem, it suffices to show that I_Σ is a minimal prime of I_G.Note that as I_Σ is a prime ideal with (I_Σ) = n-1, if I_G is a complete intersection, Example <ref> shows that (I_G)=n-1.Since I_G ⊆ I_Σ, this means that I_Σ is a minimal prime of I_G when (I_G)=n-1. To conclude the proof, we need to show that when I_G has codimension ≤ n-2, I_Σ is still minimal over I_G. If this were not the case, then there would exist a prime ideal P, with (P) < (I_Σ) such that I_G ⊆ P ⊊ I_Σ⟹Σ⊊(P) ⊆(I_G).Our strategy is to consider the tangent spaces of the corresponding varieties. From the containments above, for any point p ∈(I_Σ), T_p(Σ) ⊊ T_p((P)) ⊆ T_p((I_G)).Since (P) < (I_Σ), (P) > Σ. Since Σ is smooth, at any point p of Σ, T_p(Σ) = Σ = n+1 = (J_p(I_Σ)),where we compute the dimension as an affine variety, and J_p(I) denotes the Jacobian of an ideal, evaluated at a point p. Hence for any point p, (J_p(I_Σ)) = n-1..2in To finish, consider the point where y_i=1=x_i for all i which we denote 1. Observe that J(I_G) = [Y | X],where [Y] is as in Equation <ref> and [X] is defined by the same formula, but where x_i is substituted for y_i. By our earlier computations, T_ 1((I_G)) = (J_ 1(I_G)) = ([-L_G | L_G]).As (L_G) = n-1, this means T_ 1(Σ) = T_ 1((I_G)),which contradicts the existence of (P) properly containing Σ and contained in (I_G).The points of Σ∩(I_θ) lie in the Kuramoto variety (I_K). By Theorem <ref>, Σ⊆(I_G); since (I_K) = (I_G) ∩(I_θ) the result follows. §.§ Case study: the complete graph K_nIt follows from the description of I_G appearing in Lemma <ref> that when defining the ideal I_G, we may include v in the set G_v. From our observation above, when G=K_n, adding v to each set G_v yields L_1 = ∑_i=0^n-1x_i = x_0∙ = ⋯ =x_n-1∙ L_2 = ∑_i=0^n-1y_i = y_0∙ = ⋯ =y_n-1∙.Hence when G=K_n, we haveI_K_n = ⟨[ [ x_0 L_1; y_0 L_2 ]], …, [ [ x_n-1 L_1; y_n-1 L_2 ]] ⟩ =⟨[ [L_2 -L_1 ]] ·[ [ x_0 ⋯ x_n-1; y_0 ⋯ y_n-1 ]] ⟩By Theorem <ref>, I_L=⟨ L_1, L_2 ⟩ contains I_K_n, and by Theorem <ref> I_Σ is a minimal prime, henceI_K_n⊆ I_Σ⋂ I_L.In fact, (I_K_n) = Σ⋃(I_L), because if p ∈(I_L)^c, then the matrix multiplication in Equation <ref> composes to zero exactly when p ∈Σ, and otherwise p ∈(I_L). Localizing the above expression at I_L, the 2 × 2 minors of the left hand matrix are outside I_L hence units, so only I_L remains. Localizing at I_Σ, the elements of I_L become units. 
As I_Σ is the locus where [y_0,…, y_n-1] = c[x_0,…, x_n-1] for some unit c, the result follows.TODO: we need an argument that there are no embedded primes. Check or add a proof that the standard solutions other than self-sync are all not stableThe Jacobian matrix has (1,1) entry -x_1-x_2-⋯ -x_n-1; if we set x_0 = 1 and y_0 = 0 (so θ_0 = 0), then on (I_L) since L_1=0 the (1,1) entry is x_0=1. So J(p) cannot be negative semidefinite for any p ∈(I_L). In Theorem 4.1 of <cit.>, Taylor proves that K_n synchronizes; the argument above shows that no points p ∈(I_L) exist such that J(p) has one zero eigenvalue, and all other eigenvalues <0, so any linearly stable point is one of the standard solutions lying on Σ; Taylor proves that the only stable standard solution is when all angles are equal.Example <ref> shows that when considering isomorphism classes of SCT (simple, connected, all vertices of degree ≥ 2) graphs on n vertices, there will always be at least one graph–K_n–having a solution set with positive dimension. How common is this phenomenon? Below are our computational results:.1in§ GRAPHS WITH EXOTIC SOLUTIONSCycle graphs always admit exotic solutions, which is a reflection of a more general result. A circulant graph is a graph whose automorphism group acts transitively on the vertices. As a result, the adjacency matrix of the graph is a circulant matrix: each row is a cyclic shift by one position of the row above it. Wiley-Strogatz-Girvan show that circulant graphs admit exotic solutions <cit.>. Such graphs have the highest connectivity known for graphs with exotic solutions, with μ(G) ∼ .68. In this section, we investigate exotic states for graphs with a small number of vertices using numerical algebraic geometry. All angles will be represented in radians. §.§ Exotic solutions on at most 7 verticesNumerical algebraic geometry indicates that graphs on five or six vertices which have an exotic solution are either cycles, or a pentagon and triangle glued on a common edge:While there are only 11 isomorphism classes of graphs on five vertices, on six vertices, there are 61 isomorphism classes, and for seven vertices, there are 507 isomorphism classes. And indeed, on seven vertices, things become more interesting: numerical algebraic geometry identifies 9 such graphs, shown in Figure <ref>.What is noteworthy about this is that every one of these graphs has a chordless n-cycle C_n, with n ≥ 5. And in fact, this is also the case for all but one of the graphs on eight vertices having exotic solutions. §.§ Exotic solutions, 8 verticesFor eight vertices, there are 7442 isomorphism classes of SCT (simple, connected, all vertices of degree ≥ 2) graphs, and numerical algebraic geometry identifies 81 which have exotic solutions.All of the graphs on 8 vertices having exotic solutions have a chordless n ≥ 5 cycle, with one exception (see Figure <ref>).The graph in Figure <ref> admits two exotic solutions, which are both twisted stable states. Recall that x_i = cos(θ_i) and y_i = sin(θ_i); rounded to 2 decimals, the solutions are below. Note that the top row is the standard stable synchronized solution.[ [x_0x_1x_2x_3x_4x_5x_6x_7y_0y_1y_2y_3y_4y_5y_6y_7;1111111100000000;100 -1.71 -.71 -.71.710 -110 -.71 -.71.71.71;100 -1.71 -.71 -.71.7101 -10.71.71 -.71 -.71 ]]The corresponding angles θ (note labelling above) are[ [00000000;0 3π/2π/2π 7π/4 5π/4 3π/4π/4;0π/2 3π/2ππ/4 3π/4 5π/4 7π/4 ]]We discuss these computations more in Example <ref>. 
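Candidate states such as the ones tabulated above can be screened numerically before any symbolic work. For instance, on the cycle C_n the twisted state θ_i = 2πi/n is an equilibrium, and for n ≥ 5 every angle gap 2π/n is less than π/2, so the weighted-Laplacian criterion discussed below gives linear stability. A minimal sketch (illustrative only; the solutions in this paper were found with numerical algebraic geometry and the Macaulay2 package of Section 4):

```python
# Sketch: the 1-twisted state on C_n is an equilibrium, and is linearly
# stable for n >= 5 (one zero eigenvalue, all others negative).
import numpy as np

def cycle_edges(n):
    return [(i, (i + 1) % n) for i in range(n)]

def rhs(theta, edges):
    f = np.zeros_like(theta)
    for i, j in edges:
        f[i] += np.sin(theta[j] - theta[i])
        f[j] += np.sin(theta[i] - theta[j])
    return f

def jac(theta, edges):
    n = len(theta)
    J = np.zeros((n, n))
    for i, j in edges:
        w = np.cos(theta[j] - theta[i])
        J[i, j] += w; J[j, i] += w; J[i, i] -= w; J[j, j] -= w
    return J

for n in (5, 6, 7, 8):
    th = 2 * np.pi * np.arange(n) / n        # 1-twisted state on C_n
    E = cycle_edges(n)
    ev = np.linalg.eigvalsh(jac(th, E))
    print(n, np.max(np.abs(rhs(th, E))) < 1e-12, ev[-2] < -1e-9)
```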
For the ideal I_G,I_G = J ⋂ I_Σ,where J is a prime ideal generated by seven quadrics, which are exactly the quadrics defining I_G, and seven sextics. These sextics are all quite complicated;even after reducing them modulo the quadrics in I_G, the smallest has 170 terms and the largest 373 terms. To prove this is the irreducible decomposition, we first compute the ideal quotient I_G : I_Σ, which yields an ideal J.Using Macaulay2 we verify that J is prime, and that J ∩ I_Σ = I_G. §.§ Constructing Graphs with Exotic SolutionsWe begin with an observation ofLing-Xu-Bandeira in 2.1 of <cit.>. If we allow edges to have negative weights, then for the homogeneous Kuramoto system of a graph G,the Jacobian matrix appearing in Equation <ref> is the weighted graph Laplacian with weight matrix a_ij(cos(θ_i-θ_j))_1 ≤ i ≤ j ≤ n,.1in where the a_ij are coefficients of the adjacency matrix of G. In particular, if p=(θ_0,…,θ_n-1) is an equilibrium point such that cos(θ_i-θ_j)> 0 for all edges, then the point p is linearly stable.The standard proof that the graph Laplacian of a connected graph has one zero eigenvalue and the remaining eigenvalues are greater than zero uses the fact that the Laplacian factors asL_G = B_G · B_G^T,.1in where B_G is an oriented edge-vertex adjacency matrix (the orientation chosen is immaterial). Therefore choosing B_G to be weighted with weight of the edge e_ij = √(cos(θ_i -θ_j)) provides the necessary adjustment to take the weighting into account. As a consequence, as long as the weightings cos(θ_i-θ_j) are all positive, the resulting Laplacian satisfies the condition for linear stability: if θ_i - θ_j ∈(-π/2, π/2).then the system is linearly stable at p. Computations using numerical algebraic geometry identified a pair of graphs with exotic solutions (hence, solutions which are linearly stable), but where not all the angles satisfy θ_i - θ_j ∈ (-π/2, π/2). We analyze these graphs in Example <ref>.In constructing and analyzing examples of exotic solutions, the following lemma will be useful.Let v be a vertex of degree two, with b denoting the angle at v, and a, c the angles at the two vertices adjacent to v. A solution to the homogeneous Kuramoto system must satisfyb = a+c/2+kπ c = a + (2k+1)π k ∈,and if c = a + (2k+1)π then the solution cannot be linearly stable. At the vertex v, the condition of Equation <ref> is simplysin(c-b) +sin(a-b) = 0,so either c-b = b-a +2kπ and the first possibility holds, or (2k+1)π - (c-b) = b-a, and the second possibility holds. To see that c=a+(2k+1)π does not result in a linearly stable system, we compute the Jacobian matrix of the system, ordering the vertices of G starting with b,a,c. Let σ_v denote the off diagonal row sum of the row corresponding to vertex v, and choose coordinates so b=0. Using that cos(a+(2k+1)π) = -cos(a), we see that if vertices a and c are not connected, the top-left 3 × 3 submatrix of the Jacobian of the system is[ [ -cos(c) -cos(a)cos(a)cos(c);cos(a)-σ_a 0;cos(c) 0-σ_c ]] =[ [ 0cos(a) -cos(a);cos(a)-σ_a 0; -cos(a) 0-σ_c ]]. Recall that there is a variant of Sylvester's theorem to check if a matrix is negative semidefinite (e.g.6.3 of <cit.>): a symmetric matrix M is negative semidefinite if and only if all odd principal minors are non-positive and all even principal minors are non-negative. 
The principal minors of the 3 × 3 submatrix above are {0, -σ_a, -σ_c, -cos^2(a), σ_a·σ_c, cos^2(a)·(σ_a+σ_c)}By Lemma <ref>, the Jacobian is a weighted negative Laplacian, and in particular σ_a and σ_c are non-negative.So for the 3 × 3 principal minor to be non-positive, we must have cos^2(a)·(σ_a+σ_c) = 0 ⇒cos^2(a) = 0 σ_a=0=σ_c. If cos(a)=0 then the matrix has a zero row yielding a zero eigenvalue. It is impossible for all remaining eigenvalues to be negative, because the submatrix resulting from deleting the first row and column still has all row sums equal to zero, so is itself singular. In particular, cos(a)=0 results in a Jacobian matrix with at least two zero eigenvalues. Next we consider the situation where σ_a=0=σ_c. The weighted Laplacian factors as B_G·B_G^T with B_G as in Lemma <ref>, so a diagonal entry can be zero only if the corresponding row of B_G is the zero row. But since σ_a = 0 = σ_c, this would therefore imply that B_G has two zero rows. Since (A · B) ≤min{(A),(B)}, the kernel of the Jacobian has dimension at least two, so there are at least two zero eigenvalues. This settles the case when vertices a and c do not share an edge. To conclude, suppose G has an edge ac. In this case, the (2,3) and (3,2) entries of the matrix in Equation <ref> are -1, and the 3 × 3 principal minoris cos^2(a)·(σ_a+σ_c+2), which is nonpositive only if cos(a)=0. This case has been ruled out by the reasoning above.Notice that in Figure 2, a pentagon and a triangle sharing a single common edge have an exotic solution, and in Figure 3, a pentagon and a K_4 sharing a common edge have an exotic solution.On eight vertices, a pentagon and a K_5 sharing a common edge as below have an exotic solution. This hints at a more general result, which will appear as Theorem <ref>..1in First, let θ_i = 0 for i ∈{5,6,7}. By Lemma <ref>, the θ_i for i ∈{1,2,3} yield equations [ θ_1 = (θ_0+θ_2)/2.; θ_2 = (θ_1+θ_3)/2.; θ_3 = (θ_2+θ_4)/2.;].1in To simplify notation, let θ_4 = α and θ_3 = β. We claim that setting θ_2=π and θ_0 = -α yields an exotic state. To see this, first note that with these valuesα = 2β-π.We need to show that the corresponding Jacobian matrix has all but one eigenvalue negative (recall that there is always at least one zero eigenvalue). Consider the remaining equations. At vertex 4, we have [ 0 =sin(-α-α) +sin(0-α)+sin(0-α)+sin(0-α)+sin(β-α); =-sin(2 α) -3sin(α) +sin(β-α); = sin(β)+3sin(2β)-sin(4β); = sin(β)(1 + 6cos(β) -4cos(β)(cos^2(β)-sin^2(β))); =sin(β)(1+10cos(β)-8cos^3(β)) ]Since we need the last quantity to be zero, we either have β∈{ 0 , π}, which turns out to be impossible, or a solution to 1+10cos(β)-8cos^3(β)= 0.Letting z=2cos(β), we seek a root of p(z) = 1+5z-z^3 z ∈[-2..2].By Sturm's theorem (see 2.2.2 of <cit.>), the number of roots in (a,b] is V(a)-V(b),V(t) = ♯ [(p_0(t), p_1(t), p_2(t), p_3(t)],and p_0=p(z), p_1=p'(z), and p_i is the negative remainder on dividing p_i by p_i-1. We find there is a unique value (∼ -.2) of z=2cos(β) in [-2,2].Working out the remaining vertex constraints, we obtain the matrix below, where the diagonal entries σ_i are the (negative) sums of the other entries in the row. [ [σ_1111-z^2/21-z^2/2000;1σ_211-z^2/21-z^2/2000;11σ_31-z^2/21-z^2/2000;1-z^2/21-z^2/21-z^2/2σ_4 1-2z^2+z^4/3 -z/200;1-z^2/21-z^2/21-z^2/2 1-2z^2+z^4/3σ_50 -z/20;000 -z/20σ_60 -z/2;0000 -z/20σ_7 -z/2;00000 -z/2 -z/2σ_8 ]], A computation shows that at the equilibrium point above, the Jacobian matrix is negative semidefinite, with a single zero eigenvalue. 
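This verification is easy to reproduce numerically. The sketch below assumes the labelling implicit in the computation at vertex 4 above: the pentagon is 0-1-2-3-4-0, the K_5 sits on {0,4,5,6,7}, and the shared edge is {0,4}; the value θ_1 = -β, which equals (3π - α)/2 modulo 2π, follows from the relation θ_2 = (θ_1 + θ_3)/2 + kπ. It is an illustrative check, not the paper's Macaulay2 computation.

```python
# Sketch: pentagon glued to K_5 along an edge; check the exotic equilibrium
# and the claim that its Jacobian is negative semidefinite with one zero eigenvalue.
import numpy as np

pent = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
k5v = [0, 4, 5, 6, 7]
k5 = [(a, b) for idx, a in enumerate(k5v) for b in k5v[idx + 1:]]
edges = sorted(set(pent) | set(k5))

# z = 2*cos(beta) is the unique root of 1 + 5z - z^3 in [-2, 2] (z ~ -0.2).
z = [r.real for r in np.roots([-1, 0, 5, 1])
     if abs(r.imag) < 1e-12 and -2 < r.real < 2][0]
beta = np.arccos(z / 2)
alpha = 2 * beta - np.pi                      # ~ 11.57 degrees

theta = np.array([-alpha, -beta, np.pi, beta, alpha, 0.0, 0.0, 0.0])

f = np.zeros(8)
J = np.zeros((8, 8))
for i, j in edges:
    f[i] += np.sin(theta[j] - theta[i]); f[j] += np.sin(theta[i] - theta[j])
    w = np.cos(theta[j] - theta[i])
    J[i, j] += w; J[j, i] += w; J[i, i] -= w; J[j, j] -= w

print(np.max(np.abs(f)))                      # ~1e-16: an equilibrium
print(np.linalg.eigvalsh(J)[-2:])             # one ~zero eigenvalue, the rest negative
```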
One of the exotic solutions arises as below, with α ≈ 11.5728° ≈ .064π radians:

θ_0 = -α; θ_1 = (3π - α)/2; θ_2 = π; θ_3 = (π + α)/2; θ_4 = α; θ_i = 0 for i ∈ {5,6,7}.

The previous example generalizes, but we need a preparatory lemma. Let G be a simple, connected graph on d vertices and C_5 a chordless 5-cycle. Consider the graph G' obtained by connecting the vertices of an edge E of C_5 to every vertex of G. Then there is a unique value of α > 0 such that assigning θ_i = 0 at all vertices of G and {θ_0, …, θ_4} as in Equation <ref> yields an equilibrium solution to Equation <ref> for G'. The main point is that the pattern in Example <ref> generalizes. We choose a vertex labelling to simplify notation. Label the angles at the vertices of the C_5 graph as in Example <ref> {θ_0, …, θ_4}. Set the angles {θ_5, …, θ_d+4} at the vertices of the graph G to be zero, and define the remaining angles via

θ_i = α + i · (π - α)/2, for i ∈ {0, …, 4}.

The parameter α is a function of d, obtained as in Example <ref>, but with a slight modification. Label the vertices of E as v_0 and v_4. Then the calculation of θ_4 changes very simply; we have

0 = sin(-α - α) + d·sin(0 - α) + sin((π + α)/2 - α)
  = -sin(2α) - d·sin(α) + sin((π - α)/2)
  = -2sin(α)cos(α) - d·sin(α) + cos(α/2)
  = -2(2sin(α/2)cos(α/2))cos(α) - d(2sin(α/2)cos(α/2)) + cos(α/2)
  = cos(α/2)·(-4sin(α/2)cos(α) - 2d·sin(α/2) + 1).

Since cos(α/2) = 0 does not yield a solution, using the identity cos(α) = cos^2(α/2) - sin^2(α/2) = 1 - 2sin^2(α/2) and writing w = sin(α/2) yields

0 = -4sin(α/2)(1 - 2sin^2(α/2)) - 2d·sin(α/2) + 1
  = -4w(1 - 2w^2) - 2dw + 1
  = 8w^3 - (4 + 2d)w + 1.

Substituting y = 2w yields the depressed cubic 1 - (d+2)y + y^3, with Sturm sequence

[ t^3 - (d+2)t + 1, 3t^2 - (d+2), (2(d+2)/3)t - 1, (d+2) - 27/(4(d+2)^2) ].

By Sturm's theorem, y^3 - (d+2)y + 1 has a single real root in [-2,2] when d ≥ 3; when d = 1 there are 3 roots in [-2,2], and when d = 2 there are 2 roots in [-2,2]. In both cases only one of the roots yields a linearly stable solution. Gluing a triangle (d = 3) to C_5 yields Example <ref>. To 2 decimals the roots of 1 - 10w + 8w^3 are {1.06, .10, -1.16}, so sin(α/2) ≃ .10 ⇒ α/2 ≃ .032π ⇒ α ≃ .064π, agreeing with Example <ref>. The parameter α depends only on the number of vertices of G. Lemmas <ref> and <ref> lay the groundwork for producing families of graphs with exotic solutions which are not twisted stable states. The next result illustrates this technique. Let G be a graph on d vertices and C_5 a chordless 5-cycle. Consider the graph G' obtained by choosing an edge E of C_5, and connecting the vertices of E to every vertex of G. Then G' admits an exotic solution. This follows from Lemma <ref>, Lemma <ref>, and an application of Sturm's theorem. By Lemma <ref> we have an equilibrium point. By Lemma <ref>, if the weights cos(θ_i - θ_j) are positive, then the equilibrium point is linearly stable. For the angles θ_i in Equation <ref>, this requires knowing the value of α appearing in Lemma <ref>, which follows by applying Sturm's theorem to the interval [0,2]: α is a small positive number which is decreasing as d increases. Applying this to the angles in Equation <ref> shows they all satisfy cos(θ_i - θ_j) > 0. The above construction can be carried out more generally, by gluing an arbitrary graph G on d vertices to a cycle C_n with n ≥ 5.
Label the angles at the vertices of the C_n graph as {θ_0, …, θ_n-1},and the angles at the vertices of G as {θ_n, …, θ_n+d-1}.Glue G and C_n along the edge connecting vertices 0 and n-1, set the angles θ_n, …, θ_n+d-1 to be zero, and define the remaining angles via[ θ_i = α + i ·2(π - α)/n-1, i ∈{0, … n-1}. ]The parameter α is a function of both d and n, and in the local computation at vertex 0 where gluing occurs, we obtain relations which involve sin(n ·α) and sin(α), leading to an expression in terms of Chebyshev polynomials. We leave this for the interested reader..1inIn the next section, we give more examples of exotic solutions. As S. Strogatz pointed out to us, an example of a stable exotic solution on a dodecahedron appears in <cit.>. One interesting question is the interplay between those graphs which have a positive dimensional solution set, and those graphs with exotic solutions, tabulated at the end of 2. Our computations indicate that for SCT graphs with 7 vertices, none of the graphs with exotic solutions are among the graphs with positive dimensional solutions. This is not the case for graphs with 8 vertices, where there is an overlap shown in Table 1. Six of the graphs with exotic solutions also have a positive dimensional component. One of those six is Example  <ref>, and for five of the six, the positive dimensional component can be identified using Theorem <ref>.§ COMPUTATIONAL METHODS: THE M2 PACKAGE OSCILLATOR.M2 In Example <ref> we saw a non-cycle having twisted stable states: Next we compute the solutions for Example <ref>, and for a five cycle:For graphs with 7 or fewer vertices, there are at most 2 nonstandard exotic solutions. This is no longer the case for graphs with 8 or more vertices. Below we compute solutions for two pentagons sharing an edge (an example with eight vertices) and a pentagon and hexagon sharing an edge (which has nine vertices). The computation shows that there are, respectively, four and six nonstandard solutions. Recall the first angle is 0, and is not printed. There are only two examples of SCT graphs on 8 vertices having an exotic solution where the Jacobian matrix has some of the off-diagonal entries cos(θ_j - θ_i) negative. We illustrate with one of these below.-.1in The Jacobian matrix abovecorresponds to the first solution. Letting α = 51.4286^∘≃2π/7, that solution has angles θ_0=0, θ_i = -i ·α for i ∈{1,..,6}, and θ_7 = π. § SUMMARY AND FUTURE DIRECTIONSIn this work, we study systems of homogeneous Kuramoto oscillators from an algebraic and topological standpoint. By translating into a system of algebraic equations, we obtain insight into the structure of possible stable solutions. Our focus is on simple, connected graphs with all vertices of degree at least two, which we call SCT graphs. §.§ Main resultsOn the algebraic front * in Theorem <ref>, we give sufficient conditions for I_G to have a positive dimensional component in the solution set. This is important because no solution lying on a positive dimensional component can be a linearly stable solution. .05in* in Theorem <ref>, we identify the ideal I_Σ of the Segre variety of ^1 ×^n-1 as an associated prime of I_G. We show that all standard solutions lie on the Segre variety. On the topological front * for G having V ≤ 8 vertices, we determine all SCT graphs which admit exotic solutions, and which admit positive dimensional solutions. There are, respectively, {3,11,61,507,7442} isomorphism classes of SCT graphs with V ∈{4,5,6,7,8}. 
Of the graphs on 8 vertices, 81 have exotic solutions, and every one of these–with one exception–has an induced cycle of length at least five. .05in* The cyclic graph C_n on n-vertices always admits exotic solutions, the twisted stable states, where each angle is a periodic translate of an adjacent angle. Theorem <ref> gives a general method to construct graphs with exotic solutions which are not twisted stable states. §.§ Future directionsThis paper raises a number of interesting questions. * Structure of the ideal I_G. Building on the work of 2, it would be interesting to further analyze the irreducible decomposition of I_G. Preliminary results indicate that it may be a radical ideal, and we are conducting further computational experiments to determine if this is the case. .05in* Gluing graphs. In algebraic topology, a standard construction is the Mayer-Vietoris sequence (see, e.g. 4.4.1 of <cit.>). Given two topological spaces X_1 and X_2, select a common subspace Y. Since Y ⊆ X_i, we can identify Y with X_1 ∩ X_2. The Mayer-Vietoris sequence relates the topology of X_1, X_2 and their intersection to the topology of X_1 ∪ X_2 (we have “glued” X_1 and X_2 together along a common intersection). The results in 3 suggest studying systems of oscillators using the Mayer-Vietoris sequence, and we are currently at work on this project. .05in* Structure of zero-dimensional solutions. For graphs on at most eight vertices, of the 81 with exotic solutions, all but one have a pair of exotic solutions. As illustrated in Example <ref>, a pair of C_5's glued on an edge admits four exotic solutions, and a C_6 glued on an edge to a C_5 admits six exotic solutions. Can we give a graph-theoretic characterization of the number of exotic solutions? Acknowledgments. All of the computations were performed using our Oscillator package, which is being incorporated in the next release of Macaulay2 <cit.>.Our collaboration began while the second two authors were visitors at Oxford, supported, respectively, by Leverhulme and Simons Fellowships. We thank those foundations for their support, and the Oxford Mathematical Institute for providing a wonderful working atmosphere. Thanks also to Steve Strogatz and Alex Townsend for helpful comments. amsalpha -.6in 10abrams2016introduction D. M. Abrams, L. M. Pecora, and A. E. Motter. Introduction to focus issue: Patterns of network synchronization. Chaos, 26(9):094601, 2016. abrams2 D. M. Abrams, S. Strogatz. Chimera states in a ring of nonlocally coupled oscillators. Internat. J. Bifur. Chaos Appl. Sci. Engrg. 16, 21–37, 2006. BPR S. Basu, R. Pollack, M.F. Roy, Algorithms in real algebraic geometry, 52-57. Springer Verlag, 2006.Buck J. Buck, E. Buck. Synchronous Fireflies. Scientific American, 234, 24-85, 1976.canale E. A. Canale and P. Monzón. Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization. Chaos, 25(2):023106, 2015.canale2 E. A. Canale, P. Monzón, and F. Robledo. 2-connected synchronizing networks.Bul. Inst. Pol. Iasi. Autom. Control Comput. Sci. Sect. 57(61), 129–141, 2011. CLS D. Cox, J. Little, H. Schenck. Toric Varieties.AMS Graduate Studies in Mathematics, 2010.crawford J.D. Crawford. Scaling and Singularities in the Entrainment of Globally Coupled Oscillators. Phys. Rev. Lett. 74 (21), 1995.Cumin D. Cumin, C.P. Unsworth,Generalising the Kuromoto model for the study of neuronal synchronisation in the brain. Physica D, 226 (2): 181–196, 2007.deville2016phase L. DeVille and B. Ermentrout. 
Phase-locked patterns of the Kuramoto model on 3-regular graphs. Chaos, 26(9):094820, 2016. dokania2011low R. K. Dokania, X. Y. Wang, S. G. Tallur, and A. B. Apsel. A low power impulse radio design for body-area-networks. IEEE Trans. Circ. Sys. I: Reg. Papers, 58(7):1458–1469, 2011. dorfler1 F. Dörfler and F. Bullo. Synchronization and transient stability in power networks and nonuniform Kuramoto oscillators. SIAM J. Control Optim., 50:3, 1616–1642, 2012.dorfler2 F. Dörfler, M. Chertkov,and F. Bullo. Synchronization in complex oscillator networks and smart grids. Proc. Natl. Acad. Sci., 110:2005-2010, 2013.E D.  Eisenbud, Commutative Algebra with a view towards Algebraic Geometry, Graduate Texts in Mathematics, vol. 150, Springer, Berlin-Heidelberg-New York, 1995.e3 D.  Eisenbud,The geometry of syzygies, Graduate Texts in Mathematics, vol. 229, Springer, Berlin-Heidelberg-New York, 2005.golub2012matrix G. H. Golub and C. F. Van Loan. Matrix Computations, volume 3. JHU Press, 2012.M2 D. R. Grayson and M. E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at <https://www.macaulay2.com>.hong2005scalable Y.-W. Hong and A. Scaglione. A scalable synchronization protocol for large scale sensor networks and its applications. IEEE J. Selected Areas in Comm., 23(5):1085–1099, 2005. jadbabaie A. Jadbabaie, N. Motee, and M. Barahona. On the stability of the Kuramoto model of coupled nonlinear oscillators. In Proc. 2004 Amer. Contr. Conf., volume 5, pages 4296–4301. IEEE, 2004.kassabov M. Kassabov, S. Strogatz, and A. Townsend. A global synchronization theorem for oscillators on a random graph. Chaos, 32(8), 8pp., 2022. Kloumann I. Kloumann, I. Lizarraga, and S. Strogatz. Phase diagram for the Kuramoto model with van Hemmen interactions. Phys Rev E, 89, 2014.kuramoto74 Y. Kuramoto,Self-entrainment of a population of coupled non-linear oscillators. In International Symposium on Mathematical Problems in Theoretical Physics (Kyoto Univ., Kyoto, 1975), pages 420–422. Lecture Notes in Phys., 39. Springer, Berlin, 1975.kuramoto85 Y. Kuramoto. Chemical Oscillations, Waves, and Turbulence. Springer, 1984. ling S. Ling, R. Xu, and A. S. Bandeira. On the landscape of synchronization networks: A perspective from nonconvex optimization. SIAM J. Optim.29(3):1879–1907, 2019.lu J. Lu and S. Steinerberger. Synchronization of Kuramoto oscillators in dense networks. Nonlinearity, 33(11):5905–5918, 2019.mallada2010synchronization E. Mallada and A. Tang. Synchronization of phase-coupled oscillators with arbitrary topology. In Proc. 2010 Amer. Contr. Conf., pages 1777–1782. IEEE, 2010.matheny2019exotic M. H. Matheny, J. Emenheiser, W. Fon, A. Chapman, A. Salova, M. Rohden, J. Li, M. H. de Badyn, M. Pósfai, L. Duenas-Osorio, et al. Exotic states in a simple network of nanoelectromechanical oscillators. Science, 363(6431), 2019.Mehta15D. Mehta, N. S. Daleo, F. Dörfler, and J. D. Hauenstein.Algebraic geometrization of the Kuramoto model: Equilibria and stability analysis. Chaos, 25(5) 053103, 2015.mirollo1990synchronization R. E. Mirollo and S. H. Strogatz. Synchronization of pulse-coupled biological oscillators. SIAM J. Appl. Math., 50(6):1645–1662, 1990.olver2010nist F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark. NIST Handbook of Mathematical Functions Hardback and CD-ROM. Cambridge University Press, 2010.pecora2014cluster L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and R. Roy. 
Cluster synchronization and isolated desynchronization in complex networks with symmetries. Nature Comm., 5:4079, 2014. peskin1975mathematical C. S. Peskin. Mathematical aspects of heart physiology. Courant Inst. Math. Sci., pages 268–278, 1975.pikovsky2015dynamics A. Pikovsky and M. Rosenblum. Dynamics of globally coupled oscillators: Progress and perspectives. Chaos, 25(9):097616, 2015.pikovsky2003synchronization A. Pikovsky, M. Rosenblum, and J. Kurths. Synchronization: A Universal Concept in Nonlinear Sciences, volume 12. Cambridge University Press, 2003.rodrigues F. A. Rodrigues, T. K. DM. Peron, P. Ji, and J. Kurths. The Kuramoto model in complex networks. Phys. Reports, 610:1–98, 2016.S H. Schenck,Computational Algebraic Geometry,Cambridge University Press, (2003).TDAbook H. Schenck,Algebraic Foundations for Applied Topology and Data Analysis,Springer, (2022).simeone2008distributed O. Simeone, U. Spagnolini, Y. Bar-Ness, and S. H. Strogatz. Distributed synchronization in wireless networks. IEEE Sig. Proc. Mag., 25(5):81–97, 2008.sokolov2018sync Y. Sokolov and G. B. Ermentrout. When is sync globally stable in sparse networks of identical Kuramoto oscillators? Phys. A, 533, 11pp., 2019.strang Linear algebra and its applications, Brooks and Cole (1988).strogatz S. Strogatz. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Phys. D, 143:1–20, 2000.taylor R. Taylor. There is no non-zero stable fixed point for dense networks in the homogeneous Kuramoto model. J. Phys. A: Math. Theor., 45(5):055102, 2012.townsend A. Townsend, M. Stillman, S. Strogatz. Dense networks that do not synchronize and sparse ones that do. Chaos, 30(8), 8pp., 2020. Udeigwe L.C. Udeigwe, G.B. Ermentrout. Waves and Patterns on Regular Graphs. SIAM J. Applied Dynamical Systems, 140(2), 1102-1129, 2015. watanabe1994constants S. Watanabe and S. H. Strogatz. Constants of motion for superconducting Josephson arrays. Physica D: Nonlinear Phenomena, 74(3-4):197–253, 1994.werner2005firefly G. Werner-Allen, G. Tewari, A. Patel, M. Welsh, and R. Nagpal. Firefly-inspired sensor network synchronicity with realistic radio effects. In Proc. 3rd Inter. Conf. Embed. Netw. Sens. Sys., pages 142–153. ACM, 2005.wiley D. A Wiley, S. H. Strogatz, and M. Girvan. The size of the sync basin. Chaos, 16(1):015103, 2006.winfree1967biological A. T. Winfree. Biological rhythms and the behavior of populations of coupled oscillators. J. Theor. Bio., 16(1):15–42, 1967. yick2008wireless J. Yick, B. Mukherjee, and D. Ghosal. Wireless sensor network survey. Comput. Networks, 52(12):2292–2330, 2008.zhang Y. Zhang, J. Ocampo-Espindola, I. Kiss, A. Motte. Random heterogeneity outperforms design in network synchronization.Proc. Natl. Acad. Sci. USA 118:21 8pp, 2021. ] ] ]
http://arxiv.org/abs/2312.16069v1
{ "authors": [ "Heather Harrington", "Hal Schenck", "Mike Stillman" ], "categories": [ "math.DS", "math.AG", "math.CA", "90C26, 90C35, 34D06, 35B35" ], "primary_category": "math.DS", "published": "20231226144300", "title": "Kuramoto Oscillators: algebraic and topological aspects" }
Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112, United StatesRIKEN Center for Computational Science, Kobe 650-0047, JapanFakultät für Physik, Universität Bielefeld, D-33615 Bielefeld, Germany Physics Department, Brookhaven National Laboratory, Upton, New York 11973, USAUsing an eighth-order Taylor expansion in baryon chemical potential, we recentlyobtained the (2+1)-flavor QCD equation of state (EoS) at non-zero conserved charge chemicalpotentials from the lattice. We focused on strangeness-neutral, isospin-symmetric QCD matter,which closely resembles the situation encountered in heavy-ion collision experiments.Using this EoS, we present here results on various QCD material parameters;in particular we compute the specific heat, speed ofsound, and compressibility along appropriate lines of constant physics. We show that in the entire range relevant for the beam energy scan at RHIC,the specific heat, speed of sound, and compressibility show no indication foran approach to critical behavior that one would expect close to a possibly existingcritical endpoint. QCD material parameters at zero and non-zero chemical potential from the lattice David Anthony [email protected] Jishnu Goswami2Frithjof Karsch3Peter Petreczky4January 14, 2024 - Preprint =================================================================================================§ INTRODUCTION A major goal of the experimental program on heavy-ion collisions (HIC) is to investigatetransport and thermodynamic properties of strongly-interacting matter in the plane of temperature T and baryon chemical potential μ_B. Included among these properties are material parameters like the speed of sound, the compressibility, and the specific heat. The isentropic speed of sound c_s^2 is interesting, e.g., in the context of neutron stars since the relationship between the star masses and radii is influenced by how c_s^2changes with baryon number density n_B <cit.>. The isothermal speed of sound c_T^2 is also interesting for HIC, as a new method to estimate c_T^2 in HIChas been recently suggested in Ref. <cit.>. Finally the isovolumetric specific heat C_V can be related to the temperaturefluctuations in HIC <cit.>.It is of special interest to probe the μ_B-T plane for μ_B>0, where a hypothesized first-order line separating a hadronic gas phase and a quark-gluon plasma phase terminates in a critical endpoint (CEP). Material parameters provide useful information about the nature of the CEP. For example c_s^2 would drop to zero at a true phase transition, and at a second-order transition, C_V would show a singularity.Some of these material parameters have been previously calculated on thelattice at μ_B=0 <cit.>. Here we present our ongoing calculations of these quantities at nonzero μ_B. § STRATEGY OF THE CALCULATIONWe define X̂≡ XT^-k with k∈ chosen so that X̂ is unitless. We expand the pressurein terms of the conserved charge chemical potentials _B, _Q, _S as= 1/VT^3log(T,V,_B,_Q,_S)= ∑_i,j,k=0^∞χ_ijk^BQS/i!j!k!_B^i _Q^j _S^k,whereis the QCD grand partition function, V is the spatial volume, andχ_ijk^BQS≡χ_ijk^BQS(T) =.∂/∂_B^i ∂_Q^j ∂_S^k|_μ⃗=0.Other observables such as the entropy densityand net-charge densitiesare derived fromusing standard thermodynamic relations. To limit our analysis to the μ_B-T plane while focusing on the relevantphysics of HIC, we impose constraints n_S =0and n_Q/n_B=r. In the following, partial derivatives are understood to be evaluated at fixed r and n_S. 
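As a minimal illustration of the expansion above, the following Python sketch evaluates the truncated pressure series and obtains the net baryon density as its first μ_B-derivative. The cumulant values are placeholders chosen only to make the example run; in practice they are the lattice-measured χ_ijk^BQS at a fixed temperature.

from math import factorial

# Schematic evaluation of the Taylor-expanded pressure quoted above,
# p/T^4 = sum_{ijk} chi_ijk^BQS/(i! j! k!) muB^i muQ^j muS^k, truncated at a
# chosen total order. The numbers below are illustrative placeholders only.
chi = {(0, 0, 0): 3.0, (2, 0, 0): 0.30, (0, 2, 0): 0.40, (0, 0, 2): 0.50,
       (1, 1, 0): 0.05, (1, 0, 1): -0.10, (0, 1, 1): 0.10, (4, 0, 0): 0.07}

def pressure(mu_B, mu_Q=0.0, mu_S=0.0, max_order=8):
    """p/T^4 from the truncated series; all mu_X are the dimensionless mu_X/T."""
    return sum(c * mu_B**i * mu_Q**j * mu_S**k
               / (factorial(i) * factorial(j) * factorial(k))
               for (i, j, k), c in chi.items() if i + j + k <= max_order)

# Net baryon density n_B/T^3 = d(p/T^4)/d(muB/T); a central difference suffices
# for a sketch (it could equally be written as an explicit cumulant sum).
def n_B(mu_B, h=1e-5):
    return (pressure(mu_B + h) - pressure(mu_B - h)) / (2.0 * h)

print(pressure(1.0), n_B(1.0))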
We focus onc_s^2=pϵs/n_B,      c_T^2=pϵT,     κ_s = 1/n_Bn_Bps/n_B,      C_V=TsTn_B,where ϵ is the energy density. In our previous work <cit.> we computed c_s^2 using lattice data for various s/n_B. To do this, we exploited the fact thatc_s^2 =∂ p/∂ T/∂ϵ/∂ Ts/n_B.We then interpolated p and ϵ results simulated at various T and computed the derivative numerically. While this approach is straightforward, it is not ideal to use interpolations since the numerical derivatives, especially higher-order ones, are quite sensitive to the interpolation result. Since we estimate errors using a bootstrap procedure, this can lead to substantially different estimates for the derivatives in each bin and hence an artificially large error bar. Now we address these large statistical uncertainties by utilizing analytic formulas for the material parameters in terms of cumulants, reducing the need to interpolate as much as possible. Besides yielding more controlled uncertainties in the lattice data, we found this approach increases numerical stability for our fixed s/n_B HRG results, allowing us to extend our calculations as low as s/n_B=10 <cit.>. When possible, our analytic formulas are cross-checked against known thermodynamic relations; for instance we find our expressions for c_s and κ_s to formally satisify κ_s^-1=c_s^2(ϵ+p-μ_Qn_Q-μ_Sn_S). § COMPUTATIONAL SETUP We use high-statistics data sets for (2+1)-flavor QCD with degenerate light quark masses m_u=m_d≡ m_l and a strange quark mass m_s tuned so that m_s/m_l=27. These are the same data sets as in Ref. <cit.>.We employed a HISQ action generated using <cit.>. Temperatures above 180 MeV use data <cit.> with slightly heavier[This is known to have a negligible effecton the results <cit.>.] light quarks, m_s/m_l=20. In all cases results have been obtained on lattices with aspect ratio N_σ/N_τ=4. In these proceedings, we present calculations only for the isospin-symmetric case r=0.5. While r=0.4 is a more physically accurate choice, choosing r=0.5 has the advantage of forcing _Q=0, simplifying some of the formulas. Moreover the quantitative differences between r=0.4 and r=0.5 EoS are generally mild <cit.> and are hence expected to have little impact on these parameters[Indeed, this is what we found for the sound speeds <cit.>.]. We are often interested in the behavior of observables near the pseudocritical temperature . When indicated on figures, we take =156.5(1.5) MeV from Ref. <cit.>. Lines of constant s/n_B and n_B/n_0, with nuclear matter density n_0=0.16/ fm^3, are taken from Ref. <cit.>. The  <cit.> is used for HRG calculations, spline fits, and bootstrapping. For the HRG model, we use the QMHRG2020 list of hadron resonances <cit.>. § RESULTSIn fig:materialParams we show preliminary results for c_s^2, c_T^2, κ_s, and C_V. Not all uncertainty has been included, hence error bands aremildly underestimated. Starting with isentropic observables, we note that becausen_B leads at μ_B while s leads with a constant, the limit μ_B→∞ corresponds to s/n_B→0. Hence the left-hand plots show an at most mild dependence on μ_B in the surveyed range. We see no indication of c_s^2 going to zero within this range and hence no critical signature. As has been seen already with μ_B=0 calculations,c_s^2 overlaps with the estimate from Ref. <cit.>, and both isentropic observables agree with our previous computationin Ref. <cit.>. Turning to c_T^2, our preliminary results are similar to the preliminary results of Ref. <cit.>. 
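For reference, the sketch below reproduces the earlier, interpolation-based estimate of c_s^2 = (∂p/∂T)/(∂ε/∂T) along a line of fixed s/n_B that is described above: p and ε are interpolated in T with cubic splines and the ratio of their derivatives is formed numerically. The tabulated values are invented stand-ins, not lattice data, and serve only to illustrate the procedure that the analytic cumulant formulas replace.

import numpy as np
from scipy.interpolate import CubicSpline

# Interpolation-based c_s^2 along a line of fixed s/n_B; p4 and e4 below are
# made-up stand-ins for p/T^4 and eps/T^4 at the temperatures T [MeV].
T  = np.array([135., 145., 155., 165., 175., 190., 210., 230.])
p4 = np.array([0.55, 0.75, 1.00, 1.30, 1.62, 2.05, 2.55, 2.95])
e4 = np.array([2.2,  3.1,  4.3,  5.6,  6.9,  8.4, 10.0, 11.2])

p_of_T = CubicSpline(T, p4 * T**4)     # back to physical units
e_of_T = CubicSpline(T, e4 * T**4)

T_eval = np.linspace(140., 225., 50)
cs2 = p_of_T(T_eval, 1) / e_of_T(T_eval, 1)   # ratio of first T-derivatives
print(cs2.min(), cs2.max())            # stays below the ideal-gas value 1/3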
Finally our results for C_V at μ_B=0 agree with previous HotQCD results <cit.>. For all observables we see good agreement between lattice data and HRG below . § SUMMARY AND OUTLOOK We presented the status of our ongoing calculation of various QCD material parameters. The computation of the thermal expansion coefficient and isobaric heat capacity are in progress, which besides being interesting in their own right, will enable a few more analytic cross-checks between the parameters. For all our projects involving the QCD EoS, it will be useful to eventually have continuum extrapolations for 6- and 8-order cumulants, but this is a much more long-term goal.Acknowledgements–DAC was supported by the National Science Foundation under Grants PHY20-13064 and PHY23-10571.
http://arxiv.org/abs/2312.16703v1
{ "authors": [ "D. A. Clarke", "J. Goswami", "F. Karsch", "P. Petreczky" ], "categories": [ "hep-lat", "nucl-th" ], "primary_category": "hep-lat", "published": "20231227200109", "title": "QCD material parameters at zero and non-zero chemical potential from the lattice" }
Recursive Distillation for Open-Set Distributed Robot LocalizationKenta Tsukahara       Kanji Tanaka      January 14, 2024 ====================================================================A typical assumption in state-of-the-art self-localization models is that an annotated training dataset is available for the target workspace. However, this is not necessarily true when a robot travels around the general open world. This work introduces a novel training scheme for open-world distributed robot systems. In our scheme, a robot (“student") can ask the other robots it meets at unfamiliar places (“teachers") for guidance. Specifically, a pseudo-training dataset is reconstructed from the teacher model and then used for continual learning of the student model under domain, class, and vocabulary incremental setup. Unlike typical knowledge transfer schemes, our scheme introduces only minimal assumptions on the teacher model, so that it can handle various types of open-set teachers, including those uncooperative, untrainable (e.g., image retrieval engines), or black-box teachers (i.e., data privacy). In this paper, we investigate a ranking function as an instance of such generic models, using a challenging data-free recursive distillation scenario, where a student once trained can recursively join the next-generation open teacher set. § INTRODUCTION Self-localization, i.e., the problem of classifying a view image into predefined classes, is a fundamental problem in visual robot navigation and has important applications including scene understanding, map building, and path planning. Most of the existing solutions, ranging from image retrieval engines <cit.> to ConvNet image classifiers <cit.>, aim to build a high-quality self-localization model using annotated training datasets as supervision. Many state-of-the-art techniques can achieve very good performance in such supervised settings. However, this is not the case for an unfamiliar workspace where no supervision is available. Thus, the problem is largely unsolved. In this work, teacher-to-student knowledge transfer in general open-world distributed robot systems is considered as an alternative training setup. We observe that when humans travel around the open world, they often ask the people they meet in unfamiliar places for guidance. Therefore, we propose a similar knowledge transfer scheme, in which a student robot can view other robots encountered in unfamiliar places as potential teachers, and ask them to transfer knowledge about the places. It is noteworthy that there may exist various types of potential teacher robots. Some of them may be cooperative, but some others may not be. Some of them may be trainable (e.g., differentiable neural networks <cit.>), but some others may not be (e.g., image retrieval engines <cit.>). Some of them may have a known architecture, but some may have a black box architecture (i.e., data privacy). Therefore, we propose to introduce only minimal assumptions on the potential teacher robots.Existing knowledge transfer frameworks typically relied on prior knowledge about the teacher model, such as training datasets and metadata <cit.>. For example, the multi-teacher multi-student knowledge transfer scheme in <cit.> succeeded in training students to perform at least as well as the teacher by transferring the teacher's training data to the students. However, such schemes can potentially increase the cost of maintaining training data. 
A similar problem is being studied in the machine learning community as a more general issue called continual learning <cit.>. Many of existing approaches fall into the categories of “regularization <cit.>," “replay <cit.>," and “dynamic architecture <cit.>," all of which require maintenance of training datasets and metadata. Such requirements significantly limit the range of applications. To address the issue, we explore a novel continual learning setup that can handle an open teacher set. Instead of annotated training dataset to be required, in our scheme, a pseudo training dataset is reconstructed from a teacher model and used for continual learning of the student model under domain, class, and vocabulary incremental setups. However, even with current state-of-the-art technology <cit.>, dataset reconstruction abilities are far from perfect <cit.>. Furthermore, teacher robots can be of various types, ranging from trainable models such as ConvNet image classifiers <cit.> to untrainable models such as view image retrieval engines <cit.>. Unfortunately, existing schemes assume a known supervised architecture (e.g., <cit.>) and cannot be applied to open-set distributed robot systems. Therefore, we wish to find “generic" teacher models that can handle not only known teachers but also untrainable teachers (e.g., image retrieval engines <cit.>) with unknown architectures (i.e., data privacy <cit.>). In this work, we present a ranking function as an instance of such generic teacher models and investigate its performance in a challenging data-free recursive distillation scenario <cit.> (Fig. <ref>), where a trained student can recursively join the next-generation open teacher set. § OPEN-SET DISTRIBUTED ROBOT LOCALIZATION The current approach is built on the multi-teacher multi-student knowledge transfer (KT) scheme in <cit.>. In <cit.>, an ensemble KT scheme was investigated for visual place classification. The NCLT dataset in <cit.> (Fig. <ref>) was used, which contains long-term navigation data of a Segway robot equipped with an onboard monocular front-facing camera navigating a university campus over a long period in over 20 seasonal domains. Figure <ref> shows a bird's eye view of the robot workspace and example view images from the robot's onboard front-facing camera. The place classes are defined by partitioning the workspace into a grid in the bird's eye view coordinate system and associating each grid cell with each place class. Then, self-localization is formulated as a task of classifying an input view image to a place class. It is assumed that a teacher model has been trained in a previous season (i.e., domain). Then, the goal of KT is to train a student model so that it can be adapted to a new season via KT from the teachers.Following <cit.>, the knowledge to be transferred is assumed to be in the form of a training set (e.g., annotated visual inputs). This training set should be of the highest possible quality so that a model equivalent to the teacher model can be reconstructed from it. In the multi-teacher multi-student recursive KT scenario<cit.>, student robots can encounter multiple teachers sequentially. Once trained, students can also act as teachers for other potential students in subsequent seasons. Every time a student encounters a teacher, the student is provided with a training set by the teacher. 
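For concreteness, the grid-based place-class definition used here can be sketched as follows; the workspace extent and the 10×10 resolution (100 classes, matching the experiments later in the paper) are assumptions of this illustration.

# Sketch of the grid-based place-class definition described above.
X_MIN, X_MAX = 0.0, 600.0        # bird's-eye-view extent in metres (placeholder)
Y_MIN, Y_MAX = 0.0, 600.0
N_GRID = 10                      # 10 x 10 grid -> 100 place classes

def place_class(x, y):
    """Map a bird's-eye-view position (x, y) to a place-class index in [0, 99]."""
    col = min(int((x - X_MIN) / (X_MAX - X_MIN) * N_GRID), N_GRID - 1)
    row = min(int((y - Y_MIN) / (Y_MAX - Y_MIN) * N_GRID), N_GRID - 1)
    return row * N_GRID + col

# Self-localization then reduces to predicting place_class(x, y) of the capture
# position from the onboard front-facing camera image.
print(place_class(125.0, 480.0))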
In the original study, the training set is used for supervised learning of the student, assuming a ConvNet as the backbone of teacher/student models and a sample-based place class description <cit.>. A variant with a knowledge distillation instead of supervised learning called “recursive knowledge distillation" is also studied in <cit.>. Unlike <cit.>, the current work considers a data-free knowledge transfer (DFKT) extension of the recursive distillation scheme, called data-free recursive distillation (DFRD), to handle a general open teacher set that can contain unknown and black-box teachers. Recently, the research on DFKT has become very active in several research fields such as privacy-friendly KT <cit.>. In DFKT, training sets and metadata are not assumed to be accessible, but they should be reconstructed from the available teacher model and then used as pseudo-supervision for training the student model. The DFKT approach has a significant advantage in terms of spatial costs, as it does not require additional datasets or metadata like existing KT frameworks. This property is also attractive for the open-set distributed robot localization considered in the current study. In addition, DFKT also allows for the respect of the privacy of teacher robots and student robots. Specifically, students can hide what they don't know by minimizing their questions, and teachers can hide what they do know by minimizing their answers. Note that students do not necessarily have access to meta-information such as the number of teachers, the performance of individual teachers, the number and ID of unseen place classes, the relative pose of teachers, and physical means of communication.§ DATA-FREE RECURSIVE DISTILLATION Our framework, data-free recursive distillation (DFRD), can be viewed as a DFKT extension of the recursive knowledge distillation, which was originally introduced in <cit.>. However, unlike existing DFKT schemes, it can handle not only known teacher models but also generic teachers including untrainable and black-box teachers. Moreover, it allows the student to recursively act as a member of the next-generation open teacher set once trained. To our knowledge, such a data-free recursive distillation setup has not been considered in the existing works.The basic idea of DFKT is to synthesize alternative data for the training data of the teacher model. Now, let f be a teacher classifier that returns a prediction y=f(x) for an input x. In this case, the goal of DFKT is to reconstruct a high-quality training sample set D={(x,y)} from which a model equivalent to f can be trained in a supervised manner using the pre-trained model f as a cue. In our experimental scenario, the data synthesizer g is a trainable function that reconstructs the dataset D such that D=g(f). From the KT perspective, D can be directly used for supervised training or distillation of the student model f'. The predictive performance of this trained student f' depends on the quality of the synthetic data D. Under the assumption of ideal quality synthetic data D, students are expected to have the same or better prediction performance than the teacher. However, even state-of-the-art data synthesizers are far from perfect and are subject to reconstruction errors, including false positives and false negatives. As a result, the student's quality may be worse than the teacher's.Existing frameworks on DFKT typically rely on individual assumptions on the teacher model f (e.g., model type, architecture). 
For example, the seminal work in ZSKD <cit.> introduces the hypothesis that softmax spaces can be modeled as Dirichlet distributions, and provides data impressions from the supervised model that can be used as a replacement for the training set. In DAFL <cit.>, activations and softmax predictions are assumed to be available and this assumption is exploited to introduce a general-purpose regularization function for activations and predictions. These regularizations are also introduced in follow-up studies. Unfortunately, such assumptions of trainable or known teacher models are often violated in open-set teachers.Instead of assuming such specific teacher models, we make a minimal assumption about the model: “The teacher's self-localization model can be reused as the communication channel for KT." More specifically, the KT proceeds in the following procedure. (1) A student generates or samples a question x and sends it to a teacher. (2) The teacher with model f computes an answer by y=f(x) and returns it to the student. (3) The student obtains a pseudo training sample (x, y). One of the best-known strategies to generate a sample x at Step 1 is to predict pseudo-training samples of the teacher or impressed as mentioned above. However, the problem of sampling impressions is highly ill-posed and a topic of ongoing research.In this work, we begin with the best possible and worst possible samplers, called oracle samplers and random samplers. The oracle sampler has an excellent ability to sample x from the teacher's training set (i.e., x∈D). The random sampler is a naive sampler that samples an input sample x randomly from the input space. Oracle samplers and random samplers can be considered the best and worst-performing practical samplers that produce meaningful input samples. In other words, any practical sampler would be expected to perform somewhere between these two opposing samplers. Based on this consideration, we simulate diverse samplers by mixing sample sets from these two samplers at various mixing ratios. In experiments, the mixing ratio 100:r was changed to r=10·2^i (i=0, 1, 2, ⋯, 10).Specifically, we proposed to use the reciprocal rank feature (RRF) as a regularized input signal. This is because the above naive random sampler was experimentally found to have extremely poor performance. Whether used alone or in combination with the Oracle sampler, the performance of the naive random sampler was so poor that the entire framework broke down. The proposed RRF was originally introduced in <cit.> as an input feature. Recalling that any self-localization model can be modeled as a ranking function, any teacher's output sample or student's input sample can be approximately represented as an RRF vector. This RRF vector is low-dimensional and is well approximated by an even lower-dimensional k-hot RRF (k=10). Note that this k-hot RRF can be computed efficiently by performing k maximum operations on an N-dimensional noise vector. It has been experimentally shown that this regularized random sampler is superior to the aforementioned naive random sampler. For more information on this RRF, please refer to the paper <cit.>. It is worth mentioning that from the continual learning perspective <cit.>, this work applies to all domain-, class- and vocabulary-incremental setups. We assume a typical domain incremental scenario, which we call cross-season self-localization, in which a Segway robot navigates an outdoor workspace on a university campus over a long period. 
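A sketch of this query-and-label procedure is given below, including the regularized random sampler built from k maximum operations on a noise vector and the 100:r mixing with oracle samples. The reciprocal-rank weighting 1/1,...,1/k used for the k-hot RRF is one plausible reading of the cited construction and should be treated as an assumption, as should the dummy teacher that makes the sketch self-contained.

import numpy as np

rng = np.random.default_rng(0)
N_CLASS, K = 100, 10             # N-dimensional feature space, k-hot RRF, k = 10

def k_hot_rrf(v, k=K):
    """Keep the top-k entries of v as reciprocal ranks 1/1 ... 1/k, zero the rest.
    The 1/rank weighting is an assumed reading of the cited RRF construction."""
    out = np.zeros_like(v, dtype=float)
    top = np.argsort(v)[::-1][:k]                 # k maximum operations
    out[top] = 1.0 / np.arange(1, k + 1)
    return out

def regularized_random_sample():
    """Step 1 with the regularized random sampler: a k-hot RRF built from noise."""
    return k_hot_rrf(rng.random(N_CLASS))

def make_pseudo_dataset(teacher, oracle_inputs, r, n=1000):
    """Mix oracle and regularized random queries at ratio 100:r and label each
    query x with the teacher's answer y = f(x), i.e. steps (1)-(3) above."""
    p_random = r / (100.0 + r)
    data = []
    for _ in range(n):
        if rng.random() < p_random:
            x = regularized_random_sample()
        else:
            x = oracle_inputs[rng.integers(len(oracle_inputs))]
        data.append((x, teacher(x)))              # pseudo training sample (x, y)
    return data

# Dummy stand-ins so the sketch runs end-to-end; they are not the models used
# in the experiments.
dummy_teacher = lambda x: int(np.argmax(x))
dummy_oracle = [k_hot_rrf(rng.random(N_CLASS)) for _ in range(50)]
print(len(make_pseudo_dataset(dummy_teacher, dummy_oracle, r=40)))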
We also assume a class increment scenario where students may incrementally learn a place class that is unknown to them from the teacher. We also assume a vocabulary increment scenario in which students incrementally encounter diverse and unknown teacher vocabulary sequences during their travels (i.e., open vocabulary). In other words, it is assumed that the open teacher set is updated when and only when domain, class, or vocabulary is incrementally updated. Note that typical continual learning solutions “regularization <cit.>," “replay <cit.>," and “dynamic architecture <cit.>" rely on the availability of datasets and metadata, and their extension to DFKT is a topic of ongoing research.It is also worth mentioning that, unlike the typical well-defined place class definitions such as country/region/postal code, there are no criteria for clearly defining place classes in robot-centric coordinate systems. There are various definitions and standards based on space and appearance, and they are not necessarily unified among robots. Inconsistency in definitions and standards may become a serious obstacle in transferring class-specific knowledge between different robots. As a naive solution, for example, when partitioning the robot's workspace into a grid of place classes by grid-based spatial discretization, a representative point such as the center of gravity of a place area in the workspace may be used as the unified definition of place class.However, such a place area represented by the same class ID is not necessarily spatially consistent between robots and may suffer from spatial uncertainties. Therefore, the problem of KT-friendly place class definition remains an open issue <cit.>. § EXPERIMENTSWe experimentally evaluated the proposed scheme in a sequential cross-season scenario, using a length 10 sequence of seasons, “2012/03/31,” “2012/1/8,” “2012/2/5,” “2012/2/23,” “2012/4/5,” “2012/6/15,” “2012/8/20,” “2012/10/28,” “2012/11/17,” and “2012/12/1” in the NCLT dataset. The details of the experimental setup are as in Section <ref>. While our scheme also allows teachers and students to have different definitions of place classes (Section <ref>), for simplicity, all the teacher and student robots in the current experiments use the same place class definition, in which the workspace is partitioned into 100 place classes with a 10×10 grid of place classes. For the self-localization model, a scene graph classifier recently developed in <cit.> is employed as the visual embedding (Fig. <ref>). It is a three-step procedure. (1) First, a spatial-semantic scene graph is extracted from an input view image by using a scene graph generator as in <cit.>. (2) Then, the scene graph is converted to a class-specific probability map by using a pre-trained graph ConvNet. (3) Then, the class-specific probability map is further converted to an RRF format. Thus, such a fixed-length RRF vector is used as a visual input to the teacher and student models. A minimal KT scenario that contains both supervised learning and knowledge transfer is considered. A student model is initialized at each i-th season and the student robot encounters two teacher robots. One of the two teachers is trained via supervised learning. This teacher experienced a subset of the 100 place classes and employed the corresponding portion of the i-th season annotated dataset as supervision. 
We randomly determined whether a teacher experienced a certain place class with a probability of 10%, meaning that the number of place classes experienced by this teacher will be 10 on average. The other teacher is introduced for knowledge transfer. This teacher is the previous version of the student model that has been trained in (i-1)-th season. Note that this second type of teacher model is not available in the first season (i.e., i=1). Note that for classes that the teacher has never experienced before, no matter how good the training scheme is, one can only expect a very low correct answer rate (about 1%). According to the above definition of experienced place classes, there can be overlap in the classes assigned to robots. For example, in one experiment shown in Fig. <ref>, the total number of classes teachers experienced over time was 10, 18, 25, 31, 34, 41, 44, 46, 47, and 48 for each generation. In this way, the number of classes experienced increases monotonically with the number of generations, but it is not strictly proportional.The self-localization models are implemented as follows. The same multi-layer perceptron (MLP) model was used for all the teacher/student models. Given an input view image, the visual embedding is computed by the above-mentioned graph ConvNet, converted to a 10-hot RRF vector, and then used as input to the MLP model. The output of an MLP model is a class-specific softmax vector. However, it is converted to a class-specific rank vector to simulate the black-box teacher model. In preliminary experiments, the rank vector can be approximated by a 10-hot RRF vector to be used for knowledge distillation with the standard distillation loss function. However, we here consider a more generic scenario, and the RRF vector is further converted to a 1-hot vector. Finally, the pairing of the input embedding and the 1-hot vectors is used as a training sample for KT.Figure <ref> shows the performance curve. As can be seen from this figure, when the ratio r of samples derived from random samplers is small and oracle samplers are dominant, the student robot performance was reasonably well. It can be also seen that the proposed DFRD scheme with the RRF feature space regularization does not deteriorate for large ratio r≤ 640 % for experiments considered here. IEEEtran
http://arxiv.org/abs/2312.15897v1
{ "authors": [ "Kenta Tsukahara", "Kanji Tanaka" ], "categories": [ "cs.RO", "cs.CV", "cs.LG" ], "primary_category": "cs.RO", "published": "20231226062055", "title": "Recursive Distillation for Open-Set Distributed Robot Localization" }
[][email protected] Instituto Galego de Fisica de Altas Enerxias (IGFAE), Universidade de Santiago de Compostela, E-15782 Galicia, Spain[][email protected] Instituto Galego de Fisica de Altas Enerxias (IGFAE), Universidade de Santiago de Compostela, E-15782 Galicia, Spain Thermalization of heavy quarks in the quark-gluon plasma (QGP) is one of the most promising phenomena for understanding the strong interaction. The energy loss and momentum broadening at low momentum can be well described by a stochastic process with drag and diffusion terms. Recent advances in quantum computing, in particular quantum amplitude estimation (QAE), promise to provide a quadratic speed-up in simulating stochastic processes.We introduce and formalize an accelerated quantum circuit Monte-Carlo (aQCMC) framework to simulate heavy quark thermalization.With simplified drag and diffusion coefficients connected by Einstein's relation, we simulate the thermalization of a heavy quark in isotropic and anisotropic mediums using an ideal quantum simulator and compare that to thermal expectations. Accelerated Quantum Circuit Monte-Carlo Simulation for Heavy Quark Thermalization Wenyang Qian January 14, 2024 =================================================================================§ INTRODUCTIONThermalization is one of the most important common features of a non-equilibrium system. An open system that undergoes quantum decoherence by rapidly exchanging information with the environment usually tends to thermalize conventionally and classically. Heavy quark thermalization in the background of quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions (HICs) is such an open system that heavy quarks have distinguished separation of scales compared to the soft QGP medium. With the thermalization/hydrodynamization time of the QGP characterized by a scale of τ_h≃ 4πη/(Ts) <cit.>, the relaxation of the heavy quark is prolonged by its heavy mass τ_R≃ m_ HQτ_h/T in comparison. With a typical charm quark mass of ≃1.5 GeV, and the QGP medium temperature of 300-500 GeV in HICs, heavy quark undergoes a time of thermalization caused and dominated by a thermal environment. This is more extreme for the bottom quark with ≃ 4.5 GeV mass, that the thermalization process is not even finished in a 10 fm of the QGP phase. Eventually, the hadronized heavy flavors measured in the detector are not thermalized and the characterization of the heavy quark spectra tells us the medium property of the QGP. The heavy masses of heavy quarks not only delay the thermalization in the QGP medium but also make the heavy quarks less relativistic compared to the almost massless partons in the QGP. This leads to a well-established thermalization description for heavy quarks based on a stochastic process with low-momentum random kicks from the medium <cit.>. The thermalization in this description is controlled by two competing effects, the energy loss from a drag term and a diffusion from a stochastic term. The energy loss tends to reduce the momentum of a heavy quark while the diffusion tends to broaden the momentum distribution. The competing contributions eventually thermalize the heavy quark to a certain distribution controlled by a fluctuation-dissipation theorem, known as Einstin's relation.In a non-relativistic or static limit, the thermal distribution is given by a classical Maxwell-Boltzmann distribution. For more discussions on heavy quark thermalization in HIC phenomenology, see reviews <cit.>. 
Notably, this stochastic process is so generic that it is not limited to the description of a heavy quark thermalization but is broadly utilized in many research topics, such as the Black–Scholes model in quantitative finance, which in part inspired our work. Quantum computing technology, using laws of quantum mechanics, has already been extensively applied in many areas of HIC physics <cit.>, where the strength of quantum computing is usually exploited from its exponential state space, local Hamiltonian simulation, and near-term variational algorithms. Recently, novel gate-based quantum finance strategy <cit.> with the quantum amplitude estimation (QAE) <cit.> exhibits a promising quadratic speed-up over the classical Monte-Carlo (MC) method. In much the same spirit as Grover's algorithm <cit.>, the QAE allows efficient estimation of the amplitude of the designated quantum state. The main contribution of this work is the first application of an accelerated quantum circuit Monte-Carlo (aQCMC) strategy using the QAE techniques for heavy-quark thermalization.Different events are simulated as quantum state evolution with sufficient quantum shots, and the physical observables are efficiently extracted with amplitude estimation.With the constant improvements in the QAE algorithms <cit.>, the aQCMC may expect to become a more standard approach, especially in future large-scale quantum simulations. This manuscript is organized as follows. In Sec. <ref>, we review the heavy-quark thermalization formulated as a stochastic differential equation and its standard classical simulation strategy with the MC method. In Sec. <ref>, we discuss the aQCMC strategy utilized in this work to speed up the computation. In Sec. <ref>, we present our simulation results in isotropic and anisotropic mediums using Qiskit. In Sec. <ref>, we summarize and discuss future avenues of this work.§ HEAVY QUARK THERMALIZATION§.§ Stochastic description of heavy quark thermalizationThe heavy-quark thermalization can be characterized by a stochastic differential equation (SDE) known as the Langevin equation <cit.>dx_i =p_i/E(p⃗)dt,   i=x,y,z,dp_i =-A(x⃗,p⃗,t)p_idt+σ_ij(x⃗,p⃗,t)dW_j,where the random force that sampled as a Wiener process dW∼𝒩(0,dt) has correlation ⟨dW_idW_j|=⟩δ_ijdt. The drag coefficient A(x⃗,p⃗,t) and the diffusion coefficient σ_ij(x⃗,p⃗,t) in HICs may be calculated from either quantum chromodynamics (QCD) <cit.>, or QCD-like theories <cit.> with a heavy quark interacting with the medium. Applying Ito's lemma, the Langevin equation Eq. (<ref>) can be reformulated as a Kolmogorov-forward equation, known as the Fokker-Planck equation, presenting the time evolution of the heavy quark non-equilibrium distribution f(x⃗,p⃗,t) as∂/∂ tf(x⃗,p⃗,t) =∂/∂ p_i[A(x⃗,p⃗,t)p_if(x⃗,p⃗,t)]+∂^2 /∂ p_i∂ p_j[B_ij(x⃗,p⃗,t)f(x⃗,p⃗,t)],with the diffusion coefficient B_ij(x⃗,p⃗,t)=σ_ik(x⃗,p⃗,t)σ_jk(x⃗,p⃗,t)/2. There is no general solution to the Fokker-Planck equation Eq. (<ref>) and the evolution would depend on the initial condition.However, the solution to the Fokker-Planck equation would be an attractor towards the thermal limit. 
This transition from various ordered initial conditions to a unique chaotic limit is the thermalization of heavy quarks within a medium.These transport coefficients are generally medium profile dependent, but in a thermal and homogeneous medium, we may drop the spatial x⃗ and time t dependencies.The perturbative QCD calculation suggests the drag coefficient A(p⃗) to be almost a constant at low momentum p≲ 2M <cit.>.With an approximately constant drag coefficient, one may simplify the Langevin equation in the non-relativistic limit at a small momentum p, which may be further rescaled by the heavy quark mass M. Keeping diagonal terms only in the diffusion term, these simplifications lead to a dimensionless Langevin equationdq_i=-q_idt̃+dW̃_i.In the above equation we have used dimensionless momentum q_i=p_i/M, time dt̃=Adt, and anisotropic stochastic terms dW̃_i∼𝒩(0,2Tdt̃/(Mχ_i^2)) with proper Einstein's relation A=σ_ii^2χ_i^2/(2MT). For details of derivations, see App. <ref>. Notice that the heavy quark relaxation time τ_ R≃ 1/A, the value of dt̃ represents the speed of energy loss and thermalization. Thus, a realistic simulation would favor the dt̃ to be as small as possible, and a value of dt̃≃ 1/N_t takes about N_t steps to thermalize (thermalization will also be delayed by a large momentum, for instance, a heavy quark jet). Another relevant scale is the temperature over heavy quark mass ratio T/M in the variance σ̃_i^2dt̃=2Tdt̃/(Mχ_i^2). The dimensionless Fokker-Planck equation corresponding to Eq. (<ref>) reads∂/∂t̃f(q⃗,t̃) =∂/∂ q_i[q_if(q⃗,t̃)] +1/2σ̃_ii^2∂^2 /∂ q_i^2[f(q⃗,t̃)]The thermal distribution in terms of these dimensionless quantities reads f^ eq(q⃗)∝exp[-q_x^2/σ̃_x^2-q_y^2/σ̃_y^2-q_z^2/σ̃_z^2] This stochastic process is usually simulated with the MC methods, by sampling the Wiener process for each time step. The trajectory of a heavy quark contributes to an event and a collection of these events provides a time series of the heavy quark distribution towards thermalization. On a modern digital computer, this MC simulation is straightforward: one starts with whatever heavy quark initial distribution f(q⃗,t̃_0), and samples the heavy quark initial momentum (q_x^t̃_0,q_y^t̃_0,q_z^t̃_0) accordingly. Similarly, the values of the stochastic variables (dW_x^t̃,dW_y^t̃,dW_z^t̃) can be uncorrelatedly sampled with a set of independent normal distributions{𝒩(0,2Tdt̃/(Mχ_i^2))} with i=x, y, z for each time in a diagonalized form. The increment of the momentum follows the Langevin equation Eq. (<ref>) and the momentum at the next step can be calculated with the forward-Euler method asq_i^t̃+dt̃=q_i^t̃-q_i^t̃dt̃+dW̃_i^t̃,Iterating the above algorithm for large enough N_t steps from t̃_0 to t̃_0+N_t dt̃ gives a time series of one heavy quark momentum𝐐^T={q_i^t̃_0,q_i^t̃_0+dt̃,⋯,q_i^t̃_0+N_t dt̃},and repeating for a total of N_event events produces an emergent phenomenon of heavy quark thermalization, which leads to a thermal distribution {𝐐_1^t̃_0+N_tdt̃,𝐐_2^t̃_0+N_tdt̃,⋯,𝐐_N_ event^t̃_0+N_tdt̃}∼ f^ eq(q⃗),with N_t dt̃≫ 1. Then, for any physical quantity F(q⃗) at time t̃, its expectation value would be ⟨F(q⃗)|=⟩1/N_ event∑_i=1^N_ event F(𝐐_i^t̃).The MC simulation on a modern computer is straightforward but often requires large computational resources for reasonable precision. 
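The following NumPy sketch implements this forward-Euler procedure for the dimensionless Langevin equation and checks that, after many relaxation times, the ensemble approaches the thermal distribution quoted above. The value T/M = 0.2 and the anisotropy parameters χ_i are illustrative choices rather than values used in the paper.

import numpy as np

# Classical Monte-Carlo baseline for the dimensionless Langevin equation above,
# q_i(t+dt) = q_i(t) - q_i(t) dt + dW_i,  dW_i ~ N(0, 2 T dt / (M chi_i^2)).
rng = np.random.default_rng(1)
T_over_M = 0.2
chi = np.array([1.0, 1.0, 1.4])      # chi_x, chi_y, chi_z (anisotropic medium)
dt, n_steps, n_events = 1.0 / 32, 256, 20_000

q = np.zeros((n_events, 3))          # all heavy quarks start at rest
q[:, 2] = 1.0                        # ... except for an initial kick along z
sigma = np.sqrt(2.0 * T_over_M * dt) / chi
for _ in range(n_steps):
    dW = rng.normal(0.0, sigma, size=(n_events, 3))
    q += -q * dt + dW                # drag + diffusion

# After n_steps*dt >> 1 the ensemble approaches the thermal distribution quoted
# above, a Gaussian with standard deviation sqrt(T/M)/chi_i in each direction.
print("measured std :", q.std(axis=0))
print("thermal  std :", np.sqrt(T_over_M) / chi)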
Therefore, we encode the stochastic process on the quantum circuit and accelerate with the QAE algorithms, one may reduce the inherent problem complexity faced in classical simulations, reaching a quadratic quantum speed-up compared to the classical method to the same precision.§ QUANTUM STRATEGY In this section, we formulate the quantum strategy, the quantum circuit Monte-Carlo (QCMC) to simulate the heavy quark thermalization in a stochastic description. For the QCMC simulation, we encode the particle's momenta q_i in each direction as a quantum state. With a generic n-qubit quantum register, one has in principle N=2^n possible modes for the heavy quark momenta q. By restricting the momentum q∈[-q_ max,q_ max), we discretize q into N values with δ q=2q_ max/N. Then, we further shift the physical momentum q to the positive momentum q̅ by a constant q_max so that q̅=q+q_ max∈[0,2q_ max) and impose a periodic boundary condition, i.e. q̅= q̅ mod(2q_max). The use of non-negative dimensionless momenta q̅ makes a straightforward binary mapping onto the corresponding quantum states, which can be extended to all three spatial dimensions x,y, and z. To thermalize with approximately N_t steps, reasonable values of the coefficients for the simulation scale as dt̃≃ 1/N_t with N_t>1. The variance in the stochastic term is chosen to be σ̃_i^2dt̃=2Tdt̃/(Mχ_i^2)≃ dt̃/(2χ_i^2) according to scales of heavy quark mass M and temperature T in HICs.Since generic quantum multiplication and divisions are complicated <cit.>, we pick dt̃=1/2^d with positive integer d in practical simulations, which can be realized on the quantum circuit by shifting the quantum state with d qubits using a sequence of 𝖢𝖷 gates. In general, for each of the i = x,y,z directions, we prepare quantum register 𝒮_i to encode the particle's momentum q_i and quantum register 𝒲_i to encode the diffusion term dW̃_i. Each register is represented by a set of qubits. The numbers of qubits n_𝒮,n_𝒲 in registers 𝒮_i,𝒲_i are not necessarily the same. The increment at each time step t̃ = n dt̃ contributed from the drag term -q_idt̃ and the diffusion term -dW̃_i are implemented as unitary quantum operators U_A^n_i following Eq. (<ref>) so that U_A_i^n |dW̃_i^n⟩_𝒲_i^n⊗|q^n_i⟩_𝒮^n_i⊗|0⟩_𝒮^n+1_i= |dW̃_i^n⟩_𝒲_i^n⊗|q^n_i⟩_𝒮^n_i⊗|q_i^n - q_i^ndt̃+dW̃_i^n⟩_𝒮^n+1_i= |dW̃_i^n⟩_𝒲_i^n⊗|q^n_i⟩_𝒮^n_i⊗|q^n+1_i⟩_𝒮^n+1_i.Here, U_A_i^n = U_A is time-independent with a constant drag coefficient A, though it is not required. Now we introduce the quantum gates used in the circuit:* Distribution loading gates (U_L) are responsible for loading the initial momentum distribution on the quantum register 𝒮 for the system. In principle, one can start with either a single momentum or any momentum distribution for the heavy quark and evolve it on the circuit. Here, we initialize with an arbitrary single momentum each time using 𝖷 gates.* Stochastic Wierner gates (U_W) provide the stochastic contribution to the quantum circuit for the Wiener process dW. Here, we sample normal distribution 𝒩(0,σ=2Tdt̃/(Mχ_i^2)) exactly, and subsequent circuit transpilation automatically builds the quantum gates for the distribution. In other words, U_W |0⟩ = ∑_q̅√(𝒫(q̅))|q̅/δ q⟩ with probability 𝒫(q̅) = (1/√(2πσ^2))exp(-q̅^2/(2σ^2)). * Quantum evolution gates (U_A) are the main building blocks of the QCMC, where we follow Eq. (<ref>) to construct the evolution gates. Specifically, we implement and utilize the quantum adders and multipliers (see App. 
<ref> for a brief review) to build the stochastic Langevin evolution at each time step. One additional constant quantum adder is included to remedy the momenta from q to non-negative q̅ per each step. Notably, these quantum arithmetic gates correspond directly to the classical arithmetic operations, though one still needs to manually manipulate these operations at the quantum-register level for today's quantum computers. Since quantum Fourier transforms are innate to most arithmetic operations, it may be more efficient to use Fourier basis as the encoding basis to abbreviate consecutive operations.In principle, one could simulate the MC process on the quantum circuits as efficiently as on a classical computer. Nevertheless, since at each time step, the Wiener process dW needs to be uncorrelated and the quantum arithmetic operations are on the register level, the quantum circuit would require additional sets of registers 𝒮 and 𝒲 for each time iteration, making the total qubit number scales as 𝒪((2n+1)N_t) assuming n_𝒬=n_𝒲=n. To circumvent this tower-like quantum circuit, one may include 𝗋𝖾𝗌𝖾𝗍 gates to economically reuse the quantum registers repeatedly for different time steps, as in Fig. <ref>, leading to only 𝒪(3n) qubits. The quantum circuit Monte-Carlo (QCMC) method can be accelerated by taking advantage of the quantum amplitude estimation <cit.> (QAE), a generalized version of the Grover's search algorithm <cit.>. See App. <ref> for a review. Suppose an operator A_F acts on n+1 qubits, A_F |0⟩_n|0⟩ = √(1-a)|ψ_0⟩_n|0⟩ + √(a)|ψ_1⟩_n|1⟩,such that a∈[0,1] is the unknown of interests. In the heavy quark thermalization we study, a = ⟨ψ_n | F | ψ_n|$⟩ is the expectation of any physical observableFon the momentum quantum state|ψ_n⟩at stepn. Using the Grover operator𝒬 = A_F S_0 A_F^†S_ψ_0whereS_xis reflection operator about statex, QAE allows for high-probability estimation ofainN_qqueries ofA_Fwith errorϵ= 𝒪(1/N_q), which is a quadratic speed-up over classical MC <cit.>. In principle, one may use the standard quantum phases estimation (QPE) with extra auxiliary qubits <cit.> to retrieve the amplitude where the estimation success rate is quickly boosted close to unity. In practice, the QAE approach is usually difficult for two reasons: Firstly, universal oracle implementation for the expectation functionFis nontrivial; secondly, the QPE, the key to extract amplitude, requires expensive auxiliary qubits and substantial multi-qubit gates <cit.>. Fortunately, operatorsU_Finvolving piecewise linear functions can be approximated via Taylor expansion and implemented using controlled𝖱_𝖸gates <cit.>, so we are capable of investigating momentum and absolute momentum expectation of the particle, i.e.,F(q) = qandF(q) = |q|. Alternative loading methods to reduce the circuit complexity that one may consider include quantum generative adversarial networks <cit.> and approximate quantum compiling <cit.>.On the other hand, the complexity of the QPE can be circumvented using novel QPE-free algorithms <cit.>, which are mostly based on selected Grover iterationsQ^k A_Fto estimate the quantum amplitude efficiently, and the same quadratic speed-up can be obtained <cit.>. In particular, we focus on the Iterative QAE (IQAE) algorithm in our simulation result, which proves most economical in estimation accuracy and confidence level <cit.> for our simulation resources. 
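The quadratic speed-up can be made concrete with the following purely classical sketch: it shows how the expectation of F(q)=|q| over a discretised momentum distribution is rescaled into an amplitude a in [0,1], and it compares the O(1/ε^2) classical sample count with the O(1/ε) Grover-query count of the QAE. The discretisation and the stand-in distribution are assumptions of the illustration, not outputs of the circuit in Fig. <ref>.

import numpy as np

# Classical emulation of the quantity the QAE estimates: the observable F is
# encoded so that a = <psi_n| F |psi_n> becomes the probability of measuring
# |1> on the objective qubit. Here F(q) = |q| rescaled to [0, 1].
n_qubits, q_max = 5, 2.0
q_grid = np.linspace(-q_max, q_max, 2**n_qubits, endpoint=False)   # q in [-q_max, q_max)
prob = np.exp(-q_grid**2 / 0.4); prob /= prob.sum()                # stand-in |amplitude|^2

a_exact = np.sum(prob * np.abs(q_grid) / q_max)                    # target amplitude

# Scaling comparison: to reach error eps, classical sampling needs O(1/eps^2)
# shots, while the QAE needs O(1/eps) applications of A_F (quadratic speed-up).
for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps={eps:.0e}:  MC shots ~ {1/eps**2:.0e},  Grover queries ~ {1/eps:.0e}")
print("amplitude a =", round(a_exact, 4))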
Nonetheless, it is crucial to point out that, because the QAE relies on repeated applications of the unitary Grover operator, we cannot use the non-unitary reset gates directly; consequently, we regress to the tower-like quantum circuit in Fig. <ref> whenever the QAE is involved. § SIMULATION RESULTS
http://arxiv.org/abs/2312.16294v1
{ "authors": [ "Xiaojian Du", "Wenyang Qian" ], "categories": [ "hep-ph", "nucl-th", "quant-ph" ], "primary_category": "hep-ph", "published": "20231226190119", "title": "Accelerated quantum circuit Monte-Carlo simulation for heavy quark thermalization" }
The Media Bias Taxonomy]The Media Bias Taxonomy: A Systematic Literature Review on the Forms and Automated Detection of Media BiasUniversity of Göttingen Germany Gö[email protected] Both authors contributed equally to this research.University of Würzburg Würzburg Germany[1]TH Köln - University of Applied Sciences Köln [email protected] of Göttingen Germany Gö[email protected]é – Universitätsmedizin Berlin Berlin [email protected] of Göttingen Germany Gö[email protected] University of Göttingen Germany Gö[email protected] way the media presents events can significantly affect public perception, which in turn can alter people's beliefs and views. Media bias describes a one-sided or polarizing perspective on a topic. This article summarizes the research on computational methods to detect media bias by systematically reviewing 3140 research papers published between 2019 and 2022. To structure our review and support a mutual understanding of bias across research domains, we introduce the Media Bias Taxonomy, which provides a coherent overview of the current state of research on media bias from different perspectives.We show that media bias detection is a highly active research field, in which transformer-based classification approaches have led to significant improvements in recent years. These improvements include higher classification accuracy and the ability to detect more fine-granular types of bias. However, we have identified a lack of interdisciplinarity in existing projects, and a need for more awareness of the various types of media bias to support methodologically thorough performance evaluations of media bias detection systems. Concluding from our analysis, we see the integration of recent machine learning advancements with reliable and diverse bias assessment strategies from other research areas as the most promising area for future research contributions in the field. <ccs2012><concept><concept_id>10002944.10011122.10002945</concept_id><concept_desc>General and reference Surveys and overviews</concept_desc><concept_significance>500</concept_significance></concept><concept><concept_id>10002951.10003317.10003347</concept_id><concept_desc>Information systems Retrieval tasks and goals</concept_desc><concept_significance>500</concept_significance></concept></ccs2012> [500]General and reference Surveys and overviews [500]Information systems Retrieval tasks and goals [ Bela Gipp January 14, 2024 ====================preprintbox § INTRODUCTIONOnline news articles have become a crucial source of information, replacing traditional media like television, radio broadcasts, and print media (e.g., newspapers, magazines) <cit.>. However, news outlets often are biased <cit.>. The primary reason for this bias is that opinionated, entertaining, and sensationalist content is more likely to attract a larger audience while being less expensive to produce <cit.>.Media bias is widely recognized as having a strong impact on the public's perception of reported topics <cit.>.Media bias aggravates the problem known as filter bubbles or echo chambers <cit.>, where readers consume only news corresponding to their beliefs, views, or personal liking <cit.>.The behavior likely leads to poor awareness of particular issues, a narrow and one-sided perspective <cit.>, and can influence voting behavior <cit.>. 
Highlighting media bias instances has positive implications and can mitigate the effects of such biases <cit.>. While completely eliminating bias may be an unrealistic goal, drawing attention to its existence by informing readers that content is biased allows them to compare content easily. It can also enable journalists and publishers to assess their work objectively <cit.>. In the following, we list systems designed to help readers mitigate the effects of media bias on their decision-making. Most of these systems focus on aggregating articles about the same event from various news sources to provide different perspectives <cit.>. For example, news aggregators like AllSides[<https://www.allsides.com>] and Ground News[<https://ground.news>] allow readers to compare articles on the same topic from media outlets known to have different political views. Media bias charts, such as the AllSides media bias chart[<https://www.allsides.com/media-bias/media-bias-chart>] or the Ad Fontes media bias chart[<https://www.adfontesmedia.com/>], provide up-to-date information on media outlets' political slants. However, it is uncertain whether readers have the possibility and, more importantly, the desire to read several articles on the same topic and compare them. Media bias has become the subject of increasing interdisciplinary research, particularly in automated methods to identify bias. However, the concept of media bias remains loosely defined in the literature <cit.>. Existing work uses different subcategories and types of bias <cit.>, but authors tend to focus on only one media bias subcategory while disregarding similar kinds of bias concepts. Publications on media bias often work on similar concepts but assign different names to them, leading to confusion and imprecise use of terms. For example, some authors refer to word-based bias as linguistic bias <cit.>, while others call it bias by word choice <cit.>, but the exact difference or overlap between these terms is undefined. The lack of clarity surrounding media bias can have negative effects on measuring media bias perception <cit.>. Additionally, recent advances in Deep Learning have shown how awareness of tasks within complex domains, such as media bias, could potentially lead to large performance increases <cit.>. However, these advancements have yet to be incorporated into media bias research <cit.>. Our literature review seeks to create awareness of media bias detection as a task and to provide a summary of existing conceptual work on media bias and automated systems to detect it. To achieve this, we compare and contrast computer science research while also incorporating media bias-related concepts from non-technical disciplines such as framing effects <cit.>, hate speech <cit.>, and racial bias <cit.>. We propose a unified taxonomy for the media bias domain to mitigate ambiguity around its various concepts and names in prior work. In addition, we classify and summarize computer science contributions to media bias detection in six categories[We reason and detail our categories in <Ref>.]: (1) traditional natural language processing (tNLP) methods <cit.>, (2) simple non-neural ML techniques <cit.>, (3) transformer-based (tbML) <cit.> and (4) non-transformer-based (ntbML) <cit.> machine learning. We also include (5) non-neural network (nNN)-based (<Ref>) <cit.> as well as (6) graph-based <cit.> approaches. Lastly, we provide an overview of available datasets.
Our aim is to provide an overview of the current state-of-the-art in media bias and increase awareness of promising methods. We show how computer science methods can benefit from incorporating user and perception-related variables in different datasets to improve accuracy. To facilitate the usage of such variables, we give an overview of recent findings about cognitive processes behind media bias. We believe that a systematic overview of the media bias domain is overdue given the numerous papers covering related issues. Such an overview can benefit future work in computer science and other areas, such as Psychology, Social Science, or Linguistics, which all cover media bias. As we show in detail in <Ref>, existing literature reviews on media bias <cit.> do not cover crucial aspects. They do not give a systematic overview of related concepts, instead presenting how media bias can develop. Besides missing the major developments within the media bias domain since 2021, they lack details on computer science methods and psychological and social science research. In summary, our literature review answers the following research questions: (RQ1) What are the relationships among the various forms of bias covered in the literature? (RQ2) What are the major developments in the research on automated methods to identify media bias? (RQ3) What are the most promising computer science methods to automatically identify media bias? (RQ4) How does social science research approach media bias, and how can social science and computer science research benefit each other? All resources for our review are publicly available at <https://github.com/Media-Bias-Group/Media-Bias-Taxonomy>. § METHODOLOGY The core contribution of this article is a systematic literature review that provides a structured and comprehensive overview of the application of computer science methods for detecting media bias. This review also clarifies and establishes connections among the various concepts employed in the field of media bias. Reviews are susceptible to incomplete data and deficiencies in the selection, structure, and presentation of the content <cit.>, especially when aiming for extensive coverage. To overcome these challenges, we designed our collection and selection processes carefully, with a focus on mitigating common risks associated with literature reviews. We used automated, keyword-based literature retrieval (described in <Ref>), followed by a manual selection (<Ref>), and adhered to established best practices for systematic literature reviews <cit.>. The number of concepts (and keywords) relevant to media bias is high, and the relevant set is hard to delimit.[For example, the term bias also yields many health-related papers that are irrelevant to our review.] Reviewing all papers for all related concepts is infeasible[Based on the keywords we searched for, which we detail in <Ref>, we found over 100,000 publications.]. Therefore, we applied filter criteria to select candidate documents. Moreover, we excluded references from the selected papers as additional candidates since determining an unbiased stopping criterion would be challenging.
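As the following subsections describe, candidate documents were retrieved automatically from DBLP and Semantic Scholar for every media bias keyword and merged before manual screening. Purely as an illustration of this kind of keyword-based retrieval, a single query against the two services' public search APIs might look like the sketch below; the endpoints and parameters are assumptions taken from the services' public documentation, not the actual crawler released in our repository.

import requests

QUERY = "media bias"  # one keyword from the search-term list

# DBLP publication search API (endpoint and parameters assumed from its public documentation)
dblp = requests.get("https://dblp.org/search/publ/api",
                    params={"q": QUERY, "format": "json", "h": 100}).json()
dblp_hits = dblp["result"]["hits"].get("hit", [])

# Semantic Scholar paper search API (fields and limit likewise assumed from its documentation)
s2 = requests.get("https://api.semanticscholar.org/graph/v1/paper/search",
                  params={"query": QUERY, "fields": "title,year,abstract", "limit": 100}).json()
s2_hits = s2.get("data", [])

# naive merge on lower-cased titles to drop duplicates before the manual selection steps
seen = {hit["info"]["title"].lower() for hit in dblp_hits}
merged = dblp_hits + [paper for paper in s2_hits if paper["title"].lower() not in seen]
print(QUERY, "->", len(merged), "candidate records")

A full pipeline would additionally page through results, restrict the publication years, tag every record with its query term, and export the merged table as a CSV file for the manual selection steps.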
Our review covers the literature published between January 2019 and May 2022, thus providing a comprehensive overview of the state-of-the-art in the field. To ensure diversity in the computer science publications included in our review, we retrieved literature from two sources: DBLP (DataBase systems and Logic Programming)[<https://dblp.org/>] and Semantic Scholar[<https://www.semanticscholar.org/>]. Both sources are reliable and diverse and therefore meet the criteria for suitable sources for literature reviews <cit.>. DBLP is the most extensive database for computer science publications to date, containing documents from major peer-reviewed computer science journals and proceedings. It is a primary literature platform used in other reviews <cit.>. Semantic Scholar draws on a considerably larger database than DBLP, going beyond computer science into other research areas. It is also frequently used in literature reviews <cit.> and allows for applying more filter criteria to searches, particularly filtering by scientific field. Both platforms are accessible through an API and facilitate the use of an automated retrieval pipeline, which we require to filter our search results efficiently. We retrieved results for a selection of search terms (see <Ref>). While Semantic Scholar is an extensive general knowledge archive, DBLP focuses on in-depth coverage of computer science. By including both major archives, we aim to retrieve an exhaustive set of candidate documents in computer science. §.§ Retrieving Candidate Documents We used media bias terms encountered during our initial manual retrieval step (depicted in <Ref>) as search queries to create candidate lists for our literature review.[Initially, we used more general terms such as “media bias”, “hate speech”, “linguistic bias”, and “racial bias”, which are widely known. We manually identified additional bias concepts in the retrieved publications during our searches depicted in <Ref> and <Ref> and added them to our list of search queries. Subsequently, we searched for these newly identified keywords, creating the media bias keyword list presented in <Ref>.] These terms also served as the basis of the media bias categories we consolidated in our Media Bias Taxonomy in <Ref>. In step 2 (<Ref>), we employed a Python pipeline to retrieve computer science documents from both DBLP and Semantic Scholar, merge and unify the search results, and export them as tabular data.[We have made the crawler publicly available for use in other projects. The code and instructions can be found in our [taxonomyurl]repository.] We scraped a list of 1496 publications from DBLP and 1274 publications from Semantic Scholar for the given time frame. We present the complete list and search keywords in our [taxonomyurl]repository. As shown in <Ref>, we obtained a list of 3140 candidates for the literature review. After removing 531 duplicates between the Semantic Scholar and DBLP results, the final list contained 2609 publications. All search results were tagged with the relevant search queries and exported as a CSV file for the selection step. §.§ Candidate Selection We followed a multi-stage process to select relevant publications, as shown in <Ref>. The figure also shows the number of publications in each step. Three reviewers (Ph.D. students in computer science) filtered the results after the automatic scrape (step 2) and duplicate removal (step 3).
In step 4, they filtered for documents that cover media bias, based on the title, abstract, and text, which resulted in 299 documents.In step 5, one reviewer per paper thoroughly inspected every publication to investigate whether computer science methods were used to detect media bias. For each publication, we exported the used methods and datasets (see <Ref>).In step 6, a second reviewer verified the choice of the first reviewer for each publication.In case of disagreement or uncertainty, the third reviewer was consulted. For each publication, at least two of the three reviewers must deem the publication suitable for our review. The detailed selection criteria for each step are available in our [taxonomyurl]repository.In the end, we selected 96 relevant documents. We assigned each paper to its computer science methods category according to <Ref>. §.§ Finding Additional Conceptual Literature for the Media Bias Taxonomy One goal of our systematic literature review is to develop a taxonomy that organizes the various definitions of media bias into distinct types.However, while conducting our search, we recognized that most computer science publications focus on methodology rather than defining bias types.Therefore, we expanded our search to other research areas that may have different perspectives on media bias.For this purpose, we conducted a second search, as shown in <Ref>, replacing DBLP with Google Scholar to identify more non-computer science research[Google Scholar is also a reliable and diverse database, meeting the criteria recommended in systematic literature review guidelines <cit.>.]. We manually selected papers from the first 50 search results for each keyword on Google Scholar and Semantic Scholar[In this step, we excluded computer science publications in the Semantic Scholar results.] and checked the first layer of their references for additional relevant literature.Overall, the additional search step for non-computer-science publications yielded 867 results, of which 489 were duplicates between Google Scholar and Semantic Scholar.Of the 378 non-duplicate publications, 57 were included in the search for computer science publications.We present the results of our searches in <Ref>[Due to space restrictions we do not cite all of the filtered works in this article but omit publications focusing on highly similar concepts.]. § RELATED LITERATURE REVIEWS Related literature reviews[We considered a publication a literature review if its main focus is a critical summary and evaluation of research about a topic related to media bias.] on media bias are scarce.Our literature crawl and search (<Ref>) yielded only three such results <cit.>.An additional search for the terms “media bias” and “news bias”[We manually examined the first 50 results on Google Scholar.] on Google Scholar did not yield more findings. In their literature review,<cit.> defined sub-categories of media bias from a social science perspective and showed how they emerge during journalistic work.Further, the authors described the advancements in computer science and indicated that frame analysis exists in both social sciences and computer science.In the second work, <cit.> surveyed media profiling approaches.They summarized computer science methods to analyze factuality (i.e., stance and reliability) and various forms of media bias (selection bias, presentation bias, framing bias, and news slant). 
The authors separated four prediction bases for media bias: 1) textual content and linguistic features, 2) multimedia content, 3) audience homophily, and 4) infrastructure characteristics.Lastly, <cit.> surveyed the literature on media bias from a sociological perspective and offered an overview of possible bias measurements. They grouped biases into three kinds of measurement: comparing media outlets with other actors, the intensity of media coverage, and tone. The earlier literature reviews exhibit three major shortcomings.First, both computer science-focused reviews <cit.> lack a systematic literature search. They only covered selected computer science approaches and datasets.Second, <cit.> and <cit.> did not cover the psychological perspective on bias, which we argue is essential to create and evaluate detection methods and datasets <cit.>.Third, no work thus far has provided a detailed overview of the various concepts and subcategories that fall under the umbrella term media bias. Current literature on media bias often addresses related concepts like hate speech, gender bias, and cognitive bias, but uses the umbrella term of media bias without clearly differentiating between overlapping categories and their relationships.To our knowledge, we are the first to offer a large-scale, systematic analysis of the media bias domain.As a result, we provide our Media Bias Taxonomy, which connects the various definitions and concepts in the area.In addition, we briefly summarize the state-of-the-art psychological research on media bias and provide an in-depth overview of all computer science methods currently used to tackle media bias-related issues.Our review focuses exclusively on media bias and does not include publications on related topics such as fake news. For details on fake news and its detection, we recommend referring to the two literature reviews <cit.>.§ RELATED WORK AND THEORETICAL EMBEDDINGThis section will provide an overview of media bias, followed by a presentation and organization of related concepts in our novel Media Bias Taxonomy.§.§ Media BiasMedia bias is a complex concept <cit.> that has been researched at least since the 1950s <cit.>.It describes slanted news coverage or other biased media content <cit.>, which can be intentional, i.e., purposefully express a tendency towards a perspective, ideology, or result <cit.>, or unintentional <cit.>.Different stages of the news production process can introduce various forms of media bias <cit.>.The lack of a precise and unified definition for media bias, sometimes referred to as editorial slant <cit.>, has contributed to the conceptual fragmentation in the field <cit.>. For instance, <cit.> categorized media bias into three primary groups <cit.>: gatekeeping bias, coverage bias, and statement bias. In contrast, <cit.> proposed two types of media bias: ideology bias and spin bias <cit.>. Some scholars referred to media bias as lexical or linguistic bias <cit.>.Others have proposed less specific definitions. For instance, <cit.> described media bias as “slanted news coverage or internal bias reflected in news articles.” <cit.> defined it as news reporting that “leans towards or against a certain person or opinion by making one-sided misleading or unfair judgments,” and <cit.> defined it as reporting “in a prejudiced manner or with a slanted viewpoint.” None of these definitions is based on a comprehensive literature review. 
Therefore, we provide a comprehensive and well-organized description of media bias in <Ref>, which includes its sub-fields and related computer science methods, and discuss the common ground of all media bias concepts in our review in <Ref>. It is worth mentioning that media bias manifests not only via text but also via pictures or text/news layout <cit.>. Moreover, biased reporting in one outlet can also cause biased reporting in other outlets by direct citations <cit.>. Our literature review focuses on text-based media bias and methods only. §.§ The Media Bias Taxonomy As media bias definitions often overlap, a clear distinction between its types is challenging. We propose the Media Bias Taxonomy, depicted in <Ref>, to give a comprehensive overview of the media bias domain. Based on a manual selection after the literature search process, described in <Ref>, we split media bias into four major bias categories: linguistic, cognitive, text-level context, and reporting-level bias, as well as related concepts, which are detailed in the following subsections. We show detailed examples in <Ref> for all subtypes of bias[Other, overarching concepts exist, such as persuasiveness <cit.>, which we do not cover or organize within this work. In future work, we will address concepts containing multiple forms of bias.]. §.§.§ Linguistic Bias Linguistic bias, sometimes called lexical bias <cit.>, refers to a pattern of using certain words that reflects a particular way of thinking about a group or an individual based on their social category. This bias involves a systematic preference for certain words or phrases that may reflect stereotypes or preconceived notions about the group or individual being described <cit.>. In simpler terms, linguistic bias means using language that reflects a particular attitude or viewpoint towards a particular group or individual. We identified five bias types within this category: linguistic intergroup bias <cit.>, framing bias <cit.>, epistemological bias <cit.>, bias by semantic properties <cit.>, and connotation bias <cit.>. <Ref> lists examples for each subcategory. Linguistic Intergroup Bias describes which group members use specific language <cit.>. The concept is based on the linguistic category model (LCM), which categorizes words into different levels of abstraction (action words, interpretive action words, state verbs, and adjectives) according to their purpose <cit.>. The use of biased language is often subtle and reinforces stereotypes <cit.>. <cit.> illustrated linguistic intergroup bias with the following example: * They considered the hypothetical scenario where “Person A is hitting Person B's arm with his fist” <cit.>. * Describing the scenario using the least abstract form of language, one could say, “A is punching B” <cit.>. This entails no kind of valuation or implication and only describes what happened. * In contrast, using the most abstract form of language, one could say “A is aggressive” <cit.>.
This might or might not be accurate and cannot be judged from the fact that A hit B. Framing Bias is defined as the use of “subjective words or phrases linked with a particular point of view” <cit.> to sway the meaning of a statement. The subjective words are often either one-sided terms or subjective intensifiers <cit.>. One-sided terms are words that “reflect only one of the sides of a contentious issue” <cit.>, while subjective intensifiers are adjectives or adverbs that reinforce the meaning of a sentence. Epistemological Bias describes the use of linguistic features that subtly focus on the credibility of a statement <cit.>. Word classes associated with epistemological bias are factive verbs, entailments, assertive verbs, and hedges; see examples in <Ref>. Factive verbs indicate truthfulness; entailments are relations where one word implies the truth of another word. Assertive verbs state clearly and definitely that something is true. Hedges are words used to introduce vagueness to a statement. In contrast to framing bias, epistemological bias is rather subtle and implicit <cit.>. Bias by Semantic Properties describes how word choice affects the framing of content and triggers bias, similar to framing bias and epistemological bias. The difference, however, is that framing and epistemological bias refer to the individual words used, whereas bias by semantic properties refers to how the sentence is structured <cit.>. Connotation Bias refers to using connotations to introduce bias to a statement <cit.>. While the denotation of a word expresses its literal meaning, the connotation refers to a secondary meaning besides the denotation. The connotation is usually linked to certain feelings or emotions associated with a point of view <cit.>. §.§.§ Text-level Context Bias Similar to linguistic bias, text-level context bias refers to the way the context of a text is expressed. Words and statements have the power to alter the article's context, influencing the reader's opinion <cit.>. The types of bias belonging to this category are statement bias <cit.>, phrasing bias <cit.>, and spin bias <cit.>, which consists of omission bias and informational bias <cit.>. <Ref> lists examples for each subcategory. Statement Bias refers to “members of the media interjecting their own opinions into the text” <cit.>, which leads to certain news being reported in a way that is more or less favorable towards a particular position <cit.>. These opinions can be very faint and are expressed “by disproportionately criticizing one side” <cit.> rather than “directly advocating for a preferred [side]” <cit.>. Phrasing Bias is characterized by inflammatory words, i.e., non-neutral language <cit.>. Depending on the context, a word can change from neutral to inflammatory. Therefore, when analyzing bias, the inter-dependencies between words and phrases must be considered <cit.>. Spin Bias describes a form of bias introduced either by leaving out necessary information <cit.> or by adding unnecessary information <cit.>. The underlying motivation is to tell a simple and memorable story <cit.>. Spin bias can be divided into omission and informational bias <cit.>. Omission bias, also known as simplification, is the act of omitting words from a sentence <cit.>. Informational bias, or exaggeration, is defined as adding speculative, tangential, or irrelevant information to a news story <cit.>.
§.§.§ Reporting-level Context Bias Reporting-level context bias subsumes all bias types on the reporting level. While text-level context bias observes bias within an article, reporting-level bias observes the general attention given to specific topics <cit.>. Bias types in this category are selection bias, proximity bias, and coverage bias, which are all closely connected. <Ref> lists examples for each subcategory. Selection Bias (or gatekeeping bias) refers to the selection of content from the body of potential stories by writers and editors <cit.>. Obviously, not all news events can be reported due to the limited resources of newspapers. However, this decision-making process is prone to bias from personal preferences <cit.>. Coverage Bias describes situations in which two or more sides of an issue receive imbalanced amounts of attention, such as pro-life vs. pro-choice statements <cit.>.[Coverage bias refers to a particular event, whereas reporting-level context bias refers to the general attention a topic receives.] The level of attention can be measured either in absolute numbers (e.g., there are more articles discussing pro-life than pro-choice topics), how much space the topics get in a newspaper (e.g., printed on the front page), or as the length of the article (e.g., pro-life articles are longer and receive more in-depth coverage than pro-choice articles) <cit.>. Proximity Bias focuses on cultural similarity and geographic proximity as decisive factors. Newspapers tend to report more frequently and more in-depth on events that happened nearby <cit.>. For instance, the more two countries are culturally similar, the more likely it is that events from one region or country will be reported in the other, and the coverage will be more in-depth <cit.>. §.§.§ Cognitive Bias The processing of media information may also be biased by the reader of an article and the state the reader is in during reading. In this review, we use the term cognitive bias, defined as “a systematic deviation from rationality in judgment or decision-making” <cit.>, to summarize how this processing may be negatively affected. While a failure to detect biased media in a given set of articles may be explained by a lack of ability or motivation (e.g., being inattentive/disinterested, focusing on identity instead of accuracy motives), biased processing of news by the reader is often attributed to a need for a consistent world view and for overcoming dissonances evoked by discordant information <cit.>. In this line of reasoning, repeated exposure and increased familiarity with an argument, as well as source cues for a reputable, world-view-consistent source, may increase the trust in information quality. Selective Exposure. Similar to the selection bias of editors and authors, readers also actively select which articles they read <cit.>. Given this choice, they tend to favor reading information consistent with their views, exacerbating already existent biases through selective exposure to one-sided news reports <cit.>. Additionally, such selective exposure tends to extend to social tie formation. Topic information is solely exchanged among like-minded individuals, a phenomenon often dubbed echo chamber or filter bubble <cit.>[In case an algorithm has been trained on this preference.], hampering unbiased information processing. Partisan Bias. Selective attention to world-view-consistent news has led to research on the effects of political identity.
There, the evaluation of veracity seems dependent on the fit to the reader's party affiliation, a phenomenon dubbed partisan bias <cit.>. Similarly, the hostile media phenomenon (HMP) describes the general observation that members of opposing groups rate a news article as biased against their point of view <cit.>. §.§.§ Related Concepts The last category contains definitions that cannot be exclusively assigned to any other media bias category. Concepts belonging to this category are framing effects <cit.>, hate speech <cit.>, sentiment analysis, and group bias <cit.>, which consists of gender bias <cit.> and religion bias <cit.>. Much research focuses on these concepts, so we introduce them only briefly and refer to other sources for more information. Framing Effects refer to how media discourse is structured into interpretive packages that give meaning to an issue, so-called frames. Frames promote a specific interpretation of the content or highlight certain aspects while overlooking others. In other words, this type subsumes biases resulting from how events and entities are framed in a text <cit.>. Hate Speech is defined as any language expressing hatred towards a targeted group or intended to be derogatory, humiliating, or insulting <cit.>. Often, hateful language is biased <cit.>. The consequences of hate speech in media content are severe, as it reinforces tension between all actors involved <cit.>. Group Bias. We categorize gender bias, racial bias, and religion bias under the umbrella term “group bias,” as they all refer to biased views toward certain groups. Gender Bias is characterized by the dominance of one gender over others in any medium <cit.>, resulting in the under-representation of the less dominant gender and the formation of stereotypes <cit.>. It is associated with selection bias <cit.>, coverage bias <cit.>, and context bias at the text level. For instance, women are quoted more frequently than men for “Lifestyle” or “Healthcare” topics, while men are quoted more frequently in “Business” or “Politics” <cit.>. Linguistic research on gender bias aims to identify gender-specific and gender-neutral words <cit.> and create lexicons of verbs and adjectives based on gender stereotypes <cit.>. Racial Bias and Religion Bias are other types of group bias. Racial bias refers to the systematic disproportionate representation of ethnic groups, often minorities <cit.>, in a specific context <cit.>. Religion, racial, and gender biases can be observed in word embeddings. For example, “Muslim” is spatially close to “terrorist” in some embeddings <cit.>, which may result from biased texts in the data used to derive these embeddings (as word embeddings depend on their input). Group biases can manifest in other forms, such as hate speech, which is a subgroup of biases. Although the distinction between racial and gender biases is not always evident, they can exist independently <cit.>. Sentiment Analysis involves examining text for its emotional content or polarity <cit.>. In the context of media bias, sentiment analysis can detect biases in statements or articles <cit.> and help identify other concepts like hate speech, political ideology, or linguistic bias <cit.>. § COMPUTER SCIENCE RESEARCH ON MEDIA BIAS Computer science research on media bias primarily focuses on methods used to analyze, mitigate, and eliminate bias in texts. Detecting bias is a prerequisite for other applications <cit.>.
Bias detection systems could also be employed to check computer-generated texts for bias. Hereafter, we provide a comprehensive overview of computer science methods used in media bias research in recent years based on a systematic literature review. The methodology of the review is described in <Ref>. A systematic overview of computer science methods is essential for capturing the state of media bias research and identifying research trends and gaps. To the best of our knowledge, this is the most comprehensive survey on media bias detection methods so far, as discussed in <Ref>.<Ref> organizes the findings of our literature review by the year of publication and category of employed computer science method.[We do not report performance measures for most models, as most approaches work on different datasets and tasks, causing the scores to be incomparable. Instead, we summarize our findings on the most promising approaches at the end of this section.] We chose the employed methods as the main categorical property to structure the publications since the methods are typically described in more detail than the type of investigated bias. Our analysis shows that media bias detection methods use approaches ranging from traditional natural language processing (tNLP) methods (e.g., <cit.>) and simple ML techniques (e.g., <cit.>) to complex computer science frameworks that combine different advanced classification approaches (e.g., <cit.>), and graph-learning-based approaches (e.g., <cit.>). Therefore, we introduce the classification depicted in <Ref>.Approaches we classify as tNLP (<Ref>) do not use complex ML techniques and are commonly employed in social sciences (e.g., <cit.>). We categorize the tNLP publications into two groups: first, count-based techniques supported by lexical resources, and second, more sophisticated embedding-based techniques.ML-based approaches (<Ref>) are organized into transformer-based machine learning (tbML), non-transformer-based (ntbML), and non-neural network (nNN)-based (<Ref>) approaches, ordered by the frequency of application in the reviewed literature. Graph-based models represent the third major category presented in <Ref>.<Ref> shows the number of publications per year and category according to our search criteria (cf. <Ref>). An increasing majority of publications use tbML approaches, while the numbers of nNN- and ntbML-based approaches decrease. Although our review does not fully cover 2022, the numbers suggest that these trends continue. §.§ Traditional Natural Language Processing Techniques The tNLP category encompasses all publications that identify media bias using techniques not based on ML or graph-based approaches. We include the term “traditional” in the category name to differentiate it from ML and similar techniques. Moreover, techniques similar to what we label as tNLP have already been employed in computational linguistics as early as the sixties and seventies <cit.>. Frequently, tNLP methods are used as a baseline when introducing new datasets due to their explainability and proven effectiveness (e.g. <cit.>). Furthermore, social sciences are increasingly adopting them because of their accessibility and ease of use <cit.>. Although some approaches leverage ML techniques (e.g., <cit.>), we classify them as tNLP if the main contribution is a non-ML approach. The tNLP methods can be divided into count-based and embedding-based approaches. 
Count-based approaches quantify words and n-grams in the text to analyze bias, while embedding-based approaches are more sophisticated and serve to represent texts for either facilitating comparisons (e.g., <cit.>) or analyzing text associations and inherent biases (e.g., <cit.>).§.§.§ Count-Based Approaches While recent applications of tNLP techniques primarily employ embedding-based methods, simpler count-based approaches are still in use. Count-based approaches most commonly use word counts and a lexicon as a reference to quantify linguistic characteristics and compare texts.<cit.> measured the alignment of texts to authoritarian state media using a count-based methodology that leveraged the LIWC lexicon <cit.> for topical categorization. Similarly, <cit.> applied various count-based techniques to a custom dataset of German news articles and assessed their effectiveness for media bias detection. They reported precision, recall, and F_1 scores for bias and sentiment lexicons, word embeddings, and general TF-IDF measures, evaluating the identification of human-annotated bias in their dataset. A custom bias lexicon yielded the best performance with a low F_1 score of 0.31.<cit.> employed Naive Bayes (NB) decision tree, support vector machine (SVM), and lasso-penalty regression models based on bag-of-word representations to classify politicians' ideological positions and trustworthiness. <cit.> used a count-based approach within an outlier detection framework to identify selection, statement, and coverage bias in political news. <cit.> presented a singular value decomposition (SVD) approach that predicts the newspaper that published an article based on word and n-gram frequencies. Discriminative words and n-grams were derived from a multi-stage (automatic and manual) purging process. The system generates a conditional probability distribution that enables the projection of newspapers and phrases into a left-right bias space.<cit.> used a contingency table showing mention counts and polarity rates for sources (S) and entities (E) within news-related content on Twitter to calculate media bias measures based on definitions for absolute and relative media bias <cit.>. They investigated coverage, selection, and statement bias towards specific topics and entities, and further quantified and compared the number of positive and negative reports from media outlets on Twitter.<cit.> presented their contribution to the ICON2021 Shared Task on Multilingual Gender Biased and Communal Language Identification <cit.>, where the goal is to classify texts as aggressive, gender biased, or communally charged. They used k-nearest neighbors (KNN) and a mixed approach consisting of NB, SVM, random forest (RF), GBM, Adaboost, and a multi-layer perceptron, for classifying texts.[This work employed both tNLP and NN based methods. However, since the majority of the techniques fall into the tNLP category, we discuss it here.]<cit.> presented a study on gender bias in news abstracts using centering resonance analysis based on specifically filtered attribute words. This technique employs rich linguistic features and graph-based techniques.§.§.§ Word Embedding-Based Techniques A second group of tNLP techniques detects media bias by deriving word associations through word embeddings. We exclude publications that investigate bias in pre-trained word embeddings, e.g.,to understand potential biases in systems that use the embeddings, as this analysis does not represent a media bias investigation. 
However, we include work that uses word embeddings as proxies to help understand biases in texts used for training the embeddings. This is typically done by constructing word embeddings based on a collection of texts and investigating associations in these embeddings (e.g., <cit.>). We differentiate between sparse and dense embedding-based techniques. Sparse embeddings, primarily based on TF-IDFs, are mostly used to survey the occurrence of certain words <cit.>. Dense embeddings are employed to examine associations with specific terms <cit.>.Sparse Word Embeddings. <cit.> investigated gender bias in Irish newspapers, examining various discriminative features such as TF-IDFs. Alongside ML techniques, she used count-based tNLP approaches to detect coverage bias towards female politicians. Employing a bag-of-words approach, TF-IDFs, and linguistic labels on word forms, she provided data for classification models and directly detected bias. For instance, she found articles mentioning spouses of female politicians four times more often than male politicians.Dense Word Embeddings. Most word embedding-based techniques in this section use methods similar to the word embedding association test (WEAT) introduced by <cit.>. WEAT investigates bias in the resulting word embeddings trained on a specific text corpus by measuring the cosine similarity between two sets of tokens (e.g., male and female pronouns) and another two sets of tokens, typically topic or stereotype-based words.<cit.> explored various aspects of linguistic and gender bias on Reddit using a technique akin to WEAT, while also examining biases through count-based approaches and sentiment analysis. <cit.> proposed a debiasing strategy using bias-sensitive words as reference, primarily focusing on replacing bias-sensitive words with less sensitive synonyms to debias text datasets. They identified replacement words using word embeddings with different algorithms such as KNN or a centroid function.<cit.> primarily employed embedding-based tNLP techniques to investigate the development of dehumanization towards the LGBTQ community in New York Times articles from 1986 to 2015. <cit.> conducted a study on gender bias in Dutch newspapers between 1950 and 1990, measuring the distance of “three sets of target words” <cit.> to two gender-representative vectors. These vectors were constructed from the average of lists of “gender words”<cit.>, such as “man,” “his,” “father,” and similar terms for the male vector.Similarly, <cit.> used word embedding associations to compare gender bias in Wikipedia and social media texts. <cit.> analyzed implicit associations with word embeddings to detect racial bias, using the term “ethnically stereotyped bias” in their work. <cit.> trained two word embedding models on slanted news corpora: one using left-wing news from HuffPost and another based on right-wing Breitbart news. They employed the Word2Vec Continuous Skip-gram architecture for training and subsequently applied a distance-based technique with their word embeddings to identify strongly biased words, beginning with biased seed words.<cit.> presented a distinct approach to bias detection based on word embeddings. They introduced a method for characterizing documents by identifying the most relevant semantic framing axes (“microframes”) that are overrepresented in the text. They then assessed the extent of bias and activity of a given microframe, ultimately providing a more detailed description of the documents. 
For instance, they might identify that the axis of “depressing” and “cheerful” is central to an article and then analyze the wording that led to this classification <cit.>. <cit.> employed a mix of tNLP techniques based on word embeddings to detect subjectivity bias, utilizing methods such as lexicon translation and document similarity measures. §.§ Machine Learning The following section includes publications that used ML for bias detection. We start by presenting transformer-based models (tbML), which were most frequently applied in the reviewed literature, followed by non-transformer-based models (ntbML), and non-neural network models (nNN). tbML models increased in popularity after the introduction of the transformer architecture in 2017 <cit.>, as shown in <Ref> and <Ref>. Transformers use self-attention to weigh the importance of input data and can be fine-tuned with specific datasets, saving time and resources <cit.>. Their universal architecture captures dependencies across domains but can over-fit in case of limited training data <cit.>. §.§.§ Transformer-Based Models Researchers frequently used tbML to detect linguistic bias or political stance with an encoder-only architecture and bias-specific pre-training. Most often they used BERT or models derived from it, e.g., RoBERTa <cit.>, DistilBERT <cit.>, or ALBERT <cit.>. Several papers compare the performance of BERT-based models with other transformer models, e.g., T5 <cit.>, BART <cit.>, ELECTRA <cit.>, or XLNet <cit.>. BERT-based models were also applied to detect media bias in languages other than English, such as Korean ((Kor)BERT) <cit.> and Indian languages (IndicBERT) <cit.>, or by fine-tuning BERT on African-American English <cit.>. When researchers used an encoder-decoder architecture model like BART, they used the encoder only for the detection task, while the decoder performed the debiasing task <cit.>. BERT-based models often outperformed other transformers for most of the tasks and groups we defined for linguistic bias <cit.>, and for political stance detection <cit.>, which typically associates linguistic bias with specific political stances <cit.>. The prevalent approach in tbML is to create or select bias-specific datasets, fine-tune the most popular models on them, and test the performance of the encoder-only architecture by comparing F_1-scores to baselines of tNLP methods (e.g., <cit.>). To facilitate the evaluation of using different transformers for identifying various media bias types, we structure our review of tbML by the type of bias used in fine-tuning. Linguistic Bias. Most tbML applications focus on detecting linguistic bias. <cit.> detected bias by word choice following a distant supervision approach with BERT. Based on the BABE dataset, BERT outperformed RoBERTa and other ML classifiers in their application. In contrast, <cit.> achieved the best performance on their Us vs. Them dataset with RoBERTa. <cit.> also fine-tuned BERT with a custom dataset and contextual embeddings. In addition, they parsed sentences using a GCN model with an additional layer of bidirectional long short-term memory (LSTM) to exploit structural information. <cit.> proposed a four-phase pipeline consisting of detection (DistilBERT), recognition (RoBERTa), bias masking, and debiasing. The system, fine-tuned on the MBIC dataset <cit.>, detected biased words, masked them, and suggested a set of sentences with new words that are bias-free or less biased. <cit.> detected and automatically transformed inappropriate subjective texts into a more neutral version.
Using a corpus of sentence pairs from Wikipedia edits, their system used BERT as an encoder to identify subjective words as part of the generation process. Political Stance Detection. The second most researched classification problem is political stance detection, an umbrella term closely related to partisan bias (cf. <Ref>) that detects linguistic biases in order to infer the political biases of authors. <cit.> studied the ideology of specific policies under discussion and presented the first diachronic dataset of news articles annotated at the paragraph level by trained political scientists and linguists. Their fine-tuned BERT model performed best. <cit.> integrated audio, video, metadata, and subtitles in their multimodal dataset. In addition to the text analysis with BERT, their application included metadata and audio data through openSMILE[<https://www.audeering.com/de/research/opensmile/>], resulting in the highest accuracy. <cit.> presented a manually annotated dataset focusing on linguistic bias in news articles. Based on their dataset, in addition to several BERT-based classification approaches, they used a 2-layer bidirectional LSTM for ideology prediction, which was outperformed by all transformer-based systems. Framing Bias. <cit.> used BERT with tweet embeddings, fine-tuned on the All The News dataset[<https://www.kaggle.com/datasets/snapcrack/all-the-news>], and an intensity score for moral frames classification based on the moral foundation theory[Moral foundation theory explains moral differences across cultures. For more information, see the original work by <cit.>.]. <cit.> proposed a similar BERT-based method for conducting sociological frame analysis to detect framing bias. <cit.> proposed a system for framing bias detection and neutral summary generation from multiple news headlines of varying political leanings to facilitate balanced and unbiased news reading. They performed multi-document summarization, multi-task learning with two tasks, and based their work on BART. Spin/Informational Bias. <cit.> investigated lexical and informational bias with BERT on their BASIL dataset, which others also used in their research <cit.>. <cit.> fine-tuned RoBERTa as a context-inclusive model, exploring neighboring sentences, the full article, articles on the same event from other news publishers, and articles from the same domain. Their model is domain-and-task-adapted for informational bias detection on the BASIL corpus. They reported that integrating event context improved classification performance. Racial/Group Bias. For group bias detection, <cit.> presented DEPEN, which employs a fine-tuned BERT model to detect biased writing styles. Subsequently, they used BART to debias and rewrite these detected sentences. Sentiment Analysis. We exclude general sentiment analysis but include publications that leveraged sentiment analysis for linguistic bias detection as a stand-in for political stance detection (cf. <Ref>). <cit.> investigated populist mindsets, social groups, and related typical emotions using RoBERTa fine-tuned on their populist attitude dataset Us vs. Them. <cit.> utilized BERT in aspect-level sentiment classification, achieving promising performances on three public sentiment datasets[The datasets include restaurant and laptop reviews, and tweets <cit.>.]. They showed that incorporating target information is crucial for BERT's performance improvement.
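Most of the tbML systems above share the same sequence-classification recipe, regardless of whether the labels encode bias categories, political stances, or sentiment: a pre-trained encoder is fine-tuned on a labeled sentence dataset and evaluated via F_1 scores. Purely as an illustration (the file names, column names, and hyperparameters below are hypothetical and do not reproduce any cited system), such a fine-tuning run can be set up with the Hugging Face transformers and datasets libraries roughly as follows:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# hypothetical CSV files with the columns "sentence" and "label" (0 = unbiased, 1 = biased)
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(batch):
    # truncate and pad sentences to a fixed length for batching
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

data = data.map(encode, batched=True).rename_column("label", "labels")

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="bias-bert", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())

Swapping "bert-base-uncased" for RoBERTa, DistilBERT, or a multilingual checkpoint would correspond to the kinds of model comparisons reported in the studies above.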
<cit.> applied target-dependent sentiment classification (TSC) with BERT, RoBERTa, XLNet, and a BiGRU. They proposed a classifier, GRU-TSC, that incorporated contextual embeddings of the sentences and representations of external knowledge sources. Unreliable News Detection. <cit.> used RoBERTa to detect unreliable news—a task that overlaps with media bias detection. Further, they proposed ways to minimize selection bias when creating datasets by including a simple model as a difficulty/bias probe. They also suggested that future model development use a clean, non-overlapping site and date split <cit.>. §.§.§ Non-Transformer-Based Models This section presents publications that use non-transformer-based machine learning for media bias detection, categorized by the type of detected bias. Most commonly, ntbML methods are used to detect media bias at the document level, e.g., hyperpartisanship and political stance. Despite the homogeneity of detected biases, publications using ntbML evaluate numerous aspects of the identification methodology, including training data <cit.>, word embeddings <cit.>, and pseudo-labeling <cit.>. Linguistic/Text-Level Bias. The detection of hyperpartisanship[Hyperpartisanship is not to be confused with partisan bias as described in <Ref>. It describes one-sidedness that can manifest in a range of biases <cit.>.] is the most common application of ntbML. The task's popularity is partly due to the SemEval 2019 hyperpartisan news detection task <cit.> and the associated dataset, which inspired many publications. Hyperpartisanship is defined as non-neutral news reporting <cit.>, which can be described as a combination of linguistic and text-level biases on a document level. The approach of <cit.> performed best in the task. It leveraged a convolutional neural network (CNN) along with batch normalization and ELMo embeddings. In a follow-up study, <cit.> incorporated Latent Dirichlet Allocation (LDA) distributions with different approaches to hyperpartisan news detection. They implemented multiple methods, such as a CNN, a recurrent neural network (RNN), a transformer encoder approach, and a hierarchical attention network (HAN) with and without LDA topic modeling. Their results suggested that, in most cases, LDA topic modeling improves the effectiveness of the methods, and hierarchical models outperform non-hierarchical models. <cit.> presented another study based on the SemEval 2019 hyperpartisan news detection task. They focused on decomposing pre-trained embeddings into separate denotation and connotation spaces to identify biased words descriptively. Although their primary goal was to improve the embeddings' reflection of the implied meaning of words, they showed how the discrepancy between the denotation space and the pre-trained embeddings reflects partisanship <cit.>. <cit.> used different ML approaches (e.g., RNN, CNN, bidirectional LSTM/GRU, and the attention-based approaches AttnBL, HAN) trained on the SemEval 2019 dataset. They evaluated the effects of attention mechanisms and of embeddings at different granularities (tokens and sentences) on the effectiveness of the models. <cit.> focused on introducing methods for generating additional data. They presented two approaches for pseudo-labeling (overlap-checking and meta-learning) and introduced a system detecting media bias using sentence representations from averaged word embeddings generated from a pre-trained ELMo model and batch normalization.
The same authors also employed an ELMo-based classifier and a data augmentation method using pseudo-labeling <cit.>. Political Stance Detection. <cit.> trained two models based on LSTM and BERT for classifying news texts as left-wing, center, or right-wing. Their main contribution is the evaluation of techniques for eliminating the effects of outlet-specific language characteristics (here: political ideology expressed by linguistic bias) from the training process. They used adversarial adaptation and triplet loss pre-training for removing linguistic characteristics from the training data. Further, they incorporated news outlets' Wikipedia articles and the bios of their Twitter followers in the training process to reduce the effects of outlet-specific language characteristics. While a transformer-based classification outperformed the LSTM model, the techniques for improving training effectiveness improved both models' classification results.[Since transformers are not the paper's focus, we discuss it here.] As part of their political stance detection approach, <cit.> proposed a headline attention network approach to bias detection in Telugu news articles. It leveraged a bidirectional LSTM attention mechanism to identify key parts of the articles based on their headlines, which were then used to detect bias toward political stances. They compared the results of their approach with NB, SVM, and CNN approaches, all of which the headline attention network outperformed. To depolarize political news articles, <cit.> mapped Italian social media users into a 2D space. Their solution initially leveraged an NN for learning latent user representations. Then, they forwarded these representations to a UMAP <cit.> model to project and position users in a latent political ideology space, allowing them to leverage properties of the ideology space to infer the political leaning of every user via clustering. Gender/Group Bias. <cit.> presented an unsupervised approach for identifying gender bias in Facebook comments. They used a bidirectional LSTM to predict the gender of the addressee of Facebook comments and, in doing so, identify gender biases in these comments. <cit.> introduced HateXplain, a dataset on hate speech and gender bias that includes expert labels on the target community towards which the hate speech is aimed. They further included labels of words annotators identified as bias-inducing. They evaluated the effects of including the rationale labels in the training process of a BiRNN and a BERT model on the models' bias detection capabilities. Including the rationale labels increased the bias classification performance for both models. §.§.§ Non-Neural Network Machine Learning Techniques Besides state-of-the-art approaches using tbML or deep learning techniques, other (nNN) ML approaches are still widely used for bias detection. Many employ LDA, SVM, or regression models, but a wide range of models is usually used and compared. These models are particularly common in papers presenting new datasets, as they can be seen as a solid and widely known baseline for the quality of labels within a dataset. Based on the MBIC dataset, <cit.> presented a traditional feature-based bias classifier. They evaluated various models (e.g., LDA, logistic regression (LR), XGBoost, and others), trained with features such as a bias lexicon, sentiment values, and linguistic word characteristics (such as boosters or attitude markers <cit.>).
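A minimal sketch of such a feature-based baseline, here reduced to lexical TF-IDF features and a logistic regression classifier (the toy sentences and labels are invented for illustration, and bias-lexicon matches or sentiment scores would be appended as further feature columns in a setup like the one just described), could look as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy data: sentences with binary bias labels (1 = biased, 0 = neutral)
sentences = ["the senator bravely defended the controversial law",
             "the committee met on tuesday to discuss the draft"]
labels = [1, 0]

# word and bigram TF-IDF features feeding a linear classifier
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(sentences, labels)
print(classifier.predict(["the ruthless regime crushed all dissent"]))

Such baselines remain popular in dataset papers partly because the learned feature weights are easy to inspect when checking the plausibility of new labels.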
<cit.> contributed a dataset of personalized news. Furthermore, she used a range of classifiers (Ridge classifier, nearest centroid, SVM with SGD, NB) for political affiliation detection. <cit.> investigated coverage and gender bias in their dataset of Canadian news articles. They employed LDA topic modeling to detect biased topic distributions for articles that contain predominantly male or female sources. <cit.> presented a dataset of 200 unbiased and 850 biased articles written in Telugu. They used NB (Bernoulli and multinomial), LR, SVM, RF, and MLP classifiers to evaluate the effectiveness of adding presuppositions as model input. <cit.> researched framing effects in news articles using their proposed dataset. They trained an SVM classifier to detect and classify moral framing and compared it to a baseline lexicon-based natural language processing approach, investigating moral framing aspects such as authority, betrayal, care, cheating, etc. <cit.> explored various biases that can occur while constructing a media bias dataset. Part of their work examined the correlation between the political stance of news articles and the political stances of their media outlets. To evaluate this correlation, they compared multinomial NB, SVM, LR, and RF models using ground-truth labels. Several other publications described the application of nNN ML approaches in addition to other ML techniques for data evaluation <cit.>. We have already mentioned these in <Ref> and <Ref>. <cit.> presented a multi-task ordinal regression framework for simultaneously classifying political stance and trustworthiness at different Likert scales. This approach is based on the assumption that the two phenomena are intertwined. They employed a copula ordinal regression along with a range of features derived from their previous work, including complexity and morality labels, linguistic features, and sentiment scores. <cit.> presented an additional[We mention multiple models for the task within <Ref>.] model for the SemEval 2019 hyperpartisan news detection task <cit.>. They used a linear SVM with VADER sentiment scores as a feature, relying exclusively on the intensity of negative sentiment to derive the political stances expressed in texts. With an F_1 score of 0.694, their approach failed to match the other competitors in the task. In addition to a FastText classifier, the approach presented by <cit.> included a manual selection of training data containing examples of media bias. Aside from contributing to a new media bias dataset and evaluating the effect of expert and non-expert annotators, they presented a curriculum learning approach for media bias detection. They concluded that high-quality expert-labeled data improves the performance of the model. §.§ Graph-Based The research described in this section leverages graph data structures to analyze online social networks through their users and text interactions, which requires a distinctive set of methods for bias analysis. Although most publications used ML, we treat them separately due to the unique characteristics of the analyzed data representations. Graph-based approaches are primarily used to investigate framing bias, echo chambers, and political stances. Therefore, we structure our overview of corresponding publications by the type of bias they investigate. Framing Bias. The SLAP4SLIP framework <cit.> detects how concepts are discussed in different parts of a social network with predefined linguistic features, graph NN, and structured sparsity.
The authors exploit the network structure of discussion forums on Reddit without explicitly labeled data and minimally supervised features representing ideologically driven agenda setting and framing. Training graph auto-encoders, <cit.> modeled agenda setting, and framing for identifying ideological polarization within network structures of online discussion forums. They modeled polarization along the dimensions of salience and framing. Further, they proposed MultiCTX (Multi-level ConTeXt), a model consisting of contrastive learning and sentence graph attention networks to encode different levels of context, i.e., neighborhood context of adjacent sentences, article context, and event context.<cit.> built on the SLAP4SLIP framework <cit.> to detect informational bias and ideological radicalization by combining contrastive learning and sentential graph networks. Similarly, <cit.> proposed a framework for identifying bias in news sources. The authors used BERT Base for aspect-based sentiment analysis and assigned a bias score to each source with a graph-based algorithm.Echo Chambers. <cit.> applied community detection strategies and modeled a COVID-19-related conversation graph to detect echo chambers. Their method considered the relationship between individuals and the semantic aspects of their shared content on Twitter. By partitioning four different representations of a graph (i.e., topology-based, sentiment-based, topic-based, and hybrid) with the METIS algorithm[As proposed by <cit.>.], followed up by qualitative methods, they assessed both the relationships connecting individuals and semantic aspects related to the content they share over Twitter.They also analyzed the controversy and homogeneity among the different polarized groups obtained. Political Stance Detection. Stance detection[We defined stance detection as political bias detection via the identification of linguistic biases, compare <Ref>.] is a typical application of graph-based classification techniques.<cit.> combined network structure learning analysis and NN to predict the political stance of news media outlets. With their semi-supervised network embedding approach, the authors built a training corpus on network information, including macro- and micro-network views. They primarily employed network embedding learning and graph-based label propagation to overcome label sparsity. By integrating graph embeddings as a feature, <cit.> detected the stance and political stance of Twitter users and online media by leveraging their retweet behavior. They used a user-to-hashtag graph and a user-to-mention graph and then ran node2vec. They achieved the best result for combining BERT with valence scores[A valence score <cit.> close to zero reflects that an influencer is cited evenly among different groups in a network.Conversely, a score close to -1 or 1 indicates that one group disproportionately cites an influencer compared to another group.In their paper, <cit.> indicated that valence scores are essential in identifying media bias in social networks.]. <cit.> analyzed news stories and political opinions shared on Brazilian Facebook. They proposed a graph-based semi-supervised learning approach to classify Facebook pages as politically left or right. Utilizing audience interaction information by inferring self-reported political leaning from Facebook pages, <cit.> built an interest graph to determine the stance of media outlets and public figures. 
The authors achieved the best results for label propagation with a spectral graph transducer. <cit.> captured social context with a neural architecture for representing relational information with graph-based representations and a graph convolutional network. They showed that using social information, such as Twitter users who have shared the article, can significantly improve performance with distant and direct supervision.

§.§ Bias in Language Models

Detecting bias inherent to language models is an important research area due to the models' popularity for many NLP tasks. Researchers have investigated bias in texts and other media generated by language models as well as in classification performed with language models. We did not include publications that address these forms of bias.[We focus exclusively on detection methods; the field of bias in language models is extensive enough for a dedicated literature review.] However, we would like to give some examples to raise awareness of biased language models. <cit.> analyzed stereotypical bias with the crowdsourced dataset StereoSet in BERT, GPT-2, RoBERTa, and XLNet, concluding that all models exhibit strong stereotypical bias. <cit.> used causal mediation analysis to analyze gender bias in language models. Their results showed that gender bias effects exist in specific components of language models. <cit.> also analyzed gender bias within BERT layers and concluded that the layers are generally biased. In <cit.>, the authors detected bias in texts generated by GPT-2 and discussed means of mitigating gender bias in language models by using a reinforcement learning framework.

§.§ Datasets

During our review, we collected both methods and datasets from the publications we selected for inclusion. In total, we found 123 datasets. We categorize the datasets according to the concepts proposed in our Media Bias Taxonomy, similar to the discussion of methodologies as shown in <Ref>. We added the category General Linguistic Bias as several datasets do not define the subcategory of bias they contain. We did not evaluate the quality of the datasets as they address distinct tasks and objectives but leave this assessment for future work (cf. <Ref>). Only two of the 123 datasets include information on the background of annotators. Moreover, dataset sizes are generally small; only 21 of the 123 datasets contain more than 30,000 annotations. We believe that the use of multiple datasets is promising for future work, as we discuss in <Ref>. As part of this review, we present the datasets, their statistics, and tasks merely as a starting point for future work, without further assessment. We give a detailed overview of publications, sizes, availability, tasks, type of label, link, and publication summary for each dataset in our [taxonomyurl] repository.

§ HUMAN-CENTERED RESEARCH ON MEDIA BIAS

Human-centered research on media bias aims to understand why people perceive media as biased, explore the societal and digital consequences, and develop strategies to overcome biased perception and detect media bias. Debates on all these factors are ongoing and experimental effects tend to be minor. Hereafter, we highlight some of these debates.

§.§ Reasons for biased media perception

One explanation for the emergence of cognitive biases in media perception is that information is processed in light of prior expectations, which may be distorted <cit.>. The veracity of claims is often judged based on familiarity, potentially resulting in illusory truths <cit.>.
Cognitive dissonance theory posits that people experience discomfort when confronted with information inconsistent with their convictions, motivating them to discount it <cit.>.Extending this notion to groups, <cit.> suggested in their social identity and categorization theory that basic self-esteem is derived from personal affiliation with positively-connotated groups. This results in in-group favoritism, out-group derogation <cit.>, and behavior and information processing in line with group identity. People easily regard reports that negatively affect groups they strongly identify with as a personal threat to their self-esteem and devalue these reports <cit.>. Furthermore, <cit.> posited that when people self-categorize with a specific group, they evaluate the validity of arguments by congruence to in-group norms and in-group consensus. This pattern aligns with empirical findings showing that news acceptance depends on group identification and congruent group membership cues of the news source <cit.>.Generally, prior works expect selective exposure to media to be consistent with previous viewpoints <cit.>, further strengthening prior convictions. Such behaviour can be referred to as confirmation bias <cit.> through repeated exposure <cit.>. In the age of social media and the abundance of information available, these cognitive biases may further allow for confrontation only with attitude-consistent information and like-minded individuals in echo chambers <cit.>. Moreover, algorithms trained on these biases may further limit the available media spectrum in filter bubbles <cit.>.Consequently, limited exposure to alternative viewpoints may also impact the perception of social norms and the prevalence of opinions. The overestimation of the frequency of one's own position, known as the false consensus effect <cit.>, has been widely documented even before the introduction of social media and may be partially due to identity motivations explained earlier <cit.>. However, when echo chambers are used to gauge the frequency of opinions and social norms, even larger shifts between groups are expected <cit.>. This feeds into a vicious circle of polarizing group norms, discounting information inconsistent with these shifted norms, and feeling encouraged to voice even more extreme positions (e.g., <cit.>). These mechanisms lead to expectations that media perception is polarized based on social categories and prior beliefs and that the introduction of social media has exacerbated this phenomenon. §.§ Consequences of biased media perceptionPartisan individuals tend to select media that aligns with their prior beliefs and political attitudes, a phenomenon known as the Friendly Media Phenomenon (FMP) <cit.>. This tendency may be partially due to interpersonal communication among like-minded individuals <cit.>. People also tend to assess the veracity of information based on its fit with their political convictions, exhibiting partisan bias <cit.>.Biased media perception can lead to the Hostile Media Phenomenon (HMP), where people perceive media coverage as biased against their side, regardless of the actual political position of the article <cit.>. This effect increases with the extremity of party affiliation and is primarily due to the derogation of dissenting media <cit.>, making it a cognitive bias rather than a characteristic of the media landscape. 
Discussions and feedback from like-minded individuals can further amplify the HMP, leading to the perception of general media bias even when primarily exposed to self-selected, like-minded media <cit.>.Methodologically, the HMP, FMP, and partisan bias complicate the assessment of media bias, as raters' perceptions of bias may reflect more on individual affiliations and idiosyncrasies than the objective properties of the rated article <cit.>. Subjective bias ratings are relative to their social context; their quality as a scientific measure of media bias depends on the representativeness of raters. Therefore, such ratings should be supplemented by objective bipartisan bias criteria (e.g., language biases). Socially, the HMP can lead to the mobilization of more extreme positions, distrust in the social system, and, in cases of low efficacy beliefs, political withdrawal <cit.>. Both the HMP and FMP can contribute to increased political segmentation and polarization, which can negatively impact political communication and interaction, essential for a peaceful and democratic society <cit.>. Exposure to certain media can also have social consequences, such as altered political participation <cit.>. For example, <cit.> found that exposure to congruent media is tied to biased perceptions of the opinion climate, influencing how participants communicate their political beliefs and engage in politically meaningful acts, while incongruent exposure has little effect. The role of the social media environment in this process is somewhat disputed: While selective exposure in social media is widely documented <cit.>, some authors argue that social media is not the main contributor to the variety of media diets globally. For example, <cit.> deem its general impact negligible and suggest it may expose users to more diverse information compared to traditional media. According to <cit.>, people may even cope with this high-choice media environment by developing strategies like verifying news in different outlets, and—even though social networks are polarized—only a subset of the population regards itself as susceptible to echo chambers. After all, the phenomena and underlying cognitive processes were known before the advent of social media. The effects observed in social media may just be more visible to researchers than they were before <cit.>. In addition, exposure to biased media may not be sufficient to significantly affect attitudes <cit.>. As such, it is challenging to determine the overall effect of social media on biased media perception and social consequences today, though some feedback loops can be expected <cit.>. This problem is even more pressing for algorithmic filtering than for personal selections, as the algorithms involved are not transparently disclosed, their application is in flux, and they are not accessible to the user <cit.>. This fact illustrates that parts of the conclusion on the impact of social media on media bias phenomena are also driven by the selection of media and the assessment method of the effects. §.§ Recipient-oriented approaches to reduce media bias Given that selective media exposure partially explains cognitive media bias phenomena, one intervention approach is to encourage and facilitate a diverse media diet to reduce media bias <cit.>. 
This can be achieved by plug-ins that actively diversify the media displayed in a search by identifying the topic and sampling other articles or information related to it <cit.>, or by providing media based on another individual's platform history <cit.>. In a similar approach, <cit.> used a browser widget to provide feedback on the balance of a user's media diet, successfully encouraging these users to explore more media from centrist and opposing viewpoints.Other experiments and observations of counter-attitudinal exposure illustrate that the mere presentation and reception of opposing viewpoints do not always decrease the HMP and may even exacerbate the problem. For instance, <cit.> found that people who were incidentally exposed to counter-attitudinal information are more likely to subsequently select information that aligns with their attitudes. Other studies found that exposure to incongruent comments increases the perception of bias and decreases the perception of the credibility of a later, neutral news report <cit.>, and that exposure to opposing tweets may backfire and intensify political polarization, particularly for Republicans <cit.>. These findings are consistent with the notion of motivated reasoning, as the potential threat of backfiring from inconsistent exposure—though rather dependent on the specific materials to which readers are exposed <cit.>—may be explained by the threat of the presented material to the reader's identity. As a result, diverse exposure with well-crafted materials may help but is not a comprehensive solution for the HMP, FMP, and biased media perception.As an alternative, some studies have attempted to alter the user's mindset during news processing and shift the attentional focus to aspects of a user's self-identity that are not challenged by the news report. For example, inducing self-affirming thoughts aimed at mitigating the potentially self-threatening aspect of belief-inconsistent arguments has been shown to successfully evoke more unbiased processing of such information <cit.>. Similarly, focusing readers' attention on a value that may be threatened by information increases their perception of media bias in that article <cit.>. Likewise, people seem more open to sharing and are better at judging news headlines based on their veracity when nudged to think about their own accuracy instead of their identity motives <cit.>. Opening the mindset may thus be an effective, albeit situational, approach when tackling phenomena such as the HMP and media bias detection during exposure to attitude-inconsistent materials.As an additional step, forewarning messages that draw attention to biased media and potential influencing attempts can help “inoculate” against this media by provoking reactance towards manipulations <cit.>. Exposing individuals to examples of media bias through such messages may teach them to detect and cope with it. In this vein, various forms of training have been tested and generally increase a reader's ability to identify biased media and distinguish it from congruency with one's political stance <cit.>. This detailed training is necessary, as mere awareness of media bias as part of general news media literacy may not be sufficient for a balanced media diet <cit.>.Overall, all approaches have yielded relatively small effects on improving media bias detection, and more research on effective interventions is necessary. 
Regarding partisan bias, there is some indication that interventions are not equally effective in reducing the bias for liberals and conservatives, potentially inadvertently biasing the overall discourse on media towards the less open-minded faction <cit.>. Thus, further testing of the effectiveness of approaches in reducing partisan media perception and the HMP is warranted.

§ DISCUSSION

To address RQ1, we have established a Media Bias Taxonomy that allows the various sub-concepts related to media bias to be precisely categorized <cit.>. We emphasize the complexity of media bias and note that researchers often fail to clearly define the type of media bias they investigate, which leads to confusion when comparing different studies. Furthermore, existing literature reviews on the topic do not address the various media bias concepts <cit.>, making it difficult to understand problems and solutions across different approaches. Our Media Bias Taxonomy is a crucial first step in establishing a common ground for more clearly defined media bias research. We divide media bias into five major categories: linguistic bias, cognitive bias, text-level context bias, reporting-level bias, and related concepts. We provide subgroups for each of these categories. Throughout the creation of our taxonomy, we engaged in frequent discussions and revised our definitions and structure multiple times, revealing the numerous options available for defining media bias. While our taxonomy provides a practical foundation and effective starting point for research in the domain, future research should critically re-examine the discussed concepts. We believe that the main common ground among the various types of media bias we identified is smaller than that of existing universal definitions (see <Ref>) and primarily refers to one-sided media content.

To answer RQ2 and RQ3, we provided an extensive overview of recently published literature on computer science methods and datasets for media bias detection. We manually inspected over 1,528 computer science research papers on the topic published between 2019 and May 2022 after automatically filtering over 100,000 keyword-related publications. Our review reveals valuable insights into best practices and trends in the research field. In recent years, transformers have quickly become the most frequently used and most reliable method for media bias detection and debiasing <cit.>. Platforms like Hugging Face facilitate the implementation of the models and their adaptation to various tasks <cit.>. However, as we show in <Ref>, the new models have not yet made their way into all subtypes of bias, leaving room for future experiments. Additionally, available media bias classifiers are largely based on small in-domain datasets. Recent advancements in natural language processing, especially transformer-based models, demonstrate how accurate results can be achieved by unsupervised or supervised training on massive text corpora <cit.> and by model pre-training using inter- and cross-domain datasets <cit.>. Although graph-based methods are not as popular as transformers, their application to media bias detection is increasing but mostly limited to analyzing social network content, activities, and structures, and identifying structural political stances within these entities <cit.>. Transformer-based approaches cannot accomplish such an analysis due to the network properties of the explored data. Established methods still play a role in media bias detection.
Traditional natural language processing approaches, as well as non-transformer-based (deep NN) machine learning models, are simpler and more explainable compared to language-model-based approaches, making them advantageous in applications where transparency of classification decisions is critical (e.g., <cit.>). Since traditional approaches have been used in many media bias identification tasks, they often serve as baselines for comparing new (transformer-based) approaches. Given their higher explainability and long-term testing, we do not expect language models to completely replace other approaches soon. Apart from these major trends, incorporating information on spreading behavior, social information <cit.>, and metadata <cit.>, as well as examining the vector spaces of word embeddings <cit.>, also shows promise for improving classifier performance in media bias detection.

We addressed RQ4 by reviewing social science research on media bias. One significant takeaway is that media bias datasets largely ignore insights from social science research on the topic, leading to low annotator agreement and less accurate annotations <cit.>. The perception of bias depends on factors beyond content, such as the reader's background and understanding of the text. Moreover, limited exposure to alternative viewpoints can impact how social norms and opinions are perceived. These insights have never been fully integrated into automated detection methods or datasets. Integrating bias perception research in language models is a promising way to improve annotation-based detection systems <cit.>, which can potentially be achieved by further developing standardized questions within the domain <cit.>. We see a need to develop further methods to increase news consumers' bias awareness and believe that computer science methods, as described in this review, can be a powerful means of building such awareness-increasing tools. While some tools already exist, none have been applied on a larger scale in a real-world scenario, which is a promising direction for future research.

Our literature review also exhibits limitations. First, we excluded work from areas other than media bias due to the high number of publications involved, potentially leaving out valuable contributions. Investigating promising concepts from other areas will be necessary for future work. Second, for all computer science methods, we only included literature from 2019 to 2022, excluding valuable earlier research. Analyzing a longer period could yield an even more complete picture of the research domain. Lastly, although we distinguish several categories within our Media Bias Taxonomy, the concepts related to media bias still overlap and appear concurrently. We believe that future work should further discuss and adapt the taxonomy. Although the taxonomy we present is merely a starting point to connect works in the area, we believe it can benefit future approaches by raising awareness of concepts, methods, and datasets in the research domain. During the writing of this literature review, the taxonomy's outline frequently changed in ongoing discussions among the authors.

§ CONCLUSION

In 2018, <cit.> concluded that (1) powerful computer science methods (such as word embeddings and deep learning) had not yet made their way into the automated detection of media bias and that (2) the interdisciplinarity of media bias research should be improved in the future.
The authors suggested (3) that approaches in computer science did not account for bias having many different forms and usually only focused on narrow bias definitions <cit.>. Our literature review reveals that two of these propositions (1 and 3) have been addressed to some extent, but there is still considerable room for improvement. Transformer- and graph-based methods have led to significant increases in the performance of automated methods for detecting media bias, and numerous types of bias have received research attention. However, these concepts are primarily used and analyzed individually, with knowledge overlaps between them remaining unexplored <cit.>. Recent modeling techniques, such as multi-task learning, enable the use of related datasets to improve classification performance <cit.>. Regarding (2), datasets and systems still exhibit limited conceptual work, with the cognitive dimension of media bias rarely mentioned in computer science research. Our literature review aims to provide a foundation for increased awareness of bias in media bias datasets (through standardized annotator background assessments), enhanced interdisciplinarity in the research domain (which we believe is particularly relevant since reasonable classifications cannot exist without clear conceptualizations), and future computer science methods. We are confident that this review will facilitate entry into media bias research and help experienced researchers identify related works. We hope that our findings will contribute to the development of more effective and efficient media bias detection methods and systems to increase media bias awareness. Finally, we plan to repeat our workflow in three years to reassess the state of the research domain.

We thank Elisabeth Richter, Felix Blochwitz, Jerome Wassmuth, Sudharsana Kannan, and Jelena Mitrović for supporting this project through fruitful discussions. We are grateful for the financial support of this project provided by the Hanns-Seidel Foundation, the DAAD (German Academic Exchange Service), the Lower Saxony Ministry of Science and Culture, and the VW Foundation.
http://arxiv.org/abs/2312.16148v3
{ "authors": [ "Timo Spinde", "Smi Hinterreiter", "Fabian Haak", "Terry Ruas", "Helge Giese", "Norman Meuschke", "Bela Gipp" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231226181352", "title": "The Media Bias Taxonomy: A Systematic Literature Review on the Forms and Automated Detection of Media Bias" }
[email protected], [email protected]
Department of Physics, BITS Pilani K K Birla Goa Campus, Zuarinagar 403726, Goa, India
[email protected]
Department of Physics, BITS Pilani K K Birla Goa Campus, Zuarinagar 403726, Goa, India

An information engine based on a two level system in contact with a thermal reservoir is studied analytically. The model incorporates a delay time between the measurement of the state of the system and the feedback. The engine efficiency and work extracted per cycle are studied as a function of delay time and energy spacing between the two levels. It is found that the range of delay time over which one can extract work from the information engine increases with temperature. For delay times comparable to the relaxation time, efficiency and work per cycle are both maximum when k_B T ≈ 2 U_0, the energy difference between the levels. The generalized Jarzynski equality and the generalized integral fluctuation theorem are explicitly verified for the model. The results from the model are compared with the simulation results for a feedback engine based on a particle moving in a 1D square potential. The variation of efficiency, work per cycle and efficacy with the delay time is compared using the relaxation time in the two state model as the fitting parameter and leads to a good fit.

Information engine with feedback delay based on a two level system
Toby Joseph 0000-0001-6682-9223
January 14, 2024
==================================================================

§ INTRODUCTION

An information engine uses the information gained from a measurement of the system to extract work from thermal fluctuations <cit.>. Historically, it was Maxwell who first suggested a thought experiment involving a demon whose measurements of molecular velocities can be used to transfer heat from a cold to a hot body, thus violating the second law of thermodynamics. It is now generally accepted that there is no violation of the second law if one considers the entropy generation associated with the erasure of memory involved in the measurement process <cit.>. For alternative views, see the following references <cit.>. Recently, the information engine, or Maxwell's demon as it is usually referred to, has been implemented experimentally in a variety of systems, both classical <cit.> and quantum <cit.>.

Stochastic thermodynamics deals with the thermodynamics of small systems where fluctuations dominate <cit.>. Several fluctuation theorems have offered valuable insights into the production of entropy and the statistical connections between work and free energy for systems operating significantly beyond equilibrium <cit.>. These results in stochastic thermodynamics have been generalized to the cases when there is measurement and feedback during the process <cit.>. For example, the Jarzynski relation, which connects the fluctuations in work, W, during a non-equilibrium process to the free energy difference, Δ F, between the final and initial equilibrium states <cit.>, has been modified to a form, <e^β(Δ F - W)> = γ, where β is the inverse temperature and the angular brackets represent the average over multiple trajectories of the system starting from the equilibrium distribution. This relation, known as the Generalized Jarzynski Relation (GJR), is valid for non-equilibrium processes incorporating a feedback mechanism. The right hand side of the GJR, γ (referred to as efficacy), is the sum of the probabilities of observing the time reversed trajectories in the time reversed protocols for all possible protocols. γ is a measure of the reversibility of the process.
The largest value that γ can take is the number of outcomes in the measurement process and is attained for a fully reversible process. In the absence of feedback, γ = 1 and the GJR reduces to the usual Jarzynski relation <cit.>. The experimental verification of the GJR has been done for a few systems <cit.>. In the case of processes involving precise measurements (error-free) and feedback mechanisms, we can establish a Generalized Integral Fluctuation Theorem (GIFT) expressed as <e^[β(Δ F - W) - I + I_u]> = 1. Here, I represents the information acquired during the measurement process, and I_u is the unavailable information to be determined through the time-reversed process <cit.>.

In the context of an information engine, efficiency quantifies the degree of conversion of information to work. High efficiency requires a slow process and thus compromises on power. Thus it is important to tune the engine parameters such that efficiency and power are as required. Many recent works have investigated methods to enhance the efficiency and power of information engines, both in experiments <cit.> and in theoretical studies <cit.>. Information engines based on a colloidal particle moving through a harmonic potential <cit.> as well as periodic potentials <cit.> have been studied. These studies look into the possibility of extracting work or converting the information about the position of the particle into work with the help of a feedback scheme. An information engine based on a two level system where the state of the system is measured and feedback is effected has been theoretically studied <cit.>. These simple information engine systems offer ways to understand the optimisation schemes.

In this study, we perform an analytical investigation of a two-state information engine that is in contact with a heat bath. The model is similar to the one studied by Jaegon et al. <cit.> but differs in that in the current model there is a feedback delay between the measurement and the feedback. The analytical results are derived by assuming that the cycle time of the engine is large compared to the relaxation time of the system. The feedback time and the energy difference between the two states are the two parameters with respect to which the efficiency and work per cycle of the engine are studied. We compare the analytical findings with the numerical results obtained from the simulation of a particle moving within a one-dimensional periodic square potential. Over-damped Langevin dynamics is used to simulate the motion of the particle. Further, the generalized fluctuation relations of stochastic thermodynamics for this system are verified.

The paper is structured as follows: The model for the information engine is introduced in the next section and the assumptions and parameters of the model are defined. In Sec. <ref>, we start with the study of the information engine without feedback delay (Sec. <ref>) and then generalize to one that incorporates a feedback delay time (Sec. <ref>). The engine performance parameters are worked out and the fluctuation theorems are verified. Sec. <ref> provides a comparison between the analytical results and the results obtained from the simulation of the particle moving in the periodic square potential. Finally, in Sec. <ref>, we offer a summary of our findings and engage in a discussion of the results.

§ THE MODEL

The information engine consists of a two level system in contact with a thermal reservoir at temperature T.
The energies of the higher energy state (up state) and the lower energy state (down state) of the system are U_0 and -U_0 respectively. Also present as a part of the information engine is an observer (Maxwell's demon) who measures the state of the system at regular intervals of time, t = n α (n is an integer), and implements a feedback process depending on the outcome of the measurement. The feedback process is as follows: If the system is measured to be in the up state in the n^th measurement, the demon flips the state of the system to the down state at a time t = n α + ϵ, with ϵ < α. ϵ is the feedback delay time. If the system is measured to be in the down state, no feedback is initiated. The master equation for the process is given by

dp_u(t)/dt = -k_1 p_u + k_2 p_d
dp_d(t)/dt = k_1 p_u - k_2 p_d,

where p_u and p_d are the probabilities for finding the system in the up and down states respectively, and k_1 and k_2 are the rates of transition between the states (see Fig. <ref>). The detailed balance condition in equilibrium dictates that k_2/k_1 = e^-2β U_0, where β = 1/k_B T. We shall work in energy units where k_B T = 1. The relaxation time for the process is τ = 1/(k_1 + k_2). Note that for the case when the measurement outcome is the up state, the master equation has to be integrated in two time segments: from t = n α to t = n α + ϵ and then from t = n α + ϵ to t = (n + 1) α. This is because, if the measurement gives the up state as the outcome, the state will be flipped after a delay time of ϵ.

§ RESULTS AND ANALYSIS

In the analysis that follows, we shall assume that the time between the state flip and the next measurement time, α - ϵ, is much larger than the relaxation time τ. This implies that the system is in equilibrium at the beginning of each cycle. We first work out the simpler case when ϵ = 0 (immediate feedback with no delay time). Subsequently, we relax this constraint and work out the results for the more general case.

§.§ Feedback engine with no delay time (ϵ = 0)

We consider here the case when the feedback is implemented right after the measurement. The probability of spotting the particle in the up state during the measurement is

p_u^eq = e^-β U_0/[2 cosh(β U_0)],

which is the equilibrium distribution. The average information gathered during the measurement is

<I> = -p_u^eq ln p_u^eq - (1 - p_u^eq) ln(1 - p_u^eq).

This is related to the cost of running the information engine. Processing this information requires a minimum of k_B T <I> of energy, associated with resetting the memory bits involved in the measurement process.

§.§.§ Efficiency and work per cycle

Since a work of 2 U_0 is extracted every time the particle is spotted in the up state, the average work extracted per cycle is

-<W> = 2 U_0 p_u^eq.

The variation of average work per cycle (WPC) as a function of U_0 is shown in Fig. <ref> (blue solid curve). The optimal value for U_0 at which WPC is a maximum is U_0 = 0.64 k_B T. The fact that U_0 has to be of the order of k_B T for optimal work extraction can be understood as follows: If U_0 is much smaller than k_B T, the chance of spotting the system in the up state will be close to 0.5, but the resultant work extraction per flip will be small. For U_0 much larger than k_B T, the probability of observing the system in the up state reduces drastically, leading again to a low value of WPC.
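As a quick, self-contained illustration of the quantities just defined, the short script below evaluates p_u^eq, the average information <I>, and the work per cycle 2 U_0 p_u^eq on a grid of U_0 values and locates the WPC maximum numerically (in units where k_B T = 1). This is only a sketch for reproducing the U_0 ≈ 0.64 k_B T optimum quoted above; the grid range and resolution are arbitrary choices, not taken from the paper.

```python
import numpy as np

def p_up_eq(U0, beta=1.0):
    # Equilibrium probability of the up state: p_u^eq = e^{-beta U0} / (2 cosh(beta U0))
    return np.exp(-beta * U0) / (2.0 * np.cosh(beta * U0))

def avg_information(U0, beta=1.0):
    # Average information per measurement, <I> = -p ln p - (1-p) ln(1-p)
    p = p_up_eq(U0, beta)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def work_per_cycle(U0, beta=1.0):
    # Average extracted work per cycle for zero feedback delay, -<W> = 2 U0 p_u^eq
    return 2.0 * U0 * p_up_eq(U0, beta)

U0 = np.linspace(1e-3, 5.0, 50_000)   # arbitrary grid, k_B T = 1
wpc = work_per_cycle(U0)
i_star = np.argmax(wpc)
print(f"WPC is maximal at U0 = {U0[i_star]:.2f} k_B T")   # ~0.64
print(f"maximum WPC          = {wpc[i_star]:.3f} k_B T")
print(f"<I> at that point    = {avg_information(U0[i_star]):.3f}")
```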
The efficiency, defined as the ratio of the work extracted to the cost of running the engine, is given by

η = 2 U_0 p_u^eq/(k_B T <I>).

The efficiency is a monotonically increasing function of U_0 and saturates for values of U_0 ≫ k_B T, as seen in Fig. <ref> (blue solid curve). In the limit of U_0 →∞, p_u^eq ∼ e^-β U_0. It is easily seen that in this limit, η → 1. But this maximal efficiency happens at the expense of WPC going to zero. At the value of U_0 at which WPC is a maximum, η ≈ 53%. For optimal choices that do not compromise either efficiency or work per cycle, U_0 should lie between 0.6 and 1.2 (the shaded yellow region in the blue solid curve in Figs. <ref> and <ref>) for the case when ϵ = 0.

§.§.§ Verifying the generalized Jarzynski relation

We compute the right and left hand sides of the GJR (Eq. (<ref>)) separately for verifying the relation. Note that Δ F = 0 for the system considered because the energy spacing remains the same after the feedback process. The work variable, W, can take two values: (i) W_1 = -2 U_0, when the particle is observed in the up state, and (ii) W_2 = 0, when the particle is observed in the down state. The corresponding probabilities are P(W_1) = p_u^eq and P(W_2) = 1 - p_u^eq. Thus the left hand side of the GJR is

<e^-β W> = e^2β U_0 p_u^eq + 1 - p_u^eq = 1 + p_u^eq(e^2β U_0 - 1) = 2(1 - p_u^eq).

The right hand side of the GJR is γ = p_1 + p_2, where p_1 and p_2 are probabilities to be determined by running the two protocols backwards for the case when the particle was observed in the up state in the forward cycle and in the down state in the forward cycle respectively. Note that the time reversed protocols start with the system in the equilibrium state and no feedback is involved. p_1 is the probability of finding the particle in the up state at time t = α with the state flipped at t = α (note that the flip would in general be carried out at t = α - ϵ, but we are looking at an engine with ϵ = 0). p_1 is thus the probability of finding the system in the down state in equilibrium, which is (1 - p_u^eq). p_2 is the probability of finding the particle in the down state at time t = α, starting from equilibrium with no flip in the state. Thus p_2 is also given by (1 - p_u^eq). Thus the right hand side of the GJR is given by

γ = 2(1 - p_u^eq).

Comparing Eqs. (<ref>) and (<ref>), we see that the GJR is valid for this system. The efficacy, γ, is a measure of how reversible the engine is. In the limit β U_0 ≫ 1, p_u^eq ≈ 0 and γ ≈ 2. This is the largest possible value of γ for a feedback process that involves two measurement outcomes. When U_0 = 0, p_u^eq = 1/2 and the efficacy becomes γ = 1. For this case there is no feedback because the two states are identical and the flip does not make a difference to the state. As expected, the GJR reduces to the usual Jarzynski equality for this case. When β U_0 ≪ -1, the efficacy becomes 0. The information is used in the least optimal manner in this situation. This is because the demon, rather than extracting work, flips the state when the system is in the lower of the energy states. Note that with U_0 < 0, the up state becomes the lower energy state.

§.§.§ Verifying the generalized integral fluctuation theorem

To verify the GIFT, we need the values of the information variable, I, and the unavailable information, I_u, for the two outcomes of the measurement.
For the case when the system is measured in the up state, I is given by I_1 = -ln p_u^eq, and for the other case the information is I_2 = -ln(1 - p_u^eq). The unavailable information for the two cases is given by I_u1 = -ln p_1 and I_u2 = -ln p_2 respectively. Using the values of the probabilities determined above for the occurrence of the two outcomes, we have

<e^-β W - I + I_u> = e^[2β U_0 + ln p_u^eq - ln(1 - p_u^eq)] p_u^eq + (1 - p_u^eq) = 1,

thus verifying the GIFT for this case. Note that if one ignores the unavailable information, I_u, then the GIFT will be found to be violated. This is because we have assumed a measurement without error. In fact, one can easily see that <e^-β W - I> = (1 - p_u^eq), giving a value less than one.

§.§ Feedback engine with delay time (ϵ ≠ 0)

We now consider the case where there is a finite feedback delay time, ϵ, between the measurement and the state flip. As discussed above, the relaxation time for the system is τ. Delay in implementing the feedback would imply that at the instant of a state flip, there is a finite probability that the system's state differs from the measured state. These probabilities are:

P(up; t | up; 0) = e^-t/τ (1 - p_u^eq) + p_u^eq
P(down; t | up; 0) = 1 - P(up; t | up; 0)
P(down; t | down; 0) = e^-t/τ p_u^eq + (1 - p_u^eq)
P(up; t | down; 0) = 1 - P(down; t | down; 0),

where we have defined P(b; t_2 | a; t_1) as the probability that the state of the system is b at time t_2 given that its state at time t_1 is a (t_2 > t_1).

§.§.§ Efficiency and work per cycle

The average work extracted per cycle can be computed by taking into consideration the above probabilities. The average work extracted per cycle is

-<W> = 2 U_0 p_u^eq p̃ - 2 U_0 p_u^eq (1 - p̃),

where p̃ ≡ e^-ϵ/τ (1 - p_u^eq) + p_u^eq. The first term in the above equation accounts for the positive work extraction that happens when the state of the system is measured in the up state and it is also in the up state at the time of the state flip. The second term corresponds to the negative work extracted, which happens when the state is measured to be up but has switched to the down state during the delay time, ϵ. In Fig. <ref> we have shown the variation of WPC with U_0 for different finite delay times: ϵ = 0.003 (red dashed curve), ϵ = 0.010 (green dotted curve), ϵ = 0.025 (blue dash-dotted curve) and ϵ = 0.1 (connected circles). The value of τ is taken to be 0.02 for all the cases. As expected, WPC is reduced for larger delay times because the information gained is utilised less optimally with increasing delay time. Also observed is the shift in the location of the peak value of WPC to smaller U_0 as the delay time is increased. This means that for a fixed value of U_0, the maximum of WPC occurs at larger temperatures as the delay time is increased. For delay times large compared to τ, WPC becomes negative (connected circle curve for ϵ = 0.1 in Fig. <ref>), indicating that most of the time when the state is flipped, the system is in the down state. At intermediate delay times, the WPC takes both positive and negative values, with the WPC values initially increasing from zero, then becoming negative, and eventually approaching zero from below (see the dash-dotted curve for ϵ = 0.025 in Fig. <ref>).

The efficiency of the information engine with feedback delay is given by

η = 2 U_0 p_u^eq (2p̃ - 1)/(k_B T <I>).

The average information, <I>, is the same as that given in the previous section. This is because the feedback delay time has no bearing on the measurement probability when the cycle time is large compared to the relaxation time and the feedback delay time.
Fig. <ref> shows the variation of efficiency as a function of U_0 for the same set of values of ϵ considered above for the case of WPC. The relaxation time, τ = 0.02, is also the same. We have seen that for ϵ = 0, the efficiency increases monotonically with U_0, attaining the maximum value of 1 as U_0 tends to infinity. But as the delay time is increased, the peak in efficiency shifts to lower values of U_0. As expected, the peak value of efficiency also decreases as ϵ is increased. For ϵ of the order of τ, both efficiency and WPC are maximum for U_0 ≲ 1. These features are seen in the green dotted curve (ϵ = 0.010) and the blue dash-dotted curve (ϵ = 0.025) in Figs. <ref> and <ref>.

§.§.§ Generalized Jarzynski relation with feedback delay

We have seen in the previous section that without feedback delay, the efficacy is γ = 2(1 - p_u^eq). It was shown that this was indeed equal to <e^-β W>, thus verifying the GJR. We now find γ for the case with non-zero delay time and propose to verify the validity of the GJR for this case. For the present case, in the expression γ = p_1 + p_2, p_1 is the probability of finding the system in the up state at t = α with the system starting from equilibrium at t = 0 and a flip of the state being carried out at t = α - ϵ (note that there is no measurement involved in the reverse process). Thus p_1 is given by the sum of two terms: (i) the probability that at the time just before the flip the system is in the down state (which means that after the flip the system will be in the up state) and is then found in the up state at time α, and (ii) the probability that at the time just before the flip the system is in the up state (which means that after the flip the system will be in the down state) and is then found in the up state at time α. p_2, on the other hand, is just the probability of the system being found in the down state in equilibrium. This is the reverse process for the case when the particle is measured in the down state in the forward process and does not involve any feedback. Thus we have

p_1 = (1 - p_u^eq)[e^-ϵ/τ (1 - p_u^eq) + p_u^eq] + p_u^eq [p_u^eq (1 - e^-ϵ/τ)]
p_2 = 1 - p_u^eq.

This gives

γ = 1 + e^-ϵ/τ (1 - 2 p_u^eq),

which reduces to γ = 2(1 - p_u^eq) for the case when ϵ = 0, as expected. To evaluate the LHS of the GJR, note that the possible values of e^-β W are e^2β U_0, e^-2β U_0 and 1, with probabilities p_u^eq p̃, p_u^eq (1 - p̃) and (1 - p_u^eq). Therefore,

<e^-β W> = e^2β U_0 p_u^eq p̃ + e^-2β U_0 p_u^eq (1 - p̃) + (1 - p_u^eq).

Substituting for p̃ and making use of the relation e^2β U_0 = (1 - p_u^eq)/p_u^eq, the above expression reduces to

<e^-β W> = 1 + e^-ϵ/τ (1 - 2 p_u^eq),

which is the same as γ (Eq. <ref>), thus validating the GJR. One can similarly verify the validity of the GIFT, which is presented in the appendix <ref>.

§.§ Comparison with simulation results

Consider a particle moving in one dimension in a periodic square potential

U_s(x) = U_0 for 0 < x ≤ 0.5 and U_s(x) = -U_0 for 0.5 < x ≤ 1, with U_s(x) = U_s(x+1),

as shown in Fig. <ref>. The particle is in contact with a heat bath at temperature T. One can implement an information engine using this system by measuring the position of the particle and initiating a feedback protocol <cit.>. The protocol closely resembles that of the two state information engine discussed above and is as follows: At times given by t = n α, a measurement of the particle's position is carried out. If the particle is located in the region with higher potential energy, referred to as region S
(see Fig. <ref> (a)), then the potential is flipped (that is, U_s(x) → -U_s(x)) at a time t = n α + ϵ, where α is the engine cycle time and ϵ is the feedback delay time. If the particle is not spotted in S, then no feedback process is initiated. The interval α - ϵ is kept large enough to ensure that the system equilibrates before each measurement. This is not a necessary part of the current model but is done so that the comparison with the analytical results from the two state model can be made. Even though the state space of the current system, which is a continuum of states, is different from that of the two level system considered above, there are similarities. In equilibrium, the probability of finding the particle in the region of higher potential will be equal to the probability of finding the two level system in the up state. In the two-state model, the relaxation to equilibrium is governed by a single relaxation time. However, this process might differ for the particle in the square potential and could involve multiple time scales whose values depend on the height and the period of the potential. But as a first approximation, one can model the relaxation using a single relaxation time approximation. This would allow us to compare the simulation results for the particle in the square potential with the analytical results for the two level system by using the relaxation time as a fit parameter. We carry out this comparison below.

The simulation has been carried out using the over-damped Langevin equation,

ẋ = F(x)/(mξ) + ζ/(mξ),

where m is the mass of the particle and ξ is the friction coefficient. ζ(t) is the thermal noise with zero average, and the correlation function is given by <ζ(t) ζ(t')> = Γ δ(t - t'). The fluctuation-dissipation relation connects the strength of the noise, Γ, to the friction coefficient, ξ, by the relation Γ = 2 m ξ k_B T. F(x) is the conservative force arising from a potential, U(x). The square potential is modelled using the function U(x) = C(Δ) tan^-1[sin(2π x)/Δ]. The value of the parameter Δ determines the sharpness of the potential, and the parameter C is adjusted so as to make the amplitude of the potential equal to U_0. Fig. <ref> shows the shape of U(x) for Δ = 0.02 and C = 0.64, which gives a good approximation to U_s(x) with U_0 = 1. Eq. (<ref>) has been integrated numerically using the discretized version <cit.>,

x(t + δt) = x(t) + [F(x)/(mξ)] δt + f_g,

where δt is the time step and f_g is a Gaussian distributed random variable with zero mean and variance equal to 2 k_B T δt/(mξ). Since U(x) does not have a discontinuity at x = 0.5, one needs to choose the region S appropriately. We have chosen region S such that it approximately covers the elevated part of the potential (see Fig. <ref>). S is taken as the region between x = 0.025 and x = 0.475 before the flip (Fig. <ref> (a)) and from x = 0.525 to x = 0.975 after the flip (Fig. <ref> (b)), encompassing a total length of 0.45 and periodically repeating. We work with a system of units defined by ξ = 1, m = 1 and k_B T = 1. The time scale in the problem is set by ξ^-1, which is 1. The length scale of the problem is the period of the potential, which is 1. The integration time step of the simulation is taken to be δt = 10^-5. The time step has been kept small because in the region where the potential changes, close to mod(x,1) = 0.5, the forces can be very large. We have verified the convergence of the solution by checking sample trajectories at a time step one order of magnitude smaller.
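The sketch below shows one way the simulation loop described above could be organized: an Euler discretization of the over-damped Langevin equation with the arctan-smoothed potential, a position measurement at the start of each cycle, and a potential flip after the delay ϵ when the particle is found in region S. It is an illustrative reconstruction, not the authors' code; the cycle time α, the number of cycles, and the work bookkeeping convention (extracted work counted as the potential drop at the flip) are assumptions made here for concreteness.

```python
import numpy as np

# Units as in the text: m = xi = k_B T = 1, potential period = 1
dt, Delta, C = 1e-5, 0.02, 0.64      # time step and potential-shape parameters from the text

def potential(x, sign=+1):
    # U(x) = C * arctan(sin(2 pi x) / Delta); sign = -1 after a flip
    return sign * C * np.arctan(np.sin(2 * np.pi * x) / Delta)

def force(x, sign=+1):
    # F = -dU/dx for the potential above
    s = np.sin(2 * np.pi * x)
    return -sign * C * (2 * np.pi * np.cos(2 * np.pi * x) / Delta) / (1 + (s / Delta) ** 2)

def in_region_S(x, sign=+1):
    # Region S: [0.025, 0.475) before a flip and [0.525, 0.975) after a flip (mod 1)
    y = x % 1.0
    return (0.025 <= y < 0.475) if sign == +1 else (0.525 <= y < 0.975)

def run_cycle(x, sign, alpha, eps, rng):
    """One engine cycle: measure at t = 0, flip at t = eps if measured in S, evolve to t = alpha."""
    flip_step = int(round(eps / dt)) if in_region_S(x, sign) else -1
    extracted = 0.0
    for step in range(int(round(alpha / dt))):
        if step == flip_step:
            # Extracted work = potential energy released by the particle when U -> -U
            extracted += potential(x, sign) - potential(x, -sign)
            sign = -sign
        noise = np.sqrt(2.0 * dt) * rng.standard_normal()   # std of f_g with m = xi = k_B T = 1
        x += force(x, sign) * dt + noise
    return x, sign, extracted

# Illustrative run (the paper averages over 10^6 cycles; far fewer are used here)
rng = np.random.default_rng(0)
x, sign, total = 0.7, +1, 0.0
n_cycles, alpha, eps = 200, 0.2, 0.003
for _ in range(n_cycles):
    x, sign, w = run_cycle(x, sign, alpha, eps, rng)
    total += w
print("estimated work per cycle:", total / n_cycles)
```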
Simulations have been carried out by varying the amplitude of the potential U_0 as well as the delay time ϵ. The WPC, efficiency and efficacy are computed by averaging over 10^6 cycles. Averaging over a large number of trials is particularly necessary for finding the efficacy accurately <cit.>. Work per cycle as a function of the delay time for various values of U_0 is shown in Fig. <ref>. As expected, for a given value of U_0, WPC decreases with ϵ because the particle might drift away from the higher potential region if one waits longer for the feedback process after the measurement. It is observed that the zero crossing of WPC occurs at lower values of ϵ for larger U_0 values. This implies that the range of delay time over which the information engine can extract work decreases with decreasing temperature. For small U_0, the WPC reduces by a factor of almost 0.5 when ϵ is varied from 0 to 0.01, which is approximately half the relaxation time (see the red solid line in Fig. <ref>). This drop is more drastic for higher values of U_0, with WPC dropping to nearly one fourth of its value at ϵ = 0 (see the blue dash-dot line in Fig. <ref>). The curves in Fig. <ref> are theoretical fits to the data, obtained using the analytic results from the two state model. The relaxation time, τ, is the fit parameter. We see that there is a good fit for all values of U_0 considered. This justifies the single relaxation time approximation. It is seen that the value of τ depends on U_0, with τ decreasing as U_0 increases.

The dependence of efficiency on ϵ is given in Fig. <ref> for different values of U_0. As in the case of WPC, the efficiency decreases monotonically with delay time. The theoretical fits using the two state results yield similar values of τ as those obtained from the WPC curve fits. The efficacy, γ, has been computed by finding the average <e^-β W> from the simulations. The dependence of γ on the delay time for various values of U_0 is shown in Fig. <ref>. The plot qualitatively resembles those found in the experimentally realized information engine based on a particle moving in a sinusoidal potential <cit.>. The maximum value that the efficacy can take is 2 for this feedback engine, as there are two outcomes possible during the measurement. High efficacy values are obtained for small delay times and large U_0. For ϵ ≈ 0 and U_0 ≥ 4, we find efficacy values close to 2. As expected, the efficacy approaches one as the delay time is increased, implying that the Jarzynski equality holds when the feedback is redundant. The theoretical fit for this case leads to values of τ close to those obtained from the previous fits.

We have averaged the values of τ obtained from the three sets of fittings carried out above. This gives the variation of τ as a function of U_0 and is shown in Fig. <ref>. Further, we extrapolate this data to get estimates of τ for arbitrary values of U_0 close to the ones used in the simulations. It is seen that the relaxation time has a strong dependence on the amplitude of the potential, particularly for large values of U_0. We have used the extrapolated τ values to plot the theoretical curves for WPC vs. U_0 and η vs. U_0 along with the results obtained from simulation. These results are shown in Figs. <ref> and <ref> respectively. There is an excellent match between the simulation data and the model results. Note that the plots of WPC vs. U_0 and η vs. U_0 in Figs. <ref> and <ref> respectively are for a fixed τ value, whereas for the present case τ varies with U_0.
It is seen from the η vs. U_0 plot (Fig. <ref>) that for short delay times compared to the relaxation time, the efficiency increases with U_0 (red solid curve). But at larger delay times, the efficiency is maximum at intermediate values of U_0 (blue dashed curve and black dotted curve).

§ CONCLUSION

An information engine based on a two state model is possibly the simplest information engine that can be studied with feedback delay time incorporated. The model allows for an exact calculation of important engine performance indicators: efficiency, work extracted per cycle and efficacy. The key control parameters of the engine are the feedback delay time and the energy gap between the levels or, alternatively, the temperature at which the engine functions. Some of the important observations from the analytical study of the two state system with feedback are the following:

(i) The engine performance deteriorates with increasing delay time. As the delay time becomes large compared to the relaxation time, one would expect the work extracted per cycle to saturate to a negative value. This is because the system is more likely to be in the down state ((1 - p_u^eq) > p_u^eq) at the time of the state flip. The efficacy in this limit will saturate to the value 1, indicating that the information gained is not utilized in extracting work from the thermal fluctuations.

(ii) For the case of zero delay time, the efficiency increases monotonically with U_0, whereas WPC has a peak at intermediate U_0. For finite delay time, the peaks in WPC and η shift to lower values of U_0.

(iii) The range of delay time over which the engine can extract positive work increases as U_0 decreases, or equivalently, the range increases with temperature.

(iv) For delay times of the order of the relaxation time, both WPC and η have maxima close to but below U_0 = 1 (in units of k_B T). Since 2 U_0 is the level spacing, this implies that for a fixed level spacing, the optimal temperature to run the engine is roughly such that k_B T is of the same order as the energy difference between the levels.

The information engine presented here allowed for verification of the generalized fluctuation theorems of stochastic thermodynamics. The importance of the unavailable information term in the GIFT relation is explicitly brought out. It is to be noted that the introduction of error into the measurement process will alter the form of the GIFT. In that scenario, the I_u in the LHS of the GIFT will not be present. In the context of stochastic thermodynamics of feedback systems, this simple model can also be of pedagogic interest to understand various fluctuation theorems.

Comparison of the model results with the simulation of an information engine based on a particle moving in a square potential leads to a good match. The efficacy variation as a function of delay time shows the same features that were observed in the experimentally realized information engine based on a particle moving in a sinusoidal potential <cit.>. The fit values of the relaxation time determined from the variation of efficiency, WPC and efficacy with ϵ all give similar values of τ for all values of U_0 considered. The variation of WPC and efficiency with U_0 for the particle based information engine has similar features as that for the two state system. The details of the behavior are, however, different due to the dependence of the relaxation time on the amplitude of the square potential.

The current work assumes that the cycle time is large compared to the relaxation time, so that one can assume equilibrium conditions at the beginning of each cycle.
This is the reason one had to look at the WPC, rather than the power of the engine. One can extend the analysis to the case where the cycle time is finite. One then needs to work out the steady state probability with the feedback in place. Another improvement to the model could be the introduction of error in the measurement process. This would make the model more realistic and will allow one to optimize the engine in the presence of imperfect measurements. Work is in progress to incorporate these modifications into the model. Most of the results discussed here should be experimentally accessible in the framework of colloidal particle based information engines. TJ would like to acknowledge financial support under the DST-SERB Grant No: CRG/2020/003646. § GENERALIZED INTEGRAL FLUCTUATION THEOREM (GIFT) WITH FEEDBACK DELAY ϵ ≠ 0 To show that the GIFT holds, we need to prove < e^-β W - I + I_u > = 1. The possible values of W are -2U_0, 2U_0 and 0, with probabilities p_u^eq p̃, p_u^eq (1 - p̃) and (1 - p_u^eq) respectively. The corresponding values of I are -ln p_u^eq, -ln p_u^eq and -ln(1-p_u^eq) respectively, and the values of I_u are -ln p_1, -ln p_1 and -ln p_2 respectively (where p_1 and p_2 are given by Eq. (<ref>)). < e^-β W - I + I_u > = e^{2β U_0 + ln p_u^eq - ln p_1} p_u^eq p̃ + e^{-2β U_0 + ln p_u^eq - ln p_1} p_u^eq (1-p̃) + e^{0 + ln(1-p_u^eq) - ln p_2} (1 - p_u^eq). Substituting p_2 = 1 - p_u^eq and simplifying, < e^-β W - I + I_u > = e^{2β U_0} (p_u^eq)^2 p̃/p_1 + e^{-2β U_0} (p_u^eq)^2 (1-p̃)/p_1 + 1 - p_u^eq. Substituting for p̃ and p_1 and making use of the relations e^{2β U_0} = (1 - p_u^eq)/p_u^eq and e^{-2β U_0} = p_u^eq/(1 - p_u^eq), the above expression reduces to < e^-β W - I + I_u > = c_1 [ p_u^eq + e^{-ϵ/τ}(1 - 2p_u^eq) ] + 1 - p_u^eq, where c_1 = p_u^eq / [ p_u^eq + e^{-ϵ/τ}(1 - 2p_u^eq) ]. Simplifying Eq. (<ref>), we get < e^-β W - I + I_u > = 1.
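The limiting behaviors quoted in the main text can be checked numerically from the outcome table above. The following Python sketch assumes (i) units with k_B T = 1 so that U_0 is expressed in units of k_B T, (ii) a single-exponential relaxation form p̃(ϵ) = p_u^eq + (1 - p_u^eq) e^{-ϵ/τ} for the probability of remaining in the up state after the delay (an assumption consistent with the single relaxation time approximation used for the fits), and (iii) the convention that extracted work is counted as positive in the WPC. The values of U_0 and τ are illustrative only.

import numpy as np

def p_up_eq(U0):
    # equilibrium occupation of the upper level, from e^{2 U0} = (1 - p)/p
    return 1.0 / (1.0 + np.exp(2.0 * U0))

def p_tilde(eps, U0, tau):
    # assumed single-exponential relaxation toward equilibrium after the delay
    p_eq = p_up_eq(U0)
    return p_eq + (1.0 - p_eq) * np.exp(-eps / tau)

def wpc(eps, U0, tau):
    # extracted work per cycle, -<W>, from the three outcomes listed above:
    # W = -2U0 (prob p_eq*p~), +2U0 (prob p_eq*(1-p~)), 0 (prob 1-p_eq)
    p_eq, pt = p_up_eq(U0), p_tilde(eps, U0, tau)
    return 2.0 * U0 * p_eq * (2.0 * pt - 1.0)

def efficacy(eps, U0, tau):
    # gamma = <exp(-W)> over the same three outcomes (beta = 1)
    p_eq, pt = p_up_eq(U0), p_tilde(eps, U0, tau)
    return (p_eq * pt * np.exp(2.0 * U0)
            + p_eq * (1.0 - pt) * np.exp(-2.0 * U0)
            + (1.0 - p_eq))

U0, tau = 4.0, 0.02                  # illustrative values only
print(wpc(0.0, U0, tau))             # > 0: work is extracted for zero delay
print(efficacy(0.0, U0, tau))        # close to 2(1 - p_u^eq), i.e. near 2 for large U0
print(efficacy(50 * tau, U0, tau))   # -> 1: Jarzynski equality recovered for long delay

For ϵ → 0 the efficacy tends to 2(1 - p_u^eq), close to 2 for large U_0, and for delays long compared to τ it returns to 1, as described in the main text.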
{ "authors": [ "Kiran V", "Toby Joseph" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20231226160918", "title": "Information engine with feedback delay based on a two level system" }
Pacific Northwest National Laboratory, Richland, WA 99354, USA (Correspondence to: [email protected]); Open Engineering, Inc., Richland, WA 99354, USA; University of Washington, Seattle, WA 98195, USA; Microsoft Quantum, Microsoft, Redmond, WA 98052, USA; NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA; SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA; Fermi National Accelerator Laboratory, Batavia, IL 60510, USA; Illinois Institute of Technology, Chicago, IL 60616, USA; Lawrence Livermore National Laboratory, Livermore, CA 94550, USA; Rigetti Computing, Oxford, England, UK; Los Alamos National Laboratory, Los Alamos, NM 87545, USA; National Radio Astronomy Observatory, Charlottesville, Virginia 22903, USA; University of California, Berkeley, CA 94720, USA; University of Chicago, IL 60637, USA; Amazon Web Services Center for Quantum Networking, Boston, MA, USA; Advanced Microwave Photonics Group, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA; University of Florida, Gainesville, FL 32611, USA; Washington University, St. Louis, MO 63130, USA; Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; University of Sheffield, Sheffield, S10 2TN, UK; National Institute of Standards and Technology, Gaithersburg, MD 20899, USA ADMX Collaboration The ADMX collaboration gathered data for its Run 1A axion dark matter search from January to June 2017, scanning with an axion haloscope over the frequency range 645-680 MHz (2.66-2.81 μeV in axion mass) at DFSZ sensitivity. The resulting axion search found no axion-like signals comprising all the dark matter in the form of a virialized galactic halo over the entire frequency range, implying lower bound exclusion limits at or below DFSZ coupling at the 90% confidence level. This paper presents expanded details of the axion search analysis of Run 1A, including review of relevant experimental systems, data-taking operations, preparation and interpretation of raw data, axion search methodology, candidate handling, and final axion limits. Axion Dark Matter eXperiment: Run 1A Analysis Details G. C. Hilton January 14, 2024 ===================================================== § INTRODUCTION An axion field is a consequence of the Peccei-Quinn solution to the strong-CP problem of particle physics <cit.>. The Peccei-Quinn mechanism implies several dynamical processes that produce cold axions in the early Universe. These axions may constitute some fraction or all of the dark matter <cit.>. An axion field serves as an excellent candidate for cold dark matter as it has very small primordial velocity dispersion and feeble couplings to itself and to the Standard Model fields. It describes a very weakly interacting massive particle, the axion, with lifetime much longer than the age of the Universe. Matter under these conditions is expected to collapse into large scale structures that seed the Universe's galaxies, including our own. For axions to saturate the ΛCDM model's dark-matter density, many numerical and analytical studies of QCD (quantum chromo-dynamics) prefer an axion mass in the 1-100 μeV range <cit.>. The predicted coupling between axions and photons is model-dependent; in general, axions with dominant hadronic couplings as in the KSVZ (Kim-Shifman-Vainshtein-Zakharov) model <cit.> are predicted to have an axion-photon coupling roughly 2.7 times larger than that of the DFSZ (Dine-Fischler-Srednicki-Zhitnitsky) model <cit.>. Because the axion-photon coupling is expected to be very small, O(10^-17 - 10^-12) GeV^-1 over the expected axion mass range, these predicted particles are dubbed invisible axions. Direct searches for axions use several techniques, based in either pure laboratory methods or by providing the requisite axions through external sources. The Axion Dark Matter eXperiment (ADMX) utilizes the axion haloscope technique <cit.>, and consists of a cold microwave cavity threaded by a static magnetic field coupled to the ambient axion field via an inverse-Primakoff process. Axions are resonantly converted to photons in a cavity mode tuned to the axion field's frequencies, which are centered at f = ϵ̅ /h where h is the Planck constant and ϵ̅ is the expected total axion energy consisting of the rest mass energy and kinetic energy. The microwave haloscope has proven to be the most sensitive technique to search for axions as the dark matter over the favored range. Axion searches prior to the year 2017, including ADMX, have managed to approach KSVZ sensitivity, but not the stricter DFSZ model under the conservative assumption of a virialized galactic halo.
ADMX entered as a DOE Generation 2 dark-matter search in 2014 with the explicit target of reaching DFSZ sensitivity over the frequency range 0.5-10 GHz (2-40 μeV) <cit.>, see Fig. <ref>, and more recently has been split into G-2 and EFR (extended frequency range) efforts. The first of these searches, referred to as “Run 1A,” gathered data between January and June 2017 over the frequency range 645-680 MHz (2.66-2.81 μeV) and found no axions in that range <cit.>. This paper explains in more detail the operations and analyses that resulted in the Run 1A search reaching DFSZ limits.The remainder of the this paper is structured as follows. Section <ref> provides a brief overview of the ADMX apparatus, with focus on the axion converter-detector assembly, including the cavity and magnet, the radio frequency (RF) receiver chain, and the data acquisition system. Section <ref> presents the data-taking operations, including search guidelines, data-taking cadence, scanning operations, logging of data, and candidate rescans. Section <ref> reviews the preparations performed on the raw experimental data prior to searching for axion signals, paying particular attention to noise characterization, backgrounds in the integrated power spectra, and the classification of unfit data. Section <ref> presents the statistical models used to search for the axion, how multiple observations are composed to form the grand statistical spectra, and the process of converting sensitivity spectra to limits. Section <ref> reviews the different axion dark-matter models searched for, including the model used during data-taking operations and the more sophisticated models of the final analysis. Section <ref> presents the procedure for handling axion candidates during Run 1A, including initial identification using the live analysis output, generation of synthetic axions, rescans and further procedures, residual candidates, and the reintroduction of rescan data for the construction of the grand spectra. Section <ref> consolidates the cumulative findings into a grand spectrum, providing the net Run 1A axion limits. Section <ref> summarizes the findings and sets the stage for Run 1B <cit.>. § THE HALOSCOPE APPARATUSThis section provides a summary of the ADMX haloscope apparatus, specifically the magnet, resonant cavity, cold receiver chain, warm receiver, and data acquisition (DAQ) systems. The focus is on components directly relevant for the axion search. These and other supporting components are discussed in more detail in the recent ADMX instrumentation paper of Run 1A and 1B <cit.>. The main ADMX apparatus is an Earth-bound tunable haloscope, designed to detect relic axions via microwave power emitted from a resonantly enhanced inverse-Primakoff process. The experimental apparatus, shown in Fig. <ref>, is divided into a graduated cold space, which at its coldest level (∼ 150 mK) is regulated by a dilution refrigerator (DR), and surrounded by increasingly warmer spaces leading up to room temperature. The cold space lies primarily within the main magnet housing. The outer steel housings of the magnet cryostat act as shielding against heat and radio frequency interference (RFI) leakage <cit.>. Inside the magnet bore housing is the detector insert, containing the cavity, cold electronics, and sub-Kelvin cryogenics. Two more layers of heat/RFI shields separate the coldest elements, including the resonant cavity, from the 4.2 K main magnet. 
Power from the cavity is transmitted through a tunable antenna in the top plate and out of the cold space through an ultra-low noise transmission and amplification chain. This paper concentrates only on the receiver chain used in Run 1A, which tracked a TM_010-like mode in the cylindrical cavity. In the warm space, the transmitted signal is then mixed down in frequency and digitized as a voltage time series and a frequency power spectrum over a 25 kHz bandwidth, which roughly matches the bandwidth of the TM_010-like resonance. These digitized power spectra are where the axion signatures would be expected to appear.The warm space also includes all equipment outside the magnet shield, and applies to the majority of the cryogenics and gas handling infrastructure. Also at room temperature is the DAQ infrastructure that monitors and controls the components of the warm and cold spaces. The DAQ monitors and controls many physical components of the experiment, such as running the DR, operation of the quantum-limited amplifiers and mechanical control of the cavity, monitoring of the cryogenics, cavity, magnets, and a complex of sensors for magnetic field strength, pressures, cryogen levels and temperatures at every stage of the insert. The lowest layer of the DAQ software is based on EPICS <cit.>, which provides a uniform software interface for interaction with the instruments. The DAQ infrastructure also executes the numerous routines used in data taking and monitors its status using a live analysis. §.§ Resonant Axion-to-microwave Converter Assembly This sub-section reviews the primary components responsible for converting ambient axions into microwave photons: the main magnet and cavity. §.§.§ Main Magnet The static magnetic field is supplied by a superconducting magnet manufactured by Wang NMR of Livermore, CA <cit.>. The magnet consists of niobium-titanium windings immersed in liquid ^4He. During normal Run 1A data-taking operations, the field at the center of the solenoid was 6.8 Tesla, falling off to about 70% of this value at the center of the end plates of the cavity, as seen in Fig. <ref>. The superconducting coil is a 1.12 meter tall solenoid with a 60 cm inner diameter bore. The magnet winding is composed of four concentric superconducting solenoids, containing 99 km of copper-stabilized niobium-titanium wire wound around a stainless steel spool piece and potted in epoxy. The rated maximum field at the center of the magnet is 8.5 Tesla at a current of 248.96 Amperes and a stored energy of 16.54 MJoules. The central field of the main magnet runs linearly with the current through the coils|B|_max = 8.5  Tesla( I /248.96 Ampere),where I is the supplied current to the coils. The main magnet’s current is continuously maintained by a power supply. Once cooled to operating temperature, the magnet requires a supply of approximately 2000 liters of liquid helium per month for continuous cooling during data-taking operations.§.§.§ Cavity Axions passing through the field of the main magnet transition into microwave photons with some small probability. To capture and maximize this conversion, a highly conductive cavity is placed in the magnet bore at the center of the solenoid field. The ADMX Run 1A cavity is a 136 liter copper-plated stainless-steel cylinder with two copper tuning rods. The cylinder is made of a tube-section forming the cavity walls plus two removable end caps. 
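The linear field-current relation quoted above can be inverted to estimate the operating point of the main magnet during Run 1A. The short sketch below (Python) uses only the rated values given in the text; the stored-energy scaling assumes a constant coil inductance, which is an added assumption rather than a quoted number.

# Invert |B|_max = 8.5 T * (I / 248.96 A) for the 6.8 T Run 1A field.
B_RATED, I_RATED, E_RATED = 8.5, 248.96, 16.54e6   # Tesla, Ampere, Joule (from the text)
B_RUN1A = 6.8                                      # Tesla at the solenoid center

I_run1a = I_RATED * B_RUN1A / B_RATED              # ~199 A
E_run1a = E_RATED * (B_RUN1A / B_RATED) ** 2       # ~10.6 MJ, constant-inductance assumption

print(f"Run 1A current ~ {I_run1a:.0f} A")
print(f"Stored energy  ~ {E_run1a / 1e6:.1f} MJ (assuming E = L I^2 / 2 with fixed L)")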
The end caps form a low-resistance connection to the walls via a knife edge on the walls pressed firmly into the end plates. Both the stainless-steel cavity and the copper tuning rods are plated with OFHC (oxygen free high conductivity) copper with a minimum thickness of 0.075 mm and then annealed for 8 hours at 400 Celsius in vacuum. The annealing process further increases the grain-size of the copper crystals allowing for longer electron scattering path lengths as the copper is cooled and enters the anomalous skin depth regime. The quality factor Q of the tracked TM_010-like mode in the unloaded cavity ranges from 20,000 at room temperature to 160,000 at cryogenic temperatures <cit.>. The unloaded Q of the cavity primarily depends on the resistive skin losses of the copper plating. The oscillating electric fields of the cavity mode penetrate the resistive copper walls of the cavity. For more information on the construction of the Run 1A cavity, see <cit.>.The cumulative interaction strength between the ambient axions and electromagnetic fields over the interior of the cavity is given in natural units by the Lagrangian L_cav = - ∫_V d^3 x α g_γ/π f_a a E⃗·B⃗,where V is the inner volume of the cavity, α is the fine structure constant, g_γ is the dimension-less model-dependent coupling constant, f_a is the Peccei-Quinn symmetry-breaking scale, a is the axion field, E⃗ is the electric field strength, and B⃗ is the magnetic field strength.For a highly resonant cavity threaded by a strong magnetic field, passing relic axions may be seen to act as an external driving force on cavity wave-forms with non-trivial Lagrangian L_cav so long as the axion-photon coupling is feeble enough that the probability of being converted is small. The dynamics of the system become resonant around the cavity modes as the boundary fields and losses per cycle are small in comparison to the stored energy.The power deposited into the cavity from a monotone source averaged over times much longer than the coherence time of the cavity mode, t_coh = Q_L/ν_0, where ν_0 is the mode center frequency and Q_L is the loaded quality factor, in SI units is found to be< P_cav>(ν)= ϵ_0 α^2 c^2/π^2 f_a^2 g_γ^2 V |B|_max^2 C ν Q_L < |a|^2 >(ν) T_ν_0 (ν)= 1.9 × 10^-23 W (g_γ/0.97)^2 (V/136 l) (B_max/6.8 T)^2 (C_mode/0.4) (ν/650 MHz) (Q_L/50,000) (ρ/0.45 GeV cm^-3) T_ν_0 (ν) . Here ϵ_0 is the permittivity of free space, c is the speed of light in vacuum,C_mode is the form factor quantifying the mode and magnetic field alignment, and T_ν_0 (ν) is the mode envelope shape, which is expected to follow a Lorentzian formT_ν_0 (ν) = 1/1 + 4 Q^2 (ν-ν_0)^2/ν_0^2.The cavity form factor parametrizes the overlap of the magnetic field and electric field in the cavity C = ( ∫_V d^3x E⃗·B⃗)^2 /(∫_V d^3x |E⃗|^2 ) |B_max|^2 V .The magnetic field under is dominated by several orders of magnitude by the external main magnet under normal operating conditions, which has profile B⃗_0. The electric field in the vicinity of a single cavity mode is dominated by the mode's form E⃗_ξ where ξ is the multi-index of the mode wave-form, which will often be parameterized by modified cylindrical harmonics (n,l,m,X) where the last index X will be used to distinguish modes split by the axial-symmetry-breaking tuning rods. The form factor of the ξ mode is then well approximated byC_ξ≈( ∫_V d^3x E⃗_ξ·B⃗_0 )^2 /(∫_V d^3x |E⃗_ξ|^2 ) |B_max|^2 V .The main magnet's field is oriented vertically along the cavity's axis, though it does diverge some at either end, see Fig. 
<ref>, making the value of the form factor primarily dependent on the shape of the mode's axial electric field. For the ADMX cavity, there are, among others, transverse electric (TE) modes and transverse magnetic (TM) modes. The TM modes are the only category that contain an axial electric field desired for large form factors. The axial electric field for a TM_nlm mode of an empty right-circular cylinder isE⃗_nlm(t,ρ, ϕ, z) = ẑ E_amp(t) J_m(x_mlρ / R) e^± i m ϕcos( n π z/d)where E_amp(t) is the time dependent component of the field, J_m is a cylindrical Bessel function, x_ml is the l-th root of J_m(x) = 0, R is the cavity radius, and d is the cavity height. For the rod-less cavity and magnet configuration used in ADMX, the chosen TM_010-like mode maximizes the form factor C_m.Two copper-plated stainless steel tuning rods are placed in the cavity in order to tune the axion-coupled modes. The rods run through the length of the cavity and are mounted on rotating alumina oxide rotary armatures. The rods are 0.05 m in diameter <cit.>. The rods ideally create null boundary conditions in the electric field of the cavity mode, effectively shrinking the extent of the cavity in the horizontal plane, increasing the TM mode frequencies, and splitting modes by breaking the axial symmetry. The presence of the tuning rods also reduce the quality factor from that expected of an empty cavity by about a factor of two. Identifying the split TM_010-like modes can be done heuristically or through simulation by increasing the thickness of the rods from zero (bare cavity) to their physical dimensions <cit.>. The TM_010-like mode branch tracked in Run 1A is identified with the name TM_010c in <cit.>. Rotating the rods from near the wall of the cavity to near the center tunes the TM_010-like mode from roughly 580 MHz to 890 MHz. The rods are moved by stepper motors located on top of the experimental insert. The stepper motors are connected to long G10 shafts that drive gearboxes on top of the cavity, which in turn are connected to the alumina shafts of the tuning rods. The gear boxes consist of two anti-backlash worm gear reductions, both geared down by 140:1 for a total reduction of 19600:1. Combined with the precision of the stepper motors, the angle of the rods can be stepped at the level of micro-radians. In practice, this stepping resolution allowed for tuning the cavity at 100-200 Hz per step during Run 1A operations. The motion of the rods is most effective in tuning the TM modes, with TE frequencies remaining nearly constant. However, when the TM_010-like mode crosses another mode, their waveforms mix and share energy. This mixing degrades both the form factor C_nlmX and quality factor and severely complicates the flow of power. As such, the axion search becomes insensitive at these crossings. The two-rod system can be maneuvered to avoid many of these crossings, but not all. For Run 1A, one of the rods was put in the wall position while the other was left free to tune. Frequencies of the cold space circulators and the MSA (microstrip SQUID amplifier) coincided with a range of good form factor between two major mode crossings, as seen in Figs. <ref>, <ref>. The form factor is simulated over this range using FEM (finite element method) multi-physics software Comsol <cit.>. The cavity contains three microwave ports: one “weak” port on the bottom plate and two tunable “major” ports on the top plate. The ports allow for the extraction or insertion of RF power with the cavity. 
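For orientation, the fiducial scaling form of the conversion power given in the cavity subsection above can be evaluated directly. The sketch below simply multiplies out the quoted reference expression; the only added assumption is that a DFSZ-like coupling is roughly 2.7 times smaller than the hadronic value, as stated in the introduction.

# Evaluate the quoted scaling form of <P_cav> around its reference point.
def p_cav(g_gamma=0.97, V_l=136.0, B_T=6.8, C=0.4, nu_MHz=650.0,
          Q_L=50_000.0, rho=0.45):
    return (1.9e-23 * (g_gamma / 0.97) ** 2 * (V_l / 136.0) * (B_T / 6.8) ** 2
            * (C / 0.4) * (nu_MHz / 650.0) * (Q_L / 50_000.0) * (rho / 0.45))

print(p_cav())                    # ~1.9e-23 W at the hadronic-coupling reference point
print(p_cav(g_gamma=0.97 / 2.7))  # ~2.6e-24 W for a DFSZ-like coupling (assumed ratio)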
The weak port is a fixed, short antenna that is extremely under-coupled to the TM_010-like mode. The weak port can be used to inject signals into the cavity without degrading the loaded quality factor of the mode Q_L or extracting signal power. The major ports consist of movable antennae that can be inserted or withdrawn via stepper motors attached to linear gear drives to vary the coupling strength. Like the tuning rods, the major ports' linear drives are actuated by stepper motors attached to long G10 shafts which turn 140:1 anti-backlash worm gear drives. The intention of having two major ports in Run 1A was to perform two simultaneous axion searches on two TM modes, a TM_010-like and a TM_020-like. However, for reasons covered in <cit.>, only the TM_010-like mode was observed and the other port was uncoupled from the cavity. Only the TM_010 port and the connected receiver chain will be referenced from here on. A vector network analyzer (VNA) can be used to measure the TM_010-like mode's frequency, quality factor, and the major port's coupling to the cavity. The VNA measures the swept transfer function (S_12) of the cavity by injecting a tone into the weak port of the cavity and measuring the same tone's amplitude transmitted through the major port. The frequency of the injected tone is swept across the band of the resonant mode. At frequencies far outside the central resonant frequency of the cavity, the injected tone is largely reflected by the cavity at the weak port or absorbed by the cavity walls, so almost none is extracted at the major port. On resonance, the injected tone enters the cavity, excites the mode, and its power is extracted at the major port with far less loss. The output of the VNA is the swept response across the cavity mode. The expected response of an unmixed isolated mode is the Lorentzian distribution of Eqn. <ref>, from which the central frequency and Q-width can be extracted from a fit, seen in Fig. <ref>. The VNA also determines the coupling properties between the major port and cavity via a swept reflection measurement (S_11). The VNA sends power towards the major port of the cavity through a circulator. A circulator allows for the directional injection of power along a transmission line. The power incident on the cavity reflects back with the form Γ_ν_o(ν) = [ β - 1 + Q_L^2 ((ν - ν_o)/ν_o)^2 - 2 i β Q_L (ν - ν_o)/ν_o ] / [ 1 + 4 Q_L^2 ((ν - ν_o)/ν_o)^2 ], where β is the coupling strength parameter. The coupling strength can be expressed as β = Q_0/Q_ext, where Q_0 is the cavity Q-factor with the major port uncoupled, and Q_ext is the contribution to the quality factor from external losses such as the major port. For a given coupling, the response is total reflection off resonance and a dip on resonance where power is absorbed by the coupled cavity. Critical coupling occurs when β = 1 and all power passes into the cavity at the central frequency. A good impedance match is marked by a deep trough in the reflected baseline on resonance. The depth of the antenna is adjusted to maximize the trough depth for the TM_010-like mode as a proxy for critical coupling. More precise analyses of coupling strength that can track overcoupling and other conditions have been implemented in subsequent runs <cit.>. Conventionally, when the difference between the minimum of the trough and the off-resonance baseline reaches -30 dB, the antenna is considered critically coupled. Such a trough means that only 0.1 percent of the on-resonance incident power is reflected.
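The extraction of β and Q_L from a swept reflection measurement can be illustrated with a small least-squares sketch. This is not the collaboration's fitting code: it assumes the standard one-port reflection power |Γ|^2 = [(β-1)^2 + 4Q_L^2 δ^2] / [(β+1)^2 + 4Q_L^2 δ^2] with δ = (ν-ν_o)/ν_o (which reproduces the limits described above: full reflection off resonance, a deep trough near β = 1), takes the mode center frequency as known from the transmission scan, and fits synthetic data only.

import numpy as np
from scipy.optimize import curve_fit

NU0 = 660.0e6   # mode center frequency in Hz, assumed known from the S_12 fit

def refl_power(nu, beta, q_loaded):
    # assumed one-port reflection power |Gamma(nu)|^2, for illustration only
    d = (nu - NU0) / NU0
    return (((beta - 1.0) ** 2 + 4.0 * q_loaded ** 2 * d ** 2)
            / ((beta + 1.0) ** 2 + 4.0 * q_loaded ** 2 * d ** 2))

rng = np.random.default_rng(0)
nu = NU0 + np.linspace(-5.0e4, 5.0e4, 401)                 # +/- 50 kHz sweep
data = refl_power(nu, 0.95, 50_000.0) + rng.normal(0.0, 5.0e-3, nu.size)

(beta_fit, q_fit), _ = curve_fit(refl_power, nu, data, p0=(0.8, 40_000.0))
print(beta_fit, q_fit)                                     # recovers beta ~ 0.95, Q_L ~ 5e4
print(10.0 * np.log10(refl_power(NU0, beta_fit, q_fit)))   # trough depth, ~ -30 dB near critical coupling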
External losses from the major port lower the mode quality factor as1/Q_L = 1/Q_0 + 1/Q_ext,where Q_L is the quality factor of the loaded cavity. During critical coupling, half of the power is lost to the major port, meaning Q_loaded = Q_free/2. Run 1A is configured to run at critical coupling, meaning at best only half of the axion power generated in the cavity is expected to leave through the major port. §.§ Run 1A Cold Receiver ChainThe TM_010 major port connects to the RF receiver chain, as seen in Fig. <ref>. The chain begins with a low-pass filter and runs through the cold space where the quantum-limited electronics are housed. A bucking magnet surrounds the quantum amplifiers and other electronics sensitive to stray magnetic fields, actively canceling the field from the main magnet to tens of Gauss. Two hall probes are located in the field free region of bucking coil to confirm the field is cancelled to within a few Gauss <cit.>. The electronics inside the field-free region are contained in an OFHC copper frame called the “squidadel” containing the cryogenic RF electronics and quantum amplifier package. This includes quantum noise limited amplifiers, circulators, switches, and temperature sensors. Physical and noise temperatures of the cryogenic electronics housed in the squidadel largely determine the noise temperature of the system, therefore the squidadel is kept thermalized to the dilution refrigerator mixing chamber, the coldest part of the system. The small power emitted from the cavity passes through the first series of switches and circulators to the first stage quantum amplifiers and further to the HFET amplifier before passing into the warm space. The MSA boosts the signal with a characteristic gain of 20-25 dB, followed by an HFET amplifier in the 10 K space providing a boost of 30 dB. The squidadel is wired completely with copper coaxial cables whereas wiring from main port antenna to the first stage quantum amplifier is NbTi. Coaxial cables in the input chain are stainless steel. Characterizing the early-stage electronics is an extremely important procedure in determining the sensitivity of the experiment and is covered in Section <ref> as well as in <cit.>. The total gain in the cold space is 50-55 dB. §.§ Warm Receiver Chain The remainder of the receiver chain runs through the successively warmer cryogenic spaces and vacuum feeds into the room temperature space. The total room-temperature amplification is approximately 40 dB, bringing the overall expected power to the pico-watt level. This power is then directed to a variable-frequency super-heterodyne receiver for digitization, as seen in Fig. <ref>. The power and gain of the receiver chain with the MSA switched out of the system were measured as a function of frequency every few weeks and found to change by a negligible amount, below the 1% level.Treatment of the insert RF emissions into a digitized data set is performed in several steps, see Fig. <ref>. The first step is to mix the signal such that the mode center frequency is centered at 10.7 MHz via a local oscillator set to f = ν_0 + 10.7 MHz. The down-mixed voltage is bandpass filtered in a 30 kHz window centered at 10.7 MHz, which is expected to cover at least the full width at half maximum of the TM_010-like mode. The analog signal now exists only in this vicinity of 10.7 MHz and is capable of being sampled quickly enough to resolve structures well below the expected total axion signal width. 
Time-wise sampling then occurs at a rate of 400 Mega-samples/s with a 10-bit digitizer <cit.>. To optimize the resolution/precision function of potential analyses, digitized samples are partitioned into bins 8 samples wide and averaged. This down sampling and averaging improves the signal to noise of samples by √(8)∼ 3 times and increasing the bit depth by log_4(8) = 1.5. The resulting re-binned sample has an effective width of 25 MHz, well above the 2 × 10.7 MHz Nyquist rate of the central frequency. The re-binned time series are split into ≈ 10 ms blocks and temporarily stored to a circular RAM queue. No safety mechanisms exist to preserve the oldest unprocessed sequences, which are overwritten by the newest recordings. If overwritten, those data are lost and a flag is raised in the integration's metadata, indicating that scans processed are not necessarily consecutive. This becomes important for the high-resolution stored data. Once the 10 -ms series blocks are pulled off the RAM buffer, a Fast Fourier Transform (FFT) is performed on the concatenated series to 12 MHz resolution, near the Nyquist limit. With the well-resolved contributions about the resonance centered at 10.7 MHz, a digital mixer centers a 30 kHz band and shifts these contributions down to begin at 0 Hz, removing contributions above 30 kHz with a low-pass filter. We now have drastically down sampled the coherent frequency-space data set of a 10 ms sample at a rate resolving the full 30 kHz. The high resolution time series data is formed by performing and inverse FFT on each 10 ms sample and concatenating the time series in order. The medium resolution (MR) power spectral density, or power spectrum, of the scan is calculated from the modulus squared of each 10 ms sample, averaged over the 100 second scan (∼ 10^4 samples). The power spectrum array then has its edge bins removed, producing the 256 bin wide form of the raw spectrum saved for the MR analysis. The width of each bin the MR spectrum is ≈ 100 Hz, the Nyquist limit for each 10 ms sample. It is in this power spectrum that the axion dark-matter signal would appear as a localized excess.§.§ Experiment Status MeasurementsThe warm receiver chain and digitizer are located within the larger structure of the ADMX DAQ system. A number of measurements are recorded by the DAQ to monitor the state of the insert, magnet, and ancillary cryogenics. Interpretation of these experimental conditions plays a critical role in performing the offline analysis. Here we briefly describe the measurement and interpretation of these experimental conditions. A more detailed account can be found in <cit.>.The data recorded from the experiment can be divided into periodically sampled experimental state information and radio-frequency measurements taken in the course of the axion search. The experimental state information consists of status readings from temperature, pressure, field, and current sensors.For temperatures above 1 K, an assortment of resistance sensors are used to read out temperatures at various thermal stages. For temperatures below 1 K, temperature sensors are sampled with a four-wire resistance measurement using a Lakeshore Alternating Current (AC) resistance bridge <cit.>. The temperatures of the cavity and quantum electronics package are measured using Cernox temperature sensors and the temperature of the mixing chamber mounted to the cavity is measured using a Ruthenium Oxide sensor <cit.>. 
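The averaging chain described above (10 ms blocks, ~100 Hz bins, ~10^4 blocks per 100 s scan) can be mocked up in a few lines. The sketch below is schematic only: it assumes white Gaussian voltage noise in the down-mixed band sampled at an assumed 60 kHz, skips the analog and digital mixing stages, and uses fewer blocks than a real scan so it runs quickly; it reproduces only the bookkeeping of the medium-resolution (MR) spectrum, i.e., a 256-bin, ~100 Hz-per-bin averaged power spectrum.

import numpy as np

fs, t_block, n_blocks = 60_000.0, 0.010, 1_000      # assumed sample rate; 10 ms blocks
n_samp = int(fs * t_block)                          # 600 samples -> 100 Hz bins
rng = np.random.default_rng(1)

spectra = []
for _ in range(n_blocks):
    v = rng.normal(0.0, 1.0, n_samp)                # one 10 ms voltage block (stand-in noise)
    p = np.abs(np.fft.rfft(v)) ** 2 / n_samp        # one-sided power spectrum of the block
    spectra.append(p)

avg = np.mean(spectra, axis=0)                      # averaged spectrum over the scan
mr = avg[1:257]                                     # drop DC and edge bins -> 256 MR bins
bin_width = fs / n_samp                             # = 100 Hz per bin
print(mr.shape, bin_width, mr.std() / mr.mean())    # fractional scatter ~ 1/sqrt(n_blocks)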
During operations, it was observed that the Cernox sensors had a large magneto-resistance at temperatures below 1 K. With the magnet ramped to 6.8 T, The Cernox temperature sensors on the main cavity read 70% higher temperatures compared to the magnet ramped down, while the Ruthenium Oxide temperature sensor on the mixing chamber increased by 2%. Thus, in Run 1A, the temperature of the cavity was read by the Ruthenium Oxide temperature sensor mounted to the mixing chamber. Because the quantum electronics package was kept in a field-free region, Cernox temperature sensors located on the package did not suffer any appreciable effects from the magnetic field, and were used to measure the physical temperature of the quantum amplifier. The main magnet state is captured by several sensors on the magnet's power supply as well as Hall probes that directly measure the magnetic field parallel to the probe wire, which are set in an azimuthal (vertical) orientation in areas of high magnetic field as well as in low-field/high-sensitivity areas like the so-called field-free region. The power supply monitors the voltage and amperage being fed to the main magnet at a sampling rate of a few minutes. As stated by the manufacturer, the peak magnetic field for an empty bore can be calculated from Eqn. <ref>, though minimal impact from insert materials and bucking coil are expected near the cavity. The Hall probes record the magnetic field with a period of about an hour. The magnetic field throughout the cavity may be modeled using the combined data and then used to compute the form factor. § DATA-TAKING OPERATIONSData-taking operations for Run 1A occurred between January 18 and June 11, 2017. During that time, the TM_010-like mode of the main cavity was tuned over the range of 645-680 MHz and observed for power excesses consistent with an axion DM signal. The identification and handling of candidates was also performed during this period. This section decomposes the data-taking process into its constituent parts and cadences down to the level of a single RF digitization, overviewed in the previous section. §.§ Overall Structure of data-taking The global imperative for data-taking in Run 1A was to cover the viable frequencies at the intersection of the cavity and receiver chain operational ranges. Once at sub-Kelvin temperatures, the MSA was established to be the limiting device in the receiver chain, providing the target gain over the range 645-680 MHz, quickly decaying for lower and higher frequencies. This range coincides with a continuous stretch of the TM_010-like cavity mode with high form factor C, high quality factor Q, and without significant mode crossings. The frequency range is accessed through tuning rod configurations where one rod is held at the wall and the other is free to turn, as seen in Fig. <ref>.The 35 MHz range is divided into four segments, observed chronologically as: 650-660 MHz; 660-670 MHz; 670-680 MHz; and 645-650 MHz. Each segment is individually scanned to DFSZ sensitivity, investigated for candidates, and cleared of candidates before moving onto the next segment. The range is modularized for faster turn-around in the case of a prominent candidate, and for feedback on the state of the MSA.The scan process of each segment is structured in a sequence of up-tuning and down-tuning sweeps of the TM_010-like mode over several weeks to provide uniform coverage. 
Multiple sweeps separated in time by days or weeks also provides an opportunity to sample possible periodic and transient behavior of the axion field. This cadence is only rarely broken in the case of a promising candidate. Once the scans were complete to DFSZ sensitivity according to the live analysis, the procedure transitioned into candidate handling. DFSZ sensitivity is established based on criteria covered in sections Sec. <ref>. The investigation of candidates and the handling decision tree was established in the Candidate Handling Protocol covered in Sec. <ref>. Once candidates for one segment are handled, the process begins again for the next segment.§.§ Cadence of a data-taking Cycle The process of scanning a segment is broken up into a smaller cadence called a data-taking cycle. The data-taking cycle lasts for approximately 150 seconds and includes operations of moving tuning rods, active cavity measurements, and the passive integration of RF chain emissions. The data-taking process at and below this level is automated and was often left in continuous and unhindered operation for days barring a necessary manual bias of the MSA.At the head of the data-taking cycle are several active measurements of the cavity and receiver chain using the VNA. The injected sweep signals are far more powerful than a potential axion signal or impinging RFI, making them easier to see, while still conforming to the receiver operating parameters optimized for the passive integrations. The first measurement is a swept transmission S_12 from the weak port through the receiver including the cold and warm amplifiers. This measurement shows the cavity response, yielding a measurement of mode frequencies and quality factors. S_12 measurements take approximately 10 seconds each. A wide-band transmission scan is made every 10-th scan cycle for mode mapping purposes. The transmission measurement is followed by a reflection measurement taking approximately 20 seconds, where the signal is sent through the bypass line and is directed towards the cavity by a circulator (C_1 in Fig. <ref>). The reflection sweep near the resonance is mostly absorbed by the cavity, while off resonance the signal is reflected and passes back through the cold and warm amplification of the RF system. This measurement yields a wide-band measurement of system gain for noise calibrations, the coupling between the antenna, and the cavity mode of interest, as well as the Q_L of the cavity mode of interest. Coupling of β=1 is consistent with critical, or impedance-matched, coupling. The duration of transmission and reflection measurements have been reduced to less than a second each in subsequent runs by optimizing input attenuation and power <cit.>. The receiver integrates emissions from the cold RF for the remainder of the data-taking cycle, which comes to an observation duty factor of approximately 0.7. At the end of the data-taking cycle, there are two mechanical tuning processes to prepare for the next integration. The tuning rate of the rod can be estimated given a target sensitivity and regional values for the system temperature, quality factor, etc., or can be set manually to a given number of steps per cycle. The tuning rate of the warm stepper motor was set between 0.1-0.2 radians per cycle during the first passes of a section's bandwidth, which translates to 1-2 kHz per cycle. The main port antenna is also given the opportunity to tune to alter the coupling to the main cavity. 
Note that rod and antenna tuning must occur slowly to avoid overtaxing the dilution refrigerator. §.§ Logging of Search Data Each data-taking cycle is stored as an independent entry in an SQL database on the DAQ's main control computer. Each cycle contains its own unique serial number and timestamp when the cycle is initiated. The integrated power spectrum and a collection of markers are added to the cycle entry to more easily identify the conditions surrounding the experiment. Each entry contains the sum total information necessary to perform an axion search. § DATA PREPARATION FOR AXION SEARCHWith data collected, preparations are then made to characterize the state of the experimental apparatus and assess the quality of the measurements to search for the axion. This includes the verification of the cavity magnetic field, characterization of the receiver chain as measured by its effective noise and gain properties and its persistent background structure. This section details how the measurements necessary to axion search are interpreted from their recorded state into actionable data. §.§ Direct Measurements Temperatures within the experiment are measured by arrays of sensors placed at every level of the insert and on the main magnet casing, both in areas of high magnetic field and in the field-free region. Voltage time series from the sensor leads are interpreted through EPICS and converted into temperature readouts. Sensors of differing makes and models were tested against one another in and out of the magnetic field in order to study the uncertainties and biases present during data-taking operations, as detailed in <cit.>. The errors in temperature sensors are reflected in Table <ref>, and are integrated in the final sensitivities and limits.The magnetic field in the main cavity is computed from the current supplied to the main magnet and confirmed by the Hall probes placed throughout the insert. The maximum magnetic field in the solenoid center is computed from the current via Eqn. <ref> and modeled in form by numerical simulation as indicated in <cit.>, including the counter field induced by the bucking coil surrounding the field-free region.Active transmission and reflection measurements taken during each data-taking cycle are analysed to assess the impedance match between the cavity and main port, to match the transmission function through the port to a Lorentzian distribution, and to extract the quality factor and mode central frequency. During Run 1A, the weak port was partially dislodged and became decoupled from the cavity during the insertion and commissioning process, marginalizing the effectiveness of the cavity transmission measurements. As a result, S_11 reflection measurements were used to assess coupling β and loaded quality factor Q_L of the TM_010-like cavity mode. Recall that the normalized reflection power spectrum is modeled by the form given in Eqn. <ref>. The logged S_11 measurement is fit to this form by a least-squares analysis of the swept spectrum and the parameter values are used to assess the loaded Q_L and coupling β of the cavity-main-port state, see Fig. <ref>.§.§ Axion Power in Cavity and Main Port Transmission The figures central to the depositionof axion power into the cavity and transmission into the receiver can now be computed. The power-per-axion density deposited in the cavity at a single frequency is given by< P_a > /< ρ_a >∝ C_010 ν  B_max^2   V   Q   T_ν_o (ν).The form factor of the TM_010-like mode C_010 is computed from Eqn. 
<ref> using numerical models of the magnetic field and mode electric field distributions <cit.>. The volume of the cavity is computed from <cit.> and is invariant to high precision during low temperature operations. The maximum magnetic field is computed from Eqn. <ref>, and has error calculated at the level of the power supply current stability. The numerical error of B_max^2 × C_010 × V is available in Table <ref>. The loaded quality factor Q_L is computed from the reflection measurement detailed in the previous sub-section and uses a rolling average and uncertainty from the previous 10 measurements. The portion of the power transmitted from the cavity into the main port and past the first low-pass filter is modulated by the coupling strength factor β and the low-pass filter transmission function T_filter: < P_port > = T_filter [β/(1 + β)] < P_cav >, where the filter is near-transparent at the Run 1A frequencies (T_filter ≈ 1). §.§ Receiver Gain and Noise The remainder of the receiver chain transmits the cavity output, but also imprints its own structure into the digitized power spectrum down to the scale ν_o/2 Q_L ∼ 10 kHz (see <cit.>), an order of magnitude wider than the expected axion signal width of ≲ 1 kHz. Directly modeling the total transmission function prior to Run 1A proved to be too unreliable due to high variability in the response of devices and strong inter-device couplings. This subsection analyzes the receiver transmission heuristically to characterize its gain structure and noise. The noise background is expected to be overwhelmingly thermal. Noise in the power spectrum is first contributed by the fluctuations about the mean photon occupation function of the cavity, < n_γ(ν) > = 1/(e^{h ν / k_B T} - 1), where k_B is the Boltzmann constant, T is the physical temperature of the thermal source, h is Planck's constant, and ν is the excitation frequency. The expected power spectrum per frequency of the cavity is derived from the mean energy spectrum < E(ν) > = -d log(Z)/d(1/k_B T) = h ν/2 + h ν/(e^{h ν / k_B T} - 1), where Z is the canonical partition function. Note that the first term of the mean energy spectrum is the vacuum contribution to the cavity energy and the second term is the contribution from non-trivial occupation. The emission of photons out of the cavity, ignoring reflections and attenuation for now, is then given by the free flow of power out of the strong port. The expected power spectrum per frequency is < d P_n(ν)/d ν > = < E(ν) > - E_vac(ν) = k_B T × (h ν / k_B T)/(e^{h ν / k_B T} - 1). In the limit of the photon energy much less than the bath temperature, h ν ≪ k_B T, the expected spectral density flattens and the power spectrum becomes < P_n(ν) > = k_B T b, where b is the integrated bandwidth and ν is taken as the center frequency. The distribution of power fluctuations can be found by looking at the occupation probability of individual states, given by the density function ρ_r(ν) = e^{-(r+1/2) h ν / k_B T}/Z, where r is the occupation number of the state(s) with frequency ν. The emission rate of photons over the bandwidth b is expected to be ∼ k_B T b/ h ν. This comes to ∼ 16 bandwidth-emissions per 10 ms sample for a typical Run 1A temperature of T=500 mK at a center frequency ν=660 MHz. Each short continuous δ t ≈ 10 ms integration (bandwidth b = 1/δ t ≈ 100 Hz) would then produce an MR emission power spectrum in an exponential distribution of similar shape to Eqn. <ref>.
For an exponential distribution the mean and the standard deviation coincide, λ = μ = σ, and the mean of the power spectrum is already known to be μ = < P_n > = k_B T b from Eqn. <ref>. The Δ t ≈ 100 second integration period of each data taking cycle is made up of n = Δ t/δ t ≈ 10^4 MR power spectra, which are considered identically distributed over that time scale and independent, as the integration time exceeds the timescale of power fluctuations, δ t > 1/(2b). Averaging those spectra will shift the distribution shape from an exponential to a normal shape in the large-n limit according to the central limit theorem, leaving the mean unchanged but re-scaling the standard deviation by a factor of 1/√(n). Therefore, we find the distribution of emitted power fluctuations to have standard deviation σ_P_n = k_B T b/√(n) = k_B T √(b/(n δ t)) = k_B T √(b/Δ t), which matches the Johnson spectra <cit.>. The width of the noise distribution is proportional to the thermal temperature, as is the mean thermal power. One can construct a bin-wise signal-to-noise ratio as a proxy for the sensitivity of the instrument to a signal of known power: SNR = [P_signal/(k_B T_sys b)] √(b Δ t) = (P_axion/< P_n >) √(b Δ t). The distribution of power fluctuations through the remainder of the receiver chain is expected to remain thermal, therefore a system noise temperature is used to characterize the net distribution of noise in the receiver. The details of computing the system temperature can be found in <cit.>, but we explain them briefly here as they will enter into the gain structure characterization in the next sub-section. Amplifiers and attenuators in the receiver chain modify their inputs, both signals and noise powers, as well as inject noise power according to their own blackbody spectrum. A simple composition model for the power emitted at the end of the chain is P_sys = ( ⋯ ( (P_cav + P_n_1) G_1 + P_n_2 ) G_2 ⋯ + P_n_M ) G_M, where P_n_i and G_i respectively are the noise power and the gain of the i-th component of the receiver. Assuming thermal power spectra at each stage (< P_n_i > = k_B T_n_i b), the total system noise can be computed as a temperature by dividing out the total gain of the receiver, G_T = G_1 G_2 ⋯ G_M: T_sys = T + T_1 + T_2/G_1 + T_3/(G_1 G_2) + ⋯. If the gains of the first components are large, then one can effectively truncate the series after the first several terms. Note that the contributions of the power spectrum fluctuations also follow this relation: σ_P_sys = σ_P_cav + σ_P_1 + σ_P_2/G_1 + σ_P_3/(G_1 G_2) + ⋯. The gain of the MSA varied with both its center frequency and physical temperature throughout the main data run, as did the HFET amplifier to a lesser degree, requiring periodic calibration to optimize the system temperature. Contributions from amplifiers and attenuators downstream of the HFET were measured to be effectively constant. Also during the run, the switch for the heated load malfunctioned, so the primary noise calibration came from an “on-off resonance” method, described below and in <cit.>. The physical temperature of the cavity in Run 1A (∼150 mK) was significantly different from the physical temperature of the milli-Kelvin electronics (∼300 mK). This difference implied that the relative thermal power on and off resonance encoded sufficient information to determine the system noise in the same way a heated load measurement does.
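The statistical claims above (exponential single-block fluctuations with μ = σ, an approximately Gaussian averaged spectrum, and a width shrinking as 1/√(n)) are easy to verify with a toy Monte Carlo. The temperature, bandwidth and integration time below are the typical Run 1A values quoted in the text; the simulation itself is purely illustrative.

import numpy as np

kB = 1.380649e-23
T, b, dt = 0.5, 100.0, 100.0                  # K, Hz, s (typical values from the text)
n = int(dt * b)                               # ~10^4 single-block spectra per 100 s scan
mu = kB * T * b                               # mean thermal power per bin, <P_n> = k_B T b

rng = np.random.default_rng(2)
blocks = rng.exponential(mu, size=(n, 256))   # exponential per-block bin powers (mu = sigma)
scan = blocks.mean(axis=0)                    # averaged scan spectrum

print(scan.mean() / mu)                            # ~1: mean unchanged by averaging
print(scan.std() / (mu / np.sqrt(n)))              # ~1: width re-scaled by 1/sqrt(n)
print(mu / np.sqrt(n), kB * T * np.sqrt(b / dt))   # the two radiometer forms agree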
The method for calibrating T_sys during Run 1A then became: * Use a “hot-cold” measurement as the primary calibration to determine T_sys absolutely at a few fixed frequencies, as well as the noise contributions from the RF components downstream of the MSA* Use a faster “SNR-Improvement” measurement to determine T_sys on every digitizationThe ideal hot-cold method measures the power at the receiver in a bandwidth of multiple Q-widths about the resonant frequency of the cavity while the antenna is critically coupled. The powers measured on and off resonance are then compared to determine the noise contributions of the MSA amplifier from:R = T_attenuator + T_amps/T_cavity + T_amps where T_attenuator is the physical temperature of attenuator A as seen in Fig. <ref>, T_cavity is the physical temperature of the cavity, T_amps is the noise contribution from all the electronics of the receiver chain, and R is the ratio of power off-resonance to the power on-resonance. Temperatures for the cavity and attenuator were taken from recordings of the Ruthenium Oxide temperature sensors closest to it <cit.>. The R factor can be determined by a model of the cold RF system and fitting a model to the power spectrum as a function of frequency, discussed in <cit.>.Parameters had to be changed slightly to accommodate the typical conditions for thisin-situ measurement for Run 1A. Off-resonance noise power is dominated by the attentuator (physical temperature typically ∼ 300 mK) and also contains a contribution from the receiver noise temperature. On-resonance noise power is the sum of the cavity (physical temperature typically ∼ 150 mK) and the receiver. The transition between the two creates a dip of order 20% in power seen at the cavity resonance in Fig. <ref>. This measurement yielded a system noise temperature typically of ∼ 500 mK, but was found to vary substantially throughout the run due to different gains of the MSA, which required frequent re-optimization as discussed in <cit.>.§.§ Receiver Shape Removal As was seen in the previous sub-section, the spectra from a well-equilibrated receiver chain sans persistent signal is expected to be flat, characterized by a single system temperature. This was rarely the case in practice for Run 1A. There are multiple causes of a non-flat power spectrum by experimental hardware, including frequency dependant gain variations before and after mixing down the target frequency, and frequency-dependant noise variations. The last of these is expected to be suppressed as the early receiver chain is nearly homogeneous and stable in temperature over a data taking cycle. It is crucial to know the structure of the gain so that it is not confused with potential axion signals. Removing the structure and flattening the receiver power spectrum allows for a more straightforward interpretation of the data as Johnson spectra dominant. The flattened spectra also has computational advantages for the analysis as the gain response of an incoming axion signal is similarly flattened, producing a convolutional filter form to the optimal signal search, a topic that will be covered in Section <ref>. The operational goal of receiver shape removal is to completely remove the frequency dependant receiver chain response while retaining potential signals and thermal components.Finding the true gain response of this complex system is a difficult task to perform from status measurements of components alone, and has met with limited success <cit.>. 
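Returning to the on-off resonance calibration above, solving R = (T_attenuator + T_amps)/(T_cavity + T_amps) for the receiver contribution gives T_amps = (R T_cavity - T_attenuator)/(1 - R). The sketch below plugs in the physical temperatures quoted in the text (∼300 mK attenuator, ∼150 mK cavity) and an assumed on-resonance dip of 20% (the text only says "of order 20%"); the resulting on-resonance system temperature T_cavity + T_amps comes out broadly consistent with the ∼500 mK figure quoted.

# Invert the on-off resonance relation for the receiver noise contribution.
T_att, T_cav = 0.300, 0.150          # K, physical temperatures quoted in the text
dip = 0.20                           # assumed fractional power drop on resonance
R = 1.0 / (1.0 - dip)                # off-resonance / on-resonance power ratio

T_amps = (R * T_cav - T_att) / (1.0 - R)
T_sys_on = T_cav + T_amps            # on-resonance system temperature
print(T_amps, T_sys_on)              # ~0.45 K and ~0.6 K, comparable to the ~500 mK quoted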
Techniques rooted in purely heuristic fitting of the power spectra have had far greater success <cit.>. Run 1A used a Savitzky-Golay (SG) filter and a low-order polynomial fit for the off-line and live analyses respectively. §.§.§ Polynomial Filter A sixth-order polynomial is used by a live analysis process run in parallel with data-taking operations to fit the receiver structure, for computational speed. The live analysis is responsible for providing a real-time feed of the experiment's sensitivity and for identifying potential candidates. The polynomial fit works well for spectra that are already nearly flat, but its quality degrades significantly over spectra containing larger and more complicated variations, as seen in Fig. <ref>. There were multiple periods during Run 1A, either due to the operating state of the MSA or other elements, where the receiver structure was highly perturbed. A low-order polynomial fit does a poor job of conforming to large scale fluctuations at multiple harmonics <cit.>. This shortcoming resulted in some background-subtracted scans with residual structure above the noise floor, resulting in an excess of axion candidates that impacted rescan operations. The candidate procedure will be discussed more in Sec. <ref>. §.§.§ Savitzky-Golay Filter The SG filter was used to fit the receiver structure after data-taking operations were complete to conduct the analysis that provided the limits in <cit.>. The SG filter proved to be much more versatile over the range of structures experienced in Run 1A, as seen in Fig. <ref>. The parameters used for the SG filter were d = 4 for the spline order and L = 121 for the size of the box window function. These values were chosen as they produced adequate fits to a wide range of observed backgrounds with minimal impact on potential axion signal shapes. The general principle is to choose filter parameters that fit typical background shapes well but are unable to fit signal shapes well; in practice, the parameters were tuned to minimize the inefficiency (as measured on synthetic signals) introduced by background subtraction. The SG filter has been found to attenuate axion signals and imprint small negative correlations between processed spectrum bins <cit.>. These anti-correlations become problematic for scans repeated over the same frequency range under the same conditions, such as may occur for a re-scan, as seen in Fig. <ref>. Accumulation of anti-correlations will occur very rarely under normal scanning operations. No new candidates were produced under the SG filter. §.§.§ Preparation of a Spectrum We may now prepare the fitted scan for the axion search. Using the findings of the previous sub-section, we can see that dividing the power spectrum by the background shape produces a dimensionless mean-normalized spectrum O_p = P_spec/P_fit, with unit mean and, in the absence of residual background structure, Johnson-distributed fluctuations with uniform variance over the entire frequency band.
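A minimal version of the SG background fit and the mean-normalization described here (with the excess spectrum δ defined just below) can be written with scipy's savgol_filter, using the window length (121 bins) and order (4) quoted above. The synthetic background shape and noise level are illustrative only; this is not the collaboration's analysis code.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
n_bins = 256
x = np.linspace(-1.0, 1.0, n_bins)

# Synthetic raw MR spectrum: a smooth receiver shape times small thermal fluctuations.
receiver_shape = 1.0 + 0.3 * x + 0.2 * np.cos(3.0 * x)       # illustrative gain structure
p_spec = receiver_shape * (1.0 + 0.01 * rng.standard_normal(n_bins))

# SG fit to the background: window L = 121 bins, order d = 4 (values from the text).
p_fit = savgol_filter(p_spec, window_length=121, polyorder=4)

o_p = p_spec / p_fit           # dimensionless mean-normalized spectrum, <O_p> ~ 1
delta = o_p - 1.0              # fractional power excess used in the axion search
print(delta.mean(), delta.std())   # mean ~ 0, width ~ the input fractional noise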
We are only concerned with power excesses above the mean, as the axion signals sought are of narrow bandwidth, making the relevant spectrum δ = O_p - 1. The dimensionless fluctuations follow a random normal distribution with fractional width relative to the mean calculated to be σ_δ = σ_P_n/⟨ P_n ⟩ = 1/√(b Δ t), which is expected to apply across the spectrum as long as the receiver components up to the first-stage amplifier are in thermal equilibrium and the gain structure of the receiver thereafter is much wider than the scan bandwidth. This means that one can compare the fluctuation statistics across an MR scan, and between MR scans so long as the scan times match. The dimensionless fluctuations are overwhelmingly normally distributed, as seen in Fig. <ref>. To recover power excesses in physical units, recall that the mean of O_p is identified with the average thermal power out, P̅ = k_B T_sys b. Multiplying the fluctuations by the mean power produces the normalized fluctuation power <cit.> δ_w = δ×P̅. These are the power fluctuations against which the search for axion emissions from the cavity is performed. The SG filter was able to fit the background well enough to keep the noise width within a factor of two of the theoretical expectation of Eqn. <ref> for 96.6% of usable Run 1A scans, compared to 90.6% for the polynomial filter, providing a more complete sampling of the noise distribution, which was found to be overwhelmingly Gaussian in shape, as seen in Fig. <ref>.
§.§ Errors and Uncertainties
This sub-section provides the systematic uncertainties for parameters used in the analysis, summarized in Table <ref>. First, the uncertainty on the quality factor was quantified by repeatedly measuring the quality factor in a narrow range of frequencies, in this case from 645-647 MHz, where the quality factor was not expected to change much according to models. The fractional uncertainty on the quality factor in this range was determined to be 2.2%. The fractional uncertainty on the main port coupling was also computed over the same frequency range, and was determined to translate to a transmitted power uncertainty of 0.55%. The uncertainties of the individual temperature sensors above 1 K were taken from their data sheets, and those of the sensors below the 1 K stage were computed as indicated in Sec. <ref> and <cit.>. The combined error on the system temperature T_sys is calculated at 7.1% by adding in quadrature the individual error contributions from the components in Eqn. <ref>, and is dominated by the cavity (Scientific Instruments RO600 Ruthenium-Oxide sensor on the mixing chamber) and the first-stage amplifier (Lakeshore CX-1010 Cernox sensor on the squidadel). The uncertainty of the receiver noise temperature, and therefore of the system noise, was primarily given by two contributions: the uncertainty in the fit of the HFET noise and the uncertainty in the fit of the MSA noise. The total fractional uncertainty for the system noise amounted to 7.5%. Further details can be found in <cit.> and the supplementary material of <cit.>. The final systematic uncertainty of ± 13%, as shown in Table <ref>, was computed by adding all listed uncertainties in quadrature, as the errors are assumed to be independent of one another.
§.§ Data Cuts
Not all data taken during Run 1A operations are qualified to enter the axion search. Scans taken as part of
* receiver studies,
* SAG tests,
* un-subtract-able RFI,
* mode navigation errors,
* abnormal cryogenic conditions,
* and incomplete logs
are flagged for omission.
These flags are either explicitly written to the scan or logged by hand in an electronic log that is updated throughout experimental operations, upgrades, and day-to-day upkeep. Further cuts to the data were made during both the live and off-line analyses due to derived knowledge of the experiment being in a poor, or poorly understood, state. This subsection presents the conditions used to cut data from the axion search analysis and their impact on the number of viable scans. The Run 1A data set consisted of a total of 173,048 scans and raw integrated power spectra: 138,680 for preliminary scans and 34,396 for leveling and rescans. After implementing the analysis cuts described above and itemized in Table <ref>, 78,958 scans remained for the axion search analysis. The motivation for these cuts was as follows. First, quality factors lower than 10,000 or greater than 70,000 were omitted from the analysis, as they were determined to be either compromised by a mode crossing or non-physical and the result of a poor fit to the reflection scan. System noise temperatures below 0.2 K and above 5.0 K were excluded, as these were likely a poor fit by the SNRI or other temperature-fitting mechanism described above; temperatures below the lowest physical temperature in the experiment, 0.15 K, are flagged as non-physical. Additionally, the SG fit to the power spectrum background in the offline analysis was required to have a fractional standard deviation relative to the radiometer equation between 0.5-1.2 among the 95% least deviant points in the spectrum. This proved sufficient to reject poor fits while retaining potential axion signals.
§ POWER EXCESS SEARCH
This section presents the statistical method used to search for persistent signals such as the axion in the MR integrated power spectra. Likelihood functions for power excesses in a scan, the axion signal, and conditions of the apparatus are first established, followed by searches on single data-taking cycles, which are then combined into a “grand search spectrum”.
§.§ Statistical Modeling
It was established in the previous section that an integrated spectrum's background noise is dominated by a thermal spectrum from the receiver chain, where the receiver is expected to be in local equilibrium over the course of a single data-taking cycle. Under the integration and sampling rates set in the MR data, as discussed in Section <ref>, the ∼ 10^4-sampled Poisson-distributed thermal background power spectra, when co-added, ideally form a raw power spectrum with a random normal distribution of mean power set by the product of the system temperature and the receiver's total gain, T_sys G_tot,ν, and distribution width σ = μ / √(b Δ t). Once gain structures and the mean power have been removed, the remainder of the spectrum optimally consists only of the random-normal thermal fluctuations and externally induced power excesses. The signal from the local axion field, broadly speaking, would present itself as a power excess of magnitude set in Eqn. <ref> and an expected Q-width Q_a ∼ 10^6 set by the virial velocity of the Milky Way. Note that the axion signal is present as a power excess as opposed to a deficiency, as the conversion of a thermal photon to an axion via an inverse-Primakoff or other process is suppressed by an additional factor of the ratio between the local photon and axion occupations, n_γ/n_a.
This implies that the statistical search for an axion signal can be conducted straightforwardly as a one-sided p-value test with underlying random normal uncertainties. The likelihood function for an axion model hypothesis test is then taken to be ℒ( H_a | D_s) ∝∏_i=1^N exp{ -S(a^i|s^i) / λ_d^i}/√(πλ_d^i), where H_a is the axion model hypothesis, D_s is the data set, S is the action of the data and model hypothesis, λ is the data uncertainty measure, i is the index over observations, and N is the number of observations. It has already been shown that the integrated power spectra observations are nearly bin-wise independent in their noise background. The action of the data and model hypothesis is taken over the power outputs from the cavity, S(a^i|s^i) = ( P_a^i - P_s^i )^2, where P_a is the expected transmitted axion power given by Eqn. <ref> and P_s is the recorded power spectrum. The expected axion power density P_a depends on the transmission function of the cavity mode T_ν_0, the cavity coupling β and quality factor Q_L, the magnetic field B, the observed frequency ν, the local axion density ρ_a, and the overall shape of the axion frequency distribution function above the rest mass f(ν - ν_a). Note that the variance about the axion power is essentially vanishing in comparison to the expected power spectrum due to the high occupation values in the relic axion condensate, a topic that is discussed in Sections <ref>, <ref>. The action is modulated by an uncertainty measure λ_d^i = 2 ( σ_P_i)^2, where σ_P is the uncertainty in the recorded power spectrum and is constant over the scan. The uncertainty in a scan's power, σ_P = k_B T_sys b/√(b Δ t), is given in terms of the expected noise power uncertainty as computed in Eqn. <ref>. The action and uncertainty measures form a χ^2-like kernel for the likelihood, which takes the form χ^2 = ∑_i^N (δ_w, i - T_ν_0, i ⟨ P ⟩_tot p_a,i)^2/2σ_P_i^2, where i is the index over bins in the digitized spectra, T_ν is the cavity mode transmission shape, ⟨ P ⟩_tot is the total power of the model axion signal, and p_a is the probability distribution function of relic axions. To better track the coupling parameter, let us expand the model axion power as a fraction of the DFSZ benchmark model, ⟨ P ⟩_tot = A × P_DFSZ, where A is the fraction of the model's power relative to the benchmark and P_DFSZ is computed using Eqn. <ref> sans the mode transmission shape. Refactoring the χ^2 figure in powers of A gives χ^2 = ∑_i^N (δ_w, i - A P_DFSZ T_ν_0, i p_a,i)^2/2σ_P_i^2 = ∑_i δ_w, i^2/2 σ^2_P_i - 2A P_DFSZ ∑_i T_ν_0,i p_a,i δ_w, i/2 σ^2_P_i + A^2 P_DFSZ^2 ∑_i T_ν_0,i^2 p_a,i^2 /2 σ^2_P_i, where the zeroth, first, and second power terms may be referred to as χ^2_A^0, χ^2_A^1, and χ^2_A^2, respectively. There are several priors that need to be declared in order to parameterize the space of incident conditions and hypotheses to be tested. This analysis operates under the following assumptions:
* Relic axions dominate the local axion distribution.
* Relic axions make up 100% of the DM.
* The likelihood of the axion rest mass m_a is uniform over the Run 1A frequency range.
* The likelihood of the axion-photon coupling g_a γγ is uniform over all values covered by this search.
* Incident conditions of the experiment state are random-normally distributed about a data-taking cycle.
The parameters of interest for the posterior distribution function are the intrinsic axion mass and the axion-photon coupling strength.
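For concreteness, the three terms of the χ^2 expansion above can be sketched as follows (an illustrative sketch, not the production analysis code; all inputs are per-bin arrays for a single scan, and the function name is ours):

import numpy as np

def chi2_terms(delta_w, T_nu, p_a, sigma_P, P_dfsz):
    # delta_w: normalized fluctuation power; T_nu: cavity mode transmission shape;
    # p_a: axion lineshape PDF per bin; sigma_P: per-bin power uncertainty.
    w = 1.0 / (2.0 * sigma_P**2)
    chi2_A0 = np.sum(w * delta_w**2)                            # A^0 (data-only) term
    chi2_A1 = 2.0 * P_dfsz * np.sum(w * T_nu * p_a * delta_w)   # magnitude of the A^1 term
    chi2_A2 = P_dfsz**2 * np.sum(w * (T_nu * p_a)**2)           # A^2 term
    return chi2_A0, chi2_A1, chi2_A2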
One can marginalize over all other parameters above to reduce the likelihood function to be over only the parameters of interest. The tests here are organized first over axion masses, then the local distribution of the axion field, and lastly coupling strength. Using axion power as proxy for the coupling, the posterior probability function for a given axion mass and local distribution reduces to a Gaussian with mean estimated byμ_A = E[A] = χ^2_A^1/2 χ^2_A^0 =∑_iT_ν_0,i p_a, iδ_w, i/2 σ^2_δw, i/P_DFSZ∑_jT_ν_0,j^2 p_a, j^2 /2 σ^2_P_j,and widthσ_A = 1/√(2 δχ^2),whereδχ^2= χ^2(μ_A+1) - χ^2(μ_A) = (2 μ_A + 1) P_DFSZ^2 ∑_iT_ν_0, i^2 p_a, i^2 /2 σ^2_P_i- 2 P_DFSZ∑_iT_ν_0, i p_a, iδ_w, i/2 σ^2_Pi. The expected value of the axion power given an instance of data is computed from the posterior mean μ_A. The sensitivity of the data to power excursions of the specified shape is set by the posterior width σ_A. The expected power's deviation from the null hypothesis of no axion (P_a = 0) in units of the distribution width gives a measure of the significance for the potential fitΣ_A = μ_A/σ_Aand is the metric used to determine candidacy of axion signals. One may also construct confidence intervals over the posterior distribution from a data instance, though the significance of such statistics will not be made in this section. In Section <ref> we cover the procedure of identifying candidates and delineating their causes from statistical fluctuation, to RFI, to the relic axion field. The creation of limits after candidates are deferred to Section <ref>.The next sub-section computes the above statistics on a single data cycle.§.§ Single Scan Analysis A single data-taking cycle is the natural choice over which to perform the axion search analysis outlined above. This subsection provides the computational details in searching for the axion in each RF integration. The following subsection presents the means by which the analyses are combined, extending the search over the entire scanned range.The axion mass parameter is used to serialize the tests on a single scan. Axion masses are only sensible to be tested if there is an overlap between the power spectrum and the assumed axion distribution line shape, so we restrict ourselves to masses less than the upper-most edge of the scan frequency range (ν_a < ν_max) and greater than the lower-most edge of the scan frequency range minus the width of the 99 % axion line shape (ν_a > ν_min - Δν_99 %). The mass tests may be formed anywhere along this continuous segment of parameter space, though the utility of sampling below the MR bandwidth is marginal. The sampling of masses in the search for candidates is performed on the resolution of the MR bins, ∼ 100 Hz, with a starting point of 645 MHz. The final limits in Section <ref> will be averaged over many adjacent mass samples. Now consider how each of the terms in Eqn. <ref> are calculated. The first term involves only the scan data and need be calculated only once for each scan regardless of the hypothesized axion mass. Its computation scales as the number of bins in the scan. The other two terms of the test statistic contain factors of the transmitted axion lineshape into the scan and are refactored for more efficient calculation over multiple axion masses. 
For both terms, the Lorentzian cavity transmission function T_ν_o is factored out of the emitted axion power P_emitted = T_ν_o P_a and is instead used to modulate the noise error estimate σ_w = σ/ T_ν_o and the power spectrum P_w = P_emitted/T_ν_o so that they now vary over the spectrum, see Fig. <ref>. The bandwidth of each digitized power spectrum is on the order of the Q-width of the TM_010-like mode, less than 1:10^4 of the central frequency. It is by this same fraction that the supposed axion line shape will change its width across mass tests spanned by a single scan. To reduce the computational costs of generating a unique axion line shape for each probed mass, the line shape is generated once for the scan's central frequency (p_a,i = ∫_b_i d ν f(ν_a - ν)) and is approximated as translationally invariant over the scan. The factor of frequency in the axion power equation is also held constant at the scan's center point. The second term of Eqn. <ref> that computes the inner product of the power spectrum and line shape can now be phrased as a discretized convolution. Given proper zero-padding to the line shape and power spectrum, the intersection can be computed over the whole scan using the (discretized) convolution theoremp_a * ( T_ν_0δ_w/σ^2_P) = ℱ^-1( ℱ( p_a) ·ℱ( T_ν_0δ_w/σ^2_P) ),where the Fourier transform is discrete and implemented using the fast Fourier algorithm.The third term of Eqn. <ref> is the overlap of the squared axion line shape with the power spectrum's modulated error, and can also be phrased as a discrete convolution, using similar zero-padding as the second term,p^2_a * ( T_ν_0^2/σ^2_P) = ℱ^-1( ℱ( p^2_a) ·ℱ( T_ν_0^2 /σ^2_P) ) .Now one can compute each of the statistics in the previous sub-section to compute quantities such as the fit significance of axion hypotheses, Fig. <ref>. This use of convolutions to evaluate the data's senstivity to a given axion signal shape allows one to think of the search technique as an optimal filter.§.§ Grand Spectrum This sub-section integrates the statistics of single scan analyses into a grand spectrum of tests. This will be accomplished by deriving the arithmetic rules for the combination of single scan statistics for each figure of interest, then applying them to the set of viable scans.Recall that the independence of each measurement allows the total likelihood function of Eqn. <ref> statistic to be decomposed as a product of single scan likelihoods. The total χ^2 statistic therefore can also be decomposed into a sum of single scansχ^2_tot = ∑_s ∈ N_sχ_s^2where N_s is the set of viable scans. Further, the decomposition of the statistic in powers of the axion signal strength also factorize toχ^2_tot = ∑_s ∈ N_s∑_i_s ∈ sδ_w, i_s^2/2 σ^2_P_i_s - 2 A P_DFSZ∑_s ∈ N_s∑_i_s ∈ sT_ν_0,i_s p_a, i_sδ_w, i_s/2 σ^2_P_i_s + A^2 P_DFSZ^2∑_s ∈ N_s∑_i_s ∈ sT_ν_0,i_s^2 p_a, i_s^2 /2 σ^2_P_i_s,where the i_s index runs over the bins of scan s. Recall that we have organized all the tests according to prescribed axion rest masses and not bin frequency, minimizing the complications from scans with mismatched ranges and bin boundaries.The expectation and uncertainty statistics of interest to the grand spectrum are then seen to have the following addition rulesE(A)_tot = ∑_s ∈ N_s h_s E(A_s),σ(A)_tot = 1/√(∑_s ∈ N_s1/σ_A_s^2),where h_s are the expectation value weights given byh_s = χ^2_A^0, s/∑_k ∈ N_kχ^2_A^0, k.One can also subtract scans from the set by reversing the sign of the arithmetic operation. 
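As an illustration (not the production pipeline), the convolution-theorem evaluation of the per-scan overlaps and the grand-spectrum combination of single-scan statistics can be sketched as follows; the function names are ours:

import numpy as np

def fft_overlap(lineshape, weighted_spectrum):
    # Linear convolution of the zero-padded lineshape with a weighted spectrum via FFTs,
    # giving the overlap for every trial rest mass in the scan at once.
    n = len(lineshape) + len(weighted_spectrum) - 1
    return np.fft.irfft(np.fft.rfft(lineshape, n) * np.fft.rfft(weighted_spectrum, n), n)

# A^1-term overlaps: fft_overlap(p_a,    T_nu * delta_w / sigma_P**2)
# A^2-term overlaps: fft_overlap(p_a**2, T_nu**2 / sigma_P**2)

def combine_scans(mu_A, sigma_A, chi2_A0):
    # Grand-spectrum combination of single-scan statistics for one trial axion mass,
    # following the addition rules above; inputs are arrays over the set of viable scans.
    h = chi2_A0 / np.sum(chi2_A0)                        # expectation-value weights h_s
    mu_tot = np.sum(h * mu_A)
    sigma_tot = 1.0 / np.sqrt(np.sum(1.0 / sigma_A**2))
    return mu_tot, sigma_tot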
These are used to identify candidates, and ultimately provide the exclusion limits in Sections <ref> and <ref>.§ AXION SIGNAL MODELSAn axion signal emitted from the cavity is stimulated from the oscillations in the axion field passing through during the integration phase of a data-taking cycle. This section presents the models used to emulate the signal generated by the ambient axion dark-matter distribution. This analysis takes a similar approach to that of <cit.> for modeling the response from a classical axion distribution, where the net axion field passing through the cavity during the observation is taken as a superposition of plane waves, which is considered accurate for a classical field on distances much shorter than the curvatures of an axion's orbit. The axion field inside the cavity is nearly homogeneous as its extent is on the order of the axion's Compton length, much less than the de Broglie length that sets the spatial fluctuations. The total axion field in the cavity is then a(t) = √(2 ρ_DM/ N_a)/m_a∑_i^N_acos( E_i t + ϕ_i )where ρ_DM is the local dark-matter density, N_a = 𝒪 (Vρ_DM/ m_a) is the number of axions passing through the cavity at any one time, E_i is the energy of the i-th axion as measured from the cavity's rest frame, and ϕ_i is the phase of the wave at the start of observation (t=0). Most axions passing through the cavity will be bound to the Milky Way halo, having speed less than the escape speed of ≈ 560 km/s relative to the galactic center <cit.>. The motion of the cavity, also in orbit, has been measured to be of speed 230 ± 5 km/s <cit.> co-rotating with spin of the Milky Way. The motion of the majority of axions is therefore of the order ∼ 10^-3 c and the energy of each axion is well approximated by the lowest order kinematic expansion of the energy E = m_a + m_a Δ v^2/2 + O(Δ v^4/c^4) where Δ v is the relative speed of the axion to the cavity.The modulus squared of the axion field produces the power spectrum of the axion field as used in Eqn. <ref> is then found to be|a(ν)|^2 = 2 ρ_DM/m_a^2 f_DM(δν)where δν = ν - ν_0 is the frequency relative to the rest mass, and f_DM(δν) is the local frequency distribution function of the axion halo. Several distribution functions were used in the search for axions during Run 1A, which are detailed in the following subsections. §.§ Top-Hat Model (Live Analysis) The first and simplest distribution model is a top-hat function the width of seven MR bins (∼ 700 Hz). This model roughly reproduces the width of the local axion distribution, which at 650 MHz is expected to have an overall width in proportion to the expected virialaized speed squared w ≈ 650  MHz< v^2 >/2 c^2 ∼ 700 Hz, and is the most robust of the models to detect power excesses at the expected width. The top-hat model is used only during the live analysis that occurs in parallel to data-taking operations, informing on the overall health of data and identifying axion candidates. §.§ Isotropic Isothermal Sphere The standard halo model (SHM) distribution is the most common shape used by axion searches and direct dark-matter searches in general. The SHM is based on the assumption that the MW halo is given by a thermalized pressure-less self-gravitating sphere of particles. 
More specifically, we use the truncated isothermal sphere model <cit.>, which is constructed with finite mass and has a cutoff at the halo escape speed.In the frame of the galactic center, the velocity distribution at the solar radius takes on the near-Maxwell-Boltzmann formf_v(v⃗)∝ {[ 4 π (1/σ^2 π)^3/2 v^2 e^-v^2/σ^2 0 ≤ |v| ≤ 560 km/s;0, ].where σ is the dark matter velocity dispersion. The approximation of a full Maxwell-Boltzmann distribution is adequate for the MR analysis and retains an analytic form when boosted from the galactic frame by v⃗_⃗l⃗a⃗b⃗ into the cavity framef_v(v⃗) ≈2 ħ c^2/√((2 πσ^2) M_rest v_lab)×sinh(v⃗·v⃗_⃗l⃗a⃗b⃗/2 σ^2) × e^(-(v^2 + v_lab^2)/2 σ^2)where v⃗_lab is the velocity of the lab relative to the galactic center. The motion of the lab during Run 1A was set to the orbital velocity of the sun around the galactic center. One could also incorporate the shifting motions of the Earth orbiting about the Sun and the Earth's spin, however these changes would only impact the overall width of the signal by less than 10% <cit.>, and therefore were ignored during the analysis. §.§ N-Body Model The third line shape used in the Run 1A analysis is rooted in the highly detailed Romulus25 cosmological simulation <cit.>, where the halos of MW-like galaxies were found to be notably denser and have narrower line shape when sampled from the reference of a Sun-like orbit. The signal shape generated takes a Maxwell-Boltzmann-like form parameterized by three constants. The N-body inspired line shape takes the formf_ν∝( (ν - ν_0)h/m_a T)^α e^-( (ν - ν_0)h/m_a T)^βwhere ν_0 is the rest frame frequency, the exponent parameters are set to α = 0.36, β = 1.39, and the distribution temperature is given by T = 4.7 × 10^-7. This shape has a width that is 1.8 times narrower than the SHM line shape. The Romulus25 analysis also revealed a local expected DM density of ρ_DM≈ 0.6 GeV/cc, a notable increase from 0.45 GeV/cc used in previous ADMX analyses.§ CANDIDATE HANDLINGThe processing and classification of candidate signals provides the means to claim detection of axion dark matter or to place limits on its mass and coupling. This section details the protocol used to identify, test, and re-test candidates through several filtering mechanisms unique to axion-photon conversion to robustly classify power excesses observed in the MR data. The decision protocol used for candidate handling in Run 1A is summarized in the decision tree of Fig. <ref>.§.§ Live Analysis A data analysis was run in parallel to data-taking operations, referred to as the live analysis, to inform the operators on the in situ sensitivity of the data and to identify candidate axion signals. The live analysis operates on MR data under the chosen options to model the raw scan background using the six-order polynomial, and filters the prepared spectra using the top-hat signal model. These options were chosen for their low computational cost and robustness. Also, axion mass tests were made more sparsely to further reduce the computational cost, with separation of seven MR bin widths to reflect the size of the top-hat signal filter.The significance statistic of Eqn. <ref> was used to search for candidate axion masses. The initial set of candidates were identified with a threshold of +3 σ, over which a particular mass test is considered to be a candidate. The expected number of candidates per 10 MHz segment in this mass range is ≈ 15. 
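The quoted rate of statistical candidates is consistent in order of magnitude with the one-sided Gaussian tail probability times the number of mass tests; a back-of-the-envelope check (the MR bin width used below is approximate, not a Run 1A configuration value) is:

from scipy.stats import norm

segment_width = 10e6            # Hz
test_spacing = 7 * 100.0        # live-analysis mass tests spaced by seven ~100 Hz MR bins
n_tests = segment_width / test_spacing
expected = n_tests * norm.sf(3.0)   # one-sided P(Z > 3) ~ 1.35e-3
# `expected` comes out of order 20, i.e. the same order as the ~15 candidates quoted above.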
During the candidate handling procedure, an excess of persistent and non-persistent candidates were found in segment 3. This excess of non-persistent candidates was found to be the result of inadequate background modeling by the six-order polynomial fit in the presence of significant background structure. Residual background structure larger than the noise floor were seen as power excesses (and deficits) by the analysis with enough regularity to skew the rate at which candidates were identified using a prescribed threshold. Also, a deficit of candidates was found in segment 1 by the live analysis. Possible systematic errors such as errant background subtraction or poorly fitted cavity Lorentzian were investigated but not found be the cause. §.§ Candidate Procedure Once candidates masses in a segment are identified, a targeted re-scan of the immediate area around each candidate frequency is conducted to the same sensitivity as the initial observations. The re-scan data was again analysed using the live analysis procedure to test for persistence using the same +3 σ threshold. Candidates that did not persist were classified as random thermal variations. Persistent candidates were again rescanned, but to an SNR 5/3 that of the initial scans, corresponding to a slowdown in scan speed of 9/25. The re-scans are analysed for persistence using the same live analysis procedure with an updated threshold of 5 σ.Candidate masses that do not pass this threshold are classified as random thermal variations and removed from further consideration. No thermal candidates are expected to pass this second rescan stage. Remaining candidates are put through further persistence tests that also test for axion and dark-matter specific qualities. The first process is a further rescan of the candidate, again at slowed rate, and a test not only of the signal's persistence over the rescan but also of the signal's response to the Lorentzian envelope of the coupled mode T_ν_0× |a|^2, which confirms that the cavity is the origin of the power excess. The test is performed using the live analysis procedure, checking for significance that scales like 1/T_ν_0. Persistent candidates are also tested at this point for a shape that matches either a Maxwellian distribution such as described in the previous section or a non-thermalized shape with dominant fine structure such as those proposed in <cit.>. This is performed both with the MR data as well as the full time series, mixed to resolution at the Hz level. Given the large range of proposed signals, few candidates are ruled out by this step unless they are un-physically wide.The penultimate test in the candidate protocol is an RFI search at the remaining candidate frequencies. Using an exterior antenna and signal analyser, ambient signals are searched for about the ADMX insert and DAQ. Candidates found to have a counterpart external RF signal at the same frequency and shape, or at intermediate frequencies of the receiver, are classified as RFI.The final test in the Run 1A candidate handling protocol is a ramping of the main magnet while taking data about a single persistent candidate. The data is then analyzed for a ∝ B^2 response in the emitted signal power particular to the inverse-Primakoff process. A signal exhibiting this response is categorized as a robust candidate for axion dark matter. Table <ref> shows the number of candidates for each segment at each stage of the decision tree. No candidates survived past the fourth round. 
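A highly simplified sketch of the persistence stages of this protocol (thresholds as quoted above; the cavity-Lorentzian, lineshape, RFI, and magnet-ramp tests are not modeled, and the function name is ours) is:

def survives_persistence(sig_initial, sig_rescan1, sig_rescan2):
    # Significances Sigma_A of a candidate mass in the initial scan and the two rescans.
    if sig_initial < 3.0:        # below the initial +3 sigma threshold: never a candidate
        return False
    if sig_rescan1 < 3.0:        # first rescan at equal sensitivity
        return False             # classified as a random thermal variation
    if sig_rescan2 < 5.0:        # second rescan at SNR 5/3 of the initial scans
        return False
    return True                  # proceeds to the cavity, lineshape, RFI, and B-field tests described above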
§.§ Synthetic Axions Run 1A saw the beginnings of a candidate injection system where artificial signals were injected into the power spectrum, using both hardware and software methods, in order to test instrument sensitivity.Hardware axions were injected into the cavity via the weak port through with the Synthetic Axion Generator (SAG). The structure of the SAG and its placement in the RF system has been presented in <cit.>.Injection frequencies were set prior to the scanning of each segment. Data from data-taking cycles with an injection were flagged and assessed independent of the main axion search. Blind injection did not occur until Run 1B <cit.>. A second complete data-taking cycle at the same tuning rod configuration was taken immediately following an injection so as to not mask the data in the vicinity of the synthetic's central frequency. Only the second scan was used in the axion search analysis presented here.Software injections of axion signals were also used to test the performance of the analysis. Simulated axion signals were imposed directly onto the raw power spectra in the forms of the Maxwellian and N-body inspired lineshapes, modulated by the fitted Lorentzian response of the cavity. We injected 25,000 software-simulated signals into the data set with couplings varying between DFSZ and 10 times KSVZ, ran through the analysis process, and evaluated the resulting candidate power to determine the systematic uncertainty associated with the background subtraction. Figure <ref> shows the effect of the injected signals in both the background-subtracted spectra and the final filtered and combined spectrum. § EXCLUSION LIMITSThe previous section showed how the candidate procedure identified and classified candidates in the grand axion search analysis. Each of these greater-than-+ 3 σ candidates were classified as non-axion in origin, therefore making the observations consistent at that level with the null-signal hypothesis under the live analysis top-hat line shape. Analyses using the Maxwellian and the N-body axion lineshapes were performed after the run's data-taking operations. They both found no new candidates at the same threshold.Given the null outcome, upper limits can then be formed on the axion power P_a, and therefore coupling g_a γγ under the priors presented in Sec. <ref> for each axion mass tested. The one-sided 90% limits on the power-derived coupling are shown in Fig. <ref> for both the SHM and N-body line shapes.§ SUMMARY AND CONCLUSIONSThis paper elaborates on the background and methods used by the ADMX collaboration to gather its Run 1A data and perform the axion dark matter search first reported on in <cit.>, which produced the first axion-photon coupling limits at DFSZ sensitivity. The ADMX apparatus was reviewed, with special concentration on the microwave cavity, receiver, and the magnet system that enabled resonant conversion of an ambient coherent axion field. Data-taking operations were explicated from their global structure down to the individual measurements of a single data-taking cycle. Interpretations of the raw data and its preparation for axion search were then presented, including the classification of the cavity and receiver transmission properties, noise temperatures, and structure of the receiver chain emitted noise power in the limit of many photon emissions per observation. 
The statistical basis for the axion search was then presented, both at the level of a single observation and the level of the wider Run 1A data set, and co-added into a grand spectrum spanning the observation range. The space of lineshapes for the local axion distribution and their usage were presented for both the live analysis and the off-line analyses. The axion signal candidate handling procedure was outlined, with candidates observed in the Run 1A data being catalogued either as random thermal fluctuations or interference external to the insert. No axion candidates persisted through the candidate-search process, making the data consistent with the null axion hypothesis and allowing detection limits to be formed over the observed range, with the exception of a set of RFI signals that could not be reliably removed from the data due to their strong time-dependent properties.Several updates and improvements have been made to the instrumentation and analysis that have since been used for Run 1B <cit.> and Run 1C still in progress <cit.>. Areas highlighted in the Run 1A analysis are as follows: an improved modeling of the receiver structure during live analysis to a Padè filter from the six-order polynomial, which drastically reduces the number of candidates found in the initial search, bringing the number to near that expected from purely thermal contributions; synthetic axion injection, which was tested but not thoroughly integrated in Run 1A, has been fully incorporated into the data acquisition process using both hardware injected blinded and un-blinded candidates and software candidates that have been used to good effect to improve confidence in a search's sensitivity to the axion; and last to be mentioned here, the MSA quantum-limited amplifier of Run 1A has been replaced by JPAs (Josephson Parametric Amplifiers) in Runs 1B and 1C and have proven to be more stable and tunable. Lastly, the possibility of fine structure existing in the local axion energy distribution presents a distinct opportunity to improve the observation's sensitivity and has motivated the collaboration to store the full time series of each observation. The resolution of this data extends to the tens of milli-hertz, the optimal level between the finest motivated fine structure features <cit.> and smearing due to orbital modulation from the Earth's spin and motion around the solar system center of mass. This high resolution data is significantly more sensitive to such fine structures. Therefore, as ADMX brings in more data at DFSZ sensitivity or better, searches for fine structure will play an increasingly important role in furthering the discovery potential for axion dark matter at increasingly smaller couplings or proportions of the total dark-matter density. § ACKNOWLEDGEMENTSThis work was supported by the U.S. Department of Energy through Grants No. DE-SC0009800, No. DESC0009723, No. DE-SC0010296, No. DE-SC0010280, No. DE-SC0011665, No. DEFG02-97ER41029, No. DEFG02-96ER40956, No. DEAC52-07NA27344, No. DEC03-76SF00098, No. DE-SC0017987, and No. DE-SC0022148. Fermilab is a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Additional support was provided by the Heising-Simons Foundation and by the Lawrence Livermore National Laboratory and Pacific Northwest National Laboratory LDRD offices. LLNL Release No. LLNL-JRNL-848336. Chelsea Bartram acknowledges support from the Panofsky Fellowship at SLAC.
Masked time series modeling has recently gained much attention as a self-supervised representation learning strategy for time series. Inspired by masked image modeling in computer vision, recent works first patchify and partially mask out time series, and then train Transformers to capture the dependencies between patches by predicting masked patches from unmasked patches. However, we argue that capturing such patch dependencies might not be an optimal strategy for time series representation learning; rather, learning to embed patches independently results in better time series representations. Specifically, we propose to use 1) the simple patch reconstruction task, which autoencodes each patch without looking at other patches, and 2) the simple patch-wise MLP that embeds each patch independently. In addition, we introduce complementary contrastive learning to hierarchically capture adjacent time series information efficiently. Our proposed method improves time series forecasting and classification performance compared to state-of-the-art Transformer-based models, while it is more efficient in terms of the number of parameters and training/inference time. Code is available at this repository: <https://github.com/seunghan96/pits>.
§ INTRODUCTION
Time series (TS) data finds application in a range of downstream tasks, including forecasting, classification, and anomaly detection. Deep learning has shown superior performance in TS analysis, where learning good representations is crucial to its success, and self-supervised learning has emerged as a promising strategy for harnessing unlabeled data effectively. Notably, contrastive learning (CL) and masked modeling (MM) have demonstrated impressive performance in TS analysis as well as in other domains such as natural language processing <cit.> and computer vision <cit.>. The masked time series modeling (MTM) task partially masks out TS and predicts the masked parts from the unmasked parts using encoders that capture dependencies among the patches, such as Transformers <cit.>. However, we argue that learning such dependencies among patches, e.g., predicting the masked parts based on the unmasked parts and utilizing architectures capturing dependencies among the patches, might not be necessary for representation learning.
Figure: PI vs. PD.
To this end, we introduce the concept of patch independence, which does not consider the interaction between TS patches when embedding them. This concept is realized through two key aspects: 1) the pretraining task and 2) the model architecture. Firstly, we propose a patch reconstruction task that reconstructs the unmasked patches, unlike the conventional MM that predicts the masked ones. We refer to these tasks as the patch-independent (PI) task and the patch-dependent (PD) task, respectively, as the former does not require information about other patches to reconstruct each patch, while the latter does. Figure <ref> illustrates a toy example of TS forecasting. While the Transformer pretrained on the PD task <cit.> fails to predict test data under distribution shift, the one pretrained on the PI task is robust to it.
Secondly, we employ the simple PI architecture (e.g., MLP), exhibiting better efficiency and performance than the conventional PD architecture (e.g., Transformer).In this paper, we propose Patch Independence for Time Series (PITS),which utilizes unmasked patch reconstruction as the PI pretraining task and MLP as the PI architecture.On top of that, we introduce complementary CL to efficiently capture adjacent time series information, where CL is performed using two augmented views of original samples that are masked in complementary ways.We conduct extensive experiments on various tasks, demonstrating that our proposed method outperforms the state-of-the-art (SOTA) performance in both forecasting and classification tasks, under both standard and transfer learning settings. The main contributions are summarized as follows:[itemize]leftmargin=0.3cm * We argue that learning to embed time series patches independently is superior to learning them dependently for TS representation learning, in terms of both performance and efficiency. To achieve patch independence, we propose PITS, which incorporates two major modifications on the MTM:1) to make the task patch-independent, reconstructing the unmasked patches instead of predicting the masked ones, and2) to make the encoder patch-independent, eliminating the attention mechanism while retaining MLP to ignore correlation between the patches during encoding. * We introduce complementary contrastive learning to hierarchically capture adjacent TS information efficiently, where positive pairs are made by complementary random masking.* We present extensive experiments for both low-level forecasting and high-level classification, demonstrating that our method improves SOTA performance on various downstream tasks. Also, we discover that PI tasks outperforms PD tasks in managing distribution shifts, and that PI architecture is more interpretable and robust to patch size compared to PD architecture.§ RELATED WORKSSelf-supervised learning.In recent years, self-supervised learning (SSL) has gained attention for learning powerful representations from unlabeled data across various domains.The success of SSL comes from active research on pretext tasks that predict a certain aspect of data without supervision. Next token prediction <cit.> and masked token prediction <cit.> are commonly used in natural language processing, and jigsaw puzzles <cit.> and rotation prediction <cit.> are commonly used in computer vision. Recently, contrastive learning (CL) <cit.> has emerged asan effective pretext task.The key principle of CL is to maximize similarities between positive pairs while minimizing similarities between negative pairs<cit.>. Another promising technique is masked modeling (MM), which trains the models to reconstruct masked patches based on the unmasked part.For instance, in natural language processing, models predict masked words within a sentence <cit.>, while in computer vision, they predict masked patches in images <cit.> within their respective domains. Masked time series modeling. Besides CL, MM has gained attention as a pretext task for SSL in TS.This task involves masking a portion of the TS and predicting the missing values, known as masked time series modeling (MTM).While CL has shown impressive performance in high-level classification tasks, MM has excelled in low-level forecasting tasks <cit.>. 
TST <cit.> applies the MM paradigm to TS, aiming to reconstruct masked timestamps.PatchTST <cit.> focuses on predicting masked subseries-level patches to capture local semantic information and reduce memory usage.SimMTM <cit.> reconstructs the original TS from multiple masked TS.TimeMAE <cit.> trains a transformer-based encoder using two pretext tasks, masked codeword classification and masked representation regression. Table <ref> compares various methods in TS including ours in terms of two criterions: pretraining methods and downstream tasks, where No (Sup.) in Pretraining method indicates a supervised learning method that does not employ pretraining.Different from recent MTM works, we propose to reconstruct unmasked patches through autoencoding. A primary concern on autoencoding is the trivial solution of identity mapping, such that the dimension of hidden layers should be smaller than the input. To alleviate this, we introduce dropout after intermediate fully-connected (FC) layers, which is similar to the case of stacked denoising autoencoders <cit.>, where the ablation study can be found in Figure <ref>.Combination of CL and MM. There have been recent efforts to combine CL and MM for representation learning <cit.>. Among these works, SimMTM <cit.> addresses an MM task with a regularizer in its objective function in the form of a contrastive loss. However, it differs from our work in that it focuses on CL between TS, while our proposed CL operates with patches within a single TS.Complementary masking. SdAE <cit.> employs a student branch for information reconstruction and a teacher branch to generate latent representations of masked tokens, utilizing a complementary multi-fold masking strategy to maintain relevant mutual information between the branches. TSCAE <cit.> addresses the gap between upstream and downstream mismatches in the pretraining model based on MM by introducing complementary masks for teacher-student networks, and CFM <cit.> introduces a trainable complementary masking strategy for feature selection. Our proposed complementary masking strategy differs in that it is not designed for a distillation model, and our masks are not learnable but randomly generated.Linear models for time series forecasting. Transformer <cit.> is a popular sequence modeling architecture that has prompted a surge in Transformer-based solutions for time series analysis <cit.>.Transformers derive their primary strength from the multi-head self-attention mechanism, excelling at extracting semantic correlations within extensive sequences.Nevertheless, recent work by <cit.> shows that simple linear models can still extract such information captured by Transformer-based methods. Motivated by this work, we propose to use a simple MLP architecture that does not encode interaction between time series patches.§ METHODSWe address the task of learning an embedding function f_θ: x^(i,c,n)_p →z^(i,c,n) for a TS patch where x_p = {x_p^(i,c,n)}, z = {z^(i,c,n)}, and i=1,, B, c=1,, C, n=1,, N. Here, B, C, N are the number of TS, number of channels in a single TS, and number of patches in a single channel of a single TS. The input and the output dimension, which are the patch size and patch embedding dimension, are denoted as P and D, respectively, i.e., x_p^(i,c,n)∈ℝ^P and z^(i,c,n)∈ℝ^D. Our goal is to learn f_θ extracting representations that perform well on various downstream tasks. 
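For illustration, the patchification into this (i, c, n) layout can be sketched in PyTorch (an illustrative sketch, not the released PITS code; the function name is ours):

import torch

def patchify(x, patch_size):
    # x: (B, C, L) multivariate time series -> (B, C, N, P) non-overlapping patches,
    # applied channel-independently with N = L // P.
    B, C, L = x.shape
    N = L // patch_size
    return x[:, :, : N * patch_size].reshape(B, C, N, patch_size)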
Channel independence & Patch independence.We use the channel independence architecture for our method, where all channels share the same model weights and embedded independently, i.e, f_θ is independent to c.This has shown robust prediction to the distribution shift compared to channel-dependent approaches <cit.>.Also, we propose to use the PI architecture,where all patches share the same model weights and embedded independently, i.e, f_θ is independent to n. We illustrate four different PI/PD architectures in Figure <ref>(a), where we use MLP for our proposed PITS, due to its efficiency and performance, as demonstrated in Table <ref> and Table <ref>, respectively.§.§ Patch-Independent Task: Patch ReconstructionUnlike the conventional MM task (i.e., PD task) that predicts masked patches using unmasked ones, we propose the patch reconstruction task (i.e., PI task) that autoencodes each patch without looking at the other patches, as depicted in Figure <ref>(a). Hence, while the original PD task requires capturing patch dependencies, our proposed task does not. A patchified univariate TS can be reconstructed in two different ways[Biases are omitted for conciseness.]:1) reconstruction at once by a FC layer processing the concatenation of patch representations: concat(x_p^(i,c,:)) = W_1 concat(z^(i,c,:)) where W_1 ∈ℝ^N· P × N· D, and 2) patch-wise reconstruction by a FC layer processing each patch representation: x_p^(i,c,n) = W z^(i,c,n) where W ∈ℝ^P × D. Similar to <cit.>, we employ the patch-wise reconstruction which yields better performance across experiments. §.§ Patch-Independent Architecture: MLP While MTM has been usually studied with Transformers for capturing dependencies between patches, we argue that learning to embed patches independently is better. Following this idea, we propose to use the simple PI architecture, so that the encoder solely focuses on extracting patch-wise representations.Figure <ref>(a) shows the examples of PI/PD pretraining tasks and encoder architectures.For PI architectures, Linear consists of a single FC layer model and MLP consists of a two-layer MLP with ReLU. For PD architectures, MLP-Mixer[ While TSMixer is a variation of MLP-Mixer proposed for TS concurrent to our work, we found that TSMixer does not perform well with SSL, so we use our own variation of MLP-Mixer here.] <cit.> consists of a single FC layer for time-mixing (N-dim) followed by a two-layer MLP for patch-mixing (D-dim), and Transformer consists of a self-attention layer followed by a two-layer MLP, following <cit.>. The comparison of the efficiency between MLP and Transformer in terms of the number of parameters and training/inference time is provided in Table <ref>. §.§ Complementary Contrastive LearningTo further boost performance of learned representations,we propose complementary CL to hierarchically capture adjacent TS information.CL requires two views to generate positive pairs, and we achieve this by a complementary masking strategy: for a TS x and a mask m with the same length, we consider m⊙x and (1-m) ⊙x as two views, where ⊙ is the element-wise multiplication and we use 50% masking ratio for experiments. Note that the purpose of masking is to generate two views for CL; it does not affect the proposed PI task, and it does not require an additional forward pass when using the proposed PI architectures, such that the additional computational cost is negligible. 
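A minimal PyTorch sketch of the PI encoder (two-layer patch-wise MLP with ReLU and dropout before the reconstruction head) together with the complementary masking that produces the two views is given below; this is an illustrative sketch, not the released PITS implementation, and the class and argument names are ours:

import torch
import torch.nn as nn

class PatchIndependentMLP(nn.Module):
    def __init__(self, patch_size, d_model, dropout=0.2):
        super().__init__()
        self.fc1 = nn.Linear(patch_size, d_model)   # shared across channels and patches
        self.fc2 = nn.Linear(d_model, d_model)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(dropout)          # guards against the identity-mapping shortcut
        self.head = nn.Linear(d_model, patch_size)  # patch-wise reconstruction head

    def forward(self, x_p):                         # x_p: (B, C, N, P)
        z1 = self.act(self.fc1(x_p))                # first-layer representations (used for CL)
        z2 = self.fc2(z1)                           # second-layer representations (used downstream)
        x_hat = self.head(self.dropout(z2))         # reconstruct each patch independently
        return z1, z2, x_hat

def complementary_views(x_p, mask_ratio=0.5):
    # Zero out a random half of the patches in one view and the complementary half in the other.
    B, C, N, _ = x_p.shape
    m = (torch.rand(B, C, N, 1, device=x_p.device) < mask_ratio).float()
    return m * x_p, (1.0 - m) * x_p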
r0.50< g r a p h i c s > Complementary contrastive learning.Figure <ref> illustrates an example of complementary CL,where we perform CL hierarchically <cit.> by max-pooling on the patch representations along the temporal axis, and compute and aggregate losses computed at each level.Then, the model learns to find missing patch information in one view, by contrasting the similarity with another view and the others, so that the model can capture adjacent TS information hierarchically. §.§ Objective Function As illustrated in Figure <ref>(b), we perform CL at the first layer and reconstruction by an additional projection head on top of the second layer, based on the ablation study in Table <ref>. To distinguish them, we denote representations obtained from the two layers in MLP as z_1 and z_2, respectively.Reconstruction loss. As discussed in Section <ref>,we feed z_2 into the patch-wise linear projection head to get a reconstructed result: x_p = W z_2.Then, the reconstruction loss can be written as:ℒ_𝓇ℯ𝒸ℴ𝓃 = ∑_i=1^B ∑_c=1^C ∑_n=1^Nm^(i,c,n)⊙( x_p^(i,c,n) -x_p^(i,c,n)) _2^2 +(1-m^(i,c,n)) ⊙( x_p^(i,c,n) -x_p^(i,c,n)) _2^2= ∑_i=1^B ∑_i=1^C ∑_n=1^N x_p^(i,c,n) - x_p^(i,c,n)_2^2, where m^(i,c,n)=0 if the first view x_p^(i,c,n) is masked, and 1 otherwise.As derived in Eq. <ref>, the reconstruction task is not affected by complementary masking, i.e., reconstructing the unmasked patches of the two views is the same as reconstructing patches without complementary masking.Contrastive loss. Inspired by the cross-entropy loss-like formulation of the contrastive loss in <cit.>, we establish a softmax probability for the relative similarity among all the similarities considered when computing temporal contrastive loss. For conciseness, let z_1^(i,c,n) = z_1^(i,c,n+2N) and z_1^(i,c,n+N) be the two views of the patch embedding x^(i,c,n).Then, the softmax probability for a pair of patch indices (n,n^') is defined as: p(i,c,(n,n^')) = exp ( z_1^(i,c,n)∘z_1^(i,c,n^') )/∑_s=1, s≠ n^2Nexp (z_1^(i,c,n)∘z_1^(i,c,s)), where we use the dot product as the similarity measure ∘.Then, the total contrastive loss can be written as: ℒ_CL= 1/2BCN∑_i=1^B∑_i=1^C∑_n=1^2N - log p(i,c,(n,n+N)),where we compute the hierarchical losses by max-pooling z^(i,c,n)'s along with the dimension n repeatedly with the following substitutions until N=1:z^(i,c,n)←MaxPool([z^(i,c,2n-1), z^(i,c,2n)]),N ←⌊ N/2 ⌋. The final loss of PITS is the sum of the reconstruction loss and hierarchical contrastive loss: ℒ=ℒ_𝓇ℯ𝒸ℴ𝓃+ℒ_𝒞ℒ. § EXPERIMENTS §.§ Experimental SettingsTasks and evaluation metrics. We demonstrate the effectiveness of the proposed PITS on two downstream tasks: time series forecasting (TSF) and classification (TSC) tasks.For evaluation, we mainly follow the standard SSL framework that pretrains and fine-tunes the model on the same dataset, but we also consider in-domain and cross-domain transfer learning settings in some experiments. As evaluation metrics, we use the mean squared error (MSE) and mean absolute error (MAE) for TSF, and accuracy, precision, recall, and the F_1 score for TSC. §.§ Time Series ForecastingDatasets and baseline methods. For forecasting tasks, we experiment seven datasets, including four ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2), Weather, Traffic, and Electricity <cit.>, with a prediction horizon of H ∈{96, 192, 336, 720}. 
For the baseline methods, we consider Transformer-based models, including PatchTST <cit.>, SimMTM <cit.>, FEDformer <cit.>, and Autoformer <cit.>, and linear/MLP models, including DLinear <cit.> and TSMixer <cit.>. We also compare PITS and PatchTST without self-supervised pretraining [For PITS and PatchTST supervised learning, patches are overlapped following <cit.>.], which essentially compares PI and PD architectures only.We follow the experimental setups and baseline results from PatchTST, SimMTM, and TSMixer. For all hyperparameter tuning, we utilize a separate validation dataset, following the standard protocol of splitting all datasets into training, validation, and test sets in chronological order with a ratio of 6:2:2 for the ETT datasets and 7:1:2 for the other datasets <cit.>. Standard setting. Table <ref> shows the comprehensive results on the multivariate TSF task, demonstrating that our proposed PITS is competitive to PatchTST in both settings, which is the SOTA Transformer-based method, while PITS is much more efficient than PatchTST. SimMTM is a concurrent work showing similar performance to ours in SSL while significantly worse in supervised learning. Table <ref> compares PITS and PatchTST under three different scenarios: fine-tuning (FT), linear probing (LP), and supervised learning without self-supervised pretraining (Sup.), where we present the average MSE across four horizons.As shown in Table <ref>, PITS outperforms PatchTST for all scenarios on average. Transfer learning.In in-domain transfer, we experiment datasets with the same frequency for the source and target datasets, whereas in cross-domain transfer, datasets with different frequencies are utilized for the source and target datasets. Table <ref> shows the results of the average MSE across four horizons, which demonstrates that our proposed PITS surpasses the SOTA methods in most cases. §.§ Time Series ClassificationDatasets and baseline methods. For classification tasks, we use five datasets, SleepEEG <cit.>, Epilepsy <cit.>, FD-B <cit.>, Gesture <cit.>, and EMG <cit.>. For the baseline methods, we employ TS-SD <cit.>, TS2Vec <cit.>, CoST <cit.>, LaST <cit.>, Mixing-Up <cit.>, TS-TCC <cit.>, TF-C <cit.>, TST <cit.>, TimeMAE <cit.> and SimMTM <cit.>. Standard setting.Table <ref> demonstrates that our proposed PITS outperforms all SOTA methods in all metrics on the SleepEEG dataset.This contrasts with the results in prior works that CL is superior to MTM for classification tasks <cit.>: while prior MTM methods such as TST and TimeMAE shows relatively low performance compared to CL methods such as TS2Vec and TF-C[An exception is SimMTM <cit.>, which is not officially published at the time of submission.], the proposed PITS outperforms CL methods, even without complementary CL. Transfer learning. For transfer learning, we conduct experiments in both in-domain and cross-domain transfer settings, using SleepEEG as the source dataset for both settings.For in-domain transfer, we use target datasets from the same domain as the source dataset, which share the characteristic of being EEG datasets, while we use target datasets from the different domain for cross-domain transfer. Table <ref> demonstrates that our PITS outperforms SOTA methods in all scenarios. In particular, the performance gain is significant in the challenging cross-domain transfer learning setting, implying that PITS would be more practical in real-world applications under domain shifts. 
§.§ Ablation Study Effect of PI/PD tasks/architectures.To assess the effect of our proposed PI pretraining task and PI encoder architecture, we conduct an ablation study in Table <ref> using a common input horizon of 512 and patch size of 12.Recall that the PD task predicts masked patches using unmasked patches while the PI task autoencodes patches, and the PD architectures include interaction among patches using either the fully-connected layer (MLP-Mixer) or the self-attention module (Transformer), while the PI architectures (Linear, MLP) do not. As shown in Table <ref>, PI pretraining results in better TSF performance than PD pretraining regardless of the choice of the architecture.Also, PI architectures exhibit competitive performance compared to PD architectures, while PI architectures are more lightweight and efficient as demonstrated in Table <ref>. Among them, MLP shows the best performance while keeping efficiency, so we use MLP as the architecture of PITS throughout all experiments.r0.34 < g r a p h i c s > MSE by D and dropout.Hidden dimension and dropout. The PI task may raise a concern on the trivial solution: when the hidden dimension D is larger than the input dimension P, the identity mapping perfectly reconstructs the input. This can be addressed by introducing dropout,where we add a dropout layer before the linear projection head. Figure <ref> displays the average MSE on four ETT datasets across four horizons under various hidden dimensions D in MLP with a common input horizon of 512,without dropout or with the dropout rate of 0.2.Note that for this experiment, the input dimension (patch size) is 12, and a trivial solution can occur if D ≥ 12. The results confirm that using dropout is necessary to learn high dimensional representations, leading to better performance. Based on this result, we tune D ∈{32,64,128} throughout experiments, while performance is consistent with D values in the range.An ablation study with different dropout rates can be found in Appendix <ref>.Performance of various pretrain tasks. In addition to the 1) PD task of reconstructing the masked patches (X_m) and 2) PI task of autoencoding the unmasked patches (X_u), we also employ two other basic tasks for comparison:3) predicting X_u from zero-filled patches (0) and4) autoencoding 0. Table <ref> displays the average MSE on four ETT datasets across four horizons with a common input horizon of 512,highlighting that the model pretrained with the PD task performs even worse than the two basic tasks with 0 as inputs. This emphasizes the ineffectiveness of the PD task and the effectiveness of the proposed PI task.Which representation to use for downstream tasks? In SSL, the boundary of the encoder and the task-specific projection head is often unclear. To determine the location to extract representation for downstream tasks, we conduct experiments using representations from intermediate layers in MLP:1) z_1 from the first layer, 2) z_2 from the second layer, and 3) z_2^* from the additional projection layer attached on top of the second layer. Table <ref> displays the MSE of ETTh1 across four horizons,indicating that the second layer z_2 yields the best results. Location of complementary CL.To assess the effect of complementary CL together with PI reconstruction,we conduct an ablation study on the choice of pretext tasks and their location in the MLP encoder:the contrastive and/or reconstruction loss is computed on the first or second layer, or neither. 
Table <ref> displays the average MSE on four ETT datasets across four horizons. We observe that the PI reconstruction task is essential, and CL is effective when it is applied in the first layer.

Hierarchical design of complementary CL. The proposed complementary CL is structured hierarchically to capture both coarse and fine-grained information in time series. To evaluate the effect of this hierarchical design, we consider three different options: 1) without CL, 2) with non-hierarchical CL, and 3) with hierarchical CL. Table <ref> presents the average MSE on four ETT datasets across four horizons, highlighting the performance gain from the hierarchical design.

Comparison with PatchTST. PITS can be derived from PatchTST by changing the pretraining task and the encoder architecture. Table <ref> shows how each modification contributes to the performance improvement on the ETTh1 dataset. Note that we apply a mask ratio of 50% to PatchTST, which does not affect the performance (marked with ^*).

§ ANALYSIS

PI task is more robust to distribution shift than PD task. To assess the robustness of the pretraining tasks to distribution shifts, which are commonly observed in real-world datasets <cit.>, we generate 98 toy examples exhibiting varying degrees of distribution shift, as depicted in the left panel of Figure <ref>. The degree of shift is characterized by changes in slope and amplitude. The right panel of Figure <ref> visualizes the performance gap between the models trained with the PD and PI tasks, where the horizontal and vertical axes correspond to the slope and amplitude differences between the training and test phases, respectively. The result indicates that the model trained with the PI task exhibits overall better robustness to distribution shifts, as the MSE difference is non-negative in all regimes and the gap increases as the shift becomes more severe, particularly when the slope is flipped or the amplitude is increased.

MLP is more robust to patch size than Transformer. To assess the robustness of the encoder architectures to patch size, we compare MLP and Transformer using ETTh1 with different patch sizes. Figure <ref> illustrates the results, indicating that MLP is more robust for both the PI and PD tasks, resulting in consistently better forecasting performance across various patch sizes.

MLP is more interpretable than Transformer. While PI architectures process each patch independently, PD architectures share information from all patches, leading to information leaks among patches. This makes MLP more interpretable than Transformer, as visualizing the weight matrix of the linear layer additionally introduced and learned for the downstream task shows each patch's contribution to the predictions. Figure <ref> illustrates the seasonality of ETTm1 and the downstream weight matrix trained on ETTm1 for both architectures. While the weight matrix of the linear layer on top of Transformer is mostly uniform, that of MLP reveals seasonal patterns and emphasizes recent information, highlighting that MLP captures the seasonality better than Transformer.

[Table: Time/parameter efficiency under self-supervised settings.
                      | PatchTST | PITS (w/o CL) | PITS (w/ CL) | PITS (w/ hier. CL)
Number of params      |  406,028 |           5,772 (same for all PITS variants)
Pretrain time (min)   |       77 |            15 |           17 |                 25
Inference time (sec)  |      7.5 |             3.3 (same for all PITS variants)
Avg. MSE              |    0.274 |         0.253 |        0.252 |              0.244 ]

Efficiency analysis. To demonstrate the efficiency of the PI architecture, we compare PatchTST and PITS in terms of the number of parameters and training/inference time on ETTm2.
As shown in Table <ref>, PITS outperforms PatchTST with significantly fewer parameters and faster training and inference, where we pretrain for 100 epochs and perform inference with the entire test dataset. A comparison of the efficiency between the self-supervised and supervised settings is provided in Appendix <ref>.

t-SNE visualization. To evaluate the quality of the representations obtained from the PI and PD tasks, we utilize t-SNE <cit.> for visualization. For this analysis, we create toy examples with 10 classes, each with its own trend and seasonality pattern, as shown in Figure <ref>. The results demonstrate that the representations learned from the PI task better distinguish between the classes.

§ CONCLUSION

This paper revisits masked modeling in time series analysis, focusing on two key aspects: 1) the pretraining task and 2) the model architecture. In contrast to previous works that primarily emphasize dependencies between TS patches, we advocate a patch-independent approach on two fronts: 1) by introducing a patch reconstruction task and 2) by employing a patch-wise MLP. Our results demonstrate that the proposed PI approach is more robust to distribution shifts and patch size than the PD approach, resulting in superior performance while being more efficient in both forecasting and classification tasks. We hope that our work sheds light on the effectiveness of self-supervised learning through simple pretraining tasks and model architectures in various domains, and provides a strong baseline for future work on time series analysis.

§ ETHICS STATEMENT

The proposed self-supervised learning algorithm, employing patch-independent strategies in terms of pretraining tasks and model architecture, holds the potential to have a significant impact in the field of representation learning for time series, especially in scenarios where annotation is scarce or unavailable. This algorithm can be effectively applied in various real-world settings, encompassing both forecasting and classification tasks, even in situations where distribution shifts are severe. Furthermore, we foresee that the concept of utilizing lightweight architectures will serve as a source of inspiration for future endeavors across domains where substantial computational resources are not readily accessible. Nevertheless, as is the case with any algorithm, ethical considerations come to the forefront. One notable ethical concern relates to the possibility of the algorithm perpetuating biases inherent in the pretraining datasets. It is necessary to assess and mitigate potential biases within the pretraining dataset before deploying the algorithm in real-world applications. To ensure the responsible utilization of the algorithm, we are committed to providing the source code, which will promote transparency and reproducibility, enabling fellow researchers to scrutinize and rectify potential biases and guard against any misuse.

§ DATASET DESCRIPTION

§.§ Time Series Forecasting

For time series forecasting, we assess the effectiveness of our proposed PITS using seven datasets, including four ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2), Weather, Traffic, and Electricity. These datasets have been widely employed for benchmarking and are publicly accessible <cit.>. The statistics of these datasets are summarized in Table <ref>.

§.§ Time Series Classification

For time series classification, we use five datasets of different characteristics, as described in Table <ref>.
Note that both the SleepEEG and Epilepsy datasets belong to the same domain, characterized by being EEG datasets. For transfer learning tasks, we define them as being part of the same domain.

§ EXPERIMENTAL SETTINGS

We follow the standard practice of splitting all datasets into training, validation, and test sets in chronological order <cit.>. The splitting ratios were set at 6:2:2 for the ETT datasets and 7:1:2 for the other datasets. It is important to note that we benefit from minimal hyperparameters due to our use of a simple architecture. We conduct a hyperparameter search for three key parameters using the predefined validation dataset: the hidden dimension of the MLP (D ∈ {32, 64, 128}), the patch size (P ∈ {12, 18, 24}), and the input horizon (L ∈ {336, 512, 768}). For self-supervised learning, we utilize a shared pretrained weight for all prediction horizons, making it more efficient than supervised learning in the long term. In both self-supervised pretraining and supervised learning, we use an epoch size of 100. During fine-tuning in self-supervised learning, we apply linear probing for either 10 or 20 epochs, depending on the dataset, to update the model head. Subsequently, we perform end-to-end fine-tuning of the entire network for twice the epoch duration of linear probing, following the approach outlined in PatchTST <cit.>. The dropout ratio for the fully connected layer preceding the prediction head is set to 0.2.

§ HYPERPARAMETERS

§.§ Time Series Forecasting

§.§.§ Self-Supervised Learning

§.§.§ Supervised Learning

§.§.§ Transfer Learning

§.§ Time Series Classification

§.§.§ Transfer Learning

§ TIME SERIES FORECASTING

To demonstrate the effectiveness of PITS compared to other SOTA self-supervised methods, we compare PITS with methods including PatchTST <cit.>, SimMTM <cit.>, TimeMAE <cit.>, and TST <cit.> as MTM methods, and TF-C <cit.>, CoST <cit.>, and TS2Vec <cit.> as CL methods. The results presented in Table <ref> showcase the superior performance of PITS over these methods on the multivariate time series forecasting task.

§ TRANSFER LEARNING

For time series forecasting under transfer learning, we consider both in-domain and cross-domain transfer learning settings, where we regard datasets with the same frequency as in-domain. We perform transfer learning in both in-domain and cross-domain settings using five datasets: four ETT datasets and Weather. The full results are described in Table <ref>, where missing values are those not reported in the literature.

§ COMPARISON WITH PATCHTST

We compare our proposed method with PatchTST in three versions: fine-tuning (FT), linear probing (LP), and supervised learning (SL). The results are described in Table <ref>, which demonstrates that our proposed method outperforms PatchTST in every version on most of the datasets.

§ EFFECTIVENESS OF PI TASK AND CONTRASTIVE LEARNING

To assess the effectiveness of the proposed patch reconstruction task and complementary contrastive learning, we conduct ablation studies on both time series forecasting and time series classification.

§.§ Time Series Forecasting

To examine the effect of the PI task and CL on forecasting, we conduct an experiment using four ETT datasets.
The results in Table <ref> demonstrate that performing CL with the representation obtained from the first layer and PI with the one from the second layer gives the best performance.§.§ Time Series ClassificationTo evaluate the impact of employing CL and PI on classification, we conducted an experiment using the Epilepsy dataset.The results presented in Table <ref> demonstrate that as long as PI task is employed,the performance is robust to the design choices.§ EFFECTIVENESS OF PI STRATEGIESIn this experiment, we investigate the impact of our proposed PI strategies from two perspectives: 1) the pretraining task and 2) the encoder architecture.The results, shown in Table <ref>, encompass four ETT datasets with four different forecasting horizons with a common input horizon of 512.These results demonstrate that the PI task consistently outperforms the conventional PD task across all considered architectures.§ ROBUSTNESS TO PATCH SIZETo evaluate the robustness of encoder architectures to patch size,we compare MLP and Transformer with different patch sizes with ETTh2 and ETTm2 with a common input horizon of 512.The left and the right panel of Figure <ref> illustrate the average MSE of four horizons of ETTh2 and ETTm2, respectively.§ EFFICIENCY OF PITS IN SELF-SUPERVISED AND SUPERVISED SETTINGS We compare the efficiency of PITS between self-supervised and supervised settings on the ETTm2 dataset.We calculate the pretraining time and fine-tuning time of PITS under the self-supervised setting, as well as the training time under the supervised setting.Table <ref> presents the results, with the time required for fine-tuning (in the self-supervised setting) and supervised training across four different horizons {96, 192, 336, 720}.We used an epoch size of 10 for both pretraining in self-supervised settings and training in supervised settings. For fine-tuning, we trained linear head for 10 epochs, followed by end-to-end fine-tuning of the entire network for an additional 20 epochs, following PatchTST. For self-supervised learning, we utilize a shared pretrained weight for all prediction horizons, enhancing efficiency over the long-term setting compared to supervised learning. Given that pretraining is done before training on downstream tasks, fine-tuning the pretrained model is more efficient than training from scratch, while providing better performance.§ PERFORMANCE BY DROPOUT RATEFigure <ref> displays the average MSE across four horizons, and Table <ref> lists all the MSE values for four ETT datasets trained with MLP of D=32 at various dropout rates with a common input horizon of 512. These results emphasize the importance of incorporating dropout during the pretraining phase of the reconstruction task, as it helps prevent trivial solutions when the hidden dimension is greater than the input dimension. 
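To make the above architectural choices concrete, the following is a minimal sketch of a patch-independent two-layer MLP encoder with a dropout layer placed before the reconstruction head, which is the mechanism discussed above for avoiding the trivial identity mapping when D ≥ P; the class name, layer sizes, and activations are illustrative assumptions rather than the paper's reference implementation.

```python
import torch
import torch.nn as nn

class PatchwiseMLP(nn.Module):
    """Patch-independent encoder: every patch of length P is mapped to a D-dim
    representation by two fully connected layers that act only on the last
    (patch) dimension, so no information is exchanged between patches."""
    def __init__(self, patch_len=12, d_model=32, p_drop=0.2):
        super().__init__()
        self.fc1 = nn.Sequential(nn.Linear(patch_len, d_model), nn.ReLU())
        self.fc2 = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
        # dropout before the reconstruction head discourages the trivial
        # identity mapping that becomes possible when d_model >= patch_len
        self.head = nn.Sequential(nn.Dropout(p_drop), nn.Linear(d_model, patch_len))

    def forward(self, patches):                 # patches: (batch, n_patches, patch_len)
        z1 = self.fc1(patches)
        z2 = self.fc2(z1)                       # z2 is the representation used downstream
        recon = self.head(z2)                   # patch-wise reconstruction (PI task)
        return z1, z2, recon

model = PatchwiseMLP()
x = torch.randn(8, 42, 12)                      # 8 series, 42 patches of size 12
_, _, recon = model(x)
loss = nn.functional.mse_loss(recon, x)         # PI pretraining objective
```

Because every layer acts only on the last (patch) dimension, the parameter count is independent of the number of patches, which is why a model of this form stays in the few-thousand-parameter range reported earlier.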
§ PERFORMANCE OF VARIOUS PRETRAIN TASKS

To examine whether the conventional PD task of reconstructing the masked patches (X_m) from the unmasked patches (X_u) is appropriate for TS representation learning, we employ two other simple pretraining tasks: 1) predicting X_u from zero-value patches (0) and 2) reconstructing 0 from themselves. Table <ref> presents the results for four ETT datasets with a common input horizon of 512 across three different architectures: Transformer, MLP without CL, and MLP with CL. These results underscore that models pretrained with the PD task perform even worse than the two basic pretraining tasks with zero-value patch inputs, highlighting the ineffectiveness of the PD task and emphasizing the importance of the proposed PI task.

§ STATISTICS OF RESULTS OVER MULTIPLE RUNS

To examine whether the performance of PITS is consistent, we report the statistics of results obtained with three different random seeds. We compute the mean and standard deviation of both MSE and MAE, as shown in Table <ref>. The results indicate that the performance of PITS is consistent under both self-supervised and supervised settings.
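The pretraining variants compared above differ only in how inputs and targets are formed from the patches; a minimal sketch of that bookkeeping is given below. The function and task names are our own, and the 50% mask ratio is just an example.

```python
import torch

def make_pretrain_batch(patches: torch.Tensor, task: str, mask_ratio: float = 0.5):
    """Return (inputs, targets, mask) for the pretraining variants discussed above.
    patches: (batch, n_patches, patch_len)."""
    B, N, P = patches.shape
    mask = torch.rand(B, N) < mask_ratio          # True = masked patch
    m = mask.unsqueeze(-1)
    zeros = torch.zeros_like(patches)
    if task == "pd_masked":      # PD: reconstruct masked patches from the unmasked ones
        return patches.masked_fill(m, 0.0), patches, mask
    if task == "pi_autoencode":  # PI: autoencode every (unmasked) patch independently
        return patches, patches, None
    if task == "zero_to_patch":  # baseline: predict patches from zero-filled inputs
        return zeros, patches, None
    if task == "zero_to_zero":   # baseline: autoencode zero patches
        return zeros, zeros, None
    raise ValueError(task)
```

For the PD variant the reconstruction loss is evaluated only on the masked positions, e.g. torch.nn.functional.mse_loss(recon[mask], targets[mask]); the other variants use the full reconstruction error.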
http://arxiv.org/abs/2312.16427v1
{ "authors": [ "Seunghan Lee", "Taeyoung Park", "Kibok Lee" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20231227062329", "title": "Learning to Embed Time Series Patches Independently" }
Towards Zero-Trust 6GC: A Software Defined Perimeter Approach with Dynamic Moving Target Defense Mechanism

Yahuza Bello1, Ahmed Refaey12, and Mehmet Ulema 2
1University of Guelph, Ontario, Canada
2Western University, London, Ontario, Canada
====================================================================================================================================================

The Sixth Generation (6G) network is expected to face various security challenges in terms of access control, authentication, secure communication channels among 6G Core (6GC) entities, data confidentiality, data integrity, privacy, and encryption. Conventional Virtual Private Networks (VPNs) have been widely proposed in the literature to secure 5G networks. However, they are known to be vulnerable to many attacks, such as man-in-the-middle attacks, Domain Name System (DNS) hijacking, Denial of Service (DoS) attacks, malicious worms, and repeated log-in attempts. Therefore, this paper introduces the concept of the Software Defined Perimeter (SDP) as a solution to establish a secure zero-trust environment within the 6G network. As a use case, we propose implementing the 5G Radio Access Network (RAN) and the 5GC entities along with the SDP modules (i.e., SDP controller and gateway). We utilize the SDP controller-based authentication and authorization mechanism to ensure the complete security of both the control and data plane functionalities of the 5G network, which can be extended to the 6G network. We extend the implementation of the SDP modules by introducing a dynamic component, Moving Target Defense (MTD), to the framework. The addition of MTD enhances the resilience of the network against attacks that specifically target networks of a static nature. The proposed framework demonstrates superior resilience against DoS and port scanning attacks compared to traditional VPNs.

§ INTRODUCTION

The evolution of mobile networks is accompanied by various enhancements in terms of network performance, capacity, and high data rates, among others. The Sixth Generation (6G), the cornerstone and next generation of mobile networks, is expected to serve as an enabling technology for various services such as the Internet of Everything (IoE), extended reality, smart grids, and Intelligent Transportation Systems (ITS) <cit.>. However, realizing a fully working 6G network comes with many challenges, especially in the security domain <cit.>. The 6G wireless network faces various security challenges related to its architecture, newly supported services, various enabling technologies, and other user protection requirements. Additionally, 6G inherits many of the security vulnerabilities of its predecessor (i.e., 5G) and thus requires the attention of industry and academia to fully investigate a better approach to securing the 6G network. Since 6G networks are bound to inherit the security vulnerabilities of their predecessor, we start by surveying some of the solutions adopted to tackle the security of 5G networks. The main security challenges in 5G networks are in access control, authentication, secured communication links among 5G Core (5GC) entities, data confidentiality, privacy and data encryption, and data integrity. Several studies in the literature have investigated these challenges and proposed unique solutions to address them.
For example, the authors in <cit.> investigated connection bootstrapping between devices and base stations in 5G and propose a Public Key Infrastructure (PKI) based authentication technique to address the vulnerabilities that can be exploited by attackers from compromised base stations. In <cit.>, the authors propose a novel International Mobile Subscriber Identity (IMSI) encryption algorithm, in which a mobile user is required to generate a set of new public/private asymmetric keys and a random number to prevent the threats posed by IMSI catchers in 5G network. The European Telecommunications Standards Institute (ETSI) technical committee issues two encryption specifications for Attributes-Based Encryption (ABE) that is applicable in 5G and IoT <cit.>. The two specifications aim at personal data protection for end users when multiple parties are involved and design for zero-trust models as well as protocols to secure users' data in a 5G network. Focusing on the 5GC, the communication links and network traffic (i.e., control plane traffic and data plane traffic) are susceptible to various kind of attacks such as Denial of Service (DoS) attacks, Distributed Denial of Service (DDoS) attacks, Transport Layer Security (TLS)/Secure Sockets Layer (SSL) attacks, mobile malware attacks and message insertion attacks. The main security issue with the previous generations (i.e., 1G-4G) is the absent of Internet Protocol (IP) security measures, which is prone to various attacks. Meanwhile, most of the proposed solutions to secure 5GC network communications is based on either the TLS or SSL, which are known to have the same IP level vulnerabilities such as IP spoofing, TCP SYN DoS attacks, eavesdropping attacks, etc. Therefore, a suitable solutionwhich can mitigate these attacks is required to secure the control plane traffic and data plane traffic of the 5GC network.Recently, 5G Network vendors proposed adopting Virtual Private Network (VPN) between the Radio Access Network and the 5GC network as well as among the 5GC entities in a 5G MEC network <cit.>. Adopting VPN within a 5G network provides several advantages. For example, to prevent malicious actions within the 5G network, a VPN layer can be utilized to prevent any unauthorized access to the 5GC. VPN uses the Internet Protocol security(IPsec) suite protocol for packet encryption and authentication. A mutual authentication (between two host) can be established at the beginning of a session and an agreed cryptographic keys shared for that session. IPsec uses these cryptographic keys to secure communication over IP-network and at the same time support network-level pair authentication. However, this approach has its limitations as VPNs are known to be vulnerable to several attacks such asman-in-the-middle attacks, Domain Name System (DNS) hijacking, DoS attacks, malicious worms, and repeated log-in attempts. Therefore, a more strict zero-trust security framework is required to secure the 5G network. One such framework is the SDP, which is adopted by the US Department of Defense (DoD) and then standardized by the Cloud Security Alliance (CSA). This framework follows a zero-trust model where all entities involved require authentication first prior to having access to the protected services and thus, is capable of overcoming the limitations of adopting VPN within the 5G network. Several research studies have demonstrated the resilience of the SDP framework against various cyber attacks <cit.>. 
The authors in <cit.> demonstrate how SDP framework fits into today's cloud Infrastructure as a Service (IaaS) to serve as a security measure against Denial of Service (DoS) attack. The abstraction of control and data layer in SDN introduces various security challenges. SDP was proposed as a potential solution to mitigates those challenges <cit.>. The SDP controller is combined with the SDN controller to secure the entire network against cyber attacks. The authors in <cit.> showcase the capability of SDP to integrate with NFV and secure VNFs within thenetwork function virtualization infrastructure (NFVI). In a previous work <cit.>, we propose SDP as a security framework within MEC. The SDP components were placed at the edge of the network to block various attacks such as DoS and port scanning attacks. In <cit.>, we introduce a framework known as virtual Evolved Packet Core - virtual Software Defined Perimeter (vEPC-vSDP) that aims to establish secure communications within the mobile core network through an authentication-based approach. By virtualizing the SDP components and integrating them into the virtualized core network, the framework creates a zero-trust environment where only authenticated and authorized core network elements are granted access to each other.To address the inherited security challenges in the 6G network, we propose a new architecture based on Software Defined Perimeter (SDP). The proposed architecture relies on a dynamic firewall configuration where all requests are dropped by default unless authorized by the SDP controller to provide a zero-trust environment within any network that adopts SDP. The effectiveness of the proposed architecture is verified through the implementation of 5G new radio (5g-NR), the 5GC entities along with the SDP modules (i.e., SDP controller, gateway and client). The controller-based authentication and authorization of SDPis utilized to completely secure the 5G network's control and data plane functionalities. 6GC is expected to have almost similar core network entities as 5G <cit.>, therefore our proposal is pertinent to 6GC as well. We further implement OpenVPN to compare with the proposed SDP framework. Both SDP and OpenVPN were evaluated under port scanning attacks and the results reveal the superiority of SDP in blocking such attacks in comparison with VPN.Furthermore, we incorporate Moving Target Defense (MTD) as an extra layer of security in order to alter the attack surface dynamically. By introducing Moving Target Defense (MTD), the security of the network will be augmented as it employs periodic address mutations within the network, utilizing the concept of Network Address Shuffling (NAS). This approach aims to extend the level of protection by constantly changing the network properties at timed intervals. This integration aims to enhance the level of difficulty for potential attackers, introducing higher uncertainty and minimizing the time-frame available for probing and launching attacks. The rest of the paper is structured as follows: section II presents a brief background knowledge on 5G, VPN, SDP and MTD. Section III introduces the proposed combined 5G-SDP architecture as a potential solution to provide network-level security for 5G. Section IV discusses the implementation of the testbed and its performance evaluation. Section V is dedicated to concluding remarks. § THEORETICAL FRAMEWORK In this section, we explain the prior research landscape that is relevant to this work. 
We start with a general description of the 5GC architecture which is then followed by a basic description of VPN, SDP and MTD, and the section is completed with a comparison between SDP and VPN within the context of core networks. §.§ Fifth Generation Core (5GC)Wireless mobile networks consist of the Radio Access Network (RAN) and the Core Network. The 5G RAN (commonly referred to as 5G New Radio (NR)) consists of User equipment (UEs) and gNBs (the new 5G base stations). According to the ETSI reference model, the 5GC adopts a microservice-like architecture that consists of various network functions. These functions are Access and Mobility Management function (AMF), Session Management function (SMF), User Plane Function (UPF), Policy Control Function (PCF), Application Function (AF), Unified Data Management (UDM), Unstructured Data Storage network Function (UDSF), NF Repository Function (NRF), Network Exposure Function (NEF), Authentication Server Function (AUSF) and Network Slice Selection Function (NSSF) <cit.>. Moreover, the 5GC adopts the Control and User Plane Separation of the EPC nodes (CUPS) whereby the control plane functionality (i.e., AMF, SMF, UDSF, PCF, NRF, UDM, AUSF, UDR, AF, NSSF and NEF as shown in Figure <ref>) and the user plane functionality (i.e., UPF as shown in Figure <ref>) are decoupled.The AMF is responsible for access control (access authentication and authorization) and service managements, which include mobility management, context security management, registration management and connection management. The SMF is responsible for allocating IP address to the attached UEs and session management which includes session establishment/modifications according to the desired network policy. The PCF handles policy control framework much like the PCRF in the 4G network. it does so by applying policy decisions to manage the network behaviour. The function of the NRF is to ensure that network functions can find each other through the designated Application Programmable Interfaces (APIs) by providing service registrations and discovery functionality. Additionally, the NRF stores a list of all network functions and their profiles. The UDM is responsible for functionalities such as user authentication, access authorizations, subscription management and handling user identifications. AUSF serves as an authentication server to allow AMF to authenticate UEs within the network. Connecting services to the end users, application traffic routing and collaborating with PCF for policy control are some of the services offered by AF within the 5GC. The NEF is responsible for exposing services and resources over APIs (RESTFUL APIs) within and outside the 5GC. The NSSF maintains a list of all network slice instances as defined by the operators and serves as a point within the network to redirects traffic to the intended network slice at anytime instance.The UPF on the other hand is a single network function that handles packet routing and forwarding duties, packet inspection, Quality of Service (QoS) monitoring and serves as interconnect to the Data Network (DN). For interested readers, refer to <cit.> for a more detailed explanation of all the 5G Network Functions (NF),§.§ Virtual Private Network (VPN) VPN is the most widely adopted method used to provide end-to-end secure connection between two servers <cit.>. 
It provides a secure private communication channel for any two endpoints within an insecure public network, using an authentication-based approach for granting access between the two endpoints. Within the VPN, all traffic is encrypted and the resources are strictly shared among the authorized users, who are recognized through different levels of access control. There are many topologies for VPNs, such as peer-to-peer connections, client-to-server connections, and site-to-site connections <cit.>. Among these topologies, the most widely adopted one is the client-to-server VPN, in which a secure tunnel is established between a VPN client and a VPN server. This allows for the secure transfer of encrypted data between the VPN client and the VPN server. Since a VPN is usually established between an authorized set of users, a strong access protocol is required to secure the network. Consequently, different protocols were developed in the literature for data encryption and user authentication. Among these protocols are Point-to-Point (P2P), Layer 2 Tunneling Protocol (L2TP), Generic Routing Encapsulation (GRE), Internet Protocol Security (IPsec), IP Encapsulation Within IP (IPIP), and Transport Layer Security (TLS). For interested readers, refer to <cit.> for an in-depth understanding of these protocols. OpenVPN is a layer 2 and layer 3 tunneling VPN protocol, which is open source software for community usage. It implements a VPN client and a VPN server and utilizes OpenSSL for encryption purposes. OpenVPN uses pre-shared and certificate-based methods of authentication for authorized VPN clients and servers. For comparison purposes, we adopt OpenVPN within the 5G network, which will be explained later in Section IV.

§.§ Software Defined Perimeter (SDP)

The SDP framework follows a zero-trust model proposed by the National Institute of Standards and Technology (NIST). NIST provides the guidelines required to design a Zero Trust Architecture (ZTA) to secure enterprise network services. Adopting this zero-trust model, the SDP framework consists of a controller, an Initiating Host (IH), and an Accepting Host (AH). The SDP controller handles the authentication and authorization of all hosts (i.e., IHs and AHs) and provides the access protocol for all the available services. Within the controller module, authentication keys (i.e., Single Packet Authentication (SPA) keys) for all hosts are generated, stored in a database (MySQL is adopted for this purpose), and later distributed to the hosts. Any host requesting access to any of the protected services must present these SPA keys, and only upon successful authentication by the controller can it be authorized to access them through the designated SDP gateway. The AH module is responsible for enforcing the rules set by the controller (i.e., block all users from accessing network services unless authorized by the controller). This is achieved by adopting a drop-all policy in the AH module, where all requests are dropped by default prior to successful authentication. With this model, only the IHs with valid SPA keys can gain access to the protected services within the network.
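To illustrate the flavor of SPA-based admission (not the exact wire format used by the SDP implementation adopted later, which has its own packet layout and encryption), a minimal sketch using an HMAC-signed, timestamped packet could look as follows; the field layout, key handling, and freshness window are illustrative assumptions.

```python
import hashlib
import hmac
import os
import time

SHARED_KEY = os.urandom(32)   # hypothetical per-client key provisioned by the SDP controller

def make_spa_packet(client_id: str, key: bytes) -> bytes:
    """Build an SPA-style message: client id, timestamp, and nonce, signed with an
    HMAC so the controller can verify integrity and freshness."""
    body = f"{client_id}|{int(time.time())}|{os.urandom(8).hex()}".encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    return body + b"|" + tag

def verify_spa_packet(packet: bytes, key: bytes, max_age_s: int = 30) -> bool:
    body, _, tag = packet.rpartition(b"|")
    expected = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, tag):
        return False                        # signature mismatch -> drop silently
    ts = int(body.split(b"|")[1])
    return time.time() - ts <= max_age_s    # reject stale (possibly replayed) packets

pkt = make_spa_packet("UE-gNB-01", SHARED_KEY)
print(verify_spa_packet(pkt, SHARED_KEY))   # True for a fresh, untampered packet
```

A gateway following the drop-all policy silently discards any packet that fails such a check, so unauthorized hosts never receive a response that could be used for reconnaissance.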
It is worth mentioning that the AH module adopts a dynamic firewall configuration with the Firewall KNock OPerator (FWKNOP). The SDP framework can be implemented in various forms, such as client-to-gateway (where one or more services are protected behind a gateway running an AH module), client-to-server (where the protected server hosts the AH module itself), server-to-server (where servers offering services such as Representational State Transfer (REST), Simple Object Access Protocol (SOAP), remote procedure call (RPC), or any kind of application programming interface (API) over the Internet can be protected from all unauthorized hosts on the network), and client-to-server-to-client (where a peer-to-peer relationship between the two clients is used for applications such as IP telephony, chat, and video conferencing), to suit a specific application. For this work, we opt for the client-to-gateway model because it is well suited for mobile networks. Figure <ref> presents a more detailed comparison of the SDP framework with VPN in terms of functionality, the authentication mechanism adopted, the security protocols used, the firewall configuration, and some security-related features.

* Functionality: SDP defines a software perimeter around a network to secure it from cyber attacks utilizing the three software modules: the controller, IH, and AH modules. A VPN, in contrast, hides the IP address and location of a server and encrypts outgoing traffic utilizing the VPN client software at the client side.

* Authentication Methodology: SDP adopts one-time random SPA packets for authentication, which are protected with a Hash-based Message Authentication Code (HMAC) signature. A VPN, on the other hand, utilizes various authentication methods according to the type of VPN used. Some of these authentication methods include password-based authentication and IP Security (IPsec), which establishes mutual authentication between the agents at the beginning of a session and negotiates cryptographic keys to use during that session.

* Security Protocols: Mutual Transport Layer Security (mTLS) is used in SDP to enable a two-way authentication process among the controller, IHs, and AHs. In contrast, many security protocols, such as IPsec, TLS, and PPP/L2TP, among others, are adopted for VPNs.

* Firewall Configurations: SDP adopts the dynamic firewall configuration of FWKNOP. The default setting is configured to drop all requests from all users unless authorized by the SDP controller, and only upon positive verification can access be granted. A VPN, on the other hand, uses a static firewall configuration in which the source and destination addresses of the servers involved are configured and the rules for traffic flow are set.

§.§ Moving Target Defense (MTD)

MTD emerges as a highly effective security method that offers robust defense against reconnaissance activities and network mapping. This approach employs a dynamic strategy of continuously modifying the network characteristics, mitigating the inherently static nature of traditional systems that inadvertently grants attackers an advantage in terms of exploiting potential paths and vulnerabilities. In this paper, we opt to implement MTD utilizing the Random Host Mutation (RHM) technique. In this approach, the MTD controller is responsible for assigning random virtual IP addresses from a pool of unassigned IPs to the machines hosting VNFs, while retaining the original IPs and establishing mappings accordingly.
To ensure a high mutation rate and maintain unpredictability, the controller imposes a brief lifespan for these virtual addresses.The MTD framework, comprising a controller and a gateway. The primary function of the MTD controller (MT-Controller) is to conduct network address scanning in order to identify service hosts available within the network. The controller then establishes a virtual IP (vIP), from the pool of unassigned IPs within the network, for the service to be addressed publicly while maintaining a virtual-to-real IP mapping. Access to the protected services is restricted to hosts utilizing the virtual IP (vIP), the controller will deny any requests utilizing the real IP (rIP).The MTD gateway (MT-Gateway) performs address translation from the virtual IP (vIP) to the real IP (rIP) and vice versa, enabling effective routing of traffic to the requested service.Additionally, the MT-Controller (MT-Controller) utilizes connection tracking to ensure seamless connection continuation, even after the shuffling process that leads to a change in the assigned vIP. § PROPOSED 5G-SDP ARCHITECTUREThe proposed combined 5G-SDP architecture as shown in Figure <ref> consists of the 5G components (note that only the available implemented components were included, which are AMF+SMF, UE+gNB and UPF) and the SDP components (i.e., controller and gateways). As depicted inFigure <ref>, gateway 1 is deployed in front of the 5GC to blacken it to unauthorized users. Similarly, the SDP controller is deployed behind the gateway 1 to shield it from unauthorized users. Within the 5GC, two more gateways (i.e., gateway 2 and 3) were deployed to provide a zero-trust environment where all the entities (i.e., in this context AMF+SMF and UPF) involved require verification and authentication through the SDP controller prior to having access to one another. With this combined 5G-SDP architecture, all UEs and gNBs must first authenticate to the SDP controller before access to the 5GC is granted. This ensures that only a positive cross-check of the SPA certificate for any UE+gNB will be considered by the gateway 1 for access authorization. According to the ETSI reference model, the UE and the gNB communicate with the AMF through the designated N1 and N2 interfaces respectively. In the same fashion, the gNB communicates with the UPF through N3 interface and the UPF-SMF communications are carried out through N4 interface. The UPF provides the link to the DN through N6 interface. Note that the proposed combined 5G-SDP architecture adopts these interfaces without any modifications as shown in Figure <ref>. It is worth mentioning that all control and data traffic of the 5G network is routed through the dedicated gateways as illustrated in Figure <ref>.Algorithm 1 presents a Pseudo-code for UE+gNB client authentication and authorization procedure to access AMF+SMF and UPF servers via SDP. The default setting of the SDP gateway is to discard all packets unless verified and authorized by the SDP controller. First, the UE+gNB client request access to AMF+SMF server by sending access request (i.e., the SPA packet) to SDP gateway 1. If the SPA packet is valid, SDP gateway 1 forwards the SPA packet to the SDP controller to be verified and to set the rules for authorization and update the UE+gNB credentials. As soon as UE+gNB client is verified by the SDP controller, the controller updates its credentials and inform SDP gateway 1 the services authorized for UE+gNB client (i.e., AMF+SMF and UPF). 
SDP gateway 1 then updates the firewall rule settings to allow the UE+gNB client's traffic to be forwarded to the intended services within a certain time period t. As long as time t has not expired, SDP gateway 1 maintains that rule and forwards the UE+gNB client's traffic to the services. As soon as t expires, SDP gateway 1 removes the firewall rule and returns to the default setting. Note that Algorithm 1 considers the scenario where the UE+gNB server tries to access the AMF+SMF and UPF servers via SDP. The same approach applies within the 5GC when the AMF+SMF server tries to access the UPF server. The proposed 5G-SDP architecture provides a zero-trust environment within the 5GC and between the RAN and the 5GC. This indeed provides the much-needed software-based security framework for the 5G network.

Additionally, as part of the proposed 5G-SDP architecture, MTD is seamlessly integrated, augmenting the network's security with an additional layer. This integration involves the deployment of the MT-Gateway in front of the 5GC within gateway 1. The MT-Gateway's role is to exclusively permit authorized users who utilize the virtual IP (vIP) to access the services while effectively blocking any traffic using the real IP (rIP) or an expired vIP. This deployment establishes a clear segregation between the virtual and real network addresses, thus enhancing the security posture of the overall system. In a similar manner, the MT-Controller is deployed on the machine hosting the SDP controller, situated behind the gateway. The MT-Controller assumes the responsibility of assigning virtual IPs (vIPs) to both the services and the gateway, while concurrently maintaining a mapping of virtual-to-real (V2R) and real-to-virtual (R2V) addresses. Additionally, the MT-Controller keeps track of all open connections within the network.

Algorithm 2 provides a pseudo-code representation of the procedure for UE+gNB clients to access the AMF+SMF (service) using Network Address Shuffling (NAS) through the SDP framework. The MT-Gateway's default is to discard all packets that use a real IP to access any service. In the 5G-SDP-MTD architecture, the MT-Gateway plays a pivotal role in facilitating access to services requested by User Equipment (UEs) and gNBs, utilizing virtual IP (vIP) addresses. Upon receiving a request, the MT-Gateway examines the packet to verify that the vIP aligns with a corresponding real IP (rIP). Subsequently, the request is forwarded to the SDP controller for authentication and validation of UE+gNB access. Once the requester's authenticity is confirmed, the MT-Gateway modifies the source IP address to the vIP of the gateway, enabling communication with the requested service. In response to the request, acknowledgments are transmitted using the vIP of the gateway as the destination. Consequently, the MT-Gateway adjusts the source and destination IP addresses to match the rIPs of the requester and the gateway. This establishes an open connection, allowing traffic to flow between the requester and the service. When a timeout event is triggered, the MT-Controller initiates another instance of IP mutation. Throughout this process, the MT-Controller employs connection tracking to effectively manage any ongoing connections, mitigating the risk of service interruption.

§ TESTBED AND PERFORMANCE EVALUATION

In this section we provide our testbed specifications and the details of the complete implementation.
We then move forward to discuss the results of the performance evaluations.§.§ Testbed Specification§.§.§ Adopted open source projectThe implementation of the testbed consists of three open source projects: Open5gs for the 5GC and 5G NR, Waverlay Lab's SDP project for the SDP framework and OpenVPN project for VPN. Open5gs provides a 5GC implementation of AMF+SMF, UPF, simulated UE and gNB on VMs, which we adopt for our testbed scenario as shown in Figure <ref>. Waverlay Lab's SDP project provides the SDP client module, SDP controller module and SDP gateway module, which we implemented on VMs as shown in Figure <ref> and the OpenVPN project provides the VPN client, VPN server and Certificate Authorization (CA) server implemented on VMs as well. Note that the VPN client is implemented on VM 1 and VM 2 while the VPN server is implemented on VM 5 and VM 7 as depicted in Figure <ref>. The CA server is implemented on a separate dedicated server for importing and signing certificate requests of both VPN client and server. §.§.§ Specs of the host servers and VMsThe server hosting the VMs is running Linux Ubuntu 18.04 LTS Bionic Beaver and is dedicated to serve as the CA server for the VPN. VMs 1, 2, 5 and 7 are running Linux Ubuntu 20.04 with 1 vProcessor and 1 GB of RAM dedicated for hosting 5G RAN and 5GC as illustrated in Figure <ref>. VMs 3, 4, 6 and 8 are running Linux Ubuntu 16.04 Xenial with 1 vProcessor and1 GB of RAM for the SDP framework (i.e., controller and gateways) as shown in Figure <ref>. §.§.§ Configuration of the SDP projectAccording to the testbed setting with respect to the 5G-SDP, the UE+gNB server will attempt to access the AMF+SMF and UPF servers through the SDP gateway on port 44 and 45 respectively. The SDP gateway is configured to forward traffic received on those two ports (note that only legitimate clients' traffic after positive verification is accepted) to the services (i.e., AMF+SMF server and UPF server) on port 7777 and 8888 respectively. For the VPN settings, the UE+gNB server serves as the VPN client while bothAMF+SMF and UPF serve as VPN server waiting to accept requests from configured clients. §.§.§ Attack scenario and evaluation metricsTo evaluate the performance of the proposed 5G-SDP framework, we compare the effectiveness of the SDP framework with that of VPN in terms of four metrics: the time required to initialize the components of both SDP and VPN, the overheads introduced to the 5GC network by both SDP and VPN, the average throughput and Round-Trip Time (RTT) of both SDP and VPN under DoS attack when they are up and running. A single instance of port scanning attack was performed under both SDP and VPN. This is because it is sufficient to demonstrate whether open ports can be detected in any testbed environment. The performance of any proven solution is often constrained by the availability of computing resources (usually in terms of CPU and memory). To this end, we analyze both CPU and memory usages of SDP and VPN components implemented in the 5G network. We then perform a heartbleed attack with SDP. Heartbleed attack is a well-known vulnerability in Openssl library (version1.0.1 to 1.0.1f), which allows an adversary to steal data such as usernames, passwords, private keys, TLS session keys, etc from the victim's server. Note that the OpenVPN implemented in our testbed uses Openssl version 1.1.1 and thus, not vulnerable to the heartbleed attack. §.§ Results and Discussions Table I presents the results of the four metrics. 
The SDP framework has a slightly higher components' initialization time of 3.2637 seconds compared with the 2.9537 seconds of VPN. This is because the SDP framework requires all the three components (i.e., IH module in the UE+gNB server, SDP controller module and the AH module in the SDP gateway) to start up and connects to one another as explained in section III. Although the VPN has lower initialization time, the SDP has a tighter authentication and authorization approach. Comparing the introduced overhead to the 5G network, the VPN has a lower induced overhead as shown in Table I. The overhead introduced by the SDP framework comes from the controller overhead (i.e., the time required to verify and authorize any legitimate SDP client, in this context UE+gNB server to access the services) and the gateway overhead (the time needed for updating the drop-all policy firewall rule in the AH module). The VPN authentication process for any VPN client (in this context UE+gNB server) is done through the CA server and the VPN server. Although it requires less time, the CA server can be a vulnerable point for attackers. Another approach is to have the CA server within the VPN server at the expense of making the VPN server more prone to cyber attacks. This proves the superiority of the SDP framework over VPN. In terms of average throughput and RTT under DoS attack, SDP was able to achieve higher throughput and lower RTT compare to VPN. VPN suffers from the DoS attack which reflects on the average throughput and the RTT values. Note that this test case was perform for 10 seconds under both SDP and VPN.A port scanning attack is perform under both SDP and VPN to showcase their capability in resisting such attack. To carry out the attack, we use the Nmap utility tool and the results are shown in table I for SDP and VPN respectively. When the SDP framework is up and running, the attack was launched towards the SDP gateway on ports 0-999 and the result shows all the scanned ports as closed. This confirms the ability of the SDP gateway to blacken all ports to any unauthorized clients within the network. The attacker won't be able to discover the ports used for accessing the network services let alone access them. In the event of any sort of breach where the attacker obtain the port numbers, the network services can't be accessed without having a legitimate SPA. When the same attack is performed with VPN (note the attack was launched towards the AMF+SMF server and the same result applies to the UPF server), it shows port 22 as open and running an ssh service. This means that an attacker can perform sniffing attack to obtain the ports to access the AMF+SMF sever. Comparing these two based on this results shows superiority of the SDP framework over VPN.Figure <ref> shows the results obtained for the CPU and memory utilization of both SDP and VPN components. For a fair comparison, we measure both CPU and memory usage in the VMs hosting SDP controller, SDP gateway, CA server and VPN server. It is evident that the SDP gateway requires more CPU and RAM resources compared to its counterpart in the VPN side (i.e., VPN server). This is because SDP gateway performs additional tasks in forwarding SPA packet to the SDP controller for verification and updating the dynamic FWNKOP rule to allow for traffic forwarding for legitimate UE+gNB clients. We can see from the result that the CA server slightly consumes more CPU and memory resources compared to the SDP controller. 
The reason behind this is that the CA server performs more tasks in authentication process (i.e., authenticate itself with the VPN client and VPN server). Overall, both SDP and VPN utilizes a reasonable amount of computing resources that is tolerable for many applications.Figure <ref> shows the result of heartbleed attack with SDP configuration. The attack was launched on SDP gateway 1 immediately after it authenticates to the SDP controller in an attempt to steal the private keys of authentication or any useful information on the server. Even though SDP gateway 1 is using Openssl version 1.0.1f, connection to SDP gateway 1 was refused. This is because SDP gateway only accepts valid SPApackets and thus does not suffer from the heartbleed attack. Note that the OpenVPN used in our implementation uses 1.1.1 version of Openssl and does not suffer from the heartbleed attack as well. However, a heartbleed attack on OpenVPN that uses any of the older vulnerable versions of Openssl will be successful.§ DISCUSSION AND OPEN RESEARCH DIRECTIONS This paper presents a software-based security solution for the 6GC using the SDP framework.As a use scenario, a 5GC-SDP architecture is implemented and evaluated against relevant cyber attacks. The SDP components were deployed alongside the 5G NR and 5GC to establish a zero-trust environment using the dynamic nature of the FWNKOP within the AH module (i.e., SDP gateway) to secure communications between the 5G NR and the 5GC and within the 5GC as well. To showcase the superiority of SDP over VPN, we implement the OpenVPN in the 5G network. The results show the superiority of SDP over VPN. The SDP is capable of blackening the entire protected service from unauthorized users. Although OpenVPN have lower CPU and memory utilization, lower overhead and lower initialization time compare with SDP, SDP is considerably within tolerable values and has a higher resilience to cyber attacks such as port scanning attack. The findings in this work show the relevant open research directions for future studies. In what follows, we discuss some of the open research directions. * Security challenges of the possible 6G RAN-Core convergence : In the 6G architectural paradigm, the 6G RAN functionalities and the 6G Core functionalities are expected to be combined to some extent <cit.>. Some of the core functions are already virtual and distributive in nature in order to be implemented closer to the RAN to facilitate low-latency applications, while high-level RAN functions are being centralized. Combining the RAN and Core of the 6G to some extent can simplify the network and thus, pave the way for implementation of more sophisticated services. However, this will raise other security and privacy challenges, which require further research within the 6G paradigm.* Privacy related security challenges: Privacy has always been an important topic in the research community. It is gaining even more attention as the world shifts toward digital privacy. 6G systems are expected to have more simultaneous connectivity compared to its predecessor (i.e., 5G systems). This will put more pressure on privacy protection of the collected data in the envisioned 6G paradigm <cit.>. 
Therefore, more research effort on digital privacy is paramount to achieve stronger privacy protection.

* Security challenges related to the enabling technologies for 6G networks: Recent advances in cutting-edge technologies such as Artificial Intelligence (AI) and Machine Learning (ML) techniques, Distributed Ledger Technology (DLT) such as blockchain, digital twins, and intelligent edge computing are expected to facilitate the evolution to 6G networks <cit.>. For example, the envisioned Intelligent Radio (IR) for 6G networks is expected to leverage AI/ML techniques to improve channel modeling, resource allocation, beamforming, etc. <cit.>. However, all these promising benefits come at the expense of increased security vulnerabilities, which are inherited from these enabling technologies. Therefore, more research efforts on security and privacy relating to these technologies are required to fully take advantage of the envisioned 6G networks.
http://arxiv.org/abs/2312.17271v1
{ "authors": [ "Zeyad Abdelhay", "Yahuza Bello", "Ahmed Refaey" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20231227025455", "title": "Towards Zero-Trust 6GC: A Software Defined Perimeter Approach with Dynamic Moving Target Defense Mechanism" }
Department of Physics, Ewha Womans University, Seoul 03760, Korea[[email protected]] Department of Physics, Ewha Womans University, Seoul 03760, Korea We consider a modified graphene model under exchange couplings. Various quantum anomalous phases are known to emerge under uniform or staggered exchange couplings. We introduce the twist between the orientations of two sublattice exchange couplings, which is useful for examining how such topologically nontrivial phases under different types of exchange couplingsare connected to one another. The phase diagrams constructed by the variation of exchange coupling strengths and twist angles exhibit rich structures of successive topological transitions.We analyze the emergence of peculiar phases in terms of the evolution of the energy dispersions. Perturbation schemes applied to the energy levels turn out to reproduce wellphase boundary lines up to moderate values of the twist angle. We also discover two close topological transitions under uniform exchange couplings, which is attributed to the interplay of the trigonal-warping deformation due to Rashba spin-orbit coupling and the staggered sublattice potential. Finally the implications of Berry curvature structure and topological excitations in real and pseudo spin textures are discussed. Topological phase transitions induced by the variation of exchange couplings in graphene Gun Sang Jeon January 14, 2024 ========================================================================================empty§ INTRODUCTIONQuantum anomalous Hall effect is a variation of quantum Hall effect which occurs with spontaneously broken time-reversal symmetry in the absence of external magnetic field <cit.>. It is distinguished from quantum Hall effect which requires strong external magnetic field and quantum spin Hall effect which appears in the presence of time-reversal symmetry <cit.>. Quantum anomalous Hall effect makes Chern insulators have dissipationless chiral edge states and insulating bulk states, which is characterized by Chern number <cit.>. Chern number C is physically related to Hall conductivity σ_xy viaσ_xy=Ce^2/h <cit.>. Many candidates have been suggested as materials to exhibit quantum anomalous Hall effect and some of them were successful <cit.>. Since quantum anomalous Hall effect requires band inversion and time-reversal symmetry breaking, it can be naturally considered to catalyze magnetism in topological materials to realize it.Magnetically doped topological insulators such as Cr-doped (Bi,Sb)_2Te_3 films first showed quantum anomalous Hall effect <cit.>. The observation of quantum anomalous Hall effect in intrinsic magnetic topological insulator such as MnBi_2Te_4 flakes was also reported <cit.>.Recently, moiré materials are expected to host quantum anomalous Hall effect due to their strong correlations to break time-reversal symmetry and realized in the heterostructure of hexagonal boron nitride <cit.>.Several pioneering studies motivated extensive theoretical studies on the compounds with honeycomb-type lattice structure and strong spin-orbit coupling <cit.>.In this context graphene was proposed to exhibit quantum anomalous Hall effect in the presence of Rashba spin orbit coupling and exchange coupling <cit.>. This model shows gap opening and nontrivial Berry curvature in the vicinity of K and K^' in the hexagonal Brillouin zone <cit.>. 
The Berry curvature is integrated to produce a nontrivial Chern number in the system, which characterizes the quantum anomalous Hall effect. Such theoretical models are expected to be realized by the addition of transition-metal atoms on top of graphene <cit.>; this has not yet been observed in real materials. However, germanene, which also has a honeycomb lattice, was recently reported to host the quantum spin Hall effect <cit.>. The graphene model with quantum anomalous Hall effect can be extended with additional intrinsic spin-orbit coupling and a staggered sublattice potential <cit.>. While the intrinsic spin-orbit coupling in pristine graphene is weak, the proximity spin-orbit coupling induced in graphene by transition-metal dichalcogenides can be enhanced to the meV scale. Moreover, the proximity spin-orbit coupling acquires a staggered form on sublattices A and B <cit.>. Meanwhile, the exchange coupling can be either uniform or staggered depending on the magnetism of the substrate <cit.>. Based on these facts, topological phases under uniform and staggered regimes of intrinsic spin-orbit coupling and exchange coupling were investigated <cit.>. As a result, a variety of interesting quantum anomalous Hall phases were predicted, such as those with Chern number two for uniform intrinsic spin-orbit coupling and uniform exchange coupling, and those with Chern number one for uniform intrinsic spin-orbit coupling and staggered exchange coupling <cit.>. One may then ask whether such nontrivial phases are connected continuously to one another and how the phases evolve along the path, which is one of the main motivations of our study. In this paper, we investigate the topological phase transitions of the modified graphene model with quantum anomalous Hall effect by varying the relative orientation of the exchange couplings on the two sublattices. Rich phase diagrams are obtained by numerical diagonalization. Topologically nontrivial phases are characterized by Chern numbers, and the changes in Chern number are discussed in terms of the touching of valence and conduction bands. The topological phase transitions for small twist angles are explained quantitatively by perturbation theory. Two successive transitions as well as a distorted trigonal-warping deformation are also found to take place for small twist angles. We scrutinize the nature of the topological phases in terms of the distribution of Berry curvature for the valence bands and topological objects in real and pseudo spin textures.
§ MODELWe consider the half-filled proximity-modified graphene model described by the HamiltonianH=H_0+H_R+H_S+H_I+H_EwithH_0 = -t∑_⟨ i,j⟩,αc^†_iαc_jα, H_R = i λ_R∑_⟨ i,j⟩,α,βc^†_iαc_jβ[(σ̂×d̂_ij)_z]_αβ, H_S = Δ∑_i,αξ_ic^†_iαc_iα, H_I = iλ_I/3√(3)∑_⟨⟨ i,j⟩⟩,α,βν_ijc^†_iαc_jβ[σ̂_z] _αβ, H_E = λ_E∑_i,α,βc^†_iαc_iβ[ m̂_i·σ̂]_αβ.Here, c_iα^†(c_iα) is the creation(annihilation) operator of an electron with spin α at site i on the honeycomb lattice.H_0 describes the hopping between the nearest neighbor sites and the summation runs over all the nearest neighbor pairs ⟨ i,j⟩.H_R represents the Rashba spin orbit coupling of strength λ_R where σ̂ is the vector whose components are Pauli matrices and d̂_ij is the unit vector of the path from site j to i.H_S denotes the staggered sublattice potential of strength Δ with ξ_i= +1fori ∈A, -1fori ∈B.H_I indicates the intrinsic spin orbit coupling between next nearest neighbors with the summation over all the pairs ⟨⟨ i,j⟩⟩ and ν_ij=± 1 when the path from site j to i bends counterclockwise/clockwise.H_E describes exchange couplings of strength λ_E in the direction m̂_i≡(cosϕ_isinθ_i,sinϕ_isinθ_i,cosθ_i)at site i.In this work we will employ the twisted exchange couplings where the exchangecouplings are oriented in z direction at sublattice A (θ_i=0,ϕ_i=0)and it is twisted bythe angle θ_T about the y direction at sublattice B(θ_i=θ_T,ϕ_i=0); this corresponds tom̂_i= (0,0,1)fori∈A, (sinθ_T,0,cosθ_T)fori∈B.The uniform and the staggered exchange couplings correspond to the twisted exchange couplings withθ_T=0 and θ_T=π, respectively.By the continuous variation of the twist angle θ_T we can conveniently examine how the topological phases evolve betweenthe uniform and the staggered exchange couplings.Henceforth we will focus on two values of the uniform intrinsic spin orbit couplings λ_I=-0.05t and 0.05t for sublattice potential Δ=0.1t and Rashba spin-orbit coupling λ_R=0.05t. In the earlier work <cit.> the uniform exchange coupling was shown to result in the same topological transitions for both cases. On the other hand, in the presence of the staggered exchange couplings the resulting intermediate topological phases display different topological invariants.We examine the topological phase transitions by varying the twist angle θ_T with particular attention to the two cases, which will help us tounderstand the underlying physical implications in a variety of topological phase transitions depending on the patterns of exchange couplings. Throughout the paper, we measure all the energy scales in units of the hopping strength t between nearest neighbors and the length scales in units of next-nearest-neighbor spacing a. § RESULTS §.§ Phase DiagramThe topological phases are characterized by Chern number defined byC =1/2π∑_n∫_BZd^2k Ω_n(k),wherethe summation of n runs over all the filled valence bands and Ω_n(k) isBerry curvature of the nth valence band at momentum k, defined byΩ_n(k) = -2∑_n^'≠ nIm⟨ψ_n,k| ∂_k_x H_k |ψ_n^',k⟩⟨ψ_n^',k|∂_k_y H_k|ψ_n,k⟩/ (E_n^',k - E_n,k)^2,with the eigenenergy E_n,k and the eigenfunction ψ_n,k. By the exact diagonalization method, we obtain the eigenvalues and eigenvectors of the Fourier transformed Hamiltonian H_k.Numerical integration of Berry curvature is performed over the Brillouin zone, which yields the Chern number of the phase. Phase diagrams are constructed by the resulting Chern numbers for various exchange coupling strengths λ_E and twist angles θ_T. In Fig. 
<ref> we plot two phase diagrams for λ_I=±0.05 as mentioned in the previous section. The two systems have common behaviors in the topological phase transitions in the limits of small and large λ_E. For small λ_E the system generally displays zero Chern number.On the other hand, for large λ_E,the system exhibits C=2 in the presence of uniform exchange coupling (θ_T=0). As θ_T increases, the system undergoes two successive topological transitions and Chern number reduces by one at each transition. Thus, for staggered exchange coupling (θ_T=π), the resulting phase is topologically trivial in both limits.In the intermediate region of exchange coupling strength λ_E the topological characters of the two systems with λ_I =± 0.05 are very different. The system with λ_I=-0.05 exhibits three successive transitions from C=2 with the increase of θ_T and accordingly we obtain C=-1 forθ_T=π. For λ_I=0.05, in contrast, only a single topological transition occurs with increasing θ_Tand the phase with C=1 persists up to θ_T=π without further transitions. At phase boundaries where Chern number changes by one the lower conduction band and the upper valence band touches at one point k_0. One can find that we find two phase boundaries where k_0 is near K point (displayed in red solid lines) and one with k_0 being near K^' (displayed in blue dashed lines). It is of interest to note that k_0 is located exactly at the symmetric point (K) only on the left red solid line. On the other two phase boundaries, k_0 changes with λ_E although k_0 is close to K or K' in the region displayed in Fig. <ref>. We have obtained the precise positions ofthe phase boundaries by numerically identifying the value ofλ_E^c(θ_T) for which the conduction and the valence bandstouch each other and constructed the phase diagram in Fig. <ref>.The analysis of the variation of the phase boundaries with θ_T reveals the the origin of the different behavior in the intermediate regions of λ_E. As illustrated in Fig. <ref>, we have one red(K) and one blue(K') boundary lines which traverse the whole range of θ_T between the uniform and the staggered exchange couplings. For λ_I=0.05 both λ_E^c(K) and λ_E^c(K') decreases with the increase of θ_T and do not cross each other. Thus the phase with C=1 for small θ_T extends continuously to θ_T=π. For λ_I=-0.05, on the other hand, λ_E^c(K) increases with θ_T and the resulting phase boundary crosses the K' boundary line. The crossing point results in two more successive topological transitions and the system exhibits C=-1 at θ_T=π Another interesting feature in the phase diagram is the existence of metallic regions for intermediate θ_T. The metallic phase shows up when the minimum of the conduction band is lower than the maximum of the valence band, which yields partial filling in the conduction band without the overlap of valence and conduction bands. For λ_I=-0.05 the metallic region is located around the crossing point of two traversing K and K' phase boundaries. Such a metallic region also appears for λ_I=0.05 in the middle of the region with C=1, and it separates C=1 phase for uniform exchange coupling from that for staggered exchange coupling. Figure <ref> displays how the band-touching momentum k_0 changes as the system parameter varies. For small λ_E two bands touch around K point on the “red” boundary and around K' point on the “blue” boundary. As λ_E increases, both |k_0x| and |k_0y| reduce and k_0 monotonically approaches Γ point. 
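To make the numerical procedure behind these phase diagrams concrete, the sketch below evaluates the Berry-curvature formula Ω_n(k) quoted above by exact diagonalization and a finite-difference derivative of the Bloch Hamiltonian, and sums the result over a discrete momentum grid to obtain a Chern number. This is a minimal illustration rather than the code used for the figures in this section: the Bloch Hamiltonian H_bloch, the rectangular integration cell, the grid size, and the finite-difference step are placeholder assumptions to be supplied by the reader (for the half-filled four-band model here, n_occ = 2).

```python
import numpy as np

def berry_curvature(H_bloch, kx, ky, n_occ, dk=1e-4):
    """Berry curvature summed over the n_occ lowest (valence) bands at (kx, ky),
    using the Kubo-like formula with finite-difference derivatives of H(k)."""
    H = H_bloch(kx, ky)
    E, psi = np.linalg.eigh(H)                      # columns of psi are eigenvectors
    dHx = (H_bloch(kx + dk, ky) - H_bloch(kx - dk, ky)) / (2 * dk)
    dHy = (H_bloch(kx, ky + dk) - H_bloch(kx, ky - dk)) / (2 * dk)
    omega = 0.0
    for n in range(n_occ):                          # filled valence bands
        for m in range(len(E)):
            if m == n:
                continue
            num = (psi[:, n].conj() @ dHx @ psi[:, m]) * (psi[:, m].conj() @ dHy @ psi[:, n])
            omega += -2.0 * num.imag / (E[m] - E[n]) ** 2
    return omega

def chern_number(H_bloch, k1_max, k2_max, n_occ, N=200):
    """Riemann-sum approximation of C = (1/2pi) * integral of Omega over the
    Brillouin zone, here taken over a rectangular cell (an assumption)."""
    ks1 = np.linspace(0.0, k1_max, N, endpoint=False)
    ks2 = np.linspace(0.0, k2_max, N, endpoint=False)
    dA = (k1_max / N) * (k2_max / N)
    total = sum(berry_curvature(H_bloch, kx, ky, n_occ) for kx in ks1 for ky in ks2)
    return total * dA / (2.0 * np.pi)
```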
Although K' boundary line starts from θ_T=π, we can find that the twist angle θ_T drastically decreases with the increase of λ_E. As demonstrated in Fig. <ref> the band-touching momentum is close to Γ for λ_E ≳ 3. §.§ Perturbation TheoryIn this section, we apply the perturbation theory to obtainthe phase boundary which is determined by the band-touching at K point. The characteristic equation of the Hamiltonian at K is given by( Δ-λ_I-λ_E-E)× [(Δ+λ_I+λ_E-E)×(-Δ-λ_I+λ_Ecosθ_T-E)×(-Δ+λ_I-λ_Ecosθ_T-E) -9λ_R^2(-Δ-λ_I+λ_Ecosθ_T-E) -λ_E^2sin^2θ_T(Δ+λ_I+λ_E-E)]=0,where E is an energy eigenvalue. For θ_T=0,four energy levels are given byE_1^(0)= λ_I-√((Δ+λ_E)^2+9λ_R^2), E_2^(0)= -λ_I-Δ+λ_E, E_3^(0)= -λ_I+Δ-λ_E, E_4^(0)= λ_I+√((Δ+λ_E)^2+9λ_R^2),and the topological transition occurs at λ_E=Δ by the band-crossing of E_2^(0) and E_3^(0).We apply the perturbation theory by trying the power-series solution of the energy eigenvaluesE_i = E_i^(0) + ∑_n=1^∞ c_i^(n)θ_T^n(i=1,2,3,4).Since the characteristic equation is an even function of θ_T, c_i^(n)=0 for odd n. From the overall factor in Eq. (<ref>) we can also find that E_3=E_3^(0) is independent of θ_T.By inserting E_2 to Eq. (<ref>) and expanding it to the fourth order in θ_T, we find the first two nonvanishing coefficientsc_2^(2)= -λ_E/2-2λ_E^2(Δ+λ_I)/4(λ_I-λ_E)(Δ+λ_I)-9λ_R^2, c_2^(4)= λ_E/24+ 1 / 4(λ_I-λ_E)(Δ+λ_I)-9λ_R^2×[ λ_E^2(Δ+λ_I)/6+c_2^(2)λ_Eλ_I +2(c_2^(2))^2(2λ_I-λ_E+Δ) ]. Figure <ref> displays the results from the perturbation calculation of the second and the fourth order in θ_T for λ_I=±0.05.We can observe thatsecond-order perturbation results reproduce the phase boundarieswell at least up to θ_T=π/6.The fourth-order results show better agreement for higher θ_T than the second-order ones.It is interesting that this approach identifies only one of two phase boundaries which split near(λ_E=Δ and θ_T=0).The reason is that the phase boundary denoted by open circles is caused by the band-touching which does not occur exactly at K point. We will examine the peculiar features of this phase boundary in the next section. §.§ Fine structures near the transition under uniform exchange couplingFigure <ref> presents the phase diagram magnified in the vicinity of the topological transition at θ_T=0. It is remarkable that for the uniform exchange coupling (θ_T=0)the system does not exhibit a direct transition from a topologically trivial phase (C=0) for small λ_Eto a topological phase (C=2) for large λ_E. As λ_E increases, the system undergoes a transition to a topological phase with C=-1 at λ_E,c1=0.1, and successivelyto a second topological phase with C=2 at λ_E,c2=0.1012(1).The energy dispersions at the transition points, plotted in Fig. <ref>(a) and (b), reveals the nature of two transitions. At λ_E,c1=Δ, the valence and the conduction band touches at K point and the Chern number decreases by one. In contrast, at λ_E,c2 the energy dispersion exhibits three band-touching points placed in the form of an equilateral triangle aroundK point, which increases the Chern number by three. It is reminiscent of trigonal-warping deformation which is known to be induced in graphene by Rashba spin-orbit interaction <cit.>. 
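As a concrete check of the characteristic equation at K quoted in the perturbation-theory subsection, the short sketch below builds the quartic in E for given (λ_E, θ_T), extracts its four roots, and scans λ_E for the closing of the gap between the two middle levels; at θ_T = 0 this reproduces the band crossing at λ_E = Δ. The parameter values follow the text (Δ = 0.1, λ_R = 0.05, λ_I = -0.05, in units of t), while the scan grid is an illustrative choice of ours.

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def levels_at_K(lam_E, theta_T, lam_I=-0.05, Delta=0.1, lam_R=0.05):
    """Four energy levels at the K point, obtained as roots of the quartic
    characteristic equation quoted in the perturbation-theory subsection."""
    E = Poly([0.0, 1.0])                              # polynomial variable E
    f1 = (Delta - lam_I - lam_E) - E
    f2 = (Delta + lam_I + lam_E) - E
    f3 = (-Delta - lam_I + lam_E * np.cos(theta_T)) - E
    f4 = (-Delta + lam_I - lam_E * np.cos(theta_T)) - E
    quartic = f1 * (f2 * f3 * f4
                    - 9.0 * lam_R**2 * f3
                    - lam_E**2 * np.sin(theta_T)**2 * f2)
    return np.sort(quartic.roots().real)

def gap_at_K(lam_E, theta_T):
    """Gap between the upper valence and lower conduction level at K (half filling)."""
    E = levels_at_K(lam_E, theta_T)
    return E[2] - E[1]

# Scan the exchange coupling and report where the K-point gap closes.
lams = np.linspace(0.05, 0.15, 2001)
gaps = np.array([gap_at_K(l, theta_T=0.0) for l in lams])
print("K-point gap closes near lambda_E =", lams[np.argmin(gaps)])   # ~0.1 = Delta
```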
The introduction of the sublattice potential shifts the topologicaltransition point from λ_E=0 to λ_E,c1=Δ, and we presume that it gives rise to additional fine splitting of the trigonal-warping deformation at λ_E,c2 from the K-point band-touchingat λ_E,c1.For finite θ_T, each of three band-touching points producesdifferent phase boundary lines, as shown in Fig. <ref>. Two of them merge at finite θ_T, forming a closed phase boundary line which encloses a trivial phase with C=0. Two typical energy dispersions on the closed phase boundary line are shown in Figs. <ref> (c) and (d). They show a single band-touching point with a distorted trigonal-warping deformation. The phase boundary line generated by the third band-touching point is that separating C=2 phase from C=1 phase;this is the one shown in the global phase diagram of Fig. <ref>. We can also observe that metallic regions emerge around the region where the closed phase boundary line is overlapped with that generated bythe K-point band-touching for both systems.We also display the band-touching momentum k_0 on these phase boundaries in Fig. <ref>. As is discussed in the above,the three points at λ_E,c2 form an equilateral triangle, andtwo of them merge when θ_T is increased up to a critical value. The third band-touching point goes towards Γ point as λ_E is increased, and reaches close to Γ point for very large λ_E.§.§ Berry curvatures and winding numbersIn this section, we demonstrate topological properties of the nontrivial phases in terms of Berry curvature and winding numbers. We focus on two systems with θ_T=0.25 and θ_T=0.7 for λ_I=-0.05 and λ_E=0.15.The former and the latter systems exhibit the topological phases with C=2 andC=1, respectively, as shown in Fig. <ref>(a).Figure <ref> shows the distribution of Berry curvature of theindividual and all the valence bands. In both valence bands, Berry curvature concentrates on K and K^'.In the case of θ_T=0.25, the Berry curvature in the lower valence band contributes to the total Chern number negatively both in K and K^'.However, those in the upper valence band are positive, which are much larger than those in the lower valence band.As a result, both K and K^' have positive Berry curvature peaks and their sum produces Chern numbertwo. In the case of θ_T=0.7, the Berry curvature distributions are more or less the same as in the case of θ_T=0.25 except for the area around K.An additional negative peak shows up as well as a positive peak near K in the upper valence band, and the Chern number of this area isreduced by one.Consequently, the total Chern number for K is less than that of θ_T=0.25 by one.Thus, the phase transition from C=2 to C=1 is attributed to the change of Berry curvature distribution around K in the upper valence band. The topological properties can also be represented in terms of the winding numbers in spin textures <cit.>.We calculate pseudo spin ⟨S⟩ associated with two valleys and real spin ⟨σ⟩ in momentum space andthe winding number ω in each texture is defined by ω= - 1/4π∫_ BZM̂(k) ·(∂_k_xM̂(k) ×∂_k_yM̂(k) ) d^2 k, where M̂(k) is the unit vector in the direction of each spinat momentum k.Chern number of the individual band is equal to the winding number, C=ω <cit.>. 
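The winding-number integral ω defined above can be evaluated directly on a momentum grid once the unit-vector field M̂(k) of a given (real or pseudo) spin texture is available. The sketch below is a minimal finite-difference discretization of that definition; the single-skyrmion test texture at the end is a synthetic example of ours, used only to verify that the routine returns a winding number of unit magnitude, and is not one of the textures discussed in the text.

```python
import numpy as np

def winding_number(M, dx, dy):
    """omega = -(1/4pi) * integral of M . (dM/dkx x dM/dky),
    with M of shape (Nx, Ny, 3) holding a unit-vector field M_hat(k)."""
    dMx = np.gradient(M, dx, axis=0)
    dMy = np.gradient(M, dy, axis=1)
    density = np.einsum('xyi,xyi->xy', M, np.cross(dMx, dMy))
    return -density.sum() * dx * dy / (4.0 * np.pi)

# Synthetic single-skyrmion texture (illustrative check only).
L, N = 20.0, 400
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.hypot(X, Y)
theta = np.pi * np.clip(r / (L / 2), 0.0, 1.0)        # polar angle: 0 at centre -> pi at edge
phi = np.arctan2(Y, X)
M = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print("winding number:", winding_number(M, x[1] - x[0], x[1] - x[0]))  # magnitude ~ 1
```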
Figure <ref> describes the pseudo and real spin textures in both valence bands. In the case of θ_T=0.25, merons and antimerons in the pseudo spin textures of the lower and upper valence bands cancel each other in both K and K^', while two skyrmions remain in both K and K^' in the real spin texture of the upper valence band. Thus, the nontrivial property of the system comes from the real spins in the upper valence band. As θ_T increases up to 0.7, the winding number changes from 1 to 0 due to the contribution in the vicinity of K in the upper valence band. This is caused by the destruction of a skyrmion at K through the band-touching of the upper valence and the lower conduction bands at K. § SUMMARY In summary, we have investigated the topological phase transitions of the modified graphene model under twisted exchange couplings. By varying the twist between the directions of the two sublattice exchange couplings, we have examined how the topological phases under uniform exchange couplings are connected to those under staggered exchange couplings. The resulting phase diagrams have been found to exhibit a rich variety of phases. We have performed a perturbative calculation in the twist angle, which successfully describes the phase transition line near the uniform exchange couplings. Topological objects in real and pseudo spin textures have been shown to be the source of the contributions to the topological invariants of the system. Remarkably, we have discovered that the transition from a trivial phase to a topological phase with Chern number two under uniform exchange coupling is not a direct transition. As the exchange coupling increases, the system first makes a transition from the trivial phase to a topological phase with the Chern number reduced by one. At a higher value of the exchange coupling, the trigonal-warping deformation has been found to drive the system to the topological phase with Chern number two. The two close but separate transitions may have their origin in the interplay between the Rashba spin-orbit coupling and the staggered sublattice potential.
http://arxiv.org/abs/2312.16625v2
{ "authors": [ "Jihyeon Park", "Gun Sang Jeon" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231227161418", "title": "Topological phase transitions induced by the variation of exchange couplings in graphene" }
A Survey on Super Resolution for video Enhancement Using GAN 1st Ankush Maity Dept. of Computer Engineering Army Institute of TechnologyPune,India [email protected] 2nd Sourabh Kumar Lenka Dept. of Computer Engineering Army Institute of TechnologyPune,India [email protected] 3rd Roshan Pious Dept. of Computer Engineering Army Institute of TechnologyPune,India [email protected] 4th Vishal Choudhary Dept. of Computer Engineering Army Institute of TechnologyPune,India [email protected] 5th Prof. Sharayu Lokhande Dept. of Computer Engineering Army Institute of TechnologyPune,India [email protected] =======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================Detecting anomalies in fundus images through unsupervised methods is a challenging task due to the similarity between normal and abnormal tissues, as well as their indistinct boundaries. The current methods have limitations in accurately detecting subtle anomalies while avoiding false positives. To address these challenges, we propose the ReSynthDetect network which utilizes a reconstruction network for modeling normal images, and an anomaly generator that produces synthetic anomalies consistent with the appearance of fundus images. By combining the features of consistent anomaly generation and image reconstruction, our method is suited for detecting fundus abnormalities. The proposed approach has been extensively tested on benchmark datasets such as EyeQ and IDRiD, demonstrating state-of-the-art performance in both image-level and pixel-level anomaly detection. Our experiments indicate a substantial 9% improvement in AUROC on EyeQ and a significant 17.1% improvement in AUPR on IDRiD. § INTRODUCTION Deep Convolutional Neural Networks (CNNs) have made significant breakthroughs in various relevant fields of medical image analysis <cit.>. However, current fully supervised methods require a vast amount of annotated abnormal images. Obtaining such images can be challenging, particularly for rare diseases with low incidence rates. Conversely, collecting normal images is relatively easier. Therefore, recent research has focused on unsupervised anomaly detection methods <cit.> that identify anomalies in medical images through training only on normal images. Among medical imaging modalities, fundus image anomaly detection presents an especially challenging scenario. Retinal lesions come in various shapes, sizes, and textures. Learning them in an unsupervised manner can be difficult, particularly when they have indistinct boundaries and are visually similar to normal fundus tissues. Consequently, the development of an accurate and reliable method for detecting anomalies in fundus images is a critical area of ongoing research.Currently, most anomaly detection methods rely on either reconstruction or representation based approaches <cit.>. However, these methods are typically trained solely on non-anomalous data and may not be optimized for discriminative anomaly detection. 
Consequently, they face challenges in learning abnormality representations and distinguishinglesions that are not significantly different from normal tissues. This can lead to inaccurate detection of subtle lesions in fundus images and falsely identifying normal areas as anomalies, as shown in Fig. <ref>. To address this issue, recent studies <cit.> have employed synthetic anomalies. Nevertheless, these techniques, such as DRAEM <cit.>, may produce inconsistent anomalies that do not match the appearance of fundus images, which can be misleading. To overcome these limitations, we propose a novel approach that includes an anomaly generator capable of producing anomalies consistent with the appearance of fundus images. Additionally, to prevent overfitting to synthetic anomalies, we have implemented a reconstruction network which effectively reconstructs and models normal images. The reconstructive features produced by this network are combined with synthetic anomaly features to accurately localize any anomalies. As demonstrated in Fig. <ref>, our approach has been successful in detecting subtle fundus lesions while minimizing false positives by avoiding the misidentification of normal structures as anomalies.Our proposed approach, named ReSynthDetect network which combines reconstruction and synthetic features, is designed for detecting fundus anomalies. The network is trained in two stages as shown in Fig. <ref>. In the first stage, a reconstruction network is trained on normal images, while in the second stage, an anomaly localization network is trained using artificially created anomalies. By incorporating information from the reconstruction network, the localization network accurately identifies anomalies while reducing reliance on synthetic anomalies. We create synthetic lesions by randomly selecting normal training images as source and target images. We augment the source images to create diverse lesions, which are then pasted onto random positions on the target images using self-mix <cit.>. This approach ensures consistency in the fundus images while simulating variations in real retinal lesions. Furthermore, we conducted comprehensive experiments on network architecture and found that, unlike previous literature <cit.>, the best results for retina anomaly detection were obtained by combining the encoder features of the reconstruction network and the encoder features of the anomaly localization network. Through careful anomaly generation and network architecture selection, our proposed approach achieved state-of-the-art results on both the EyeQ <cit.> and IDRiD <cit.>benchmark datasets.Contributions. (1) We propose a new approach named ReSynthDetect network designedfor detecting anomalies in fundus images by combining reconstruction and synthetic features. (2) We introduce a novel anomaly generator that can produce diverse and consistent synthetic anomalies in fundus images. (3) Our proposed approach achieves state-of-the-art results on two benchmark retinal datasets, EyeQ and IDRiD, with a 9% improvement in AUROC for image-level anomaly detection on EyeQ and a 17.1% improvement in AUPR for pixel-level anomaly localization on IDRiD.§ RELATED WORKMost of the existing anomaly detection methods are reconstruction-based <cit.>, which trainmodels to reconstruct normal data during training and detect anomalies by calculating the reconstruction error. fAnoGAN in <cit.> applies an adversarial network for normal image reconstruction and calculates anomaly scores by reconstruction error. 
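To make the reconstruction-error idea shared by these methods concrete, the following sketch scores images with a plain autoencoder: the model is trained only on normal images, and at test time the per-pixel squared reconstruction error serves as an anomaly map whose maximum gives an image-level score. This is a generic illustration, not the architecture of fAnoGAN or of our network; the autoencoder layout and the scoring rule are placeholder choices.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder for 3-channel fundus crops."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def anomaly_scores(model, images):
    """Pixel-level anomaly map = squared reconstruction error;
    image-level score = its maximum over the image."""
    model.eval()
    recon = model(images)
    error_map = ((images - recon) ** 2).mean(dim=1)   # average over channels
    return error_map, error_map.amax(dim=(1, 2))
```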
WDMT in <cit.> uses a weight-decay skip-connection strategy for the reconstruction network and integrates an auxiliary task of histogram-of-oriented-gradients prediction to improve feature representation learning. Lesion2Void in <cit.> masks out normal patches and trains a reconstruction model based on the correlation with neighboring pixels to distinguish anomalies. However, these methods often have relatively large reconstruction errors in normal retinal structures such as the optic disc, optic cup, and blood vessels, resulting in potential false positives <cit.>. To solve this problem, some recent representation-based methods have been proposed <cit.>, which compute anomaly scores based on the similarity of features between the test and normal samples. ReSAD in <cit.> extracts features with a pre-trained model and proposes a spatial and region module for local and long-range anomaly detection. MKD in <cit.> applies knowledge distillation between a pre-trained source network and a smaller cloner network and calculates feature similarity as the anomaly score. Nevertheless, representation-based methods lack the reference of abnormal features, making it challenging to detect subtle lesions in fundus images. A few works attempt to utilize synthetic anomalies for anomaly detection <cit.>. DRAEM <cit.> trains a reconstructive network on synthetic anomalies and utilizes a discriminative network to detect deviations between the synthetic and reconstructed images as anomalies. However, the reconstructed images may contain deviations in normal structures, which can be falsely detected as anomalies. Our work is built on a similar architecture to that described in <cit.>, but we introduce two key technical differences to overcome its limitations. Firstly, we propose a novel anomaly generator that can produce diverse and consistent synthetic anomalies in fundus images. Secondly, unlike previous literature <cit.>, we achieve the best results for retina anomaly detection by combining the encoder features of both the reconstruction network and the anomaly localization network.§ METHOD The pipeline of our proposed method is illustrated in Fig. <ref>. In the first stage, a reconstruction network is trained on normal training images (Sec. <ref>), and its encoder is utilized as a feature extractor in the subsequent training stage. In the second stage, a reconstructive feature-guided anomaly localization network is trained using synthetic anomalies (Sec. <ref>). The synthetic anomalies are obtained by a consistent anomaly generator (Sec. <ref>).§.§ Reconstruction Based Feature Extractor Relying solely on synthetic anomalies can result in overfitting to their specific patterns. Previous work, DRAEM <cit.>, combines reconstructed images with synthetic anomalies to identify deviations as anomalies. However, the reconstructed images may contain deviations in normal structures, leading to false positives. To overcome this, we train a reconstruction network as a feature extractor, which mitigates overfitting to synthetic anomaly patterns and avoids false positives in normal structures. In the first training stage, an autoencoder is utilized as the reconstruction network, with the objective of reconstructing the normal fundus images that serve as the input of the network. This process enables the extraction of reconstructive features, which are subsequently utilized in the second training stage. Formally, the reconstruction network, comprised of an encoder E_r and a decoder D_r, is trained on the normal training fundus image I.
We utilize the L2 loss as the reconstruction loss, which can be calculated as follow: L_Rec =‖ D_r(E_r(I)) -I‖^2_2.Once the training of the autoencoder is completed, its parameters are fixed and will no longer be changed in the following stage. The encoder E_r of the reconstruction network will be used as the extractor of the reconstructive feature, while the decoder D_r will be discarded. §.§ Reconstructive Features Guided Localization Network Due to the lack of real anomaly samples in the training phase, the network needs to be trained on proxy tasks. We utilize the localization of synthetic anomalies as the proxy task to train the network.In the second training stage, a reconstructive feature-guided localization network is trained on the synthetic image I_G and its corresponding mask M_G (see Sec. <ref>).We apply a U-shape model <cit.> with skip connections as the localization network, which consists of an encoder E_l and a decoder D_l, to localize the synthetic anomalies. Weextract both reconstructive features extracted by E_r and localization features extracted by E_l in the synthetic image I_G and concatenatethem in each layer. More specifically, at each layer i of the encoders E_r and E_l, we extract the corresponding reconstructive features F_r^i and localization features F_l^i, respectively, and F_c^i is obtained by concatenating themas F_c^i = concat (F_r^i, F_l^i). Subsequently,F_c^i is used as input in the subsequent layers of encoder E_l and decoder D_l for the localization task. The Focal Loss <cit.> is introduced during training to alleviate the imbalance between the normal pixels and anomalous pixels. It is expressed as follows:L_Focal = { -(1-p)^τlog(p),M_G^x,y = 1, -p^τlog(1 - p) ,M_G^x,y = 0. .Here, p represents theprobability of anomaly at position (x,y) predicted by the model, and τ is a tunable focusing parameter, and set to 2 in this paper. §.§ Consistent Anomaly Generator As depicted in Fig. <ref>, our approach incorporates an anomaly generator that generates lesions based on the source image and subsequently pastes the generated lesions onto the target image. In order to maintain the consistent appearance between normal retinal images and synthetic anomalies,we randomly sample a source image I_s and a target image I_t from the normal training set. To generate a variety of texture anomalies, we randomly select three augmentation methods from our pool of candidates, which include sharpening, solarizing, gamma contrast enhancement, hue change, color temperature alteration, auto-contrast, and random color shifting. We apply these augmentations to the source imageI_s and produce an augmented image. Subsequently, the anomaly generator performs a random cut to obtain crop C_s of variable size from the augmented source image at a random location to generate lesions.To generate anomalies with diverse shapes, we use Perlin noise, a type of gradient noise commonly employed in computer graphics. Our anomaly generator utilizes a Perlin noise generator <cit.> to produce Perlin noise P_n of the same size as C_s. Subsequently, a thresholding process is applied to generate a binary mask P from P_n. Directly pasting the augmented source crop C_s onto target images can potentially introduce inconsistencies in the boundaries of the pasted lesions. 
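A minimal sketch of this crop-and-mask step is given below: a randomly augmented source crop is paired with a thresholded noise mask of the same size and pasted naively onto a target crop. Gaussian-smoothed random noise is substituted for the Perlin noise generator, and the augmentation, crop size, and threshold are illustrative assumptions; the hard paste at the end is exactly the naive operation whose boundary artifacts motivate the self-mix module described next.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def make_anomaly_crop(source_img, target_img, size=96, threshold=0.6):
    """Cut an augmented crop from a normal source image, build a binary mask from
    smoothed noise (a stand-in for Perlin noise), and paste it hard onto a target crop."""
    H, W, _ = source_img.shape
    ys, xs = rng.integers(0, H - size), rng.integers(0, W - size)
    crop = source_img[ys:ys + size, xs:xs + size].astype(np.float32)
    crop = np.clip(crop * rng.uniform(0.7, 1.3) + rng.uniform(-20, 20), 0, 255)  # crude stand-in augmentation

    noise = gaussian_filter(rng.random((size, size)), sigma=8)
    noise = (noise - noise.min()) / (noise.max() - noise.min() + 1e-8)
    mask = (noise > threshold).astype(np.float32)                                 # binary mask P

    yt, xt = rng.integers(0, H - size), rng.integers(0, W - size)
    target_crop = target_img[yt:yt + size, xt:xt + size].astype(np.float32)
    pasted = mask[..., None] * crop + (1 - mask[..., None]) * target_crop         # naive hard paste
    return pasted, mask
```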
To address this issue, we use self-mix paste module, which uses Euclidean Distance Transform algorithm <cit.> to compute the distance between each pixel in themask P and its nearest background pixel, generating a distance map D.Subsequently, fusion weights map W are generated according to theEq. <ref>:W = (1- α) ×D-min(D)/max(D) - min(D) + α .where α is the scaling factor, and set to 0.7 in this paper. Next, self-mix paste module selects a crop C_t in the target image with a random location and fuse the C_s and C_t with W andPaccording to Eq. <ref>.Notably, C_s and C_t have the same size but may not be located at the same position.C = (P⊙W) ⊙C_s + (1- P⊙W) ⊙C_t , C_m = P,where ⊙ denotes the element-wise product, P⊙W denote a smoothing mask. InEq. <ref>, C denotes the generated anomaly crop, the C_m denotes the corresponding mask. Finally, the anomaly generator can obtain the synthetic image I_G and corresponding maskM_G through C and C_m, which is utilized in previous training.§ EXPERIMENTS §.§ Experimental ProtocolDatasets. For evaluation, we used two public datasets: EyeQ <cit.> for image-level anomaly detection and IDRiD <cit.> for pixel-level anomaly localization. We applied Contrast Limited Adaptive Histogram Equalization (CLAHE) <cit.> with a ClipLimit of 2 and a GridSize of 8 to enhance image contrast while preserving local details. The input image size for all datasets was standardized to 768 × 768 for consistency. * EyeQ: The EyeQ <cit.> dataset is a subset of the EyePACS <cit.>dataset used for grading diabetic retinopathy (DR). EyeQ consists of 28,792 fundus photographs with DR grading annotations and corresponding image quality labels. The images in the EyeQ dataset are classified into "good", "usable", or "reject" categories based on the image quality, and only the images classified as "good" are used in our experiments. The DR disease severity in the EyeQ dataset is divided into five grades: 0 (normal), 1 (mild), 2 (moderate), 3 (severe), and 4 (proliferative) <cit.>. Following <cit.>, images with level 0 are considered as normal images, and images with grades 1 - 4 are considered as abnormal images. During the training phase, weutilizes all thenormal images intraining set from the EyeQ dataset, which consist of 6,324 normal images. For testing, we used the complete testing set comprising 8,470 images from the EyeQ dataset.* IDRiD: We used the Indian Diabetic Retinopathy Image Dataset (IDRiD) <cit.> dataset, which contains highly precise DR lesionmasks and is commonly used as a benchmark dataset forlesion localization tasks. Specifically, we used 134 normal retinal images as the training set, 32 normal retinal images and 81 abnormal retinal images with finely annotated DR lesions as the testing set. The abnormal image containsfour types of retinal lesions with fine annotated masks, including microaneurysms (MA), soft exudates (SE), hard exudates (EX), and hemorrhages (HE). Implementation Details. The codes are implemented using PyTorch on a single NVIDIA RTX 3090 GPU with 24GB memory. The initial learning rate is set to 5e-5 with a cosine learning rate decay, reaching a minimum learning rate of 2.5e-5.Additionally, a warm-up strategy with a duration of 50 epochs is implemented. For image-level anomaly detection, we compute the anomaly score by averaging the highest predicted anomaly probabilities of the top 10 pixels in the model's output. Evaluation Metric. 
We evaluate the performance using the standard metric for anomaly detection, AUROC, for both image-level anomaly detection and pixel-level anomaly localization.However, the AUROC can not precisely reflect the localization result especially in fundus images anomaly detection.The reason is thatfalse positive rate is dominated by the a-priori very high number of non-anomalous pixels and is thus kept low despite of false positive detections. We thus additionally report the pixel-wise Area Under the Precision-Recall curve (AUPR), which is more realisticfor the lesion localization performance <cit.>, especially for retinal lesion localization performance <cit.> because it is more appropriate for highly imbalanced classes. Besides, we also evaluate the performance of the method by balanced accuracy (ACC) on pixel-level anomaly detection in order to partially mitigate the issue of imbalanced distribution of positive and negative samples. §.§ Comparisons with the State of the ArtsImage-level Anomaly Detection.Following <cit.>, we compare our proposed method with multiple SOTAs: two reconstruction-based methods: fAnoGAN <cit.> and Lesion2Void <cit.>, a synthetic anomalies based method DRAEM <cit.>, a representation-based method MKD <cit.> as the baseline model for image-level anomaly detection.Tab. <ref> quantitatively compares our model with baselines in the image-level anomaly detection on EyeQ. Grade 0 is considered as a normal image. For the comparisons of 0 vs 1, 0 vs 2, ..., 0 vs 4, we consider only DR graded images from grade 1 to grade 4 as abnormal images. For the comparisons of 0 vs all, we use all abnormal images with DR grades from 1 to 4 for anomaly detection. Our approach surpasses all the baselines in0 vs all grade experiments and surpassesthe previous best SOTA method by 9%, which demonstrates that our method can achieve state-of-the-art performance in image-level retinal anomaly detection. Pixel-level Anomaly Localization. We compare our method with multiple SOTA methods: three reconstruction-based methods: fAnoGAN <cit.>, MemAE <cit.> and WDMT-Net <cit.>, a synthetic anomalies based method DRAEM <cit.> and a representation-based approachReSAD <cit.> as our baseline models.Tab. <ref> quantitatively compares our model with recent approaches on the pixel-level anomaly localization. Compared to the best state-of-the-art methods, our approach shows improvements of 2.6% in AUROC, 4% in ACC, and a significant increase of 17.1% in AUPR.This suggests that our method achieves precise anomaly localization results forretinal lesions (which is supported byFig. <ref>). In summary, our approach has achieved the best performance in the pixel-level anomaly localization task for retinal lesions. §.§ Ablation Study Influences of network architectures. Table <ref> presents the results of ablation experiments on architecture and concatenate type.Without the localization network (first line), utilizing only the autoencoder in the first training stage and calculating reconstruction error as the anomaly score leads to a significant performance drop. This underscores the importance of introducing the localization network for synthetic anomaly localization. In the ablation experiment of the reconstruction network (second and third lines), the absence of the reconstruction network or using randomly initialized network leads to a notable performance drop, emphasizing the significance of the reconstruction network. 
Additionally, concatenating restored images (fourth line) results in a significant performance degradation, indicating that image concatenation may not be suitable for fundus anomaly detection. Comparing the results in the second line with the third and fourth lines, we observe that concatenating random features or restored images leads to a reduction in model performance, indicating the presence of misleading informationwill misguide the model and decrease its overall performance.Influences of anomaly generation methods. Table <ref> presents the results of ablation experiments on the anomaly generator. The "Source Image"denotes the origin of the source image, where "Texture" represents the external texture dataset (DTD <cit.>) used by DRAEM, and "Fundus" represents the normal retinal images used for training. The "Self-Mix" indicates whether the Self-Mix module is applied to combine source and target information, and the "Mask P" indicates whether the Perlin mask P is used for anomaly generation. The experimental results show that using an external texture dataset (DTD <cit.>) instead of consistent normal fundus images leads to a decline in performance. Additionally, the absence of the Self-Mix module results in a noticeable performance decrease, and excluding the Perlin mask P leads to a significant drop in performance. These findings underscore the importance of our proposed approach, which leverages consistent source images, diverse masks, and an appropriate fusion method can greatly improve performance. §.§ Qualitative Results As Shown in Fig. <ref>. It can be seen that the existing reconstruction-based method WDMT and synthetic anomalies based method DRAEM would detect the normal structure of the fundus (like vessel, optic cup and disc) to anomalies, as indicated by the red circles in the figure. Besides, the representation-based method ReSADcan not provide precise localization results, leading to more false positive (yellow) regions around the lesions. Compared to our method, the three baseline methods show more missing (red) areas and more false positive (yellow) areas around the normal structure or lesion in the fundus images. Moreover, all baseline methods will miss some subtle lesions which is hard to detect, while our method can detect various shapes and types of lesions better and provide relatively fine-grained localization results.Additionally, our method exhibits strong generalization capability on more types of lesions, as shown in the Supplementary. § CONCLUSION In this paper, we introduced the ReSynthDetect network, a novel approach for unsupervised anomaly detection in fundus images. Our method incorporated a novel anomaly generator that produces consistent synthetic anomalies.Besides, we introduced a reconstruction network to extract reconstructive features, which were then fused with the localization network for synthetic anomaly localization. Our approach outperformed baseline methods on the IDRiD and EyeQ datasets, demonstrating its effectiveness in retina anomaly detection tasks.
http://arxiv.org/abs/2312.16470v1
{ "authors": [ "Jingqi Niu", "Qinji Yu", "Shiwen Dong", "Zilong Wang", "Kang Dang", "Xiaowei Ding" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231227084023", "title": "ReSynthDetect: A Fundus Anomaly Detection Network with Reconstruction and Synthetic Features" }
Inkjet-Printed High-Yield, Reconfigurable, and Recyclable Memristors on Paper Jinrui Chen†,1, Mingfei Xiao†,1, Zesheng Chen1, Sibghah Khan1,Saptarsi Ghosh2, Nasiruddin Macadam1, Zhuo Chen1, Binghan Zhou1, Guolin Yun1, Kasia Wilk2, Feng Tian1,3, Simon Fairclough2, Yang Xu3, Rachel Oliver2, Tawfique Hasan*,============================================================================================================================================================================================================================================This study investigates the problem of K-armed linear contextual bandits, an instance of the multi-armed bandit problem, under an adversarial corruption. At each round, a decision-maker observes an independent and identically distributed context and then selects an arm based on the context and past observations. After selecting an arm, the decision-maker incurs a loss corresponding to the selected arm. The decision-maker aims to minimize the cumulative loss over the trial. The goal of this study is to develop a strategy that is effective in both stochastic and adversarial environments, with theoretical guarantees. We first formulate the problem by introducing a novel setting of bandits with adversarial corruption, referred to as the contextual adversarial regime with a self-bounding constraint. We assume linear models for the relationship between the loss and the context. Then, we propose a strategy that extends the RealLinExp3 by <cit.> and the Follow-The-Regularized-Leader (FTRL). The regret of our proposed algorithm is shown to be upper-bounded by O(min{(log(T))^3/Δ_* + √(C(log(T))^3/Δ_*),√(T)(log(T))^2}), where T ∈ℕ is the number of rounds, Δ_* > 0 is the constant minimum gap between the best and suboptimal arms for any context, and C∈[0, T] is an adversarial corruption parameter. This regret upper bound implies O((log(T))^3/Δ_*) in a stochastic environment and by O( √(T)(log(T))^2) in an adversarial environment. We refer to our strategy as the Best-of-Both-Worlds (BoBW) RealFTRL, due to its theoretical guarantees in both stochastic and adversarial regimes. § INTRODUCTIONThis study considers minimizing the cumulative regret in the multi-armed bandit (MAB) problem with contextual information. The MAB problem is a formulation of sequential decision-making. In this study, we develop an algorithm that utilizes side information called contextual information. We focus on linear contextual bandits and aim to design an algorithm that performs well in both stochastic and adversarial environments.In our problem setting of contextual bandits, a decision-maker observes an independent and identically distributed (i.i.d.) context each round, draws an arm accordingly, and incurs a loss associated with the chosen arm. Additionally, we assume linear models between the loss and contexts, which is known as the linear contextual bandit problem. The contextual bandit problem is widely studied in fields such as sequential treatment allocation <cit.>, personalized recommendations <cit.>, and online advertising <cit.>. Based on these demands, existing studies explore the methods. For example, <cit.> studies linear contextual bandits. <cit.> provides lower bounds. 
There are numerous other studies in this field <cit.>.The settings of linear contextual bandits are divided into stochastic, with fixed contextual and loss distributions, and adversarial environments, with fixed contexts but adversarially chosen losses[We can define adversarial linear contextual bandits in different ways. For example, there are studies that consider contextual bandits with adversarial contexts and fixed losses <cit.>. On the other hand, several studies address contextual bandits with adversarial contexts and adversarial losses <cit.>. This study only focuses on contextual bandits with i.i.d. contexts and adversarial losses, which have been studied by <cit.> and <cit.>.]. Most existing studies focus on algorithms for either stochastic <cit.> or adversarial linear contextual bandits <cit.>.Thus, optimal algorithms typically differ between the stochastic and adversarial environments. However, a best-of-both-worlds framework exists, aiming for algorithms that are competent in both stochastic and adversarial environments <cit.>. Building on existing work, we propose a best-of-both-worlds algorithm for stochastic and adversarial linear contextual bandits. §.§ Main ContributionIn Section <ref>, we first introduce the setting of linear contextual bandits with adversarial corruption by defining the linear contextual adversarial regime with a self-bounding constraint. This setting is a generalization of the adversarial regime with a self-bounding constraint proposed by <cit.>. Under this regime, we bridge the stochastic and adversarial environments by an adversarial corruption parameter C ≥ 0, where C = 0 corresponds to a stochastic environment and C = T corresponds to an adversarial environment.Then, in Section <ref> inspired by the RealLinEXP3 proposed by <cit.> for adversarial contexts, our algorithm uses the Follow-the-Regularized-Leader (FTRL) approach to adapt well to the stochastic environment. Our algorithm design also follows existing studies in best-of-both-worlds (BoBW) studies, such as <cit.>. We refer to our algorithm as the BoBW-RealFTRL.In Section <ref>, we show the upper bound of the BoBW-RealFTRL as O(min{D/Δ_* + √(CD/Δ_*), √(log(KT)TD)}), where D =Klog(T)(log(T) + dlog(K))log(KT), T is the number of rounds, d is the dimension of a context, and K is the number of arms, when there exists a constant minimum gap Δ_* between the conditional expected rewards of the best and suboptimal arms for any context when we consider a stochastic environment. Note that this regret upper bound holds both for stochastic and adversarial environments. When there does not exist such a gap Δ_*, we show that the regret upper bound is given as O(√(log(KT)TD)). Note that this regret upper bound also holds both for stochastic and adversarial environments, as well as the previous upper bound. Combining them, the regret upper bound is O(min{D/Δ_* + √(CD/Δ_*), √(log(KT)TD)}). Note that the regret upper bound under an adversarial environment can hold without any assumption on the existence of Δ_*. Our regret upper bound is O(min{(log(T))^3/Δ_* + √(C(log(T))^3/Δ_*),√(T)(log(T))^2}) when focusing on the order with respect to T. 
Furthermore, in a stochastic environment, the regret is upper bounded by O((log(T))^3/Δ_*), and in an adversarial environment, the regret is upper bound by O(√(T)(log(T))^2).In summary, we contribute to the problem of linear contextual bandits by proposing a best-of-both-worlds strategy.Our study enhances the fields of linear contextual bandits and best-of-both-worlds algorithms. §.§ Related WorkIn adversarial bandits, the RealLinExp3, the algorithm proposed by <cit.>, yields O(log(T)√(KdT)).In Table <ref>, we compare our regret upper bounds with the upper bounds of <cit.>. Regret upper bounds in a stochastic setting are categorized into problem-dependent and problem-independent upper bounds, where the former utilizes some distributional information, such as the gap parameter Δ_*, to bound the regret, while the latter does not. Additionally, problem-dependent regret upper bounds in the stochastic bandits depend on the margin condition characterized by a parameter α∈ [0, +∞] (for the detailed definition, see Remark <ref>). Our case with Δ_* corresponds to a case with α = +∞. Note that in the adversarial bandits, the margin condition usually does not affect the upper bounds. <cit.> proposes the ConfidenceBAll, and <cit.> proposes OFUL. They both present upper bound with and without the assumption of the existence of Δ_*[Regret upper bounds with the assumption of the existence of Δ_* are called problem-dependent.]. As mentioned above, the regret upper bound under the assumption of the existence of Δ_* corresponds to a case with α = +∞ in the margin condition. In contrast, <cit.>, <cit.>, and <cit.> propose algorithm in a case with α = 1. Furthermore, <cit.> propose the ℓ_1-ConfidenceBall based algorithm whose upper bound tightly depends on unknown α.There are several related studies for linear contextual bandits with adversarial corruption, including <cit.>, <cit.>, <cit.> and <cit.>. <cit.>, <cit.>, and <cit.> consider other corruption frameworks characterized by a constant C∈ [0, T], which is different but related to our linear contextual adversarial regime with a self-bounding constraint. <cit.> uses another constant C^†∈ [0, T] different but closely related to C. For the detailed definitions, see Remark <ref>. The essential difference between our and their settings is the existence of the gap Δ_*. Furthermore, while our regret upper bound achieves the polylogarithmic order, those studies show roughly √(T)-order regret upper bounds. <cit.> presents O(d√(K) + dC^†) regret under an adversarial corruption characterized by a constant C^† > 0.The use of the FTRL approach for adversarial linear bandits is also independently explored by <cit.> to relax the assumption used in <cit.>. In addition to the difference in contributions, while our algorithm utilizes the Shannon entropy in the regularization of the FTRL, <cit.> employs the log-determinant barrier. We expect that combining these two methods will yield a BoBW algorithm with relaxed assumptions, and it is future work.To establish our BoBW regret bounds, we utilize the self-bounding technique <cit.>, which yields poly-logarithmic regret bounds in stochastic environments. This is achieved by integrating regret upper bounds that are contingent on the arm-selection distributions q_t, and a lower bound known as self-bound constraints. The q_t-dependent regret bounds are obtained using FTRL with a negative-entropy regularizer, which is also referred to as the exponential weight method. 
Our approach includes an entropy-adaptive update rule for learning rates, originally developed for online learning in feedback graph contexts <cit.>. This strategy has been proven effective in providing BoBW guarantees for exponential-weight-based algorithms across various sequential decision-making problems, such as multi-armed bandits <cit.>, partial monitoring <cit.>, linear bandits <cit.>, episodic Markov Decision Processes (MDPs) <cit.>, and sparse multi-armed bandits <cit.>. However, a common limitation of these results, stemming from the negative-entropy regularization, is the additional log T factors in the regret bounds. A promising future direction to mitigate this could be exploring alternative regularizers like Tsallis entropy or logarithmic barriers.§ PROBLEM SETTING Suppose that there are T rounds and K arms. In each round t∈[T] := {1,2,…,T}, a decision-maker observes a context X_t∈𝒳⊂ℝ^d, where 𝒳 is a context space. Then, the decision-maker chooses an arm A_t∈[K] := {1,2,…, K} based on the context X_t and past observations. Each arm a∈[K] is linked to a loss ℓ_t(a, X_t), which depends on X_t and round t. After choosing arm A_t in round t, the decision-maker incurs the loss ℓ_t(A_t, X_t). Our goal is to minimize the cumulative loss ∑^T_t=1ℓ_t(A_t, X_t). We introduce the setting in more detail in the following part. Contextual distribution. Let a distribution of X_t be 𝒟, which is invariant across t∈[T]. We also assume that 𝒟 is known to the decision-maker. [Contextual distribution] The context X_t is an i.i.d. random variable, whose distribution 𝒟 is known to the decision-maker, and the covariance matrix Σ = 𝔼[X_t X^⊤_t] is positive definite with its smallest eigenvalue λ_min > 0.Loss. In each round t∈[T], a context is sampled from 𝒟 andthe environment chooses ℓ_t(·, X_t) based on the past observations ℱ_t-1 = (X_1, A_1, ℓ_1(A_1, X_1), X_2, …, X_t-1, A_t-1, ℓ_t-1(A_t-1, X_t-1)). We consider a general framework where ℓ_t is generated in both stochastic and adversarial ways. See Section <ref> for details. Policy. We refer to a function that determines the arm draw as a policy.Let Π be a set of all possible policies π:𝒳→𝒫 := { u = (u_1 u_2 … u_K)^⊤∈ [0, 1]^K |∑^K_k=1u_k = 1 }. Let π(a| x) be the a-th element of π(x). The goal of the decision-maker is to minimize the cumulative loss ∑^T_t=1ℓ_t(A_t, X_t) incurred through T rounds by learning a policy π∈Π. Procedure in a trial. In each round of a trial, the decision-maker first observes a context and then chooses an action based on the context and past observations obtained until the round.Specifically, we consider sequential decision-making with the following steps in each round t∈[T]: * The environment decides (ℓ_t(1, X_t), ℓ_t(2, X_t), …, ℓ_t(K, X_t)) based on ℱ_t-1. * The decision-maker observes the context X_t, which is generated from a known distribution 𝒟.* Based on the context X_t, the decision-maker chooses a policy π_t(X_t) ∈𝒫.* The decision-maker chooses action A_t ∈ [K] with probability π_t(a| X_t).* The decision-maker incurs the loss ℓ_t(A_t, X_t). The goal of the decision-maker is to choose actions in a way that the total loss is as small as possible.§.§ Linear Contextual BanditsThis study assumes linear models between ℓ_t(A_t, X_t) and X_t as follows. 
[Linear models] For each ℓ_t(a, X_t), the following holds:ℓ_t(a, X_t) = x^⊤_t θ_t(a) + ε_t(a),where θ_t(a) is a d-dimensional parameter, and ε_t(a) is the error term independent of the sequence {X_t}_t∈[T].For linear models and variables, we make the following assumptions. [Bounded variables] We assume the following: * There exists an universal constant C_𝒳 > 0 such that for each x ∈𝒳, x_2 ≤ C_𝒳 holds. * There exists an universal constant C_Θ > 0 such that for each θ∈Θ, θ_2 ≤ C_Θ holds. * There exists an universal constant C_ℰ > 0 such that |ε_t(a)| ≤ C_ℰ holds.Under this assumption, there exists C_ℓ := C(C_𝒳, C_Θ, C_ℰ) such that for all ℓ_t(a, X_t), the following holds for each a∈[K] and x∈𝒳:|ℓ_t(a, x)| ≤ C_ℓ.§.§ RegretThis section provides the definition of the regret, a relative measure of the cumulative loss. We evaluate the performance of the decision or policy of the decision-maker by using regret.Let ℛ be a set of all possible ρ:𝒳→ [K]. The quality of a decision by the decision-maker is measured by its total expected regret, defined asR_T = max_ρ∈ℛ𝔼[∑^T_t=1{ℓ_t(A_t, X_t) - ℓ_t(ρ(X_t), X_t)}] = max_ρ∈ℛ𝔼[∑^T_t=1⟨ X_t, θ_t(A_t) - θ_t(ρ(X_t))⟩],where the expectation is taken over the randomness of policies of the decision-maker, as well as the sequence of random contexts, {X_t}_t∈[T], and losses, {ℓ_t(·, X_t)}_t∈[T].Let X_0 be an i.i.d. random variable from the distribution of X_t. Then, because X_t is an i.i.d. random variable from 𝒟, we have𝔼[∑^T_t=1⟨ X_t, θ_t(ρ(X_t))⟩] =𝔼[∑^T_t=1⟨ X_0, θ_t(ρ(X_0))⟩] ≥𝔼[min_a∈[K]∑^T_t=1⟨ X_0, 𝔼[θ_t(a)]⟩]. Based on this inequality, we define an optimal policy a^* asa^*_T(x) = _a∈[K]∑^T_t=1⟨ x, 𝔼[θ_t(a)]⟩.Then, we haveR_T ≤𝔼[∑^T_t=1⟨ X_t, θ_t(A_t) - θ_t(a^*_T(X_t))⟩]. <cit.> refers to ρ as linear-classifier policies, while π_t is called stochastic policies. In our study, decision-makers compare their stochastic policies π_t to the optimal linear-classifier policy a^* using the regret. §.§ Linear Contextual Adversarial Regime with a Self-Bounding Constraint Then, we define the framework of a linear contextual adversarial regime with a self-bounding constraint, which is a generalization of adversarial and stochastic bandits. We say that an environment is in an adversarial regime with a (Δ_*, C, T) self-bounding constraint for some Δ_*, C > 0 if R_T is lower bounded asR_T ≥Δ_* ·𝔼[∑^T_t=1( 1 - π_t(a^*_T(X_0)| X_0) )] - C.The contextual adversarial regime with a self-bounding constraint includes several important settings. Among them, we raise linear contextual bandits in stochastic bandits and adversarial bandits below.[Linear contextual bandits in stochastic bandits.]In stochastic bandits, the bandit models are fixed; that is, (X_t, ℓ_t(1, X_t), …, ℓ_t(K, X_t)) are generated from a fixed distribution P_0. Let θ_1(a) = ⋯θ_T(a) = θ_0(a). Note that when considering stochastic bandits, we have 𝔼[θ_t(a)] = θ_0(a) anda^*_T(x) = _a∈[K]∑^T_t=1⟨ x, 𝔼[θ_t(a)]⟩ = _a∈[K]⟨ x, θ_0(a)⟩∀ x ∈𝒳.Let a^*_T(x) be a^*_0(x). In this setting, we assume that for each P_0, there exist positive constraints Δ_* such that for all x ∈𝒳, min_b≠ a^*_0(x)⟨ x, θ_0(b)⟩ -⟨ x, θ_0(a^*_0(x))⟩≥Δ_*.Then, the regret can be lower bounded as R_T ≥Δ_* ·𝔼[∑^T_t=1( 1 - π_t(a^*_0(X_0)| X_0) )] (See Appendix <ref>).[Linear contextual bandits in adversarial bandits] In adversarial bandits, we do not assume any data-generating process for the ℓ_t(a, X_t), and the loss is decided to increase the regret based on the past observations ℱ_t-1. 
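The round-by-round protocol and the regret defined above translate directly into a simulation loop. The sketch below instantiates the stochastic environment of the first example, with fixed parameters θ_0(a), and accumulates the pseudo-regret of an arbitrary decision rule against the per-context best arm. The Gaussian context distribution, the uniformly random policy, and the dimensions are placeholder assumptions used only to make the loop runnable; they are not part of the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, d = 10_000, 5, 3
theta0 = rng.normal(size=(K, d))            # fixed parameters theta_0(a) (stochastic setting)

def policy(x, history):
    """Placeholder decision rule pi_t(.|x): uniform over the K arms."""
    return np.full(K, 1.0 / K)

history, regret = [], 0.0
for t in range(T):
    x = rng.normal(size=d)                  # context X_t ~ D (assumed Gaussian here)
    p = policy(x, history)                  # choose pi_t(X_t) from context and past data
    a = rng.choice(K, p=p)                  # draw A_t ~ pi_t(.|X_t)
    mean_loss = theta0 @ x                  # <x, theta_0(a)> for every arm
    loss = mean_loss + rng.uniform(-0.1, 0.1, size=K)   # bounded noise eps_t(a)
    history.append((x, a, loss[a]))         # only the chosen arm's loss is observed
    regret += mean_loss[a] - mean_loss.min()            # pseudo-regret vs. best arm for this x

print("cumulative regret of the uniform policy:", regret)
```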
In linear contextual bandits, we often employ the margin condition to characterize the difficulty of the problem instance. The margin condition is defined as follows <cit.>: there exist Δ_*, C_1, a^*, and α∈ [0, +∞], such that for h∈[C_1√(log(d)/T), Δ_*], ℙ(⟨ X_t, θ_t(a^*)⟩≤max_b ≠ a^*⟨ X_t, θ_t(b)⟩ + h)≤1/2(h/Δ_*)^α.Our definition of the linear contextual adversarial regime with a self-bounding constraint corresponds to a case with α = ∞. Extending our results to more general α is a future work.<cit.>, <cit.>, <cit.>, and <cit.> propose another definition of linear contextual contextual bandits with corruption. In their work, instead of our defined ℓ_t(a, X_t), they define a loss asℓ_t(a, X_t) = ℓ_t(a, X_t) + c_t(a),where c_t(a) is an adversarial corruption term. For simplicity, let c_t(a) ∈ [-1, 1]. In <cit.>, the degree of the corruption is determined by C∈ [0, T] defined as C = ∑^T_t=1max_a∈[K]|c_t(a)|. In <cit.>, <cit.>, and <cit.>, the corruption level is determined by another parameter C^†∈ [0, T] defined as ∑^T_t = 1|c_t(A_t)|. Here, C≥C^† holds. Note that the adversarial corruption depends on A_t in <cit.>, while the adversarial corruption is determined irrelevant to A_t in <cit.>, <cit.>, and <cit.>. Unlike ours, they do not assume the existence of Δ_* defined in (<ref>). In this sense, our results and their results are complementary.§ ALGORITHM This section provides an algorithm for our defined problem. Our proposed algorithm is a generalization of the RealLinEXP3 algorithm proposed by <cit.>. We extend the method by employing the Follow-The-Regularized-Leader (FTRL) approach with round-varying arm-drawing probabilities. Our design of the algorithm is also motivated by existing studies about Best-of-Both-Worlds (BoBW) algorithms in different Multi-Armed Bandit (MAB) problems, such as <cit.>.In our setting, we first observe a context and then draw an arm based on that context. We consider stochastically drawing an arm. Therefore, in designing the algorithm, our interest lies in appropriately defining the arm-drawing probability. In the FTRL approach, we define this probability by utilizing an unbiased estimator of the loss function.We refer to our proposed algorithm as BoBW-RealFTRL because it modifies the RealLinEXP3 for a best-of-both-worlds algorithm using the FTRL framework. The pseudo-code is shown in Algorithm <ref>. In the following part, we explain the algorithm. Unbiased loss estimator. For each a∈[K], let us define an estimator of regression parameters as θ_t(a) := Σ^†_t, a1[A_t = a]X_tℓ(A_t, X_t),where Σ^†_t, a is an estimator of 𝔼[1[A_t = a]X^⊤_0 X_0]^-1.Then, the loss can be estimated asℓ_t(a, x) = ⟨ x, θ_t(a) ⟩. In analysis of adversarial bandits, the bias of ℓ_t(a, x) plays an important role.If Σ^†_t, a = 𝔼[1[A_t = a]X^⊤_0 X_0|ℱ_t-1]^-1, then this loss estimator is unbiasedfor x^⊤θ_0(a) because 𝔼[ ℓ_t(a, x)|ℱ_t-1] = x𝔼[ ℓ_t(a, x)|ℱ_t-1] = xΣ^†_t, a𝔼[1[A_t = a]X_tℓ(A_t, X_t) |ℱ_t-1]= xΣ^†_t, a𝔼[1[A_t = a]X_t{X^⊤_t θ_0(a) + ε_t(a) }|ℱ_t-1] = x^⊤θ_0(a).Note that in our algorithm, Σ^†_t, a is just an estimator of 𝔼[1[A_t = a]X^⊤_0 X_0|ℱ_t-1]^-1, andΣ^†_t, a = 𝔼[1[A_t = a]X^⊤_0 X_0|ℱ_t-1]^-1 does not hold in general. Therefore, ℓ_t(a, x) is not unbiased. However, we can show that the bias can be ignored because it is sufficiently small to evaluate the regret in depth.We also define a vector of loss estimators as ℓ_t(x) = (ℓ_t(1, x)ℓ_t(2, x)⋯ℓ_t(K, x))^⊤.Estimation of 𝔼[1[A_t = a]X^⊤_0 X_0|ℱ_t-1]^-1. Our remaining task is to estimate 𝔼[1[A_t = a]X^⊤_0 X_0|ℱ_t-1]^-1. 
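A sketch of the loss estimator just introduced is given below: given an (approximate) inverse design matrix Σ̂†_{t,a}, the parameter estimate θ̂_t(a) and the estimated loss for an arbitrary context follow directly from the two displays above. The code takes Σ̂†_{t,a} as an input; how it is produced (by Matrix Geometric Resampling) is the subject of the next paragraph, and the dimensions here are placeholders.

```python
import numpy as np

def estimate_theta(Sigma_dagger_a, x_t, a_t, a, loss_t):
    """theta_hat_t(a) = Sigma_dagger_{t,a} * 1[A_t = a] * X_t * loss_t(A_t, X_t)."""
    if a != a_t:                        # indicator 1[A_t = a]: only the played arm contributes
        return np.zeros(x_t.shape[0])
    return Sigma_dagger_a @ (x_t * loss_t)

def estimate_loss(theta_hat_a, x):
    """loss_hat_t(a, x) = <x, theta_hat_t(a)>."""
    return x @ theta_hat_a
```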
The difficulty of this task stems from the dependency on A_t, which varies across rounds. To address this issue, we consider Matrix Geometric Resampling (MGR) proposed by <cit.>. The MGR assumes that we have access to the distribution 𝒟 of X_t and estimates 𝔼[1[A_t = a]X^⊤_0 X_0]^-1 by using simulations. We introduce the algorithm in Algorithm <ref>. In Algorithm <ref>, we define W_k, a for which 𝔼[W_k, a|ℱ_t-1] = Σ_t, a holds. Here, from the independence of the context X(k) from each other, we also have 𝔼[V_k, a|ℱ_t-1] = 𝔼[ ∏^k_j=1( I - δ W_j, a) |ℱ_t-1] = (I - δΣ_t, a)^k. Therefore, Σ^†_t, a works as a good estimator of Σ^-1_t, a on expectations when M_t = ∞ because𝔼[Σ^†_t, a|ℱ_t-1] = δI + δ∑^∞_k=1(I - δΣ_t, a)^k = δ∑^∞_k=0(I - δΣ_t, a)^k = δ(δΣ^-1_t, a)^-1 = Σ^-1_t, a.holds. In implementation, M_t is finite, and we introduce an approximation error of Σ^-1_t, a with finite M_t in Lemma <ref>.Our proposed algorithm: BoBW-RealFTRL. Then, we define our policy, called the BoBW-RealFTRL, asπ_t(X_t) := (1-γ_t)q_t(X_t) + γ_t/Kι,where ι is a K-dimensional vector ι = (1 1 ⋯1)^⊤, q_t(x) ∈_q∈Π{∑^t-1_s=1⟨ℓ_t(x), q(x)⟩ + ψ_t(q(x))}for t ≥ 2, q_1(x) := (1/K 1/K ⋯ 1/K)^⊤,ψ_t(q(x)) := -β_t H(q(x)), H(q(x)) := ∑_a∈[K]q(a| x)log(1/q(a| x)),β_t+1 := β_t + β_1/√(1 + (log(K))^-1∑^t_s=1H(q_s(X_s))),β_1 := ω√(log(KdT)/log(K)),ω := C_ℓC_𝒳,γ_t := K/2δλ_minβ_tlog(T),M_t := 2β_t - 1, and δ := 1/2C_ℓC_𝒳.This algorithm is an extension of the RealLinEXP3 proposed by <cit.> and the FTRL. In the studies of BoBW algorithms, the FTRL-based algorithms are often employed, and our algorithm is connected to the literature.§ REGRET ANALYSIS This section provides upper bounds for the regret of our proposed BoBW-RealFTRL algorithm. For notational simplicity, let us denote a^*_T by a^*.To derive upper bounds, we define the following quantities:Q(a^*| x) = ∑^T_t=1{ 1 - π_t(a^*(x)| x) },Q(a^*) = 𝔼[ Q(a^*| X_0) ]. Then, we show the following upper bound, which holds for general cases such as adversarial and stochastic environments. We show the proof in Sections <ref> and <ref>. If the environment generates losses under the contextual adversarial regime with a self-bounding constraint (Definition <ref>), the BoBW-RealFTRL with Σ^†_t, a incurs the total regret R_T ≤ O(( Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K)))√(log(KT))max{Q^1/2(a^*), 1}). For each situation, such as adversarial environments and linear contextual adversarial regimes with a self-bounding constraint, we derive a specific upper bound.First, from Q(a^*) ≤ T, the following regret bound holds without any assumptions on the loss; that is, it holds for an adversarial environment. Assume the same conditions in Theorem <ref>. Then, under an adversarial environment, the regret satisfies R_T = O(( Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K)))√(log(KT))√(T));that is, from β_1 = ω√(Klog(T)(log(T)/δλ_minlog(K) + d)), R_T = O(log(KT)√(Klog(K)Tlog(T)(log(T)/δλ_minlog(K) + d)))holds.Furthermore, we derive a regret bound under the linear contextual adversarial regime with a self-bounding constraint.Suppose that the same conditions in Theorem <ref> hold. Then, under the contextual adversarial regime with self-bounding constraints, the regret satisfiesR_T= O({Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K))}^2log(KT)/ Δ_*+ √(C{Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K))}^2log(KT)/Δ_*));that is, from β_1 = ω√(Klog(T)(log(T)/δλ_minlog(K) + d)),R_T = O(D/Δ_* + √(CD/Δ_*))holds, whereD =Klog(K)log(T)(log(T)/δλ_minlog(K) + d)log(KT). 
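For concreteness, one round of the BoBW-RealFTRL procedure described in the Algorithm section above can be sketched as follows: the MGR loop that builds Σ^†_{t,a} from simulated contexts, the induced loss estimator θ̂_t(a), and the arm-sampling distribution π_t = (1-γ_t)q_t + (γ_t/K)ι. Because ψ_t is the negative-entropy regularizer, the FTRL minimizer q_t(·|x) takes the exponential-weights form proportional to exp(-(1/β_t)∑_{s<t}⟨x, θ̂_s(a)⟩), which is what the policy function below computes. The numeric values of δ, β_t, γ_t, and M_t, the context sampler standing in for the known distribution 𝒟, and all variable names are illustrative assumptions rather than the schedules defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 4, 5
delta, beta_t, gamma_t, M_t = 0.5, 5.0, 0.05, 9    # illustrative constants, not the paper's schedules

def sample_context():                              # stand-in for the known context distribution D
    x = rng.normal(size=d)
    return x / np.linalg.norm(x)

def policy(x, theta_hat_sum):
    # q_t(.|x): FTRL with the negative-entropy regularizer reduces to exponential weights
    # over the cumulative estimated losses sum_{s<t} <x, hat(theta)_s(a)>
    cum = theta_hat_sum @ x                        # shape (K,)
    q = np.exp(-(cum - cum.min()) / beta_t)
    q /= q.sum()
    return (1.0 - gamma_t) * q + gamma_t / K       # mix with uniform exploration

def mgr_inverse(a, theta_hat_sum):
    # Matrix Geometric Resampling: delta * (I + sum_k prod_{j<=k} (I - delta * W_{j,a}))
    acc, prod = np.eye(d), np.eye(d)
    for _ in range(M_t):
        x = sample_context()
        a_sim = rng.choice(K, p=policy(x, theta_hat_sum))
        W = np.outer(x, x) if a_sim == a else np.zeros((d, d))
        prod = prod @ (np.eye(d) - delta * W)
        acc += prod
    return delta * acc

# --- one round t ---
theta_hat_sum = np.zeros((K, d))                   # sum_{s<t} hat(theta)_s(a); here t = 1
x_t = sample_context()
pi_t = policy(x_t, theta_hat_sum)
A_t = rng.choice(K, p=pi_t)
loss_obs = rng.uniform(-1, 1)                      # observed ell_t(A_t, X_t) from the environment
Sigma_dagger = mgr_inverse(A_t, theta_hat_sum)
theta_hat_t = Sigma_dagger @ (x_t * loss_obs)      # hat(theta)_t(A_t); the indicator is zero for other arms
theta_hat_sum[A_t] += theta_hat_t
```

In the full algorithm, the constants above are replaced by the round-varying choices of β_t, γ_t, and M_t defined earlier.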
The result in Corollary <ref> implies R_T = O((log(T))^3/Δ_* + √(C(log(T))^3/Δ_*)).From the definition of the contextual adversarial regime with a self-bounding constraint, we haveR_T ≥Δ_* ·𝔼[∑^T_t=1( 1 - π_t(a^*(X_0)| X_0) )] - C =Δ_* ·Q(a^*) - C.Therefore, from Lemma <ref>, for any λ > 0, we haveR_T = (1+λ)R_T - λ R_T= (1+λ)O(c√(log(KT))√(∑^T_t=1𝔼[H(q_t(X_0))])) - λ R_T≤ (1+λ)O(c√(log(KT))√(∑^T_t=1𝔼[H(q_t(X_0))])) - λΔ_* ·Q(a^*) + λ C,where c = ( Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K))).Here, as well as the proof of Theorem <ref>, from Lemma <ref>, if Q(a^*| x) ≤ e, we have ∑^T_t=1H(q_t(x)) ≤ elog(KT) and otherwise, we have ∑^T_t=1H(q_t(x)) ≤ Q(a^*| x)log(KT). Hence, we have ∑^T_t=1H(q_t(x))≤log(KT)max{e, Q(a^*| x)}. Here, to upper bound R_T, it is enough to only consider a case with Q(a^*| x) ≥ e, and we obtainR_T ≤ (1+λ)O(c√(log(KT))√(Q(a^*)log(KT))) - λΔ_* ·Q(a^*) + λ C≤O({(1+λ)c}^2√(log(KT)))/2λΔ_* + λΔ_*.where the second inequality follows from a√(b) - c/2b ≤a^2/c^2 holds for any a,b,c > 0. By choosingλ = √(c^2log(KT)/Δ_*/ (c^2log(KT)/Δ_* + 2C)).Then, we obtain R_T = O(c^2log(KT)/ Δ_* + √(Cc^2log(KT)/Δ_*)).In the following sections, we show the proof procedure of Theorem <ref>. §.§ Preliminaries for the Proof of Theorem <ref> Let X_0 be a sample from the context distribution 𝒟 independent of ℱ_T. Let D_t(p, q) denote the Bregman divergence of p. q∈Π with respect to ψ_t; that is,D_t(p, q) = ψ_t(p) - ψ_t(q) - ⟨∇ψ_t(q), p - q ⟩.Let us define π^*∈Π as π^*(a^*(x)| x) = 1 and π^*(a| x) = 0 for all a∈[K]\{a^*(x)}. Then, the following lemma holds. The proof is shown in Appendix <ref> If A_t is chosen as our proposed method, the regret is bounded byR_T≤𝔼[∑^T_t=1{γ_t + ⟨ℓ_t(X_0), q_t(X_0) - q_t+1(X_0) ⟩- D_t(q_t+1(X_0), q_t(X_0)) + ψ_t(q_t+1(X_0)) - ψ_t+1(q_t+1(X_0))}+ ψ_T+1(π^*(X_0)) - ψ_1(q_1(X_0))]+ 2∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t(a) - θ_t(a) ⟩] |. To show Lemma <ref>, we use the following proposition from <cit.>.Suppose that π_t∈ℱ_t-1 and that 𝔼[θ_t, a|ℱ_t-1] = θ_t, a for all t, a hold. Then, the following holds:𝔼[∑^T_t=1∑_a∈[K](π_t(a| X_t) - π^*(a| X_t))⟨ X_t, θ_t,a⟩] = 𝔼[∑^T_t=1∑_a∈[K](π_t(a| X_0) - π^*(a| X_0))⟨ X, θ_t,a⟩]. This proposition plays an important role throughout this study.Bounding the stability term. For the stability term ⟨ℓ_t(X_0), q_t(X_0) - q_t+1(X_0) ⟩ - D_t(q_t+1(X_0), q_t(X_0)), we use the following proposition from <cit.>. If ψ_t is given as (<ref>), for any ℓ: 𝒳→ℝ^K and p, q ∈Π, we have⟨ℓ(x), p(x) - q(x) ⟩ - D_t(q(x), p(x)) ≤β_t∑_a∈[K]p(a| x)ξ(ℓ(a, x)/β_t).for any x∈𝒳, where ξ(x) := exp(-x) + x - 1. For ℓ(a, x), if ℓ(a, x)/β_t≥ -1 holds, then Proposition <ref> implies ⟨ℓ_t(x), q_t(x) - q_t+1(x) ⟩ - D_t(q_t+1(x), q_t(x))≤1/β_t∑_a∈[K]π_t(a| x)ℓ^2_t(a, x).For the RHS, we apply the following proposition from <cit.>. For each t∈[T], our strategy satisfies𝔼[∑_a∈[K]π_t(a| X_0)ℓ^2_t(a, X_0)|ℱ_t-1] ≤ 3Kd. Estimation error of the design matrix. Next, we bound ∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t(a) - θ_t(a) ⟩] |. An upper bound of ∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t(a) - θ_t(a) ⟩] | is given as the following lemma. We have | 𝔼[⟨ X_t, θ_t(a) - θ_t(a) ⟩] | ≤ C_𝒳 C_Θ / T. From Lemma 5 in <cit.>, we have | 𝔼[⟨ X_t, θ_t(a) - θ_t(a) ⟩] | ≤ C_𝒳 C_Θexp( - γ_t δ/Kλ_minM_t). Then, we haveexp( - γ_t δ/Kλ_minM_t) = exp(-Klog(T)/δλ_min· 2β_tδλ_min/KM_t)≤exp(-Klog(T)/δλ_min·(2β_t - 1)δλ_min/KM_t)= exp(-log(T)) = 1/T,where recall that we defined M_t = β_t - 1§.§ Proof of Theorem <ref> Then, we show the following lemma. 
The proof is shown in Appendix <ref>.The regret for the BoBW-RealFTRL with Σ^†_t, a is bounded asR_T≤𝔼[∑^T_t=1{γ_t + 3Kd/β_t + (β_t+1 - β_t)H(q_t+1(X_0))}] + β_1log(K) + 2 C_𝒳 C_Θ.From this result, we obtain the following lemma. We provide the proof in Appendix <ref> Assume the conditions in Theorem <ref>. Suppose that β_t and γ_t satisfy (<ref>) and (<ref>). Then, we haveR_T ≤c√(𝔼[∑^T_t=1H(q_t(X_0))]) + 2 C_𝒳 C_Θ,where c = O( Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K))). Next, we consider bounding ∑^T_t=1H(q_t(x)) by Q(a^*| x) as shown in the following lemma. For any a^*:𝒳→[K], the following holds:∑^T_t=1H(q_t(x)) ≤ Q(a^*| x) log(eKT/Q(a^*| x)),where e is Napier's constant.By using the above lemmas and propositions, we prove Theorem <ref>.From Lemma <ref>, if Q(a^*| x) ≤ e, we have ∑^T_t=1H(q_t(x)) ≤ elog(KT) and otherwise, we have ∑^T_t=1H(q_t(x)) ≤ Q(a^*| x)log(KT). Hence, we have ∑^T_t=1H(q_t(x))≤log(KT)max{e, Q(a^*| x)}. From Lemma <ref>, we haveR_T≤c√(∑^T_t=1𝔼[H(q_t(X_0))]) + 2 C_𝒳 C_Θ≤ O(( Klog(T)/β_1(log(T)/δλ_minlog(K) + d) + β_1√(log(K)))√(log(KT))max{Q^1/2, 1}).§ CONCLUSION We developed a BoBW algorithm for linear contextual bandits. Our proposed algorithm is based on the FTRL approach. In our theoretical analysis, we show that the upper bounds of the proposed algorithm are given as O(min{D/Δ_* + √(CD/Δ_*), √(log(KT)TD)}), where D =Klog(T)(log(T) + dlog(K))log(KT). This regret upper bound implies O(min{√(TD/Δ_*), √(log(KT)TD)}) regret in an adversarial environment and O(D/Δ_*) regret in an adversarial environment and O(D/Δ_*) regret in a stochastic environment. This result also implies O((log(T))^3/Δ_*) regret in a stochastic regime and O(√(T)(log(T))^2) regret in an adversarial regime with respect to T.There are four directions for future work in this study. The first direction is to develop an algorithm that does not require a contextual distribution while maintaining the BoBW property. We expect this extension can be accomplished by applying our proposed method to a method proposed by <cit.>, based on the FTRL approach with the log-determinant barrier. We note that standard linear contextual bandits in a stochastic environment do not require the contextual distribution to be known, but it is required for dealing with an adversarial environment.The second direction is to provide lower bounds in our adversarial regimes. In existing studies, <cit.> provides a general upper bound that holds for a high-dimensional setting with various margin conditions. We can incorporate such results to derive a lower bound in our problem setting.The third extension is to develop an algorithm that works for linear contextual bandits without assuming a specific minimum gap constant Δ_*. To address this issue, we might use the margin condition to generalize the minimum gap assumption. Lastly, tightening our regret upper bound is also an open problem.tmlr § DETAILS OF EXAMPLE <REF> When min_b≠ a^*_0(x)⟨ x, θ_0(b)⟩ -⟨ x, θ_0(a^*_0(x))⟩≥Δ_* holds for all x∈𝒳, we haveR_T= 𝔼[∑^T_t=1ℓ_t(A_t, X_t) - ∑^T_t=1ℓ_t(a^*, X_t)]= 𝔼[∑^T_t=1∑_a∈[X]X_t(θ^*_a - θ^*_a^*_t)π_t(a| X_t)]= 𝔼[∑^T_t=1∑_a∈[X]X_t(θ^*_a - θ^*_a^*_t)π_t(a| X_t)1[min_b≠ a^*_t⟨ X_t, θ^*_b⟩ - ⟨ X_t, θ^*_a^*_t⟩≤Δ_*]] + 𝔼[∑^T_t=1∑_a∈[X]X_t(θ^*_a - θ^*_a^*_t)π_t(a| X_t)1[min_b≠ a^*_t⟨ X_t, θ^*_b⟩ - ⟨ X_t, θ^*_a^*_t⟩ >Δ_*]]≥𝔼[∑^T_t=1∑_a∈[X]X_t(θ^*_a - θ^*_a^*_t)π_t(a| X_t)1[min_b≠ a^*_t⟨ X_t, θ^*_b⟩ - ⟨ X_t, θ^*_a^*_t⟩ >Δ_*]]≥Δ_* ·∑^T_t=1𝔼[Q^2_t(a^*(X_t))]. 
§ PROOF OF LEMMA <REF> Let us defineR_T(x) := ∑^T_t=1∑_a∈[K](π_t(a| x) - π^*(a| x))⟨ x, θ_t, a⟩. Then, the following holds:R_T ≤𝔼[R_T(X_0)] + 2∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t, a - θ_t, a⟩] |. Then, we prove Lemma <ref> as follows. From the definition of the algorithm, we haveR_T((a^*_t)_t∈[T])= 𝔼[∑^T_t=1ℓ_t(A_t, X_t) - ∑^T_t=1ℓ_t(a^*_t, X)]= 𝔼[∑^T_t=1⟨ℓ_t(X_t), π_t(X_t) - π^*(X_t)⟩]= 𝔼[∑^T_t=1⟨ℓ_t(X_t), q_t(X_t) - π^*(X_t)⟩ + ∑^T_t=1γ_t ⟨ℓ_t(X_t), μ_U - q_t(X_t)⟩]≤𝔼[∑^T_t=1⟨ℓ_t(X_t), q_t(X_t) - π^*(X_t)⟩ + ∑^T_t=1γ_t ]= 𝔼[∑^T_t=1⟨ℓ_t(X_0), q_t(X_0) - π^*(X_0)⟩ + ∑^T_t=1γ_t ]= 𝔼[∑^T_t=1⟨ℓ_t(X_0), q_t (X_0)- π^*(X_0)⟩ + ∑^T_t=1γ_t ] + 𝔼[∑^T_t=1⟨ℓ_t(X_0) - ℓ_t(X_0), q_t(X_0) - π^*(X_0)⟩]≤𝔼[∑^T_t=1⟨ℓ_t(X_0), q_t (X_0)- π^*(X_0)⟩ + ∑^T_t=1γ_t ] + 2∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t, a - θ_t, a⟩] |.Then, from the definitions of q_t, for each x∈𝒳, we also have∑^T_t=1⟨ℓ_t(x), π^*(x)⟩ + ψ_T+1(π^*(x))≥∑^T_t=1⟨ℓ_t(x), q_T+1(x) ⟩ + ψ_T+1(q_T+1(x)) - ψ_T+1(q_T+1(x)) + ψ_T+1(π^*(x)) - ⟨∇ψ_t(q_T+1(x)), π^*(x) - q_T+1(x)⟩= ∑^T_t=1⟨ℓ_t(x), q_T+1(x) ⟩ + ψ_T+1(q_T+1(x)) +D_T+1(π^*(x), q_T+1(x)),where we used that ⟨∇ψ_t(q_T+1(x)), π^*(x) - q_T+1(x)⟩≥ 0 holds for a convex function ψ_t. Then, it holds that∑^T_t=1⟨ℓ_t(x), π^*(x) ⟩ + ψ_T+1(π^*(x))≥∑^T_t=1⟨ℓ_t(x), q_T+1(x) ⟩ +D_T+1(π^*(x), q_T+1(x)) + ψ_T+1(q_T+1(x))≥∑^T_t=1⟨ℓ_t(x), q_T+1(x) ⟩ + ψ_T(q_T(x)) + D_T(q_T+1(x), q_T(x)) + D_T+1(π^*(x), q_T+1(x)) - ψ_T(q_T+1(x)) + ψ_T+1(q_T+1(x))= ∑^T-1_t=1⟨ℓ_t(x), q_T+1(x) ⟩ + ψ_T(q_T(x)) + ⟨ℓ_T(x), q_T+1(x) ⟩ + D_T(q_T+1(x), q_T(x)) + D_T+1(π^*(x), q_T+1(x)) - ψ_T(q_T+1(x)) + ψ_T+1(q_T+1(x))≥∑^T-1_t=1⟨ℓ_t(x), q_T(x) ⟩ + ψ_T(q_T(x)) + ⟨ℓ_T(x), q_T+1(x) ⟩ + D_T(q_T+1(x), q_T(x)) + D_T+1(π^*(x), q_T+1(x)) - ψ_T(q_T+1(x)) + ψ_T+1(q_T+1(x))≥∑^T_t=1⟨ℓ_t(x), q_t+1(x) ⟩ + ∑^T_t=1D_t(q_t+1(x), q_t(x)) - ∑^T_t=1{ψ_t(q_t+1(x)) - ψ_t+1(q_t+1(x))} + ψ_1(q_1(x)).Therefore, we have∑^T_t=1⟨ℓ_t(x), q_t(x) - π^*(x) ⟩≤∑^T_t=1{⟨ℓ_t(x), q_t(x) - q_t+1(x) ⟩ - D_t(q_t+1(x), q_t(x)) + ψ_t(q_t+1(x)) - ψ_t+1(q_t+1(x))} + ψ_T+1(π^*(x)) - ψ_1(q_1(x)).Combining this with (<ref>), we obtainR_T((a^*_t)_t∈[T])≤𝔼[∑^T_t=1{⟨ℓ_t(X_0), q_t(X_0) - q_t+1(X_0) ⟩ - D_t(q_t+1(x), q_t(X_0)) + ψ_t(q_t+1(X_0)) - ψ_t+1(q_t+1(X_0))} + ψ_T+1(π^*(X_0)) - ψ_1(q_1(X_0)) + ∑^T_t=1γ_t ] + 2∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t, a - θ_t, a⟩] |. § PROOF OF LEMMA <REF>From Lemma <ref>, we haveR_T≤𝔼[∑^T_t=1(γ_t + ⟨ℓ_t(X_0, d), π_t(X_0) - q_t+1(X_0) ⟩ - D_t(q_t+1(X_0), π_t(X_0)) + ψ_t(q_t+1(X_0)) - ψ_t+1(q_t+1(X_0)))+ ψ_T+1(π^*(X_0)) - ψ_1(q_1(x))]+ 2∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t, a - θ_t, a⟩] |. First, we show𝔼[⟨ℓ_t(X_0), π_t(X_0) - q_t+1(X_0) ⟩ - D_t(q_t+1(X_0), π_t(X_0))] ≤3Kd/β_t. To show this, we confirm ℓ_t(a, x)/β_t≥ -1, which is necessary to derive an upper bound from Proposition <ref>. We have1/β_t·⟨ X_0, θ_t(a) ⟩ = 1/β_t· X^⊤_0 Σ^†_t, aX_t ⟨ X_t, θ_t, a⟩1[A_t = a] ≥ - C_ℓ/β_t·| X^⊤_0 Σ^†_t, a X_t |≥ - 1/β_t C_ℓC_𝒳Σ^†_t, a_op≥ - 1/β_t C_ℓC_𝒳δ( 1 + ∑^M_t_k=1V_k, a_op)= - 1/2β_t (M_t + 1),where we used that δ = 1/2C_ℓC_𝒳.Here, recall that we defined M_t as 2β_t - 1. Therefore, ℓ_t(a, x)/β_t = -1 holds. Then, we have⟨ℓ_t(x), π_t(x) - q_t+1(x) ⟩ - D_t(q_t+1(x), π_t(x))≤β_t∑_a∈[K]π_t(a| x)ξ(ℓ_t(a, x)/β_t) ≤1/β_t∑_a∈[K]π_t(a| x)ℓ^2_t(a, x).Then, from Proposition <ref>, we have (<ref>). From ψ_t(q(x)) = -β_t H(q(x)), we have∑^T_t=1(ψ_t(q_t+1(x)) - ψ_t+1(q_t+1(x)))+ ψ_T+1(π^*(x)) - ψ_1(q_1(x))≤∑^T_t=1(β_t+1 - β_t)H(q_t+1(x)) + β_1log(K). From Lemma <ref>, we have∑^T_t=1max_a∈[K]| 𝔼[⟨ X_t, θ_t, a - θ_t, a⟩] | ≤∑^T_t=1C_𝒳 C_Θ1/√(T) = C_𝒳 C_Θ√(T). 
§ PROOF OF LEMMA <REF>Firstly, we note that the following equality holds:𝔼[∑^T_t=1(β_t+1 - β_t)H(q_t+1(X_t+1))]=𝔼[∑^T_t=1(β_t+1 - β_t)𝔼[H(q_t+1(X_t+1))|ℱ_t]]=𝔼[∑^T_t=1(β_t+1 - β_t)𝔼[H(q_t+1(X_0))|ℱ_t]]= 𝔼[∑^T_t=1(β_t+1 - β_t)H(q_t+1(X_0))]We show the following two inequalities:∑^T_t=1(γ_t + 3Kd/β_t) = O( Klog(T)/β_1(log(T)/δλ_minlog(K) + d)√(∑^T_s=1H(q_t+1(X_s)))) ∑^T_t=1(β_t+1 - β_t)H(q_t+1(X_t+1))= O( β_1√(log(K))√(∑^T_t=1H(q_t(X_t))). First, we show (<ref>). From γ_t = K/4δλ_minβ_tlog(T), we obtain∑^T_t=1(γ_t + 3Kd/β_t) = ∑^T_t=1(K/2δλ_minβ_tlog(T) + 3Kd/β_t) = (K/21/2C_ℓC_𝒳δλ_minlog(T) + 3Kd)∑^T_t=11/β_t.From β_t+1 = β_t + β_1/√(1 + (log(K))^-1∑^t_s=1H(q_s(X_s))), we obtain β_t = β_1 + ∑^t-1_u=1β_1/√(1 + (log(K))^-1∑^u_s=1H(q_s(X_s)))≥tβ_1/√(1 + (log(K))^-1∑^t_s=1H(q_s(X_s))).Therefore, we have∑^T_t=11/β_t≤∑^T_t=1√(1 + (log(K))^-1∑^t_s=1H(q_s(X_s)))/tβ_1≤1 + log(T)/β_1√(1 + (log(K))^-1∑^T_s=1H(q_s(X_s))).By using H(q_1(x)) = log(K), we obtain∑^T_t=1(γ_t + 3Kd/β_t) = O( Klog(T)/β_1(log(T)/δλ_minlog(K) + d)√(∑^T_s=1H(q_t+1(X_s)))). Next, we show (<ref>). From the definitions of β_t and γ_t, we have∑^T_t=1(β_t+1 - β_t)H(q_t+1(X_t+1)) = ∑^T_t=1β_1/√(1 + (log(K))^-1∑^t_s=1H(q_s(X_s)))H(q_t+1(X_t+1))= 2β_1√(log(K))∑^T_t=1H(q_t+1(X_t+1))/√(log(K) + ∑^t_s=1H(q_s(X_s))) + √(log(K) + ∑^t_s=1H(q_s(X_s)))≤ 2β_1√(log(K))∑^T_t=1H(q_t+1(X_t+1))/√(log(K) + ∑^t+1_s=1H(q_s(X_s))) + √(log(K) + ∑^t_s=1H(q_s(X_s)))≤ 2β_1√(log(K))∑^T_t=1H(q_t+1(X_t+1))/√(∑^t+1_s=1H(q_s(X_s))) + √(∑^t_s=1H(q_s(X_s)))= 2β_1√(log(K))∑^T_t=1H(q_t+1(X_t+1))/H(q_t+1(X_t+1)){√(∑^t+1_s=1H(q_s(X_s))) - √(∑^t_s=1H(q_s(X_s)))}= 2β_1√(log(K))∑^T_t=1{√(∑^t+1_s=1H(q_s(X_s))) - √(∑^t_s=1H(q_s(X_s)))}= 2β_1√(log(K)){√(∑^T+1_s=1H(q_s(X_s))) - √(H(q_1(X_1)))}≤ 2β_1√(log(K))√(∑^T_s=1H(q_s(X_s))),where we used √(H(q_T+1(X_T+1)))≤√(H(q_1(X_1))). Inequalities (<ref>) and (<ref>) combined with the inequality in Theorem <ref> yield R_T≤𝔼[∑^T_t=1{γ_t + 3Kd/β_t + (β_t+1 - β_t)H(q_t+1(X_0))}] + β_1log(K) + 2 C_𝒳 C_Θ√(T)= 𝔼[∑^T_t=1{γ_t + 3Kd/β_t + (β_t+1 - β_t)H(q_t+1(X_t+1))}]+ β_1log(K) + 2 C_𝒳 C_Θ√(T)= 𝔼[∑^T_t=1{O( Klog(T)(log(T) + δλ_mind)/β_1 δλ_minlog(K)√(∑^T_s=1H(q_t+1(X_s)))) + O( β_1√(log(K))√(∑^T_t=1H(q_t(X_t)))}]+ β_1log(K) + 2 C_𝒳 C_Θ√(T)= ∑^T_t=1{O( Klog(T)(log(T) + δλ_mind)/β_1 δλ_minlog(K)√(∑^T_s=1𝔼[H(q_t+1(X_0))]))) + O( β_1√(log(K))√(∑^T_t=1𝔼[H(q_t(X_t)]))}+ β_1log(K) + 2 C_𝒳 C_Θ√(T).Thus, we obtain the regret bound in Lemma <ref>.
http://arxiv.org/abs/2312.16489v1
{ "authors": [ "Masahiro Kato", "Shinji Ito" ], "categories": [ "cs.LG", "cs.AI", "econ.EM", "stat.ME", "stat.ML" ], "primary_category": "cs.LG", "published": "20231227093218", "title": "Best-of-Both-Worlds Linear Contextual Bandits" }
Make BERT-based Chinese Spelling Check Model Enhanced by Layerwise Attention and Gaussian Mixture Model1st Yongchang Cao National Key Laboratory for Novel Software Technology Nanjing UniversityNanjing, China [email protected] 2nd Liang He National Key Laboratory for Novel Software Technology Nanjing UniversityNanjing, China [email protected] 3rd Zhen Wu National Key Laboratory for Novel Software Technology Nanjing University Nanjing, China [email protected] 4th Xinyu Dai* National Key Laboratory for Novel Software Technology Nanjing University Nanjing, China [email protected] 14, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= BERT-based models have shown a remarkable ability in the Chinese Spelling Check (CSC) task recently. However, traditional BERT-based methods still suffer from two limitations. First, although previous works have identified that explicit prior knowledge like Part-Of-Speech (POS) tagging can benefit in the CSC task, they neglected the fact that spelling errors inherent in CSC data can lead to incorrect tags and therefore mislead models. Additionally, they ignored the correlation between the implicit hierarchical information encoded by BERT's intermediate layers and different linguistic phenomena. This results in sub-optimal accuracy. To alleviate the above two issues, we design a heterogeneous knowledge-infused framework to strengthen BERT-based CSC models. To incorporate explicit POS knowledge, we utilize an auxiliary task strategy driven by Gaussian mixture model. Meanwhile, to incorporate implicit hierarchical linguistic knowledge within the encoder, we propose a novel form of n-gram-based layerwise self-attention to generate a multilayer representation. Experimental results show that our proposed framework yields a stable performance boost over four strong baseline models and outperforms the previous state-of-the-art methods on two datasets. § INTRODUCTIONChinese Spelling Check (CSC) is the crucial task of detecting and correcting spelling errors contained in a given input. Spelling errors are common in daily life. Designing a high-quality spelling checker can serve many NLP tasks, such as improving the quality of ASR and OCR <cit.>, used for essay scoring <cit.>, or more downstream NLP tasks, such as search engines <cit.>.Earlier work on CSC followed a rule-driven pipeline of error detection, candidate generation, and candidate selection <cit.>. With the development of neural networks, some works attempted to solve the CSC problem using sequence-to-sequence models <cit.>. Most recently, BERT-based models <cit.> have been introduced into the CSC task and achieved state-of-the-art (SOTA) performance. 
Most BERT-based CSC models can be grouped into two categories: introducing separate error detection modules <cit.> and introducing similarity constraints through phonetic and glyph information <cit.>.Despite BERT's powerful natural language understanding ability, the existing BERT-Based CSC models are still limited by inadequate utilization of versatile linguistic knowledge, thus hindering the potential of the models for spelling error correction. Concretely, on the one hand, explicit Part-Of-Speech (POS) knowledge as an input feature has been proven to be beneficial in detecting spelling errors <cit.>. However, spelling errors in the CSC corpus will result in severe incorrect labeling by the publicly available POS tagging tools trained on the clean corpus, and the way of simply inputting POS features cannot avoid the misleading of noisy labels for the CSC model. On the other hand, previous works have shown that BERT comprises various types of linguistic features <cit.>, and different linguistic features are adept at handling different types of errors <cit.>. Existing BERT-based CSC methods solely use the top-layer latent representation to make corrections, ignoring the assistance of various linguistic features for Chinese spelling errors, hence weakening the performance of CSC models. Our preliminary investigations demonstrate that low-level information supplementation can indeed yield higher detection and correction performance for non-word type errors[Spelling errors can be divided into non-word type that causes lexical anomalies and real-word type that causes semantic anomalies <cit.>.]. Nevertheless, effective interlayer information fusion strategies specific to the type of spelling error remain to be explored.To address the above two issues, we design a practical heterogeneous knowledge-infused framework, i.e., explicit POS knowledge and implicit hierarchical linguistic features in BERT, to strengthen the BERT-based CSC models. Specifically, to solve the first issue aforementioned, we utilize an auxiliary-task learning strategy driven by Gaussian Mixture Model (GMM). We perform token-level and task-level loss annealing by the GMM and heuristic strategy for the noisy POS feature, while using the auxiliary task to fuse explicit knowledge, the utilized loss annealing strategy can effectively reduce the underlying interference of massive noisy POS tags to the CSC model. For the second limitation, we exploit a layerwise self-attention mechanism based on n-gram tokens to establish an information pipe between the intermediary encoder layers and the classifier. The exploited n-gram token-based attention query term can provide a well-focused information supplement specific to the type of error.We employ our proposed framework on four strong baseline models. Results from extensive experiments indicate that our proposed framework can yield a stable performance improvement on the BERT-based CSC models and outperform the previous state-of-the-art method across two datasets. Further in-depth analysis also validates the effectiveness of each proposed module. 
The contributions are summarized as follows: * A new framework named Auxiliary Task learning based on Loss Annealing with Layerwise self-Attention (ATLAs) is designed to improve BERT-based CSC models, which can be universally and effectively applied to the diverse array of BERT-based CSC models.* A loss annealing strategy is utilized for auxiliary task training, which reduces the sensitivity of baseline models to the massive incorrect POS labeling and alleviates the performance degradation in knowledge-dependent Chinese spelling error correction.* Several multilayer representation techniques have been explored and compared, and the exploited n-gram-based layerwise self-attention is verified to be effective in the CSC model.* Extensive experiments applied to four strong BERT-based CSC models show that our knowledge-infused framework can improve all the baseline models across two different datasets and achieve state-of-the-art performance for the CSC task. § RELATED WORKS §.§ BERT-Based CSC ModelsAs a critical NLP application, The CSC task has attracted much attention from the NLP community. Due to the context-sensitive nature of the CSC task <cit.>, recent models all use BERT as the base corrector. The BERT-based corrector mainly has two optimization strategies. One is to add an independent detection module. Soft-Masked BERT <cit.> leverages a Bi-GRU network to locate spelling errors and uses the error probability to soft embed input characters. The other is to introduce the phonetic and graphic information of characters. E.g., SpellGCN <cit.> and ReaLiSe <cit.> use fixed similar character sets or pre-trained models to incorporate visual and phonological similarity knowledge into the CSC model to modify the prediction logits of the Masked Language Model. However, these models ignore the dependence of the CSC task on explicit prior knowledge of POS and implicit hierarchical linguistic features in BERT. §.§ Explicit Prior Knowledge InjectionPrior knowledge has been introduced in many knowledge-intensive NLP tasks to enhance the performance of the pre-trained model <cit.>. DPL-Corr <cit.> combines the POS tag with contextual character representation and uses the mixed information for the detection module in the CSC pipeline framework. Reference <cit.> continuously identifies weaknesses during the CSC model training process and generates adversarial samples to incorporate explicit knowledge that may be lacking in the BERT model. However, these methods either inevitably introduce noise in incorrect POS tags or increase the cost of training. In contrast, our strategy does not require additional training costs and reduces the impact of severe knowledge noise. §.§ Implicit Hierarchical Features in BERT Many analytical or practical works have demonstrated that BERT composes hierarchical linguistic features at different layers. Reference <cit.> verified that BERT captures phrase-level information in the lower layers, and this information gradually dilutions in the higher layers. BERT4GCN <cit.> utilizes outputs from intermediate layers of BERT and positional information to augment Graph Convolutional Network and verifies the benefits of the strategy in the aspect-based sentiment classification task. Most relevant to the CSC task, <cit.> revealed that out-of-domain points in different layers correspond to different linguistic phenomena, e.g. lower layers correspond to low-frequency tokens. In comparison, grammatically anomalous inputs are out-of-domain in higher layers. 
However, to the best of our knowledge, no attempt has been made to investigate how to integrate well-focused hierarchical information to assist in spelling error correction.§ METHODOLOGY §.§ Problem Formulation The Chinese Spelling Check task can be formalized as follows: Given a Chinese sequence of n characters X = {x_1, x_2, …, x_n}, the goal of the model is to convert it into a sentence Ŷ = {ŷ_̂1̂, ŷ_̂2̂, …, ŷ_̂n̂}, where the misspelled characters in X will be replaced with the correct characters in Ŷ. In the public Chinese Spelling Check shared task, X and Ŷ are set to have the same length. The CSC model function can be regarded as a mapping function f:X →Ŷ. Compared to traditional translation tasks, most of the characters in Ŷ in the CSC task are copied directly from X. §.§ Generic BERT-Based CSC ModelMost recent BERT-based CSC models employ BERT as the character feature extractor and utilize a unique phonetic and graphic encoder to fuse additional similarity constraints for spelling correction. A general model framework is shown in Fig. <ref>. Input X obtains the character embedding E = {𝐞_1, 𝐞_2, …, 𝐞_n} through the BERT embedding layer, and then obtains the contextual representation {𝐡^L_1, 𝐡^L_2, …, 𝐡^L_n} through 12 transformer blocks, where L represents the number of encoder layers in BERT. In addition, different strategies are adopted to obtain the phonetic and glyph information. Ultimately, the contextual representation with additional information will be sent to a FullyConnected classifier to predict the correct sequence Ŷ. §.§ N-gram Based Layerwise Self-Attention Many probing tasks have proven that different BERT layers have individual abilities to uncover linguistic anomalies <cit.>. Our preliminary experiments also validate that varied levels of intermediate information within BERT are good at handling different types of spelling errors. To take full advantage of the implicit hierarchical knowledge inside the BERT encoder, we exploit a strategy of initiating an attention query on the encoder's intermediary layers by using n-gram tokens to fuse richer information. The primary processing step is shown on the right side of Fig. <ref>.Specifically, for each input character x_i, assume that its BERT middle-level encoding representations are symbolized as H_i = {𝐡_i^0, 𝐡_i^1, …, 𝐡_i^L} (where 𝐡_i^0 is the embedding 𝐞_i), layerwise self-attention calculates the final representation 𝐡_i of the character x_i as follows: q_i= [𝐡_i-1^L; 𝐡_i^L; 𝐡_i+1^L] * W_Q K_i= H_i * W_K S_i= softmax (q_i ·K_i^⊤/√(d_K))𝐡_i=∑_ℓ = 0^L s_i^ℓ·𝐡_i^ℓ Where i ∈ [1, n] represents the character positional index of the input X, W_Q, W_K are trainable matrices that generate the query vector q_i and the key array K_i of the character x_i, and K_i corresponds to H_i, which has L+1 vectors, representing the number of encoder layers. [·] is the vector concatenation operation to generate the n-gram token. Then, the mixed representation q_i is used to query the dependence of the character x_i on low-level information according to the n-gram form token at positions i. Finally, the resulting attention score S_i is used to calculate a weighted sum of the multilayer representations. Essentially, S_i represents the supplemented information achieved from the low-level to top-level representation. Note that we omit the activation matrix for the value vectors to retain the original semantics learned by the BERT. 
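One possible PyTorch rendering of the layerwise attention defined above is sketched below, restricted to a single attention head for brevity (the multi-head variant described next splits the hidden dimension into subspaces). The tensor layout, the zero-padding used for h_{i-1} and h_{i+1} at sentence boundaries, the bias-free projections, and the module name are implementation assumptions of this sketch rather than details fixed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NgramLayerwiseAttention(nn.Module):
    """Single-head sketch: a trigram query built from the top layer attends over
    the stack of intermediate representations of the same character."""
    def __init__(self, hidden: int):
        super().__init__()
        self.w_q = nn.Linear(3 * hidden, hidden, bias=False)   # query from [h_{i-1}; h_i; h_{i+1}]
        self.w_k = nn.Linear(hidden, hidden, bias=False)       # keys from every layer's h_i^l
        self.scale = hidden ** 0.5

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (batch, L+1, n, hidden) -- embedding output plus all encoder layers
        top = layer_states[:, -1]                               # (batch, n, hidden)
        left = F.pad(top, (0, 0, 1, 0))[:, :-1]                 # h_{i-1}^L (zero-padded at i = 0)
        right = F.pad(top, (0, 0, 0, 1))[:, 1:]                 # h_{i+1}^L (zero-padded at i = n-1)
        q = self.w_q(torch.cat([left, top, right], dim=-1))     # (batch, n, hidden)
        k = self.w_k(layer_states)                              # (batch, L+1, n, hidden)
        scores = torch.einsum('bnh,blnh->bnl', q, k) / self.scale
        weights = scores.softmax(dim=-1)                        # attention over the L+1 layers
        # no value projection: the weighted sum keeps BERT's original layer semantics
        return torch.einsum('bnl,blnh->bnh', weights, layer_states)
```

The stacked layer_states input can be obtained by collecting the embedding output together with every encoder layer's hidden states (e.g., via output_hidden_states=True in Transformers-style BERT implementations) and stacking them along a new layer axis.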
Owing to the query based on n-gram representation, the strategy can fine-tune the preferenced intermediary representations based on the different spelling error types.We also use the multi-head mechanism. Specifically, for the m-head self-attention, the attention operation first divides the original latent embedding into m subspaces, reducing the computational cost and making the model pay attention to different aspects of information. For the j-th attention header, it corresponds to the subspace Q_j, K_j, V_j, and uses (<ref>) to calculate self-attention. Lastly, all heads are concatenated to obtain the ultimate characters latent representation H = {𝐡_1, 𝐡_2, …, 𝐡_n}. The operation can be formulated as follows: head_j = Attention (Q_j, K_j, V_j) MultiHead(Q, K, V) = [head_1, head_2, …, head_m]§.§ Auxiliary Loss Annealing Joint Learning Explicit POS information provides good guidance for identifying many commonly-confused Chinese characters <cit.>. However, the misspellings contained in the corpus guarantee that severe noise is contained in the knowledge annotations based on the publicly available tagging tools. How to mitigate the impact of noise labeling in knowledge injection is the key to the CSC task. For this reason, we designed an auxiliary-task joint-learning strategy based on loss annealing. Specifically, after obtaining the multilayer representation H, the model utilizes separate plain FullyConnected(FC) layers as classifiers to make predictions for the main CSC task as well as the auxiliary POS tagging task for knowledge injection. After obtaining the spelling prediction result Y = {y_1, y_2, …, y_n} and the POS tagging result Z = {z_1, z_2, …, z_n}, the model calculates the corresponding losses of main task ℒ^m and the auxiliary task ℒ^a through the relevant labels as shown in (<ref>). ℒ^a = -∑_i=1^n ℒ^a_i = -∑_i=1^n logp_a(z_i = ẑ_̂î | X) ℒ^m = -∑_i=1^n ℒ^m_i = -∑_i=1^n logp_m(y_i = ŷ_̂î | X) Where Ẑ represents the prior label for auxiliary task. Since the losses of noisy labels and clean labels tend to be subject to different Gaussian distributions <cit.>, after acquiring the ℒ^m of the auxiliary task, we apply the popular used GMM <cit.> to distinguish noisy labels by feeding the auxiliary loss items ℒ^m. We feed the auxiliary losses to a 2-component Gaussian distribution and use Expectation-Maximization (EM) algorithm[<https://en.wikipedia.org/wiki/Expectation-Maximization>] to fit the GMM to the observations. Let α_i represent the probability of i^th pos tag belonging to the Gaussian component with the smaller mean, which can also be considered as the clean probability due to the small-loss theory <cit.>. The final loss function in the forward propagation is calculated as (<ref>): ℒ = η_t * ℒ_a + α_i × (1 - η_t) * ℒ_m η_t= 1/1+e^β(-t+T/2) Where η represents the annealing factor, which is in the form of inverse sigmoid, t indicates the current forward step, and T indicates the total number of updates in the training phase, β is the smoothing factor.The auxiliary tagging task essentially injects explicit POS information into the BERT representation by fine-tuning the pretrained language model, and the annealing strategy provides different sensitivity to prior knowledge at different characters and different stages in the training process. Finally, we summarize the above training process as Algorithm <ref>.§ EXPERIMENTWe use the same data preprocessing method as previous works <cit.>. 
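As a concrete illustration of the loss-annealing strategy above, the following sketch fits a two-component Gaussian mixture to per-token losses to obtain the clean probabilities α_i, evaluates the inverse-sigmoid annealing factor η_t, and combines the two task losses as written above. It assumes that the token-level losses fed to the mixture are those of the auxiliary tagging task (where the label noise resides), uses scikit-learn's GaussianMixture purely for illustration, and takes as default the smoothing factor value reported in the experimental setup below.

```python
import numpy as np
import torch
from sklearn.mixture import GaussianMixture

def clean_probability(aux_token_losses: torch.Tensor) -> torch.Tensor:
    """alpha_i: posterior of the small-mean component (small loss = likely clean POS tag)."""
    losses = aux_token_losses.detach().cpu().numpy().reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    alpha = gmm.predict_proba(losses)[:, clean_comp]
    return torch.from_numpy(alpha).to(aux_token_losses.device).float()

def annealing_factor(t: int, total_steps: int, beta: float = 8e-4) -> float:
    """eta_t = 1 / (1 + exp(beta * (-t + T/2))), the inverse-sigmoid schedule defined above."""
    return 1.0 / (1.0 + np.exp(beta * (-t + total_steps / 2.0)))

def combined_loss(loss_aux_tokens, loss_main_tokens, t, total_steps):
    # token-level weighting following the combined loss as written above:
    #   L = eta_t * L_a + alpha_i * (1 - eta_t) * L_m
    eta = annealing_factor(t, total_steps)
    alpha = clean_probability(loss_aux_tokens)
    return (eta * loss_aux_tokens + alpha * (1.0 - eta) * loss_main_tokens).mean()
```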
Specifically, to verify the practicality of the proposed ATLAs strategy, we use two manual CSC datasets released by SIGHAN <cit.>. As with the SpellGCN and ReaLiSe model, we also introduced the additional 271K generative corpus <cit.> to ensure comparability. We discarded the 2013 dataset due to the poor annotation quality, for which a good-performing model may obtain bad scores. Following previous works, the characters are converted to simplified Chinese using OpenCC[< https://github.com/BYVoid/>]. The statistics of the data are displayed in Table <ref>. §.§ Baselines and Settings We apply our designed ATLAs to several baseline models to explore its universality and effectiveness. Since different CSC models employ different strategies, we make necessary modifications to apply the proposed optimization strategies. In each BERT-based CSC model, ATLAs is applied as follows: * BERT-Finetune is a model for fine-tuning BERT directly using the CSC task. We use the last layer of the BERT encoder to initiate the n-gram query to obtain a multilayer representation as described in section <ref>, and use the result to perform the CSC and auxiliary tasks.* Soft-Masked BERT <cit.> proposes a soft-masking technique that sets the embedding of predicted misspelled characters to be similar to theembedding. We only apply our strategies in its correction module as in the previous mode in the BERT-Finetune, without changing its principal detection module and soft-masking framework.* SpellGCN <cit.> incorporates phonological and visual similarity knowledge into the BERT model through a specialized graph convolutional network on the fixed connected graph of similar characters. We use a plain application form of ATLAs, like in BERT-Finetune.* ReaLiSe <cit.> utilizes three additional Transformer Blocks to fuse multimodal information of the Chinese characters, including phonetic information encoded by Recurrent Neural Network and graphic information encoded by Convolutional Neural Network. We extend the layerwise self-attention range to ReaLiSe's additional transformer blocks and use the top-level representation to initiate the n-gram-based query. Other settings remain unchanged. For all baseline models, we initialize BERT with the BERT-wwm model <cit.>. Following the original papers, we fine-tune ReaLiSe with the AdamW <cit.> optimizer for 10 epochs. For the remaining models, we fine-tune them with the AdamW optimizer for 6 epochs. The batch size is set to 64, and the learning rate is set to decay from 5e-5 to 1e-5 using the CosineScheduler. The attention head m is set to 8, and the smoothing factor β is set to 8e-4.We use trigram to take into account adjacent characters on both sides. For the remaining parameters, we retain the settings of the original papers.We recall that our proposed approach is approximately orthogonal to these based methods, which means that the original incorporation of phonetic or graphic information is preserved in our model wholly intact.It is worth noting that ATLAs neither increases the size of the training corpus nor the training epochs of the models, i.e., compare the related works that need to introduce additional training epochs <cit.>, there is no additional training cost.For POS information, we use the THULAC tool[ <http://thulac.thunlp.org/>] to perform word segmentation and POS tagging on the training dataset. 
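The word-level tagger output is then expanded into per-character auxiliary labels; a minimal sketch of this expansion, assuming the positional B-/I- prefixing described next, is given below. The POS tags in the usage comment are illustrative rather than taken from the THULAC tag set.

```python
def char_level_pos_tags(segmented: list[tuple[str, str]]) -> list[str]:
    """Expand (word, pos) pairs from the tagger into per-character labels.

    The first character of a word receives a 'B-' prefix and the remaining
    characters receive 'I-' prefixes, so the auxiliary task predicts one label
    per input character, aligned with the CSC targets.
    """
    labels = []
    for word, pos in segmented:
        for k, _ in enumerate(word):
            labels.append(("B-" if k == 0 else "I-") + pos)
    return labels

# e.g. char_level_pos_tags([("环境", "n"), ("被", "p"), ("坏", "a")])
# -> ['B-n', 'I-n', 'B-p', 'B-a']   (tag names here are illustrative)
```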
We also merge the location information into the feature itself, e.g., B-pos indicates the first character for a particular POS tag, while I-pos indicates the middle and end characters in a word. §.§ Main Results The accuracy, precision, recall, and F1 score are reported as the evaluation metrics, which are commonly used in the CSC task. All metrics are provided for the detection and correction sub-tasks. All experiments were conducted four times, and the performance of the model using average parameters was reported. The code and the trained model are publicly released at the following address [<https://github.com/1250658183/ATLAs>].Table <ref> shows the final performance. In each case, we test the baseline with and without the addition of our ATLAs strategy. In the four baseline models, ATLAs yields consistent performance improvements. Specifically, at the correction level, ATLAs exceeds the sentence level F1 score of the baseline models by 3.1, 2.2, 2.1, 2.6 percent on the SIGHAN 2014, and 3.5, 3.4, 2.5, 2.3 percent on the SIGHAN 2015, which verifies the effectiveness of the ATLAs strategy. It can be observed that ATLAs improves the performance of the vanilla BERT more than the other three baseline models. The reason is that the additional information contained in the layerwise self-attention can help BERT to constrain the similarity between the predicted and the input characters to some extent. The other three baseline models use different modules to model similarity, so there is a partial overlap in the optimization space. Nevertheless, ATLAs is still able to produce stable performance enhancements over the best-performing model ReaLiSe and achieve state-of-the-art results.We made a more detailed statistical analysis on two baselines. At the sentence level, BERT-Finetune made 585 adjustments, of which only 418 adjustments were correct (recorded as 418/585). Meanwhile, BERT-ATLAs made 424/555 correct predictions. Similarly, the original ReaLiSe made 433 correct predictions in 570 adjustments (433/570), while our method made 446/574 correct predictions. This indicates that ATLAs have the ability to discover more errors and can effectively reduce the overcorrection of the model.ECOPO <cit.> is a related technique that utilizes contrastive probability optimization to adjust the training loss of CSC models. ECOPO combined with ReaLiSe achieves SOTA performance on the SIGHAN datasets by adjusting the robust baseline ReaLiSe. On the SIGHAN 2014 and SIGHAN 2015 datasets, ECOPO combined with ReaLiSe achieves 69.2% and 78.5% F1 scores on the correction subtask, respectively. In contrast, our method maintains a performance advantage of 1.5% and 1.4% and improves the SOTA performance, further verifying the superiority of our proposed framework. §.§ Ablation StudyWe perform ablation experiments on each module to explore the effects of auxiliary task learning based on GMM and layerwise self-attention on the model. The results of ablation experiments are shown in Table <ref>. In group BERT-LA, only the layerwise self-attention strategy is used to fine-tune the BERT-based model. The BERT-ATL group retains only the training strategy based on auxiliary task learning.Layerwise self-attention and GMM-driven auxiliary task provide 1.7 and 2.0 percent improvement on the correction F1 score, respectively. 
The performance gains from BERT-LA verify that low-level encoding information is beneficial to the CSC task.In addition, it can be seen that BERT-ATL can bring about an improvement of 2.7 percent on the detection F1 score. Since the auxiliary task utilizes prior POS information to increase the margin between correct and incorrect character representations during fine-tuning, thus it helps reduce the difficulty of the detection subtask. We show some visualization examples in section <ref>.§ DISCUSSIONIn this section, we conduct a more in-depth analysis of the effective implementation of each optimization. We explore the results of the experiment on BERT-Finetune performed on SIGHAN 2015. §.§ Prior Knowledge Injection StrategyTo explore the impact of POS information injection strategies on model performance, we compare four different strategies. The results are displayed in Table <ref>. In Hard-Embedding, the POS feature is concatenated with the BERT hidden state as with DPL-Corr <cit.>.Essentially the Hard-Embedding mode directly uses the prior POS as an input feature, DPL-Corr only uses the POS feature for error detection, but we also use it for error correction. In Hard-Joint, η is set to 0.5, and α_i is discarded, which has the effect of taking the mean loss of multiple training tasks directly; Full-Annealing uses the loss function proposed in Formula <ref>; Part-Annealing is the combination of the Full-Annealing and Hard-Joint strategies, whereby η is calculated as shown in (<ref>). To further show the effect of error propagation, the column entitled “C-F1 noise” displays results when an additional 20% noise tags are artificially introduced. η_t = 1/1+e^β(-t+T/2), t < T/20.5,t ≥T/2 In columns without artificial noise tags, all the control groups introducing POS information outperformed the base group significantly. This steady improvement confirms that the addition of prior POS tagging is of benefit to the CSC task.In addition, annealing strategies consistently yield higher performance than Hard-type strategies because annealing strategies allow the use of GMM and loss annealing to mitigate the impact of noise tags and increase the margin between correct and misspelled character representations, thereby reducing the difficulty of the main CSC task and improving the overall performance.Moreover, Full-Annealing exhibits better performance than the other three strategies. We believe there are two reasons. First, Full-Annealing does not need to fine-tune the CSC model according to the auxiliary task in the later stage of training. The loss is dominated by the primary correction task, which helps the model converge more smoothly. Second, the incorrect corpus inevitably introduces noise tags, which would mislead the model using the Hard-type strategy. Full-Annealing utilizes the annealing module weights the auxiliary tags through the probability distribution of auxiliary loss more thoroughly to reduce the impact of noise tags. As shown in the "C-F1 noise" column, Full-Annealing produces a minor performance degradation when knowledge noise is artificially introduced, which also validates this conjecture. §.§ Multilayer Representation StrategyTo explore the effective utilization strategy of hierarchical knowledge in the CSC model, we carry out the following exploratory experiments. The results are shown in Table <ref>. 
In this experiment, in the strategy Mean, the mean of the BERT layer representations is taken; In ResNet, the embedding layer is connected to the top layer via a residual connection; In ResNet5, a uniform selection is made of five BERT internal layers for a residual connection, i.e., layers 3, 6, 9, 12 and the embedding layer; In Last-Query, the query is initiated using the single latent representation of a character in the last hidden layer; Ngram-Query is the strategy defined in (<ref>).As shown in Table <ref>, the simple Mean does not improve performance relative to baseline but degrades performance, indicating that the naive fusion method is not effective but may introduce a lot of noise to the fused representation. Additionally, the ResNet results in slight performance improvements since the residual connection can help to shrink the margin between the multilayer representation and the target representation for the original correct inputs. When the introduced layers reach ResNet5, the performance improvement tends to be flat, and if all the middle layers are introduced, the model essentially falls back to the Mean policy. Comparing ResNet with layerwise attention, the latter consistently performs better as ResNet is unable to consider the relationship between the different encoder layers, whereas the attention mechanism allows the model to choose information adaptively based on the type of error.Ngram-Query yields better performance than the Last-Query strategy. Ngram-Query uses the mixed representation of successive n-gram tokens for query, which can help the model more easily identify whether the input characters are non-word errors, and then help the model allocate attention weight depending on the error type. §.§ The Impact of N-gram Size To investigate the optimal n-gram size of the layerwise self-attention query, we subsampled a portion of the data and performed comparative experiments. The performance of various n-gram sizes in the error correction subtask is shown in Table <ref>. The table findings demonstrate that the shorter unigram and bigram cannot capture the bidirectional contexts of a character, resulting in an inferior performance. The more extended n-gram sizes can cause defocusing in the attention mechanism. Through preliminary experiments, we set the size of the n-gram to 3, which is the smallest size that fuses bidirectional contexts of characters. §.§ Comparison of computational CostQuantitative analysis experiments were conducted to assess the impact of ATLAs enhanced framework on model computational complexity, and the results are presented in Table <ref>. All experiments are conducted on the BERT-Finetune without loss of generality, and analogous conclusions are drawn for other baselines. The model size indicates the local storage size of the model before and after the addition of the ATLAs enhanced framework, the training time indicates the time required to train the model for 1000 rounds with the specified batch size, and the inference time indicates the time required to infer the SIGHAN 2015 test set. The GeForce RTX 3090 graphics card is utilized for all experiments. The results indicate that the impact of ATLAs on model size is minimal since the EM algorithm does not need to consume local space, and the only source of model size growth is the FC layer used for the auxiliary task.In addition, ATLAs increases the training time by 7.7 percent, partly due to the auxiliary task prediction. 
The more important part comes from the iteration of the EM algorithm at each epoch.However, it should be noted that the ATLAs framework can maintain the same inference efficacy as the original model since it requires only inference output during the test phase without calculating loss items. §.§ Visualization Analysis Can the loss annealing strategy effectively inject explicit knowledge? To get a more intuitive sense of the role of POS information, we performed a t-SNE dimension-reduction analysis[<https://projector.tensorflow.org/>] on the character representations before and after the fine-tuning by auxiliary POS tags. We chose a common character, “地”(“de”), which is a commonly-used auxiliary (-ly) and a noun (ground). By randomly selecting 100 errors and 500 correct instances in the test data, the representations before and after the auxiliary task fine-tuning are embedded as shown in Fig. <ref>:The margins between correct and incorrect embeddings are significantly larger after fine-tuning. This shows that POS information can help the model better distinguish between correct and misspelled characters. In addition, after fine-tuning, the degree of aggregation of character embedding is higher because POS information can help the model recognize the usage of different semantics of characters, thus assisting the model in determining whether they are used correctly.To explore the role of low-level information in correction, we visualized an attention weight map in a set of control instances. Figure <ref> shows the attention map when the input token was modified from “被坏” (be broken) to “破坏” (destroy) in the “你的工厂把环境被坏” (Your factory destroys the environment) example. This instance was not modified in the baseline model. By fusing richer intermediate information, the model can identify and correct this misspelling. Figure <ref> shows the attention map when the correct “号” (number) is left unchanged in “我们对号码有兴趣” (We are interested in numbers). The baseline model incorrectly changed it to “数” (digital). ATLAs allocates attention to the embedding layer to reduce the distance between the final representation and the original embedding for the original correct inputs. This helps the model predict the original characters, intuitively explaining why the ResNet group can increase model performance. §.§ Limitations Although the proposed framework obtains superior performance, it has limitations, which are discussed below. First, the existing methods have only been verified on the CSC model, which means our framework currently only serves spelling error correction under equal lengths of input and output. The method's applicability has yet to be explored for Chinese grammatical errors with more complex error types. Second, we conducted a case analysis of the shortcomings of existing methods. We discovered that a considerable number of model errors occurred when multiple or consecutive erroneous characters were in the input sentence. We believe this is because the baseline model predicts the correct characters in a non-autoregressive manner. Therefore, incorporating an optimization strategy that considers context-dependent modeling on top of our method could further improve the error-checking performance of the model. § CONCLUSIONIn this article, we propose a universal optimization strategy for the BERT-based CSC models based on the fusion of heterogeneous knowledge, including both explicit POS knowledge and implicit hierarchical linguistic information. 
We propose an annealing loss strategy driven by GMM, which can inject prior information into the BERT representation while reducing the impact of noise tags. We also explore that utilizing intermediary information in the encoder can improve the overall performance of the CSC task, and experimentally verify the effectiveness of the proposed n-gram-based layerwise self-attention mechanism. Experimental results demonstrate that the BERT-based CSC models can be steadily improved after utilizing our proposed ATLAs. We remain to extend the heterogeneous knowledge to the CSC task (e.g., coreference resolution) as our future work.§ ACKNOWLEDGEMENTSThis work is supported by the National Natural Science Foundation of China (No. 61936012, 61976114, and 62206126).IEEEtran
http://arxiv.org/abs/2312.16623v1
{ "authors": [ "Yongchang Cao", "Liang He", "Zhen Wu", "Xinyu Dai" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231227161107", "title": "Make BERT-based Chinese Spelling Check Model Enhanced by Layerwise Attention and Gaussian Mixture Model" }
[ [   ===== In this paper, we study an underexplored, yet important and challenging problem: counting the number of distinct sounds in raw audio characterized by a high degree of polyphonicity. We do so by systematically proposing a novel end-to-end trainable neural network (which we call DyDecNet, consisting of a dyadic decomposition front-end and backbone network), and quantifying the difficulty level of counting depending on sound polyphonicity. The dyadic decomposition front-end progressively decomposes the raw waveform dyadically along the frequency axis to obtain time-frequency representation in multi-stage, coarse-to-fine manner. Each intermediate waveform convolved by a parent filter is further processed by a pair of child filters that evenly split the parent filter's carried frequency response, with the higher-half child filter encoding the detail and lower-half child filter encoding the approximation. We further introduce an energy gain normalization to normalize sound loudness variance and spectrum overlap, and apply it to each intermediate parent waveform before feeding it to the two child filters. To better quantify sound counting difficulty level, we further design three polyphony-aware metrics: polyphony ratio, max polyphony and mean polyphony. We test DyDecNet on various datasets to show its superiority, and we further show dyadic decomposition network can be used as a general front-end to tackle other acoustic tasks. Code: <github.com/yuhanghe01/SoundCount>. § INTRODUCTIONSuppose you went to the seaside and heard a cacophony of seagulls, squawking and squabbling. An interesting question that naturally arises is whether you can tell the number of seagulls flocking around you from the sound you heard? Although a trivial example, this sound “crowd counting” problem has a number of important applications. For example, passive acoustic monitoring is widely used to record sounds in natural habitats, which provides measures of ecosystem diversity and density <cit.>. Sound counting helps to quantify and map sound pollution by counting the number of individual polluting events <cit.>. It can also be used in music content analysis <cit.>. Despite its importance, research on sound counting has far lagged behind than its well-established crowd counting counterparts from either images <cit.>, video <cit.> or joint audio-visual <cit.>. We conjecture the lack of exploration stems from three main factors. First, sound counting has long been treated as an over-solved problem by sound event detection (SED) methods <cit.>, in which SED goes further to identify each sound event's (e.g. a bird call) start time, end time and semantic identity. Sound counting number then becomes easily accessible by simply adding up all detected events. Secondly, current SED only tags whether a class of sound event is present within a window, regardless of the number of concurrent sound sources of the same class like a series of baby crying or multiple bird calls <cit.>. Thirdly, labelling acoustic data is technically-harder and more time-consuming than labelling images, due to the overlap of concurrent and diverse sources. The lack of well-labelled sound data in crowded sound scenes naturally hampers research progress. Existing SED sound datasets <cit.> capture simple acoustic scenarios with low polyphony and where the event variance is small. The simplified acoustic scenario in turn makes sound counting task by SED methods tackleable. 
But when the sound scene becomes more complex with highly concurrent sound events, SED methods soon lose their capability in discriminating different sound events <cit.>. In the meantime, some researchers think sound counting is equivalent to sound source separation task <cit.>, in which the sound is counted as the source number by isolating individual sound from sound mixture and assigning it to corresponding sound source. However, our proposed sound counting is different from source number counting, it directly counts the overlapping events number, regardless of if these events come from the same sound source. Therefore, a study specific for sound counting problem is desirable and overdue.In this paper, we study the general sound counting problem under highly polyphonic, cluttered and concurrent situations. Whilst the challenges of image-based crowd counting mainly lie in spatial density, occlusion and view perspective distortion, the sound counting challenges are two-fold. Firstly, acoustic scenes are additive mixtures of sound along both time and frequency axes, making counting overlapping sounds difficult (temporal concurrence and spectrum-overlap). Secondly, there is a large variance in event loudness due to spherical signal attenuation with distance.To tackle these challenges, we propose a novel dyadic decomposition neural network to learn a sound density representation capable of estimating cardinality directly from raw sound waveform. Unlike existing sound waveform processing methods that all apply frequency-selective filters on the raw waveform in single stage <cit.>, our network progressively decomposes raw sound waveform in a dyadic manner, where the intermediate waveform convolved by each parent filter is further processed by its two child filters. The two child filters evenly split the parent filter's frequency response, with one child filter encoding the waveform approximation (the one with the lower-half frequency response) and the other one encoding the waveform details (the one with the higher-half frequency response). To accommodate sound loudness variance, spectrum-overlap and time-concurrence, we further propose an energy gain normalization module to regularize each intermediate parent waveform before feeding it to two child filters for further processing. This hierarchical dyadic decomposition front-end enables the neural network to learn a robust TF representation in multi-stage coarse-to-fine manner, while introducing negligible extra computation cost. By setting each filter's frequency cutoff parameters to be learnable and self-adjustable during optimization in a data-driven way, the final learned TF representation can better characterize sound existence in time and frequency domain. Following the front-end, we add a backbone network to continue to learn a time framewise representation. Such representation can be used to derive the final sound count number by either directly regressing the count number, regressing density map (the one we choose) or following SED pipeline. Apart from the network, we further propose three polyphony-aware metrics to quantify sound counting task difficulty level: polyphony ratio, maximum polyphony and mean polyphony. We will give detailed discussion to show the feasibility of three metrics.We run experiment on large amounts of sound datasets, including commonly heard bioacoustic, indoor and outdoor, real-world and synthetic sound. 
Comprehensive experimental results show the superiority of our proposed framework in counting under different challenging acoustic scenarios. We further show that our proposed dyadic decomposition front-end can be used to tackle other acoustic tasks, such as SELD <cit.>. In summary, we make three main contributions: First, we propose a dyadic decomposition front-end to decompose the raw waveform in a multi-stage, coarse-to-fine manner, which better handles loudness variance, spectrum-overlap and time-concurrence. Second, we propose a new set of polyphony-aware evaluation metrics to comprehensively and objectively quantify the sound counting difficulty level. Third, we show DyDecNet's superiority on various counting datasets, and its potential to be used as a general learnable TF extraction front-end. § DYADIC DECOMPOSITION NEURAL NETWORK We put the related work discussion and the sound counting task definition in the Appendix due to the space limit. Different sound classes typically exhibit different spectral properties. A canonical way to process the raw sound waveform is to apply a frequency-selective filter bank F_f = {f_i}_i=1^k to project the raw sound waveform onto different frequency bins. The traditional Fourier transform <cit.> or Wavelet transform <cit.> constructs fixed filter banks in which all filter-construction relevant hyperparameters are empirically chosen and thus may not be optimal for a particular task. Recent methods <cit.> relax some hyperparameters to be trainable so that the filter bank can be optimized in a data-driven way. A learnable filter bank often leads to better performance than fixed filters. However, all existing methods apply all filters, either learnable or fixed, on the raw waveform in a one-stage manner. Such shallow, one-stage processing may fail to learn a powerful and robust representation for the sound counting task, where large loudness variance and heavy spectrum overlap exist. In our dyadic decomposition framework, we instead adopt a progressive pairwise decomposition strategy to obtain the time-frequency (TF) representation. It learns the TF representation at coarse- to fine-grained granularity. Particularly, it consists of a dyadic frontend and a backbone. §.§ Dyadic Frequency Decomposition Frontend In the dyadic decomposition frontend, we construct a set of D hierarchical filter banks F_dyadic^D = {F_2^1^1, F_2^2^2, ⋯, F_2^D^D}. The d-th filter bank has 2^d filters; each filter is parameterized by a learnable high frequency-cutoff parameter and a low frequency-cutoff parameter. By cascading these filter banks, we consecutively decompose the raw waveform in the frequency domain dyadically, leading to a coarse-to-fine-grained TF representation. Specifically, we denote the dyadic filter bank depth by D; in the depth-d filter bank F_2^d^d, the 2^d filters evenly divide the waveform sampling frequency F_s. Therefore, each single filter's frequency response length is F_s/2^d, and the i-th filter f_i^d's high frequency cutoff F_h and low frequency cutoff F_l are initialized as, F_h(f_i^d) = F_s/2^d· (i+1), F_l(f_i^d) = F_s/2^d· i From Eqn. (<ref>) we can see that the dyadic decomposition frontend forms a complete binary-tree-like structure, in which the filter number doubles and each filter's frequency response length halves as the tree's depth increases by one. The intermediate waveform processed by a “parent” filter is further processed by its two “children” filters. The frequency responses of the two children filters evenly split their parent filter's frequency response.
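To make the cutoff initialization above concrete, a minimal sketch is given below (illustrative only; the function and variable names are ours, not the released code). It builds the initial (F_l, F_h) cutoff pairs for every depth; the depth-8, 24 kHz setting matches the configuration used later in the paper.

def init_dyadic_cutoffs(sample_rate, depth):
    """For each depth d = 1..depth, return the (F_l, F_h) pairs of the 2^d
    band-pass filters that evenly divide the sampling frequency F_s."""
    banks = []
    for d in range(1, depth + 1):
        width = sample_rate / 2 ** d                      # frequency response length F_s / 2^d
        banks.append([(i * width, (i + 1) * width) for i in range(2 ** d)])
    return banks

banks = init_dyadic_cutoffs(24000, 8)                     # depth-8 frontend at 24 kHz
parent = banks[2][3]                                      # filter f_3^3 at depth 3
child_a, child_b = banks[3][6], banks[3][7]               # its children f_6^4 and f_7^4
assert child_a[0] == parent[0] and child_b[1] == parent[1]   # children split the parent band

During training these cutoffs become learnable parameters, so the values above only serve as the starting point of the optimization.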
The child filter carrying the higher half frequency response encodes the detail of the parent's intermediate waveform, while the other one carrying the lower half frequency response instead encodes the approximation. For example, for the filter f_i^d in the d-th filter bank, its frequency response lies in [F_s/2^d· i, F_s/2^d· (i+1)], and its two children filters f_2i^d+1 and f_2i+1^d+1 at depth d+1 evenly divide its frequency range, so f_2i^d+1 carries [F_s/2^d· i, F_s/2^d(i+1/2)] and f_2i+1^d+1 carries [F_s/2^d(i+1/2), F_s/2^d· (i+1)]. With the pre-constructed dyadic decomposition filter banks, we cascade them together to process the raw sound waveform, progressively learning the final TF representation. In our implementation, each filter in the dyadic filter banks is a learnable band-pass filter. We adopt a band-pass filter that is rectangular in the frequency domain and comprises a learnable high frequency cutoff parameter F_h and a learnable low frequency cutoff parameter F_l. Converting it to the time domain through the inverse Fourier transform, we get a sinc(·)-like filter that is convolved with the waveform. For example, the filter f_i^d in Eqn. (<ref>) is represented as, f_i^d[t, F_h, F_l] = 2F_hsinc(2π F_ht) - 2F_lsinc(2π F_lt) where sinc(x)=sin(x)/x and t denotes time. F_h and F_l are initialized according to Eqn. (<ref>), but they can be further adjusted during the training process. sinc(·) filters have been successfully used in speech recognition <cit.> and sound event detection and localization <cit.>. In our dyadic decomposition frontend, each filter at each depth has its own independent learnable parameters (high frequency cutoff and low frequency cutoff). Moreover, our constructed filter is much longer (1025 in our case) than traditional 1D/2D Conv filters (3 or 5). Its wide length enables the filter to have a wide field-of-view on the raw waveform. Cascading them together allows the filters in later layers (larger depth) to have an even wider field-of-view on the input raw waveform. With this advantage, we do not have to model sound event temporal dependency explicitly with an RNN. As a result, the whole dyadic frequency decomposition frontend is fully convolutional and parametrically learnable; it is parameter-frugal and computationally efficient. In practice, the dyadic decomposition frontend depth is 8, so the output TF representation has 256 frequency bins. At the same time, we downsample the intermediate waveform by 2 before feeding it to its two children filters in the initial 5 dyadic filter banks to reduce the memory cost. §.§ Energy Gain Normalization We further design an energy gain normalization module to regularize each intermediate waveform before feeding it to the next dyadic filter bank. The motivation for introducing energy gain normalization is two-fold: first, to reduce the sound event loudness variance caused by sound events' different spatial locations; second, to encourage the frontend to better tackle the spectrum overlap challenge caused by intra-class sound events in the sound scene. Specifically, for the intermediate waveform W_f_i^d processed by a dyadic filter f_i^d, we first smooth it with a learnable 1D Gaussian kernel g_i^d parameterized by a learnable width σ to get the corresponding smoothed waveform W_g_i^d, which retains only the loudness. We then introduce a learnable automatic gain control parameter α to mitigate the sound loudness impact.
Furthermore, another two learnable compression parameters δ and γ are introduced to further compress W_f_i^d. The overall energy gain normalization can be represented as, W_f_i^d = (W_f_i^d/(W_g_i^d)^α + δ)^γ - δ^γ where α, δ and γ are learnable parameters. As a result, the energy gain normalization eg-Norm is fully learnable and parameterized by four learnable parameters eg-Norm(σ, α, δ, γ). Practically, each filter in the dyadic filter banks is associated with an independent eg-Norm module. Similar energy normalization has been successfully used in tasks like keyword spotting <cit.>. The difference lies in the fact that they apply an exponential moving average operation to get the smoothed waveform representation, so the computation is very slow because it iterates along the time axis to compute the averaged value step by step. Our proposed energy gain normalization strategy instead adopts a Gaussian kernel to get the smoothed waveform, which can be easily implemented as a 1D convolution. The dyadic filter visualization and the energy normalization module are shown in Fig. <ref>. §.§ Backbone Neural Network We add a lightweight backbone neural network to the frontend neural network to further learn a representation useful for counting. The backbone network consists of two parts: per-channel pooling and inter-channel 1D convolution. Unlike existing methods <cit.> that first convert the 1D sound waveform into a 2D map with a fixed FFT-like transform and then learn from the 2D map with 2D Conv. operations, our method directly learns from the raw sound waveform with learnable 1D Conv. operations. Specifically, we downsample each channel separately by assigning each channel an independent frequency-sensitive learnable filter. We call such learnable downsampling per-channel pooling. It helps to learn each sound event's frequency variance along the time axis individually. Moreover, we add normal 1D Conv. layers to achieve inter-channel communication, which helps the neural network learn the interactions among concurrent sound events. The backbone serves as the backend to learn a framewise representation for counting. §.§ Density Map and Loss Function The backbone network discussed above learns a framewise representation [T_b, F_b], where T_b indicates the time steps and F_b indicates the feature size. There are three potential ways to derive the final sound count from the learned representation: 1. directly regress the count number; 2. SED method: detect sound events first and then aggregate the results to get the final count; 3. predict the density map. For a sound event with time location [t_1, t_2], its density map is a 1D vector with value 1/(t_2-t_1) during its occurrence time, and 0 otherwise. So the count number equals the vector integral. We show that regressing the density map produces the best result (see Table <ref>). We thus adopt the mean squared error (MSE) loss during training to directly regress the density map. The comparison of the three methods is shown in Fig. <ref>. § COUNTING DIFFICULTY QUANTIFICATION Mean absolute error (MAE) and mean squared error (MSE) are two widely used metrics in crowd counting <cit.>. Specifically, denote the ground truth count and predicted count by y_i and ŷ_i respectively, for the i-th sound clip. MAE is defined as MAE=1/N∑_i=1^N |y_i - ŷ_i|, and MSE is defined as MSE=√(1/N∑_i=1^N (y_i - ŷ_i)^2). We also adopt the accuracy rate (AccuRate) to show the ratio of accurately predicted counts.
We introduce a tolerance term p, where p=0 means the predicted count has to be exactly the same as the ground truth number in order to be treated as an accurate counting; p=1 relaxes the constraint so there can be one count mismatch for an accurate counting. The aforementioned three general metrics do not reflect the impact of the sound scene nature on algorithms. We introduce three polyphony-aware metrics to quantify the sound counting difficulty level reflected by the sound scene nature. The three metrics are time-window invariant so they can be used as general metrics to quantify the difficulty level of sound scenes of various lengths. Polyphony Ratio (ratio-polyp) describes the ratio of polyphony (at least two sound events happening at the same time) over a period of time. It binarizes each time step as either polyphonic or non-polyphonic (monophonic or silent), so the value lies in [0,1]. Maximum Polyphony (max-polyp) focuses on the maximum polyphony level over a time period. It is motivated by the fact that humans' capability of discriminating different sound events degrades markedly when the number of temporally-overlapping sound events increases. It is a positive integer and helps us to understand an algorithm's capability in tackling the polyphony peak. Mean Polyphony (mean-polyp) instead focuses on the average level of polyphony within a time period. It is designed to reflect an algorithm's capability in tackling the average polyphony level over an arbitrary time window. Given a sound vector of T_n time steps [p_1, p_2, ⋯, p_T_n], where p_i≥ 0 is the number of sound events at time step i, the three metrics are defined as, ratio-polyp = ∑_i=1^T_n 1_2(p_i)/T_n; max-polyp = max_i=1,⋯,T_n p_i; mean-polyp =∑_i=1^T_n max(p_i-1,0)/T_n where 1_2(p_i) is an indicator function that is 1 if p_i≥ 2 and 0 otherwise (a small computational sketch of these metrics is provided at the end of the appendix). With the three metrics, we can report the general metrics (MAE, MSE) against various difficulty levels. § EXPERIMENT We run experiments on five main categories of sound data commonly heard in everyday life. 1. Bioacoustic Sound. We focus on bird sound as bird sound is ubiquitous in most terrestrial environments with distinctive vocal acoustic properties. Specifically, we test three datasets: one real-world NorthEastUS <cit.> dataset and two other synthesized datasets: Polyphony4Birds (for the heterophony test) and Polyphony1Bird (for the homophony test). The NorthEastUS data is recorded in a nature reserve in the northeastern United States. It encompasses 385 minutes of dawn chorus recordings collected in July 2018, with a total of 48 bird species. The average bird sound temporal length is very short (less than 1s) and the polyphony level (max-polyp and mean-polyp) is small. To test performance under highly polyphonic situations, we synthesize two bird sound datasets. Specifically, the first dataset contains four sounds: junco, American redhead, eagle, and rooster, collected from a copyright-free website. We call it Polyphony4Birds (heterophony test). The second dataset contains one sound: rooster. We call it Polyphony1Bird (homophony test). 2. Indoor Sound. We count telephone ring sounds; the telephone ring seed sound comes from the same copyright-free website. We follow the Polyphony1Bird synthesis procedure except that the room size is much smaller (10m × 10m × 3m) to reflect the indoor reverberation effect. 3. Outdoor Sound. We count car engine sounds, as they are widely heard in outdoor scenarios. The car engine seed sound comes from the same copyright-free website.
We follow the Polyphony1Bird synthesis procedure to create the dataset. 4. AudioSet. AudioSet <cit.> is a large, temporally-strongly labelled dataset with a wide range of sound event classes, including music, speech and water. The AudioSet data tests all methods' counting capability in a scenario with a large number of different event classes. Specifically, we train the models on the training set, which has 103,463 audio clips and 934,821 labels, and test them on the evaluation set, which has 16,996 audio clips and 139,538 labels. In total there are 456 sound event categories. 5. Music Sound. We use the OpenMic2018 dataset <cit.> to count musical instruments. We put all discussion/results for this data in the Appendix due to the space limit. The direct comparison between these datasets is given in Appendix Table 1. We refer readers to Appendix Sec. 2 for a detailed discussion of the data synthesis process. Comparing Methods: We compare DyDecNet with three main method categories: 1) traditional signal processing methods: Librosa-onset and Aubio-onset; 2) three SED-based methods; and 3) one sound source separation method. Librosa-onset <cit.> provides an onset/offset detection method for music note detection. It measures the uplift or shift of spectral energy to decide the starting time of a note. We use its onset/offset detection ability to count sound events. Aubio-onset <cit.> achieves pitch tracking by aligning period and phase. We use its pitch tracking to count. SED-based methods build on traditional fixed TF representations, such as the short-time Fourier transform (STFT) and LogMel. The TF representation is treated as a 2D image to be processed by a sequence of 2D Conv. operators. GRU <cit.> and LSTM <cit.> are often adopted to model temporal dependency. We compare three typical SED methods: 1) CRNNNet <cit.> consists of 2D Conv. layers to learn multiple compressed TF representations from the input TF map. Then it concatenates them together along the frequency dimension and further feeds the result to an LSTM <cit.> to learn a framewise representation. 2) DND-SED <cit.> instead adopts depthwise 2D convolution and dilated convolution to avoid using an RNN. 3) SELDNet <cit.> is originally used for joint sound event detection and localization. It adopts 2D Conv. layers to convolve the 2D TF map, and a bidirectional GRU to model temporal dependency. The three compared methods' network architectures are slightly adjusted to fit our datasets. For the sound source separation method, we adopt DPTasNet <cit.>, which trains a Dual-Path RNN (DPRNN) and TasNet to jointly separate each sound event and count the event number. In this case, we treat each sound event as an independent sound source. Implementation Detail. For all datasets, all input audio is segmented into 5-second-long clips with a sampling rate of 24 kHz. So the input waveform has 120,000 data points and is normalized into [-1,1]. We train the models with PyTorch <cit.> on a TITAN RTX GPU. The network architecture of DyDecNet is given in the Appendix. To train the neural network, we adopt the Adam optimizer <cit.> with an initial learning rate of 0.001, which decays every 20 epochs with a decay rate of 0.5. Overall, we train for 60 epochs. We train each method 10 times independently and report the mean value and standard deviation. We do not report the standard deviation explicitly in the table because we find it very small (about 0.03). We train the compared SED methods with both their suggested training strategy and ours, and report the better of the two as the final result.
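For reference, the optimization schedule described above corresponds to the following minimal PyTorch-style sketch; the model and data loader are placeholders rather than the authors' released implementation.

import torch

model = torch.nn.Linear(120000, 1)         # placeholder standing in for DyDecNet
train_loader = []                          # placeholder standing in for the 5 s clip loader

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# learning rate decays by a factor of 0.5 every 20 epochs, 60 epochs in total
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)
criterion = torch.nn.MSELoss()             # MSE loss for density-map regression

for epoch in range(60):
    for waveform, density_map in train_loader:   # waveform: 120,000 samples in [-1, 1]
        optimizer.zero_grad()
        loss = criterion(model(waveform), density_map)
        loss.backward()
        optimizer.step()
    scheduler.step()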
For the energy gain normalization, we initialize the parameters as α=0.96, δ=2.0, γ=0.5, σ=0.5. The batch size is 128. §.§ Experimental Result The quantitative MSE/MAE results are given in Table <ref>, and the accuracy rate results in Appendix Table 2. From the two tables we can see that DyDecNet outperforms the classic deterministic signal processing methods, the compared SED methods and the sound source separation based method by a large margin under all acoustic scenarios. DyDecNet outperforms all compared methods on both real-world and synthesized sound datasets. It is capable of learning a powerful representation from weak sound signals (NorthEastUS), highly polyphonic scenes (the synthesized datasets), and heavily spectrum-overlapping, loudness-varying sound events. Moreover, we find that DPTasNet <cit.> performs worse than the three SED-based methods on the two synthesized bioacoustic datasets where high polyphony exists, which shows that source separation is not a good counting alternative in highly polyphonic situations. At the same time, we also observe that the two deterministic signal processing methods (Librosa-onset and Aubio-onset) produce the worst results among all methods, below the SED-based methods, the source separation based method and DyDecNet. The higher the polyphony level of the dataset, the worse the two deterministic methods perform. For example, on the NorthEastUS dataset with its relatively low polyphony level, Librosa-onset and Aubio-onset achieve relatively good performance, with the accuracy rate (p=1) reaching 0.58. On our two synthesized datasets with much higher polyphony levels, however, their accuracy drops to near zero. This shows that traditional signal processing methods are not suitable for sound counting in crowded acoustic scenes. Moreover, SED-based methods and DyDecNet produce decreasing performance from Polyphony4Birds to Polyphony1Bird and then NorthEastUS. The largest performance drop is observed on the real NorthEastUS dataset, which shows that counting on real-world data is a tough task that deserves more attention. Spectrum overlap caused by intra-class sound events is another potential challenge (better performance on Polyphony4Birds than Polyphony1Bird). The MSE/MAE variation against the max-polyp, ratio-polyp and mean-polyp difficulty levels on NorthEastUS is shown in Fig. <ref>. We can observe that our three proposed metrics max-polyp, ratio-polyp and mean-polyp are effective ways to accurately quantify the sound counting difficulty level. All methods show a dramatic performance drop as the difficulty level under each metric increases. Nevertheless, DyDecNet remains the best across all three difficulty metrics, showing that it outperforms the compared methods under all the difficulty levels discussed in this paper. Feature Visualization We compare the TF feature learned by the dyadic decomposition front-end with the classic MFCC <cit.> TF feature in Appendix Fig. 6. We can see that DyDecNet learns more discriminative features for two temporally-overlapping sound events of the same class. §.§ Ablation Study We conduct ablation studies on the NorthEastUS data. First, we disentangle our framework's dyadic decomposition frontend and backbone network to figure out their individual contributions. To this end, on the one hand, we attach the dyadic decomposition frontend to the three SED methods' backbone networks so that they can learn a TF representation from the raw waveform. We call them SELDNet_dydec, CRNNNet_dydec and DND-SED_dydec respectively.
On the other hand, we feed our backbone neural network with fixed pre-extracted TF features, including the short-time Fourier transform (STFT), LogMel, MFCC and Gabor wavelet filters. We call them DyDecNet_STFT, DyDecNet_LogMel, DyDecNet_MFCC and DyDecNet_Gabor, respectively. The results are in Table <ref> and <ref>. We can observe that: 1) replacing the traditional fixed TF feature with the dyadic decomposition frontend significantly improves the performance (Table <ref>). The gain is two-fold: first, our dyadic decomposition frontend enables the network to learn directly from the raw waveform, so that all frequency-selective filters are adjustable during the training process; second, the dyadic progressive decomposition enables the neural network to learn a robust representation for sound counting. Similarly, a huge performance drop is observed if we let our proposed backbone neural network learn from traditional fixed TF features (Table <ref>). Therefore, it shows that both the dyadic decomposition frontend and the backbone neural network are important for sound counting. Second, we investigate whether the dyadic decomposition is essential for sound counting and how important the energy normalization block is. We test three variants: our network with single-scale decomposition, i.e., applying all filters on the raw waveform (DyDecNet_SingScale), which helps validate the necessity of the hierarchical dyadic decomposition framework; replacing the energy normalization module with traditional batch normalization <cit.> (DyDecNet_BN); and removing normalization altogether (DyDecNet_noNorm). The result is in Table <ref>, from which we can clearly observe that either removing energy normalization or replacing it with batch normalization significantly reduces the performance. It thus shows the importance of energy normalization. Lastly, we run two ablation studies that directly regress the count number and follow the SED pipeline, respectively. From the result in Table <ref>, we can conclude that directly regressing the sound event count leads to inferior performance compared with estimating the density map. Treating it as an SED problem leads to the worst performance. §.§ Dyadic Decomposition Frontend on SELD Task To show that the dyadic decomposition front-end is a general TF feature extractor, we test it on the sound event detection and localization (SELD) task. The dataset we use is TAU-NIGENS <cit.>, and we compare with four main methods: SELDNet <cit.> and EIN <cit.>, which use classic TF features, and SoundDet <cit.> and SoundDoA <cit.>, which use learnable TF features. We replace their time-frequency (TF) extraction front-end with the dyadic decomposition front-end to see the performance change. The result is given in Table <ref>; we can see that the dyadic decomposition front-end generalizes well and helps tackle other acoustic tasks. § APPENDIX § RELATED WORK Crowd counting from images or audio-visual data has been thoroughly studied in recent years <cit.>; the target is to estimate the number of instances in very crowded scenes (e.g., pedestrians in a train station) that cannot be efficiently handled by object detection methods. Methods for image crowd counting have chronologically evolved from the early detection-based <cit.> to the later regression-based <cit.> and density map estimation <cit.> methods. Accompanying these methods, various neural network architectures have been designed to achieve higher performance. The counterpart task purely in sound, however, has been nearly ignored.
Existing research mainly focuses on sound event detection, including spatio-temporal sound event detection (SELD) <cit.> from a microphone array, temporal sound event detection <cit.> and high-frequency time series analysis <cit.>. They often combine convolutional neural networks (CNN) <cit.> and recurrent neural networks <cit.> to separate sound sources. The datasets they work on are relatively simple: the sound scenes contain few overlapping sound events. The common way to process the raw sound waveform is to first convert the 1D waveform into a 2D time-frequency representation so that sound events' frequency properties and their variation along the time axis are explicitly separated. Most existing methods <cit.> adopt the Fourier transform <cit.> or Wavelet transform <cit.> to obtain such a 2D representation, in which the whole conversion process is fixed. Some recent work <cit.> re-parameterizes the frequency-selective conversion filters to be learnable so that the whole neural network is able to learn directly from the raw sound waveform. Experimental results show that enabling the neural network to learn from the raw waveform often achieves better performance than the traditional fixed conversion. These methods, however, convert the raw waveform in a one-stage manner. Our proposed dyadic decomposition neural network instead processes the raw waveform in a dyadic multi-stage manner. Dyadic Network The dyadic representation idea was initially proposed to represent signals hierarchically <cit.>, in a multi-scale manner. Its core idea is to construct a bank of filters (either learnable or fixed) so that different filters extract different features at certain scales or resolutions. Combining them leads to a more comprehensive and complete analysis. A similar idea has been widely used in the computer vision community, including pyramid feature representations for object detection <cit.> and semantic segmentation <cit.>. Sound Source Separation There is a massive body of work on sound source separation <cit.>, which isolates individual sounds from an audio mixture and assigns each extracted sound to its corresponding source. While sound counting could be solved by source separation methods if the target were to count the number of sources, our proposed sound counting is source-agnostic and counts all sound events in a sound snippet. § SOUND COUNTING PROBLEM DEFINITION Given a mono-channel, T-second raw sound waveform x(t) sampled at a fixed sampling rate F_s, the recording contains N independent sound events E = {E_i = (t_s, t_e)}_i=1^N, and each sound event is either stationary or freely moving in the open area. The target is to design a neural network 𝒩 parameterized by θ to predict the sound event number N from the raw sound waveform, N = 𝒩(θ|x(t)). In our formulation, the counting process is class-agnostic, so all sound events are treated as instances to count, regardless of their classes. Three factors make it a challenging task: 1) Large Data Size: a microphone usually records sound at a high sampling rate (e.g., 24 kHz), resulting in a large raw waveform. Processing it thus requires filters with few parameters and a low computation cost. 2) Concurrent Sound Events (polyphony): sound events freely overlap both spatially and temporally, resulting in a highly polyphonic sound recording. It is a tough task to separate them in the compressed 1D waveform.
3) Loudness Variance and Spectrum Overlap: sound events of the same class but at different spatial locations have a large variance in their received loudness. They also have heavy spectrum overlap in the frequency domain. The above issues make counting a tough task. § MORE DISCUSSION ON DATASET CREATION §.§ Motivation of Polyphony4Birds and Polyphony1Bird Dataset Creation Our motivation for synthesizing Polyphony4Birds and Polyphony1Bird is three-fold:* The NorthEastUS dataset has as many as 48 different bird categories. It helps to test various methods' capability in tackling the high bird diversity challenge.* The Polyphony4Birds dataset contains 4 kinds of bird sounds at a much higher polyphony level (in terms of ratio-polyp, max-polyp and mean-polyp). It helps us to test various methods' capability in tackling limited bird categories but a high polyphony level (heterophony test).* The Polyphony1Bird dataset contains 1 bird sound class at a much higher polyphony level. This dataset involves heavy spectrum overlap (due to the temporal overlap of intra-class bird sounds), so it helps to test various methods' capability in tackling the high spectrum-overlap and high-polyphony challenge (homophony test). In the Polyphony4Birds dataset, 4 is an arbitrary number. We experimentally find that involving 4 bird sounds is representative enough for the heterophony test. We note that there are some other relevant public bird sound datasets <cit.>, but we find they are not suitable for our study. For example, in the TUT-SED 2009 data <cit.>, the polyphony level is small and the involved bird sounds usually last too long (not temporally separable and countable). Similarly, the Bird Audio Detection challenge (BAD challenge) <cit.> contains highly sparse bird chirps (very low polyphony). Moreover, the two real-world bird sound datasets <cit.> do not provide bird sound start time and end time labels, so they are not suitable for our study. The other synthesized dataset, TUT-SED Synthetic 2016 <cit.>, also contains very limited samples of high polyphony. The direct comparison between these datasets is given in Table <ref>, from which we can see that our two created datasets have much higher polyphony levels, making them more suitable for our sound counting task. §.§ How to Simulate an Open Area Environment We collect 4 seed sounds from a copyright-free website [see <https://www.findsounds.com/>]: junco, American redhead, eagle, and rooster. To maximally reflect an outdoor scenario, we simulate a large open-area environment [100m, 100m, 100m] with one microphone at [50m, 50m, 1m]. The walls are assigned a high sound absorption coefficient, so the reverberation is negligible, resembling an outdoor open-area scenario. We introduce a random SNR (Signal-to-Noise Ratio) with two Gaussian means (-33 decibels and -20 decibels) at the microphone receiver. We put each seed sound at a random 3D spatial location and a random start time to imitate natural bird sounds emitted from random locations at random times. A post-processing step is added to keep the dataset balanced across various polyphony levels. § LEARNED FEATURE VISUALIZATION We visualize the time-frequency (TF) feature learned by DyDecNet and the traditional MFCC <cit.> feature on a one-minute-long sound waveform that encodes 4 temporally overlapping sounds (from the Polyphony1Bird dataset; the 4 sounds are rooster sounds). The result is shown in Fig. <ref>.
From this figure, we can observe that DyDecNet successfully learns a frequency-separable TF representation for temporally-overlapping sounds of the same class, while traditional TF features (in our case, MFCC) encode a cluttered and mixed TF representation that is much less visually separable. § MORE DISCUSSION ON COMPARING METHODS A more detailed comparison between the various methods is given in Table <ref>. We can see that our proposed DyDecNet is lightweight and directly learns from the raw sound waveform (so it is end-to-end trainable). It thus strikes a good balance between model performance and model efficiency (inference time). § MORE EXPERIMENT RESULT DISCUSSION §.§ Music Dataset MAE/MSE Result The MAE/MSE result on this dataset is shown in Table <ref>. From this table we can observe that our proposed DyDecNet is the best-performing method. §.§ Experiment on NorthEastUS Dataset and Telephone Ring Dataset A more detailed experimental result (MAE variation) on NorthEastUS is given in Fig. <ref>, from which we can observe that as max-polyp, ratio-polyp and mean-polyp increase, all methods (including our DyDecNet) show reduced performance. The three compared methods (CRNNNet <cit.>, DND-SED <cit.>, and SELDNet <cit.>) show a sharp performance drop when our three proposed sound counting difficulty metrics increase, whereas our proposed DyDecNet largely mitigates the challenge caused by a higher counting difficulty level (the blue line rises only slightly as the counting difficulty level increases). It thus shows that 1) our proposed max-polyp, ratio-polyp and mean-polyp are capable of accurately measuring the sound counting task difficulty level from different perspectives; 2) our proposed DyDecNet is capable of mitigating these sound counting difficulties. §.§ More Result on Polyphony1Bird and Polyphony4Birds Datasets We also provide the detailed results for Polyphony1Bird and Polyphony4Birds in Fig. <ref> and Fig. <ref>, respectively. They contain the accuracy rate, MSE and MAE variation against max-polyp, ratio-polyp and mean-polyp. From the two figures, we can draw a similar conclusion as for the NorthEastUS dataset (Fig. <ref>): as max-polyp, ratio-polyp and mean-polyp increase, all methods' performance gradually degrades. Our proposed DyDecNet stays the best-performing one under all sound counting difficulty metrics. Specifically, we can see that:* All methods give the best performance on the Polyphony4Birds dataset, the second best on the Polyphony1Bird dataset, and the worst on the NorthEastUS dataset. This shows that 1) spectrum overlap due to the heavy temporal overlap of intra-class sounds (represented by the Polyphony1Bird dataset) remains a challenge for the sound counting task; 2) sound counting in open areas where noise pollution, high sound diversity (in our case, diversity means bird categories; we have 48 bird classes in the NorthEastUS dataset), and limited labelled data exist remains another challenge for the sound counting task. We hope to attract more researchers to consider the sound counting task in more challenging scenarios.* We do not observe such a sharp performance drop on our two synthetic datasets as we observed on the real-world NorthEastUS dataset. It thus shows that the real-world sound counting task becomes increasingly challenging as our three proposed difficulty metrics increase.
We conjecture that a larger model and a larger training dataset are needed to achieve better performance, which can be treated as a future research direction. §.§ Counting on More Bird Classes In the main paper, our two synthetic datasets Polyphony4Birds and Polyphony1Bird involve only a limited number of bird classes (up to 4). We naturally want to figure out the performance of all methods (including DyDecNet and the three compared methods) when more bird classes are involved. We thus follow the same data creation procedure to synthesize four extra datasets. They contain 2/6/8/10 bird classes, respectively. The extra bird seed sounds are also collected from the same copyright-free website. The quantitative result is given in Table <ref>, from which we can learn that the performance of all compared methods increases until the number of bird classes reaches 6 (values in bold font) and then declines when it increases to 8 or 10; DyDecNet reaches its best performance at around 8 bird classes and then begins to decline. It thus shows: 1) all methods can successfully handle a reasonable number of bird classes (in our case, at most 8), given the model parameter budget (less than 10 M) discussed in this paper; when much more diverse bird classes have to be handled, a much larger model may be needed (the relationship between model size and sound counting class diversity remains a future research topic); 2) our proposed DyDecNet exhibits a stronger capability of counting across diverse bird classes than the three compared methods (it reaches its best performance at a larger number of bird classes, namely 8). §.§ Learnable Energy Normalization with Traditional T-F Feature To test the efficiency of our proposed energy normalization module, especially when combined with traditional one-stage T-F features, we explicitly add one learnable energy normalization module right after the T-F feature extracted by traditional time-frequency feature extractors. The comparison is given in Table <ref> and Table <ref>, from which we can observe that the performance of the traditional T-F features slightly increases after introducing the energy normalization module. It thus shows the necessity of energy normalization for the sound counting task in highly polyphonic situations. However, these variants still perform worse than DyDecNet, which shows that the hierarchical dyadic decomposition with energy normalization is essential for the sound counting task. § NETWORK ARCHITECTURE The DyDecNet architecture is shown in Table <ref>.
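As referenced in the main text, a small computational sketch of the three polyphony-aware metrics is given below; the polyphony vector used in the example is made up purely for illustration.

def polyphony_metrics(p):
    """p[i] is the number of sound events active at time step i (length T_n)."""
    T_n = len(p)
    ratio_polyp = sum(1 for x in p if x >= 2) / T_n        # fraction of polyphonic time steps
    max_polyp = max(p)                                     # peak polyphony over the window
    mean_polyp = sum(max(x - 1, 0) for x in p) / T_n       # average excess polyphony
    return ratio_polyp, max_polyp, mean_polyp

# Illustrative 8-step polyphony vector with at most 3 concurrent events.
print(polyphony_metrics([0, 1, 2, 3, 3, 2, 1, 0]))         # -> (0.5, 3, 0.75)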
http://arxiv.org/abs/2312.16149v1
{ "authors": [ "Yuhang He", "Zhuangzhuang Dai", "Long Chen", "Niki Trigoni", "Andrew Markham" ], "categories": [ "cs.SD", "eess.AS" ], "primary_category": "cs.SD", "published": "20231226181804", "title": "SoundCount: Sound Counting from Raw Audio with Dyadic Decomposition Neural Network" }
[email protected] of Mathematics and Physics, Beijing University of Chemical Technology Beijing 100029, [email protected] of Science, Shenzhen Campus of Sun Yat-sen University  Shenzhen, 518107, China Sun Yat-sen University  Guangzhou, 510275, ChinaThe Primakoff mechanism is one of the primary channels for the production of solar axion. In the canonical estimation of the Primakoff photon-axion conversion rate, the recoil effect is neglected and a static structure factor in adopted. In this work, by use of the linear response theory, we provide a dynamic description of the solar Primakoff process. It is found that the collective electrons overtake ions as the dominant factor, in contrast to the static screening picture where ions contribute more to the photon-axion conversion. Nonetheless, the resulting axion flux is only 1~2% lower than the standard estimate based on the static structure factor.The dynamic solar Primakoff process Lin Zhang===================================§ INTRODUCTION The QCD axion that emerged originally as a solution to the strong-CP problem <cit.>, is also well motivated as a promising dark matter (DM) candidate <cit.> other than the weakly interacting massive particles (WIMPs), and has attracted increased interests in both theoretical and experimental fronts in recent years. The rich phenomenology of axion can leave peculiar traces in cosmology, astroparticle physics and particle physics <cit.>.The Sun is the primary natural source for the terrestrial axion detection. With the coupling to the standard model (SM) particles, axions can be produced in the solar interior through a number of channels, such as the Primakoff process <cit.> and the axio-recombination, bremsstrahlung and Compton scattering (ABC) process <cit.>. For Kim-Shifman-Vainshtein-Zhakharov (KSVZ) axions <cit.>, the former reaction dominates, while for the Dine-Fischler-Srednicki-Zhinitsky (DFSZ) axions <cit.>, the latter mechanism dominates. In this paper, we focus on the Primakoff axion production mechanism, where the photons convert to axions through the Coulomb field sourced by the charged particles (i.e., electrons and ions).In conventional wisdom <cit.>, the charged particles in the Sun are so heavy compared to the energies of ambient photons that they can be regarded as fixed, in which case the photon energy in a scattering event is considered equal to that of the emitted axion, and the differential cross section of the Primakoff process dσ_γ→ a(𝐩_γ)/d is proportional to |𝐩_γ×𝐩_a|^2/Q^4, with 𝐩_γ and 𝐩_a being the momenta of the incident photon and emitted axion, respectively, and Q=|𝐐|=|𝐩_γ-𝐩_a|. In the massless limit of an axion, the cross section is divergent due to the long-range Coulomb interaction. Such Coulomb potential can be regulated if the solar in-medium screening effect is taken into account. Raffelt <cit.> argued that the implication of screening effect on the differential cross section is described with the substitution,|𝐩_γ×𝐩_a|^2/Q^4 → |𝐩_γ×𝐩_a|^2/Q^4S(𝐐) =|𝐩_γ×𝐩_a|^2/Q^41/1+κ^2/Q^2,where the Debye-Hückel scale κ can be as large as ∼9 keV at the solar center, and effectively provides a cutoff of the Coulomb interaction. Note that this description is based on the assumption of a negligible recoil effect in the solar medium, and thus a static structure factor S(𝐐)=(1+κ^2/Q^2)^-1 is introduced to measure the correlation between the charged particle density <cit.>.In Ref. 
<cit.>, by use of the Kramers-Kronig relations that relate the spectral densities to the static structure factor S(𝐐), Raffelt further reasoned that the Primakoff production rate in Ref. <cit.> agrees with the total rates of the decay process γ_t (transverse plasmon)→γ_l (longitudinal plasmon)+a (axion), the plasma coalescence process γ_t+γ_l→ a, and the individual Primakoff process γ_t+e/N (electron/ion)→ a+e/N. Refs. <cit.> reproduced the same Primakoff production rate using thermal field theory, by the same Kramers-Kronig relations argument. In order for the Kramers-Kronig sum rules to work, it was assumed that the energy shift ω between the axion and the photon is much smaller than the solar temperature T_⊙ such that 1-e^-ω/T_⊙≃ω/T_⊙, which was shown to be quite a good approximation <cit.>. Recently, in Ref. <cit.> we applied the nonrelativistic linear response theory to the dynamic screening effect in the nondegenerate gas of the solar plasma in association with dark matter scattering. Under this framework, one no longer needs to add the dielectric function by hand, since both the finite temperature effect and the many-body effect are inherently encapsulated in the dynamic structure factor S(𝐐,ω). "Dynamic" means that a finite energy transfer ω, and thus a temporal variation, is taken into account in a scattering event, in contrast to the static case where the charged particles are regarded as fixed targets. This is important considering that the thermal velocities of electrons can reach ∼0.1 c in the core of the Sun, which may bring a non-negligible Doppler energy shift in the Primakoff process. More importantly, the screening and the collective effect (plasmon) are naturally incorporated in such a dynamic structure factor S(𝐐,ω). Therefore, the purpose of this work is to apply the linear response theory approach <cit.> to the photon-axion conversion process in the Sun, in order to investigate the implications of the recoil effect and the collective effect, and especially to numerically explore in detail to what extent the Kramers-Kronig sum rule argument is reliable for validating the calculation of the Primakoff conversion rate based on the static structure factor. Discussion will proceed in the natural units, where ħ=c=k_B=1. § PRIMAKOFF EVENT RATE We first introduce how we describe the Primakoff process in the context of the linear response theory, which naturally encodes the relevant finite temperature physics and the many-body in-medium effect. At the effective field theory (EFT) level, the interaction relevant for the Primakoff process is given as ℒ_aγ= -g_aγ/4aF_μνF̃^μν,where a is the axion field, g_aγ represents the axion-photon coupling, and F_μν and F̃_μν=1/2ϵ_μνρσF^ρσ are the electromagnetic field strength and its dual, respectively. Since the electrons and ions move nonrelativistically in the Sun, we express the relevant interactions in the nonrelativistic effective field theory (NREFT). For instance, the electromagnetic field-electron interaction is written as ℒ_Ae= -eA_0ψ_e^*ψ_e-ie/2m_e𝐀·[ψ_e^*∇ψ_e-(∇ψ_e^*)ψ_e]-e^2/2m_e|𝐀|^2ψ_e^*ψ_e+⋯,where ψ_e is the NR electron wavefunction.
Considering that the second term on the right-hand side is subject to an electron velocity suppression ∇/m_e∼ v_e∼𝒪(10^-2∼10^-1) in the solar medium (with m_e being the electron mass), and that the longitudinal and transverse photon propagators do not mix under the random phase approximation (RPA), only the longitudinal component of the NR effective electron-photon interaction, A_0 (or more specifically, the Coulomb interaction), is retained for the description of the electron-electron (and electron-ion) interaction in this work. Thus, we only consider the components -g_aγa ϵ^ijk0∂_iA_j∂_kA_0 of the Lagrangian in Eq. (<ref>) in the estimate of the Primakoff process in the Sun. While the A_0 component is responsible for the Coulomb interaction, { A_j} are relevant for the transverse photon external leg. The calculation of the Primakoff process shown in Fig. <ref> depends on an accurate description of the electronic and ionic in-medium effect in the Sun. In this work, we invoke the linear response approach proposed in Ref. <cit.> to describe the screening effect in the Sun. Within this framework, the axion production rate for an incident photon with momentum 𝐩_γ can be summarized with the following expression (see Appendix <ref> for further details): Γ(𝐩_γ) =∫dω∫d^3Q/(2π)^3g_aγ^2 |𝐩_γ×𝐐|^2/8 E_γ √(|𝐩_γ-𝐐|^2+m_a^2)1/Q^2 δ(√(|𝐩_γ-𝐐|^2+m_a^2)-E_γ+ω) ×(-2)/1-e^-ω/T_⊙ [V_e Im(Π_e)/|1-V_e Π_e-V_e ∑ _iZ_i^2 Π_N_i|^2+V_e ∑ _iZ_i^2 Im(Π_N_i)/|1-V_e Π_e-V_e ∑ _iZ_i^2 Π_N_i|^2],where α=e^2/4π is the electromagnetic fine structure constant, E_γ is the energy of the incident photon, Q=|𝐐| and ω denote the magnitude of the momentum transfer and the energy transfer to the solar medium, respectively, V_e(Q)=4πα/Q^2 is the electron Coulomb interaction in momentum space, and E_a (m_a) is the energy (mass) of the axion. The delta function represents the energy conservation in the scattering. In this work, we only consider the case where the axion masses are so small (typically ≪ keV) compared to their energies that the axion can be effectively treated as massless. Besides, since the photon (or transverse plasmon) effective mass ω_p=√(4πα n_e/m_e)≈0.3 keV is much smaller than its typical energy 3T_⊙≈4 keV in the solar core, the photons are also treated as massless in the solar plasma <cit.>. The first and second terms in the square bracket correspond to the finite temperature many-body effect from the electrons and ions, respectively. Π_e denotes the electron one-particle-irreducible diagram, which in the RPA is approximated as the bubble diagram. For the nondegenerate electron gas in the Sun, Π_e can be expressed as <cit.> Π_e(Q, ω) = -n_e/Q√(m_e/2 T_⊙){Φ[√(m_e/2 T_⊙)(ω/Q+Q/2 m_e)]-Φ[√(m_e/2 T_⊙)(ω/Q-Q/2 m_e)]} -i n_e√(2π/m_eT_⊙)(m_e/Q)exp[-(m_e^2 ω^2/Q^2+Q^2/4)1/2 m_eT_⊙]sinh[ω/2 T_⊙],where n_e is the number density of the electron gas, and the function Φ is defined as the Cauchy principal value of the integral <cit.> Φ(x)≡ 𝒫∫_-∞^+∞dy/√(π)e^-y^2/x-y. Similarly, Π_N_i denotes the bubble diagram of the i^th ion species carrying a charge Z_ie. At the RPA level, Π_N_i can be obtained by simply replacing n_e and m_e with the ion number density n_N_i and ion mass m_N_i in Eq. (<ref>). Contributions from all solar ion species are included in Eq. (<ref>). In order to describe the collective behavior of the solar medium, we introduce a nondimensional function ℱ(Q,ω), which represents the second line in Eq. (<ref>). Interestingly, from Fig. <ref> a strong resonance structure is observed in the parameter area where the real part approaches zero in the denominator in Eq.
(<ref>), which corresponds to the absorption of a longitudinal plasmon in the process γ_t+γ_l→ a. At the symmetric position in the upper half plane there is another pole corresponding to the emission of a plasmon in the process γ_t→γ_l+a. As long as it is kinematically allowed, such collective behavior may significantly alter the fixed-electron picture in the axion production process. Based on the axion production rate of Eq. (<ref>), the differential axion flux reaching the Earth can then be written as the convolution of the differential transition rate with the photon blackbody distribution in the Sun, dΦ_a(E_a)/dE_a=1/4π d_⊙^2∫_0^R_⊙d^3r∫dE_γ/π^2E_γ^2/e^E_γ/T_⊙-1dΓ(𝐩_γ)/dE_a,with the Sun-Earth distance d_⊙ and the solar radius R_⊙. In the static screening prescription, since the energy of the incident photon equals that of the axion, the differential axion flux is given as <cit.> dΦ_a(E_a)/dE_a=1/4π d_⊙^2∫_0^R_⊙d^3r1/π^2E_a^2/e^E_a/T_⊙-1Γ_s,with the relevant static photon-axion conversion rate Γ_s=T_⊙κ^2g_aγ^2/32π[(1+κ^2/4E_a^2)ln(1+4E_a^2/κ^2)-1],where κ^2=(4πα/T_⊙)(n_e+∑_iZ_i^2n_N_i) is the square of the Debye-Hückel scale. § AXION FLUX ON EARTH Equipped with the above formulation that describes the solar in-medium effect with the linear response theory, we are now in a position to calculate the axion flux at terrestrial detectors. In the left panel of Fig. <ref> we compare the solar Primakoff axion fluxes at the Earth computed with the linear response theory in Eq. (<ref>) and with the static screening description of the Coulomb interaction in Eq. (<ref>). These spectra are obtained by integrating the contributions from the charged particles in every thin shell in the Sun, based on the Standard Solar Model AGSS09 <cit.>. In practice, the solar radius is discretized into 100 slices, and the 29 most common solar elements are included in our computation. Besides, we assume that these solar elements are fully ionized. While in the left panel of Fig. <ref> it is observed that the average (4.2 keV) and the peak position (3.0 keV) of the Primakoff axion energy distribution remain unchanged, the differential rate is only 1~2% lower than the calculation based on the static structure factor throughout the relevant energy range. This is quite a surprising result, given that the denominator term |1-V_e Π_e-V_e ∑_i Z_i^2 Π_N_i|^-2 in Eq. (<ref>) asymptotes to the Debye screening form (1+κ^2/Q^2)^-2 in the limit ω→0, where it brings a stronger screening than the static structure factor (1+κ^2/Q^2)^-1 in Eq. (<ref>); the contributions of the recoil effect and the collective effect must coincidentally make up this loss to keep the total rate unchanged. Such a coincidence would be difficult if there were no intrinsic relation protecting the total rate, especially considering that, in contrast to the static screening picture, where the contributions from the electrons and ions scale with the charge densities n_e and ∑_iZ_i^2n_N_i and hence a larger part of the conversion comes from the scattering with ions, it turns out that the collective electrons contribute dominantly to the total Primakoff axion flux in the dynamic screening picture. Thus, our results actually confirm the validity of the Kramers-Kronig sum rule argument in Ref. <cit.>, up to a percent-level correction. Besides, in the right panel of Fig.
<ref> we also present the differential solar axion flux (using the dynamic structure factor) as an apparent surface luminosity ϕ_a(E_a,ρ) of the solar disk <cit.>, ϕ_a(E_a,ρ) =R_⊙^3/2π^3d_⊙^2∫_ρ^1r̃ dr̃/√(r̃^2-ρ^2) ×∫dE_γ/π^2E_γ^2/e^E_γ/T_⊙-1dΓ(𝐩_γ)/dE_a,where the dimensionless quantities r̃=r/R_⊙ and ρ represent the radial position of the conversion process and the distance from the center of the solar disc, respectively. § DISCUSSIONS AND CONCLUSIONS In order to further explore the many-body effect in the solar medium in detail, in Fig. <ref> we present the differential axion production rates for photon energies E_γ=2 keV and 8 keV at the solar radius r=0.1 R_⊙, respectively, with the benchmark coupling g_aγ=10^-10GeV^-1. While the spectra of heavy ions are found to center narrowly at the photon energies, behaving like static targets, it is intriguingly observed that the non-negligible electron movement in the inner part of the Sun can bring an energy shift of up to 𝒪(0.1) keV from the initial photon energies. For one thing, the two peaks in Fig. <ref> correspond to the absorption and emission of a longitudinal plasmon at ω≃±√(4πα n_e/m_e) in the Sun, respectively. That is, a considerable part of the Primakoff processes proceed in company with the absorption or emission of a plasmon. For another, a broadening width of around 0.4 keV is also clearly seen due to the thermal movement of the electrons. While such a finite spread of the photon energy may not bring a noticeable change to the total spectrum of the solar axion, the strength of the resonance, i.e., the implication of the collective effect, can only be determined by concrete calculation. To conclude, in this paper we have applied the linear response theory formalism for a refined estimate of the Primakoff photon-axion conversion rate in the Sun. Based on this method, progress is gained in two aspects: (1) we provide an up-to-date panoramic description of the dynamic Primakoff process, which is explicitly shown as a combination of the decay process γ_t→γ_l+a, the plasma coalescence process γ_t+γ_l→ a, and the individual Primakoff process γ_t+e/N→ a+e/N; (2) we numerically calculate the relevant terrestrial axion flux due to the Primakoff process, without resorting to the approximate Kramers-Kronig sum rules, and the flux is found to be around 1~2% lower than the previous estimation based on the static structure factor. § FORMULATION FOR THE PRIMAKOFF SCATTERING EVENT RATE IN THE SUN In this Appendix we give a detailed derivation of the formulae in the main text that describe the Primakoff photon-axion conversion process in the Sun. In the nonrelativistic regime, it is convenient to work in the Coulomb gauge. We start with the 𝒯-matrix for the Primakoff process where a photon (with momentum 𝐩_γ and polarization λ) scatters with a nonrelativistically moving electron (illustrated in Fig. <ref>), emitting an axion with momentum 𝐩_a, i.e., ⟨𝐩_a,i| i𝒯 |𝐩_γ,λ;j⟩ = ig_aγ e (𝐩_γ×𝐩_a)·ε̂^λ(𝐩_γ)/|𝐩_γ-𝐩_a|^2×⟨i|e^i(𝐩_γ-𝐩_a)·𝐱̂|j⟩ 2πδ(E_γ-E_a-ε_i+ε_j),where ε̂^λ(𝐩_γ) is the polarization vector of the incident photon, which satisfies the completeness and orthonormality relations, i.e., ∑_λ=±1ε̂^λ i(𝐩_γ)ε̂^λ j*(𝐩_γ)=δ^ij-p_γ^ip_γ^j/|𝐩_γ|^2, and 𝐩_γ·ε̂^±1(𝐩_γ)=0; E_γ and E_a represent the energies of the photon and the emitted axion, while ε_i and ε_j are the energies of the final and the initial electron states, respectively. Then we take into account the many-body effect of the solar medium with the approach adopted in Refs. <cit.>.
To this end, we resort to the linear response theory, whereby the Primakoff event rate for a photon with momentum 𝐩_γ (obtained by averaging over the initial states and summing over the final states) is written as follows (for simplicity, here we assume that only one type of ion with charge Ze and mass m_N is present): Γ(𝐩_γ) =∑_i,j∫dω δ(ω-ε_i+ε_j)∫d^3Q δ^(3)(𝐐+𝐩_a-𝐩_γ)∫d^3p_a/2E_a(2π)^3(g_aγ e/Q^2)^21/2∑_λ=±1|(𝐩_γ×𝐩_a)·ε̂^λ(𝐩_γ)|^2/2E_γ ×1/V∫_Vd^3x d^3x' p_j⟨j|e^-i𝐐·𝐱[ρ̂_e+(-Z)ρ̂_N](𝐱)|i⟩⟨i|e^i𝐐·𝐱'[ρ̂_e+(-Z)ρ̂_N](𝐱')|j⟩ 2πδ(E_γ-E_a-ω)=∫dω ∫d^3Q/(2π)^34πα/Q^4g_aγ^2 |𝐩_γ×𝐐|^2/8 E_γ √(|𝐩_γ-𝐐|^2+m_a^2)1/V∫_Vd^3x d^3x' e^-i𝐐·(𝐱-𝐱')∫_-∞^+∞e^iω(0-t)dt[⟨ρ̂_eI(𝐱,0)ρ̂_eI(𝐱',t)⟩+(-Z)⟨ρ̂_eI(𝐱,0)ρ̂_NI(𝐱',t)⟩+(-Z)⟨ρ̂_NI(𝐱,0)ρ̂_eI(𝐱',t)⟩+(-Z)^2⟨ρ̂_NI(𝐱,0)ρ̂_NI(𝐱',t)⟩] δ(E_γ-E_a-ω)=∫dω ∫d^3Q/(2π)^34πα/Q^4g_aγ^2 |𝐩_γ×𝐐|^2/8 E_γ √(|𝐩_γ-𝐐|^2+m_a^2)(-2)/1-e^-ω/T_⊙Im[χ_ρ̂_eρ̂_e^r+(-Z)χ_ρ̂_eρ̂_N^r+(-Z)χ_ρ̂_Nρ̂_e^r+(-Z)^2χ_ρ̂_Nρ̂_N^r] ×δ(√(|𝐩_γ-𝐐|^2+m_a^2)-E_γ+ω)=∫dω ∫d^3Q/(2π)^34πα/Q^4g_aγ^2 |𝐩_γ×𝐐|^2/8 E_γ √(|𝐩_γ-𝐐|^2+m_a^2)(-2)/1-e^-ω/T_⊙[Im(Π_e)/|1-V_e Π_e-V_eZ^2 Π_N|^2+Z^2 Im(Π_N)/|1-V_e Π_e-V_eZ^2 Π_N|^2] ×δ(√(|𝐩_γ-𝐐|^2+m_a^2)-E_γ+ω),where we introduce the density operators for electrons ρ̂_e(𝐱)≡ψ̂_e^†(𝐱)ψ̂_e(𝐱) and ions ρ̂_N(𝐱)≡ψ̂_N^†(𝐱)ψ̂_N(𝐱), p_j represents the thermal distribution of the initial state |j⟩, the symbol ⟨⋯⟩ represents the thermal average, ρ̂_eI(𝐱',t)≡ e^iĤ_0tρ̂_e(𝐱')e^-iĤ_0t (ρ̂_NI(𝐱,t)≡ e^iĤ_0tρ̂_N(𝐱)e^-iĤ_0t), with Ĥ_0 being the unperturbed Hamiltonian of the medium system, and V is the volume of the solar medium under consideration, which is only an intermediate quantity and is canceled in the final expression of the event rate. Besides, in the above derivation we invoke the fluctuation-dissipation theorem S_ρ̂ρ̂(𝐐, ω)=1/V∫_Vd^3x d^3x' e^-i𝐐·(𝐱-𝐱')∫_-∞^+∞dt e^iω(0-t)×⟨ρ̂_I(𝐱,0)ρ̂_I(𝐱',t)⟩ =i[χ_ρ̂ρ̂(𝐐, ω+i0^+)-χ_ρ̂ρ̂(𝐐, ω-i0^+)]/1-e^-ω/T =-2 Im[χ_ρ̂ρ̂^r(𝐐, ω)]/1-e^-ω/T,where T represents temperature, ρ̂ generally stands for ρ̂_e and ρ̂_N, so S_ρ̂ρ̂(𝐐, ω) represents the dynamic structure factor associated with the density-density correlation. In practice <cit.>, one first evaluates the master function χ_ρ̂ρ̂(𝐐, z) using the Matsubara Green's function within the framework of finite temperature field theory, and then obtains the retarded polarizability function χ_ρ̂ρ̂^r(𝐐, ω) by performing the analytic continuation χ_ρ̂ρ̂^r(𝐐,ω)=χ_ρ̂ρ̂(𝐐,z→ω+i0^+). Here we take the retarded correlation function χ_ρ̂_eρ̂_N^r as an example to illustrate how the calculation is carried out; it is given by the sum of all diagrams that connect the two density operators. [Feynman-diagram equation omitted here: the figure expands χ_ρ̂_eρ̂_N^r into electron and ion bubble diagrams chained by Coulomb interaction lines at the RPA level.]
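To make the response functions above concrete, the following is a minimal numerical sketch (not the authors' code) of the RPA bubble Π given in the main text and of the dimensionless response factor ℱ(Q,ω). It keeps only electrons and a hydrogen ion background; the core density and temperature entered below are rough illustrative numbers rather than outputs of the solar model, and Φ(x) is evaluated through the Dawson function, Φ(x)=2D(x).

import numpy as np
from scipy.special import dawsn

alpha = 1.0 / 137.036          # fine-structure constant
m_e = 511.0                    # electron mass [keV]
m_p = 938.0e3                  # proton mass [keV]
T = 1.3                        # assumed core temperature [keV]
n_e = 460.0                    # assumed electron density [keV^3] (~6e25 cm^-3)
n_p = n_e                      # hydrogen-only plasma, Z = 1

def Phi(x):
    # Cauchy principal value of (1/sqrt(pi)) int dy e^{-y^2}/(x-y) = 2 * Dawson(x)
    return 2.0 * dawsn(x)

def Pi(Q, w, n, m):
    """RPA bubble of a nondegenerate species with density n and mass m (natural units)."""
    a = np.sqrt(m / (2.0 * T))
    re = -n / Q * a * (Phi(a * (w / Q + Q / (2.0 * m)))
                       - Phi(a * (w / Q - Q / (2.0 * m))))
    im = -n * np.sqrt(2.0 * np.pi / (m * T)) * (m / Q) \
         * np.exp(-(m**2 * w**2 / Q**2 + Q**2 / 4.0) / (2.0 * m * T)) \
         * np.sinh(w / (2.0 * T))
    return re + 1j * im

def F(Q, w):
    """Dimensionless response factor: electron + ion terms over the screened denominator."""
    Ve = 4.0 * np.pi * alpha / Q**2
    chi_e, chi_p = Pi(Q, w, n_e, m_e), Pi(Q, w, n_p, m_p)
    denom = np.abs(1.0 - Ve * chi_e - Ve * chi_p)**2
    return -2.0 / (1.0 - np.exp(-w / T)) * Ve * (chi_e.imag + chi_p.imag) / denom

kappa = np.sqrt(4.0 * np.pi * alpha * (n_e + n_p) / T)   # Debye-Hueckel scale
print("kappa ~ %.1f keV" % kappa)                        # of the order quoted for the core
print(F(3.0, 0.29))                                      # response near omega_p ~ 0.3 keV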
controls (185.73,199.7) and (188.64,202.39) .. (192.23,202.39) .. controls (195.82,202.39) and (198.73,199.7) .. (198.73,196.39) .. controls (198.73,193.07) and (195.82,190.39) .. (192.23,190.39) .. controls (188.64,190.39) and (185.73,193.07) .. (185.73,196.39) – cycle ;(400,146) .. controls (440,142) and (450,240) .. (401,253) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (430.73,195.39) .. controls (430.73,198.7) and (433.64,201.39) .. (437.23,201.39) .. controls (440.82,201.39) and (443.73,198.7) .. (443.73,195.39) .. controls (443.73,192.07) and (440.82,189.39) .. (437.23,189.39) .. controls (433.64,189.39) and (430.73,192.07) .. (430.73,195.39) – cycle ; =[x=0.25pt,y=0.25pt,yscale=-1,xscale=1] (124.36,160.23) .. controls (124.36,135.17) and (145.11,114.85) .. (170.7,114.85) .. controls (196.29,114.85) and (217.04,135.17) .. (217.04,160.23) .. controls (217.04,185.3) and (196.29,205.62) .. (170.7,205.62) .. controls (145.11,205.62) and (124.36,185.3) .. (124.36,160.23) – cycle ;[draw opacity=0] (184.74,204.96) .. controls (180.9,205.94) and (176.86,206.47) .. (172.7,206.47) .. controls (146.6,206.47) and (125.44,185.77) .. (125.44,160.23) .. controls (125.44,135.78) and (144.84,115.77) .. (169.4,114.11) – (172.7,160.23) – cycle ; (181.77,205.62) .. controls (178.83,206.18) and (175.8,206.47) .. (172.7,206.47) .. controls (146.6,206.47) and (125.44,185.77) .. (125.44,160.23) .. controls (125.44,135.44) and (145.39,115.21) .. (170.44,114.05) ;[shift=(167.57,114.27), rotate = 356.14] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(184.74,204.96), rotate = 165.59] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (385,156.75) .. controls (385,159.37) and (387.24,161.5) .. (390,161.5) .. controls (392.76,161.5) and (395,159.37) .. (395,156.75) .. controls (395,154.13) and (392.76,152) .. (390,152) .. controls (387.24,152) and (385,154.13) .. (385,156.75) – cycle ;[line width=0.75](216.96,155.4) .. controls (218.85,157.65) and (220.66,159.8) .. (222.76,159.8) .. controls (224.86,159.8) and (226.67,157.65) .. (228.57,155.4) .. controls (230.46,153.14) and (232.27,151) .. (234.37,151) .. controls (236.47,151) and (238.28,153.14) .. (240.17,155.4) .. controls (242.07,157.65) and (243.88,159.8) .. (245.98,159.8) .. controls (248.08,159.8) and (249.89,157.65) .. (251.78,155.4) .. controls (253.67,153.14) and (255.48,151) .. (257.58,151) .. controls (259.69,151) and (261.5,153.14) .. (263.39,155.4) .. controls (265.28,157.65) and (267.09,159.8) .. (269.19,159.8) .. controls (271.29,159.8) and (273.1,157.65) .. (275,155.4) .. controls (276.89,153.14) and (278.7,151) .. (280.8,151) .. controls (282.9,151) and (284.71,153.14) .. (286.6,155.4) .. controls (288.5,157.65) and (290.31,159.8) .. (292.41,159.8) .. controls (294.51,159.8) and (296.32,157.65) .. (298.21,155.4) ;[line width=0.65](217.75,158.79) .. controls (219.64,161.04) and (221.45,163.18) .. (223.55,163.18) .. controls (225.65,163.18) and (227.46,161.04) .. (229.35,158.79) .. controls (231.25,156.53) and (233.06,154.39) .. (235.16,154.39) .. controls (237.26,154.39) and (239.07,156.53) .. (240.96,158.79) .. controls (242.86,161.04) and (244.67,163.18) .. (246.77,163.18) .. controls (248.87,163.18) and (250.68,161.04) .. (252.57,158.79) .. 
controls (254.46,156.53) and (256.27,154.39) .. (258.37,154.39) .. controls (260.47,154.39) and (262.28,156.53) .. (264.18,158.79) .. controls (266.07,161.04) and (267.88,163.18) .. (269.98,163.18) .. controls (272.08,163.18) and (273.89,161.04) .. (275.78,158.79) .. controls (277.68,156.53) and (279.49,154.39) .. (281.59,154.39) .. controls (283.69,154.39) and (285.5,156.53) .. (287.39,158.79) .. controls (289.29,161.04) and (291.1,163.18) .. (293.2,163.18) .. controls (295.3,163.18) and (297.11,161.04) .. (299,158.79) ; (306.68,158.06) .. controls (306.68,136.86) and (324.06,119.68) .. (345.5,119.68) .. controls (366.94,119.68) and (384.32,136.86) .. (384.32,158.06) .. controls (384.32,179.26) and (366.94,196.45) .. (345.5,196.45) .. controls (324.06,196.45) and (306.68,179.26) .. (306.68,158.06)(299,158.06) .. controls (299,132.62) and (319.82,112) .. (345.5,112) .. controls (371.18,112) and (392,132.62) .. (392,158.06) .. controls (392,183.5) and (371.18,204.13) .. (345.5,204.13) .. controls (319.82,204.13) and (299,183.5) .. (299,158.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (344.8,192) – (356,200) – (344.8,208) – (347.71,200) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (354,123.13) – (341,116.06) – (354,109) – (350.49,116.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (212,158.75) .. controls (212,161.37) and (214.24,163.5) .. (217,163.5) .. controls (219.76,163.5) and (222,161.37) .. (222,158.75) .. controls (222,156.13) and (219.76,154) .. (217,154) .. controls (214.24,154) and (212,156.13) .. (212,158.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (119,161.75) .. controls (119,164.37) and (121.24,166.5) .. (124,166.5) .. controls (126.76,166.5) and (129,164.37) .. (129,161.75) .. controls (129,159.13) and (126.76,157) .. (124,157) .. controls (121.24,157) and (119,159.13) .. (119,161.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (298,156.75) .. controls (298,159.37) and (300.24,161.5) .. (303,161.5) .. controls (305.76,161.5) and (308,159.37) .. (308,156.75) .. controls (308,154.13) and (305.76,152) .. (303,152) .. controls (300.24,152) and (298,154.13) .. (298,156.75) – cycle ; +[x=0.25pt,y=0.25pt,yscale=-1,xscale=1] (291.36,97.23) .. controls (291.36,72.17) and (312.11,51.85) .. (337.7,51.85) .. controls (363.29,51.85) and (384.04,72.17) .. (384.04,97.23) .. controls (384.04,122.3) and (363.29,142.62) .. (337.7,142.62) .. controls (312.11,142.62) and (291.36,122.3) .. (291.36,97.23) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (552,93.75) .. controls (552,96.37) and (554.24,98.5) .. (557,98.5) .. controls (559.76,98.5) and (562,96.37) .. (562,93.75) .. controls (562,91.13) and (559.76,89) .. (557,89) .. controls (554.24,89) and (552,91.13) .. (552,93.75) – cycle ;[line width=0.75](383.96,92.4) .. controls (385.85,94.65) and (387.66,96.8) .. (389.76,96.8) .. controls (391.86,96.8) and (393.67,94.65) .. (395.57,92.4) .. controls (397.46,90.14) and (399.27,88) .. (401.37,88) .. controls (403.47,88) and (405.28,90.14) .. (407.17,92.4) .. controls (409.07,94.65) and (410.88,96.8) .. (412.98,96.8) .. controls (415.08,96.8) and (416.89,94.65) .. (418.78,92.4) .. controls (420.67,90.14) and (422.48,88) .. (424.58,88) .. controls (426.69,88) and (428.5,90.14) .. (430.39,92.4) .. controls (432.28,94.65) and (434.09,96.8) .. (436.19,96.8) .. controls (438.29,96.8) and (440.1,94.65) .. (442,92.4) .. 
controls (443.89,90.14) and (445.7,88) .. (447.8,88) .. controls (449.9,88) and (451.71,90.14) .. (453.6,92.4) .. controls (455.5,94.65) and (457.31,96.8) .. (459.41,96.8) .. controls (461.51,96.8) and (463.32,94.65) .. (465.21,92.4) ;[line width=0.75](384.75,95.79) .. controls (386.64,98.04) and (388.45,100.18) .. (390.55,100.18) .. controls (392.65,100.18) and (394.46,98.04) .. (396.35,95.79) .. controls (398.25,93.53) and (400.06,91.39) .. (402.16,91.39) .. controls (404.26,91.39) and (406.07,93.53) .. (407.96,95.79) .. controls (409.86,98.04) and (411.67,100.18) .. (413.77,100.18) .. controls (415.87,100.18) and (417.68,98.04) .. (419.57,95.79) .. controls (421.46,93.53) and (423.27,91.39) .. (425.37,91.39) .. controls (427.47,91.39) and (429.28,93.53) .. (431.18,95.79) .. controls (433.07,98.04) and (434.88,100.18) .. (436.98,100.18) .. controls (439.08,100.18) and (440.89,98.04) .. (442.78,95.79) .. controls (444.68,93.53) and (446.49,91.39) .. (448.59,91.39) .. controls (450.69,91.39) and (452.5,93.53) .. (454.39,95.79) .. controls (456.29,98.04) and (458.1,100.18) .. (460.2,100.18) .. controls (462.3,100.18) and (464.11,98.04) .. (466,95.79) ; (473.68,95.06) .. controls (473.68,73.86) and (491.06,56.68) .. (512.5,56.68) .. controls (533.94,56.68) and (551.32,73.86) .. (551.32,95.06) .. controls (551.32,116.26) and (533.94,133.45) .. (512.5,133.45) .. controls (491.06,133.45) and (473.68,116.26) .. (473.68,95.06)(466,95.06) .. controls (466,69.62) and (486.82,49) .. (512.5,49) .. controls (538.18,49) and (559,69.62) .. (559,95.06) .. controls (559,120.5) and (538.18,141.13) .. (512.5,141.13) .. controls (486.82,141.13) and (466,120.5) .. (466,95.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (511.8,129) – (523,137) – (511.8,145) – (514.71,137) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (521,60.13) – (508,53.06) – (521,46) – (517.49,53.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (379,95.75) .. controls (379,98.37) and (381.24,100.5) .. (384,100.5) .. controls (386.76,100.5) and (389,98.37) .. (389,95.75) .. controls (389,93.13) and (386.76,91) .. (384,91) .. controls (381.24,91) and (379,93.13) .. (379,95.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (286,98.75) .. controls (286,101.37) and (288.24,103.5) .. (291,103.5) .. controls (293.76,103.5) and (296,101.37) .. (296,98.75) .. controls (296,96.13) and (293.76,94) .. (291,94) .. controls (288.24,94) and (286,96.13) .. (286,98.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (465,93.75) .. controls (465,96.37) and (467.24,98.5) .. (470,98.5) .. controls (472.76,98.5) and (475,96.37) .. (475,93.75) .. controls (475,91.13) and (472.76,89) .. (470,89) .. controls (467.24,89) and (465,91.13) .. (465,93.75) – cycle ; (114.36,98.23) .. controls (114.36,73.17) and (135.11,52.85) .. (160.7,52.85) .. controls (186.29,52.85) and (207.04,73.17) .. (207.04,98.23) .. controls (207.04,123.3) and (186.29,143.62) .. (160.7,143.62) .. controls (135.11,143.62) and (114.36,123.3) .. (114.36,98.23) – cycle ;[draw opacity=0] (174.74,142.96) .. controls (170.9,143.94) and (166.86,144.47) .. (162.7,144.47) .. controls (136.6,144.47) and (115.44,123.77) .. (115.44,98.23) .. controls (115.44,73.78) and (134.84,53.77) .. (159.4,52.11) – (162.7,98.23) – cycle ; (171.77,143.62) .. controls (168.83,144.18) and (165.8,144.47) .. (162.7,144.47) .. controls (136.6,144.47) and (115.44,123.77) .. (115.44,98.23) .. 
controls (115.44,73.44) and (135.39,53.21) .. (160.44,52.05) ;[shift=(157.57,52.27), rotate = 356.14] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(174.74,142.96), rotate = 165.59] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[line width=0.75](206.96,93.4) .. controls (208.85,95.65) and (210.66,97.8) .. (212.76,97.8) .. controls (214.86,97.8) and (216.67,95.65) .. (218.57,93.4) .. controls (220.46,91.14) and (222.27,89) .. (224.37,89) .. controls (226.47,89) and (228.28,91.14) .. (230.17,93.4) .. controls (232.07,95.65) and (233.88,97.8) .. (235.98,97.8) .. controls (238.08,97.8) and (239.89,95.65) .. (241.78,93.4) .. controls (243.67,91.14) and (245.48,89) .. (247.58,89) .. controls (249.69,89) and (251.5,91.14) .. (253.39,93.4) .. controls (255.28,95.65) and (257.09,97.8) .. (259.19,97.8) .. controls (261.29,97.8) and (263.1,95.65) .. (265,93.4) .. controls (266.89,91.14) and (268.7,89) .. (270.8,89) .. controls (272.9,89) and (274.71,91.14) .. (276.6,93.4) .. controls (278.5,95.65) and (280.31,97.8) .. (282.41,97.8) .. controls (284.51,97.8) and (286.32,95.65) .. (288.21,93.4) ;[line width=0.75](207.75,96.79) .. controls (209.64,99.04) and (211.45,101.18) .. (213.55,101.18) .. controls (215.65,101.18) and (217.46,99.04) .. (219.35,96.79) .. controls (221.25,94.53) and (223.06,92.39) .. (225.16,92.39) .. controls (227.26,92.39) and (229.07,94.53) .. (230.96,96.79) .. controls (232.86,99.04) and (234.67,101.18) .. (236.77,101.18) .. controls (238.87,101.18) and (240.68,99.04) .. (242.57,96.79) .. controls (244.46,94.53) and (246.27,92.39) .. (248.37,92.39) .. controls (250.47,92.39) and (252.28,94.53) .. (254.18,96.79) .. controls (256.07,99.04) and (257.88,101.18) .. (259.98,101.18) .. controls (262.08,101.18) and (263.89,99.04) .. (265.78,96.79) .. controls (267.68,94.53) and (269.49,92.39) .. (271.59,92.39) .. controls (273.69,92.39) and (275.5,94.53) .. (277.39,96.79) .. controls (279.29,99.04) and (281.1,101.18) .. (283.2,101.18) .. controls (285.3,101.18) and (287.11,99.04) .. (289,96.79) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (202,96.75) .. controls (202,99.37) and (204.24,101.5) .. (207,101.5) .. controls (209.76,101.5) and (212,99.37) .. (212,96.75) .. controls (212,94.13) and (209.76,92) .. (207,92) .. controls (204.24,92) and (202,94.13) .. (202,96.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (109,99.75) .. controls (109,102.37) and (111.24,104.5) .. (114,104.5) .. controls (116.76,104.5) and (119,102.37) .. (119,99.75) .. controls (119,97.13) and (116.76,95) .. (114,95) .. controls (111.24,95) and (109,97.13) .. (109,99.75) – cycle ;[draw opacity=0] (351.74,141.96) .. controls (347.9,142.94) and (343.86,143.47) .. (339.7,143.47) .. controls (313.6,143.47) and (292.44,122.77) .. (292.44,97.23) .. controls (292.44,72.78) and (311.84,52.77) .. (336.4,51.11) – (339.7,97.23) – cycle ; (348.77,142.62) .. controls (345.83,143.18) and (342.8,143.47) .. (339.7,143.47) .. controls (313.6,143.47) and (292.44,122.77) .. (292.44,97.23) .. controls (292.44,72.44) and (312.39,52.21) .. 
(337.44,51.05) ;[shift=(334.57,51.27), rotate = 356.14] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(351.74,141.96), rotate = 165.59] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle; +⋯ =[x=0.25pt,y=0.25pt,yscale=-1,xscale=1] (124.36,160.23) .. controls (124.36,135.17) and (145.11,114.85) .. (170.7,114.85) .. controls (196.29,114.85) and (217.04,135.17) .. (217.04,160.23) .. controls (217.04,185.3) and (196.29,205.62) .. (170.7,205.62) .. controls (145.11,205.62) and (124.36,185.3) .. (124.36,160.23) – cycle ;[draw opacity=0] (184.74,204.96) .. controls (180.9,205.94) and (176.86,206.47) .. (172.7,206.47) .. controls (146.6,206.47) and (125.44,185.77) .. (125.44,160.23) .. controls (125.44,135.78) and (144.84,115.77) .. (169.4,114.11) – (172.7,160.23) – cycle ; (181.77,205.62) .. controls (178.83,206.18) and (175.8,206.47) .. (172.7,206.47) .. controls (146.6,206.47) and (125.44,185.77) .. (125.44,160.23) .. controls (125.44,135.44) and (145.39,115.21) .. (170.44,114.05) ;[shift=(167.57,114.27), rotate = 356.14] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(184.74,204.96), rotate = 165.59] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (385,156.75) .. controls (385,159.37) and (387.24,161.5) .. (390,161.5) .. controls (392.76,161.5) and (395,159.37) .. (395,156.75) .. controls (395,154.13) and (392.76,152) .. (390,152) .. controls (387.24,152) and (385,154.13) .. (385,156.75) – cycle ;[line width=0.75](216.96,155.4) .. controls (218.85,157.65) and (220.66,159.8) .. (222.76,159.8) .. controls (224.86,159.8) and (226.67,157.65) .. (228.57,155.4) .. controls (230.46,153.14) and (232.27,151) .. (234.37,151) .. controls (236.47,151) and (238.28,153.14) .. (240.17,155.4) .. controls (242.07,157.65) and (243.88,159.8) .. (245.98,159.8) .. controls (248.08,159.8) and (249.89,157.65) .. (251.78,155.4) .. controls (253.67,153.14) and (255.48,151) .. (257.58,151) .. controls (259.69,151) and (261.5,153.14) .. (263.39,155.4) .. controls (265.28,157.65) and (267.09,159.8) .. (269.19,159.8) .. controls (271.29,159.8) and (273.1,157.65) .. (275,155.4) .. controls (276.89,153.14) and (278.7,151) .. (280.8,151) .. controls (282.9,151) and (284.71,153.14) .. (286.6,155.4) .. controls (288.5,157.65) and (290.31,159.8) .. (292.41,159.8) .. controls (294.51,159.8) and (296.32,157.65) .. (298.21,155.4) ;[line width=0.65](217.75,158.79) .. controls (219.64,161.04) and (221.45,163.18) .. (223.55,163.18) .. controls (225.65,163.18) and (227.46,161.04) .. (229.35,158.79) .. controls (231.25,156.53) and (233.06,154.39) .. (235.16,154.39) .. controls (237.26,154.39) and (239.07,156.53) .. (240.96,158.79) .. controls (242.86,161.04) and (244.67,163.18) .. (246.77,163.18) .. controls (248.87,163.18) and (250.68,161.04) .. (252.57,158.79) .. controls (254.46,156.53) and (256.27,154.39) .. (258.37,154.39) .. controls (260.47,154.39) and (262.28,156.53) .. (264.18,158.79) .. controls (266.07,161.04) and (267.88,163.18) .. (269.98,163.18) .. controls (272.08,163.18) and (273.89,161.04) .. (275.78,158.79) .. controls (277.68,156.53) and (279.49,154.39) .. (281.59,154.39) .. 
controls (283.69,154.39) and (285.5,156.53) .. (287.39,158.79) .. controls (289.29,161.04) and (291.1,163.18) .. (293.2,163.18) .. controls (295.3,163.18) and (297.11,161.04) .. (299,158.79) ; (306.68,158.06) .. controls (306.68,136.86) and (324.06,119.68) .. (345.5,119.68) .. controls (366.94,119.68) and (384.32,136.86) .. (384.32,158.06) .. controls (384.32,179.26) and (366.94,196.45) .. (345.5,196.45) .. controls (324.06,196.45) and (306.68,179.26) .. (306.68,158.06)(299,158.06) .. controls (299,132.62) and (319.82,112) .. (345.5,112) .. controls (371.18,112) and (392,132.62) .. (392,158.06) .. controls (392,183.5) and (371.18,204.13) .. (345.5,204.13) .. controls (319.82,204.13) and (299,183.5) .. (299,158.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (344.8,192) – (356,200) – (344.8,208) – (347.71,200) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (354,123.13) – (341,116.06) – (354,109) – (350.49,116.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (212,158.75) .. controls (212,161.37) and (214.24,163.5) .. (217,163.5) .. controls (219.76,163.5) and (222,161.37) .. (222,158.75) .. controls (222,156.13) and (219.76,154) .. (217,154) .. controls (214.24,154) and (212,156.13) .. (212,158.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (119,161.75) .. controls (119,164.37) and (121.24,166.5) .. (124,166.5) .. controls (126.76,166.5) and (129,164.37) .. (129,161.75) .. controls (129,159.13) and (126.76,157) .. (124,157) .. controls (121.24,157) and (119,159.13) .. (119,161.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (298,156.75) .. controls (298,159.37) and (300.24,161.5) .. (303,161.5) .. controls (305.76,161.5) and (308,159.37) .. (308,156.75) .. controls (308,154.13) and (305.76,152) .. (303,152) .. controls (300.24,152) and (298,154.13) .. (298,156.75) – cycle ; /1-[x=0.25pt,y=0.25pt,yscale=-1,xscale=1] (312.36,757.23) .. controls (312.36,732.17) and (333.11,711.85) .. (358.7,711.85) .. controls (384.29,711.85) and (405.04,732.17) .. (405.04,757.23) .. controls (405.04,782.3) and (384.29,802.62) .. (358.7,802.62) .. controls (333.11,802.62) and (312.36,782.3) .. (312.36,757.23) – cycle ;[line width=0.75](404.96,752.4) .. controls (406.85,754.65) and (408.66,756.8) .. (410.76,756.8) .. controls (412.86,756.8) and (414.67,754.65) .. (416.57,752.4) .. controls (418.46,750.14) and (420.27,748) .. (422.37,748) .. controls (424.47,748) and (426.28,750.14) .. (428.17,752.4) .. controls (430.07,754.65) and (431.88,756.8) .. (433.98,756.8) .. controls (436.08,756.8) and (437.89,754.65) .. (439.78,752.4) .. controls (441.67,750.14) and (443.48,748) .. (445.58,748) .. controls (447.69,748) and (449.5,750.14) .. (451.39,752.4) .. controls (453.28,754.65) and (455.09,756.8) .. (457.19,756.8) .. controls (459.29,756.8) and (461.1,754.65) .. (463,752.4) .. controls (464.89,750.14) and (466.7,748) .. (468.8,748) .. controls (470.9,748) and (472.71,750.14) .. (474.6,752.4) .. controls (476.5,754.65) and (478.31,756.8) .. (480.41,756.8) .. controls (482.51,756.8) and (484.32,754.65) .. (486.21,752.4) ;[line width=0.75](405.75,755.79) .. controls (407.64,758.04) and (409.45,760.18) .. (411.55,760.18) .. controls (413.65,760.18) and (415.46,758.04) .. (417.35,755.79) .. controls (419.25,753.53) and (421.06,751.39) .. (423.16,751.39) .. controls (425.26,751.39) and (427.07,753.53) .. (428.96,755.79) .. controls (430.86,758.04) and (432.67,760.18) .. (434.77,760.18) .. 
controls (436.87,760.18) and (438.68,758.04) .. (440.57,755.79) .. controls (442.46,753.53) and (444.27,751.39) .. (446.37,751.39) .. controls (448.47,751.39) and (450.28,753.53) .. (452.18,755.79) .. controls (454.07,758.04) and (455.88,760.18) .. (457.98,760.18) .. controls (460.08,760.18) and (461.89,758.04) .. (463.78,755.79) .. controls (465.68,753.53) and (467.49,751.39) .. (469.59,751.39) .. controls (471.69,751.39) and (473.5,753.53) .. (475.39,755.79) .. controls (477.29,758.04) and (479.1,760.18) .. (481.2,760.18) .. controls (483.3,760.18) and (485.11,758.04) .. (487,755.79) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (400,755.75) .. controls (400,758.37) and (402.24,760.5) .. (405,760.5) .. controls (407.76,760.5) and (410,758.37) .. (410,755.75) .. controls (410,753.13) and (407.76,751) .. (405,751) .. controls (402.24,751) and (400,753.13) .. (400,755.75) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (307,758.75) .. controls (307,761.37) and (309.24,763.5) .. (312,763.5) .. controls (314.76,763.5) and (317,761.37) .. (317,758.75) .. controls (317,756.13) and (314.76,754) .. (312,754) .. controls (309.24,754) and (307,756.13) .. (307,758.75) – cycle ;[draw opacity=0] (372.74,801.96) .. controls (368.9,802.94) and (364.86,803.47) .. (360.7,803.47) .. controls (334.6,803.47) and (313.44,782.77) .. (313.44,757.23) .. controls (313.44,732.78) and (332.84,712.77) .. (357.4,711.11) – (360.7,757.23) – cycle ; (369.77,802.62) .. controls (366.83,803.18) and (363.8,803.47) .. (360.7,803.47) .. controls (334.6,803.47) and (313.44,782.77) .. (313.44,757.23) .. controls (313.44,732.44) and (333.39,712.21) .. (358.44,711.05) ;[shift=(355.57,711.27), rotate = 356.14] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(372.74,801.96), rotate = 165.59] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle; , where  [x=0.27pt,y=0.27pt,yscale=-1,xscale=1][draw opacity=0] (311.61,107.27) .. controls (309.48,107.75) and (307.27,108) .. (305,108) .. controls (288.43,108) and (275,94.57) .. (275,78) .. controls (275,62.47) and (286.79,49.7) .. (301.91,48.16) – (305,78) – cycle ; (308.62,107.78) .. controls (307.43,107.93) and (306.23,108) .. (305,108) .. controls (288.43,108) and (275,94.57) .. (275,78) .. controls (275,62.13) and (287.33,49.13) .. (302.93,48.07) ;[shift=(300.11,48.4), rotate = 354.16] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[shift=(311.61,107.27), rotate = 167.33] [fill=rgb, 255:red, 0; green, 0; blue, 0 ][line width=0.08][draw opacity=0] (10.72,-5.15) – (0,0) – (10.72,5.15) – (7.12,0) – cycle;[draw opacity=0] (306.87,48.06) .. controls (322.57,49.02) and (335,62.06) .. (335,78) .. controls (335,94.26) and (322.06,107.5) .. (305.91,107.99) – (305,78) – cycle ;(306.87,48.06) .. controls (322.57,49.02) and (335,62.06) .. (335,78) .. controls (335,94.26) and (322.06,107.5) .. (305.91,107.99) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (272.4,79) .. controls (272.4,80.1) and (273.12,81) .. (274,81) .. controls (274.88,81) and (275.6,80.1) .. (275.6,79) .. controls (275.6,77.9) and (274.88,77) .. (274,77) .. controls (273.12,77) and (272.4,77.9) .. (272.4,79) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (335.4,79) .. 
controls (335.4,80.1) and (336.12,81) .. (337,81) .. controls (337.88,81) and (338.6,80.1) .. (338.6,79) .. controls (338.6,77.9) and (337.88,77) .. (337,77) .. controls (336.12,77) and (335.4,77.9) .. (335.4,79) – cycle ; and  [x=0.18pt,y=0.18pt,yscale=-1,xscale=1][fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (351,123.75) .. controls (351,126.37) and (353.24,128.5) .. (356,128.5) .. controls (358.76,128.5) and (361,126.37) .. (361,123.75) .. controls (361,121.13) and (358.76,119) .. (356,119) .. controls (353.24,119) and (351,121.13) .. (351,123.75) – cycle ; (272.68,125.06) .. controls (272.68,103.86) and (290.06,86.68) .. (311.5,86.68) .. controls (332.94,86.68) and (350.32,103.86) .. (350.32,125.06) .. controls (350.32,146.26) and (332.94,163.45) .. (311.5,163.45) .. controls (290.06,163.45) and (272.68,146.26) .. (272.68,125.06)(265,125.06) .. controls (265,99.62) and (285.82,79) .. (311.5,79) .. controls (337.18,79) and (358,99.62) .. (358,125.06) .. controls (358,150.5) and (337.18,171.13) .. (311.5,171.13) .. controls (285.82,171.13) and (265,150.5) .. (265,125.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (310.8,159) – (322,167) – (310.8,175) – (313.71,167) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (320,90.13) – (307,83.06) – (320,76) – (316.49,83.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (264,123.75) .. controls (264,126.37) and (266.24,128.5) .. (269,128.5) .. controls (271.76,128.5) and (274,126.37) .. (274,123.75) .. controls (274,121.13) and (271.76,119) .. (269,119) .. controls (266.24,119) and (264,121.13) .. (264,123.75) – cycle ; represent the electron and ion pair-bubble diagrams, respectively, i.e., Π_e and Π_N at the RPA level (see Eq. (<ref>)), and the double wavy line[x=0.25pt,y=0.25pt,yscale=-1,xscale=1][line width=0.75](273.96,112.93) .. controls (275.85,115.18) and (277.66,117.33) .. (279.76,117.33) .. controls (281.86,117.33) and (283.67,115.18) .. (285.57,112.93) .. controls (287.46,110.68) and (289.27,108.53) .. (291.37,108.53) .. controls (293.47,108.53) and (295.28,110.68) .. (297.17,112.93) .. controls (299.07,115.18) and (300.88,117.33) .. (302.98,117.33) .. controls (305.08,117.33) and (306.89,115.18) .. (308.78,112.93) .. controls (310.67,110.68) and (312.48,108.53) .. (314.58,108.53) .. controls (316.69,108.53) and (318.5,110.68) .. (320.39,112.93) .. controls (322.28,115.18) and (324.09,117.33) .. (326.19,117.33) .. controls (328.29,117.33) and (330.1,115.18) .. (332,112.93) .. controls (333.89,110.68) and (335.7,108.53) .. (337.8,108.53) .. controls (339.9,108.53) and (341.71,110.68) .. (343.6,112.93) .. controls (345.5,115.18) and (347.31,117.33) .. (349.41,117.33) .. controls (351.51,117.33) and (353.32,115.18) .. (355.21,112.93) ;[line width=0.75](274.75,117.32) .. controls (276.64,119.57) and (278.45,121.72) .. (280.55,121.72) .. controls (282.65,121.72) and (284.46,119.57) .. (286.35,117.32) .. controls (288.25,115.07) and (290.06,112.92) .. (292.16,112.92) .. controls (294.26,112.92) and (296.07,115.07) .. (297.96,117.32) .. controls (299.86,119.57) and (301.67,121.72) .. (303.77,121.72) .. controls (305.87,121.72) and (307.68,119.57) .. (309.57,117.32) .. controls (311.46,115.07) and (313.27,112.92) .. (315.37,112.92) .. controls (317.47,112.92) and (319.28,115.07) .. (321.18,117.32) .. controls (323.07,119.57) and (324.88,121.72) .. (326.98,121.72) .. controls (329.08,121.72) and (330.89,119.57) .. (332.78,117.32) .. 
controls (334.68,115.07) and (336.49,112.92) .. (338.59,112.92) .. controls (340.69,112.92) and (342.5,115.07) .. (344.39,117.32) .. controls (346.29,119.57) and (348.1,121.72) .. (350.2,121.72) .. controls (352.3,121.72) and (354.11,119.57) .. (356,117.32) ; =[x=0.25pt,y=0.25pt,yscale=-1,xscale=1][line width=0.75](253.96,115.93) .. controls (255.85,118.18) and (257.66,120.33) .. (259.76,120.33) .. controls (261.86,120.33) and (263.67,118.18) .. (265.57,115.93) .. controls (267.46,113.68) and (269.27,111.53) .. (271.37,111.53) .. controls (273.47,111.53) and (275.28,113.68) .. (277.17,115.93) .. controls (279.07,118.18) and (280.88,120.33) .. (282.98,120.33) .. controls (285.08,120.33) and (286.89,118.18) .. (288.78,115.93) .. controls (290.67,113.68) and (292.48,111.53) .. (294.58,111.53) .. controls (296.69,111.53) and (298.5,113.68) .. (300.39,115.93) .. controls (302.28,118.18) and (304.09,120.33) .. (306.19,120.33) .. controls (308.29,120.33) and (310.1,118.18) .. (312,115.93) .. controls (313.89,113.68) and (315.7,111.53) .. (317.8,111.53) .. controls (319.9,111.53) and (321.71,113.68) .. (323.6,115.93) .. controls (325.5,118.18) and (327.31,120.33) .. (329.41,120.33) .. controls (331.51,120.33) and (333.32,118.18) .. (335.21,115.93) ; +[x=0.25pt,y=0.25pt,yscale=-1,xscale=1][line width=0.75](165,118.66) .. controls (166.89,121.05) and (168.7,123.33) .. (170.8,123.33) .. controls (172.9,123.33) and (174.71,121.05) .. (176.61,118.66) .. controls (178.5,116.27) and (180.31,114) .. (182.41,114) .. controls (184.51,114) and (186.32,116.27) .. (188.22,118.66) .. controls (190.11,121.05) and (191.92,123.33) .. (194.02,123.33) .. controls (196.12,123.33) and (197.93,121.05) .. (199.82,118.66) .. controls (201.72,116.27) and (203.53,114) .. (205.63,114) .. controls (207.73,114) and (209.54,116.27) .. (211.43,118.66) .. controls (213.32,121.05) and (215.13,123.33) .. (217.23,123.33) .. controls (219.33,123.33) and (221.14,121.05) .. (223.04,118.66) .. controls (224.74,116.52) and (226.37,114.46) .. (228.21,114.07) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (314,118.75) .. controls (314,121.37) and (316.24,123.5) .. (319,123.5) .. controls (321.76,123.5) and (324,121.37) .. (324,118.75) .. controls (324,116.13) and (321.76,114) .. (319,114) .. controls (316.24,114) and (314,116.13) .. (314,118.75) – cycle ; (235.68,120.06) .. controls (235.68,98.86) and (253.06,81.68) .. (274.5,81.68) .. controls (295.94,81.68) and (313.32,98.86) .. (313.32,120.06) .. controls (313.32,141.26) and (295.94,158.45) .. (274.5,158.45) .. controls (253.06,158.45) and (235.68,141.26) .. (235.68,120.06)(228,120.06) .. controls (228,94.62) and (248.82,74) .. (274.5,74) .. controls (300.18,74) and (321,94.62) .. (321,120.06) .. controls (321,145.5) and (300.18,166.13) .. (274.5,166.13) .. controls (248.82,166.13) and (228,145.5) .. (228,120.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (273.8,154) – (285,162) – (273.8,170) – (276.71,162) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (283,85.13) – (270,78.06) – (283,71) – (279.49,78.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (227,118.75) .. controls (227,121.37) and (229.24,123.5) .. (232,123.5) .. controls (234.76,123.5) and (237,121.37) .. (237,118.75) .. controls (237,116.13) and (234.76,114) .. (232,114) .. controls (229.24,114) and (227,116.13) .. (227,118.75) – cycle ;[line width=0.75](324,118.66) .. controls (325.89,121.05) and (327.7,123.33) .. (329.8,123.33) .. 
controls (331.9,123.33) and (333.71,121.05) .. (335.61,118.66) .. controls (337.5,116.27) and (339.31,114) .. (341.41,114) .. controls (343.51,114) and (345.32,116.27) .. (347.22,118.66) .. controls (349.11,121.05) and (350.92,123.33) .. (353.02,123.33) .. controls (355.12,123.33) and (356.93,121.05) .. (358.82,118.66) .. controls (360.72,116.27) and (362.53,114) .. (364.63,114) .. controls (366.73,114) and (368.54,116.27) .. (370.43,118.66) .. controls (372.32,121.05) and (374.13,123.33) .. (376.23,123.33) .. controls (378.33,123.33) and (380.14,121.05) .. (382.04,118.66) .. controls (383.74,116.52) and (385.37,114.46) .. (387.21,114.07) ; + [x=0.25pt,y=0.25pt,yscale=-1,xscale=1][line width=0.75](165,118.66) .. controls (166.89,121.05) and (168.7,123.33) .. (170.8,123.33) .. controls (172.9,123.33) and (174.71,121.05) .. (176.61,118.66) .. controls (178.5,116.27) and (180.31,114) .. (182.41,114) .. controls (184.51,114) and (186.32,116.27) .. (188.22,118.66) .. controls (190.11,121.05) and (191.92,123.33) .. (194.02,123.33) .. controls (196.12,123.33) and (197.93,121.05) .. (199.82,118.66) .. controls (201.72,116.27) and (203.53,114) .. (205.63,114) .. controls (207.73,114) and (209.54,116.27) .. (211.43,118.66) .. controls (213.32,121.05) and (215.13,123.33) .. (217.23,123.33) .. controls (219.33,123.33) and (221.14,121.05) .. (223.04,118.66) .. controls (224.74,116.52) and (226.37,114.46) .. (228.21,114.07) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (314,118.75) .. controls (314,121.37) and (316.24,123.5) .. (319,123.5) .. controls (321.76,123.5) and (324,121.37) .. (324,118.75) .. controls (324,116.13) and (321.76,114) .. (319,114) .. controls (316.24,114) and (314,116.13) .. (314,118.75) – cycle ; (235.68,120.06) .. controls (235.68,98.86) and (253.06,81.68) .. (274.5,81.68) .. controls (295.94,81.68) and (313.32,98.86) .. (313.32,120.06) .. controls (313.32,141.26) and (295.94,158.45) .. (274.5,158.45) .. controls (253.06,158.45) and (235.68,141.26) .. (235.68,120.06)(228,120.06) .. controls (228,94.62) and (248.82,74) .. (274.5,74) .. controls (300.18,74) and (321,94.62) .. (321,120.06) .. controls (321,145.5) and (300.18,166.13) .. (274.5,166.13) .. controls (248.82,166.13) and (228,145.5) .. (228,120.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (273.8,154) – (285,162) – (273.8,170) – (276.71,162) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (283,85.13) – (270,78.06) – (283,71) – (279.49,78.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (227,118.75) .. controls (227,121.37) and (229.24,123.5) .. (232,123.5) .. controls (234.76,123.5) and (237,121.37) .. (237,118.75) .. controls (237,116.13) and (234.76,114) .. (232,114) .. controls (229.24,114) and (227,116.13) .. (227,118.75) – cycle ;[line width=0.75](324,118.66) .. controls (325.89,121.05) and (327.7,123.33) .. (329.8,123.33) .. controls (331.9,123.33) and (333.71,121.05) .. (335.61,118.66) .. controls (337.5,116.27) and (339.31,114) .. (341.41,114) .. controls (343.51,114) and (345.32,116.27) .. (347.22,118.66) .. controls (349.11,121.05) and (350.92,123.33) .. (353.02,123.33) .. controls (355.12,123.33) and (356.93,121.05) .. (358.82,118.66) .. controls (360.72,116.27) and (362.53,114) .. (364.63,114) .. controls (366.73,114) and (368.54,116.27) .. (370.43,118.66) .. controls (372.32,121.05) and (374.13,123.33) .. (376.23,123.33) .. controls (378.33,123.33) and (380.14,121.05) .. (382.04,118.66) .. 
controls (383.74,116.52) and (385.37,114.46) .. (387.21,114.07) ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (474,116.75) .. controls (474,119.37) and (476.24,121.5) .. (479,121.5) .. controls (481.76,121.5) and (484,119.37) .. (484,116.75) .. controls (484,114.13) and (481.76,112) .. (479,112) .. controls (476.24,112) and (474,114.13) .. (474,116.75) – cycle ; (395.68,118.06) .. controls (395.68,96.86) and (413.06,79.68) .. (434.5,79.68) .. controls (455.94,79.68) and (473.32,96.86) .. (473.32,118.06) .. controls (473.32,139.26) and (455.94,156.45) .. (434.5,156.45) .. controls (413.06,156.45) and (395.68,139.26) .. (395.68,118.06)(388,118.06) .. controls (388,92.62) and (408.82,72) .. (434.5,72) .. controls (460.18,72) and (481,92.62) .. (481,118.06) .. controls (481,143.5) and (460.18,164.13) .. (434.5,164.13) .. controls (408.82,164.13) and (388,143.5) .. (388,118.06) ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (433.8,152) – (445,160) – (433.8,168) – (436.71,160) – cycle ; [fill=rgb, 255:red, 0; green, 0; blue, 0 ,fill opacity=1 ] (443,83.13) – (430,76.06) – (443,69) – (439.49,76.06) – cycle ;[fill=rgb, 255:red, 128; green, 128; blue, 128 ,fill opacity=1 ] (387,116.75) .. controls (387,119.37) and (389.24,121.5) .. (392,121.5) .. controls (394.76,121.5) and (397,119.37) .. (397,116.75) .. controls (397,114.13) and (394.76,112) .. (392,112) .. controls (389.24,112) and (387,114.13) .. (387,116.75) – cycle ;[line width=0.75](484,115.66) .. controls (485.89,118.05) and (487.7,120.33) .. (489.8,120.33) .. controls (491.9,120.33) and (493.71,118.05) .. (495.61,115.66) .. controls (497.5,113.27) and (499.31,111) .. (501.41,111) .. controls (503.51,111) and (505.32,113.27) .. (507.22,115.66) .. controls (509.11,118.05) and (510.92,120.33) .. (513.02,120.33) .. controls (515.12,120.33) and (516.93,118.05) .. (518.82,115.66) .. controls (520.72,113.27) and (522.53,111) .. (524.63,111) .. controls (526.73,111) and (528.54,113.27) .. (530.43,115.66) .. controls (532.32,118.05) and (534.13,120.33) .. (536.23,120.33) .. controls (538.33,120.33) and (540.14,118.05) .. (542.04,115.66) .. controls (543.74,113.52) and (545.37,111.46) .. (547.21,111.07) ; +⋯ =V_e+V_e(-Z)Π_NV_e(-Z)+V_e(-Z)Π_NV_e(-Z)^2Π_NV_e(-Z)+⋯ =V_e/1-Z^2 V_e Π_N represents the electron Coulomb interaction screened by the ions (the single wavy line corresponds to the Coulomb interaction V_e(Q)). Then Eq. (<ref>) can be explicitly written asχ_ρ̂_eρ̂_N^r=Π_e(-Z)V_eΠ_N/1-V_e Π_e-Z^2 V_e Π_N.The above discussion can be extended to obtain the following retarded correlation functions χ_ρ̂_Nρ̂_e^r and χ_ρ̂_Nρ̂_N^r such that,χ_ρ̂_Nρ̂_e^r=χ_ρ̂_eρ̂_N^r,andχ_ρ̂_Nρ̂_N^r=(1-V_eΠ_e)Π_N/1-V_e Π_e-V_eZ^2 Π_N.In addition, in Ref. <cit.>, we have already derivedχ_ρ̂_eρ̂_e^r=Π_e(1-Z^2V_eΠ_N)/1-V_e Π_e-Z^2 V_e Π_N.Thus, by combining all these terms it is straightforward to verifyχ_ρ̂_eρ̂_e^r+(-Z)χ_ρ̂_eρ̂_N^r+(-Z)χ_ρ̂_Nρ̂_e^r+(-Z)^2χ_ρ̂_Nρ̂_N^r=Π_e/1-V_e Π_e-Z^2 V_e Π_N+Z^2 Π_N/1-V_e Π_e-Z^2 V_e Π_Nin Eq. (<ref>). As has been noted in Ref. <cit.>, this expression encodes both the thermal movement, as well as the in-medium effect of the electrons and ions.In practical computation of Eq. (<ref>), we first integrate out the polar angle of 𝐐 with respective to the direction of the photon momentum 𝐩_γ, which is fixed as the z-axis in the spherical coordinate system. 
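For completeness (this step is not spelled out in the text), the verification amounts to adding the numerators of the four correlators over the common denominator D ≡ 1 - V_eΠ_e - Z^2V_eΠ_N:

Π_e(1-Z^2V_eΠ_N) + Z^2V_eΠ_eΠ_N + Z^2V_eΠ_eΠ_N + Z^2(1-V_eΠ_e)Π_N = Π_e + Z^2Π_N,

since the V_eΠ_eΠ_N cross terms enter with coefficients -Z^2, +Z^2, +Z^2 and -Z^2 and cancel; dividing by D reproduces the right-hand side quoted above.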
In practical computation of Eq. (<ref>), we first integrate out the polar angle of 𝐐 with respect to the direction of the photon momentum 𝐩_γ, which is fixed as the z-axis in the spherical coordinate system. Besides, we take a variable transformation from cosθ_𝐐𝐩_γ to the variable E_a=√(|𝐩_γ-𝐐|^2+m_a^2), along with the corresponding Jacobian

|dcosθ_𝐐𝐩_γ/dE_a| = p_γQ/E_a.

With this change of variable, the term proportional to sin^2θ_𝐐𝐩_γ in Eq. (<ref>), i.e., |𝐩_γ×𝐐|^2, can be rewritten as p_γ^2Q^2-[(E_a^2-m_a^2-p_γ^2-Q^2)^2/4], and thus Eq. (<ref>) is further expressed as

Γ(𝐩_γ) = ∫dω ∫ d^3Q/(2π)^3 (4πα/Q^4) g_aγ^2 |𝐩_γ×𝐐|^2/[8E_γ√(|𝐩_γ-𝐐|^2+m_a^2)] × (-2)/(1-e^{-ω/T_⊙}) [Im(Π_e)/|1-V_eΠ_e-V_eZ^2Π_N|^2 + Z^2 Im(Π_N)/|1-V_eΠ_e-V_eZ^2Π_N|^2] × δ(√(|𝐩_γ-𝐐|^2+m_a^2)-E_γ+ω)

= ∫dω ∫ Q dQ/(2π)^2 (4πα/Q^4) g_aγ^2/(8E_γ^2) × (-2)/(1-e^{-ω/T_⊙}) [Im(Π_e)/|1-V_eΠ_e-V_eZ^2Π_N|^2 + Z^2 Im(Π_N)/|1-V_eΠ_e-V_eZ^2Π_N|^2]
 × ∫_{E_-}^{E_+} dE_a [p_γ^2Q^2-(E_a^2-m_a^2-p_γ^2-Q^2)^2/4] δ(E_a-E_γ+ω)

= ∫dω ∫ Q dQ/(2π)^2 (4πα/Q^4) g_aγ^2/(8E_γ^2) × (-2)/(1-e^{-ω/T_⊙}) [Im(Π_e)/|1-V_eΠ_e-V_eZ^2Π_N|^2 + Z^2 Im(Π_N)/|1-V_eΠ_e-V_eZ^2Π_N|^2]
 × [p_γ^2Q^2-(ω^2-2ωp_γ-Q^2)^2/4] Θ(Q+ω)·[Θ(Q-ω)Θ(p_γ-Q)+Θ(2E_γ-Q-ω)Θ(Q-p_γ)],

where E_± = √((p_γ±Q)^2+m_a^2), and Θ is the Heaviside step function. The integration area on the Q-ω plane corresponding to the step functions is shown in Fig. <ref> for illustration. In the last step, we generalize the above expression to multiple atom species in the Sun (as the sum over isotopes in the square bracket in Eq. (<ref>)).
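As a practical aid (not part of the original derivation), the Heaviside factors above translate directly into an indicator function for the allowed (Q, ω) integration area. A minimal Python sketch, assuming an on-shell photon with E_γ = p_γ and arbitrary but consistent units, is:

import numpy as np

def primakoff_domain(Q, omega, p_gamma):
    """Indicator of Theta(Q+omega) * [Theta(Q-omega)*Theta(p_gamma-Q)
       + Theta(2*E_gamma-Q-omega)*Theta(Q-p_gamma)], with E_gamma = p_gamma."""
    E_gamma = p_gamma                      # on-shell photon (assumption)
    H = lambda x: np.heaviside(x, 1.0)     # step function with H(0) = 1
    return H(Q + omega) * (H(Q - omega) * H(p_gamma - Q)
                           + H(2.0 * E_gamma - Q - omega) * H(Q - p_gamma))

# Example: tabulate the allowed region for p_gamma = 3 (e.g., a 3 keV photon).
Q, omega = np.meshgrid(np.linspace(0.0, 8.0, 401), np.linspace(-4.0, 4.0, 401))
mask = primakoff_domain(Q, omega, 3.0)     # 1 inside the allowed area, 0 outside

The mask reproduces the shaded integration area of the corresponding figure and can be used to restrict a numerical quadrature over Q and ω.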
http://arxiv.org/abs/2312.16306v1
{ "authors": [ "Zheng-Liang Liang", "Lin Zhang" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20231226192509", "title": "The dynamic solar Primakoff process" }
χ̃
http://arxiv.org/abs/2312.16691v1
{ "authors": [ "Anna Tokareva" ], "categories": [ "hep-ph", "gr-qc", "hep-th" ], "primary_category": "hep-ph", "published": "20231227190344", "title": "Gravitational Waves from Inflaton Decay and Bremsstrahlung" }
LLMs with User-defined Prompts as Generic Data Operators for Reliable Data Processing
Luyi Ma^*, Nikhil Thakurdesai^*, Jiao Chen^*, Jianpeng Xu, Evren Korpeoglu, Sushant Kumar, Kannan Achan
Personalization Team, Walmart Global Tech, Sunnyvale, CA, USA
{luyi.ma, nikhil.thakurdesai, jiao.chen0, jianpeng.xu, ekorpeoglu, sushant.kumar, kannan.achan}@walmart.com
January 14, 2024
=============================================================================================================================================================================================================================================================================================
*Equal contribution.

Data processing is one of the fundamental steps in machine learning pipelines to ensure data quality. The majority of applications adopt the user-defined function (UDF) design pattern for data processing in databases. Although the UDF design pattern provides flexibility, reusability and scalability, the increasing demand on machine learning pipelines brings three new challenges to this design pattern: it is not low-code, not dependency-free and not knowledge-aware. To address these challenges, we propose a new design pattern in which large language models (LLMs) work as generic data operators (LLM-GDO) for reliable data cleansing, transformation and modeling, leveraging their human-compatible performance. In the LLM-GDO design pattern, user-defined prompts (UDPs) are used to represent the data processing logic rather than implementations in a specific programming language. LLMs can be centrally maintained, so users do not have to manage dependencies at run-time. Fine-tuning LLMs with domain-specific data enhances their performance on domain-specific tasks, which makes data processing knowledge-aware. We illustrate these advantages with examples from different data processing tasks. Furthermore, we summarize the challenges and opportunities introduced by LLMs to provide a complete view of this design pattern for further discussion.

Large Language Models, Data Modeling, Data Cleansing, Data Transformations, Design Pattern

§ INTRODUCTION

Machine learning (ML) powers numerous data-driven applications across a wide variety of use cases. A typical machine learning pipeline consists of data processing, feature engineering, model selection, model training, hyper-parameter tuning, evaluation, testing, and serving <cit.>. Many of these steps not only require high-quality data to ensure that the machine learning applications perform as expected, but also benefit from huge volumes of data to support training <cit.>. However, most reliable datasets are generated by human annotation. This process does not scale well, as it is both expensive and time-consuming, limiting its further application. Moreover, with the increasing parameter counts and complexity of machine learning models, huge volumes of high-quality data are required for model training. Data processing tasks need to support this growing demand for effective data cleansing, transformation and modeling. Following the definition of a typical ETL (Extract, Transform, Load) process in data warehousing, our main focus is on the transform process.
To support this growing demand for data transformation, user-defined functions (UDFs) are commonly used to clean, transform and model data in a data warehouse or a data lake <cit.>. A typical UDF template is shown in Figure <ref>-(a), written in a Pythonic style. Within the UDF, a user can import the run-time dependencies, implement the logic and process the input data. When applying the UDF on a database, following the classic narrow-transformation setting in Spark, the UDF is applied to each row of the data and the processed row is stored [https://www.databricks.com/glossary/what-are-transformations]. The UDF design pattern brings three advantages to large-scale data processing. First, it provides the flexibility for users to implement their own data processing logic that is not supported by built-in functions. Second, it abstracts the functionality for better understanding, debugging and reusability (modular programming). Third, it can easily be scaled up by a big-data processing engine such as Spark <cit.>. With these advantages, a user can implement a UDF in a programming language supported by the system (e.g., Python in a PySpark cluster built on a Hadoop file system) and apply the logic in parallel over all the records. However, this design pattern also meets increasing challenges. (1) Not low-code or zero-code: it requires users to have substantial programming skills and experience. (2) Not dependency-free: different UDFs can require complicated run-time environments, and managing the dependencies is difficult in both development and deployment. For example, if the sets of run-time dependencies of two UDFs have no overlap, two separate pipelines are needed to manage them. (3) Not knowledge-aware: it is difficult to natively incorporate prior knowledge about the data processing logic into current implementations of UDFs. Prior knowledge is usually task-specific. For instance, e-commerce item category classification requires strong domain knowledge to identify the item attributes, and it is difficult to embed this knowledge into UDFs deterministically due to the enormous number of attribute combinations.
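To make the dependency and implementation burden concrete, the following is a minimal sketch of a traditional Pythonic UDF that normalizes heterogeneous date strings in a Spark pipeline. It is not taken from the paper's Figure <ref>-(a); the column name and the use of the dateutil package are illustrative assumptions.

from dateutil import parser                      # run-time dependency the cluster must provide
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def normalize_date(date_str):
    # User-implemented logic: every new input format may require new code or new packages.
    try:
        return parser.parse(date_str).strftime("%Y%m%d")
    except (ValueError, TypeError):
        return None                              # unparsable rows need explicit handling

normalize_date_udf = udf(normalize_date, StringType())
# Narrow transformation: the UDF is applied row by row.
# df = df.withColumn("date", normalize_date_udf(df["date"]))

Even this small example drags in an external parsing library and error-handling conventions that must be installed and version-matched on every executor, which is exactly the dependency burden described above.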
Recently, artificial intelligence (AI) has made promising progress with the emergence of Large Language Models (LLMs). LLMs, such as Llama2 <cit.> and GPT-4 <cit.>, show their effectiveness in solving a wide range of downstream tasks (e.g., question answering, multi-step reasoning, etc.) thanks to their emergent abilities <cit.>, which reduces the gap between natural language and programming. With well-designed prompts, a user without much experience in data processing can easily employ an LLM to extract the aspects of products, a task that usually requires the knowledge of e-commerce domain experts <cit.>. This learning ability, which relies only on natural language instructions and input-output examples, without optimizing any parameters, is called in-context learning <cit.>. In-context learning enables LLMs to handle few-shot and even zero-shot learning tasks, for instance, classifying tabular data <cit.> and detecting anomalies in system logs <cit.>. Similar to other pre-trained models, the performance of LLMs can be further improved by fine-tuning with different techniques; for example, LoRA <cit.> and QLoRA <cit.> efficiently fine-tune LLMs by optimizing the rank-decomposition matrices of the dense layers in a neural network. Fine-tuned LLMs can achieve human-compatible results in many tasks <cit.><cit.>, greatly reducing the human effort in labeling and annotation.

In this paper, we address the limitations of the current UDF-based data processing practice and summarize a new design pattern for data processing involving LLMs and user-defined prompts (UDPs) to balance flexibility and human-level accuracy. We propose a design pattern in which LLMs with UDPs work as Generic Data Operators (LLM-GDOs) for data cleansing, transformation and modeling. To visualize this design pattern, we show an example of an LLM-GDO in Figure <ref>-(b). In the LLM-GDO design, we simplify the UDF with two changes. First, instead of defining a programming-language-based UDF, a user defines a prompt (or a prompt template for better representation), `user_defined_prompt', to describe the data processing logic (low-code and zero-code). When the data distribution changes, instead of updating the processing code, UDPs can update the data processing logic easily by modifying the instructions and examples in the prompts. Second, unlike a UDF, which requires run-time dependencies to support its execution, an LLM (pre-trained or fine-tuned) works as a compiler for the prompt and executes the request independently (dependency-free). As we use the same LLM with proper version control, we can align offline development and online serving. When processing data, the database talks with the remote LLM resource via LLM gateways (e.g., APIs or agents) behind which LLMs are maintained to process the requests. We abstract this function as `llm_call'; the implementation details can be handled by platforms and encapsulated away from users. By fine-tuning LLMs, we can seamlessly introduce domain-specific knowledge into LLMs with a small dataset and enhance their performance on these tasks (knowledge-aware). Although LLMs are versatile, they also have limitations, which we should keep improving. To provide a complete view of this design pattern, we also foresee its challenges. Our contributions are summarized as follows:
* we introduce the LLM-GDO design pattern in the big-data setting for ML pipelines.
* we summarize potential applications of LLM-GDOs.
* we discuss the challenges and opportunities of LLM-GDO.
The paper is structured as follows. We introduce the key concepts in Section <ref>, present a comprehensive comparison between UDFs and LLM-GDOs in Section <ref>, followed by challenges and opportunities in Section <ref>. Finally, we conclude our paper in Section <ref>.

§ PRELIMINARIES

§.§.§ Narrow Transformations and Wide Transformations
Data transformations are instructions for modifying the rows of a database. In Spark, depending on the dependencies between data points, data transformations can be grouped into narrow transformations (Figure <ref>-(a)) and wide transformations (Figure <ref>-(b)). Typically, the output of a narrow transformation depends on only one input row (no data shuffling), while the output of a wide transformation depends on multiple input rows (with data shuffling). In our paper, we focus on narrow transformations, as they are dominant in many early-stage data processing steps that improve data quality. A sketch of the LLM-GDO template applied as a narrow transformation is shown below.
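The following sketch mirrors the LLM-GDO template described above (the structure follows the textual description of Figure <ref>-(b); the exact signatures, the prompt wording, and the llm_call transport are illustrative assumptions rather than the authors' reference implementation):

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

USER_DEFINED_PROMPT = (
    "You are a data cleansing operator. "
    "Rewrite the given date string into the format YYYYMMDD. "
    "Return only the reformatted value.\nInput: {value}"
)

def llm_call(prompt: str) -> str:
    # Platform-managed gateway to a centrally maintained LLM (API or agent).
    # The implementation is encapsulated from the user; only a stub is shown here.
    raise NotImplementedError("provided by the data platform")

def user_defined_function(value: str) -> str:
    # The UDP carries the processing logic; no task-specific dependencies are imported.
    return llm_call(USER_DEFINED_PROMPT.format(value=value))

llm_gdo = udf(user_defined_function, StringType())
# Applied as a narrow transformation, one row at a time:
# df = df.withColumn("date", llm_gdo(df["date"]))

Updating the logic (e.g., switching the target format or adding few-shot examples) only requires editing the prompt string, not the surrounding code or its run-time dependencies.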
§.§.§ LLM Function Calling and Fine-tuning
There are many ways of accessing LLMs; for example, the OpenAI API [https://platform.openai.com/docs/api-reference] provides one of the popular solutions. In our paper, we assume that the LLM gateways are predominantly maintained by the platforms (remote or local). In Figure <ref>-(b), we abstract LLM function calling by the `llm_call' function. This function takes the formatted UDP and returns the processed output. LLMs can be fine-tuned with a small sample of high-quality data. As new data flows into the database, the platform can seamlessly fine-tune LLMs by extracting a small sample of high-quality data. Although there are still many challenges with LLM fine-tuning, they are beyond the scope of this discussion.

§ METHODOLOGY
In this section, we present a list of tasks where LLM-GDOs can improve on UDFs. Note that LLMs are still evolving, so we mainly focus on the design pattern in this section to address the connection between UDFs and LLM-GDOs. We provide a comprehensive discussion of the challenges of the current LLM-GDO design in Section <ref>. For brevity, we reuse the definition of `user_defined_function' in Figure <ref>-(b) in the following case studies. All the example LLM outputs in this paper have been generated using ChatGPT 3.5 <cit.>.

§.§ Data Cleansing and Transformation
Data cleansing and transformation are crucial steps to ensure the performance of an ML pipeline. To compare UDFs and LLM-GDOs in data cleansing and transformation, we consider a sample table `item_rating', defined in Figure <ref>, with the following three tasks. They illustrate the low-code and dependency-free features of LLM-GDOs.

§.§.§ Data Structural Consistency
Data structural consistency is usually one of the initial steps. It helps to structuralize the data to improve the downstream transformations. In the `item_rating' table (Figure <ref>), the `date' column contains date strings in different formats. Figure <ref> shows an example of structuralizing the date data. With an LLM-GDO, we can define the output format (YYYYMMDD) in the prompt and let the LLM handle the data processing (Figure <ref>-(b)). Traditional UDFs can also complete this structuralization, but it usually requires either enumerating the date formats or leveraging different packages.

§.§.§ Data Type Conversion
After cleaning the data structure for better structural consistency, we can convert the data from one type to another. Figure <ref> presents an example of UNIX epoch time conversion from the given date data. Again, the LLM-GDO completes this data transformation with the instruction in the prompt, while traditional UDF users need to understand the definition of UNIX epoch time for the implementation or know the right packages to call.

§.§.§ Data Standardization
Another important task is normalizing numeric data. In many machine learning pipelines, normalization of numeric values increases the stability of model training. In Figure <ref>, we present an example where an LLM-GDO is employed to normalize the `user_rating' column by providing the rating range. Example UDPs for these three cleansing tasks are sketched below.
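To make the three tasks concrete, the following prompt templates are plausible UDPs for them. The wording is illustrative and not quoted from the paper's figures; the column names follow the `item_rating' table described above, and a 1-to-5 rating scale is assumed for the standardization example.

# UDP for data structural consistency (date column).
PROMPT_DATE = (
    "Normalize the following date string into the format YYYYMMDD. "
    "Return only the normalized value.\nInput: {date}"
)

# UDP for data type conversion (date -> UNIX epoch seconds; midnight UTC assumed).
PROMPT_EPOCH = (
    "Convert the following date (format YYYYMMDD) into the UNIX epoch time in seconds "
    "at 00:00:00 UTC. Return only the integer.\nInput: {date}"
)

# UDP for data standardization (the rating range is stated in the instruction).
PROMPT_NORMALIZE = (
    "The rating below is on a scale from 1 to 5. "
    "Rescale it linearly to the range [0, 1] and return only the number.\nInput: {user_rating}"
)

# Each template is formatted per row and passed to llm_call inside user_defined_function,
# e.g. llm_call(PROMPT_EPOCH.format(date="20200101")).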
§.§ Data Modeling
In addition to data cleansing and transformation, many feature engineering steps require machine learning models to conduct basic reasoning and generate high-quality features. For example, in many NLP tasks, word annotations and tagging are important features for modeling <cit.> <cit.>. These features usually require dedicated machine learning pipelines, which are inconvenient to maintain without proper support from ML operations. A user needs to understand such a pipeline in detail to employ these complicated models from a UDF, managing the dependencies and another layer of data processing. LLM-GDOs, however, keep the same convenience with a uniform practice. We consider a new sample table, `item_information', defined in Figure <ref>. We highlight this advantage with the following two tasks.

§.§.§ Reasoning
In this task, we need to parse each row of data and conduct reasoning (e.g., classification) to get the output. One of the common use cases is to detect anomalous values in the database. For example, in many system log databases, system error messages are grouped into several categories by their severity levels or the types of events. Another example is that items are grouped into product types in e-commerce for item display. The classification could be invalid, but the anomaly is hard to find due to the huge data volume. Anomaly detection can be done by separate machine learning pipelines <cit.>, but integrating them into the database is challenging. First, these machine learning pipelines have different dependencies and require domain knowledge. Second, as the underlying data changes, the models might not be updated without a proper orchestration system for model retraining. An LLM-GDO can conduct the reasoning to detect the anomaly in this case. Figure <ref> shows an example of an LLM-GDO for anomaly detection in the `item_information' table. We can see that the LLM-GDO easily detects the wrong item type for item `103' in Figure <ref> and corrects it in Figure <ref>-(b). In this task, we can leverage the in-context learning ability of the LLM by designing the prompt accordingly <cit.>; a sketch of such a prompt is given below. With a traditional UDF, in contrast, users have to define complex heuristics or maintain different versions of deep learning models to complete the task. The LLM-GDO simplifies the development and maintenance of the logic, because the LLMs at the back-end can be managed in a centralized way, which powers the tasks without disclosing the details to users.

§.§.§ Embedding Database
Embedding generation is another important use case for data processing. With an LLM-GDO, we can also generate high-quality embeddings based on the `item_name' column [https://platform.openai.com/docs/guides/embeddings]. Due to limited space, we skip the prompt example and the output embeddings. However, with this approach, we can update the embeddings in a timely manner for downstream modeling.
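As an illustration of the in-context-learning style of UDP mentioned in the Reasoning task, such a prompt could look as follows. The example rows, product types, and wording are assumptions for illustration and are not taken from the paper's figures.

# Few-shot UDP for product-type anomaly detection in the `item_information' table.
PROMPT_ITEM_TYPE = (
    "You are checking the item_type column of an e-commerce catalog.\n"
    "Given an item name and its recorded item type, answer with the corrected item type "
    "if the recorded one is wrong; otherwise repeat the recorded type.\n"
    "Examples:\n"
    "item_name: 'stainless steel fry pan', item_type: 'Cookware' -> Cookware\n"
    "item_name: 'cotton crew socks', item_type: 'Electronics' -> Clothing\n"
    "Now the input row:\n"
    "item_name: '{item_name}', item_type: '{item_type}' ->"
)

# Per-row usage inside the LLM-GDO:
# corrected = llm_call(PROMPT_ITEM_TYPE.format(item_name=row.item_name, item_type=row.item_type))

The few-shot examples encode the domain knowledge directly in the UDP, so the correction logic can be revised by editing the examples rather than retraining or redeploying a separate classification model.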
Moreover, current commercial LLMs, such as GPT-3.5 and PaLM 2, still respond with limited speed. Nonetheless, research on LLM knowledge distillation <cit.> and compression <cit.> offers promising avenues for reducing LLM size and computational demands, thereby making them more cost-effective, efficient, and scalable.
LLM Hallucination Hallucination means that an LLM makes up content that does not exist in the real world or does not satisfy the requirements in the prompt. While hallucination can be a feature for many creative use cases, it is usually an obstacle for data processing <cit.>. For instance, when the desired output data format is tab-separated values (TSV), it is crucial to avoid inconsistencies, such as the inclusion of alternative separators, particularly when employing an LLM within a production data engineering pipeline. Various methods have been proposed to mitigate and quantify hallucinations in LLMs, including reducing the model's temperature, providing more context in the prompt, employing a chain-of-thought approach <cit.>, ensuring self-consistency <cit.>, or specifying more precise formatting requirements within the prompt. Despite these efforts, the effective detection and prevention of hallucinations in LLMs during data processing remains a formidable challenge.
LLM Unit Testing and Evaluation Traditional UDFs are typically deterministic, allowing for the development of unit tests to assess their correctness. In contrast, LLMs employ a generative approach to produce results based on probability <cit.>, making it more challenging to conduct universal testing. Recently, some teams have leveraged larger LLMs to generate unit test cases (e.g., expected outputs) for the prospective LLMs [https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications]. We believe LLM unit testing will attract more attention in LLM development and application.
LLM Privacy Fine-tuning an industry-level LLM for data processing could depend on data from different departments, involving data transfer and sharing. For example, when processing data from a social app, customers' profiles and preferences are highly sensitive. Data sharing increases the probability of data leaks and may violate data privacy policies <cit.>. Recent works in federated learning <cit.> address the privacy issues in LLMs. We expect more research interest in LLM privacy issues.
§ CONCLUSION
In this research paper, we propose a novel design pattern, LLM-GDO, aimed at improving the efficiency and reliability of data transformation. LLM-GDO offers the benefits of low-code and dependency-free implementations for knowledge-aware data processing. However, it also encounters challenges stemming from LLMs. We examine these challenges and opportunities and provide an in-depth perspective on the LLM-GDO design pattern.
http://arxiv.org/abs/2312.16351v1
{ "authors": [ "Luyi Ma", "Nikhil Thakurdesai", "Jiao Chen", "Jianpeng Xu", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "categories": [ "cs.DB", "cs.AI" ], "primary_category": "cs.DB", "published": "20231226230838", "title": "LLMs with User-defined Prompts as Generic Data Operators for Reliable Data Processing" }
Klaus Hodapp [email protected]]Klaus W. Hodapp University of Hawaii, Institute for Astronomy, 640 N. Aohoku Place, Hilo, HI 96720, USA0000-0002-5258-6846]Eric Gaidos Department of Earth Sciences, University of Hawai'i at Mänoa, Honolulu, HI 96822, USA 0000-0002-7064-8270]Matthew A. Kenworthy Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands0000-0002-2471-8442]Michael Tucker CCAPP Fellow Center for Cosmology and Astroparticle Physics, The Ohio State University,191 West Woodruff Ave, Columbus, OH, USA Department of Astronomy,The Ohio State University,140 West 18th Avenue, Columbus, OH, USA0000-0003-4631-1149]Benjamin J. Shappee University of Hawaii, Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822, USA0000-0003-3490-3243]Anna V. Payne University of Hawaii, Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822, USA Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 212180000-0003-3429-7845]Aaron Do University of Hawaii, Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822, USAA previously unremarkable star near the Canis Major OB1/R1 association underwent an episode of multiple deep brightness minima. Light curves based on archival Gaia, ZTF, NEOWISE data and additional observations from LCO and UKIRT show that the star was not variable prior to 2019 Aug 18 (MJD 58700), and on that date started showing brightness dips of up to 3 magnitudes in the Gaia G and ZTF r bandpasses. After MJD 59500, ≈ 800 days after the onset of these dipping events, the star returned to its previous brightness, and no significant dipping events have been recorded since. Compared to the stable phase, NEOWISE infrared photometry in the W1 and W2 bands indicates a generally redder color, and both decreases and increases in brightness at different times during the dipping episode. The spectrum of Gaia21bcv taken after the end of the dipping episode shows several neutral and ionized metal absorption lines, including Li, indicating a spectral type of ≈K5. Variable emission from [O1] was observed. The Hα absorption in Gaia21bcv is too faint and irregular for this spectral type, indicating that the line is partly filled in by variable emission, a signature of weak episodic accretion.Gaia21bcv lies above the zero-age main sequence, but is much fainter than typical R CrB stars. We interpret the light curve of Gaia21bcv as being similar to the occultation events in ϵ Aurigae, i.e., occultation by a disk around a companion object orbiting the primary star.§ INTRODUCTION Most young stars are variable, and variabilityis one of the characteristics used by<cit.> to define the class of T Tauri stars. 
In addition to the variability caused by modulation of star spots and instabilities in mass accretion onto the star, dipping events in young stars, i.e., multiple, often quasi-periodic minima, are due to occultation by dust condensations in thedisks surrounding them, leading to the definition of the UXOr class ofyoung variables by <cit.>.A summary of the current understanding of dipper objects has recently been published by<cit.>.In contrast to the dipper phenomenon caused by density inhomogeneities in a circumstellar disk, where dips are continually occurring, a small class of variable stars show isolated episodes of deep minima lasting for months to years,separated by long phases of constant brightness.In those cases, the obscuring material must be confined to a small section of its orbit around the star, and these light curves have been interpreted as eclipsing events by a disk surrounding a companion object.The prototypical object for such substantial periodic occultation events is ϵ Aurigae, where the brightness minima are caused by an optically thick circumstellar disk around a companion star transiting the primary star, leading to periodic eclipses every 27.1 years <cit.>. Similarly, KH15D discovered by <cit.> has in recent decades started showing deep,quasi-periodic minima that increased in depth and duration, so that the star no longer reaches the unobscured light level between minima.The case of the eclipse of 1SWASP J1407 characterized by rapid variations during the occultation event was discussed by <cit.> and interpreted as being caused by a giant ring system around an unobserved object that itself orbits the star. This scenario of eclipsing disks in general has been discussed by <cit.> who concluded that in a sample of 10000 post-accretion stars monitored over 10 years, several such eclipsing events would be predicted.The Gaia satellite observes any given point in the sky every few months, <cit.> and the Gaia Alerts project [<http://gsaweb.ast.cam.ac.uk/alerts>] publishes unusual photometric behavior discovered in these repeated observations.The discovery of Gaia21bcv in the CMa OB1/R1 association of young stars, as a result of the Gaia monitoring, is confirming the prediction by <cit.> that more such secondary disk or ring occultation events remain to be discovered.It should be noted that another class of variable star with similar long-duration drops in brightness are the R Coronae Borealis (RCB) stars, carbon-rich post-main-sequence stars where the condensation of dust clouds is responsible for the dimming of the star.RCB stars are high-luminosity objects in the late phases of stellar evolution and can be distinguished from other dimming events such as UXors on this basis.This paper reports the results of a photometric and spectroscopic observing campaign and the analysis of various archival records initiated after the Gaia21bcv alert. § OBSERVATIONS AND RESULTS§.§ Photometry§.§.§ Gaia Gaia21bcv was an unremarkable star of constant brightness in all the prior years for which photometry exists.Between 2015 and mid-2019, Gaia photometry of this star showed little variation, with a magnitude of G=17.70±0.03.On 2019 Aug. 18 (MJD 58713), Gaia recorded the star at G=20.12, followed by a rapid re-brightening to ≈ G=18.6 when Gaia obtained the next light curve data points on 2019 Aug. 
31.The minimum that triggered the Gaia alert on 2021 March 1 (MJD 59457) was actually the third major brightness minimum in the recent episode and was followed immediately by a re-brightening before the object seasonally became unobservable. The first few observations of the observing campaign in late 2021, when Gaia21bcv became observable again, recorded the end of another minimum. After MJD 59520 (2021 Nov. 2), the object returned to its bright state and no further occultation events were observed.The Gaia satellite observes any given point in the sky every few months <cit.>, sampling the rapid dipping events sparsely, but with greater precision than the available ground-based photometry.§.§.§ ZTFTo complement the Gaia light curve data, we have downloaded[https://irsa.ipac.caltech.edu] archival r-band photometry of Gaia21bcv from the Zwicky Transient Facility (ZTF) <cit.> archive <cit.>.We shifted all ZTF magnitudes by a fixed amount (-0.1 mag) to match the Gaia G photometry.§.§.§ NEOWISEAt the position of Gaia21bcv, the NEOWISE mission <cit.> continues to obtain multiple photometric measurements over about a day, every six months.For the light curve in Fig. 1, we have median combined all individual measurements from <cit.> in each of these day-long intervals into one measurement. These combined photometric points and the resulting colors are listed in Table 1.crrrGaia21bcv NEOWISE Photometry0ptEpoch [MJD] W1 [mag] W2 [mag] W1-W2 [mag] 55292 -0.05 ± 0.03 -0.02 ± 0.050.02 ± 0.06 55484 -0.04 ± 0.03 -0.03 ± 0.050.05 ± 0.06 569490.00 ± 0.03 -0.02 ± 0.050.08 ± 0.06 571160.01 ± 0.03 -0.05 ± 0.050.12 ± 0.06 573110.02 ± 0.030.05 ± 0.050.04 ± 0.06 574760.00 ± 0.03 -0.01 ± 0.050.07 ± 0.06 57675 -0.01 ± 0.03 -0.04 ± 0.050.10 ± 0.06 578360.01 ± 0.03 -0.03 ± 0.050.09 ± 0.06 58042 -0.04 ± 0.03 -0.07 ± 0.050.09 ± 0.06 58196 -0.01 ± 0.03 -0.02 ± 0.050.07 ± 0.06 58406 -0.01 ± 0.030.06 ± 0.05 -0.01 ± 0.06 585640.02 ± 0.03 -0.03 ± 0.050.10 ± 0.06 58771 -0.11 ± 0.03 -0.22 ± 0.050.17 ± 0.06 589280.55 ± 0.030.27 ± 0.050.34 ± 0.06 591380.30 ± 0.030.09 ± 0.050.27 ± 0.06 59292 -0.16 ± 0.03 -0.41 ± 0.050.31 ± 0.06 595020.01 ± 0.030.03 ± 0.050.03 ± 0.06 596590.06 ± 0.030.07 ± 0.050.04 ± 0.06 598660.05 ± 0.030.10 ± 0.050.01 ± 0.06§.§.§ UKIRT/WFCAM We have monitored Gaia21bcv in the z, J, H, and K bands using WFCAM <cit.> on UKIRT.These observations started just when the eclipsing episode was coming to an end and only the first 40 days of this observing campaign contain information about the absorbing material.After that, the data only confirm that post-eclipse Gaia21bcv has returned to the same brightness as its 2MASS catalog entry.§.§.§ Las Cumbres Observatory Gaia21bcv was observed over a span of 94 days between MJD 59496.8 and 59591.0 with the 1-mtelescope network of the Las Cumbres Observatory Global Telescope <cit.>.Photometry was obtained in Bessell V, R, and I filters with exposure times of 460, 80, and 50 sec, respectively.Image processing and extraction of instrumental magnitudes were automatically performed using the LCOGT BANZAI pipeline <cit.>, while determination of relative magnitudes corrected for individual image zero-points was carried out by custom software routines <cit.>.This monitoring started a few weeks earlier than the other monitoring campaigns at the beginning of the visibility period, and the first few data points recorded the last few days of the last dimming event. 
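As a small illustration of the epoch-combining step described for the NEOWISE light curve above (all single exposures obtained within each roughly day-long visit are median-combined into one photometric point), a possible implementation is sketched below. The input file is hypothetical, and the use of the public `w1mpro'/`w2mpro' column names is an assumption for this example rather than a description of the actual reduction.

```python
# Sketch: median-combine NEOWISE single-exposure photometry into one point per
# ~day-long visit, as described in the text. The CSV file is hypothetical and
# the column names follow the public NEOWISE single-exposure table convention.
import pandas as pd

def combine_neowise_epochs(csv_path: str, gap_days: float = 10.0) -> pd.DataFrame:
    df = pd.read_csv(csv_path).sort_values("mjd")
    # Start a new epoch whenever consecutive exposures are separated by more
    # than `gap_days`; NEOWISE revisits the field roughly every six months.
    epoch_id = (df["mjd"].diff() > gap_days).cumsum()
    epochs = df.groupby(epoch_id).agg(
        mjd=("mjd", "median"),
        w1=("w1mpro", "median"),
        w2=("w2mpro", "median"),
        n_exp=("mjd", "size"),
    )
    epochs["w1_w2"] = epochs["w1"] - epochs["w2"]
    return epochs.reset_index(drop=True)
```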
§.§.§ Historic Photometry Prior to the Gaia and WISE/NEOWISE missions, there are only a few photometric data points available in various surveys. The PS1 database <cit.> photometry of rPSFMag = 17.92 from MJD 56888 (2014 Aug. 19) is consistent with the brightness of Gaia21bcv outside of the dipping episode, considering the differences in the filter bandpasses.In the infrared, data taken with more consistent filters are available.The 2MASS magnitudes of Gaia21bcv in the stable bright phase are J = 14.528, H = 13.676, K_s = 13.338 (on MJD 50904 = 1998 April 1) and UKIDSS (K = 13.37 on MJD 55565 = 2011 Jan. 4) <cit.>near-infrared data points are consistent with stable brightness.The UKIRT WFCAM photometry after the end of the dipping episodeis J = 14.52, H = 13.723, K = 13.359. This is very close to the 2MASS brightnesstransformed to the UKIRT WFCAM system via the equations in <cit.>of J = 14.4726, H = 13.7356, K = 13.350. Within the uncertainties of these measurements,the brightness after the dipping episode is indistinguishable from that before it.§.§ Optical Spectroscopy§.§.§ UH88 Telescope and SNIFSWe obtained optical spectra of Gaia21bcv at nine epochs between MJD 59502 (2021 Oct. 14) and MJD 59663 (2022 March 25) using the “Super Nova Integral Field Spectrograph”<cit.> at the UH 88” telescope through the SCAT survey <cit.>.The earliest of these observations just recorded the last few days of the last poorly observed minimum in the LCOGT and UKIRT photometry.The other 8 spectra were obtained when the object had returned to stable brightness.After MJD 59653, well into the stable bright phase, emission in the [O1] doublet at 6300 and 6364 Å was observed, while it was not seen prior to that time. To improve the S/N over that of the individual exposures, we separately averaged all spectra with and without the [O1] line for Fig. 3. §.§.§ Keck1/HIRESWe obtained a high resolution spectrum of Gaia21bcv on MJD 59617 and 59618 (2022 Feb. 6 and 7), just after the apparent end of the dipping episode, with the Keck HIRES spectrograph <cit.>.We used a fairly narrow slit of 0.6 to achieve a spectral resolution of ≈ 60000 and used the longest slit possible to search for emission lines near the star and detected Hα and [S2] emission.Wavelength calibration used a combination of Th-Ar calibration spectra and telluric OH airglow lines, and Na1 and Hα emission in the science data.The sky and extended background were measured along the slit, away from the point source PSF.The comparison of telluric Na1 night sky emission and the Na1 absorption in the star gave a spectral shift of 0.828 Å, corresponding to a 31 ± 1 km s^-1 radial velocity relative to the solar system barycenter.Spectra extracted on-source without sky subtraction are shown as blue lines in Fig. 4, while sky-subtracted spectra are shown in red. § DATA ANALYSIS AND RESULTS§.§ Location and Distance Gaia21bcv at coordinates 10863865, -1222426 (J2000.0) is listed in the Gaia EDR3 catalog <cit.> as object number 3045209156636885760, with a parallax of 0.72 ± 0.13 mas and renormalized unit weight error (RUWE) of 1.06, giving a distance of 1382 pc (1178 - 1672 pc) and distance modulus 10.70 mag.Gaia21bcv (cyan circle in Fig. 2) appears to lie on the south-eastern edge of a region of reduced star density on DSS red and blue, identified as dark nebula DOBASHI 5098 <cit.>.The PS1/UKIRT gJK color composite image in Fig. 
2 shows a large number of very red objects in the area of the dark cloud, which also contains a mid-IR source WISE J071429.04-121239.5 indicated by a yellow circle in Fig. 2.<cit.> have studied young stars in the Canis Major OB1/R1 OB association that forms a ring-like system of molecular clouds with the brightest optically visible object being the H2 region Sh 2-296 near the western edge of the association.They concluded, based on its parallax and proper motion, that Gaia21bcv isa Class III object with a probability of membership in the OB association of 85%.Numerous Hα emitting young stars have been found in the CMa OB1/R1 region by <cit.>,including the area immediately surrounding Gaia21bcv, but Gaia21bcv was not included in their list of Hα stars.Using Gaia DR2 data, <cit.> identified groups of stars in CMa OB1/R1 based on their position and proper motion.Gaia21bcv lies outside these defined groups, and therefore was not assigned an age. A small grouping of bright, blueish stars is just west ofGaia21bcv.The brightest of these is HD 55902 (V = 8.7), a B9III star with low extinction and aparallax of 1.52 mas, d = 656 pc.These bright stars are in the foreground of Gaia21bcv and not physically close to it. The Keck HIRES spectrum detected Hα emission both from telluric emission (the narrow emission line) and from the nebulosity near Gaia21bcv (the broad line component). Fitting a Gaussian function to the broad line profile gives a barycentric radial velocity of27± 11 kms^-1. §.§ Properties of the Star§.§.§ Spectral typeThe combined SNIFS spectra show a continuum slope that in itself would match the continuum slope of a mid-M spectral type. However, such a late type can be excluded since we do not observe the deep molecular absorption features, primarily of TiO <cit.> characteristic of late spectral types.The best match to standard spectra is between K4V and K5V just where TiO absorption bands begin to be noticeable.The comparison K4V and K5V spectra more show pronounced Hα absorption, while this line in Gaia21bcv is fainter, and variable in the series of SNIFS medium-resolution spectra.The steep continuum slope is caused by extinction along the line of sight.We have dereddened the combined SNIFS spectra to match the overall continuum slope of aK4-type star by applying the extinction function of <cit.> with A_V = 3.2.The mean wavelength of the Gaia G bandpass is 639.74 nm <cit.> for which the <cit.> extinction is A_G = 0.80 A_V = 2.56.As confirmation of this extinction estimate, Gaia DR3 has B_p-R_p =2.59 for this star.The Bayestar19 (Pan-STARRS-based) reddening is E(B-V)∼1.0, consistent with the extinction value from the spectral slope.In the top panel in Fig. 3 the bright phase photometry of Gaia21bcv is fitted with a blackbody of T = 4500 K and A_V = 3.2 (red).To indicate the uncertainty of this combination, T = 4600 K and A_V = 3.0 is indicated in orange and T = 4400 K and A_V = 3.4 in gold.As confirmation of our spectroscopic determination of the effective temperature, the Gaia DR3 catalog lists the effective temperature as 4520 K, corresponding to a spectral type between K4V and K5V in the tables[http://www.pas.rochester.edu/ emamajek/] by <cit.>. We adopt a spectral type of K4.5V for Gaia21bcv in the following discussion.The radius of a K4.5V ZAMS star is 0.707 R_⊙, interpolated from the same tables.In the bright phase, the K_s magnitude of Gaia21bcv in the 2MASS Catalog is 13.338. 
The extinction of A_V = 3.2 corresponds to A_K_s = 0.238 <cit.>.With a distance of 1382 pc and a distance modulus of 10.70, Gaia21bcv has an extinction-corrected absolute magnitude of M_K_S = 2.40. With absolute magnitude and colors from the tables of<cit.> a K4.5V star has M_K_S = 4.32, making Gaia21bcv 1.92 mag (factor 5.86) brighter than the main sequence and the radius 2.42 times larger, i.e., Gaia21bcv has a radius of 1.7 R_⊙.The SED from PS1, 2MASS and WISE data points can be fitted well with a blackbody of T=4500K and extinction of A_V = 3.2 mag and the <cit.>.extinction law. Integration of this blackbody fit over the wavelength range gives a luminosity of 4.3 L_⊙.Both arguments clearly indicate that Gaia21bcv is overluminous compared to the ZAMS and therefore apre-main-sequence star.The long-slit Keck HIRES spectrum in Fig. 4 shows that Hα and [N2] emission that permeates the region near Gaia21bcv (blue spectrum), but net emission at the position of the star in the sky-subtracted spectrum is not detected (red spectrum). This is consistent with the finding by <cit.> in their objective prism search for Hα emission stars in CMa OB1/R1 region that Gaia21bcv was not included in their list of Hα emitters. The extended Hα emission away from the star has two components: A narrow telluric component and the broader component originating in the nebulosity near Gaia21bcv. This emission is redshifted by 38 km s^-1 relative to the telluric emission. A Gaussian fit to its profile gives σ = 11 km s^-1.The radial velocity of the star Gaia21bcv of 31 km s^-1 is therefore well within the velocity dispersion of the Hα emission line. This is consistent with the star being physically located within this HII emission region. Based on this limited information, we conclude that Gaia21bcv has the characteristics of a weak-line Class III pre-main-sequence star with a K4-5 photosphere and variable line emission.§.§.§ Lithium AbsorptionThe Keck HIRES spectrum of Gaia21bcv shows strong absorption in the Li 6708 Å line, stronger than the Ca1 line at 6717 Å. The equivalent width of the Li 6708 Å  absorption line is 334 ± 40 mÅ. In stars with strongly convective atmospheres, i.e., spectral classes K and M, Lithium is destroyed over a timescale of order 10^8 years. As specific examples,in the K dwarfs in the ≈ 600 Myr old Hyades cluster <cit.>, did not detect any Li absorption. The younger, ≈ 70 Myr old <cit.>, Pleiades cluster shows Li absorption in K-type stars <cit.>,with equivalent widths between 30 and 300 mÅ. In the young, about 5 10^6 year old, cluster NGC 2264 <cit.> found strong Li absorption in K-type stars, with some dependence of the line equivalent width on the rotation period. In this young cluster, the Li equivalent widths lie between 500 and 550 mÅ. The line observed in Gaia21bcv lies just above the distribution of equivalent width for K-type stars in the Pleiades, and is about half of the typical value in NGC 2264. This suggests an age for Gaia21bcv between those two cases, and probably closer to that of the Pleiades. As an estimate, we adopt the rounded average age of these two clusters, 40 ± 10 Myr for Gaia21bcv.The width of the absorption lines of the Na1 doublets, the strongest absorption lines in the spectrum, were measured.Under the simplistic assumption that vsini ≤ FWHM/2., we get vsini ≤ 13 km s^-1. 
Compared to the rotational velocities measured by <cit.> in the Pleiades, Gaia21bcv is thus a relatively slow rotator, which correlates with lower abundance of Lithium, asfirst pointed out by <cit.>.§.§.§ Emission LinesThe SNIFS spectra can be divided into two groups: those taken up to MJD 59638 (2022 Feb. 28) without [O1] emission and those taken in the 10-day period after MJD 59653 (2022 March 15) up to MJD 59663 (2022 March 25)that show prominent emission of the [O1] doublet. The onset of this emission, happening about 100 days after the end of the last dipping event is, apparently, not connected to the end of these dipping events, but may be caused by independent variations in weak accretion that happens in the innermost regions of a circumstellar disk.We note that the comparison of the averaged SNIFS spectra with and without [O1] also show varying Hα absorption. We conclude that the Hα absorption of the stellar photosphere is mostly filled in by time-variable chromospheric emission.On the other hand, the lack of strong and numerous emission lines, e.g. Hα, Ca-II infrared triplet, Na I that are often observed in classical T Tauri stars rules out strong accretion in this object, and any explanation that the dips were caused by absorption from accretion funnels<cit.>,which have been used to explain dipping events in T Tauri stars <cit.>. §.§ Wavelength Dependence of AbsorptionThe monitoring campaign initiated in the late 2021 visibility period of Gaia21bcv was intended to obtain multi-wavelength photometry of the light curve. As it turned out, only the first ≈ 40 days, from MJD 59480 to 59520, recorded the end of the last dipping event in this recent episode. We work under the assumption that the dipping episode is caused by absorption by dust, obscuring the star Gaia21bcv. Therefore, the multi-wavelength light curves gives some information about the wavelength dependence of the dust absorption. In each of the filters, we computed the slope of alinear regression approximation of the light curve (in magnitudes) in the 40 day time interval. This slope is proportional to the extinction, and Fig. 5 shows the slope value compared to the extinction law <cit.>, normalized to match the measured value in the UKIRT K-band. The wavelength dependence of the light curve slope at the end of the dipping episode closely matches the wavelength dependence of interstellar extinction on which the<cit.> law is based.This indicates that the dust in the obscuring material contains particles smaller than the wavelength of the observations and similar in size distribution to the interstellar medium. §.§ Infrared (WISE) light curveThe NEOWISE data confirm that Gaia21bcv was of constant brightness prior to MJD 58600.Four NEOWISE data points were obtained during the dipping episode, during which the W1-W2 color was positive (i.e. 
redder) in contrast to the constant bright phase when the W1-W2 color was neutral.The two data points near the middle of the dipping episodeare fainter than the quiescent brightness, and the other two are brighter.The brightest W1 and W2 magnitudes were observed within days of the deepest observed optical dip.The data points in the stable bright phase, i.e., MJD ≤ 58406 and MJD ≥ 59659, have average W1 = 13.135 ± 0.031 (SD) and W2 = 13.074 ± 0.050 (SD), and thus W1-W2 = 0.061 ± 0.016 (standard error of the mean).The A_V=3.2 mag interstellar extinction of Gai21bcv gives an E(W1-W2) = -0.042, using the extinction law by <cit.>.In the list of <cit.> stars in the age range 5-30 Myr of K4 spectral type have neutral W1-W2=0.00 mag color. The observed W1-W2 = 0.06 ± 0.02 in the stable bright phase is therefore consistent with a pre-main-sequence K4 star with A_V = 3.2.During the dipping phase, which we interpret at the result of occultation by some form of dust cloud we have four NEOWISE data points (Table 1 and Fig. 1). The two data points at the beginning and end of the occultations are below the stable brightness, while the two data points in the middle of the occultation phase are brighter than the unobscured brightness. During this phase, the W1-W2 color is redder at all the four epoch recorded than during the unobscured phase.§ DISCUSSION For most of the time that we have observations of, Gaia21bcv is of near constant brightness within the errors of our photometry. The light curve in Fig. 1 shows a distinct episode of several deep and sharp minima. The beginning and the end of this episode are equally well defined, and we do not see evidence for any gradual clearing of the obscuration.We base this discussion on the assumption that the star is of constant observed brightness most of the time and that the episode of dipping events events represents a rare occurrence. Based on the wavelength dependence of the absorption (Fig. 1 bottom panel and Fig. 5) during minima, we conclude the the dipping events are due to occultations of the star by dust that contains at least some small particles. We will discuss the nature, spatial structure, and distance of that dust in the following discussion. §.§ Is Gaia21bcv a RCB Star?The light curve of Gaia21bcv has similarities to that of some R Coronae Borealis (RCB) variables,reviewed by <cit.>.In those old, carbon rich AGB supergiants, episodic formation of carbon dust clouds in the variable wind emerging from the star lead to episodic, irregular minima lasting months to years that bear a superficial resemblance to those observed in Gaia21bcv.For a direct comparison, the prototypical R CrB at a distance of 696 pc and neglecting extinction, has M_K_S = -4.65 mag at least, and is therefore clearly in a different luminosity class from Gaia21bcv.Most RCB stars have spectral types of F and G <cit.>, and their optical spectra show the molecular C_2 Swan bands in absorption, characterizing these stars as extremely overabundant in carbon.The SNIFS spectra of Gaia21bcv show no indication of C_2 absorption and are not consistent with a RCB star. While Gaia21bcv is above the main sequence absolute magnitude, it is far below the luminosity and absolute V magnitude of R CrB. 
Its relatively low luminosity, C_2-free spectrum,and its apparent association with star formation in its vicinity are the strongest arguments that Gaia21bcv is, in fact, not a RCB variable.It should be noted that the clear detection of Li1 absorption at nominally 6708 Ådoes not strengthen the case against being a RCB star, since Li can be produced in the He flash forming the RCB star <cit.>. However, since the RCB explanation is excluded based on the absolute magnitude and lack of C_2 absorption, the Li absorption provides additional evidence that Gaia21bcv is indeeda young object, independent of the circumstantial evidence from the Hα emission, dark clouds and IRAS sources in its vicinity.Main sequence stars with spectral type K and M show very little Lithium absorption as was already pointed out by <cit.> due to the deep convection in their atmospheres during their contraction phase and also in their early main-sequence phase that leads to rapid destruction of Li. The detection of Li in Gaia21bcv with its late spectral type is therefore another strong indication of a very young star. §.§ Occultation by an Orbiting object Many young stars show the ”dipping” behavior, i.e., multiple, repeated short dips in the brightness.The most common dipper objects are very young, still accreting classical T Tauri stars, of which up to 25% show dipper-like variability. For these accreting stars, <cit.> has shown a correlation between dipping depth and infrared excess, i.e., the more warm dust is present in the immediate vicinity of the star, the deeper the dips in brightness are.However, in Gaia21bcv, we do not observe substantial excess emission at 4.6 μmin the bright phaseSED and the bright phase W1-W2 color is,within the errors, as expected for a K4-5 pre-main-sequence star with A_V = 3.2 mag foreground interstellar extinction. The deep occultation events and, at the same time, little infrared excess at near to mid-infrared wavelengths support what is already strongly suggested by the episodic nature of the occultations and long intervals between these episodes: The obscuring dust is concentrated on a small section of the orbit, the overall mass of dust around the star is small, and the distance of the dust from the star is so large that, in combination of these factors, infrared excess is not observed at short and mid-infrared wavelengths.Such a dust cloud at substantial distance from the star could have been produced by a catastrophic collision of two small planetesimals or asteroids. Such events have indeed been observed by an increase in infrared emission, which is approximately isotropic and can be observed from any direction, making the discovery of such spikes in infrared emission likely. Results on two objects, ID8 and P1121 have been summarized by <cit.>. In both objects, orbital periods, collisional cascade build-up time of the dust cloud and cloud dispersal times are of the order of years.In at least one case, a recently produced dust cloud has been directly imaged: Formalhaut b, initially thought to be an exoplanet, was found by <cit.> to be an expanding, very low mass cloud of debris after what appears to have been an asteroid collision. Based on measurements of the cloud expansion, they estimate that the cloud had formed only a few years prior to its discovery and is dissipating on timescales of a few decades. 
The dissipation time depends, of course, on the initial velocity of the collision debris.By comparison, the occulting object around Gaia21bcv is larger and more opaque, but without a massive object, would probably still dissipate on timescales of decades or centuries. While we cannot categorically exclude the possibility of a short-lived dust cloud,it would be a very unlikely event to find such a short-lived object at large distances from its host star in just the right orientation towards the observer to appear transiting in front of the star.Frequent, sporadic, short, and comparatively shallow dips in the light curve ofthe main sequence star KIC 8462852 were discovered by <cit.>. The best explanation for these short dipping events are swarms of comets briefly obscuring the star. However, the much longer and deeper brightness minima of Gaia21bcvare much more substantial than what could be explained with anything close to comets in our own solar system.In addition, the occulting object in Gaia21bcv appears to be well defined, with no indications of a gradual temporal clearing or smooth profile of the dipping events in the light curve. The light curve does, instead, suggest a well-defined outer edge of a disk or a ring system.We therefore favor a model, similar to ϵ Aur, of a long-lasting disk or ring system where the containment of the dust or debris cloud along its orbital path must be aided by the presence of a companion body of dynamically significant mass. §.§ Absorption and ScatteringIn the case of Gaia21bcv, the only available data in the mid-infrared are from the WISE/NEOWISE <cit.> mission. The WISE W3 band shows a faint, 4.7σ detection. The WISE W4 image shows a 2-σ source that is extended and includes the position of Gaia21bcv.Based on the same WISE data <cit.> have classified Gaia21bcv as a Class III object, i.e., a pre-main-sequence star without significant infrared excess.While the W1-W2 colors are nearly neutral (zero) in the bright phase, during the dipping episode, the color was redder (W2 brighter). There are two possible explanations, not necessarily mutually exclusive,for this change in mid-IR colors:All dust size distributions containing particles much smaller than the wavelength observed tend to absorb stronger at shorter wavelengths, explaining a reddening during the occultations. We have measured the color dependence of the dust absorption in the last few weeks of the occultation phase, and confirmed a wavelength dependence similar to that of the interstellar medium, specifically the extinction law by <cit.>.However, somewhat surprisingly, two of the NEOWISE infrared data points during the occultation phase are above the unobscured brightness, indicating the addition of mid-infrared flux near the middle of this phase.Any additional thermal emission from the obscuring cloud would be radiated isotropically and would be observable in all phases of the cloud's orbit. We do not observe any fluctuations of the W1 or W2 flux in the unobscured bright phase and therefore, increases in thermal emission must be very rare. The sudden addition of thermal emission just in the middle of the occultation phase would be purely coincidental and very unlikely. We therefore suggest that the infrared flux during the occultation is the superposition of two opposite effects: Enhanced flux due to forward scattering in the dust cloud, and, of course, absorption by this same dust. 
At the optical wavelengths where most of data were obtained, absorption dominates and no data point during that phase exceeds the unobscured flux. At the WISE W1 and W2 wavelengths, absorption is present and leads to a reddening of the observed color in all four data points, and the minima at two epochs, but forward scattering directs so much additional flux towards the observer that the center two data points actually exceed the unobstructed brightness level.Strong forward scattering has been observed in the more tenuous rings of the Saturn ring system and discussed by <cit.> as a possible analog to the dust in debris disks around other stars. Even earlier in the evolution of dust particles, strong scattering at infrared wavelengths is evident in the core shine phenomenon observed in isolated small molecular clouds illuminated by the interstellar radiation field <cit.>. A detailed analysis or modeling of the dust properties in the Gaia21bcv occulting disk is beyond the scope of this paper, but based on the properties of both younger and older dust mixtures mentioned above, strong forward scattering in the occulting disk is a plausible scenario.§.§ A Transiting Ring System The model of an orbiting dust cloud, held together by some massive body, is conceptually similar to the now well established model for the periodic deep minima of ϵ Aurigae, summarized by<cit.>.In this model of ϵ Aurigae, a dense dust cloud orbiting a companion star occults the primary star periodically,but that companion is not itself visible at optical wavelengths. In contrast to ϵ Aurigae, the occultation event in Gaia21bcv is not simply a smooth minimum,but is a series of sharp minima between brighter periods that in some cases return almost to the unocculted brightness.This indicates more structure in the orbiting dust cloud than is observed in ϵ Aurigae. For an eclipse by a disk with sub-structure, for example a ring system, the modulation by the orbital motion in front of the star is dominant over temporal variations in the cloud for plausible combinations of orbital period, size of the dust cloud, and rotation of the dust cloud. <cit.> specifically apply this model of a highly structured disk around an orbiting companion object, possibly a ring system, to the case of SWASP J140747.93-394542.6, where multiple deep minima spread over ≈ 54 days were observed.Their preliminary model was refined further by<cit.> and <cit.> who fit their light curve by an elaborate model of an eclipse by a multiple ring system arounda planetary companion.Similarly and even earlier, <cit.> have discussed the case of EE Cep, where multiple, long-duration minima have been observed, interpreted as eclipses by an optically thick disk around a companion object. We have no evidence from the high resolution spectrum of line splitting and therefore the presence of a stellar companion. Gaia21bcv is listed in the Gaia EDR3 with a RUWE of 1.06, indicating a solid detection with no evidence astrometric wobble that might indicate an orbiting companion. 
We can only speculate that the companion object stabilizing the dust cloudmay be an otherwise unobservable companionof stellar, sub-stellar, or planetary mass.We determined an estimate of the transverse velocity of the dust by analysis of the gradients in the light curve of the star.We followed the ”exoring” prescription in<cit.>, where we obtain anupper bound on the transverse velocity by assuming that each of the individual flux dipsis caused by an opaque occulter much larger than the diameter of the star that it is transiting.When the edge of the occulter is perpendicular to the direction of motion,this enables an upper estimate of the transverse velocity of the cloud,since an edge that is not perpendicular to the vector of motion willrequire a higher velocity to cover the entire disk of the star.The method is as follows: the light curve is converted from stellar magnitudes to flux,and the flux light curve is divided by the “out of eclipse” flux level to produce a normalized flux curve(see upper panel of Fig. 6).The light curve is visually inspected to identify turning points (indicated by gray vertical lines in the upper panel)which we associate with a new edge with different opacity beginning to transit the stellar disk.A straight line is then fit to the photometry between these turning points,and a gradient with an associated measurement error is determined - this is plotted in the middle panel of Fig. 6.Together with the diameter of the star, a lower bound on the transverse velocity of the material can be determined.Assuming a radius of 1.7 R_⊙ for the star, the transverse velocity is shown in the lower panel of Fig. 6.The analysis showed a robust lower speed of 2 km s^-1 for the occulter. This lower limit is partly due to the limited sampling of the light curve during the major minima, before the dedicated high-cadence monitoring was initiated. Multiplying the velocity by the duration of the eclipse gives a lower estimate for the diameter of the occulter.For an eclipse duration of 866 days and a velocity of 2 km s^-1 the estimated diameter of the eclipsing disk is 1.0 au. The 866 days duration includes a minor gradient before the first observed deep minimum and establishes a more symmetric light curve.The distribution of the gradients as a function of time show approximate time symmetry,with a notable dip in the measured gradients at the approximate midpoint of the eclipse.Using the analysis of light curve gradients as a function of time as described in <cit.> this is qualitatively consistent with a disk with azimuthally symmetric substructurethat is moderately inclined and that the projected semi-major axis of the ring systemis close to being parallel with respect to the path of the star behind the rings.Assuming, for an order of magnitude estimate, that Gaia21bcv has a mass of 1 M_⊙, an orbital velocity of 2 km s^-1 implies a radius of a circular orbit of 225 au, and an orbital period of 3375 years. Since the orbital velocity is a lower limit, these estimates for radius and orbital period are upper limits.Even with this caveat, repeat observation of a second transit are a very distant prospect. §.§ Is an Orbiting Disk Consistent with the Age ?The SED of Gaia21bcv outside of the occultation events is essentially a reddened photosphere, with only minimal indication of a possible infrared excess in the W3 filter, leading to the classification of Gaia21bcv as a Class III star whose protostellar disk has largely dispersed. 
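To make the order-of-magnitude occulter and orbit estimates above easy to reproduce, the short sketch below evaluates them from the stated inputs (a lower-limit transverse velocity of 2 km s^-1, an 866-day eclipse duration, and an assumed 1 M_⊙ primary). It is a back-of-the-envelope check only, not the gradient-fitting analysis itself, and it recovers values close to the ∼1.0 au, ∼225 au, and ∼3375 yr figures quoted in the text.

```python
# Back-of-the-envelope check of the occulter size and orbit estimates quoted in
# the text. Inputs: lower-limit transverse velocity, adopted eclipse duration,
# and an assumed 1 M_sun primary (order-of-magnitude only).
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m
YEAR = 3.156e7           # s
DAY = 86400.0            # s

v_trans = 2.0e3          # m/s, lower limit from the light-curve gradients
t_eclipse = 866 * DAY    # s, adopted eclipse duration
m_primary = 1.0 * M_SUN  # assumed primary mass

# Lower limit on the occulter diameter: distance travelled during the eclipse.
d_occulter = v_trans * t_eclipse
print(f"occulter diameter >= {d_occulter / AU:.1f} au")    # ~1.0 au

# Because v_trans is a lower limit, the circular-orbit radius and period
# derived from it are upper limits.
a_orbit = G * m_primary / v_trans**2
p_orbit = 2.0 * math.pi * a_orbit / v_trans
print(f"orbital radius   <= {a_orbit / AU:.0f} au")        # ~220 au
print(f"orbital period   <= {p_orbit / YEAR:.0f} yr")      # ~3300 yr
```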
While there is a large scatter in the disk clearing times for individual stars, reviewed by<cit.>, its Class III properties suggest an age of at least 10 Myr for Gaia21bcv. Together with the membership in an OB association and its pre-main-sequence luminosity, all this points to Gaia21bcv having an age in the range of 10 - 40 Myr.It is therefore plausible that at an age when the protoplanetary disk around the primary star Gaia21bcv has largely dissipated, a smaller (only 1 au) disk at a substantial distance from the primary star, orbiting a secondary, less massive component, has escaped disk clearing and may still persist.While it appears clear that a massive body is needed to keep that occulting disk together, this object appears to be invisible and we cannot constrain it mass. Anything from a low mass star, a future brown dwarf or a young giant planet would be possible. It may be possible to observe the thermal emission from the occulting disk to place constraints on its internal heating by the secondary object and thereby on the luminosity of that object.§ SUMMARY AND CONCLUSIONS Gaia21bcv has undergone an episode of repetitive, deep minima in its brightnessbetween2019 Aug. 18 (MJD 58713) and 2021 Nov. 2 (MJD 59520), after showing constant brightness before and after this dipping episode. The star is a young, probably still weakly accreting pre-main-sequence star and most likely a member of the CMa OB1/R1 association.The star is approximately of K4-5 spectral type with strong metal absorption lines, including Li absorption that indicates its youth. Hα was weakly detected in absorption and was variable. Variable [O1] emission was detected after the end of the dipping episode and extended [S2] was detected, both indicating shock-excited gas.The dipping episode can be understood in a model similar to that of ϵ Aurigae: Occultation by a large dust cloud orbiting the primary star. The dust cloud is probably surrounding a star or planet sufficiently massive to prevents is rapid dissipation. In contrast to ϵ Aurigae, the occultation minimum is not as stable but consists of multiple dipping events, suggesting a more clumpy distribution of the dusty material or a system of rings.We suggest that the occulting object may be a circumstellar or circumplanetary debris disk aroundan otherwise undetected companion object to Gaia21bcv.This work has made use of data from the ESA mission Gaia [<https://www.cosmos.esa.int/gaia>] and processed by the Gaia Data Processing and Analysis Consortium (DPAC, [<https://www.cosmos.esa.int/web/gaia/dpac/consortium>] and the Photometric Science Alerts Team. [<http://gsaweb.ast.cam.ac.uk/alerts>] Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work makes use of observations from the 1-m telescopes of the Las Cumbres Observatory global telescope network. E.G. acknowledges support from NSF Astronomy & Astrophysics Research Grant No. 2106927. The high-resolution spectroscopy presented herein was obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. 
This publication makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE),which is a joint project of the Jet Propulsion Laboratory/California Institute of Technology and the University of Arizona.NEOWISE is funded by the National Aeronautics and Space Administration.The light curve is partly based on observations obtained with theSamuel Oschin 48-inch Telescope at the Palomar Observatoryas part of the Zwicky Transient Facility project.ZTF is supported by the National Science Foundation under Grant No. AST-1440341and a collaboration including Caltech, IPAC, the Weizmann Institute for Science,the Oskar Klein Center at Stockholm University, the University of Maryland,the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee,and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. This work is based in part on near-infrared imaging data from the WFCAM at the UKIRT observatory operated by the University of Hawaii. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and IPAC/Caltech, funded by NASA and NSF. We thank A. M. Boesgaard for helpful discussions and W. Varricatt for help with the UKIRT observations. Gaia, WISE, Keck:I, UKIRT, UH:2.2m, LCO [Ansdell et al.(2016)]Ansdell.2016.ApJ.816.69.dippers Ansdell, M., Gaidos, E., Rappaport, S. A., et al. 2016, , 816, 69 [Bellm et al.(2019)]Bellm.2019.PASP.131.8002.ZTF Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, , 131, 018002 [Bessolaz et al.(2008)]Bessolaz.2008.AA.478.155.accretion.funnels Bessolaz, N., Zanni, C., Ferreira, J., et al. 2008, , 478, 155. doi:10.1051/0004-6361:20078328 [Bouvier et al.(2016)]Bouvier.2016.AA.590A.78.Li.NGC2264 Bouvier, J., Lanzafame, A. C., Venuti, L., et al. 2016, , 590, A78. doi:10.1051/0004-6361/201628336 [Boyajian et al.(2016)]Boyajian.2016.MNRAS.457.3988.dips Boyajian, T. S., LaCourse, D. M., Rappaport, S. A., et al. 2016, , 457, 3988. doi:10.1093/mnras/stw218 [Brown et al.(2013)]Brown.2013.PASP.125.1031.LCO Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, , 125, 1031. doi:10.1086/673168 [Butler et al.(1987)]Butler.1987.ApJ.319L.19.Pleiades.rotation Butler, R. P., Cohen, R. D., Duncan, D. K., et al. 1987, , 319, L19. [Casali et al.(2007)]Casali2007 Casali, M., Adamson, A., Alves de Oliveira, C. et al. 2007, , 467, 777 [Clayton(1996)]Clayton.1996.PASP.108.225.RCBreview Clayton, G. C. 1996, , 108, 225. doi:10.1086/133715 [Clayton et al.(2011)]Clayton.2011.ApJ.743.44.RCB.Li Clayton, G. C., Sugerman, B. E. K., Stanford, S. A., et al. 2011, , 743, 44. doi:10.1088/0004-637X/743/1/44 [Dobashi (2011)]Dobashi.2011.DarkClouds Dobashi, K. 2011, PASP, 63, 1 [Fernandes et al.(2019)]2019A A...628A..44F Fernandes, B., Montmerle, T., Santos-Silva, T., et al. 2019, , 628, A44. doi:10.1051/0004-6361/201935484 [Flewelling et al.(2020)]Flewelling.2020.ApJS.251.7.PS1database Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2020, , 251, 7 [Gaia Collaboration (2016)]Gaia-2016A A...595A...1G Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, , 595, A1[Gaia Collaboration (2022)]Gaia-2022-arXiv-DR3 Gaia Collaboration: Vallenari, A., Brown, A. G. A., Prusti, T. et al. 2022, arXiv:2208.00211 [Gaidos et al.(2019)]Gaidos.2019.MNRAS.488.4465.HD240779 Gaidos, E., Jacobs, T., LaCourse, D., et al. 
2019, , 488, 4465 [Gaidos et al.(2022)]Gaidos.2022.MNRAS.514.1386G Gaidos, E., Mann, A. W., Rojas-Ayala, B., et al. 2022, , 514, 1386. doi:10.1093/mnras/stac1433 [Gaspar & Rieke(2020)]Gaspar.2020.PNAS.117.9712.Formalhaut.b Gaspar, A. & Rieke, G. 2020, Proceedings of the National Academy of Science, 117, 9712. doi:10.1073/pnas.1912506117 [Gregorio-Hetem et al.(2021)]Gregorio-Hetem.2021.A A.654.150.CMaOB1.member Gregorio-Hetem, J., Lefloch, B., Hetem, A., et al. 2021, , 654, A150. doi:10.1051/0004-6361/202141535 [Hedman & Stark(2015)]Hedman.2015.Ap.811.67.saturn.rings Hedman, M. M. & Stark, C. C. 2015, , 811, 67. doi:10.1088/0004-637X/811/1/67 [Herbig(1962)]Herbig.1962.AdA A.1.47.TTauri Herbig, G. H. 1962, Advances in Astronomy and Astrophysics, 1, 47. doi:10.1016/B978-1-4831-9919-1.50006-6 [Herbig(1965)]Herbig.1965.ApJ.141.588.Lithium Herbig, G. H. 1965, , 141, 588. [Herbst et al.(1994)]Herbst.1994.AJ.108.1906.UXOR Herbst, W., Herbst, D. K., Grossman, E. J., et al. 1994, , 108, 1906. doi:10.1086/117204 [Hoard et al.(2010)]Hoard.2010.ApJ.714.549.EpsAur Hoard, D. W., Howell, S. B., & Stencel, R. E. 2010, , 714, 549 [Hodgkin et al.(2009)]Hodgkin.2009.MNRAS.394.675.WFCAM.phot.system Hodgkin, S. T., Irwin, M. J., Hewett, P. C., et al. 2009, , 394, 675. doi:10.1111/j.1365-2966.2008.14387.x [Jones et al.(1996)]Jones.1996.AJ.112.186.Li.Pleiades Jones, B. F., Shetrone, M., Fischer, D., et al. 1996, , 112, 186. doi:10.1086/117999 [Kearns & Herbst(1998)]Kearns.1998.AJ.116.261.KH15D Kearns, K. E. & Herbst, W. 1998, , 116, 261. doi:10.1086/300426 [Kenworthy & Mamajek(2015)]Kenworthy.2015.ApJ.800.126K.J1407B.rings Kenworthy, M. A. & Mamajek, E. E. 2015, , 800, 126. doi:10.1088/0004-637X/800/2/126 [Lantz et al.(2004)]Lantz.2004.SPIE.5249.146L Lantz, B., Aldering, G., Antilogus, P., et al. 2004, , 146 [Lawrence et al.(2007)]Lawrence.2007.MNRAS.379.1599.UKIDSS Lawrence, A. et al. 2007, , 379, 1599 [Mainzer et al.(2014)]Mainzer.2014.ApJ.792.30.NEOWISE Mainzer, A., Bauer, J., Cutri, R. M., et al. 2014, , 792, 30 [Mamajek et al.(2012)]Mamajek.2012.AJ.143.72.J1407 Mamajek, E. E., Quillen, A. C., Pecaut, M. J., et al. 2012, , 143, 72[Masci et al.(2019)]Masci.2019.PASP.131.8003.ZTFarchive Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, , 131, 018003 [McCully et al.(2018)]McCully.2018.SPIE.10707E.0KM McCully, C., Volgenau, N. H., Harbeck, D.-R., et al. 2018, , 10707, 107070K. doi:10.1117/12.2314340 [Mikolajewski & Graczyk(1999)]Mikolajewski.1999.MNRAS.303.521.EECep Mikolajewski, M. & Graczyk, D. 1999, , 303, 521. [Pecaut & Mamajek(2013)]Pecaut.2013.ApJS.208.9.Teff Pecaut, M. J. & Mamajek, E. E. 2013, , 208, 9[Pettersson & Reipurth(2019)]Pettersson.2019.A A.630.90.noHa Pettersson, B. & Reipurth, B. 2019, , 630, A90. doi:10.1051/0004-6361/201731578 [Pouilly et al.(2021)]Pouilly.2021.AA.656A.50.accretion.funnel Pouilly, K., Bouvier, J., Alecian, E., et al. 2021, , 656, A50. doi:10.1051/0004-6361/202140850 [Roggero et al.(2021)]Roggero.2021.AA.651.44.Taurus.Dips Roggero, N., Bouvier, J., Rebull, L. M., et al. 2021, , 651, A44. [Santos-Silva et al.(2021)]Santos-Silva.2021.MNRAS.508.1033.CMa.Gaia Santos-Silva, T., Perottoni, H. D., Almeida-Fernandes, F., et al. 2021, , 508, 1033.[Skrutskie et al.(2006)]Skrutskie.2006.AJ.131.1163 Skrutskie, M. F., Cutri, R. M., Stiening, et al. 2006, , 131, 1163 [Soderblom et al.(1995)]Soderblom.1995.AJ.110.729.Li.Hyades Soderblom, D. R., Jones, B. F., Stauffer, J. R., et al. 1995, , 110, 729. 
[Steinacker et al.(2015)]Steinacker.2015.AA.582.70.coreshine Steinacker, J., Andersen, M., Thi, W.-F., et al. 2015, , 582, A70. doi:10.1051/0004-6361/201425434 [Su et al.(2019)]Su.2019.AJ.157.202.debris.disk.variability Su, K. Y. L., Jackson, A. P., Gáspár, A., et al. 2019, , 157, 202. doi:10.3847/1538-3881/ab1260[Tucker et al.(2022)]2022PASP..134l4502T Tucker, M. A., Shappee, B. J., Huber, M. E., et al. 2022, , 134, 124502. doi:10.1088/1538-3873/aca719[van Werkhoven, Kenworthy, & Mamajek (2014)]vanWerkhoven.2014.MNRAS.4412845.EXORINGS van Werkhoven, T. I. M., Kenworthy, M. A., & Mamajek, E. E. 2014, MNRAS, 441, 2845[Valenti et al.(1998)]Valenti.1998.ApJ.498.851.TiO Valenti, J. A., Piskunov, N., & Johns-Krull, C. M. 1998, , 498, 851. doi:10.1086/305587 [Vogt et al.(1994)]Vogt.1994.SPIE.2198.362.HIRES Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, , 2198, 362. doi:10.1117/12.176725 [Wang & Chen(2019)]Wang.2019.ApJ.877.116.Extinction Wang, S. & Chen, X. 2019, , 877, 116 [Weiler(2018)]Weiler.2018.AA.617A.138W.Gaia.G Weiler, M. 2018, , 617, A138. doi:10.1051/0004-6361/201833462 [Williams & Cieza(2011)]Williams.2011.ARA A.49.67.disk.dispersion Williams, J. P. & Cieza, L. A. 2011, , 49, 67. doi:10.1146/annurev-astro-081710-102548[WISE Team(2020)]WISE.2020.archive WISE Team 2020, "NEOWISE 2-Band Post-Cryo Single Exposure (L1b) Source Table", doi:10.26131/IRSA124[Wright et al.(2010)]Wright.2010.AJ.140.1868.WISE Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K. et al., , 140, 1868
http://arxiv.org/abs/2312.16367v1
{ "authors": [ "Klaus W. Hodapp", "Eric Gaidos", "Matthew A. Kenworthy", "Michael Tucker", "Benjamin J. Shappee", "Anna V. Payne", "Aaron Do" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20231227005543", "title": "An Episode of Occultation Events in Gaia21bcv" }
High-precision mapping of H_2O megamaser emissions from active galaxies has revealed more than a dozen Keplerian H_2O maser disks that enable a ∼4% Hubble constant measurement and provide accurate black hole masses. The maser disks that allow for these important astrophysical applications usually display clear inner and outer edges at sub-parsec scales. It is still unclear what causes these boundaries and how their radii are determined. To understand whether the physical conditions favorable for population inversion of H_2O molecules can determine the inner and outer radii of a maser disk, we examine the distributions of gas density and X-ray heating rate in a warped molecular disk described by a power-law surface density profile. With a suitable choice of the disk mass, we find that the outer radius R_ out of the maser disk predicted from our model can match the observed value, with R_ out mainly determined by the maximum heating rate or the minimum density for efficient maser action, depending on the combination of the Eddington ratio, black hole mass, and disk mass. Our analysis also suggests that the inner edge of a maser disk often lies close to the dust sublimation radius, suggesting that the physical conditions of the dust may play a role in defining the inner boundary of the disk. Finally, our model predicts that H_2O gigamaser disks could possibly exist at the centers of high-z quasars, with disk sizes of ≳10-30 pc.
§ INTRODUCTION
In the nuclear regions of external galaxies hosting active galactic nuclei (AGNs), there exist powerful cosmic masers from the 6_16-5_23 transition of ortho-H_2O molecules at 22.23508 GHz, arising either from a sub-parsec circumnuclear disk <cit.> or from the molecular gas excited by the nuclear wind or jet <cit.>. These 22 GHz H_2O megamasers often display total maser luminosities ≳10^6 times greater than typical Galactic maser sources, and their extremely high surface brightness allows for maser mapping at sub-milliarcsecond resolution by Very Long Baseline Interferometry (VLBI), providing a unique probe of gas distribution and kinematics at sub-parsec scales at the center of a distant galaxy <cit.>. In the prototypical H_2O maser galaxy NGC 4258 <cit.>, the masing gas resides in a ∼0.13-0.26 pc thin disk viewed almost edge-on and follows Keplerian rotation. These disk properties support black hole (BH) mass measurements to percent-level accuracy <cit.> and provide an accurate geometric distance measurement independent of distance ladders and standard candles <cit.>. To identify disk megamasers like NGC 4258 for measuring the Hubble constant H_0, the Megamaser Cosmology Project (MCP) has made extensive surveys of H_2O megamaser emissions from >4800 AGNs <cit.>, resulting in the detection of ≳30 disk maser candidates <cit.>. The follow-up imaging of these candidates has increased the number of H_2O maser disks with high-precision VLBI maps by a factor of ≳4 over the past decade. This progress not only enables a 4% H_0 measurement <cit.>, but also yields accurate BH masses (M_ BH) for exploring the black hole-host galaxy coevolution <cit.>. In addition, it provides a new tool for constraining the spins of supermassive BHs <cit.>.
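Because the BH-mass and distance applications above all rest on the Keplerian rotation of the masing gas, a compact numerical illustration may help. The sketch below evaluates the enclosed-mass relation M_BH = v_rot^2 R / G; the adopted rotation velocity and radius are round, NGC 4258-like values chosen for illustration rather than measurements from this work. Inverting the same relation (R = G M_BH / v_rot^2) is what allows disk radii to be estimated from maser velocities alone, as in the indirect radius estimates discussed below.

```python
# Illustration of the Keplerian relation underlying maser-disk BH mass
# measurements: M_BH = v_rot^2 * R / G. The adopted v_rot and R are round,
# NGC 4258-like values used for illustration only.
G = 6.674e-11      # m^3 kg^-1 s^-2
PC = 3.086e16      # m
M_SUN = 1.989e30   # kg

def keplerian_mass_msun(v_rot_kms: float, radius_pc: float) -> float:
    """Enclosed mass (in M_sun) implied by circular Keplerian rotation."""
    v = v_rot_kms * 1.0e3
    r = radius_pc * PC
    return v**2 * r / G / M_SUN

# A maser feature rotating at ~900 km/s near R ~ 0.2 pc (of order the
# NGC 4258 disk radii quoted above) implies M_BH of roughly 4e7 M_sun.
print(f"M_BH ~ {keplerian_mass_msun(900.0, 0.2):.1e} M_sun")
```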
While the progress in the detection and mapping of H_2O maser disks has been significant in the past decade, the physics of H_2O maser disks receives little attention and is not well-explored. In particular, it is not yet clear what causes the warps seen in every Keplerian maser disk <cit.>. In addition, it is still uncertain what determines the physical size of a maser disk and confines H_2O megamaser emissions within the narrow range of ∼0.1-1 pc in a circumnuclear disk <cit.>. Addressing questions like these is valuable because it would not only improve our understanding of the maser disks themselves, but also allow one to explore unrecognized systematics when using the H_2O megamaser technique for H_0 and BH mass measurements. Moreover, a deeper knowledge of maser disk physics can also allow one to investigate whether hyper-luminous H_2O “gigamaser" disks could possibly exist in the high redshift universe <cit.>. Investigating this possibility is important because such gigamasers, if they exist, would enable the application of the maser technique to high-z galaxies for measuring dynamical BH masses and accurate distances, opening a new avenue to test cosmological models <cit.> that could solve the Hubble tension problem <cit.>.
In this paper, we aim to explore whether the physical conditions favorable for population inversion of H_2O molecules could play the primary role in determining the inner and outer radii of an H_2O maser disk. In Section 2, we discuss the measurements of the maser disk radii based on VLBI maps compiled from literature. In Section 3, we make predictions of the inner and outer radii of a maser disk based on the examination of the physical conditions in a circumnuclear disk and compare the predictions with the observed values. The discussion of the high-z H_2O gigamasers is presented in Section 4, and our results are summarized in Section 5.
§ THE PHYSICAL SIZE OF AN H_2O MASER DISK
§.§ The Radius Measurement
The H_2O megamasers in disk configuration (maser disks hereafter) often display three distinct spatial and velocity components, including the systemic, redshifted, and blueshifted maser features <cit.>. In nearly all cases, the VLBI maps of such maser disks usually show clear inner and outer edges at the sub-parsec scales, defined by the positions of the highest and lowest velocity components of either blueshifted or redshifted masers. When the position of the dynamical center of the disk is available either from disk modeling <cit.> or rotation curve fitting <cit.>, the inner (R_ in) and outer (R_ out) radii of a maser disk can be measured directly from its VLBI map. Alternatively, if one assumes that maser emissions purely originate from a disk without any components associated with jet or outflow, one could also infer the disk radii indirectly from the single-dish spectrum of a maser disk based on the velocities of the high-velocity maser features assuming Keplerian rotation[The high-velocity maser features refer to the redshifted and blueshifted maser components of a maser disk <cit.> ] <cit.>.
To identify the physical mechanism that determines the size of an H_2O maser disk, we first compile in Table <ref> all H_2O megamasers from literature that display geometrically thin maser disks in their VLBI maps <cit.>.
The majority of our sources are drawn from Table 5 in <cit.>, which provides reliable estimates of disk radii for all “clean"[A “clean" maser disk indicates that all maser emissions come from the disk, with no maser components associated with jet or outflow.] H_2O maser disks that follow Keplerian rotation. We exclude NGC 4388 listed in <cit.> from our analysis because the lack of systemic maser features in this system plus the small number of high-velocity maser components (∼4) makes it difficult to ascertain whether the maser emissions purely originate from a thin rotating disk. Since the physical mechanism that determines the disk size may not be dependent on the Keplerian nature of a disk, we also include the thin, sub-Keplerian maser disk in NGC 1068 <cit.> in our sample for comparison. In total, our sample consists of 16 maser disks. The BH masses and the inner/outer disk radii of these systems are shown in Columns (3) through (5) in Table <ref>.
In the left panel of Figure <ref>, we plot the BH masses of our maser sample against disk radii in units of pc. As indicated by the inner and outer edges of each black horizontal line, representing R_ in and R_ out for a maser disk, respectively, the maser emissions are mostly confined within the radial range of ∼ 0.1-1 pc for the majority of the sources. The red and blue error bars in the plot show the uncertainties of R_ in and R_ out, respectively. Apart from the obvious uncertainties in the galaxy distance and maser position measurements, the error bars also include the discrepancy between the disk radii inferred by the direct and indirect methods, which could indicate the systematic uncertainty arising from the limits in the spectral coverage or sensitivity of the VLBI observations. For example, the high sensitivity single-dish spectrum of NGC 1194 <cit.> suggests that the (weak) redshifted maser feature having the highest velocity appears to lie well outside the spectral coverage of the VLBI observations <cit.>, suggesting that the true inner radius of the disk may be smaller than the value derived from the VLBI map. In addition, the spectra of some sources (e.g. NGC 3393) sometimes reveal faint, isolated maser emissions lying between the systemic and high-velocity maser complexes in the spectra. Without sensitive VLBI mapping, it is difficult to discern whether these emissions are the systemic masers, the outflow components, or the high-velocity maser features residing close to the outer edge of the maser disk, leading to uncertainties in R_ out. For more detailed discussion on the error analysis, we refer the readers to <cit.>.
§.§ The Characteristic Inner Radius of an H_2O Maser Disk
As one can see in the left panel of Figure <ref>, the mean radius of a maser disk appears to increase with the BH mass. This trend was first reported by <cit.> based on the data of 8 maser disks, showing that the outer radii can be approximately described by R_ out = 0.3(M_ BH/10^7M_⊙) pc. With the inclusion of 6 more maser disks in their analysis, <cit.> obtained a slightly different scaling (R_ out∝ M_ BH^0.57± 0.16) and showed that both R_ in and R_ out are well correlated with the BH mass[The Spearman's rank correlation coefficients are 0.71 and 0.62 for R_ in and R_ out, respectively <cit.>], suggesting that M_ BH plays an important role in determining the inner and outer radii of a maser disk.
Because of the significant correlation between R_ out and M_ BH, we conjecture that the maser disks may reveal an interesting, characteristic scale if the disk size is expressed in units of the Schwarzschild radius R_ S=2GM_ BH/c^2, where G and c are the gravitational constant and the speed of light, respectively. In the right panel of Figure <ref>, we plot the inner and outer radii of the maser disks in units of 10^5 R_ S. As one can see in this plot, while the outer radii show more substantial variation, the majority of our maser disks have inner radii close to R_ in∼ 1× 10^5 R_ S (see the vertical dashed line). The only significant outlier is NGC 1068, whose inner radius is R_ in∼7.6× 10^5 R_ S. Based on the available proposals that provide tentative explanations for the maser disk size <cit.>, there is no clear reason why the inner radii of most maser disks have this characteristic scale of R_ in∼ 1× 10^5 R_ S. In addition, it is also puzzling why there exists an outlier such as NGC 1068 which shows a significantly larger inner radius than other maser disks. It is likely that the presence of the characteristic inner radius may be associated with the fine-tuning nature of the H_2O megamaser phenomenon, while the outlier suggests that some physical parameters in addition to M_ BH may become important for determining the maser disk size in certain circumstances. As we will show in Section <ref>, the existence of the characteristic inner radius and the outlier can be understood if the inner edge of a maser disk lies close to the dust sublimation radius of a local Seyfert galaxy, which varies depending on both the mass and the Eddington ratio of an accreting BH.
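To make the two radius conventions above concrete, the following minimal Python sketch (illustrative, not code from this paper) evaluates the indirect Keplerian radius estimate R = GM_BH/v^2 for a high-velocity maser feature and re-expresses the result in Schwarzschild units R_S = 2GM_BH/c^2. The black hole mass and rotation velocities used in the example are assumed placeholder values, not entries from Table <ref>.

```python
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10        # speed of light [cm s^-1]
MSUN = 1.989e33     # solar mass [g]
PC = 3.086e18       # parsec [cm]

def keplerian_radius_pc(m_bh_msun, v_rot_kms):
    """Radius [pc] at which gas orbits a BH of mass m_bh_msun at v_rot_kms."""
    return G * m_bh_msun * MSUN / (v_rot_kms * 1.0e5) ** 2 / PC

def in_schwarzschild_units(r_pc, m_bh_msun):
    """Express a radius [pc] in units of R_S = 2 G M_BH / c^2."""
    r_s = 2.0 * G * m_bh_msun * MSUN / C ** 2
    return r_pc * PC / r_s

if __name__ == "__main__":
    m_bh = 1.0e7                 # assumed BH mass [M_sun]
    for label, v in (("R_in", 900.0), ("R_out", 400.0)):  # assumed speeds [km/s]
        r = keplerian_radius_pc(m_bh, v)
        print(f"{label} ~ {r:.3f} pc = {in_schwarzschild_units(r, m_bh):.2e} R_S")
```

With these placeholder inputs the inferred radii come out at roughly 10^4-10^5 R_S, illustrating why the Schwarzschild-unit representation makes the characteristic inner scale visible.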
§ THE MECHANISMS THAT DETERMINE THE SIZE OF A MASER DISK
§.§ The Physical Conditions for Maser Pumping
In the interstellar medium, the 6_16-5_23 water maser transition occurs naturally through collisional pumping in a warm molecular cloud if the temperature (T_ H_2) and the number density (n_ H_2) of the gas fall within the favored ranges of 400 ≲ T_ H_2≲ 1500 K and 10^7≲ n_ H_2≲ 10^11 cm^-3, respectively <cit.>. The minimum gas temperature of T_ min∼400 K is required for sufficient collisional pumping given the 6_16 level lying at E/k = 643 K above the ground. Moreover, this temperature threshold is also essential to make a significant enhancement of the water abundance in the cloud <cit.>. Provided that the gas density is below the critical value (n_ crit≲ 10^11 cm^-3) for collisional de-excitation, a sufficiently high density (n_ H_2≳ 10^7 cm^-3) is also crucial to make maser pumping efficient.
In addition to the suitable physical conditions of the gas, the presence of cold dusts in the gas cloud is also important for producing the large maser luminosities <cit.> observed in water megamasers <cit.>. Given that the gas temperature and density fall in the preferred ranges for population inversion, <cit.> demonstrated that the maser emission can be significantly enhanced if there exist cold dust grains in the cloud with a dust temperature T_ dust∼50-100 K below the gas temperature. The presence of such cold dusts can absorb nonmasing far-infrared water lines trapped in the cloud, enabling a much larger extent of H_2O molecules to maintain population inversion without being quenched <cit.>.
To maintain a sufficient temperature for efficient maser pumping, it has been proposed that X-rays from the active nucleus could be the primary heating source for the masing gas <cit.>. Although spiral shock waves travelling through a circumnuclear disk <cit.> could also provide the energy for maser pumping, the predicted velocity shifts of the high-velocity maser features based on this model are shown to be inconsistent with the observed drifts seen in the analysis of high-sensitivity spectra of eleven H_2O maser disks <cit.>, making this model less favorable. As a consequence, we will focus in the following discussion on the scenario in which H_2O maser emissions arise in the X-ray dissociation region within a circumnuclear disk subject to X-ray irradiation.
§.§ The Role of X-ray Ionization and Heating
Considering a circumnuclear disk illuminated by the central X-ray source, it is expected that the disk can receive X-ray heating for maser excitation most efficiently if the disk is warped, allowing one side of the disk plane to be irradiated by X-rays directly. In the well-known picture of maser excitation in NGC 4258 <cit.>, it was suggested that the outer edge of the warped maser disk is determined by the critical radius beyond which X-ray ionization becomes strong enough to dissociate all molecules into atoms. By assuming the viscous gas disk to be in a steady state of accretion, NM95 shows that this critical radius can be expressed as R_ cr = 0.040L_41^-0.426(Ṁ_-5/α)^0.898μ^-0.383M_8^0.617 pc , where 10^41L_41 ergs s^-1 is the 2–10 keV X-ray luminosity of the central source, 10^-5Ṁ_-5/α M_⊙ yr^-1 is the mass accretion rate normalized by the conventional α(≲ 1) viscosity parameter, and 10^8M_8 M_⊙ is the BH mass of the maser system. μ in the above equation is the obliquity parameter defined as μ = cos η , where η is the angle at which the disk is illuminated obliquely with respect to the normal direction of the disk. To explain the presence of the inner edge in NGC 4258, NM95 observed that the maser disk appears to flatten out close to the inner radius <cit.>, suggesting that the obliquity parameter μ falls to zero and the disk is no longer directly illuminated by the X-ray source, making the gas too cold to mase. As a result, they speculated that the inner edge of the maser disk is determined by the nature of the warp.
While the NM95 model seems to provide a plausible explanation for the inner and outer edges of NGC 4258 <cit.>, it is not yet well explored whether this model can be applied to H_2O maser disks discovered in the past two decades, whose intrinsic AGN luminosities are 2-3 orders of magnitude higher than that of NGC 4258. Given the possibility that the H_2O maser disks may not follow steady-state accretion <cit.>, <cit.> proposed a simple alternative model in which the outer edge of a maser disk is determined by the radius beyond which T_ H_2≲ 400 K. Assuming the gas and dust are well-coupled in a maser disk, <cit.> uses the dust temperature T_ d in the optically-thin limit as a proxy of T_ H_2, finding that the gas temperature is a decreasing function of radius, with the outer radius described by R_ out∝ L_ bol^1/2, where L_ bol is the bolometric AGN luminosity. While the derived scaling between R_ out and L_ bol is broadly consistent with the observations, we find that the simple approximation of R_ out in <cit.> can be further improved by considering the effect of X-ray heating on the gas. It is well-known that collisions between gas and dust particles do not guarantee T_ d≈ T_ H_2 in an X-ray irradiated gas. The gas and dust can coexist at substantially different temperatures, with the difference varying depending upon the X-ray heating rate <cit.>.
In addition, in the region where the gas density is within the favored range for maser pumping, the obscuring column density is required to be large enough <cit.> to avoid molecular dissociation <cit.>. As shown in <cit.>, photoheating of dust grains in such a regime is unimportant due to the large optical depth. The dust temperature is mainly determined by collisional energy transfer between gas and dust, with T_ d≲200-300 K, suggesting that using T_ d in the optically-thin limit to estimate T_ H_2 may lead to non-negligible systematic errors. To obtain a more reliable estimate of R_ out for a maser disk based on the physical conditions of the interstellar medium, it would be beneficial if one could explore the gas temperature distribution directly with the X-ray heating rate.
§.§ Are disk radii determined by the critical ionization parameter?
To examine whether the outer edges of the H_2O maser disks in our sample lie at the molecular-to-atomic transition radius R_ cr as suggested by the NM95 model, we apply Equation <ref> to all sixteen maser disks listed in Table <ref> and compare the predictions with the observed values. When evaluating R_ cr for our sample, we first estimate the 2-10 keV X-ray luminosity L_ X^2-10=10^41L_41 ergs s^-1 by assuming L_ X^2-10=0.1L_ bol. For all sources except for ESO 558-G009 and CGCG 074-064, we inferred L_ bol from the reddening-corrected [OIII] luminosities L_ [OIII] of the maser galaxies <cit.>. For the two exceptions which do not have L_ [OIII] available, we estimate L_ bol based on the mid-infrared luminosities derived from SED-fitting <cit.>, with the bolometric correction given by <cit.>. Since NM95 assumes a steady-state accretion disk, suggesting that the accretion rate is constant with radius, we calculate the mass accretion rate with Ṁ=L_ bol/ϵ c^2, where ϵ is the accretion efficiency for a Kerr black hole <cit.>. To obtain the obliquity parameter μ, we performed 3-dimensional Bayesian modeling for all maser disks except for NGC 5495 and NGC 1068 using the MCP modeling code described in <cit.>, <cit.>, <cit.>, and <cit.>. These modelings[In seven systems listed in Table 1, the maser acceleration measurements required for our modeling are not available. For these cases, we performed modeling by assuming the high-velocity maser features lie within ≲15^∘ from the mid-line of the disk plane, a typical value expected for maser disks <cit.>. ] give the positions of the dynamical centers of our maser disks and provide the best-fit parameters that characterize the disk warps, enabling us to calculate the obliquity parameter with μ=n̂·r̂, where n̂ and r̂ are the unit vector of the disk normal and the unit radial vector for a high-velocity maser spot located at a radius r, respectively <cit.>. For NGC 5495 and NGC 1068, we simply make crude estimations of μ based on their maser maps <cit.> because the quality of the VLBI map for NGC 5495 is not good enough for a reliable 3-D modeling, whereas NGC 1068 does not follow Keplerian rotation as assumed in our code. In Columns (8), (9), & (10) in Table <ref>, we list the obliquity parameters for the inner (μ_ in) and outer (μ_ out) edges of each maser disk as well as the ratio ε_μ=μ_ in/μ_ out. Based on the values of μ_ in/μ_ out, we see no evidence of disk flattening toward the inner edges of the maser disks (i.e. μ_ in/μ_ out → 0) except for UGC 3789.
The ratio is of the order of unity for the majority of the H_2O maser disks, implying that the X-ray heating at the inner edge of the disk would not be significantly less than that at the outer edge. Therefore, it is difficult for the conjecture proposed by NM95 to account for the presence of the inner edges of most maser disks in our sample. In Figure <ref>, we compare the critical radius R_ cr with the observed R_ out for each maser disk, with R_ cr calculated based on the assumption of α=0.25 and ϵ=0.42 <cit.>. The dashed line shows the locations where R_ out=R_ cr. The error bar in R_ cr indicates the possible range of the critical radius given the preferred range of α∼ 0.1-0.4 suggested from observations <cit.>. This comparison shows that except for three sources including NGC 4258 (the red filled square in Figure <ref>), the molecular-to-atomic transition radius is substantially greater than the observed outer radius for the majority of our H_2O maser disks. If one adopts a smaller accretion efficiency (i.e. ϵ<0.42), R_ cr would become even greater than R_ out, suggesting that the NM95 model cannot explain well the outer radii of the majority of the maser disks. It is likely that the discrepancy between R_ cr and R_ out originates from the deviation from the steady-state assumption in the NM95 model. Alternatively, it is also possible that the outer radius of the maser disk is determined by some other mechanisms, such as the minimum temperature or density requirements for maser excitation. Both possibilities will be tested with our model presented in the following section.
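As a sanity check on the comparison above, the sketch below (illustrative Python, not the analysis code used for Figure <ref>) evaluates the NM95 critical radius with the same conventions adopted in the text, L_X(2-10 keV) = 0.1 L_bol and Ṁ = L_bol/ϵ c^2; the luminosity, obliquity parameter, and BH mass in the example are assumed values.

```python
C = 2.998e10        # speed of light [cm s^-1]
MSUN = 1.989e33     # solar mass [g]
YEAR = 3.156e7      # year [s]

def critical_radius_pc(l_bol, m_bh_msun, mu, alpha=0.25, eps=0.42):
    """NM95 molecular-to-atomic transition radius [pc].

    l_bol     : bolometric luminosity [erg s^-1]
    m_bh_msun : black hole mass [M_sun]
    mu        : obliquity parameter (cosine of the illumination angle)
    """
    l41 = 0.1 * l_bol / 1.0e41                  # L_X(2-10 keV) in units of 1e41 erg/s
    mdot = l_bol / (eps * C ** 2)               # accretion rate [g s^-1]
    mdot_m5 = mdot * YEAR / MSUN / 1.0e-5       # in units of 1e-5 M_sun/yr
    m8 = m_bh_msun / 1.0e8
    return (0.040 * l41 ** -0.426 * (mdot_m5 / alpha) ** 0.898
            * mu ** -0.383 * m8 ** 0.617)

if __name__ == "__main__":
    # assumed example: L_bol = 1e44 erg/s, M_BH = 1e7 M_sun, mu = 0.3
    print(f"R_cr ~ {critical_radius_pc(1.0e44, 1.0e7, 0.3):.2f} pc")
```

With these assumed inputs, R_cr comes out well above the ∼0.5 pc outer radii typical of the sample, consistent with the over-prediction noted above.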
§.§ The Outer Radius Determined by the Physical Conditions of the Gas
Considering a dusty warm medium where the molecular gas is mainly heated by X-rays, instead of using T_ d to approximate T_ H_2, we use the X-ray heating rate to probe the gas temperature distribution and examine whether the outer radius of a maser disk is determined by the minimum temperature T_ min∼ 400 K or the maximum X-ray heating rate that breaks molecules into atoms. It is well known that in a medium subject to X-ray heating, one can obtain the gas temperature by balancing the rates of heating and cooling <cit.>. If one further considers the region within which water maser emissions can occur, assuming a typical value of the water abundance (e.g. x_ H_2O = n( H_2O)/n( H_2)∼ 10^-4), it has been shown by <cit.> that one can use the equilibrium X-ray heating rate per hydrogen nucleus H_ X/n_ H to replace the gas temperature as the key variable that determines the level populations given the dust temperature T_ d and gas density n_ H_2 <cit.>. In their work, it is demonstrated that the maximum heating rate that allows for efficient maser action is (H_ X/n_ H)_ max∼ 1.2× 10^-28 ergs cm^3 s^-1, beyond which the gas will be subject to molecular dissociation. In addition, the minimum heating rate (H_ X/n_ H)_ min that ensures T_ H_ 2≳ 400 K ranges from ∼5.0× 10^-31 ergs cm^3 s^-1 to ∼6.3× 10^-30 ergs cm^3 s^-1, depending on T_ d. If the heating rate of a gas is between (H_ X/n_ H)_ min and (H_ X/n_ H)_ max, it is expected that the gas temperature would fall within T_ H_ 2∼ 400-1500 K, the favored range for maser excitation. In addition, T_ H_2 would be an increasing function of H_ X/n_ H if the dust temperature and water abundance are roughly constant in the region. In the following analysis, we will assume T_ d∼ 300 K and x_ H_2O∼ 10^-4 in the masing medium, suggesting (H_ X/n_ H)_ min∼ 3.2× 10^-30 ergs cm^3 s^-1.
For the purpose of determining the outer radius of a maser disk, we will simply identify the region within a disk where the X-ray heating rate and gas density fall within the allowed ranges that permit efficient maser action. We will not evaluate the exact gas temperature in the masing region because it will not affect our conclusion. Following <cit.>, we compute the X-ray heating rate per hydrogen nucleus for a gas cloud in the X-ray dissociation region as H_ X/n_ H=3.8× 10^-25ξ_ eff   ergs s^-1, where n_ H is the density of the hydrogen nuclei and ξ_ eff is the effective ionization parameter defined as ξ_ eff=1.26× 10^-4F_ X/(n_5N_22^0.9) . Here, F_ X is the unattenuated 1-100 keV X-ray flux received by a gas clump at a radius r from the central black hole, and n_5=n_ H/10^5 cm^-3 and N_22=N_ H/10^22 cm^-2 are the normalized total density of hydrogen nuclei n_ H and the normalized X-ray attenuating column N_ H for the gas clump, respectively. To explore the region in the disk where the physical conditions of the gas are suitable for maser excitation, we consider the scenario in which an initially flat, cold molecular disk is warped by a certain mechanism such as Resonant Relaxation <cit.>, which can warp a sub-parsec scale disk efficiently on a timescale of ∼10^7 years. After the disk gets warped, one expects that one side of the warped disk will be subject to direct X-ray illumination which would deposit thermal energy in the disk. To evaluate the density distribution, we assume that the gas motion is dominated by turbulence <cit.>, with the turbulence velocity c_ g∼ 2 km s^-1, the typical width of the water maser lines seen in Keplerian maser disks <cit.>. We describe the gas distribution in cylindrical polar coordinates r⃗=(r,ϕ,z), with the BH sitting at the origin and the mid-plane of the disk lying at z=0. Because of hydrostatic equilibrium, the gas density ρ(r,z) at a radius r from the central black hole and an elevation z above the mid-plane of the geometrically-thin disk can be expressed as ρ(r,z)=ρ_ mid(r) exp[-z^2/2H^2] , where ρ_ mid(r) is the gas density at the mid-plane and H(r) is the scale height of the disk, given by H(r)=c_ g r^3/2/(GM_ BH)^1/2 <cit.>. The mid-plane density can be calculated with ρ_ mid(r)=Σ(r)/[(2π)^1/2H], where Σ(r) is the surface density of the disk. Here, we assume that the surface density takes the form of Σ(r)=Σ_ out(r/R_ out)^s , where the power-law index s is allowed to vary in the interval of -2<s<0 and Σ_ out is the surface density at the outer radius of the maser disk <cit.>. If M_ D represents the disk mass within R_ out, one finds that Σ_ out=(s+2)M_ D/(2π R_ out^2). If the disk under consideration is in steady-state accretion, one expects s=-1.5 and M_ D can be expressed as M_ D= 4(GM_ BH)^1/2Ṁ R_ out^1/2/(3α c_ g^2) , where Ṁ is the mass accretion rate, calculated with Ṁ=L_ bol/ϵ c^2.
Given the above expressions, one can evaluate the number densities of the molecular gas in the disk with n_ H_2(r,z)=ρ(r,z)/ζ_ H_2 m_ H , where ζ_ H_2=2.36 is the mean molecular weight per hydrogen molecule <cit.> and m_ H is the mass of the hydrogen atom. Assuming the parcel of gas directly irradiated by the X-ray photons from the z>0 side of the disk is located at the position r⃗=(r,ϕ,z), we calculate the X-ray absorption column density in between the gas and the X-ray source as N_ H(r,z)=1/μ∫^∞_zn_ H(r,z')dz'=Σ(r)/μζ_ H_2m_ H[1- Erf(z/√(2)H)] , where the obliquity parameter μ accounts for the increase in the obscuring column density due to the fact that the disk is illuminated obliquely <cit.>, Erf(X) is the standard error function, and n_ H(r,z)=2n_ H_2(r,z) is the number density of H nuclei. For the region where Thomson scattering becomes important and enhances X-ray obscuration (i.e. the Compton-thick regime; N_ H(r,z) ≥ 1.5×10^24 cm^-2), we adopt the effective column density N^ eff_ H(r,z)=τ_ TN_ H for evaluating the attenuated X-ray heating rate, where the boosting factor is the Thomson optical depth τ_ T∼ 6.65× 10^-25N_ H <cit.>. Finally, in our calculation of H_ X/n_ H with Equations <ref> and <ref>, we assume that the X-rays that originate from the disk corona in the vicinity of the central BH are emitted isotropically <cit.> and that the 1-100 keV X-rays account for ∼20% of the bolometric flux <cit.>, suggesting that F_ X=0.2L_ bol/(4π r^2). In addition to estimating the heating rate, we also compute the density distribution of the molecular gas with Equation <ref> to find out the region where n_ H_2(r,z) falls in the favored range for maser excitation (i.e. n_ H_2 = 10^7-10^11 cm^-3). By comparing this region with the locations where the X-ray heating rate is sufficient to maintain T_ H_2∼ 400-1500 K, we identify the boundaries of the masing region within which the level populations of water molecules could be inverted.
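A minimal numerical sketch of the procedure just described is given below (illustrative Python, not the modeling code used in the paper): it evaluates the gas density, the obscuring column, and the X-ray heating rate per nucleus on an (r, z) grid, and flags the cells whose density and heating rate fall within the ranges that permit maser action. All numerical inputs in the example are assumed placeholders rather than fitted values.

```python
import numpy as np
from scipy.special import erf

G, C = 6.674e-8, 2.998e10
MSUN, PC, MH = 1.989e33, 3.086e18, 1.673e-24
ZETA_H2 = 2.36                      # mean molecular weight per H2 molecule

def masing_map(m_bh, m_disk, r_out_pc, l_bol, mu, s=-1.0, c_g=2.0e5,
               n_r=200, n_z=100):
    """Flag (r, z) cells whose density and heating rate allow maser action."""
    r = np.linspace(0.02, 1.5, n_r) * r_out_pc * PC            # radius grid [cm]
    h = c_g * r**1.5 / np.sqrt(G * m_bh * MSUN)                 # scale height
    z = np.linspace(0.0, 6.0, n_z)[:, None] * h[None, :]        # elevation grid
    sigma_out = (s + 2.0) * m_disk * MSUN / (2.0 * np.pi * (r_out_pc * PC)**2)
    sigma = sigma_out * (r / (r_out_pc * PC))**s                # surface density
    rho = sigma / (np.sqrt(2.0 * np.pi) * h) * np.exp(-z**2 / (2.0 * h**2))
    n_h2 = rho / (ZETA_H2 * MH)                                 # molecular density
    # column toward the obliquely illuminating X-ray source
    n_col = sigma / (mu * ZETA_H2 * MH) * (1.0 - erf(z / (np.sqrt(2.0) * h)))
    n_col = np.where(n_col > 1.5e24, 6.65e-25 * n_col * n_col, n_col)  # Compton-thick boost
    f_x = 0.2 * l_bol / (4.0 * np.pi * r**2)                    # 1-100 keV flux
    xi_eff = 1.26e-4 * f_x / ((2.0 * n_h2 / 1e5) * (n_col / 1e22)**0.9)
    heat = 3.8e-25 * xi_eff                                     # H_X / n_H
    masing = (n_h2 > 1e7) & (n_h2 < 1e11) & (heat > 3.2e-30) & (heat < 1.2e-28)
    return r / PC, masing

if __name__ == "__main__":
    # assumed example inputs: M_BH = 1e7 M_sun, M_D = 3e4 M_sun, R_out = 0.5 pc
    r_pc, ok = masing_map(m_bh=1.0e7, m_disk=3.0e4, r_out_pc=0.5,
                          l_bol=1.0e44, mu=0.3)
    if ok.any():
        in_r = r_pc[ok.any(axis=0)]
        print("masing region spans r ~ %.2f-%.2f pc" % (in_r.min(), in_r.max()))
```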
§.§ The Location and Outer Boundary of the Masing Region
Given that the predicted outer radii of the maser disks from the steady-state model cannot be reconciled with the observations for the majority of our sources, we focus our attention on the more general power-law disk model prescribed by Equation <ref>, with M_ D and s treated as free parameters. In our analysis, we model every maser disk by choosing a set of (s, M_ D), with s varied between -2.0 < s < 0.0 in steps of 0.1. For a given value of s, our model suggests that the outer radius of the masing region is an increasing function of M_ D. By using the observed outer radius as the constraint, we try to fit the disk mass such that the predicted R_ out can match the observation. In Column (3) of Table <ref>, we list the range of s in which we can find solutions for M_ D, with Columns (4), (5), & (6) indicating the best-fit M_ D in units of 10^4 M_⊙ for three representative models with s=-1.8, s=-1.4, and s=-1.0, respectively. To compare the disk mass within the same reference radius, we also show in Column (7) the total disk mass M̃_ D within r=1 pc for each maser system with the assumption of s=-1 <cit.>. One can see that the disk masses within 1 pc are comparable for all maser systems, with the values falling within the narrow range of ∼(1-10)× 10^4 M_⊙. In addition, the disk-to-BH-mass ratios M̃_ D/M_ BH shown in Column (8) are all significantly smaller than unity, with a mean value of M̃_ D/M_ BH∼ 0.005, consistent with the fact that most maser disks in our sample follow nearly perfect Keplerian rotation. Finally, we also note that the best-fit M_ D are substantially smaller than the predictions from the steady-state accretion model, suggesting that the steady-state assumption is in question. Given L_ bol∼ 10^44 ergs s^-1 and M_ BH∼ 10^7 M_⊙ for most maser disks (see Table <ref>), Equation <ref> predicts that M̃_ D would be ∼1.0× 10^6 M_⊙ if the accretion is in the steady state. This is ∼1-2 orders of magnitude greater than our best-fit M_ D, suggesting that the steady-state accretion disk is too massive to produce a small enough outer radius consistent with the observation.
To illustrate how the physical conditions of the gas define the boundaries of the masing region, we show in Figure <ref> the X-ray heating rate and gas density distributions for each maser disk based on the s=-1 model. In each panel, the two vertical grey bars indicate the observed inner and outer radii of the maser disk. The green dotted and dot-dashed lines delineate the locations for the minimum and maximum density for maser excitation, respectively. The black solid line represents the points where the heating rate reaches (H_ X/n_ H)_ max while the dashed line shows the curve for (H_ X/n_ H)_ min. It is expected that the gas lying beyond the maximum heating curve will become atomic, and the gas bound within the minimum heating curve will be too cold to mase. To produce luminous maser emissions, the gas needs to lie within the masing region marked by the blue shaded area, in which the gas density and temperature would fall in the favored ranges for population inversion. It can be seen that the masing region typically lies close to the mid-plane of the disk except for NGC 1194, and the thickness of the masing region between R_ in and R_ out is typically ∼1-4 H. In addition, one can also infer that the X-ray heating rate per nucleus always increases with radius in every maser disk, suggesting that the gas temperature would increase with radius as well, assuming T_ d is roughly constant. As a result, we argue that the minimum temperature T_ min∼ 400 K could not be the primary factor that determines R_ out. As suggested by Figure <ref>, the outer radius of a maser disk can only be determined either by the maximum heating rate (e.g. NGC 4258, UGC 3789, etc.) or the minimum gas density n_ min for maser pumping (e.g. NGC 2960, NGC 5765b, etc.), depending on the combination of L_ bol, M_ BH, and M_ D. We will discuss this dependence in detail in Section <ref>. Finally, the readers should be aware that the outer radius of a maser disk predicted by our model should be seen as an approximation. In our simple modeling presented above, we ignore the effect of density perturbation of the molecular gas as a result of X-ray heating. Given the turbulence-dominated disk in our analysis, we implicitly assume that the equilibrium density of the molecular gas after X-ray irradiation is comparable to the gas density before any injection of thermal energy. Taking the density perturbation into account would require a more rigorous modeling that involves solving the energy balance equation for the gas. This is beyond the scope of this paper, and we defer this to future work.
§.§ The Inner Edge and the Dust Sublimation Radius
One can see in Figure <ref> that the density and heating rate requirements for efficient maser pumping do not impose a clear inner bound in the maser disk.
This situation does not change no matter how we vary the model parameters, suggesting that the physical conditions of the gas alone may not be able to define the inner edge of a maser disk. Other factors might be involved. Among the additional factors that could affect the production of water maser emissions, dust properties are the most important. As explained in Section <ref>, the presence of cold dusts is essential for maintaining population inversion by absorbing nonmasing far-infrared water lines trapped in the cloud. If the amount of dust particles is substantially reduced at some radius due to reasons such as dust sublimation, it is likely that the trapping of the far-infrared photons may become more significant, leading to an edge of the maser disk where the population inversion is quenched.
To explore this possibility, we estimate the dust sublimation radius for each maser disk (see Column (9) of Table <ref>) based on the prescription provided by <cit.>: R_ sub, Nenkova≈ 0.4( L_ bol/10^45  ergs s^-1)^1/2( 1500 K/T_ sub)^2.6  pc. In our calculation, we use L_ bol from Table <ref>, with the assumption that dusts sublimate at the temperature T_ sub∼ 1500 K. We estimate the uncertainty δ R_ sub, Nenkova by adopting δ L_ bol∼ 0.54 dex, obtained from the comparison between L_ bol estimated from X-ray spectroscopy and the [OIII]λ5007 line for our sample <cit.>. As noted in <cit.>, R_ sub, Nenkova is not a sharp boundary within which no dusts exist. Instead, it is an approximation that marks the radius across which the environment transitions from being dusty to dust-free as the individual components of the dust mixture gradually sublimate at different radii. It is expected that the largest grains can survive down to the innermost radius of the dusty torus probed by reverberation mapping observations <cit.>, which is ∼3 times smaller than R_ sub, Nenkova.
In Figure <ref>, we compare the observed inner radius of the maser disk with the dust sublimation radius for all sources in our sample. This figure shows that the majority of the maser disks have their inner radii consistent with R_ sub, Nenkova within ∼1σ (∼75%) or ∼ 2σ (∼19%) error. The only significant outlier in this comparison is NGC 4258 (the red square in the figure), for which R_ in is considerably greater than R_ sub, Nenkova given the measurement uncertainties. This result suggests that, except for NGC 4258, dust sublimation may play a role in determining the inner radius of a maser disk. It is likely that dusts residing near the sublimation radius would become warmer and their efficiency as a heat sink <cit.> for absorbing far-infrared photons gets reduced. In addition, the trapping of far-infrared photons by the masing clouds may become more significant as dusts gradually sublimate away at R∼ R_ sub, Nenkova. This process may quickly destroy the population inversion, resulting in the observed inner edge of an H_2O maser disk.
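The following short sketch (illustrative Python) evaluates the dust sublimation radius of the prescription above and, using the Eddington scaling discussed in the next section, expresses the same radius in Schwarzschild units. The BH mass and Eddington ratio in the example are assumed values typical of the sample rather than measurements.

```python
G, C, MSUN, PC = 6.674e-8, 2.998e10, 1.989e33, 3.086e18

def r_sub_pc(l_bol, t_sub=1500.0):
    """Dust sublimation radius [pc] for bolometric luminosity l_bol [erg/s]."""
    return 0.4 * (l_bol / 1.0e45) ** 0.5 * (1500.0 / t_sub) ** 2.6

def r_sub_in_rs(lambda_edd, m_bh_msun):
    """Same radius expressed in units of R_S = 2 G M_BH / c^2."""
    l_bol = lambda_edd * 1.26e38 * m_bh_msun        # Eddington scaling of L_bol
    r_s = 2.0 * G * m_bh_msun * MSUN / C ** 2
    return r_sub_pc(l_bol) * PC / r_s

if __name__ == "__main__":
    lam, m_bh = 0.04, 1.0e7        # assumed typical disk-maser host
    l_bol = lam * 1.26e38 * m_bh
    print(f"R_sub ~ {r_sub_pc(l_bol):.3f} pc = {r_sub_in_rs(lam, m_bh):.2e} R_S")
```

For these assumed typical values the sublimation radius lands at ∼0.1 pc, or ∼10^5 R_S, which is the characteristic inner scale discussed below.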
§ DISCUSSION
§.§ The Characteristic Inner Radius
Our results from the last section suggest that the outer radius of a maser disk could be determined either by the minimum gas density or the maximum heating rate that enables efficient maser action. Moreover, the inner edge of the disk may result from the quenching of population inversion near the dust sublimation radius. While the physical conditions of the gas and the dust appear to play an important role in defining the inner and outer boundaries of a maser disk, these conditions do not seem to directly explain why the inner radii of most maser disks are all around R_ in∼ 1.0× 10^5 R_ S as shown in Section 2. In particular, if the inner edge of a maser disk is indeed constrained by the dust sublimation radius R_ sub, Nenkova≈ 0.4 (L_ bol/10^45  ergs s^-1)^1/2, one would expect that R_ in should depend more strongly on L_ bol, rather than scale linearly with R_ S, which is proportional to M_ BH.
To explain the characteristic scale of R_ in∼ 1× 10^5 R_ S in light of the dust sublimation radius, we find it helpful to replace L_ bol in Equation <ref> with L_ bol=λ_ EddL_ Edd, where λ_ Edd is the Eddington ratio and L_ Edd=1.26× 10^38 (M_ BH/M_⊙) ergs s^-1 is the Eddington luminosity. This replacement allows one to express the dust sublimation radius in units of R_ S as R_ sub, Nenkova = 1.05× 10^5 (λ_ Edd/0.05)^1/2(M_ BH/10^7M_⊙)^-1/2 R_ S . The above equation directly implies that the dust sublimation radius in AGN in general does not equal ∼1.0× 10^5 R_ S since its value depends substantially upon λ_ Edd and M_ BH. However, it is well-known that the BH masses for the Keplerian megamaser disks are typically ∼10^7M_⊙, likely resulting from the fact that disk megamasers are preferentially detected in local Seyfert 2 galaxies <cit.> whose BH mass function peaks at M_ BH≈ 3×10^7 M_⊙ <cit.>. Moreover, the Eddington ratios of the majority of the Keplerian disk maser systems fall in the narrow range of λ_ Edd∼ 0.01-0.1, with the median value ≈ 0.04 <cit.>. Given that disk maser systems typically have M_ BH≈ 10^7 M_⊙ and λ_ Edd≈ 0.04, one can infer from Equation <ref> that the dust sublimation radii of most Keplerian H_2O maser disks would be ∼1.0× 10^5 R_ S, the characteristic inner radius we see in Figure <ref>. It is likely that this characteristic scale is deeply connected with the fine-tuning nature of the disk megamaser phenomenon and that it reflects the dust sublimation radius of a gas disk in a certain phase of AGN evolution, with BH masses following the population for low redshift Seyfert 2 galaxies. As suggested in <cit.>, the disk megamaser phenomenon may only occur in a certain (short) phase in the galaxy-AGN coevolution during which the mode of gas accretion dramatically changes. We further speculate that Keplerian maser disks are more likely to arise in the phase in which the accretion disks are optically-thick, geometrically thin, with typical λ_ Edd≳ 0.01 <cit.>. Given this speculation, the typical Eddington ratio distributions for local Seyfert galaxies <cit.> would imply that disk megamasers with λ_ Edd∼ 0.01-0.1 would outnumber the ones with λ_ Edd≳ 0.1. It can be expected that if H_2O maser disks also exist at the centers of high redshift AGNs (e.g. quasars at z>2), the Eddington ratio and BH mass distributions would be considerably different (i.e. λ_ Edd≳ 0.1 and M_ BH≳ 10^9 M_⊙), leading to a distinctly different characteristic radius for high-z maser disks.
§.§ The Inner Radius of NGC 4258
As shown in Section <ref>, NGC 4258 is the most prominent outlier in our comparison between R_ in and R_ sub, Nenkova for maser disks, suggesting that the inner edge of NGC 4258 requires explanations beyond the physical conditions of the gas and the dust in the disk.
While one could resort to different mechanisms, such as the Bardeen-Petterson (BP) effect <cit.>, to explain its inner radius, we note that the present inner edge of NGC 4258 is partially defined by the observational constraints, and it is the only source in our sample having this issue. Assuming a distance of 7.6 Mpc, the maser emissions from NGC 4258 terminate at an inner radius of R_ in=0.11 pc, corresponding to the position of the redshifted maser component whose radio LSR velocity is 1647 km s^-1. This maser feature lies quite close to the edge of the VLBI observing bands for NGC 4258 as reported in <cit.>, which provides the widest ever VLBI bandpass for NGC 4258, covering a velocity range between -706 km s^-1 and 1676 km s^-1. The sensitive single-dish monitoring of NGC 4258 with the Green Bank Telescope (GBT) as part of the MCP also covers a similar velocity range, preventing one from exploring masers beyond the current spectral limits. It would be interesting if future observations of NGC 4258 could cover a substantially wider spectral range and examine whether there are maser features lying well inside the present inner edge.
Aside from the observational constraints, we speculate that the inner edge of NGC 4258 could also be imposed by the significant inclination warp in this famous maser system. By comparing the orientations of all maser disks in our sample, it can be seen that while most maser disks are within ∼ 1^∘-2^∘ from being edge-on <cit.>, NGC 4258 displays the most significant inclination warp. Based on the disk modeling by <cit.>, the disk inclination at a radius r ≤ R_ in=0.11 pc is ≤79.2^∘, showing that the deviation from the edge-on configuration would be greater than 20^∘ if there are masers residing inside R_ in. Given such large inclinations, it is possible that the effective coherent path length would be reduced substantially, leading to weak maser emissions below the detection limit.
§.§ On the Detection of H_2O Gigamasers at High Redshifts
§.§.§ The Effect of Increasing Gain Length
Observations of the early universe have revealed strong evidence that galaxies or mergers once went through an extremely rapid and luminous phase of evolution, leading to intensive star formation and quasar activities that peak at z∼ 2-3 <cit.>. During the most active phase of evolution, it is believed that a high fraction of luminous quasars may harbor ≳ 10^9 M_⊙ supermassive BHs at their cores <cit.>. As a result of immense energy injection into the surrounding gas, <cit.> speculates that AGN formation in high-z galaxies could possibly trigger H_2O gigamasers, whose total maser luminosities could be ≳ 10^3 times higher than those of low-z H_2O megamasers. If such gigamasers truly exist in the early universe, they could serve as a new class of high redshift distance indicators, providing an independent probe of the expansion rate of the early universe.
To assess the possibility of discovering high-z H_2O gigamasers in disk configuration quantitatively, we estimate the flux densities of high-z maser disks based on the model presented in the last section. Our estimation suggests that the existing radio telescopes, such as the Very Large Array (VLA) and the GBT, would have sufficient sensitivities to detect these gigamaser disks if their typical radii can be ≳ 20-30 times greater than those of the low redshift maser systems (i.e. R∼ 0.3-0.8 pc).
As one can infer from <cit.>, the flux density S_ν of a saturated maser source can be expressed as S_ν = (1+z)n_ uΔ P hν L_ g^3/(D_ L^2Δν)  erg s^-1 cm^-2 Hz^-1, where n_ u is the density of H_2O molecules in the upper excited state, Δ P is the rate of maser transition from the upper to the lower state, hν is the energy of a maser photon, L_ g is the gain length for maser amplification, D_ L is the luminosity distance to the source, and Δν is the observing bandwidth in the observer's frame. The cubic dependence on L_ g shown in the equation suggests that the maser flux density is highly sensitive to the gain length, which is expected to be comparable to the velocity coherent path length L_ c in a maser disk. Since maser action only takes place when the gas density and temperature fall within the narrow ranges shown in Section <ref>, one could assume that on average n_ uΔ P in the high-z environment would be comparable to that in local maser sources. Given this assumption, the key parameters that determine the maser flux density would be the coherence path length L_ c of the disk and the obvious inverse square dependence on distance D_ L. For the high-velocity maser components in a maser disk, the velocity coherent path length L_ c at the tangent points at radius R is L_ c=2(δ V/V)^1/2R, where δ V and V are the gas velocity dispersion and the orbital velocity, respectively, suggesting that the gain length would increase linearly with R <cit.>. If the radius of a maser disk can increase by a factor of ≳ 20-30, the isotropic luminosity density of the source L_ν≡ 4π D_ L^2S_ν/(1+z) would increase by a factor of ≳8000-27000 due to the increase in L_ c, making the source an H_2O gigamaser. Assuming such H_2O gigamaser disks exist at z∼ 2-3, their luminosity distances would be ∼ 150-250 times greater than that of a maser disk at a distance of ∼100 Mpc assuming standard cosmology. The inverse square dependence on D_ L would then lead to a decrease in the flux density by a factor of ∼ 20000-60000. By considering the change in the flux density S_ν due to the factor (1+z) and the increases in D_ L and L_ c based on Equation <ref>, one would expect that the flux densities of 22 GHz H_2O gigamasers at z∼ 2-3 could be comparable to that of a local H_2O megamaser at ∼100 Mpc. Given that the flux densities of known H_2O maser disks at D_ L∼ 100 Mpc are typically ∼20-40 mJy <cit.>, the expected flux densities of the strongest maser features in a high-z gigamaser could range from a few mJy up to ≳40 mJy, suggesting that a few σ detections of the strongest lines are possible with a few hours of on-source integration time using the VLA, the GBT, and the High Sensitivity Array (HSA). Note that before applying the above estimation to high-z sources, one needs to explore whether the high-z gigamaser disks could have significantly larger sizes than the ones in the local universe.
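The scaling argument above can be packaged into a few lines of illustrative Python; the luminosity distances and the local template flux density below are assumed round numbers for z ∼ 2-3, not the output of a specific cosmological calculation.

```python
def expected_flux_mjy(radius_boost, d_l_highz_mpc, z, d_l_local_mpc=100.0,
                      s_local_mjy=30.0):
    """Expected peak flux density [mJy] of a high-z analogue of a local disk.

    radius_boost  : factor by which the disk radius (and hence L_g) increases
    d_l_highz_mpc : luminosity distance of the high-z source [Mpc]
    z             : redshift of the high-z source
    s_local_mjy   : assumed peak flux density of the local template [mJy]
    """
    gain = radius_boost ** 3                           # L_g^3 dependence
    dimming = (d_l_local_mpc / d_l_highz_mpc) ** 2     # inverse-square in D_L
    return s_local_mjy * (1.0 + z) * gain * dimming

if __name__ == "__main__":
    for boost, d_l, z in [(20, 16000.0, 2.0), (30, 26000.0, 3.0)]:
        print(f"radius x{boost}, z={z}: S ~ {expected_flux_mjy(boost, d_l, z):.1f} mJy")
```

For a radius boost of 20-30 the expected peak flux densities come out at the tens-of-mJy level, in line with the estimate above.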
§.§.§ The Dependence of Maser Disk Size on the Black Hole Mass
Based on the observations of quasars in <cit.>, it has been shown that the BH mass function of luminous quasars at redshifts between 1.5 ≲ z ≲ 3 peaks at M_ BH∼ 1.5×10^9 M_⊙, and the Eddington ratios of these quasars tend to be λ_ Edd≳ 0.1. To see whether maser disks could exist around these ≳ 10^9 M_⊙ supermassive black holes, with their sizes significantly larger than those of the low redshift maser systems, we explore the dependence of maser disk size on black hole mass based on the disk model presented in Section <ref>. In this exploration, we adopt the disk profile Σ(r) = Σ_ out(r/r_ out)^-1 with c_ g=2 km s^-1 and fix the obliquity parameter at μ=0.15, a typical value for local H_2O maser disks. In addition, we set the bolometric luminosity in our model as L_ bol=λ_ EddL_ Edd, with λ_ Edd varied between 0.03 and 0.6. Finally, we also assume that the ratio between the disk mass M̃_ D within 1 pc and M_ BH is comparable to that of the local maser disks (see Table <ref>) and set the ratio at five representative values ranging from 0.001 to 0.03. Given a combination of λ_ Edd and M̃_ D/M_ BH, we model the heating rate and density distribution in the disk and calculate the outer radius of the masing region for BH masses between 1.0× 10^7M_⊙ and 1.5× 10^9 M_⊙.
In the left panel of Figure <ref>, we show R_ out as a function of M_ BH for λ_ Edd = 0.03, 0.06, 0.1, 0.3, and 0.6, with M̃_ D/M_ BH = 0.005, the average disk-to-BH mass ratio for our sample. The right panel of Figure <ref> shows the prediction of the outer radius for M̃_ D/M_ BH ranging from 0.001 to 0.03 assuming λ_ Edd=0.1. It can be seen in both plots that the outer radius R_ out for the cases with M_ D/M_ BH≲ 0.005 first increases with BH mass with a steeper slope, which drops distinctly when the BH mass is greater than a certain value (the critical BH mass M_ BH^ crit hereafter). It appears that M_ BH^ crit varies depending upon λ_ Edd and M̃_ D/M_ BH. To understand why the scaling between R_ out and M_ BH changes in different cases, we examine the heating rate and density distributions of the maser disks associated with different M_ BH, λ_ Edd, and M̃_ D. We note that the outer boundary of the minimum density region lies beyond the maximum heating curve of the gas when M_ BH≪ M_ BH^ crit. On the other hand, the edge of the minimum density region is bound within the maximum heating curve if M_ BH≫ M_ BH^ crit. As a result, the outer radius of a maser disk is primarily determined either by the minimum density or the maximum X-ray heating, depending on whether the BH mass is greater or smaller than M_ BH^ crit. Based on Equations (2) through (8), we find that the outer radius confined by the maximum heating can be expressed as R_ out^ H=0.77(λ_ Edd/0.1)^-0.44(M̃_ D/M_ BH/0.005)^1.22(M_ BH/10^7 M_⊙)  pc , reminiscent of the scaling R_ out∝ M_ BH found by <cit.>. In the minimum density limited regime, the outer radius is related to the disk parameters as R_ out^ D=0.79(M̃_ D/M_ BH/0.005)^0.4(M_ BH/10^7 M_⊙)^0.6  pc, consistent with the empirical relationship R_ out∝ M_ BH^0.57± 0.16 found by <cit.>. For the disks in which M_ BH≈ M_ BH^ crit, we find that R_ out^ H≈ R_ out^ D, suggesting M_ BH^ crit/10^7 M_⊙≈(λ_ Edd/0.1)^1.09(M̃_ D/M_ BH/0.005)^-2.04. The above equation indicates that the critical BH mass would be ≲ 10^8 M_⊙ if 0.1 ≲λ_ Edd≲ 1 and M̃_ D/M_ BH≳ 0.005, suggesting that the outer radius of the maser disk around a ≳ 10^9 M_⊙ BH in a high-z quasar would be determined by R_ out^ D if the gas disk is massive enough. As one can see in Figure <ref>, such high-z maser disks would have R_ out∼ 10-30 pc given M̃_ D/M_ BH∼ 0.005 - 0.03, about ∼ 20-60 times greater than the average outer radius of the local maser disks (i.e. R_ out = 0.52 pc). Based on the discussion in Section <ref>, we speculate that such large disk sizes could lead to H_2O gigamasers in high redshift galaxies.
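A compact sketch of the approximate scaling relations quoted above is given below (illustrative Python); the Eddington ratio and disk-to-BH mass ratio in the example are assumed values used only to show how the regime switches across the critical BH mass.

```python
def r_out_heating(lambda_edd, q, m_bh_msun):
    """Outer radius [pc] in the maximum-heating-limited regime."""
    return 0.77 * (lambda_edd / 0.1) ** -0.44 * (q / 0.005) ** 1.22 * (m_bh_msun / 1e7)

def r_out_density(q, m_bh_msun):
    """Outer radius [pc] in the minimum-density-limited regime."""
    return 0.79 * (q / 0.005) ** 0.4 * (m_bh_msun / 1e7) ** 0.6

def m_bh_crit(lambda_edd, q):
    """Critical BH mass [M_sun] separating the two regimes."""
    return 1e7 * (lambda_edd / 0.1) ** 1.09 * (q / 0.005) ** -2.04

if __name__ == "__main__":
    lam, q = 0.1, 0.01          # assumed Eddington ratio and M_D/M_BH
    for m in (1e7, 1e8, 1.5e9):
        if m < m_bh_crit(lam, q):
            r = r_out_heating(lam, q, m)   # heating-limited below the critical mass
        else:
            r = r_out_density(q, m)        # density-limited above it
        print(f"M_BH = {m:.1e} M_sun -> R_out ~ {r:.2f} pc")
```

For the assumed inputs the most massive case lands at R_out of a few tens of pc, consistent with the ∼10-30 pc range quoted above for high-z maser disks.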
Considering the typical angular-diameter distance D_ A for a galaxy at z∼ 2-3 (i.e. D_ A∼1620-1760 Mpc), the angular radius of a high-z maser disk with a physical size of r∼ 10-30 pc would be ∼1.2 - 3.8 milliarcseconds, comparable to those of the local H_2O maser disks, suggesting that it is possible to apply the H_2O maser technique to high-z quasars with existing centimeter VLBI facilities. If one could further detect submillimeter water maser emissions <cit.> from these high-z H_2O gigamaser disks, if they exist, it is possible that future observations with the Event Horizon Telescope (EHT) would provide highly accurate maser imaging with ∼20-40 microarcsecond resolution, leading to maser maps with fractional position uncertainties comparable to NGC 4258 <cit.>.
§ DISCUSSION AND CONCLUSION
In this work, we examine whether the physical conditions favorable for population inversion of H_2O molecules can play the primary role in determining the inner and outer radii of an H_2O maser disk. In particular, we compare the observed radii of sixteen maser disks with the predictions from the steady-state accretion model and the power-law surface density model. We also apply our models to explore whether H_2O gigamaser disks could possibly exist in the high redshift universe. Our conclusions are summarized as follows:
1. The predictions from the well-known NM95 model that assumes steady-state accretion tend to over-predict the outer radii of the maser disks by a factor of ∼3-10 for ∼75% of our sample if one approximates the mass accretion rate as Ṁ≈ L_ bol/ϵ c^2. In light of the results from our modeling, it is most likely that this discrepancy originates from the breakdown of the steady-state assumption for the majority of the maser disks.
2. The outer radii of all maser disks can be well explained if one adopts the disk model described by the power-law surface density profile. By examining the distributions of the X-ray heating rate and gas density in an X-ray illuminated molecular disk based on the power-law model, we are able to identify the masing region within which the physical conditions of the gas would enable efficient maser action. For all maser disks in our sample, we can find solutions that predict disk outer radii consistent with the observations. The best-fit models reveal that the masing regions tend to lie at the mid-plane of the disk, with the outer boundaries defined either by the maximum X-ray heating rate or the minimum gas density for maser pumping, depending on the combination of M_ BH, M_ D, and λ_ Edd.
3. The physical conditions of the gas alone cannot explain the inner radii of the maser disks. In the region well inside the inner radius of a maser disk, one can always find gas at a sufficiently high elevation having physical conditions suitable for maser excitation, suggesting that the inner edge of a maser disk involves physics beyond basic gas properties.
4. We find that the observed inner radii of the majority of the maser disks are roughly consistent with the dust sublimation radius R_ sub, Nenkova prescribed by <cit.>, which indicates the transition radius between dusty and dust-free environments. It is likely that the trapping of far-infrared photons by the masing clouds becomes more significant as dusts gradually sublimate away at this transition region, leading to the inner edge of a maser disk where the population inversion is mostly quenched.
5. Finally, our model predicts that H_2O gigamaser disks could exist around ≳ 10^9 M_⊙ supermassive BHs at the centers of high-z quasars.
Their sizes could be as large as ∼10-30 pc if the disk-to-BH-mass ratio is comparable to or greater than the average value for local maser disks. The predicted flux densities of these systems range from a few mJy to ≳ 20-30 mJy, high enough to be detected with existing radio interferometers with a few hours of on-source integration. Future surveys of H_2O gigamasers from high-z quasars that could host these systems <cit.> would provide a good test for our model.
§ ACKNOWLEDGEMENTS
We gratefully thank Dr. Fred Lo, the former director of the National Radio Astronomy Observatory, for initiating the work for this paper before he passed away in 2016. This publication is supported by the Ministry of Science and Technology, R.O.C. under the project 112-2112-M-110-003. This research has made use of NASA's Astrophysics Data System Bibliographic Services, and the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. In addition, this work also makes use of the cosmological calculator described in <cit.>.
http://arxiv.org/abs/2312.16382v1
{ "authors": [ "C. Y. Kuo", "F. Gao", "J. A. Braatz", "D. W. Pesce", "E. M. L. Humphreys", "M. J. Reid", "C. M. V. Impellizzeri", "C. Henkel", "J. Wagner", "C. E. Wu" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20231227024902", "title": "What Determines the Physical Size of a H2O Megamaser Disk ?" }
Improved Qubit Routing for QAOA Circuits
Martin Leib
[Figure: ConceptDiffusion improves the text-to-image alignment by modulating the guidance of diffusion models towards any concept from which the model diverges. Our method follows the text prompt faithfully compared to Stable Diffusion <cit.>, which tends to neglect certain subjects or incorrectly bind attributes. All images are generated using the same random seeds.]
Recent advancements in Text-to-Image (T2I) diffusion models have demonstrated impressive success in generating high-quality images with zero-shot generalization capabilities. Yet, current models struggle to closely adhere to prompt semantics, often misrepresenting or overlooking specific attributes. To address this, we propose a simple, training-free approach that modulates the guidance direction of diffusion models during inference. We first decompose the prompt semantics into a set of concepts, and monitor the guidance trajectory in relation to each concept. Our key observation is that deviations in the model's adherence to prompt semantics are highly correlated with divergence of the guidance from one or more of these concepts. Based on this observation, we devise a technique to steer the guidance direction towards any concept from which the model diverges. Extensive experimentation validates that our method improves the semantic alignment of images generated by diffusion models in response to prompts.
§ INTRODUCTION
Text-to-Image (T2I) diffusion models <cit.> have taken significant strides recently in producing high-quality images from free-form text prompts with increasing diversity and visual fidelity. However, text-to-image synthesis is still hampered by a key challenge: semantic misalignment between text prompts and generated images <cit.>. While diffusion models are adept at generating a single subject, they fail to faithfully adhere to the semantic subtleties of multi-subject text prompts <cit.>. The generated images often neglect one or more subjects in the prompt or incorrectly bind attributes between them, as shown in <ref>. Recent works have revealed that the diffusion process in latent diffusion models (e.g., Stable Diffusion <cit.>) organizes high-level semantics and implicitly learns relationships between them without explicit supervision <cit.>. Such an understanding allows us to interpret the generation process of diffusion models as a reverse process: recomposing individual concepts into a cohesive scene. Therefore, any failure in accurately following the semantics of a scene suggests that one or more concepts may not be effectively reintegrated during this recomposition process. Figure <ref> provides a visual representation of the guidance trajectory with respect to different concepts contained within the prompt. This visualization specifically highlights how the trajectory diverges from concepts that are neglected or inadequately represented in the model's recomposition process.
In response, we propose a training-free approach to carefully modulate the guidance direction towards any concept from which the diffusion model diverges. Specifically, our method starts by extracting distinct concepts from the text prompt. We then measure the cosine similarity between the model's score for the overall prompt and the score specific to each extracted concept. A lower cosine similarity is indicative of a concept potentially being neglected.
Based on these similarity measurements, we adjust the guidance direction towards such concepts. Our experiments show that our method improves the diffusion models' text-to-image alignment. We refer to our method as ConceptDiffusion. Our contributions can be summarized as follows:
* We develop a technique to detect semantic misalignment of diffusion models during inference, enabling real-time correction.
* We propose an intuitive method to improve the alignment between text prompts and generated images, without requiring additional training or optimization.
* Through comprehensive and empirical evaluations, we demonstrate that ConceptDiffusion qualitatively and quantitatively improves the text-to-image alignment.
§ RELATED WORKS
Text-to-Image Models. In the initial exploration of the text-to-image synthesis task, GANs were at the forefront <cit.>. Subsequent advancements were marked by the advent of large-scale auto-regressive models, which demonstrated remarkable capabilities <cit.>. Diffusion models <cit.> have recently emerged as promising generative models with unparalleled photo-realism and stable training procedures. However, images generated by diffusion models often fail to closely adhere to the input prompt. To address this, classifier-free guidance <cit.> has been introduced to strengthen the prompt reliance, yet extensive prompt engineering is still required to achieve desired results <cit.>. Imagen <cit.>, on the other hand, addresses the problem with an improved text encoder model <cit.>. For our work, we focus on Stable Diffusion <cit.>, the state-of-the-art open-sourced T2I model.
Semantic Dimensions. The concept of semantic dimensionality has been extensively explored in language models, where it is used to mirror semantic and linguistic relationships <cit.>. A notable illustration of this is vector arithmetic within semantic dimensions, where an operation like `King - male + female' yields a vector closely related to `Queen'. This characteristic enables meaningful interpolation of semantic vectors. For generative models, StyleGANs <cit.> have demonstrated the presence of semantic dimensions that can be leveraged during image generation. In the context of diffusion models, several works have uncovered semantic latent spaces within frozen pre-trained diffusion models <cit.>. Concurrent studies revealed the meaningful interpretation of concept representations within these semantic spaces <cit.>. Building upon these findings, our work utilizes the semantic dimension in pre-trained diffusion models to adjust the guidance vector towards the inclusion of concepts that are otherwise missing in the generation process.
Controllable Image Synthesis. For more controllable generation, recent works have proposed methods to synthesize images with additional spatial control <cit.>. Another line of work explores providing users control solely through text <cit.>. While these models not only provide a layer of control over the image generation process but also tend to adhere more closely to input prompts, they require extra inputs from users. Moreover, these methods may require fine-tuning of pre-trained models <cit.> or the integration of new modules like adapters <cit.>. For more precise control, several works have proposed methods to edit the image in diffusion models' latent space <cit.>.
Our work aligns with these developments and can be viewed as a one-shot generation method that edits an image on-the-fly to encapsulate the input prompt semantics in its entirety.Compositional Generation. Another line of works tries to address the misalignment between input prompts and generated images using training-free methods. Composable Diffusion <cit.> employs separate denoising processes for distinct phrases derived from the input prompt, leveraging the score-based interpretation of diffusion models. The noise-estimates for each phrase are added to attain a unified image. This method combines noise estimates from each phrase to create a unified image. Structure Diffusion <cit.> takes a different approach by modulating cross-attention maps based on consistency trees or scene graphs derived from the input prompt. Similarly, Attend-and-Excite <cit.> aims to enhance the representation of overlooked tokens in cross-attention maps through explicit gradient-based optimization of noise estimates. Our work Concept Diffusion is closely related to the Composable Diffusion in its fundamental approach of manipulating the noise-estimates of diffusion models without training and optimization. § METHOD §.§ Latent Diffusion ModelsWe apply our method and experiment on the state-of-the art T2I model, Stable Diffusion (SD) <cit.>. SD consists of two modules, an auto-encoder and a diffusion model. An encoder ℰ is trained to map a given image x in RGB space to a latent space, z = ℰ(x). A decoder 𝒟 is to reconstruct the image from the latent, such that x̃ = 𝒟(z) = 𝒟(ℰ(x)). Operating on the learned latent space of the autoencoder, the denoising diffusion probabilistic model (DDPM) <cit.> gradually denoises an input latent vector z_t at each timestep t into a less noisy vector z_t-1. During the denoising process, the diffusion model is conditioned on an input prompt p, which is encoded by a pre-trained CLIP text encoder <cit.>. The DDPM model ϵ_θ is trained with objective,𝔼_z∼ℰ(x),p,ϵ∼𝒩(0,1),t[ ϵ-ϵ_θ(z_t,t,p)^2_2],where t is drawn from uniform distribution t∼𝒰([0,1]) and ϵ is sampled from a Gaussian distribution ϵ∼𝒩(0,I). The model is trained to denoise z_t by estimating the noise ϵ added to the latent vector at each timestep t. Classifier-free guidance <cit.> is a conditioning method that does not require an additional pre-trained classifier by intermittently omitting the text conditioning at a predetermined probability. This process results in fulfilling both unconditional and conditional objectives. During inference, the noise estimate of the diffusion models are adjusted asϵ_θ(z_t,t,p) = ϵ_θ(z_t,t) + w_g(ϵ_θ(z_t,t,p) - ϵ_θ(z_t,t)),in which w_g is the guidance scale. The noise estimate can be interpreted as the score of an underlying unnormalized Energy-Based Model <cit.>. This interpretation enables us to consider the noise estimate for prompt condition ϵ_θ(z_t,t,p) as a composite of concepts 𝒞 = {c_1, c_2, ..., c_n} that can be expressed as,ϵ_θ(z_t,t,p) = ϵ_θ(z_t,t) +∑_i=1^nw_i(ϵ_θ(z_t,t,c_i)-ϵ_θ(z_t, t)), in which w_i represents the weighting factor for each concept.§.§ Concept Diffusion Our goal is to regulate the noise estimate of diffusion models at each timestep t to achieve improved semantic alignment between the prompt and the generated image, without fine-tuning or optimization. We hypothesize that the cause of semantic misalignment is correlated to the divergence of the noise estimate from one or more concepts that are critical for successful synthesis. 
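As a concrete reference for the two noise-estimate manipulations above, the following minimal sketch shows classifier-free guidance and the per-concept composition side by side; eps_theta stands in for the trained denoiser, and every name in this snippet is an illustrative assumption of ours rather than part of the original implementation.

    def cfg_estimate(eps_theta, z_t, t, prompt, w_g=7.5):
        """Classifier-free guidance: eps_uncond + w_g * (eps_cond - eps_uncond)."""
        eps_uncond = eps_theta(z_t, t, None)      # unconditional pass
        eps_cond = eps_theta(z_t, t, prompt)      # prompt-conditioned pass
        return eps_uncond + w_g * (eps_cond - eps_uncond)

    def composed_estimate(eps_theta, z_t, t, concepts, weights):
        """Score-based composition over concepts c_1..c_n with weighting factors w_1..w_n."""
        eps_uncond = eps_theta(z_t, t, None)
        out = eps_uncond.clone()
        for c, w in zip(concepts, weights):
            out = out + w * (eps_theta(z_t, t, c) - eps_uncond)
        return out

The guidance scale w_g of 7.5 used as the default here matches the value reported in the implementation details later in the paper.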
In <ref>, we show a framework of our proposed method.Concept Extraction To address this, we first define the ideal concepts that should be represented in the noise estimate. Intuitively, in order for a subject to be present in the image, the noise estimate must contain semantics of it as well. Therefore, given a prompt p, we extract all the individual subjects (, noun phrases). These are referred to as subject concepts, denoted by C_s={c_s^1,c_s^2,...,c_s^n}, where n represents the total number of subjects. We leave the rest of the concepts present in the noise estimate unknown and collectively refer them to an abstract concept c_a. Now, rewriting <ref> in terms of the concepts we defined gives,ϵ_θ(z_t,t,p)=ϵ_θ(z_t,t)+∑_i=1^nw_s^i(ϵ_θ(z_t,t,c_s^i)-ϵ_θ(z_t,t)) +w_a(ϵ_θ(z_t,t,c_a)-ϵ_θ(z_t,t)),where w_s^i and w_a represent weighting factors for each concept. Weight Measurement In this context, the inclusion of a concept in the noise estimate is indicated by its respective weighting factor. When the weight assigned to a particular concept drops significantly, it suggests a decreased likelihood of that concept being represented in the generated image. We define such notable reductions in a concept's weighting factor, especially when compared to other factors that remain stable, as a divergence. Therefore, we suppose a divergence as a key indicator of how closely each concept is represented in the noise estimate, guiding us in adjusting the model for better semantic alignment.However, the noise estimate of diffusion models is high dimensional that it is not feasible to disentangle the noise estimate into an arbitrary set of concepts that we defined. Consequently, the exact numerical values of the weighting factors of the concepts cannot be calculated. We thus propose an indirect method to approximate the scale of each concept we defined. To do so, we first perform forward passes through the diffusion model with p and C_s separately to obtain scores s_p and s_s^i for each using the same latent z_t:s_p = ϵ_θ(z_t,t,p) - ϵ_θ(z_t,t),s_s^i = ϵ_θ(z_t,t,c_s^i) - ϵ_θ(z_t,t), i ∈ [1,n].The scores represent the direction in which the prompt p and each subject concept c_s^i influence the noise estimate. Specifically, the scores for the subject concepts represent their respective influences within the overall prompt score s_p, thus providing insights into how each concept contributes to the direction of the noise estimate.Given that the abstract concept represents the negation of subject concepts, the score s_a of abstract concept can be calculated through an orthogonal projection,s_a = s_p-s_p·S_s/||S_s||^2S_s, where S_s = ∑_i=1^ns_s^i.This method ensures s_a to capture the overall structural composition of the image, as opposed to the specific elements depicted by the subject concepts. Consider the prompt “a cat and a dog". In this case, the abstract concept, derived from the negation of the subject concepts `a cat' and `a dog', conveys the spatial or relational interaction between these subjects – the coexistence of two objects, in this case. <ref> demonstrates the images generated using the scores of each individual concept. Notably, the image created based on the abstract concept score exhibits alignment with the overall structural composition of the image generated using the prompt score. 
This alignment illustrates the efficacy of our method in isolating and emphasizing each concept, particularly in capturing the broader, abstract relationships and structures implied by the prompt.Now that we have defined the score of concepts that constitute the prompt score s_p, we measure the cosine similarity, denoted as k, between the score derived from the prompt s_p and the score for each concept s_c∈{s_a,s_s^1,s_s^2,...,s_s^n} as a proxy to measure the weighting factor associated with each concept. The cosine similarity k(s_p,s_c) between s_p and s_c is denoted ask(s_p, s_c) = |s_p·s_c/||s_p|| ||s_c|||.A high degree of cosine similarity indicates a semantic overlap of the prompt score and the concept score. Consequently, this suggests a high weighting scale of the concept. Our key finding is that these similarity measurements are strongly correlated with the degree of semantic alignment in the generated images. <ref> graphically demonstrates this relationship, showing the cosine similarity k of each concept across timesteps t∈[0,T]. In scenarios where the model accurately synthesizes all subjects in the prompt, we observe that the similarity scores for each concept are consistently high and closely aligned. Meanwhile, when the model fails to accurately render a specific concept, the similarity score for that concept fluctuates and generally remains lower, highlighting the direct impact of concept representation on the quality of the generated image. Concept Guidance Based on the observation that the cosine similarity k is associated with semantic alignment, we propose a concept guidance term ϕ that modulates the noise estimate of the diffusion model based on the similarity. Formally, extended to <ref>, we compute,ϵ_θ(z_t)+w_gs_p+ϕ(z_t,S_c,s_p),in which S_c={s_a,s_s^1,s_s^2,...,s_s^n}. The concept guidance ϕ is defined as,ϕ(z_t,S_c,s_p) = w_c∑γ(s_p, s_c,η)ψ(s_c, s_p),where w_c is the concept guidance scale and s_c∈S_c. The γ is a delta function based on the cosine similarity between s_p and s_c,γ(s_p,s_c,η) =1, k(s_c, s_p) < η0,otherwise.The threshold η is a hyperparameter that corresponds to the minimum inclusion of a concept in prompt score. Naturally, larger η increases the effect of concept guidance. We empirically found that η=1/(n+1) generally works well. The guidance direction ψ(s_p,s_c) is determined as,ψ(s_p,s_c) =s_c - s_c· s_p/||s_c||^2,if subject concepts_c,if abstract concept.For subject concepts, a lower cosine similarity suggests a divergence of it from the prompt score. In such cases, our approach involves steering the model's guidance towards these concepts to ensure their inclusion. Conversely, the abstract concept is understood as an independent component of the main score that is distinct from the subject concepts. Therefore, a low cosine similarity here indicates that the main score closely resembles the aggregate of subject concepts, potentially leading to a fusion of concepts in the generated results. To counteract this, when the cosine similarity for the abstract concept decreases, we adjust the guidance so that k(s_p, ∑ s_s) > η. Note that our method doesn't need extra training or optimization and can be used with any diffusion model that uses classifier-free guidance <cit.>.§ EXPERIMENTS §.§ Experiments SettingBaselines. We compare our method with various training-free methods, namely Composable Diffusion <cit.>, Structure Diffusion <cit.>, and Attend-and-Excite <cit.>, along with Stable Diffusion <cit.>. 
Since Composable Diffusion requires the input prompt to follow the pattern of noun phrases joined together by the conjunctions “and" or “not", we deconstruct each prompt into its constituent noun phrases and subsequently reassemble them using the conjunction “and". We use constituency trees for the language parser in Structure Diffusion. For Attend-and-Excite, all nouns present in the prompt are excited by the model. Metrics. We evaluate each method on the image quality and the alignment between the input prompt and the generated image. To measure the fidelity of the generated images, we follow previous works and utilize Fréchet Inception Distance (FID) <cit.>, which is the distance between feature vectors calculated for real and generated images. For text-to-image alignment, we measure CLIP R-precision (R-prec.) <cit.>, which measures how precisely a model can retrieve a relevant image from a set of images given a text prompt. However, since it is reported to fail at measuring fine-grained correspondences such as attribute binding <cit.>, we additionally utilize BLIP-VQA <cit.> for attribute binding evaluation. Lastly, we conduct a human evaluation on both image fidelity and image-text alignment as in <cit.>. Datasets. We use the Concept Conjunction 500 (CC-500) dataset <cit.>, which consists of prompts that conjoin multiple subjects. We also evaluate on Dense MS-COCO <cit.> to measure the generalization capability of each method on more complex prompts. §.§ Results Qualitative Analysis. In <ref>, we present a comparative analysis of ConceptDiffusion against various baseline methods, using images generated from the CC-500 dataset <cit.> with the same random seeds. This comparison clearly illustrates that our method, ConceptDiffusion, achieves improved semantic alignment between the input prompts and the generated images. Although all the other baseline methods generally improve the performance as well, there are some notable drawbacks for each. For instance, Composable Diffusion <cit.> often blends subjects from the prompt in a literal sense. An example of this is seen with the prompt “a red book and a gold clock", for which it generates a red book with an embedded gold clock. Similarly, it generates a horse with a vase-like head for the prompt “a brown horse and a blue vase". Structure Diffusion <cit.>, on the other hand, struggles to generate all the subjects or correctly bind attributes when the images generated by Stable Diffusion are substantially different from the input prompt. For Attend-and-Excite <cit.>, although the method mostly generates all the subjects in the prompt, it tends to turn the images into paintings, as can be seen in the third and fifth columns. Furthermore, the arrangement of subjects sometimes appears unrealistic, such as a floating vase in the image for the prompt “a brown horse and a blue vase". ConceptDiffusion, in comparison, not only accurately synthesizes all subjects from the prompt but also ensures their natural composition within the image. For example, for the prompt “a brown bird and a blue car", our method produces an image of a bird perched on a car, which is a more realistic composition compared to that of Composable Diffusion. In addition, when the images generated by Stable Diffusion closely align with the prompt's semantics, as in the case of the prompt “a red book and a gold clock", our method applies minimal modifications, preserving the core elements of the image.
Conversely, ConceptDiffusion demonstrates its robustness in scenarios where the generated images of Stable Diffusion deviate significantly from the intended semantics of the prompt. For instance, with prompts like “a brown horse and a blue vase", ConceptDiffusion makes more extensive structural adjustments. This adaptability suggests the method's ability to tailor its modifications according to the degree of semantic misalignment. <ref> further demonstrates the capability of the baseline methods on more complex prompts. Many baseline models struggle to accurately represent prompts that involve intricate relationships between subjects or specific states. For the prompt “an orange cat taking a nap on top of a car", several baseline methods successfully generate images that include all the subjects, yet they fall short in depicting the spatial relationship “on top" or the state of the cat “taking a nap". In contrast, Concept Diffusion outperforms the baselines in reflecting these relationships and states. Quantitative Analysis. In <ref>, we quantify the performance of each method based on the quality of the generated images and the text-to-image alignment. To evaluate the quality of images generated by each method, we randomly select 6 and 10 seeds to generate images, amounting to 3000-5000 images for the CC-500 and Dense MS-COCO datasets, respectively. We then measure the FID against the MS-COCO 2017 validation dataset <cit.>. Our analysis revealed that baseline methods tend to degrade the quality of the generated images, as indicated by an increase in FID scores. Notably, Concept Diffusion improves the quality of the generated images on both datasets. The results from our text-to-image alignment evaluations clearly demonstrate that Concept Diffusion significantly enhances the semantic alignment between text prompts and the corresponding generated images. In terms of R-Precision scores, which measure the similarity of image and text features in a pre-trained model, our method outperforms the baseline models, achieving a substantial increase compared to Stable Diffusion. Additionally, when evaluated using BLIP-VQA, our method shows performance that is on par with Attend-and-Excite, although our method does not require explicit gradient-based optimization during inference. We further evaluate the semantic alignment with a user study. For each prompt, we generated images using the same random seed across different methods. Participants are asked to select the image they consider to have the best quality and the one that most closely followed the prompt. <ref> shows that users prefer our method over the baseline methods, especially on the complex Dense MS-COCO dataset, despite a lower BLIP-VQA score. This outcome suggests that Concept Diffusion potentially improves the performance in capturing the broader semantics of the prompt, as BLIP-VQA primarily assesses attribute binding. § CONCLUSION In this work, we introduce a novel approach designed to enhance the text-to-image synthesis process in diffusion models, notably without the need for fine-tuning or optimization. Our key finding is that semantic misalignment in the synthesis process can be effectively identified on-the-fly. This is achieved by measuring the cosine similarity between the noise estimate and the scores of the concepts extracted from the text prompt. Through extensive experimentation, we demonstrate that Concept Diffusion significantly improves text-to-image alignment, with the capability to capture complex semantics such as relationships between subjects.
This advancement underscores the effectiveness of our method in generating contextually coherent and visually accurate images from descriptive prompts. Limitations. Our method has several limitations, as illustrated in <ref>. First, our method often binds attributes of a subject to the background, as shown on the left. This is inherited from Stable Diffusion, which captures entangled semantics. Second, there are cases in which the model results in fusion or swapping of subjects, as in the examples in the middle and on the right. Future work may explore the isolation of attributes for more precise control of the noise estimate. § ADDITIONAL DETAILS §.§ Implementation In our experiments, we utilize the official Stable Diffusion v1.4 text-to-image model, integrated with the pre-trained text encoder CLIP ViT-L/14 <cit.>. We consistently apply a fixed guidance scale w_g of 7.5 across all experiments. To ensure consistency in our results, we use a fixed random seed to generate the same initial Gaussian map for each experiment and employ 50 timesteps with PLMS sampling <cit.> for the diffusion process. For parsing and identifying subject concepts from the prompts, we use the Stanza library <cit.>. Specifically, we extract noun phrases from the lowest level of the constituency tree as our subject concepts. We set the concept guidance scale w_c to 7.5 and the threshold η to 1/(n+1), where n is the number of subject concepts. The concept guidance is applied at every timestep t. §.§ Inference Time To assess the efficiency of the baseline methods in terms of inference time, we conduct an evaluation. We generate 10 images for each method using complex prompts from the Dense MS-COCO dataset <cit.>. The average inference time for each method is then calculated, with all tests conducted on a single RTX 3090 GPU. The results are shown in <ref>. Composable Diffusion, Structure Diffusion, and Concept Diffusion approximately triple the inference time. This increase is attributed to their reliance on running multiple diffusion processes. For Attend-and-Excite, despite utilizing only a single diffusion process, the method prolongs the inference time due to its iterative gradient-based optimization approach. §.§ User Study We conduct a user study involving 40 participants to assess the effectiveness of our method. In each task, participants are presented with a set of images that are generated using the same prompt and random seeds. The study is structured in two phases. Initially, participants are asked to select the image they consider to be of the highest quality, without knowledge of the prompt. Following this, the prompt is revealed, and they are then asked to choose the image that they believe best followed the prompt. To ensure unbiased responses, the order in which the images are presented is randomized in every round. <ref> shows the interface used for the user study. § ABLATION STUDY §.§ Abstract Concept In this subsection, we validate the use of the abstract concept along with the subject concepts. As discussed, steering the guidance direction towards the abstract concept helps prevent the fusion of subjects in the generated image. This is illustrated in <ref>, where we present images generated with and without the inclusion of the abstract concept in concept guidance for the prompt “a blue backpack and a brown sheep". The comparison distinctly shows that when the model relies solely on subject concepts, it successfully synthesizes images containing both a sheep and a backpack.
However, in these images, the subjects are merged together, leading to an unnatural fusion, such as a sheep with a head resembling a blue backpack. In contrast, the inclusion of the abstract concept results in a significant improvement. When the abstract concept is used, the model not only generates all the subjects but also clearly separates them. The resulting image distinctly features both the sheep and the backpack as separate entities, effectively illustrating the advantages of incorporating an abstract concept. §.§ Concept Guidance ScheduleThroughout the experiments, we use concept guidance at every timestep t when the similarity between the prompt score and each concept score is below the threshold η. Recent works have suggested that the spatial location of each subject is determined in the early denoising steps <cit.> and after most of the denoising has occurred, the diffusion model's self-attention layers play a large role, making the guidance less impactful <cit.>. Therefore, in this subsection, we closely examine when to apply concept guidance effectively.Specifically, we set a warm-up periods κ_w and a cool-down periods κ_c. The concept guidance scale w_c is set to 0 when κ_w < t or t > κ_c. <ref> illustrates the impact of these periods on image synthesis using the prompt “a brown bird and a blue bear". When a warm-up period is used, the model fails to fully synthesize all subjects.For instance, while the color of the bird changes from blue to brown, the bear is not present in the image. Conversely, when a cool-down period is used, we observe a degradation in image quality, such as the bird appearing with missing legs. Accordingly, our full Concept Diffusion method does not employ either the warm-up or cool-down period parameters. We determine that omitting these periods allows more consistent and effective application of concept guidance throughout the entire process of image generation.§ ADDITIONAL COMPARISONHere in <ref> and <ref>, we present additional qualitative comparison with baseline methods.
http://arxiv.org/abs/2312.15964v1
{ "authors": [ "Hyun Kang", "Dohae Lee", "Myungjin Shin", "In-Kwon Lee" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231226090217", "title": "Semantic Guidance Tuning for Text-To-Image Diffusion Models" }
0000-0002-2570-2186 Tohoku University, Japan [email protected] The environment is the time-honored way of making sense of free variables, used in programming language theory as well as when writing interpreters and some compilers. Algebraic effects give another way, as was pointed out already at HOPE 2017 <cit.>. Although a theoretical curiosity, it may have surprising practical benefits: a new way of writing compilers, with incremental type-checking, easy variable usage and leaf function analyses. This work-in-progress report prototypes and illustrates the idea. It also touches on a new way of thinking about functions. Free Variable as Effect, in Practice Oleg Kiselyov January 14, 2024 Questions: It looks like `depth-first' compilation. Depth-first compilation reminds of attribute grammars, doesn't it? Is the free(er) monad like a partially known data structure? Free extension (from Ohad). Another illustration of modularity: location tracking. § INTRODUCTION Whenever one writes an interpreter or a compiler, or studies logic, model theory, programming language theory – one soon has to face variables. The well-known way to deal with them is by introducing an environment, or variable assignment in logic. There is, however, another, more general approach, related to algebraic effects. One should not be too surprised. Algebraic effects originate from studying terms with variables, and equations on them: free algebras. The algebraic effect approach was already described at HOPE 2017 <cit.> and elaborated in <cit.>. It was explored and scaled up in the study of rigorous, realistic and interesting reasoning with effects <cit.>. Here we look at an unexpected practical side, in interpreting or compiling languages. It occurred to me as I was teaching a compiler class, developing a complete compiler to x86-64 assembly feature-by-feature, and in tagless-final style. The first benefit is the ability to evaluate intermediate expressions and report errors soon, before the whole program is parsed – hence reducing the amount of memory for intermediate data and improving latency. The approach also facilitates variable usage and leaf function analyses, indispensable in compilation. Returning to theory, we also look at a new meaning of functions.
We start with the simplest interpreter: the source language has only integers and addition.

exp: INT                  { int $1 }
   | exp PLUS exp         { add $1 $3 }
   | LPAREN exp RPAREN    { $2 }
;

The grammar defines the concrete syntax of the language. The semantic actions |int| and |add| are arranged in a separate module with the following signature, which, in effect, defines the abstract syntax.

module type LangInt = sig
  type repr                       (* representation type *)
  val int : int -> repr
  val add : repr -> repr -> repr
  type obs                        (* observation type *)
  val observe : repr -> obs
end

Here, |repr| is the domain of the interpretation. Here is one implementation of the signature.

module EvalInt = struct
  type dom  = int
  type repr = dom
  let int x = x
  let add x y = let s = x + y in printf "=> %d\n" s; s  (* print the intermediate result *)
  type obs = unit
  let observe x = string_of_int x |> print_endline
end

The value domain is |int| (OCaml integers), which is also the domain of interpretation |repr|. The function |observe| is invoked after the parsing is finished; it observes the |repr| value representing the result of the whole program, by printing it. We also made the interpreter print the (intermediate) results of each addition expression. As we shall see, it is a good diagnostic for the evaluation order.[as well as for memory requirements: deferring a computation needs memory to store what is to be computed later.] Let's add variables. We add the productions

| IDENT                      { var $1 }
| LET IDENT EQ exp IN exp    { let_ ($2,$4) $6 }

to the parser, and likewise extend the abstract syntax. By `extending' we mean creating a new version re-using the old code – in its already compiled form, and without any copy-pasting or editing.

module type LangLet = sig
  include LangInt
  type name = string
  val var  : name -> repr
  val let_ : name * repr -> repr -> repr
end

Its implementation also re-uses |EvalInt|, but re-defines all operations:[Here ≫ is left-to-right function composition]

module EvalEnv = struct
  type dom  = EvalInt.dom
  type name = string
  type env  = (name * dom) list
  type repr = env -> dom
  let ans : dom -> repr = fun v -> fun _env -> v
  let lift2 : (dom->dom->dom) -> (repr->repr->repr) =
    fun op e1 e2 -> fun env -> op (e1 env) (e2 env)
  let int = EvalInt.int >> ans
  let add = lift2 EvalInt.add
  let var : name -> repr = List.assoc
  let let_ : name * repr -> repr -> repr =
    fun (n,b) body -> fun env -> body ((n, b env) :: env)
  type obs = unit
  let init_env : env = []
  let observe x = x init_env |> EvalInt.observe
end

To handle variables, we introduce the variable environment |env| – here, the association list of variable names and their meanings – as explained in every textbook about interpreters. The domain of interpretation is now a function from |env| to the value domain, to which the earlier semantic functions are lifted. Again, the interpretation is completely standard and explained in every textbook on this topic. What the textbooks rarely point out is an undesirable change. In the original |EvalInt|, the (intermediate) result of a sub-expression is printed as soon as it is parsed. When we enter |"1+2+3+4"| we see the partial sums printed as soon as we hit `Enter'. |EvalInt| indeed works as the familiar desk calculator.[In fact, it makes a simpler and clearer example than the one in the ocamlyacc's reference manual.] |EvalEnv| is different: as we enter the same |"1+2+3+4"|, nothing is printed. It is only when we terminate the input and tell the parser the whole program is finished that we see the results. Whereas |EvalInt| interprets as the program is being parsed, |EvalEnv| does the real work (summation) only after the whole program has been parsed.
It is not hard to see why.The meaning of |1+2| in the |EvalInt| semantics is |3| (computed compositionally). In the |EvalEnv| semantics, the same expression has the meaninglet m12_env : EvalEnv.repr = fun env -> (fun _ -> 1) env + (fun _ -> 2) envwhich is a function. Its body is not evaluated until it receives the |env| argument – even if the argument is not needed. That argument, the initial environment, is passed by |observe| only when the entire program is parsed.One may notice that |m12_env| has the structure of the corresponding source expression |1+2| – obviously, sincethe meaning assignment is a homomorphism. The meaning is a function (closure) that references the meanings of |1| and |2|, which are also closures. In effect, |m12_env| is a parse tree of the source expression |1+2| – in a form of closures and taking hence more memory compared to a data structure. This parse tree is interpreted upon the final observation.What |EvalEnv| gained, however, is handling programs with variables like |(1+2)+x| – which cannot be evaluated until we receive the environment and look up the value of |x|. Still, the sub-expression |(1+2)| could be interpreted on the spot. How to make it happen? §.§ Variable as an Effect When dealing with expressions like |(1+2)+x|, we need to know what value corresponds to |x|. We can just ask. The meaning of an expression is then either an answer |A(v)|, or a question |Q(n,k)| about the value of the variable |n|, to be continued as |k|, perhaps asking further questions until the final answer. We hence introduce the following variable effect (which is the Free monad implementation of the Reader effect, and entirely standard):module VarEff = structtype name = stringtype 'd t = A of 'd | Q of name * ('d -> 'd t) let ans : 'd -> 'd t = fun v -> A vlet var : name -> 'd t = fun n -> Q(n,ans) let rec lift2 : ('d->'d->'d) -> ('d t -> 'd t -> 'd t) = fun op e1 e2 -> match (e1,e2) with | (A v1, A v2) -> A (op v1 v2) | (Q (n,k), e2) -> Q (n, (fun v -> lift2 op (k v) e2)) | (e1, Q (n,k)) -> Q (n, (fun v -> lift2 op e1 (k v))) let lift : ('d -> 'd t) -> ('d t -> 'd t) = ... let handle_var : ('d -> 'd t) -> (name -> 'd option) -> 'd t -> 'd t = ...let letv : (name * 'd) -> 'd t -> 'd t = fun (n,v) -> handle_var ans (function n' when n'=n -> Some v | _ -> None)let top_hand : 'd t -> 'd = function A v -> v endA binary operation on two expressions |lift2 op| checks to see if both operands have the answer. If so, the operation |op| can be performed right away. Otherwise, |lift2| propagates operand's questions. Eventually, the questions have to be answered, which is the job of a handler. The handler |handle_var| is the mapping/fold over the denotation (|'d t| tree). Its particular instance |letv| replies to questions only about the given name, propagating all others. The domain of interpretation is now |dom VarEff.t|, to which the semantic functions are lifted:module EvalEff = structmodule V = VarEfftype dom = EvalInt.domtype repr = dom V.t let int = EvalInt.int >> V.anslet add = V.lift2 EvalInt.addlet var = V.var let let_ : name * repr -> repr -> repr= fun (n,b) body -> V.lift (fun v -> V.letv (n,v) body) b type obs = unitlet observe x = V.top_hand x |> EvalInt.observe endAs expected, |let_| acts as a handler, answering questions about its bound variable, and propagating all other questions up. One may show, using the technique in <cit.>, that |EvalEff.repr| has the same equational theory as |EvalEnv.repr| – that is, |EvalEff| is extensionally equivalent to |EvalEnv|. 
Still, |"(1+2)+x"| and |"x+(1+2)+3"| now print the result of interpreting |1+2| right away, without waiting for the whole program to be parsed. Furthermore, when we enter the program

let y = let x = 1 + 2
        in x + x + 3 in
y + 1;;

we not only see |1+2| being evaluated right away, but also |x+x+3| being evaluated as soon as it has been parsed, at the end of the second line.[One can see that for themselves by compiling and running the code in the two corresponding directories of the accompanying code. The former implements the environment and the latter the effect semantics for variables.] Questions about local variables can therefore be answered quickly, without waiting for the whole program to be parsed.[However, straightened-out let-expressions are right-associated. Therefore, their parsing finishes only at the end of the program.] |EvalEff| offers further opportunities for optimization: if the body of a let-expression has |A v| as its interpretation (denotation) – that is, not a question – the body has not needed the value of the bound variable. We have hence come upon an easy way to determine the usage of bound variables, which is valuable in compilation, as we shall see in the next section. Also, if an expression contains a variable reference, like x+1, we do build a closure. However, a binding operator like let, acting as a handler, answers the question quickly, and so the expression can be evaluated locally, without waiting for the whole program. What does have to wait until the end are references to global symbols (but they can be put in the global, static environment) and references to undefined symbols, which will produce an error. This incremental evaluation works better if lets are nested, as in the program above (and likewise for let x = let y = ... in ... in ...), because the body of the inner let is terminated early and can be evaluated. Straightened-out let-expressions are right-associated, so to speak, so we cannot evaluate them till the end. The variable-as-effect approach scales up to functions – as was in effect shown already in <cit.>. Here are two sample programs[see the accompanying code.]

let x = 1 in
let fun f(y) = x + y in
let x = 2 in
f(2)

let x = 1 in
let fun f(y) = x + y in
let fun g(x) = f(x) in
g(2)

Since a variable dereference is an effect, to be handled by a dynamically enclosed handler, one may wonder if we are really implementing lexical rather than dynamic binding. As was shown already in <cit.> and elaborated in <cit.>, variable-dereference-as-effect does support lexical binding, with some work. Generally, a mechanism to capture the current dynamic environment is needed. The current implementation uses a simpler approach: handling the body of a function in the handling environment of its definition rather than of its invocation. Therefore, both sample programs evaluate to |3|. § COMPILATION The ability of |EvalEff| to evaluate as soon as possible, without waiting for the whole program to be parsed, is especially valuable in compilation, where it translates to reporting type and other errors early and reducing memory footprint. There is another benefit, hinted earlier: the ease of variable use analyses, which are needed for memory/register allocation. This section demonstrates both benefits. First, we turn our interpreter into a compiler, to Wasm. We change the interpretation domain from |int| to a sequence of Wasm instructions that leave the |int| result on the stack.
module EvalInt_wasm = struct type dom = Wasm.instr type repr = domlet int = Wasm.I32.const let add x y = Wasm.(exp [x; y; I32.add])type obs = unit let observe x =let open Wasm in wasm_module [func  result:I32 [x]] |> observe endWe rely on the module |Wasm|: tagless-final embedding ofWasm.[see the directoryin the accompanying code.] The new |EvalInt_wasm| is quite like |EvalInt|, structurally.It interprets |"1+2+3"| as:i32.const 1 i32.const 2 i32.add i32.const 3 i32.addJust as we lifted |EvalInt| to |EvalEff| in <ref>,we lift |Eval_wasm|; the result, to be called |Eval_var|, is |EvalEff| with |EvalInt| replaced with |Eval_wasm|. One may now compile programs with local variables; for example, let x=10+11 in 1+x+x+3produces:i32.const 1 i32.const 10 i32.const 11 i32.add i32.add i32.const 10 i32.const 11 i32.add i32.add i32.const 3 i32.addThe variable |x| turns out substituted with its bound expression: the let-binding got inlined. One should not be too surprised: after all, variables are like named `holes' in the domain, with let-expressions telling how to fill the holes. Such behavior of let-expressions – effecting sharing in the compiler rather than in the object code – is well-known in code generation <cit.>. To properly compile let-expressions, allocating storage (Wasm locals) for bound variables, we lift |Eval_var| one more time.[see, in particular,in that directory.] In other words, we generate Wasm with `holes', to be filled with the names of the allocated Wasm locals. The allocation is performed after a let-expression is compiled and the variable usage in its body is determined.Strictly speaking, the compilation becomes two-pass. However, the first pass generates as much Wasm code as possible. Local let-expressions can even be compiled entirely before the end of parsing of the whole program.The let-handler is particularly notable:let letv : name * dom -> repr -> repr= fun (n,v) b -> let cnt= ref 0 in (* usage count of n *) let vars = ref [] in(* other variables used *) let lkup = function | n' when n = n' -> incr cnt; Some (V.var n) | n' -> if List.mem n' !vars then () else vars := n' :: !vars; None in let ret res =if !cnt = 0 then V.ans res(* no need to allocate anything *)else if !cnt = 1 then V.ans (Eval_var.let_ (n,v) res) (* inline *)else (* request allocation, reporting n and the list of alive,hence conflicted variables *) in V.handle ret lkup bAs the handler answers questions about its bound variable, it counts them. At the end, it knows how many times the bound variable has been accessed. If zero, there is no need to allocate storage for the variable. (If the source language has no side effects, as ours currently, we may even skip compiling the bound expression). If the variable was used only once, we substitute it with the bound expression, using |Eval_var|'s let-machinery to do the substitution. Again, no storage allocation is needed.The letv-handler also watches for other variable requests, and learns of all free variables in its managed expression. Their list is reported to the allocator: these are conflicts, i.e., their storage must be disjoint. 
We thus obtain all the information (variable usage and conflicts) needed for storage allocation; see the source code for details. For example, the program

let x = 1 + 2 in
let y = x + 1 in
let z = y + x in
z + z + y

compiles to the following Wasm module

(module
  (func (export "start") (result i32)
    (local t_1 i32) (local t_2 i32)
    (i32.const 1) (i32.const 2) i32.add
    local.set t_1
    local.get t_1 (i32.const 1) i32.add
    local.set t_2
    local.get t_2 local.get t_1 i32.add
    local.set t_1
    local.get t_1 local.get t_1 i32.add local.get t_2 i32.add))

The variables |x| and |z| share the same Wasm local |t_1|. Let us add functions – for simplicity, second-class top-level functions whose bodies have no free variables aside from the arguments (since functions are second class, their names are distinct from ordinary variable names).[Compiling functions with `open bodies' is rather challenging: Wasm intentionally prohibits accessing locals from a different function. To use locals as much as possible we would need an extensive variable use analysis, which should be feasible in our approach. This is the topic for future work.] Here is an example:

let fun f(x) = x + 2 in
let fun g(x,y) = f(y) + x in
f(g(1,2))

Since functions may take several arguments, there comes the possibility of applying a function to a wrong number of arguments – which is a type error. We should report it at compilation time. The language with top-level second-class functions |Lang2Fun| is the extension of |LangLet| with function calls and function declarations:

module type Lang2Fun = sig
  include LangLet
  val call : name -> repr list -> repr
  type fundecl                       (* function declaration *)
  val defun : name * name list * repr -> fundecl
  type defns                         (* a sequence of fundecl *)
  val defn_empty : defns
  val defn_add   : defns -> fundecl -> defns
  type topform
  val top_exp : defns -> repr -> topform
  val topf_observe : topform -> obs
end

Here, |defun| interprets a declaration (the function name, the list of argument names and the function body) as |fundecl|. Since functions may only be declared at top-level and may not refer to outside variables, all function declarations have to appear at the beginning of the program, followed by the top-level expression (main program body) – which is what |topform| signifies. The compilation for function bindings and function calls is not much different from what we have seen for integer-type let-expressions. A question about a function name is answered with its type (i.e., arity) and the Wasm name[Function names may be re-defined but Wasm names are unique.] (needed to generate the Wasm call instruction). We refer to the accompanying code for details (see the corresponding directory). We have claimed that the effect semantics for variable and function names enables incremental type checking and the early reporting of errors. Let us see. First, consider the OCaml code:

let f(x) = x + 2
let g(y) = f(y,1) + y
f(g(1XXX

with two problems. On line 2 the function |f| is invoked with a wrong number of arguments. Then there is a parse error on line 3. Although it occurs later in the code, it and only it is reported by the OCaml compiler:

3 | f(g(1XXX
        ^^^^
Error: Invalid literal 1XXX

Indeed, an OCaml program must first be completely parsed, and only then type-checked. When writing or refactoring code, however, one would have liked to type check fragments (definitions) as soon as they are finished, before the whole program is completed. If the compiler shows an error on line 100, it does not mean the code on line 50 is flawless (as far as the compiler is concerned).
In contrast, if we submit similar code

let fun f(x) = x + 2 in
let fun g(y) = f(y,1) + y in
f(g(1XXXX

to our compiler, we get the compilation error about the first problem:

Function f requires 1 arguments but was invoked with 2

In fact, if we feed the code into the compiler line-by-line, we notice that the error is reported right after the second line is entered – before the third, ill-formed, line is even input. § CONCLUSIONS In the environment semantics the meaning of an expression is a function from the environment, which is opaque and cannot be examined. We cannot tell which variables in the environment have actually been used, and how many times. Algebraic effects make the denotation more observable: a handler can watch questions and find out which variables have been asked about, and how many times. Thus we obtain the variable usage analysis in the ordinary course of compilation, almost for free, so to speak. It remains to be seen how this promise holds for a real compiler for a realistic programming language. I intend to find it out by trying this technique out in the new installment of the compiler class (which is underway). The environment semantics, in contrast, is not very suitable for such a compiler: it converts the code into a big nest of closures, and starts doing type-checking, etc., only when it has read the whole file. Practically, it takes a lot of memory (for large expressions/programs) and defers errors: they cannot be reported soon, since we have to wait until we receive the environment, at the end of parsing. I thank Chung-Chieh Shan for helpful discussions, and the reviewers and participants of the HOPE 2023 workshop for many insightful comments. The benefit is especially big for functions: even second-class functions with nesting and free variables (such as functions in Pascal) are difficult to compile to Wasm, since there is no provision for one function to access local variables of another, even its parent. Variable usage analyses are crucial to find out which variables can be allocated locally and which have to be put in Wasm's memory. § NEW VIEW ON FUNCTIONS A function term |fun x -> e| has two roles: suspend the evaluation of e until the function is applied, and create a new variable x that can be used in e. These roles are separate. The second can be accomplished by an object binding <cit.>. Suppose |fun x -> e| just creates a binding. Then e proceeds evaluating; when we encounter x, we create a question, and that question is the function. So, type 'a -> 'b = 'a varname * 'b req: a function is a pair of a variable name (indexed by type) plus the function body, which may be a request for a variable. It is the application that is a handler. When the function is applied, the application checks the body. If it is a value already, nothing needs to be done: return that value (unless the value is a function value: we may still need to wrap the interpreter around!). If the body is a request for the value of the variable in the fst of the pair, answer the request and keep answering until the value is returned. What if we do want to suspend e? Make it a thunk. So, a function is a combination of creating a binding and thunking; but these two things can be separated. Curried functions and partial application are interesting cases. Things to check: |fun x -> fun y -> x + y|, |fun x -> fun y -> y + x|, |(fun u -> u 2 + u 3) ((fun x y -> x + y) 10)|.
http://arxiv.org/abs/2312.16446v1
{ "authors": [ "Oleg Kiselyov" ], "categories": [ "cs.PL", "D.3.4; D.3.1" ], "primary_category": "cs.PL", "published": "20231227072203", "title": "Free Variable as Effect, in Practice" }
Age of Information in Gossip Networks: A Friendly Introduction and Literature Survey Priyanka Kaswan, Purbesh Mitra, Arunabh Srivastava, Sennur Ulukus The authors are with the Department of Electrical and Computer Engineering at the University of Maryland, College Park, MD, 20742, USA. Emails: {pkaswan, pmitra, arunabh, ulukus}@umd.edu. January 14, 2024 Gossiping is a communication mechanism used for fast information dissemination in a network, where each node of the network randomly shares its information with the neighboring nodes. To characterize the notion of fastness in the context of gossip networks, age of information (AoI) is used as a timeliness metric. In this article, we summarize the recent works related to timely gossiping in a network. We start with the introduction of randomized gossip algorithms as an epidemic algorithm for database maintenance, and how the gossiping literature was later developed in the context of rumor spreading, message passing and distributed mean estimation. Then, we motivate the need for timely gossiping in applications such as source tracking and decentralized learning. We evaluate timeliness scaling of gossiping in various network topologies, such as fully connected, ring, grid, generalized ring, hierarchical, and sparse asymmetric networks. We discuss age-aware gossiping and the higher order moments of the age process. We also consider different variations of gossiping in networks, such as file slicing and network coding, reliable and unreliable sources, information mutation, different adversarial actions in gossiping, and energy harvesting sensors. Finally, we conclude this article with a few open problems and future directions in timely gossiping. § INTRODUCTION In this review article, we discuss goal-oriented applications of gossip networks for time-sensitive information. An example of such applications is an autonomous driving system, where timely communication with nearby connected devices, such as other cars in the vicinity, sensors, infrastructure and even smartphones, is crucial to accurately perform driving actions and avoid accidents. A parallel example is a smart factory environment with a human-robot-drone-camera collaborative system to safely and effectively perform manufacturing; see Fig. <ref>. Another application is remote surgery, where a doctor performs surgery on a patient using a remote surgical system, even though they are not physically present in the same location. The absence of real-time surgical data has to be taken into consideration to minimize the chance of inaccuracies in the surgical procedure. In such systems, there is a source, which has some time-varying information that is of interest to a user or a group of users. A user would like to track the time-varying information at the source as closely as possible in real-time to achieve its goal. However, there are network limitations, such as processing delays in the buffer queue, that prevent the user from tracking the source arbitrarily closely, even with the high data rates afforded by state-of-the-art communication systems.
Hence, it is crucial to optimize the communication parameters such that minimum staleness is maintained in the system. To achieve that, we need a metric that captures this staleness of the information at a user in a time-sensitive application. One such metric, proposed in the literature, is the age of information (AoI). For the latest information packet present at a user node at time t that has the generation time of u(t) at the source, the instantaneous AoI Δ(t) at the user is defined as Δ(t)=t-u(t). This simple metric essentially indicates how long ago a user's current packet was generated at the source. The AoI of a user increases at a unit rate as time progresses, until it receives a new packet from the source with a different generation time; see Fig. <ref>. Ideally, a user would want Δ(t)=0 for all t. However, this is not possible to achieve due to the limitations mentioned before. Therefore, it is desired to keep the AoI as low as possible. Since AoI is a time-dependent quantity, most literary works focus at optimizing either the time-average age, or peak-age, or some other statistical property of the age process at the user. Over an interval [0,T] with large T, the average age is defined asΔ=lim sup_T→∞1/T∫_0^TΔ(t) d t.Graphically, the time-average age is the area under the saw-tooth curve in Fig. <ref>, normalized by the interval of observation.The idea of AoI was first introduced in <cit.> in the context of vehicular networks and was generalized in the context of communication systems in <cit.>. Before going into the details of AoI formulation and its applications, it is crucial to motivate the use of AoI as a metric, and reason why we cannot rely on more traditional metrics such as delay and throughput that have been studied for decades in the literature <cit.>. To that end, <cit.> explains the novelty of AoI metric and its relevance to different time-sensitive applications. <cit.> explains this through an example, where in a vehicle, sensor measurements generated by various sensors are aggregated into a status update message and queued while they wait to be serviced by the car radio and transmitted to other cars. The radio interface is a simple first-come-first-served (FCFS) M/M/1 queue system with arrival rate of update packets as λ and service rate μ. <cit.> finds the server utilization ρ=λ/μ that minimizes the average age Δ for a fixed service rate μ, by varying the arrival rate λ. It turns out that the optimal age is achieved with ρ=0.53, i.e., the server remains idle about 47% of the time, such that the λ biases the server towards being busy only slightly more than idle. Clearly to maximize the traditional metric of throughput, we would desire ρ to be closer to 1. However, since we are keeping the server always busy, we cause the queue to be backlogged with status update messages, leading to messages getting very stale or outdated by the time of their delivery. On the other hand, to minimize the traditional metric of delay, we would want ρ to be close to 0, since a low update rate leading to an empty queue lets a message get serviced right away. However, in this case, while keeping the queue empty, the other cars do not receive updates from this vehicle frequently enough, causing them to have outdated information about this vehicle, (voice communications are examples of such low delay low throughput applications). This is explained pictorially in Fig. <ref>. In this figure, the sharp (vertical) decreases in the age denote the updates. 
Therefore, many such sharp decreases indicate a high update rate, hence a high throughput (as in Fig. <ref>). On the other hand, the height of the age right after an update has taken place denotes the delay that the update packet has experienced. Therefore, an age curve that decreases down to a small vertical height indicates a low delay (as in Fig. <ref>). Finally, the normalized area under the age curve denotes the average age. Therefore, a smaller area for the same duration indicates a low average age (as in Fig. <ref>). We observe that, generally, a low delay as in Fig. <ref> comes at the cost of low throughput, and a high throughput as in Fig. <ref> comes at the cost of high delay, and a medium-throughput medium-delay point as in Fig. <ref> yields a better age performance. Essentially, AoI is a complicated joint function of throughput and delay, and obtaining a good AoI outcome is about capturing a good trade-off point between jointly achievable throughput and delay. In recent years, various papers have explored age-optimal policies in a wide range of contexts, such as, queuing networks, energy harvesting systems, scheduling problems, UAV systems<cit.>, web crawling <cit.>, remote estimation <cit.>, and so on. For the purposes of this article, we focus on the age of information in the context of a gossip network, which sometimes is called, the age of gossip. Before we proceed to discuss the motivation of timeliness in gossiping in the next section, we take note of another metric: the version age of information <cit.>, useful for systems where update packets do not carry a generation timestamp. Instead, information packets generated at the source are marked by version numbers, that increment in steps of 1 each time the source gets a new update. If N_E(t) denotes the version number corresponding to the current state of the event and N_i(t) denotes the version number of the information about the event present at node i, then the instantaneous version age of information at node i is defined as X_i(t)=N_E(t)-N_i(t), where N_E(t) increments by one every time the event gets updated. Naturally, in this case, the user nodes wish to have access to the latest version present at the source, but unlike the traditional age of information which increases at unit rate, version age does not change at a user node in absence of a source update or a newer version update packet arrival at the user node. Additionally, the version age remains 0 when the source and the node both have the same information, whereas AoI keeps increasing linearly with time if the source does not transmit any new information, thus penalizing the network even when there is no actual necessity of transmission. Evidently, the version age is particularly useful where information changes as per some counting process, such as, Poisson renewals, whereas, AoI is more suitable for continuous-time information dynamics, such as, temperature data.In this review article, we mainly focus on AoI and its variant version age in the context of timely source tracking in a gossip network. First, we motivate the necessity of timeliness in a gossip network. We define the age of a gossip network and derive closed-form expressions for it. Then, we look into the growth of age as a function of the network size (age scaling) for different symmetrical gossip network structures, such as, fully connected, ring, grid, generalized ring, and hierarchical networks. We summarize the improvements based on different age-aware gossiping techniques. 
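To make the preceding definitions concrete, the short Python sketch below computes the time-average AoI directly from the saw-tooth geometry and then numerically recovers the ρ ≈ 0.53 server utilization mentioned above; the closed-form expression used in the last step, Δ = (1/μ)(1 + 1/ρ + ρ²/(1-ρ)) for the FCFS M/M/1 system, is the standard result reported in the early AoI literature, and all function and variable names here are our own illustrative choices rather than anything defined in this survey.

    import numpy as np

    def average_aoi(updates, T):
        """Time-average AoI over [0, T].
        updates: (delivery_time, generation_time) pairs, sorted by delivery time,
        keeping only packets fresher than the previously delivered one; the node
        is assumed to start at time 0 holding a packet with generation time 0."""
        area, t_prev, u_prev = 0.0, 0.0, 0.0
        for t, u in updates:
            # between deliveries the age ramps from (t_prev - u_prev) up to (t - u_prev)
            area += ((t - u_prev) ** 2 - (t_prev - u_prev) ** 2) / 2
            t_prev, u_prev = t, u
        area += ((T - u_prev) ** 2 - (t_prev - u_prev) ** 2) / 2
        return area / T

    # example: three updates, each delivered 2 time units after generation
    print(average_aoi([(3, 1), (6, 4), (9, 7)], T=10))

    # FCFS M/M/1 average age as a function of the utilization rho = lambda/mu
    mu = 1.0
    rho = np.linspace(0.01, 0.99, 9801)
    age = (1 / mu) * (1 + 1 / rho + rho ** 2 / (1 - rho))
    print(rho[np.argmin(age)])   # approximately 0.53

The same area computation applies to any update trace, regardless of the queueing discipline that produced it.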
We discuss higher order moments of the age process. We also discuss the fairness optimization in timely operation of sparse asymmetric gossip networks. Then, we summarize works related to several variations of timely gossiping, such as, gossiping with file slicing and network coding, reliable and unreliable sources, information mutation, jamming and timestomping adversary, and gossiping with an energy harvesting sensor. Finally, we list several open problems and possible future directions.§ TIMELINESS IN GOSSIP NETWORKSGossiping is a fast and distributed information sharing mechanism in a network, where each node of the network randomly communicates to its neighboring nodes and spreads information; see Fig. <ref>. Such gossip networks are commonly used for various resource-constrained, goal-oriented applications, where centralized scheduling does not offer simple scalable solutions. Examples of such applications include internet of things (IoT) networks <cit.>, dense sensor networks <cit.>, mobile ad-hoc networks <cit.>, content distribution networks <cit.>, autonomous driving networks <cit.>, decentralized learning networks <cit.>. In high risk applications, such as, autonomous driving, it is crucial to have efficient communication with other cars in the vicinity and many sensors of the car to avoid road accidents. Such networks with large scale connectivity call for simple gossip based algorithms, where nodes arbitrarily exchange information with their neighboring nodes while being oblivious to the overall dynamics of the system, thereby causing information to spread like a gossip or rumor. Indeed, hyper-connectivity among all humans and machines is one of the key promises of emerging sixth generation (6G) communication standards <cit.>, where gossip networks will play a pivotal role. In the literature, the idea of gossip protocols was first introduced in <cit.> as an epidemic algorithm for clearinghouse database maintenance. <cit.> shows that randomized gossip algorithms are efficient in ensuring that every update in the system is eventually reflected in all the databases, thus, maintaining a consistency in the network. In the subsequent studies <cit.>, the efficiency of gossip algorithms in rumor dissemination was analyzed. It is worth noting here that different works in the literature mention gossiping as peer-to-peer (P2P) or device-to-device (D2D) or machine-to-machine (M2M) communications, differing in semantics only. <cit.> shows that a single rumor can be spread to n nodes in O(log n) rounds. This order performance can be further extended to multiple rumors via network coding techniques <cit.>, which allow transmitting multiple pieces of information, encoded in a single message and decoding them intelligently to retrieve the desired information. <cit.> studies dissemination time of k messages in a large network of n nodes with gossip protocols based on random linear coding (RLC), random message selection (RMS), and sequential dissemination. <cit.> shows that RLC-based protocol has superior performance and has ck + O(√(k)log k log n) dissemination time in complete graphs. <cit.> further extends the result to arbitrary graphs. In <cit.>, a file is split into k pieces for faster dissemination in a large network of n nodes. <cit.> shows that a dissemination time of O(k+log n) is achievable by a hybrid piece selection protocol, named the INTERLEAVE protocol. In <cit.>, gossiping protocols for distributed mean estimation were analyzed. 
Such protocols are used for distributed multi-agent optimization and consensus problems <cit.>. The works mentioned so far consider the total dissemination time of a static message in the network as the performance metric. However, in real-time scenarios, data dynamics is time-dependent and asynchronous. In distributed databases such as Amazon DynamoDB <cit.> and Apache Cassandra <cit.>, the database nodes use gossip protocols to keep their information fresh. In Cassandra, cluster metadata at each node is stored in endpoint state, which tracks the version number or timestamp of the data. During a gossip exchange between two nodes, the version numbers of the data at the two nodes are compared, and the node with the older version discards its data, replacing it with the more up-to-date data of the other node. Thus, old information is often discarded by the nodes before it spreads to the whole network and only timely information is kept. Another application, which necessitates timely information dissemination in a network, is decentralized machine learning <cit.>. Unlike centralized learning, decentralized learning is a method where the training data for learning a model is distributed across different devices. In this method, the devices themselves train local models with their available data and communicate their own model to their neighboring devices for averaging or mixing after exponential time intervals (Poisson arrivals). The model training and the model mixing can run asynchronously, which makes the setting equivalent to a dynamic information gossip model. Model mixing is essential to convergence of the overall network, especially with non-i.i.d. distributed data. The memoryless property of the exponential distribution allows easier convergence analysis by discrete-time model formulation. The analysis in <cit.> shows that for guaranteed model convergence of a device, the version difference between the model available at the device and the global consensus version must be bounded by a constant and the rate of convergence becomes faster as the timeliness of the network improves.These examples show that the total dissemination time is not an adequate metric to study in timely dissemination of dynamically changing information in a network. Rather, some metric that can incorporate information freshness for dynamic data, such as age or version age, is more suitable.§ AGE OF GOSSIPMost works prior to <cit.> studied the age for specific network topologies such as a simple transmitter-receiver pair. However, the saw-tooth curves become complex for arbitrary network topologies, making their analysis difficult. <cit.> first demonstrated the application of stochastic hybrid systems (SHS) technique based on the model of <cit.> for the analysis of age processes. In the subsequent studies, the SHS characterization became crucial in analyzing age in gossip networks. §.§ Stochastic Hybrid System Modelling for Age AnalysisAn SHS is characterized by its state, which is partitioned into a discrete component q(t) ∈𝒬={0,1, …, m} that evolves as a jump process and a continuous component x(t)=[x_0(t) ⋯ x_n(t)] ∈ℝ^n+1 that evolves according to a stochastic differential equation. 
Given the discrete set 𝒬 and a k-dimensional vector z(t) of independent Brownian motion processes, the stochastic differential equation is expressed as ẋ=f(q, x, t)+g(q, x, t) ż, where f: 𝒬×ℝ^n+1×[0, ∞) →ℝ^n+1 and g : 𝒬×ℝ^n+1×[0, ∞) →ℝ^(n+1) × k are the mappings, and ℒ={0, …, ℓ_0-1} is a set of transitions, such that each ℓ∈ℒ corresponds to a discrete transition/reset map ϕ_ℓ: 𝒬×ℝ^n+1×[0, ∞) →𝒬×ℝ^n+1. The corresponding transition (q^', x^')=ϕ_ℓ(q, x, t) occurs with transition intensity λ^(ℓ)(q, x, t), λ^(ℓ): 𝒬×ℝ^n+1×[0, ∞) →[0, ∞). In other words, the probability that the ℓth transition occurs in the interval (t, t+d t] is λ^(ℓ)(q(t), x(t), t) d t. When the system is in discrete state (q, x(t)), it evolves following (<ref>); but if it transitions from q to q^', the continuous state can have a discontinuous jump from x to x^', as given in (<ref>). Hence, the resulting process x(t) has piecewise continuous sample paths. Due to the broad nature of the SHS model, describing the processes q(t) and x(t) can be intricate and challenging. The strategy proposed in <cit.> is to establish test functions ψ(q, x, t), with the expected value denoted as 𝔼[ψ(q(t), x(t), t)], that can be assessed utilizing the method outlined in <cit.>. In the subsequent section, we begin by demonstrating this approach, specifically for the version age of information metric. §.§ SHS for Version Age of Information We start with the exploitation of the SHS technique for the characterization of the average version age in arbitrary networks, as explained in <cit.>. Consider an arbitrary network topology with n user nodes 𝒩={1,2,…,n} and a source node 0, where the source gets updated with newer versions according to a λ_00 rate Poisson process, and all user nodes wish to have access to the latest possible version of this constantly updating information. The version age at node i is denoted by X_i(t), and node i sends update packets to node j as a λ_ij rate Poisson process. Since the source always has the latest version of the information, X_0(t)=0 at all times. Each time the source gets updated, the age of node i becomes X_i'(t)=X_i(t)+1. When node i receives a packet from node j, the age of node i becomes X_i'(t)=min{X_i(t),X_j(t)}, since node i keeps the fresher of the two packets and discards the staler one to improve its version age; see Fig. <ref>. We wish to compute lim_t →∞E[X_i(t)] by employing the SHS method. The continuous state of this SHS model is X(t)=[X_1(t),…,X_n(t)] ∈ℝ^n, which is a vector of the instantaneous ages at the n nodes. The convenience of the SHS-based version age characterization follows from the presence of a single discrete mode with the trivial stochastic differential equation Ẋ(t)=0_n, since the version age at the nodes does not change between transitions. In the gossip network, the set of transitions ℒ corresponds to the set of directed edges (i,j), such that node i sends updates to node j on this edge according to a Poisson process of rate λ_ij, with (0,0) denoting a source self-update, i.e., ℒ= {(0,0)}∪{(0,i):i ∈𝒩}∪{(i,j):i,j ∈𝒩}, where transition (i,j) resets the state X at time t to ϕ_i,j(X)=[X_1',…,X_n']∈ℝ^n post transition, such that X_k' = X_k+1 if i=0, j=0, k ∈𝒩; 0 if i=0, k=j ∈𝒩; min(X_i, X_j) if i ∈𝒩, k=j ∈𝒩; and X_k otherwise. Frequently in the remaining part of this article, we will define a test function of the form ψ:𝒬×ℝ^n× [0,∞) →ℝ that is time-invariant, i.e., its partial derivative with respect to t is ∂ψ(q,X,t)/∂ t=0, such that we are interested in finding its long-term expected value lim_t →∞𝔼[ψ(q(t),X(t),t)].
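To make the transition structure concrete, the following small sketch (our own illustration; the encoding of the source and the function name are assumptions, not notation from the cited works) applies the reset map ϕ_i,j to a vector of version ages:

def reset_map(X, i, j):
    # Apply the reset map phi_{i,j} to the version-age vector X (nodes are 0-indexed).
    # i == j == 'source' encodes the source self-update transition (0,0).
    X = list(X)
    if i == 'source' and j == 'source':          # the source generates a new version
        return [x + 1 for x in X]
    if i == 'source':                            # the source updates node j
        X[j] = 0
        return X
    X[j] = min(X[i], X[j])                       # node j keeps the fresher of the two packets
    return X

ages = [3, 1, 4]
print(reset_map(ages, 'source', 'source'))   # [4, 2, 5]
print(reset_map(ages, 'source', 2))          # [3, 1, 0]
print(reset_map(ages, 1, 0))                 # [1, 1, 4]

Long-term expected values of functions of these ages, i.e., of the test functions just defined, are what the SHS machinery extracts.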
Since the test function only depends on the continuous state values X due to time-invariance and the single discrete mode, for simplicity, we will drop the inputs q and t and write ψ(q,X,t) as ψ(X). The test function ψ(𝐗(t)) has an extended generator (L ψ)(𝐗(t)) that satisfies Dynkin's formula, d E[ψ(𝐗(t))]/d t=E[(L ψ)(𝐗(t))]. Given the trivial stochastic differential equation Ẋ(t)=0_n and the time-invariance of the test function in our case, the extended generator of the SHS for the version age is given by <cit.>, (L ψ_S)(𝐗)=∑_(i, j) ∈ℒλ_i j[ψ_S(ϕ_i, j(𝐗))-ψ_S(𝐗)]. To compute the expected version age at the nodes, <cit.> defines a test function ψ_S(𝐗)=X_S=min_j∈ S X_j, for a set of nodes S ⊆𝒩, which induces the process ψ_S(𝐗(t))=X_S(t)=min_j∈ S X_j(t). The effect on the test function of transition (i, j) is ψ_S(ϕ_i, j(𝐗))=X_S^'=min_k ∈ S X_k^', which, in accordance with (<ref>), changes as follows: X_S' = X_S+1 if i=0, j=0; 0 if i=0, j ∈ S; X_S∪{i} if i ∈ N(S), j ∈ S; and X_S otherwise, where N(S) is the set of updating neighbors of S, N(S)= {i∈𝒩\ S: λ_i(S)=∑_j ∈ Sλ_ij>0 }. The extended generator, therefore, becomes (L ψ_S)(𝐗)= λ_00(X_S+1-X_S)+∑_j ∈ Sλ_0 j[0-X_S] +∑_i ∈ N(S)∑_j ∈ Sλ_i j[X_S ∪{i}-X_S]. With the definitions 𝔼[X_S(t)]=v_S(t) and v_S=lim_t →∞ v_S(t), substituting (<ref>) into (<ref>) gives v̇_S(t)= λ_00 - v_S(t)λ_0(S)+ ∑_i∈ N(S)λ_i(S)[v_S ∪{i}(t)-v_S(t)]. As t tends to ∞, setting v̇_S(t)=0 yields v_S=(λ_00+∑_i ∈ N(S)λ_i(S) v_S∪{i})/(λ_0(S)+∑_i ∈ N(S)λ_i(S)). Equation (<ref>) carries significant importance in the subsequent studies on timely gossip networks. (<ref>) expresses the version age of a set of size |S| in terms of the ages of sets of size |S|+1 by incorporating the neighboring nodes of set S one by one. (<ref>) also allows us to compute the expected version age in very large networks of certain types of topologies, as we will see in the remaining sections of this article. Specifically, in the symmetric fully connected network and the symmetric ring network, by exploiting the symmetry, the equation will reduce to a summation in the case of fully connected networks and a Riemann integral in the case of ring networks. Before moving on to the next section, we would like to remark that in the case of the age of information instead of the version age of information, it has been shown in <cit.> that for time-invariant test functions, the extended generator counterpart of (<ref>) is (L ψ_S)(𝐗)=1+ ∑_(i, j) ∈ℒλ_i j[ψ_S(ϕ_i, j(𝐗))-ψ_S(𝐗)]. Using similar steps as before, (<ref>) results in the formula for the average age of information for a set S as v_S=(1+∑_i ∈ N(S)λ_i(S) v_S∪{i})/(λ_0(S)+∑_i ∈ N(S)λ_i(S)). Note that the formula in (<ref>) can be obtained from (<ref>) if we set λ_00=1. Thus, for the rest of the article, the results obtained for the version age of information metric can be readily applied to the traditional age of information metric by simply setting the source self-update rate as λ_00=1. § AGE SCALING FOR SIMPLE NETWORK TOPOLOGIES In order to find the version age scaling for simple networks, we can use the recursive equations. However, the number of such equations that we need to solve recursively is exponential. In the following toy example, we demonstrate how to compute the expected ages at all nodes for a small network. Consider the network in Fig. <ref>. Here, the source sends updates to nodes 1 and 3; there is no direct communication link between nodes 1 and 3; and nodes 1, 2 and 2, 3 gossip.
From (<ref>), we have v_{1,2,3} = λ_e/(λ_s1+λ_s3). Using this, we can find the version ages of the two-node sets as v_{1,2} = (λ_e + λ_32 v_{1,2,3})/(λ_s1 + λ_32) and v_{2,3} = (λ_e + λ_12 v_{1,2,3})/(λ_s3 + λ_12). Then, we can find the version ages of the single nodes as v_1= (λ_e + λ_21 v_{1,2})/(λ_s1 + λ_21), v_2= (λ_e + λ_12 v_{1,2} + λ_32 v_{2,3})/(λ_12+λ_32), and v_3= (λ_e + λ_23 v_{2,3})/(λ_s3 + λ_23). That is, we calculate the version age of each node in the network using the recursive equation and by forming sets that are one-larger. For example, to calculate the age of node 2, in (<ref>), we write the age of set {2} in terms of the ages of the one-larger sets {1,2} and {2,3}. Then, to find the ages of these two-node sets, in (<ref>) and (<ref>), we write their ages in terms of the age of the one-larger set {1, 2, 3}. Since there is no set one-larger than {1, 2, 3}, the recursive equation (<ref>) directly gives its age as in (<ref>). Now, (<ref>) is inserted into (<ref>) and (<ref>), which are in turn inserted into (<ref>) to find the age of node 2. Note also, interestingly, that we did not evaluate v_{1,3} here, since it does not appear in the calculation of the version age of any node. Note that, in general, we may have as many equations as the number of subsets of the n nodes, which is exponential in n. In order to simplify the calculations for larger networks, we need to exploit the geometry of the network to reduce the number of recursive equations and simplify the calculations. The number of equations can be reduced to O(n) in the special cases of the bidirectional ring network and the fully connected network. §.§ Age Scaling for a Fully Connected Network The fully connected network, as shown in Fig. <ref>, has the highest density of connections, with each node sharing an edge with every other node in the network, as analyzed in <cit.>. Since the total gossip rate of each node is λ, each node gossips with every other node with rate λ/(n-1). In order to find the version age of a single node in the network, we first notice that the version ages of two sets of the same size are the same, and hence we can write the version age of a set S as v_S = v_|S|. Hence, we obtain a sequence of version ages v_1,v_2, …, v_n, where v_n = λ_e/λ. Then, the recursion in (<ref>), expressed only in terms of the subset size j, is given by (see also <cit.>) v_j = (λ_e + j(n-j)λ/(n-1) v_j+1)/(jλ/n + j(n-j)λ/(n-1)). In principle, one can start from v_n = λ_e/λ, and work backwards over j=n-1, n-2, …, 2, 1 utilizing (<ref>) to obtain the age of a single node v_1 exactly for any given network size n and update rates λ_e and λ. To find a closed-form order-wise expression for the age in terms of the network size n, we can write upper and lower bounds for each recursion in (<ref>) in terms of the sum of reciprocals of integers, ∑_k=1^i 1/k, and obtain the following upper and lower bounds for the age of a single node in the fully connected (FC) network, λ_e/λ[(n-1)/n∑_k=1^n-1 1/k + 1/n] ≤ v_1^FC≤λ_e/λ∑_k=1^n 1/k, from which we can conclude that the version age scaling for a single node in the fully connected network is O(logn), as the sum of 1/k for k from 1 to n grows as log n. §.§ Age Scaling for a Bidirectional Ring Network The bidirectional ring network, as shown in Fig. <ref>, is arranged in the form of a ring and each node communicates with its two neighbors, one on each side, as analyzed in <cit.> and <cit.>. The total rate of gossip for each node is λ, which it divides equally into λ/2 and λ/2 to gossip with each of its two neighbors.
Similar to the case of fully connected network, here also, due to the symmetry of the network, the version age of a contiguous set of nodes depends only on the size of the set and not its position. We know that the version age of 𝒩, v_n = λ_e/λ. Then, the recursion in (<ref>), expressed only in terms of the contiguous subset size j is given by (see also <cit.>),v_j = λ_e + λ v_j+1/jλ/n + λ.Again, in principle, one can start from v_n = λ_e/λ, and work backwards j=n-1, n-2, …, 2, 1 utilizing (<ref>) to obtain the age of a single node v_1 in a bidirectional ring exactly. To find a closed-form order-wise expression for the age in terms of the network size n, we can write upper and lower bounds for each recursion in (<ref>) in terms of sum of products terms. In this way, we find that the version age of a single node in the bidirectional ring network scales as follows,v_1 ≈λ_e/λ∑_i=1^n-1∏_j=1^i1/1+j/n.This can be written as an integral using a Riemann sum approximation via a step size of 1/√(n), yielding the result1/√(n)∑_i=1^n-1∏_j=1^i 1/1+j/n≈∫_0^∞ e^-t^2/2dt = √(π/2),from which we have v_1 ≈λ_e/λ√(π/2)√(n). Thus, we can conclude that the version age scaling for a single node in a bidirectional ring network is v_1=O(√(n)).Thus, the version age in fully connected network and ring network, which represent the two extremes of the connectivity spectrum, scales as O(logn) and O(√(n)), respectively. Since both networks have the same update rates and consume similar bandwidth, we conclude that better connectivity leads to lower age, lower staleness, hence higher freshness, in the network. § AGE UPPER BOUND FOR COMPLEX NETWORKSIn this section and the next, we analyze the version age scaling for more complex gossiping networks when compared to the bidirectional ring and the fully connected network. In the earlier cases studied so far, the number of recursive equations could be reduced from exponential to linear in the number of nodes in the network by exploiting the symmetry in the network; essentially, by using the fact that the age of a subset depends only on the size of the subset in those networks. This, however, is not possible for more complex networks, since arbitrary connections lead to many random connected sets, and age of a subset not only depends on the size of the subset but also on the specific shape of the subset, i.e., that no consolidation is possible only to the size in these cases.Hence, in order to analyze version age scaling in such networks, we need to modify the recursive equations proposed in <cit.>, and use the geometry of these networks to find tight upper bounds. Specifically, we see that the version age of a set depends on the sets that are one-larger by including a neighbor of the set at each time. This evolution of the recursion is the reason for the exponential number of equations. Instead, we can write an upper bound for v_S using only one-expanded set that has the highest version age among all one-expanded sets, and the number of neighbors of S, denoted by N(S). This can also be modified to depend on the one-expanded set with the highest version age and the number of incoming edges (edges emanating at neighbors of S and ending in a node in S) of set S. 
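As an aside before developing these bounds, the exact recursions of the previous section are easy to evaluate mechanically; the following sketch (our own illustration, not code from the cited works; the unit rates and network sizes are arbitrary choices) reproduces the three-node toy example and the fully connected and bidirectional ring recursions:

from functools import lru_cache

def version_ages(n, lam_e, lam_src, lam_gossip):
    # Exact per-node version age for an arbitrary gossip network via the recursion on v_S.
    # lam_src[i]: source-to-node-i rate; lam_gossip[i][j]: gossip rate from node i to node j.
    # Complexity is exponential in n, matching the exponential number of recursive equations.
    @lru_cache(maxsize=None)
    def v(S):
        lam0 = sum(lam_src[i] for i in S)
        incoming = {i: sum(lam_gossip[i][j] for j in S)
                    for i in range(n) if i not in S}
        incoming = {i: r for i, r in incoming.items() if r > 0}
        num = lam_e + sum(r * v(S | frozenset([i])) for i, r in incoming.items())
        return num / (lam0 + sum(incoming.values()))
    return [v(frozenset([i])) for i in range(n)]

def v1_fully_connected(n, lam_e, lam):
    # Backward evaluation of the fully connected recursion, starting from v_n = lam_e / lam.
    v = lam_e / lam
    for j in range(n - 1, 0, -1):
        gossip = j * (n - j) * lam / (n - 1)
        v = (lam_e + gossip * v) / (j * lam / n + gossip)
    return v

def v1_ring(n, lam_e, lam):
    # Backward evaluation of the bidirectional ring recursion, starting from v_n = lam_e / lam.
    v = lam_e / lam
    for j in range(n - 1, 0, -1):
        v = (lam_e + lam * v) / (j * lam / n + lam)
    return v

# Toy example (nodes 0, 1, 2 below correspond to nodes 1, 2, 3 in the text), all rates set to 1:
lam_src = [1.0, 0.0, 1.0]                       # the source updates nodes 1 and 3 only
lam_gossip = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # gossip links 1-2 and 2-3
print([round(x, 3) for x in version_ages(3, 1.0, lam_src, lam_gossip)])
for n in (16, 64, 256, 1024):
    print(n, round(v1_fully_connected(n, 1.0, 1.0), 2), round(v1_ring(n, 1.0, 1.0), 2))

The printed values grow roughly like log n for the fully connected network and like √(n) for the ring, matching the scalings derived above. We now return to the bounding technique for more complex topologies.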
The method for bounding using N(S) starts by rearranging (<ref>) as follows, λ_00 = λ_0(S)v_S + ∑_i ∈ N(S)λ_i(S)(v_S - v_S ∪{i}). Next, we lower bound the sum on the right hand side as λ_00 ≥λ_0(S)v_S + |N(S)|min_i ∈ N(S)λ_i(S)(v_S - v_S ∪{i})≥λ_0(S)v_S + |N(S)|min_i∈ N(S)λ_i(S) min_i ∈ N(S)(v_S - v_S ∪{i}) = λ_0(S)v_S + |N(S)|min_i ∈ N(S)λ_i(S)(v_S -max_i ∈ N(S) v_S ∪{i}). After rearranging the terms, we get an upper bound on v_S, v_S ≤(λ_e + |N(S)| min_i∈ N(S)λ_i(S)max_i∈ N(S)v_S ∪{i})/(λ_0(S) + |N(S)|min_i∈ N(S)λ_i(S)). The geometric parameters can be further lower bounded using the geometry of the network. The lower bound can depend on many parameters of the set. In order to reduce the number of equations from exponential to linear, we find the lower bound for the geometric parameters in terms of the size of the set. This may or may not result in a tight lower bound, depending on the generality of the networks. If the set of networks is too general, then the version age bounds are loose. This is because, in order to satisfy the geometric constraints for all networks in the set, we need to take into account sets with very bad connectivity. As an example, the set of all d-regular graphs <cit.> can be analyzed in this way, but the bounds do not provide insightful information about the real version age scaling of the graphs. On the other hand, we are able to analyze two classes of networks, the two-dimensional grid network (Fig. <ref>) and the generalized ring network (Fig. <ref>), and find tight upper bounds. §.§ Age Scaling for a Grid Network In <cit.>, we find the version age scaling for the two-dimensional grid network that has n nodes, and hence is a √(n)×√(n) lattice. Fig. <ref> shows an illustration of the grid network. Unlike the ring and fully connected networks, the version age of a subset of the grid network depends not only on the size of the set, but also on its shape. The number of sets that have a fixed number of nodes increases rapidly. As an example, Fig. <ref> shows all subsets of the grid network that contain 5 nodes. Hence, directly applying the recursion in (<ref>) is not feasible, since the number of equations grows rapidly as the size of the sets in the grid network increases. Instead, we use a lower bound on the number of incoming edges, depending on the size of the set, to find the upper bound for the recursion. In <cit.>, it was found that the number of incoming edges of a set of j nodes is lower bounded by 2⌈ 2√(j)⌉ in an infinite grid. In order to translate this result to our finite grid network, we need to consider the boundary effects. We see that the set with the least number of incoming edges changes as the number of nodes in the set increases. Hence, we derive a common lower bound to write the recursive upper bound equations. Then, we solve these equations and find that the version age scaling in the grid network is O(n^1/3). In comparison to the ring network, we see that the age scaling has improved significantly by adding only two extra connections per node. This is because the geometry of the grid network results in an O(√(n)) diameter of the network, whereas the diameter of the ring is O(n). This facilitates faster transfer of information with gossiping. The grid network can also be considered from the perspective of conjoined networks. It is well known that conjoining two networks disseminating the same information speeds up the information diffusion <cit.>.
For example, suppose there is a group of friends living in the same city, who meet each other regularly. They want to be up-to-date about the happenings in each other's lives. We see that they will receive updates about their friends faster if they were connected to each other on a social media platform and also met each other in-person, rather than doing just one of the two. This is an interesting way with which we can also look at the grid network as a conjoined network of two line networks (the equivalence of version age scaling in line and ring networks was shown in <cit.>), as shown in Fig. <ref>, which improves the version age of the network from O(n^1/2) to O(n^1/3). §.§ Age Scaling for a Generalized Ring NetworkIn <cit.>, we find the version age scaling for a network which is placed in a ring formation and each node now has f(n) nodes on both sides, i.e., 2f(n) nodes in total, to gossip with. Fig. <ref> shows an illustration of the generalized ring network. We analyze the network as f(n) is varied, 1 ≤ f(n) < n/2. We use the number of incoming edges to write the recursive upper bound equations. It was shown in <cit.> that for sets of a fixed size with the least number of incoming edges is the set of contiguous nodes. Using this, lower bounds for the number of incoming edges were found for all values of j. The recursive upper bound equations are then solved to find the version age scaling of the generalized ring (GR) network as,v_1^GR = O(logf(n) + √(n/f(n))). We note that for two special cases studied already, i.e., when f(n) = 1, in which case the network becomes a ring network, and when f(n) = n/2, in which case the network becomes a fully connected network, (<ref>) reduces to their corresponding age expressions. In particular, (<ref>)reduces to O(√(n)) in the first case, and to O(logn) in the second case. Further, if f(n) is any positive constant, which means that each node in the generalized ring network has a fixed number of neighboring nodes, then the version age still scales as O(√(n)) like a simple ring. Finally, if f(n) = n^α, 0<α<1, i.e., f(n) is a rational function, which covers a large range of functions between the two extremes of connectivity discussed in the first two cases, then the version age scales as O(n^1-α/2). As a simple example, if α=1/3, i.e., f(n) = n^1/3, then the age scales as O(n^1/3).This work along with <cit.>, also allows us to analyze the dependence of the version age scaling for networks with a wide range of diameters and geometries. We note that the diameter of the generalized ring network is O(n/f(n)). Hence, in order to have the same version age scaling as the grid network, we need f(n) = n^1/3. In other words, each node needs to be connected to n^1/3 nodes on each side in order to achieve the same version age scaling as the grid network, in which each node needs to be connected only to 4 neighbors. From this, we can conclude that the version age scaling heavily depends on the geometry of the network.The dependence of version age on the diameter of a generalized ring network can also be analyzed. In order to achieve a poly-logarithmic scaling in a generalized ring, we need the diameter to be only a poly-logarithmic gap away from n. When f(n) is a rational function of n, i.e., f(n) = n^α, 0<α<1, then the version age scaling of the network becomes O(αlogn + n^1-α/2). As n →∞, the second term, being a super-logarithmic function, dominates the first term. However, when α is large the second term grows very slowly, even slower than logn. 
Hence, in order for us to say that the scaling is of the order of the second term, the number of nodes in the network need to be very large. In the range 0.6 ≤α < 1, we need the network to be in excess of a billion nodes for the version age scaling to be according to the second term. Since networks of this size do not occur in the real world, the version age scaling in this range for α can be considered to be logarithmic for all practical purposes.§ CLUSTERED NETWORKSIn this section, we describe networks that have hierarchical clustering and find the version age scaling. <cit.> investigated whether creating hierarchical networks can improve the version age scaling of nodes in the network. An example of this network is shown in Fig. <ref>. Clustered gossip networks have the usual source generating updates, cluster heads that get updated directly by the source and clusters consisting of nodes that are updated only by the respective cluster heads as a combined rate λ_c Poisson process. Using a similar SHS calculation as in Section <ref>, we can obtain the following recursive equations,v_S = λ_e + λ_c(S)v_c + ∑_i ∈ N_c(S)λ_i(S)v_S ∪ i/λ_c(S) + ∑_i ∈ N_c(S)λ_i(S). Along with these equations and the assumption that all the clusters have the same number of nodes and the same connectivity, we are able to find the version age scaling for several networks. There are two cases depending on whether the cluster heads themselves are connected to each other or not. If the nodes are connected to each other in a fully connected network, and the cluster heads are not connected to each other, then the version age scaling is still O(logn). Hence, clustering does not improve the version age scaling for the highest possible connectivity beyond log n; however, clustering reduces the connectivity requirements while achieving the same log n scaling (that is, nodes are fully connected only within the clusters, and disconnected across the clusters). On the other hand, for the ring and disconnected networks, the version age scaling improves from O(n^1/2) to O(n^1/3), and from O(n) to O(n^1/2), respectively. Moreover, if the cluster heads are further connected in a ring network, then the version age scaling is further improved to O(n^1/3) in a disconnected cluster, and to O(n^1/4) in clustered ring networks. There is no improvement due to connection of cluster heads for clustered fully connected networks. Finally, if we have h layers of hierarchy of rings, then the version age of each node scales as O(n^1/2h). § AGE-AWARE GOSSIPINGIn this section, we discuss a gossiping scheme for an age-aware, fully connected network. As discussed in Section <ref>, in a fully connected network, the average version age scales as O(log n). Even though this is better than the age scaling in all other lesser-connected networks (e.g., the ring network where age scaling is O(√(n))), it still grows as the network size grows. Here, we show that the age scaling can be brought down to O(1), i.e., something that does not grow with the network size, if the nodes in the network are age-aware, i.e., the nodes can estimate their own version ages. This age-awareness can be easily achieved by receiving a feedback signal from the source when the source updates itself. This version update information about the source, available at the network, can be leveraged for intelligent allocation of the overall gossip capacity. 
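As a baseline for the age-aware schemes described next, the following event-driven sketch (our own illustration; the rates, horizon, and seed are arbitrary choices) simulates uniform gossiping in a fully connected network and estimates the long-run average version age per node, which grows roughly like log n, consistent with the scaling derived earlier:

import random

def avg_version_age_uniform_fc(n, lam_e=1.0, lam=1.0, horizon=20000.0, seed=1):
    # Average version age per node under uniform gossiping in a fully connected network.
    rng = random.Random(seed)
    v_src, v = 0, [0] * n                   # versions held by the source and by the n nodes
    rate_src, rate_gossip = lam, n * lam    # total source-to-network rate and total gossip rate
    total = lam_e + rate_src + rate_gossip
    t, area, sum_age = 0.0, 0.0, 0          # sum_age tracks the summed version age of all nodes
    while t < horizon:
        dt = rng.expovariate(total)
        area += dt * sum_age
        t += dt
        u = rng.random() * total
        if u < lam_e:                        # the source obtains a new version: every node ages by 1
            v_src += 1
            sum_age += n
        elif u < lam_e + rate_src:           # the source updates a uniformly chosen node
            j = rng.randrange(n)
            sum_age -= v_src - v[j]
            v[j] = v_src
        else:                                # a uniformly chosen node gossips to another node
            i = rng.randrange(n)
            j = rng.randrange(n - 1)
            j = j + 1 if j >= i else j
            if v[i] > v[j]:                  # the receiver keeps the fresher of the two versions
                sum_age -= v[i] - v[j]
                v[j] = v[i]
    return area / (t * n)

for n in (8, 32, 128):
    print(n, round(avg_version_age_uniform_fc(n), 2))

In such a run, a large share of gossip transmissions carries versions no fresher than what the receiver already holds, which is exactly the wasted capacity that the age-aware schemes below reallocate.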
The uniform gossiping setting, described in the previous sections, where all network nodes have equal update capacity λ, allows nodes with relatively fresh information and nodes with relatively stale information to gossip with equal rate. Thus, a significant portion of the overall gossip capacity B=nλ is wasted in communications that do not bring down the age of any node. Intuitively, if there is a mechanism such that the gossip capacity of the relatively higher age nodes can be shifted to the relatively fresh nodes, it should improve the overall timeliness of the network. The key challenge lies in implementing such a mechanism in a distributed manner, i.e., a mechanism where the nodes in the network will intelligently figure out the freshest nodes in the network at any particular time, and assign all the update capacities to them.To that end, age sense update multiple access network (ASUMAN) gossiping scheme was proposed in <cit.>. In this scheme, the age sensing nodes use the source self-update process as a method to synchronize themselves. When the source has a new version of information at time T_k, the nodes stop gossiping and wait for a time proportional to their own age. This time interval is called the age sensing phase of the network. After this waiting time, each node sends a small pilot signal alerting their neighbors and start gossiping. The number of minimum age nodes can be estimated from the number of received pilot signals. Since the nodes with the minimum age at time T_k have to wait for the least amount of time, only those nodes continue gossiping with rate B/|ℳ_k|, where ℳ_k is the set of minimum age nodes at time T_k. On the other hand, the rest of the nodes 𝒩\ℳ_k in the network remain silent up to time T_k+1; see Fig. <ref>.The ASUMAN scheme yields an opportunistic mechanism, where the minimum age nodes are selected as the gossiping leaders. The average version age of the minimum age nodes, denoted as X(t), is independent of the network size n, and can be expressed aslim_t→∞𝔼[X(t)]=λ_e+λ/λ.Therefore, the ASUMAN scheme can be interpreted as the minimum age nodes, with average age obtained in (<ref>), gossiping in the fully connected network with full capacity nλ. Hence, using SHS analysis, <cit.> shows that the average age of the ith node becomeslim_t→∞𝔼[X_i(t)] =λ_e/λ(1+nλ/n-1(1/λ+1/λ_e))/(1/n+n/n-1)2λ_e/λ+1. The analysis in <cit.> shows that the ASUMAN scheme can be extended to networks with fractional connectivity, i.e., each node is connected to a fraction of n-1 nodes, or with networks where the source does not update all the nodes with equal rate. Additionally, using hierarchical structures with O(n) connectivity among nodes can yield O(1) age performance, although the upper bound is worse than that of the fully connected network. In this way, the trade-off between connectivity and age performance can be maintained. Importantly, <cit.> shows that ASUMAN does not produce good age performance for networks with finite O(1) connectivity, such as ring, two-dimensional grid, etc., thus, sufficiently rich connectivity is needed to reap the benefits of opportunism.The optimality of any gossiping scheme was shown in <cit.>, which proved a fundamental limit of age performance for gossiping with the total capacity of B=nλ in a symmetric fully connected network. This is achieved by a semi-distributed scheme which lets the node with the minimum age at any time to gossip with full capacity. When the source updates a node, it becomes the minimum age node of the network. 
It sends a pilot signal in the network to alert the other nodes and starts gossiping. The gossiping continues until it receives a signal from any other node, which is the new minimum age node. The SHS analysis in <cit.> shows that this scheme is optimal among all possible schemes with gossip capacity constraint B=nλ and the achieved age performance is lim_t→∞𝔼[X_i(t)]=λ_e/λ(1+n/n-1/1/n+n/n-1)2λ_e/λ. Additionally, <cit.> proposed a fully-distributed gossiping scheme that does not require any implicit coordination mechanisms, such as pilot signal transmissions in the network. In this scheme, the freshest node just gossips for a finite time duration of 1/λ and achieves an age performance of (1+e)λ_e/λ for large n. We remark that although all these different schemes have different age performances, since they all scale as O(1), the scaling results for different network topologies, such as fractional connectivity, finite connectivity, hierarchical connectivity, and sublinear connectivity, derived for the ASUMAN scheme in <cit.> also apply to the semi- and fully-distributed schemes. § HIGHER ORDER MOMENTS OF THE AGE PROCESS Reference <cit.> considers the higher order moments of the age process. It uses the SHS method for evaluating the moment generating function (MGF) of a single node and the joint MGF of two age processes. For subsets of nodes S, S_1, S_2⊆𝒩, the test functions considered are ψ^(n)_S(X(t))=exp(nX_S(t)) and ψ^(n_1,n_2)_S_1,S_2(X(t))=exp(n_1X_S_1(t)+n_2X_S_2(t)). By defining the rest of the reset maps and transitions as before, <cit.> shows that for any arbitrarily connected gossip network and for m≥ 1, the stationary marginal mth moment of the AoI for set S⊆𝒩 can be expressed as v^(m)_S=(mv^(m-1)_S+∑_i∈ N(S)λ_i(S)v^(m)_S∪{i})/(λ_0(S)+∑_i∈ N(S)λ_i(S)). Note that for m=1, i.e., the first moment, (<ref>) reduces to (<ref>). Now, consider the simple toy example in Fig. <ref>. Using (<ref>), we obtain the variances of the two AoI processes, X_1(t) and X_2(t) at node 1 and node 2, respectively, as Var[X_1(t)]=1/λ^2_01 and Var[X_2(t)]=1/λ^2_01+1/λ^2_12. Thus, the results of <cit.> show that the standard deviations of the age processes are relatively large with respect to their average (mean) values for gossip networks. Therefore, it is important to take higher order moments into consideration while implementing or optimizing timely gossip algorithms. § SPARSE NETWORKS The gossip networks we have discussed so far are assumed to be symmetric, and hence easy to analyze. However, in real-life networks, such symmetric structures may not always be guaranteed; see Fig. <ref>. Several factors, such as link failures, expensive bandwidth, physical separation between devices, asymmetric connectivity, etc., introduce heterogeneity in the network, which makes it difficult to analyze and optimize such networks. Indeed, applying the recursive formula in (<ref>) with an asymmetric setting results in different age performance equations for different nodes, which are difficult to optimize with conventional gradient-based methods. To that end, <cit.> introduces the concept of fair timeliness, which evaluates the performance of the node with the worst average age in the network. The key idea behind this is that optimizing this worst-case age performance will ensure a fair timeliness for all the other nodes in the network. The nodes are assumed to be age-aware, and therefore, they can calculate the empirical time average of their own ages as â_i=(1/T)∫_0^T Δ_i(t)dt.
Assuming ergodicity of the age process, â_i→ a_i almost surely as T→∞, and therefore, for a large time window T, the estimate â_i≈ a_i. Now, these average ages of the nodes can be tuned by choosing asymmetric update rates from the source to the nodes. Thus, the set of update rates from the source to the nodes, λ={λ_i}_i=1^n, is the tunable parameter for the source; see Fig. <ref>. To maintain fair timeliness of the overall network, we minimize the average version age of the worst performing node, a(λ)=max_i∈𝒩a_i(λ), by choosing λ. Therefore, the optimization problem to solve is min_λ∈[0,λ]^n a(λ) subject to ∑_i=1^nλ_i≤λ. Due to the recursive formulation of age in a sparse asymmetric network and the complications due to the max(·) function, the solution of (<ref>) is obtained by a derivative-free or black-box optimization technique, i.e., a continuum multi-armed bandit formulation <cit.> with action λ and reward f(λ)=-â(λ). Such a formulation requires a sequential solution that makes the cumulative regret R_M=∑_m=1^M(â(λ_m)-â(λ^*)) sublinear, i.e., o(M). <cit.> uses a sequential Bayesian optimization technique with Gaussian process regression, which fits a Gaussian process GP(μ(λ),k(λ,λ')) to f, where μ(λ) is the mean of the process and k(λ,λ') is a regularized kernel. Performing the regression for M steps over the reward points f_M=[f_1, f_2, ⋯, f_M]^T and the corresponding sequence {λ_m} yields the regression formulas μ_M(λ)=k_M(λ)^T K_M^-1 f_M, k_M(λ,λ')=k(λ,λ')-k_M(λ)^T K_M^-1 k_M(λ'), and σ^2_M(λ)=k_M(λ,λ), where k_M(λ)=[k(λ,λ_1),k(λ,λ_2),⋯,k(λ,λ_M)]^T and K_M is the kernel matrix K_M=[ k(λ_1,λ_1) k(λ_1,λ_2)⋯ k(λ_1,λ_M); k(λ_2,λ_1) k(λ_2,λ_2)⋯ k(λ_2,λ_M);⋮⋮⋮⋮; k(λ_M,λ_1) k(λ_M,λ_2)⋯ k(λ_M,λ_M) ]. Then, <cit.> utilizes the commonly used Gaussian process upper confidence bound (GP-UCB) algorithm, which employs an optimization of a simple acquisition function over the search space. This auxiliary optimization can be written as λ_m=argmax_λ∈𝒟 μ_m-1(λ)+√(β_m)σ_m-1(λ), where 𝒟={λ∈[0,λ]^n:1^Tλ≤λ} is the search space. This optimization is much simpler than the original optimization and can be performed with existing gradient-based numerical methods. The term μ_m-1(λ) in (<ref>) exploits information from the data points observed up to step m-1, whereas the term σ_m-1(λ) pushes the algorithm to explore different regions of the search space, thus meeting the exploration-exploitation trade-off. The analysis in <cit.> shows that by choosing β_m∼ O(log(m^2)) for a convex search space such as 𝒟, the regret R_M is guaranteed to be sublinear with arbitrarily high probability, i.e., with high probability the algorithm converges to a global optimum. The only downside of using GP-UCB is that the auxiliary optimization is not scalable, as the time complexity increases exponentially for large networks. § FILE SLICING AND NETWORK CODING In Section <ref>, we saw how in a complete symmetric network of n nodes that wishes to track a single time-varying file, the average age at each node comes out to O(log n). A natural question that follows is whether we can do better than O(log n) version age in a fully connected network.[We remark that the work summarized in Section <ref> achieves this goal by utilizing two modifications to the system model: Nodes are age-aware and the total network gossiping capacity can be redistributed. In the current section, via use of file slicing and periodic pausing, we achieve O(1) age while being age-agnostic and without redistributing gossiping capacity.]
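Stepping back briefly to the bandit formulation of the previous section, the regression-plus-acquisition loop can be sketched compactly. The following is our own schematic illustration: the RBF kernel, the random-search surrogate for the auxiliary optimization, and all names are assumptions rather than the exact method of the cited work.

import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    # Squared-exponential kernel between two sets of rate vectors (one vector per row).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_ucb_next_rates(past_rates, past_rewards, beta, total_rate, n,
                      n_candidates=2000, noise=1e-4, seed=0):
    # One GP-UCB step: fit a GP to the rewards f(lambda) = -hat_a(lambda) observed so far,
    # then maximize the UCB acquisition over random candidates in the constraint set D.
    rng = np.random.default_rng(seed)
    K = rbf_kernel(past_rates, past_rates) + noise * np.eye(len(past_rates))
    K_inv = np.linalg.inv(K)
    cand = rng.dirichlet(np.ones(n), size=n_candidates) * (total_rate * rng.random((n_candidates, 1)))
    k_c = rbf_kernel(cand, past_rates)
    mu = k_c @ K_inv @ past_rewards
    var = 1.0 - np.einsum('ij,jk,ik->i', k_c, K_inv, k_c)   # k(x, x) = 1 for this kernel
    ucb = mu + np.sqrt(beta) * np.sqrt(np.clip(var, 0.0, None))
    return cand[np.argmax(ucb)]

Each selected rate vector would then be deployed, the worst-case empirical age re-estimated, and the new observation appended to the history before the next iteration. Returning to gossiping itself, recall that for a single file tracked over a fully connected network, uniform gossiping yields O(log n) age; the results below ask whether this can be improved without age awareness or capacity reallocation.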
Likewise, if n time-varying files, all generated at distinct nodes, were simultaneously tracked by all n nodes, then dividing all rate resources among n files in <cit.> results in a version age of order O(n log n) with respect to each file. Therefore, one can ask whether we can do better than O(n log n) for n files. In this respect, <cit.> achieves O(1) version age for single-file dissemination and O(n) version age for n-file dissemination (improving both results by log n), in a network of n nodes, though their model is slightly different from <cit.>.<cit.> assumes a discrete time model of gossiping, where the timeline is divided into cycles, with each cycle containing O(1)=c timeslots in the case of single file dissemination; see Fig. <ref>(a). Here one timeslot is the duration of time required to receive an entire file at a node. The constant c is determined based on the specifics of the gossip protocol, since their scheme works for a wide class of gossip protocols that satisfy certain conditions. At the beginning of each new cycle, the source begins transmitting the latest version in its possession to other nodes, and the latter further transmit packets to neighbors as part of gossiping. The age analysis depends on one key factor – in a single cycle, a fixed version is gossiped in the network to avoid mixing of versions. That is, at the beginning of a new cycle, all nodes halt transmissions of previous versions they obtained in prior cycles and the source further does not transmit any new versions that it gets updated with during the course of the current cycle. <cit.> employs a class of gossip protocols that have a non-zero probability of transmitting a fixed file to all network nodes in O(1)=c time. <cit.> shows how the INTERLEAVE protocol of <cit.> mentioned in Section <ref> can be fine-tuned to satisfy such requirements, which slices packets into smaller pieces and uses a push-pull hybrid scheme to disseminate the sliced packets. The effectiveness of the periodic pausing of newer version updates in each cycle in achieving expected age of O(1) is proven by graphically studying the age at an arbitrary node within a typical cycle and formulating an upper bound that is O(1) in expected sense.The analysis can be extended to the case of multi-file gossiping shown in Fig. <ref>(b), where the system consists of a network of n nodes and n files, such that each node is the source of a unique time-varying file that all other nodes aim to closely track. Again the timeline is divided into cycles, such that each cycle consists of cn timeslots for some constant c chosen based on specifics of the gossip protocol. In this case, a class of gossip protocols capable of disseminating a fixed set of n messages to all nodes within cn timeslots in each cycle is employed; an example of such a gossip protocol is the random-linear coding (RLC)-based protocol discussed in <cit.>. In each cycle, this gossip protocol is utilized to distribute the latest versions of all n files possessed by all source nodes at the beginning of the cycle. Through the design of a pertinent upperbound, it is demonstrated that the age for each file at each node is O(n).In both the cases of single file and n files dissemination, two noteworthy aspects emerge. Firstly, achieving the specified age bounds does not necessitate the complete dissemination of a file version to the entire network in each cycle. 
Even if very few nodes really receive a file version in a particular cycle, since every cycle presents a fresh opportunity for a node to receive a new version and reduce its age, it is okay for network nodes to miss out some updates in a couple of cycles. Further, periodic pausing of dissemination of newer file versions from the source does not hinder the achievement of these bounds. § RELIABLE AND UNRELIABLE SOURCESIn the works so far, we have examined system models involving a source node receiving updates for a time-varying file and accurately disseminating updates on the current status of the file to the network. The work in <cit.> delves into the system model depicted in Fig. <ref>. In this model, a network comprising n nodes, denoted as 𝒩={1,…,n}, aims to continually track an updating process or event (E) that receives version updates according to a Poisson process with a rate of λ_E. However, for transmitting information about the event to the nodes, two sources are available —- a reliable source and an unreliable source. Information received from the former is considered more reliable than that from the latter and is expected to provide more accurate information. The less reliable source could, for instance, act as a proxy for a cost-effective sensor transmitting quantized or noisy measurements to an IoT network. This system model distinguishes itself from previous works in that there is still a single source of information, but two relaying sources are now available.Nodes wish to have fresh information, however, they have preference for packets that originated at the reliable source, i.e., reliable information, and are willing to sacrifice their version age of information by up to G versions to switch from an unreliable packet to a reliable packet. Consider S_i(t) as an indicator of the reliability status of the information packet at node i at time t, where S_i(t)=0 and S_i(t)=1 represent reliable and unreliable packets, respectively. Additionally, let X_i(t) denote the version age of information at node i at time t. At time t, when node i sends an update to node j, node j decides to accept or reject the packet based on the following rules: If both nodes have unreliable information, node j selects the packet with the lower version age. If both nodes have reliable information, node j chooses the packet with the lower version age. If the incoming packet has reliable information but node j has unreliable information, node j selects the incoming reliable packet if X_i ≤ X_j+G. If node j has a reliable packet and the incoming packet is unreliable, node j retains its reliable packet if X_j ≤ X_i+G. Let F(t)= S_1(t)+ S_2(t)+ … + S_n(t)/n denote the fraction of user nodes that have unreliable information packet at time t.The goal is to study how this protocol impacts the prevalence of unreliable packets at nodes in the network and their version age. Using the SHS framework, <cit.> formulates analytical equations to characterize two quantities: long-term expected fraction of nodes with unreliable packets F= lim_t →∞𝔼[F(t)] and expected version age of information at network nodes x_i= lim_t →∞𝔼[X_i(t)], i ∈𝒩. The choice of the continuous state for the SHS is (S(t),X(t))∈ℝ^2n, where S(t)=[S_1(t),…,S_n(t)] and X(t)=[X_1(t),…,X_n(t)] represent the instantaneous reliability status and instantaneous version age, respectively, of the n user nodes at time t. The SHS operates in a single discrete mode with the differential equation (Ṡ(t),Ẋ(t))=0_2n. 
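A minimal sketch of this packet-acceptance rule (our own rendering of the comparisons described above; the function and variable names are assumptions) is:

def accept_incoming(X_own, S_own, X_in, S_in, G):
    # Should node j replace its stored packet (X_own, S_own) with an incoming packet (X_in, S_in)?
    # S = 0 marks a reliable packet and S = 1 an unreliable one.
    if S_in == S_own:                  # same reliability class: keep the packet with the lower version age
        return X_in < X_own
    if S_in == 0:                      # incoming reliable, stored unreliable: accept if at most G versions staler
        return X_in <= X_own + G
    return not (X_own <= X_in + G)     # a stored reliable packet is kept unless it is more than G versions staler

Larger G makes nodes hold on to reliable packets longer, which is precisely the freshness-reliability trade-off quantified next.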
For a set of nodes A, <cit.> introduces the concepts of reliability status S_A and version age X_A, which play a crucial role in assessing certain test functions later. The determination of S_A and X_A essentially involves identifying the most optimal node in the set A in some sense, such that its reliability status and version age become representative of the entire set. If the most recent reliable packet is at most G versions older than the latest unreliable packet in the node set A, then the node with the latest reliable packet establishes the values of X_A and S_A. Otherwise, if the most recent unreliable packet is predominant, the node with the latest unreliable packet determines X_A and S_A. With these definitions, <cit.> introduces five distinct test functions, leading to five separate recursive equations, unlike the single test function and recursive equation (<ref>) in classical age-based gossiping discussed in Section <ref>. These five equations can be solved iteratively to obtain values for F and x_i. <cit.> then proves several results that allow us to show that F is a decreasing function of G and x_i are an increasing function of G. Therefore, G induces a trade-off between reliability and freshness of information.§ INFORMATION MUTATIONWe proceed to extend the model depicted in Fig. <ref> to account for alterations in information as it passes from one node to another; see Fig. <ref>. <cit.> attempts to characterize spread of misinformation in an age-based gossip network of n user nodes that receives version updates from a source. The source always communicates accurate information to network nodes, however, there is a possibility of information getting mutated during inter-node transmissions in the network. This can stem from the sender node not always being truthful, occasionally deliberately altering the information before forwarding it, or the information packet becoming corrupted during the transmission process. An exemplar of this scenario could be software distribution to end users, where the software vendor consistently provides reliable versions or iterations of software packages to users. However, if users acquire a software version from their neighboring users, they might sporadically receive an incorrectly functioning or even harmful version of the software.Here, T_i(t) represents the accuracy of information at node i, where T_i(t)=1 indicates accurate information (as originated at the source, also termed the truth), and T_i(t)=0 indicates inaccurate information (alternatively referred to as misinformation). Upon receiving a packet, node i cannot immediately discern whether the information is true or not, or if it differs from the sender's information. Upon receiving a packet, node i compares its version number with the received packet; if different, the staler packet is discarded, and the fresher one is retained. However, if the version numbers match, node i retains the information it trusts more, perhaps based on software performance or measurement noise in a smart sensor network. The assumption, then, is that when two packets have the same version age, truth prevails over misinformation; thus, if one of the packets contains accurate information (the truth), node i retains that packet.The goal here is to find what fraction of nodes are misinformed in this network and the version age of information at all user nodes, which is somewhat similar to <cit.>. 
However, a distinction lies in <cit.> where once a packet is created at one of the sources, the packet information remains unchanged during the diffusion process. In contrast, in <cit.>, information is susceptible to mutation into misinformation during inter-node transmissions. Additionally, in <cit.>, network nodes are aware of whether a particular packet originated at the reliable or unreliable source, allowing them to consider a freshness-reliability trade-off. Conversely, in <cit.>, nodes are unaware of whether the received information is the truth or not.The information mutation problem is addressed through SHS modeling, where the continuous state is denoted as (T(t),X(t))∈ℝ^2n, with T(t)=[T_1(t),…,T_n(t)] and X(t)=[X_1(t),…,X_n(t)] representing the instantaneous accuracy and instantaneous version age, respectively, of packets stored at n user nodes at time t. A transition (i,j,h) occurs when node i sends a packet to node j, where h=1 signifies error-free communication, and h=0 indicates mutation of information into misleading information during transmission. Transition (0,0,1) represents an update at the source. The stochastic hybrid system operates in a single discrete mode, with the continuous state following the differential equation (Ṫ(t),Ẋ(t))=0_2n. In <cit.>, variables T_A,B are defined for two disjoint sets of nodes A and B, leading to the introduction of three test functions and corresponding recursive equations. These equations must be iterated in a specific manner to solve for the desired quantities: the expected fraction of users with truth F= lim_t →∞𝔼[F(t)] and version age x_i= lim_t →∞𝔼[X_i(t)], i ∈𝒩, where F(t)= T_1(t)+ T_2(t)+ … + T_n(t)/n.Noteworthy findings in <cit.> include the observation that very low gossip rates control the dissemination of mutated information on one hand, while very high gossip rates accelerate the dissemination of accurate fresh packets to all network nodes, mitigating misinformation due to the prevalence of truth in the model on the other hand. Hence, both extreme cases contribute to minimizing misinformation, with misinformation spread being higher at moderate gossip rates.§ ADVERSARIAL ACTIONS IN GOSSIPINGUp until this point, our exploration has focused on information distortion in gossiping in the form of either information dissemination by a less reliable source in <cit.>, or innocent mutations to information packets happening during transmission in <cit.>. However, these works did not involve a malicious entity actively attempting to disrupt the gossip operation. In this section, we will delve into and review two studies that explore adversarial attacks —- specifically, jamming attacks and timestomping attacks. §.§ Jamming Attack<cit.> investigates the resilience of age-based gossiping over networks against intentional jamming. It focuses on characterizing the impact of the number of jammers n on the version age scaling in two distinct gossip network topologies, the ring network and the fully connected network, which represent the two extremes of the connectivity spectrum of networks. In the ring network, it is shown that when the number of jammers ñ scales as a fractional power of network size n, i.e., ñ= cn^α, the average version age scales with a lower bound Ω(√(n)) and an upper bound O(√(n)) when α∈[0,1/2) ., and with a lower bound Ω(n^α) and an upper bound O(n^α) when α∈[1/2,1]. 
This implies that the version age with gossiping on a ring remains robust against up to √(n) jammers, considering that the version age in a ring network without jammers scales as √(n). These age scalings hold irrespective of the particular placement of jammers on the gossiping links. When multiple adversaries cut inter-node communication links in this symmetric ring, the ring network is dismembered into a collection of isolated groups of nodes as in Fig. <ref>, where each group has the structure of a line network. The ages of nodes in each such group are no longer statistically identical, owing to the disappearance of circular symmetry. The spatial variation of the version age over a line network is examined first, where it is shown that the expected version age in this network is highest at the corner nodes and decreases towards the center. Subsequently, bounds on the version age of the line network, and consequently on the dismembered ring, are derived. For this purpose, <cit.> introduces an alternate system model of mini-rings (formed by dotted lines, as depicted in Fig. <ref>, around each individual line network component, thus forming a smaller ring with fewer than n nodes) and proves that the version age of the original model can be sandwiched between constant multiples of the version age of the alternate mini-ring model. Several other structural results are also proven along the way. In the case of the fully connected gossip network, <cit.> formulates a greedy strategy to place ñ jammers so as to maximize the age of the resultant network. The greedy method involves using the ñ jammers to isolate the maximum possible number of nodes, thereby consolidating all the remaining links into a single ball, as shown in Fig. <ref>(a). In the resulting network, the average version age is shown to scale as O(logn) when ñ=O(nlogn), and as O(n^α-1), 1<α≤2, when ñ=O(n^α), implying that the network is robust against nlogn jammers, since the version age in a fully connected network without jammers scales as O(logn). Therefore, despite having the same update capacity in both the ring and fully connected networks, the large number of links in the fully connected topology constrains the effectiveness of jammers, requiring a higher number of jammers for any meaningful deterioration of the age scaling of the network. §.§ Timestomping Attack <cit.> explores timestomping attacks in timely gossiping, where an adversary disrupts the gossip operation by manipulating the timestamps of certain data packets in the network. This technique, referred to as timestomping <cit.>, aims to introduce staleness and inefficiency into the network. A timestomping attack can take various forms. A malicious insider node may breach the gossip protocol, injecting old packets by disguising them as fresh through timestamp manipulation, while maintaining the gossiping frequency to evade suspicion. Alternative methods include meddler in the middle (MITM) attacks, where the adversary inserts its node undetected between two nodes and manipulates communication, and eclipse attacks, where the adversary redirects a target node's inbound and outbound links to adversary-controlled nodes for manipulation, isolating it from the legitimate network. Consider two nodes, A and B, coming in contact to exchange information, such that node A is controlled by a timestomping adversary.
If node A is outdated compared to node B, the adversary is inclined to increase the timestamp of an outgoing packet from node A to make it appear fresher so as to misguide node B into discarding its packet in favor of a staler packet, and also, decrease the timestamp of an incoming packet from node B so as to avoid its acceptance at node A. Conversely, if node A is more up-to-date than node B, the adversary would reduce timestamps of outgoing packets and increase timestamps of incoming packets to make node B reject fresher files and node A accept staler files. The further the manipulated timestamps deviate from their actual values, higher are the chances of error in deciding which packet should be discarded, since this decision relies on a comparison of timestamps. At time t, the maximum error occurs when the timestamp is changed to either the current time t or the earliest time 0. In this respect, <cit.> investigates an oblivious adversary that probabilistically alters the timestamp of each incoming and outgoing packet to either t or 0.Some interesting results follow in case of the fully connected network shown in Fig. <ref>(a), where it is shown that one infected node can single-handedly suppress the availability of fresh information in the entire network and increase the expected age from O(log n) found in <cit.> to O(n). This is achieved by increasing the timestamp of every outgoing packet to t and decreasing the timestamp of every incoming packet to 0, in effect, preventing all incoming files from being accepted by the infected node and actively persuading other nodes to always accept outgoing packets from the infected node. Additionally, if the malicious node contacts only a single node instead of all nodes of the network, see Fig. <ref>(b), the system age still gets degraded to O(n). These observations show how fully connected nature of a network can be both a benefit and a detriment for information freshness; full connectivity facilitates rapid information dissemination but, at the same time, accelerates the dissipation of adversarial inputs. On the other hand, for the unidirectional ring network, which falls on the other end of the network connectivity spectrum, it is shown that the timestomping effect on age scaling of a node is limited by its distance from the adversary, and the age scaling for a large fraction of the network continues to be O(√(n)), unchanged from the case with no adversary. The analysis involves considering an SHS, where due to the presence of a timestomping adversary, the continuous state is chosen as (X(t),U(t))∈ℝ^2n, where X(t)=[X_1(t),…,X_n(t)] denotes the instantaneous ages at the n nodes and U(t)=[U_1(t),…,U_n(t)] denotes the timestamps marked on the packets at the n nodes at time t. The state evolves in single discrete mode with differential equation (Ẋ(t),U̇(t))=(1_n,0_n), as the age at each node grows at unit rate in absence of update transition and the timestamps of the node packets, both true and claimed, do not change between such transitions. Here, the actual instantaneous age at node i is X_i(t)= t-U̅_i(t), whereU̅_i(t) indicates the true packet generation time, which can be different from the claimed timestamp U_i(t) if the file timestamp has been tampered with. The analysis in <cit.> primarily relies on definition of X_N(S)(t) for a set of nodes S at time t, which indicates the actual instantaneous age of the node claiming to possess the most recent timestamped packet in set S, i.e., X_N(S)(t)=X_max_j∈ S U_j(t)(t). 
This definition is used in <cit.> to formulate a series of interesting test functions to explain age scaling in the different system models considered.§ NON-POISSON UPDATING<cit.> consider timeliness in multi-hop cache-enabled networks, where updates on each link do not necessarily occur according to a Poisson process. That is, all nodes forward updates according to an ordinary renewal process, independently of other nodes. As a result, the evolution of age in the network is defined by a superposition of multiple independent ordinary renewal processes. <cit.> showed that the superposition of two ordinary renewal processes is an ordinary renewal process only if all processes are Poisson processes. Therefore, the age of information literature relies heavily on Poisson process based updates or restricts to single hop cache-updating systems, since the exponential distribution (or geometric distribution) is the only continuous (or discrete) probability distribution with memoryless property. The particular convenience of Poisson updates in SHS stems from the fact that the exponential distribution is the only distribution with constant hazard rate, which simplifies expressions involving expectations of products of test functions and transition intensities or hazard rates. <cit.> show how when the inter-update times follow general distributions which are not exponential, it becomes difficult to calibrate the different renewal processes with respect to each other, for both traditional age and version age metrics.Non-Poisson cache updating is studied through the lens of renewal theory in these works. Packets are assumed to arrive from node i at node j on link (i,j) according toN^(i,j)(t) renewal process, and the finite random times 0≤ T^(i,j)_1 ≤ T^(i,j)_2 ≤… denote the corresponding renewal times, such that, inter-arrival times Y^(i,j)_n=T^(i,j)_n- T^(i,j)_n-1 are i.i.d. with common distribution F^i,j. Given N^(i,j)(t)=max{n:T^(i,j)_n≤ t}, the regenerative process A^(i,j)(t)=t-T^(i,j)_N^(i,j)(t) denotes the corresponding backward recurrence time at t, i.e., time elapsed since the last renewal prior to t. <cit.> focuses on expected age in a multi-hop network of n nodes as shown in Fig. <ref>, and shows that the age at node n is, X_n(t)=∑_j=1^nΔ_j(t),where Δ_i(t) =A^(n-i,n-i+1)(t-∑_j=0^i-1Δ_j(t)).Then, (<ref>) in conjunction with certain asymptotic results, giveslim_t →∞𝔼[X_n(t)]= ∑_j=1^nlim_t →∞𝔼[Δ_j(t)]= ∑_j=1^n𝔼[(Y^(n-j,n-j+1))^2]/2𝔼[Y^(n-j,n-j+1)]. We see that the age at node n depends on independent contributions of the intermediate links (i,i+1), 0≤ i ≤ n-1, and is invariant to ordering of these links. Hence, each node can minimize its age by optimizing its individual packet request renewal process, irrespective of the statistical properties of other nodes and links in the network.<cit.> shows an analogous additive result for version age of information in an n-hop network where source gets updated according to an ordinary renewal process with typical inter-renewal interval Y^(0,0), such that version age at end user islim_t →∞𝔼[X_n(t)]=1/𝔼[Y^(0,0)]∑_j=1^n𝔼[(Y^(n-j,n-j+1))^2]/2𝔼[Y^(n-j,n-j+1)]. § GOSSIPING WITH AN ENERGY HARVESTING SENSOR<cit.> looks at version age in a new light in a discrete time system. It considers a sensor which can sample a source at discrete time instants and has a finite-sized battery that charges according to a Bernoulli process. There is a caching aggregator that has one storage unit and stores versions of the update. 
Finally, there is a gossiping uni-directional ring network of n nodes; see Fig. <ref>. At any time slot, at most one node asks for updates from the aggregator. The aggregator can then send the update it has stored or ask for a fresh update from the sensor. If the sensor is asked for an update, then it samples the source and consumes one unit of battery if the battery is not empty. Node i in the gossip network has q_i probability of asking for an update from the aggregator. The probability that no node asks for an update in a particular time slot is 1 - ∑_i=1^nq_i. The source updates itself with probability p_t as a function of time in every time slot. The problem is to find the optimal causal policy for the aggregator to ask for updates from the source. The solution to this problem is found using a Markov decision process (MDP) formulation, which is different from the SHS formulation considered in most other papers with version age in gossiping networks. An MDP consists of the tuple (S,A,P,C), where S is the state space, A is the set of actions, P is the state transition probability function, and C is the cost associated with the MDP. In the context of this problem, the state at time t is represented by s(t)={b(t), X_1(t), X_2(t),⋯, X_n(t), X_C(t)}, where b(t) is the battery state, X_i(t) is the version age of the ith node and X_C(t) is the version age of the aggregator. The action at time t, a(t)∈{0,1} represents the decision of the aggregator C if requests for a new update from sensor S, or serves the external request with a cached update. The transition probabilities P(s'|s,a) describe the probability of transitioning to state s' in the next timeslot, given the system is in state s currently and the action taken is a. The update policy π, among the set of all possible policies Π, dictates the action to be taken at each timeslot. A desired policy is the one that minimizes the cost C(s(t),a(t))=X_avg(t)=1/n∑_i=1^nX_i(t), averaged over time. Hence, the optimization problem can be expressed asmin_π∈Πlim sup_T→∞1/T𝔼[∑_t=0^T-1X_avg^π(t)|s(0)=s_0].Solving (<ref>), <cit.> shows that the optimal policy is a threshold policy that depends only on the energy left in the battery and the version age of the aggregator; it does not depend on the version ages of the nodes in the network. § OPEN PROBLEMS: FUTURE DIRECTIONSThe works discussed throughout this survey article lay the foundations for tackling various aspects of the problems related to timeliness in gossip networks. However, there is an abundance of interesting open research directions. Some of these directions are discussed next.Connectivity-Freshness Synergy The two extremes of the connectivity spectrum are, the ring network on the one end where each node is connected only to two of its neighbors, and the fully connected network on the other end where each node is connected to all of the remaining nodes. The age in the case of the ring scales as n^1/2 and the age in the case of the fully connected network scales as log n. There is a whole range of levels of connectivity between these two extremes. At the moment, only one more connectivity point is understood in between, which is the grid network, where each node is connected to four of its neighbors. The age of a grid network scales as n^1/3, which is in between the age scalings of the ring and fully connected networks. This hints at a synergy between connectivity and freshness: the more connected a network is, the fresher its nodes will be through gossiping. 
Describing this synergy in its entirety by finding age scalings of networks with medium-level connectivities is an important open problem.Connectivity-Adversarial Action Trade-Off While connectivity enables fast dissemination of useful information leading to better freshness for the network, it unfortunately enables fast dissemination of adversarial actions as well, such as timestomping attacks and spread of misinformation. Therefore, there may be a trade-off between connectivity of the network versus the adversarial robustness of the network. This means that there may be a trade-off between freshness of information versus spread of misinformation. Since timeliness may emerge as an important semantic feature for future high-connectivity applications, and gossiping may prove to be a simple-to-implement distributed freshness mechanism in such networks, this may open up a whole unprotected attack surface for adversarial actions, which needs to be studied carefully. To that end, it is important to study effects of connectivity on potential adversarial actions, and identify trade-offs between age of information and spread of misinformation. In addition, encryption, privacy-preserving mechanisms and physical layer security, may be integrated into age and gossiping studies.Semantics and Multi-Objective Gossiping Gossiping so far considers only the ages of packets in spreading information in the network. We view this as a scalar type gossiping mechanism that considers only a single variable – the age. Future networks will consider semantics of information, where information may have multiple features, age being one of them. Such semantics will be carried in (encoded into) different dimensions of a vector for each update packet. In such a scenario, different nodes of the network may assign priority to different dimensions, such that, for example, a node caring about the age dimension can run an age-based gossiping protocol, while another node caring about another dimension of the packet may resort to a different gossiping protocol. Studying multi-objective gossiping, and considering synergies and trade-offs between different semantic components of a gossiping vector, are future research directions. Mobility and Time-Varying Connections in Gossiping The current literature only considers gossiping networks which are static. Mobility and time-varying connections will be part of future dense networks such as vehicular networks, robotic networks, and drone swarms. Mobility will constantly change the connectivity and topology of such networks. It is well-known that mobility increases the capacity (throughput) of wireless networks. It would be interesting to study the effects of mobility on the freshness of information in networks.Tracking Multiple Sources via Gossiping So far, we have considered networks that monitor and update the versions of a single observable (i.e., samples of a single random process). In future networks, multiple observables or samples of multiple random processes will flow through a network. In such a system, where each node possesses various samples of multiple information sources, when two nodes get in touch and have an opportunity to gossip, which information should they pass on: Should they randomly choose one of the sources and gossip the freshest sample they have from that source? Should they go over the sources in a round robin fashion? 
Instead of sending the freshest sample of a single information source, should they linearly combine samples from different information sources and gossip that mixed sample? These are interesting open problems that can be studied. In addition, different approaches may be needed depending on whether these multiple information sources are independent or correlated. § CONCLUSIONThis article provides a comprehensive overview of gossiping as a communication mechanism for rapid information dissemination in networks, with emphasis on use of age of information as a key metric. The overview covers the evolution of gossip algorithms, starting from dissemination of static messages, all the way to real-time gossiping. The evaluation of gossip timeliness scaling across different network topologies, including fully connected, ring, grid, generalized ring, hierarchical, and sparse asymmetric networks, is discussed. The article explores age-aware gossiping, higher-order moments of the age process, and various network variations such as file slicing, network coding, reliable and unreliable sources, information mutation, adversarial actions, and energy harvesting sensors. In addition to consolidation of the recent developments in timely gossiping, the article also outlines several open problems and future directions, which may be of interest.unsrt
http://arxiv.org/abs/2312.16163v1
{ "authors": [ "Priyanka Kaswan", "Purbesh Mitra", "Arunabh Srivastava", "Sennur Ulukus" ], "categories": [ "cs.IT", "cs.NI", "eess.SP", "math.IT" ], "primary_category": "cs.IT", "published": "20231226185140", "title": "Age of Information in Gossip Networks: A Friendly Introduction and Literature Survey" }
[1]School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China. Email: [email protected], [email protected]. W.L. is partially supported by NSFC Grant 12174310 and a Key Project of Joint Funds For Regional Innovation and Development (U21A20425). [2]Institute of Mathematics, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany. Email: [email protected]. The author is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 433126998. Does PML exponentially absorb outgoing waves scattering from a periodic surface? Wangtao Lu^1, Kuanrong Shen^1 and Ruming Zhang^2================================================================================ The PML method is well-known for its exponential convergence rate and easy implementation for scattering problems with unbounded domains. For rough-surface scattering problems, authors in <cit.> proved that the PML method converges at most algebraically in the physical domain. However, the authors also asked a question whether exponential convergence still holds for compact subsets.In <cit.>, one of our authors proved the exponential convergence for2π-periodic surfaces via the Floquet-Bloch transform when 2k∈ℝ^+\ℤwhere k is the wavenumber; when 2k∈ℝ^+∩ℤ, a nearly fourth-order convergence rate was shown in <cit.>. The extension of this method to locally perturbed cases is not straightforward, since the domain is no longer periodic thus the Floquet-Bloch transform doesn't work, especially when the domain topology is changed.Moreover, the exact decay rate when 2k∈ℝ^+∩ℤ remains unclear. The purpose of this paper is to address these two significant issues. For the first topic, the main idea is to reduce the problem by the DtN map on an artificial curve, then the convergence rate of the PML is obtained from the investigation of the DtN map. It shows exactly the same convergence rate as in the unperturbed case. Second, to illustrate the convergence rate when 2k∈ℝ^+∩ℤ, we design a specific periodic structure for which the PML converges at the fourth-order, showing that the algebraic convergence rate is sharp. We adopt a previously developed high-accuracy PML-BIE solver to exhibit this unexpected phenomenon. § INTRODUCTION For the numerical simulation of wave scattering problems in unbounded domains, the perfectly matched layers (PML), which was invented by Berenger in1994in <cit.>, is a widely used cut-off technique. The main idea of this method is to add an absorbing layer outside the physical domain, then the problem is approximated by a truncated problem with a properboundary condition. We refer to <cit.> to a detailed discussion of this method. In this paper, we will focus on the PML convergence forwave scattering problems with locally perturbed periodic surfaces.In the past decades, many mathematicians have been working on the theoretical analysis and numerical implementations of scattering problems with (locally perturbed) periodic structures. For a special case, when the incident is quasi-periodic and the structure is purely periodic, there is a well established framework such that the problem is easily reduced into one periodicity cell. We refer to <cit.> for theoretical discussions and <cit.> for numerical results.However, when the surface is perturbed, or the incident field is not periodic (e.g., point sources, Herglotz wave functions), this framework no longer works. 
Despite the periodicity, the problem can be treated as a general rough surface scattering problem.In <cit.>, the well-posedness of the problems have been proved in a normal Sobolev space even for non-periodic surfaces and it isalso proved in weighted Sobolev spacesin <cit.>. For radiation conditions for the scattered field, we refer to <cit.>.In order to take the advantage of the (locally perturbed) periodic structures, the Floquet-Bloch transform is applied to these problems, see <cit.> for numerical implementations with absorbing media. In <cit.>, the authors studied Herglotz wave functions scattered by periodic surfaces, with the help of the Floquet-Bloch transform. The method was extended to locally perturbed periodic surfaces in <cit.>. Based on these approaches, numerical methods have been developed. For 2D cases, the method was proved to be convergent in <cit.>. Based ona detailed study of the singularity of the DtN map, a higher order method was also developed in <cit.>. However, since the singularity of the DtN map becomes much more complicated for 3D cases, the convergence was not proved in general (see <cit.>) and the extension of the method proposed in <cit.> becomes impossible. In addition, the DtN map is a non-local boundary condition, which is difficult to be implemented numerically. Thus we are motivated to apply the PML method to solve these problems.For the particular case that the incident field is quasi-periodic and the surface is purely periodic, exponential convergence for the PML method has been proved in <cit.> and the computation was carried out with an adaptive finite element method. For rough surface scattering problems, authors have proved in <cit.> that the convergence in the whole physical domain is at most algebraic. At the end of this paper, the authors proposed a conjecture that exponential convergence also holds for compact subsets. In <cit.>,one of our authors proved the conjecture for non-periodic incident fields scattered by purely periodic surfaces using the method of the Floquet-Bloch transform with a countable number of wavenumbers excluded;at these excluded wavenumbers, the author further derived a nearly fourth-order decaying upper bound for the PML truncation error although its sharpness remains unjustified. In this paper, we study the case with locally perturbed periodic surfaces. Since the domain is no longer periodic, the Floquet-Bloch transform no longer works. To this end, our idea is to first derive an upper bound for the difference oftwo Dirichlet-to-Neumann (DtN) maps of both the original problem and the PML-truncated problems on a common artificial curve separating an unperturbed region from the perturbed part. The DtN maps are constructed via single-layer operators defined through closely related transmission problems with the unperturbed periodic surface. The difference is estimated based on the theories in <cit.>. The convergence result is then proved based on standard error estimates for elliptic differential equations. We construct a specific periodic structure to illustrate that the PML truncation error indeed can decay algebraically at a fourth order convergence rate. Finally, we adopt a recently developed high-accuracy PML-BIE method <cit.> to numerically validate such an unexpected phenomenon.The rest of this paper is organized as follows. In section 2, the mathematical model for the problem is described and the main convergence results for the PML method are reviewed. 
In the next section, we study the convergence of the DtN map on an artificial curve. With this result, the local exponential convergence is proved in Section 4. We construct a fourth-order accurate PML in section 5. Numerical experiments are presented with the PML-BIE method in Section 6. § PROBLEM DESCRIPTION The profile of the scattering problem is depicted in Figure <ref>. Let Γ_0 be a periodic and Lipschitz curve of period 2π in the x_1-direction, where and in the following (x_1,x_2) denotes the standard Cartesian coordinate system. Let Γ be a curve obtained from locally perturbing Γ_0 such that Γ\Γ_0 is bounded and Lipschitz (empty if Γ=Γ_0). Let Ω and Ω_0 be the two upper Lipschitz domains bounded by Γ and Γ_0, respectively. For technical reasons, we assume further that Ω (and hence Ω_0) satisfies the following geometrical condition (GC): (x_1,x_2)∈Ω ⟹ (x_1,x_2+h)∈Ω for all h≥ 0. Consider the following scattering problem in the perturbed domain Ω, Δ u + k^2 u = f, in Ω, u = 0, on Γ, where Δ = ∂_x_1^2 + ∂_x_2^2 denotes the 2D Laplacian, f∈H^-1(Ω)=[H^1(Ω)]' is the exciting source term with compact support, k>0 denotes the wavenumber of the source, and u denotes the wavefield. Such a problem can describe a TE-polarized electric field with nonzero component u propagating in Ω due to a perfect electric conductor Γ, or a sound field u due to a sound-soft surface Γ. To ensure the well-posedness of the above problem, one may enforce the following upward propagating radiation condition (UPRC): u(x) = 2∫_Γ_H ∂ G(x;y)/∂ y_2 u(y) ds(y), for x_2>H, where x=(x_1,x_2), y=(y_1,y_2), Γ_H = {(x_1,H):x_1∈ℝ} denotes a straight line strictly above Γ for some sufficiently large H>0 such that supp f⊂Ω_H:=Ω∩{x: x_2<H}, and G(x;y) = i/4 H_0^(1)(k|x-y|) denotes the fundamental solution of the Helmholtz equation (<ref>). Alternatively but more precisely, one can enforce the half-plane Sommerfeld radiation condition (hpSRC): lim_r→∞ sup_α∈[0,π] √(r)| ∂_r u(x) - i k u(x)| = 0, sup_r≥ R r^1/2|u(x)|<∞, and u∈ H_ρ^1(S_H^R), where x=(r cosα, H+r sinα), S_H^R={x∈Ω:|x_1|>R, x_2<H}, and H_ρ^1(·)=(1+x_1^2)^-ρ/2 H^1(·) denotes a weighted Sobolev space. In real applications, the total wave field u is usually generated by specifying an incident plane or cylindrical wave in Ω and cannot be characterized directly by the above scattering problem. Nevertheless, as shown in <cit.>, one can always decompose u as the sum of a known (or more easily computed) function and another unknown wave field satisfying (<ref>), (<ref>), and one of the two radiation conditions (<ref>) and (<ref>). Thus, all results in this paper can be trivially extended to plane-wave or cylindrical-wave incidences. We collect some well-known results from the existing literature in the following. The well-posedness of the above scattering problem has been justified in <cit.> as follows. Under the geometrical condition (GC), the scattering problem (<ref>) and (<ref>) with either the UPRC (<ref>) or the hpSRC (<ref>) at infinity has a unique solution u∈ H^1_loc(Ω) for any f∈H^-1(Ω) with compact support and any k>0. Moreover, u|_Ω_H∈ H_Γ^1(Ω_H):={v∈ H^1(Ω_H): v|_Γ=0}. To numerically solve the problem, it is advantageous to place a perfectly matched layer (PML) above Γ_H to truncate x_2, as shown in Figure <ref>. Mathematically, the PML can be characterized by the complexified coordinate transformation x̃_2 = x_2 + i∫_0^x_2 σ(t)dt, where σ(x_2)=0 for |x_2|≤ H and σ(x_2)>0 for |x_2|∈ (H,∞), which controls the absorption power of the PML.
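As a quick illustration of this coordinate stretching, the following minimal Python sketch evaluates x̃_2 and the PML parameter σ̂ = ∫_H^H+L α(x_2) dx_2 (with α=1+iσ) introduced below. The constant absorption profile, the values of H, L and σ_0, and all function names are illustrative assumptions only and differ from the smoother profile used in the numerical experiments later in the paper.

```python
import numpy as np

H, L, sigma0 = 3.0, 2.0, 2.0          # illustrative PML parameters (assumed values)

def sigma(x2):
    # absorption profile: 0 in the physical region x2 <= H, sigma0 inside the PML
    return np.where(x2 > H, sigma0, 0.0)

def trap(f, t):
    # simple trapezoidal rule: sum((f[i] + f[i+1])/2 * dt)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

def x2_tilde(x2, n=4001):
    # complexified coordinate x2 + i * int_0^{x2} sigma(t) dt
    t = np.linspace(0.0, x2, n)
    return x2 + 1j * trap(sigma(t), t)

def sigma_hat(n=4001):
    # PML parameter sigma_hat = int_H^{H+L} alpha(x2) dx2, with alpha = 1 + i*sigma
    t = np.linspace(H, H + L, n)
    return trap(1.0 + 1j * sigma(t), t)

print(x2_tilde(1.0), x2_tilde(H + L))     # real below the PML, complex inside it
print(sigma_hat(), abs(sigma_hat()))      # approximately L + i*sigma0*L for this profile
```

For this piecewise-constant choice σ̂ is simply L + iσ_0 L, so |σ̂| grows linearly with the PML thickness L; the quadrature is included only to mirror the general definition.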
The planar strip S_H^L={x: H< x_2 < H+L} is called the PML region, in which u(x) is analytically continued to ũ(x)=u(x_1,x̃_2). On the PML boundary Γ_H+L={x:x_2=H+L}, we assume that ũ, after going through the PML strip S_H^L, is absorbed sufficiently, so that it is reasonable to impose ũ=0. Let Ω_H+L=Ω∩{x:x_2<H+L}. Consequently, ũ satisfies the following PML problem ∇·(A∇ũ) + k^2 αũ = f, in Ω_H+L, ũ = 0, on Γ∪Γ_H+L, where A=diag{α(x_2),α^-1(x_2)} and α(x_2)=1+iσ(x_2). Let σ̂ = ∫_H^H+L α(x_2) dx_2. The well-posedness of the PML problem has been justified in <cit.> as shown below. Under the geometrical condition (GC), the PML problem (<ref>) and (<ref>) has a unique solution ũ∈ H_0^1(Ω_H+L):={v∈ H^1(Ω_H+L): v|_Γ∪Γ_H+L=0} for any f∈H^-1(Ω) and any k>0, such that for any bounded domain D⊂Ω_H, ||ũ||_H^1(D)≤ C ||f||_H̃^-1(Ω), provided that |σ̂| is sufficiently large, where C does not depend on L or σ. After the truncation of the vertical x_2-direction, Ω becomes a strip Ω_H+L which consists of two periodic semiwaveguides at infinity. Thus, existing techniques such as Floquet-Bloch transforms, marching operators governed by Riccati equations, and recursive doubling procedures can be used to terminate the two semiwaveguides by posing exact Neumann-to-Dirichlet or Dirichlet-to-Neumann maps. The resulting boundary value problem can then be solved by standard solvers. Naturally, one would ask how accurate the PML-truncated solution ũ is, compared with the total field u, in the physical domain Ω_H. Chandler-Wilde and Monk <cit.> first studied the PML convergence theory when Γ is a rough surface, not necessarily periodic at infinity. They proved that ũ converges to u at an algebraic rate in Ω_H, conjectured an exponential rate in any compact subset of Ω_H as the PML parameter |σ̂|→∞, and strictly proved this for a flat Γ. In a recent work <cit.>, Yu et al. proved an algebraic convergence rate of ũ in Ω_H for the locally perturbed periodic curve Γ under consideration and numerically verified an exponential convergence rate of ũ on compact subsets of Ω_H, yet a theoretical justification remains open. For the unperturbed case Γ=Γ_0, one of our authors in <cit.> adopted the Floquet-Bloch transform to first decouple the original problem into a series of subproblems, each of which possesses only quasi-periodic solutions, i.e., Bloch waves. The original problem, on the other hand, is written as the integral of these quasi-periodic solutions with respect to the quasi-periodicity parameters on a bounded interval. Based on contour deformation, the integration contour is modified near the Rayleigh anomalies, and the locally exponential convergence of the PML-truncated solution ũ to u was successfully established for 2k∉ℤ. If 2k∈ℤ, the two Rayleigh anomalies coincide, making the contour deformation technique in <cit.> break down. Nevertheless, a high-order algebraic convergence rate can still be proved, with a detailed study of the convergence rate of the quasi-periodic PML solutions near the Rayleigh anomalies. We refer to <cit.> for 3D biperiodic cases; the method proposed there is easily applied to 2D cases. The two results are summarized below.
If Γ is purely periodic, i.e., Γ=Γ_0, and if the geometrical condition (GC) is satisfied, then for any bounded open subset D⊂Ω_H and any f∈H^-1(Ω), provided that |σ̂| is sufficiently large: (1) If 0<2k∉ℤ^+, there are constants C,c_0>0 such that ||ũ - u||_H^1(D)≤ C e^-c_0|σ̂| ||f||_H^-1(Ω); (2) If 2k∈ℤ^+, for any constant 0<γ_0<1, there is a constant C=C(γ_0)>0 such that ||ũ - u||_H^1(D)≤ C|σ̂|^-4γ_0 ||f||_H^-1(Ω). The main contribution of this paper is to prove the exponential/algebraic convergence of ũ to u in any compact subset of Ω_H when Γ is a locally perturbed periodic curve, i.e., Γ≠Γ_0. Note that the method of the Floquet-Bloch transform in <cit.> cannot be trivially extended to this more general case, since the Floquet-Bloch transform is not valid for non-periodic domains. As the methodology does not rely on whether k is a half-integer or not, we shall assume 2k∉ℤ^+ for the moment. The basic idea goes as follows. Firstly, for each of the original scattering problem and the PML problem, we shall, following the approach in <cit.>, use a single-layer operator to express the field in the exterior of a bounded domain containing the perturbed part Γ\Γ_0, so as to establish a Dirichlet-to-Neumann (DtN) map that truncates the unbounded problem into a boundary value problem. The single-layer operator is defined by an equivalent transmission problem involving only the unperturbed periodic surface. Secondly, we shall justify the exponentially decaying difference between the two DtN maps as |σ̂| increases. Finally, for the two boundary value problems, we present the equivalent variational formulations with the aid of the two DtN maps, analyze their inf-sup conditions, and establish the exponential convergence theory. § DIRICHLET-TO-NEUMANN MAPS As shown in Figure <ref>, let T be a sufficiently smooth curve between the scattering surface Γ and Γ_H, with endpoints A and B on Γ. Let Ω_T be the domain bounded by Γ and T, and let Γ_T = T∪(Γ\Ω_T) be the curve consisting of T and the part of Γ outside Ω_T. We choose T properly such that Γ_T is Lipschitz and Ω_T satisfies (GC). For any given function g∈H^1/2(T)=[H^-1/2(T)]'⊂ H^1/2(Γ_T), consider the following two problems: find v∈ H^1_loc(Ω\Ω_T) such that (P1):{[ Δ v + k^2 v = 0, in Ω\Ω_T,; v = g, on Γ_T,; v satisfies (<ref>) or (<ref>), ]. and find ṽ∈ H^1(Ω_H+L\Ω_T) such that (P2):{[ ∇·(A∇ṽ)+ k^2 αṽ = 0, in Ω_H+L\Ω_T,; ṽ = g, on Γ_T,; ṽ = 0, on Γ_H+L. ]. One can find a function f_g∈H^1(Ω_H) with compact support in Ω_H such that f_g|_T=g. Then, the two functions v-f_g and ṽ-f_g respectively satisfy the original scattering problem and the PML problem with f replaced by Δ f_g + k^2 f_g∈H^-1(Ω_H). According to Theorems <ref> and <ref>, both (P1) and (P2) are well-posed. Thus, we can define two DtN maps T, T̃: H^1/2(T)→ H^-1/2(T) that are bounded and satisfy T v|_T=∂_ν v|_T and T̃ ṽ|_T =∂_ν ṽ|_T, where ν denotes the outer unit normal to T. Clearly, the two DtN maps T and T̃ can respectively serve as the transparent boundary conditions of the scattering problem and the PML problem to truncate them into boundary value problems. The purpose of this section is to estimate the difference between T and T̃ when |σ̂| is sufficiently large.
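Before carrying this out for the actual geometry, it is instructive to look at the simplest model situation in which no periodic surface is present and the boundary is the flat line Γ_H: there both DtN maps act diagonally in the Fourier variable ξ, with exact symbol iμ(ξ) and PML symbol -μ(ξ)cot(μ(ξ)σ̂), where μ(ξ)=√(k^2-ξ^2). The short sketch below is a model computation under these simplifying assumptions (with illustrative values of k and σ̂), not the single-layer construction used in this section; it shows that the two symbols differ by an exponentially small amount away from the branch points ξ=±k, while at the branch points the difference decays only like 1/|σ̂|, reminiscent of the special role played by half-integer wavenumbers in the periodic setting.

```python
import numpy as np

k = 1.7                                        # illustrative wavenumber

def mu(xi):
    # mu(xi) = sqrt(k^2 - xi^2) with the branch chosen so that Im(mu) >= 0
    return np.sqrt(k**2 - xi**2 + 0j)

def dtn_exact(xi):
    # symbol of the DtN map of the outgoing half-plane problem above Gamma_H
    return 1j * mu(xi)

def dtn_pml(xi, sigma_hat):
    # symbol of the DtN map of the PML-truncated strip of "complex width" sigma_hat
    m = mu(xi)
    return -m * np.cos(m * sigma_hat) / np.sin(m * sigma_hat)

xi_away = np.array([0.0, 1.0, 3.0])            # frequencies away from the branch points
xi_near = np.array([k + 1e-9])                 # essentially at the branch point xi = k
for L in (1.0, 2.0, 4.0):
    sigma_hat = L * (1.0 + 2.0j)               # Im(sigma_hat) grows with the PML thickness
    e_away = np.abs(dtn_pml(xi_away, sigma_hat) - dtn_exact(xi_away)).max()
    e_near = np.abs(dtn_pml(xi_near, sigma_hat) - dtn_exact(xi_near))[0]
    print(L, e_away, e_near, 1.0 / abs(sigma_hat))
```

Each printed row lists the PML thickness L, the worst symbol error over the sample frequencies away from ±k (which shrinks geometrically in L), the error essentially at ξ=k, and the reference value 1/|σ̂| that the latter tracks.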
To study this, we adopt the idea in <cit.> and <cit.> to use single-layer operators to express the two DtN maps.Consider two associate problems in the unperturbed domain Ω_0 and the PML region Ω_0,H+L:=Ω_0∩{x:x_2<H+L}: Given ϕ,∈ H^-1/2(T), find v∈ H^1_ loc(Ω_0) such that(P1'):{[ Δ v + k^2 v = 0, inΩ_0\T,; ∂_νv_+ - ∂_νv_- = ϕ,on T,; v = 0, onΓ_0,; vsatisfies (<ref>) or (<ref>), ].and ∈ H^1(Ω_0,H+L) such that(P2'):{[ ∇·(∇)+ k^2 α = 0,inΩ_0,H+L\T,; ∂_ν_+ - ∂_ν_- = , on T,;= 0,onΓ_H+L∪Γ_0, ].where the two subscripts + and - indicate that the normal derivatives are taken from the exterior and interior of Ω_T, respectively. Let Ω_0,H=Ω_0∩{x:x_2<H}. Then, one can construct functions f_ϕ,f_∈H^1(Ω_0,H) such that ∂_ν[f_ϕ]_+ - ∂_ν[f_ϕ]_- = ϕ and ∂_ν[f_]_+ - ∂_ν[f_]_- =. We also assume that there is a constact C>0 such that f_ϕ_H^1(Ω_0,H)≤ Cϕ_H^-1/2(T) and f__H^1(Ω_0,H)≤ C_H^-1/2(T). Again, v - f_ϕ and - f_ respectively satisfy the original scattering problem and the PML problem with f replaced by the two functions -Δ f_ϕ-k^2f_ϕ and -Δ f_-k^2f_ in H^-1(Ω_H).Thus, (P1') and (P2') (for sufficiently large L and ) are well-posed. Consequently, we can define two boundedoperators S, : H^-1/2(T) →H^1/2(T) such that Sϕ = v|_T and = |_T.Using the background Green functions for the two problems, it is easy to see that the two operators coincide with the standard single-layer operators. Before proceeding, we present some important properties of the single-layer operator S defined on T. For any k>0, the operator S is Fredholm of index zero. Moreover, there exists a smooth curve T such that suppf ⊂Ω_T, Γ_T satisfies (GC), and that S is boundedly invertible.The proof follows simply from the two works <cit.>. Consider problem (P1') for k=. The variational solution _ of (P1') for k= belongs to H^1_(τ)(Ω_0):={u∈ H_0^1(Ω_0): e^τ |x| u∈ H^1(Ω_0)} for τ∈(0,1), and the mapping ϕ ↦_ is bounded from H^-1/2(T) into H^1_(τ)(Ω_0) and even compact from H^-1/2(T) into L^2_(τ')(Ω_0):={u∈ H_0^1(Ω_0): e^τ' |x| u∈ L^2(Ω_0)}with τ'<τ (cf. <cit.>).Denote the operator mapping ϕ ↦ṽ_|_T by S_. The invertibility of S_ follows from the strong ellipticity of (Δ-1).It follows that( S- S_)ϕ=w|_T where w∈ H_0, loc^1(Ω_0) isthe solution toΔ w + k^2 w = -(k^2+1)ṽ_,inΩ_0.By <cit.>, the unique solvability of w to the previous equation (<ref>) yields the boundedness of the mappingL^2_(τ')(Ω_0)∋ṽ_↦ w∈H^1/2(T). This together with the compactness of ϕ ↦ṽ_ from H^-1/2(T)→ L^2_(τ')(Ω_0) proves the compactness of S- S_.To prove the invertibility of S, we justify that equation Sϕ = v|_T = 0 possesses the zero solution only. Let Ω_- be the domain bounded by T and Γ_0 and Ω_+=Ω_0\Ω_-. Since v|_T= 0, v|_Ω_+ solves the original scattering problem (<ref>) and (<ref>) with f≡ 0 and Γ replaced by ∂Ω_+. Thus, v|_Ω_+≡ 0 so that v|_Ω_- solves-Δ v =k^2 v, inΩ_-, v= 0, on∂Ω_-. If k^2 is not a Dirichlet eigenvalue of -Δ on Ω_-, then v|_Ω^-=0 so thatϕ = ∂_νv_+ - ∂_ν v_-=0, which justifies the injectivity and hence the bijectivity of S. Suppose now k^2 is a Dirichlet eigenvalue of -Δ for the specified curve T. Choose another curve T_0 intersecting Γ_0 at the endpoints of T such that the vertical distance between T_0 and Γ_0 is sufficiently small. Then, Fredrich's inequality implies that k^2 cannot be a Dirichlet eigenvalue of -Δ for the domain bounded by Γ_0 and T_0. 
Let ξ∈[0,1] and consider the following family of curvesT(ξ):={(x_1,ξ x_2^0(x_1)+(1-ξ)x_2^1(x_1)):(x_1,x_2^0(x_1))∈ T_0, (x_1,x_2^1(x_1))∈ T}.The corresponding sesquilinear form of Δ u+k^2 u for the domain Ω(ξ) bounded by Γ_0 and T(ξ) defines a linear operator L(ξ): H_0^1(Ω(ξ))→ [H_0^1(Ω(ξ))]^* that analytically depends on ξ and is Fredholm of index zero. Since L(1) is invertible, there exists at most countable values of ξ in [0,1] such that L(ξ) has a non-degenerate kernel. Therefore, for a sufficiently small parameter ϵ>0, there exist ξ_0∈(0,ϵ)such that suppf∈Ω(ξ_0) and that L(ξ_0) is invertible, i.e., k^2 is not an eigenvalue of -Δ when T is deformed to T(ξ_0). Choosing =ϕ, we have f_ϕ=f_ so that when 2k∉ℤ, Theorem <ref> implies || - v||_H^1(D) ≤ C e^-c_0||||-Δ f_ϕ-k^2f_ϕ||_H^-1(Ω_H)≤ Ce^-c_0 || ||ϕ||_H^-1/2(T),for any bounded domain D⊂Ω_0. Choosing D sufficiently large to contain T, we have by the trace theorem that ||(- S)ϕ|| = ||(-v)|_T|| ≤ C|| - v||_H^1(D) ≤Ce^-c_0 || ||ϕ||_H^-1/2(T),implying || -S||≤ Ce^-c_0 ||. They indicate the following lemma. Suppose 2k∉ℤ. For sufficiently large ||,is boundedly invertible and ||^-1 -S^-1|| ≤C e^-c_0 ||. First, choose sufficiently large || such that|| S^-1 -I||≤Ce^-c_0 || || S^-1|| < 1/2. By the method of Neumann series, S^-1 as a map fromH^1/2(T) to itselfis invertible and||( S^-1)^-1||≤ 2. One directly verifies that ^-1 =S^-1(S^-1)^-1 is bounded invertible with ||^-1||≤ 2 || S^-1|| and that||^-1- S^-1|| = ||^-1( I- S^-1)||≤ 2 C|| S^-1||^2 e^-c_0||. Now, for any g∈H^1/2(T), set ϕ =S^-1g in (P1') and = ^-1g in (P2').Then, T g = ∂_ν v_+ and T̃ g = ∂_νṽ_+. Lemma <ref> implies ||-ϕ||_H^-1/2(T)≤ Ce^-c_0 || ||g||_∈H^1/2(T). Clearly, the restriction of the solution v of (P1') onto Ω\Ω_T is the solution of (P1), and the restriction |_Ω_H+L\Ω_T the solution of (P2). We compare ∂_ν v_+ and ∂_ν_+ on T. Decomposing ϕ =+ (ϕ-) = ^-1 g + ( S^-1-^-1)g in (P1'), we see that v is the sum of v^1 where ϕ is replaced by ^-1g in (P1') and v^2 where ϕ is replaced by ( S^-1-^-1)g. Moreover, choosing the bounded domain D large enough to contain T, by (<ref>),||∂_ν v^1_+ - ∂_ν_+||_H^-1/2(T)≤ C || v^1 - ||_H^1(D)≤ Ce^-c_0 || ||g||_H^1/2(T).By Lemma <ref> and by the well-posedness of (P1'),||∂_ν v^2_+||_H^-1/2(T)≤ C || v^2||_H^1(D)≤C || ( S^-1-^-1)g||_H^-1/2(T)≤ Ce^-c_0 || ||g||_H^1/2(T).The triangular inequality then implies||( T - ) g||_H^-1/2(T)=||∂_ν v_+ - ∂_ν_+||_H^-1/2(T)≤ Ce^-c_0 || ||g||_H^1/2(T),giving rise toSuppose 2k∉ℤ. For sufficiently large , || T - || ≤C e^-c_0 ||. § LOCAL CONVERGENCE OF THE PML SOLUTIONLet Γ_p = ∂Ω_T∩Γ\T be the part of Γ bounded by the two endpoints of T. With the two DtN maps T andwell-defined, the original scattering problem (<ref>), (<ref>) and (<ref>) can now be truncated as the following boundary value problem( OP): {[ Δ u + k^2 u = f, inΩ_T,; u = 0, onΓ_p,;∂_ν u =T u, onT, ].and the PML problem (<ref>) and (<ref>) as ( TP): {[ Δ + k^2= f,inΩ_T,;= 0,onΓ_p,; ∂_ν = ,onT.; ].We note that in Problem (TP), equation (<ref>) reduces to the original Helmholtz equation since the computational region Ω_T is away from the PML region such that = and α = 1. Now, we consider the variational formulations of the two problems. 
Let V:={v∈ H^1(Ω_T): v|_Γ_T∩∂Ω_T = 0}, and a,ã: V× V→ℂ be two bilinear forms given by a(p,q) =-(∇ p,∇ q)_Ω_T + k^2(p,q)_Ω_T + ⟨ Tγ_T p, γ_T q⟩_T, ã(p,q) =-(∇ p,∇ q)_Ω_T + k^2(p,q)_Ω_T + ⟨γ_T p, γ_T q⟩_T,where (·,·)_Ω_T denotes the standard L^2 inner product, ⟨·,·⟩_T denotes the duality pair between H^-1/2(T) and H^1/2(T), and γ_T: V→H^1/2(T) denotes the trace operator. Clearly, (OP) is equivalent to the following weak formulation: Find u∈ V, such that(WOP): a(u,v) := ⟨ f,v⟩_Ω_T,∀ v∈ V,where ⟨·,·⟩_Ω_T denotes the duality pair between H^-1(Ω_T) and H^1(Ω_T). Similarly, (TP) is equivalent to the following weak formulation: Find ∈ V, such that(WTP):ã(,v) := ⟨ f,v⟩_Ω_T,∀ v∈ V. The two sesqui-linear forms a and ã induce two bounded operators B, : V→ V^* such that<Bp,q>_Ω_T = a(p,q),< p,q>_Ω_T = ã(p,q),for all p,q∈ V. Authors in <cit.> proved that B in fact is Fredholm. Thus,the uniqueness of problem (WOP) implies that B is bijectiveso that ||B^-1||≤ c^-1 for some positive constant c. Let δ B= B -. By Lemma <ref>, for 2k∉ℤ and for a sufficiently large L or , |<δ B p, q>_Ω_T|=|a(p,q) - ã(p,q)|= |⟨( T - )γ_Tp,γ_T q⟩| ≤C e^-c_0 || ||γ_Tp||_H^1/2(Ω_T)||γ_Tq||_H^1/2(Ω_T) ≤C e^-c_0 || ||p||_H^1(Ω_T)||q||_H^1(Ω_T),so that ||δ B||≤ C e^-c_0 ||. Thus, the method of Neumann series indicates thatis also bounded invertible as long as ||δ B||< c. Similar to the proof of Lemma <ref>, we derive that ||^-1 - B^-1|| ≤ Ce^-c_0 ||.Consequently, Problem (WTP) has a unique solution ∈ V and ||u - ||_H^1(Ω_T) = ||B^-1 f - ^-1 f||_H^1(Ω_T)≤C e^-c_0 || ||f||_H^-1(Ω).The above indicates that the PML solutionconverges to u exponentially in the compact domain Ω_T. We now claim that such a locally exponential convergence holds for any compact subdomain of the physical domain Ω_H.Set ϕ =S^-1γ_T u in (P1') and = ^-1γ_T in (P2').Clearly,the restriction of the solution v of (P1') onto Ω\Ω_T is u|_Ω\Ω_T, and |_Ω_H+L\Ω_T=|_Ω_H+L\Ω_T. Moreover, ||ϕ - ||_H^-1/2(T)≤||( S^-1-^-1)γ_T u ||_H^-1/2(T)+|| ^-1γ_T (u - ) ||_H^-1/2(T) ≤C e^-c_0 || ||f||_H^-1(Ω).As in section 3,we find two functions f_ϕ and f_in H^1(Ω_0,H) such that ∂_ν[f_ϕ]_+ - ∂_ν[f_ϕ]_- = ϕ, ∂_ν[f_]_+ - ∂_ν[f_]_- =, and ||f_ϕ - f_||≤ C||ϕ - ||. Applying Theorem <ref> (with f replaced by (-Δ - k^2)[f_ϕ-f_]) and then Theorem <ref> (with f replaced by (-Δ-k^2) f_ϕ∈H^-1(Ω_H)),we obtain the exponential convergence of the PML solution for any bounded domain exterior of Ω_T. Consequently, we obtain Suppose 2k∈ℝ^+\ℤ. Under the geometrical condition (GC) and provided thatis sufficiently large,|| - u||_H^1(D)≤ C e^-c_0 ||||f||_H^-1(Ω)for any bounded open subset D⊂Ω_H. Following exactly the same procedure above, we obtain the algebraic convergence when k is a half integer as stated below.Suppose 2k∈ℤ^+. Under the geometrical condition (GC) and provided that || is sufficiently large, for any fixed γ_0∈(0,1),|| - u||_H^1(D)≤ C ||^-4γ_0||f||_H^-1(Ω)for any bounded open subset D⊂Ω_H. In <cit.>, one of our authors directly applied the Floquet-Bloch transformto establish the same PML convergence theory for purely periodic surfaces. The method is extendable to bounded penetrable medium or locally perturbed periodic surfaces, following the domain transformation method proposed in <cit.>. Nevertheless, when the perturbation changes the topology, for example when the scattering domain contains an impenetrable obstacle, the method is no longer valid since the domain transformation can no longer be constructed. 
In comparison, our method is still extendable to all of the three situations provided that the original scattering problem is well-posed, and the rest is just a routine work as proposed in this paper.§ A FOURTH-ORDER CONVERGENT PMLWe now study a specific example to illustrate that PML absorbs outgoing waves at most fourth order such that the convergence rate in Theorem <ref>, is nearly sharp. Let us consider the following problem:Δ u + (k^2+ϵsin x_1 h(x_2))u =-δ(x - x^*), inℝ_+^2,u =0, onΓ,where 0<ϵ≪ 1,h(x_2) =1 x_2∈(0,1);0 otherwise,and x^* = (x_1^*,x_2^*) with x_2^*>1. Without loss of generality, we assume x_1^*=0. Instead of studying a scattering problem with a periodic surface, we here consider a periodic layered structure in the half space ℝ^2_+. This approach is to simplify the representation by avoiding the huge computational complexity brought by the domains transformations from the periodic domain to ℝ^2_+. However, the idea is extended without any difficulty forperiodic surface scattering problems. From the perturbation theory, the well-posedness of the problem (<ref>)-(<ref>) is ensured for sufficiently small ϵ. Moreover, it is easy to see that the solution is analytic w.r.t. ϵ at ϵ = 0.Thus, we suppose u = ∑_j=0^∞ u_jϵ^j.The leading term u_0 is Green's function of the half-space ℝ^2_+ given byu_0(x) = /4[H_0^(1)(k|x - x^*|) - H_0^(1)(k|x + x^*|)] = 1/4π∫_-∞^+∞e^-ξ x_1[e^μ(ξ) |x_2-x_2^*|-e^μ(ξ)(x_2+x_2^*)]/μ(ξ)dξ,where μ(ξ) = √(k^2-ξ^2) with the negative real axis as its branch cut. The second term u_1 is governed by the following source problem:Δ u_1 + k^2 u_1 =-sin x_1 h(x_2) u_0, x_2 > 0u_1 =0, on x_2 = 0.In the following, let f̂ denote the Fourier transform of a generic function f(x_1) w.r.t. x_1 given by(ξ) = ∫_-∞^+∞ f(x_1) e^ξ x_1 dx_1.Taking the x_1-Fourier transform of equations (<ref>) and (<ref>), ”_1(x_2;ξ) + μ^2(ξ) _1(x_2;ξ) = /2h(x_2) [_0(x_2;ξ+1) - _0(x_2;ξ-1)], x_2 > 0, _1(x_2;ξ) =0, on x_2 = 0.By (<ref>), _0(x_2;ξ) = 1/2e^μ(ξ) |x_2-x_2^*|-e^μ(ξ)(x_2+x_2^*)/μ(ξ).For x_2>1 such that h(x_2)≡ 0, we assume_1(x_2;ξ)= A(ξ) e^μ(ξ) (x_2 - 1), x_2 > 1.For x_2<1 such that h(x_2)≡ 1 and x_2^*>x_2, (<ref>) implies _0(x_2;ξ) = 1/2e^μ(ξ) |x_2-x_2^*|-e^μ(ξ)(x_2+x_2^*)/μ(ξ) = - e^μ(ξ)x_2^*sin(μ(ξ) x_2)/μ(ξ).Thus,_1^-(x_2;ξ) = 1/2[sin(μ(ξ+1)x_2)/μ(ξ+1)(2ξ+1) e^μ(ξ+1) x_2^* + sin(μ(ξ-1)x_2)/μ(ξ-1)(2ξ-1) e^μ(ξ-1) x_2^*]provides a special solution such that we assume_1(x_2;ξ) = _1^-(x_2;ξ) + B(ξ) sin(μ(ξ) x_2), x_2 < 1.The continuity condition on x_2=1 leads to the following linear systemA(ξ) = _1^-(1;ξ) + B(ξ) sin(μ(ξ)),A(ξ)= [_1^-]'(1;ξ)/μ(ξ) + B(ξ) cos(μ(ξ)).On solving the linear system, we obtain for x_2>1 that_1(x_2;ξ)= 1/2{e^μ(ξ+1) x_2^*/2ξ+1[ sin(μ(ξ+1))/μ(ξ+1)cos(μ(ξ)) -sin(μ(ξ))/μ(ξ)cos(μ(ξ+1))]+e^μ(ξ-1) x_2^*/2ξ-1[ sin(μ(ξ-1))/μ(ξ-1)cos(μ(ξ)) -sin(μ(ξ))/μ(ξ)cos(μ(ξ-1))] }e^μ(ξ) x_2.For simplicity, let the first line in (<ref>) be denoted by _11(x_2;ξ) and the second line by _12(x_2;ξ).As PML is imposed in the region x_2 > 1, the closed form of _1 in x_2∈(0,1) is not needed here. In the following, we shall prove that the truncation error of PML terminating u_1 could converge only algebraically. §.§ PML-truncated problemNow, we introduce a PML in x_2∈ (H,H+L) by complexifying x_2 via (<ref>). Recall = ∫_H^H+Lα(x_2) dx_2. 
The PML-truncated problem is characterized by∂^2_x_1 v + ∂_x_2/1+σ(x_2)[∂_x_2v/1+σ(x_2)]+ (k^2+ϵsin x_1 h(x_2))v =-δ(x - x^*), inℝ_+^2,v =0, onΓ∪Γ_H+L.The error function w(x)=v(x)-u(x_1,_2) is governed by∂^2_x_1 w + ∂_x_2/1+σ(x_2)[∂_x_2w/1+σ(x_2)]+ (k^2+ϵsin x_1 h(x_2))w =0, inℝ_+^2,w(x_1,0) =0,w(x_1,H+L) =-u(x_1,).Similar as before, we assume w = ∑_j=0^∞ w_jϵ^j as the problem is well-posed and its solution is analytic at ϵ=0. It can be seen that the leading term w_0 solves∂^2_x_1 w_0 + ∂_x_2/1+σ(x_2)[∂_x_2w_0/1+σ(x_2)]+ k^2w_0 =0, x_2>0w_0(x_1,0) =0,w_0(x_1,H+L) =-u_0(x_1,).Using the method of Fourier transform, it can be seen that since x_2>x_2^*,_0(x_2;ξ) = e^μ(ξ) sin(μ(ξ)x_2^*)/μ(ξ)sin(μ(ξ) _2)/sin(μ(ξ) ).Next, w_1 is governed by ∂^2_x_1 w_1 + ∂_x_2/1+σ(x_2)[∂_x_2w_1/1+σ(x_2)] + k^2w_1 =-sin x_1 h(x_2) w_0, x_2>0w_1(x_1,0) =0,w_1(x_1,H+L) =-u_1(x_1,).Again, we take the Fourier transform of the above equations,1/1+σ(x_2)[_1'/1+σ(x_2)]' + μ^2(ξ)_1= /2h(x_2) [_0(x_2;ξ+1) - _0(x_2;ξ-1)], _1(0;ξ) =0, _1(H+L;ξ) =-_1(;ξ).For x_2>1, we assume_1(x_2;ξ) = A_1(ξ) e^μ(ξ) _2 + B_1(ξ) e^-μ(ξ) _2,with the two unknowns A_1 and B_1. For x_2∈(0,1), we find_1^-(x_2;ξ) =-1/2[ e^μ(ξ+1) sin(μ(ξ+1)x_2^*)/μ(ξ+1)sin(μ(ξ+1) x_2)/sin(μ(ξ+1) )(2ξ+1)+e^μ(ξ-1) sin(μ(ξ-1)x_2^*)/μ(ξ-1)sin(μ(ξ-1) x_2)/sin(μ(ξ-1) )(2ξ-1)]is a special solution such that_1(x_2;ξ) = _1^-(x_2;ξ) + C_1(ξ) sin(μ(ξ) x_2).with the unknown C_1(ξ). The continuity conditions on x_2=1 and the boundary condition on x_2=L+D implyA_1(ξ) e^μ(ξ)+ B_1(ξ) e^-μ(ξ) =-_1(;ξ),A_1(ξ) e^μ(ξ)+ B_1(ξ) e^-μ(ξ)= _1^-(1;ξ) + C_1(ξ) sin(μ(ξ)),A_1(ξ) e^μ(ξ)- B_1(ξ) e^-μ(ξ)= [_1^-]'(1;ξ)/μ(ξ) -C_1(ξ) cos(μ(ξ)).On solving the above equation, C_1(ξ) =-_1(;ξ) /sin(μ(ξ))+cos(μ(ξ)[-1])/2sin(μ(ξ))[ e^μ(ξ+1) sin(μ(ξ+1)x_2^*)/μ(ξ+1)sin(μ(ξ+1) )/sin(μ(ξ+1) )(2ξ+1)+e^μ(ξ-1) sin(μ(ξ-1)x_2^*)/μ(ξ-1)sin(μ(ξ-1) )/sin(μ(ξ-1) )(2ξ-1)]+sin(μ(ξ)[-1])/2sin(μ(ξ))[ e^μ(ξ+1) sin(μ(ξ+1)x_2^*)/μ(ξ)cos(μ(ξ+1) )/sin(μ(ξ+1) )(2ξ+1)+e^μ(ξ-1) sin(μ(ξ-1)x_2^*)/μ(ξ)cos(μ(ξ-1) )/sin(μ(ξ-1) )(2ξ-1)].It is left to justifyfor any x_2∈(0,1) and a finite fixed x_1∈ℝ, w_1(x) = 1/2π∫_-∞^+∞_1(x_2;ξ) e^-ξ x_1 dξdecays algebraically when k is a half-integer and exponentially otherwise as ||→∞.§.§ Algebraic convergenceTaking the inverse Fourier transform of _1, we obtainw_1(x) = 1/2π∫_-∞^+∞_1(x_2;ξ)e^-ξ x_1dξ = w_1^A(x) + w_1^E(x),where we have definedw_1^A(x) = 1/2π∫_-∞^+∞-sin(μ(ξ)x_2) _1(;ξ) /sin(μ(ξ))e^-ξ x_1dξw_1^E(x) = 1/2π∫_-∞^+∞[_1 + sin(μ(ξ)x_2) _1(;ξ) /sin(μ(ξ))]e^-ξ x_1dξ.We now show that the error function w_1 decays algebraically for k=1/2 as the branch points of μ(ξ+1) and μ(ξ) coincide when |ξ+1/2|≪ 1 and the branch points of μ(ξ-1) and μ(ξ) coincide when |ξ-1/2|≪ 1.Let k=1/2 and =P(α+β) for some constants α,β>0, and let x_1 be such that (2π)^-1x_1∉ℤ. Then,w_1(x) = ^-4-π^3x_2x_2^*/45sinx_1/2 +O(P^-5), P→+∞. We justify that w_1^A decays algebraically with P. Using the same idea, it is straightforward to derive that w_1^E decays exponentially. 
By (<ref>) and by _11(x_2;ξ)=-_12(x_2;-ξ),w_1^A =-1/2π∫_-∞^∞sin(μ(ξ)x_2)_11(;ξ)/sin(μ(ξ))e^-ξ x_1dξ-1/2π∫_-∞^∞sin(μ(ξ)x_2)_12(;ξ)/sin(μ(ξ))e^-ξ x_1dξ=-1/2π∫_-∞^∞sin(μ(ξ)x_2)_11(;ξ)/sin(μ(ξ))e^-ξ x_1dξ+1/2π∫_-∞^∞sin(μ(ξ)x_2)_11(;ξ)/sin(μ(ξ))e^ξ x_1dξ=∫_-∞^+∞/2πf(ξ)e^μ(ξ+1)x_2^*+μ(ξ) sin(μ(ξ)x_2)sin(ξ x_1)/sin(μ(ξ)) dξ=(∫_|ξ + 1/2|<ϵ_0 + ∫_|ξ - 1/2|<ϵ_0 + ∫_min|ξ± 1/2|>ϵ_0)⋯ dξ=:I_1(x;x_2^*) + I_2(x;x_2^*) + I_3(x;x_2^*),where ϵ_0>0 is a sufficiently small constant such that sin(x_1 ξ) >0 for any |ξ+1/2|<ϵ_0, and we have definedf(ξ) = 1/(2ξ+1)[ sin(μ(ξ+1))/μ(ξ+1)cos(μ(ξ)) -sin(μ(ξ))/μ(ξ)cos(μ(ξ+1))].It can be easily verified that f(ξ) is an analytic function on ℝ with f(-1/2)=-1/3. It is straightforward to verify that for |ξ± 1/2|>ϵ, there exists a positive constant δ_0, that depends only on ϵ_0, such that |I_3(x;x_2^*)|≤ C e^-δ_0 P.As for I_2(x;x_2^*), we deform the path 1/2-ϵ_0 → 1/2+ϵ_0 to the lower half circle {z: |z-1/2|=ϵ_0,Im(z)≤ 0} in ℂ^+-. It is then straightforward to deduce that I_2 decays exponentially with P. It is left to prove that I_1(x;x_2^*) decays algebraically as P→+∞. Note that the idea of path deformation breaks down here as the whole line segment (-1/2-ϵ_0,-1/2+ϵ_0) always overlaps with one of branch cuts of μ(ξ+1) and μ(ξ) <cit.>. Now, letI_11(x;x_2^*) = ∫_-1/2^-1/2+ϵ_0/2πf(ξ)e^μ(ξ+1)x_2^*+μ(ξ) sin(μ(ξ)x_2)sin(ξ x_1)/sin(μ(ξ)) dξ,I_12(x;x_2^*) = ∫_-1/2-ϵ_0^-1/2/2πf(ξ)e^μ(ξ+1)x_2^*+μ(ξ) sin(μ(ξ)x_2)sin(ξ x_1)/sin(μ(ξ)) dξ.For I_11, let ξ = -1/2+t^2 such that t∈(0,√(ϵ_0)). Then,I_11(x;x_2^*) = ∫_0^√(ϵ_0) F_11(t) e^ t√(1-t^2)t^2 /sin(t√(1-t^2)) dt,where we have definedF_11(t) = /πf(-1/2+t^2)e^-t√(1+t^2)x_2^*sin(t√(1-t^2)x_2)sin((-1/2+t^2) x_1)/t,analytic at t=0. We further introduce a new variable s = t√(1-t^2) with s∈ (0,√(ϵ_0-ϵ_0^2)) to transformI_11(x;x_2^*)= ∫_0^√(ϵ_0) G_11(s)s^2 / 1-e^-2 s ds,whereG_11(s)=2 F_11(t(s))t'(s)/1-t^2(s),analytic at s=0. We first claim|∫_0^√(ϵ_0)[G_11(s) - G_11(0) - G_11'(0) s]s^2 / 1-e^-2 s ds|=O(P^-5), P→∞.This can be seen by L.H.S.≤C∫_0^√(ϵ_0)s^4 / |1-e^-2 s| ds≤ C∫_0^√(ϵ_0)s^4 / e^2s Pβ-1 ds ≤CP^-5∫_0^+∞s^4/e^2s β- 1 ds,for some generic constant C. Next, apply the Cauchy integral formula, we get∫_0^√(ϵ_0) [G_11(0) + G_11'(0) s]s^2 / 1-e^-2 s ds = ^-3∫_0^√(ϵ_0) G_11(0)s^2 / 1-e^-2 s ds + ^-4∫_0^√(ϵ_0) G_11'(0)s^3 / 1-e^-2 s ds ∼ ^-3G_11(0)∫_0^(α+β)∞s^2 / 1-e^-2 s ds + ^-4G_11(0)∫_0^(α+β)∞s^3 / 1-e^-2 s ds = ^-3G_11(0)∫_0^+∞- s^2 / 1-e^2 s ds + ^-4G_11'(0)∫_0^+∞s^3 / 1-e^2 s ds, P→∞.Thus,I_11(x;x_2^*) = ^-3G_11(0) ∫_0^+∞- s^2 / 1-e^2 s ds + ^-4G_11'(0) ∫_0^+∞s^3 / 1-e^2 s ds +O(P^-5), P→∞.Following the same approach,we get I_12(x;x_2^*) = ^-3G_12(0)∫_0^+∞s^2 / 1-e^2 s ds + ^-4G_12'(0)∫_0^+∞s^3 / 1-e^2 s ds +O(P^-5), P→∞. By directly computing the involved coefficients and the equation∫_0^+∞s^3 / 1-e^2 s ds = -π^4/240,we get the desired result.As w_1 decays algebraically in the physical region {x:x_2∈(0,1)} as →∞, one can make ϵ sufficiently small to ensure that ϵ w_1 becomes the dominant term of w considering w_0 decays exponentially. Consequently, there exists a compact region D such thatmax_x∈ D |w(x)| ∼ C P^-4,for P≫ 1,for some constant C>0 depending on ϵ. For 2k∈ℤ^+\{1}, it can be shown in a similar fashion that w_1 decays exponentially. Nevertheless, one shall see that there exists some integer j≥ 2, such that the high-order error term w_j decays algebraically. 
In other words, the error function w shall always decay algebraically as long as k is a half-integer.§ NUMERICAL EXAMPLESIn this section, we carry out several experiments to validate the previously established theory. In all examples, the period of the scattering surface is set to be 2π the same as before. Thus, the half-integers become the exceptional case where the convergence rate of the PML is downgraded to be algebraic. To observe such a phenomenon, we require the PML truncation error dominates the numerical error so that the accuracy of the numerical solution becomes essential. Thus, the recently developed high-accuracy PML-BIE method <cit.> becomes a suitable solver to check the phenomenon.Basically, the PML-BIE method separates Ω_H into unit cells, establish BIEs on the boundary of the unit cell containing the perturbed part Γ\Γ_0, and then evaluateselsewhere via Green's representation theorem; in Appendix A, we present the basic idea of this numerical solver.As depicted in Figure <ref>, we consider four different surfaces in the following: Γ_1:A sine curve x_2=sin x_1;Γ_2: A sine curve x_2=sin x_1 locally perturbed by the straight line {x:x_2=0,x_1∈(-π,π)};Γ_3: A locally perturbed binary grating.Γ_4: The union of a sine curve x_2=sin x_1 and the boundary of a non-penetrable obstacle occupying the region x_1^2 + (x_2-2.8)^2≤ 0.4^2. To setup the PML, we takeσ(x_2)={[ 2f^6_2/f^6_1+f^6_2,x_2∈[H,H+L/2],;2, x_2≥ H+L/2, ].in (<ref>) to ensure that ũ is sufficiently smooth across x_2=H, where f_1=(1/2-1/6)ξ^3+ξ/6+1/2, f_2=1-f_1, ξ=2x_2-(2H+L/2)/L/2.In all examples, we consider only point-source incidences at the same source point (0,1.5), and compute numerical solutions in D=[-0.3,0.3]×[1.2,1.8], sufficiently away from the three aforementioned scattering surfaces {Γ_j}_j=1^3, to ease the PML-BIE method for accurately computing u in D.We take H=3 for the first three surfaces and H=4 for the last surface Γ_4, and S=2.8 in all examples, use a sufficiently refined mesh in the PML-BIE solver, and let L vary to check the accuracy of the PML.A reference solution ^ ref is defined as the numerical solution for a sufficiently large L, and the H^1-error is then defined by E_ rel = || - ^ ref||_H^1(D)/||^ ref||_H^1(D).Certainly, a discrete H^1-norm is used to approximate the continuous norm ||^ ref||_H^1(D) as ^ ref is available on a grid in D in the numerical solver. To illustrate the affection of the wavenumber k on the decaying rate of PML, we consider two groups of values of k:(i) k∈{3.1,3.01,3.001,3}.(ii) k∈{1.5,3,6};Numerical results are shown in Figures <ref>, <ref>, <ref>, and <ref>. We make several observations below. Firstly, when k is sufficiently away from half-integers, E_ rel decays exponentially as the PML thickness L increases. Secondly,as k approaches 3, the decaying rate decreases dramatically, making the accuracy goes down from 14 digits to merely 6 digits (c.f. Figure <ref>); this was not observed in the numerical results of <cit.> due to the limited accuracy of the FEM solver. Lastly, the convergence rate seems to be independent of k as k varies in half-integers; heuristically, the decaying exponent is usually proportional to k for PML that is capable of exponentially absorbing outgoing waves. This is a crucial evidence for the algebraically decaying rate for PML truncation errors at half-integer wavenumbers.§ CONCLUSIONS AND DISCUSSIONSThis paper established the PML convergence theory for the problem of wave scattering by a locally perturbed periodic surface. 
For either the original or the PML-truncated problem, we solved an associated scattering problem with the unperturbed periodic surface to construct the Dirichlet-to-Neumann map on a bounded curve that encloses the whole perturbed region. Using the previous PML convergence theory for unperturbed periodic surfaces <cit.>, we justified that the difference between the two DtN maps on the same bounded curve is exponentially small, or algebraically small for half-integer wavenumbers, as the PML parameter |σ̂| increases. Consequently, the convergence of the PML solution to the true solution in any compact region was established. We have found that the PML deteriorates in periodic structures as the wavenumber k approaches any half-integer. Our theory indicates that the PML truncation error can then achieve at most a fourth-order convergence rate. To ensure accuracy, the PML must be made as thick as possible (cf. the nine-wavelength-thick PML in Figure <ref> that retrieves only 6 digits), which makes the method less attractive. Thus, a truncation technique that is uniformly accurate for all wavenumbers is desired in practice. We shall investigate this issue in a future work. § THE PML-BIE METHOD In the appendix, we briefly introduce the high-accuracy PML-BIE method developed in <cit.>. For simplicity, we consider the unperturbed case Γ=Γ_0. For the PML problem (<ref>) and (<ref>), we assume f=-δ(x-x^*) for x^*=(x_1^*,x_2^*)∈Ω_H, such that the BIE method is sufficient to get ũ in Ω_H. As shown in Fig. <ref>, the method basically consists of three steps: I. Divide the domain Ω_H+L into three regions by two vertical lines x_1=± R for some sufficiently large R>0 with |x_1^*|<R; II. Compute Neumann-to-Dirichlet (NtD) operators N^± that map ∂_ν ũ to ũ on the two boundaries x_1=± R, where ν denotes the unit outer normal; III. Solve the resulting boundary value problem in Ω_H+L∩{x:|x_1|<R}. Step II is essential as it truncates the unbounded domain Ω_H+L. Without loss of generality, we compute the NtD operator N^+ on Γ^+. In doing so, we split the periodic domain Ω_H+L∩{x:x_1>R} into identical unit cells Ω_n^+:=Ω_H+L∩{x: R+2π n<x_1<R+2π(n+1)}, n=0,1,⋯. Let Γ_n^+=Ω_H+L∩{x:x_1=R+2π n}. We define the marching operator R^+: H^-1/2(Γ^+)→ H^-1/2(Γ^+) (here we suppress the subscript of Γ_n^+ as the related Sobolev spaces are independent of n) that maps ∂_x_1 ũ on Γ_n^+ to ∂_x_1 ũ on Γ_n+1^+; it is proved that R^+ does not depend on n and that || R^+||< 1. On the other hand, in each unit cell, due to the Dirichlet boundary conditions on Γ_H+L and Γ, we find the NtD operators N^(0)_ij: H^-1/2(Γ^+)→H^1/2(Γ^+), i,j=1,2, for one unit cell that satisfy [ N^(0)_11 N^(0)_12; N^(0)_21 N^(0)_22 ] [ ∂_x_1 ũ^+_n; ∂_x_1 ũ^+_n+1 ]=[ ũ^+_n; ũ^+_n+1 ], where ũ^+_n is the trace of ũ on Γ_n^+, and ∂_x_1 ũ^+_n can be regarded as the normal derivative of ũ on Γ_n^+. Then, R^+ is governed by the following Riccati equation N^(0)_21 +N^(0)_22 R^+ =N^(0)_11 R^+ +N^(0)_12[ R^+]^2. In fact, the above procedure also applies if we use the NtD operators N_ij^(m) for 2^m consecutive unit cells; we can iteratively obtain N_ij^(m+1) from N_ij^(m) based on the continuity of ũ and ∂_x_1 ũ across the common interface of two blocks of 2^m cells. Then, we get N^(m)_21 +N^(m)_22[ R^+]^2^m =N^(m)_11[ R^+]^2^m +N^(m)_12[ R^+]^2^m+1, or [ R^+]^2^m = ( N^(m)_22- N^(m)_11)^-1[ N^(m)_12[ R^+]^2^m+1 -N^(m)_21]. Let M be sufficiently large such that [ R^+]^2^M+1≈ 0. Eq. (<ref>) then provides a backward iteration to approximate [ R^+]^2^m, m=M,⋯, 0. Then, ũ_0^+ = [ N^(0)_11 +N^(0)_12 R^+]∂_x_1 ũ_0^+, so that N^+ =N^(0)_11 +N^(0)_12 R^+. One similarly obtains N^- on x_1=-R.
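To make the recursive doubling and the backward Riccati sweep concrete, the following self-contained Python sketch runs the same two steps on a one-dimensional toy problem u''+k^2u=0 posed on a chain of unit cells, in which every NtD block is a complex scalar and a small positive imaginary part of k plays the role of the PML absorption, so that the marching operator is a contraction. The cell NtD entries, the merging formula and the chosen parameters are illustrative assumptions of this toy model; its exact answers R^+=e^{ik} and N^+=1/(ik) are available for comparison.

```python
import numpy as np

# 1D toy model of the construction above: u'' + k^2 u = 0 on a chain of unit cells.
# All NtD "blocks" are 1x1 (complex scalars); Im(k) > 0 stands in for the PML
# absorption, so that the marching operator satisfies |R| < 1.
k = 2.0 + 0.05j

def ntd_cell(ell):
    # NtD blocks of one segment of length ell: [u(0), u(ell)] = N [u'(0), u'(ell)]
    s, c = np.sin(k * ell), np.cos(k * ell)
    return c / (k * s), -1.0 / (k * s), 1.0 / (k * s), -c / (k * s)

def merge(A, B):
    # NtD blocks of two stacked segments, eliminating the common interface
    A11, A12, A21, A22 = A
    B11, B12, B21, B22 = B
    K = 1.0 / (A22 - B11)
    return (A11 - A12 * K * A21, A12 * K * B12,
            -B21 * K * A21, B22 + B21 * K * B12)

M = 8                                          # 2^(M+1) cells are treated in total
levels = [ntd_cell(1.0)]
for _ in range(M):                             # recursive doubling: N^(m) -> N^(m+1)
    levels.append(merge(levels[-1], levels[-1]))

R = 0.0                                        # start from [R]^(2^(M+1)) ~ 0
for N11, N12, N21, N22 in reversed(levels):    # backward Riccati sweep down to m = 0
    R = (N12 * R - N21) / (N22 - N11)

N11, N12, _, _ = levels[0]
N_plus = N11 + N12 * R                         # NtD map of the semi-infinite chain
print(abs(R - np.exp(1j * k)))                 # exact marching operator: e^{ik}
print(abs(N_plus - 1.0 / (1j * k)))            # exact half-line NtD: 1/(ik)
```

In the two-dimensional method the blocks are operators (matrices) obtained from unit-cell PML-BIE solves, and every scalar division above becomes the corresponding matrix inverse.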
Numerically, to approximate N^+, the most significant step is to approximate N_ij^(0) in (<ref>). As the PML is involved in the unit cell Ω_n^+, the high-accuracy PML-BIE method originating in <cit.> is a suitable way to approximate the NtD operator N^0 that maps ∂_νũ to ũ on the boundary ∂Ω_n^+. Then, an algebraic manipulation based on the boundary conditions on Γ∪Γ_H+L yields numerical approximations of N_ij^(0), so that R^+ and N^+ are approximated. Once we get N^±, the resulting boundary value problem can be solved easily via a standard BIE formulation. Note that G(x;x^*)= (i/4) H_0^(1)(k|x-x^*|) must first be extracted from the total field ũ to eliminate the singularity from f=-δ(x-x^*).

In the following examples, we choose the region on Γ|_[-T/2,T/2]×[0,H], where Γ|_[-T/2,T/2], with T=1, represents the restriction of Γ to [-T/2,T/2] and H=3 represents the height of the physical region. Example 2: a sine curve. In the second example, we assume that Γ is a sine curve, x_2=sin(2π x_1+π). For the cylindrical incidence, we discretize each smooth segment of any unit cell by 600 grid points and set the refractive index n=√(2). For the plane-wave incidence, we discretize each smooth segment of any unit cell by 900 grid points, set the refractive index n=1.03, and take θ=π/3. We use the same method as in Example 1. Because there is no exact solution, we take the numerical solution with L=3.2 and S=2.8 as the reference solution, where L represents the thickness and S represents the scale. Example 3: a locally perturbed sine curve. In the third example, we assume that the sine curve Γ:x_2=sin(2π x_1+π) is locally perturbed, with the part over x_1∈[-0.5,0.5] replaced by the line segment [-0.5,0.5]×{0}. For the cylindrical incidence, we discretize each smooth segment of any unit cell by 600 grid points and set the refractive index n=√(2). For the plane-wave incidence, we discretize each smooth segment of any unit cell by 900 grid points, set the refractive index n=1.03, and take θ=π/3. We use the same method as in Example 1. Because there is no exact solution, we take the numerical solution with L=3.2 and S=2.8 as the reference solution, where L represents the thickness and S represents the scale. Example 4: a locally perturbed binary grating.

§ CONCLUSION AND DISCUSSION

In this section, we briefly discuss the extension of the convergence theory of the vertical-direction PML to scattering problems in locally perturbed layered media. First, the idea in this paper can in fact be easily extended to two-dimensional layered media that are uniform along the horizontal direction at infinity. As indicated above, the convergence theory of the vertical-direction PML requires two essential parts: the convergence theory of the source problem when the surfaces are unperturbed, and the well-posedness of the perturbed scattering problem. Obviously, Theorem <ref> is readily applicable to the case when Ω is a locally perturbed half-space, i.e., when Γ is a local perturbation of a straight line; even when there are two layers or more, the former part can be obtained via the technique of the Fourier transform along the horizontal direction. Second, it is natural to place PML along both directions, rather than along the vertical direction only, as complex coordinate stretching applies in both directions. A significant advantage of doing so is that the unbounded problem is truncated to a Dirichlet boundary value problem, so that standard solvers can easily apply. Nevertheless, challenges come along as well. The PML convergence theory is more difficult to study, as the horizontal truncation makes the Fourier transform along the horizontal direction break down.
For two-layer media, Chen and Zheng <cit.> established the PML convergence theory based on an essential fact that background Green's functions decay exponentially in the horizontal and vertical PMLs. However, when there are three or more layers,the existence of propagating guided modes makes it harder to tune the horizontal PML parameters, let alone the instability of PML when complex modes exist <cit.>.§ PROBLEM SETUPNotation and equations: * x = (x_1,x_2)^T;* Governing equations:Δ u + k^2 u =0, inΩ,u =0, onΓ, where Ω denotes the scattering domain, Γ denotes a periodic surface of period 2π. For the moment, we assume Γ to be Lipschitz. * For any x^*∈Ω, Green's function G(x,x^*) satisfies Δ G(x,x^*) + k^2 G(x,x^*) =-δ(x-x^*), inΩ, G(x,x^*)= 0, onΓ.* PML region S_H^L={x:L≤ x_2≤ L+H}, the complexified transformation is x̂_2 = x_2 + ∫^x_2σ(t) dt, and the related boundary condition for an outgoing wave u^ og is u^ og = 0, onΓ_L+H={x:x_2=L+H}.* PML truncated domain Ω_ PML = Ω∩{x:x_2<L+H}. * Physical domain S_H = {x∈Ω:x_2<H}. * PML parameter M(H) = ∫_H^H+Lσ(t)dt. * The PML truncated Green function Ĝ satisfiesΔ(x,x^*) + k^2 (x,x^*) =-δ(x-x^*), inΩ_ PML, (x,x^*)= 0, onΓ, (x,x^*)= 0, onΓ_H+L. An important question: For any compact subset D⊂ S_H and a smooth curve Γ_r defined later, can we prove that ||(·,x^*) - G(·,x^*)||_H^1(D)≤ C e^-γ M(H),uniformly for x^*∈Γ_r, and for some constants C>0 and γ>0 depending only on D and Γ_r?§ PARTIALLY STRAIGHT SURFACEIn this section, we assume that each period of Γ contains a line segment. Then, it is clear that for any two line segments in different unit cells,we can find a smooth curve Γ_r⊂ S_H that satisfies * it perpendicularly intersects the two line segments;* it ends with two line segments.For two line segments L_A,L_B⊂Γ, assume that Γ_r intersects them at A and B, respectively. To prove (<ref>), we split Γ_r into two parts, Γ_ϵ^+ and Γ_ϵ^-,for some sufficiently small ϵ>0, where Γ_ϵ^+ = {x∈Γ_r:dist(x,{A,B})>ϵ}, Γ_ϵ^P = {x∈Γ_r:dist(x,{P})<ϵ}, P∈{A,B}.According to the choice of Γ_r, both Γ_ϵ^A and Γ_ϵ^B are line segments.Consider first x^*∈Γ_ϵ^+. We choose a cut-off function χ_ϵ such that it is 1 in a small neighborhood of Γ_ϵ^+ and that its support remains in S_H. LetΦ(x;x^*) = /4 H_0^(1)(k|x-x^*|),G_s(x;x^*)=G(x;x^*)-χ(x)Φ(x;x^*), and _s(x;x^*)=(x;x^*)-χ(x)Φ(x;x^*). The two functions G_s and _s are governed byΔ G_s(x;x^*) + k^2 G_s(x;x^*) = f(x;x^*), inΩ, G_s(x;x^*) =0, onΓ,andΔ_s(x;x^*) + k^2 _s(x;x^*) = f(x;x^*), in S_H^L, _s(x;x^*) =0, onΓ∪Γ_H+L,where f(x;x^*)= - Δχ(x) Φ(x;x^*) - 2∇χ(x)·∇Φ(x;x^*)∈ C_ comp^∞(S_H). Note that the support of f depends only on Γ_ϵ^+ but not x^*. Moreover, it is clear that||f(·;x^*)||_L^2(S_H)≤ C,uniformly for all x^*∈Γ_ϵ^+. Then, based on the existing theory, we can conclude that (<ref>) holds uniformly for x^*∈Γ_ϵ^+.The troublesome case is x^*∈Γ_ϵ^A∪Γ_ϵ^B. Without loss of generality, assume x^*∈Γ_ϵ^A. Let x^*_A be the reflection point of x^* w.r.t A. Clearly, x^*_A=2A - x^*.Define G_A(x;x^*) = G(x;x^*) - χ_A(Φ(x;x^*)-Φ(x;x_A^*)) and _A(x;x^*) = (x;x^*) - χ_A(Φ(x;x^*)-Φ(x;x_A^*)), where χ_A is a cut-off function that is 1 in a small neighborhood of Γ_ϵ^A and its support on Γ is inside the segment L_A. 
The two functions satisfyΔ G_A(x;x^*) + k^2 G_A(x;x^*) = f_A(x;x^*), inΩ, G_A(x;x^*) =0, onΓ,andΔ_A(x;x^*) + k^2 _A(x;x^*) = f_A(x;x^*), in S_H^L, _A(x;x^*) =0, onΓ∪Γ_H+L,where f_A(x;x^*)= - Δχ_A(x) (Φ(x;x^*) - Φ(x;x_A^*)) - 2∇χ_A(x)·∇(Φ(x;x^*) - Φ(x;x_A^*))∈ C_ comp^∞(S_H).Then, we can conclude that (<ref>) holds uniformly for x^*∈Γ_ϵ^A. From the above, it can be seen that C in (<ref>) depends on the parameter ϵ, which further depends on the lengths of L_A and L_B.§.§ Why Γ_r should be perpendicular to L_A and L_B?Consider again the case x^*∈Γ_ϵ^A. Suppose now Γ_ϵ^A is not perpendicular to L_A. Assume the direction vectors of Γ_ϵ^A and L_A areand . Then, γ = ·≠ 0. Now, forx on L_A and sufficiently close to A, G_A(x;x^*) = -Φ(x;x^*) + Φ(x;x^*_A).Let x^*=A + t and x = A + s. DefineG_0(x;x^*) = -1/2πlog|x-x^*| + 1/2πlog|x-x^*_A|.It can be verified thatG_A(x;x^*) - G_0(x;x^*) ∈ C^2(L_A×Γ_ϵ^A).Is it true thatχ_A(x) G_0(x;x^*)∈ H^1/2(L_A)with its norm uniformly bounded w.r.t. x^*∈Γ_ϵ^A?If the above answer is no for any nonzero γ. Then, we have to make sure that Γ_r is perpendicular to L_A and L_B, as this is the only way to ensure (<ref>) holds. Thus, only assuming Γ is Lipschitz seems not enough! § PARTIALLY SMOOTH SURFACEIn this section, we try to extend the above results to a partially smooth surface. Suppose now both L_A and L_B are smooth only. As has been discussed, it is safer to assume that Γ_r is perpendicular to L_A and L_B. In this case, we prove that (<ref>) holds uniformly for x^*∈Γ_ϵ^A. In fact, we have the following lemma.It holds thatG_0(x;x^*)∈ C^1(L_A×Γ_ϵ^A)∩ C^1(L_B×Γ_ϵ^B). Without loss of generality, we prove the smooth property at A. Suppose L_A is parameterized by x(s) = [x_1(s),x_2(s)], -a≤ s≤ a,with A=x(0). § LIPSCHITZ CONTINUOUS SURFACELet Γ be defined by a periodic Lipschitz continuous function ζ. We choose the point A on Γ such that the derivative of ζ (i.e., the tangential vector ) exists, andΓ_ϵ^A is in the normal directionat A. Moreover, let =e^θ_A where θ_A∈ (-π/2,π/2). Then =e^(θ_A+π/2).Let x^* be any point on the straight part of Γ_ϵ^A and x^*_A be itsreflected point w.r.t. x^*, i.e., x_A^*=2A-x^*. Let x be any point on L_A, then we represent x by its distance from A and angle from Γ_ϵ^A, i.e.,x=A+|x-A|e^θ_x,where |x-A| is the 2-norm of the vector and θ_x is the angle of the vector x-A. Since Γ is continuous, we assume that when 0≤ |x-A|≤ϵ for a sufficiently small ϵ, there is a 0≤σ<<1 such that the angle θ_x∈ [θ_A-σ,θ_A+σ] (on the right of A) or θ_x∈ [π+θ_A-σ,π+θ_A+σ] (on the left of A).I think this is the only part that needs the Lipschitz continuity. Actually, what we need is ζ is continuous and differentiable at one point A.Then for any random point x^*∈Γ_ϵ^A with the length ξ, x^*=A+ξ e^(θ_A+π/2), x_A^*=A-ξ e^(θ_A+π/2).Fromthe cosine formula,|x-x^*|=ξ^2+η^2-2ξηcosθ and |x-x^*_A|=ξ^2+η^2+2ξηcosθ,where η=|x-A|≤ϵ, ξ∈[0,ϵ] and θ=θ_x-θ_A-π/2. Since θ_x is close to θ_A or π-θ_A, there is a constant c>0 such that 0≤|cosθ|≤ c<1.Ignore thecoefficients and cutoff functions, we consider the regularity of the following function on L_A:f(x):=2log|x-x_A^*|-2log|x-x^*|=log(ξ^2+η^2+2ξηcosθ/ξ^2+η^2-2ξηcosθ):=log(g(η;ξ)). When η∈[0,δ], f(η) is uniformly bounded. 
Let C_0>0 to be the constant independent of η such that |f(η)|≤ C_0,∀ η∈[0,δ].Since 0≤|cosθ|≤ c<1, we have the following estimation:(1-c)(ξ^2+η^2) ≤ξ^2+η^2± 2ξηcosθ≤(1+c)(ξ^2+η^2).Thus1-c/1+c≤ξ^2+η^2+2ξηcosθ/ξ^2+η^2-2ξηcosθ≤1+c/1-c.Let C_0:=log1+c/1-c, then the function f is uniformly bounded byC_0.We will prove the following theorem: For any ξ∈[0,ϵ], f∈ H^1/2(L_A). Moreover, there is a constant C>0 independent of ξ such that f_H^1/2(L_A)≤ C.With abuse of notation, we let x to replace η in the definition of f, then we need to check the following norm:f_H^1/2(-δ,δ)=f_L^2(-δ,δ)+(∫_-δ^δ∫_-δ^δ|f(x)-f(y)|^2/|x-y|^2x̣ỵ)^1/2.From the symmetry, we only need to consider the boundedness of I(f):=∫_0^δ∫_y^δ|f(x)-f(y)|^2/|x-y|^2d x d y. Let h(x,y)=4cosθ(ξ^3-ξ xy)/(ξ^2+x^2-2ξ x cosθ)(ξ^2+y^2+2ξ y cosθ). When |h(x,y)||x-y|≤ 1/2, then|f(x)-f(y)|≤ 2| h(x,y)||x-y|.From direct calculation,f(x)-f(y)=log(g(x)/g(y))andg(x)/g(y) =ξ^2+x^2+2ξ x cosθ/ξ^2+x^2-2ξ x cosθ·ξ^2+y^2-2ξ y cosθ/ξ^2+y^2+2ξ y cosθ=1+4cosθ(ξ^3-ξ xy)/(ξ^2+x^2-2ξ x cosθ)(ξ^2+y^2+2ξ y cosθ)(x-y)=1+h(x,y)(x-y).The proof is finished by elementary estimation. We consider the following integrals:I_1(f):=∫_0^3ξ∫_y^3ξ|f(x)-f(y)|^2/|x-y|^2d x d y;I_2(f):=∫_2ξ^δ∫_y^δ|f(x)-f(y)|^2/|x-y|^2d x d y;I_3(f):=∫_0^2ξ∫_3ξ^δ|f(x)-f(y)|^2/|x-y|^2d x d y.Then I(f)≤ I_1(f)+I_2(f)+I_3(f), thus it is sufficient to estimate each integral separately. 1. Estimation of I_1(f). Since 0≤|cosθ|≤ c<1, we have sin^2θ≥ 1-c^2>0. Thenξ^2+x^2±2ξ xcosθ=ξ^2sin^2θ+(ξcosθ± x)^2≥ξ^2sin^2θ≥ξ^2(1-c^2).Then|h(x,y)|≤4|cosθ|(ξ^3+9ξ^3)/ξ^4sin^4θ=40c/ξ(1-c^2)^2:=C_1/2ξwhere C_1=80c/(1-c^2)^2>0 does not depend on θ.When 0≤ x-y <ξ/C_1, from Lemma <ref>,|f(x)-f(y)|≤ 2|h(x,y)(x-y)|≤ C_1ξ^-1|x-y|.Thus∫_0^3ξ∫_y^C_1^-1ξ+y|f(x)-f(y)|^2/|x-y|^2d x d y≤ C_1^2ξ^-2∫_0^3ξ∫_y^C_1^-1ξ+yd x d y≤ 3√(2)C_1.From Lemma <ref>, |f| is uniformly bounded by C_0. Then∫_0^3ξ∫_C_1^-1ξ+y^3ξ|f(x)-f(y)|^2/|x-y|^2d x d y≤ 4 C_0^2C_1^2ξ^-2∫_0^3ξ∫_C_1^-1ξ+y^3ξd xd y<18 C_0^2 C_1^2.ThusI_1(f)≤ 3√(2)C_1^-2+18C_0^2 C_1^2.2. Estimation of I_2(f). From direct computation,ξ^2+x^2±2ξ xcosθ≥ (x^2+ξ^2)(1-c)≥ (1-c)x^2,then|h(x,y)|≤4|cosθ|ξ xy/(1-c)^2x^2y^2≤4cξ/(1-c)^2xy :=C_2ξ/xy. Given a fixed α∈(0,1/2), let C_2:=C_2^-12^-α, then we consider two parts: |x-y|≤C_2 y^1+αξ^-α and |x-y|> C_2 y^1+αξ^-α separately. When |x-y|≤C_2y^1+αξ^-α and y≤ x,|h(x,y)(x-y)|≤C_2ξ/xy C_2^-12^-αy^1+αξ^-α≤ 2^-αξ^1-αx^α/x≤1/2.From Lemma <ref>,|f(x)-f(y)|≤ 2|h(x,y)||x-y|≤2C_2ξ/xy|x-y|, then∫_2ξ^δ∫_y^y+C_2y^1+αξ^-α|f(x)-f(y)|^2/|x-y|^2d x d y ≤ 4C_2^2ξ^2∫_2ξ^δ∫_y^y+C_2y^1+αξ^-α1/y^4d x d y =4C_2^2 C_2ξ^2-α∫_2ξ^δ1/y^3-αd y=4C_2^2 C_2/2-αξ^2-α[1/(2ξ)^2-α-1/δ^2-α] ≤ 4C_2^2 C_2/2-αξ^2-α1/(2ξ)^2-α=4C_2^2 C_2^-12^-α/(2-α)2^2-α= C_2/2-α.Then consider the case that x≥ y+C_2y^1+αξ^-α. For any γ∈(0,1/2),|x-y|^-2=|x-y|^-2+γ|x-y|^γ≥ |x-y|^-2+γC_2^γ y^γ+αγξ^-αγ.From Lemma <ref>, |f|≤ C_0. Then∫_2ξ^δ∫_y+C_2y^1+αξ^-α^δ|f(x)-f(y)|^2/|x-y|^2d x d y ≤4C_0^2C_2^-γξ^γα∫_2ξ^δ∫_y+C_2y^1+αξ^-α^δ (x-y)^-2+γy^-γ-γαd xd y = 4C_0^2C_2^-γξ^γα/1-γ∫_2ξ^δ[C_2^-1+γ(y^1+αξ^-α)^-1+γ-(δ-y)^-1+γ]y^-γ-γαd y=4C_0^2C_2^-1ξ^α/1-γ∫_2ξ^δ y^-1-αd y-4C_0^2C_2^-γξ^γα/1-γ∫_2ξ^δ(δ-y)^-1+γy^-γ-γαd y = 4C_0^2C_2^-1ξ^α/1-γ(2ξ)^-α-δ^-α/α-4C_0^2C_2^-γξ^γα/1-γ∫_2ξ^δ(δ-y)^-1+γy^-γ-γαd y ≤ 4C_0^2C_2/(1-γ)α-4C_0^2C_2^-γξ^γα/1-γ∫_2ξ^δ(δ-y)^-1+γy^-γ-γαd y. 
The latter integral equals to the sum of two parts:ξ^γα∫_2ξ^δ/2(δ-y)^-1+γy^-γ-γαd y ≤ (δ/2)^-1+γξ^γα∫_6ξ^δ/2y^-γ-γαd y= (δ/2)^-1+γξ^γα.y^1-γ-γα/1-γ-γα|^δ/2_2ξ= (δ/2)^-γαξ^γα-2^1-γ-γα(δ/2)^-1+γξ^1-γ/1-γ-γα;andξ^γα∫_δ/2^δ(δ-y)^-1+γy^-γ-γαd y≤ (δ/2)^-γ-γαξ^γα∫_δ/2^δ (δ-y)^-1+γd y = -(δ/2)^-γ-γαξ^γα/γ(δ-y)^γ|^δ_δ/2≤(δ/2)^-γαξ^γα/γ.Both terms are bounded when ξ is very small, so I_2(f) is also uniformly bounded.3. Estimation of I_3(f).From Lemma <ref>,I_3(f)=∫_0^2ξ∫_3ξ^δ|f(x)-f(y)|^2/|x-y|^2d x d y≤ 4C_0^2∫_0^2ξ∫_3ξ^δ1/|x-y|^2d x d y.From direct calculation, for any y∈[0,2ξ],∫_3ξ^δ1/(x-y)^2d x=1/3ξ-y-1/δ-y.Then∫_0^2ξ[1/3ξ-y-1/δ-y]d y =-log(3ξ-y)|^2ξ_0+log(δ-y)|^2ξ_0=-log(ξ)+log(3ξ)+log(δ-2ξ)-log(δ)=log(3)+log(δ-2ξ)-log(δ)which is also uniformly bounded when 0≤ξ<<δ.From all above arguments, I_1, I_2 and I_3 are uniformly bounded with respect to ξ. Thus I is also uniformly bounded. The proof is finished. To proceed, we first prove the following inf-sup condition of the sesqui-linear form a.For any k>0, there exists a positive constant c>0 such thatsup_u∈ V|a(u,v)|/||u||_H^1(Ω_T)≥ c ||v||_H^1(Ω_T),for any v∈ V.According to <cit.>, Problem (OP) has a unique solution.Thus, Problem (WOP) has the same unique solution so that by the generalized Babuška-Lax-Milgram theory, there exists a positive constant c>0, such that sup_u∈ V|a(v,u)|/||u||_H^1(Ω_T)≥ c ||v||_H^1(Ω_T),∀ v∈ V.In the following, we justify that a satisfies the following symmetry propertya(v,u) = a(u̅,v̅),where u̅ denotes the complex conjugate of u. Based on the definition of a, it suffices to prove ∫_Tϕ Tψ ds = ∫_Tψ Tϕ ds,for any ϕ,ψ∈H^1/2(T). Choose a sufficiently large H>0 such that T⊂Ω_H, where we recall that Ω_H is the unbounded stripe bounded between Γ_H and Γ. Let Ω^T_H be the domain bounded between Γ_T and Γ_H, and let Φ and Ψ be the two solutions of (P1) for g=ϕ and g=ψ, respectively. Then, Green's third identity implies that∫_Tϕ Tψ - ψ Tϕ ds = ∫_Γ_HΦ∂_y_2Ψ - Ψ∂_y_2Φ ds = 0,where the second equality is due to <cit.>. Consequently, sup_u∈ V|a(u,v)|/||u||_H^1(Ω_T)= sup_u∈ V|a(v̅,u̅)|/||u̅||_H^1(Ω_T)≥ c ||v̅||_H^1(Ω_T) = c ||v||_H^1(Ω_T),for any v∈ V.Let ℝ^2 be the two-dimensional space and x=(x_1,x_2)^T∈ℝ^2 be a generic point with coordinates x_j, j=1,2 in the standard Cartesian coordinate system. For any r>0, D(x_c;r)={y:|x|<r} denotes the circular disk centeredWe emphasize that the framework of establishing the PML convergence theory is standard in the sense that it can be naturally extended to almost all wave scattering problems defined on domains that are local perturbations of regular regions, such as planar regions and periodic regions, as long as PML is applicable. It relies essentially on the PML convergence theory for the associated source problems for the unperturbed structure. Put simply, it suffices to show that Green's function of the PML problem converges to Green's function of the original problem in any compact region away from the PML. In general, the proof can be done via standard approaches such as Fourier transform in layered structures <cit.> or Floquet-Bloch transform in periodic structures <cit.>. 
Then, provided that the well-posedness of the original scattering problem is established, the PML convergence theory can be obtained in a routine way, as we have done in this paper. Moreover, we point out that our theory does not require proving the coercivity of the related bilinear form in the PML region, which is an essential step in <cit.>, where PML convergence theories were established for acoustic and electromagnetic wave scattering problems in a two-layer medium. Consequently, this provides a new opportunity for studying the convergence theory of PML for many challenging scattering problems, such as scattering problems with three or more planar or periodic layers, water wave problems, etc. We will report the related results in future works.
http://arxiv.org/abs/2312.16134v1
{ "authors": [ "Wangtao Lu", "Kuanrong Shen", "Ruming Zhang" ], "categories": [ "math.NA", "cs.NA" ], "primary_category": "math.NA", "published": "20231226174523", "title": "Does PML exponentially absorb outgoing waves scattering from a periodic surface?" }
Potts and Random Cluster Measures on Locally Tree-Like Graphs]Potts and random cluster measures on locally regular-tree-like graphs A. Basak]Anirban Basak^⋆ ^⋆Research partially supported by DAE Project no. RTI4001 via ICTS, the Infosys Foundation via the Infosys-Chandrashekharan Virtual Centre for Random Geometry, an Infosys–ICTS Excellence Grant, and a MATRICS grant (MTR/2019/001105) from the Science and Engineering Research Board. A. Dembo]Amir Dembo^† ^†Research partially supported by NSF grant DMS-1954337. A. Sly]Allan Sly^$ ^⋆International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bangalore 560089, India ^†Department of Mathematics and Statistics, Stanford University, Sequoia Hall, 390 Serra Mall, Stanford, CA 94305, USA ^$Department of Mathematics, Princeton University, Fine Hall, Princeton, NJ 08540, USA [2010]60K35, 82B20, 82B26. January 14, 2024

Fixing β≥ 0 and an integer q ≥ 2, consider the ferromagnetic q-Potts measures μ_n^β,B on finite graphs G_n on n vertices, with external field strength B ≥ 0, and the corresponding random cluster measures φ^q,β,B_n. Suppose that as n →∞ the uniformly sparse graphs G_n converge locally to an infinite d-regular tree T_d, d ≥ 3. We show that the convergence of the Potts free energy density to its Bethe replica symmetric prediction (which has been proved in case d is even, or when B=0) yields the local weak convergence of φ^q,β,B_n and μ_n^β,B to the corresponding free or wired random cluster measure and Potts measure, respectively, on T_d. The choice of free versus wired limit is according to which has the larger Potts Bethe functional value, with mixtures of these two appearing as limit points on the critical line β_c(q,B) where these two values of the Bethe functional coincide. For B=0 and β>β_c, we further establish a pure-state decomposition by showing that conditionally on the same dominant color 1 ≤ k ≤ q, the q-Potts measures on such edge-expander graphs G_n converge locally to the q-Potts measure on T_d with a boundary wired at color k.

§ INTRODUCTION

For a finite graph G=(V,E), parameters β, B ∈ℝ, and an integer q ≥ 2, the Potts measure μ_G^β,B(·) on G is a probability measure on [q]^V, given by

μ_G^β,B(σ) := (1/Z_G(β, B)) exp{β∑_(i,j) ∈ Eδ_σ_i, σ_j + B ∑_i ∈ Vδ_σ_i, 1}, σ∈ [q]^V,

where δ_σ, σ' :=1{σ= σ'} and Z_G(β, B) is the normalizing constant (commonly known as the partition function). Setting [n]:={1,2,…, n}, our goal here is to characterize the limits, as n →∞, of Potts measures in the ferromagnetic regime β, B ≥ 0, for uniformly sparse graphs {G_n:=([n], E_n)}_n ∈ℕ that converge locally to the d-regular infinite tree T_d rooted at a distinguished vertex o, as defined next.

[Uniform sparsity and local weak convergence of graphs] A sequence of graphs G_n=([n], E_n) (possibly random) is uniformly sparse if lim_L →∞ lim sup_n →∞ 𝔼_n [Δ_I_n 1{Δ_I_n ≥ L}] =0, where I_n is uniform on [n], 𝔼_n denotes the expectation with respect to the (possible) randomness of the graph G_n and the randomly chosen vertex I_n, and Δ_i denotes the degree of a vertex i ∈ [n]. Given a graph G (possibly infinite) and t ∈ℕ, let B_v,G(t) be the subgraph induced by {v' ∈ V(G): dist_G(v,v') ≤ t}, for the graph distance dist_G(·, ·) on G, writing B_v(t) when G is clear from the context and T_d(t):= B_o,T_d(t). A sequence G_n=([n], E_n) is said to converge locally weakly to T_d if lim_n →∞ 𝔼_n [ 1 (B_I_n(t) ≇ T_d(t)) ]= 0 for all t ∈ℕ, and by a slight abuse of notation, we write G_n ⟶ T_d when both (<ref>) and (<ref>) hold.
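For intuition, the defining condition (<ref>) can be checked empirically on a given finite graph: sample uniform vertices and test whether their depth-t neighborhoods are isomorphic to T_d(t), i.e., whether each such ball is a tree all of whose vertices at distance less than t from the sampled vertex have degree d. The following minimal Python sketch (our own illustration; the adjacency-dict representation and all names are ours, and a simple undirected graph is assumed) does this by breadth-first search.

```python
import random
from collections import deque

def ball_is_regular_tree(adj, root, d, t):
    """Return True when the depth-t neighborhood of `root` in the simple
    undirected graph `adj` (vertex -> list of neighbors) is isomorphic to
    T_d(t): the ball must be a tree and every vertex at distance < t from
    `root` must have degree exactly d."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        if dist[v] == t:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    ball = set(dist)
    edges_in_ball = sum(1 for v in ball for w in adj[v] if w in ball) // 2
    if edges_in_ball != len(ball) - 1:      # the ball contains a cycle
        return False
    return all(len(adj[v]) == d for v in ball if dist[v] < t)

def empirical_tree_fraction(adj, d, t, samples=1000):
    """Monte Carlo estimate of the probability that the depth-t ball of a
    uniformly chosen vertex is isomorphic to T_d(t); local weak convergence
    to T_d requires this fraction to approach 1 along the graph sequence."""
    vertices = list(adj)
    hits = sum(ball_is_regular_tree(adj, random.choice(vertices), d, t)
               for _ in range(samples))
    return hits / samples
```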
It is widely believed that statistical physics models on large locally tree-like graphs are a good proxy for models on the integer lattice ℤ^d for large d, or for those with long interaction range, and therefore the study of such models on locally tree-like graphs is considered a test bed for mean-field theory. Based on the cavity method, physicists predict that, for a wide class of ferromagnetic spin systems, the asymptotic free energy density is given by the Bethe prediction, the maximum of the Bethe free energy functional (see Definition <ref> below) over all `meaningful' fixed points of the belief propagation equations (cf. <cit.>). In the large n limit, the relative probabilities of these meaningful fixed points, commonly termed the `pure states', are further conjectured to be dictated by their respective Bethe free energies. In particular, if the Bethe free energy is maximized by a unique pure state then the ferromagnetic spin system, in the large n limit, is governed by that pure state, while if there are multiple maximizers then the system is governed in the limit by a mixture of such maximizers. While these conjectures have been in the physics literature for quite some time, verifying them rigorously poses serious mathematical challenges. In this paper, we prove the latter conjecture for the ferromagnetic Potts model, with an external magnetic field, on locally tree-like graphs when the limiting tree is a regular tree. After a series of works, the Bethe prediction for the free energy density of Ising measures (namely, Potts measures with q=2) on locally tree-like graphs was rigorously established in <cit.> in full generality (see the references therein for a historical account). In the context of the limit of Ising measures on locally tree-like graphs, the first success was attained in <cit.>, where they showed that under the assumption G_n ⟶ T_d the measures μ_G_n^β,B, for q=2, converge locally weakly to the free or wired/plus Ising measures on T_d (the choice of limit depends on whether both B=0 and β>β_c, with β_c being the critical inverse temperature, or not). Such local weak convergence has been strengthened in <cit.> to show that it holds for a much larger collection of graph sequences {G_n}, covering also sequences that converge locally to certain non-regular, possibly random trees (e.g. to a multi-type Galton-Watson tree of minimum degree d_min≥ 3). As mentioned above, this phenomenon should extend to Potts measures with q ≥ 3. However, as explained in the sequel, proving such a convergence is far more challenging, especially for the two-dimensional region of parameters R_≠, where a priori two different scenarios are possible for such a limit law. The emergence of the two-dimensional region R_≠ is due to the presence of two distinct phase transitions for the Potts measures with q ≥ 3 on T_d: the uniqueness/non-uniqueness transition and the disordered/ordered transition, a phenomenon absent in the Ising setting. Moreover, on the critical line R_c within R_≠, where the Bethe free energies of both meaningful belief propagation fixed points are the same, the predicted limit is a mixture of both pure states, a behavior also absent in the Ising case. We note in passing that the phase diagram of Potts measures, and in particular the local weak convergence of μ_n^β,B := μ_G_n^β,B, is related to algorithmic questions of much current interest (see <cit.> and the references therein, where B=0).
Such convergence is also related to the metastability of the Glauber dynamics and of the Swendsen-Wang chain for μ^,B_n, as well as the non-reconstructions of the paramagnetic and ferromagnetic stateswhen (,B) ∈ R_ (for B=0, see <cit.> and the references therein for a list representative works in this area). To state our result about such convergence of Potts measuresμ_n^, B when _n _d, we proceedto define the Potts measures on _d that one expects to find in the limit. [Potts on trees, with given boundary conditions] For each t ∈ and ∈ [q] ∪{} consider the following Potts measures on _d(t):μ^, B_,t(σ)= μ^, B_,_d, t(σ)∝ exp{β∑_(i,j) ∈ E(_d(t))δ_σ_i, σ_j + B ∑_i ∈ V(_d(t))δ_σ_i, 1}∏_u ∈∂_d(t)ν_(du)where ∂_d(t):=V(_d(t)) ∖ V(_d(t-1)), ν_ is the uniform measure on [q], and ν_ is the Dirac measure atfor ∈ [q]. The Potts measure withboundary condition μ_^, B:= lim_t ↑∞μ_, _d, t^, Bexists (in the sense of weak convergence over [q]^V(_d)with its cylindrical σ-algebra), for any , B ≥ 0 and ∈{, 1}, andμ_^,0 also exists for any ∈ [q] and ≥ 0 (see<cit.>, in case q=2, andRemark <ref> for an outline of the proof when q ≥ 3). For q=2, if B>0 or ≤_c(d) the measures μ_1^, B and μ_^, B coincide (see <cit.>), while for any > _c(d) the measures μ_1^, 0 and μ_^, 0 differ (see <cit.>). As mentioned earlier, for q ≥ 3 the picture is more delicate and the two-dimensional non-uniquenessregion R_ of (, B) values where μ_1^, Bμ_^, B (see <cit.>),plays a significant role in describing our results. Indeed, within this region, the limit isdetermined by the relation between the free energy densities of μ_n^,BΦ_n(, B) := 1n logZ__n (, B), and the Bethe (free energy) functional at certain Bethe recursion fixed points, which we define next. [Bethe functional; Bethe recursion and its fixed points] Denoting by ([q]) the set of all probabilities on [q], the Bethe functionalΦ: ([q]) ↦ is given by Φ(ν) = Φ^, B(ν):=log{ ∑_σ∈[q] e^B δ_σ,1( (e^-1) ν(σ)+1)^d } - d/2 log{ (e^-1)∑_σ∈[q] ν(σ)^2+1 }. Of primary interest to usis the value of Φ(·) at ν_^, B and at ν_1^, B. The latter are the fixed points of the Bethe recursion BP: ([q]) ↦([q])(BP ν)(σ) := 1z_ν e^B δ_σ,1 ( (e^ -1) ν(σ) +1)^d-1, σ∈[q], (with z_ν denoting the corresponding normalizing constant), obtainedas the limit of successive iterations of BP starting from the uniform probability measure on [q] and from the probability measure supported on spin 1, respectively (cf.  <cit.> for the existence of both limits, which we also denote asν_ and ν_1,respectively). Throughout this paper we make the following assumption on _n and the Potts free energy densities Φ_n(,B) on them. Fix , B ≥ 0 and integer q ≥ 2.Suppose _n _d for some d ≥ 3 and alsolim_n →∞ Φ_n(, B)=max{ Φ(ν_^, B),Φ(ν_1^, B)}.For Ising measures (namely q=2), the identity (<ref>) holds for any , B ≥ 0, d ≥ 3 and _n _d (see <cit.>). While widely believed to extend to all q,d ≥ 3, this has been verified only for d ∈ 2 (see <cit.>), or when _n are d-regular and B=0 (see <cit.>, following upon <cit.> which required large q ≥ d^C d and d ≥ 5).In view of (<ref>), we expect for B>0 to arrive at a limit lawμ^,B_, μ^,B_1 or a mixture thereof, according to the partition ofR_ to R_, R_1, and R_c which correspond, respectively, to whether Φ(ν_) is larger, smaller, or equal to Φ(ν_1). 
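In practice, this comparison is straightforward to carry out numerically: iterate the Bethe recursion BP of Definition <ref> from the uniform measure and from the measure concentrated on spin 1 to approximate ν_f and ν_1, and then evaluate the Bethe functional Φ at both. The short Python sketch below (our own illustration; the fixed iteration count, the chosen parameter values, and all names are our choices) implements exactly the formulas of Definition <ref>.

```python
import numpy as np

def bp_step(nu, beta, B, d, q):
    """One Bethe recursion step: (BP nu)(s) is proportional to
    exp(B * 1{s=1}) * ((e^beta - 1) * nu(s) + 1)^(d-1)."""
    field = np.array([B] + [0.0] * (q - 1))   # external field acts on color 1 only
    unnorm = np.exp(field) * ((np.exp(beta) - 1.0) * nu + 1.0) ** (d - 1)
    return unnorm / unnorm.sum()

def bp_fixed_point(nu0, beta, B, d, q, iters=5000):
    """Iterate BP from the initial message nu0 (a probability vector on [q])."""
    nu = np.asarray(nu0, dtype=float)
    for _ in range(iters):
        nu = bp_step(nu, beta, B, d, q)
    return nu

def bethe_functional(nu, beta, B, d, q):
    """The Bethe functional Phi^{beta,B}(nu) defined above."""
    field = np.array([B] + [0.0] * (q - 1))
    vertex_term = np.log(np.sum(np.exp(field) * ((np.exp(beta) - 1.0) * nu + 1.0) ** d))
    edge_term = np.log((np.exp(beta) - 1.0) * np.sum(nu ** 2) + 1.0)
    return vertex_term - 0.5 * d * edge_term

# example: compare Phi(nu_f) and Phi(nu_1) for a given (beta, B)
q, d, beta, B = 3, 3, 1.2, 0.05
nu_f = bp_fixed_point(np.full(q, 1.0 / q), beta, B, d, q)   # start from uniform
nu_1 = bp_fixed_point(np.eye(q)[0], beta, B, d, q)          # start from delta at spin 1
phi_f = bethe_functional(nu_f, beta, B, d, q)
phi_1 = bethe_functional(nu_1, beta, B, d, q)               # the larger value selects the limit
```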
Thus, prior to stating our limit theorem, we first describe the shape of this partition of R_ (taken mostly from the literature, with a few items supplemented in Appendix <ref>).See also Figure <ref> for a pictorial representation of this description.Fix q, d ≥ 3. There exist 0 < B_+ = B_+(q,d) < ∞ and smooth curves _, _c, and _+ defined on [0,B_+], with 0< β_ (B) < _c (B) < β_+ (B) for B ∈ [0,B_+) and β_(B_+)=β_c(B_+) = β_+(B_+) such that R_ :={(', B') ∈(0,∞)^2:0<B' < B_+, ' ∈[_(B'), _+(B')]}∪{(',0): ' ≥_(0) }, and the following holds: * If (, B) ∈ [0,∞)^2 ∖ R_ then ν_=ν_1and consequently, alsoμ_1^,B = μ_^, B,wheneverB >0,μ_^,0 = μ_^, 0 for all ∈ [q],B =0. * If (, B) ∈ R_≠ then ν_ (1) < ν_1(1). Further, setting R_:= { (, B) ∈R_≠: < _c(B)},R_1:= { (, B) ∈R_≠: > _c(B)}, R_c:= R_≠ ∖(R_ ∪R_1), we have that { [ Φ(ν_^,B) > Φ(ν_1^,B) (, B) ∈R_,; Φ(ν_^,B) = Φ(ν_1^,B)(, B) ∈R_c,; Φ(ν_^,B) < Φ(ν_1^,B)(, B) ∈R_1. ] . In the Ising setting, the only challenge is posed at B=0 and > _c(d), whereby μ_1 μ_. The resolution of this case in <cit.> crucially relies on the FKG inequality for a stochastic ordering of the edge-correlations for all plausible local marginalsof μ^,0_n between those two extreme candidates. Lacking suchmonotonicity property when q ≥ 3, we instead couple each Potts measure on _n with the corresponding random cluster measure (rcm), thereby allowing us to utilize the FKG property of the latter, and for doing so whenB>0,we first amend our graphs by a ghost vertex. [Amending graphs by a ghost vertex] From a finite or infinite graph =(V,E) we get ^⋆ by adding from every v ∈ Van edge to the additional ghost vertex v^⋆. That is, ^⋆=(V^⋆, E^⋆) for V^⋆:= V ∪{v^⋆} and E^⋆:= E ∪{(v, v^⋆), v ∈ V}.With a slight abuse of notation, for any i ∈ V and t ∈, we take for _i^⋆(t)the subgraph _i,(t) amended byv^⋆ and all edges between v^⋆ and _i,(t). That is, V(_i^⋆(t))=V(_i(t))∪{v^⋆} and E(_i^⋆(t))= E(_i(t)) ∪{(v, v^⋆): v ∈ V(_i(t))}. We likewise set _d^⋆(t):=_o^⋆(t) for the root o of _d (whereby_d^⋆ is also the increasing limit of _d^⋆(t) as t ↑∞). Denoting by (t) the collection of all rooted graphs (,o)of depth t ∈ (namely, with all vertices at distance at most t from the root o), andcorrespondingly setting ^⋆(t):= {_o^⋆(t): (, o) ∈(t)}, we furthersee that _i^⋆(t) ∈^⋆(t) for any i ∈ V() and t ∈. As promised, we proceed with the definition of the rcm on ^⋆ that corresponds to the q-Potts measure μ^,B_, for a finite graph . [Random cluster measure for a finite graph] Let =(V,E) be a finite graph and ^⋆ be as in Definition <ref>.Fixing q>0, the RCM with external field B ≥ 0 and percolation parameter ≥ 0, is the probability measure on subgraphs of ^⋆,given byφ_^,B(η) ∝[∏_e ∈ E^⋆ p_e^η_e(1-p_e)^1-η_e]q^| C(η)|, η∈{0,1}^E^⋆,where C(η)= C_^⋆(η) denotes the collection of connected components for edge configurationη,|A| denotes the the size of a finite set A and p_e:={[ 1-e^-β e ∈E; 1-e^-B e ∈E^⋆∖E ] . . Summing φ_^,B(·) overbonds configurations on E^⋆∖ E, yields an alternative RCM with external field φ^, B_(η) ∝∏_e ∈E p_e^η_e(1-p_e)^1-η_e ∏_C ∈C(η) (1+(q-1)e^-B|C|), η ∈{0,1}^E(now without a ghost vertex, see also <cit.> for such RCM-sin the presence of several external fields).We use the shorthand φ_n^, B for φ__n^, B, where{_n}_n ∈ is a sequence of graphs under consideration. Anticipatingour Edwards-Sokal coupling of φ_^, B withthe q-Potts measure μ_^, B (cf. Section <ref>),we have opted to index the rcm in Definition <ref> byinstead of p. 
Indeed, for φ_^,B theEdwards-Sokal coupling yields a nicer conditional distribution of the spin (Potts) variables giventhe bond (rcm) variables η, than what we getif using instead φ^,B_(compare Lemma <ref>(ii) and <cit.>). However, for B=0 necessarily η_e=0 at any edgee ∈ E^⋆∖ E, with φ_^,0(·) =φ^,0_(·) thus matching the standardrcm definition (see <cit.>). Since our proof of the local weak limits for μ_n^, B goes via a coupling of those measures with φ_n^, B, it also requires us to identify the local weak limits of φ_n^, B. Similarly to the Potts case, these involve the special choices of free and wired rcm on ^⋆_d, which we define next. [The free and the wired rcm-s on _d] Suppose first that B >0. Fixing t ∈, let _i^⋆(t) be thegraph _i^⋆(t) amended by all edges of the star graph ^⋆(∂_i (t)) for ghost v^⋆ and the complete graph (∂_i(t)) on ∂_i(t).In case =(,o), we denote _o^⋆(t) by ^⋆_d(t), calling hereafter the edges of ^⋆(∂_d(t)) as boundary edges of _d^⋆(t) (there are in _d^⋆(t) two distinct copies of each edge between v^⋆ and∂_d(t), but only one of them is a boundary edge). Let ()=0 and ()=1, setting for ∈{, }the probability measuresφ_, t^, B(·) := φ__d^⋆(t)^, B(· | η_e =() ,for alle ∈ E(^⋆(∂_d(t)))), and define the wired and the free rcm on _d with parametersand B,as their limitsφ_^, B:= lim_t ↑∞φ_, t^, Bandφ_^, B:= lim_t ↑∞φ_, t^, B. Similar to Definition<ref>, the limits in (<ref>)are in the sense of weak convergence of probability measuresover {0,1}^E(_d^⋆) with its cylindrical σ-algebra and the existence of these limits is straightforward (seethe proof of Lemma <ref>).For B=0 the ghost v^⋆ is an isolated vertex, so proceeding without it, we consider (<ref>) for ^o_d(t) based on _d(t) and (∂_d(t)), in lieu of_d^⋆ (t), _d^⋆(t) and ^⋆(∂_d(t)), respectively. The measures φ_^, 0on {0,1}^E(_d) which are thereby defined via (<ref>), match the standard notion of free and wired rcm on _d (as defined in <cit.>). Having defined the various parameter domains and candidate limit laws,we arrive at the final ingredient for stating our theorem, namely that of local weak convergenceof probability measures, to which end we must first define theappropriate spaces of such measures. [Spaces of probability measures] Fixing finite setsand(throughout this paper =[q] and ={0,1}), we equip ^V(_d^⋆)×^E(_d^⋆) with the product topology. We denote by _d^⋆ the set of all Borel probability measures on^V(_d^⋆)×^E(_d^⋆), endowed with the topology of weak convergence. Similarly, for any t ∈, let _d^⋆,t denote the set of all probability measures on the finite set ^V(_d^⋆(t))×^E(_d^⋆(t)), equipping _d^⋆ with the cylindrical σ-algebra that corresponds to{_d^⋆,t, t ∈}. Next, for aprobability measureon _d^⋆ and any t ∈, denote by ^t the (Borel)probability measure on ^⋆, t_d obtained fromby projection, which we hereafter call the t-dimensional marginal of .Utilizing Definition <ref>, we next define thelocal weak convergence of probability measures.[Local weak convergence of probability measures] Given graphs _n = ([n],E_n) and probability measures ζ_n on ^[n]∪{v^⋆}×^E_n^⋆,for any t ∈ and i ∈ [n], let P_ζ_n^t(i) denote the law of the triplet (_i^⋆(t), σ__i^⋆(t), η__i^⋆(t)), for (σ, η) ∈^[n]∪{v^⋆}×^E_n^⋆ drawn according the law ζ_n. Combined with uniformly chosen I_n ∈ [n]this yields random distributions P_ζ_n^t(I_n). While their first marginal can be anywhere in ^⋆(t), when _n _d it must converge to δ_^⋆_d(t). 
We then say that {ζ_n}converges locally weakly to a probability measureon _d^⋆ if further P_ζ_n^t(I_n) ⇒δ__d^⋆(t)⊗^t for any fixed t ∈.In case =δ_ν for some ν∈_d^⋆, we say that {ζ_n}converge locally weakly in probability to ν, denoted by ζ_n ν.Denoting the marginals of ζ_n in the spin variables σ and in the bond variables η, by ζ_n, and ζ_n,, the local weak convergence of {ζ_n,} to _ is similarly defined for ∈{,},where _ and _ are distributions over the set of probabilities on^V(_d^⋆) and ^E(_d^⋆), respectively, and if_ = δ_ν_ we say that {ζ_n,} converges locally weakly in probability to ν_, denoted by ζ_n, ν_.Lastly, if ζ_n,(σ_v^⋆=1)=1 for all n ∈ andζ_n,ν_, we can and shall view ζ_n, andν_ as probability measures on ^[n] and^V(_d), respectively. Introducing the notationμ_^, B:= { [μ_1^, BB >0,; 1q ∑_k=1^q μ^, 0_k B=0, ] . our first main result is the rigorous justification of the Bethe replica-symmetricheuristic for our ferromagnetic Potts models. Namely, that at a non-critical ,B ≥ 0 the Pottslaws {μ_n^,B, n ≫ 1} are locally near μ_^,B or near μ_^,B, taking per (,B) among these two limits the one with larger Bethe functional value.Under Assumption <ref>, we have in terms of R_1 and R_c of Proposition <ref>, the following limits: * If (, B) ∈ [0, ∞)^2∖ ( R_1∪ R_c) then μ_n^, Bμ_^, B and φ_n^, Bφ_^, B, as n →∞.* Let (, B)∈ R_1. Then μ_n^, Bμ_^,B and φ_n^, Bφ_^, B, as n →∞.Further, only global mixtures of μ_^,B and μ_^,Bmay emerge as the limit points, at any parameter value on the critical line R_c(where Φ(ν_)=Φ(ν_)).Let d ∈ 2, d ≥ 3. Then, for (, B) ∈ R_c, B > 0 and any _n _d, all local weak limit points of {μ_n^, B} and {φ_n^, B}, as n →∞, are supported on _^, B:= { μ_^,B + (1-) μ_^, B,∈[0,1]} and _^, B:= { φ_^,B + (1-) φ_^, B,∈[0,1]}, respectively. As φ^,B_ is the marginal of φ_^, B, bothTheorems <ref> and <ref> hold also for the former RCM-s(taking for the free and the wired RCM-s, the marginals ofφ_^,B and φ_^, B on {0,1}^E(_d), respectively). Recently <cit.>, using results on the sofic entropy for a sofic approximation of the free group of rank d/2 (for d ∈ 2, d ≥ 4), has shown that for Potts measures with parameters (, B)=(_c(0),0) on random d-regulargraphs _n chosen according to the uniform-permutation-model, as n →∞, any local weak limit point in probability, must be supported on _^_c(0),0. Although neither Theorem <ref> nor <cit.> shows that the limit point in context is a genuine measure on _^, B (i.e. neither Dirac at μ_^, B nor at μ_^, B), this is believed to be the case for (, B) ∈ R_c. Indeed, <cit.> confirms the latter prediction whend ≥ 5, q ≥ d^Cd (C< ∞ is some absolute constant), (, B)=(_c(0),0) and _n are uniformly random d-regular graphs.While their method might plausibly be adapted to all (, B) ∈ R_c we do not pursue this here.For any > _c(0), the q-symmetry of the Potts model at B=0 gave rise inTheorem <ref> to a limit μ^,0_ which is a balanced mixture of the q possible pure-states Potts limit measures μ^,0_k on _d. 
In the Ising setting (q=2), this has been further refined and explained in <cit.>, by showing that for n ≫ 1the choice between the two pure state limits matches with high probabilitythe (random) dominating value _n(σ) at the spin configurationσ on _n.Towards such a refinement for q-Potts with q ≥ 3, we proceed to define the Potts measure with fixed dominating spin value.[Potts with a given dominating spin value] For a graph _n=([n], E_n) with spin configuration σ∈ [q]^[n] define _n(σ):= arg max_k ∈[q] { ∑_i ∈[n] 1(σ_i =k) },breaking ties uniformly among the subset of [q] of all maximizer values.For any , B ≥ 0 and k ∈ [q], we call the probability measure μ_n, k^, B (·):= μ_n^, B (· | _n(·)=k),which is supported on {σ : _n(σ)=k }, the q-Potts with dominating spin k. As demonstrated in <cit.>, even in the Ising setting, having a pure-state decompositionaccording to the dominating spin value, requires a certain uniform edge-expansion property of _n. Our next definition states the relevant notion of edge expansion, followed by the statement of the promised pure-state decomposition according to the dominant spin value. [Expander graph] A finite graph =(V,E) isa (δ_1, δ_2, λ) edge-expander, if for any set of vertices S ⊂ V with δ_1 |V| ≤ |S| ≤δ_2 |V| we have that |∂ S| ≥λ |S|, where ∂ S denotes the set of edges between S and S^c.Under the same setting as in Theorem <ref> we have the following limits: * For ∈ [0,_c(0)) and any k ∈ [q] we have μ_n,k^, 0μ_^, 0. * Additionally assume that {_n}_n ∈ are (, 1/2, λ_) edge expander graphs for all ∈ (0,1/2) and for some λ_ >0 (which is independent of n). For > _c(0) and k ∈ [q] we have μ_n,k^,0μ_k^, 0.§.§ Organization of the paperSection <ref> shows that the Edwards-Sokal couplingof our q-Potts measures and their rcm counterparts admitlocal weaklimit points and establish key properties for the t-dimensional spin and bondmarginals of any such limit, showing also that the free and wired rcm-s on _dare the two extremal possible limit points for our rcm-s. Lemma <ref>further reduces the question whether the bond marginal of a local weak limit point is anextremal one, to the evaluation of the expected value of specific observables(namely, _s). Combining the latter result with Assumption <ref>, we prove Theorem <ref> in Section <ref>. The key to the proof of Theorem <ref>(i) is a coupling of μ_n,k^,0 and μ_n,k'^, 0, kk' ∈ [q], so that the number of disagreements between the spin configurations induced by μ_n,k^,0 and μ_n,k'^,0 is negligible (cf. Section <ref>). Our proof, in Section <ref>, of Theorem <ref>(ii), uses the assumed edge-expansion property, to argue that _n is well approximated by the dominant color in large neighborhood of a uniformly chosen vertex in _n. By Theorem <ref>, this is in turn further approximated by the dominant color in a large neighborhood of the root in _d (under the wired Potts measure), and thereby we complete the proof upon noting that under μ_k^,0 the latter dominant color is k (see Lemma <ref>). The proof of Theorem <ref> involves the following key steps. First, Section <ref> shows that any limit point outside _^_c(B),B yields`messages' to the root (see Definition <ref>), of Bethe functional value smaller than max{Φ(ν_^_c(B), B), Φ(ν_1^_c(B), B)}. For B>0, if a local weak limit is not supported on _^_c(B),B, the same can be shown to hold when the messages on _d are replaced by their analogs on large neighborhoods of a uniformlychosen vertex in _n. 
For d ∈ 2 this allows us in Section <ref> to procure some _n' _d whose asymptotic free energy densityexceeds the rhs of (<ref>), in contradiction with <cit.>. Appendix <ref> provides several properties of the infinite volumePotts and the Bethe fixed points, on which we rely in these proofs. § EXISTENCE AND PROPERTIES OF LIMIT POINTS Hereafter we fix ,B ≥ 0 and integer q ≥ 2. For a finite graph =(V,E), let^⋆ be as in Definition <ref>, with^⋆_(i,j) := for (i,j) ∈ E and ^⋆_(i,j) :=B for (i,j) ∈ E^⋆∖ E. We considerthe Potts measureμ_^⋆^,B(σ) = 1/Z_(,B) exp{ ∑_(i,j) ∈E^⋆^⋆_(i,j) δ_σ_i, σ_j } ·δ_σ_v^⋆,1, σ∈[q]^V^⋆, proceeding to define the corresponding Edwards-Sokal measure. [Edwards-Sokal measure on a finite graph] Fix an integer q ≥ 2. For a finite graph ,^⋆ of Definition <ref>, and ,B ≥ 0, set p_e as in (<ref>) and δ_e(σ):=δ_σ_i,σ_j fore=(i,j) ∈ E^⋆. The Edwards-Sokal probability measure on the joint spin σ∈ [q]^V^⋆ andbond η∈{0,1}^E^⋆ configuration, is given by ϖ_^⋆^,B(σ,η) ∝∏_e ∈E^⋆[ (1-p_e)(1-η_e) + p_e η_e δ_e(σ) ] ·δ_σ_v^⋆,1.Our next lemma, whose elementary proof is omitted, states that for integer any q ≥ 2, such Edwards-Sokal measuregives a useful coupling between μ^,B_^⋆ and the rcm of Definition <ref>.Fix an integer q ≥ 2, a finite graph =(V,E), and B,≥ 0. * The marginal of ϖ_^⋆^,B(·, ·) in the spin variableσ is μ_^⋆^,B(·), whereas the marginal in the bond variable η is the rcm φ_^,B(·). * For any η∈{0,1}^E^⋆, with induced connected components(η)=(C_1,C_2,…,C_k), for some k ≥ 1, the conditional distribution ϖ_^⋆^,B(·| η) is such thatthe same spin σ' is assigned to all vertices in each connected component C, independently ofall other components. If v^⋆∉ C, then the law of σ' is uniform on [q], whereas σ'=1 if v^⋆∈ C.We use throughout μ(f) or μ[f] for the expectation of-valued function f with respect to a probability measure μ.Further, suppressing the dependence of our Potts, random cluster, and Edwards-Sokal measureson the fixed integer q ≥ 2, we write for brevity ϖ_n^, B:= ϖ_^⋆_n^, Band φ_n^, B:= φ__n^, B. Asthe marginal of μ_^⋆^, B on [q]^V isμ^, B_, hereafter μ_n^, B stands also forμ__n^⋆^, B.The following immediate corollary of Lemma <ref> is crucial in our proof of Theorem <ref>. For any integer q ≥ 2, finite graph =(V,E), and B,≥ 0,we have thatμ_^,B (σ_i=σ_j)=(1 - 1q)φ_^,B(i ↔j ) +1q, ∀(i,j) ∈E, where { i' ↔ j'} denotes the event that i' and j' are in the same connected component of ^⋆, or equivalently, that there is an open path in ^⋆ connecting i' and j'.In view of Lemma <ref>(i), for any (i,j) ∈ E^⋆, μ_^⋆^,B (σ_i=σ_j) =ϖ_^⋆^,B[ 1(σ_i=σ_j) ] = φ_^,B[ϖ_^⋆^,B[ 1(σ_i=σ_j)| η] ]. Now (<ref>) follows from Lemma <ref>(ii) (recall also that μ_^,B is the [q]^V-marginal ofμ_^⋆^,B). Hereafter, we denote by (S) the set of probability measures on aPolish space S. Further, adopting the notation of Definition <ref>,for †∈{o,⋆} let ^†_t:=[q]^V(_d^†(t)) and_t^† := {0,1}^E(_d^†(t)) denote the domains of the spins, and the bonds, respectively, in Edwards-Sokal measures on ^†_d(t). 
Using Ξ_t^† := {0,1}^E(^†(∂_d(t))) and _t^† :={0,1}^E(_d^†(t)) for suchdomains of boundary edge bonds, and of the bonds on the tree with itsedge boundary, respectively, we shall view each bond assignment η∈_t^† on _d^†(t) as the pair η=(η, η) of assignments η∈_t^† for bonds in _d^†(t) and η∈Ξ_t^† for the boundary edge bonds.We shall show that the Edwards-Sokal measures on _n (and hence also the corresponding Potts and random cluster measures),admit local weak limits along sub sequences satisfying certain key properties. To this end, we proceed to define the spaces in which such local weak limits must be (where to lighten our notation we suppress the dependence of these spaceson d, q and ,B ≥ 0).[Mixtures of Edwards-Sokal and rcm-s on _d] Fixing t ∈, let _⋆:=_⋆(t) denote the collection of all partitions ={C_⋆, C_1, C_2, …, C_k} of the labeled set ∂_d(t) ∪{v^⋆} where v^⋆∈ C_⋆ and _o:=_o(t) be all such partitions for which C_⋆={v^⋆}. Recalling Definition <ref>of ^†(∂_d(t)) and _d^†(t), †∈{o, ⋆},we identify each∈_† with the subgraph of ^†(∂_d(t)) havingedges between i,j ∈∂_d(t) ∪{v^⋆} if and only if i and j belong to the same block of . Alternatively, each ∈_† corresponds to the boundary edgeassignment η() such thatη_e = 1 if and only if the edge e is within a block of . Now, let ϖ^, B, t_ bethe Edwards-Sokal measure on _d^†(t) conditioned onη=η() (namely, on open bonds in and closed bonds in ^†(∂_d(t))∖).Thus, for ∈_†, σ∈_t^†, andη∈_t^†,ϖ_^, B, t(σ, η)∝∏_e ∈ E(^†_d(t))[(1-p_e) (1-η_e) + p_e η_e δ_e(σ) ] ∏_j=1^k ∏_u,v ∈ C_jδ_σ_u, σ_v∏_v ∈ C_⋆δ_σ_v,1,restricting for †=o to B=0 and eliminating then the (trivial) product over C_⋆. We further view ϖ_^, B, t(σ,η) of (<ref>) alsoas the probability of (σ, η) for η=(η,η()). Now define _†(t) :={ϖ: ϖ= ∑_∈_†ρ ()ϖ_^, B, t for some ρ∈(_†)},†∈{o,⋆}. That is, _†(t) denotes the collection of measures on spins and bonds of _d^†(t) induced by mixtures of Edwards-Sokal measures on _d^†(t) conditioned to have the edge boundary bonds η(). Likewise, the rcm conditioned to such edge boundary, gives the measure on _t^†φ^, B, t_(η):= φ^, B__d^†(t)( (η, η)|η = η () ),with the corresponding space of mixtures_†(t):={φ: φ= ∑_∈_†ρ() φ^, B, t_ for someρ∈(_†) }, †∈{o,⋆},where for †=o we restrict (<ref>)-(<ref>) to B=0(and eliminate in this case the irrelevant v^⋆). Further viewing each φ^, B, t_(η) also as theprobability of η=(η,η()) makes _†(t) a subset of (_t^†), and themixture coefficients are then uniquely determined by ρ () = φ(η = η()).We next show that the t-marginals of both the free and wired rcm-s on _dreside in the spaces _†(t) of Definition <ref>. For , B ≥ 0 and ∈{, } the rcmφ_^, B on _d^⋆ exists with marginals φ_^, B,t∈_⋆(t) andφ_^, 0,t∈_o(t) for any t ∈.Fix ,B ≥ 0. For the existence of φ_^, 0 on _dsee <cit.>. More generally, fix t ∈ and note that, by definition, φ_, t^, B(·)= φ__d^⋆(t+1)^, B(·|η_e=1, e ∈^⋆(∂_d(t+1)) ∪^⋆(∂_d(t)) ∪E(∂_d(t), ∂_d(t+1))),where for two sets of vertices S and S' the notation E(S,S') denotes the collection of edges between S and S'. As φ__d^⋆(t+1) is a strictly positive measure satisfying FKG inequality (see <cit.>), by <cit.> it is monotonic. This observation together with the definition of φ_, t+1^, B and (<ref>)entails that φ_, t^, B[f] ≥φ_, t+1^, B[f] for any increasing function f on^⋆_s with s ≤ t. As each ^⋆_s is in the linear span of increasing indicator functions, the existence oflim_t →∞φ_, t^, B[f] for any s ∈ and every increasing f on ^⋆_s implies the existence of the limit φ_^, B (in the sense ofweak convergence). 
The existence of φ_^, B follows similarly, since φ_, t^, B[f] ≤φ_, t+1^, B[f] for anyincreasing function f on ^⋆_s with s ≤ t.Turning to prove that φ_^,B, t∈_⋆(t), we start by showing thatφ∈_⋆(t+1) ⟹φ^t ∈_⋆(t), where φ^t denotes the marginal of φ on bonds of _d^⋆(t).Indeed, setting Ξ_t:={0,1}^E(_d^⋆(t+1))∖ E(_d^⋆(t)),any pair η∈Ξ_t and ∈_⋆(t+1) induces a partition '(η, ) ∈_⋆(t) and further determines the difference betweenthe number of connected components of ^⋆_d(t) with boundary ' and the number ofconnected components of ^⋆_d(t+1) with boundary . Hence, by definition φ_^,B,t+1(η^0|η)= φ^, B,t_'(η, )(η^0), ∀η^0 ∈^⋆_t. Thus, if φ∈_⋆(t+1), then for some ρ∈(_⋆(t+1)) andany η^0 ∈^⋆_t,φ(η^0) =∑_∈_⋆(t+1) ∑_η∈Ξ_t ρ() φ_^,B,t+1 (η) φ^, B, t_'(η, )(η^0) = ∑__0 ∈_⋆(t) ρ'(_0) φ^, B, t__0(η^0), where ρ'(_0) denotes the sum of ρ() φ_^,B,t+1 (η)over all pairs (η, ) for which '(η, )=_0. Having thus established (<ref>), recall that φ_, s^, B∈_⋆(s) for any s ∈.Hence, by iteratively applying (<ref>) we deduce that the t-dimensional marginal φ_, s^, B, tof φ_, s^, B is in the compact set _⋆(t) for any s ≥ t.Since φ_,s^, B, t⇒φ_^, B, t as s →∞,we conclude that φ_^, B, t∈_⋆(t).For B=0 we have in (<ref>) that p_e=0 for any e ∈ E^⋆∖ E touchingv^⋆. Thus, then v^⋆ is an isolated vertex and in particular C_⋆={v^⋆}. So, in this case we can wlog replace _⋆(t) by _o(t) throughout thepreceding proof, while alsoreplacing our spin and bond domains _d^⋆(t) and ^⋆(∂_d(t)) by_d (t) and (∂_d(t)), respectively, to conclude that φ_^, 0, t∈_o(t).Having a lattice of partitions _†(t) induces a stochastic ordering on thercm mixtures in _†(t). [Stochastic ordering on _†(t)] Fix t ∈ and †∈{o, ⋆}. Recall from Definition <ref>our embedding of _†(t) inside Ξ^†_t via ↦η() and the one-to-one mapping it induces between _†(t) and (_†(t)), bymatching eachφ (η) = ∑_∈_†(t)ρ() φ_^,B,t (η),with the distribution ρ()=φ(η = η()) of a random edge boundary η supported on _†(t) ⊂Ξ^†_t.An edge boundary η of law ρ is stochastically dominated by η' of law ρ', denoted by ηη' (or by ρρ'), if ρ (f) ≤ρ'(f) for every function f which is non-decreasing with respect to the usualpartial ordering of Ξ^†_t. Equivalently, ηη' if and only if there is a coupling such that η≤η' in the sense of partial ordering on Ξ^†_t. For random edge boundaries in _†(t) this is further equivalent to a coupling with such partial order on the induced partitions (η) ≤(η') (i.e. where (η) is a refinement of (η')). This notion extends to a stochastic ordering on _†(t) by saying thatφ∈_†(t) is stochastically dominated by φ' ∈_†(t) (denoted by φ≼φ'), if and only ifηη' for the corresponding random edge boundaries. The proof of Theorem <ref> crucially relies on our next lemma, which reducesthe question whether two stochastically ordered measures in _†(s+1) are equal,to the evaluation of the corresponding expectations for a single functional _s (defined below).Fix s ∈ and †∈{⋆, o} (with B=0 if †=o).Suppose φ, φ∈_†(s+1) are such that φφ.(i) Then, φ^s φ^s for the s-dimensional marginals of φ and φ. (ii) If also φφ,then φ(_s) < φ(_s) for _s:= ∑_i ∈∂_d(s)∑_j ∈∂ i 1(ij),where ∂ i denotes the neighborhood of i in _d(s+1) and{ij} denotes the event that there exists an open path in ^†_d(s+1) connecting i and j.(i). Since φφ there is a coupling such that (η) ≤(η) for the partitions induced by the corresponding random edge boundaries η and η.Let η_s and η_s denote the random vectors in Ξ_sdistributed according to the corresponding marginal of φ^, B, s+1_(η)and φ^, B, s+1_(η), respectively. 
Since the rcm φ__d^⋆(s+1)^, B is monotonic (see <cit.> and <cit.>), we have that φ_(η)^, B, s+1(f) ≤φ_(η)^, B, s+1(f) for any increasing function f on ^⋆_s+1. This implies in turn thatφ (g) ≤φ (g) for any function g of (η_s,η) which is increasing on Ξ_s ×Ξ^⋆_s+1. That is,(η_s, η)(η_s, η). Recall that any given pairη̅_s ∈Ξ_s and η̅∈Ξ^⋆_s+1 inducesan edge boundary η̅' ∈Ξ^⋆_s. Furthermore, the map (η̅_s, η̅) ↦η̅' is increasing, hence also η' η' for the random edge boundaries η' and η' induced by the pairs (η_s, η) and (η_s, η), respectively.Recall(<ref>) that η' and η' are precisely theedge boundaries of φ^s and φ^s, respectively. Thus, φ^s φ^s as claimed. In case B=0 and †=o we can and shallfollow the same argument, while eliminating throughout the isolated ghost vertex v^⋆. (ii). Upon considering (<ref>) for the increasing events {ij }, we deducethat φ(ij) ≤φ (ij) for any (i,j) ∈^⋆_d(s+1), hence alsoφ (_s) ≤φ(_s). Now since φφ and _⋆(s+1) is a finite set, there exist u, u' ∈^⋆ (∂_d(s+1))such that under the monotone coupling η≤η, with positive probability η_(u,u') = 1 while η_(u,u')=0. In particular, u and u' are in different blocks of (η) and we may further assume that η_(u,v^⋆)=0 (or else, exchange u with u'). It thus suffices to show that any such boundary edges result with φ_(η)^, B, s+1(u w) < φ_(η)^, B, s+1(u w), where w ∈∂_d(s) is the parent of u.Turning to prove (<ref>), consider the event := {uin v^⋆}∪{win∂_d(s+1) ∖{u'}} ,where {Uin U'} denotes the event of an open path within_d^⋆(s+1)between some vertex of U and some vertex of U'. With {uw}∩ an increasing event, for which (<ref>) holds, we arrive at (<ref>) upon showing thatφ_(η)^, B, s+1({u w} ∩^c) > 0and φ_(η)^, B, s+1({u w} ∩^c) =0. To this end, the event ^c implies that apart from boundary edges, u is anisolated vertex and there is no open path between w and ∂_d(s+1) except possibly between w and u'. Thus,{uw }∩^c = ( {uex u'in w}∪{uex v^⋆ in w }) ∩^c,where {uex u'} denotes the existence of an open path with only boundary edges between u and u'. The latter event amounts to having both u and u' in the same block of , so ourobservation that u is neither in the same block of (η) as u' nor in that of v^⋆, yields the right part of (<ref>). Further, with η_(u,u') = 1,the left side of (<ref>) holds as soon asφ_(η)^, B, s+1 ({w in u'} ∩^c) > 0.For B>0 we get (<ref>)by opening only the edges (w,v^⋆) and (u',v^⋆) of _d^⋆(s+1). In case B=0we follow the same reasoning in _d(s+1) (i.e. without the isolated v^⋆),now satisfying the event {win u'}∩^c by opening onlythe edges of _d(s+1) which lie on the unique path from w to u',the probability of which is strictly positive whenever >0. Finally, in case of B==0 there is nothing to prove, for then _⋆(s+1) is a singleton (as all bonds of ^⋆_d(s+1) are closed, regardless of the boundary edge configuration ).Thanks to our stochastic ordering of _†(t), all marginals of local weak limit points of {φ_n^, B} are supported on the set of measures _†(t) and _⋆(t) (the latter for B >0),as defined and shown below. Namely, such local weak limitsare all sandwiched between the free and the wired rcm-s on _d, and additionally for B>0their random edge boundary at level t have edges only between ∂_d(t) and the ghost v^⋆. Fix t ∈. For ∈{, }, let φ_^, B,t be the marginal of φ_^, B on _t^⋆ and define_†(t):= {φ∈_†(t): φ^, B, t_φφ^, B, t_} ,where †=⋆ if B>0 and †=o if B=0. 
Let _⋆ = _⋆(t) ⊂_⋆(t) be the collection of all partitions ={C_⋆, C_1, C_2, …, C_k} of the labeled set ∂_d(t) ∪{v^⋆} with v^⋆∈ C_⋆ such that |C_i|=1 for all i ≥ 1.Define_⋆(t):={φ∈_⋆(t): φ= ∑_∈_⋆ρ () φ^, B, t_ for some ρ∈(_⋆) }.As promised, we now show that the finite dimensional marginals of sub-sequentiallocal weak limits of our Edwards-Sokal measures must be supported on thespaces from Definition <ref> and those for our rcm-s must further be supported on the spaces from Definition <ref>.Fix , B ≥ 0, integer q ≥ 2 and _n _d.Set †=⋆ if B>0 and †=o if B=0.* The sequence of measures {ϖ_n^,B} admits sub-sequential local weak limits, as do its marginals {φ_n^,B} and {μ_n^,B}, in bond and spin variables,respectively.* Any t-dimensional marginal of a local weak limit point _, of {ϖ_n^,B} satisfies _,^t ∈(_†(t)). * Any t-dimensional marginal of a local weak limit point _ of{φ_n^,B} satisfies _^t ∈(_† (t)).* For B >0, a t-dimensional marginal of suchlocal weak limit point _ satisfies _^t ∈(_⋆ (t)). (a).As _n _d, for any t ∈ and >0, there exists an n_0() such thatinf_n ≥ n_0() P_ϖ_n^,B^t(I_n)(_d^⋆,t) ≥ 1-,where _d^⋆,t and P_ϖ_n^,B^t(·) are as in Definitions <ref> and <ref>, respectively. Since _d^⋆, t is compact, from Prokhorov's theorem it follows that for fixed t ∈ the random probabilitymeasures { P_ϖ_n^,B^t(I_n)} admit a sub-sequential limit _t which is a Borel probability measure on _d^⋆,t. Upon extracting successive subsequences, from the definition of P_ϖ_n^,B^t(I_n) it further follows that for every t ∈ the marginal of _t+1 on _d^⋆,t is _t. Choosing the diagonal subsequence and using Kolmogorov's extension theorem we thus establish the existence of a sub-sequential local weak limit pointof{ϖ_n^,B}, witha probability measure on _d^⋆. As both φ_n^,B and μ_n^, B are marginals of ϖ_n^,B, the existence of sub-sequential local weak limits of {φ_n^,B} and{μ_n^, B} is now immediate.(b). Fixing t ∈ and (σ^0, η^0) ∈^⋆_t ×_t^⋆ we have that for any ℓ∈ [n],P_ϖ_n^,B^t(ℓ)(_d^⋆(t), σ^0, η^0)= 1 (_ℓ(t) ≅_d(t))q_ℓ,n^t (σ^0)∏_e ∈ E(_ℓ^⋆(t))[ (1-p_e)(1-η^0_e) + p_e η_e^0δ_e (σ^0) ] · 1 (σ^0_v^⋆=1),whereq_ℓ,n^t (σ^0):= 1Z_n∑_η |_E_n^⋆∖ E(_ℓ^⋆(t))∑_σ : σ |_V(_ℓ^⋆(t)) = σ^0∏_e ∈ E_n^⋆∖ E(_ℓ^⋆(t))[ (1-p_e)(1-η_e) + p_e η_eδ_e (σ) ],depends only on σ^0|_∂_ℓ(t) and Z_n issome normalizing constant. Now, split the outer sum over η in (<ref>) according to which vertices in∂_ℓ(t) ∪{v^⋆} are connected among themselves via an open path induced by η. Whenever _ℓ(t) ≅_d(t), this amounts to splitting the sum over η per partitions∈_⋆(t) of ∂_d(t) ∪{v^⋆} with v^⋆∈ C_⋆. Further, σ_v^⋆=1 and any open path γ in η induces the multiplicative factor∏_e ∈γδ_e(σ) in (<ref>). Hence, if ηinduces the partition , then the sum over the spins in the RHS of (<ref>) be zerounless _(σ^0):=∏_j=1^k ∏_u,v ∈ C_jδ_σ_u^0, σ_v^0·∏_v ∈ C_⋆δ_σ_v^0,1=1. Therefore,q_ℓ,n^t (σ^0)= ∑_∈_⋆(t)ϑ_ℓ,n () _(σ^0),for some nonnegative constants {ϑ_ℓ,n()}_∈_⋆(t).Plugging this into (<ref>) results withP^t_ϖ_n^,B(ℓ)(_d^⋆(t), σ^0, η^0) =1 (_ℓ(t) ≅_d(t)) P_n^, B, t(ℓ)(σ^0, η^0),where for each ℓ∈ [n],P_n^, B, t(ℓ) (·,·) :=∑_∈_⋆(t)ϱ_ℓ,n() ϖ_^, B,t (·,·) ∈_⋆(t),namely, each ϱ_ℓ,n∈(_⋆(t)) (andwhen _ℓ(t) ≇_d(t) we can choose an arbitrary ϱ_ℓ,n of this form). Recall that the set of probability measures on the finite dimensional simplex (_⋆(t) ×^⋆_t ×_t^⋆) is compact under weak convergence.In particular,the probability measures P_n^, B, t(I_n) on thecompact subset _⋆(t) of (^⋆_t ×_t^⋆) admit sub-sequential limits in the topology of weak convergenceon _⋆(t). 
As _n _d,by (<ref>) thesub-sequential limit points of { P^t_ϖ_n^,B(I_n)}_n ∈coincide with those of { P_n^, B, t(I_n)}_n ∈. In particular, the t-marginal ^t_,of any local weak limit point of {ϖ_n^,B} must also be in (_⋆(t)).As noted at the end of the proof of Lemma <ref>, if B=0 we can omit throughout the isolated vertex v^⋆ and the preceding argumentthen yields that ^t_,∈(_o(t)). (c). First recall that for any t ∈ and ∈_⋆(t), the bond-marginal of ϖ^, B, t_ is φ_^, B, t. Thus, by (<ref>), for any ℓ∈ [n], P_n, ^,B,t(ℓ)(·):= ∑_σ P_n^,B,t(ℓ)(σ,·)= ∑_∈_⋆(t) ϱ_ℓ,n() φ_^, B, t(·) ∈_⋆(t). From (<ref>), the bond-marginal ofP^t_ϖ_n^,B(ℓ)(_d^⋆(t), ·, ·) is1 (_ℓ(t) ≅_d(t))P_n,^, B, t(ℓ). Hence, applying the precedingreasoning for limit points of the probability measures P_n,^, B, t(I_n) on the compact _⋆(t), we conclude that the t-marginal ^t_ of any limit point of the rcm-s {φ_n^,B} must be in (_⋆(t)). Further, by Definitions <ref> and <ref> we know that _⋆(s) = {φ∈_⋆(s): φ_,s^, Bφφ_, s^, B} , ∀ s ∈ .Next, for s>t let _⋆^t(s) be the collection of all t-dimensionalmarginals of measures from _⋆(s), noting that since ^t_ isalso the t-marginal of ^s_ for any s>t, necessarily _^t(⋂_s>t _⋆^t(s) )=1, ∀t ∈ . Now, any φ∈_⋆^t(s) is the t-marginal of some φ∈_⋆(s) and in particular φ_,s^,Bφφ_,s^, B.In view of (<ref>) and Lemma <ref>(i), it thus follows that _⋆^t(s) ⊂{φ∈_⋆(t) :φ^, B, t_,sφφ^, B, t_,s} .Since φ_, s^, B, tφ_^, B, t when s →∞, we thus deduce that ⋂_s>t_⋆^t(s) ⊂{φ∈_⋆(t) :φ^, B, t_φφ^, B, t_} = _⋆(t)(see Definition <ref>), which together with (<ref>) implies that _^t ∈(_⋆(t)), as claimed.As explained at the end of the proof ofpart (b), for B=0 we omit the isolated v^⋆and re-run the same argument on the original tree _d, to arrive at ^t_∈(_o(t)). (d). Fix any t ∈, u,v ∈∂_d(t), and _0∈_⋆(t) such that u and v belong to a same block of _0 not containing v^⋆. Fix a φ∈({0,1}^E(_d^⋆)) such that its (t+s)-dimensional marginal φ^t+s∈_⋆(t+s) for all integer s≥ 0. Recall from Definition <ref> that φ^t+s can be viewed as a measure on the bonds of _d^⋆(t+s). For s ≥ 0 we let Ω_u,v(s) be the event of bond configurations for which the cluster of u, induced by the open bonds in _d^⋆(t+s)∖_d^⋆(t), contains v but does not contain the ghost vertex v^⋆.Due to consistency of the marginals of φ and that (<ref>) holds we therefore haveφ^t(η = η(_0)) ≤φ^t+s(Ω_u,v(s))for all integers ≥0.We will show that for any B >0 and s ∈φ^t+s(Ω_u,v(s)) ≤q^2 exp(-2Bs).This together with (<ref>) and (<ref>) yield the desired result. Turning to prove (<ref>), since φ^t+s∈_⋆(t+s) there exists some probability measure ρ∈(_⋆(t+s)) such thatφ^t+s(Ω_u,v(s)) = ∑_∈_⋆(t+s)φ^,B, t+s_(Ω_u,v(s)) ρ(). Hence, it suffices to prove (<ref>) with φ^t+s replaced by φ^,B,t+s_ with ∋ C v^⋆ such that u',v' ∈ C for some descendants of u and v, respectively. To this end, fix such aand split the bond configurations of _d^⋆(t+s) into three pieces: η^(1)∈_t^⋆, η^(2)∈_t+s∖_t, and η^(3)∈_t+s^⋆∖ (_t^⋆∪_t+s).Notice that the event Ω_u,v(s) does not depend on η^(1). Further, upon ordering ∂_d(t+s) in some fixed, non-random manner observe that (η^(2), η^(3)) ∈Ω_u,v(s) requires that the following events hold: * There exist open paths P_u and P_v, determined by η^(2), from u and vto u and v ∈∂_d(t+s), respectively, where u and v aredescendants of u and v, respectively. When there are multiple such u and v we fix the first of these as our u and v, respectively. 
* We must have η^(3,1)≡ 0, where η^(3) = (η^(3,1), η^(3,2)), η^(3,1) is the bond configuration for the collection of edges between V(P_u ∪ P_v)and v^⋆, and η^(3,2) is the rest of the bond configurations. Set Ω_u,v(s):={η^(2): (η^(2), η^(3)) ∈Ω_u,v(s)for some η^(3)}. Note that given any η^(1), η^(2)∈Ω_u,v(s), and η^(3,2) the only indeterminacy in the cluster structure through the choice of η^(3,1) is whether the clusters of u and v contains the ghost v^⋆ or not. Therefore, using that |V(P_u ∪ P_v)|=2s and that 1-p_e=e^-B for e ∈_d^⋆(t+s)∖_d(t+s), we haveφ^, B, t+s(η^(3,1) ≡0 |η^(1), η^(2), η^(3,2)) ≤q^2 exp(-2Bs). Finally using observations (i) and (ii) above, and taking an average over (η^(1), η^(2), η^(3,2)) we obtain (<ref>) with φ^t+s replaced by φ^,B,t+s_ withas above. This completes the proof.For η∈_t^† let_†^t(η) denote the probability measure on spins of_d^†(t), with the same spin value across all vertices in each connected component andiid uniform on [q] spin values across different connected components, except for the component containing v^⋆ whose spin is σ_v^⋆=1. For ∈_†(t) let _†^t(φ^,B,t_) denote therandom probability measure induced by _†^t(η) for random η whose law is φ_^, B, t (restricting to B=0 when†=o), and for any φ∈_†(t) ⊂(_t^†) letθ_†^t(φ) := ∑_∈_†(t)φ(η())_[_†^t (φ_^, B, t) ],where _[·] denotes the expectation with respect to the bond variables.By Lemma <ref>(ii),ϖ_^, B, t (·, η)=φ_^, B, t (η) _†^t (η) (·), ∀∈_†(t),η∈_t^†,and therefore∑_ηϖ_^, B, t (·, η)= _[_†^t (φ_^, B, t) ] (·), ∀∈_†(t).Thus, with ϖ_ and ϖ_ denoting the spin and bond marginals of ϖ∈(^†_t ×_t^†),ϖ∈_†(t) ⟹ ϖ=ϖ_ (η) _†^t (η), ϖ_= θ_†^t(ϖ_) and ϖ_ ∈_†(t), †∈{o, ⋆}. Utilizing Remark <ref>, we now relate marginals of thefree and wired Potts and rcm-s on _d.Let †=⋆ if B>0 and †=o if B=0. Then, for any t ∈: * The t-dimensional marginal of ϖ =(ϖ_, ϖ_)∈_†(t+1) is ϖ^t=(ϖ_^t, ϖ_^t) ∈_†(t). * The Potts and rcm marginals are such that μ^, B, t_= θ_†^t(φ^,B,t_), for ∈{, }.* For ∈{, } we haveμ_^, B (σ_i = σ_j) = (1 -1q) φ_^,B( i ↔j) +1q, (i,j) ∈E(_d).(a). With ϖ^t denoting the marginal of ϖ to the sub-tree ^†_d(t), clearly one may compute the marginals(ϖ^t)_ and (ϖ^t)_ on the spins and bonds of ^†_d(t), respectively, also in the opposite order.That is, as stated, (ϖ^t)_=(ϖ_)^t and (ϖ^t)_=(ϖ_)^t. For our remaining claim, that ϖ^t ∈_†(t) for any ϖ∈_†(t+1), it suffices to consider only the extremal measures of_†(t+1). Namely, to fix ∈_†(t+1) and consider the t-marginal of ϖ^,B_ (with boundary specificationη=η() on ^†(∂_d(t+1))). Doing this, we denote by(σ^0,η^0) the configuration on _d^†(t), so upon splitting thespins and bonds of _d^†(t+1) as (σ^0,σ) and (η^0,η),our task is to show that for some ρ∈(_†(t)), ∑_η∑_σϖ^,B,t+1_ (σ^0,σ, η^0, η)= ∑_^0 ∈_†(t)ρ (^0) ϖ_^0^,B,t (σ^0,η^0), ∀σ^0, η^0.Applying (<ref>) on both sides of this identity and then utilizing (<ref>), it remains only to show that∑_σ ^t+1_†(η^0,η, ) (σ^0,σ)= ^t_†(η^0,'(η,) ) (σ^0),∀(η^0,η),where '='(η,) ∈_†(t) denotes the partition corresponding to open componentsof ∂_d(t) induced by η and the given partitionof theboundary of ^†_d(t+1). Next, recallthat under _†^t+1(·) all spins in each C_i ∈ must take the same value, if C_i has an open path to some v ∈∂_d (t) then the spin of C_i must match σ^0_v while otherwise the spin of C_i be chosen uniformly in [q], independentlyof everything else. 
Further, (η,) determines theconnectivity between (∂_d (t+1))^⋆ and (∂_d (t))^⋆ and '(η,) is the partition induced on(∂_d (t))^⋆ by our requirement that σ^0_v=σ^0_u whenever u and v connect, via η, to the same block of .Combining these observations leads to (<ref>).(b). Fix ={,}, B>0 and s ∈. By Lemma <ref>(i), Definition <ref>, and Definition <ref>, we have thatφ_,s^, B are the bond marginals of the Edwards-Sokal measure ϖ_^,B on ^⋆_d(s) with η() ≡(), where (·) is as in (<ref>). We further claim that thenμ_,s^,B of Definition <ref> is such that μ_,s^,B (·) = ∑_η ϖ_^,B (·,η)=θ_⋆^s(φ_,s^, B). Indeed, the lhs of (<ref>) follows from Lemma <ref>(i)upon observing that by Lemma <ref>(ii),here ϖ_^,B =ϖ^,B_^⋆_d(s) with iid uniform spins at ∂_d(t) if =and all such spins set to 1 if = (whereas the rhs of (<ref>) is merely an application of Remark <ref>). Next, in view of part (a) we further deduce from (<ref>) that μ_, s^, B, t = θ_⋆^t(φ_,s^, B, t) for any s>t. Sinceμ_, s^, B, tμ_^, B, t and φ_, s^, B, tφ_^, B, t when s →∞, we arrive atμ_^, B,t= θ_⋆^t(φ_^, B,t).For B=0 we omit the isolated v^⋆ and arrive by the preceding reasoning at μ_^, 0,t = θ_o^t(φ_^, 0,t). We likewise getthat μ_^, 0,t = θ_o^t(φ_^, 0,t) uponverifying that the lhs of (<ref>) holds in that case, whichin view of (<ref>) and both parts of Lemma <ref> amounts to checking that1/q∑_k=1^q μ^,0_k,s (·) = μ^,0__d(s)(·|σ |_∂_d(s)≡σ' ∈ [q] ).By symmetry the value of μ^,0__d(s)( σ |_∂_d(s)≡ k ) is independent of k and as μ^,0_k,s equals μ^,0__d(s) conditional tothe latter event (see Definition <ref>), the preceding identity follows.(c). For any (i,j)∈ E(_d) there exists some t ∈ such that (i,j) ∈ E(_d(t))in which caseμ_^, B(σ_i=σ_j)=μ_^,B,t(σ_i=σ_j) and φ_^,B(ij) = φ_^, B,t(ij). Thus, by part (b) and arguingas in the proof of Corollary <ref>(now conditionally to η≡() on ^†(∂_d(t))), results with (<ref>).§ BETHE REPLICA SYMMETRY: PROOF OF THEOREM <REF>We set as usual †=⋆ if B>0 and †=o if B=0. Throughout thissection also = if(, B) ∈ R_ and = if (, B) ∈ R_1. Our next lemma (the proof of which is postponed to Section <ref>), identifies the limit of the internal energy per vertex forμ_n^,B hence being key to pinning down the local weak limits as thosesupported on one specific degenerate measure.Under Assumption <ref> (with = if(, B) ∈ R_; = if (, B) ∈ R_1) we have,lim_n →∞ 1n ∑_i=1^n ∑_j ∈∂i μ_n^, B(σ_i = σ_j) = ∑_i ∈∂o μ_^, B (σ_o=σ_i) . We proceed with the following three steps of our proof of Theorem <ref>.Step I. From rcm limits to Potts limits.Suppose φ_n_k^,Bφ∈_^,B, where φ = φ_^,B+(1-) φ_^,B for some [0,1]-valued . The local weak limit points _, of the Edwards-Sokal measures ϖ_n_k^,B(which exist in view of Lemma <ref>(a)), must all have the same bond marginal with _ ({φ})=1.Recallfrom Lemma <ref>(b) that each _,^t is supported on_†(t).In view of(<ref>), this in turn implies that _^t({θ_†^t(φ)})=1. Applying Lemma <ref>(b) we deduce that_^t({μ_^,B,t+(1-) μ_^,B,t})=1 for all t ∈.That is,μ_n_k^,Bμ_^,B+(1-) μ_^,B. In particular, in the setting of Theorems <ref> and <ref> it suffices to prove only the stated local weak convergence for the rcm-s φ_n^,B.Step II. The uniqueness regime (,B) ∈ [0,∞) ∖ R_.Here μ_^, B= μ_^, B (see Proposition <ref>(i)), hence φ_^,B(i ↔ j) = φ_^,B(i ↔ j)at any edge (i,j) of _d (see Lemma <ref>(c)), and in particular φ_^,B,s+1(_s) = φ_^,B,s+1(_s) for all s ∈ (recall (<ref>)). From Lemma <ref> it then follows that φ_^,B,t=φ_^,B,t and hence _†(t)={φ_^,B,t} for all t ∈ (see Definition <ref>). 
In view of Lemma <ref>(c), we conclude that φ_n^,Bφ_^,B, as claimed.Step III. Non-uniqueness, with (,B) ∈ R_∪ R_1.Consider the uniformly bounded functionals^_t := 1|∂_o(t)|∑_i ∈∂_o(t)1Δ_i∑_j ∈∂ i 1(σ_i = σ_j)and ^_t := 1|∂_o(t)|∑_i ∈∂_o(t)1Δ_i∑_j ∈∂ i 1(ij) (see Definitions <ref> and <ref>), where theevent {ij} denotes that there is an open path between i and j in _o^†(t+1).With _d vertex transitive, μ_^, B is translation invariant,so for any t ∈,1/d ∑_j ∈∂o μ_^, B(σ_o=σ_j)= 1|∂_d(t)|∑_i ∈∂_d(t) 1/Δ_i ∑_j ∈∂iμ^,B_(σ_i=σ_j) = (1-1/q )φ^, B,t+1_ (^_t) + 1/q, where the right identity is due to Lemma <ref>(c). Consider the [0,1]-valued functions on V(_n),_n^t(i):= ∑_k ∈∂_i(t) 1(_k(2t) ≅_d(2t))/|∂_k(t)| = {[ 0_i(t) _d(t),; 1 _i(3t) ≅_d(3t). ].For _n _d we have that _n [_n^t(I_n)] → 1, hence it follows from (<ref>) and (<ref>) that(1-1/q)φ^, B,t+1_ (^_t) + 1/q = lim_n →∞1d n∑_i=1^n ∑_j ∈∂ i_n^t (i) ϖ_n^, B(σ_i = σ_j) =lim_n →∞_n [1(_I_n(2t) ≅_d(2t))P^t+1_ϖ_n^,B (I_n) (_t^) ](in the last identity we used that k ∈∂_i(t) if and only if i ∈∂_k(t)).From (<ref>) we deduce that for any weak limit point _,of {ϖ_n^,B}(1-1/q )φ^, B,t+1_ (^_t) + 1/q= _,^t+1 ( ϖ(^_t) ), ∀t ∈ . Applying Corollary <ref> on the graph ^†_d(t+1) with boundaryedges per η(), results withϖ_^,B,t+1(^_t) =(1-1/q)φ^, B,t+1_ (^_t) + 1/q, ∀ t ∈ , ∀∈_† .Plugging the latter identity into (<ref>) we arrive atφ^, B,t+1_ (^_t) = _,^t+1( φ(^_t) ) = _^t+1(φ(_t^)),∀ t ∈,which since^_t=1/c_t_t on _d^†(t+1) for any boundary edges (with c_t=d |∂_d(t)|), implies that for any local weak limit point _ ofφ_n^, B, _^t+1 ( φ(_t) ) =φ^, B,t+1_(_t), ∀t ∈ . By Lemma <ref>(c) and Definition <ref>, we know that φ_^,B,s+1φφ_^, B, s+1 for _^s+1-a.e. φ. Combining Lemma <ref> with (<ref>) then yields that _^s+1({φ_^, B, s+1})=1 for all s ∈.Since this applies for any local weak limit point _, as claimedφ_n^, Bφ_^, B when n →∞. §.§ Proof of Lemma <ref> Setting hereafter ∈{,1} we shall rely on the next two lemmas about smoothness of the fixed points ν_ of the BP recursionand the corresponding marginals of two adjacent spinsunder μ_^, B (proofs of whichare postponed to Appendix <ref>).Let ∂ R_≠ denote the boundary of the non-uniqueness regime R_≠ defined in Proposition <ref>. Define∂ R_≠^ := { (', B') ∈∂ R_≠: '= _(B')} and ∂ R_≠^+ := { (', B') ∈∂ R_≠: '= _+(B')}. * If (_0, B_0) ∈ R_≠∖∂ R_≠^ then the map (β, B) ↦ν_1^, B is continuously differentiable at (_0, B_0). * If (_0, B_0) ∈ R_≠∖∂ R_≠^+ then the same conclusion holds for ν_^, B.For ∈{,1} and any , B ≥ 0,μ_^,B (σ_i, σ_j)∝ e^βδ_σ_i, σ_jν_^, B(σ_i) ν_^, B(σ_j),∀ (i, j) ∈ E(_d), ∀σ_i, σ_j ∈ [q], μ_^, B(σ_i)∝((e^-1)ν_^, B(σ_i)+1)ν_^, B(σ_i), ∀ i ∈ V(_d), ∀σ_i ∈ [q].First note that for (_0, B_0) ∈ R_lim_n →∞ 1n∑_(i,j) ∈E_n μ_n^_0, B_0(σ_i = σ_j)= lim_n →∞ ∂∂ Φ_n(_0, B_0) = ∂∂ Φ(ν_^_0, B_0). Indeed, the left equality in (<ref>) follows by a straightforward computation, whereas for the right equality recall that the derivatives inofconvex functions {Φ_n(·, B)}_n ∈ converge to the derivative of their (convex) limit,whenever the latter exists. 
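Explicitly, the computation behind the left equality is the usual internal-energy identity, recorded here for convenience. Writing, under the standard normalization of the Potts partition function (which we assume here), \(\Phi_n(\beta,B)=n^{-1}\log Z_{\mathsf{G}_n}(\beta,B)\) with
\[
Z_{\mathsf{G}_n}(\beta,B)=\sum_{\sigma}\exp\Big(\beta\sum_{(i,j)\in E_n}\delta_{\sigma_i,\sigma_j}+B\sum_{i=1}^n\delta_{\sigma_i,1}\Big),
\]
one has
\[
\frac{\partial}{\partial \beta}\Phi_n(\beta,B)
=\frac{1}{n\,Z_{\mathsf{G}_n}(\beta,B)}\sum_{\sigma}\Big(\sum_{(i,j)\in E_n}\delta_{\sigma_i,\sigma_j}\Big)\,
e^{\beta\sum_{(i,j)\in E_n}\delta_{\sigma_i,\sigma_j}+B\sum_i \delta_{\sigma_i,1}}
=\frac{1}{n}\sum_{(i,j)\in E_n}\mu_n^{\beta,B}(\sigma_i=\sigma_j)\,.
\]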
Now, as R_c ∈ ( R_)^c, it follows by Proposition <ref> and Assumption <ref> that for any (_0, B_0) ∈ R_there exists some open neighborhood U__0, B_0∋ (_0, B_0) such that lim_n →∞ Φ_n(, B) = max{Φ(ν_^, B), Φ(ν_1^, B)}=Φ(ν_^, B),for any(, B) ∈U__0,B_0.From Lemma <ref>(ii) we know that (,B) ↦ν_^, B is differentiableat (_0, B_0) ∈ R_⊂ R_≠∖∂ R_≠^+ andwith (ν, , B) ↦Φ^, B(ν) differentiable, by the chain rulethe limitin (<ref>) is differentiable inat (_0, B_0), yielding the right equality in (<ref>).To complete the proof of (<ref>) for (_0, B_0) ∈ R_ note that the identity∂∂ Φ(ν_^_0,B_0)= 12 ∑_i ∈∂o μ_^_0, B_0 (σ_o=σ_i), is a special case of <cit.>. Indeed, the positivity and finite meanconditions <cit.> apply for the Potts model and _d, respectively,their differentiability requirement (H3^) is covered for (,B) ↦ν_^, B by Lemma <ref>(ii), and comparing (<ref>) with<cit.> shows that the translation invariance (hence unimodular), measure μ_^,B is in the space ℋ^⋆ of <cit.>.Following the same line of reasoning we find that for (_0, B_0) ∈ R_1 the equality (<ref>) holds with μ_^,B replaced by μ_1^,B.Finally, by construction μ_^,0(σ_o=σ_i)= μ_k^,0(σ_o=σ_i) for any k ∈ [q], and μ_1^, B= μ_^, B for B >0 (recall (<ref>)).§ DOMINATING SPIN AND PURE STATE: PROOF OF THEOREM <REF> At B=0 the Potts measure is invariant under a global spin color permutation. Hence,μ_n^,0(_n(σ)=k)=1/q for all k ∈ [q] (see (<ref>) for_n(·)), with μ_n,k^,0(·)=q μ_n^,0(·,_n(·)=k).In Section <ref> we show that for < _c(0) andany k_1k_2 ∈ [q],lim_n →∞_n | P_μ_n,k_1^,0^t(I_n) (σ__I_n(t)=σ) -P_μ_n,k_2^,0^t(I_n) (σ__I_n(t)=σ) | = 0, ∀σ∈_t, ∀t ∈ .By Theorem <ref>(i), as for such βμ_n^,0=1/q∑_k=1^q μ_n,k^,0μ_^,0 ,we deduce in view of (<ref>) that the same must apply also for each μ_n,k^,0, yielding Theorem <ref>(i).In contrast, to get μ_n,k^,0μ_k^,0 in Theorem <ref>(ii) amounts to showing that for >_c(0),lim_n →∞_n [ μ_n^,0 (σ__I_n(t)=σ, _n(·)=k) ] = 1/q μ_k^, 0(σ__d(t)= σ), ∀σ∈_t, ∀t ∈. To this end, we show in Section <ref> that for such , due to the edge expansion property of {_n}, with high μ_n^,0-probability the dominating color _n coincides with that for a randomly chosen `large'local neighborhood. Employing the local weak convergence from Theorem <ref>(ii), we relate the latter functional to the dominating colorof _d(t) under the wired rcm and the proof is then completed upon identifyingthe behavior that corresponds to a dominant color in this tree setting.§.§ Free limit (<_c(0)): Proof of Theorem <ref>(i) We establish (<ref>) by a couplingwith an arbitrarily small fraction of disagreements between the spins under μ_n,k_1^,0 and under μ_n,k_2^,0. To this end, we first show that the free rcm on _d, does not have an infinite cluster when < _c(0).For any d≥ 3, q ≥ 2 and ∈ [ 0, _c(0)),φ_^, 0(o ∞):= lim_t →∞φ_^, 0(o ∂_o(t)) =0.Recall <cit.> that the rcm φ_^, 0 on _d with free boundary condition is merely the product measure on {0,1}^E(_d), where each edge is open with probability π():= pp+q(1-p) for p=p_e=1-e^-. Since the branching number of _d is (d-1) (see <cit.> for a definition), the stated result follows from <cit.>, upon verifying thatm() := (d-1) π() = (d-1)(e^-1)e^+ q -1 < 1, ∀∈[0, _c(0)) . By the monotonicity of y ↦yy+a whenever a >0, it suffices for (<ref>) to show that m(_c(0)) ≤ 1. 
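For the boundary case q=2, which is settled at the end of the proof below via the value \(e^{\beta_c(0)}=d/(d-2)\), the arithmetic is simply
\[
m(\beta_c(0))=\frac{(d-1)\,\big(e^{\beta_c(0)}-1\big)}{e^{\beta_c(0)}+1}
=\frac{(d-1)\cdot\frac{2}{d-2}}{\frac{2(d-1)}{d-2}}=1\,,
\]
so the required bound holds there with equality; the general case q ≥ 3 is handled next via Jensen's inequality.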
To this end, note that by Jensen's inequality,h(x):=x^1-d/2 (x^d-1 -1)(d-1)(x-1) = 1/d-1∑_j=0^d-2 x^j+1-d/2 > 1, ∀ x > 1.Now, recall from the discussion after <cit.>, that for any d,q ≥ 3,e^_c(0) = q-2(q-1)^1-2/d -1 .In view of (<ref>), it then follows that m(_c(0))=1/h((q-1)^2/d)<1. Similarly,e^_c(0) =d/(d-2) when q=2, in which case m(_c(0))=1. Thus, m() < m(_c(0)) ≤ 1 for all q ≥ 2, as claimed. With φ_n^,0φ_^, 0(see Theorem <ref>(i)), we next show that Lemma <ref>implies having only a small fraction of the vertices of _n in large open connectedcomponents of φ_n^,0 (whenever < _c(0)). Let _n(r):=_n(r, η), r ∈, denote the number of open connectedcomponents of size r in _n equipped with bond configurationη∈{0,1}^E_n. For any >0 and β∈ [0, β_c(0)), there exists someℓ_⋆ = ℓ_⋆(, β) < ∞ such that lim sup_n →∞φ_n^, 0(∑_r ≥ℓ_⋆ r _n(r) ≥ n ) ≤.Fixing ℓ∈, note that the event {∑_r ≥ℓ r _n(r) ≥ n} implieshaving at least n vertices of _n with open connected componentsof size at least ℓ. Setting the smallest t_ℓ∈ such that |_d(t_ℓ)| ≥ℓ, where for a finite graphthe notation || is used to denote the cardinality of its vertices, for any i ∈ [n] with open connected component of size at least ℓ and_i(t_ℓ) ≅_d(t_ℓ), there must be an open path within _i(t_ℓ) from i to ∂_i(t_ℓ). This together with Markov's inequality, Theorem <ref>(i) for the rcm-sφ_n^,0, and the fact that _n _d, entail thatlim sup_n →∞φ_n^, 0(∑_r ≥ℓ r _n(r) ≥ n )≤^-1lim_n →∞_n [φ_n^, 0(I_nin∂_I_n(t_ℓ)) · 1(_I_n(t_ℓ) ≅_d(t_ℓ))]=^-1φ_^, 0( o ↔∂_o(t_ℓ)).Since t_ℓ→∞ as ℓ→∞, by Lemma <ref>we can choose ℓ large enough so that the RHS above is at most ,thereby completing the proof.Recall that, by Lemma <ref>(ii) at B=0, given a random cluster bond configuration, the Potts spin configuration is obtained by assigning a single color to all vertices in each connected component, uniformly at random, and independently across all components.Our next `complicated' procedure of generating such independent discreteuniform random variables is the key to our promised coupling of μ_n,k^,0 and μ_n,k'^,0.Suppose :=(_1, _2, …, _q) d= Mult_q(M, q^-1, q^-1, …,q^-1), follows the multinomial distribution with M trials and equal probability q^-1 for each of the q categories. Conditioned onchoose uniformly at random a labeled partition of [M]to distinguished sets {B_k,B_k} of sizes |B_k|= _⋆ := min_k{_k} and | B_k| = _k - _⋆.For a uniformly chosen permutationof [q],set Y_i=k whenever i ∈ B_k ∪ B_(k). This yields iid variables {Y_i}_i=1^M, each following the discrete uniform law ([q]) on [q].It suffices to show that for Y:=(Y_1, Y_2, …, Y_M),any fixed k:=(k_1, k_2, …, k_M) ∈ [q]^M and a fixed permutation γ^0 of [q],(Y=k |γ= ^0) = q^-M. Now, given =^0, the event Ω_ k:={ Y= k} induces the partition B_i:= {j ∈ [M] : k_j =i} of [M] with B_i = B_i ∪ B_^0 (i). Setting | B_i|=y_i, since |B_i|=y_⋆ := min_j{y_j} for all i ∈ [q], we also have | B_^0 (i)|=y_i-y_⋆. That is, given , the event Ω_ k determines thesizes and colors of {B_i, B_i}, with onlythe choices of B_i ⊂ B_i indeterminate. 
In fact, any such choiceof {B_i} with |B_i|=y_⋆ produces a realization of the event Ω_ k.There are ∏_i=1^qy_iy_⋆ such choices and the probability of observing each realization of {B_i, B_i}_i=1^q is then1My_1, y_2, …, y_q∏_i=1^q y_iy_⋆(_i=y_i, i ∈ [q]) = 1∏_i=1^q y_iy_⋆ q^-M .Hence, taking a union over the set of all possible choices of {B_i}_i=1^qwe arrive at (<ref>).In Lemma <ref> the dominant color is completely determined by the∑_k|_k - _⋆| colors involving γ, andby standard concentration bounds with high probability there are only o(M) such colors. We can thusproduce an identical copy {Y_i'}_i=1^M with only o(M) discrepancy from {Y_i}_i =1^Mand two different specified dominant colors. Indeed, building on Lemmas<ref> and <ref> we proceed this wayto establish (<ref>) (and thereby get Theorem <ref>(i)). Fixing k_1k_2 ∈ [q], our proof hinges on producing Potts spin configurationsσ^1, σ^2 ∈ [q]^nthat with high probability agree up to o(n) sites, while _n(σ^1)=k_1 _n(σ^2)=k_2. To this end, equip_n with bond configuration η∈{0,1}^E_n andlet W_r ⊂ [n], r ≥ 1, denote the vertex disjoint unionof its _n(r):=_n(r, η) open connected components of size r.Next, fix a uniformly random permutationof [q] and set ' such that '(k)=(k) for k ∉{k_1, k_2} withγ'(k_1)=γ(k_2) and γ'(k_2)=γ(k_1). Choosing =^(r):=(_n(r,1), _n(r,2), …, _n(r,q)) d= Mult_q(_n(r), q^-1, q^-1, …,q^-1),and the corresponding partition of [_n(r)] as inLemma <ref>, induces viaa color coding foreach open connected component of size r. Thereby assigning that same color to all vertices of such a component, we obtain a spin configuration σ_W_r^1. Following the sameprocedure except for replacingby ' yields another spin configurationσ_W_r^2. With _r ⊂ W_r denoting the (random) set of siteswhose color is independent of the choice of , we have by construction that σ^1__r = σ^2__r and|W_r ∖_r| ≤ q r max_k, k' ∈ [q]|_n(r,k) - _n(r,k')|. Repeating this procedure for each r ∈, independently across different r's yields a random set of sites ⊂ [n] with color independent ofand spin configurations σ^1, σ^2 ∈ [q]^n withσ_^1 = σ_^2 and|^c| ≤q ∑_r ∈ r max_k k' ∈[q]|_n(r,k) - _n(r,k')|.In view of Lemmas <ref>(ii) and <ref>, for η drawn according the rcm φ_n^, 0, the marginal laws of σ^1 and σ^2 are both given by thePotts measure μ_n^, 0.Further, these spin configurations are such that _n(σ^1)= k_1 _n(σ^2)=k_2. Indeed, since the number of sites inwith any given color is exactly q^-1|| in bothσ^1 and σ^2, the event _n(σ^1)=k_1 amounts tospin configuration σ_^c^1 of dominating color k_1. The relation between and ' dictates that this happens if and only if the same holdsfor σ^2 with k_1 replaced by k_2, namely that equivalently_n(σ^2)=k_2. Armed with these observations, for any t ∈ and σ∈_t, we deduce that1n∑_i=1^n | 1(σ^1__i(t)= σ, _n(σ^1)=k_1) -1(σ^2__i(t)= σ, _n(σ^2)=k_2)| ≤1n∑_i=1^n1(_i(t) ∩^c ∅, _i(2t) ≅_d(2t)) + 1n∑_i=1^n1(_i(2t) _d(2t))Clearly, 1(_i(t) ∩^c ∅) ≤∑_j ∈^c 1(_i(t) ∋ j) and if j ∈_i(t) for _i(2t) ≅_d(2t) then also_j(t) ≅_d(t). 
Thus, the first term in the RHS of (<ref>) is bounded above by1n∑_j ∈^c∑_i =1^n1(_i(2t) ≅_d(2t), _i(t) ∋ j)≤1n∑_j ∈^c|_j(t)|1(_j(t) ≅_d(t))≤ (d+1)^t |^c|n.Recalling that μ_n,k^,0(·)=q μ_n^,0(·,_n(·)=k) and _n _d, by (<ref>), (<ref>) and the triangle inequality, we find that for any k_1k_2 ∈ [q] and t ∈,lim sup_n →∞_n |P_μ_n,k_1^,0^t(I_n) (σ__I_n(t)=σ) - P_μ_n,k_2^,0^t(I_n) (σ__I_n(t)=σ) | ≤ q(d+1)^tlim sup_n →∞1n [|^c|],where the expectation on the RHS is with respect to both the underlying rcmand our two color assignments via Lemma <ref>. Thus, to conclude with(<ref>) it suffices to fix >0 and show that lim sup_n →∞ n^-1 [||^c] ≤3 q . To this end, set for ℓ_⋆() as in Lemma <ref> and 0 < _⋆≤ℓ_⋆()^-2 the events:= _1 ⋂_r < ℓ_⋆()_2(r), _1:= {∑_r ≥ℓ_⋆() r _n(r) ≤ n }, _2(r):= {max_k, k' ∈ [q]|_n(r, k) -_n(r, k')| ≤_⋆ n }.Note that on the eventwe have from (<ref>) and our choice of _⋆ thatq^-1 |^c| ≤∑_r ∈ r max_k, k' ∈ [q] |_n(r,k) - _n(r,k')| ≤∑_r ≥ℓ_⋆() r _n(r) + _⋆ n ∑_r < ℓ_⋆() r ≤2 n.With |^c| ≤ n, we thus arrive at (<ref>) and thereby at (<ref>), upon showing that lim sup_n →∞ (_1^c) + lim sup_n →∞∑_r<ℓ_⋆()(_2(r)^c) ≤q. Next note that for some c=c(q) >0 and any _⋆>0, r ∈, by the union bound( max_kk' ∈ [q] |_n(r,k) -_n(r,k')| ≥_⋆_n(r)|η)≤∑_k ∈ [q](|_n(r,k) -_n(r)/q| ≥_⋆_n(r)/2| η) ≤ 2 q exp(-c_⋆^2 _n(r))(the last step is a standard Binomial(m,q^-1) tail bound).Further _2(r)^c ⊆{_n(r) > _⋆ n }, hence ∑_r < ℓ_⋆()(_2(r)^c) ≤ 2 q ℓ_⋆() e^-c_⋆^3 n .The latter bound goes to zero as n →∞, so by Lemma <ref>we have that(<ref>) holds, as claimed. §.§ Pure states (> _c(0)): Proof of Theorem <ref>(ii)For any ℓ∈ we have the following local proxy at u ∈ V for the dominant color _(·) of a graph =(V,E) (possibly infinite): _ℓ, (u):= argmax_k ∈ [q]{ N_ℓ, (u,k) } , N_ℓ, (u,k):= 1|_u, (ℓ)|∑_v ∈_u,(ℓ) 1(_v,(2ℓ) ≅_d(2ℓ)) δ_σ_v, k ,and we break ties uniformly among all maximizer values (as done in Definition<ref>). Further let N_ℓ, ^(1)(u):= max_k ∈ [q]{ N_ℓ, (u,k) } andN_ℓ, ^(2)(u):= max_k _ℓ, (u){ N_ℓ, (u,k) },suppressing the dependency onwhen it is clear from the context. Our next result identifies the behavior of the dominating color of _d(ℓ), under the Potts measures{μ_k^, 0}_k ∈ [q].Fix ≥_(0) and d ≥ 3. Then, for any ℓ∈, μ_k^, 0 ( N_ℓ, _d(o,k')) = { [ μ_1^, 0(σ_o=1)k=k',; μ_1^, 0(σ_o=2)k k'. ] . Further,lim_ℓ→∞_μ_k^, 0 (N_ℓ, _d(o,k') ) =0, k, k' ∈[q],and consequently,lim_ℓ→∞ μ_k^, 0 ( _ℓ, _d(o)=k') = δ_k,k', k, k' ∈[q].As _d is a vertex transitive graph, the measures {μ_k^, 0}_k ∈ [q]are translation invariant on V(_d), so Lemma <ref>applies even when the root o is replaced by any j ∈ V(_d).We first show (<ref>) given (<ref>)-(<ref>). Indeed, fixing kk' ∈ [q] note thatμ_k^, 0( _ℓ(o)=k') ≤μ_k^, 0(N_ℓ(o, k)≤ N_ℓ(o, k')).Applying (<ref>)-(<ref>) and Chebychev's inequality, we see that, for any δ >0,lim sup_ℓ→∞μ_k^, 0(N_ℓ(o, k) ≤μ_1^, 0(σ_o=1) -δ) ≤1δ^2lim_ℓ→∞_μ_k^, 0(N_ℓ(o, k)) =0.By a similar argumentlim_ℓ→∞μ_k^, 0(N_ℓ(o, k') ≥μ_1^, 0(σ_o=2) +δ) =0.From Proposition <ref>(ii) and the definition of ν_^, 0 it follows that q^-1= ν_^, 0(1) < ν_1^, 0(1) for ≥_(0).The definition of ν_1 further yields that ν_1^, 0(2)= (q-1)^-1 (1- ν_1^, 0(1)) and thus ν_1^, 0(1) > ν_1^, 0(2) for ≥_(0). As the map ν↦((e^ -1)ν+1) ν is strictly increasing on (0, ∞) it follows from(<ref>) that μ_1^, 0(σ_o=1) > μ_1^, 0(σ_o=2), for any ≥_(0).Now setting δ:= 13(μ_1^, 0(σ_o=1) - μ_1^, 0(σ_o=2)),we deduce that for any kk' ∈ [q],lim_ℓ→∞μ_k^, 0 ( N_ℓ(o, k)-N_ℓ(o, k') ≥δ) =1. 
Comparing (<ref>) with (<ref>) and using that ∑_k'=1^q μ_k^, 0( _ℓ(o)=k')=1, results with(<ref>).Setting π_k(1)=k, π_k(k)=1 and π(k')=k' whenever k'1, k'k, the proof of (<ref>) is immediate by the translation invariance of {μ_k^, 0}_k ∈ [q] and having that for any finite set W ⊂ V(_d),μ_1^,0 (σ_i =k_i', i ∈W) = μ_k^,0 (σ_i =π_k(k'_i), i ∈W), k, k_i' ∈[q].Turning to establish (<ref>), by (<ref>) we can setk=1, further noting that by the translation invariance of μ_1^,0and the vertex transitivity of _d,_μ_1^,0(δ_σ_v, k', δ_σ_u,k') = c_k'(dist (u,v) ), ∀u,v ∈V(_d). By definition c_k'(s) ≤ 1 and thus, for any t ∈,_μ_1^, 0( N_ℓ, _d(o,k') ) = 1/|_d(ℓ)|^2∑_u,v ∈_d(ℓ) c_k'( dist(u,v)) ≤|_d(t)|/|_d(ℓ)| + max_s ≥ t{ c_k'(s)} .As |_d(ℓ)| →∞, it remains only to show that c_k' (t) → 0 when t →∞. This in turn follows from the fact that the marginal of μ_1^,0 on any fixed ray in _dis a time homogeneous Markov chain of finite state space ([q]), and strictly positive transitionprobabilities. Indeed, generalizing the proof of Lemma <ref> along the lines of Remark<ref>, shows that such marginal must be a (possibly inhomogeneous) Potts measure on , hence a Markov chain with strictly positive transition matrices, whichthanks to the translation transitivity of μ^,0_1 must also be time homogeneous.Setting for δ, η >0 and i ∈ [n], _ℓ, n^δ := {|{(i,j) ∈ E_n: _ℓ(i) _ℓ(j)}|≥δ n } and _ℓ, n^η(i) :=Δ_i1{ N_ℓ, _n^(1)(i) -N_ℓ, _n^(2)(i) ≤η},we proceed to show that as n,ℓ→∞, under μ_n^,0 both_ℓ, n^δ and _ℓ, n^η(I_n) become negligible. This observation together with the assumed edge expansion property of _n will ensure that _ℓ(I_n) and _n are same on an event with arbitrarily large probability for all large n and ℓ.For any β > _c(0), δ>0 and small η=η(β)>0, underAssumption <ref>,lim_ℓ→∞ lim sup_n →∞ μ_n^,0(_ℓ, n^δ) = 0andlim_ℓ→∞ lim sup_n →∞ _n[μ_n^,0(_ℓ, n^η(I_n))] =0.Fixing ℓ∈, by Markov's inequality, the local weak convergence of Theorem <ref>(ii) and the uniform integrability of Δ_I_n lim sup_n →∞μ_n^,0(_ℓ, n^δ) ≤δ^-1 lim sup_n →∞_n[ ∑_j ∈∂ I_nμ_n^,0(_ℓ(I_n) _ℓ(j))] = (qδ)^-1∑_k=1^q ∑_j ∈∂ oμ_k^, 0(_ℓ(o) _ℓ(j)).As {_ℓ(o) _ℓ(j)}⊂{_ℓ(o)k}∪{_ℓ(j)k}, the proof of the first assertion in (<ref>) completes upon combining (<ref>) and(<ref>) (see also Remark <ref>).Turning to the second assertion of (<ref>), denote_ℓ, n^η(i) := ∑_k=1^q _ℓ, n^η, k(i),where _ℓ, n^η, k(i):=Δ_i1{_ℓ, _n(i)= k ,N_ℓ, _n^(1)(i) -N_ℓ, _n^(2)(i) ≤η}. Hence, by Theorem <ref>(ii) andthe uniform integrability of Δ_I_n we have thatlim_n →∞ _n [ μ_n^,0(_ℓ, n^η(I_n) )] = d/q ∑_j, k=1^q μ_j^, 0(_ℓ(o)=k, N_ℓ,_d^(1)(o) - N_ℓ,_d^(2)(o) ≤η).By (<ref>) and the union bound, for small enough η=η()>0and all k ∈ [q]lim_ℓ→∞μ_k^, 0(_ℓ(o)=k,N_ℓ,_d^(1)(o) - N_ℓ,_d^(2)(o) ≥η) = 1,which in combination with (<ref>) yields the second assertion of (<ref>). Recall from Theorem <ref>(ii) that for any > _c(0), k ∈ [q], ℓ≥ t, andσ∈_t,lim_n →∞ _n [ P_μ_n^,0^ℓ(I_n)(σ__I_n(t) =σ, _ℓ(I_n)=k) ] = 1q∑_k'=1^q μ_k'^, 0(σ__d(t) =σ, _ℓ, _d(o)=k). Furthermore, for any such k, ℓ≥ t, and σ,_n| μ_n^,0 (σ__I_n(t)=σ, _n=k) - P_μ_n^,0^ℓ(I_n)(σ__I_n(t) =σ, _ℓ(I_n)=k)| ≤∑_k=1^q ∑_k'k_n[μ_n^,0 (_ℓ(I_n)=k', _n=k)],and from (<ref>) of Lemma <ref>,for any k,k' ∈ [q],lim_ℓ→∞ μ_k'^, 0(σ__d(t) =σ, _ℓ, _d(o)=k) = δ_k,k' ·μ_k^, 0(σ__d(t) =σ).In view of (<ref>)-(<ref>) and the triangle inequality,(<ref>) is a direct consequence of lim_ℓ→∞ lim sup_n →∞ _n [μ_n^,0 (_ℓ(I_n)=k',_n=k)] =0,∀k' k. 
Turning to prove (<ref>), fix η=η() as in Lemma <ref> and set S_k, ℓ:= {i ∈ [n]: _ℓ(i)=k,N_ℓ^(1)(i) -N_ℓ^(2)(i)> η},k ∈ [q].We next show that the assumed edge expansion property of _n allows us to restrict attention when ℓ is large and <1/(2q), to the event_ℓ:= ⋃_k=1^q _k,ℓwhere_k, ℓ:={max_k' k {|S_k', ℓ|} < n, |S_k, ℓ| ≥(1-q) n}. Indeed, with ∑_k |S_k,ℓ| ≤ n, there is at most one k ∈ [q] with |S_k,ℓ| > n/2and consequently,_ℓ^c ⊂{∑_k=1^q |S_k,ℓ| ≤ (1-) n }⋃_k=1^q{ |S_k,ℓ| ∈ [ n,n/2] } .Now, fix ∈ (0,η/4q), let λ = λ_∧ 2 for λ_>0 of Theorem <ref>(ii). The event |S_k, ℓ| ∈ [ n,n/2] impliesby the edge expansion property of _n that |∂ S_k, ℓ| ≥λ n :=2 δ n.Further, noting that ∑_i=1^n Δ_i1( N_ℓ^(1)(i) - N_ℓ^(2)(i) ≤η) + |{(i,j) ∈ E_n: _ℓ(i) _ℓ(j)}| ≥max_k{ |∂ S_k, ℓ| } , it follows from the preceding that under_ℓ^c either the event{∑_i=1^n _ℓ, n^η(i) ≥ n δ} or ^δ_ℓ,n holds. Thus, from Lemma <ref> we deduce that, as claimed earlier, lim_ℓ→∞ lim sup_n →∞ μ_n^,0(^c_ℓ) =0. Having to contend only with the (disjoint) events_k,ℓ of (<ref>), wearrive at (<ref>) upon showing that for any kk' ∈ [q] and ℓ∈,_n[μ_n^,0(_k, ℓ,_ℓ(I_n)=k')]≤qandlim sup_n →∞μ_n^,0(_k', ℓ, _n =k) =0. To this end, as |S_k, ℓ| ≥ (1-q) n on _k,ℓ, it is immediate that for any k'k,1(_k, ℓ) 1/n∑_i=1^n1(_ℓ(i) =k')≤ q,which upon taking the expectation gives the lhs of (<ref>).Next, N(i,k) ≤ 1, hence on _k', ℓ N_ℓ (k',k) := 1/n∑_i=1^n [N_ℓ(i,k')- N_ℓ(i,k) ]≥1/n∑_i ∈ S_k', ℓ [N^(1)_ℓ(i)- N^(2)_ℓ(i)] -q ≥ (1- q) η- q ≥η2(due to our choice of q <η/4<1/4). Thus,μ_n^,0(N_ℓ(k',k) 1{ _k', ℓ, _n =k} ) ≥η/2 μ_n^,0(_k', ℓ,_n =k). Moreover, note that for any k'k,N_ℓ (k',k) =1n ∑_v=1^n1( _v(2ℓ) ≅_d(2ℓ))[ δ_σ_v, k' -δ_σ_v,k ]. Hence, with _n _d, lim_n →∞ sup_σ ∈[q]^n |N_ℓ(k',k) - N (k',k) | = 0, where N (k',k) :=1n ∑_v=1^n (δ_σ_v, k'- δ_σ_v, k). Clearly, N (k',k)1(_n =k) ≤ 0, hence (<ref>) entails that lim sup_n →∞ μ_n^,0(N_ℓ(k',k) 1 {_k', ℓ, _n =k} ) ≤0. From (<ref>) and (<ref>) we get the rhs of (<ref>), thus completing the proof of Theorem <ref>(ii).§ THE CRITICAL LINE: PROOF OF THEOREM <REF> Recall Lemma <ref>(a) on the existence of sub-sequentiallocal weak limit points for both μ_n^,B and φ_n^,B and that as in Step I of the proof of Theorem <ref>, fixingd, q ≥ 3, B>0 and (,B) ∈ R_c, it suffices to show that such limitpoint _ of φ_n^,B is supported on _ m:=_^,B. RecallingDefinition <ref> of _⋆(t) and Lemma <ref>(d)that such _ must be supported on_⋆:={φ∈(_∞^⋆): φ^t ∈_⋆(t)for allt ∈}, weshow that _(_⋆∖_ m) >0 contradicts <cit.>. Indeed, Section <ref> identifies _ m in terms of the support of the rcm messages, whereby Section <ref> completes this argument by combining this characterization with the symmetric form Ψ^ sym(·) of the free energy density limitof the rcm in case d ∈ 2. §.§ Properties of the rcm messages We start with the definition of our RCM messages. [RCM messages] Let ^⋆ denote the tail σ-algebra of the bonds on _d^⋆. That is,for the subsets _t^⋆ := {0,1}^E(_d^⋆(t)) of _∞^⋆ :={0,1}^E(_d^⋆),^⋆ := ⋂_t ≥ 1^⋆_t , where^⋆_t := σ(⋃_r > t_t^⋆,r) and_t^⋆,r:=σ(_r^⋆∖_t^⋆).For (u,v) ∈ E(_d) and φ∈_⋆, the rcm message from u to v is thens_u →v (φ) := φ[ u v^⋆| η_(u,v)=0, ^⋆].While not needed here, one can define the infinite volumerandom cluster Gibbs measures on _d, in presence of an external field, via the so calledDobrushin-Lanford-Ruelle condition, as done in <cit.>,and show that the messages of (<ref>)are indeed thosethat characterize such Gibbs measures. 
The rcm messages are limits of the rcm pre-messages which we define next(and later relate to the local functions ^t,r_u → w on the finite graphs _n).To this end, for finite graph, w ∈ V() and t<r, we denote hereafter η_w,t^⋆,r:=η__w^⋆(r)∖_w^⋆(t) andη_w,t^r :=η__w (r)∖_w (t) withη_t^⋆,r:=η_o,t^⋆,r and η_t^r:=η_o,t^r the corresponding objects on the tree _d rooted at o.[rcm pre-messages] For (u,v) ∈ E(_d(t)) considerthe (local) function _u →v^t,r(y):= φ_^⋆_d(r)^,B (u v^⋆| η_(u,v)=0, η_t^⋆,r=y),ofy ∈_r^⋆∖_t^⋆ . Similarly,for finite graph , w ∈ V() and (u,v) ∈_w(t) let_u →v^t,r(w, y) := φ__w(r)^, B (u v^⋆| η_(u,v)=0, η_w,t^⋆,r=y), y ∈{0,1}^_w^⋆(r)∖_w^⋆(t),using hereafter s_∂ v(φ) for the vector (s_u → v(φ))_u ∈∂ v and similarly_∂ v^t,r and _∂ v^t,r:=(_u → v^t,r(v, ·))_u ∈∂ v.Clearly _t^⋆,r↑^⋆_t and therefore theDoob's martingale {_u → v^t,r(η_t^⋆,r)}_rconverges φ-a.e. to φ [uv^⋆| η_(u,v)=0,^⋆_t ].Further,^⋆_t ↓^⋆ and hence the latter backward martingale converges φ-a.e. to s_u → v(φ).In conclusion,fixing any φ∈_⋆,forφ-a.e. bond configuration, s_∂ v(φ) = lim_t →∞lim_r →∞_∂ v^t,r(η_t^⋆,r ), ∀ v ∈ V(_d).Now, for ∈{, 1} and ν_^,B of Definition <ref>, let b_=b_^, B:= q ν_^,B(1)-1/q-1 ≥0, setting hereafter b_:=b_1^,B, γ := (e^β -1)/(e^β+q-1), andBP (s_1,…,s_d-1; x):=e^x∏_i=1^d-1(1+(q-1) γ s_i)- ∏_i=1^d-1(1-γ s_i)/e^x∏_i=1^d-1(1+(q-1) γ s_i) + (q-1)∏_i=1^d-1(1-γ s_i ) . Utilizing these objects, we proceed to characterize _ m via the support of the RCM messages.Fix B >0,≥ 0 and φ∈_⋆. * Forφ-a.e. bond configurations and any (u,v) ∈ E(_d),s_u → v(φ) =BP((s_w → u(φ))_w ∈∂ u ∖{v}; B), b_≤s_u → v(φ)≤ b_ . * For ∈{, } and any (u,v) ∈ E(_d), we have s_u → v(φ_^,B)=b_,forφ_^, B-a.e. configurations. * If φ-a.e. the random vector s_∂ o(φ)is supported on {(b_, b_, …, b_)}_∈{, },then φ∈_ m.(a).For any (i,j) ∈ E(_d) let _i → j be the connected component of the sub-tree of _d rooted at i after deleting the edge (i,j). Set _i → j^⋆ to be the graph obtainedfrom _i → j by adding the edges from v^⋆ to V(_i → j)∖{i} (so there is no edge between i and v^⋆ in_i → j^⋆). Denoting by Ω_i → j the event that i is connectedto v^⋆ using only the open bonds within _i → j^⋆ and by (i) the (unique) parent of io (the root) in _d, let_t^r :=σ (_t^⋆,r, {Ω_i →(i), i ∈∂_d(r)} ).By the one-to-one correspondence between the power set of _⋆(r) and σ({Ω_i →(i), i ∈∂_d(r)}), theσ-algebra _t^r is generated by the finite partitionof (_r^⋆∖_t^⋆) ×_⋆(r)to pairs (η_t^⋆,r, ^r).Thus,for any B>0,≥ 0, φ∈_⋆ and r>tφ(·| _t^r)(η_t^⋆,r=y, ^r =)=φ_^,B,r(·|η_t^⋆,r=y) , ∀y ∈_r^⋆∖_t^⋆ , ∀∈_⋆(r). In particular, φ [uv^⋆| η_(u,v)=0,_t^r ] = _u → v^t,r(η_t^⋆,r,^r),∀ (u,v) ∈ E(_d(t)) ,where_u →v^t,r(y,):=φ_^,B,r(u v^⋆| η_(u,v)=0, η_t^⋆,r=y). As _t^r ↑_t^⋆,fixing any φ∈_⋆,we get similarly to (<ref>),that φ-a.e.s_u →v(φ) = lim_t →∞ lim_r →∞ _u →v^t,r(η_t^⋆,r,^r), ∀(u,v) ∈E(_d)and by the continuity of the mapping BP(·;B) it suffices for (<ref>) to show that for any t<r,^t,r_u →v = BP ((^t,r_w →u)_w ∈∂u ∖{v}; B) , ∀(u,v) ∈E(_d(t)). By Lemma <ref>, using an argument similar to the proof of Corollary <ref>we find thats̅^t,r_u → v( y,) :=ϖ^,B,r_(σ_u =1|η_(u,v)=0, η_t^⋆,r= y) =(1 -1q) ^t,r_u → v ( y,) + 1q,for any y ∈_r^⋆∖_t^⋆, ∈_⋆(r). 
Fixing such (u,v), y, and ,the identity (<ref>) thus amounts tos̅^t,r_u →v = BP( (s̅^t,r_w →u)_w ∈∂u ∖{v}), where BP(s̅_1,…,s̅_d-1):=(1-1q) BP(q s̅_1-1/q-1,…, q s̅_d-1-1/q-1;B) +1q.Alternatively, in view of (<ref>),BP(s̅_1,…,s̅_d-1) =e^B∏_i=1^d-1(1+ (e^-1) s̅_i ) /e^B ∏_i=1^d-1(1+ (e^-1) s̅_i ) + (q-1) ∏_i=1^d-1(1+ e^β -1/q-1 (1-s̅_i) ) .The spin marginal of ϖ^,B,r_(·|η_(u,v)=0, η_t^⋆,r= y)on the sub-tree _u → v∩_d(t) is a Potts measure.Further, with ∈_⋆(r), the boundary condition for the spins at _u → v∩∂_d(t),as determined by ( y, ), must be a product measure, where each marginalspin is either uniform over [q] or supported on 1. Thus, restricting thisPotts measure to u and ∂ u ∖{v}, yields a Potts measureon q spins,whose boundary marginals on ∂ u ∖{v} aremutually independent and uniform over {2,…,q}. Thus, upon expressing the rhs of(<ref>) in terms of (<ref>), the identity (<ref>) follows by a direct computation.Setting b_,t' (u,v) := φ_, t'^, B(uv^⋆|η_(u,v)=0) for ∈{, }, (u,v) ∈ E(_d(t)) and φ_,t'^,B as in (<ref>),we show in part (b) thatb_,t'(u,v) → b_ as t' →∞.Thus, in view of (<ref>) and (<ref>),it suffices for(<ref>) to fix such (u,v) and show that for r> t' > t, any y ∈_r^⋆∖_t'^⋆ and ∈_⋆(r),b_,t' (u,v) ≤φ_^,B,r(uv^⋆|η_(u,v)=0, η_t'^⋆,r =y) ≤ b_,t' (u,v) .Theconditional measure in (<ref>) is merely the rcm on _u → v∩^⋆_d(t') with the induced boundary condition' ( y,) ∈_⋆(t') at _u → v∩^⋆(∂_d(t')). Any such ' stochastically dominates the free boundary condition andis stochastically dominated by the corresponding wired boundary condition (see Definition <ref>), with (<ref>) thus a consequence of the monotonicity of the RCMφ^,B__u → v∩_d^⋆(t')(·). (b). The rcm-s φ_^, B, for ∈{, }, are bothtail trivial (indeed, this follows as in the proof of <cit.>). Moreover, by definition φ^,B_∈_⋆ are invariantunder automorphisms of _d. Hence,φ^,B_-a.s. the values of s_u → v(φ^, B_) for(u,v) ∈ E(_d), must equal the same non-random constantb_:=φ^,B_ (wv^⋆ | η_(w,o)=0),where w ∈∂ o. Recall (<ref>) thatb_,t (w,o) → b_ as t →∞.Now, for any finite graphand e ∈ E(), the conditional measure φ_^, B(·| η_e=0) is merely the RCMφ_^, B(·), for the subgraphobtained upon deleting the edge e from . Hence, it follows from Lemma <ref> thatb_=lim_t →∞ b_, t =lim_t →∞q μ^, B_w →o, , t(σ_w =1)-1q-1, where μ^, B_w → o, , t denotes the Potts measure on_w → o∩_d(t) with free and 1-boundary conditions at the spins of _w → o∩∂_d (t), for = and =, respectively. Since _w → o is a (d-1)-ary tree (i.e. every vertex has (d-1) children), it thus followsfrom Definition <ref> of ν_^,B, that the RHS of(<ref>) is precisely b_^,B of (<ref>), as claimed.(c). Setting _ := {(s_∂ o(φ))= (b_, b_, …, b_)} and fixingφ∈_⋆ with φ(_∪_)=1, our claimthat φ∈_ m amounts to having for any t ∈ that φ-a.e.φ(η__o^⋆(t)=·|^⋆) =φ_^,B(η__o^⋆(t)=·) 1__+φ_^,B(η__o^⋆(t)=·) 1__ . 
To this end, fixing t<t'<r, recall that by (<ref>) φ(η_ B_o^⋆(t)= y |_t'^r) =∑_∈_⋆(t)φ_^,B,t(η__o^⋆(t)= y)ρ_φ (|_t'^r), ∀ y ∈_o^⋆(t),with ρ_φ (·|_t'^r) the conditional distribution induced on _⋆(t)by the finitely generated _t'^r.Recall the one-to-one correspondence between_⋆(t) and subgraphs C_⋆⊂{(u,v^⋆), u ∈∂_d(t)}, withρ_φ ({u ∈ C_⋆}|_t'^r) =φ[uv^⋆| η_(u,(u))=0, η_(u,v^⋆)=0, _t'^r ] =: s̃_u →(u)^t',r(η_t'^⋆,r,^r), ∀ u ∈∂_d(t).Using standard and backward martingale convergence theorems, it then follows that φ-a.e.φ(η__o^⋆(t)=·|^⋆)=∑_∈_⋆(t)φ_^,B,t(η__o^⋆(t)=·)ρ_φ (|^⋆),ρ_φ ({u ∈ C_⋆}|^⋆)= s̃_u →(u)(φ) := lim_t' →∞lim_r →∞s̃_u →(u)^t',r .In view of (<ref>), we get (<ref>) and thereby complete the proof,upon showing that φ-a.e.ρ_φ(·|^⋆) = ρ_φ_^,B (·) 1__ +ρ_φ_^,B(·) 1__. Turning to prove (<ref>), recall that φ∈_⋆ and thatthe RCM φ__d^⋆(t')^, B is monotonic. Hence,ρ_φ_,t'^,B(· ) ρ_φ(·|_t'^r) ρ_φ_,t'^,B(·) ,which upon taking first r →∞ and then t' →∞, yields that φ-a.e. ρ_φ_^,B(·) ρ_φ(·|^⋆) ρ_φ_^,B(·) .Given such stochastic ordering, (<ref>) follows once all the corresponding one-dimensional marginals match (see <cit.>). That is, in view of (<ref>) and our assumption thatφ(_∪_)=1, it remains only to show that φ-a.e. on the event _s̃_u →(u)(φ) = s̃_u →(u)(φ^,B_), ∀ u ∈_d.To this end, note that as in the proof of (<ref>), we also have that φ-a.e.s̃_u →(u)(φ) =BP((s_w→ u(φ))_w ∈∂ u ∖{(u)};0),∀ u ∈_d.Recalling from part (b) that s_w→ u(φ^,B_)=b_ for all (w,u) ∈ E(_d), is thus suffices to verify that on the event _, also s_w→ u(φ)=b_ (outside a null set). Indeed, the latter is a direct consequence of (<ref>) and theidentityb_^,B =BP(b^,B_,…, b^,B_;B),which in view of (<ref>), is merely the fact that ν_^,B andν_1^,B are both fixed points of (<ref>).With b_,ℓ(w,o) ↓ b_ andb_,ℓ (w,o) ↑ b_ (for w ∈∂ o),we deduce from (<ref>) that for some finite κ_o=κ_o() and any r >t + κ_o,b_ - ≤_u →v^t+κ_o,r (y) ≤b_+, ∀(u,v) ∈E(_d(t)), y∈_r^⋆∖_t+κ_o^⋆ . Moreover, if w ∈ V() for a finite graphsuch that _w(r) ≅_d(r) for r > t + κ_o, thenb_ -≤_u →v^t+κ_o,r(w, y) ≤b_+, ∀(u,v) ∈_w(t), y ∈{0,1}^_w^⋆(r)∖_w^⋆(t+κ_o) .Denoting hereafter by ^-(w) the graphwithout w ∈ V()and all edges to it, we next show that for B >0, when r ≫ t, uniformly overthe law induced on _w^⋆(t) by the RCMondoesnot depend much on the boundary conditions outside _w^⋆(r).Fix B >0, ≥ 0, 1< t < r and o∈ V() for a finite graph .Denoting by _t,r^c the set of η_o,t^⋆,r with an open cluster,not containing v^⋆,that intersects both∂_o(t) and ∂_o(r), φ_^, B(η__o^⋆(t)=·|η_^⋆∖^⋆_o(t) = z)= φ^, B__o(r) (η__o^⋆(t)=·|η_o,t^⋆,r = z_^⋆_o(r) ∖_o^⋆(t)), whenever z__o^⋆(r)∖_o^⋆(t)∈_t,r. Further, for any >0, some r_0=r_0(,q,B,t, |_o(t)|)<∞,sup_r ≥ r_0sup_, y{φ^, B_(^c_t,r|η_∖_o(t) =y) } ≤ ,sup_r ≥ r_omax_, y|φ_^, B(η__o(t)=·|η_∖_o(t) =y)/φ^, B__o(r) (η__o(t)=·|η_o,t^r =y__o(r) ∖_o(t))-1| ≤ .In addition, (<ref>) holds for φ^, B_^-(o)(·). Note that (<ref>) amounts to having_⋆( y,y):=φ_^, B( y)/φ_^, B( y)φ__o(r)^, B( y__o^⋆(r))/φ__o(r)^, B( y__o^⋆(r)) =1,whenever y__o^⋆(r)∖_o^⋆(t)∈_t,r and y_e= y_e for all e ∉^⋆_o(t). Turning to prove (<ref>), recall by Definition <ref> of the rcm on a finite graph, that_⋆(y, y) =q^|C_^⋆(y)|- |C_^⋆(y)| +|C__o^⋆(r)(y__o^⋆(r))| - |C__o^⋆(r) (y__o^⋆(r))|. The open cluster containing v^⋆ appears once in each of these four counts. 
Thus,with y_e =y_e for all e ∉^⋆_o(t), theRHS of (<ref>) is one, unless some other open cluster induced byy or by y intersects both ∂_o(t) and ∂_o(r), a scenario which is precluded for y__o^⋆(r)∖_o^⋆(t) =y__o^⋆(r)∖_o^⋆(t)∈_t,r.As for (<ref>), we can assume wlog thaty_∖_o(t) induces L ≥ 1 disjoint open clusters C_ithat intersect both ∂_o(t) and ∂_o(r). In particular,L ≤ |_o(t)| and |C_i| ≥ r-t. Since _t,r^c implies that all edges from v^⋆ to some C_i must be closed, we thus deduce by a union over i ≤ L,that φ^, B_(^c_t,r|η_∖_o(t) =y) ≤∑_i=1^L q e^-B |C_i|≤ q |_o(t)| e^-B(r-t),which since B>0, completes the proof for φ_^,B. With t>1, the same applies for φ^,B_^-(o).Turning to prove (<ref>), we move to the marginal rcm-s φ^,B_ (·) of (<ref>) and note that as in our proof of (<ref>),it suffices to find _r=_r(q,B,t,|_o(t)|) → 0, such that if y_e= y_e for all e ∉_o(t), then( y,y):=φ_^, B( y)/φ_^, B( y)φ__o(r)^, B( y__o (r))/φ__o(r)^, B( y__o(r))≤ 1 + _r.To this end, since y and y agree outside _o(r), the four products over edges in (<ref>)match and perfectly cancel each other, with ( y,y) therebybeing the ratio of product of contributions from open clusters.Further, as y coincides with y outside _o(t), the collectionsC_( y) and C_( y) contain exactly thesame open clusters C which do not intersect _o(t), and this applies also forC__o(r) ( y__o(r)) versus C__o(r) ( y__o(r)). Similarly, a cluster C ⊂_o(r-1) appears in C_( y) if and only if it appears in C__o(r) ( y__o(r)) (and the same holds for y). Denoting by C_+ those elements ofC_( y) ∪ C__o(r) ( y__o(r)),counted twice if needed, which intersect both ∂_o(t) and ∂_o(r), and by C_- all such elements in C_( y) ∪ C__o(r) ( y__o(r)), we thus deduce that( y,y)=∏_C∈ C_+(1+(q-1)e^-B|C|)/∏_C∈ C_-(1+(q-1) e^-B|C|) ≤ (1+(q-1) e^-B(r-t))^2|_o(t)| =: 1+_r(at most 2 |_o(t)| elements in C_+, each of at least size r-t, and product over C_- exceeding one). §.§ The rcm free energy density for d ∈ 2:proof of Theorem <ref>Recall <cit.> that for (β,B) ∈ R_c and d∈ 2 the Potts free energy density limitΦ=Φ(,B) := max{Φ(ν_^,B), Φ(ν_1^,B) } ,has the symmetric rcm formulation, Φ = log (Ψ^ sym(b_,…,b_))=log (Ψ^ sym(b_,…,b_))= sup_b∈[b_,b_]^d{log (Ψ^ sym(b)) },where for the symmetric group _d of size d,Ψ^ sym(b):=Ψ^ vx(b)Ψ^ e, sym(b), Ψ^ e, sym(b):= 1/d!∑_π∈_dΨ^ e(b_π(1),…,b_π(d)),while for any b=(b_1,b_2,…, b_d) ∈ [0,1]^d and d ∈ 2,Ψ^ vx( b) :=(1-γ)^-d(e^B∏_i=1^d (1+ (q-1) γ b_i ) + (q-1)∏_i=1^d (1-γ b_i ) ), γ := e^β -1/e^β+q-1,Ψ^ e( b) := (1-γ)^-d/2∏_i=1^d/2(1+(q-1) γ b_2i-1b_2i).We prove Theorem <ref> by passing wlog to a locally weakly convergentsub-sequence of φ_n^,B and procuring from_(_⋆∖_ m) >0 some graphs '_n _d of free energy densities such that _n Φ'_n(,B) > Φ. To do so, we rely on our next observation,that the supremum in (<ref>) over bwith at leasttwo coordinates strictly inside (b_, b_),is smaller than Φ. For (,B) ∈ R_c and any δ>0 there exists _o∈ (0, δ) such thatsup_b∈Λ_δ{log (Ψ^ sym(b)) } < Φ-_o,whereΛ_δ:={b∈[b_,b_]^d: |{i: b_i∈[b_+δ,b_-δ]}|≥2}. Since Λ_δ is compact and Ψ^ sym(·) is continuous, the supremum over Λ_δ is achieved at some b'∈Λ_δ and it suffices toshow that log (Ψ^ sym(b'))< Φ. 
Now, suppose thatb^∘∈[b_,b_]^d is such that log (Ψ^ sym(b^∘))= Φ and b_i^∘∈ (b_,b_) for some i ∈ [d].Since both Φ^ vx and Φ^ e,sym are affine in each b_j, j ∈ [d], and b^∘ is a maximizer, the function must be constant as we vary the i-th coordinate of b^∘.This, in particular implies that if we replace b_i^∘ with either b_ or b_ then the new vector continues be a maximizer of logΨ^ sym(·). In particular, if log (Ψ^ sym(b'))= Φ for b' with at least two coordinates i j ∈ [d] such that b'_i,b'_j ∈ (b_, b_), then by the preceding argument there existsb”∈{b_, b_}^d, still a maximizer, with the property 2 ≤ |{i: b”_i =b_}| ≤ d-2.However, the proof of <cit.> shows that such a b” can never be a maximizer, yielding a contradiction. Combining Lemma <ref>(c) with Lemma <ref>, we proceed to show thatif a local weak limit point of RCM-sputs a positive mass on _⋆∖_ m then it must also put a positive mass on those φ∈_⋆ for whichlogΨ^ sym(_∂ o^t,r) is strictly smaller than Φ,for large t < r, with non-negligible probability.For >0 and t < r ∈ set _^t,r:= { logΨ^sym(_∂o^t,r) < Φ-}.If _(_⋆∖_ m) >0 for a local weak limit point _of {φ_n^,B},then there exist ξ >0,t' ∈, and r'(t) < ∞,such that ∫φ (_ξ^t,r) d _≥ξ for any t ≥ t' and r ≥ r'(t). With _ supported on the convex set _⋆ (see Lemma <ref>(d)), also φ̅(·):= ∫φ (·) d_∈_⋆.Further, for any non-negative F ∈ C_b(^d) and φ∈_⋆,it follows from (<ref>) thatlim_t →∞lim_r →∞φ(F(_∂ o^t,r)) = φ(F(s_∂ o(φ))),hence by definition of φ̅ and Fatou's lemma,φ̅(F(s_∂o(φ̅))) = lim_t →∞lim_r →∞ φ̅(F(_∂o^t,r))=lim_t →∞ lim_r →∞ ∫φ(F(_∂o^t,r)) d _≥∫φ(F(s_∂o(φ)) d _. Considering F( s) := ∑_i=1^d f_(s_i) for continuous f_such that 0 ≤ f_(·) ↑ 1_(b_, b_)(·) as ↓ 0,we deduce from (<ref>) by monotone convergence and Fatou's lemma,thatin view of (<ref>),∑_u ∈∂o φ̅(s_u →o(φ̅) {b_,b_})≥ ∫∑_u ∈∂o φ(s_u →o(φ) {b_,b_} ) d _ ≥∫φ( (_∪_)^c ) d _. Recall Lemma <ref>(c) thatφ( (_∪_)^c )>0 whenever φ∈_⋆∖_ m. Hence,our assumption that _(_⋆∖_ m)>0 implies the strict positivity of therhs of (<ref>) and thereby the same holds for the LHS.Bydefinition of the local weak convergence,the law _ must be invariant to ourchoice of the root of _d.Hence,with _d a regular tree,necessarilyφ̅ is invariant under any automorphism of _d.In particular, fixing u_1 ∈∂ o and settingH_:={min_∈{, }|s_u_1 → o(φ̅) - b_| > },we deduce that φ̅(H_0)>0.This in turn allows us to fix >0 small enough so that φ̅(H_) ≥. Next,fix ℓ:=⌈ 3/⌉ and a non-random path o=:v_1,v_2, …,v_ℓ+1 in _d such that v_i+1∈∂_d(i).Then,fixing for each 1 ≤ i ≤ℓ,some w_i∈∂ v_i,w_i ≠ v_i+1 (which as d ≥ 3,is always possible),we have by the automorphism invariance of φ̅ that φ̅(A_i)=φ̅(H_) ≥≥3/ℓ forA_i:={min_∈{, } |s_w_i→v_i(φ̅)-b_|>}. The smooth map BP(·;B):[0,1]^d-1→ [0,1] of (<ref>)is coordinate-wise strictly increasing, having (b_,…,b_)as its fixed points (see (<ref>) and Lemma <ref>(b)). Thus,for some δ_0=δ_0(, , B,q,d)>0,it follows from (<ref>) and (<ref>) thats_v_j→ v_j-1(φ̅) ∈ (b_+δ_0,b_-δ_0) on the event A_j. 
Iterating this argument,while reducing δ_0 to δ=δ(δ_0,ℓ,, B,q,d) ∈ (0, ),it follows that on each A_j,j ≤ℓ, {s_v_i+1→ v_i (φ̅),1 ≤ i < j }⊂ (b_+δ, b_-δ).So,if A_i∩ A_j occurs for i<j,then s_v_i+1→ v_i(φ̅),s_w_i → v_i(φ̅) ∈ (b_+δ, b_-δ)and in particular s_∂ v_i∈Λ_δ of(<ref>).Namely,any pair from {A_1,…, A_ℓ} results with s_∂ v_i(φ̅) ∈Λ_δ for some i < ℓ.Consequently, by the union bound and the automorphism invariance property of φ̅,ℓφ̅(s_∂ o(φ̅) ∈Λ_δ)≥φ̅( ⋃_i=1^ℓ{ s_∂ v_i(φ̅) ∈Λ_δ}) ≥φ̅(∑_i=1^ℓ I_A_i≥ 2) ≥1/ℓφ̅(∑_i=1^ℓ I_A_i - 2) ≥1/ℓ(where the right-most inequality is due to (<ref>)).In view of Lemma <ref> we deduce from (<ref>) that for some _o>0,φ̅(logΨ^ sym(s_∂ o(φ̅)) < Φ -_o) ≥1/ℓ^2 .With logΨ^ sym∈ C_b([0,1]^d),it follows from (<ref>) thatφ̅-a.e.  logΨ^ sym(_∂ o^t,r) →logΨ^ sym(s_∂ o(φ̅)) as r →∞ and then t →∞. Thus,by the preceding lim_t →∞lim_r →∞φ̅(__o^t,r) ≥1/ℓ^2 ,yielding our claim for ξ=_o ∧ (2 ℓ^2)^-1.To prove Theorem <ref> we keep moving from a finite graphtoa graph ^-(w) without some w ∈ V() and all the edges to it, thereby re-connecting the neighbors x_1, x_2,…,x_d of w to form for π∈_d the graph ^π(w) as^-(w) with the additional edges (x_π(2i-1),x_π(2i)), i=1…,d/2. Key to the proof is thus to estimate the ratios of contributions to the partition functions from RCM-sonversus ^-(w), and on ^π(w) versus ^-(w), when fixing certain bonds. To this end, recall _e^⋆ of(<ref>) and p_e, C(·) of Definition <ref>, where theun-normalized Edwards-Sokal probability massϖ_^⋆^,B(σ, η) := ∏_e ∈E^⋆e^^⋆_e[ (1-p_e)(1-η_e) + p_e η_e δ_e(σ) ] ·δ_σ_v^⋆,1,σ∈[q]^V^⋆, η∈{0,1}^E^⋆ (see Definition <ref>), has total mass matching the Potts partition function of , as in (<ref>).Namely, ∑_σ, ηϖ_^⋆^,B(σ, η) = Z_(,B).More generally, restricting (<ref>) to bond valuesy_W^⋆∈{0,1}^E(W^⋆) on the edges of W ⊂ (or, to only y_W∈{0,1}^E(W) on non-ghosted edges),yields the rcm-restricted partition functions_, W^⋆( y_W^⋆):= 1/q∑_η: η_W^⋆=y_W^⋆q^| C(η)|∏_e ∈ E^⋆ e^^⋆_e p_e^η_e(1-p_e)^1-η_e= ∑_η: η_W^⋆ = y_W^⋆∑_σϖ_^⋆^,B(σ, η), _, W( y_W) := 1/q∑_η: η_W=y_Wq^| C(η)|∏_e ∈ E^⋆ e^^⋆_e p_e^η_e(1-p_e)^1-η_e= ∑_η: η_W = y_W∑_σϖ_^⋆^,B(σ, η). For any v ∈ V() with ∂ v = (u_i)_i=1^d, π∈𝔖_d and y ∈{0,1}^_v(r)∖_v(t), t<r, we setthe functions Ψ^ vx_t,r,(v,y):= φ^,B_^-(v)[ Ψ^ vx(^t,r_∂ v) |η_v,t^r =y], Ψ^ e,sym_t,r,(v,y):= 1/d!∑_π∈𝔖_dΨ^ e, π_t,r,(v,y),Ψ^ e, π_t,r,(v,y):=φ^, B_^-(v)[Ψ^ e(^t,r_u_π(1)→ v,…,^t,r_u_π(d)→ v) |η_v,t^r =y].Utilizing Lemma <ref> and (<ref>)-(<ref>), we proceed to estimate the ratios of various rcm-restricted partition functions, in terms of these functions. Fix B >0 and ≥ 0. For some r_o'=r_o'(, t), if o∈ V() and _o(r)≅_d(r), r ≥ r_o', thenfor any _o(r) ⊂⊂, we get upon settingW:=∖_o(t) and W_r:=_o(r) ∖_o(t), that max_ y_ W{| _, W( y_ W)/_^-(o), W( y_ W) - Ψ^ vx_t,r,(o,y_W_r)| } ≤ ,max_π∈𝔖_dmax_ y_ W{| _^π(o), W( y_ W)/_^-(o), W( y_ W) -Ψ^ e, π_t,r,(o,y_W_r) | } ≤.In view of (<ref>), the bounds (<ref>)-(<ref>) actually hold also for any subgraph W of ∖_o(t) which contains _o(r) ∖_o(t).With ∩_o(r) = ∩_o(r) ≅_d(r)and W_r =W ∩_o(r), we first examine the specialcase == _o(r). 
Specifically, using ^-_o(r):=^-(o) and^π_o(r):=^π(o) when =_o(r), we show that for anyy ∈_r^⋆∖_t^⋆ and π∈𝔖_d, _r( y) :=__o(r), W_r^⋆( y)_^-_o(r), W_r^⋆( y) = Ψ^ vx(_∂ o^t,r( y)),^π_r( y) :=_^π_o(r), W_r^⋆( y)_^-_o(r), W_r^⋆( y) = Ψ^ e(^t,r_u_π(1)→ o( y),…,^t,r_u_π(d)→ o( y)).Indeed, it followsfrom (<ref>) that _^-_o(r), W_r^⋆( y) = ∑_{η' : η_o,t^⋆,r =y}∑_σ'ϖ_^-_o(r)^⋆^,B(σ', η').Further, for any e ∈ E^⋆ and σ∈ [q]^V^⋆,∑_η_e ∈{0,1} e^^⋆_e[ (1-p_e)(1-η_e) + p_e η_eδ_e(σ) ] = 1 + (e^^⋆_e -1 ) δ_e(σ) .Thus, with η__o^⋆(r) = (η^(1), η') for η^(1) :=(η_e)_e ∈_o(1) ∪{(o,v^⋆)}, we similarly get from (<ref>)-(<ref>) that __o(r), W_r^⋆( y)= ∑_{η' : η'_W_r^⋆=y}∑_σ'∑_σ_o,η^(1)ϖ_^⋆_o(r)^,B((σ_o,σ'), (η^(1),η'))= ∑_{η' : η_o,t^⋆,r =y}∑_σ'f(σ'_∂ o) ϖ_^-_o(r)^⋆^,B(σ', η'),for some function f(·), where by enumerating over the [q]-valued σ_o, one verifies thatf(σ_1,…,σ_d)=∑_k=1^q e^Bδ_k,1∏_i=1^d(1+(e^-1) δ_σ_i,k). Consequently,_r(y) =ϖ_^-_o(r)^⋆^,B[f(σ_∂o)| η_o,t^⋆,r= y ]. By the same reasoning, now with η^(1) := η__o^π(1) ∖_o^-(1), we find that, for any π∈𝔖_d, ^π_r( y)=ϖ_^-_o(r)^⋆^,B[f^π(σ_∂ o)| η_o,t^⋆,r =y ],f^π(σ_1,…,σ_d) :=∏_i=1^d/2(1+(e^-1) δ_σ_π(2i-1),σ_π(2i)).Since _o(r) ≅_d(r),the spins (σ_u, u ∈∂ o) are mutually independent under ϖ_^-_o(r)^⋆^,B(·|η_o,t^⋆,r), witheach of them uniformly distributed on {2,…,q}. Thus, upon setting for i=1,…,d,b_i := 1/1-1/qϖ_^-_o(r)^⋆^,B(σ_u_i=1| η_o,t^⋆,r=y) - 1/q/1-1/q ,a straightforward computation shows that the rhs of (<ref>)and (<ref>) are given by Ψ^ vx ( b) andΨ^ e(b_π(1),…,b_π(d)), respectively. Finally, the identification b = ^t,r_∂ o follows from Lemma <ref>, analogously to the derivation of (<ref>).Armed with (<ref>), we proceed to prove (<ref>).To this end, writing y =y_W^⋆ and y^o =y_W, we get by following the derivation of (<ref>),that for anywith |∂ o|=d and W_r ⊂ W ⊂∖_o(t),_,W^⋆( y)_^-(o),W^⋆( y)=ϖ_^- (o)^⋆^,B[f(σ_∂ o)| η_W^⋆=y ]and_,W( y^o)_^-(o),W( y^o) =ϖ_^- (o)^⋆^,B[f(σ_∂ o)| η_W=y^o].Consequently, _,W( y^o)/_^-(o),W( y^o)=∑_ y ∖ y^o_, W^⋆( y)_^-(o), W^⋆( y)φ^,B_^-(o)(η_ W^⋆= y|η_W= y^o).Further, from (<ref>)-(<ref>) we have that for some c̅ = c̅ (q,,B,d) >0, _^-(o),W^⋆( y)/_, W^⋆( y) = 1/qφ^,B_(η^(1)≡ 0|η_W^⋆= y) ≥c̅ .Comparing (<ref>) with (<ref>) for (_o(r),W_r), we deduce that_,W^⋆(y)/_^-(o),W^⋆ (y) = φ^,B__o(r)(η^(1)≡0|η_o,t^⋆,r=y_W_r^⋆)/φ^,B_(η^(1)≡0|η_W^⋆=y) Ψ^vx(_∂o^t,r (y_W_r^⋆)), which, in view of (<ref>) equals toΨ^ vx(_∂ o^t,r ( y_W_r^⋆)) whenever y_W_r^⋆∈_t,r. Utilizing this observation when plugging (<ref>) into (<ref>), thenusing the uniform lower bound of (<ref>) and the triangle inequality, we find that for r ≥ r_0(c̅/6,t) of Lemma <ref>,| _,W( y_W)/_^-(o),W( y_W)-φ^,B_^-(o) (Ψ^ vx(_∂ o^t,r)|η_W= y_W) | ≤ (2/c̅)φ^,B_^-(o)(_t,r^c |η_W= y_W) ≤3 ,where in the last inequality we have employed (<ref>) for φ^, B_^-(o). Analogously to the derivation of (<ref>), except for now using (<ref>), we get for any W_r ⊂ W ⊂W, _^-(o),W( y_W)/_,W( y_W) = 1/e^B +q-1φ^,B_(η__o(1)≡ 0|η_W= y_W) .Hence, by (<ref>), if ⊃ matcheson their respective r balls around o and r ≥ r_0(/9,t), thenmax_y_W |_,W_r(y_W_r)/_^-(o),W_r(y_W_r)_^-(o),W(y_W)/_,W(y_W) -1 |≤/3. We combine (<ref>) with(<ref>) at (,W_r), and recall (<ref>)that Ψ^ vx_t,r,(o,·) =φ^,B_^-(o) (Ψ^ vx(_∂ o^t,r)| η_W_r), to arrive at (<ref>).Upon changing f to f^π and η^(1) to η__o^π(1) ∖_o^-(1) (as in the derivation of (<ref>)), the proof of (<ref>) out of (<ref>) is the same, hence omitted. 
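For the reader's convenience we record the `straightforward computation' used in the proof above to identify the conditional expectations of f and f^π with Ψ^vx(b) and Ψ^e(b_π(1),…,b_π(d)). Given the bond configuration, the spins (σ_{u_i})_{i ≤ d} are mutually independent and their conditional laws are invariant under permutations of the colors {2,…,q}, so with b_i as defined above,
\[
\varpi(\sigma_{u_i}=1)=\frac{(q-1)b_i+1}{q},\qquad
\varpi(\sigma_{u_i}=k)=\frac{1-b_i}{q}\quad(k\ge 2).
\]
Since \(1-\gamma=\frac{q}{e^{\beta}+q-1}\), this gives
\[
1+(e^{\beta}-1)\,\varpi(\sigma_{u_i}=1)=(1-\gamma)^{-1}\big(1+(q-1)\gamma b_i\big),
\qquad
1+(e^{\beta}-1)\,\varpi(\sigma_{u_i}=k)=(1-\gamma)^{-1}\big(1-\gamma b_i\big)\ \ (k\ge 2),
\]
and taking the expectation of f(σ_{∂ o}) term by term (one factor per i, one summand per color k) yields Ψ^vx(b). Likewise \(\varpi(\sigma_{u_a}=\sigma_{u_b})=\frac{1+(q-1)b_a b_b}{q}\), hence
\[
1+(e^{\beta}-1)\,\varpi(\sigma_{u_a}=\sigma_{u_b})=(1-\gamma)^{-1}\big(1+(q-1)\gamma\, b_a b_b\big),
\]
and multiplying over the d/2 pairs (u_{π(2i-1)},u_{π(2i)}) yields Ψ^e(b_{π(1)},…,b_{π(d)}).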
Hereafter we fix the sequence {_n}, using for v ∈ [n] the shorthand φ_n^v:= φ__n^-(v)^, B and defineΨ^ sym_t,r,n(v,y):= Ψ^ vx_t,r,n(v,y)/Ψ^ e,sym_t,r,n(v,y),for Ψ^ vx_t,r,n := Ψ^ vx_t,r,_n andΨ^ e,sym_t,r,n := Ψ^ e, sym_t,r,_n of (<ref>). For any t < r, v ∈ [n] and >0, we further define, analogously to (<ref>),the bond events _,v,n^t,r := { y : logΨ_t,r,n^sym(v,y__v(r)∖_v(t))< Φ-}.Aiming to later utilize Lemma <ref>,we first adapt Lemma <ref> to lower bound the average over v of φ^,B__n (_,v,n^t,r), when φ^,B_n converges locally weakly to _. Out of this we produce vertex subsets _n, having well-spaced, tree-likeneighborhoods in _n, at which with non-negligible probability a positive fraction of the events (<ref>) hold. Suppose _(_⋆∖_ m) >0. The following then holds for some δ_o >0, t', large r'(t)<∞, and small enough ζ(δ_o,r)>0.(a). If _n _d and {φ_n:=φ__n^,B}converges locally weakly to _, then for t ≥ t' and r ≥ r'(t), lim inf_n →∞1/n∑_v=1^n φ_n (_δ_o,v,n^t,r) ≥ 4δ_o.(b). For any n ≥ n'(ζ), there exist _n ⊂ [n] of size |_n|=ζ n, such that inf_v v' ∈_n{ dist__n (v,v') } > 3r,_v(3r) ≅_d(3r), ∀ v ∈_n,φ_n (1|_n|∑_v ∈_n 1_^t,r_δ_o,v,n ≥δ_o )≥δ_o.(a). Note that for some finite C=C(q,d,,B), uniformly over nand v ∈ [n] with |∂ v| ≤ d,C^-1 φ_n(η__n^-(v)) ≤φ_n^v(η__n^-(v)) ≤C φ_n(η) ∀η∈{0,1}^E_n^⋆. Then, from _n _d and Lemma <ref>, we have that for any ≤ξ/(2C), t ≥ t' and r ≥ r'(t),lim inf_n →∞ 1/n ∑_v=1^n 1_{_v(r) ≅_d(r)}( φ_n^v ( _2C ,v,n^t,r ) - )≥C^-1 ∫φ(_2 C ^t,r) d _ - ≥ , where _,v,n^t,r := {logΨ^ sym(_∂ v^t,r) < Φ -}. Further, for some c=c(q,d,,B)>0, Ψ^vx(b),Ψ^e, sym(b) ∈[1,c^-1],∀b ∈[0,1]^d,and we now set for any v ∈ [n], Y_v,n(y) := φ_n^v [Ψ^e, sym(_∂v^t,r) 1_^t,r_2 C ,v,n |η_v,t^r=y ] -c Ψ_t,r,n^e, sym(v,y). Recall from (<ref>) that_∂ v^t,r∈ [b_-^4, b_+^4]^d whenever t > κ_o(^4) and _v(r) ≅_d(r). For such v we have by (<ref>) and the Lipschitz continuity oflogΨ^ sym(·), that for some '>0 and all ≤', sup_r> t > κ_o(^4) sup_y' ∈_r^⋆∖_t^⋆{ logΨ^sym( _∂v^t,r(y') ) } ≤Φ+ ^3. In view of (<ref>), we have for any t>κ_o(^4) that if _v(r) ≅_d(r) and Y_v,n( y) ≥ 0, thenΨ^ vx_t,r,n(v, y)= φ_n^v[Ψ^ vx(_∂ v^t,r) 1_^t,r_2C,v,n | η_v,t^r =y ] +φ_n^v[Ψ^ vx(_∂ v^t,r) 1_(^t,r_2 C,v,n)^c|η_v,t^r =y ] < e^Φ -2 Cφ_n^v[Ψ^ e, sym(_∂ v^t,r) 1_^t,r_2 C,v,n|η_v,t^r =y ]+ e^Φ +^3φ_n^v[Ψ^ e, sym(_∂ v^t,r) 1_(^t,r_2 C,v,n)^c|η_v,t^r =y ]= e^Φ -2 C( Ψ^ e,sym_t,r,n(v, y) + (e^2 C+^3-1)φ_n^v[Ψ^ e, sym(_∂ v^t,r) 1_(^t,r_2C,v,n)^c|η_v,t^r =y] ) ≤ e^Φ -2 C(1+ (e^2 C+^3-1)(1-c)) Ψ^ e,sym_t,r,n (v, y) ≤ e^Φ -c^2Ψ^ e,sym_t,r,n(v, y),where the first inequality uses the definitionof ^t,r_2 C,v,n and the uniform bound (<ref>), the second one holds for Y_v,n( y) ≥ 0 (see (<ref>)), and the last one applies when ∈ (0,”).That is, in view of (<ref>), for any ≤' ∧” if r>t>κ_o(^4) and δ≤ c ^2, then{ _v(r) ≅_d(r)andY_v,n(y) ≥0 } ⟹{ logΨ_t,r,n^sym(v,y) < Φ-δ}. As Y_v,n≤ 1/c, clearly φ_n^v(Y_v,n≥ 0) ≥ c φ_n^v [Y_v,n].Further, by (<ref>), (<ref>) and (<ref>),φ_n^v[Y_v,n(η_v,t^r)] =φ_n^v[Ψ^ e,sym(_∂ v^t,r)1_^t,r_2C,v,n]- c φ_n^v[ Ψ^ e,sym(_∂ v^t,r)] ≥φ_n^v(^t,r_2C,v,n) -.Combining (<ref>), (<ref>) and (<ref>),we find thatφ_n (_δ,v,n^t,r) ≥C^-1 1_{_v(r) ≅_d(r)}φ_n^v(Y_v,n (η_v,t^r) ≥ 0) ≥c/C 1_{_v(r) ≅_d(r)}( φ_n^v( _2C,v,n^t,r ) - ).The preceding inequality, together with (<ref>), completes our proof(with δ_o := δ∧ c /(4C)). (b). We establish the existence of _n via a probabilistic construction based on the auxiliary iid, uniform over [n], samples {w_i, i ≤ 2 ζ n}. 
Specifically, set :=_0∖ where_0:={i: _w_i(3r)≅_d(3r)}, := {i ∈_0: ∃ jisuch thatdist__n (w_i, w_j) ≤ 3 r} .Observe that n^-1|_0| → 2ζ since _n_d. Moreover,|| ≤∑_ij(w_j ∈_w_i(3r) ≅_d(3r) ) ≤2 ζ |_d(3r)||_0 |.Thus, || ≥ (1- √(ζ))|_0| for any ζ≤ζ_o(r), in which caselim inf_n →∞ (|| ≥2ζn (1-3 √(ζ)) )≥lim inf_n →∞ ||2ζn - (1 - 3 √(ζ))≥2 √(ζ).With q_n(v):=φ_n(_δ_o,v,n^t,r), recall from part (a) thatthe iid [0,1]-valued {q_n(w_i)} satisfy lim inf_n →∞ [q_n(w_i)] =lim inf_n →∞1/n∑_v=1^n q_n(v) ≥ 4 δ_o.Hence, by Chebychev's inequalitylim_n →∞ ( 1/2ζn ∑_i=1^2 ζn q_n(w_i) ≥3 δ_o ) = 1and in view of (<ref>), at any n ≥ n'(ζ) we have with probability√(ζ) that || ≥ 2ζ n (1-3√(ζ)) and the event in (<ref>) holds. For 3 √(ζ)≤δ_o ≤1/2, it then follows that1/||∑_i ∈ q_n(w_i) ≥1/2ζ n∑_i ∈ q_n(w_i) ≥1/2ζ n∑_i=1^2 ζ n q_n(w_i) - 3 √(ζ)≥ 3 δ_o -3 √(ζ)≥ 2 δ_o.Take for _n the [ζ n] elements of largestq_n(·) in {w_i : i ∈}. Then, (<ref>) holds and moreover1/|_n|∑_v ∈_nφ_n(_δ_o,v,n^t,r) ≥ 2 δ_o,from which (<ref>) immediately follows. Fix (,B) ∈ R_c, B>0, d ∈ 2, d ≥ 3 and _n _d, further assuming wlog thatφ_n=φ__n^, B converges locally weakly to _.In view of <cit.>, postulating further that_(_⋆∖_ m)>0, it suffices to produce'_n _d such that _n Φ'_n(,B) > Φ.To this end, for δ_o>0 of Lemma <ref> and c>0 from (<ref>), set_⋆:=- 1/3 log( 1- (1-e^-δ_o) δ_o c^2/3 ). Similarly to the derivation of (<ref>), with logΨ^ vx(·) and logΨ^ e, sym(·) Lipschitz on [0,1]^d,taking κ_o=κ_o(_⋆^2) as in Remark <ref>(and shrinking δ_o as needed for _⋆ to be small enough), guarantees by (<ref>), (<ref>), and (<ref>) that for r>t>κ_o and any finite graph _n,sup_y ∈_r ∖_t logΨ_t,r,n^sym(v, y) ≤Φ+ _⋆provided v ∈ V(_n) is such that _v(r) ≅_d(r). Hereafter we further take r>r'(t) and t>t' as needed inLemma <ref>, possiblyincreasing r'(t) till Lemma <ref> holds with =_⋆/4. Then, for _n of Lemma <ref>(b), set the disjoint unionW_n :=∪_v∈_n (_v(r)∖_v(t)) and for _v=_δ_o,v,n^t,r of (<ref>) let𝒴_n :={y ∈{0,1}^E(W_n): 1/|_n|∑_v∈_n 1__v≥δ_o}. Our graph decomposition startsat _n^(0)=_n anditeratively has _n^(ℓ) :=_n^(ℓ-1),π_ℓ(v_ℓ) forv_ℓ:= _v ∈_n ∖{v_1, …, v_ℓ-1}{∑_ y ∈𝒴_n__n^(ℓ-1),W_n( y)1__v( y) },π_ℓ:= _π∈_d{∑_ y ∈𝒴_n__n^(ℓ-1),π(v_ℓ),W_n( y) } .We will show in the sequel that for any ℓ≤ℓ_n := δ_o/3ζ n,Δ_ℓ:= log( ∑_y ∈𝒴_n __n^(ℓ-1), W_n(y) ) - log( ∑_y ∈𝒴_n __n^(ℓ), W_n(y) ) ≤Φ- _⋆. We claim that _n' :=_n^(ℓ_n) then have too large free densities Φ'_n(,B).Indeed, by (<ref>)-(<ref>), and the fact that the Potts measure is the spin marginal of the Edwards-Sokal measure, we have thatZ__n(,B) = ∑_ y__n, W_n( y).Thus, by the lhs of (<ref>) and (<ref>),log Z_'_n(,B) ≥log(∑_ y ∈_n_^(ℓ_n)_n,W_n( y)) = log(∑_ y ∈_n__n,W_n( y)) - ∑_ℓ=1^ℓ_nΔ_ℓ≥log Z__n(,B) +logδ_o -(Φ- _⋆)ℓ_n .With _n _d we have from <cit.> that Φ_n(,B) →Φ, so by the preceding, lim inf_n →∞ 1/nlogZ_'_n(,B) ≥Φ(1- δ_o/3 ζ) + δ_o/3 ζ_⋆. Note that |V('_n)|=n-ℓ_n=(1-δ_o/3ζ) n and the degree in '_n of anyw ∈ V('_n) matches its degree in _n. Moreover,the sets _v_ℓ(1) on which '_n and _n differ,are 2r-separated on '_n and if (u,u') ∈ E('_n) withinsuch a changed set _v(1), then dist__n(u,u') ≤ 2. Consequently, any cycle of length k in _n' must have been obtainedfrom a cycle of length at most k/(1+(2r)^-1) in _n. The last two observations together imply that _n' _d as n →∞.Hence, by <cit.> lim sup_n →∞1/nlog Z__n' (,B)≤Φ (1- δ_o/3ζ),in contradiction to (<ref>).We thus complete the proof of Theorem <ref> upon establishing (<ref>). 
To this end, we shall apply Lemma <ref> with =_n, =^(ℓ-1)_n, and o=v_ℓ∈_n (so _o(r) ≅_d(r) in view of (<ref>)), where as inf_i<ℓ dist__n(v_ℓ,v_i) > 3r (see (<ref>)),indeed ^(ℓ-1)_n coincides with _n on its r-ball around v_ℓ. Further,Remark <ref> allows us to do so for W=W_n (which byconstruction contains the relevant annuli _v_ℓ(r) ∖_v_ℓ(t) whileexcluding _v_ℓ(t)). Specifically, using the shorthand_ℓ( y):=__n^(ℓ),W_n( y), ^-_ℓ( y):=__n^(ℓ-1),-(v_ℓ),W_n( y), ^π_ℓ( y):=__n^(ℓ-1),π(v_ℓ),W_n( y),andΓ_ℓ := ∑_ y∈𝒴_nΨ^ e,sym_t,r,n(v_ℓ,y) _ℓ^-( y), we have by Lemma <ref> and our choice of r that,Δ_ℓ = log( ∑_ y ∈𝒴_n_ℓ-1( y)/_ℓ^-( y)_ℓ^-( y) ) - log( max_π∑_ y∈𝒴_n_ℓ^π( y) /_ℓ^-( y)_ℓ^-( y) )≤_⋆ +log( ∑_ y∈𝒴_nΨ^ vx_t,r,n(v_ℓ,y)_ℓ^-( y) ) -log( max_π∑_ y∈𝒴_nΨ^ e,π_t,r,n(v_ℓ,y) _ℓ^-( y) )≤_⋆ +log( ∑_ y∈𝒴_nΨ^ vx_t,r,n(v_ℓ,y) _ℓ^-( y) ) -logΓ_ℓ . To simplify the RHS of (<ref>), note that by our choice of v_ℓ,∑_ y∈𝒴_n_ℓ-1( y)1__v_ℓ( y)≥1|_n|∑_v ∈_n ∖{v_1,…, v_ℓ-1}∑_ y∈𝒴_n_ℓ-1( y)1__v( y)≥1|_n|∑_ y∈𝒴_n_ℓ-1( y) (∑_v ∈_n1__v( y) - (ℓ-1)) ≥2δ_o3∑_ y∈𝒴_n_ℓ-1( y),with the right-most inequality due to (<ref>) and the size of _n. Moreover, by Lemma <ref> and (<ref>) ∑_ y∈𝒴_nΨ^ e,sym_t,r,n(v_ℓ,y) _ℓ^-( y)1__v_ℓ( y) ≥3c/4∑_ y∈𝒴_n_ℓ-1( y) 1__v_ℓ( y)≥δ_o c/2∑_ y∈𝒴_n_ℓ-1( y) ≥ δ_o c^2/3Γ_ℓ where in the penultimate step we have used (<ref>). Substituting the latter inequality in (<ref>), upon recalling the definitions of _v_ℓ and Ψ^ sym_t,r,n(·,y), and using (<ref>) we gete^Δ_ℓ -_⋆ ≤1Γ_ℓ∑_ y∈𝒴_nexp(Φ+_⋆ - δ_o 1__v_ℓ( y) )Ψ^ e, sym_t,r,n(v_ℓ,y) _ℓ^-( y)≤ e^Φ +_⋆( 1-1-e^-δ_o/Γ_ℓ∑_ y∈𝒴_nΨ^ e, sym_t,r,n(v_ℓ,y) _ℓ^-( y)1__v_ℓ( y) )≤ e^Φ +_⋆( 1-(1-e^-δ_o) δ_o c^2/3)= e^Φ -2_⋆(with the last equality by our choice (<ref>) of _⋆). This is (<ref>), which thus completes our proof. § INFINITE VOLUME POTTS MEASURES AND BETHE FIXED POINTSWe prove here various properties of the phase diagram associated to thePotts measures on _d. To this end, for any (i,j) ∈ E(_d) let _i → j be the tree rooted at i obtained upon deleting (i,j) from E(_d) and with_i → j(t) denoting for t ∈ the ball _i(t) in _i → j, we first prove Lemma <ref>. Summing in (<ref>) over σ_j results with (<ref>). The proof of (<ref>) in case q=2 is well known (e.g. see <cit.>),and the extension for q ≥ 3 (which we now give), follows a similar scheme.With μ^,B_ a translation invariant measure, it suffices to consider i=o and j=1 a specific neighbor of the root o of _d. Fixing t ∈,as there are no cycles in _d(t),we have under μ_, t^, B ofDefinition <ref>, that for someν_t^1,ν_t^o ∈([q]) μ_, t^, B (σ_o,σ_1) ∝ e^βδ_σ_o,σ_1ν^1_t(σ_1) ν^o_t(σ_o) , ∀σ_o,σ_1 ∈ [q].Further, both T_o1 and T_1o are infinite (d-1)-ary trees (i.e. each vertex has (d-1) children). Hence, by induction we deduce from Definitions <ref>and <ref>, that ν_t^o and ν_t^1 are the probability measures obtained by running the BP recursion t and (t-1) times, respectively, starting fromν_ (i.e. starting at the uniform measure if = and at Dirac at 1when =1). By definition the limit of these recursions is ν_^, B and (<ref>) follows. By a similar reasoning one finds also that for any , B ≥ 0 and ∈{, 1},all finite dimensional marginals of μ_, t^, B converge as t →∞. Furthermore, it can be checked that these marginals are consistent, thus implying the existence of μ_^, B for ∈{, 1}. 
The existence of μ_i^, 0 for all i ∈ [q]is proved similarly (now starting the BP recursion at Dirac at i).Next we prove that ν_ and ν_1, when viewed as functions ofand B, are continuously differentiable except on ∂ R_≠^+ and ∂ R_≠^, respectively. (i) Since ν_1^, B is a fixed point of the BP recursion of (<ref>), starting from the probability measure on [q] that is Dirac at 1, necessarily ν_1^,B(σ)=ν_1^,B(σ') for all σ, σ' ∈ [q]∖{1} and as ν_1^, B∈([q]),it suffices to show that (,B) ↦ν_1^,B(1) is continuously differentiable at any fixed (_0,B_0) ∉∂ R_^. Further, after some algebra we deduce from(<ref>) thatr_1(,B):=logν_1^,B(1)/ν_1^,B(2) =log(1-q)ν_1^,B(1)/1-ν_1^,B(1) ,satisfies the fixed point equation F(r; , B)=r, whereF(r; , B):= B+ (d-1) log( e^+r+q-1e^r +e^ +q-2).Using this representation it suffices to show that (, B) ↦ r_1(,B) is continuously differentiable, and this follows by an application of the implicit function theorem,once we have verified that∂∂r F(r; , B) 1,for(r, , B) = (r_1(_0,B_0), _0,B_0). To obtain (<ref>) we borrow results from <cit.>. From the proof of <cit.> it follows that for d ≥ 3 there is some _- ∈ (0, ∞) such that for < _- there does not exist any solution to the equation ∂/∂r F(r; , B) = ∂/∂r F(r; , 0)=1.In contrast, for ≥_- there exist solutions ρ_-(β) ≤ρ_+() of the equation (<ref>), with equality if and only if =_-. Thus, to complete the proof of (<ref>) we need to show that if (_0, B_0) ∈ R_≠∖∂ R_≠^ (and therefore _0 > _-, see <cit.>) then r_1(_0,B_0) ρ_±(β_0).If possible, let us assume that r_1(_0,B_0) = ρ_+(_0). DenotingB_±():= ρ_∓() - F(ρ_∓(); , 0), as r_1(,B) is a fixed point of the equation (<ref>), we note that B_0=B_-(_0). From the proof of <cit.> we have that the map B ↦_(B) is the inverse of the map ↦ B_-(). So, _0=_(B_0) implying that (_0, B_0) ∈∂ R_^. As (_0, B_0) ∉∂ R^_≠ we arrive at a contradiction. To rule out the other possibility that r_1(_0,B_0) = ρ_-(_0) we again proceed by contradiction. As_0 >_- we have that ρ_-(_0) < ρ_+(_0).Now noteB_+() - B_-() = ∫_ρ_-()^ρ_+()[∂∂ r F(r;, 0) -1] dr.It can be checked that ∂^2/∂ r^2F(r; ,0) is negative for sufficiently large r and has a single change of sign. As ρ_±()are the solutions of (<ref>) one therefore have that ∂∂ r F(r;,0) > 1 for r ∈ (ρ_-(), ρ_+()). Hence, it follows from above that B_-(_0) < B_+(_0).ThusF(ρ_+(_0); _0, B_0) = ρ_+(_0) + B_0 - B_-(_0) = ρ_+(_0) + B_+(_0) - B_-(_0) > ρ_+(_0), where in the penultimate step we have used the fact that the assumption r_1(_0,B_0)=ρ_-(_0) implies that B_0=B_+(_0).Noting that lim_r →∞ F(r; _0 ,B_0) <∞, (<ref>) implies that there exists some r_⋆∈ (ρ_+(_0),∞) such that r_⋆ = F(r_⋆; _0,B_0). However, this contradicts the fact that r_1(_0,B_0) is the largest root of that fixed point equation (<ref>). (ii) As in part (i), it now suffices to show that (,B) ↦ r_(,B) is continuously differentiable for (, B) ∈ R_≠∖∂ R_^+,where r_(,B) := log(1-q) ν_^,B(1)/1-ν_^,B(1)for ν_^,B(·) of Definition <ref>. To this end, fixing (_0,B_0) ∈ R_≠∖∂ R_^+ we only need to show that (<ref>) fails at (r,,B)=(r_(_0,B_0),_0,B_0) which analogously to part (i) be a direct consequence of having that r_(_0,B_0) ρ_±(_0).Proceeding to do this task, the same argument as in part (i) shows thatr_(_0,B_0) ρ_-(_0), but a slightly different argument is neededfor ruling out the other choice. Specifically, we claim that r_(_0,B_0)=ρ_+(_0) would implyρ_-(_0) >0. To see this, from the definition of B_±() we find that B_0=B_-(_0). 
From the proof of <cit.> we also have that the map B ↦_(B) is the inverse of the map ↦ B_-(). Moreover, the assumption (_0,B_0) ∈ R_∖∂ R_^+ implies that B_0 ∈ [0,B_+). Therefore, _0=_(B_0) < _+(B_0) ≤_+(0) =:_+, where the first inequality follows from the fact that _(B) < _+(B) for B ∈ [0,B_+) and the second inequality follows from the fact that the map _+(·) is decreasing in B. Using <cit.> we thus deduce that ρ_-(_0) >0 wheneverr_(_0,B_0)=ρ_+(_0). Now, similarly to part (i),if both ρ_-(_0) >0 and r_(_0,B_0) = ρ_+(_0), thenthere exists a solution r_⋆∈ [0, r_(_0,B_0)) of (<ref>). From Definition <ref> we know that r_(_0,B_0) is the smallestnonnegative solution of (<ref>), so having arrived at a contradiction,the proof of the lemma is thus complete.From <cit.> we have the existence of the smooth curves _(B) and _+(B) with the desired properties. It also follows from there that for (, B) ∈ [0,∞)^2 ∖ R_≠ one has ν_^, B = ν_1^, B. This together with Lemma <ref> implies that all two dimensional marginals of μ_^, B and μ_1^, B coincide. Further, as described in Remark <ref>, any finite dimensional marginal of μ_^, B then coincides with that of μ_1^, B, thereby yielding our claim (<ref>) from which (<ref>) also follows (since at B=0 the Potts measures are invariant with respect to permutations on [q]). Since r_^, B < r_1^, B for (, B) ∈ R_ our claim that ν_^, B(1) < ν_1^, B(1) follows from the fact that map r ↦ e^r/(e^r+q-1) is strictly increasing on . Now, for the existence of smooth B →_c(B) of the required properties, it suffices to verify that∂∂Φ(ν_^, B) < ∂/∂Φ( ν_1^, B), for all(, B) ∈ Int( R_≠) , Φ(ν_^,B) > Φ(ν_1^,B), for(, B) ∈∂ R_≠^,Φ(ν_^,B) < Φ(ν_1^,B), for(, B) ∈∂ R_≠^+.Indeed, (<ref>)-(<ref>) imply that (<ref>) holds for some B ↦_c(B) such that _(B) < _c(B) < _+(B) for B ∈ [0, B_+),with smoothness of _c(·) due to the implicit function theorem (where ∂∂Φ(ν_^, B) ∂∂Φ(ν_1^, B) throughout Int( R_) thanks to (<ref>)).Turning to prove (<ref>), we set ν̅:= 1-ν/q-1 and note that the mapν↦ν̅(1+ν - ν̅)/ν̅^2 +(q-1)ν^2 is strictly decreasing on [1q,1) (having positive numerator and denominator,whose derivatives are strictly negative and positive, respectively, on the interval (1q, 1)). Using Lemma <ref> and the fact that ν_(i) =ν_(2) for ∈{, 1}and all i ∈ [q]∖{1}, we also have at any i ∈∂ o,μ_^, B(σ_o=σ_i) = e^[ ν_(1)^2 + (q-1) ν_(2)^2]e^[ ν_(1)^2 + (q-1) ν_(2)^2]+ 2(q-1) ν_(1) ν_(2)+ (q-1) (q-2) ν_(2)^2 .With 1/q≤ν_^, B(1) < ν_1^, B(1) at any (, B) ∈ R_≠, combining the last two observations results with∑_i ∈∂ oμ_^, B(σ_o=σ_i) < ∑_i ∈∂ oμ_1^, B(σ_o=σ_i),∀(, B) ∈ R_from which (<ref>) follows upon using (<ref>). Moving next to the proof of (<ref>), for any ∈ [_-, _] let Ψ_-():=Φ^, B_-()(r_1(, B_-()))- Φ^, B_-() (r_(, B_-())) ,with the convention that Φ^, B(r)= Φ^, B(r(ν)):= Φ^, B(ν)for r(ν):= log(ν(1)/ν(2)) and any ν∈([q]) such that ν(2)=⋯=ν(q). Fix any (_0, B_0) ∉∂ R_≠^+. By Lemma <ref>(ii), proceeding as in the steps leading to (<ref>), and using the definition of r(ν) we obtain that∂∂ Φ^_0, B_0(r_(_0, B_0))= h_1(r_(_0, B_0); _0)and∂∂B Φ^_0, B_0(r_(_0, B_0))= h_2(r_(_0, B_0); _0),whereh_1(r;):= d2·e^(e^2r+q-1)e^r(e^+r+q-1)+ (q-1)(e^r+e^ +q-2)andh_2(r;):= e^r(e^+r+q-1)e^r(e^+r+q-1)+ (q-1)(e^r+e^ +q-2).By (<ref>) we see that ρ_+(_0) is a fixed point of (<ref>) for (,B)=(_0, B_-(_0)). Recall that ρ_+() is the largest solution of (<ref>) and since r_1(, B) is the largest solution of (<ref>), we deduce that ρ_+(_0)=r_1(_0, B_-(_0)) for _0 ∈ [_-, _]. 
From (<ref>) it can be further checked that ρ_+() is the log of the largest solution of a quadratic equation with coefficients smooth inthat does not admit a double root for >_-. Therefore, the map ↦ρ_+() is differentiable on (_-, _]. Hence, repeating a computationas in the steps leading to (<ref>), we find that (<ref>) continues to hold when B_0 is replaced by B_-(_0) and r_(_0, B_0) is replaced by r_1(_0, B_-(_0)). Moreover, by (<ref>) we have that ∂_ B_-(_0)=-∂_ F(ρ_+(_0);_0,0) for _0 ∈ (_-, _]. Thus, using the chain rule of differentiation, we find thatΨ_-'(_0)= Δ h_1(_0) - Δ h_2(_0) ·∂∂F(ρ_+(_0); _0, 0) = ∫_r_(_0, B_-(_0))^r_1(_0, B_-(_0))(∂∂ r h_1(r; _0) - ∂∂ r h_2(r, _0) ·∂∂F(ρ_+(_0); _0, 0)) dr,whereΔ h_i(_0):= h_i(r_1(_0, B_-(_0)); _0) - h_i(r_(_0, B_-(_0)); _0),fori=1,2. Next denoting g__0(r):= ∂∂ r h_1(r;_0)∂∂ r h_2(r;_0)= d ·e^2r +(q-2)e^r -(q-1)e^2r + 2(e^_0+q-2)e^r + (q-1)(1+e^-_0(q-2))we find that (g'__0(r))= (2e^_0+q-2)·(Q__0(e^r)), where Q__0(t):=t^2 +2 (q-1)e^-_0 + (q-1)(1+(q-2)e^-_0). Since the roots of Q__0 are either negative or complex conjugates of each other we deduce that g'__0(r) >0 for all r ∈. This entails thatg__0(r) < g__0(r_1(_0, B_-(_0))) = ∂/∂ F(ρ_+(_0); _0, 0)forr ∈[0, r_1(_0, B_-(_0))), where to obtain the right most equality we use the fact that ρ_+(_0)=r_1(_0, B_-(_0)) satisfies (<ref>). So, noticing that ∂∂ rh_2(·;_0) is strictly positive, by (<ref>), we find that the integrand in the RHS of (<ref>) is strictly negative for r∈[r_(_0, B_-(_0), r_1(_0,B_-(_0))), and consequentlythat Ψ_-'(_0) <0 for _0 ∈ (_-, _]. By definition of the non-uniqueness regime, necessarily Ψ_-(_-)=0 and (<ref>) follows.The proof of (<ref>) is similar. Indeed, noting that r_(, B)= ρ_-() for any (, B) ∈∂ R_^+, denoting Ψ_+():=Φ^, B_+()(r_1(, B_+()))- Φ^, B_+() (r_(, B_+())) for any ∈ [_-, _+], and proceeding as in the steps leading to (<ref>) we find that (<ref>) continues to hold for Ψ_+'(_0)at any _0 ∈ (_-,_+], provided we replace there ρ_+(_0) and B_-(_0) by ρ_-(_0) and B_+(_0), respectively. Moreover, a same reasoning as in (<ref>) yields thatg__0(r) > g__0(r_(_0, B_+(_0))) = ∂/∂ F(ρ_-(_0); _0, 0)forr ∈ (r_ (_0, B_+(_0)),∞).Repeating now the rest of the argument in the proof of (<ref>)we obtain (<ref>) and thereby complete the proof of the proposition.99BDA. Basak, A. Dembo. Ferromagnetic Ising measures on large locally tree-like graphs. Ann. Probab., 45(2), 780–823, 2017.BBC22+F. Bencs, M. Borbényi, and P. Csikvári. Random cluster model on regular graphs.Comm. Math. Phys., 399, 203–248, 2023.BBCK00M. Biskup, C. Borgs, C., J. T. Chayes, and R. Kotecký. Gibbs states of graphical representations of the Potts model with external fields.J. Math. Phys., 41(3), 1170–1210, 2000.CO23 A. Coja-Oghlan, A. Galanis, L. A. Goldberg, J. B. Ravelomanana,D. Štefankovič, and E. Vigoda.Metastability of the Potts ferromagnet on random regular graphs. Comm. Math. Phys., 401, 185–225, 2023.DMA. Dembo, A. Montanari. Ising models on locally tree-like graphs. Ann. Appl. Probab., 20(2), 565–592, 2010. DMSSA. Dembo, A. Montanari, A. Sly, and N. Sun. The replica symmetric solution for Potts models on d-regular graphs. Comm. Math. Phys., 327(2), 551–575, 2014. DMSA. Dembo, A. Montanari, and N. Sun. Factor models on locally tree-like graphs. Ann. Probab., 41(6), 4162–4213, 2013.G-RCG. R. Grimmett. The random-cluster model (Vol. 333). Springer Science & Business Media, 2006.HJP22+T. Helmuth, M. Jenssen, and W. 
Perkins.Finite-size scaling, phase coexistence, and algorithms for the random cluster model on random graphs.Ann. Inst. H. Poincaré Probab. Statist., 59(2), 817–848, 2023. liggettT. M. Liggett. Interacting particle systems. Classics in Mathematics. Springer-Verlag, Berlin, 2005. Reprint of the 1985 original.L89R. Lyons. The Ising model and percolation on trees and tree-like graphs.Comm. Math. Phys.,125, 337–353, 1989. L90R. Lyons.Random walks and percolation on trees.Ann.  Probab., 18(3), 931–958, 1990.MM09M. Mézard, A. Montanari. Information, Physics, and Computation.Oxford University Press, 2009.MMSA. Montanari, E. Mossel, and A. Sly. The weak limit of Ising models on locally tree-like graphs. Probab. Th. Rel. Flds., 152, 31–51, 2012. S23C. Shriver. Typical sofic entropy and local limits for free group shift systems.arXiv:2308.08041 preprint, 2023.§ COMMENTS ON THE PROOF OF THEOREM <REF>* Some (standard) definitions are not yet included in the draft. Please let me know if some of the notations are unclear. * I have checked that Lemma <ref> holds when B=0. Needs to check that it continues to hold for B >0. * I have worked out the proof of Lemma <ref> on paper. Needs to be typed up. * Proof of Lemma <ref> follows from the scanned notes. Have verified that there are no gaps in the proof at the moment. * To complete the proof of Theorem <ref> we need to find the conditional distribution of the spin variables given the bond variables. When B=0 this is done. Needs to do it for B >0. I checked it earlier. It seemed okay then. But not entirely confident about this.
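As a small numerical companion to the Bethe recursion reviewed in the appendix on Bethe fixed points, the following minimal Python sketch iterates the log-ratio map F(r;β,B) = B + (d-1)log((e^{β+r}+q-1)/(e^{r}+e^{β}+q-2)) once from r=0 (the uniform start) and once from a large r (near Dirac at colour 1). The parameter values q, d, β, B below are illustrative only and are not taken from the paper.

```python
import math

def F(r, beta, B, d, q):
    # Log-ratio form of the BP/Bethe recursion on the d-regular tree:
    # r = log(nu(1)/nu(2)) for fixed points with nu(2) = ... = nu(q).
    return B + (d - 1) * math.log((math.exp(beta + r) + q - 1)
                                  / (math.exp(r) + math.exp(beta) + q - 2))

def bp_limit(r0, beta, B, d, q, iters=5000):
    r = r0
    for _ in range(iters):
        r = F(r, beta, B, d, q)
    return r

q, d, beta, B = 3, 4, 1.2, 0.05          # illustrative values only
r_low  = bp_limit(0.0,  beta, B, d, q)   # started from the uniform measure (r = 0)
r_high = bp_limit(50.0, beta, B, d, q)   # started from (near) Dirac at colour 1
nu1 = lambda r: math.exp(r) / (math.exp(r) + q - 1)   # nu(1) as a function of r
# If r_low < r_high, the chosen (beta, B) lies in the non-uniqueness regime.
print(r_low, r_high, nu1(r_low), nu1(r_high))
```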
http://arxiv.org/abs/2312.16008v1
{ "authors": [ "Anirban Basak", "Amir Dembo", "Allan Sly" ], "categories": [ "math.PR", "cond-mat.stat-mech", "math-ph", "math.MP", "60K35, 82B20, 82B26" ], "primary_category": "math.PR", "published": "20231226112712", "title": "Potts and random cluster measures on locally regular-tree-like graphs" }
http://arxiv.org/abs/2312.16583v1
{ "authors": [ "Igor Bragar", "Łukasz Cywiński" ], "categories": [ "cond-mat.mes-hall", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20231227141922", "title": "Limitations on the maximal level of entanglement of two singlet-triplet qubits in GaAs quantum dots" }
theoremTheorem[section] lemma[theorem]Lemma corollary[theorem]Corollary example[theorem]Example remark[theorem]Remark proposition[theorem]Proposition obser[theorem]Observation definition[theorem]Definition assumption[theorem]Assumption height5pt width5pt depth0pt Proof. .5cm .5cm δ̣ϵ η̅ #1#1_2 F^' FISHPACK FORTRAN GMRES GMRES(m) K #1#1 w̅ z̅ 𝐫̊ 𝐱 𝐲 𝐳 𝐮̆ ν 𝐇̋ 𝒰 𝐧 → ℝ ℝNα̱ασωεφλLiptr/∇q⟨⟨⟨⟩⟩⟩÷∇·ΩωØΩ J B E𝒪ℬ𝒟𝒮𝒦𝒯ℛℐ𝒞𝒩Ł𝕃ℰ∂Γℝ𝔸𝒱:#1{ #1 }#1 #1#1‘ #1 ’ #1| #1 | div dist diam Proj #1#1arg max arg min:= #1#2⟨ #1, #2⟩ WdomPreprint , Vol. x, No. x, January 14, 2024H. Woo: An extended asymmetric sigmoid with Perceptron(SIGTRON) for imbalanced linear classificationAn extended asymmetric sigmoid with Perceptron(SIGTRON) for imbalanced linear classification Hyenkyun WooH. Woo is with Logitron X, Daejeon 34890, Republic of Korea, e-mail:[email protected]. January 14, 2024 ======================================================================================================= This article presents a new polynomial parameterized sigmoid called SIGTRON, which is an extended asymmetric sigmoid with Perceptron, and its companion convex model called SIGTRON-imbalanced classification (SIC) model that employs a virtual SIGTRON-induced convex loss function. In contrast to the conventional π-weighted cost-sensitive learning model, the SIC model does not have an external π-weight on the loss function but has internal parameters in the virtual SIGTRON-induced loss function. As a consequence, when the given training dataset is close to the well-balanced condition, we show that the proposed SIC model is more adaptive to variations of the dataset, such as the inconsistency of the scale-class-imbalance ratio between the training and test datasets. This adaptation is achieved by creating a skewed hyperplane equation. Additionally, we present a quasi-Newton optimization(L-BFGS) framework for the virtual convex loss by developing an interval-based bisection line search. Empirically, we have observed that the proposed approach outperforms π-weighted convex focal loss and balanced classifier LIBLINEAR(logistic regression, SVM, and L2SVM) in terms of test classification accuracy with 51 two-class and 67 multi-class datasets. In binary classification problems, where the scale-class-imbalance ratio of the training dataset is not significant but the inconsistency exists, a group of SIC models with the best test accuracy for each dataset (TOP1) outperforms LIBSVM(C-SVC with RBF kernel), a well-known kernel-based classifier.Extended exponential function, extended asymmetric sigmoid function, SIGTRON, Perceptron, logistic regression, large margin classification, imbalanced classification, class-imbalance ratio, scale-class-imbalance ratio, line search, Armijo condition, Wolfe condition, quasi-Newton, L-BFGS§ INTRODUCTIONLearning a hyperplane from the given training dataset D = { (x_l,y_l) ∈^s ×{ -1,+1 } |l=1,2,⋯,d } is the most fundamental process while we characterize the inherent clustered structure of the test dataset. The main hindrance of the process is that the dataset is imbalanced <cit.> and inconsistent <cit.>. An example of an imbalanced dataset is when the number of positive instances in dataset D, denoted by N_+, is not equal to the number of negative instances in dataset D, denoted by N_-. Here, N_+ = { l|y_l = +1} and N_- =D∖ N_+. To address the class-imbalance issues, one can apply under-sampling or over-sampling techniques while preserving the cluster structure of dataset D <cit.>. 
In addition to the class imbalance problem, there is another imbalance problem, known as scale imbalance between the positive class of D, { x_i |i ∈ N_+ } and the negative class of D, {x_j |j ∈ N_- } <cit.>. Considering scale and class imbalance simultaneously, we generalize the class-imbalance ratio r_c =N_+/ N_- to the scale-class-imbalance ratior_sc = r_c√(x_p^c^2+1/x_n^c^2+1), where x_p^c = 1/ N_+∑_i ∈ N_+ x_i is the centroid of the positive class of D and x_n^c = 1/ N_-∑_j ∈ N_- x_j is the centroid of the negative class of D. When r_sc=1 and x_p^c-x_n^c > a where a is a positive constant, we say that D iswell-balanced with respect to r_sc. See <cit.> for more details on imbalancedness appearing in classification. It is worth noting that we can improve the scale imbalance through various normalization methods <cit.>. In our experiments, we use the well-organized datasets in <cit.>. They are normalized in each feature dimension with mean zero and variance one so that we have r_sc-1≤r_c-1. Although we could improve r_sc of D by using mean-zero normalization, there is still r_sc-inconsistency between the training and test datasets <cit.>.In cost-sensitive learning <cit.>, we usually use the π-weighted loss function to learn a stable hyperplane considering r_c of the training dataset. For example, <cit.> uses the π-weighted focal loss function for imbalanced objection detection. Also, see <cit.> for designing large-margin loss functions and the corresponding π-weighted cost-sensitive loss functions based on Bregman-divergence. Although the π-weighted cost-sensitive loss function is helpful to overcome imbalancedness, because of the inherent external structure of π-wight on the loss function, it is sensitive to variations of the dataset, such as r_sc-inconsistency. One of the primary goals of this article is to suggest not a π-weighted loss function but a new class of adjustable convex loss functions by way of virtualization for novel cost-sensitive learning. For that, we introduce SIGTRON(extended asymmetric sigmoid with Perceptron) and a novel cost-sensitive learning model, the SIGTRON-imbalanced classification (SIC) model. The proposed SIC model has internal polynomial parameters in the virtual SIGTRON-induced loss function instead of the external π-weight on the loss function. By the inherent internal structure of the parameters, when r_sc of the training dataset is not severe, the SIC model is more adaptable to inconsistencies in r_sc between training and test datasets. We demonstrate the effectiveness of our model by conducting experiments on 51 two-class datasets. For more information, refer to Figure <ref> (a) in Section <ref>.Before we go further, we present the definition of virtualization.The virtual convex loss function ℓ is defined as a function satisfying ∇ℓ = -p for the given probability function p. For instance, the gradient of the logistic loss function is the negative canonical sigmoid (probability function) ∇ℓ(x) = -σ(-x). Various variants of soft-max function and canonical sigmoid function, such as sparsemax <cit.>, sphericalmax <cit.>, Taylormax <cit.>, high-order sigmoid function <cit.>, and other diverse activation functions <cit.> are in the category of gradients of virtual loss functions. SIGTRON, which we will introduce in the coming Section <ref>, is also in this category. Although, in this article, we only consider S-shaped probability functions <cit.> for virtualization, they could be expandable to general functions. 
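For reference, the class-imbalance ratio r_c and the scale-class-imbalance ratio r_sc just defined can be computed with the minimal NumPy sketch below; the function name and the array layout (instances in rows, labels in {-1,+1}) are illustrative choices, not part of the paper.

```python
import numpy as np

def scale_class_imbalance_ratio(X, y):
    """r_c = |N_+| / |N_-| and
    r_sc = r_c * sqrt((||x_p^c||^2 + 1) / (||x_n^c||^2 + 1)).

    X: (d, s) array of instances, y: (d,) array of labels in {-1, +1}.
    """
    pos, neg = X[y == +1], X[y == -1]
    r_c = len(pos) / len(neg)
    x_p_c = pos.mean(axis=0)          # centroid of the positive class
    x_n_c = neg.mean(axis=0)          # centroid of the negative class
    r_sc = r_c * np.sqrt((np.dot(x_p_c, x_p_c) + 1.0) /
                         (np.dot(x_n_c, x_n_c) + 1.0))
    # After mean-zero, unit-variance normalization of each feature, the centroids
    # shrink toward the origin, so |r_sc - 1| <= |r_c - 1|, as noted above.
    return r_c, r_sc
```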
A typical example is the quasi-score function, of which the virtual loss function is the negative quasi-likelihood function defined by the mean and variance relation <cit.>. In addition, virtual loss functions with monotonic gradient function include various ready-made adjustable convex loss functions, such as tunable loss function <cit.>, high-order hinge loss <cit.>, and Logitron <cit.>. The other main goal of this article is to introduce a quasi-Newton optimization framework for cost-sensitive learning, including the proposed SIC model and π-weighted convex focal loss <cit.>. We name the presented optimization frameworkquasi-Newton(L-BFGS) optimization for virtual convex loss. In quasi-Newton(L-BFGS) optimization, the Hessian matrix is approximated by a rank-two symmetric and positive definite matrix, and its inverse matrix is algorithmically computed by simple two-loop iterations with m recent elements. It generally uses sophisticated cubic-interpolation-based line search to keep positive definiteness. This line search heavily depends on the evaluation of loss function <cit.>. Instead of the well-known cubic-interpolation-based line search, we propose a relatively simple but accurate line search method, the interval-based bisection line search. With the relatively accurate strong Wolfe stopping criterion, the proposed method performs better than L-BFGS with the cubic-interpolation-based line search regarding test classification accuracy. Please refer to the details in Figure <ref>. Although we only consider virtual convex loss functions, which are smooth and bounded below, the proposed optimization framework could be extended to deep neural networks where the non-convexity of loss functions is not severe <cit.>. It is worth mentioning that with the exact line search condition, the nonlinear conjugate gradient utilizes a larger subspace for Hessian matrix approximation <cit.>.We justify the performance advantage of the proposed approach, the cost-sensitive SIC model andquasi-Newton(L-BFGS) for virtual convex loss, with 118 various classification datasets <cit.>. For binary classification problems(51 datasets) where r_sc of training datasets is not severe, the test classification accuracy of TOP1(a group of SIC models having the best test accuracy for each dataset) is 83.96%, which is 0.74% better than that of kernel-based LIBSVM(C-SVS with RBF kernel) and 0.16% better than that of TOP1-FL of π-weighted convex focal loss. Within linear classifiers, the MaxA(α_+=7/8,α_-=8/7) SIC model shows better performance than the π-weighted convex focal loss <cit.> and the balanced classifier LIBLINEAR(logistic regression, SVM, and L2SVM) <cit.> in terms of test classification accuracy with all 118 datasets. Last but not least, the proposed SIC model with (α_+,α_-)-matrix parameters is a useful tool for understanding the structure of each dataset. For example, see Figure <ref> spectf dataset for r_sc-inconsistency, i.e., the training dataset ofspectf is well-balanced, and the test dataset of it is imbalanced <cit.>. For the multi-label structure, refer to Figure <ref> (e)energy-y1 dataset and (f)energy-y2 dataset. They have the same input but opposite outputs, such as heating load vs cooling load <cit.>. §.§ NotationWe briefly review the extended exponential function <cit.> and the extended logarithmic function <cit.>. 
For information on the Tweedie statistical distribution and beta-divergence based on extended elementary functions, refer to the following citations: <cit.>.For notational convenience, let _≥ a = { x ∈ |x ≥ a } and _> a = { x ∈ |x > a}, where a ∈. In the same way, _≤ a and _< a are set. Then the extended logarithmic function ln_α,c <cit.>and the extended exponential function exp_α,c <cit.> are defined as follows: ln_α,c(x)= {[ln(x/c),if α=1; c_α - x_α,otherwise ]. exp_α,c(x)= {[cexp(x),if α=1; c(1 - x/c_α)^1/(1-α) ,otherwise ].where c>0, α≥ 0, x_α = 1/α-1 x^1-α and c_α = 1/α-1 c^1-α. In the case where c=1, the extended functions exp_α,c and ln_α,c become the generalized exponential and logarithmic functions <cit.>, respectively. For the effective domains of ln_α,c and exp_α,c, see <cit.>. In this article, we only consider restricted domains of ln_α,c and exp_α,c in Table <ref>. Within the restricted domains in Table <ref>, irrespective of α_i and c_i, we have ln_α_2,c_2(exp_α_1,c_1(x)) ∈ for all x ∈ int((exp_α_1,c_1)). This property defines the extended logistic loss, including high-order sigmoid function <cit.>. Here, int(E) means the largest open interval contained in an interval E ⊆. Note that xy = ∑_l=1^s x_ly_l for x,y ∈^s, x = √(xx), and x_∞ = max_l x_l. Additionally, · means the absolute value or the size of a discrete set, depending on the context in which it is used.§.§ Cost-sensitive Learning framework, Skewed hyperplane equation, and OverviewLet us start with the cost-sensitive learning modelmin_h ∈ H ∑_i ∈ N_+L_+(h(x_i)) +∑_j ∈ N_-L_-(-h(x_j)) + λ/2 Reg(h),where H = {w· +b|(w,b) ∈^s ×} and Reg is an appropriate regularizer for h, such as w^2. Note that L_+ and L_- are virtualized large-margin convex loss functions that are both smooth and lower-bounded. For more information on cost-sensitive learning, please refer to <cit.>.For simplicity, assume that x_i ≈ x_p^c for i ∈ N_+, x_j ≈ x_n^c for j ∈ N_-, and λ=0. Then (<ref>) becomes h^* = _h ∈ H r_c L_+(h(x_p^c)) + L_-(-h(x_n^c)).Now, we apply ·h^*(x_+) and ·h^*(x_-) to the first optimal equation ∇_h(r_c L_+(h(x_p^c)) + L_-(-h(x_n^c)) )|_h^* = 0 and simplify the corresponding equations. Then we have r_c∇ h^*(x_p^c)/∇ h^*(x_n^c)∇ L_+(h^*(x_p^c)) = ∇ L_-(-h^*(x_n^c)), where ∇ L_± = - p_± and p_±∈ (0,1) are smooth and monotonic probability functions defined in their respective domains. Hence, the first-order optimal equation for classification is derived as follows: r_sc p_+( h^*(x_p^c) ) =p_-(- h^*(x_n^c) )Roughly speaking, the goal of imbalanced linear classification is to design p_+ and p_- so that the hyperplane h^*(x)=0 satisfying (<ref>) separates the given testing dataset as effectively as possible. For instance, by applying Taylor approximation at zero, after simplification, we get the skewed hyperplane equationw^*r_sc∇ p_+(0) x_p^c + ∇ p_-(0) x_n^c/r_sc∇ p_+(0) + ∇ p_-(0) + b^* ≈ - r_scp_+(0) - p_-(0)/r_sc∇ p_+(0) + ∇ p_-(0).where 0< h^*(x_+) ≪ 1 and 0< -h^*(x_-) ≪ 1. When the angle between the hyperplane h^*(x)=0 and the vector x_p^c-x_n^c does not change much, and the skewness of (<ref>) is negligible, the distance of x_p^c to the hyperplane h^*(x)=0 is mainly adjusted by ∇ p_±(0). It is crucial to bear in mind that the internal parameters of the proposed SIC model have a direct impact on ∇ p_±(0) and not p_±(0). The details of the SIC model are discussed in Section <ref>, where the virtual SIGTRON-induced loss function is also introduced. 
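Since the next section builds SIGTRON from exp_α,c, we record a minimal sketch of the extended elementary functions of the Notation subsection. The exact domain bookkeeping of Table 1 is only approximated here (points at or beyond the domain boundary are mapped to 0 or ∞), and all names are illustrative.

```python
import math

def c_alpha(alpha, c):
    # c_alpha = c^(1 - alpha) / (alpha - 1), for alpha != 1
    return c ** (1.0 - alpha) / (alpha - 1.0)

def ext_exp(x, alpha, c):
    """exp_{alpha,c}(x) = c*exp(x) if alpha = 1, else c*(1 - x/c_alpha)^(1/(1-alpha))."""
    if alpha == 1.0:
        return c * math.exp(x)
    u = 1.0 - x / c_alpha(alpha, c)
    if u <= 0.0:
        # boundary / outside the restricted domain (Table 1; only approximated here)
        return float('inf') if alpha > 1.0 else 0.0
    return c * u ** (1.0 / (1.0 - alpha))

def ext_log(x, alpha, c):
    """ln_{alpha,c}(x) = log(x/c) if alpha = 1, else c_alpha - x^(1-alpha)/(alpha-1)."""
    if alpha == 1.0:
        return math.log(x / c)
    return c_alpha(alpha, c) - x ** (1.0 - alpha) / (alpha - 1.0)

# Inside the restricted domain, ln_{alpha,c}(exp_{alpha,c}(x)) recovers x:
print(ext_log(ext_exp(0.3, 1.5, 2.0), 1.5, 2.0))   # ~ 0.3
```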
In Section <ref>, we study the properties of SIGTRON, such as smoothness, inflection point, probability-half point, and parameterized mirror symmetry of inflection point with respect to the probability-half point. SIGTRON is used to exemplify the probability function p_± in (<ref>). In Section <ref>, we demonstrate the usefulness ofquasi-Newton optimization(L-BFGS) for virtual convex loss, which includes the interval-based bisection line search. With this optimization method, we solve two different types of cost-sensitive learning models: the SIC model and the π-weighted convex focal loss. The performance evaluation of the proposed framework, i.e., the SIC model andquasi-Newton optimization(L-BFGS) for virtual convex loss, is done in Section <ref>. We compare the proposed framework with the imbalanced classifier π-weighted convex focal loss <cit.>, the balanced classifier LIBLINEAR(logistic regression, SVM, and L2SVM) <cit.>, and the nonlinear classifier LIBSVM(C-SVC with RBF kernel) <cit.>. The conclusion is given in Section <ref>. § SIGTRON: EXTENDED ASYMMETRIC SIGMOID WITH PERCEPTRON In this Section, we define SIGTRON using the extended exponential function exp_α,c (<ref>). We then study various properties of SIGTRON, such as its smoothness, inflection point, probability-half point, and parameterized mirror symmetry of the inflection point with respect to the probability-half point.Let α≥ 0, c >0, and x ∈. Then SIGTRON(extended asymmetric sigmoid with Perceptron) is defined ass_α,c(x) = {[ σ_α,c(x)ifx ∈(σ_α,c); σ_P(x)otherwise, ].where σ_α,c is the extended asymmetric sigmoid functionσ_α,c(x) = c/c+exp_α,c(-x).Here, exp_α,c is the extended exponential function (<ref>) andσ_P is the Perceptron function(or Heaviside function): σ_P(x) = 1, if x ≥ 0 and 0, otherwise. The restricted domains of exp_α,c and σ_α,c are defined in Table <ref>. Note that s_α,c(x) ∈ [0,1] is a non-decreasing continuous function defined onwith lim_x → -∞ s_α,c(x)= 0 and lim_x → +∞ s_α,c(x)= 1. Additionally, s_α,c(0) = 1/2, irrespective of α and c_α. Here x_ph=0 is denoted as the probability-half point. When α=1, s_α,c(x) = 1/1+exp(-x) is the canonical sigmoid function, irrespective of c.Note that SIGTRON with c=1 becomes the canonical sigmoid function as α-1→ 0, since the extended exponential function with c=1 is the generalized exponential function. However, SIGTRON with c_α=1 becomes a smoothed Perceptron as α-1→ 0 and α≠1. Refer to Figure <ref> for additional information.In the following Theorem <ref>, we characterize the smoothness of SIGTRON (<ref>) depending on α. The proof of Theorem <ref> is given in Appendix <ref>.For n=1,2,3,⋯, when α∈(1-1/n, 1+ 1/n), the n-th derivative of s_α,c is continuous onand expressed as∇^n s_α,c(x) = {[ ∑_k=1^n F_n,k(x)ifx ∈(σ_α,c); 0 otherwise, ].where F_n,k(x) =A_n,k(1/1-α) c exp^k-n(1-α)_α,c(-x) / (c + exp_α,c(-x))^k+1,andA_n,k(1/1-α) = (-1)^n+kk! ∑_l=0^n [nl]{ lk } (α-1)^n-l.Here, [ nl ] is the Stirling number of the first kind  <cit.> with the recurrence equation [ nl ] = (n-1)[ n-1l ] + [ n-1l-1 ], where n,l≥1. { lk } is the Stirling number of the second kind with the recurrence equation { lk } = k{ l-1k } + { l-1k-1 }, where l,k≥1. For the computation of the Stirling number of the first kind and the second kind, we need additional notational conventions: { 00 } = [ 00 ] = 1 and { a0 } = [ a0]=0 for a ≥ 1. We have { a1 } = 1 and [ a1 ] = (a-1)! with 0!=1, for a≥1. Additionally, we note that { ab } = [ ab ] = 0 if b>a ≥ 0. For more details, refer to <cit.>. 
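As a minimal sketch (names illustrative, domain handled as in the previous snippet), SIGTRON can be evaluated directly from Definition 2.1: the extended asymmetric sigmoid σ_α,c on its restricted domain, glued to the Perceptron function outside of it.

```python
import math

def sigtron(x, alpha, c):
    """SIGTRON s_{alpha,c}(x): sigma_{alpha,c}(x) = c / (c + exp_{alpha,c}(-x)) on
    dom(sigma_{alpha,c}), and the Perceptron (Heaviside) function sigma_P outside."""
    if alpha == 1.0:                         # canonical sigmoid, irrespective of c
        return 1.0 / (1.0 + math.exp(-x))
    ca = c ** (1.0 - alpha) / (alpha - 1.0)  # c_alpha
    u = 1.0 + x / ca                         # = 1 - (-x)/c_alpha
    if u <= 0.0:                             # outside dom(sigma_{alpha,c})
        return 1.0 if x >= 0.0 else 0.0      # Perceptron part sigma_P(x)
    return c / (c + c * u ** (1.0 / (1.0 - alpha)))

# The probability-half point is x_hp = 0 for every alpha and c:
for a in (0.5, 0.9, 1.0, 1.25, 2.0):
    assert abs(sigtron(0.0, a, 2.0) - 0.5) < 1e-12
```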
Theorem <ref> states that for any α∈ (0,2), the gradient of s_α,c(x) is given by c^α-1(1-s_α,c(x))^α(s_α,c(x))^2-α, where x ∈. Check Figure <ref> (c) and (d) for a visual representation of ∇ s_α,c(x). The information regarding the inflection point of s_α,c is provided in Corollary <ref>. Additionally, we have observed that the function ∇ s_α,c(x) takes the form of the beta distribution β_D(x;α) = 6/Γ(3-α)Γ(1+α)x^2-α(1-x)^α, where x∈[0,1]. The cumulant distribution of the beta distribution, which has an adjustable parameter α, can also be classified as an S-shaped sigmoid function.For α∈ (0,2), the inflection point x_ip of SIGTRON s_α,c exists in the interval int((σ_α,c)) and is expressed as x_ip = -ln_α,c(cα/2-α).When α=1, the inflection point is the probability-half point, that is, x_ip = x_hp=0. From (<ref>) and Appendix <ref>, we know that s_α,c∈ C^∞(int((σ_α,c))) and∇^2 s_α,c(x) = -α c exp_α,c^2α-1(-x)/(c+exp_α,c(-x))^2 +2c exp_α,c^2α(-x)/(c+exp_α,c(-x))^3. Let α≠1, then, since exp_α,c(-x) ≠ 0 for x ∈ int((σ_α,c)), the inflection point x_ip is a point satisfyingx_ip = - ln_α,c( cα/2-α). If α=1, then s_α,c is the canonical sigmoid function. Thus, x_ip=x_hp = 0. Figure <ref> shows s_α,c and its derivative for various choices of α satisfying α -1 = 1/k (k=1,2,3,4,6,10) and c_α=1. Note that ∇ s_α,c is not defined at α=0 and α=2. When α>1, the inflection point x_ip is getting close to -1 as α→ 2. On the other hand, when α<1, the inflection point x_ip is getting close to 1 as α→ 0.SIGTRON is a general framework for replacing the S-shaped sigmoid function in diverse machine learning problems requiring adjustability of probability(or inflection point) and fixed probability-half point. For instance, refer to the simplified first-order optimal equation for classification (<ref>) and Example <ref>. As canonical sigmoid function σ(x) = 1/1+exp(-x) has a symmetric property σ(x) = 1 - σ(-x), SIGTRON s_α,c also has an extended symmetric property:s_α,c(x) = 1 - s_2-α,c^-1(-x)where α∈ [0,2]. Also, forα∈ (0,2), we have ∇ s_α,c(x) = ∇ s_2-α,c^-1(-x), the parameterized mirror symmetry with respect to probability-half point x_hp=0. See Figure <ref> (c) and (d) for examples of parameterized mirror symmetry of ∇ s_α,c. It is worth commenting that the gradient of Logitron L_α,c <cit.> is also a negative probability function, of which the probability-half point depends on α. For α∈ (0,2], we have ∇ L_α,c(x) = - (s_2-α,c^-1(-x))^αwhere the exponent α is an acceleration parameter of SIGTRON s_2-α,c^-1(-x) and (<ref>) is used. It is well-known that it is hard to give a probability for the results of max-margin SVM classifier <cit.>. In fact, <cit.> uses the canonical sigmoid function σ(γ x+ ξ) to fit a probability to the classified results of the SVM. Here γ and ξ should be estimated <cit.>. Instead of fitting with the canonical sigmoid function σ(γ x+ ξ), we could use SIGTRON s_α,c as a probability estimator for the results of the SVM classifier or any other classifiers having decision boundary, such as hyperplane. For this purpose, there are three steps to follow. First, we must place the probability-half point x_hp of s_α,c at the decision boundary. Second, we should adjust c_α to place the exact probability-one point of s_α,c at a specific point, such as the maximum margin point. Finally, we only need to estimate α for the decreasing slope of s_α,c based on the distribution of classified results. 
See <cit.> for the probability estimation issues in deep neural networks.§ VIRTUAL SIGTRON-INDUCED LOSS FUNCTION, SIC(SIGTRON-IMBALANCED CLASSIFICATION) MODEL, AND SKEWED HYPERPLANE EQUATIONThis Section studies the SIC model with the virtual SIGTRON-induced loss functions and the skewed hyperplane equation of the SIC model.Let α∈ [0,2], c >0, and x ∈, then the virtual SIGTRON-induced loss function L_α,c^S is defined by the following gradient equation ∇ L^S_α,c(x) = s_α,c(x)-1,where s_α,c(x) -1 is a negative probability function. By the extended symmetric property of SIGTRON in (<ref>), we have s_α,c(x) -1= -s_2-α,c^-1(-x).We notice that an expansion of the class of Logitron loss (<ref>) via virtualization is easily achieved by ∇L_β,α,c(x) = - (s_2-α,c^-1(-x))^β where β>0 is a tuning parameter which controls the location of probability-half point x_hp. Thus, the virtualized Logitron loss contains both the virtual SIGTRON-induced loss (<ref>) and the Logitron loss (<ref>). Let α∈ [0,1)∪(1,2] and c > 0. Then the virtual SIGTRON-induced loss function L^S_α,c satisfying (<ref>) has the following integral formulations:(1) Case α∈ (1,2]:L_α,c^S(x) = {[ -c_αF(1+ x/c_α;α-1) + c_α ifx ≥ -c_α;-x1.6inotherwise. ].(2) Case α∈ [0,1):L_α,c^S(x) = {[ c_αF(1+ x/c_α;1-α) - c_α - xifx ≤ -c_α; 01.9inotherwise. ].Here, F(z;b) = ∫_0^z1/1+t^1/bdt with z ∈_≥ 0 and b>0.(1) Case α∈ (1,2]:From (<ref>), we have∇ L_α,c^S(x) = {[ -1/ 1 + (1 + x/c_α)^1/α-1ifx ≥ -c_α;-1otherwise, ].where -c_α <0 and 1+x/c_α≥ 0. The integration of ∇ L^S_α,c becomesL^S_α,c(a_1) - L^S_α,c(a_0) = ∫_a_0^a_1∇ L^S_α,c(t)dt ={[ -c_αF(1+a_1/c_α;α-1) + c_α + a_0,ifa_1 ≥ -c_α; -a_1 + a_0,1.67inotherwise, ].where we may choose a_0 ≪ -c_α. Then, we get the virtual SIGTRON-induced loss function (<ref>), after setting a_1=x and removing constants. (2) Case α∈ [0,1): We have∇ L_α,c^S(x) ={[ 1/1 + (1+x/c_α)^1/1-α - 1ifx ≤ -c_α; 01.25inotherwise, ].where -c_α > 0 and 1+x/c_α≥ 0. Thus, we getL^S_α,c(a_0) - L^S_α,c(a_1) = ∫_a_1^a_0∇ L^S_α,c(t)dt ={[ -c_αF(1+ a_1/c_α;1-α) + c_α + a_1ifa_1 ≤ -c_α;02.1inotherwise, ].where a_0 > -c_α. Let a_1=x, then we get the virtual SIGTRON-induced loss function (<ref>).In Figure <ref>, we present the virtual SIGTRON-induced loss L_α,c^S(x) with c_α=1 andα-1 = 1/k. Here k=1,2,4,6 and 10 are the polynomial orders of exp_α,c. For k=10, L_α,c^S(x) is computed directly by (<ref>) and (<ref>). For k=1,2,4,6, L_α,c^S(x) is expressed in a closed form by virtue of Example <ref>. As we increase the polynomial order k = 1/α-1, i.e. α→ 1 and c_α=1, L_α,c^S(x) is getting close to the smoothed Perceptron loss function <cit.>, not to the logistic loss. We make a list of F(a;1/k) for k=1,⋯,6. 
* k=1:F(a;1/1) = ln(1+a) * k=2:F(a;1/2) = arctan(a) * k=3:F(a;1/3) = 1/6log( 1 + 3a/a^2-a+1)+1/√(3)arctan(2a-1/√(3)) - 1/√(3)arctan(-1/√(3)) * k=4:F(a;1/4) = 1/4√(2)log( 1 + 2√(2)a/a^2-√(2)a+1) + 1/2√(2)arctan(1+√(2) a ) - 2 arctan(1 - √(2)a ) * k=5:F(a;1/5) =(√(5) -1)/20log(2a^2+(√(5)-1)a+2) - (√(5)+1)/20log(2a^2-(√(5)+1)a+2) + log(1+a)/5                             - √(10-2√(5))/10arctan(-4a + √(5) +1/√(10-2√(5))) + √(10+2√(5))/10arctan(4a + √(5) - 1/√(10+2√(5)))- (√(5) -1)log 2 + (√(5)+1)log 2/20                         +√(10-2√(5))/10arctan(√(5)+1/√(10-2√(5))) - √(10+2√(5))/10arctan(√(5) -1/√(10 + 2√(5))) * k=6:F(a;1/6) =√(3)log(a^2 + √(3)a+1/a^2-√(3)a+1) + arctan( √(3) + 2a)/6 -arctan( √(3) - 2a)/6 + arctan(a)/3§.§ Learning a hyperplane with SIC modelLet us first consider the cost-sensitive convex minimization model (<ref>) to find a hyperplane h^*(x) = 0 from the given training dataset D. The following is the reformulation of (<ref>) through the virtual SIGTRON-induced loss function (<ref>) and ℓ_2-regularizer.h^* = _ h ∈ HF(h)where H = {w·+b|(w,b) ∈^s ×} andF(h) = ∑_i ∈ N_+L^S_α_+,c_+(h(x_i)) +∑_j ∈ N_-L^S_α_-,c_-(-h(x_j)) + λ/2w_2^2.This minimization problem (<ref>) with (<ref>) is named as the SIGTRON-imbalanced classification(SIC) model. To demonstrate the merit of SIC model (<ref>), we start with the following simplified first-order optimal equation for classification introduced in (<ref>) with p_+(x) = s_2-α_+,c_+^-1(-x) andp_-(x) = s_2-α_-,c_-^-1(-x). r_sc s_2-α_+,c_+^-1(- h^*(x_p^c) ) =s_2-α_-,c_-^-1( h^*(x_n^c) )where x_p^c is the centroid of the positive training dataset x_i ∈ N_+ and x_n^c is the centroid of the negative training dataset x_j ∈ N_-. In the following Theorem <ref>, the skewed hyperplane equation of the SIC model (<ref>) is derived from a first-order approximation to (<ref>). Let x_p^c-x_n^c>a for a positive constant a, h^*(x_p^c) ∈ dom(σ_α_+,c_+), 0< h^*(x_p^c) ≪c_α_+, - h^*(x_n^c) ∈ dom(σ_α_-,c_-), and 0< - h^*(x_n^c) ≪c_α_-. Here, c_α_+ = c_+^1-α_+/(α_+-1), c_α_- = c_-^1-α_-/(α_–1), and α_+,α_- ∈ [0,1) ∪ (1, 2]. Then, from (<ref>), we have the skewed hyperplane equationw^*(c_+^α_+-1x_p^c + r_sc c_-^α_–1x_n^c/c_+^α_+-1 + r_scc_-^α_- -1) + b^* ≈2(r_sc-1)/c_+^α_+-1 + r_sc c_-^α_–1,where r_sc is the scale-class-imbalance ratio (<ref>). If r_sc=1, then the signed distance of x_p^c to the hyperplane h^*(x)=0 is approximately given as h^*(x_p^c)/w^*≈ηx_p^c-x_n^c cos(θ_+)where η =c_-^α_–1/c_+^α_+-1 + c_-^α_- -1∈ (0,1) and cos(θ_+) = w^*/w^*x_p^c-x_n^c/x_p^c-x_n^c>0. In the same way, for x_n^c, we have h^*(x_n^c)/w^*≈ (η-1)x_p^c-x_n^ccos(θ_+).We get r_sc(1 + ( 1 - h^*(x_n^c)/c_α_-)^1/α_–1) = (1 + (1 + h^*(x_p^c)/c_α_+)^1/α_+-1) from (<ref>). Since 0<h^*(x_p^c)/c_α_+≪ 1 and 0<h^*(x_n^c)/c_α_-≪ 1, we have the first order approximationr_sc( 2 - h^*(x_n^c)/c_-^1-α_-) ≈( 2 +h^*(x_p^c)/c_+^1-α_+),where (α_- - 1)c_α_- = c_-^1-α_- and (α_+-1)c_α_+ = c_+^1-α_+. Note that, when 0 ≤α<1 and r≪ 1, we use (1 + r)^1/α-1≈ (1-r)^1/1-α≈ 1 + 1/α-1r. By simplifying (<ref>), we get the skewed hyperplane equation in (<ref>). If r_sc = 1 then -b^* ≈w^*η x_p^c + (1-η)x_n^c with η =c_-^α_–1/c_+^α_+-1 + c_-^α_- -1. Thus, the signed distance of x_p^c to the hyperplane h^*(x)=0 in (<ref>) is easily derived.In practice, due to computational constraints, we normally choose polynomial functions for exp_α,c, i.e., positive integers for 1/α_±-1. Assume that c_α_± is aconstant, 1/α_+-1 = k_+, and 1/α_–1=k_-. Here, k_±=1,2,3,⋯. 
Then we haveη = 1/c_-^1-α_-/1/c_+^1-α_++1/c_-^1-α_- = k_-/k_++k_-The hyperplane h^*(x)=0 is tuned by the ratio of polynomial order of SIGTRON if cos(θ_+) does not change much. The following Example <ref> describes the tunable hyperplane through r_sc-inconsistent dataset having a well-balanced training dataset. Let us start with the two-classspectf dataset in Table <ref>. The training dataset is well-balanced, i.e., r_sc=1. However, the test dataset has r_sc = 0.26(r_c=0.09). It indicates that the positive class of the test dataset is the minority class. The hyperplane to be learned should be located near the minority class to achieve better test classification accuracy. As observed in (<ref>) and Figure <ref> (d), to move the hyperplane to the minority class as close as we can, we need to select the smallest η = 1/11. This η corresponds to four (α_+,α_-) candidates: (11/10,2),(9/10,2),(11/10,0), and (9/10,0). In fact, at (α_+,α_-)=(11/10,2), we obtain the minimum distance of x_test,p^c (the centroid of the positive class of test dataset) to the hyperplane h_(α_+=11/10,α_-=2)^*(x)=0 (Figure <ref> (b)) and the best test classification accuracy 64.6% (Figure <ref> (a)). Note that the pattern of η in Figure <ref> (d) is similar to the pattern of the distance of x_test,p^c to the hyperplane in Figure <ref> (b). As Figure <ref> (a) shows, the region α_- ≈ 2 obtains better test classification accuracy than the region α_- ≈ 0. Additionally, note thatcos(θ_test,+) = w_(α_+,α_-)^*/w_(α_+,α_-)^*x_test,p^c-x_test,n^c/x_test,p^c-x_test,n^c∈ [0.52,0.90] and (cos(θ_test,+))=0.71. As a reference, we obtained 20×20 hyperplanes h^*_(α_+,α_-)(x)=0 by solving 20×20 SIC models (<ref>) with the well-balanced training dataset. The cross-validation was used for the best regularization parameter λ. We set c_α_±=2, 1/1-α_+ = k_+ = 1,2,⋯,10, and 1/1-α_- = k_- = 1,2,⋯,10. Lately, <cit.> has proposed two focal loss functions for imbalanced object detection. The first one is the non-convex focal loss function. It has L_+(h(x)) = -π(1-p(h(x)))^γ_glog(p(h(x))) and L_-(h(x)) = -(1-π)p(h(x))^γ_glog(1-p(h(x))) where p(h(x)) ∈ (0,1) is a probability function, like canonical sigmoid σ or reduced Sigtron. The second one is the convex focal loss function. It has L_+(h(x))= -πlog(σ(γ h(x)+ξ)) and L_-(h(x)) =- (1-π)log(1-σ(γ h(x) + ξ)). Here π∈ (0,1) is known as a cost-sensitive parameter to be selected depending on r_sc of the training dataset. Note that γ≥ 1 and ξ≥ 0 control the stiffness and shift of the convex focal loss, respectively. As <cit.> mentioned, the performance gap between the two types of focal losses is negligible. Therefore, we exclusively compare the convex focal loss to the SIC model. Unlike the latter, which uses an external π-weight, the SIC model employs a virtualized convex loss function with internal polynomial parameters. To find additional information, please refer to Section <ref>. § QUASI-NEWTON OPTIMIZATION(L-BFGS) FOR VIRTUAL CONVEX LOSSThis Section presentsquasi-Newton optimization(L-BFGS) for virtual convex loss framework. It includes the proposed interval-based bisection line search, which uses gradients of a virtual convex loss function. Let us discuss the SIC model (<ref>), where F(h) is convex, differentiable, and bounded below. It is worth noting that the optimization framework we will be proposing for this model can also be used for cost-sensitive learning model (<ref>), including the π-weighted convex focal loss. 
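Since this framework only queries gradients, it may help to spell out how the gradient of F in the SIC model is assembled from the virtualized gradients ∇ L^S_α,c(t) = s_α,c(t) - 1 of Definition 3.1. The sketch below is illustrative only: dense NumPy arrays, a plain Python loop, and a generic pair of probability functions s_plus/s_minus (for instance, the sigtron helper sketched in Section 2) stand in for an actual implementation. In particular, the integral forms (11)-(12) never need to be evaluated during training, which is the point of the gradient-only line search introduced below.

```python
import numpy as np

def sic_gradient(w, b, X, y, s_plus, s_minus, lam):
    """Gradient of F(h) = sum_{i in N_+} L^S_{a+,c+}(h(x_i))
                        + sum_{j in N_-} L^S_{a-,c-}(-h(x_j)) + (lam/2)||w||^2,
    with h(x) = <w, x> + b, using only grad L^S_{alpha,c}(t) = s_{alpha,c}(t) - 1."""
    h = X @ w + b
    gw, gb = lam * w.copy(), 0.0
    for x_l, y_l, h_l in zip(X, y, h):
        if y_l == +1:
            g = s_plus(h_l) - 1.0             # d/dh of L^S_{a+,c+}(h)
        else:
            g = -(s_minus(-h_l) - 1.0)        # d/dh of L^S_{a-,c-}(-h), chain rule
        gw += g * x_l
        gb += g
    return gw, gb
```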
Before we proceed, let us take a moment to review the quasi-Newton optimization framework described in <cit.>. The iterates h_0,h_1,h_2,⋯ satisfy h_t+1 = h_t + ρ_tz_t where ρ_t>0 is a step length and z_t = -B_t^-1∇ F(h_t) is a descent direction. Here, B_t is a symmetric and positive definite rank-two approximation of the Hessian matrix ∇^2F(h_t). Interestingly, L-BFGS directly approximates B_t^-1∇ F(h_t) by two-loop iterations with m recent elements. Here, m is the tuning parameter of L-BFGS. The performance comparison of the proposed optimization framework considering m of L-BFGS is shown in Figure <ref>. For the initial point, we set h_0=0, corresponding to the probability-half point of SIGTRON in the gradient of the SIC model. It is well known that, to guarantee sufficient descent of F(h) and positive definiteness of low-rank matrix B_t, the step length ρ_t of L-BFGS should satisfy the Armijo condition (<ref>) and the Wolfe condition (<ref>): F(h_t + ρ_t z_t) -F(h_t) ≤ c_Iρ_t ∇ F(h_t)z_tand∇ F(h_t + ρ_tz_t)z_t≥ c_II∇ F(h_t) z_t,where 0<c_I < c_II<1. The Armijo condition (<ref>) can be reformulated through the expectation of gradients: ϕ(ρ_t) - ϕ(0) =∫_0^ρ_tϕ'(ρ) dρ = ρ_t_[0,ρ_t](ϕ'),where ϕ(ρ_t) =F(h_t + ρ_t z_t) and ϕ'(ρ) = ∇ F(h_t + ρ z_t) z_t. Note that ϕ'(0) = ∇ F(h_t)z_t < 0 where z_t = - B_t^-1∇ F(h_t). Now, we get the reformulated Armijo condition_[0,ρ_t](ϕ')≤ c_Iϕ'(0)and the Wolfe conditionc_IIϕ'(0) ≤ϕ'(ρ_t),where (<ref>) is also known as the curvature condition <cit.>, which is clearly understood by the reformulation of (<ref>) as _[0,ρ_t](ϕ”) > (c_II-1)ρ_tϕ'(0) > 0. The positive definiteness of B_t in L-BFGS is adjusted by c_II∈ (0,1), normally set as 0.9. For more details, see <cit.>. The reformulated Armijo condition (<ref>) has several advantages, compared to the Armijo condition (<ref>). First, it is more intuitive about the descent condition of the loss function. The average slopes of ϕ in the interval [0,ρ_t] must be less than the initial slope ϕ'(0). Second, for the SIC model (<ref>), using an approximation of (<ref>) is more practical. That is, _[0,ρ_t](ϕ') ≈∑_i=1^n a_i ϕ'(ρ̃_i), wherea_i ≥ 0, ∑_i=1^n a_i = 1, and 0 ≤ρ̃_0 < ρ̃_1 < ⋯ < ρ̃_n ≤ρ_t. This approach is workable for the general loss function, including virtual non-convex loss function. For a virtual convex loss function, however, we do not need to evaluate a relatively large number of directional derivatives in the interval [0,ρ_t]. Instead of (<ref>) and (<ref>), we can use the strong Wolf condition, i.e., (relative) strong Wolfe stopping criterion. ϕ'(ρ_t)≤ -c_IIϕ'(0)where c_II∈ (0,1) is a tuning parameter of the proposedquasi-Newton(L-BFGS) optimization for virtual convex loss. See also <cit.> for related line search algorithms utilizing (<ref>). In this article, for the strong-Wolfe stopping criterion (<ref>), we create a new interval-based bisection line search(Algorithm <ref>). See <cit.> for the various characteristics of the interval reduction method in general line search. The overall framework ofquasi-Newton(L-BFGS) optimization for virtual convex loss is stated in Algorithm <ref>, which contains the interval-based bisection line search in Algorithm <ref>. See also Theorem <ref> for the convergence of Algorithm <ref>.Let ϕ be convex, differentiable, and bounded below. Then Algorithm <ref> with an initial condition ϕ'(0)<0 converges to ρ^* satisfying (<ref>), where c_II∈ (0,1), in finite steps.Let us first consider the case that ϕ is a coercive function. 
Since ϕ'(0)<0 and ϕ' is a non-decreasing function, there is ρ_opt >0 such that ϕ'(ρ) ≥ 0 for all ρ≥ρ_opt. As noticed in line 6-7 and line 10-11 of Algorithm <ref>, there is ith iteration such that ϕ'(ρ_i) < ϕ'(ρ_opt) = 0 < ϕ'(2ρ_i). Therefore, the interval, which includes ρ_opt, is established as [ρ_L,ρ_U] = [ρ_i,2ρ_i]. Then by the bisection algorithm in line 6-9 and line 13, [ρ_L,ρ_U] is shrinking to ρ_opt and the strong-Wolfe stopping criterion in line 4 of Algorithm <ref> is satisfied within finite steps. Now, we consider the case that ϕ is not a coercive function. Since ϕ is convex, bounded below, and ϕ'(0)<0, lim_ρ→ +∞ϕ'(ρ) → 0(line 11). Therefore, it stops by strong-Wolfe stopping criterion in line 4. Besides Armijo (<ref>) and Wolfe (<ref>) criteria for line search, there is an additional criterion known as Goldstein condition <cit.>. By way of (<ref>), it is reformulated as(1-c_III) ϕ'(0) ≤_[0,ρ_t](ϕ') ≤ c_IIIϕ'(0)where c_III∈ (0,1/2) and ϕ'(0)<0. Unfortunately, this condition does not always include the solution of min_ρϕ(ρ). To plug it into the quasi-Newton(L-BFGS) optimization for virtual loss, We need an additional curvature condition (<ref>). § NUMERICAL EXPERIMENTS WITH THE 20 × 20 SIC MODELSThis Section reports the classification results acquired by the SIC model (<ref>) andquasi-Newton(L-BFGS) for virtual convex loss(Algorithm <ref> and <ref>). We compare the proposed methodology with well-known classifiers: π-weighted convex focal loss <cit.>, LIBLINEAR(logistic regression, SVM, and L2SVM) <cit.>, and LIBSVM(C-SVC with RBF kernel) <cit.>. Note thatQuasi-Newton(L-BFGS) for virtual convex loss is mainly implemented inMatlab(version R2023b) based on <cit.>. This optimization algorithm is used for the SIC model (<ref>) and the π-weighted convex focal loss <cit.>. LIBLINEAR(version 2.4.5) <cit.> and LIBSVM(version 3.3.2) <cit.> are mainly implemented inC/C++ language withMatlab interface. All runs are performed on APPLE M2 Ultra with a 24-core CPU and 192GB memory. The operating system is MacOS Sonoma(version 14.1). We useparfor inMatlab for parallel processing of all models, including LIBLINEAR and LIBSVM, in a 24-core CPU. In terms of multi-class datasets, the OVA(one-vs-all) strategy is used for all linear classification models. The OVO(one-vs-one) strategy is used for the kernel-based classification model LIBSVM <cit.>. Concerningquasi-Newton(L-BFGS) for virtual convex loss, as observed in Figure <ref>, it is recommended to select m ∈ [20, 50] for two-loop iterations of L-BFGS and c_II∈ [0.1,0.5] for the interval-based bisection line search(Algorithm <ref>).We choose m=40 and c_II=0.4 considering performance-computation complexity. For stopping criterions ofquasi-Newton(L-BFGS) for virtual convex loss, we use ∇ F(h_t)_∞≤ϵ_tol1 and h_t+1-h_t_∞≤ϵ_tol2 where ϵ_tol1=10^-2 and ϵ_tol2=10^-4(Algorithm <ref>). We could select a smaller c_II for exact line search, used in other quasi-Newton optimization, such as nonlinear conjugate gradient <cit.>. In order to use the π-weighted convex focal loss <cit.> discussed in Remark <ref>, we need to set three parameters: γ, ξ, and π. Following the recommendations in <cit.>, we choose γ=1, 2,3,4 and ξ=0,1. As for π, we select 19 regular points ranging from 0.05 to 0.95. This gives us 152 convex focal losses, expressed as a (π, γ:ξ)-matrix.We have selected LIBLINEAR <cit.> and LIBSVM <cit.> as our standard for balanced linear classification models and non-linear classification models, respectively. 
For logistic regression, we use logistic loss, hinge-loss for SVM, and squared hinge-loss for L2SVM. To learn an inhomogeneous hyperplane, we set B=1. We use the primal formulation (s=0) for logistic regression and the dual formulation (s=3) for SVM. As for L2SVM, we use the primal formulation (s=2). In LIBSVM <cit.>, we use C-SVC(support vector classification) (s=0) with the RBF kernel K(x_i,x_j) = exp(-νx_i-x_j^2) (t=2).All models have an ℓ_2-regularizer λ/2w^2. In terms of regularization parameter λ for the cost-sensitive learning framework (<ref>), including 20×20 SIC models (<ref>) and 19×8π-weighted convex focal loss models in Remark <ref>, we use CV(cross-validation) with candidates in (<ref>) as recommended in LIBSVM <cit.>.λ = 2^r, r = -14,-13,-12,⋯,5 In LIBLINEAR and LIBSVM, the regularization parameter λ is located on the loss function. Therefore, we use C = λ^-1 with (<ref>) for CV. For LIBSVM, in addition to the regularization parameter C on the loss function, the RBF kernel parameter ν is cross-validated with candidates ν = 2^r and r=-14,..,5.Regarding benchmark datasets <cit.>, they are pre-processed and normalized in each feature dimension with mean zero and variance one <cit.>, except for when the variance of the raw data is zero. This process reduces the effect of scale imbalance of datasets. The scale-class-imbalance ratio r_sc (<ref>) of two-class and multi-class datasets is presented in Table <ref> and Table <ref>, respectively. In the case of two-class datasets in Table <ref>, the mean value of r_sc of training dataset is (r_sc T) ≈ 1.61. Also, we have min r_sc T = 0.49 and max r_sc T =7. Thus, the two-class datasets used in our experiments are roughly well-balanced. However, most of the two-class datasets have variations between r_sc T and r_sc of test dataset (r_sc Te). The raw format of each benchmark dataset is available in the UCI machine learning repository <cit.>. As commented in <cit.>, we reorganize datasets in <cit.>. Each dataset is separated into the non-overlapped training and test datasets. The training dataset of each dataset is randomly shuffled for 4-fold CV <cit.>. Table <ref>(51 two-class datasets) and Table <ref>(67 multi-class datasets) include all information of datasets such as number of instances, size of training dataset, size of test dataset, feature dimension, number of classes, class-imbalance ratio r_c for combined/training/test dataset, and scale-class-imbalance ratio r_sc for combined/training/test dataset. The experiments are conducted five times using randomly selected CV datasets, with a fixed initial condition of (w_0,b_0) = (0,0). For α and c_α of SIC model (<ref>), we conducted a preliminary experiment with the reduced class of SIC model (α=α_+=α_-). We found that the best test classification accuracy is obtainable when c_α=2. For general purposes, c_α∈ [1,10] is a possible choice. When α is not close to 1, the SIC model with c_α=1 shows the best performance. The detailed information is provided in Figure <ref>. For the experiments in this Section, we set c_α = 2 for α>1 and c_α=-2 for α<1. Thus, α_± are the only tuning parameters for which we use the following 20 different values in [0,2]:α_±∈{ 0/1,1/2,2/3,⋯,9/10, 11/10,10/9,9/8,⋯,3/2,2/1 }This gives us 20 × 20 SIC models. The characteristic of each dataset could be captured by the large class of hyperplanes h^*_(α_+,α_-)=(w^*_(α_+,α_-),b^*_(α_+,α_-)) learned via the 20 × 20 SIC models (<ref>), as noticed in Theorem <ref> and Example <ref>. The details are as follows. 
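The candidate grids just described are compact enough to restate in code. The snippet below builds the 20 values of α_± (hence the 20×20 SIC models), the 19×8 grid of π-weighted convex focal losses, and the regularization candidates λ = 2^r (with C = λ^{-1} passed to LIBLINEAR/LIBSVM). It restates the grids only and is not taken from the authors' Matlab scripts.

```python
from fractions import Fraction
from itertools import product

# 20 values of alpha_{+/-}: 0/1, 1/2, 2/3, ..., 9/10, 11/10, 10/9, ..., 3/2, 2/1
alphas = [Fraction(k, k + 1) for k in range(10)]            # 0/1 up to 9/10
alphas += [Fraction(k + 1, k) for k in range(10, 0, -1)]    # 11/10 down to 2/1
assert len(alphas) == 20

sic_grid = list(product(alphas, repeat=2))                  # 20 x 20 SIC models

def c_alpha(alpha):
    # rule used in the experiments: c_alpha = 2 for alpha > 1 and -2 for alpha < 1
    return 2 if alpha > 1 else -2

# 19 x 8 pi-weighted convex focal losses: pi in {0.05,...,0.95}, gamma in {1,2,3,4}, xi in {0,1}
pis = [round(0.05 * k, 2) for k in range(1, 20)]
focal_grid = list(product(pis, [1, 2, 3, 4], [0, 1]))       # 152 models

# cross-validation candidates for the l2-regularizer: lambda = 2^r, r = -14, ..., 5
lambdas = [2.0 ** r for r in range(-14, 6)]
Cs = [1.0 / lam for lam in lambdas]                         # C = 1/lambda for LIBLINEAR/LIBSVM
```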
§.§ Performance evaluation of 20 × 20 SIC modelsTable <ref> summarizes the classification accuracy (%) and computation time of all experiments conducted on 118 datasets. The acronym TOP1 refers to a group of SIC models that have the highest test accuracy for each dataset, while MaxA/Max2/MaxM refers to an SIC model with the best test accuracy for all-, two-, and multi-class datasets. The same notations are used for π-weighted convex focal loss: TOP1-FL, MaxA-FL, Max2-FL, and MaxM-FL. The test classification accuracy of each dataset is reported in Table <ref> for two class datasets and in Table <ref> for multi-class datasets. Note that MaxA(α_+=7/8,α_-=8/7) achieves 78.56%. On the other hand, MaxA-FL(π=0.5,γ=2,ξ=1) obtains 78.39%. Over half of all SIC models obtain at least 78.20% accuracy. Out of all the π-weighted convex focal losses, only 10% can achieve the same level of accuracy as the proposed SIC model. This implies that the SIC model is less sensitive to the parameter than the π-weighted convex focal loss. Therefore, the SIC model could serve as an alternative cost-sensitive learning framework without external π-weight. Refer to Figure <ref> for additional information. The details are as follows.In the case of two-class, of which the training dataset is close to the well-balanced condition, TOP1 achieves the best results, i.e., 0.74% better than the kernel-based classifier LIBSVM(C-SVC with RBF kernel) and 0.16% better than TOP1-FL. When the parameters of the SIC model are fixed, its performance is still better than other linear classifiers, such as π-weighted convex focal loss and LIBLINEAR. For instance, Max2(α_+=3/4,α_-=0) has 82.51% accuracy, which is 0.14% better than Max2-FL and 0.4% better than logistic regression, the best model of LIBLINEAR. As shown in Figure <ref> (a), the test accuracy of all SIC models is in the range of [80.64%, 82.51%]. More than 35% of all SIC models achieve at least 82.20% test accuracy. On the other hand, the test accuracy of all convex focal losses is in the range of [65.93%,82.37%]. Out of all the convex focal loss, only 2% can achieve 82.20% test accuracy. It appears that the SIC models are quite resilient to internal parameter changes. Specifically, Figure <ref> (a) shows an X-shaped pattern. This pattern covers a much larger area compared to the best test accuracy area of convex focal losses in Figure <ref> (c). The X-shaped pattern relates to the pattern of η in Figure <ref> (d). It represents a small deviation from the balanced SIC model, which has α_+-1=α_–1. Essentially, the virtual SIGTRON-induced loss functions L_α_+,c_+^S and L_α_-,c_-^S of the SIC model have similar polynomial orders, i.e., k_+ ≈ k_-. Figure <ref> (b)horse-colic demonstrates the X-shaped pattern. It is important to note thatspectf dataset in Table <ref> is a typical r_sc-inconsistent dataset. By using this dataset, the connection between η = k_-/k_-+k_+ and the movement of the hyperplane h^*_α_+,α_-(x)=0 is empirically demonstrated in Figure <ref>. Notably, the best test accuracy of the dataset is observed in the region (α_+,α_-)=(-,2), which is outside the X-shaped pattern. Regarding multi-class datasets, the kernel-based classifier LIBSVM(C-SVC with RBF kernel) achieves the highest test classification accuracy. As presented in Table <ref>, although the test accuracy of TOP1 is less than kernel-based LIBSVM, it still achieves a respectable 77.30%, which is 0.62% better than TOP1-FL. 
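As an aside on the acronyms used throughout this subsection, the aggregation rules are simple enough to state in code. Under our reading of the definitions above, TOP1 averages the per-dataset best SIC model, while MaxA/Max2/MaxM report the single SIC model with the best average accuracy over the respective dataset group; the sketch below illustrates this on a placeholder accuracy array, so all array names and values are illustrative.

```python
import numpy as np

# acc[i, j]: test accuracy of SIC model j (one of the 20*20 = 400 (alpha_+, alpha_-) pairs)
# on dataset i; a random placeholder stands in for the real results here.
acc = np.random.uniform(0.6, 0.9, size=(118, 400))

top1 = acc.max(axis=1).mean()            # best model picked separately for each dataset
max_a_idx = int(acc.mean(axis=0).argmax())
max_a = acc.mean(axis=0)[max_a_idx]      # one fixed model, best on average over all datasets

print(f"TOP1 = {100 * top1:.2f}%,  MaxA = {100 * max_a:.2f}%  (model index {max_a_idx})")
```

The same rules, applied to the 19×8 focal-loss grid, give TOP1-FL and MaxA-FL/Max2-FL/MaxM-FL.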
In Figure <ref> (b), we observe that the SIC model, which has only internal polynomial order parameters k_± = 1/α_± -1, performs similarly to the convex focal loss, which has the external π-weight parameter and the internal ξ and γ parameters. Note that Figure <ref> (b) shows a pattern of the best-performing SIC model in the (α_+,α_-)-matrix. Compared to two-class, the X-shaped pattern is rounded and biased toward α_->1. In the case of convex focal loss in Figure <ref> (d), the best-performing region is much larger than the two-class convex focal loss. The region is shifted towards π<0.5.Lastly, regarding computation time, L2SVM(primal) and logistic regression(primal) of LIBLINEAR are the fastest models. These models use the truncated Newton method <cit.> that is based on the unique Hessian structure of the large-margin linear classifier. On the other hand, for both the SIC model and convex focal loss, the proposedQuasi-Newton(L-BFGS) optimization for virtual convex loss is used. As shown in Figure <ref> (b) in Appendix <ref>, the π-weighted convex focal loss with π=0.5,γ=1,ξ=0, which corresponds to the logistic loss of LIBLINEAR, achieves reasonable performance-computation complexity, resulting in 78.30% test accuracy at 83 seconds. It is worth noting that the logistic regression of LIBLINEAR only obtains 77.93% test accuracy at 60 seconds.Figure <ref> demonstrates patterns of test classification accuracy for two-class datasets,statlog-australian-credit andhorse-colic and for multi-class datasets,ecoli,arrhythmia,energy-y1, andenergy-y2. Overall, the best test accuracy regions of multi-class datasets are more localized than those of two-class datasets. The X-shaped pattern in Figure <ref> (a) is also observed in thehorse-colic dataset in Figure <ref> (b). Bothenergy-y1 andenergy-y2 have the same input dataset but look for hyperplanes for opposite outputs. Specifically,energy-y1 is used to determine the heating load, whileenergy-y2 is used to determine the cooling load <cit.>. The best performing (α_+,α_-) forenergy-y1 andenergy-y2 exhibit opposite patterns: (α_+,α_-)≈ (2,-) forenergy-y1, and (α_+,α_-) ≈ (-, 2) forenergy-y2. Refer to Figure <ref> (e) and (f) for further details. Understanding the correlation between the pattern of (α_+,α_-)-matrix and the structure of each dataset can be a valuable tool for multi-label classification and imbalanced classification. § CONCLUSIONThis article introduces SIGTRON, an extended asymmetric sigmoid function with Perceptron, and its virtualized loss function called virtual SIGTRON-induced loss function. Based on this loss function, we propose the SIGTRON-imbalanced classification (SIC) model for cost-sensitive learning. Unlike other models, the SIC model does not use an external π-weight on the loss function but instead has an internal two-dimensional parameter (α_+,α_-)-matrix. We show that when a training dataset is close to a well-balanced condition, the SIC model is moderately resilient to variations in the dataset through the skewed hyperplane equation. When r_sc is not severe, the proposed SIC model could be used as an alternative cost-sensitive learning model that does not require an external π-weight parameter. Additionally, we introducequasi-Newton(L-BFGS) optimization for virtual convex loss with an interval-based bisection line search. This optimization is a competitive framework for a convex minimization problem, compared to conventional L-BFGS with cubic-interpolation-based line search. 
We utilize the proposed optimization framework for the SIC model and the π-weighted convex focal loss. Our SIC model has shown better performance in terms of test classification accuracy with 118 diverse datasets compared to the π-weighted convex focal loss and LIBLINEAR. In binary classification problems, where the severity of r_sc is not high, selecting the best SIC model for each dataset(TOP1) can lead to better performance than the kernel-based LIBSVM. Specifically, TOP1 achieves a test classification accuracy of 83.96%, which is 0.74% better than the accuracy of LIBSVM and 0.16% better than the accuracy of TOP1-FL of the convex focal loss. In multi-class classification problems, although the test accuracy of TOP1 is lower than that of kernel-based LIBSVM, it achieves an accuracy of 77.30%, which is 0.62% better than the accuracy of TOP1-FL of the convex focal loss. Last but not least, the proposed SIC model, which includes an (α_+,α_-)-matrix parameter, could be a valuable tool for analyzing various structures of datasets, such as r_sc-inconsistency and multi-label structures. §.§ Proof of Theorem <ref> Let α=1, then we have s_α,c(x) = 1/1+exp(-x)∈ C^∞(). Therefore, we only consider the case α≠1. Note that∇^n s_α,c(x) = 0, for all x ∈∖(σ_α,c) and n=1,2,3,⋯. Also, for x ∈ int((σ_α,c)), let y = 1 + x/c_α>0 and a = 1/1-α, then we get s_α,c(c_α(y-1)) = 1/1 + y^a. Thus, it is not difficult to see s_α,c(x) = σ_α,c(x) ∈ C^∞(int((σ_α,c))). For the continuity of ∇^n s_α,c(x) on x ∈, we only need to check s_α,c(-c_α)=0. Let us assume that (<ref>) is true. (A) When 0 ≤α<1, exp_α,c(-x)=0 at x=-c_α. Thus, we only need to consider numerator of F_n,k(x), i.e., (exp_α,c(-x))^k-n(1-α). F_n,k(c_α) = 0, if k - n(1-α) > 0 for all k=1,2,⋯,n. Now, we get 1>α>1-1/n. (B) When α>1, exp_α,c(-x)=+∞ at x=-c_α. Thus, we have F_n,k(-c_α) = lim_x → -c_α c A_n,k(1/1-α) (exp_α,c(-x))^-n(1-α) -1 = 0 if -n(1-α)-1<0. It means 1<α< 1 + 1/n. From (A) and (B), we get ∇^n s_α,c(x) ∈ C^n() for α∈(1-1/n,1+1/n). Now, we want to show (<ref>) by induction for x ∈ int((σ_α,c)). Let y=1+x/c_α and a = 1/1-α, then (<ref>) becomesd^n/dy^n1/1+y^a = ∑_k=1^n B_n,k(a)y^ka-n/(1+y^a)^k+1where B_n,k(a) = (-1)^n+kk! ∑_l=0^n [nl]{ lk } (-a)^l. ( I) Let n=1. Then the left-hand side of (<ref>) is d/dy1/1+y^a = -ay^a-1/(1+y^a)^2. The right-hand side is B_1,1(a)x^a-1/(1+x^a)^2 where B_1,1(a) = [ 10 ]{ 01 } + [ 11 ]{ 11 }(-a) = -a. For the computation of the Stirling number of the first and second kind, we use the following convention and rule in <cit.>: { 00 } = [ 00 ] = 1 and { a0 } = [ a0]=0 for a ≥ 1. Also, we have { a1 } = 1 and [ a1 ] = (a-1)! with 0!=1, for a≥1. Additionally, { ab } = [ ab ] = 0 if b>a ≥ 0. ( II) For n>1, let (<ref>) be true. Then, for n+1, we need to show thatd/dy( ∑_k=1^n B_n,k(a)y^ak-n/(1+y^a)^k+1)= ∑_k=1^n+1B_n+1,k(a)y^ak-(n+1)/(1+y^a)^k+1,whered/dy( ∑_k=1^n B_n,k(a)y^ak-n/(1+y^a)^k+1) = ∑_k=1^n B_n,k(a)((ak-n)y^ak-(n+1)/(1+y^a)^k+1 - (k+1)ay^a(k+1)-(n+1)/(1+y^a)^k+2).From (<ref>) and (<ref>), we getB_n+1,k(a) = (-ka) B_n,k-1(a) + (ka-n) B_n,k(a)where B_n,0(a) = B_n,n+1(a)=0. It comes from the rule of the Stirling number in ( I). Now, we only need to prove (<ref>). The left-hand side of (<ref>) isB_n+1,k(a)=(-1)^k k! ∑_l=0^n+1[n+1l ]{ lk }(-1)^n+1 - la^l=(-1)^k k! ({ n+1k } a^n+1+∑_l=1^n( n[ nl ] + [ nl-1 ] ){ lk }(-1)^n+1 - la^l )where [ n+ 10 ] = 0, [ n+ 1n+1 ] = 1, and [ n+1l ] = n[ nl ] + [ nl-1 ](see the rule of the Stirling number in ( I) and <cit.>). 
By using the equivalence {l+1k } = {lk-1 } + k { lk } and [ n0 ]=0, the right-hand side of (<ref>) is simplified to the following equation.(-ka)B_n,k-1(a) + (ka-n)B_n,k(a) = (-1)^k k! ∑_l=0^n ( a{ l+1k } - n { lk}) [ nl ] (-1)^n-l a^l.By dividing (-1)^k k! and adding ∑_l=1^n n[ nl ]{lk }(-1)^n-la^l on both sides, we obtain the equivalence (<ref>).§.§ The structure of the dataset and classification results of all-classWe summarize the structure of 118 datasets used in our experiments and present classification accuracy of the SIC model and π-weighted convex focal loss of all-class. Table <ref> summarizes two-class datasets. For each dataset, we describe the number of instances, the size of training dataset, the size of test dataset, the size of class, and the feature dimension. Additionally, we show imbalancedness, i.e., r_c/r_c T/r_c Te for class-imbalance ratio of combined/training/test dataset and r_sc/r_sc T/r_sc Te for scale-class-imbalance ratio of combined/training/test dataset. Table <ref> summarizes multi-class datasets. For each dataset, we describe the number of instances, the size of training dataset, the size of test dataset, the feature dimension, and the size of class. Additionally, we show imbalancedness, i.e., minimum and maximum of r_c for combined/training dataset: r_c m/r_c M/r_c Tm/r_c TM. Also, minimum and maximum of r_sc for combined/training dataset: r_sc m/r_sc M/r_sc Tm/r_sc TM.Figure <ref> presents classification accuracy matrix and the corresponding histogram of all-class for 20×20 SIC models and 19×8 convex focal losses. The test classification accuracy of all SIC models ranges between 76.88% and 78.56%. On the other hand, the test classification accuracy of all convex focal losses ranges between 71.04% and 78.39%. § ACKNOWLEDGMENTSH. Woo is supported by Logitron X.30fernandez18 A. Fernández, S. García, M. Galar, R. C. Prati, B. Krawczyk, and F. Herrera, Learning from imbalanced datasets, Springer-Verlag, 2018.johnson19 J. M. Johnson and T. M. Khoshgoftaar,“Survey on deep learning with class imbalance",Journal of Big Data, vol. 6, pp. 1-54., 2019.oksuz21 K. Oksuz, B. C. Cam, S. Kalkan, and E. Akbas,“Imbalance problems in object detection: a review",IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, pp. 3388-3415, 2021. bach06 F. R. Bach, D. Heckerman, and E. Horvitz,“Considering cost asymmetry in learning classifiers,",Journal of Machine Learning Research, vol. 7, pp. 1713-1741, 2006.he09 H. He and E. A. Garcia,“Learning from imbalanced data",IEEE Transactions on Knowledge and Data Engineering, vol. 21, pp. 1263-1284, 2009. garcia15 S. Garcia, J. Luengo, and F. Herrera, Data preprocessing in data mining. Springer-Verlag, 2015.ioffe15 S. Ioffe and C. Szegedy,“Batch normalization: Accelerating deep network training by reducing internal covariate shift",arXiv:1502.03167v3, 2015.delgado14 M. F.-Delgado, E. Cernadas, S. Barro, and D. Amorim,“Do we need hundreds of classifiers to solve real world classification problems?",Journal of Machine Learning Research, vol. 15, pp. 3133-3181, 2014.lin20 T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár,“Focal loss for dense object detection",IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, pp. 318-327, 2020.masnadi08 H. Masnadi-Shirazi and N. Vasconcelos,“On the design of loss functions for classification: theory, robustness to outliers, and savageboost",Advances in Neural Information Processing Systems 21, 2008.reid11 M. D. Reid and R. C. 
Williamson,“Information, divergence and risk for binary experiments",Journal of Machine Learning Research, vol. 12, pp. 731-817, 2011.martins16 A. F. T. Martins and R. F. Astudillo,“From softmax to sparsemax: A sparse model of attention and multi-label classification",Proceedings of the 33th International Conference on Machine Learning, 2016.ollivier15 Y. Ollivier,“Riemannian metrics for neural networks I: feedforward networks",Information and Inference: A Journal of the IMA, vol. 4. pp. 108-153, 2015.brebisson16 A. de Brébisson and P. Vincent,“An exploration of softmax alternatives belonging to the spherical loss family",arXiv:1511.05042v3, 2016.woo19a H. Woo,“Logitron: Perceptron-augmented classification model based on an extended logistic loss function",arXiv:1904.02958v1, 2019.dubey22 S. R. Dubey, S. K. Singh, and B. B. Chaudhuri,“Activation functions in deep learning: A comprehensive survey and benchmark",arXiv:2109.14545v3, 2022.sigmoidwiki Wikipedia - sigmoid function<https://en.wikipedia.org/wiki/Sigmoid_function>.murphy12 K. P. Murphy, Machine Learning, MIT Press, 2012.mccullagh89 P. McCullagh and J. A. Nelder, Generalized Linear Models, Second Edition, Chapman & Hall/CRC, 1989.wedderburn74 R. Wedderburn,“Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method”,Biometrika, vol. 61, pp. 439-447, 1974.woo19c H. Woo,“Bregman-divergence-guided Legendre exponential dispersion model with finite cumulants (k-LED)",arXiv: 1910.03025v1, 2019.liao18 J. Liao, O. Kosut, L. Sankar, F. du Pin Calmon,“Tunable measures for information leakage and applications to privacy-utility tradeoffs",IEEE Transactions on Information Theory, vol. 65, pp. 8043-8066, 2019.sypherd19 T. Sypherd, M. Diaz, J. K. Cava, G. Dasarathy, P. Kairouz, and L. Sankar,“A tunable loss function for robust classification: calibration, landscape, and generalization",IEEE Transactions on Information Theory, vol. 68, pp. 6021-6051, 2022.lin02 Y. Lin,“Support Vector Machines and the Bayes rule in classification",Data Mining and Knowledge Discovery, vol. 6, pp. 259-275, 2002.janocha17 K. Janocha and W. M. Czarnecki,“On loss functions for deep neural networks in classification",arXiv:1702.05659v1, 2017.fan08 R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, "LIBLINEAR: A library for large linear classification",Journal of Machine Learning Research, vol. 9, pp. 1871-1874, 2008.nocedal06 J. Nocedal and S. J. Wright, Numerical Optimization, Second Edition, Springer-Verlag, 2006.schmidt05 M. Schmidt, minFunc: unconstrained differentiable multivariate optimization in Matlab, <http://www.cs.ubc.ca/ schmidtm/Software/minFunc.html>, 2005.mutschler20 M. Mutschler and A. Zell,“Parabolic approximation line search for DNNs",Advances in Neural Information Processing Systems 33,2020.hager06 W. W. Hager and H. Zhang,“Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent", ACM Transactions on Mathematical Software, vol. 32, pp. 113-137, 2006.hager05 W. W. Hager and H. Zhang,“A new conjugate gradient method with guaranteed descent and an efficient line search", SIAM Journal on Optimization, vol. 16, pp. 170-192, 2005.galli22 L. Galli and C.-J. Lin,“A study on truncated Newton methods for linear classification",IEEE Transactions on Neural Networks and Learning Systems, vol. 33, pp. 2828-2841, 2022.kurgan01 L. A. Kurgan, K. J. Cios, R. Tadeusiewicz, M. Ogiela, L. Goodenday,“Knowledge discovery approach to automated cardiac SPECT diagnosis,",Artificial Intelligence in Medicine, vol. 
23, pp. 149-169, 2001. tsanas12 A. Tsanas and A. Xifara,“Accurate quantitative estimation of energy performance of residential building using statistical machine learning tools,",Energy Buildings, vol. 49, pp. 560-567, 2012. woo19b H. Woo,“The Bregman-Tweedie classification model",arXiv: 1907.06923v1, 2019.woo17 H. Woo,“A characterization of the domain of Beta-divergence and its connection to Bregman variational model",Entropy, vol. 19, 482, 2017.jorgensen97 B. Jorgensen, The Theory of Dispersion Models, Chapman & Hall, 1997.amari16 S. Amari, Information geometry and its applications, Springer, 2016.ding10 N. Ding and S.V.N. Vishwanathan,”t-logistic regression",Advances in Neural Information Processing Systems 23, 2010.rockafellar70 R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, 1970.chang11 C.-C. Chang and C.-J. Lin,“LIBSVM : a library for support vector machines",ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1-27:27, 2011.graham94R. Graham, D. Knuth, and O. Patashnik, Concrete Mathematics: A foundation for computer science, Second Edition, Addison-Wesley, 1994.lin07 H.-T. Lin, C.-J. Lin, R. C. Weng,“A note on Platt's probabilistic outputs for support vector machines",Mach. Learn., 68 (2007), pp. 267-276. platt99 J. C. Platt,“Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", in Advances in Large Margin Classifiers, A.J. Smola, P. Bartlett, B. Schölkopf, D, Schuurmans eds, MIT Press., 1999.guo17C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger,“On calibration of modern neural networks",Proceedings of the 34th International Conference on Machine Learning, 2017.vaswani19x S. Vaswani, F. Bach, M. Schmidt, "Fast and faster convergence of SGD for over-parameterized models (and an accelerated Perceptron)",Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019.more94 J. J. Moré and D. J. Thuente,“Line search algorithm with guaranteed sufficient decrease",ACM Transactions on Mathematical Software vol. 20, pp. 286-307, 1994.hiriart-urruty96 J.-B. Hiriart-Urruty and C. Lemarechal, Convex Analysis and Minimization Algorithms I. Springer-Verlag, 1996.ucidata M. Kelly, R. Longjohn, and K. Nottingham, The UCI Machine Learning Repository, <https://archive.ics.uci.edu>wainberg16 M. Wainberg, B. Alipanahi, and B. J. Frey,“Are random forests truly the best classifiers?",Journal of Machine Learning Research, vol. 17, pp. 1-5, 2016.
http://arxiv.org/abs/2312.16043v1
{ "authors": [ "Hyenkyun Woo" ], "categories": [ "cs.LG", "cs.AI", "cs.CV", "cs.NE", "stat.ML" ], "primary_category": "cs.LG", "published": "20231226131417", "title": "An extended asymmetric sigmoid with Perceptron (SIGTRON) for imbalanced linear classification" }
Brunnian planar braids and simplicial groups Mahender Singh============================================ Inverse reinforcement learning (IRL) usually assumes the model of the reward function is pre-specified and estimates the parameter only. However, how to determine a proper reward model is nontrivial. A simplistic model is less likely to contain the real reward function, while a model with high complexity leads to substantial computation cost and risks overfitting. This paper addresses this trade-off in IRL model selection by introducing the structural risk minimization (SRM) method from statistical learning. SRM selects an optimal reward function class from a hypothesis set minimizing both estimation error and model complexity. To formulate an SRM scheme for IRL, we estimate policy gradient by demonstration serving as empirical risk and establish the upper bound of Rademacher complexity of hypothesis classes as model penalty. The learning guarantee is further presented. In particular, we provide explicit SRM for the common linear weighted sum setting in IRL. Simulations demonstrate the performance and efficiency of our scheme.Structural Risk Minimization, Model Selection, Inverse Reinforcement Learning § INTRODUCTIONThe concept of learning from demonstration (LfD) has received substantial attention in recent years <cit.>. LfD makes robots programming possible for non-experts and has been applied to various fields including autonomous driving <cit.>, manufacturing <cit.> and human-robot interaction <cit.>. Given the demonstration sampled from expert's outputs, one mainstream category of LfD algorithms is to learn the policy directly from the observations to action <cit.>, known as end-to-end learning. However, these methods usually require large amount of data and do not generalize well. Another class named inverse reinforcement learning (IRL) conducts a two-stage algorithm, which is to infer the reward function first and then solve the forward problem with the learned objective to obtain the target policy <cit.>. Since the objective is learned, IRL is able to imitate expert's policy as the environment and initial state change. Extensive effort went into solving the IRL problem <cit.>. These IRL algorithms presume a predefined model for the reward function, which is usually described by a feature-based linear function r(s,a)=∑_i=1^q ω_i^T ϕ_i(s,a) = ω^T ϕ(s,a), and only estimate the parameter ω.However, selecting a proper model (e.g. features ϕ_i in linear weighted sum setting) in IRL is a nontrivial problem. Notice that when the chosen function class is too simple, the real reward function may not be contained in this class, resulting in a substantial disparity between the learned policy and the authentic one. Conversely, when we search in a complex and rich function class, we may find an function minimizing the determined criterion on the given demonstration, but the computational cost and the generalization error could be large. Hence, a critical trade-off emerges concerning the complexity of the reward function model. The model selection problem is first proposed in statistical learning <cit.>. In a classification task, the choice of the hypothesis function class can be determined through a trade-off between the estimation error and approximation error. The estimation error is described by the empirical risk while the approximation error can be bounded with the Rademacher complexity <cit.>. 
Based on this, the structural risk minimization (SRM) problem is to find the optimal function class that minimizes the both of these two error terms. Recently, SRM scheme has been applied to solve traditional control problems. <cit.> estimates the number of modes in a switched system with SRM. <cit.> tackles SRM problems for nonlinear system identification over hierarchies of model class including norm-constrained reproducing kernel Hilbert space and neural network.Nevertheless, to the best of our knowledge, few studies have considered the function model selection in the IRL problem.Motivated by above discussion, we propose an SRM scheme for IRL with unknown reward model. (i) Given a series of hypothesis function classes, we leverage the policy gradient calculated through expert's demonstration samples as the empirical risk. We establish the upper bound on the Rademacher complexity of this gradient-based risk for each class as the model penalty. (ii) By minimizing the combination of the risk and the complexity bound, we determine the optimal model for the reward function achieving the trade-off, and learn its corresponding parameters. Additionally, we provide the union bound and the SRM learning guarantee of the solution. (iii) Particularly, we present the explicit complexity bound and algorithm in response to the common linearly weighted sum setting. (iv) Numerical simulations on a linear quadratic regulator (LQR) control have been conducted to show the efficiency and performance of our SRM scheme.The remainder of the paper is organized as follows. Section <ref> describes the problem of interest and introduces related preliminaries. Section <ref> presents the SRM scheme design for IRL with unknwon reward model. Numerical experiments and simulation results are shown in Section <ref>, followed by the conclusion in Section <ref>.§ PRELIMINARIES AND PROBLEM OF INTEREST §.§ IRL with Unknown Reward Model Consider a Markov decision process (MDP) defined by a tuple (𝒮, 𝒜, p, γ, r), where 𝒮∈ℝ^|𝒮| is the state space, 𝒜∈ℝ^|𝒜| is the action space. The environment dynamics are characterized by the state transition model p, where p(s'|s,a) ∈ [0,1] denotes the probability of the transition from state s to s' under action a. γ∈ [0,1) is the discount factor and r : (s,a) ↦ℝ is the reward function.The goal of IRL is to identify the objective function r based on a demonstration 𝒯 generated by an expert following the optimal policy π: 𝒮𝒫(𝒜).Different fromprior IRL studies where the reward function model is pre-specified, this paper considers a more practical scenario, where the model and parameters both are unknown.The problem is described as follows.Suppose we have a demonstration 𝒯 and a countable set of hypothesis reward function classes {ℱ_j}_j=1^C. IRL with unknown reward model is to select an optimal class index j^* and identify the optimal reward function r^* ∈ℱ_j^*. However, properly defining the optimality in Problem <ref> is nontrivial. In a rich function class, we are more likely to find an optimal function that minimizes a given criterion, while searching in such a complex set leads to large computation cost and poor generalizability. Therefore, to obtain an optimal model balancing this trade-off, we introduce the SRM scheme in statistical learning into our problem. In order to establish the empirical risk and measure the model complexity, we first provide the following two preliminaries. 
§.§ Policy Gradient Minimization According to the average reward formulation, the objective function of an RL task is written asJ(π,r)= ∫_𝒮 d^π(s) ∫_𝒜π(a;s,θ) r(s,a) da ds,where d^π(s) is the stationary distribution of state s under policy π <cit.>. In policy gradient approaches <cit.>, the policy is required to be stochastic, which can be achieved by adding zero-mean Gaussian noise, or we need to obtain the transition model p. Assuming π is differentiable with respect to its parameter θ, we can calculate the gradient of the objective function with respect to θ as∂ J(π,r)/∂θ = ∫_𝒮 d^π(s) ∫_𝒜∂π(a;s,θ)/∂θ Q^π(s,a) da ds,where Q^π(s,a) = 𝔼{∑_k=1^∞γ^k-1 r_t+k | s_t=s,a_t=a, π}.For the gradient in (<ref>), we have the following proposition.If the expert policy π^* is the optimal policy with respect to the designed reward r(s,a), the gradient ∂ J(π^*,r)/∂θ will equal to 0, which means π^* is a stationary point of J(π,ω).§.§ Rademacher complexityRademacher complexitymeasures the capacity of a hypothesis class of real-valued functions. Denote 𝒳, 𝒴 as the input and output spaces in a regression problem. ℱ is a hypothesis function class, where f:𝒳𝒴, f∈ℱ. For an arbitrary loss function class ℒ^reg associated with ℱ mapping from 𝒵 = 𝒳×𝒴 to ℝ, we haveℒ^reg={l^reg(z): (x,y) ↦ l^reg(f(x),y), z:=(x,y), f∈ℱ}.We then provide the following definition. Given ℒ^reg as a function class mapping from 𝒵 to ℝ, and S={z_i}_i=1^m as a sequence of m samples from 𝒵, the empirical Rademacher complexity of ℒ^reg with respect to S is defined asℜ̂_S(ℒ^reg) = 𝔼_σ[ sup_l^reg∈ℒ^reg1/m∑_i=1^m σ_i l^reg(z_i) ],where σ = (σ_i)_i=1^m are independent uniform random variables distributed in {-1,1}. σ_i is called Rademacher variable. The Rademacher complexity of ℒ^reg is ℜ_m(ℒ^reg) = 𝔼_Sℜ̂_S(ℒ^reg). The Rademacher complexity provides an upper bound on the difference between the empirical risk and the expected risk. The generalization bound is presented as follows.Let ℒ^reg be a function class mapping from 𝒵 to [0,B] and S={z_i}_i=1^m be a sequence of m i.i.d. samples from 𝒵. Then for any δ>0, with at least 1-δ probability𝔼[l^reg(z)] ≤1/m∑_i=1^m l^reg(z_i) + 2 ℜ̂_S(ℒ^reg) + 3 B √(log2/δ/2m)holds for all l^reg∈ℒ^reg.Noticing Theorem <ref> requires ℒ^reg to be bounded, we define the clipped version f̅ of a function f asf̅(x) = {[ f(x) ·B/‖ f(x) ‖_2,   ‖ f(x) ‖_2>B;f(x), ]. . § SRM SCHEME FOR IRL WITH UNKNOWN REWARD MODELBased on preliminaries introduced in the previous section, we now derive the SRM method for Problem <ref>. We will first build the empirical risk minimization (ERM) based on the policy gradient and the upper bound of model complexity with Rademacher complexity, then we provide the SRM scheme. SRM consists of selecting an optimal reward function class index j^* ⩾ 1 and the ERM hypothesis r^* ∈ℱ_j^* which minimizes both estimation error and the model complexity penalty. §.§ ERM-IRL based on Policy Gradient Suppose there is an expert following the optimal policy π with respect to a set reward function r. We have observed a trajectory τ = (s_0,a_0,s_1,…,s_T), where s_0 ∼𝒟 is the initial state generated from a distribution 𝒟 on space 𝒮. Then the objective gradient (<ref>) with π,r can be estimated through some existing methods such as REINFORCE <cit.>, GPOMDP <cit.> and eNAC <cit.>. 
For the simplicity of the calculation and formulation, we choose REINFORCE here and the gradient is calculated as∇_θĴ_τ(s_0,r) = ∑_t=1^T∇_θπ_a_t,s_t(s_0) ∑_k=t^Tγ^k-t r(s_k(s_0),a_k(s_0)),where ∇_θπ_a_t,s_t = ∂π(a_t;s_t,θ)/∂θ is for notational brevity. Since the policy π is fix, (s_t,a_t) in trajectory τ depend on s_0, thus ∇_θπ_a_t,s_t and the gradient ∇_θĴ_τ are all functions with respect to s_0.When the sampling on 𝒟 is infinite and stochastic, the estimated gradient (<ref>) goes to zero according to Proposition <ref>. Therefore, for an IRL problem, the real reward function r can be identified by minimizing the following criterionϵ_𝒟(r) := 𝔼_s_0 ∼𝒟 l( ∇_θĴ_τ(s_0,r)),where l(·) is the loss function on the gradient ∇_θĴ_τ satisfying Assumption <ref>. The loss function l(·) is Lipschitz continuous with l(0) = 0.However, in the realistic situation, the distribution 𝒟 is unknown and we can only observe a set of M trajectories 𝒯={τ^i}_i=1^M with a set {s_0^i}_i=1^M of initial states. Thus, the empirical risk for IRL problem with the given demonstration is described byϵ̂_𝒯 (r) := 1/M∑_i=1^M l(∇_θĴ_τ^i(s_0^i,r)).[ERM-IRL problem] Given an observed demonstration 𝒯 and a function class ℱ, the ERM hypothesis reward function is obtained throughr^ERM_𝒯 = min_r ∈ℱϵ̂_𝒯 (r). During the gradient estimation, we assume the expert policy π(a;s,θ), including the function model and parameter, is known. If θ is unknown, it can be estimated through a maximum likelihood problem based on 𝒯.§.§ General SRM Scheme for IRL For a reward function class ℱ and a trajectory sample τ with s_0, a risk function class is defined as ℒ^irl = {l^irl:s_0 ↦ l(∇_θĴ_τ(s_0,r)), r∈ℱ} based on the established empirical risk. We now present the following lemma to measure its model complexity with Rademacher complexity.Given a demonstration 𝒯 with a set {s_0^i}_i=1^M of initial states and the reward function class ℱ, the Rademacher complexity of ℒ^irl is bounded byℜ̂_𝒯(ℒ^irl) ⩽ L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t ·ℜ̂_𝒯_k(ℱ) := R_𝒯(ℱ),where 𝒯_k = {(s^i_k,a^i_k)}_i=1^M,k=1,…,T and ∇_θπ_t = max_i‖∇_θπ^i_a_t,s_t‖. L is the Lipschitz constant of loss function l(·). Then, we define the SRM problem with empirical risk and the upper bound of ℜ̂_𝒯(ℒ^irl).[SRM-IRL problem] Given an observed demonstration 𝒯 and a series of hypothesis reward function class {ℱ_j}_j=1^C, the optimal SRM solution for IRL is defined asJ_𝒯(r,j) := ϵ̂_𝒯 (r) +2 R_𝒯(ℱ_j), and   r_𝒯^SRM = min_1⩽ j ⩽ C, r∈ℱ_j J_𝒯(r,j). Given an observed demonstration 𝒯 generated from 𝒟 and a set of hypothesis reward function classes {ℱ_j}_j=1^C, for any r ∈ℱ_j, we define the clipped version r̅ and ‖r̅‖⩽ B. Then, for δ∈ (0,1], we provide the following error bounds in IRL model selection.(i) Union bound: For all r̅∈ℱ̅_j, 1⩽ j ⩽ C, there is at least 1-δ probability such that |ϵ_𝒟(r̅) - ϵ̂_𝒯(r̅)| ⩽ 2 R_𝒯(ℱ̅_j) + 3 LB ∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ_a_t,s_t‖√(log4/δ/2M).(ii) SRM learning bound: For the SRM solution defined by (<ref>), with the probability 1-δ,ϵ_𝒟(r̅_𝒯^SRM) ⩽ min_r ∈ℱ( ϵ_𝒟(r̅) + 4 R_𝒯(ℱ̅_j(r)) )+ 3 LB ∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ_a_t,s_t‖√(log4(C+1)/δ/2M)holds, where j(r) refers to the index of a hypothesis function class that r ∈ℱ_j(r). The general algorithm is shown in Algorithm <ref>. 
SRM Algorithm for IRL with Unknown Reward Model The demonstration from expert, 𝒯; The hypothesis function classes, {ℱ_j}_j=1^C;The optimal class index j^*; The reward function identification r_𝒯^SRM∈ℱ_j^*; j=1 to CSolve ERM problem (<ref>) to obtain the optimal estimation r_j^* in function class ℱ_j;Calculate the structural risk J_𝒯^SRM(r_j^*,j) in (<ref>); Select the minimum risk and its corresponding class number j^* = min_1⩽ j ⩽ C J_𝒯^SRM(r_j^*,j);return The optimal reward function estimation r_𝒯^SRM = r^*_j^*. §.§ Linear Weighted Sum CaseIn this subsection, we treat the special linearly weighted feature-based sum reward function described as:r(s,a;ω)=∑_p=1^q ω_p^T ϕ_p(s,a),which is parameterized by ω =(ω^T_1, …, ω^T_q)^T with features ϕ_p mapping from 𝒮×𝒜 to ℝ^n_p. This is a quite general setting. Taking the classic LQR problem as an example, suppose the reward for (s,a) pair is r = - ∑_p=1^q (s^T Q_p s + a^T R_p a). Then, write it into the feature-based sum form. We have ω_p = (vec(Q)^T, vec(R)^T) and feature ϕ_p(s,a)=((s ⊗ s)^T, (a ⊗ a)^T)^T. Traditional IRL is to identify the parameter ω with the given demonstration, assuming the features {ϕ_p}_p=1^q are known. We now utilize our proposed SRM scheme to re-analyze the IRL under this linear combination setting with unknown features. Supposing C hypothesis reward function classes {ℱ_j}_j=1^C with different sets of features {ϕ_p^j}_p=1^q, we are going to obtain the optimal function class ℱ_j^* and its corresponding optimal parameter estimation ω̂^*. With this specific reward function form, we provide the following lemma establishing the explicit bound of Rademacher complexity bound for (<ref>). For the linear weighted sum case, assuming the parameter ‖ω_p ‖⩽ B_ω for all p and Φ_p(k) is the bound for ‖ϕ_p(s,a) ‖ on dataset 𝒯_k, k=1,…,T, then we haveℜ̂_𝒯_k(ℱ) ⩽B_ω/√(M)∑_p=1^q Φ_p(k). Note that when considering different function class ℱ_j, there exist different minimum upper bounds B^j_ω for ‖ω^j_p ‖. The following remark provides a uniform bound for all j=1,…, C. When considering the linear weighted form reward, the expected policy gradient equation ϵ_𝒟(r) = 0 has a scalar ambiguity property, which suggests that ω and ω' = αω, α∈ℝ_+ both satisfy the equation. To eliminate this ambiguity and avoid the trivial zero solution during the ERM, we add a unit simplex constraint {ω⩾ 0,‖ω‖_1 = 1}. Coincidentally, this constraint makes ω_p and reward r bounded, which just suits the Rademacher complexity definition. Therefore, in this case we do not need to do the function clipping in (<ref>) and a uniform bound for all function classe {ℱ_j}_j=1^C is ℜ̂_𝒯_k(ℱ_j) ⩽1 /√(M)∑_p=1^q Φ^j_p(k). Based on Lemma <ref> and the constraints in Remark <ref>, the SRM-IRL problem for linear weighted sum reward is described asr_𝒯^SRM := min_1⩽ j ⩽ C, r∈ℱ_j( ϵ̂_𝒯 (r) +2 L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t ·∑_p=1^q_jΦ_p(k)/√(M)).Combined with Theorem <ref>, we provide the SRM learning bound in linear weighted sum case. For the SRM solution defined by (<ref>), with at least 1-δ probability,ϵ_𝒟(r_𝒯^SRM) ⩽ min_r ∈ℱ( ϵ_𝒟(r) + 4 L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t ·1 /√(M)∑_p=1^q Φ^j(r)_p(k) ) + 3 LB ∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ_a_t,s_t‖√(log4(C+1)/δ/2M)holds, where j(r) refers to the index of a hypothesis function class that r ∈ℱ_j(r).§ SIMULATION RESULTSIn this section we conduct multiple simulations to illustrate the performance of the proposed SRM scheme. 
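Before describing the LQR test case, it is convenient to record how the algorithm listed above specializes in code for the linear weighted sum setting of the previous subsection; the simulations below evaluate this selection rule. The sketch assumes scalar-valued features and the 2-norm loss used in the experiments, estimates the per-trajectory policy gradient with the REINFORCE expression above, solves the simplex-constrained ERM step with an off-the-shelf SLSQP solver, and adds the complexity penalty from the linear-case bound above. It is an illustrative reconstruction with our own function names and simplified index conventions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def srm_irl_linear(trajs, grad_pi, feature_sets, gamma=0.9, lipschitz=1.0):
    """Sketch of the SRM selection loop for linearly weighted rewards.

    trajs        : list of expert trajectories, each a list of (s, a) pairs
    grad_pi(s,a) : gradient of the known expert policy pi(a; s, theta) w.r.t. theta
    feature_sets : hypothesis classes; class j is a list of scalar features phi_p(s, a)
    Returns the selected class index j* and the learned simplex weights omega*.
    """
    M = len(trajs)
    T = min(len(tau) for tau in trajs)                       # common horizon, for simplicity
    disc = np.triu(gamma ** (np.arange(T)[None, :] - np.arange(T)[:, None]))  # gamma^(k-t), k >= t

    scored = []
    for feats in feature_sets:
        q = len(feats)
        G, grad_norm, phi_bound = [], np.zeros(T), np.zeros((q, T))
        for tau in trajs:
            gp = np.array([grad_pi(s, a) for (s, a) in tau[:T]])                  # (T, dim_theta)
            ph = np.array([[phi(s, a) for (s, a) in tau[:T]] for phi in feats])   # (q, T)
            grad_norm = np.maximum(grad_norm, np.linalg.norm(gp, axis=1))         # max_i ||grad pi||
            phi_bound = np.maximum(phi_bound, np.abs(ph))                         # Phi_p(k) on T_k
            # per-trajectory policy gradient, kept separate per feature p (linear in omega)
            G.append(np.einsum('td,pk,tk->pd', gp, ph, disc))
        G = np.array(G)                                                           # (M, q, dim_theta)

        def emp_risk(w):      # empirical risk: average 2-norm of the estimated policy gradient
            return np.mean(np.linalg.norm(np.einsum('p,ipd->id', w, G), axis=1))

        w0 = np.full(q, 1.0 / q)
        sol = minimize(emp_risk, w0, method="SLSQP", bounds=[(0.0, 1.0)] * q,
                       constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])

        # penalty: 2 L sum_{t} sum_{k>=t} gamma^(k-t) grad_norm_t * sum_p Phi_p(k) / sqrt(M)
        penalty = 2.0 * lipschitz * np.sum(
            grad_norm[:, None] * disc * phi_bound.sum(axis=0)[None, :]) / np.sqrt(M)
        scored.append((emp_risk(sol.x) + penalty, sol.x))

    j_star = int(np.argmin([s for s, _ in scored]))
    return j_star, scored[j_star][1]
```

For the LQR experiment below, feature_sets would contain the nested classes ℱ_1 ⊂ ⋯ ⊂ ℱ_5 built from the quadratic features s^T Q_p s + a^T R_p a.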
Consider an LQR control problem which is described ass_t+1 = A s_t + B a_t, a_t ∼𝒩(k s_t, Σ),  t=0,1,…,T-1, r_t =-∑_p=1^q ω^T_p ϕ_p(s_t,a_t) = -∑_p=1^q ω_p (s_t^T Q_p s_t + a_t^T R_p a_t),where s_t,a_t ∈ℝ^n, A,B are dynamics matrices satisfying the condition of controllability and Q_p,R_p are positive definite matrices. s_t,a_t are bounded by box constraints [-5 ⊗1_n,5 ⊗1_n] elementwise for all t. Let T=50 and discount γ = 0.9. We apply a Gaussian policy with fixed covariance matrix Σ = 0.5I. For the data collection, we first train the policy through REINFORCE with baseline as shown in Fig. <ref>. We collect M trajectories after 60 episodes with random initial states generated from a uniform distribution. Using these data as demonstration 𝒯, ERM-IRL is to estimate the parameter ω = (ω_1, …, ω_q). Set q=3,n=4 and the real parameter ω_1=1/6, ω_2=1/3, ω_3=1/2. The estimation result with respect to the dataset size (trajectory number) is illustrated in Fig. <ref>. As the size of the dataset gets larger, the estimation error ‖ω̂ - ω‖ decreases.To demonstrate the proposed SRM scheme, we define C=5 hypothesis function classes ℱ_j={r:(s,a) ↦ -∑_p=1^j ω_p (s^T Q_p s + a^T R_p a)}, j=1,…,C. Matrices {Q_p,R_p}_p=1^3 are same as those in (<ref>). If we design Q_p,R_p,p=1,…,q so that (vec(Q_p)^T, vec(R_p)^T)^T is linear independent of each other, the classes {ℱ_j}_j=1^C will be nested, i.e. ℱ_j ⊂ℱ_j+1, and we have ℜ_M(ℱ_j) < ℜ_M(ℱ_j+1) according to the monotonicity property of Rademacher complexity. The loss function l(·) is set to be the 2-norm of the gradient. We use M=1000 trajectories as the demonstration 𝒯 to run the explicit optimization problem (<ref>). Fig. <ref> illustrates the mean results of 50 experiments. The abscissa denotes the index of the function class j from 1 to 5. We can find that as the model becomes complex, the empirical risk ϵ̂_𝒯(r) decreases and after j=3, the real function model is contained in ℱ_j, thus the empirical risk changes little. Aligning with our intuitive, the penalty term for model complexity goes larger with j. Therefore, through adding these two terms together we obtain the optimal function class j^*=3 minimizing the structural risk in Fig. <ref>, which is consistent with q=3. Since the SRM is data-dependent, different results may occur when noise exists. Fig. <ref> shows the statistics of 50 trials and j^*=1,2 occurs with low probability. Notice that for a fixed function class, when the dataset size M goes to infinity, the empirical risk will converge to the true error ϵ_𝒟(r), while the penalty term for model complexity will decrease as a speed of √(M). This indicates different choices of M leads to different optimal solutions. When M is relatively small, the model is prone to overfitting. At the same time, the penalty term returns a high value, resulting in the SRM minimum being achieved with a simpler function class (small j), which effectively reduces the risk of overfitting. When the dataset is large, it is less likely to overfit, then our scheme will obtain more complex function class.§ CONCLUSIONIn this paper, an SRM scheme is provided for the model selection in IRL. For a series of hypothesis reward function classes, we utilize the policy gradient as the empirical risk and the Rademacher complexity upper bound as the model penalty. 
Through minimizing the sum of these two term, we obtain the optimal reward identification, achieving a trade-off between the estimation error and generalization ability.Note that although we only analyze the linear weighted form in the simulation, this SRM scheme can also handle nonlinear hypothesis functions, including the kernel based and neural network representation, as long as we establish their Rademacher complexity. § PROOF OF LEMMA <REF>According to Definition <ref> and (<ref>), we have the empirical Rademacher complexity ℜ̂_𝒯(ℒ^irl) as 𝔼_σ[ sup_l^irl∈ℒ^irl1/M∑_i=1^M σ_i l^irl(s_0^i) ]=𝔼_σ[ sup_r∈ℱ1/M∑_i=1^M σ_i l (∑_t=1^T∇_θπ^i_a_t,s_t∑_k=t^Tγ^k-t r(s_k(s_0^i),a_k(s_0^i)) ) ]⩽ L 𝔼_σ[ sup_r∈ℱ1/M∑_i=1^M σ_i ∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ^i_a_t,s_t‖ r(s_k(s_0^i),a_k(s_0^i)) ] ⩽ L ∑_t=1^T∑_k=t^Tγ^k-t𝔼_σ[ sup_r∈ℱ1/M∑_i=1^M ‖∇_θπ^i_a_t,s_t‖σ_i r(s_k(s_0^i),a_k(s_0^i)) ]⩽ L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t 𝔼_σ[ sup_r∈ℱ1/M∑_i=1^Mσ_i r(s_k(s_0^i),a_k(s_0^i)) ] =L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t ·ℜ̂_𝒯_k(ℱ),where 𝒯_k = {(s^i_k,a^i_k)}_i=1^M and ∇_θπ_t = max_i‖∇_θπ^i_a_t,s_t‖.§ PROOF OF THEOREM <REF>(i) Notice that the Rademacher complexity is defined on the bounded function class. We have the clipped version reward function ‖r̅‖⩽ B. Since the loss function l(·) is a Lipschitz function with the Lipschitz constant L and l(0)=0, we derive the bound on ℒ̅^irl as | l (∑_t=1^T∇_θπ_a_t,s_t∑_k=t^Tγ^k-tr̅(s_k,a_k)) )| ⩽ L ‖∑_t=1^T∇_θπ_a_t,s_t∑_k=t^Tγ^k-tr̅(s_k,a_k)) ‖⩽ L∑_t=1^T∑_k=t^T‖∇_θπ_a_t,s_tγ^k-t‖·‖r̅(s_k,a_k)‖⩽ LB ∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ_a_t,s_t‖ =: B_ℒ̅. Therefore, combining Theorem <ref> and the bound on ℒ̅^irl_j, for any r ∈ℱ_j, we have |ϵ_𝒟(r̅) - ϵ̂_𝒯(r̅)| ⩽2 ℜ̂_𝒯(ℒ̅^irl_j) + 3 B_ℒ̅√(log4/δ/2M)⩽ 2L ∑_t=1^T∑_k=t^Tγ^k-t∇_θπ_t ·ℜ̂_𝒯_k(ℱ̅_j) + 3 LB (∑_t=1^T∑_k=t^Tγ^k-t‖∇_θπ_a_t,s_t‖) √(log4/δ/2M)= L ∑_t=1^T∑_k=t^Tγ^k-t(2 ∇_θπ_tℜ̂_𝒯_k(ℱ̅_j) + 3B ‖∇_θπ_a_t,s_t‖√(log4/δ/2M)).(ii) To derive the SRM learning bound, we consider P(X_1+X_2 >ε) ⩽P(X_1 > ε/2) + P(X_2 > ε/2). Then we have P(ϵ_𝒟(r̅_𝒯^SRM) - ϵ_𝒟(r̅) - 4 ℜ̂_𝒯(ℒ̅^irl_j(r)) > ε) ⩽P(ϵ_𝒟(r̅_𝒯^SRM) - ϵ_𝒯(r̅_𝒯^SRM) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r_𝒯^SRM)) > ε/2)+ P(ϵ_𝒯(r̅_𝒯^SRM) +2 ℜ̂_𝒯(ℒ̅^irl_j(r_𝒯^SRM)) - ϵ_𝒟(r̅) -4 ℜ̂_𝒯(ℒ̅^irl_j(r)) > ε/2) ⩽P( sup_r ∈ℱ (ϵ_𝒟(r̅) - ϵ_𝒯(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r))) > ε/2)+ P(ϵ_𝒯(r̅) - ϵ_𝒟(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r)) > ε/2), where P ( sup_r ∈ℱ (ϵ_𝒟(r̅) - ϵ_𝒯(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r))) > ε/2) = P ( sup_1⩽ j ⩽ Csup_r ∈ℱ_j (ϵ_𝒟(r̅) - ϵ_𝒯(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r))) > ε/2) ⩽∑_j=1^C P ( sup_r ∈ℱ_j (ϵ_𝒟(r̅) - ϵ_𝒯(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r))) > ε/2) ⩽ 4C exp{-M ε^2/18 B_ℒ̅^2}, and P (ϵ_𝒯(r̅) - ϵ_𝒟(r̅) - 2 ℜ̂_𝒯(ℒ̅^irl_j(r)) > ε/2) ⩽ 4 exp{-M ε^2/18 B_ℒ̅^2}. Therefore, we derive P(ϵ_𝒟(r̅_𝒯^SRM) - ϵ_𝒟(r̅) - 4 ℜ̂_𝒯(ℒ̅^irl_j(r)) > ε) ⩽ 4 (C+1) exp{-M ε^2/18 B_ℒ̅^2}. Set the right side equal to δ, then (ii) has been proved.§ PROOF OF LEMMA <REF>ℜ̂_𝒯_k(ℱ) = 𝔼_σ[ sup_r∈ℱ1/M∑_i=1^Mσ_i r(s_k(s_0^i),a_k(s_0^i)) ] = 𝔼_σ[ sup_‖ω_p ‖⩽ B_ω1/M∑_i=1^Mσ_i ∑_p=1^q ω^T_p ϕ_p(s_k(s_0^i),a_k(s_0^i)) ] ⩽1/M𝔼_σ[ sup_‖ω_p ‖⩽ B_ω∑_p=1^q ‖ω^T_p ‖·‖∑_i=1^Mσ_i ϕ_p(s_k(s_0^i),a_k(s_0^i))‖]⩽∑_p=1^q B_ω/M𝔼_σ[ ‖∑_i=1^Mσ_i ϕ_p(s_k(s_0^i),a_k(s_0^i))‖]= ∑_p=1^q B_ω/M𝔼_σ[ √(∑_m=1,n=1^Mσ_m σ_n ϕ^T_p(s^m_k,a^m_k) ϕ_p(s^n_k,a^n_k))] ⩽∑_p=1^q B_ω/M√(∑_m=1,n=1^Mϕ^T_p(s^m_k,a^m_k) ϕ_p(s^n_k,a^n_k) 𝔼_σ[ σ_m σ_n])⩽∑_p=1^q B_ω/M√(∑_m=1,n=1^M ϕ^T_p(s^m_k,a^m_k) ϕ_p(s^n_k,a^n_k))⩽∑_p=1^q B_ω/M√( M Φ^2_p (k)) =B_ω/√(M)∑_p=1^q Φ_p(k), where Φ_p(k) is the upper bound for ‖ϕ_p(s,a) ‖ on dataset 𝒯_k.
http://arxiv.org/abs/2312.16566v1
{ "authors": [ "Chendi Qu", "Jianping He", "Xiaoming Duan", "Jiming Chen" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231227132317", "title": "Inverse Reinforcement Learning with Unknown Reward Model based on Structural Risk Minimization" }
Fl RDT based ultimate lowering of the negative spherical perceptron capacity

Mihailo Stojnic [e-mail: [email protected]]

Abstract

We consider the classical spherical perceptrons and study their capacities. The famous zero-threshold case was solved in the sixties of the last century (see, <cit.>) through the high-dimensional combinatorial considerations. The general threshold, κ, case though turned out to be much harder and stayed out of reach for the following several decades. A substantial progress was then made in <cit.> and <cit.> where the positive threshold (κ≥ 0) scenario was finally fully settled. While the negative counterpart (κ≤ 0) remained out of reach, <cit.> did show that the random duality theory (RDT) is still powerful enough to provide excellent upper bounds. Moreover, in <cit.>, a partially lifted RDT variant was considered and it was shown that the upper bounds of <cit.> can be lowered. After recent breakthroughs in studying bilinearly indexed (bli) random processes in <cit.>, fully lifted random duality theory (fl RDT) was developed in <cit.>. We here first show that the negative spherical perceptrons can be fitted into the frame of the fl RDT and then employ the whole fl RDT machinery to characterize the capacity. To be fully practically operational, the fl RDT requires a substantial numerical work. We, however, uncover remarkable closed form analytical relations among key lifting parameters.
Such a discovery enables both shedding a new light on the parametric interconnections within the lifting structure and performing the needed numerical calculations to obtain concrete capacity values. After doing all of that, we also observe that an excellent convergence (with the relative improvement ∼ 0.1%) is achieved already on the third (second non-trivial) level of the stationarized full lifting. Index Terms: Negative spherical perceptrons; Fully lifted random duality theory.§ INTRODUCTIONThe last two decades have seen a remarkable progress in studying various aspects of neural networks (NN) and machine learning (ML). Development of powerful algorithmic techniques and corresponding performance characterizinganalytical toolstogether with persistent widening of the range of potential applications are only a couple of the most important ones. We, here, follow into similar footsteps and continue the analytical progress through a theoretical studying ofperceptrons as key NN/ML building blocks.We are particularly interested in the so-called spherical perceptrons which are easily the most popular and quite likely the simplest of all perceptron variants. Despite the simplicity, their full analytical characterizations in many important scenarios are not easy to obtain. For example, one of their most relevant features, the storage or classifying capacity, is, in general, very difficult to compute. Moreover, designing practical algorithms that can confirm the capacity achievability is often even harder. Some special cases are a bit easier though and relevant results can be found throughout the literature. For example, the so-called zero-threshold capacity was determined through a combinatorial, high-dimensional geometry based, approach in seminal works <cit.> (for relevant geometric followup extensions related topolytopalneighborliness see, e.g., <cit.>).While the results of <cit.> established a monumental breakthrough at the time of their appearance, they remained an isolated example of extraordinary success for the better part of the following several decades. For example, even the simplest possible extension to general positive thresholds turned out to be a formidable challenge. As it became apparent that the mathematically rigorous treatments might be a bit further away than initially predicated, the emergence of the statistical physics replica tools in the seventies of the last century provided a glimmer of hope that at least some (not necessarily mathematically rigorous) analytical characterizations can be obtained. Not long after, in the second half of the eighties, the Gardner's seminal work, <cit.> appeared and paved the way for many of the very best perceptrons' analytical results. Namely, <cit.> and a follow-up <cit.>, utilized the replica theory and established a generic framework that can be used for the analytical characterizations of, basically, all relevant features of interest invarious perceptrons models. Among others, these certainly included the storage capacities in a host of different scenarios: positive/negative thresholds, correlated/uncorrelated patterns, patterns stored incorrectly and many others. The predictions obtained in <cit.> were later on (in identical or similar statistical contexts) established as mathematically fully rigorous (see, e.g., <cit.>). 
In particular, <cit.> proved the predictions of <cit.> related to the storage capacity and the volume of the bond strengths that satisfies the dynamics of the positive spherical perceptrons (i.e., the perceptrons with spherical constraints and positive thresholds κ≥ 0). Talagrand, in <cit.>, reconfirmed these predictions through a related but somewhat different approach. On the other hand, <cit.> designed a completely different, random duality theory (RDT) based,frameworkand again confirmed almost all of the predictions from <cit.>, including many previously not considered in <cit.>.A substantial help in all of these, mathematically rigorous, treatments, was provided by the underlying convexity. §.§ Negative spherical perceptron (NSP) — no convexity help As recognized in<cit.>, the above mentioned convexity help disappears when the spherical perceptrons have a negative threshold (i.e., when κ<0). The underlying deterministic strong duality is not present anymore and obtaining accurate capacity characterizations becomes notoriously hard. Still, the power of the RDT remains useful. In particular, relying on the fundamental principles of the RDT,<cit.> proved the Talagrand's conjecture from <cit.> that the capacity predictions of <cit.> are, at the very least, rigorous upper bounds even when κ<0. <cit.> went a step further, utilized a partially lifted RDT variant and established that, these rigorous bounds can in fact be lowered. This effectively confirmed that the replica symmetry (assumed in <cit.>) must be broken. A series of works based on statistical physics replica approaches then followed (see, e.g., <cit.>). <cit.> was the first one where the NSP was connected to the recently studied jamming phenomena and hard spheres packing problems. It established a preliminary version of the phase diagram and emphasized the relevance of the distribution laws of “gaps” and “forces” and computed their critical exponents. <cit.> then provided a more complete phase diagram characterization with all predicated types of transitioning in both the so-called SAT and UNSAT phases. Moreover, it hypothesized a potential universality in gaps and forces distribution laws exponents.<cit.> studied similar features in the linear cost NSP variant. Again, the critical exponents of the distribution laws were found to match the ones associated with the jamming of the hard spheres. The corresponding algorithmic confirmations were obtained in<cit.>. Algorithmic considerations of a different type were discussed in <cit.>. Relying on the (access to the) Parisi replica symmetry breaking (rsb) variational functional, an iterative message-passing type of procedure is suggested as an algorithmic way of achieving the capacity.On the rigorous front though, the results of <cit.> remained untouchable until now. Moreover, <cit.> showed that the upper bounds of <cit.> are actually (up to the leading order terms) tight in κ→ -∞ regime. As mentioned above, <cit.> also studied many other perceptron properties. It, for example, gave the replica symmetry prediction for the capacities of spherical perceptrons when functioning as erroneous storage memories.<cit.> showed that these predictions of <cit.> are again rigorous upper boundswhich in certain range of system parameters can be lowered. 
This proved that, in the erroneous scenarios, the replica symmetry (assumed in <cit.>) must again be broken. While our primary interest here is in the simplest and possibly most famous spherical perceptrons, various other perceptron variants are of interest. Moreover, many of them that belong to the class of analytically “hard” perceptrons have been intensively studied over the last several decades as well. We here single out probably the most well known: the discrete ± 1 perceptrons. Their symmetric realizations are analytically a bit easier than other variants and the corresponding full capacity characterizations are known to have very particular, relatively simple formulations (see, e.g., <cit.> as well as <cit.>). On the other hand, an initial replica symmetry based treatment of the original nonsymmetric ones was already given in the foundational papers <cit.>, where the underlying hardness was properly recognized. After the rsb based results were obtained in <cit.>, strong mathematical progress followed first in <cit.> and then ultimately in <cit.> as well. We here follow the path of <cit.> and utilize the connection between the so-called random feasibility problems (rfps) (and the spherical perceptrons as their particular instances) on the one side and the random duality theory (RDT) (see, e.g., <cit.>) concepts on the other side. We first recognize the connection between the rfps and bilinearly indexed (bli) random processes and then utilize the strong recent progress in studying these processes in <cit.>. Namely, relying on <cit.>, in <cit.> a fully lifted random duality theory (fl RDT) was established. Utilizing further the fl RDT and its particular stationarized variant (called sfl RDT), we then obtain the desired capacity characterizations. As is usually the case, to have the fl RDT become practically operational, the underlying numerical evaluations need to be conducted. Doing so is a problem on its own and often requires a rather strong effort. Here, however, we discover remarkable closed-form relations between the key lifting parameters. These provide a direct view into a rather beautiful structuring of the intrinsic parametric interconnections and ultimately substantially facilitate the underlying numerical work. Moreover, they eventually enable us to uncover that the obtained capacity characterizations, already on the third level of the full lifting (3-sfl RDT), exhibit an extraordinarily rapid convergence, with a relative improvement ∼ 0.1% for all considered thresholds κ.

§ CONNECTING NSPS TO RFPS AND FREE ENERGIES

As suggested above, we will rely on the fact that studying the NSP properties is tightly connected to studying the properties of feasibility problems. Moreover, studying feasibility problems is then tightly connected to studying statistical physics objects called free energies. Both of these connections were recognized and utilized in a long line of work <cit.>. To capitalize on the existing results and to make the exposition of the main ideas needed here as smooth as possible, we find it convenient to carefully parallel the presentations from these papers. Along the same lines, to avoid an unnecessary repetition of the already introduced concepts, we adopt the practice of briefly recalling them and then focus on highlighting the main differences, novelties, and other particularities related to the problems of our interest here.
§.§ NSP ⟷ rfps connectionAs is well known, the feasibility problems with linear inequalities have the following mathematical formG≥ ∈.In (<ref>), G∈^n× n, ∈̱^m× 1, ∈^n, and α=m/n.<cit.> recognized that the above formulation is directly related to both the main principles of the random duality theory (RDT) and the mathematical description of perceptrons. The perceptron's types, however, can be different and are determined based on matrix G, vector $̱, and set. For example, for={-1/√(n),1/√(n)}^n(i.e., forbeing the corners of then-dimensional unit norm hypercube), one has the so-called binary± 1perceptrons, whereas for=^n(i.e., forbeing then-dimensional unit sphere^n), one has the so-called spherical perceptrons. Both of these perceptron variants allow for generic (variable) values of the components of the threshold vector$̱. When $̱ is a multiple of(column vector of all ones of appropriate dimension), i.e., when=̱κ(whereκ∈), one further obtains perceptrons with fixed thresholdsκ. In particular, for=^nandκ<0one obtains that (<ref>) effectively emulates the so-called negative spherical perceptron (NSP). Moreover, ifGis generic and deterministic, we have a deterministic perceptron. Correspondingly, ifGis random, we have a statistical one. Our main objects of interest in this paper are the random NSPs and, in particular, the Gaussian NSPs, where the components ofGare iid standard normal random variables. TakingGto be comprised of the iid standard normal components makes the presentation neater. However, all the key results that we obtain are adaptable so that they relate to other random NSP variantswhere the randomness can come from basically any other distribution that can be pushed through the Lindenberg variant of the central limit theorem.It is easy to see that the feasibility problem from (<ref>) can be rewritten as the following optimization problemmin_ f() G≥ ∈,where an artificial functionf():^n →is introduced. As is also well known, for any optimization problem to be solvable, the necessary precondition is that it is actually feasible. Assuming the feasibility, (<ref>)can then be rewritten asξ_feas^(0)(f,) = min_∈max_∈_+ f() -^T G +^T,where_+is basically a set that collects allsuch that_i≥ 0,1≤ i≤ m. Sincef()=0is clearly an artificial object, one can also specialize back tof()=0and findξ_feas^(0)(0,) = min_∈max_∈_+ -^T G +^T.The main point behind perceptron's functioning and its connection to rfps is contained precisely in (<ref>). To see this, one starts by observing that existence of ansuch thatG≥$̱, i.e., such that (<ref>) is feasible, ensures that the inner maximization in (<ref>) can do no better than make ξ_feas^(0)(0,) =0. On the other hand, if such andoes not exist, then at least one of the inequalities in G≥$̱ is not satisfied and the inner maximization trivially makesξ_feas^(0)(0,) =∞. It is also easy to see that, from the feasibility point of view,ξ_feas^(0)(0,) =∞andξ_feas^(0)(0,) >0are equivalent which implies that, for all practical feasibility purposes, the underlying optimization problem in (<ref>) is structurally insensitive with respect toscaling. One can then restrict to_2=1and basically ensure thatξ_feas^(0)(0,)remains bounded. It is then straightforward to see from (<ref>), that determiningξ_feas(0,) = min_∈max_∈_+,_2=1 -^TG +^T= min_∈^nmax_∈_+^m -^TG + κ^T,with_+^mbeing the positive orthant part of them-dimensional unit sphere, is critically important for the analytical characterization of the rfps from (<ref>). 
One then has that the sign of the objective value in (<ref>) (i.e., ofξ_feas(f,)) determines the feasibility of (<ref>). In more concrete terms, (<ref>) is infeasible ifξ_feas(f,)>0and feasible ifξ_feas(f,)≤ 0.The above reasoning holds generically, i.e., for anyGand$̱. It then automatically applies to the Gaussian NSPs as particular instances of the above formalism obtained for Gaussian G and =̱κ,κ<0. Given that the connection between the rfps from (<ref>) and the corresponding random optimization problem counterpart from(<ref>) is rather evident, one clearly observes the critically important role of (<ref>) in characterizing various perceptrons' features. The feature of our particular interest here is the storage/classifyingcapacity. In a large dimensional statistical context, it is defined as follows α=lim_n→∞m/n α_c(κ)≜ max{α |lim_n→∞_Gξ_perc(0,)≜ξ_feas(0,)>0⟶ 1} =max{α |lim_n→∞_Gℱ(G,,̱,α) ⟶ 1}.The above is the so-called statistical capacity. The corresponding deterministicvariant is defined in exactly the same way with _G being removed. Throughout the paper, the subscripts next toanddenote the randomness with respect to which the statistical evaluation is taken. On occasion, when this is clear from the contexts, these subscripts are left unspecified. Moreover, to shorten writing, we regularly use the term capacity instead of statistical capacity.§.§ Rfps ⟷ (partially reciprocal) free energy connection In the previous section, we have established that studying the random feasibility problems (rfps) is critically important for the NSP's capacity analytical characterization. In this section we extend this connection to studying free energies. These object are well known and almost unavoidable in many statistical physics consideration. To introduce them in a mathematically proper way that would be of use here, we start by defining the following, so-called, bilinear Hamiltonian _sq(G)= ^TG,and its corresponding (so to say, partially reciprocal) partition functionZ_sq(β,G)=∑_∈∑_∈e^β_sq(G)^-1. To ensure an overall generality of the exposition, we, in (<ref>), takeandas general sets (fairly soon, we make specializations, =^n and =_+^m,necessary for perceptrons' consideration of our interest here). One quickly notes, the reciprocal nature of the inner summation, which makes the partition function given in(<ref>) somewhat different from the counterparts typically seen in statistical physics literature. The correspondingthermodynamic limit of the average “partially reciprocal” free energy is then given asf_sq(β) =lim_n→∞_Glog(Z_sq(β,G))/β√(n) =lim_n→∞_Glog∑_∈∑_∈e^β_sq(G)^-1/β√(n) =lim_n→∞_Glog∑_∈∑_∈e^β^TG)^-1/β√(n).The ground state special case is obtained by considering the so-called “zero-temperature” (T→ 0 orβ=1/T→∞) regimef_sq(∞) ≜lim_β→∞f_sq(β) =lim_β,n→∞_Glog(Z_sq(β,G))/β√(n) =lim_n→∞_G max_∈-max_∈^TG/√(n) = - lim_n→∞_G min_∈max_∈^TG/√(n).Restricting to G's comprised of iid standard normals allows to utilize their sign symmetry and rewrite the above as-f_sq(∞) =lim_n→∞_G min_∈max_∈^TG/√(n)= lim_n→∞_G min_∈max_∈ -^TG/√(n).It is not that difficult to see that (<ref>) is directly related to (<ref>). This, on the other hand, also implies that f_sq(∞) is very tightly connected to ξ_feas(0,) , whichhints that understandingf_sq(∞) is likely to play critically important role in understanding and ultimately characterizing bothξ_feas(0,) and the NSPs capacity. This is, in fact, exactly what happens in the sections that follow below. 
Namely, since studying f_sq(∞) directly is not very easy, we rely on studying f_sq(β). In other words, we study the above introduced partially reciprocal variant of the free energy for a general βand then specialize the obtained resultsto the ground state, β→∞, regime. In the interest of easing the exposition, we, however, on occasion neglect some terms which paly nosignificant role in the ground state considerations.§ NEGATIVE SPHERICAL PERCEPTRONS THROUGH THE PRISM OF SFL RDT We start with one of the key observations that enables pretty much everything that follows. It is precisely the recognition that the free energy from (<ref>),f_sq(β) =lim_n→∞_Glog∑_∈∑_∈e^β^TG)^-1/β√(n),is a function of bilinearly indexed (bli) random process ^TG. Such a recognition then puts us in position to establish a connection between f_sq(β) and the bli related results of <cit.>. To do so, we closely follow <cit.> and start with a collection of needed technical definitions. For r∈, k∈{1,2,…,r+1}, real scalars s, x, and ysuch that s^2=1, x>0, and y>0, sets ⊆^n and ⊆^m, function f_S(·):^n→ R, vectors =[_0,_1,…,_r+1], =[_0,_1,…,_r+1], and =̧[_̧0,_̧1,…,_̧r+1] such that 1=_0≥_1≥_2≥…≥_r≥_r+1= 0 1=_0≥_1≥_2≥…≥_r≥_r+1=0, _̧0=1, _̧r+1=0, and 𝒰_k≜ [u^(4,k),^̆(2,k),^(k)]such that the components ofu^(4,k)∈, ^̆(2,k)∈^m, and ^(k)∈^n are i.i.d. standard normals, we setψ_S,∞(f_S,,,,,,̧x,y,s)=_G,𝒰_r+11/n_̧rlog_𝒰_r…_𝒰_3_𝒰_2 Z_S,∞^_̧2^_̧3/_̧2^_̧4/_̧3…^_̧r/_̧r-1,whereZ_S,∞ ≜e^D_0,S,∞ D_0,S,∞ ≜ max_∈,_2=x s max_∈,_2=y√(n) f_S +√(n)y∑_k=2^r+1c_k^(k)^T + √(n) x ^T∑_k=2^r+1b_k^̆(2,k) b_k≜b_k(,)=√(_k-1-_k)c_k≜c_k(,)=√(_k-1-_k).Having all the above definitions set, we are in position to recall on the following theorem – unquestionably, one of key fundamental components of sfl RDT. <cit.>Consider large n context withα=lim_n→∞m/n, remaining constant asn grows. Let the elements ofG∈^m× nbe i.i.d. standard normals and let ⊆^n and ⊆^m be two given sets. Assume the complete sfl RDT frame from <cit.> and consider a given function f():R^m→ R. Setψ_rp ≜ - max_∈ s max_∈ f()+^TG ψ_rd(,,,̧x,y,s)≜x^2y^2/2∑_k=2^r+1(._k-1_k-1-_k_k.) _̧k - ψ_S,∞(f(),,,,,,̧x,y,s) .Let _̂0̂→ 1, _̂0̂→ 1, and _̧̂̂0̂→ 1, _r+1=_r+1=_r+1=0, and let the non-fixed parts of ≜(x,y), ≜(x,y), and≜(x,y) be the solutions of the following systemd ψ_rd(,,,̧x,y,s)/d =0,d ψ_rd(,,,̧x,y,s)/d =0,d ψ_rd(,,,̧x,y,s)/d =0. Then,lim_n→∞_Gψ_rp/√(n)=min_x>0max_y>0lim_n→∞ψ_rd((x,y),(x,y),(x,y),x,y,s) ,where ψ_S,∞(·) is as in (<ref>)-(<ref>). The s=-1 scenario follows directly from the corresponding one proven in <cit.> after a cosmetic change f()→ f(). On the other hand, the s=1 scenario, follows after trivial adjustments and a line-by-line repetition of the arguments of Section 3 of <cit.> with s=-1 replaced by s=1 and f() replaced by f().Clearly, the above theorem is very generic and holds for any given setsand . The corollary that follows below makes it fully operational for the case of spherical perceptrons which are of our interest here. Assume the setup of Theorem <ref> withandhaving the unit norm elements. Setψ_rp ≜ - max_∈ s max_∈^TG + κ^T ψ_rd(,,,̧x,y,s)≜1/2∑_k=2^r+1(._k-1_k-1-_k_k.) _̧k - ψ_S,∞(κ^T,,,,,,̧1,1,s) .Let the non-fixed parts of , , and be the solutions of the following systemd ψ_rd(,,,̧1,1,s)/d =0,d ψ_rd(,,,̧1,1,s)/d =0,d ψ_rd(,,,̧1,1,s)/d =0. Then,lim_n→∞_Gψ_rp/√(n)=lim_n→∞ψ_rd(,,,1,1,s) ,where ψ_S,∞(·) is as in (<ref>)-(<ref>). 
Follows trivially as a direct consequence of Theorem <ref>, after choosing f()=κ^T and recognizing that all elements inandare of unit norm.As<cit.> noted, the above random primal problems' trivial concentrations enable various corresponding probabilistic variants of (<ref>) and (<ref>) as well. We, however, skip stating such trivialities.§ PRACTICAL REALIZATIONTo have the results of Theorem <ref> and Corollary <ref> become practically useful, one needs to ensure that all the underlying quantities can be valuated. Two key obstacles might pose a problem in that regard: (i) It is a priori not clear what should be the correct value for r; and (ii) Setsanddo not have a component-wise structure characterization which does not provide any guarantee that the decoupling over bothandis very straightforward. It turns out, however, that neither of these potential obstacles is unsurpassable.After specialization to =^n and =_+^m, we rely on results of Corollary <ref> and start by observing that the key object of practical interest is the following random dual ψ_rd(,,,̧1,1,s)≜1/2∑_k=2^r+1(._k-1_k-1-_k_k.) _̧k - ψ_S,∞(0,,,,,,̧1,1,s).= 1/2∑_k=2^r+1(._k-1_k-1-_k_k.) _̧k - 1/nφ(D^(per)(s)) - 1/nφ(D^(sph)(s)),where analogously to (<ref>)-(<ref>)φ(D,)̧=_G,𝒰_r+11/_̧rlog_𝒰_r…_𝒰_3_𝒰_2e^D^_̧2^_̧3/_̧2^_̧4/_̧3…^_̧r/_̧r-1,andD^(per)(s) =max_∈ s√(n)∑_k=2^r+1c_k^(k)^TD^(sph)(s)≜s max_∈√(n)κ^T + √(n)^T∑_k=2^r+1b_k^̆(2,k).After a simple evaluation, we findD^(per)(s) =max_∈ s√(n)∑_k=2^r+1c_k^(k)^T = √(n)max_∈^n s ∑_k=2^r+1c_k^(k)^T = √(n)∑_k=2^r+1c_k^(k)_2.We now utilize thesquare root trick introduced on numerous occasions in <cit.> D^(per)(s) =√(n)∑_k=2^r+1c_k^(k)_2 =√(n)min_γ^(p)∑_k=2^r+1c_k^(k)_2^2/4γ^(p) +γ^(p) =√(n)min_γ^(p)∑_i=1^n∑_k=2^r+1c_k_i^(k)^2/4γ^(p) +γ^(p).After introducing scaling γ^(p)=γ^(p)_sq√(n), we rewrite (<ref>) asD^(per)(s) =√(n)min_γ_sq^(p)∑_i=1^n∑_k=2^r+1c_k_i^(k)^2/4γ_sq^(p)√(n) +γ_sq^(p)√(n) = min_γ_sq^(p)∑_i=1^n∑_k=2^r+1c_k_i^(k)^2/4γ_sq^(p) +γ_sq^(p)n=min_γ_sq^(p)∑_i=1^nD^(per)_i(c_k) +γ_sq^(p)n.whereD^(per)_i(c_k)=∑_k=2^r+1c_k_i^(k)^2/4γ_sq^(p).In a similar fashion (and following <cit.>), we also haveD^(sph)(s)≜s √(n)max_∈κ^T + ^T∑_k=2^r+1b_k^̆(2,k) =s√(n)maxκ +∑_k=2^r+1b_k^̆(2,k),0 _2.Utilizing again thesquare root trick, we obtainD^(sph) (s) = √(n)s maxκ + √(n)∑_k=2^r+1b_k^̆(2,k),0 _2 =s√(n)min_γmaxκ +∑_k=2^r+1b_k^̆(2,k),0 _2^2/4γ+γ = s√(n)min_γ∑_i=1^mmaxκ +∑_k=2^r+1b_k_̆i^(2,k),0^2/4γ+γ.After introducing scaling γ=γ_sq√(n), (<ref>) can be rewritten asD^(sph)(s) =s√(n)min_γ_sq∑_i=1^mmaxκ + ∑_k=2^r+1b_k_̆i^(2,k),0 ^2/4γ_sq√(n)+γ_sq√(n) = s min_γ_sq∑_i=1^mmaxκ + ∑_k=2^r+1b_k_̆i^(2,k),0^2/4γ_sq+γ_sqn= s min_γ_sq∑_i=1^m D_i^(sph)(b_k)+γ_sqn ,withD_i^(sph)(b_k)= maxκ + ∑_k=2^r+1b_k_̆i^(2,k),0^2/4γ_sq. §.§ s=-1 particularization Taking s=-1 gives us the opportunity to establish a direct connection between the ground state energy, f_sq(∞) given in (<ref>), and the random primal of the above machinery, ψ_rp(·), given in Corollary <ref>. In concrete terms, this basically means the following -f_sq(∞)= - lim_n→∞_G max_∈-max_∈^TG/√(n)= lim_n→∞_Gψ_rp/√(n)=lim_n→∞ψ_rd(,,,1,1,-1),where the non-fixed parts of , , and are the solutions of the following systemd ψ_rd(,,,̧1,1,-1)/d =0,d ψ_rd(,,,̧1,1,-1)/d =0,d ψ_rd(,,,̧1,1,-1)/d =0.Relying on (<ref>)-(<ref>), we further have lim_n→∞ψ_rd(,,,1,1,-1) =ψ̅_rd(,,,γ̂_sq,γ̂_sq^(p),1,1,-1),withψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) =1/2∑_k=2^r+1(._k-1_k-1-_k_k.) 
_̧k -γ_sq^(p) - φ(D_1^(per)(c_k(,)),)̧ +γ_sq- αφ(-D_1^(sph)(b_k(,)),)̧.Connecting(<ref>), (<ref>), and (<ref>), we further find -f_sq(∞) =-lim_n→∞_G max_∈-max_∈^TG/√(n)=lim_n→∞ψ_rd(,,,1,1,-1)= ψ̅_rd(,,,γ̂_sq,γ̂_sq^(p),1,1,-1)=1/2∑_k=2^r+1(._k-1_k-1-_k_k.) _k -γ̂_sq^(p)- φ(D_1^(per)(c_k(,)),)̧ + γ̂_sq - αφ(-D_1^(sph)(b_k(,)),)̧.The following theorem summarizes the above mechanism. Assume the complete sfl RDT setup of <cit.>. Consider large n linear regime with α=lim_n→∞m/n and φ(·) and ψ̅(·) from (<ref>) and (<ref>). Let the “fixed” parts of , , andsatisfy _1→ 1, _1→ 1, _1→ 1, _r+1=_r+1=_r+1=0, and let the “non-fixed” parts of _k, _k, and _k (k∈{2,3,…,r}) be the solutions of the following system of equationsd ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/d =0d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/d =0d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/d =0d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/dγ_sq =0 d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/dγ_sq^(p) =0, and, consequently, letc_k(,)=√(_k-1-_k)b_k(,)=√(_k-1-_k). Then -f_sq(∞)=1/2∑_k=2^r+1(._k-1_k-1-_k_k.) _k-γ̂_sq^(p) - φ(D_1^(per)(c_k(,)),) + γ̂_sq - αφ(-D_1^(sph)(b_k(,)),).Follows from the previous discussion, Theorem <ref>, Corollary <ref>, and the sfl RDT machinery presented in <cit.>. §.§ Numerical evaluations As stated earlier, the results of Theorem <ref> become operational if one can conduct the underlying numerical evaluations. All technical ingredients for such evaluations are present in the theorem itself. We start the evaluations with r=1 and proceed by incrementally increasing r. Proceeding in such a way enables one to systematically followprogressing of the entire lifting machinery. Moreover, it allows us to connect to some to the known results and show how they can be deduced as special cases of the generic mechanism presented here. To enable concrete numerical values, the evaluations are, on occasion, specializedto particular values of κ. Also, several analytical closed form results can be obtained along the way that make the overall evaluation process somewhat easier. We state those below as well. §.§.§ r=1– first level of liftingFor the first level, we have r=1 and _1→ 1 and _1→ 1 which, together with _r+1=_2=_r+1=_2=0, and _2→ 0, givesψ̅_rd(,,,γ_sq,γ_sq^(p),1,1,-1) = 1/2_̧2 -γ_sq^(p)- 1/_̧2log_𝒰_2 e^_̧2√(1-0)_1^(2)^2/4γ_sq^(p) +γ_sq - α1/_̧2log_𝒰_2 e^-_̧2max(κ+√(1-0)_̆1^(2,2),0)^2/4γ_sq→ -γ_sq^(p) - 1/_̧2log 1+ _𝒰_2_̧2√(1-0)_1^(2)^2/4γ_sq^(p) +γ_sq- α1/_̧2log 1- _𝒰_2_̧2max(κ+√(1-0)_̆1^(2,2),0)^2/4γ_sq→-γ_sq^(p)- 1/_̧2log 1+ _̧21/4γ_sq^(p) +γ_sq - α1/_̧2log 1- _̧2/4γ_sq_𝒰_2max(κ+√(1-0)_̆1^(2,2),0)^2 →-γ_sq^(p)-1/4γ_sq^(p) +γ_sq +α/4γ_sq_𝒰_2max(κ+√(1-0)_̆1^(2,2),0)^2.One then easily finds γ_sq^(p)=1/2 and γ̂_sq=√(α)/2√(_𝒰_2max(κ+√(1-0)_̆1^(2,2),0)^2) and- f_sq^(1)(∞)=ψ̅_rd(,,,γ̂_sq,γ̂_sq^(p),1,1,-1) = -1+√(α)√(_𝒰_2max(κ+_̆1^(2,2),0)^2).To obtain the critical α_c^(1) as a function of κ, we rely on condition f_sq^(1)(∞)=0, which givesa_c^(1)(κ) =1/_𝒰_2max(κ+_̆1^(2,2),0)^2 =1/κ e^-κ^2/2/√(2π) + (κ^2+1) -κ/√(2)/2.To get concrete values we specialize to κ=-1.5 and find() a_c^(1)(-1.5) = 1/_𝒰_2max(-1.5+_̆1^(2,2),0)^2→43.77.§.§.§ r=2– second level of liftingWe split the second level of lifting into two separate parts: (i) partial second level of lifting; and (ii) full second level of lifting.Partial second level of lifting When r=2 and the partial lifting is considered, then (similarly to the first level)_1→ 1 and _1→ 1, _2=_2=0, and _r+1=_3=_r+1=_3=0 but in general_2≠ 0. 
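(As a brief numerical aside before evaluating the second level: the first-level capacity formula obtained above involves only elementary Gaussian integrals and is easy to check directly. The short Python sketch below is our own illustration — it is not part of the original derivation — and it reproduces a_c^(1)(-1.5)≈ 43.77 as well as the classical zero-threshold value a_c^(1)(0)=2.)

# Numerical check of the first-level (r=1) capacity,
#   a_c^(1)(kappa) = 1 / E[ max(kappa + g, 0)^2 ],  g ~ N(0,1),
# using the closed form E[...] = (kappa^2 + 1)*Phi(kappa) + kappa*phi(kappa).
# Illustrative sketch only; not code from the original derivation.
import numpy as np
from scipy.stats import norm

def alpha_c_level1(kappa):
    e_max_sq = (kappa**2 + 1.0) * norm.cdf(kappa) + kappa * norm.pdf(kappa)
    return 1.0 / e_max_sq

def alpha_c_level1_mc(kappa, n_samples=10**7, seed=0):
    # Monte Carlo cross-check of the same Gaussian expectation
    g = np.random.default_rng(seed).standard_normal(n_samples)
    return 1.0 / np.mean(np.maximum(kappa + g, 0.0) ** 2)

print(alpha_c_level1(-1.5))     # ~43.77, matching the value quoted above
print(alpha_c_level1(0.0))      # ~2.0, the classical zero-threshold capacity
print(alpha_c_level1_mc(-1.5))  # Monte Carlo estimate, close to 43.77

(The same style of one-dimensional Gaussian evaluation is what the higher lifting levels below reduce to, only with nested expectations and more parameters.)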
As above, one again hasψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) = 1/2_̧2-γ_sq^(p)- 1/_̧2log_𝒰_2 e^_̧2√(1-0)_1^(2)^2/4γ_sq^(p)+ γ_sq - α1/_̧2log_𝒰_2 e^-_̧2max(κ+√(1-0)_̆1^(2,2),0)^2/4γ_sq = 1/2_̧2 -γ_sq^(p)+1/2_̧2log2γ_sq^(p)-_̧2/2γ_sq^(p) + γ_sq - α1/_̧2log_𝒰_2 e^-_̧2max(κ+√(1-0)_̆1^(2,2),0)^2/4γ_sq.Solving the integrals givesh̅= -κ B̅=_̧2/4γ_sq C̅=κf_(zd)=e^-B̅C̅^2/2B̅ + 1/2√(2B̅ + 1)h̅/√(4B̅ + 2)f_(zu)=1/2-h̅/√(2), and_𝒰_2 e^-_̧2max(κ+√(1-0)_̆1^(2,2),0)^2/4γ_sq=f_(zd) + f_(zu).Differentiation(optimization) with respect to γ_sq^(p), γ_sq, and _̧2 brings two different scenarios for concrete optimal parameter values that are distinguished based on the value of κ. (i) For κ≥κ_c≈ -0.622, we find _2→ 0, γ̂_sq^(p)=1/2, and γ̂_sq=√(α)/2√(_𝒰_2max(κ+√(1-0)_̆1^(2,2),0)^2). In other words, when κ≥κ_c≈ -0.622,one uncovers the first level of lifting with a_c^(2,p) as in (<ref>), i.e., witha_c^(2,p) = a_c^(1) = 1/_𝒰_2max(κ+_̆1^(2,2),0)^2 = 1/κ e^-κ^2/2/√(2π) + (κ^2+1) -κ/√(2)/2.(ii) For κ≤κ_c, after computing the derivatives with respect to γ_sq^(p), γ_sq, and _̧2 and equalling them to zero, we obtain for, say, κ=-1.5 () a_c^(2,p)(-1.5)≈37.36.Full second level of lifting The setup presented above can also be utilized for the full lifting on the second level. However, one has to be careful since now (in addition to _2≠ 0) one, in general,also has _2≠0 and _2≠0. Analogously to (<ref>), we now writeψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) =1/2 (1-_2_2)_̧2 -γ_sq^(p)- 1/_̧2_𝒰_3log_𝒰_2 e^_̧2√(1-_2)_1^(2) +√(_2)_1^(3)^2/4 γ_sq^(p)+ γ_sq-α1/_̧2_𝒰_3log_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2)_̆1^(2,3),0)^2/4γ_sq =1/2 (1-_2_2)_̧2-γ_sq^(p) -(. -1/2_̧2log2γ_sq-_̧2(1-_2)/2γ_sq+_2/2(2γ_sq-_̧2(1-_2)).)+ γ_sq-α1/_̧2_𝒰_3log_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2)_̆1^(2,3),0)^2/4γ_sq.After solving the remaining integrals, we also haveĥ=-√(_2)_̆1^(2,3)+κ/√(1-_2) B̂=_̧2/4γ_sqĈ=√(_2)_̆1^(2,3)+κf_(zd)^(2,f)=e^-B̂Ĉ^2/2(1-_2)B̂ + 1/2√(2(1-_2)B̂ + 1)ĥ/√(4(1-_2)B̂ + 2) f_(zu)^(2,f)=1/2-ĥ/√(2), f_(zt)^(2,f)= f_(zd)^(2,f)+f_(zu)^(2,f).and_𝒰_3log_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2)_̆1^(2,3),0)^2/4γ_sq = _𝒰_3log f_(zt)^(2,f).One now needs to compute five derivatives with respect to _2, _2, _̧2, γ_sq,, and γ_sq^(p). We systematically compute each of them. (i) _2– derivative: We start by writingdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=-1/2_2_̧2 -(.-1/2((2γ_sq^(p)-_̧2(1-_2)))+1/2((2γ_sq^(p)-_̧2(1-_2)))-_2/2(2γ_sq^(p)-_̧2(1-_2))^2_̧2 .) =-1/2_2_̧2 +_2/2(2γ_sq^(p)-_̧2(1-_2))^2_̧2=_̧2 -1/2_2 +_2/2(2γ_sq^(p)-_̧2(1-_2))^2.(ii) _2– derivative: As above, we start by writingdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=-1/2_2_̧2 - α/_̧2d_𝒰_3log f_(zt)^(2,f)/d_2 =-1/2_2_̧2 - α/_̧2_𝒰_31/f_(zt)^(2,f)d f_(zt)^(2,f)/d_2.From (<ref>), we then havedf_(zt)^(2,f)/d_2=df_(zd)^(2,f)/d_2+ df_(zu)^(2,f)/d_2.Moreover, we also havedf_(zu)^(2,f)/d_2 =e^-ĥ^2/2/√(2π)dĥ/d_2,anddĥ/d_2 =-_̆1^(2,3)/2√(_2)√(1-_2)-(√(_2)_̆1^(2,3)+κ)/2√(1-_2)^3.A combination of(<ref>) and (<ref>) givesdf_(zu)^(2,f)/d_2=e^-ĥ^2/2/√(2π)dĥ/d_2 =e^-ĥ^2/2/√(2π)-_̆1^(2,3)/2√(_2)√(1-_2)-(√(_2)_̆1^(2,3)+κ)/2√(1-_2)^3. 
After observingdĈ/d_2 =_̆1^(2,3)/2√(_2), we can further writedf_(zd)^(2,f)/d_2 =f_(d)^(1)+f_(d)^(2)+f_(d)^(3), wheref_(d)^(1)=-B̂Ĉ_̆1^(2,3)/√(_2)(2(1-_2)B̂ + 1)-2B̂^2Ĉ^2/(2(1-_2)B̂ + 1).^2 e^-B̂Ĉ^2/2(1-_2)B̂ + 1ĥ/√(4(1-_2)B̂ + 2)/2√(2(1-_2)B̂ + 1), andf_(d)^(2)=e^-B̂Ĉ^2/2(1-_2)B̂ + 1/2√(2(1-_2)B̂ + 1) -2/√(π)1/√(4(1-_2)B̂ + 2)dĥ/d_2 +2B̂ĥ/√(4(1-_2)B̂ + 2)^3 e^-ĥ/√(4(1-_2)B̂ + 2)^2,andf_(d)^(3)=B̂e^-B̂Ĉ^2/2(1-_2)B̂ + 1/2√(2(1-_2)B̂ + 1)^3ĥ/√(4(1-_2)B̂ + 2).A combination of (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) is then sufficient to determine _2–derivative.(iii) _̧2– derivative: We again start by writingdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧2=1/2(1- _2_2)-1/2_̧2^2log2γ_sq^(p)-_̧2(1-_2)/2γ_sq^(p) -1-_2/2_̧2(2γ_sq^(p)-_̧2(1-_2))-_2(1-_2)/2(2γ_sq^(p)-_̧2(1-_2))^2 + α/_̧2^2_𝒰_3log f_(zt)^(2,f) - α/_̧2_𝒰_31/f_(zt)^(2,f)d f_(zt)^(2,f)/d_̧2.From (<ref>), we then havedf_(zt)^(2,f)/d_̧2=df_(zd)^(2,f)/d_̧2+ df_(zu)^(2,f)/d_̧2=df_(zd)^(2,f)/d_̧2,where we utilized the fact thatdf_(zu)^(2,f)/d_̧2 =e^-ĥ^2/2/√(2π)dĥ/d_̧2=0.After observingdB̂/d_̧2 =1/4γ_sqdĈ/d_̧2 =0, we can further writedf_(zd)^(2,f)/d_̧2 =f_(d)̧^(1)+f_(d)̧^(2)+f_(d)̧^(3), wheref_(d)̧^(1)=-Ĉ^2/4γ_sq(2(1-_2)B̂ + 1)+(1-_2)B̂Ĉ^2/2γ_sq(2(1-_2)B̂ + 1).^2 e^-B̂Ĉ^2/2(1-_2)B̂ + 1ĥ/√(4(1-_2)B̂ + 2)/2√(2(1-_2)B̂ + 1), andf_(d)̧^(2)=e^-B̂Ĉ^2/2(1-_2)B̂ + 1/2√(2(1-_2)B̂ + 1) -2/√(π)-(1-_2)ĥ/2γ_sq√(4(1-_2)B̂ + 2)^3 e^-ĥ/√(4(1-_2)B̂ + 2)^2,andf_(d)̧^(3)=-(1-_2)e^-B̂Ĉ^2/2(1-_2)B̂ + 1/8γ_sq√(2(1-_2)B̂ + 1)^3ĥ/√(4(1-_2)B̂ + 2),A combination of (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) is then sufficient to determine _̧2–derivative.(iv) γ_sq^(p)– derivative: We easily finddψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq^(p) =-1- -1/_̧2(2γ_sq^(p)-_̧2(1-_2))+1/2_̧2γ_sq^(p)-_2/(2γ_sq^(p)-_̧2(1-_2))^2 =-1- -1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2))-_2/(2γ_sq^(p)-_̧2(1-_2))^2.(v) γ_sq– derivative: We first writedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq =1 - α/_̧2_𝒰_31/f_(zt)^(2,f)d f_(zt)^(2,f)/dγ_sq.Relying on (<ref>), we also havedf_(zt)^(2,f)/dγ_sq=df_(zd)^(2,f)/dγ_sq+ df_(zu)^(2,f)/dγ_sq=df_(zd)^(2,f)/dγ_sq,where we utilizeddf_(zu)^(2,f)/dγ_sq =e^-ĥ^2/2/√(2π)dĥ/dγ_sq=0.After observingdĥ/dγ_sq =dĈ/dγ_sq =0 dB̂/dγ_sq =-_̧2/4γ_sq^2, we can further writedf_(zd)^(2,f)/dγ_sq =f_(dγ)^(1)+f_(dγ)^(2)+f_(dγ)^(3), wheref_(dγ)^(1)=_̧2Ĉ^2/4γ_sq^2(2(1-_2)B̂ + 1)-_̧2(1-_2)B̂Ĉ^2/2γ_sq^2(2(1-_2)B̂ + 1).^2 e^-B̂Ĉ^2/2(1-_2)B̂ + 1ĥ/√(4(1-_2)B̂ + 2)/2√(2(1-_2)B̂ + 1), andf_(dγ)^(2)=e^-B̂Ĉ^2/2(1-_2)B̂ + 1/2√(2(1-_2)B̂ + 1) -2/√(π)_̧2(1-_2)ĥ/2γ_sq^2√(4(1-_2)B̂ + 2)^3 e^-ĥ/√(4(1-_2)B̂ + 2)^2,andf_(dγ)^(3)=_̧2(1-_2)e^-B̂Ĉ^2/2(1-_2)B̂ + 1/8γ_sq^2√(2(1-_2)B̂ + 1)^3ĥ/√(4(1-_2)B̂ + 2),An easy combination of (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) ensures that all the ingredients needed to determine γ_sq–derivative are obtained. After solving the following systemdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=0dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧2=0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq^(p)=0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq=0,and denoting by _2,_2,_2,γ̂_sq^(p),γ̂_sqthe obtained solution, we utilize- f_sq^(2)(∞)=ψ̅_rd(,,,γ̂_sq,γ̂_sq^(p),1,1,-1) = 0,to determine the critical α_c(κ), for any given κ. For example, taking κ=-1.5, we find() a_c^(2,f)(-1.5) ≈36.57.Closed form relations: To handle the above system we found as useful to utilize the following helpful, closed form, relations. 
First from (<ref>), we find_2=_2/(2γ_sq^(p)-_̧2(1-_2))^2.From (<ref>), we further have1=1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2))+_2/(2γ_sq^(p)-_̧2(1-_2))^2.Combining (<ref>) and (<ref>), we obtain1=1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2))+_2/(2γ_sq^(p)-_̧2(1-_2))^2=1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2))+_2.A further combination of (<ref>) and (<ref>) givesγ_sq^(p) =1-_2/2(1-_2)(2γ_sq^(p)-_̧2(1-_2)) =1/21-_2/1-_2√(_2/_2).Also, from (<ref>) we have_̧2(1-_2) =2γ_sq^(p)-√(_2/_2).A combination of (<ref>) and (<ref>) gives_̧2(1-_2) =2γ_sq^(p)-√(_2/_2)=1-_2/1-_2√(_2/_2)-√(_2/_2),and_̧2 =1/1-_2√(_2/_2)-1/1-_2√(_2/_2). Concrete numerical values:InTable <ref>, we complement a_c^(2,f)(-1.5) with the concrete values of all the relevant quantities related to the second full (2-sfl RDT) level of lifting. To enable a systematic view of the lifting progress,we, in parallel, show the same quantities for the first full (1-sfl RDT) and the second partial (2-spf RDT) level.In Table <ref>, we show the key, second level of lifting, parameters over a range of κ.The progression of the capacity as the level of lifting increases is shown in Table <ref>.§.§.§ r=3– third level of lifting Since we have already seen the main idea behind the partial lifting, we here immediatelyconsider full third level of lifting. For r=3, we have that _1→ 1 and _1→ 1as well as_r+1=_4=_r+1=_4=0.Analogously to (<ref>), we now writeψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) = 1/2 (1-_2_2)_̧2+ 1/2 (_2_2-_3_3)_̧3 -γ_sq^(p)- 1/_̧3_𝒰_4log_𝒰_3_𝒰_2 e^_̧2√(1-_2)_1^(2) +√(_2-_3)_1^(3)+√(_3)_1^(4)^2/4 γ_sq^(p)^_̧3/_̧2 + γ_sq-α/_̧3_𝒰_4log_𝒰_3_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4),0)^2/4γ_sq^_̧3/_̧2 = 1/2 (1-_2_2)_̧2+ 1/2 (_2_2-_3_3)_̧3 -γ_sq^(p) -(. -1/2_̧2log2γ_sq^(p)-_̧2(1-_2)/2γ_sq^(p)-1/2_̧3log2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)/2γ_sq^(p)-_̧2(1-_2) +_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)).) + γ_sq-α/_̧3_𝒰_4log_𝒰_3_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4),0)^2/4γ_sq^_̧3/_̧2,where we handled the first sequence of integrals utilizing the closed form solutions obtained in <cit.>. After solving the remaining integrals, we also haveh̃=-√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4)+κ/√(1-_2) B̃=_̧2/4γ_sqC̃=√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4)+κf_(zd)^(3,f)=e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1)h̃/√(4(1-_2)B̃ + 2) f_(zu)^(3,f)=1/2-h̃/√(2), f_(zt)^(3,f)= f_(zd)^(3,f)+f_(zu)^(3,f).and_𝒰_4log_𝒰_3_𝒰_2 e^-_̧2max(√(1-_2)_̆1^(2,2)+√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4),0)^2/4γ_sq^_̧3/_̧2 = _𝒰_4log_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2.Combining (<ref>) and (<ref>), we obtainψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)=1/2 (1-_2_2)_̧2+ 1/2 (_2_2-_3_3)_̧3 -γ_sq^(p)-(. -1/2_̧2log2γ_sq^(p)-_̧2(1-_2)/2γ_sq^(p)-1/2_̧3log2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)/2γ_sq^(p)-_̧2(1-_2)+_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)).) + γ_sq-α/_̧3_𝒰_4log_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2.§.§.§ Third level derivatives One now needs to compute eight derivatives with respect to _3, _2, _3, _2,, _̧3, _̧2,, γ_sq,, and γ_sq^(p). We again systematically compute each of them. (i) _3– derivative: Utilizing (<ref>) and (<ref>), we havedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_3=-1/2_2_̧3 -(.-1/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))+1/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))-_̧3_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.)=-1/2_3_̧3 +_̧3_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.(ii) _2– derivative: Relying further on (<ref>) and (<ref>), we also havedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=-1/2_2(_̧2-_̧3) -(. 
-1/2(2γ_sq^(p)-_̧2(1-_2)) -_̧2-_̧3/2_̧3(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))+_̧2/2_̧3(2γ_sq^(p)-_̧2(1-_2))-_3(_̧2-_̧3)/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.)=-1/2_2(_̧2-_̧3) -(. _̧2-_̧3/2_̧3(2γ_sq^(p)-_̧2(1-_2)) -_̧2-_̧3/2_̧3(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))-_3(_̧2-_̧3)/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.)=-1/2_2(_̧2-_̧3) -(_̧2-_̧3)(. 1/2_̧3(2γ_sq^(p)-_̧2(1-_2)) -1/2_̧3(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))-_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.)= (_̧2-_̧3)(. -_2/2 +_2-_3/2(2γ_sq^(p)-_̧2(1-_2))(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))+_3/2(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.).(iii) _3– derivative: As above, we utilize (<ref>) and (<ref>) and start by writingdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_3=-1/2_3_̧3 -α/_̧3d _𝒰_4log_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2/d_3 =-1/2_3_̧3 -α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2d_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2/d_3 =-1/2_3_̧3 -α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2_𝒰_3_̧3/_̧2 f_(zt)^(3,f)^_̧3/_̧2-1d f_(zt)^(3,f)/d_3.From (<ref>), we then havedf_(zt)^(3,f)/d_3=df_(zd)^(3,f)/d_3+ df_(zu)^(3,f)/d_3.Moreover, utilizing (<ref>) further, we can also writedf_(zu)^(3,f)/d_3 =e^-h̃^2/2/√(2π)dh̃/d_3,anddh̃/d_3 =--1/2√(_2-_3)_̆1^(2,3)+1/2√(_3)_̆1^(2,4)+κ/√(1-_2) .A combination of(<ref>) and (<ref>) givesdf_(zu)^(3,f)/d_3=e^-h̃^2/2/√(2π)dh̃/d_3 =e^-h̃^2/2/√(2π) --1/2√(_2-_3)_̆1^(2,3)+1/2√(_3)_̆1^(2,4)+κ/√(1-_2). After observingdC̃/d_3 =-1/2√(_2-_3)_̆1^(2,3)+1/2√(_3)_̆1^(2,4), we can further writedf_(zd)^(3,f)/d_3 =f_(d_3)^(1)+f_(d_3)^(2), wheref_(d_3)^(1)=-B̃C̃ -1/√(_2-_3)_̆1^(2,3)+1/√(_3)_̆1^(2,4)/(2(1-_2)B̃ + 1) e^-B̃C̃^2/2(1-_2)B̃ + 1h̃/√(4(1-_2)B̃ + 2)/2√(2(1-_2)B̃ + 1), andf_(d_3)^(2)=e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1) -2/√(π)1/√(4(1-_2)B̃ + 2)dh̃/d_3 e^-h̃/√(4(1-_2)B̃ + 2)^2. A combination of (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) is then sufficient to determine _3–derivative. (iv) _2– derivative: Relying again on (<ref>) and (<ref>), we havedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=-1/2_2(_̧2-_̧3) -α/_̧3d _𝒰_4log_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2/d_2 =-1/2_2(_̧2-_̧3) -α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2d_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2/d_2 =-1/2_2(_̧2-_̧3) -α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2_𝒰_3_̧3/_̧2 f_(zt)^(3,f)^_̧3/_̧2-1d f_(zt)^(3,f)/d_2.From (<ref>), we then finddf_(zt)^(3,f)/d_2=df_(zd)^(3,f)/d_2+ df_(zu)^(3,f)/d_2.Utilizing (<ref>) further, we also havedf_(zu)^(3,f)/d_2 =e^-h̃^2/2/√(2π)dh̃/d_2,anddh̃/d_2 =-1/2√(_2-_3)_̆1^(2,3)/√(1-_2) -√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4)+κ/2√(1-_2)^3.Combining(<ref>) and (<ref>), we obtaindf_(zu)^(3,f)/d_2=e^-h̃^2/2/√(2π)dh̃/d_3 =e^-h̃^2/2/√(2π) -1/2√(_2-_3)_̆1^(2,3)/√(1-_2) -√(_2-_3)_̆1^(2,3)+√(_3)_̆1^(2,4)+κ/2√(1-_2)^3. After observingdC̃/d_2 =1/2√(_2-_3)_̆1^(2,3), we can further writedf_(zd)^(3,f)/d_2 =f_(d_2)^(1)+f_(d_2)^(2)+f_(d_2)^(3), wheref_(d_2)^(1)=-B̃C̃_̆1^(2,3)/√(_2-_3)(2(1-_2)B̃ + 1)-2B̃^2C̃^2/(2(1-_2)B̃ + 1).^2 e^-B̃C̃^2/2(1-_2)B̃ + 1h̃/√(4(1-_2)B̃ + 2)/2√(2(1-_2)B̃ + 1), andf_(d_2)^(2)=e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1) -2/√(π)1/√(4(1-_2)B̃ + 2)dh̃/d_2 +2B̃h̃/√(4(1-_2)B̃ + 2)^3 e^-h̃/√(4(1-_2)B̃ + 2)^2,andf_(d_2)^(3)=B̃e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1)^3h̃/√(4(1-_2)B̃ + 2). A combination of (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) is then sufficient to determine _2–derivative. (v) _̧3– derivative: As usual, we start by utilizing(<ref>) and (<ref>) andwritedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧3=1/2 (_2_2-_3_3) -(. 1/2_̧3^2log2γ_sq-_̧2(1-_2)-_̧3(_2-_3)/2γ_sq-_̧2(1-_2) _2-_3/2_̧3(2γ_sq-_̧2(1-_2)-_̧3(_2-_3))+_3(_2-_3)/2(2γ_sq-_̧2(1-_2)-_̧3(_2-_3))^2.) 
+α/_̧3^2_𝒰_4log_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2 -α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2_𝒰_3 f_(zt)^(3,f)^_̧3/_̧21/_̧2log f_(zt)^(3,f).(<ref>) is then sufficient to determine _̧3–derivative.(vi) _̧2– derivative: We once again start by utilizing(<ref>) and (<ref>) andwritedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧2=1/2 (1-_2_2) -(. 1/2_̧2^2log2γ_sq-_̧2(1-_2)/2γ_sq+1-_2/2_̧2(2γ_sq-_̧2(1-_2))+1-_2/2_̧3(2γ_sq-_̧2(1-_2)-_̧3(_2-_3))-1-_2/2_̧3(2γ_sq-_̧2(1-_2)) +_3(1-_2)/2(2γ_sq-_̧2(1-_2)-_̧3(_2-_3))^2.)-α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2 ×_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2-_̧3/_̧2^2log f_(zt)^(3,f)+_̧3/_̧2 f_(zt)^(3,f)df_(zt)^(3,f)/d_̧2.From (<ref>), we finddf_(zt)^(3,f)/d_̧2=df_(zd)^(3,f)/d_̧2+ df_(zu)^(3,f)/d_̧2=df_(zd)^(3,f)/d_̧2,where we utilized the fact thatdf_(zu)^(3,f)/d_̧2 =e^-h̃^2/2/√(2π)dh̃/d_̧2=0.After observingdB̃/d_̧2 =1/4γ_sqdC̃/d_̧2 =0, we further writedf_(zd)^(3,f)/d_̧2 =f_(d_̧2)^(1)+f_(d_̧2)^(2)+f_(d_̧2)^(3), wheref_(d_̧2)^(1)=-C̃^2/4γ_sq(2(1-_2)B̃ + 1)+(1-_2)B̃C̃^2/2γ_sq(2(1-_2)B̃ + 1).^2 e^-B̃C̃^2/2(1-_2)B̃ + 1h̃/√(4(1-_2)B̃ + 2)/2√(2(1-_2)B̃ + 1), andf_(d_̧2)^(2)=e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1) -2/√(π)-(1-_2)h̃/2γ_sq√(4(1-_2)B̃ + 2)^3 e^-h̃/√(4(1-_2)B̃ + 2)^2,andf_(d_̧2)^(3)=-(1-_2)e^-B̃C̃^2/2(1-_2)B̃ + 1/8γ_sq√(2(1-_2)B̃ + 1)^3h̃/√(4(1-_2)B̃ + 2),A combination of (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) is then sufficient to determine _̧2–derivative.(vii) γ_sq^(p)– derivative: From (<ref>) and (<ref>) ,we easily also finddψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq^(p)=-1 -(. -1/_̧2(2γ_sq^(p)-_̧2(1-_2))+ 1/2_̧2γ_sq^(p)-1/_̧3(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)) +1/_̧3(2γ_sq^(p)-_̧2(1-_2))-_3/(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.)=-1 -(. -1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2))-_2-_3/(2γ_sq^(p)-_̧2(1-_2))(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))-_3/(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.).(viii) γ_sq– derivative: Relying again on (<ref>) and (<ref>), we writedψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq =1 - α/_̧3_𝒰_41/_𝒰_3 f_(zt)^(3,f)^_̧3/_̧2_𝒰_3_̧3/_̧2 f_(zt)^(3,f)^_̧3/_̧2-1d f_(zt)^(3,f)/dγ_sq.From (<ref>), we also havedf_(zt)^(3,f)/dγ_sq=df_(zd)^(3,f)/dγ_sq+ df_(zu)^(3,f)/dγ_sq=df_(zd)^(3,f)/dγ_sq,where we utilizeddf_(zu)^(2,f)/dγ_sq =e^-h̃^2/2/√(2π)dh̃/dγ_sq=0.After observingdh̃/dγ_sq =dC̃/dγ_sq =0 dB̃/dγ_sq =-_̧2/4γ_sq^2, we can further writedf_(zd)^(2,f)/dγ_sq =f_(dγ_sq)^(1)+f_(dγ_sq)^(2)+f_(dγ_sq)^(3), wheref_(dγ_sq)^(1)=_̧2C̃^2/4γ_sq^2(2(1-_2)B̃ + 1)-_̧2(1-_2)B̃C̃^2/2γ_sq^2(2(1-_2)B̃ + 1).^2 e^-B̃C̃^2/2(1-_2)B̃ + 1h̃/√(4(1-_2)B̃ + 2)/2√(2(1-_2)B̃ + 1), andf_(dγ_sq)^(2)=e^-B̃C̃^2/2(1-_2)B̃ + 1/2√(2(1-_2)B̃ + 1) -2/√(π)_̧2(1-_2)h̃/2γ_sq^2√(4(1-_2)B̃ + 2)^3 e^-h̃/√(4(1-_2)B̃ + 2)^2,andf_(dγ_sq)^(3)=_̧2(1-_2)e^-B̃C̃^2/2(1-_2)B̃ + 1/8γ_sq^2√(2(1-_2)B̃ + 1)^3h̃/√(4(1-_2)B̃ + 2),Together, (<ref>), (<ref>), (<ref>), and (<ref>)-(<ref>) provide all necessary ingredients to determine γ_sq–derivative. One then proceeds by solving the following systemdψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_3= dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_3= dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_2=0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧3= dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /d_̧2 = 0 dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq^(p)=dψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1) /dγ_sq = 0.After denoting by _2,_2,_2,γ̂_sq^(p),γ̂_sqthe obtained solution, one further utilizes- f_sq^(3)(∞)=ψ̅_rd(,,,γ̂_sq,γ̂_sq^(p),1,1,-1) = 0,to determine the critical α_c(κ), for any given κ. 
For example, specializingto κ=-1.5, we find() a_c^(3,f)(-1.5) ≈36.40.§.§.§ Explicit generic closed form parametric relations Solving the above system is doable in principle. However, in general it is not an easy task. It often requires a substantial effort to conduct all the required numerical work. Rather surprisingly and despite heavy analytical machinery, it turns out that the key lifting parameters are generically connected to each other. Moreover, we below uncover that the parametric interconnections can be described via remarkably simple and elegant closed form expressions. Besides their analytical importance, the relations that we provide below are practically extremely useful and make the underlying numerical work immeasurably simpler and smoother.We first observe that from (<ref>) one can obtain_3=_3/(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.In a similar fashion, from (<ref>), we find_2 =_2-_3/(2γ_sq^(p)-_̧2(1-_2))(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))+_3/(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))^2.A combination of (<ref>) and (<ref>) then gives2γ_sq^(p)-_̧2(1-_2)=_2-_3/(_2-_3)(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3))= _2-_3/_2-_3√(_3/_3).One then also observes_̧3(_2-_3)=2γ_sq^(p)-_̧2(1-_2)-(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)).A combination of (<ref>), (<ref>), and (<ref>) then gives_̧3(_2-_3)= _2-_3/_2-_3√(_3/_3) -√(_3/_3),and_̧3= 1/_2-_3√(_3/_3) -1/_2-_3√(_3/_3).From (<ref>), we also have1 =1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2)) +_2-_3/(2γ_sq^(p)-_̧2(1-_2))(2γ_sq^(p)-_̧2(1-_2)-_̧3(_2-_3)) +_3.A combination of (<ref>) and (<ref>) further gives1 =1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2)) +(_2-_3) +_3 = 1-_2/2γ_sq^(p)(2γ_sq^(p)-_̧2(1-_2)) +_2.From (<ref>) and (<ref>), we then findγ_sq^(p) =1-_2/2(1-_2)(2γ_sq^(p)-_̧2(1-_2))=1/21-_2/1-_2_2-_3/_2-_3√(_3/_3).Moreover, from (<ref>) and (<ref>), we also have_̧2(1-_2)=2γ_sq^(p)- _2-_3/_2-_3√(_3/_3).Combining (<ref>) and (<ref>), one then easily also has_̧2=2γ_sq^(p)/1-_2- 1/1-_2_2-_3/_2-_3√(_3/_3)=1/1-_2_2-_3/_2-_3√(_3/_3)- 1/1-_2_2-_3/_2-_3√(_3/_3).We found all the above relations (and in particular those given in (<ref>), (<ref>), and (<ref>)) as extremely useful for the numerical work. Moreover, following the above procedure one also obtains for any r the analogue versions of (<ref>), (<ref>), and (<ref>) γ_sq^(p) =1/2_1-_2/_1-_2∏_k=2:2:r-1_k-_k+1/_k-_k+1∏_k=2:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^r+1).and for i∈{2,3,…,r} _̧i=1/_i-1-_i∏_k=i:2:r-1_k-_k+1/_k-_k+1∏_k=i:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^i) - 1/_i-1-_i∏_k=i:2:r-1_k-_k+1/_k-_k+1∏_k=i:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^i).We summarize the above in the following lemma. Assume the setup of Theorem <ref>. Let the “non-fixed” parts of _k, _k, and _k (k∈{2,3,…,r}) be the solutions of the system in (<ref>). The following holds. 
For r=1:γ̂_sq^(p) = 1/2.For r=2:γ̂_sq^(p) = 1/21-_2/1-_2√(_2/_2) _2 =1/1-_2√(_2/_2)- 1/1-_2√(_2/_2).For r=3:γ̂_sq^(p) = 1/21-_2/1-_2_2-_3/_2-_3√(_3/_3) _3 =1/_2-_3√(_3/_3) -1/_2-_3√(_3/_3) _2 =1/1-_2_2-_3/_2-_3√(_3/_3)- 1/1-_2_2-_3/_2-_3√(_3/_3).For general r:γ̂_sq^(p) =1/2_1-_2/_1-_2∏_k=2:2:r-1_k-_k+1/_k-_k+1∏_k=2:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^r+1) _i=1/_i-1-_i∏_k=i:2:r-1_k-_k+1/_k-_k+1∏_k=i:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^i) - 1/_i-1-_i∏_k=i:2:r-1_k-_k+1/_k-_k+1∏_k=i:2:r-2_k+1-_k+2/_k+1-_k+2√(_r/_r^(-1)^i),i∈{2,3,…,r}.The r=1 case follows immediately from (<ref>), the r=2 from (<ref>) and (<ref>), the r=3 from (<ref>), (<ref>), and (<ref>), whereas the general case follows by repeating the above procedure for an arbitrary r.§.§.§ Concrete parameter values InTable <ref>,a_c^(3,f)(-1.5), obtained in (<ref>), is complemented with the concrete values of all the relevant quantities related to the third full (3-sfl RDT)level of lifting. To enable a systematic view of the lifting progress,we, in parallel, show the results from Table <ref> that contain the same quantities for the first full (1-sfl RDT), the second partial (2-spf RDT), and the second full (2-sff RDT) level. In Table <ref>, we show the key second level of lifting parameters over a range of κ. The progression of the capacity as the level of lifting increases is shown in Table <ref>. The systematic showing of the progression in Table <ref> (as well as in in Table <ref>) allows one to also note, that the first rows in these tables relate to the results that can be obtained through the plain RDT (see, e.g., <cit.>), whereas their second rows relate to the results that can be obtained through the partially lifted RDT of <cit.>.The obtained results are also visualized in Figures <ref> and <ref>. In Figure <ref> a small κ range is shown resulting in not so large scaled capacities (of order of a few tens). In these regimes the differences between various levels of lifting are more pronounced. However, as the figure clearly shows, the convergence is rather remarkably fast. When capacities get larger the relative differences become even smaller. This is clear from Figure <ref>, where one can not make much of a difference between say 2-spl RDT on the one sideand 2-sfl and 3-sfl RDT on the other side. In other words, in the large α_c(κ) regimes, the lifted curves are visually indistinguishable which reconfirms the fact that 2-spl RDTresults of <cit.> are up to the leading order terms optimal (this was also shown in <cit.>).§.§.§ Modulo- sfl RDT Everything presented above can be repeated relying on the so-called modulo-m sfl RDT frame of <cit.>. Instead of Theorem <ref>, one then basically has the following theorem. Assume the setup of Theorem <ref> and instead of the complete, assume the modulo- sfl RDT setup of <cit.>.Let the “fixed” parts of , , andsatisfy _1→ 1, _1→ 1, _1→ 1, _r+1=_r+1=_r+1=0, and let the “non-fixed” parts of _k, and _k (k∈{2,3,…,r}) be the solutions of the following system of equationsd ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/d =0d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/d =0d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/dγ_sq =0 d ψ̅_rd(,,,̧γ_sq,γ_sq^(p),1,1,-1)/dγ_sq^(p) =0. Consequently, letc_k(,)=√(_k-1-_k)b_k(,)=√(_k-1-_k). Then -f_sq(∞)≤ max_1/2∑_k=2^r+1(._k-1_k-1-_k_k.) 
_̧k -γ̂_sq^(p)- φ(D_1^(per)(c_k(,)),)̧ + γ̂_sq - αφ(-D_1^(sph)(b_k(,)),)̧ =-f_sq,(∞).Follows from the previous discussion, Theorems <ref> and <ref>, Corollary <ref>, and the sfl RDT machinery presented in <cit.>.We conducted the numerical evaluations using the modulo- results of the above theorem without finding any scenario where the inequality in (<ref>) is not tight. In other words, we have found that f_sq^(r)(∞)=f_sq,^(r)(∞). This indicates that the stationarity over $̧ is actually of the maximization type.§ CONCLUSIONWe studied the statistical capacity of the negative sphericalperceptrons (i.e., the classical spherical perceptron with negative thresholdsκ). Differently from their positive thresholds counterparts, these problems belong to the class of hard random structures where standard analytical approaches are powerless when it comes to approaching the exact capacity characterizations. The random duality (RDT)based results<cit.> provided solid generic upper bounds that were substantially improved via the partially lifted RDT in <cit.>. A recent breakthroughs in studying bilinearly indexed random processes<cit.>, enabled <cit.> to create a fully lifted random duality theory (fl RDT) counterpart to the RDT from <cit.>.After recognizing the connection between the statistical perceptrons, general random feasibility problems (rfps), and the bilinearly indexed (bli), we utilized the fl RDT and its a particular stationarized variant (called sfl RDT) to establish a general framework for studying the negative spherical perceptrons. The practical usability of the entire fl RDT machinery relies on a successful conducting of heavy underlying numerical evaluations. We first presented a large amount of analytical simplifications that resulted in uncovering remarkable closed form interconnections among the key lifting parameters. In addition to providing a direct view into the structure of the parametric relations, they also greatly helped with the numerical work. In particular, we obtained concrete numerical results and uncovered a remarkably rapid convergence of the whole fl RDT mechanism. Over a wide range of thresholdsκ(allowing scaled capacities of a few thousands), we observed that the third (second non-trivial) level of stationarized full lifting suffices to achieve relative improvements no better than∼ 0.1%. To ensure that the lifting progress is systematically presented and that the rapid convergence is clearly visible, we started with the very first level and then incrementally increased the level of lifting. Such a systematic procedure also allowed us to deduce as special cases the earlier results obtained through the plain RDT in <cit.> and the partial RDT in <cit.>.The methodology is very generic andvarious extensions and generalizations are possible. These include many related to both general random feasibility problems (rfps) andparticular random perceptrons. A lengthy list of random structures discussed in <cit.> is an example of a collection of such problems that can be handled through the methods presented here. As the technical details are problem specific,we discuss them in separate papers.As<cit.> emphasized, the sfl RDT considerations do not require the standard Gaussianityassumption of the random primals. 
The Lindeberg variant of the central limit theorem (see, e.g., <cit.>) can be utilized to quickly extend the sfl RDT results to a wide range of different statistics. The utilization of the Lindeberg approach in <cit.> is, for example, particularly elegant.
http://arxiv.org/abs/2312.16531v1
{ "authors": [ "Mihailo Stojnic" ], "categories": [ "stat.ML", "cond-mat.dis-nn", "cs.IT", "cs.LG", "math.IT", "math.PR" ], "primary_category": "stat.ML", "published": "20231227112340", "title": "Fl RDT based ultimate lowering of the negative spherical perceptron capacity" }
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025
Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305
Department of Physics, Stanford University, Stanford, CA 94305
Department of Applied Physics, Stanford University, Stanford, CA 94305
Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany
College of Physics, Qingdao University, Qingdao 266071, China
We study thermalization and thermal transport in single crystals of CsV_3Sb_5 through the CDW transition by directly measuring thermal diffusivity (D), thermal conductivity (κ), resistivity (ρ), and specific heat (c). Commensurate with previous reports, we observe a sharp, narrow anomaly in the specific heat associated with a first-order transition that results in a CDW state below ∼94 K. While a corresponding sharp anomaly in thermal diffusivity is also observed, resistivity and thermal conductivity only exhibit small steps at the transition, where the feature is sharp for resistivity and broader for thermal conductivity. Scrutinizing the thermal Einstein relation κ=cD, we find that this relation is satisfied in the entire temperature range, except in a narrow range around the transition. The Wiedemann-Franz law seems to work outside the critical region as well.
Below the transition and persisting below the two-phase regime we find a strong resemblance between the resistivity anomaly and the specific heat, which may point to a secondary electronic order parameter that emerges continuously below the transition.

Thermal transport measurements of the charge density wave transition in CsV_3Sb_5
Aharon Kapitulnik
January 14, 2024

Introduction.- Understanding transport in correlated quantum materials is a subject of intense intellectual effort, with newly discovered material systems helping to stimulate fresh and original theoretical work that, in particular, examines the validity of common assumptions based on kinetic theory. Focusing on thermal transport, this includes Fourier's law of heat transfer and the Wiedemann-Franz (WF) law that connects the coefficients of thermal and electrical conductivities of electrons in metals. Where correlations are pronounced and the quasi-particle picture breaks down, both assumptions may fail, as was previously demonstrated in studies of thermal transport beyond the Mott-Ioffe-Regel (MIR) limit in cuprates <cit.>. There, phonons were shown to play a key role, which was strikingly evident from parallel studies of diffusivity bounds in similar complex insulators <cit.>. Another fertile ground to examine non-quasiparticle transport is in the vicinity of a charge density wave (CDW), which is driven by an interplay of both strong electron-electron and electron-phonon interactions. For example, thermal diffusivity, resistivity, and specific heat measurements on the CDW material ErTe_3 exhibit a sharp decrease in thermal conductivity both parallel and perpendicular to the primary CDW at the CDW transition temperature, while the resistivity changes more gradually, implying a strong breakdown of the WF law in the critical regime of the CDW transition <cit.>. Furthermore, assuming Fisher-Langer (FL) theory <cit.> applies to the continuous CDW transition in ErTe_3, the large anomalies observed in the temperature derivative of the resistivity stand in sharp contrast to the small anomaly observed in heat capacity measurements <cit.>. These results are suggestive of a phenomenological recount of a strongly coupled electron-phonon critical ‘soup’ <cit.>. A particularly exciting new material system, where a CDW state appears as a forceful effect, is the class of quasi-two-dimensional kagomé metals AV_3Sb_5, which exhibit charge order transitions at ∼80 K, 103 K and 94 K for A = K, Rb and Cs, respectively <cit.>. Focusing on CsV_3Sb_5, the CDW transition is associated with a reconstruction of the Fermi surface pockets linked to the vanadium orbitals and the kagomé lattice framework <cit.>. Nuclear magnetic resonance (NMR) studies on the different vanadium sites are consistent with orbital ordering at T_CDW∼94 K induced by a first-order structural transition, accompanied by an electronic charge density wave (CDW) that appears to grow gradually below T_CDW, with possible intermediate subtle stacking transitions perpendicular to the kagomé planes <cit.>.
With superconductivity appearing at lower temperatures (∼4 K), CsV_3Sb_5 is a prime example of a system exhibiting “intertwined order”, where multiple phases emerge out of a primary phase, starting from the first-order transition at ∼94 K. Understanding the consequences of this unique electron-phonon landscape on thermal transport is the primary objective of the present study. In this letter we examine the dynamics of the CDW phase transition by means of independent measurements of the specific heat, thermal conductivity, thermal diffusivity and electrical resistivity. Our primary result is that thermal transport in a critical regime near T_CDW shows anomalous behavior inconsistent with quasiparticle transport. In addition, we also observe: i) The specific heat shows a strong singularity at T_CDW, although no hysteresis could be detected; ii) Despite it being a first-order transition, a finite drop in the resistivity at T_CDW yields a strong peak in its temperature derivative, which seems to correspond to the specific heat anomaly, similar to continuous CDW transition materials <cit.>; iii) The thermal conductivity is almost temperature independent above the transition, but starts to increase below the transition; iv) Thermal diffusivity through the CDW transition shows a sharp decrease, which cannot be fully explained by the ratio D=κ/c; v) Except for the region of ∼2 K around the CDW transition, the relation κ=cD is satisfied quantitatively, and, applying the WF law to the electrical resistivity, the difference thermal conductivity κ-κ_e gives a reasonable result for the phonon contribution to the heat transport. Results.- CsV_3Sb_5 samples were grown at MPI Dresden following the procedure described in <cit.>, <cit.>, and in related publications. Here a self-flux method was used, ensuring melt purity and producing large crystals with a high degree of structural order. For the thermal diffusivity measurements we used a photothermal microscope <cit.>. Fig. <ref> shows specific heat, resistivity, thermal diffusivity and thermal conductivity measurements on same-batch CsV_3Sb_5 crystals. Technical details of these measurements are described in the Supplementary Material (SM) <cit.>. Figure <ref>(a) shows the specific heat of a same-batch CsV_3Sb_5 crystal (see SM <cit.> for the determination of geometrical factors), closely matching previous work <cit.> and featuring a strong anomaly at T_CDW with a magnitude of over 66%. Fitting the specific heat to a Debye model shows an increase in the Debye temperature from θ_D=160 K at 10 K to θ_D=260 K at 80 K and above through the CDW transition, while the specific heat saturates at the high-temperature Dulong-Petit value. The specific heat at T_CDW almost doubles its value, showing a sharp anomaly that rises to Δ c_p≈ 0.8 J/cm^3·K within ∼3 K of the transition. However, despite its first-order nature, no hysteresis or latent heat (down to 6.6× 10^-4 J/cm^3) was detected through the CDW transition (see Fig. <ref>(a) inset). CsV_3Sb_5 resistivity was measured previously. However, due to the strong anisotropy of this layered structure, it is difficult to quantitatively determine the resistivity from resistance measurements. Indeed, the scale of the resistivity reported in the literature varies between 31 μΩ·cm and 300 μΩ·cm <cit.>, although all resistivity measurements can be scaled to show the same temperature dependence, including a small “jump” at T_CDW. Fig.
<ref>(b) shows resistivity measurements on our crystals exhibiting 160 μΩ·cm at 300 K and a RRR between 150 and 220, resulting in very low residual resistance. We can also use the WF law and our thermal diffusivity and conductivity measurements to put a lower bound of ∼40% for the heat conducted by electrons at room temperature, while the remainder, i.e. phononic, component of thermal conductivity ∼ 0.06 W/cm·K is similar to other layered chalcogenide materials <cit.>. Using WF to extract electronic thermal conductivity below T_CDW yields a sharp upturn, before it turns down at low temperatures. Finally, the small resistivity jump at T_CDW yields a very sharp singularity in the derivative ∂ρ/∂ T shown in Fig. <ref>(b) inset. A weaker anomaly seen at ∼62 K may correspond to a previously observed transition to a 2×2×4 supercell <cit.>, while an observed anomaly at ∼18 K was not previously reported.Thermal diffusivity, shown in Fig. <ref>(c),was directly measured using a photothermal microscope <cit.> from 30 K to 300 K, showing three different regions: below T_CDW, around T_CDW, and above T_CDW. Above the CDW transition temperature, the diffusivity is approximately constant with a slight increase at lower temperatures. Lowering the temperature towards the transition, the thermal diffusivity sharply decreases by at least a factor of ×20 as it is limited by the resolution of the measurement. This anomaly is followed by a recovery to roughly the value just above T_CDW, with a trend of increasing thermal diffusivity below the transition. Fig. <ref> depicts an expanded region of the CDW transition, showing that within the larger uncertainty in diffusivity data, the specific heat and diffusivity anomalies are roughly similar in width, about ≲3 K around T_CDW, similar to previously reported NMR measurements <cit.>. Below T_CDW a sharp increase in thermal diffusivity together with a decrease in electrical resistivity is consistent with decrease in electron-phonon scattering.Thermal conductivity in the a-b plane was also measured as shown in Fig. <ref>(d) with the aim to compare this direct measurement to the measurements of specific heat and diffusivity. Here a known heat current was introduced to the sample and the measured temperature gradient across the sample was measured, showing values between 0.05 K and 0.8 K. Additional details of thermal conductivity measurements are described in the Supplementary Material (SM): <cit.>. The most striking observation here is that the thermal conductivity does not show any singularity at the CDW transition, similar to previously reported results in <cit.>. The inset in Fig. <ref>(d) further indicates a tendency to an increase below T_CDW, again similar to <cit.>.Assuming as in Eqn. <ref> a global thermal “Einstein relation” for the thermal conductivity κ=cD, it is surprising that vestiges of the first order transition singularity at T_CDW are absent in κ. Comparing the direct measurement of thermal conductivity with the aforementioned thermal Einstein relation, Fig. <ref> shows an overall excellent correspondence of the two approaches above the critical region, a strong deviation in the critical region and a small deviation below the CDW transition. An instructive way to demonstrate this observation is shown in the inset of Fig. <ref> where we plot the ratio: R_q≡κ/cD using the experimentally measured quantities. Where the kinetic approach prevails, we expect R_q=1, which seems to hold above T_CDW. 
We find that R_q≫ 1 in the ∼3 K around T_CDW, which is the two phase regime as was also found inthe NMR experiments of Songet al., <cit.>.We also include in Fig. <ref> the electronic thermal conductivity calculated from the measured resistivity and assuming the WF law, which shows similar trend outside the CDW transition region, with the difference from the measured thermal conductivity indicating a reasonable phonon contribution, at least above T_CDW<cit.>. Below T_CDW the difference Δκ=κ-κ_e shows increase as well, presumably due to increase in both phonon and electron thermal conductivities due to decrease in electron-phonon scattering. It is thus reasonable to assume that Δκ≡κ_ph outside the critical regime.Discussion.- Heat conduction in solids is mediated by the thermal motion of quasiparticles andvarious elementary excitations, primarily phonons, which also serve to locally thermalize the solid. Where both electrons and phonons are well defined, the thermal conductivity is the sum of both electrons and phonons thermal conductivities κ=κ_e+κ_ph, such that a temperature gradient applied to the sample results in a heat flux density j⃗_q=-κ∇ T,transferring entropy from the hotter to the colder end of the sample. This so-called Fourier's law considers heat as a “fluid” and together with the first law of thermodynamics lead to the heat equation.Applying energy conservation, the localmolar energy density u(r⃗,t) will satisfy a continuity equation∂ u/∂t+∇·j⃗_u=0Since most experiments are done at constant pressure conditions, it is advantageous to consider the first law in the form dh=Tds+vdP+∑_iμ_idn_i, where h(s,P,{n_i}) is the molar enthalpy of the system, s is the molar entropy, v is the molar volume, P is the pressure, μ_i is the chemical potential of the i^th component (if any) and n_i is the respective density. For a one-component system such as in our present experiment and under constant pressure and density conditions, it is advantageous to consider the continuity equation for the enthalpy. This can be shown to yield the heat equation (written in components convention):c_p(∂ T/∂ t)_P-κ_ij∂_i∂_j T-(∂_iκ_ij)∂_j T=0For most cases the thermal conductivity is uniform in any particular direction in space and thus the second gradient term vanishes yielding a simple diffusion equation withD_ij=κ_ij/c_p. If the electrons and phonons are in equilibrium and well defined quasiparticles, the kinetic approach (e.g. for the isotropic case) implies:κ=c_pD=κ_e+κ_ph=c_eD_e+c_phD_ph where c_e and c_ph are the electronic and phononic specific heats and D_e, D_ph are the corresponding thermal diffusivities. Furthermore, when transport is dominated by weakly interacting quasi-elastic scattering processes, κ_e is related to electrical conductivity by the Wiedemann-Franz law, i.e. κ_e/σ=L_0T, whereL_0=π^2k_B^2/3 e^2 ≈ 2.44× 10^-8WΩK^-2 is a universal constant. Observing this ratio indicates “standard” transport in a given electronic system, while significant violations of the WF law may indicate a breakdown of the quasiparticle description (see e.g. <cit.>). The data presented above clearly shows that while the kinetic approach, which follows Eq. <ref> holds in most of the temperature regime, it strongly fails in a narrow temperature range around the CDW anomaly. We noted earlier that R_q=1 holds except for a ∼3 K range around T_CDW. 
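To make the bookkeeping behind the kinetic relation κ = c_p D and the Wiedemann-Franz law quoted above concrete, the following minimal Python sketch evaluates κ_e = L_0 T/ρ, the phononic remainder κ - κ_e, and the ratio R_q = κ/(c_p D) at a few temperatures. All numerical inputs (ρ, c_p, D, κ) are illustrative placeholders of roughly the right order of magnitude for this material class; they are not the measured data of this work.

import numpy as np

L0 = 2.44e-8  # Lorenz number, W Ohm / K^2

# Illustrative placeholder values (order of magnitude only, not measured data)
T     = np.array([300.0, 150.0, 94.0, 80.0])       # K
rho   = np.array([160e-6, 60e-6, 35e-6, 25e-6])    # Ohm cm
c_p   = np.array([1.7, 1.5, 2.2, 1.3])             # J / (cm^3 K); spike near T_CDW
D     = np.array([0.10, 0.12, 0.01, 0.15])         # cm^2 / s; dip near T_CDW
kappa = np.array([0.15, 0.18, 0.18, 0.20])         # W / (cm K), "directly measured"

kappa_e  = L0 * T / rho          # electronic part from the Wiedemann-Franz law
kappa_ph = kappa - kappa_e       # remainder attributed to phonons (outside the critical regime)
R_q      = kappa / (c_p * D)     # deviates from 1 where simple diffusive transport fails

for row in zip(T, kappa_e, kappa_ph, R_q):
    print("T = %5.1f K   kappa_e = %.3f   kappa_ph = %.3f   R_q = %.2f" % row)

On numbers of this kind R_q stays near unity away from the transition and is driven far above 1 only in the few-kelvin window around T_CDW where c_p spikes and D collapses.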
In fact, this is the exact temperature range where a two-phase region is observed in the NMR experiments of Songet al., <cit.>.Since the thermal conductivity does not show any significant anomaly at T_CDW (similar to the resistivity), we would a-priori expect that the anomaly in the specific heat will be compensated by the anomaly in the thermal diffusivity. The fact that R_q shows a strong singularity at the transition indicates the breakdown of simple hydrodynamic heat diffusion. This can happen if e.g. the nonlinear term in Eqn. <ref> becomes significant in the two-phase regime wherethe system exhibits randomly distributed puddles of the two phases, thus κ_ij varies in space, causing the ∂_iκ_ij term to be finite.In addition to the spatial inhomogeneity in the two-phase regime,the latent heat of the transition <cit.> will create local temperature variations, which will be time dependent as per the first term in Eqn. <ref>. Thus, at a given time scale the term(∂_iκ_ij)∂_jT may dominate over simple diffusion, yielding a strong anomaly in the measured diffusivity. For a slow measurement of the thermal conductivity the electrons and phonons are in equilibrium and κ may not acquire any singularity through the transition. However, the diffusivity is a dynamical property that depends on the time scale of the measurement, the associated relative fraction of the two phases through the transition as well as their spatial distribution. Focusing on the experimental results, both the specific heat (Fig. <ref>a) and the thermal conductivity(Fig. <ref>d) were measured quasi-statically as is evident from the lack of hysteresis in these two measurements. On the other hand the thermal diffusivity was measured at a finite frequency (1 to 5 kHz), exhibiting a very strong dip at the transition and accompanied by a large scatter of the diffusivity value in different runs, presumably due to difference in local distribution of the two phases.Below the transition the system is anisotropic and whether R_q=1 may depend strongly on the nature of the CDW state. Indeed, close examination of directly measured κ vs. the combination cD below T_CDW may indicate the latter exhibiting a slightly larger value and R_q≲1. However, as seen in Fig. <ref>, the steep increase of the thermal conductivity below the transition prevents an accurate determination of this value. In particular we note the large variation in resistivity among measured samples, which if WF is used implies similar variation in the electronic part of the thermal conductivity. Turning to the anomaly of the electronic response through the transition, it is instructive to plot the derivative of the resistivity, which exhibits a sharp drop at the transition. We notice that while above the transition the correspondence of the two anomalies is poor, below the transition it is remarkably similar, persisting well below the narrow two-phase regime.For a continuous transition Fisher-Langer (FL) theory<cit.> states that the dominant contribution to the scattering near the phase transition of a metal is due to the short-range order-parameter fluctuations, which in turn cause the temperature derivative of the resistivity through the transition to simulate the specific heat anomaly. 
However, here we observe a first order transition, for which the FL theory is not applicable.A possible explanation to the correspondence of the anomalies in dρ/dT and c_p below the transition is that a corresponding continuous order parameter emerges, which is triggered by the first order distortion transition at T_CDW. This idea is further supported by NMR studies, which suggest that the observed transition originates from orbital ordering at T_CDW∼94 K induced by a first order structural transition, accompanied by electronic charge density wave (CDW) that appears to grow gradually below T_CDW<cit.>. In that case we may expect two different effects that will affect the resistivity through the transition. While the first order structural transition may primarily induce the finite step in the resistivity as a result of a sharp change in the density of states, the accompanied continuous formation of CDW below the transition may be responsible for the enhanced scattering, conforming to the FL theory. In conclusion by comparing direct measurements of thermal conductivity, specific heat, thermal diffusivity and electrical resistivity we could test the validity of the kinetic approximation, where heat is considered a hydrodynamic diffusive fluid, as well as the Wiedemann-Franz law for the electronic part of the thermal conduction. We find that both hallmarks of standard transport in metals hold except in the very narrow regime of the first-order transition, where a two-phase regime appears.§ ACKNOWLEDGMENTSThis work was supported by the Department of Energy, Office of Basic Energy Sciences, under contract no. DE-AC02-76SF00515.SUPPLEMENTARY INFORMATION Thermal transport measurements of the charge density wave transition in CsV_3Sb_5 912Erik D. Kountz^1,2,3, Chaitanya R. Murthy^1,2,3, Dong Chen^4,5, Linda Ye^2,6, Mark Zic^2,3, Claudia Felser^4, Ian R. Fisher^1,2,6, Steven A. Kivelson^1,2,3, Aharon Kapitulnik^1,2,3,6^1 Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 ^2 Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305 ^3 Department of Physics, Stanford University, Stanford, CA 94305 ^4 Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany ^5 College of Physics, Qingdao University, Qingdao 266071, China ^6 Department of Applied Physics, Stanford University, Stanford, CA 94305 §MATERIALS, METHODS AND ADDITIONAL INFORMATION§.§ Single Crystal Growth CsV_3Sb_5 single crystals were prepared by self-flux which ensures purity of the melt and produces large crystals with a high degree of structural order. The measured crystal sizes were on the order of 1×2×0.01 mm^3. All measured crystals came from same batch growths. Crystals were grown with slight excess of Cs <cit.>. §.§ Specific heat measurements The heat capacity of the single-crystal samples was measured using a relaxation time technique in a Quantum Design Physical Property Measurement System (PPMS). A crystal with a mass of approximately 400 μg and dimensions approximately 1×3×0.011 mm^3 with flat surfaces was selected for good thermal contact with the sample platform. Data were taken in zero applied field from 2.9 to 300K. The dc temperature increase was 2% at all temperatures with additional measurements at 1% and 0.5% temperature increases near T_CDW. 
Specific heat was converted from J/K to J/cm^3 ·K by measuring the sample thickness and sample area with a microscope to calculate area and using density from the unit cell from single-crystal x-ray diffraction (SCXRD) of 6.102 g/cm^3<cit.>. §.§ Resistivity measurements Because CsV_3Sb_5 has hexagonal symmetry both in and out of the CDW state, axis of measuring the resistivity, diffusivity, and conductivity was not aligned with XRD. However, the axis was the constant for each measurement. Multiple measurements of resistance versus temperature were performed with a traditional dipping probe measurement and with a PPMS. The resistivity in the a-b plane was measured on thin rectangular crystals which had been cut with a scalpel before contacting with silver paint. The largest source of error in such resistivity measurements is the magnitude of the resistivity resulting from the uncertanty in sample dimensions particularly sample thickness. Multiple measurements of resistivity were measured and the smallest resistivity measurement was used as the reported resistivity in the a-b plane. Other resistivities were rescaled to this reference resistivity presented in this letter. This system of reporting the lowest resistivity was done because in many samples the current contact do not make contacts to the entire depth of the sample but instead only the topmost layers. In all cases, the resistivity data measured was higher than previous measurements from MPI <cit.>. If the data was not rescaled the resulting electrical thermal conductivity would be larger and the phononic thermal conductivity would be smaller than presented. Differential resistivity was calculated by taking the numerical derivative of the resistivity data. §.§ Thermal conductivity measurements Direct measurements of total thermal conductivity versus temperature were performed using a dc heating measurement approach. A smooth flat crystal of CsV_3Sb_5 was cut to a rectangular shape. The bottom of the crystal was vertically mounted to a copper block with silver paste right next to an Omega 100Ohm Pt thermometer to measure the sample temperature. The copper block was then attached to the copper cryostat coldfinger with GE varnish and cigarette paper to electrically isolate the sample. On the top of the CsV_3Sb_5 crystal, a 350Ohm strain gauge was attached to act as a heater. Current leads used 40AWG copper wire. On the sides of the CsV_3Sb_5 sample 2mil type T thermocouples were attached with GE varnish. The ends of the thermocouples were soldered to pins which were connected to the outside of the cryostat by manganin wires to reduce thermal heat flow. The soldered connections were thermally anchored with GE varnish and cigarette paper on the coldfinger close to the Pt thermometer to minimize temperature gradients and thermoelectric voltages generated at the junctions. All wires were chosen to be over 3cm long to minimise any thermal leakage. Thermoclectric voltages were measured with Keithley 181 and 182 nanovoltmeters.The sample was cooled down to 30 K with liquid helium or 80 K with liquid nitrogen the setup slowly warmed or cooled. Measurements of thermoelectric voltages, temperature, heating current, etc. were taken every ∼ 1.5s giving a heating/cooling rate of 1K/5min (200 measurements per degree), 1K/10min (400 measurements per degree), or 1K/15min (600 measurements per degree). The heating current was supplied with a Keithley 220 current supply and voltage across the heater measured with a Keithley 197A multimeter. 
After every measurement, using the measured voltage across the heater, the current supplied was changed so that a constant amount of power was applied to the crystal. Every 50-200 measurements, the heating current is switched between 1 μW and the chosen heating power: 0.1mW, 0.2mW, 0.5mW, or 1mW. A “no heating power” of 1 μW was chosen so that the resistance of the resistor would always be known. Heating from this small current was not discernible from cases where no 0 μW of heating was applied. After dropping 6-16 data points around when the heating current was switched to avoid transients, the data for each heating power measurement was averaged and interpolated for all temperatures to find the temperature difference between each thermocouple and the base temperature. Then, the temperature difference between the two thermocouples for heating current on and off was seperately calculated by subtracting the previously calculated temperature differences. Finally, the temperature gradient from the heating current was calculated by subtracting the `heater on' temperature difference from the `heater off' temperature difference. This second subtraction is used to eliminate any residual thermoelectric voltages across any junctions elsewhere in the cryostat to get the true temperature gradient. This temperature gradient is then inverted to find the thermal conductance. The thermal conductivity is calculated by multiplying the thermal conductance by the sample dimensions and a global temperature independent rescaling factor so that the room temperature conductivity matches that of the conductivity calculated from diffusivity.Resulting temperature gradients were approximately 0.1K to 0.05K depending on the applied heating current. §.§ Thermal diffusivity measurements Thermal diffusivity measurements were performed using a photo-thermal microscope working in reflection mode first introduced for the study of single crystal cuprate superconductors by Fantonet al.<cit.>.A complementary comprehensive study of the technique was given by Huaet al.<cit.>. The specifics of the apparatus used in the present study is first described in <cit.>. Using this apparatus, the thermal diffusivity is obtained directly, without the need to measure the thermal conductivity and specific heat separately. An advantage of this apparatus is the ability to measure the full any in-plane anisotropy of the thermal diffusivity by orienting the pair of heating and probing laser spots at any arbitrary orientation with respect to the crystal axes. The mobility in the optics is further used for diagnostics of spatial uniformity of the thermal diffusivity. §.§.§ Principles of the Photothermal Apparatus For the high resolution thermal diffusivity measurements we use a home-built photothermal microscope.The microscope views the sample through a sapphire optical window in a cryostat, with the sample mounted to a cold finger just under the window.A schematic is shown in Fig. <ref>. A heating laser at 637nm or 642nm and a probing laser at 820nm are focused onto the sample surface by the microscope objective.The focused spots have Gaussian size of approximately 1 μm and 2 μm, respectively, due to the diffraction limit of different wavelengths, and can be moved independently over the sample surface.A camera allows us to observe the sample surface nearby, align the spots in a particular orientation with respect to the crystal, and determine the distance between the spots. 
The output power of the heating laser is modulated with a sinusoidal profile P(t)=P_0 [1 + sin(ω_0 t)]. The modulation frequency ω_0/2π has a typical range of 5.5kHz - 25.5kHz, much slower than the microscopic equilibration on the order of picoseconds. This means that the parameters extracted are all within the in the DC limit of linear response, and that the dependency on the modulation frequency can be neglected. The probing laser is aimed at a spot a small distance (typically 9-15 μm) away from the heating laser. The reflected light from the probing laser is diverted by an optical circulator and fed into a photodetector. The AC component of the photodetector signal is then fed to a lock-in amplifier referenced to the laser modulation and the amplitude and phase are measured. Modulation of the heating laser at a frequency ω implies that at the detector δ R ∝ (dR/dT)δ T. The DC component is used as a gauge to make sure the lasers are focused. Since the electronic thermalization time is many orders of magnitude faster than the heat-modulation frequency, the heat wave that emerges from the heating spot is carried by both electrons and phonons, which are in thermal equilibrium with each other, propagating in the radial direction. Given the sample's total specific heat capacity, the phase shift that is measured at the probing spot reflects the thermal diffusivity of the sample. We further emphasize that the local optical properties of the material will determine the absolute temperature increase at the heating spot and the magnitude of the detected reflectivity at the probing spot. Thus, if light emerging from the heating laser or probe laser are polarized, this may affect the amplitude of the measured signal, but not the phase shift associated with the diffusive heat propagation in the sample.§.§.§ Measuring Thermal DiffusivityThe diffusive transport of heat is governed by the diffusion equation∂ δ T(r⃗,t)/∂ t -D∇^2 δ T(r⃗,t) =q(r⃗,t)/cwhere δ T is the temperature disturbance above the ambient temperature T, r⃗=(x_1, x_2, x_3) is the spherical radial coordinate given in terms of the euclidean principal axes x_i, q is the absorbed power density, c is the volumetric specific heat capacity, D ≡κ / c is the thermal diffusivity, and κ is the thermal conductivity.Note that c and D are themselves functions of T, but in the limit of weak heating δ T ≪ T, we make the approximation c(T+δ T)≈ c(T) and D(T+δ T)≈ D(T). Indeed, the temperature disturbance at the probing point from both lasers is estimated to be ≲ 1K through out the temperature range, so the above approximation is well justified, ensuring true linear response. Note that for the equation D ≡κ / c to hold, this requires that any modes in the specific heat c not be zero modes. 
This is in addition to the two-phase model described in the main text.Modulating the heating laser with a frequency ω, we model the focused heat source as a point source, q(t,r⃗ )= P_0 e^-iω tδ^3(r⃗), which is valid as long as the distance from the heating spot is much larger than the spot radius.The amplitude of the temperature variation at a point far from the heating spot is also expected to be modulated with the same frequency: δ T(r⃗,t) = δ̃ ̃T̃(r⃗,ω)e^-iω t.In a semi-infinite isotropic system, the temperature profile is spherically symmetric and takes the formδ̃ ̃T̃(r,ω)=P_0/κ1/rexp(-√(ω/2D)r)_amplitudeexp(-i√(ω/2D)r)._phaseWe vary the separation distance to verify the semi-infinite 3D system assumption and the small spot assumption, and we vary the heating power to verify the weak heating assumption. Our measurement gives us the response at the modulation frequency ω. Because the separation distance between the lasers spots was measured using camera vision and each individually mounted sample comes at a small random tilt, a systematic error on the order of 5% is associated to each set of measurements. This systematic error is constant and figures in the main text show this representative error on selected data points at the highest and lowest temperature of each measurement. Starting from Eqn. <ref>, we can write the instantaneous reflectivity at the probe point, which is held at an average temperature T as:δ R(r,t,T)= { AdR/dTδ̃ ̃ ̃T̃(r,ω) e^-iω t}= AdR/dTP_0/κ re^- ϕ(r,ω)cos(ω t + ϕ(r,ω))Here A is an efficiency factor, which may dependon mechanical vibrations, fluctuations in the laser power, and surface imperfections among other possible effects. At the same timethe phase shift of the signal:ϕ(r,ω)= √(ω/2D)ris the robust quantity to measure, which is independent of the local amplitudes either at the heating point at r=0 or the probe point at r, and only depend on the heat transport between these two points, making the direct measurement of the phase shift a robust way to extract the directional diffusivity. We obtain D by fitting the phase delay ϕ between the source and the response signals as a function of ω at fixed r: D = ω r^2 / 2 ϕ^2. A typical fit is shown in Fig. <ref>. As Fig. <ref> shows and as repeated with multiple separation distances, the measured phase delays at a fixed separation distance as a function of heating laser modulation frequency follows the same function as the semi-infinite 3D system and small spot assumption (phase delay proportional to square root of modulation frequency) and so diffusivity can be extracted usingEqn. <ref>. Because the amplitude of the reflected signal decreases at higher frequencies and because the amplitude of the reflected signal decreases at lower temperatures, measurements of diffusivity were taken by measuring the phase delay at 12 evenly spaced frequencies between 5.5kHz and 25.5kHz above 115 K and 3 evenly spaced frequencies between 5.5kHz and 8.7kHz below 115 K and fitting for diffusivity using Eqn. <ref>. Multiple measurements of diffusivity at a given temperature were averaged to a single value.We check the homogeneity of the crystals by repeating measurements at different positions on the surface, and check the isotropy/anisotropy by rotating the relative orientation between the laser spots. We also verify that there is no polarization dependence in the extracted diffusivity by repeating measurements at different heating laser and probing laser polarizations.
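As an illustration of the extraction step described above, the sketch below fits the phase delay ϕ(ω) = r√(ω/2D) at a fixed spot separation r to recover D. The "data" here are synthetic, generated with an arbitrary true diffusivity and noise level, so all numerical values are placeholders rather than actual measurements.

import numpy as np

r = 12e-4                      # spot separation in cm (12 micron), illustrative
D_true = 0.10                  # cm^2/s, placeholder value used to generate fake data
f = np.linspace(5.5e3, 25.5e3, 12)          # modulation frequencies, Hz
omega = 2*np.pi*f

rng = np.random.default_rng(0)
phase = r*np.sqrt(omega/(2*D_true)) + rng.normal(scale=0.02, size=f.size)  # radians

# phi = a*sqrt(omega) with a = r/sqrt(2D): least-squares slope through the origin
a = np.sum(phase*np.sqrt(omega)) / np.sum(omega)
D_fit = r**2 / (2*a**2)
print("fitted D = %.4f cm^2/s (true %.3f)" % (D_fit, D_true))

In practice it is the quality of the ϕ ∝ √ω fit itself that validates the semi-infinite, point-source assumptions discussed above.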
http://arxiv.org/abs/2312.16640v1
{ "authors": [ "Erik D. Kountz", "Chaitanya R. Murthy", "Dong Chen", "Linda Ye", "Mark Zic", "Claudia Felser", "Ian R. Fisher", "Steven A. Kivelson", "Aharon Kapitulnik" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231227170231", "title": "Thermal transport measurements of the charge density wave transition in CsV$_3$Sb$_5$" }
Analytical Insight of Earth: A Cloud-Platform of Intelligent Computing for Geospatial Big Data [============================================================================================== § INTRODUCTION The ongoing three-body spectroscopy program <cit.> aims to use the lattice formulation of QCD to precisely determine properties of hadronic resonances characterized by significant three-particle decay channels. These include curious states like ω(782), π_1(1600), or T_cc^+(3872), and many more. One study such hadrons with the generalization of the Lüscher's formula <cit.>, known as the three-body quantization condition <cit.>. Analogously to the Lüscher's formalism, it allows one to translate the spectrum of the finite-volume theory into the → scattering amplitude, _3. Since resonant states manifest as complex poles of _3, one has to analytically continue this function to complex energies to extract their masses and widths.Such a procedure has been well established as a required supplement to the quantization condition in the two-body analyses of the lattice data; see Refs. <cit.> for the recent advanced examples. Rigorous understanding of the amplitude's analytic properties became obligatory when determining a model-independent interpretation of states appearing in realistic scattering systems <cit.>. This necessity is even more pressing when describing the → reactions, which require a solution to a system of integral equations <cit.> containing singular, multi-variable functions of the particles momenta <cit.>. In this article, we desire to present a general procedure for the analytic continuation of these relativistic three-body integral equations, as developed and discussed in Refs. <cit.>. These two works extend and clarify techniques introduced and studied by Hetherington and Schick <cit.>, Brayshaw <cit.> and Glöckle <cit.> in the non-relativistic context. The procedure is applicable in the infinite-volume analyses of the three-body lattice spectrum. We start with a short review of the infinite-volume counterpart of the three-body scattering formalism of Ref. <cit.> in Sec. <ref>. In Sec. <ref>, we discuss the analytic continuation of the amplitude defined via the Lippmann–Schwinger-like integral equation. We try to keep the discussion simple and conceptual, hoping to make it a useful introduction to Ref. <cit.>. In Sec. <ref>, we present the results of applying this procedure to the simple → process. We shortly discuss curious properties of the spectrum unique to the three-body physics—such as the cyclic trajectories of resonances and the Efimov phenomenon. We close the article in Sec. <ref> with a summary.§ RELATIVISTIC THREE-BODY SCATTERING EQUATIONFollowing Refs. <cit.> we study the connected three-body amplitude _3 within a framework of a generic relativistic scalar EFT <cit.>. The amplitude describes the probability for elastic scattering of three identical spin-zero particles of mass m, denoted here by φ. For simplicity, we consider exclusively the S-wave reaction, i.e., a single partial-wave component of _3. As discussed in Ref. <cit.>, it does not affect the generality of our procedure, which applies to sectors of higher angular momentum. To further simplify the description of the problem, we make a stronger assumption that the short-range three-body couplings, described jointly by the so-called three-body K matrix, _df,3, vanish <cit.>. The scattering process becomes generated solely by repeated on-shell particle exchanges (OPEs). 
We note that it is likely necessary to include a non-zero K matrix to accurately describe three-body states occurring in nature. However, a large enough class of possible K's does not substantially affect the method of analytic continuation described below. It justifies proceeding with the simplified model.As for the kinematics of the reaction, the particles interact with the total invariant mass squared s=E^2, where E is the energy in their c.m. frame. Furthermore, we divide a three-particle state into a single particle (a spectator) and remaining particles in a corresponding two-body subchannel (a pair).[In the language of Ref. <cit.>, we always investigate the “unsymmetrized" amplitude ^(u,u)_3 of a quasi two-body pair-spectator process.] In the total c.m. frame, a spectator of momentum k = ||̨ has energy ω_k = √(k^2 + m^2). The corresponding pair is characterized by the total invariant mass squared, σ_k = (√(s) - ω_k)^2 - k^2. We use s and initial and final spectator momenta, k and p, as the independent variables describing the scattering. In this setting, one obtains the amputated partial-wave projected amplitude d(p,k), _3(p,k) ≡_2(p) d(p,k)_2(k) by solving the (ladder) integral equation, d(p,k) =- G(p,k)- ∫_0^k_ max d k' k'^2/(2 π)^2 ω_k'G(p,k') _2(k')d(k',k), where the implicit s dependence is assumed <cit.>. We use the letter d for the amplitude to distinguish it from the case of _df,3≠ 0.[A non-zero three-body K matrix leads to an additional contribution to _3, not discussed in this article.] The equation is similar to the Lippmann–Schwinger equation for the off-shell two-body T matrix (assuming G is the potential and _2 is the energy denominator) and can be solved numerically using comparable methods <cit.>. However, Eq. (<ref>) describes an on-shell amplitude for a pair-spectator reaction, where G governs on-shell particle exchanges while _2 on-shell subchannel processes.Before defining these objects, let us note that d(p,k) can have complex poles in the variable s corresponding to three-body states. The amplitude has branch cuts associated with the open scattering channels, and these poles can be found either on the first Riemann sheet (three-body bound states) or the nearest unphysical sheet (virtual states and resonances) corresponding to these discontinuities. The problem of identifying states of interest is non-trivial given that the objects building the ladder equation, when viewed as complex functions, have various singularities in all three variables, p,k, and s. We wish to explain their nature, describe the resulting discontinuities of d, and extend the applicability of Eq. (<ref>) to arbitrary complex energies on various Riemann sheets.In the ladder equation, _2 is the S-wave 2φ→ 2 φ amplitude, implied by the nature of interactions between particles in the pair. For concreteness, in the following, we use a low-energy approximation, _2(k') = 16 π√(σ_k')/-1/a - i √(σ_k' / 4 - m^2),where a is the two-body scattering length and the only parameter of the model. This two-body amplitude has a pole at k' = q = λ^1/2(s,m^2,m_b^2)/2√(s), where m_b^2 = 4(m^2-1/a^2 ) and λ(x,y,z) = x^2 + y^2 + z^2 - 2(xy+yz+zx). Moreover, _2(k') has branch points at the threshold, k' = k_r = λ^1/2(s, m^2, (2m)^2)/2 √(s), and pseudo-threshold, k' = k_l = λ^1/2(s, m^2, 0)/2 √(s). We orient the associated cuts to run to the complex infinity. We note that all these singularities in k' are “movable", i.e., are functions of another parameter, s. 
Moreover, they have mirror copies in the complex k' plane obtained by their reflection with respect to the point k'=0. The presence of the branch points implies that the two-body amplitude has multiple Riemann sheets associated with the listed cuts. For instance, the second-sheet value of _2(k'), corresponding to the cut starting at k_r, is, _2^II(k') = 16 π√(σ_k')/-1/a + i √(σ_k' / 4 - m^2). The second building block of Eq. (<ref>) is the S-wave-projected OPE amplitude, G, which governs a probability for a boson exchange between the incoming and outgoing pairs, G(p, k) = - H(p, k)/4pk log( z(p, k) - 2pk + i ϵ/z(p, k) + 2pk + i ϵ) . The function z(p, k) = (√(s)-ω_k - ω_p)^2 - k^2 - p^2 - m^2. Quadratic momentum dependence in the argument of the logarithm leads to a convoluted formula <cit.> parameterizing the cut of G(p,k) in the complex p plane, p_+(s,k,x) = 1/2β_x ( k x (β_1 + i ϵ) + √(β_0) √((β_1 + i ϵ)^2 - 4 m^2 β_x) ) .Here we introduced an auxiliary function, β_x = (√(s) - ω_k)^2 - x^2 k^2 and a real parameter x ∈ [-1,1]. The cut runs between two branch points reproduced by Eq. (<ref>) for x=± 1. Similarly to singularities of _2, it has a mirror copy given by -p_+. The factor H(p,k) in Eq. (<ref>) describes a regularization choice <cit.>. We consider two types of cut-offs represented by a smooth or discontinuous (“hard" cut-off) function H(p,k) <cit.>. In both cases, the upper limit of the integration is k_ max = k_l, i.e., it does not exceed the value of the pseudo-threshold of _2. It ensures that the pseudo-threshold cut of the two-body amplitude does not affect the analytic properties of d(p,k).It is the presence of the logarithmic singularity in Eq. (<ref>) that makes the solution and analytic continuation of Eq. (<ref>) non-trivial, as known from the beginnings of the quantum-mechanical three-body problem <cit.>. In particular, for some values of external momenta and total invariant mass, the branch points of the OPE coincide with the integration interval, requiring its generalization to a complex deformed contour. In the most complicated case, both mirror copies of the branch cut can merge and form the so-called circular cut surrounding the integration endpoint. Although the position of the branch cuts is a matter of convention which, in principle, could be altered to produce a simpler geometry of these singularities, in practice, such redefinitions typically lead to equivalent or additional complications. Below, we describe how to deal with them.§ ANALYTIC CONTINUATION OF THE LADDER EQUATION§.§ General idea It is insufficient to generalize s from a real to a complex variable and then solve the integral equation (numerically) to obtain the amplitude for an arbitrary energy. As mentioned in the previous section, it happens that for specific values of s, singularities of the integration kernel K(p,k') = -G(p,k') _2(k') cross the integration interval, invalidating simplistic extrapolation of the three-body equation to complex energies.Forgetting for a moment about the inhomogeneous term of the ladder equation, we can view the amplitude d(s) = d(p,k) as a complex function defined as an integral of another (unknown) function, d(s) =∫_(0,k_max)f(k', s) dk' + ….Here f(k', s) = K(p,k') d(k', k), where we highlight the dependence of the integrand on the argument k' and a complex parameter s and keep the (p,k) dependence implicit. The integration contour, (0,k_max) starts at k' = 0 and ends at k' = k_max. 
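Before discussing where this contour must be deformed, it may help to see the ladder equation in a discretized form. The sketch below is a minimal Nystrom-type solver on the undeformed real interval (0, k_max), written for a deliberately benign kinematic point where no deformation is needed: real s below the three-body threshold, a < 0 so that M_2 has no bound-state pole, and H ≡ 1. All parameter values, node counts and the choice of external momentum are illustrative, and the sketch is not meant to reproduce the numerics of the cited works.

import numpy as np

m, a = 1.0, -2.0                    # units of the particle mass; a < 0: no two-body bound state
s = 8.5*m**2                        # real energy below the three-body threshold (3m)^2
rs = np.sqrt(s)

omega = lambda k: np.sqrt(k**2 + m**2)
sigma = lambda k: (rs - omega(k))**2 - k**2

def M2(k):                          # S-wave two-body amplitude quoted above
    sig = sigma(k) + 0j
    return 16*np.pi*np.sqrt(sig)/(-1.0/a - 1j*np.sqrt(sig/4 - m**2))

def G(p, k, eps=1e-9):              # S-wave OPE amplitude with H = 1
    z = (rs - omega(p) - omega(k))**2 - p**2 - k**2 - m**2
    return -np.log((z - 2*p*k + 1j*eps)/(z + 2*p*k + 1j*eps))/(4*p*k)

kmax = (s - m**2)/(2*rs)            # k_l = lambda^(1/2)(s, m^2, 0)/(2 sqrt(s))
x, wgl = np.polynomial.legendre.leggauss(64)
kq = 0.5*kmax*(x + 1.0)             # quadrature nodes on (0, k_max)
wq = 0.5*kmax*wgl

cj = wq*kq**2/((2*np.pi)**2*omega(kq))
K  = cj[None, :]*G(kq[:, None], kq[None, :])*M2(kq)[None, :]
k_ext = 0.5*kmax                    # external spectator momentum, arbitrary choice
d = np.linalg.solve(np.eye(kq.size) + K, -G(kq, k_ext))   # d(k_i, k_ext) on the nodes
print(d[:3])

For complex s, or for a > 0, the singularities catalogued above move onto or close to (0, k_max) and this straight-line quadrature must be replaced by a deformed contour.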
Although we do not know f(k',s) beforehand, we know the positions of its singularities in k' and how they change with s. It is enough to infer the analytic structure of d(s) in the complex s plane <cit.>. In particular, its singularities are present not only when the integrand exhibits an explicit non-analyticity in s but also when an s-dependent (movable) singularity in the k' variable coincides with the lower endpoint of the integration contour, k'=0.Let us illustrate it with a concrete example. The d amplitude has two branch points corresponding to two reaction thresholds, implied by the S matrix unitarity. One occurs at s_φ b = (m+m_b)^2 and is associated with a physical φ particle scattering off the 2φ bound-state, denoted here by b. It is present only for a>0, i.e., when amplitude in Eq. (<ref>) develops a bound-state pole. The branch cut is algebraic in nature and has two associated Riemann sheets. The other branch point, at s_3φ = (3m)^2, is related to the possibility of a physical 3 φ→ 3 φ process and is independent of a. It is logarithmic and has an infinite but countable number of Riemann sheets <cit.>. The origin of these two singularities of d(s), in the context of Eq. (<ref>), is as follows. The integrand, f(k',s), has a pole at k'= q, which travels in the complex k' plane with varying s. The pole collides with the point k'=0 at s = s_φ b, as seen from Eq. (<ref>). On the other hand, the three-body cut is born from the collision of the two-body amplitude's branch point at k' = k_r with k'=0 at energy s = s_3 φ. These two structures are independent of G(p,k'). However, in addition to the two threshold singularities required by unitarity, the amplitude develops an unphysical singularity from the collision of the OPE branch point p_+ with k'=0. We do not discuss it here and refer the reader to Ref. <cit.> for more details. Finally, for each (p,k), the three-body amplitude inherits an explicit logarithmic cut from the OPE amplitude in the first term of Eq. (<ref>), so far neglected in this analysis.§.§ Appropriate integration contour Whenever singularities of the kernel, q, k_r, p_+, cross the integration interval (but not the endpoint), one can avoid integrating over them by deforming the integration path. As inferred by the Cauchy theorem, this allows one to analytically continue d(s) to values of s where the naïve extrapolation is ill-defined. The shape of the contour (i.e., the direction from which it circumvents the poles and branch points) defines the Riemann sheet at which one probes d(s).The deformed integration contour must belong to the region of the complex plane where the function f(k',s) is analytic. It imposes a set of restrictions on the acceptable paths (0,k_max). The most strict constraint is imposed by the need to avoid the logarithmic cut of Eq. (<ref>). This cut is present not only in G(p,k') but also in d(k',s), which acquires it from both terms on the right-hand side of Eq. (<ref>).Focusing on the first one, for a fixed choice of s and k, G(p,k) contributes the logarithmic cut to the p dependence of d(p,k). One avoids it by an appropriate choice of the contourin the homogeneous term of the integral equation. In the second term of Eq. (<ref>), G(p,k') contributes to d(p,k) one cut in the complex p plane for each value of k' ∈. Altogether, they cover a region in the complex p plane where d(p,k) is non-analytic. This region, referred to as the domain of non-analyticity, _, depends on the contour introduced in the earlier step. 
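In practice, drawing such a contour starts from tabulating where the movable singularities sit for the complex s of interest. The following sketch evaluates the M_2 pole and branch points and traces the OPE cut p_+(s,k,x) from the expressions quoted above; the masses, scattering length and energies are illustrative, and the principal square roots used here may need a different branch choice near collisions of branch points.

import numpy as np

m = 1.0
lam = lambda x, y, z: x**2 + y**2 + z**2 - 2*(x*y + y*z + z*x)

def m2_singularities(s, a):
    # M_2 pole q (meaningful for a > 0), threshold k_r and pseudo-threshold k_l in the k' plane
    rs = np.sqrt(s + 0j)
    mb2 = 4*(m**2 - 1.0/a**2)
    q   = np.sqrt(lam(s, m**2, mb2) + 0j)/(2*rs)
    k_r = np.sqrt(lam(s, m**2, 4*m**2) + 0j)/(2*rs)
    k_l = np.sqrt(lam(s, m**2, 0.0) + 0j)/(2*rs)
    return q, k_r, k_l

def ope_cut(s, k, eps=1e-9, n=201):
    # points p_+(s,k,x), x in [-1,1], tracing the OPE branch cut in the complex p plane
    rs = np.sqrt(s + 0j)
    wk = np.sqrt(k**2 + m**2 + 0j)
    x  = np.linspace(-1.0, 1.0, n)
    b_x, b_1, b_0 = (rs - wk)**2 - (x*k)**2, (rs - wk)**2 - k**2, (rs - wk)**2
    return (k*x*(b_1 + 1j*eps) + np.sqrt(b_0)*np.sqrt((b_1 + 1j*eps)**2 - 4*m**2*b_x))/(2*b_x)

# Track the singularities as s acquires a negative imaginary part (toward a resonance search)
for s in (9.3 + 0.0j, 9.3 - 0.3j):
    q, kr, kl = m2_singularities(s, a=2.0)
    cut = ope_cut(s, k=0.4)
    print(s, "q =", q, "k_r =", kr, "k_l =", kl, "| cut endpoints:", cut[0], cut[-1])

Such a tabulation is the raw input for choosing a path that avoids both these singularities and the domain of non-analyticity they generate.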
The integration path must detour _ that it defines, i.e., it has to be self-consistent. We refer the reader to Fig. 8 in Ref. <cit.> for an illustration of a typical domain of non-analyticity. Once we ensure that the chosen integration path satisfies the self-consistency criterion, the integral equation can be solved numerically along the complex integration path.§.§ Continuation through the φ b and 3φ cuts As mentioned, one chooses between different self-consistent contours to access unphysical Riemann sheets of d(p,k) associated with the bound-state–particle and three-body thresholds. When s becomes complex, the _2 pole at k' = q and the branch point at k' = k_r travel off the real k' axis. The integration contourrunning from k' = 0 to k' = k_max can bypass the pole or branch point from the top or bottom. By Cauchy's theorem, these two choices lead to solutions different by the value of the loop integral of f(k',s) around the singularity. These different values define distinct Riemann sheets of the three-body amplitude. Formally, this defines the discontinuity and Riemann sheets of the solution to the integral equation via its monodromy <cit.>.We present an example of this situation in Fig. <ref>. Three integration contours correspond to the amplitude d(p,k) on three different Riemann sheets associated with the three-body threshold. Deformation from the straight line, _1, to a complex contour _2, takes one from the physical sheet to the first unphysical one. To maintain the continuity of the integration kernel, the 2nd sheet value of _2(k'), given in Eq. (<ref>), is used when k' belongs to the dashed portion of _2 (i.e., before crossing the cut). Finally, one employs integration contours that encircle the branch point to obtaind(p,k) on higher sheets. For instance, contour _3 allows one to evaluate the solution on the subsequent Riemann sheet. Again, when the integration variable belongs to the dashed part of the contour, _2^II is used to ensure analyticity. For completeness, Sec. IV.E of Ref. <cit.> discusses an example of continuation beyond the φ b branch cut. § THE THREE-BODY SPECTRUMIn the model under study, depending on the value of the two-body scattering length, a, the three-body amplitude has one or more poles on the physical and nearest unphysical Riemann sheets. Using techniques described in the previous section, we can freely access various sheets of the amplitude and explore the dependence of the pole positions on the interaction parameter, changing it from negative to positive infinity. We present the main result of our analysis in Fig. <ref>. Following Efimov <cit.>, we rephrase the binding energy of a given state, Δ E = E - 3m, in terms of the (generalized) binding momentum, κ =sign (Δ E)√(|m Δ E| ) and plot it against 1/ma. We do it for the first three states (trimers) only. Some properties of the three-body spectrum are worth pointing out: * As observed most clearly on insets A and B, energy levels of trimers are related by a simple rescaling. It is known as the Efimov phenomenon in the non-relativistic three-body problem and occurs for an infinite number of excited states (not shown). Although, due to the relativistic corrections, the ground state energy only approximately obeys this property, its behavior remains similar to that of the excited states. * The excited three-body resonances follow closed trajectories approaching the three-body threshold at two different values of a. 
They evolve from narrow near-threshold resonances to objects more closely resembling virtual states. Interestingly, we find that any two subsequent poles coincide at the threshold at the same value of this parameter. Our finding agrees with the non-relativistic investigation performed in Refs. <cit.>.* The depicted evolution of excited resonances ends abruptly at these coincidence points. This behavior highlights the importance of the infinite number of Riemann sheets in a single-channel three-body problem. We claim that the trajectories of resonances extend to the higher sheets of the logarithmic three-body cut (not shown here), which in principle can be studied with contours of type _3 from Fig. <ref>.Most of these features are likely unique to the examined model. However, given that our parametrization of _2 is valid for any low-energy two-body scattering process and that the kinematic one-particle exchange between interacting pairs is a generic feature of the three-body scattering, it is conceivable that analysis of the more involved realistic systems features a spectrum qualitatively similar to the one explained here. Given the available knowledge of the non-relativistic three-body systems <cit.>, we do not expect the emergence of the exact discrete scaling symmetry or perfectly cyclic trajectories in studies of realistic → reactions. Nevertheless, the presented example of the three-body spectrum may serve as a tool guiding searches for three-body resonances and the state-of-the-art Lattice three-body calculations <cit.>. § CONCLUSIONSIn the trilogy of Refs. <cit.> we have shown how to numerically solve and analytically continue the relativistic three-body integral equation derived from a generic EFT formulation and linked to a widely used finite-volume quantization condition <cit.>. We paid special attention to describing our approach in a systematic and ready-to-implement manner. Similar methods will be required in future Lattice QCD analyses of the multi-body hadronic spectrum.As explained in the text, the presented techniques rely on the integration contour deformation and careful analysis of the singularities of the constituents of the formalism. They allow one to extend the validity of the original equation to the complex energies and determine resonance poles of the → amplitude. As a guiding example, we applied them to the simplest example of three-body interactions. We studied the evolution of the resulting spectrum across various Riemann sheets of the complex energy plane by varying the interaction strength. Within the infinite-volume fully relativistic framework, we recovered both the known non-relativistic <cit.> and finite-volume <cit.> results.Generalization of the analysis presented here to coupled-channel problems involving non-degenerate particles is underway. Given the conceptual development of the three-body infinite-volume techniques, and parallel advancement in their finite-volume counterparts, we believe the field of hadronic spectroscopy reached a stage where soon one will be able to perform a reliable lattice computation of the physical three-body states, such as T_cc^+ or χ_c1(3872). § ACKNOWLEDGEMENTS The author acknowledges the financial support through the U.S. Department of Energy Contract no. DE-SC0011637.JHEP
http://arxiv.org/abs/2312.16380v1
{ "authors": [ "Sebastian M. Dawid" ], "categories": [ "hep-lat", "nucl-th" ], "primary_category": "hep-lat", "published": "20231227023815", "title": "Analytic continuation of the finite-volume three-particle amplitudes" }
Photoemission of spin-polarized electrons from aligned grains and chiral symmetry breaking Thiem Hoang Received ...; accepted... ========================================================================================== Let B=(B_t)_t≥ 0 be a standard Brownian motion. The main objective is to find a uniform (in time) control of the modulus of continuity of B in the spirit of what appears in <cit.>. More precisely, it involves the control of the exponential moments of the random variable sup_0≤ s≤ t |B_t-B_s|/w(t,|t-s|) for a suitable function w. A stability inequality for diffusion processes is then derived and applied to two simple frameworks. *Keywords: Brownian motion, Modulus of continuity, stability, strong approximation.*MSC classification: Primary 60J65 ; 60G17, Secondary 60J60 ; 60F15. § INTRODUCTION Let f:ℝ_+→ℝ be some function and T>0 be a positive time horizon. Then, the modulus of continuity of f on [0,T] is the function ω_f(T,·) defined by, for all h≤ T,ω_f(T,h) := sup_0≤ s<t≤ T, |t-s|≤ h |f(t)-f(s)|.Let B = (B_t)_t∈ℝ_+ be a standard Brownian motion living on the probability space (Ω,ℱ,ℙ) and denote by ω_B its pathwise modulus of continuity as defined above. Of course, this function depends on the path of B and in turn is random. Perhaps the most known result about ω_B is Lévy's modulus of continuity theorem <cit.>, which gives the following equivalent of ω_B for small h,ω _B(1, h) ∼_h→ 0√(2h ln1/h)almost surely.More recently, some bounds were obtained on ω_B. On the one hand, <cit.> proves that for all p>0, there exists an explicit constant C(p) such that for all T>0 and h≤ T,𝔼[ ( ω_B(T,h) )^p] ≤ C(p) ( √(hln( 2T/h)))^p.Moreover, this bound is also derived for general Itô processes. Those results are then applied to the control of the Euler approximation of stochastic delay differential equations.On the other hand, the Remark below Lemma 3.2. in <cit.> states that the random variable M = sup_0< s<t< Th=|t-s||B_t-B_s|/√(h (1+ lnT/h)),is such that M^2 admits exponential moments, that is 𝔼[ exp(λ M^2) ] < ∞ for some λ>0. Of course, it is related with the following bound for the modulus of continuity, ω_B(T,h) ≤ M √(h (1+ lnT/h)).Those results are applied to the derivation of strong diffusion approximation of jump processes. Even if it is not clear from the notation, the random variable M depends on T so that the equation above does not give a bound for the uniform (with respect to T) modulus of continuity.The main objective of this paper is to prove the following bound for the uniform modulus of continuity of Brownian motion.Let B be a standard Brownian motion. Let ε>0 and define the random variableM_B := sup_0< s<t<∞, h=|t-s||B_t-B_s|/√(h (1+ lnt/h + ε |ln t| )).Then, M_B^2 admits exponential moments.The second objective is to derive a stability inequality for diffusion processes in the spirit of what appear in the proofs of the strong diffusion approximation in <cit.>. Finally, two applications of this inequality are given.The paper is organized as follows. Some properties of the quantity that appears in the denominator in the definition of M_B are stated in Section <ref>. A stability inequality for diffusion processes is proved in Section <ref>. This inequality can be used to prove convergence results in the framework of small perturbations of the coefficients of the diffusion in two different frameworks (see Section <ref>). 
Finally, the proof of the main result is given in Section <ref>.§ ABOUT THE UPPER-BOUND Let 0<ε<1 in this section, and define w:ℝ_+^*×ℝ_+^*→ℝ by, for all 0<h≤ t,w(t,h) := √(h (1+lnt/h + ε |ln t| )),andw(t,h) := w(t,t)if 0<t<h. The value w(t,h) for h≤ t is linked with Theorem <ref>, whereas it is defined for h> t in order to satisfy some monotony (see Proposition <ref>). Notice that, in comparison with Equation ??, it is also natural to consider the function w_K:ℝ_+^*×ℝ_+^*→ℝ defined by, for all 0<h≤ t,w_K(t,h) := √(h (1+lnt/h)),andw_K(t,h) := w_K(t,t)if 0<t<h.Of course, w_K(t,h) ≤ w(t,h) but w_K controls the the modulus of continuity for finite time horizons whereas w gives a uniform control. For instance, one can compare Equation (<ref>) with the following corollary of Theorem <ref>. There exists a random variable M_B such that M_B^2 has exponential moments and for all 0<h<t<+∞,ω_B(t,h) ≤ M_Bw(t,h).Theorem <ref> and in turn Corollary <ref> are expected to be valid with w replaced by w_K, but we were not able to prove it. Nevertheless, the additional logarithmic term in w is not a critical flaw: for instance, it only implies an additional logarithmic term in Corollaries <ref> and <ref>, whereas the rate of convergence obtained in Corollary <ref> would be unchanged.Here are listed two nice properties satisfied by our upper-bound function w. The function w is non decreasing, that is∀ t'≥ t≥ 0,h'≥ h≥ 0, w(t',h')≥ w(t,h).Let t'≥ t≥ 0 and h'≥ h≥ 0. It is clear that w(t',h) ≥ w(t,h) and it only remains to prove that w(t,h')≥ w(t,h). The function h↦ h(1+ln(a/h)) is non-decreasing for all positive h≤ a which directly implies (with a=max{t^p,t^q}) that w(t,h)≤ w(t,min{t,h'})= w(t,h') by definition. Obviously, the function w_K is also non decreasing. Furthermore, the function w_K satisfies the same scaling invariance as the Brownian motion. More precisely, for all a>0, w_K(at,ah) = √(a) w_K(t,h). This property is almost satisfied by the function w in the following sense. For all a>0, (t,h)∈ (ℝ_+^*)^2, 1/1+√(ε |ln a|)≤w(at,ah)/√(a) w(t,h)≤ 1+√(ε |ln a|).Let a>0 and (t,h)∈ (ℝ_+^*)^2. Assume that t≥ h. We havew(at,ah) = √(ah ( 1+lnat/ah + ε |ln at| ))≤√(ah( 1 + lnt/h + ε |ln t| + ε | ln a | )).Using the fact that √(b+c)≤√(b)+√(c) when b,c≥ 0 and the fact that √(h)≤ w(t,h), we get w(at,ah) ≤√(a) w(t,h) (1+ √(ε |ln a |)) which corresponds to the upper bound and the same kind of argument gives the lower bound.Finally, if h>t, then the same kind of argument can be applied to w(at,ah) = w(at,at) and w(t,h)=w(t,t). In particular, this property can be used to compare the modulus of continuity of the Brownian motion B and its space-time scaling. More precisely, we have the following corollary of Theorem <ref>. Let a>0 and define the scaled Brownian motion B̃ by B̃_t = a^-1/2 B_at for all t≥ 0. Then, M_B̃ = sup_0≤ s<t<+∞|B̃_t-B̃_s|/w(t,|t-s|)≤( 1 + √(ε |ln a|)) M_B,where M_B is the random variable defined in Theorem <ref>. Let a>0 and 0≤ s<t<+∞. By Proposition <ref>, we know that √(a)w(t,|t-s|) ≥ w(at,a|t-s|)/(1+√(ε |ln a|)). Hence, |B̃_t-B̃_s|/w(t,|t-s|) = |B_at - B_as|/√(a) w(t,|t-s|)≤|B_at - B_as|/w(at,a|t-s|)(1+√(ε |ln a|)),which gives the result.§ STABILITY INEQUALITY FOR DIFFUSIONSThe main result of this section is Proposition <ref>. It is a general stability inequality for diffusions which can be used in particular to provide explicit rates for strong convergence results (see Section <ref>). 
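Since the random variable M_B of Theorem <ref> is the only non-explicit ingredient entering the bounds below, a quick simulation may help convince oneself that it is typically of moderate size. The sketch below evaluates the supremum of Corollary <ref> over the pairs of points of a discretized Brownian path; the grid supremum only gives a lower bound on M_B, and the value ε = 0.5, the horizon T and the grid size are arbitrary illustrative choices.

import numpy as np
rng = np.random.default_rng(0)

eps = 0.5                               # the epsilon entering w; arbitrary illustrative value

def w(t, h):
    h = np.minimum(h, t)                # w(t,h) = w(t,t) for h > t
    return np.sqrt(h*(1.0 + np.log(t/h) + eps*np.abs(np.log(t))))

n, T = 5000, 50.0
t = np.linspace(0.0, T, n + 1)
B = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(T/n), size=n))])

M = 0.0
for i in range(1, n + 1):               # sup of |B_t - B_s| / w(t, t - s) over grid pairs s < t
    M = max(M, np.max(np.abs(B[i] - B[:i])/w(t[i], t[i] - t[:i])))
print("empirical lower bound on M_B:", M)

Repeating this over many independent paths and horizons gives empirical support for the exponential integrability of M_B^2, although it of course proves nothing.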
§.§ Setting Let X and X be two diffusion processes satisfyingX(t) = x_0 + ∫_0^t b(s,X(s)) ds + B(Λ(t)) X(t) = x_0 + ∫_0^tb(s,X(s)) ds + B(Λ(t)),where Λ(t) := ∫_0^tσ(s,X(s))^2dsand Λ(t) := ∫_0^tσ(s,X(s))^2ds. In the whole paper, we assume that those two equations admit strong solutions. For instance, this is guaranteed if the drift b and the diffusion σ are both sub-linear and Lipschitz functions (see <cit.> for instance). For instance, the equation for X is equivalent to the Ito equationX(t) = x_0 + ∫_0^t b(s,X(s)) ds + ∫_0^tσ(s,X(s)) dW_s,where W is a standard Brownian motion. See for instance <cit.> for details on this equivalence. The functions g,g : ℝ_+×ℝ^d→ℝ^k are said to be Lipschitz-bounded-close with constant L and non-decreasing functions K,D:ℝ_+→ℝ_+, abbreviated as LBC(L,K,D), if for all t∈ℝ_+ and x,x∈ℝ^d,|g(t,x) - g(t,x)| ≤ L |x-x|, max{ |g(t,x)|, |g(t,x)| }≤ K(|x|),|g(t,x) - g(t,x)| ≤ D(|x|).By extension, g: ℝ_+×ℝ^d→ℝ^k is said to be Lipschitz-bounded with constant L and non-decreasing function K, abbreviated as LB(L,K), if the first two lines of (<ref>) are satisfied.Here are gathered the assumptions made on the parameters of the model.The functions b,b:ℝ_+×ℝ→ℝ are LBC(L_b,K_b,D_b) and the functions σ,σ:ℝ_+×ℝ→ℝ are such that σ^2 and σ^2 are LBC(L_σ,K_σ,D_σ). §.§ The result In this following, we denote X^*(t) = sup_0≤ s≤ t |X(s)| and X^*(t) = sup_0≤ s≤ t |X(s)|. The following result is highly related to and inspired from <cit.>. Let X and X be the two processes defined by (<ref>). Let M_B be the random variable defined in Theorem <ref>.Under Assumption <ref>, for all T>0, γ(T):= sup_0≤ t≤ T |X(t) - X(t)| satisfiesγ(T) ≤ 1 + 2e^2L_bT[ |x_0-x_0| + TD_b(X^*(t)) + M_Bw(TK_σ(X^*(t)), TD_σ(X^*(t))) + M_B^2w( TK_σ(X^*(t)+X^*(t)) , TL_σ)^2].Let us define the intermediate integrated diffusion coefficient Λ̃ asΛ̃(t) = ∫_0^tσ^2(s,X(s)) ds.The assumptions made on σ^2 imply thatΛ(t) ≤ K_σ(X^*(t)), Λ̃_i(t) ≤ K_σ(X^*(t)) andΛ_i(t) ≤ K_σ(X^*(t)). The difference between X and X can be decomposed into X(t)-X(t) = ∑_j=1^5 A_j(t) withA_1(t) := x_0-x_0,A_2(t) := ∫_0^t b(s,X(s)) - b(s,X(s)) ds,A_3(t) := ∫_0^tb(s,X(s)) - b(s,X(s)) ds,A_4(t) := B(Λ(t)) - B(Λ̃(t)),A_5(t) := B(Λ̃(t)) - B(Λ(t)).Thanks to the assumptions on the model, we have|A_1(t)| ≤ |x_0-x_0|,|A_2(t)| ≤ tD_b(X^*(t)),|A_3(t)| ≤ L_b∫_0^t |X(s) - X(s)| ds.The last two terms, namely A_4(t) and A_5(t), can be bounded by using the monotony of w. On the one hand, since max{Λ(t), Λ̃(t)}≤ tK_σ(X^*(t)) and |Λ(t) - Λ̃(t)| ≤ tD_σ(X^*(t)) we have|A_4(t)| ≤ M_B w(tK_σ(X^*(t)), tD_σ(X^*(t))).On the other hand, remind that γ(T)=sup_0≤ t≤ T |X(t) - X(t)|. Since max{Λ̃(t), Λ(t)}≤ tK_σ(X^*(t) + X^*(t)) and, using the fact that σ^2 is Lipschitz, |Λ̃(t) - Λ(t)| ≤ L_σ∫_0^t |X(s) - X(s)| ds ≤ TL_σγ(T), we have |A_5(t)| ≤ M_B w( T K_σ(X^*(T) + X^*(T)) , TL_σγ(T) ).Hence, Gronwall's Lemma gives, for all t≤ T,|X(t) - X(t)| ≤ e^L_b t(Δ (T) + M_B w( T K_σ(X^*(T) + X^*(T)) , TL_σγ(T) ) ).withΔ(T) := |x_0-x_0| + TD_b(X^*(T)) + M_Bw(TK_σ(X^*(T)), TD_σ(X^*(T))). Now, either γ(T)≤ 1 in which case Equation (<ref>) is trivially satisfied, or γ(T) >1 in which case, using some property of the function w, Equation (<ref>) givesγ(T) ≤ e^L_bTΔ(T) + e^L_bTM_Bw( T K_f(X^*(T)+X^*(T)) , TL_f) √(γ(T)).Yet, inequality of the form γ≤ a+b√(γ) implies that γ≤ 2a + b^2 which ends the proof. § APPLICATIONS OF THE STABILITY INEQUALITYIn this section, we use the notation N>2 for a scaling parameter and the notation α>0 for a parameter which controls the rate of the scaling. 
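Before specializing to these scalings, the following simulation sketch illustrates the spirit of the stability inequality just proved: two diffusions whose coefficients are Lipschitz, bounded and pointwise close are driven by the same noise and their sup-distance is monitored. The Euler scheme below couples the paths through shared Brownian increments, which only mimics the time-change coupling used in the proof, and the coefficients, the closeness parameter and the horizon are arbitrary illustrative choices.

import numpy as np
rng = np.random.default_rng(1)

delta = 0.01                                  # plays the role of D_b and D_sigma
b1 = lambda t, x: np.sin(x)
b2 = lambda t, x: np.sin(x) + delta
s1 = lambda t, x: 1.0 + 0.5*np.cos(x)**2      # sigma^2
s2 = lambda t, x: 1.0 + 0.5*np.cos(x)**2 + delta

T, n = 10.0, 100_000
dt = T/n
x1 = x2 = 0.0
gap = 0.0
for i in range(n):
    dW = rng.normal(scale=np.sqrt(dt))        # same increment drives both paths
    x1 += b1(i*dt, x1)*dt + np.sqrt(s1(i*dt, x1))*dW
    x2 += b2(i*dt, x2)*dt + np.sqrt(s2(i*dt, x2))*dW
    gap = max(gap, abs(x1 - x2))
print("sup over [0,T] of |X - Xbar| ~", gap)

For coefficients this close the observed sup-distance stays small, in line with, and typically much smaller than, the upper bound of the proposition.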
Moreover, we consider b and σ two functions such that b is LB(L_b,K_b) and σ^2 is LB(L_σ,K_σ). Finally, we assume for simplicity that the functions K_b and K_σ are constant in this whole section. §.§ Spatially independent diffusion coefficient Here, we assume that L_σ=0, that is σ(t,x)=σ(t) is constant with respect to the space variable. For all N>2, let us defineb^N(t,x) := b(t,x) + N^-αD_b and σ^N(t,x)^2 := σ(t,x)^2 + N^-2αD_σ,where D_b and D_σ are two constants for simplicity. Let x_0∈ℝ and define x_0^N = x_0 + N^-αD_x for some constant D_x∈ℝ. Then, let X^N and X satisfyX^N(t) = x^N_0 + ∫_0^t b^N(s,X^N(s)) ds + B(Λ^N(t)) X(t) = x_0 + ∫_0^tb(s,X(s)) ds + B(Λ(t)),where Λ^N(t) := ∫_0^tσ^N(s)^2dsand Λ(t) := ∫_0^tσ(s)^2ds.We are now in position to state the following corollary of Proposition <ref>. Let η>0. There exists a random variable Ξ such that Ξ^2 admits exponential moments and, for all N>1,sup_0≤ t < ∞|X^N(t) - X(t)|/e^2(L_b + η)t≤Ξln N/N^α. Let us consider the processes Y^N and Y^N defined by Y^N := N^αX^N and Y^N:= N^αX^N which satisfyY^N(t) = N^αx^N_0 + ∫_0^t N^αb^N(s,N^-αY^N(s)) ds + B̃(Λ̂^N(t)) Y^N(t) = N^αx_0 + ∫_0^t N^αb(s,N^-αY^N(s)) ds + B̃(Λ̃^N(t)),where B̃(t) = N^αB(N^-2α t) defines a standard Brownian motion and,Λ̂^N(t) := ∫_0^t N^2ασ^N(s)^2dsand Λ̃^N(t) := ∫_0^t N^2ασ(s)^2ds.The scaling used in the definition of Y^N and Y^N magnifies the difference between X^N and X^N (which is expected to be of order N^-α). Hence, the difference between the Y processes is expected to be of order 1 and we are now in position to apply Proposition <ref>. Let M_B̃ be the random variable of Theorem <ref> associated with the Brownian motion B̃. The drift coefficients involved in Equation (<ref>) are LBC(L_b, N^αK_b + D_b, D_b) and the square diffusion coefficients are LBC(0, N^2αK_σ + D_σ, D_σ), so it follows that for all T>0, |Y^N(T) - Y^N(T)| ≤ 1+ A(T), where A(T) = 2e^2L_bT[ D_x + TD_b + M_B̃w(T(N^2αK_σ + D_σ), TD_σ) ].Yet, there exists some deterministic constant C>0 such thatD_x + TD_b≤ C e^2η Tandw(T(N^2αK_σ + D_σ),TD_σ) ≤ C e^η T√(1+ ln N).Moreover, Corollary <ref> implies that M_B̃≤ M_B (1 + √(2αεln N)), where M_B is the random variable of Theorem <ref> associated with the initial Brownian motion B.Finally, we have, for all t>0,|X^N(t) - X^N(t)|/e^2(L_b+η)t≤ 2 C N^-α[ 1 + M_B (1 + √(2αεln N)) √(1+ln N)],which gives the desired result since M_B^2 admits exponential moments.§.§ Diffusion approximation of an ODE Here, L_σ may be non null. For all N>2, let us defineb^N(t,x) := b(t,x) + N^-αD_b and σ^N(t,x)^2 := σ(t,x)^2 + N^-αD_σ,where D_b and D_σ are two constants for simplicity. Let x_0∈ℝ and define x_0^N = x_0 + N^-αD_x for some constant D_x∈ℝ. Then, let X^N and X^N satisfyX^N(t) = x^N_0 + ∫_0^t b^N(s,X^N(s)) ds + N^-αB( N^αΛ^N(t)) X^N(t) = x_0 + ∫_0^tb(s,X^N(s)) ds + N^-αB(N^αΛ(t)),where Λ^N(t) := ∫_0^tσ^N(s,X^N(s))^2dsand Λ^N(t) := ∫_0^tσ(s,X^N(s))^2ds. Notice that the diffusion part of Equation (<ref>) vanishes when N goes to infinity. In particular, one could prove that both X^N and X^N converge (at rate N^-α/2) to the solution of the ordinary integral equation x(t) = x_0 + ∫_0^tb(s,x(s))ds. The aim here is to prove that X^N and X^N are close at the finer scale N^-α (up to logarithmic term).We are now in position to state the following corollary of Proposition <ref>. Let η>0. 
There exists a random variable Ξ with exponential moments such that, for all N>1,sup_0≤ t < ∞|X^N(t) - X^N(t)|/e^2(L_b + η)t≤Ξln N/N^α.Notice that an equivalent result can be obtained from the proof of <cit.>: for any T>0, sup_0≤ t < T |X^N(t) - X^N(t)| ≤Ξ Te^2L_b Tln N/N^α.Hence, at the price of replacing a linear term in t by an arbitrary small exponential term (and without any loss in the rate of convergence with respect to N) we are able to get a uniform control with respect to t. Let us consider the processes Y^N and Y^N defined by Y^N := N^αX^N and Y^N:= N^αX^N which satisfyY^N(t) = N^αx^N_0 + ∫_0^t N^αb^N(s,N^-αY^N(s)) ds + B(Λ̂^N(t)) Y^N(t) = N^αx_0 + ∫_0^t N^αb(s,N^-αY^N(s)) ds + B(Λ̃^N(t)),where Λ̂^N(t) := ∫_0^t N^ασ^N(s,N^-αY^N(s))^2dsand Λ̃^N(t) := ∫_0^t N^ασ(s,N^-αY^N(s))^2ds.The scaling used in the definition of Y^N and Y^N magnifies the difference between X^N and X^N (which is expected to be of order N^-α). Hence, the difference between the Y processes is expected to be of order 1 and we are now in position to apply Proposition <ref>. Let M be the random variable of Theorem <ref> associated with the Brownian motion B. The drift coefficients involved in Equation (<ref>) are LBC(L_b, N^αK_b + D_b, D_b) and the square diffusion coefficients are LBC(L_σ, N^αK_σ + D_σ, D_σ), so it follows that for all T>0, |Y^N(T) - Y^N(T)| ≤ 1+ A(T), where A(T) = 2e^2L_bT[ D_x + TD_b + Mw(T(N^αK_σ + D_σ), TD_σ) + M^2w( T(N^αK_σ + D_σ), TL_σ)^2]Yet, there exists some deterministic constant C>0 such thatD_x + TD_b≤ C e^2η Tandw(T(N^αK_σ + D_σ), max{ TD_σ, TL_σ}) ≤ C e^η T√(1+ ln N).Finally, we have, for all t>0,|X^N(t) - X^N(t)|/e^2(L_b+η)t≤ 2 C N^-α[ 1 + M √(1+ln N) + M^2(1+ln N) ],which gives the desired result since M^2 admits exponential moments.§ PROOF OF THEOREM <REF>§.§ A modified Garsia–Rodemich–Rumsey lemma Let us introduce some notation. Let Ψ and μ be two non decreasing functions from ℝ_+ to ℝ_+. Furthermore, assume that μ is continuous, μ(0)=0, lim_x→ +∞Ψ(x) = +∞ and define Ψ^-1 : [Ψ(0),+∞) byΨ^-1(u) := sup{v, Ψ(v)≤ u}. The following lemma is a simple extension of <cit.>.For any T>0, let f:[0,T]→ℝ be a continuous function such that∫_0^T∫_0^TΨ(|f(t)-f(s)|/μ(t-s)) dtds ≤ B_T <+∞.Then, for all t,s∈ [0,T], |f(t)-f(s)| ≤ 8 ∫_0^|t-s|Ψ^-1(4B_T/u^2)dμ(u).For any T>0, let f_T:[0,1]→ℝ be defined by f̃(x) := f(Tx) and let us define μ_T in the same way. By a change of variable Tx→ t and Ty→ s, we have∫_0^1∫_0^1Ψ(|f_T(x)-f_T(y)|/μ_T(x-y)) dxdy = 1/T^2∫_0^T∫_0^TΨ(|f(t)-f(s)|/μ(t-s)) dtds ≤B_T/T^2.Then, the functions f_T, Ψ and μ_T satisfies the assumption of <cit.> which implies that, for all x,y∈ [0,1],|f_T(x)-f_T(y)| ≤ 8 ∫_0^|x-y|Ψ^-1(4B_T/T^2v^2) dμ_T(v).Finally, the change of variable Tx→ t, Ty→ s and Tv→ u gives (<ref>)The rest of the proof relies on an application of Lemma <ref> with the functions Ψ and μ defined by, for all x∈ℝ_+,Ψ(x) := e^x^2/2 - 1and μ(x):= √(cx),where c>1 is some constant. Notice that Ψ^-1(y) = √(2ln(y+1)) and dμ(x) = √(c)/2√(x)dx.Let ε>0. For all real number T> 0, let us define the random variableξ_T := f_ε(T) ∫_0^T∫_0^TΨ(|B_t-B_s|/μ(|t-s|)) dsdt,wheref_ε(T) := (1/T+1)^2(1-ε) if T<1,1if T=1,(T-1)^-2(1+ε) if T>1.In particular, for any positive number t, we have: 1) if t≤ 1, 1/⌊ 1/t ⌋ +1 < t ≤1/⌊ 1/t ⌋ and f_ε(1/⌊ 1/t ⌋)≥ t^-2(1-ε); 2) if t≥ 1, ⌈ t ⌉ - 1 < t ≤⌈ t ⌉ and f_ε(⌈ t ⌉)≥ t^-2(1+ε). Finally, let us denote by ξ the sup over all integer or inverse integer times, that is ξ := sup{ξ_T, ξ_1/T |T∈ℕ^*}. For all p∈ (1,c), 𝔼[ξ^p]< +∞. Let p∈ (1,c) and q∈ (1,p). 
For all positive integers T≥ 1, we have by convexity,𝔼[ξ_T^q] ≤𝔼[(ξ_T+T^2f_ε(T))^q] = f_ε(T)^q 𝔼[(∫_0^T∫_0^Texp(|B_t-B_s|^2/2c|t-s|) dsdt)^q]≤f_ε(T)^q T^2(q-1)𝔼[∫_0^T∫_0^Texp(|B_t-B_s|^2/2c|t-s|)^q dsdt] = T^2q/(T-1)^(2+ε)qT^-2∫_0^T∫_0^T𝔼[exp(q/2c(|B_t-B_s|/√(|t-s|))^2)] dsdt.Yet, since the increments of B are gaussian, for all t≠ s,𝔼[exp(q/2c(|B_t-B_s|/√(|t-s|))^2)] = √(c)/√(c-q).Hence, 𝔼[ξ_T^q]≤T^2q/(T-1)^(2+ε)q√(c)/√(c-q),and similarly, 𝔼[ξ_1/T^q]≤(T+1)^(2-ε)q/T^2q√(c)/√(c-q). Denote g(T) := T^2q/(T-1)^(2+ε)q + (T+1)^(2-ε)q/T^2q and remark that g(T) is equivalent to 2T^-ε q as T→∞ which in turn implies summability since ε>0. By Markov's inequality and the union bound, for all integer n≥ 0, ℙ(max(ξ_T^p, ξ_1/T^p) > n) = ℙ(max(ξ_T^q, ξ_1/T^q) > n^q/p) ≤ g(T)n^-q/p. Then, the union bound givesℙ(ξ^p>n) ≤∑_T=1^+∞ g(T) n^-q/p≤ C n^-q/p.Finally, the fact that q/p<1 gives the result (use for instance the fact that 𝔼[ ξ^p] ≤∑_n=0^+∞ℙ(ξ^p>n)). We are now in position to prove Theorem <ref>. Let us first fix some t ≥ 1. By definition of ξ and properties of the function f_ε, we have∫_0^t∫_0^tΨ(|B_x-B_y|/μ(|x-y|)) dxdy ≤ f_ε(⌈ t ⌉)^-1ξ_⌈ t ⌉≤ t^2(1+ε)ξ,Hence, Lemma <ref> implies that∀ x,y∈ [0,t],|B_x-B_y| ≤ 8 ∫_0^|x-y|Ψ^-1(4t^2(1+ε)ξ/u^2) dμ(u).Specializing the equation above with x=t and y=s≤ t and denoting h=|t-s| yields|B_t-B_s|≤ 8 ∫_0^h√(2ln(4t^2(1+ε)ξ/u^2+1))√(c)/2√(u) du,and so|B_t-B_s|≤4√(2c)∫_0^h√(ln(4ξ + u^2/t^2(1+ε)) + ln(t^2(1+ε)/u^2))du/√(u).The ratio u^2/t^2(1+ε) is less than 1 so the second logarithm in the equation above is positive and we can use the inequality √(a+b)≤√(a)+√(b) to get, for all s≤ t and t≥ 1, |B_t-B_s| ≤ I_1+I_2+I_3 with I_1 := 4√(2c)√(ln(4ξ + 1))∫_0^hdu/√(u),I_2 := 8√(c)∫_0^h√(lnt/u + ε|ln t|) - 1/√(lnt/u + ε|ln t|)du/√(u),I_3 :=8√(c)∫_0^h1/√(lnt/u + ε|ln t|)du/√(u).If t≤ 1, one can use the fact that∫_0^t∫_0^tΨ(|B_x-B_y|/μ(|x-y|)) dxdy ≤( f_ε(1/⌊ 1/t ⌋) )^-1ξ_1/⌊ 1/t ⌋≤ t^2(1-ε)ξ,and the same arguments as above to prove that the bound |B_t-B_s| ≤ I_1+I_2+I_3 is also valid for all s≤ t and t≤ 1.First, I_1≤ 8√(2c)√(ln(4ξ + 1))√(h). Then, to simplify the expressions of I_2 and I_3, let us denote a such that ln a = ln t + ε |ln t|, so that lnt/u + ε|ln t| = ln (a/u). Remark that a≥ t. The integrand in I_2 is the derivative of u↦ 2√(u ln (a/u)), so that I_2 = 16√(c)√(h ln (a/h)). Finally, with the change of variable y=√(ln (a/u))/√(2), we haveI_3 = 8√(c)√(2π)√(a)( 1 - erf( √(ln (a/h))/√(2)) ),where erf is the error function defined by erf(x) = 2/√(π)∫_0^x e^-y^2 dy. If h≤ a/2, we use the classic bound 1-erf(x) ≤ e^-x^2/(x√(π)) to getI_3 ≤ 8√(c)√(2π)√(a)√(h/a)/√(π)√(ln (a/h))/√(2)≤16√(c)/√(ln 2)√(h).If h≥ a/2, we use 1-erf(x) ≤ 1 and √(a)≤√(2h) to get I_3 ≤ 16√(c)√(π)√(h). Since (ln 2)^1/2≤√(π), we have for all h≤ t, I_3 ≤ 16√(c)√(π)√(h).Remind that ln (a/u) = lnt/u + ε|ln t| and combine the bounds on I_1, I_2 and I_3 to get, for all t≥ 0 and s≤ t, with h=|t-s|,|B_t-B_s| ≤ 8√(2c)( √(ln(4ξ + 1)) + √(2) (1+√(π)) ) √(h ( 1+lnt/h + ε|ln t| )).This inequality holds for all 0≤ s<t<+∞ which implies that the random variable M defined in the statement of the Theorem satisfiesM ≤ 8√(2c)( √(ln(4ξ + 1)) + √(2) (1+√(π)) ),and so, using 1+√(π)≤ 4, we have M^2≤ 256cln(4ξ + 1) + 8192. In particular, it implies that M is almost surely finite. Moreover, for λ>0, 𝔼[e^λ M^2]≤ e^8192λ𝔼[exp(256cln(4ξ + 1))]≤ e^8192λ𝔼[(4ξ+1)^256cλ].Finally, λ can be chosen such that 1 < 256cλ < c and Lemma <ref> gives the result. 
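Two ingredients of the proof above are easy to illustrate numerically: the Gaussian moment identity E[exp(q Z^2/(2c))] = √(c/(c-q)) for Z ∼ N(0,1) and 0<q<c, and the fact that the normalized increments |B_t-B_s|/w(t,|t-s|) of a simulated Brownian path stay bounded. The following sketch is purely illustrative; the values of c, q, ε, the horizon, and the discretization are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# (i) Gaussian identity used in the proof: E[exp(q Z^2 / (2c))] = sqrt(c / (c - q)).
c, q = 4.0, 1.2                      # any 1 < q < c; these values keep the Monte Carlo variance finite
z = rng.normal(size=2_000_000)
print(np.exp(q * z**2 / (2 * c)).mean(), "vs", np.sqrt(c / (c - q)))

# (ii) Normalized modulus |B_t - B_s| / w(t, |t-s|) along one simulated Brownian path.
eps, T, n = 0.5, 100.0, 20_000
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(dt), size=n))])
times = np.linspace(0.0, T, n + 1)

def w(t, h):
    h = min(h, t)
    return np.sqrt(h * (1 + np.log(t / h) + eps * abs(np.log(t))))

idx = rng.integers(1, n + 1, size=(200_000, 2))
ratios = [abs(B[i] - B[j]) / w(max(times[i], times[j]), abs(times[i] - times[j]))
          for i, j in idx if i != j]
print("max ratio over sampled pairs:", max(ratios))
```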
§ ACKNOWLEDGMENTS

This research has been supported by ANR-19-CE40-0024 (CHAllenges in MAthematical NEuroscience) and was conducted while the author was a member of the Statify team at Centre Inria de l'Université Grenoble Alpes. The author would also like to thank Markus Fischer for fruitful discussions on the subject.
http://arxiv.org/abs/2312.15931v1
{ "authors": [ "Julien Chevallier" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20231226075542", "title": "Uniform in time modulus of continuity of Brownian motion" }
fancyConvergence of Ginzburg-Landau expansions: superconductivity in the BCS theory and chiral symmetry breaking in the NJL model [ January 14, 2024 ================================================================================================================================§ INTRODUCTION Reinforcement Learning (RL) has achieved remarkable successes in game playing <cit.>, robotics <cit.> and industrial control <cit.>. However, its need for well-defined reward functions is often its Achilles heel: Specifying reward functions usually requires significant domain expertise and effort.One option enabling agents to learn in such settings, Learning from Demonstrations (LfD), is a well-established alternative <cit.>. In LfD, a learning agent observes demonstrations from an expert to infer rewarding behavior. Typically, it is assumed that the expert's demonstrations are close to optimal and that the learner can optimize the expert's implicit reward function by mimicking its behavior. Two common approaches for LfD are Inverse RL (IRL) <cit.> and Imitation Learning (IL) <cit.>. In this paper, we focus on the latter. The classical formulation of IL assumes that the learner can observe state-action trajectories of the expert's behavior <cit.>. The learning agent's policy can then be optimized via supervised learning by predicting the expert's actions for given states. Yet, in many practical scenarios, e.g., when learning from a human demonstrator through cameras, observing expert actions and the environment's ground-truth states is infeasible. Thus, an agent must learn from observations without having access to actions <cit.>.But there is a further—commonly neglected—challenge, namely from which perspective the learner should observe the demonstrator to learn efficiently. To make this more concrete, consider the following motivating example: A human wants to teach a kitchen robot how to chop vegetables. Specifying a reward function that assigns rewards for each low-level behavior and individual vegetable is tedious if not infeasible. When applying IL to learn from demonstrations, the kitchen robot must select the perspective from which to watch the demonstrator. Watching the expert chop from the side reveals information about correctly moving the knife, whereas watching from the front discloses crucial details on the positioning of the fingers. Choosing a perspective that does not show the demonstrator does not support learning. This example highlights that [label=(*)]* information from different perspectives can be complementary and* that a learner must actively and deliberately choose the perspectives to learn efficiently.These features distinguish our work from other related works like <cit.> that perform imitation learning from multiple perspectives but involve no need to actively choose good perspectives, cf. Section <ref> for details. Other settings in which actively choosing the perspective for imitation learning can be important include applications with power and privacy constraints and applications with different perspectives that are expensive to provide, e.g., because of required human effort. Inspired by this, we introduce the problem of active third-person IL in which the learner can choose among a set of perspectives when observing the expert. Figure <ref> illustrates our considered setting and highlights differences compared to traditional third-person IL. 
Both settings consider a problem where the learner's observation space differs from the expert's, but only in our setting the learner can influence the perspective from which it observes the expert. In this paper, we formalize and theoretically analyze third-person IL and characterize the limits of what can be learned regarding the available perspectives and the structure of the underlying reward function. Inspired by previous work on third-person IL <cit.>, we propose an approach for learning in the active third-person IL setting based on generative adversarial imitation learning (GAIL) <cit.>. In our approach, we assume a finite number of possible perspectives and discriminators quantifying the imitation performance of the learner in those perspectives. The learner implements a perspective selection strategy to decide from which perspective to observe the expert. Ultimately, the goal is that the discriminators cannot distinguish the learner's and the expert's actions from any of the perspectives. To summarize our contributions: [label=(*)]* We formalize the problem of active third-person IL. * We analyze the characteristics of the active third-person IL problem. * We propose multiple approaches to account for different perspectives in active third-person IL.* We provide proof of concept experiments with multiple scenarios and demonstrate the effectiveness of our framework in toy and benchmark environments (MuJoCo) <cit.>. Our paper is structured as follows. We formalize the active third-person IL problem in Section <ref> and discuss related work in Section <ref>. Then we characterize the active third-person IL problem in Section <ref> and introduce our approach in Section <ref>. We present our experiments in Section <ref> and conclude our paper in Section <ref>. § PROBLEM SETTINGBasic notation & setup.We consider the problem of active third-person IL in a Markov Decision Process (MDP). An MDP is characterized by = (, , , ρ, r, ), whereis a set of states,a set of actions an agent can take, ×→ [0,1]^|| describes the transition probabilities into the next state by taking an action in the current state, ρ is the initial state distribution, r×→ℝ the reward function, andthe horizon of the interaction <cit.>. An agent implements a (stochastic) policy π×→ [0,1], which is characterized by the probability π(s,a) of taking action a in state s, and describes the agent's behavior. The standard goal of an agent in RL is to learn a policy π that maximizes the expected return J(π) = 𝔼[ ∑_t=0^H-1 r(s_t, a_t) | π], where the expectation is over the randomness in the initial state distribution, the transitions, and the policy.Imitation learning. In imitation learning (IL), an agent (learner) aims to learn rewarding behavior from an expert by observing it. The expert provides demonstrations from a (possibly stochastic) policy π^ which (approximately) maximizes the expected return J(π^). Depending on the precise setting, the learner either observes the expert's states and actions or only the expert's states. In more general settings, the learner does not observe the states directly but only observations →ℝ^d instead.Suppose states/observations and actions are observed. In that case, imitation learning is often considered a supervised learning problem in which imitating the expert corresponds to learning a classifier from state/observations to the expert's actions. Such approaches typically suffer from compounding errors but can be performed without access to the environment. 
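To make the supervised-learning view of IL concrete, the following is a minimal behavioural-cloning sketch: a classifier is fit from expert states to expert actions. The dimensions, the network, and the randomly generated placeholder data are illustrative assumptions and do not correspond to the approach developed later in this paper.

```python
import torch
import torch.nn as nn

# Minimal behavioural cloning: fit a classifier from expert states to expert actions.
# Shapes and data are placeholders; in practice, (states, actions) come from expert demonstrations.
state_dim, n_actions = 11, 4
states  = torch.randn(512, state_dim)            # expert states  (placeholder data)
actions = torch.randint(0, n_actions, (512,))    # expert actions (placeholder data)

policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(policy(states), actions)      # predict the expert's action for each state
    loss.backward()
    opt.step()
```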
In other lines of work, IL is considered as the problem of matching state-action-occupancies of the expert and the learner, and interaction with the environment is required <cit.>. Consider, e.g., <cit.> for a survey. The active third-person imitation learning problem. In this paper, we are interested in a more general IL setting in which the learner must actively decide how it observes the expert. This is inspired by scenarios in which the agent could for instance be a robot, observing a human's demonstrations and deciding from which angle to watch the demonstration. Formally, we assume that the learner can select the perspective and thereby influence the observations o it receives, i.e., (s,) ↦ o, where s is the state of the environment and ∈ is the selected perspective chosen among a set of perspectives  available to the learner and o ∈ℝ^d_ is a perspective dependent observation. In this paper, we assume that the set of possible perspectivesis finite and that the learner selects the perspective for a whole demonstration. The learner's goal is to find a policy π^ that imitates the expert's behavior as quickly and accurately as possible. To achieve this, the learner must employ a suitable perspective selection strategyand effectively combine theinformation from the experts' demonstrations observed from the chosen perspectives. The interaction of the learner and expert is summarized in Algorithm <ref>. In Section <ref> we provide details of our instantiation for learning during this interaction. mycommfont[t] Interaction in the active third-person imitation learning problem environment ℳ, expert , learner , learner policy π^, perspectives = {_1, …, _i}, selection strategy , number of demonstrations K i=1,…,K_i ← Next perspective for observing the expertselected by the learner ω = (o_1, o_2, …) ← Observations of expert 's demonstration from perspective _iπ^_i,_i ← Learnerusesobservations ω to update its policy and strategy Optimized learner's policy π^§ RELATED WORKWe first review approaches for IL when expert actions are available, followed by a discussion of IL methods suitable for scenarios where the expert's actions cannot be observed, and approaches for IL from multiple perspectives.Imitation learning from demonstrations. There are two main approaches for IL from demonstrations: Behavioural cloning (BC) <cit.> and Inverse Reinforcement Learning (IRL) <cit.>. In BC, learning from demonstrations is phrased as a supervised problem in which a regressor or classifier is trained to predict an expert's action based on the expert's state <cit.>. IRL <cit.> addresses IL by inferring a reward function from expert observations. While it has already proven useful in practical applications <cit.>, IL is prone to compounding errors and causal confusion <cit.>. Generative Adversarial Imitation Learning (GAIL) <cit.> is an adaption of the GAN framework <cit.> for IL circumventing some issues arising from directly learning the reward function. It does so by training a model to discriminate between the expert's and a learner's state-action pairs while using the discriminator's prediction as a reward for the learner.Imitation learning from observations. The assumption of access to the expert's actions can be limiting in many applications. Thus, learning from observations only has received considerable attention, e.g., <cit.> approached the challenge of learning from online videos, where only observations of expert behavior are available. 
Using state-only observations for IL has also been discussed in  <cit.>. Recent work uses image observations to extend IL's applicability to scenarios like robotics. Various approaches, e.g., <cit.> are based on inverse dynamics models, which the learner can infer without any interaction with the expert <cit.>.Another widely used IL variant adapts the original GAIL <cit.> framework to learning from observations. Using single states as inputs to the discriminator may not be sufficient to imitate the expert <cit.>. <cit.> provides a counterexample with an environment where the goal is to run a circle clockwise. Observing the expert's state distribution does not allow the learner to distinguish clockwise from counterclockwise circles. Follow-up work has therefore used three consecutive images as observations <cit.>. <cit.> focus on viewpoint differences and use two images (s_t,s_t+Δ) as input to the discriminator, that are shifted by Δ time steps.Imitation learning from different perspectives. IL from demonstrations typically requires the perspective of the expert and learner to be the same. This is a limiting assumption compared to human learning, where observing a demonstrator in a third-person view is sufficient to facilitate learning. The seminal work by <cit.> provides a step towards this goal by focusing on viewpoint differences and using two time-shifted images (s_t,s_t+Δ) as input to the discriminator. The agent's reward is given by a discriminator built on top of a viewpoint-agnostic representation.<cit.> also aim to learn a representation that is perspective-invariant by distinguishing between temporally distant frames from the same trajectory and temporally close frames from different trajectories. Note that their approach requires collecting perfectly aligned observations from all perspectives for each trajectory, which is a constraint not applied to our framework. Compared to learning a joint but viewpoint-invariant representation, <cit.> use a dual autoencoder to disentangle viewpoint and state information. This results in a perspective-invariant representation of the state. This is related to training a generative model to reconstruct embeddings of third-person frames into first-person views <cit.>. Lastly, <cit.> considers IL from multiple experts with different embodiments that differ from the learner's configuration. This relaxes the assumption of perfectly corresponding observations made in previous works. All the above-mentioned methods focus on the problem of learning a viewpoint-invariant representation. In comparison, our work additionally considers the question of which perspective to choose for imitation. § ANALYSIS AND INSIGHTSIn this section, we analyze the characteristics of the active third-person IL problem. We focus on feature-matching-based approaches that can be readily applied in cases for which the ground truth states can not be observed. In particular, we study how the combination of reward structure and available perspectives impacts achievable performance. To this end, we assume that state-action pairs (s,a) are associated with ground truth features ∈ℝ^d and that rewards are (possibly non-linear) functions of these features, i.e., r(s,a) = g(), where gℝ^d →ℝ. Because of space constraints, all proofs are deferred to the appendix.Linear reward functions and linear transformations. We start by investigating linear reward functions and perspectives that correspond to linear transformations of the (unknown) ground truth features , i.e., =, where ∈ℝ^d_ν× d. 
To this end, let μ(π) = 𝔼[∑_t=0^∞γ^t ϕ(s_t,a_t) |π] be the expected discounted feature for policy π. We can then make the following observation by extending previous results of <cit.>. Assume a linear reward function r(s,a) = ⟨w^*, ⟩, where w^* are unknown reward parameters with w^*≤ 1, and perspectives characterized by linear transformations {}_∈ of the features . Assume that the learner's policy π^ matches the feature expectations of the expert's policy π^ with precision ϵ / || for all perspectives, i.e., (μ(π^) - μ(π^)) < ϵ / || for all ∈. Then| ⟨w^*, μ(π^) - μ(π^) ⟩ | < ϵ/σ(A̅) + ρ( A̅; w^*)diam μ(Π),where A̅ = [A_1^, …, ^]^ is the matrix resulting from stacking all transformation matrices, σ(A̅) = min_v⊥kerA̅, v=1A̅v, ρ(A̅; w^*) = max_v∈kerA̅, v = 1⟨w^*, v⟩, Π is the set of possible learner policies, and diam μ(Π) = sup_π_1, π_2 ∈Πμ(π_1) - μ(π_2).Furthermore, if rank(A̅) = d, then identifying a policy π which matches the feature expectations in all perspectives exactly ensures that the learning agent matches the expert's performance, i.e.,| ⟨w^*, μ(π^) - μ(π^) ⟩| = 0.The above theorem highlights that learning in the active third-person IL setting via feature-matching in the available perspectives can be successful if the perspectives provide sufficient information regarding feature-matching in the ground truth features. Intuitively, performance degrades with decreasing information about the ground truth features retained in the perspectives. Importantly, inaccuracies in feature-matching can have an amplified impact on the learning agent's performance as characterized by 1/σ(A̅).Non-linear reward functions or non-linear transformations. The characteristics of the problem change significantly for non-linear reward functions or non-linear transformations. Assume a non-linear reward function, i.e., r(s,a)=g(), where gℝ^d →ℝ is non-linear. Then, there exists an instance of the active third-person IL problem in which the relative decrease of the learner's performance is unbounded even if the learner matches the feature trajectories perfectly in each perspective and the perspectives jointly contain all information about the ground truth feature occurrences. The same is true for non-linear transformations of the ground truth features even if the rewards are linear in the original features and the transformation is bijective.The above two statements indicate that the learner's performance is strongly influenced by the perspectives and the reward structure. Nevertheless, we can identify rich settings in which the learner's performance can match that of the expert. Consider reward functions of the form r(s,a) = ∑_i=1^K w^*_i g_i(_S_i),where _S_i is the subset of ground truth features indicated by the set S_i ⊆ [d], g_iℝ^|S_i|→ℝ are possibly non-linear functions of the subsets of features, and w^* = [w^*_1, …, w^*_K]^∈ℝ^K are unknown reward parameters. If the learner can observe the expert from perspectives ={ p_iℝ^d →ℝ^|S_i|}_i=1^K, where p_i↦A_i _S_i and A_i ∈ℝ^|S_i| × |S_i| is invertible, then the learner can asymptotically achieve the expert's performance via feature-matching.Reward functions with a structure according to the above observation, allow for non-linear bijective functions on a subset of the ground truth features regarding the reward function if the respective set of features can be observed via an invertible linear transformation in a single perspective. 
The condition on the perspectives ensures that the learner can observe all dependencies among features that are reward-relevant. Thus by matching the probabilities of all possible feature trajectories, the learner can achieve the expert's performance. Note, however, that matching only the feature expectations from all perspectives is not sufficient to ensure matching the expert's performance <cit.>. § OUR APPROACHWe first analyze a stylized variant of the active third-person IL problem in Section <ref> and then develop our approach to the problem leveraging insights from this analysis in Section <ref>. §.§ Warm up: Informative Perspectives For our approach, we take inspiration from analyzing a stylized variant of the active third-person IL problem with rewards linear in some unknown ground truth features and with perspectives corresponding to known linear transformations , cf. Section <ref>. In particular, assume that if the learning agent selects perspective _t for the next demonstration, it observes the cumulative features o_t =A__t^ + η_t, where ^ = 𝔼[∑_t=0^H-1ϕ(s_t,a_t) | π^] are the expected cumulative features of the expert policy π^ and η_t is a subgaussian random variable (representing the possible randomness in a single demonstration and any other observational noise). An approach to solving the third-person IL problem is then to select perspectives that allow accurate estimation of ^ as quickly as possible. Concretely, if we would employ penalized least-squares regression to estimate ^, i.e.,^E_t = min_∈ℝ^d∑_t'=1^t ( o_t' -A__t')^2 + λ _2^2, where λ > 0 is a regularization factor, it can be guaranteed with a high probability that ^_t - ^_ V_t^2 = (^_t - ^)^V_t (^_t - ^) ≤β_t, where V_0 = λ𝐈, V_t=λ𝐈 + ∑_t'=1^tA_ν_t'^ A_ν_t' and β_t ≥ 1 is an increasing sequence of constants <cit.>. If β_t does not grow too quickly, then ^_t - ^_ V_t^2 will shrink. Furthermore, if perspectives are selected such that the smallest eigenvalue of V_t increases fast enough, ^_t - ^_2^2 will also decrease. This, in turn, implies that matching the feature expectations in a perspective to the respective observed empirical feature expectations will result in cumulative rewards close to those of the expert.The volume of possible ^ satisfying ^_t - ^_ V_t^2 ≤β_t is proportional to β_t / ∏_i σ_i( V_t), where σ_i( V_t) is the ith eigenvalue of V_t. Hence, in the absence of knowledge about the reward structure, for a fixed sequence β_t, a sensible perspective selection strategy could aim to maximize ∏_i σ_i( V_t) or log V_t—see <cit.> for a connection to D-optimal design. Importantly, selecting strongly similar perspectives might result in a shrinkage of the volume of the ellipse containing the possible(confidence ellipse) only along specific directions.Hence, it is crucial to account for the similarity of perspectives in the perspective selection strategy. However, in most realistic settings, we will not know the relations of the different perspectives, i.e.,is unknown. In such cases, we can still leverage insights from above: The size of the confidence ellipse depends on the selected perspectives and in particular, the relation of the transformations. Thus, we should select perspectives containing complementary information as measured by the matrices A__t. 
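For illustration, the following sketch instantiates this idealized warm-up setting with known transformation matrices: the expert's cumulative features are estimated by penalized least squares, and the next perspective is chosen greedily to maximize log det V_t (a D-optimal criterion). All dimensions, the noise level, and the number of rounds are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, noise = 6, 1.0, 0.1
phi_true = rng.normal(size=d)                      # unknown expert feature expectation (ground truth)
A = [rng.normal(size=(2, d)) for _ in range(5)]    # known linear transformations, one per perspective

V = lam * np.eye(d)                                # V_t = lam*I + sum of A^T A over selected perspectives
XtY = np.zeros(d)
for t in range(40):
    # Greedy D-optimal choice: pick the perspective maximizing log det(V + A^T A).
    nu = max(range(len(A)), key=lambda i: np.linalg.slogdet(V + A[i].T @ A[i])[1])
    obs = A[nu] @ phi_true + noise * rng.normal(size=A[nu].shape[0])   # noisy demonstration summary
    V += A[nu].T @ A[nu]
    XtY += A[nu].T @ obs
    phi_hat = np.linalg.solve(V, XtY)              # penalized least-squares estimate of the features
print("estimation error:", np.linalg.norm(phi_hat - phi_true))
```

Because the greedy log-det rule targets the directions in which the confidence ellipse is still wide, near-duplicate perspectives are selected only rarely, which is precisely the behavior we aim for when perspectives are correlated.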
As we do not know these matrices, we suggest accounting for similarities/correlations among perspectives through prior knowledge and appropriate designs of the neural network architectures, thereby enabling more effective querying of complementary perspectives. We refer the reader to Section <ref> for details and the experiments in Section <ref> for the advantages of doing so. §.§ Learning Algorithm Overview. We aim to find suitable perspective selection strategiesand an approach allowing for effective learning from multiple perspectives. To this end, we propose an approach building on generative adversarial imitation learning (GAIL) <cit.>, recent work on third-person IL <cit.>, and our insights from Section <ref>.A schematic illustration of our approach is presented in Figure <ref> and contains two novel elements:[label=(*)]* discriminators _1, …, _||, one for each perspective ∈, and* a perspective selection strategy .The discriminators measure how well the learner's policy π^ matches the expert's policy π^ in the respective perspective. The different discriminators are not necessarily distinct neural networks but other suitable discriminator architectures can be used (see details below). The perspective selection strategyselects a perspectivefrom which expert and learner data is generated. After the discriminator has been trained for a fixed number of episodes on the expert's and learner's trajectories ω^E_, ω^L_ in perspective , respectively, the learner collects trajectories for updating its policy π^. Before each observation, the learner must use the perspective selection strategyto decide on a perspective . Algorithm <ref> summarizes the interplay between the components.Below, we provide detailed information about the components of our approach.Discriminator Architectures.Central to our approach is using discriminators _ that can distinguish between expert and learner data for all available perspectives ∈. We present four choices for defining such discriminators within our framework. All discriminators are based on the DCGAN architecture <cit.> combined with design choices from <cit.> and <cit.>[Using the architectures from <cit.> and <cit.> without modification did not yield satisfactory results in our setting.]. In particular, we substitute missing action information by considering two slightly time-shifted observations o_t,o_t+Δ, where Δ is the time shift, as inputs to a discriminator. We use binary cross entropy as the objective function for all discriminators. All layers are regularized with spectral normalization <cit.> to improve stability and training performance.We consider the following possible designs for the discriminators: [label=(*), font=, leftmargin=15pt]* Using a single discriminator network for all perspectives without conditioning on perspective information; * using an individual discriminator network for each perspective;* using discriminators that combine perspective information with the features of the convolutional encoder of the network (this could be, e.g., the index of a perspective or its rotation parameters); and* using a conditional discriminator based on the FiLM architecture <cit.> which scales the outputs of its convolutional layers based on perspective-specific weights.Motivated by the analysis in Section <ref>, we account for correlations between perspectives with a parameter-shared correlation network when using multiple discriminators. 
The correlation network receives the current discriminator's CNN features as input and conditions on the current perspectivethrough a one-hot encoding of . Perspective selection. Observing the expert only from a single perspective may limit the learner's performance as this perspective might not be informative about the expert's behavior (see also Section <ref>). To enable learning from multiple perspectives, we assume that in each iteration of the algorithm, the learner can choose a perspective ∈ from which it observes the expert. A discriminator _ is trained to differentiate between expert and learner for perspective . The discriminator's output is then used as a reward signal for the learner. The selection between the available perspectives is accomplished by the perspective selection strategy . The selection strategyis employed in two components of our algorithm: [label=(*)]* to select a perspective ∈ from which expert and learner data is generated for discriminator optimization and* to select a perspective, i.e., a discriminator _ used to provide rewards for the learner's training.Note that we assume that the learner can select an individual perspective through the perspective selection strategyfor each of its internal training episodes. While numerous selection strategies are conceivable, in this work, we consider uniform random,feature correlation-based, and UCB-like strategies (cf. Section <ref>).Learner policy optimization. The output of the discriminator _, i.e., a measure of how likely the expert generated the data, is used as a reward signal for training thelearner's policy π^. For each trajectory the learner generates, the perspective selection strategydetermines the discriminator providing the reward. We use PPO <cit.> as a policy optimization algorithm due to its good performance and stability on a wide range of tasks. Active third-person IL perspective selection strategy , nr. of iterations K, perspectivesInitialisation Initialize discriminators _, environment , expert policy π^, learner policy π^, and perspective selection strategy learning i=1,…,K_i ← Next perspective according to persp. selection strategy ω^__i← Demonstration(s) from expert's policy π^ω^__i← Trajectories from learner's policies π^ Update discriminator __i based on ω^__i, ω^__iUpdate learner's policy by training in the environment using rewards-log(_) whereis selected according to Optimized learner's policy π^ § EXPERIMENTSIn this section, we empirically evaluate our proposed approach.We first demonstrate the feasibility of third-person imitation learning in a simplified setting on tabular environments in Section <ref>.Then, we introduce the considered environments and tested perspective selection strategies in Sections <ref> and  <ref>, respectively, followed by our conducted main experiments in Section <ref>. Additional results and details, e.g., on environments and hyperparameters, optimizers, and network architectures, are provided in the Appendix.§.§ The Tabular Case With Known DynamicsHere, we demonstrate that actively selecting the perspectives from which to observe the expert can lead to improved performance in comparison to selecting perspectives uniformly at random in grid world environments. Experimenting with grid worlds has the advantage that we can perform imitation learning using linear programming, avoiding many of the challenges that we face when using our GAIL-based approach. 
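For reference, the following self-contained sketch summarizes one iteration of the GAIL-based loop just mentioned (cf. Algorithm <ref>): the discriminator of the selected perspective is updated on expert and learner data, and its output defines the learner's reward -log 𝒟_ν. Flat random vectors stand in for the image observations, and the two-layer networks are placeholders for the DCGAN-based discriminators used in our actual implementation.

```python
import torch
import torch.nn as nn

# One iteration of the GAIL-style loop with one discriminator per perspective.
perspectives, obs_dim = [0, 1, 2], 32
discs = {p: nn.Sequential(nn.Linear(2 * obs_dim, 64), nn.ReLU(), nn.Linear(64, 1)) for p in perspectives}
opts = {p: torch.optim.Adam(discs[p].parameters(), lr=1e-4) for p in perspectives}
bce = nn.BCEWithLogitsLoss()

def pair(batch):                # two time-shifted observations (o_t, o_{t+Delta}) per sample
    return torch.cat(batch, dim=-1)

p = 1                           # perspective chosen by the selection strategy for this iteration
expert_pair  = pair((torch.randn(64, obs_dim), torch.randn(64, obs_dim)))   # placeholder expert data
learner_pair = pair((torch.randn(64, obs_dim), torch.randn(64, obs_dim)))   # placeholder learner data

# Discriminator update: expert labelled 1, learner labelled 0.
loss = bce(discs[p](expert_pair), torch.ones(64, 1)) + bce(discs[p](learner_pair), torch.zeros(64, 1))
opts[p].zero_grad(); loss.backward(); opts[p].step()

# Learner reward for the policy update: -log D_p, the probability the pair came from the expert.
with torch.no_grad():
    reward = -torch.log(torch.sigmoid(discs[p](learner_pair)) + 1e-8)
```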
In particular, we consider grid worlds of size 10 × 10 in which an agent can collect different types of objects, each having some randomly chosen reward in [0,1]. In total, we consider 4 types of objects each of which is present in the grid world 2 times. The ground-truth features on which the rewards are defined are indicators of whether a cell contains a particular type of object. Upon collection of a reward object, the object is replaced and the agent is randomly placed at an empty position in the grid world. For the perspectives, we consider subsets of the indicator features or random linear transformations of the features. To amplify the effect of active selection strategies, we ensure the presence of highly similar perspectives—in the case of indicator features, we duplicate the feature vector for the first object 12 times, and in the case of random linear transformations, we create a total of 40 random transformations making it probable that some perspectives are similar.Such similar (but to some degree redundant) perspectives would be selected by a uniform strategy with a fixed probability while an adaptive active learning strategy should avoid them. Feature matching is performed using the linear program presented in the Appendix assuming full knowledge about the environment dynamics.Error terms for the individual perspectives are scaled proportionally to how often the individual perspectives have been selected. We consider 3 active learning strategies based on different levels of knowledge about the perspectives (active (corr), active (sim), active (var); details are provided in the Appendix) and compare them to a uniform selection strategy (uniform).We evaluate the different strategies regarding the achieved cumulative ground-truth reward. Additional details regarding the experimental setup are provided in the Appendix. Our results showing the performance of the different perspective selection strategies regarding the attained reward over the number of observed perspectives are presented in Figure <ref>. Results are averaged over 100 random grid worlds and cumulative rewards are normalized so that the maximum achievable reward is 1.The shaded regions indicate 95 % confidence intervals regarding the mean reward. The active learning strategies clearly outperform the naive uniform perspective selection strategy for the first 23 of observed demonstrations, highlighting the utility of collecting and leveraging information from different perspectives in a principled way. For more observed demonstrations, uniform catches up with active (corr) and active (sim). The strategy active (var) dominates all other strategies for all number of observed demonstrations. §.§ Benchmark EnvironmentsFor our further experiments, we evaluate our approach on 2 environments (Point and Reacher) described below. To evaluate whether effective perspective selection occurs we include non-informative perspectives that always return a constant observation. Other perspectives contain partial information and require implicitly combining multiple perspectives over multiple interactions, e.g., projections on the x- and y-axis to imitate the expert. Performance is again measured using the ground truth reward. Point. The agent controls a point mass in a plane and aims to move it towards a target location, cf. Figure <ref>. 
The following 4 perspectives are available to the learner: [label=(*),font=]* birds-eye perspective in the form of a 2D image containing all information (Figure <ref>);* /* x-perspective and y-perspective in the form of 1D vectors corresponding to a horizontal and a vertical projection of the birds-eye perspective, respectively (see Figure <ref> for the x-perspective);* no-information perspective corresponding to a black image.We use a rule-based expert policy that walks perfectly toward the goal.Reacher. In the MuJoCo reacher environment <cit.>, a two-jointed robot arm aims to point its end towards a (randomly spawning) target on the plane, see Figure <ref>.The learner can select the following perspectives: [label=(*),font=]* birds-eye perspective in the form of a 2D image containing all information (Figure <ref>);* /* side-perspectives in the form of 2D images looking at the reacher arm from the side (see Figure <ref> for an example). The difference between both perspectives is a 90° rotation;* no-information perspective based on an angle where the robot arm is not visible.As an expert policy, we use an MLP policy trained to maximize the ground truth reward with PPO <cit.>. §.§ Perspective selection strategies In each algorithm iteration, the perspective selection strategyselects a perspective used to observe expert and learner trajectories for discriminator optimization, cf. Algorithm <ref>. During learner training, the perspective is chosen anew for each episode. Refer to Section <ref> for a detailed description. We evaluate the following 3 selection strategies:* Uniform strategy (Uniform). Uniform random sampling among perspectives. * UCB style strategy (UCB). This strategy is inspired by the upper confidence bound-based algorithms commonly used in the bandit literature <cit.>. In the tth trajectory the strategyselects the discriminator as_t ∈__v , v ∈ _v(ω^) - c √(log(t)N_,t), where N_,t is the number of times perspectivehas been used so far, c is a hyper-parameter, and _v(ω^) is the probability that the observations in a batch of observations stem from the expert. The strategy focuses on improving the imitation performance on perspectives leading to large discrimination errors, accounting for the number of times the perspectives have been considered so far. * Feature dissimilarity strategy (Dissimilarity). A strategy that selects perspectives based on the similarities of the perspective's features. It tracks the approximate discounted feature expectations for each perspective and uses an exponentially weighted average to account for the non-stationarity of the policy during training. A perspective _t at step t is sampled with a probability proportional to the inverse correlation coefficient between _t and all other perspectives. A more detailed description is provided in the Appendix. §.§ DiscriminatorsAs the discriminator determines the reward for the RL agent, it is of key importance in our framework. To understand how different design choices for the discriminators impact the performance of our approach, we experiment with four different architectures. Details for the conditional discriminator architectures can be found in the Appendix.Concretely, we consider the following architectures in line with Section <ref>: * Multiple discriminators (Multiple). One discriminator for each perspective. * Single discriminator (Single). A single discriminator for all perspectives without any additional information. * Conditional discriminator (Conditional). 
A single discriminator for all perspectives that conditions on the current perspective by concatenating perspective information with features extracted from the discriminator's convolutional layers. For Reacher, we use the camera's angle and distance as information. For Point, we use the index of the selected perspective. * FiLM discriminator (FiLM). The FiLM architecture <cit.> allows conditioning a network on arbitrary information by generating conditional weights that are used to scale a convolutional network's feature maps. In our case, this is perspective information such as, e.g., a perspective's rotation angles. §.§ Empirical EvaluationWe use PPO <cit.> to optimize the learner and tune relevant hyperparameters with a grid search for each environment (details are provided in the Appendix). All experiments use the same 20 seeds. For evaluation, we use the rliable library <cit.> to plot the interquartile mean with bootstrapped confidence intervals computed from 50,000 subsampled runs. On Reacher, learning is unstable, with failing runs generating very large negative values. We therefore cap the negative reward at -300 to prevent outliers from distorting our results. We present our findings arranged in three scenarios that pose increasingly challenging problems to the agent. In the easy scenario, the agent is presented with the partially informative perspectives described in Section <ref> and a single non-informative perspective. The duplicate scenario is the same as easy, with each perspective duplicated thrice. Lastly, in the adversarial scenario, we present the agent with a set of uninformative perspectives among which a single fully informative perspective (Reacher) or two partially informative perspectives (Point) must be chosen.Selection strategies and discriminator architectures in the easy scenario. We evaluate all perspective selection strategiesand discriminator architectures introduced in Section <ref> and <ref> on Reacher and Point. The expert's reward serves as an upper bound on performance.Effective perspective selection using a particular strategyis evaluated by introducing a non-informative perspective offering no information to the agent. See Appendix <ref> for details on these perspectives.Figure <ref> presents our findings for the Point environment. We first ablate the effect of the selection strategy when using a unique discriminator for each perspective (Figure <ref>). All strategies effectively learn from multiple perspectives. The perspective selection based on feature Dissimilarity performs best.We select perspectives uniformly at random with fixed seeds to evaluate the benefits of the different considered discriminator architectures without confounding effects from the perspective selection strategies (Figure <ref>). Having a single discriminator for each available perspective yields the highest reward, followed by using the FiLM architecture. Using a Single discriminator without perspective information or with naive conditioning (Conditional) on the perspective performs worst. For Reacher, we defer the reader to Appendix <ref>. The results for Reacher are not as clear as for Point. All strategies exhibit some learning, with the Dissimilarity strategy performing worst. Using multiple discriminators or the FiLM network achieves similar performance.Results for the duplicate scenario. 
Section <ref> motivates the exploitation of similarities/correlations between perspectives to accelerate imitation learning.Information about such structure is not necessarily available or present in the learning environment: For Point introduced in Section <ref>, no such correlations exist in the perspectives.However, we can easily induce relations among perspectives by duplicating perspectives. In this scenario, we duplicate all of the partially informative perspectives 3 times to test how well strategies and discriminator architectures perform in the presence of correlated perspectives. Figure <ref> depicts our findings. Surprisingly, the parameter-sharing heuristic (Shared) we use to learn correlations (see Section <ref>) does not yield improvements over using multiple isolated discriminators without parameter-sharing.Combining the UCB strategy with the FiLM discriminator results in the highest reward for both environments. On Point, there is a large gap between all other methods, whereas on Reacher Uniform and UCB with Multiple discriminators provide similar results. Results for the adversarial scenario. Part of learning successfully in the active third-person IL problem boils down to effectively reducing the number of times an uninformative perspective is selected. We verify whether different combinations of selection strategies and discriminator architectures can do this through a challenging setup where most perspectives in the environment are uninformative. To this end, we present the agent with a fully informative perspective and 5 uninformative perspectives in Reacher. For Point, the agent can choose from the x-axis and y-axis perspectives and 6 uninformative perspectives. Analyzing the results, no single best strategy works well for Reacher and Point. We hypothesize that this is due to both environments' different properties posing distinct challenges to a learning algorithm. For Reacher, the Dissimilarity Multiple combination works best, whereas Uniform strategies fail to learn. This highlights the importance of active perspective selection in this environment. UCB selection combined with the FiLM discriminator performs strongly on Point, whereas it fails when it is combined with Multiple discriminators. Note that there is a substantial drop in overall performance compared to the easy and duplicate scenarios for both environments, particularly for Point. Discussion. Considering the results described in the previous paragraphs, it first stands out that the UCB strategy combined with the FiLM discriminator performs either best or second best in all scenarios and environments. Successfully imitating the expert in the active third-person IL setting amounts to a more frequent selection of informative perspectives while avoiding uninformative ones. A naturally following hypothesis from our findings would be that UCB FiLM is able to avoid uninformative perspectives.As shown in Appendix <ref> this is indeed the case. We further note that our proposed framework interleaves three highly noisy, non-stationary processes: Perspective selection, agent training, and discriminator training. We hypothesize that the intricate and often implicit interplay between all these components is quickly destabilized, rendering stable and robust learning highly challenging. Additionally, the similarity of the provided perspectives plays a role. 
For Reacher, the fully informative perspective is very dissimilar from the non-informative perspectives with a correlation coefficient between images of around 0.06 (see Figure <ref> for a visualization). Inspecting Figure <ref> on the other hand, we can see that for Point, the non-informative perspective is largely the same as the informative perspectives. This fact may explain the success of the Dissimilarity strategy for Reacher in the adversarial scenario.§ CONCLUSIONS AND FUTURE WORKWe introduced the active third-person imitation learning problem, a challenging variant of the learning from demonstrations problem, in which the learning agent has control over the perspective from which it observes an expert's demonstration. We formalized this problem and analyzed its characteristics. In particular, we showed that for linear reward functions, learning via feature matching is feasible, provided perspectives are given by linear transformations. Additionally, we found that feature matching does not guarantee successful learning in the case of non-linear reward functions. Inspired by generative adversarial imitation learning, we proposed an approach for solving the active third-person IL problem. We evaluated our method in toy and benchmark environments on increasingly difficult scenarios. Our findings indicate that various perspective selection strategies and discriminator architectures enable learning from perspectives with partial information. While our approach to the active third-person IL problem makes a step towards learning from demonstrations when different perspectives are available, we have only considered scenarios where the set of perspectives is finite. An exciting direction for future work is the generalization of active third-person IL to unbounded sets of perspectives, e.g., allowing the learner to freely choose the camera angle from which it observes an expert. Another interesting problem setting emerges when the learner can change the perspective while receiving a single demonstration. Lastly, we want to explore more efficient methods for parameterizing the ensemble of discriminators when the number of perspectives || is large, e.g., through Hypernetworks <cit.>. We thank Lukas Miklautz and Simon Rittel for their valuable feedback on earlier versions of this work. We thank Kevin Sidak for the insightful discussions. Lastly, we want to thank the open source communities of NumPy <cit.>, Weights & Biases <cit.>, plotly <cit.> and Pytorch <cit.> for providing the tools used in this study. ACM-Reference-Format § PROOFS§.§ Proof of Theorem <ref>The proof of the first part of the statement follows by observing that ( μ(π^) - μ(π^)) < ϵ / || implies that A̅( μ(π^) - μ(π^)) < ϵ. The result then follows by invoking Theorem 1 from <cit.>.The second part of the statement follows by observing that if ϵ=0, rank(A̅) = d, we have σ(A̅) > 0 and ρ(A̅, w^*) = 0.§.§ Proof of Theorem <ref>We start by providing an example of a non-linear reward function (conjunction of two features) for which expert performance cannot be achieved in the third-person IL setting. To this end, consider an MDP with action set ={left, right}, 2-dimensional features, a horizon H=2, and perspectives corresponding to observing only a single of these features. The reward function isr(s) =1if ϕ(s) = [1,1]^T, and0otherwise.The dynamics and features of the MDP are shown in Figure <ref>.The agent is assumed to always start in state S_0 at the beginning of an episode. 
An optimal policy π^* would always perform action "left", resulting in an expected cumulative reward of 0.5.Observe that the probabilities of all possible feature trajectories in each perspective are identical for any possible policy, i.e.,p(ϕ(S_t=0)_1 = 2, ϕ(S_t=1)_1 = 0 | π)= 0.5 π(a_1=left) + 0.5 π(a_1=right)= 0.5 [π(a_1=left) + π(a_1=right)] = 0.5, p(ϕ(S_t=0)_1 = 2, ϕ(S_t=1)_1 = 1 | π)= 0.5, p(ϕ(S_t=0)_2 = 2, ϕ(S_t=1)_2 = 0 | π)= 0.5, p(ϕ(S_t=0)_2 = 2, ϕ(S_t=1)_2 = 1 | π)= 0.5,where S_t=t' is the random variable representing state S at time t', and ϕ(S_t=t')_d denotes the features associated with state S_t=t' in dimension d. Hence the observations in a single perspective carry no information whatsoever to distinguish between different policies and how well they match the expert's features while perfectly matching the feature expectation marginals (even matching the actual sample distribution). Hence a learning agent with a policy that would in the first time step take action "right", would exactly match the feature trajectory distributions in each dimension individually but achieve a cumulative reward of 0.Now, assuming that the reward function is linear in some ground truth features ϕ'(s,a). Assume a 5-state MDP in which only a single state provides a reward of 1. The feature function is linear in features corresponding to a one-hot encoding of the states. For these features we can construct a bijective mapping to the feature vectors of the MDP shown above.Assuming the same dynamics and reward assignment as in the above MDP concludes the proof. § PERSPECTIVE SELECTION FREQUENCIES ON POINT This section briefly studies the hypothesis that successful imitation in the active third-person IL setting amounts to avoiding uninformative perspectives. Figure <ref> shows the selection frequencies for Point in the duplicate and adversarial scenarios. Indeed, we find that the best combination of discriminator and strategy (UCB FiLM) selects the uninformative perspective less often than other configurations. In the duplicate scenario, the Uniform probability of selecting an uninformative perspective is 33%. Compared to this, the UCB FiLM combination only selects the nonsense perspective with28.4% probability, which is an improvement of 14.7%. For the adversarial scenario, the improvement is even more stark: Here, the probability of selecting an uninformative perspective with Uniform random selection is 6 out of 8 or 75%. In contrast, the selection probability for UCB FiLM is only 61.4%, yielding an 18.3% improvement. However, the results also show that the selection frequencies alone cannot explain agent performance. While UCB FiLM reduces the chance of sampling an uninformative perspective in both the duplicate and adversarial scenarios, imitation performance does not increase proportionally. Confounding factors at the intersection between discriminator training, agent training, and perspective selection are likely.§ LEARNING CURVES FOR REACHER IN THE EASY SCENARIO This section presents learning curves for perspective selection strategies and discriminator architectures on Reacher. The results are not as clear-cut as on Point and suffer a high variance. We find that this is due to discriminator training being less stable. Evaluating strategies in Figure <ref>, the UCB and Uniform strategies perform roughly equally well, with the Dissimilarity strategy not improving over random performance. This finding is striking as it starkly contrasts the results presented in Paragraph <ref>. 
We hypothesize that it is a consequence of the uninformative perspective in Reacher having the most dissimilar features out of the three used perspectives in the easy scenario. In the adversarial scenario, the fully informative perspective is the most dissimilar from a set of identical uninformative perspectives. We see a similar picture for the discriminators to Point: Multiple and FiLM perform best. The Single discriminator is competitive with the best settings at the cost of a very high variance. Naively conditioning on perspective information (Conditional) does not work on Reacher, just as it does not work on Point. § DETAILED ENVIRONMENT DESCRIPTIONSWe use 2 environments for our experiments. Details of common hyperparameters, such as the maximum episode length or the observation space, can be found in Section <ref> and Table <ref> and <ref>.For each environment, we provide 4 perspectives: A baseline perspective showing all information, two perspectives providing partial information, and a perspective with no relevant information. As a performance metric for our evaluation, we use the reward provided by the environment. §.§ Point Environment Point is a 2-dimensional environment where the agent must move a yellow point (the chaser) to a blue point (the goal) within a [-5,5] × [-5,5] coordinate system (the arena). The chaser always starts an episode at the origin [0,0]. The goal is created at a random position within an Euclidean distance of 4.5 to the origin. Figure <ref> visualizes the observation space, and Figure <ref> the arena in particular. The entire environment specification is: * Action space: Point has a 2D continuous action space for movements in the x and y direction. In particular, movements in both directions are possible and bounded by a magnitude of 0.1 per dimension and per time step, i.e., (a_1, a_2) ∈ [-0.1,0.1]^2.* State space: As states, we provide the agent with the current position of the chaser, the distance between chaser and goal d_cg [The largest distance possible is reached when goal and chaser reside on diagonally opposite corners.], and the position of the goal in this episode in the coordinate system. Concretely (x_chaser,y_chaser,d_cg,x_goal,y_goal) ∈ [-5, -5, 0, -5, -5] × [5, 5, 10√(2), 5, 5].* Environment's reward function: The reward function is defined using the negative of the Euclidean distance between the current position and the goal.r(s) = - ‖ [x_chaser, y_chaser] - [x_goal, y_goal] ‖_2. * Observation space: As the environment is only two-dimensional, partially informative perspectives are represented as 1D images. Using C × H × W to specify the number of channels and size of an RGB image, we use 3 × 32 × 32 images for the fully informative perspective and 3 × 32 images for the partially informative perspectives, respectively. The chaser has color (255,255,0), corresponding to yellow, whereas the goal is blue, (0,0,255), and the background is black, (0,0,0). §.§.§ PerspectivesWe define partial information perspectives through a projection[Details on how the network architecture is adapted are provided in Appendix <ref>.] on the x (resp. y) axis, cf. Figures <ref>, <ref>. This entails both perspectives, including information about movement in one particular direction (a_1, a_2 respectively). 
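To make the construction of the Point observations concrete, the following is a minimal sketch of how the fully informative 2D view and the partially informative 1D projections could be rendered with NumPy. It only mirrors the specification above (image size 32, yellow chaser, blue goal, coordinates in [-5, 5]); the function names and pixel mapping are our own illustration, not the actual implementation.

import numpy as np

SIZE = 32
YELLOW = np.array([255, 255, 0], dtype=np.uint8)   # chaser
BLUE = np.array([0, 0, 255], dtype=np.uint8)       # goal

def to_pixel(coord):
    """Map a coordinate in [-5, 5] to a pixel index in [0, SIZE-1]."""
    return int(np.clip((coord + 5.0) / 10.0 * (SIZE - 1), 0, SIZE - 1))

def render_full(chaser, goal):
    """Fully informative perspective: 3 x 32 x 32 image with both points."""
    img = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)
    img[to_pixel(chaser[1]), to_pixel(chaser[0])] = YELLOW
    img[to_pixel(goal[1]), to_pixel(goal[0])] = BLUE
    return img.transpose(2, 0, 1)          # C x H x W

def render_projection(chaser, goal, axis=0):
    """Partially informative perspective: 3 x 32 projection onto one axis."""
    img = np.zeros((SIZE, 3), dtype=np.uint8)
    img[to_pixel(chaser[axis])] = YELLOW
    img[to_pixel(goal[axis])] = BLUE
    return img.T                            # C x W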
We use a black 1D image as an uninformative perspective, allowing us to use this "nonsense" view of the environment in conjunction with the partially informative perspectives.§.§.§ Expert PolicySince the observations include the current position and the position of the goal, we define an (almost) perfect deterministic expert policy[The expert is only almost perfect, as it does not stop moving when reaching the target. Instead, it oscillates around the target with small movements.] by taking a maximum step in the correct direction. Concretely:[a_1, a_2] = 0.1 ·([x_goal,y_goal] - [x_chaser,y_chaser] ) . §.§ Reacher EnvironmentIn Reacher <cit.>, a two-jointed robot arm tries to move its end, referred to as the fingertip, by moving both joints toward a target on the 2-dimensional plane represented by a yellow point. This corresponds to the default configuration specified by <cit.> and is visualized in Figure <ref>. In each episode, the goal spawns randomly within an Euclidean distance of 0.2 to the origin. We can define the environment as follows: * Action space: The reacher arm moves by applying torques to both hinge joints, concretely 𝒜 [a_1, a_2] ∈ [-1, 1]^2. This corresponds to a 2-dimensional continuous action space.* State space: The state space contains the current state of the reacher arm as well as the absolute position of the target, the angular velocity of the arm, and the relative position of the reacher's fingertip to the target. Concretely, a state in Reacher is defined by an 11-dimensional vector containing:* sin and cos respectively of both parts of the arm (4)* position of the target (2)* angular velocity of both parts of the arm (2)* 3D - vector between fingertip and goal in the form (x,y,0) as they do never differ in the z-coordinate (3) * Environment's reward function: The reward consists of the sum of the distance between the fingertip of the robot arm and the goal with an added squared L2-penalty for taking too large actions. Concretely:r(s, a) = - ‖ [x_tip, y_tip] - [x_goal, y_goal] ‖_2 - ∑_i=1^|𝒜|a_i^2 . * Observation space: We experimented with 3 × 32 × 32 (C × H × W) RGB image observations due to their appealing computational efficiency but observed almost no learning progress. Visual inspection of these small images revealed a lack of detail, leading us to use 3 × 64 × 64 images. We also note that the side perspective displayed in Figure <ref> showed strong similarities between the originally defined goal color and the depicted purplish frame. Therefore, we decided to change the color of the goal to yellow.§.§.§ PerspectivesReacher is a 3D environment allowing us to control the amount of information shown by adapting the camera angle. As the reacher arm moves only 2-dimensionally on a plane, we define a baseline showing all information through a central birds-eye view of the environment, cf. Figure <ref>. Partially informative perspectives utilize a side view of the environment to obscure relevant information. We find that an angle of 7^∘ between the camera and the surface retains some relevant environment information while posing a sufficiently hard learning problem. Our two proposed partially informative perspectives show a view from the front and one side (90^∘ rotation on the z-axis) using the 7^∘ angle between the surface and the camera, cf. Figures <ref> and <ref> for a visualization. As a non-informative perspective, we use an angle of 0^∘ between the camera and the reacher surface, thus showing only a side view of the environment's frame. 
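Before turning to the expert policy, the two environment reward functions described in this appendix can be summarized as small pure functions. This is an illustrative sketch only (argument names are ours), assuming the relevant positions are extracted from the state vectors defined above.

import numpy as np

def point_reward(chaser_xy, goal_xy):
    """Point reward: negative Euclidean distance between chaser and goal."""
    return -float(np.linalg.norm(np.asarray(chaser_xy) - np.asarray(goal_xy)))

def reacher_reward(fingertip_xy, goal_xy, action):
    """Reacher reward: negative fingertip-goal distance plus a squared
    L2 penalty on the applied torques."""
    distance = float(np.linalg.norm(np.asarray(fingertip_xy) - np.asarray(goal_xy)))
    control_penalty = float(np.sum(np.square(action)))
    return -distance - control_penalty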
§.§.§ Expert Policy
As Reacher is more complex than Point, we cannot define a rule-based expert. Instead, we optimize our expert with PPO <cit.>. As a neural network, we use a 2-layer MLP with 32 hidden neurons and a Gaussian policy trained for 20,000 epochs with batch size 1,024. We verify the quality of the expert policy through visual inspection.

§ DISSIMILARITY SAMPLING
This section describes our perspective selection strategy based on estimating feature similarities between the different perspectives. It estimates the discounted feature expectations of a perspective-policy combination via sampling, i.e., for each perspective it approximates
μ() = 𝔼[∑_t=0^∞ γ^t ϕ_(s_t,a_t) | π],
where π is the policy for which the feature expectations are computed. To account for the learner's policy π^ being non-stationary, we use an exponentially weighted average to estimate the approximate discounted feature expectations. The exponential weights down-weigh earlier features from previous policies. Our intuition behind formulating the Dissimilarity Sampling (DS) strategy is that the agent should prefer perspectives that are as diverse as possible in order not to acquire redundant information. The DS strategy achieves this by tracking approximate discounted feature expectations as detailed above and sampling perspectives based on their inverse overall similarity to all other perspectives. Algorithm <ref> outlines the procedure. First, it ensures that each perspective has been selected at least once so that valid similarities can be calculated (step 1). After an estimate μ̂ of the discounted feature expectations is available for each perspective, the DS strategy calculates a score s_i for each perspective _i by inverting the summed correlation coefficients between _i and all other perspectives (step 2). These scores are then normalized to yield probabilities, from which the next perspective is sampled (step 3). Including stochasticity in the selection process serves a dual purpose: (1) it prevents an early lock-on to a perspective that is dissimilar to all other perspectives but uninformative, and (2) it accounts for uncertainty in the estimated feature expectations by down-weighting old trajectories.

Algorithm: Dissimilarity Sampling.
Input: perspectives, approximate feature expectations μ̂(·), learner's policy π^, similarity function f.
Step 1: If there exists a perspective _i that has not been selected yet (n(_i) = 0), select it and go to Step 4.
Step 2: Otherwise, compute a score for each perspective, s_i = 1 / ∑_j ≠ i f(μ̂(_i), μ̂(_j)).
Step 3: Normalize the scores into selection probabilities p_i = s_i / ∑_j s_j and sample the next perspective from Categorical([1, …, ||], [p_1, …, p_||]).
Step 4: Roll out π^ in the selected perspective and estimate its discounted feature expectations, μ̂'() = ∑_t=0^∞ γ^t ϕ_(s_t,a_t).
Step 5: Update the exponentially weighted average of the selected perspective's feature expectations, μ̂() = α μ̂() + (1-α) μ̂'().
Output: the next perspective.

§ BACKGROUND: GENERATIVE ADVERSARIAL IMITATION LEARNING
Our approach builds on generative adversarial imitation learning (GAIL) <cit.>, which utilizes a GAN <cit.>-inspired framework for training the learner's policy to mimic the expert's policy. In GAIL, we use (given) expert behavior as demonstrations to train a learner policy without direct access to the reward signal or interaction with the expert. Concretely, a discriminator is trained to distinguish between data from the expert and the learner. The learner aims to prevent the discriminator from distinguishing between expert data and its trajectories.
An oracle discriminator would assign probability 1 to state-action pairs generated by the learner and 0 to the expert's state-action pairs [Note that perfect separation of the learner's and expert's state-actions might not be possible.]. The learner's goal is to find a policy π^_θ such that the performance of the discriminator is minimized. Formally, this results in the following minimax optimization problem:
min_π^_θ max_D 𝔼_π^_θ[log(D(s,a))] + 𝔼_π^[log(1 - D(s,a))] - λ H(π^_θ),
where the first two terms constitute the discriminator objective and the last term is an entropy regularization. Here, the discriminator D : 𝒮 × 𝒜 → [0,1] outputs the probability that the learner generated a state-action pair, 𝔼_π[·] refers to the expectation with respect to state-action pairs observed when following some policy π, and π^_θ is the learner's policy parameterized by θ. The optimal solution to the above problem occurs when the learner's policy generates data that cannot be distinguished from the expert's data by the discriminator. In this case, for an arbitrarily powerful discriminator, the learner's policy equals the expert's policy. In practice, the discriminator and the learner's policy are updated in an alternating fashion using stochastic gradient descent <cit.>. The discriminator minimizes the binary cross-entropy (i.e., the negative of its objective above), while the learner optimizes its policy using a (reformulated) output of the discriminator as a reward signal (e.g., using Trust Region Policy Optimization (TRPO) <cit.> or Proximal Policy Optimization (PPO) <cit.>). In the case of not having direct access to actions but only observations (i.e., the case of learning from observations (LfO)), we reformulate the above equation based on <cit.>:
min_π^_θ max_D 𝔼_π^_θ[log(D(o_t, o_t+Δ))] + 𝔼_π^[log(1 - D(o_t, o_t+Δ))] - λ H(π^_θ),
where the discriminator D is now a function 𝒪 × 𝒪 → [0,1] mapping two observations separated by some delay Δ to the probability that the tuple of observations was generated by the learner.

§ ADDITIONAL RELATED WORK
Several existing works consider active learning in the context of IL or IRL. They share the goal of reducing the expert's effort and/or the number of interactions with the environment. However, to the best of our knowledge, none of these considers the setting of our paper in which the learner must actively decide on the perspective from which it observes the expert. They mainly focus on querying additional information regarding optimal actions in relevant parts of the state space or in out-of-distribution settings, e.g., by directly querying for the optimal action in specific states (e.g., <cit.>) or by requesting additional demonstrations in cases of uncertainty about optimal behavior (e.g., <cit.>). These works are complementary to ours, and combining their approaches with ours might be an interesting direction for future work. More specifically, <cit.> for instance considers the setting in which the learner can query the expert about the best action for a particular state, which is selected based on previous queries and environment interactions. A similar approach is taken in <cit.>, in which additionally a noisy oracle (also termed a noisy heuristic, a classifier) is considered; it predicts the probability that an expert would not be consistent with the noisy oracle, and the expert is only consulted if that probability is sufficiently large. Thereby, the number of queries to the expert is minimized. <cit.> implement active IL to improve generalization in deep RL, querying actions for states for which the policy is uncertain and, thereby, speeding up the learning process.
<cit.> consider a setting in which the learner can request additional demonstrations in out-of-distribution settings and demonstrate that this can improve performance on manipulation tasks.The recent survey of work on interactive imitation in robotics research by <cit.> provides an overview of different possible query modalities and interfaces for human-robot interaction.§ ARCHITECTURE, HYPERPARAMETERS AND COMPUTEIn the following section, we describe the architecture of our discriminators and the hyperparameters we used to get our results. Section <ref> outlines our experiments' computational resources and hardware specifications. §.§ Network architecturesSimilar to <cit.>, we concatenate RGB-images (in our case 2) of size 3× d × d and feed them into our network as 6 × d × d arrays. We use image shifts (Δ>1 as in <cit.>) instead of consecutive observations. Refer to the tables inSection <ref> for the specific shifts used for each environment. As we assume to be given only observations of the environment (i.e. images in our experiments, cf. Section <ref>) and do not observe the expert's actions, we build on Equation <ref> substituting observations for states. Concretely, we pass a pair of observations through a convolutional feature extractor before using an MLP classification head with the Sigmoid activation function. We regularize all discriminators with Spectral Normalization <cit.>, which we found improves training stability. When learning correlations with multiple discriminators, we insert a 2 layer MLP network with shared parameters across all discriminators before a linear classification layer. In addition to the features extracted from the convolutional network, the first layer of the correlation network also conditions on the current perspective through a one-hot encoding. In both cases, we can interpret the output of the Sigmoid as the probability that the concatenated input observations belong to the learner's state distribution. The conditional discriminator concatenates the perspective information with the flattened features obtained after its convolutional layers. For Point, this information is the index of a perspective whereas for Reacher we use the normalized camera rotation parameters (azimuth, elevation, distance). The FiLM discriminator uses the same perspective information to generate the parameters for the FiLM blocks.We use binary cross entropyas an objective for optimizing our discriminators:(ŷ_t, y_t) = -(y_t log(ŷ_t) + (1-y_t) log(1-ŷ_t)),where ŷ_t = ( o_t,o_t+Δ) is the discriminator's output. The discriminator's prediction target y_t is 0 in case the observation is from the expert and 1 if it is from the learner. We use scaled uniform noise around the target labels to stabilize discriminator training. §.§ Computational ResourcesOur experiments are generated using two servers with 2 × 2x Intel 4214R CPUs each. Both machines are equipped with two Tesla A100 GPUs. For hyperparameter tuning, we rely on sensible default values and use a simple grid search to tune the most important hyperparameters around those values totaling 620 runs for Point and 700 for Reacher. The time required to run a single experiment depends on other variables such as current server load but is approximately 70 minutes for Point and 30 minutes for Reacher. §.§ Hyperparameters§ EMPIRICAL VALIDATION OF THEOREM <REF>In this section, we empirically validate Theorem <ref>.Environment. To this end, we consider grid worlds of size 10 × 10 in which an agent can move up, down, left, or right. 
In each instance of the grid world, there are 8 objects of k=4 different types (2 objects of each type) distributed uniformly at random across the grid world's cells. Each object type is associated with a random non-negative reward sampled uniformly from [0,1], i.e., the reward of the ith object type is w_i^* ∼ 𝒰([0,1]). When reaching a grid cell with an object in it, the agent receives the corresponding reward upon moving out of the cell and is randomly placed in a cell without an object. Note that the rewards described above are linear in state-dependent features which correspond to indicator vectors representing the type of object present in a state, i.e., r(s,a) = ⟨ϕ(s,a), [w_1^*, …, w_k^*]^T⟩, where ϕ(s,a) = [1_object of type 1 is in state s, …, 1_object of type k is in state s]^T. We consider a continuing setting (i.e., an infinite horizon), where the agent's starting position is a randomly selected empty cell. For computing cumulative rewards, we use a discount factor γ=0.3.

Learner. The learning agent does not observe the features described above directly. Instead, it views them through perspectives corresponding to linear transformations. In particular, we consider the following two settings:
* Subset: The agent observes the first 1 ≤ i ≤ k features, i.e., = {A_1, …, A_i}, where A_i ∈ ℝ^1 × k such that A_i = [1_i=1, 1_i=2, …, 1_i=k], i.e., the ith standard basis vector.
* Random: The agent observes 1 ≤ i ≤ k projections of the features onto a random vector each, i.e., = {A_1, …, A_i}, where A_i ∈ ℝ^1 × k such that A_i = [A_i,1, A_i,2, …, A_i,k], where A_i,j ∼ 𝒰([0,1]) for j ∈ [k].

Algorithms, the expert, and learning. We obtain an expert policy for a grid world instance through linear programming (LP) and use it to generate demonstrations. The learner aims to match the empirical feature frequencies of the expert in all perspectives available to it. We perform feature matching using an LP formulation minimizing the ℓ_∞ distance between the empirical feature expectations computed from the demonstrations and the feature expectations realized by the learner (viewed through the perspectives available to the learner). In particular, given demonstrations τ_1, …, τ_T in perspectives A_j_1, …, A_j_T, where j_t is the index of the perspective used at time t, the learner solves the following LP:
min_μ, ϵ_1, …, ϵ_|| ∑_i=1^|| ϵ_i
s.t. ∑_a μ(s,a) - γ ∑_s' ∑_a μ(s',a) T(s | s', a) = ρ(s) ∀ s (Bellman flow equations)
‖A_i F μ - Ψ̂_i‖_∞ ≤ ϵ_i ∀ i (feature matching)
where μ is the state-action occupancy measure, ρ(s) is the initial state probability, F ∈ ℝ^k × |𝒮||𝒜| is the matrix assigning each state-action pair a feature vector in ℝ^k, and Ψ̂_i is the empirical feature expectation from the demonstrations in the ith perspective.
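The LP above can be encoded directly with an off-the-shelf solver. The following is a minimal sketch using scipy.optimize.linprog; the paper does not specify the solver or data layout, so the array conventions (a transition tensor T[s', s, a] = P(s' | s, a), a feature tensor F of shape k × |S| × |A|) and all names are our own.

import numpy as np
from scipy.optimize import linprog

def match_features(T, rho, F, A_list, psi_hat, gamma=0.3):
    """Sketch of the feature-matching LP described above (our own encoding).
    T[s2, s, a] = P(s2 | s, a), rho[s] is the initial state distribution,
    F[k, s, a] holds the ground-truth features, A_list[i] is the 1 x k
    transformation of perspective i, and psi_hat[i] is the empirical feature
    expectation observed in perspective i."""
    S, A = rho.shape[0], T.shape[2]
    P = len(A_list)
    n_mu = S * A
    # Decision vector x = [mu (flattened over s, a), eps_1, ..., eps_P].
    c = np.concatenate([np.zeros(n_mu), np.ones(P)])

    # Bellman flow: sum_a mu(s,a) - gamma * sum_{s',a} T(s|s',a) mu(s',a) = rho(s).
    A_eq = np.zeros((S, n_mu + P))
    for s in range(S):
        for s2 in range(S):
            for a in range(A):
                A_eq[s, s2 * A + a] -= gamma * T[s, s2, a]
        for a in range(A):
            A_eq[s, s * A + a] += 1.0
    b_eq = rho

    # Feature matching: |A_i F mu - psi_hat_i| <= eps_i (two rows per perspective).
    Phi = F.reshape(F.shape[0], n_mu)                       # k x (S*A)
    A_ub, b_ub = [], []
    for i, A_i in enumerate(A_list):
        row = (np.asarray(A_i).reshape(1, -1) @ Phi).ravel()
        eps_row = np.zeros(P)
        eps_row[i] = -1.0
        A_ub.append(np.concatenate([row, eps_row]))
        b_ub.append(psi_hat[i])
        A_ub.append(np.concatenate([-row, eps_row]))
        b_ub.append(-psi_hat[i])

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    mu = res.x[:n_mu].reshape(S, A)
    eps = res.x[n_mu:]
    return mu, eps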
Results. In Figure <ref>, we show the performance of the learner for different numbers of available perspectives and increasing numbers of expert demonstrations. Results are averaged over 10 random grid worlds generated according to the above description. We observe that for an increasing number of demonstrations the performance of all agents, endowed with any number of perspectives, improves. For sufficiently many perspectives, the performance converges to the optimal achievable performance (computed using the true reward function). Comparing Subset and Random for the same number of available perspectives, better performance is achieved using the random projections, likely because the random features combine information about multiple ground-truth features.

§ GRID WORLDS FOR EVALUATING ACTIVE PERSPECTIVE SELECTION STRATEGIES
In this section, we provide additional details regarding the experiments presented in Section <ref>.

Environment. We consider grid worlds of size 10 × 10 in which 2 objects of each of 4 different types are placed randomly upon creation of the environment. Each object type is assigned a reward randomly drawn from 𝒰([0,1]). An agent receives an object's reward upon collecting it by moving onto the respective cell. The agent moves deterministically along the cardinal directions, except when collecting an object, in which case the agent is randomly moved to an empty cell. The ground-truth features of the environment are indicators for the 4 different object types, i.e., ϕ(s,a) ∈ ℝ^4. The interaction of the agent with the environment is non-episodic, and we use a discount factor γ=0.3.

Perspectives.
* For the basis-vector transformations, we consider the following 16 perspectives given by their linear transformation matrices A_i, i=1, …, 16: A_1 = [1,0,0,0], A_2 = [0,1,0,0], A_3 = [0,0,1,0], A_4 = [0,0,0,1], and A_i = [1,0,0,0] for i ∈ {5, …, 16}. That is, the perspectives A_1, A_5, …, A_16 are the same.
* For the random linear transformations, 40 perspectives given by their linear transformation matrices A_i, i=1, …, 40, are created as follows: (i) draw Ã_i ∈ ℝ^4 from 𝒰([0,1]^4); (ii) construct A_i from Ã_i by setting all entries below 0.5 to zero. This ensures that each perspective only measures a subset of the ground-truth features (in expectation). Because of the large number of perspectives, it is probable that some of the perspectives are similar, i.e., contain redundant information.

Perspective selection strategies. We consider 4 perspective selection strategies:
* uniform. This strategy selects the available perspectives in a round-robin fashion, i.e., in the jth interaction it selects the perspective with index j mod K, where K is the total number of available perspectives.
* active (var). This strategy exploits full knowledge about the feature transformations. In particular, the feature matching problem is considered as a least-squares problem in which the matrices A_i correspond to the regressors and the observed feature expectations for a demonstration correspond to the dependent variable. In this setting, one can compute the variance of the estimate of the optimal regression coefficients. The perspectives are selected in order to minimize the variance of this estimate.
* active (sim). This strategy exploits some knowledge about the feature transformations, provided in terms of similarities among perspectives. Concretely, the inner product of pairs of feature transformations (normalized by the 2-norm of the transformations) is considered as their similarity and used to construct a similarity matrix S. From this similarity matrix, we compute probabilities for sampling each perspective by considering the normalized inverse similarity of the perspective to all other perspectives.
* active (corr). This strategy is similar to active (sim) but does not leverage any knowledge about the feature transformations. Similarities are replaced by feature correlations among different perspectives, computed in a similar way as detailed in Section <ref>.
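To illustrate the two similarity-based strategies, the following sketch shows how selection probabilities could be derived from a similarity (or correlation) matrix. It mirrors the description above and the Dissimilarity Sampling procedure of the earlier appendix, but the concrete code and names are ours, not the actual implementation.

import numpy as np

def selection_probabilities(similarity):
    """Score each perspective by the inverse of its summed similarity to all
    other perspectives and normalize to probabilities.  `similarity` is a
    symmetric K x K matrix: normalized inner products of the transformations
    for active (sim), feature correlations for active (corr)."""
    off_diagonal = similarity - np.diag(np.diag(similarity))
    scores = 1.0 / np.maximum(off_diagonal.sum(axis=1), 1e-8)  # avoid /0
    return scores / scores.sum()

def sample_perspective(similarity, rng=None):
    """Draw the next perspective index from the categorical distribution."""
    rng = np.random.default_rng() if rng is None else rng
    p = selection_probabilities(similarity)
    return int(rng.choice(len(p), p=p))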
http://arxiv.org/abs/2312.16365v1
{ "authors": [ "Timo Klein", "Susanna Weinberger", "Adish Singla", "Sebastian Tschiatschek" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20231227001709", "title": "Active Third-Person Imitation Learning" }
http://arxiv.org/abs/2312.16617v1
{ "authors": [ "Andrey Gelash", "Sergey Dremov", "Rustam Mullyadzhanov", "Dmitry Kachulin" ], "categories": [ "nlin.PS", "nlin.SI" ], "primary_category": "nlin.PS", "published": "20231227155115", "title": "Bi-solitons on the surface of a deep fluid: an inverse scattering transform perspective based on perturbation theory" }
Department of Physics and Origin of Matter and Evolution of Galaxies (OMEG) Institute, Soongsil University, Seoul 06978, Korea Department of Physics, Tohoku University, Sendai, 980-8578, Japan Department of Physics and Origin of Matter and Evolution of Galaxies (OMEG) Institute, Soongsil University, Seoul 06978, Korea Effects of the center-of-mass (CM) correction together with the nucleon electromagnetic form factors on the nuclear charge radius are systematically studied with a relativistic Hartree-Bogoliubov model.Both one- and two-body parts of the CM correction are taken into account.It is found that the one- and two-body CM corrections, and the spin-orbit effect originatingfrom the nucleon anomalous magnetic moments are all of the same order in magnitude,and that they give sizable impacts on the charge radius from light to heavy nuclei. Effects of center-of-mass correction and nucleon anomalous magnetic moments on nuclear charge radii Myung-Ki Cheoun January 14, 2024 =================================================================================================== § INTRODUCTION The nuclear charge radius is one of the most fundamental observables of the atomic nucleus,which is measured accurately by the electromagnetic probes such as electron scattering andatomic laser spectroscopy <cit.>. Although the charge radius represents simply the size of the nuclear many-body system,it exhibits signals of the nuclear structure effects such as the shell effect <cit.>,pairing correlation <cit.>, and deformation <cit.>.The quantum fluctuation of the nuclear shape can also have considerable effects on thecharge radius <cit.>. It is also argued that the difference of charge radii betweena pair of mirror nuclei is correlated with the nuclear symmetry energy <cit.>.Therefore, the precise theoretical interpretation of the charge radiusis intimately related to various many-body and electromagnetic effects as well asthe understanding of nuclear force. Among the nuclear many-body theory,the mean-field model <cit.> is suitable to study the systematicbehaviors of the charge radius.It describes the nuclear many-body system in a microscopic mannerwith a universal energy density functional (EDF).Properties of the atomic nucleus such as binding energy, size,and electromagnetic moments are the basic ground-state observables that one wishes todescribe with the model.An essential feature of the mean-field model is the breaking of the symmetries possessed bythe many-body Hamiltonian.On the one hand, it introduces additional correlations within a single product-statewave function, and on the other hand, it necessitates restoration of symmetriesor correction of the observables for the symmetry breaking <cit.>. The translational invariance is always violated in the mean-field model forfinite nuclei since a many-body state is constructed as nucleons bound in a mean-fieldpotential which is fixed in space.The center of mass (CM) of the state is localized around the potential and gives spuriouscontributions to observables.In principle, one should restore the symmetry by a projection method, <cit.>, which is numerically costly for realistic calculations. In most applications, the spurious effect is either neglected orremoved in various approximate ways from the binding energy and the charge radius<cit.>.Recently, the CM correction on the binding energy was extensively discussed in Ref. 
<cit.>with a particular focus on the impact of the two-body operator part of the CM kinetic energy,which has been neglected in many of the existing EDFs.The significant effects of the two-body part on the surface-energy coefficient and the deformation energy were demonstrated <cit.>. In this work, we assess the correction of the charge radius for theviolation of translational invariance.The correction is made by removing the effect of the zeropoint fluctuation of the CMin calculating the expectation value of the squared radius.As in the case of the CM kinetic energy <cit.>,there arise one- and two-body parts of the correction of the expectation value.The CM correction of the radius has often been completely neglected,although it is taken into account in some of the existing functionalswith the one- and two-body parts <cit.> in an approximate way <cit.>,with only the one-body part <cit.>.Note that, for the charge radius, the CM correction can also be taken into account in the nuclear charge form factors by an approximate projection technique <cit.>(see also Refs. <cit.>).The connection between our approach and the projection method will also be discussed via aharmonic-oscillator model. In addition to the CM correction, it is important also to consider the electromagnetic structure of the nucleon for precise description of the charge radius,which is reflected in the electromagnetic form factors.Notice that the form factors of nucleon directly affect thenuclear charge-density distribution.In particular, the effect of the so-called “spin-orbit” contributiondue to the anomalous magnetic moment of nucleon is sensitive to the shell structure,as has long been discussed <cit.>.Since it is an O((v/c)^2) effect, it would be comparable to the CM correction of O(1/A). Therefore, in the present work, we take into account the full CM correction ofthe charge radius, including its two-body part, togetherwith the nucleon electromagnetic form factors to study systematicallyi) the contributions to the charge radius from CM correction and anomalous magnetic coupling,and ii) the impact of the corrections on the charge radius,in comparison with the experimental data.To be consistent with the electromagnetism formulated in a covariant way,it is appropriate to treat the nuclear many-body system with a relativistic theory.For this purpose, therefore, we employ a relativistic Hartree-Bogoliubov (RHB) model.It should also be noted that the significance of the relativistic nuclear mean fieldsin the anomalous magnetic coupling term has been pointed out in Refs. <cit.>.The paper is organized as follows.In Sec. <ref>, we describe how the CM correction and anomalous magnetic coupling effectmodify the calculation of charge radius.The analysis of the corrections and comparison with experimental data are presentedin Sec. <ref>. Lastly, summary and outlook is given in Sec. <ref>.§ MODEL§.§ Relativistic Hartree-Bogoliubov modelWe employ an RHB model with DDME2 parameter set <cit.> for the ph channel andGogny D1S interaction <cit.> for the pp channel.A remark on DDME2 is in order: the parameter fit to charge radii wasmade by r_ ch = √(⟨ r^2⟩_p + (0.8  fm)^2), where ⟨ r^2⟩_p is the mean-squared (MS) radius of point-proton density distribution,and (0.8fm)^2 is a correction for the charge radius of the proton itself, with BCS calculationsinstead of Hartree-Bogoliubov.The CM correction and anomalous magnetic coupling described in the following subsectionswere not considered.See Refs. 
<cit.> for details of the RHB model and the DDME2 parameter set.We impose the spherical symmetryand solve the RHB equations in the radial coordinate space. §.§ Center-of-mass correction on mean-squared radiiThe mean-square (MS) radius ⟨ r^2⟩_p of proton distribution, without CM correction, is given as Z⟨ r^2⟩_p = ⟨∑_i∈ p r_i^2⟩ =∫ d^3r r^2ρ_p( r),where r_i is the position of the ith proton.The correction for the spurious CM contribution should be made byZ⟨ r^2⟩_p, corr = ⟨∑_i∈ p ( r_i- R_G)^2⟩≡Z[ ⟨ r^2⟩_p+Δ_p^( CM1)+Δ_p^( CM2)],where R_G = (1/A)∑_i=1^A r_i is the CM position of the nucleus,and the one- and and two-body parts of the correction, Δ_p^( CMi) (i=1,2),are given byΔ_p^( CM1) =-2/AZ∑_α∈ pv_α^2⟨α|r^2|α⟩ +1/A^2∑_αv_α^2⟨α|r^2|α⟩, Δ_p^( CM2) =+2/AZ∑_αβ∈ p (v_α^2v_β^2-u_α v_α u_β v_β)|⟨α| r|β⟩|^2-1/A^2∑_αβ (v_α^2v_β^2-u_α v_α u_β v_β)|⟨α| r|β⟩|^2,respectively.u_α and v_α are the occupation amplitudes of the canonical single-particle state α <cit.>. Notice that the summation of the first terms inEqs. (<ref>) and (<ref>) runs over the proton states only whereasthe one in the second terms runs over both the proton and the neutron states.See Appendix <ref> for a derivation of Eqs. (<ref>) and (<ref>). §.§ Effect of anomalous magnetic moment and finite size of nucleon In general, the nuclear charge form factor is given by <cit.>ρ̃_ ch( q) =∑_τ=p,n∫ d^3r e^i q· r[ F_1τ(q^2)ρ_τ( r) . . +F_2τ(q^2)ρ_κτ( r) ],where in the mean-field approximationρ_τ( r)= ∑_α∈τv_α^2ψ_α^†( r)ψ_α( r), ρ_κτ( r)= κ_τħ/2mc∇·∑_α∈τv_α^2ψ̅_α( r)iαψ_α( r),with ψ_α being the wave function of a canonical single-particle state α.In Eq. (<ref>), m is the nucleon mass, κ_p=1.793 and κ_n=-1.913 arethe anomalous magnetic moments of nucleon, and α=γ^0γ is theusual Dirac matrix.The nucleon form factors F_1(q^2) and F_2(q^2) contain the information about the internal electromagnetic structure of nucleon.Note that their values at zero momentum transfer are identified asF_1(0)=Q and 2[F_1(0)+κ F_2(0)]=g, where Q is the electric charge, and g is the g factor of nucleon <cit.>.Thus they are normalized as F_1p(0)=F_2p(0)=F_2n(0)=1, and F_1n(0)=0.The nuclear MS charge radius without the CM correction, which we denote here as ⟨ r^2⟩_ ch', is given by⟨ r^2⟩_ ch'= -∇^2ρ̃_ ch( q)|_ q= 0/ρ̃_ ch( 0)= ⟨ r^2⟩_p+ ⟨ r^2⟩_κ + C_p + N/ZC_n,where ⟨ r^2⟩_κ = 1/Z∑_τ=p,n∫ d^3r r^2ρ_κτ( r),and C_τ (τ=p,n) are the constants independent of the nuclear structure,C_τ = -6.dF_1τ/dq^2|_q^2=0= -6.dG_Eτ/dq^2|_q^2=0 - 3ħ^2/2m^2c^2κ_τ.Here, G_Eτ=F_1τ-q^2(ħ/2mc)^2κ_τ F_2τ isthe electric Sachs form factor <cit.>.The first term in Eq. (<ref>) is interpreted as the MS charge radius of the nucleon itself <cit.>. We take the experimental values <cit.> for proton and neutron charge radii, -6.dG_Ep/dq^2|_q^2=0 = (0.841fm)^2,-6.dG_En/dq^2|_q^2=0 = -0.116fm^2.Therefore, for given densities ρ_p and ρ_κτ of point nucleons,the momentum dependence of the form factors, or the finite-size effect,only adds a constant to the MS radius of point-nucleon charge distribution. In this work, we calculate the charge radius in the following way.In the RHB calculations, we take F_1τ(q^2) = F_1τ(0)and F_2τ(q^2) = F_2τ(0), i.e., we take into account theeffect of the point-nucleon anomalous magnetic moment. The charge density is thengiven as ρ_ ch( r) = ρ_p( r)+ ∑_τ=p,nρ_κτ( r).The first term ρ_p is the point-proton density distributionwhile the second term ρ_κ describes the contributions of the anomalous magnetic couplings to the charge density.We refer to the latter as the “spin-orbit” term. 
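For illustration, once the spherical point-proton density ρ_p(r) and the summed spin-orbit density ∑_τ ρ_κτ(r) are available on a radial mesh, the corresponding contributions to the MS charge radius can be evaluated by simple quadrature. The following is a minimal numerical sketch (our own; it assumes spherically symmetric densities given in fm^-3 on a mesh in fm, and the function names are illustrative):

import numpy as np

def ms_radius_contributions(r, rho_p, rho_kappa, Z):
    """Evaluate <r^2>_p and <r^2>_kappa from spherical radial densities.
    rho_p(r) is the point-proton density and rho_kappa(r) the summed
    spin-orbit density; both normalizations follow the equations above."""
    def radial_moment(density, power):
        # \int d^3r r^power density(r) = 4 pi \int dr r^(power+2) density(r)
        return 4.0 * np.pi * np.trapz(r ** (power + 2) * density, r)

    r2_p = radial_moment(rho_p, 2) / Z       # proton MS radius
    r2_kappa = radial_moment(rho_kappa, 2) / Z  # spin-orbit contribution
    return r2_p, r2_kappa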
Since we use momentum-independent form factors, the finite-size effect is still neglected.Instead, the finite size of nucleon will be considered only at the final step to computethe MS charge radius by folding the resulting RHB charge density by the nucleonform factors, and consequently we add simply the C_τ terms to the MS radius. We expect this is enough to the first approximationsince the finite-size effect would give nearly the constant shift to the MS charge radiusunless the complicated many-body effects <cit.> on the nucleon form factors areexplicitly considered.Note also that, according to the extra term ∑_τρ_κτ of the charge density in Eq. (<ref>), the equations of motion to be solved in the mean-field calculationfor the electrostatic and the nucleon fields are modified.With the CM correction on ⟨ r^2⟩_p, we have for the MS charge radius, ⟨ r^2⟩_ ch = ⟨ r^2⟩_p, corr+⟨ r^2⟩_κ + C_p+N/ZC_n = ⟨ r^2⟩_p +Δ_p^( CM1)+Δ_p^( CM2)+ ⟨ r^2⟩_κ + (0.588 + 0.011N/Zfm^2 ),where we have substituted the numerical values for nucleon charge radii [Eqs. (<ref>) and (<ref>)], and the3ħ^2κ/2m^2c^2 terms.The first term of Eq. (<ref>) is the MS radius of point proton, the second and third terms arethe CM correction of the first, the fourth term is the contribution fromthe magnetic spin-orbit term, and the last term is the finite-size effect of nucleonintroduced by the momentum-dependence of the form factors.Notice that the last term which is independent of the many-body wave function isalmost constant with a weak N/Z dependence. In the present work, the CM correction of the small spin-orbit contribution⟨ r^2⟩_κ is neglected.The root-mean-square (RMS) charge radius is defined as r_ ch = √(⟨ r^2⟩_ ch).§ RESULTS AND DISCUSSIONS With the model described in the previous section, we calculate thecharge radii of even-even nuclei in the isotope chains ^4`-8He, ^10`-22C,^12`-28O, ^36`-56Ca, ^50`-80Ni,^78`-112Zr, ^100`-148Sn, and ^180`-220Pb. For brevity, the one- and two-body CM correction, Δ_p^( CM1) and Δ_p^( CM2), and the spin-orbit term ⟨ r^2⟩_κ will be referred to as CM1, CM2, and SO, respectively.§.§ Contribution of each correctionBefore making a direct comparison of calculated and measured values of the charge radius,we first show in Fig. <ref> the contributions to the MS charge radius of the three terms,the SO term, ⟨ r^2⟩_κ, with magenta triangles,the CM1 term, Δ_p^( CM1), with skyblue squares,and the CM2 term, Δ_p^( CM2), with purple squares.The sum of the three is shown by black dots.The gray bands in the figure show, as a reference to the size of experimental uncertainty,the range given by Δ⟨ r^2⟩( exp) ∈ [(r_ ch-δ r_ ch)^2-r_ ch^2:(r_ ch+δ r_ ch)^2-r_ ch^2],with r_ ch and δ r_ ch being the measured value of the charge radius and the associated error, respectively. Remarkably, all of the three correction terms are of the same order of magnitude, andfurthermore, each contribution as well as their sum are much larger thanthe size of experimental uncertainty except for a few cases. It implies thatthe three contributions have to be considered if one strives for precise descriptionof the nuclear charge radius. 
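As a schematic illustration of how the one- and two-body CM correction terms defined above can be evaluated once the canonical basis is available, consider the following sketch. The array layout (occupation amplitudes, diagonal r^2 matrix elements, and squared |⟨α|r|β⟩|^2 matrix elements stored as NumPy arrays) is our own assumption; only the formulas themselves are taken from the text.

import numpy as np

def cm_corrections(A, Z, v2, u, r2_diag, rvec2, is_proton):
    """Sketch of the one- and two-body CM corrections of the proton MS radius.
    v2[a], u[a]   : canonical occupation amplitudes v_a^2 and u_a
    r2_diag[a]    : <a| r^2 |a>
    rvec2[a, b]   : |<a| vec r |b>|^2
    is_proton[a]  : boolean mask selecting proton states"""
    v = np.sqrt(v2)
    p = np.asarray(is_proton, dtype=bool)

    # One-body part (CM1)
    cm1 = (-2.0 / (A * Z)) * np.sum(v2[p] * r2_diag[p]) \
          + (1.0 / A**2) * np.sum(v2 * r2_diag)

    # Two-body part (CM2): pair weights v_a^2 v_b^2 - u_a v_a u_b v_b
    w = np.outer(v2, v2) - np.outer(u * v, u * v)
    cm2 = (2.0 / (A * Z)) * np.sum(w[np.ix_(p, p)] * rvec2[np.ix_(p, p)]) \
          - (1.0 / A**2) * np.sum(w * rvec2)

    return cm1, cm2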
§.§.§ Center-of-mass correctionThe CM1 and CM2 terms are respectively negative and positive in most cases and rather smooth as functions of the mass number.Since CM1 and CM2 are O(1/A) corrections, their values tend to be more substantial forthe light nuclei but smaller and almost constant for heavy nuclei.Moreover, the CM2 term tend to cancel the CM1 term for heavier systems,representing the correct asymptotic behavior of the CM correction for A→∞, orinfinite-matter limit. Therefore, the CM2 term should not be neglected in particularfor heavier nuclei.An approximation with a harmonic-oscillator model described in Appendix <ref> is helpfulto discuss the CM correction. As shown in Appendix <ref>, the harmonic-oscillator modelreproduces accurately the RHB results for Ca and heavier nuclei but only qualitativelyfor the lighter nuclei.With a further crude approximation in the harmonic-oscillator model, N=Z=A/2,one finds for the two-body to one-body ratio of the CM correction that Δ_p^( CM2)/Δ_p^( CM1) = -N̅/N̅+2,where N̅ is the harmonic-oscillator quantum number of the highest-occupied major shell.One immediately sees that the ratio tends to zero for s-shell nucleiand decreases with A towards the asymptotic value -1 for A→∞. One observes the similar trend in Fig. <ref>. Now let us pick up the He isotopes showing somewhat irregular behavior,for which the harmonic-oscillator model may not work well because of the small mass numbersand the weakly-bound nucleons.As can be seen in Fig. <ref>(a), Δ_p^( CM1) for ^8He becomes positive,and Δ_p^( CM2) is negative for ^6He and ^8He.From Eq. (<ref>), we have for the CM1 correction, Δ_p^( CM1) =1/A[ -2(1-Z/2A)⟨ r^2⟩_p +N/A⟨ r^2⟩_n ] = 1/8(-3⟨ r^2⟩_p + ⟨ r^2⟩_n) for ^4 He,1/18(-5⟨ r^2⟩_p + 2⟨ r^2⟩_n) for ^6 He,1/32(-7⟨ r^2⟩_p + 3⟨ r^2⟩_n) for ^8 He,where ⟨ r^2⟩_n is the MS radius of neutron. Thus it is determined by the balance between negative and positive contributions from protons and neutrons, respectively. In the neutron-rich He isotopes, the neutron MS radius enhanced by theweekly-bound p-shell neutrons increases the CM1 term.See Table <ref> for the neutron and proton MS radii and the resulting CM1 term ofthe He isotopes obtained by the RHB calculations.We note that the similar mechanism applies also to general near-dripline nucleiand that this effect is missing in the harmonic-oscillator model.(See also Fig. <ref> in Appendix <ref> for the comparisons of the CM correctionbetween the RHB and the harmonic-oscillator models. ) The negative values of the CM2 correction in ^6He and ^8He can be understood more simply.Since the two protons fill only the s shell, the first term in Eq. (<ref>),which is positive, vanishes for the He isotopes.If we assume roughly that v_n1s_1/2^2≈ 1 and v_n1p_3/2^2≈ (N-2)/4for the occupation probabilities of the neutron 1s_1/2 and 1p_3/2 states,respectively, Δ_p^( CM2)≈ -2/3N-2/A^2 I_sp^2, I_sp≡∫ dr rG_n1s_1/2(r)G_n1p_3/2(r),where G_n1s_1/2(r) and G_n1p_3/2(r) are the radial wave functions of theupper component of the canonical neutron 1s_1/2 and 1p_3/2 states, respectively. Since I_sp^2∼ 1 fm^2, Eq. (<ref>) explains the small negative valuesof the CM2 term in ^6He and ^8He. We also mention here the connection of our approach to the approximate projectionmethod <cit.> via harmonic-oscillator approximation.Within the harmonic-oscillator model as described in Appendix <ref>,the total CM correction given by Eqs. 
(<ref>)-(<ref>) satisfiesΔ_p^( CM1)+Δ_p^( CM2) = -9ħ^2/4⟨ P_ CM^2⟩,where P_ CM is the CM momentum.On the other hand, it was shown in Ref. <cit.>that the second-order Gaussian-overlap approximation to the momentum projectionyields an effect identical to that with a harmonic-oscillator approximation.In their approximation, the nuclear charge form factor is corrected byan additional factor of exp(3ħ^2q^2/8⟨ P_ CM^2⟩) <cit.>,which coincides with the CM correction of -9ħ^2/4⟨ P_ CM^2⟩ in Eq. (<ref>).Thus our approach yields, for heavy nuclei, approximately the same correction as the projection method, but not for light or weakly-bound nuclei for which the harmonic-oscillator model is not a good approximation (see Appendix <ref>). §.§.§ Spin-orbit effect The SO effect is more sensitive than the CM corrections to the shell structure.As a result, the shape of the total correction for the heavier isotopesis determined almost by the SO effect with a shift by the CM correction. The behavior of ⟨ r^2⟩_κ can be qualitatively understood by a nonrelativistic approximation[ Note that the simple “nonrelativistic approximation” is onlya poor approximation to the SO contribution in relativistic mean-field theory, as pointed out in Refs. <cit.>, because of the strong relativistic potentials of hundreds of MeV, but it is still usefulto discuss the qualitative behavior of ⟨ r^2⟩_κ.We have found indeed that the estimates with Eq. (<ref>),⟨ r^2⟩_κ ( fm^2) = -0.0422n for ^4+nHe, = -0.0211n for ^16+nO,and =-0.0127n for ^40Ca underestimates the RHB results by factor of ≈ 2 in the absolute value but with the correct sign.] <cit.>, ρ_κ = κħ/2mc∇·⟨ψ̅iαψ⟩∼ -κħ/2mcħ/mc∇· J,where J is the nonrelativistic spin-orbit density <cit.>.By integrating Eq. (<ref>) with r^2, one finds thatZ⟨ r^2 ⟩_κ∼κ(ħ/mc)^2 ∑_av_a^2(2j_a+1)⟨ l·σ⟩_awhere a labels a j shell, and v_a^2 and j_a are the occupation probabilityand the angular momentum of the level a, respectively.The symbol ⟨ l·σ⟩_a is defined as ⟨ l·σ⟩_a = +l_afor j_a = l_a+1/2, -l_a-1for j_a = l_a-1/2,where l_a is the orbital angular momentum of the level a.Thus neutrons in a j_> = l+1/2 (j_< = l-1/2) shell give negative (positive)contribution to ⟨ r^2 ⟩_κ, and a pair of spin-orbit doublet orbitals canceleach other at an LS-closed configuration. Since κ_p is similar in the absolute valueto κ_n with the opposite sign, protons make the opposite contributionto ⟨ r^2 ⟩_κ in LS-open nuclei.Thus ⟨ r^2 ⟩_κ approximately vanishes for, e.g.,doubly LS-closed or N=Z nuclei.We illustrate here the five isotope chains for which we will show theisotope shifts in the next subsection.In the Ca isotopes shown in Fig. <ref>(d),the increase towards zero of ⟨ r^2 ⟩_κ up to N=20 andthe decrease beyond is understood by the effects of neutrons filling 1d_3/2 and1f_7/2 shells, respectively.In the Ni isotopes shown in Fig. <ref>(e), ⟨ r^2 ⟩_κ≈ 0at N=Z=28 due to the approximate isovector character of the SO effect.Above N=28, the neutrons are scattered over the 1p_3/2, 1p_1/2, and 1f_5/2states by the pairing interaction, which smoothen the variation of ⟨ r^2 ⟩_κ. The net increase of ⟨ r^2 ⟩_κ from N=28 to 40is caused by the 1f_5/2 neutrons.The large negative slope for N>40 is the effect of the 1g_9/2 neutrons.In the Zr isotopes shown in Fig. <ref>(f),⟨ r^2 ⟩_κ≈ 0 at the doubly LS-closed ^80Zr nucleus anddecreases as the neutrons are added in the 1g_9/2 shell.In the Sn isotopes shown in Fig. 
<ref>(g), although the shell effect on⟨ r^2 ⟩_κ is smoothened by the pairing correlation,its decrease between A≈ 120 and 132 is caused mainly by the 1h_11/2 neutrons.Finally, in the Pb isotopes shown in Fig. <ref>(h), it is again the intruder1i_13/2-state neutrons that mainly contribute the smooth decrease of⟨ r^2 ⟩_κ up to A=208.Let us give a little more general discussion on the SO effect around the neutron shell closures.Below the larger magic numbers N=50, 82, and 126,the neutrons filling the intruder j_> state, whose orbital angular momentum islarger than any levels in the shell below, mainly contribute to the decrease ofthe charge radius as approaching the magic numbers.Above a magic number, the decrease before is eventually compensated by filling of thespin-orbit partner of the intruder, but the other levels may also contributeat the early filling of the new shell.As a result, a local minimum of ⟨ r^2⟩_κat or a little beyond N=50, 82, or 126 is developed. It is not the case, however, for the lower magic numbers N=8 and 20 (and N=40) thatcorrespond to the LS closures.In contrast to the N≥ 50 shell closures,the single-particle level below (above) an LS closure is j_< (j_>), which for the neutron casemakes positive (negative) contribution to the charge radius, forming a local maximumat N=8, 20, or 40.Such local extrema of ⟨ r^2⟩_κ as described above are clearly observed indeedin Fig. <ref>.This characteristic behavior of ⟨ r^2⟩_κ may influence the shape of the isotopeshifts, in particular the kink structure as discussed also in Ref. <cit.>. See also a similar discussion based on the effect of nuclear spin-orbit force in Ref. <cit.>. §.§ Comparison with experimental data Here we compare the following three calculations with experimental data for the charge radius. * The RHB calculations are done with F_1p(q^2)=1, F_1n(q^2) = 0, and F_2p(q^2) = F_2n(q^2) = 0, and the charge radius is calculated by r_ ch = √(⟨ r^2⟩_p + (0.8fm)^2), denoted in Figs. <ref> and <ref> as “+(0.8)^2”.* The RHB calculations are done with anomalous magnetic moment, i.e.,F_1p(q^2)=1, F_1n(q^2) = 0, and F_2p(q^2) = F_2n(q^2) = 1, and the charge radius is calculated by Eq. (<ref>), denoted in Figs. <ref> and <ref> as “+FF”. * Same as 2. but r_ ch is calculated by Eq. (<ref>) with the CM correction, denoted in Figs. <ref> and <ref> as “+FF+CM”.§.§.§ Absolute values of charge radii Fig. <ref> shows the calculated absolute values of the RMS charge radii r_ ch in comparison with experimental data.The black dashed lines are the results obtained simply by r_ ch = √(⟨ r^2⟩_p + (0.8fm)^2) without CM and SO corrections,and the green triangles and yellow circles are the ones obtained with only the SO and finite-size correction as in Eq. (<ref>)and with the full correction as in Eq. (<ref>), respectively. The experimental data<cit.> are shown by red squareswith error bars. As was shown also in Sec. <ref>, both CM and SO influence thecharge radii by much more than the experimental uncertainties.The CM correction systematically reducesthe charge radii. The effect is most significant for He isotopes, and less for the heavier systems.The SO effect is comparable to the CM correction in light nuclei and dominantin many of heavier nuclei. It is negative except for neutron-deficient C, O, and Ni isotopesand some of the Sn isotopes (see discussion in Sec. <ref>). The calculated radii with the full correction of the He, C isotopes [Fig. <ref>(a)], and the Pb isotopes [Fig. 
<ref>(d)] tend to near the experimental values,while the agreements in other nuclei are deteriorated by CM and SO corrections.We note again that the fitting of DDME2 parameter set is done forr_ ch = √(⟨ r^2⟩_p + (0.8fm)^2) without CM and SO corrections to ^16O, ^40,48Ca, ^90Zr, ^116,124Sn,and ^204,208,214Pb nuclei <cit.>.It has also to be mentioned that the finite-size effect for “+FF” and “+FF+CM” valuesof the charge radius are given with different values of the nucleon sizesand the additional 3κħ^2/2m^2c^2 termsas compared to the one adopted in the DDME2 fit [see Eqs. (<ref>) and (<ref>)]. The charge radii of the He isotopes [Fig. <ref>(a)] are most influencedby the corrections because of small A and Z.Without CM and SO corrections, the charge radius is largest for ^4He andis almost constant along the chain up to ^8He.The slope becomes negative with the SO effect only, but the CM correctionmakes the slope positive, which follows the trend of the measured charge radii of He isotopes.The large staggering of r_ ch in ^4He-^6He-^8He is not reproduced. In the C and O isotopes, the CM correction is dominant around N=Z, but theSO effect increases as the neutrons fill the 1d_5/2 state while the CM correctionbecomes smaller. As a result, the total correction is more or less constant along the chains.One sees a kink at ^24O due to the SO effect of neutrons filling the 1d_3/2 state. In the Ni isotopes [Fig. <ref>(b)], the CM correction dominates over the SO correction for N≤ 40.Above N=40, the strong negative SO effects of 1g_9/2 neutrons suppresses the slopeof the charge radius, forming a kink at ^68Ni which was not observed in therecent measurement <cit.>. We will discuss the Ca, Ni, Zr, Sn, and Pb isotopes in more detail withthe isotope shifts in the next subsection. §.§.§ Isotopic shifts In order to reduce the systematic error in the calculated values of the charge radiuscoming from the above mentioned fitting procedure, we show inFig. <ref> the isotopic shifts, defined as the MS charge radiusof an isotope A relative to a reference one A', δ⟨ r^2⟩_ ch^A,A' = ⟨ r^2⟩_ ch(A)-⟨ r^2⟩_ ch(A'). for Ca, Ni, Zr, Sn, and Pb isotopes.Note that the effect of CM correction is also nearly cancelled out by the subtraction for heavier systems. In Ca isotopes shown in Fig. <ref>(a), the SO effect of 1f_7/2 neutronsdrastically changes the slope of the shift between A=40 and 48.The slight decrease of the charge radius from ^40Ca to ^48Ca is qualitativelyreproduced <cit.>.It can also be seen that the CM correction slightly decrease the charge radiuson A<40 side and increase on the other side, moderating the change of slopebeyond A=40.The local maximum of charge radius at ^44Ca and the unexpectedly large radiusof ^52Ca are not reproduced by the present calculations <cit.>. Fig. <ref>(b) shows the shifts in the Ni isotopes.A sharp kink at ^56Ni observed in a recent experiment <cit.>is reproduced both with and without the CM and SO corrections. The SO effect sharpens the kink and improves the agreement with the data.Another kink appears at A=68 because of the strong SO effect of 1g_9/2 neutrons.This kink was not observed in another recent experiment <cit.>.The rapid increase of the measured charge radius above N=28, as in the Ca isotopes,forming an arch-like shape over N=28`-40 is again not reproduced by the present calculations. Note that it was recently pointed out in Ref. 
<cit.> thatthis characteristic behavior of the charge radius between N=28 and 40 is affectedby various properties of the mean-field model such as the bulk properties, shell structure,and pairing correlation. The result for the Zr isotopes is shown in Fig. <ref>(c). The slope at A<90 region is changed mainly by the SO effect, which improves the agreement withthe decrease of the measured charge radius from A=88 to 90 [see also Fig. <ref>(f)]. The large discrepancy beyond A=90 may be attributed to deformation effect <cit.>. In Sn isotopes shown in Fig. <ref>(d), the decline of the slope at A>120 regionis well reproduced mainly by the SO effect of 1h_11/2 neutrons,as discussed in the previous subsection.The SO effect above N=82 shell closure is almost flat and smooth due to the scattering ofneutrons over the shell above N=82 [see Fig. <ref>(g)].This, together with the the SO effect of 1h_11/2 neutrons, leads to a kink at A=132slightly weaker than is experimentally observed. Lastly, in Fig. <ref>(e) showing the Pb isotope chain,the slope of the A<208 chain is changed by the SO effect of mainly 1i_13/2 neutrons,which yields the constant decrease of the negative SO effect for A<208 [see also Fig. <ref>(f)].It improves the region 182≤ A≤ 192 but slightly worsen 192≤ A≤ 206.We have also tried the same calculations for DDMEδ parameter set <cit.> and observed qualitatively similar effects of CM and SO corrections,but without a kink at ^68Ni.It implies that the SO effect on the kink structure is sensitiveto the proton shell structure and the proton occupation probabilities determined by the pairing correlation. See also Ref. <cit.>, in which a number of mean-field models are compared withoutthe CM correction. Global performance studies of the DDME2 and other parameter setswere also done in Refs. <cit.>. Recently, the effect of the ω-N and ρ-N tensor couplingsin a relativistic mean-field model on the charge radii were systematically investigated <cit.>.It was observed that the impact of the tensor couplings on charge radii is comparableto the effects considered in the present work. The meson-nucleon tensor couplings indirectly influence the charge radius throughits effect on the neutron spin-orbit splittings and the neutron occupation probabilitiesof the single-particle levels <cit.>.The same effect was also discussed in Ref. <cit.> with an extradensity-dependent nuclear spin-orbit force, which leads to results resemblingto ours for the isotope shifts in Ca, Ni, Sn, and Pb chains. On the other hand, the magnetic SO term in the present study, namely the photon-nucleontensor coupling, is a consequence of the electromagnetic property of the nucleon,which directly modifies the charge density. Note also that the SO effect is entangled with the effect of strong relativisticnuclear mean fields as discussed in Refs. <cit.> although it is a pure electromagnetic effect,and that the nucleon magnetic moments or more generally the form factorscould be modified in nuclear medium by the many-body effects and the underlying QCD quark-gluon dynamics <cit.>. As a final remark, the beyond-mean-field correlations other than the CM correctioncan also alter the charge radius <cit.>.The effects of the zeropoint quadrupole-shape fluctuation on charge radiuswas found to be as large as ∼ 0.01 fm <cit.>. 
§ SUMMARY
We have studied the effects of the one- and two-body CM corrections, and of the SO term originating from the anomalous magnetic moment of the nucleon, on the nuclear charge radius. The former is required by the inevitable breaking of translational invariance in the mean-field model, whereas the latter is an electromagnetic property of the nucleon that directly affects the nuclear charge-density distribution. The finite-size effects of the nucleon from both the Dirac and Pauli form factors were also included. We employed an RHB model with DDME2 for the ph channel and Gogny D1S for the pp channel. We have observed sizable impacts of each correction on the charge radius from light to heavy nuclei. The light nuclei are significantly affected by both the CM and SO corrections, while the heavier nuclei are much less affected by the former, as expected. The CM correction consists of one- and two-body parts. The heavier the system, the more significant the effect of the two-body part, and thus it should not be neglected. We also find that the harmonic-oscillator model is not a good approximation in light or weakly-bound systems, although it is nearly satisfactory for heavy systems. The magnetic SO effect is more sensitive to the shell structure than the CM correction. In particular, it leads to a remarkable improvement of the Sn and Pb isotope shifts for the DDME2 functional. The SO effect also produces additional kinks at ^24O and ^68Ni, the latter of which is not observed in experimental data. The two corrections also seemingly improve the agreement with the measured charge radii in the very light He and C isotopes. Although the beyond-mean-field correlations are likely to be important in these lighter systems, it was shown that the present mean-field model roughly follows the trend of the measured charge radii. It would also be interesting to study the effects of the CM correction on other kinds of radii. More detailed analyses, including those of the matter radius and the neutron-skin thickness, will be reported elsewhere. The CM correction affects also the deformation parameters. The correction of the quadrupole moments can be made in a similar way as for the radius since they are quadratic in the coordinates. The corrections of higher moments will be much more complicated because three-body and higher operators arise. However, it is expected that the CM correction is small for the deformation parameters because cancellation of the correction terms would occur among different spatial directions.

We thank Toshio Suzuki for helpful discussions. We acknowledge support from the Basic Science Research Program of the National Research Foundation of Korea (NRF) under Grants No. 2021R1A6A1A03043957 and No. 2020R1A2C3006177.

§ DERIVATION OF CM CORRECTION ON RADIUS
Derivation of Eqs.
The derivation of Eqs. (<ref>) and (<ref>) is given here. In general, the expectation value of the square of an observable Â is given by ⟨Â^2⟩ = Tr[A^2ρ] + (Tr[Aρ])^2 - Tr[Aρ Aρ] - Tr[A^*κ^* Aκ], where ρ and κ are the one-body density matrix and the pairing tensor, respectively, and A is the matrix representation of the operator Â. The first term on the right-hand side is the one-body operator part of Â^2, while the rest is the two-body part. If Â is a time-even operator, ⟨Â^2⟩ = ∑_α v_α^2⟨α|A^2|α⟩ + (∑_α v_α^2⟨α|A|α⟩)^2 - ∑_αβ(v_α^2 v_β^2 - u_α v_α u_β v_β) |⟨α|A|β⟩|^2, where v_α and u_α are the canonical occupation amplitudes. Note that the summations run pairwise over the time-reversal partner states. If Â is time-odd, on the other hand, ⟨Â^2⟩ = ∑_α v_α^2⟨α|A^2|α⟩ - ∑_αβ(v_α^2 v_β^2 + u_α v_α u_β v_β) |⟨α|A|β⟩|^2. Note the opposite signs of the last terms in Eqs. (<ref>) and (<ref>). Eq. (<ref>) applies to the expectation value of the center-of-mass kinetic energy <cit.>. The proton squared radius with the CM correction is given by ⟨∑_i∈ p(r_i - R_G)^2⟩ = Z⟨r^2⟩_p - (2/A)⟨(∑_i∈ p r_i)^2⟩ + (1/A)⟨(∑_i=1^A r_i)^2⟩, where R_G = (1/A)∑_i=1^A r_i. The second and third terms, which are the CM correction terms, can be computed by (<ref>) to obtain Eqs. (<ref>) and (<ref>). § HARMONIC-OSCILLATOR MODEL In this appendix, we give an analytic estimate, similar to the one in Ref. <cit.>, of the charge radius and the CM correction terms with a harmonic-oscillator (HO) model, and compare them with the experimental data and the RHB results. A connection of our approach with an approximate projection method <cit.> is also demonstrated at the end. Let us consider particles with ν intrinsic degrees of freedom (spin and/or isospin) filling HO shells up to the one with N̅ quanta. The total number of particles N_p is given by N_p = ∑_n=0^N̅ (ν/2)(n+1)(n+2) = (ν/6)(N̅+1)(N̅+2)(N̅+3). The squared radius within the HO model is given by ∑_α v_α^2⟨α|r^2|α⟩ = (ħ/mω)(ν/8)(N̅+1)(N̅+2)^2(N̅+3) = (3/4)(ħ/mω) N_p(N̅+2), where ħ/mω is the squared oscillator length, which will be determined later. For the CM2 term, we need to compute ∑_αβ v_α^2 v_β^2 |⟨α|r|β⟩|^2. Notice that we neglect the uvuv term coming from the pairing tensor since it is only effective near the Fermi surface and much smaller than the v^2v^2 term, which is a bulk effect. Using the HO matrix element of r, one obtains ∑_αβ v_α^2 v_β^2 |⟨α|r|β⟩|^2 = (ν/8)(ħ/mω) N̅(N̅+1)(N̅+2)(N̅+3) = (3/4)(ħ/mω) N_p N̅. The real solution of the algebraic equation (<ref>) is N̅+2 = f_ν(N_p)^1/3 + (1/3) f_ν(N_p)^-1/3, where f_ν(N_p) = √((3N_p/ν)^2 - 1/27) + 3N_p/ν. It follows from Eqs. (<ref>), (<ref>), and (<ref>) that ∑_α v_α^2⟨α|r^2|α⟩ = (3/4)(ħ/mω) N_p[f_ν(N_p)^1/3 + (1/3) f_ν(N_p)^-1/3], and ∑_αβ v_α^2 v_β^2 |⟨α|r|β⟩|^2 = (3/4)(ħ/mω) N_p[f_ν(N_p)^1/3 + (1/3) f_ν(N_p)^-1/3 - 2]. Notice that these two expressions have the same limiting value for N̅→∞. The neutron, proton, and matter MS radii are then given by ⟨r^2⟩_n = (3/4)(ħ/mω_n)[f_2(N)^1/3 + (1/3) f_2(N)^-1/3], ⟨r^2⟩_p = (3/4)(ħ/mω_p)[f_2(Z)^1/3 + (1/3) f_2(Z)^-1/3], and ⟨r^2⟩_m = (1/A)(N⟨r^2⟩_n + Z⟨r^2⟩_p), respectively. Here we allow the oscillator parameter to differ between neutrons and protons. The CM1 term is given by substituting the above expressions into Eq. (<ref>), and the CM2 term is given as Δ^(CM2)_p = -(3/4)(ħ/mω_p)(Z/A^2)(1-2A/Z)×[f_2(Z)^1/3 + (1/3) f_2(Z)^-1/3 - 2] - (3/4)(ħ/mω_n)(N/A^2)×[f_2(N)^1/3 + (1/3) f_2(N)^-1/3 - 2]. Note that we treat neutrons and protons separately and do not set N=Z=A/2 as is normally done in estimations of this kind <cit.>.
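A quick numerical check of this closed form is straightforward; the short Python sketch below (illustrative only, with function names chosen here for convenience) evaluates f_ν(N_p), recovers N̅, and compares the mean-square-radius factor, in units of the squared oscillator length ħ/mω, against a direct shell-filling sum for a closed-shell case.

import math

def f_nu(Np, nu=2):
    # f_nu(N_p) = sqrt((3 N_p / nu)^2 - 1/27) + 3 N_p / nu
    x = 3.0 * Np / nu
    return math.sqrt(x**2 - 1.0 / 27.0) + x

def Nbar_closed_form(Np, nu=2):
    # N-bar + 2 = f^(1/3) + (1/3) f^(-1/3)
    f = f_nu(Np, nu)
    return f**(1.0 / 3.0) + (1.0 / 3.0) * f**(-1.0 / 3.0) - 2.0

def msr_factor_closed_form(Np, nu=2):
    # dimensionless factor multiplying hbar/(m omega) in <r^2> summed over particles:
    # (3/4) N_p [ f^(1/3) + (1/3) f^(-1/3) ]
    f = f_nu(Np, nu)
    return 0.75 * Np * (f**(1.0 / 3.0) + (1.0 / 3.0) * f**(-1.0 / 3.0))

def msr_factor_shell_filling(Nbar, nu=2):
    # direct sum over filled shells: nu/2 (n+1)(n+2) particles in shell n,
    # each contributing (n + 3/2) hbar/(m omega) to the summed <r^2>
    return sum(0.5 * nu * (n + 1) * (n + 2) * (n + 1.5) for n in range(Nbar + 1))

if __name__ == "__main__":
    Np = 40                               # closed HO shell for nu = 2 (2+6+12+20), i.e. N-bar = 3
    print(Nbar_closed_form(Np))           # ~3.0
    print(msr_factor_closed_form(Np))     # ~150.0
    print(msr_factor_shell_filling(3))    # 150.0 exactly

For the closed shell N_p = 40 with ν = 2 (so N̅ = 3), both routes give the factor 150, consistent with (3/4)N_p(N̅+2).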
We have made no approximation so far within the HO model. Now we make the only ansatz, for the oscillator parameter ħ/mω that remains yet to be determined, (3/4)(ħ/mω_n) = (3/4)(ħ/mω_p) = (2/3)^1/3 (3/5) r_0^2 A^1/3, with r_0 ≈ 1.2 fm. This corresponds to approximating the oscillator frequency by ħω ≈ 41 A^-1/3 MeV <cit.>. One could also consider (N,Z)-dependent oscillators that differ between neutrons and protons, but we take the simplest assumption with a single parameter r_0. Under this ansatz, the total CM correction simplifies to Δ_p^(CM1) + Δ_p^(CM2) = -(3/4)(ħ/mω)(2/A) = -(2/3)^1/3 (6/5) r_0^2 A^-2/3, which coincides with the expression for the CM correction adopted in the TM1 parametrization <cit.>. In Fig. <ref> is shown the HO-model estimate of the charge radius in comparison with experimental data. The estimate is made by substituting the HO-model values of ⟨r^2⟩_p and Δ_p^(CMi) (i=1,2) into Eq. (<ref>), but without the ⟨r^2⟩_κ and the constant terms. We take r_0 = 1.23 fm, fitted to the measured charge radii of the Pb and Sn isotopes. One can see that the HO model with a single parameter r_0 reproduces the measured charge radii reasonably well from light to heavy nuclei. In particular, the present HO model closely follows the deviation of the measured values from the simple empirical formula R = r_0 A^1/3. Although the model does not take into account the Coulomb effect, shell effect, deformation, etc., it captures the rough (N,Z) dependence of the radius. Using the same value of r_0 adjusted to the measured charge radii, we also compare the HO model with the RHB results. In Fig. <ref>, we show the comparison of Δ_p^(CM1) and Δ_p^(CM2) between the RHB calculations and the HO estimates. It is found that the HO model gives only qualitative estimates for the H and O isotopes, while the agreement is nearly satisfactory for the Ca, Sn, and Pb isotopes. There are two reasons for the discrepancies in the light isotopes. First, the enhancement of the radius by the weakly bound nucleons in near-dripline nuclei is not taken into account in the HO model, as discussed in Sec. <ref>. Second, the simple assumption of ħω ≈ 41 A^-1/3 MeV may not be good for the very light nuclei. The CM correction for the kinetic energy can also be computed in the HO model as ⟨P_CM^2⟩ ≈ ∑_α v_α^2⟨α|p^2|α⟩ - ∑_αβ v_α^2 v_β^2 |⟨α|p|β⟩|^2 = (3/4)ħ^2 (mω/ħ)·2A. From Eqs. (<ref>) and (<ref>), one finds the approximate relationship of the CM correction between the MS charge radius and the kinetic energy, Δ_p^(CM1) + Δ_p^(CM2) = -9ħ^2/(4⟨P_CM^2⟩). This expression is consistent with the CM correction adopted in Ref. <cit.> with an approximate projection method <cit.>, which gives an additional factor of exp(3ħ^2 q^2/(8⟨P_CM^2⟩)) to the nuclear charge form factor. Ang13 I. Angeli and K.P. Marinova, “Table of experimental nuclear ground state charge radii: An update”, https://www.sciencedirect.com/science/article/pii/S0092640X12000265 Atomic Data and Nuclear Data Tables 99, 69 (2013). Li21 Tao Li, Yani Luo, and Ning Wang, “Compilation of recent nuclear ground state charge radius measurements and tests for models”, https://www.sciencedirect.com/science/article/pii/S0092640X21000267 Atomic Data and Nuclear Data Tables 140, 101440 (2021). Sommer22 Felix Sommer et al., “Charge Radii of ^55,56Ni Reveal a Surprisingly Similar Behavior at N=28 in Ca and Ni Isotopes”, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.129.132501 Phys. Rev. Lett. 129, 132501 (2022). MaEt22 S.
Malbrunot-Ettenauer et al.,“Nuclear Charge Radii of the Nickel Isotopes ^58`-68,70Ni”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.128.022502 Phys. Rev. Lett. 128, 022502 (2022).Pineda21 Skyy V. Pineda, Kristian König, Dominic M. Rossi, B. Alex Brown, Anthony Incorvati, Jeremy Lantis, Kei Minamisono, Wilfried Nörtershäuser, Jorge Piekarewicz, Robert Powel, and Felix Sommer,“Charge Radius of Neutron-Deficient ^54Ni and Symmetry Energy Constraints Using the Difference in Mirror Pair Charge Radii”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.182503 Phys. Rev. Lett. 127, 182503 (2021).Kuhl77 T. Kühl, P. Dabkiewicz, C. Duke, H. Fischer, H. -J. Kluge, H. Kremmling, and E. -W. Otten,“Nuclear Shape Staggering in Very Neutron-Deficient Hg Isotopes Detected by Laser Spectroscopy”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.39.180 Phys. Rev. Lett. 39, 180 (1977). Ans86 M. Anselment, W. Faubel, S. Göring, A. Hanser, G. Meisel, H. Rebel, G. Schatz,“The odd-even staggering of the nuclear charge radii of Pb isotopes”,https://www.sciencedirect.com/science/article/abs/pii/0375947486900710 Nucl. Phys. A451, 471 (1986).Mar18 B. A. Marsh et al.,“Characterization of the shape-staggering effect in mercury nuclei”,https://www.nature.com/articles/s41567-018-0292-8 Nat. Phys. 14, 1163 (2018).Mi19N A. J. Miller, K. Minamisono, A. Klose, D. Garand, C. Kujawa, J. D. Lantis, Y. Liu, B. Maaß, P. F. Mantica, W. Nazarewicz, W. Nörtershäuser, S. V. Pineda, P.-G. Reinhard, D. M. Rossi, F. Sommer, C. Sumithrarachchi, A. Teigelhöfer, and J. Watkins,“Proton superfluidity and charge radii in proton-rich calcium isotopes”,https://www.nature.com/articles/s41567-019-0416-9 Nat. Phys. 15, 432 (2019).Gro20 R. P. de Groote et al.,“Measurement and microscopic description of odd-even staggering of charge radii of exotic copper isotopes”,https://www.nature.com/articles/s41567-020-0868-y Nat. Phys. 16, 620 (2020).Good21 T. Day Goodacre et al.,“Laser Spectroscopy of Neutron-Rich ^207,208Hg Isotopes: Illuminating the Kink and Odd-Even Staggering in Charge Radii across the N=126 Shell Closure” https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.032502 Phys. Rev. Lett. 126, 032502 (2021).Kos21 Á. Koszorús “Charge radii of exotic potassium isotopes challenge nuclear theory and the magic character of N=32”,https://www.nature.com/articles/s41567-020-01136-5 Nat. Phys. 17, 439 (2021). Bar21 A. Barzakh et al.,“Large Shape Staggering in Neutron-Deficient Bi Isotopes” https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.192501 Phys. Rev. Lett. 127, 192501 (2021).Cubiss23 J. G. Cubiss et al.,“Deformation versus Sphericity in the Ground States of the Lightest Gold Isotopes”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.131.202501 Phys. Rev. Lett. 131, 202501 (2023).Nakada19 H. Nakada,“Irregularities in nuclear radii at magic numbers”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.100.044310 Phys. Rev. C 100, 044310 (2019).PeAf23 U. C. Perera and A. V. Afanasjev,“Differential charge radii: Proton-neutron interaction effects”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.107.064321 Phys. Rev. C 107, 064321 (2023).LNDS24 N. Liliani, A.M. Nugraha, J.P. Diningrum, A. Sulaksono,“Tensor and isovector-isoscalar terms of relativistic mean field model: Impacts on neutron-skin thickness, charge radius, and nuclear matter”,https://www.sciencedirect.com/science/article/pii/S0375947423002166 Nucl. Phys. A1042, 122812 (2024).ReNa17 P.-G. Reinhard and W. 
Nazarewicz, “Toward a global description of nuclear charge radii: Exploring the Fayans energy density functional”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.95.064328 Phys. Rev. C 95, 064328 (2017). Mun23 Myeong-Hwan Mun, Seonghyun Kim, Myung-Ki Cheoun, W.Y. So, Soonchul Choi, and Eunja Ha,“Odd-even shape staggering and kink structure of charge radii of Hg isotopes by the deformed relativistic Hartree-Bogoliubov theory in continuum” https://www.sciencedirect.com/science/article/pii/S0370269323006329 Phys. Lett. B847, 138298 (2023). KERM08 P. Klüpfel, J. Erler, P.-G. Reinhard, and J.A. Maruhn,“Systematics of collective correlation energies from self-consistent mean-field calculations”,https://link.springer.com/article/10.1140/epja/i2008-10633-3 Eur. Phys. J. A 37, 343 (2008).Ko22 Markus Kortelainen, Zhonghao Sun, Gaute Hagen, Witold Nazarewicz, Thomas Papenbrock, and Paul-Gerhard Reinhard,“Universal trend of charge radii of even-even Ca-Zn nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.105.L021303 Phys. Rev. C 105, L021303 (2022).Br17 B. Alex Brown,“Mirror Charge Radii and the Neutron Equation of State”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.122502 Phys. Rev. Lett. 119, 122502 (2017).Br20 B. A. Brown, K. Minamisono, J. Piekarewicz, H. Hergert, D. Garand, A. Klose, K. König, J. D. Lantis, Y. Liu, B. Maaß, A. J. Miller, W. Nörtershäuser, S. V. Pineda, R. C. Powel, D. M. Rossi, F. Sommer, C. Sumithrarachchi, A. Teigelhöfer, J. Watkins, and R. Wirth,“Implications of the ^36Ca-^36S and ^38Ca-^38Ar difference in mirror charge radii on the neutron matter equation of state”,https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.022035 Phys. Rev. Research 2, 022035(R) (2020).ReNa22 Paul-Gerhard Reinhard and Witold Nazarewicz,“Information content of the differences in the charge radii of mirror nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.105.L021301 Phys. Rev. C 105, L021301 (2022). NaMa22 Tomoya Naito, Xavier Roca-Maza, Gianluca Colò, Haozhao Liang, and Hiroyuki Sagawa,“Isospin symmetry breaking in the charge radius difference of mirror nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.106.L061306 Phys. Rev. C 106, L061306 (2022).HLN23 Y. N. Huang, Z. Z. Li, and Y. F. Niu,“Correlation between the difference of charge radii in mirror nuclei and the slope parameter of the symmetry energy”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.107.034319 Phys. Rev. C 107, 034319 (2023).Ne82 J. W. Negele,“The mean-field theory of nuclear structure and dynamics”,https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.54.913 Rev. Mod. Phys. 54, 913 (1982). Re89 P.-G. Reinhard,“The relativistic mean-field description of nuclei and nuclear dynamics”,https://iopscience.iop.org/article/10.1088/0034-4885/52/4/002 Rep. Prog. Phys. 52, 439 (1989).BRM03 Michael Bender, Paul-Henri Heenen, and Paul-Gerhard Reinhard,“Self-consistent mean-field models for nuclear structure”,https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.121 Rev. Mod. Phys. 75, 121 (2003).RS80 P. Ring and P. Schuck, “The Nuclear Many-Body Problems”,(Springer-Verlag Berlin Heidelberg New York, 1980).BR86 Jean-Paul Blaizot and Georges Ripka,“Quantum Theory of Finite Systems”,(MIT Press, 1986). ScRe91 K.W. Schmid, P.-G. Reinhard,“Center-of-mass projection of Skyrme-Hartree-Fock densities”,https://www.sciencedirect.com/science/article/abs/pii/037594749190804F Nucl. Phys. A530, 283 (1991).BRRM00 M. Bender, K. Rutz, P.-G. Reinhard, and J.A. 
Maruhn,“Consequences of the center-of-mass correction in nuclear mean-field models”,https://link.springer.com/article/10.1007/PL00013645 Eur. Phys. J. A 7, 467 (2000).Co23 Philippe Da Costa, Karim Bennaceur, Jacques Meyer, Wouter Ryssens, Michael Bender,“On the impact of the scheme for center-of-mass correction on the surface energy of Skyrme Energy Density Functionals”,https://arxiv.org/abs/2310.05090 arXiv:2310.05090 [nucl-th] (2023).STV03 V. B. Soubbotin, V. I. Tselyaev, and X. Viñas,“Quasilocal density functional theory and its application within the extended Thomas-Fermi approximation”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.67.014324 Phys. Rev. C 67, 014324 (2003).TM1 Y. Sugahara and H. Toki,“Relativistic mean-field theory for unstable nuclei with non-linear σ and ω terms”,https://www.sciencedirect.com/science/article/abs/pii/0375947494909237 Nucl. Phys. A579, 557 (1994). BSM83 M.N. Butler, D.W.L. Sprung, and J. Martorell,https://www.sciencedirect.com/science/article/abs/pii/0375947484904354 Nucl. Phys. A422, 157 (1983). PK1 Wenhui Long, Jie Meng, Nguyen Van Giai, and Shan-Gui Zhou,“New effective interactions in relativistic mean field theory with nonlinear terms and density-dependent meson-nucleon coupling”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.69.034319 Phys. Rev. C 69, 034319 (2004). SkM* J. Bartel, P. Quentin, M. Brack, C. Guet, and H.-B. Håkansson,“Towards a better parametrisation of Skyrme-like effective forces: A critical study of the SkM force”,https://www.sciencedirect.com/science/article/abs/pii/0375947482904031 Nucl. Phys. A386, 79 (1982).TyWo99 S. Typel and H.H. Wolter,“Relativistic mean field calculations with density-dependent meson-nucleon coupling”,https://www.sciencedirect.com/science/article/pii/S0375947499003103 Nucl. Phys. A656, 331 (1999).ReNa21 Paul-Gerhard Reinhard and Witold Nazarewicz,“Nuclear charge densities in spherical and deformed nuclei: Toward precise calculations of charge radii”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.103.054310 Phys. Rev. C 103, 054310 (2021). BFST88 D. Berdichevsky, R. Fleming, D. W. L. Sprung, and F. Tondeur,“Charge and mass radii of the tin isotopes”,https://link.springer.com/article/10.1007/BF01294344 Z. Phys. A329, 393 (1988).MiHe99 Bogdan Mihaila and Jochen H. Heisenberg,“Center-of-mass corrections reexamined: A many-body expansion approach”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.60.054303 Phys. Rev. C 60, 054303 (1999). HPD09 G. Hagen, T. Papenbrock, and D. J. Dean,“Solution of the Center-Of-Mass Problem in Nuclear Structure Calculations”,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.103.062503 Phys. Rev. Lett. 103, 062503 (2009).HoPi12 C. J. Horowitz and J. Piekarewicz,“Impact of spin-orbit currents on the electroweak skin of neutron-rich nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.86.045503 Phys. Rev. C 86, 045503 (2012)KuSu00 Haruki Kurasawa and Toshio Suzuki,“Effects of the neutron spin-orbit density on the nuclear charge density in relativistic models”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.62.054303 Phys. Rev. C 62, 054303 (2000).NOSW23 Tomoya Naito, Tomohiro Oishi, Hiroyuki Sagawa, and Zhiheng Wang,“Comparative study on charge radii and their kinks at magic numbers”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.107.054307 Phys. Rev. C 107, 054307 (2023).Ber72 W. Bertozzi, J. Friar, J. Heisenberg and J.W. 
Negele,“Contributions of neutrons to elastic electron scattering from nuclei”,https://www.sciencedirect.com/science/article/pii/0370269372906624 Phys. Lett. B41, 408, (1972).Ong10 A. Ong, J. C. Berengut, and V. V. Flambaum,“Effect of spin-orbit nuclear charge density corrections due to the anomalous magnetic moment on halonuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.82.014320 Phys. Rev. C 82, 014320 (2010).KuSu19 Haruki Kurasawa and Toshio Suzuki,“The nth-order moment of the nuclear charge density and contribution from the neutrons”,https://doi.org/10.1093/ptep/ptz121 Prog. Theor. Expt. Phys. 2019, 113D01 (2019).ddme2 G. A. Lalazissis, T. Nikšić, D. Vretenar, and P. Ring,“New relativistic mean-field interaction with density-dependent meson-nucleon couplings”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.71.024312 Phys. Rev. C 71, 024312 (2005).BGG91 J.F. Berger, M. Girod and D. Gogny,“Time-dependent quantum collective dynamics applied to nuclear fission”,https://www.sciencedirect.com/science/article/pii/001046559190263K Comput. Phys. Comm. 63, 365 (1991).YGB19 Walid Younes, Daniel Marc Gogny, and Jean-François Berger,“A Microscopic Theory of Fission Dynamics Based on the Generator Coordinate Method”,https://link.springer.com/book/10.1007/978-3-030-04424-4 (Springer Nature Switzerland AG 2019). ddme1 T. Nikšić, D. Vretenar, P. Finelli, and P. Ring,“Relativistic Hartree-Bogoliubov model with density-dependent meson-nucleon couplings”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.66.024306 Phys. Rev. C 66, 024306 (2002).dirhb T. Nikšić, N. Paar, D. Vretenar, P. Ring,“DIRHB–A relativistic self-consistent mean-field framework for atomic nuclei”,https://www.sciencedirect.com/science/article/pii/S0010465514000836 Comput. Phys. Comm. 185, 1808 (2014).LVPP95 G.A. Lalazissis, D. Vretenar, W. Pöschl, P. Ring,“Relativistic Hartree-Bogoliubov description of the neutron drip-line in light nuclei”,https://www.sciencedirect.com/science/article/pii/S0375947498000098 Nucl. Phys. A632, 363 (1998).SeRi02 M. Serra and P. Ring,“Relativistic Hartree-Bogoliubov theory for finite nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.65.064324 Phys. Rev. C 65, 064324 (2002).KuRi91 H. Kucharek and P. Ring,“Relativistic field theory of superfluidity in nuclei” https://link.springer.com/article/10.1007/BF01282930 Z. Phys. A339, 23 (1991).AARR14 S. E. Agbemava, A. V. Afanasjev, D. Ray, and P. Ring,“Global performance of covariant energy density functionals: Ground state observables of even-even nuclei and the estimate of theoretical uncertainties”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.89.054320 Phys. Rev. C 89, 054320 (2014).PeAfRi21 U. C. Perera, A. V. Afanasjev, and P. Ring,“Charge radii in covariant density functional theory: A global view”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.104.064313 Phys. Rev. C 104, 064313 (2021). XiLi23 Hui Hui Xie and Jian Li,“Impact of Intrinsic Electromagnetic Structure on Nuclear Charge Radius in Relativistic continuum Hartree-Bogoliubov Theory”,https://arxiv.org/abs/2308.02309v1 arXiv:2308.02309v1 [nucl-th] (2023).Ke02 James J. Kelly,“Nucleon charge and magnetization densities from Sachs form factors”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.66.065203 Phys. Rev. C 66, 065203 (2002).Ke04 J. J. Kelly,“Simple parametrization of nucleon form factors”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.70.068202 Phys. Rev. C 70, 068202 (2004).ESW60 F. J. Ernst, R. G. Sachs, and K. C. 
Wali,“Electromagnetic Form Factors of the Nucleon”,https://journals.aps.org/pr/abstract/10.1103/PhysRev.119.1105 Phys. Rev. 119, 1105 (1960).PPV07 C.F. Perdrisat, V. Punjabi, and M. Vanderhaeghen,“Nucleon electromagnetic form factors”,https://www.sciencedirect.com/science/article/pii/S0146641007000610 Prog. Part. Nucl. Phys. 59, 694 (2007).PS95 Michael E. Peskin and Daniel V. Schroeder,“An introduction to quantum field theory”,(Perseus Books Publishing L.L.C., 1995).GeCr11 T. R. Gentile and C. B. Crawford,“Neutron charge radius and the neutron electric form factor”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.83.055203 Phys. Rev. C 83, 055203 (2011).Hi16 Douglas W. Higinbotham, Al Amin Kabir, Vincent Lin, David Meekins, Blaine Norum, and Brad Sawatzky,“Proton radius from electron scattering data”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.93.055207 Phys. Rev. C 93, 055207 (2016).Mi19 Gerald A. Miller,“Defining the proton radius: A unified treatment”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.99.035202 Phys. Rev. C 99, 035202 (2019).pdg22 Particle Data Group,“Review of Particle Physics”,https://doi.org/10.1093/ptep/ptac097 Prog. Theor. Expt. Phys., 2022, 083C01 (2022).Bano23 P. Bano, S. P. Pattnaik, M. Centelles, X. Viñas, and T. R. Routray,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.108.015802 Phys. Rev. C 108, 015802 (2023).VB72 D. Vautherin and D. M. Brink,“Hartree-Fock Calculations with Skyrme's Interaction. I. Spherical Nuclei”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.5.626 Phys. Rev. C 5, 626 (1972). DRHBc22 Kaiyuan Zhang et al.,“Nuclear mass table in deformed relativistic Hartree-Bogoliubov theory in continuum, I: Even-even nuclei”,https://www.sciencedirect.com/science/article/pii/S0092640X22000018 Atomic Data and Nuclear Data Tables 144, 101488 (2022).ddmed X. Roca-Maza, X. Viñas, M. Centelles, P. Ring, and P. Schuck,“Relativistic mean-field interaction with density-dependent meson-nucleon vertices based on microscopical calculations”,https://journals.aps.org/prc/abstract/10.1103/PhysRevC.84.054309 Phys. Rev. C 84, 054309 (2011).Hen17 Or Hen, Gerald A. Miller, Eli Piasetzky, and Lawrence B. Weinstein,“Nucleon-nucleon correlations, short-lived excitations, and the quarks within”,https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.89.045002 Rev. Mod. Phys. 89, 045002 (2017).
http://arxiv.org/abs/2312.15983v1
{ "authors": [ "Yusuke Tanimura", "Myung-Ki Cheoun" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20231226102734", "title": "Effects of center-of-mass correction and nucleon anomalous magnetic moments on nuclear charge radii" }
Gromov-Hausdorff propinquityand Christensen-Ivan quantum metrics]quantum Gromov-Hausdorff propinquity convergence of Christensen-Ivan quantum metrics on AF algebrasDepartment of Mathematics and Statistics, Pomona College, 610 N. College Ave., Claremont, CA [email protected] <https://aguilar.sites.pomona.edu> The second author is supported by NSF grant DMS-2316892 [2000]Primary:46L89, 46L30, 58B34. We provide convergence in the quantum Gromov-Hausdorff propinquity of Latrémolière of some sequences of infinite-dimensional Leibniz compact quantum metric spaces of Rieffel given by AF algebras and Christensen-Ivan spectral spaces. The main examples are convergence of Effros-Shen algebras and UHF algebras.[ Chloe Marple January 14, 2024 ==================== § INTRODUCTION AND BACKGROUND The first example of convergence of sequences of infinite-dimensional quantum metric spaces was established by Rieffel <cit.>, where he showed that the quantum tori converged with respect their parameters that defined their anti-commutation relation. This was accomplished by the introduction of the theory of quantum metric spaces and a noncommutative analogue to the Gromov-Hausdorff distance both introduce by Rieffel in <cit.>, respectively. This introduced an new field of study known as noncommutative metric geometry, which has its roots from work of Connes in <cit.> and Gromov <cit.>.Since the introduction of Rieffel's noncommutative analogue to the Gromov-Hausdorff distance, there has been much progress in developing noncommutative analogues of the Gromov-Hausdorff distance to capture the C*-algebraic structure of the quantum metric space <cit.>. In particular, in <cit.>,Latrémolière proved convergence of the quantum tori in this stronger sense. Moreoever,the Gromov-Hausdorff propinquity of Latrémolière first introduced in <cit.> has been adapted to capture more structure such as module structure <cit.> and spectral triple structure <cit.>.Another example of convergence of infinite-dimensional quantum metric spaces appeared in <cit.>, where it was shown that the Effros-Shen algebras <cit.> are continuous in Gromov-Hausdorff propinquity with respect to their natural parameter space of irrationals in (0,1) with the usuual topology. It was also show that UHF algebras are continuous with respect to their natural parameter space of multiplicity sequences metrized by the Baire space. This was accomplished by introducing new quantum metrics on AF algebras equipped with faithful tracial state motivated by work of Christensen and Ivan in <cit.>. Now, in <cit.>, Christensen and Ivan did introduce quantum metrics on these infinite-dimensional algebras, but at the time it wasn't clear how to provide convergence of these algebras in any noncommutative analogue to the Gromov-Hausdorff distance, which is one reason why the quantum metrics of <cit.> were introduced which are not defined using spectral triples. But, in an effort, to bring the realms of noncommutative geometry and noncommutative metric geometry closer, it is important to provide the convergence results of these infinite-dimensional algebras of <cit.> with the quantum metrics induced by the spectral triples of <cit.>, which is exactly what is accomplished in this article. The main hurdles to overcome in proving this arise from two issues that were circumvented by the quantum metrics introduced in <cit.>. 
First, the spectral triples of <cit.> are constructed using equivalence constants which are only provided by existence and not explicitly given, which cause an issue when providing continuous fields of L-seminorms as it is difficult to control these non-explicit constants. Second, providing continuous fields of L-seminorms provided by faithful tracial states is difficult when relying on convergence in various operator normsgiven by different GNS represenations for each spectral triple rather than a fixed C*-norm. The first issue is overcome by an application of <cit.>, and the second issue is overcome by a generalization of <cit.>. Both of these issues are overcome in Section <ref>, and we apply these results in the last section to provide convergence of these infinite-dimensional algebras using quantum metrics induced by Christensen-Ivan spectral triples. We only define what we mean by a Leibniz compact quantum metric space as things can get quite overwhelming as more definitions are provided, but for references regarding quantum metric spaces, propinquity and propinquity in the context of AF algebras see <cit.>.<cit.> Letbe a unital C*-algebra with norm ·_ and unit 1_. Let L:→ [0,∞) be a seminnorm (possibly taking value ∞) such that dom(L)={ a∈ : L(a) <∞} is a dense *-subalgebra of . If * L(a)=L(a^*) for every a ∈,* {a ∈ : L(a)=0}=1_,* L(ab)≤a_ L(b) +b_ L(a) for every a,b ∈,* the metric on the state space S()ofdefined for every ϕ, ψ∈ S() bymk_L(ϕ, ψ)=sup{|ϕ(a)-ψ(a)| : a ∈, L(a) ≤ 1}metrizes the weak* topology,then we call L an L-seminorm and (, L) a Leibniz compact quantum metric space.§ FINITE-DIMENSIONAL APPROXIMATIONS AND ASSOCIATED CONTINUOUS FIELDS OF L-SEMINORMSIn what follows, we use various results from the beginning of <cit.> with some slightly different notation. =∪_n ∈ A_n^·_A be a unital AF algebra, where _0=1_ equipped with a faithful tracial state τ. Let H_τ denote the associated GNS Hilbert space with inner product defined for every a,b ∈ H_τ by⟨ a,b⟩_τ =τ(b^*a) and associated norm a_τ=√(⟨ a,a⟩_τ).Since τ is faithful, we can canonically viewas a subspace (not necessarily closed) of H_τ. Letπ_τ : ⟶ B(H_τ)be the associated GNS representation such that π_τ(a)(b)=ab for every a,b ∈. Let n ∈, since _n is finite dimensional, we have that _n is a closed subspace of H_τ. LetP^τ_n: H_τ→_n denote the orthogonal projection of H_τ onto _n and define Q^τ_n=P^τ_n-P^τ_n-1. LetE^τ_n: →_n denote the restriction of P^τ_n to , and by <cit.>, we have that E^τ_n is the unique τ-preserving conditional expectation onto _n. Next, since _n+1 is finite dimensional, there exists a sharp c^τ_n+1>0 such that a_≤c^τ_n+1·a_τfor every a∈_n+1.Note that c^τ_n+1≥ 1 since ·_τ≤·_ on . We now prove a crucial fact about these constants.Let (τ^n)_n ∈ be a sequence of faithful tracial states onand let τ be a faithful tracial state on . If (τ^n)_n ∈ converges to τ in the weak* topology, then for every N ∈, the sequence (c^τ^n_N)_n ∈ converges to c^τ_N in the usual topology on . This is just <cit.> applied to <cit.> since norm ·_τ is a Frobenius-Rieffel norm. Let (β(n))_n ∈ be a summable sequence of positive reals.Set a^τ_β, n+1=c^τ_n+1/β_n+1.Next, we state a main result from <cit.>.<cit.>Let (β(n))_n ∈ be a summable sequence of positive reals. Using the above setting, we have thatD^τ_β=∑_n=1^∞ a^τ_β,n Q^τ_n defines an unbounded self-adjoint operator on H_τ. 
Furthermore, if we defineL^τ_β(a)=[D^τ_β, π_τ(a)]_B(H_τ) for every a∈ such that [D^τ_β, π_τ(a)] extends to a bounded operator on H_τ denoted by [D^τ_β, π_τ(a)], and set L^τ_β(a)=∞ if not, then (, L^τ_β) is a Leibniz compact quantum metric space, and for every n∈, (_n, L^τ_β) is a Leibniz compact quantum metric space such thatdom(L^τ_β(a))∩_n=_n. The following fact is stated after<cit.>, but we provide a proof here. Using the setting of Theorem <ref>, we have for every n ∈ and for every a ∈_nthat L^τ_β(a)=[D^τ_β, π_τ(a)]_B(^τ_n),where _n^τ=_n but for B(^τ_n) we are considering bounded operators with respect to the norm ·_τ on _n. Let n ∈. Let a ∈_n. By definition, we have that [D^τ_β, π_τ(a)]_B(_n)≤ L^τ_β(a).Next, letk ∈{1,2,…, n}. We have since π_τ(a) commutes with P^τ_n by the proof of <cit.> or the proof of Step 1 of <cit.>. Moreover, P_nP_k=P_kP_n=P_k and P_nP_k-1=P_k-1P_n=P_k-1 by construction. ThusP^τ_n[Q^τ_k,π_τ(a)]P^τ_n= P^τ_n(Q^τ_kπ_τ(a)-π_τ(a)Q^τ_k)P^τ_n = P^τ_n((P^τ_k-P^τ_k-1)π_τ(a)-π_τ(a) (P^τ_k-P^τ_k-1))P^τ_n = (P^τ_k-P^τ_k-1)π_τ(a)P^τ_n-P^τ_nπ_τ(a)(P^τ_k-P^τ_k-1) = (P^τ_k-P^τ_k-1)P^τ_nπ_τ(a)-π_τ(a) P^τ_n (P^τ_k-P^τ_k-1) = (P^τ_k-P^τ_k-1)π_τ(a)-π_τ(a) (P^τ_k-P^τ_k-1) = Q^τ_kπ_τ(a)-π_τ(a)Q^τ_k = [Q^τ_k,π_τ(a)]Thus[D^τ_β, π_τ(a)]= ∑_k=1^n a^τ_β, k[Q^τ_k,π_τ(a)] = ∑_k=1^n a^τ_β, kP^τ_n[Q^τ_k,π_τ(a)]P^τ_n =P^τ_n [D^τ_β, π_τ(a)]P^τ_n.Hence, since (P^τ_n)^2=P^τ_n and P^τ_n is contractive with respect to ·_τ and P^τ_n(h) ∈_n for every h∈ H_τ, we have L^τ_β(a)= P^τ_n [D^τ_β, π_τ(a)]P^τ_n _B(H_τ) = sup{ P^τ_n [D^τ_β, π_τ(a)]P^τ_n (h)_τ : h ∈ H_τ, h_τ≤ 1 } =sup{ P^τ_n [D^τ_β, π_τ(a)](P^τ_n)^2 (h)_τ : h ∈ H_τ, h_τ≤ 1 } =sup{ P^τ_n [D^τ_β, π_τ(a)]P^τ_n(P^τ_n (h))_τ : h ∈ H_τ, h_τ≤ 1 } =sup{[D^τ_β, π_τ(a)] (P^τ_n (h))_τ : h ∈ H_τ, h_τ≤ 1 }≤sup{[D^τ_β, π_τ(a)] _τ : h ∈_n, h_τ≤ 1 } = [D^τ_β, π_τ(a)]_B(_n).Therefore [D^τ_β, π_τ(a)]_B(_n)≤ L^τ_β(a)≤[D^τ_β, π_τ(a)]_B(_n) as desired.With this we can provide finite-dimensional approximations, which has been conveniently already proven in<cit.>.<cit.> For every n ∈, it holds that ((, L^τ_β), (_n, L^τ_β))≤∑_k=n^∞β_k,whereis the quantum Gromov-Hausdorff propinquity of <cit.>. The main examples of AF algebras in this article, Effros-Shen algebras and UHF algebras, are given in the setting of inductive limits of finite-dimensional C*-algebras. Thus, we introduce notation to prove results in this setting. Let (_n, α_n)_n ∈ be an inductive sequence of C*-algebras (see <cit.>) such that:* _0= and _n=⊕_k=1^n_n_d_n,k() for all n ∈∖{0}, where d_n,k∈∖{0} for each n ∈∖{0} and k ∈{1, 2, …, n_n};* α_n :_n →_n+1 is a unital *-monomorphism for all n ∈;* the inductive limit =lim (_n, α_n)_n ∈ is equipped with a faithful tracial state τ.For each n ∈, let α^(n) : _n → be the canonical unital *-monomorphism satisfying α^(n+1)∘α_n=α^(n),and if for each k∈{1,2,…, n-1}, we defineα_k,n=α_n∘α_n-1∘⋯α_k,then inductively, we haveα^(n+1)∘α_k,n=α^(k) Note that =∪_n ∈α^(n)(_n)^·_ and α^(n)(_n) ⊆α^(n+1)(_n+1) and α^(0)(_0)=1_ (see <cit.>). So, for each n ∈, set_n=α^(n)(_n).As above, for each n ∈, letE^τ_n :→_ndenote the unique τ-preserving faithful conditional expectation onto _n. For each n ∈, letτ_n=τ∘α^(n),which is a faithful tracial state on_n and let π_τ_n denote the associated GNS representation. Letk ∈{0, 1, …, n+1} letE^τ_n+1_n+1,k: _n+1→α_k,n(_k)be the unique τ_n+1-preserving faithful conditional expectation onto α_k,n(_k). 
DefineQ^τ_n+1_n+1,k=E^τ_n+1_n+1,k-E^τ_n+1_n+1,k-1and let D^τ_n+1_β=∑_k=1^n+1 a^τ_β,kQ^τ_n+1_n+1,k.For every a ∈_n+1, defineL^τ_n+1_β(a)=[D^τ_n+1_β, π_τ_n+1(a)]_B(^τ_n+1_n+1).By finite dimensionality, we have that (_n+1,L^τ_n+1_β )is a Leibniz compact quantum metric space. We can now add to Proposition <ref> in the inductive limit setting. Let n ∈. It holds thatL^τ_β∘α^(n)(a)= sup{[D^τ_β, π_τ(α^(n)(a))](α^(n)(b))_τ: b ∈_n, b_τ_n≤ 1}=L^τ_n_β(a)for every a ∈_n.Let n ∈ and let a ∈_n, then by Proposition <ref> L^τ_β∘α^(n)(a)= L^τ_β(α^(n)(a))= [D^τ_β, π_τ(α^(n)(a))]_B(^τ_n) = sup{[D^τ_β, π_τ(α^(n)(a))](c)_τ: c ∈_n, c_τ≤ 1}. Consider c∈_n, then there exists a unique b ∈_n such that α^(n)(b)=c. We have c_τ^2=τ(c^*c)=τ(α^(n)(b)^*α^(n)(b) )=τ( α^(n)(b^*b))= τ_n(b^*b)=b^2_τ_n. HenceL^τ_β∘α^(n)(a)= sup{[D^τ_β, π_τ(α^(n)(a))](α^(n)(b))_τ: b ∈_n, b_τ_n≤ 1}Let b ∈_n.Let k ∈{0,1,…, n}. Then a similar argument as the beginning of the proof of <cit.> provides E^τ_k∘α^(n) = α^(n)∘ E^τ_n_n,kandE^τ_k-1∘α^(n) = α^(n)∘ E^τ_n_n,k-1 by Expression (<ref>).Next, we have [Q^τ_k,π_τ(α^(n)(a))](α^(n)(b))=((E^τ_k-E^τ_k-1)π_τ(α^(n)(a))-π_τ(α^(n)(a))(E^τ_k-E^τ_k-1))(α^(n)(b))Nowπ_τ(α^(n)(a))(E^τ_k-E^τ_k-1)(α^(n)(b))=π_τ(α^(n)(a))(α^(n)(E^τ_n_n,k(b))-α^(n)(E^τ_n_n,k-1(b)))= α^(n)(a)(α^(n)(E^τ_n_n,k(b))-α^(n)(E^τ_n_n,k-1(b))) = α^(n)(aE^τ_n_n,k(b)-aE^τ_n_n,k-1(b))and similarly(E^τ_k-E^τ_k-1)π_τ(α^(n)(a))(α^(n)(b)) = α^(n)(E^τ_n_n,k(ab)-E^τ_n_n,k-1(ab)).Thus[Q^τ_k,π_τ(α^(n)(a))](α^(n)(b))=α^(n)(E^τ_n_n,k(ab)-E^τ_n_n,k-1(ab)-(aE^τ_n_n,k(b)-aE^τ_n_n,k-1(b))).However, E^τ_n_n,k(ab)-E^τ_n_n,k-1(ab)-(aE^τ_n_n,k(b)-aE^τ_n_n,k-1(b))= E^τ_n_n,k(π_τ_n(a)(b))-E^τ_n_n,k-1(π_τ_n(a)(b)) - (π_τ_n(a)(E^τ_n_n,k(b))-π_τ_n(a)(E^τ_n_n,k-1(b))) = (E^τ_n_n,k -E^τ_n_n,k-1)(π_τ_n(a)(b))- π_τ_n(a) ((E^τ_n_n,k- E^τ_n_n,k-1)(b)) = Q^τ_n_n,k(π_τ_n(a)(b))- π_τ_n(a)(( Q^τ_n_n,k)(b))= (Q^τ_n_n,k(π_τ_n(a))- π_τ_n(a)( Q^τ_n_n,k))(b) =[Q^τ_n_n,k,π_τ_n(a)](b).Hence[D^τ_β, π_τ(α^(n)(a))](α^(n)(b)) =α^(n)([D^τ_n_β,π_τ_n(a) ](b))and so as above[D^τ_β, π_τ(α^(n)(a))](α^(n)(b))_τ=[D^τ_n_β,π_τ_n(a) ](b)_τ_n.ThereforeL^τ_β∘α^(n)(a)= sup{[D^τ_β, π_τ(α^(n)(a))](α^(n)(b))_τ: b ∈_n, b_τ_n≤ 1}=L^τ_n_β(a)of Expression (<ref>) as desired.Now that we have an expression for the L-seminorms on the terms of a given inductive sequence, we would like to show that these form a continuous field of L-seminorms with respect to weak* convergence of the faithful tracial state. However, since the norms defining our L-seminorms are operator norms this takes some care, which is why we need some tools from metric geometry. The following result might be known in metric geometry, but we cannot find a proof and so we provide one here. The following result also serves as a generalization of <cit.>. Let (X, d) bea metric space. Let (C_n)_n ∈ be a sequence of compact subsets of X that converges in the Hausdorff distance with respect to d, Haus_d, to a compact C⊆ X. Let C'⊆ X be a compact set such that C∪ (∪_n ∈ C_n)⊆ C'.Let (f_n)_n ∈ be a sequence of real-valued continuous functions on X and let f:X → be continuous. If (f_n)_n ∈ converges uniformly to fon C', then(sup_x ∈ C_n f_n(x))_n ∈ converges to sup_x ∈ C f(x) in the usual topology on .Let ε>0. By uniform convergence, there exists δ>0 such that for every a,b ∈ C' and n ∈, we have |f_n(a)-f_n(b)|<ε/2.Let N ∈ such that for every n ≥ NHaus_d(C_n,C)<δ/3and |sup_x ∈ Cf_n(x) -sup_x ∈ Cf(x)| < ε/2by <cit.>.Let n ≥ N. By compact, there exists x' ∈ C such that sup_x∈ Cf_n(x)=f_n(x').Now consider sup_x ∈ C_nf_n(x). 
Assume by way of contradiction that |sup_x ∈ Cf_n(x)-sup_x ∈ C_nf_n(x)|>ε/2. Assume first that sup_x ∈ Cf_n(x)-sup_x ∈ C_nf_n(x)>ε/2 f_n(x')-sup_x ∈ C_nf_n(x)>ε/2 sup_x ∈ C_nf_n(x)<f_n(x')-ε/2.Hencef_n(x)<f_n(x')-ε/2for every x ∈ C_n. Now there exists x∈ C_n such that (x,x')<δ by definition of the Hausdorff distance. Hence|f_n(x)-f_n(x')|<ε/2.And so f_n(x')-ε/2<f_n(x)<f_n(x')-ε/2,contradiction.On the other hand, if sup_x ∈ C_nf_n(x)-sup_x ∈ Cf_n(x)>ε/2. Thenε/2<sup_x ∈ C_nf_n(x)-f_n(x').Now, by compact, there exists z∈ C_n such that sup_x ∈ C_nf_n(x)=f_n(z). Hencef_n(x')<f_n(z)-ε/2and sosup_x ∈ Cf_n(x)<f_n(z)-ε/2.Thusf_n(x)<f_n(z)-ε/2for every x ∈ C. This leads to a similar contradiction. Hence,|sup_x ∈ C_nf_n(x) - sup_x ∈ Cf_n(x)|≤ε/2.Finally,|sup_x ∈ C_n f_n(x)-sup_x ∈ C f(x)| ≤ |sup_x ∈ C_n f_n(x)-sup_x ∈ C f_n(x)|+|sup_x ∈ C f_n(x)-sup_x ∈ C f(x)|≤ε/2+|sup_x ∈ C f_n(x)-sup_x ∈ C f(x)|<ε/2+ε/2=ε.Before providing continuous fields of L-seminorms, weneed one more result so that we can satisfy the hypothesis of the previous Lemma.Let =∪{∞}. Letbe a finite-dimensional C*-algebra and let (τ^n)_n ∈ be a sequence of faithful tracial states onsuch that (τ^n)_n ∈ weak* converges to τ_∞. For each n ∈, define C_n={b ∈: b_τ^n≤ 1}.It holds that (C_n)_n ∈ converges to C_∞ in the Hausdorff distance with respect to ·_. Sinceis finite dimensional there exist N ∈, m_1, m_2, …, m_N∈ and a *-isomorphism α: ⊕_k=1^N M_m_k()→ onto . Set ⊕_k=1^N M_n_k()=. For each n ∈, define that σ^n=τ^n∘α. We have that (σ^n)_n ∈ is s sequence of faithful tracial states that weak* converges to σ_∞. Let n ∈, since σ^n is a faithful tracial state there exist μ^n_1, μ^n_2, …, μ^n_N∈ (0, ∞) such that ∑_k=1^N μ^n_k=1 and σ^n((a_1, a_2, …, a_N))=∑_k=1^N μ^n_k/m_kTr(a_k)for every (a_1, a_2, …, a_N).By weak* convergence, we have that ((μ^n_1, μ^n_2, …, μ^n_N))_n ∈ converges to (μ^∞_1, μ^∞_2, …, μ^∞_N) in the product topology on ^N.Define D_n={a ∈: a_σ^n≤ 1}. Let a ∈ D_∞. Now since σ^n is faithful we may definey=(√(μ^∞_1)/√(μ^n_1) a_1, √(μ^∞_2)/√(μ^n_2) a_2 , …, √(μ^∞_N)/√(μ^n_N) a_N).Thusy_σ^n^2 = σ^n(y^*y) = σ^n( μ^∞_1/μ^n_1a_1^*a_1, μ^∞_2/μ^n_2a_2^*a_2, … ,μ^∞_N/μ^n_Na_N^*a_N ) = ∑_k=1^N μ^n_∞/m_kTr(a_k^*a_k) = a_σ^∞^2≤ 1.Thus y ∈ D_n. Next,a-y_= max{a_1-y_1_M_n_1(), a_2-y_2_M_n_2(), …, a_N-y_N_M_n_N()}Consider k ∈{1,2,…, N}. We have that since the operator norm is bounded by the Frobenius norma_k-y_k_M_n_k()=a_k- √(μ^∞_k)/√(μ^n_k) a_k_M_n_k() = |1-√(μ^∞_k)/√(μ^n_k)|·a_M_n_k()≤|1-√(μ^∞_k)/√(μ^n_k)|·√(Tr(a_k^*a_k)).However, as∑_k=1^N μ^∞_k/m_kTr(a_k^*a_k)=a_σ^∞^2≤ 1,we have that μ^n_∞/m_kTr(a_k^*a_k)≤ 1, and so√(Tr(a_k^*a_k))≤√(m_k)/√(μ^∞_k)and thus a_k-y_k_M_n_k()≤|1-√(μ^∞_k)/√(μ^n_k)|·√(m_k)/√(μ^∞_k)Hencea-y_≤max{|1-√(μ^∞_k)/√(μ^n_k)|·√(m_k)/√(μ^∞_k): k ∈{1,2,…, N}}.By a symmetric argument, we have that Haus_·_(D_n, D_∞) ≤max{max{|1-√(μ^∞_k)/√(μ^n_k)|·√(m_k)/√(μ^∞_k): k ∈{1,2,…, N}} ,max{|1-√(μ^n_k)/√(μ^∞_k)|·√(m_k)/√(μ^n_k): k ∈{1,2,…, N}}} by definition of the Hausdorff distance. Thus as (μ^n_k)_n ∈ converges to μ^∞_k for each k ∈{1,2,…, N}, we have that lim_n →∞Haus_·_(D_n, D_∞)=0. By construction of σ^n and since α is a *-isomorphism, the proof is complete.We use these resultsto provide continuous fields of L-seminorms. Let m ∈=∪{∞}.Let (τ^n)_n ∈ be a sequence of faithful tracial states on .If (τ^n_m)_n ∈ of Expression (<ref>) weak* converges to τ^∞_m on _m, then for every a ∈_m, we have (L^τ^n_m_β(a))_n ∈ of Expression (<ref>) converges to L^τ^∞_m_β(a) in the usual topology on . Let a ∈_m. 
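For concreteness, the digits r_n^θ of this expansion can be generated by the standard floor-and-reciprocal recursion, and the convergents p_n^θ/q_n^θ introduced next follow from the usual three-term recursion. The short Python sketch below is illustrative only; the function names are chosen here and are not part of the construction.

import math
from fractions import Fraction

def continued_fraction_digits(theta, n_terms):
    """Return [r_0, r_1, ..., r_n] of the simple continued fraction of theta."""
    digits = []
    x = theta
    for _ in range(n_terms + 1):
        r = math.floor(x)
        digits.append(r)
        frac = x - r
        if frac == 0:          # only for rationals; an irrational never terminates
            break
        x = 1.0 / frac
    return digits

def convergents(digits):
    """Convergents p_n/q_n from p_{n+1} = r_{n+1} p_n + p_{n-1}, q_{n+1} = r_{n+1} q_n + q_{n-1}."""
    p_prev, p = 1, digits[0]
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    for r in digits[1:]:
        p, p_prev = r * p + p_prev, p
        q, q_prev = r * q + q_prev, q
        out.append(Fraction(p, q))
    return out

if __name__ == "__main__":
    theta = (math.sqrt(5) - 1) / 2        # golden-ratio conjugate
    digits = continued_fraction_digits(theta, 10)
    print(digits)                          # [0, 1, 1, 1, ...] up to floating-point effects
    print([float(c) for c in convergents(digits)][-3:])   # approaches theta

For θ = (√5-1)/2, for example, the digits are r_0^θ = 0 and r_n^θ = 1 for n ≥ 1, and the convergents are ratios of consecutive Fibonacci numbers.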
Let n ∈, definef_n: _m →by f_n(b)=[D^τ^n_m_β,π_τ^n_m(a) ](b)_τ^n_m. Note that f_n is continuous with respect to ·__m by finite dimensionality. DefineC_n={b ∈_n: b_τ^n_m≤ 1},which is compact with respect to ·__m by finite dimensionality. Next, we verify that all the C_n's are contained in one compact set. By finite dimensional, there exists a sharp ν_n>0 such that ·__m≤ν_n··_τ^n_m.By <cit.>, we have that (ν_n)_n ∈ converges to ν_∞. Hence r=sup_n ∈ν_n<∞. Now, let b ∈ C_n, then ·_τ^n_m≤ 1, and sob__m≤ν_n··_τ^n_m≤ν_n ≤ r.Hence b ∈{b ∈_m : b__m≤ r}. Set C'={b ∈_m : b__m≤ r}. We have that C' is compact by finite dimensionality and that C_n ⊆ C' for every n ∈N. Next, by Proposition <ref> and by a similar argument to <cit.>, we have that (f_n)_n ∈ converges uniformly to f_∞ on any compact subset of (_m, ·__m) including C'. Finally, we have that (C_n)_n ∈ converges to C_∞ in the Hausdorff distance with respect to ·__m by weak* convergenceby Proposition <ref>. Therefore by Lemma <ref>, we have that (sup_b ∈ C_nf_n(b))_n∈=(L^τ^n_m_β(a))_n ∈ converges to sup_b ∈ C_∞ f_∞ (b)=L^τ^∞_m_β(a) in the usual topology on .§ CONVERGENCE OF SEQUENCES OF EFFROS-SHEN ALGEBRAS AND UHF ALGEBRAS We will now provide our main convergence results. But first, we need notation for each of these applications. We begin with the Effros-Shen algebras which were first defined in<cit.>.Let θ∈ be irrational. There exists a unique sequence of integers (r^θ_n)_n ∈with r^θ_n>0 for all n ∈∖{0} such thatθ =lim_n →∞ r_0^θ +1r^θ_1 + 1r^θ_2 + 1r^θ_3 +1⋱+1r^θ_n. When θ∈ (0,1), we have that r^θ_0=0. The sequence (r^θ_n)_n ∈_0 is the continued fraction expansion of θ <cit.>.For each n ∈, definep_0^θ=r_0^θ,p_1^θ=1andq_0^θ=1,q_1^θ=r^θ_1,and set p_n+1^θ=r^θ_n+1 p_n^θ+p_n-1^θand q_n+1^θ= r^θ_n+1 q_n^θ+q_n-1^θ.The sequence (p_n^θ/q_n^θ)_n ∈ℕ_0 of convergents p^θ_n/q^θ_n converges to θ. In fact, for each n ∈, p_n^θ/q_n^θ=r_0^θ +1r^θ_1 + 1r^θ_2 + 1r^θ_3 +1⋱+1r^θ_n. We now define the terms for the inductive sequence that form the Effros-Shen algebras. Let _θ,0=ℂ and, for each n ∈ℕ_0, let_θ,n=M_q_n^θ() ⊕ M_q_n-1^θ()and for each n ∈, set _θ, n=α^(n)(_θ,n).These form an inductive sequence with the mapsα_θ,n:a⊕ b ∈_θ,n↦(a, …, a,b )⊕ a ∈_θ,n+1,where there are r^θ_n+1 copies of a on the diagonal in the first summand of _θ,n+1. This is a unital *-monomorphism by construction.For n=0,α_θ, 0: λ∈_θ,0↦(λ, …, λ)⊕λ ∈_θ,1.The Effros–Shen algebra associated to θ is the inductive limit (see <cit.>)_θ=lim (_θ,n, α_θ,n)_n ∈by <cit.>.There exists a unique faithful tracial state τ^θ on _θ such that for each n ∈∖{0},τ_θ,n (see Expression (<ref>)) is defined for each (a,b) ∈_θ,n byτ_θ,n(a,b)=t(θ,n)1/q_n^θTr(a)+(1-t(θ,n))1/q_n-1^θTr(b),wheret(θ,n)=(-1)^n-1q_n^θ(θ q_n-1^θ -p_n-1^θ ) ∈ (0,1) (see <cit.>). For each n ∈, define β^θ_n=1/(_θ,n)=1/(q_n^θ)^2+(q_n-1^θ)^2,and note that (β^θ_n)_n ∈ is summable by <cit.>. Finally, for each n ∈, define a^τ_θ_n=c^τ^θ_n/β^θ_nwhere c^τ^θ_n is given by Expression (<ref>).The mapθ∈ (0,1)∖⟼ (_θ, L^τ_θ_β_θ)is continuous with respect to the quantumGromov-Hausdorff propinquity of <cit.> where L^τ_θ_β_θ is given by Theorem <ref>. Note that for every θ∈ (0,1)∖ there exists a summable sequence of positive reals (β_n)_n ∈ such that β^θ_n≤β_n for every n ∈ (see the beginning of the proof of <cit.>. Now, let (θ^n)_n ∈ be a sequence in (0,1)∖ that converges to some θ∈ (0,1)∖ with respect to the usual topology on . Let ε>0. Choose N_1 ∈ such that ∑_k=n^∞β_n < ε/3 for every n ≥ N_1. 
Now choose N_2 ∈ such that N_2≥ N_1 and q^θ_n_k=q^θ_k for every n ≥ N_1 and k ∈{1,2,…, N_1} which is possible by <cit.>. Thus, for every n ≥ N_2, we have _θ_n, k=_θ, k and α_θ_n, k=α_θ, k for every k ≤ N_1. Now by <cit.>, we have that (τ_θ_l, N_1)_l ≥ N_2 converges to τ_θ,N_1in the weak* topology. Thus, by the same proof as <cit.>, we have that there exists N_3≥ N_2 such that ((_N_1, L^τ_θ_n, N_1_β^θ_n), (_N_1, L^τ_θ, N_1_β_θ))< ε/3by Theorem <ref>. Let n ≥ N_3. By Theorem <ref> and Theorem <ref> and the triangle inequality, we have((_θ_n, L^τ^θ_n_β^θ_n), (_θ, L^τ^θ_β^θ))≤ ((_θ_n, L^τ^θ_n_β^θ_n), (_θ_n,N_1, L^τ^θ_n_β^θ_n)) + ( (_θ_n,N_1, L^τ^θ_n_β^θ_n), (_θ_n,N_1, L^τ^θ_β^θ)) + ((_θ_n,N_1, L^τ^θ_β^θ), (_θ, L^τ^θ_β^θ))≤∑_k=N_1^∞β^θ_n_k +( (_θ_n,N_1, L^τ^θ_n_β^θ_n), (_θ_n,N_1, L^τ^θ_β^θ))+∑_k=N_1^∞β^θ_n_k≤∑_k=N_1^∞β_k +( (_θ_n,N_1, L^τ^θ_n_β^θ_n), (_θ_n,N_1, L^τ^θ_β^θ))+∑_k=N_1^∞β_k < ε/3+ ( (_θ_n,N_1, L^τ^θ_n_β^θ_n), (_θ_n,N_1, L^τ^θ_β^θ))+ε/3 = 2ε/3+((_N_1, L^τ_θ_n, N_1_β^θ_n), (_N_1, L^τ_θ, N_1_β_θ)) < 2ε/3+ε/3=εas desired. Next, we move to the UHF case. The Baire spaceis the set (∖{0})^ endowed with the metric 𝖽 defined, for any two (x(n))_n∈, (y(n))_n∈ in , by d_((x(n))_n∈, (y(n))_n∈) =0 if x(n) = y(n) for all n∈, 2^-min{ n ∈ : x(n) ≠ y(n) }otherwise.Next, we define UHF algebras in a way thatsuitsour needs. Given(β(n))_n∈∈, let ⊠β(n)= 1 ifn=0, ∏_j=0^n-1 (β(j)+1) otherwise. For each n ∈, define a unital *-monomorphism byμ_β,n : a ∈_⊠β(n)() ⟼diag(a,a,…, a) ∈_⊠β(n+1)(),where there are β(n)+1 copies of a in diag(a,a,…,a). Set 𝗎𝗁𝖿((β(n))_n∈)=lim (_⊠β(n)() , μ_β,n)_n ∈. The map(β(n))_n∈∈⟼𝗎𝗁𝖿((β(n))_n∈)is a surjection onto the class of all UHF algebras up to *-isomorphism by <cit.>.For each n ∈, let γ_β(n)=1/(_⊠β(n)()),and letρ_βbe the unique faithful tracial state on uhf((β(n))_n∈). Wenow state our result for continuity of UHF algebras with respect to the Baire space. The mapβ∈⟼ (𝗎𝗁𝖿(β), L^ρ_β_γ_β)is continuous with respect to the quantumGromov-Hausdorff propinquity of <cit.> where L^ρ_β_γ_β is given by Theorem <ref>.This follows similarly as the proof of Theorem <ref> since convergence in the Baire space is equivalent to convergence of irrationals by <cit.>.amsplain
http://arxiv.org/abs/2312.16458v1
{ "authors": [ "Clay Adams", "Konrad Aguilar", "Esteban Ayala", "Evelyne Knight", "Chloe Marple" ], "categories": [ "math.OA", "math.FA", "Primary: 46L89, 46L30, 58B34" ], "primary_category": "math.OA", "published": "20231227080503", "title": "Quantum Gromov-Hausdorff propinquity convergence of Christensen-Ivan quantum metrics on AF algebras" }
Federated Hyperdimensional Computing. Kazim Ergun*, Rishikanth Chandrasekaran*, and Tajana Rosing. Department of Computer Science and Engineering, University of California San Diego. *Both authors contributed equally to this research. January 14, 2024. Federated learning (FL) enables a loose set of participating clients to collaboratively learn a global model via coordination by a central server and with no need for data sharing. Existing FL approaches that rely on complex algorithms with massive models, such as deep neural networks (DNNs), suffer from computation and communication bottlenecks. In this paper, we first propose FedHDC, a federated learning framework based on hyperdimensional computing (HDC). FedHDC allows for fast and light-weight local training on clients, provides robust learning, and has smaller model communication overhead compared to learning with DNNs. However, current HDC algorithms get poor accuracy when classifying larger & more complex images, such as CIFAR10. To address this issue, we design FHDnn, which complements FedHDC with a self-supervised contrastive learning feature extractor. We avoid the transmission of the DNN and instead train only the HDC learner in a federated manner, which accelerates learning, reduces transmission cost, and utilizes the robustness of HDC to tackle network errors. We present a formal analysis of the algorithm, derive its convergence rate theoretically, and show experimentally that FHDnn converges 3× faster vs. DNNs. The strategies we propose to improve the communication efficiency enable our design to reduce communication costs by 66× vs. DNNs, and local client compute and energy consumption by 1.5-6×, while being highly robust to network errors. Finally, our proposed strategies for improving the communication efficiency have up to 32× lower communication costs with good accuracy. § INTRODUCTION Recent years have witnessed an unprecedented growth of data sensing and collection by the Internet of Things (IoT). It is estimated that the number of interconnected IoT devices will reach 40 billion by 2025, generating more than 79 zettabytes (ZB) of data <cit.>. Empowered by this massive data, emerging Deep Learning (DL) methods enable many applications in a broad range of areas including computer vision, natural language processing, and speech processing <cit.>. In the traditional cloud-centric DL approach, data collected by remote clients, e.g. smartphones, is gathered centrally at a computationally powerful server or data center, where the learning model is trained. Often, the clients may not be willing to share data with the server due to privacy concerns. Moreover, communicating massive datasets can result in a substantial burden on the limited network resources between the clients and the server. This motivated the development of distributed algorithms that allow machine learning at edge networks without data sharing. Federated learning (FL), proposed in <cit.>, has recently drawn significant attention as an alternative to centralized learning. FL exploits the increased computational capabilities of modern edge devices to train a model on the clients' side while keeping their collected data local. In FL, each client performs model training based on its local dataset and shares the model with a central server.
The models from all participating clients are then aggregated to a global model. Learning in FL is a long-term process consisting of many progressive rounds of alternating computation and communication. Therefore, two of the main challenges associated with FL are the computation and communication bottlenecks <cit.>. With FL, the computation, i.e. the training process, is pushed to edge devices. However, state-of-the-art ML algorithms, including deep neural networks (DNN), require a large amount of computing power and memory resources to provide better service quality. The DNN models have complicated model architectures with millions of parameters and require backpropagation, resulting in prohibitively long training times. Besides computation, the communication load of DNN based FL suffers from the need to repeatedly convey massive model parameters between the server and a large number of clients over wireless networks <cit.>. Another challenge arises when FL is carried out over wireless networks. The wireless channels are unreliable in nature, introducing noise, fading, and interference to the transmitted signals. Therefore, the communication in wireless FL is prone to transmission errors. The common solution for this problem is using multiple-access technologies <cit.> (e.g., TDMA, OFDMA) to prevent interference and error-correcting codes to overcome noise. If there still exist any errors, then a reliable transport layer protocol <cit.> (e.g., TCP) is adopted, where acknowledgment, retransmission, and time-out mechanisms are employed to detect and recover from transmission failures. This reliability comes with a price; achieving error-free communication requires a lot of wireless resources, increases energy consumption, limits communication rates, and hence decreases the training speed and convergence of FL. Otherwise, in an unreliable scenario, the transmission errors will impact the quality and correctness of the FL updates, which, in turn, will affect the accuracy of FL, as well as its convergence. This paper proposes a novel technique that enables efficient, robust, and accurate federated learning using brain-inspired models in high-dimensional space. Instead of conventional machine learning algorithms, we exploit Hyperdimensional Computing (HDC) to perform lightweight learning with simple operations on distributed low-precision vectors, called hypervectors. HDC defines a set of operations to manipulate these hypervectors in the high-dimensional vector space, enabling a computationally tractable and mathematically rigorous framework for learning tasks. A growing number of works have applied HDC to a wide range of learning problems, including reasoning, biosignal processing, activity prediction, speech/object recognition, and prediction from multimodal sensor fusion. These studies have demonstrated the high efficiency, robustness, and effectiveness of HDC in solving various learning problems, highlighting its potential as a powerful tool for a variety of applications. HDC has various appealing characteristics, particularly for edge devices.
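To make this concrete, the following minimal Python sketch (illustrative only; it is not the exact encoder developed later in this paper, and all names are chosen here) shows the style of operations HDC relies on: random bipolar hypervectors, binding by element-wise multiplication, bundling by addition, and cosine similarity for nearest-prototype lookup.

import numpy as np

D = 10_000                                   # hypervector dimensionality, a typical HDC choice
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice(np.array([-1, 1], dtype=np.int8), size=D)

def bind(a, b):
    """Binding: element-wise product; the result is nearly orthogonal to both inputs."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise sum; the result stays similar to every input."""
    return np.sum(hvs, axis=0)

def cosine(a, b):
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def noisy(hv, flip_prob=0.2):
    """Flip a fraction of components, mimicking sample variation or channel errors."""
    mask = rng.random(D) < flip_prob
    out = hv.copy()
    out[mask] *= -1
    return out

# toy "training": each class prototype is the bundle of noisy samples of a class pattern
patterns = {c: random_hv() for c in range(3)}
prototypes = {c: bundle([noisy(patterns[c]) for _ in range(10)]) for c in range(3)}

# binding is shown for completeness: the bound vector is dissimilar to its factors
print(cosine(bind(patterns[0], patterns[1]), patterns[0]))   # close to 0

# toy "inference": a noisy query is matched to the most similar prototype
query = noisy(patterns[1], flip_prob=0.3)
pred = max(prototypes, key=lambda c: cosine(query, prototypes[c]))
print(pred)   # 1 with overwhelming probability, despite 30% of components flipped

Because the information is spread over thousands of near-orthogonal components, corrupting a sizable fraction of them changes the similarity scores only marginally, which is the intuition behind the robustness properties noted above.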
It is well-suited to address the challenges in FL as: * HDC is low-power, computationally efficient, and amenable to hardware-level optimization <cit.>, * it is fault tolerant, providing strong robustness in the presence of errors <cit.>, * HDC models are small, thus both memory-efficient and communication-efficient <cit.>, * HDC encoding can transform non-linear learning tasks into linear optimization problems <cit.>, and * HDC enables fast and light-weight learning with its simple operations <cit.>. These features make HDC a promising solution for FL using today's IoT edge devices with constrained storage, battery, and resources, over wireless networks with latency concerns and limited bandwidth. We address several technical challenges to enable federated hyperdimensional computing at the IoT edge. Although HDC is inherently suitable for FL, current HDC algorithms fail to provide acceptable accuracy for complex image analysis <cit.>, which is one of the key FL applications. Recently published work <cit.> combines convolutional neural networks (CNNs) with HDC to learn effectively on complex data. It leverages convolution-based feature extraction prior to the HD encoding step. Unfortunately, such a configuration (DNN+HDC) possesses the aforementioned computation and communication drawbacks of DNNs for FL. The other challenge is the communication of HDC models over unreliable wireless channels. While the robustness of HDC encoded data to noise and bit errors was demonstrated by prior work <cit.>, similar claims were not investigated for an entire HDC model itself. Finally, HDC models have a lot of redundancy that can still put a burden on communication efficiency, even though they are much smaller than DNN models. SecureHD, HDnn, and our work, FedHDC, are all approaches that leverage high-dimensional computing for various tasks. However, there are distinct differences between these methods. FedHDC focuses on federated learning, which enables collaborative model training across multiple decentralized devices while maintaining data privacy. In contrast, SecureHD emphasizes secure high-dimensional computing, specifically designed to handle classification tasks with a focus on security. HDnn, on the other hand, is a hybrid approach that combines high-dimensional computing with convolutional neural networks (CNNs). This method aims to harness the strengths of both HDC and CNNs to improve classification performance, particularly in image recognition tasks. While FHDnn and HDnn both utilize high-dimensional computing, the primary difference lies in their objectives: FedHDC targets federated learning, whereas HDnn focuses on enhancing classification performance by integrating HDC with deep learning techniques. Unlike our proposed FHDnn, HDnn trains the feature extractor to learn representations amenable to learning in the high-dimensional vector space. In this paper, we first present federated hyperdimensional computing, FedHDC, an extension of the HDC paradigm into the federated learning domain. Next, we design a novel synergetic FL framework, called FHDnn, that enables FedHDC to also perform complex image classification by combining a contrastive learning framework as a feature extractor with FedHDC, while still keeping model updates to only the HDC portion, resulting in fast and accurate model updates via federated learning. In the following, we summarize the main contributions of the paper: i) We present FedHDC to address the computation and communication problems in standard DNN-based FL approaches.
The simple and highly efficient operations of HDC allow for fast and lightweight local training on clients between communication rounds. FedHDC incurs very low communication overhead as HD models are very small in size and training requires many fewer rounds of communication to converge compared to DNNs, resulting in at least 66× lower communication overhead as shown in our results.

ii) We analyze the HDC training process using the language of gradient methods from statistical learning and optimization. This viewpoint helps us provide a formal treatment of FedHDC as a general framework for federated learning, and precisely study its convergence properties. FedHDC can achieve 𝒪(1/T) convergence rate, with T representing the number of communication rounds, whereas such a claim is not possible for non-convex and non-linear DNNs. As HD encoding embeds data into a high-dimensional space and can transform non-linear distributed learning tasks into linear optimization, FedHDC enjoys simpler training and faster convergence compared to DNNs as it uses only HD computing, while having the superior performance properties of non-linear models.

iii) We present FHDnn, a novel synergetic FL framework that combines a pre-trained CNN as a feature extractor with HDC. Specifically, we utilize a CNN trained using SimCLR <cit.>, a contrastive learning framework which learns informative representations of data in a self-supervised manner. FHDnn avoids the transmission of the CNN and instead trains only the HD learner in a federated manner. This strategy accelerates learning, reduces transmission costs, and utilizes the robustness of HDC to tackle network errors as shown in Fig. <ref>.

iv) HD-based federated learning provides reliability for learning over unreliable wireless networks at no additional cost. Unlike existing FL approaches, there is no need for multiple-access technologies to prevent interference or error-protection on the transmitted models. Due to such techniques, FL can have very limited communication rates, and hence low training speeds. We leverage the robustness of HDC and allow errors during transmission instead of limiting the rate to achieve error-free communication. We analyze FHDnn under three different unreliable network settings: packet loss, noise injection, and bit errors, and show that the perturbations in the client models can be tolerated by the HDC learner. A quantizer method with scaling is additionally proposed to enhance the resilience to bit errors.

v) We also propose various strategies to further improve the communication efficiency of FedHDC and FHDnn. The HDC models have redundancy, which we exploit to reduce their sizes for more efficient communication. We examine three approaches: binarized differential transmission, subsampling, and sparsification & compression. We show their trade-offs between performance and efficiency through experiments.

We evaluate HDC-based federated learning by numerical experiments on different benchmark datasets and compare their performance with CNN-based FL under various settings. We both theoretically and empirically show that the proposed approaches are robust to lossy network conditions. Based on our evaluations, FHDnn converges 3× faster than CNN, reduces the communication costs by 66×, and the local computation cost on the clients by up to 6×. The communication efficiency of FedHDC and FHDnn is further improved by various strategies up to 32× with minimal loss in accuracy.
§ RELATED WORK Communication and computation bottlenecks of FL have been widely studied in the literature and various solutions were proposed targeting improvement at different parts of the overall process. FL involves many rounds of communication with the participation of numerous clients, typically at low rates over wireless links. These considerations have led to a significant interest in communication-efficient design of FL systems. Previous research has primarily focused on decreasing the size of the model updates <cit.> and reducing the number of communication rounds or communicating clients <cit.>. In addition, during each round of communication, participating clients train models locally on device for multiple epochs. Deep learning models that are commonly used tend to be expensive to train requiring backpropogation algorithm which is compute heavy. Efficient computation is also of great importance as clients are usually not equipped with powerful hardware. This is addressed in prior work by reducing the model complexity to alleviate local training <cit.>. On the other hand, there is often a trade-off between communication and computation; one strategy for lowering the frequency of communication is to put more emphasis on computation. The lightweight nature of HDC models make them suitable for running on edge devices with constrained resources. §.§ Communication Efficiency A prototypical FL approach named FedAvg <cit.> enables flexible communication and computation trade-off. The work follows from the seminal research in distributed stochastic gradient descent (SGD). Improvement in communication-efficiency is achieved by allowing for the clients to run multiple local SGD steps per communication round.Many succeeding studies have pursued the theoretical understanding of FedAvg in terms of communication-computation trade-offs and have carried out rigorous analysis of the convergence behavior depending on the underlying assumptions (e.g., IID or non-IID local datasets, convex or non-convex loss functions, gradient descent or stochastic gradient descent) <cit.>. Another approach that directly affects local training is to modify model complexity. Some examples are pruning <cit.>, restricting the model weights to be numbers at a certain bitwidth <cit.>, and bounding the model size<cit.>. These methods also lower computation complexity along with communication overhead. As the models for FL can get very large—especially in the case of DNNs—a different line of work explored methods to reduce the communicated model (or gradient) size, without altering the original local models. Existing schemes typically perform a form of compression, that is, instead of transmitting the raw model/gradient data, one transmits a compressed representation with fewer bits, for instance by means of limiting bitwidth (quantization) or enforcing sparsity (sparsification). Particularly, a popular class of quantization operators is based on random dithering <cit.>. Sparsification methods decrease the number of non-zero entries in the communicated data to obtain sparse vectors <cit.>. Structured and sketched updates are also proposed in <cit.>, which can be further supported by lossy compression and federated dropout <cit.>. Some other approaches include randomized techniques such as stochastic rounding <cit.>, subsampling <cit.>, and randomized approximation <cit.>. In FL, a group of clients might often provide similar, and hence redundant, model information during communication rounds. 
Orthogonal to the compression-based approaches, one can dismiss the updates of some clients as communicating all model updates would be an inefficient use of resources. Early works have attempted simple client selection heuristics such as selecting clients with higher losses  <cit.>, sampling clients of larger update norm with higher probability <cit.>, and sampling clients with probabilities proportional to their local dataset size <cit.>, butthe similarity or redundancy of the client updates are not exploited in these methods. Ideally, a diverse and representative set of clients should be selected that contribute different, informative updates. In consideration of this, several selection criteria have been investigated in recent literature, some of which are diversity-based selection <cit.>, importance sampling <cit.>, and selection by update significance <cit.>.FL is often carried out over wireless channels that attenuate the transmitted signal as well as introduce noise, and thus the communication is unreliable, prone to transmission errors. All the aforementioned approaches assume reliable links and ignore the wireless nature of the communication medium. The inherent assumption is that independent error-free communication “tunnels” has been established between the clients and the server by some existing wireless protocol.A common way to achieve this is to divide the channel resources among clients with multiple-access technologies (e.g., TDMA, CDMA, OFDMA) to mitigate interference, and utilize powerful error correcting codes to overcome noise <cit.>. However, the communication rates and consequently the overall training speed suffer due to the limited channel resources that can be allocated per client.§.§ Computation Efficiency The clients in FL are typically resource-constrained, battery-operated edge devices with limited power and computation budgets, unlike the powerful servers used in cloud-centric learning. DNN-based FL methods require clients to perform on-device backpropagation during each round of training which is computationally expensive and is incurring high resource usage. To overcome this challenge, prior works mainly explored low complexity NN architectures and lightweight algorithms suitable for edge devices. A lot of the `local methods' for improving communication efficiency fall into this category, e.g, pruning <cit.> and using quantized models <cit.>, which are also helpful for reducing computation. A small subset of the proposed approaches specifically devote their attention to resolving the computational issues in FL. In <cit.>, a “soft-training” method was introduced to dynamically compress the original training model into a smaller restricted volume through rotating parameter training. In each round, it lets different parts of model parameters alternately join the training, but maintains the complete model for federated aggregation. The authors of <cit.> suggested dividing the model into sub-models, then using only a few sub-models for partial federated training while keeping the rest of the parameters fixed. During training, sub-model capacities are gradually increased until it reaches the full model. Along similar lines, federated dropout <cit.> is a technique that enables each client to locally operate on a smaller sub-model while still providing updates that can be applied to the larger global model on the server. 
Finally, the technique presented in <cit.>, called splitfed learning, combines the strengths of FL and split learning by splitting a NN into client-side and server-side sub-networks during federated training.

Our federated hyperdimensional computing approach is orthogonal to most of the existing communication-efficient FL methods. For instance, it can be used in tandem with compression, subsampling, and client selection, or with techniques that reduce model complexity. In fact, in Section <ref>, we include some strategies for further improving the communication cost by leveraging the statistical properties of hypervectors, even though HD models are already much smaller (around a hundred thousand parameters vs. millions/billions), and thus more communication-efficient, compared to DNNs. Furthermore, different from the aforementioned works, we account for unreliable communication scenarios. We use the robustness of HDC to tolerate communication errors and carry out accurate training. Finally, there are studies that aim at making the compute-intensive DNN-based FL methods more efficient as summarized above. In contrast, HDC itself is a very lightweight framework with low computational cost. It was shown in previous work that HDC provides a 3× reduction in training time and 1.8× in energy consumption compared to optimized DNNs on the NVIDIA Jetson TX2 low-power edge GPU <cit.>. An ASIC implementation of HDC for edge devices further improves the energy consumption by 1257× and training time by 11× over DNNs.

§ HYPERDIMENSIONAL COMPUTING

In the following, we first introduce and give an overview of hyperdimensional computing. We next analyze the hyperdimensional computing classification algorithm, and then express it in a standard mathematical framework from statistical learning and optimization. The goal of this section is to provide an in-depth formal treatment of HDC as a general `learning' method. Leveraging the analysis presented here, we later study the convergence properties of federated hyperdimensional computing in Section <ref>.

§.§ Background

HDC performs cognitive tasks using high-dimensional vectors, also known as hypervectors. The typical length of a hypervector is from 1,000 to 10,000 dimensions. Hypervectors are random with independent identically distributed (i.i.d.) components. Thus, any randomly chosen pair of points in the hyperdimensional space is nearly orthogonal <cit.>. The first step of HDC is to map/encode the input signal (e.g., an image, feature vector, or a time-series window) into hypervectors. This step is common to all HDC applications. Assume the input 𝐱∈𝒳 is represented by the vector 𝐱 = [x_1, x_2, ..., x_m]^T, where the x_i denote the features and m is the length of the input vector. The HDC encoding operation maps the input data to its high-dimensional representation 𝐡∈ℋ with dimension d≫ m under some function ϕ : 𝒳→ℋ. There are several encoding algorithms in the literature with different memory-compute trade-offs, namely, base-level (a.k.a. position-ID) <cit.>, permutation <cit.>, and random projection <cit.>. In this work, we refer to the random projection encoding, but our methodology can be extended to any other encoding approach. Random projection encoding embeds the data into a high-dimensional Euclidean space under a random linear map. The output of this mapping can be quantized with minimal loss of information for better computational efficiency.
If quantized, the HD embedding is constructed as ϕ(𝐱) = sign(Φ𝐱) under the encoding function ϕ : ℝ^m→ℤ^d, where the rows of Φ∈ℝ^d × m are generated by randomly sampling directions from the m-dimensional unit sphere. Here, sign(Φ𝐱) is the element-wise sign function returning +1 if Φ𝐱≥ 0 and -1 otherwise. Fig. <ref>a shows an overview of HDC encoding.

§.§ Hyperdimensional Learning

Many learning tasks can be implemented in the HD domain. Here, we focus on classification, one of the most popular supervised learning problems. Suppose we are given a collection of labeled examples 𝒟 = {(𝐱_i,y_i)}^n_i=1, where 𝐱_i∈𝒳⊂ℝ^m and y_i∈𝒞 is a categorical variable indicating the class label of a particular data sample 𝐱_i. For HD learning, we first encode the entire set of data samples in 𝒟 into hyperdimensional vectors such that 𝐡_i = ϕ (𝐱_i) is a hypervector in the d-dimensional inner-product space ℋ. These high-dimensional embeddings represent data in a way that admits linear learning algorithms, even if the data was not separable to begin with. In other words, simple linear methods applied on HD encoded data can capture nonlinear decision boundaries on the original data <cit.>. The common approach to learning with HD representations is to bundle together the training examples corresponding to each class into a set of "prototypes", which are then used for classification. The bundling operator is used to compile a set of elements in ℋ and assumes the form of a function ⊕ : ℋ×ℋ→ℋ. The function takes two points in ℋ and returns a third point similar to both operands. We bundle all the encoded hypervectors that belong to the k-th class to construct the corresponding prototype 𝐜_k:

𝐜_k = ⊕_{i : y_i = k} 𝐡_i

Given a query data point 𝐱_q∈𝒳 for which we search for the correct label, we take the encoded hypervector 𝐡_q∈ℋ and return the label of the most similar prototype:

ŷ_q = k^* = argmax_{k ∈ {1,...,K}} δ(𝐡_q,𝐜_k)

where δ is a similarity metric.

One-Shot Training. The bundling operator ⊕ is often chosen to be the element-wise sum. In this case, the class prototypes are obtained by adding all hypervectors with the same class label, and the operation in Equation (<ref>) is simply calculated as:

𝐜_k = ∑_{i : y_i = k} 𝐡_i

This can be regarded as single-pass training since the entire dataset is used only once, with no iterations, to train the model (class prototypes).

Inference. The similarity metric δ is typically taken to be the cosine similarity, which measures the angle between two vectors from an inner product space. Under cosine similarity, Equation (<ref>) is rewritten using a dot-product and a magnitude operation as follows:

ŷ_q = k^* = argmax_{k ∈ {1,...,K}} ⟨𝐜_k,𝐡_q⟩/‖𝐜_k‖

Retraining. One-shot training often does not result in sufficient accuracy for complex tasks. A common approach is to fine-tune the class prototypes using a few iterations of retraining <cit.>. We use the perceptron algorithm <cit.> to update the class hypervectors for mispredicted samples. The model is updated only if the query in (<ref>) returns an incorrect label. Let y_q=k and ŷ_q = k' be the correct and mispredicted labels, respectively. Then, the new class prototypes after the retraining iteration are:

𝐜_k = 𝐜_k + α𝐡_q
𝐜_k' = 𝐜_k' - α𝐡_q

where α is the HD learning rate, controlling the amount of change we make to the model during each iteration. Figure <ref>b shows an overview of HDC for classification.
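To make the encoding, one-shot training, and inference steps concrete, the sketch below implements the random projection encoder ϕ(𝐱) = sign(Φ𝐱), class-prototype bundling, and cosine-similarity inference in PyTorch. It is a minimal sketch under the definitions above; the class and attribute names are illustrative, not the exact implementation used in our experiments.

```python
import torch

class HDClassifier:
    """Minimal HDC classifier: random projection encoding + bundled class prototypes."""

    def __init__(self, in_features: int, num_classes: int, dim: int = 10_000):
        # Rows of Phi are random directions; sign() makes the encodings bipolar (+1/-1).
        self.Phi = torch.randn(dim, in_features)
        self.prototypes = torch.zeros(num_classes, dim)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # phi(x) = sign(Phi x), applied row-wise to a batch of shape (n, in_features).
        return torch.sign(x @ self.Phi.T)

    def one_shot_train(self, x: torch.Tensor, y: torch.Tensor) -> None:
        # Bundle (element-wise add) every encoded sample into its class prototype.
        # y must be an integer (long) tensor of class indices.
        h = self.encode(x)
        self.prototypes.index_add_(0, y, h)

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        # Cosine similarity of each query against every class prototype.
        h = torch.nn.functional.normalize(self.encode(x), dim=1)
        c = torch.nn.functional.normalize(self.prototypes, dim=1)
        return (h @ c.T).argmax(dim=1)
```

A typical usage would call `one_shot_train` once over the full (or locally held) dataset and then `predict` on queries; the retraining step discussed next refines these prototypes further.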
§.§ Hyperdimensional Linear Discriminant

The single-pass training and dot-product based inference approach of the HD algorithm bears a strong resemblance to Fisher's linear discriminant <cit.>. Assume that each sample 𝐱∈𝒳 belongs to a class with binary label y ∈{-1,1} for notational convenience. The assumption of a binary classification task is primarily for clarity of exposition, and our results can be extended to support multi-class problems via techniques such as "one-versus-rest" decision rules. Fisher's linear discriminant on the HD space finds the line z = 𝐰^T𝐡 that best separates the two classes. The goal is to select the direction 𝐰 so that, after projecting along this direction, (i) the separation between classes is high, with their means as far away from each other as possible, and (ii) the scatter within the classes is as small as possible, with low variance. A criterion that quantifies the desired goal is the Rayleigh quotient:

J(𝐰) = (𝐰^T𝐒_B𝐰)/(𝐰^T𝐒_W𝐰), where 𝐒_B = (μ_1-μ_-1)(μ_1-μ_-1)^T and 𝐒_W = Σ_1+Σ_-1,

and μ_±1 and Σ_±1 are the mean vector and the covariance matrix, respectively. 𝐒_B is defined as the between-class scatter, which measures the separation between class means, while 𝐒_W is the within-class scatter, measuring the variability inside the classes. Our goal is achieved by maximizing the Rayleigh quotient with respect to 𝐰. The corresponding optimal projection direction is then given as

𝐰^* = (Σ_1+Σ_-1)^{-1}(μ_1-μ_-1)

One can use Fisher's linear discriminant method as a classifier where the decision criterion is a threshold on the dot-product (projection):

z = (μ_1-μ_-1)^T(Σ_1+Σ_-1)^{-1}𝐡_q + T, with ŷ_q = 1 if z > 0 and ŷ_q = -1 if z < 0.

In HD computing, the procedure of one-shot training followed by inference, described by (<ref>) and (<ref>), is equivalent to the above decision criterion. For two classes, the "similarity check" step in (<ref>) can be rewritten in the form of a decision function as follows:

ŷ_q = 1 if ⟨𝐜_1,𝐡_q⟩/‖𝐜_1‖ > ⟨𝐜_-1,𝐡_q⟩/‖𝐜_-1‖, and ŷ_q = -1 if ⟨𝐜_1,𝐡_q⟩/‖𝐜_1‖ < ⟨𝐜_-1,𝐡_q⟩/‖𝐜_-1‖,

which can be further simplified as:

ŷ_q = 1 if (𝐜_1/‖𝐜_1‖ - 𝐜_-1/‖𝐜_-1‖)^T𝐡_q > 0, and ŷ_q = -1 if (𝐜_1/‖𝐜_1‖ - 𝐜_-1/‖𝐜_-1‖)^T𝐡_q < 0.

Since the class prototypes are sums of hypervectors with the same labels, they relate to the respective class means by a scalar multiplication, i.e., 𝐜_±1 = N_±1 μ_±1, where N_±1 denotes the total number of samples in each class. Plugging μ_±1 into (<ref>) and dividing each side of the inequalities by N_±1/‖𝐜_±1‖, we obtain the decision rule below:

ŷ_q = 1 if (μ_1-μ_-1)^T𝐡_q > 0, and ŷ_q = -1 if (μ_1-μ_-1)^T𝐡_q < 0.

Note that this is the same classifier as in (<ref>) for the special case when Σ_1 = Σ_-1 = Σ = (1/2)𝐈. HD encoding maps data points to a hyperdimensional space such that different dimensions of the hypervectors are uncorrelated, i.e., Σ_ij≈ 0 for i≠ j. Therefore, one-shot training followed by inference in HD computing is equivalent to applying Fisher's linear discriminant and classifying the encoded sample hypervectors. The above result shows the HD algorithm explicitly optimizes the discrimination between the data points from different classes. We first project data via HD encoding such that it becomes linearly separable, then find a linear discriminant.
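The equivalence above can also be checked numerically: when the encoded classes have roughly equal, isotropic covariance, the Fisher direction reduces to the mean difference, which in turn is (up to scaling) the difference of normalized class prototypes. The snippet below is a small illustrative check on synthetic encoded data, assuming the stated covariance condition; it is not part of the FedHDC pipeline.

```python
import torch

torch.manual_seed(0)
d = 2_000                                   # encoded (hyperdimensional) space
h_pos = torch.randn(500, d) + 0.05          # encoded samples of class +1
h_neg = torch.randn(500, d) - 0.05          # encoded samples of class -1

mu_diff = h_pos.mean(0) - h_neg.mean(0)     # (mu_1 - mu_-1), the Fisher direction here

# Class prototypes = bundled (summed) hypervectors of each class.
c_pos, c_neg = h_pos.sum(0), h_neg.sum(0)
proto_diff = c_pos / c_pos.norm() - c_neg / c_neg.norm()

# With Sigma_1 + Sigma_-1 close to a scaled identity, both directions should align.
cos = torch.nn.functional.cosine_similarity(mu_diff, proto_diff, dim=0)
print(f"cosine(mean difference, prototype difference) = {cos:.4f}")
```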
§.§ A Gradient Descent Perspective on HDC

A retraining step is required to fine-tune the HD model for tasks where one-shot training does not suffice. The goal is to update the class prototypes until finding the model that best separates the classes. In the following, we analyze the HD retraining process using the language of gradient methods from statistical learning and optimization. This viewpoint helps us provide a formal treatment of FedHDC as a general framework for federated learning, and precisely study its convergence properties.

Without loss of generality, we continue our analysis using binary class labels as in Section <ref>. Let 𝐰∈ℝ^d be a vector of weights that specifies a hyperplane in the hyperdimensional space with d dimensions. We define this vector in terms of class prototypes, such that 𝐰 = 𝐜_1 - 𝐜_-1. Then, after plugging in the weight vector and simplifying the equations in (<ref>), classification of a query data point 𝐱_q is made through the following decision function:

ŷ_q = 1 if 𝐰^T𝐡_q > 0, and ŷ_q = -1 if 𝐰^T𝐡_q < 0.

This can be interpreted as a linear separator on the HD representations of the data. It divides ℋ into two half-planes, where the boundary is the plane with normal 𝐰. The goal is to learn the weights such that all the positive examples (y_i = 1) are on one side of the hyperplane and all negative examples (y_i = -1) on the other. For the optimal set of weights, the linear function g(𝐡) = 𝐰^T𝐡 agrees in sign with the labels on all training instances, that is, sign(⟨𝐰,𝐡_i⟩) = y_i for any 𝐱_i∈𝒳. We can also express this condition as y_i⟨𝐰, 𝐡_i⟩ > 0.

Recall that HD retraining, in the event of a misclassification, subtracts the query hypervector from the incorrect class prototype and adds it to the one that it should have been matched with. The two possible retraining iterations for binary classification are:

Misclassifying 𝐱_1: 𝐜_1 = 𝐜_1 + α𝐡_1 and 𝐜_-1 = 𝐜_-1 - α𝐡_1.
Misclassifying 𝐱_-1: 𝐜_1 = 𝐜_1 - α𝐡_-1 and 𝐜_-1 = 𝐜_-1 + α𝐡_-1.

For both cases, the difference of class prototypes, i.e., 𝐜_1 - 𝐜_-1, is updated as a function of the misclassified class label. A unified update equation that covers both cases is as follows:

𝐜_1 - 𝐜_-1 = 𝐜_1 - 𝐜_-1 + 2α y_i𝐡_i

Plugging in the weight vector in the above equation, we have:

𝐰 = 𝐰 + 2α y_i𝐡_i

A simple algorithm that implements HD retraining with the above notion of linear separators is described by Algorithm 1. Here, η is a positive scalar called the learning rate and t denotes the iteration number.
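Algorithm 1 itself is not reproduced here; the following sketch shows the corresponding perceptron-style retraining loop on the prototype view derived above, generalized to K classes as in the retraining rule of Section <ref>. It is an illustrative sketch (one configuration of learning rate and epochs), not the paper's exact implementation.

```python
import torch

def hd_retrain(prototypes: torch.Tensor, H: torch.Tensor, y: torch.Tensor,
               lr: float = 1.0, epochs: int = 5) -> torch.Tensor:
    """Perceptron-style refinement of class prototypes on encoded data.

    prototypes: (K, d) class hypervectors, H: (n, d) encoded samples,
    y: (n,) integer labels in {0, ..., K-1}.
    """
    for _ in range(epochs):
        for h, label in zip(H, y):
            # Similarity check against all class hypervectors.
            scores = torch.nn.functional.cosine_similarity(
                prototypes, h.unsqueeze(0), dim=1)
            pred = int(scores.argmax())
            if pred != int(label):
                # Move the correct prototype toward h, the mispredicted one away from h.
                prototypes[label] += lr * h
                prototypes[pred] -= lr * h
    return prototypes
```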
We now show that HD retraining can be represented as an instance of Empirical Risk Minimization (ERM). Particularly, we frame the retraining step as an optimization problem with a convex loss function, and then argue that the updates in Algorithm 1 are equivalent to stochastic gradient descent (SGD) steps over an empirical risk objective. Our ultimate goal is to find the discriminant function g_𝐰(𝐡) which minimizes the empirical risk on the embedded training set 𝒟_ℋ = {(𝐡_1,y_1),...,(𝐡_n,y_n)}. Empirical risk is defined as follows:

R_emp(g_𝐰) = (1/n)∑_{i=1}^{n} ℓ(g_𝐰(𝐡_i),y_i)

where ℓ is a loss function that describes the real-valued penalty calculated as a measure of the discrepancy between the predicted and true class labels. Zero empirical risk can be achieved if HD encoding admits a linearly separable representation. Otherwise, zero risk is not possible, but we search for the optimal weights that minimize it:

𝐰^* = argmin_𝐰 R_emp(g_𝐰)

The "no error" condition, y_i⟨𝐰, 𝐡_i⟩ > 0 ∀ i, provides a very concise expression for the situation of zero empirical risk. It allows for the formulation of the learning problem as the following function optimization (the perceptron criterion, where the sum is taken over the currently misclassified examples):

minimize J(𝐰) = -∑_{i=1}^{n} y_i𝐰^T𝐡_i

The solution can be found by doing gradient descent on our cost function J(𝐰), where the gradient is computed as ∇ J(𝐰) = -∑_{i=1}^{n} y_i𝐡_i. Another optimization method is stochastic gradient descent, which picks a random example at each step and makes an improvement to the model parameters. Then, the gradient associated with an individual example is -y_i𝐡_i. Given a loss function ℓ(·), the stochastic gradient descent algorithm is defined below:

Stochastic Gradient Descent: Given: starting point 𝐰 = 𝐰_init and learning rates η_1,η_2,η_3,... (e.g., 𝐰_init = 0 and η_t = η for all t, or η_t = 1/√(t)). For a sequence of random examples (𝐡_1,y_1),(𝐡_2,y_2),...
* Given example (𝐡_t,y_t), compute the gradient ∇ℓ(g_𝐰(𝐡_t),y_t) of the loss w.r.t. the weights 𝐰.
* Update: 𝐰←𝐰 - η_t∇ℓ(g_𝐰(𝐡_t),y_t)

To present an equivalent formulation to (<ref>), consider the loss function ℓ(g_𝐰(𝐡),y) = max(0, -y⟨𝐰, 𝐡⟩) for the empirical risk in (<ref>). If g_𝐰(𝐡) has the correct sign, we have a loss of 0; otherwise we have a loss equal to the magnitude of g_𝐰(𝐡). In this case, if g_𝐰(𝐡) has the correct sign and is non-zero, the gradient will be zero since an infinitesimal change in any of the weights will not change the sign, so the algorithm will not make any change to 𝐰. On the other hand, if g_𝐰(𝐡) has the wrong sign, then ∂ℓ/∂𝐰 = -y𝐡. Hence, using η_t = η, the algorithm will update 𝐰←𝐰 + η y𝐡. Note that this is exactly the same algorithm as HD retraining. We observe that empirical risk minimization by SGD with the above loss function gives us the update rule in Algorithm 1.

§ FEDHDC: FEDERATED HD COMPUTING

We study the federated learning task where an HD model is trained collaboratively by a loose federation of participating clients, coordinated by a central server. The general problem setting discussed in this paper mostly follows the standard federated averaging framework from the seminal work in <cit.>. In particular, we consider one central server and a fixed set of N clients, each holding a local dataset. The k-th client, k∈ [N], stores an embedded dataset 𝒟_k = {(𝐡_k,j,y_k,j)}^{n_k}_{j=1}, with n_k = |𝒟_k| denoting the number of feature-label tuples in the respective dataset.

The goal in FL is to learn a global model by leveraging the local data at the clients. The raw datasets cannot be shared with the central server due to privacy concerns, hence the training process is apportioned among the individual clients as described by the following distributed optimization problem:

min_𝐰 { F(𝐰) ≜ ∑_{k=1}^{N} p_k F_k(𝐰) }

where p_k is the weight of the k-th client such that p_k≥ 0 and ∑_{k=1}^{N} p_k = 1. A natural and common approach is to pick p_k = n_k/n. Similar to Section <ref>, we represent our HD model by a vector of parameters 𝐰∈ℋ⊆ℝ^d. If the partition 𝒟_k is formed by randomly and uniformly distributing the training examples over the clients, then we have 𝔼_{𝒟_k}[F_k(𝐰)] = F(𝐰), where the expectation is over the set of examples assigned to the client. This is the IID assumption that usually does not hold in the FL setting; F_k could be an arbitrarily bad approximation to F under non-IID data. To define the learning objective and measure the fit of the model to data, we introduce a loss function as in (<ref>). We denote by ℓ(𝐰 ; (𝐡_k,j,y_k,j)) the loss of the prediction on example (𝐡_k,j,y_k,j) made with an HD model parametrized by 𝐰.
For the k-th client, the local objective F_k(·) is defined in the form of the local empirical loss as follows:

F_k(𝐰) = (1/n_k)∑_{j=1}^{n_k} ℓ(𝐰 ; (𝐡_k,j,y_k,j))

For ease of notation, we do not explicitly use g_𝐰(𝐡) to denote the learning model, but instead substitute 𝐰, which parametrizes it. The local empirical loss F_k measures how well the client model fits the local data, whereas the global loss F quantifies the fit to the entire dataset on average. We have shown above that the loss function ℓ = max(0, -y⟨𝐰, 𝐡⟩) captures the behavior of the HD algorithm for an equivalent optimization problem formulation solved by SGD. The objective is to find the model 𝐰^* that minimizes the global loss, i.e., 𝐰^* = argmin_𝐰 F(𝐰).

Algorithm. In the federated bundling framework, each client maintains its own HD model and participates in building a global model that solves (<ref>) in a distributed fashion. This is achieved via an iterative training procedure for which we describe one round (say the t-th) of the algorithm below.

* Broadcast: The central server broadcasts the latest global HD model, 𝐰_t, to all clients.
* Local updates: Each client k∈ [N] sets its model 𝐰_t^k = 𝐰_t and then performs training for E epochs using local data:
𝐰_{t,0}^k = 𝐰_t^k, 𝐰_{t,τ+1}^k ⟵ 𝐰_{t,τ}^k - η_t∇ F_k(𝐰_{t,τ}^k, ξ_τ^k), τ=0,1,...,E-1, 𝐰_{t+1}^k = 𝐰_{t,E}^k,
where η_t is the learning rate and ξ_τ^k is a mini-batch of data examples sampled uniformly from the local dataset 𝒟_k.
* Aggregation: The central server receives and aggregates the local models to produce a new global model: 𝐰_{t+1} = ∑_{k=1}^{N} p_k𝐰_{t+1}^k.

After aggregation, the server moves on to the next round, t+1. This procedure is carried out until sufficient convergence is achieved. Fig. <ref> summarizes the federated training process for FedHDC. The overall update in one round of federated bundling is similar to a gradient descent step over the empirical loss corresponding to the entire distributed dataset across clients.

§.§ FedHDC Convergence Analysis

In this section, we first specify the objective functions and the corresponding gradient computations in FedHDC, whose general forms were discussed above. We then analyze the convergence behavior of FedHDC, showing that it converges to the global optimum at a rate of 𝒪(1/T), where T is the number of communication rounds. For federated learning with the HD algorithm, the optimization problem in (<ref>) is cast as follows:

𝐰^* = argmin_𝐰 ∑_{k=1}^{N} (p_k/n_k)∑_{j=1}^{n_k} max(0, -y_j⟨𝐰, 𝐡_j⟩),

and the local gradient 𝐠_k = ∇ F_k(𝐰) is computed at client k∈ [N] as:

𝐠_k = -(1/n_k)∑_{j : y_j⟨𝐰, 𝐡_j⟩ < 0} y_j𝐡_j

where the sum runs over the locally misclassified samples. As Equation (<ref>) suggests, the gradient computations are linear, demand low-complexity operations, and thus are favorable for resource-constrained, low-power client devices. However, in many learning tasks, linear federated learning models perform sub-optimally compared to their counterpart, DNN-based approaches. FedHDC diverges from traditional linear methods in this respect. It enjoys both the superior performance properties of non-linear models and the low computational complexity of linear models. This is a direct result of HD computing, which embeds data into a high-dimensional space where the geometry is such that simple learning methods are effective. As we show in the following, linearity in HD training benefits convergence, while at the same time the performance does not degrade due to the properties of the non-linear hyperdimensional embeddings. Such convergence claims are not possible for non-convex and non-linear DNNs.
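One communication round of the broadcast, local-update, and aggregation procedure above can be sketched as follows. The client data layout and helper names are illustrative assumptions; the weighting p_k = n_k/n and the misclassification-driven gradient follow the text.

```python
import torch

def fedhdc_round(global_w: torch.Tensor, clients: list, lr: float, epochs: int) -> torch.Tensor:
    """One FedHDC round: broadcast, E local gradient epochs per client, weighted aggregation.

    Each element of `clients` is assumed to be a dict with encoded data "H" (n_k, d)
    and labels "y" (n_k,) taking values in {-1, +1}.
    """
    n_total = sum(len(c["y"]) for c in clients)
    new_global = torch.zeros_like(global_w)

    for c in clients:
        w = global_w.clone()                                  # broadcast
        H, y = c["H"], c["y"].float()
        for _ in range(epochs):                               # local updates
            margins = y * (H @ w)
            wrong = margins < 0                               # misclassified samples
            if wrong.any():
                grad = -(y[wrong].unsqueeze(1) * H[wrong]).sum(dim=0) / len(y)
                w = w - lr * grad
        p_k = len(c["y"]) / n_total                           # p_k = n_k / n
        new_global += p_k * w                                 # aggregation

    return new_global
```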
The functions F_k(·) and the gradients ∇ F_k(·) have the following properties:

* (L-smoothness). Each local function F_k(·) is L-smooth, i.e., the gradients ∇ F_k(·) are Lipschitz continuous: there exists a parameter L>0 such that for all 𝐯,𝐰∈ℝ^d, ‖∇ F_k(𝐯) - ∇ F_k(𝐰)‖ ≤ L‖𝐯 - 𝐰‖.
* (Strong convexity). Each local function F_k(·) is μ-strongly convex and differentiable: for all 𝐯,𝐰∈ℝ^d, F_k(𝐯) ≥ F_k(𝐰) + (𝐯 - 𝐰)^T∇ F_k(𝐰) + (μ/2)‖𝐯 - 𝐰‖^2.
* (Bounded variance). The variance of the stochastic gradients at each client k is bounded: let ξ^k be sampled from the k-th client's dataset uniformly at random; then there exist constants σ_k such that for all 𝐰∈ℝ^d, 𝔼‖∇ F_k(𝐰,ξ^k) - ∇ F_k(𝐰)‖^2 ≤ σ_k^2.
* (Uniformly bounded gradient). The expected squared norm of the stochastic gradients is uniformly bounded: for all mini-batches ξ^k at client k∈ [N] and for 𝐰∈ℝ^d, 𝔼‖∇ F_k(𝐰,ξ^k)‖^2 ≤ G^2.

These conditions on the local functions are typical and widely used for the convergence analysis of different federated averaging frameworks <cit.>.

Theorem 1. Define κ = L/μ, γ = max{8κ, E}, and choose the learning rate η_t = 2/(μ(γ + t)). Then, the convergence of FedHDC with Non-IID datasets and partial client participation satisfies

𝔼[F(𝐰_T)] - F^* ≤ (2κ/(γ + T))[B/μ + (2L + Eμ/4)‖𝐰_0 - 𝐰^*‖^2]

where

B = ∑_{k=1}^{N} p_k^2σ_k^2 + 6LΓ + 8(E-1)^2G^2 + ((N-K)/(N-1))(4/K)E^2H^2

Here, T is the number of communication rounds (or SGD steps) and K denotes the number of clients participating in each round. The term Γ is used to quantify the degree of Non-IID-ness <cit.>. Let F^* and F^*_k be the minimum values of F and F_k, respectively; then Γ = F^* - ∑_{k=1}^{N} p_kF^*_k. As shown in Theorem 1, FedHDC can achieve an 𝒪(1/T) convergence rate. Such a claim does not hold for non-convex and non-linear DNNs. This result follows from the standard proof on the convergence of FedAvg on Non-IID data <cit.>. The proof is given in Appendix A.

§.§ FedHDC Experimental Results

We implemented FedHDC in Python using a custom HDC library for the PyTorch framework. For FedHDC, we use hypervectors with dimension 10,000. For comparison, we use an NN with a fully connected layer with 128 units and ReLU activation, and a final output layer with softmax. To observe the performance of our approach focusing on real-world use cases, we evaluated FedHDC on a wide range of benchmarks, shown in Table <ref>, that range from relatively small datasets collected in a small IoT network to a large dataset that includes hundreds of thousands of face images. The data include: ISOLET: recognizing audio of the English alphabet; UCIHAR: detecting human activity based on 3-axial linear acceleration and angular velocity data from different people; PAMAP2: classifying five human activities based on heart rate and inertial measurements; FACE: classifying images with faces/non-faces; and MNIST: recognizing handwritten digits by different people.

§.§.§ Accuracy and Convergence

We run our experiments for 100 clients and 100 rounds of communication. We first tune the hyperparameters for both FedHDC and CNNs, then experiment with different federated learning parameters. Fig. <ref> shows the accuracy and convergence of both FedHDC and the CNN for various numbers of local epochs E and local batch sizes B. For all experiments, a fraction C=0.2 of the clients is randomly picked in every communication round. For all datasets, the best convergence is achieved with a low number of epochs (E=1) and moderate batch sizes (B=10,20).

§.§.§ Hypervector Dimensionality Study

Table <ref> demonstrates the influence of hypervector dimensions on the FedHDC classification accuracy.
A modest increase in accuracy is observed as the dimensionality grows. This outcome aligns with expectations, as the robustness of HDC is known to improve with increasing dimensions. Thomas showed that dimensionality is directly proportional to the bandwidth of the noise in HDC classification problems, thus providing a guideline for a trade-off between noise and the hypervector size. It is essential to consider the trade-off between performance and resource usage, as the computational cost rises with increasing dimensions. In essence, the HD encoding dimension exhibits a linear relationship with the number of categorical features, while it depends logarithmically on the alphabet size. As previously mentioned, the separation quality of the problem is associated with factors such as the class separability and the encoding dimension. Intuitively, when the classes are well separated, a smaller encoding dimension can be employed to achieve satisfactory performance. This is because the inherent separability of the data aids in reducing the required dimensionality for efficient classification. Conversely, when the classes are poorly separated, a larger encoding dimension is necessary to enhance the robustness and accuracy of the classification process. Consequently, understanding the relationship between the HD encoding dimension and the problem's complexity is crucial for optimizing the performance of high-dimensional computing methods in various classification tasks.

§ FHDNN: FEDERATED HYPERDIMENSIONAL COMPUTING WITH CNN FEATURE EXTRACTION

FedHDC gives great results for many datasets in a federated setting, but it does not achieve acceptable accuracy for complex image analysis due to the inherent inaccuracy of HDC on larger images. Table <ref> summarizes the accuracy of various state-of-the-art encoding methods for HDC on image classification tasks. The current HD encoding methods are not able to match state-of-the-art accuracy. In this section, to overcome this issue, we present FHDnn, a synergetic FL framework which combines CNNs and HDC. FHDnn uses a pre-trained CNN as a feature extractor, whose outputs are encoded into hypervectors and then used for training. It avoids the transmission of the CNN and instead trains only the HD learner in a federated manner. The CNN excels at learning a complex hierarchy of features and boasts high accuracy, whereas HDC provides efficient and robust training. Therefore, FHDnn enjoys the complementary salient properties of both HDC and CNNs to enable a lightweight, communication-efficient, and highly robust FL framework.

§.§ Model Architecture

FHDnn consists of two components: i) a pre-trained CNN as a feature extractor, and ii) a federated HD learner. Fig. <ref> shows the model architecture of FHDnn. The pre-trained feature extractor is trained once and not updated at run time. This removes the need for costly CNN weight updates via federated learning. Instead, HD computing is responsible for all the federated model updates. Since its training only requires simple operations, it is much more efficient and scalable. In the next subsections we describe both components.

Feature Extractor: While in theory any standard CNN can be used as a feature extractor, we use a pre-trained SimCLR ResNet model as our feature extractor due to its proven success in prior studies.
SimCLR <cit.> is a contrastive learning framework which learns representations of images in a self-supervised manner by maximizing the similarity between latent space representations of different augmentations of a single image. This class-agnostic framework, trained on a large image dataset, allows for transfer learning over multiple datasets (as evaluated in <cit.>), making it ideal for a generic feature extractor. Standard CNNs learn representations that are fine-tuned to optimize the classification performance of the dense classifier at the end of the network. Since SimCLR focuses on learning general representations as opposed to classification-oriented representations, it is a better choice of feature extractor. We choose the ResNet architecture due to the availability of pre-trained models. It is possible to use other models such as MobileNet <cit.>.

HD Learner: FHDnn encodes the outputs of the feature extractor into hypervectors. More formally, given a point 𝐱∈𝒳, the features 𝐳∈ℤ^n are extracted using the feature extractor f: 𝒳→ℤ, where f is a pre-trained neural network. The HD embedding is constructed as 𝐡 = ϕ(𝐳) = sign(Φ𝐳) under the encoding function ϕ : ℤ→ℋ. The HD learner then operates on these hypervectors using binding and bundling, which are simple and highly parallelizable. The goal of such a configuration is to avoid the transmission of the CNN and instead train only the HD learner in a federated manner. An HD model is formed by bundling all encoded hypervectors with the same class label together. We perform bundling by the element-wise addition of those hypervectors, which generates the corresponding class prototypes. Then, the HD model is simply a set of hypervectors, one for each class in the dataset. We use the HD learner in the federated training that we discuss in the following.

§.§ Federated Training

Fig. <ref> summarizes the overall federated training process for FHDnn. We separate the whole process into two steps, client local training and federated bundling. These two steps work in a cyclical fashion, one after the other, until convergence.

Client Local Training: Each client initially starts the process with a feature extractor f and an untrained HD learner. Once we get the encoded hypervectors using the method described above, we create class prototypes by bundling together the hypervectors of the corresponding class using 𝐜_k = ∑_i𝐡_i^k. Inference is done by computing the cosine similarity between a given encoded data point and each of the prototypes, returning the class which has maximum similarity. After this one-shot learning process, we iteratively refine the class prototypes by subtracting the hypervectors from the mispredicted class prototype and adding them to the correct prototype, as shown in Fig. <ref>. We define the complete HD model 𝐂 as the concatenation of class hypervectors, i.e., 𝐂 = [𝐜_1^T,𝐜^T_2,...,𝐜^T_l].

Federated Bundling: In the federated bundling framework, each client maintains its own HD model and participates in building a global model in a distributed fashion. This is achieved via an iterative training procedure for which we describe one round (say the t-th) of the algorithm below.

* Broadcast: The central server broadcasts the latest global HD model, 𝐂_t, to all clients.
* Local updates: Each participating client k∈ [N] sets its model 𝐂^k_t = 𝐂_t and then performs training for E epochs using local data.
* Aggregation: The central server receives and aggregates the local models to produce a new global model: 𝐂_{t+1} = ∑_{k=1}^{N} 𝐂_{t+1}^k.

After aggregation, the server moves on to the next round, t+1. This procedure is carried out until sufficient convergence is achieved.
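As a concrete illustration of the client-side pipeline (frozen feature extractor, HD encoding, and per-class bundling), the sketch below uses a torchvision ResNet as a stand-in for the SimCLR-pretrained backbone; the backbone choice, dimensions, and names are placeholders rather than the exact models used in our evaluation.

```python
import torch
import torchvision

class FHDnnClient:
    """Client-side FHDnn sketch: frozen CNN features -> HD encoding -> class prototypes."""

    def __init__(self, num_classes: int, feat_dim: int = 512, hd_dim: int = 10_000):
        backbone = torchvision.models.resnet18(weights=None)  # stand-in for the SimCLR ResNet
        backbone.fc = torch.nn.Identity()                      # expose the 512-d feature vector
        self.f = backbone.eval()                               # frozen, never communicated
        self.Phi = torch.randn(hd_dim, feat_dim)               # shared random projection matrix
        self.C = torch.zeros(num_classes, hd_dim)              # local HD model (class hypervectors)

    @torch.no_grad()
    def encode(self, images: torch.Tensor) -> torch.Tensor:
        z = self.f(images)                                      # z = f(x), extracted features
        return torch.sign(z @ self.Phi.T)                       # h = sign(Phi z)

    @torch.no_grad()
    def local_train(self, images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        h = self.encode(images)
        self.C.index_add_(0, labels, h)                         # bundle per class
        return self.C                                           # only C is sent to the server
```

Only the HD model C participates in federated bundling; the CNN weights stay on the device, which is what keeps the per-round communication small.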
§.§ FL Over Unreliable Channels With FHDnn

Federated learning is often carried out over wireless channels that attenuate the transmitted signal and introduce noise. Thus, the communication between clients and the server is unreliable, prone to transmission errors followed by packet losses. In this section, we show how FHDnn and FedHDC provide reliability for learning over unreliable wireless networks at no overhead. We consider different models for the uplink and the downlink channels. The centralized server is assumed to be able to broadcast the models reliably, error-free at arbitrary rates, which is a common assumption in many recent works <cit.>. For uplink communications, the channel capacity per client is notably more constrained as the wireless medium is shared, so transmissions can be unreliable even at very low rates. We next describe the considered communication setup over such multiple access channels (MAC).

The mutual interference between the transmissions of multiple participating clients can lead to erroneous aggregation of models at the server. A common approach in FL to deal with interference is to use an orthogonal frequency division multiple access (OFDMA) technique <cit.>. The resources of the shared medium are partitioned in the time–frequency space and allocated among the clients. This way, each of the N clients occupies one dedicated resource block, that is, a spectral band and time slot of the channel. Even though each client model can be recovered separately due to the orthogonality, the distinct channels are still inherently noisy. The individual, independent uplink channels would have to be rate-limited to be treated as error-free links under the Shannon capacity theorem. However, the bandwidth allocated per client decreases with the number of clients, and so does the capacity. Accordingly, the volume of data that can be conveyed reliably, i.e., the throughput, scales by 1/N. This implies that the data rates will be small, resulting in slow training unless the transmission power is increased, which is undesirable considering energy consumption concerns.

Instead of limiting the rate to achieve error-free communication, we admit errors for the channel output at the server. The intuition is that the perturbations in the client models can be tolerated to a certain extent by the learning algorithm. If the learning model is robust to errors, then there is no need to force perfectly reliable transmissions. Thus, we analyze our FHDnn scheme assuming that the clients communicate over an unreliable MAC and the transmitted models are corrupted. In the following, we consider three error models at different layers of the network stack. All models are applicable in practice depending on the underlying protocol. We first explore the properties of HD computing that make learning robust under the considered error models, then introduce different techniques for further improvement.

§.§.§ Noisy Aggregation

In conventional systems, the transmitter performs three steps to generate the wireless signal from data: source coding, channel coding, and modulation. First, a source encoder removes the redundancies and compresses the data. Then, to protect the compressed bitstream against the impairments introduced by the channel, a channel code is applied.
The coded bitstream is finally modulated with a modulation scheme which maps the bits to complex-valued samples (symbols), transmitted over the communication link. The receiver inverts the above operations, but in the reverse order. A demodulator first maps the received complex-valued channel output to a sequence of bits. This bitstream is then decoded with a channel decoder to obtain the original compressed data, which may however be corrupted due to the channel impairments. Lastly, the source decoder provides a (usually inexact) reconstruction of the transmitted data by applying a decompression algorithm.

For noisy aggregation, as an alternative to the conventional pipeline, we assume uncoded transmission <cit.>. This scheme bypasses the transformation of the model to a sequence of bits, which would then need to be mapped again to complex-valued channel inputs. Instead, the real model parameter values are directly mapped to the complex-valued samples transmitted over the channel. Leveraging the properties of uncoded transmission, we can treat the channel as formulated in Equation (<ref>), where the additive noise is directly applied to the model parameters. The channel output received by the server for client k at round t is given by

𝐰̃_t^k = 𝐰_t^k + 𝐧_t^k

where 𝐧_t^k ∼ 𝒩(0, σ_{t,k}^2) is the d-dimensional additive noise. The signal power and noise power are computed as 𝔼‖𝐰_t^k‖^2 = P_{t,k} and 𝔼‖𝐧_t^k‖^2 = σ_{t,k}^2, respectively. Then, the signal-to-noise ratio (SNR) is:

SNR_{t,k} = 𝔼‖𝐰_t^k‖^2/𝔼‖𝐧_t^k‖^2 = P_{t,k}/σ_{t,k}^2

An immediate result of federated bundling is the improvement in the SNR for the global model. When the class hypervectors from different clients are bundled at the server, the signal power scales up quadratically with the number of clients N, whereas the noise power scales linearly. Assuming that the noise for each client is independent, we have the following relation:

SNR_t = 𝔼‖∑_{k=1}^{N}𝐰_t^k‖^2 / 𝔼‖∑_{k=1}^{N}𝐧_t^k‖^2 ≈ N^2P_{t,k}/(Nσ_{t,k}^2) = N × SNR_{t,k}

Notice that the effect of noise is suppressed by a factor of N due to bundling. This claim can also be made for the <cit.> framework over CNNs. However, even though the noise reduction factor is the same, the impact of the small noise might be amplified by large activations of CNN layers. In FHDnn, we do not have such a problem as the inference and training operations are purely linear. Another difference of FHDnn from CNNs is its information dispersal property. HD encoding produces hypervectors which have holographic representations, meaning that the information content is spread over all the dimensions of the high-dimensional space. In fact, no dimension in a hypervector is more responsible for storing any piece of information than the others. Since the noise in each dimension can also be assumed independent, we can leverage the information spread to further eliminate noise. Consider the random projection encoding described in Section <ref>, which is also illustrated by Fig. <ref>a. Let the encoding matrix Φ∈ℝ^{d × n} be expressed in terms of its d row vectors, i.e., Φ = [Φ_1, Φ_2, ..., Φ_d]^T. Then, the hypervector formed by encoding information 𝐱∈𝒳 can be written as 𝐡 = [Φ_1^T𝐱, Φ_2^T𝐱,...,Φ_d^T𝐱]^T, where 𝐱 = [x_1, x_2, ..., x_n]^T. As implied by this expression, the information is dispersed uniformly over the hypervector. Now consider additive noise over the same hypervector such that 𝐡 + 𝐧 = [Φ_1^T𝐱 + n_1, Φ_2^T𝐱 + n_2,...,Φ_d^T𝐱 + n_d]^T.
We can reconstruct the encoded information from the noisy hypervector 𝐡̃ = 𝐡 + 𝐧 as follows:

𝐱 ≈ [ (1/d)∑_{i=1}^{d}Φ_{i,1}𝐡̃_i, (1/d)∑_{i=1}^{d}Φ_{i,2}𝐡̃_i, ..., (1/d)∑_{i=1}^{d}Φ_{i,n}𝐡̃_i ]

where 𝐡̃_i = Φ_i^T𝐱 + n_i are the elements of the noisy hypervector. The noise variance is then reduced by the averaging operation, similar to the case in Equation (<ref>). Therefore, in HD computing, the noise is not only suppressed by bundling across models from different clients, but also by averaging over the dimensions within the same hypervector. We demonstrate this with an example where we encode a sample from the MNIST dataset, add Gaussian noise, then reconstruct it. Fig. <ref> shows the original image, the noisy image in the sample space, and the reconstructed image for which the noise was added in the hyperdimensional space. Finally, there is a "flying under the radar" principle for federated learning over noisy channels. The analysis in <cit.> shows that since SGD is inherently a noisy process, as long as the channel noise does not dominate the SGD noise during model training, the convergence behavior is not affected. As the noise is immensely suppressed in FHDnn, we can claim that such a principle holds true in our case.

§.§.§ Bit Errors

We use the bit error rate (BER) of conventional coded transmission as a figure of merit for system robustness. It is a measure of how accurately the receiver is able to decode transmitted data. The errors are bit flips in the received digital symbols, and are simply evaluated by the difference (usually the Hamming distance) between the input bitstream of the channel encoder and the output bitstream of the channel decoder. Let 𝐰̂ be the binary coded model parameters that are communicated to the server. For the bit error model, we treat the channel as a binary symmetric channel (BSC), which independently flips each bit in 𝐰̂ with probability p_e (e.g., 0 → 1). The received bitstream output at the server for client k at round t is then as follows:

ŵ̃_t^k = ŵ_t^k ⊕ 𝐞_t^k

where 𝐞_t^k is the binary error vector and ⊕ denotes modulo-2 addition. Given a specific vector 𝐯 of Hamming weight wt(𝐯), the probability that 𝐞_t^k = 𝐯 is given by

ℙ(𝐞_t^k = 𝐯) = p_e^{wt(𝐯)}(1-p_e)^{m - wt(𝐯)}

The bit error probability, p_e, is a function of both the modulation scheme and the channel coding technique (assuming lossless source coding). To conclude the transmission, the corrupted bitstream in (<ref>) is finally reconstructed into a real-valued model, i.e., ŵ̃_t^k → 𝐰̃_t^k.

Bit errors can have a detrimental effect on the training accuracy, especially for CNNs. In the worst case, a single bit error in one client in one round can fail the whole training. In Fig. <ref> we give an example of how much difference a single bit error can make for standard 32-bit floating-point CNN weights. In floating-point notation, a number consists of three parts: a sign bit, an exponent, and a fractional value. In IEEE 754 floating-point representation, the sign bit is the most significant bit, bits 30 to 23 hold the exponent value, and the remaining bits contain the fractional value. The exponent bits represent a power of two ranging from -126 to 127. The fractional bits store a value between 1 and 2, which is multiplied by 2^exp to give the decimal value. Our example shows that one bit error in the exponent can change the weight value from 0.15625 to 5.31× 10^37. The bit errors are contagious because a parameter from one client gets aggregated into the global model, then communicated back to all clients.
Furthermore, errors propagate through all communication rounds because local training or aggregation does not completely change the parameter value, but only apply small decrements. For instance, assume a federated learning scenario with 100 clients and one bit error in a client's model as in the above example. After 10 rounds of training, the CNN weight for the global model will be on the order of ∼5.31×10^37/100^10 = 5.31× 10^17, still completely failing the whole model. Consider ResNet-50, which has 20 million parameters, so training 100 clients even over a channel with p_e = 10^-9 BER results in two errors per round on average, making model failure inevitable. A similar problem exists with HD model parameters, but to a lesser extent because the hypervector encodings use integer representations. Fig. <ref> implies that the parameters can also change significantly for the HD model. Particularly, errors in the most significant bits (MSB) of integer representation leads to higher accuracy drop. We propose a quantizer solution to prevent this.The adopted quantizer design is illustrated in Fig. <ref>. Inspired by the classical quantization methods in communication systems, we leverage scaling up and scaling down operations at the transmitter and the receiver respectively. This can be implemented by the automatic gain control (AGC) module in the wireless circuits. For a class hypervector 𝐜_k,k∈{1,...,K}, the quantizer output Q(𝐜_k) can be obtained via the following steps: * Scale Up: Each dimension in the class hypervector, i.e. c_k,i, is amplified with a scaling factor denoted quantization gain G. We adjust the gain such that the dimension with the largest absolute value attains the maximum value attainable by the integer representation. Thus, G = 2^B-1-1/max(c_k) where B is the bitwidth.* Rounding: The scaled up values are truncated to only retain their integer part.* Scale Down: The receiver output is obtained by scaling down with the same factor G.This way, bit errors are applied to the scaled up values. Intuitively, we limit the impact of the bit error on the models. Remember, from Equation (<ref>), that prediction is realized by a normalized dot-product between the encoded query and class hypervectors. Therefore, the ratio between the original parameter and the received (corrupted) parameter determines the impact of the error on the dot-product. Without our quantizer, this ratio can be very large whereas after scaling up then later down, it is diminished. Fig. <ref> demonstrates this phenomenon. The ratio between the corrupted and the original parameter is ĉ_k,i/c_k,i = 2,071/7≈ 295.9. The ratio decreases to onlyĉ_k,i/c_k,i = 12,005/9,973≈ 1.2 between the scaled versions.§.§.§ Packet LossAt the physical layer of the network stack, errors are observed in the form of additive noise or bit flips directly on the transmitted data. On the other hand, at the network and transport layers, packet losses are introduced. The combination of network and protocol specifications allows us to describe the error characteristics, with which the data transmission process has to cope. The form of allowed errors, either bit errors or packet losses, are decided by the error control mechanism. For the previous error model, we assumed that the bit errors are admitted to propagate through the transport hierarchy. This assumption is valid for a family of protocols used in error resilient applications that can cope with such bit errors <cit.>. 
In some protocols, the reaction of the system to any number of bit errors is to drop the corrupted packets <cit.>. These protocols employ a cyclic redundancy check (CRC) or a checksum that allows the detection of bit errors. In such a case, the communication could assume bit-error free, but packet lossy link. We use the packet error rate (PER) metric as a performance measure, whose expectation is denoted packet error probability p_p. For a packet length of N_p bits, this probability can be expressed as:p_p = 1 - (1-p_e)^N_p The common solution for dealing with packet losses and guarantee successful delivery is to use a reliable transport layer communication protocol, e.g., transmission control protocol (TCP), where various mechanisms including acknowledgment messages, retransmissions, and time-outs are employed. To detect and recover from transmission failures, these mechanisms incur considerable communication overhead. Therefore, for our setup we adopt user datagram protocol (UDP), another widely used transport layer protocol. UDP is unreliable and cannot guarantee packet delivery, but is low-latency and have much less overhead compared to TCP.HDC's information dispersal and holographic representation properties are also beneficial for packet losses. Another direct result of these concepts is obtaining partial information on data from any part of the encoded information. The intuition is that any portion of holographic coded information represents a blurred image of the entire data. Then, each transmitted symbol–packets in our case–contains an encoded image of the entire model. We demonstrate the property of obtaining partial information as an example using a speech recognition dataset <cit.>. In Fig. <ref>a, after training the model, we increasingly remove the dimensions of a certain class hypervector in a random fashion. Then we perform a similarity check to figure out what portion of the original dot-product value is retrieved. The same figure shows that the amount of information retained scales linearly with number of remaining dimensions. Fig. <ref>b further clarifies our observation. We compare the dot-product values across all classes and find the class hypervector with the highest similarity. Only the relative dot-product values are important for classification. So, it is enough to have the highest dot-product value for the correct class, which holds true with ∼90% accuracy even when 80% of the hypervector dimensions are removed. §.§ Strategies for Improving Communication Efficiency The simplest implementation of FHDnn requires that clients send a full model back to the server in each round. Even though HDC models are much smaller than DNN models, it can still put a burden on communication. The structure and the characteristics of class hypervectors allow us to leverage certain techniques for improving communication efficiency of FHDnn. We propose three approaches: i) binarized differential transmission, ii) subsampling, and iii) sparsification & compression.§.§.§ Binarized Differential Transmission At the beginning of each round, the central server broadcasts the latest global HD model, 𝐂_t, to all clients. Then, before performing local updates, each client makes a copy of this global model. Instead of sending the local updated models 𝐂_t+1^k at the aggregation step, the clients send the difference between the previous model and the updated model, i.e., 𝐂_t+1^k- 𝐂_t. We call this operation differential transmission. 
As shown in (<ref>), we binarize the difference to reduce the communication cost by 32x, going from 32-bit floating point to 1-bit binary transmission: Δ𝐂_bin^k = sign(𝐂_t+1^k - 𝐂_t), ∀ k. The central server receives and aggregates the differences, then adds them to the previous global model as: 𝐂_t+1 = 𝐂_t + ∑_k=1^NΔ𝐂_bin^k. This global model is broadcast back to the clients. Such a binarization framework is not possible for the original federated bundling approach where clients communicate their full models. Binarizing the models themselves instead of the `difference' results in unstable training behavior. Therefore, we utilize binarized differential transmission, whose stability is supported by studies of similar techniques. In <cit.>, it is theoretically shown that transmitting just the sign of each minibatch stochastic gradient can achieve a full-precision SGD-level convergence rate in distributed optimization. §.§.§ Subsampling In this approach, the clients only send a subsample of their local model to the central server. Each client forms and communicates a subsample matrix Ĉ_t+1^k, which is formed from a random subset of the values of 𝐂_t+1^k. The server then receives and averages the subsampled client models, producing the global update 𝐂_t+1 as: Ĉ_t+1 = 1/N∑_k=1^NĈ^k_t+1. The subsample selection is completely randomized and independent for each client in each round. Therefore, the average of the sampled models at the server is an unbiased estimator of their true average, i.e., 𝔼Ĉ_t = 𝐂_t. We can achieve the desired improvement in communication by changing the subsampling rate. For example, if we subsample 10% of the values of 𝐂_t+1^k, the communication cost is reduced by 10x. §.§.§ Sparsification & Compression The goal of this approach is to drop the elements (class hypervector dimensions) of each individual class that have the least impact on model performance. As discussed in Section <ref>, given a query hypervector, inference is done by comparing it with all class hypervectors to find the one with the highest similarity. The similarity is typically taken to be the cosine similarity, calculated as a normalized dot-product between the query hypervector and the class hypervectors. The elements of a query hypervector are input dependent and change from one input to another. Due to the randomness introduced by HDC encoding, the query hypervectors, on average, have a uniform distribution of values across all dimensions. Under this assumption, we need to find and drop the elements of the class hypervectors that have minimal impact on the cosine similarity. Indeed, the elements with the smallest absolute values are the best candidates, as they contribute the least to the dot-product computation of the cosine similarity. We find the elements of each class hypervector with the smallest absolute value and set those elements to zero. For example, for the i^th class hypervector, we select the S elements with the minimum absolute value as follows: min{c_d^i,...,c_2^i,c_1^i}_S. To make a model with S% sparsity, we make S/100 × d elements of each class hypervector zero. Then, we employ the Compressed Sparse Column (CSC) format <cit.> to compress the sparse model. CSC stores only the non-zero data values and the number of zero elements between two consecutive non-zero elements. § FHDNN RESULTS We demonstrate through systematic experiments the performance of FHDnn under various settings.
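As a concrete reference for the communication-efficiency strategies evaluated below, the three schemes from the previous section can be sketched as follows. This is a minimal NumPy illustration under function names of our own choosing, not the authors' implementation; the final CSC compression step is only indicated in a comment. With rate=0.1 or sparsity=0.9, the sketch corresponds to the 10x reduction examples given above.

import numpy as np

def binarized_difference(c_new, c_old):
    """Client side: 1-bit sign of the local model change (Delta C_bin in the text)."""
    return np.sign(c_new - c_old).astype(np.int8)

def aggregate_binary_differences(c_global, binary_diffs):
    """Server side: add the summed sign updates to the previous global model."""
    return c_global + np.sum(binary_diffs, axis=0)

def subsample_model(c, rate, rng):
    """Client side: send a random subset of entries (values plus their indices)."""
    idx = rng.choice(c.size, size=int(rate * c.size), replace=False)
    return idx, c.flat[idx]

def sparsify_model(c, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of each class hypervector (row)."""
    c = c.copy()
    k = int(sparsity * c.shape[-1])
    drop = np.argpartition(np.abs(c), k, axis=-1)[..., :k]   # indices of the k smallest |values|
    np.put_along_axis(c, drop, 0.0, axis=-1)
    return c   # a sparse format such as CSC then stores only the non-zero values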
We first briefly discuss the datasets and setup for evaluation, and present our results for different data distributions under the reliable communication scenario. We then compare the resource usage of FHDnn against CNNs. The strategies for improving communication efficiency are also evaluated in this section. Lastly, we analyze FHDnn under three different unreliable network settings: packet loss, noise injection, and bit errors. §.§ Experimental Setup We evaluate FHDnn on four different real-world datasets: MNIST <cit.>, FashionMNIST <cit.>, CIFAR10 <cit.>, and Caltech101 <cit.>. For the MNIST dataset, we use a CNN with two 5x5 convolution layers, two fully connected layers with 320 and 50 units and ReLU activation, and a final output layer with softmax. The first convolution layer has 10 channels while the second one has 20 channels, and both are followed by 2x2 max pooling. For the CIFAR10 and FashionMNIST datasets, the well-known ResNet-18 classifier with batch normalization proposed in <cit.> is used. We run our experiments on Raspberry Pi and NVIDIA Jetson devices for the performance evaluations. All models are implemented in Python using the PyTorch framework. We consider an IoT network with N=100 clients and one server. The simulations were run for 100 rounds of communication each in order to keep our experiments tractable. We first tune the hyperparameters of both FHDnn and the CNNs, and analyze their performance by experimenting with three key parameters: E, the number of local training epochs; B, the local batch size; and C, the fraction of clients participating in each round. We select the best parameters for ResNet and use the same for FHDnn in all experiments in order to allow for a direct comparison. We study two ways of partitioning the datasets over clients: IID, where the data is shuffled and evenly partitioned across all clients, and Non-IID, where we first sort the data by label, divide it into a number of shards of a particular size, and assign the shards to the clients. We test FHDnn on two different types of edge devices: Raspberry Pi 4 (RPi) and NVIDIA Jetson. The RPi features a Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC, running at 1.5GHz, and 4GB RAM. The NVIDIA Jetson uses a quad-core ARM Cortex-A57 CPU, a 128-core NVIDIA Maxwell GPU, and 4GB memory. §.§ FHDnn Accuracy Results Fig. <ref> compares the test accuracy of FHDnn with ResNet on the MNIST, CIFAR-10, FashionMNIST, and Caltech101 datasets after 100 rounds of federated training. We observe that FHDnn achieves accuracy comparable to the state of the art, even though it trains a much smaller and less complex model. We depict how test accuracy changes over communication rounds for CIFAR-10 in Fig. <ref>. The plot illustrates the smoothed conditional mean of test accuracy across all different hyperparameters (E, B, C) for IID and Non-IID distributions. FHDnn reaches an accuracy of 82% in less than 25 rounds of communication, whereas ResNet takes 75 rounds on average for both IID and Non-IID data distributions. Moreover, the hyperparameters do not have a big influence on FHDnn, as seen by the narrow spread (gray region) in Fig. <ref>. Note that the local batch size B does not impact FHDnn at all due to the linear and additive nature of its training methodology. This allows us to use batch sizes as high as the device constraints permit, allowing for faster processing and going over the dataset in fewer rounds. On the other hand, the batch size B affects the convergence of CNNs.
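The IID and Non-IID partitioning schemes described in the experimental setup can be sketched as follows. This is a minimal illustration; the number of shards per client is a placeholder of our own rather than a value stated in the text.

import numpy as np

def iid_partition(num_samples, num_clients=100, seed=0):
    """IID: shuffle all sample indices and split them evenly across clients."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(num_samples), num_clients)

def non_iid_partition(labels, num_clients=100, shards_per_client=2, seed=0):
    """Non-IID: sort by label, cut into shards, and hand each client a few shards."""
    rng = np.random.default_rng(seed)
    sorted_idx = np.argsort(labels)                                  # sort the data by label
    shards = np.array_split(sorted_idx, num_clients * shards_per_client)
    shard_order = rng.permutation(len(shards))
    return [
        np.concatenate([shards[s] for s in
                        shard_order[i * shards_per_client:(i + 1) * shards_per_client]])
        for i in range(num_clients)
    ]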
§.§ FHDnn Performance and Energy Consumption Local training is computationally expensive for constrained IoT devices, which was one of the main drivers for centralized learning over many years. In particular, CNN training involves complicated architectures and a backpropagation operation that is very compute intensive. In addition, this has to be repeated for many communication rounds. HD, on the contrary, is lightweight, low-power, and fast. Table <ref> quantitatively compares the computation time and energy consumption of FHDnn and ResNet local training on two different edge device platforms. FHDnn is 35% faster and more energy efficient than ResNet on the Raspberry Pi and 80% faster and more energy efficient on the Nvidia Jetson. §.§ FHDnn in Unreliable Communication In this section, we analyze the performance of FHDnn and ResNet under unreliable network conditions as described in Section <ref>. We obtained similar results for FedHDC, which is why we focus on FHDnn results in this section. Fig. <ref> shows the performance of the models under packet loss, Gaussian noise, and bit errors. To maintain a direct comparison between ResNet and FHDnn, we use the same hyperparameters for both models and all experiments. We set E = 2, C = 0.2, B = 10 and evaluate the performance on the CIFAR10 dataset. From our experiments, we observe that even with fewer clients at C = 0.1, and for other datasets, the performance of FHDnn is better than ResNet. Here, we present only the results for the settings mentioned earlier to keep it concise. Packet Loss As shown in Fig. <ref>a, if the packet loss rate is extremely small, e.g., below 10^-2, ResNet has very minimal accuracy loss. However, for more realistic packet loss rates such as 20%, the CNN model fails to converge. When there is packet loss, the central server replaces the model weights from the lost packets with zero values. For example, a 20% packet loss rate implies 20% of the weights are zero. Moreover, this loss is cumulative as the models are averaged during each round of communication, thereby giving the CNNs no chance of recovery. In contrast, FHDnn is highly robust to packet loss with almost no loss in accuracy. For FHDnn, since the data is distributed uniformly across the entire hypervector, a small amount of missing data is tolerable. However, since CNNs have a more structured representation of data with interconnections between neurons, the loss of weights affects the performance of subsequent layers, which is detrimental to performance. Gaussian Noise We experiment with different Signal-to-Noise Ratios (SNR) to simulate noisy links, illustrated in Fig. <ref>b. Even for higher SNRs such as 25dB, the accuracy of ResNet drops by 8% under the Non-IID data distribution. However, it is more likely that IoT networks operating over low-power wireless links will incur lower SNRs. For such scenarios, FHDnn outperforms ResNet, as the latter fails to perform better than random classification. ResNet performance starts to completely deteriorate around 10dB SNR. The accuracy of FHDnn only reduces by 3%, even at -10dB SNR, which is negligible compared to ResNet. Bit Errors Fig. <ref>c shows that CNNs completely fail when bit errors are present. ResNet achieves the equivalent of random classification accuracy even for small bit errors. Since the weights of CNNs are floating point numbers, a single bit flip can significantly change the value of the weights. This, compounded with federated averaging, hinders convergence.
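To make the last point concrete, the effect of a single bit flip on a 32-bit floating-point weight can be reproduced with a few lines of NumPy; the weight value below is an arbitrary example of our own, not one taken from the experiments.

import numpy as np

w = np.array([0.0145], dtype=np.float32)     # a small CNN weight (arbitrary example)
bits = w.view(np.uint32)
bits ^= np.uint32(1 << 30)                   # flip the most significant exponent bit
print(w[0])                                  # the weight explodes to the order of 10^36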
We observe FHDnn incurs an accuracy loss as well, achieving 72% for IID and 69% for Non-IID data. FHDnn uses integer representations which is again susceptible to large changes from bit errors to some extent. However, our quantizer method with scaling described in Section <ref> assuages the remaining error. §.§ FHDnn Communication EfficiencySo far we have benchmarked the accuracy of FHDnn for various network conditions. In the following, we demonstrate the communication efficiency of FHDnn compared to ResNet. We compare the amount of data transmitted for federated learning to reach a target accuracy of 80%. The amount of data transmitted by one client is calculated using the formula data_transmitted = n_rounds× update_size, where n_rounds is the number of rounds required for convergence by each model. The update size for ResNet with 11M parameters is 22MB while that of FHDnn is 1MB making it 22× smaller. From Section <ref> we know that FHDnn converges 3× faster than ResNet bringing its total communication cost to 25MB. ResNet on the other hand uses up 1.65GB of data to reach the target accuracy.In Fig. <ref>, we illustrated that FHDnn can converge to the optimal accuracy in much fewer communication rounds. However, this improvement is even higher in terms of the actual clock time of training. We assume that federated learning takes places over LTE networks where SNR is 5dB for the wireless channel. Each client occupies 1 LTE frame of 5MHz bandwidth and duration 10ms in a time division duplexing manner. For error-free communication, the traditional FL system using ResNet can support up to 1.6 Mbits/sec data rate, whereas we admit errors and communicate at a rate of 5.0 Mbits/sec. Under this setting and for the same experiment as in Section 4.2, FHDnn converges in 1.1 hours for CIFAR IID and 3.3 hours for CIFAR Non-IID on average. On the other hand, ResNet converges in 374.3 hours for both CIFAR IID and CIFAR Non-IID on average. §.§ FHDnn: Effect of Communication Efficiency StrategiesEven though FHDnn is much smaller than CNN models and the training converges faster, it's communication efficiency can be further improved. We use the MNIST dataset and the parameters from Section <ref> for our experiments.Table <ref> shows the final accuracy after 100 rounds of training and the improvement in communication cost for the respective approaches.The differential transmission approach binarizes the model difference, going from 32-bit floating point to 1-bit binary transmission to reduce the communication cost by 32x. For subsampling and sparsification & compression approaches, the communication improvement depends on the percentage of the model values that are subsampled or sparsified. For example, if we subsample 10% of the model, than the communication cost is reduced by 10x. Or, if we sparsify the model by 90%, the reduction is 10x again. We present the final accuracy and improvement in communication cost at different subsampling and sparsification percentages.§ CONCLUSIONIn this paper we introduced methods to implement federated learning using hyperdimensional computing to enable communication efficient and robust federated learning for IoT networks. We first formalize the theoretical aspects of hyperdimensional computing to perform federated learning, presented as our first contribution called FedHD. To combat the inability of HDC to extract relevant features which consequently leads to poor performance of FedHD on large image classification, we propose FHDnn. 
FHDnn complements FedHD with a fixed contrastive learning feature extractor to compute meaningful representations of data that helps the HDC model better classify images. We described the federated hyperdimensional computing architecture, described the training methodology and evaluated FedHDC and FHDnn through numerous experiments in both reliable and unreliable communication settings. The experiment results indicate that FHDnn converges 3× faster, reduces communication costs by 66×, local client compute and energy consumption by  1.5 - 6× compared to CNNs. It is robust to bit errors, noise, and packet loss. Finally, we also showed that the communication efficiency of FedHDC and FHDnn can be further improved up to 32× with a minimal loss in accuracy. § ACKNOWLEDGEMENTSThis work was supported in part by CRISP, PRISM and CoCoSys, centers in JUMP1.0 and 2.0 (SRC programs sponsored by DARPA), SRC Global Research Collaboration grants (GRC TASK 3021.001), and NSF grants #2003279, #1911095, #1826967, #2100237, and #2112167. § PROOF OF THEOREM 1 §.§ Additional NotationIn our original analysis in Section <ref>, we used the variable t to denote the communication rounds. Here, with a slight abuse of notation, we change the granularity of the time steps to be with respect to the SGD iterations instead of the communication rounds. Let 𝐰_t^k be the model maintained on the k-th device at the t-th step, and 𝐰_t be the global model. The clients communicate after E local epochs for global aggregation. Let ℐ_E be the set of those aggregation steps, i.e., ℐ_E = {nE| n=1,2,...}, so the client models are aggregated if t+1 ∈ℐ_E. We introduce an additional variable 𝐯_t+1^k to represent the result of the SGD steps where no communication occurs, similar to <cit.>. The following equation describes the local updates of the clients: 𝐯_t+1^k = 𝐰_t^k- η_t∇ F_k(𝐰_t^k, ξ_t^k) If t+1 ∉ℐ_E, we have 𝐰_t+1^k = 𝐯_t+1^k because there is no communication and aggregation of models. On the other hand, if t+1 ∈ℐ_E, the randomly selected clients k ∈𝒮_t+1 communicate their models which are then aggregated and averaged at the server. This is summarized by the equation below.𝐰_t+1^k = {[ 𝐯_t+1^k, ift+1 ∉ℐ_E; 1/|𝒮_t+1|∑_k ∈𝒮_t+1𝐯_t+1^k,ift+1 ∈ℐ_E ]. The variable 𝐰_t+1^k can be interpreted as the model obtained directly after the communication steps. We define two virtual sequences 𝐯_t+1 = ∑_k=1^Np_k𝐯_t^kand 𝐰_t+1 = ∑_k=1^Np_k𝐰_t^k. Notice that if we apply a single step of SGD to 𝐰, we get 𝐯_t+1. These sequences are denoted virtual because both are not available when t+1 ∉ℐ_E and we can only access 𝐰_t+1 when t+1 ∈ℐ_E. We also define 𝐠_t = ∑_k=1^Np_k∇ F_k(𝐰_t^k) and 𝐠_t = ∑_k=1^Np_k∇ F_k(𝐰_t^k, ξ_t^k) for convenience of notation. Therefore, 𝐯_t+1 = 𝐰_t - η_t𝐠_t and 𝔼𝐠_t = 𝐠_t.There are two sources of randomness in the following analysis. One results from the stochastic gradients and the other is from the random sampling of devices. To distinguish them, we use the notation 𝔼_𝒮_t(·) when we take expectation over the randomness of stochastic gradients. §.§ LemmasWe present the necessary lemmas that we use in the proof of Theorem 1. These lemmas are derived and established in <cit.>, so we defer their proofs.Lemma 1 (Result of One Step SGD). Assume Properties 1 and 2 hold. If η_t≤1/4L, we have𝔼𝐯_t+1 - 𝐰^*^2≤ (1- η_tμ)𝔼𝐰_t - 𝐰^*^2+ η_t^2𝔼𝐠_t - 𝐠_t^2 + 6L η^2_tΓ + 2𝔼[∑_k=1^Np_k𝐰_t - 𝐰^k_t^2].Lemma 2 (Bounding the Variance). Assume Property 3 holds. It follows that𝔼𝐠_t - 𝐠_t^2≤∑_k=1^N p_k^2σ_k^2. Lemma 3 (Bounding the Divergence of 𝐰_t^k). 
Assume Property 4 holds, η_t is non-increasing and η_t≤ 2η_t+E for all t≤ 0. It follows that𝔼[ 1/N∑_k=1^N𝐰_t - 𝐰^k_t^2] ≤ 4 η_t^2(E-1)^2G^2.Lemma 4 (Unbiased Sampling). If t+1 ∈ℐ_E, we have𝔼_𝒮_t [𝐰_t+1] = 𝐯_t+1Lemma 5 (Bounding the Variance of 𝐰_t^k). For t+1 ∈ℐ_E, assume that η_t≤ 2η_t+E for all t≥0. We then have 𝔼_𝒮_t𝐯_t+1 - 𝐰_t+1^2≤N-K/N-14/Kη_t^2E^2G^2 §.§ Theorem 1We have 𝐰_t+1 = 𝐯_t+1 whether t+1 ∈ℐ_E or t+1 ∉ℐ_E. Then, we take the expectation over the randomness of stochastic gradient and use Lemma 1, Lemma 2, and Lemma 3 to get 𝔼𝐰_t+1 - 𝐰^*^2 = 𝔼𝐯_t+1 - 𝐰^*^2≤ (1- η_tμ)𝔼𝐰_t - 𝐰^*^2 + η_t^2𝔼𝐠_t - 𝐠_t^2+6L η^2_tΓ + 2𝔼[∑_k=1^Np_k𝐰_t - 𝐰^k_t^2] ≤ (1- η_tμ)𝔼𝐰_t - 𝐰^*^2 + η_t^2[∑_k=1^N p_k^2σ_k^2 + 6LΓ + 8(E-1)^2G^2]. If t+1 ∈ℐ_E, note that𝐰_t+1 - 𝐰^*^2= 𝐰_t+1 - 𝐯_t+1 + 𝐯_t+1 - 𝐰^*^2 = 𝐰_t+1 - 𝐯_t+1^2_A_1 + 𝐯_t+1 - 𝐰^*^2_A_2 + ⟨𝐰_t+1 - 𝐯_t+1, 𝐯_t+1 - 𝐰^*⟩_A_3 .When the expectation is taken over 𝒮_t+1, the term A_3 vanishes because 𝔼_𝒮_t+1[𝐰_t+1 - 𝐯_t+1]=0, that is, 𝐰_t+1 is unbiased. If t+1 ∉ℐ_E, A_1 is vanished as 𝐰_t+1 = 𝐯_t+1. A_2 can be bounded using Lemmas 1 to 3 and Lemma 5. It follows that𝔼𝐰_t+1 - 𝐰^*^2≤ (1- η_tμ)𝔼𝐰_t - 𝐰^*^2 + η_t^2[∑_k=1^N p_k^2σ_k^2 + 6LΓ + 8(E-1)^2G^2].If t+1 ∈ℐ_E, the term A_1 can be additionally bounded using Lemma 5, then 𝔼𝐰_t+1 - 𝐯_t+1^2 + 𝔼𝐯_t+1 - 𝐰^*^2≤ (1- η_tμ)𝔼𝐰_t - 𝐰^*^2 + η_t^2[∑_k=1^N p_k^2σ_k^2 + 6LΓ + 8(E-1)^2G^2 + N-K/N-14/KE^2G^2]. Now, let Δ_t = 𝐰_t+1 - 𝐰^*^2 for notational convenience. Also, letB = ∑_k=1^N p_k^2σ_k^2 + 6LΓ + 8(E-1)^2G^2 + N-K/N-14/KE^2G^2.We use a diminishing learning rate with η_t = β/t+γ for some β≥1/μ and γ > 0 such that η_1≤min{1/μ,1/4L} = 1/4L and η_t≤ 2η_t+E. Now, we prove by induction thatΔ_t≤v/γ + twherev = max{β^2B/βμ -1, (γ+1)Δ_0}.The definition of v ensures that it holds for t =0. If we assume the result holds for some t>0, it followsΔ_t+1 ≤(1-η_tμ)Δ_t + η_t^2B = ( 1 - βμ/t + γ)v/t+γ + β^2B/(t+γ)^2 = t+γ-1/(t+γ)^2v + [β^2B/(t+γ)^2 - βμ -1/(t+γ)^2v ] ≤v/t+γ +1.Then, by the strong convexity of F(·),𝔼[F(𝐰_t)] - F^*≤L/2Δ_t≤L/2v/γ+t.If we choose the parameters as β = 2/μ, γ = max{8L/μ-1,E} and define κ = L/μ, then η_t = 2/μ1/γ+t. Using the fact that max{a,b}≤ a+b, we havev≤β^2B/βμ -1+(γ+1)Δ_0= 4B/μ^2 + (γ+1)Δ_0≤ 4B/μ^2 + (8L/μ-1+E+1 )Δ_0 =4B/μ^2 + (8L/μ+E)𝐰_1 - 𝐰^*^2.Therefore,𝔼[F(𝐰_t)] - F^* ≤L/2(γ+t)[4B/μ^2 + (8L/μ+E)𝐰_1 - 𝐰^*^2] = 2κ/γ+t[ B/μ + (2L + Eμ/4) 𝐰_1 - 𝐰^*^2]. ieeetr
http://arxiv.org/abs/2312.15966v1
{ "authors": [ "Kazim Ergun", "Rishikanth Chandrasekaran", "Tajana Rosing" ], "categories": [ "cs.LG", "cs.DC" ], "primary_category": "cs.LG", "published": "20231226092419", "title": "Federated Hyperdimensional Computing" }
[ MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks Jingyao Licuhk Pengguang Chensm Jiaya Jiacuhk,sm cuhkDepartment of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China smSmartMore, Shenzhen, ChinaJiaya [email protected] 0.3in ]Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness in tackling intricate questions. To overcome this limitation, we present Modular-of-Thought Coder (MoTCoder). We introduce a pioneering framework for MoT instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules.Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions,leading to substantial relative pass@1 improvements ofon APPS andon CodeContests. Our codes are available at https://github.com/dvlab-research/MoTCoderhttps://github.com/dvlab-research/MoTCoder. § INTRODUCTION It has been a longstanding objective in the field of artificial intelligence to develop systems capable of generating executable and functionally correct computer programs to address intricate problems <cit.>.Recently, there has been considerable attention on Large Language Models (LLMs) <cit.>, which showcases remarkable success.These models leverage extensive pre-training and undergo further fine-tuning with detailed instruction data <cit.>, leading to SOTA performance across diverse tasks.Initially designed for natural languages, these models have been expanded through the instruction fine-tuning for code modeling capabilities <cit.>.This extension has led to impressive performance in generating code from natural language problem descriptions <cit.>.However, when confronted with highly intricate coding challenges, the current SOTA models still struggle to match the seasoned developers <cit.>, mainly due to their overly simplistic generation approach. Many previous LLMs produce the code solution as a single monolithic block, instead of breaking down of the task into logical sub-tasks. However, tackling highly complex problems with a single code module is practically unattainable. In contrast, adept developers devise modularized solutions, organizing them into high-level logical sub-modules. Subsequently, they complete and combine these components to efficiently enhance their final solutions.In our Modular-of-Thought Coder (MoTCoder), our objective is to enhance the modularization capabilities of coding Large Language Models (LLMs) by introducing modular-of-thought prompting. This approach guides LLMs to break down their solutions into modular segments, where each segment represents an abstract function dedicated to a high-level logical sub-task. To train the model to adhere to the MoT prompt, we generate instructional data using a process termed MoT Code Instruction Evolution.As depicted in <ref>, our MoT code instruction evolution framework comprises two key stages: In the initial stage, we tailor the evolutionary prompt process. LLMs are instructed to outline necessary sub-modules, generating only their function headers and docstrings that describe their intended usage. 
Subsequently, the instruction guides the model in implementing these modules and eventually combining them into a comprehensive final solution. In the second stage, we fine-tune the LLM using our created code instruction-following training set, resulting in our MoTCoder model. Our experiments reveal that MoTCoder significantly enhances LLM performance, establishing new SOTA results on challenging code tasks in APPS <cit.> and CodeContests  <cit.>. Specifically, MoTCoder elevates the pass@1 performance by overon APPS andon CodeContests. § RELATED WORKS §.§ Large Language ModelsGeneral LLMs. In recent times, Large Language Models (LLMs) have exhibited remarkable prowess across a wide array of tasks. Leading technology companies have made significant advancements in developing highly proficient closed-source LLMs, including OpenAI's GPT3 and GPT4 <cit.>, Google's PaLM <cit.>, Bard[<https://bard.google.com/>], DeepMind's Chinchilla <cit.>, and Gopher <cit.>, as well as Anthropic's Claude[<https://www.anthropic.com/index/introducing-claude>]. The AI community has also observed the release of several open-source LLMs, where model weights are made publicly available. EleutherAI has contributed GPT-NeoX-20B <cit.> and GPT-J-6B <cit.>. Google has released UL2-20B <cit.>. Tsinghua University has introduced GLM-130B <cit.>. Meta has released OPT <cit.> and LLaMA <cit.>. Coding LLMs. Recent research has introduced a significant number of LLMs tailored for code-related tasks to address the challenges of code understanding and generation. Closed-source models include OpenAI's Codex <cit.> and Code-Davinci <cit.>. Google has proposed PaLM-Coder <cit.>. These models excel on popular code completion benchmarks such as HumanEval <cit.> and MBPP <cit.>. On the open-source front, Salesforce has introduced CodeGen <cit.>, CodeT5 <cit.>, and CodeT5+ <cit.>. Tsinghua University has contributed CodeGeeX <cit.>, and the BigCode Project has developed StarCoder <cit.>. These models have demonstrated significant advancements in code-related tasks. §.§ Instruction Fine-TuningGeneral Instruction Tuning. In its early stages, the core aim of instruction fine-tuning was to amplify the cross-task generalization capabilities of Language Models (LMs). This was accomplished by subjecting LMs to fine-tuning using an extensive corpus of public Natural Language Processing (NLP) tasks. Pioneering this approach, T5 <cit.> underwent training on a diverse set of supervised text-to-text tasks. Subsequent endeavors like FLAN <cit.>, ExT5 <cit.>, T0 <cit.>, and UnifiedQA <cit.> broadened the spectrum of tasks, fortifying the overall generalization capability of LMs. Noteworthy contributions from ZeroPrompt <cit.> and FLAN-T5 <cit.> pushed boundaries by incorporating thousands of tasks into their training pipelines. OpenAI has taken an alternative route by enlisting human annotators to contribute an extensive corpus of human instructions, encompassing diverse formats and a broad spectrum of task types. Building upon this dataset, OpenAI trained its GPT-3 <cit.> model to create InstructGPT <cit.>, which better aligns with users' inputs. This developmental trajectory has given rise to notable works such as ChatGPT. In the open-source realm, Alpaca <cit.> adopts the self-instruct method <cit.>, leveraging ChatGPT to generate data for training. Vicuna <cit.> utilizes user-shared conversations collected from ShareGPT.com to train its models. 
Introducing the Evol-Instruct method, WizardLM <cit.> involves evolving existing instruction data to generate more intricate and diverse datasets. In contrast to these generalized instruction fine-tuning approaches, WizardCoder <cit.> applies the Evol-Instruct method specifically in the domain of Code LLMs. Chain-of-Thought Instruction Tuning. While large language models have demonstrated impressive capabilities in handling straightforward programming tasks, their performance tends to diminish when faced with more challenging programming problems. A notable limitation is observed in conventional models, where solutions are often generated as monolithic code blocks, limiting their effectiveness in addressing intricate questions. Drawing inspiration from the success of chain-of-thought (CoT) prompting in enhancing performance in reasoning-based tasks <cit.>, we introduce a pioneering framework for Modular-of-Thought (MoT) instruction tuning. MoT is designed to facilitate the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly enhances both the modularity and correctness of the generated solutions.§ METHODS The depiction of our MoTCoder training framework is presented in  <ref>. In the initial stage, we introduce the modular-of-thought instructions evolution and evolution examination in  <ref>. Following this, we elaborate on the MoT instruction tuning and testing of the model in  <ref>. §.§ Modular-of-Thought Instruction Evolution In this section, we propose the modular-of-thought instruct evolution approach. We initiate the evolution process with a given initial instruction dataset D^(0)=(I_n^(0),Ô_n^(0))_1≤ n≤ N, where I_n^(0) represents the n-th instruction in D^(0), O_n^(0) is the corresponding output, and N is the number of samples in D^(0). In each evolution step, we upgrade all I^(k) in D^(k) to I^(k+1) by employing an LLM instruction evolution prompt. Subsequently, we use the LLM to generate corresponding outputs Ô^(k+1) for the newly evolved I^(k+1). Thus, we obtain an evolved instruction dataset {D^(0)… D^(k+1)}. By iteratively conducting K evolutions, we sequentially acquire the final MoT instruction dataset {D^(0)… D^(K)}.Our process for the evolution of modular-of-thought instructions comprises two key stages: MoT instructions evolution, and evolution examination, involving the filtration of instructions that do not successfully evolve. We commence by populating the instruction pool with the provided initial instruction dataset D^(0). In each subsequent evolution epoch, normal instructions from the preceding epoch are retrieved from the pool. Subsequently, we employ the instruction evolver to advance each retrieved instruction and the instruction eliminator to evaluate whether the evolution is unsuccessful. Instructions that evolve successfully are integrated into the final pool, while unsuccessful ones are returned as they are, with the anticipation of achieving successful enhancement in the following evolution epoch.Normal Instruction. In general, a code sequence generated by a language model θ through the autoregressive sampling of tokens ô_t is from the parameterized conditional distribution:ô_t ∼ p_θ (.| ô_1:t-1, I),where I represents the input instruction and ô_t is the t-th of the flattened output sequence. Modular-of-Thought Instruction. 
Referring to the recent works in code revision <cit.>, our proposed methodology aims to evolute normal instructions into a sequential code generation process by leading the models through a two-step procedure. * Sub-modules. Initially, the models are instructed to outline the required sub-modules, generating only their function headers and docstrings describing their intended usage.Ŝ_̂î∼ p_θ (.| Ŝ_1:i-1, I),where Ŝ_i represents the i-th sub-module outlined by the model and I represents the input instruction. * Final solution. The subsequent instruction guides the model to implement these modules and eventually combine them into a comprehensive final solution. ô_t ∼ p_θ (.| ô_1:t-1, {Ŝ_i}, I),where ô_t is the t-th of the flattened output sequence.Modular-of-Thought Instruction Illustration. The instruction is supplemented with a one-shot example, serving to prompt the model to adhere to the MoT instruction generation strategy. An illustration of the instruction prompt is presented in <ref> and further detailed in <ref>. The instruction generated by MoTCoder encourages the model to decompose a program into sub-modules. This mirrors the methodology commonly employed by developers when addressing intricate coding tasks, where they systematically break down solutions into modular components. Evolution Examination.We categorize the following scenarios as instances of instruction evolution failure:* The evolved instruction deviates from the modular-of-thought generation strategy, failing to follow the approach of initially generating sub-modules and then generating the main code.* During the first phase, namely the sub-module generation stage, either no sub-modules are generated, or global code is produced. * In the second phase, or the main code generation stage, there is either a lack of main code generation or the production of multiple main code segments. Following the evolution process, we created a modular-of-thought instruction dataset comprising around 24k instructions. The dataset exhibits a distribution ratio of 45:55 between MoT and normal instruction data.§.§ Modular-of-Thought Training and Inference We present the MoT prompt training methodology through the introduction of a modular training prompt, as illustrated in <ref>. The instructions first specify sub-modules with their function headers and docstrings detailing their intended usage. The ultimate solution involves implementing these modules and subsequently integrating them into a final solution. The {input} in prompts is the problem description and the model is trained to complete the {response} in prompts. We conducted instruction tuning using the base model WizardCoder <cit.> on our MoT instruction dataset for three epochs. This process took approximately 28 hours using 8 A800 GPUs. § EXPERIMENTS We showcase the effectiveness of our MoTCoder in addressing challenging code generation tasks, specifically focusing on the following prominent benchmarks:APPS <cit.> is a description-to-code generation benchmark from competitive programming platforms Codewars[<https://www.codewars.com/>], AtCoder[<https://atcoder.jp/>], Kattis[<https://open.kattis.com/>], Codeforces[<https://codeforces.com/>], etc. Building upon prior research <cit.>, we conducted an assessment of the models using the passing rate metric pass@k. This metric is defined as the proportion of problems successfully solved by employing k generated programs for each problem. 
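For reference, pass@k as defined above can be computed directly from per-sample test outcomes; the function name and the toy data below are our own, and this plain definition is used rather than the unbiased estimator adopted in some other work.

def pass_at_k(problem_results, k):
    """Fraction of problems solved by at least one of the first k generated programs.

    problem_results[i][j] is True iff the j-th program generated for problem i
    passes all of that problem's test cases.
    """
    solved = sum(any(results[:k]) for results in problem_results)
    return solved / len(problem_results)

# Toy example: two problems, five sampled programs each.
results = [[False, True, False, False, False],
           [False, False, False, False, False]]
print(pass_at_k(results, k=1))   # 0.0
print(pass_at_k(results, k=5))   # 0.5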
CodeContests <cit.> is a competitive programming dataset sourced from Aizu[<https://judge.u-aizu.ac.jp>], AtCoder[<https://atcoder.jp>], CodeChef[<https://www.codechef.com>], Codeforces, HackerEarth[<https://www.hackerearth.com>], etc. Building upon prior research <cit.>, we conducted an assessment of the models using the passing rate metric pass@k.§.§ Examination of Modular AbilityIn this section, we evaluate the modular capabilities of our model through both normal and modular inference, utilizing prompts as illustrated in <ref>. For modular prompts, we instruct the model to utilize provided submodules to complete the final code. Conventional models lack this ability to leverage submodules, making the logical and modular aspects more challenging for the model. Results are in <ref>. On one hand, our MoTCoder demonstrates a significant performance boost compared to models with the same parameters trained with traditional approaches, achieving a remarkableimprovement on APPs. On the other hand, when employing modular reasoning on models that have not undergone MoT training, there is a considerable decrease in performance. In contrast, our outcomes are minimally affected. This confirms the superior modular capabilities of our approach, which is crucial in large-scale code development environments, where developers often leverage and integrate code sub-modules to construct code frameworks, rather than crafting each line of code from scratch. MoTCoder enables the construction of complete code using only a standardized description of functions, unlocking the potential to generate extensive and complex code frameworks.The results of MoTCoder also demonstrates that the absence of submodule inputs has a negligible impact on the results. This implies that our method autonomously learns to design submodules without requiring additional hints. In subsequent experiments, we use normal prompts during the inference process for MoTCoder and other methods to ensure a fair comparison.§.§ Ablation ExperimentsIn this section, we perform ablation experiments to examine the effectiveness of our MoTCoder training framework. §.§.§ Modular-of-Thought Instruction-Tuning We propose two instruction-tuning strategies: * One-step strategy. The models directly generate the necessary modules and incorporate them into a complete solution.* Two-step strategy. Initially, the instructions outline the essential sub-modules, generating their function headers and docstrings that describe their intended purpose. Subsequently, the instruction guides the model to implement these outlined modules and eventually merge them into a cohesive final solution. Note that both strategies generate substeps within a single process. As indicated in <ref>, the suggested two-step modular-of-thought instruction tuning strategy outperforms the other both for normal and modular inference. This aligns with the problem-solving approach employed by adept developers: When faced with a problem, skilled developers design solutions with a modular structure, organizing them into high-level logical sub-modules. They subsequently finalize and integrate these components to effectively produce their final solutions.§.§.§ Training DatasetIn this section, we conduct ablation experiments on the dataset, as illustrated in <ref>. Initially, we finetuned our model on the modular dataset obtained after modular-of-though instruction evolution, resulting in a substantial improvement (11.0%) compared to the baseline of normal fine-tuning. 
Subsequently, we explore the incorporation of a normal dataset, conducting experiments on a mixture of normal and modular datasets, revealing further performance enhancement of 3.38%. This underscores the significance of data diversity: While we encourage the model to decompose tasks into submodules, we also allow the model to employ direct methods to solve problems. This provides a larger solution space, leading to significant improvements, especially for easy and medium-level problems (2.46% and 4.54%, respectively). The improvement is more modest for difficult problems (0.84%), aligning with the intuition that direct methods are better suited for simpler problems.§.§ Results In this section, we present a comprehensive comparison of our approach with existing large coding model baselines on APPS  <cit.> in <ref> and CodeContest <cit.> in <ref>. We employ normal inference prompts for all methods. §.§.§ Results on APPSWe conducted a comparison of our approach with existing large language model baselines on APPs <cit.>. All outcomes are computed using raw predictions without being filtered by the test cases provided in the prompt. Our analysis includes a comparison with open-sourced approaches such as CodeT5 <cit.>, fine-tuned GPT-Neo <cit.>, GPT-2 <cit.>, GPT-3 <cit.>, one-shot StarCoder <cit.>, and one-shot WizardCoder <cit.>. Additionally, we present results from closed-source models, including text-davinci, code-davinci, and GPT3.5.As depicted in the results from the APPS shown in <ref>, we notice that the modular-of-thought training approach leads to an enhancement in code generation capabilities compared with previous instruction finetuning models. Our MoTCoder exhibits improved performance across all difficulty levels and demonstrates more substantial gains in the interview and competition-level problems, characterized by more intricate solutions. To provide specifics, the pass@1 performance of MoTCoder-15B surpasses WizardCoder-15b by 15.49% on interview-level problems and 10.28% on competition-level problems. Moreover, our pass@1 on competition-level questions surprisingly outperformed the closed-source model GPT3.5 by an impressive margin of 5.7%. This outcome serves as compelling evidence of the efficacy of our approach in addressing intricate problems.We also conduct a comparative analysis of our approach against previous LLM baselines with code-revision methods as well. Our included baselines contain Codex <cit.>, CodeT5 <cit.>, code-davinci, StarCoder <cit.>, and WizardCoder <cit.> and code-revision methods contain Self-edit <cit.>, CodeRL <cit.>, Self-repair <cit.>, and CodeChain <cit.>. The results presented in <ref> illustrate that MoTCoder exhibits notable performance improvements. Specifically, MoTCoder achieves a % pass@1, surpassing the current SOTA, CodeChain + WizardCoder, by 10.3%. CodeChain <cit.> is an iterative framework for inference that elicits modularized code generation through a chain of self-revisions. In contrast to CodeChain, our approach introduces a guided modular-of-thought framework during training, making it more intrinsic. This leads to improvements in performance. Notably, this achievement is obtained without the need for any additional revision iterations, thereby avoiding any extra associated inference costs. §.§ Results on LeetCode We benchmark on LeetCode our approach against SOTA models including StarCoder <cit.>, WizardCoder <cit.>, and CodeLlama <cit.>. 
The results presented in <ref> illustrate that MoTCoder exhibits significant performance improvements across all difficulty levels, with particularly notable advancements in tackling more challenging problems.To provide specific examples, the current SOTA 15B model, WizardCoder, achieves a pass rate of only 3.53% on medium-difficulty problems, whereas our approach achieves a remarkable 51.76%. Furthermore, all existing methods fall short in solving hard difficulty problems, while our method achieves a groundbreaking success rate of 19.51% on hard-level challenges.§.§.§ Results on CodeContestsWe conduct a comprehensive evaluation of our approach on CodeContests, benchmarking it against current state-of-the-art models, including code-davinci, GPT3.5, and WizardCoder <cit.>. The competitive models utilize the code-revision method CodeChain <cit.> and the filtering method CodeT <cit.>. The results, as depicted in <ref>, reveal notable performance enhancements achieved by MoTCoder. Specifically, MoTCoder attains an impressive 12.73% pass@5 on the test set, surpassing WizardCoder with CodeChain by 9.43%. Remarkably, MoTCoder even outperforms GPT3.5 by 1.57%. Importantly, these achievements are realized without the need for additional revision iterations or filtering by test samples, thereby avoiding any additional associated inference costs or the introduction of extra information. We conduct a comprehensive evaluation of our approach on CodeContests, benchmarking it against current state-of-the-art models, including code-davinci, GPT3.5, and WizardCoder <cit.>. The competitive models utilizes the code-revision method CodeChain <cit.> and the filtering method CodeT <cit.>. The results, as depicted in <ref>, reveal notable performance enhancements achieved by MoTCoder. Specifically, MoTCoder attains an impressive 13.94% pass@5 on the test set, surpassing WizardCoder by 10.67%. Remarkably, MoTCoder even outperforms GPT3.5 by 2.78%.§ LIMITATIONSThrough experimentation, it has been observed that although our MoTCoder elevates the model's modular and logical capabilities, leading to a commendable performance in addressing competitive-level programming challenges, it concurrently sacrifices some of the model's conversational abilities. This finding, along with corroborative evidence from prior research <cit.>, underscores the persistent and crucial nature of addressing the forgetting problem. We remain committed to ongoing exploration of this issue in our future work. § CONCLUSIONWe propose a novel framework, MoTCoder, for tuning instructions based on a modular of thought.MoTCoder directs LLMs to decompose their solutions into modular segments, culminating in the synthesis of a comprehensive final solution. Our experiments demonstrate that modular-of-thought strategy significantly improves LLM performance, achieving SOTA passing rate on challenging code tasks and showcasing superior modular capabilities. It is crucial in large-scale code development environments, since developers commonly leverage code sub-modules to construct code frameworks over crafting each line of code from scratch.MoTCoder empowers the creation of entire code structures using only standardized descriptions of functions.Adopting this methodology, we unlock the potential to generate extensive and intricate code frameworks.icml2024
http://arxiv.org/abs/2312.15960v2
{ "authors": [ "Jingyao Li", "Pengguang Chen", "Jiaya Jia" ], "categories": [ "cs.LG", "cs.PL", "cs.SE" ], "primary_category": "cs.LG", "published": "20231226084957", "title": "MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks" }
Article Title]Discrete Messages Improve Communication Efficiency among Isolated Intelligent Agents[1]Hang Chen[1]Yuchuan Jiang 2]Weijie Zhou 3]Cristian Meo 1]Ziwei Chen 4]Dianbo Liu [email protected] or [email protected][1]BeiJing JiaoTong University [2]Rice University [3]Delft University of Technology [4]School of Medicine and College of Design&Engineering, National University of Singapore Individuals, despite having varied life experiences and learning processes, can communicate effectively through languages. This study aims to explore the efficiency of language as a communication medium. We put forth two specific hypotheses: First, discrete messages are more effective than continuous ones when agents have diverse personal experiences. Second, communications using multiple discrete tokens are more advantageous than those using a single token. To validate these hypotheses, we designed multi-agent machine learning experiments to assess communication efficiency using various information transmission methods between speakers and listeners. Our empirical findings indicate that, in scenarios where agents are exposed to different data, communicating through sentences composed of discrete tokens offers the best inter-agent communication efficiency. The limitations of our finding include lack of systematic advantages over other more sophisticated encoder-decoder model such as variational autoencoder and lack of evluation on non-image dataset, which we will leave for future studies. [ [ =====§ INTRODUCTIONIntelligent agent communication, situated at the crossroads of AI and linguistics, aims to explore how a common language develops among agents. The Lewis Game<cit.> exemplifies collaborative tasks in this field, where a speaker and a listener collaborate to accomplish a task. The speaker describes a specific object to the listener, who must correctly identify it from a set of alternatives (Figure <ref>(left)). Using AI, significant research<cit.><cit.><cit.> has investigated language origins and evolution.Chaabouni et al<cit.> conducted research on emergent communication<cit.>, focusing on various aspects such as the assessment of compositionality and generalization in emerging language agents and the development of efficient color naming systems<cit.>. Lazaridou et al<cit.> proposed a method that combines multi-agent communication with data-driven natural language learning, aiming to enable machine agents to effectively communicate with humans using natural language. Lowe et al<cit.> investigated the interaction between supervised learning and self-play in protocol learning for emergent communication. Deep reinforcement learning has also been applied in the field of multi-agent communication<cit.><cit.>. However, as the number of agents increases, redundant and indistinguishable information often leads to inefficient communication.Yet, with more agents comes the challenge of redundancy and inefficiency. Recent work<cit.><cit.><cit.><cit.> has delved into optimizing communication, surpassing simple capacity and extending dataset complexity, significantly contributing to our understanding of language's evolution. Vector Quantization (VQ) adheres to Shannon's rate-distortion theory<cit.>, suggesting that vector encoding may outperform scalar encoding by handling dependencies in source symbols adequately. Advances in reparameterization, specifically for VAEs handling discrete variables<cit.><cit.>, have improved the model's effectiveness. 
VQVAE<cit.> circumvents non-differentiability issues by applying the identity function for efficient gradient transmission. This vector-based discrete representation is shown to be more robust and generalizable than its continuous counterparts for complex learning models<cit.><cit.>. Furthermore, applying discretization in multi-agent reinforcement learning addresses communication challenges within modular reasoning architectures, enhancing efficient interactions across modules<cit.>.Building upon the prior research, this study presents advancements in the vector quantization technique of the VQ model. We have made the following hypotheses in this paper: When intelligent agents have diverse personal experiences, using discrete messages is more advantageous than continuous messages, and communicating using multiple discrete tokens is more advantageous than using a single token. Through our designed experiments in multi-agent machine learning, we have empirically demonstrated that communication using sentences composed of multiple discrete tokens offers superior communication efficiency among agents with diverse personal experiences. However, in the experiments, when we use the VAE model to simulate continuous language communication between agents instead of the AE model, its effectiveness is superior to that of communication using multi-token discrete language. The structure of our paper is as follows: Section <ref> provides the background of emergent communication and outlines the objectives of our research. In Section <ref>, we introduce the related work in the field. Section <ref> describes the experimental settings and methods employed in our study. In Section <ref>, we present the experimental results and draw meaningful conclusions. Finally, in Section <ref>, we conclude our research with discussions and prospects for future work. § RELATED WORKS In recent years, numerous approaches have emerged to facilitate effective communication among specialized components within machine learning models. These approaches include attention mechanisms that selectively transmit information between specialized components in machine learning models<cit.><cit.><cit.> and the Transform method<cit.><cit.>. Additionally, collective memory and shared parameter techniques have been utilized for communication in multi-agent settings<cit.>.The RIAL model is widely used for discrete communication in intelligent agent coordination, enabling communication through discrete symbols, mirroring human social interaction<cit.>. Guo et al.<cit.> explored the importance of computer simulations in evolutionary linguistics using intelligent agent models to simulate agents developing a compositional language for numerical concepts through communication. Studies into LSTM<cit.> language models by Lakretz et al.<cit.> have shed light on how hidden states represent numbers and syntactic structures, guiding further inquiry into language patterns. Further work by Miao et al.<cit.>, Garcia et al.<cit.>, and Havrylov et al.<cit.> expanded our understanding of multi-agent communication dynamics and the emergence of language in neural network-based agents.In this vein, our research employs the Vector-Quantized Variational Autoencoder (VQVAE) model<cit.> and utilizes cross-training and cross-validation to probe communication patterns amongst agents. Our experiments demonstrate that in settings where agents have varied language systems, discrete language proves more effective than continuous forms. 
We also examine how differing token quantities in codebooks influence discrete communication effectiveness, a topic which is further elaborated in Section <ref>.§ THEORETICAL BASIS AND EXPERIMENTAL METHOD §.§ Discrete and Continuous CommunicationIn the series of autoencoder models<cit.>, we designate the encoder component of the model as the speaker, while the decoder component is referred to as the listener. For the discrete communication model, we use VQVAE. To train a pair of agents, let's assume the input is x. The information is passed through the speaker as z_e(x)= e(x,θ ), representing the encoded representation of x. Then, the information undergoes discrete quantization using a codebook, resulting in Z=DISCRETIZE(z_e(x),φ). Finally, the speaker reconstructs the original information from the received codebook indices x^'=d(Z,ϕ), where the model parameters θ,φ,ϕare continuously updated by minimizing the reconstruction loss and codebook loss. The complete loss function for this process is as shown in Equation <ref>:𝕃_VQ=||x-x^' ||_2 +||sg[z_e(x)]-e_k ||_2^2+β||z_e(x)-sg[e_k]||_2^2Where the last two terms represent the codebook loss in the VQVAE model, in the subsequent algorithm, we use L_quantify to represent these two items. The continuous data output by the encoder is quantized by the codebook layer before being transmitted to the decoder, which is the process of discrete communication between agents as we define it. In the experiments involving the AE model, the overall loss can be expressed as:z=e(x,θ ),x^'=d(z,ϕ ). As shown in Equation <ref>, the overall loss is equivalent to the reconstruction loss.𝕃_AE=||x-x^' ||_2=||x-d(e(x,θ ),ϕ )||_2During this process, the continuous data output by the encoder is directly input into the decoder, which is the process of agents using continuous language for communication.Throughout the entire experiment, the experimental data based on the Autoencoder (AE) serves as a baseline, which aims to verify that under the same experimental settings, the use of continuous language communication is less effective than discrete semantic communication between unfamiliar agents.§.§ Multi-token DiscretizationLi et al<cit.> proposed a human-like discrete information generation method that enables discrete message communication to have the effect of continuous message communication. Based on the foundation of discretization, we propose a multi-token discretization approach. In the original autoencoder (AE) framework, input data is encoded by the encoder into a continuous vector. The VQVAE model builds upon the AE and introduces a latent space codebook between the encoder and decoder. The continuous variables from the encoder are quantized into multiple vectors of the same size as the codebook. In our research, multi-token discretization is applied before the data enters the codebook layer. It involves dividing the output of the encoder into multiple segments of equal size but containing different data. Let's assume our latent codebook size is e∈ R^L× M. Initially, the output z_e(x) is divided into N segments s_1, s_2,s_3,...s_N with z_e(x)=CONCAT(s_1, s_2,s_3,...s_N), where each segment s_i∈ℝ^M/N with M/N∈ N^+. Next, each of these segments is discretized sequentially: e_o_i=Discretize(s_i ), where o_i=argmin||s_i-s_j||(j∈1,....N). 
After the discretization process, the N segments of data are then integrated back together in the order of their original splitting, as shown in Equation <ref>:Z=CONCAT(Discretize(s_1), Discretize(s_2), ...Discretize(s_N))Throughout the entire process, the discretized multi-token data always shares the same codebook. The schematic diagram of the multi-token discretization is illustrated in Figure <ref>(Right). After modifying this part of the structure, the corresponding model loss function needs to be adjusted as well. Since we divide the data into N segments, the total loss function for model training is defined as shown in Equation <ref>.𝕃= 𝕃_task+1/N{∑_i=1^N||sg[s_i ]-e_o_i ||_2^2+β∑_i=1^N||s_i -sg[e_o_i]||_2^2}Where L_task represents the specific task loss, which can be the aforementioned reconstruction loss, classification loss, or any other relevant loss function.§.§ Learning and Validation of Communication for AgentsAttempts have been made to explore cross-training (Guo et al<cit.>, Tieleman et al<cit.>) in the context of multi-agent learning. In this approach, during the simultaneous training of multiple agents, after each iteration, a random combination is selected, pairing one agent's speaker with another agent's listener for the next round of iterative learning. The reason behind this approach is that when multiple agents learn the same language, their understanding of the language may not be entirely identical. Figure <ref> demonstrate the feature distributions of the latent codebook spaces for 10 pairs of agents trained simultaneously on the same MNIST dataset. Each color represents a communication protocol between a pair of agents, that is, the feature distribution in the codebook. It can be observed that there are differences in semantic understanding among the agents. Hence, cross-training becomes necessary because it allows different agents to have the most similar understanding of the same language. Algorithm <ref> in the Appendix implements the aforementioned process.In terms of experimental validation, in addition to using trained agents for verification, we conducted another form of validation by manipulating the dataset. Assuming there are m types of data in the dataset, we merged the training set and validation set into a single dataset. The merged dataset was then divided into m classes based on their labels Dataset=(dataset_1,dataset_2,...dataset_m). After the division, a portion of images was uniformly sampled from each class to form the validation set, denoted as Validataset=(sample_1^1 ,sample_2^1,...sample_m^1). To meet the experimental overlap requirements, from the remaining training set P_train, a certain number of images were extracted from each class according to the desired experimental overlap rate across the classes Overlapset=(sample_1^2 ,sample_2^2,...sample_m^2). There are an equal number of images in each class, and P_train is the number of images left after the first extraction. For p_j∈{ 0.05,0.1,0.2,...0.9 }, the calculation of the number of images sampled in the second extraction is given by Equation <ref>. m∗ sample_i^2/P_train+(m-1)*sample_i^2 =p_j ,(i=1,2...,m)These extracted images were then merged with the respective training sets, ensuring that each class in the training set contained images from the remaining m-1 classes. The processed training set is Dataset^'=(dataset_1^' ,dataset_2^',...dataset_m^'). This process resulted in a training set where each category served as a separate training set for a pair of agents to learn from. 
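A minimal sketch of this splitting procedure is given below. The function and variable names are ours, and the rounding of the per-class sample count is an assumption, but that count follows directly from solving the overlap equation above for sample_i^2.

import numpy as np

def build_overlap_train_sets(labels, overlap_rate, val_per_class, seed=0):
    """One training set per class with target overlap rate p_j (returns index arrays)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    m = len(classes)
    shuffled = {c: rng.permutation(np.flatnonzero(labels == c)) for c in classes}
    val_idx = np.concatenate([shuffled[c][:val_per_class] for c in classes])
    remaining = {c: shuffled[c][val_per_class:] for c in classes}
    p_train = min(len(v) for v in remaining.values())    # images left per class after the first extraction
    # Solve  m*s / (p_train + (m-1)*s) = p_j  for the per-class count s of the second extraction.
    s = int(round(overlap_rate * p_train / (m - overlap_rate * (m - 1))))
    shared = {c: remaining[c][:s] for c in classes}       # images each class contributes to the others
    train_sets = {c: np.concatenate([remaining[c]] + [shared[o] for o in classes if o != c])
                  for c in classes}
    return train_sets, val_idx                            # agent pair i trains on train_sets[class_i]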
The experiments conducted on the split dataset follow Algorithm <ref>, which forms the core of this paper: specifically, that discrete messages are more effective than continuous ones when agents have diverse personal experiences. Similar to Algorithm <ref>, the experimental methodology of this core content is illustrated in Figure <ref>. These are the two experimental procedures we used to explore the communication patterns of multiple agents. The main model involved in both procedures is the VQVAE. When we instead use the AE (autoencoder) model in our experiments, we simply remove the "c" module from the procedure, and the data output by the encoder is decoded directly by the decoder.

§ EXPERIMENTS

The models used in this paper were implemented with the PyTorch 1.12.1 framework, using PyCharm Community Edition 2023.1 on the Windows platform, and trained in a CUDA 12.1 environment on a single GeForce RTX 3060 GPU with 8 GB of GPU memory. We employed four datasets: MNIST, CIFAR10, CelebA, and our own medical dataset, with image resolutions of 28× 28, 32× 32, 64× 64, and 64× 64, respectively. The batch size during training is set to 256 for the first two datasets and 64 for the latter two, and we used the Adam optimizer with a learning rate of 0.001. The commitment cost for the model's discrete layer was set to 0.25, with a decay rate of 0.99. The codebook size of the discrete layer varied with the dataset. In the experiments, we compared the reconstruction errors between the original images and the reconstructed images; an example of the two types of images can be seen in Figure <ref>.

§.§ Multi-token Discretization for Improved Agent Communication

Following the method shown in Figure 3 and Algorithm <ref>, we repeated the experiments with different overlap ratios using the best-performing multi-token discretization model and the AE model. For the MNIST and CelebA datasets, the original VQVAE model had latent space sizes e_m∈ℝ^512× 64 and e_m∈ℝ^512× 128, while for the CIFAR10 dataset it was e_m∈ℝ^1024× 256. We conducted these overlap experiments with 32 tokens, and the results are shown in Figure <ref>. On the three open-source datasets, the average loss incurred when communicating with multiple discrete tokens is 32.1%, 10.6%, and 3.7% lower, respectively, than when communicating with continuous semantics. Our experiments indicate that when one agent interacts with another, unfamiliar agent, discrete semantic learning with multiple tokens has clear advantages over continuous semantic learning. Figure <ref> explains why we chose a 32-token VQVAE model for the experiments above and also illustrates the advantage of the proposed multi-token discrete mechanism over a single-token approach. It shows the results of training m pairs of agents simultaneously according to Algorithm <ref>, where each boxplot represents the stable loss of communications between the m pairs of agents. The general pattern is that as the number of discrete tokens increases, the communication loss decreases, which from the model's perspective means improved performance and, in terms of agent communication, more efficient exchanges.
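For the overlap ratios p_j used in these experiments, the number of images drawn per class in the second extraction follows directly from the splitting rule of Equation <ref> in the previous section; solving that equation for sample_i^2 gives the small helper sketched below (our own rearrangement, with illustrative variable names and example values).

def second_extraction_size(p, p_train, m):
    # Solve m*s / (p_train + (m - 1)*s) = p for s, the per-class number of
    # images drawn in the second extraction (p: target overlap rate,
    # p_train: images left per class after the first extraction, m: classes).
    return round(p * p_train / (m - p * (m - 1)))

for p in (0.05, 0.1, 0.2, 0.5, 0.9):
    print(p, second_extraction_size(p, p_train=900, m=10))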
In all of the above experiments, the number of agent pairs m for the three datasets respectively are (m=10(MNIST,CIFAR10),8(CelebA)). Our experiments have demonstrated two theories. First, discrete messages with multiple tokens are more effective than continuous ones when agents have diverse personal experiences. Second, communications using multiple discrete tokens are more advantageous than those using a single token.§.§ Theoretical Validation and Practical ApplicationBased on the open-source datasets, we conducted the same experiments as in section 4.1 with our own ocular dataset(Figure <ref>) to further validate our theory. When experimenting with the VQVAE model on this dataset, codebook size e_m∈ℝ^512× 128. The dataset is divided into 5 categories based on symptom types, with varying numbers of images in each category. We performed 5 sets of experiments, each involving communications between agents. First, to demonstrate the feasibility of our proposed multi-token discretization, we conducted experiments following Algorithm <ref>. The experimental results are shown in Figure <ref>(Right), which once again validate the effectiveness of our theory. Subsequently, we proceeded to the second approach, which is based on Algorithm <ref>.The dataset, which originally consisted of only 2750 images, has been expanded to 5000 images by applying data augmentation techniques. Each class now contains 1000 augmented images. Then, we processed the dataset according to the data preprocessing steps outlined in Algorithm <ref>, and completed the cross-validation experiments. The cross-validation loss under multi-token discretization is shown in Figure <ref>(Mid).Finally, we conducted experiments on the core theoretical aspects based on this dataset. The results, shown in Figure <ref>(Left), indicate that when the number of discrete tokens reaches 32, the overall discrete interactive communication outperforms continuous interactive communication, the former's average loss is 7.1% lower than that of the latter. The experimental results on the new dataset provided strong evidence to support our conclusions. Communication between agents who are unfamiliar with each other using multi-token discretized information variables is better than using continuous variables.§.§ Research on codebook aspectsThe experimental results above prove the core theory of this paper: when unfamiliar agents communicate with each other, the use of multi-token discrete semantics is more effective than that of continuous semantics. The subsequent research will mainly focus on the usage patterns of the codebook when communicating with discrete semantics and how to improve the codebook to enhance communication efficiency. In the research, the MNIST and CIFAR10 datasets are primarily used for exploration. Firstly, We investigated the impact of the size of the latent codebook space on the efficiency of discrete communication. We conducted experiments using a single-token VQVAE model following Algorithm <ref>, with the number of agents m set to 10. In the experiments, we controlled the experimental variable to be the size of the first dimension of the latent space. The result is shown in Figure <ref>.It can be observed that as the codebook space increases, the communication loss between different agents decreases overall. 
Although there is some fluctuation in the subsequent data for the MNIST dataset, we speculate that this is due to the small dataset size and the large codebook space, which results in the model not fully learning and the codebook not being evenly distributed. Therefore, we have reason to believe that as the codebook space expands, agents can capture more patterns when learning the language, thereby further improving the efficiency of discrete communication and enhancing the performance of discrete learning.After completing the aforementioned experiments, in order to further investigate this direction in-depth, we conducted a study on the utilization of the codebook and some patterns in the VQVAE model. In the following results, our experiments were not conducted according to the aforementioned algorithm, but rather using a single model trained on the official datasets.Figure <ref>(left) represents the number of times code vectors are used for each codebook update. Assuming the single-token model uses a code vector N times for each codebook update. For an m-token model, each codebook update occurs N/mtimes. In each iteration, the codebook is updated m times, so after implementing multi-token discretization, the codebook updates strictly follow the rule based on the number of tokens, with the total usage of code vectors in each iteration remaining N, and any m-token model updating the codebook m times within that iteration, each token using N/m code vectors. Figure <ref>(right) represents the variance between the frequencies of use of different code words for different numbers of discrete tokens. It can be observed that as the number of discrete tokens increases, the codebook is utilized more evenly.Figure <ref> shows the proportion of the number of codewords used in each iteration to the total number, under different numbers of discrete tokens. It can be observed that when the number of discrete tokens is greater than or equal to 8, the codebook is effectively utilized throughout the iterations. This undoubtedly contributes to improving the model's performance.Figure <ref> shows the transformation of codebook quantization loss for different numbers of discrete tokens. As the number of training iterations increases, the codebook's quantization loss gradually stabilizes, and a higher number of discrete tokens results in a higher stable loss value.For m agents with an overlap rate of 0.1, we explore the codebook similarity among them during the training process, using the Euclidean distance as a measure. Assuming a codebook size of ℝ^L× M, the calculation of the Euclidean distance is shown in Equation <ref>.𝔼𝔻_Average= 1/m(m-1)/2∑_i=1^m∑_j=i+1^m√(∑_u=1^L∑_v=1^M(C_i (u,v) - C_j(u,v))^2)C_i,C_j represents the codebook of different agents. Figure <ref> illustrates the Euclidean distances between pairwise codebooks of ten agents during the learning process, with an overlap of 0.1. As the iterative learning progresses, the Euclidean distances between the latent codebooks of different agents decrease, indicating an increase in their similarity. In section 4.3 of the research content, it is stated that increasing the size of the latent codebook space is beneficial for agents to improve their discrete communication efficiency. The patterns of codebook usage will aid in our future research endeavors. Especially the multi-token discretization mechanism, which has improved the issue of uneven codebook usage and mitigated the "discretization bottleneck". 
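The codebook statistics discussed in this section (per-codeword usage counts, their variance, and the fraction of codewords touched in an iteration) and the average pairwise codebook distance of Equation <ref> are straightforward to compute. A sketch follows, under the assumption that each agent's codebook is an (L, M) tensor and that all_indices collects the token indices emitted during one iteration; the names are illustrative.

import torch

def codebook_usage_stats(all_indices, num_codes):
    # all_indices: 1-D tensor of codebook indices used during one iteration.
    counts = torch.bincount(all_indices, minlength=num_codes).float()
    used_fraction = (counts > 0).float().mean()    # share of codewords actually used
    usage_variance = counts.var(unbiased=False)    # evenness of codebook utilization
    return counts, used_fraction, usage_variance

def average_codebook_distance(codebooks):
    # Mean pairwise Euclidean (Frobenius) distance between agents' codebooks,
    # i.e. the ED_Average of Equation <ref>; codebooks: list of (L, M) tensors.
    m = len(codebooks)
    dists = [torch.norm(codebooks[i] - codebooks[j])
             for i in range(m) for j in range(i + 1, m)]
    return torch.stack(dists).mean()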
§ CONCLUSION, LIMITATION AND FUTURE STUDY

Our experiments show that while the efficacy of communication between agents using single-token discrete semantics can rival that of continuous semantics, multi-token discretization before communication notably enhances the quality of information exchange over continuous language. Furthermore, multi-token discretization outperforms the single-token approach in terms of system generalization. This reveals two key insights: multi-token communications are more effective than single-token approaches, and in contexts where agents encounter diverse languages, constructing sentences from multiple discrete tokens yields the best communication efficiency among isolated intelligent agents. Additionally, we have carried out a comparison that uses a VAE model in place of the AE model as the continuous baseline, and we note that agents communicating through the VQVAE model falter in comparison with those communicating through the VAE model; specific details are given in the Appendix. The multi-token discretization method we propose does not perform as well as the VAE, which may be due to optimization difficulties; identifying the underlying reasons for this discrepancy will be the focus of our future research. In summary, the multi-token discretization approach we propose outperforms the original single-token discretization method, and compared with continuous language based on the AE model, linguistic communication using multi-token discretization offers a greater advantage for communication among isolated agents.

§ CONFLICT OF INTEREST

The authors declare no potential conflict of interest with respect to the research, authorship, or publication of this article.

§ DATA AVAILABILITY STATEMENT

All relevant data are within the manuscript and its Appendix.

§ APPENDIX

Here we provide additional details about the experimental setup and additional results. In Figure <ref>, we demonstrated that the multi-token discretization mechanism is more effective for communication between agents than the single-token mechanism. Prior to this, we had already conducted experiments to establish the feasibility of the multi-token discretization mechanism. Algorithm <ref> is the procedure through which we validate the effectiveness of multi-token discrete communication. We varied the intermediate processing architecture between the speaker and listener and recorded the test loss of m pairs of agents throughout training, as shown in Figure <ref>. The curves in the figure indicate that, with cross-training, multi-token discretization indeed outperforms the single-token approach; moreover, increasing the number of tokens leads to better performance and faster learning. The AE model still exhibits the fastest learning speed; that is, when paired agents learn a language, those that adopt a continuous semantic approach learn the fastest. However, as shown in Figure <ref>, when these agents interact with new agents, the outcomes are not as good as those of agents that learned through a discrete semantic approach. In Section <ref>, we described the configuration of experimental parameters. For the four datasets, we adjusted the batch size and the size of the latent code space accordingly; however, all parameters for experiments within the same dataset are kept consistent.
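The cross-communication check used in this validation can be summarized as: encode with agent i's speaker, quantize the message, and decode with agent j's listener, recording the reconstruction loss of the unfamiliar pair. A hedged sketch is given below; agents is assumed to be a list of (encoder, quantizer, decoder) triples whose quantizer follows the interface of the earlier sketch (returning the quantized message, the token indices, and the quantization loss), val_loader is a placeholder DataLoader of validation images, and quantizing with the speaker's own codebook is one possible convention rather than the only one.

import itertools
import torch
import torch.nn.functional as F

@torch.no_grad()
def cross_pair_losses(agents, val_loader):
    # Reconstruction loss when speaker i talks to listener j (i != j).
    losses = {}
    for i, j in itertools.permutations(range(len(agents)), 2):
        enc_i, quant_i, _ = agents[i]
        _, _, dec_j = agents[j]
        total, count = 0.0, 0
        for x in val_loader:
            z = enc_i(x)
            z_q, _, _ = quant_i(z)          # discrete message from speaker i
            x_rec = dec_j(z_q)              # decoded by the unfamiliar listener j
            total += F.mse_loss(x_rec, x, reduction="sum").item()
            count += x.numel()
        losses[(i, j)] = total / count
    return losses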
During our experiments, we also used the VAE model in place of the AE model as a baseline and simulated the learning and communication process between a pair of agents using continuous semantics. As before, we used Equation <ref> to allocate individual datasets to each pair of agents, and the loss during training of the VAE model is given by Equation <ref>:

ℒ_VAE = ℒ_recon + βℒ_KL = ||x - x'||_2 - β/2∑_j=1^J ( 1 + log(σ_j^2) - μ_j^2 - σ_j^2 )

The first term is the reconstruction loss; the second term is the KL divergence loss, which characterizes the difference between the actual distribution of the latent variables and the prior distribution (usually assumed to be a standard normal distribution). Here, μ and σ denote the mean and standard deviation of the latent distribution, and β is a hyperparameter. We repeated all of the experiments that used the autoencoder (AE) model with the variational autoencoder (VAE) model in its place. When repeating the core experiments of Algorithm <ref>, we found that the agents learning with discrete variables did not achieve very good results when communicating with each other; that is, the loss from communication was relatively high. In contrast, the agents using continuous semantics for communication showed higher efficiency in their exchanges, with lower communication losses. To further explore and compare the performance of continuous semantic communication based on the VAE model and discrete semantic communication, we devised a series of experimental setups (see Figure <ref>). In the first category of experiments, half of the speaker's output was processed discretely while the other half was either processed continuously or masked to zero. In the second category, one half of the information was processed either discretely or continuously, while the other half was masked to zero. The experimental results indicate that, regardless of whether pairs of agents are trained with the combined model described above or with a single model with half of the output masked, at low overlap ratios the relationship between the three types of cross-validation losses is:

AE > VQVAE > VAE

However, this pattern is not absolute. Our main evaluation metric is the reconstruction loss between unfamiliar agents; as mentioned in Section 3, unfamiliarity means that their training datasets are not completely identical. When the proportion of overlap between the datasets is low, the discrete communication approach indeed does not perform as well as the continuous communication approach. Figure <ref> presents results obtained when training with a combination of continuous and discrete methods and using different validation methods during communication validation. The training method here splits the encoder's output into two parts: one part goes through the latent variable layer of the VAE model and the other goes through the codebook layer of the VQVAE model, after which both parts are integrated into the decoder. During communication validation, we use only continuous, only discrete, or a combination of continuous and discrete methods for cross-validation. The loss during the training process can be represented by Equation <ref>.
ℒ = ℒ _recon+ ℒ _quantization+ℒ_KLIn this experiment, we found that when the overlap ratio is low, the effect of discrete communication is not as good as continuous communication. However, when the overlap ratio exceeds 90%, agents learning through discrete communication overcome the problems in learning communication protocols, leading to a reduction in overall communication loss and outperforming continuous methods. However, the results of this experiment were obtained considering that both continuous and discrete information are present in the communication process of agents. When the agents learn and communicate entirely in a discrete or continuous manner, the advantage of the discrete method also disappears even with a 90% overlap ratio.Regarding the experiments on non-image datasets and the lack of demonstrated advantages over variational autoencoders, we will reserve them for continued research.
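As a concrete reading of Equation <ref>, the combined objective of the split-latent experiment (one half of the encoder output routed through a VAE latent, the other half through a VQ codebook) can be written as below. The even split, the commitment weight β, and the KL weight are assumptions for illustration rather than the exact settings used in the experiments.

import torch
import torch.nn.functional as F

def hybrid_loss(x, x_rec, mu, logvar, segments, quantized, beta=0.25, kl_weight=1.0):
    # Reconstruction term.
    recon = F.mse_loss(x_rec, x)
    # VQ quantization term (codebook + commitment) for the discrete half.
    quant = F.mse_loss(quantized, segments.detach()) \
            + beta * F.mse_loss(segments, quantized.detach())
    # Closed-form KL term for the continuous (VAE) half.
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + quant + kl_weight * kl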
http://arxiv.org/abs/2312.15985v2
{ "authors": [ "Hang Chen", "Yuchuan Jang", "Weijie Zhou", "Cristian Meo", "Ziwei Chen", "Dianbo Liu" ], "categories": [ "cs.LG", "cs.IT", "math.IT" ], "primary_category": "cs.LG", "published": "20231226103005", "title": "Discrete Messages Improve Communication Efficiency among Isolated Intelligent Agents" }
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14850, USA Kavli Institute at Cornell for Nanoscale Technology, Cornell University, Ithaca, New York 14850, USA Diamond Light Source, Harwell Campus, Didcot OX11 0DE, United Kingdom Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, USA Department of Physics, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Diamond Light Source, Harwell Campus, Didcot OX11 0DE, United Kingdom Diamond Light Source, Harwell Campus, Didcot OX11 0DE, United Kingdom Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA Geballe Laboratory for Advanced Materials, Stanford University, Stanford, California 94305, USA Diamond Light Source, Harwell Campus, Didcot OX11 0DE, United Kingdom Department of Physics, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14850, USA Kavli Institute at Cornell for Nanoscale Technology, Cornell University, Ithaca, New York 14850, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, USA Geballe Laboratory for Advanced Materials, Stanford University, Stanford, California 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Geballe Laboratory for Advanced Materials, Stanford University, Stanford, California 94305, USA [Corresponding author: ][email protected] Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, 
Menlo Park, California 94025, USAWe conducted a comparative study of the rare-earth infinite-layer nickelates films, RNiO_2 (R = La, Pr, and Nd) using resonant inelastic X-ray scattering (RIXS). We found that the gross features of the orbital configurations are essentially the same, with minor variations in the detailed hybridization. For low-energy excitations, we unambiguously confirm the presence of damped magnetic excitations in all three compounds. By fitting to a linear spin-wave theory, comparable spin exchange coupling strengths and damping coefficients are extracted, indicating a universal magnetic structure in the infinite-layer nickelates. Interestingly, while signatures of a charge order are observed in LaNiO_2 in the quasi-elastic region of the RIXS spectrum, it is absent in NdNiO_2 and PrNiO_2. This prompts further investigation into the universality and the origins of charge order within the infinite-layer nickelates.Universal orbital and magnetic structures in infinite-layer nickelates W. S. Lee January 14, 2024 ======================================================================§ INTRODUCTIONThe discovery of superconductivity in the infinite-layer nickelates has marked a new milestone in the field of unconventional superconductivity <cit.>. While the nickelates were first thought to resemble the cuprates due to their similiar crystal structure and electron count in the 3d orbitals <cit.>, the initial stages of the experimental investigations have revealed notable differences between the two systems <cit.>. As a result, the nickelates have emerged as a new class of strongly correlated materials, presenting fresh opportunities to study the role of strong electronic correlations in unconventional superconductors. To date, superconductivity has been found in La-, Pr-, and Nd-based infinite-layer nickelate thin films <cit.>. The doping-temperature phase diagram in these compound appears to be quite similar, indicating a similar underlying microscopic origin. However, subtle differences have also been observed. For example, London penetration depth measurements indicate that La- and Pr-NiO_2 exhibit a node in the superconducting order parameter, whereas it is less clear in NdNiO_2, possibly due to the magnetic moment carried by the Nd ions <cit.>. Another example is the recent demonstration of substantial variations in the anisotropy of the upper critical field across La, Pr, and Nd-NiO_2 <cit.>. These observations imply that the rare-earth element may play a role in the low-energy electronic behavior.Indeed, theories have suggested that 5d orbitals of the rare-earth element hybridize with the Ni 3d orbitals and gives rise hole-like Fermi surface (FS) pockets, making the infinite-layer nickelates multi-orbital systems <cit.>. The minimal model should at least consist of a hole-like band with dominant Ni 3d_x^2-y^2 character which is coupled to an electron-like band with a dominant character of rare-earth 5d orbitals. It has been an intriguing question regarding whether rare earth element could serve as a tuning knob to control the properties of the infinite-layer nickelates, as it does in the well-known cases of the perovskite nickelates <cit.>. Along this line, theoretical studies have investigated the variation of electronic, magnetic, and lattice structures as a function of rare earth elements <cit.>, suggesting the possible occurrence of nontrivial phase transitions. 
Thus, it is of great interest to investigate the rare-earth dependence by examining the microscopic behaviors via spectroscopic measurements.Owing to the element-specific capability, resonant inelastic x-ray scattering (RIXS) has been a powerful tool to reveal the electronic structure <cit.>, magnetic excitations <cit.>, and charge order in the infinite-layer nickelates <cit.>. To date, most of the RIXS studies were focused on one family of the infinite-layer nickelates from various sample sources, which is not ideal for a systematic study between La-, Pr-, and Nd-NiO_2. In this article, we report RIXS data that were measured with identical measurement conditions on Nd-, Pr-, and La-NiO_2 thin films. Importantly, the samples were grown and prepared using the same synthesis and characterization standard. Since uncapped infinite-layer nickelates are prone to heterogeneity, and their synthesis is hard to control <cit.>, we focus our studies on samples with a protective SrTiO_3 capping layer, for which we have previously demonstrated uniform crystallinity throughout the the thickness of the nickelate film <cit.>. We show that the gross features of orbital configurations are essentially identical with some minor variations in the detailed hybridization. For the low-energy excitations, we compare the dispersion and the bandwidth of the magnetic excitations in the three compounds and conclude that the magnetic structure is universal in the nickelates. Interestingly, in the quasi-elastic RIXS spectra, we found signatures of a charge order only in LaNiO_2, but not in NdNiO_2 and PrNiO_2. In the context of recent debates about the charge order in infinite-layer nickelates <cit.>, we leverage cross-sectional scanning transmission electron microscopy (STEM) to investigate the secondary phases present in our LaNiO_2 film, and discuss the implications on the observed charge order peak in the LaNiO_2 film.§ EXPERIMENT DETAILSThin films of the precursor perovskite RNiO_3 (R = La, Nd, Pr) with a thickness of 10 nm were grown on a substrate of SrTiO_3(001).The c-axis oriented infinite-layer RNiO_2 was obtained by employing a topotactic reduction process <cit.>. To protect and support the crystalline order, a capping layer made of five unit cells of SrTiO_3(001) was grown on top of the nickelate films before the topotactic reduction. X-ray diffraction (XRD) measurements were conducted to confirm the quality of the infinite-layer nickelate. The XRD data were included in the Supplementary Material <cit.>. XAS and RIXS measurements were performed at beamline I21 of the Diamond Light Source (United Kingdom). The combined energy resolution of the RIXS measurements was approximately 40 meV at the Ni L_3 edge. For the incident energy RIXS map (Fig. <ref>), the spectra were collected at an incidence angle of 35 and scattering angle of 154. Measurements were taken at a temperature of 20 K. For the dd excitations, magnetic excitations, and quasi-elastic map, the RIXS spectra were taken at the photon energy of 852.6 eV at which the intensity of the dd and magnetic excitations is maximal. Since the magnetic excitations are quasi-two-dimensional <cit.>, the momentum dependent data have been denoted as a function of the projected in-plane momentum transfer q_∥, which can be varied by rotating the sample angle with a fixed scattering angle. 
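Because q_∥ is fixed entirely by the geometry, the conversion from sample angle to projected in-plane momentum transfer can be written down directly. The sketch below treats the scattering as elastic, measures the incidence angle from the sample surface with the exit angle given by 2θ − θ, and assumes an in-plane lattice constant close to that of the SrTiO_3 substrate (a ≈ 3.905 Å); sign conventions and the exact lattice constant may differ from those used in the actual analysis.

import numpy as np

HC_EV_A = 12398.4        # photon energy-wavelength product, eV * Angstrom

def q_parallel_rlu(theta_deg, two_theta_deg=154.0, energy_ev=852.6, a_ang=3.905):
    # Projected in-plane momentum transfer (r.l.u.) for a fixed scattering angle
    # 2-theta and sample angle theta, treating the scattering as elastic.
    k = 2.0 * np.pi * energy_ev / HC_EV_A                   # |k_i| ~ |k_f|, 1/Angstrom
    theta = np.radians(theta_deg)
    theta_out = np.radians(two_theta_deg) - theta           # exit-angle convention
    q_par = k * (np.cos(theta_out) - np.cos(theta))         # in-plane component
    return q_par * a_ang / (2.0 * np.pi)                    # convert to r.l.u.

print(q_parallel_rlu(119.0))    # roughly 0.35 r.l.u. at the Ni L3 edge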
For all the momentum-dependent RIXS data shown here were obtained with the scattering angle set to 154^∘ using π polarization of the incident photons with grazing exit geometry, at which the magnetic cross section is dominant.Cross-sectional specimens for STEM analysis were prepared by the standard focused ion beam (FIB) lift-out process on a Thermo Fisher Helios G4 UX FIB and imaged on an aberration-corrected FEI Titan Themis 300 operating at 300 kV with a probe convergence angle of 21.4 mrad and inner (outer) collection angles of 68 (200) mrad.§ RESULTSWe begin by examining the orbital configurations of La-, Nd-, and Pr-NiO_2. Figure <ref> (a) illustrates the x-ray absorption spectra (XAS) near the Ni L_3-edge for these three parent compounds. In the normal incident geometry (θ = 90^∘), where the polarization of the incident photons aligns with the Ni-O bond direction, all XAS exhibit a single main peak, as reported in previous experiments <cit.>. We note that the absorption is significantly reduced in the grazing incident geometry (θ = 10^∘), where the majority of the incident photon polarization lies in the normal direction of the NiO_2 plane. This observation indicates a significant anisotropy between the in-plane and out-of-plane orbital configurations of the system. The presence of a single XAS peak and the linear dichroism in XAS support the notion of a quasi-two dimensional electronic structure, primarily consisting of a dominant 3d^9 character with a half-filled 3d_x^2-y^2 orbital <cit.>.Further insights into the orbital configuration can be obtained from RIXS spectra across the Ni L_3-edge. As depicted in Fig. <ref> (b), the overall features of the RIXS incident map exhibit similarities among the three compounds. These include magnetic excitations below an energy of 0.2 eV, a spectral feature around ∼0.65 eV (referred to as 3d^8R), dd excitations within the 3d orbitals in the energy range of 1.0 to 2.5 eV, and fluorescence emission (FL) above 2.5 eV. The presence of similar energy scales for dd and FL excitations reflects a generic crystal field splitting characteristic of the three families of nickelates, as expected. Notably, the 3d^8R feature has been associated with an excitation related to the hybridized rare-earth 5d ligands and Ni 3d states <cit.>. Its observation in all three compounds indicates that it is a common feature of infinite-layer nickelates. This observation aligns with various theoretical predictions that suggest a multi-orbital nature in infinite-layer nickelates, where the rare-earth 5d states contribute to the band structures near the Fermi energy <cit.>. This is in contrast to high-T_c cuprates, where the states near the Fermi energy are primarily dominated by a mixture of Cu 3d_x^2-y^2 character and the O 2p ligand states. Despite the overall similarity, we have observed a subtle difference in the dd excitations. Figure <ref> illustrates the dd excitations obtained at three different angles, effectively varying the incident photon polarization with respect to the crystallographic axes. As a result, the cross-section of dd excitations associated with different orbital symmetries are modulated <cit.>. Specifically, in the normal incident geometry (θ = 90^∘) depicted in Figure <ref>(a), the electric field of the incident photon polarization lies in the NiO_2 plane, favoring transitions associated with in-plane orbitals such as d_x^2-y^2 and d_xy. 
In the other geometry (θ∼ 138.5^∘ in Figure <ref>), the photon polarization now includes a notable component perpendicular to the NiO_2 plane, enabling excitations associated with out-of-plane d_xz/yz orbitals. Since the dd excitations in the RIXS incident photon map are generic (Fig. <ref> (a)), we follow the assignment of dd peaks described in our previous work <cit.>, as also denoted in Figure <ref>. Interestingly, as shown in Figure <ref>(c) and (d), while the excitations associated with the d_xz/yz orbitals are prominent in Nd- and Pr-NiO_2, they are significantly weaker in LaNiO_2. This observation suggests that LaNiO_2 exhibits more extended d_xz/yz orbitals.Next, we discuss the magnetic excitations, which bear the hallmarks of the underlying electronic correlations <cit.>. Figure <ref>(a) shows the unambiguous presence of a branch of dispersive magnetic excitations in all three compounds with similar characteristics. Specifically, the dispersion emanates away from the zone center and reaches respective maxima near (0.5, 0) and (0.25, 0.25), consistent with the spin wave dispersion in an antiferromagnetically coupled spin-1/2 square lattice system. By comparing the RIXS spectra near the two maxima, as shown in Figure <ref>(b), we found that the peak positions of the magnetic excitations are essentially identical in the three compounds within the uncertainty of our experiments. This suggests that the bandwidth of the magnetic excitations are quite similar across the three families of nickelates. Additionally, we observe that the magnetic peak spectra are broad (Figure <ref>(b)), indicating that the excitations are heavily damped. This is consistent with the presence of rare-earth 5d FS pocket in the undoped parent compounds, which can dissipate the magnetic excitations in the particle-hole continuum <cit.>.To obtain a more quantitative comparison, following our previous work on NdNiO_2 <cit.>, we fit the RIXS magnetic spectrum to a damped harmonic oscillation functions χ”(q,ω) <cit.>, given by χ”(q,ω) = γ_q ω/(ω^2-ω_q^2)^2 + 4γ_q^2ω^2where ω_q is the undamped mode energy and γ_q damping coefficient. The extracted ω_q and γ_q are summarized in Fig. <ref>(c). All three nickelates exhibit a similar energy-momentum dispersion with a similar degree of damping. Then, we fit the extracted dispersion to a linear spin wave form for the spin-1/2 square-lattice Heisenberg antiferromagnet <cit.>, including nearest- and next-nearest-neighbor exchange couplings, H = J_1 ∑_i,jS_i · S_j + J_2 ∑_i,i'S_i · S_i'where S_i, S_j and S_i' denote Heisenberg spins at site i, nearest-neighbor sites j, and the next-nearest-neighbor sites i', respectively. The fitted J_1 and J_2 are summarized in Table <ref>. Both J_1 and J_2 are comparable in all three nickelates, except that the J_1 of PrNiO_2 appears to be slightly larger than the others, consistent with the higher peak position in the raw magnetic spectra shown in Fig. <ref>(b). This finding aligns with the weak rare-earth dependence on the spin exchange interaction predicted by theoretical models <cit.>. We also note that there appears to be no clear correlation with the c-axis lattice parameter, which monotonically decreases by replacing the rare-earth element from La to Nd. This null correlation suggests that the spin exchange interaction is essentially insensitive to the electronic structure along the c-axis direction. 
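To make the fitting procedure concrete, a short sketch of the two model ingredients is given below: the damped harmonic oscillator response of Equation <ref> (up to an overall amplitude) and one standard linear spin-wave dispersion for the Néel state of the J_1-J_2 square-lattice Heisenberg model. The exact LSWT convention, the instrumental-resolution convolution, and the Bose factor used in the actual analysis may differ, and the data points in the example are purely illustrative placeholders.

import numpy as np
from scipy.optimize import curve_fit

def dho(omega, omega_q, gamma_q, amp):
    # Damped harmonic oscillator chi''(q, omega) of Equation <ref>, times an amplitude.
    return amp * gamma_q * omega / ((omega**2 - omega_q**2)**2 + 4.0 * gamma_q**2 * omega**2)

def lswt_dispersion(h, k, J1, J2, S=0.5):
    # One common linear spin-wave dispersion for the Neel-ordered J1-J2
    # square-lattice Heisenberg antiferromagnet; (h, k) in r.l.u.
    qx, qy = 2.0 * np.pi * np.asarray(h), 2.0 * np.pi * np.asarray(k)
    gamma = 0.5 * (np.cos(qx) + np.cos(qy))        # nearest-neighbour structure factor
    nu = np.cos(qx) * np.cos(qy)                   # next-nearest-neighbour structure factor
    A = 4.0 * S * (J1 - J2 * (1.0 - nu))
    B = 4.0 * S * J1 * gamma
    return np.sqrt(np.maximum(A**2 - B**2, 0.0))

# Illustrative use: fit undamped mode energies (from per-spectrum DHO fits)
# along (h, 0); the numbers below are placeholders, not measured values.
h_pts = np.array([0.15, 0.25, 0.35, 0.45])
omega_pts = np.array([0.10, 0.15, 0.18, 0.20])     # eV
popt, _ = curve_fit(lambda h, J1, J2: lswt_dispersion(h, 0.0, J1, J2),
                    h_pts, omega_pts, p0=(0.06, -0.01))
print("J1, J2 (eV):", popt)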
Thus, the magnetic interactions are dictated by the in-plane low energy electronic structure, presumably the 3d_x^2-y^2 orbitals.It is important to emphasize that our results do not necessarily imply the existence or nonexistence of long-range antiferromagnetic order (AFM), as the putative AFM wave-vector is beyond the reach of the Ni L-edge RIXS. Indeed, multiple studies of bulk poloycrystalline sample do not shown long range AFM order <cit.>.Nevertheless, the observation firmly establishes the universal presence of substantial antiferromagnetic spin interactions in infinite-layer nickelates, suggesting that Mott physics should play a significant role in shaping the microscopic electronic structure of these nickelates.Another hallmark of strong electronic correlations is a complex phase diagram comprising multiple quantum phases. Particularly, those quantum phases that break the translation symmetry can be detected by examining the momentum distribution of the quasi-elastic peak intensity in the RIXS spectrum (See, for example, Ref. Ghiringhelli2012YBCO). Indeed, previously, we observed a charge order in LaNiO_2 and investigated its doping and temperature dependence using RIXS in σ-scattering polarization <cit.>, where the incident x-ray polarization is perpendicular to the scattering plane. Using a different scattering geometry (e.g., π-polarization), as shown in Fig. <ref>, a peak at (0.345±0.0007, 0) is observed in LaNiO_2, confirming the presence of the charge order. Interestingly, the signature of the charge order scattering peak is absent in the RIXS data of Nd- and Pr-NiO_2. While this may appear to indicate that the charge order is not universal in infinite-layer nickelates, recent findings have presented a puzzle. Specifically, a peak with a wavevector of (0.333, 0) has been reported in NdNiO_2 films prepared without a SrTiO_3 capping layer <cit.> and PrNiO_2 with a STO capping layer <cit.>. Notably, the peak intensity in uncapped NdNiO_2 exhibits a positive correlation with the spectral weight of the RIXS 3d^8R feature <cit.>, but our LaNiO_2 exhibits the least pronounced 3d^8R peak in RIXS spectra of the three families (Fig. <ref>(b)). Consequently, the emergence of CO appears to be sample-specific. Some aspects of the material specificity will be further discussed in the next section.§ DISCUSSION As previously discussed by Krieger et al. <cit.>, the XAS and RIXS spectra of NdNiO_2 without a SrTiO_3 capping layer exhibit distinct characteristics compared to their counterparts with a capping layer. Namely, the XAS exhibits less linear dichroism between σ and π polarization of the incident photon, indicating a more three dimensional orbital configuration in the uncapped NdNiO_2. In addition, the 3d^8R feature in the RIXS spectra also becomes stronger and resonates at a photon energy lower than the main peak in Ni L-edge XAS, producing a shoulder in the leading edge of the XAS <cit.>. Our results shown in Fig. <ref> and <ref>unambiguously establish the generic orbital configuration in capped La-, Pr-, and Nd-NiO_2 films and confirm the different electronic configurations between the capped and uncapped nickelate films. This notion gained further support from the work by Raji et al. via STEM-EELS and hard x-ray photoemission spectroscopy <cit.>. 
We also note that in the uncapped NdNiO_2 thin film, no magnetic excitations were resolved, raising a question regarding the dichotomy between the charge order and the spin order that could be switched by the presence of the capping layer <cit.>. We remark that this needs not be the case, as magnetic excitations and charge order coexist in our capped LaNiO_2 (Fig. <ref>).It is important to note that the recent discovery of charge order in infinite-layer nickelates has sparked a significant debate. Pelliciari et al. proposed that this observation could have arisen from the SrTiO_3 (STO) substrate's (1 0 1) Bragg peak due to the third harmonic x-ray photon contamination originating from the beamline <cit.>. Since all of our films were grown on STO substrates and capped with STO layers, we would have expected to observe the purported Bragg peak contamination in all of our samples. However, this is contradicted by the data presented in Fig. <ref>. In addition, we remark that Tam et al have conducted comprehensive measurements to specifically address Pelliciari's scenario <cit.>. Another scenario may be more difficult to completely rule out. Some recent studies propose that the "charge order" peak in Nd-based compounds originates from secondary phases, where ordered rows of excess apical oxygens with 3 unit cell periodicity (denoted as 3a_o) are formed <cit.>. In fully reduced NdNiO_2, no charge order is detected, as found in our NdNiO_2 and PrNiO_2 samples (Fig. <ref>). But are secondary phases also present in our LaNiO_2 sample?We first remark that while we cannot comprehensively rule out the presence of secondary phases, our LaNiO_2 film should predominantly consist of the infinite-layer nickelate phase, with a purity comparable to that of the NdNiO_2 and PrNiO_2 samples. This assertion is supported by the consistent spectroscopic features observed in XAS (Fig. <ref>), RIXS maps (Fig. <ref>), and RIXS magnetic excitation spectra (Fig. <ref>) across the three families of nickelates. Also, the charge order in LaNiO_2 has an incommensurate wave-vector, unlike the commensurate 3a_o unit-cell periodicity found in the uncapped NdNiO_2 and the secondary phase <cit.>. Moreover, our previous work demonstrates a systematic doping evolution of the charge order wavevctor, width, and onset temperature, inconsistent with an explanation solely based on oxygen order of secondary phases <cit.>.In order to gain further insight about secondary phases in our LaNiO_2 films, we conduct a detailed structural investigation of a capped LaNiO_2 thin film grown on SrTiO_3 with identical conditions to that studied by RIXS using high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM). Figure <ref> shows a representative HAADF-STEM image of a large area of the LaNiO_2 thin film and the fast Fourier transform (FFT) of the micrograph which provides quantitative analysis of the periodicities present in the image. The diagonal canting which is observed on either side of the HAADF-STEM image in Fig. <ref> has been discussed elsewhere <cit.> and is attributed to a local reorientation of the c-axis into the plane of the film as a mechanism for relieving the relatively high compressive strain imposed on the reduced LaNiO_2 by epitaxial growth on SrTiO_3. 
As sampled by our HAADF-STEM measurements, most of the film is consistent with the reduced infinite-layer structure and certain types of heterogeneity in the lattice (e.g., Ruddlesden-Popper type stacking faults) which have been well documented throughout all reported infinite-layer nickelate thin film species <cit.>. We do, however, observe some limited regions of off-stoichiometry from the infinite-layer formula LaNiO_2. But rather than the 3a_o order in the excess oxygen, we find instead local regions of film which appear consistent with the nominally unreduced perovskite precursor phase, i.e., LaNiO_3. The HAADF-STEM image in Fig. <ref> shows one region with local perovskite- and infinite-layer like regions on either side of the ∼40 nm field of view, separated by a diagonal canted region in the middle of the image. The right-hand region marked by a yellow box (Region B) and its corresponding FFT show a lattice which is consistent with the infinite-layer phase: namely a reduced c-axis lattice constant and lack of additional superlattice peaks. The left-hand region marked by the blue box (Region A) and its corresponding selected area FFT show a lattice structure which is consistent with the perovskite precursor phase, characterized in particular by the half-order in-plane (1/2, 0, 0) superlattice peaks and the expanded c-axis lattice constant similar to that of the SrTiO_3 substrate (STO in Fig. <ref>). A subtle (1/4, 0, 1/4) ordering can be found in the few unit cells nearest to the substrate-film interface within this perovskite-like region, as marked by the orange box and arrows the HAADF-STEM image and corresponding FFT shown in Figure <ref>. Across a set of ∼40 images constituting a quasi-random survey spanning ∼1.5 μm in this LaNiO_2 thin film, we find no signs of the (1/3, 0, 1/3) peaks of the 3a_o order described in similar studies of NdNiO_2 thin films. It is important to note that this lack of evidence for 3a_o oxygen ordering does not definitively rule out the possibility of such ordering elsewhere in the film, but rather provides an estimated upper bound on the prevalence any such ordering we would expect in this and similar samples. Still, it is further interesting to reflect that the 3a_o ordering which has been observed by similar techniques in uncapped NdNiO_2 thin films reflects a different character of accommodating oxygen off-stoichiometry than what we observe here <cit.>. Here, we find nm-scale regions which appear more or less fully perovskite-like interspersed between large regions of what appear to be fully reduced infinite-layer, while reports in the uncapped NdNiO_2 thin films suggest a wider or more disperse distribution of excess oxygen into regions which can be locally considered NdNiO_2+δ (δ = 0.33 - 0.66) <cit.>. We speculate that these differences in the oxygen distribution could be related to different intrinsic effects of the La- and Nd-based compounds as well as to more extrinsic consideration such as lattice defects which may act as barriers or channels for oxygen de-intercalation during the chemical reduction process. A more thorough and comprehensive understanding of these differences is called for, and should include comparison of samples with different rare earth cations (La, Nd, and even Pr) synthesized by different research groups. Finally, some characteristics of LaNiO_2 are noteworthy. As suggested by Been et al. 
<cit.>, due to the presence of empty 4f states, the calculated size of the La 5d Fermi surface (FS) pocket and the effective mass along the out-of-plane direction are the smallest among all rare-earth infinite layer nickelates and do not follow the same trend as others. Additionally, theory has also suggested that the modulation of charge order might be intimately related to both the rare-earth FS and the Ni 3d_x^2-y^2 states <cit.>. Therefore, it is possible that the La 5d FS in LaNiO_2 is somehow more conducive to the emergence of charge order. Another characteristic of LaNiO_2 is its epitaxial strain. By comparing the lattice constants to those of the bulk crystal powders <cit.>, we determined that the La-, Pr-, and Nd-NiO_2 thin films exhibit compressive epitaxial strains of 1.34%, 0.9%, and 0.6%, respectively. Interestingly, the LaNiO_2 film experiences the most compressive strain compared to the other two families of nickelates. Drawing from the insights gained from the study of cuprates, where the emergence and behavior of charge order can be substantially influenced by strain <cit.>, it may be reasonable to speculate whether strain plays a role in the emergence of charge order in infinite-layer nickelates. Further investigations are necessary to clarify the role of the rare-earth 5d Fermi surface (FS) pocket and strain in the emergence of charge order.In summary, our RIXS data showcase the orbital configuration and magnetic excitations of early rare-earth infinite-layer nickelate thin films with a SrTiO_3 capping layer. The observed universal orbital configuration and magnetic features indicate a common behavior among these nickelates. Interestingly, we only observed a signature of charge order in LaNiO_2, but not in PrNiO_2 and NdNiO_2, calling for further investigations. As an outlook, it would be interesting to investigate infinite-layer nickelates with high-Z rare-earth elements, which can further examine the universality of the electronic and magnetic structures, as well as the emergence of other instabilities, as predicted by some theories <cit.>.This work is supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under contract DE-AC02-76SF00515. We acknowledge the Gordon and Betty Moore Foundation’s Emergent Phenomena in Quantum Systems Initiative through grant number GBMF9072 for synthesis equipment. We acknowledge Diamond Light Source for time on beamline I21-RIXS under Proposal NT25165 and MM25598. This research also used resources of the Advanced Light Source, a U.S. DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. B.H.G. acknowledges support by the Department of Defense Air Force Office of Scientific Research (No. FA 9550-16-1-0305). This work made use of the Cornell Center for Materials Research (CCMR) Shared Facilities, which are supported through the NSF MRSEC Program (No. DMR-1719875). The FEI Titan Themis 300 was acquired through No. NSF-MRI-1429155, with additional support from Cornell University, the Weill Institute, and the Kavli Institute at Cornell. The Thermo Fisher Helios G4 UX FIB was acquired with support by NSF No. DMR-1539918. 
The Thermo Fisher Spectra 300 X-CFEG was acquired with support from PARADIM, an NSF MIP (DMR-2039380), and Cornell University.

D. Li, K. Lee, B. Y. Wang, M. Osada, S. Crossley, H. R. Lee, Y. Cui, Y. Hikita, and H. Y. Hwang, Superconductivity in an infinite-layer nickelate, Nature 572, 624 (2019).
V. I. Anisimov, D. Bukhvalov, and T. M. Rice, Electronic structure of possible nickelate analogs to the cuprates, Phys. Rev. B 59, 7901 (1999).
K.-W. Lee and W. E. Pickett, Infinite-layer LaNiO_2: Ni^1+ is not Cu^2+, Phys. Rev. B 70, 165109 (2004).
D. Li, B. Y. Wang, K. Lee, S. P. Harvey, M. Osada, B. H. Goodge, L. F. Kourkoutis, and H. Y. Hwang, Superconducting dome in Nd_1-xSr_xNiO_2 infinite layer films, Phys. Rev. Lett. 125, 027001 (2020).
S. Zeng, C. S. Tang, X. Yin, C. Li, M. Li, Z. Huang, J. Hu, W. Liu, G. J. Omar, H. Jani, Z. S. Lim, K. Han, D. Wan, P. Yang, S. J. Pennycook, A. T. S. Wee, and A. Ariando, Phase diagram and superconducting dome of infinite-layer Nd_1-xSr_xNiO_2 thin films, Phys. Rev. Lett. 125, 147003 (2020).
K. Lee, B. Y. Wang, M. Osada, B. H. Goodge, T. C. Wang, Y. Lee, S. Harvey, W. J. Kim, Y. Yu, C. Murthy, S. Raghu, L. F. Kourkoutis, and H. Y. Hwang, Linear-in-temperature resistivity for optimally superconducting (Nd,Sr)NiO_2, Nature 619, 288 (2023).
M. Hepting, L. Chaix, E. W. Huang, R. Fumagalli, Y. Y. Peng, B. Moritz, K. Kummer, N. B. Brookes, W. C. Lee, M. Hashimoto, T. Sarkar, J.-F. He, C. R. Rotundu, Y. S. Lee, R. L. Greene, L. Braicovich, G. Ghiringhelli, Z. X. Shen, T. P. Devereaux, and W. S. Lee, Three-dimensional collective charge excitations in electron-doped copper oxide superconductors, Nature 563, 374 (2018).
M. Rossi, H. Lu, A. Nag, D. Li, M. Osada, K. Lee, B. Y. Wang, S. Agrestini, M. Garcia-Fernandez, J. J. Kas, Y.-D. Chuang, Z. X. Shen, H. Y. Hwang, B. Moritz, K.-J. Zhou, T. P. Devereaux, and W. S. Lee, Orbital and spin character of doped carriers in infinite-layer nickelates, Phys. Rev. B 104, L220505 (2021).
B. H. Goodge, D. Li, K. Lee, M. Osada, B. Y. Wang, G. A. Sawatzky, H. Y. Hwang, and L. F. Kourkoutis, Doping evolution of the Mott-Hubbard landscape in infinite-layer nickelates, Proc. Natl. Acad. Sci. USA 118, e2007683118 (2021).
M. Rossi, M. Osada, J. Choi, S. Agrestini, D. Jost, Y. Lee, H. Lu, B. Y. Wang, K. Lee, A. Nag, Y.-D. Chuang, C.-T. Kuo, S.-J. Lee, B. Moritz, T. P. Devereaux, Z.-X. Shen, J.-S. Lee, K.-J. Zhou, H. Y. Hwang, and W.-S. Lee, A broken translational symmetry state in an infinite-layer nickelate, Nature Physics 18, 869 (2022).
M. Osada, B. Y. Wang, K. Lee, D. Li, and H. Y. Hwang, Phase diagram of infinite layer praseodymium nickelate Pr_1-xSr_xNiO_2 thin films, Phys. Rev. Mater. 4, 121801 (2020).
M. Osada, B. Y. Wang, B. H. Goodge, S. P. Harvey, K. Lee, D. Li, L. F. Kourkoutis, and H. Y. Hwang, Nickelate superconductivity without rare-earth magnetism: (La,Sr)NiO_2, Advanced Materials 33, 2104083 (2021).
S. P. Harvey, B. Y. Wang, J. Fowlie, M. Osada, K. Lee, Y. Lee, D. Li, and H. Y. Hwang, Evidence for nodal superconductivity in infinite-layer nickelates, arXiv:2201.12971 (2022).
B. Y. Wang, T. C. Wang, Y.-T. Hsu, M. Osada, K. Lee, C. Jia, C. Duffy, D. Li, J. Fowlie, M. R. Beasley, T. P. Devereaux, I. R. Fisher, N. E. Hussey, and H. Y. Hwang, Effects of rare-earth magnetism on the superconducting upper critical field in infinite-layer nickelates, Science Advances 9, eadf6655 (2023).
Y. Nomura, M. Hirayama, T. Tadano, Y. Yoshimoto, K. Nakamura, and R. Arita, Formation of a two-dimensional single-component correlated electron system and band engineering in the nickelate superconductor NdNiO_2, Phys. Rev. B 100, 205138 (2019).
P. Adhikary, S. Bandyopadhyay, T. Das, I. Dasgupta, and T. Saha-Dasgupta, Orbital-selective superconductivity in a two-band model of infinite-layer nickelates, Phys. Rev. B 102, 100501(R) (2020).
A. S. Botana and M. R. Norman, Similarities and differences between LaNiO_2 and CaCuO_2 and implications for superconductivity, Phys. Rev. X 10, 011024 (2020).
Y. Gu, S. Zhu, X. Wang, J. Hu, and H. Chen, A substantial hybridization between correlated Ni-d orbital and itinerant electrons in infinite-layer nickelates, Commun. Phys. 3, 84 (2020).
J. Kapeghian and A. S. Botana, Electronic structure and magnetism in infinite-layer nickelates RNiO_2 (R = La-Lu), Phys. Rev. B 102, 205130 (2020).
F. Lechermann, Late transition metal oxides with infinite-layer structure: Nickelates versus cuprates, Phys. Rev. B 101, 081110 (2020).
I. Leonov, S. L. Skornyakov, and S. Y. Savrasov, Lifshitz transition and frustration of magnetic moments in infinite-layer NdNiO_2 upon hole doping, Phys. Rev. B 101, 241108 (2020).
Z. Liu, Z. Ren, W. Zhu, Z. Wang, and J. Yang, Electronic and magnetic structure of infinite-layer NdNiO_2: trace of antiferromagnetic metal, npj Quantum Mater. 5, 31 (2020).
H. Sakakibara, H. Usui, K. Suzuki, T. Kotani, H. Aoki, and K. Kuroki, Model construction and a possibility of cupratelike pairing in a new d^9 nickelate superconductor (Nd,Sr)NiO_2, Phys. Rev. Lett. 125, 077003 (2020).
X. Wu, D. Di Sante, T. Schwemmer, W. Hanke, H. Y. Hwang, S. Raghu, and R. Thomale, Robust d_x^2-y^2-wave superconductivity of infinite-layer nickelates, Phys. Rev. B 101, 060504 (2020).
E. Been, W.-S. Lee, H. Y. Hwang, Y. Cui, J. Zaanen, T. Devereaux, B. Moritz, and C. Jia, Electronic structure trends across the rare-earth series in superconducting infinite-layer nickelates, Phys. Rev. X 11, 011050 (2021).
J. B. Torrance, P. Lacorre, A. I. Nazzal, E. J. Ansaldo, and C. Niedermayer, Systematic study of insulator-metal transitions in perovskites RNiO_3 (R = Pr, Nd, Sm, Eu) due to closing of charge-transfer gap, Phys. Rev. B 45, 8209 (1992).
S. Middey, J. Chakhalian, P. Mahadevan, J. Freeland, A. Millis, and D. Sarma, Physics of ultrathin films and heterostructures of rare-earth nickelates, Annu. Rev. Mater. Res. 46, 305 (2016).
Y. Zhang, J. Zhang, X. He, J. Wang, and P. Ghosez, Rare-earth control of phase transitions in infinite-layer nickelates, PNAS Nexus 2, pgad108 (2023).
A. Subedi, Possible structural quantum criticality tuned by rare-earth ion substitution in infinite-layer nickelates, Phys. Rev. Mater. 7, 024801 (2023).
H. Lu, M. Rossi, A. Nag, M. Osada, D. F. Li, K. Lee, B. Y. Wang, M. Garcia-Fernandez, S. Agrestini, Z. X. Shen, E. M. Been, B. Moritz, T. P. Devereaux, J. Zaanen, H. Y. Hwang, K.-J. Zhou, and W. S. Lee, Magnetic excitations in infinite-layer nickelates, Science 373, 213 (2021).
Q. Gao, S. Fan, Q. Wang, J. Li, X. Ren, I. Biało, A. Drewanowski, P. Rothenbühler, J. Choi, Y. Wang, T. Xiang, J. Hu, K.-J. Zhou, V. Bisogni, R. Comin, J. Chang, J. Pelliciari, X. J. Zhou, and Z. Zhu, Magnetic excitations in strained infinite-layer nickelate PrNiO_2, arXiv:2208.05614 (2022).
C. C. Tam, J. Choi, X. Ding, S. Agrestini, A. Nag, M. Wu, B. Huang, H. Luo, P. Gao, M. García-Fernández, L. Qiao, and K.-J. Zhou, Charge density waves in infinite-layer NdNiO_2 nickelates, Nature Materials 21, 1116 (2022).
G. Krieger, L. Martinelli, S. Zeng, L. E. Chow, K. Kummer, R. Arpaia, M. Moretti Sala, N. B. Brookes, A. Ariando, N. Viart, M. Salluzzo, G. Ghiringhelli, and D. Preziosi, Charge and spin order dichotomy in NdNiO_2 driven by the capping layer, Phys. Rev. Lett. 129, 027002 (2022).
X. Ren, R. Sutarto, Q. Gao, Q. Wang, J. Li, Y. Wang, T. Xiang, J. Hu, F.-C. Zhang, J. Chang, R. Comin, X. J. Zhou, and Z. Zhu, Symmetry of charge order in infinite-layer nickelates, arXiv:2303.02865 (2023).
K. Lee, B. H. Goodge, D. Li, M. Osada, B. Y. Wang, Y. Cui, L. F. Kourkoutis, and H. Y. Hwang, Aspects of the synthesis of thin film superconducting infinite-layer nickelates, APL Mater. 8, 041107 (2020).
M. Osada, B. Y. Wang, B. H. Goodge, K. Lee, H. Yoon, K. Sakuma, D. Li, M. Miura, L. F. Kourkoutis, and H. Y.
Hwang, title title A superconducting praseodymium nickelate with infinite layer structure, https://doi.org/10.1021/acs.nanolett.0c01392 journal journal Nano Letters volume 20, pages 5735 (year 2020b), note pMID: 32574061,https://arxiv.org/abs/https://doi.org/10.1021/acs.nanolett.0c01392 https://doi.org/10.1021/acs.nanolett.0c01392 NoStop [Raji et al.(2023)Raji, Krieger, Viart, Preziosi, Rueff, and Gloter]raji2023 author author A. Raji, author G. Krieger, author N. Viart, author D. Preziosi, author J.-P. Rueff, and author A. Gloter, @nooptitle Charge distribution across capped and uncapped infinite-layer neodymium nickelate thin films (year 2023), https://arxiv.org/abs/2306.10507 arXiv:2306.10507 [cond-mat.mtrl-sci] NoStop [Parzyck et al.(2023)Parzyck, Gupta, Wu, Anil, Bhatt, Bouliane, Gong, Gregory, Luo, Sutarto, He, Chuang, Zhou, Herranz, Kourkoutis, Singer, Schlom, Hawthorn, and Shen]parzyck2023 author author C. T. Parzyck, author N. K. Gupta, author Y. Wu, author V. Anil, author L. Bhatt, author M. Bouliane, author R. Gong, author B. Z. Gregory, author A. Luo, author R. Sutarto, author F. He, author Y. D. Chuang, author T. Zhou, author G. Herranz, author L. F. Kourkoutis, author A. Singer, author D. G. Schlom, author D. G. Hawthorn, and author K. M. Shen, @nooptitle Absence of 3a_0 charge density wave order in the infinite layer nickelates (year 2023), https://arxiv.org/abs/2307.06486 arXiv:2307.06486 [cond-mat.supr-con] NoStop [SM()]SM @noopnote See Supplemental Material at [URL] for additional experiment details, data and analysis.Stop [Sala et al.(2011)Sala, Bisogni, Aruta, Balestrino, Berger, Brookes, de Luca, Castro, Grioni, Guarise, Medaglia, Granozio, Minola, Perna, Radovic, Salluzzo, Schmitt, Zhou, Braicovich, and Ghiringhelli]Moretti2011 author author M. M. Sala, author V. Bisogni, author C. Aruta, author G. Balestrino, author H. Berger, author N. B. Brookes, author G. M. de Luca, author D. D. Castro, author M. Grioni, author M. Guarise, author P. G. Medaglia, author F. M. Granozio, author M. Minola, author P. Perna, author M. Radovic, author M. Salluzzo, author T. Schmitt, author K. J. Zhou, author L. Braicovich, and author G. Ghiringhelli, title title Energy and symmetry of dd excitations in undoped layered cuprates measured by Cu L_3 resonant inelastic x-ray scattering, https://doi.org/10.1088/1367-2630/13/4/043026 journal journal New J. Phys. volume 13, pages 043026 (year 2011)NoStop [Lamsal and Montfrooij(2016)]Lamsal2016 author author J. Lamsal and author W. Montfrooij, title title Extracting paramagnon excitations from resonant inelastic x-ray scattering experiments, https://doi.org/10.1103/PhysRevB.93.214513 journal journal Phys. Rev. B volume 93, pages 214513 (year 2016)NoStop [Coldea et al.(2001)Coldea, Hayden, Aeppli, Perring, Frost, Mason, Cheong, and Fisk]Coldea2001 author author R. Coldea, author S. M. Hayden, author G. Aeppli, author T. G. Perring, author C. D. Frost, author T. E. Mason, author S.-W. Cheong, and author Z. Fisk, title title Spin Waves and Electronic Interactions in La_2CuO_4, https://doi.org/10.1103/PhysRevLett.86.5377 journal journal Phys. Rev. Lett. volume 86, pages 5377 (year 2001)NoStop [Hayward and Rosseinsky(2003)]Hayward2003 author author M. Hayward and author M. Rosseinsky, title title Synthesis of the infinite layer Ni(I) phase NdNiO_2+x by low temperature reduction of NdNiO_3 with sodium hydride, https://doi.org/https://doi.org/10.1016/S1293-2558(03)00111-0 journal journal Solid State Sci. 
volume 5, pages 839(year 2003)NoStop [Lin et al.(2022)Lin, Gawryluk, Klein, Huangfu, Pomjakushina, von Rohr, and Schilling]Lin_2022 author author H. Lin, author D. J. Gawryluk, author Y. M. Klein, author S. Huangfu, author E. Pomjakushina, author F. von Rohr, and author A. Schilling, title title Universal spin-glass behaviour in bulk LaNiO_2, PrNiO_2 and NdNiO_2, https://doi.org/10.1088/1367-2630/ac465e journal journal New Journal of Physics volume 24, pages 013022 (year 2022)NoStop [Ortiz et al.(2022)Ortiz, Puphal, Klett, Hotz, Kremer, Trepka, Hemmida, von Nidda, Isobe, Khasanov, Luetkens, Hansmann, Keimer, Schäfer, and Hepting]Ortiz2022 author author R. A. Ortiz, author P. Puphal, author M. Klett, author F. Hotz, author R. K. Kremer, author H. Trepka, author M. Hemmida, author H.-A. K. von Nidda, author M. Isobe, author R. Khasanov, author H. Luetkens, author P. Hansmann, author B. Keimer, author T. Schäfer, and author M. Hepting, title title Magnetic correlations in infinite-layer nickelates: An experimental and theoretical multimethod study, https://doi.org/10.1103/PhysRevResearch.4.023093 journal journal Phys. Rev. Res. volume 4, pages 023093 (year 2022)NoStop [Ghiringhelli et al.(2012)Ghiringhelli, Tacon, Minola, Blanco-Canosa, Mazzoli, Brookes, Luca, Frano, Hawthorn, He, Loew, Sala, Peets, Salluzzo, Schierle, Sutarto, Sawatzky, Weschke, Keimer, and Braicovich]Ghiringhelli2012YBCO author author G. Ghiringhelli, author M. L. Tacon, author M. Minola, author S. Blanco-Canosa, author C. Mazzoli, author N. B. Brookes, author G. M. D. Luca, author A. Frano, author D. G. Hawthorn, author F. He, author T. Loew, author M. M. Sala, author D. C. Peets, author M. Salluzzo, author E. Schierle, author R. Sutarto, author G. A. Sawatzky, author E. Weschke, author B. Keimer, and author L. Braicovich, title title Long-Range Incommensurate Charge Fluctuations in (Y, Nd)Ba_2Cu_3O_6+x, https://doi.org/10.1126/science.1223532 journal journal Science volume 337, pages 821 (year 2012)NoStop [Pelliciari et al.(2023)Pelliciari, Khan, Wasik, Barbour, Li, Nie, Tranquada, Bisogni, and Mazzoli]pelliciari2023comment author author J. Pelliciari, author N. Khan, author P. Wasik, author A. Barbour, author Y. Li, author Y. Nie, author J. M. Tranquada, author V. Bisogni, and author C. Mazzoli, @nooptitle Comment on newly found charge density waves in infinite layer nickelates (year 2023), https://arxiv.org/abs/2306.15086 arXiv:2306.15086 [cond-mat.supr-con] NoStop [Tam et al.(2023)Tam, Choi, Ding, Agrestini, Nag, Wu, Huang, Luo, Gao, Garcia-Fernandez, Qiao, and Zhou]tam2023reply author author C. C. Tam, author J. Choi, author X. Ding, author S. Agrestini, author A. Nag, author M. Wu, author B. Huang, author H. Luo, author P. Gao, author M. Garcia-Fernandez, author L. Qiao, and author K.-J. Zhou, @nooptitle Reply to "comment on newly found charge density waves in infinite layer nickelates” (year 2023), https://arxiv.org/abs/2307.13569 arXiv:2307.13569 [cond-mat.str-el] NoStop [Zeng et al.(2022)Zeng, Li, Chow, Cao, Zhang, Tang, Yin, Lim, Hu, Yang, and Ariando]Zeng2022 author author S. Zeng, author C. Li, author L. E. Chow, author Y. Cao, author Z. Zhang, author C. S. Tang, author X. Yin, author Z. S. Lim, author J. Hu, author P. Yang, and author A. 
Ariando, title title Superconductivity in infinite-layer nickelate La_1-xCa_xNiO_2 thin films, https://doi.org/10.1126/sciadv.abl9927 journal journal Science Advances volume 8, pages eabl9927 (year 2022), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/sciadv.abl9927 https://www.science.org/doi/pdf/10.1126/sciadv.abl9927 NoStop [Peng et al.(2022)Peng, Jiang, Moritz, Devereaux, and Jia]peng2022 author author C. Peng, author H.-C. Jiang, author B. Moritz, author T. P. Devereaux, and author C. Jia, @nooptitle Charge order and superconductivity in a minimal two-band model for infinite-layer nickelates (year 2022), https://arxiv.org/abs/2110.07593 arXiv:2110.07593 [cond-mat.str-el] NoStop [Kim et al.(2018)Kim, Souliou, Barber, Lefrançois, Minola, Tortora, Heid, Nandi, Borzi, Garbarino, Bosak, Porras, Loew, König, Moll, Mackenzie, Keimer, Hicks, and Tacon]Kim2018 author author H.-H. Kim, author S. M. Souliou, author M. E. Barber, author E. Lefrançois, author M. Minola, author M. Tortora, author R. Heid, author N. Nandi, author R. A. Borzi, author G. Garbarino, author A. Bosak, author J. Porras, author T. Loew, author M. König, author P. J. W. Moll, author A. P.Mackenzie, author B. Keimer, author C. W. Hicks, and author M. L. Tacon, title title Uniaxial pressure control of competing orders in a high-temperature superconductor, https://doi.org/10.1126/science.aat4708 journal journal Science volume 362, pages 1040 (year 2018), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aat4708 https://www.science.org/doi/pdf/10.1126/science.aat4708 NoStop [Bluschke et al.(2018)Bluschke, Frano, Schierle, Putzky, Ghorbani, Ortiz, Suzuki, Christiani, Logvenov, Weschke, Birgeneau, da Silva Neto, Minola, Blanco-Canosa, and Keimer]Bluschke2018j author author M. Bluschke, author A. Frano, author E. Schierle, author D. Putzky, author F. Ghorbani, author R. Ortiz, author H. Suzuki, author G. Christiani, author G. Logvenov, author E. Weschke, author R. J. Birgeneau, author E. H. da Silva Neto, author M. Minola, author S. Blanco-Canosa, and author B. Keimer, title title Stabilization of three-dimensional charge order in YBa_2Cu_3O_6+x via epitaxial growth, @noopjournal journal Nature Communications volume 9, pages 2978 (year 2018)NoStop [Choi et al.(2022)Choi, Wang, Jöhr, Christensen, Küspert, Bucher, Biscette, Fischer, Hücker, Kurosawa, Momono, Oda, Ivashko, Zimmermann, Janoschek, and Chang]Choi2022 author author J. Choi, author Q. Wang, author S. Jöhr, author N. B. Christensen, author J. Küspert, author D. Bucher, author D. Biscette, author M. H. Fischer, author M. Hücker, author T. Kurosawa, author N. Momono, author M. Oda, author O. Ivashko, author M. v. Zimmermann, author M. Janoschek, and author J. Chang, title title Unveiling unequivocal charge stripe order in a prototypical cuprate superconductor, https://doi.org/10.1103/PhysRevLett.128.207002 journal journal Phys. Rev. Lett. volume 128, pages 207002 (year 2022)NoStop [Wang et al.(2022)Wang, von Arx, Mazzone, Mustafi, Horio, Küspert, Choi, Bucher, Wo, Zhao, Zhang, Asmara, Sassa, Månsson, Christensen, Janoschek, Kurosawa, Momono, Oda, Fischer, Schmitt, and Chang]Wang2022 author author Q. Wang, author K. von Arx, author D. G. Mazzone, author S. Mustafi, author M. Horio, author J. Küspert, author J. Choi, author D. Bucher, author H. Wo, author J. Zhao, author W. Zhang, author T. C. Asmara, author Y. Sassa, author M. Månsson, author N. B. Christensen, author M. Janoschek, author T. Kurosawa, author N. Momono, author M. 
Oda, author M. H. Fischer, author T. Schmitt, and author J. Chang, title title Uniaxial pressure induced stripe order rotation in La_1.88Sr_0.12CuO_4, https://doi.org/10.1038/s41467-022-29465-4 journal journal Nature Communications volume 13, pages 1795 (year 2022)NoStop [Gupta et al.(2023)Gupta, Sutarto, Gong, Idziak, Hale, Kim, and Hawthorn]gupta2023 author author N. K. Gupta, author R. Sutarto, author R. Gong, author S. Idziak, author H. Hale, author Y.-J. Kim, and author D. G. Hawthorn, @nooptitle Tuning charge density wave order and structure via uniaxial stress in a stripe-ordered cuprate superconductor (year 2023), https://arxiv.org/abs/2305.16499 arXiv:2305.16499 [cond-mat.str-el] NoStop
Terra Quantum AG, 9000 St. Gallen, Switzerland HAKOM Time Series GmbH, 1230 Vienna, Austria Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, Switzerland Terra Quantum AG, 9000 St. Gallen, SwitzerlandPredicting solar panel power output is crucial for advancing the energy transition but is complicated by the variable and non-linear nature of solar energy. This is influenced by numerous meteorological factors, geographical positioning, and photovoltaic cell properties, posing significant challenges to forecasting accuracy and grid stability. Our study introduces a suite of solutions centered around hybrid quantum neural networks designed to tackle these complexities. The first proposed model, the Hybrid Quantum Long Short-Term Memory, surpasses all tested models by over 40% lower mean absolute and mean squared errors. The second proposed model, Hybrid Quantum Sequence-to-Sequence neural network, once trained, predicts photovoltaic power with 16% lower mean absolute error for arbitrary time intervals without the need for prior meteorological data, highlighting its versatility. Moreover, our hybrid models perform better even when trained on limited datasets, underlining their potential utility in data-scarce scenarios. These findings represent a stride towards resolving time series prediction challenges in energy power forecasting through hybrid quantum models, showcasing the transformative potential of quantum machine learning in catalyzing the renewable energy transition.Photovoltaic power forecasting using quantum machine learning Alexey Melnikov January 14, 2024 =============================================================§ INTRODUCTION Electricity generation prediction, especially for photovoltaic (PV) systems, is a crucial tool for renewable energy adoption <cit.>. The global economy must radically reduce emissions to stay within the 1.5C pathway (Paris Agreement) and the transition to renewable energy sources is necessary to achieve these objectives <cit.>. According to the IEA, solar PV’s installed power capacity is poised to surpass that of coal by 2027, becoming the largest in the world.Accurate PV power forecasts are vital for multiple facets of the energy industry such as long-term investment planning, regulatory compliance for avoiding penalties, and renewable energy management across storage, transmission, and distribution activities. Several studies show that an increase in forecasting accuracy reduces electricity generation from conventional sources. Increased accuracy also reduces operating costs of systems through reducing the uncertainty of PV power generation <cit.>. They support improving the stability and sustainability of the power grid through optimizing power flow and counteracting solar power's intermittent nature <cit.>. Such predictions are foundational in increasing the economic viability and improving the adoption of solar energy as they inform pricing and economic dispatch strategies, bolster competitiveness and over time reduce reliance on reserve power. Additionally, they assist in managing energy storage effectively and integrating PV systems into the power grid <cit.>, which is essential for the enduring success of renewable energy solutions <cit.>. 
Traditional methods for predicting PV power have primarily relied on statistical models, machine learning algorithms, or a blend of both <cit.>. These approaches encompass a diverse toolkit, ranging from time series forecasting and artificial neural networks <cit.>, to support vector machines <cit.>, k-nearest neighbor methods <cit.>, and random forest models <cit.>. However, the intermittent and non-linear nature of solar power generation, influenced by a wide range of meteorological factors, poses a significant challenge to the performance of these conventional models <cit.>.In light of these challenges, quantum machine learning (QML) emerges as a promising avenue. This rapidly evolving field, which melds the principles of quantum mechanics with classical machine learning <cit.>, can offer enhanced capabilities for improving the forecasting accuracy of time series tasks <cit.>, including PV power generation <cit.>. QML's potential arises from quantum features like superposition and entanglement, promising exponential speedups in certain tasks <cit.>. Moreover, QML algorithms produce inherently probabilistic results, aptly suited for prediction tasks, and they may potentially function within an exponentially larger search space, amplifying their efficacy <cit.>. Nonetheless, implementing quantum algorithms bears its own set of challenges, such as the need for error correction and sensitivity to external interference <cit.>. Yet, in spite of these challenges, hybrid quantum-classical models, especially hybrid quantum neural networks (HQNNs), have showcased their potential in diverse industrial realms, including healthcare <cit.>, energy <cit.>, aerospace <cit.>, logistics <cit.> and automotive <cit.> industries.In this article, we present three types of hybrid quantum models as potential solutions for PV power forecasting. We assess the performance of our proposed models using a publicly accessible dataset, encompassing a comprehensive array of meteorological variables as well as hourly mean PV power measurements spanning a 21-month period. This dataset, along with the data preprocessing and analytical methodologies employed, is described in detail in Section <ref>.Our first proposed HQNN architecture, articulated in Section <ref>, incorporates classical fully connected layers with a vanilla variational repetitive quantum layer (VVRQ). Our second model, delineated in Section <ref>, constitutes a hybrid quantum adaptation of the classical recurrent neural network, termed the Hybrid Quantum Long Short-Term Memory with quantum depth-infused layer (HQLSTM). While the first two models can predict the power for a certain hour ahead, the third model, presented in Section <ref>, a Hybrid Quantum Sequence-to-Sequence Neural Network with quantum depth-infused layer, HQSeq2Seq, after training is capable of forecasting PV power for arbitrary time intervals without requiring prior meteorological data. Remarkably, despite having fewer parameters, our hybrid quantum models outperform their classical counterparts in terms of more accurate predictions, including trained on a reduced dataset. We summarize our conclusions and outline future research directions in Section <ref>.§ RESULTS The application of HQNNs in addressing time series prediction challenges, specifically in forecasting PV power output offers several advantages. Primarily, their capability to operate within an exponentially larger computational search space enables them to efficiently capture intricate data patterns and relationships <cit.>. 
This feature not only enhances forecast accuracy <cit.> but also streamlines the learning process, requiring fewer iterations for model optimization <cit.>. Furthermore, the inherent capacity of quantum technologies to manage the uncertainty and noise ubiquitous in data offers more resilient and trustworthy predictions <cit.>. This is particularly pertinent to power forecasting, given the inherent noise in meteorological data. Recent research also suggests that quantum models can be represented as partial Fourier series, positioning them as potential universal function approximators <cit.>, thereby broadening their applicability and efficacy in predictive tasks.In terms of architecture, an HQNN is an amalgamation of classical and quantum components. The classical segments may consist of fully connected layers, convolutional layers, or recurrent layers, while the quantum segments are typically represented by variational quantum circuits (VQCs) or their contemporary modifications <cit.>. §.§ Dataset To underscore the advantages of hybrid quantum models using empirical evidence, we selected a publicly accessible dataset <cit.> from a conventional generation plant situated in the Mediterranean region. This dataset not only provides comprehensive data but also allows benchmarking with results from various algorithms available in literature. A comparative analysis of our model's predictions and those from the study by <cit.> is provided in Section <ref>. The dataset, presented as a numerical table showcased in Fig. <ref>, encompasses variables like hourly mean ambient temperature (T_a), hourly mean module temperature (T_m), hourly mean solar irradiance recorded on two tilted planes with tilt angles of 3 and 15 degrees (I_3,I_15), and hourly mean PV power (P) spanning 21 months, accounting for more than 500 days.Beyond the scope of constructing models for predicting the output of PV panels, this dataset's utility extends to other applications. It aids in planning distributed battery energy storage systems <cit.>, devising novel energy collection systems <cit.>, and researching the degradation patterns of photovoltaic panels <cit.>. The dataset's multifaceted applicability emphasizes its significance.To ensure the validity and precision of the data, meticulous preprocessing and analysis were undertaken. We discovered approximately 20 anomalies in the original dataset. To maintain a continuous timeline, missing data points were replaced with the arithmetic mean of the preceding and succeeding day's values. Additionally, data corresponding to the date “12/31/13” was excluded as it contained all-zero values, suggesting an error in data collection. As a result, we obtained an uninterrupted dataset ranging from 4:00 AM on “3/5/12” to 12:00 AM on “12/30/13”.Additional in-depth analysis of the dataset was also conducted for a more nuanced understanding. Fig. <ref>(a) delineates the hourly distribution of PV power across the entire period. As expected, peak PV power values occur during midday, whereas night time values plummet to zero. Fig. <ref>(b) portrays monthly PV power fluctuations, which are more volatile compared to daily patterns, likely attributable to the limited number of full-year periods in the dataset. Fig. <ref>(c) presents a correlation matrix for the dataset features, identifying solar irradiances I_3 and I_15 as the features most correlated with PV power. Finally, the joint distribution of dataset features depicted in Fig. 
<ref>(c) further confirms that solar intensity is the feature most highly correlated with PV power. §.§ HQNN This section introduces our first proposed model, referred to as the HQNN. As illustrated in Fig. <ref>, the model accepts weather data spanning 24 consecutive hours as its input. The output is a prediction of the PV power for the upcoming 25th hour. The HQNN presented at the Fig. <ref>(a) is a combination of classical fully-connected layers, in our case with 120, 17 and 8 neurons, and a VVRQ layer, which is a VQC, consisting of q qubits and d repetitions of variational layers, each distinguished by unique weights. The choice of 120 neurons is methodical: the model ingests 5 distinct features for each of the 24 hours, resulting in a total of 120 = 5 * 24. The determination of the remaining parameters stemmed from an extensive hyperparameter optimization process, detailed in the subsequent sections.Initially, every qubit in the VVRQ layer is set to the state |0⟩. We subsequently encode the classical data by converting it into rotation angles around one of the X, Y, Z axes using R_x, R_y, R_z gates respectively. This conversion employs the angle embedding technique <cit.>. For each qubit, the rotation angle, denoted by x_j, is determined by the j-th component of the input vector.Following this, the variational layer is applied, which can either utilize “basic” or “strong” entanglements. For the “basic” entanglement, each qubit undergoes a rotation by an angle w_j^i around the X axis, subsequently followed by a layer of CNOT gates <cit.>. Conversely, for the “strong” entanglement, each qubit is sequentially rotated by the angles w^(Z_1)_ji, w^(Y_2)_ji, and w^(Z_3)_ji around the Z, Y, and Z axes, respectively. This sequence is then followed by a layer of CNOT gates. In both cases, the variables i and j play crucial roles in determining the operations. The variable i signifies the particular wire to which the operation is applied, and it takes values from the set 1, 2, …, q. Meanwhile, the variable j represents the number of variational layers and ranges from 1, 2, …, d.Lastly, all qubits are measured in Pauli-Z basis, yielding the classical vector v∈ℝ^q. This output serves as input for a subsequent classical fully-connected layer. This layer processes information from q neurons into 1 neuron that predicts the power value. The proposed HQNN model will be compared with its classical analog – a Multilayer Perceptron (MLP) that consists of 4 fully connected layers with 120, 32, 3, 3, and 1 neurons. The number of neurons in each layer was selected by a hyperparameter optimization procedure, detailed in the subsequent sections. §.§ HQLSTM This section presents a description of our second hybrid model – HQLSTM, which is a hybrid analog of the classical LSTM model <cit.>, with which predictions will be compared in the following sections. LSTM architectures have garnered significant attention in the realm of time series forecasting, including in predicting PV power <cit.>. HQLSTM models have proven themselves well for solving problems from various fields. Examples of successful use of this model are the tasks of natural language processing <cit.>, the detection of software vulnerabilities <cit.>, and predicting solar radiation <cit.>.In this proposed model we added a quantum layer to each of the LSTM gates <cit.>. Let's take a closer look at our implementation, depicted in Fig <ref>(b). The input to the model is: * The current step information, represented by a green circle, x(t). 
This is a tensor of size 5, reflecting the five features for an hour, which include meteorological data and the PV power itself.* The information from the previous step, denoted by a purple circle, h(t-1). It consists of a tensor of size h_dim. For the initial step, this is simply a zero vector. These inputs are processed through classical fully-connected layers to yield vectors with a uniform dimension of 4n_q. These vectors are then concatenated through a bitwise addition operation.Subsequently, this concatenated vector is partitioned into four distinct groups, for the four gates of the LSTM cell. As illustrated in Fig. <ref>(b-c), each group is directed to the input of its corresponding quantum layer, symbolized by the QDI square.The outputs from QDI layers are transformed via classical fully-connected layers to standardize their dimensions to h_dim. Following this, activation functions together with appropriate for each of the 4 gates transformations, similar to the classical LSTM, are applied to the outputs originating from the quantum layers. This processing culminates in the derivation of the new cell state C(t) and the hidden state h(t) vectors.The process operates in a cyclical manner. For each iteration, the vector from the current time step, x(t), and the hidden vector from the previous step, h(t-1), serve as inputs to the HQLSTM. This iterative process is executed as many times as the input width; in our case input width equals 24. Subsequently, all the hidden vectors are concatenated to produce a single composite vector. This vector is then processed through a fully-connected layer consisting of a single neuron, which outputs a value that predicts the PV power.In our first proposed architecture, the HQNN, the quantum layer functioned as a vanilla layer, where variational layers were sequentially placed after the encoding layer. In contrast, in the HQLSTM approach we used a QDI layer <cit.> as depicted in Fig. <ref>(b). Here, variational layers are positioned multiple times before the encoding layer (green rectangular) and additionally (purple rectangular), within each encoding layer (blue rectangular) to increase the layer's expressivity. §.§ HQSeq2Seq Here we present a hybrid version of the Sequence-to-Sequence (Seq2Seq) model, first introduced in <cit.>. Seq2seq models are widely used in natural language processing tasks <cit.>, where the length of input and output sequence is not pre-determined and can be variable. We can also apply the principle of Seq2Seq models to the power prediction task <cit.>. That means we can feed the neural network with time series with arbitrary length and prompt it to give us the forecast for any hours ahead. In this problem setting, the longer the input time series is, the better the model prediction is. The same applies to the required output length: the shorter it is, the easier it is for the model to generate the forecast.The seq2Seq model is a type of encoder-decoder model. The encoder is given the entire input sequence, which it uses to generate a context vector. This vector is used as an input hidden state for the decoder, so it literally provides it with “context”, according to which the decoder will generate the forecast. Thereby, the hidden dimensions of the encoder and the decoder must match. The decoder creates the output sequence step by step. It starts with only one entry: the one which is the last known. Based on this entry and the context vector, the decoder generates the second entry and appends it to the existing one. 
Now, the obtained two-entry sequence is once again fed into the decoder to generate the third entry. Then, the cycle repeats until the length of the generated sequence matches the length requested by the user.We create and compare two models with Seq2Seq architecture: the classical Seq2Seq and the hybrid model called HQSeq2Seq. Both of these models have identical LSTMs acting as encoders and decoders. In the classical model, the decoder's hidden output vector is mapped to the “Power” value with a single linear layer, while in HQSeq2Seq it is processed by a QDI layer <cit.>.In the QDI layer, instead of attempting to use a qubit for each feature <cit.>, we employed the data re-uploading technique <cit.>. Specifically, we work with 4 qubits and structure them into a lattice of depth 4 (depicted as a blue big rectangular in Fig. <ref>(d)). Each of our 16 input features leading to the quantum layer is intricately encoded within this lattice. The first four features are mapped onto the initial depth, followed by the subsequent features in blocks of four. Encoding these classical features into the quantum domain, we adopt the “angle embedding” using R_z gate. This operation effectively translates the input vector into a quantum state that symbolizes the preceding classical layer's data. Entangling variational layers, signified by purple squares, are interposed between every encoding layer, ensuring optimal Fourier accessibility. Each variational layer has two components: rotations governed by trainable parameters, and sequential CNOT gates. The rotations are implemented by quantum gates that metamorphose the encoded input in line with the variational parameters, while the CNOT operations handle the entanglement of the qubits, facilitating quantum superposition. Each lattice depth, represented by each blue square, encompasses a variational layer (purple square). Moreover, prior to all encoding layers, we introduce a variational layer (designated by a green square) for enhanced model representation. Consequently, the total weight count in the quantum segment of our network is 20. In the measurement phase, except for the first qubit, all qubits execute a CNOT operation targeting the first qubit, ensuring the Y-measurement spans all qubits. Therefore, the quantum layer's output serves as the power value prediction for a specific hour.The input size of the encoder and decoder can differ, which is a substantial benefit. For instance, we can use all of the 5 features to create a context vector, but request to generate the forecast for only 1 feature. Exploiting this advantage, we will feed the Seq2Seq model with a window of all known features and demand the forecast only for the “Power” one.For simplicity's sake, we will train both models with fixed input and output length of 96 hours and then try to vary the length in the testing stage.§.§ Training and results In the study, six distinct models were employed for PV power prediction based on weather features: HQNN, MLP, HQLSTM, LSTM, HQSeq2Seq, and Seq2Seq. 
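To make the quantum component more concrete before turning to the training details, the following PennyLane-style sketch illustrates a QDI-like layer of the kind used in the HQLSTM and HQSeq2Seq models described above (4 qubits, 16 re-uploaded input features, an initial variational layer, and a Y-basis readout on the first qubit). It is a minimal sketch under our own assumptions: the rotation axis of the trainable layers, the exact entanglement pattern, and all identifiers are illustrative rather than the authors' implementation.

```python
import pennylane as qml

N_QUBITS = 4   # qubits in the QDI lattice
N_BLOCKS = 4   # encoding blocks (data re-uploading depth); 4 x 4 = 16 input features

dev = qml.device("lightning.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def qdi_layer(features, weights):
    # features: length-16 vector coming from the preceding classical layer
    # weights: shape (N_BLOCKS + 1, N_QUBITS) trainable angles -> 20 parameters,
    # matching the parameter count quoted for the quantum part of the network

    # Variational layer placed before all encoding layers
    for w in range(N_QUBITS):
        qml.RY(weights[0, w], wires=w)
    for w in range(N_QUBITS - 1):
        qml.CNOT(wires=[w, w + 1])

    # Data re-uploading: each block encodes 4 features and applies a variational layer
    for b in range(N_BLOCKS):
        for w in range(N_QUBITS):
            qml.RZ(features[4 * b + w], wires=w)   # angle embedding of the inputs
        for w in range(N_QUBITS):
            qml.RY(weights[b + 1, w], wires=w)     # trainable rotations
        for w in range(N_QUBITS - 1):
            qml.CNOT(wires=[w, w + 1])             # entangling gates

    # Fold information onto the first qubit and read out a single expectation value
    for w in range(1, N_QUBITS):
        qml.CNOT(wires=[w, 0])
    return qml.expval(qml.PauliY(0))
```

Wrapped in a differentiable interface (e.g., PennyLane's TorchLayer), such a circuit can be trained jointly with the surrounding classical layers, with the single expectation value serving as the predicted power for a given hour.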
To train the models, the mean square error (MSE) was chosen as the loss function:

MSE = 1/N∑_n=1^N(x_n-y_n)^2,

where N is the number of predictions, x denotes the predicted PV power, and y represents the actual PV power value. To test the models, in addition to the MSE loss metric, we also used the mean absolute error (MAE), the root mean squared error (RMSE), and the variance account factor (VAF):

MAE = 1/N∑_n=1^N|x_n-y_n|, RMSE = √(1/N∑_n=1^N(x_n-y_n)^2), VAF = (1 - Var(y⃗-x⃗)/Var(y⃗))·100%.

Here x⃗ = (x_1, x_2, …, x_N) and y⃗ = (y_1, y_2, …, y_N) represent the vectors of predicted and target PV power values, respectively, where N is the number of predicted values. All the machine learning simulations for this study were conducted on CPUs, on the QMware cloud <cit.> device. The classical part of our modeling was structured using the PyTorch library <cit.>, while the quantum part was implemented using the PennyLane framework. Notably, PennyLane provides an assortment of qubit devices. For our requirements, we selected the lightning.qubit device, which is a custom backend for simulating quantum state-vector evolution. To compute the gradients of the loss function relative to each parameter, we employed the widely recognized backpropagation algorithm <cit.> for the classical components of our hybrid models, and the adjoint method highlighted in Refs. <cit.> for the quantum part.

§.§.§ HQNN & MLP and HQLSTM & LSTM

Both the HQNN and its classical analog, the MLP, were trained for 20 epochs. In contrast, the HQLSTM and its classical counterpart, the LSTM, were trained for over 50 epochs. The Adam optimiser <cit.> from the PyTorch framework was used to update the parameters of the models in order to minimize their loss functions. The comprehensive training process, accompanied by the results, is delineated in Fig. <ref> and in Table <ref>. In this study, we employed cross-validation as a fundamental technique to assess the performance of our models across distinct testing subsets. Cross-validation is pivotal to safeguard against potential data leakage from the training dataset into the testing dataset; to achieve this, a 24-hour time window on each side of the testing subsets was systematically excluded from the dataset. Furthermore, we partitioned the dataset into training and testing sets in a 4:1 ratio. This strategy promotes a comprehensive evaluation of our models, as we carried out model training and assessment on five distinct data splits. The results obtained from these splits were then averaged, and the mean values thus derived served as the primary metrics for inter-model comparisons. The use of cross-validation bolsters the robustness and reliability of our results, as it diminishes the reliance on a specific train-test partitioning, thereby enhancing the credibility of our findings. In a head-to-head comparison between HQNN and MLP, the former exhibits superior performance regarding training and testing losses across three key metrics: RMSE, MAE, and MSE. Specifically, HQNN surpasses MLP's power prediction accuracy by 41% as estimated by the MSE loss and by 26% as estimated by the MAE loss, all the while having 1.8 times fewer parameters (2266 vs. 3987). On juxtaposing HQLSTM with LSTM, the former outperforms in training and testing loss across all three aforementioned metrics.
Remarkably, the HQLSTM has roughly 40% better predictive ability than the LSTM as assessed by the MSE and MAE metrics, and it achieves this with less than half the number of parameters (1109 vs. 2857). Moreover, HQLSTMs are more resistant to overfitting, whereas the classical LSTM suffers from it. In a broader comparison encompassing all four models, HQLSTM emerges as the most precise model on all metrics, being 52% more precise than HQNN while having two times fewer trainable parameters. It is worth noting that we performed hyperparameter optimization using the Optuna optimizer <cit.>. The set of optimized parameters, the limits of their variation, and the best sets of hyperparameters for all four of our models are presented in Table <ref>. Moreover, we scanned external articles that refer to this dataset and found only one article that solves 1-hour-ahead PV power prediction using neural networks. In comparison, our HQNN model is better than the model from that article by 40% according to the VAF metric (91 vs. 65). Further, to confirm that hybrid models also train better on a smaller dataset, an additional experiment was conducted wherein the volume of training data was intentionally reduced. The results are shown in Figure <ref>. The hybrid models performed better with less data, showing lower losses and better prediction capabilities than the classical models.

§.§.§ HQSeq2Seq & Seq2Seq

After preprocessing the dataset, which is described in Section <ref>, it spanned 12775 hours from 3/5/12 4:55 AM to 8/19/13 10:00 AM for training, and 3194 hours from 8/19/13 11:00 PM to 12/30/13 00:00 AM for testing. Although the models are capable of being trained on sequences of arbitrary lengths, we chose to use sequences of a fixed 96 hours for simplicity. In this case, the encoder gets 96 hours of all available features, while the decoder is asked to extrapolate only the “Power" feature of the data 96 hours ahead. Training for 15 epochs with the Adam optimizer (learning rate 0.001) proved to be enough for the models to converge (Fig. <ref> (f-g)). As an example of inference, we pass a time series with a length different from 96 into both models and prompt them to give us a forecast for 137 hours ahead (Fig. <ref> (h)). We can conclude that both models transition from the fixed sequence length to an arbitrary one quite well. It may even be possible to improve these results by introducing variable-length sequences into the training stage. We also measured the dependency of the test loss on the size of the training dataset for Seq2Seq and HQSeq2Seq, shown in Fig. <ref> (e). As one can see, the test RMSE loss of the hybrid model is lower for any size of training data, which shows that the hybrid model has an advantage over the classical model, including on a trimmed dataset.

§ DISCUSSION

In this work we introduced three hybrid quantum approaches to the time series prediction task. The first two models allow one to predict the power of solar panels for 1 hour ahead, using weather features from the previous 24 hours. The third model provides a longer-term, user-defined forecast, showcasing the versatility of our models for various planning tasks. The first approach is the HQNN, a combination of classical fully connected layers and a quantum VVRQ layer, an analog of a classical fully-connected layer.
We compared this hybrid model with its classical counterpart, the MLP, and demonstrated that, even though the HQNN has 1.8 times fewer variational parameters, it has 41% better predictive ability as estimated by the MSE error. The second approach is the HQLSTM, a hybrid quantum analogue of the classical LSTM. Here, a QDI layer is inserted into each gate of the LSTM cell. This approach provides a 40% improvement in prediction using the MAE and MSE metrics compared to its classical counterpart. Our proposed architecture is a unique combination of classical and quantum layers, which we believe to be a breakthrough in solving time series prediction tasks. Comparing the HQNN and HQLSTM models, the latter was 52% better than the HQNN while having two times fewer weights. The third approach is the hybrid Seq2Seq model, a classical Seq2Seq model consisting of two LSTMs with a quantum layer at the end. This approach allows one to predict the PV power not only for an hour ahead, but for any number of hours ahead, without knowing the weather features in advance. The addition of the proposed QDI layer improves the accuracy of the predictions, reducing the MAE error by 16% compared to a purely classical Seq2Seq model. Also, for all models, we conducted an additional experiment in which our models were trained on a reduced dataset, and confirmed that the hybrid models have better learning capabilities and lower loss for any amount of training data compared to their classical counterparts. This confirmation can serve as an excellent motivation to use hybrid networks for applications where data collection is a complex task. It is worth noting that all parameters in our layers are trainable; the architecture and hyperparameters were selected by the Optuna optimizer. Moreover, we compare our models to a paper that solved the same problem using the same dataset, and demonstrate that our best HQLSTM is 40% more accurate in predicting power using the VAF metric. To fully unlock the potential of HQNNs in time series prediction problems, further research and testing of the models on other datasets is necessary. Also, the development of more efficient VQC training and implementation methods, along with larger-scale quantum hardware, could lead to even more significant performance improvements. Furthermore, while this work was done on a public dataset with an emphasis on hybrid quantum models for better forecasting performance, the quality and source of the data play a crucial role in overall effectiveness in the real world, especially considering weather data. An accurate weather forecast is a crucial input into any high-performing and useful PV prediction given its dynamism and influence on PV output. An interesting area of research is cloud prediction using satellite and weather data for geo-locations, directly impacting solar irradiance and therefore PV output. The added complexity could further motivate the use of hybrid quantum models to increase computational efficiency and deliver higher-quality forecasts. To summarize, our developments provide three hybrid quantum approaches for time series problems that demonstrate the possibility of combining classical and quantum methods. Our proposed models show improved performance compared to classical models with similar architecture while using fewer variational parameters.
We believe that these results pave the way for further research in developing hybrid models that leverage the strengths of both classical and quantum computing.
Balancing Priorities in Patrolling with Rabbit Walks Rugved Katole^1,2*, Deepak Mallya^1*, Leena Vachhani^1, Arpita Sinha^1 ^1 Systems and Control Engineering, Indian Institute of Technology Bombay, ^2 Dept. of Mechanical Engineering, Birla Institute of Technology and Science Pilani, K.K. Birla Goa Campus================================================================================================================================================================================================================================================================= In an environment with certain locations of higher priority, it is required to patrol these locations as frequently as possible due to their importance. However, the Non-Priority locations are often neglected during the task. It is necessary to balance the patrols on both kinds of sites to avoid breaches in security. We present a distributed online algorithm that assigns the routes to agents that ensures a finite time visit to the Non-Priority locations along with Priority Patrolling.The proposed algorithm generates offline patrol routes (Rabbit Walks) with three segments (Hops) to explore non-priority locations.The generated number of offline walks depends exponentially on a parameter introduced in the proposed algorithm,thereby facilitating the scalable implementation based on the onboard resources available on each patrolling robot. A systematic performance evaluation through simulations and experimental results validates the proportionately balanced visits and suggests the proposed algorithm's versatile applicability in the implementation of deterministic and non-deterministic scenarios. § INTRODUCTIONPatrolling involves the systematic and repetitive visits of specific locations in a given environment. The task of patrolling plays a crucial role in the surveillance and monitoring of the environment, acting as a deterrent for anomalous activities. The multi-robot patrolling has been explored to consider repeated coverage with various patrol objectives. These methods have evolved to utilize multi-agent benefits, including cyclic strategies<cit.>, partition-based strategies<cit.>, learning-based patrolling <cit.>, adversarial patrolling <cit.>, and negotiation mechanisms<cit.>.It has been shown that Static surveillance using CCTV reduces the incidence of theft and burglaries by 19% <cit.>. Furthermore, crime rates have reduced by 23% during police patrolling <cit.>. Additionally, robots equipped with sensors are used for regular inspection and monitoring of infrastructure like buildings, bridges, tunnels, storage tanks, pipelines, and roads. They enhance the safety of operations and improve cost-effectiveness <cit.>. The global market for inspection robots is projected to reach $18.9 billion by 2030 <cit.>.The patrolling environment contains different patrolling locations, with certain locations of higher priority than others. The locations of higher priority (Priority Nodes) are usually the critical points of threats or failures. Although these priority locations are of absolute importance, patrolling other locations (Non-Priority Nodes) is also necessary to avoid disruption in operation. From a security standpoint, if the Non-Priority Nodes are not patrolled frequently, it makes it easier for adversaries to infiltrate through these locations. In this work, we present a patrolling methodology that balances the visits to priority and Non-Priority Nodes. The challenge is in associating parameters that would ensure the desired balancing. 
The notion of optimality is to minimize the overall graph idleness or the worst idleness, i.e., the time delay between consecutive visits to a location <cit.>. Typically, the environment is represented using an undirected graph with nodes representing visiting locations and edges representing connecting paths. The edge weights are characterized by factors such as distance <cit.>, expected travel time <cit.>, priorities <cit.>, and visitation frequency <cit.>. These weights play a crucial role in decision-making by assigning different rewards to visits of different nodes. Usually, reward functions in multi-robot patrolling employ idleness, which measures the time elapsed between two consecutive visits, as a key component. The optimality of an algorithm is often assessed by minimizing the overall idleness. Different multi-agent architectures have been evaluated for patrolling under varying parameters, including reactive algorithms versus cognitive algorithms and coordination schemes, using different metrics based on idleness <cit.>. A factor approximation to the optimal solution for cyclic, acyclic, and chain graphs through partitioning has been studied in the past <cit.>. Heavy-edge heuristics are used in this multi-level graph partitioning approach for consistent partitioning of large graphs <cit.>. The method achieves this through multi-node swapping based on equilibrium, with the goal of reducing redundancy and minimizing robot attrition. k-means clustering has been used for partitioning, together with a Simulated Annealing algorithm <cit.> for pathfinding. Lauri et al. <cit.> use ant colony optimization techniques for partitioning and path-finding. These centralized methods have a central station/node for partitioning and for calculating the shortest paths for a robot to patrol. However, the centralized approach is prone to a single point of failure. Therefore, researchers have explored distributed approaches as a robust alternative. The distributed approach typically uses reward-based online route assignment, where the reward functions use idleness values along with other metrics to achieve optimality. The work in <cit.> extends <cit.> by deepening the heuristics and choosing a slightly longer path to reduce overall idleness. Other multi-agent patrolling methods include auction-based patrolling and learning-based patrolling. In the auction-based method, the robots negotiate a bid for a node, and through communication, the best robot gets to patrol the node. The bidding is based on different heuristics such as patrol path length <cit.>, path distance <cit.>, travel cost, and number of tasks <cit.>. The learning-based methods are highly suited for dynamic environments due to their adaptability and probabilistic nature; the robots use their local idleness values to make local patrolling decisions. A patrolling problem has been modeled as a semi-Markov decision process for cooperative multi-agent reinforcement learning, where the agents communicate using flags, by broadcasting intentions, or both; a reward function that maximizes selfish utility is found to be more effective than one using the wonderful life utility (a utility considering all other agents' intentions) <cit.>. A recent work leverages graph attention networks for persistent monitoring using multi-agent reinforcement learning <cit.>. The agents share locally perceived information using graph attention networks, a neural network architecture that operates on graph-structured data <cit.>.
In <cit.>, the authors use Bayesian reasoning and learning to adapt to the system dynamics; their method uses a Bayesian decision-making model with likelihood reward-based learning and continual prior updates. All of this work in multi-agent patrolling leads to either deterministic or non-deterministic patrolling routes. In a scenario where adversarial forces attempt to infiltrate the patrolled area, knowledge of the patrolling routes can easily be obtained by observing the agents over a period of time. For problem objectives with patrolling locations of absolute importance (priority), it is also important to sweep the other locations of the environment as frequently as possible to eliminate threats from those areas. Previously, this has been addressed through weighted graphs <cit.>, where the routes are generated based on the ratio of the maximum and minimum weights of the locations (Priority and Non-Priority Nodes, respectively, in the graph). A solution based on route selection has the clear advantage of sweeping the environment faster than one based on node selection. A reward-based route selection algorithm <cit.> has been developed for priority patrolling; its reward function considers the idleness overshoot with respect to the given time period of visits for Priority Nodes, their proximity, and the length of the route for online route assignment. A partitioning algorithm <cit.> based on the distribution of Priority and Non-Priority Nodes is also used for priority patrolling, with each segment then patrolled by a robot. Deep Q-learning has been employed for prioritizing the sanitation of crowded spots at railway stations <cit.>. Moreover, the number of routes to explore increases exponentially with the time period, so an exhaustive search that considers all the routes for selection is impractical. A deterministic strategy is well suited for monitoring or inspection purposes where the threat of attack is not considered; on the other hand, security and surveillance purposes require non-deterministic strategies to ensure safety. Furthermore, the patrolling problem focuses on balancing the visits to all the locations/nodes, whereas the priority patrolling problem aims to visit the Priority Nodes, leaving the visits to Non-Priority Nodes unaccounted for. What is missing is a distributed strategy that ensures priority patrolling with accountability for visits to all locations, including non-priority ones. In this paper, we focus on multi-agent distributed patrolling to balance the visits between priority and non-priority locations in an environment. The objective is to visit priority locations as well as non-priority locations such that the Non-Priority Nodes are visited in finite time. The algorithm explores more routes through Non-Priority Nodes, so that the resulting increase in priority-location idleness is compensated by the reduction in the overall graph maximum idleness. The algorithm is designed for practical implementation on robots with limited resources: it uses a parameter based on the robot's memory for route generation and then selects a route from a subset of the generated routes, which reduces the computational and memory requirements of the robot. Contributions: The contributions of this work are summarized as follows. * A new approach for efficiently balancing priorities among priority and non-priority locations/nodes during patrolling tasks. The proposed algorithm dynamically assigns routes to robots, minimizing the delay between subsequent visits to each location. * A novel resource-aware route generation algorithm.
The algorithm generates a finite number of routes based on a tunable parameter, enabling exploration within the robot's onboard resource constraints. * Performance trade-offs among the four proposed variants of online route assignment designed to balance priorities. These deterministic and non-deterministic variants ensure that the visits to the Priority and Non-Priority Nodes are proportionately balanced. The remainder of the paper is organized as follows: In section <ref>, we formalize the Priority Patrolling Problem (PPP) for proportionately balanced visits and establish the terminology. Section <ref> presents the four variants of the solution to the PPP for resource-aware implementations. In section <ref>, we present the evaluation metrics and discuss a systematic simulation and experimental analysis of the proposed variants. Finally, section <ref> concludes the paper and presents future directions. § PROBLEM FORMULATION The patrolling environment is represented as a strongly connected directed graph denoted by 𝒢(𝒱, ℰ), where 𝒱 is the set of nodes indicating patrolling locations and ℰ is the set of edges describing the road segments between nodes v_i and v_j. Given a patrolling environment 𝒢(𝒱, ℰ), the objective of the priority patrolling problem is to visit the Priority Nodes as often as possible while also ensuring visits to the Non-Priority Nodes. The set of Priority Nodes is denoted by 𝒮⊂𝒱 and the set of remaining Non-Priority Nodes is denoted by 𝒩𝒮= 𝒱∖𝒮. The set of identical mobile agents is denoted by 𝒜 = {a_1, …, a_k}. Figure <ref> shows an illustrative graph with Priority and Non-Priority Nodes. The Patrol Strategy is denoted by 𝒫 = {w_1, …, w_k}, where w_i ∈𝒫 is the walk (a sequence of consecutive, non-repeating edges) traversed by agent a_i ∈𝒜 during the patrol. At time t, the Instantaneous Idleness (or simply Idleness) I_i(t) of a node v_i ∈𝒱 is the time elapsed since the last visit to v_i by any agent in 𝒜. The Instantaneous Maximum Idleness over the Priority Nodes is given by ℐ_S(t) = max_v_i ∈𝒮 I_i(t). The objective is then to reduce the Maximum Idleness of the Priority Nodes while patrolling the entire environment such that the idleness of the Non-Priority Nodes remains finite. [PPP: Priority Patrolling Problem] Given a patrolling environment represented as a graph 𝒢(𝒱, ℰ) with Priority Nodes 𝒮 and the set 𝒜 of agents, find a Patrol Strategy 𝒫 that achieves min_𝒫 max_t ℐ_S(t) such that the maximum idleness of each Non-Priority Node is finite. § PRIORITY PATROLLING ALGORITHM The Priority Patrolling Algorithm aims to find an optimal strategy for each agent that minimizes the maximum Priority Node idleness while keeping the idleness of all Non-Priority Nodes finite. For each agent, the algorithm selects a walk from one Priority Node to another to achieve balanced visits to all nodes. As an agent completes its walk, it is assigned another walk. There are two phases to finding an optimal strategy - offline and online. In the first phase (offline), we compute the walks from one Priority Node to another Priority Node. For an exhaustive search strategy, the number of walks generated increases exponentially with the size of the graph; we address this issue by introducing “Rabbit Walks,” each consisting of three “Hops” (analogous to a rabbit's hops). In the second phase (online), we select a target Priority Node and a Rabbit Walk that maximizes the sum of the Idleness of its nodes.
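For concreteness, the idleness bookkeeping underlying these definitions and the later evaluation metrics can be sketched in a few lines of Python. This is an illustrative sketch only (the class and method names are ours, not the paper's), assuming that node visits arrive as time-stamped events reported by the agents.

```python
class IdlenessTracker:
    """Tracks instantaneous idleness I_i(t) for every node of the patrol graph."""

    def __init__(self, nodes, priority_nodes, t0=0.0):
        self.priority_nodes = set(priority_nodes)
        # Every node is treated as "just visited" at t0, so idleness starts at zero.
        self.last_visit = {v: t0 for v in nodes}

    def record_visit(self, node, t):
        """Reset the idleness of `node` when any agent reaches it at time t."""
        self.last_visit[node] = t

    def idleness(self, node, t):
        """Instantaneous idleness I_i(t): time elapsed since the last visit."""
        return t - self.last_visit[node]

    def priority_max_idleness(self, t):
        """Instantaneous Maximum Idleness over the Priority Nodes."""
        return max(self.idleness(v, t) for v in self.priority_nodes)

    def graph_max_idleness(self, t):
        """Maximum idleness over all nodes (used later as an evaluation metric)."""
        return max(self.idleness(v, t) for v in self.last_visit)


# Example usage on a toy 4-node graph with one priority node.
tracker = IdlenessTracker(nodes=[0, 1, 2, 3], priority_nodes=[2])
tracker.record_visit(2, t=5.0)
print(tracker.priority_max_idleness(t=8.0))  # -> 3.0
print(tracker.graph_max_idleness(t=8.0))     # -> 8.0 (nodes 0, 1, 3 never revisited)
```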
§.§ Phase 1: Rabbit Walks Generation We define a Rabbit Walk as a walk between two Priority Nodes, v_s and v_t, with three consecutive Hops.[The source Priority Node v_s and target Priority Node v_t may or may not be the same.] The first Hop is a walk starting from the source node v_s and has “H” nodes. The next two Hops are the shortest paths from the end of the first Hop to the Priority Node v_t via a random node v_R. The Rabbit Walk Generator Algorithm (<ref>) is divided into three steps. In step 1 (Lines 1-7), we construct a tree with depth H and the source Priority Node v_s as the root of the tree. The children of each node in the tree are its neighbors 𝒩 in the graph 𝒢. To avoid cyclic walks (walks with repeated edges), we ensure that the pair formed by the last node of the candidate walk, w.last, and the new node v is not already an edge in w (Line 5). In Line 6, we extend each walk w with h nodes (a walk in W_h) by the node v and store the result in the set of walks W_h+1. For instance, Figure <ref>(a) illustrates a tree with depth H = 2. The tree starts with the source Priority Node v_s, which then extends to its neighbors at H = 1 and to the neighbors of those neighbors at H = 2. Each path from the source to a leaf of the tree represents a candidate walk to be considered for the first Hop. The number of candidate walks depends on the depth H and the degree d of the graph; hence, at most d^H walks are generated in the first step. For the second and third Hops, a random node v_R is selected from the nodes not contained in w, a candidate walk from the first step. Step 2 (Lines 8-10) of the algorithm extends w by the shortest path from the last node w.last to the node v_R (Line 10). Similarly, in step 3 (Lines 11-12), the shortest path from v_R to the target Priority Node v_t (Line 12) generates the third Hop. These walks are then stored in the set W_s^t for v_t ∈𝒮, and the algorithm returns a collection of all walks starting from the source node v_s to all Priority Nodes v_t ∈𝒮. Remark 1: The maximum number of Rabbit Walks generated is the product of the numbers of candidates for the First, Second, and Third Hops, given by d^H × (|𝒱|- H+1) × |𝒮|. §.§ Phase 2: Rabbit Walk Assignment A Rabbit Walk is assigned to an agent at a source node v_s from the set of walks obtained in Phase 1. The sets of Rabbit Walks are arranged as W_s^t, the walks between each pair of Priority Nodes v_s and v_t. To avoid parsing through all the walks exhaustively, we propose four different ways to select a subset of walks W_s from the previously generated Rabbit Walks between all Priority Nodes. * PPA-Exhaustive: Every Priority Node is a candidate Target Node. W_s = ⋃_v_t ∈𝒮 W_s^t * PPA-Sampled: Based on an additional parameter N, we sample N Priority Nodes from 𝒮 and consider them as Target Nodes. W_s = ⋃_v_t ∈ f(𝒮, N) W_s^t, where f(𝒮, N) denotes the N nodes sampled from 𝒮 at the time of assignment. * PPA-Random: We select one Priority Node from 𝒮 uniformly at random and consider it as the Target Node. v_T ∼ Uniform(𝒮); W_s = W_s^T * PPA-Greedy: We maintain a counter C_t for each Priority Node v_t, denoting the number of times the corresponding node has been assigned as the Target Node. We then select the Priority Node with the least count as the Target Node. v_T = argmin_v_t ∈𝒮 C_t; W_s= W_s^T Remark 2: PPA-Exhaustive searches all the Rabbit Walks, i.e., at most d^H × (|𝒱|- H+1) × |𝒮| walks, whereas PPA-Random and PPA-Greedy search at most d^H × (|𝒱|- H+1) walks. PPA-Sampled searches d^H × (|𝒱|- H+1) × N Rabbit Walks.
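The three-step generation procedure above can be summarized by the following Python sketch. It is our own reconstruction rather than the authors' implementation: it assumes the environment is available as a networkx directed graph, uses networkx's shortest-path routine for the second and third Hops, and ignores edge weights and the exact line numbering of Algorithm <ref>.

```python
import random
import networkx as nx

def first_hop_candidates(G, v_s, H):
    """Step 1: depth-H tree rooted at v_s; each root-to-leaf path is a candidate first Hop."""
    walks = [[v_s]]
    for _ in range(H):
        extended = []
        for w in walks:
            for v in G.neighbors(w[-1]):
                # Reject extensions whose new edge (w.last, v) already appears in the walk.
                if (w[-1], v) not in set(zip(w, w[1:])):
                    extended.append(w + [v])
        walks = extended
    return walks

def rabbit_walks(G, v_s, priority_nodes, H):
    """Generate Rabbit Walks from v_s to every target Priority Node v_t."""
    walks_by_target = {v_t: [] for v_t in priority_nodes}
    for w in first_hop_candidates(G, v_s, H):
        # Step 2: pick a random intermediate node not already on the first Hop.
        candidates = [v for v in G.nodes if v not in w]
        if not candidates:
            continue
        v_R = random.choice(candidates)
        hop2 = nx.shortest_path(G, w[-1], v_R)
        for v_t in priority_nodes:
            # Step 3: shortest path from v_R to the target Priority Node.
            hop3 = nx.shortest_path(G, v_R, v_t)
            # Concatenate the three Hops, dropping the duplicated junction nodes.
            walks_by_target[v_t].append(w + hop2[1:] + hop3[1:])
    return walks_by_target
```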
Next, the reward calculation for a walk w ∈ W_s is given in Equation <ref>. Reward(t) = ∑_v_j ∈ w I_j(t), where I_j(t) is the instantaneous idleness of node v_j ∈𝒱. The agent is assigned the walk w with the maximum Reward value to solve Problem <ref>. In summary, the proposed method of walk assignment over W_s, a subset of the Rabbit Walks, ensures frequent visits to Priority Nodes while guaranteeing visits to Non-Priority Nodes in finite time. § SIMULATIONS AND EXPERIMENTAL RESULTS In this section, we present the results of our extensive simulations focused on evaluating the performance of the patrolling algorithm across various scenarios and layouts. The proposed algorithm and its variants are evaluated using the following metrics at the end of each simulation: * Priority Nodes' Maximum Idleness: Indicates the maximum time a Priority Node was idle or unvisited. max_v_i ∈𝒮 I_i(t) * Graph Maximum Idleness: Indicates the maximum time any node was idle or unvisited. max_v_i ∈𝒱 I_i(t) * Idleness Ratio: The ratio of the Graph Maximum Idleness to the Priority Nodes' Maximum Idleness. It indicates the proportional priority given to the Non-Priority Nodes and serves as a measure of balanced visits. max_v_i ∈𝒱 I_i(t)/max_v_i ∈𝒮 I_i(t) §.§ Simulation Settings The developed algorithm is evaluated through comprehensive simulations on the Simulator for Urban MObility (SUMO) <cit.>. SUMO is an open-source traffic simulator capable of large-scale traffic simulations; it hosts a number of vehicle types and motion models, and motion planning is handled by in-built functions, allowing effortless evaluation of patrolling strategies. The Traffic Control Interface (TraCI) is a Python API for SUMO that provides real-time control of vehicles. During the patrol, agents communicate with the server to track the idleness of the nodes. The maximum velocity of a patrolling vehicle is 10 m/s, and each simulation runs for 20,000 seconds. §.§ Graphs and other Settings Each algorithm setting is evaluated on three different environments, as shown in Figure <ref>. All proposed variants of the algorithm are evaluated for various settings (refer to Table <ref>). There are 81 unique simulation settings for each variant, and each setting is repeated three times with different initial conditions; hence, 243 simulations are performed per variant, for a total of 972 simulations. §.§ Experimental Set-up To evaluate the real-time capabilities of the PPA variants, we implement them on mobile robots. The experiments are conducted on a 5x5 grid layout. The TurtleBot3 Burger robot uses a Raspberry Pi 4B for onboard computing; the Raspberry Pi hosts an Ubuntu 20.04 server with ROS Noetic on a quad-core CPU @1.4GHz. Figure <ref> shows the block diagram for the experiments. We implement a hybrid reciprocal velocity obstacle method for multi-robot collision avoidance while patrolling <cit.>. The robots share the graph's idleness values through ROS topics. Each robot obtains its position on the grid through the motion capture system at the IITB-ARMS lab using a ROS package<cit.>. Every time a robot arrives at a node, it broadcasts its arrival, and all the robots update their idleness value for that node in the graph. For PPA-Greedy, where a counter is maintained for each Priority Node, the robots share this counter in a similar way through ROS topics. The experimental results quantify the memory and computational resources used by the proposed algorithm.
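As an illustration of the Phase-2 assignment described at the beginning of this section, the sketch below selects a Rabbit Walk using the idleness-sum reward of Equation <ref> together with the PPA-Greedy target selection. It is our own illustrative code, not the authors': it assumes the offline Rabbit Walks from Phase 1 are stored in a dictionary keyed by target node and that current idleness values are kept in a plain dictionary.

```python
def walk_reward(walk, idleness):
    """Reward of Equation <ref>: sum of the instantaneous idleness of the walk's nodes."""
    return sum(idleness[v] for v in set(walk))  # set() avoids double-counting repeated nodes

def assign_walk_greedy(walks_from_source, priority_nodes, target_counts, idleness):
    """PPA-Greedy assignment for an agent standing at a source Priority Node.

    walks_from_source: dict mapping each target Priority Node v_t to its offline
    Rabbit Walks W_s^t (as generated in Phase 1).
    target_counts: dict counting how often each Priority Node was chosen as target.
    idleness: dict mapping each node to its current instantaneous idleness.
    """
    v_T = min(priority_nodes, key=lambda v: target_counts[v])   # least-assigned target
    target_counts[v_T] += 1
    candidate_walks = walks_from_source[v_T]                    # W_s = W_s^T
    return max(candidate_walks, key=lambda w: walk_reward(w, idleness))
```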
§.§ Results and Analysis The performance analysis is presented in a systematic manner using the evaluation metrics namely Priority Nodes' Maximum Idleness, Graph Maximum Idleness, and Idleness Ratio. These evaluation metrics facilitate the validation of our claims on balanced visits in priority patrolling. Figure <ref> illustrates the maximum Idleness values observed both within Priority Nodes and across the entire node set for simulations conducted on the IIT Bombay (IITB) layout. Notably, under the PPA-Exhaustive variant, we observe a distinct increase in the maximum Idleness values within Priority Nodes as the value of H increases. This trend is attributed to the elongation of Rabbit Walks with a greater magnitude of H. Conversely, in the case of other variants, we observe a more even distribution of maximum Idleness values, indicating their robustness across varying H, including the case with only two hops (H=0).Across all PPA variants, there is a clear and consistent relationship between the value of H and Graph Maximum Idleness. As the H increases, the Graph's Maximum Idleness value decreases. Notably, the PPA-Greedy variant consistently outperforms other variants in terms of minimizing Graph Maximum Idleness. Under Grid and Campus 2 layouts, PPA-Greedy demonstrates comparable performance with PPA-Exhaustive.Next, we present the results from the experiments. Figure <ref> illustrates the relationship between memory requirements and H during the generation of Rabbit Walks. The Rabbit Walks generated for all possible pairs of Priority Nodes are considered during the calculation of memory requirements. The number of Priority Nodes is set to four for all layouts. It is evident from Figure <ref> that the number of Rabbit Walks increases exponentially with H. On a similar note, table <ref> shows the compute time for PPA-Greedy and PPA-Exhaustive with TurtleBot3's onboard computer. The PPA-Exhaustive requires more computational time than PPA-Greedy due to a larger set of walks to parse for assignment.We compare the online performance of our proposed variants (deterministic) with that of an offline method, the Latency Walks Algorithm <cit.>.Latency Walks Algorithm uses weighted graphs for priority patrolling. To keep the reference common for comparison, our Latency Walks implementation pre-assigns the weights of the Priority Nodes to the worst Idleness Ratio, while the weights of Non-Priority Nodes are set to 1. Figure <ref> provides a comparison between PPA-Exhaustive and PPA-Greedy (deterministic variants) when benchmarked against the Latency Walks algorithm, with the weight ratio set to 8, the maximal Idleness Ratio observed across all PPA variants. Moreover, we evaluate the balance using the Idleness Ratio, a metric that signifies the finite time visits to the Non-Priority nodes during Priority Patrolling. The case of an Idleness Ratio equal to 1 indicates equal priority being given to all nodes. Figure <ref> provides a comparison of the idleness ratio among Latency walks and PPA variants with H = 5. The Latency Walks algorithm consistently exhibits lower values of Graph Maximum Idleness as well as Priority Nodes' Maximum Idleness when compared to both PPA variants for lower H values. Therefore, in memory-scarce systems (H = 0 implementation), an offline Priority Patrolling strategy is a solution. With some onboard memory on robots (for example, scenarios with H=5) and the PPA-Greedy variant, Graph Idleness values are comparable with an increasing number of agents. 
Focusing on Priority Nodes, PPA-Greedy outperforms Latency Walks as the number of agents increases. These findings remain consistent across various Graph Layouts. Whereas the PPA-Exhaustive variant has the opposite effect. It has comparable Priority Nodes' Maximum Idleness values while there is an increase in Graph Maximum Idleness values. The PPA-Exhaustive variant has a higher range of Idleness Ratios in comparison with PPA-Greedy, thereby creating a greater proportional balance between Priority and Non-Priority.In summary, results showcase the effectiveness of the PPA-Greedy variant in balancing Graph Maximum Idleness and Priority Maximum Idleness, particularly when compared to PPA-Exhaustive and Latency Walks under various layout scenarios and with varying numbers of agents. These underscore its potential for practical application in real-world patrolling scenarios.§ CONCLUSION AND FUTURE WORKSWhile the Patrolling Problem deals with the entirety of the underlying graph at par, the primary objective of the proposed Priority Patrolling Problem is to achieve visits to the Priority Nodes as frequently as possible without compromising the overall patrolling of the environment. In this paper, we develop a novel online practical solution to the Priority Patrolling problem. We show through this work that the suggested algorithm addresses this objective of achieving a proportional balance between patrol of Priority as well as Non-Priority Nodes. The proposed variants are applicable in patrolling scenarios that range from routine inspection which can be handled in a deterministic manner to surveillance and threat detection, which involve arbitrary assignment of the agents' routes during operation.The proposed algorithm is independently scalable to (i) onboard resources,(ii) the number of agents, and (iii) graph size. Extensions can be explored to investigate different methodologies in terms of the number of Hops while generating Rabbit Walks and complex Reward functions to achieve auxiliary objectives.Since the optimal results are a function of the underlying Graph and Agent availability, we are exploring the analytical results on the guarantees of achieving optimality using the observations from this work.unsrt
http://arxiv.org/abs/2312.16564v1
{ "authors": [ "Rugved Katole", "Deepak Mallya", "Leena Vachhani", "Arpita Sinha" ], "categories": [ "cs.RO", "cs.MA" ], "primary_category": "cs.RO", "published": "20231227130907", "title": "Balancing Priorities in Patrolling with Rabbit Walks" }
[email protected] Department of Physics, Indian Institute of Science Education and Research (IISER), Pune 411008, India. Department of Physics and Astronomy, University of Manchester, United Kingdom, M13 9PL. Department of Physics, Indian Institute of Science Education and Research (IISER), Pune 411008, India.Department of Condensed Matter Physics and Materials Science, Tata Institute of Fundamental Research, Mumbai 400005, India.Department of Physics and Astronomy, University of Manchester, United Kingdom, M13 9PL.Department of Physics and Astronomy, University of Manchester, United Kingdom, M13 9PL.Department of Physics, Indian Institute of Science Education and Research (IISER), Pune 411008, India.Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 560012, India Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 560012, India Department of Physics, Indian Institute of Science Education and Research (IISER), Pune 411008, India. Department of Physics and Astronomy, University of Manchester, United Kingdom, M13 [email protected] Department of Physics, Indian Institute of Science Education and Research (IISER), Pune 411008, India.Transition-metal dichalcogenides (TMDs) host tightly bound quasi-particles called excitons. Based on spin and momentum selection rules, these excitons can be either optically bright or dark. In tungsten-based TMDs, momentum-forbidden dark exciton is the energy ground state and therefore it strongly affect the emission properties. In this work, we brighten the momentum forbidden dark exciton by placing WS_2 on top of nanotextured substrates which put the WS_2 layer under tensile strain, modifying electronic bandstructure. This enables phonon assisted scattering of exciton between momentum valleys, thereby brightening momentum forbidden dark excitons. Our results will pave the way to design ultrasensitive strain sensing devices based on TMDs.Tensile strain induced brightening of momentum forbidden dark exciton in WS_2 Atikur Rahman January 14, 2024 ============================================================================= TMDs (e.g. MX_2, M=Mo, W, X=S, Se) are known for their novel optical properties<cit.>. They host excitons - charge neutral electron-hole pairs bound by Coulomb interactions.<cit.>. The large spin-orbit coupling in WS_2 due to the heavy mass of W atom, splits the valance band (VB) maxima and conduction band (CB) minima at K, K' points in two sub-bands with opposite spin orientations (up, down at K and down, up at K') respectively. This results in the formation of two `bright' intravalley excitons with opposite spins at KK, K'K'<cit.>. There is possibility for the formation of indirect intervalley excitons KΛ as well. But because of the large momentum mismatch, they require the assistance of phonons to recombine radiatively by emitting a photon<cit.>.The KΛ exciton is therefore called momentum-forbidden dark exciton. In case of W-based ML TMDs, KΛ exciton is the excitonic ground state and has higher binding energy and longer lifetime than the bright excitons KK, K'K' and therefore they play an important role in the exciton dynamics of the system<cit.>. 
Thus, controlling them is essential for designing novel optical devices. The dark exciton can be brightened by exciton-phonon coupling if the energy of the available phonon mode matches the dark-bright exciton energy splitting, which can be effectively tuned by applying strain on ML WS_2<cit.>. Therefore, strain acts as a tuning knob for the emission of dark excitons<cit.>. In this work, we apply tensile strain on ML WS_2 by placing it on nanotextured substrates patterned with nanopillars. The nanopillars have height `h' and interpillar separation (center to center) `l' [Fig.<ref>a, c]. The conically shaped nanopillars, made of Si(100), have 10 nm insulating Al_2O_3 nanospheres on top [Fig.<ref>c]. We prepared samples with varied interpillar distances: l ∼ 25 nm (sample C-49), l ∼ 44 nm (C-99) and l ∼ 60 nm (C-132). By tuning `l' we tune the amount of strain applied on ML-WS_2. ML WS_2 was grown by chemical vapour deposition (CVD) and was transferred on top of the nanotextured substrate by a wet transfer technique [Fig. <ref>b] (details of sample preparation and characterization can be found in supplemental material sections I and XI and our previous work<cit.>). We perform temperature-dependent photoluminescence (PL) and Raman measurements on the strained and unstrained ML WS_2 samples. Supported by ab initio calculations, we discover the brightening of KΛ dark excitons by applying tensile strain. The PL measurements were performed using a continuous-wave laser of wavelength 514.5 nm. The details of the measurement can be found in the supplemental material section II. In the temperature-dependent PL study of ML WS_2 on the C-99 substrate [Fig. <ref>a], two well-resolved peaks, one at ∼ 2.01 eV (FWHM∼ 24 meV) and the other at ∼1.95 eV (FWHM∼ 32 meV), were observed at 280 K. The peak position and FWHM were extracted from each PL spectrum by fitting with a sum of Gaussian functions (see supplemental material section VI for details). We attribute the peak at 2.01 eV to the bright exciton KK/K'K' peak (X^0) and the peak at 1.95 eV to the negatively charged trion peak (X^-), since ML WS_2 is an n-doped semiconductor. We further confirm this attribution of the peaks from the excitation power (P) dependence of the integrated intensity (I). The data were fitted with the power-law dependence I∝ P^α, and we obtained values of α of 0.9 and 1.03 for X^0 and X^- respectively, typical for excitons and trions (see supplemental material section VII). As we lower the temperature, both X^0 and X^- blueshift, as reported earlier<cit.>. At around 200 K a new peak starts to appear at ∼1.94 eV. As we further decrease the temperature, the intensity of the new peak increases while the opposite is true for X^- and X^0: their intensities diminish<cit.>. The new peak can be attributed to (a) a biexciton (XX), (b) a defect-bound exciton (X^L), or (c) a dark exciton (X^D). The exponent α of the power-law dependence for XX and X^L is known to be superlinear (∼2.0) and sublinear (∼0.5), respectively<cit.>. From the I vs P plot of the new peak we obtain a value of α of about ∼ 0.97 and 1.15 at 180 and 100 K, respectively (see Fig. 2d and supplemental material section VII). Moreover, the new peak does not show any blueshift with increasing excitation power, which would be characteristic of X^L because of its broad energy distribution <cit.>. Instead, it showed a redshift due to local heating, a behaviour generally seen for excitons <cit.> (see supplemental material section VIII).
Furthermore, the new peak also shows anisotropy in circular-polarization-dependent PL, uncharacteristic of X^L <cit.> (see supplemental information section IV for details). Therefore, the new peak is neither XX nor X^L. However, the value of α and the peak position at 77 K (∼1.92 eV) are similar to recent reports of the observation of dark excitons under strain and strong exciton-phonon coupling<cit.>. We therefore attribute this new peak at ∼1.92 eV to X^D. We performed a similar study on the other two samples, namely C-48 and C-132 [Fig. <ref>b and c]. For the C-48 and C-132 samples we observed the X^0 and X^- peaks at room temperature, but no new peaks were observed as we lowered the temperature to 77 K. The X^0 and X^- peaks showed a blueshift and narrowing with decreasing temperature, similar to the C-99 sample. The temperature dependence of the peak position of X^0 in the C-99 sample was studied in detail [Fig. <ref>e]. The peak position shows a characteristic redshift with increasing temperature induced by exciton-phonon coupling. This can be described by the phenomenological model proposed by O'Donnell and Chen<cit.>: E(T)= E(0)-S⟨ħω⟩[coth(⟨ħω⟩/2k_BT)-1], where E(T) is the resonance energy of X^0 at temperature T, S is a dimensionless exciton-phonon coupling constant, k_B is the Boltzmann constant, and ⟨ħω⟩ is the average phonon energy responsible for the coupling. By fitting the experimental data we obtained the parameters E(0)= 2.074 ± 0.003 eV, S= 3.65 ± 0.98 and ⟨ħω⟩= 43 ± 10 meV. The value of ⟨ħω⟩ is close to the energy of the E^' phonon mode (∼ 43.9 meV) of ML WS_2. This suggests that the E^' phonon mode has a crucial role in the exciton-phonon coupling. Note that the peak position of X^D changes only by ∼2 meV as we increase the temperature from 75 K to 180 K, whereas, in the same temperature range, the X^0 peak position changes by ∼20 meV. This observation is consistent with the fact that the CB minimum at the K point shifts at a much faster rate with temperature compared to the Λ point <cit.>. To determine the strength of the exciton-phonon coupling, the evolution of the FWHM of X^0 was fitted with a phonon-induced broadening model [Fig. <ref>f]<cit.>: γ=γ_0+c_1T+c_2/[exp(ħω/k_BT)-1], where γ_0 is the intrinsic FWHM, the term linear in T is due to the interaction with acoustic phonon modes (LA and TA), and the last term describes the interaction with the optical phonon mode<cit.>. c_2 is a measure of the exciton-optical-phonon coupling strength. The value of ħω that we obtained previously by fitting Eq. <ref> was used for fitting Eq. <ref>. The value of c_2 obtained by fitting Eq. <ref> is 26.5 ± 4.6, which is significantly higher than the previously reported value of 6.5 for ML WS_2<cit.>. This higher value of c_2 further confirms the strong coupling between the exciton and the E^' phonon mode on the C-99 substrate. We further performed a temperature-dependent Raman study on the C-99 sample. The phonon modes responsible for electron-phonon scattering in the case of ML WS_2 are the LA, TA, E^', and A_1 modes<cit.>. The various Raman peaks (E^', 2LA, A_1^') were analysed with multiple Lorentzian functions [Fig. <ref>a]<cit.> (see supplemental material section IX for fitting details). All the phonon modes except the in-plane E^' mode showed a redshift in Raman shift and an increase in their linewidth with increasing temperature [see Fig. <ref>b, c for E^' and supplemental material section X for A_1^'].
The redshift and increasing linewidth with temperature can be explained by the anharmonic cubic equations<cit.>: ω_ph(T)= ω_0 - C[1+2/(e^ħω_0/2k_BT-1)] and γ_ph(T)= γ_0 + D[1+2/(e^ħω_0/2k_BT-1)], where ω_ph(T) and γ_ph(T) are the phonon-mode frequency and linewidth at temperature T, respectively, ω_0 and γ_0 are the frequency and linewidth at T= 0 K, and C and D are constants. The behaviour of Eq. <ref> and Eq. <ref> as a function of temperature is plotted in the insets of Fig. <ref>b and c. The E^' phonon mode shows a completely opposite trend when compared to Eq. <ref> and Eq. <ref> [Fig. <ref>b, c]. This anomalous behaviour of the E^' phonon mode is related to strong electron-phonon coupling<cit.>. The various factors affecting the Raman mode frequency can be expressed mathematically as <cit.> ω(T) =ω_0+Δω_vol(T)+Δω_anh(T)+Δω_sp-ph(T)+Δω_e-ph(T), where Δω_vol(T) corresponds to the quasiharmonic contribution due to the change in unit-cell volume, Δω_anh(T) corresponds to anharmonic effects related to phonon-phonon interactions, Δω_sp-ph(T) is due to spin-phonon coupling, and Δω_e-ph(T) is due to electron-phonon coupling. Eqs. <ref> and <ref> take into account the first three terms but not the last one; therefore, they fail to describe the anomalous behaviour of the E^' Raman mode. To understand the effect of strain, we calculated the electronic band structure of ML-WS_2 by DFT using fully relativistic ultrasoft pseudopotentials with Perdew–Burke–Ernzerhof (PBE) exchange-correlation functionals alongside plane waves, as implemented in the Quantum ESPRESSO package [Fig.<ref>a] (see supplemental material section V for details). With increasing tensile (compressive) strain, the absolute CB minimum at the K point shifts down (up) and the local CB minimum at Λ shifts up (down). The VB maxima at K and Λ show almost no change with strain [Fig. <ref>a]. We denote the direct bandgap at the K point as E^KK and the indirect bandgap at the Λ point as E^KΛ. Note that, in the electronic band structure, the CB minimum at the Λ point is at a higher energy than the CB minimum at the K point by E^KΛ - E^KK = Δ E^KΛ= 64 meV. To go from the electron-hole picture to the exciton picture (as described in ref. <cit.>), we need to calculate the binding energy of the excitons. The binding energy (E_b) is calculated from the effective mass model <cit.>: E_b= μ e^4/[2ħ^2ϵ^2(n-1/2)^2], where 1/μ=1/m_e+1/m_h defines the exciton reduced mass, and m_h and m_e are the effective masses of holes and electrons, respectively (see supplemental material section XIV for details). ϵ is the dielectric constant of ML WS_2, e is the electron charge, and n is the principal quantum number. m_e and m_h are calculated from a parabolic-band approximation of the electronic bands obtained from the DFT calculations (extracted values are listed in Table <ref>)<cit.>. ϵ of ML-WS_2 was taken to be ∼ 5 for n=1, as was shown in <cit.>. The reduced masses μ of the K-K exciton (X^0) and the K-Λ exciton (X^D) are ∼0.155m_0 and 0.219m_0, respectively, where m_0 is the free electron mass. Using Eq. <ref> we found E_b of X^0 and X^D to be ∼ 310 meV and 438 meV, respectively. Now, if we visualize the scenario in the excitonic picture, the K-K bright exciton state is formed at the Γ point (the zero-momentum point) in center-of-mass (COM) coordinates at the position E^KK_exc = E^KK - E_b^X^0, and the K-Λ dark exciton state is formed at the Λ point in COM coordinates at the position E^KΛ_exc = E^KΛ - E_b^X^D<cit.>. Since E_b^X^D is higher than E_b^X^0, X^D lies at a lower energy than X^0 in the exciton picture, unlike in the electron-hole picture.
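For concreteness, the effective-mass estimate above can be reproduced in a few lines of Python. This is our own illustrative calculation, not part of the paper; with the rounded inputs quoted in the text (μ ≈ 0.155m_0 and 0.219m_0, ϵ ≈ 5) it gives ∼0.34 eV and ∼0.48 eV, i.e., the same order and ratio as the quoted ∼310 meV and ∼438 meV, with the residual difference attributable to the precise ϵ and effective masses used in the original analysis.

```python
RYDBERG_EV = 13.6057  # hydrogen Rydberg energy in eV

def binding_energy_ev(mu_over_m0, epsilon, n=1):
    """2D hydrogen-like binding energy E_b = mu e^4 / (2 hbar^2 eps^2 (n - 1/2)^2).

    In Rydberg units this reduces to (mu/m0) / eps^2 / (n - 1/2)^2 * Ry.
    """
    return mu_over_m0 / epsilon**2 / (n - 0.5) ** 2 * RYDBERG_EV

# Reduced masses quoted in the text (in units of the free-electron mass) and eps ~ 5.
print(binding_energy_ev(0.155, 5.0))  # K-K exciton X0: ~0.34 eV
print(binding_energy_ev(0.219, 5.0))  # K-Lambda exciton XD: ~0.48 eV
```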
Our calculations show that, in the unstrained sample, the dark state is below the bright state by an energy E^KK_exc - E^KΛ_exc = ΔE ∼ 64 meV. Under strain, the excitonic states at the Γ (COM) and Λ (COM) points behave similarly to the CB minima at the K and Λ points, respectively. The change of ΔE as a function of strain is plotted in Fig. <ref>c (the values used to generate this plot can be found in supplementary section V). The fit shows a change in ΔE of ∼ 184 ± 2.5 meV per 1% of applied strain. Note that E_b does not change appreciably with strain, as the latter has little influence on μ<cit.>. Upon optically exciting a coherent exciton population at the Γ point (COM), K-K bright excitons are formed. Incoherent excitons are then formed at the Λ point (COM) by phonon-assisted scattering of excitons from the Γ point (COM), where a phonon covers the energy and momentum mismatch<cit.>. However, in the unstrained case, no optical or acoustic phonon modes with energy ΔE ∼ 64 meV are available; therefore, K-Λ states are not formed at the Λ point (COM). When we apply tensile strain on the ML WS_2, ΔE decreases, and under ∼0.11 ± 0.01% strain the value of ΔE is ∼ 44 meV. From the PL map (see supplemental material section XIII) of X^- and X^0, the distribution of their peak positions was plotted, and this statistical distribution was fitted with a normal distribution to extract the mean and standard deviation [Fig. <ref>d]. To estimate the amount of strain on the ML WS_2 on top of the C-99 substrate due to the nanopillars, its X^- position was compared with the X^- position of ML WS_2 on top of flat SiO_2/Si [Fig. <ref>d]; ML WS_2 on top of flat SiO_2/Si was considered to be unstrained. We did not take the X^0 position into account for this purpose because X^0 was not clearly resolved in ML WS_2 on top of flat SiO_2/Si. The mean X^- position was found to be 1.96 eV and 1.94 eV for SiO_2/Si and C-99, respectively. This amounts to a ∼ 20 meV redshift of X^- in C-99. It has been reported that X^- and X^0 redshift by ∼ 130 and 127 meV, respectively, for 1% of applied tensile strain<cit.>. Therefore we can estimate that ML WS_2 on top of C-99 is under a tensile strain of ∼ 0.15 %. In the C-99 sample, phonon-assisted scattering of excitons from the Γ point (COM) to the Λ valley (COM) is therefore possible, thereby forming a population of K-Λ excitonic states at the Λ point (COM). The scattering process is illustrated in the schematic of Fig. <ref>a. An E^' phonon with momentum Λ (since at the Γ point (COM) the momentum is zero) and energy ∼44 meV can make this scattering possible. The calculated phonon density of states shows a large number of phonon states available at ∼44 meV, thereby making this scattering more favorable [Fig. <ref>b]. Note that the change of the phonon energy with strain is negligible<cit.>. The K-Λ states at the Λ point (COM) can then scatter non-radiatively to a virtual state inside the light cone at the Γ point (COM) by emitting phonons. Once inside the light cone, the `dark' excitons can decay radiatively from the virtual state by emitting a photon, thus leaving their signature in the PL spectra. To probe the kinetics of X^D, we performed time-resolved PL (TRPL) on the ML WS_2 on top of the C-99 substrate (see measurement details in supplemental material section III). The measured TRPL data were fitted with two exponentials (∑_i=1^2 A_ie^-t/τ_i) after deconvolution from the IRF, as implemented in the QuCoa software (PicoQuant) [Fig.<ref>b]. The faster and stronger component τ_1, which represents the X^D decay time, is estimated to be τ_1 ∼ 36.3 ± 1.2 ps.
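The two strain values quoted above follow from simple linear interpolation of the numbers given in the text; the short Python check below (our own illustration, not part of the paper) reproduces them.

```python
# Strain needed to shrink the dark-bright splitting from 64 meV to the E' phonon
# energy of ~44 meV, using the fitted slope of ~184 meV per 1% strain.
delta_e_unstrained_mev = 64.0
e_prime_phonon_mev = 44.0
slope_mev_per_percent = 184.0
strain_for_resonance = (delta_e_unstrained_mev - e_prime_phonon_mev) / slope_mev_per_percent
print(f"{strain_for_resonance:.2f} %")  # ~0.11 %, matching the value quoted above

# Tensile strain on the C-99 sample estimated from the ~20 meV X- redshift,
# using the reported ~130 meV redshift per 1% of strain.
trion_redshift_mev = 20.0
trion_shift_per_percent = 130.0
print(f"{trion_redshift_mev / trion_shift_per_percent:.2f} %")  # ~0.15 %
```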
This value of τ_1 is ∼ 30 times larger than the reported decay time of a neutral exciton X^0 ( τ ∼ 1 ps at T = 60 K) in literature <cit.> . This longer lifetime of X^D compared to X^0 is expected because, X^D is excitonic ground state of ML WS_2<cit.>. The slower (A_1/A_2∼ 294) and weak decay component τ_2 ∼ 100 ps is expected to be coming from the contribution of tail of defect-bound exciton complex observed in ML WS_2 at lower temperatures<cit.>. In summary, we have reported the experimental observation of momentum-forbidden K-Λ dark excitons by applying tensile strain on ML WS_2 using a nanotextured substrate. The 2D TMDs are known to buckle easily with compressive strain<cit.> and is also more difficult to create, especially at low temperatures which is essential to prevent thermally activated depopulation of dark state into the bright state. However, it's easy to create tensile strain in the 2D TMDs and they can endure high values of tensile strength as well. Therefore it would be more practical and application-oriented if we can modulate the dark exciton with tensile strain rather than compressive strain.apsrev4-2 Acknowledgements. The research reported here was funded by the Commonwealth Scholarship Commission and the Foreign, Commonwealth and Development Office in the UK (Grant no. INCN-2021-049). T. C. is grateful for their support. All views expressed here are those of the author(s) not the funding body.T. C. thanks the Prime Minister's Research Fellowship (PMRF), Government of India (ID: 0700441) for funding. AR acknowledges funding support from DST SERB Grant no. CRG/2021/005659 and partial funding support under the Indo-French Centre for the Promotion of Advanced Research (CEFIPRA), project no. 6104-2. We thank National Supercomputing Mission (NSM) for providing computing resources of `PARAM Bramha' at IISER Pune, which is implemented by C-DAC and supported by the Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST), Government of India. The authors would like to acknowledge funding from National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS) of the Department of Science and Technology, Govt. Of India through the I-HUB Quantum Technology Foundation, Pune, India. We acknowledge Professor Sandip Ghosh, TIFR Mumbai, India for his help in polarization PL experiments and valuable discussions.
http://arxiv.org/abs/2312.16041v1
{ "authors": [ "Tamaghna Chowdhury", "Sagnik Chatterjee", "Dibyasankar Das", "Ivan Timokhin", "Pablo Díaz Núñez", "Gokul M. A.", "Suman Chatterjee", "Kausik Majumdar", "Prasenjit Ghosh", "Artem Mishchenko", "Atikur Rahman" ], "categories": [ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mes-hall", "published": "20231226130828", "title": "Tensile strain induced brightening of momentum forbidden dark exciton in WS$_2$" }
The globular cluster VVV CL002 falling down to the hazardous Galactic centre Dante Minniti 1,2,3Noriyuki Matsunaga 4,5José G. Fernández-Trincado 6 Shogo Otsubo 5 Yuki Sarugaku 5 Tomomi Takeuchi 5 Haruki Katoh 5 Satoshi Hamano 7 Yuji Ikeda 5,8 Hideyo Kawakita 5,9 Philip W. Lucas 10 Leigh C. Smith 11 Ilaria Petralia 1 Elisa Rita Garro 1 Roberto K. Saito 3 Javier Alonso-García 12 Matías Gómez 1 María Gabriela Navarro 13 Received Month DD, Year; accepted Month DD, Year =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================empty empty Reliable estimation of terrain traversability is critical for the successful deployment of autonomous systems in wild, outdoor environments. Given the lack of large-scale annotated datasets for off-road navigation, strictly-supervised learning approaches remain limited in their generalization ability. To this end, we introduce a novel, image-based self-supervised learning method for traversability prediction, leveraging a state-of-the-art vision foundation model for improved out-of-distribution performance. Our method employs contrastive representation learning using both human driving data and instance-based segmentation masks during training. We show that this simple, yet effective, technique drastically outperforms recent methods in predicting traversability for both on- and off-trail driving scenarios. We compare our method with recent baselines on both a common benchmark as well as our own datasets, covering a diverse range of outdoor environments and varied terrain types. We also demonstrate the compatibility of resulting costmap predictions with a model-predictive controller. Finally, we evaluate our approach on zero- and few-shot tasks, demonstrating unprecedented performance for generalization to new environments. Videos and additional material can be found here: <https://sites.google.com/view/visual-traversability-learning>. § INTRODUCTIONAutonomous navigation in off-road environments requires an accurate understanding of the terrain, particularly in identifying areas in the scene which can be traversed by the vehicle. However, contrary to on-road scenarios <cit.>, the notion of traversability in unstructured outdoor settings is much more ambiguous, a priori. For instance, vehicle interaction with terrain features such as ground-vegetation, rocks, and debris is strongly correlated to their size, shape, and material appearance. As such, manual assignment of traversability labels to off-road perception data is non-trivial and prone to error <cit.>. Furthermore, the complexity of terrain characteristics and the sheer variety of environments make comprehensive data collection of all terrain types a daunting task. Although efforts have been made towards generating annotated, off-road datasets <cit.>, these are generally restricted to a particular set of geospatial locations and seasonal conditions. For these reasons, achieving reliable performance in traversability prediction via supervised learning remains a challenging problem. 
Despite the necessity for labeled datasets, recent work on leveraging semantic segmentation has demonstrated the viability of supervised methods for terrain classification <cit.>, typically by generating a Bird's Eye View (BEV) semantic map of the scene with projection of semantic labels <cit.> or by directly predicting in BEV <cit.>. Still, the resulting class predictions in such cases must be mapped to a traversability metric or cost, which may require manual tuning and/or additional geometric feature information.In order to address the challenges of off-road traversability prediction, self-supervised learning has shown to be a promising alternative <cit.>. Instead of relying on manually annotated data, many of these approaches use projected traces of vehicle or robot trajectories derived from human-piloted examples as positive traversability labels. Although practical to generate, such datasets are only equipped with positive labels, where much of the observed terrain remains unlabeled.As a result, self-supervised methods that train single-class predictors can be prone to overfitting <cit.>. Additionally, most of these methods were tested on a restricted class of environments, many of which predominantly consist of on-trail scenarios, which is categorically similar to on-road evaluations <cit.>. While various sensors (e.g., LiDAR, IMU, etc.) can be used to capture terrain features, RGB cameras offer dense, high-resolution semantic and geometric information. In addition, learning traversability using RGB cameras offers a unique advantage over other sensor modalities: it allows us to leverage large general-purpose models, trained on massive datasets, to derive robust and expressive features from RGB input. Such “Vision Foundation Models" are typically trained on other vision-based tasks, such as semantic segmentation or image classification, but are able to learn rich, mid-level representations that can generalize to other, novel tasks with impressive results <cit.>. In particular, the Segment Anything Model (SAM) <cit.>, has demonstrated unparalleled zero-shot performance on instance-based semantic segmentation tasks. Trained on a dataset of 11 million images (with over 1.1 billion mask instances), the model is comprised of a Vision-Transformer (ViT) backbone <cit.>, allowing it to retain fine-grained visual information and generate multiple mask proposals for most objects in the scene (<ref>). Such models have the potential to drastically improve the generalization performance of traversability learning for off-road autonomy. To date, however, this has yet to be demonstrated. How to best utilize these models remains an open question.In this paper, we posit that leveraging mask segments for self-supervision offers a simple yet pragmatic method for bootstrapping traversability learning: assuming that image pixels corresponding to the same object or terrain patch should have a similar level of traversability, class-agnostic semantic masks can provide strong priors for self-supervised learning. Specifically, we propose a novel method for pixel-level, contrastive self-supervised learning using SAM and mask-based regularization to address the shortcomings of previous self-supervised methods.We demonstrate the effectiveness of our method on newly collected off-road datasets as our benchmark, on which our method drastically outperforms state-of-the-art baseline methods. 
Furthermore, while many existing methods validate their approach on off-road but on-trail sequences, ours effectively predicts traversability for both on-/off-trail cases in a number of varied, diverse environments. Lastly, we show how our method can be used for zero- and few-shot traversability learning in new environments not covered in the training data.§ RELATED WORK§.§ Self-supervised Traversability LearningGiven the aforementioned limitations in labeling traversability for off-road terrains, numerous approaches to learning traversability in a self-supervising fashion have been proposed for off-road autonomy <cit.>. Such efforts have made use of a variety of sensory modalities to provide a self-supervised learning signal. For instance, Castro et al. <cit.> used IMU z-axis measurements as a traversability score, whereas Seo et al. <cit.> used a combination of proprioceptive sensors and LiDAR to extract vehicle trajectory traces. §.§ Visual Traversability LearningSelf-supervised learning to predict terrain traversability from visual information has become a recent trend for off-road autonomy <cit.>. A common approach in this category has been to use autoencoder-based anomaly detection to classify visual terrain features that have not been traversed by the system. Inspired by <cit.>, Schmid et al. <cit.> propose to aggregate the vehicle footprints and project them into image space to crop out traversed regions. Afterward, an autoencoder <cit.> is trained with the reconstruction loss only on the traversed regions. This makes the model fail to reconstruct not-traversed areas since they are out-of-distribution features. Thus, at test time, they translate the reconstruction error into the traversability score. As remarked by the authors, this approach can be susceptible to illumination changes and may produce visual artifacts due to the nature of the reconstruction loss. §.§ Contrastive Traversability LearningContrastive approaches for terrain traversability have been explored in prior work <cit.>.In Xue et al. <cit.>, prototype vectors are learned from embedded positive and unlabeled terrain patches based on LiDAR features, which also serve to generate pseudo-labels for an additional supervised classification task. Additionally, other approaches propose to train a network to generate discriminative feature embeddings either with acoustic features <cit.> or proprioceptive sensor readings <cit.> coupled with weakly supervised labels learning to estimate traversability with relatively less manual labeling. In comparison, our method only utilizes visual input from RGB cameras and does not require other sensor measurements.Perhaps closest in spirit to our work is Seo et al. <cit.>, where the authors propose to use Positive-Unlabeled (PU) learning <cit.> with image-level contrastive learning <cit.>. The authors adopt a normalizing-flow <cit.> model and apply a PU learning algorithm for binary classification. Additionally, the contrastive loss <cit.> is applied to augmented images to encourage the model to have good image representations. However, such a contrastive loss may not be sufficient to provide meaningful information in distinguishing traversable and untraverable areas. On the contrary, we utilize contrastive learning to promote a model to separate traversable and untraversable features in the representation space by sampling positive and negative points within and outside the vehicle trajectories. 
§.§ Vision Foundation ModelsRecent vision foundation models <cit.> have shown remarkable performance in both accuracy and generalization. In particular, SAM <cit.> has demonstrated impressive performance in identifying different object instances in high-resolution images, even in unseen environments. As many approaches <cit.> have been proposed using pre-trained SAM networks, we also find strong advantages of using SAM. First, we take advantage of strong generalization performance and well-trained representation space by adopting the SAM image encoder as our backbone. In addition, we demonstrate the importance of using SAM-predicted mask proposals to improve trajectory-based self-supervised learning. Since vehicle trajectories cannot cover all traversable areas appearing in the image, these masks provide auxiliary self-supervised labels outside traversed regions. As shown in <ref> (c) and (d), incorporating SAM-predicted masks allows us to cover missing traversable areas where manual labeling efforts would previously be required. § METHODIn this section, we first describe trajectory projection and occlusion handling for positive-label generation, followed by trajectory-level and mask-level contrastive loss definitions leveraging SAM mask predictions. We then elaborate on prototype-vector estimation and conversion to a traversability metric. The overview of our method is illustrated in <ref>.§.§ Trajectory Projection and Occlusion HandlingWe use vehicle trajectories as a self-supervision signal for learning traversability as shown in <ref>. Specifically, with given poses of left/right wheels P_t:t+T in the global frame from time t to t+T, we project the trajectory into the image space byP_t:t+T^I = K[R|t]P_t:t+T,where K denotes an intrinsic matrix of the camera, R, t indicate rotation and translation of an extrinsic matrix, and T denotes the time horizon. We filter out occluded points by using a stereo depth estimate D_t at time t:P^I, filtered_t:t+T = {p | p∈ P^I_t:t+T,p_z ≤ D_t(p) }.We then calculate the contour of the projected left/right trajectories and fill in the contour to complete the trajectory region (<ref> (b)). Finally, we randomly sample positive points ℙ_traj within the completed trajectory and negative points ℕ_traj outside the trajectory (<ref> (c)).However, as shown in Fig. <ref> (c), the projected trajectory often covers only a portion of the traversable terrain. Such a gap between the projected trajectory and the actual traversable region adversely affects the training since negative points can be sampled from the gap. To mitigate this, we use mask predictions from SAM as an additional signal for traversability.Specifically, we obtain the mask predictions from SAM <cit.> by using positive samples ℙ_traj from the previous step as query points. We select a mask from the proposals if its area is larger than a certain threshold, and it has the highest confidence. Then, analogous to the previous sampling step, we sample positive (i.e., ℙ_mask) and negative (i.e., ℕ_mask) points within and outside the predicted mask. As shown in <ref> (d), positive samples from the predicted mask successfully cover the whole traversable region. §.§ Contrastive Learning for TraversabilityWe define a traversability prediction model f(𝐱) = h ∘ g(𝐱) that predicts traversability features 𝐅∈ℝ^H× W × D from a given image 𝐱∈ℝ^H× W × 3, where D denotes a dimension of traversability features. The model f(𝐱) is composed of a pre-trained image encoder g(·) and a traversability decoder h(·). 
We adopt the pre-trained image encoder from SAM <cit.>, leveraging its generalized latent feature representations, and we do not update the encoder during training.With the obtained positive and negative samples from the previous step, we apply contrastive losses to train the traversability decoder. Specifically, with traversability features 𝐅, a set of positive samples ℙ, and a set of negative samples ℕ, the contrastive loss is defined as:ℒ_contra(𝐅, ℙ, ℕ) =-1/N (N - 1)∑_s_i∈ℙ∑_s_j∈ℙ1(i ≠ j)logexp(𝐅_s_i^⊺·𝐅_s_j / τ)/∑_s_k∈ℕexp(𝐅_s_i^⊺·𝐅_s_k / τ),with pixel-level features 𝐅_s∈ℝ^D× 1 at pixel s. Here, N = |ℙ|, and τ denotes a temperature scalar. We normalize the traversability features 𝐅 along with the D dimension before applying the loss.Finally, our final loss is represented asℒ =(1-ω_mask) ℒ_contra(𝐅, ℙ_traj, ℕ_traj)+ ω_mask ℒ_contra(𝐅, ℙ_mask, ℕ_mask),with weight ω_mask∈[0, 1]. §.§ Trajectory Prototype and TraversabilityWhile training the model with contrastive losses, we estimate a trajectory prototype vector 𝐳∈ℝ^D, for converting traversability features to cost at inference time. Specifically, we update the prototype vector 𝐳 using exponential moving averaging (EMA),𝐳 = α𝐳 + (1 - α)1/|ℙ_traj|∑_s∈ℙ_traj𝐅_s,where α∈[0, 1] denotes a momentum value. The prototype vector 𝐳 and feature 𝐅_s are normalized before and after the EMA operation.At test time, we convert the traversability features 𝐅 to costs 𝐂∈ℝ^H× W by calculating cosine similarity between 𝐅 and 𝐳 (i.e., 𝐂 = 𝐅𝐳). Since cosine similarity has a range of [-1, 1], the predicted cost 𝐂 is always bounded. § EXPERIMENTS§.§ Dataset Collection and ProcessingOur datasets cover both on-trail and off-trail environments with varied types of terrain. Data collection was performed at the following sites in the US: LT Murray (WA), Mojave Desert (CA), and California Hills (CA).LT Murray (WA) sequence is composed of on-trail scenes, where the trails go through dense and sparse vegetation with bushes and trees. We recorded 40 miles of on-trail data.Mojave Desert (CA) consists of both on-trail and off-trail scenes with rocks, small bushes, Joshua trees, and cacti. We recorded two different runs by following the trail or driving through small bushes and rocks.CA Hills is composed of off-trail scenes on grassy hills populated with trees. We recorded two different runs by driving the terrain.We collected images, stereo depths, LiDAR point clouds, and vehicle poses from each run and ran an offline SLAM algorithm <cit.> to obtain precise state estimations. For evaluation purposes, we labeled LiDAR point clouds and generated segmentation ground truth in BEV. We also set a manual cost per class in BEV for hyper-parameter selection by running real-world experiments. §.§ BaselinesFor fair evaluations, we compare our method with Seo et al. <cit.> and Schmid et al. <cit.>. Both methods use vehicle trajectories to self-supervise the model to learn traversability. Seo et al. adopt a normalizing-flow model <cit.> along with an image-level contrastive loss to train traversability, while Schmid et al. crop the trajectory regions in images and train variational auto-encoder (VAE) <cit.> to reconstruct the traversed regions. Afterward, the reconstruction error is translated into traversability scores at test time. §.§ Implementation detailsWe adopt the encoder of the ViT-H SAM model as our image encoder and freeze it during training. For training, we use a learning rate of 1e-3 and a batch size of 2 and train the model for 1 epoch. 
We sample 256/1024 positive/negative points for the trajectory contrastive loss, and sample 512/1024 positive/negative points for the mask contrastive loss. Positive/negative samples on the ego vehicle are excluded. We use α=0.999 to update the trajectory prototype vector and T=300 for trajectory projection. We set the temperature τ=0.05 and the mask contrastive loss weight ω_mask=0.05. These hyper-parameters (i.e., τ and ω_mask) have been selected based on the averaged L1 error between predicted costs and manual costs obtained from ground-truth segmentation labels. Note that we project the predicted costs into BEV for hyper-parameter selection since the ground truth labels are in BEV. §.§ Qualitative Results<ref> shows predictions of ours and baseline methods on the RELLIS-3D and LT Murray, which are on-trail datasets. While Seo et al. <cit.> successfully predicts trajectories on trails, it fails to predict lethal objects clearly and distinguish the costs between different semantics such as trees, bushes, and traffic barriers. Schmid et al. <cit.> predicts trails as low costs but fails to mark surrounding lethal objects as high costs. On the other hand, our method clearly marks the trails as low costs and maintains subtle differences well in objects of off-trail terrain.The strength of our method becomes clearer when it is applied to off-trail scenarios. <ref> illustrates cost prediction results on CA Hills and Mojave Desert sequences, which contain off-trail images. Baselines struggle to find traversable regions correctly or to predict the lethal object as a high cost. Our approach not only distinguishes traversable regions from lethal objects correctly but also assigns different costs for different semantic objects. §.§ Quantitative ResultsRELLIS-3D dataset We evaluate our method and baselines on the RELLIS-3D dataset <cit.>, which is a publicly available labeled dataset. We adopt AUROC (area under receiver operating characteristic), AUPRC (area under precision-recall curve), F1 score, FPR (false positive rate), FNR (false negative rate), Precision, and Recall metrics for evaluation. Note that we report FPR, FNR, Precision, and Recall metrics at the best threshold that achieves the highest F1 score.<ref> shows the comparisons on the RELLIS-3D dataset. Our method outperforms baselines by a large margin overall, except for FNR and Recall. However, even in such cases, the gaps are marginal (i.e., 0.012 in both FNR and Recall) compared to our performance gains in other metrics.Our datasets Since our datasets do not have labels in image space but in BEV, we project the predicted costs into BEV and inpaint the missing costs using the nearest values. Then, we run a model predictive control (MPC) algorithm to obtain the optimal trajectory based on the predicted costmaps (<ref>). Specifically, we adopt model predictive path integral (MPPI) <cit.> as our MPC algorithm and measure the number of collisions with lethal objects over the number of successful MPPI runs (i.e., Collision Rate). To check collision, we use the ground truth labels in BEV and follow the optimal trajectory obtained from running MPPI on predicted costmaps. To achieve a low collision rate, it is important to find all the lethal objects appearing in the image while finding traversable areas correctly. As reported in <ref>, our method outperforms other baselines in LT Murray and CA Hills datasets by a large margin. 
In the case of the Mojave Desert dataset, all three methods show a similar performance achieving nearly 0 collision rates.§.§ Generalization to New EnvironmentsA strong advantage of having well-trained visual representations is generalization to out-of-distribution environments. To demonstrate the robustness of our method in new environments, we train a model on the on-trail LT Murray dataset and test it on the Mojave Desert and CA Hills. As illustrated in <ref>, our method generalizes surprisingly well without any adaptation. The model successfully finds a cactus and a small tree in Mojave and identifies logs lying on the ground in CA Hills, while predicting low cost on traversable areas. The MPPI evaluation results reported in <ref> also align with such observations. The few-shot adaptation result in CA Hills outperforms the in-domain results. This demonstrates that our method can effectively transfer from one domain to another, unseen domain. §.§ Ablation Study Effectiveness of using masks Using vehicle trajectories alone for self-supervision is insufficient since the trajectory cannot cover the whole traversable region. This ablation study aims to reveal how mask-based self-supervision is effective in addressing such a concern by comparing results with and without mask-based loss. <ref> presents ablation results when we train the model with and without the mask-based loss. As reported, applying the mask-based loss brings significant improvements in overall metrics. Additionally, as shown in <ref>, the model without mask information overfits vehicle trajectories and marks left and right side regions as high costs even though they are indeed traversable. On the other hand, once we apply the mask-based loss, the model correctly identifies traversable regions and marks them as low costs, mitigating the overfitting problem.§ CONCLUSIONIn this paper, we proposed a novel self-supervised approach to learning traversability in image space using contrastive learning. We show that with the addition of mask-based regularization, guided by robust segmentation proposals, generalization of traversability predictions can be drastically improved and deliver state-of-the-art performance. This is further emphasized by results in out-of-distribution environments. As for future work, we hope to incorporate temporal sequences of image data and learn-able in-painting to improve the quality of predictions, and investigate extending the approach for online adaptation. IEEEtran
http://arxiv.org/abs/2312.16016v1
{ "authors": [ "Sanghun Jung", "JoonHo Lee", "Xiangyun Meng", "Byron Boots", "Alexander Lambert" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231226120024", "title": "V-STRONG: Visual Self-Supervised Traversability Learning for Off-road Navigation" }
http://arxiv.org/abs/2312.16334v1
{ "authors": [ "Thomas Koberda", "J. de la Nuez González" ], "categories": [ "math.GR", "math.GT", "math.LO" ], "primary_category": "math.GR", "published": "20231226211443", "title": "Uniform first order interpretation of the second order theory of countable groups of homeomorphisms" }
A Quantum Approach to solve N-Queens Problem Santhosh G S 1, Piyush Joshi 2, Ayan Barui 3, and Prasanta K. Panigrahi 3,* 1 Sri Sivasubramaniya Nadar College of Engineering, Rajiv Gandhi Salai (OMR), Kalavakkam, 603110, Tamil Nadu, India 2 Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram, 695547, Kerala, India 3 Indian Institute of Science Education and Research Kolkata, Mohanpur, 741246, West Bengal, India ============================================================ I use QAOA to solve the Hamiltonian Cycle problem. First, inspired by Lucas <cit.>, I define the QUBO form of the Hamiltonian Cycle problem and transform it into a quantum circuit by embedding a problem on n vertices into an encoding of (n-1)^2 qubits. Then, I calculate the spectrum of the cost Hamiltonian for both the triangle case and the square case and justify my definition. I also write a Python program that generates the cost Hamiltonian automatically for finding the Hamiltonian cycle in an arbitrary graph. I test the correctness of the Hamiltonians by analyzing their energy spectra. Since the (n-1)^2 embedding limits the graph size I can simulate to fewer than 5 vertices, I only test the correctness on small and simple graphs in this project. I implement the QAOA algorithm using qiskit and run the simulation for the triangle case and the square case, which are easy to verify, both with and without noise. A very interesting result is that for the square case, QAOA obtains a much better result on a noisy simulator than on a noiseless simulator! The explanation for this phenomenon requires further investigation; perhaps quantum noise can actually be helpful, rather than harmful, in annealing-type algorithms. I also use two different kinds of mixer, the R_x mixer and the R_y mixer, to run the simulation. It turns out that the R_x mixer performs much better than the R_y mixer on this problem.§ INTRODUCTION QAOA, first introduced in 2014 <cit.>, is one of the most famous and widely studied algorithms of the NISQ era <cit.> <cit.> of quantum computing. In 2020, Google AI Quantum implemented QAOA on a real Sycamore superconducting-qubit quantum processor <cit.>, where they, for the first time, obtained a non-trivial result using QAOA to solve the MaxCut problem on graphs with the same topology as the real hardware grid. Recently, a group from Harvard used Rydberg atom arrays with up to 289 qubits in two spatial dimensions and experimentally investigated quantum algorithms for solving the maximum independent set problem <cit.>. Despite all the effort and progress, whether QAOA has an advantage in solving classically intractable problems, especially NP-complete problems, is still an open question. § THEORY OF QAOA QAOA is a typical variational quantum algorithm for solving combinatorial optimization problems. The initial idea of QAOA comes from the quantum adiabatic theorem. Consider a time-dependent Hamiltonian that evolves slowly with time. The initial Hamiltonian is Ĥ_M and the final one after evolution time T is Ĥ_C.
The adiabatic theorem tells us that if we set the initial state to the ground state of Ĥ_M, the final state will also be the ground state of Ĥ_C. The theorem suggests a potential implementation when we can easily prepare the ground state of Ĥ_M and encode the solution of the hard problem we want to solve as the ground state of Ĥ_C. We can choose Ĥ_M to be a sum of Pauli X operators on all qubits; its highest-energy eigenstate is simply |+⟩^⊗ n, which can easily be prepared on a quantum computer by a row of Hadamard gates acting on the initial state |0⟩^⊗ n (the adiabatic argument applies equally to the highest-energy eigenstate, which matches the maximization convention used later). The time-dependent Hamiltonian can be expressed as Ĥ(t)=f(t)Ĥ_C + g(t)Ĥ_M The two functions f(t) and g(t) change slowly in time. An example of such f(t), g(t) is f(t)=t/T and g(t)=1-t/T. The unitary of such a slowly varying Hamiltonian, Û(t)= e^-i ∫_0^t dτĤ(τ), can be simulated on a quantum computer by Trotterization: Û(t) ≈∏_k=0^r-1exp[-iĤ(k Δτ)Δτ]= ∏_k=0^r-1exp[-if(kΔτ)Ĥ_C Δτ ] exp[-ig(kΔτ)Ĥ_M Δτ] with Δτ = t/r. Next, let us discuss how to find a suitable Ĥ_C for the problem we want to solve and how to embed the solution into its extremal eigenstate. The most formal way to describe the classical problem that QAOA wants to solve is to define a set of boolean functions {C_α, α=1,⋯,m}; each of these functions defines a condition to be satisfied for the specific problem, mathematically a mapping C_α: { 0,1}^n →{0,1}. The input of a boolean function represents an “assignment” for the given problem, and the output indicates whether the assignment satisfies that condition or not. The optimization version of the problem can then be stated formally via the objective C(z)=∑_α=1^m C_α (z) When C(z)=m, all of the conditions of the problem are satisfied; otherwise some of them are not. Our goal is to maximize the value of C(z). The way to construct the QAOA circuit is to build alternating layers of mixer and cost circuits that approximately simulate the adiabatic evolution of the quantum Hamiltonian in Equation (3). However, if we want an accurate solution, we have to choose an infinitely small Δτ in Equation (3), which is unacceptable for a quantum computer in a real application. Thus, in the QAOA algorithm, we only construct a fixed number of layers and assign each layer a parameter to be optimized, in the hope that after such optimization the evolution realized by the quantum computer approximates the Trotterized adiabatic unitary very well. The optimization algorithm itself uses ideas similar to training a neural network: the parameters are updated every time we execute the circuit and measure the result. This procedure is also known as a Variational Quantum Eigensolver (VQE) algorithm <cit.>. The same kind of methods have demonstrated their potential in computing eigenstates and energies of chemical molecules.
A very recent result is that Google ran VQE on 12 qubits on Sycamore <cit.> and obtained the ground state energy of hydrogen chains.§.§ Structure of Cost circuit First, we have to design the cost Hamiltonian Ĥ_C for the given problem. Every possible solution, embedded as a basis state |x⟩, should be an eigenstate of Ĥ_C whose eigenvalue encodes the cost function: Ĥ_C |x⟩ =C(x) |x⟩ The unitary for the cost circuit, with parameter γ, is defined as U_C(γ)=e^-iγĤ_C = ∏_j<k e^-iγ w_jkẐ_jẐ_k Each two-qubit factor can be implemented by two CNOT gates and an R_Z gate. §.§ Structure of Mixer circuit The mixer Hamiltonian is chosen to be Ĥ_M=∑_j ∈𝒱X̂_j The unitary for the mixer part, with parameter β, is defined as U_M(β)=e^-iβĤ_M= ∏_j e^-i βX̂_j It can be implemented by an R_X gate on every qubit. We can also choose other mixer Hamiltonians, such as the Grover mixer <cit.>.§.§ Structure of the QAOA circuit As in most quantum circuit constructions, there is a row of Hadamard gates at the front of the circuit that converts |0⟩^⊗ n to |+⟩^⊗ n. Then, we add p alternating layers of cost and mixer circuits. The two sets of parameters are denoted as γ=(γ_1,⋯,γ_p) and β=(β_1,⋯,β_p). The resulting state is |γ,β⟩=U_M(β_p) U_C(γ_p) … U_M(β_1) U_C(γ_1) |+⟩^⊗ n and the measured cost function of Equation (4) is ⟨ C ⟩ = ⟨γ, β|Ĥ_C |γ, β⟩ §.§ Optimization algorithm We can simply use a classical optimizer to optimize all the parameters. §.§ Steps of the algorithm * Define a cost Hamiltonian Ĥ_C for the given problem. The eigenstate with the highest eigenenergy of Ĥ_C should encode the exact solution to the optimization problem.* Initialize the state as |s⟩= |+⟩^⊗ n=1/√(2^n)∑_x ∈{0,1}^n|x⟩ Here, |s⟩ is the eigenstate with the highest eigenenergy of the mixer Hamiltonian H_M defined in Equation (7).* Choose the number of layers p and build p alternating pairs of mixer and cost circuits. * Initialize 2p parameters γ⃗=(γ_1,γ_2,⋯,γ_p) and β⃗=(β_1,β_2,⋯,β_p) such that γ_i,β_k ∈[0,2π].* Estimate the cost by measuring repeatedly: F_p(γ⃗,β⃗)=⟨ψ_p(γ⃗,β⃗)|H_C |ψ_p(γ⃗,β⃗)⟩ * Use a classical algorithm to optimize the parameters by maximizing the expectation value in Equation (12): (γ⃗^*,β⃗^*)= arg max_γ⃗,β⃗F_p(γ⃗,β⃗) §.§ MAX-CUT I first use the MaxCut problem to test the QAOA circuit, since it is the problem most commonly used to benchmark the behavior of QAOA. The input to MaxCut is a graph 𝒢=(𝒱,ℰ), where 𝒱 is the set of vertices and ℰ is the set of edges. Assume that a weight w_ij is assigned to each edge (i,j) ∈ℰ. We want to find the largest cut in the graph 𝒢, i.e., a subset of the vertices whose “cut” with the rest of the vertices is maximal. The “cut” is defined as the sum of all the weights between a vertex in the subset and a vertex that is not in the subset. Any possible assignment can be represented by a set of binary variables: we use x_i to represent the assignment of vertex i, with x_i=1 if and only if vertex i is assigned to the subset.C(x)=∑_i,j=1^|𝒱| w_i,jx_i(1-x_j)The correspondence between the x_i in Equation (11) and the Pauli Z operators used in our cost Hamiltonian is x_i →1/2 (1- Z_i) For example, for the cost function of MaxCut defined in Equation (11), the corresponding cost Hamiltonian is H_C= ∑_i,j=1^|𝒱| w_i,j1/4 (1-Z_i)(1+Z_j)= 1/4∑_i,j=1^|𝒱| w_i,j (1-Z_iZ_j), where the linear terms cancel because the weights are symmetric. § CLASSICAL NP-COMPLETE PROBLEM In 1971, Cook <cit.> first proved that the boolean satisfiability problem (SAT) is NP-complete, which is also known as the Cook-Levin theorem.
In 1972, Karp <cit.> used Cook's result and first introduce 21 famous NP complete problem.In 2014, Lucas <cit.> first discussed how to map all of the 21 NP complete to Quadratic Unconstrained Binary Optimization problems(QUBO) in polynomial time, which suddenly raise people's attention because this open the door for using the quantum computer to solve NP-complete problem.One of the most notorious NP complete problem is the Hamiltonian Circle problem. I will try to solve the problem using QAOA in this small project.§.§ Hamiltonian Circle problem The input for the Hamiltonian Circle problem is a graph 𝒢=(𝒱,ℰ). Suppose |𝒢|=n. Our goal is to find a cycle to travel thorough all vertices exactly once. I use the QUBO form given by Lucas <cit.> for the Hamiltonian Circle as follows: H=A ∑_v=1^n(1-∑_j=1^nx_v,j)^2+A ∑_j=1^n(1-∑_v=1^nx_v,j)^2+A ∑_(uv) ∉ℰ [∑_j=1^n-1 x_u,jx_v,j+1 + x_u,nx_v,1] x_v,j is the encoding of whether the vertex v is at the j^th position of the Circle. The first part of H requires that every vertex can be only assigned to one position in the Circle, which we call vertex uniqueness term. The second part is the constraint that every position in the Circle of length N we find is only assigned to one vertex, which we call edge uniqueness term. The final term is the penalty for the two consecutive vertices in the Circle are actually not connected in the original graph, which we call edge validity term. Thus, it is obvious that the minimal value of H, given any assignment x_v,j=f(v,j), is 0, when such assignment represent a hamiltonian Circle. To construct the circuit, we can also use the following substitution:x_v,j→1/2 (1- Z_i,j) The encoding of a given graph requires (n-1) × (n-1) qubits.First, we calculate a very simple case, a triangle, as illustrate in <ref>.In this simple example, we can write the full embedding in details: x_1,1≡ 1 We always fix the position of the first vertex to be 1.x_1,2≡ x_1,3≡ x_2,1≡ x_3,1≡ 0 The impossible assignment when the first vertex is fixed.x_2,2=1 Vertex 2 is at the second position.x_2,3=1 Vertex 2 is at the third position.x_3,3=1 Vertex 2 is at the third position. By the above definition, we can see that 2 × 2 qubits are enough for defining the hamiltonian for hamiltonian circle.Now we can derive the concrete form of the hamiltonian defined in <ref>, where we set the constant A=1. H=∑_v=2^3(1-∑_j=2^3x_v,j)^2+ ∑_j=2^3(1-∑_v=2^3x_v,j)^2+[∑_(uv) ∉ℰ∑_j=2^2 x_u,jx_v,j+1 + ∑_(u1) ∉ℰx_u,3+∑_(1u) ∉ℰx_u,2]We expand the three terms seperately1.The vertex uniqueness term: H_1=∑_v=2^3(1-∑_j=2^3x_v,j)^2=(1-x_2,2-x_2,3)^2+(1-x_3,2-x_3,3)^2=2+x_2,2^2+x_2,3^2+x_3,2^2+x_3,3^2-2x_2,2-2x_2,3+2x_2,2x_2,3-2x_3,2-2x_3,3+2x_3,2x_3,3 We can simply use the substitution rule x_i,j→1/2(1-Z_i,j) H_1=∑_v=2^3(1-∑_j=2^3x_v,j)^2⇒ (1/2Z_2,2+1/2Z_2,3)^2+(1/2Z_3,2+1/2Z_3,3)^2=1/4[I+I+2Z_2,2Z_2,3+I+I+2Z_3,2Z_3,3]=I+1/2Z_2,2Z_2,3+1/2Z_3,2Z_3,32.The edge uniqueness term:H_2 =∑_j=2^3(1-∑_v=2^3x_v,j)^2=(1-x_2,2-x_3,2)^2+(1-x_2,3-x_3,3)^2We also use the substitution rule x_i,j→1/2(1-Z_i,j) H_2=(1-x_2,2-x_3,2)^2+(1-x_2,3-x_3,3)^2⇒ (1/2Z_2,2+1/2Z_3,2)^2+(1/2Z_2,3+1/2Z_3,3)^2=1/4(I+I+2Z_2,2Z_3,2+I+I+2Z_2,3Z_3,3) = I +1/2Z_2,2Z_3,2+1/2Z_2,3Z_3,33.The edge validity term:H_3=∑_(uv) ∉ℰ∑_j=2^2 x_u,jx_v,j+1 + ∑_(u1) ∉ℰx_u,3+∑_(1u) ∉ℰx_u,2Notice the the triangle is actually a completely connected graph, so H_3=0. 
Finally, we have H_C=H_1+H_2+H_3=2I+1/2Z_2,2Z_2,3+1/2Z_3,2Z_3,3+1/2Z_2,2Z_3,2+1/2Z_2,3Z_3,3 Since I and constant factor don't affect the eigen energy and eigen state, we can rewrite H_C as H_C=Z_2,2Z_2,3+Z_3,2Z_3,3+Z_2,2Z_3,2+Z_2,3Z_3,3Finally, we can construct the circuit for the cost hamiltonian. The circuit has four qubits, each represent:* Q_1 represent x_2,2. * Q_2 represent x_2,3. * Q_3 represent x_3,2. * Q_4 represent x_3,3.Since the correct result must be either (x_2,2=1,x_2,3=0,x_3,2=0,x_3,3=1), which represent the hamiltonian cycle (1 → 2→3→1 ) or (x_2,2=0,x_2,3=1,x_3,2=1,x_3,3=0), which represent the hamiltonian cycle (1 → 3→2→1 ). The ground state for this solution must in dirac notation be either |1001⟩ or |0110⟩. We can check the correctness. We can write our H_C as: H_C=Z_1Z_2+Z_3Z_4+Z_1Z_3+Z_2Z_4 Let's check the correctness of the above statement by diagonalization:import numpy as np from scipy.linalg import eigh from functools import reduce import matplotlib.pyplot as plt# Pauli Z matrix pauli_z = np.array([[1, 0], [0, -1]])# Function to create a matrix representation of Z_k gate on k-th qubit def z_k_matrix(k, total_qubits): I = np.eye(2)# Identity matrix matrices = [pauli_z if i == k else I for i in range(total_qubits)] return reduce(np.kron, matrices)# Terms for the Hamiltonian H_C terms_direct = [ (1, [0, 1]),# Z1Z2 (1, [2, 3]),# Z3Z4 (1, [0, 2]),# Z1Z3 (1, [1, 3]) # Z2Z4 ]# Total number of qubits for the Hamiltonian new_total_qubits = 4# Construct the Hamiltonian matrix directly H_direct = np.zeros((2**new_total_qubits, 2**new_total_qubits))# Add each term directly to the Hamiltonian for coeff, qubits in terms_direct: term = np.eye(2**new_total_qubits) for qubit in qubits: term = np.dot(term, z_k_matrix(qubit, new_total_qubits)) H_direct += coeff * term# Calculate eigenvalues and eigenvectors of the Hamiltonian directly new_eigenvalues_direct, new_eigenvectors_direct = eigh(H_direct)# Plot the energy spectrum with annotations plt.figure(figsize=(12, 8)) previous_eigenvalue = None offset_multiplier = 0for i in range(2**new_total_qubits): eigenvalue = new_eigenvalues_direct[i] max_amplitude_index = np.argmax(np.abs(new_eigenvectors_direct[:, i])) dirac_state = "|0:04b>".format(max_amplitude_index) if previous_eigenvalue == eigenvalue: offset_multiplier += 1 else: offset_multiplier = 0 horizontal_position =+ offset_multiplier * 0.088 plt.hlines(eigenvalue, 0, 1, colors='b', linestyles='solid') plt.text(horizontal_position, eigenvalue, dirac_state, fontsize=12,verticalalignment='center') previous_eigenvalue = eigenvalueplt.xlabel('State Index',fontsize=20) plt.ylabel('Energy',fontsize=20) plt.title('Energy Spectrum and Corresponding Eigenstates(Dirac Notation)',fontsize=20) plt.xticks([]) plt.grid(True) plt.savefig("SpectrumTriangle.png") plt.show()The spectrum of the hamiltonian calculated above is computed and shown in <ref>. 
There is a large energy gap, between the solution state and the non-solution state.Another example we is a square , as illustrate in <ref>:In this simple example, we can also write the full embedding in details: x_1,1≡ 1 We always fix the position of the first vertex to be 1.x_1,2≡ x_1,3≡ x_1,4≡ x_2,1≡ x_3,1≡ x_4,1≡ 0 The impossible assignment when the first vertex is fixed.x_2,2=1 Vertex 2 is at the second position.x_2,3=1 Vertex 2 is at the third position.x_2,4=1 Vertex 2 is at the fourth position.x_3,2=1 Vertex 3 is at the second position.x_3,3=1 Vertex 2 is at the third position.x_3,4=1 Vertex 2 is at the fourth position.x_4,2=1 Vertex 2 is at the third position.x_4,3=1 Vertex 2 is at the third position.x_4,4=1 Vertex 2 is at the third position. We can see that 3 ⊗ 3=9for defining the hamiltonian for hamiltonian circle.Now we can derive the concrete form of the hamiltonian defined in <ref>, where we set the constant A=1. H=∑_v=2^4(1-∑_j=2^4x_v,j)^2+ ∑_j=2^4(1-∑_v=2^4x_v,j)^2+∑_(uv) ∉ℰ∑_j=2^4 x_u,jx_v,j+1We expand the four terms seperately1.The vertex uniqueness term: H_1=∑_v=2^4(1-∑_j=2^4x_v,j)^2=(1-x_2,2-x_2,3-x_2,4)^2+(1-x_3,2-x_3,3-x_3,4)^2+(1-x_4,2-x_4,3-x_4,4)^2 We can simply use the substitution rule x_i,j→1/2(1-Z_i,j) H_1=∑_v=2^4(1-∑_j=2^4x_v,j)^2 =(-I/2+1/2Z_2,2+1/2Z_2,3+1/2Z_2,4)^2+(-I/2+1/2Z_3,2+1/2Z_3,3+1/2Z_3,4)^2+(-I/2+1/2Z_4,2+1/2Z_4,3+1/2Z_4,4)^2=1/4[I+Z_2,2^2+Z_2,3^2+Z_2,4^2-2Z_2,2-2Z_2,3-2Z_2,4+2Z_2,2Z_2,3+2Z_2,2Z_2,4+2Z_2,3Z_2,4+I+Z_3,2^2+Z_3,3^2+Z_3,4^2-2Z_3,2-2Z_3,3-2Z_3,4+2Z_3,2Z_3,3+2Z_3,2Z_3,4+2Z_3,3Z_3,4+ I+Z_4,2^2+Z_4,3^2+Z_4,4^2-2Z_4,2-2Z_4,3-2Z_4,4+2Z_4,2Z_4,3+2Z_4,2Z_4,4+2Z_4,3Z_4,4]=1/4[12I-2Z_2,2-2Z_2,3-2Z_2,4+2Z_2,2Z_2,3+2Z_2,2Z_2,4+2Z_2,3Z_2,4-2Z_3,2-2Z_3,3-2Z_3,4+2Z_3,2Z_3,3+2Z_3,2Z_3,4+2Z_3,3Z_3,4+ -2Z_4,2-2Z_4,3-2Z_4,4+2Z_4,2Z_4,3+2Z_4,2Z_4,4+2Z_4,3Z_4,4] Finally, after removing I and the constant factor: H_1=-Z_2,2-Z_2,3-Z_2,4+Z_2,2Z_2,3+Z_2,2Z_2,4+Z_2,3Z_2,4-Z_3,2-Z_3,3-Z_3,4+Z_3,2Z_3,3+Z_3,2Z_3,4+Z_3,3Z_3,4+ -Z_4,2-Z_4,3-Z_4,4+Z_4,2Z_4,3+Z_4,2Z_4,4+Z_4,3Z_4,4 2.The edge uniqueness term:H_2 =∑_j=2^4(1-∑_v=2^4x_v,j)^2=(1-x_3,2-x_3,2-x_4,2)^2+(1-x_2,2-x_2,3)^2We also use the substitution rule x_i,j→1/2(1-Z_i,j) H_2=(1-x_2,2-x_3,2-x_4,2)^2+(1-x_2,3-x_3,3-x_4,3)^2+(1-x_2,4-x_3,4-x_4,4)^2Likewise, we can use the substitution rules:We also also use the substitution rule x_i,j→1/2(1-Z_i,j), and finally get the same kind of format as H_1: H_2=-Z_2,2-Z_3,2-Z_4,2+Z_2,2Z_3,2+Z_2,2Z_4,2+Z_3,2Z_4,2-Z_2,3-Z_3,3-Z_4,3+Z_2,3Z_3,3+Z_2,3Z_4,3+Z_3,3Z_4,3+ -Z_2,4-Z_3,4-Z_4,4+Z_2,4Z_3,4+Z_2,4Z_4,4+Z_3,4Z_4,4 3.The edge validity term:H_3=∑_(uv) ∉ℰ∑_j=2^3 x_u,jx_v,j+1 + ∑_(u1) ∉ℰx_u,4+∑_(1u) ∉ℰx_u,2Different from the triangle case, there are two pairs of vertices not connected with each other: (X_1,X_3) and (X_2,X_4). Which means that our cost function will punish the assignment which try to find a path with X_1, X_3 or X_2,X_4 adjacent with each other. 
H_3=x_2,2x_4,3+x_2,3x_4,4+x_4,2x_2,3+x_4,3x_2,4+x_3,4+x_3,2 After substitution, the edge validity term becomes: H_3 =(I-Z_2,2)(I-Z_4,3)+(I-Z_2,3)(I-Z_4,4)+(I-Z_4,2)(I-Z_2,3)+(I-Z_4,3)(I-Z_2,4)+2(I-Z_3,4)+2(I-Z_3,2) =-Z_2,2-Z_4,3+Z_2,2Z_4,3-Z_2,3-Z_4,4+Z_2,3Z_4,4-Z_4,2-Z_2,3+Z_4,2Z_2,3-Z_4,3-Z_2,4+Z_4,3Z_2,4-2Z_3,4-2Z_3,2=-Z_2,2-2Z_4,3-2Z_2,3-Z_4,4-Z_4,2-Z_2,4-2Z_3,4-2Z_3,2+Z_2,2Z_4,3+Z_2,3Z_4,4+Z_4,2Z_2,3+Z_4,3Z_2,4 H_C=H_1+H_2+H_3=-Z_2,2-Z_2,3-Z_2,4+Z_2,2Z_2,3+Z_2,2Z_2,4+Z_2,3Z_2,4-Z_3,2-Z_3,3-Z_3,4+Z_3,2Z_3,3+Z_3,2Z_3,4+Z_3,3Z_3,4+ -Z_4,2-Z_4,3-Z_4,4+Z_4,2Z_4,3+Z_4,2Z_4,4+Z_4,3Z_4,4H_C=-Z_2,2-Z_2,3-Z_2,4+Z_2,2Z_2,3+Z_2,2Z_2,4+Z_2,3Z_2,4-Z_3,2-Z_3,3-Z_3,4+Z_3,2Z_3,3+Z_3,2Z_3,4+Z_3,3Z_3,4+ -Z_4,2-Z_4,3-Z_4,4+Z_4,2Z_4,3+Z_4,2Z_4,4+Z_4,3Z_4,4-Z_2,2-Z_3,2-Z_4,2+Z_2,2Z_3,2+Z_2,2Z_4,2+Z_3,2Z_4,2-Z_2,3-Z_3,3-Z_4,3+Z_2,3Z_3,3+Z_2,3Z_4,3+Z_3,3Z_4,3+ -Z_2,4-Z_3,4-Z_4,4+Z_2,4Z_3,4+Z_2,4Z_4,4+Z_3,4Z_4,4-Z_2,2-2Z_4,3-2Z_2,3-Z_4,4-Z_4,2-Z_2,4-2Z_3,4-2Z_3,2+Z_2,2Z_4,3+Z_2,3Z_4,4+Z_4,2Z_2,3+Z_4,3Z_2,4= Z_2,2Z_2,3 + Z_2,2Z_2,4 + Z_2,2Z_3,2 + Z_2,2Z_4,2 + Z_2,2Z_4,3 - 3Z_2,2+ Z_2,3Z_2,4 + Z_2,3Z_3,3 + Z_2,3Z_4,2 + Z_2,3Z_4,3 + Z_2,3Z_4,4 - 4Z_2,3+ Z_2,4Z_3,4 + Z_2,4Z_4,3 + Z_2,4Z_4,4 - 3Z_2,4 + Z_3,2Z_3,3 + Z_3,2Z_3,4 + Z_3,2Z_4,2 - 4Z_3,2+ Z_3,3Z_3,4 + Z_3,3Z_4,3 - 2Z_3,3 + Z_3,4Z_4,4 - 4Z_3,4+ Z_4,2Z_4,3 + Z_4,2Z_4,4 - 3Z_4,2 + Z_4,3Z_4,4 - 4Z_4,3 - 3Z_4,4 Finally, we can construct the circuit for the cost hamiltonian. The circuit has four qubits, each represent: * Q_1 represent x_2,2 * Q_2 represent x_2,3=1 * Q_3 represent x_2,4=1 * Q_4 represent x_3,2=1* Q_5 represent x_3,3=1* Q_6 represent x_3,4=1 * Q_7 represent x_4,2=1 * Q_8 represent x_4,3=1* Q_9 represent x_4,4=1 This format should be more compact while still clearly representing the polynomial.The final hamiltonian is: H_C= Z_1Z_2 + Z_1Z_3 + Z_1Z_4 + Z_1Z_7 + Z_1Z_8 - 3Z_1 + Z_2Z_3 + Z_2Z_5 + Z_2Z_7 + Z_2Z_8 + Z_2Z_9 - 4Z_2 + Z_3Z_6 + Z_3Z_8 + Z_3Z_9 - 3Z_3 + Z_4Z_5 + Z_4Z_6 + Z_4Z_7 - 4Z_4 + Z_5Z_6 + Z_5Z_8 - 2Z_5 + Z_6Z_9 - 4Z_6 + Z_7Z_8 + Z_7Z_9 - 3Z_7 + Z_8Z_9 - 4Z_8 - 3Z_9Since the correct result [The benefit to use a cycle to be the example is that there are only two solution, clockwise and counterclockwise, and thus we can easily check the correctness of our hamiltonian] must be either (1,0,0,0,1,0,0,0,1), which represent the hamiltonian cycle (1 → 2→3→→ 4 → 1 ) or (0,0,1,0,1,0,1,0,0), which represent the hamiltonian cycle (1 → 4→ 3→ 2 → 1 ). The ground state for this solution must in dirac notation be either |100010001⟩ or |001010100⟩. 
We can check the correctness by calculating the eigen value and eigen energies of the above equation: import numpy as np from scipy.linalg import eigh from functools import reduce# Define the Pauli Z matrix pauli_z = np.array([[1, 0], [0, -1]])# Function to create a matrix representation of Z_k gate on k-th qubit def z_k_matrix(k, total_qubits): I = np.eye(2)# Identity matrix matrices = [pauli_z if i == k else I for i in range(total_qubits)] return reduce(np.kron, matrices)# Total number of qubits total_qubits = 9# Construct the Hamiltonian matrix H = np.zeros((2**total_qubits, 2**total_qubits))# Define the terms of the Hamiltonian (coefficients and qubit indices) terms = [ (1, [0, 1]), (1, [0, 2]), (1, [0, 3]), (1, [0, 6]), (1, [0, 7]), (-3, [0]), (1, [1, 2]), (1, [1, 4]), (1, [1, 6]), (1, [1, 7]), (1, [1, 8]), (-4, [1]), (1, [2, 5]), (1, [2, 7]), (1, [2, 8]), (-3, [2]), (1, [3, 4]), (1, [3, 5]), (1, [3, 6]), (-4, [3]), (1, [4, 5]), (1, [4, 7]), (-2, [4]), (1, [5, 8]), (-4, [5]), (1, [6, 7]), (1, [6, 8]), (-3, [6]), (1, [7, 8]), (-4, [7]), (-3, [8]) ]# Add each term to the Hamiltonian for coeff, qubits in terms: if len(qubits) == 1: H += coeff * z_k_matrix(qubits[0], total_qubits) else: term = z_k_matrix(qubits[0], total_qubits) for qubit in qubits[1:]: term = np.dot(term, z_k_matrix(qubit, total_qubits)) H += coeff * term# Calculate eigenvalues and eigenvectors eigenvalues, eigenvectors = eigh(H)# Extract the five lowest eigenvalues and their corresponding eigenstates lowest_five_eigenvalues = eigenvalues[:5] lowest_five_eigenstates = eigenvectors[:, :5]# Convert the eigenstates to Dirac notation lowest_five_eigenstates_dirac = [] for i in range(5): max_amplitude_index = np.argmax(np.abs(lowest_five_eigenstates[:, i])) dirac_state = "|0:09b>".format(max_amplitude_index) lowest_five_eigenstates_dirac.append(dirac_state)# Display the results print("Lowest Five Eigenvalues:", lowest_five_eigenvalues) print("Corresponding Eigenstates in Dirac Notation:",lowest_five_eigenstates_dirac) § SIMULATION USING QISKIT §.§ How do I scale the problem upThe most difficult part in implementation is how to embed an arbitrary graph automatically to cost hamiltonian in QAOA?. 
I write a function, the input is the graph, the output is the the parameter for the hamiltonian, in a python dictionary.from sympy import symbols, Sum, IndexedBase, simplify from sympy.abc import n, v, j, u# Define symbolic variables x = IndexedBase('x')# Function to represent the vertex uniqueness term of the Hamiltonian def vertex_uniqueness_term(n): return Sum((1 - Sum(x[v, j], (j, 2, n)))**2, (v, 2, n))# Function to represent the edge uniqueness term of the Hamiltonian def edge_uniqueness_term(n): return Sum((1 - Sum(x[v, j], (v, 2, n)))**2, (j, 2, n))# Function to represent the edge validity term# of the Hamiltonian for a given graph def edge_validity_term(graph, n): validity_term = 0 for u in range(2,n+1): edge=(u,1) if not edge in graph: validity_term +=x[u,n] for v in range(2,n+1): u, v = edge if not edge in graph: validity_term += Sum(x[u, j] * x[v, j+1], (j, 1, n-1)) return validity_term# Combine the terms to form the complete Hamiltonian def hamiltonian(graph, n): H = vertex_uniqueness_term(n) +edge_uniqueness_term(n) + edge_validity_term(graph, n) return simplify(H) def apply_substitution_to_hamiltonian(H, n): Z = IndexedBase('Z') H_substituted = H for v in range(2, n+1): for j in range(2, n+1): z_index = (v-2)*(n-1) + j-1# Corrected index calculation if z_index > 0: H_substituted =H_substituted.subs(x[v, j], 1/2 * (1 - Z[z_index])) else: H_substituted = H_substituted.subs(x[v, j], 0) return simplify(H_substituted) def expand_and_simplify_hamiltonian(H,n): Z = IndexedBase('Z') H_expanded = H.expand() # Apply the simplification rule Z_k^2 = I for k in range(1, (n-1)**2+1):# Assuming up to 8 qubits for this example H_expanded = H_expanded.subs(Z[k]**2, 0) return simplify(H_expanded) def hamiltonian_to_string_list(H, n): """ Convert the expanded Hamiltonian to alist of strings with corresponding coefficients. Each string represents a term in the Hamiltonian, with 'Z' at positions corresponding to qubits involved in the term. For example, 'ZZI' represents Z_1 Z_2.:param H: The expanded Hamiltonian expression :param n: Number of qubits :return: List of tuples (string, coefficient) """ Z = IndexedBase('Z') terms = []# Iterate over each term in the Hamiltonian expression for term in H.as_ordered_terms(): # Initialize a string with 'I's for each qubit term_string = ['I'] * n coeff = H.coeff(term)# Extract the coefficient of the term# Check for the presence of Z operators in the term for k in range(1, n+1): if term.has(Z[k]): term_string[k-1] = 'Z'# Join the term string and append it with its coefficient to the list terms.append((”.join(term_string), coeff))return termsn=3 # Example: Hamiltonian for a triangle graph triangle_graph = [(1, 2), (2, 3), (3, 1), (2, 1), (3, 2), (3, 1)] H_triangle = hamiltonian(triangle_graph, n) print(f"Polynomial H is H_triangle") # Apply the substitution rule to the # Hamiltonian for a triangle graph (n = 3) H_triangle_substituted = apply_substitution_to_hamiltonian(H_triangle, n) print(f"After Substitute wo Z is H_triangle_substituted") H_final=expand_and_simplify_hamiltonian(H_triangle_substituted,n) print(f"After simplificationH_final") # Convert the final Hamiltonian for the# triangle graph to string list representation hamiltonian_string_list = hamiltonian_to_string_list(H_final, (n-1)**2) print(f"Final resulthamiltonian_string_list")The output in <ref> is exactly the form of hamiltonian that I calculated by hand based on <ref>. With the code above, I can generate the cost hamiltonian for hamiltonian cycle problem for an arbitrary graph. 
Qiskit has already implement the next step to compile the dictionary to the circuit. §.§ Simulation of QAOA for Hamiltonian cycle of a triangle.First, I run the simulation of QAOA algorithm on the simplest case: When the graph is a triangle! The benefit of doing this is that the circuit is really simple and it is easy for us to test the correctness. The compiled circuit, when the repetition number is two, is shown in <ref>. After a row of hadamard gate, there are four pairs of ZZ gate, as defined and calculated in <ref>, which is the cost circuit. The mixer part, is chosen as a row of R_x gate, with the same rotation angle. Finally, we measure the result and get the output. Since our goal is to minimize the energy of the cost hamiltonian with respect to the output state, we have to to calculate the cost value[In classcal simulation, the cost is easy to calculate, because we only need to the the matrix vector calculation:⟨ψ| H|ψ⟩. However, in a real quantum computer, to get ⟨ψ| H|ψ⟩ is much harder. Generally speaking, we have to divide H into different Pauli Gate, and measure the expectation of each pauli gate by sampling. Finally, add the energy of each pauli gate.] and use a classical optimizer to minimize the energy. We use the"COBYLA" method of scipy.optimize to to the optimization.§.§ Simulation of QAOA for Hamiltonian cycle of a Square §.§ Result of different Mixers The choice of mixer can be essential in the training and optimization process of QAOA. In the previous, I used the default R_x mixer. In this section, I also use R_y mixer to run the QAOA and compare the new result with the previous one.In <ref>, we run the same noiseless simulation for hamiltonian cycle on a triangle, and compare the result of R_x mixer and R_y mixer. I also simulate for the same comparison when the graph is a square, which is plotted in <ref>. 
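One way to realize the two mixer variants compared here is the mixer_operator argument of qiskit's QAOAAnsatz: by default the ansatz uses the sum-of-X mixer (R_x rotations), while passing a sum of Y operators yields R_y mixer layers. The snippet below sketches this for the 4-qubit triangle encoding; it is an illustration of the construction, not the exact code used for the experiments.

from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp

num_qubits = 4  # triangle case: (n-1)^2 qubits for n = 3
# the four ZZ couplings of the triangle cost Hamiltonian H_C
cost_op = SparsePauliOp.from_list([("ZZII", 1), ("IIZZ", 1), ("ZIZI", 1), ("IZIZ", 1)])
# sum of single-qubit Y terms as an alternative mixer Hamiltonian
y_terms = [("I" * k + "Y" + "I" * (num_qubits - 1 - k), 1) for k in range(num_qubits)]
y_mixer = SparsePauliOp.from_list(y_terms)
ansatz_rx = QAOAAnsatz(cost_op, reps=2)                          # default R_x mixer
ansatz_ry = QAOAAnsatz(cost_op, reps=2, mixer_operator=y_mixer)  # R_y mixer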
From the result in <ref> and <ref>, it's clear that R_x mixer is much better than R_y mixer.§ CODE FOR MY SIMULATION# General imports import numpy as np# Pre-defined ansatz circuit, operator class and visualization tools from qiskit.circuit.library import QAOAAnsatz from qiskit.quantum_info import SparsePauliOp from qiskit.visualization import plot_distribution from qiskit.providers.fake_provider import FakeManilaV2 # Qiskit Runtime from qiskit_ibm_runtime import QiskitRuntimeService from qiskit_ibm_runtime import Estimator, Sampler, Session, Options from qiskit.providers.fake_provider import FakeManilaV2 # SciPy minimizer routine from scipy.optimize import minimize from qiskit.primitives import Estimator, Sampler options = Options() options.transpilation.skip_transpilation = False options.execution.shots = 10000 estimator = Estimator(options="shots": int(1e4)) sampler = Sampler(options="shots": int(1e4))from sympy import symbols, Sum, IndexedBase, simplify from sympy.abc import n, v, j, u# Define symbolic variables x = IndexedBase('x')# Function to represent the vertex uniqueness term of the Hamiltonian def vertex_uniqueness_term(n): return Sum((1 - Sum(x[v, j], (j, 2, n)))**2, (v, 2, n))# Function to represent the edge uniqueness term of the Hamiltonian def edge_uniqueness_term(n): return Sum((1 - Sum(x[v, j], (v, 2, n)))**2, (j, 2, n))# Function to represent the edge validity# term of the Hamiltonian for a given graph def edge_validity_term(graph, n): validity_term = 0 for u in range(2,n+1): edge=(u,1) if not edge in graph: validity_term +=x[u,n] for v in range(2,n+1): u, v = edge if not edge in graph: validity_term += Sum(x[u, j] * x[v, j+1], (j, 1, n-1)) return validity_term# Combine the terms to form the complete Hamiltonian def hamiltonian(graph, n): H = 1.5*vertex_uniqueness_term(n) + edge_uniqueness_term(n) + edge_validity_term(graph, n) return simplify(H) def apply_substitution_to_hamiltonian(H, n): Z = IndexedBase('Z') H_substituted = H for v in range(2, n+1): for j in range(2, n+1): z_index = (v-2)*(n-1) + j-1# Corrected index calculation if z_index > 0: H_substituted =H_substituted.subs(x[v, j], 1/2 * (1 - Z[z_index])) else: H_substituted = H_substituted.subs(x[v, j], 0) return simplify(H_substituted) def expand_and_simplify_hamiltonian(H,n): Z = IndexedBase('Z') H_expanded = H.expand() # Apply the simplification rule Z_k^2 = I for k in range(1, (n-1)**2+1):# Assuming up to 8 qubits for this example H_expanded = H_expanded.subs(Z[k]**2, 0) return simplify(H_expanded) def hamiltonian_to_string_list(H, n): """ Convert the expanded Hamiltonian to a list ofstrings with corresponding coefficients. Each string represents a term in the Hamiltonian, with 'Z' at positions corresponding to qubits involved in the term. 
For example, 'ZZI' represents Z_1 Z_2.:param H: The expanded Hamiltonian expression :param n: Number of qubits :return: List of tuples (string, coefficient) """ Z = IndexedBase('Z') terms = []# Iterate over each term in the Hamiltonian expression for term in H.as_ordered_terms(): # Initialize a string with 'I's for each qubit term_string = ['I'] * n coeff = H.coeff(term)# Extract the coefficient of the term findZ=False # Check for the presence of Z operators in the term for k in range(1, n+1): if term.has(Z[k]): findZ=True term_string[k-1] = 'Z' if not findZ: continue # Join the term string and append it with its coefficient to the list terms.append((”.join(term_string), coeff))return termsn=4 # Example: Hamiltonian for a triangle graph triangle_graph = [(1, 2), (2, 3), (3, 4),(4,1), (2, 1), (3, 2),(4,3),(1,4)] H_triangle = hamiltonian(triangle_graph, n) print(f"Polynomial H is H_triangle") # Apply the substitution rule to the Hamiltonian for a triangle graph (n = 3) H_triangle_substituted = apply_substitution_to_hamiltonian(H_triangle, n) print(f"After Substitute wo Z is H_triangle_substituted") H_final=expand_and_simplify_hamiltonian(H_triangle_substituted,n) print(f"After simplificationH_final") # Convert the final Hamiltonian for the triangle# graph to string list representation hamiltonian_string_list = hamiltonian_to_string_list(H_final, (n-1)**2) print(f"Final resulthamiltonian_string_list")from qiskit.providers.fake_provider import FakeAlmadenV2 # Get a fake backend from the fake provider backend = FakeAlmadenV2()from qiskit.transpiler import PassManager from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager from qiskit_ibm_runtime.transpiler.passes.scheduling import ( ALAPScheduleAnalysis, PadDynamicalDecoupling, ) from qiskit.circuit.library import XGatetarget = backend.target pm = generate_preset_pass_manager(target=target, optimization_level=3) ansatz_ibm = pm.run(ansatz) hamiltonian_string_list = [ ('ZZIIIIIII', 1), ('ZIZIIIIII', 1), ('ZIIZIIIII', 1), ('ZIIIIIZII', 1), ('ZIIIIIIZI', 1), ('ZIIIIIIII', -3), ('IZZIIIIII', 1), ('IZIIZIIII', 1), ('IZIIIIZII', 1), ('IZIIIIIZI', 1), ('IZIIIIIIZ', 1), ('IZIIIIIII', -4), ('IIZIIZIII', 1), ('IIZIIIIZI', 1), ('IIZIIIIIZ', 1), ('IIZIIIIII', -3), ('IIIZZIIII', 1), ('IIIZIZIII', 1), ('IIIZIIZII', 1), ('IIIZIIIII', -4), ('IIIIZZIII', 1), ('IIIIZIIZI', 1), ('IIIIZIIII', -2), ('IIIIIZIIZ', 1), ('IIIIIZIII', -4), ('IIIIIIZZI', 1), ('IIIIIIZIZ', 1), ('IIIIIIZII', -3), ('IIIIIIIZZ', 1), ('IIIIIIIZI', -4), ('IIIIIIIIZ', -3) ] # Problem to Hamiltonian operator hamiltonian = SparsePauliOp.from_list(hamiltonian_string_list) # QAOA ansatz circuit ansatz = QAOAAnsatz(hamiltonian, reps=8)ansatz.decompose(reps=8).draw(output="mpl", style="iqp")def cost_func(params, ansatz, hamiltonian, estimator): """Return estimate of energy from estimatorParameters: params (ndarray): Array of ansatz parameters ansatz (QuantumCircuit): Parameterized ansatz circuit hamiltonian (SparsePauliOp): Operator representation of Hamiltonian estimator (Estimator): Estimator primitive instanceReturns: float: Energy estimate """ cost = estimator. run(ansatz, hamiltonian, parameter_values=params). 
result().values[0] return costx0 = 2 * np.pi * np.random.rand(ansatz_ibm.num_parameters)res = minimize(cost_func, x0, args=(ansatz, hamiltonian, estimator),method="COBYLA",options='disp': True)# Assign solution parameters to ansatz qc = ansatz.assign_parameters(res.x) # Add measurements to our circuit qc.measure_all()# Sample ansatz at optimal parameters samp_dist = sampler.run(qc).result().quasi_dists[0] # Close the session since we are now done with it session.close() § CONCLUSIONI got many interesting result in this project. First and foremost, this is the first time I test that the QAOA really works, for solving the NP complete problem such as hamiltonian cycle path problem. However, since the embedding require (n-1)^2 qubits, I can only simulate up to no more than n=6. In this problem, I choose n=3,4 and run the simulation on the simplest case: A triangle and a square. In both cases, I analyze the energy spectrum of the cost hamiltonian beforehand, and didn't start the simulation of QAOA until I'm convinced that the cost hamiltonian is correct. This step actually benefit me a lot in understanding the behavior of QAOA. One important thing is that once you know what exactly the minimum energy is, you can check the quality of QAOA parameter optimizer, by comparing the cost given by QAOA circuit with the minimum energy. Another interesting observation in the spectrum plotted in <ref> and <ref> is that in both cases there is a large enough energy gap between the solution space the the non-solution space. I highly doubt that, such energy gap is crutial for the success and time complexity of QAOA. There is no doubt that when the structure of the graph get more complicated, the spectrum can also get complicated, and thus it becomes harder for an optimizer to tell whether we are getting closer to ground state or not. On the other hand, if we could find a better general way of embedding Hamiltonian cycle, such that the solution space has a large gap between the non-solution space, than I would be much more confident that we can use QAOA to get the accurate solution state.One thing one can never neglect is the role of quantum noise. Intuitively, when we add more noise into the circuit, our algorithm will only get worse result. The experiments of triangle case, in <ref>, is consistent with this intuition. However, in the square case, the result of running QAOA is much better than on a noiseless simulator! Does that mean, we don't need error correction at all for QAOA? Because noise seems to be a resource that benefit our optimization and annealing process, rather than a harmful factor! The idea of utilization quantum noise, is so crazy but attracting. Maybe I will explore this possiblity someday in the future.
http://arxiv.org/abs/2401.00017v1
{ "authors": [ "Zhuoyang Ye" ], "categories": [ "cs.ET" ], "primary_category": "cs.ET", "published": "20231226192607", "title": "QAOA on Hamiltonian Cycle problem" }
Learning from small data sets: Patch-based regularizers in inverse problems for image reconstruction Moritz Piening^1,Fabian Altekrüger^2, Johannes Hertrich^2, Paul Hagemann^1, Andrea Walther^2, Gabriele Steidl^1 January 14, 2024 ===================================================================================================================== [1]Institute of Mathematics, Technische Universität Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany[2]Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany[3]Department of Computer Science, University College London, 90 High Holborn, London, WC1V 6LJ, United Kingdom The solution of inverse problems is of fundamental interest inmedical and astronomical imaging, geophysics as well as engineering and life sciences. Recent advances were made by using methods frommachine learning, in particular deep neural networks. Most of these methods require a huge amount of (paired) data and computer capacity to train the networks, which often may not be available. Our paper addresses the issue of learning from small data sets by taking patches of very few images into account.We focus on the combination of model-based and data-driven methods by approximating just the image prior, also known as regularizer in the variational model. We review two methodically different approaches, namelyoptimizing the maximum log-likelihood of the patch distribution,and penalizing Wasserstein-like discrepancies ofwhole empirical patch distributions. From the point of view of Bayesian inverse problems, we show how we can achieve uncertainty quantification by approximating the posterior using Langevin Monte Carlo methods. We demonstrate the power of the methodsin computed tomography, image super-resolution, and inpainting. Indeed, the approach provides also high-quality results in zero-shot super-resolution, where only a low-resolution image is available.The paper is accompanied by a GitHub repository containing implementations of all methodsas well as data examples so that the reader can get their own insight intothe performance. § INTRODUCTIONIn medical and astronomical imaging, engineering, and life sciences,data in the form of transformed images is acquired. This transformation is the result of a forward process that underlies a physical model. In general, the data is corrupted by “noise” and the inversion of the forward operator to get the original image back is not possible, since there may exist many solutions and/or the noise would be heavily amplified. This is the typical setting in ill-posed inverse problems in imaging.It is critical in applications like image-guided medical diagnostics, where decision-making is based on the recovered image. One strategy for treating such problems is to include prior knowledge of the desired images in the model. This leads to a variational formulation of the problem that typically contains two kinds of terms.The first one is a “distance” term between the received data and the acquisition model, which includes the forward operator. The chosen distance reflects the noise model. The second one is an image prior, also called a regularizer,since it should force the variational problem to become well-posed,see <cit.>. The choice of the image prior is more difficult. A prominent example is the total variation regularizer <cit.> and its vast amount of adaptations. 
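For orientation, the (anisotropic) discrete total variation just mentioned can be written in a few lines of PyTorch; this is only an illustrative sketch and not code from the accompanying repository.

import torch

def total_variation(x: torch.Tensor) -> torch.Tensor:
    # x: gray-value image of shape (d1, d2); sum of absolute finite differences
    dv = torch.abs(x[1:, :] - x[:-1, :]).sum()   # differences between neighboring rows
    dh = torch.abs(x[:, 1:] - x[:, :-1]).sum()   # differences between neighboring columns
    return dv + dh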
The past decades have witnessed a paradigm shiftin data processing due to the emergence of the artificial intelligence revolution.Sophisticated optimization strategiesbased on the reverse mode of automatic differentiation methods <cit.>, also known as backpropagation, were developed. The great success of deep learning methods has entered the field of inverse problems in imaging in quite different ways. For an overview of certain techniques, we refer to <cit.>.However, for many applications, there is only limited data available such that most deep learning based methods cannot be applied. In particular, for very high dimensional problems in image processing, the necessary amount of training data pairs is often out of reach and the computational costs for model training are high. On the other hand, the most powerful denoising methods before deep learningentered the field were patched-based as BM3D <cit.>orMMSE-based techniques <cit.>.This review paper aims to advertisea combination of model-based and data-driven methods for learning from small data sets. The idea consists of retaining the distance term in the variational modeland to establish a new regularizer that takes the internal image statistics,in particular the patch distribution of very few images into account. To this end, we follow the path outlined below. Outline of the paper. We start by recalling Bayesian inverse problems in Section <ref>. In particular, we highlight the difference between* the maximum a posteriori approach which leads to a variational model whose minimization provides one solution to the inverse problem, and * the approximation of the whole posterior distribution from which we intend to samplein order to get, e.g., uncertainty estimations. We demonstrate by example thenotations of well-posedness due to Hadamard and Stuart's Bayesian viewpoint.Section <ref> shows the relevance of internal image statistics. Although we will exclusively deal with image patches, we briefly sketch feature extraction by neural networks. Then we explain two different strategies to incorporate feature information into an image prior (regularizer). The first one is based on maximum likelihood estimations of the patch distribution which can also be formulated in terms of minimizing the forward Kullback-Leibler divergence. The second one penalizes Wasserstein-like divergences between the empirical measure obtained from the patches. Section <ref> addresses three methods for parameterizing the functionin the maximum likelihood approach, namely via Gaussian mixture models, the push-forward of a Gaussian by a normalizing flow, and a local adversarial approach. Section <ref> shows three methods for choosing, based on the Wasserstein-2 distance, appropriate divergences for comparing the empirical patch measures. Having determined various patch-based regularizers, we use them to approximate the posterior measure and describe how to sample from this measure using a Langevin Monte Carlo approach in Section <ref>.Section <ref> illustrates the performance of the different approaches by numerical examples in computed tomography (CT), image super-resolution, and inpainting. Moreover, we consider zero-shot reconstructions in super-resolution. Further,we give an example for sampling from the posterior in inpainting and for uncertainty quantification in computed tomography. 
Sincequality measures in image processing reflecting the human visual impressions are still a topic of research,we decided to give an impression of the different quality measures used in this section at the beginning. The code base for the experiments is made publicly available on GitHub[<https://github.com/MoePien/PatchbasedRegularizer>] to allow for benchmarking for future research. It includes ready-to-use regularizers within a common framework and multiple examples. Implementation on top of the popular programming language Python and the library PyTorch <cit.> that enables algorithmic differentiation enhances its accessibility. Finally, note that alternative patch-based regularization strategies exist in addition to the presented ones, e.g., based on patch-based denoisers <cit.> or an estimation of the latent dimension of the patch manifold <cit.>. § INVERSE PROBLEMS: A BAYESIAN VIEWPOINTThroughout this paper, we consider digital gray-valued images of size d_1 × d_2 as arrays x ∈^d_1,d_2 or alternatively, by reordering their columns, as vectors x ∈^d, d = d_1 d_2. For simplicity, we ignore that in practice gray values are encoded as finite discrete sets.The methods can directly be transferred to RGB color images by considering three arrays of the above form for the red, green, and blue color channels.In inverse problems in image processing, we are interested in the reconstruction of an image x ∈ℝ^d from its noisy measurementy = noisy (F(x)) ,where F ℝ^d →ℝ^d̃is a forward operatorand “noisy” describes the underlying noise model. In all applications of this paper, F is a linear operatorwhich is either not invertible as in image super-resolution and inpainting or ill-conditioned as in computed tomography, so that the direct inversion of F would amplify the noise. A typical noise model is additive Gaussian noise, resulting iny = F(x) + ξ,where ξ is a realization of a Gaussian random variable Ξ∼𝒩(0, σ^2 I_d̃).Recall that the density function of the normal distribution 𝒩(m, Σ) with mean m ∈ℝ^d̃ and covariance matrix Σ∈ℝ^d̃, d̃ is given byφ(x| m, Σ)(2π)^-d̃/2|Σ|^-1/2exp(-1/2(x-m)^⊺Σ^-1(x-m)).More generally, we may assume that x itself is a realization of a continuous random variable X ∈ℝ^dwith law P_X determined by the density function p_X^d→[0,∞) with ∫_^dp_X(x) d x=1, i.e., x is a sample from P_X. Then we can consider the random variableY = F(X) + Ξ, Ξ∼𝒩(0, σ^2 I_d̃)and the posterior distribution P_X|Y=y for given y ∈^d̃. The crucial law to handle this is Bayes' rulep_X|Y=y_posterior (x)=p_Y|X=x^likelihood (y) p_X(x)^prior/ p_Y(y)_evidence.Now we can ask at least for three different quantities. 1. MAP estimator The maximum a posteriori (MAP) estimatorprovides the value with the highest probability of the posteriorx_MAP(y) ∈_x ∈^d{p_X|Y=y(x) } =_x ∈^d{log p_X|Y=y(x) }.By Bayes' rule (<ref>) and since the evidence is constant, this can be rewritten asx_MAP(y) ∈_x ∈^d{log p_Y|X=x(y) + log p_X(x) }.The first term depends on the noise model, while the second one on the distribution within the image class. Assuming thatp_Y|X=x(y) = C exp(-𝒟(Fx,y)) anda Gibbs prior distribution p_X (x) = C_βexp(-βℛ(x)),we arrive at the variational model for solving inverse problemsx_MAP(y) ∈_x ∈^d{𝒟(F(x),y)_data term+βℛ(x)_prior}, β > 0.Instead of a “prior” term, ℛ is also known as a “regularizer” in inverse problems since it often transfers the original ill-posed or ill-conditioned problem into a well-posed one. 
By Hadamard's definition, this means that for any y there exists a unique solution that continuously depends on the input data. For example, for Gaussian noise as in (<ref>) we havethat F(x) + Ξ∼𝒩(F(x), σ^2 I_d̃) so that by (<ref>) we getlog p_Y|X=x(y) = log (2πσ^2)^-d̃/2 - 1/2 σ^2F(x) -y^2,which results with ασ^2 β in x_MAP(y) ∈ _x ∈^d{1/2F(x) -y^2 + αℛ(x)}.2. Posterior distributionHere we are searching for a measure P_X|Y=y∈𝒫(^d)and not as in MAP for a single sample that is most likely for a given y. We will see that approximating the posterior,which mainly means to find a way to sample from it, provides a tool for uncertainty quantification. It was shown in <cit.> that the posterior P_X|Y=y is often locally Lipschitz continuous with respect to y, i.e.,d(P_X|Y=y_1,P_X|Y=y_2) ≤ L y_1 - y_2with some L >0 and a discrepancy d between measures as theKullback-Leibler divergence or Wasserstein distances explained in Section <ref>. Indeed, this Lipschitz continuity is the key feature ofStuart's formulation of a well-posed Bayesian inverse problem <cit.> as a counterpart of Hadamard's definition.There are only few settings in (<ref>) where the posterior can be computed analytically, see <cit.>,namely if X is distributed by a Gaussian mixture model (GMM)X ∼∑_k=1^K α_k 𝒩(m_k,Σ_k) ∈ℝ^d, i.e.,p_X = ∑_k=1^K α_k φ(·|m_k, Σ_k), ∑_k=1^K α_k = 1,α_k > 0,the forward operator F ∈^d̃,d is linear and Ξ∼ N(0,σ^2 I_d̃). Then it holdsp_X|Y=y = ∑_k=1^K α̃_k φ(·|m̃_k,Σ̃_k)withΣ̃_k (1σ^2F^ F+Σ_k^-1)^-1, m̃_k Σ̃_k (1σ^2F^ y+Σ_k^-1μ_k),α̃_kα_k exp(1/2 (m̃_k^Σ̃_k^-1m̃_k - m_k^Σ_k^-1 m_k)).3. MMSE estimator The maximum mean square error (MMSE) estimator is just the expectation value of the posterior, i.e.,x_MMSE(y) = 𝔼[X|Y=y]= ∫_ℝ^d x p_X|Y = y (x)d x.If X ∼𝒩(m,Σ), F is linear and Ξ∼ N(0,σ^2 I_d̃), then the MMSE can be computed analytically byx_MMSE(y)=m + Σ F^(F Σ F^ + σ^2 I_d̃)^-1 (y - F m).We would like to note that for more general distributions the estimator (<ref>) is known as thebest linear unbiased estimator (BLUE).MMSE techniques in conjunction with patch-based techniques were among the most powerful techniques for image denoising before ML-based methods entered the field, see <cit.>. The following simple one-dimensional example illustrates the behavior of the posterior in contrast to the MAP estimator and MMSE. [<cit.>] For ε^2 = 0.05^2, let X ∼12𝒩(-1,ε^2) + 12𝒩(1,ε^2),F = I and Ξ∼𝒩(0,σ^2) with σ^2= 0.1.TheMAP estimator is given byx_MAP(y) = _x∈{12 σ^2 ( y - x )^2 - log( 12 (e^-1/2 ε^2 ( x - 1 )^2+e^-1/2 ε^2 ( x + 1 )^2 )) }=_x∈{12 σ^2 ( y - x )^2+ 12 ε^2 (x^2 + 1) - log( cosh(x/ε^2) )}.This minimization problem has a unique solution for y ≠ 0 which we computed numerically. By (<ref>), we can compute the posterior P_X|Y=y = 1/α̃_1 + α̃_2 (α̃_1 𝒩(· | m̃_1 , σ̃^2) + α̃_2 𝒩(· | m̃_2 σ̃^2) )with σ̃^2= σ^2 ε^2/σ^2 + ε^2, m̃_1 = ε^2 y + σ^2/ε^2 + σ^2, m̃_2 = ε^2 y - σ^2/ε^2 + σ^2,α̃_1= 1/2 εexp(1/2ε^2( (ε^2 y + σ^2)^2/σ^2(ε^2 + σ^2) - 1 ) ), α̃_2 = 1/2 εexp(1/2ε^2( (ε^2 y - σ^2)^2/σ^2(ε^2 + σ^2) - 1 ) ) .Finally, the MMSE is given by x_MMSE(y) = 1/α̃_1 + α̃_21/ε ( ε^2 + σ^2) e^ε^2 y^2 - σ^2/2 σ^2 (ε^2 + σ^2)(ε^2 ycosh(y/ε^2 + σ^2)+σ^2sinh( y/ε^2 + σ^2) ).Figure <ref> illustrates the behavior of the three quantities. The main observation is that in contrast to the posterior, the MAP estimator is discontinuous at y=0. 
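In practice, minimization problems of this form are solved numerically. The following PyTorch sketch of a generic first-order solver for the variational model above is purely illustrative: the forward operator F, the observation y, and the regularizer R are placeholders to be supplied by the application, and the function name map_estimate is not part of any library.

import torch

def map_estimate(F, R, y, alpha, x0, steps=500, lr=1e-2):
    # minimize 0.5 * ||F(x) - y||^2 + alpha * R(x) over x by a first-order method
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.5 * torch.sum((F(x) - y) ** 2) + alpha * R(x)
        loss.backward()
        opt.step()
    return x.detach()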
In the numerical experiments we will consider the MAP estimator in Sections <ref>, <ref>, and <ref>, and the posterior sampling in Section <ref>.§ INTERNAL IMAGE STATISTICSIn this paper, when dealing with the MAP estimator,i.e., with problems of the form (<ref>), we follow a physics-informed approach, where both the forward operator and the noise model are known.Then the data term 𝒟(F(x),y) is completely determined. The challenging part is the modeling of the prior distribution P_X, where we only knowsamples from. In contrast to deep learning methods which rely on a huge amount of (paired) ground truth data, we are in a situation, where only one or a few images are available. Then, instead of working with the distribution P_X of whole images in the prior, we consider typical features of the images and ask for the feature distribution. These features live in a much lower dimensional space than the images. Indeed, one key finding in image processing was the expressiveness of this internal image statistics <cit.>. Clearly, there are many ways to extract meaningful features and we refer only to the“field of experts” framework <cit.> here. In the following, we explain two typical choices of meaningful features, namely image patches and features obtained from a nonlinear filtering process of a neural network. Image patches Image patches are square-shaped (or rectangular) regions of size p × p within an image x ∈^d_1,d_2 which can be extracted by operators P_i: ^d_1,d_2→^p,p, i=(i_1,i_2) ∈{1,…,d_1}×{1,…,d_2} via P_i(x) = (x_l_1,l_2)_l_1=i_1,l_2 = i_2^i_1+p,i_2+p. The patch extraction is visualized in Figure <ref> (left). The use of such patches for image reconstruction has a long history <cit.>and statistical analyses of empirical patch distributions reveal their importance to image characterization <cit.>.Furthermore, the patch distributions are similar at different scales for many image classes. Therefore, the approach is not sensitive to scale shifts. Figure <ref> illustrates this behavior,see also Figure <ref>. Indeed, replication of patch distributions by means of patch sampling <cit.> or statistical distance minimization <cit.> enables the synthesis of high-quality texture images. Neural network image generators are able to generate diverse outputs on the basis of patch discriminators <cit.> or patch distribution matching <cit.>. Furthermore, patch-matching methods have successfully been employed for style transfer <cit.>.Neural network filtered features Several feature methods for image reconstruction, as, e.g., in the “field of experts” framework <cit.> are based on features obtained from various linear filter responsespossibly finally followed by an application of a nonlinear function.More recently, such techniques were further extended by using a pre-trained classification convolutional neural network, e.g., a VGG architecture trained on the ImageNet dataset <cit.>.In each layer, multiple nonlinear filters (convolutions and component-wise nonlinear activation function)are applied to the downsampled result of the previous layer.Typically, the outputs of the first convolutional layer after a downsampling step are utilized as hidden features. This is illustrated in Figure <ref> right.Since every nonlinear filter of a convolutional filter is applied locally, these extracted features represent nonlinear transformations of patches of different sizes.The use of such features has been pioneered by Gatys et al. 
<cit.>,who used them to construct a loss function for the style transfer between two images based on a statistical distance between their hidden feature distributions <cit.>. Furthermore, such features have been utilized for texture synthesis in <cit.> and for image similarity comparison in <cit.>. Internal image statistics in image priors In the rest of the paper, we will concentrate on patches as features. We assume that we are given a small number n of images from an image class, say, n=1 high-resolution material image or n=6 computed tomography scans.For simplicity, let us enumerate the patch operators by P_i, i=1,…,N. Further, let us denote by P_X the patch distribution. Then we will follow two different strategies to incorporate them into the prior ℛ of model (<ref>), which we describe next.1.Patch maximum log-likelihood : We approximate the patch distribution P_X by a distribution P_X_θ with densityp_θ depending on some parameter θ. Then, we learn its parameter via a maximum log-likelihood (ML) estimator:θ̂ =_θ{∏_j=1^n∏_i=1^N p_θ(P_i(x_j) )} = _θ{∑_j=1^n∑_i=1^N log( p_θ(P_i(x_j)) )}=_θ{-∑_j=1^n∑_i=1^N log( p_θ(P_i(x_j) ) ) _=:ℒ(θ)} .Once the optimal parameter θ̂ is determined by minimizing the loss functionℒ(θ), we can use ℛ(x)- 1/N∑_i=1^N log( p_θ̂(P_i(x)) )as a prior in our minimization problem (<ref>). Indeed this value should become small, ifthe patches of the wanted image x are distributed according to p_θ̂.The above model can be derived from another point of view using the Kullback-Leibler (KL) divergence between P_X and P_X_θ. As a measure divergence, the KL is non-negative and becomes zero if and only if both measures coincide. For P_X and P_X_θ on ^d, d = p^2 with densities p_X and p_θ, respectively, the KL divergence is given (if it exists) byKL(P_X, P_ X_θ)=∫_^dlog(p_X(x) / p_θ(x) )p_X (x)d x =∫_^dlog( p_X(x) ) p_X(x)_const d x - ∫_^dlog(p_θ(x))p_X(x)d x.Skipping the constant part with respect to θ, this becomesKL(P_X, P_X_θ) ∝ - ∫_^dlog(p_θ(x))p_X(x)d x= - 𝔼_x ∼ X[ p_θ(x) ].Here ∝ denotes equality up to an additive constant. Replacing the expectation value with the empirical one and neglecting the factor 1/nN, we arrive exactly at the loss function ℒ(θ) in (<ref>). 2. Divergences between empirical patch measures: We can associate empirical measures to the image patches byν1/nN∑_j=1^n ∑_i=1^N δ_P_i(x_j)andμ_x1/N∑_i=1^N δ_P_i(x)as illustrated in Figure <ref>. Then we use a priorℛ(x) dist(μ_x,ν),with some distance, respectively divergence, between measures.Our distances of choice in (<ref>) will be Wasserstein-like distances. Let 𝒫_p(ℝ^d), p ∈ [1,∞), denote the set of probability measures with finite p-th moments. TheWasserstein-p distanceW_p 𝒫_p(ℝ^d) ×𝒫_p(ℝ^d) → is defined byW_p^p(μ, ν) inf_γ∈Π(μ, ν)∫_^d ×^d x-y^pdπ(x, y),whereΠ(μ, ν) {π∈𝒫(^d ×^d): (proj_1)_#π = μ, (proj_2)_#π = ν} is the set of all couplings with marginals μ and ν. Here proj_i, i=1,2 denote the projection onto the i-th marginals. Further, we used the notation of a push-forward measure. In general, for a measurable function T:^d →^d̃and a measure μ on ^d, the push-forward measure of μ by T on ^d̃ is defined asT_#μ (A) = μ( T^-1 (A) ),i.e., ∫_Ag(y)d (T_#μ) (y) =∫_T^-1 (A)g(T(x) )dμ (x)for all g ∈ C_0(^d̃) and for all Borel measurable sets A ⊆^d̃. 
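Concretely, the support points of the empirical patch measures ν and μ_x defined above can be collected with a few lines of PyTorch; the helper below and the toy tensors are illustrative placeholders rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def extract_patches(img, p=6, stride=1):
    """Collect all p x p patches of a (B, 1, H, W) tensor as rows of a matrix.
    The rows are the support points of the empirical patch measure,
    each carrying the uniform weight 1/(number of patches)."""
    patches = F.unfold(img, kernel_size=p, stride=stride)    # (B, p*p, N)
    return patches.permute(0, 2, 1).reshape(-1, p * p)

refs = torch.rand(2, 1, 64, 64)                    # reference image(s) defining nu
x = torch.rand(1, 1, 64, 64, requires_grad=True)   # current iterate defining mu_x
support_nu = extract_patches(refs)
support_mu = extract_patches(x)
```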
The push-forward measure and the corresponding densities p_μand p_T_#μ are related by thetransformation formulap_T_#μ (x) = p_μ( T^-1 (x))| ( ∇ T^-1 (x) ) |.An example of the Wasserstein-2 distance is given in Figure <ref>.Note that both strategies can easily be generalized to multi-scale regularization by adding the composition ℛ∘ D for a downsampling operator D.§ PATCH MAXIMUM LOG-LIKELIHOOD PRIORSFor the prior in (<ref>), it remains to find an appropriateparameterized function p(·|θ). In the following, we present three different regularizers, namely obtained via Gaussian mixture models (GMM-EPLL), normalizing flows (patchNR), and adversarial neural networks (ALR). We will compare their performance in inverse problems later in the experimental section.§.§ Gaussian Mixture Model (GMM-EPLL) A classical approach assumes that the patch distribution can be approximated by a GMM (<ref>), i.e.,p_θ(x) = ∑_k=1^Kα_k φ(x |m_k, Σ_k), θ = (m_k,Σ_k)_k=1^K.This is justified by the fact that any probability distribution can be approximated arbitrarily well in the Wasserstein distance by a GMM. However, the number of modes K has to be fixed in advance. Then the maximization problem becomesθ̂= _θ{∑_j=1^n ∑_i=1^Nlog( ∑_k=1^Kα_k φ(P_i(x_j) | m_k, Σ_k) )}.This is typically solved by the Expectation-Maximization (EM) Algorithm <ref>,with the guarantee of convergence to a local maximizer, see <cit.>. The corresponding regularizer (<ref>) becomesℛ (x) = EPLL(x) 1/N∑_i=1^N - log( ∑_k=1^Kα_k φ(P_i(x) | m_k, Σ_k) ).It was suggested for solving inverse problems under the name expected patch log-likelihood (EPLL) by Zoran and Weiss <cit.>. By the relation (<ref>) the EPLL defines a prior distribution p_X (x) = C_βexp(-βEPLL(x)). The integrability of the function p_X can be shown by similar arguments as in the proof of <cit.>. While originally the variational formulation (<ref>) with the EPLL was solved using half quadratic splitting, in our implementation we use a stochastic gradient descent for minimizing (<ref>). Meanwhile there exist many extensions and improvements: GMMs may be replaced with other families ofdistributions <cit.> or multiple image scales can be included <cit.>.The intrinsic dimension of the Gaussian componentscan be restricted as in <cit.> or in the PCA reduced GMM model <cit.>. Finally, image restoration can be accelerated by introducing flat-tail Gaussian components, balanced search trees, and restricting the sum of the EPLL to a stochastically chosen subset of patch indices <cit.>. For the inclusion of learned local features into the model, we refer to <cit.>. In the next subsections, we will see that machine learning based models can further improve the performance.§.§ Patch Normalizing Flow Regularizer (patchNR) Another successful approach models the patch distribution using normalizing flows (NFs) <cit.>. NFs are invertible differentiable mappings. Currently, there are two main structures that achieve invertibility of a neural network,namely invertible residual networks <cit.> and directly invertible networks <cit.>. For the patchNR, the directly invertible networks are of interest. 
The invertibility is ensured by the special network structurewhich in the simplest case consists of a concatenation of K invertible,differentiable mappings T_θ_k: ^d →^d (and some permutation matrices which are skipped for simplicity)T_θ=T_θ_K∘…∘T_θ_1.The invertibility is ensured by a special splitting structure, namely for d_1 + d_2 = d, we setT_θ_k (z_1,z_2) = [ x_1; x_2 ] := [ z_1; z_1 ⊙exp(s_θ_k_1(z_1) )+ t_θ_k_2(z_1) ], z_i,x_i ∈^d_i, i=1,2, where s_θ_k_1, t_θ_k_2 are arbitrary neural networksand ⊙ denotes the component-wise multiplication. Then its inverse can be simply computed byT_θ_k^-1(x_1,x_2) =[ z_1; z_2 ]=[ x_1; (x_2 - t_θ_k_2(x_1) ) ⊙exp(-s_θ_k_1(x_1)) ] . This is the simplest, Real NVP network architecture <cit.>. A more sophisticated one is given in <cit.>. Now the idea is to approximate our unknown patch distribution P_X on ^d, d = p^2, using the push-forward by T_θ of a measure P_Z, where it is easy to sample from as, e.g., the d-dimensional standard normal distribution Z ∼𝒩(0, I_d). Our goal becomesP_X ≈ (T_θ)_# P_Z = P_X_θ.The NF between (samples of) the standard normal distribution in ^36 and the distribution of material image patches is illustrated in Figure <ref>. Let us take the KL approach to find the parameters of p_θ = p_ (T_θ)_# P_Z, i.e.,KL (P_X, (T_θ)_# P_Z ) =∫_^dlog(p_X(x) / p_ (T_θ)_# P_Z(x) )p_X (x)d x =∫_^dlog( p_X(x) ) p_X(x) _const d x - ∫_^dlog(p_(T_θ)_# P_Z(x))p_X(x)d x.Using the transformation formula (<ref>), we obtain (up to a constant)KL(P_X,(T_θ)_# P_Z )∝ -∫_^dlog(p_Z ((T_θ)^-1 (x)) |det∇ T_θ^-1(x)|)p_X(x)d x = -𝔼_x ∼ P_X[logp_Z( T_θ^-1(x) )+ log(|det∇ T_θ^-1(x)| ) ]and since P_Z is standard normally distributed furtherKL(P_X,(T_θ)_# P_Z )∝𝔼_x ∼ P_X[ 12T_θ^-1(x)^2- log(|det∇T_θ^-1(x)| ) ]. Taking the empirical expectation provides us with the ML loss function ℒ (θ) =∑_j=1^n ∑_i=1^N12T_θ^-1(P_i(x_j) ) ^2- log( | ∇ T_θ^-1(P_i(x_j)) | ).To minimize this function we use a stochastic gradient descent algorithm, where the special structure (<ref>) of the network can be utilized for the gradient computations. Once good network parameters θ̂ are found, we introduce in (<ref>) the patchNRℛ(x) = patchNR (x)1/N∑_i=1^N 12T_θ̂^-1(P_i(x) )^2 - log( |∇ T_θ̂^-1(P_i(x) )|). By the relation (<ref>) the patchNR defines a prior distribution p_X (x) = C_βexp(-βpatchNR(x)). The integrability of the function p_X is shown in <cit.>.The KL divergence of measures is neither symmetric nor fulfills a triangular inequality. Concerning symmetry, the setting KL(P_X,P_X_θ)is called forward KL. Changing the order of the measures gives the backward (or reverse) KL inKL(P_X_θ,P_X). These settings have differentproperties as being mode seeking or mode covering and the loss functions rely on different data inputs, see <cit.>. There are also mixed variantsαKL(P_X,P_X_θ) + (1-α) KL(P_X_θ,P_X), α∈ (0,1)as well as the Jensen–Shannon divergence <cit.>. Unfortunately, NFs mapping unimodal to multimodal distributions suffer from exploding Lipschitz constants and are therefore sensitive to adversarial attacks <cit.>.This is demonstrated in Figure <ref>. A remedy is the use of GMMs for latent distribution <cit.> or of stochastic NFs <cit.>. §.§ Adversarial Local Regularizers (ALR)The adversarial local regularizer (ALR) proposed by Prost et al. <cit.>makes use of a discriminative model.Originally, the ALR was not formulated with a loss of the form (<ref>),but by an adversarial approachsimilar toWasserstein generative adversarial networks (WGANs) <cit.>. The basic idea goes back to Lunz et al. 
<cit.>, who suggested learning regularizers through corrupted data. More precisely, the regularizer is aneural network trained to discriminate between the distribution of ground truth images and the distribution of unregularized reconstructions.The ALR is based on the same idea, but it operates on patches instead of whole images.Here a discriminator D_θ between unpaired samples from the original patch distribution P_X and a degraded one, say P_X̃, is trained using the Wasserstein-1 distance. Conveniently, the Wasserstein-1 distance has the dual formulation W_1(P_X, P_X̃)= sup_f ∈Lip_1{𝔼_x ∼ P_X[f(x)] -𝔼_x̃∼ P_X̃[f(x̃)]},whereLip_1 denotes the set of all Lipschitz continuous functions on ^d with Lipschitz constant not larger than 1. A maximizing function is called optimal Kantorovich potential and can be considered as a good separation between the two distributions.Unfortunately, obtaining such a potential is computationally intractable.Nevertheless, it can be approximated by functions from a parameterized family ℱ as, e.g., neural networkswith a fixed architecture_D_θ∈ℱ∩Lip_1{𝔼_x ∼ P_X[D_θ (x)] - 𝔼_x̃∼ P_X̃[D_θ (x̃ )]}.One possibility to relax the Lipschitz condition is the addition of a gradient penalty, see <cit.>, to arrive atθ̂= _θ{𝔼_x ∼ P_X[ D_θ(x)] - 𝔼_x̃∼ P_X̃[D_θ(x̃)] - λ𝔼_x ∼ P_α X + (1-α) X̃[(∇D_θ(x)-1)^2 ] }, λ > 0,where α is uniformly distributed in [0,1].This can be solved using a stochastic gradient descent algorithm. Finally, to solve our inverse problem, we can use the ALR ℛ(x) =ALR(x) 1/N∑_i=1^N D_θ(P_i(x)).This parameter estimation is different from the ML estimation of the previous two models, since it employs a discriminative approach. The original GAN architecture utilizes the Jensen–Shannon divergence <cit.> and can be replaced with the (forward) KL divergence (<ref>).This would lead to some form of maximum likelihood estimation in a discriminative setting. Furthermore, GAN architectures within an explicit maximum likelihood framework exist <cit.>. On closer inspection, however, this still takes a similar form as the EPLL or the patchNR. We assign a value to each patch in an image and sum over the set of resulting values. Moreover, higher values are assigned to patches that are more likely to stem from the true patch distribution. § DIVERGENCES BETWEEN EMPIRICAL PATCH MEASURES In the previous section, we have constructed patch-based regularizers using sums over all patches within an ML approach. This calls for independently drawn patches from the underlying distribution, an assumption that does not hold true, e.g., for overlapping patches.In particular, the same value may be assigned for an image which is a combination of very likely and very unlikely patches. This makes it desirable to address the patch distribution as a whole byassigning empirical measures to the patches as in (<ref>). In the following, we will consider three different “distances” between these empirical measures, namely the Wasserstein-2 distance, the regularized Wasserstein-2 distance, andan unbalanced variant.§.§ Wasserstein Patch Prior To keep the notation simple, let us rewrite the empirical measures in(<ref>) as ν= 1/nN∑_j=1^n ∑_i=1^N δ_P_i(x_j) = 1/M∑_k=1^M δ_y_kandμ_x= 1/N∑_i=1^N δ_P_i(x) = 1/N∑_i=1^Nδ_x_i.Then the admissible plans in the Wasserstein-2 distance (<ref>) have the formπ = ∑_i=1^N ∑_k=1^M p_i,kδ_i,k, ∑_i=1^N p_i,k = 1/M,∑_k=1^M p_i,k = 1/N, i=1,…,N, k=1,…,M.Obviously, they are determined by the weight matrix π (π_i,k)_i,k=1^N,M. 
Then,with the cost matrixC (x_i-y_k^2)_i,k=1^N,M,the Wasserstein-2 distance becomesW^2_2(μ_x, ν)=min_π∈Π ⟨ C,π⟩, Π = {π∈_+^N,M:1_N ^π = 1/M1_M, π1_M = 1/N1_N}.Here 1_M ∈^M denotes the vector with all entries one. An example of the Wasserstein-2 distance for two discrete measures is given in Figure <ref>. The dual formulation of the linear optimization problem (<ref>) reads as W^2_2(μ_x, ν) = max_ϕ(x_i) + ψ_k ≤ c_i,k1/N∑_i=1^Nϕ(x_i) + 1/M∑_k=1^Mψ_k=max_ψ∈^M1/N∑_i=1^Nψ^c( x_i) + 1/M∑_k=1^Mψ_kwith the c-conjugate functionψ^c( x_i) min_k{ x_i - y_k^2 - ψ_k},see <cit.>. The maximization problem (<ref>) is concave, and for large-scale problems, a gradient ascent algorithm as in <cit.> can be used to find a global maximizer ψ̂. As in <cit.>, the optimal vector ψ̂allows for the computation of the gradientof ℛ(x) = WPP(x)W^2_2(μ_x, ν)in our inverse problem (<ref>). More precisely, with the minimizerσ(i)∈_k { x_i - y_k^p - ψ̂_k} in (<ref>)we obtainW^2_2(μ_x, ν) = 1/N∑_i=1^Nx_i - y_σ(i)^2 - ψ̂_σ(i) + 1/M∑_k=1^Mψ̂_k,so that if the gradient with regard to the support point x_i of μ_x exists, it reads as ∇_x_iW^2_2(μ_x, ν) = 1/N∇_x_ix_i - y_σ(i)_2^2 = 2/N(x_i - y_σ(i)). By the relation (<ref>) the WPP defines a prior distribution p_X (x) = C_βexp(-βWPP(x)). The integrability of the function p_X is shown in <cit.>.Wasserstein patch priors were originally introduced by Gutierrez et al. <cit.> and Houdard et al. <cit.> for texture generation, where a direct minimization of the regularizer without a data fidelity term was used. Their use was adopted for regularization in inverse problems by Hertrich et al. <cit.>. Note that this stands in contrast to the previous EPLL-based regularizers, where a direct minimization of these regularizers would result in the synthesis of images with almost equally likely patches. In practice, this would lead to single-color images.§.§ Sinkhorn Patch PriorTo lower the computational burden in the WPP approach, a combination of the Wasserstein distance with the KL of the coupling and theproduct measure μ_x ⊗ν can be usedW_2,ε^2(μ_x, ν) = inf_π∈Π(μ_x, ν) ∫_^d ×^dx-y^2d π(x, y) + εKL(π , μ_x ⊗ν)= inf_π∈Π(μ_x, ν)⟨ C,π⟩ +ε∑_i,k=1^N,Mπ_i, klog(MN π_i, k). In Figure <ref> we give an example of W_2,ε^2 for different choices of ε and the same discrete measures as in Figure <ref>.The dual formulation reads as W^2_2,ε(μ_x, ν) =max_ϕ∈^N, ψ∈^M1/N∑_i=1^Nϕ_i+ 1/M∑_k=1^Mψ_k-ε/MN∑_i=1^N∑_k=1^Mexp(ϕ_i+ ψ_k - x_i - y_k^2/ε) + ε.This problem can be efficiently solved using the Sinkhorn algorithm which employs a fixed-point iteration. To this end, fix ψ^(r), respectively, ϕ^(r) and set the gradient with respect to the other variable in (<ref>) to zero. This results in the iterationsϕ^(r+1)_i=-εlog(∑_k=1^M exp(ψ^(r)_k- x_i - y_k^2ε) ) + εlog M, ψ^(r+1)_k= -εlog(∑_i=1^N exp(ϕ^(r)_i- x_i - y_k^2ε) ) + εlog N,which converge linearly to the fixed points ϕ̂ and ψ̂, see, e.g., <cit.>. Then, noting that by construction of ϕ̂ and ψ̂ we have - ε/M N∑_i=1^N∑_k=1^Mexp(ϕ̂_i+ ψ̂_k - x_i - y_k^2/ε) + ε = 0,the regularized Wasserstein distance becomesW^2_2,ε(μ_x, ν)=- ε/N( ∑_i=1^Nlog(∑_k=1^M exp(ψ̂_k- x_i - y_k^2ε) ) - log M ) + 1/M∑_k=1^Mψ̂_k,which is differentiable with respect to the support points and the gradient is given by ∇_x_iW^2_2,ε(μ_x, ν)= 2/N(∑_k=1^M exp(ψ̂_k- x_i - y_k^2ε) )^-1∑_k=1^M exp(ψ̂_k- x_i - y_k^2ε) (x_i-y_k).If the Wasserstein gradient from (<ref>) exists, it is recovered for ε→ 0. 
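These iterations are conveniently implemented in the log domain; the sketch below (with illustrative names and hyperparameters) returns W^2_2,ε for two uniform empirical patch measures and is differentiable with respect to the support points x_i, so its gradient can also be obtained by automatic differentiation as discussed next.

```python
import math
import torch

def sinkhorn_w2eps(x, y, eps=0.01, iters=200):
    """Entropic W^2_{2,eps} between mu_x = 1/N sum delta_{x_i} and
    nu = 1/M sum delta_{y_k}, via alternating log-domain Sinkhorn updates.
    x: (N, d) support of mu_x (may require grad), y: (M, d) support of nu."""
    N, M = x.shape[0], y.shape[0]
    C = torch.cdist(x, y, p=2) ** 2            # squared Euclidean cost matrix
    phi = x.new_zeros(N)
    psi = x.new_zeros(M)
    for _ in range(iters):
        phi = -eps * torch.logsumexp((psi[None, :] - C) / eps, dim=1) + eps * math.log(M)
        psi = -eps * torch.logsumexp((phi[:, None] - C) / eps, dim=0) + eps * math.log(N)
    # at the fixed point the exponential correction term vanishes, so the value
    # reduces to the average of the two dual potentials
    value = (-eps / N) * (torch.logsumexp((psi[None, :] - C) / eps, dim=1)
                          - math.log(M)).sum() + psi.mean()
    return value
```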
Computation of the gradient can, e.g., be achieved by means of algorithmic differentiation through the Sinkhorn iterations or on the basis of the optimal dual potentials through the Sinkhorn algorithm. Finally, we can use the Sinkhorn patch prior (WPP_ε) first used in <cit.> as a regularizer in our inverse problemℛ(x)= WPP_ε(x) W^2_2,ε (μ_x, ν). By (<ref>) the Sinkhorn regularizer defines a prior distribution p_X (x) = C_βexp(-βWPP_ε(x)). The integrability of the function p_X follows from the integrability of the WPP <cit.> and the relation WPP_ε(x) ≥WPP(x). This can be seen immediately sinceKL(π , μ_x ⊗ν) ≥ 0. The regularized Wasserstein-2 distance is no longer a distance.It does not fulfill the triangular inequality and is moreover biased, i.e., W_2,ε(μ,ν)does not take its smallest value if and only if μ = ν. As a remedy, the debiased regularized Wasserstein distance or Sinkhorn divergenceS^2_2,ε(μ, ν) =W^2_2,ε, (μ, ν)-1/2W^2_2,ε(μ, μ)-1/2W^2_ε(ν, ν)can be used, which is now indeed a statistical distance. Computation with the Sinkhorn divergence is similar to above so it can be used as a regularizer as well.For more information see <cit.>. §.§ Semi-Unbalanced Sinkhorn Patch Prior The optimal transport framework for regularizing the patch distribution was extendedby Mignon et al. <cit.>to the semi-unbalanced case, where the marginal of the coupling only approximates the target distribution forW_2, ε, ρ^2(μ_x, ν) = inf_π∈𝒫(^d ×^d)(proj_1)_#π = μ_x∫_^d ×^dx-y^2dπ(x, y)+ εKL (π , μ_x ⊗ν) + ρKL((proj_2)_#π, ν)=inf_π∈^N, M π1_M = 1/N1_N⟨ C,π⟩ + ε∑_i,k=1^N,Mπ_i, klog(MN π_i, k) + ρ∑_k=1^M(1_N ^π)_klog(M(1_N ^π)_k ).In this setting, probability mass can be added to ν or removed from ν. This behavior is controlled by the parameter ρ and leads to a decreased sensitivity with regard to isolated areas in the second distribution. An example of W_2,ε,ρ^2 for different choices of ρ and the same measures as in Figure <ref> is given in Figure <ref>. The dual formulation becomes W^2_2,ε, ρ(μ_x, ν)=max_ϕ∈^M ψ∈^M1/N∑_i=1^Nϕ_i + 1/M∑_k=1^Mρ(exp(ψ_k/ρ)-1) -ε/M N∑_i=1^N∑_k=1^Mexp(ϕ_i+ ψ_k - x_i - y_k^2/ε) + ε,see, e.g., <cit.>.This maximization problem can be solved by the following adapted Sinkhorn iterationsϕ^(r+1)_i =-εlog(∑_k=1^M exp(ψ^(r)_k- x_i - y_k^2ε) ) + εlog M,ψ^(r+1)_k= - ερ/ρ + εlog(∑_i=1^N exp(ϕ^(r)_i- x_i - y_k^2ε) ) + εlog N.Note that the fixed point ϕ̂ equals the fixed point from Section <ref> and consequently ϕ̂ and ψ̂ fulfill (<ref>). The semi-unbalanced regularized Wasserstein distance becomesW^2_2,ε, ρ(μ_x, ν)=- ε/N( ∑_i=1^Nlog(∑_k=1^M exp(ψ̂_k- x_i - y_k^2ε) ) - log M ) + 1/M∑_k=1^Mρ(exp(ψ̂_k/ρ)-1).This expression equals the expression (<ref>) up to the second term, which does not depend on the support points. As a result, the gradient takes the same form as in (<ref>), but for a ψ̂ depending on ρ. For ρ→∞ we recover the balanced formulation and hence the gradient from (<ref>). Finally, we can use a semi-unbalanced Sinkhorn patch prior (WPP_ε,ρ) defined asℛ(x) = WPP_ε, ρ(x) W^2_2,ε, ρ(μ_x, ν).This was proposed as an extension of the WPP in <cit.>.By the relation (<ref>) the WPP_ε,ρ defines a prior distribution p_X (x) = C_βexp(-βWPP_ε,ρ(x)). This can be seen as follows: Using the auxiliary variable ν̃= (proj_2)_#π we rewrite W^2_2,ε, ρ(μ_x, ν) byinf_π∈Π(μ_x,ν̃) supp (ν̃)⊆supp(ν) W^2_2(μ_x, ν̃)+ εKL (π , μ_x ⊗ν) + ρKL(ν̃, ν)≥inf_π∈Π(μ_x,ν̃) supp (ν̃)⊆supp(ν) W^2_2(μ_x, ν̃).The constraint supp (ν̃)⊆supp(ν) is due to the term KL(ν̃, ν) which otherwise would be infinite. 
Exploiting the discrete structure ν = 1/M∑_k=1^M δ_y_k, such a measure ν̃ needs to be of the form ν̃= ∑_k=1^M a_k δ_y_k, for a ∈_+^M with ∑_k=1^M a_k = 1. The dual formulation (<ref>) yieldsinf_π∈Π(μ_x,ν̃) supp (ν̃)⊆supp(ν)W^2_2(μ_x, ν̃) = inf_a ∈_+^M ∑_k=1^M a_k = 1 (max_ψ(a) ∈^M1/N∑_i=1^N ψ(a)^c(x_i) + ∑_k=1^M a_k ψ(a)_k ) ≥1/N∑_i=1^N ψ_0^c(x_i),where the last inequality follows from inserting ψ(a)= ψ_0 = 0 for all a ∈^M. Now, the statement follows from the proof of <cit.>. By construction, the semi-unbalanced regularized Wasserstein distance is not symmetric anymore. Moreover, it is again biased. Similarly, as for the balanced case, the semi-unbalanced regularized Wasserstein distance can be transformed into a (non-symmetric) semi-unbalanced Sinkhorn divergenceS^2_2,ε, ρ(μ, ν) = W^2_2,ε, ρ(μ, ν) - 1/2W^2_2,ε(μ, μ) - 1/2W^2_2,ε, ρ, ρ(ν, ν)with the fully unbalanced regularized Wasserstein distanceW^2_2,ε, ρ, ρ(μ, ν) = inf_π∈ℳ^+(^d ×^d)∫_^d ×^dx-y^2dπ(x, y)+ εKL (π , μ_x ⊗ν) + ρKL((proj_2)_#π, ν) +ρKL((proj_1)_#π, μ). Here, ℳ^+(^d ×^d) denotes the set of positive measures on ^d ×^d. For more information, see <cit.>.§ UNCERTAINTY QUANTIFICATION VIA POSTERIOR SAMPLINGIn contrast to the MAP approaches, which just give point estimates for the most likely solution of the inverse problem, see Paragraph 1 of Section <ref>, we want to approximate the whole posterior measure P_X|Y=y now. More precisely, we intend to sample from the approximate posterior to get multiple possible reconstructions of the inverse problem and to quantify the uncertainty in our reconstruction.By Bayes' law and relation (<ref>), we know thatp_X|Y=y(x) ∝ p_Y|X=x(y) p_X(x),p_X (x) = C_βexp(-βℛ(x)).While the likelihood p_Y|X=x is determined by the noise model and the forward operator, the idea is now to choosea prior from the previous sections, i.e.,ℛ∈{EPLL, patchNR, ALR, WPP, WPP_ε,WPP_ε, ρ},By the Remarks <ref>, <ref>,<ref>, <ref> and <ref> we have ensured that the corresponding functions p_X are indeed integrable, except for ALR, where this is probably not the case. Nevertheless, we will use ALR in our computations even without the theoretical foundation. Techniques to enforce the integrability of a given regularizer by utilizing a projection onto a compact set, e.g., [0, 1]^d, exist in the literature <cit.>.Even if the density of a distribution is known you can in general not sample from this distribution, except for the uniform and the Gaussian distribution. Established methods for posterior sampling are Markov chain Monte Carlo (MCMC) methods such as Gibbs sampling <cit.>. We want to focus on Langevin Monte Carlo methods <cit.>, which have shown good performance for image applications and come with theoretical guarantees <cit.>. In particular, in <cit.> the EPLL was used in combination with Gibbs sampling for posterior reconstruction of natural images, and in <cit.> the patchNR was used in combination with Langevin sampling for posterior reconstruction in limited-angle CT. Consider the overdamped Langevin stochastic differential equation (SDE)d X_t = ∇log p_X|Y=y(X_t) dt + √(2)d B_t,where B_t is the d-dimensional Brownian motion. If p_X|Y=y is proper, smooth and x ↦∇log p_X|Y=y(x) is Lipschitz continuous, then Roberts and Tweedie <cit.> have shown that, for any initial starting point,the SDE (<ref>) has a unique strong solution andp_X|Y=y is the unique stationary density. 
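Simulating this dynamic numerically amounts to the Euler–Maruyama scheme made explicit in the next paragraph; the sketch below assumes scalar-valued callables `log_likelihood(x, y)` and `regularizer(x)` (any of the patch priors above can be plugged in) and uses purely illustrative step sizes.

```python
import torch

def langevin_sample(y, log_likelihood, regularizer, x0, beta=1.0,
                    step=1e-5, n_steps=10_000):
    """Euler-Maruyama discretization of the overdamped Langevin SDE targeting
    p(x|y) proportional to exp(log p(y|x) - beta * R(x))."""
    x = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = -log_likelihood(x, y) + beta * regularizer(x)   # -log posterior (up to a constant)
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x = x - step * grad + (2.0 * step) ** 0.5 * torch.randn_like(x)
        x.requires_grad_(True)
    return x.detach()
```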
For a discrete time approximation, the Euler-Maruyama discretization with step size δ leads to the unadjusted Langevin algorithm (ULA)X_k+1 = X_k + δ∇log p_X|Y=y(X_k) + √(2 δ) Z_k+1=X_k + δ∇log p_Y|X=X_k(y) + δ∇log p_X(X_k) + √(2 δ) Z_k+1,where Z_k ∼𝒩(0,I), k ∈ℕ. The step size δ provides control between accuracy and convergence speed. The error made due to the discretization step in (<ref>) can be asymptotically removed by a Metropolis-Hastings correction step <cit.>. The corresponding Metropolis-adjusted Langevin algorithm (MALA) comes with additional computational cost and will not be considered here. Now using an approximation(<ref>) for the prior, weget up to an additive constantX_k+1 =X_k + δ∇log p_Y|X=X_k(y) - δβ∇logℛ(X_k) + √(2 δ) Z_k+1.In Section <ref>, we will use this iteration for posterior sampling in image inpainting. Other methods for sampling from the posterior distributionAlternatively to MCMC methods, posterior sampling can be done by conditional neural networks.While conditional variational auto-encoders (VAEs) <cit.> approximate the posterior distribution by learning conditional stochastic encoder and decoder networks, conditional generative adversarial networks (GANs) <cit.> learn a conditional generator via adversarial training. Conditional diffusion models <cit.> map the posterior distribution to an approximate Gaussian distribution and reverse the noising process for sampling from the posterior distribution. For the reverse noising process, the conditional model needs to approximate the score ∇_xlog p_X|Y=y.Conditional normalizing flows <cit.> aim to approximate the posterior distribution using diffeomorphisms. In particular, in <cit.> the WPP was used as the prior distribution for training the normalizing flow with the backward KL. Recently, gradient flows of the maximum mean discrepancy and the sliced Wasserstein distance were successfully used for posterior sampling <cit.>.§ EXPERIMENTS In this section, we first use the MAP approachx_MAP(y)∈_x ∈^d{𝒟(F(x),y) +βℛ(x) }, β > 0.with our different regularizers on 6 × 6 image patchesℛ∈{EPLL, patchNR, ALR, WPP, WPP_ε,WPP_ε, ρ}for solving various inverse problems. Since the data term 𝒟 depends on the forward operator and the noise model, we have to describe both for each application. We consider the following problems: * computed tomography (CT) in a low-dose and a limited-angle setting, where we learn the regularizer from just n=6 “clean” images shown in Figure <ref>. The transformed images are corrupted by Poisson noise.* super-resolution, where the regularizer is first learned from just n=1 “clean” image and second from the corrupted image. The later setting is known as zero-shot super-resolution. Here we have a Gaussian noise model.* image inpainting from the corrupted image in a noise-free setting.Second, we provide example from sampling from the posterior in image inpainting and for uncertainty quantification in computed tomography.The code for all experiments is implemented in PyTorch and is available online[<https://github.com/MoePien/PatchbasedRegularizer>].You can also find all hy­per­para­meters in the GitHub repository. In the experiments, we minimize the variational formulation (<ref>) using the Adam optimizer <cit.>. The presented experimental set-up for super-resolution and computed tomography is closely related to the set-up of Altekrüger et al. 
<cit.>.However, before comparing these approaches, weshould give some comments on error measures in image processing.§.§ Error Measures There does not exist an ultimate measure for the visual quality of images, since this depends heavily on the human visual perception. Nevertheless, there are some frequently used quality measures between the original image x ∈^d_1,d_2 and the reconstructed, deteriorated one x̂. The peak signal-to-noise ratio (PSNR) is defined byPSNR(x̂) = 10 ·log_10(d_1d_2max^2(x)/x - x̂^2),where max(x) denotes the highest possible pixel value of an image, e.g., 255 for 8 bit representations. Unfortunately, small changes in saturation and brightness of the image have a large impact on the PSNR despite a small impact on the visual quality. An established alternative meant to alleviate this issue is the structural similarity index (SSIM) <cit.>. It is based on a comparison of pixel means and variances of various local windows of the images. Still, this is a rather simple model for human vision and small pixel shifts heavily influence its value, see <cit.>.Recently, the development of improved similarity metrics has revolved around the importance of low-level features for human visual impressions.Hence, alternative metrics focus on the comparison of extracted image features. Prominent examples include the Feature Similarity Index (FSIM) <cit.> based on hand-crafted features and the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> based on the features learned by a convolutional neural network. All these different metrics behave very differently for image distortions as visualized in Figure <ref>.The image in Figure <ref> is obtained by corrupting the original image in Figure <ref> by 5% salt-and-pepper noise. The other images were generated with a gradient descent algorithm starting in Figure <ref> for customized loss functions that penalize one quality metric and deviation from the initial values for the other quality metric. As a result, the evaluation of reconstructions depends on the chosen metrics, where a single metric may not be suitable for all problems since the requirements differ, e.g., for natural and medical images. §.§ Computed Tomography In CT, we want to reconstruct a CT scan from a given measurement, which is called a sinogram. We used the LoDoPaB dataset <cit.>[available at <https://zenodo.org/record/3384092#.Ylglz3VBwgM>]for low-dose CT imaging is usedwith images of size 362 × 362.The ground truth images are based on scans of the Lung Image Database Consortium and Image Database Resource Initiative <cit.> and the measurements are simulated.The LoDoPab dataset uses a two-dimensional parallel beam geometry with 513 equidistant detector bins, which results in a linear forward operator F for the discretized Radon transformation.A CT scan and its corresponding sinogram is visualized in Figure <ref>. The noise model follows a Poisson distribution. Recall thatPois(λ) has probability p(k|λ) = λ^k exp(-λ)/k! with mean (= variance) λ. 
More specifically, we assume that the pixels y_i are corrupted independently and we have for each pixelY = - 1/μlog(Ỹ/N_0), Ỹ∼Pois( N_0 exp(-F(x) μ) ),where N_0 = 4096 is the mean photon count per detector bin without attenuation andμ = 81.35858 is a normalization constant.Then we obtain pixelwiseexp(-Y μ) N_0 = Ỹ∼Pois( N_0 exp( (-F(x) μ) ) ),and consequently for the whole data term𝒟(F(x),y)=- log∏_i=1^d̃ p ( exp(-y_i μ) N_0| exp( -F(x)_i μ) N_0) =∑_i=1^d̃exp(-F(x)_i μ) N_0+ exp(-y_i μ) N_0 ( F(x)_i μ - log(N_0) ).For the initialization, we use the Filtered Backprojection (FBP) described by the adjoint Radon transform<cit.>. We used the ODL implementation <cit.> with the filter type “Hann” and a frequency scaling of 0.641.Low-Dose CTFirst, we consider a low-dose CT example with 1000 angles between 0 and π. In Figure <ref>, we compare the different regularizers. The ALR tends to oversmooth the reconstructions and the WPP,the WPP_ε and the WPP_ε, ρ are not able to reconstruct sharp edges. Both, the EPLL and the patchNR perform well, while the patchNR gives slightly more accurate and realistic reconstructions. This can be also seen quantitatively in Table <ref>, where we evaluated the methods on the first 100 test images of the dataset. Here the patchNR gives the best results with respect to PSNR and SSIM. The weak performance of the WPP, the WPP_ε and the WPP_ε, ρ can be explained by the diversity of the CT dataset, leading to very different patch distributions. Therefore, defining the reference patch distribution as a mixture of patch distributions of the given 6 reference images is not sufficient for a good reconstruction. Note that for CT data the LPIPS is not meaningful, since the feature-extracting network is trained on natural images, which differ substantially from the CT scans. Therefore, we cannot expect informative results from LPIPS.Limited-Angle CTNext, we consider a limited-angle CT setting, i.e., instead of using 1000 equidistant angles between 0 and π, we cut off the first and last 100 angles such that we consider 144^∘ instead of 180^∘. This leads to a much worse FBP due to the missing part in the measurement. In Figure <ref>, we compare the different regularizers. Again, the ALR smooths out the reconstruction, and the WPP,the WPP_ε and the WPP_ε, ρ are not able to reconstruct the missing parts well. In contrast, the EPLL and the patchNR give good reconstructions, although the patchNR gives sharper edges as can be seen in the right part of the zoomed-in part. In Table <ref>, a quantitative comparison is given. Again, the patchNR gives the best results with respect to PSNR and SSIM. §.§ Super-Resolution For image super-resolution, we want to recover a high-resolution image from a given low-resolution image. The forward operator F consists of a convolution with a 16 × 16 Gaussian blur kernel of a certain standard deviation specified below and a subsampling process.For the noise model, we consider additive Gaussian noise with standard deviation Ξ∼𝒩(0,σ^2 I) with standard deviation σ = 0.01. Consequently, we want to minimize the variational problem (<ref>) withα = βσ^2.We consider two different types of super-resolution: First, we deal with the super-resolution of material data. Here we assume that we are given one high-resolution reference image of the material which we can use as prior knowledge. Second, we consider zero-shot super-resolution of natural images, where no reference data is known. 
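As stated above, the reconstructions are obtained by minimizing the variational objective with Adam; the following sketch of the super-resolution forward operator (Gaussian blur followed by subsampling) and of the corresponding MAP loop uses illustrative kernel sizes, learning rates, and function names, and any of the patch regularizers above can be supplied as `regularizer`.

```python
import torch
import torch.nn.functional as F

def sr_forward(x, kernel, factor):
    """Blur with a (1, 1, k, k) kernel, crop back to the input size, then subsample."""
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(x, kernel, padding=pad)[..., : x.shape[-2], : x.shape[-1]]
    return blurred[..., ::factor, ::factor]

def map_super_resolution(y, kernel, regularizer, alpha=0.1, factor=4, steps=500):
    """Minimize 0.5 * ||F(x) - y||^2 + alpha * R(x) with Adam, starting from
    the bicubic interpolation of the low-resolution observation y."""
    x = F.interpolate(y, scale_factor=factor, mode="bicubic",
                      align_corners=False).clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=1e-2)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.5 * (sr_forward(x, kernel, factor) - y).pow(2).sum() + alpha * regularizer(x)
        loss.backward()
        optimizer.step()
    return x.detach()
```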
Material DataThe dataset consists of 2D slices of size 600 × 600 from a 3D material image of size 2560 × 2560 × 2120. This has been acquired by synchrotron micro-computed tomography at the SLS beamline TOMCAT. More specifically, we consider a composite (“SiC Diamond”) obtained by microwave sintering of silicon and diamonds, see <cit.>. We assume that we are given one high-resolution reference image of size 600 × 600.The blur kernel of the forward operator F has standard deviation 2 and we consider a subsampling factor of 4 (in each direction). In Figure <ref>, we compare the different regularizers, where we choose the bicubic interpolation as initialization. In the reconstruction of the ALR and the WPP, we can observe a significant blur, in particular in the regions between the edges.In contrast, the EPLL and the patchNR reconstructions are sharper and more realistic.The WPP, the WPP_ε and the WPP_ε, ρ reconstructions have quite similar lower quality.A quantitative comparison is given in Table <ref>. Zero-Shot Super-Resolution We consider the grayscale BSD68 dataset <cit.>. The blur kernel of the forward operator F has standard deviation 1 and we consider a subsampling factor of 2. We assume that no reference data is given so that we need to extract our prior information from the given low-resolution observation.Here we exploit the concepts of zero-shot super-resolution by internal learning. The main observation is that the patch distribution of natural images is self-similar across the scales <cit.>. Thus the patch distributions of the same image are similar at different resolutions. An illustrative example with two images from the BSD68 dataset is given in Figure <ref>.The reconstruction of the unknown high-resolution image using the different regularizers is visualized in Figure <ref>. Here ALR andEPLL smooth out parts of the reconstruction, in particular when these parts are blurry in the low-resolution part, see, e.g., the stripes of the zebra or the fur pattern of the giraffe. In contrast,the WPP,WPP_ε, ρ and the patchNR are able to reconstruct well and without blurred parts. The WPP_ε reconstructions admit structured noise, which can be seen in the upper right corner of the zoomed-in part of the giraffe. A quantitative comparison is given in Table <ref>. Again, the patchNR performs best in terms of quality measures. §.§ Inpainting The task of image inpainting is to reconstruct missing data in the observation. For a given inpainting mask m ∈{0,1 }^n, the forward operator F is given by F(x) = x ⊙ m. In this subsection, we focus on region inpainting, where large regions of data are missing in the observation.We assume that there is no additional noisein the observation, leading to the negative log-likelihood- log (p_Y|X=x(y)) =0, if  F(x) = y, + ∞, else.Consequently, we are searching for x ∈ℝ^d{ℛ(x):F(x) = y }.We consider the Set5 dataset <cit.> and assume that no reference data is given, such that we extract the prior information froma predefined area around the missing part of the observation. In Figure <ref>, we compare the results of the different regularizers for the inpainting task. The missing part is the black rectangle in the observation and the reference patches are extracted around the missing part, which is visualized with the larger white box. Weobserve that ALR fails completely. In contrast,EPLL and patchNR are able to connect the lower missing black line.Here, the patchNR gives visually better results, in particular, the black lines are much sharper. 
Further, WPP,WPP_ε andWPP_ε, ρ fill out the missing part in a different way, as they aim to match the patch distribution between the reference part and the missing part. Obviously, the filled area is influenced by the patch distribution of the lower right corner in the reference part.§.§ Posterior SamplingIn this section, we apply ULA (<ref>).First, we use it for posterior sampling in image inpainting, where we can expect a high variety in the reconstructions due to the highly ill-posed problem.Then we quantify the uncertainty in limited-angle CT reconstructions.Posterior Sampling for Image InpaintingWe apply ULA (<ref>) with the different regularizers for the same task of image inpainting as in Section <ref>. Again, the data-fidelity term vanishes, so that (<ref>) simplifies toX_k+1 = X_k - δλ∇ℛ(X_k) + √(2 δ) Z_k+1.Since the inverse problem is highly ill-posed due to the missing part, we can expect a high variety in the reconstructions. In Figure <ref>, we compare the different methods. The ground truth and the observation are the same as in Figure <ref>. We illustrated three different reconstruction samples. Again, we observe differences between the regularizers of Sections <ref> and <ref>. First, we note that the ALR is, similar to MAP inpainting, not able to give meaningful reconstructions. On the other hand, the EPLL and the patchNR can reconstruct well, although the EPLL reconstructions look more realistic and are more diverse. The regularizers from Section <ref> give the most diverse reconstructions. Here the reconstruction quality is similar for the WPP,the WPP_ε and the WPP_ε, ρ.Uncertainty Quantification for Limited-Angle CTFinally, we consider the limited-angle CT reconstruction as in Section <ref>. The negative log-likelihood is given by (<ref>) so that (<ref>) reads asX_k+1 = X_k+ δ∇∑_i=1^de^-F(X_k)_i μ N_0 + e^-y_i μ N_0 (F(X_k)_i μ - log(N_0) )- δα∇ℛ(X_k) + √(2 δ) Z_k+1.In Figure <ref>, we compare the reconstructions of the different regularizers. We illustrate the mean image (left) and the pixel-wise standard deviation (right) of the corresponding regularizers for 10 reconstructions. The standard deviation can be seen as the uncertainty in the reconstruction and the brighter a pixel is, the less secure is the model in its reconstruction.As in the MAP reconstruction, EPLL and patchNR are able to reconstruct best. Moreover, the standard deviation of EPLL and patchNR are most meaningful and the highest uncertainty is in regions, where the FBP has missing parts. In contrast, the ALR smooths out the reconstruction. While the reconstructions of WPP and WPP_ε,ρ appear almost similar at first glance, the WPP admits more uncertainty in its reconstructions. Nevertheless, both regularizers are not able to reconstruct the corrupted parts in the FBP. The WPP_ε has a lot of artifacts in its reconstructions. Moreover, most of the uncertainty is observable in the artificially reconstructed artifacts.§ ACKNOWLEDGEMENTSM.P. and G.S. acknowledge funding from the German Research Foundation (DFG) within the project BIOQIC (GRK2260).F.A., A.W. and G.S. acknowledge support from the DFG through Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ under project AA5-6. P.H. acknowledges support from the DFG within the SPP 2298 "Theoretical Foundations of Deep Learning" (STE 571/17-1).J.H. 
acknowledges support from the DFG within the project STE 571/16-1. The material data presented in Section <ref> was obtained as part of the EU Horizon 2020 Marie Sklodowska-Curie Actions Innovative Training Network MUMMERING (MUltiscale, Multimodal, and Multidimensional imaging for EngineeRING, Grant Number 765604) at the TOMCAT beamline of the Swiss Light Source (SLS), performed by A. Saadaldin, D. Bernard, and F. Marone Welford. We express our gratitude to the Paul Scherrer Institut, Villigen, Switzerland, for providing synchrotron radiation beamtime at the TOMCAT beamline X02DA of the SLS.
Continual learning (CL) has shown promising results and comparable performance to learning at once in a fully supervised manner. However, CL strategies typically require a large number of labeled samples, making their real-life deployment challenging. In this work, we focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories. We provide a comprehensive analysis of SSCL and demonstrate that unreliable distributions of unlabeled data lead to unstable training and refinement across the progressing stages. This problem severely impacts the performance of SSCL. To address these limitations, we propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning, which leverages both semantic and structural information to achieve more stable knowledge distillation on unlabeled data and to exhibit robustness against distribution bias. Firstly, we formalize a general model of structural distillation and design a dynamic graph construction for the continual learning progress. Next, we define a structure distillation vector and design a dynamic sub-graph distillation algorithm, which enables end-to-end training and scales as tasks accumulate. The entire proposed method is adaptable to various CL methods and supervision settings. Finally, experiments conducted on three datasets, CIFAR10, CIFAR100, and ImageNet-100, with varying supervision ratios demonstrate the effectiveness of our proposed approach in mitigating the catastrophic forgetting problem in semi-supervised continual learning scenarios. Our code is available: https://github.com/fanyan0411/DSGD.

§ INTRODUCTION

Continual learning (CL) has been commonly investigated to model the realistic learning progress in evolving environments by learning new knowledge and reinforcing existing cognition. Numerous efforts have been made to alleviate the issue of catastrophic forgetting of learned models when new tasks are involved <cit.>. However, these methods heavily rely on labeled data, which poses limitations in insufficient supervision scenarios, such as face recognition and video recognition <cit.>. For the purpose of reducing reliance on annotations, semi-supervised continual learning (SSCL) has been proposed and developed to exploit massive unlabeled data to enhance performance. An illustration of SSCL is provided in Figure <ref>(a). Nevertheless, as demonstrated in Figure <ref>(b), when all labeled samples are retrained in the subsequent tasks without any replay of unlabeled data, the decline in performance correlates with a decrease in accuracy on the unlabeled training data, indicating that the forgetting of unlabeled data contributes to the catastrophic forgetting of SSCL. To deal with the forgetting problem in SSCL, some methods apply semi-supervised learning strategies, such as distilling pseudo-labels on unlabeled samples <cit.> or applying a consistency loss to enhance the model discriminability <cit.>. Additionally, online sample replay has been realized by training on data sampled from a learned conditional generator in an online manner <cit.>. However, most of these strategies pay more attention to leveraging the learned representations of unlabeled data, while unreliable distributions have a detrimental impact on training stability and refinement <cit.>.
We provide a visualization of the negative effect of distribution bias and pseudo label errors on performance and training stability in Figure <ref>. This encourages us to investigate into a more robust strategy for alleviating forgetting of SSCL. Considering the aforementioned observations, we propose a Dynamic Sub-graph Distillation (DSGD) designed explicitly for SSCL. We aim to enhance the robustness of the proposed SSCL method by exploring the association and structural knowledge in unlabeled data. To achieve this, we first describe the data's underlying structure through a graph representation. We then formulate a sub-graph preserving principle to model stable learning, where the graph local structure captures the underlying high-order relationships among samples. Subsequently, to ensure the scalability of our method as the learning progresses and the knowledge becomes more complex, we introduce the concept of distillation vectors based on personalized PageRank values <cit.> of the dynamic graph. We finally design an efficient distillation loss that scales well with the growing complexity of the tasks. By relying less on absolute representations, our DSGD strategy can mitigate the influence of data distribution bias and pseudo-label errors, enabling more robust and effective semi-supervised continual learning. We follow the ORDisCo <cit.> to split commonly used CL benchmark datasets. The experiments show significant boosts in last and average accuracy across different label ratios throughout these benchmarks, with up to 60% memory occupation savings over existing state-of-the-art approaches. In summary, our contributions are threefold: (1) We provide a systematical study of the SSCL and show that unreliable distributions of unlabeled data lead to unstable training and harmful refinement in the continual learning progress. (2) We propose a novel method called Dynamic Sub-graph Distillation (DSGD) that leverages higher-order structures of association information to improve the robustness against hurtful distribution bias and mitigate the catastrophic forgetting of SSCL. (3) Through comprehensive experiments on three commonly used benchmarks, we show that our method improves the catastrophic forgetting problem of SSCL, highlighting its robustness in various supervision scenarios and effectiveness on practical relevance. § RELATED WORK §.§ Continual LearningContinual learning (CL) methods can be organized into three aspects: reply-based methods, regularization-based methods, and parameter isolation methods.Reply-based methods select representative samples for retraining when learning new concepts. For instance, iCaRL <cit.> uses the approximated class means. GEM <cit.> constrains new task updates to not interfere with previous tasks. ERC <cit.> proposes the reservoir sampling scheme. Generative replay methods <cit.> model the distribution and generate instances for rehearsal without revisiting prior samples.Regularization-based methods try to employ extra regularization loss to consolidate prior knowledge during the learning process on novel data,such as penalizing changes to essential parameters <cit.> or distilling output of the previous model and the new model <cit.>.Parameter isolation methods dedicate different model parameters to each task. 
DER <cit.> freezes previously learned extractor and expands a new backbone when facing new tasks.To alleviate the catastrophic expansion, some studies design a feature-boosting strategy <cit.> or decouple the backbone at the middle layers instead of the entire network <cit.>.Knowledge distillation and data replay dominated the research before the presentation of DER, while dynamic networks became popular after DER <cit.>. §.§ Semi-supervised Learning Semi-supervised learning (SSL) presents a general framework for harnessing the potential of unlabeled data.Pseudo-labeling methods assign the predictions of unlabeled data as pseudo labels to enlarge the training set <cit.>. Consistency regularization methods constrain the predictions of different augmented distributions to be close through teacher-student model interactions <cit.> or adversarial perturbations on inputs <cit.> to expand the generality boundary.To expand the margin with unlabeled data, FixMatch <cit.> applies AutoAugment <cit.> to build a stronger augmentation version compared to the weaker one. The impressive results of FixMatch have pushed forward more studies <cit.> for adapting to more complex situations.Nevertheless, existing efforts of SSL do not fully account for the potential variations in the data distribution or category over time. §.§ Semi-supervised Continual LearningSemi-supervised Continual Learning (SSCL) considers a more realistic continual task stream that only a limited number of samples are annotated.The success of SSL allows a general effort to exploit unlabeled data to improve performance throughout the entire stage.CNNL <cit.> fine-tunes its incremental learner by generating the pseudo-labels of unlabeled to enable self-training. DistillMatch <cit.> employs knowledge distillation with prediction consistency on unlabeled data and optimizes an out-of-distribution detector to identify task-specific representations. Pseudo Gradient Learners <cit.> proposes a gradient learner from labeled data to predict gradients on unlabeled data to avoid the risk of pseudo labels. Apart from SSL-based methods, generative replayed methods dedicate to dealing with the forgetting of SSCL. For instance, ORDisCo <cit.> continually learns a conditional GAN with a classifier from partially labeled data and replays data online.Meta-Consolidation <cit.> extends ORDisCO to meta-learning setting scheme.Despite the learned representations of unlabeled data having a favor for expanding the classification boundary, the unreliable distribution will blur the boundary and hurt the refinement in the following tasks.§ METHODSIn this section, we begin by formulating the problem of Semi-supervised Continual Learning (SSCL). Subsequently, we systematically analyze the primary challenge associated with SSCL. To address this challenge, we propose Dynamic Sub-graph Distillation (DSGD) to mitigate the issue of catastrophic forgetting of unlabeled data.§.§ Problem Formulation and BaselineThe research of SSCL amounts to learning an ordered set of T tasks that exhibit different data distribution 𝒟^t.The data of each task sampled i.i.d. from 𝒟^t with few annotations D^t={(X^t_l,Y^t_l),X^t_u}, where X^t_l and X^t_u represents the labeled and unlabeled data, respectively. 
In this paper, we consider class continual learning so that for any two tasks Y^s∩ Y^t=∅.Each task is a specific semi-supervised learning process, which attempts to find a model f:X^t→ Y^t to map both labeled samples and unlabeled samples to the target space.After learning each task, the learned model should perform well on the previous tasks even without experience id by involving distillation loss or regularization constraints.In summary, the whole object can be formulated as follows:min_θ∑_t=1^T 𝔼_(x,y)∼𝒟^t[ℒ_CE(p,y) +λ_1 ℒ_SSL(p^𝒜,p^ℬ) + λ_2 ℒ_CL(z,z)],where ℒ_CE is the cross-entropy loss, ℒ_SSL is the semi-supervised loss andℒ_CL represents the continual learning loss. λ_1, λ_2 are corresponding weights. We utilize the representative method Fixmatch as our SSL baseline, iCaRL and DER as our CL baselines. The SSL loss encourages prediction consistency between the strong augmentation p^ℬ and the weak augmentation p^𝒜 of the same image. Additionally, iCaRL follows knowledge distillation by compelling the new network to generate outputs z aligning with those of the old network z. DER preserves the old network by parameter consolidation. The combined baselines in our paper are denoted as iCaRL&Fix and DER&Fix. §.§ A Systematic Study of SSCLAs illustrated in Figure <ref>(b),the catastrophic forgetting of unlabeled data is an essential challenge in SSCL. We then explore the capability of CL methods, iCaRL and DER, when adapting to unlabeled data.In particular, we conduct extensive experiments on the CIFAR100, where only 20 samples are annotated per class. The training set is divided into 10 tasks, with 10 categories assigned to each task. We initially conducted comparative experiments with the distillation strategy presented inFigure <ref>(a), where it can be seen that applying the distillation strategy of iCaRL on unlabeled data leads to explicit accuracy decreases. When correcting the wrong distillation term z into ground truth, we can observe consistent improvements across tasks. The results show that conventional distillation may introduce unreliable information for unlabeled data, causing unstable distillation and detrimental impacts on the refinement of SSCL. Parameter isolation methods heavily rely on the annotated examples in the memory buffer to prevent catastrophic forgetting of the classifier <cit.>. To explore if preserving previous classification results on unlabeled data is valuable, we consider returning old predictions of unlabeled examples in the memory buffer as pseudo labels on subsequent tasks. Therefore, we employ two kinds of p^𝒜, the predictions of current and previous tasks, as pseudo labels for SSL loss to deal with forgetting of SSCL. As illustrated in Figure <ref>(b), when retraining the replayed data with the previous predictions, the accuracy decreases across several tasks. The results provide further evidence of the negative effects of preventing incorrect predictions. To have a deeper insight into why conventional CL fails to address the catastrophic forgetting problem on SSCL, we visualize the distribution learned under fully supervised and semi-supervised settings.As depicted in Figure <ref>(c-d), the presence of distribution bias in the semi-supervised learning task is evident when compared to the fully supervised learning setting. Furthermore, this bias continues into the subsequent tasks. 
Accordingly, directly preserving the unreliable instance-wise representations or classification results is not appropriate for unlabeled data, which undermines the potential of unlabeled data in overcoming catastrophic forgetting in SSCL. This allows us to investigate a more robust distillation strategy for SSCL. §.§ Dynamic Sub-graph DistillationIn order to address the limitations mentioned above, we introduce a novel framework that focuses on the utilization of association knowledge derived from high-order neighbors and local structure information. Our method is built upon the following ideas: (1) The knowledge acquired in the brain is interconnected and structural,ensuring the knowledge evolves with fundamental structure stability. (2) Through using graph-based techniques, we build a projection from the old topology graph to the new one and preserve essential local structures. The framework is shown in Figure <ref>. We formalize the proposed method as follows. Given the training data D_t of the new stage containing previous data M_R, we first construct the new topology graph G(D_t, E^N) and the topology graph of replayed data G(M_R, E^R), where E represents the adjacency matrix. We denote S(x, E) as the encoding of the node sub-structure in the graph. The learning progress is considered sub-structure preserving if there exists a mapping ϕ from the old knowledge based on replayed data G(M_R, E^R) to the new one G(D_t, E^N), such that S(x, E^R) = S(ϕ(x), E^N). We intuitively use the identity mapping ϕ(x) = x, and design a distillation objective to ensure its sub-structure preserving property: min_θ𝔼_(x,y)∼ M_R[ℒ(S(x, E^R),S(ϕ(x), E^N))]. Structural Similarity. Graph Matching is a general method to compare two structures with node connections by exploiting structural information and features. Personalized PageRank (PPR) <cit.> value quantifies the connections between two vertices. The proximity of PPR values for vertices u and v indicates a higher likelihood of the pair [u,v] being a valid match. For any vertex u∈ V, its PPR value π(s,u) w.r.t. source vertex s is the probability that a random walk from s terminates at u. Starting from vertex s, let q^(t)_su be the probability that the random walk reaches vertex u after t steps, then π(s,u)=∑_t=0^∞q^(t)_su. Graph matching involves a predefined graph structure and aims to establish correspondences between nodes using structural similarity. However, with the model training process in SSCL, the graph structure of new tasks is often unknown, and each batch of samples varies. To ensure the consistency of the graph structure in such scenarios, it becomes essential to design an approach capable of adapting to dynamically changing graph structures. Dynamic Topology Graph Construction. To achieve end-to-end training, we need to explore and use the structure information on batch samples in a dynamic manner.We first use the herding strategy to select prior exemplars without requiring class ID.Given the merged batch of current and replayed data, we need to build two associated knowledge graphs. In semi-supervised learning, the widely adopted manifold assumption suggests that representations of similar instances should be closer in feature space. Guided by this assumption, we build the old topology graph basedon the cosine similarity of representations Z = f_θ^t-1(M_R), where f_θ^t-1 is the feature extractor trained on the previous task.The similarity then can be represented in matrix forms: A = ẐẐ^T,where Ẑ is the normalized features. 
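For concreteness, a minimal sketch of this old-graph construction is given below (Python/NumPy is assumed purely for illustration; the function and variable names are ours and not taken from a released implementation).

```python
import numpy as np

def old_topology_adjacency(features):
    """Cosine-similarity adjacency A = Z_hat @ Z_hat^T for the replayed batch.

    `features` stands in for Z = f_{theta^(t-1)}(M_R), the representations of
    the replayed samples produced by the feature extractor of the previous task.
    """
    # Row-wise L2 normalization turns inner products into cosine similarities.
    z_hat = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    return z_hat @ z_hat.T  # symmetric matrix with entries in [-1, 1]

# Usage sketch: 8 replayed samples with 16-dimensional embeddings.
rng = np.random.default_rng(0)
A = old_topology_adjacency(rng.normal(size=(8, 16)))
print(A.shape)  # (8, 8)
```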
To compute the PPR value, wedefine the probability transition matrix P by P_ij = exp(A_ij/γ)/∑^|M_R|_i=1exp(A_ij/γ).The vector P_j represents the probability transition started from vertex v_j satisfying ∑_i=1^|M_R|P_ij=1. The parameter γ controls the smoothness of the transition matrix.We then get the old topology graph G(M_R, P^R), where higher similarity between two node's representation leads to higher transition probability.Similarly, we define the new topology graph G(D_t, P^N) utilizing the updated embedding Z_t=f_θ^t(D_t). The appearance of new tasks leads to the acquisition of new knowledge. As a result, the ability to recognize new associations and comprehend deeper structural information expands and evolves. Our graph structure is specifically designed to adapt to these dynamic processes, enabling the accommodation of evolving learning tasks. Dynamic Sub-graph Distillation.Based on the Equation (<ref>), we propose a dynamic graph structure distillation mechanism tocombat the issue of catastrophic forgetting of unlabeled data effectively. We quantify the stability of old knowledge by evaluating the invariance of the sub-graph structure induced by each sample on the topology graph. Accordingly, we define the PPR value associated with the replayed sample as a representation of the sub-graph structure. Notably, as our adjacency matrix is fully connected, the original PPR value is toward infinity. To address this problem, we propose the K-order PPR value π^K(s,u)=∑_t=0^Kq^(t)_su, which represents a probability that a random walk from s terminated at u within K steps. Such probabilities can be represented in matrix forms. Let e_s∈ℝ^|M|*1 be s^th unit vector, i.e. with 1 at the s^th position and 0 everywhere else. Let P be the transition matrix,then the PPR value of vertex u w.r.t. source s is π^K(s,u)=∑_t=0^K[P^t ·e_s]_u. Given a set of starting vertex, we define the distillation vector that signifies the high-order topology structure of replayed sample x on the old graph as S_R(x) = {π^K_R(s_1,x), π^K_R(s_2,x), …, π^K_R(s_|V|,x)}, where s is the starting vertex. Similarly, the distillation vector of the same example x on new graph is S_N(x) = {π^K_N(s_1,x), π^K_N(s_2,x), …, π^K_N(s_|V|,x)}. Intuitively, the closer the distillation vectors S_R(x) and S_N(x) are, the better the local structure is preserved.During the training stage, only a small size of samples could be available in each batch, so we use all the examples in the memory buffer as the starting vertex.Thus, we propose to define the dynamic distillation loss through the sub-graphs associated with examples.ℒ_SGD =ℒ(S(x, P^R),S(x, P^N)) = ℒ(S_R(x), S_N(x)) = ∑_s_i∈ M_R(π^K_R(s_i,x)-π^K_N(s_i,x))^2. Regarding the semi-supervised loss in Equation (<ref>) of data X_RU, we design a weighted sum of the predictions of current and previous networks to ensemble the supervision samples without annotations:p̂^𝒜 = αp + (1-α) p^𝒜.As the learning task progresses, the predictions of examples selected for rehearsal become more reliable, so we design α to increase in a logistic manner α = 1/(1+exp^(-1-T/2)). § EXPERIMENTSIn this section, we compare our DSGD with other methods on benchmark datasets. Then we conduct ablation studies to assess the significance of each component and provide more insights into the effectiveness of our approach. §.§ Experiment Setups Datasets. 
We validate our method on the widely used benchmark of class continual learning CIFAR10 <cit.>, CIFAR100 <cit.> and ImageNet-100 <cit.>.CIFAR-10 is a dataset containing colored images classified into 10 classes, which consists of 50,000 training samples and 10,000 testing samples of size 32 * 32. CIFAR-100 comprises 50,000 training images with 500 images per class and 10,000 testing images with 100 images per class. ImageNet-100 is composed of 100 classes with 1300 images per class for training and 500 images per class for validation.ImageNet-100 resembles real-world scenes with a higher resolution of 256*256.Implementation Details. For CIFAR10, CIFAR100, and ImageNet-100 datasets, we separately train all 10, 100, and 100 classes gradually with 2, 10 and 10 classes per stage. We use a fixed memory size of 2,000 exemplars, assigning 500 samples to labeled data and the remaining 1,500 samples to unlabeled data under sparse annotations. For the semi-supervised setting, we follow ORDisCo to allocate a small number of labels for each class and adhere to the standard experiment setup for selecting the labeled data <cit.>.To simplify the notation, we denote the benchmark as “dataset-(number of labels/class)". For example, CIFAR10-30 indicates CIFAR10 with 30 labeled samples per class. Please see the Appendix for more details.Baseline and Metrics. For CIFAR-10 and CIFAR-100, we employ a modified ResNet-32 <cit.> as our feature extractor, and adopt the standard ResNet-18 <cit.> as the feature extractor for ImageNet-100. We follow the Methods section and apply iCaRL&Fix and DER&Fix as the baselines and maintain the same architecture.C[1]>p#1 Following previous research on continual learning <cit.>, we compare the top-1 average incremental accuracy: A = 1/T∑_t=1^tA_t,where A_t is the incremental accuracy on the task t and is defined by A_t=1/t∑_i=1^ta_t,i, where a_t,i is the accuracy on the test set of the i^th task after learning the t^th task.§.§ Quantitative Results CIFAR100. We present the performance on CIFAR100 of our method DSGD and the two baselines (iCaRL&Fix and DER&Fix) under four labels ratios: 4%, 5%, 16% and 25%, as shown in Table <ref> and Figure <ref>. We first validate the efficiency of combining CL and SSL directly by comparing iCaRL with iCaRL&Fix and DER with DER&Fix in Table <ref>.Nevertheless, the catastrophic forgetting of unlabeled data remains, disrupting the model's ability to retain learned knowledge in SSCL, as illustrated in Figure <ref>(a-b).Through the following analysis, we demonstrate that DSGD effectively mitigates catastrophic forgetting of unlabeled data. Our method exhibits outstanding performance with fewer annotations and lower memory buffer size. As shown in Table <ref>, improvements on two baselines highlight the effectiveness of our method on robust semi-supervised continual learning. Especially in scenarios with sparse labels, such as only 20 samples annotated, DSGD can remarkably increase the base model iCaRL&Fix by 7.05% and 12.07% in average incremental accuracy and the last incremental accuracy, respectively. Moreover, compared to existing SSCL strategies, our method also shows superior performance. Specifically, our method surpasses NNCSL by 1.1%, 0.68% and 2.1% under label ratios of 4%, 5%, and 25%, respectively, while reducing the memory buffer by 60%.In addition, DSGD based on iCaRL and DER significantly outperforms DistillMatch with 16% annotations by 0.81% and 18.4%, and reduces 20% annotations. 
DistillMatch requires all unlabeled data available, which is challenging in limited storage scenarios. Our methods only require revisiting fewer examples and are adaptable to limited storage scenarios. DSGD effectively mitigates the negative effects of distribution bias. As shown in Figure <ref>(a), by following Figure <ref>(a) and omitting the backbone iCaRL&Fix for clarity, the results indicate that directly distilling previous logits or the pseudo labels on unlabeled data is not effective in refining the unlabeled data for subsequent processes.The issue of distribution bias on labeled data and unreliable pseudo labels contributes to this inefficacy.In contrast,the incremental accuracy improvements achieved by DSGD highlight its effectiveness in addressing this flaw by avoiding to use the distributions and, instead, exploring association information. The results in Figure <ref>(b) illustrate that DSGD notably enhances the learning of previous data without interrupting the adaptation to new tasks, where the accuracy on all old tasks evaluates the performance of refining previous data. CIFAR10. Table <ref> summarizes the experimental results for the CIFAR10-30 and CIFAR10-150 benchmarks. In the setting of only 30 annotated samples, our method based on iCaRL&Fix surpasses the base models by 31.65% and 45.7% points in average accuracy and last accuracy, respectively. Even with stronger backbone DER&Fix, our strategy achieve 8.33% and 11.18% accuracy improvement. Compared to existing SSCL methods, our method exceeds CCIC by 21.21% at 0.6% label ratio with less data replayed, and surpasses ORDisCo by 13.78% under 3% labeling ratio, while ORDisCo suffers from complexity and computation cost and our method is more economical and effective. Our method is flexible in adapting dimension changes and exhibits robustness against hurtful distribution bias even in severe scarce supervision scenarios. It is usually challenging to distill knowledge in expandable CL methods, such as DER, due to dimension mismatch. Our method DSGD is capable of satisfying dimension alignment problems, so it can be integrated into different continual learning methods and achieve explicit improvements. ImageNet-100. We also validate the proposed methods on a higher resolution dataset ImageNet-100, where the number of annotations is 13 and 100 per class. Table <ref> summarizes the experimental results.The entire results illustrate that our method is also efficacious in large-scale continual learning. Our method still outperforms the conventional CL by a large margin in 100 annotation settings, showing that our method is capable of mitigating catastrophic forgetting of unlabeled data.Nevertheless, as shown in the JointTrain results, there is a striking gap between JointTrain and semi-supervised continual learning, indicating that although with the improvements of DSGD, the catastrophic forgetting is severe on SSCL in more realistic applications and further research should be devoted to this field.§.§ Ablation Study and Parameter Analysis To validate the effectiveness of the proposed strategies, we conduct an ablation study on the CIFAR100 dataset with baseline iCaRL&Fix on three different semi-supervised settings: CIFAR100-5, CIFAR100-20, and CIFAR100-80. The performance comparison is shown in Table <ref>, where the DSGD means the sub-graph knowledge distillation. PseDis means utilizing pseudo labels of the previous model as logits distillation targets. 
The reported average accuracy across three settings can reflect the robustness of the model. The ablation study reveals that the dynamic sub-graph knowledge distillation can significantly improve the accuracy throughout the entire continual learning stage. This progress is explicit when the annotations are scarce, leading to a notable 4.3% increase in average accuracy for CIFAR100-5 dataset. Additionally, the graph structure distillation can complement the logits distillation, showcasing that our methods can work in conjunction with other continual learning methods. This highlights the adaptability and effectiveness of our approach in SSCL scenarios. Parameter Analysis.To verify the robustness of DSGD, we conduct experiments on CIFAR100-20 with different hyper-parameters γ∈{0.9, 0.95, 1, 1.5, 2} in dynamic topology graph construction. The results are presented in Figure <ref>(a). It is evident that the performance changes are minimal across different values of γ. In Figure <ref>(b), we gradually increase the value of K in the distillation vector from 1 to 6 and record the performance of CIFAR100. In CIFAR100-20, The average accuracy increases from 47.34% to 50.98% as the K change from 1 to 6. Similarly, the average accuracy rises from 31.37% to 37.25% in CIFAR100-5, indicating that our method can effectively make full use of association information. These results show that sub-graph distillation is capable of mitigating the negative effect of distribution bias of unlabeled data. § CONCLUSIONTremendous unlabeled data has the potential to improve the generalizability of continual learning significantly. However, the issue of catastrophic forgetting on unlabeled data has an impact on the performance of learned tasks.To address the limitations, we proposed a novel approach called Dynamic Sub-graph Distillation for robust semi-supervised continual learning, which leverages high-order structural information for more stable knowledge distillation on unlabeled data. We designed a dynamic sub-graph distillation algorithm, which enables end-to-end training and adaptability to scale up tasks.Experimental evaluations conducted on three datasets: CIFAR10, CIFAR100, and ImageNet-100, with varying supervision ratios, demonstrated the effectiveness of our proposed approach in mitigating the catastrophic forgetting issue in semi-supervised continual learning. § ACKNOWLEDGMENTSThis work was supported in part by the National Key R&D Program of China under Grant 2022ZD0116500 and in part by the National Natural Science Foundation of China under Grants 62106174, 62222608, 62266035, 61925602, U23B2049, and 62076179.
http://arxiv.org/abs/2312.16409v1
{ "authors": [ "Yan Fan", "Yu Wang", "Pengfei Zhu", "Qinghua Hu" ], "categories": [ "cs.LG", "cs.CV" ], "primary_category": "cs.LG", "published": "20231227044012", "title": "Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning" }
Polygon Detection from a Set of Lines
Detecting polygons defined by a set of line segments in a plane is an important step in analyzing vector drawings. This paper presents an approach combining several algorithms to detect basic polygons from arbitrary line segments. The resulting algorithm runs in polynomial time and space, with complexities of O((N+M)^4) and O((N+M)^2) respectively, where N is the number of line segments and M is the number of intersections between line segments. Our choice of algorithms was made to strike a good compromise between efficiency and ease of implementation. The result is a simple and efficient solution to detect polygons from lines. § KEYWORDS Polygon Detection, Segment Intersection, Minimum Cycle Basis § INTRODUCTION Unlike image processing, where data consist of raster images, our algorithm deals with drawings in vector format, consisting of line segments. This requires completely different approaches, such as the one described here. We divide this task into four major steps to perform polygon detection from a set of line segments. First, we detect line segment intersections using the Bentley-Ottmann algorithm <cit.>. The next step creates a graph induced by the drawing, where vertices represent endpoints or proper intersection points of line segments and edges represent maximal relatively open subsegments that contain no vertices. The third step finds the Minimum Cycle Basis (MCB) <cit.> of the graph induced in the previous step, using Horton's algorithm <cit.>. The last step constructs a set of polygons based on cycles in the previously found MCB. This is straightforward if we transform each cycle into a polygon, where each node represents a polygon vertex, and each edge in the cycle represents an edge in the polygon. A previous version of this paper was presented in <cit.>. In Sections <ref> and <ref>, we describe the four steps of our method. Section <ref> presents the whole algorithm, and experimental results are reported in Section <ref>. Finally, in Section <ref>, we discuss conclusions and future work. § INTERSECTION REMOVAL In a vector drawing composed of a set of line segments, many intersections might exist between these segments. To detect polygonal shapes, we must remove proper segment intersections, thus creating a new set of line segments in which any pair of segments shares at most one endpoint. §.§ Finding line segment intersections The first step of our approach consists of detecting all M intersections between N line segments in a plane. This is considered one of the fundamental problems of Computational Geometry, and it is known that any algorithm within the algebraic decision tree model has a lower bound of Ω(N log N+M) time to solve it <cit.>. In <cit.>, Balaban proposes two algorithms for finding intersecting segments: a deterministic algorithm that is asymptotically optimal in both time O(N log N+M) and space O(N), and a simpler one that performs the same task in O(N log^2 N+M) time. Before that, Chazelle and Edelsbrunner <cit.> reached a time-optimal algorithm O(N log N+M) with a space requirement of O(N+M).
The randomized approach devised by Clarkson and Shor <cit.> produced an algorithm for reporting all intersecting pairs that requires O(N log N+M) time and O(N) space.In 1979 Bentley and Ottmann proposed an algorithm that solved this problem in O((N+M) log N) time and O(N+ M) space <cit.>. This algorithm is the well-known Bentley-Ottmann algorithm, and after more than 40 years, it is still widely adopted in practical implementations because it is easy to understand and implement <cit.>. In realizing that this is not the most complex part of our approach, we use the Bentley-Ottmann algorithm since its complexity is acceptable for our purposes, and its published implementations are quite simple.§.§ Removing line segment intersectionsThe next step of our approach is to remove all proper intersections between line segments, dividing each intersected segment in sub-segments without proper intersections, only sharing endpoints. To find and remove intersections, performing at once the first two steps of our approach, we use a robust and efficient implementation of the Bentley-Ottmann algorithm, described by Bartuschka, Mehlhorn and Naher <cit.> that computes the planar graph induced by a set of line segments. Their implementation, represented in this paper by Compute-Induced-Graph, computes the graph G induced by set Φ in O((N+M) log N) time. Since this algorithm is quite long, we choose not to present it here. We refer our readers to <cit.> for a detailed description.In this implementation, the vertices of G represent all endpoints and proper intersection points of line segments in Φ, and the edges of G are the maximal relatively open sub-segments of lines in Φ that do not contain any vertex of G. The major drawback of this implementation lies in that parallel edges are produced in the graph for overlapping segments. We assume that Φ contains no such segments. For example, the set Φ shown in Figure 1, Compute-Induced-Graph, will produce the graph G, depicted in Figure 2, where each edge represents a nonintersecting line segment.§ POLYGON DETECTION Detecting polygons is similar to finding cycles on the graph G produced in the previous step.The first known linear-time algorithm for listing all graph cycles was presented by Syslo <cit.>. This algorithm requires O(V) space and O(V × C) time, where V and C are the number of vertices and cycles in G, respectively. Later, Dogrusöz and Krishnamoorthy proposed a vector space algorithm for enumerating all cycles of a planar graph that runs in O(V^2× C) time and O(V) space  <cit.>. Although asymptotically slower, this algorithm is much simpler than Syslo's and is amenable to parallelization. Unfortunately, the number of cycles in a planar graph can grow exponentially with the number of vertices <cit.>. An example of this situation is the graph presented in Figure 3. In this case, the number of cycles, including the interior region numbered 1, is O(2^r) with r=k / 2+1, where k is the number of vertices since one can choose any combination of the remaining regions to define a cycle <cit.>. This is why detecting all polygons that can be constructed from a set of lines is not very feasible. In this paper, we chose to detect the minimal polygons with few edges that cannot be constructed by joining other minimal polygons. §.§ Minimum Cycle Basis of a GraphSince we want to detect the minimal polygons, this can be treated as searching for a Minimum Cycle Basis (MCB). So, the second step of our approach consists in obtaining an MCB of graph G. 
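For reference, the graph G handed to this second step can be represented with a structure as simple as the following sketch (illustrative Python only; the original prototype described later was written in C++ and its data structures may differ).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Vertex:
    """Endpoint or proper intersection point of the input segments."""
    x: float
    y: float

@dataclass
class InducedGraph:
    """Planar graph G: edges are the maximal sub-segments joining two vertices."""
    adjacency: dict = field(default_factory=dict)  # Vertex -> set of neighbours

    def add_edge(self, u: Vertex, v: Vertex) -> None:
        self.adjacency.setdefault(u, set()).add(v)
        self.adjacency.setdefault(v, set()).add(u)

# Usage sketch: two crossing segments induce five vertices and four edges.
g = InducedGraph()
center = Vertex(1.0, 1.0)
for corner in (Vertex(0.0, 0.0), Vertex(2.0, 2.0), Vertex(0.0, 2.0), Vertex(2.0, 0.0)):
    g.add_edge(center, corner)
print(len(g.adjacency))  # 5
```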
A cycle basis is defined as a basis for the cycle space of G, which consists entirely of elementary cycles. A cycle is called elementary if it contains no vertex more than once. The dimension of the cycle space is given by the cyclomatic number ν=E-V+P <cit.>, where E is the number of edges and V the number of vertices in G and P is the number of connected components of G. §.§ All Cycles of a GraphHorton presented the first known polynomial-time algorithm to find the shortest cycle basis of a graph, which runs in O(E^3 V) time <cit.> or in O(E^4) on simple planar graphs <cit.>, which is the case. While asymptotically better solutions have been published in the literature, the Bentley-Ottmann algorithm is simple and usable for our needs. The pseudo-code of this algorithm is listed in Minimum-Cycle-Basis and shortly described below. A further detailed description of this algorithm and its concepts can be found in <cit.>.The All-Pairs-Shortest-Paths finds the shortest paths between all pairs of vertices in graph G and can be performed in O(V^3) time and O(V^2) space using FloydWarshall or Dijkstra algorithms <cit.>. Order-By-length orders the cycles by ascending length and can be implemented by any efficient sorting algorithm. This is a non-critical step because it has a O(V νlog V) upper bound in time complexity, which is insignificant compared to other steps of this algorithm.In Select-Cycles, we use a greedy algorithm to find the MCB from Γ set of cycles. To do this Horton <cit.> suggests representing the cycles as rows of a 0-1 incidence matrix, in which columns correspond to the edges of the graph, and rows are the incidence vectors of each cycle. Gaussian elimination using elementary row operations over the integers modulo two can then be applied to the incidence matrix, processing each row in turn, in ascending order of the weights of cycles, until enough independent cycles are found.This step dominates the time complexity from other steps since it takes O(E ν^2 V) time. Knowing that G is always a simple planar graph we can conclude that as a whole, the Minimum-Cycle-Basis algorithm has a worst-case upper bound of O(E ν^2 V)=O(E^3 V)=O(E^4) operations and space requirements of O(V^2).Figure <ref> shows an example of Γ, the set of cycles resulting from applying the Minimum-Cycle-Basis to graph G shown in Figure 2.§.§ Polygon constructionThe last step of our approach consists of constructing a set Θ of polygons from the MCB. An algorithm to perform this operation can easily run in O(C V) time, where C is the number of cycles in MCB. Such an algorithm is listed in Polygons-From-Cycles, which returns a set Θ of polygons. Figure <ref> illustrates the resulting set Θ of polygons generated by applying Polygons-From-Cycles to Γ depicted in Figure <ref>.§ ALGORITHM OUTLINE We can now outline Detect-Polygons. This algorithm can detect a set Θ of polygons from an initial set Ψ of line segments. To perform this task, we pipeline the algorithms referred to in previous sections for line segment intersection removal, MCB finding, and cycle-to-polygon conversion.As referred in section 2.2, Compute-Induced-Graph runs in O((N+M) log N) time and O(N+M) space. 
The Shortest-Cycle-Basis step runs in O(V^4) operations and has a space requirement of O(V^2), making it the critical step in the complexity of this algorithm, since Polygons-From-Cycles needs only O(C V) time. Since the number V of vertices in the graph is no greater than the sum of line endpoints (2 × N) and detected intersections M, we conclude that the proposed algorithm has time and space complexities of O(V^4) = O((N+M)^4) and O(V^2) = O((N+M)^2), respectively. § EXPERIMENTAL RESULTS The algorithm proposed in this paper was implemented in C++ and tested on an Intel Pentium III 1GHz 512MB RAM computer running Windows XP. We tested the algorithm with line segments created from simple test drawings, technical drawings of mechanical parts, and hand-sketched drawings. Table <ref> presents the results obtained from these tests. These results show that performance is acceptable for online processing of sets with fewer than three hundred lines, such as hand sketches or small-size technical drawings. If the line set has about 2,500 lines, the algorithm will take more than twenty minutes to detect the polygons. Still, this remains a feasible solution for batch processing of medium-size technical drawings. § CONCLUSIONS AND FUTURE WORK One use of the proposed polygon detection is to create descriptions of vector drawings based on spatial and topological relationships between polygons. Another use is detecting planar shapes in sketches. Both applications have been implemented as working prototypes for shape retrieval and architectural drawing from sketches. The algorithm presented here detects all minimal polygons that can be constructed from a set of line segments in polynomial time and space. This approach uses well-known and simple-to-implement algorithms to perform line segment intersection detection and to find an MCB of a graph, instead of using more efficient but less simple methods. Indeed, the presented algorithm has considerable room for improvement, namely through more recent, complex, and efficient algorithms. Further work may be carried out regarding detecting and correcting rounding errors resulting from finite precision computations. § ACKNOWLEDGMENTS We thank Professor Mukkai S. Krishnamoorthy from Rensselaer Polytechnic Institute, New York, for his very helpful suggestions. This work was partly funded by the Portuguese Foundation for Science and Technology, project 34672/99, and the European Commission, project SmartSketches IST-2000-28169.
http://arxiv.org/abs/2312.16363v1
{ "authors": [ "Alfredo Ferreira Jr.", "Manuel J. Fonseca", "Joaquim A. Jorge" ], "categories": [ "cs.CG" ], "primary_category": "cs.CG", "published": "20231227000915", "title": "Polygon Detection from a Set of Lines" }
Maximum Likelihood CFO Estimation for High-Mobility OFDM Systems: A Chinese Remainder Theorem Based Method
Wei Huang, Jun Wang, Senior Member, IEEE, Xiaoping Li, Qihang Peng
Wei Huang and Jun Wang are with the National Key Laboratory of Wireless Communications, University of Electronic Science and Technology of China, Chengdu, 611731, China (e-mail: [email protected], [email protected]). Xiaoping Li is with the School of Mathematical Science, University of Electronic Science and Technology of China, Chengdu, 611731, China (e-mail: [email protected]). Qihang Peng is with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China (e-mail: [email protected]).
Orthogonal frequency division multiplexing (OFDM) is a widely adopted wireless communication technique but is sensitive to the carrier frequency offset (CFO). In high-mobility environments, severe Doppler shifts cause the CFO to extend well beyond the subcarrier spacing. Traditional algorithms generally estimate the integer and fractional parts of the CFO separately, which is time-consuming and requires substantial additional computation. To address these issues, this paper proposes a Chinese remainder theorem-based CFO Maximum Likelihood Estimation (CCMLE) approach for jointly estimating the integer and fractional parts. With CCMLE, the MLE of the CFO can be obtained directly from multiple estimates of sequences with varying lengths. This approach can achieve a wide estimation range up to the total number of subcarriers, without significant additional computations. Furthermore, we show that the CCMLE can approach the Cramér-Rao Bound (CRB), and give an analytic expression for the signal-to-noise ratio (SNR) threshold approaching the CRB, enabling an efficient waveform design. Accordingly, a parameter configuration guideline for the CCMLE is presented to achieve a better MSE performance and a lower SNR threshold. Finally, experiments show that our proposed method is highly consistent with the theoretical analysis and advantageous regarding estimated range and error performance compared to baselines.
High mobility, OFDM, Carrier frequency offset (CFO), Maximum likelihood estimation (MLE), Chinese remainder theorem (CRT).
§ INTRODUCTION Satellite communication is expected to take center stage in commercial services of the 6th generation of mobile communication systems (6G), owing to its wide bandwidth, ubiquitous coverage, and ability to withstand natural disasters <cit.>. Meanwhile, orthogonal frequency division multiplexing (OFDM) is the primary scheme of an integrated satellite-terrestrial network, given its promising anti-multipath fading characteristics and high spectrum efficiency <cit.>.
However, a crucial feature of the satellite network is that the high mobility leads to significant Doppler shifts, and an inherent drawback of OFDM systems is that they are very sensitive to the carrier frequency offset (CFO) <cit.>, including CFOs at integer and fractional multiples of the subcarrier spacing. Integer frequency offset (IFO) will result in a cyclic shift of the subcarriers while fractional frequency offset (FFO) will destroy the orthogonality of the subcarriers and lead to inter-carrier interference (ICI) <cit.>. Both kinds of CFOs can lead to severe performance degradation if not accurately compensated. Therefore, an accurate estimation of the CFO is of critical importance in high-mobility OFDM systems, where Doppler shifts can be well beyond the subcarrier spacing. In general, the frequency synchronization algorithms for OFDM systems can be classified into two categories, i.e., non-data-aided blind estimation methods <cit.> and data-aided estimation methods<cit.>. Most of the non-data-aided blind estimation methods perform synchronization based on the inherent structure of the OFDM symbol and generally have high latency and computational complexity <cit.>, which could not meet the fast and reliable requirements of many applications. In contrast, data-aided frequency synchronization schemes typically use the autocorrelation of received training symbols or the cross-correlation with the local copy at the receiver <cit.>, which is faster, more reliable, and easier to implement. Therefore, the data-aided frequency synchronization methods are more suitable for practical applications due to their low latency and high reliability <cit.>.For the data-aided methods, early research works focused on the FFO estimation problem. In <cit.>, Moose and Bai presented a maximum likelihood estimation (MLE) of the FFO based on two repeated symbols. Although the accuracy of the CFO estimate is satisfying, the estimated CFO range of these method is limited, i.e., ±1/2 the subcarrier spacing. In fact, the accuracy and range of the CFO estimate are generally in conflict with each other, e.g., a wider estimated range usually requires shorter sequences, which will deteriorate the accuracy <cit.>. In order to increase the range of the CFO estimate while maintaining the accuracy of the algorithm, an effective alternative approach is to perform accurate FFO estimation over a small range and then perform IFO estimation based on the FFO-compensated signal <cit.>. In <cit.>, Schmidl presented a wide-range frequency synchronization based on the autocorrelation of two training OFDM symbols. Further, authors in <cit.> proposed an improved estimator based on the timing synchronization algorithm in <cit.>, which can achieve the same CFO range as <cit.> but only one training symbol is required. Considering the sensitivity to CFO for the Zadoff-Chu (ZC) sequences-based timing synchronization method, <cit.> proposed a cross-correlation-based joint time and frequency synchronization schemes for OFDM downlink transmissions using two suitable ZC sequences. Furthermore, a frequency synchronization methods for the OFDM-based satellite system was proposed by fully utilizing the correlation property of two symmetric ZC sequences<cit.>. However, these methods are time-consuming due to multi-level estimation and requires a large amount of additional computations to estimate IFO. 
To address these issues, the Chinese remainder theorem (CRT) are applied <cit.>, which can directly obtain estimation results with a full estimated range at the cost of a small number of additional computations.The CRT is to reconstruct a number by the remainder of a series of integer modes <cit.>, which is widely used in the field of sub-Nyquist-sampling frequency determination <cit.>, DOA estimation <cit.>, range estimation <cit.>, and physical layer encryption <cit.>.As a first attempt, to obtain a wide range while retaining a high accuracy for the CFO estimation, authors in <cit.> applied the classic CRT to estimate the integer parts of the CFO with two relatively co-prime sample intervals used in coherent optical OFDM systems. However, this method could not directly obtain the fractional parts of the CFO estimation, but requires a length of the product of all sample interval values to estimate the FFO individually. In order to jointly estimate the integer and fractional parts of a real number with some small sample intervals, the robust CRT <cit.> are considered. Authors in <cit.> removed the phase uncertainty for Doppler shift detection in synthetic aperture radar (SAR) systems based on the closed-form robust CRT, which allows for the direct CFO estimate of a real number. Furthermore, <cit.> also used a sequential-based approach to transform the Doppler estimation problem into a robust CRT problem and solved by the closed-form robust CRT. However, the closed-form robust CRT are based on the assumption that all the remainder errors have the same variance <cit.>, i.e., the CFO estimation errors for different estimation ranges have the same variance. In this case, a closed-form robust CRT actually can only achieve the suboptimal performance <cit.>, since the errors usually vary differently over different estimation ranges. Hence, the robust CRT-based method in <cit.> for Doppler shift detection also yields only the suboptimal performance for the CFO estimation problem.Considering the variability in the distribution of CFO estimation errors obtained under different estimated ranges, we propose a CRT-based CFO Maximum Likelihood Estimation (CCMLE) method, which enables MLE of CFOs based on multiple CFO estimates from sample intervals of different lengths, provided that the CFO estimation errors follow the wrapped normal distribution <cit.>.Through theoretical analysis, we show that the distribution of the CFO estimation error can be well approximated by the wrapped normal distribution and the CCMLE can achieve the Cramér-Rao Bound (CRB) when a certain signal-to-noise ratio (SNR) threshold is met. Meanwhile, this threshold is given in an approximate analytic form.Moreover, we present a parameter configuration guideline based on these theoretical analyses to achieve a better MSE performance and a lower SNR threshold. Furthermore, simulation experiments demonstrate high consistency with the theoretical analysis and show that our proposed CCMLE method has advantages in terms of estimated range and error performance compared to baseline schemes. The main contributions of this article are presented in the following: * We first show that the distribution of the CFO estimation error can be modeled by the von Mises distribution <cit.>, which is a close approximation to the wrapped normal distribution <cit.>. 
And then, we propose the CCMLE method to achieve the optimal performance for high-mobility OFDM systems, which has the advantages of a full estimated range and low additional complexity.* We analyze the CRB of the CFO estimation under varying CFO estimated ranges, and reveal that the theoretical mean square error (MSE) of the proposed CCMLE method is actually the CRB. Besides, we give the analytic expression for the SNR threshold approaching the CRB, which enables an efficient waveform design.* With the comprehensive analysis of the SNR threshold approaching the CRB and theoretical MSE, we give a guideline for the parameter configurations of the proposed CCMLE method to achieve a better MSE performance and a lower SNR threshold. The rest of the paper is organized as follows. Section <ref> presents the problem statement and the system model. In Section <ref>, we formulate the CFO estimation problem as the remaindering problem and discuss the distribution of the CFO estimation errors. And then, the CCMLE method is proposed based on the MLE-based CRT. Section <ref> shows the performance analysis of the proposed CCMLE method and gives a parameter configuration guideline for this method. Furthermore, we present a brief complexity analysis for the proposed CCMLE and baseline methods in Section <ref>. Next, the experiment results are shown in Section <ref>. Finally, the conclusions are given in Section <ref>. § PROBLEM STATEMENT AND SYSTEM MODELWe consider a high-mobility scenario where the line-of-sight (LOS) path is dominant <cit.>. As a result, the communication system will suffer from severe LOS component Doppler shifts due to high mobility. For example, a high-speed aircraft using an OFDM system will suffer from severe CFOs, i.e., LOS component Doppler shifts, that significantly exceed the subcarrier spacing. Meanwhile, we assume that the speed of the aircraft and the angle of the received signal are quasi-static for a certain period of time. For example, a fifth generation (5G) frame of 10ms in length <cit.>, during which the above quasi-static assumption is reasonable. Moreover, we assume that the transmitted time-domain training symbols s(n) consist of the ZC sequences and satisfy[ s^*(n)s(n) = | s(n)|^2 = 1, n = 0,1,2, …, ]and[ s(n+L)=s(n), n = 0,1,2, …,L-1, ]where n represents the discrete sample point and L represents the sample interval. In this case, the received time-domain signals r(n) can be expressed as[ r(n) = h(n)s(n)e^j2π(ε_NΔ f)nT_s + ω(n), ]where h(n) represents the channel coefficient of the LOS component, ω(n) is the additive white Gaussian noise (AWGN), ε_N ∈ [0,N) is the normalized CFO with respect to subcarrier spacing Δ f=(NT_s)^-1, N is the discrete Fourier transform size, and T_s is the sampling period.Note that, we consider the channel is constant during the period of training symbols. Furthermore, to facilitate mathematical analysis, we assume that |h(n)|^2 = 1 by considering the quasi-static characteristic without loss of generality.In conventional training symbol aided CFO estimation, the normalized CFO can be estimated by the phase difference between identical sample intervals <cit.>, described as[ ε̂_N = N/2π Larg(P_L), ]where ε̂_N is the estimated normalized CFO, the function arg(·) returns the phase angle in radians, and P_L is a correlation function with the sample interval L, defined as[ P_L = ∑_m = 0^L - 1r^*(m)r(m + L). ] As a result, the estimated range of the normalized CFO is [-N/2L,N/2L]. 
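As a concrete illustration of the estimator described above, the following sketch implements the correlation and the phase-to-CFO mapping (Python/NumPy is assumed; the training sequence, the values of N and L, and the noise-free channel are illustrative assumptions only, not the configuration used later in the experiments).

```python
import numpy as np

def cfo_estimate(r, L, N):
    """Phase-difference CFO estimate from one repeated training block (sketch).

    r : 2*L complex baseband samples whose training part satisfies s(n+L)=s(n).
    Returns the CFO normalized to the subcarrier spacing 1/(N*T_s); the
    estimate is unambiguous only within [-N/(2L), N/(2L)].
    """
    P_L = np.sum(np.conj(r[:L]) * r[L:2 * L])      # correlation of the two halves
    return N / (2 * np.pi * L) * np.angle(P_L)     # phase angle -> normalized CFO

# Usage sketch with a unit-modulus (ZC-like) training sequence and no noise.
N, L, eps_true = 64, 32, 0.3                       # illustrative values only
s = np.exp(1j * np.pi * np.arange(L) ** 2 / L)     # |s(n)| = 1
s = np.tile(s, 2)                                  # repeated halves, s(n+L) = s(n)
n = np.arange(2 * L)
r = s * np.exp(1j * 2 * np.pi * eps_true * n / N)  # received signal with h(n) = 1
print(round(cfo_estimate(r, L, N), 3))             # -> 0.3
```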
Intuitively, a smaller sample interval L is required to obtain a wider range of estimates, which leads to a lower accuracy <cit.>.As mentioned before, we apply CRT to perform CFO estimation in this paper. It is important to note that for the purpose of describing the CRT, we denote the estimated range as [0,N/L] and one can return ε̂_N to [-N/2L,N/2L] using the following formula:[ ε̂_N = ((ε̂_N + N/2L) N/L)-N/2L, ]where the notation `' represents modular arithmetic. § PROPOSED MLE-BASED CRT METHOD FOR CFO ESTIMATION In order to apply the CRT in the CFO estimation problem, K different estimated ranges [0, Γ_1),…,[0, Γ_K) are required, while the greatest common divisor (gcd) of any two different estimated ranges needs to be satisfied as 1, i.e., gcd(Γ_i,Γ_j)=1 for ij.Without loss of generality, assume that the above relatively co-prime numbers satisfy Γ_1 < ⋯ < Γ_K. In addition, to ensure that the above estimated ranges can be obtained, the estimated CFO requires to be normalized to (Γ T_s)^-1 instead of (N T_s)^-1, where Γ=Γ_1Γ_2⋯Γ_K. Accordingly, the sample intervals for the training symbols are set to be L_1,…,L_K, where L_i=Γ/Γ_i for i=1,…,K.Therefore, the normalized CFOs of different sample intervals can be represented as[ ε̂_i = Γ/2π L_iarg(P_L_i),i=1,…,K, ]where P_L_i is obtained by replacing L in (<ref>) with L_i, ε̂_i is the estimated CFO normalized to (Γ T_s)^-1, and the corresponding estimated range is [0, Γ_i).Furthermore, we can rewrite (<ref>) into the following form:[ ε̂_i=ε_i+Δε_i,i=1,…,K, ]where Δε_i represents the normalized CFO estimation error caused by the noise.Let ε be the normalized CFO to be estimated, which is normalized to (Γ T_s)^-1. If we ignore the effect of noise temporarily, ε_i will be exactly the remainder of ε modulo Γ_i. This is because the estimated phase by the CFO estimation has cyclic characteristics and the estimated range with sample interval L_i used is exactly [0, Γ_i). Hence, we can obtain the following equations on the condition that there is no noise,[ ε _i≡εΓ _i,i=1,…,K. ]where 0 ≤ε_i < Γ_i, denoted by ε _i = ⟨ε⟩ _Γ_i.As a result, the CFO estimation problem can be formulated as the remaindering problem for a real number ε, i.e., recovering the real number ε from its erroneous remainders ε̂_i, where the remainder noises are denoted as Δε_i.Based on the property of the CRT <cit.>, ε can be uniquely reconstructed if and only if 0 ≤ε < Γ. In such case, a wide estimated range [0, Γ) can be obtained from several different small estimated ranges [0, Γ_i). §.§ Estimated Range of CFOsBased on (<ref>), since the CRT-based CFO estimation requires Γ rather than N as a normalized factor, the estimated normalized CFO is ε̂ times virtual subcarrier spacing, given byΔ f_V=(Γ T_s)^-1,rather than the actual subcarrier spacingΔ f=(N T_s)^-1.Hence, the relationship between Δ f_V and Δ f can be represented as[ Δ f_V/Δ f = N/Γ. ] As a result, the CFO normalized to (NT_s)^-1, i.e., ε̂_N, can be obtained by[ ε̂_N = N/Γε̂. ] Since the estimated range of ε̂ is [0, Γ), we can obtain a wide estimated range of the CFO normalized to (NT_s)^-1 from (<ref>), i.e., 0 ≤ε̂_N< N. Accordingly, based on (<ref>), the range of normalized CFOs that can actually be estimated is [-N/2,N/2].§.§ Normalized CFO Error Distribution In this section, we will analyze the distribution of errors for the estimated normalized CFO. 
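Before turning to the error statistics, the sketch below shows how the per-range remainder estimates ε̂_i of the previous subsection can be formed; it is only illustrative (one received block per sample interval is assumed here, and the Python function and variable names are ours).

```python
import numpy as np

def remainder_estimates(blocks, Gamma):
    """Per-range remainders: eps_i = Gamma / (2*pi*L_i) * arg(P_{L_i}) (sketch).

    blocks : dict mapping each sample interval L_i to 2*L_i received samples
             whose training part repeats with period L_i (an assumption of
             this sketch, not a constraint stated above).
    Gamma  : product of the pairwise co-prime ranges Gamma_1 * ... * Gamma_K.
    Each estimate is wrapped into [0, Gamma_i), with Gamma_i = Gamma / L_i.
    """
    estimates = {}
    for L, r in blocks.items():
        P = np.sum(np.conj(r[:L]) * r[L:2 * L])       # correlation P_{L_i}
        eps = Gamma / (2 * np.pi * L) * np.angle(P)   # phase -> remainder value
        estimates[L] = eps % (Gamma / L)              # fold into [0, Gamma_i)
    return estimates
```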
According to (<ref>), we know that the estimated error of ε̂_i is proportional to the estimated phase error of arg(P_L_i).Therefore, the problem of the estimated normalized CFO error distribution can be transformed into the analysis of the estimated phase error distribution.Let θ̂_̂î=arg(P_L_i) denote the estimated phase, and then the phase error is defined as[ ϵ_i=θ̂_̂î-θ_i. ]where θ_i is the actual phase.Based on (<ref>) and (<ref>), we can rewrite the correlation function P_L_i into the following form:[ P_L_i = L_ie^j2π L_i(ε_NΔ f)T_s + ω_L_i, ]where ω_L_i is still the AWGN based on the Central Limit Theorem (CLT). According to <cit.>, if ω_L_i is the AWGN, the probability distribution function (PDF) of the phase error ϵ_i can be perfectly matched by the von Mises distribution over a wide range of SNRs.Therefore, we have that the phase error ϵ_i follows the von Mises distributionwith mean zero.According to <cit.>, the von Mises distribution is a close approximation to the wrapped normal distribution.Furthermore, based on (<ref>), we know that the estimated normalized CFOs are linear transformations of the estimated phases. Therefore, it is reasonable to assume that the normalized CFO estimation error follows the wrapped normal distribution with mean zero.Additionally, even though the received signals have the same SNR, the normalized CFO estimates have different variances. This is because different sample intervals (L_i) are used in the correlation function. As a result, the normalized CFO estimation errors obtained from different sample intervals result in a wrapped normal distribution with varying variances.§.§ CRT-based CFO Maximum Likelihood Estimation Since the normalized CFO estimation errors, i.e., the remainder noises of robust CRT problem, follow the wrapped normal distribution with varying variances, we propose to use the MLE-based robust CRT <cit.> to obtain optimal performance. This proposed CRT-based CFO MLE (CCMLE) method is detailed in the following.Let M_i=MΓ_i,where M is an integer greater than or equal to 2. Substitute Γ in (<ref>) by MΓ and Γ_i in (<ref>) by M_i, we have[ ε_M_i≡ε _MM _i,i=1,…,K, ]where ε_M and ε _M_i are the CFOs normalized to (MΓ T_s)^-1, and the corresponding range are [0, MΓ) and [0, MΓ_i). Hence, similar to (<ref>), we have the following form:[ ε̂_M_i=ε_M_i+Δε_M_i,i=1,…,K. ] As described in <cit.>, by using the method in <cit.>, the variance of the CFO estimates for different-length sample intervals is found to be [ σ^2_i = M^2Γ ^2/4π ^2L_i^3 ·η, ]where σ^2_i (i=1,…, K) represents the variance of the normalized CFO estimation error Δε_M_i and η represents the SNR of the received signals.Therefore, we can conclude that the remainder noise Δε_M_i follows the wrapped normal distribution with mean zero and variance σ^2_i for i=1,…, K. To optimally reconstruct ε_M, it is essential to optimally determine the common remainder r^c from erroneous common remainders r̂_i^c. First, r̂_i^c is obtained by noisy remainder ε̂_M_i modulo M, i.e.,[ r̂_i^c = ⟨ε̂_M_i⟩ _M, i=1,…,K. ] Next, the optimal estimate for the common remainder r^c can be determined by the following equation <cit.>: [ r̂^c = min_x ∈Ω∑_i = 1^K w_id_M^2( r̂_i^c,x), ]where w_i is the weights of the remainders defined as[ w_i = 1/σ _i^2/∑_i = 1^K 1/σ _i^2, i=1,…,K. ]d_M( r̂_i^c,x) is the circular distance defined as[ d_M( r̂_i^c,x) Δ = r̂_i^c-x-[ r̂_i^c-x/M]M, ]where [·] stands for the rounding integer. 
Besides, the set of the optimal solutions Ω is defined asΩ = {⟨∑_i = 1^K w_ir̂_i^c + M∑_i = 1^t w_ρ (i)⟩_M,t = 1, … ,K},where ρ is a permutation of the set {1,…,K } such that[ r̂_ρ _(1)^c ≤⋯≤r̂_ρ _(K)^c, ]and w_ρ _(i) is the weight of r̂_ρ _(i)^c.After the common remainder r^c is optimally determined, the normalized CFOs can be estimated by[ ε̂_M = M( (∑_i = 1^K L̅_iL_iq̂_i) Γ) + r̂^c, ]whereq̂_i = [ ε̂_M_i-r̂_i^c/M],i=1,…,K.L̅_i are predetermined constants, i.e., the modular multiplicative inverse of L_i modulo Γ_i, defined asL̅_iL_i≡ 1 Γ _i,i=1,…,K. Finally, based on (<ref>), the CFO normalized to (NT_s)^-1, i.e., ε̂_N, is given by[ ε̂_N = N/MΓε̂_M . ] We summarize the proposed CCMLE method as Algorithm <ref> and the corresponding diagram is given in Fig. <ref>.§ PERFORMANCE ANALYSIS AND PARAMETER CONFIGURATION GUIDELINES §.§ CFO Estimation Performance AnalysisAccording to <cit.>, the theoretical MSE of ε̂_M for the proposed CCMLE method is given by[ Δ_MSE(M) = ∑_i = 1^K w_i^2σ _i^2. ]For the CFO estimation performance, we have the following theorem.Theorem 1: When the weights w_i are defined by (<ref>), the Δ_MSE(M) in (<ref>) is smaller than the minimum of σ^2_i (i=1,…, K) defined by (<ref>), i.e., Δ_MSE(M) < min{σ^2_i}, where min{σ^2_i} represents the minimum of the set {σ^2_i,i=1,…, K}. Furthermore, the CRB var[ε̂_M] of the normalized CFO estimates ε̂_M with K different sample intervals L_1,…,L_K used is given by[ var[ε̂_M] ≥Δ_MSE(M). ]Proof: The proof of this theorem is relegated to Appendix <ref>.This theorem states that the performance of the proposed CCMLE method is better than that obtained under any of the single-length sample intervals used. Meanwhile, the normalized CFO estimated errors obtained by the proposed CCMLE method can approach the CRB.Intuitively, based on (<ref>) and (<ref>), the theoretical MSE of ε̂_N is given byΔ_MSE = (N/MΓ)^2 Δ_MSE(M).That is,Δ_MSE=(N/MΓ)^2 ∑_i = 1^K w_i^2σ _i^2.Accordingly, the CRB of ε̂_N is given by[ var[ε̂_N] ≥Δ_MSE. ]It can be seen from (<ref>) and (<ref>) that the MSE of the proposed CCMLE method can approach the CRB, which indicates that the proposed CFO estimator can achieve the optimal performance. In addition, we show in the following theorem that given the values of Γ_1,Γ_2,…,Γ_K, the proposed CCMLE is theoretically capable of yielding a SNR threshold approaching the CRB. Before that, we give the following Lemma 1 to facilitate the derivation of Theorem 2. Lemma 1: If L_1 > ⋯ > L_K>0, and ξ_Ψ is defined as ξ _Ψ = {[ 1/∑_Δε _M_i∈ SL_i^3 + 1/∑_Δε _M_j∈S̅L_j^3, S ∅ ,S̅∅; 1/∑_i = 1^K L_i^3,S = ∅orS̅ = ∅ ].,where S is the subset of U={Δε_M_1, …,Δε_M_K}, and S̅ is the complement of S in U. Then, the maximum value of ξ_Ψ is given byξ_Ψ^*=1/L_K^3 + 1/∑_j KL_j^3.Proof: The proof of this theorem is relegated to Appendix <ref>.Theorem 2: For an arbitrarily small real number δ, MSE of the CCMLE method approaches the CRB with a probability of at least 1-δ for η≥Γ^2 x_δ^2 ξ_Ψ^*/π^2,where x_δ is a constant with respect to δ, and ξ_Ψ^* is given by (<ref>).Proof: The proof of this theorem is relegated to Appendix <ref>. §.§ Parameter Configuration GuidelineNext, based on Theorem 1 and Theorem 2, we thoroughly examine how the parameter configurations affect the performance of the CCMLE method, taking into account the MSE and the SNR threshold η_th. 
After that, we present a guideline for the parameter configurations of the proposed CCMLE method to achieve a better MSE performance and a lower SNR threshold.First, based on (<ref>) and (<ref>), we know that η_th is determined by the value of Γ^2 ξ_Ψ^* for a given arbitrarily small number δ. Recall that Γ=Γ_1Γ_2⋯Γ_K and L_i=Γ/Γ_i. Then, based on the fact that Γ_1 < ⋯ < Γ_K and Γ_1,Γ_2,…,Γ_K are co-prime numbers, we have L_K^3 ≪∑_j KL_j^3. As a result, based on (<ref>), the value of Γ^2 ξ_Ψ^* can be approximated asΓ^2 ξ_Ψ^*≈Γ_K^2/Γ_1Γ_2⋯Γ_K-1. It can be seen from (<ref>) that the larger the Γ_1Γ_2⋯Γ_K-1, the smaller the Γ^2 ξ_Ψ^* for a given Γ_K. Furthermore, because Γ_1,Γ_2,…,Γ_K are co-prime numbers, the maximum value of Γ_1Γ_2⋯Γ_K-1 is obtained when Γ_1,Γ_2,…,Γ_K-1 are consecutive K-1 primes less than Γ_K.Moreover, according to (<ref>) and (<ref>), we can rewrite the theoretical MSE of ε̂_N asΔ_MSE = 1/∑_i = 1^K 4π ^2L_i^3 ·η/N^2,which shows that the larger K and L_i will result in a smaller Δ_MSE for a given SNR. Besides, based on (<ref>) and (<ref>), it can be found that the value of M has no effect on the final performance. Thus, without loss of generality, we set M=2 in the following simulation.In general, a group of parameter configurations will be selected to achieve a better MSE performance and a lower SNR threshold, leading to the following configuration guidelines.Guidelines:We should use as many K different estimated ranges as possible, and set Γ_i(i=1,…,K) to be K consecutive co-prime numbers, while making the sample intervals L_i(i=1,…,K) as large as possible. § COMPLEXITY ANALYSIS In this section, we compare the complexity of the proposed CCMLE and baseline methods. The main benchmarks for comparison can be divided into two categories: CRT-based methods and traditional methods. Specifically, the CRT-based methods include the closed-form robust CRT-based method <cit.>, denoted as “Closed-form CRT”, and the classic CRT-based method <cit.>, abbreviated as “Classic CRT”.For traditional methods, the typical two-stage estimation method that the IFO estimation based on the symmetrical correlation property of FFO-compensated received signal <cit.> is adopted for comparison, marked as “Sym.-corr.”. Besides, the Moose's method <cit.> is used for comparison as well.All these methods obtain an initial CFO estimate based on the training symbols by means of autocorrelation and summation, and thus the corresponding complexity is equivalent to O(L), where L is the length of the sample interval. Additional computation for the Classic CRT requires (K-1) multiplications and (K-1) additions. For the two-stage estimation methods in <cit.>, additional computation is mainly involved in performing N times autocorrelation and summation of the sequence and finding the minimum values, so the additional complexity is equivalent to O(NL). Meanwhile, our proposed CCMLE method needs additional computations to obtainr̂_i^c, r̂^c, Ω, q̂_i and ε̂_M, but the overall complexity is equivalent to O(K^2), which is essentially determined by the number of K. Recall that K represents the number of different estimated ranges, which is generally a small number, typically 3 or 4. Therefore, only a small number of additional computations are required for the proposed CCMLE method. However, since the Closed-form CRT estimates r^c by searching all the reals in the range of [0,M) <cit.>, the complexity is inversely proportional to the step size λ in search. 
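To make the O(K^2) additional computation of the CCMLE concrete, the sketch below reproduces its reconstruction stage (common remainders, candidate set Ω, folding numbers, and the final CRT combination); it is an illustrative Python rendering with our own variable names and a noise-free demo, not the authors' implementation.

```python
import numpy as np

def ccmle_reconstruct(eps_hat, gammas, weights, M):
    """Sketch of the CCMLE reconstruction from noisy remainders.

    eps_hat : noisy remainders of the CFO modulo M_i = M * Gamma_i.
    gammas  : the pairwise co-prime ranges Gamma_1, ..., Gamma_K.
    weights : w_i, proportional to 1/sigma_i^2 and summing to one.
    M       : integer scaling factor (>= 2).
    """
    gammas = np.asarray(gammas)
    eps_hat = np.asarray(eps_hat, dtype=float)
    w = np.asarray(weights, dtype=float)
    Gamma = int(np.prod(gammas))
    L = Gamma // gammas                                   # L_i = Gamma / Gamma_i
    L_bar = np.array([pow(int(l), -1, int(g)) for l, g in zip(L, gammas)])

    # Common remainders modulo M and the K candidate weighted circular means.
    r_c = eps_hat % M
    order = np.argsort(r_c)
    base = float(np.sum(w * r_c))
    candidates = [(base + M * float(np.sum(w[order[:t]]))) % M
                  for t in range(1, len(w) + 1)]

    def circ_dist(a, x):                                  # circular distance d_M
        d = a - x
        return d - np.round(d / M) * M

    costs = [float(np.sum(w * circ_dist(r_c, x) ** 2)) for x in candidates]
    r_hat = candidates[int(np.argmin(costs))]

    q_hat = np.round((eps_hat - r_hat) / M).astype(int)   # folding numbers
    return M * (int(np.sum(L_bar * L * q_hat)) % Gamma) + r_hat

# Noise-free demo: Gamma_i = (3, 5, 7), M = 2, true value 41.3 in [0, M*Gamma).
gammas, M, true_val = (3, 5, 7), 2, 41.3
eps_hat = [true_val % (M * g) for g in gammas]            # exact remainders
w_demo = np.ones(3) / 3                                   # equal weights, demo only
print(ccmle_reconstruct(eps_hat, gammas, w_demo, M))      # -> ~41.3
```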
The complexity for the above methods is summarized in TABLE <ref>. § EXPERIMENTAL RESULTSIn this section, we demonstrate the performances of the proposed CCMLE method and the benchmarks in a high-mobility OFDM system. Specifically, we consider the scenario that the maximum normalized CFO is N/2, i.e., ε_N ∈ [-N/2,N/2], where N is set to be 64 [ It is well-known that LEO satellite communication systems, such as Telesat, OneWeb, and SpaceX, use the 17.8-19.3 GHz bands for downlink communications <cit.> and that LEO satellites move at very high speeds, typically 7.5 km/s <cit.>. Hence, the maximum Doppler shift can be as high as 480 kHz under the condition that the carrier frequency of the simulation is set to 19.2 GHz. Furthermore, a commonly used subcarrier spacing of 15kHz is assumed <cit.>, and then the maximum normalized Doppler shift can be as high as 32 subcarriers.] For parameter configurations of the proposed CCMLE method, we restrict the maximum sample interval L_1 to be less than N. As a result, according to the configuration guidelines in Section-<ref>, we set Γ_1=3, Γ_2=5, Γ_3=7, corresponding to L_1=35, L_2=21, L_3=15. It is worth noting that all the baselines are restricted to use no more than the maximum sample interval of the CCMLE method, i.e., L_1, for fairness of comparison. Furthermore, we carry out 1 × 10^6 trials for each method. §.§ MSE Performance We present the MSE performances of various fixed normalized CFOs, including ε_N= 0.1, 0.6, 1.1, 10.1, 30.1, 31.1, for the proposed and baseline methods in Fig. <ref> from (a) to (f). Note that, according to Theorem 1, the CRB of the proposed CCMLE method corresponds to the theoretical MSE, i.e., Δ_MSE.The simulation results from Fig. <ref> (a) to (f) show that when the SNR is no smaller than 10dB, the proposed CCMLE approaches the CRB. Besides, it can be found that the CCMLE achieves the best performance when the normalized CFO ε_N increases from 0.1 to 31.1, in terms of a lower SNR threshold approaching the CRB and a smaller MSE. In addition, it can be seen from Fig. <ref> that the Closed-form CRT can reach the achievable performance of the algorithm at the same low SNR as the CCMLE, but its performance is still worse than the CCMLE. This is due to the fact that both the CCMLE and the Closed-form CRT are derived from the robust CRT problem, but the latter does not take into account the cases that the remainder noise variances vary in different sample intervals, which results in a performance degradation that deviates from the CRB. Moreover, Fig. <ref> shows that the achievable performance of the Classic CRT is always worse than Δ_MSE. This can be explained as follows. According to <cit.>, we know that the performance of the Classic CRT is upper bounded by the maximum sample interval L_1. Therefore, based on Theorem 1, we have that the achievable performance of the Classic CRT is always worse than the CRB of the CCMLE, i.e., Δ_MSE. Furthermore, it also can be seen from Fig. <ref> that the Classic CRT requires higher SNR to reach the achievable performance compared with the CCMLE.For traditional methods, Fig. <ref> shows that Moose's method can perform much better than all the other methods in the low SNR range, i.e., SNR< 10 dB, but its estimated performance degrades rapidly in all these SNR ranges at ε_N > 1. This is because Moose's method only has an estimated range of [-1,+1] <cit.>, indicating that it can work only in a small range of ε_N.In addition, it can be seen from Fig. <ref> that the Sym.-corr. 
can reach the achievable performance at a similar SNR threshold as the CCMLE, but the achievable performance of the Sym.-corr. is still worse than Δ_MSE. Meanwhile, it can be found that the MSE of Sym.-corr. has a similar performance for most cases except when ε_N = 31.1. It is worth noting that, Fig. <ref>(f) shows that the MSE performance of Sym.-corr. is similar to Moose's method for ε_N = 31.1, i.e., estimates are almost all failures, which indicates that Sym.-corr. could not work effectively for extremely large ε_N.Furthermore, to compare the performance over a wide range of the normalized CFO, we show the MSE performance for ε_N sampled uniformly from [-N/2,N/2] in Fig. <ref>. The simulation results show a similar performance trend to that with fixed normalized CFOs. However, it is worth noting that Moose's method and Sym.-corr. perform poorly over the entire simulated SNR range. This is caused by the fact that the method in <cit.> is based on the correlation property of two symmetric sequences for IFO estimation. However, this sequence will have the same properties as the original sequence if it is cyclically shifted by N/2. Thus, this method is extremely prone to estimation ambiguity when the normalized CFO is close to N/2 or -N/2, i.e., boundaries of the estimated range, which will result in poor MSE performance.Further, we compare the MSE performance of these methods across different normalized CFO ranges and present their performances in the normalized CFO range of [0,N/2] in Fig. <ref>. We set SNR= 10 dB, which is a SNR at which most methods reach their achievable performance, and the results in Fig. <ref> reflect this. It can be seen from Fig. <ref> that the proposed CCMLE can attain the CRB with ε_N increasing from 0 to 32, which indicates that it has a full estimated range. Furthermore, Fig. <ref> shows that the Classic CRT and the Closed-form CRT also have the full estimated range, but they perform worse than the proposed CCMLE. For traditional methods, it can be seen that Moose's method could not work effectively for ε_N ≥ 1 and Sym.-corr. could not work effectively for ε_N ≥ 31, which is consistent with the results in Fig. <ref>. As a result, the CCMLE has the advantage of wider estimated range and better MSE performance compared with the benchmarks.§.§ SNR Threshold EvaluationIn this section, we first theoretically give the SNR threshold approaching the CRB according to Theorem 2. After that, the consistency of the theoretical analysis with the simulation results is presented. In addition, an efficient waveform design is expected to be achieved through theoretical analysis.First, due to the fact that 1 × 10^6 trials are carried out for different methods, it is reasonable to consider an arbitrarily small number δ=1× 10^-6 for Theorem 2. Hence, we can obtain x_δ≈ 4.9 from TABLE <ref> given δ=1× 10^-6. Furthermore, according to (<ref>) and corresponding parameter configurations, we can obtain η_th = 9.3dB. At the same time, it can be found from Fig. <ref> and Fig. <ref> that the SNR thresholds approaching the CRB for the proposed CCMLE fall into the range (9, 10)dB, which is consistent with the 9.3dB derived from Theorem 2.Moreover, it can be found that the MSE values for low SNRs are always especially large, which is essentially caused by the estimation error of the integer part. For example, in Fig. 
<ref>, the MSE of the CCMLE is far greater than 1 when the SNR is less than 6 dB and even reaches an order of magnitude of 10^2 when the SNR is close to 0 dB. Note that the MSE deviates by orders of magnitude from the theoretical value only when there is an IFO error. In addition, the number of IFO errors decreases with increasing SNR until the CRB can be approached without IFO errors. As a result, we can use the IFO error rate (IER) to reflect the probability of approaching the CRB for a given SNR. In the simulation, the IER is computed as follows: IER=N_IE/N_all, where N_all represents the total number of trials and N_IE denotes the number of occurrences of |ε̂_N-ε_N|>1. Note that the IER can be regarded as the arbitrarily small number δ in Theorem 2, based on which the corresponding SNR threshold η_th can be obtained. The simulated IER curve vs. SNR and the theoretical SNR threshold η_th are presented in Fig. <ref>. It can be seen that the theoretical SNR thresholds obtained from Theorem 2 are slightly lower than those from the simulation, which is caused by the difference between the exact distribution of the phase error and the assumed wrapped normal distribution. In addition, for a better comparison, we present the corresponding results in TABLE <ref>, where η_sim represents the SNR threshold obtained from simulations. It is worth noting that the differences between the theoretical SNR thresholds and the simulated ones are significant at high IER (≥ 0.001), but reduce to the range of 0.5 dB to 0.2 dB as the IER decreases from 1× 10^-3 to 1× 10^-6. This is due to the fact that the exact distribution of the normalized CFO error, i.e., the von Mises distribution, becomes closer to the wrapped normal distribution with decreasing variance <cit.>, namely, with increasing SNR. As a result, given a sufficiently small IER, usually less than or equal to 0.001, we can estimate the approximate SNR threshold of the proposed CCMLE in advance based on the parameter configurations, without expensive simulations, which enables an efficient waveform design.

§.§ Parameter Configuration Evaluation
In this section, we first theoretically analyze the performance for different values of N according to the configuration guidelines in Section-<ref>. Next, we carry out simulations to validate the effectiveness of the analysis. Finally, we outline the impact on performance if the configuration guidelines are not followed. Specifically, we set the number of subcarriers to be N=128, 256, 512, respectively, and ε_N is still sampled uniformly from [-N/2,N/2]. According to the guidelines in Section-<ref>, subject to the constraint that the maximum sample interval L_1 be less than N, we have the parameter configurations shown in TABLE <ref>, where η_th is obtained with δ=1× 10^-6. Note that, based on (<ref>), Δ_MSE is inversely proportional to Σ_L=∑_i=1^KL_i^3 for a given SNR. Thus, we can use Σ_L to evaluate the theoretical MSE. Next, we further confirm the parameter configurations in the following:
* For N=128, as both Σ_L and η_th favor the first parameter configuration, we set the parameters to be Γ_1=2, Γ_2=3, Γ_3=5, Γ_4=7.
* For N=256, the theoretical MSE for the second parameter configuration has an order-of-magnitude increase compared to the first one (about 12 times), but only a 1.5 dB loss in η_th.
Hence, we prefer to set parameters to be Γ_1=11,Γ_2=13,Γ_3=17.* For N=512, the η_th for the first parameter configuration has a 2.4 dB advantage compared to the second one, but the degradation of theoretical MSE is far less than an order of magnitude (about 2 times). Hence, we prefer to set parameters to be Γ_1=3,Γ_2=5,Γ_3=7,Γ_4=11. The corresponding simulation results of MSE performances for different parameter configurations are shown in Fig. <ref>. It can be seen that the SNR threshold for N=128,256,512 falls into the range (6, 7), (7, 8), (4, 5) dB, respectively, which is consistent with the results that η_th=6.1, 7.6, 4.5 dB in TABLE <ref>.In addition, to illustrate the impact of not following the configuration guidelines in Section-<ref>, we carry out experiments with an alternative parameter configuration for N=512. Specifically, we set Γ_1=2, Γ_2=5, Γ_3=7, Γ_4=13, corresponding to L_1=455, L_2=182, L_3=130, L_4=70. In such case, Γ_i(i=1,…,4) is no longer 4 consecutive co-prime numbers, but the sample intervals L_i are still large enough, where Σ_L=1.0×10^8 is slightly larger than Σ_L=7.5×10^7. Besides, according to Theorem 2, we can obtain η_th = 7.7dB, which is 3.2 dB higher than 4.5 dB.Accordingly, the experimental results are presented in Fig. <ref>, where the results for the alternative parameter configuration are marked with dash-dot line and “(P2)”. It can be seen that the MSE performance of the CCMLE is similar for two parameter configurations in the absence of IFO errors, i.e., SNR≥ 8 dB, due to the fact that Σ_L is guaranteed to be of approximately the same magnitude in both cases. However, the experimental results show that the CCMLE loses 3 dB of SNR threshold if the parameters are not configured according to the configuration guidelines in Section-<ref>, which is consistent with the above analysis. Moreover, it is worth noting that there are two other significant differences.First, there is a significant increase in the performance gap between the CCMLE and the Closed-form CRT in Fig. <ref>, which is due to the fact that the differences among L_i^3 become more significant for the alternative parameter configuration. According to (<ref>), we know that the more significant differences among L_i^3 will lead to more significant differences in variance for different sample intervals. Therefore, the case of the alternative parameter configuration deviates farther from the assumption of equal variance in the Closed-form CRT <cit.>, which results in the MSE performance further away from the CCMLE.Second, the achievable performances for the CCMLE and the Classic CRT are much closer, which is due to the fact that the variance of the maximum sample interval is far smaller than that of the other sample intervals. Based on (<ref>) and (<ref>), it can be found that Δ_MSE(M) and σ_1^2 will be very close to each other under the condition that L_1^3 is much greater than L_i^3 (i=2,…,K), i.e., the variance of the maximum sample interval is far smaller than that of the other sample intervals. Therefore, the CCMLE and the Classic CRT can achieve similar performance in high-SNR range, i.e., SNR≥ 12 dB. However, the enlarged diagram in Fig. <ref> shows that the CCMLE still slightly outperforms the Classic CRT for SNR≥ 12 dB, which is consistent with the Theorem 1. § CONCLUSIONWe have proposed the CCMLE method for joint integer and fractional CFO estimation for high-mobility OFDM systems. 
This approach enabled a straightforward calculation of the CFO by utilizing various estimates from different ranges without adding significant complexity. The theoretical analyses for the proposed method were presented, including the MSE performance and the SNR threshold approaching the CRB. Furthermore, based on the results of theoretical analysis, we presented the guideline for selecting the parameter configuration to accommodate a better MSE performance and a lower SNR threshold. Finally, extensive experiments demonstrated the consistency with the theoretical analysis and showed that the proposed CCMLE method offers better performance than the baseline schemes. § PROOF OF THEOREM 1 Proof: By substituting the weights w_i defined in (<ref>) into (<ref>), we can simplify (<ref>) as[ Δ_MSE(M) = 1/∑_i = 1^K 1/σ _i^2. ]Since σ^2_i>0 for i=1,…, K, we have[ ∑_i = 1^K 1/σ _i^2 > 1/σ _j^2, j=1,…,K. ] Clearly, we have[ Δ_MSE(M) < σ _j^2, j=1,…,K, ]which indicates that Δ_MSE(M) is smaller than the minimum among σ^2_i (i=1,…, K), i.e., Δ_MSE(M) < min{σ^2_i}.Next, substitute Γ, ε by MΓ, ε_M in (<ref>) and plug it into (<ref>), we can obtain the following sample vector:[ Z = [ Z_1,...,Z_K ], ]where[ Z_i= L_ie^j 2π L_i/MΓε_M + ω_L_i, ]and ω_L_i is the complex Gaussian noise sample with mean zero and variance 2L_iσ^2. Note that σ^2 represents the noise variance of the received signal r(n). According to <cit.>, the joint PDF of the elements of the sample vector Z is given by[ f(Z;ε _M); = ( 1/2π)^K∏_i = 1^K 1/σ _L_i^2exp{ - ∑_i = 1^K ( X_i - μ _i)^2 + ( Y_i - ν _i)^2/2σ _L_i^2}, ]where X_i and Y_i represent the real and imaginary parts of the sample Z_i, respectively. σ _L_i^2=L_i σ^2 represents the noise variance of X_i and Y_i, and[ μ _i = L_icos( 2π L_i/MΓε_M),; ν _i = L_isin( 2π L_i/MΓε_M). ]Then, the elements of the Fisher Information Matrix (FIM) J can be derived as follows <cit.>:[ J_mn = ∑_i = 1^K 1/σ _L_i^2[ ∂μ _i/∂α _m∂μ _i/∂α _n + ∂ν _i/∂α _m∂ν _i/∂α _n], ]where α_m (m=1,2,…) are unknown parameters.Note that the only parameter to be estimated in (<ref>) is ε_M, i.e., α_1=ε_M. Therefore, the CRB of ε_M is given by <cit.>var[ε̂_M] ≥ J_11^-1 = (∑_i = 1^K 4π ^2L_i^3/σ ^2 M^2Γ ^2)^-1. Specifically, from (<ref>) and (<ref>), we know that the SNR of the received signals can be represented as η=1/σ ^2. Hence, based on (<ref>) and (<ref>), we havevar[ε̂_M]≥1/∑_i = 1^K 4π ^2L_i^3 ·η/M^2Γ ^2 =1/∑_i = 1^K 1/σ _i^2.This proves the theorem.▪§ PROOF OF LEMMA 1 Proof: * Case I: S ∅ ,S̅∅ Letf(x)=1/x+1/a-x, 0<x<a,where a is a constant. Then, we have the derivative of f(x) asf'(x)=-1/x^2+1/(a-x)^2, 0<x<a.Clearly, we havef'(x)>0,ifa/2<x<a,f'(x)<0,if 0<x<a/2.Therefore, f(x) is monotonically decreasing on (0,a/2) and is monotonically increasing on (a/2,a).LetP = {∑_Δε_M_i∈ SL_i^3}.Based on the fact that S is the subset of U, and S̅ is the complement of S in U, leta-x= ∑_Δε_M_j∈S̅L_j^3.where x ∈ P anda=∑_i=1^K L_i^3.As a result, we haveξ_Ψ=f(x), x ∈ P,where the constant a is defined by (<ref>).Note thatL_1 > ⋯ > L_K>0. Hence, according to (<ref>), (<ref>) and (<ref>), we havex_min = L_K^3<a/2,x_max = ∑_i=1^K-1L_i^3>a/2.Recall that f(x) is monotonically decreasing on (0,a/2) and is monotonically increasing on (a/2,a). Thus, we havef(x_min)≥ f(x),if x<a/2,f(x_max)≥ f(x),if x>a/2for x ∈ P.Clearly, for x_1,x_2 ∈ P, if x_1 + x_2 = a, we havef(x_1)=f(x_2). According to (<ref>), we have x_min + x_max = a. 
Hence,f(x_min)=f(x_max).Letξ_Ψ^*=max{ξ_Ψ}.Based on (<ref>), (<ref>) and (<ref>), we have that ξ_Ψ^* is the maximum value of f(x), given byξ_Ψ^*=f(x_min)=f(x_max).That is,ξ_Ψ^*=1/L_K^3 + 1/∑_j KL_j^3. * Case II: S = ∅orS̅ = ∅ If S or S̅ is an empty set, according to (<ref>) and (<ref>), we haveξ_Ψ=1/∑_i=1^K L_i^3 < 1/L_K^3<ξ_Ψ^*.This proves the lemma.▪§ PROOF OF THEOREM 2Proof: According to <cit.>, we have that the normalized CFO estimation error of CCMLE methods Δε_M=∑_i = 1^K w_iΔε_M_i holds, i.e., the MSE Δ_MSE(M) = ∑_i = 1^K w_i^2σ _i^2 approaches the CRB holds, if and only if[ | Ψ| < M /. - 2 ]holds for any subset S of set U={Δε_M_1, …,Δε_M_K}, where Ψ is defined as <cit.>Ψ=∑_Δε_M_i∈ Sw_iΔε_M_i/∑_Δε_M_j∈ Sw_j- ∑_Δε_M_i∈S̅w_iΔε_M_i/∑_Δε_M_j∈S̅w_jand S̅ is the complement of S in U.Therefore, Δ_MSE(M) approaching the CRB with a probability of at least 1-δ meansp( | Ψ| < M /.- 2) > 1- δ,i.e., p( | Ψ| ≥M /.- 2) < δ. Furthermore, according to (<ref>), since Ψ is the linear combinationof Δε_M_i that follows the wrapped normal distribution, Ψ also follows the wrapped normal distribution <cit.>. Hence, the probability p( | Ψ| ≥M /.- 2) can be obtained by integrating the PDF of Ψ, i.e.,p( | Ψ| ≥M /.- 2)= 2∫_M /.- 2^Af_Ψ(x)dx,where A ≤ MΓ_1 is the maximum of the wrapped normal distributed random variables. f_Ψ(x) is the PDF of the wrapped normal distribution with mean zero and variance σ_Ψ^2, represented as <cit.>f_Ψ(x) = 1/σ _Ψ√(2π)∑_k =- ∞^ + ∞exp{ - (x + 2kA)^2/2σ _Ψ ^2},where -A ≤ x <A and k is an integer.Let t=x + 2kA, we can rewrite (<ref>) as the integral of the normal distribution over the interval Σ, as follows:[ p( | Ψ| ≥M /.- 2); = 2/σ _Ψ√(2π)∫_M /.- 2^A∑_k =- ∞^ + ∞exp{ - (x + 2kA)^2/2σ _Ψ ^2}dx; =2/σ _Ψ√(2π)∫_Σexp(- t^2/2σ _Ψ ^2)dt. ]Based on the symmetry of normal distribution, Σ can be further expressed asΣ =[M/2,2A-M/2]∪[M/2+2A,4A-M/2]∪⋯. Therefore, the probability p( | Ψ| ≥M /.- 2) in (<ref>) can be simplified as[p( | Ψ| ≥M /.- 2); =2 Q( M/2σ _Ψ) - 2 Q( 2A/σ _Ψ - M/2σ _Ψ) +; 2 Q( 2A/σ _Ψ+M/2σ _Ψ) - 2 Q( 4A/σ _Ψ-M/2σ _Ψ) +⋯; ≈ 2Q(M/2σ _Ψ), ]where Q(t) is Q-function defined byQ(t)=1/√(2π)∫_ t ^+ ∞exp(- t^2/2)dt. According to (<ref>) and (<ref>), the variance of Ψ can be represented as a function with respect to SNR, as follows:σ_Ψ^2(η) =1/∑_Δε_M_i∈ S1/σ _i^2 + 1/∑_Δε_M_j∈S̅1/σ _j^2=M^2Γ^2/4π^2ηξ_Ψ,where ξ_Ψ is given by (<ref>).Recall that | Ψ| < M /. - 2 holds for any subset S of set U, which means that p( | Ψ| ≥M /.- 2) < δ holds for any σ_Ψ^2(η) under a given η. In addition, note that p( | Ψ| ≥M /.- 2) increases as σ_Ψ^2(η) increases.Thus, the above condition can be equivalently converted to p( | Ψ| ≥M /.- 2) < δ holds for max{σ_Ψ^2(η)} under a given η, where max{σ_Ψ^2(η)} represents the maximum of σ_Ψ^2(η).Meanwhile, based on (<ref>) and (<ref>), for a given η, the maximum of σ_Ψ^2(η) is given bymax{σ_Ψ^2(η)}=M^2Γ^2/4π^2ηmax{ξ_Ψ},where max{ξ_Ψ} represents the maximum value of ξ_Ψ. As a result, the problem of finding the maximum value of the variance σ_Ψ^2(η) is simplified to the problem of determining the maximum value of ξ_Ψ defined by (<ref>).Let max{ξ_Ψ}=ξ_Ψ^*, according to Lemma 1 in Appendix <ref>, we have that ξ_Ψ^* is given by (<ref>). Letδ=2Q(M/2σ^max _Ψ(η))=2Q(x_δ),whereσ^max _Ψ(η)=√(max{σ_Ψ^2(η)}). According to the monotonicity of the Q-function, we haveM/2σ^max _Ψ(η)=x_δ,where the value of x_δ can be obtained by solving (<ref>) given an arbitrarily small number δ. In addition, some typical approximate values of x_δ corresponding to δ are presented in TABLE <ref>. 
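For instance, for δ=1×10^-6, solving 2Q(x_δ)=δ gives x_δ=Q^-1(δ/2)≈4.9, consistent with the typical values in TABLE <ref>.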
Furthermore, the variance of Ψ decreases as the SNR increases, which leads to the integral result of (<ref>) being even smaller. Therefore, based on (<ref>), (<ref>), (<ref>) and (<ref>), we can conclude that p( | Ψ| ≥ M/2) < δ holds for η≥Γ^2 x_δ^2 ξ_Ψ^*/π^2, where ξ_Ψ^* is given by (<ref>) and x_δ can be obtained by solving (<ref>) given an arbitrarily small number δ. Let
η_th = Γ^2 x_δ^2 ξ_Ψ^*/π^2.
Then, we have
p( | Ψ| < M/2) > 1- δ
for η≥η_th, i.e., the MSE of the proposed CCMLE method approaches the CRB with a probability of at least 1-δ for η≥η_th, where a closed-form η_th is given by (<ref>). This proves the theorem.▪
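As a concrete check of Theorem 2 against the simulation settings of Section <ref>: with Γ_1=3, Γ_2=5, Γ_3=7 (so Γ=105 and (L_1,L_2,L_3)=(35,21,15)), (<ref>) gives ξ_Ψ^* = 1/15^3 + 1/(35^3+21^3) ≈ 3.2×10^-4, and δ=1×10^-6 (hence x_δ≈4.9) yields η_th = Γ^2 x_δ^2 ξ_Ψ^*/π^2 ≈ 8.5, i.e., about 9.3 dB, which matches the threshold reported there.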
http://arxiv.org/abs/2312.16386v1
{ "authors": [ "Wei Huang", "Jun Wang", "Xiaoping Li", "Qihang Peng" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231227031758", "title": "Maximum Likelihood CFO Estimation for High-Mobility OFDM Systems: A Chinese Remainder Theorem Based Method" }
X Modality Assisting RGBT Object Tracking Zhaisheng Ding January 14, 2024 ========================================= We devise a deterministic algorithm for minimum Steiner cut which uses polylogarithmic maximum flow calls and near-linear time outside of these maximum flow calls. This improves on Li and Panigrahi's (FOCS 2020) algorithm which takes O(m^1+ϵ) time outside of maximum flow calls. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum flow time up to polylogarithmic factors, given any black-box deterministic maximum flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices which generalizes all existing s-strong partitioning methods and may have future applications. § INTRODUCTIONThe minimum cut (or “min-cut”) of a weighted graph is the smallest weighted subset of edges whose deletion disconnects the graph. The problem of finding the minimum cut is one of the most fundamental problems in combinatorial optimization, and theoretical computer science as a whole. It also has important applications such as network optimization <cit.> and image segmentation <cit.>. Thus, finding faster algorithms for this problem will have far-reaching applications for a wide variety of fields. Recently there has been a large amount of groundbreaking work in the field, including deterministic almost-linear time algorithms for both minimum cut <cit.> and maximum flow <cit.>. §.§ Minimum Steiner Cut Background A classic extension of the min-cut problem is the minimum Steiner cut (or “Steiner min-cut”) problem. In this problem, we are given an undirected, weighted graph G = (V,E) and a subset T ⊆ V of terminals. A Steiner cut is subset of edges whose removal disconnects at least one pair of terminals in the graph. The minimum Steiner cut is the Steiner cut with the minimum total weight of cut edges. This problem generalizes both s-t minimum cut (T = {s,t}) and global minimum cut (T=V), and is therefore a fundamental problem in graph algorithms.The classical algorithm to solve minimum Steiner cut uses |T| - 1 max-flow computations. Li and Panigrahi <cit.> give a randomized algorithm which reduces minimum Steiner cut in near-linear time to just polylogarithmic number of max-flow computations. They additionally give a deterministic algorithm which takes, for any parameter ϵ>0, (log n)^O(1/ϵ^4) max-flow calls with O(m^1+ϵ) additional running time. Given the currently known fastest deterministic maximum flow algorithm in almost-linear time <cit.>, the two results combined give an almost-linear time algorithm for global minimum cut which matches the algorithm of Li <cit.>. We remark that a very recent work <cit.> has improved the running time of deterministic global minimum cut to near-linear, i.e. Õ(m). However, since minimum Steiner cut is at least as hard as s-t minimum cut, which is traditionally solved through s-t max-flow, a near-linear time minimum Steiner cut algorithm remains elusive without an equally fast max-flow algorithm.§.§ Our ContributionsWe show that a deterministic, near-linear time max-flow algorithm is the only obstacle towards obtaining a deterministic, near-linear time minimum Steiner cut algorithm. 
More precisely, we introduce a new deterministic algorithm which finds the minimum Steiner cut in polylogarithmic s-t max flow calls and near-linear additional processing time.Given a undirected, weighted graph G=(V,E) with n vertices and m edges, polynomially bounded edge weights, and a set of terminal vertices T ⊆ V, there is a deterministic minimum Steiner cut algorithm that makes polylog(n) maximum flow calls on undirected, weighted graphs with O(n) vertices and O(m) edges, and runs in O(m) time outside of these maximum flow calls. Specifically, a hypothetical deterministic near-linear time algorithm for s-t max-flow implies a deterministic near-linear time algorithm for minimum Steiner cut as well. This was not known from the work of <cit.>, given the additional m^1+ϵ running time in their deterministic algorithm. Terminal-Based Partitioning Methods. Expander decompositions have been a powerful tool for solving minimum cut problems in recent years <cit.>. However, the current state-of-the-art deterministic expander decomposition takes almost-linear time <cit.>, and it is an open problem whether this can be improved.For deterministic near-linear time algorithms, a key tool is the decomposition into s-strong clusters used by recent minimum cut algorithms on simple graphs <cit.>. Our main technical contribution is to extend this decomposition framework to general weighted graphs and apply it towards a decomposition that specifically parititions clusters of terminals with a boundary proportional to the size of the terminal set. We state this result informally below.Given a undirected, weighted graph G=(V,E) with n vertices and m edges, polynomially bounded edge weights, a set of terminal vertices T ⊆ V, sparsity parameter 0 < ψ < 1, and cut size parameter δ > 0, there is a deterministic algorithm which returns a vertex partitioning of clusters V_1, V_2,...,V_ℓ such that the following hold: * For every cluster, any cut with weight less than δ splits the cluster with at mostterminals on at least one side. * For every cluster, any cut with weight less than δ either does not split the cluster, or has at least ≥δ/ weight of cut edges inside the cluster.* The total weight of edges between clusters at most O(δ·|T|).The algorithm makes O(log^2 n) maximum flow calls on undirected, weighted graphs with O(n) vertices and O(m) edges, and runs in O(m) time outside of these maximum flow calls.Properties 1 and 3 together directly generalizes the notion of s-strong partitions to the terminal regime. Property 2 provides an additional guarantee that is useful for the Steiner algorithm of <cit.> and can be achieved with only polylogarithmic loss elsewhere in the decomposition. § PRELIMINARIESIn this paper, all graphs are undirected and weighted, and for simplicity, all weights are assumed to be polynomially bounded.We begin by introducing standard definitions and tools from previous works that we will utilize for our algorithm, as well as defining our new modification of s-strong clusters to terminals. We use a standard definition of induced subgraphs using self-loops for boundary edges, which preserves degrees of vertices in subgraphs. We denote the induced subgraph of vertex set S ⊂ V on graph G(V, E) as G[S]. §.§ Sparsity and StrengthSparsity is a specific measure of how connected a graph is. For a cut (U, U), where U = V ∖ U, we define ∂ U = ∂U as the boundary of the cut, which is the set of edges between U and U. 
The sparsity of a cut (U,U) is defined asΨ(U) = w(∂ U)/min{|U|, |U|} = Ψ(U) We also introduce the new definition of terminal-sparsity for Steiner cuts with the terminal set T.The terminal-sparsity of a cut (U,U) is defined asΨ_T(U) = w(∂ U)/min{|U ∩ T|, |U∩ T|} = Ψ_T(U) We use the terms ψ-sparse and ψ-terminal-sparse to refer to cuts with sparsity and terminal-sparsity < ψ, respectively.The concept of strength, introduced by Kawarabayashi and Thorup <cit.>, is a relaxed notion of edge expanders. At a high level, a vertex subset U⊆ V is s-strong if every cut (C,C) of weight at most δ satisfies min{vol(C∩ U),vol(C∩ U)}≤ s, where the volume vol(S) is the sum of degrees of vertices in S. In <cit.>, the parameter δ is chosen to be the minimum degree of all vertices, which serves as an upper-bound on the min-cut.One of our main conceptual contributions is translating s-strength to the terminal setting, as well as providing necessary generalizations to handle the Steiner min-cut problem. First, we work from a sparsity viewpoint, which bounds the minimum cardinality of intersection min{|C ∩ U|, |C∩ U|} instead of volume, which is more handy when we start introducing terminals. Second, we can no longer choose δ as the minimum degree, since it no longer upper bounds the Steiner min-cut. One natural choice is the minimum (weighted) degree of all vertices in set S, but instead, for technical reasons, we set δ closer to the minimum Steiner cut λ itself. For now, we keep δ as a free parameter and provide our s-strong guarantees in terms of δ. Finally, we need an additional requirement that if the cut (C,C) cuts any edges inside a cluster, then it must cut sufficiently many such edges, and we introduce another parameter γ to capture this condition. A vertex subset U ⊆ V (called a cluster) is (s, δ, γ)-strong in G if every cut (C, C) of graph G with at most weight δ satisfies min{|C ∩ U|, |C∩ U|}≤ s, and moreover, if min{|C ∩ U|, |C∩ U|}>0 then w(∂_G[U]C) ≥γ·δ. Next, we introduce the notion of (s, δ, γ)-terminal-strength, where the “size” of a set of vertices is only determined by the number of terminals that it contains. This is specifically necessary to deal with cuts separating terminal vertices as opposed to just regular ones. A vertex subset U ⊆ V (called a cluster) is (s, δ, γ, T)-terminal-strong in G if every Steiner cut (C, C) of graph G with at most weight δ satisfiesmin{|C ∩ U ∩ T|, |C∩ U ∩ T|}≤ s, and moreover, if min{|C ∩ U ∩ T|, |C∩ U ∩ T|} > 0 then w(∂_G[U]C) ≥γ·δ. For the rest of the paper we sometimes omit the “in G” from the definition whenever the graph G is clear from context. An important property of both (s, δ, 0)-strength and terminal strength is that the property is inherited by subgraphs, i.e. if G(V,E) is (s, δ, 0)-strong or terminal-strong, then G[A] is as well for all A ⊆ V. This property holds for s-strength <cit.>, and is straightforward to verify that the same is true for our (s, δ, 0)-strength definitions.Lastly we also define terminal-strong decompositions, which are analogous to s-strong and expander decompositions, except that we split our graph into (s, δ, γ)-terminal-strong components as opposed to s-strong sets and expanders, respectively. A set of disjoint vertex clusters V_1, V_2,...,V_ℓ is a (s, δ, γ, T)-terminal-strong decomposition if each cluster V_i is (s, δ, γ, T)-terminal-strong, and if the total weight of edges between clusters is at most O(δ·|T|). 
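For intuition, the following small checker (an illustrative sketch with our own naming, not a subroutine of the algorithm) spells out when a single candidate cut witnesses that a cluster U is not (s, δ, γ, T)-terminal-strong; U satisfies the definition exactly when no vertex set C makes the function return True.

def violates_terminal_strength(edges, U, T, C, s, delta, gamma):
    # edges: dict {frozenset({u, v}): weight} for an undirected graph G = (V, E), u != v
    # U: cluster, T: terminals, C: one side of a candidate cut (all given as vertex sets)
    cut_weight = sum(w for e, w in edges.items() if len(e & C) == 1)
    if cut_weight > delta:
        return False                                # only cuts of weight at most delta are constrained
    inside, outside = U & T & C, (U & T) - C
    if not inside or not outside:
        return False                                # the cut does not split the terminals of U
    if min(len(inside), len(outside)) > s:
        return True                                 # too many terminals of U on both sides
    # weight of cut edges with both endpoints in U, i.e. the boundary inside G[U]
    w_inside = sum(w for e, w in edges.items() if e <= U and len(e & C) == 1)
    return w_inside < gamma * delta                 # too little cut weight inside the cluster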
Our main new technical tool is a fast algorithm for computing a (s,δ,γ,T)-terminal-strong decomposition with a small bounded weight of intercluster edges (<ref>), which is presented in detail in <ref>. At a high level, we use a (non-terminal) (s, δ, γ)-strong decomposition in order to devise an algorithm which finds a (s, δ, γ, T)-terminal-strong decomposition through the cut-matching game framework of Khandekar, Rao, and Vazirani <cit.>, which we outline in the next subsection. Finally, given such a decomposition, we use the framework of <cit.>, replacing their expander decomposition step with our (s,δ,γ)-strong decomposition. We leave the details to <Ref>.§.§ Cut-Matching GameWe start with an overview of the cut-matching game. * The cut player chooses a bisection (S, S) of the graph H_t-1 based on a given strategy. * The matching player chooses a perfect matching of the bisection based on a given strategy. * The cut player adds the edges of the perfect matching to graph H_t-1, forming graph H_t The game continues until graph H_t is an edge-expander. The key insight of the cut-matching game is that there is always a strategy for the cut player that finishes the game in few rounds. We use the cut-matching game to reduce our problem from one with terminals ((s, δ, γ)-terminal-strong decomposition) to one without terminals ((s, δ, γ)-strong decomposition). We then adapt the (s, δ, 0)-strong decomposition algorithm of <cit.> to obtain an (s, δ, γ)-strong decomposition for large enough γ. § MINIMUM STEINER CUT ALGORITHM OVERVIEWThe following is an overview of our algorithm (<ref>) to solve minimum Steiner cut on an undirected, weighted graph G deterministically in near-linear time (i.e. O(m)) plus polylogarithmic maximum flow calls. Throughout, we assume that we have guessed the value of the Steiner mincut up to factor 2 (which we denote λ̃) by, say, guessing all powers of 2. (Incorrect guesses may return an overestimate of the minimum Steiner cut, but we can take the minimum cut ever found at the end.) * We use the “unbalanced case” of <cit.> to find the Steiner minimum cut (C,C) if min{|C∩ T|,|C∩ T|}≤(n). This algorithm is described in <ref>. * In the “balanced case”, we find a (s, δ, γ)-terminal-strong decomposition on the graph. To do this, we use the cut-matching game on a graph H containing only the terminals of the original graph, with s=(n), δ=λ̃, and γ=1/(n).This algorithm is described in <Ref> (<ref>). At a high level, we use a (non-terminal) (s', δ', γ')-strong decomposition to find a (s, δ, γ)-terminal-strong decomposition for appropriate parameters s',δ',γ',s,δ,γ.* Using our (s, δ, γ)-terminal-strong decomposition, we find a set T' ⊆ T and |T'| ≤ |T|/2 such that the minimum Steiner cut of G with terminal set T' is the same as with terminal set T. In this case we recursively apply our minimum Steiner cut algorithm on graph G with terminal set T' (<ref>).We give a high level analysis of the runtime, which we formally prove in the following sections. Each call of terminal-decompositiontakes poly-logarithmic max-flow computations and at most near-linear time with respect to the graph outside of the max-flows. Since the sparsification procedure halves the terminal set each iteration, it adds at most a log n extra factor in runtime as well. Along with the extra log n factor for guessing λ, this gives us our claimed runtime. 
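The outer loop just described can be summarized by the following sketch (our own naming and control flow; the three callbacks stand for the unbalanced-case routine, the terminal-strong decomposition, and the sparsification step of the later sections, all treated here as black boxes, and k stands in for the polylog(n) threshold).

def minimum_steiner_cut(edges, T, unbalanced_case, terminal_decomp, sparsify, k=64):
    # edges: list of (u, v, w) for an undirected weighted graph; T: terminal set.
    total_w = sum(w for _, _, w in edges)            # weights are polynomially bounded
    best = float('inf')
    lam = 1
    while lam <= 2 * total_w:                        # guess the Steiner min-cut value up to a factor of 2
        U = set(T)
        while True:
            # correct whenever U is k-unbalanced; otherwise only an overestimate
            best = min(best, unbalanced_case(edges, T, U, k))
            if len(U) <= 2 * k:                      # U can no longer be k-balanced
                break
            clusters = terminal_decomp(edges, U, delta=lam)  # terminal-strong decomposition w.r.t. U
            U = sparsify(edges, U, clusters)         # |U| at least halves; some Steiner min-cut still splits U
        lam *= 2
    return best

Since |U| at least halves in every inner iteration and there are O(log n) guesses of λ̃, this skeleton reflects the polylogarithmic overhead claimed above.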
§ TERMINAL DECOMPOSITION USING CUT-MATCHING GAME The goal of the cut-matching game is to try to certify that the entire vertex set V is (s, δ, γ, T)-terminal-strong in G by iteratively constructing our cut-graph H to be (s, O(δ), γ)-strong. This may not always be possible, butthroughout the cut-matching game, the algorithm may also verify that V is (s,δ,γ,C)-terminal-strong for a subset C⊆ T with |C|≥2|T|/3. In that case, we apply a trimming procedure similar to <cit.>. Otherwise if this is also not possible, we are able to find a balanced sparse cut in the cut-graph H. We then run a max-flow between the two terminal sets in G, which outputs either a large flow or (by duality) a small cut. In the former case, we add a corresponding large (fractional) matching to the cut-graph H. In the latter case, we immediately find a balanced terminal-sparse cut in G, at which point we recursively decompose the two sides.We give high level overviews of the cut and matching player strategies, before going through the formal procedure.Cut Player. The cut player attempts to find a sparse, balanced cut (U,T∖ U) in the cut graph, which ensures that we make sufficient progress when the matching player creates a matching. If the cut player fails to find a cut, we terminate the cut-matching game and prove that the original graph G satisfies desirable properties. Cut Player Strategy on current cut-graph H* Find an (s, δ, γ)-strong decomposition on cut graph H* If there exists a cluster with size greater than 2|T|/3, we terminate the cut-matching game. We trim the cluster according to <Ref> and certify the cluster U as (s, δ, γ)-terminal-strong. We then recursively apply <ref> on the smaller side.* Otherwise, we merge the clusters into two groups that each contain between 1/3 and 2/3 fraction of all vertices of H (this is always possible, see <ref>). Denote the bipartition as (C, C). Matching Player. The goal of the matching player is to add edges in the cut graph corresponding to the maximum possible flow in G from one side of the bipartition to the other. They do this by running a maximum flow algorithm across the bipartition. If a large flow is successfully routed, the matching player adds edges into the cut graph. Otherwise, a terminal-balanced cut is found, and we terminate the cut-matching game and recursively apply our terminal-strong-decomposition algorithm on both sides of the cut. Matching Player Strategy on bipartition C of cut-graph H * We calculate a max-flow on graph G between the terminals in C and T ∖ C using <ref>. If the flow has value at least |T|/6 ·δ·ψ, we call the flow a “large flow”. Otherwise, the flow has value less than |T|/6 ·δ·ψ, so we call the corresponding cut a “small cut”.* If we find a large flow, we add a large matching into cut graph H: we break down the flow into paths and add edges between vertices in graph H with the same corresponding weights as the flow paths. * If we find a small cut, we certify the minimum cut found as a terminal-balanced, terminal-sparse cut. 
We stop the cut-matching game and recursively apply our terminal-decomposition algorithm on both sides.We formally define strategies for the cut and matching players in this game in <ref>.Our guarantee given by our cut-matching game method is stated as the following:Given an undirected weighted graph G and parameters δ>0, ψ=1/(n), <ref> runs in time Õ(m) plus (n) calls to maximum flow, and outputs one of the following: * An (O(log^9|V|/ψ^5), δ, Ω(ψ^5/log^9|V|),T)-terminal-strong cluster U with |U∩ T|≥|T|/3 such that U is either empty or ψ·δ-terminal-sparse, or* A ψ·δ-terminal-sparse cut (U, U) with |U∩ T|,|U∩ T|≥|T|/6The correctness of the Cut Player strategy is shown in <ref>, matching player strategy in <ref>, and the termination within L_max rounds is proved in <ref>. §.§ Cut Player The lemma below for α=L_max/ψ shows that <Ref> of <ref> can be computed efficiently. Note that we apply the lemma on graph H and vertex set T.Given any parameters δ>0 and α≤(n) and a graph G=(V,E) with total edge weight at most αδ|V|, there exists s≤ O(α^2log^2|V|) and γ=Ω(1/s) and an algorithm in Õ(|E|) time that outputs a decomposition of V into (s, αδ, γ)-strong clusters such that the total weight of inter-cluster edges is at most |V|δ/50. The first step is to apply the following lemma to a slightly modified graph. Given a weighted graph G=(V,E,w) and a parameter δ_0 such that δ_0≤min_v∈ V(v) and a parameter s_0≤δ_0(n), there is an algorithm that runs in Õ(|E|) time and partitions the vertex set V into components V_1,…,V_k such that * For any cluster V_i and any cut (S,S) in G of weight at most δ_0, we have min{(S∩ V_i),(S∩ V_i)}≤ s_0. Here, (U) is the sum of weighted degrees of vertices in U. * The total weight of inter-cluster edges is at most an O(√(δ_0)log|V|/√(s_0)) fraction of the total weight of edges. Construct the graph G_0 from G as follows. For each vertex v∈ V, add a new vertex v' with an edge to v of weight αδ. This new graph has minimum weighted degree αδ. Apply the lemma above to G_0 with parameters δ_0=αδ and s_0=sαδ. The total weight of inter-cluster edges is at most O(log|V|/√(s))·αδ|V| ≤ |V|δ/100 for large enough s=O(α^2log^2|V|). Since G_0 has minimum degree δ_0, the guarantee min{(S∩ V_i),(S∩ V_i)}≤ s_0 from property 1 implies that min{|S∩ V_i|,|S∩ V_i|}≤ s_0/δ_0=s. In other words, each V_i is (s,αδ,0)-strong in G_0. Consider the partition in G obtained by removing all new vertices v'. It is straightforward to see that this partition is also (s,αδ,0)-strong in G, and the total weight of inter-cluster edges is still at most |V|δ/100.We now modify the partition so that each cluster is (s,αδ, γ)-strong by applying the lemma below to each V_i. The total weight of additional inter-cluster edges guaranteed by the lemma is at most ∑_i|V_i|δ/100≤|V|δ/100. Together with the inter-cluster edges from the first step, the total weight is at most |V|δ/12. It remains to prove the lemma below: Let C be an (s,αδ,0)-strong cluster in G and let γ=1/200α s. There is an algorithm in Õ(|E(G[C])|) time that partitions C into (s,αδ,γ)-strong clusters such that the total weight of inter-cluster edges is at most |C|δ/100.Within the context of this proof, we assign a separate identity to each edge, and we do not merge distinct edges upon contraction.The algorithm begins with H G[C] and iteratively executes the following two steps in arbitrary order whenever possible. * Contract two vertices with at least γαδ total weight of edges between them. * Remove a vertex v with weighted degree at most δ/100 in H. 
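Read as code, the two steps amount to the following deliberately unoptimized sketch (our own naming); the near-linear implementation, with the maintained degrees, incidence counts, and pairwise weights, is described at the end of this proof.

def strong_refine(adj, gamma, alpha, delta):
    # adj: dict u -> {v: weight} for H = G[C] (undirected, u != v, comparable vertex labels).
    # Returns the output clusters as sets of original vertices of C.
    groups = {u: {u} for u in adj}                   # original vertices currently contracted into u
    w = {u: dict(nbrs) for u, nbrs in adj.items()}   # current (contracted) edge weights
    clusters = []
    while True:
        pair = next(((u, v) for u in w for v in w[u]
                     if u < v and w[u][v] >= gamma * alpha * delta), None)
        if pair is not None:                         # step (1): contract a pair with heavy total weight
            u, v = pair
            for x, wt in list(w[v].items()):
                del w[x][v]                          # detach v from its neighbor x
                if x != u:
                    w[u][x] = w[u].get(x, 0) + wt    # reroute v's edges into u
                    w[x][u] = w[u][x]
            del w[v]
            groups[u] |= groups.pop(v)
            continue
        light = next((u for u in w if sum(w[u].values()) <= delta / 100), None)
        if light is not None:                        # step (2): remove a vertex of weighted degree <= delta/100
            for x in list(w[light]):
                del w[x][light]
            del w[light]
            clusters.append(groups.pop(light))       # its contracted vertices form one output cluster
            continue
        break                                        # neither step applies
    if groups:                                       # vertices still contracted into H form one final cluster
        clusters.append(set().union(*groups.values()))
    return clusters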
At the end of the proof, we show how to perform these steps in near-linear time overall.For each removed vertex, consider all original vertices in C that were contracted to that vertex, and add a new output cluster consisting of those vertices. If H is non-empty at the end of the iterative algorithm, add another cluster consisting of all vertices in C that were contracted to a vertex in H. By construction, the resulting clusters partition C. By dynamically maintaining appropriate structures, the algorithm can be implemented in Õ(|E(G[C])|) time.The bound on the weight of inter-cluster edges follows from the fact that we remove at most |C| many vertices in the algorithm, and each removal adds at most δ/100 to the total weight of inter-cluster edges.Since each cluster C_i is a subset of (s,αδ,0)-strong cluster C, cluster C_i is also (s,αδ,0)-strong. It remains to show that C_i is (s,αδ,γ)-strong. That is, given a cut (S,S) in G with w(S,S)≤αδ, S∩ C_i∅, and S∩ C_i∅, we have ∂_G[C_i](S∩ C_i)≥γαδ.First, take a set C_i consisting of all vertices contracted to some vertex v. Color the vertices in S∩ C_i black and the vertices in S∩ C_i white, and consider the contraction process starting from the set C_i and ending at v, where each step contracts an edge between two vertices in the set with weight at least γαδ. If we contract an edge whose endpoints have the same color, then assign the same color to the contracted vertex. Eventually, we contract an edge with differently colored endpoints. Each edge is included in ∂_G[C_i](S∩ C_i), and we contract edges of total weight at least γαδ. It follows that ∂_G[C_i](S∩ C_i)≥γαδ.Now take the set C_i consisting of all vertices in C that were contracted to a vertex in H, if it is non-empty. Since C_i is (s,αδ,0)-strong, we have min{|S∩ C_i|,|S∩ C_i|}≤ s, and assume without loss of generality that |S∩ C_i|≤ s. Color the vertices in S∩ C_i black and the vertices in S∩ C_i white, and consider the contraction process again. If we contract an edge with differently colored endpoints, then ∂_G[C_i](S∩ C_i)≥γαδ as before. So suppose that never happens. At the end, let B be the set of black vertices, which satisfies ∂_HB=∂_G[C_i](S∩ C_i) by construction. Also, |B|≤|S∩ C_i|≤ s since the number of black vertices can only decrease over time. Since there are no more vertex deletions, each vertex in B has weighted degree at least δ/100 in H. Since there are no more edge contractions, the total weight of edges between black vertices is at most γαδ|B|2.Suppose for contradiction that ∂_G[C_i](S∩ C_i) < γαδ, which means that|B|δ/100≤_H(B) =2w(E(H[B]))+∂_HB =2w(E(H[B]))+∂_G[C_i](S∩ C_i) <2·γαδ|B|2+γαδ≤γαδ|B|^2+γαδ. This quadratic solves to |B|∈ℕ∖[ℓ,r] for some interval [ℓ,r]. It suffices to show that ℓ≤1 and r≥ s, which would imply that |B|>s, a contradiction. To show this claim, we simply show that the inequality fails for |B|=1 and |B|=s. For |B|=1, we obtain δ/100<2γαδ which is false since γ≤1/200α. For |B|=s, we obtain sδ/100<γαδ s^2+γαδ which is false since γ≤s/s^2+1·1/100α. It follows that there is no cut (S,S) in G with w(S,S)≤αδ, S∩ C_i∅, S∩ C_i∅, and ∂_G[C_i](S∩ C_i)<γαδ.Finally, we show that we can dynamically execute steps (<ref>) and (<ref>) in Õ(|E(G[C])|) time overall. We maintain the vertex degrees, the number of edges incident to each vertex, and the total weight of edges between any two vertices, storing their values in a balanced binary tree. 
To execute step (<ref>), query the pair of vertices with maximum total weight of edges, and to execute step (<ref>), query the vertex with minimum degree. To update the maintained values over time, we perform the following. Every time a vertex is removed on step (<ref>), we remove the incident edges and update values accordingly; each removed edge induces one update in each category, which is O(|E(G[C])|) total updates overall. Suppose now that two vertices u and v are contracted, where u has at most as many incident edges as v (which can be checked by querying their maintained number of incident edges). We remove the contracted edges between u and v, and for all remaining edges incident to u, replace the endpoint u by v. This successfully implements step (<ref>) with the contracted vertex labeled v. We now show that the total number of such edge updates is at most 2mlog2m where m=|E(G[C])|. For each vertex v, let n_v be the current number of incident edges. Define the potential function∑_v : n_v>0n_vln2m/n_v ,which is at most 2mlog2m initially since ∑_vn_v=2m. The function n_vln2m/n_v is increasing in n_v in the range n_v∈[1,m], which can be verified by taking the derivative:d/dn_vn_vln2m/n_v=d/dn_vn_v(ln 2m-ln n_v)=ln 2m-(1+ln n_v)=lnm/n_v>0.Since removing vertices can only decrease n_v, doing so can only decrease the potential. Suppose now that two vertices u and v are contracted with n_u≤ n_v. After removing the contracted edges between u and v, the values n_u and n_v decrease by the same amount, so n_u≤ n_v still. In the contraction step, we update the n_u edges incident to u, and the n_uln2m/n_u and n_vln2m/n_v terms in the potential function become a single (n_u+n_v)ln2m/n_u+n_v. The net difference isn_uln2m/n_u+n_vln2m/n_v - (n_u+n_v)ln2m/n_u+n_v≥ n_u(ln2m/n_u-ln2m/n_u+n_v)≥ n_u(ln2m/n_u-ln2m/2n_u)=n_u ,where the second inequality follows from n_u≤ n_v. Hence, the potential drops by at least n_u, while the number of edge updates is n_u. It follows that the total number of edge updates is at most 2mlog2m, and each update induces one maintenance update in each category. Overall, the algorithm makes O(mlog m) maintenance updates, and each update takes O(log m) time, which is Õ(m) total as promised.Let the size of a cluster be the number of vertices in the cluster. The following lemma shows that <Ref> of <ref> can be executed efficiently. Suppose no single cluster has size greater than 2|T|/3. Then there exists a bipartition of clusters such that each group of clusters has total size in the range [|T|/3, 2|T|/3], and this bipartition can be computed in nearly linear time. We can split our proof into two cases: * There exists a cluster with size ∈ [|T|/3, 2|T|/3]:We make that cluster its own group, and all remaining clusters the second group. * All clusters have size less than |T|/3:Enumerate the clusters in an arbitrary order, and consider the shortest prefix of clusters whose total size exceeds |T|/3. The prefix without its last cluster has total size less than |T|/3, and this last cluster of the prefix has size less than |T|/3, so this prefix has size in [|T|/3, 2|T|/3]. §.§ Matching Player We introduce a key subroutine of the matching player, called CutOrFlow, which is used to both find matchings over partitions and for trimming.If <ref> returns a valid cut (U, U), then it is δ·κ-terminal sparse in G. Without loss of generality, assume that U contains at most as many terminals as V ∖ U. 
Consider the difference between the edges of ∂ U' and the edges of the cut ∂({s}) which cuts all edges adjacent to s. The edges in ∂ U' ∖∂({s}) are precisely the edges of ∂ U' originally within G, and the edges in ∂({s}) ∖∂ U' are precisely the edges between s and U ∩ T. Since ∂ U' is an s-t min-cut, we have w(∂ U' ∖∂({s})) ≤ w(∂({s}) ∖∂ U'), which is equivalent to w_G(∂ U) ≤ |U ∩ T|·δ·κ. A symmetric argument yields w_G(∂ U)≤|U∩ T|·δ·κ, and combining the two proves the lemma. <ref> proves the sparsity guarantees of <ref>, as the algorithm sets either sets κψ or κmin{γ/2s,γ/6}) ≪ψ in every CutOrFlow call. Thus every cut returned is always ψ·δ-terminal sparse in G. If |S|,|T∖ S|≥|T|/3 and flow f has value less than |T|/6 ·δ·κ, then the cut (U, U) satisfies |U|, |U| ≥ |T|/6. In graph G', the value of flow f' and cut (U', U') are equal by flow-cut duality. In particular, cut (U', U') has weight less than |T|/6 ·δ·κ. Since s has edges to S∩U' that cross the cut, and since t has edges from (T∖ S)∩ U' that cross the cut, we have |S∩U'|,|(T∖ S)∩ U'|≤|T|/6. Since |S|,|T∖ S|≥|T|/3, it follows that |S∩ U'|=|S|-|S∩U'|≥|T|/6 and |(T∖ S)∩U'|=|T∖ S|-|(T∖ S)∩ U'|≥|T|/6. In particular, |U|, |U| ≥ |T|/6. §.§.§ TrimmingIn <Ref> of <ref>, we begin with a subset C⊆ T of size at least 2|T|/3 such that V is (s,δ,γ,C)-terminal-strong in G. Our next goal is to find a cluster U that is a (O(s/γ),δ,Ω(γ/s))-terminal strong with |U∩ T|≥|T|/3. This allows us to only recurse on U which satisfies |U∩ T|≤2|T|/3, allowing for an efficient algorithm.We used a modified form of the trimming method found in <cit.>. Note that in their paper they describe a simple “Slow Trimming”, and an improved “Efficient Trimming” scheme which is much more involved by circumventing the use of exact max-flow. However, the slow trimming scheme suffices for our purposes since we are fine with maximum flow time.We begin with the following lemma which we use to prove the correctness of trimming:If a cluster S is (s, (L_max/ψ)δ, γ)-strong in the cut graph H, then V is (s, δ, γ, S)-terminal-strong in G. By construction, each edge (u,v) of weight w in the cut graph H certifies the existence of a flow of capacity 1/ψ· w in the original graph G. Since we run the cut-game for at most L_max rounds, we are able to simultaneously route flows between terminals u and v of weight 1/ψ· w(u,v) for all edges (u,v)∈ E_H with capacities scaled by at most L_max in graph G. Equivalently, scaling everything by 1/ψ, we are able to simultaneously route flows between terminals u and v of weight w(u,v) for all edges (u,v)∈ E_H with capacities scaled by at most L_max·1/ψ in graph G.We proceed with two cases. First, assume for contradiction that there exists a Steiner cut (C, C) of graph G with at most weight δ which satisfies min{|C∩ S|, |C∩ S|} > s. Consider cut (C∩ T, C∩ T) in cut graph H. Since C∩ S ⊆ C ∩ T and C∩ S ⊆C∩ T, we havemin{|(C∩ T) ∩ S|, |(C∩ T) ∩ S|}≥min{|C∩ S|, |C∩ S|} > s.The weight of cut (C∩ T, C∩ T) in H is at most a L_max·1/ψ factor greater than the amount of (scaled) flow able to be routed over cut (C, C) in graph G. In other words, w_H(C∩ T, C∩ T) ≤ (L_max·1/ψ)δ. This contradicts the assumption that S is a (s, (L_max/ψ)δ, γ)-strong cluster in the cut graph.For the second case, assume for contradiction that there exists a cut (C, C) of graph G with at most weight δ which satisfies w(∂_G C) < γ·δ and min{|C∩ S|, |C∩ S|} > 0. Similar to above, the weight of cut (C∩ T, C∩ T) in H is at most a L_max factor larger than cut (C, C) in graph G. 
Since min{|C∩ S|, |C∩ S|} > 0, ∂_H[S] C is an actual cut of H[S], and ∂_H[S] C ≤∂_H C < γ· (L_max/ψ)δ, contradicting the assumption that S is (s, (L_max/ψ)δ, γ)-strong in H.Now we introduce the main theorem of the section which shows that the cluster U is terminal-strong and contains a large fraction of terminals.If V is (s, δ, γ, S)-terminal-strong in G and |S| ≥ 2|T|/3, the cut (U, U) returned by <ref> with parameter κ=min{γ/(2s),γ/6} satisfies the property that U is (max{2/κ+s,3s}, δ, κ, U∩ T)-terminal strong in G and |U∩ T|≥|T|/3.For the rest of <ref>, we focus on proving this theorem. We split our proof into two cases, when the cut U = V, and all other cuts. Case 1: U = V. Here, we will only use the bound κ≤γ. This fact will be important later in the proof.By flow-cut duality, the flow f in G' sends full capacity along each edge into t. In particular, the value of the flow is equal to |T ∖ S| ·δ·κ. This case clearly satisfies |U ∩ T| = |T| ≥ |T|/3. We now show that in this case U is (max{2/κ+s,3s}, δ, κ, U∩ T)-terminal strong. Consider an arbitrary Steiner cut (C, C) in G of size <δ. Suppose first that min{|C∩ S|, |C∩ S|} = 0, and assume without loss of generality that C ∩ S = ∅. Each terminal in C ∩ (T ∖ S) sends full capacity into t in the flow f, so at least |C ∩ (T ∖ S)|·δ·κ flow must cross the cut C. Since w(∂ C) < δ, we obtain |C ∩ (T ∖ S)|·δ·κ<δ, so |C ∩ T| = |C ∩ (T ∖ S)| < 1/κ. Additionally since |C ∩ (T ∖ S)| ≥ 1, we have the total flow being at least κ·δ, and therefore the cut is at least this size as well.Suppose now that min{|C∩ S|, |C∩ S|} > 0. From <ref>, we know that min{|C∩ S|, |C∩ S|}≤ s and w(∂ C) ≥γ·δ≥κ·δ.Assume without loss of generality that |C∩ S| ≤ |C∩ S|. We consider two cases: * Case 1a: |C∩(T∖ S)| ≥ 2|C∩ S|.Recall that the s-t flow f has value |T∖ S|·δ·κ in graph G'. In this flow, at most |C∩ S|·δ·κ of the flow initially routed into C∩(T∖ S) from s can reach t without crossing cut C. The remaining flow must therefore cross cut C. Therefore we have δ > w(∂ C) ≥ (|C∩(T∖ S)|-|C∩ S|)·δ·κ≥ |C∩(T∖ S)|/2·δ·κ. Therefore |C∩(T∖ S)| < 2/κ, and thus |C ∩ T| = |C∩(T∖ S)| + |C∩ S| < 2/κ + s. * Case 1b: |C∩(T∖ S)| < 2|C∩ S|.We have |C∩(T∖ S)| < 2|C∩ S| ≤ 2s, so |C ∩ T| = |C∩(T∖ S)| + |C∩ S| ≤ 3s. This completes the proof of case 1 of <ref> when U = V.Case 2: U ⊊ V. We begin by showing |U∩ T|≥|T|/3. If this was not the case, since we assume |S|≥2|T|/3, more than |T|/3 terminals in S would be in U. All of these terminals would have an edge of size δ·κ crossing the cut ∂_G' U. However, the s-t cut ∂_G' U must have size at most |T∖ S|·δ·κ in graph G'. Since |T∖ S| ≤ |T|/3, we arrive at a contradiction. Now we prove the terminal-strong property of G[U]. We start by proving the following lemma:Assume that V is (s, δ, γ, S)-terminal-strong in G. Then U is (s, δ, γ/6, S∩ U)-terminal-strong in G. Note that the condition min{|C ∩ U ∩ S|, |C∩ U ∩ S|}≤ s follows immediately from min{|C ∩ S|, |C∩ S|}≤ s since G is (s, δ, γ, S)-terminal-strong. So it suffices to prove that for all Steiner cuts (C, C) such that ∂ C ≤δ, we have w(E(C ∩ U, C∩ U))≥γ/6 ·δ if min{|C ∩ U ∩ S|, |C∩ U ∩ S|} > 0. By flow-cut duality, the flow f saturates the entire boundary E(U, U). Assume for contradiction there exists a Steiner cut (C, C) such that ∂ C ≤δ, w(E(C ∩ U, C∩ U)) < γ/6·δ, and min{|C ∩ U ∩ S|, |C∩ U ∩ S|} > 0. As mentioned before, we know that min{|C ∩ U ∩ S|, |C∩ U ∩ S|}≤ s, so assume without loss of generality that |C∩ U∩ S|≤ s. 
We also have w(∂(C∩ U)) = w(E(C ∩ U, C∩ U)) + w(E(C ∩ U, V∖ U)) ≥γ·δ since G is (s, δ, γ, S)-terminal-strong. This implies w(E(C∩ U, V∖ U)) > 5γ/6·δ. The flow f sends >5γ/6·δ flow from C∩ U to V ∖ U, and only <γ/6·δ flow can enter C∩ U from C∩ U. Therefore >2γ/3·δ flow must be routed from the source node s into C∩ U. But this flow is upper bounded by |C ∩ U ∩ S| ·δ·κ≤ s·δ·γ/(2s)≤γ/2·δ < 2γ/3·δ (using our assumption κ≤γ/(2s) from <Ref>), and we arrive at our contradiction. Next, we show that |S ∩ U| ≥ 2|T ∩ U|/3. For each terminal from S in V ∖ U and each terminal from T ∖ S in U, there exists an edge of weight δ· 1/κ crossing ∂ U in G'. Denote |S ∩ (V∖ U)|=a and |(T ∖ S)∩ U|=b. Since the cut (U,U) has weight <|T∖ S|·δ·κ≤ |T|/3·δ·κ, we have a+b<|T|/3. Additionally |S| ≥ 2|T|/3, so |(T∖ S)∩ U|/|S∩ U| = b/|S|-a≤a+b/|S| < |T|/3/2|T|/3≤ 1/2,proving the requirement as desired.At this point, we can apply the U=V case on G[U], since we have shown that G[U] is (s, δ, γ/6, S∩ U)-terminal-strong and |S ∩ U| ≥ 2|T ∩ U|/3, and the U=V case only requires that κ≤γ/6. This completes the proof of case 2 of <ref> when U ⊊ V.§.§.§ Final ParametersFinally, we plug in our parameters L_max=O(log|T|), α=L_max/ψ, s=O((L_max/ψ)^2log^2|V|)=O(log^4|V|/ψ^2), and γ=1/200α s=Ω(ψ^3/log^5|V|) in <Ref>. We have κ=Ω(γ/s)=Ω(ψ^5/log^9|V|), so the (max{2/κ+s,3s}, δ, κ, U∩ T)-terminal strong cluster U output by <Ref> is (O(log^9|V|/ψ^5),δ,Ω(ψ^5/log^9|V|)-terminal-strong, fulfilling the output guarantee of <Ref>.§.§ Termination To show that the cut-matching game terminates within L_max rounds, we introduce the following guarantee of the cut-matching game analysis.For large enough L_max=O(log|T|), the algorithm proceeds for at most L_max iterations.The proof is a direct adaptation of the cut-matching game analysis of <cit.>, and thus we leave the details to <ref>. §.§ Terminal DecompositionFinally, we introduce the complete algorithm for terminal decomposition, which uses the cut-matching game algorithm as a key subroutine.The algorithm uses Cut-Game as a subroutine and <ref> as its guarantee. First we note that since we recurse on both sides of a cut in <ref> only if they both have at least Ω(1) terminals (from <ref>), we have at most O(log n) recursive levels. We now prove the formal theorems for our terminal decomposition. The algorithm runs with O(log^2 n) max-flows and O(m) additional time. First, we note that in each recursive level, each Terminal-Decomp call is on a mutually disjoint portion of the graph. Therefore all maximum flows on a single recursive level can be done in parallel with a single maximum flow call on a graph of size O(m) edges. Additionally, each round of the cut-matching game uses at most a single max-flow call (in CutOrFlow). With a total of L_max = O(log n) cut-matching game rounds and O(log n) recursive levels, the entire terminal decomposition runs in O(log^2 n) max-flows.All other cut-matching game procedures (specifically the (s,δ,γ)-strong decomposition of <ref>) run in near-linear time. With O(log n) recursive levels, the entire algorithm runs in near-linear time excluding max-flows. <ref> returns a (O(1/ψ^5), δ, Ω(ψ^5), T)-terminal-strong decomposition of G. We begin by proving the upper-bound on intercluster edges:The total weight of intercluster edges from the decomposition outputted by <ref> is at most O(ψ·δ·|T|log |T|).Every cut made by <ref> is ψ·δ terminal-sparse due to <ref>. We can charge ψ·δ weight to each terminal on the smaller side of the cut. 
Since there are at most log |T| recursive levels and each cluster gets only one cut per recursive level, each terminal gets charged at most log |T| times. Summing up the weights charged to each terminal gives us a total edge weight of O(ψ·δ·|T|log |T|). From <ref>, every cluster returned is certified to be (O(log^9|V|/ψ^5), δ, Ω(ψ^5/log^9|V|), T)-terminal-strong, completing the proof. § MINIMUM STEINER CUT USING SPARSIFICATION We complete our algorithm by showing that the minimum Steiner cut of a graph can be computed with polylogarithmically many maximum flow calls by using its terminal-strong decomposition. We use the minimum isolating cuts method and terminology described by <cit.>. The key difference is that instead of using an expander decomposition, we use a terminal-strong decomposition. However, we prove that the exact same guarantees apply. We begin by introducing some definitions from <cit.>. A subset of vertices U ⊆ V is considered k-unbalanced if there exists a minimum Steiner cut S such that min{|S ∩ U|, |S̅∩ U|}≤ k. Otherwise U is considered k-balanced. Our main result of the section is as follows: There is a deterministic algorithm which, given an undirected weighted graph G=(V,E), an (s, δ, γ, T)-terminal-strong decomposition G'={V_1, V_2,...,V_ℓ}, a parameter k = C log^C n for some large enough constant C>0, and a subset of terminals U ⊆ T, does the following: * If U is k-unbalanced, we return the minimum Steiner cut of G with polylogarithmically many maximum flow calls and near-linear additional runtime. * If U is k-balanced with witness (S_1, S_2), we return a subset U' ⊂ U such that |U'|≤|U|/2 and S_i∩ U' ≠∅ for both i=1,2. We prove the two cases separately: * Unbalanced Case: We use the following method from <cit.> to deal with the unbalanced case: Consider a graph G = (V, E), a parameter k≥ 1, and a k-unbalanced set U ⊆ T. Then, we can compute the minimum Steiner cut of G in k^O(1)polylog(n) many s-t max-flow computations plus O(m) deterministic time. With k = C log^C n, this gives us polylogarithmically many maximum flow calls and near-linear additional runtime as desired. * Balanced Case: If U is k-balanced, we use a sparsification procedure based on the (s, δ, γ)-terminal-strong decomposition with terminal set U to find a subset U' ⊂ U such that |U'| ≤ |U|/2, with the guarantee that some minimum Steiner cut contains at least one vertex from U' on both of its sides. We then set U ← U' (see <ref> for full details). After computing the sparsified set U ← U' in the balanced case, we can then recursively run our minimum Steiner cut algorithm on graph G with terminal set T and the sparsified subset U. Since the size of U at least halves each time we sparsify, this only needs to be done at most log n times. Also, since we never know which of the two cases we are in (balanced or unbalanced) in <ref>, we run both cases in parallel until U is guaranteed to be k-unbalanced. At this point we take the minimum over all Steiner cuts found, and we are guaranteed to have found a minimum one (see <ref>). § CONCLUSION Our algorithm solves the minimum Steiner cut problem deterministically with polylogarithmically many max flow calls and near-linear additional processing time. We thus show that minimum Steiner cut reduces to maximum flow up to polylogarithmic factors in runtime. Specifically, the existence of a deterministic near-linear time s-t max-flow algorithm would imply a deterministic near-linear time algorithm for minimum Steiner cut. Our main contribution is the (s, δ, γ)-terminal-strong decomposition.
We are able to do this deterministically in polylogarithmic max flows and near-linear additional time for small δ, which is not yet known for standard expander decompositions. We also believe that (s, δ, γ)-strong and terminal-strong decompositions may have additional future applications in faster algorithms for graph problems.§ ACKNOWLEDGEMENTSJL would like to thank Monika Henzinger, Satish Rao, and Di Wang for helpful discussions related to <Ref>.§ CUT-MATCHING PROOFFor completeness, we prove <ref> below by directly adapting the analysis of <cit.>.Let L_max=O(log|V|) be large enough. Suppose for contradiction that the algorithm does not terminate within L_max iterations. On each iteration, the algorithm must execute <Ref> with the partition (C,C) with |C|,|C|≥|T|/3 and w_H(C,C)<δ |T|/12. Following <cit.>, we use the entropy function potentialΦ_u(t)=-∑_v∈ Tp_u,v(t)log p_u,v(t) andΦ(t) = ∑_u∈ TΦ_u(t),where p_u,v(t)∈[0,1] satisfy ∑_v∈ Tp_u,v(t)=1 for all u∈ T. Intuitively, p_u,v models a random walk on the cut-graph, and p_u,v(t) is the probability distribution at time t starting at vertex u∈ T. The entropy function Φ_u(t) is at most ln|T|, so Φ(t)≤|T|ln|T| always. We will show that the potential function Φ(t) can never decrease, and it increases by Ω(|T|) every time the algorithm executes <Ref>. It follows that there can only be O(log|T|) total iterations.Let M_t be the edges added to H on <Ref>. Since the flow f sends at most δ·γ/2s flow through each terminal in T, and since the weight of the edges in H are scaled by 2s/γ, each vertex has weighted degree at most δ in M_t. Also, since f has value at least |T|/6·δ·γ/2s the total weight of M_t is at least |T|/6·δ.Initially, set p_u,v(0)=1 if u=v and p_u,v(0)=0 otherwise. For each iteration t, setp_u,v(t+1)=2δ-_M_t(v)/2δp_u,v(t)+∑_v'∈ Tw_M_t(v',v)/2δp_u,v'(t) .Note that p_u,v(t+1) is a convex combination of p_u,v'(t) over all v'∈ T. Since the entropy function Φ_u(t) is concave, we obtain Φ_u(t+1)≥Φ_u(t), which implies that Φ(t+1)≥Φ(t).Given a partition (C,C) on iteration t, define q_u(t)=∑_v∈Cp_u,v(t), which represents the probability that the random walk starting at u ends up in C.∑_u∈ Cq_u(t)<|T|/100. We have ∑_u∈ Cq_u(t+1)=∑_u∈ C∑_v∈Cp_u,v(t+1) =∑_u∈ C∑_v∈C(2δ-_M_t(v)/2δp_u,v(t)+∑_v'∈ Tw_M_t(v',v)/2δp_u,v'(t)) =∑_u∈ C∑_v∈C2δ-_M_t(v)/2δp_u,v(t)+∑_u∈ C∑_v∈C∑_v'∈ Tw_M_t(v',v)/2δp_u,v'(t) =∑_u∈ C∑_v∈C2δ-_M_t(v)/2δp_u,v(t)+∑_u∈ C∑_v∈Cw_M_t(v',v)/2δ≤∑_u∈ C∑_v∈Cp_u,v(t)+∑_u∈ C∑_v∈Cw_M_t(v',v)/2δ=q_u(t)+w_M_t(C,C)/2δ,By induction on t, we obtain ∑_u∈ Cq_u(t+1)=(∑_i=1^tw_M_t(C,C))/2δ. Note that ∑_i=1^tw_M_t(C,C) is the total value of the cut (C,C) in the cut-graph H_t, which has value at most |T|δ/50 by the construction of cut (C,C). It follows that ∑_u∈ Cq_u(t)<|T|/100. Since |C|≥|T|/3, the values q_u(t) for u∈ C have average at most 3/100. By Markov's inequality, a constant fraction have value q_u(t)≤1/24. We will show that for each vertex u∈ T with q_u(t)≤1/24, we have Φ_u(t+1)≥Φ_u(t)+Ω(1). This would imply Φ(t+1)≥Φ(t)+Ω(|T|) and finish the analysis.For the rest of the proof, fix a vertex u∈ T with q_u(t)=∑_v∈Cp_u,v(t)≤1/24. By Markov's inequality, at most 1/8 fraction of the vertices in C have p_u,v(t)≥1/3; call these vertices bad. Similarly, ∑_v∈ Cp_u,v(t)≥23/24, and by (reverse) Markov's inequality, at most 1/8 fraction of the vertices in C have p_u,v(t)≤2/3; call these vertices bad. Overall, at most |T|/8 vertices are bad. Now consider the matching M_t+1 of total weight at least |T|/6·δ. 
Each vertex has degree at most δ in M_t+1, so at most |T|/8·δ weight of edges in M_t+1 are incident to bad vertices. So a constant fraction of the edges of M_t+1 (by weight) have both endpoints good, which means one endpoint v has p_u,v(t)≤1/3 and the other endpoint v' has p_u,v'(t) at least 2/3. The definition of p_u,v(t+1) in (<ref>) will “mix” these separated values, and a tedious but straightforward algebraic calculation establishes Φ(t+1)≥Φ(t)+Ω(|T|). § SPARSIFICATION PROCEDURE The details of the full sparsification procedure from <ref> are given here. We define a cluster V_i to be trivial if |U_i| = 0, small if 1≤ |U_i| ≤ s^2, and large if |U_i| > s^2. To construct set U', for each cluster V_i, we take an arbitrary vertex from U_i if V_i is small, or s+1 arbitrary vertices from U_i if V_i is large. To prove correctness, we show that U' is always at least a constant factor smaller than U in each iteration, and that U' always contains at least one terminal on both sides of a minimum Steiner cut if U is k-balanced. §.§ Size Bound There are at most O(ψ·|U|log n) total clusters, i.e. ℓ≤ O(ψ·|U|log n). The total weight of intercluster edges is upper-bounded by O(ψ·δ·|U|log n) from the guarantee of our (s, δ, γ, U)-terminal-strong decomposition. We set δ = λ̃, where λ̃∈[λ,2λ] denotes a 2-approximation of the value λ of the minimum Steiner cut on graph G. Since ∂ V_i is a Steiner cut in graph G, we have λ≤ w(∂ V_i). Therefore ℓλ≤∑_i∈[ℓ] w(∂ V_i) ≤ O(ψ·λ·|U|log n). Dividing by λ on the left and right sides gives us our claim. The sparsification procedure above returns a set U' such that |U'|≤ |U|/2. We can only have fewer than |U|/s^2 large clusters, and at most O(ψ·|U|log n) small clusters due to <ref>. From our construction, U' has total size |U'| < |U|/s^2·(1+s) + O(ψ·|U|log n) · 1. With s=O((L_max/ψ)^2log^2|V|) =O(1/ψ^2) and ψ = 1/polylog(n), we get that |U'| ≤ |U|/2 as desired with a small enough chosen ψ. §.§ Hitting Both Sides of the Minimum Steiner Cut The following claim ensures that the minimum Steiner cut can only cut the sets U_i by a very small amount. (Analogous to Claim 4.11 in <cit.>) Let C be one side of a minimum Steiner cut of G. We have ∑_i∈[ℓ]min{|U_i ∩ C|, |U_i ∩ C̅|}≤ 1/γ, where U_i:=V_i∩ U for i∈[ℓ]. The minimum Steiner cut is at most δ, so the portion of the min cut E(V_i ∩ C, V_i ∩ C̅) within cluster V_i is also at most δ. Since the cuts E(V_i ∩ C, V_i ∩ C̅) for all i ∈ [ℓ] are disjoint portions of the min-cut ∂ C, we have ℓ·γ·δ≤∑_i∈[ℓ] w(E(V_i ∩ C, V_i ∩ C̅)) ≤ w(∂ C) ≤δ. Dividing by γ·δ on the left and right sides of the inequality gives our claim. The following claim ensures that only a small number of clusters are actually cut (have terminals on both sides) by the minimum Steiner cut. (Analogous to Claim 4.12 in <cit.>) Let C be one side of a minimum Steiner cut of G. Then, C cuts at most 1/γ clusters of G' (we say a cluster is cut if it contains at least one terminal on each side of the cut). The minimum Steiner cut ∂ C has weight at most δ. For every cluster V_i that is cut, the portion of ∂ C inside V_i (call this ∂ C_V_i) has a terminal on both sides of it in V_i. From the definition of (s, δ, γ, U)-terminal-strong, we must have that w(∂_G[V_i] C) ≥γ·δ. Since the clusters G[V_i] are edge-disjoint and the minimum Steiner cut is upper-bounded by δ, ∂ C cannot cut more than 1/γ clusters of G'. Suppose U is 2s^2/γ-balanced with witness (S_1, S_2). Then U' ∩ S_i ≠∅ for both i=1,2.
The proof is a direct modification of Lemma 4.13 of <cit.>, replacing each instance of 1/ϕ with either s or 1/γ. Call a cluster V_i: * white if S_1∩ U_i=∅ (i.e., U_i⊆ S_2). * light gray if 0<|S_1∩ U_i|≤ |S_2∩ U_i|<|U_i|, which implies that 0<|S_1∩ U_i|≤ s. * dark gray if 0<|S_2∩ U_i|<|S_1∩ U_i|<|U_i|, which implies that 0<|S_2∩ U_i|≤ s. * black if S_2∩ U_i=∅ (i.e., U_i⊆ S_1).Every cluster must be one of the four colors, and by <Ref>, there are at most 1/γ many (light or dark) gray clusters since U_i∩ S_1, U_i∩ S_2 ≠∅ implies that S_1 cuts cluster V_i. Note that since we are only considering clusters V_i such that U_i ≠∅, it must be that for a white cluster, we have |S_2∩ U_i| ≠∅, and similarly, for a black cluster, we have |S_1∩ U_i| ≠∅. There are now a few cases: * There are no large clusters. In this case, if there is at least one white and one black small cluster, then the vertices from these clusters added to U' are in S_2 and S_1, respectively.Otherwise, assume w.l.o.g. that there are no black clusters. Since there are at most 1/γ gray clusters in total, |S_1∩ U|≤ 1/γ· s^2, contradicting our assumption that min{|S_1∩ U|,|S_2∩ U|}≥ 2s^2/γ for large enough C.* There are large clusters, but all of them are white or light gray. Let V_i be a large white or light gray cluster. Since we select s+1 vertices of U_i, and |S_1∩ U_i|=min{|S_1∩ U_i|,|S_2∩ U_i|}≤ s, we must select at least one vertex not in S_1. Therefore, S_2∩ U'∅. If there is at least one black cluster, then the selected vertex in there is in U', so S_1∩ U'∅ too, and we are done.So, assume that there is no black cluster. Since all large clusters are light gray (or white), |S_1∩ U_i| ≤ s for all large clusters V_i. Moreover, by definition of small clusters, |S_1∩ U_i| ≤ |U_i| ≤ 1/s^2 for all small clusters V_i. Since there are at most 1/γ gray clusters by <Ref>,|S_1∩ U| = ∑_i: V_i small|S_1∩ U_i| + ∑_i: V_i large|S_1∩ U_i| ≤1/γ· s^2 + 1/γ· s < 2s^2/γ,a contradiction.* There are large clusters, but all of them are black or dark gray. Symmetric case to (2) with S_1 replaced with S_2.* There is at least one black or dark gray large cluster V_i, and at least one white or light gray large cluster V_j. In this case, since we select s+1 vertices of U_i and |S_2∩ U_i|=min{|S_1∩ U_i|,|S_2∩ U_i||}≤ s, we must select at least one vertex in S_1. Similarly, we must select at least one vertex in U_j that is in S_2. Since s,1/γ≤polylog(n), we can set C large enough in the statement of <ref> so that Clog^Cn≥2s^2/γ. This completes the proof of the balanced case for <ref>.
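To make the procedure of this appendix and the round structure of the main algorithm concrete, the following Python sketch writes down the selection rule (one terminal per small cluster, s+1 per large cluster) and the way the balanced and unbalanced cases are interleaved. The function names are ours, and the isolating-cuts routine and the computation of the terminal-strong decomposition are treated as injected black boxes; the sketch only illustrates the control flow, not the max-flow machinery.

```python
import math
from typing import Callable, Dict, Hashable, List, Set, Tuple

Cut = Tuple[Set[Hashable], float]          # (one side of the cut, its weight)

def sparsify_terminals(clusters: Dict[int, Set[Hashable]], s: int) -> Set[Hashable]:
    """Selection rule of the balanced case; clusters[i] is U_i = V_i intersected with U."""
    selected: Set[Hashable] = set()
    for U_i in clusters.values():
        if not U_i:                         # trivial cluster: contributes nothing
            continue
        members: List[Hashable] = sorted(U_i, key=repr)
        if len(members) <= s * s:           # small cluster: one arbitrary terminal
            selected.add(members[0])
        else:                               # large cluster: s + 1 arbitrary terminals
            selected.update(members[: s + 1])
    return selected

def steiner_cut_via_sparsification(
    terminals: Set[Hashable],
    k: int,
    unbalanced_candidate: Callable[[Set[Hashable]], Cut],
    sparsify: Callable[[Set[Hashable]], Set[Hashable]],
) -> Cut:
    """Round structure: run both cases each round until |U| <= k forces the unbalanced case."""
    U = set(terminals)
    best: Cut = (set(), math.inf)
    while True:
        side, weight = unbalanced_candidate(U)   # isolating-cuts routine (black box)
        if weight < best[1]:
            best = (side, weight)
        if len(U) <= k:                          # now U is guaranteed k-unbalanced
            return best
        U = sparsify(U)                          # |U| at least halves per round
```

In a full implementation, `sparsify` would first compute the terminal-strong decomposition with terminal set U and then apply `sparsify_terminals`; because |U| at least halves per round and a candidate cut is recorded every round, the minimum over all recorded candidates is a minimum Steiner cut after O(log |T|) rounds.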
http://arxiv.org/abs/2312.16415v1
{ "authors": [ "Matthew Ding", "Jason Li" ], "categories": [ "cs.DS", "cs.DM" ], "primary_category": "cs.DS", "published": "20231227052722", "title": "Deterministic Minimum Steiner Cut in Maximum Flow Time" }
[ Chloe Marple January 14, 2024 ==================== § INTRODUCTIONWe update on our work to calculate the inclusive semileptonic decay rate of the D_s-meson. Following our recent work to understand the systematic error associated with the Chebyshev approximation of the kernel function <cit.>, in this paper we report on our estimate of the systematic error due to finite-volume effects. We refer to <cit.> for details on the strategy to calculate inclusive semileptonic decay rates in lattice QCD. In particular, the most recent work <cit.> presents a comparison between the Chebyshev and Backus-Gilbert approaches to approximate the kernel function in the energy integral.The rest of this paper is structured as follows. We briefly review the inclusive semileptonic decay on the lattice in Sec. <ref>. In Sec. <ref> we introduce the model used to estimate finite volume effects before applying it to the lattice data in order to extrapolate towards the infinite volume limit in Sec. <ref>. We present the conclusions in Sec. <ref>.§ INCLUSIVE SEMILEPTONIC DECAYS ON THE LATTICEThe total decay rate of the inclusive semileptonic decay is written asΓ∼∫_0^q^2_max dq^2 √(q^2)∑_l=0^2X̅^(l)(q^2) ,where X̅^(l)(q^2) contains the integral over the hadronic final state energy ωX̅_σ^(l)(q^2)= ∫_ω_0^∞ dωW^μν(q,ω) e^-2ω t_0 K^(l)_μν, σ (q,ω) = ⟨ψ^μ(q)|K^(l)_μν, σ(q,Ĥ)|ψ^ν(q)| ⟩ ,with the hadronic tensor W^μν(q,ω), |ψ^ν(q)⟩ = e^-Ĥ t_0J̃^ν(q, 0) |D_s⟩ / √(2M_D_s) and J̃^ν(q, 0) being the Fourier transformed currents. The lower limit0 ≤ω_0 ≤ω_min can be chosen freely as there are no states below the lowest lying energy state ω_min. The parameter t_0 is introduced to avoid the contact term which receives contributions from the opposite time ordering corresponding to unphysical states. In the definition of X̅_σ^(l)(q^2) above K^(l)_μν, σ(q,ω) = e^2ω t_0√(q^2)^2-l (m_D_s - ω)^l θ_σ(m_D_s - √(q^2) - ω) ,defines the kernel function and θ_σ(x) is a sigmoid function with smearing width σ.On the lattice we computeC_μν(t) = 1/2M_D_s⟨D_s|J̃^μ†(q,0) e^-ĤtJ̃^ν(q,0)|D_s| ⟩ ,and the calculation of the inclusive decay rate is reduced to the one of finding an appropriate polynomial approximation of the kernel function K^(l)_μν, σ(q,Ĥ).We employ the shifted Chebyshev polynomials T̃_j(x), with x = e^-ω and define the approximation as⟨K_μν, σ^(l)|≃⟩1/2c̃_0^(l)⟨T̃_0|+⟩∑_k=1^Nc̃_k^(l)⟨T̃_k| ⟩ .Here, c̃_k^(l) are analytically known coefficients and ⟨T̃_k|$⟩ are referred to as Chebyshev matrix elements. We use the notation⟨·|≡⟩⟨ψ^μ|·|ψ^ν|/⟩⟨ψ^μ|ψ^ν|$⟩. For simplicity, we skip the indices μ, ν going forward.The matrix elements are extracted from a fit to the correlator data followingC̅(t) = ∑_j=0^tã_j^(t)⟨T̃_j| ⟩ ,where ã_j^(t) are obtained from the power representation of the Chebyshev polynomials, see (A.24) and (A.25) of <cit.> for the definition of ã_j^(t), and C̅(t) is constructed from (<ref>) as C̅(t) = C(t+2t_0)/C(2t_0). To maximize the available data we choose t_0 = 1/2. We use priors to ensure that the fitted Chebyshev matrix elements satisfy the condition that the Chebyshev polynomials are bounded, i.e. |⟨T̃_j|⟩| ≤ 1. We refer to <cit.> for more details on the Chebyshev approximation and the practical application.§ MODELING STRATEGYOn the lattice, there is a well-known challenge concerning the reconstruction of the spectral density from correlators C(t) with a finite set of discrete time slices, commonly referred to as the ill-posed inverse problem. 
Even if the inverse problem could be solved for a correlator in a finite volume, C_V(t), where V=L^3 denotes the volume of the lattice, and hence the spectral density ρ_V(ω) is reconstructed, there is still a qualitative difference from its infinite volume counterpart ρ(ω). The spectral density in the infinite volume is a smooth function, while ρ_V(ω) is given by a sum of δ-peaks representing allowed states in a finite volume. In Fig. <ref> we sketch the situation for two-body states in a finite volume.This problem is avoided by the introduction of the smearing in the kernel function K(ω) as shown in Eq. (<ref>). The inverse problem is made arbitrarily mild by increasing the smearing width σ, and the smeared spectral density ρ_σ, V then smoothly approaches its infinite volume counterpart. To recover the inclusive decay rate, we therefore need to take the limit V→∞ before taking the limit of vanishing smearing width.The finite-volume effects for the spectral density can be sizeable for multi-body states, because the allowed states are controlled by the boundary condition. The energy spectrum for two-body states, for instance, receives corrections of 𝒪(1/L^3). This would be reduced significantly for the smeared spectral density, but its size and the scaling to the V→∞ limit may be non-trivial. We therefore introduce a model to investigate the volume dependence. After checking that the model describes the finite-volume data well, we use it to estimate the finite-volume effects.Among various multi-hadron states, we consider two-body final states, i.e. KK̅ states to be specific, which give the dominant contribution. The spectral density is obtained from the imaginary part of the vacuum polarization function, evaluated at one-loop, asρ(ω) = π∫d^3q/(2π)^31/(2ϵ_q)^2δ(ω - 2ϵ_q)introducing the short-hand notation ϵ_q^2 = q^2 + m_K^2. It corresponds to the production of KK̅ states from the vacuum through an operator 𝒪, which is taken either as a scalar density (J=0) or vector current (J=1). It models the two-body decays of the D_s meson under an assumption that the wave function of the D_s meson has only insignificant effects, which can be incorporated later by introducing a form factor.Within this model, one can obtain an explicit expression for the spectral density in the finite-volume and in the infinite-volume limit:ρ_V(ω) = π/V∑_q1/4(q^2 + m^2)δ(ω - 2√(q^2 + m^2)) ,andρ(ω) = 1/16π√(1 - 4m^2/ω^2),respectively. The expression above corresponds to the scalar density (J=0). For the vector current (J=1), we obtainρ_V(ω) = π/V∑_qq^2/4(q^2 + m^2)δ(ω - 2√(q^2 + m^2)) .andρ(ω) = 1/64πω^2 (1 - 4m^2/ω^2)^3/2, To estimate how the infinite volume limit is approached, we considerX̅^(l)(ω_th) = ∫_0^ω_th dωρ(ω) × K^(l)(ω) ,which is defined as a convolution between the kernel and the spectral density. We introduce a variable ω_th, which can be understood as a varying energy cut-off in the kernel function. Although ω_th is fixed for the physical semileptonic decay process, we use the freedom to choose it in the analysis in order to study how well our model describes the lattice data. In Fig. <ref> we show X̅^(l)(ω_th) for two choices of the volume V = 48^3and256^3, as well as the infinite volume limit.We find that the volume effect depends on the choice of l in the kernel function. 
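As an illustration of how the model above is evaluated, here is a small numerical sketch (Python with SciPy) of X̅(ω_th) for the scalar (J=0) channel, comparing the finite-volume sum over momentum modes with the infinite-volume integral. A sharp cut at ω_th is used in place of the smeared kernel, the kernel factor defaults to 1, and lattice units are assumed; these choices, and the function names, are illustrative assumptions rather than the parameters used in this work.

```python
import itertools
import numpy as np
from scipy.integrate import quad

def xbar_infinite(omega_th, m, kernel=lambda w: 1.0):
    """Infinite-volume X(omega_th): integral of rho(omega)*K(omega) up to omega_th."""
    if omega_th <= 2.0 * m:
        return 0.0
    rho = lambda w: np.sqrt(1.0 - 4.0 * m ** 2 / w ** 2) / (16.0 * np.pi)
    val, _ = quad(lambda w: rho(w) * kernel(w), 2.0 * m, omega_th)
    return val

def xbar_finite(omega_th, m, L, n_max=20, kernel=lambda w: 1.0):
    """Finite-volume X(omega_th): sum of delta peaks at omega = 2*eps_q, q = 2*pi*n/L."""
    V = L ** 3
    total = 0.0
    for n in itertools.product(range(-n_max, n_max + 1), repeat=3):
        q2 = (2.0 * np.pi / L) ** 2 * sum(c * c for c in n)
        eps = np.sqrt(q2 + m * m)
        if 2.0 * eps <= omega_th:
            total += (np.pi / V) * kernel(2.0 * eps) / (4.0 * eps ** 2)
    return total

if __name__ == "__main__":
    m, omega_th = 0.2, 1.0
    for L in (16, 32, 64):
        print(L, xbar_finite(omega_th, m, L), xbar_infinite(omega_th, m))
```

Increasing L makes the staircase produced by the discrete momentum sum approach the smooth infinite-volume curve, which is the behaviour illustrated by the figure discussed here.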
As argued in <cit.>, due to the sharp cut around the threshold in the kernel for l=0 (left), a strong dependence on the volume is expected, while l=2 (right) smoothly approaches zero at the threshold and is hence expected to only possesses a mild dependence. Nonetheless, for both cases we observe that V=256^3 nearly reproduces the infinite volume expression. § SYSTEMATIC ERROR DUE TO FINITE VOLUME EFFECTSWe combine the model and the lattice data to study the infinite volume limit. Weconstruct a fit function of the lattice dataC̅(t) = A_0 e^-E_0t + s(L) ∑_i A_i e^-E_i t1/E_i^2 - m_J^2,where we treat the ground state separately and sum over the two-body excited states in the second term. The factor 1/(E_i^2 - m_J^2) appearing in the second term is motivated by the time-like kaon form factor (pole-dominance model) where the mass m_J is that of the state of corresponding quantum number, i.e. f_0 for J=0 and ϕ for J=1.We constrain the prior of the ground state energy E_0 and its amplitude A_0 through a fit to the lattice data. The energies and amplitudes E_i and A_i are taken from our model. The prefactor s(L) is determined by a fit to the lattice data, and thus only the relative magnitude of A_i's are relevant.We consider the case of the spatial current insertions A_iA_i with vanishing q^2. This channel contributes only when l=2 in the kernel function.In Fig. <ref> we compare the fit results to the lattice data of the four-point correlation function. We also include the fit to the ground state. We observe that the short-distance behavior of the correlator, where the excited states contributions become significant, is well described by our fit, while also reproducing the correct long distance behavior. In Fig. <ref> we calculate X̅^(l)(ω_th) from (<ref>) using the spectrum determined by the fit. We fit the lattice data at a volume V=48^3 and then use it to calculate results for V=256^3 which serves as a proxy for the infinite volume limit. On the l.h.s. we combine the fit with the kernel function assuming that the cut is implemented through a Heaviside function. On the r.h.s. we show the case assuming a smeared kernel with a smearing width σ=0.1. For the latter, we also compare the results with the Chebyshev analysis of the lattice data following (<ref>), where we repeat the analysis for a set of values of ω_th. We confirm a good agreement between our model and the results obtained from the lattice data. We conclude that for this specific case the volume dependence is quite mild since no major changes in the shape of the results are found depending on the choice of the volume. Finally, we address how we construct our estimate of the corrections to the lattice result. The estimate is constructed by adding the corrections from the V→∞ limit before adding the σ→ 0 extrapolation, which translates to: X̅_AA^⊥(0^2) = 0.0786(31) (lattice result)+ 0.0001(0)(finite volume correction)+ 0.0055(1) (Finite smearing correction)= 0.0843(31).For the case considered in this work, the corrections due to the infinite volume limit are negligible, while the σ→ 0 limit gives a correction of order ∼7.§ CONCLUSIONWe developed a model under the assumption of two-body final states for which we have full control over the infinite volume extrapolation and then combine it with a fit to the lattice data to estimate the expected corrections from the infinite volume limit. 
In the case study performed here we found negligible corrections due to the infinite volume extrapolation, although larger corrections for larger values of q^2 and different shapes of the kernel function are expected. Further work is required to give a proper estimate of the systematic error associated with finite volume effects.§ ACKNOWLEDGMENTS The numerical calculations of the JLQCD collaboration were performed on SX-Aurora TSUBASA at the High Energy Accelerator Research Organization (KEK) under its Particle, Nuclear and Astrophysics Simulation Program, as well as on Fugaku through the HPCI System Research Project (Project ID: hp220056).The works of S.H. and T.K. are supported in part by JSPS KAKENHI Grant Numbers 22H00138 and 21H01085, respectively, and by the Post-K and Fugaku supercomputer project through the Joint Institute for Computational Fundamental Science (JICFuS).99Kellermann:2022mms R. Kellermann, A. Barone, S. Hashimoto, A. Jüttner and T. Kaneko,PoS LATTICE2022, 414 (2023) doi:10.22323/1.430.0414 [arXiv:2211.16830 [hep-lat]].Gambino:2020crt P. Gambino and S. Hashimoto,Phys. Rev. Lett. 125 (2020) no.3, 032001 doi:10.1103/PhysRevLett.125.032001 [arXiv:2005.13730 [hep-lat]]. Gambino:2022dvu P. Gambino, S. Hashimoto, S. Mächler, M. Panero, F. Sanfilippo, S. Simula, A. Smecca and N. Tantalo,JHEP 07 (2022), 083 doi:10.1007/JHEP07(2022)083 [arXiv:2203.11762 [hep-lat]]. Hansen:2017mnd M. T. Hansen, H. B. Meyer and D. Robaina,Phys. Rev. D 96 (2017) no.9, 094513 doi:10.1103/PhysRevD.96.094513 [arXiv:1704.08993 [hep-lat]].Hansen:2019idp M. Hansen, A. Lupo and N. Tantalo,Phys. Rev. D 99 (2019) no.9, 094508 doi:10.1103/PhysRevD.99.094508 [arXiv:1903.06476 [hep-lat]].Bulava:2021fre J. Bulava, M. T. Hansen, M. W. Hansen, A. Patella and N. Tantalo,JHEP 07 (2022), 034 doi:10.1007/JHEP07(2022)034 [arXiv:2111.12774 [hep-lat]].Barone:2023tbl A. Barone, S. Hashimoto, A. Jüttner, T. Kaneko and R. Kellermann,JHEP 07, 145 (2023) doi:10.1007/JHEP07(2023)145 [arXiv:2305.14092 [hep-lat]].
http://arxiv.org/abs/2312.16442v1
{ "authors": [ "Ryan Kellermann", "Alessandro Barone", "Shoji Hashimoto", "Andreas Jüttner", "Takashi Kaneko" ], "categories": [ "hep-lat" ], "primary_category": "hep-lat", "published": "20231227071132", "title": "Studies on finite-volume effects in the inclusive semileptonic decays of charmed mesons" }
[][email protected] Department of Physics and Astronomy, The University of Iowa, Iowa City, Iowa 52242, USA [][email protected] Department of Physics, Syracuse University, Syracuse, NY 13244, USA [][email protected] Department of Physics and Astronomy, The University of Iowa, Iowa City, Iowa 52242, USA [][email protected] Jij Inc., Bunkyo-ku, Tokyo 113-0031, Japan Department of Physics, Syracuse University, Syracuse, NY 13244, USA [][email protected] Department of Physics, Syracuse University, Syracuse, NY 13244, USA We show how to construct a tensor network representation of the path integral for reduced staggered fermions coupled to a non-abelian gauge field in two dimensions. The resulting formulation is both memory and computation efficient because reduced staggered fermions can be represented in terms of a minimal number of tensor indices while the gauge sector can be approximated using Gaussian quadrature with a truncation. Numerical results obtained using the Grassmann TRG algorithm are shown for the case of SU(2) lattice gauge theory and compared to Monte Carlo results. Tensor network representation of non-abelian gauge theory coupled to reduced staggered fermions Goksu Can Toga January 14, 2024 =============================================================================================== § INTRODUCTION Tensor networks furnish a powerful tool to represent and study lattice quantum field theories. In a Hamiltonian formulation they yield efficient representations of low lying states of the system <cit.> while in the context of a Euclidean path integral they form the starting point of efficient blocking/RG schemes that can be used to compute a variety of observable.One of the main motivations for their use within the HEP community is the famous sign problem that prohibits the use of Monte Carlo techniques for many theories of interest. In contrast, renormalization group algorithms for tensor networks are deterministic and hence insensitive to sign problems—see <cit.> for reviews and recent developments.The ultimate goal in HEP is to formulate a tensor network representation of full QCD, in which fermions are coupled to an SU(3) gauge field in four dimensions which can be contracted efficiently on current hardware.[By taking the time continuum limit one can also extract a gauge invariant Hamiltonian from such a network that can be implemented, in principle, on quantum computers.]The numerical complexity, in terms of both CPU and memory, of any tensor network depends on the number of physical degrees of freedom which must be captured in the tensor. For the gauge fields one must truncate the continuous degrees of freedom associated with the gauge group down to a finite set while fermions are characterized by multidimensional bond dimensions (see e.g. <cit.>). In addition the number of tensor indices increases rapidly with dimension. These facts imply that tensor renormalization group computations for the simplest non-abelian lattice gauge theory coupled to fermions are already extremely difficult even in two space time dimensions [ Two dimensional QCD was studied using tensor networks in ref. <cit.>. In that paper the strong coupling limit is taken, so that the major part of the physical degrees of freedom are integrated out at the initial stage. By contrast, our current paper provides a way to construct a tensor network representation for QCD-like theories for any value of the coupling constant. 
] [ Note that theories where SU(2) gauge fields are coupled to scalar fields have been studied in refs. <cit.>. ].A typical way to extract discrete tensor indices for gauge or spin systems is the character expansion and this approach has been shown to be successful for studies of U(N) and pure SU(N) LGTs <cit.>. Recently other approaches that are based on the method of quadratures, probabilistic sampling, and trial (variational) actions have been proposed <cit.> [ Note that the use of the quadrature method was introduced earlier in the context of scalar fields <cit.>.]. Also a new method in which the tensors depend on only representation indices was proposed in <cit.> for pure gauge theories.In this work, we discretize the path integral using the Gaussian quadrature rule. Since the fermions are represented by Grassmann valued fields they are naturally discrete. Nevertheless the requirements needed to build Grassmann tensor networks are typically large since they depend on the number of both spinor and color components of a complex field. Using ordinary staggered fermions removes the spinor index component but we will show that it still leaves a formidable computational challenge even in the simplest case of a two color gauge theory. In contrast, we will show that reduced staggered fermions <cit.> give the most economical lattice fermion formulation possible in such systems. Reduced staggered fermions are also interesting in the context of symmetric mass generation and recent efforts to construct chiral lattice gauge theories—see <cit.>. Indeed in the latter case a sign problem is almost inevitable which provides strong motivation for the use of tensor methods. § MODEL AND TENSOR NETWORK REPRESENTATION As a warmup we will focus first on the construction of a theory of regular staggered fermions coupled to SU(2)—the simplest continuous non-abelian gauge group. First, we describe why this theory is computationally challenging in the tensor renormalization group studies. Subsequently we introduce a tensor network formulation for the SU(2) gauge theory with reduced staggered fermions where the higher order orthogonal iteration (HOOI) algorithm is used for the construction of tensor. §.§ SU(2) theory with full staggered fermions We can make a tensor network representation of this fermion model by following the Grassmannn tensor network construction (see e.g. <cit.>). First we express the action as a product of Grassmannn valued tensors. The action for the gauged staggered fermion is given byS_F[ U ] = ∑_n[ m ψ̅_nψ_n + ∑_μ=1^2η_n,μ/2( ψ̅_nU_n,μψ_n+μ̂ - ψ̅_n+μ̂U_n,μ^†ψ_n) ].The staggered sign factor is defined by η_n,μ = (-1)^∑_ν<μn_ν. Both periodic and anti-periodic boundary conditions can be used.The partition function can be expanded thanks to the nilpotency of the Grassmannn variables:Z_F[ U ] = ∫𝒟ψ̅𝒟ψ∏_n e^-S_F[U] = [t] ∫𝒟ψ̅𝒟ψ∏_n ∏_a=1^2∑_s_n^a=0^1( -mψ̅_n^aψ_n^a)^s_n^a·∏_a,b=1^2[t]∑_x_n,1=0^1( -η_n,1/2ψ̅_n^aU_n,1^abψ_n+1̂^b)^x_n,1^ab∑_x_n,2=0^1( η_n,1/2ψ̅_n+1̂^aU_n,1^ba∗ψ_n^b)^x_n,2^ab·∑_t_n,1=0^1( -η_n,2/2ψ̅_n^aU_n,2^abψ_n+2̂^b)^t_n,1^ab∑_t_n,2=0^1( η_n,2/2ψ̅_n+2̂^aU_n,2^ba∗ψ_n^b)^t_n,2^ab.As shown in <cit.>, the lattice coordinates x and t which label the index associated with the expansion of the exponential constitute candidates for the tensor indices. On each link, and for both ψ and ψ̅, there is a two component (forward and backward hopping) index and, in addition, a color index running over two values for SU(2). 
Thus, the bond dimension associated with each fermion link will turn out to be 2^2 × 2 × 2 = 256. This is prohibitively large since, in the complete tensor network, one has to consider additionally the contribution from the gauge part. Specifically, if we assume that the bond dimension of the gauge sector is χ, the bond dimension of the total tensor network will be 256χ, and this is not currently feasible [ In previous tensor network studies, the typical bond dimension is 100 or less. While bond dimensions as large as 512 have been used for the two dimensional Ising model <cit.>, such bond dimensions requirea huge amount of CPU time and also carry memory footprints on the order of 100–1000 GB. ]. To remedy this situation we have instead considered using reduced staggered fermions. §.§ SU(2) theory with reduced staggered fermions If one uses a massless reduced staggered formulation as in ref. <cit.>, the degrees of freedom can be reduced by half. We substitute the staggered fields by the reduced staggered fermions using the transformation ψ_n → (1-ϵ_n)ψ_n/2 and ψ̅_n → (1+ϵ_n)ψ_n/2. In this formulation the reduced staggered field ψ_nand it's conjugate ψ̅_n are placed on odd and even sites (or even and odd sites), respectively, so that one can just relabel ψ̅_̅n̅ as ψ_n^T. The fermionic action canthen be simplifed toS_F[ U ] = ∑_n∑_μ=1^2η_n,μ/2ψ_n^T𝒰_n,μψ_n+μ̂.A “projected” link variable 𝒰 is defined by 𝒰 = (1+ϵ_n)U_n,μ/2 + (1-ϵ_n)U_n,μ^∗/2, where the parity factor is ϵ_n = (-1)^n_1+n_2. In this case the Boltzmann factor is expanded likee^-S_F= ∑_{ x,t }∏_n∏_a,b=1^2( - η_n,1/2ψ_n^a𝒰_n,1^abψ_n+1̂^b)^x_n^ab( - η_n,2/2ψ_n^a𝒰_n,2^abψ_n+2̂^b)^t_n^ab. Because of the halving of degrees of freedom the bond dimension of the resultant fermion tensor network is now just 2^2× 2=16. This is a significant reduction from a bond dimension of 256 for the case of full staggered fermions.We can split ψ_n^aψ_n+1̂^b and ψ_n^aψ_n+2̂^b using a set of dummy Grassmannn variables α_n, β_n asψ_n^aψ_n+1̂^b = ∫ (ψ_n^adα_n^ab) (dα̅_n+1̂^abψ_n+1̂^b) (α̅_n+1̂^abα_n^ab), ψ_n^aψ_n+2̂^b = ∫ (ψ_n^adβ_n^ab) (dβ̅_n+2̂^abψ_n+2̂^b) (β̅_n+2̂^abβ_n^ab). Using dummy Grassmannn variables, the Boltzmann factor turns out to bee^-S_F = ∑_{ x,t }∏_n∏_a,b=1^2 ( η_n,1/2𝒰_n,1^ab)^x_n^ab( η_n,2/2𝒰_n,2^ab)^t_n^ab· (ψ_n^adα_n^ab)^x_n^ab (ψ_n+1̂^bdα̅_n+1̂^ab)^x_n^ab (ψ_n^adβ_n^ab)^t_n^ab (ψ_n+2̂^bdβ̅_n+2̂^ab)^t_n^ab· (α̅_n+1̂^abα_n^ab)^x_n^ab (β̅_n+2̂^abβ_n^ab)^t_n^ab. 
Then the fermion partition function canbe expressed asZ_F[ U ] = ∫( ∏_ndψ_n^1dψ_n^2) e^-S_F = ∑_{x,t}∏_n[ ∏_a,b=1^2( 𝒰_n,1^ab)^x_n^ab( 𝒰_n,2^ab)^t_n^ab]T_F x_nt_nx_n-1̂t_n-2̂ G_n, x_nt_nx_n-1̂t_n-2̂,where, the bosonic and the fermionic components can be written repectively asT_F x_nt_nx_n-1̂t_n-2̂ = ∫dψ_n^1dψ_n^2 [ ∏_a,b=1^2( η_n,1/2)^x_n^ab( η_n,2/2)^t_n^ab](ψ_n^2)^t_n-2̂^22 (ψ_n^2)^t_n-2̂^12 (ψ_n^1)^t_n-2̂^21 (ψ_n^1)^t_n-2̂^11· (ψ_n^2)^x_n-1̂^22 (ψ_n^2)^x_n-1̂^12 (ψ_n^1)^x_n-1̂^21 (ψ_n^1)^x_n-1̂^11· (ψ_n^2)^t_n^22 (ψ_n^1)^t_n^12 (ψ_n^2)^t_n^21 (ψ_n^1)^t_n^11· (ψ_n^2)^x_n^22 (ψ_n^1)^x_n^12 (ψ_n^2)^x_n^21 (ψ_n^1)^x_n^11,andG_n, ijkl =( dα_n^11)^x_n^11( dα_n^21)^x_n^21( dα_n^12)^x_n^12( dα_n^22)^x_n^22·( dβ_n^11)^t_n^11( dβ_n^21)^t_n^21( dβ_n^12)^t_n^12( dβ_n^22)^t_n^22·( dα̅_n^11)^x_n-1̂^11( dα̅_n^21)^x_n-1̂^21( dα̅_n^12)^x_n-1̂^12( dα̅_n^22)^x_n-1̂^22·( dβ̅_n^11)^x_n-2̂^11( dβ̅_n^21)^x_n-2̂^21( dβ̅_n^12)^x_n-2̂^12( dβ̅_n^22)^x_n-2̂^22·[ ∏_a,b=1^2( α̅_n+1̂^abα_n^ab)^x_n^ab( β̅_n+2̂^abβ_n^ab)^t_n^ab].Note that these tensor elements are quite similar to the tensor network representation of the Majorana–Wilson fermion system given in the authors' previous paper <cit.>. Indeed, if one takes a mapping as 11 → 1, 21 → 2, 12 → 3, and 22 → 4, G is exactly the same as that in <cit.>.The total partition function is thenZ = ∑_{x,t}∫𝒟U ∏_n[t] T_F G_n[ ∏_a,b=1^2( 𝒰_n,1^ab)^x_n^ab( 𝒰_n,2^ab)^t_n^ab] [ ∏_a,b,c,d=1^2 e^(β/2) U_n,1^ab U_n+1̂,2^bc U_n+2̂,1^dc∗ U_n,2^ad∗].Note that for the gauge part of the actionwe can use the normal link variables U rather than the projected ones 𝒰 since the real part of UUUU and 𝒰𝒰𝒰𝒰 are the same.To consider the integral of the gauge variables, we use the following parametrization of the gauge elements U_n,μ( θ, α, β) = [ cosθ_n,μ e^iα_n,μsinθ_n,μe^iβ_n,μ; -sinθ_n,μ e^-iβ_n,μ cosθ_n,μe^-iα_n,μ ].Switching β with γ to avoid confusion with β in the partition functionU_n,μ( θ, α, γ) = [ cosθ_n,μ e^iα_n,μsinθ_n,μe^iγ_n,μ; -sinθ_n,μ e^-iγ_n,μ cosθ_n,μe^-iα_n,μ ]. Under this parametrization the Haar measure becomes ∫𝒟U = ∫∏_n,μdU_n,μ = ∏_n,μ∫_0^π/2dθ_n,μ∫_-π^πdα_n,μ∫_-π^πβ_n,μsinθ_n,μcosθ_n,μ/2π^2.∫𝒟U = ∫∏_n,μdU_n,μ = ∏_n,μ∫_0^π/2dθ_n,μ∫_-π^πdα_n,μ∫_-π^πdγ_n,μsinθ_n,μcosθ_n,μ/2π^2. We can now discretize the variables by using the Gaussian quadrature rule. For example, for a single variable function g, the Gauss–Legendre (GL) quadrature rule is∫_a^bdy g(y) ≈b-a/2∑_i=1^K w_i g( b-a/2z_i + a+b/2).K is the order of the Legendre polynomial to be used, z_i is the root of the Legendre polynomial, and w_i is the corresponding weight. The higher the order K of the polynomial is, the better the approximation of the integral is. The formula generalizes to multi variable integrals(∏_i∫_a_i^b_idy_i) g(y_1, …, y_i, …) ≈ (∏_i b_i-a_i/2) (∏_i ∑_i=1^K)(∏_i w_i)g( b-a/2z_1 + a_i+b_i/2,…, b_i-a_i/2z_i + a_i+b_i/2,…).Using this discretization each plaquette interaction factor can be regarded as a twelve rank tensorP_(ijk)(lmn)(opq)(rst)= ∏_a,b,c,d=1^2 e^(β/2)U^bcU^dc∗U^ad∗U^ab = [t] ∏_a,b,c,d=1^2exp{ β/2 U(π/4z_i + π/4, π z_j, π z_k)_bc U(π/4z_l + π/4, π z_m, π z_n)_dc^∗· U(π/4z_o + π/4, π z_p, π z_q)_ad^∗ U(π/4z_r + π/4, π z_s, π z_t)_ab},where z-variable corresponds to each one of the three angles in the parameterization of the gauge group element in eq. <ref>. 
For simplicity we omit showing the indices for coordinates and directions here.The number of elements of P, namely K^12, still grows rapidly along with K, but one wants to have large K to keep the accuracy of the GL quadrature approximation. To address the large rank of the tensor, the Tucker decomposition can be used to express P as a product of lower rank tensors. In this paper we apply the higher order orthogonal iteration (HOOI) algorithm <cit.> to the plaquette tensor [ One can of course apply the higher order singular value decomposition (HOSVD) <cit.> to P. However, the HOOI has an advantage in terms of both CPU and memory. It is expected that the HOOI reproduces the result of the HOSVD. Indeed, in the numerical section in this paper, we will show convergence of this algorithm for some cases. ].The HOOI algorithm proceeds as follows.* Input: an N-rank tensor A whose bond dimension is χ. Output: a core tensor C, whose bond dimension is χ^' < χ, and a set of unitary matrices V whose dimension is χ^'×χ, so that the tensorX_I_1I_2⋯ I_N = ∑_i_1,i_2, …, i_N=1^χ^' C_i_1i_2⋯ i_N V^[1]_i_1I_1 V^[2]_i_2I_2⋯ V^[N]_i_NI_Napproximates A well. For the simplicity, here we assume that the length of each direction is the same for each A and C.* Initialize Vs as randomly generated unitary matrices.* For j-th leg each, * Apply V^[ j̃]†s to A for j̃≠ j:B_i_1i_2⋯ I_j⋯ i_N =∑_I_1,I_2,…,I_j-1,I_j+1,…,I_N =1^χA_I_1I_2⋯ I_N V^[1]†_I_1i_1 V^[2]†_I_2i_2⋯ V^[j-1]†_I_j-1i_j-1 V^[j+1]†_I_j+1i_j+1⋯ V^[N]†_I_Ni_N, * Take a truncated singular value decomposition (SVD) for the j-th leg of B:B_i_1i_2⋯ I_j⋯ i_N≈∑_k=1^χ^' O_i_1i_2⋯ k ⋯ i_Nρ_k P^†_kI_j, * Update V^[j] by P^†. * Update C asC_i_1i_2⋯ i_N = ∑_I_1,I_2,…,I_N=1^χ A_I_1I_2⋯ I_N V^[1]†_I_1i_1 V^[2]†_I_2i_2⋯ V^[N]†_I_Ni_N. * Iterate until the error |A - X|_F/|A|_F converges, where |· |_F denotes the Frobenius norm. HOOI has a quite tolerable numerical complexity to HOSVD, where SVDs are taken for each leg of A directly. Another big advantage of HOOI is that one does not need to store P explicitly in memory. Instead, one can just calculate an element of P on demand. Of course this is a tradeoff with computational complexity.After applying the HOOI, the plaquette tensor P is decomposed into a core tensor S and a set of unitary matrices V:P_ζ_n+1̂,2ζ_n+2̂,1ζ_n,2ζ_n,1 ≈ ∑_x_n,b,t_n,b,x_n-1̂,b,t_n-2̂,b=1^D S_x_n,b t_n,b x_n-1̂,b t_n-2̂,b V^[1]_x_n,bζ_n+1̂,2 V^[2]_t_n,bζ_n+2̂,1 V^[3]_x_n-1̂,bζ_n,2 V^[4]_t_n-2̂,bζ_n,1,where D < K^3 and where each ζ simply denotes a set of three indices that correspond to the roots of the Legendre polynomial (see eq. (<ref>) for the correspondence). In this way one can approximate the plaquette tensor with a memory requirement of 𝒪(D^4 + 4DK^3) instead of 𝒪(K^12).Finally, the full partition function isZ = ∑_{x,t}∏_n∑_ζ_n,1, ζ_n,2, x_n-1̂,b^', t_n-2̂,b^'[t] T_F G_n S_x_n,b t_n,b x_n-1̂,b^' t_n-2̂,b^'[ ∏_a,b=1^2𝒰_n,1^ab( ζ_n,1)^x_n^ab𝒰_n,2^ab( ζ_n,2)^t_n^ab] ·V^[4]_t_n-2̂,b^'ζ_n,1 V^[2]_t_n-2̂,bζ_n,1 V^[3]_x_n-1̂,b^'ζ_n,2V^[1]_x_n-1̂,bζ_n,2,where the summation for ζ_n,1, ζ_n,2 and for x_n-1̂,b^', t_n-2̂,b^' run over K^3 and D integers, respectively [ Note that we assume the weight and the constant factors generated from the Gaussian quadrature are incorporated to P tensor. Otherwise one should explicitly have the factors in eq. (<ref>). ]. 
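The HOOI steps listed above translate almost line by line into code. The sketch below is a generic dense-tensor HOOI in Python/NumPy; it stores each isometry with shape (original dimension, kept dimension), which is the transpose of the V matrices used above, it does not exploit the on-demand evaluation of P elements mentioned in the text, and all names are ours.

```python
import numpy as np

def hooi(A, chi, n_iter=20, seed=0):
    """Higher-order orthogonal iteration (Tucker truncation), a sketch.

    A   : N-leg numpy array (e.g. the K^3 x K^3 x K^3 x K^3 plaquette tensor)
    chi : kept bond dimension per leg, chi < A.shape[j]
    Returns a core tensor C with every leg of size chi and isometries U[j]
    of shape (A.shape[j], chi) whose columns are orthonormal.
    """
    rng = np.random.default_rng(seed)
    N = A.ndim

    def project(T, mats, skip=None):
        # contract U[i]^dagger onto leg i of T for every leg except `skip`
        for i, M in enumerate(mats):
            if i == skip:
                continue
            T = np.tensordot(M.conj().T, T, axes=(1, i))   # projected leg comes out first
            T = np.moveaxis(T, 0, i)                       # move it back into place
        return T

    # start from random isometries
    U = [np.linalg.qr(rng.standard_normal((d, chi)))[0] for d in A.shape]
    for _ in range(n_iter):
        for j in range(N):
            B = project(A, U, skip=j)                      # leg j left untouched
            M = np.moveaxis(B, j, 0).reshape(A.shape[j], -1)
            u, _, _ = np.linalg.svd(M, full_matrices=False)
            U[j] = u[:, :chi]                              # leading left singular vectors
    C = project(A, U)                                      # core tensor
    return C, U
```

For the plaquette tensor, A would be the rank-four array built on the Gauss–Legendre nodes and weights (for instance from numpy.polynomial.legendre.leggauss(K)), and chi plays the role of D; the relative error quoted in the numerical section is the Frobenius norm of A minus its reconstruction from (C, U), divided by the norm of A.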
By defining the integrated bosonic tensor asT_x_n t_n x_n-1̂ t_n-2̂ = ∑_ζ_n,1, ζ_n,2, x_n-1̂,b^', t_n-2̂,b^'T_F x_n,f t_n,f x_n-1̂,f t_n-2̂,f S_x_n,b t_n,b x_n-1̂,b^' t_n-2̂,b^'[ ∏_a,b=1^2𝒰_n,1^ab( ζ_n,1)^x_n^ab𝒰_n,2^ab( ζ_n,2)^t_n^ab] ·V^[4]_t_n-2̂,b^'ζ_n,1V^[2]_t_n-2̂,bζ_n,1 V^[3]_x_n-1̂,b^'ζ_n,2V^[1]_x_n-1̂,bζ_n,2,the partition function can be written asZ = ∑_{x,t}∏_n T_x_n t_n x_n-1̂ t_n-2̂ G_n, x_n t_n x_n-1̂ t_n-2̂.In this expression the indices with the subscript “f” denote the set of fermionic (binary) indices; i.e. x_f = (x^11,x^21,x^12,x^22). Also, the integrated indices are simply shown without subscript as in x = (x_f,x_b). § NUMERICAL RESULTS §.§ Pure SU(2) gauge theory Figure <ref> shows how the relative error converges for the SU(2) plaquette tensor as the HOOI proceeds. Here we discretize the plaquette tensor by using the roots of the Legendre polynomial with varying the number of roots N_gauge to be 3, 4, and 5. N_ gauge in this section is identified as K in the previous section; in other words, we approximate the plaquette tensor by replacing the integrals of angle by summations over the N_gauge roots of the Legendre polynomial. With the same notation in eq. (<ref>), the error in the figure is defined by| P - SV^[1]V^[2]V^[3]V^[4]|_F/| P |_F.From the figure we can observe that larger β are relatively difficult although fortunately the iteration rapidly converges in all cases. Surprisingly, in the strong coupling region β < 0.5, the accuracies are beyond the single precision even though the drastic reduction of the number of d.o.f. (from N_gauge^3× 4 to 8^4) is taken place.Next we show the efficiency of the truncated quadrature scheme by comparing free energies calculated from the tensor renormalization group with the exact solution. The latter is easy to derive in two dimensions since the partition function can be reduced to a single plaquette integral.For the sake of completeness, the partition function of the pure SU(2) gauge theory in terms of tensors is detailed in the appendix <ref> using the character expansion. Figures <ref> and <ref> show the free energy of the pure SU(2) theory and corresponding relative errors on a L=4 lattice. In these figures, “Full” indicates that the plaquette tensor with N_gauge^3 × 4 elements is treated as the fundamental tensor in the network. On the other hand, truncated cases are also shown, where N_gauge^3 × 4 elements are reduced to 8^4 (fixed for any choice of N_gauge) by using the HOOI algorithm. It is clear from the error analysis that relatively a small number of terms (i.e. N_gauge) is needed in the quadrature approximation and that the effect of the further reduction by the HOOI is quite small.We also find from the comparison to the relative errors in fig. <ref> that the β dependence is quite milder for the free energies. This might be attributed to some cancellation occurring among neighboring plaquettes. §.§ SU(2) theory coupled to reduced staggered fermions We now turn to the theory including reduced staggered fermions. Figure <ref> shows a plot of the free energy versus β on L=32 lattice.To check for the accuracy of the tensor network calculation we have compared the expectation value of the plaquette with Monte Carlo results [In general the Pfaffian arising in reduced staggered fermions suffers from a sign problem, but one can use the pseudoreal property of the gauge group to show that this is evaded in the case of SU(2). It can hence be simulated with a conventional RHMC algortithm.]. This comparison is shown in fig. 
<ref> for a lattice of size L=32 and a bond dimension of 64. Clearly the Monte Carlo agrees well with the tensor network result over a wide range of β. It is interesting to examine the small β region in more detail. This is done in fig. <ref>. The straight line shows a fit to the strong coupling result for the average plaquette, P=(1/2)^4+β/4, where the intercept arises from the leading contribution to the plaquette from expanding the fermion hopping term. One can see the stability of the TN result and that the TN calculation accurately reproduces the analytical formula. § SUMMARY In this paper we have shown how to construct a tensor network representing the path integral of reduced staggered fermions coupled to an SU(2) gauge field which is minimal in terms of its memory and computational requirements. We have described the complexities arising in formulating tensor network representations for fermions coupled to non-abelian gauge fields and shown how the use of reduced staggered fermions, combined with a HOOI-modified Gaussian quadrature algorithm for handling the gauge fields, allows for an efficient tensor representation. We use this representation to compute the free energy and the average plaquette using the Grassmann tensor renormalization group (GTRG) algorithm, finding good agreement with Monte Carlo results in the case of the latter. In general one expects that SU(N) gauge theories coupled to reduced staggered fermions will have sign problems, and this is hence the arena in which tensor formulations such as the one described in this paper will be most useful. We hope to report on such work in the near future. We thank the members of the QuLAT Collaboration for valuable discussions. This work was supported in part by the U.S. Department of Energy (DOE) under Award Numbers DE-SC0009998, DE-SC0010113, and DE-SC0019139. This research used resources of the Syracuse University HTC Campus Grid and NSF award ACI-1341006 and the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC awards HEP-ERCAP0020659 and HEP-ERCAP0023235. [Orus(2014)]Orus:2013kga author author R. Orus, 10.1016/j.aop.2014.06.013 journal journal Ann. Phys. volume 349, pages 117 (year 2014), http://arxiv.org/abs/1306.2164 arXiv:1306.2164 [cond-mat.str-el] NoStop [Bañuls et al.(2018)Bañuls, Cichy, Cirac, Jansen, and Kühn]Banuls:2018jag author author M. C. Bañuls, author K. Cichy, author J. I. Cirac, author K. Jansen, and author S. Kühn, booktitle booktitle Proceedings, 36th International Symposium on Lattice Field Theory (Lattice 2018): East Lansing, MI, USA, July 22-28, 2018, @noop journal journal PoS volume LATTICE2018, pages 310 (year 2018), http://arxiv.org/abs/1810.12838 arXiv:1810.12838 [hep-lat] NoStop [Meurice et al.(2022a)Meurice, Osborn, Sakai, Unmuth-Yockey, Catterall, and Somma]Meurice:2022xbk author author Y. Meurice, author J. C. Osborn, author R. Sakai, author J. Unmuth-Yockey, author S. Catterall, and author R. D. Somma, in @noop booktitle Snowmass 2021 (year 2022) http://arxiv.org/abs/2203.04902 arXiv:2203.04902 [hep-lat] NoStop [Meurice et al.(2022b)Meurice, Sakai, and Unmuth-Yockey]Meurice:2020pxc author author Y.
Meurice, author R. Sakai, and author J. Unmuth-Yockey,10.1103/RevModPhys.94.025005 journal journal Rev. Mod. Phys. volume 94,pages 025005 (year 2022b), http://arxiv.org/abs/2010.06539 arXiv:2010.06539 [hep-lat] NoStop [Gu et al.()Gu, Verstraete, and Wen]Gu:2010yh author author Z.-C. Gu, author F. Verstraete, and author X.-G. Wen,@noop http://arxiv.org/abs/1004.2563 arXiv:1004.2563 [cond-mat.str-el] NoStop[Gu(2013)]Gu:2013gba author author Z.-C. Gu, 10.1103/PhysRevB.88.115139 journal journal Phys. Rev. volume B88,pages 115139 (year 2013), http://arxiv.org/abs/1109.4470 arXiv:1109.4470 [cond-mat.str-el] NoStop[Bloch and Lohmayer(2023)]Bloch:2022vqz author author J. Bloch and author R. Lohmayer, 10.1016/j.nuclphysb.2022.116032 journal journal Nucl. Phys. volume B986, pages 116032 (year 2023), http://arxiv.org/abs/2206.00545 arXiv:2206.00545 [hep-lat] NoStop [Bazavov et al.(2019)Bazavov, Catterall, Jha, andUnmuth-Yockey]Bazavov:2019qih author author A. Bazavov, author S. Catterall, author R. G. Jha,andauthor J. Unmuth-Yockey, 10.1103/PhysRevD.99.114507 journal journal Phys. Rev. D volume 99, pages 114507 (year 2019), http://arxiv.org/abs/1901.11443 arXiv:1901.11443 [hep-lat] NoStop [Asaduzzaman et al.(2020)Asaduzzaman, Catterall, and Unmuth-Yockey]Asaduzzaman:2019mtx author author M. Asaduzzaman, author S. Catterall,and author J. Unmuth-Yockey, 10.1103/PhysRevD.102.054510 journal journal Phys. Rev. D volume 102, pages 054510 (year 2020), http://arxiv.org/abs/1905.13061 arXiv:1905.13061 [hep-lat] NoStop [Shimizu and Kuramashi(2014)]Shimizu:2014uva author author Y. Shimizu and author Y. Kuramashi, 10.1103/PhysRevD.90.014508 journal journal Phys. Rev. volume D90, pages 014508 (year 2014), http://arxiv.org/abs/1403.0642 arXiv:1403.0642 [hep-lat] NoStop[Liu et al.(2013)Liu, Meurice, Qin, Unmuth-Yockey, Xiang, Xie, Yu, andZou]Liu:2013nsa author author Y. Liu, author Y. Meurice, author M. P. Qin, author J. Unmuth-Yockey, author T. Xiang, author Z. Y. Xie, author J. F.Yu,and author H. Zou, 10.1103/PhysRevD.88.056005 journal journal Phys. Rev. volume D88,pages 056005 (year 2013), http://arxiv.org/abs/1307.6543 arXiv:1307.6543 [hep-lat] NoStop[Hirasawa et al.(2021)Hirasawa, Matsumoto, Nishimura, andYosprakob]Hirasawa:2021qvh author author M. Hirasawa, author A. Matsumoto, author J. Nishimura,andauthor A. Yosprakob, 10.1007/JHEP12(2021)011 journal journal JHEP volume 12, pages 011 (year 2021), http://arxiv.org/abs/2110.05800 arXiv:2110.05800 [hep-lat] NoStop [Kuramashi and Yoshimura(2020)]Kuramashi:2019cgs author author Y. Kuramashi and author Y. Yoshimura, 10.1007/JHEP04(2020)089 journal journal JHEP volume 04,pages 089 (year 2020), http://arxiv.org/abs/1911.06480 arXiv:1911.06480 [hep-lat] NoStop [Fukuma et al.(2021)Fukuma, Kadoh, and Matsumoto]Fukuma:2021cni author author M. Fukuma, author D. Kadoh, and author N. Matsumoto, 10.1093/ptep/ptab143 journal journal PTEP volume 2021, pages 123B03 (year 2021), http://arxiv.org/abs/2107.14149 arXiv:2107.14149 [hep-lat] NoStop [Kuwahara and Tsuchiya(2022)]Kuwahara:2022ubg author author T. Kuwahara and author A. Tsuchiya, 10.1093/ptep/ptac103 journal journal PTEP volume 2022, pages 093B02 (year 2022), http://arxiv.org/abs/2205.08883 arXiv:2205.08883 [hep-lat] NoStop [Kadoh et al.(2018)Kadoh, Kuramashi, Nakamura, Sakai, Takeda, and Yoshimura]Kadoh:2018hqq author author D. Kadoh, author Y. Kuramashi, author Y. Nakamura, author R. Sakai, author S. Takeda,and author Y. 
Yoshimura, 10.1007/JHEP03(2018)141 journal journal JHEPvolume 03, pages 141 (year 2018), http://arxiv.org/abs/1801.04183 arXiv:1801.04183 [hep-lat] NoStop[Kadoh et al.(2019)Kadoh, Kuramashi, Nakamura, Sakai, Takeda, and Yoshimura]Kadoh:2018tis author author D. Kadoh, author Y. Kuramashi, author Y. Nakamura, author R. Sakai, author S. Takeda,and author Y. Yoshimura, 10.1007/JHEP05(2019)184 journal journal JHEPvolume 05, pages 184 (year 2019), http://arxiv.org/abs/1811.12376 arXiv:1811.12376 [hep-lat] NoStop[Yosprakob(2023)]Yosprakob:2023jgl author author A. Yosprakob, @noop(year 2023), http://arxiv.org/abs/2311.02541 arXiv:2311.02541 [hep-th] NoStop [van den Doel and Smit(1983)]vandenDoel:1983mf author author C. van den Doel and author J. Smit, 10.1016/0550-3213(83)90401-7 journal journal Nucl. Phys. volume B228, pages 122 (year 1983)NoStop [Catterall(2023)]Catterall:2022jky author author S. Catterall, 10.1103/PhysRevD.107.014501 journal journal Phys. Rev. D volume 107, pages 014501 (year 2023), http://arxiv.org/abs/2209.03828 arXiv:2209.03828 [hep-lat] NoStop [Butt et al.(2021)Butt, Catterall, and Toga]Butt:2021koj author author N. Butt, author S. Catterall, and author G. C. Toga, 10.3390/sym13122276 journal journal Symmetry volume 13, pages 2276 (year 2021), http://arxiv.org/abs/2111.01001 arXiv:2111.01001 [hep-lat] NoStop [Takeda and Yoshimura(2015)]Takeda:2014vwa author author S. Takeda and author Y. Yoshimura, 10.1093/ptep/ptv022 journal journal Prog. Theor. Exp. Phys. volume 2015, pages 043B01 (year 2015), http://arxiv.org/abs/1412.7855 arXiv:1412.7855 [hep-lat] NoStop[Morita et al.(2018)Morita, Igarashi, Zhao, andKawashima]2018PhRvE..97c3310M author author S. Morita, author R. Igarashi, author H.-H. Zhao,and author N. Kawashima, 10.1103/PhysRevE.97.033310 journal journal Phys. Rev. volume E97, pages 033310 (year 2018), http://arxiv.org/abs/1712.01458 arXiv:1712.01458 [cond-mat.stat-mech] NoStop [Catterall and Butt(2019)]Catterall:2018pms author author S. Catterall and author N. Butt, 10.1103/PhysRevD.99.014505 journal journal Phys. Rev. volume D99,pages 014505 (year 2019), http://arxiv.org/abs/1810.00853 arXiv:1810.00853 [hep-lat] NoStop [Asaduzzaman et al.(2023)Asaduzzaman, Catterall, Meurice, Sakai, and Toga]Asaduzzaman:2022pnw author author M. Asaduzzaman, author S. Catterall, author Y. Meurice, author R. Sakai,and author G. C. Toga, 10.1007/JHEP01(2023)024 journal journal JHEPvolume 01, pages 024 (year 2023), http://arxiv.org/abs/2210.03834 arXiv:2210.03834 [hep-lat] NoStop [De Lathauwer et al.(2000a)De Lathauwer, De Moor, and Vandewalle]de2000best author author L. De Lathauwer, author B. De Moor,and author J. Vandewalle, @noopjournal journal SIAM J. Matrix Anal. Appl. volume 21, pages 1324 (year 2000a)NoStop [De Lathauwer et al.(2000b)De Lathauwer, De Moor, and Vandewalle]de2000multilinear author author L. De Lathauwer, author B. De Moor,and author J. Vandewalle, @noopjournal journal SIAM J. Matrix Anal. Appl. 
volume 21, pages 1253 (year 2000b)NoStop § CHARACTER EXPANSION FORMULAE The character expansion is given bye^( β/2 ) [ U_n,1 U_n+1̂,2 U_n+2̂,1^† U_n,2^†] = ∑_r_n=0^∞ F_r_n( β) χ_r_n( U_n,1 U_n+1̂,2 U_n+2̂,1^† U_n,2^†).For the SU(2) case, F is expressed using the modified Bessel function of the first kind I:F_r( β) = I_2r( β) - I_2r+2( β) = 2 ( 2r+1 ) I_2r+1( β)/β.χ is called the character, whose properties are given below.The character of the product of the group elements can be broken up into the trace over the product of the matrix represnetation of the group elements:χ_r_n( U_n,1 U_n+1̂,2 U_n+2̂,1^† U_n,2^†) = ∑_a,b,c,d D^[r_n]_ab( U_n,1) D^[r_n]_bc( U_n+1̂,2) D^[r_n]†_cd( U_n+2̂,1) D^[r_n]†_da( U_n,2)Note that the dimensions of the matrices (the ranges of a, b, c, d) depend on the label of the irreducible representation of the group r. D is called the Wigner D-matrix.The D-matrices satisfy an orthogonality condition∫dU D^[r_1]_i_1j_1( U ) D^[r_2]∗_i_2j_2( U ) = 1/2r_1+1δ_r_1r_2δ_i_1i_2δ_j_1j_2.§ PURE SU(2) WITH CHARACTER EXPANSIONThe lattice action of the 2D pure SU(2) model is given byS = - β/2∑_n[ U_n,1 U_n+1̂,2 U_n+2̂,1^† U_n,2^†]with the inverse coupling constant β=1/g^2 and the link variables U_n,μ=exp{igA^i_n,μT^i}. T is the generator of SU(2).We make a tensor network representation of the partition functionZ= ∫𝒟U e^-S= ∫𝒟U ∏_n e^( β/2 ) [ U_n,1 U_n+1̂,2 U_n+2̂,1^† U_n,2^†],where 𝒟U = ∏_ndU_n,1dU_n,2 is the SU(2) Haar measure. By using the well known formulae (<ref>), (<ref>), the partition function can be written using the Wigner D-matrices:Z= ∑_{ r,x,t,x^',t^'}∏_n[t] F_r_n( β) ∫dU_n,1 D^[r_n]_t^'_n,1 t^'_n,2( U_n,1) D^[r_n-2̂]∗_t_n-2̂,1 t_n-2̂,2( U_n,1) ·∫dU_n,2 D^[r_n]∗_x^'_n,1 x^'_n,2( U_n,2) D^[r_n-1̂]_x_n-1̂,1 x_n-1̂,2( U_n,2) ·δ_t^'_n,2 x_n,1δ_x_n,2 t_n,2δ_t_n,1 x^'_n,2δ_x^'_n,1 t^'_n,1.The summation ∑_{·} denotes the summation over the corresponding indices all over the sites and links; this rule is inherited throughout this paper.Now we can integrate out the original link variables by using the orthogonality condition (<ref>) and obtain a tensor network representation:Z= ∑_{ r,x,t }∏_nF_r_n( β)/( 2r_n+1 )^2δ_r_n r_n-1̂δ_r_n r_n-2̂δ_t_n-2̂,2 x_n,1δ_x_n,2 t_n,2δ_t_n,1 x_n-1̂,2δ_x_n-1̂,1 t_n-2̂,1.An object in the product in the righthand side can be regarded as a tensor placed on the center of each plaquette.Note that all the indices associated to plaquette (r in eq. (<ref>)) take the same value in two dimensions. In other words, if one fixes one r, every other r takes the same value owing to δ_r_n r_n-1̂ and δ_r_n r_n-2̂. One may call this property the Gauss's law.
http://arxiv.org/abs/2312.16167v1
{ "authors": [ "Muhammad Asaduzzaman", "Simon Catterall", "Yannick Meurice", "Ryo Sakai", "Goksu Can Toga" ], "categories": [ "hep-lat" ], "primary_category": "hep-lat", "published": "20231226185549", "title": "Tensor network representation of non-abelian gauge theory coupled to reduced staggered fermions" }
Disentangled Continual Learning: Separating Memory Edits from Model Updates

The ability of machine learning systems to learn continually is hindered by catastrophic forgetting, the tendency of neural networks to overwrite existing knowledge when learning a new task. Existing continual learning methods alleviate this problem through regularisation, parameter isolation, or rehearsal, and are typically evaluated on benchmarks consisting of a handful of tasks. We propose a novel conceptual approach to continual classification that aims to disentangle the class-specific information that needs to be memorised from the class-agnostic knowledge that encapsulates generalization. We store the former in a buffer that can be easily pruned or updated when new categories arrive, while the latter is represented with a neural network that generalizes across tasks. We show that the class-agnostic network does not suffer from catastrophic forgetting and, by leveraging it to perform classification, we improve accuracy on past tasks over time. In addition, our approach supports open-set classification and one-shot generalization. To test our conceptual framework, we introduce Infinite dSprites, a tool for creating continual classification and disentanglement benchmarks of arbitrary length with full control over generative factors. We show that over a sufficiently long time horizon all major types of continual learning methods break down, while our approach enables continual learning over hundreds of tasks with explicit control over memorization and forgetting.

§ INTRODUCTION

A machine learning system designed for continual learning must not only adapt to the current task, but also improve its performance on past tasks and build representations that facilitate the learning of future tasks. The latter two requirements are known as backward and forward transfer. The path to meeting these requirements is obstructed by catastrophic forgetting, the inability to preserve existing knowledge upon learning new information. As noted in early studies <cit.>, catastrophic forgetting is caused by destructive model updates, where adjustments to model parameters, made through gradient descent, focus solely on the current task's objective and can potentially impair performance on past tasks.

To mitigate this issue, continual learning methods employ strategies such as (i) regularization, which aims to preserve existing knowledge by limiting the plasticity of selected network weights <cit.>, (ii) parameter isolation or dynamic architectures, which effectively solve each task with a dedicated model <cit.>, or (iii) replay, which augments the training data with stored samples from past tasks <cit.>.

Most continual learning methods are evaluated on image classification benchmarks in which a discriminative model is transferred across tasks that typically involve disjoint sets of classes. We argue that this purely discriminative learning framework is not conducive to positive forward or backward transfer. Supervised classification networks tend to preserve only the features that are relevant for predicting the output labels in the training data <cit.>. In a continual learning setting, these features transfer poorly to future tasks with a completely different set of labels.
Conversely, gradient updates with respect to current task's objective do not encourage preserving features relevant to previous tasks.Based on these observations, we propose an alternative paradigm for continual learning centered around the idea of transferring modules that learn the general aspects of the problem (for example identity-preserving transforms that act similarly on all objects, such as illumination changes). We hypothesize that destructive model updates can be avoided by separating two objectives: (i) generalization, or learning class-agnostic transforms that successfully transfer to past and future tasks, and (ii) memorization of class-specific information. Crucially, our framework resolves the catastrophic forgetting issue by disentangled learning, that is, having a separate update procedure for the generalization model and the memory buffer (please note the difference from learning disentangled representations).This separation allows us to maintain, prune, and expand task-specific knowledge stored in the memory buffer while continuously training the generalization model. By focusing on learning the universal transformations, we can not only avoid destructive gradient updates, but efficiently accumulate knowledge over time.To demonstrate our proposed idea of learning universal transformations, we introduce ids, a continual learning benchmark generator inspired by the dSprites dataset <cit.>. It allows for procedurally generating a virtually infinite progression of random two-dimensional shapes. Similar to <cit.>, we generate each unique shape in every combination of orientation, scale, and position (see <ref>). Most importantly, by providing the ground truth values of individual fov, ids enables us to learn general transformations, thereby separating generalization from memorization and testing our main hypothesis. We hope that by releasing ids as a Python package we will encourage the research community to test their methods on our benchmark.Section <ref> describes an implementation of our disentangled learning framework in the context of class-incremental continual learning. Our proof of concept consists of an equivariant network that learns to regress the parameters of an affine transform that maps any input into its canonical form, a normalisation module that applies the predicted affine transformation to the input, and an exemplar buffer that stores a single exemplar per class. <Ref> shows the main components of our framework. Contributions We summarize the most important contributions of this work below: * We introduce a new framework for generating continual classification and disentanglement benchmarks that for the first time allows testing continual learning methods over thousands of tasks. We will open-source our software package upon acceptance. 1em* We propose a novel continual learning paradigm based on learning symmetry transformations, which circumvents catastrophic forgetting by separating gradient-based model updates from explicit memory edits. 1em* We demonstrate that as the number of tasks grows, regularization-based continual learning methods quickly break down and replay-based methods either deteriorate in performance or become impractical due to extensive use of memory and compute. 1em* We show that our approach exhibits significant forward and backward transfer, strong open-set classification performance, and excellent zero-shot generalisation. 
It can learn over hundreds of tasks with a constant computational budget and a slowly growing memory footprint. § TWO PROBLEMS WITH THE CURRENT APPROACH TO CLASS-INCREMENTAL CONTINUAL LEARNING We begin by addressing two aspects of current continual learning research that motivate our contributions. Benchmarking Continual learning datasets are typically limited to just a few tasks and, at most, a few hundred classes. In contrast, humans can learn and recognise thousands of novel objects throughout their lifetime. We argue that as the continual learning community we should focus more on scaling the number of tasks in our benchmarks. We show that when tested over hundreds of tasks standard methods inevitably fail: the effect of regularization decays over time, adding more parameters quickly becomes unpractical, and storing and replaying old examples causes a rapid increase in both memory and compute. Moreover, to tackle individual sub-problems in continual learning, such as the influence of task similarity on forgetting, the role of disentangled representations, and the influence of hard task boundaries, we need to be able to flexibly create datasets that let us isolate these problems. We should also step away from the static training and testing stages and embrace streaming settings where the model can be evaluated an any point. Finally, to advance beyond classification tasks, we need richer ground truth data than just class labels.These observations motivated us to create a novel evaluation protocol. Taking inspiration from object-centric disentanglement libraries <cit.>, we introduce the ids framework that allows for procedurally generating virtually infinite streams of data while retaining full control over the number of tasks, their difficulty and respective similarity, and the nature of boundaries between them. We hope that it will be a useful resource to the community and unlock new research directions in continual learning. Invariant representations Continual learning methods are usually benchmarked on class-incremental setups, where a classification problem is split into several tasks to be solved sequentially <cit.>. Note that the classification learning objective is invariant to identity-preserving transformations of the object of interest, such as rotation, change of lighting, or perspective projection. Unsurprisingly, the most successful discriminative learning architectures, from AlexNet <cit.> to ResNet <cit.>, learn only features that are relevant to the classification task <cit.> and discard valuable information about universal transformations, symmetries, and compositionality. In doing so, they entangle the particular class information with the knowledge about generalization mechanisms and represent both in the weights of the model. When a new task arrives, there is no clear way to update the two separately.In this paper, we reframe the problem and recognize that the information about identity-preserving transformations, typically discarded, is important for transfer across tasks. For instance, a change in illumination affects objects of various classes similarly. Understanding this mechanism would lead to better generalization on future classes. Consequently, we suggest that modeling these transformations is key to achieving positive forward and backward transfer in continual classification. Symmetry transformations, or equivariances, offer a structured framework for this kind of modeling, which we elaborate on in the subsequent section. 
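Before turning to the method, the distinction can be made concrete with a toy sketch (our own illustration in NumPy, not part of the benchmark): an invariant feature discards the transformation entirely, while an equivariant feature commutes with it and therefore retains the structure of the transformation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)        # a toy 1-D signal standing in for an image
shift = 5                      # an identity-preserving transform: a circular shift
x_shifted = np.roll(x, shift)

# Invariant feature: unchanged by the transform, so it tells us nothing about it.
invariant = lambda v: v.sum()
assert np.isclose(invariant(x), invariant(x_shifted))

# Equivariant feature: transforming the input transforms the output in the same way,
# so the map preserves (and can be used to model) the transformation.
equivariant = lambda v: v - np.roll(v, 1)   # a circular difference filter
assert np.allclose(equivariant(x_shifted), np.roll(equivariant(x), shift))
```

A classifier only needs the invariant part to fit a fixed label set, which is why standard discriminative training tends to discard the rest; modeling the equivariant part is what allows the same transformation knowledge to transfer to unseen classes.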
§ METHODSIn this section, we describe two important contributions of this work: a software package for generating arbitrarily long continual learning benchmarks and a conceptual disentangled learning framework accompanied by an example implementation. We would like to emphasize that this work aims to provide a new perspective on knowledge transfer in continual learning, and to propose new benchmarks for evaluating continual learning methods. Our implementation serves as a proof of concept, spotlighting the potential of equivariance learning, and is not intended as a practical method for general use. §.§ Infinite dSprites We introduce ids, a novel framework inspired by dSprites <cit.>, designed for easy creation of arbitrarily long continual learning benchmarks. A single ids benchmark consists of T tasks, where each task is an n-fold classification of procedurally generated shapes. Similar to dSprites, each shape is observed in all possible combinations of the following fov: color, scale, orientation, horizontal position, and vertical position. <Ref> shows an example batch of images with four fov and two values per factor (in general, our implementation allows for arbitrary granularity). The canonical form corresponds to a scale of 1, orientation of 0, and horizontal and vertical positions of 0.5. We only use a single color in our experiments for simplicity and to save computation.The shapes are generated by first randomly sampling the number of vertices from a discrete uniform distribution over a closed integer interval a, b, then constructing a regular polygon on a unit circle, randomly perturbing the polar coordinates of each vertex, and finally connecting the perturbed vertices with a closed spline of the order randomly chosen from {1, 3}. All shapes are then scaled and centered so that their bounding boxes are the same size and their centers of mass align in the canonical form. We also make orientation identifiable by painting one half of the shape black.The number of tasks T, the number of shapes per task n, the vertex number interval a, b, the exact fov ranges, and the parameters of noise distributions for radial and angular coordinates are set by the user, providing the flexibility to control the length and difficulty of the benchmark. The framework also provides access to the ground truth values of the individual fov. We will release ids as a Python package and hope it will unlock new research directions in continual classification, transfer learning, and continual disentanglement. §.§ Disentangled learning With our procedural benchmark generator, we can test continual learning methods over time frames an order of magnitude longer than those covered by existing datasets. As previously mentioned, we hypothesize that to learn efficiently over such time horizons, we need to clearly distinguish between the generalization mechanism that needs to be learned and the class-specific information that has to be memorized. We start by observing that human learning is likely characterized by such separation. Take face recognition, for example. A child is able to memorize the face of its parent but can still get confused by an unexpected transformation, as evidenced by countless online videos of babies failing to recognize their fathers after a shave. 
Once we learn the typical identity-preserving transformations that a face can undergo, we need only to memorize the particular features of any new face to instantly generalize over many transformations, such as facial expression, lighting, three-dimensional rotation, scale, perspective projection, or a change of hairstyle. Note that while we encounter new faces every day, these transforms remain consistent and affect every face similarly.Inspired by this observation, we aim to disentangle generalization from memorization by explicitly separating the learning module from the memory buffer in our model design. The memory buffer stores a single exemplar image of each encountered shape. We assume these are given to us by an oracle throughout training, but it would be possible to bootstrap the buffer with a few initial exemplars. The equivariance learning module is a neural network designed to learn the general transformations present in the data. Our implementation and training objective We tackle the class-incremental scenario <cit.>: at each task n we observe a set of data points D_n={(x_n, m, y_n, m)}_m=1^N_n, where x_n, m is an image and y_n, m is a vector containing the values of the generative factors and the class label. Each data point belongs to one of C_n distinct classes. At the start of each task we add the current task exemplars E_n={x̂_i}_i=1^C_n to the exemplar buffer. We then train a neural network to output a transformation matrix θ̂ that approximates the ground truth affine transformation matrix θ mapping each input image to its respective exemplar: x̂ = T_θ(x). The network is trained continually to minimize the MSE loss between θ̂ and θ. At test time, we use the trained network to regress an affine transformation matrix mapping a previously unseen image to its exemplar. The normalization module applies the predicted transform to the input image using grid sampling with border values for out-of-bound grid locations. The image is then classified through nearest neighbour lookup. Our normalization module is differentiable and therefore allows for backpropagating a loss computed in image space. In practice, we found that this additional reconstruction loss does not improve much over direct fov supervision.Discussion The disentangled learning approach has a number of advantages. First, by learning transformations instead of class boundaries, we reformulate a challenging class-incremental classification scenario as a domain-incremental fov regression learning problem <cit.>. Since the transformations affect every class in the same way, they are easier to learn in a continual setting. We show that this approach is not only less prone to forgetting but exhibits significant forward and backward transfer. In other words, the knowledge about regressing fov is efficiently accumulated over time. Second, the exemplar buffer is a fully explainable representation of memory that can be explicitly edited: we can easily add a new class or completely erase a class from memory by removing its exemplar. Finally, we show experimentally that our method generalises instantly to new shapes with just a single exemplar and works reliably in an open-set classification scenario. <Ref> illustrates the stages of the classification mechanism: five input images, their corresponding normalization network outputs, and closest exemplars from the buffer. § RELATED WORK §.§ Continual learningContinual learning literature typically focuses on catastrophic forgetting in supervised classification. 
Parameter isolation methods use dedicated parameters for each task by periodically extending the architecture while freezing already trained parameters <cit.> or by relying on isolated sub-networks <cit.>. Regularization approaches aim to preserve existing knowledge by limiting the plasticity of the network. Functional regularization methods constrain the network output through knowledge distillation <cit.> or by using a small set of anchor points to build a functional prior <cit.>. Weight regularization methods <cit.> directly constrain network parameters according to their estimated importance for previous tasks. In particular, vcl <cit.> derives the importance estimate by framing continual learning as sequential approximate Bayesian inference. Most methods incorporate regularization into the objective function, but it is also possible to implement it using constrained optimization <cit.>. Finally, replay methods <cit.> retain knowledge through rehearsal. When learning a new task, the network is trained with a mix of new samples from the training stream and previously seen samples drawn from the memory buffer. A specific case of this strategy is generative replay <cit.>, where the rehearsal samples are produced by a generative model trained to approximate the data distribution for each class. Many continual learning methods are hybrid systems that mix and match the above techniques. §.§ Benchmarking continual learningEstablished continual learning benchmarks primarily involve splitting existing computer vision datasets into discrete, non-overlapping segments to study continual supervised classification. Notable examples in this domain include split MNIST <cit.>, split CIFAR <cit.>, and split MiniImageNet <cit.>, along with their augmented counterparts, such as rotated MNIST <cit.>, and permuted MNIST <cit.>. More recently, contributions from <cit.>, <cit.> and <cit.> have enriched the field with dataset designed specifically for continual learning, such as CORe50, CLAD, and Stream-51, which comprise temporally correlated images with diverse backgrounds and environments. § EXPERIMENTS In this section, we evaluate standard continual learning methods and our disentangled learning framework on a benchmark generated using ids. The benchmark consists of 200 classification tasks. For each task, we randomly generate 10 shapes and create an image dataset showing the shapes in all combinations of 4 fov with 8 possible values per factor. This gives us 40,960 samples per task, which we then randomly split into training, validation, and test sets with a 98:1:1 ratio. The test set is being accumulated at each task until it reaches the maximum limit of 50,000 samples, at which point we employ reservoir sampling to include new classes while keeping the test set bounded and balanced. After training on each task, we report test accuracy on the accumulated test set. To ensure a reliable comparison, we use the same backbone (ResNet-18 <cit.>) for every method. We also make sure that all models are trained until convergence.We aim to position our approach among existing continual learning methods, as well as understand its generalization properties. More concretely, the experiments answer the following questions: * (Section <ref>) How does our approach compare to regularisation-based continual learning baselines?* (Section <ref>) How does our approach compare to replay-based continual learning baselines? 
How is the performance of replay affected by memory buffer size and computational budget?* (Section <ref>) Do we need equivariance or is learning invariant representations enough?* (Section <ref>) Is our approach able to generalise instantly to previously unseen shapes?* (Section <ref>) Can our approach perform open-set classification, i.e. distinguish between new and previously seen shapes?* (Section <ref>) Can we use our approach in the online learning scenario, where each sample from the training stream is observed only once?§.§ Regularization methods In this section, we compare our method to standard regularization methods: lwf <cit.> and si <cit.>. We use implementations from Avalanche <cit.>. We provide details of the hyperparameter choice in the supplementary material. As shown in <ref>, such regularization methods are ill-equipped to deal with the class-incremental learning scenario and perform no better than naive fine-tuning.§.§ Replay-based methods Replay-based methods retain a subset of past training data that is then mixed with the current training data for every task. While this can be a viable strategy to preserve accuracy over many tasks, it results in ever-growing memory and computation requirements, unless the buffer size is bounded. In this section, we investigate the effect of buffer size on performance for standard experience replay with reservoir sampling. While there are replay-based methods that improve on this baseline, we are interested in investigating the fundamental limits of rehearsal over long time horizons and strip away the confounding effects of data augmentation, generative replay, sampling strategies, pseudo-rehearsal etc. <Ref> shows test accuracy for experience replay with different buffer sizes. Storing enough past samples lets the model maintain high test accuracy, but even with a buffer of 20,000 images the performance eventually starts to deteriorate. Note that after 200 tasks a balanced buffer will only contain 10 samples per class. A note on implementation In an attempt to make the replay baseline stronger, we first add the data from the current task to the buffer and then train the model exclusively on the buffer, effectively discounting the influence of the current task over time <cit.>. A more standard version of experience replay would mix current and past data in equal proportions in each mini-batch, likely leading to diminished performance on previous tasks. The supplementary material includes a comparison to this other replay baseline, as well as to a version of experience replay with no memory constraint but with a compute budget matching our approach. §.§ Do we need equivariance? By training a network to regress the value of each fov, we learn a representation that is equivariant to affine transformations. However, we could also take advantage of the shape labels to learn an invariant representation that we could then use to perform classification. In this section, we directly compare equivariant and invariant learning to demonstrate further that learning an equivariant representation is the key to achieving effective continual learning within our framework.Our baseline for invariant representation learning is based on SimCLR <cit.>, a simple and effective contrastive learning algorithm that aims to learn representations invariant to data augmentations. To adapt SimCLR to our problem, we introduce two optimization objectives. 
The first objective pulls the representation of each training point towards the representation of its exemplar while repelling all other training points. The second objective encourages well-separated exemplar representations by pushing the representations of all exemplars in the current task away from each other. We observed that the first training objective alone is sufficient, but including the second loss term speeds up training. For each task, we train the baseline until convergence. At test time, the class labels are assigned through nearest neighbor lookup in the representation space. Similar to our method, we store a single exemplar per class.<Ref> shows test accuracy for both methods over time. The performance of the contrastive learning baseline decays over time, but not as rapidly as naive fine-tuning. Note that in contrast to our method, invariant learning could benefit from storing more than one exemplars per class. The supplementary material provides an exact formulation of the contrastive objective and implementation details. §.§ One-shot generalization To evaluate whether the learned regression network can generalize to unseen classes, we perform a one-shot learning experiment. Here, the model had to normalize and classify transformed versions of shapes it had not previously encountered.Since the returned class label depends on the exemplars in the buffer, we consider two variants of the experiment, corresponding to generalized and standard one-shot learning. In the first one, we keep the training exemplars in the buffer and add new ones. In the second, the buffer is cleared before including novel exemplars. We also introduce different numbers of test classes. The classification accuracies are presented in Table <ref>. As expected, keeping the training exemplars in the buffer and adding more test classes makes the task harder. Nevertheless, the accuracy stays remarkably high, showing that the equivariant network has learned a correct and universal mechanism that works even for previously unseen shapes. This is the essence of our framework.§.§ Open-set classification Next, we investigate how well our proposed framework can detect novel shapes. This differs from the one-shot generalization task because we do not add the exemplars corresponding to the novel shapes to the buffer. Instead of modifying the learning setup, we use a simple heuristic based on an empirical observation that our model can almost perfectly normalize any input—we classify the input image as unseen if we can't find an exemplar that matches the normalized input significantly better than others.Denoting the equivariant network output by θ̂ and the two best candidate exemplars by c_1 and c_2, we classify the test input as novel if c_1-T_θ̂(x) _2^2 > σc_2-T_θ̂(x) _2^2. <Ref> shows the precision-recall curve for different values of σ, with an overall area under curve of 0.92. We also present qualitative results in <ref>.§.§ Online vs. offlineIn all previous experiments, we applied our method in batch mode: we performed multiple training passes over the data for each task. However, efficiently learning from streaming data might require observing each training sample only once to make sure computation is not becoming a bottleneck. This is why we test our method in the online learning regime and compare it to two batch learning scenarios. The results are shown in <ref>. 
Unsurprisingly, training for multiple epochs results in better and more robust accuracy on past tasks; it is, however, worth noting that our method still improves over time in the online learning scenario. It is possible that, given enough tasks, all three curves would converge.

§ DISCUSSION

In the last decade, continual learning research has made progress through parameter and functional regularization, rehearsal, and architectural strategies that mitigate forgetting by preserving important parameters or compartmentalizing knowledge. As pointed out in a recent survey <cit.>, the best-performing continual learners are based on storing or synthesizing samples. Such methods are typically evaluated on sequential versions of standard computer vision datasets such as MNIST or CIFAR-100, which often involve only a small number of learning tasks, discrete task boundaries, and fixed data distributions. As such, the benchmarks do not match the lifelong nature of real-world learning tasks.

Our work is motivated by the hypothesis that state-of-the-art continual learners and their predecessors would inevitably fail when trained in a true lifelong fashion akin to humans. To test our claim, we introduced the Infinite dSprites benchmark, consisting of procedurally generated shapes and their affine transformations. To our knowledge, this is the first class-incremental continual learning benchmark that allows generating hundreds or thousands of tasks. While acknowledging the relatively simplistic nature of our dataset, we believe any lifelong learner must solve Infinite dSprites before tackling more complicated, real-world datasets. Nevertheless, our empirical findings highlight that all standard methods are doomed to collapse, and memory buffers can only defer the ultimate end.

Updating synaptic connections in the human brain upon novel experiences does not interfere with the general knowledge accumulated throughout life. Inspired by this insight, we propose our disentangled learning framework, which splits the continual learning problem into (i) sequentially training a network that models the general aspects of the problem that apply to all instances (equivariances) and (ii) memorizing class-specific information relevant to the task (exemplars). This separation enables disentangled model updating, which allows for continually learning equivariant representations without catastrophic forgetting and for explicitly updating class-specific information without harming information corresponding to other classes. As demonstrated experimentally, such a separation exhibits successful forward and backward transfer and achieves impressive one-shot generalization and open-set recognition performance.

Limitations With this work, we aim to bring a fresh perspective and chart a novel research direction in continual learning. To demonstrate our framework, we stick to a simple dataset and include the correct inductive biases in our learning architecture. We acknowledge that when applied to natural images, our approach would suffer from a number of issues, which we list below, along with some mitigation strategies.

* Real-world data does not come with perfect supervision signals, hindering the learning of equivariant networks. As a remedy, one might employ equivariant architectures as an inductive bias <cit.> or weakly supervise the learning, e.g. with image-text pairs <cit.>.

* Obtaining class exemplars for real-world data is not straightforward, which makes training the normalization network difficult.
A potential solution is to maintain multiple exemplars per class.

* It is not clear that we can separate generalization and memorization for any continual learning problem. We plan to investigate this question on a real-world dataset.

Acknowledgements This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. This research utilized compute resources at the Tübingen Machine Learning Cloud, DFG FKZ INST 37/1057-1 FUGG. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting SD. This work was supported by the National Centre of Science (Poland) Grants No. 2020/39/B/ST6/01511 and 2022/45/B/ST6/02817.

§ EXPERIMENT DETAILS

All models employ the same ResNet-18 <cit.> backbone and are trained using the Adam optimizer <cit.> with default PyTorch <cit.> parameter values (λ=0.001, β_1=0.9, β_2=0.999). For our method we always report the average test accuracy over 5 runs with 5 epochs per task. For other methods, due to computational constraints, we only report the accuracy of a single run.

§.§ Regularization methods

We ran a grid search to set the loss balance weight λ_o in lwf <cit.> and the strength parameter c in si <cit.>, but found that the choice of hyperparameter did not influence the result. For the run shown in Figure <ref>, we used λ_o=0.1 and c=0.1.

§.§ Replay methods

For the experiments presented in the main text, we use a modified version of experience replay, inspired by the approach of GDumb <cit.>. At the beginning of each task t, we add all the training data from the current task to the buffer. If the buffer is full, we employ reservoir sampling so that the memory buffer contains an equal number of examples of every class seen so far, including the classes in task t. We then train the model on the memory buffer until convergence and report test accuracy.

Here we present results for a different rehearsal approach. For each task t, we extend every mini-batch of the training set with an equal number of samples chosen randomly (with replacement) from the buffer. We train the model for five epochs per task. <Ref> shows a comparison of this training protocol to our method. Surprisingly, this way of doing replay yields better test set accuracy than the replay baseline we used in the main text, despite putting a disproportionate weight on the current task.

<Ref> shows a comparison of our method to experience replay with no limit on the buffer size. Here we also mix the samples from the current task with randomly chosen buffer samples and train for five epochs per task.

§.§ Contrastive baseline

The optimization objective of the contrastive baseline consists of two components. The first one ensures that each sample in the batch is pulled towards its exemplar and pushed away from all the other samples in the mini-batch that belong to a different class. Using the terminology from <cit.>, a sample x_i and its corresponding exemplar x̂_i constitute a positive pair. Denoting the class of a sample as c(x), the mini-batch size as N, and the representations of x_i and x̂_i as z_i and ẑ_i, respectively, the first component of the loss is:

l_1(x_i) = -log [ exp(sim(z_i, ẑ_i) / τ) / ∑_k=1^N 1_[c(k) ≠ c(i)] exp(sim(z_i, z_k) / τ) ],

where sim(z_i, z_j) is the cosine similarity between z_i and z_j. The second component encourages well-separated exemplar representations by pushing apart the representations of all the exemplars in the current mini-batch.
Denoting the number of distinct classes in the mini-batch as C, we have:

l_2(x_i) = -log [ exp(1 / τ) / ∑_k=1^C 1_[k ≠ i] exp(sim(ẑ_i, ẑ_k) / τ) ].

The final mini-batch loss is ℒ = ∑_i=1^N ( l_1(x_i) + l_2(x_i) ).
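For concreteness, the sketch below implements the two components exactly as written above in PyTorch; the tensor shapes, the temperature value, and all variable names are our own illustrative choices and not the authors' released code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_exemplar, labels, tau=0.1):
    """z: (N, d) sample embeddings; z_exemplar: (C, d) one embedding per class in the
    mini-batch; labels: (N,) class index of each sample. Returns sum_i l_1(x_i) + l_2(x_i)."""
    z = F.normalize(z, dim=1)                  # cosine similarity becomes a dot product
    z_ex = F.normalize(z_exemplar, dim=1)

    sim_zz = z @ z.T / tau                     # (N, N) sample-to-sample similarities
    sim_ze = (z * z_ex[labels]).sum(1) / tau   # (N,)  similarity of each sample to its own exemplar
    sim_ee = z_ex @ z_ex.T / tau               # (C, C) exemplar-to-exemplar similarities

    # l_1: attract each sample to its exemplar; the denominator runs only over
    # samples of a different class, as in the indicator 1[c(k) != c(i)].
    neg = (labels[:, None] != labels[None, :]).float()
    l1 = -(sim_ze - (sim_zz.exp() * neg).sum(1).log())

    # l_2: push exemplars of different classes apart; the numerator is exp(1/tau)
    # because sim(z_hat_i, z_hat_i) = 1 for normalized embeddings.
    C = z_ex.shape[0]
    off_diag = 1.0 - torch.eye(C, device=z.device)
    l2_per_class = -(1.0 / tau - (sim_ee.exp() * off_diag).sum(1).log())
    l2 = l2_per_class[labels]

    return (l1 + l2).sum()

# Toy usage: 8 samples from 4 distinct classes, 32-dimensional embeddings.
z = torch.randn(8, 32)
z_ex = torch.randn(4, 32)
labels = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
print(contrastive_loss(z, z_ex, labels))
```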
http://arxiv.org/abs/2312.16731v1
{ "authors": [ "Sebastian Dziadzio", "Çağatay Yıldız", "Gido M. van de Ven", "Tomasz Trzciński", "Tinne Tuytelaars", "Matthias Bethge" ], "categories": [ "cs.LG", "cs.CV" ], "primary_category": "cs.LG", "published": "20231227220542", "title": "Disentangled Continual Learning: Separating Memory Edits from Model Updates" }
Continuous-time Autoencoders for Regular and Irregular Time Series Imputation

Hyowon Wi, Yonsei University, 50 Yonsei-ro, Seoul, South Korea ([email protected])
Yehjin Shin, Yonsei University, 50 Yonsei-ro, Seoul, South Korea ([email protected])
Noseong Park (corresponding author), Yonsei University, 50 Yonsei-ro, Seoul, South Korea ([email protected])

Time series imputation is one of the most fundamental tasks for time series. Real-world time series datasets are frequently incomplete (or irregular with missing observations), in which case imputation is strongly required. Many different time series imputation methods have been proposed. Recent self-attention-based methods show the state-of-the-art imputation performance. However, designing an imputation method based on continuous-time recurrent neural networks (RNNs), i.e., neural controlled differential equations (NCDEs), has been overlooked for a long time. To this end, we redesign time series (variational) autoencoders based on NCDEs. Our method, called continuous-time autoencoder (CTA), encodes an input time series sample into a continuous hidden path (rather than a hidden vector) and decodes it to reconstruct and impute the input. In our experiments with 4 datasets and 19 baselines, our method shows the best imputation performance in almost all cases.

CCS Concepts: Information systems → Data mining; Computing methodologies → Machine learning.

§ INTRODUCTION

Time series is one of the most frequently occurring data formats in real-world applications, and there exist many machine learning tasks related to time series, ranging from stock price forecasting to weather forecasting <cit.>. These applications frequently assume complete time series. In reality, however, time series can be incomplete with missing observations, e.g., when a weather station's sensors are damaged for a while. As a matter of fact, many famous benchmark datasets for time series forecasting/classification are pre-processed with imputation methods to make them complete, e.g., <cit.>. In this regard, time series imputation is one of the most fundamental topics in the field of time series processing. However, its difficulty lies in that i) time series is incomplete since some elements are missing, and moreover, ii) the missing rate can sometimes be high, e.g., <cit.>.

To this end, diverse approaches have been proposed, ranging from simple interpolations to deep learning-based methods. The deep learning-based methods can be further categorized into recurrent neural network-based <cit.>, variational autoencoder-based <cit.>, generative adversarial network-based <cit.>, self-attention-based <cit.>, and some others. Among them, self-attention-based methods, e.g., SAITS <cit.>, show the state-of-the-art imputation quality. SAITS adopts dual layers of transformers since time series imputation is challenging and a single layer of transformers is therefore not sufficient.

Our approach: In this work, we propose the concept of Continuous-Time Autoencoder (CTA) to impute time series.
We, for the first time, extend (variational) autoencoders for processing time series in a continuous manner — there already exist some other non-continuous-time autoencoders for time series, e.g., Latent ODE <cit.>. To enable our concept, we resort to neural controlled differential equations (NCDEs) which are considered as continuous-time recurrent neural networks (RNNs) <cit.>. The overall framework follows the (variational) autoencoder architecture <cit.> with an NCDE-based encoder and decoder (see Fig. <ref>). Since time series for imputation is inevitably irregular with missing elements, our method based on continuous-time RNNs, i.e., NCDEs, is suitable for the task. Table <ref> summarizes how the state-of-the-art RNN, VAE, GAN, and transformer-based imputation models process irregular and incomplete time series inputs — existing methods primarily focus on regular time series and process irregular time series with heuristics, e.g., time decay. BRITS gives a decay on the time lag, and GP-VAE uses the raw timestamp as an additional feature. The temporal information is not used in GAIN, and SAITS adopts the positional encoding within its transformer. Moreover, they fill out missing values simply with zeros, which introduces noises into the data distribution. To address these limitations, our proposed method resorts to the NCDE technology by creating the continuous hidden path with irregular time series inputs. By modeling the hidden dynamics of time series in a continuous manner, in addition, our method is able to learn robust representations. r0.45< g r a p h i c s >We highlight the encoder part from Fig. <ref>.An infinite number of autoencoders: Our approach differs from other (variational) autoencoder-based approaches for time series that encode a time series sample into a single hidden vector and decodes it (cf. Fig. <ref> where 𝐳(0) is produced by the encoder). As shown in Fig. <ref>, however, we define an autoencoder for every time t ∈ [t_0, t_N] given a time series sample. In other words, there exist infinitely many autoencoders in [t_0, t_N] since we can define [μ(t), σ(t)] for every single time point t. At the end, the continuous hidden path H(t) is produced (rather than a vector). Therefore, one can consider that ourmethod is a continuous generalization of (variational) autoencoders — our method is able to continuously generalize both variational and vanilla autoencoders. Hidden vector vs. hidden path: Compared to the single hidden vector approach, our method has much flexibility in encoding an input time series sample. Since the single hidden vector may not be able to compress all the information contained by the input, it may selectively encode some key information only and this task can be difficult sometimes. However, our method encodes the input into a continuous path that has much higher representation flexibility. Dual layer and training with missing values: Existing time series imputation methods have various architectures. However, some highly performing methods have dual layers of transformers, e.g., SAITS, and being inspired by them, we also design i) a special architecture for our method and ii) its training algorithm. We carefully connects two continuous-time autoencoders (CTAs) via a learnable weighted sum method, i.e., we learn how to combine those two CTAs. Since our CTA can be either variational or vanilla autoencoder, we test all four combinations of them, i.e., two options for each layer. 
In general, VAE-AE or AE-AE architectures show good performance, where VAE means variational autoencoder and AE means vanilla autoencoder, and the sequence separated by the hyphen represents the layered architecture. In addition, we train the proposed dual-layered architecture with our proposed special training method with missing elements. We intentionally remove some existing elements to create imputation environments for training.We conduct time series imputation experiments with 4 datasets and 19 baselines. In almost all cases, our CTA shows the best accuracy and its model size is also much lower than the state-of-the-art baseline. Our contributions can be summarized as follows: * We generalize (variational) autoencoders in a continuous manner. Therefore, the encoder in our proposed method creates a continuous path H(t) of latent representations, from which our decoder reconstructs the original time series and imputes missing elements. Our continuous hidden path H(t) is able to encode rich information. * We test with various missing rates from 30% to 70% on 4 datasets. Our method consistently outperforms baselines in most cases, owing to the continuous RNNs, i.e., NCDEs.§ PRELIMINARIES & RELATED WORK §.§ Neural Controlled Differential EquationsNCDEs solve the following initial value problem (IVP) based on the Riemann-Stieltjes integral problem <cit.> to derive the hidden state h(T) from the initial state h(0):h(T)= h(0) + ∫_0^T f(h(t), t;θ_f)dX,= h(0) + ∫_0^T f(h(t), t;θ_f)dX(t)/dt dt, where X(t) is a path representing the continuous input. In general, X(t) is estimated from its discrete time series {(𝐱_i, t_i)}_i=0^N via an interpolation method, where 𝐱_i is a multivariate observation at time t_i, e.g., given discrete sensing results of weather stations, we reconstruct their continuous path via an interpolation method — the natural cubic spline method <cit.> is frequently used for NCDEs since it is twice differentiable when calculating the gradient w.r.t. θ_f. For this reason, NCDEs are called as continuous RNNs — one can consider that the hidden state h(t) of RNNs continuously evolves from t=0 to T while reading the input dX(t)/dt in NCDEs (cf. Fig. <ref>). Since NCDEs create X(t) via the interpolation method, however, the hidden state h(t) at time t is created by considering input observations around t (cf. Fig. <ref>), which is one subtle but important difference from conventional RNNs.The above IVP of NCDEs can be solved with existing ODE solvers, sincedh(t)/dt=f(h(t),t,θ_f)dX(t)/dt. For instance the explicit Euler method, the simplest ODE solver, solves the above IVP by iterating the following step multiple times from t=0 to T:h(t + s)= h(t) + sdh(t)/dt = h(t) + sf(h(t),t,θ_f)dX(t)/dt, where s is a pre-configured step size. §.§ Time Series Imputation§.§.§ RNN-based Existing RNN-based models regard timestamps as one attributes of raw data. GRU-D <cit.> proposes the concept of time lag and imputes missing elements with the weighted combination of its last observation and the global mean with time decay. However, such assumption has limitations on general datasets. M-RNN <cit.> proposes the multi-directional RNN to impute random missing elements, which considers both intra-data relationships inside a data stream and inter-data relationships across data streams. M-RNN, however, has no consideration on the correlation among features. BRITS <cit.> imputes missing elements with bi-directional RNNs using time decay. 
It also makes use of bi-directional recurrent dynamics, i.e., they train RNNs in both forward and backward directions, introducing advanced training methods, e.g., a consistency loss function.§.§.§ VAE-basedLatent ODE <cit.> is VAE-based model that adopts ODE-RNN <cit.> as its encoder to encode a time series sample to a single hidden vector, and use it as the initial value of its ODE-based decoder. Hence, Latent ODE can handle sparse and/or irregular time series without any assumptions. Sequential VAEs are designed to extent the latent space of VAEs over time, considering the time information of sequential samples <cit.>. VRNN <cit.> combines VAE and RNN to capture the temporal information of the data. To overcome the deterministic property of RNNs, SRNN <cit.> and STORN <cit.> propose stochastic sequential VAEs by integrating RNNs and state space models. However, existing sequential VAEs struggle to handle irregular data as they heavily rely on RNNs. GP-VAE <cit.> is sequential VAE-based imputation model which has an assumption that high-dimensional time series has a lower-dimensional representation that evolves smoothly over time using a Gaussian process (GP) prior in the latent space.§.§.§ GAN-basedRecently, generative adversarial networks (GANs) have been used to impute missing values. GAIN <cit.> is the first model to apply GANs to the imputation task. The generator replaces missing values based on observed values, while the discriminator determines the correctness of the replaced values compared to the actual values. The discriminator receives partial hints on missing values during training. GRUI-GAN <cit.> is a combination of GRU-D and GAN. It uses the GRU-I structure where the input attenuation is removed. It combines the generator and classifier structures using this modified GRU-I structure to increase accuracy through adversarial learning. E2GAN <cit.> introduces the concept of an end-to-end model. It constructs an autoencoder structure based on GRU-I in the generator. Time series data is compressed into a low-dimensional vector through the autoencoder and used for generation.§.§.§ Self-attention-based Self-attention mechanisms <cit.> have been adapted for time series imputation after demonstrating an improved performance on seq-to-seq tasks in natural language processing. mTAN <cit.> proposed a model that combines VAEs and multi-time attention module that embeds time information to process irregularly sampled time series. HetVAE <cit.> can handle the uncertainties of irregularly sampled time series data by adding a module that encodes sparsity information and heterogeneous output uncertainties to the multi-time attention module. SAITS <cit.> uses a weighted combination of two self-attention blocks and a joint-optimization training approach for reconstruction and imputation. SAITS now shows the state-of-the-art imputation accuracy among those self-attention methods. § PROBLEM DEFINITIONIn many real-world time series applications, incomplete observations can occur for various reasons, e.g., malfunctioning sensors and/or communication devices during a data collection period. As a matter of fact, many benchmark datasets for time series classification and forecasting had been properly imputed before being released <cit.>. Therefore, imputation is one of the key tasks for time series.Given a time series sample {(𝐱_i, t_i)}_i=0^N, where t_i < t_i+1, and t_i ∈ [0,T], let 𝐗∈ℝ^(𝐱) × N be a matrix-based representation of {(𝐱_i, t_i)}_i=0^N. 
We consider real-world scenarios that some elements of 𝐗 can be missing. Thus, we denote the incomplete time series with missing elements as 𝐗̈ — those missing elements can be denoted as nan in 𝐗̈. Our goal is to infer 𝐗̂≈𝐗 from 𝐗̈.For ease of our discussion but without loss of generality, we assume that i) all elements of 𝐗 are known, and ii) t_0 = 0, t_N = T. For our experiments, however, some ground-truth elements of 𝐗 are missing in its original data, in which case we exclude them from testing and training (see Appendix <ref>). § PROPOSED METHODWe describe our proposed method in this section. We first outline the overall model architecture, followed by detailed designs.§.§ Encoder Given an incomplete time series sample {(𝐱̈_t_i, t_i)}_i=0^N, which basically means 𝐗̈, we first build a continuous path X(t) as in the original NCDE method. We note that after the creation of the continuous path X(t), we have an observation for every t ∈ [0,T]. After that, our NCDE-based encoder begins — for ease of discussion, we assume variational autoencoders and will shortly explain how they can be changed to vanilla autoencoders. For our continuous-time variational autoencoders, we need to define two continuous functions, μ(t): [0,T] →ℛ^(μ) and σ(t): [0,T] →ℛ^(σ), each of which denotes the mean and standard deviation of the hidden representation w.r.t. time t, respectively. We first define the following augmented state of e(t), where μ(t) and σ(t) are concatenated into a single vector form:e(t)= (μ(t), σ(t)) We then define the following NCDE-based continuous-time encoder:μ(t)= μ(0) + ∫_0^t g_μ(e(t), t; θ_μ) dX,⇒μ(0) + ∫_0^t g_μ(e(t), t; θ_μ) dX(t)/dt dt,σ(t)= σ(0) + ∫_0^t g_σ(e(t), t; θ_σ) dX,⇒σ(0) + ∫_0^t g_σ(e(t), t; θ_σ) dX(t)/dt dt, where dμ(t)/dt and dσ(t)/dt are modeled by g_μ(e(t), t; θ_μ) dX(t)/dt and g_σ(e(t), t; θ_σ) dX(t)/dt, respectively.The continuous path of the hidden representation of the input time series, which we call as continuous hidden path hereinafter, can then be written as follows, aided by the reparameterization trick: H(t)= μ(t) + ϵ_t⊙exp(σ(t)), where ϵ_t ∼𝒩(0, I), and ⊙ means the element-wise multiplication. One subtle point is that we use exp(σ(t)) instead of σ(t) in Eq. (<ref>). In other words, σ(t) is for modeling the log-variance in our case. In our preliminary study, this log-variance method brings much more stable training processes. The reason is that the exponential function amplifies the continuous log-variance path σ(t) and therefore, the continuous variance path by exp(σ(t)) can represent complicated sequences. An alternative is to model the continuous variance path directly by σ(t), which can be a burden for the encoder.Network architecture: Note that g_μ, g_σ are neural networks in our method. We basically use fully-connected layers with non-linear activations to build them. The architecture of the NCDE functions g_μ, g_σ in the encoder are as follows:g_μ(e(t), t; θ_μ)= ψ((𝐄_L)) g_σ(e(t), t; θ_σ)= ψ((𝐄_L))⋮ 𝐄_1= ω((𝐄_0)), 𝐄_0= ω((𝐞(t))), where ω is a sigmoid linear unit <cit.>, ψ is a hyperbolic tangent, and L is the number of hidden layers. We use dim(𝐡) to denote the hidden size before the final layer and dim(𝐥) to denote the size of the final layer. 
Therefore, 𝐄_i has a size of dim(𝐡) for all i and the output sizes of g_μ, g_σ are commonly dim(𝐥).§.§ DecoderOur NCDE-based decoder, which decodes H(t) into an inferred (or a reconstructed) time series sample, can be written as follows:d(t)= d(0) + ∫_0^T k(d(t); θ_k) dH,⇒d(0) + ∫_0^T k(d(t); θ_k) dH(t)/dt dt, ⇒d(0) + ∫_0^T k(d(t); θ_k) (dμ(t)/dt + ϵ_t⊙dexp(σ(t))/dt) dt, where dμ(t)/dt and dexp(σ(t))/dt are defined in Eq. (<ref>) as follows:dμ(t)/dt = g_μ(e(t), t; θ_μ) dX(t)/dt, dexp(σ(t))/dt = exp(σ(t))dσ(t)/dt= exp(σ(t))(g_σ(e(t), t; θ_σ) dX(t)/dt).Network architecture:The architecture of the NCDE function k in the decoder is as follows:k(d(t); θ_k)= ψ((𝐃_L))⋮ 𝐃_1= ω((𝐃_0)), 𝐃_0= ω((𝐝(t))), where we use dim(𝐡) to denote the hidden size before the final layer and dim(𝐥) to denote the size of the final layer. Therefore, 𝐃_i has a size of dim(𝐡) for all i and the output size of k is dim(𝐥). §.§ Output LayerIn order to infer an observation 𝐱̂_i at time t_i, we use the following output layer:𝐱̂_i = _2((_1(d(t_i)))), wheremeans an fully-connected layer, andmeans an exponential linear unit. Taking the elements of 𝐱̂_i whose original values are nan in 𝐱_i, we can accomplish the time series imputation task. §.§ Augmented ODE for Encoder and DecoderIn order to implement our model, we use the following augmented ordinary differential equation (ODE): d/dt[ μ(t); σ(t); d(t);] = [g_μ(e(t), t; θ_μ) dX(t)/dt;g_σ(e(t), t; θ_σ) dX(t)/dt; k(d(t); θ_k) (dμ(t)/dt + ϵ_t⊙dexp(σ(t))/dt); ].and [ μ(0); σ(0); d(0);] = [ _μ(X(0)); _σ(X(0)); _d(X(0));]. Throughout Eq. (<ref>), we can integrate our proposed continuous-time encoder and decode into a single ODE state, which means that by solving the ODE, the entire forward pass of our continuous-time variational autoencoder can be calculated simultaneously. Why continuous hidden path?: We note that the hidden representation H(t) in Eq. (<ref>) is continuously defined over time, which is different from existing methods where only a single hidden representation is created after reading the entire time series (cf. Fig. <ref> vs. <ref>). The benefits of our proposed continuous hidden path are two folds.Firstly, our proposed method is suitable for time series imputation. For instance, suppose that we want to infer (𝐱̂_j, t_j) for time series imputation. H(t_j) contains the information of the input time series up to time t and its near future — note that additional information around time point t_j is used when creating X(t_j) with an interpolation method (cf. Fig <ref>). Thus, H(t_j) contains enough information to infer 𝐱̂_j via the decoder and the output layer.Secondly, our proposed method provides one-way lightweight processing. Only by solving the augmented ODE in Eq. (<ref>) from an initial time 0 to a terminal time T sequentially and incrementally, we can impute all missing elements with the output layer. Vanilla autoencoder: Our framework can be converted to the vanilla autoencoder in a naïve way only by i) setting H(t) = μ(t) after removing σ(t) and ii) using the usual reconstruction loss (without the ELBO <cit.> loss). Since we discard σ(t) in this vanilla setting, its inference time and space complexities are reduced in comparison with those of the variational autoencoder setting.How to infer: For inference, we use only μ(t), i.e., H(t) = μ(t) is used for the variational autoencoder setting. In other words, we use the mean hidden representation only. 
By considering σ(t), we can further extract the confidence interval, but our main interest is how to impute incomplete time series. For the vanilla setting, we remove σ(t) so it clear that H(t) = μ(t) for inference.§.§ Dual Autoencoder ArchitectureWe have described how a single (variational) autoencoder can be defined so far. For time series imputation, however, two-layer architectures are popular <cit.>. We also propose the following dual-autoencoder approach and its training method (cf. Fig. <ref>): * (Blue Path of Fig. <ref>) Given a training time series sample 𝐗, i.e., a matrix representation of {(𝐱_i, t_i)}_i=0^N, we intentionally remove some more elements from 𝐗 in order to create more challenging training environments. The intentionally removed elements are marked as `M' in Fig. <ref>, and we use 𝐌 to denote the masking matrix, e.g., 1 in 𝐌 means `intentionally removed by us.' We use N_M to denote the number of these intentionally removed elements, which is a hyperparameter for our training method.* (Red Path of Fig. <ref>) We first take 𝐗̂, i.e., {(𝐱̂_t_i, t_i)}_i=0^N, from the initial proposed (variational) autoencoder marked as `(V)AE_1' in Fig. <ref>. We then create the initial imputation outcome 𝐗̌ by replacing the missing nan elements of 𝐱̈_t_i with the inferred elements of 𝐱̂_t_i for all i.* (Green Path of Fig. <ref>) We then feed 𝐗̌ to the next proposed (variational) autoencoder. Let 𝐗̂', i.e., {(𝐱̂'_i, t_i)}_i=0^N, be the output from the second autoendoer via the residual connection with 𝐗̌.* (Purple Path of Fig. <ref>) We then let 𝐗̃, i.e., {(𝐱̃_j,t_j)}_i=0^N, be our final imputation outcome, which is calculated as follows — in other words, the first and second imputation outcomes are connected through the learnable weighted sum:𝐱̃_j = α⊙𝐱̂_j + (1-α)⊙𝐱̂'_j, where α = ϕ(_3(d'(t_j))), ϕ is Sigmoid, and d'(t_j) means the hidden representation of the decoder of the 2nd autoencoder at time t_j. ⊙ means the element-wise multiplication. Training method: In order to train the dual autoencoders, we use the following loss function and the method in Alg. <ref>:L :=Reconstruction||(𝐗 - 𝐗̃)||_F + ||(𝐗 - 𝐗̂)||_F + ||(𝐗 - 𝐗̂')||_F + ||𝐌⊙ (𝐗 - 𝐗̃)||_F+ KL Divergence∫_0^T KLD_1(t) + KLD_2(t) dt, where we use KLD_1(t) and KLD_2(t) for brevity to denote the usual KL divergence <cit.> terms of the first and the second variational autoencoders at time t, respectively — note that those KL Divergence terms can be omitted for the vanilla setting of our proposed method. In particular, those KLD terms can be defined for every time point t since our CTA is a method to create an infinite number of variational autoencoders in [0,T] (cf. Fig. <ref>) and therefore, we need to integrate them over time. We use existing ODE solvers for this purpose as we do it for NCDEs (see Appendix <ref> for details). We also note that our loss is a continuous generalization of the ELBO loss since we have an infinite number of variational autoencoders in the time domain [0,T] and therefore, the KLD loss term is defined for every time point t ∈ [0,T]. In Alg. <ref>, we first initialize all the model parameters, denoted Θ. In Line <ref>, we create a mini-batch of size N_B. Each pair of (𝐗_b, 𝐗̈_b) means an incomplete time series sample 𝐗̈_b and its ground-truth sample 𝐗_b. In Line <ref>, we train Θ following the method described for Fig. <ref>. As described, our training process intentionally removes N_M more elements from each 𝐗̈_b to increase the effect of the supervised training. 
Our CTA produces two intermediate and one final inference outcomes, i.e., 𝐗̃_b, 𝐗̂_b, and 𝐗̂'_b, for each training sample 𝐗̈_b, with which the training with the loss L is conducted. Role of each layer: In the proposed dual autoencoder architecture, the first (variational) autoencoder infers the initial imputed time series where for some challenging imputation points, its quality may not be satisfactory. The second (variational) autoencoder then tries to complement for the challenging cases via the learnable weighted sum, i.e., the learnable residual connection. In ablation study, we analyze the benefits of the dual-layer architecture. How to Solve ∫_0^T KLD(t) dt?: Our loss function in Eq. (<ref>) requires an integral problem to calculate the KLD terms along the time in [0,T]. For simplicity but without loss of generality, we assume only one variational autoencoder so we need to solve ∫_0^T KLD(t) dt. For this, we define and solve the following augmented ODE to calculate all the hidden states and the KLD loss at the same time. Therefore, ξ(T) corresponds to the Riemann integral of ∫_0^T KLD(t) dt and contains the final KLD loss value.d/dt[ μ(t); σ(t); d(t); ξ(t) ] = [g_μ(e(t), t; θ_μ) dX(t)/dt;g_σ(e(t), t; θ_σ) dX(t)/dt; k(d(t); θ_k) (dμ(t)/dt + ϵ_t⊙dexp(σ(t))/dt);KLD(t) ]. Original Missing Elements of𝐗: For ease of our discussion, we assumed that for 𝐗, all ground-truth elements are known in the main body of this paper. In our experiments, however, some ground-truth elements are unknown in their originally released dataset. In this situation, we cannot use those elements for training and testing. Modifying our descriptions in the main paper to consider those missing ground-truth elements is straightforward. For instance, α is redefined to α = ϕ(_3(d'(t_j), 𝐎)) and the loss function can be rewritten as follows:L :=Reconstruction||𝐎⊙ (𝐗 - 𝐗̃)||_F + ||𝐎⊙ (𝐗 - 𝐗̂)||_F + ||𝐎⊙ (𝐗 - 𝐗̂')||_F+ Reconstruction||𝐌⊙ (𝐗 - 𝐗̃)||_F + KL Divergence∫_0^T KLD_1(t) + KLD_2(t) dt, where 𝐎 means a masking matrix to denote those elements whose ground-truth values are known in its original dataset. § EXPERIMENTSIn this section, we describe our experimental environments followed by experimental results and analyses. §.§ Experimental Environments §.§.§ Datasets To evaluate the performance of various methods, we use four real-world datasets from different domains as follows: , ,and(See supplementray material for their details). §.§.§ Baselines We compare our method with 19 baselines, which include statistical methods, VAE-based, RNN-based, GAN-based and self-attention-based methods (see supplementray material for details). §.§.§ Evaluation MethodsTo evaluate our method and baselines, we utilize two metrics: MAE (Mean Absolute Error) and RMSE (Root Mean Square Error). These are commonly used in the time series imputation literature <cit.>. We report the mean and standard deviation of the error for five trials.In order to create more challenging evaluation environments, we increase the percentage of the missing elements, denoted r_missing. We remove the element by the ratio of r_missing from the training, i.e., the model does not learn about this missing elements, and test datasets, i.e., the imputation task's targets are those missing elements. In total, we test in three different settings, i.e., r_missing∈{ 30%, 50%, 70%}. The above evaluation metrics are measured only for those missing elements since our task is imputation. 
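As a small illustration of this evaluation protocol (function and variable names are ours), the two metrics are computed only over the entries removed for evaluation, never over entries that were already missing in the raw data.

import numpy as np

def masked_errors(x_true, x_pred, eval_mask):
    """MAE and RMSE restricted to the entries hidden by the r_missing protocol."""
    diff = (x_true - x_pred)[eval_mask]
    mae = float(np.abs(diff).mean())
    rmse = float(np.sqrt((diff ** 2).mean()))
    return mae, rmse

# eval_mask is True exactly where a ground-truth value exists but was hidden from
# the model, so originally unknown entries never enter either metric.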
§.§.§ HyperparametersWe report the search range of each hyperparameter in our method and all the baselines in our supplementary material. In addition, we summarize the best hyperparameter of our method for reproducibility in the supplementray material.§.§ Experimental ResultsTable <ref> summarizes the results onand . For , the performances of SAITS and BRITS are the best among the baselines for all missing rates, but CTA shows the lowest errors in all cases. In the case of , our method, the self-attention-based methods, and some of the VAE-based methods work reasonably. When r_missing is 30%, mTAN performs slightly better than our model in RMSE. However, in all other cases, the performance of CTA (VAE-AE) outperforms other baselines by large margins.The results onandare shown in Table <ref>. For , MICE, which is a statistical method, shows the best result among all the baselines. However, Our CTA marks the best accuracy in general. In particular, CTA significantly outperforms others baselines when r_missing is high. In the case of , BRITS shows the best performance among the baselines. However, it is shown that the error increases rapidly as r_missing increases. When r_missing is high, the performance of HetVAE is the best among the baselines, but our CTA (AE-AE) outperforms other baselines at all missing rates. §.§ Empirical ComplexityWe compare the model sizes and the inference GPU memory usage of our method and SAITS, the best-performing baseline, in Table <ref>. Except for the number of parameters for , our model has a smaller size and consumes less GPU memory than SAITS. Especially forand , our model's size is 2 to 3 orders of magnitude smaller than that of SAITS, which is an outstanding result. One more interesting point is that for , CTA marks comparable GPU memory footprint to SAITS with more parameters, which shows the efficiency of our computation.§.§ Ablation Study on Dual-Layer ArchitectureSince CTA uses a dual-layer autoencoder architecture, we conduct an ablation study by varying the number of layers. We test all the combinations from single to triple layers, and their results are shown in Table <ref> forandwith the 70% missing rate. In general, VAE-AE and AE-AE show good results for our CTA. In both datasets, the single-layer ablation models, i.e., AE and VAE, produce worse outcomes than those of the dual-layer models. Among the dual-layer models, it shows better outcomes when the second layer is AE instead of VAE. For , The performances of dual and triple-layer are not significantly different.§.§ Comparison to Interpolation methodWe use the natural cubic spline method to build X(t). We report the performance of the interpolation itself in the extreme case of the 70% missing rate inand . As shown in Table <ref>, it can be observed that the performance is improved compared to the interpolation alone. § CONCLUSION In this paper, we tackled how to impute regular and irregular time series. We presented a novel method based on NCDEs, which generalizes (variational) autoencoders in a continuous manner. Our method creates one (variational) autoencoder every time point and therefore, there are an infinite number of (variational) autoencoders along the time domain [0,T]. For this, the ELBO loss is calculated after solving an integral problem. Therefore, training occurs for every time point in the time domain, which drastically increases the training efficacy. 
We also presented a dual-layered architecture. In our experiments with 4 datasets and 19 baselines, our presented method clearly marks the best accuracy in all cases. Moreover, our models have much smaller numbers of parameters than those of the state-of-the-art method. Our ablation and sensitivity studies also justify our method design. SAITS also has a dual-transformer architecture; the main difference between our method and SAITS is that our method continuously generalizes (variational) autoencoders. In the future, our method can be adapted to other time series tasks, such as classification and forecasting. Time series synthesis <cit.> is one more application to which our method can be applied. § ETHICAL CONSIDERATION Our model focuses on advancing time series imputation. While our model itself does not introduce any new ethical issues, it brings to light potential concerns regarding privacy and anonymity. As data is harnessed for imputation, it is crucial to thoughtfully address the ethical considerations surrounding the confidentiality of sensitive information and the preservation of individual anonymity. Balancing data utility and the safeguarding of personal privacy will be crucial in ensuring the responsible and trustworthy deployment of our model. As we continue to improve and implement our model, we are committed to maintaining the highest standards of ethics and privacy, and to promoting discussions on integrating solutions that acknowledge these concerns and prioritize the well-being of all stakeholders involved. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program at Yonsei University, 1%; No. 2022-0-01032, Development of Collective Collaboration Intelligence Framework for Internet of Autonomous Things, 99%).
http://arxiv.org/abs/2312.16581v2
{ "authors": [ "Hyowon Wi", "Yehjin Shin", "Noseong Park" ], "categories": [ "cs.LG", "cs.IR" ], "primary_category": "cs.LG", "published": "20231227141342", "title": "Continuous-time Autoencoders for Regular and Irregular Time Series Imputation" }
Using Enriched Category Theory to Construct the Nearest Neighbour Classification Algorithm Matthew Pugh [email protected] of Electronics and Computer Science University of Southampton University Road, SO17 1BJ Jo Grundy [email protected] of Electronics and Computer Science University of Southampton University Road, SO17 1BJ Corina Cirstea [email protected] of Electronics and Computer Science University of Southampton University Road, SO17 1BJNick Harris [email protected] of Electronics and Computer Science University of Southampton University Road, SO17 1BJ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Exploring whether Enriched Category Theory could provide the foundation of an alternative approach to Machine Learning. This paper is the first to construct and motivate a Machine Learning algorithm solely with Enriched Category Theory. In order to supplement evidence that Category Theory can be used to motivate robust and explainable algorithms, it is shown that a series of reasonable assumptions about a dataset lead to the construction of the Nearest Neighbours Algorithm. In particular, as an extension of the original dataset using profunctors in the category of Lawvere metric spaces. This leads to a definition of an Enriched Nearest Neighbours Algorithm, which consequently also produces an enriched form of the Voronoi diagram. This paper is intended to be accessible without any knowledge of Category Theory.Category Theory, Enriched Category Theory, Lawvere Metric Space, Classification, Nearest Neighbours, Foundations § INTRODUCTIONAs Machine Learning (ML) becomes more popular, the use of black box approaches is beginning to hinder the progression of the field. During engineering and development, the better ones understanding of a model the easier it is to improve its performance, diagnose faults, and provide guarantees for its behaviour. Unfortunately, necessary to the development of many algorithms, there are design decisions which are motivated by intuition or trial and error. Potentially, part of the difficulty in understanding these algorithms comes from a lack of clarity in how they are interacting with the data they are provided. How does the encoding of input data effect the information that an algorithm actually understands. To approach this question, this paper seeks to investigate the development of a first principles approach to the design of ML algorithms using Enriched Category Theory. To provide evidence that this approach has potential, it is demonstrated that basic assumptions about a dataset can lead to the natural construction of a pre-existing algorithm which is popular for its predictable and robust behaviour: the Nearest Neighbours Algorithm (NNA).The argument for the use of Enriched Category Theory in such a theory proceeds as follows. The process of learning requires the ability to make comparisons. 
This may be comparisons between: entries of a training dataset in order to identify patterns; training examples and new cases for the sake of inference; between different models of the same dataset, for selection of the best one. Enriched Category Theory provides a very general framework for defining and studying comparisons between objects. It demonstrates that the entirety of the information associated with an object can be encoded in its comparisons to other objects. Using Enriched Category Theory, the structure of data can be encoded explicitly in their mutual comparisons, rather than implicitly, as is common with many ML algorithms. The benefit of this approach would be that the design and mechanism of ML algorithms would becomes more transparent. The assumptions about datasets can be made more explicit. And the process of learning can be interpretted in its natural form as reasoning about the comparison of observations. § BACKGROUND To the knowledge of the authors, the construction of the Nearest Nearest Neighbours Algorithm demonstrated in this paper is one of the first examples of a machine learning algorithm motivated and constructed solely with Enriched Category Theory. There is one other example of an entirely categorical construction of an ML algorithm, where <cit.> shows that the single linkage clustering algorithm can be found as a Kan-extension of a dataset of points. However, it is suggested that the steps shown for the derivation of the NNA draw a tighter parallel between the intuition of how the dataset is represented, and the derived algorithm.There are also examples of algorithms whose structures have been encoded in the language of category theory, such as Graph Neural Networks <cit.>. But they represent the structure of how the algorithm computes information, and not necessarily the selection of the optimal model or representation of the input dataset. In contrast, the NNA construction draws a direct line from the representation of the data to the selection of the optimal classification.Understanding the Enriched Category Theory construction of the Nearest Neighbours algorithm requires an understanding of Lawvere metric space as Cost Enriched Categories, as well as a working knowledge of the Nearest Neighbours Algorithm. It is beyond the scope of this paper to provide a complete introduction to Enriched Category Theory [A basic introduction can be found in <cit.> while a more technical overview occurs in <cit.>.], but thankfully many of its complexities can be avoided by focusing on the specific case of Lawvere metric spaces. The following section provides the necessary components, as well as a brief overview of the Nearest Neighbours Algorithm.§.§ Nearest Neighbours Algorithm The Nearest Neighbours Algorithm <cit.> extends the classification of a dataset of points in a metric space to the entire metric space. Consider a dataset of n pairs (x_0, y_0), ... , (x_n, y_n). The targets of the dataset, y_i, are elements of a set of class labels Y. The features of the dataset, x_i, represent points in a metric space X. This allows the distance between any two points to be measured, following the traditional metric space axioms.∙d(a,a) = 0∙a ≠ b ⇔ d(a, b) > 0Positivity ∙d(a, b) = d(b, a) Symmetry ∙d(a, b) + d(b, c) ≥ d(a, c)Triangle InequalityTo a point of the metric space not in the dataset, the Nearest Neighbors Algorithm assigns a class if the closest point in the dataset has that class. 
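Before moving to the categorical construction, the ordinary algorithm can be kept in mind through the following short sketch (an illustration of the rule just stated, not code from the paper): a query point receives the label of its nearest data point under a supplied metric d.

import numpy as np

def nearest_neighbour_classify(x, F, T, d):
    """F: sequence of feature points, T: matching class labels, d: metric on X."""
    distances = np.array([d(Fi, x) for Fi in F])
    return T[int(np.argmin(distances))]

# Example with the Euclidean metric on the plane (F and T are assumed data):
# d = lambda a, b: float(np.linalg.norm(a - b))
# nearest_neighbour_classify(np.array([0.2, 0.7]), F, T, d)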
An example of the classification regions produced can be seen in Fig <ref> which shows the NNA classification of a two class dataset of points sampled from two Gaussian distributions.To express this as a relation we can represent the dataset with two functions. The indexes of the dataset can be expressed as the set of integers from 1 to n, N = {a ∈𝐙 | 1 ≤ a ≤ n}. The features of the dataset can be encoded with the function F : N → X such that Fi = x_i. The targets of the dataset can be expressed similarly with a function T : N → Y, such that Ti= y_i. Given a point x∈ X and a class y ∈ Y, the relation should return true if the closest data-point to x has the class y. [inf in the following expression represents the infimum or least upper bound of a set of values. For finite cases it can be replaced with minimum.]NNA(y, x) = ∃ i ∈ N[Ti = y and d(Fi, x) = inf_i' ∈ Nd(Fi', x) ] This relation can be presented in an alternate form that will be useful later, but it requires that the indexes are partitioned based on their classes. We define the partition as follows. NT(y) = { i ∈ N | Ti=y }. This allows the relation to be presented as:NNA(y, x)⇔inf_i ∈ Nd(Fi, x) = inf_i ∈ NT(y)d(Fi, x)§.§ Lawvere Metric Spaces As mentioned in the introduction, Enriched Category Theory provides a method of encoding structure through a rigorous language for talking about comparisons. In some sense, an Enriched Category is a collection of objects which can be compared. Given a category C, two objects x ∈ C and y ∈ C can be compared with the notation C(x,y). This is referred to as the hom-object of x and y. This hom-object exists in its own category called the base of enrichment. To make the comparisons meaningful, ECT requires that the base of enrichment have some way of combining hom-objects, called a monoidal product, and some juxtaposition of these two hom-objects to a third. An example of how this structure works can be seen in order relations. Consider a category called Fruits, which is a collection of fruits ordered by price. The hom-object Fruits(Apple, Orange) would test to see if Apples were cheaper than Oranges. In this instance this comparison could also be written as Apples ≤ Oranges. The outcome of this comparison is either true or false so the base of enrichment would be a category containing an object representing true and an object representing false. This base of enrichment can be called Bool for Boolean.A sensible logical deduction to make with such a category would be to say that if I know fruit A is cheaper than fruit B, and fruit B is cheaper than fruit C, then A must be cheaper than C. Notionally, this can be written as:(A≤ B) and (B≤ C)(A ≤ C)This process of logical inference gives the general motivating structure of an enriched category. In this instance, each comparison of the ordered set returns a value in Bool. The monoidal product of Bool is the logical "and", allowing its objects to be combined. Bool also has arrows of implication from False to False, False to True, and True to True. But not from True to False, as True cannot logically imply False. By using Bool as the base of enrichment, the general structure of the enriched category becomes the structure of a pre-order relation. A Lawvere metric space is an enriched category whose base of enrichment is chosen so that the categories operate like metric spaces, allowing the enriched category to measure the distances between its objects. The base of enrichment for a Lawvere metric spaces is called the Cost category. 
Because it represents measurements of distance, its objects are the non-negative real numbers extended with infinity[The objects of Cost being {x ∈ℝ | x ≥ 0 }∪{∞}. The monoidal product is addition, with addition by infinity defined as x + ∞ = ∞].Given a Cost enriched category X, and two objects x and y of X, the hom object X(x,y) can be interpretted as the distance between x and y. The monoidal product of Cost is addition and the arrows of the Cost category point from large numbers to smaller numbers. As in, there is an arrow from a ∈ Cost to b ∈ Cost if and only if a≥ b. This can also be interpretted as Cost(a, b) = a ≥ b. Looking at the previous example, we can replace the and operation of Bool with addition, and the implication with ≥ to recover the following expression for Cost categories.X(x,y) + X(y, z) ≥ X(x, z)This requirement of Cost categories is the triangle inequality, stating that taking a detour to a third object cannot be quicker than travelling directly between two objects. By choosing the Cost category as the base of enrichment, ECT naturally recovers some, but not all of the the metric space axioms (As detailed in section <ref>). This makes Lawvere metric spaces pseudo-metric spaces. In Lawvere metric spaces, one retains the triangle inequality, and the requirement that the distance from an object to itself is zero (d(a, a) = 0), but the metric spaces are not required to be symmetric (d(a,b) = d(b, a)) and two different objects can be zero distance apart. This can be a controversial choice, but there are several arguments for this being a desirable outcome. For example, in many cases an intuitive notion of distance is not symmetric, e.g. its easier to go down stairs than up them. One might also say that distance is a measure of similarity not identity, and the idea of two different objects being zero distance apart is sensible when considering systems at a certain level of coarseness. In either case, if one wishes to operate with traditional metric spaces, they are all also Lawvere metric spaces, and the necessary axioms can be asserted as convenient.By sensibly considering how we wish to compare objects in our enriched categories, choosing objects, arrows, and a monoidal product in the base of enrichment, we have recovered the structure of a metric space. Though the Lawvere metric space is one of the simpler examples of an enriched category, it starts to reveal the power of such a theory to construct complex structures for the representation of data.§.§ Functors and Profunctors An Enriched Category may be thought of as representing a particular datatype, with the structure of that datatype being represented by the hom-objects of the category. In order to interact with this information, there are many ways of comparing categories to each other. Between categories with the same base of enrichment, there are two constructions which are relevant for this work: Functors and Profunctors.In set theory, a mapping from one set to another is called a function. In ECT, there is a similar concept called a functor. Functors between enriched categories are structure preserving maps. In the case of Cost-enriched categories (Lawvere metric spaces), this reduces to the statement that functors are distance non-increasing functions. 
Given a functor F:X→ Y, from X to Y, this can be expressed as the statement that for any two objects a,b ∈ X.X(a, b) ≥ Y(Fa, Fb) As well as Functors between categories being the ECT version of functions between sets, there is also an ECT version of relations between categories. A set relation R between two sets X and Y is often described as a subset of the Cartesian products of X and Y, i.e. R⊆ X × Y. However, this relation can also be thought of as a function which returns true if the relation is true, and false if the relation is false: R : Y × X →{False, True}. In ECT, this notion is extended to a functor from the product of two categories to the base of enrichment. Where the product of two categories Y^op⊗ X contains objects which are pairs of objects in X and Y similar to how the Cartesian product of sets contains pairs of elements of sets. R : Y^op⊗ X → Cost. Such a construction is called a profunctor. For notation, a profunctor R : Y^op⊗ X → Cost, can be written as R : X ↛ Y.With two set relations R : X ↛ Y and S : Y ↛ Z, a composite relation can be produced of the form S ∘ R : A ↛ C. The composition of two relations R and S is true for two inputs x and z, if there exists an element y in Y such that R(y, x) is true, and S(z, y) is true. The logic of relation composition is described by the following equation. (S ∘ R) (z, x) := ∃ y∈ Y[ R(y, x) and S(z, y) ] Similar to relations, profunctors can also be composed. Given Cost enriched profunctors R : X ↛ Y and S : Y ↛ Z, the output of their composition bares a striking resemblance to the formula for relation composition. (S ∘ R)(z, x) := inf _y∈ Y ( R(y, x) + S(z, y)) The similarity between relation composition and profunctor composition is more than just cosmetic. It also emulates how Cost enriched categories treat logical propositions. In the Boolean logic setting, the "and" operation outputs true only when both of its inputs are true, and false otherwise. In Cost enriched categories, a distance of zero can be interpreted as true, and a distance greater than zero is false. With this interpretation, the sum of two values a and b, where both are non-negative, can only be zero if both a and b are zero. From the perspective of Cost category logic, a+b is the logical "and" operation. Furthermore, within this version of logic the infimum operation is the Cost version of the existential quantifier. When X is finite, The statement inf_x ∈ X Fx = 0 means there exists a value x such that Fx is zero. In the infinite case, it suggests that there exists a value Fx which is arbitrarily close to zero. Applying this logic to the definition of profunctors, it can be seen that profunctors produce truth values from pairs of objects, if the output of zero is interpreted as true, and the output of non-zero is interpretted as false. Such an interpretation can be represented by the functor (0=x) : Cost → Bool.With knowledge of Functors, Profunctors and their composition there is a final piece of information necessary for the construction of the Nearest Neighbours Algorithm. Continuing with the intuition from functions and relations of sets, it can be observed that functions are a special kind of relation, known as a functional relation. A function F:N → X is said to produce an element Fi when given an element i∈ N, but this behaviour can be represented directly as a relation F_* : N ↛ X which evaluates to a truth value under the condition F_*(x,i) ⇔ (x = Fi). 
In fact, there is also a second relation of the opposite direction F^*:X ↛ N which represents the logical evaluation of the function F^*(i, x) ⇔ (Fi = x). The interaction between functions and relations has a mirror in the interaction between functors and profunctors. A functor F : N → X canonically generates two profunctors. One of the same direction F_* : N ↛ X and one of the opposite direction F^* : X ↛ N. They are defined with the aid of hom-objects, where F_*(x, i) = X(x, Fi) and F^*(i, x) = X(Fi, x). In the case of Lawvere metric spaces, the profunctors of F evaluated on objects x and i can be read as: "The distance between x and the image of i under F". With this final component, it is now possible to construct the Nearest Neighbours Algorithm. § CONSTRUCTING THE NEAREST NEIGHBOURS ALGORITHMThis section explores the construction of the Nearest Neighbours Algorithm, given a dataset of points in a metric space, and classification labels, using Enriched Category Theory. Starting with a dataset of n pairs (x_0, y_0), ... , (x_n, y_n), the x_i values are elements of a metric space X, and the y_i values are class labels. Given a new point x ∈ X, what is the correct class label to associate with it?From the format of the dataset, the primary characteristic of the data points are the distances between them. This would suggest that the natural choice for the enriched categories are Lawvere metric spaces, i.e. Cost enriched categories. The first step is to find an appropriate representation of the data. An individual data point, (x_i, y_i), has three components. An index value i, an associated point in the metric space x_i, and the classification label y_i. The n index values can be stored in a Cost-enriched category N. The metric space X can clearly also be represented as a Cost-enriched category X, but the class labels can also be represented in a similar way, as the contents of the Cost-enriched category Y, which contains all of the possible class labels. With these categories, the information of the dataset can be represented by two functors. F : N → X maps the index values to their associated position in the metric space x_i. The functor T : N → Y, similarly, maps data indexes to class labels.Though it is now clear what objects the various enriched categories contain, it remains to determine what the hom-objects of each category should be. In the case of the metric space X, it is clear that between any points a, b ∈ X, the hom object X(a, b) should correspond directly with the distance metric on X. It is less clear what the choice should be for the categories N and Y.Proceeding with the intuition that the hom-objects, or in this case the distances, between objects should encode meaningful information about the data, the objects of N, the indexes, possess no explicit relation to each other. This would suggest that the distances between indexes should be as "un-constraining as possible". In the context of enriched categories, the lack of constraint would suggest that the Functors from N to any other Cost category, should correspond directly with maps from the objects of N to the other category. To achieve this, the category N can be given the discrete metric, shown in the following equation. N(i, j) = 0 i = j∞i ≠ j Recalling that functors between Cost-Categories are distance non-increasing functions, the discrete metric means that this condition is trivially satisfied, as the objects of N are as distant from each other as possible. 
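For finite (or finitely sampled) Lawvere metric spaces, these ingredients have a direct computational reading: a profunctor can be stored as a matrix of distances, and the composition formula becomes a min-plus matrix product. The sketch below is our own illustration of this reading and is not part of the paper.

import numpy as np

def compose(S, R):
    """Cost-profunctor composition: (S o R)[z, x] = min over y of (R[y, x] + S[z, y]).

    R has shape (|Y|, |X|) and S has shape (|Z|, |Y|); the result has shape (|Z|, |X|).
    """
    return np.min(S[:, :, None] + R[None, :, :], axis=1)

def f_upper_star(d, F, X_points):
    """F^*[i, x] = X(F i, x), for a finite sample X_points of the space X."""
    return np.array([[d(Fi, x) for x in X_points] for Fi in F])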
This models the lack of a relationship between the data indexes. The same logic can be applied to the objects of Y. Class labels should also have no meaningful relation to each other, so the discrete metric can be applied to Y as well. With the categories N, X, Y and the functors F, and T, the dataset can be represented by the following diagram.X NY ["T"', from=2-1, to=2-3] ["F", from=2-1, to=1-2] [dotted, from=1-2, to=2-3] To find the classes of all the points in X would optimistically be to find a suitable candidate for the dotted arrow from X to Y. However, there is an issue. It is expected that two classification regions in X may be touching, producing a boundary between classification regions which can have a trivially small distance. If we insist that classes are assigned by functors, then the functors must be distance non increasing. This would require that the classes in Y have a distance of zero from each other. It is tempting to think that one should not assign Y the discrete metric, but this has an unfortunate consequence. Within the language of Enriched Category Theory, the hom-objects are the only way to distinguish between objects of a category. Setting all of the distances between objects in Y to zero would make all of the classes indistinguishable from each other in any categorical construction. It was correct to assign Y the discrete metric, but not to expect the classifications to be represented by a functor. The classifications can in fact be represented by a profunctor NNA : X ↛ Y.With the expectation that the correct classification is represented by a profunctor, we can attempt to produce this profunctor directly by composition. The functors F and T both have two canonical profunctors associated with them. By selecting these profunctors appropriately, we can compose them to produce a profunctor from X to Y. This can be done with the profunctors F^* : X ↛ N and T_* : N ↛ Y.X NY ["F^*"', ""marking, from=1-2, to=2-1] ["T_*", ""marking, from=2-1, to=2-3] ["T_*∘ F^*", dashed, from=1-2, to=2-3] As previously discussed, the profunctor F^* : X ↛ N measures the distance between a point in X and the image of a data point in N. The profunctor T_* : N ↛ Y does something similar, but because it is produced by a functor between discrete categories, its outputs are even easier to interpret. If a data index i has a class y, i.e. Ti = y, then T_*(y, i) will be 0. However, if i does not have class y then T_*(y, i) is infinity. Substituting these profunctors into the profunctor composition formula produces the following equation. (T_*∘ F^*) (y, x) = inf_i ∈ N (F^*(i, x) + T_*(y, i)) The interpretation of this composition is relatively straight forward. If the class of i selected by the infimum is not y, then T(y, i) will be infinity, making the entire sum as large or larger than any other possible value. However, if the i selected was of class y, then the formula returns inf_i ∈ N F^*(i, x). In other words the composition (T_*∘ F^*)(y, x) returns the distance from x to the closest data point which is of class y. This could also be interpretted as evaluating the infimum of a partition of the indexes which have the class y [Note that the following expression re-uses the notation NT(y) introduced in section <ref> to represent the partition subset of N with classes y, NT(y) = {i ∈ N | Ti = y}]. (T_*∘ F^*)(y, x) = inf_i ∈ NT(y)d(Fi, x) A useful outcome, but not quite the NNA. There is one additional step. 
In order to reproduce the NNA we need to compare the output of the profunctor T_*∘ F^*, to a similar composition with a profunctor that has no knowledge of the classes, 1_NY : N ↛ Y.To model the notion that 1_NY has no knowledge of the classes, it must respond true to any i∈ N and y∈ Y, i.e. 1_NY(y, i) = 0 [This also makes 1_NY the terminal profunctor of the category of profunctors between N and Y, Prof(N, Y)]. Composing this profunctor with F^* produces a composition with no knowledge of the classes. (1_NY∘ F^*)(y, x) = inf_i ∈ N (F^*(i, x) + 1_NY(y, i)) = inf_i ∈ N F^*(i, x) Given a point x ∈ X and class y ∈ Y, the profunctor (1_NY∘ F^*)(y, x) gives the distance to the closest point in the dataset (i.e. in the image of F). This composition has forgotten all class information. Finally, to reconstruct the NNA classification it only remains to compare the outputs of both profunctors. As their outputs are objects of the Cost category, the natural comparison is their hom-object in Cost. NNA(y, x) := Cost((1_NY∘ F^*)(y, x), (T_*∘ F^*)(y, x)) : X ↛ Y Because the arrows in Cost encode the ordering information, this leads to the expression: NNA(y, x) = (1_NY∘ F^*)(y, x) ≥ (T_*∘ F^*)(y, x) A point x is taken to have class y when NNA(y, x) is true. Consider the situation that the closest data point Fj to x has class y, then T(y, j) = 0. The left hand side of the inequality finds the smallest distance from x to a data point with any class and the right hand side finds the smallest distance to a data point with class y. When the closest data point to x has class y, the left hand side returns the same value as the right hand side and the inequality is true. NNA(y, x)⇔ Cost( (1_NY∘ F^*)(y, x) , (T_*∘ F^*)(y, x) )⇔ (1_NY∘ F^*)(y, x) ≥ (T_*∘ F^*)(y, x)⇔inf_i ∈ N F^*(i, x)≥inf_i ∈ N (F^*(i, x) + T(y, i))⇔ F^*(j, x)≥ F^*(j, x) + T(y, j)⇔ F^*(j, x) ≥ F^*(j, x)⇔ True Alternatively, in a situation where the nearest data point does not have class y, then (T_*∘ F^*)(y, x) > (1_NY∘ F^*)(y, x) and the output will be false. From this interpretation, it is clear that the NNA profunctor produces the same classification as the Nearest Neighbours Algorithm. In its purely categorical form, the similarity between the profunctor construction and the relation introduced in Section <ref> is obscured, but it can be made clear through substitution. NNA(y, x)⇔ Cost( (1_NY∘ F^*)(y, x) , (T_*∘ F^*)(y, x) )⇔(1_NY∘ F^*)(y, x)≥ (T_*∘ F^*)(y, x)⇔inf_i ∈ N F^*(i, x)≥inf_i∈ N (F^*(i, x) + T(y, i))⇔inf_i ∈ N F^*(i, x)≥inf_i∈ NT(y) F^*(i, x)⇔inf_i ∈ N F^*(i, x) = inf_i∈ NT(y) F^*(i, x)⇔inf_i ∈ N X(Fi, x) = inf_i∈ NT(y) X(Fi, x)⇔inf_i ∈ N d(Fi, x) = inf_i∈ NT(y) d(Fi, x)The last line is the same as the NNA relation shown in Section <ref>, demonstrating that this construction is the same as the standard Nearest Neighbours Algorithm.§ FUTURE WORK Given the diversity of Machine Learning algorithms, and the natural generalising power of Enriched Category Theory, there are numerous avenues to explore for future extensions of this work.Firstly, the construction of the NNA in section <ref> does not require any specific properties of Cost-enriched categories to define. This leads very naturally to a candidate definition of the V-enriched Nearest Neighbours Algorithm (V-NNA). V-NNA(y, x) := V((1_NY∘ F^*)(y, x), (T_*∘ F^*)(y, x)) : X ↛ Y This immediately begs the question of whether this definition has useful properties in other bases of enrichment. 
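Specialising back to the Cost case, the derivation above can be checked numerically with a few lines of code (our own illustration, with hypothetical names): a point x is assigned the class y exactly when the class-blind infimum of distances equals the infimum restricted to data points of class y.

import numpy as np

def nna(x, y, F, T, d):
    """True iff the nearest data point to x under the metric d carries the class y."""
    dists = np.array([d(Fi, x) for Fi in F])
    lhs = dists.min()                                   # (1_NY o F^*)(y, x)
    in_class = dists[np.array([Ti == y for Ti in T])]
    rhs = in_class.min() if in_class.size else np.inf   # (T_* o F^*)(y, x)
    return lhs >= rhs                                    # the hom-object Cost(lhs, rhs)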
Though the previous section interpretted the hom-object of the base of enrichment in its non enriched form for the sake of clarity, future works would benefit from considering the self enriched form of the hom-object. In the case of the Cost-NNA, interpreting the hom-object with truncated subtraction rather than an inequality. Cost-NNA(y, x) =(T_*∘ F^*)(y, x) -̇ (1_NY∘ F^*)(y, x) : X ↛ Y Researchers who aren't interested in Machine Learning would possibly consider the Voronoi diagram as a more interesting outcome of the V-NNA. By assigning each index its own separate class, the NNA partitions the metric space dependent on each individual point rather than each individual class. In this instance, the partitions generated in other bases of enrichment may prove interesting.It is also interesting to ask what other algorithms can be presented in this language. An obvious generalisation of the Nearest Neighbours Algorithm is the k Nearest Neighbours Algorithm, where classification of a point is based on the majority classification of the k closest points to it. Beyond this, there are many ML algorithms which depend purely on the distance metric of their dataset, so many may also be found as constructions of Cost-enriched categories. § CONCLUSION The nascent field of Category Theory for Machine Learning has been growing in recent years. As Category Theory is predominantly concerned with mathematical structure, there is a hope that such techniques can improve our understanding of how Machine Learning algorithms operate. Previous works have demonstrated that there is value in this avenue of research, but there is currently not enough experience to indicate the correct way to apply Category Theory to the understanding of Machine Learning algorithms. In particular, there has not previously been an application of Enriched Category Theory in Machine Learning. With the construction of the Nearest Neighbours Algorithm, using tools from Enriched Category Theory, there is now a stronger indication that this area can provide valuable insight. Furthermore, the strategies used for the representation of information and reasoning about the construction of machine learning algorithms in this format suggests that the enriched structure offers a potentially more intuitive framework than other categorical attempts.The simplicity of constructing the Nearest Neighbours Algorithm in this framework does add credence to the sense that the algorithm itself is an exceedingly natural approach to extending classifications. With the formulation of the Extended Nearest Neighbours Algorithm, it becomes a tantalising area of future work to ask if this algorithm continues to provide sensible classifications in other bases of enrichment. This motivation is part of the underpinning interest mentioned in the introduction of this work. Is it the case that machine learning requires fundamentally new algorithms to tackle stranger and stranger problems. Or is it that when suitably abstracted, a handful of algorithms might prove to be sufficient for the majority of case and that the engineering challenge comes in choosing the correct base of enrichment.Another interesting outcome of this work is to indicate that Enriched Category Theory is a framework of reasoning that should be of more interest to both Machine Learning Experts and Mathematicians. 
Although Enriched Category Theory is often derided as a more abstract formulation of the already exceedingly abstract field of Category Theory, certain bases of enrichment can be seen to create enriched categories that are more practically useful than other categorical notions. Furthermore, this work indicates that an understanding of the interaction between hom-objects, functors, and profunctors can provide useful insights into the structuring of information and the meaning behind those structures. Even if one does not find the rigorous application of the theory useful, the intuition may prove helpful. This work was partly funded by the grant “Early detection of contact distress for enhanced performance monitoring and predictive inspection of machines” (EP/S005463/1) from the Engineering and Physical Sciences Research Council (EPSRC), UK, and Senseye.
http://arxiv.org/abs/2312.16529v1
{ "authors": [ "Matthew Pugh", "Jo Grundy", "Corina Cirstea", "Nick Harris" ], "categories": [ "cs.LG", "math.CT" ], "primary_category": "cs.LG", "published": "20231227112003", "title": "Using Enriched Category Theory to Construct the Nearest Neighbour Classification Algorithm" }
[]979-8-3503-2277-4/23/$31.00 2023 IEEE []Achieving Fairness in DareFightingICE Agents Evaluation Through a Delay MechanismChollakorn Nimpattanavong, Thai Van Nguyen, Ibrahim Khan Graduate School of Information Science and Engineering Ritsumeikan University, Japan {gr0608sp, gr0557fv, gr0556vx}@ed.ritsumei.ac.jp Ruck Thawonmas College of Information Science and Engineering Ritsumeikan University, Japan [email protected] Worawat Choensawat, Kingkarn Sookhanaphibarn School of Information Technology and Innovation Bangkok University, Thailand{worawat.c, kingkarn.s}@bu.ac.thJanuary 14, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ This paper proposes a delay mechanism to mitigate the impact of latency differences in the gRPC framework—a high-performance, open-source universal remote procedure call (RPC) framework—between different programming languages on the performance of agents in DareFightingICE, a fighting-game research platform. The study finds that gRPC latency differences between Java and Python can significantly impact real-time decision-making. Without a delay mechanism, Java-based agents outperform Python-based ones due to lower gRPC latency on the Java platform. However, with the proposed delay mechanism, both Java-based and Python-based agents exhibit similar performance, leading to a fair comparison between agents developed using different programming languages. Thus, this work underscores the crucial importance of considering gRPC latency when developing and evaluating agents in DareFightingICE, and the insights gained could potentially extend to other gRPC-based applications.Fighting Game, DareFightingICE, Delay mechanism, Agent evaluation, Fair comparison § INTRODUCTIONDareFightingICE <cit.> is a Java-based game artificial intelligence (AI) research platform that aims to create a generalized fighting game agents. It builds upon the FightingICE <cit.> framework and seeks to develop an agent that can learn and perform well against different opponents. One of DareFightingICE's objectives is to include the integration of audio data into the decision-making process during gameplay. By utilizing audio data, the project aims to provide an agent with the useful information that can be used to make more informed decisions. The successful integration of audio data into an agent's decision-making process could represent a significant breakthrough in game AI research.gRPC Remote Procedure Call (gRPC) <cit.> has recently been integrated into DareFightingICE <cit.> as the communication framework between an agent and the platform itself. This decision was made due to the strict 16.66 millisecond time limit within which an agent must process. gRPC was chosen for its high performance in efficiently transmitting a constant stream of data. Additionally, gRPC's support for multiple programming languages opens up opportunities to develop an agent in languages other than Java. By leveraging gRPC, the project aims to improve the performance and flexibility of the agent development process. 
This integration is expected to provide faster and more efficient communication between an agent and the platform, ultimately leading to higher agent performance in game playing scenarios.DareFightingICE offers an interface for developing agents in Java and Python, which are the primary languages supported by the team. However, the latency of the gRPC framework differs between the two languages, with Java exhibiting lower latency in gRPC calls to the platform compared to Python's gRPC that is not good at streaming RPC calls <cit.>. This difference in latency can have a significant impact on the agent performance since the time pool available for processing on Java and Python is different. According to the game's specification, if the overall processing time exceeds 16.66 ms <cit.>, an agent will not be able to process the next frame data due to frame skipping, which can lead to a drop in performance.To ensure fairness between Java-based and Python-based agents, we propose a delay mechanism that can mitigate the effects of the latency differences between the two languages. The proposed mechanism is designed to provide a consistent processing time for both Java and Python, ultimately leading to a fair comparison between agents developed using different programming languages.The contributions of this work are as follows: first, we conduct an investigation into gRPC latency across multiple programming languages, with a particular focus on Java and Python, and identify an appropriate delay to minimize any differences; second, we investigate the effects of latency variations on the performance of agents created for DareFightingICE by utilizing BlackMamba, the 2021 competition winner, as a test-bed.§ RELATED WORK§.§ Recent Studies on FightingICE and DareFightingICE AgentsMoon et al. <cit.> used a machine learning algorithm to dynamically adjust the behavior of FightingICE agents in response to players' affective states. This approach facilitated adaptive interaction between the agent and player, significantly enhancing the overall gaming experience by creating a more immersive and responsive environment. In the same year, Waris et al. <cit.> utilized CATNeuro, a neural-network model based on the graph evolution concept propelled by Cultural Algorithms, to engineer real-time industrial controllers. CATNeuro was tested on FightingICE and a trailer motion system, with it consistently outperforming other methods. The superior performance of CATNeuro can be attributed to its design which fosters increased diversity in the model, a result of the interplay between cooperative and competitive knowledge.In a separate study, Thai et al. <cit.> presented a deep reinforcement learning agent for DareFightingICE. Uniquely, this agent uses sound exclusively as an input, marking a significant deviation from traditional decision-making processes used by FightingICE agents, which typically rely on game states provided by the game system, as done in the above two studies.§.§ Improving Data Transfer Efficiency for Agents in the DareFightingICE using gRPCOne of the challenges faced by agent developers in DareFightingICE as well as FightingICE was the high overhead associated with the previous communication interface used until the 2022 competition, Py4J, when transmitting large amounts of data. This often resulted in an agent's overall processing time exceeding the 16.66 ms time limit, which is crucial for timely and accurate decision-making in the competition. 
To address this issue, the competition has recently integrated gRPC as a communication framework between agents and DareFightingICE. The use of gRPC has several advantages, including its high performance, which reduces latency by up to 65% <cit.>, and increased stability. Moreover, gRPC has been found to mitigate the issue of missed frames, which can occur when using Py4J. The adoption of gRPC as the communication framework in DareFightingICE has significantly improved the agent performance, allowing developers to create more complex agents that can better utilize audio data in their decision-making processes.§ METHODOLOGYIn this section, we elaborate on the methodology adopted to identify the optimal delay mechanism for mitigating the impact of latency differences between Java's gRPC and Python's gRPC by implementing an agent called Sandbox in both Java and Python. This involves discussing the objectives and implementation of the agent, the experimental setup to ensure accuracy and reliability, and the evaluation of latency for data transmission between the agent and the platform in both Java and Python. §.§ Objectives and ImplementationOur primary objective was to pinpoint the most efficient delay mechanism for mitigating the impact of latency differences between Java's gRPC and Python's gRPC. To achieve this goal, we implemented agents both in Java and Python, named SandBox, in such a way that it would measure only the overhead on the round-trip latency of transmitting data between the agent and the platform, without processing any data. By doing so, we could isolate the delay caused exclusively by data transmission, allowing us to assess and identify the most effective delay mechanism. This approach ensures that we focus on the optimization of data transmission and communication between an agent and the platform. §.§ Experimental SetupTo conduct our experiments, we employed a computer with specifications closely matching those of the official competition PC used in the DareFightingICE Competition. This similarity was crucial to ensure the accuracy and reliability of our results, given that the agent performance would be evaluated under similar conditions. The computer was equipped with an Intel(R) Xeon(R) W-2125 @ 3.70GHz CPU, 16 GB DDR4 RAM, and an NVIDIA Quadro P1000 graphics card, running on the Windows 10 Pro for Workstations operating system. Utilizing the same PC for all experiments allowed us to maintain consistent conditions and effectively eliminate other factors, which in turn facilitated an accurate comparison of the performance of the different delay mechanisms and programming languages. §.§ Evaluation of LatencyTo evaluate latency, we deployed Sandbox for 32 games (96 rounds) in both Java and Python, measuring the average latency of each round and illustrating our findings in Fig. <ref>. During the experiment, we observed that Java-based Sandbox's latency stabilized after 6 rounds, while the Python-based one's stabilized after just 3 rounds. To maintain consistency and ensure a fair comparison, we only considered the latency values after 6 rounds, at which point both agents' latency had stabilized.The average latency for the Java-based agent after round 6 was 0.465 ms, while for Python-based Sandbox, it was 0.807 ms. We rounded the difference in latency between the two agents up to 0.35 ms. 
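The delay added to the Java-based agents follows directly from these numbers. As a small illustration (variable names and the 0.05 ms rounding step are our assumptions; the averages are the values reported above), the gap between the two platforms is taken from the per-round averages after the warm-up rounds and rounded up:

import math

def added_delay_ms(java_latencies, python_latencies, warmup=6, step=0.05):
    """Average per-round gRPC latency gap after warm-up, rounded up to the next step."""
    java_avg = sum(java_latencies[warmup:]) / len(java_latencies[warmup:])
    python_avg = sum(python_latencies[warmup:]) / len(python_latencies[warmup:])
    gap = python_avg - java_avg          # about 0.807 - 0.465 = 0.342 ms here
    return math.ceil(gap / step) * step  # rounds up to 0.35 ms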
This comparison enabled us to identify the most efficient delay mechanism for Java-based agents, and ensure that this delay mechanism operates efficiently within the competition context.§ EVALUATIONIn this section, we discuss the evaluation process designed to investigate the impact of the delay mechanism on the agent performance in the context of the DareFightingICE Competition. We outline our experimental approach, the use of BlackMamba as a test-bed, and the implementation of four variants. Furthermore, we introduce a novel evaluation method for the agent performance and present our findings, which highlight the effectiveness of the delay mechanism in mitigating performance differences between Java-based and Python-based agents. §.§ Experimental ApproachOur experiments aim to investigate the impact of gRPC latency on the agent performance in DareFightingICE and the effectiveness of the delay mechanism in mitigating performance differences between Java-based and Python-based agents. To examine the impact of the delay mechanism on the agent performance, we selected BlackMamba, the winner of the 2021 FightingICE Competition, as our test-bed. We reimplemented BlackMamba in both Java and Python to enable a comparison of the performance of Java-based and Python-based agents.The motivation behind the reimplementation is due to the fact that the initial implementation of BlackMamba in Java involved creating a new Java object in every frame, without reusing the available object. This led to frequent execution of Java Garbage Collection, resulting in unstable latency. Furthermore, since the initial implementation of BlackMamba is in Java, it is necessary to re-implement the same algorithm in Python as well. Our experiments involve four variants: the reimplemented BlackMamba (baseline) and three versions of the baseline, in both Java and Python, with processing times adjusted to 15.15 ms, 15.5 ms, and 15.85ms, respectively, as shown in Table <ref> in order from top to bottom.To ensure a fair comparison, we conducted 96 rounds for each variant, where they fought against MctsAi <cit.>, a sample agent in the competition using Monte-Carlo tree search. The first six rounds were disregarded to ensure consistent gRPC latency, as mentioned in Sec. III-C. Our hypothesis is that Java-based BlackMamba would outperform the Python-based one without an introduced delay in cases where the overall processing time exceeds 16.66 ms, but with a 0.35 ms delay introduced to the Java-based agents, both Java-based and Python-based agents would exhibit similar performance. §.§ Evaluation MethodWe introduce the method for evaluating the agent performance, taking into account both the remaining Health Points (HP) and elapsed time. The evaluation method used in existing work <cit.> solely focused on the remaining HP of both players, which is insufficient in effectively assessing performance. The elapsed time is also crucial as it reflects how fast an agent defeats the opponent, a factor that cannot be accurately determined by HP alone. Therefore, the elapsed time together with a boolean flag representing win or loss is also included. The reason for the introduction of this boolean flag is that if the elapsed time is the same and the fight is highly competitive with tiny HP difference, ignoring the fight result would lead to a similar assessment of performance. 
This new method allows for more precise and efficient evaluations. To implement this evaluation method, the result data from each round are used, which provide information on the remaining HP for each agent and the elapsed time measured in frames, with a maximum of 3600 frames per round. These values are then normalized using a set of equations (Eqns. (<ref>), (<ref>), (<ref>), and (<ref>)). In these equations, HP_BlackMamba and HP_MctsAi denote the remaining HP of BlackMamba and MctsAi, respectively, while Time_Elapsed denotes the elapsed time of the round. In addition, HP_Total and Time_Total denote the maximum possible HP (set at 400 HP) and the total time per round (set at 3600 frames), respectively. The average of the values from these four equations is then calculated to evaluate the performance score of BlackMamba (Eqn. (<ref>)).
HP_1 = HP_BlackMamba / HP_Total
HP_2 = 1 - HP_MctsAi / HP_Total
w = 1 if HP_BlackMamba > HP_MctsAi, and w = 0 otherwise
t = w (1 - Time_Elapsed / Time_Total) + (1 - w) Time_Elapsed / Time_Total
Score = (HP_1 + HP_2 + w + t) / 4
§.§ Results In Fig. <ref>, we observed that without the delay mechanism, Java-based BlackMamba consistently outperformed the Python version when the processing time was above 15.5 ms, due to the lower gRPC latency on the Java platform. However, by adding a delay of 0.35 ms to the Java version, the performance gap was effectively reduced, with both languages showing similar performance. These results support our hypothesis that the delay mechanism can mitigate the impact of gRPC latency differences and ensure a fair and accurate evaluation of the agent performance in DareFightingICE. § DISCUSSIONS The findings indicate that both Java-based and Python-based agents demonstrate comparable performance when the processing time is under 15.15 ms, even without the introduced delay mechanism. The processing time limit that triggers the difference in agent performance was found to be 15.5 ms, not 16.66 ms as mentioned in <cit.>. This is because our results are based on average processing times, which may overlook occasional delays that push the overall processing time on the game system above 16.66 ms. While our experiments with BlackMamba provided valuable insights, it is important to recognize that our evaluation was limited to a single type of agent in a specific environment. Further studies are necessary to explore the effects of gRPC latency on other agents in various settings. Additionally, our study focused solely on the impact of gRPC latency differences between Java and Python, and did not consider other factors that could affect the agent performance, such as operating system (OS) thread management and other OS features. Therefore, future research should investigate the impact of other variables on an agent's performance in DareFightingICE and similar applications. § CONCLUSIONS Our study sought to investigate the impact of gRPC latency differences between programming languages on the agent performance in DareFightingICE. Specifically, we compared the performance of Java-based and Python-based agents with and without a delay mechanism. The results showed that the differences in gRPC latency between these programming languages can have a significant impact on the agent performance in DareFightingICE.
However, with a delay mechanism introduced to Java-based agents, both Java-based and Python-based agents exhibited similar performance, indicating that this delay mechanism can effectively mitigate the impact of gRPC latency differences.These findings have important implications for the development and evaluation of agents in DareFightingICE and other gRPC-based applications. When designing a game-playing AI competition that supports multiple programming languages by utilizing gRPC, it is crucial to consider the potential latency differences between programming languages and take measures to mitigate these variations. Introduction of the delay mechanisms is one such measure that can help ensure a fair and accurate evaluation of the agent performance. § ONLINE RESOURCES Source code and raw data are available at <https://github.com/Staciiaz/cog2023-darefightingice-evaluation>.00 b1 I. Khan, T. V. Nguyen, X. Dai, R. Thawonmas, “DareFightingICE Competition: A Fighting Game Sound Design and AI Competition," 2022 IEEE Conference on Games (CoG), pp. 478-485, August 2022. b2 F. Lu, K. Yamamoto, L. H. Nomura, S. Mizuno, Y. Lee, and R. Thawonmas, “Fighting Game Artificial Intelligence Competition Platform," Proceedings of the 2nd IEEE Global Conference on Consumer Electronics (GCCE), pp. 320-323, October 2013. b3 “Introduction to gRPC," gRPC. [Online]. Available: <https://grpc.io/docs/what-is-grpc/introduction>. [Accessed: May 14, 2023]. b4 C. Nimpattanavong, I. Khan, T. V. Nguyen, R. Thawonmas, W. Choensawat, K. Sookhanaphibarn “Improving Data Transfer Efficiency for AIs in the DareFightingICE using gRPC,” arXiv preprint arXiv:2303.10001, 2023 (accepted for oral representation at 2023 8th International Conference on Business and Industrial Research, May 2023). b5 “Performance Best Practices," gRPC. [Online]. Available: <https://grpc.io/docs/guides/performance>. [Accessed: May 14, 2023]. b6 J. Moon, Y. Choi, T. Park, J. Choi, J. Hong, and K. Kim, “Diversifying dynamic difficulty adjustment agent by integrating player state models into Monte-Carlo tree search," Expert Systems with Applications, Volume 205, 2022. b7 F. Waris, R. Reynolds, and J. Lee, “Evolving Deep Neural Networks with Cultural Algorithms for Real-Time Industrial Applications," International Journal of Semantic Computing, Volume 16, No. 02, pp. 281-312, 2022. b8 T. Van Nguyen, X. Dai, I. Khan, R. Thawonmas, and H. V. Pham, “A Deep Reinforcement Learning Blind AI in DareFightingICE,” 2022 IEEE Conference on Games (CoG), pp. 632–637, 2022. b9 M. Ishihara, T. Miyazaki, C. Y. Chu, T. Harada, and R. Thawonmas, “Applying and improving Monte-Carlo Tree Search in a fighting game AI," Proceedings of the 13th international conference on advances in computer entertainment technology, pp. 1-6, November 2016.
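As noted in the Evaluation Method section above, the round-level scoring computation can be summarized in a few lines of code. The following Python sketch is our own illustration (the function and variable names are ours, not the authors' released code); the constants follow the values stated in the paper (400 maximum HP and 3600 frames per round).

# Minimal sketch of the round-level performance score; not the authors'
# released code -- names and structure are illustrative only.
HP_TOTAL = 400        # maximum possible HP per agent (paper value)
TIME_TOTAL = 3600     # frames per round (paper value)

def round_score(hp_blackmamba, hp_mctsai, time_elapsed):
    """Performance score of BlackMamba for one round, in [0, 1]."""
    hp1 = hp_blackmamba / HP_TOTAL                 # own remaining HP
    hp2 = 1.0 - hp_mctsai / HP_TOTAL               # damage dealt to MctsAi
    w = 1.0 if hp_blackmamba > hp_mctsai else 0.0  # win/loss flag
    # elapsed-time term: a win is better when fast, a loss is better when it lasts long
    t = w * (1.0 - time_elapsed / TIME_TOTAL) + (1.0 - w) * (time_elapsed / TIME_TOTAL)
    return (hp1 + hp2 + w + t) / 4.0               # average of the four normalized terms

# example: a win with 250 HP left, opponent at 0 HP, after 1800 of 3600 frames
print(round_score(250, 0, 1800))   # -> 0.78125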
http://arxiv.org/abs/2312.16010v1
{ "authors": [ "Chollakorn Nimpattanavong", "Thai Van Nguyen", "Ibrahim Khan", "Ruck Thawonmas", "Worawat Choensawat", "Kingkarn Sookhanaphibarn" ], "categories": [ "cs.NI", "cs.AI", "cs.PF", "C.4; H.4" ], "primary_category": "cs.NI", "published": "20231226113836", "title": "Achieving Fairness in DareFightingICE Agents Evaluation Through a Delay Mechanism" }
On Quantum States for Angular Position and Angular Momentum of Light

Bo-Sture K. Skagerstam[Corresponding author]^,[Email address: [email protected]]
Department of Physics, Norwegian University of Science and Technology - NTNU, N-7491 Trondheim, Norway
and
Per K. Rekdal[Email address: [email protected]]
Molde University College, PO Box 2110, N-6402 Molde, Norway

In the present paper we construct a properly defined quantum state expressed in terms of elliptic Jacobi theta functions for the self-adjoint observables angular position θ and the corresponding angular momentum operator L = -id/dθ in units of ħ = 1. The quantum uncertainties Δθ and Δ L for the state are well-defined and are, e.g., shown to give a lower value of the uncertainty product ΔθΔ L than the minimal uncertainty states of Ref.<cit.>. The mean value ⟨ L ⟩ of the state is not required to be an integer. In the case of any half-integer mean value ⟨ L ⟩ the state constructed exhibits a remarkable critical behaviour with upper and lower bounds Δθ < √(π^2/3 - 2) and Δ L > 1/2.

Motivated by the ingenious experimental procedures to generate restricted values of angular position and angular momentum degrees of freedom (see, e.g., Refs.<cit.>), we consider a description of such limitations in terms of a well-defined pure quantum state. Restrictions of the angular phase degree of freedom must be accompanied by restrictions on angular momentum degrees of freedom of, e.g., light pulses (see Ref.<cit.> for a general presentation and Refs.<cit.> for a limited selected set of original papers on the notion of angular momentum for light pulses). Experimental and theoretical explorations have been extended to the observation of fractional angular momentum as well as to the use of single-photon sources (see, e.g., Refs.<cit.>). Here we, in particular, notice that the presence of half-integer angular momentum appears to play a very special role, as emphasised in, e.g., Ref.<cit.>. The quantum state discussed in the present paper also turns out to lead to unique signatures for half-integer angular momentum mean values.

In order to describe photon sources and the detection of angular momentum degrees of freedom, it is important to carry out a detailed second quantization of the electromagnetic field making use of appropriate normal mode functions. This has been investigated in great detail in the literature in various contexts, requiring considerable effort (see, e.g., Refs.<cit.> and references cited therein). At the single-photon level one predicts fractional angular momentum in a reduced physical dimension (see, e.g., Refs.<cit.> and references cited therein). In the present paper we limit ourselves to a transverse orbital angular momentum degree of freedom for mode functions in a reduced physical propagation dimension and to the construction of a possible well-defined quantum state according to the basic rules of quantum mechanics. Related charged quantum states have previously been discussed elsewhere <cit.> in the context of quantum states for charged q-bits. The results presented here can be applied to such systems as well.
Weconsider the following periodic extension of the minimal dispersion intelligent pure states as discussed in Ref.<cit.>, later revisitedin Ref.<cit.>,in termsof aparametric representation witha real-valuedparameter λ > 0, i.e.,ψ(θ) =N ∑_n=-∞^∞f(θ - θ̅ + 2π n)   , and where f(x) = e^ixl̅ e^-λ x^2/2  .The range of θ is restrictedin terms ofan in principle arbitraryoff-set value θ_0 for the phase θsuch that θ_0≤θ≤θ_0 +2π.Apart from theθ̅-dependence, the n=0 contribution in Eq.(<ref>) corresponds to the minimal dispersion state as first discussed in Ref.<cit.>. As in this reference we find itconvenient to make use of the choice θ_0=- πsincethisnaturally leads to⟨θ⟩ = 0when θ̅=0.Here N is a real-valued normalization constant for the state ψ(θ) and l an additional real-valued parameter all of which will be discussed in more detaillater on. In passing we notice that a periodic extension of the form as in Eq.(<ref>) has a resemblance to the quantum states considered by M. Born <cit.> in a discussion of the notion of probabilities in classical and quantum physics and the role of real numbers in physics. We will comment on these fundamental issues below where we, in particular, exhibit an exponential sensitivity to the difference between rational and real numbers for the observables under consideration. The state ψ(θ) cannow be expressed in terms of a well-knownelliptic ϑ_3-function (see, e.g.,Refs.<cit.>), i.e., ψ(θ) =N e^il(θ -θ̅) e^-λ(θ -θ̅)^2/2ϑ_3[ π(l̅ +iλ(θ-θ̅)), e^-2λπ^2]  , using the definition ϑ_3[z,q] = ∑_n=-∞^∞ q^n^2 e^2niz  , within the unit disk |q|< 1.The state ψ(θ)haswell-defined properties, i.e., it is continuous and differentiable for all values θ∈ [-π,π],including the boundary θ = ±π in contrast to the state considered in Ref.<cit.>.One can actually be more precise in a mathematical sense and show that ψ(θ) belongs to states for whichθ andL = -id/dθ are self-adjoint operators (see, e.g., Example 1 inRef.<cit.> and related discussions in Refs.<cit.>) as required by the fundamental rules of quantum mechanics. The representation of ψ(θ) in the form of Eq.(<ref>) is suitable for considerations of large values of the parameter λ.An alternative and explicit expression for ψ(θ) according to Eq.(<ref>) which is more adapted for a small λ expansion can now be obtained by making use of general properties of ϑ_3-functions under modular transformations (see, e.g., Refs.<cit.>). For our purposes, and for the convenience of the reader, we make this explicitin termsof a Poisson summation technique (see, e.g., Refs.<cit.>)by noticingthatthe state ψ(θ) according to Eq.(<ref>) is a periodic function.It can then beexpanded in a Fourier series ψ(θ) = ∑_n=-∞^∞c(n)e^inθ  ,such thatc(n) = 1/2π∫_-π^π dθ e^ -inθψ (θ) = N/2π∫_-∞^∞ dθe^-inθf(θ - θ̅) = N/√(2πλ) e^ -inθ̅ e^ -(n - l̅)^2/2λ  . It then follows that ψ(θ) = N/√(2πλ)∑_n=-∞^∞e^in(θ -θ̅)e^ -(n - l̅)^2/2λ ,whichalso can be expressed in terms of an elliptic ϑ_3-function, i.e.,ψ(θ) =N/√(2πλ) e^-l^2 /2λϑ_3[ (θ -θ̅)/2 -i l/2λ, e^-1/2λ]  ,which is rapidly converging for small values of the parameter λ > 0. 
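Since the equivalence of the periodic Gaussian sum and its Fourier (theta-series) form rests on the Poisson summation step above, a quick numerical cross-check may be useful. The following Python sketch is our own illustration (the parameter values are arbitrary examples, not taken from the paper); it evaluates both truncated sums, omitting the common normalization factor N, and confirms that they agree to machine rounding.

import numpy as np

# Cross-check of the two representations of psi(theta) (common factor N omitted).
lam, lbar, theta_bar = 0.8, 0.5, 0.0          # example values of lambda, l-bar, theta-bar
theta = np.linspace(-np.pi, np.pi, 401)
ns = np.arange(-50, 51)                        # truncation of the infinite sums

# representation (i): sum_n f(theta - theta_bar + 2*pi*n), f(x) = exp(i*x*lbar - lam*x^2/2)
x = theta[:, None] - theta_bar + 2 * np.pi * ns[None, :]
psi_direct = np.sum(np.exp(1j * x * lbar - lam * x**2 / 2), axis=1)

# representation (ii): (2*pi*lam)^(-1/2) sum_n exp(i*n*(theta - theta_bar) - (n - lbar)^2/(2*lam))
psi_fourier = np.sum(
    np.exp(1j * ns[None, :] * (theta[:, None] - theta_bar) - (ns[None, :] - lbar) ** 2 / (2 * lam)),
    axis=1,
) / np.sqrt(2 * np.pi * lam)

print(np.max(np.abs(psi_direct - psi_fourier)))   # ~1e-15: the Poisson-summation identity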
The explicitrepresentation of ψ(θ) in the form as inEq.(<ref>) leads to the general normalization condition(ψ,ψ) ≡∫_-π^π dθ |ψ(θ)|^2 = N^2/λ∑_n=-∞^∞e^ -(n - l̅)^2/λ = N^2√(π/λ)ϑ_3[ -l̅π, e^-λπ^2]=1.The normalization factorN is therefore in general only a function of the parameters l̅ and λ.With the state ψ(θ) in the representation as in Eq.(<ref>), we can make use of Borns rule to find the properly normalized probability distribution p(l) to find theinteger angular momentum l component in terms of the angular momentum eigenfunction ψ_l(θ)= exp(ilθ)/√(2π), i.e., p(l) = |(ψ_l,ψ)|^2 = |∫_-π^πdθψ_l^*(θ)ψ(θ) |^2=N^2/λ e^- (l-l)^2/λ , which, of course, can be used to compute various expectation values of the observableL. The discrete nature of this Gaussian-like distribution is now such that λ isnot directly relatedto the uncertainties Δθ orΔ L. Below we, e.g.,show thatλ =O(1/(Δθ)^2) for all values of l̅ in the large λ limit and λ =O(1/log(1/Δ L))in the small λ limit at least for integer values of ⟨ L ⟩.We notice thatEq.(<ref>), or making use of the probability distribution p(l) above,implies the generaland useful relations⟨ L ⟩ = N^2/λ∑_n=-∞^∞ne^- (n-l)^2/λ ,⟨ L^2 ⟩ = N^2/λ∑_n=-∞^∞n^2e^- (n-l)^2/λ ,independent of θ̅,and which can be used to relate the parameter λ to the uncertaintyΔ Lat least numerically. Furthermore, and with ψ(θ) in the form as given in Eq.(<ref>), and for any value of θ_0 ,we obtain the following general expression ⟨θ⟩ = ∫_θ_0^θ_0 +2πdθθ|ψ(θ)|^2 =N^2/λ∑_m ≠ nsin[(m-n)(θ_0- θ̅)]/m-n e^- ((m-l)^2 +(n-l)^2)/2λ+ (θ_0 +π)   .For any value of l̅ weobservethat due to Eq.(<ref>) the choice θ_0 = -π leads to⟨θ⟩ = 0 only if theparameter θ̅ =0, ±π.A similar expression can be obtained for ⟨θ^2 ⟩ in a straightforward manner with the general result⟨θ^2 ⟩=2N^2/λ∑_m ≠ ncos[(m-n)(θ̅ - θ_0)]/(m-n)^2 e^- ((m-l)^2 + (n-l)^2)/2λ+2N^2/λ(θ_0 + π)∑_m ≠ nsin[(m-n)(θ_0 - θ̅)]/(m-n) e^- ((m-l)^2 + (n-l)^2)/2λ+ (θ_0 + 2π)^3 - θ_0^3/6π .With θ_0 = -π we,in particular,have that⟨θ^2 ⟩=2N^2/λ∑_m ≠ ncos[(m-n)(θ̅ - π)]/(m-n)^2 e^- ((m-l)^2 + (n-l)^2)/2λ+ π^2/3  .In the course of the present workwe have found that less general expressions than Eqs.(<ref>) and (<ref>) have been discussed in the literaturein the context of a minimumentrophic angular position and angular momentum quantum state <cit.>. Thereit was also argued that a proposed quantum state could be of use in theoretical and/or experimental explorations of the notion of a phase observable in quantum physicswhich we also presume can be the case for the more general quantum state Eq.(<ref>).It now follows form Eqs.(<ref>) and(<ref>) that theimplicit dependence of the parameter λcan in general be eliminated in terms of the dispersion Δθ which, however,requires numerical but straightforward considerations. 
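To make these "numerical but straightforward considerations" concrete, the following Python sketch (our own illustration; the parameter values λ = 1 and l̄ = 1/2 are arbitrary examples) evaluates the sums above for θ_0 = -π and θ̄ = 0, in which case ⟨θ⟩ = 0, and returns ⟨L⟩, ΔL, Δθ, and the uncertainty product.

import numpy as np

lam, lbar = 1.0, 0.5
n = np.arange(-60, 61)                       # truncation of the angular-momentum sums

w = np.exp(-(n - lbar) ** 2 / lam)
N2_over_lam = 1.0 / w.sum()                  # N^2/lambda fixed by the normalization condition

p = N2_over_lam * w                          # p(l): probability of angular momentum l
L_mean = np.sum(n * p)
dL = np.sqrt(np.sum(n**2 * p) - L_mean**2)

# <theta^2> from the double sum above with theta_0 = -pi and theta-bar = 0 (so <theta> = 0)
m, k = np.meshgrid(n, n, indexing="ij")
off = m != k
gauss = np.exp(-((m - lbar) ** 2 + (k - lbar) ** 2) / (2 * lam))
term = np.where(off, np.cos((m - k) * (0.0 - np.pi)) / np.where(off, (m - k) ** 2, 1), 0.0)
theta2 = 2 * N2_over_lam * np.sum(term * gauss) + np.pi**2 / 3
dtheta = np.sqrt(theta2)

print(L_mean, dL, dtheta, dtheta * dL)
# for the half-integer value lbar = 1/2 this yields <L> = 1/2 exactly, with
# dL and dtheta both roughly 0.71 and a product of about 0.50, in line with
# the half-integer behaviour discussed further below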
If, finally, the parameter l̅ is any integer Eq.(<ref>) simplifies toψ(θ) =N/√(2πλ) e^il̅(θ - θ̅)ϑ_3[ (θ - θ̅)/2, e^-1/2λ]  ,and the normalization factor N is determined by the relationN^2/λϑ_3[0, e^- 1/λ] = 1 .For integer values of l̅ it also follows from Eq.(<ref>) that⟨ L ⟩ = l̅ ,(Δ L)^2= N^2/λ∑_n=-∞^∞n^2e^- n^2/λ .In the analysis of the physical properties of the state ψ(θ)we are going to make use of thegeneral form of theuncertainty relation for the observables θ and L, i.e., the Cauchy-Schwarz inequality, which now takes the following exact form(Δθ)^2(ΔL^2)≥ |((θ - ⟨θ⟩)ψ,(L - ⟨ L ⟩)ψ)|^2=1/4|(θψ,Lψ)-(Lψ,θψ) |^2 + 1/4|(θψ,Lψ)+(Lψ,θψ) -2⟨θ⟩⟨ L ⟩|^2    ,where (Δ O)^2 = (( O- ⟨ O⟩)ψ,( O- ⟨ O⟩) ψ), and where the first term leads to the Kraus lowerinequality bound <cit.>for the uncertainty product, i.e.,ΔθΔL≥ |1-2π|ψ(π)|^2|/2 after a partial integration. If we, as used in Fig.<ref>, consider the special case with θ̅=0, i.e., ⟨θ⟩ = 0,we observe that (θψ,Lψ) is purely imaginary for the state Eq.(<ref>). This is so since, as can be verified using Eq.(<ref>),that indeed(θψ,Lψ) = -iN^2/2λ∑_m ≠ ncos[(m-n)π]) e^- ((m-l)^2 + (n-l)^2)/2λ= i/2(1-2π|ψ(π)|^2)   ,for all values of l̅ inaccordance with the general uncertainty relation Eq.(<ref>). Therefore a strict minimal uncertainty state must satisfy the condition ΔθΔL = |1-2π|ψ(π)|^2|/2. With the parameters as used in Fig.<ref> it then follows that the state in Eq.(<ref>) is not a minimal uncertainty state. Nevertheless, as is also illustrated in Fig.<ref>, we find for the same state lower values of the uncertainty product ΔθΔ Las compared to the result of Ref.<cit.>.For large values ofthe parameter λ the state ψ(θ) according to Eq.(<ref>) is such that the mean value of the phase θ is close to θ̅ due to then=0 contribution. The state ψ(θ) can then be approximated by the properly normalized expression ψ(θ) = (1/2π(Δθ)^2)^1/4exp (il̅(θ -θ̅))exp( - (θ -θ̅)^2/4(Δθ)^2 ) ,where the range of θ can be extended to all real numbers with an exponentially small error and using the identification λ = 1/2(Δθ)^2. It then follows from Eq.(<ref>) that ⟨ L⟩ = l̅,⟨θ⟩ = θ̅,and ΔθΔ L = 1/2 as well as ((θ - ⟨θ⟩)ψ,(L - ⟨ L ⟩)ψ)=i/2 is purely imaginary. The asymptotic expression Eq.(<ref>) therefore corresponds toa strict minimal dispersion state with equality in the uncertainty relation Eq.(<ref>) using the approximation ψ(±π) →ψ(±∞)=0.As illustrated in Fig.<ref>,it turns out that the approximation Eq.(<ref>) also well describes precise numerical considerations for a wide range offinite values of λ. For sufficiently small values of λ we proceedwithout loss of generality as follows. In terms ofl̅= l + ϵwith1/2< ϵ < 1 or 0 < ϵ < 1/2, respectively, we can make use of the small λ expansions of the exact expressions Eqs.(<ref>) and (<ref>) in order to obtain the corresponding expressions for⟨ L ⟩ and Δ L, i.e.,⟨ L ⟩ = l + 1/1+ e^(1-2ϵ)/λ , (Δ L)^2 = e^(1-2ϵ)/λ/(1+ e^(1-2ϵ)/λ)^ 2 ,independent of the parameter θ̅. Therefore the mean value ⟨ L⟩ in the small λ limit approach theinteger values l+1 or l depending on 1/2< ϵ < 1 or 0 < ϵ < 1/2, respectively, with limiting valueΔ L = 0 as it should. 
This branching feature of the state ψ(θ) is illustrated in Fig.<ref>for the case l=1.For integer values of ⟨ L ⟩, corresponding to the limiting values ϵ=0 or ϵ =1, we obtain from Eq.(<ref>) that λ = 1/log(1/(Δ L)^2) in the small λ limit.By inspection of Eq.(<ref>) it,furthermore, followsthat in the small λ limitthe quantum state ψ(θ) takes the following form ψ(θ) = 1/√(2π)1+ e^i(θ -θ̅)e^-(1-2ϵ)/2λ/(1+ e^-(1-2ϵ)/λ)^ 1/2e^il(θ -θ̅) . In this limit we thereforeobtain the expected l +1or langular momentum eigenstatesψ(θ) = 1/√(2π) e^i(l+1)(θ -θ̅), , 1/√(2π) e^il(θ -θ̅) ,inthe case 1/2< ϵ < 1orinthe case 1/2< ϵ < 1, respectively, with Δ L = 0 and the limiting value of Δθ given by π/√(3). It is now clear from Eqs.(<ref>) and (<ref>) with ϵ = 1/2 thatexact half-integervalues of l̅ = l + 1/2, where l is a positive integer, play a very special role, at least for sufficiently small λ.The asymptotic form of ψ(θ)is then a superposition of the l+1 and l angular momentum eigenstatesin Eq.(<ref>), i.e., ψ(θ) = 1/√(4π)(e^il(θ -θ̅) + e^i(l+1)(θ -θ̅)). For sufficiently small values of λ, and according to Eq.(<ref>) or using Eq.(<ref>),we then findthat ⟨ L ⟩ = l+ 1/2 and Δ L = 1/2 as well as thelimiting value (Δθ)^2 = π^2/3 -2. Here we notice another unique feature in the case ofϵ = 1/2 namely that ⟨ L ⟩ = l+ 1/2 is actually exact and valid for all values of λ >0.This can be verified by making use of the exact form of ⟨ L ⟩ according to Eq.(<ref>) and is due a simple but remarkable cancellation of contributing terms and by making use of the exact form of the normalization constant N in Eq.(<ref>).For half-integer mean values of ⟨ L ⟩, andfor ⟨θ⟩ = 0, it also follows that the lower limit in uncertainty relation Eq.(<ref>) is such that equality inΔθΔ L ≥ 1/2 implies a minimum uncertainty state. This is due to the fact that in this case it can be verified |(ψ(θ),ψ(θ))|=0 for all values of λ if θ= π.These remarkable features are illustrated in Figs.<ref> and <ref> for various values of the parameter l̅.One can now verify that the state ψ(θ) with the parameters as in Fig.<ref> is not a minimum uncertainty state.It is of importance to notice that for any real value of l̅ arbitrarily closeto a half-integer rational number we always obtain the branching to angular momentum l or l+1 eigenstatesfor sufficiently small values of λ as illustrated in Fig.<ref>. The well-defined quantum state as constructed in the present work therefore provides for an explicitexample with observables that exhibit exponential sensitivity to the difference between rational numbers and real numbers in quantum mechanics. It would, of course, be of great interest if the state ψ(θ) according to Eq.(<ref>) could be prepared and investigated in a concrete physical situationrevealing the fundamental role of real numbers in quantum physics <cit.> perhaps by extendingthe experimental l̅=0 procedure of Ref.<cit.>.In summary we have studied a well-defined quantum state ψ(θ) which exhibits some remarkable properties. For ⟨ L ⟩ arbitrarily close to a half-integer we have, e.g.,shown thatthe state ψ(θ) leads to two orthogonal eigenstate of L in the limit as Δθ approaches the conventional π/√(3). In the case ofan exacthalf-integer mean value of the angular momentum L the uncertaintyΔθ has, furthermore, an upper precise bound √(π^2/3 -2)which is smaller than conventional bound π/√(3) and Δ L ≥ 1/2. 
We have also exhibited an explicit exponential sensitivity to the difference of rational and real numbers in terms of an allowed and well-defined quantum state.ACKNOWLEDGMENTThe research by B.-S. Skagerstam (B.-S. S.)has been supported in by NTNU and Molde University College.B.-S.S. is grateful to J.R. Klauder for discussions and for informing us about Ref.<cit.>. The research by P.K. Rekdalhas been supported byMolde University College.REFERENCES 99 Padgett_2004 S. Franke-Arnold, S.M. Barnett, E. Yao, J. Leach, J. Courtial and M.J. Padgett, “Uncertainty Principle for Angular Position and Angular Momentum ”,New J. Phys. 6(2004) 103-1-8. Padgett_2005 D.T. Pegg,S.M. Barnett, R. Zambrini, S. Franke-Arnold and M.J. Padgett, “MinimumUncertainty Statesof Angular Momentum andAngular Position ”,New J. Phys. 7(2005)62-1-21. Barnett_2007S.M. Barnett and R. Zambrini,“Orbital Angular Momentum of Light ”,in “Quantum Imaging ”, Ed. M.I. Kolobov, pp. 277-308 (Springer, Singapore, 2007). Padgett_2010 G.C.G. Berkhout, M.P.J. Lavery, J. Courtial, M.W. Beijersbergen and M.J. Padgett, “Efficient Sorting of Orbital Angular Momentum States of Light ”, Phys. Rev. Lett. 105(2010) 153601-1-4. Padgett_Phys_Today_2004 M.J. Padgett, J. Courtial and L. Allen, “Light's Orbital Angular Momentum ”, Phys. Today, May (2004) 35-40. OAM_2003 “Optical Angular Momentum ”, Eds. L. Allen, S.M. Barnett and M.J. Padgett, Institute of Physics(Bristol, England, 2003);S. Franke-Arnold, L. Allen and M.J. Padgett, “ Advances in Optical Angular Momentum ”, Laser & Photon. Rev. 2, No. 4, (2008) 299-313 ;A.M. Yao and M.J. Padgett, “Orbital Angular Momentum: Origins, Behavior and Applications ”, Advances in Optics and Photonics 3(2011) 161-204 (2011) ; A.E. Willner et al., “Optical Communications Using Orbital Angular Momentum Beams ”, Advances in Optics and Photonics 7 (2015) 66-106.Leach_2002 J. Leach, M.J. Padgett, S.M. Barnett, S. Franke-Arnold and J. Courtial, “Measuring the Orbital Angular Momentum of a Single Photon ”, Phys. Rev. Lett. 88 (2002) 257901-1–4. Leach_2004 J. Leach, E. Yao and M.J. Padgett, “Observation of the Vortex Structure of a Non-Integer Vortex Beam ”, New J. Phys. 6(2004)1-8. Berry_2004 M.V. Berry, “Optical Vortices Evolving From Helicoidal Integer and Fractional Phase Steps ”, J. Opt.A: Pure Appl. Opt. 6(2004)259-268.Woerdman_2005 S.S.R. Oemrawsingh,X. Ma, D. Voigt, A. Aiello, E.R. Eliel, G.W. 't Hooft and J.P. Woerdman, “Experimental Demonstration of Fractional Orbital Angular Momentum Entanglement of Two Photons  ”, Phys. Rev. Lett. 95 (2005) 240501-1–4. Tao_2005 S.H. Tao, X-C. Yuan and J.Lin,“Fractional Optical Vortex Beam Induced Rotation of Particles  ”, Optics Express 13 (2005) 7726-7731. Yao_2006 E. Yao, S. Franke-Arnold, J. Courtialand M. Padgett, “Observation of Quantum Entanglement Using Spatial Light Modulators ”, Optics Express 14(2006) 13089-13094. Tanimura_2015 S. Tanimura “Uncertainty Relation Between Angle and Orbital Angular Momentum: Interference Effect in Electron Vortex Beams ”, Nanosystems: Phys. Chem. Math. 6(2015) 2015-212. Wang_2015 J. Wang, W. Zhang, Q. Qi, S. Zheng and L. Chen, “Gradual Edge Enhancement in Spiral Phase Contrast Imaging With Fractional Vortex Filters ”, Nature Scientific Reports 5 (2015) 1-6. Balantine_2016 K.E. Ballantine, J.F. Donegan, and P.R. Eastham, “There are Many Ways to Spin a Photon: Half-Quantization of a Total Optical Angular Momentum ”, Sci. Avd. 2 (2016) 1-7. Mitri_2016 F.G. 
Mitri, “Negative Optical Spin Torque Wrench of a Non-Diffracting Non-Paraxial FractionalBessel Vortex Beam ”, J.Quantitative Spectroscopy & Radiative Transfer 182 (2016) 172-179. Pan_2016 Y. Pan, X.Z. Gao, Z.C. Ren, X.L. Wang, C. Tu, Y. Li and H.T. Wang, “Arbitrarily Tunable Orbital Angular Momentum of Photons ”, Nature Scientific Reports 6 (2016) 1-8. Deng_2019 D. Deng, M. Lin, Y. Li and H. Zhao, “Precision Measurement of Fractional Orbital Angular Momentum ”,Phys. Rev. Applied 12 (2019)014048–1–7. Huang_2019 H.-C. Huang, “Quantifiable Example of Complementarity Relation Between Optical Orbital Angular Momentum and Angular Position ”,Optics Communications 446 (2019)23-32. Chen_2021 B. Chen, Y. Wei1, T. Zhao, S. Liu, R. Su, B. Yao, Y. Yu, J. Liu and X. Wang ,“Bright Solid-State Sources for Single Photons with Orbital Angular Momentum ”, Nature Nanotech. 16 (2021) 302-308. Deach_2022 S. Deachapunya, S. Srisuphaphon and S. Buathong ,“Production of Orbital Angular Momentum States of Optical Vortex Beams Using a Vortex Half-Wave Retarder With Double-Pass Configuration ”, Nature Scientific Reports 12 (2022) 6061-1–7. Wang_2022 M. Wang, F. Zhou, X. Lu, A. McClung, M. Davanco, V.A. Aksyuk and K. Srinivasan ,“Fractional Optical Angular Momentum and Multi-Defect-Mediated Mode Renormalization and Orientation Control in Photonic Crystal Microring Resonators ”, Phys. Rev. Lett. 129 (2022) 186101-1–6. Liang_2023 C.P. Liang, Y. Liu, F.F. Li, S.W. Leung, Y. Poo and J.H. Jiang, “Fractional Topological Numbers at Photonic Edges and Corners ”,Phys. Rev. Appl. 20 (2023)034028-1–10. barabosa_2000 H.H. Arnaut and G.A. Barbosa, “Orbital and Intrinsic Angular Momentum of Single Photons and Entangled Pairs of Photons Generated by Parametric Down-Conversion ”, Phys. Rev. Lett. 85 (2000) 286-289 and Comments by E.R. Eliel, S.M. Dutra, G. Nienhuis and J.P. Woerdman inibid.86 (2001) 5208 and by H.H. Arnaut and G. A. Barbosa in ibid.86 (2001) 5209 ; G.A. Barbosa and H.H. Arnaut, “Twin Photons with Angular-Momentum Entanglement: Phase Matching ”, Phys. Rev. 65 (2002) 053801-1-7. Matula_2013 O. Matula, A.G. Hayrapetyan, V.G. Serbo and A. Shurzhykov, “Atomic Ionization of Hydrogen-Like Ions by Twisted Photons: Angular Distribution of Emitted Electrons ”, J. Phys. B: At. Mol. Opt. Phys. 46(2013)205002-1-12. Skagerstam_2024 B.-S. Skagerstam, Lectures on “Quantum Field Theory - Applications in Atomic Physics and Quantum Optics ” (to appear). Born_1955 M. Born, “Continuity, Determinism, and Reality ”, dedicated to Professor Niels Bohr on Occasion of His 70th Birthday, Dan. Mat. Fys. Medd. 30 (1955) 3-26.Mumford_1983 I.S. Gradshteyn and I.M. Ryzhik, “Table of Integrals, Series and Products ”, Fourth Edition, Chap. 8.18-8.19 (Academic Press, New York, 1965); D. Mumford,“Tata Lectures on Theta. I. ”, Reprint of the 1983 Edition(Birkhäser, Boston 1983 (third printing 1994)); J.M. Borwein and P.P. Borwein, “Theta Functions and the Arithmetic-Geometric Mean Iteration ”, Ch. 2 (1987) 33-61, in Pi and the AGM,“A Study in Analytic Number Theory and Computational Complexity ” (Wiley, New York, 1987); E.T. Whittaker andG.N. Watson,“A Course on Modern Analysis ”, Ed.V.H. Moll (Cambridge University Press, Cambridge, 2021 (5:th revised Edition)).Gesztesy_1978 F. Gesztesy and L. Pittner, “Uncertainty Relations and Quadratic Forms ”,J. Phys.A: Math. Gen. A 65 (1978) 1765-1770. Glazman_1963 N.I. Akhiezer and I.M. Glazman, Chapter IV and Paragraph 49in Vol. 
I of “Theory of Linear Operators in Hilbert Space ”, Republication of the1961 and 1963 versions published by Frederick Ungar Publishing Co., New York (Dover Publications, New York,1993). Kraus_1965 K. Kraus, “Remark on the Uncertainty Between Angle and Angular Momentum ”, Zeitschrift für Physik 188 (1965) 374-377 and “A Further Remark on Uncertainty Relations ”, ibid.201 (1967) 134-141. Kato_1995 T. Kato, Chapter 5 in“Perturbation Theory for Linear Operators ”, Classics in Mathematics(Springer-Verlag, Berlin1995). Geloun_2012 J.B. Geloun, “Enhanced Quantization: The Particle on the Circle ”,Contribution to the “XXIXth International Colloquium on Group-Theoretical Methods in Physics ”, Chern Institute of Mathematics, Nankai, China, August 20-26, 2012 and in Nankai Series in Pure, Applied Mathematics and Theoretical Physics, “Symmetries and Groups in Contemporary Physics ”, 11 (2013) 569-574, and J.B. Geloun and J.R. Klauder, “Enhanced Quantization on the Circle ”,Phys. Scr.87 (2013) 035006-1–5.Rudin_1970 W. Rudin, Chapter 9in “Real and Complex Analysis ” (McGraw-Hill, London, 1970). Schleich_1993 M. Fleischhauer and W.P. Schleich, “Revivals Made Simple: Poisson Summation Formula as a Key to the Revivals in the Jaynes-Cummings Model ”,Phys. Rev. A 47 (1993) 4258-4269. Yao_2014 A.M. Yao, T. Brougham, E. Eleftheriadou, M.J. Padgett and S.M. Barnett, “Entropic Uncertainty Minimum for Angle and Angular Momentum ”,J. Opt. 16 (2014) 105404-1–6.
http://arxiv.org/abs/2312.16535v1
{ "authors": [ "Bo-Sture K. Skagerstam", "Per K. Rekdal" ], "categories": [ "quant-ph", "math-ph", "math.MP", "physics.optics" ], "primary_category": "quant-ph", "published": "20231227113159", "title": "On Quantum States for angular Position and Angular Momentum of Light" }
The natural smallness of Dirac neutrino mass from the multiplicative Lagrangian Monsit Tanasittikosol^1,2=============================================================================== 3D point cloud semantic segmentation has a wide range of applications. Recently, weakly supervised point cloud segmentation methods have been proposed, aiming to alleviate the expensive and laborious manual annotation process by leveraging scene-level labels. However, these methods have not effectively exploited the rich geometric information (such as shape and scale) and appearance information (such as color and texture) present in RGB-D scans. Furthermore, current approaches fail to fully leverage the point affinity that can be inferred from the feature extraction network, which is crucial for learning from weak scene-level labels. Additionally, previous work overlooks the detrimental effects of the long-tailed distribution of point cloud data in weakly supervised 3D semantic segmentation. To this end, this paper proposes a simple yet effective scene-level weakly supervised point cloud segmentation method with a newly introduced multi-modality point affinity inference module. The point affinity proposed in this paper is characterized by features from multiple modalities (e.g., point cloud and RGB), and is further refined by normalizing the classifier weights to alleviate the detrimental effects of long-tailed distribution without the need of the prior of category distribution. Extensive experiments on the ScanNet and S3DIS benchmarks verify the effectiveness of our proposed method, which outperforms the state-of-the-art by ∼ 4% to ∼ 6% mIoU. Codes are released at <https://github.com/Sunny599/AAAI24-3DWSSG-MMA>. § INTRODUCTION Point cloud data capture rich object and scene geometric and appearance information, which serves as an essential data representation for various applications, such as autonomous driving, augmented reality, and robotic. Point cloud semantic segmentation plays a key role in 3D scene understanding, and has been extensively explored <cit.>. However, the success of most of the methods is based on learning in a fully supervised manner, requiring extensive point-level annotations. To reduce the annotation costs, some recent works have delved into the realm of weakly supervised semantic segmentation (WSSS) methods <cit.>. These methods can be categorized based on different levels of supervision, including partially labeled points, sub-cloud level annotations, and scene-level annotations. Among these, segmentation with scene-level labels is the most challenging scenario as point-wise annotations are completely unavailable, which is the focus of our paper.Most of the current methods with scene-level annotations <cit.> are proposed based on pseudo labels which can be obtained by Class Activation Map (CAM) <cit.> or Multiple Instance Learning (MIL) <cit.>. The key to successful weakly supervised semantic segmentation is how to expand the semantic regions to achieve completeness and preciseness of the localized objects. The 3D point clouds of the RGB-D scans provide accurate object shape and scale information without occlusion and distortions, while the corresponding RGB data provide additional color and texture information. However, the current weakly supervised point cloud segmentation based on RGB-D scans fails to fully take advantage of all the data modalities. Moreover, current methods fail to fully exploit the point affinity that can be readily inferred from the feature extraction network. 
We argue that point affinity that characterizes the similarity between points is essential for learning from scene-level weak labels.In addition, the long-tailed distribution of point cloud data due to the extremely imbalanced point data have detrimental effects on both point-wise classification performance and point-wise feature learning, which has largely overlooked by previous research on weakly-supervised point cloud semantic segmentation. For example, when the point cloud data have long-tailed distribution, the point-wise classifier for semantic segmentation will be biased towards the head classes with more number of samples <cit.>. Moreover, the point features of the tail classes with less number of samples will also be learned similar to the point features of the head classes since the feature extractor may also be dominated by the head classes. These detrimental effects will further affect the affinity learning that relies on feature similarity. However, the lack of per-point category information in scene-level weak supervision makes it challenging to obtain category distribution. Therefore, how to take full advantage of both RGB and geometry information in the point cloud data while addressing the long-tailed distribution issue for affinity learning in the context of weakly-supervised semantic segmentation remain a challenging task.To this end, we propose a novel multi-modality affinity (MMA) enhanced weakly supervised semantic segmentation (WSSS) method by fully exploiting the point feature affinities from multiple data modalities and eliminating the influence of long-tail data distribution. The geometric information derived from point clouds and the color and texture information captured in RGB data offer distinct perspectives for characterizing feature affinity. By leveraging these complementary data modalities, we propose to generate multi-modality affinities based on both pure geometric data as well as color-appended RGB-D data. For simplicity, we mask out the RGB features from the input point cloud data to model the geometric affinity and use the original point cloud data to model the RGB-enriched data affinity. We also enhance the affinity by normalizing the weights of the point-wise classifier. This normalization process assists in mitigating the network's tendency to misclassify data points into the dominant head categories. Consequently, it enhances the point-wise features of the tail categories, allowing for improved affinity inference. The obtained multi-modality affinity matrices are used to refine the WSSS-related objective functions. Extensive experiments are conducted on ScanNet <cit.> and S3DIS <cit.> benchmarks, and the results demonstrate that our method significantly outperforms the state-of-the-art scene-level weakly supervised point cloud semantic segmentation methods.§ RELATED WORK §.§ Weakly-supervised Point Cloud Segmentation with Sparse Labels. The main idea of learning from sparse annotations focuses on propagating information from labeled points to the unlabeled points. For example, Liu et al. <cit.> generates a super-voxel graph between labeled points and unlabeled to guide the iterative training. Yang et al.<cit.> design transformer model derived by multiple instance learning (MIL), where the two clouds with shared category yield a positive bag while with different classes produce a negative bag.Recently, PSD <cit.> proposes to learn 3D point affinity based on sparsely labeled points. 
Though the affinity can be precisely learned based on the supervised learning loss with sparsely labeled points, this method is not directly applicable to our setting, where only the scene-level category supervision is provided without any ground-truth point-level supervision. Therefore, these methods with sparse labels mainly focus on relationship between labeled and unlabeled points, which is not applicable in our setting when point-level supervision is not available. §.§ Weakly-supervised Point Cloud Segmentation with Scene-level Labels. Compared to weakly supervised point cloud segmentation based on sparse point-level annotations, the methods by scene-level annotations are less exploited. The state-of-the-art methods generally generate pseudo labels in the first step, and then refine the segmentation results via self-training.Two types of strategies are commonly used for generating pseudo labels by previous methods: class activation maps (CAM)-based and multi-instance learning (MIL)-based. Wei et al.<cit.> propose a Multi-Path Region Mining model involving spatial, channel, and point-wise paths to generate CAM of sub-cloud as pseudo. It is common to design two branches for a network and use consistency loss for self-supervised learning between the original point cloud and the augmented<cit.> or perturbed <cit.> point clouds. Similarly, Ren et al.<cit.> propose to jointly learn semantic segmentation, 3D proposal generation, and 3D object detection in a two-branch framework.However, current methods do not effectively leverage the complementary RGB and geometric information present in point cloud data. Additionally, these methods often overlook the long-tail distribution of different categories, resulting in suboptimal performance.§ METHODOLOGY Given a set of M point clouds with scene-level annotations: D={P_m,y_m}_m=1^M, where P_m∈ℝ^N× (3+K) denotes the mth point cloud and y_m∈{0,1}^C is a C-dimensional binary vector indicating which categories are present in point cloud P_m, we aim to derive a segmentation model, which classifies each point into one of the C categories. Each P_m∈ℝ^N× (3+K) represents a whole 3D scene with N 3D coordinates together with K-dimensional auxiliary features, such RGB, object normals, and height.Overview. As shown in Figure <ref>, our framework consists of three main modules: a feature extraction module and a segmentation module, and a multi-modality affinity inference module to enhance the segmentation results.Specifically, our feature extraction module hierarchically extract multiple scales of point set features by gradually grouping point features in local regions. In the multi-modality affinity inference module, to differentiate between pure geometric data and RGB-enriched data, we employ a masking technique to exclude the RGB features from the input data, resulting in the pure geometric data. Simultaneously, we retain the original point cloud data as the RGB-enriched data. The two types of input data are fed into the shared backbone network to obtain the respective point affinity of the corresponding modality. The segmentation module then utilizes the learned point affinity to refine the MIL (Multiple Instance Learning) objective, as well as the point-level pseudo-labels. These refinements are instrumental in guiding the self-training process of semantic segmentation. 
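A short sketch may help make this overview concrete before the detailed formulation in the following subsections. The PyTorch-style code below is our own illustration and not the authors' implementation: the toy backbone, the function names, and the assumption that RGB occupies channels 3-5 of the 10-dimensional input points are ours, and the multi-scale feature concatenation is omitted for brevity. It builds the RGB-masked geometric input, runs both modalities through a shared backbone, and forms the thresholded cosine affinity A = max(θ, cos) that is later used to refine the segmentation logits.

import torch
import torch.nn.functional as F

def multimodality_affinity(points, backbone, theta=0.3):
    # points: (N, 10) with xyz in channels 0-2 and RGB assumed in channels 3-5
    rgb_masked = points.clone()
    rgb_masked[:, 3:6] = 0.0                      # pure geometric input: RGB zeroed out

    feat_rgb = backbone(points)                   # (N, D) RGB-appended point features
    feat_geo = backbone(rgb_masked)               # (N, D) geometric point features

    fm = torch.cat([feat_rgb, feat_geo], dim=0)   # (2N, D) cross-modality feature stack
    fm = F.normalize(fm, dim=1)                   # unit vectors: dot product = cosine similarity
    affinity = fm @ fm.t()                        # (2N, 2N) pairwise similarities
    return affinity.clamp_min(theta)              # floor low-confidence entries at the threshold

# toy usage with a placeholder backbone that simply projects the raw points
pts = torch.rand(2048, 10)                        # xyz + rgb + normal + height
toy_backbone = torch.nn.Linear(10, 64)
A = multimodality_affinity(pts, toy_backbone)
print(A.shape)                                    # torch.Size([4096, 4096])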
§.§ MIL-based 3D Semantic Segmentation We use a MIL-based weakly-supervised 3D semantic segmentation model as our baseline model, which consists of a feature extraction network and a segmentation head.Feature Extraction Module. In this work, we choose our backbone network based on PointNet++ <cit.> to hierarchically extract multiple scales of point set features by gradually grouping point features in local regions. The four set abstraction (SA) layers down-sample the point cloud from N points to N_1=2048, N_2=1024, N_3=512, and N_4=256 points, respectively by each layer. Two feature propagation (FP) layers then up-sample the points from N_4 to N_2 points. Note that other backbone networks that extract multiple scales of point features can also be selected as the feature extraction network.Segmentation Module. Inspired by <cit.>, we use two segmentation heads to process extracted features from the backbone networks. Each head predicts a segmentation logits matrix over C classes for all points, denoted as U_seg∈ℝ^N_1× C and S_seg∈ℝ^N_1× C, respectively. The first head is to find the semantically discriminative points through a MIL-loss and produce pseudo labels by the points with highly confident predictions, which is denoted as the teacher head. The pseudo labels are generated by following <cit.>. The second head takes the pseudo labels produced by the first head for self-training, which is denoted as the student head. However, we use a slightly different structure for the teacher and student segmentation heads. Specifically, the teacher head contain one FP layer to up-sample the points from N_2=1024 to N_1=2048 and one fully-connected (FC) layer to predict the per-point segmentation logits.The student head contains one FP layer and is followed by a one-layer Transformer Encoder <cit.>. The motivation for adding a Transformer layer to the student head is two-fold. The first is to borrow the self-attention module in Transformer for capturing long-range dependencies to ensure object completeness. The second is to explicitly enforce the teacher head and the student head to learn different features such that the pseudo labels produced by the teacher head could benefit more on the student head, which is inspired by co-training <cit.>.Loss Function. The loss function of our weakly supervised semantic segmentation module is defined as,ℒ_wsss=ℒ_mil+ℒ_self,where ℒ_mil is the MIL loss and ℒ_self is the self-training loss. We define the scene-level MIL loss as,ℒ_mil=-∑_c=1^C (y[c] logσ [c]-(1-y[c]) log(1-σ[c])),where σ[c]=sigmoid(1/N_1∑_i=1^N_1 U_seg[i,c])converts the per-point logits U_seg into a scene-level prediction σ via average pooling and a sigmoid activation. The self-training loss for each scene is formulated byℒ_self=-1/N_1∑_i=1 ^N_1∑_c=1^C Ŷ[i,c] logψ [i,c],where ψ [i,c]=softmax(S_seg[i,c]) denotes the probability of the ith point being predicted as class c, N_1 denotes the number of points in U_seg,and Ŷ[i,c]∈{0,1} is the point-wise pseudo label generated by the teacher head <cit.> by selecting the high confident points within the scene categories.§.§ Long-tail-aware Multi-modality Point Affinity Multi-modality Point Affinity.We argue that geometric information from point cloud and color information encoded in RGB data can characterize the feature affinity from different perspectives by taking advantages of different data modalities. Moreover, to make the most of the multi-scale features extracted by the backbone, we incorporate features from multiple layers of feature extractor. 
We obtain the RGB-appended point affinity by concatenating multiple scales of point features F∈ℝ^N_1× D=concate(F_1, F_2, F_u), where F_1∈ℝ^N_1× d_1, F_2∈ℝ^N_1× d_2, F_u∈ℝ^N_1× d_u. F_u is obtained by teacher head, is a high-level abstract points representation. F_1 and F_2 indicate the multiple scales of features produced by different SA layers from the backbone network. Note that the N_1 points of F_2 is obtained by up-sampling via linear interpolation. We model the geometric affinity by masking out the RGB values from the input point cloud for simplicity. Similarly, we use F∈ℝ^N_1× D=concate(F_1, F_2, F_u) to denote the multi-scale geometric features by aggregating multiple scales of features from RGB-masked point clouds. We define the multi-scale multi-modality features by concatenating the multi-scale geometric features and RGB-appended features as F^M=[F∈ℝ^N_1× D;F∈ℝ^N_1× D] and F^M∈ℝ^2N_1× D. Thus, our final multi-scale multi-modality affinity matrix is defined as,A^MMA[i,j]=max(θ, ⟨ F^M[i,·],F^M[j,·]⟩/F^M[i,·]F^M[j,·]),where i,j=1,2,...,2N_1, and θ is the threshold for filtering out less confident affinities. Since the multi-scale multi-modality affinity matrix A^MMA∈ℝ^2N_1× 2N_1 is defined based on features from multiple modalities, the similarity of point features across-modality is also considered in our affinity matrix.Long-tail Aware Affinity Enhancement.In point cloud-based semantic segmentation, the long-tailed distribution of data among classes happens at both the category level and point level. For example, in indoor scenarios, the category “wall” and “floor” generally not only appear in almost all of the scenes but also contain a larger amount of points in each scene compared to other objects, which are termed head classes in long-tailed distribution.The long-tailed distribution leads to the classifier weights of the head classes are learned to have larger norms, resulting in greater logits for head classes in each sample. This can be attributed to that large classifier weights norms cause the gradient leans towards head classes during back propagation. This biases the network's learned features towards head categories. Since the affinity matrix is calculated based on feature similarity, the head classes data points will contribute more to the affinity values than the tail classes, which contradict to the fact that the within-class point features should have larger affinity values.Therefore, the long-tail issue not only affects the final segmentation results but also is detrimental to affinity inference. Unfortunately, previous research has overlooked this issue, and utilizing affinity affected by the long-tail distribution may lead to error accumulation.In WSSS, the point-level category distribution is not accessible with scene-level annotations. Thus, inspired by Decoupling Representation and Classifier <cit.>, we enhance affinity inference by dealing with the long-tail issue via a simple method through normalizing classifier weight(NCW), which alleviates the long-tail issue without the need of the prior of category distribution. Formally, let W={w_i}∈ℝ^d× C, where w_i∈ℝ^d are the classifier weights corresponding to class i of the teacher and student segmentation head. 
We normalize W to obtain W={w_i} via w_i=w_i/w_i, where · denotes the l_2 norm.§.§ Objective FunctionsWith the learned multi-modality affinity matrix A^MMA, we obtain the refined segmentation logits matrix U_seg^refined and U_seg^refined of the teacher segmentation head by multiplying the original logits matrix by the affinity matrix:[U_seg^refined;U_seg^refined]=A^MMA[U_seg;U_seg],where U_seg∈ℝ^N_1× C denotes the predicted segmentation logits matrix over C classes of the RGB-masked point cloud data P. The refined segmentation logits of a point aggregate information from the points with similar features both within and across modalities, which are considered to be able to improve the pseudo labels for self-training loss in Eq.(<ref>).To achieve message passing between multi-modality affinities and further improve the segmentation performance, we explicitly impose the prediction consistency constraint between the original point cloud data P and the RGB-masked point cloud data P with horizontal transformation, which is inspired from previous WSSS methods that introduce different contrastive or consistency learning strategies by augmenting the original point cloud data <cit.>. The consistency loss is defined as,ℒ_consist=1/N_1∑_i=1 ^N_1∑_c=1^C |U_seg^refined[i,c]-U_seg^refined[i,c]|. Moreover, the refined self-training loss ℒ_self^refined is defined:ℒ_self^refined=-1/N_1∑_i=1 ^N_1∑_c=1^C Ŷ^refined[i,c] logψ [i,c],where Ŷ^refined[i,c]∈{0,1} is the point-wise pseudo label generated by U_seg^refined.The final objective function of our method isℒ = ℒ_mil+ℒ_self^refined+ ℒ_consist. § EXPERIMENTS§.§ Experimental setting Datasets and evaluation metrics. We evaluate the proposed approach MMA on two benchmarks, ScanNet <cit.> and S3DIS <cit.> datasets. ScanNet is a commonly-used indoor 3D point cloud dataset for semantic segmentation. It contains 1513 training scenes (1201 scenes for training, 312 scenes for validation) and 100 test scenes,annotated with 20 classes. S3DIS is also an indoor 3D point cloud dataset, which contains 6 indoor areas and has 13 classes. By following the previous work, we use area 5 as the test data. Mean intersection over union (mIoU) is used as the evaluation metric of the segmentation results. Implementation details. The RGB-appended input point clouds are a set of 10-dimensional vectors, including coordinates (x,y,z), color (R,G,B), surface normal, and height, while the pure geometric input is produced by masking out the RGB values with 0.PointNet++ <cit.> is adopted as the backbone feature extraction module to extract point cloud features. The segmentation module has two segmentation heads: a teacher head for providing pseudo labels with MIL loss and a student head for self-training.The teacher head is a multi-label classification model, which contains one FP layer to upsample the points to 2048 and one fully-connected (FC)layer to predict per-point logit and then through average to get per-class logit.The student head contains one FP layer to upsample points and follows one Transformer Encoder layer to capture the long-range dependencies of points, then an FC layer to predict per-point logit. Multi-scale module use features obtained from the first and second SA layers and U_seg FP layer. The details of the architecture of teacher head and student head can be found in the supplementary material.The model is trained on 3090 GPU with batch size 8 for 300 epochs. We use AdamW optimizer with an initial learning rate of 0.0014 and decay to half at 160 epochs and 180 epochs. 
All hyper-parameters are tuned based on the validation set. §.§ Comparison with State-of-the-artsWe mainly compare our approach to other 3D weakly supervised segmentation methods utilizing scene labels, including MPRM <cit.>, WyPR <cit.>, and MIL-Derived <cit.>. This type of supervision is challenging for large-scale point cloud datasets. MPRM uses various attention modules to mine local and global context information. WyPR joint learning of segmentation and detection to get a better feature representation, and gain high performance. MIL-Derived proposes a transformer model to explore pair-wise cloud-level supervision, where two clouds of the same category yield a positive bag while two of different classes produce a negative bag.Results on ScanNet. Table <ref> reports the mIoU results of the proposed method and the state-of-the-art baseline methods. It can be seen that our proposed method performs better than existing methods (MPRM <cit.>, WyPR <cit.>, MIL-Derived <cit.>)by large margins(+15.8%, +8.1%, +11.5%) on the ScanNet validation set. And our method outperforms WyPR by 6.6% in terms of the test mIoU. In Table <ref>, we report the per-class IoU on ScanNet. Obviously, the proposed MMA achieves the highest mIoU, and significantly improves the performance in “floor”,“chair”, “sofa”, “table”, “shelf”, “desk”, “toilet” and “bathtub” against WyPR. These categories are often co-occurred and easily mis-classified. In addition, our “MMA” as shown in Table <ref> improves the performance of objects with either discriminative geometric or color information, such as “floor”, “chair”, “table”, “curtain”, “shower curtain”, and “bathtub”. The Multi-modality affinity in MMA takes advantage of both geometric and color information and thus improves the performance to a large margin.Results on S3DIS. Table <ref> shows the S3DIS results of the proposed method and the baseline methods. It can be seen that our proposed method achieves much higher mIoU scores, and outperforms MPRM, MIL-Derived and WyPR with gains of 16.0%, 13.4% and 4.0%. Discussion. When compared with the state-of-the-art methods, our method owns different advantages. Firstly, MPRM <cit.> proposes various attention modules to refine the point features by local and global context information. The attention modules are only applied to the output features of the backbone network, which might fail to capture multi-scale attentions. Moreover, the multiple modalities of RGB-D data are not explicitly exploited. By contrast, our MMA affinity takes both multi-scale and multi-modality similarities into account and the obtained MMA affinity matrix is directly applied to the segmentation logits the refine the segmentation results, which directly improve the pseudo labels for self-training. Secondly, WyPR <cit.> achieves better results than MPRM. However, the good results are achieved by jointly training with 3D object detection, which relies on a costly selective search step. Differently, our method with a single segmentation task outperforms WyPR to a large margin. Lastly, the MIL-derived Transformer <cit.> modeling the similarities across scenes to improve the weak labels. However, the cross-scene similarity might be hard to model due to the large cross-scene variations. Moreover, our method can be a complementary to MIL-derived Transformer by exploiting similarities from different perspectives.§.§ Ablation Study and Further AnalysisWe report the results of ablation study to show the effectiveness of each components of our method. 
We also conduct further analysis by qualitative results. The experiments in this section are conducted on ScanNet validation set.Contributions of Components. We report the results of ablation study to demonstrate the contribution of different components of our method. The results are shown in Table <ref> with 8 experiments denoted by “A.#”. The baseline method in “A.1” is our MIL-based segmentation model as described in Section <ref>, which consists of a MIL-loss, a self-training loss, and a cross transformation consistency loss between the original point cloud and the geometrically augmented point cloud. The baseline method achieves 23.8% mIoU. Then we add the multi-modality affinity to the baseline model in “A.3”, which achieves obtains 28.4% with 4.6% performance gain compared to the baseline. To alleviate the imbalance issue that may affect both the affinity matrix and the segmentation results, the classifier normalization is added and the performance achieves 36.2% as shown in “A.4”, which significantly improve the results. “A.8” is our final results by using the proposed MMA-refined model, which further improve “A.4” by using additional multi-scale information and achieves the best results. The results also show that though simply introducing NCW (baseline + NCW in “A.2”) can improve the baseline result (in “A.1”) by 3%, “A.8” (resp., “A.4”) can further improve “A.6” (resp., “A.3”) by nearly 8%, which verifies that NCW not only balances the classification results but also enhances the point affinity. In other words, NCW only works significantly well when jointly working with the proposed MMA module. Even without NCW, we add multi-modality affinity and multi-scale affinity to baseline in “A.6” can achieve 30.1% with 6.3% performance gain compared to the baseline. Adding multi-scale affinity to the baseline in “A.2” can improve baseline result by 1.9%. To validate the effectiveness of the additional light-weight Transformer block in our student segmentation head, we conduct experiments by removing the Transformer block as shown in “A.7”, the performance drops from 37.7% to 33.9%. To sum up, all the components in our method contribute to the final results. Qualitative Results. We qualitatively illustrate more segmentation results in Figure <ref>, where the columns indicate:(a) Point cloud data, (b) Ground-Truth segmentation results,(c) Baseline results, (d) Results of “Baseline+Multi-modality affinity”, (e) Results of our final method. From the results, we can find that our final method achieves the best results. For the baseline method (Figure <ref>(c)), different objects are not clearly distinguished with clear boundary, and the semantic categories for many objects are mis-classified.In our final model with the additional multi-modality affinity module (Figure <ref> (e)), the discriminative information from both geometric modality and color modality can both benefit the final results.Although the original RGB-appended point cloud data are more informative, the RGB data and the geometric data are entangled, and thus the complementary multi-modality information is hard to be fully exploited.For example, in the RGB-appended point cloud data, the rich geometric data with shape and scale information might be overwhelmed by the RGB information that suffers from lighting conditions, shadows, and reflections in many cases. As shown in Figure <ref> #2 and #3, “floor” is wrongly segmented in our single-stream baseline variant due to the serious reflections in the RGB data. 
By contrast, our multi-modality affinity ( Figure <ref> (d)) successfully corrects the results thanks to the enhanced geometric information. In addition, the “floor” is geometrically located with smaller “z” value and smaller height, which is distinguished from the objects placed on it with higher height (such as “table” and “chair”) as shown in most of the scenes such as #1, #2, and #3. For the “door” and “wall” classes in scene #1, we observe that by taking advantages from the discriminative color information, our method can better distinguish “door” from “wall” when compared with the baselines.Analysis of Multi-scale Affinity. In Table <ref>, we further evaluate the effectiveness of multi-modality multi-scale affinity. Specifically, the “F^M_uonly” variant is the multi-modality affinity-only baseline. The “concate(F^M_u,F^M_1)” method concatenates the F^M_u features with F^M_1 (i.e., features from first SA layer). The “concate(F^M_u,F^M_1,F^M_2)” method concatenates the F^M_u features with both the first SA layer features F^M_1 and the second SA layer feature F^M_2. Based on multi-modality, multi-scale can further enhance the model's performance. Therefore, it is necessary for multi-modality and multi-scale to work in conjunction with each other.Analysis of Normalizing Classifier Weights for Affinity. To demonstrate the effectiveness of the Normalizing Classifier Weights (NCW) module on improving affinity, we conducted a comparative analysis between the MMA w/o W affinity, MMA affinity. We split the ScanNet classes into three groups by the number of points in each category in training samples: head classes each contains over 8 million points, medium classes each has between 1.2 million and 8 million points, and tail classes with under 0.3 million points. We demonstrate in supplementary materials the categories included in the head, medium, and tail of ScanNet, respectively.We evaluate mAP and mIoU for each subset, the results are shown in Table <ref>, where Δ represents the relative performance difference between MMA and MMA w/o W.mAP reflects the performance of multi-label classification, which impact the quality of generated pseudo-labels. We observed that MMA w/o W affinity can significantly enhance the performance of medium and tail classes while maintaining the effectiveness of the head set. Notably, there was a 7.2% performance gain in the tail set. Due to the improved classification performance of the medium and tail categories, the points previously wrongly categorized as head class have been correctly classified to the medium and tail categories. Thus, the mIoU for all splits are improved.Visualization of Point Class Relationship Maps Enhanced by Point Affinity. We further visualize the binary map of the class relationship enhanced by point affinity in Figure <ref>. If two points are predicted as the same class, the value is 1, otherwise the value is 0.The columns indicate the affinity matrices produced based on: (a) Ground-truth, (b) Baseline F^M_u only,(c) MMA F^M_u only, (d) MMA (i.e., concate(F^M_u,F^M_1,F^M_2)), (e) MMA w/o W. In Figure <ref>(b), different categories get confused because the affinities between points of the same class are not higher than those between points of different classes. In contrast, Figure <ref>(c) shows much better results than (b). This suggests that our MMA approach enhances the intra-class similarity by fully exploiting multiple data modalities and considering the long-tail issues. 
Figures <ref>(c) and (d) demonstrate that our multi-scale features can further improve the affinity of objects at different scales. Comparing Figures <ref>(d) and (e), the affinities of points from different classes in (d), with the proposed long-tail-aware normalization, are much smaller than those in (e), especially in the highlighted red boxes, which is the key to learning discriminative features.
§ CONCLUSION This paper proposes a novel multi-modality point affinity (MMA) enhanced weakly supervised 3D semantic segmentation method. The proposed MMA considers the point feature similarities by taking advantage of complementary information from different data modalities. The point affinity is also enhanced by normalizing the classifier weights to alleviate the detrimental effects of the long-tailed data distribution on the affinity matrix. Extensive experiments demonstrate the effectiveness of the proposed method on two commonly used indoor scene understanding benchmark datasets.
§ ACKNOWLEDGMENTS This work was supported by the National Key Research and Development Program of China (No. 2021YFB1714300), the National Natural Science Foundation of China (No. 62006012, No. 62132001, No. 62002012), in part by the Hong Kong Research Grants Council General Research Fund (17203023), in part by The Hong Kong Jockey Club Charities Trust under Grant 2022-0174, in part by the Startup Funding and the Seed Funding for Basic Research for New Staff from The University of Hong Kong, and in part by funding from UBTECH Robotics.
http://arxiv.org/abs/2312.16578v2
{ "authors": [ "Xiawei Li", "Qingyuan Xu", "Jing Zhang", "Tianyi Zhang", "Qian Yu", "Lu Sheng", "Dong Xu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227140135", "title": "Multi-modality Affinity Inference for Weakly Supervised 3D Semantic Segmentation" }
^1Theoretical and Computational Physics (TCP) Group, Department of Physics, Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand ^2Theoretical and Computational Science Centre (TaCS), Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand ^3Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom ^4The Institute for Fundamental Study (IF), Naresuan University, Phitsanulok 65000, Thailand ^5Department of Physics, School of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand We present an alternative scheme to provide an anomalous smallness of the Dirac neutrino mass. The multiplicative Lagrangian model of the Higgs field plays an essential role in explaining a huge difference between the mass of the charged leptons and Dirac neutrinos while the ratio of Yukawa coupling between these two groups of particles is naturally of order unity. On the other hand, if the neutrino mass is mixed between the Dirac and Majorana types, the mass of the right-handed neutrinos can be in the range between sub-eV and the grand unification scale without fine-tuning and naturalness problems. Moreover, the little hierarchy between the Yukawa coupling of top-quark and electron is also discussed. The natural smallness of Dirac neutrino mass from the multiplicative Lagrangian Monsit Tanasittikosol^1,2===============================================================================§ INTRODUCTIONThe origin of neutrino mass is one of the most puzzling problems in particle physics.A current observed neutrino mass from the neutrino oscillation <cit.> and the cosmological constraint <cit.> is incredibly small compared to the mass of the standard model (SM) particles <cit.>. The mass ratio between the neutrino mass eigenstate and the charged leptonis aroundm_ν_i/m_l_i∼ 10^-10-10^-6, where i{=1,2,3} refers to the i-th generation of fermion, l_1=e, l_2=μ, l_3=τ are the flavor index of charged lepton, and m_ν_i is the mass of neutrino eigenstate. If the neutrino acquires a mass from the Yukawa interaction as in those cases of charged leptons and quarks,L_j(λ_l)_ijϕ  l_iR+ q_L_j(λ_q)_ijϕ  d_iR+L_j(λ_ν )_l_ijϕ̃ ν_l_iR, the Yukawa coupling (λ_i) of the neutrinos has to be much smaller than that of the charged leptons, where ϕ is Higgs doublet, ϕ̃=iτ_2ϕ^†, L and q_L are left-handed lepton and quark doublet. In this situation, the Dirac naturalness <cit.>, which expects the dimensionless ratio of the parameters to be of order unity, is violated around six to ten orders of magnitude.If we follow a hint from the naturalness argument, the unnatural smallness of the neutrino mass requires an explanation from the physics beyond the SM. The most famous scenario for explaining this unnatural situation is the seesaw mechanism <cit.>. In the type-I of this scheme, the mass term of neutrino can be mixed between Dirac and Majorana types -ℒ^M =m_D (ν ^c_L ν^c _R ) + M_R (ν ^c_R ν _R) + h.c.  ,where the flavor index is ignored, m_D denotes the Dirac mass coming from the Higgs mechanism, and M_R refers to the Majorana mass obtained from the extension of the SM. Here, the observed neutrino state with the diagonalization of the mass matrix can be obtained bytransformationsν_1 = iν_Lcosθ_ν- iν_R^csinθ_ν , ν_2 = ν_L sinθ_ν +ν_R^ccosθ_ν,where θ_ν is mixing angle and tan 2θ_ν = 2m_D/M_R. 
The eigenvalues of the neutrino mass eigenstate are mixed between both types m_1,2=1/2M_R∓1/2√(M_R^2+4m_D^2).For ν_1 to be mostly left-handed with light mass and ν_2 to be mostly right-handed with heavy mass, a small mixing angle(or m_D ≪ M_R) is required. Consequently, ν_1 with mass m_1 is interpreted as an active left-handed neutrino and ν_2 with mass m_2 is interpreted as a heavy right-handed neutrino.Hence, if M_R is expected in the early TeV scale up to the grand unification scale (GUT), then m_1 could naturally be small in the eV scale, m_1≃ m_D^2/M_R∼𝒪(1) eV while m_D is restricted around a mass of the SM particle.However,without the naturalness and fine-tuning problem, M_R is still unconstrained, and the possibility of M_R<TeV is not completely ruled out from various experimental observations, such as a neutrino oscillation anomaly <cit.>, higher intensity collider <cit.>. In addition, the heavy right-handed neutrino with mass in this range could theoretically be the candidate for explaining dark matter <cit.> and baryogenesis <cit.>. Therefore, if the right-handed neutrino mass is experimentally concluded to be below a few GeV, these various unexplained phenomena can be solved while the type-I seesaw mechanism is not an appropriate resolution of the natural small neutrino mass.Moreover, the types of the neutrino (Dirac or Majorana) are still indistinguishable based on the current observation data in the neutrinoless double beta decay (0νββ) experiment <cit.>. Hence, the possibility of the Dirac neutrino is not ruled out yet. In the future experimental search with a higher sensitivity of 0νββ <cit.>, if the type of neutrino is identified as the Dirac particle, the seesaw mechanism is no longer applicable as an explanation of an anomalously small neutrino mass. The naturalness problem of the Dirac neutrino mass still remains elusive and an entirely new perspective is still desirable. In this paper, we propose a mechanism to provide an unnatural smallness of the Dirac neutrino mass from tree-level calculation. In our framework, the multiplicative form of the Higgs Lagrangian <cit.> will play an essential role to provide the small neutrino mass. Briefly, this model is constructed from the inverse problem of the calculus of variations <cit.> to explain the hierarchy problem <cit.> in the radiative correction to the Higgs massM^2_h,obs=M^2_h+ Δμ^2, where Δμ^2=cΛ^2_UV, c∼∑ m_SM^2/v^2_SM is sum of the SM particle mass. In this model, the Higgs vacuum expectation value (VEV) could be arbitrarily large without conflicting with the observable three-legged vertices of SM predictions.One finds that the UV cutoff of Higgs radiative correction, which is an unknown value in the renormalizable SM Lagrangian, can be naturally obtained in TeV without the sensitivity to the large Higgs VEV. The details of the calculation will be explored in Sec-II. By employing the multiplicative Lagrangian, we will show that the neutrino can obtain a small amount of mass while the ratio of the Yukawa coupling between neutrino and the i-th generation particle is of order unityλ_ν_i/λ_l_i∼𝒪(1),where the value of O(1) is around 1-10^3 or 10^-3-1. On the other hand, if neutrinos are Majorana, including the right-handed singlet component, we also show that the mass of the active left-handed neutrino can be very small, while the mass of the heavy right-handed neutrino can possibly be varied between sub-eV scale to the GUT scale,1eV≲ M_R≲ 10^15GeV,without violating the naturalness principle. 
This result might be a portal allowing us to study various unexplainable phenomena.This paper is organized as follows. In Sec-II, the multiplicative form of the Lagrangian will be introduced together with the explanation of the Higgs TeV cutoff. In Sec-III, we apply the multiplicative Lagrangian to the neutrino mass problem and show that a mass of the Dirac or Majorana neutrinos could possibly be small without the introduction of the seesaw mechanism and violation of naturalness. In Sec-IV, we discuss the constraints of Dirac neutrino and the little violation of naturalness in the top-quark and electron Yukawa coupling. Finally, in Sec-V, we provide a concluding summary. § THE MULTIPLICATIVE FORM OF THE HIGGS LAGRANGIAN In this paper, we consider the SM Lagrangian added with the multiplicative form of the Higgs Lagrangian (ℒ_Λ) from the work <cit.> as ℒ=ℒ_SM-ℒ_Λ,which, originally, is proposed to explain why the UV-cutoff of the Higgs in the broken phase should be restricted in the TeV scale without introducing newforce fields as the composite Higgs model <cit.>, and the little Higgs <cit.>. Here, the term ℒ_Λ is derived from the combination of the three fundamental ideas in the Lagrangian mechanics: “non-uniqueness property of Lagrangian” <cit.>, “the inverse problem of the calculus of variations” <cit.>, and “the effective field perspective” <cit.>. ∙ Non-uniqueness: The form of Lagrangian is trivial, hence, there can be many non-standard Lagrangian that can provide the same EOM as the standard Lagrangian in T-V form. However, this property in the field theory seems to be not applicable. According to the Heaunux work, the scalar field Lagrangian in 4-d Lorentzian spacetime<cit.> is unique, therefore, there is a single form of Lagrangian that can provide the Klein-Gordon equation. ∙ Modified non-uniqueness: The non-uniqueness property of Lagrangian is re-interpreted to construct the Lagrangian of the complex scalar field model. In the non-standard Lagrangian approach such as Dirac-Born-Infeld <cit.>, k-essence<cit.>, every modified Lagrangian of the scalar field must be reducible to the Klein-Gordon Lagrangian in the appropriate limit to describe the relativistic motion of scalar field. From this perspective, the low energy theory of the scalar field Lagrangian is not unique, thus, the Lagrangian of the scalar field is writable in terms of the linear combination between all possible forms of Lagrangian ℒ_ϕ=ℒ_Standard+∑_iα_iℒ_Non-standardgiving rise to the different quantum field theory<cit.>. ∙ Inverse problem of the calculus of variations: According to the modified non-uniqueness property, the form of the scalar field Lagrangian can be set into arbitrary function, and the symbolic expression leading to the Klein-Gordon equation can be given by inverse calculating Euler-Lagrangian equation.∙ Effective field theory: Non-renormalized theory with the cutoff Λ must be reducible into the renormalized one under the energy scale far below Λ. From the modified non-uniqueness, the multiplicative form of the Higgs Lagrangian in Eq. (<ref>) is expressed in the following formℒ_Λ=F(∂_μϕ^†∂^μϕ)f(ϕ^†ϕ),which is motivated by several areas in mathematical physics <cit.>. 
Here, the F and f are unknown functions, which can be solved from the inverse problem of the calculus of variations yielding the standard Klein-Gordon equation at some certain limit.As a consequence, if ℒ_Λ is another form that can explain the relativistic motion of the scalar field,therefore, it is possible to write the Lagrangian of the scalar field as a linear combination of the standard and the multiplicative Lagrangian given in Eq. (<ref>).The details of the multiplicative Lagrangian construction can be found in <cit.>.Here, the expression of ℒ_Λ withSU(2)× U(1) covariant derivative is given byℒ_Λ=(Λ^4+D_μϕ^† D^μϕ-V)e^-V/Λ^4,where Λ is a cutoff parameter of the multiplicative Lagrangian, ϕ is a Higgs doublet, ϕ^†=(ϕ^+ ϕ_0)^†, and V is the flipped sign Higgs potential. We emphasize that the term V inside the multiplicative Lagrangian is artificially added while preserving the symmetry to improve the stability of the model, see the last discussion paragraph in <cit.>.In this model, the sign of parameters in the Higgs potential V of the SM Lagrangian and the multiplicative Lagrangian is flipped into the opposite asV=μ^2ϕ^†ϕ-λ (ϕ^†ϕ)^2.Now, the tree-level Higgs potential, so-called effective potential, is modified from the multiplicative Lagrangian asV_eff= μ^2ϕ^†ϕ-λ (ϕ^†ϕ)^2+(Λ^4-μ^2ϕ^†ϕ +λ (ϕ^†ϕ)^2)e^-μ^2ϕ^†ϕ-λ (ϕ^†ϕ)^2/Λ^4.We note that the effective potential in this sense means the combination of the linear and multiplicative Higgs potential rather than 1-loop potential. With this effective potential (<ref>), the vacuum expectation value (VEV) of Higgs can be obtained from the minimum point of the potential (<ref>) or the vacuum solution of the equation of motion as2⟨ϕ^†ϕ⟩=v^2=μ^2/λ,which still has the same form as the SM and does not depend on energy scale Λ. After spontaneous symmetry breaking (SSB) of the Higgs boson, the electroweak phase transition happens. We then expand the Higgs field around the VEV in unitary gauge, which the Goldstone mode is ignored, ϕ(x)=1/√(2)[0; v+χ(x) ],where χ is a quantum fluctuation of the Higgs field. We find that the kinetic energy of χ is not canonicalA(χ)∂_μχ∂^μχ/2where A(h)= 1-e^-μ ^2 (v+χ )^2 (v^2-2 v χ -χ ^2)/4 Λ ^4v^2, ≃ (1-e^-μ ^2 v^2/4 Λ ^4)-μ ^2 e^-μ ^2 v^2/4 Λ ^4/Λ ^4χ ^2 -μ ^2 e^-μ ^2 v^2/4 Λ ^4/Λ ^4 vχ ^3-μ ^2 e^-μ ^2 v^2/4 Λ ^4(Λ ^4+2 μ ^2 v^2)/4 Λ ^8 v^2χ ^4 +O(χ ^5).To reorganize Eq. (<ref>) into a canonical normalized form, one employs the field redefinitionχ=h/√(1-e^-μ ^2 v^2/4 Λ ^4)-h^3 μ ^2/6 Λ ^4 (1-e^-μ ^2 v^2/4 Λ^4)^5/2+O(h^4).The Lagrangian Eq. (<ref>) can be rewritten asℒ=ℒ_Higgs+ℒ_Lepton+ℒ_Yukawa+..whereℒ_Higgs= 1/2∂_μ h∂^μ h-M_h^2/2h^2-λ_3/3!h^3-λ_4/4!h^4-4π/Λ_5,Hh^5-(4π)^2/Λ_6,H^2h^6+O(h^7)ℒ_Lepton= l̅^i_Lγ^μ i∂_μ l_L^i+l̅^i_R iγ^μ∂_μ l_R^i+ν̅^l_i_Lγ^μ i∂_μν_L^l_i-m_l_i(l̅^i_Ll^i_R+l̅^i_Rl^i_L)ℒ_Gauge= -1/4F_μνF^μν-1/4Z_μνZ^μν-1/4M_Z^2Z_μ Z^μ-1/2W^+_μνW_-^μν-M_W^2W^μ_+W_μ^–1/4G_μνG^μν ℒ_Yukawa= -y_l_ih (l̅^i_Ll^i_R+l̅^i_Rl^i_L)+𝒪(h^2)ℒ_llV= -g_llγ A_μ (l̅^i_L γ^μ l^i_L +l̅^i_Rγ^μ l^i_R)+1/2g θ_W Z_μν̅_L^l_iγ^μν_L^l_i+1/√(2)g(W^+_μ+W^-_μ)ν̅_L^l_iγ^μ l_iℒ_hVV=g_hWWh W^μ_+W_μ^-+g_hZZhZ^2The higher dimension operators of Higgs self-coupling are organized in the form of the naive dimensional analysis (NDA) power counting formula <cit.>, which specifies the UV-cutoff energy of particles circulating in the loop. 
The gauge boson and the charged lepton masses can be written in terms of the Higgs VEV and the coupling constant asM_W^2=1/4 g^2 v^2 (1-e^-μ ^2 v^2/4 Λ ^4),M_Z^2=1/4 g^2 v^2^2(θw) (1-e^-μ ^2 v^2/4 Λ ^4),m_l_i=λ_l_iv/√(2), M_h^2=2 μ ^2+μ ^4 v^2-4 Λ ^4 μ ^2/2 Λ ^4 (e^μ ^2 v^2/4 Λ ^4-1),while the photon mass vanishes, m_γ=0. From Eq. (<ref>)-(<ref>), the tree level custodial symmetry, defined from the parameter ρ_tree=M_W^2/M_Z^2cos^2θ, obviously ensure the SM result ρ_SM,tree=1 <cit.>.In addition, the coupling constants of the three-legged interaction terms areλ_3=3!8 Λ ^4 μ ^2-4 Λ ^4 μ ^2 e^μ ^2 v^2/4 Λ ^4-μ ^4 v^2/4 Λ ^4 v √(1-e^-μ ^2v^2/4 Λ ^4)(e^μ ^2 v^2/4 Λ ^4-1)g_llγ=g sinθ_W, g_lν W=g/√(2), g_llZ=g/√(2)g_hWW=g^2 v √(1-e^-μ ^2 v^2/4 Λ ^4)/2 , g_hZZ=g^2 v √(1-e^-μ ^2 v^2/4 Λ ^4)/4cos^2θ_Wy_l_i=λ _l/√(2-2 e^-μ ^2 v^2/4 Λ ^4)Then, we fit the mass of the W-boson to the Fermi coupling constant (G_F=1.166× 10^-5GeV^-2<cit.>) by integrating out the W-boson in the muon decay scattering amplitude.Since the lepton weak interaction in Eq. (<ref>) is not modified from the SM, we still obtain the standard relationG_F/√(2)=g^2/8M_W^2.Substituting Eq. (<ref>) into (<ref>) and parametrizing G_F in terms of the VEV in the SM (v_SM=1/√(√(2)G_F)=246 GeV), we find that the value of Higgs VEV, v, is unidentified v_SM^2=v^2 (1-e^-μ ^2 v^2/4 Λ ^4),because the parameter Λ is still unknown. We note that in this work, v_SM is the parametrization of G_F instead of the Higgs VEV.Therefore, the value of v could be arbitrarily large depending on the free parameter Λ.By fitting μ with the Higgs mass and solving Λ in terms of v, we obtainμ ^2=M_h^2 v_SM^2/-2 (v^2-v_SM^2) log(v^2/v^2-v_SM^2)-4 v_SM^2+2 v^2Λ^4=v^2 M_h^2 v_SM^2/8 log(v^2/v^2-v_SM^2) (v^2-(v^2-v_SM^2) log(v^2/v^2-v_SM^2)-2 v_SM^2).Substituting Eq. (<ref>)-(<ref>) into (<ref>), the coupling constants of the three-point interactions in Eq. (<ref>) can be reduced into the SM parameter λ_3=3!M_h^2/v_SM,  y_l_i=m_l_i/v_SM,g_hWW=g^2v_SM^2/2, g_hZZ=g^2v_SM^2/4cos^2θ_Wg_llγ=g sinθ_W, g_lν W=g/√(2), g_llZ=g/√(2)It suggests that this tree-level prediction is insensitive to the large value of the Higgs VEV. According to these coupling constants, the prediction of the Higgs decay rate into the vector boson and the fermion from this model still satisfies the experimental observation in the particle collider <cit.>. The deviations from the SM appears in the quartic interaction, see the first paragraph of the discussion in <cit.>, and will be more explored in Sec-<ref>. Substituting the parameter Λ from Eq. (<ref>) into the Λ_6,H, the UV-cutoff in the Higgs section in the large v limit is given byΛ_6,H=24π/√(113)(v_SM/M_h)v_SM+𝒪(v^-1)∼𝒪(1)TeV.where M_h=125GeV <cit.>. Then, the value of the Higgs cutoff in broken phase depends only on the Fermi constant and the Higgs mass so it is insensitive to the large scale of the Higgs VEV.As a consequence, the size of the radiative corrective in Eq. (<ref>) can be small without fine-tuning.The cutoff of the Higgs before the electroweak phase transition,λμ^2 (ϕ^†ϕ)^3/Λ^4≃3 M_h^2 v_SM^4/4 v^8 (ϕ^†ϕ)^3,which can be in an arbitrarily high energy scale proportional to the unknown value of v, can be tuned down naturally to the TeV scale,(ϕ^†ϕ)^3/Λ_High^2→h^6/Λ_TeV^2,after SSB of SU(2)× U(1). Hence, this framework might be an explanation for why the Higgs cutoff in the broken phase is very small comparing to the UV-physics above TeV. 
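As a quick numerical cross-check of the estimates above (ours, not part of the derivation), the short Python snippet below evaluates v_SM = 1/√(√2 G_F) and the broken-phase cutoff Λ_6,H = (24π/√113)(v_SM/M_h) v_SM, using only the inputs already quoted in the text (G_F = 1.166×10^-5 GeV^-2, M_h = 125 GeV):

```python
import math

G_F = 1.166e-5   # Fermi constant [GeV^-2], as quoted in the text
M_h = 125.0      # Higgs mass [GeV]

# v_SM is the parametrization of G_F used in the text: v_SM = (sqrt(2) G_F)^(-1/2)
v_SM = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)
print(f"v_SM       = {v_SM:.1f} GeV")          # ~246 GeV

# Broken-phase Higgs cutoff in the large-v limit
Lambda_6H = (24.0 * math.pi / math.sqrt(113.0)) * (v_SM / M_h) * v_SM
print(f"Lambda_6,H = {Lambda_6H / 1e3:.1f} TeV")  # a few TeV
```

Running this reproduces v_SM ≈ 246 GeV and a broken-phase cutoff of a few TeV, consistent with the 𝒪(1) TeV estimate quoted above.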
Then, we extend this model to explain the origin of neutrino mass.§ MASS OF DIRAC NEUTRINO AND MODIFIED SEESAW SCALEIn this section, we demonstrate how Dirac neutrinos can acquire unnatural small mass compared to the charged leptons, while the Yukawa coupling remains the same order of magnitude. Furthermore, we will show that if the right-handed Majorana mass M_R is encoded into the model, the seesaw mechanism can be achieved with the natural small M_R<TeV. §.§ MotivationThe idea originates from studying the addition of anarbitrary operator  inside and outside the multiplicative Lagrangian (<ref>), see the discussion in <cit.>, as followsℒ=ℒ_SM-Â-(Λ^4+D_μϕ^† D^μϕ-V-Â)e^-V/Λ^4.After the electroweak phase transition occurs, the A term in the Lagrangian (<ref>) can be reorganized as ℒ_A=-(1-e^-μ^2 v^2/Λ^4)Â.Substituting Eq. (<ref>)-(<ref>) into (<ref>), we haveℒ_A =-v^2_SM/v^2Â.Here, the coefficient in front of the operator  is suppressed by an arbitrary large Higgs VEV v, see Eq. (<ref>). If we assume that, before the electroweak phase transition, the Higgs field has the cutoff energy in the Planck scale together with Eq. (<ref>), we can set 3 M_h^2 v_SM^4/4 v^8 (ϕ^†ϕ)^3= (ϕ^†ϕ)^3/M_p^2.The Higgs VEV is then given byv^2=√(3)√(M_h M_p)/√(2)v_SM≃ (2.00 × 10^6)^2 GeV^2,where M_h=125 GeV, M_p=2.44× 10^18 GeV, and v_SM=246 GeV. Hence, the dimensionless coefficient in front of the operator Âv^2_SM/v^2≃ 1.51× 10^-8, obtains a naturally small value around the order eight. We notice that the difference between the mass scale of the lepton and neutrino is coincidentally around eight order of magnitude associating with the ratio v^2_SM/v^2.Therefore, the Yukawa interaction term of the neutrino can be inserted into the operator Â, giving a small neutrino mass while the Yukawa coupling constant of neutrino is in the same order of magnitude as lepton's.§.§ Dirac neutrino Here, if the multiplicative Lagrangian is a part of the SM, we can hypothesize that the Yukawa couplings of charged leptons, which are the electron, muon, and tau are written only outside of the multiplicative Lagrangian.On the other hand, the Yukawa couplings of Dirac neutrino with three flavors ν_l_i (for l_1=e, l_2=μ, l_3=τ) are written inside and outside of the multiplicative Lagrangian as followsℒ⊃- L_j(λ_l)_ijϕ  l_iR-L_j(λ_ν )_l_ijϕ̃ ν_l_iR+h.c.-(Λ^4+D_μϕ^*D^μϕ-V-L_j(λ_ν )_l_ijϕ̃ ν_l_iR+h.c.)e^-V/Λ^4whereL_i=[ ν_l_iL; l_iL ], ϕ=1/√(2)[0; v+χ(h) ],   ϕ̃=1/√(2)[ v+χ(h);0 ].Then, the neutrino field in the interaction basis ν_l_i can be reorganized into the neutrino mass eigenstate ν_j by the unitary transformationν_l_i→ U^l_ijν_jwhere U^l_ij is Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix <cit.>. By substituting Eq. (<ref>)-(<ref>) together with Eq. (<ref>) and Eq. (<ref>)-(<ref>) into Eq. (<ref>), the mass terms and Yukawa interactions are given by- ℒ_mass= m_l_il_iLl_iR+ m_ν_iν_iLν_iR+h.c., - ℒ_Y=y_l_i h l_iLl_iR+ y_ν_ihν_iLν_iR+h.c,where m_l_i=λ_l_i v/√(2),m_ν_i=λ_ν_iv_SM^2/√(2)v,and y_l_i=m_l_i/v_SM, y_ν_i=m_ν_i/v_SM.Here, the Yukawa coupling λ_l in Eq. (<ref>) is different from the one in SM since v∼ 10^6 GeV.Consequently, the Yukawa couplings of the charged leptons(and quarks) are very small around 10^-10 but this is not a problem for naturalness since the dimensionless ratio of λ_l between the particle generations is of order unityλ_e/λ_μ∼λ_μ/λ_τ∼λ_e/λ_τ∼𝒪(1).In addition, the coupling constant of the Yukawa interaction between the Higgs field and the charged lepton, y_l in Eq. 
(<ref>), which specifies the rate of Higgs decay, has no deviation from the SMy_l_i/y_l_i,SM=1.The tree-level prediction of the decay rate of the Higgs boson into the charged leptons, h→ ll, can match the bound from the experimental observation at CMS detector with the center of mass energy 7-8 TeV <cit.> and 13 TeV <cit.>. Now, let us consider the ratio between m_ν_i and m_l in Eq. (<ref>)m_ν_i/m_l_j= v_SM^2/v^2 (λ_ν_i/λ _l_j) ≃10^-8(λ_ν_i/λ _l_j).We see that the mass of the neutrino mass eigenstate can be smaller than the mass of the charged lepton in three generations around eight order of magnitude. If λ_ν_i/λ_l_j is not too much different around 1-10^3 or 10^-3-1, this our proposed model can be explored 10^-10≲ m_ν_i/m_l_j≲ 10^-6, which satisfies the mass ratio in Eq. (<ref>). To verify the above argument, the predicted mass of neutrino should not conflict with the observation. From Eq. (<ref>), we parametrize neutrino mass in terms of the charged lepton mass asm_ν_i= v_SM^2/v^2 (λ_ν_i/λ _l_i)m_l_i,and we then fit λ_ν_i/λ_l_i with the simple observed neutrino parameters. According to the Direct neutrino-mass measurement of the neutrino mass sum ∑_i m_i=m_1+m_2+m_3 from the KATRIN experiment <cit.>and the neutrino oscillation constraint ofthe mass square difference, Δ m_ij^2=m_ν_i^2-m_ν_j^2, in the normal hierarchy (NH) mass m_ν_1< m_ν_2<m_ν_3, from the particle data group <cit.>Δ m_21^2 ≃ 7.53× 10^-5 eV^2, Δ m_31^2 ≃Δ m_32^2≃ 2.437× 10^-3 eV^2, ∑_i m_i ≃ 0.06-0.26 eV,the theoretical predictions for Δ m_ij^2 and ∑_i m_ican be given by Δ m_ij^2= v_SM^4 /v^4(λ _ν_i^2 m_l_i^2/λ_l_i^2-λ _ν_j^2 m_l_j^2/λ_l_j^2), = 2v_SM^2/√(3) M_h M_p(λ _ν_i^2 m_l_i^2/λ_l_i^2-λ _ν_j^2 m_l_j^2/λ_l_j^2), ≃  2.29× 10^-16(λ _ν_i^2 m_l_i^2/λ_l_i^2-λ _ν_j^2 m_l_j^2/λ_l_j^2), ∑_i m_i= v_SM^2/v^2 (λ_ν_1 m_l_1/λ _l_1+λ_ν_2 m_l_2/λ _l_2+λ_ν_1 m_l_3/λ _l_3), = √(2) v_SM/√(3)√(M_hM_p)(λ_ν_1 m_l_1/λ _l_1+λ_ν_2 m_l_2/λ _l_2+λ_ν_1 m_l_3/λ _l_3), ≃ 1.51× 10^-8(λ_ν_1 m_e/λ _e+λ_ν_2 m_μ/λ _μ+λ_ν_1 m_τ/λ _τ). By substituting Eq. (<ref>) into Eq. (<ref>)-(<ref>) and setting m_e=0.511 MeV, m_μ=105 MeV, m_τ=1776 MeV, the ratios of Yukawa coupling between neutrino and charged leptons with ∑_i m_i≃ 0.26 eV are given byλ_ν_1/λ_e≃ 11,  λ_ν_2/λ_μ≃ 0.052, λ_ν_3/λ_τ≃ 0.0036.While, with ∑_i m_i≃ 0.06 eV, the ratios of Yukawa coupling are given byλ_ν_1/λ_e≃0.1,  λ_ν_2/λ_μ≃ 0.006, λ_ν_3/λ_τ≃ 0.002. With the result from Eq. (<ref>)-(<ref>), the other Yukawa coupling ratios are directly evaluated by λ_ν_i/λ_j= λ_ν_i/λ_l_iλ_l_i/λ_l_j.The value of λ_ν_i/λ_l_i varying in terms of the neutrino mass sum is shown in Fig. (<ref>).From Fig. <ref>, we find that, the ratio λ_ν_1/λ_τ and λ_ν_3/λ_μ start to slightly violate the Dirac naturalness when ∑ m_ν_i≲ 0.11 eV. The Dirac naturalness of λ_ν_1/λ_τ is entirely violated by one order of magnitude, when ∑ m_ν_i≃ 0.06 eV, see the small blue dashing line. To preserve the Dirac naturalness, it theoretically implies that the sum of the neutrino mass must be above 0.06 eV, ∑ m_ν_i>0.06. Astonishingly, the lower bound of ∑ m_ν_i from Dirac naturalness, that requires the dimensionless ratio of Yukawa coupling is of 𝒪(1), and the lower bound of ∑ m_ν_i from the cosmic constraint, that requires positive neutrino mass m_ν_i>0,both are coincidentally at same value. We have to emphasize that this coincidence will not happen if the Higgs VEV v≠ 2× 10^6GeV. In fact, if we ignore the Planck scale cutoff of Higgs, v is an arbitrary value that is larger than v_SM. 
For example, if v is chosen to be smaller by one order of magnitude around 3× 10^5GeV, the lower bound of neutrino mass sum from naturalness is around 0.3 eV, which leads to the incompatibility with the upper limit of ∑_i m_ν_i from the Direct neutrino mass<cit.> and the cosmological constraint <cit.>. On the other hand, too large v also leads to the large ratio of λ_ν_i/λ_l_i.Therefore, in this model, the prediction of ∑_i m_i in lower limit might be noteworthy supporting that v=2× 10^6 GeV and the Higgs before broken phase might have cutoff in the Planck scale. As a consequence, the mass scale of the three generations of the active neutrino in Eq. (<ref>)is naturally obtained in a sub-eV scale agreeing with the lower bounded constraint,10^-3eV≲ m_ν_i≲ 10^-1eV.The Yukawa couplings ratio of the neutral and charged particles are naturally of order unity 10^-3≲λ_ν_i/λ_l_j≲1,if neutrino Yukawa interaction is written inside and outside of the multiplicative Lagrangian (<ref>). Therefore, there is no violation of the Dirac naturalness and the induced neutrino mass from the loop level <cit.> is not required. Although the hierarchy in the mass scale of the charged and neutral lepton is explained through applying neutrino Yukawa interaction inside and outside multiplicative Lagrangian, the Yukawa couplings of the lepton and neutrino cannot be predicted from this model. One may rely on the symmetry extension of the model such as SU(5) or SO(10) <cit.> to explain the little hierarchy of the lepton mass generation and the incomprehensible result of an empirical Koide formula <cit.>.Moreover, in the future0νββ experiment, if the neutrino is concluded as the Dirac particle, this model predicts the mass of the right-handed neutrino in the same value as the active left-handed component. Then, in the cosmological context, the problem of dark matter requiring a heavy sterile neutrino cannot be explained by this extension of neutrino physics.§.§ Majorana neutrino and modified type-I seesaw mechanism In this section, we consider the contribution from right-handed Majorana mass M_R, which does not violate the electroweak symmetry. We will show that the existence of multiplicative Lagrangian does not lead to any conflict with the type-I seesaw mechanism. On the other hand, the multiplicative Lagrangian can possibly extend the range of the right-handed Majorana neutrino to below the TeV scale while the naturalness principle is still applicable. From the previous section, we have proposed two possible ways to introduce the Yukawa interaction terms into the Lagrangian. One is to include the terms outside of multiplicative Lagrangian and the other is to add the terms both inside and outside of multiplicative Lagrangian.If the seesaw mechanism is implemented, our Lagrangian is required to contain both Dirac and Majorana Yukawa interaction terms. The Dirac mass term comes from the Higgs mechanism while the Majorana mass comes from new physics beyond SM with the mass terms expressed as ℒ⊃ -L_j(λ_ν )_l_i,jϕ̃ ν_l_iR-M_Rν^c_Rν_R+h.c.-(Λ^4+D_μϕ^*D^μϕ-V-a L_j(λ_ν )_l_i,jϕ̃ ν_l_iR+h.c.)e^-V/Λ^4respectively, where a=0 or 1. 
We will consider two distinct schemes for introducing m_D either only outside (a=0) or inside-outside (a=1) the multiplicative Lagrangian, leading to two different results for m_D as presented in Table <ref>.The Yukawa interaction terms appearing inside-outside the multiplicative Lagrangian are suppressed with the dimensionless factor of v_SM^2/v^2∼ 10^-8 while the terms appearing only outside are not suppressed. We note that, in order to simplify the mass scale of neutrinos from the seesaw mechanism, we consider three flavor neutrinos with the diagonal 3× 3 Dirac and Majorana mass matrix. As a consequence, each generation of neutrinos can be diagonalized separately with left-handed and right-handed mass eigenstates. The mass matrix then can be diagonalized giving two mass eigenstates as in Eq. (<ref>).Then, the effect of each scheme on active neutrino and heavy right-handed neutrino mass is analyzed below.∙ In the first scheme (a=0), one can see that the mass m_D is naturally in the same order of magnitude as m_l. However, M_R is an arbitrary value coming from the physics beyond SM.Obviously, there is no effect from the multiplicative Lagrangian on the mass of the neutrinos and the result follows the standard Type-I seesaw mechanism. If the small mixing angle is assumed, the mass eigenstates split into active neutrino m_1 and heavy right-handed neutrino m_2 withm_1≃m_D^2/M_R, m_2≃ M_Rrespectively.Here, the mass m_1 can be lifted to a small value as M_R becomes very large in the Type-I seesaw mechanism. If m_D ≃ m_e, M_R is constrained to be around 2 TeV to provide active neutrino mass in sub-eV scale. On the other hand, for m_D in electroweak scale, M_R is required to be around the GUT scale. In order to achieve active neutrino with sub-eV mass, right-handed Majorana mass needs to be very large,10^3 GeV≲ M_R≲ 10^15GeV.Although the evidence for a heavy right-handed neutrino has not yet been found, the possibility of a heavy right-handed neutrino with a mass around or below GeV is not ruled out. This scheme can still be an explanation but the naturalness between Yukawa coupling constants is violated as λ_ν/λ_l ∼ 10^-6.∙ In the second scheme (a=1), m_D is naturally small as it is suppressed by a factor of v_SM ^2/ v^2.The mixing angle can be written as tan 2θ_ν =(v_SM^2 / v^2 ) (λ_D v √(2) / M_R) ∼ 10^-8  m_l / M_R. This suggests that the small mixing angle condition can be realized in this scheme even for a small value of M_R below the mass of charged leptons. The active neutrino and heavy neutral lepton masses are given bym_1≃λ _D^2 v_SM^4/2 v^2 M_R,  m_2≃ M_R.Now, consider the relation between the mass of lepton and active neutrino mass, m_1 then can be expressed as m_1=(v^4_SM/v^4)(λ_D/λ_l)^2m_l^2/M_R,with m_l = λ_l v / √(2) from Eq.(<ref>).Then, we assume that there are three generations of active neutrinos m_1→ m_1^i(i=1,2,3) and three types of right-handed heavy right-handed neutrinos M_R→ M_R^i(i=1,2,3).Here, if λ_D / λ_l is of order unity, we can obtain the possible mass range of right-handed Majorana mass satisfying Eq.(<ref>) as1eV≲ M_R^1≲ 10keV 10eV≲ M_R^2≲ 100MeV10keV≲ M_R^3≲ 10GeV where 1<λ_D / λ_l<10^3, and we substitutedm_l=m_e, m_l=m_μ, and m_l=m_τ into Eq. (<ref>) to obtain Eq. (<ref>)-(<ref>), respectively. Interestingly, the right-handed neutrino mass can naturally be smaller than the electroweak scale to lift the active neutrino mass up to sub-eV scale without violating Dirac naturalness. 
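As an illustrative numerical check (ours, not part of the derivation above), the Python snippet below evaluates m_1 = (v_SM^4/v^4)(λ_D/λ_l)^2 m_l^2/M_R for the third generation (m_l = m_τ = 1.776 GeV), taking v ≃ 2.00×10^6 GeV as obtained earlier and pairing, as our own illustrative choice, the two ends of the range 1 ≲ λ_D/λ_τ ≲ 10^3 with the corresponding ends of the quoted M_R band:

```python
v_SM  = 246.0    # GeV
v     = 2.00e6   # GeV, Higgs VEV fixed earlier by the Planck-scale cutoff matching
m_tau = 1.776    # GeV

def m_active(lam_ratio, M_R):
    """Active-neutrino mass m_1 [GeV] from the modified seesaw relation.
    lam_ratio = lambda_D / lambda_tau; M_R in GeV."""
    return (v_SM / v) ** 4 * lam_ratio ** 2 * m_tau ** 2 / M_R

# lower end of the quoted band: lambda_D/lambda_tau ~ 1,   M_R ~ 10 keV
# upper end of the quoted band: lambda_D/lambda_tau ~ 1e3, M_R ~ 10 GeV
for lam_ratio, M_R in [(1.0, 1.0e-5), (1.0e3, 10.0)]:
    m1_eV = m_active(lam_ratio, M_R) * 1e9   # GeV -> eV
    print(f"lambda_D/lambda_tau = {lam_ratio:g}, M_R = {M_R:g} GeV -> m_1 = {m1_eV:.2f} eV")
```

Both parameter choices give m_1 of order 0.1 eV, i.e., within the sub-eV range required for the active neutrinos.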
Thus, we conclude that multiplicative Lagrangian could provide an explanation for naturally small active neutrinos' mass while the mass of heavy right-handed neutrinos remains smaller than the GUT scale.The masses of the first, second, and third generations of the right-handed neutrino (<ref>)-(<ref>) are in the regime of eV to GeV scale, which allows incorporating with the LSND anomaly oscillation <cit.>, the dark matter problem <cit.>, and the studying of baryon-asymmetry by leptogenesis <cit.> without the hierarchy of Yukawa coupling between the neutrinos and charged leptons.In addition, from the current probe of the heavy Majorana neutrino and Weinberg operator at the LHC through the vector boson fusion process, the upper limit of the heavy Majorana mass is set to a few GeV up to TeV <cit.>.The Eq. (<ref>)-(<ref>) does not exceed the value of the data constraint so it is possible to incorporate the multiplicative Lagrangian into the neutrino physics at high energy collider.§ DISCUSSIONS §.§ Inverted Hierarchy (IH) of neutrino massFirst, we discuss a theoretical constraint on IH of neutrino mass from our framework. In brief, the inverted order of neutrino mass (m_3≪ m_1<m_2) in the mass eigenstate is allowed from the current observation since the sign ofm_32^2 is indistinguishable. For the IH, the observed mass different <cit.> is Δ m_32^2=-2.519× 10^-3eV^2,and the lower limit of neutrino mass sum is required around∑_i m_ν_i≳ 0.1to obtain m_ν_i>0. By replacing Δ m_32^2 in Eq. (<ref>) with (<ref>) and solving λ_ν_i/λ_l_i in Eq. (<ref>)-(<ref>) with ∑ m_ν_i=0.26 eV, the IH of neutrino mass is still possible ifλ_ν_1/λ_e≃ 12, λ_ν_2/λ_μ≃ 0.058, λ_ν_3/λ_τ≃ 0.0029.These ratios are of order unity, which is allowed by naturalness. However,when ∑ m_ν_i is numerically below 0.11 eV, the ratio λ_ν_3/λ_τ∼ 10^-4, see black line in Fig. <ref>Obviously, the Dirac naturalness is violated by one order of magnitude. Therefore, if we respect the naturalness argument, the mass sum of neutrino should not be below 0.11 eV. Intriguingly, the lower limit from the naturalness is again coincident with the lower bound from the observation in Eq. (<ref>). This situation has existed in the case of NH. Unreasonably, it seems that the lower bounded limit of neutrino mass sum is protected by the Dirac naturalness.§.§ Dirac neutrino constraint Second, we discuss the constraint on the Dirac neutrino model. We check the consistency of the “in and out mechanism" with the four neutrino effective interactions,ℒ_eff= G_S ν_Lν_Rν_Lν_R+G_S ν_Rν_Lν_Rν_L+G̃_S ν_Lν_Rν_Rν_L.These effective coupling constants are directly constrained by the current measurement of the effective number of relativistic neutrino species N_eff, which requires the lower limit <cit.>√(1/G_S)>12.4 TeV, √(1/G̃_S)>8.1 TeV. In our proposed model, these effective interactions can be obtained by integrating out of the Higgs particle in the intermediate state from the scattering process νν→νν. We have ℒ_eff= m_ν_i/v_SMM_h^2 (ν̅_Lν_R+ν̅_Rν_L)^2, which gives G_S and G̃_S in the same order of magnitude asG_S=m_ν_i/v_SMM_h^2, G̃_S=2m_ν_i/v_SMM_h^2. Substituting Eq. (<ref>) into (<ref>), the parameter 1/√(G^i_S) for the neutrino in the i-th generation can be possible in the regime 10^3 TeV≲1/√(G^i_S)≲ 10^6TeV, which is consistent with the lower bounded limit in Eq. (<ref>). §.§ Cutoff energy of the light active neutrinoThird, we discuss the tree-level validity of this neutrino model depending on the background value of the Higgs field. 
According to Sec-II, the validity of the model can be provided by the inverse coefficient of the higher dimension operator, which shows the UV cutoff energy of the perturbative scattering amplitude. For the light active neutrino (both Dirac and Majorana type), the leading higher dimension operator for the multiplicative Lagrangian model is the dimension five operatorh^2 νν/Λ_ν,UVwhere h is a canonical quantum fluctuation of ϕ, νν is short notation for the bilinear neutrino field operator:Dirac type νν=ν̅_Lν_R+h.c. and Majorana type νν=ν̅_Lν_L^c. The cutoff Λ_ν,UV is determined byΛ_ν,UV≃ Λ ^4 (e^μ ^2 v^2/4 Λ ^4-1)/λ _νμ ^2 v≃v_SM^2/λ _ν(v^2-v_SM^2) log(v^2/v^2-v_SM^2)v≃v_SM^2/m_ν.If m_ν∼ 0.1eV and v_SM=246, the cutoff energy of neutrinos naturally appears in the GUT scale Λ_ν,UV∼Λ_GUT.Therefore, if neutrinos are Dirac-type, the effective interaction for hh→νν scattering becomes important at the GUT scale, which is still very hard to observe at the current collider physics with the center of mass energy 21 TeV and the future collider at 100 TeV.This large neutrino cutoff energy does not give rise to the radiative correction to the Higgs mass in Eq. (<ref>). We find that the neutrino contribution to the Higgs mass is given in the formΔμ^2_ν-loop≃1/16π^2(m_ν^2/v_SM^2)Λ_UV^2.The size of this radiative correction comparing with the observed Higgs mass is still smallΔμ^2_ν-loop/M_h,obs^2≃1/16π^2(m_ν^2/v_SM^2)(Λ_UV^2/M_h,obs^2)∼ 0.1,even Λ_UV≃Λ_ν,UV=10^15 GeV in Eq. (<ref>). The Higgs mass is still free from the fine-tuning problem although the cutoff momentum of neutrino is far beyond the TeV scale. Therefore, the naturalness is still preserved.It appears that, with v≃ 10^6 GeV, the Higgs multiplicative Lagrangian allows us to explore the unsolved physics problem in various energy scales. However, this model is obviously not a complete theory due to the non-renormalizability of the model. Various different values of the cutoff exist in each part of Lagrangian, which are Λ_Higgs∼ 10^3GeV, Λ_ν∼ 10^15 GeV. If this Lagrangian is an extended part of the SM, new physics must be required to improve the validity of our proposed model such as the supersymmetric theory. On the other hand, if the framework of the field theory is an approximated theory at a certain limit, we expect the non-standard Lagrangian of Higgs to emerge from the UV-completion theories such as string theory.§.§ The deviation from the SM and the problem of the real shape of Higgs potentialFourth, we discuss the deviation of the renormalized operators between our proposed model and the SM. In our model, the tree-level coupling constant of the three-legged interactions has an explicit expression as the SM prediction while the deviations exist from the four-legged renormalized operators λ_4/4!h^4, and g_hhVVh^2 V^μ V_μ, where V can be either W or Z boson. We find that the quartic Higgs self-coupling λ_4 and the quartic gauge Higgs coupling g_hhVV are modified from the SM. The ratio of λ_4 and g_hhWW to the SM prediction areλ_4/λ_4,SM≃ 6, g_hhVV/g_hhVV^SM≃ 3.From the theoretical constraint of the partial wave unitary J=0, λ_4/λ_4,SM≲ 65 <cit.> is required to preserve unitarity of scattering amplitude and g_hhVV/g_hhVV^SM≃ 3 breaks the perturbative unitary at the cutoff scale around TeV scale <cit.>. However, both λ_4 and g_hhVV are not yet come to conclusions and these couplings are known to be difficult even at 100 TeV collider with high luminosity <cit.>. 
In fact, both couplings are very sensitive to the physics beyond SM and the shape of potential <cit.>. For example, we replace V in the Eq. (<ref>) with the tadpole induced electroweak Higgs potential <cit.>V→ Y√(ϕ^†ϕ)-μ^2 ϕ^†ϕ.where Y is dimensionful parameter with [Y]=3. The set of modified parameter isλ_3/λ_3,SM≃ 0, λ_4/λ_4,SM≃ 1.33, g_hhVV/g_hhVV^SM≃ 10^-7,while the other renormalized operators are restricted the same as the SM value. As a consequence, although the quartic interaction is finally probed with the unmatched quartic coupling, it does not mean that the existence of the multiplicative Lagrangian is ruled out from the observation. The potential term V can be modified to match with the future experimental observation.The problem is that the real shape of the Higgs potential could not yet be determined and has become one of the most important quests in particle physics. §.§ Naturalness of the top-quark to electron Yukawa coupling ratioFifth, we discuss an application of the multiplicative model in the context of top-quark Yukawa coupling. In brief, the current experimental observation of the top quark mass from the Higgs decay channel is around 175 GeV. Traditionally, if an electron and top quark acquire mass from the Higgs mechanism with the standard Yukawa interaction as,λ_e/√(2)eϕ_0 e+λ_t/√(2)tϕ_0 t,the ratio of the Yukawa coupling between both particles in the SM isλ_t/λ_e∼ 10^-5.This situation slightly violates the notion of the Dirac naturalness by two orders of magnitude. This sign implies the new physics of the mass hierarchy between the charged lepton and the heavy quark. According to the previous section, we can add the operator inside and outside of the multiplicative Lagrangian resulting in the operator with small coefficients. Now, we propose thatif the Yukawa interaction of the top quark is written in the same way as the charged lepton while the kinetic energy of the top quark is written both inside and outside of the multiplicative Lagrangian asℒ⊃ it̅γ^μ∂_μ t-λ_t/√(2)ϕ_0t̅ t-(Λ^4+D_μϕ^*D^μϕ-V+it̅γ^μ∂_μ t)e^-V/Λ^4,the little hierarchy between the top quark and the electron can be reconciled. After electroweak phase transition and reorganizing the Higgs field into canonical form, the top-quark Lagrangian including the quadratic part and Yukawa interaction is expressed asℒ_top= v_SM^2/v^2it̅γ^μ∂_μ t-vλ_t/√(2)t̅ t -vλ_t/√(2)v_SMht̅t.To reorganize the kinetic energy of the top quark to canonical form, we can perform t→ (v/v_SM)t, resulting inℒ_top= it̅γ^μ∂_μ t-m_tt̅ t -y_t ht̅t,wherem_t=v^3λ_t/√(2)v_SM^2, y_t=m_t/v_SM.We find that the tree-level y_t in Eq. (<ref>) is reorganized into the SM prediction so the Higgs decay experiment (h→ tt) is still satisfied. Let's consider the ratio between the top-quark mass in Eq. (<ref>) and the electron mass in Eq. (<ref>)m_e/m_t= v_SM^2/v^2 (λ _e/λ _t).Substituting m_e=0.511 MeV, m_t=175 GeV, v=2.00× 10^6 GeV, and v_SM=246GeV into Eq. (<ref>), the Yukwa coupling ratioλ_e/λ_t is naturally of order unity λ_e/λ_t∼ 10^2.Therefore, the Dirac naturalness between these coupling is recovered.§.§ Fermion mass hierarchyIn our framework, we find that the fermionic particles can be categorized into three different mass scales, which are identified by m_light=v_SM^2λ_f/√(2)v, m_medium=vλ_f/√(2), m_heavy=v^3λ_f/√(2)v_SM^2,depending on the addition scheme of both Yukawa interaction and the fermionic kinetic energy into their corresponding Lagrangians. If we require λ_f of all fermions are "natural", i.e. 
within 1-10^3 times or 1-10^-3 times that of Yukawa coupling of electron, the fermionic masses in SM can be categorized into three families classified by their corresponding mass scales shown in Eq. (<ref>), and in Fig. <ref>. We find that the mass bands of each generation slightly overlap so there is no forbidden regime between three mass generations.As can be seen from Fig. <ref>, the reported masses of e, μ, τ, u, d, s, and c <cit.> suggest that they fall into the m_medium category, indicating that their Yukawa interactions should contribute to the outside of the multiplicative Lagrangian. On the other hand, since their masses fall into the m_light category, the Yukawa interactions of ν_i should appear on the inside and the outside of the multiplicative Lagrangian. The mass of the top quark is considered to be in the m_heavy mass scale, denoting that its kinetic energy should be on the inside and the outside of the multiplicative Lagrangian while it is unclear for the bottom quark since the mass of the bottom quark lies on the overlapping region between the m_medium regime and the m_heavy regime. Without the naturalness problem, this term of bottom quark can possibly be written in both ways either the top quark or charged lepton. This means, in this context, the bottom quark may be governed by Lagrangians based on any of the two addition schemes and it still does not suffer from the naturalness problem.In conclusion, our model allows the fermionic mass to be between 10^-15 GeV and 10^9 GeV while they still please the naturalness argument. However, from the perturbative unitarity aspect, the fermionic masses cannot be higher than the electroweak scale because the Yukawa coupling y=m/v_SM is too large to perform a perturbative calculation. As a consequence, it is reasonable that the fermionic mass in our model can be restricted around 10^-15 GeV to 10^3 GeV. § CONCLUSIONSWe conclude that, with the multiplicative Lagrangian, an anomalous smallness of the Dirac neutrino and the Higgs hierarchy problem between the TeV cutoff of and the Planck scale could possibly be explained together with a single framework. The key ingredient is that the Yukawa interaction terms of the charged leptons should contribute solely on the outside of the multiplicative Lagrangian while the Yukawa interaction terms of the neutral leptons should contribute both on the inside and the outside of the multiplicative Lagrangian. Furthermore, the seesaw type-I mechanism is still applicable. The range of the Majorana mass can be extended into below GeV scale while the dimensionless ratio between the Yukawa coupling of neutrino and the charged lepton is still of order unity. Moreover, this mechanism presumably provides the description of fermionic mass hierarchy in the SM through the different contributions of the Yukawa interactions and the fermionic kinetic energies into the multiplicative Lagrangians. As a consequence, the fermions are classified into three families: 1. light fermions of mass scale m_light, 2. fermions of mass scale m_medium, and 3. heavy fermions of mass scale m_heavy.Although this model that arises from multiplicative Lagrangian formalism allows one to explore the problem of neutrino mass, our model does not contribute towards the underlying mechanism of the neutrino oscillation. In fact, the PMNS matrix element remains unpredictable within our current framework. 
Furthermore, our results on fermionic mass classification, which corresponds to three different addition schemes of Yukawa interaction and kinetic energy, may shed light towards a more complete theory that explains the origin of these different forms of fermion Lagrangian.As for the possible extension in the context of multiplicative Lagrangian formalism, it may be worthwhile to use it to explore the so-called flavor problem. One may integrate the idea of the multiplicative Lagrangian into the higher symmetry group such as SU(5) and SO(10) symmetries, which could naturally provide the origin of the lepton mixing and the quark-lepton unification. This research has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B37G660013]. Suppanat Supanyo also acknowledge the support from the Petchra Prajomklao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (KMUTT). apsrev4-1
http://arxiv.org/abs/2312.16587v1
{ "authors": [ "Suppanat Supanyo", "Chanon Hasuwannakit", "Sikarin Yoo-Kong", "Lunchakorn Tannukij", "Monsit Tanasittikosol" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20231227142509", "title": "The natural smallness of Dirac neutrino mass from the multiplicative Lagrangian" }
𝐱 ŁLKun Lan1, Haoran Li1, Haolin Shi1, Wenjun Wu1, Yong Liao1, Lin Wang2, Pengyuan Zhou*11University of Science and Technology of China,2AI Thrust, HKUST(GZ){lankun,lhr123,mar,wu_wen_jun,yliao}@mail.ustc.edu.cn [email protected], [email protected] 2D-guided 3D Gaussian SegmentationFangqing Chen ^*University of Toronto Copyright may be transferred without notice, after which this version may no longer be accessible. ^* Corresponding Author.January 14, 2024 =============================================================================================================================================================================== Recently, 3D Gaussian, as an explicit 3D representation method, has demonstrated strong competitiveness over NeRF (Neural Radiance Fields) in terms of expressing complex scenes and training duration. These advantages signal a wide range of applications for 3D Gaussians in 3D understanding and editing. Meanwhile, the segmentation of 3D Gaussians is still in its infancy. The existing segmentation methods are not only cumbersome but also incapable of segmenting multiple objects simultaneously in a short amount of time. In response, this paper introduces a 3D Gaussian segmentation method implemented with 2D segmentation as supervision. This approach uses input 2D segmentation maps to guide the learning of the added 3D Gaussian semantic information, while nearest neighbor clustering and statistical filtering refine the segmentation results. Experiments show that our concise method can achieve comparable performances on mIOU and mAcc for multi-object segmentation as previous single-object segmentation methods.3D Gaussian, 3D Segmentation § INTRODUCTIONThe recently emerged 3D Gaussian technique <cit.> marks a significant advancement over previous 3D representation methods such as point clouds <cit.>, meshes <cit.>, signed distance functions (SDF) <cit.>, and neural radiance fields (NeRF) <cit.>, especially in terms of training time and scene reconstruction quality. The mean of each 3D Gaussian represents the position of its center point, the covariance matrix indicates rotation and size, and spherical harmonics express color. Starting with point clouds obtained from SFM <cit.>, 3D Gaussians inherently contain the scene's geometric information, thus saving time in locating areas with concentrated objects in space. Moreover, their explicit expression method further accelerates calculations of color and density for every 3D Gaussian in space, enabling real-time rendering. Additionally, adaptive density control endows them with the capability to express detailed features. These advantages make it widely applicable in 3D understanding and editing. Nonetheless, there is little research on 3D Gaussian segmentation, which is another critical pillar of the realm. A few Gaussian segmentation methods have been proposed recently, yet they require further improvement. For example, Gaussian Grouping <cit.> requires an extended training period of about 15 minutes. SAGA <cit.> is complex in its implementation and struggles with segmenting multiple objects simultaneously. Additionally, the explicit expression of 3D Gaussians leads to storage overhead, preventing it from directly transferring 2D semantic features into 3D, as in NeRF segmentation <cit.>. Finally, the scarcity of datasets and the lack of annotations impede the application of supervised segmentation methods, commonly utilized in 2D and point cloud segmentation. 
In light of the aforementioned challenges, we propose leveraging a pre-trained 2D segmentation model to guide 3D Gaussian segmentation. Inspired by the 2D segmentation approach, which assigns a probability distribution vector for each pixel across different categories, we first assign an object code to each 3D Gaussian to indicate the Gaussian's categorical probability distribution. Subsequently, we employ an algorithm that guides the classification of each 3D Gaussian by minimizing the error between the 2D segmentation map and the rendered segmentation map at a given pose. Finally,we employ KNN clustering to resolve semantic ambiguity in 3D Gaussians and statistical filtering to remove erroneously segmented 3D Gaussians. We validated the effectiveness of our approach through experiments in object-centric and 360° scenes. Our contributions can be summarized as follows. * We propose an efficient 3D Gaussian segmentation method supervised by 2D segmentation, which can learn the semantic information of a 3D scene in less than two minutes and segment multiple objects in 1-2 seconds for a given viewpoint.* Extensive experiments on LLFF, NeRF-360, and Mip-NeRF 360 have demonstrated the effectiveness of our method, obtaining an mIOU of 86%. § RELATED WORK 3D Gaussian, a recently proposed explicit representation method, has attained remarkable achievements in three-dimensional scene reconstruction <cit.>. Its biggest advantage is the capability of real-time rendering. Utilizing a series of scene images and corresponding camera data, it employs 3D Gaussians to depict scene objects. Each 3D Gaussian is defined by parameters including mean, covariance matrix, opacity, and spherical harmonics. The mean pinpoints the Gaussian's central position in the 3D scene. Expressed by a scaling matrix S and a rotation matrix R, the covariance matrix describes the Gaussian's size and shape, while the spherical harmonics encode its color information. Gaussian Splatting then utilizes point-based rendering for efficient 3D to 2D projection. Recent developments have seen numerous advancements in Gaussian Splatting. Innovations like DreamGaussian <cit.> and GaussianDreamer <cit.> merge this technique with Diffusion model <cit.>, facilitating text-to-3D generation. 4D Gaussian Splatting <cit.> extends these methods to dynamic scene representation and rendering. Focusing on segmentation, Gaussian Grouping <cit.> and SAGA <cit.> have made significant strides. They both employ the Segment Anything Model (SAM) <cit.> to derive 2D prior segmentation data, guiding the learning of added semantic information in 3D Gaussians. In Gaussian Grouping, this information is conveyed similarly to coefficients of spherical harmonic functions, whereas SAGA uses learnable low-dimensional features. However, SAM's reliance on geometric structures limits its semantic inclusivity in each mask. Thus, both methods propose strategies to ensure consistency of SAM's segmentation outcomes from various perspectives. Gaussian Grouping treats images from different angles as a sequence of video frames, utilizing a pre-trained model for mask propagation and matching. In contrast, SAGA consolidates consistent, multi-granularity segmentation information across viewpoints, employing a custom-designed SAM-guidance loss. 3D Segmentation in Radiance Fields. 
Prior to the advent of 3D Gaussians, NeRF <cit.> stood as a prominent method in 3D characterization, sparking a plethora of derivative works <cit.>, including several focusing on decomposing and segmenting NeRF. A notable example is Object NeRF <cit.>, which introduced a dual-pathway neural radiance field adept at object decomposition. Its scene branch processes spatial coordinates and viewing directions, outputting density and color details of a point from the viewer's perspective, primarily encoding the background of the 3D scene and offering geometric context for the object branch. Uniquely, the object branch, in addition to spatial and directional inputs, integrates a learnable object activation code, enabling the independent learning of neural radiance fields for each scene object. And the 3D guard msak helps mitigate occlusion issues between objects during the learning phase. Similarly, Switch-NeRF <cit.> demonstrates the decomposition of large-scale neural radiance fields through a trainable gating network.DM-NeRF <cit.> introduces an object field for NeRF segmentation, using it to generate a one-hot vector indicating the ownership of each spatial point by an object. SPIn-NeRF <cit.> employs a semantic radiance field, assessing the likelihood of scene locations being associated with specific objects. ISRF <cit.> adds semantic features to specific points and incorporates DINO <cit.> features of rendered images into this framework through a teacher-student model, allowing for feature interpolation at any given point. Techniques such as K-means clustering, nearest neighbor matching, and bilateral search are integrated, enabling interactive NeRF segmentation. Additionally, OR-NeRF<cit.> chooses to back-project 2D segmentation results into a 3D space, propagating them across different viewpoints, and then re-rendering them onto a 2D plane. These 3D Gaussian and NeRF segmentation methods either take a long time or struggle to preserve the detailed features of the scene in the segmentation result. For this reason, we propose a method that can segment multiple objects while preserving the detailed features in a short time.§ METHOD Given a well-trained scene using 3D Gaussian representation, scene rendering images, and corresponding camera parameters, we initially employed an interactive 2D segmentation model <cit.> to segment the rendered images. Then, the obtained 2D segmentation maps are used as guidance to facilitate the learning of semantic information (object code) added to the 3D Gaussians. Finally, we use KNN clustering to address issues of semantic ambiguity in certain 3D Gaussians, while optional statistical filtering can help eliminate those 3D Gaussians that have been erroneously segmented. The pipeline is depicted in Fig. <ref>. §.§ Point-Based rendering and Semantic Information Learning Gaussian Splatting <cit.> employs a point-based rendering technique (α-blending) to render a 3D scene onto a plane, and the color of a pixel on the plane can be calculated as:C = ∑_i∈𝒩c_iα_i∏_j=1^i-1(1-α_j),where 𝒩 denotes the ordered Gaussians overlapping the pixel, c_i represents the color of each 3D Gaussian projected onto the current pixel, and α_i is given by evaluating a 2D Gaussian with covariance Σ multiplied with a learned per-Gaussian opacity. It is worth noting that α expresses the opacity of any point in the projected 2D Gaussian, which decreases as its distance from the 2D Gaussian center increases. 
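To make the α-blending of Eq. (<ref>) concrete, the following minimal NumPy sketch composites a single pixel from its depth-sorted Gaussians. It is an illustrative re-implementation written for this exposition (the function and variable names are ours); Gaussian Splatting's rasterizer performs this accumulation for all pixels in parallel.

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Front-to-back alpha blending of Eq. (1) for one pixel.

    colors : (N, 3) array, color c_i of each depth-sorted Gaussian overlapping the pixel
    alphas : (N,)   array, opacity alpha_i of each Gaussian evaluated at the pixel
    Returns the composited RGB color C.
    """
    C = np.zeros(3)
    transmittance = 1.0                      # prod_{j<i} (1 - alpha_j)
    for c_i, a_i in zip(colors, alphas):
        C += c_i * a_i * transmittance       # c_i * alpha_i * prod_{j<i} (1 - alpha_j)
        transmittance *= (1.0 - a_i)
    return C

# toy example: the nearer Gaussian is half-opaque red, the farther one is opaque blue
print(composite_pixel(np.array([[1, 0, 0], [0, 0, 1]], float), np.array([0.5, 1.0])))
# -> [0.5 0.  0.5]
```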
To achieve segmentation of a 3D scene, semantic information needs to be incorporated into the representation of the scene. Inspired by 2D segmentation, we assign an object code o∈ℛ^K to each 3D Gaussian to represent the probability distribution of the current 3D Gaussian across various categories, where K is the number of categories. Note that we define a background class and the first dimension of o is used to represent it. To use 2D segmentation maps as supervision for learning the added 3D semantic information, it is necessary to project the added semantic information from 3D onto a 2D plane. Inspired by α-blending, we consider the pixel categories in the rendered 2D segmentation map as a weighted sum of the categories of multiple 3D Gaussians along the current ray during rendering. We assume that the first 3D Gaussian contributes the most, with each subsequent 3D Gaussian's contribution diminishing in accordance with its distance from the rendering plane, and this contribution is also proportional to the size of the 3D Gaussian itself. The category of each pixel on the rendered image can be represented by the object codes o of the 3D Gaussians as:ô = ∑_i∈𝒩o_iα_i∏_j=1^i-1(1-α_j),which simply replaces the color c of each 3D Gaussian in Eq. (<ref>) with the object code of each 3D Gaussian. Assuming we have L images of 2D ground truth labels {I_1,⋯, I_l,⋯, I_L}, I_l∈ℛ^H × W, where L is the number of different camera poses in the dataset, and H and W are the height and width of the label respectively, each element in a ground truth label represents the category label of the corresponding pixel. Then we generate L corresponding projected segmentation maps {Î_1,⋯, Î_l,⋯, Î_L}, Î_l∈ℛ^K × H × W, from the same camera viewpoints as the ground truth. In these projected segmentation maps, each element represents the probability of the pixel belonging to the i-th category, i=1,2,⋯,K. Next, the original 2D segmentation maps are transformed into one-hot vectors and then reshaped to M∈ℛ^K × N, where N=H × W. For the projected segmentation maps, we perform a similar operation and obtain M̂∈ℛ^K × N. Then the ground truth object mask M and the corresponding projected object mask M̂ are used to calculate the Cross-Entropy Loss (CES):L_i=-1/N∑_n=1^N M_i^n log M̂_i^n, (1≤ i ≤ K).The final loss is the average of all the losses for the L pairs of images:ℒ=1/L∑_l=1^L CES_l, where CES_l=1/K∑_i=1^K L_i.§.§ Gaussian Clustering During experiments, we observed that employing 2D segmentation maps as the sole guide for learning 3D semantic information may lead to inaccuracies in the semantic information of some 3D Gaussians. These inaccuracies manifest either as 3D Gaussians approximating an initial state of uniform distribution across all categories or as exhibiting similar probabilities in a limited number of categories. To address this issue, and considering that objects are continuously distributed in space, we posit that each 3D Gaussian should typically be classified within the same category as other 3D Gaussians located within a certain proximity.To remedy the inaccuracies in semantic information, we refer to the KNN clustering algorithm. For a 3D scene with pre-learned semantic information, we initially retrieve the object code, denoted as o, of each 3D Gaussian used to represent the scene. These codes then undergo softmax processing to deduce the probability distribution of each 3D Gaussian across various categories. 3D Gaussians with maximum probability values max(softmax(o))<β are selected.
Finally, we fed the object codes of these selected 3D Gaussians along with their center coordinates into KNN for clustering. For a query 3D Gaussian, we calculate its distance from the surrounding 3D Gaussians, and the k 3D Gaussians closest in distance are selected, the object code of the query Gaussian is set to the mean of these 3D Gaussians' object code. §.§ Gaussian Filtering During experiments, We also found that after 3D semantic information learning and Gaussian clustering, some 3D Gaussians not belonging to the object intended for segmentation were incorrectly segmented out. We observed that these erroneously segmented 3D Gaussians are spatially distant from the rest of the segmented 3D Gaussians, as shown in Fig. <ref>(a). Therefore, we employ a statistical filtering algorithm similar to that used in point cloud segmentation to solve this problem. For each segmented Gaussian, we calculate its average distance D from the neighboring 3D Gaussians. Then, we compute the mean μ and variance σ of these average distances. Finally, we remove those 3D Gaussians whose average distance D > μ +σ from the current segmentation results. § EXPERIMENT §.§ Setups Due to the scarcity of 3D Gaussian segmentation methods and the lack of open source code for Gaussian Grouping <cit.> and SAGA <cit.>, we chose to compare our method with previous NeRF segmentation methods <cit.>. For this purpose, we selected well-known NeRF datasets for our experiments, including LLFF <cit.>, NeRF-360 <cit.>, and Mip-NeRF 360 <cit.>. Both LLFF and NeRF-360 are centered on objects in the scene, with the difference that the camera viewpoint of the former varies in a small range, while the latter contains a 360° image around the object. Mip-NeRF 360 features an unbounded scene, and its camera viewpoint also varies in a large range. In the Gaussian Clustering stage, the probability threshold β of each 3D Gaussian is set at 0.65, while the 50 3D Gaussians closest to its distance are filtered for subsequent computation. The 3D Gaussians are built and trained on a single Nvidia Geforce RTX 3090 GPU. §.§ ResultFig. <ref> illustrates the segmentation effects of this method in various scenes. The first two rows demonstrate the segmentation performance when the camera position varies within a small range. The third row depicts the segmentation effect in a 360° scene. The final row highlights the results of multi-object segmentation, where distinct objects such as the TV, desk, and table are segmented separately. Our method's efficiency is enhanced by the addition of object code, a simple yet effective tool for handling complex scenes. In the first row, this code enables the successful removal of complex background elements like the leaves behind the flower. In the third row, it ensures accurate segmentation even when there is a significant change in the viewing angle. Moreover, the object code, which encapsulates the probability distribution of the 3D Gaussian across all classes, facilitates the simultaneous segmentation of multiple objects in a scene. Fig. <ref> illustrates the comparative results between our method and ISRF <cit.>. Owing to the explicit representation of 3D Gaussians, our segmentation results are more precise in detail compared to those of ISRF, which is particularly evident in the leaf section of Fig. <ref>.§.§ Ablations Fig. <ref> presents the results of the ablation experiments, clearly demonstrating the effectiveness of KNN clustering and statistical filtering. Fig. 
<ref>(b) shows the initial segmented foreground object obtained without KNN clustering or statistical filtering, where it is noticeable that some leaves behind the flower are erroneously segmented. Fig. <ref>(c) displays the segmented foreground image after KNN clustering. Since KNN primarily addresses Gaussians with ambiguous semantic information, its impact on the visualization result is minimal. However, it can be observed that some incorrectly segmented Gaussians have been removed. Finally, Fig. <ref>(d) shows the result obtained after applying both KNN clustering and statistical filtering, which successfully filters out those Gaussians that were incorrectly segmented.§ CONCLUSION We propose a 3D Gaussian segmentation method guided by 2D segmentation maps, attaching a probability distribution vector for each 3D Gaussian on various categories to enable the segmentation of the majority of 3D Gaussians in the scene. Meanwhile, we employ KNN clustering to utilize the spatial continuity of objects, ensuring that nearby 3D Gaussians belong to the same category. Additionally, optional statistical filtering is used to help remove those 3D Gaussians that are incorrectly segmented. As an initial step in 3D understanding and editing, this method has a wide range of potential applications in downstream tasks. We demonstrate the effectiveness of our method on common NeRF datasets.IEEEbib
http://arxiv.org/abs/2312.16047v1
{ "authors": [ "Kun Lan", "Haoran Li", "Haolin Shi", "Wenjun Wu", "Yong Liao", "Lin Wang", "Pengyuan Zhou" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226132821", "title": "2D-Guided 3D Gaussian Segmentation" }
RDGCL: Reaction-Diffusion Graph Contrastive Learning for Recommendation Noseong Park======================================================================= § INTRODUCTIONThe relationship between X-ray and UV emissions in quasars has been a subject of study since the late seventies <cit.>. The prevailing interpretation of energy production in AGN involves the comptonisation of UV seed photons, emitted by the accretion disc, to the X-rays by the electrons in the hot-corona<cit.>. This two-phase model, however, does not explain the observed non-linear slope of the relation <cit.>, that implies that more luminous sources in the UV band are relatively less luminous in the X-rays. Additionally, the model fails to account for the persistence of the X-ray emission, given the fast cooling times implied by comptonisation, in the absence of a sustaining and refueling energy process.Despite the unknowns on the underlying physics, the non-linearity observed in the - relation has enabled the inference of the luminosity distance for quasars <cit.>. As a result, this relation has been employed as a tool for cosmological applications, provided that its dispersion is sufficiently small to allow for precise distances measurements. In the last years, significant efforts have focused on demonstrating that the observed dispersion in the relation is primarily influenced by observational factors, particularly the calibration of X-ray measurements <cit.>. By implementing effective selection criteria, that exclude sources with UV and X-ray fluxes not representative of the intrinsic emission from the accretion disc and hot-corona[This can be accounted for both by observational problems, like the one connected to calibration in the X-rays, and by obscuration/contamination of the intrinsic emission. A notable example of this last case are Radio Loud (RL) and Broad Absorption Line (BAL) quasars, the first ones characterised by an additional contribution in the X-rays due to the jets and the second ones by strong absorption in the UV, but the selection criteria exclude any kind of source absorbed or contaminated in either band by emission from other components and the host galaxy.], we can approach the intrinsic dispersion. Besides being necessary to enable cosmological applications, the tightness of the relation over several decades in luminosities in both bands, along with the stability of the slope over cosmic time (e.g <cit.>), also conveys important information about the universality of the physical mechanism that couples the accretion disc and the corona, that must hold for luminous and fainter engines. On a more general basis, the - relation describes the physics of the accretion in quasars. Therefore, it is crucial to understand how the gravitational energy lost by the matter accreting onto the SuperMassive Black Hole (SMBH) is converted into light, and, if present, into winds and jets. All of these representways in which the central engine impacts its surrounding environment on much larger scales than the scale of accretion, ultimately influencing the evolution of the host galaxies <cit.>). The nebular regions surrounding the central engine, Broad Line Region (BLR) and Narrow Line Region (NLR), respond to the ionisation caused by the primary emission from the central engine by emitting broad and narrow lines in the UV and optical range. 
Photons in the soft X-ray/ far UV range, originating from the hot-corona and the inner part of the accretion disc, are responsible for the production of the high ionisation potential lines, such as thein the UV and the [] in the optical. It is clear, therefore, that in order to delve into the physics underlying this relation and analyse the impact of the energy produced by the central engine on the surrounding nebular regions, which ultimately influences scales up to kiloparsecs, spectroscopic information is vital.Several studies have examined quasars' spectra in both X-ray and UV bands, often focusing on small samples that are representative of specific sub-populations of quasars, defined by either redshift or luminosity range <cit.>. However, these samples may not fully capture the diversity of the entire quasar population. To explore the connection between the accretion scale and the kpc scale for the entire quasar population, we require a statistically significant sample. The sample presented in this work, for the first time, combines a high level of statistical significance and spectroscopic information in both the X-ray and UV bands.§ A STATISTICALLY SIGNIFICANT SPECTROSCOPIC SAMPLE IN THE X-RAY BAND All the studies that aimed to reduce the dispersion in the relation by using statistically significant samples relied on photometric information in the X-ray band <cit.>. For the first time, the Chandra Source Catalog 2.0 <cit.> enabled us to retrieve spectroscopic information in the X-ray band for thousands of sources <cit.>. This sample was obtained by cross-matching a pre-selection of the SDSS DR14[We excluded BAL and RL quasars and selected sources that are not dust-absorbed or host galaxy contaminated in the UV/optical band.] with the CSC 2.0, resulting in more than 3000 spectroscopic data products ready to be used for scientific analysis. Fig. <ref> shows the - relation[The rest frame flux at 2 keV, and consequently the , was measured by fitting the X-ray spectrum with Xspec: we used a power law corrected for Galactic absorption and a cflux (calculate flux) component as a model. This allowed us to retrieve the intrinsic flux at 2 keV as one of the free parameters of the fit, together with the slope and the normalisation of the power law. The rest frame flux at 2500 Å (and then ) was instead computed from interpolation of the SED of the source by using the multi-wavelength photometric data available from the UV to the Near Infra Red bands. Detailed information on the analysis can be found in <cit.>.] for the final sample in <cit.>, which combines the CSC 2.0 dataset with Chandra COSMOS Legacy data <cit.>. The final selection was obtained after applying the filtering criteria in the X-rays, mainly aimed at excluding absorbed sources and at cleaning the sample for the Eddington bias, i.e. the inclusion of sources because of a positive fluctuation with respect to their average emission. It spans the remarkable redshift range of ∼ 0.5 - 4.5, especially noteworthy given that these data are entirely from catalogs. The high statistics available (>1500 sources) allowed us to split the sample in redshift bins and explore the behaviour of slope (γ) and dispersion (δ) of the relation across cosmic time. The redshift bins are chosen to be small enough that the differences in the distances within the same bin are negligible when compared to the observed dispersion in the relation. 
This allows for using the fluxes in place of the luminosities and for studying the evolution of slope and dispersion in a cosmologically independent way.Our analysis shows that the slope γ remains stable up to redshift z∼4.5 (Fig. <ref>, top panel). The analysis of the (non-)evolution of the relation with redshift is crucial for both cosmological and physical studies. A stable slope over cosmic time is essential for using quasars as standard candles. Additionally, it serves as a strong indication of the universality of the physical mechanism governing the coupling of the accretion disc and hot corona. The accuracy achieved through the spectroscopic analysis in measuring the flux at 2 keV, the proxy for the emission of the hot-corona, led to a dispersion δ∼ 0.15 dex at the highest redshift bins (Fig. <ref>, bottom panel). This level of accuracy is comparable to that of samples with dedicated X-ray observations <cit.>, which is very likely the case here as well (the observations in the CSC 2.0 at these redshifts were obtained through dedicated observations). § THE - RELATION AS A TOUCHSTONE FOR THE STATE OF ACCRETION OF QUASARS We have confirmed a few key aspects of the - relation: the emission from the corona increases at a slower rate than that from the disc in more luminous sources. Furthermore, the relation remains stable over several decades in luminosity in both bands and, also, across cosmic time.Several works investigated the interplay between accretion disc and hot-corona, many invoking a coupling via magnetic fields for explaining the X-ray emission in quasars <cit.>, others clumpy accretion flows <cit.> or modified viscosity prescriptions in the accretion disc depending on the accretion status of the source <cit.>. Other works examined the differences between the emissions coming from disc and corona in different accretion regimes <cit.>.Despite all these efforts, we still lack a clear comprehension of the coupling between the two innermost components and a clear explanation of the establishment of the - relation. A more comprehensive approach would be examining the entire - plane, rather than just the relation itself, in order to characterise the accretion status of quasars based on their X-ray and UV properties, regardless of whether they conform to the relation or not. In this regard, our focus has shifted from selecting the sample to reduce the dispersion and identify sources with a “canonical" coupling of accretion disc and hot corona, i.e. lying on the relation. Instead, we are now interested in examining the position of the sources of the entire quasar population in the - plane, without any selection applied except to ensure that the sample consists exclusively of blue quasars, thereby excluding dust-reddened or host galaxy-contaminated sources. In this context, the - relation can be thought as a reference for the “canonical” coupling of accretion disc and corona and be used as a landmark in the - plane describing the accretion of quasars. Given the - relation, for anywe can infer an “expected" , or equivalently, an “expected" optical to X-ray spectral index α_OX= - log ( / ) /log (ν_X/ν_UV) <cit.>, describing the steepness of the SED between the X and UV bands. If we want to measure the position of a source in the - plane, i.e. 
the distance of a source from the locus of the - relation, we can do that through the parameter Δα_OX= α_OX_exp - α_OX_obs = - 0.384 * log ( L_X_exp / L_X_obs).§.§ The X-ray Weak population The approach examining the entire - plane rather than only the locus of the relation was first motivated by the identification of a class of objects, selected to be part of the blue quasar population, but exhibiting a markedly different behaviuor, as found in a previous study <cit.>.As part of the project using quasars for measuring cosmological distances, we selected a sample of sources from the SDSS DR7 at z ∼ 3with the goal of populating the Hubble diagram at high redshift. The sources were specifically selected to be blue and unabsorbed in the UV band and, by design, they were expected to lie on the - relation.The XMM-Newton observations carried out for these sources (cycle 16, proposal ID: 080395, PI: G. Risaliti) revealed that roughly a third of them exhibit significantly lower X-ray fluxes than anticipated based on their UV emission assuming the - relation (they lie within 2 and 3 σ below the - relation). Additionally, the X-ray spectra, which are flatter on average compared to their normal counterparts, do not show any sign of absorption in the X-ray band <cit.>. The explanation proposed for the behaviour of these intrinsically X-ray Weak sources is a different state of the corona with respect to the radiatively efficient state of the sources that follow the relation <cit.>. Considering the size of the parent sample (30 quasars), the fraction of sources falling into the X-ray weak sub-population is large, ranging between 25% and 30%, depending on the threshold adopted to distinguish between the X-ray Weak and the X-ray Normal, i.e. quasars with a X-ray emission consistent with the expectations of the - relation. Such a high fraction of X-ray Weak quasars has never been reported in samples of radio quiet, non-BAL quasars (e.g <cit.>), but has been confirmed by other recent studies on samples of similarly highly accreting quasars <cit.>, finding up to 40% of X-ray Weak sources.While, by design of the sample selection, X-ray Weak and Normal sources share similar UV luminosity, differences in the profiles of the broad lines emitted hint at the possible presence of a nuclear wind in the X-ray Weak population, which may deplete the reservoir of UV photons from the accretion disc, potentially leading to the starvation of the corona in this class of objects <cit.>.§.§ A radiatively inefficient corona: the effect on nebular regionsThe reduced availability of seed photons from the accretion disc and the resulting depletion in the hot corona emission are expected to have an effect on the nebular regions surrounding the central engine.In particular, the more energetic photons emitted by the hot corona, whose energy range spans the far UV and soft X-rays bands, are responsible for the production of the high ionisation potential lines emitted by the BLR and NLR.The connection between the X-rays emission and the BLR emission is evident in the case of one of the most prominent emission features in the UV spectra of quasars,(ionisation potential 47.9 eV): the integrated luminosity of the line is tightly correlated with the luminosity at 2 keV <cit.>. This result can be interpreted in the context of the - relation. 
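As a side note, the Δα_OX bookkeeping used throughout this section can be sketched in a few lines. The code below is our own illustration, not the pipeline of the works cited above: the slope and normalisation of the assumed L_X-L_UV relation are required inputs to be taken from the fitted relation, and the -0.3 / -0.2 thresholds are those used in the following to separate X-ray Weak and Weak candidate sources from the X-ray Normal population.

```python
import numpy as np

# Rest-frame frequencies of 2 keV and 2500 Angstrom (Hz); 1 eV corresponds to 2.418e14 Hz.
NU_X = 2.0e3 * 2.418e14        # 2 keV
NU_UV = 2.998e18 / 2500.0      # c / lambda, lambda in Angstrom
# Note: 1 / log10(NU_X / NU_UV) is approximately 0.384, the factor quoted in the text.

def alpha_ox(L_x, L_uv):
    """Optical-to-X-ray spectral index from monochromatic luminosities (erg/s/Hz),
    following the definition alpha_OX = -log(L_X/L_UV) / log(nu_X/nu_UV)."""
    return -np.log10(L_x / L_uv) / np.log10(NU_X / NU_UV)

def delta_alpha_ox(L_x_obs, L_uv, gamma, beta):
    """Distance from the L_X-L_UV relation, Delta alpha_OX = alpha_exp - alpha_obs.
    gamma and beta are the slope and normalisation of the fitted relation
    log L_X = gamma * log L_UV + beta (to be supplied, not hard-coded here)."""
    L_x_exp = 10.0 ** (gamma * np.log10(L_uv) + beta)
    return -0.384 * np.log10(L_x_exp / L_x_obs)

def xray_class(d):
    """Classification thresholds as used for the samples discussed in this section."""
    if d < -0.3:
        return "X-ray Weak"
    if d < -0.2:
        return "X-ray Weak candidate"
    return "X-ray Normal"
```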
At high UV luminosity, where α_OX steepens compared to values observed in quasars characterised by lower UV luminosity, corresponding to a larger difference between the UV and the X-ray luminosities, there is a deficit of ionising photons in the far-UV/soft X-rays. This deficit, in turn, impacts the production of high ionisation potential lines, such as .However, it remains unclear why X-ray weak sources exhibit an excess ofcompared to the X-ray normal, average population. This seems to be in apparent contrast with the global trend observed in the context of the Eigenvector 1 <cit.>, where weakemissions are associated with sources having relatively weaker X-ray emission <cit.>.One possible explanation for this is provided by the combination of the line excitation mechanisms for , dominated by collisions, whose rate strongly depends on the temperature of the gas and therefore on the amount of X-ray photons available, and the non linearity of the - relation, which implies that for X-ray Normal sources with similar X-ray luminosity, but lower UV luminosity, the number of ionising photons is lower <cit.>. The CSC 2.0 sample allows us to broaden our investigation to include the entire population of quasars with spectroscopic information in the X-ray band, rather than just focusing on the bright end or high redshift sources. Fig. <ref> shows the -L_CIV relation for the CSC 2.0-SDSS DR14 sample (circles) and for the z ∼ 3 XMM-Newton sample of <cit.> (stars).is sampled here in the redshift range 1.2 < z < 4.5[the lower limit is imposed by SDSS wavelength coverage, while the upper limit by the redshift range sampled by the CSC 2.0 sample]. X-ray Weak and X-ray Weak “candidates" are defined, following <cit.>, by selecting sources with a Δα_OX, i.e. the difference between the observed and expected α_OX[The expected α_OX is computed following the - relation in <cit.>], <-0.3 (red) and -0.3< Δα_OX <-0.2 (yellow) respectively. X-ray Weak quasars confirm the behaviour shown in the z ∼ 3 sample, i.e. an excess ofwith respect to the bulk of quasar population.Thanks to the redshift range spanned by the CSC 2.0 sample, we can extend the study to the connection between the X-ray emission and the NLR by examining the [] line (ionisation potential 35.1 eV), available for sources at 0.48 < z < 0.9[The lower limit is imposed by the selection of the parent sample (sources with z < 0.48 are likely contaminated by host-galaxy emission), while the upper limit is set by SDSS wavelength coverage]. The presence of a correlation between [] and the X-ray emission is not unexpected <cit.>, considering [] is used as a proxy of the bolometric luminosity in AGN <cit.>, which is basically dominated by the UV emission, and that the X-ray emission, as a first approximation, serves as a proxy for the bolometric luminosity as well <cit.>, even though the non-linearity of the - relation adds some complexity. To compare our sample with others with information in the [] and X-ray band <cit.>, we plot in Fig. <ref> the relation between the luminosity of the line and the X-ray luminosity in the 2-10 keV band. Our sample as a whole aligns well with the relationship when compared to the others. 
However, in contrast to the XMM sample at z ∼ 3 <cit.>, where, despite the weakness of [] profiles in X-ray weak sources, this subclass does not deviate from the overall population, here we observe that the X-ray Weak and X-ray Weak candidates populate a specific region of the plane relative to the relation, showing again an excess of [] line flux with respect to X-ray Normal sources with analogous X-ray emission. Fig. <ref> shows the same behaviour, this time in the relation between the [] luminosity and the monochromatic luminosity at 2 keV, where X-ray Weak and Weak candidates are located below the main relation, as in the case of the line. Despite the differences between the two nebular regions in terms of distance from the central engine and density, this seems to suggest that the starvation of the hot corona might also affect high ionisation lines emitted by the NLR at the kpc scale, and not just those emitted by the BLR.§ FUTURE PERSPECTIVE: THE - PLANE AS A FUNCTION OF ΔΑ_OX The - relation can serve as a benchmark and can be employed to discriminate between X-ray Weak and X-ray Normal sources. The natural extension of this work is to widen our approach and explore the possibility of using the entire - plane as a diagnostic tool for quasar accretion. This can be achieved by investigating variations in UV and X-ray properties based on their "distance" from the - relation, quantified as Δα_OX. In this context, X-ray Weak and Weak candidate sources represent the most distant objects, characterised by Δα_OX < -0.3 and -0.3 < Δα_OX < -0.2, respectively. The final goal is to take a complete census of the quasar population and to describe the state of accretion as a function of the position on the - plane. We are tackling the problem using two different methods. First, we are studying the properties measured on X-ray and UV spectra, such as fluxes, equivalent widths, and line properties, to determine if some of these properties change continuously with respect to Δα_OX. Second, we are comparing UV and X-ray stacked spectra for different bins of Δα_OX. This will be the subject of a forthcoming publication.DISCUSSION MARIA GIOVANNA DAINOTTI's Comment: The consistency of the slope in the Risaliti-Lusso relation is due to the fact that when you correct for the luminosity evolution, the effect of these corrections gives the constancy of the slope. In the application to cosmology this correction must be applied in order to avoid biases in the cosmological parameters. A set-up methodology has been fully developed in the following four published papers: 1. Dainotti et al. 2023, arXiv230519668D, 2. Bargiacchi, Dainotti et al. 2023, MNRAS, 521, 3909B, 3. Lenart, Bargiacchi, Dainotti et al. 2023, ApJS, 264, 46L, 4. Dainotti et al. 2022, ApJ, 931, 106D
http://arxiv.org/abs/2312.16562v1
{ "authors": [ "Susanna Bisogni" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231227130216", "title": "The relation between X-ray and UV emission in quasars" }
Autonomous Driving using Residual Sensor Fusion and Deep Reinforcement Learning Amin Jalal Aghdasian, Amirhossein Heydarian Ardakani, Kianoush Aqabakee, Farzaneh Abdollahi [email protected], [email protected], [email protected], [email protected] Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran January 14, 2024 =======================================================================================================================================In this paper, we propose the use of self-supervised pretraining on a large unlabelled data set to improve the performance of a personalized voice activity detection (VAD) model in adverse conditions. We pretrain a long short-term memory (LSTM) encoder using the autoregressive predictive coding (APC) framework and fine-tune it for personalized VAD. We also propose a denoising variant of APC, with the goal of improving the robustness of personalized VAD. The trained models are systematically evaluated on both clean speech and speech contaminated by various types of noise at different SNR levels and compared to a purely supervised model. Our experiments show that self-supervised pretraining not only improves performance in clean conditions, but also yields models which are more robust to adverse conditions compared to purely supervised learning. Self-Supervised Learning, Voice Activity Detection, Target Speaker, Deep Learning § INTRODUCTION Being able to detect the presence of speech in a potentially noisy signal is a commonly utilized processing step in modern speech processing systems, generally referred to as voice activity detection (VAD). A VAD system has to classify whether a short frame of audio contains speech, usually from speech features like Mel-filterbank features, in an unsupervised <cit.> or supervised manner <cit.>. Applications include using a VAD model as a preprocessing step for automatic speech recognition, as a gating mechanism for microphones in online meeting devices, or as a part of a speech enhancement system. For real-time applications, such as speech enhancement for hearing aids, it is desired that the VAD model is low-latency and low-complexity, while also being robust in adverse conditions such as background noise. Recently, personalized VAD models, which are also able to determine whether the speech is from a target speaker, have been proposed <cit.>. These personalized VAD models introduce a number of interesting capabilities, such as removing false-triggering on background speakers, at the expense of also needing to model speaker characteristics. Training a model to detect voice activity and distinguish between target speech and non-target speech in a supervised fashion requires a large amount of speech data from several speakers annotated with both VAD labels and framewise speaker identity. For example, <cit.> and <cit.> utilize 960 hours of annotated speech from 2338 different speakers to train their models, while <cit.> utilize up to 27,500 hours of annotated speech. Although high-quality VAD labels can be automatically obtained using forced-alignment <cit.>, relatively clean speech and a corresponding transcript of the speech signal are required. Additionally, frame-level speaker identity labels can be difficult to obtain.
Therefore, the adoption of personalized VAD models is limited by the ability to obtain such large labelled data sets, restricting their widespread use. Self-supervised learning (SSL) methods provide a means of utilizing unlabelled data, which is easier to obtain in large quantities. Models pretrained using SSL have shown state-of-the-art performance in many domains, including speech processing <cit.>, and have also been shown to learn more robust features than purely supervised models <cit.>. The application of SSL for pretraining of VAD models is currently unexplored, although one study used speech features from a pretrained wav2vec2 <cit.> model as input features to a VAD model, and found that they perform better than standard Mel-spectrogram features <cit.>. However, using a large pretrained speech model (95M+ parameters) for feature extraction arguably defeats the purpose of having a small, efficient VAD model. In this work, we propose to use a simple SSL framework known as Autoregressive Predictive Coding (APC) to directly pretrain a small LSTM encoder, with the aim of improving performance and robustness in adverse conditions when fine-tuning for personalized VAD. Additionally, we propose a denoising variant of APC for improved robustness. Here, we modify the APC framework to predict future clean speech frames from noisy input features, as opposed to predicting clean future frames from clean input features. We carry out experiments using both clean and noisy training data generated through online multistyle training (MTR) <cit.>. The trained models are evaluated systematically on both clean test data and on different test sets containing either seen or unseen noise at SNR levels ranging from -5 dB to 20 dB. The results show the following:* APC pretraining and fine-tuning on clean data leads to an absolute improvement in mean average precision (mAP) of 1.9 compared to supervised training, when evaluating the models in clean conditions. Interestingly, an average absolute improvement of 6.05 is observed for noisy conditions, while not having seen any noise during training. * When using MTR, APC pretraining leads to an average absolute improvement of 4.8 for seen noise over the baseline, while a further absolute improvement of 2.3 is observed for the proposed DenoisingAPC. For unseen noise, DenoisingAPC+MTR also achieves the best performance, with an absolute improvement of 8 compared to baseline+MTR. The source code used to produce the results of this paper is made publicly available.[<https://github.com/HolgerBovbjerg/SelfSupervisedPVAD>] § METHODOLOGY AND DATA SETS This section describes our personalized VAD model, the APC pretraining framework and the data used for training and testing. §.§ Personalized VAD model Our personalized VAD model is inspired by the Personal VAD system presented in <cit.> and is illustrated in <Ref>. The personalized VAD classifies input log Mel-filterbank features as either non-speech (ns), target-speaker speech (tss) or non-target-speaker speech (ntss). We choose to separate the speaker verification and VAD tasks into separate modules and resort to using an already trained model for speaker verification. More specifically, we focus on training a robust VAD model, and use a separate, already trained d-vector model <cit.> to extract speaker embeddings for speaker verification. The VAD module predicts the probability of speech or no speech, z^s and z^ns, for any given input frame.
The d-vector model generates an embedding which is compared to the target-speaker embedding through cosine similarity to generate a target-speaker similarity score s.As s is a similarity score, its value might not necessarily represent the probability of a target-speaker being present. Therefore, s is scaled by learnable parameters α and β such that the scaled similarity becomes s^' = sα + β.Finally, the VAD output and scaled similarity score are combined such thatz^k_t =z^ns_t k = ns,s^' z^s_t k = tss,(1 - s^')z^s_t k = ntss.where z^k_t is the output corresponding to class k at time frame t and is used to classify each frame as either non-speech (ns), target-speaker speech (tss) or non-target-speaker speech (ntss).In <cit.>, the authors also propose a personalized VAD where the target-speaker embedding is simply concatenated to the features, which is then used as input to the VAD. Here, the VAD model also learns to extract speaker characteristics through implicit knowledge distillation from the speaker embedding model, removing the need for a separate speaker embedding model during runtime.However, our implementation failed to achieve good performance using this approach.For the speaker embedding model, we use a freely available d-vector model as described in <cit.>.The d-vector model used in this work, is pretrained on VoxCeleb <cit.> and LibriSpeech-other data.It has 3 LSTM layers, each with a hidden dimension of 256 producing 256-dimensional d-vector speaker embeddings and has a total of 1.4M parameters.As in <cit.>, our VAD model is a 2-layer LSTM with a hidden dimension of 64, yielding a total of 60k parameters.Both the d-vector and VAD models take 40-dimensional log Mel-filterbank features as input, computed from 25 frames with frame shift of 10. Target-speakers are enrolled by generating a d-vector embedding using one or more enrolment utterances (minimum 5).Here, speaker embeddings are computed by sliding a 1.6 window across the enrolment utterances with a shift of 0.4, generating an embedding for each window position, which are then averaged to generate the target-speaker d-vector embedding.§.§ Autoregressive predictive codingInspired by the success of pretraining language models, the Autoregressive Predictive Coding <cit.> framework predicts future speech features from the current and previous feature vectors. More formally, given a sequence of feature vectors z_0, …, z_t computed from frames x_0, …, x_t, we ideally seek a model f that predicts z_t+n such that y_t = f(z_0, …, z_t) = z_t + n, denoting z_t as the feature vector and y_t as the output at time frame t. APC thus builds on the notion that if a model is able to predict the future, it must have a good representation of the past and present. An overview of the APC framework, including the denoising variant, is illustrated in <Ref>. Models pretrained using APC have been shown to learn both speaker and content information and shown good performance for a number of downstream tasks <cit.>.While more complex SSL methods such as wav2vec2 <cit.> can also learn information from future frames, APC only encodes information from previous frames.Thus, the learned representation does not rely on future information. This is particularly suitable for VAD applied to real-time applications, as the VAD model is desired to be causal. In our experiments, we feed the log Mel-filterbank features to a 2-layer LSTM encoder, and use a single 1D-convolutional layer to project the hidden representations back to the input feature space. 
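For concreteness, a minimal PyTorch sketch of this pretraining setup is given below. It is our own illustration rather than the released code: the hidden size of 64 is carried over from the VAD LSTM described above, the 1x1 kernel of the projection layer is an assumption (the text only specifies a single 1D-convolutional layer), and the prediction horizon n and the ℓ_1 objective follow the choices reported later in the pretraining section.

```python
import torch
import torch.nn as nn

class APCEncoder(nn.Module):
    """2-layer LSTM encoder with a Conv1d projection back to the 40-dim
    log Mel-filterbank space, as used for APC pretraining (sketch)."""
    def __init__(self, feat_dim: int = 40, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Conv1d(hidden, feat_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim)
        h, _ = self.lstm(feats)                               # (batch, time, hidden)
        return self.proj(h.transpose(1, 2)).transpose(1, 2)   # (batch, time, feat_dim)

def apc_loss(model: APCEncoder, feats: torch.Tensor, targets: torch.Tensor, n: int = 3):
    """L1 loss for predicting features n frames ahead.
    For standard APC, targets == feats; for the denoising variant,
    feats are noisy features and targets the corresponding clean features."""
    pred = model(feats[:, :-n, :])
    return torch.nn.functional.l1_loss(pred, targets[:, n:, :])
```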
In addition to the standard APC framework, we also propose a denoising variant of APC. Here, speech features are extracted from both clean speech and speech which has been corrupted by noise. We then predict future clean features from noisy input features, as depicted in <Ref>. This forces the model to extract information related to the source signal from a noisy mixture, thus learning to distinguish the source signal from background noise. §.§ Data sets As mentioned in <cit.>, the amount of readily available multi-speaker data with natural speaker turns, as well as speaker identity information, is limited, and as a result we carry out experiments on a simulated multi-speaker data set. While speakers might overlap in realistic settings, such as a cocktail party scenario, it has been found that a personalized VAD model trained on non-overlapping speech also performs well on overlapping speech <cit.>. Following <cit.>, we uniformly sample 1 to 3 utterances from individual speakers and a target-speaker is randomly selected from one of the individual utterances. We then simply concatenate the utterances to generate multi-speaker utterances. For our experiments, we use the freely available Librispeech <cit.> data set to construct both training and test data. Librispeech consists of 960 hours of training data split into two sets of 100 hours and 360 hours categorized as clean, and a 500-hour other set, which is less clean. Similarly, both a clean and an other set are available for testing. <Ref> shows a summary of the different data sets used in our experiments for training, pretraining and testing. In our experiments, we only use utterances within the same set when constructing multi-speaker utterances. For pretraining, the multi-speaker utterances constructed from the train-clean-100, train-clean-360 and train-other-500 sets are used, yielding a total of 960 hours of speech. For supervised training, only multi-speaker utterances generated from the 100-hour train-clean-100 set are used. Additionally, we also train the models on the 10h LibriLight <cit.> training set. This simulates a setting with a large pool of unannotated data available for pretraining and a smaller pool of labelled data available for supervised training. As Librispeech includes speech transcripts, VAD labels are generated using forced-alignment <cit.>, while we generate framewise speaker labels using the speaker identity information included in the Librispeech metadata. For testing the model performance in clean conditions, we use the utterances generated from the test-clean set. To be able to evaluate the trained model in varying adverse conditions, noisy test data is generated by adding noise to the test-clean multi-speaker utterances. Here, we pick two environmental noise types, namely bus and café, and two speech-like noise types, namely babble and speech-shaped noise, each representing a realistic adverse condition.
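The exact mixing procedure used to create these noisy sets is not spelled out above; as a rough sketch under that caveat, the standard energy-based scaling below (our own function names, not the released code) tiles the noise recording to the utterance length and scales it so that the speech-to-noise power ratio matches the requested SNR.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add noise to speech at a given SNR (dB) by scaling the noise energy."""
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]          # tile/truncate to speech length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# e.g. one noisy copy of a test utterance per SNR level:
# noisy = {snr: mix_at_snr(utt, babble, snr) for snr in range(-5, 25, 5)}
```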
For each noise type, noisy test sets have been generated by adding the noise type at a specific SNR level, ranging from -5 dB to 20 dB in steps of 5 dB, yielding a total of 24 noisy test sets.§.§ Supervised baseline When training the personalized VAD model, the pretrained d-vector model weights are fixed, and we only update the VAD network and fully-connected network depicted in <Ref>. We reuse the hyperparameter choices in <cit.>, using a cross-entropy loss, a batch size of 64, the ADAM optimizer with an initial learning rate of 5·10^-5, and a gradual reduction of the learning rate following a cosine annealing schedule.§.§ Pretraining and fine-tuning During pretraining, only the LSTM-encoder in the VAD network depicted in <Ref> is pretrained, yielding a system as seen in <Ref>. In <cit.> it is found that predicting n=3 frames ahead during APC pretraining leads to good downstream task performance, thus we adopt this choice and use an ℓ_1-loss as the objective function. We pretrain the LSTM encoder for 10 epochs using a batch size of 32 and the ADAM optimizer with an initial learning rate of 0.01, and a cosine annealing learning rate schedule. After pretraining, the LSTM encoder weights are copied to a Personal VAD model which is then fine-tuned, using the same procedure as used for training the supervised baseline. §.§ Multistyle training A commonly used technique to improve model robustness is adding various types of noise to the training data. This technique is generally referred to as multistyle training (MTR) and has been shown to improve model robustness <cit.>. Therefore, we also carry out experiments where we apply online MTR. Here, we add noise from different adverse conditions as described in <cit.>, namely babble, bus, pedestrian, street and speech-shaped noise. We include babble, bus and speech-shaped noise in both training data and test sets, while keeping café noise unseen during training. The noise is added at varying SNR levels in the range -5 dB to 20 dB. We also add room acoustics from recorded RIRs as used in <cit.>. When applying MTR, randomly sampled noise and room acoustics are added to the individual multi-speaker utterances, each with a probability of 50%. §.§ Metrics To evaluate the performance of the trained models, we follow <cit.> and compute the average precision score for each class and use the mean average precision (mAP) score as our main evaluation metric. For a given class, average precision is computed as AP=∑_n(R_n-R_n-1)· P_n, with P_n and R_n being the precision and recall at threshold n. To compute the mAP score, we compute the AP score for each class and take the mean.§ RESULTS In the following section, the results from our experiments are presented. First, we analyse how the trained models perform in clean conditions, followed by an in-depth analysis of how the trained models perform in various adverse environments. All models have been trained using five different random seeds. §.§ Clean conditions In <Ref> the performance of the various models on the clean test set is presented.
Here, ns, tss and ntss denotes no-speech, target-speaker speech and non-targets-speaker speech, respectively.Comparing the models without MTR, Baseline and APC, the APC pretrained model shows an improvement in mAP of 2.1 and DN-APC an improvement of 2.5.The DN-APC model scores highest of all models with an mAP of 92.9.Using MTR for model robustness usually comes at the cost of a performance drop in clean conditions <cit.>.As expected, we observe that the models using MTR perform slightly worse, although the DN-APC model performs comparable to the baseline without MTR.When using MTR, the DN-APC pretrained model performs best, with an overall improvement of 1.3 compared to the supervised baseline using MTR.§.§ Adverse conditionsWe evaluate the performance of the trained models in various adverse conditions, including background noise consisting of bus, babble, and speech-shaped noise. Additionally, we also evaluate the performance on a noise type (café noise) unseen during training, to evaluate whether the robustness of the models generalize to an unseen noise type.<Ref> presents the mAP scores and 95 confidence intervals of the trained models when testing for seen noise at different SNR-levels are reported.While <Ref> shows summary scores averaged over all noise types, the general picture for each individual noise type is the same. Looking <Ref>, the pretrained models clearly outperform the supervised baseline models in seen noise. Interestingly, the APC model outperforms the baseline by 5.6 on average when neither is using MTR, without having seen any noise during pretraining or supervised training.Using MTR leads to a further improvement, while DN-APC+MTR yields the best results, outperforming the baseline+MTR by 7.1 on average and APC+MTR pretraining by 2.3.For <Ref>, showing results when evaluating the models in unseen noise, a similar pattern is observed, with DN-APC+MTR showing the best results, outperforming the baseline+MTR by 8 on average and APC+MTR by 3. In <Ref> the average performance for models trained on LibriLight 10h training set is presented. Here, we observe that the pretrained models outperform the supervised baselines by an even larger margin, with an average improvement of 24.3 for DN-APC+MTR compared to baseline+MTR.In summary, using APC pretraining improves performance substantially in noisy conditions, and additionally improves performance in noisy conditions.Our proposed DN-APC in combination with MTR achieves the best performance, with an average improvement of 7.1 in seen noise and 8 in unseen noise compared to baseline+MTR.§ CONCLUSIONSIn this paper, we proposed the use of self-supervised pretraining to leverage unlabelled data for improving the robustness of a personalized VAD model in adverse conditions. For pretraining we used the APC framework, while we also proposed a Denoising variant of APC for improved robustness. We compared the pretrained models with a supervised baseline and tested their performance in both clean and adverse conditions with both seen and unseen noise at various SNR-levels. Our results show a significant improvement in robustness to background noise when using APC pretraining.Both APC and our proposed Denoising APC outperform the baseline, while our proposed Denoising APC achieves the best performance.Overall, it can be concluded that self-supervised pretraining can improve the personalized VAD performance in both clean and noisy conditions. IEEEbib
http://arxiv.org/abs/2312.16613v1
{ "authors": [ "Holger Severin Bovbjerg", "Jesper Jensen", "Jan Østergaard", "Zheng-Hua Tan" ], "categories": [ "cs.SD", "cs.LG", "eess.AS", "68T10", "I.2.6" ], "primary_category": "cs.SD", "published": "20231227153617", "title": "Self-supervised Pretraining for Robust Personalized Voice Activity Detection in Adverse Conditions" }
Learning temporal formulas from examples is hardThis article is a long version of the article presented in the proceedings of the International Conference on Grammatical Inference (ICGI) in 2021 <cit.>. It includes much stronger and more general results than the extended abstract. Corto Mascle [email protected], University of Bordeaux, FranceNathanaël Fijalkow [email protected], LaBRI, Université de Bordeaux, FranceGuillaume Lagarde [email protected], University of Bordeaux, France===================================================================================================================================================================================================================================================================================================== We study the problem of learning linear temporal logic (LTL) formulas from examples, as a first step towards expressing a property separating positive and negative instances in a way that is comprehensible for humans. In this paper we initiate the study of the computational complexity of the problem. Our main results are hardness results: we show that the LTL learning problem is -complete, both for the full logic and for almost all of its fragments. This motivates the search for efficient heuristics, and highlights the complexity of expressing separating properties in concise natural language. § INTRODUCTIONWe are interested in the complexity of learning formulas of Linear Temporal Logic () from examples, in a passive scenario: from a set of positive and negative words, the objective is to construct a formula, as small as possible, which satisfies the positive words and does not satisfy the negative words. Passive learning of languages has a long history paved with negative results. Learning automata is notoriously difficult from a theoretical perspective,as witnessed by the original -hardness result of learning a Deterministic Finite Automaton (DFA) from examples <cit.>. This line of hardness results culminates with the inapproximability result of <cit.> stating thatthere is no polynomial time algorithm for learning a DFA from examples even up to a polynomial approximation of their size.1em One approach to cope with such hardness results is to change representation, for instance replacing automata by logical formulas;their syntactic structures make them more amenable to principled search algorithms. There is a range of potential logical formalisms to choose from depending on the application domain. Linear Temporal Logic <cit.>, which we abbreviate as , is a prominent logic for specifying temporal properties over words, it has become a de facto standard in many fields such as model checking, program analysis, and motion planning for robotics. Verification ofspecifications is routinely employed in industrial settings and marks one of the most successful applications of formal methods to real-life problems. A key property makinga strong candidate as a concept class is that its syntax does not include variables,contributing to the fact thatformulas are typically easy to interpret and therefore useful as explanations.1em Over the past five to ten years learning temporal logics (of whichis the core) has become an active research area. There are many applications, let us only mention a few: program specification <cit.> and fault detections <cit.>. 
We refer to <cit.> for a longer discussion on the potential and actual applications of learning temporal logics.1em Since learning temporal logics is a computationally hard problem, a number of different approaches have been explored.One of the first and probably most natural is leveraging SAT solvers <cit.>, which can then accommodate noisy data <cit.>. Another line of work relies on their connections to automata <cit.>, and a third completely different idea approaches it from the lens of Bayesian inference <cit.>. Learning specifically tailored fragments ofyields the best results in practice <cit.>. There are a number of temporal logics, and the ideas mentioned above have been extended to more expressive logics such as Property Specification Language (PSL) <cit.>, Computational Tree Logic (CTL) <cit.>, and Metric Temporal Logic (MTL) <cit.>. Other paradigms have been explored to make learning temporal logics useful, such as sketching <cit.>.1em Despite this growing interest, very little is known about the computational complexity of the underlying problem; indeed the works cited above focused on constructing efficient algorithms for practical applications. The goal of this paper is to initiate the study of the complexity of learningformulas from examples.1em Our contributions. We present a set of results for several fragments of , showing in almost all cases that the learning problem is -complete. Section <ref> gives definitions. * Our first -hardness result is presented in Section <ref>, it states that the learning problem for fullis -hard when the alphabet size is part of the input. * To obtain membership infor the learning problem, we show in Section <ref> that all fragments ofhave the short formula property. We then study some (degenerate) fragments in Section <ref> and show that for these fragments thelearning problem is in polynomial time. * We construct in Section <ref> a polynomial-time approximation algorithm forwith only the next operator and conjunctions, and show that assuming ≠, the approximation ratio of this algorithm is optimal. * Our most technical results are presented in Section <ref> and Section <ref>: in the first section we show that almost all fragments with the next operator are hard to approximate, and in the next a similar result for almost all fragments without the next operator.We conclude in Section <ref>. § PRELIMINARIES Let us fix a (finite) alphabet Σ. We index words from position 1 (not 0) and the letter at position i in the word w is written w(i), so w = w(1) … w(ℓ) where ℓ is the length of w, written |w| = ℓ. We write wk = w(k) … w(ℓ). To avoid unnecessary technical complications we only consider non-empty words, and let Σ^+ denote the set of (non-empty) words.The syntax of Linear Temporal Logic () includes atomic formulas c ∈Σ and their negations, as well as ⊤ and , the boolean operators ∧ and ∨, and the temporal operators ,, , and . Note that as usually done, we work within negation normal form, meaning that negation is only used on atomic formulas. The semantic ofover finite words is defined inductively over formulas, through the notation w ϕ where w ∈Σ^+ is a non-empty word,and ϕ is anformula. The definition is given below for the atomic formulas and temporal operators , ,, and , with boolean operators interpreted as usual. * We have w ⊤ and w.* wc if w(1) = c. * w ϕ if w > 1 and w2ϕ. It is called the next operator.* w ϕ if wiϕ for some i ∈ [1,w]. It is called the eventually operator.* w ϕ if wiϕ for all i ∈ [1,w]. 
It is called the globally operator.* w ϕψ if there exists i ∈ [1,w] such that for j ∈ [1,i-1] we have wjϕ,and wiψ. It is called the until operator.Note that ϕ is syntactic sugar for ⊤ϕ. We say that w satisfies ϕ when w ϕ is true. It is sometimes useful to write w,i ϕ to mean wiϕ. We consider fragments ofby specifying which boolean connectives and temporal operators are allowed. For instance (,∧) is the set of allformulas using only atomic formulas, conjunctions, and the next operator. The full logic is = (, ,,,,). More generally, for a set of operators Op ⊆, , , , ,, we write (Op) for the logic using operators from Op. The size of a formula is the size of its syntactic tree.We say that two formulas are equivalent if they have the same semantics, and we write ϕ≡ψ to say that ϕ and ψ are equivalent. *Thelearning problem. A sample is a pair (P,N) where P = u_1,…,u_n is a set of positive words and N = v_1,…,v_m a set of negative words. Without loss of generality we can assume that n = m (adding duplicate identical words to have an equal number of positive and negative words). Thelearning decision problem is:INPUT: a sample (P,N) and k ∈,QUESTION: does there exist anformula ϕ of size at most k such that for all u ∈ P, we have u ϕ, and for all v ∈ N, we have v ϕ? In that case we say that ϕ separates P from N, or simply that ϕ is a separating formula if the sample is clear from the context. Thelearning problem is analogously defined for any fragment of . *Parameters for complexity analysis. The three important parameters for the complexity of thelearning problem are: n the number of words, ℓ the maximum length of the words, and k the desired size for the formula. As we will see, another important parameter is the size of the alphabet. We will consider two settings: either the alphabet Σ is fixed, or it is part of the input. The size of the alphabet is |Σ|, the number of letters. *Representation. The words given as input are represented in a natural way: we work with the RAM model with word size log(|Σ| + n + ℓ), which allows us to write a letter in each cell and to manipulate words and positions in a natural way.We write |P| for ∑_j ∈ [1,n] |u_j| and similarly N = ∑_j ∈ [1,n] |v_j|. The size of a sample is |P| + |N|.We emphasise a subtlety on the representation of k: it can be given in binary (a standard assumption) or in unary. In the first case, the input size is O(n ·ℓ + log(k)), so the formula ϕ we are looking for may be exponential in the input size! This means that it is not clear a priori that thelearning problem is in . Opting for a unary encoding, the input size becomes O(n ·ℓ + k), and in that case an easy argument shows that thelearning problem is in . We follow the standard representation: k is given in binary, and therefore it is not immediate that thelearning problem is in . *A naive algorithm. Let us start our complexity analysis of the learningproblem by constructing a naive algorithm for the whole logic.There exists an algorithm for solving thelearning problem in time and space O(|Σ| + exp(k) · n ·ℓ), where exp(k) is exponential in k. Notice that the dependence of the algorithm presented in Theorem <ref> is linear in n and ℓ,and it is exponential only in k, but since k is represented in binary this is a doubly-exponential algorithm. For a formula ϕ∈, we write ϕ : P ∪ N →0,1^ℓ for the function defined byϕ(w)(i) = 1 if wiϕ, 0 if wiϕ,for w ∈ P ∪ N.Note that ϕ is separating if and only if ϕ(u)(1) = 1 and ϕ(v)(1) = 0 for all u ∈ P, v ∈ N. 
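As an illustration, the tables ϕ(w) can be computed bottom-up over the syntax tree in one backward pass per operator, which is what gives the O(n·ℓ) cost per new operator used in the rest of the proof. The Python sketch below is our own encoding (formulas as nested tuples, positions 0-indexed whereas the text indexes from 1) and covers atomic letters and the operators defined in the preliminaries: next, eventually, globally, until, conjunction and disjunction.

```python
# A formula is a nested tuple, e.g. ("and", ("letter", "a"), ("next", ("letter", "b"))).
def tables(phi, w):
    """Return the list [phi(w)(i)]_i: does the suffix of w starting at position i satisfy phi?"""
    l = len(w)
    op = phi[0]
    if op == "letter":
        return [w[i] == phi[1] for i in range(l)]
    if op in ("and", "or"):
        f, g = tables(phi[1], w), tables(phi[2], w)
        return [(f[i] and g[i]) if op == "and" else (f[i] or g[i]) for i in range(l)]
    if op == "next":
        f = tables(phi[1], w)
        return [f[i + 1] if i + 1 < l else False for i in range(l)]
    if op == "eventually":
        f, out, acc = tables(phi[1], w), [False] * l, False
        for i in range(l - 1, -1, -1):
            acc = acc or f[i]
            out[i] = acc
        return out
    if op == "globally":
        f, out, acc = tables(phi[1], w), [False] * l, True
        for i in range(l - 1, -1, -1):
            acc = acc and f[i]
            out[i] = acc
        return out
    if op == "until":
        f, g = tables(phi[1], w), tables(phi[2], w)
        out, nxt = [False] * l, False
        for i in range(l - 1, -1, -1):
            out[i] = g[i] or (f[i] and nxt)
            nxt = out[i]
        return out
    raise ValueError(f"unknown operator {op}")

def separates(phi, P, N):
    """phi separates P from N iff phi(u)(1) = 1 for all u in P and phi(v)(1) = 0 for all v in N."""
    return all(tables(phi, u)[0] for u in P) and not any(tables(phi, v)[0] for v in N)
```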
The algorithm simply consists in enumerating all formulas ϕ ofof size at most k inductively, constructing ϕ, and checking whether ϕ is separating. Initially, we construct a for all a ∈Σ, and then once we have computedϕ and ψ, we can compute ϕψ, ϕ, ϕ, ϕ, ϕ∧ψ, and ϕ∨ψ in time O(n ·ℓ). To conclude, we note that the number of formulas ofof size at most k is exponential in k.§.§ Subwords Let u = u(1) … u(ℓ) and w = w(1) … w(ℓ'). We say that * u is a subword of w if there exist p_1 < … < p_ℓ such that u(i) = w(p_i) for all i ∈ [1,ℓ]. * u is a weak subword of w if there exist p_1 ≤…≤ p_ℓ such that u(i) = w(p_i) for all i ∈ [1,ℓ].For instance, abba is a subword of babaaaaba,and bba is a weak subword of abaa (using the b twice). We say that a word is non-repeating if every two consecutive letters are different. If u is non-repeating, then u is a weak subword of w if and only if it is a subword of w.As a warm-up, let us construct simpleformulas related to subwords. * We consider the word u = u(1) … u(ℓ) and p_1 < p_2 < … < p_ℓ. They induce the following (,) formula, called a pattern:= ^p_1 - 1(u(1) ∧^p_2 - i_1(⋯∧^p_ℓ - p_ℓ-1 u(ℓ))⋯).It is equivalent to the (larger in size) formula ⋀_i ∈ [1,ℓ]^p_i - 1 u(i), which states that for each i ∈ [1,ℓ], the letter in position p_i is u(i). * We consider the word u = u(1) … u(ℓ). It induces the following (,) formula, called a fattern (pattern with an ):=(u(1) ∧(⋯∧ u(ℓ))⋯).A word w satisfiesif and only if u is a weak subword of w.§ NP-HARDNESS WHEN THE ALPHABET IS PART OF THE INPUT We prove our first hardness result: the learning problem for  is -hard for non-constant alphabets for all fragments includingand , so in particular for full . For all ,⊆ Op, the (Op) learning problem is -hard when the alphabet is part of the input. Recall that the hitting set problem takes as input C_1,…,C_n subsets of [1,ℓ] and k ∈, and asks whether there exists H ⊆ [1,ℓ] of size at most k such that for every j ∈ [1,n] we have H ∩ C_j ≠∅. It is known to be -complete. We construct a reduction from the hitting set problem. Let C_1,…,C_n ⊆ [1,ℓ] and k ∈ an instance of the hitting set problem. Let us define the alphabet Σ = a_j, b_j : j ∈ [1,n], it has size 2n. For j ∈ [1,n], we define u_j of length ℓ by u_j(i) = a_jifi ∈ C_j, b_jotherwise.Let v = b_1 ⋯ b_n. We claim that there exists a hitting set H of size at most k if and only if there exists a separating formula ofof size at most 2k.Given a hitting set H of size k, we construct the formula (⋁_j ∈ H a_j), it is separating by definition of H being a hitting set, and it has size 2k. Conversely, let us consider a separating formula ϕ of size 2k. Since all operators have arity one or two, the syntactic tree of ϕ contains at most k leaves, so the set H = j ∈ [1,ℓ] : a_jappears in ϕ has size at most k. Suppose H is not a hitting set of C_1,…, C_n, there exists j ∈ [1,n] such that H ∩ C_j = ∅.We prove that for all subformulas ψ of ϕ and for all i ∈ [1,ℓ], if u_jiψ then vjψ. We proceed by induction. * If ψ is a letter. If u_jiψ, this implies that u_j(i) = b_j because a_j cannot appear in both ψ and u_j since H ∩ C_j = ∅. Remark that v(i) = b_i, so viψ. * The other cases are easily proved by applying directly the induction hypothesis. In particular, since u_j satisfies ϕ, then so does v, a contradiction with ϕ being separating. This could be the end of our study! However the assumption that the alphabet is part of the input is very unusual, and in the rest of the paper we will therefore fix the alphabet size. 
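Since subwords and weak subwords, introduced above, drive the pattern and fattern constructions and reappear in the later hardness arguments, the following short sketch (ours; the function names are not from the paper) makes them concrete. Recall that a word satisfies the fattern induced by u exactly when u is a weak subword of it, and that the two notions coincide for non-repeating u.

```python
# Minimal sketches (ours) of the subword notions used throughout the paper.
def is_subword(u, w):
    """u is a subword of w: its letters occur in w at strictly increasing positions."""
    it = iter(w)
    return all(ch in it for ch in u)   # each 'ch in it' advances the iterator

def is_weak_subword(u, w):
    """u is a weak subword of w: positions are non-decreasing, so a position of w
    may be reused for consecutive equal letters of u."""
    i = 0
    for ch in u:
        while i < len(w) and w[i] != ch:
            i += 1
        if i == len(w):
            return False
        # i is not advanced: the same position may serve the next (equal) letter of u
    return True

# The examples given in the text:
assert is_subword("abba", "babaaaaba")
assert is_weak_subword("bba", "abaa")      # the b is used twice
assert not is_subword("bba", "abaa")
```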
In the remainder of the paper, we will consider only fragments ofwithout the until operator. In other words, when consider Op a set of operators, we assume that Op ⊆, , , ,.§ MEMBERSHIP IN NP: THE SHORT FORMULA PROPERTY Let Op a set of operator, we say that Op has the short formula property if there exists a polynomialsuch that for all samples (P,N), if there exists an (Op) separating formula, then there exists one of size at most (|P| + |N|).Note that if Op has the short formula property then the (Op) learning problem is in . All Op ⊆, , , , have the short formula property, and therefore the (Op) learning problem is in .This theorem is the consequence of the following propositions, which establish it for various subsets of operators. We start with some technical lemmas. The following lemma is easily proved by induction, it shows thatcannot detect repetitions in the last letter of a word. Let u whose last letter is a. For all formulas ϕ∈, for all k ∈, if u satisfies ϕ then ua^k satisfies ϕ. If a formula of the form ϕ =(ψ_1(ψ_2 ( …ψ_r ) …) is not satisfied by some word v then there exist i_1 < … < i_k with k ≤v+1 such that v does not satisfy ψ =(ψ_i_1 (ψ_i_2( …ψ_i_k) …) but all words satisfying ϕ also satisfy ψ. We set v_1 = v. For all i ∈ [2,r] we set v_i as the largest suffix of v_i-1 satisfying ψ_i. If this suffix does not exist, we set v_i = ϵ. As v does not satisfyϕ, we easily obtain that for all i<r v_i does not satisfy (ψ_i (ψ_i+1…ψ_r) …). Hence v_r-1 does not satisfy ψ_r and thus v_r = ϵ. We can now extract a decreasing subsequence of suffixes v_i_1, …, v_i_k with i_1 = 1 and for all j>1, i_j is the smallest index larger than i_j-1 such that v_i_j≠ v_i_j-1 if it exists. Note that as this sequence is decreasing k cannot be larger than v+1. We set ψ =(ψ_i_1 (ψ_i_2…ψ_i_k) …). We have that for all j>1, v_i_j is the largest suffix of v_i_j-1 satisfying ψ_i_j, and v_i_k = ϵ. It is then easy to infer by reverse induction on j that for all j, v_i_j does not satisfy (ψ_i_j (ψ_i_j+1…ψ_i_k) …) and thus that in particular v_i_1 = v does not satisfy ψ. Let u bea word satisfying ϕ, there is a non-increasing sequence of suffixes u_i of u such that u_1 = u and each u_i satisfies (ψ_i (ψ_i+1…ψ_r) …). It is then easy to see, by reverse induction on j, that for each j ∈ [1,k] u_i_j satisfies (ψ_i_j (ψ_i_j+1…ψ_i_k) …). In particular u_1 = u satisfies ψ.We state and prove a dual version. If a formula of the form ϕ =(ψ_1(ψ_2 …ψ_r) …) is satisfied by some word u then there exist i_1 < … < i_k with k ≤u+1 such that u satisfies ψ =(ψ_i_1 (ψ_i_2…ψ_i_k) …) and all words satisfying ψ also satisfy ϕ. Let us momentarily use negation. Note that the proof of Lemma <ref> still holds when we allow negations. The word u does not satisfy ϕ' =(ψ_1( ψ_2 …ψ_r) …) as it is equivalent to the negation of ϕ, thus there exist i_1 < … < i_k with k ≤u+1 such that u does not satisfy ψ' =(ψ_i_1 (ψ_i_2…ψ_i_k) …) but all words satisfying ϕ' satisfy ψ'. The formula ψ =(ψ_i_1 (ψ_i_2…ψ_i_k) …) is equivalent to the negation of ψ', thus it is satisfied by u, and all words satisfying ψ satisfy ϕ. The following fact will be useful for obtaining weak normal forms. For all formulas ϕ, ψ, ϕ_1, …, ϕ_n ∈, the following equivalences hold: *ϕ≡ϕ and ϕ≡ϕ. *( ϕψ) ≡ϕψ and ( ϕψ) ≡ϕψ. *ϕ≡ and ϕ≡ϕ. *ϕ≡ϕ (both state that ψ is satisfied on the last position) *( ϕψ) ≡ϕψ and ( ϕψ) ≡ϕψ. *(ψ⋀_i=1^n ϕ_i) ≡⋀_i=1^n(ψϕ_i) and (ψ⋁_i=1^n ϕ_i) ≡⋁_i=1^n(ψϕ_i). 
All Op ⊆, , , have the short formula property.Let us consider a formula ϕ separating P from N in (Op), we apply a series of transformations to ϕ to construct another separating formula of polynomial size. First of all we push thein front of the letters, which is possible by using repeatedly the equivalences <ref>, <ref>, <ref> and <ref> of Fact <ref>. Then we push theto the bottom of the formula as well, as they commute withand . We obtain a (, ) formula with atoms of the form either ^ka or ^k a. Once that is done, we make the formula follow a normal form by repeatedly using equivalence <ref> of Fact <ref>. While ϕ has a subformula of the form (ψ⋀_i=1^p ϕ_i), we turn it into ⋀_i=1^p( ψϕ_i). In the end ϕ is of the form ψ⋀_i=1^p ϕ_i with each ϕ_i of the form (ψ_1(ψ_2 …ψ_r) …) where ψ and the ψ_i are conjunctions of formulas of the form ^ka or ^k a. Note that p may be exponential in ϕ. Observe that given two formulas α and β of the form either a or a for some a ∈Σ, ^k α^k β is equivalent to either ^k α, ^k β, or . It is then easy to see that every conjunction of formulas of the form ^ka or ^k a is equivalent over non-empty words of length at most ℓ to eitheror a conjunction of at most ℓ formulas of the form ^k a or ^ka with k < ℓ. We can thus assume ψ and all ψ_i to be of that form, and thus to be of size polynomial in ℓ. This formula being equivalent to ϕ, all u_i satisfy all ϕ_r and ψ, and for all v_j either v_j does not satisfy ψ or it does not satisfy some ϕ_r. We focus on the second case. Say v_j does not satisfy ϕ_r =(ψ_1(ψ_2 …ψ_s) …), by Lemma <ref> we can turn ϕ_ℓ into a short formula ϕ'_j that is still satisfied by all u_i but not by v_j. We set ϕ'_j to be as described above for all v_j that satisfy ψ and to be ψ for the others. As a result, the formula ⋀_i=1^m ϕ'_j is a formula of polynomial size in m and ℓ separating the u_i and v_j. Furthermore, we did not add any operators to the formula, hence the final formula uses the same set of operators as ϕ. All Op ⊆, , , have the short formula property. The proof is nearly identical to the one of Proposition <ref>. We start by pushing theandto the bottom of the formula to obtain a formula of (, ) with atoms of the form ^k a or ^ka. We also use repeatedly equivalence <ref> of Fact <ref> to obtain a formula of the form ψ⋁_i=1^p ϕ_i with each ϕ_i of the form (ψ_1(ψ_2 …ψ_r) …) where ψ and the ψ_i are disjunctions of formulas of the form ^ka or ^k a. Then we note that every disjunction of formulas of the form ^k a and ^ka can be replaced by a disjunction of at most ℓΣ such formulas, equivalent on non-empty words of length at most ℓ (as ^ka ^k a is equivalent to ^ka). We can thus assume that all ψ_i and ψ mentioned above are of polynomial size in ℓ and Σ. No v_j satisfies either ψ or any ψ_r. For each u_i either there exists ϕ_r that is satisfied by u_i or u_i satisfies ψ. In the first case we use Lemma <ref> to turn that ϕ_r into a formula ϕ'_i of polynomial size in ℓ and Σ satisfied by u_i but not by any v_j. If u_i does not satisfy any ϕ_r we set ϕ'_i = ψ. In the end the formula ⋁_i=1^n ϕ'_i is satisfied by all u_i but no v_j, and is of polynomial size in ℓ, n and Σ. All , , , ⊆ Op have the short formula property. Let (P,N) a sample. Consider the formula ϕ = ⋁_i=1^n ϕ_u_i where ϕ_u_i = ⋀_j=1^m-1^j-1 a_j^m-1 a_m with u_i = a_1 ⋯ a_m. If there exist u_i, v_j such thatv_j ∈ u_i a^* with a the last letter of u_i, then there is no separating formula: By Lemma <ref> every formula satisfied by u_i is also satisfied by v_j. 
Otherwise, every u_i satisfies the associated formula ϕ_u_i while no v_j satisfies any of them. Hence ϕ is a separating formula of polynomial size. All Op such that , , ⊆ Op ⊆, , , have the short formula property. Let (P,N) a sample. Consider the formula ϕ = ⋁_i=1^n ϕ_u_i where ϕ_u_i = ⋀_j=1^m^j-1 a_j with u_i = a_1 ⋯ a_m. We distinguish two cases. * If there exist u_i, v_j such that u_i is a prefix of v_j, then there is no separating formula: An easy induction shows that every formula satisfied by u_i is also satisfied by v_j. * Otherwise, every u_i satisfies the associated formula ϕ_u_i while no v_j satisfies any of them. Hence ϕ is a separating formula of polynomial size. For all ϕ, ϕ_1, ϕ_2 ∈(,,,, ), the following equivalences hold: * (ϕ_1 ϕ_2) ≡ϕ_1 ϕ_2 * (ϕ_1 ϕ_2)≡ϕ_1 ϕ_2 * ϕ≡ϕ * ϕ≡ϕ , , and , , have the short formula property. Let (P,N) a sample. Let j ∈ [1,n], if there exists w_j a weak subword of all u_i that is not a weak subword of v_j then we set ψ(w_j) = (a_1(a_2 ⋯ a_k)) with w_j = a_1 ⋯ a_k. This formula is satisfied by exactly the words which have w_j as a weak subword, hence by all u_i but not v_j. We set ϕ_j as ψ(w_j) if it exists and a if all u_i start with an a and v_j does not for some a ∈Σ. If ϕ_j is defined for all j then the formula ⋀_j ϕ_j is a separating formula. Otherwise there exists a v_j such that there are no weak subwords of all the u_i that are not weak subwords of v_j and either the u_i do not start with the same letter or v_j also starts with that letter. In that case we can easily prove by induction that all (, , ) formulas of the form ψ that are satisfied by all u_i are also satisfied by v_j, and as all formulas of the form a with a ∈Σ satisfied by all u_i are also satisfied by v_j, the same can be said about boolean combinations of those two types of formulas, and thus there are no separating formulas in (, , ). Concerning , ,, one can easily infer from Fact <ref> that we can turn any formula of (, , ) into one of (, , ) equivalent to its negation, and vice-versa. Hence the short formula property of , , is a consequence of the one of , ,. , , , has the short formula property. Let (P,N) a sample. For all u_i we set ϕ(u_i) = (a_1(a_2 ⋯ a_k)) with u_i = a_1 ⋯ a_k. This formula is satisfied by exactly the words with u_i as a weak subword. For all v_j we set ϕ(v_j) = (a̅_̅1̅ (a̅_̅2̅⋯a̅_̅k̅)) with v_j = a_1 ⋯ a_k and a̅ = ⋁_b ∈Σ∖a b for all a ∈Σ. This formula is satisfied by exactly the words which do not have v_j as a weak subword. For all pairs (u_i, v_j) we set ψ_i,j as ϕ(u_i) ϕ(v_j), and finally we define ψ as ⋁_i⋀_j ψ_i,j. If this formula does not separate the u_i and v_j then there must exist u_i and v_j which are weak subwords of each other. An easy induction on the formula then shows that u_i and v_j satisfy the same formulas in (,,,), and thus there does not exist any separating formula. § DEGENERATE CASES: THE SHORTEST FORMULA PROPERTY Let Op a set of operators, we say that Op has the shortest formula property if there is a polynomial time algorithm solving the (Op) learning problem and outputting the minimal separating formula if it exists. Let Op such that either Op ⊆, ,, Op ⊆,, Op = ,. Then Op has the shortest formula property, and therefore the (Op) learning problem is in . The same holds for Op = , if the size of the alphabet is fixed. This theorem is the consequence of the following propositions, which establish it for various subsets Op of operators. All Op ⊆, have the shortest formula property. Let (P,N) a sample. 
Let A ⊆Σ be the set of first letters of the u_i. Consider the formula ψ = ⋁_a ∈ A a. We distinguish two cases. * Either ψ does not separate the u_i and v_i, meaning that some u_i and v_j share the same first letter. In that case u_i and v_j satisfy the same formulas of (, ) and thus there is no separating formula. * Or ψ does separate the u_i and v_j. In that case it is also of minimal size: say some element a of A does not appear in a formula ϕ∈(, ), then as it is satisfied by some u_i starting with a, ψ is a tautology and is also satisfied by the v_j.Hence a separating formula has to contain all letters of A, and thus also at least A-1 boolean operators. As a result, ψ is of minimal size. All Op ⊆, , have the shortest formula property. Let ℓ∈, every formula ϕ∈(,,) is equivalent over words of length at most ℓ to a formula of smaller or equal size in⊤, , ^k a, ^ka, ^ka, ^ka | a ∈Σ, 0≤ k < ℓ. This can be shown by observing that over those words, for all ψ, ψ and ψ are equivalent, as well as ψ and . This allows to push alloperators at the top of the formula. Finally, every formula of the form ^k ψ with k ≥ N is equivalent to . As a result we can compute a minimal separating formula by enumerating formulas from this set (of polynomial size) and checking which ones separate the positive and negative words. , has the shortest formula property. An easy induction on ψ shows that all (, ) formulas are equivalent over finite words to a formula of the form ⊤, , a or a with a ∈Σ, i.e., to a formula of size at most 2. As those can be enumerated in polynomial time, we can compute a separating formula of minimal size or conclude that it does not exist in polynomial time. A corollary of our results is that the classification between  and  mostly does not depend upon whether we consider the alphabet as part of the input or not. The only affected subcase is Op= ,, which is -hard when the alphabet is not fixed (Theorem <ref>). , has the shortest formula property when the alphabet is fixed. Sincecommutes with , every formula can be turned into an equivalent disjunction of formulas of the form a or a with a ∈Σ, of polynomial size. There are only 2^2Σ such formulas, hence when given a sample (P,N) one can simply select the formulas that are not satisfied by any v_j, take their disjunction, and check that all u_i satisfy the resulting formula. If they do we have a separating formula, otherwise there cannot exist any. § AN APPROXIMATION ALGORITHM FOR (,) An α-approximation algorithm for learning a fragment ofdoes the following: the algorithm either determines that there are no separating formulas,or constructs a separating formula ϕ which has size at most α· m with m the size of a minimal separating formula. There exists a O(n ·ℓ^2) time log(n)-approximation algorithm for learning (,). Recall the definition of patterns: a word u = u(1) … u(ℓ) and p_1 < p_2 < … < p_ℓ induce the following (,) formula, called a pattern:= ^p_1 - 1(u(1) ∧^p_2 - i_1(⋯∧^p_ℓ - p_ℓ-1 u(ℓ))⋯).It is equivalent to the (larger in size) formula ⋀_i ∈ [1,ℓ]^p_i - 1 u(i), which states that for each i ∈ [1,ℓ], the letter in position p_i is u(i). To determine the size of a patternwe look at two parameters: its last position () = i_p and its width () = p. The size ofis (P) + 2 ((P) - 1). 
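Concretely, a pattern can be stored as its increasing list of (position, letter) pairs. The following sketch (ours, with the paper's 1-based positions) checks satisfaction and computes the size from the two parameters just introduced.

```python
# A pattern is a list [(p_1, c_1), ..., (p_k, c_k)] with p_1 < ... < p_k (1-based positions).
def pattern_satisfies(pattern, w):
    """w satisfies the pattern iff w is long enough and carries letter c_i at position p_i."""
    return all(p <= len(w) and w[p - 1] == c for p, c in pattern)

def pattern_size(pattern):
    """Size of the induced (X, ∧) formula: last position plus twice (width - 1)."""
    lastpos = pattern[-1][0]          # last position
    width = len(pattern)              # width
    return lastpos + 2 * (width - 1)

# Example: the pattern {(1, a), (3, b)} induces a ∧ X X b, of size 5 (two letters, one ∧, two X).
assert pattern_size([(1, 'a'), (3, 'b')]) == 5
assert pattern_satisfies([(1, 'a'), (3, 'b')], "acb")
```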
The two parameters of a pattern, last position and width, hint at the key trade-off we will have to face in learning (,∧) formulas: do we increase the last position, to reach further letters in the words, or the width, to further restrict the set of satisfying words? For every formula ϕ∈(,∧) there exists an equivalent pattern of size smaller than or equal to ϕ. We proceed by induction on ϕ. * Atomic formulas are already a special case of patterns. * If ϕ = ϕ', by induction hypothesis we get a patternequivalent to ϕ', thenis a pattern and equivalent to ϕ. * If ϕ = ϕ_1 ∧ϕ_2, by induction hypothesis we get two patterns _1 and _2 equivalent to ϕ_1 and ϕ_2. We use the inductive definition for patterns to show that _1 ∧_2 is equivalent to another pattern. We focus on the case _1 = ^i_1 (c_1 ∧'_1) and _2 = ^i_2 (c_2 ∧'_2), the other cases are simpler instances of this one. There are two cases: i_1 = i_2 or i_1 ≠ i_2. If i_1 = i_2, either c_1 ≠ c_2 and then _1 ∧_2 is equivalent to false, which is the pattern c_1 ∧ c_2, or c_1 = c_2, and then _1 ∧_2 is equivalent to ^i_1 (c_1 ∧'_1 ∧'_2). By induction hypothesis '_1 ∧'_2 is equivalent to a pattern ', so the pattern ^i_1 (c_1 ∧') is equivalent to _1 ∧_2, hence to ϕ. If i_1 ≠ i_2, without loss of generality i_1 < i_2, then _1 ∧_2 is equivalent to ^i_1 (c_1 ∧'_1 ∧^i_2 - i_1 (c_2 ∧'_2)). By induction hypothesis '_1 ∧^i_2 - i_1 (c_2 ∧'_2) is equivalent to a pattern ', so the pattern ^i_1 (c_1 ∧'). is equivalent to _1 ∧_2, hence to ϕ. Let u_1,…,u_n,v_1,…,v_n a set of 2n words of length at most ℓ. Thanks to Lemma <ref> we are looking for a pattern. For a patternwe define I() = i_q ∈ [1,ℓ] : q ∈ [1,p]. Note that () = max I() and () = (I()). We define the set X = i ∈ [1,ℓ] : ∃ c ∈Σ, ∀ j ∈ [1,n], u_j(i) = c. Note thatsatisfies u_1,…,u_n if and only if I() ⊆ X. Further, given I ⊆ X, we can construct a patternsuch that I() = I andsatisfies u_1,…,u_n: we simply choose c_q = u_1(i_q) = … = u_n(i_q) for q ∈ [1,p]. We callthe pattern corresponding to I. Recall that the size of the patternis () + 2(() - 1). This makes the task of minimising it difficult: there is a trade-off between minimising the last position () and the width (). Let us consider the following easier problem: construct a log(n)-approximation of a minimal separating pattern with fixed last position. Assuming we have such an algorithm, we obtain a log(n)-approximation of a minimal separating pattern by running the previous algorithm on prefixes of length ℓ' for each ℓ' ∈ [1,ℓ]. 1em We now focus on the question of constructing a log(n)-approximation of a minimal separating pattern with fixed last position. For a set I, we write C_I = ⋃Y_i : i ∈ I: the pattern corresponding to I does not satisfy v_j if and only if j ∈ C_I. In particular, the pattern corresponding to I is separating if and only if C_I = [1,n]. The algorithm constructs a set I incrementally through the sequence (I_x)_x ≥ 0, with the following easy invariant: for x ≥ 0, we have C_x = C_I_x. The algorithm is greedy: I_x is augmented with i ∈ X ∖ I_x maximising the number of words added to C_x by adding i, which is the cardinality of Y_i ∖ C_x. 1em We now prove that this yields a log(n)-approximation algorithm. Let _opt a minimal separating pattern with last position ℓ, inducing I_opt = I(_opt) ⊆ [1,ℓ] of cardinal m. Note that C_I_opt = [1,n]. We let n_x = n - |C_x| and show the following by induction on x ≥ 0: n_x+1≤ n_x ·( 1 - 1/m) = n_x ·m - 1/m. We claim that there exists i ∈ X ∖ I_x such that (Y_i ∖ C_x) ≥n_x/m. 
Indeed, assume towards contradiction that for all i ∈ X ∖ I_x we have (Y_i ∖ C_x) < n_x/m, then there are no sets I of cardinal m such that C_I ⊇ [1,n] ∖ C_x, contradicting the existence of I_opt. Thus there exists i ∈ X ∖ I_x such that (Y_i ∖ C_x) ≥n_x/m, implying that the algorithm chooses such an i and n_x + 1≤ n_x - n_x/m = n_x ·( 1 - 1/m). 1em The proved inequality implies n_x ≤ n ·( 1 - 1/m)^x. This quantity is less than 1 for x ≥log(n) · m, implying that the algorithm stops after at most log(n) · m steps. Consequently, the pattern corresponding to I has size at most log(n) · |_opt|, completing the claim on approximation. 1em A naive complexity analysis yields an implementation of the greedy algorithm running in time O(n ·ℓ), leading to an overall complexity of O(n ·ℓ^2) by running the greedy algorithm on the prefixes of length ℓ' of u_1,…,u_n,v_1,…,v_n for each ℓ' ∈ [1,ℓ].§ ALMOST ALL FRAGMENTS WITH THE NEXT OPERATOR ARE HARD TO APPROXIMATEThe (, ∧) learning problem is -hard,and there are no (1 - o(1)) ·log(n) polynomial time approximation algorithms unless =, even for a single positive word. Note that Theorem <ref> and Theorem <ref>yield matching upper and lower bounds on approximation algorithms for learning (,).The hardness result stated in Theorem <ref> follows from a reduction to the set cover problem, that we define now. The set cover decision problem is: given S_1,…,S_ℓ subsets of [1,n] and k ∈, does there exists I ⊆ [1,ℓ] of size at most k such that ⋃_i ∈ I S_i = [1,n]? In that case we say that I is a cover.An α-approximation algorithm returns a cover of size at most α· k where k is the size of a minimal cover. The following results form the state of the art for solving exact and approximate variants of the set cover problem.The set cover problem is -complete,and there are no (1 - o(1)) ·log(n) polynomial time approximation algorithms unless =.We construct a reduction from set cover.Let S_1,…,S_ℓ subsets of [1,n] and k ∈.Let us consider the word u = a^ℓ + 1, and for each j ∈ [1,n] and i ∈ [1,ℓ], writing v_j(i) for the ith letter of v_j:v_j(i) = bifj ∈ S_i, aifj ∉ S_i,and we set v_j(ℓ+1)=a for any j ∈ [1,n]. We also add v_n+1 = a^ℓ b.We claim that there is a cover of size k if and only ifthere is a formula of size ℓ + 2k - 1 separating u from v_1,…,v_n+1.Thanks to Lemma <ref> we can restrict our attention to patterns, i.e formulas of the form (we adjust the indexing for technical convenience)ϕ = ^i_1 - 1(c_1 ∧^i_2 - i_1(⋯∧^i_p+1 - i_p c_p+1)⋯),for some positions i_1 ≤…≤ i_p+1 and letters c_1,…,c_p+1∈Σ. If ϕ satisfies u, then necessarily c_1 = … = c_p+1 = a. This implies that if ϕ does not satisfy v_n+1, then necessarily i_p+1 = ℓ + 1.We associate to ϕ the set I = i_1 ≤…≤ i_p. It is easy to see that ϕ is equivalent to ⋀_q ∈ [1,p]^i_q - 1 a ∧^ℓ a, and the size of ϕ is ℓ + 1 + 2 (|I|-1).By construction, ϕ separates u from v_1,…,v_n+1 if and only if I is a cover. Indeed, I is a cover if and only if for every j ∈ [1,n] there exists i ∈ I such that j ∈ S_i, which is equivalent tofor every j ∈ [1,n] we have v_j ϕ. We can extend the previous hardness result to all sets of operators Op such that ,⊆ Op ⊆,,,,, by reducing their learning problems to the previous one.The reduction consists in transforming the input words so that theandoperators are essentially useless.Theoperator is also useless since we only have one positive word, implying that the minimal separating formulas in (Op) and (,) are in fact the same.The first step is a reduction lemma for disjunctions. 
For all ϕ∈(,,), for all u,v_1,…,v_n, if ϕ separates u from v_1,…,v_n, then there exists ψ∈(, ) such that ψ≤ϕ which separates u from v_1,…,v_n. We define D(ϕ) ⊆(,) by induction: * If ϕ = c then D(ϕ) = c. * If ϕ = ϕ_1 ∧ϕ_2 then D(ϕ) = ψ_1 ∧ψ_2 : ψ_1 ∈ D(ϕ_1), ψ_2 ∈ D(ϕ_2). * If ϕ = ϕ_1 ∨ϕ_2 then D(ϕ) = D(ϕ_1) ∪ D(ϕ_2). * If ϕ = ϕ' then D(ϕ) = ψ : ψ∈ D(ϕ'). * If ϕ = ϕ' then D(ϕ)= ψ : ψ∈ D(ϕ'). Observe that all formulas of D(ϕ) are of size at most ϕ, and are in (, ). We now show that for all ϕ, for all u,v_1,…,v_n, if ϕ separates u from v_1,…,v_n then there exists ψ∈ D(ϕ) separating them, which proves the lemma. We proceed by induction on ϕ. * If ϕ = c this is clear. * If ϕ = ϕ_1 ∧ϕ_2 then D(ϕ) = ψ_1 ∧ψ_2 : ψ_1 ∈ D(ϕ_1), ψ_2 ∈ D(ϕ_2). Since ϕ separates u from v_1,…,v_n, there exists I_1,I_2 ⊆ [1,n] such that I_1 ∪ I_2 = [1,n], ϕ_1 separates u from v_i : i ∈ I_1, and ϕ_2 separates u from v_i : i ∈ I_2. By induction hypothesis applied to both ϕ_1 and ϕ_2 there exists ψ_1 ∈ D(ϕ_1) separating u from v_i : i ∈ I_1 and ψ_2 ∈ D(ϕ_2) separating u from v_i : i ∈ I_2. It follows that ψ_1 ∧ψ_2 separates u from v_1,…,v_n, and ψ_1 ∧ψ_2 ∈ D(ϕ). * If ϕ = ϕ_1 ∨ϕ_2 then D(ϕ) = D(ϕ_1) ∪ D(ϕ_2). Since ϕ separates u from v_1,…,v_n, either ϕ_1 or ϕ_2 does as well; without loss of generality let us say that ϕ_1 separates u from v_1,…,v_n. The induction hypothesis implies that ψ_1 ∈ D(ϕ_1) separates u from v_1,…,v_n, and ψ_1 ∈ D(ϕ). * The case ϕ = ϕ' follows directly by induction hypothesis. For all ,⊆ Op ⊆, , , ,, there is a polynomial-time reduction from the (X,) learning problem for a single positive word to the (Op) one over the same alphabet. We recall two (folklore) facts about . They can easily be proven by induction on the formula. For all w_1 ∈Σ^*, w_2 ∈Σ^+, N ∈, and ϕ∈ with ϕ≤ N, then w_1w_2^N ϕ if and only if w_1w_2^N+1ϕ. For all w_1 ∈Σ^*, N ∈, and ϕ∈(, , ) with at most N+1 operatorsin ϕ, then w_1 ϕ if and only if w_1Nϕ.We now proceed to the proof of Proposition <ref>. Let u, v_1, …, v_n ∈Σ^* be words, all of length ℓ. Let a ∈Σ, we set M as the size of the formula ψ_u = ⋀_i=0^u-1 X^i u_i, u' = u a^M, for all i, we define v'_i = v_i a^M, u = (u' v'_1 ⋯ v'_n)^M+1 ; v_i = v'_i ⋯ v'_n (u' v'_1 ⋯ v'_n)^M. The formula ψ_u separates u from v_1,…, v_n unless one of the v_i is equal to u. This can be checked in polynomial time, and in that case we can answer no to the learning problem immediately. In all that follows we assume that ψ_u separates u from the v_i. Let ϕ∈(Op) be a formula of minimal size separating u from the (v_i)_1≤ i≤ n. Note that ψ_u is satisfied by u but not by any v_i, thus since Op containsand , ϕ exists and ϕ≤ϕ_u = M. We first show that ϕ contains noor . Let C ∈(,,) be a context with free variables x_1, …, x_m (each appearing exactly once in C) such that ϕ = C[ψ_1 → x_1, …, ψ_m → x_m] for some ψ_i ∈(Op) of the form either ψ' or ψ'. As theoperator commutes with all the others, we can push theto the variables in C. Hence there exist a boolean context B with free variables y_1, …, y_m (each appearing exactly once in B) and j_1, … , j_m ∈ such that B[X^j_1 x_1 → y_1, …, X^j_m x_m → y_m] is equivalent to C. Furthermore, the formulas B and C have the same depth, thus as C is of size at most M, we have j_p ≤ M for all p. Note that B does not contain any negation, thus if u did not satisfy some X^j_pψ_p, we would have that u satisfies B[X^j_1ψ_1 → y_1, …,X^j_m→ y_p, … X^j_mψ_m → y_m], while no v_i satisfies it, since they do not satisfy B[X^j_1ψ_1 → y_1, …, X^j_mψ_m → y_m]. 
As a result, C[ψ_1 → x_1, …, → x_p, …,ψ_m → x_m] would be satisfied by u but no v_i, contradicting the minimality of ϕ. Hence u satisfies all X^j_pψ_p. Let 1 ≤ p ≤ m, let 1≤ i≤ n, we show that v_i satisfies ^j_pψ_p. We distinguish two cases: * ψ_p = ψ'. Since v_i is a suffix of u of length greater than M, v_ij_p is not empty and is a suffix of uj_p. As the latter satisfies ψ', so does the former. Hence v_i ^j_pψ_p. * ψ_p = ψ'. We have that uj_p = (u'j_p v'_1 ⋯ v'_n) (u' v'_1⋯ v'_n)^M satisfies ψ_p. By Fact <ref>, as ψ_p is of size at most ψ≤ M, (u'j_p v'_1 ⋯ v'_n) (u^M v_1⋯ u^M v_n)^M-1 satisfies ψ_p as well. As the latter is a suffix of v_ij_p, v_ij_p also satisfies the formula. Hence v_i ^j_pψ_p. Now consider the formula ϕ' = C[⊤→ x_1, …, ⊤→ x_m], which is equivalent to B[^j_1⊤→ y_1, …, ^j_m⊤→ y_m]. As all v_i satisfy all ψ_p but not ϕ, no v_i satisfies ϕ'. Further, as u satisfies ϕ and all ψ_p, it also satisfies ϕ'. This contradicts the minimality of ϕ, unless m=0. As a result, ϕ does not contain anyor . Thus the minimal (Op) formula separating u and the v_i is the same as the minimal (Op ∩,, ) separating them. By Fact <ref>, we have that this formula is also the minimal formula separating u' and the v'_i. Clearly any (Op ∩,, ) formula separating u and the v_i also separates those. By Lemma <ref>, there exists ϕ' ∈(,) with ϕ'≤ϕ separating u' and the v_iu^M-1, of the form X^i_1-1(c_1 ^i_2-i_1 (⋯^i_p-i_p-1 c_p) ⋯) with 0 < i_1 < … < i_p and c-1, …, c_p ∈Σ. As u' and the v'_i are equal after the first u letters, by minimality of ϕ, we have i_p < u. As a result, by Fact <ref>, ϕ' separates u and the v_i. We conclude that the minimal size of a formula of (Op) separating u and the v_i is the same as the minimal size of a formula of (, ) separating u and the v_i. This completes the reduction. For all ,⊆ Op ⊆, , , ,, there is a polynomial-time reduction from the (, ∨) learning problem for a single negative word to the (Op) one over the same alphabet. The proof is identical to the one of Proposition <ref>, with the roles of positive and negative words reversed and disjunctions and conjunctions reversed. § ALMOST ALL FRAGMENTS WITHOUT THE NEXT OPERATOR ARE HARD TO APPROXIMATE§.§ A study of (,∧)As we will see, (,∧) over an alphabet of size 2 is very weak. This degeneracy vanishes when considering alphabets of size at least 3. Instead of defining a normal form as we did for (,∧)we characterise the expressive power of (,∧) and construct for each property expressible in this logic a minimal formula. For every formula ϕ∈(,∧), either it is equivalent to false or there exists a finite set of non-repeating words w_1,…,w_p and c ∈Σ∪ε such that for every word z,z ϕ if and only if for all q ∈ [1,p], w_q is a subword of z,and z starts with c. We proceed by induction over ϕ. * For the atomic formula c ∈Σ, the property is satisfied using the empty set of words and c. * If ϕ = ϕ', by induction hypothesis we get w_1,…,w_p and c for ϕ'. We let w'_i = cw_i if w_i(1) ≠ c and w'_i = w_i otherwise, then z ϕ if and only iffor all q ∈ [1,p], w'_q is a subword of z and z starts with ε (the latter condition is always satisfied). * If ϕ = ϕ_1 ∧ϕ_2, by induction hypothesis we get w^1_1,…,w^1_p_1, c_1 for ϕ_1 and w^2_1,…,w^2_p_2, c_2 for ϕ_2. There are two cases. If c_1 and c_2 are non-empty and c_1 ≠ c_2 then ϕ is equivalent to false. Otherwise, either both are non-empty and equal or at least one is ε, say c_2. 
In both cases, u ϕ if and only if for all (e,q) ∈ (1,[1,p_1]) ∪ (2,[1,p_2]), w^e_q is a subword of u and u starts with c_1.Lemma <ref> gives a characterisation of the properties expressible in (,∧). It implies that over an alphabet of size 2 the fragment (,∧) is very weak. Indeed, there are very few non-repeating words over the alphabet Σ = a,b: only prefixes of abab … and baba …. This implies that formulas in (,∧) over Σ = a,b can only place lower bounds on the number of alternations between a and b (starting from a or from b) and check whether the word starts with a or b. In particular, the (,∧) learning problem over this alphabet is (almost) trivial and thus not interesting. Hence we now assume that Σ has size at least 3.1em We move back from semantics to syntax, and show how to construct minimal formulas. Let w_1,…,w_p a finite set of non-repeating words and c ∈Σ∪ε,we define a formula ϕ as follows.The set of prefixes of w_1,…,w_p are organised in a forest (set of trees):a node is labelled by a prefix w of some w_1,…,w_p, and its children are the words wc which are prefixes of some w_1,…,w_p. The leaves are labelled by w_1,…,w_p. We interpret each tree t as a formula ϕ_t in (,∧) as follows, in an inductive fashion: for c ∈Σ, if t is labelled w a with subtrees t_1,…,t_q, thenϕ_t = ( c ∧⋀_i ϕ_t_i).If c = ε, the formula associated to w_1,…,w_p and c is the conjunction of the formulas for each tree of the forest, and if c ∈Σ, then the formula additionally has a conjunct c.As an example, consider the set of words ab, ac, bab, and the letter a. The forest corresponding to ab, ac, bab contains two trees:one contains the nodes b, ba, bab, and the other one the nodes a, ab, ac. The two corresponding formulas are(b ∧(a ∧ b)); (a ∧ b ∧ c).And the formula corresponding to the set of words ab, ac, bab, and the letter a isa ∧(b ∧(a ∧ b))∧ (a ∧ b ∧ c).For every non-repeating words w_1,…,w_p and c ∈Σ∪ε, the formula ϕ constructed above is minimal, meaning there are no smaller equivalent formulas. Applying the construction above to a single non-repeating word w = c_1 … c_p we obtain what we call a “fattern”(pattern with an F):F =(c_1 ∧(⋯∧ c_p)⋯),We say that the non-repeating word w induces the fattern F above,and conversely that the fattern F induces the word w. The size of a fattern F is 3 |w| - 1.Adding the initial letter we obtain a grounded fattern c ∧ F, in that case the letter c is added at the beginning of w and the size is 3 |w| - 2. Let u_1,…,u_n,v_1,…,v_n. If there exists ϕ∈(,∧) separating u_1,…,u_n from v_1,…,v_n, then there exists a conjunction of at most n fatterns separating u_1,…,u_n from v_1,…,v_n. Thanks to Lemma <ref>, to the separating formula ϕwe can associate a finite set of non-repeating words w_1,…,w_p and c ∈Σ∪ε such that for every word z,z ϕ if and only if for all q ∈ [1,p], w_q is a subword of z,and z starts with c.Let j ∈ [1,n], since v_j does not satisfy ϕ either v_j does not start with c or for some q ∈ [1,p] the word w_q is not a subword of u. For each j ∈ [1,n] such that v_j starts with c, we pick one q_j ∈ [1,p] for which w_q_j is not a subword of v_j, and consider the set w_q_j : j ∈ [1,n] together with c ∈Σ∪ε. 
The formula induced by the construction above is a conjunction of at most n fatterns and it separates u_1,…,u_n from v_1,…,v_n.§.§ Hardness result when the alphabet is part of the inputThe (, ∧) learning problem is -hard when the alphabet is part of the input,and there are no (1 - o(1)) ·log(n) polynomial time approximation algorithms unless =, even with a single positive word. The result follows from a reduction from the hitting set problem.For proving the correctness of the reduction we will need a normalisation lemma specialised to the case of a single positive word. Let u,v_1,…,v_n. If there exists ϕ∈(,∧) separating u from v_1,…,v_n, then there exists a fattern of size smaller than or equal to ϕ separating u from v_1,…,v_n. Thanks to Lemma <ref>, to the separating formula ϕwe can associate a finite set of non-repeating words w_1,…,w_p and c ∈Σ∪ε such that for every word z,z ϕ if and only if for all q ∈ [1,p], w_q is a subword of z,and z starts with c.Since u satisfies ϕ, it starts with c and for all q ∈ [1,p], w_q is a subword of u. For each q ∈ [1,p] there exists ϕ_q mapping the positions of w_q to u. Let us write w for the word obtained by considering all positions mapped by ϕ_q for q ∈ [1,p]. By definition w is a subword of u, and for all q ∈ [1,p] w_q is a subword of w. It follows that the fattern induced by w separates u from v_1,…,v_n. The size of w is at most the sum of the sizes of the w_q for q ∈ [1,p], hence the fattern induced by w is smaller than the original formula ϕ. We can now prove Theorem <ref>. We construct a reduction from the hitting set problem. Let C_1,…,C_n subsets of [1,m] and k ∈. Let us consider the alphabet [0,m], we define the word u = 0 1 2 … m. For each j ∈ [1,n] we let [1,m] ∖ C_j = a_j,1 < … < a_j,m_j,and define v_j = 0 a_j,1… a_j,m_j.We claim that there exists a hitting set of size at most k if and only ifthere exists a formula in (,∧) of size at most 3k - 1 separating u from v_1,…,v_n.1em Let H = c_1,…,c_k a hitting set of size k with c_1 < c_2 < … < c_k, we construct the (non-grounded) fattern induced by w = c_1 … c_k, it separates u from v_1,…,v_n and has size 3k - 1.Conversely, let ϕ a formula in (,∧) of size 3k - 1 separating u from v_1,…,v_n. Thanks to Lemma <ref> we can assume that ϕ is a fattern, let w = c_1 … c_k the non-repeating word it induces. Necessarily c_1 < c_2 < … < c_k. If ϕ is grounded then c_1 = 0, but then the (non-grounded) fattern induced by c_2 … c_k is also separating, so we can assume that ϕ is not grounded. We let H = c_1,…,c_k, and argue that H is a hitting set. Indeed, H is a hitting set if and only iffor every j ∈ [1,n] we have H ∩ C_j ≠∅, which is equivalent tofor every j ∈ [1,n] we have v_j ϕ; indeed for c_i ∈ H ∩ C_j by definition c_i does not appear in v_j so v_jc_i.§.§ Hardness result for fixed alphabet The most technical result in this paper is the following theorem, which strengthens Theorem <ref> by restricting to an alphabet of size 3. For all ,⊆ Op ⊆, , , ,, the learning problem for (Op) is -hard even for an alphabet of size 3. Its main difficulty stems from the use of a fixed alphabet, combined with the absence of , which forbids us from pointing at a specific position. To circumvent this problem we will adapt the previous reductions from hitting set: we again use a single positive example u against several negative ones v_1, …, v_n. Each of those words is a sequence of factors abab⋯ (one for each subset) separated by a letter c. We introduce small differences between the factors in the v_i and in u. 
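The reduction above is straightforward to mechanise. The sketch below (ours; the names and the representation of words as integer lists are our own) builds the sample from a hitting set instance over [1,m] and verifies that a candidate set H yields a separating fattern exactly when H hits every C_j; the induced (non-grounded) fattern has size 3|H| - 1.

```python
# A sketch (ours) of the reduction above: C_1, ..., C_n ⊆ [1, m], alphabet [0, m].
def build_sample(m, sets):
    u = list(range(0, m + 1))                                         # u = 0 1 2 ... m
    negatives = [[0] + sorted(set(range(1, m + 1)) - set(C)) for C in sets]
    return u, negatives

def is_subword(x, w):
    """x occurs in w at strictly increasing positions (all words here are non-repeating)."""
    it = iter(w)
    return all(s in it for s in x)

def hitting_set_gives_separator(H, m, sets):
    """The fattern induced by sorted(H) separates u from the v_j iff H is a hitting set."""
    u, negatives = build_sample(m, sets)
    word = sorted(H)
    return is_subword(word, u) and not any(is_subword(word, v) for v in negatives)

# Example: C_1 = {1, 2}, C_2 = {2, 3}; H = {2} hits both, H = {1} does not.
assert hitting_set_gives_separator({2}, 3, [{1, 2}, {2, 3}])
assert not hitting_set_gives_separator({1}, 3, [{1, 2}, {2, 3}])
```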
We ensure that detecting those differences is costly in terms or operators, and thus that a minimal formula has to “talk about” as few of them as possible. The main difficulty in the proof is showing that we do not have more efficient ways to separate the words. Much like in the previous hardness proofs, the minimal formula has to select a minimal amount of positions (here factors) in the words that are enough to distinguish u from the v_i. Those positions answer the hitting set problem. We temporarily allow the negation operator as it does not make the proof more difficult and allows us to obtain a dual version of Theorem <ref> easily.We consider the alphabet Σ = a,b,c. Let S = 1, …, m, let T_1, …, T_n ⊆ S, let k' ∈. We set M= 3m+2, u = ((ab)^M+1c)^m and for all i, v_i = c w_i,1c w_i,2 c ⋯ w_i,m c withw_i,j = (ab)^M ifj ∈ T_i, (ab)^M+1otherwise. Let k be size of a minimal hitting set. There exists a separating formula in (,) of size at most 6kM + 9m + 2. Let H be a hitting set of size k. We define for i ∈ [1,m], z_j = (ab)^M+1ifi ∈ H, ab otherwise. and w = c z_1 c z_2 c ⋯ z_m c. Let us write w = x_1⋯ x_p where x_1,…,x_p are letters, and define the formula ψ = (x_1(x_2 (x_3 (… x_p) ))). This formula has size 3p -1 = 3(2kM + 3m + 1) -1≤ 6kM + 9m + 2. We claim that ϕ is a separating formula. First, note that since all letters in w are non-repeating, ψ is satisfied by exactly the words which have w as a subword. * Since w is a subword of u, we have that u satisfies w. * Let j ∈ [1,n]. Since v_j and w contain the same number of c's, for w to be a subword of v_j, z_i needs to be a subword of w_i,j for all i, i.e., we need to have y_i,j = (ab)^M+1 for all j ∈ H. However, as H is a hitting set of (C_j)_1 ≤ j ≤ n, there exists j ∈ H such that i ∈ C_j and thus w_i,j = (ab)^M. The rest of the proof is about proving the converse implication: Let ϕ be an (, , , , ) formula separating u and the v_i, then we have k(6M-3)≤ϕ, and thus, as ϕ≤ϕ, this implies k(6M-3) ≤ϕ. We define a separation tree of ϕ as a finite tree with each node x labelled by a subformula ψ^x of ϕ, a set of indices I^x ⊆1, …, n, a family of numbers (J^x_i)_i ∈ I^x with J^x_i ∈0, …, m and two families of words (P^x_i)_i ∈ I, (V^x_i)_i ∈ I with P^x_i ∈ϵ, a, b, ab, ba and V^x_i a suffix of w_i,J^x_i for all i (with w_i,0 = ϵ for all i). This tree and labelling must respect the following rules. Those rules should be understood as follows: imagine two players, one trying to prove that u satisfies the formula ϕ but the v_i do not and the other trying to prove the contrary. The game is defined by induction on the formula. For instance, if ϕ is of the form ψ, then the first player will choose a suffix of u on which she claims ψ to be satisfied, and the second player will do the same for each v_i. A separation tree describes a play of this game in which the second player follows a specific strategy: he tries to copy everything the first player does, and to take a suffix that is as similar as possible to the current one of the first player.Here are the rules: * If ψ^x = ψ_1 ψ_2 or ψ = ψ_1 ψ_2 then x has two children x_1 and x_2 and ψ^x_1 = ψ_1, ψ^x_2 = ψ_2 and I^x is the disjoint union of I^x_1 and I^x_2, and (J_i^x_1, P_i^x_1, V_i^x_1) = (J_i^x, P_i^x, V_i^x) for all i ∈ I^x_1 and (J_i^x_2, P_i^x_2, V_i^x_2) = (J_i^x, P_i^x, V_i^x)for all i ∈ I^x_2. * If ψ_x = ψ then x has one child y, ψ^y=ψ, I^y = I^x and either J_i^y = J_i^x and V_i^y is a suffix of V_i^x or 0 ≤ J_i^y < J_i^x and V_i^y is a suffix of w_i,J_i^y. In both cases P_i^y=ϵ. 
* If ψ_x is a letter then x is a leaf and all P_i^x V_i^x c start with that letter but none of the V_i^x c do. * If ψ_x is the negation of a letter then x is a leaf and all V_i^x c start with that letter but none of the P_i^x V_i^x c do. * If ψ_x = ψ with ψ a boolean combination of formulas all of the form ψ' or ψ' then x has one child y, ψ^y = ψ and I^y = I^x. Furthermore, there exists (J, U) such that for all i ∈ I^x eitherJ^x_i= J and U is a suffix of P^x_i V^x_i or J^x_i < J and U is a suffix of (ab)^M+1. For all i we have J^y_i = J. We have V^y_i equal to the shortest of U and V^x_i if J^x_i = J^y_i and to the shortest of U and w_i, C^y_i otherwise. In the first case U is a suffix of P^x_i V^x_i and in the second it is a suffix of abw_i,J. Thus in all cases we can select P^y_i ∈ϵ, a, b, ab, ba such that U = P^x_iV^x_i. * If ψ_x = ψ with ψ a boolean combination of formulas all of the form ψ' or ψ' then x has one child y, ψ^y = ψ and I^y = I^x. Furthermore, there exists (J, U) such that for all i ∈ I^x eitherJ^x_i= J and U is a suffix of P^x_i V^x_i or J^x_i < J and U is a suffix of (ab)^M+1. For all i we have J^y_i = J. For the values of P^y_i and V^y_i we have several cases: → if J^y_i > J_i^x then V^y_i = U and P^y_i = ϵ if U is a suffix of w_i, J. Otherwise we have w_i,J = (ab)^M and either U = (ab)^M+1, in which case P_i^y = ab and V_i^y = (ab)^M, or U = b(ab)^M, and then P_i^y = ba and V_i^y = b(ab)^M-1. → if J^y_i = J_i^x and U is a suffix of V_i^x then V^y_i = U and P^y_i = ϵ. → if J^y_i = J_i^x and U=P^x_i V^x_i then P_i^y = P_i^x and V_i^y = V^x_i. → if J^y_i = J_i^x and U = a V^x_i and V^x_i > 1 then P_i^y = ab. Similarly if U = b V^x_i and V^x_i > 0 then P_i^y = ba. In both cases V_i^y is such that U = P^y_i V^y_i. → if J^y_i = J_i^x and U = a V^x_i and V^x_i≤ 1 or if U = b V^x_i and V^x_i = 0 then i ∉ I^y. This also defines I^y. If ϕ is satisfied by u but none of the v_i then there exists a separation proof tree for ϕ with a root r such that I^r = 1,…, n and for all i ∈ I^r, J^r_i = 0 and P^r_i = V^r_i = ϵ. We prove the following statement: Let ψ∈(, , , , ), let I ⊆1,…,n, let (J_i)_i ∈ I be a family of indices in 0,…,m, and let (P_i)_i ∈ I and (V_i)_i ∈ I be families of words such that for all i ∈ I, we have P_i ∈ϵ, a, b, ab, ba, V_i is a suffix of w_i, J_i and P_iV_i is a suffix of (ab)^M+1. If ψ is satisfied by all u_i = P_i V_i c ((ab)^M+1 c)^m-J_i but none of the v_i = V_i c w_i,J_i+1c ⋯ w_i,m c, then there exists a separation tree with a root r labelled by I^r = I, ψ^r=ψ and (J_i, P_i, V_i)_i ∈ I. This implies the lemma. We proceed by induction on ψ. Case 1: If ψ is a letter then all u_i start with that letter but none of the v_i do, thus the same can be said of the P_i V_i c and V_i c. Case 2: Similarly if ψ is the negation of a letter then all v_i start with that letter but none of the u_i do, thus the same can be said of the V_i c and P_i V_i c. Case 3: If ψ = ψ_1 ψ_2 then we set I_1 = i ∈ I | u_i ψ_1 and I_2 = I ∖ I_1. As all u_i satisfy ψ, all u_i with i ∈ I_2 satisfy ψ_2. As no v_i satisfies ψ, they satisfy neither ψ_1 nor ψ_2. By induction hypothesis there exists a separation tree with a root r_1 (resp. r_2) labelled by I_1 (resp. I_2), ψ_1 (resp. ψ_2) and (J_i, P_i, V_i)_i ∈ I_1 (resp. (J_i, P_i, V_i)_i ∈ I_2). The tree with a root r labelled by I, ψ and (J_i, P_i, V_i)_i ∈ I and with those two subtrees as children is a separation tree. We can proceed similarly when ψ^x = ψ_1 ψ_2 by setting I_1 = i ∈ I | v_i ⊭ψ_1. 
Case 4: If ψ = ψ' then for all i there exists a suffix v'_i of v_i not respecting ψ'. We set J'_i = m - C'_i with C'_i the number of c in v' and V'_i the largest prefix of v'_i without c. We also set P'_i = ϵ. As u_i satisfies ψ' and as u'_i = V_ic((ab)^M+1c)^m-J_i is a suffix of u_i, u'_i satisfies ψ' By induction hypothesis, there exists a separation tree t with a root labelled by I, ψ' and (J'_i, P'_i, V'_i)_i ∈ I. As a result, there exists a separation tree for ψ, obtained by taking a root labelled by I, ψ' and (J'_i, P'_i, V'_i)_i ∈ I and giving it one child whose subtree is t. Case 5: If ψ = ψ' then let u' be the shortest suffix of u satisfying ψ'. As all u_i are suffixes of u satisfying ψ, u' is a suffix of all u_i. Let J' = m - C' with C' the number of c in u' and U' the largest prefix of u' without c. For all i ∈ I we set J'_i = J'. We will define a set I' ⊆ I along the way. For all i ∈ I, if J' = J_i and U' is a suffix of V_i, or if J' > J_i and U' is a suffix of w_i,J', then we can set V'_i = U' and P'_i = ϵ. Then v'_i = V'_icw_i,J'+1⋯ cw_i,mc is a suffix of v_i and thus does not satisfy ψ' (as v_i does not satisfy ψ'). Otherwise, we have to distinguish cases according to the shape of ψ'. Case 5.1:Suppose ψ' is a boolean combination of formulas of the form ψ” or ψ”. If J_i = J' then we set V'_i = V_i. In that case U' is a suffix of P_iV_i and V_i is a suffix of U_i, hence there exists P'_i ∈ϵ, a, b, ab, ba such that U' = P'_i V'_i. As v_i = v'_i does not satisfy ψ', it does not satisfy ψ' either. If J_i > J' then we set V'_i = w_i, J'. In that case U' is a suffix of (ab)^M+1 but not of w_i,J', hence there exists P'_i ∈ϵ, a, b, ab, ba such that U' = P'_i V'_i. As v_i does not satisfy ψ', and as v'_i = V'_i c w_i,J'+1⋯ cw_i,mc is a suffix of v_i, v'_i does not satisfy ψ'. Case 5.2:Now suppose ψ' is a boolean combination of formulas of which at least one is a letter or its negation. If J_i = J' and U' starts with aba then we set P'_i = ab and V'_i such that U' = abV'_i. As U' is a suffix of P_i V_i and P_i≤ 2, V'_i is a suffix of V_i. As a result, v'_i = V'_i c w_i,J'+1⋯ cw_i,mc is a suffix of v_i, hence it does not satisfy ψ'. Similarly, if J_i = J' and U' starts with bab then we set P'_i = ba and V'_i such that U' = baV'_i. Again, as U' is a suffix of P_i V_i and P_i≤ 2, V'_i is a suffix of V_i. As a result, v'_i = V'_i c w_i,J'+1⋯ cw_i,mc is a suffix of v_i, hence it does not satisfy ψ'. If J_i = J' and U'≤ 2 then i ∉ I'. If J_i < J' then as U' is not a suffix of w_i, J' we must have w_i, J' = (ab)^M and U' is either (ab)^M+1 or b(ab)^M. If U' = (ab)^M+1 then we set P'_i = ab and V'_i = (ab)^M. As v'_i = V'_i c w_i,J'+1⋯ cw_i,mc is a suffix of v_i, it does not satisfy ψ'. Similarly if U' = b(ab)^M then we set P'_i = ba and V'_i = b(ab)^M-1. Since v'_i = V'_i c w_i,J'+1⋯ cw_i,mc is a suffix of v_i, it does not satisfy ψ'. We can then apply the induction hypothesis to obtain a separation tree t whose root is labelled by I', ψ' and (J'_i, P'_i, V'_i)_i ∈ I'. We obtain the result by taking a tree whose root is labelled by I, ψ and (J_i, P_i, V_i)_i ∈ I and giving it one child whose subtree is t. This concludes our induction. If there exists a separation proof tree for ϕ then ϕ≥ 6k(M-1). We make some key observations. Let x be a node labelled by some letter or its negation. Then: * either x only has ancestors labelled by formulas of the form ϕ_1 ϕ_2 or ϕ_1 ϕ_2, in which case P_i^x V_i^x c and V_i^x c both start with c. * or x has an ancestor labelled by a formula of the form ϕ or ϕ. 
Let y be its closest such ancestor, and z the only child of y. Then ψ^y is of the form ψ^z or ψ^z with ψ^z a boolean combination of formulas including ψ^x (which is a letter or its negation). Hence for all i ∈ I^z we have that P^z_i V^z_i c and V^z_i c start with the same letter. Moreover as all nodes from z to x are labelled by formulas of the form ψ_1 ψ_2 or ψ_1 ψ_2, we have I^x ⊆ I^z and P^x_i V^x_i c = P^x_i V^x_i c and V^z_i c = V^x_i c for all i ∈ I^x. As a result, by definition of a separation tree, we must have I^x = ∅.Thus for all node x, if x is a leaf then I^x = ∅. Furthermore, if x has two children y and z then I^x is the disjoint union of I^y and I^z, and if x has a single child y then I^y ⊆ I^x. Those facts are direct consequences of the definition of separation tree.1em The second key observation is that if y is a child of x and J^x_i = J^y_i and P^y_i ≠ϵ for some i then P^x_i ≠ϵ and V_i^y≥V_i^x -1. If furthermoreV_i^y = V_i^x -1 then ψ^x is of the form ψ^y with ψ^y a boolean combination of formulas including at least one letter or its negation.1em The third observation is that if y is a child of x and J^x_i < J^y_i and P^y_i ≠ϵ for some i then V_i^y≥ 2M -1 and w_i,J_i^y = (ab)^M. 1em Let i∈1,…,n. In light of the previous observations, there is a branch of the tree x_1, …, x_k+1 (x_j+1 being a child of x_j for all j) such that i is in I^x_j for all j ≤ k but not in I^x_k+1. As i ∈ I^x_k but i ∉ I^x_k+1, we have J_i^x_k+1 = J_i^x_k and V_i^x_k≤ 1 and P_i^x_k≠ϵ. Let j be the minimal index such that J_i^x_j = J_i^x_k+1. As the sequence of J_i^x_ℓ is non-increasing, all J_i^x_ℓ are equal for j ≤ℓ≤ k+1. Thanks to the previous remarks, we infer that P_i^x_ℓ≠ϵ for all j ≤ℓ≤ k+1. Therefore x_j cannot be the root, hence j>1. As a consequence, by our third remark, we have V_i^x_j≥ 2M-1 and w_i, J_i^x_j = (ab)^M. Furthermore V_i^x_k≤ 1. Our second remark allows us to conclude that there exist at least 2M-2 nodes x_ℓ such that ψ^x_ℓ is of the form ψ^x_ℓ+1 with ψ^x_ℓ+1 a boolean combination of formulas including at least one letter or its negation. For each such ℓ, x_ℓ+1 and the leaf corresponding to that letter (or its negation) are all labelled by J_i^x_ℓ.Hence for all i there exists J_i such that w_i,J_i = (ab)^M and J_i^x = J_i for at least 6M-6 distinct nodes in the separation tree.As w_i,J_i = (ab)^M, we have i ∈ T_J_i for all i, hence the set of J_i is a solution to the set cover problem. As a result, there are at least k distinct J_i, and thus there are at least 6k(M-1) nodes in the separation tree. As the size of the separation tree is exactly ϕ, we obtain the result. We may now finish the proof of Theorem <ref>.Recall that we considered an input S = 1,…, m, T_1, …, T_n ⊆ S and k' ∈ (the encoding is irrelevant). We set k to be the size of a minimal hitting set of H ⊆ S hitting all T_i, and M = 3m+2.Let ϕ be a formula of minimal size satisfied by u but not by any v_i.By Proposition <ref> and Claim <ref>, we have6kM-3k ≤ϕ≤ 6kM + 9m + 2. We set K = 6k'M + 9m +2 and show that k' ≥ k if and only if ϕ≤ K. * Suppose k' ≥ k, then K ≥ 6kM + 9m + 2 ≥ϕ. * Suppose k' ≤ k - 1, then K ≤ 6(k-1)M + 9m + 2 = 18km -9m +12k - 10≤ (6kM - 3k) - 9m +3k - 10 < 6kM - 3k ≤ϕ as k ≤ m. As a result, the hitting set problem has a positive answer on instance S, T_1, …, T_n, k' if and only if so does the (Op) learning problem on instance u, v_1, …, v_n, K. As the latter instance is constructible in polynomial time from the former, Theorem <ref> is proven. 
§.§ Dual hardness result We now show hardness for fragments with operatorsand . As we allowed negation in the previous result, we can infer that one almost directly. For all ,⊆ Op ⊆, , , ,, the learning problem for (Op) is -hard even for an alphabet of size 3. We take the same instance of the hitting problem, but this time we consider the (Op) learning problem with the v_i as positive words and u as the only negative one. Let ϕ be an (, , , F, G) formula separating theu_i and v, then we have: k(6M-3)≤ϕ. Let ϕ be such a formula, and let ϕ be its negation, with the negations pushed to the bottom. Formally, ϕ is defined by induction on ϕ: * a =a * ϕ_1 ϕ_2 = ϕ_1ϕ_2 * ϕ_1 ϕ_2 = ϕ_1ϕ_2 * ϕ_1 = ϕ_1 * ϕ_1 = ϕ_1 A clear induction shows that a word satisfies one if and only if it does not satisfy the other (note that this would not be true if we allowed the operator ). Hence ϕ is satisfied by u but not by the v_i, so by Theorem <ref> we have k(6M-3) ≤ϕ = ϕ. There exists a formula ϕ of (,) separating the u_i and v with ϕ≤ 6kM + 11m + 4. Let J be a cover of S of size k. We set, for all 1≤ i ≤ n, z_j = (ab)^M+1ifj ∈ J, ab otherwise. and we set w = c z_1 c z_2 c ⋯ z_m c. We use the formula ψ = (x̅_̅1̅ (x̅_̅2̅(x̅_̅3̅(... x̅_̅p̅) ))), where x_1, …, x_p are letters such that x_1⋯ x_p = w, a̅ = b, b̅ = a and c̅ = ab. This formula has size 3p -1 + 2m +2 = 3(2kM + 3m + 1) +2m +1≤ 6kM + 11m + 4. Observe that ψ is satisfied by exactly the words which do not have any w' as a weak subword, with w' ∈ (2^Σ)^* obtained by replacing each c by c, a by a,c and b by b,c. As w is a weak subword of v, so is w', hence v does not satisfy ϕ. Let i ∈ S, as u_i and w contain the same number of c, for u_i to not satisfy the formula, z_j needs to be a subword of y_i,j for all j, i.e., we need to have y_i,j = (ab)^M+1 for all j ∈ J. However, as J is a cover of S, there exists j ∈ J such that i ∈ C_j and thus y_i,j = (ab)^M. Therefore all u_i satisfy the formula. Let ϕ be a formula of minimal size satisfied by all v_i but not by u.By Proposition <ref> and Lemma <ref>, we have6kM-3k ≤ϕ≤ 6kM + 11m + 4.We set K = 6k'M + 11m +4 and show that k' ≥ k if and only if ϕ≤ K. * Suppose k' ≥ k, then K ≥ 6kM + 11m + 4 ≥ϕ. * Suppose k' ≤ k - 1, then K ≤ 6(k-1)M + 11m + 4 = 18km -7m +12k - 8≤ (6kM - 3k) - 7m +3k - 8 < 6kM - 3k ≤ϕ as k ≤ m. As a result, the hitting set problem has a positive answer on instance S, T_1, …, T_n, k' if and only if so does the (Op) learning problem on instance u, v_1, …, v_n, K. As the latter instance is constructible in polynomial time from the former, Theorem <ref> is proven. § PERSPECTIVES AND OPEN PROBLEMS In this paper, we showed -completeness of thelearning problem for all fragments which do not include the until operator. The same holds adding until for non-constant size alphabets. Hence the main open question is the following: Is the learning problem -complete for fullwith constant size alphabet? We considerover finite traces; some of the related works we discussed in the introduction actually considered infinite traces as common in the verification research community. All our hardness results easily transfer to the infinite trace case, simply by extending finite traces into infinite ones with a new symbol. In particular, we obtain as corollary that thelearning problem is -hard for infinite traces when the alphabet is part of the input.The negative results in this paper suggest looking for approximation algorithms. 
Beyond the fragment with only the next operator and conjunctions, nothing is known in terms of upper and lower bounds on polynomial-time approximation algorithms, leaving an open field of exciting research directions. In the same vein, one could wonder about the parameterized complexity of the LTL learning problem, in particular when fixing the number of words. We leave this question open and hope to inspire further studies!
http://arxiv.org/abs/2312.16336v1
{ "authors": [ "Corto Mascle", "Nathanaël Fijalkow", "Guillaume Lagarde" ], "categories": [ "cs.LG", "cs.AI", "cs.FL", "cs.LO" ], "primary_category": "cs.LG", "published": "20231226211619", "title": "Learning temporal formulas from examples is hard" }
DP-5-truncated-degree-colourability of K_2,4-minor free graphs

On-Hei Solomon Lo, Faculty of Environment and Information Sciences, Yokohama National University, Japan. E-mail: [email protected]. This research was supported by a Postdoctoral Fellowship of Japan Society for the Promotion of Science.

Cheng Wang, School of Mathematical Sciences, Zhejiang Normal University, China. Email: [email protected]

Huan Zhou, School of Mathematical Sciences, Zhejiang Normal University, China. Email: [email protected]

Xuding Zhu, School of Mathematical Sciences, Zhejiang Normal University, China. E-mail: [email protected]. This research was supported by the National Natural Science Foundation of China, Grant numbers: NSFC 12371359, U20A2068.

January 14, 2024

Assume G is a graph and k is a positive integer. Let f: V(G) → ℕ be defined as f(v) = min{k, d_G(v)}. If G is DP-f-colourable (respectively, f-choosable), then we say G is DP-k-truncated-degree-colourable (respectively, k-truncated-degree-choosable). Hutchinson [On list-colouring outerplanar graphs. J. Graph Theory, 59(1):59–74, 2008] proved that 2-connected maximal outerplanar graphs other than the triangle are 5-truncated-degree-choosable. This result was recently improved by Dai, Hu, Li, and Maezawa in [On DP-colouring of outerplanar graphs. Manuscript, 2023], where it is proved that 2-connected outerplanar graphs other than cycles are DP-5-truncated-degree-colourable. This paper further improves this result and proves that 2-connected K_2,4-minor free graphs other than cycles and complete graphs are DP-5-truncated-degree-colourable.

Keywords. Degree-choosable, DP-colouring, Truncated-degree-colouring, K_2,4-minor free graph

§ INTRODUCTION

Assume G is a graph and f: V(G) → ℕ. An f-list assignment of G is a mapping L with |L(v)| = f(v) for each vertex v. An L-colouring of G is a mapping ϕ which assigns to each vertex a colour ϕ(v) ∈ L(v) such that ϕ(u) ≠ ϕ(v) for each edge uv. A graph G is f-choosable if G has an L-colouring for every f-list assignment L. We say G is k-choosable if G is f-choosable with f(v) = k for every v ∈ V(G). The choice number ch(G) of G is the least integer k such that G is k-choosable. List colouring of graphs was introduced independently by Erdős-Rubin-Taylor <cit.> and Vizing <cit.> in the 1970s and has since been extensively studied in the literature.

For a vertex v of a graph G, denote by d_G(v) the degree of v. A graph G is called degree-choosable if G is f-choosable with f(v) = d_G(v) for every v ∈ V(G). Degree-choosable graphs have been investigated in many papers <cit.>.
It is known that a connected graph G is not degree-choosable if and only if G is a Gallai-tree, i.e., each block of G is either a complete graph or an odd cycle.In 2008, Hutchinson <cit.>studied a notion that combines the concepts of degree-choosable and k-choosable, which is called k-truncated-degree-choosable in <cit.>. A graph G is k-truncated-degree-choosable if it is f-choosable, where f is defined as f(v)=min{k, d_G(v)} for v ∈ V(G). Hutchinson's work was motivated by a question raised by Bruce Richter, asking whether every 3-connected non-complete planar graph is 6-truncated-degree-choosable. Hutchinson <cit.> established the following result concerning outerplanar graphs. Every 2-connected maximal outerplanar graph other than K_3 is 5-truncated-degree-choosable.Theorem <ref> is tight, as Kostochka <cit.> constructed a 2-connected maximal outerplanar graph that is not a triangle and not 4-truncated-degree-choosable. On the other hand, Hutchinson <cit.> proved that 2-connectedbipartite outerplanar graphs are 4-truncated-degree-choosable. This result is also tight, as there are 2-connected bipartite outerplanargraphs that are not 3-truncated-degree-choosable. DP-colouring (also known as correspondence colouring) is a generalization of list colouringintroduced by Dvořák and Postle <cit.>. A cover of a multigraph G is an ordered pair (L,M), where L = {L(v): v ∈ V(G)} is a family of pairwise disjoint sets, and M={M_e: e ∈ E(G)} is a family of bipartite graphs such that for each edge e=uv, M_e is a bipartite graph with partite sets L(u) and L(v).A cover (L,M) of G is simple if Δ(M_e) ≤ 1 for each edge e, i.e., M_e is a (possibly empty) matching for each edge e. For afunction f: V(G) →ℕ, we say (L,M) is an f-cover of G if |L(v)| ≥ f(v) for each vertex v ∈ V(G). The definition above is slightly different from that used in the literature, where a cover refers to a simple cover.For our purpose, it is more convenient to allow M_e to be a bipartite graph that is not a matching. Specific restrictions on the bipartite graphs will be given later.For any subgraph H of G, denote by (L,M)|_H the restriction of (L,M) to H, i.e., (L,M)|_H is the cover (L|_H, M|_H) of H with L|_H = {L(v) ∈ L : v∈ V(H)} and M|_H = {M_e ∈ M : e ∈ E(H)}.For convenience, we view each M_e as a set of edges, and view(L,M) as the graph with vertex set ⋃_v ∈ V(G)L(v) and edge set ⋃_e ∈ E(G)M_e. For a vertex a ∈⋃_v ∈ V(G)L(v), we denote by N_(L,M)(a) be the set of neighbours of a in (L,M). We may write N_M(a) instead of N_(L,M)(a) if it is clear from the context.Assume (L,M) is a cover of a graph G. In this context, both G and (L,M) are considered graphs. To highlight their distinct roles, we will refer to the vertices of (L,M) as nodes and the edges of (L,M) links. Given a cover (L,M) of a graph G, an (L,M)-colouring of G is a mapping ϕ: V(G) →⋃_v ∈ V(G)L(v) such that for each vertex v ∈ V(G), ϕ(v) ∈ L(v), and for each edge e=uv ∈ E(G), ϕ(u)ϕ(v) ∉ E(M_e). We say G is (L, M)-colourable if it has an (L,M)-colouring. Assume G is a graph andf: V(G) →ℕ.We say G is DP-f-colourable if for every simple f-cover (L,M), G has an (L,M)-colouring.It is well-known that if G is DP-f-colourable, then it is f-choosable. We say G is DP-degree-colourable if G is DP-f-colourable with f(v)=d_G(v) for v ∈ V(G). For a positive integer k, we say G is DP-k-truncated-degree-colourable if G is DP-f-colourable with f(v)=min{k, d_G(v)} for v ∈ V(G). 
It follows from the above definitions that DP-degree-colourable graphs are degree-choosable, and DP-k-truncated-degree-colourable graphs are k-truncated-degree-choosable. The converse is not true. For example, even cycles are degree-choosable but not DP-degree-colourable. It was proved in <cit.> that there are 2-connected outerplanar bipartite graphs that are not DP-4-truncated-degree-colourable (recall that 2-connected outerplanar bipartite graphs are 4-truncated-degree-choosable <cit.>). The following theorem, providing a characterization of DP-degree-colourable graphs, combines a result of Bernshteyn, Kostochka, and Pron <cit.> and (a weaker version of) a result of Kim and Ozeki <cit.>. A connected multigraph G is not DP-degree-colourable if and only if each block of G is one of the graphs K_n^k or C_n^k for some n and k, where K_n^k (respectively, C_n^k) is the graph obtained from the complete graph K_n (respectively, the cycle C_n) by replacing each edge with a set of k parallel edges. Moreover, if G has a simple f-cover (L, M) with f(v) = d_G(v) for all v ∈ V(G) such that G is not (L,M)-colourable, then M_e is a perfect matching for each edge e of G. A GDP-tree G is a simple graph whose blocks are either a complete graph or a cycle. It follows from Theorem <ref> that a connected simple graph G is not DP-degree-colourable if and only if it is a GDP-tree. The 5-truncated-degree-choosability of 2-connected outerplanar graphs was generalized and strengthened to DP-5-truncated-degree-colourability by Dai, Hu, Li, and Maezawa <cit.>. Every 2-connected outerplanar graph other than cycles is DP-5-truncated-degree-colourable. Note that Theorem <ref> is not restricted to 2-connected maximal outerplanar graphs. So even its restriction to choosability extends Theorem <ref>. In <cit.>, k-truncated-degree-choosability and DP-k-truncated-degree-colourability of general graphs were studied. The following theorem was proved in <cit.>; in particular, it answers Richter's question in the negative. Every 3-connected non-complete planar graph is DP-16-truncated-degree-colourable, and there exists a 3-connected non-complete planar graph which is not 7-truncated-degree-choosable. Indeed, a stronger result was established in <cit.>, implying that every 3-connected non-complete planar graph is DP-16-truncated-degree-paintable. This result was also generalized to all proper minor closed families of graphs as follows. For any proper minor closed family 𝒢 of graphs, there is a constant k such that every s-connected graph in 𝒢 that is not a GDP-tree is DP-k-truncated-degree-colourable, where s is the minimum integer such that K_s,t ∉ 𝒢 for some integer t. Note that the condition that G be s-connected is necessary. For any integer k, the graph G = K_s-1,k^s-1 ∈ 𝒢, and G is (s-1)-connected and not k-truncated-degree-choosable (and hence not DP-k-truncated-degree-colourable). It follows from Theorem <ref> that
* For each surface Σ, there is a positive integer k_Σ such that every 3-connected non-complete graph G embeddable on Σ is DP-k_Σ-truncated-degree-colourable.
* For any graph H, there is a constant k_H such that every s-connected H-minor free graph G is DP-k_H-truncated-degree-colourable. Here s is the minimum integer such that for some integer t, K_s,t contains an H-minor.
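To make the even-cycle example mentioned above concrete, the following small brute-force sketch (not part of the original paper; the Python encoding of covers, the function name has_LM_colouring and the choice of "straight" and "crossed" matchings are illustrative assumptions) searches for an (L,M)-colouring of the 4-cycle under a simple cover in which every list has size 2 = d_G(v). Three matchings identify the second coordinates "straight" and one is "crossed"; no colouring exists, which witnesses that C_4 is not DP-degree-colourable even though it is degree-choosable.

```python
from itertools import product

def has_LM_colouring(vertices, edges, L, M):
    """Brute-force test for an (L, M)-colouring: try every choice of
    phi(v) in L[v] and accept if no edge uses a forbidden link of M."""
    for choice in product(*(L[v] for v in vertices)):
        phi = dict(zip(vertices, choice))
        if all((phi[u], phi[v]) not in M[(u, v)] for (u, v) in edges):
            return True
    return False

# A simple cover of the 4-cycle with lists of size 2: straight matchings
# on three edges and a crossed matching on the fourth edge.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
L = {v: [(v, 0), (v, 1)] for v in V}
straight = lambda u, v: {((u, i), (v, i)) for i in (0, 1)}
crossed = lambda u, v: {((u, i), (v, 1 - i)) for i in (0, 1)}
M = {(1, 2): straight(1, 2), (2, 3): straight(2, 3),
     (3, 4): straight(3, 4), (4, 1): crossed(4, 1)}

print(has_LM_colouring(V, E, L, M))  # prints False
```

The straight matchings force the second coordinate of ϕ to alternate along the path 1-2-3-4, while the crossed matching on the edge 41 forces ϕ(4) and ϕ(1) to agree in the second coordinate; with lists of size 2 these requirements are incompatible.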
Given positive integers s,t, let τ(s,t) be the smallest integer such that every s-connected K_s,t-minor free graph G that is not a Gallai-tree is τ(s,t)-truncated-degree-choosable, andτ_DP(s,t) be the smallest integer such that every s-connected K_s,t-minor free graph G that is not a GDP-tree is DP-τ_DP(s,t)-truncated-degree-colourable.The proof of Theorem <ref> gives an upper bound for τ_DP(s,t)(which is also an upper bound forτ(s,t)). But the proven upper bound is large. For s=2 and t=3, 2-connected K_2,3-minor free graphs that are not GDP-trees are precisely the class of 2-connected outerplanar graphs other than cycles.It follows from Theorem <ref> that τ(2,3)=τ_DP(2,3)=5.In this paper, we consider K_2,4-minor free graphs, and proves the following result. Every 2-connected K_2,4-minor free graphother thancyclesand complete graphs is DP-5-truncated-degree-colourable. Hence τ(2,4)=τ_DP(2,4)=5. Contrasting with list colouring, the concept of DP-colouring places constraints on conflicts of colours in edges, rather than in the colour lists of vertices. Our proof takes advantage of this property. The result implies that 2-connected K_2,4-minor free graphsother than odd cyclesand complete graphs are 5-truncated-degree-choosable. However, we do not have a direct proof of this result that does not rely on the concept of DP-colouring.§ TWO-TERMINAL OUTERPLANAR GRAPHS In this section we introduce two-terminal outerplanar graphs and prove some lemmas concerning DP-colourings of these graphs. This graph class serves as a key element in the characterization of K_2,4-minor free graphs given by Ellingham et al. <cit.>, and will play a critical role in the proof of our main result. An x-y-outerplanar graph is asimple 2-connected outerplane graph, where xy is an edge incident to the unbounded face. A broken x-y-outerplanar graph is either a copy of K_2 with end vertices x and y, or a graphobtained from an x-y-outerplanar graph by deleting the edge xy.Both x-y-outerplanar graphs and broken x-y-outerplanar graphs are called two-terminal outerplanar graphs with terminal vertices x and y. A two-terminal outerplanar graph is trivial if it is isomorphic to K_2.Thus a two-terminal outerplanar graph G with terminal vertices x, y has a spanning path P joining x and y, and G can be embedded in the plane so that all edges not in Plie on the same side of P. The path P is called the outer path of G.We write P=v_1v_2… v_n to denote that the path P consists of n vertices in order, and denote by v_1v_2… v_nv_1 the cycle consisting ofn vertices in this cyclic order. In this paper we consider DP-truncated-degree-colourability of 2-connected K_2,4-minor free graphs. Two-terminal outerplanar graphs will be used as gadgets in the construction of 2-connected K_2,4-minor free graphs.In the remainder of this section we assume that G is a broken x-y-outerplanar graphwith outer path P. Let (L,M) be a cover of G and e=uv ∈ E(G). If M_e is a matching, then letλ_M_e(v)=1.If M_e is a subgraph of K_2,2 with Δ(M_e)=2, then let λ_M_e(v) =1,if M_e is a copy of K_1,2 with the degree 2 node in L(v), 2,otherwise.If M_e=M'_e ∪̇M”_e is the disjoint union of a matching M'_e and a subgraphM”_e of K_2,2 (and M_e itself is neither a matching nor a subgraph of K_2,2),then letλ_M_e(v) = λ_M'_e(v)+ λ_M”_e(v)= 1+ λ_M”_e(v).Notice that M_e can possibly be decomposed into a matching and a subgraph of K_2,2 in more than one way. In application, the disjoint union is given and hence λ_M_e(v) is well-defined. 
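To illustrate the definition with a few concrete cases (these worked values are not part of the original text; they merely instantiate the cases above): if M_e is a matching, then λ_M_e(u)=λ_M_e(v)=1; if M_e is a full copy of K_2,2, then λ_M_e(u)=λ_M_e(v)=2; if M_e is a copy of K_1,2 whose degree 2 node lies in L(v), then λ_M_e(v)=1 while λ_M_e(u)=2; and if M_e is the disjoint union of a matching and a full copy of K_2,2, then λ_M_e(u)=λ_M_e(v)=1+2=3.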
For v ∈ V(G), let λ_(L,M)(v) = ∑_e ∈ E_G(v)λ_M_e(v), where E_G(v) denotes the set of edges incident to v in G. For v ∈ V(G), let ℓ_(L,M)(v) = min{5, λ_(L,M)(v)}.If (L,M) is a simple cover, then ∑_e ∈ E_G(v)λ_M_e(v)=d_G(v) is the degree of v.Intuitively, we view ∑_e ∈ E_G(v)λ_M_e(v) as a weighted degree of v, where the contribution λ_M_e(v) of each incident edge e ∈ E_G(v) to the weighted degree of v is either 1, 2, or 3, depending on the bipartite graph M_e. Let (L,M) be a cover of G. We say (L,M) is valid if the following hold: * For e=uv ∈ E(P), M_e is either a matching, or a subgraph of K_2,2, or the union of a matching and a subgraph of K_2,2. * M_e is a matching for e ∈ E(G)-E(P). * For any vertex v, |L(v)| ≥ℓ_(L,M)(v). Moreover, if e=uv ∈ E(P) and L(u) has a node z with d_M_e(z) = 3, then |L(v)| ≥5. * If G≅ K_2 consists of a single edge e=xy, then M_e is a subgraph of K_2,2. The following Lemma will be frequently used in the proofs without mentioning it explicitly. Let (L,M) be a valid cover of G. For any edge e=uv of G and for any node a ∈ L(u), we have |N_M_e(a)| ≤λ_M_e(v). For any edge e=uv of G, there are at most two nodes a ∈ L(u) with |N_M_e(a)|≥ 2. Moreover, if λ_M_e(u) ≥ 2, then there are at most 2 nodes a ∈ L(u) for which |N_M_e(a)| ≥min{2,λ_M_e(v)}. It follows easily from the definition that for any node a ∈ L(u), |N_M_e(a)| ≤λ_M_e(v). It also follows from the definition that L(u) contains at most 2 nodes a with |N_M_e(a)| ≥ 2. Assume λ_M_e(u) ≥ 2. Ifλ_M_e(v) ≥ 2, then M_e is either a subgraph of K_2,2, or the union of a matching and a subgraph of K_2,2. Hence there are at most 2 nodes a ∈ L(u) for which|N_M_e(a)| ≥ 2 = min{2,λ_M_e(v)}. Ifλ_M_e(v) = 1, thenM_e is a copy of K_1,2 with the degree 2 node in L(v).Hence there are exactly two nodes a ∈ L(u) for which|N_M_e(a)|= 1 = min{2,λ_M_e(v)}. Assume (L,M) is a cover of G. Let M_xy be a bipartite graph with partite sets L(x) and L(y). We say M_xy is acoding of (L, M) if the following hold: * For any a ∈ L(x), b∈ L(y) with ab∉M_xy, there is an (L,M)-colouring ϕ of G such that ϕ(x)=a and ϕ(y)=b. * The linksin M̅_xyinduces a subgraph of K_2,2. * λ_(L,M)(x) ≥λ_M̅_xy(x) and λ_(L,M)(y) ≥λ_M̅_xy(y). The following proposition is a Key Lemma in this paper. If G is a broken x-y-outerplanar graph and (L, M) is a valid cover of G, then(L,M) has a coding M̅_xy. We prove the Lemma by induction on |V(G)|. If G ≅ K_2 consists of a single edge xy, then M̅_xy=M_xy is a coding of (L,M) which is a subgraph of K_2,2 by definition, and it holds that λ_(L,M)(x) = λ_M̅_xy(x) and λ_(L,M)(y) =λ_M̅_xy(y). We assume |V(G)| ≥ 3. Then we have xy ∉ E(G) and that there exists a vertex u ∈ V(G) ∖{x, y} with d_G(u)=2. Let u_1 and u_2 be the neighbours of u, and let e_i=u_iu. Note that e_1, e_2 are in the outer path P of G. Let G' be the graph with V(G')=V(G)-{u} and E(G')= E(G-u) ∪{e}, where e=u_1u_2. It is possible that u_1u_2 ∈ E(G). In this case,we have {u_1,u_2}{x,y} and E(G') = E(G-u).In any case, G' is a broken x-y-outerplanar graph with outer path P'=(P-u)+u_1u_2. We construct a cover (L',M') of G' as follows: * For v ∈ V(G'), set L'(v)=L(v). * For e' ∈ E(G')-{e}, set M'_e'=M_e'. * Let M^*_e= {ab : a ∈ L(u_1), b ∈ L(u_2), N_M_e_1(a) ∪ N_M_e_2(b) = L(u) }. If e ∉ E(G), we set M'_e =M^*_e; otherwise, set M'_e = M^*_e ∪ M_e. We claim that it suffices to show the following: * There is a coding M̅_xy of (L',M'). * For all v ∈ V(G'), λ_(L',M')(v) ≤λ_(L,M)(v). Suppose these two statements hold. 
As M̅_xy is a coding of (L', M'), for any a ∈ L'(x) = L(x) and b ∈ L'(y) = L(y) with ab ∈M̅_xy, there is an (L',M')-colouring ϕ of G' such that ϕ(x) = a and ϕ(y) = b. Write ϕ(u_1)=c_1 and ϕ(u_2)=c_2. Since c_1c_2 ∉ E(M'_e), we have that L(u) ∖ (N_M_e_1(c_1) ∪ N_M_e_2(c_2)) ∅. We can extend ϕ to an (L,M)-colouring of G by letting ϕ(u)=c for some c ∈ L(u) ∖ (N_M_e_1(c_1) ∪ N_M_e_2(c_2)). Therefore, by these two statements, we conclude that M_xy is a coding of (L,M). To show the first statement, we may prove that (L', M') is a valid cover of G' and then apply the induction hypothesis. We remark that in some cases, we need to consider another cover (L”, M”) of G' which is obtained from (L',M') by removing some nodes. Case 1:λ_M_e_i(u_i)=1 for i=1,2. We shall prove that M^*_e is a matching consisting of at most two links.Assume both M_e_1 and M_e_2 are matchings.For any a ∈ L(u_1) and any b ∈ L(u_2), if ab ∈ M^*_e, thenN_M_e_1(a) ∪ N_M_e_2(b) = L(u). As |N_M_e_1(a)| ≤ 1 and |N_M_e_2(b)| ≤ 1, N_M_e_1(a) ∪ N_M_e_2(b) = L(u) holds if and only if |L(u)|=2,|N_M_e_1(a)|=|N_M_e_2(b)| = 1 and N_M_e_1(a) ∩ N_M_e_2(b) = ∅. ThereforeM^*_e is a matching consisting of at most 2 links (see Figure <ref>(a)). Note that M^*_e can be empty, which is also viewed as a matching. Assume M_e_1 is a matching and M_e_2 is a copy of K_1, 2 with the degree 2 node in L(u_2).Then λ_M_e_2(u)=2 and hence|L(u)|≥λ_M_e_2(u)+λ_M_e_1(u) = 3.For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=1, |N_M_e_2(b)| = 2, N_M_e_1(a)∩ N_M_e_2(b) = ∅ and |L(u)|=3. This occurs only if b is the degree 2 node of M_e_2 in L(u_2) and a is adjacentin M_e_1 to the node in L(u) not adjacent to bin M_e_2. Hence M^*_e is a matching consisting of at most one link (see Figure <ref>(b)). Assume M_e_i is a copy of K_1, 2 with the degree 2 node in L(u_i) for i=1, 2. Then λ_M_e_i(u)=2 for i=1,2 and hence|L(u)|≥λ_M_e_2(u)+λ_M_e_1(u) = 4. For a ∈ L(u_1) and b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=|N_M_e_2(b)| = 2and N_M_e_1(a) ∩ N_M_e_2(b) = ∅. This occurs only if a is the degree 2 node of M_e_1 in L(u_1) and b is the degree 2 node of M_e_2 in L(u_2). Thus M^*_ehas at most one link (see Figure <ref>(c)).If e ∉ E(G), then e is a new edge added to G'. Thus M'_e=M^*_e is a matching consisting of at most two links,and for i=1,2, λ_M'_e(u_i) =λ_M_e_i(u_i)=1.If e ∈ E(G), then{u_1,u_2}{x,y} and M'_e=M^*_e ∪ M_e. We have λ_M'_e(u_i)=2 if M^*_e ⊈M_e, and λ_M'_e(u_i)=1 otherwise. This implies that λ_M'_e(u_i) ≤ 2 = λ_M_e_i(u_i)+λ_M_e(u_i). In any case, we have λ_(L',M')(v) ≤λ_(L,M)(v) and ℓ_(L',M')(v) ≤ℓ_(L,M)(v) for any v ∈ V(G'). Hence |L'(v)|=|L(v)| ≥ℓ_(L,M)(v) ≥ℓ_(L',M')(v) for v ∈ V(G').If {u_1, u_2}={x,y}, then M'_e is a subgraph of K_2,2 (as e ∉ E(G)). Moreover, for any c ∈ L(u_1) ∪ L(u_2), d_M'_e(c) ≤ 2 as M^*_e is a matching. We thus conclude that (L', M') is a valid cover of G'. It follows from the induction hypothesis that (L',M') has a coding M̅_xy. Therefore, we have that M̅_xy is a coding of (L, M). Case 2:λ_M_e_1(u_1)=1 and λ_M_e_2(u_2)≥ 2. We shall prove that M^*_e consists of at most two links, and if M^*_e is a copy of K_1,2, then the degree 2 node of M^*_e is contained in L(u_1). As λ_M_e_1(u_1)=1, M_e_1 is either a matchingor K_1,2 with the degree 2 node in L(u_1). We first consider the latter case. Then λ_M_e_1(u) =2 and hence |L(u)| ≥λ_M_e_1(u) + λ_M_e_2(u)=3. If |L(u)|= 3, then λ_M_e_2(u)=1 and hence M_e_2 isK_1,2 with the degree 2 node inL(u).For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=2 and |N_M_e_2(b)| = 1. 
Therefore M^*_e is either an empty graph or a copy of K_1,2 with the degree 2 node contained in L(u_1) (see Figure <ref>(a)). Assume |L(u)|≥ 4. For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|= 2 and |N_M_e_2(b)| ≥ |L(u)| - |N_M_e_1(a)| ≥2. Since L(u_i) has at most two nodes of degree greater than 1 in M_e_i for i=1, 2, M^*_e is a subgraphof K_1, 2 so that if it is a copy of K_1,2, the degree 2 node is in L(u_1) (see Figure <ref>(b)). Next we consider the case that M_e_1 is a matching. If |L(u)| ≥ 4, then either each node in L(u_2) has degree at most 2 in M_e_2 or |L(u)| ≥ 5. In any case, for any a ∈ L(u_1), b ∈ L(u_2), we have |N_M_e_1(a)|+|N_M_e_2(b)| < |L(u)| and hence N_M_e_1(a) ∪ N_M_e_2(b) L(u). ThereforeM^*_e isan empty graph (see Figure <ref>(a)). Assume |L(u)|= 2.Then λ_M_e_2(u) =1 and M_e_2 is K_1,2 with degree 2 node in L(u). For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=|N_M_e_2(b)| = 1. Therefore M^*_e is either an empty graph or K_1,2 with the degree 2 node contained in L(u_1) (see Figure <ref>(b)). Assume |L(u)|= 3. For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=1 and |N_M_e_2(b)| = 2. Since L(u_2) has at most two nodes of degree greater than 1 in M_e_2, M^*_e is either a matching of at most two links or a copy of K_1,2 with the degree 2 node contained in L(u_1) (see Figures <ref>(c) and (d)).Subcase 2.1:e ∉ E(G). In this case,M'_e=M^*_e. For i=1,2,λ_M'_e(u_i) ≤λ_M_e_i(u_i). We have that λ_(L',M')(v) ≤λ_(L,M)(v) for any v ∈ V(G'). Thus |L'(v)|=|L(v)| ≥ℓ_(L,M)(v) ≥ℓ_(L',M')(v) for v ∈ V(G').If {u_1, u_2}={x,y}, then M'_e is a subgraph of K_2,2. Moreover, for any c ∈ L(u_1) ∪ L(u_2), d_M'_e(c) ≤ 2.So (L', M') is a valid cover of G', and (L',M') has a coding M̅_xy which is a subgraph of K_2,2 and λ_(L',M')(x) ≥λ_M̅_xy(x) and λ_(L',M')(y) ≥λ_M̅_xy(y). Again, M̅_xy is a coding of (L, M) satisfying λ_(L,M)(x) ≥λ_M̅_xy(x) and λ_(L,M)(y) ≥λ_M̅_xy(y).Subcase 2.2: e ∈ E(G). In this case,M'_e=M^*_e∪ M_e.Assume |L(u_2)|≥ 5. As λ_M'_e(u_1) ≤ 2= λ_M_e(u_1) + λ_M_e_1(u_1) and λ_M'_e(u_2) ≤ 3 ≤λ_M_e(u_2) + λ_M_e_2(u_2), we have λ_(L',M')(v) ≤λ_(L,M)(v) and |L'(v)|=|L(v)| ≥ℓ_(L,M)(v) ≥ℓ_(L',M')(v) for all v ∈ V(G'). Note that if there is c ∈ L(u_1) ∪ L(u_2) with d_M'_e(c) ≥ 3, then c must be in L(u_1). Since |L(u_2)|≥ 5, we can conclude that the cover (L', M') isa valid cover of G'. As before, we have that the cover (L,M) has a coding M̅_xy which is a subgraph of K_2,2 satisfying λ_(L,M)(x) ≥λ_M̅_xy(x) and λ_(L,M)(y) ≥λ_M̅_xy(y). Assume |L(u_2)|≤ 4. In particular, λ_(L,M)(u_2) = ℓ_(L,M)(u_2) ≤ 4. Let B = {z ∈ L(u_2): d_M^*_e(z) ≥ 1}. Then |B|≤ 2. Let (L”,M”) be the cover of G' obtained from (L',M') by deleting B, i.e.,L”(u_2)=L'(u_2)-B and M”_f=M'_f-B for any f ∈ E(G'). Note that every (L”,M”)-colouring of G' can serve as an (L',M')-colouring as well.Therefore, it suffices to prove that (L”,M”) has a coding M̅_xy satisfying the desired properties.Now M”_e is a matching as it is a subgraph of M_e.So, λ_M”_e(u_i) = λ_M_e(u_i) = 1 for i=1, 2. We have λ_(L”,M”)(v)≤λ_(L,M)(v) for any v ∈ V(G') ∖{u_2} and λ_(L”,M”)(u_2)= λ_(L,M)(u_2)- λ_M_e_2(u_2) = λ_(L,M)(u_2)-2.Therefore, |L”(v)|=|L(v)| ≥ℓ_(L,M)(v) ≥ℓ_(L”,M”)(v) for any v ∈ V(G') ∖{u_2} and |L”(u_2)|≥ |L(u_2)|-2≥ℓ_(L,M)(u_2)-2=ℓ_(L”,M”)(u_2).Also, if there exists e'=u'v' ∈ E(G') such that d_M”_e'(c) = 3 for some c ∈ L”(u'), then e' ≠ e and v' ≠ u_2 (as |L(u_2)| ≤ 4). We have |L”(v')| = |L(v')| ≥ 5. 
Altogether, we conclude that (L”, M”) is a valid cover of G' and has a coding M̅_xy, and hence M̅_xy is a coding of (L, M). Case 3: λ_M_e_i(u_i)≥ 2 for i=1,2. We shall prove that M^*_e is a subgraph of K_2,2. If |L(u)|= 2, then each of M_e_1 and M_e_2 is a copy of K_1,2 with the degree 2 node in L(u). Thus M^*_e iseither an empty graph or K_2,2 (see Figure <ref>(a)). If |L(u)|= 3, then one of M_e_1 and M_e_2 is a copy of K_1,2 with the degree 2 node in L(u).By symmetry, we may assume that it is M_e_1.For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|=1 and |N_M_e_2(b)| = 2. Therefore M^*_e is an empty graph, or K_2,2, or K_1,2 with the degree 2 node contained in L(u_2) (see Figures <ref>(b) and (c)). Assume |L(u)|≥ 4. For a ∈ L(u_1), b ∈ L(u_2), if ab ∈ M^*_e, then |N_M_e_1(a)|≥ 2 and |N_M_e_2(b)| ≥2. Since L(u_i) has at most two nodes of degree greater than 1 in M_e_i for i=1, 2, M^*_e is a subgraph of K_2,2 (see Figure <ref>(d)). Subcase 3.1: e∉ E(G). In this case, we can deduce from the arguments used in Subcase 2.1 that (L,M) has a coding M̅_xy satisfying λ_(L,M)(x) ≥λ_M̅_xy(x) and λ_(L,M)(y) ≥λ_M̅_xy(y).Subcase 3.2: e∈ E(G).If |L(u_i)| ≤ 4 for some i ∈{1, 2}, say, |L(u_2)|≤ 4, then let B = {z ∈ L(u_2): d_M^*_e(z) ≥ 1}. We have |B|≤ 2.Let (L”,M”) be obtained from (L',M') by deleting B. As any (L”,M”)-colouring of G' is also an (L',M')-colouring, it suffices to prove (L”,M”) has a coding M̅_xy.Indeed, by the arguments in Subcase 2.2, one can conclude that (L”, M”) has a coding M̅_xy.Assume |L(u_1)| ≥ 5 and |L(u_2)| ≥ 5. For i=1,2, λ_M'_e(u_i) ≤λ_M_e(u_i)+λ_M^*_e(u_i)≤λ_M_e(u_i)+λ_M_e_i(u_i).Thus, for all v ∈ V(G'), λ_(L',M')(v) ≤λ_(L,M)(v) and |L'(v)|=|L(v)| ≥ℓ_(L,M)(v) ≥ℓ_(L',M')(v).Since |L(u_1)|≥ 5 and |L(u_2)|≥ 5, the cover (L', M') is a valid cover of G', and hence (L',M') has a coding M̅_xy, which is a coding of (L, M).Remark.Since G+xy is 2-connected, x and y are the only vertices of G that may have degree 1 in G. For a valid cover (L,M) of G, it may happen that ℓ_(L,M)(x) =ℓ_(L,M)(y) = 1, and it may happen that a coding M_xy is a complete bipartite graph with partite sets L(x) and L(y). In such a case, there is no colouring ϕ of {x,y} with ϕ(x) ∈ L(x), ϕ(y) ∈ L(y) and ϕ(x)ϕ(y) ∉ M_xy. Nevertheless,in our application of Lemma <ref>, a broken x-y-outerplanar graph is only a part of a larger graph in which x and y have other neighbours, and ℓ_(L,M)(x) and ℓ_(L,M)(y)will be larger. Hence |L(x)| and |L(y)| will be larger as well and there are always colourings ϕ of {x,y} with ϕ(x) ∈ L(x), ϕ(y) ∈ L(y) and ϕ(x)ϕ(y) ∉ M_xy. § 2-CONNECTED K_2,4-MINOR FREE GRAPHS In this section we present the characterization of 2-connected K_2,4-minor free graphs given in<cit.>. We first define an infinite class 𝒢 of graphs and a family 𝒢' of eleven small graphs.Then we discuss how 2-connected K_2,4-minor free graphs are constructed from these graphs. For n ≥ 6 and r, s ∈{2,3 …, n-3}, let G_n, r, s consist of a spanning path v_1 v_2 … v_n, which we call the spine, and edges v_1 v_n-i for 1 ≤ i ≤ r and v_n v_1+j for 1 ≤ j ≤ s. The graph G_n, r, s^+ is obtained from G_n, r, s by adding the edge v_1 v_n. We use G_n, r, s^(+) to denote a graph that is either G_n,r,s or G_n,r,s^+. The second spine of the graph G_n, r, s^(+) is the spanning path v_n-2v_n-3… v_2v_1v_n-1v_n. Two examples are shown in Figure <ref>. Let 𝒢={G_n,r,s^(+): r≥2, s≥3, r+s ∈{n-2,n-1}}. We denote by K_5^- the graph obtained from K_5 by deleting one edge, and K_3K_2 the Cartesian product of K_3 and K_2. 
Let 𝒢' = { K_5,K_5^-, K_3,3, K_3K_2, A, A^+, B, B^+, C, C^+, D}, where the graphs A, A^+, B, B^+, C, C^+ and Daredepicted in Figure <ref> below. For n ≥ 3, the wheel W_n is the join of K_1 and C_n. The vertex of degree n in W_n is a hub, and its incident edges are spokes, while the remaining edges that induce a cycle are the rim. Let 𝒲 = {W_n: n ≥ 3}. A 3-connected graph G is K_2,4-minor free if and only if G ∈𝒲∪𝒢∪𝒢'. In <cit.>, the wheel W_n is denoted by G_n+1,1,n-2^+, K_5^- is denoted by G_5,2,2^+, K_3K_2 is denoted by G_6,2,2 and are put in 𝒢.For the convenience in the proofs in this paper, we partition the family of 3-connected K_2,4-minor free graphs into three subfamiles as above. For a graph G ∈𝒢∪𝒢' ∪𝒲, an edge set F⊆ E(G) is called subdividable if the graph obtained from G by replacing each edge in F by a path of length 2 is K_2,4-minor free.The following theorem gives a characterization of 2-connected K_2,4-minor free graphs. A 2-connected graph G is K_2,4-minor free if and only if one of the following holds: * G is outerplanar.* G is the union of three brokenx-y-outerplanar graphs H_1, H_2, H_3, and possibly the edge x y, where |V(H_i)| ≥ 3 for each i ∈{1, 2, 3} and V(H_i) ∩ V(H_j)={x, y} for any distinct i, j∈{1, 2, 3}.* G is obtained from a 3-connected K_2,4-minor free graph G_0 by replacing each edge x_i y_i in a (possibly empty) subdividable set of edges {x_1 y_1, x_2 y_2, …, x_k y_k} by a two-terminal outerplanar graph H_i with terminal vertices x_i and y_i, such that V(H_i) ∩ V(G_0)={x_i, y_i} for each i ∈{1, … , k} and V(H_i) ∩ V(H_j) ⊆ V(G_0) for any distinct i, j∈{1, … , k}. To complete the characterization, it remains to specify the subdividable edge sets for graphs in 𝒲∪𝒢∪𝒢'. It suffices to state the (inclusion-wise) maximal subdividable sets since an edge set is subdividable if and only if it is contained in some maximal subdividable edge set. For the sake of concise proofs, we only consider maximal subdividable sets up toautomorphism. The maximal subdividable sets of edges of graphs in 𝒢∪𝒢' ∪𝒲, up to automorphism,are given as below. Without loss of generality, when considering G_n, r, s^(+)∈𝒢, we assume that r ≤ s.* W_n has one maximal subdividable edge set, consisting of the rims and one spoke, except that W_4 has two othermaximal subdividable edge sets as illustrated in Table <ref>.* Graphs in 𝒢 haveone maximal subdividable edge set which is the edge set of the spine, with the following exceptions: * G_n, 2, n-3∈𝒢 with n≥6 and G_n, 2, n-4^+∈𝒢 with n ≥ 7 have another maximal subdividable edge set which is the edge setof the second spine. * G_7,2,3 has another maximal subdividable edge set which is {v_1v_2,v_4v_5,v_6v_7,v_3v_7}. * K_5 has no subdividable edge, and the maximal subdividable edge sets of other graphs in 𝒢' are given in Table <ref>. § UNION OF TWO OR THREE X-Y-OUTERPLANAR GRAPHS Assume G is a 2-connected K_2,4-minor free simple graph which is neither a cycle nor a complete graph,and (L,M) is a simple f-cover of G, where f(v) = min{5, d_G(v)} for each vertex v ∈ V(G). We shall prove that G is (L,M)-colourable.Theorem <ref> partitions the family of 2-connected K_2,4-minor free graphs into three subfamilies. The first two subfamilies consist of outerplanar graphs (formed by the union of two broken x-y-outerplanar graphs) and graphs that are unions of three non-trivial broken x-y-outerplanar graphs possibly with the edge xy, respectively. 
In this section we prove Theorem <ref> for these two subfamilies.Indeed, the case thatG is a 2-connected outerplanar graphwas proved in <cit.>. The following is a simpler proof.Recall that G is obtained from a broken x-y-outerplanar graph H by adding the edge e=xy. If each vertex of G has degree at most 5, then G is degree DP-colourable, and hence 5-truncated-degree DP-colourable (as G is neither a cycle nor a complete graph). Otherwise, we may assume that d_G(x) ≥ 5 and hence |L(x)|=5. As the restriction of (L,M) to H is a valid cover of H,by Lemma <ref>, H has a coding M_xy which is a subgraph of K_2,2, and hence has at most 4 links. Since M_e is a matching, M_e ∪M_xy is not a complete bipartite graph. Choose a ∈ L(x) and b ∈ L(y) such that ab ∉ M_e ∪M_xy. Then the colouring ϕ of {x,y} defined as ϕ(x)=a and ϕ(y)=b can be extended to an (L,M)-colouring of G. Next we consider the case thatG is the union of three non-trivial broken x-y-outerplanar graphs H_1,H_2,H_3, and possibly the edge xy. For each i ∈{1,2,3}, the restriction of (L,M) to H_i is valid (as H_i is non-trivial). Therefore, by Lemma <ref>, there is a coding M_xy,i for the restriction of (L,M) to H_i. For i=1,2,3, let d_i=1,if M_xy,i has more than 2 links,0,otherwise.Let d_0=1 if e=xy is an edge of G, and d_0=0 otherwise. By Lemma <ref>, d_G(x)=d_0+∑_i=1^3d_H_i(x)≥ d_0+∑_i=1^3λ_M_xy, i(x)≥ d_0+∑_i=1^3(d_i+1)=d_0+d_1+d_2+d_3 + 3. Similarly, we have d_G(y) ≥ d_0+d_1+d_2+d_3 + 3.Suppose e=xy is an edge of G.If d_1+d_2+d_3 =0, then|L(x)|≥min{d_G(x), 5}≥ 4 and |L(y)|≥min{d_G(y),5}≥ 4. If d_1+d_2+d_3 ≥ 1, then |L(x)|≥min{d_G(x), 5} = 5 and |L(y)|≥min{d_G(y),5} = 5. In either case, a straightforward counting shows that the number of links in ⋃_i=1^3M_xy,i∪ M_e is less than |L(x)||L(y)|. Hence ⋃_i=1^3M_xy,i∪ M_e is not a complete bipartite graph and there exist a ∈ L(x) and b ∈ L(y) such that ab ∉⋃_i=1^3M_xy,i∪ M_e (note that M_e is a matching containing at most |L(x)| edges). Thus the colouring ϕ of {x,y} defined as ϕ(x)=a and ϕ(y)=b can be extended to an (L,M)-colouring of G. Suppose e=xy is not an edge of G. Similarly as above, depending on whether d_1+d_2+d_3 =0 or d_1+d_2+d_3 ≥ 1, one can readily show that the number of links in ⋃_i=1^3M_xy,i is less than |L(x)||L(y)|. Hence, the colouring ϕ of {x,y} defined as ϕ(x)=a and ϕ(y)=b satisfying a ∈ L(x), b ∈ L(y) and ab ∉⋃_i=1^3M_xy,i can be extended to an (L,M)-colouring of G.§ GRAPHS OBTAINED FROM 3-CONNECTED K_2,4-MINOR FREE GRAPHS To complete the proof of Theorem <ref>, it suffices to consider the remaining case that there is a 3-connected K_2,4-minor free graph G_0 and a subdividable edge set F={x_iy_i: i=1,2,…, k} of G_0 such that G is obtained from G_0 by replacing each edge x_iy_i ∈ F by a non-trivial two-terminal outerplanar graph with terminal vertices x_i and y_i.Given a simple f-cover of G with f(v) = min{5, d_G(v)}, we define a cover (L',M') of G_0 as follows. Define L' to be the restriction of L to V(G_0). For each edge e ∈ E(G_0), define M'_e as follows: * If e ∈ E(G_0)-F,set M'_e=M_e.* If e=x_iy_i is replaced by a broken x_i-y_i-outerplanar graph H_i, set M'_e to be a coding M_x_iy_i of (L,M)|_H_i that is a subgraph of K_2,2. * If e=x_iy_i is replaced by an x_i-y_i-outerplanar graph H_i, set M'_e = M_e ∪M_x_iy_i, where M_x_iy_i is a coding of (L,M)|_H_i-e. Note that for each edge e ∈ E(G_0)-F, M'_e is a matching, and for each edge e ∈ F, M'_e is either a matching, or a subgraph of K_2,2, or a matching plus a subgraph of K_2,2. 
By Lemma <ref>, instead of showing that G is (L,M)-colourable, it suffices to show that G_0 is(L',M')-colourable. Moreover, it follows from Lemma <ref> that for any edge e=x_iy_i ∈ F and any v ∈{x_i, y_i}, d_H_i(v) ≥λ_M'_e(v). Therefore, for any v ∈ V(G_0), we have|L(v)| ≥min{5, d_G(v)}≥min{5, ∑_e ∈ E_G_0(v)λ_M'_e(v)}.Let G_0 be a 3-connected K_2,4-minor free graph with a subdividable edge set F ⊆ E(G_0). A cover (L,M) of G_0 is F-valid if it satisfies the following properties: * For any e ∈ E(G_0)-F, M_e is a matching.* For any e ∈ F, M_e is either a matching, or a subgraph of K_2,2, or the union of a matching and a subgraph of K_2,2.* For any v ∈ V(G_0), |L(v)| ≥min{5, ∑_e ∈ E_G_0(v)λ_M_e(v)}.* If G_0 is a complete graph, then there is at least one edge e∈ E(G_0) for which M_e is not a perfect matching.By the discussion above, the last case ofTheorem <ref> follows from the following Theorem.Let G_0 be a 3-connected K_2,4-minor free graph and (L,M) be an F-valid cover of G_0, where F is a subdividable edge setof G_0. Then G_0 is (L,M)-colourable. The remainder of the paper is devoted to the proof of Theorem <ref>. Supposeto the contrary of Theorem <ref>, there exists a 3-connected K_2,4-minor free graph G_0 with a subdividable set F ⊆ E(G_0) and an F-valid cover (L,M), such that G_0 has no (L,M)-colouring. We choose a counterexample so that |F| is minimum, and subject to this, ∑_v ∈ V(G_0) |L(v)| is minimum. In particular, |L(v)| = min{5, ∑_e ∈ E_G_0(v)λ_M_e(v)}≤ 5 for each vertex v ∈ V(G_0). We shall derive a sequence of properties of G_0 that lead to a contradiction. There are three subfamilies of 3-connected K_2,4-minor free graphs, and graphs in 𝒢' do not have much structure in common, and each has a few maximal subdividable edge sets that need to be treated separately. Hence the proof is not short. However, the general idea is simple: We colour one or two vertices v of G_0 carefully so that some neighbour(s) of v will lose one or no colour and one of the following is true: * The remaining graph is a path that can be recursively coloured by the remaining colours in their lists.* The remaining vertices can be ordered so that for i ≥ 0, after removing the first i vertices, the (i+1)-th vertex u_i+1 has more remaining colours than its remaining weighted degree. Thus all remaining vertices can be removed iteratively (cf. Lemma <ref>). This means that G_0 has an (L,M)-colouring, which is a contradiction.In this section, we first prove some lemmas about conditions under which vertices can be removed, about paths with given lists that can be coloured recursively, and about vertices that can be coloured carefully so that some of its neighbour(s) will lose no (or one) colour. Subsequently, in three subsections, we apply these lemmas to graphs in each of the three subfamilies of graphs.We note that Lemma <ref> can be easily extended to the context of G_0 and the F-valid cover (L, M) of G_0. The details are left to the reader.Recall that we view (L, M) as a graph. When a cover (L', M') of a subgraph G' of G_0 is a subgraph of (L, M), we write (L', M') ⊆ (L, M).Let G' be a subgraph of G_0 and (L',M') ⊆ (L, M) be a cover of G'. Let v ∈ V(G') such that |L'(v)| > ∑_e ∈ E_G'(v)λ_M'_e(v). Then G' is (L',M')-colourable if and only if G'-v is (L',M')|_G'-v-colourable.If there is an (L',M')|_G'-v-colouring ϕ of G'-v, then at most ∑_e ∈ E_G'(v)λ_M'_e(v) nodes in L'(v) join to {ϕ(u) : u ∈ N_G'(v)}. 
Hence we can extend ϕ to an (L',M')-colouring of G' by assigning ϕ(v) ∈ L'(v) that is not adjacent to any node in {ϕ(u) : u ∈ N_G'(v)}.Let G' be a subgraph of G_0 and (L',M') ⊆ (L, M) be a cover of G'. We say a sequence (u_1, …, u_k) of vertices of V(G')is removable if for i=1,…,k, |L'(u_i)| > ∑_e ∈ E_G'_i(u_i)λ_M'_e(u_i), where G'_i=G' - {u_1, …, u_i-1} for i=1,…, k. A vertex is called removable if it is contained in some removable sequence. It follows from Lemma <ref> that if a sequence (u_1,…, u_k) of vertices of V(G') isremovablewith respect to (L',M'), then G' is (L',M')-colourable if and only if G'-{u_1,…, u_k} is (L',M')|_G'-{u_1,…, u_k}-colourable. Let G' be a subgraph of G_0 with u_1 u_2 ∈ E(G') and (L',M') ⊆ (L, M) be a cover of G'. Assume one of the following holds: * |L'(u_2)| ≥ d_G'(u_2) andE_G'-u_1(u_2) ∩ F = ∅.* |L'(u_2)| ≥min{d_G'(u_2)+2, ∑_e ∈ E_G'(u_2)λ_M'_e(u_2) } and |E_G'-u_1(u_2) ∩ F| ≤ 1. If u_1 is removable with respect to (L',M'), then so is u_2. It suffices to prove that |L'(u_2)| > ∑_e ∈ E_G'-u_1(u_2)λ_M'_e(u_2). If (1) holds, then|L'(u_2)| ≥ d_G'(u_2) > d_G'-u_1(u_2) = ∑_e ∈ E_G'-u_1(u_2)λ_M'_e(u_2).Assume (2) holds. If |L'(u_2)| < d_G'(u_2)+2, then we have |L'(u_2)| ≥∑_e ∈ E_G'(u_2)λ_M'_e(u_2) > ∑_e ∈ E_G'-u_1(u_2)λ_M'_e(u_2). Otherwise, we have |L'(u_2)| ≥ d_G'(u_2)+2. Since |E_G'-u_1(u_2) ∩ F| ≤ 1, we have∑_e ∈ E_G'-u_1(u_2)λ_M'_e(u_2)≤ d_G'-u_1(u_2)+2 = d_G'(u_2)+1 < |L'(u_2)|.Let P=u_1u_2… u_k be a path of G_0. Let (L',M') ⊆ (L,M) be a cover of P such that|L'(u_i)| ≥ 2,if i ∈{1,k},min{4, ∑_e ∈ E_P(u_i)λ_M'_e(u_i)},if i ∈{2,…,k-1}, and λ_M'_u_1u_2(u_1) = 1 if |L'(u_1)|=2. Then P is (L',M')-colourable. We prove the Lemma by induction on the number of vertices of P.We first consider the case that k = 2, i.e. P=u_1u_2. If |L'(u_1)| = 2, then λ_M'_u_1u_2(u_1) =1, and M'_u_1u_2 is either a matching or a copy of K_1,2 with the degree 2 node in L'(u_1). If |L'(u_1)| ≥ 3, then L'(u_1) contains some node of degree at most 1 in M'_u_1u_2.In any case, there exists a ∈ L'(u_1) of degree at most 1 in M'_u_1u_2. Therefore, there is b ∈ L'(u_2)-N_M'(a) and ϕ(u_1)=a and ϕ(u_2)=b isan (L',M')-colouring ϕ of P.Assume k ≥ 3. Let P' = P - u_1. If |L'(u_1)| = 2, then λ_M'_u_1u_2(u_1)=1. We have |L'(u_1)|> ∑_e ∈ E_P(u_1)λ_M'_e(u_1). By Lemma <ref>, P is (L',M')-colourable if and only if P' is (L',M')|_P'-colourable. Since |L'(u_2)| ≥ 2, with equality only if λ_M'_u_2u_3(u_2)=1, P' is (L',M')|_P'-colourablebyinduction hypothesis.Suppose |L'(u_1)|≥ 3. If λ_M'_u_1u_2(u_1)≤ 2, then |L'(u_1)|> ∑_e ∈ E_P(u_1)λ_M'_e(u_1) and we can argue similarly as in the previous case to show that P is (L',M')-colourable. We thus assume that λ_M'_u_1u_2(u_1)=3. This implies that λ_M'_u_1u_2(u_2)≥ 2 and |L'(u_2)| ≥ 3. Moreover, |L'(u_2)| = 3 only if λ_M'_u_2u_3(u_2)=1. By Lemma <ref>, there exists a ∈ L'(u_1) such that |N_M'_u_1u_2(a)|≤ 1. Let (L”,M”)=(L',M')|_P'- N_M'(a). Then (L”,M”) is a cover of P' such that |L”(u_2)| ≥ 2 and if |L”(u_2)| =2, then |L'(u_2)| =3 and λ_M”_u_2u_3(u_2) ≤λ_M'_u_2u_3(u_2)=1. By the induction hypothesis, P' has an (L”,M”)-colouring ϕ. Thus P is (L',M')-colourable as ϕ can be extended to an (L',M')-colouring of P by letting ϕ(u_1)=a. Suppose v ∈ V(G_0) satisfying N_G_0(v)={u_1,u_2,u_3} and E_G_0(v) ⊈F. If ∑_i=1^3 λ_M_vu_i(v) ≥ 5, then λ_M_vu_i(v)≥ 2 for exactly two indices i ∈{1,2,3}. Since E_G_0(v) ⊈F, we have λ_M_vu_i(v)≥ 2 for at mosttwo indices i ∈{1,2,3}. Supposethe Lemma does not hold, say λ_M_vu_1(v)=3 and λ_M_vu_2(v)=λ_M_vu_3(v)=1. 
Then M_vu_1 is a matching plus a subgraph of K_2,2. Let (L',M') be obtained from (L,M) by deleting the two nodes in L(v) incident to links of the copy of K_2,2. Then M'_vu_i is a matching for each i=1, 2, 3, and ∑_e ∈ E_G_0(v)λ_M'_e(v)=∑_e ∈ E_G_0(v)λ_M_e(v) - 2≤ |L(v)|-2 = |L'(v)|. If G_0=K_4, then |L(u_1)| ≥ 4, and hence M'_vu_1 is not a perfect matching. Therefore (L',M') is an (F-vu_1)-valid cover of G_0. By our choice of the counterexample,G_0 has an (L',M')-colouring, which is also an (L,M)-colouring of G_0, a contradiction. There is a vertex v∈ V(G_0) with |L(v)|=5 < ∑_e ∈ E_G_0(v)λ_M_e(v).Suppose, to the contrary, that the Lemma does not hold. We have 3 ≤ |L(v)| =∑_e ∈ E_G_0(v)λ_M_e(v) ≤ 5 for every v∈ V(G_0). Let (L',M') be a cover of G_0 obtained from (L,M) by the following operation: For any edge e=uv ∈ E(G_0) such that M_e is either a copy of K_1,2 or the union of a matching and a copy of K_1,2 with the degree 2 node of the copy of K_1,2 in L(v), delete one of the degree 1 nodes in L(u) of the copy of K_1,2. For each such an operation, λ_M_e(u) is decreased by 1 while λ_M_e(v) remains unchanged. In particular, we have λ_M'_e(u) = λ_M'_e(v) for all e = uv ∈ E(G_0) and 5 ≥ |L'(v)| ≥∑_e ∈ E_G_0(v)λ_M'_e(v) for all v ∈ V(G_0).For each edge e=uv, let m(e) = λ_M'_e(u). Then M'_e is the union of m(e) matchings. Let G' be the graph obtained from G_0 by replacing each edge e of G_0 by m(e) parallel edges. So (L',M') is a simple f-cover of G' with f(v) = d_G'(v) for all v. Moreover, if G' = K_4^t for some t or G' = K_5, then there is an edge e ∈ E(G') such that M'_e is not a perfect matching.By Theorem <ref>, G' has an (L', M')-colouring, which is also an (L,M)-colouring of G_0, a contradiction. We are now ready to show that G_0 ∉𝒲∪𝒢∪𝒢'. The proofs are given in the following subsections. §.§ G_0 ∉𝒲 Recall that W_n is the join of a cycle v_1 v_2 … v_n v_1 and a vertex v_n+1. Let F(W_n) = {v_n+1v_1, v_1v_2, v_2v_3, …, v_n-1v_n, v_nv_1}, which consists of the rims and one spoke. If G_0=W_n with n≥ 3, then F is not a subset of F(W_n). Suppose, to the contrary, that F is a subset of F(W_n).Case 1: |L(v_t)|≤ 4 fort∈ [n]. By Lemma <ref>, there exists some vertex v_i with |L(v_i)|=5. Thus |L(v_n+1)|=5. Since v_2v_n+1∉ F, M_v_2v_n+1 is a matching. Thus there exists a∈ L(v_n+1) such that N_M(a) ∩ L(v_2)=∅. LetG'=G_0-v_n+1 and (L', M')= (L,M)|_G' - N_M(a).Then an (L',M')-colouring ϕ of G' can be extended to an (L,M)-colouring of G_0 by letting ϕ(v_n+1)=a, and it suffices to show that G' is (L',M')-colourable.Since |L(v_2)|≤ 4, then |L(v_2)|=λ_(L,M)(v_2) and hence|L'(v_2)|=|L(v_2)|>λ_(L',M')(v_2). By Lemma <ref>, (v_2,v_3,…, v_n, v_1) is a removable sequence with respect to (L',M') and hence G' is (L',M')-colourable, a contradiction. Case 2: |L(v_t)|=5 and |L(v_t+1)|≤ 4 forsome t∈ [n-1]. If |L(v_n+1)|=5, then there exists a∈ L(v_n+1) such that N_M(a) ∩ L(v_t+1)=∅. LetG'=G_0-v_n+1 and (L', M')= (L,M)|_G' - N_M(a). It follows from Lemma <ref> that(v_t+1,v_t+2,…, v_n, v_t, v_t-1,…, v_1)is a removable sequence with respect to (L',M') and hence G' is (L',M')-colourable, a contradiction.Assume |L(v_n+1)|≤ 4.By Lemma <ref>, there exists a∈ L(v_t) such that |N_M(a) ∩ L(v_t+1)|<λ_v_tv_t+1(v_t+1). LetG'=G_0-v_t and (L', M')= (L,M)|_G' - N_M(a). It suffices to show that G' is (L',M')-colourable.Since |L(v_t+1)|≤ 4, we have |L'(v_t+1)|>λ_(L',M')(v_t+1). Since |L(v_n+1)|≤ 4, then |L(v_n+1)|=λ_(L,M)(v_n+1). 
By Lemma <ref>,(v_t+1,v_t+2,…, v_n, v_n+1, v_1, v_2,…, v_t-1)is a removable sequence with respect to (L',M') and hence G' is (L',M')-colourable, a contradiction. Case 3: |L(v_t)| = 5 for all t∈ [n].Since |L(v_2)| = 5, d_G_0(v_2)=3 and v_2 v_n+1∉ F, by Lemma <ref>, λ_M_v_1v_2(v_2) ≥ 2 and λ_M_v_2v_3(v_2) ≥ 2.By Lemma <ref>,there are at most 2 nodes a∈ L(v_2) such that |N_M(a) ∩ L(v_i)|≥min{2, λ_M_v_iv_2(v_i)} for i ∈{1,3}.Thus there exists a node a ∈ L(v_2)such that|N_M(a) ∩ L(v_i)| ≤min{1, λ_M_v_2v_i(v_i)-1 } for  i ∈{1,3}.If |L(v_n+1)|=3, then M_v_n+1v_1 is either a matching or a copy of K_1,2 with the degree 2 node in L(v_n+1). If |L(v_n+1)|≥ 4, then |L(v_n+1) ∖ N_M(a)|≥ 3. In any case, there exists b ∈ L(v_n+1) ∖ N_M(a) such that |N_M(b) ∩ L(v_1)| ≤ 1.LetG'=G_0-v_2-v_n+1 and (L',M')=(L,M)|_G' - (N_M(a) ∪ N_M(b)). By the choice of a and b, we have|L'(v_i)| ≥ 3,if i∈{1,3},4,if i ∈{4,…,n}. By Lemma <ref>, the path G'=v_3v_4...v_n-1v_nv_1 is (L',M')-colourable, a contradiction. G_0 ∉𝒲.Suppose, to the contrary, that G_0∈𝒲. By Lemma <ref> and Lemma <ref>, G_0=W_4 andF is a subset of one of the maximal subdividable sets depicted in Figure <ref>.By Lemma <ref>, there is at least one vertex v_i with |L(v_i)|=5 and ∑_e ∈ E_G_0(v_i)λ_M_e(v_i)>5.Case 1: F is a subset of the maximal subdividable set given in Figure <ref>(a).For j ∈{1, 3},since d_G_0(v_j)=3 and |E_G_0(v_j)∩ F|≤ 1,|L(v_j)|≤ 4 by Lemma <ref>. Assume|L(v_5)|=5.Since |L(v_1)|≤ 4, there existsa∈ L(v_5) such that N_M(a)∩ L(v_1)=∅. Let G'=G_0-v_5 and(L', M')= (L,M)|_G'- N_M(a). If |L(v_4)|=5, then |L'(v_4)|≥ 5-3=2. If |L(v_4)|≤ 4, then |L(v_4)|=λ_(L,M)(v_4) and |L'(v_4)|=λ_(L',M')(v_4). Thus(v_1,v_4,v_3,v_2) is a removable sequence with respect to (L',M') by Lemma <ref>. By Lemma <ref>, G' is (L',M')-colourable, a contradiction.Thus |L(v_5)|=d_G_0(v_5)=4, and at least one of |L(v_2)|, |L(v_4)| equals 5. By symmetry,we may assume that |L(v_2)|= 5.By Lemma <ref>, there existsa∈ L(v_2) such that N_M(a)∩ L(v_5)=∅. Let G'=G_0-v_2 and(L', M')= (L,M)|_G'-N_M(a).Thus, by Lemma <ref> and Lemma <ref>, (v_5, v_1, v_4, v_3) is a removable sequence with respect to (L',M')and G' is (L',M')-colourable, a contradiction.Case 2:F is a subset of the maximal subdividable set given in Figure <ref>(b). For j ∈ [3], since d_G_0(v_j)=3 and |E_G_0(v_j)∩ F|≤ 1, |L(v_j)|≤ 4 by Lemma <ref>.Assume |L(v_4)|=5.There exists a node a in L(v_4) such that |N_M(a) ∩ L(v_1)| <λ_M_v_4v_1(v_1). Let G'=G_0-v_4 and (L', M')= (L,M)|_G'- N_M(a).Thus (v_1, v_2, v_3,v_5) is a removable sequence with respect to (L',M') byLemma <ref> and hence G' is (L',M')-colourable, a contradiction.Assume |L(v_5)|=5. Since |L(v_2)|≤ 4, there exists a∈ L(v_5) such that|N_M(a) ∩ L(v_2)| < λ_M_v_2v_5(v_2). Let G'=G_0-v_5 and(L', M')= (L,M)|_G'-N_M(a). Since |L'(v_2)|>∑_e ∈ E_G'(v_2)λ_M'_e(v_2), by Lemma <ref>, (v_2, v_1, v_3,v_4) is a removable sequence with respect to (L',M'). Thus G' is (L',M')-colourable by Lemma <ref>, a contradiction. §.§ G_0 ∉𝒢For G_0 = G^(+)_n,r,s∈𝒢, we assume r ≤ s and label the vertices of G_0 as v_1,v_2, …, v_n so that v_1 v_2 … v_n is a spine of G_0. 
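For orientation (this worked example is not part of the original text; it merely instantiates the definition of G_n,r,s given earlier): the graph G_7,2,3 has spine v_1v_2v_3v_4v_5v_6v_7 together with the edges v_1v_6, v_1v_5 (coming from r=2) and v_7v_2, v_7v_3, v_7v_4 (coming from s=3); here r+s=5=n-2, so G_7,2,3 ∈ 𝒢, and its second spine is the path v_5v_4v_3v_2v_1v_6v_7.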
If G_0 ∈{G^(+)_n, r, s: r ≥ 2, s ≥ 3, r+s=n-2}, or G_0 ∈{G^(+)_n, r, s: r ≥ 2, s ≥ 3, r+s=n-1} with|L(v_s+1)| = 4, then F is not a subset of the spine.Suppose, to the contrary, that G_0 ∈{G^(+)_n, r, s: r ≥ 2, s ≥ 3, r+s=n-2}, or G_0 ∈{G^(+)_n, r, s: r ≥ 2, s ≥ 3, r+s=n-1} with |L(v_s+1)| = 4, and F is a subset of edges of the spine v_1 v_2 … v_n.Recall that v_n v_i ∈ E(G_0) for i=2,3, …, s,s+1, v_1 v_i ∈ E(G_0) for i= n-r, …, n-1, and v_1v_n may or may not be an edge of G_0. Therefore, we have d_G_0(v_i)=3 for all 2≤ i ≤ n-1 if r+s=n-2, and d_G_0(v_i)=3 for all i ∈{2, …, s}∪{s+2, …, n-1} and d_G_0(v_s+1)=4 if r+s=n-1. Moreover, d_G_0(v_1) ≥ 3 and d_G_0(v_n) ≥ 4. Case 1: There exists a∈ L(v_n) such that |N_M_v_n-1v_n(a)|≤min{1, λ_M_v_n-1v_n(v_n-1)-1}.Let G'=G_0-v_n and (L',M')=(L,M)|_G' - N_M(a) be a cover of G'. Then an (L',M')-colouring ϕ of G' can be extended to an (L,M)-colouring of G_0 by letting ϕ(v_n)=a, and it suffices to show that G' is (L',M')-colourable. It is easy to see that |L'(v_1)| ≥ 3 (regardless of whether v_1 v_n ∈ E(G_0) or not). If |L'(v_2)|=2, then λ_M_v_1v_2(v_2)=1 and we can choose a node b ∈ L'(v_1) such thatN_M'_v_1 v_2(b)=∅. If |L'(v_2)|≥ 3, then we choose b∈ L'(v_1) such that |N_M'_v_1 v_2(b)|≤ 1. Let G”=G'-v_1 and (L”,M”)=(L',M')|_G” - N_M'(b). It remains to show that the path G” is (L”,M”)-colourable. By the choice of a and b, we know that |L”(v_2)| ≥ 2 and|L”(v_n-1)|≥ |L(v_n-1)|-min{1, λ_M_v_n-1v_n(v_n-1)-1} - 1≥min{5, ∑_e ∈ E_G_0(v_n-1)λ_M_e(v_n-1)} -λ_M_v_n-1v_n(v_n-1)≥ d_G'(v_n-1)=2.Moreover, if |L”(v_n-1)|=2, then min{1, λ_M_v_n-1v_n(v_n-1)-1} = λ_M_v_n-1v_n(v_n-1)-1 and |L(v_n-1)|-λ_M_v_n-1v_n(v_n-1)=d_G'(v_n-1), from which we obtain that λ_M_v_n-1v_n(v_n-1) ≤ 2 and ∑_e ∈ E_G'(v_n-1)λ_M_e(v_n-1) = d_G'(v_n-1). This implies that λ_M_v_n-1v_n-2(v_n-1)=1. For r+s=n-2 and i ∈{3, …, n-2}, or r+s=n-1 and i ∈{3, …, s}∪{s+2, …, n-2}, we have|L”(v_i)|≥ |L(v_i)| - 1≥min{5, ∑_e ∈ E_G_0(v_i)λ_M_e(v_i)} - 1≥min{4, ∑_e ∈ E_G”(v_i)λ_M”_e(v_i)}.For r+s=n-1, we have|L”(v_s+1)|≥ |L(v_s+1)| - 2= ∑_e ∈ E_G”(v_s+1)λ_M”_e(v_s+1)= min{4, ∑_e ∈ E_G”(v_s+1)λ_M”_e(v_s+1)}.Thus we can apply Lemma <ref> to conclude that G” is (L”,M”)-colourable, a contradiction. Case 2: For every a∈ L(v_n), |N_M_v_n-1v_n(a)| > min{1, λ_M_v_n-1v_n(v_n-1)-1}.By Lemma <ref>, λ_M_v_n-1v_n(v_n-1)=1. As |N_M_v_n-1v_n(a)| > 0 for every a∈ L(v_n), M_v_n-1v_n is a matching and |L(v_n-1)| ≥ |L(v_n)|≥4. As |L(v_1)|≥3 and (L,M) is F-valid, there exists a∈ L(v_1) such that |N_M_v_1 v_2(a)|≤ 1.Let G'=G_0-v_1 and (L',M')=(L,M)|_G' - N_M(a). Then it remains to show that G' is (L',M')-colourable. Since d_G'(v_n)≥4, either v_1v_n ∉ E(G) and |L'(v_n)| = |L(v_n)|≥ 4,or v_1v_n ∈ E(G) and hence |L(v_n)|=5 and |L'(v_n)| ≥|L(v_n)| -1 ≥ 4. If |L'(v_2)|≤3, then we can choose a node b∈ L'(v_n) such thatN_M'_v_2 v_n(b)=∅. If |L'(v_2)|≥ 4, then let b be an arbitrary node in L'(v_n).Let G”=G'-v_n and (L”,M”)=(L',M')|_G” - N_M'(b). Now it suffices to show that G” is (L”,M”)-colourable. By the choice of a and b, we know that|L”(v_i)| ≥ 2,if i∈{2, n-1}, min{4, ∑_e ∈ E_G”(v_i)λ_M”_e(v_i)},if i∈{3, …, n-2}. Moreover, if |L”(v_2)|=2, then λ_M”_v_2v_3(v_2)=λ_M_v_2v_3(v_2)=1. By Lemma <ref>, G” is (L”,M”)-colourable, a contradiction.If G_0 ∈{G^(+)_n, r, s: r ≥ 2, s ≥ 3, r+s=n-1} and |L(v_s+1)| = 5, then F is not a subset of the spine.Suppose, to the contrary, that G_0 = G^(+)_n, r, s with r ≥ 2, s ≥ 3, r+s=n-1, |L(v_s+1)|=5 and F is a subset of edges of the spine v_1 v_2 … v_n. 
In this case, d_G_0(v_i)=3 for i ∈{2, …, s}∪{s+2, …, n-1} and d_G_0(v_s+1)=4. Case 1: |L(v_1)|≤ 4 or |L(v_n)|≤ 4.We assume that |L(v_n)| ≤ 4, and the case for |L(v_1)| ≤ 4 can be proved similarly. Since|L(v_s+1)|=5>|L(v_n)| and M_v_s+1 v_n is a matching, we can choose a node a∈ L(v_s+1) such that N_M_v_s+1 v_n(a)=∅.Let G'=G_0-v_s+1 and (L',M')=(L,M)|_G' - N_M(a). So it remains to show that G' is (L',M')-colourable. Observe that since |L(v_n)| ≤ 4,|L'(v_n)| =|L(v_n)|=∑_e ∈ E_G_0(v_n)λ_M_e(v_n)>∑_e ∈ E_G'(v_n)λ_M'_e(v_n).It follows from Lemma <ref> that (v_n,v_n-1,…,v_s+2,v_1,v_2,…,v_s)is a removable sequence with respect to (L',M'), a contradiction.Case 2: |L(v_1)|=5 and |L(v_n)|=5.Assume first that |L(v_t)|≤ 4 for some t with 2 ≤ t ≤ s. Choose a node a ∈ L(v_n) such that N_M_v_n v_t(a) = ∅. Let G'=G_0-v_n and let (L', M')= (L,M)|_G' - N_M(a). It suffices to prove that G' is (L', M')-colourable. Since|L'(v_t)| =|L(v_t)|=∑_e ∈ E_G_0(v_t)λ_M_e(v_t) >∑_e ∈ E_G'(v_t)λ_M'_e(v_t), It follows from Lemma <ref> that (v_t,v_t-1, …, v_2, v_t+1,v_t+2, …, v_s) is a removable sequence with respect to (L',M').Thus it suffices to prove that G'-{v_2,…,v_s} is (L',M')|_G'-{v_2,…,v_s}-colourable.If |L'(v_n-1)|=2, then there exists b ∈ L'(v_1) with N_M'_v_1 v_n-1(b) = ∅ since |L'(v_1)| ≥ |L(v_1)|-1 = 4. If L'(v_n-1)≥ 3, then we take b to be an arbitrary node in L'(v_1). Let G”=G'-{v_1,v_2,…,v_s} and (L”,M”) = (L',M')|_G” - N_M'(b). It suffices to show G”=v_s+1v_s+2… v_n-1 is (L”,M”)-colourable. By the choice of b, we have|L”(v_i)| ≥5-2=3,if i=s+1,|L(v_i)|-1,if i ∈{s+2,…,n-2},2,if i=n-1.By Lemma <ref>,G” is (L”,M”)-colourable,a contradiction.The case that |L(v_t)|≤ 4 for some t with s+2≤ t ≤ n-1 can be proved similarly. Thus we have that |L(v_i)| = 5 for all 1 ≤ i ≤ n.By Lemma <ref>,there exists a node a ∈ L(v_s+1) such that|N_M(a) ∩ L(v_s)| ≤ 1 and |N_M(a) ∩ L(v_s+2)| ≤ 1. Moreover, since |L(v_1)|=5, there exists b ∈ L(v_1)-N_M(a) such that |N_M(b) ∩ L(v_2)| ≤ 1. As |L(v_n)-N_M(a)-N_M(b)|≥ 5-2=3, we can choose c ∈ L(v_n)-N_M(a)-N_M(b) such that |N_M(c) ∩ L(v_n-1)| ≤ 1. Let G' = G_0 - {v_1, v_s+1, v_n} and (L',M')=(L,M)|_G' - N_M(a) ∪ N_M(b) ∪ N_M(c). Hence, it suffices to show that G' is (L',M')-colourable. Note that G' is the union of two disjoint paths P_1 = v_2 … v_s and P_2 = v_s+2… v_n-1.By the choice of a, b and c, we know that |L'(v_i)| ≥ 5 - 2 = 3 for every i ∈{2, s, s+2, n-1}. Therefore, by Lemma <ref>, P_1 and P_2 are (L',M')|_P_1-colourable and (L',M')|_P_2-colourable, respectively. This implies that G' is (L',M')-colourable, a contradiction. G_0G_6,2,3.Suppose to the contrary that G_0=G_6,2,3. It follows from Lemma <ref> that F is a subset of the spine v_1v_2...v_6 or a subset of the second spine v_4v_3v_2v_1v_5v_6.By Lemma <ref> and Lemma <ref>,F is not a subset of the spine and hence it is a subset of the second spine v_4v_3v_2v_1v_5v_6.By Lemma <ref>, there exists some vertex v_i with |L(v_i)|=5 for i∈ [6]. Case 1: |L(v_4)|=5 or |L(v_6)|=5. By symmetry, we assume that |L(v_4)|=5.Assume |L(v_6)|=d_G_0(v_6)=4. Let a∈ L(v_4) such that N_M(a)∩ L(v_6)=∅.Let G'=G_0-v_4 and (L', M')= (L,M)|_G'- N_M(a).Thus (v_6, v_5, v_1, v_2, v_3) is a removable sequence with respect to (L',M')and G_0 is (L,M)-colourable by Lemma <ref> and Lemma <ref>, a contradiction. So |L(v_4)|=|L(v_6)|=5. Suppose |L(v_3)|≤ 4. Let a∈ L(v_4) such that |N_M(a)∩ L(v_3)|<λ_M_v_3v_4(v_3). Let G'=G_0-v_4 and (L', M')= (L,M)|_G' - N_M(a).Thus (v_3, v_2, v_1, v_5, v_6) is a removable sequence with respect to (L',M') by Lemma <ref>. 
Thus G_0 is (L,M)-colourable by Lemma <ref>, a contradiction. Similarly, one can show that |L(v_j)|=5 for any j ∈{1,2,5}. Hence we have |L(v_j)|=5 for all j∈ [6].Now let a∈ L(v_4) such that |N_M(a)∩ L(v_3)|≤ 1 and let b∈ L(v_6)-N_M(a) such that |N_M(b)∩ L(v_5)|≤ 1.Let G'=G_0-v_4-v_6 and (L', M')= (L,M)|_G'- (N_M(a)∪N_M(b)). By the choice of a and b, |L'(v_i)|≥ 3 for i=3, 5 and |L'(v_j)|≥ 4 for j=1,2. Thus G'=v_3v_2v_1v_5 is (L',M')-colourable by Lemma <ref>, a contradiction. Case 2: |L(v_4)|≤4, |L(v_6)|≤4 and |L(v_i)|=5 for some i∈{1,2}.Without loss generality, suppose |L(v_2)|=5. Let a∈ L(v_2) such that N_M(a)∩ L(v_6)=∅.Let G'=G_0-v_2 and(L', M')= (L,M)|_G'-N_M(a). Thus (v_6, v_5, v_4, v_3, v_1) is a removable sequence with respect to (L',M') by Lemma <ref>. Thus G_0 is (L,M)-colourable by Lemma <ref>, a contradiction.Case 3: |L(v_4)|≤4, |L(v_6)|≤4 and |L(v_i)|=5 for some i∈{3,5}.Without loss generality, suppose |L(v_3)|=5. Let a∈ L(v_3) such that N_M(a)∩ L(v_6)=∅.Let G'=G_0-v_3 and(L', M')= (L,M)|_G'-N_M(a). Thus (v_6, v_5, v_1, v_2, v_4) is a removable sequence with respect to (L',M') by Lemma <ref>. Thus G_0 is (L,M)-colourable by Lemma <ref>, a contradiction. G_0 ∉𝒢. Define σ_n(v_i) = v_n-i-1 for i ∈{1, …, n-2}, σ_n(v_n-1) = v_n-1 and σ_n(v_n)=v_n. Note that for n ≥ 7, σ_n is an isomorphism between G_n, 2, n-3 and G_n, 2, n-4^+ such that the second spine of G_n, 2, n-3 (respectively, G_n, 2, n-4^+) corresponds to the spine of G_n, 2, n-4^+ (respectively, G_n, 2, n-3). Therefore it follows from Lemmas [<ref>, <ref>–<ref>] that if G_0 ∈𝒢, thenG_0=G_7,2,3 and F is a subset of {v_1v_2,v_4v_5,v_6v_7,v_3v_7}.For i∈ [6], since d_G_0(v_i)=3 and |E_G_0(v_i)∩ F|=1, we have |L(v_i)|≤ 4 by Lemma <ref>. Moreover, it follows from Lemma <ref> that |L(v_7)|=5. By Lemma <ref>, there is a node a∈ L(v_7) such that |N_M(a)∩ L(v_6)|<λ_M_v_6v_7(v_6).Let G'=G_0-v_7 and (L',M')=(L,M)|_G'- N_M(a). Since|L'(v_6)|>|L(v_6)|-λ_M_v_6v_7(v_6)=∑_e ∈ E_G'(v_6)λ_M'_e(v_6),(v_6, v_5,v_4,v_3,v_2, v_1)is a removal sequence with respect to (L',M') by Lemma <ref>. Hence G_0 is (L,M)-colourable by Lemma <ref>, a contradiction.§.§ G_0 ∉𝒢' G_0K_5.If G_0 = K_5, then, by Lemma <ref>, F = ∅ and hence M_e is a matching for every edge e of G_0. By our assumption that G_0 is F-valid, there is an edge e such that M_e is not a perfect matching. By Theorem <ref>, G_0 is (L,M)-colourable, a contradiction. G_0K_5^-.Suppose to the contrary that G_0= K_5^-. The maximal subdividable sets of K_5^- are shown in Figure <ref>. Case 1:F is a subset of the maximal subdividable set in Figure <ref>(a).Subcase 1.1:|L(v_5)|=5.Suppose that |L(v_i)|≤ 4 for some i∈ [4]. By Lemma <ref>, there existsa∈ L(v_5) such that |N_M(a)∩ L(v_i)|<λ_M_v_5v_i(v_i). Let G'=G_0-v_5 and(L', M')= (L,M)|_G'- N_M(a). If i∈{1,3}, then (v_1, v_4,v_3, v_2) or (v_3, v_2,v_1, v_4) is a removable sequence with respect to (L',M') byLemma <ref>. If i∈{2,4}, then (v_2, v_3,v_1, v_4) or (v_4, v_1,v_3, v_2) is a removable sequence with respect to (L',M') byLemma <ref>. So G' is (L',M')-colourable by Lemma <ref>, a contradiction. Thus we have |L(v_i)|=5 for all i∈ [5].By Lemma <ref> and Lemma <ref>, there exists a∈ L(v_2) such that for each i∈{3, 5}, |N_M(a) ∩ L(v_i)| ≤min{1, λ_M_v_2v_i(v_i)-1}.Similarly, there exists b∈ L(v_4) such that for each i∈{1, 5}, |N_M(b) ∩ L(v_i)| ≤min{1, λ_M_v_4v_i(v_i)-1}. Let G'=G_0-v_2-v_4 and(L', M')= (L,M)|_G'- N_M(a) ∪ N_M(b). By our choice of a and b, we have |L'(v_j)|≥ 3 for all j ∈{1,3,5}. 
Thus it is easy to see that (v_1,v_3,v_5) is a removable sequence with respect to (L',M'), a contradiction.Subcase 1.2: |L(v_5)|=4. Suppose |L(v_2)|=5 or |L(v_4)|=5. By symmetry, we may assume that |L(v_2)|=5.By Lemma <ref> and Lemma <ref>, there exists a∈ L(v_2) such that|N_M(a) ∩ L(v_3)| ≤min{1, λ_M_v_2v_3(v_3)-1}.Let G'=G_0-v_2 and (L', M')= (L,M)|_G'- N_M(a). Thus (v_3, v_5, v_4,v_1) is a removable sequence with respect to (L',M') by Lemma <ref> and hence G' is (L',M')-colourable, a contradiction. We thus have |L(v_2)| ≤ 4 and |L(v_4)| ≤ 4. By Lemma <ref>, |L(v_1)|=5 or |L(v_3)|=5. By symmetry, we may assume that |L(v_1)|=5. Let a∈ L(v_1) such that N_M(a)∩ L(v_5)=∅. Let G'=G_0-v_1 and (L', M')= (L,M)|_G'-N_M(a). Thus (v_5, v_2, v_4,v_3) is a removable sequence with respect to (L',M') by Lemma <ref>. So, by Lemma <ref>, G' is (L',M')-colourableand hence G_0 is (L,M)-colourable. Case 2:F is a subset of the maximal subdividable set in Figure <ref>(b) and not that in Figure <ref>(a).Note that it follows from the case condition that both M_v_1v_4 and M_v_3v_4 are not matchings.Assume |L(v_4)|=5. Since M_v_1v_4 is not a matching, there exists a∈ L(v_4) such that |N_M(a) ∩ L(v_1)| < λ_M_v_1v_4(v_1). Let G'=G_0-v_4 and(L', M')= (L,M)|_G'- N_M(a). By the choice of a and Lemma <ref>, (v_1, v_2, v_3,v_5) is a removable sequence with respect to (L',M') and G' is (L',M')-colourable, a contradiction. So |L(v_4)|≤ 4. Assume|L(v_5)|=5. Since |L(v_4)|≤ 4, there exists a∈ L(v_5) such that |N_M(a) ∩ L(v_4)| < λ_M_v_4v_5(v_4). Let G'=G_0-v_5 and(L', M')= (L,M)|_G' - N_M(a). Thus (v_4, v_1, v_3,v_2) is a removable sequence with respect to (L',M') and G' is (L',M')-colourable by Lemma <ref> and Lemma <ref>, a contradiction. Thus |L(v_5)|=d_G_0(v_5)= 4.Assume |L(v_1)|=5.Let a∈ L(v_1) such that N_M(a)∩ L(v_5)=∅.Let G'=G_0-v_1 and(L', M')= (L,M)|_G'-N_M(a). Since L'(v_5)>∑_e ∈ E_G'(v_5)λ_M'_e(v_5), (v_5, v_2, v_4,v_3) is a removable sequence with respect to (L',M') by Lemma <ref>. Thus G' is (L',M')-colourable and hence G_0 is (L,M)-colourable, a contradiction. Thus |L(v_1)|≤4. Similarly, we have |L(v_3)|≤4.By Lemma <ref>, we must have |L(v_2)| = 5, which, however, contradicts Lemma <ref>. G_0K_3K_2.Suppose to the contrary that G_0= K_3K_2. By Lemma <ref>, the subdividable set F is a subset of the path v_1v_2...v_6 shown in Figure <ref>(a). Since d_G_0(v_i)=3 and |E_G_0(v_i)∩ F|≤ 1 for i∈{1, 6}, we have |L(v_i)|≤ 4 by Lemma <ref>. By Lemma <ref>, there exists a vertex v_i with |L(v_i)|=5 for some i={2, 3, 4, 5}. Assume |L(v_2)|=5. Let a∈ L(v_2) such that N_M(a)∩ L(v_6)=∅. Let G'=G_0-v_2 and (L', M')= (L,M)|_G'-N_M(a). Since|L'(v_6)|>∑_e ∈ E_G'(v_6)λ_M'_e(v_6),(v_6, v_5, v_4, v_3, v_1) is a removable sequence with respect to (L',M') by Lemma <ref>. Thus G' is (L',M')-colourable and hence G_0 is (L,M)-colourable, a contradiction.Similarly, one can prove that |L(v_i)|≤4 for all i={3, 4, 5}. We conclude that G_0K_3K_2.G_0 ∉𝒢'.By Lemmas [<ref>–<ref>], it suffices to show that G_0 ∉{K_3,3,A,A^+, B, B^+, C, C^+,D}.Assume that G_0 ∈{K_3,3, A}. We label the vertices of G_0 as Figure <ref>(b). For i∈{2,...,6}, we have either d_G_0(v_i) = 3 and |E_G_0(v_i)∩ F|≤ 1, or d_G_0(v_i) = 4 and E_G_0(v_i)∩ F=∅. Thus, by Lemma <ref>, we have|L(v_i)|≤ 4. It then follows from Lemma <ref> that |L(v_1)|=5. By Lemma <ref>, there is a vertex a∈ L(v_1) such that |N_M(a)∩ L(v_4)|<λ_M_v_1v_4(v_4). Let G'=G_0-v_1 and (L',M')=(L,M)|_G'- N_M(a). 
Since |L'(v_4)|>|L(v_4)|-λ_M_v_1v_4(v_4)=∑_e ∈ E_G'(v_4)λ_M'_e(v_4), it follows from Lemma <ref> that (v_4, v_2,v_3,v_5,v_6)is a removable sequence with respect to (L',M'). Hence G_0 is (L,M)-colourable, a contradiction.Assume that G_0∈{A^+, B, B^+, C, C^+}. Observe that d_G_0(v)≤ 4 for each vertex v ∈ V(G_0). Moreover, if v ∈ V(G_0) is incident with any edges in F, then d_G_0(v)=3 and |E_G_0(v)∩ F|=1. Thus it follows from Lemma <ref> that for every vertex v of G_0, |L(v)|=∑_e∈ E_G_0(v)λ_M_e(v), which contradicts toLemma <ref>.Assume that G_0=D. We label the vertices of G_0 as Figure <ref>(c).For i={2, 3, 5, 7}, since d_G_0(v_i)=3 and |E_G_0(v_i)∩ F|≤1, it follows from Lemma <ref> that |L(v_i)|≤ 4.It is clear that if M_v_1v_2 is a matching, then |L(v_1)| > |L(v_2)|. Thus, it follows from Lemma <ref> that there is a node a ∈ L(v_1) such that |N_M(a)∩ L(v_2)|<λ_M_v_1v_2(v_2).Let G'=G_0-v_1 and (L',M')=(L,M)|_G' - N_M(a). Since |L'(v_2)|>∑_e ∈ E_G'(v_2)λ_M'_e(v_2), it follows from Lemma <ref> and Lemma <ref> that (v_2, v_5,v_7,v_6,v_3, v_4) is a removable sequence with respect to (L',M') and G' is (L',M')-colourable. Hence G_0 is (L,M)-colourable, a contradiction.§ CONFLICT OF INTEREST None of the authors have a conflict of interest to disclose. abbrv
http://arxiv.org/abs/2312.15962v1
{ "authors": [ "On-Hei Solomon Lo", "Cheng Wang", "Huan Zhou", "Xuding Zhu" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20231226085308", "title": "DP-$5$-truncated-degree-colourability of $K_{2,4}$-minor free graphs" }
Exact asymptotic order for generalised adaptive approximations
[email protected]
[email protected]
In this note, we present an abstract approach to study asymptotic orders for adaptive approximations with respect to a monotone set function 𝔍 defined on dyadic cubes. We determine the exact upper order in terms of the critical value of the corresponding 𝔍-partition function, and we are able to provide upper and lower bounds in terms of fractal-geometric quantities. With properly chosen 𝔍, our new approach has applications in many different areas of mathematics, including the spectral theory of Kreĭn–Feller operators, quantization dimensions of compactly supported probability measures, and the exact asymptotic order for Kolmogorov, Gel'fand and linear widths for Sobolev embeddings into L_μ^p-spaces.
Institute for Dynamical Systems, Faculty 3 – Mathematics and Computer Science, University of Bremen, Bibliothekstr. 5, 28359 Bremen, Germany
[2000] primary: 68W25, 41A25; secondary: 28A80, 65D15
This research was supported by the DFG grant Ke 1440/3-1.
Aljoscha Niemann
====================
§ INTRODUCTION AND STATEMENT OF MAIN RESULTS
The study of adaptive approximation algorithms goes back to the seminal work of Birman and Solomjak in the 1970s <cit.>, which was motivated by the study of asymptotics for spectral problems, and was subsequently refined by Borzov 1971 <cit.> for singular measures, and then by DeVore and Yu <cit.> for certain boundary cases not treated by Birman, Solomjak or Borzov. Generally speaking, we deal with asymptotics of counting problems derived from set functions defined on dyadic subcubes of the unit cube. Recently, this problem has attracted renewed attention in the context of
* piecewise polynomial approximation in <cit.>,
* spectral asymptotics in <cit.>,
* quantization of probability measures in <cit.>, and
* Kolmogorov, Gel'fand, and linear widths in <cit.>.
Our new approach improves some of the classic results (see e.g. subsec:PartitionImprovemetBirmanSolomyak, where we compare our results with work of Birman and Solomjak from the 1970s) and is fundamental for all the results by the authors mentioned above. In this note we also allow a generalisation with respect to the range of set functions considered, as this proves to be very useful for applications to spectral asymptotics. However, the most important case studied concerns set functions defined on all dyadic cubes; we will refer to this case as the classical case.
§.§ The basic setting
This paper is concerned with the study of the asymptotic behaviour of an adaptive approximation algorithm in the following setting. Let 𝒬 = ∏_i=1^d J_i be the unit cube in dimension d∈ℕ, where the J_i's are (half-open, open, or closed) unit intervals. Let 𝒟_n denote a dyadic partition of 𝒬 by cubes of the form Q = ∏_i=1^d I_i with (half-open, open, or closed) intervals I_i with endpoints in the dyadic grid of size 2^-n, i.e. (k-1)2^-n, k2^-n for some k∈ℕ. Note that, by our assumption on the intervals I_i, chosen individually for each Q, these cubes are not necessarily congruent, in that we allow certain faces of Q not to belong to Q. However, we require that for each n∈ℕ the partition 𝒟_n+1 is a refinement of 𝒟_n; this means that each element of 𝒟_n can be decomposed into 2^d disjoint elements of 𝒟_n+1. In this way, 𝒟 ≔ ⋃_n 𝒟_n defines a semiring of sets. For many applications of our formalism, a more general approach is often required. For this we will introduce a fixed subset E ⊂ 𝒟 and set its level-n (n∈ℕ_0) cubes to be E_n ≔ E ∩ 𝒟_n.
Throughout the paper, we also fix a set function :→_≥0 which is assumed to be * monotone, i. e. for all Q,Q'∈ with Q⊂ Q' we have (Q)≤(Q'),* and uniformly vanishing in the sense that j_nsup_Q∈⋃_k≥ n_k(Q)↘0, j_0>0,* and locally non-vanishing, i. e. for n∈_0 and each cube Q from _n with (Q)>0 there exists at least one subcube Q'⊂ Q with Q'∈_n+1 and (Q')>0.For x>1/j_0, we define the so-called minimal x-good partition by G_x{ Q∈:(Q)<1/x & ∃ Q'∈_|log_2Λ(Q)|/d-1:Q'⊃ Q & (Q')≥1/x} .Note that, strictly speaking, G_x is not a partition unless we are in the classical case, i. e. =. However, G_x partitions the `x-bad cubes', that is those Q∈ with (Q)≥1/x, whereby we ignore cubes from ∖.The goal in this paper is to study the growth rate of M(x)(G_x) as x∈_>0 tends to infinity, in terms of the leading exponentshh_lim sup_x→∞log(M(x))/log(x) andhh_lim inf_x→∞log(M(x))/log(x).We will refer to these quantities as the upper, resp. lower, -partition entropy. In fact, we will determine the upper -partition entropy in terms of the -partition function, which generalises the concept of the L^q-spectrum for measures, see subsec:-Partition-functions. For particularly regular cases we also able to determine the lower -partition entropy. In any case, we give upper and lower bounds in terms generalise fractal quantities. §.§ The dual problem For application (like for quantization of probability measures) the dual problem is also sometimes useful. For this we define γ_nγ_,nmin_P∈Π_n(P), where.We will investigate the following upper, resp. lower, exponent of convergence of γ_n given byαα_lim sup_n→∞log(γ_n)/log(n) and αα_lim inf_n→∞log(γ_n)/log(n). §.§ The classical case (=) and the adaptive approximation algorithm This section is devoted to study the classical case = which leads to the classical adaptive approximation algorithm studied intensively in the past decades (see <cit.>). For this we show how G_x can be constructed via a finite induction (see also fig:PartitionAlgo for an illustration) by subdividing `x-bad cubes' into 2^d subcubes and picking in each inductive step the `x-good cubes'. §.§ Adaptive Approximation Algorithm. For x>1/j_0 we initialise our induction by setting ℬ_0{} and 𝒢_0=∅. Now, suppose the set of 'x-bad cubes' ℬ_n and 'x-good cubes' 𝒢_n of generation n∈_0 are given. Then we set ℬ_n+1 { Q∈_n+1:∃ Q'∈ℬ_n:Q⊂ Q',(Q)≥1/x} and𝒢_n+1 { Q∈_n+1:∃ Q'∈ℬ_n:Q⊂ Q',(Q)<1/x}∪𝒢_n. Sinceis uniformly vanishing, this procedure terminates after say m_x∈ steps with ℬ_m_x+1=∅ and we return 𝒢_m_x+1. For convenience we write (P)max_Q∈ P(Q) for any collection of cubes P⊂. The following lemma shows that for = the above algorithm indeed recovers the x-good partition G_x and that this set solves an optimisation problem. The proofs for the following lemmas are postponed to the last section, which is also devoted to the proofs of the main results.For :→_≥0, x>1/() and with the notation given in the Adaptive Approximation Algorithm we have G_x=𝒢_m_x+1and this set solves the following optimisation problem: For P̃ from the set Π of partitions ofwith elements from , we have (P̃)=inf{(P):P∈Π,(P)<1/x}P̃=G_x.Similarly, also the dual problem is well known in the literature and closely connected also to the study of the quantization dimension (<cit.>).For = and Π̃_n denoting the set of partitions ofwith elements fromand cardinality not exceeding n∈, we haveγ_n=inf_P∈Π̃_n(P).With this connection it will turn out that our results (see thm: ImproveBirmanSolomyak) improve some classical work in this respect, e. g. <cit.>. 
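The recursive construction of the minimal x-good partition G_x is easy to turn into code. The following minimal sketch in Python treats the classical case where the set function (the user-supplied callable J below) is defined on all dyadic cubes and is monotone and uniformly vanishing; it repeatedly subdivides x-bad cubes until only x-good cubes remain, mirroring the Adaptive Approximation Algorithm of the text. The encoding of cubes by level and lower-left corner, the function names, and the max_level safeguard are illustrative choices of ours and not taken from the original text.

from itertools import product

def good_partition(J, d, x, max_level=32):
    """Minimal x-good partition G_x in the classical case (all dyadic cubes).

    J(level, corner) -- set function on the dyadic cube of side 2**(-level)
                        with lower-left corner 'corner' (monotone, uniformly vanishing).
    d                -- dimension of the unit cube.
    x                -- threshold; assumes x > 1/J(0, origin), i.e. the unit cube is x-bad.
    """
    bad = [(0, (0.0,) * d)]        # B_0: the unit cube itself
    good = []                      # G_0: empty
    while bad:
        level, corner = bad.pop()
        if level >= max_level:
            raise RuntimeError("maximum refinement level reached; J may not vanish uniformly")
        side = 2.0 ** (-(level + 1))
        # subdivide the x-bad cube into its 2^d subcubes of the next level
        for shifts in product((0, 1), repeat=d):
            sub = tuple(c + s * side for c, s in zip(corner, shifts))
            if J(level + 1, sub) >= 1.0 / x:
                bad.append((level + 1, sub))     # still x-bad: refine further
            else:
                good.append((level + 1, sub))    # x-good: belongs to G_x
    return good

# Toy example: J is the Lebesgue measure of the cube, J(Q) = 2**(-level*d).
if __name__ == "__main__":
    d = 2
    J = lambda level, corner: 2.0 ** (-level * d)
    G_x = good_partition(J, d, x=100.0)
    print(len(G_x))   # M(x) = 256: the 4^4 cubes of level 4, since 4**(-n) < 1/100 first holds at n = 4

In this toy example the returned partition is simply a uniform dyadic grid, so its cardinality M(x) grows linearly in x; non-trivial growth exponents only appear for genuinely inhomogeneous set functions.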
§.§ -partition functions Next, let us turn to the concept of partition functions, which in a certain extent is borrowed from the thermodynamic formalism. Our most powerful auxiliary object is the -partition function, for q∈_≥0, given by(q)_(q)lim sup_n→∞_n(q)with _n(q)_,n(q)1/log2^nlog∑_Q∈_n(Q)^q.Note that we use the convention 0^0=0, that is for q=0 we neglect the summands with (Q)=0 in the definition of _n. The functionis convex as a limit superior of convex functions. Further, for :→_≥0 we let ^* denote the restriction ofto ^*{ Q∈:(Q)>0} and we observe_=_^*. We call_∞()lim inf_n→∞log(_n)/-log(2^n)the ∞-dimension offor which we often assume 0<_∞(), which in turn leads tobeing uniformly vanishing (see lem:Unifrom Decreasing).To exclude trivial cases, we will always assume that there exist a>0 and b∈ such that _n(a)≥ b for all n∈ large enough; in particularis a proper convex function. All relevant examples mentioned above fulfil this condition.Since the maximal asymptotic direction lim_q→∞(q)/q ofcoincides with -_∞(), _∞()>0 implies that the critical exponent κκ_inf{ q≥0:∑_Q∈(Q)^q<∞} coincides with𝔮𝔮_inf{ q≥0:(q)<0} .If 0<𝔮<∞, then 𝔮 is the unique zero ofand 𝔮=κ; in general, we have 𝔮≤κ (cf. lem:Dim00Inequality). Sincedoes not change when we replaceby ^* we see 𝔮_=𝔮_^*.§.§.§ Coarse multifractal dimensions For the lower bounds, we use a concept closely connected to the coarse multifractal analysis (see e. g. <cit.>) . For all n∈ and α>0, we define 𝒩_α(n) B_α(n), B_α(n){ Q∈_n:(Q)≥2^-α n} ,(for an illustration of B_α(n) for an concrete example with optimal α, we refer to fig:PartitionAlgoAndMultiFractal) and setF(α)lim sup_nlog^+(𝒩_α(n))/log(2^n) andF(α)lim inf_nlog^+(𝒩_α(n))/log(2^n),with log^+(x)max{ 0,log(x)}, x≥0. We refer to the quantitiesFF_sup_α>0F(α)/αand FF_sup_α>0F(α)/αas the upper, resp. lower, optimised coarse multifractal dimension with respect to .At this point we would like to point out that the reciprocal quantities closely related to the concept of n-widths have already been considered in the work of Birman and Solomjak <cit.>; in subsec:PartitionImprovemetBirmanSolomyak we show that our formalism gives improved estimates on the asymptotic rates obtained by Birman and Solomjak. §.§ Main results Our main result are stated for :→_≥0 as given above; all proofs for this section are postponed to sec:Partition-functions. If _∞()>0, thenF≤h≤h=𝔮=κ=F. From the definition it is clear that for 𝒯⊂, all quantities above are monotone in the sense that F, h, etc., which are defined with respect to |_𝒯 do not exceed F, h, etc., which are defined with respect to .In our proofs we will see that ifis uniformly vanishing and allowing _∞()=0, we still have h≤κ≤𝔮.Further, we have h_=h_^*, which can be seen in two ways: either use _=_^* or alternatively F_=F_^* and thm:MainResult. Also note that F_=F_^*.Let ν be a finite Borel measure on , we consider _s:→_≥0,Q↦(ν(Q))^s for some s>0 and such that _∞(_1)>0. Then for x>1/ν() we have x^1/sν()<ℳ__s(x). In particular, h__s=h__s=𝔮__s=1/s. In <cit.>, the set function ν^s plays are crucial for the estimates of eigenvalues of Birman–Schwinger operators, therein dyadic cubes are replaced by arbitrary cubes contained in , leading to the improved upper estimate given by ≪ x^1/s. 
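As a sanity check for the quantities just introduced, consider the simplest possible example; this short computation is ours and not part of the original text, and we write 𝒯_𝔍,n and 𝒯_𝔍 for the level-n partition sums and the 𝔍-partition function defined above (the symbol for this function is garbled in the extracted text). Take the classical case with 𝔍 = Λ the Lebesgue measure on the unit cube in dimension d: at level n there are 2^{nd} dyadic cubes, each of measure 2^{-nd}, hence

𝒯_𝔍,n(q) = (1/log 2^n) · log( Σ_{Q∈𝒟_n} Λ(Q)^q ) = (1/(n log 2)) · log( 2^{nd} 2^{-ndq} ) = d(1-q) = 𝒯_𝔍(q),

so that the ∞-dimension equals d > 0 and the critical value is 𝔮 = κ = 1. This agrees with a direct count: the minimal x-good partition consists of the 2^{nd} cubes of the first level n with 2^{-nd} < 1/x, so M(x) lies within a factor 2^d of x and the upper and lower 𝔍-partition entropies both equal 1 = 𝔮, as predicted by the main theorem above.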
Assuming _∞()>0, we have-1/F≤-1/h=α≤α=-1/h=-1/𝔮.In subsec:PartitionImprovemetBirmanSolomyak, we give the proof and further discussions in the context of the classical work <cit.>.§.§.§ Fractal-Geometric bounds We define the support ofto be ()⋂_k∈⋃_n≥ k{Q:Q∈_n,(Q)>0} .We write _M(A)lim sup_n→∞log(({ Q∈_n:Q∩ A≠∅}))/log(2^n)∈[0,d] for the upper Minkowski dimension of the bounded set A⊂^d. Slightly abusing notation, we also write _M()_M(()). In several applications of our results, the value of (1) is easily accessible (see e. g. <cit.>), the following expressions provide convenient bounds. For an illustrating example see fig:Moment-generating-function.If 1≤𝔮<∞, then(0)/(0)-(1)≤𝔮≤_∞()+(1)/_∞()≤τ(0)/_∞()≤_M()/_∞()≤d/_∞(),and if 𝔮≤1, then _∞()+(1)/_∞()≤𝔮≤(0)/(0)-(1). §.§.§ Regularity results Assuming _∞()>0, we define two notions of regularity. * We callmultifractal-regular (MF-regular) if F=F.* We callpartition function regular (PF-regular) if * (q)=lim inf_n_n(q) for q∈(𝔮-ε,𝔮), for some ε>0, or* (𝔮)=lim inf_n_n(𝔮) andis differentiable at 𝔮.The above theorem and the notion of regularity give rise to the following list of observations: * An easy calculation shows that F≤inf{ q>0:lim inf_n_n(q)<0}≤𝔮=F.From this it follows that MF-regularity implies thatexists as a limit in 𝔮.* Ifis MF-regular, then equality holds everywhere in the chain of inequalities eq:MainChain_limsup.The following theorem shows that the -partition function is a valuable auxiliary concept to determine the exact value of the -partition entropy.If _∞()>0 andis PF-regular, then it is MF-regular and the -partition entropy h exists with h=𝔮==F. This result is optimal in the sense that there is an example of a measure ν (derived in the context of for Kreĭn–Feller operators in dimension d=1 in KN2022) such that _νν is not PF-regular and for which h>h. It should be noted that PF-regularity is often easily accessible if the spectral partition function is essentially given by the L^q-spectrum of an underlying measure. §.§ Possible applications This paper is partially based on the second author's PhD thesis <cit.>.Let ν be a Borel measure on. A classical example forwould be ν restricted to . In cor:special_nu^s we will provide an illustrating example for _ν,s(Q)ν(Q)^s, Q∈, s>0, that plays a crucial role in the context of spectral asymptotics <cit.>. In <cit.>, with Λ denoting the Lebesgue measure on , and for a,b∈, b>0, we studied_ν,a,b(Q)sup{ν(Q)^b|log(Λ(Q))|:Q∈(Q)} , a=0, sup{ν(Q)^bΛ(Q)^a:Q∈(Q)} , a≠0,with 𝒟(Q){ Q'∈𝒟:Q'⊂ Q}, Q∈. Note that for a>0 this definition reduces to _ν,a,b(Q)=ν(Q)^bΛ(Q)^a. For an appropriate choice of a,b the set function _ν,a,b naturally arises in the optimal embedding constant for the embedding of the standard Sobolev space H^1,2 in L_ν^2. For ⊂^d and t≥2, we were particularly interested in _ν,t_ν,2/d-1,2/t to investigate the spectral asymptotic of Kreĭn--Feller operators. We note that the general parameter a,b has also been shown to be useful when considering polyharmonic operators in higher dimensions or approximation order with respect to Kolmogorov, Gel'fand, or linear widths as elaborated in <cit.>. In these works, the deep connection to the original ideas of entropy numbers introduced by Kolmogorov also becomes apparent. 
In <cit.> we address the quantization problem, that is the speed of approximation of a compactly supported Borel probability measure by finitely supported measures (see <cit.> for an exposition), by adapting the methods developed in sec:OptimalPartitions and sec:Coarse-multifractal-analysis to _ν,r,1 with r>0 to identify the upper quantization dimension of order r of ν with its Rényi dimension.§ BASIC PROPERTIES OF THE PARTITION FUNCTIONRecall the definition in eq:DefGL of the -partition functionas well as the critical values 𝔮 and κ, for which we give further observations: One easily checks thatis scale invariant in the sense that for c>0, we have _c=_. We always have (0)≤_M() and if =, then (0)=_M().We first show that for Q∈_n with (Q)>0, we have Q∩()≠∅. Indeed, sinceis locally non-vanishing there exists a subsequence (n_k) with Q_n_k∈_n_k, (Q_n_k)>0 and Q_n_k⊂ Q_n_k-1⊂ Q. Since (Q_n_k)_k is a nested sequence of non-empty compact subsets of Q we have ∅≠⋂_k∈Q_n_k⊂()∩Q. Therefore, we{ Q∈_n:(Q)>0} ≤{ Q∈_n:Q∩()≠∅}≤3^d{ Q∈_n:Q∩()≠∅}implying (0)≤_M().Now, assume _n=_n. We observe that if Q∈_n,Q∩()≠∅, then there exists Q'∈ _n with Q'∩Q≠∅ and (Q')>0. This can be seen as follows: For x∈ Q∩() there exists a subsequence (n_k) such that x∈Q_n_k, Q_n_k∈_n_k and (Q_n_k)>0. For k∈ such that n_k≥ n there exists exactly one with Q_n_k⊂ Q'. Now, x∈Q_n_k⊂Q', implying Q'∩Q≠∅ and sinceis monotone, we have (Q')>0. Furthermore, for each Q∈_n, we have { Q”∈_n:Q”∩Q≠∅}≤3^d. Combining these two observations, we obtain{ Q∈_n:Q∩()≠∅} ≤{ Q∈_n:∃ Q'∈_n,Q'∩Q≠∅,(Q')>0}≤3^d{ Q∈_n:(Q)>0} ,implying (0)≥_M(). These bounds follow immediately from the finiteness and convexity of .The definition of _∞()>0 immediately gives the following lemma. If _∞()>0, thenis uniformly vanishing.Under our standing assumption with a and b as given in subsec:-Partition-functions, L(b-d)/a<0, for all n large enough and q≥0, we haveb+qL≤_n(q).In particular, -∞<lim inf_n→∞_n(q) and _∞()≤-LBy our assumptions we have _∞()>0, therefore, for n large, _n is monotone decreasing and also b≤_n(a). By definition, we have _n(0)≤ d for all n∈ and the convexity of _n implies for all q∈[0,a]_n(q)≤_n(0)+q(_n(a)-_n(0))/a.In particular, the convexity of _n implies for q>a(_n(a)-_n(0))/a≤(_n(q)-_n(0))/q.Implyingb+q(b-d)/a≤_n(0)+q(_n(a)-_n(0))/a≤_n(0)+q(_n(q)-_n(0))/q=_n(q).Since _n is decreasing with 0≤_n(0)≤ d and _n(a)≥ b, we obtain for all q∈[0,a]b+q(b-d)/a≤ b≤_n(a)≤_n(q). In the following lemma we use the convention -∞·0=0.For q≥0, we have -_∞()q≤(q)≤(0)-_∞()q≤_M()-_∞()q.Furthermore, _∞()>0𝔮<∞κ=𝔮. The first claim follows from the following simple inequalitiesqlog(_n)≤log(∑_Q∈_n(Q))≤log(∑_Q∈_n,(Q)>01)+qlog(_n).Now, assume 𝔮<∞. It follows there exists q>0 such that (q)<0. Consequently, we obtain from eq:InequalitiesTauP -_∞()q≤(q)<0, which gives _∞()>0. Reversely, suppose _∞()>0. In the case _∞()=∞, using eq:InequalitiesTauP, we have 𝔮=0 due to (q)=-∞ for q>0. Now, let us consider the case 0<_∞()<∞. Then it follows from eq:InequalitiesTauP that (q)<0 for all q>(0)/_∞() which proves the implication.Now, assume 𝔮<∞. Then we have (q)<0 for all q>𝔮, and therefore, for every ε>0 with (q)<-ε<0 and n large enough, we obtain ∑_Q∈_n(Q)^q≤2^-nε, implying ∑_Q∈(Q)^q<∞. This shows inf{ q≥0:∑_Q∈(Q)^q<∞}≤𝔮. For the reversed inequality we note that if 𝔮=0, then the claimed equality is clear. If, on the other hand, 𝔮>0, then we necessarily have _∞()<∞. 
Since,is decreasing, convex and proper (see lem:strictlyDecreaing below), it follows that 𝔮 is a zero ofand for all 0<q<𝔮 we have 0<(q). This implies that for every 0<δ<(q), there is a subsequence (n_k) such that 2^n_kδ≤∑_Q∈_n_k(Q)^q implying∞=∑_k∈∑_Q∈_n_k(Q)^q≤∑_Q∈(Q)^q.Consequently, 𝔮≤inf{ q≥0:∑_Q∈(Q)^q<∞}. Note that in the case _∞()≤0, we deduce from lem:Dim00Inequality that (q) is non-negative for q≥0, hence 𝔮=∞. However, it is possible that κ<∞. Indeed, in <cit.> we give an example of a measure ν, where κ__ν gives the precise upper bound for the spectral dimension, while κ__ν<𝔮__ν=∞.If _∞()∈(0,∞), thenis a strictly decreasing real-valued convex function on _≥0. In particular, if 𝔮>0, then 𝔮 is the only zero of .First, note that lem:Dim00Inequality implies (q)∈ for all q≥0 and lim_q(q)=-∞. Since _∞()>0 it follows from lem:Unifrom Decreasing that for n large and all Q∈_n, we have (Q)<1. Hence,is decreasing and as pointwise limit superior of convex functions again convex. Now, we show thatis strictly decreasing. Assume there exist 0≤ q_1<q_2 such that (q_1)=(q_2). Sinceis decreasing, we obtain (q_1)=(q) for all q∈[q_1,q_2]. The convexity ofimplies (q)=(q_1) for all q>q_1 which contradicts lim_q(q)=-∞. For the second claim note that, sinceis convex, it follows thatis continuous on _>0. Hence, we obtain (𝔮)=0. Finally, the uniqueness follows from the fact thatis a finite strictly decreasing function. § OPTIMAL PARTITIONS, PARTITION ENTROPY AND OPTIMISED COARSE MULTIFRACTAL DIMENSION §.§ Bounds for the partition entropy As before, let :→_≥0 be a non-negative, monotone, uniformly vanishing, and locally non-vanishing set function. For 0<1/x<j_0, the growth rate of (G_x) gives rise to the following inequalities:F≤h≤κ≤𝔮, F≤h.At this stage we would like to point out that in the next section (prop:LowerBoundUpperSpecDim), we will show equality in the second chain of inequalities eq:MainChainThm using the coarse multifractal formalism under some mild additional assumptions on .Sinceis uniformly vanishinglem:Dim00Inequality gives κ≤𝔮 (where equality holds if _∞()>0, otherwise 𝔮=∞). Hence, we only have to consider the case κ<∞. Let 0<1/x<j_0. Setting R_x{ Q∈:(Q)≥1/x} ,we note that, on the one hand, for Q∈ G_x there is exactly one Q'∈ R_x∩_|log_2Λ(Q)|/d-1 with Q⊂ Q' and, on the other hand, for each Q'∈ R_x∩_|log_2Λ(Q)|/d-1 there are at most 2^d elements of G_x∩_|log_2Λ(Q)|/d which are subsets of Q'. Hence, (G_x∩_n)≤2^d(R_x∩_n-1). For q>κ we obtainx^-q(G_x) =∑_n=1^∞∑_Q∈ G_x∩_nx^-q≤2^d∑_n=1^∞∑_Q∈ R_x∩_n-1x^-q≤2^d∑_n=1^∞∑_Q∈ R_x∩_n-1(Q)^q≤2^d∑_n=0^∞∑_Q∈_n(Q)^q<∞.This implieslim sup_x→∞log(M(x))/log(x)≤ qand letting q tend to κ proves h≤κ. To prove the first inequality, observe that for α>0, n∈ we have𝒩_α(n)={ Q∈_n:(Q)≥2^-α n}≤(G_2^α n)=M(2^α n),where we used the fact that, sinceis uniformly vanishing and locally non-vanishing, for each Q∈_n with (Q)≥2^-α n there exists at least one Q'∈(Q)∩ G_2^α n and this assignment is injective. Taking logarithms, dividing by α nlog2, taking the limit superior with respect to n and then the supremum over all α>0 gives F≤h.It remains to prove F≤h. For fixed α>0, there exists n_x∈ such that 2^-(n_x+1)α<1/x≤2^-n_xα and by eq:Nalpha(n) we have 𝒩_α(n_x)≤ M(x). 
Therefore, lim inf_n→∞log(𝒩_α(n))/αlog(2^n) ≤lim inf_x→∞log(𝒩_α(n_x))/log(x)≤lim inf_x→∞log(M(x))/log(x)=hand taking the supremum over α>0 gives F≤h.We provide a two-dimensional illustration in fig:PartitionAlgo of these partitions G_x for three different values of x>1 for the particular choice (Q)=(νν)(Q)Λ(Q)^2, Q∈𝒟, where ν denotes the (p,1-p)-Cantor measure supported on the triadic Cantor set. In general, it is difficult to determine an upper bound for the lower -partition entropy; the following proposition opens up a feasible condition which we used <cit.> to construct an Kreĭn–Feller operator for which the spectral dimension does not exist. To obtain meaningful bounds in the following theorem, it is important that |__n does not vary too much on a suitable subsequence. Suppose there exist sequences (n_k)_k∈∈^ and (x_k)∈_>0^ such that for all k∈, (_n_k)<1/x_k. Then we have h≤lim inf_k→∞log(_n_k)/log(x_k). Using max_Q∈_n_k(Q)<1/x_k gives M(x_k)≤(_n_k) and the claim follows by observingh≤lim inf_k→∞log M(x_k)/log(x_k)≤lim inf_k→∞log((_n_k))/log(x_k). §.§ The dual problem This section is devoted to γ_nmin_P∈Π^n(P). Using prop:GeneralUpperBounds, we are able to extend the class of set functions considered in <cit.> (i. e. we allow set functionswhich are only assumed to be non-negative, monotone and _∞()>0). Before we compare our results with the classical work, we provide a proof of thm: ImproveBirmanSolomyak.By the definition of h we have for h>h and n sufficiently largeM(n^1/h)≤ n.This means that there exists P∈Π_n^1/h, in particular (P)<n^-1/h, with (P)≤ n and therefore, min_P∈Π^n(P)<n^-1/h. Thus, in tandem with thm:MainResult, we see that α≤-1/h=-1/𝔮. The upper bound α≥-1/h holds clearly for α=0. For α∈[-∞,0), we choose α∈(α,0). Then we havemin_P∈Π^n(P)<n^α for all large n. This implies M(n^-α)≤ n, which shows h≤-1/α and in particular for α=-∞, h=0 and the upper bound follows. In the same way, one shows -1/h=α. For the remaining part of this section, we concentrate on special choice = and _J,a(Q) J(Q)Λ(Q)^a, a>0, Q∈, where J is a non-negative, locally non-vanishing, superadditive function on 𝒟, that is, if Q∈𝒟 is decomposed into a finite number of disjoint cubes (Q_j)_j of 𝒟, then ∑ J(Q_j)≤ J(Q). We are now interested in the decay rate of γ__J,a,n. Upper estimates for γ__J,a,n have first been obtained in MR0217487,Borzov1971. In the following we use the terminology as in <cit.>. Let Ξ_0 be a finite partition ofof dyadic cubes from 𝒟. We say a partition Ξ' ofis an elementary extension of Ξ_0 if it can be obtained by uniformly splitting some of its cubes into 2^d equal sized disjoint cubes lying in 𝒟. We call a partition Ξ dyadic subdivision of an initial partition Ξ_0 if it is obtained from the partition Ξ_0 with the help of a finite number of elementary extensions.Let Ξ_0 be a finite partition ofwith dyadic cubes from 𝒟 and suppose there exists ε>0 and a subset Ξ_0^'⊂Ξ_0 such that∑_Q∈Ξ_0∖Ξ_0^'Λ(Q)≤εand∑_Q∈Ξ_0^'J(Q)≤ε.Let (P_k)_k∈ denote a sequence of dyadic partitions obtained recursively as follows: set P_0Ξ_0 and, for k∈, construct an elementary extension P_k of P_k-1 by subdividing all cubes Q∈ P_k-1, for which _J,a(Q)≥2^-daη_a(P_k-1)with η_a(P_k-1)_J,a(P_k-1), into 2^d equal sized cubes. Then, for all k∈, we haveη_a(P_k)=_J,a(P_k)≤ Cε^min(1,a)(N_k-N_0)^-(1+a)J()with N_k(P_k), k∈_0, and the constant C>0 depends only on a and d. In particular, there exists C'>0 such that for all n>N_0,γ__J,a,n≤ C'J()ε^min(1,a)n^-(1+a). 
A proof can be found in <cit.> or alternatively with further details in <cit.> based on the presentation of <cit.>. We call J a singular function with respect to Λ if for every ε>0 there exists a partitions Ξ_0⊂𝒟 ofand a subset Ξ'_0⊂Ξ_0 such that∑_Q∈Ξ_0∖Ξ_0^'Λ(Q)≤εand∑_Q∈Ξ_0^'J(Q)≤ε. Since 𝒟 is a semiring of sets, it follows that a measure ν which is singular with respect to the Lebesgue measure, is also singular as a function J=ν in the sense of def:Singularfunction. As an immediate corollary of prop:PartitionAlogrthmDueToSolandBirma, we obtain the following statement due to <cit.>.We always have γ__J,a,n=O(n^-(1+a)) and M__J,a(x)=O(x^1/(1+a)).Additionally, if J is singular, thenγ__J,a,n=o(n^-(1+a)) and M__J,a(x)=o(x^1/(1+a)). If __J,a^N(q)<d(1-q(1+a)) for some q∈(0,1), then this estimate improves the corresponding results of <cit.>, where only α__J,a≤-(1+a) has been shown. Observe that __J,a(q)=_J(q)-adq for q≥0 and _J(0)≤ d. From the fact that J is superadditive, it follows that _J(1)≤0 and q↦_J(q), q≥0 is decreasing. We only have to consider the case _J(1)>-∞. Since _J is convex, for every q∈[0,1], we deduce__J,a(q)=_J(q)-adq≤_J(0)(1-q)-adq≤ d(1-q)-adq.This implies 𝔮__J,a≤_J(0)/(_J(0)+ad)≤1/(1+a). From prop:GeneralUpperBounds we deduce the improved upper bounds-1/h__J,a=-1/𝔮__J,a=α__J,a≤-(1+ad/_M(J))≤-(1+a). § COARSE MULTIFRACTAL ANALYSISThroughout this section letbe a non-negative, monotone and locally non-vanishing set function defined on the set of dyadic cubes 𝒟 with _∞()>0.Recall the definition of 𝒩_α and F, F from the introduction.For α∈(0,_∞()) and n large, we have 𝒩_α(n)=0. In particular,F=sup_α≥_∞()lim sup_n→∞log𝒩_α(n)/log(2^n)α, F=sup_α≥_∞()lim inf_n→∞log𝒩_α(n)/log(2^n)α. For fixed α∈(0,_∞()), by the definition of _∞(), for n large we have (_n)≤2^-α n. For every 0<α'<α, it follows that 𝒩_α',(n)=0. This proves the claim.We need the following elementary observation from large deviation theory which seems not to be standard in the relevant literature. Suppose (X_n)_n∈ are real-valued random variables on some probability spaces (Ω_n,𝒜_n,μ_n) such that the rate function 𝔠(t)lim sup_n→∞𝔠_n(t) is a proper convex function with 𝔠_n(t) a_n^-1log∫exp tX_nμ̣_n, t∈, a_n→ and such that 0 belongs to the interior of the domain of finiteness { t∈𝔠(t)<∞}. Let I=(a,d) be an open interval containing the subdifferential ∂𝔠(0)=[b,c] of 𝔠 in 0. Then there exists r>0 such that for all n sufficiently large, μ_n(a_n^-1X_n∉ I)≤2exp(-ra_n). We assume that ∂𝔠(0)=[b,c] and I=(a,d) with a<b≤ c<d. First note that the assumptions ensure that -∞<b≤ c<∞. We have by the Chebychev inequality for all q>0,μ_n(a_n^-1X_n≥ d) =μ_n(qX_n≥ qa_nd)≤exp(-qa_nd)∫exp(qX_n)μ̣_n, implying lim sup a_n^-1logμ_n(a_n^-1X_n≥ d)≤inf_q>0𝔠(q)-qd=inf_q∈𝔠(q)-qd≤0,where the equality follows from the assumption c<d, 𝔠(0)=0 and 𝔠(q)-qd≥(c-d)q≥0 for all q≤0, 𝔠(0)=0, and and the continuity of 𝔠 at 0. Similarly, we find lim sup a_n^-1logμ_n(a_n^-1X_n≤ a)≤inf_q<0𝔠(q)-qa=inf_q∈𝔠(q)-qa.We are left to show that both upper bounds are negative. We show the first case by contradiction – the other case follows in exactly the same way. Assuming inf_q∈𝔠(q)-qd=0 implies for all q∈ that 𝔠(q)-qd≥0, or after rearranging, 𝔠(q)-𝔠(0)≥ dq. This means, according to the definition of the sub-differential, that d∈∂𝔠(0), contradicting our assumptions. For a subsequence (n_k) define the convex function on _≥0 by Blim sup_n_k and for some q≥0, we assume B(q)=lim_n_k(q) and set [a',b']-∂ B(q). 
Then we have a'≥_∞() anda'q+B(q)/b' ≤sup_α>b'lim inf_k→∞log𝒩_α(n_k)/αlog(2^n_k)≤sup_α≥_∞()lim inf_k→∞log𝒩_α(n_k)/αlog(2^n_k)=sup_α>0lim inf_k→∞log𝒩_α(n_k)/αlog(2^n_k).Moreover, if B(q)=(q), then [a,b]=-∂(q)⊃-∂ B(q) and if additionally 0≤ q≤𝔮, thenaq+(q)/b≤a'q+B(q)/b'. Without loss of generality we can assume b'<∞. Moreover, _∞()>0 implies b'≥ a'≥_∞()>0. Indeed, observe that B is again a convex function on . Thus, by the definition of the sub-differential, we have for all x>0B(q)-a'(x-q)≤ B(x)≤(x)≤(x)≤-x_∞()+d,which gives a'≥_∞()>0. Let q≥0. Now, for all k∈ and s<a'≤ b'<t, we have with L_n_k^s,t{ Q∈_n_k:2^-sn_k>(Q)>2^-tn_k}𝒩_t,^(n_k)≥ L_n_k^s,t≥∑_Q∈ L_n_k^s,t(Q)^q2^sn_kq=2^sn_kq+n_k_n_k(q)∑_Q∈_n_k_L_n_k^s,t(Q)(Q)^q2^-n_k_n_k(q) =2^sn_kq+n_k_n_k(q)(1-∑_Q∈_n_k_(L_n_k^s,t)^∁(Q)(Q)^q2^-n_k_n_k(q)).We use the lower large deviation principle for the process X_k(Q)log(Q) with probability measure on _n_k given by μ_k({ Q})(Q)^q2^-n_k_n_k(q). We find for the free energy function𝔠(x)lim sup_klog(𝔼_μ_k(exp(xX_k)))/log(2^n_k)=lim sup_k1/log(2^n_k)log(∑_Q∈_n_k(Q)^x+q/2^n_k_n_k(q)) =lim sup_k_n_k(q+x)-B(q)=B(x+q)-B(q),with -∂𝔠(0)=[a',b']⊂(s,t) and hence there exists a constant r>0 depending on s,t and q such that for k large by lem:exponential decay-1∑_Q∈_n_k_(L_n_k^s,t)^∁(Q)(Q)^q/2^n_k_n_k(q)=μ_k(X_k/log(2^n_k)∉(-t,-s))≤2exp(-rn_k).Therefore, lim inf_k→∞log𝒩_t^(n_k)/log(2^n_k)≥ sq+B(q) for all s<a' and t>b' and hencesup_t>b'lim inf_k→∞log𝒩_t^(n_k)/tlog(2^n_k)≥sup_t>b'a'q+B(q)/t=a'q+B(q)/b'.The fact that -∂(q)⊃-∂ B(q) if (q)=B(q) follows immediately from the inequality lim sup_k_n_k≤.Ifis PF-regular with respect to _n, then F=𝔮.Due to prop:GeneralUpperBounds, we can restrict our attention to the case 𝔮>0. First, assume (q)=lim inf_n_n(q) for q∈(𝔮-ϵ,𝔮), for some ε>0 and set [a,b]=-∂(𝔮). Then by the convexity ofwe find for every ϵ∈(0,𝔮) an element q∈(𝔮-ϵ,𝔮) such thatis differentiable in q with -()'(q)∈[b,b+ε] since the points whereis differentiable on (0,∞) lie dense in (0,∞) which follows from the fact thatis a decreasing function and the fact that the left-hand derivative of the convex functionis left-hand continuous and non-decreasing. Then we have by prop:GeneralBound.-1sup_α≥()lim inf_n→∞log^+(𝒩_α(n))/αlog(2^n) ≥sup_α>-'(q)lim inf_n→∞log(𝒩_α(n))/αlog(2^n)≥-'(q)q+(q)/-'(q)≥b(𝔮-ϵ)/b+ε.Taking the limit ϵ→0 proves the claim in this situation. The case thatexists as a limit in 𝔮 and is differentiable in 𝔮 is covered by prop:GeneralBound.-1. We have F=𝔮.Due to prop:GeneralUpperBounds, we can restrict our attention to the case 𝔮>0. First note that by lem: uniformEstimatefortau_n, for n large, the family of convex functions (_n) restricted to [0,𝔮+1] only takes values in [-(𝔮+1)L+b,d] and on any compact interval [c,e]⊂(0,𝔮+1) we have for all c≤ x≤ y≤ e _n(x)-_n(0)/x-0≤_n(y)-_n(x)/y-x≤_n(𝔮+1)-_n(y)/𝔮+1-y.We obtain by lem: uniformEstimatefortau_n and the fact _n(0)≤ d(𝔮+1)L+b-d/c≤_n(x)-_n(0)/x-0and_n(𝔮+1)-_n(y)/𝔮+1-y≤d-(𝔮+1)L-b/𝔮+1-e,which implies |_n(y)-_n(x)|≤max{|b|-(𝔮+1)L+d/c,d-(𝔮+1)L+|b|/𝔮+1-e}|x-y|and hence (_n|_[c,e]) is uniformly bounded and uniformly Lipschitz and thus by Arzelà–Ascoli relatively compact. Using this fact, we find a subsequence (n_k) such that lim_k_n_k(𝔮)=lim sup_n_n(𝔮)=0 and _n_k converges uniformly to the proper convex function B on[𝔮-δ,𝔮+δ]⊂(0,𝔮+1),for δ sufficiently small. We put [a,b]-∂ B(𝔮). Since the points where B is differentiable are dense and since B is convex, we find for every δ>ϵ>0 an element q∈(𝔮-ε,𝔮) such that B is differentiable in q with -B'(q)∈[b,b+ϵ]. 
Noting B≤, we have -B'(q)≥_∞(). Hence, from prop:GeneralBound.-1 we deduce sup_α≥_∞()lim sup_n→∞log𝒩_α(n)/αlog(2^n)≥sup_α>-B'(q)lim sup_k→∞log𝒩_α(n_k)/αlog(2^n_k)≥-B'(q)q+B(q)/-B'(q)≥b(𝔮-ε)/b+ε.Taking the limit ϵ→0 gives the assertion. § PROOF OF MAIN RESULTS Now we are in a position to state the remaining proofs of our main results from subsec:Main-results.The equality G_x=𝒢_m_x+1 follows from the definitions. Clearly, we have inf{(P):P∈Π,(P)<1/x}≤(G_x), since G_x is a partition ofwhich is ensured by the monotonicity ofand the assumption thatis uniformly vanishing. For the inverse inequality let P_opt∈Π be the minimizing partition, i.e. inf{(P):P∈Π,(P)<1/x} =(P_opt). To prove that P_opt=G_x we assume that there exists Q∈ P_opt such that Q⊂ Q'∈_|log_2Λ(Q)|/d-1 with (Q')<1/x. Then, P̃{ Q”∈ P_opt:Q”∩ Q'=∅}∪ Q' is also partition ofwith (P̃)<(P̃)+2^d-1≤(P_opt) and (P̃)<1/x, contracting the assumption that P_opt minimises inf{(P):P∈Π,(P)<1/x}. Hence, we have P_opt=G_x.Clearly, Π̃_n⊃Π_n and hence inf_P∈Π̃_n(P)≤inf_P∈Π_n(P). Now suppose inf_P∈Π̃_n(P)=x. Then for every ϵ>0 we have M(x+ϵ)=(G_x+ϵ)≤ n. This shows that inf_P∈Π_n(P)≤ x+ϵ. Since ϵ>0 was arbitrary we conclude inf_P∈Π̃_n(P)≥inf_P∈Π_n(P).The main theorem is now a consequence of prop:GeneralUpperBounds and prop:LowerBoundUpperSpecDim.For all x>1/() and Q∈ G_x we have ν(Q)x^1/s<1. Therefore, x^1/sν()=x^1/s∑_Q∈ G_x(Q)^1/s<∑_Q∈ G_x1=(G_x).This gives 1/s≤h≤h=𝔮. Since under the assumption _∞(ν)>0, we have 𝔮=1/s, the equality follows. The bounds are an immediate consequence of the convexity of , the fact that -_∞() is maximal asymptotic direction ofand that (0)=_M(), as shown in lem:GL(0)=00003DDim_M.The statements are an immediate consequence of prop:=00005CGL reg. implies lower bound and prop:LowerBoundUpperSpecDim.The theorem is now a consequence of thm:MainResult and prop:=00005CGL reg. implies lower bound.
http://arxiv.org/abs/2312.16644v1
{ "authors": [ "Marc Kesseböhmer", "Aljoscha Niemann" ], "categories": [ "math.OC", "cs.IT", "math.FA", "math.IT", "math.PR", "primary: 68W25, 41A25 secondary: 28A80, 65D15" ], "primary_category": "math.OC", "published": "20231227171735", "title": "Exact asymptotic order for generalised adaptive approximations" }
05C80, 60B10
We introduce probability-graphons, which are probability kernels that generalize graphons to the case of weighted graphs. Probability-graphons appear as the limit objects to study sequences of large weighted graphs whose distribution of subgraph sampling converges. The edge-weights are taken from a general Polish space, which also covers the case of decorated graphs. Here, graphs can be either directed or undirected. Starting from a distance d_m inducing the weak topology on measures, we define a cut distance on probability-graphons, making it a Polish space, and study the properties of this cut distance. In particular, we exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. We also prove that under some conditions on the distance d_m, which are satisfied for some well-known distances like the Prohorov distance, and the Fortet-Mourier and Kantorovitch-Rubinstein norms, the topology induced by the cut distance on the space of probability-graphons is independent from the choice of d_m. Eventually, we prove that this topology coincides with the topology induced by the convergence in distribution of the sampled subgraphs.
Reshaping the ISAC Tradeoff Under OFDM Signaling: A Probabilistic Constellation Shaping Approach
Zhen Du, Member, IEEE, Fan Liu, Senior Member, IEEE, Yifeng Xiong, Member, IEEE, Tony Xiao Han, Senior Member, IEEE, Yonina C. Eldar, Fellow, IEEE, and Shi Jin, Fellow, IEEE (Corresponding author: Fan Liu)
An earlier version was partly presented at the IEEE Global Communications Conference (GLOBECOM), Kuala Lumpur, Malaysia, Dec 2023 <cit.>.
Z. Du is with the School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China. F. Liu is with the School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen, China. Y. Xiong is with the School of Information and Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, China. T. X.-Han is with Huawei Technologies Co., Ltd, Shenzhen, China. Y. C. Eldar is with the Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel. S. Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, China.
Received ...; accepted...
============================================================
§ INTRODUCTION
§.§ Motivation and literature review
Networks appear naturally in a wide variety of contexts, including for example: biological networks <cit.>, epidemic processes <cit.>, electrical power grids <cit.> and social networks <cit.>. Most of those problems involve large dense graphs, that is graphs that have a large number of vertices and a number of edges that scales as the square of the number of vertices. Those graphs are too large to be represented entirely in the targeted applications. The idea is then to go from a combinatorial representation given by the graph to an infinite continuum representation. In the case of unweighted graphs (graphs without edge-weights), a theory was developed to study the asymptotic behaviour of large dense graphs, with the limit objects being the so-called graphons. The properties of graphons were studied in a series of articles started by <cit.>. We shall refer to the monograph <cit.>, which exposes in detail the theory of graphons developed in this series of articles. Graphons can be used to define models of random graphs with latent vertex-type variables (called W-random graphs) generalizing the Erdös-Rényi graph and the stochastic block model (SBM). The space of graphons can be equipped with the so-called cut distance, making it a compact space, and whose topology is that of the convergence in distribution for all sampled subgraphs, or equivalently of the convergence for subgraph homomorphism densities. In recent years, graphons have been used in several application contexts: non-parametric estimation methods and algorithms for massive networks <cit.>, SIS epidemic models <cit.>, the study of transferability properties for Graph Neural Networks <cit.>. Furthermore, there have been recent developments in the study of mean-field systems using graphons: stochastic games and their Nash equilibria <cit.>, opinion dynamics on a graphon <cit.>, cooperative multi-agent reinforcement learning <cit.>, to cite a few.
However, most real-world phenomena on the above networks involve weighted networks, where each edge in the graph carries additional information such as intensity or frequency of interaction, or transfer capacity. There exist many models of random weighted graphs. For example, configuration models with edges having independent exponential weights have been considered in <cit.>, see also <cit.> where the distribution of the weight of an edge depends on the types of its end-points. Random geometric graphs with vertices and edges having independent Gaussian weights have been considered in <cit.>. Weighted SBMs (sometimes also called labeled SBMs), in which each edge independently receives a random weight whose distribution depends on the community labels of its end-points, have been studied to solve community detection in <cit.> (see also <cit.> for more general models where vertex-labels come from a compact space), and exact community recovery in <cit.>, and to get bounds on the number of misclassified vertices in <cit.>. Note that weighted SBMs correspond to a special case of the probability-graphons we study in this article where the space of vertex-labels is finite (they correspond to the stepfunction probability-graphons we define in <Ref>). Concomitantly to our work, in <cit.>, the authors studied mean-field equations on large real-weighted graphs modeling interactions with a probability kernel from [0,1]^2 to the set of probability measures on ℝ, but they did not study the topological properties of the set of those probability kernels. Recently, in <cit.>, the authors studied the limit of the total weight of the minimum spanning tree (MST) for a sequence of random weighted graphs. Following what has been done for the uniform spanning tree in <cit.>, one expects the local and scaling limits of the MST to be directly constructed from the limit of the random weighted graphs.
Motivated by those examples, we shall consider probability-graphons as possible limits of large weighted graphs; they are defined as maps from [0,1]^2 to the space of probability measures on a Polish space 𝒵. When 𝒵 is compact, this question has been considered in <cit.> and in <cit.> using convergence of homomorphism densities of subgraphs decorated with real functions defined on 𝒵, but the metric properties of the set of probability-graphons have only been established when 𝒵 is finite, see <cit.>. The work <cit.> is an extension of <cit.> where ℝ is replaced by the dual space of a separable Banach space. As the space of measures on 𝒵 is a subset of the dual of 𝒞_b(𝒵), this approach covers our setting when 𝒞_b(𝒵) is separable, that is, when 𝒵 is compact (see Section <ref> below). The norm introduced on the space of graphons considered therein implies the convergence of homomorphism densities of decorated sub-graphs, however there is no equivalence a priori. In this paper we study the topological property of the space of probability-graphons when 𝒵 is a general Polish space: the space is a Polish topological space and we give “natural” cut distances on it which are complete. One of the main difficulties is that the space of probability measures 𝒫(𝒵) can be endowed with many distances which induce the topology of weak convergence, each of them giving rise to a different cut distance on the space of probability-graphons. We prove that the topology induced on it does not depend on the initial choice of the distance on 𝒫(𝒵), provided this distance satisfies some simple general conditions. However, we stress that not all of these cut distances are complete.
We also check that this topology characterizes the convergence in distribution of the sampled subgraphs with random weights on the edges, or equivalently the convergence of the homomorphism densities of subgraphs decorated with functions from 𝒞_b(𝒵). Similarly to the graphon setting, we prove the convergence in distribution of large sampled weighted subgraphs from a probability-graphon W to itself. We also provide a tightness criterion for studying the convergence of weighted graphs towards probability-graphons. In conclusion, we believe that the unified framework developed here is easy-to-work-with and will allow to use probability-graphons to study large (random) weighted graphs.
§.§ New contribution
Through the article, measure will always be used to denote a positive measure.
§.§.§ Definition of probability-graphons
In this article, we define an analogue of graphons for weighted graphs, which we call probability-graphons, and study their properties. To avoid any confusion, in the rest of the article we say real-valued graphons instead of graphons. We consider the general case where weighted graphs take their edge-weights in a Polish space 𝒵 (such as ℝ or ℝ^d), which thus also covers the case of decorated graphs, multi-graphs (graphs with possibly multiple edges between two vertices) and dynamical graphs (where edge-weights evolve over time). We define a probability-graphon as a probability kernel W : [0,1]^2 → 𝒫(𝒵), where 𝒫(𝒵) is the space of probability measures on 𝒵. A probability-graphon can be interpreted as follows: for two “vertex types” x and y in [0,1], the weight z of an edge between two vertices of types x and y is distributed as the probability measure W(x,y; dz). In particular, the special case 𝒵 = {0,1} allows to recover real-valued graphons: any real-valued graphon w : [0,1]^2 → [0,1] can be represented as a probability-graphon W(x,y;·) = w(x,y)δ_1 + (1-w(x,y))δ_0, where δ_z denotes the Dirac mass located at z. Let us mention that it is possible to define the probability-graphons on a more general probability space (Ω, 𝒜, μ) than [0,1] for the vertex-types, see <Ref> for details. In this article, we also define and study the properties of signed measure-valued kernels, which are bounded (in total mass/total variation norm) measurable functions W : [0,1]^2 → ℳ_±(𝒵) whose values are signed measures, but for brevity we mainly focus on probability-graphons in this introduction. As probability-graphons are measurable functions, we identify probability-graphons that are equal for almost every (x,y) ∈ [0,1]^2, and we denote by 𝒲_1 the space of probability-graphons. Moreover, as we consider weighted graphs that are unlabeled (that is, vertices are unordered), we need to consider probability-graphons up to “relabeling”: for a measure-preserving map φ : [0,1] → [0,1] (relabeling map for probability-graphons), we define W^φ(x,y;·) = W(φ(x), φ(y);·); we say that two probability-graphons U and W are weakly isomorphic if there exist measure-preserving maps φ, ψ : [0,1] → [0,1] such that U^φ = W^ψ for almost every (x,y) ∈ [0,1]^2. We denote by 𝒲̃_1 the space of probability-graphons where we identify probability-graphons that are weakly isomorphic.
We can always assume that weighted graphs are complete graphs by adding all missing edges and giving them a weight/decoration ∂, which is a cemetery point added to 𝒵. Any weighted graph G can be represented as a probability-graphon W_G in the following way: denote by n the number of vertices of G and divide the unit interval [0,1] into n intervals I_1, ⋯, I_n of equal lengths; then W_G is defined for (x,y) ∈ I_i × I_j as W_G(x,y;·) = δ_M(i,j), where M(i,j) is the weight on the edge (i,j) in G.
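To make the embedding G ↦ W_G concrete, here is a small Python sketch (ours, not from the original text) that stores a weighted graph through its weight matrix and evaluates the associated step-function probability-graphon; since W_G(x,y;·) is the Dirac mass δ_M(i,j), it is enough to return the point M(i,j) carrying the mass. The function name and the half-open-interval convention for I_1, …, I_n are illustrative choices.

import math

def graph_to_graphon(M):
    """Step-function probability-graphon W_G of a weighted graph.

    M is the n x n matrix of edge-weights (M[i][j] is the weight of edge (i,j),
    possibly a cemetery symbol for missing edges).  W_G(x, y; .) is the Dirac
    mass at M[i][j] whenever x lies in I_i and y in I_j, where I_1, ..., I_n
    are consecutive intervals of [0,1] of length 1/n.
    """
    n = len(M)

    def W(x, y):
        i = min(int(math.floor(x * n)), n - 1)   # index of the interval containing x
        j = min(int(math.floor(y * n)), n - 1)
        return M[i][j]                           # the point carrying the Dirac mass

    return W

# Example: a directed weighted graph on 3 vertices.
M = [[0.0, 1.5, 0.2],
     [0.7, 0.0, 2.0],
     [0.0, 0.3, 0.0]]
W_G = graph_to_graphon(M)
print(W_G(0.1, 0.9))   # x in I_1, y in I_3 -> Dirac mass at M[0][2] = 0.2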
Note that weighted graphs can be either directed or undirected; in the case of undirected weighted graphs their limit objects are symmetric probability-graphons, that is probability-graphons W such that W(x,y;·) = W(y,x;·).
§.§.§ The cut distance for probability-graphons and its properties
While there is a usual distance on the field of reals ℝ, this is not the case for probability measures, measures or signed measures endowed with the weak topology. Some commonly used distances include the Prohorov distance d_𝒫, which can be defined on measures, and the Kantorovitch-Rubinstein norm ‖·‖_KR (sometimes also called the bounded Lipschitz norm) and the Fortet-Mourier norm ‖·‖_FM, defined on signed measures but metrizing the weak topology on measures. (Note that in general the weak topology is not metrizable on signed measures, see Section <ref> below.) We also use a norm ‖·‖_ℱ based on a convergence determining sequence ℱ ⊂ 𝒞_b(𝒵). See Section <ref> for the definition of those distances. To define an analogue of the cut norm for probability-graphons, we first need to choose a distance d_m that metrizes the weak topology on the space ℳ_≤1(𝒵) of sub-probability measures (measures with total mass at most 1); we then define the cut distance d_□,m for probability-graphons as:
d_□,m(U,W) = sup_S,T⊂[0,1] d_m( U(S×T;·), W(S×T;·) ),
where the supremum is taken over all measurable subsets S and T of [0,1], and where W(S×T;·) = ∫_S×T W(x,y;·) dx dy is a sub-probability measure, and similarly for U. Moreover, if the distance d_m is derived from a norm ‖·‖_m defined on the space of signed measures ℳ_±(𝒵), then the cut distance d_□,m derives from the cut norm ‖·‖_□,m defined on signed measure-valued kernels:
‖W‖_□,m = sup_S,T⊂[0,1] ‖ W(S×T;·) ‖_m.
We then define the unlabeled cut distance δ_□,m on the space 𝒲̃_1 of unlabeled probability-graphons as:
δ_□,m(U,W) = inf_φ d_□,m(U, W^φ) = min_φ,ψ d_□,m(U^φ, W^ψ),
where the infimum is taken over all measure-preserving maps φ and ψ; see <Ref> for alternative expressions of δ_□,m (including a proof that the minimum exists for the second expression) and see <Ref>, which states that δ_□,m is indeed a distance on 𝒲̃_1. In <Ref>, we prove an equivalent of the weak regularity lemma for probability-graphons.
An interesting fact is that under some conditions on d_m, the topology induced by the associated cut distance δ_□,m does not depend on the particular choice of d_m. The following proposition is a particular case of Theorem <ref> together with <Ref>. The cut distances δ_□,𝒫, δ_□,KR, δ_□,FM and δ_□,ℱ induce the same topology on the space of probability-graphons 𝒲̃_1.
Recall that 𝒫(𝒵) is a Polish space. We now state that 𝒲̃_1 is also Polish for the distance δ_□,𝒫 (but not for every one of the cut distances above!), and we refer to <Ref> for other distances. The space of probability-graphons (𝒲̃_1, δ_□,𝒫) is a Polish metric space.
We prove an analogue of Prohorov's theorem with a tightness criterion for probability-graphons. We say that a subset of probability-graphons 𝒦 ⊂ 𝒲̃_1 is tight if the set of probability measures { M_W : W ∈ 𝒦 } is tight (in the sense of probability measures), where M_W(·) = W([0,1]^2;·). The next result is a consequence of <Ref> as well as <Ref>.
[Compactness property] Consider the topology on 𝒲̃_1 from <Ref>.
* If a sequence of elements of 𝒲̃_1 is tight, then it has a converging subsequence.
* If 𝒵 is compact, then the space 𝒲̃_1 is compact.
§.§.§ Sampling from probability-graphons and its link with the cut distance
Finally, we link the topology of the cut distance with subgraph sampling.
The probability-graphons allow to define models of random weighted graphs (the W-random graph model) which generalize weighted SBM random graphs, and which play the role of sampled subgraphs for probability-graphons. The W-random graph (or sampled subgraph of size k) 𝔾(k,W) has two parameters, a number of vertices k and a probability-graphon W for edge-weights, and is defined as follows: first let X_1, ⋯, X_k be k independent random “vertex-types” uniformly distributed over [0,1]; then, given X_1, ⋯, X_k, each edge receives a weight independently, where the weight of the edge (i,j) is distributed as W(X_i, X_j; ·). We also provide the convergence of sampled subgraphs for the topology from <Ref>, see <Ref> together with <Ref>. Let W be a probability-graphon. Then, the sequence of sampled subgraphs (𝔾(k,W))_k∈ℕ^* converges to W for the topology from <Ref>. To prove this theorem, we adapt the proof scheme of <cit.>, relying on the first and second sampling lemmas for real-valued graphons. The proof is done using the cut distance d_□,𝒫 because of the good approximation properties of the Prohorov distance.
In the case of unweighted graphs, the homomorphism numbers hom(F,G) count the number of occurrences of a graph F (often called a motif or a graphlet) as an induced subgraph of G, and their normalized counterparts, the homomorphism densities t(F,G), allow to characterize a graph (up to relabeling and twin-vertices expansion), and also characterize the topology on real-valued graphons. In the case of weighted graphs and probability-graphons, we need to replace absence/presence of edges (which is 0-1 valued) by test functions from 𝒞_b(𝒵) decorating the edges. Hence, we define the homomorphism density of a 𝒞_b(𝒵)-graph F^g, which is a finite graph F=(V,E) whose edges are decorated with a family of functions g=(g_e)_e∈E from a subset 𝒜 ⊂ 𝒞_b(𝒵) (in practice, we only consider the cases 𝒜 = 𝒞_b(𝒵) or 𝒜 = ℱ ⊂ 𝒞_b(𝒵) a convergence determining sequence), in a probability-graphon W as:
t(F^g,W) = M_W^F(g) := ∫_[0,1]^V ∏_(i,j)∈E W(x_i, x_j; g_i,j) ∏_i∈V dx_i,
where W(x,y;f) = ∫_𝒵 f(z) W(x,y; dz). Moreover, M_W^F defines a measure on 𝒵^E (which we still denote by M_W^F) which is defined by M_W^F(⊗_e∈E g_e) = M_W^F(g) for g = (g_e)_e∈E. Note that when F is the complete graph with k vertices, M_W^F is the joint measure of all the edge-weights of the random graph 𝔾(k,W), and thus characterizes the random graph 𝔾(k,W).
In the counting Lemma <ref> and the weak counting Lemma <ref>, we prove that the cut norm allows to control the homomorphism densities. Conversely, in the inverse counting Lemma <ref>, we prove that the cut norm can be controlled by the homomorphism densities. In particular, the topology of the cut distance turns out to be exactly the topology of convergence in distribution for sampled subgraphs of any given size; the next result is a direct consequence of <Ref>.
[Characterization of the topology] Let (W_n)_n∈ℕ and W be unlabeled probability-graphons from 𝒲̃_1. The following properties are equivalent:
* (W_n)_n∈ℕ converges to W for the topology from <Ref>.
* lim_n→∞ t(F^g,W_n) = t(F^g,W) for all 𝒞_b(𝒵)-graphs F^g.
* lim_n→∞ t(F^g,W_n) = t(F^g,W) for all ℱ-graphs F^g, for some convergence determining sequence ℱ.
* For all k≥2, the sequence of sampled subgraphs (𝔾(k,W_n))_n∈ℕ converges in distribution to 𝔾(k,W).
Now, we can turn back to the initial problem of finding a limit object for a convergent sequence of weighted graphs (G_n)_n∈ℕ; here convergent means that for all k≥2, the sequence (𝔾(k,G_n) = 𝔾(k,W_G_n))_n∈ℕ of sampled subgraphs of size k (defined above) converges in distribution (to some limit random graph).
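The W-random graph 𝔾(k,W) described above is straightforward to simulate once one can sample from the measures W(x,y;·). The sketch below is ours and not from the original text: it assumes the probability-graphon is provided through a user-supplied function sample_edge(x, y, rng) returning one draw from W(x,y;·), and it produces the directed version in which every ordered pair of distinct vertices receives a weight; for the undirected case one would draw only for i < j and symmetrize.

import random

def sample_W_random_graph(k, sample_edge, rng=None):
    """Draw the weighted random graph G(k, W).

    sample_edge(x, y, rng) must return one sample from W(x, y; .).
    Returns the latent vertex-types and the k x k matrix of edge-weights
    (diagonal entries stay None: no self-loops).
    """
    rng = rng or random.Random()
    X = [rng.random() for _ in range(k)]          # i.i.d. uniform vertex-types X_1, ..., X_k
    weights = [[None] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i != j:
                weights[i][j] = sample_edge(X[i], X[j], rng)  # ~ W(X_i, X_j; .)
    return X, weights

# Toy example of a probability-graphon on Z = R_+:
# W(x, y; .) is the exponential distribution with mean x + y.
if __name__ == "__main__":
    sample_edge = lambda x, y, rng: rng.expovariate(1.0 / (x + y + 1e-12))
    X, weights = sample_W_random_graph(5, sample_edge)
    print(weights[0][1])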
Note that the tightness criterion for a sequence of probability-graphons (W_n)_n∈ℕ can be equivalently rephrased as tightness of the sequence (𝔾(2,W_n))_n∈ℕ of sampled subgraphs of size 2. Hence, the convergence in distribution of the sequence (𝔾(2,G_n))_n∈ℕ implies its tightness, and thus the tightness of the sequence of probability-graphons (W_G_n)_n∈ℕ. Then, <Ref> guarantees the existence of a probability-graphon W which is a sub-sequential limit of the sequence (W_G_n)_n∈ℕ in the cut distance, and then <Ref> guarantees that for all k≥2, the sequence (𝔾(k,G_n))_n∈ℕ converges in distribution to 𝔾(k,W). As a consequence, probability-graphons are precisely the limit objects for sequences of weighted graphs (G_n)_n∈ℕ (and also for random weighted graphs) whose number of vertices goes to infinity (otherwise the limit would simply be a weighted graph) and such that for each size k≥2, the sequence of sampled subgraphs (𝔾(k,G_n))_n∈ℕ converges in distribution.
The framework we have developed for probability-graphons could easily be extended to add weights on the vertices, or equivalently to allow for self-loops (edges linking a vertex to itself). In this case, weighted graphs and probability-graphons have a two-variable kernel (probability-graphon) W^e for edge-weights as before, and a one-variable kernel W^v : [0,1] → 𝒫(𝒵) for vertex-weights. Note that this implies, as expected, that the same measure-preserving map φ : [0,1] → [0,1] must be used for both kernels W^v and W^e when relabeling.
§.§ Organization of the paper
The rest of the paper is organized as follows. In Section <ref>, we define some notations used throughout the paper, and remind some properties of the weak topology on the space of signed measures. In Section <ref>, we define probability-graphons and signed-measure valued kernels, we then define the cut distance and the cut norm and study their properties, and we also give some examples of distances with the Prohorov distance d_𝒫, the Kantorovitch-Rubinstein and Fortet-Mourier norms ‖·‖_KR and ‖·‖_FM, and the norm ‖·‖_ℱ based on a convergence determining sequence. In Section <ref>, we define the steppings of a probability-graphon (which are stepfunction approximations corresponding to conditional expectations on [0,1]^2), we define the tightness criterion for probability-graphons, and we prove the weak regularity property of the cut distance. In Section <ref>, we prove the theorem linking the tightness criterion with relative compactness for the cut distance, we prove that under some conditions the topology of the cut distance does not depend on the choice of the initial distance d_m, and we prove that the space of probability-graphons with the cut distance is a Polish space. In Section <ref>, we define the subgraph 𝔾(k,W) sampled from a probability-graphon W, and we then prove approximation bounds in the cut norm between probability-graphons and their sampled subgraphs. In Section <ref>, we prove the counting lemmas linking the cut distance with the homomorphism densities, and prove that the topology induced by the cut distance coincides with the topology of convergence in distribution for all the sampled subgraphs.
§ NOTATIONS AND TOPOLOGY ON THE SPACE OF SIGNED MEASURES
Through the article, measure will always be used to denote a positive measure. Let ℕ = ℤ_+ be the set of non-negative integers, ℕ^* = ℕ∖{0} the set of positive integers, and, for n∈ℕ^*, we define the integer set [n] = {1,…, n}. For k∈ℕ^*, the set [0,1]^k is endowed with the Borel σ-field ℬ([0,1]^k) and the Lebesgue measure λ_k; and we write λ for λ_k when the context is clear.
The supremum of a real-valued function f defined on [0, 1]^k is denoted by f_∞=sup_x∈ [0, 1]^k f(x). Let d be a distance on a topological space (X, ).* The distance d iscontinuous thetopologyif the identity mapfrom (X,) to (X, d) is continuous.* The distanced issequentially continuousthetopologyif forany sequence (x_n)_n∈ in Xwhich converges to some limit x for the topology , we also have that lim_n→∞d(x_n, x)=0. Let d and d' be two distances on a space X. We say that d' is continuous (resp. uniformly continuous) d if the identity map from (X, d) to (X, d') is continuous (resp. uniformly continuous).If the topologyis metrizable (can be generated by a distance on the space X),then the topology on X induced by the distance d is equivalent to if and only if for every sequence with values in X, convergence for d is equivalent to convergence for(see <cit.>). Moreover, when the topology is metrizable, then topological notions and their sequential counterparts coincides (compact and sequentially compact sets, closed and sequentially closed sets, see <cit.>).For a function, continuity always implies sequential continuity;and the converse is also true when the topology is metrizable.A map φ : Ω_1 →Ω_2 between two probability spaces (Ω_i, 𝒜_i, π_i), i=1,2, is measure-preserving if it is measurable and if for every A∈𝒜_2, π_2(A) = π_1(φ^-1(A)). In this case, for every measurable non-negative function f: Ω_2 →, we have:∫_Ω_1 f(φ(x)) π_1( x) = ∫_Ω_2 f(x)π_2( x).We denote bythe set of bijective measure-preserving maps from [0,1] with the Lebesgue measure to itself, and bythe set of measure-preserving maps from [0,1] with the Lebesgue measure to itself. Let (,) be some (non-empty) Polish space, and letbe the Borel σ-field ongenerated by the topology . Wedenote by thespace ofreal-valued continuousbounded functions on (,). We denoteby the spaceoffinite signedmeasures on (,); thesubspaceofmeasures; the subspaceofmeasureswith total mass at most 1; andthe subspace of probability measures.We have:⊂⊂⊂ . Fora signedmeasureμ∈,we remind the definition of theHahn-Jordan decomposition μ= μ^+ -μ^- where μ^+, μ^-∈ are mutually singular measures (thatis μ^+(A)=0 and μ^-(A^c)=0 for some measurable set A), as well as the total variation measure of μ which is defined as|μ|= μ^++μ^-∈.Notethatfor a measure μ∈, wesimply have |μ|= μ. For a signed-measure μ∈ and a real-valued measurable function f defined on , we write μ(f)=⟨μ, f ⟩= ∫ fμ=∫_ f(x) μ( x) the integral of f μ whenever it is well defined.Fora signed measure μ∈, wedenote by μ=μ^+() + μ^-() its total mass, which is also equalto the supremumof μ(f) overall measurable functions f withvalues in [-1,1].We endowwith the topology of weak convergence, that is the smallest topology for which the maps μ↦μ(f) are continuous for all f∈. In particular, a sequenceof signed measures (μ_n)_n∈weakly convergesto some μ∈ifand only if, forevery functionf∈,we have lim_n→ +∞μ_n(f) = μ(f).Let us recall thatandendowed withthe topology of weak convergence are Polish spaces. The topology of weak convergence on the set of signed measuresis equivalent to the weak-* topology onseen as a subspace of the topological dual of(see the paragraph after Definition 3.1.1 in <cit.>). 
As usual in probability theory, this topology will be simply called the weak topology (this is also consistent with <cit.>). We recall that a sequence of [0,1]-valued functions = (f_k )_k∈ in , with f_0= the constant function equal to one, is: * Separating if for all measures μ,ν from (or equivalently just from ) such that for every k∈, μ(f_k) = ν(f_k), we have μ = ν. * Convergence determining if for every sequence (μ_n)_n∈ and measure μ from such that lim_n→ +∞μ_n(f_k)= μ(f_k) for all k∈, the sequence (μ_n )_n∈ weakly converges to μ.

Notice that a convergence determining sequence is also separating. A sequence of functions is separating if and only if it separates the points of (see <cit.>). There always exists a convergence determining sequence on Polish spaces, see <cit.> or the proof of Proposition 3.4.4 in <cit.> (which are stated for probability measures but can be extended to finite positive measures as we required that belongs to ). Note that there does not exist a convergence determining sequence for as the weak topology is not metrizable on (see <Ref> below).

By <cit.>, the Borel σ-field on , associated with the weak topology, is countably generated and can be generated by either: * the family of maps μ↦μ(f_n) where the sequence (f_n)_n∈ of functions from is separating; * the family of maps μ↦μ(B) where B∈ and the subset ⊂ is countable and generates the whole σ-field (such a subset always exists, see <cit.>).

Note that the Borel σ-field of a Polish space is generated by any family of Borel functions that separates points (see <cit.>). Furthermore, the maps μ↦μ^+ and μ↦μ^- (and thus also μ↦|μ|) are measurable (see <cit.> and Remark <ref>). As a consequence, the map μ↦μ is also measurable (in fact it is even lower semicontinuous by <cit.>). Note that and are closed, and thus measurable, subsets of .

We define the following two important properties for subsets of signed measures, which are related to relative compactness (see <Ref> below). Let ⊂ be a subset of signed measures. * The set is bounded (in total variation) if: sup_μ∈μ <+∞ . * The set is tight if for all ε>0, there exists a compact set K⊂ such that: sup_μ∈|μ|(K^c)≤ε.

Recall that is a Polish space. We stress that the weak topology on signed measures is not metrizable unless it coincides with the strong topology (see <cit.>), which happens only when the initial space is finite (see <cit.>). Moreover, the closed norm ball {μ∈:μ≤ 1} of is metrizable if and only if is compact (see <cit.>).

Let ⊂. The following properties are equivalent (see <cit.>): * is weakly compact ( is compact for the weak topology); * is sequentially weakly compact (that is, every sequence (μ_n)_n∈ in has a subsequence that converges to some limit μ∈); * is compact for the sequential weak topology (for which sets are closed if and only if they are closed under weak convergence). Moreover, when any of these properties holds, is tight, bounded, and metrizable in the weak topology. Furthermore, the Kantorovitch-Rubinstein and Fortet-Mourier norms and (defined in <Ref>) can be used to generate the weak topology on a weakly compact set (see <cit.>).

Nevertheless, the weak topology on the unit sphere {μ∈:μ = 1 } of is always metrizable with a complete metric, making the unit sphere a Polish space; however, the Kantorovitch-Rubinstein and Fortet-Mourier norms and do not provide a complete metrization in this case (see <cit.>).

Let be either , or the closed norm ball {μ∈:μ≤ 1} of . Then, is weakly compact if and only if is compact. We give a short proof of this statement.
Asis closed infor the weak topology, ifis weakly compact, thenis also weakly compact, and thusis compact by <cit.>. Conversely, ifis compact, then by <cit.>, we know that(endowed with the weak topology) is the topological dual space of(endowed with the uniform convergence topology), thus using Banach-Alaoglu theorem (see <cit.>), we get that the closed unit norm-ball of , and thus , are compact for the weak topology.We recall the following result, which is an equivalent of Prohorov's theorem for signed measures. [Prohorov's theorem for signed measures, <cit.>] Letbe a Polish space, and let ⊂ be a subset of signed measures on . Then the following conditions are equivalent: *is relatively sequentially compact, that is every sequence (μ_n)_n∈ incontains a subsequence which weakly converges in .*is relatively compact for the weak topology, that is the closure ofis compact for the weak topology.* The familyis tight and bounded. When the spaceis infinite, the weak topology does not coincide with the weak sequential topology on(but recall from <Ref> that their compact sets are the same). Recall thatif the space is compact,thenthe unit norm ballofis metrizable, and thus the weak topology and the weak sequential topology coincide on it. However, if the spaceis non-compact, then the weak topology and the weak sequential topology do not coincide on the unit norm ball of .We give a short proof of those statements according tobeing compact or not. * Remindthat when isan infinitecompact space(for instance=[0,1]), theBanachspace is infinite-dimensionaland separable(usingStone-Weierstrass theorem),and itstopological dualis ()^* = (see <cit.>).Thus,using <cit.>, we get the existence of acountable subsetwhich isweak sequentiallyclosed yet weak densein . In particular,the weak sequential topology and the weak topology do not coincide on . * Assumethatthespace isnon-compact. Thus,contains a countable closed subset F whose points are atmutualdistancesuniformlybounded awayfromzero. By <cit.>, theweak topologyon Ffor aclosed subsetF coincideswith the trace of the weak topology on the whole space.By <cit.>,Fis homeomorphicto ℓ^1 both endowed with their weak topology, weak convergence onℓ^1 isequivalent tonorm convergence,and theweak topology onℓ^1 is notsequential, even on theunit norm ball. Hence,theweak topologyonisnot sequential, even on the unit norm ball. We define the notion of a quasi-convex distance, which generalizes the convexity of a norm. [Quasi-convex distance] Let (X, d) be a metric space which is a convex subset of a vector space.The distanced is quasi-convex if for all x_1,x_2,y_1,y_2 ∈ X and all α∈ [0,1], we have: d( α x_1 + (1-α) x_2, α y_1 + (1-α) y_2) ≤max( d(x_1,y_1), d(x_2,y_2) ) .In particular, any distance (on a convex subset of a vector space) which derive from a normis quasi-convex. Letbe distance onwith ϵ∈{+, ±} which is quasi-convex and sequentially continuous with respect to the weak topology. Then,is uniformly continuous with respect to· on_ϵ(). Weshall simply consider the case =_+(), the other case being simpler. Wefirst check that forall μ∈ and > 0, there existsη>0 such that for allν∈, we have that μ-ν< η implies (μ,ν)<. Asis sequentially continuoustheweaktopology,it isalso(sequentially) continuous the strongtopology.Let μ∈and >0. Then, the set {ν∈ :(μ,ν) < } is an open set ofcontaining μbothforand forthestrong topology.Thus,it contains aneighborhood of μ forthe strong topology{ν∈ :μ-ν< η} for η>0 small enough. 
This proves the claim.

As is quasi-convex and is a cone, for μ,ν∈ we have:

(μ,μ+ν) = ( 1/2· (2 μ + 0), 1/2· (2μ + 2 ν) ) ≤max( (2μ, 2μ), (0, 2ν)) = (0, 2ν) .

Let >0 be fixed. We choose η∈ (0,1) such that ν < η, with ν∈, implies (0,ν)<. Let μ,ν∈ be such that μ-ν<η/2. Let λ'=μ+ν and let f (resp. g) be the density of μ (resp. ν) with respect to λ'. We set π= min(f,g)λ', μ'=(f-g)_+λ' and ν'=(f-g)_- λ' so that π,μ', ν'∈, μ =π +μ' and ν =π +ν'. Since μ'-ν'=μ-ν and μ' and ν' are mutually singular, we deduce that μ' + ν' < η/2. We get:

(μ,ν) = (π + μ', π + ν')≤(π, π+μ') + (π, π+ν') ≤(0, 2μ') + (0, 2ν') ≤ 2 .

Hence, the distance is uniformly continuous with respect to · on .

§ MEASURE-VALUED GRAPHONS AND THE CUT DISTANCE

In Section <ref>, we introduce the measure-valued graphons, which are a generalization of real-valued graphons ([0, 1]-valued measurable functions defined on [0, 1]^2). We refer to the monograph <cit.> on real-valued graphons for more details. In Sections <ref>, <ref> and <ref>, we introduce the cut distance, and its unlabeled variant, on the space of measure-valued graphons, which are analogous to the ones for real-valued graphons (see <cit.>). In Section <ref>, we define a weak isomorphism relation for measure-valued graphons based on this distance. Then, in Section <ref>, we give an alternative combinatorial formulation of the cut distance for stepfunctions.

§.§ Definition of measure-valued graphons

We start by defining measure-valued kernels and graphons, which are a generalization of real-valued kernels and graphons. Recall that is a Polish space and is the space of finite signed measures.

[Signed measure-valued kernels] A signed measure-valued kernel or -valued kernel is a map W from [0,1]^2 to , such that: * W is a signed measure in z: for every (x,y) ∈ [0,1]^2, W(x,y;·) belongs to . * W is measurable in (x,y): for every measurable set A⊂, the function (x,y)↦ W(x,y;A) defined on [0,1]^2 is measurable. * W is bounded:

W:=sup_x,y∈ [0, 1]W(x, y; ·) <+∞ .

We denote by (resp. , resp. , resp. ) the space of probability measure-valued kernels or simply probability-graphons (resp. sub-probability measure-valued kernels, resp. measure-valued kernels, resp. signed measure-valued kernels), where we identify kernels that are equal almost everywhere on [0,1]^2 with respect to the Lebesgue measure. Then, (<ref>) should be read with an essential supremum instead of a supremum. In what follows, we always assume for simplicity that we choose representatives of measure-valued kernels such that W is also the essential supremum of (x,y)↦W(x, y; ·). For ⊂, we denote by _ the subset of signed measure-valued kernels W∈ which are -valued: W(x,y; ·)∈ for every (x,y)∈ [0, 1]^2.

Let ={ 0, 1} be equipped with the discrete topology. Every real-valued graphon w can be represented using a probability-graphon W defined for every x,y∈[0,1] by W(x,y; d z)= w(x,y)δ_1( d z) +(1-w(x,y)) δ_0( d z), where δ_z is the Dirac mass located at z. In particular we have that w(x,y)=W(x,y; {1}) for x,y ∈ [0, 1].

Let W ∈ be a signed measure-valued kernel. Define the map W^+ : [0,1]^2 → to be the positive part of W: for every (x,y)∈ [0,1]^2, W^+(x,y;·) is the positive part of the measure W(x,y;·).
Similarly define W^- : [0,1]^2 → the negative part of W; and then define | W | = W^+ + W^- the total variation of W and ‖ W ‖ = | W | () the total mass of W.

[The positive part W^+ of a kernel] The maps W^+, W^- and | W | are all measure-valued kernels, and the map ‖ W ‖ : (x,y) ↦W(x,y;·) is measurable.

The statements for | W | and ‖ W ‖ are immediate consequences of the statements for W^+ and W^-; and as the proofs for W^+ and W^- are similar, we only need to prove that W^+ is a measure-valued kernel. It is immediate that W^+ is bounded and that for every (x,y)∈ [0,1]^2, W^+(x,y;·) is a measure in . Thus, we are left to prove the measurability of W^+ in (x,y). By <cit.> and Remark <ref>, a signed measure-valued kernel U is measurable in (x,y) (for every A∈, the map (x,y)↦ U(x,y;A) is measurable) if and only if the map (x,y)↦ U(x,y;·) is measurable from [0,1]^2 (with its Borel σ-field) to equipped with the Borel σ-field generated by the weak topology. By <cit.>, the map μ↦μ^+, that associates to a signed measure the positive part of its Hahn-Jordan decomposition, is measurable from to both endowed with the Borel σ-field generated by the weak topology. Considering the composition of W and μ↦μ^+, we get that W^+ is measurable in (x,y) and is thus a measure-valued kernel.

Similarly to the case of real-valued graphons, it is possible to replace the vertex-type space [0,1] by any standard probability space (Ω,𝒜,π) that might be more appropriate to represent vertex-types for some applications, and to consider probability-graphons of the form W : Ω×Ω→. We recall that a standard probability space (Ω,𝒜,π) is a probability space such that there exists a measure-preserving map φ : [0,1] →Ω, where [0,1] is endowed with the Borel σ-field and the Lebesgue measure. In particular, every Polish space endowed with its Borel σ-field is a standard probability space. As an example, the space [0,1]^2 equipped with the Borel σ-field and the Lebesgue measure λ_2 is a standard probability space; we will reuse this fact later. Using the measure-preserving map φ, it is then possible to consider an unlabeled version W^φ of W constructed on Ω' = [0,1], and to modify the definition of the cut distance similarly as in <cit.> to allow each probability-graphon to be constructed on a different standard probability space. For simplicity, in this article we only consider the equivalent case where all probability-graphons are constructed on Ω = [0,1].

We shall consider non-symmetric measure-valued kernels and probability-graphons in order to handle directed graphs, whose adjacency matrices are thus a priori non-symmetric. We say that a measure-valued kernel or graphon W is symmetric if for x,y ∈ [0,1], W(x,y;·)=W(y,x;·).

We define stepfunction measure-valued kernels, which are often used for approximation.

[Signed measure-valued stepfunctions] A signed measure-valued kernel W∈ is a stepfunction if there exists a finite partition of [0,1] into measurable (possibly empty) sets, say 𝒫={S_1,⋯,S_k}, such that W is constant on the sets S_i × S_j, for 1≤ i,j≤ k. We say that W and the partition are adapted to each other. We write ||=k the number of elements of the partition .

§.§ The cut distance

We define a distance and a norm on signed measure-valued graphons and kernels, called the cut distance and the cut norm respectively, which are analogous to the cut norm for real-valued graphons and kernels, see <cit.>. For a signed measure-valued kernel W∈ and a measurable subset A⊂ [0,1]^2, we denote by W(A;·) the signed measure on defined by:

W(A;·) = ∫_A W(x,y;·) d x d y.
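For instance (an elementary example, with notation introduced only here), if W is a stepfunction probability-graphon adapted to a finite partition {S_1,…,S_k} of [0,1], taking as constant value the probability measure μ_i,j on S_i× S_j, then for every measurable A⊂[0,1]^2 we have:

W(A;·) = ∑_1≤ i,j≤ k λ_2 ( A∩ (S_i× S_j) ) μ_i,j(·) ;

in particular W(S_i× S_j;·) = λ(S_i)λ(S_j) μ_i,j(·), and W([0,1]^2;·) is a mixture of the measures μ_i,j.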
[The cut distance ] Letbe a quasi-convex distance ona convex subset ofcontaining the zero measure. The associated cut distanceis the functiondefined on ^2 by:(U,W) = sup_S,T⊂ [0,1](U(S× T;·), W(S× T;·) ),where the supremum is taken over allmeasurable subsets S and T of [0,1].Notice that the right-hand side of (<ref>) is well defined ascontains the zero measure (and thus if U belongs tothen U(A; ·) belongs to ). [The cut norm ] The cut normassociated with a normonis the functiondefined onby:W = sup_S,T⊂ [0,1]W(S× T;·) ,where the supremum is taken over allmeasurable subsets S and T of [0,1]. The next proposition states that the cut distance (resp. norm) is indeed a distance (resp. norm); its extension to distances onandis immediate. The cut distanceassociated with a distanceon(resp. ) is a distance on(resp. ). The cut normassociated with a normon is a norm on .Moreover, when the distanceon(resp. ) derives from a normon , then the distancederives also from the norm . Letbea distance on (the proof for the caseis similar). It isclear thatis symmetric andsatisfies the triangular inequality.Thus, we only need toprove that isseparating. Let Uand Wbe two probability-graphons such that (U,W)= 0.Then,for every measurable subsets S, T⊂[0,1], we have U(S× T;·)= W(S× T;·).Let=(f_k)_k∈bea separatingsequence. Forevery k∈, and for everymeasurable subsets S, T⊂[0,1], we have that U(S× T;f_k) = W(S× T;f_k). This implies thatU(x,y,f_k)x y= W(x,y,f_k)x y for all k∈.Hence, wededucethat for all k∈, U(x,y;f_k) =W(x,y;f_k) foralmost every (x,y)∈ [0,1]^2.Thus, U(x,y;·) =W(x,y;·) foralmost every (x,y)∈ [0,1]^2. This implies thatis separating on , and thus a distance on .The proof for the cut norm is similar.The proof of the last part of the proposition is clear.§.§ Graphon relabeling, invariance and smoothness properties The analogueof graphrelabelings for graphonsare measure-preserving maps. Recall thedefinition ofameasure-preserving mapfrom Section <ref>,and inparticular (<ref>). Recalldenotes the set of measure-preserving (measurable) maps from [0, 1] to [0, 1] endowed with the Lebesgue measure,anddenotes its subset of bijectivemaps.The relabeling of a signed measure-valued kernel Wby a measure-preserving map φ, is the signed measure-valued kernel W^φ defined for every x,y∈ [0,1] and every measurable set A⊂ by:W^φ(x,y;A) = W(φ(x),φ(y);A) for x,y∈ [0,1] and A⊂ measurable. We say that a subset⊂ is uniformly bounded if:sup_W∈W< +∞ .[Invariance and smoothness of a distance on kernels]Let d be a distance on(resp.or ).We say that the distance d is: * Invariant:if d(U,W)=d(U^φ,W^φ)foreverybijectivemeasure-preserving map φ∈ and U,V∈(resp. U,V belongs toor ). * Smooth: if weak convergence implies convergence for d, that is, if (W_n)_n∈and W arekernelsfrom (resp.kernels fromor that areuniformly bounded and)such that for (x,y)∈ [0,1]^2, W_n(x,y;·) weaklyconverges toW(x,y;·)as n→∞,then lim_n→∞d(W_n,W)= 0.We say that a norm N onis invariant (resp. smooth) if its associated distance d onis invariant (resp. smooth). We shall see in Section <ref> some examples of distancesfor which the associated cut distanceis invariant and smooth.The invarianceproperty from Definition <ref>is always satisfied by the cut distance, and thus also by the cut norm.[ is invariant] Letbe a distance on(resp., resp. ).Then the cut distanceon(resp. , resp. 
) is invariant.For a signed measure-valued kernel W, a bijective measure-preserving map φ∈,andmeasurable sets S,T⊂ [0,1], we have thanks to (<ref>):∫_S× T W^φ(x,y;·)xy = ∫_S× T W(φ(x),φ(y);·)xy = ∫_φ(S)×φ(T) W(x,y;·)xy .Hence, taking the supremum over every measurable sets S,T⊂ [0,1], we get that the cut distanceis invariant. When a smoothdistance onor derives from a distance onor , we have the following result. Letbe a distance on(resp.or ) such that the distanceon(resp.or ) is smooth. Then, the distanceis continuous the weak topology on(resp. ). Let(μ_n)_n∈,andμbe measuresfrom (resp. ) such that (μ_n)_n∈ weakly converges to μ.Consider the constantmeasure-valued graphons (resp. kernels) W_n≡μ_n,n∈, andW≡μ. Then, forevery x,y∈ [0,1],W_n(x,y;·) weakly convergesto W(x,y;·) as n→∞. Asthedistanceissmooth,wegetthat lim_n→∞(W_n,W)= 0.Considering S=T=[0,1] inthe cutdistance, wededuce that lim_n→∞(μ_n,μ) = 0.The next lemma is a partial converse of Lemma <ref>, it gives sufficient conditions forto be smooth. Remind the definition of a quasi-convex distance in <Ref>. Letbe distance onwith ϵ∈{ +, ±} which is quasi-convex and sequentially continuous the weak topology (on ). Then, the cut distanceis smooth.Moreover, for all U,W ∈, and for all measurable A⊂ [0,1]^2, we have:(U(A;·),W(A;·)) ≤_(x,y)∈ A(U(x,y;·), W(x,y;·)).To prove <Ref>, we first need to prove the following lemma for approximation by -valued kernels taking finitely many values. Let W∈ and a subset A⊂ [0,1]^2.There exists a sequence (W_n)_n∈ insuch that (W_n(A;·))_n∈ weakly converges to W(A;·) and for all n∈, W_n is finitely valued and takes its values in { W(x,y;·) : (x,y)∈ A }.By scaling, we may assume that W≤ 1. Let (f_k)_k∈ be a convergence determining sequence with f_0= and f_k takes values in [0,1]. Thus, for all (x,y)∈ [0,1]^2, ϵ∈{± 1} and k∈, we have W_ϵ(x,y;f_k) ∈ [0,1]. For all n∈, let (C_n,i)_1≤ i≤ d_n be a partition of [0,1]^2(n+1) into d_n = n^2(n+1) hypercubes of edge-length r_n = 1/n. Then, for all n∈ and i∈ [d_n], define B_n,i = A ∩ ( W_+(·; (f_i)_0≤ i≤ n,W_-(·; (f_i)_0≤ i≤ n)^-1(C_n,i); thus we get a partition (B_n,i)_1≤ i≤ d_n of A. If B_n,i≠∅, fix some μ_n,i∈{ W(x,y;·) : (x,y)∈ B_n,i}. If A≠ [0,1]^2, fix some μ_∂∈{ W(x,y;·) : (x,y)∈ [0,1]^2 ∖ A }. For n∈, we define W_n = _A^c μ_∂ + ∑_i=1^d_n_B_n,i μ_n,i, which is finitely valued and takes its values in { W(x,y;·) : (x,y)∈ A }.Let k∈ and ϵ∈{±}. For all n≥ k, we have:| W_ϵ(A;f_k) - (W_n)_ϵ(A;f_k) |≤∑_i=1^d_n∫_B_n,i| W_ϵ(x,y;f_k) - (μ_n,i)_ϵ|x y≤1/n·As (f_k)_k∈ is convergence determining, this implies that ((W_n)_ϵ(A;·))_n∈ weakly converges to W_ϵ(A;·) for ϵ∈{±}. Hence, (W_n(A;·))_n∈ weakly converges to W(A;·).Asis quasi-convex, (<ref>) is immediatewhen U and W take only finitely many values. Now, assume that U and W are arbitrary -valued kernels. Let >0. Asis sequentially continuous the weak topology,using <Ref>, there exist two -valued kernel U' and W' such that (U'(A;·), U(A·)) < and U' is finitely valued and takes its values in { U(x,y;·) : (x,y)∈ A }, and similarly for W' and W. Thus, we have:(U(A;·),W(A;·)) ≤ 2 + _(x,y)∈ A(U(x,y;·), W(x,y;·)),and this being true for all >0, we get (<ref>).Let (W_n)_n∈ and W be -valued kernels which are uniformly bounded by some constant C<∞ and such that for (x,y)∈[0,1]^2, the sequence ((W_n(x,y;·))_n∈ converges to W(x,y;·) for the weak topology, and thus also for . Let >0 and S,T⊂ [0,1]. 
Asis quasi-convex and sequentially continuous the weak topology, using<Ref>, there existsη>0such that for all μ,ν∈, we have that μ-ν < η implies (μ,ν)<. For all n∈, define the measurable set: A_n = { (x,y)∈ S× T:(W_n(x,y;·), W(x,y;·)) < } .By assumption, we have that lim_n→∞λ(A_n) = λ(S× T). Let N∈ be such that for n≥ N, we have λ((S× T) ∖ A_n) < η/C. Let n≥ N.Remark that W_n((S× T) ∖ A_n; ·) and W((S× T) ∖ A_n;·) have total mass at most C λ(A_n^c) < η.Thus, we have that (W_n(A_n;·), W_n(S× T;·))< and (W(A_n;·), W(S× T;·))<. Hence, using (<ref>) we get that:(W_n(S× T;·), W(S× T;·)) ≤ 2 + (W_n(A_n;·), W(A_n;·)) ≤ 2 + _(x,y)∈ A_n(W_n(x,y;·), W(x,y;·)) ≤ 3 .Taking the supremum over S,T⊂ [0,1], we get (W_n,W) ≤ 3. This being true for all >0, we conclude that (W_n)_n∈ converges to W for , and thusis smooth.§.§ The unlabeledcut distance We can now define the cut distance for unlabeled graphons.[The unlabeled cut distance ] Set∈{,,}. Letdbean invariant distance on the kernel space .The premetricon , also called the cut distance, is defined by:(U,W) = inf_φ∈ d(U,W^φ) = inf_φ∈ d(U^φ,W) . Notice that satisfiesthe symmetryproperty (asdis invariant)andthe triangular inequality.Hence,induces a distance(that westilldenoteby )onthe quotientspace =/ of kernels in associated with the equivalence relationdefined by U Wif and only if(U,W)=0. Whenthe metric d= on = (resp. , resp. ) derives from a metricon(resp. , resp. ), and is thusinvariant thanks to Lemma <ref>,we writeforandfor .Weshallseein Theorem <ref> and Corollary <ref> thatunder some conditions,different choices of distance , which induces the weak topology on , lead to the same quotient space, then simply denoted by , with the same topology.§.§ Weak isomorphism Similarly to Theorem 8.13 in <cit.>, when the distanceis such thatis invariant and smooth, we can rewrite the cut distanceas a minimum instead of an infimum using measure-preserving maps, see the last equality in (<ref>).We introduce a weak isomorphism relation that allows to “un-label” probability-graphons.[Weak isomorphism] We say that two signed measure-valued kernels U and W are weakly isomorphic (and we note U∼ W) if there exists two measure-preserving maps φ, ψ∈ such that U^φ(x,y; ·) = W^ψ(x,y;·) for x,y∈ [0,1].We denote by = / ∼ (resp. = / ∼) the space of unlabeled signed measure-valued kernels (resp. probability-graphons) the space of signed measure-valued kernels (resp. probability-graphons) where we identify signed measure-valued kernels (resp. probability-graphons) that are weakly isomorphic.Notice that U∼ W implies that U=W (we recall that signed measure-valued kernels are only defined for x,y∈ [0,1] and that W in (<ref>) is anin general). In particular, the notion of uniformly bounded subset defined in (<ref>) naturally extends to . The last part of this section is devoted to the proof of the following key result. [Weak isomorphism and ]Let d be a distance defined on(resp. or )which is invariantand smooth.Then, twokernels are weakly isomorphic, U ∼ W, if and only ifUW, (U,W) = 0. Furthermore, the mapis a distance on =(resp. = or =). 
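To illustrate the notion of weak isomorphism on a simple example (the construction below is only an illustration), let U be a stepfunction adapted to the partition of [0,1] into k intervals of length 1/k, and let W be obtained from U by permuting the blocks: W takes on S_σ(i)× S_σ(j) the constant value that U takes on S_i× S_j, for some permutation σ of [k]. The piecewise translation φ that maps the interval S_σ(i) onto S_i for every i∈[k] is a bijective measure-preserving map of [0,1] and satisfies W = U^φ, so that U and W are weakly isomorphic (here one can take ψ equal to the identity map).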
As a first step in the proof of Theorem <ref>,following <cit.>, we give a nice description ofusing couplings.We saythat ameasure μon [0,1]^2 isa couplingmeasure on [0,1]^2(between twocopiesof[0,1]each equippedwiththe Lebesgue measure)if the projectionmaps oneach components τ, ρ :[0,1]^2 → [0,1] (where [0,1]^2 isequipped with the measureμand[0,1]with theLebesguemeasure λ)aremeasure-preserving.Thusfor everykernelWon ([0,1],ℬ([0,1]),λ), thefunction W^τisa kernelon theprobability space([0,1]^2,ℬ([0,1]^2),μ), and similarly for the projection ρ.Let φ be a given measure-preserving map from [0,1] with the Lebesgue measure to [0,1]^2 with a couplingmeasure μ. For an invariantdistance d on(resp. ), we definea distance, say d^μ,on kernels on ([0,1]^2,ℬ([0,1]^2),μ) by:d^μ(U',W') = d(U'^φ, W'^φ).It is easy to see that, for U and W kernels on [0,1],we have d ^ μ(U^τ,W^τ) = d(U,W) as d is invariantand τ∘φ is a measure-preservingmap from [0, 1] to itself; and similarly d ^ μ(U^ρ,W^ρ) = d(U,W).A straightforward adaptation ofthe proof of <cit.> gives the nextresult.Let d be a distance defined on(resp. or )which is invariantand smooth. Then,wehavethe following alternativeformulations forthe cut distance on(resp.or ):(U,W)= φ∈inf d(U,W^φ)= φ∈inf d(U,W^φ) = ψ∈inf d(U^ψ,W)= ψ∈inf d(U^ψ,W) = φ, ψ∈inf d(U^ψ,W^φ)= φ,ψ∈min d(U^ψ,W^φ) ,and (U,W)= μmin d^μ(U^τ, W^ρ)where μ range over all coupling measures on [0,1]^2.We deduce from the last equality in (<ref>) that (U,W) =0 if andonly if thereexist measure-preservingmaps φ, ψ∈suchthat U^ψ(x,y;·) =W^φ(x,y;·) for x,y∈[0,1]. This gives that the equivalence relationsand ∼ are the same.§.§ The cut norm for stepfunctions For a quasi-convex distance , the cut distancefor stepfunctions can be reformulated using a finite combinatorial optimization.For a collection of subsets , denote by σ() the σ-field generated by . [Combinatorial optimization of quasi-convexfor stepfunctions] Letbe a quasi-convex distance ona convex subset ofcontaining the zero measure. Let U, W∈ be -valued stepfunctions adapted to the same finite partition .Then, there exists S, T ∈σ() such that:(U,W) =(U(S× T;·), W(S× T;·)). Let ={S_1, …, S_k} with k=|| the size of the partition . First, remark that the quantity(U,W) =(U(S'× T';·), W(S'× T';·)) depends on S' and T' only through the values of λ(S'∩S_i)and λ(T'∩ S_i) for 1≤ i≤ k. Thus, the cut distance between U and W can be reformulated as:(U,W) = 0≤α_i, β_i≤λ(S_i) ; 1 ≤ i ≤ ksup( ∑_1≤ i,j ≤ kα_i β_j μ_i,j(·) , ∑_1≤ i,j ≤ kα_i β_j ν_i,j(·) ),where μ_i, j (resp. ν_i,j) is the constant value of U(x,y; ·) (resp. W(x,y; ·)) when x∈ S_i and y∈ S_j.Moreover, when we fix the value ofβ = (β_i)_1≤ i≤ k, the quantity( ∑_1≤ i,j ≤ kα_i β_j μ_i,j(·) , ∑_1≤ i,j ≤ kα_i β_j ν_i,j(·) ) is aquasi-convex function of α = (α_i)_1≤ i≤ k, and thusrealizes itsmaximumonthe extremalpointsof thehypercube ∏_i=1^k [0,λ(S_i)],whenα_i equals0 or λ(S_i)forevery 1≤i≤k. 
By symmetry,asimilar argument holds forβ.The cut distance can thusbe reformulated as the combinatorial optimization:(U,W) = max_I,J⊂ [ k]( ∑_i∈I,j∈Jμ_i,j(·) , ∑_i∈I,j∈Jν_i,j(·) ) .Let I,J⊂[k] that maximizes this combinatorial optimization, and take S = ∪_i∈ I S_i and T = ∪_j∈ J S_j to conclude.§.§ The supremum inand in the cut distanceIn this section, we prove that the supremum in the cut distance is achieved by some subsets S,T⊂ [0,1].For W∈ and f,g : [0,1] → [0,1]measurable,we define the signed measure:W(f ⊗ g;·) = ∫_[0,1]^2 W(x,y;·) f(x)g(y) x y.Remark that if we have W∈ with ϵ∈{ 1, ≤ 1, +, ±}, then we have W(f ⊗ g;·) ∈.[The supremum in the cut distancefor quasi-convex distance ] Letbe a quasi-convex distance onwith ϵ∈{+, ±} that is sequentially continuous the weak topology. Let U,W∈. Then, there exist measurable subsets S,T ⊂ [0,1] such that f = _S and g = _T achieve the supremum in:sup_f,g( U(f ⊗ g;·), W(f ⊗ g;·) )where the supremum is taken over measurable functions f,g from [0,1] to itself.Define the map Ψ : (f,g) ↦(U(f ⊗ g;·),W(f ⊗ g;·)), and denote C = sup_f,gΨ(f,g), where the supremum is taken over measurable functions f,g from [0,1] to itself. Let (f_n)_n∈ and (g_n)_n∈ be sequences of measurable functions from [0,1] to itself such that lim_n→∞Ψ(f_n, g_n)= C. As the unit ball of L^∞([0,1],λ) is compact for the weak-* topology (with primal space L^1([0,1],λ)), upon taking subsequences, we may assume that (f_n)_n∈ (resp. (g_n)_n∈) weak-* converges to some f (resp. g) which take values in [0,1]. Thus, (f_n ⊗ g_n)_n∈ weak-* converges to f ⊗ g in L^∞([0,1]^2,λ_2). In particular, for every h∈, as W[h] is a real-valued kernel, this implies that lim_n→∞ W(f_n ⊗ g_n; h) = W(f ⊗ g ; h). This being true for every h∈, we get that the sequence ( W(f_n ⊗ g_n;·) )_n∈ inweakly converges to W(f ⊗ g;·)∈; and similarly for U. Asis sequentially continuous the weak topology on , we get that C = lim_n→∞Ψ(f_n, g_n) = Ψ(f,g).Now, we show that we can replace the functions f and g by functions that only take the values 0 and 1 (indicator functions). We first fix g and do this for f. Let X be a random variable uniformly distributed over [0,1], and consider the random function _X ≤ f. Remark that we have [ W( _X≤ f⊗g;·) ] = W( f ⊗ g;·), and similarly for U. Asis quasi-convex and sequentially continuous the weak topology, we have:C≥sup_x∈ [0,1] ( U(_x≤ f⊗ g;·), W(_x≤ f⊗ g;·) )≥( [ U(_X≤ f⊗ g;·) ], [ W(_X≤ f⊗ g;·)] )=( U(f ⊗ g;·), W(f ⊗ g;·) )= C,where in the second equality we used the quasi-convex supremum inequality from (<ref>) with the -valued kernels U'(x,y;·) = U(_x≤ f⊗ g;·) and W'(x,y;·) = W(_x≤ f⊗ g;·), and A = [0,1]^2. All inequalities being equalities, this imposes:C= sup_x∈ [0,1] ( U(_x≤ f⊗ g;·), W(_x≤ f⊗ g;·) ) = lim_n→∞ ( U(_r_n≤ f⊗ g;·), W(_r_n≤ f⊗ g;·) ),for some sequence (x_n)_n∈ in [0,1]. Upon taking a subsequence, we may assume that the sequence (x_n)_n∈ monotonically converges to some x∈[0,1]. In particular, the sequence of functions (_x_n≤ f)_n∈ (monotonically) converges to the function f' = _x ≤ f (resp. f' = _x < f) if (x_n)_n∈ is non-decreasing (resp. decreasing), and thus also weak-* converges in L^∞([0,1],λ). Using, as in the first part of the proof, the sequential continuity of the function Ψ the weak-* topology on L^∞([0,1],λ), we get that Ψ(f', g) =( U(f' ⊗ g;·), W(f' ⊗ g;·) ) = C, that is we can replace f by the indicator function f'. 
The same argument allows to replace g by an indicator function.§.§ Examples of distance We consider usual distances and norms onorthat induce the weak topology on .All the distances we consider are quasi-convex, and all the norms we consider are sequentially continuous the weak topology on . Thus their associated cut distances are invariant and smooth by <Ref> and <Ref>. Properties for the cut distances associated with those distances and norms are summarized in Corollaries <ref> and <ref>.In this section, we assume that (, ) is a Polish metric space, and remind thatdenotes its Borel σ-field. §.§.§ The Prohorov distanceThe Prohorov distanceis a complete distance defined on the set of finitemeasures that induces the weak topology (see <cit.>).It is defined for μ, ν∈ as:(μ,ν) = inf{ > 0 : ∀ A∈ℬ(), μ(A) ≤ν(A^) + andν(A) ≤μ(A^) + } ,where A^ = { x∈ : ∃ y∈ A, (x,y) < }. For probability measures, we only need one inequality in (<ref>) to define the Prohorov distance; however for positive measures we need both inequalities as two arbitrary positive measures might not have the same total mass. For =, we use the subscript = 𝒫. We now prove thatthe Prohorov distance is quasi-convex. The Prohorov distanceis quasi-convex on .Let μ_1,μ_2,ν_1,ν_2 ∈ and let α∈ [0,1]. Let > max( (μ_1,ν_1), (μ_2,ν_2) ), then for all i∈{1,2} and B∈, we have that μ_i(B) ≤ν_i(B^) + and ν_i(B) ≤μ_i(B^) +. Taking a linear combination of those inequalities, we get that for all B∈, we have that αμ_1(B) + (1-α) μ_2(B) ≤αν_1(B^) + (1-α) ν_2(B^) +, and similarly when swapping the role (μ_1,μ_2) and (ν_1,ν_2). Hence, we get that ( αμ_1 + (1-α) μ_2, αν_1 + (1-α) ν_2) ≤, and taking the infimum over , we get that ( αμ_1 + (1-α) μ_2, αν_1 + (1-α) ν_2) ≤max( (μ_1,ν_1), (μ_2,ν_2) ).§.§.§ The Kantorovitch-Rubinshtein and Fortet-Mourier norms The Kantorovitch-Rubinshtein norm(sometimes also called the bounded Lipschitz distance) and the Fortet-Mourier normare two norms defined on that induce the weak topology on(see Section 3.2 in <cit.> for definition and properties of those norms).They are defined for μ∈ by: μ = sup{∫_ f μ : f is 1-Lipschitz and f_∞≤ 1} , μ = sup{∫_ f μ : f is Lipschitz and f_∞ + Lip(f) ≤ 1} ,where f_∞ = sup_x∈| f(x) | is the infinite norm and Lip(f) is the smallest constant L>0 such that f is L-Lipschitz. Those two norms are metrically equivalent, see beginning of Section 3.2 in <cit.>:μ≤μ≤ 2 μ.Note that we have μ≤μ, and thus those two norms are sequentially continuous the weak topology on . An easy adaptation of the proof for Theorem 3.2.2 in <cit.> gives the following comparison between ,and . [Comparison of ,and ]Let μ, ν∈. Then, we have:(μ,ν)^2/1+(μ,ν)≤μ-ν≤μ-ν≤(2+ min(μ(), ν()))(μ,ν). In particular, the Prohorov distanceis uniformly continuousandon ; and andare uniformly continuouson .For the special choice = (resp. =),we use the subscript = (resp. =).§.§.§ A normbased on a convergence determining sequence Fromaconvergence determiningsequence=(f_k)_k∈,where f_0=and f_k ∈takes valuesin[0, 1],we definea normonmetrizing the weak topology on , for μ∈, by:μ = ∑_k∈ 2^-k |μ(f_k)|.Note that we have μ≤ 2 μ, and thusis sequentially continuous the weak topology on . For the special choice =,we use the subscript =.Even thoughthe norm is not complete when is not compact (see Lemma <ref> below),the cut normandthe cutdistance willturn outto bevery usefulin Sections <ref>and <ref>to link the topology of the cut distance to the homomorphism densities. 
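As a simple illustration of these distances and norms (an elementary computation which is not used in the sequel), consider two Dirac masses δ_x and δ_y with x,y∈. Then the Prohorov distance between δ_x and δ_y is equal to the minimum of 1 and the distance between x and y in , the Kantorovitch-Rubinstein norm of δ_x - δ_y is equal to the minimum of 2 and the distance between x and y, and the norm of δ_x - δ_y based on the convergence determining sequence is equal to:

∑_k∈ 2^-k | f_k(x) - f_k(y) | .

In particular, all three quantities vanish if and only if x=y, and they are small when x and y are close in .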
Recallis the distance derived from the norm .[ is not complete in general] Letbe a convergence determining sequence. Then, the distanceiscomplete overif and only ifis a compact space, , if and only ifiscompact.Theorem 3.4 in <cit.> states thatis compact if and only ifis compact. When this is the case, any distance metrizing the weak topology onis complete.Reciprocally, assume thatis a complete metric over and write =(f_m)_m∈. Let (μ_n)_n∈ be an arbitrary sequence of probability measures from . For every m∈, as f_m takes values in [0,1], we have for every n∈ that μ_n(f_m) ∈ [0,1]. Hence, using a diagonal extraction, there exists a subsequence (μ_n_k )_k∈ of the sequence (μ_n)_n∈ such that for every m∈, the sequence (μ_n_k(f_m) )_k∈ converges,that is,(μ_n_k )_k∈ is a Cauchy sequence for the distance .As we assumed the distanceto be complete, this implies that the sequence (μ_n)_n∈ has a convergent subsequence. The sequence (μ_n)_n∈ being arbitrary, we conclude that the spaceis sequentially compact, and thus compact by <Ref>.ForW∈ andf∈,we denote by W[f] the real-valued kernel defined by:W[f](x,y) = W(x,y;f) = ∫_ f(z) W(x,y; z). We denote by(resp. ·) the cut norm (resp. one-sided version of the cut norm) for real-valued kernels defined as:w = sup_S,T ⊂ [0,1]|∫_S× T w(x,y)x y |andw = sup_S,T ⊂ [0,1]∫_S× T w(x,y)x y ,where w is a real-valued kernel w (see <cit.>, resp. <cit.>, for definition and properties of those objects).The following two remarks link the cut normof a signed measure-valued kernel W with the cut normof the real-valued kernels W[f] for some particular choices of functions f∈. We will reuse those facts in Section <ref>. For μ∈ we have:μ = sup_∈{± 1}^∑_n∈ 2^-n_n μ(f_n) = sup_∈{± 1}^μ( ∑_n∈ 2^-n_n f_n ),with ε=(ε_n)_n∈.Hence, for a signed measure-valued kernel W∈, we have:W= sup_∈{± 1}^sup_S, T⊂ [0,1] W(S× T; ∑_n∈ 2^-n_n f_n ) = sup_∈{± 1}^W [ ∑_n∈ 2^-n_n f_n ]. For a signed measure-valued kernel W, we have:W= sup_S,T ⊂ [0,1]∑_n=0^∞ 2^-n| ∫_S× TW(x,y,f_n) xy | ≤∑_n=0^∞ 2^-nsup_S,T ⊂ [0,1]| ∫_S× TW(x,y,f_n) xy |= ∑_n=0^∞ 2^-nW[f_n] .§ TIGHTNESS AND WEAK REGULARITYInthissection,usingaconditionalexpectationapproachasin <cit.>, weprovide approximationsof signed measure-valued kernels and probability-graphons by stepfunctionswith an explicit bound on thequality of the approximation. This proceduretakes intoaccount thatsigned measure-valued kernels are infinite-dimensional valued. §.§ Approximationby stepfunctionsWe start by introducing the partitioning of a signed measure-valued kernel.[The stepping operator] Let W∈ be a signed measure-valued kernel and𝒫={S_1,⋯,S_k} be a finite partition of [0,1]. We define the kernel stepfunction W_𝒫 adapted to the partition𝒫 by averaging W over the partition subsets:W_𝒫(x,y;·) = 1/λ(S_i)λ(S_j)W(S_i × S_j;·) for x∈ S_i, y∈ S_j,when λ(S_i)≠ 0 and λ(S_j)≠ 0, and W_𝒫(x,y;·) = 0 the null measure otherwise. We call the map W↦ W_ defined onthe stepping operator(associated with the finite partition 𝒫).Since the signed measure-valued kernel are defined up to an a.e. equivalence, the value of W_𝒫(x,y;·) forx∈ S_i, y∈ S_j when λ(S_i)λ(S_j) is unimportant. The stepfunction W_𝒫 can be viewed as the conditional expectation of W the (finite) sigma-field σ(×)on [0,1]^2, where W:[0,1]^2→ is seen asa random signed measure inand the probability measure on [0, 1]^2 is the Lebesgue measure. Let ⊂ be a convex subset of measures, for instanceis , ,or . Whenever W∈ is a -valued kernel,then by simple computation its stepping W_𝒫 is also a -valued kernel. 
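To illustrate the stepping operator on an elementary example (the particular choices below are only for illustration), take ={0,1} and let W be the probability-graphon defined by W(x,y;·) = xy δ_1 + (1-xy) δ_0, which represents the real-valued graphon w(x,y)=xy, and let 𝒫 = {[0,1/2), [1/2,1]}. Since the average of the identity map equals 1/4 on [0,1/2) and 3/4 on [1/2,1], the stepping W_𝒫 is the stepfunction probability-graphon with:

W_𝒫(x,y;{1}) = 1/16 on [0,1/2)^2, 3/16 on [0,1/2)×[1/2,1] and on [1/2,1]×[0,1/2), and 9/16 on [1/2,1]^2,

and W_𝒫(x,y;{0}) = 1 - W_𝒫(x,y;{1}).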
In the following remark, we give a characterization of refining partitions that generate theBorelσ-field of [0,1]. Let(_k)_k∈ bea sequenceof refiningpartitions of [0, 1].Itgenerates theBorel σ-field of [0,1] (that is, {S:S∈_k, k∈} generates theBorel σ-field of [0,1])if and only if(_k)_k∈separates points (that is,for everydistinct x,y∈ [0,1], there existsk∈ suchthatxandy belongtodifferentclassesof _k). Indeed, assume that(_k)_k∈ separates points, and consider the countable familyof Borel-measurable functions = {_S:S∈_k, k∈} whichseparates points. Thus, by<cit.>(remark thata Polish spaceis a Souslinspace, see <cit.>), thefamily generates the Borelσ-field of[0,1].Thisimplies thatthe familyof Borel sets {S : S∈_k,k∈} generates the Borel σ-field of [0,1].Conversely, assume there exist x,y∈ [0,1] which are not separated by (_k)_k∈, for all k∈, x and y belong to the same class of _k. This implies that the set {x} does not belong to the σ-field generated by (_k)_k∈, and thus (_k)_k∈ does not generate theBorel σ-field of [0,1]. Recall the definition of the norm· ondefined in (<ref>).The following lemma allows to approximate any signed measure-valued kernel by its steppings.[Approximation using the stepping operator]Let W∈ be a signed measure-valued kernel(which is bounded by definition).Let(𝒫_n)_n∈bearefining sequenceof finite partitionsof[0,1]thatgenerates the Borel σ-field on [0,1].Then,thesequence (W_𝒫_n)_n∈ is uniformlybounded byW, and weakly converges to W almost everywhere (on [0,1]^2). Set W_n= W_𝒫_n for n∈.By definition of the stepping operator, we have for every n∈ and every (x,y)∈[0,1]^2that the total mass of W_n(x,y; ·) is upper bounded by W.Recallthat forW∈ andf∈,the real-valued kernel W[f] is defined by (<ref>). First assume that W∈. Let = (f_k )_k∈ be a convergence determining sequence, with by conventionf_0=.Forevery k∈and n∈,an immediate computationgives W_n[f_k] = (W[f_k])_𝒫_n.For everyk∈, as W[f_k]is areal-valuedkernel,we canapply the closed martingale theorem (as (W[f_k])_𝒫_n can be viewed as a conditional expectation, see Remark <ref>), and we get that lim_n→∞ W_n[f_k]=W[f_k]almost everywhere, since (_n)_n∈ generates the Borel σ-field.Hence, as the sequence (f_k)_k∈ is convergence determining, the sequence(W_n)_n∈ weakly converges to W almost everywhere.Now, for W∈, write W=W^+ - W^- where W^+, W^- ∈ (see Lemma <ref>). By linearity of the stepping operator,remark that we have W_n = (W^+)__n - (W^-)__n for all n∈. By the first case, we have that the sequence ((W^+)__n)_n∈ weakly converges to W^+, and similarly for ((W^-)__n)_n∈ and W^-. Hence, the sequence (W_n)_n∈ weakly converges to W almost everywhere.We first provide a separability result on the space of probability-graphons. Let dbe a smooth distanceon(resp. or ). Then, thespace (,d) (resp.(, d)or (, d)) is separable.If furthermore d is invariant (which implies thatis a distance), then thespace (,) (resp.(, )or (, )) is separable.We shall consider the space of probability-graphons , as the proofs forandare similar.Applying Lemma <ref> with the sequence of dyadic partitions, for every probability-graphon W, we can find a sequence of probability-graphon stepfunctions adapted to finite dyadic partitions and converging to W almost everywhere on [0,1]^2.As the spaceis separable,the space of probability measuresis also separable for the weak topology (see <cit.>). Let ⊂ be an at most countable dense (for the weak topology) subset. 
Then, for any stepfunction W∈ adaptedto a finite dyadic partition, we can approach it everywhere on [0,1]^2 by a sequence of-valued stepfunctions adaptedto the same finite dyadic partition.Hence, for every W∈, there exists a sequence (W_n )_n∈in the countableset of-valued stepfunctions adapted to a finite dyadic partitionthat converges to W almost everywhere on [0,1]^2. As d is smooth, we get that this convergence also holds for d. Thus, the space (, d) is complete.Remind that by Theorem <ref>, when the distance d is invariant and smooth, then the premetricis a distance on . In that case, convergence for d implies convergence for , and thus the space (,) is also separable. §.§ Tightness Similarly tothe case ofsigned measures (remind Lemma <ref>),we introduce atightness criterion for signed measure-valued kernels that characterizes relative compactness, seeProposition <ref> below.For a signed measure-valued kernel W∈, we define the measure M_W ∈ by:M_W( z)= | W | ([0,1]^2;z) = ∫_[0,1]^2| W| (x,y; z) xy,where for every x,y∈[0,1], | W| (x,y;·) is the total variation of W(x,y;·) (see Lemma <ref>). In particular, if W is a probability-graphon then M_W is a probability measure from . Notice also that if W and U are weakly isomorphic, then M_W=M_U, so that the application W ↦ M_W can be seen as a map from(resp.) to(resp. ). [Tightness criterion]A subset𝒦⊂ (resp. 𝒦⊂)is saidto be tightif thesubset ofmeasures { M_W : W∈𝒦}⊂ is tight. The following proposition shows the equivalence between a global tightness criterion and a local tightness criterion. Recall that uniformly bounded subsets ofare discussed after<Ref>. Recall also λ_2 is the Lebesgue measure on [0, 1]^2.Let ⊂ (or⊂) be a uniformly bounded subsetofsigned measure-valuedkernels. The setis tight if and only if for every >0, there exists a compact set K ⊂,such that for every W∈𝒦 we have:λ_2 ( { (x,y)∈ [0,1]^2 : | W|(x,y; K^c) ≤}) >1-.As the left hand side of (<ref>) is invariant by relabeling, it is enough to do the proof for . Let ⊂ beuniformly boundedand set C= sup_W∈W <∞.Assume thatfor every>0, there existsa compactset K⊂,such that (<ref>) holds for everyW∈𝒦.Let 1 > > 0. Thus, there existsacompactsubsetK⊂suchthatforevery W∈𝒦there exists asubset A_W⊂ [0,1]^2 with(Lebesgue)measureatleast 1-,suchthatforevery (x,y)∈ A_W, we have | W|(x,y;K^c) ≤. We have that for all W∈:M_W(K^c) = ∫_[0,1]^2| W|(x,y;K^c)x y ≤Wλ_2 (A_W^c )+ ελ_2(A_W) ≤(C+1) ε.Hence, the subset of measures{ M_W : W∈𝒦}⊂ is tight, that is𝒦 is tight.Conversely, suppose that 𝒦 istight.Let >0. Thereexistsa compactsetK⊂such thatforevery W∈𝒦, wehaveM_W(K^c) < ^2. For W∈𝒦,define A_W = { (x,y) ∈ [0,1]^2 : | W|(x,y;K^c) ≤}.We have:^2 >M_W(K^c) = ∫_[0,1]^2| W|(x,y;K^c)x y≥ελ_2(A_W^c).Hence, λ_2(A_W) >1 -, and consequently Equation (<ref>) holds.We end this section on a continuity result of the map W ↦ M_W.[Regularity of the map W ↦ M_W]Letbeadistance on (resp. ). Then the map W↦ M_W is1-Lipschitz, and thus continuous,from (, )(resp. (, )) to (, ) (resp. (, )).Taking S=T=[0,1] in Definition (<ref>) of , we get that (M_U,M_W) ≤(W,U).AsM_U^φ=M_U for any measure-preserving map φ thanks to (<ref>), we deduce from Definition (<ref>) ofthat(M_U,M_W) ≤(U,W).§.§ Weak regularityWe shall consider the following extra regularitiesof distances on the set of signed measure-valued kernels the stepping operator.For a finite partition , denote by || the size of the partition , the number of sets composing . [Regularities of distances]Let d be a distanceon(resp.or ).* Weakregularity. 
The distance d is weaklyregular ifwheneverthe subset of (resp.or) istight (resp. tight and uniformly bounded),thenforevery >0, thereexists m∈^*,such thatfor everykernelW∈,andfor everyfinite partition 𝒬of [0,1],there exists afinite partition 𝒫of [0,1]thatrefines𝒬 such that:|𝒫|≤ m|𝒬|and d(W, W_𝒫) < . *Regularity the steppingoperator. The distance d is regular the steppingoperator if(resp. for any finiteconstant C≥ 0) there exists a finite constant C_0>0suchthat for every W, U in(resp. inor , withW≤C and U≤ C) and every finite partitionof [0, 1], then we have:d(W,W_) ≤ C_0 d(W,U_). We say that a norm N onis weakly regular (resp. regular the steppingoperator) if its associated distance d onis weakly regular (resp. regular the steppingoperator). The weak regularity property is an analogue to the weak regularity lemma for real-valued graphons (see<cit.>). If a distance d is weakly regular, then for a subset ⊂ which is tight and uniformly bounded, every -valued kernel can be approximated by a stepfunction with a uniform bound. The regularity the steppingoperator states that thestepping operator gives an almost optimal way to approximate a signed measure-valued kernel using stepfunctions adapted to a given partition.§.§.§ An example of cut distance regular the steppingoperator Remind the definition of a quasi-convex distance in <Ref>. We first show that the stepping operator is 1-Lipschitzfor the cut distancewhen the distanceis quasi-convex. [The stepping operator is 1-Lipschitz] Letbe a quasi-convex distance ona convex subset ofcontaining the zero measure. Then, the stepping operator associated with a given finite partition of [0, 1] is 1-Lipschitzonfor the cut distance .Let U,W∈ be -valued kernels, and let 𝒫be a finite measurable partition of [0,1]. As U_𝒫 and W_𝒫 are stepfunctions adapted to the same partition, and asis quasi-convex, we can useLemma <ref>to get for some S, T∈σ() that:(U_𝒫, W_𝒫)= (U_𝒫(S× T;·), W_𝒫(S× T;·)) = (U(S× T;·), W(S× T;·)) ≤(U,W),where thesecond equality comesfrom thefact that theintegrals are equals asS, T∈σ(𝒫) andthus theintegration is over fullsteps of the partition. Hence, the steppingoperator is 1-Lipschitzonfor the cut distance . For a quasi-convex distance , the cut distanceis regular the stepping operator withC_0=2 in (<ref>) (and one can take C=+∞ in Definition <ref> <ref>). [ is regular the stepping operator] Letbe a quasi-convex distance ona convex subset ofcontaining the zero measure. Let W, U∈ be -valued kernels, and let𝒫 be a finitepartition of [0,1].Then, we have:(W, W_)≤ 2 (W, U_) . The proof is similar to the proof of <cit.>. Asis quasi-convex, using Lemma <ref>, we get:(W, W_𝒫)≤ (W , U_) + (U_ , W_𝒫)≤2 (W ,U_) .§.§.§ An example of weakly regular cut distance We have the following general result.Recall Definitions <ref> and <ref> on distances and norms on , with ∈{ +, ±}, beinginvariant, smooth, weakly regular and regular the stepping operator.Letbe a quasi-convex distanceon , with ϵ∈{+, ±}, which is sequentially continuous theweak topology. Then, the cut distanceonis invariant, smooth, weakly regularand regular the stepping operator.UsingresultsfromSection <ref>,we directly getthe following weak regularity of the cut distanceand the cut norms ,and .The cut norms ,and(resp. the cut distance )on(resp. )are invariant, smooth, weakly regular and regular the stepping operator. We deduce from Lemmas <ref> and <ref>,Proposition <ref> and thatthe cut distance onis invariant, smooth and regular the stepping operator. Weare left to provethatis weaklyregularon . 
We prove it by considering in the first step the casecompact and in a second step the general casePolish.Step 1. We assumecompact. As in the definition of weak regularity, let ⊂ be a subset of -valued kernels that is tight and uniformly bounded by some finite constant C. Let ⊂ be the subset of elements ofwith total mass at most C; in particularis a convex set containing 0 and⊂.Asis compact, from Remarks <ref> and <ref>, we know that the weak topology is metrizable on and thatis compact, and thus sequentially weakly compact.Hence, asis sequentially continuous the weak topology on , wehavethat(, )is sequentially compact, and thus compact.Denote by B(μ,r) = {ν∈: (μ, ν) < r} the open ball centered at μ∈ with radius r>0. Let>0. Asis compact, thereexist μ_1, …,μ_n ∈, n∈^*,such that = ∪_i=1^nB(μ_i,). For 1≤i≤ n,define A_i= B(μ_i,) ∖∪_j<i B(μ_j,), sothat {A_1,…,A_n} isafinitepartition (withpossiblysome empty sets) of .Every -valued kernel W can be approximated by a {μ_1, …, μ_n}-valued kernel U defined for every (x,y)∈[0,1]^2 byU(x,y;·) = μ_i for i such that W(x,y;·)∈ A_i. Thus, by construction, we have that for every (x,y)∈[0,1]^2,(W(x,y;·) , U(x,y;·)) <. Applying the quasi-convex supremum inequality from (<ref>)to W and U, we get that:(W,U) ≤_(x,y)∈ [0,1]^2(W(x,y;·) , U(x,y;·)) ≤. Then, asthe stepping operator is1-Lipschitzfor thecut norm, see Lemma <ref>, wehave forany finite partition𝒫 of [0,1] that:(W , W_𝒫)≤(W , U) + (U , U_𝒫) + (U_𝒫 , W_𝒫) ≤2 + (U , U_𝒫). Hence, to get the weak regularity property for -valued kernels, we are left to prove it for the much smaller set of𝒱-valued kernels,where 𝒱 is the convex hull of {μ_1, …, μ_n}.Asis quasi-convex and sequentially continuous the weak topology, using <Ref>, there exists η>0 such that for all μ,ν∈, we have that μ - ν < ηimplies that (μ,ν) ≤.As 𝒱is a subset of a vectorspace withfinite dimension n, the norm ·seenover𝒱isequivalenttotheL_1-norm μ= ∑_i=1^nα_i μ_i↦α_1= ∑_i=1^n |α_i|.We can now see 𝒱-valued kernelas ^n-valuedgraphonwitha cutnormderived fromthe L_1-norm·_1, andinthiscase the proof for the weak regularity Lemma 9.9 in <cit.> can easily be adapted.Hence,we havethe weakregularity propertyfor 𝒱-valued kernels: there existsm∈^*, suchthat forevery 𝒱-valued kernel U', and for every finite partitionof [0,1]there exists a finite partition 𝒫of[0,1]thatrefines 𝒬, and such that ||≤ m || and sup_S,T⊂ [0,1](U' - U'_𝒫)(S× T;·) < η, and thus (U', U'_𝒫) ≤.TakingU'=U in (<ref>), we get that (W, W_𝒫)≤ 3εand ||≤m |𝒬|. This concludes the proof of the lemma whenis compact.Step 2. We consider the general case Polish.Wenow provethatis weaklyregularon .Let⊂bea subsetof -valuedkernels thatistight and uniformly bounded, and denote by C=sup_W∈W<∞.Let > 0. Asis quasi-convex and sequentially continuous the weak topology, using <Ref>, there exists η>0 such that for all μ,ν∈, we have that μ - ν < ηimplies that (μ,ν) <. Without loss of generality, we assume that η≤. Let η_C = min(η, η / C).As istight, usingProposition <ref>, thereexistsa compactset K ⊂, such thatfor every W∈𝒦 the subset A_W= {(x,y)∈ [0,1]^2: | W|(x,y;K^c) ≤η_C / 2 }has Lebesgue measure atleast 1-η_C / 2. Let W∈, anddefine the signed measure-valued kernelU by:U(x,y;·) = W(x,y;·∩ K) forevery (x,y)∈A_W,and U(x,y;·)=0 otherwise.Let S,T ⊂ [0,1]. We have:(W-U)(S× T; ·) ≤∫_S× TW(x,y;·)-U(x,y;·)x y ≤∫_A_W ∩ (S× T)| W|(x,y;K^c) x y + ∫_A_W^c ∩ (S× T)W(x,y;·)x y ≤η_C / 2 + C ·η_C / 2 ≤η.Thus, we have that (W(S× T; ·), U(S× T; ·)) <. 
Since this holds for all S,T ⊂ [0,1], we get that (W,U) ≤.Notice that the -valued kernelU is alsoa -valued kernel, where K⊂ is a compact set, and that U≤W≤ C. Further remark that, using Lemma <ref>, for every W∈ and every finite partitionof [0,1], we have that:(W , W_)≤(W , U) + (U , U_) + (U_ , W_) ≤ 2+ (U , U_) .Hence, to get the weak regularity property foron (see Definition <ref> <ref>), it is enough to prove thatrestricted to -valued kernels is weakly regular, which is true by Step 1. As a consequence, we get thaton is weakly regular.§.§ A stronger weak regularity lemma forIn this subsection, we prove a stronger version of the weak regularity lemma for the special case of the cut distance . We shall use this result for the proof of the second sampling Lemma <ref>.Let = (f_n )_n∈, with f_0= and f_ntakes valuesin[0, 1],be a convergence determining sequence, which is assumed fixed inthis section.§.§.§ Comparison between and an euclidian norm To better understand the stepping operator, we introduce a scalar product over signed measure-valued kernels. The link between this scalar product and the normis given by Lemma <ref>. We define the scalar product ⟨·, ·⟩_ on signed measure-valued kernels for U,W∈ by:⟨ U,W ⟩_ = ∑_n≥ 0 2^-n⟨ U[f_n], W[f_n] ⟩,where for all n the scalar product taken for U[f_n] and W[f_n] is the usual scalar product in L^2([0,1]^2, λ_2) for real-valued kernels:⟨ U[f_n], W[f_n] ⟩= ∫_[0,1]^2 U[f_n](x,y) W[f_n](x,y) xy.The scalar product ⟨·, ·⟩_induces a norm onwhich we denote by ·_2,.Letbe a finite partition of [0, 1]. As the stepping operator formeasurable real-valued L^2 functions on [0,1]^2 is a linear projection, and is idempotent and symmetric, and by definition of the scalar product ⟨·, ·⟩_ for signed measure-valued kernels, we have that the stepping operator for signed measure-valued kernels is linear, idempotent and symmetric for ⟨·, ·⟩_. Moreover, the stepping operator is the orthogonal projection for ⟨·, ·⟩_ onto the space of stepfunctions with steps in 𝒫. Note that for a probability-graphon W∈, we have W_2,≤√(2) aseach f_n takes values in [0,1]. The following technical lemma gives a comparison betweenand ·_2,. [Comparison betweenand ·_2,] For a signed measure-valued kernel W∈, we have W_□,≤√(2)W_2,.Let S,T ⊂ [0,1] be measurable subsets. By the Cauchy-Schwarz inequality, we have |⟨ W[f_n], 1_S× T⟩|^2 ≤W[f_n]_2^2 = ⟨ W[f_n], W[f_n] ⟩ for every n≥ 0. Using this inequality along with Jensen's inequality, we get for every S,T⊂ [0,1] that:( ∑_n≥ 0 2^-n|W(S× T,f_n) |)^2 = ( ∑_n≥ 0 2^-n|⟨ W[f_n], 1_S× T⟩|)^2 ≤∑_n≥ 0 2^-n+1|⟨ W[f_n], 1_S× T⟩|^2 ≤∑_n≥ 0 2^-n+1⟨ W[f_n], W[f_n] ⟩ = 2 (W_2,)^2 .Taking the supremum over every measurable subsets S,T ⊂ [0,1] gives the desired inequality.§.§.§ The weak regularity lemma for The following lemma gives an explicit bound on the approximation of a signed measure-valued kernel, say W,by its steppings W_, witha finite partition on [0, 1]. Its proofis a straightforward adaptation of the proof of the weak regularity lemmafor real-valued graphons in <cit.>.[Weak regularity lemma for , simple formulation] For every signed measure-valued kernel W∈ and k≥ 1, there exists a finite partition 𝒫 of [0,1] such that ||=k and:W - W_𝒫_□,≤√(8)/√(log(k))W_2,. In particular, if W∈ is a probability-graphon,(as W_2,≤√(2)) we have:W - W_𝒫_□,≤4/√(log(k))·It is possible in the weak regularity lemma to ask for extra requirements, for instance to start from an already existing partition, or to ask the partition to be balanced, as stated in the following lemma. 
The proof is a straightforward adaptation of the proof of <cit.>.[Weak regularity lemma for , with extra requirements] Let W∈ be a probability-graphon, and let 1≤ m < k. *For every partition 𝒬 of [0,1] into m classes, there is a partition 𝒫 with k classes refining 𝒬 and such that:W - W_𝒫_□,≤4/√(log (k/m))· *For every partition 𝒬 of [0,1] into m classes, there is an equipartition (a finite partition into classes with the same measure) 𝒫 of [0,1] into k classes and such that:W - W_𝒫_□,≤ 2 W - W_𝒬_□, + 2m/k· § COMPACTNESS AND COMPLETENESS OF In Section <ref>,we link the tightnesscriterion for measure-valued kernelswith the relative compactness the cut distance .In Section <ref>, we compare the topologies inducedbythe cutdistancefordifferent choiceofthe distance ,and statethat undersome conditionson , thosetopologiescoincide.InSection <ref>,we investigate the completeness ofendowed with the cut distanceand prove that the space of probability-graphonsis a Polish space (Theorem <ref>), andthat it is compact if and onlyif is compact(Corollary <ref>).The technicalproofsare postponedto Section <ref>. §.§ Tightness criterion and compactness Let ⊂ bea subset of signed measures on . Recall that_⊂denotethe subsetof signed measure-valuedkernels which are -valued. In this section, we shall denote by 𝒲_ the quotient of 𝒲_ identifyingsigned measure-valued kernelsthat are weakly isomorphic.Remind from Definition <ref> and Theorem <ref> that for an invariant, smooth and weakly regular distance don(resp. ,),is defined as (U,W) = inf_φ∈ d(U, W^φ), and is a distance on(resp. ,).We are now ready to formulate the important following theorem, which relates tightness with compactness and convergence for signed measure-valued kernels. We prove this theorem in Section <ref>.[Compactness theorem for ]Letd bean invariant,smoothand weaklyregular distanceon(resp. ).*If a sequence of elements ofor(resp.or ) istight (resp.tightanduniformly bounded),thenit hasa subsequence converging for . * If⊂(resp. ⊂)is convex and compact (resp. sequentially compact) for the weak topology, then the space (𝒲_, ) is convex and compact. * If is compact, then the space(, ) is compact. We deduce from this theorem a characterization of relative compactness for subsets of probability-graphons. Letbe adistanceon (resp.or ) thatinduces the weak topology on (resp. ). Assume that the distanceon(resp.or )is(invariant)smooth andweakly regular.*If a sequence of elements ofor(resp.or ) is converging for , then it is tight.*Letbe a subset of(resp. a uniformly bounded subset of ).Then, the setis relatively compact forif and only if it is tight.* Letbea subset of which isbounded, convex and closed for theweak topology.Then the set _ is convex and closed in . Remark that convergence fordoes not necessarily imply tightness onor on .We consider the case whereis a distance onor , the case withis similar.We provePoint <ref>. Let(W_n )_n∈be a convergentsequenceof(and thusof)for . Wededuce from the continuityof the map W↦ M_W, seeLemma <ref>, thatthe sequence (M_W_n )_n∈ is converging for , and thus is tight asinduces the weaktopology on . Then, by definition the sequence (W_n )_n∈ is tight. We provePoint <ref>.If ⊂is tight and uniformly bounded, then by Theorem <ref> <ref> every sequence inhas a subsequence converging for , which implies thatis relatively compact in the metric space (,) (see Remark <ref>).Conversely, assumethat ⊂ isuniformly bounded andrelativelycompact for. Define = {M_W:W ∈𝒦}⊂. 
By Lemma <ref>, the mappingW ↦ M_W is continuous from (, ) to (, ).Hence, asinduces the weak topology on ,the setis also relatively compact in for the weaktopology.As the space is Polish,applying Lemma <ref>, wegetthat⊂istight,andby Definition <ref>, the set ⊂ is tight.We postpone the proof of Point <ref> to Section <ref> on page page_proof_point_iii. §.§ Equivalence of topologies induced by The following lemma allows to show a first result on equivalence of the topologies induced by the cut distancefor different distances , where the sub-script m is used to distinguish different distances. Its proof is given below. Remind from <Ref> thatmust be smooth forto be a distance.[Comparison of topologies induced byand ]Let andbetwo distances on such thatis uniformly continuous(in particular,induces a finer topology thanon ). Then, we have the following properties.*The distanceis uniformly continuouson .In particularinduces a finer topology thanon . * If thedistance onis smooth,thenthedistance isalsosmooth andis uniformly continuous . In particular,inducesa finertopology than on . *If the distanceonis weakly regular, then the distance is also weakly regular. *Assume that the distance induces theweak topology on , and that the distanceis smooth and weakly regular. In particular, the distancealso induces the weak topology on . Then, the distancesandinduce the same topology on .We will see some application of Lemma <ref> in Corollary <ref> below. In Lemma <ref> <ref>-<ref>, one can replaceandbyandor by andas soon as the distancesandare defined on or; in this case comparisons of topologies only apply on uniformly bounded subsets. In Lemma <ref> <ref>, one can replacebywith a bounded subset ⊂ as soon as the distancesandare defined on. We prove Point <ref>.Let> 0.Asis uniformly continuous , there exists η > 0 such that for every μ, ν∈, if (μ,ν) < η, then (μ,ν)<.Let U,W∈ such that (U,W)<η. Then, for every subsets S,T ⊂ [0,1], we have:(U(S× T;·),W(S× T;·) ) <.Thus, (U,W) ≤. Hence,is uniformly continuous .We provePoint <ref>.Assumethat is smooth. Let(W_n )_n∈and Wbe probability-graphons suchthat W_n(x,y;·)weakly convergesto W(x,y;·)for almostevery x,y∈ [0,1].Sincethe cut distanceis smooth,we get that (W_n, W)→ 0.As is uniformlycontinuous(and thusalso continuous),we havethat (W_n, W) → 0.Hence,is smooth.Furthermore, let > 0. Let η > 0 be such that for every μ,ν∈, (μ,ν) < η implies (μ,ν) <. For every U,W∈ such that (U,W) < η, there exists φ∈ such that (U, W^φ) < η, which implies that(U, W^φ) <, which then implies that (U,W) <. That is,is uniformly continuous . We provePoint <ref>.Assume that is weakly regular.Let 𝒦⊂ be tight.Let> 0. As isuniformlycontinuous , thereexists η > 0 such that for every U,W∈, if (U,W) < η, then (U,W) <.Sinceis weakly regular,there existsm∈^*, suchthat forevery probability-graphonW∈𝒦, andforeveryfinite partition 𝒬of [0,1],thereexists afinite partition 𝒫of [0,1] that refines 𝒬 and such that ||≤ m || and (W, W_𝒫)< η; andthus wealso have (W, W_𝒫) <. Hence,is weakly regular.We provePoint <ref>.Assume that induces the weak topology onand thatis smooth and weakly regular. In particular, the topology induced byif finer than the topology induced by , finer than the weak topology. Asis smooth, by Lemma <ref>, is continuous the weak topology (the weak topology if finer than the topology induced by ), and thusinduces the weak topology on . ByPoints <ref> and <ref>, weget thatisalso smooth and weakly regular. 
By Point <ref>, the distanceinduces a finer topology thanon .Wenow provethatthe topologyof isfiner thanthe topology of. Let(W_n )_n∈and Wbe probability-graphons in, suchthatW_nconverges toWfor .By Proposition <ref> <ref>, we deduce thatthe sequence (W_n)_n∈ is tight. Asis smoothand weakly regular, Theorem <ref>givesthateverysubsequence (W_n_k)_k∈ofthe sequence (W_n)_n∈hasafurther subsequence (W_n'_k)_k∈that converges forto a limit, sayU∈.Since is finerthan , wededuce that (W_n'_k)_k∈ converges alsoto U for ;but, as asubsequence, it alsoconvergestoWfor. As isadistanceonthanksto Theorem <ref>,we getU=W. Hence,every subsequenceof (W_n)_n∈has afurther subsequence thatconvergesto Wfor , therefore the whole sequence itself converges toW for . Consequently, isfinerthan , and thus thosetwo distances induce the same topology on .The following theorem states thatthe topology induced bydoes not depend onunder some hypothesis.We prove this theorem inSection <ref>. Recallthatunder suitableconditions satisfiedin thenext theorem,thequotient spacedoesnotdependon thechoiceofthedistance , see Theorem <ref>.[Equivalence of topologies induced byon ] The topology onthe space probability-graphoninduced by the distance does not depend on the choiceof the distanceon , as long asinduces the weaktopology onand the cut distanceonis (invariant) smooth,weakly regular and regular the stepping operator. Remind from <Ref> that when the distanceis quasi-convex and continuous the weak topology onor , then the cut distanceis invariant, smooth,weakly regular and regular the stepping operator. This is in particular the case of , , and .The next corollary is an immediate consequence of Lemma <ref>, Corollary <ref>, Lemma <ref> and Theorem <ref>. This corollary gathers results comparing the topology induced by the cut distances associated with the distances introduced in Section <ref>. It is yet unclear if thedistancesinduces the same topology on the space of labeled probability-graphons as the one induced by ,or . The cut distancesonand , andon are invariant, smooth, weakly regular and regular the stepping operator. Moreover, we have the following comparison between the distances introduced in Section <ref>. *The cut normsand(resp. the cut distancesand ) are metrically equivalent on(resp. ).*The cut distances ,and(resp. ,and ) are uniformly continuous one another, and thus induce the same topology on(resp. ) and on every uniformly bounded subset of(resp. ). *The cut distances, ,and , for everychoice of the convergence determining sequence , induce the same topologyon. The first part of the corollary is a re-statement of <Ref>. Point <ref> is an immediate consequence of (<ref>).We now prove Point <ref>. Thanks to (<ref>) and Point <ref>, it isenough to consider onlythe Prohorov and the Kantorovitch-Rubinshtein distances.Asis uniformly continuous (see Lemma <ref>), applying Lemma <ref>(remind Corollary <ref>) with Remark <ref> in mind, weget that (resp. ) isuniformly continuous(resp. ) on every uniformly bounded subset of(resp. ) Asisalso uniformly continuous (see Lemma <ref>), applying again Lemma <ref>, we have that(resp. ) isuniformly continuous(resp. ) on every uniformly bounded subset of(resp. ).Point <ref> is an immediate consequence of Corollary <ref> and Theorem <ref>, together with Point <ref>. In Theorem <ref> and also in Corollary <ref> <ref>, one can replace bywith a bounded subset ⊂ as soon as the distanceis defined on . (One has in mind the case =.) 
This can be seen by an easy modification in the proof of Theorem <ref>. Alternatively, this can be seen using scaling to reduce the case of generalto the case of , and then adding a cemetery point (for missing mass of measures) toto further reduce to the case of . §.§ Completeness Letbe a distance onor .We shall consider a slight modification of the cut distancesandtoachieve completeness.Recall the measure M_W∈defined by (<ref>) associated to W∈.[The cut distancesand ] Letand be two distances onwith ϵ∈{≤ 1, +}. We define the cut distanceon the space of -valued kernelsas:(U,W) = (U,W) + (M_U,M_W),and the cut (pseudo-)distance on the space of unlabeled -valued kernelsas:(U,W) = inf_φ∈(U,W^φ) =(U,W) + (M_U,M_W) . Notice that by Lemma <ref> and the definition of M_W, the distanceis invariant.[Topological equivalence ofand ]Let and be two distanceson , with ϵ∈{≤ 1, +}, such that iscontinuous andthat is (invariant and) smoothon .Then, thecut distanceisinvariant andsmoothandisadistanceon. Moreover, the distancesand(resp.and) induce the same topology on the space(resp. ). Let (W_n)_n∈ andW be elementsof such that (W_n(x,y;·))_n∈weakly convergesto W(x,y;·)for almost every x,y∈[0,1].Since thedistanceis smooth, we have that lim_n→∞(W_n,W)= 0. Using Lemma <ref>on thecontinuityof themap W ↦M_W andthat is continuous ,we obtainthat lim_n→∞(W_n,W)=0. This givesthat the distanceissmooth.Sincewehavealreadyseenthat is invariant, we deduce fromTheorem <ref> thatis a distance on .We nowprove thatthe twodistancesandinducethe sametopology (which implies that this is also true forand ). As≤, convergence for implies convergence for . Conversely, let (W_n )_n∈ be asequence inthat converges for to alimit, say W∈.Using againLemma <ref> andthecontinuity of,we obtain that lim_n→∞(M_W_n,M_W)= 0.Thisclearly implies that the sequence (W_n )_n∈ converges to W for . Then, the two distances have the same convergent sequences and thus induce the same topology (see Remark <ref>). Recallis a Polish space. We already proved in Proposition <ref> that the space (, ) is separable; and we now investigate completeness of this space.[ is a Polish space] Letand be two distances on such that inducestheweak topologyon , is completeandcontinuous , and is (invariant) smooth and weakly regularon .Then, the space (,)isaPolish metricspace.Note that the assumptions in Theorem <ref> imply that also inducestheweak topology on . Indeed, asis continuous , the topology induced byif finer than the topology induced by , finer than the weak topology. Asis smooth, by Lemma <ref>, is continuous the weak topology (the weak topology if finer than the topology induced by ), and thusinduces the weak topology on .Also note that <Ref> can easily be extended toor the space of unlabeled -valued kernelswhenis a bounded convex closed subset of . From Lemma <ref>,we have that is a distance onwhichinduces the same topology as, and from Proposition <ref>, we have that (,), and thus (, ), is separable.To get that this latter space is Polish, we are left to provethat the distanceis complete.Let (W_n )_n∈ be a sequence ofprobability-graphons that is Cauchy for .By definitionof the cut distance , the sequenceof probabilitymeasures (M_W_n)_n∈ isCauchy inforthe complete distance. Thus, thesequence (M_W_n)_n∈ is weakly convergent asinduces the weak topology, which implies that it is tight (see Lemma <ref>). Hence, by definition,the sequence ofprobability-graphons (W_n )_n∈ is tight. 
By Theorem <ref> <ref>, there exists a subsequence (W_n_k )_k∈ that converges for to a limit, say W∈. This subsequence also converges for to W as and induce the same topology. Finally, because the sequence (W_n )_n∈ is Cauchy for and has a subsequence converging to W for , the whole sequence must also converge to W for . Consequently, the distance is complete. The following lemma shows that every probability measure can be represented as a constant probability-graphon.[ seen as a closed subset of ] Let be a distance on such that is (invariant and) smooth on . Then, the map μ↦ W_μ≡μ is an injection from (, ) to (, ) with a closed range and continuous inverse. For any μ∈ consider the constant probability-graphon W_μ≡μ, and notice that M_W_μ=μ, that W_μ(S× T;·)=λ(S)λ(T) μ for all measurable S,T⊂[0,1], and that W_μ^φ=W_μ for any measure-preserving map φ. This readily implies that for μ∈ and W∈: (W_μ, W) = (W_μ, W) =sup_S, T⊂ [0, 1](λ(S)λ(T) μ, W(S× T; ·)) ≥(μ, M_W). In particular, taking W=W_ν for ν∈ we get that (W_μ, W_ν)≥( μ, ν). This implies that the map : μ↦ W_μ≡μ is an injection, and its inverse, given by the map W_μ↦μ, is 1-Lipschitz. Let (μ_n )_n∈ be a sequence in such that the sequence (W_μ_n )_n∈ converges for to a limit, say W. We deduce from (<ref>) that (μ_n)_n∈ converges for to μ=M_W and that for all measurable S, T⊂ [0,1], (λ(S)λ(T) μ_n)_n∈ converges for to W(S× T; ·). This implies that W(S× T; ·) =λ(S)λ(T) μ(·) for all measurable S, T⊂ [0,1], that is, W=W_μ. This implies that the image by of any closed subset of is a closed subset of , and thus the range of is closed. If the distance , in addition to the hypothesis of Lemma <ref>, is sub-homogeneous, that is, for all μ,ν∈ we have (μ, ν)=sup_r∈[0, 1](r μ, rν) (which is the case if is quasi-convex), then we deduce from (<ref>) that the map μ↦ W_μ≡μ is isometric from (, ) to (, ). We now state a characterization of compactness and completeness for the space of probability-graphons. Recall is a Polish space. [Characterization of compactness and completeness for ] Let be a distance on , which induces the weak topology on , and such that is (invariant) smooth and weakly regular on . We have the following properties. * is compact if and only if (, ) is compact, if and only if (, ) is compact. * If (, ) is complete, then (, ) is complete. * Assume furthermore that is sub-homogeneous (see Remark <ref>). If (, ) is complete, then (,) is complete. We prove Point <ref>. From <Ref>, we already know that is compact if and only if is weakly compact, that is, compact for as induces the weak topology on . Now, assume that (, ) is compact. Applying Theorem <ref> <ref>, we get that the space (, ) is also compact. Conversely, assume that (, ) is compact. By Lemma <ref>, the mapping W ↦ M_W is continuous from (, ) to (, ), and as (, ) is compact its image through this mapping is also compact. To conclude, it is enough to check that this mapping is surjective. But this is clear as the image of the constant probability-graphon W_μ≡μ is M_W_μ = μ. Hence, (, ) (and thus (, )) is compact. We prove Point <ref>. Assume that (, ) is complete. Thus, we can choose = in Definition <ref>, and apply Theorem <ref> to get that (, ) is complete. As =, we have ≤≤ 2. Hence, (, ) is also complete. We prove Point <ref>. Assume that (,) is complete. Let (μ_n )_n∈ be a Cauchy sequence of probability measures in (, ). By Remark <ref>, the sequence of constant probability-graphons (W_μ_n )_n∈ is also Cauchy for . As (,) is complete, there exists a probability-graphon W∈ such that (W_μ_n )_n∈ converges to W for the cut distance . Thanks to Lemma <ref>, W is constant, equal to some μ∈, and (μ_n)_n∈ converges to μ for . Hence, (, ) is complete.
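To make the embedding above concrete, here is a short worked computation; it is a sketch written with explicit notation (the underlying Polish space is denoted by $Z$, and the cut norm is the one built from a convergence determining sequence $\mathcal F=(f_n)_{n\in\mathbb N}$, with $d_{\mathcal F}(\mu,\nu)=\sum_{n\ge 0}2^{-n}|\mu(f_n)-\nu(f_n)|$), rather than with the macros used in the rest of the text. For $\mu,\nu\in\mathcal P(Z)$ and the constant probability-graphons $W_\mu\equiv\mu$ and $W_\nu\equiv\nu$, we have $(W_\mu-W_\nu)(S\times T;f_n)=\lambda(S)\lambda(T)\bigl(\mu(f_n)-\nu(f_n)\bigr)$ for all measurable $S,T\subset[0,1]$, so that:
\[
\|W_\mu - W_\nu\|_{\square,\mathcal F}
= \sup_{S,T\subset[0,1]} \lambda(S)\lambda(T)\sum_{n\ge 0} 2^{-n}\bigl|\mu(f_n)-\nu(f_n)\bigr|
= d_{\mathcal F}(\mu,\nu),
\]
the supremum being attained at $S=T=[0,1]$; and since $W_\nu^\varphi=W_\nu$ for every measure-preserving map $\varphi$, the unlabeled cut distance takes the same value. This is the mechanism used in the proof of Point <ref> above: a Cauchy sequence of probability measures is mapped to a Cauchy sequence of constant probability-graphons with the same mutual distances.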
§ SAMPLING FROM PROBABILITY-GRAPHONS Measure-valued graphons allow us to define models for generating random weighted graphs that are more general than the models based on real-valued graphons. We prove that the weighted graphs sampled from probability-graphons are close to their original model for the cut distance , where = (f_k)_k∈ (with f_0 =) is a convergence determining sequence. It would have been more natural to work in Sections <ref> and <ref> with the Kantorovitch-Rubinshtein norm or the Fortet-Mourier norm, which both treat all test functions in a uniform manner. Unfortunately, the supremum in the definition of both of these norms does not behave well regarding the probabilities and expectations of graphs sampled from probability-graphons. We need in our proofs (and in particular that of the First Sampling Lemma <ref> below) to consider simultaneously only a finite number of test functions in order to control the probability of failure for our stochastic bounds.§.§ -Graphs and weighted graphs A graph G=(V,E) is composed of a finite set of vertices V(G)=V, and a set of edges E(G)=E which is a subset of V× V avoiding the diagonal. When its set of edges E(G) is symmetric, we say that G is symmetric or non-oriented. We denote by v(G)=|V(G)| the number of vertices of this graph, and by e(G)=|E(G)| its number of edges.[𝒳-graphs] Let 𝒳 be a non-empty set. A 𝒳-graph is a triplet G=(V,E,Φ) where (V,E) is a graph and Φ: E→𝒳 is a map that associates a decoration x=Φ(e)∈𝒳 to each edge e∈ E. When 𝒳=, we say that G is a weighted graph. Furthermore, the graph G is said to be symmetric if (V,E) is a symmetric graph and if Φ is a symmetric function, that is, for every edge (x,y)∈ E, we have (y,x)∈ E and Φ(x,y) = Φ(y,x). Any labeled -graph G can be naturally represented as an -valued graphon, which we denote by W_G, in the following way. Let G = (V,E,Φ) be a -graph, with v(G)=n∈^*. Denote by V = [n] = {1,…,n} the vertices of G. Consider intervals of length 1/n: for 1≤ i≤ n, let J_i = ( (i-1)/n , i/n]. We then define the -valued graphon stepfunction W_G associated with the -graph G by:∀ (i,j)∈ E,∀ (x,y) ∈ J_i× J_j, W_G(x,y; z) =Φ(i,j)( z) ;and W_G(x,y; z) equals the Dirac mass at ∂ otherwise, where ∂ is an element of used as a cemetery point for missing edges in graphs. In this section, we investigate weighted graphs sampled from probability-graphons. Hence, using the cemetery point argument in the remark above, we only consider complete graphs for the rest of this section. Let be a distance on . If G and H have the same vertex-set, the cut distance between them is defined as the cut distance between their associated graphons:(G,H) = (W_G,W_H) .When G and H do not have the same vertex-sets, as the numbering of the vertices in <Ref> is arbitrary, we must consider the unlabeled cut distance between them, defined as the cut distance between their associated graphons:(G,H) = (W_G,W_H). Remind that when the distance derives from a norm on , Lemma <ref> applies, and the cut distance (G,H) can be rewritten as a combinatorial optimization over whole steps. We will sometimes need to interpret a weighted graph G as a -graph where a weight x on an edge is replaced by δ_x, the Dirac mass located at x. [The real-weighted graph G[f]] For a -graph (resp. weighted graph) G and a function f∈, we denote by G[f] the real-weighted graph with the same vertex set and edge set as G, and where the edge (i,j) has weight Φ_G[f](i,j)=Φ_G(i,j;f) = ∫_ f(z)Φ_G(i,j; z) (resp.
Φ_G[f](i,j)= f( Φ_G(i,j) )), where Φ_G is the decoration of the-graphG.§.§ -random graphs Let W be a probability-graphon, and x=(x_1,…,x_n), n∈^*, be a sequence of points from [0,1]. We define the -graph ℍ(x,W) as the complete graph whose vertex set is [n] = {1,…,n}, and with each edge (i,j) decorated by the probability measureW(x_i,x_j; z).Let H be any -graph. We can define from H a random weighted (directed) graph 𝔾(H) whose vertex set V(H) and edge set E(H) are the same as H, and with each edge (i,j) having a random weight β_i,jdistributed according tothe probability distributiondecorating the edge (i,j) in H, all the weights being independent from each other. For the special case where H = ℍ(x,W), we simply note𝔾(x,W) = 𝔾(ℍ(x,W)).An important special case is when the sequence X is chosen at random: X=(X_i)_1≤ i≤ n where the X_i are independentand uniformly distributed on [0,1]. For this special case, we simply note ℍ(n,W) =ℍ(X,W) and 𝔾(n,W) =𝔾(X,W), that are conditionally on X=x,distributed respectively as ℍ(x,W) and𝔾(x,W). The random graphs ℍ(n,W) and 𝔾(n,W) are called W-random graphs. In the special case where W is a symmetric probability-graphon, the -graph ℍ(x,W) is also symmetric. From a symmetric -graph H, the random weighted graph 𝔾(H) is not necessarily symmetric, but we can define a random symmetric weighted graph𝔾^sym(H) whose vertex set V(H) and E(H) are the same as H, and with independent weights β_i,j = β_j,i on each edge (i,j)=(j,i) distributed according to Φ_H(i,j;·). For H = ℍ(x,W) we simply note 𝔾^sym(x,W) and 𝔾^sym(n,W).For a weighted graph G, and for 1≤ k ≤ v(G), we can define the random weighted graph 𝔾(k,G) as being the sub-graph of G induced by a uniform random subset of k distinctvertices from G. Then, upper bounding by the probability that a uniformly-chosen map [k]→ V(G) is non-injective,we get the following bound on the total variation distance between the graphs obtained from G and its associated graphon W_G:d_var(𝔾(k,G),𝔾(k,W_G)) ≤k21/v(G) ,where d_var is the total variation distance between probability measures. §.§ Estimation of the distance by sampling §.§.§ The first sampling lemma In this subsection, we link sampling from graphons with the cut distance. This result is the equivalent of Lemma 10.6 in <cit.>. The main consequence of the following lemma is thatthe cut distancebetween two probability-graphons can be estimated by sampling. [The random stepfunction W_X] For a measure-valued kernel W (resp. a real-valued kernel w) and a vector X=(X_i)_1≤ i ≤ k composed ofk independent random variables uniformly distributed over [0,1], we denote by W_X=W_ℍ(k,W) (resp. w_X) the random measure-valued (resp. real-valued) stepfunctionwith k steps of size 1/k, and where the step (i,j) has value W(X_i,X_j;·) (resp. w(X_i,X_j)). [First Sampling Lemma] Letbe a convergence determining sequence. Let k∈^*, and U,W∈ be two probability-graphons, and let X be a random vector uniformly distributed over [0,1]^k. Then with probability at least 1-4 k^1/4^-√(k)/10, we have:-2/k^1/4≤U_X - W_X - U-W≤9/k^1/4·An immediate consequence of Lemma <ref> is that the decorated graphs with probability measures on their edges ℍ(k,U) and ℍ(k,W) can be coupled in order that (ℍ(k,U),ℍ(k,W))is close to (U,W) with high probability.To prove the first sampling lemma, we first need to prove the following lemma which states that the cut normcan be approximated by the maximum of the one-sided cut norm using a finite number of function. 
Remind from Remark <ref> the definition ofthe one-sided version of the cut norm .[Approximation bound withand ] Let U,W∈ and let N∈. For every =(_n)_1≤ n ≤ N∈{± 1 }^N, define g_N, = ∑_n=1^N 2^-n_n f_n. Then, we have:U-W - 2^-N≤max_∈{± 1}^N(U-W)[ g_N,]≤U-W . First remark that for n∈, f_n takes values in [0,1], and thus U[f_n]-W[f_n] takes values in [-1,1]. Remind that f_0 =, and thus U[f_0]-W[f_0] ≡ 0. Upper bounding integrals by 1 for indices n>N, we get:U-W≤sup_S,T ⊂ [0,1]∑_n=1^N 2^-n|∫_S× T (U-W)[f_n](x,y)x y | + 2^-N.And adding the non-negative terms for n>N, we get:sup_S,T ⊂ [0,1]∑_n=1^N 2^-n|∫_S× T (U-W)[f_n](x,y)x y |≤U-W.Using the same idea as in (<ref>) and (<ref>), we get:sup_S,T ⊂ [0,1]∑_n=1^N 2^-n|∫_S× T (U-W)[f_n](x,y)x y | = max_∈{± 1}^N(U-W)[ g_N,],which concludes the proof. Remark that for f∈ and W∈, we have (W_X)[f] = (W[f])_X, and we thus write W[f]_X without any ambiguity.Assume that k≥ 2^4 (otherwise the lower bound in the lemma is trivial). Set N=⌈log_2(k^1/4) ⌉, so that 2^-1k^-1/4 < 2^-N≤ k^-1/4. Let ∈{± 1 }^N. Remark that as the f_n take values in [0,1], the real-valued kernels (U-W)[f_n] take values in [-1,1], and thusthe real-valued kernel (U-W)[g_N,] also take values in [-1,1]. Applying Lemma 10.7 in <cit.> to the real-valued kernel (U-W)[g_N,], we get with probability at least 1-2^-√(k)/10 that:- 3/k≤(U-W)[g_N,]_X - (U-W)[g_N,] ≤8/k^1/4 ,where remind that · is the one-sided version of the cut norm for real-valued kernels defined in (<ref>). Hence, with probability at least 1-2^N+1^-√(k)/10≥ 1 - 4k^1/4^-√(k)/10, we have that the bounds in (<ref>) holds for every ∈{± 1 }^N simultaneously; and when all of this holds, applying Lemma <ref> to U, W and to U_X, W_X, we get:U_X-W_X ≤max_∈{± 1}^N(U-W)[g_N,]_X + 2^-N≤max_∈{± 1}^N(U-W)[g_N,] + 9/k^1/4≤U-W + 9/k^1/4,and similarly:U-W ≤max_∈{± 1}^N(U-W)[g_N,] + 2^-N≤max_∈{± 1}^N(U-W)[g_N,]_X + 1/k^1/4 + 3/k≤U_X-W_X + 2/k^1/4·This concludes the proof.§.§.§ Approximation with random weighted graphs As a consequence of the First Sampling Lemma <ref>, we get that the cut distance between the sampled graphs ℍ(k,U) and ℍ(k,W) (with the proper coupling) is close to the cut distance between the probability-graphons U and W. The following lemma states that if k is large enough, then 𝔾(k,W) is close to ℍ(k,W) in the cut distance , and thus the cut distance between the random weighted graphs 𝔾(k,U) and 𝔾(k,W) is also close to (U,W).Recall from Section <ref> the definition of the random weighted graph 𝔾(H) when H is a -graph. Following Remarks <ref>and <ref>, we shall see the weighted graph 𝔾(H) as a -graph or even as a probability-graphon.[Bound in probability for (𝔾(H),H)] For every -graph H with kvertices, and for every ≥ 10/√(k), we have:((𝔾(H),H) > 2) ≤ e^-^2k^2. Remind that (𝔾(H), H) ≤ 1. Applying Lemma <ref> with = 10 / √(k), we get the followingbound on the expectation of (𝔾(H),H):[(𝔾(H),H)] ≤20/√(k) + ^-100 k < 21/√(k)·Let H andbe as in the lemma. Assume that ≤ 1/2 (otherwise the probability to bound in the lemma is null). To simplify the notations, denote by G = 𝔾(H) through this proof. Define N=⌈log_2(^-1) ⌉, so that ∑_n = N+1^∞ 2^-n≤. Upper bounding by 1 the terms for n > N in (<ref>), we get for U,W∈: (U,W) ≤∑_n=1^N 2^-nU[f_n] - W[f_n] + ,where remind thatis the cut norm for real-valued kernels defined in (<ref>). 
Using this equation with the graphs G and H, we get:((G,H) > 2 )≤(∑_n=1^N 2^-n(G[f_n], H[f_n])> ) ≤∑_n=1^N ( (G[f_n], H[f_n])> ) ,wheredenotes the cut distance associated to the cut norm for real-valued graphons and kernels. Remark that for every n∈, H[f_n] and G[f_n] are real-weighted graphs with weights in [0,1]. Thus, by a straightforward adaptation of the proof of <cit.>, we get:∀ n ∈ [ N ],( d_□(G[f_n], H[f_n])>) ≤ 2 · 4^k ^-2 ^2 k^2 .Combining (<ref>) and (<ref>), we get for > 10 / √(k):((G,H) > 2 ) ≤ 2 N4^k e^-2 ^2 k^2≤^-^2 k^2 ,where the last bound derives from simple calculus. This concludes the proof. We can apply the First Sampling Lemma <ref> along with Lemma <ref> to get the following lemma, equivalent of the first sampling lemma forthe random weighted graph 𝔾(k,W):[First Sampling Lemma for (k,W)] Let U,W∈ be two probability-graphons, and k ∈^*. Then, we can couple the random weighted graphs 𝔾(k,U) and 𝔾(k,W) such that with probability at least 1 - (4k^1/4+1)^-√(k)/10, we have:| (𝔾(k,U), 𝔾(k,W)) - (U,W) | ≤13/k^1/4· Assume that k ≥ 13^4 (otherwise the bound in the corollary is trivial). Then, we have with probability at least 1 - 4k^1/4^-√(k)/10 - 2^-100 k > 1 - (4k^1/4+1)^-√(k)/10:| (𝔾(k,U), 𝔾(k,W)) - (U,W) | ≤| (𝔾(k,U), 𝔾(k,W)) - (ℍ(k,U), ℍ(k,W)) | + | (ℍ(k,U), ℍ(k,W)) - (U,W) |≤(𝔾(k,U), ℍ(k,U)) + (𝔾(k,W), ℍ(k,W)) + 9/k^1/4 ≤40/√(k) + 9/k^1/4 ≤13/k^1/4 ,where we usedthe upper bound from the First Sampling Lemma <ref>(which gives the coupling with the same random vector X to define both graphs U_X = (k,U) and W_X = (k,W)) for the second inequality, the upper bound from Lemma <ref> with = 10 / √(k) with both U and W for the third inequality, and that 1/√(k)≤1/13 k^1/4 for the last inequality.§.§ The distance between a probability-graphon and its sample In this section, we presentthe Second Sampling Lemma, that shows that a sampled -graph is close toits original probability-graphon with high probability. Note that we use the unlabeled cut distancerather thanas the sample pointsare unordered. The bound on the distance is much weaker than the one in the First Sampling Lemma <ref>,but nevertheless goes to 0 as the sample size increases.The proof is a straightforward adaptation of the proof of <cit.> (replacing the weak regularity lemma and the first sampling lemma by their counterparts for probability-graphons, that is Lemmas <ref> and <ref>; the sample concentration theorem for real-valued graphons can easily be adapted to probability-graphons).[Second Sampling Lemma] Letbe a convergence determining sequence. Let W∈ be a probability-graphon and k∈^*. Then, with probability at least 1 - exp( - k / (2ln(k) ) ) we have:( ℍ(k,W), W) ≤21/√(ln(k)) and( 𝔾(k,W), W) ≤22/√(ln(k))· In the above lemma, the asymmetric random graph 𝔾(k,W)can be replaced by the symmetric random graph 𝔾^sym(k,W) without changing the proof. Similarly, the results in Section <ref> can be reformulated with symmetric random graphs 𝔾^sym(k,W) and 𝔾^sym(H) (but with a slight modification of the proof for Lemma <ref> to symmetrize the random variable X_i,j and with the upper bound e^-^2k^2/2, see also <cit.>).As an immediate consequence of <Ref> and of the Borel-Cantelli lemma, we get the convergence of the sampled subgraphs for the cut distance .[Convergence of sampled subgraphs] Letbe a convergence determining sequence. Let W∈ be a probability-graphon. 
Then, the sequence of sampled subgraphs ((k,W))_k∈^* converges to W for the cut distance , and thus for any cut distancefrom <Ref>.§ THE COUNTING LEMMAS AND THE TOPOLOGY OF PROBABILITY-GRAPHONS In this section, we introduce the homomorphism densities for probability-graphons, and then we link those to the cut distancethrough the Counting Lemma and the Inverse Counting Lemma. Those results are analogous to the case of real-valued graphons, see<cit.> for the definition of homomorphism densities and <cit.> for the Counting Lemma and Inverse Counting Lemma. The main differences with <cit.> are: thedecoration ofthe edges of the graphs with functions from ; the Counting Lemma for the decorations belonging only in the convergence determining sequence ; the more technical proof of the Inverse Counting Lemma.Note that we need to work withhere as the proof of the Inverse Counting Lemma relies on the second sampling Lemma <ref>. §.§ The homomorphism densities In the case of non-weighted graphs, the homomorphism densities t(F,G) allow to characterize a graph (up to twin-vertices expansion), and also allow to define a topology for real-valued graphons. In the case of weighted graphs and probability-graphons,we need to replace the absence/presence of edges (which is 0-1 valued) by test functions fromdecorating each edge.In this section,we often need to fix the underlying (directed) graph structure F = (V,E) (which may be incomplete) of a -graph and to vary only the -decorating functions g=(g_e)_e∈ E, thus we will write F^g=(V,E,g) for a -graph. Moreover, when there exists a convergence determining sequencesuch that g_e∈ for every edge e∈ E, we say that F^g is a -graph and use the same notation conventions. [Homomorphism density] We define the homomorphism density of a -graph F^g in a signed measure-valued kernel W∈ as:t(F^g,W) = M_W^F(g)=∫_[0,1]^V(F)∏_(i,j)∈ E(F) W(x_i,x_j; g_i,j) ∏_i∈ V(F) x_i .Moreover, M_W^F defines a measure on ^E (which we still denote by M_W^F) which is characterized by M_W^F(⊗_e∈ E g_e)=M_W^F(g) for g = (g_e)_e∈ E. Let φ : [0,1] → [0,1] be a measure-preserving map. As φ^⊗ k : (x_1, …, x_k) ↦ (φ(x_1), …, φ(x_k)) is a measure-preserving map on [0,1]^k,applying the transfer formula (see (<ref>)), we get that for every -graph F^g and every signed measure-valued kernel W∈, we have t(F^g, W^φ) = t(F^g, W). Thus t(F^g, ·) can be extending to . When W∈ is a measure-valued kernel, and F is the graph with two vertices and one edge,we get that M_W^F = M_W the measure defined in (<ref>). When we work with probability-graphons, we can always assume the graph F to be complete, by adding the missing edges (i,j) and decorating them with the constant function g_(i,j) =. For a finite weighted graph G, we define the homomorphism density of the -graph F^g in G as t(F^g,G) = t(F^g,W_G) (remind from Remark <ref> the definition of W_G), that is:t(F^g, G) = 1/v(G)^k∑_(x_1,⋯, x_k) ∈ V(G)^k ∏_(i,j)∈ E(F) g_(i,j)( Φ_G(x_i,x_j) ),where k=v(F) and Φ_G(x_i,x_j) is the weight of the directed edge from x_i to x_j. §.§ The Counting Lemma The following lemma links the homomorphism densities with the cut distancefor some convergence determining sequence = (f_n)_n∈ (with f_0 = and f_ntakes valuesin[0,1]). This lemma is a generalization to probability-graphons of the Counting Lemma for real-valued graphons (see Lemmas 10.22 and 10.23 from <cit.>). Recall that by <Ref>, t(F^g,·) is defined on .[Counting Lemma] Let = (f_n)_n∈ be a convergence determining sequence (with f_0= and f_ntakes valuesin[0,1]). 
Let F^g be a -graph, and for every edge e∈ E(F), let n_e∈ be such that g_e = f_n_e. Then, for all probability-graphons W, W' ∈, we have:|t(F^g,W) - t(F^g,W')| ≤( ∑_e∈ E(F) 2^n_e) (W,W').The Lipschitz constant given by the lemma is too large to be useful in practical cases. Nevertheless, the homomorphism density function W ↦ t(F^g,W) is Lipschitz on the space of unlabeled probability-graphons equipped with the cut distance . For this proof, we will apply Lemma 10.24 from <cit.>, which applies to graphs F whose edges are decorated with (possibly different) real-valued graphons w = (w_e : e∈ E(F)), and the associated homomorphism density is defined as t(F,w) = ∫_[0,1]^V(F)∏_(i,j)∈ E(F) w_e(x_i,x_j) ∏_i∈ V(F) x_i. Remind from (<ref>) that for a probability-graphon W∈ and a function f∈ (which is [0,1]-valued by our definition of convergence determining sequences), we have that W[f] is a real-valued graphon. Define the collections of real-valued graphons w = (W[g_e] : e∈ E(F) ) and w' = (W'[g_e] : e∈ E(F) ). Notice from (<ref>) and (<ref>) that we have t(F,w) = t(F^g,W) and t(F,w') = t(F^g,W'). Applying <cit.> to the graph F and edge-decorations w and w', we get:| t(F^g,W) - t(F^g,W') | = | t(F,w) - t(F,w') |≤∑_e∈ E(F)W[g_e] - W'[g_e],where the norm in the upper bound is the cut norm for real-valued graphons (see (<ref>) for the definition of this object). For e∈ E(F), by definition of the cut distance and using (<ref>), we have:W[g_e] - W'[g_e] ≤ 2^n_e (W, W').Hence, combining all those upper bounds, we get the bound in the lemma but with instead of . Since t(F^g,·) is invariant under relabeling by <Ref>, taking the infimum over all relabelings allows us to replace by and to get the bound in the lemma. We have just seen that homomorphism densities defined using only functions from are Lipschitz. We are going to see that the other homomorphism densities are nevertheless continuous.[Weak Counting Lemma] Let be a convergence determining sequence (with f_0 =). Let (W_n )_n∈ and W be probability-graphons such that lim_n→∞t(F^g,W_n) = t(F^g,W) for all -graph F^g (which is in particular the case if lim_n→∞(W_n,W) = 0 by the Counting Lemma <ref>). Then, for every -graph F^g we have:t(F^g,W_n) n→∞⟶ t(F^g,W).Let F=(V,E) be some fixed (directed) graph. By assumption, we have for all edge-decorations g=(g_e)_e∈ E in that lim_n→∞ M_W_n^F(⊗_e∈ E g_e) = M_W^F(⊗_e∈ E g_e) (see Definition <ref>). By <cit.>, ^⊗ E is a (countable) convergence determining family on ^E. Thus, the sequence of measures (M_W_n^F)_n∈ converges to M_W^F for the weak topology on ^E. In particular, for every edge-decoration function g = (g_e)_e∈ E (here for every e∈ E, g_e∈ is arbitrary) we have M_W_n^F(⊗_e∈ E g_e) = t(F^g,W_n) → t(F^g,W) = M_W^F(⊗_e∈ E g_e) as n→∞. This being true for all choices of the graph F, this concludes the proof.§.§ The Inverse Counting Lemma The goal of this subsection is to establish a converse to the Counting Lemma: if two probability-graphons are close in terms of homomorphism densities, then they are close for the cut distance .[Inverse Counting Lemma] Let = (f_n)_n∈ be a convergence determining sequence (with f_0 = and f_n takes values in [0,1]). Let U,W ∈ be two probability-graphons, and let k, n_0∈^*. Assume that we have | t(F^g,U) - t(F^g,W) | ≤ 2^-k - n_0 k^2 for every (complete) -graph F^g with k vertices and such that the edge-decoration functions g=(g_e)_e∈ E(F) are products (without repetition) of the functions (f_n)_1≤ n≤ n_0 and (1-f_n)_1≤ n≤ n_0. Then, we have:(U,W) ≤44/√(log(k)) + 2^-n_0.
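Although it plays no role in the proofs, let us point out that the homomorphism densities compared in the Counting and Inverse Counting Lemmas are directly computable for finite weighted graphs from (<ref>). The following sketch (in Python with NumPy; the function and variable names are ours and purely illustrative) evaluates t(F^g,G) by enumerating all maps [k]→ V(G); since this enumeration has cost v(G)^v(F), for large graphs one would rather average over uniformly sampled maps, which amounts to sampling induced subgraphs as in Section <ref>.
\begin{verbatim}
import itertools
import numpy as np

def hom_density(F_edges, g, weights):
    """t(F^g, G): average over all maps [k] -> V(G) of the product of the
    edge decorations g_e applied to the corresponding edge weights of G."""
    n = weights.shape[0]
    k = 1 + max(max(e) for e in F_edges)        # vertices of F are 0, ..., k-1
    total = 0.0
    for xs in itertools.product(range(n), repeat=k):  # all n^k maps [k] -> V(G)
        prod = 1.0
        for i, j in F_edges:
            prod *= g[(i, j)](weights[xs[i], xs[j]])
        total += prod
    return total / n ** k

# Toy instance: F a directed triangle, G with i.i.d. Uniform(0,1) weights,
# every edge of F decorated with the test function f(z) = z.
rng = np.random.default_rng(0)
W_G = rng.random((30, 30))
F = [(0, 1), (1, 2), (2, 0)]
g = {e: (lambda z: z) for e in F}
print(hom_density(F, g, W_G))  # close to (1/2)^3 = 0.125 for large graphs
\end{verbatim}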
To prove Lemma <ref>, we first need to prove the special case where the spaceis finite. [Inverse Counting Lemma, case with finite space ] Assume that the spaceis finite with cardinality n_1, for simplicity say = [n_1]. Define the indicator functions f_n : z↦_{z = n} for n∈ [n_1], in particular = (f_n)_1≤ n ≤ n_1 is a finite convergence determining sequence. Let U,W ∈ be two probability-graphons, and let k∈^*. Assume that we have | t(F^g,U) - t(F^g,W) | < 2^-k - log_2(n_1) k^2 for every (complete) -graph F^g with kvertices.Then, for any (possibly finite) convergence determining sequence , we have:(U,W) ≤44/√(log(k))·Abusing notations, we can identify a weight-value n∈ with its indicator functions f_n, and doing this identification for edge-decoration functions, we can identify a -graph F^g with its corresponding weighted graph. In particular, doing so we get t(F^g,W) = ( (k,W) = F^g) for every -graph F^g with k vertices. The proof of <Ref> is then a straightforward adaptation of the proof of <cit.>. As the functions (f_n)_n∈ take value in [0,1],for all φ measure-preserving map, for all S,T ⊂ [0,1] measurable sets and for all n∈, we have:| U(S× T;f_n) - W^φ(S× T;f_n) |≤ 1 .Using this bound, we get the following bound (remind that f_0 =):(U,W) ≤inf_φ∈sup_S,T ⊂ [0,1]∑_n=1^n_0 2^-n| U(S× T;f_n) - W^φ(S× T;f_n) | + 2^-n_0 . Hence, for a point z∈, the upper bound in (<ref>) uses only the information given by (f_n(z))_n∈ [n_0]. In order to discretize the space [0,1]^n_0, we replacea point p = (p_1,…, p_n_0) ∈ [0,1]^n_0 by a random point (Y_1,…,Y_n_0) ∈{0,1}^n_0 where the Y_i are independent random variables with Bernoulli distribution of parameter p_i. This leads us to replace a -valued kernel W by the ℳ_1({0,1}^n_0)-valued kernel W̃ defined for all (x,y) ∈ [0,1]^2, and for all s =(s_1, …, s_n_0)∈{0,1}^n_0 as:W̃(x,y;{s}) = W(x,y; f^s) wheref^s = ∏_n=1^n_0 f_n^s_n (1- f_n)^1-s_n . Fix some enumeration (s^m)_m ∈ [2^n_0] of the points in {0,1}^n_0, and define the indicator functions h̃_m : s ↦_{s=s^m} for m∈ [2^n_0], in particular = (h̃_m)_1≤ m ≤ 2^n_0 is a finite convergence determining sequence on {0,1}^n_0. Let F^g̃ be a -graph with vertex set V(F)=[k], and for every edge e∈ E(F), let m_e∈ [2^n_0] be such that g̃_e = h̃_m_e. Define the edge-decoration functions g=(g_e)_e∈ E(F) for every edge e∈ E(F) as g_e = f^s^m_e, then we get:t(F^g̃,W̃)= ∫_[0,1]^k∏_(i, j)∈ E(F)W̃(x_i,x_j; { s^m_e}) ∏_i=1^kx_i = t(F^g,W) .Thus, the {0,1}^n_0-valued graphons Ũ and W̃ inherit the bounds on the homomorphism densities: for every -graph F^g̃,we have | t(F^g̃, Ũ) - t(F^g̃, W̃) |≤ 2^-k - n_0 k^2.Define for all n∈ [n_0] the function f̃_n : s ↦_{s_n = 1 }, and letbe the concatenation of (f̃_n)_n∈ [n_0] and , in particularis a finite convergence determining sequence on {0,1}^n_0. Finally, as (Ũ,W̃) upper bounds the first term in the upper bound of (<ref>), applying Lemma <ref> with the finite space = {0,1}^n_0 and n_1 = 2^n_0, the finite convergence determining sequencesand , and the {0,1}^n_0-valued graphons Ũ and W̃,we get:(U,W) ≤44/√(ln(k))+ 2^-n_0 ,which concludes the proof. §.§ Subgraph sampling and the topology of probability-graphons Thanks to the Weak Counting Lemma <ref> and the Inverse Counting Lemma <ref>, we can formulate a new informative characterization ofweak isomorphism, equality in the space of unlabeled probability-graphons . Let U,W∈ be two probability-graphons. 
The following properties are equivalent: * (U,W) =0 for some (and hence for every) choiceof the distanceonsuch that the cut distanceonis (invariant) smooth. *There exist φ, ψ∈ such that U^φ = W^ψ almost everywhere on [0,1]^2. *t(F^g,U)=t(F^g,W)for all -graph F^g. *t(F^g,U)=t(F^g,W)for all -graph F^g. The equivalence between Properties <ref>and <ref> is a consequence of Proposition <ref> on the cut distance. <Ref> gives that Property <ref> implies Property <ref>. It is clear that Property <ref> implies Property <ref>. The Inverse Counting Lemma <ref> with the Weak Counting Lemma <ref> give that Property <ref> implies Property <ref> (with =). Hence, we have the desired equivalence. Thanks to the Weak Counting Lemma <ref> and the Inverse Counting Lemma <ref>, we get the following characterization of the topology induced by the cut distanceon the space of unlabeled probability-graphonsin terms of homomorphism densities[Characterization of the topology induced by ] Let (W_n )_n∈ and W be unlabeled probability-graphons from . The following properties are equivalent: * lim_n→∞(W_n,W) = 0 for some (and hence for every) choice of the distanceonsuch thatinduces the weak topology onand the cut distanceonis (invariant) smooth, weakly regular andregular the stepping operator. * lim_n→∞ t(F^g,W_n) = t(F^g,W) for all -graph F^g. * lim_n→∞ t(F^g,W_n) = t(F^g,W) for all -graph F^g. * For all k≥ 2, the sequence of sampled subgraphs ((k,W_n))_n∈ converges in distribution to (k,W).In particular, the topology induced by the cut distanceon the space of unlabeled probability-graphons coincides with the topology generated by the homomorphism densities functions W↦ t(F^g,W) for all -graph F^g. By Theorem <ref>,convergence for is equivalenttoconvergencefor foreverychoiceofthe distanceon such that induces the weak topology onand thecut distanceonis (invariant)smooth,weakly regularandregularthestepping operator. Taking =,the WeakCounting Lemma <ref>givesthat Property <ref>implies Property <ref>.Itis clearthat Property <ref>implies Property <ref>. The InverseCounting Lemma <ref> with theWeak Counting Lemma <ref> givethat Property <ref>implies Property <ref> (with =).Notice that when F is the completegraph with k vertices, M_W^F is the joint measure of all the edge-weights of the random graph (k,W), andthus characterizesthedistributionrandom graph(k,W). Thus(remindDefinition <ref>), Property <ref>and Property <ref> are equivalent. Hence, we have the desired equivalence. [Do the distancesall induce the same topology?] Even though every distancegenerates the same topology on the space of unlabeled probability-graphons, it is an open question whether or not this is also the case that every distanceinduces the same topology on the space of labeled probability-graphons . The following proposition states that to prove existence of a limit unlabeled probability-graphon it is enough to prove that there exists a convergence determining sequencesuch that for every -graph F^g the homomorphism densities t(F^g,·) converge. Letbe a distance onsuch thatinduces the weak topology onand the cut distanceonis (invariant) smooth, weakly regular andregular the stepping operator.Let (W_n)_n∈ be sequence of unlabeled probability-graphons inthat is tight. Letbe a convergence determining sequence such that for every -graph F^g the sequence (t(F^g,W_n))_n∈ converges. 
Then, there exists an unlabeled probability-graphon W∈ such that the sequence (W_n)_n∈ converges to W for .Since the sequence (W_n)_n∈ is tight, by <Ref>, there exists a subsequence (W_n_k)_k∈ ofthe sequence (W_n)_n∈ that converges to some W for . By Theorem <ref>, we have for every -graph F^g that lim_k→∞ t(F^g,W_n_k) = t(F^g,W); and as we already know that the sequence (t(F^g,W_n))_n∈ converges, we have that lim_n→∞ t(F^g,W_n) = t(F^g, W). Hence, by <Ref>, we get that the sequence (W_n)_n∈ converges to W for .For the special case = {0,1}, which is compact, we find back that convergence for real-valued graphons is characterized by the convergence of the homomorphism densities. Notice the tightness condition of <Ref> is automatically satisfied asis compact. § PROOFS OF THEOREM <REF> AND THEOREM <REF> We start by proving a lemma that allows to constructa convergent subsequence and its limit kernel for a tight sequence of measure-valued kernels. This lemma is useful for the proofs of bothTheorem <ref> and Theorem <ref>. For the proof of Theorem <ref>, we will also need the convergence to hold simultaneouslyfor two distancesand . Remind from <Ref> the definition of the stepfunction W_ for a signed measure-valued kernel W and a finite partitionof [0,1]. For a finite partitionof [0,1], define its diameter as the smallest diameter of its sets, () = min_S∈(S) = min_S∈sup_x,y∈ S| x-y |.[Convergence using given approximation partitions] Letd bean invariantsmoothdistanceon(resp.or ). Let (W_n )_n∈ be a sequence in (resp.or ) which is tight (resp. uniformly bounded and tight).Further assume that we are given, for every n,k∈, partitions 𝒫_n,k of [0,1], such that these partitions andthe corresponding stepfunctions W_n,k = (W_n)_𝒫_n,k satisfy the following conditions: * the partition 𝒫_n,k+1 is a refinement of 𝒫_n,k,* (_n,k) ≤ 2^-k and |𝒫_n,k| = m_k depends only on k (and not on n),* d(W_n, W_n,k)≤ 1/(k+1).Then, there exists a subsequence (W_n_ℓ )_ℓ∈ of the sequence (W_n )_n∈ and a measure-valued kernelW∈ (resp. W∈ or W∈) such that (W_n_ℓ )_ℓ∈ converges to W for . Moreover, assume thatis anotherinvariantsmooth distanceon(resp.or ) such thatfor every n∈ and k∈, W_n,k also satisfies: * (W_n, W_n,k)≤ 1/(k+1).Then, there exists a subsequence (W_n_ℓ )_ℓ∈ of the sequence (W_n )_n∈ and a measure-valued kernelW∈ (resp. W∈ or W∈) such that (W_n_ℓ )_ℓ∈ converges to the same measure-valued kernel Wsimultaneously both forand for , the cut distance associated with . We adapt here the general scheme from the proof of Theorem 9.23 in <cit.>, but the argument for the convergence of the U_k, defined below,takes intoaccount thatmeasure-valued kernels are infinite-dimensional valued. We set (remind from (<ref>) the definition of ·):C=sup_n∈W_n<+∞ .The proof is divided into four steps.Step 1: Without loss of generality, the partitions 𝒫_n,k are made of intervals.For every n∈, we can rearrange the points of [0,1] by a measure-preserving mapso that the partitions 𝒫_n,k are made of intervals, and we replace W_n by its rearranged version. An argument similar to the next lemma is used in the proof in <cit.> without any reference. So, we provide a proof and stress that diameters of the partitions shrinking to zero is an important assumption (see <Ref> below).[Kernel rearrangement with interval partitions] Let (_k)_k∈ be a refining sequence of finite partitions of [0,1] whose diameter converges to zero. 
Then, there exist a measure-preserving map φ∈ and a refining sequence of partitions made of intervals(_k)_k∈such thatfor all k∈, and all set S∈_k there exists a set R∈_k such that _R = _φ^-1(S). In particular, if W is a signed measure-valued kernel, then for U=W^φ, we have that U__k = (W^φ)__k = (W__k)^φ for all k∈.Notice that, according to <Ref>, the sequenceof refining partition (_k)_k∈, with a partition diameter converging to 0, separates points and thus generates the Borel σ-field of [0,1]. Consider the infinite Ulam-Harris tree ^∞ = { u_1 ⋯ u_k : k∈, u_1, …, u_k ∈^* },where for k=0 the empty word u= is called the root node of the tree; for a node u = u_1 ⋯ u_k ∈^∞ , we define its height as h(u)=k, and if k>0 we define its parent node as p(u) = u_1 ⋯ u_k-1 and we say that u is a child node of p(u). We order vertices on the tree ^∞ with the lexicographical (total) order . As a first step, we construct a subtree ⊂^∞ that indexes the sets in the partitions (_k)_k∈, such that for every k∈, _k = { S_u : u∈, h(u)=k }, and such that if S_v ⊂ S_u with S_v∈_k and S_u∈_k-1, then p(v) = u.Without loss of generality,we may assume that _0 = { [0,1] }, and we label its only set by the empty word , and we set S_ = [0,1]. Then, suppose we have already labeled the sets from _0, …, _k, and we proceed to label the sets from _k+1. Because the partition _k+1 is a refinement of _k, we can group the sets of _k+1 by their unique parent set from _k, for every S_u∈_k, let _u = { S∈_k+1 : S ⊂ S_u }, then S_u = ∪_S∈_u S. For S_u∈_k, we fix an arbitrary enumeration of _u = { S^1, …, S^ℓ} with ℓ = |_u |, then label the set S^j by uj, and set S^j = S_u j; remark that the parent node of w=u j is p(w)=u, and the height of node w is h(w) = h(u) + 1 = k+1. Hence, we have labeled every set from _k+1. To finish the construction, we set = { u : ∃ k∈, ∃ S∈_k, Shas labelu }. We now proceed to construct a measure-preserving map ψ such that the image of every set S_u is equal to an interval, and such that those intervals are ordered to the order of their labels in .Define the map σ : [0,1] →^ by σ(x) = (u^k(x) )_k∈∈^where u^k(x) is the only node ofwith height k such that x∈ S_u^k(x) (and thus u^k+1(x) is a child node of u^k(x)). Remark that if u^k_0(x)u^k_0(y) for some k_0∈, then u^k(x)u^k(y) for every k≥ k_0. We extend naturally the total orderfromto a the total order on ^: for (u^k)_k∈, (v^k)_k∈∈^, (u^k)_k∈ (v^k)_k∈ if u^k_0 v^k_0 where k_0 is the smallest k such that u^k ≠ v^k.For every u∈, define:A^-(u) = ⋃_vu : h(v)=h(u) S_v andA^+(u) = A^-(u) ∪ S_u ,and then define C^-(u) = λ(A^-(u)) and C^+(u) = λ(A^+(u)). Now, define ψ as, for x∈ [0,1]:ψ(x) = λ(A^-(x))whereA^-(x) = { y∈ [0,1] : σ(y) σ(x) } = ∪_k∈ A^-(u^k(x)) .Moreover, as the sequence of partitions (_k)_k∈ has a diameter that converges to zero, and thus separates points, the map σ is injective. Thus, we also have:ψ(x) = λ(A^+(x))whereA^+(x) = { y∈ [0,1] : σ(y) σ(x) } = A^-(x) ∪{ x } .Remark that both A^-(x) and A^+(x) are Borel measurable.Remark that for every k∈, we have A^-(u^k(x)) ⊂ A^-(x) ⊂ A^+(x) ⊂ A^+(u^k(x)). In particular, for every u∈, we have ψ(S_u) ⊂ [ C^-(u), C^+(u) ]; however ψ(S_u) is not necessarily an interval, but we shall see that λ(ψ(S_u)) = C^+(u) - C^-(u), ψ(S_u) is equal to [ C^-(u), C^+(u) ]. 
Remark that, as the sequence of partitions (_k)_k∈ is refining,we get that [C^-(u), C^+(u)] = ∪_v : p(v)=u [C^-(v), C^+(v)] for every u∈∖{}.As the diameter of the partitions (_k)_k∈ converges to zero, we have the following alternative formula for ψ:ψ(x) = lim_k →∞ C^-(u^k(x))= lim_k →∞ C^+(u^k(x)) .For every k∈, the map x↦ C^-(u^k(x)) is a simple function (constant on each S∈_k and takes finitely-many values), and thus ψ is measurable as a limit of measurable maps.We outline the rest of the proof. We first prove that ψ is measure-preserving. Secondly, we prove that ψ is bijective and construct its inverse map φ. Thirdly, we prove that (φ^-1(_k))_k∈ is a refining sequence of partitions. And lastly, we approximate almost everywhere the sequence of partitions (φ^-1(_k))_k∈ by a sequence of refining partitions composed of intervals.We now prove that ψ is measure preserving. Remark that ψ(x) is a non-decreasing function of σ(x) forthe total relation order ,ψ(y)≤ψ(x) if and only if σ(y) σ(x). Hence, ψ^-1([0,ψ(x)]) = { y∈ [0,1] : σ(y) σ(x) },and we have:λ( ψ^-1([0,ψ(x)]) ) = λ( { y∈ [0,1] : σ(y) σ(x) } ) = ψ(x) .Thus, to show that ψ is measure preserving we just need to show that ψ([0,1]) is dense in [0,1]. For every u∈, as ψ(S_u) ⊂ [ C^-(u), C^+(u) ], we know that the interval [ C^-(u), C^+(u) ] contains at least one point of the form ψ(x). Remark that for all k∈, we have [0,1] = ∪_u∈ : h(u) = k [ C^-(u), C^+(u) ]. Hence, as λ( [ C^-(u), C^+(u) ] ) = λ(S_u) ≤(_h(u)) for every u∈, and as the diameter of the partitions (_k)_k∈ converges to zero, we know that each interval of positive length contains a point of the form ψ(x) for some x∈ [0,1], which implies that ψ([0,1]) is indeed dense in [0,1].We now prove that ψ is bijective and construct its inverse map φ. Without loss of generality, assume that there is no set S_u with null measure. Consider two distinct elements x,y∈ [0,1] such that σ(x) σ(y). Assume that ψ(x) = ψ(y), and let N∈ be the last index k such that u^k(x) = u^k(y). Then, for every k>N, we have u^k(x)u^k(y), which implies thatψ(x) ≤ C^+(u^k(x)) ≤ C^-(u^k(y)) ≤ψ(y); and thus ψ(x) = ψ(y)= C^+(u^k(x)) = C^-(u^k(y)), which in turn implies that there is no node of between u^k(x) and u^k(y). Remark that this situation is analogous to the terminating decimal versus repeating decimal situation. Hence, we proved that there is no node between u^N+1(x) and u^N+1(y) and that for every k>N, u^k+1(x) is the right-most child of u^k(x), and u^k+1(y) is the left-most child of u^k(y) (u^k+1(x) = u^k(x) |_u^k(x)| and u^k+1(y) = u^k(y) 1). Remind that the map σ is injective. Putting all of this together, we get that the set { (x,y)∈ [0,1] : ψ(x) = ψ(y),x<y } can be indexed by the nodes of , and is thus at most countable. Hence, the map ψ is injective on a subset D⊂ [0,1] with measure one (indeed [0,1]∖ D is at most countable), and as ψ is measure preserving, we get that ψ(D) has measure one, and thus ψ is bijective from D to ψ(D), that is, ψ is bijective. We construct the map φ as the inverse map of ψ for x∈ψ(D) and φ(x)=0 for x∈ [0,1]∖ψ(D). Without loss of generality, we assume that 0∉D. Thus, φ is the inverse map of ψ, that is, φ∘ψ(x) = ψ∘φ(x) = x for almost every x∈ [0,1]. We are left to prove that φ is measurable and measure preserving. As we saw that each point z∈[0,1] as a pre-image ψ^-1(z) = { x∈[0,1] : ψ(x) =z } at most countable (indeed of cardinal at most 2), thus <cit.> insures that ψ is bimeasurable (ψ is (Borel) measurable and for all Borel set B⊂ [0,1], ψ(B) is also a Borel set). 
Let B⊂ ]0,1] be a Borel set. We have that φ^-1(B) = φ^-1(B∩ D) = ψ(B∩ D) is a Borel set, where the first equality uses that φ([0,1]) = D∪{0}, the second equality uses that ψ is the inverse of φ on D, and lastly we used that ψ is bimeasurable. We also have that φ^-1(B∪{0}) = φ^-1(B) ∪ ([0,1]∖ψ(D)) isa Borel set. Moreover, we have: λ(φ^-1(B)) = λ(ψ(B∩ D)) = λ( ψ^-1( ψ(B∩ D))) = λ(B∩ D) = λ(B) , where we used that φ^-1(B) = ψ(B∩ D) for the first equality, that ψ is measure preserving for the second equality, that ψ is bijective from D to ψ(D) for the third equality, and that D has measure one for the last equality. We also have: λ(φ^-1(B∪{0})) = λ( φ^-1(B) ) + λ( [0,1]∖ψ(D) ) = λ(B) = λ( B ∪{0}) , where we used that φ^-1(B)⊂ψ(D) and [0,1]∖ψ(D) are disjoint sets for the first equality, that λ(φ^-1(B)) = λ(B) and that ψ(D) has measure one for the second equality. Hence, the map φ is measurable and measure preserving.We now prove that (φ^-1(_k))_k∈ is a refining sequence of partitions. For k∈, as _k is a finite partition of [0,1], we have that φ^-1(_k) = {φ^-1(S_u) : u∈,h(u)=k } is also a finite partition of [0,1]. Moreover, as (_k)_k∈ is a refining sequence ofpartitions, we getthat the sequence of partitions (φ^-1(_k))_k∈ is also refining. Remark that the sets φ^-1(S_u) are not necessarily intervals, they are intervals minus some at most countable sets (this is similar to the unit line minus the Cantor set). To finish the proof, we are left to construct a refining sequence ofpartitions made of intervals (_k )_k∈ that agrees almost everywhere with the refining sequence of partitions (φ^-1(_k))_k∈. For u∈,define R_u = [C^-(u), C^+(u)[ (and R_u = [C^-(u), C^+(u)] if u is the unique node such that v u for every v∈ with h(v)=h(u)). As ψ is measure preserving, and as ψ(S_u) ⊂ [C^-(u), C^+(u)] with λ(S_u) = C^+(u) - C^-(u), we get that λ([C^-(u), C^+(u)] ∖ψ(S_u)) =0. As φ is the inverse map of ψ, we have that _φ^-1(S_u) = _ψ(S_u) = _[C^-(u), C^+(u)] = _R_u, R_u agrees almost everywherewithφ^-1(S_u). For k∈, define the finite partition _k = { R_u : h(u)=k }. Then, by definition of the sets R_u, the sequence of partitions (_k)_k∈ is refining. This concludes the proof. Even if it is not stressed in <cit.>, the measure preserving map φ (a fortiori an inversible one) in Lemma <ref> cannot be obtained without any assumption on the refining sequence of partitions (_k)_k∈ (in our case, we assumed that their diameter converges to zero). Indeed consider the sequence of partitions where for every k∈, _k is composed of the sets: S_k,j = [j 2^-k-1, (j+1) 2^-k-1[ ∪ [1/2 + j 2^-k-1, 1/2 + (j+1) 2^-k-1[ , 0≤ j< 2^k,S_k,j is the union of two dyadic interval translated by 1/2,(also add 1 to the set S_k,0 to get a complete partition). Then, for every x∈[0,1/2[, x and x+1/2 belong to the same set of _k for every k∈; in particular the diameter of the partitions (_k)_k∈ does not converge to zero. By contradiction, assume there exist a measure preserving map φ∈ and a sequence of interval partitions (_k)_k∈ such that for all k∈ and all set S_k,j∈_k with 0≤ j < 2^k, there exists a interval set I_k,j∈_k such that _I_k,j = _φ^-1(S_k,j). In particular, the set I_k,j must be an interval of length 2^-k. Hence, _k is a dyadic partition with stepsize 2^-k, and thus the diameter of the partitions (_k)_k∈ converges to zero. For every x∈[0,1/2[, we get that ( φ^-1({ x, x+1/2 }) ) ≤(_k) = 2^-k for all k∈; this impliesthat φ^-1({ x, x+1/2 }) is a singleton, either x∉φ([0,1]) orx+1/2 ∉φ([0,1]). 
Hence, we have λ([0,1/2[ ∩φ([0,1])) = λ( [1/2,1[ ∖φ([0,1])) and λ([0,1/2[ ∖φ([0,1])) = λ( [1/2,1[ ∩φ([0,1])). As λ([0,1/2[) = λ([0,1/2[ ∩φ([0,1])) + λ([0,1/2[ ∖φ([0,1])) = 1/2 because φ is measure preserving, we get that λ(φ([0,1]) ) = λ([0,1/2[ ∩φ([0,1])) + λ([1/2,1[ ∩φ([0,1])) = 1/2, which contradicts the fact that φ is measure preserving. Now, for every n∈, applying Lemma <ref> to (_n,k)_k∈ and W_n, we get ameasure-preserving map φ_n and a refining sequence of partitions ('_n,k)_k∈ made of intervals such that for all k∈, and all set R∈'_n,k there exists a set S∈_n,k such that _R = _φ_n^-1(S). In particular, for all k∈, the sequence of partitions (_n,k)_k∈ still satisfy <ref>–<ref>. Set W'_n = W_n^φ_n and W'_n,k=W_n,k^φ_n so that almost everywhere:W'_n,k = ((W_n)_𝒫_n,k)^φ_n =(W_n^φ_n)_𝒫'_n,k =(W')_𝒫'_n,k.As d and d' are invariant, we have for every n,k∈ that d(W_n, W_n,k) = d(W'_n, W'_n,k), and similarly for d'. This insures that the signed measure-valued kernels (W'_n)_n∈ and (W'_n,k)_n∈, k∈, still satisfy <ref>–<ref>. Remind that for a measure-valued kernel W and a measure-preserving map φ, (W,W^φ)=0. Hence, we can replace the signed measure-valued kernels (W_n)_n∈ and (W_n,k)_n∈, k∈, by (W'_n)_n∈ and (W'_n,k)_n∈,k∈, and assume that the partitions _n,k are made of intervals. Step 2: There exists a subsequence (W_n_ℓ )_ℓ∈ such thatfor every k∈ and ϵ∈{+,-},the subsequence (W_n_ℓ,k^ϵ )_ℓ∈ weakly converges,as ℓ→∞,almost everywhere to a limit, say U_k^ϵ which is a stepfunction adapted to a partition with m_k elements (some elements might be empty sets). Fixsomek∈. Thestepfunctions (W_n,k=(W_n)__n,k)_n∈ allhave thesame numberof steps m_k. For n∈,denote by 𝒫_n,k ={ S_n,k, i: 1≤ i≤ m_k}the interval partition adaptedtoW_n,kwhere the intervals are order according to the natural order on [0,1] (note that some intervals might be empty, simply put them at the end). Forn∈ and1≤i≤m_k,let λ(S_n,k,i) denotethelength of theinterval S_n,k,i∈𝒫_n,k. Asthe lengths of stepstake values in thecompactset [0,1],there exists a subsequence of indices (n_ℓ)_ℓ∈ suchthatforevery1≤ i≤m_k,thereexists s_k,i∈[0,1] suchthat lim_ℓ→∞λ(S_n_ℓ,k,i) =s_k,i.Denote by _k = { S_k, i : 1≤ i≤ m_k} the interval partition composed of m_k intervals where the i-th interval S_k,i has length s_k,i (note that some intervals might be empty). Up to a diagonal extraction, we can assume thatthe convergence holds for every k∈ simultaneously. Remark that for all n,k∈, the fact that _n,k+1 is a refinement of _n,k can be simply restated as linear relations on the interval lengths (λ(S_n,k,i))_1≤ i ≤ m_k and (λ(S_n,k+1,i))_1≤ i ≤ m_k+1. As linear relations are preserved when taking the limit, we get thatthe partition _k+1 is a refinement of _k forall k∈. Weassume from now on that (W_n )_n∈ and (W_n,k )_n∈, k∈, are the corresponding subsequences.For every n∈, we decompose W_n = W_n^+ - W_n^-into its positive and negative kernel parts, see Lemma <ref>. For n,k∈ and ϵ∈{+,-}, we define W_n,k^ϵ = (W_n^ϵ)__n,k. In particular, remark that W_n,k = W_n,k^+ - W_n,k^- andfor allℓ≥k,that W_n,k^ϵ = (W_n,ℓ^ϵ)_𝒫_n,k. Let ϵ∈{+,-} and 1≤ i,j ≤ m_k such that s_k,is_k,j>0 be fixed.For every n∈, we have on S_n,k,i× S_n,k,j that W_n,k^ = μ_n,k^i,j,ϵ∈ with:μ_n,k^i,j,ϵ(·) = 1/λ(S_n,k,i)λ(S_n,k,j) W_n^ϵ(S_n,k,i×S_n,k,j; ·) .We have that:μ_n,k^i,j,ϵ≤W_n≤ C.This gives thatthe sequence (μ_n,k^i,j,ϵ )_n∈ inisbounded. We now prove it is tight.Let >0. 
As lim_n→∞λ(S_n,k,ℓ) = s_k,ℓ > 0 for ℓ=i,j, we deduce that there exists c>0 such that for every n∈ large enough and ℓ=i,j, we have λ(S_n,k,ℓ) > c. Set ε' = c^2 ε. As the sequence (W_n)_n∈ is tight, there exists a compact set K ⊂ such that for every n∈, M_W_n(K^c) ≤ε'. Hence, for every n∈ large enough, we have: μ_n,k^i,j,ϵ(K^c) ≤ M_W_n(K^c)/(λ(S_n,k,i) λ(S_n,k,j)) ≤ε. This gives that the sequence (μ_n,k^i,j,ϵ)_n∈ is bounded and tight, and thus, by Lemma <ref>, it has a convergent subsequence. By diagonal extraction, we can assume there is a subsequence (W_n_ℓ)_ℓ∈ such that for all k∈, all 1≤ i, j≤ m_k such that s_k,i s_k,j>0, and all ϵ∈{+,-}, the subsequence (μ_n_ℓ,k^i,j,ϵ)_ℓ∈ weakly converges to a limit, say μ_k^i,j,ϵ. Define the stepfunction U_k^ϵ∈ adapted to the partition _k which is equal to μ_k^i,j,ϵ on S_k,i× S_k,j (if s_k,i s_k,j=0, set μ_k^i,j,ϵ = 0). We have in particular obtained that, for every k∈, the subsequence (W_n_ℓ,k^ϵ)_ℓ∈ weakly converges to U_k^ϵ, which is a stepfunction adapted to a partition with m_k elements; this implies that the subsequence (W_n_ℓ,k)_ℓ∈ also weakly converges to U_k = U_k^+ - U_k^-. We now assume that (W_n)_n∈ is such a subsequence. With this convention, notice that for all k, n∈ and ϵ∈{+,-}: U_k^ϵ≤sup_n∈W_n,k^ϵ≤sup_n∈W_n = C < +∞ . Step 3: There exists a subsequence of (U_k)_k∈ which weakly converges to a limit U∈ almost everywhere on [0,1]^2. The proof of this step is postponed to the end. Without loss of generality, we still write (U_k)_k∈ for this subsequence. Step 4: We have lim_n→∞ (U, W_n) = lim_n→∞ (U, W_n) = 0. Let ε> 0. As the cut distance d is smooth, we deduce from Step 3 that for k large enough d(U, U_k) ≤ε. By hypothesis <ref> on the sequence (W_n,k)_n∈, we also have that for k large enough d(W_n, W_n,k) ≤ε. For such large k, as by Step 2 the sequence (W_n,k)_n∈ weakly converges almost everywhere to U_k, and again as the cut distance d is smooth, there is an n_0 such that for every n≥ n_0, d(U_k, W_n,k) ≤ε. Then for all n≥ n_0, we have: (U,W_n) ≤(U,U_k) + (U_k, W_n,k) + (W_n,k,W_n)≤ d(U,U_k) + d(U_k, W_n,k) + d(W_n,k, W_n) ≤ 3ε. This gives that lim_n→∞ (W_n, U)=0. If we consider a second distance as in the lemma, then similarly lim_n→∞ (W_n, U)=0. Proof of Step 3. Assume that the claim is true for measure-valued kernels. Then, if (U_k)_k∈ is a sequence of signed-measure-valued kernels, applying the claim to (U_k^ϵ)_k∈, for ϵ∈{+,-}, we get a measure-valued kernel U^ϵ∈ such that the sequence (U_k^ϵ)_k∈ weakly converges to U^ϵ. Thus, the sequence (U_k)_k∈ weakly converges to U = U^+ - U^-. Hence, we are left to prove the claim for measure-valued kernels. The proof is divided into four steps. The first three steps also work for signed-measure-valued kernels, but the last argument of Step 3.d only works for measures. Step 3.a: The sequence (U_k)_k∈ inherits the tightness property from the sequence (W_n)_n∈. Let ε > 0. Since the sequence (W_n)_n∈ is tight, there exists a compact set K⊂ such that for every n∈, we have M_W_n(K^c) ≤ε. Remark that: M_W_n,k = ∑_1≤ i,j ≤ m_kλ(S_n,k,i) λ(S_n,k,j) μ_n,k^i,j = M_W_n and M_U_k = ∑_1≤ i,j ≤ m_k s_k,i s_k,j μ_k^i,j. For all k∈ and 1≤ i,j ≤ m_k, as the sequence (μ_n,k^i,j)_n∈ weakly converges to μ_k^i,j, using <cit.> with the open subset K^c ⊂, we get that μ_k^i,j(K^c) ≤lim inf_n→∞μ_n,k^i,j(K^c). As lim_n→∞λ(S_n,k,i) = s_k,i for all 1≤ i≤ m_k, summing those bounds, we get: M_U_k(K^c) ≤lim inf_n→∞ M_W_n,k(K^c) = lim inf_n→∞ M_W_n(K^c) ≤ε. Consequently, the sequence (U_k)_k∈ is tight.
Step 3.b: Convergence of the measures Û_k in ℳ_+([0,1]^2×) defined for k∈ as: Û_k( x, y, z) = U_k(x,y; z) λ_2( x, y). Since the sequence (M_U_k)_k∈ is tight, for all ε>0, there exists a compact set K⊂ such that for every k∈, M_U_k(K^c) ≤ε; and thus Û_k(K̂^c) = M_U_k(K^c) ≤ε, where K̂=[0, 1]^2 × K is a compact subset of [0,1]^2×; that is, the sequence (Û_k)_k∈ in ℳ_+([0,1]^2×) is tight. The sequence (Û_k)_k∈ is also bounded, as Û_k≤U_k≤ C thanks to (<ref>). Hence, using Lemma <ref>, there exists a subsequence (Û_k_ℓ)_ℓ∈ of the sequence (Û_k)_k∈ that converges to some measure, say Û, in ℳ_+([0,1]^2×). Remark that, when considering the subsequence of indices (k_ℓ)_ℓ∈, the subsequences (W_n,k_ℓ)_ℓ∈, n∈, still satisfy properties <ref>-<ref> of Lemma <ref>, and for all ℓ∈, the sequence (W_n,k_ℓ)_n∈ still weakly converges to U_k_ℓ. Without loss of generality, we now work with this subsequence and thus write k instead of k_ℓ. Step 3.c: The measure Û( x,y,z) can be disintegrated with respect to λ_2( x,y), giving us an element of . To prove this, we need the following disintegration theorem for measures, see <cit.> (stated in the more general framework of Borel spaces), which generalizes the disintegration theorem for probability measures <cit.>. The notation μ∼ν for two measures μ and ν means that μ≪ν and ν≪μ, where μ≪ν means that μ is absolutely continuous with respect to ν. [Disintegration theorem for measures, <cit.>] Let ρ be a measure on S× T, where S is a measurable space and T a Polish space. Then there exist a measure ν≡ρ(·× T) on S and a probability kernel μ: S→ T such that ρ = ν⊗μ (ρ(ds,dt)=ν(ds)μ(s;dt)). Moreover, the measures μ_s = μ(s;·) are unique for ν-almost every s ∈ S. Using Lemma <ref> with S=[0,1]^2 and T=, we get that there exists a probability kernel U' in such that: Û( x, y, z) = U'(x,y; z) π( x, y), where π =Û(·×) is a measure on [0,1]^2. We now need to prove that π≪λ_2. By contradiction, assume this is false; then there exists a measurable set A⊂[0,1]^2 such that λ_2(A) = 0 and π(A)>0. As the measure ∫_A U'(x,y;·) π( x,y) is not null, there exists f∈ such that ∫_A U'(x,y;f) π( x,y) ≠ 0. As the sequence (Û_k)_k∈ weakly converges to Û in [0,1]^2× by Step 3.b, we have that the sequence of measures Û_k( x, y; f) = U_k(x,y;f) λ_2( x,y) weakly converges as k→∞ to Û( x, y; f) = U'(x,y;f) π( x,y) in [0,1]^2. Moreover, as the maps x,y ↦ U_k(x,y;f) are uniformly bounded (by f_∞U_k_∞≤ C f_∞, see (<ref>)), they are also uniformly integrable (with respect to λ_2), and applying <cit.> there exist a subsequence (U_k_ℓ)_ℓ∈ and a bounded function g_f on [0,1]^2 such that for every bounded measurable function h∈ L^∞([0,1]^2), we have: lim_ℓ→∞∫ U_k_ℓ(x,y;f) h(x,y) λ_2( x,y) = ∫ g_f(x,y) h(x,y) λ_2( x,y) . In particular, the sequence of measures (U_k_ℓ(x,y;f) λ_2( x,y))_ℓ∈ weakly converges to the measure g_f(x,y) λ_2( x,y), which imposes the equality between measures: Û( x, y, f) = U'(x,y;f) π( x,y) = g_f(x,y) λ_2( x,y) . Hence, taking h = _A, we get: Û(A, f) = ∫_A U'(x,y;f) π( x,y) = ∫_A g_f(x,y) λ_2( x,y) = 0 , which yields a contradiction. Consequently, the measure π is absolutely continuous with respect to λ_2, with density still denoted by π, and we set, λ_2-almost everywhere on [0,1]^2: U(x,y;z)=π(x,y) U'(x,y; z), and thus Û( x, y, z) = U(x,y; z) λ_2( x, y).
As _n,k+1 is a refinement of _n,k, we already know that U_k and (U_k+1)__k are both stepfunctions adapted to the finite partition _k. Thus, we only need to verify that U_k and (U_k+1)__k take the same value on each step. For every n∈, the fact that W_n,k = (W_n,k+1)__n,k implies that for all 1≤ i,j ≤ m_k such that λ(S_n,k,i) λ(S_n,k,j)>0, we have:μ_i,j^n,k = ∑_i' ∈ I_i, j'∈ I_jλ(S_n,k+1,i') λ(S_n,k+1,j')/λ(S_n,k,i) λ(S_n,k,j)μ_i',j'^n,k+1,and this equation is preserved when taking the limit n→∞, which gives us:μ_i,j^k = ∑_i' ∈ I_i, j'∈ I_js_k+1,i' s_k+1,j'/s_k,i s_k,jμ_i',j'^k+1,for all 1≤ i,j ≤ m_k such that s_k,i s_k,j >0 .This proves that the stepfunctions U_k and (U_k+1)__k take the same value on each step S_k,i× S_k,j with positive size s_k,i s_k,j > 0 (on a step with null size s_k,i s_k,j = 0, U_k and (U_k+1)__k are both equal to the null measure). This givesthat U_k = (U_k+1)__k. Letf∈be aboundedcontinuous function,and X,Ybe independentuniformrandom variableson[0,1]. Then (<ref>) and (<ref>) imply thatthe sequenceN^f=(N_k^f=U_k(X,Y;f) )_k∈is amartingale bounded by Cf_∞for the filtration (_k)_k∈, where the σ-field _k is generatedby the events {X∈ S_k,i}∩{Y∈ S_k, j} for 1≤i, j ≤m_k and S_k, ℓ∈_k.By the martingale convergence theorem,the martingale N^f is almost surely convergent, that is, the sequence (U_k[f])_k∈ convergesλ_2-to a bounded measurable function u_f. Let g:[0,1]^2→ be a bounded measurable function. We get:∫_[0, 1]^2 g(x,y)U(x,y; f)λ_2( xy)=∫_[0, 1]^2× g(x,y) f(z)Û( x,y, z)= lim_k→∞∫_[0, 1]^2× g(x,y) f(z)Û_k( x,y, z)= lim_k→∞[g(X,Y) U_k(X,Y;f) ] =∫_[0, 1]^2 g(x,y)u_f(x,y) λ_2( xy),where we used the definition (<ref>)of U for the first equality, that (Û_k )_k∈ weakly convergesto Û for the second, the definition (<ref>)of Û_k for the third, and the convergence of the martingale N^f for the last. Since g is arbitrary,we deduce that λ_2-U(·,·; f)=u_f and thusthat thesequence (U_k[f])_k∈ converges λ_2-toU[f]. Applyingthisresultfor all f∈=(f_m)_m∈ aconvergencedeterminingsequence(with the convention f_0=), we deduce thatthe sequence (U_k )_k∈ weakly convergesto U almost everywhere on [0, 1]^2.Remind from Section <ref> that convergence determining sequences exist only for measures and not for signed measures in general, this is why we worked with measures in Step 3. This ends the proof of Step 3, and thus ends the proof of the lemma. We are now ready to prove Theorem <ref>. We firstprove Point <ref> on (the proof onis similar).Since thedistance d is weakly regular and the sequence( W_n)_n∈ is uniformlybounded andtight in ,wecanconstructinductively foreveryn∈a sequence (𝒫_n,k )_k∈ of partitions of [0,1] such thathypothesis <ref>-<ref> of Lemma <ref> are satisfied: _n,k+1 being obtained by applying the weak regularity property (see Definition <ref>-<ref>) with starting partition _n,k = _n,k_k, where _k is the dyadic partition with stepsize 2^-(k+1). (We may assume that the partitions P_n,k for all n∈ have the same size m_k by adding empty sets.)Then as d is alsoinvariant andsmoothon ,thefirst partof Lemma <ref> directly gives Point <ref>. Before proving Point <ref>, we first need to prove the following lemma.[Compactness theorem for ] Letd bean invariant,smoothand weaklyregular distanceon(resp.or ). Letbe a convex and weakly closed subset of(resp.or ). Let (W_n)_n∈ be a sequence of -valued kernels which is tight and uniformly bounded. 
Then, (W_n)_n∈ has a subsequence that converges for to some -valued kernel.First remark that, asis convex, the imageof _ bythe stepping operatorW ↦W_, whereis afinite partitionof [0, 1],is a subset of_. Hence, a close look at the proofofLemma <ref> (the partitions are constructed as in the proof of Point <ref> from Theorem <ref>),andusingthe notation therein, shows that, upto taking subsequences, one can take thestepping kernelsW_n,kand U_kin_, suchthat (U_k )_k∈ weakly convergesto U andthe subsequence (W_n_ℓ)_ℓ∈convergestoU . Since U_k(x,y;·) ∈ weakly converges to U(x,y;·) for almost everyx,y∈[0,1] andsinceis weakly closed (and thus sequentially weakly closed),wededucethat U(x,y;·) belongs tofor almost every x,y∈ [0,1].This means thatU∈_.We prove Point <ref> for ⊂ (the proof for ⊂ is identical).The fact that _ and_ are convex isclear asis convex. Let(W_n )_n∈be a sequenceofelementsof_. Sinceis convex,wededuce that(M_W_n)_n∈is asequencein . Asis sequentially compact for the weak topology,is tight and bounded by Lemma <ref>, and thus the sequence (W_n )_n∈ is tight and uniformly bounded (remind Definition <ref>).Hence, using <Ref>,we get that fromanysequence in _,we canextract a subsequencewhich converges fortoan element in _.This implies that ( _, ) is compact. Point <ref> is a direct consequence of Point <ref> as ifis compact, so is . We prove Point <ref>. The fact that _ and_ are convex isclear asis convex. To prove thatis closed, we consider asequence (W_n)_n∈ inthat converges forto some W∈. As (W_n)_n∈ is a Cauchy sequence for , by <Ref>, (M_W_n)_n∈ is a Cauchy sequence forand thus is tight. Hence, (W_n)_n∈ is uniformly bounded and tight. Applying <Ref>, there exists a subsequence (W_n_k)_k∈ of the sequence (W_n)_n∈ which converges for to some -valued kernel U∈. But as a subsequence, (W_n_k)_k∈ must also converge forto W. This implies that W=U is a -valued kernel.In order to prove Theorem <ref>, we first prove a lemma that allows to construct the partitions needed to use Lemma <ref>.[Construction of partitions for two distances] Letdandd' be twodistanceson(resp.or ) whichare invariant,smooth, weakly regularand regular the steppingoperator (seeDefinitions <ref> and <ref>). Let (W_n)_n∈be a sequence in (resp.or ) which is tight (resp. uniformly bounded and tight). Then, there exists sequences(𝒫_n,k)_k∈,n∈, of partitions of [0,1]such thathypothesis <ref>–<ref> of Lemma <ref> are satisfied.We prove the result on (the proof onandis similar). To simplify notations, write d^1=d and d^2=d'. Weproceedbyinductionon k∈∪{-1}.For every n∈, set _n,-1 = { [0,1] } the trivial partition with size 1. Letk∈and assumethat we have already constructed partitions (𝒫_n,k-1 )_n∈ that have the same size m_k-1. Now we proceed toconstruct partitions (𝒫_n,k)_n∈ that satisfy hypothesis <ref>-<ref>.Set C=sup_n∈W_n, which is finite as the sequence(W_n)_n∈ is uniformly bounded.As d^i, with i=1,2, areregular the stepping operator, there exists a finite constant C_0>0 such that for every W, U∈,with W≤ C and U≤ C, and U a stepfunction adapted to a finite partition𝒬:d^i(W,W_𝒬) ≤ C_0 d^i(W,U).Setε=1/C_0(k+1). 
Sinced^i,with i=1,2, areweakly regular andthe sequence (W_n )_n∈ istight and uniformlybounded,thereexists r_k∈^*,suchthatforevery n∈,there exists a partition_n,k^i of[0,1]thatrefines _n,k = _n,k-1_k, where _k is the dyadic partition with stepsize 2^-k,such that:|_n,k^i|≤r_k |_n,k|≤2^k r_k |_n,k-1|and d^i(W_n, (W_n)_^i_n,k)≤.(Indeed, a close look at the proof shows that _n,k-1 refines _k-1 by construction, thus _n,k cuts each set of _n,k-1 in at most 2 sets, and we get |_n,k|≤ 2|_n,k-1|.) Now, let 𝒫_n,k be the common refinement of _n,k^1and _n,k^2; it isa refinement of 𝒫_n,k-1, has diameter at most 2^-k and size:|𝒫_n,k|≤ 2^2k r_k^2 |_n,k-1|^2 = 2^2k r_k^2 m_k-1^2.If necessary, by completing 𝒫_n,k with null sets, we may assume that |_n,k| = m_k, where m_k = 2^2k r_k^2 m_k-1^2. As (W_n)_^i_n,k is a stepfunctionadapted to the partition _n,k, we deduce from (<ref>) and (<ref>) that for i=1,2 and n∈:d^i(W_n,(W_n)_𝒫_n,k) ≤ C_0d^i(W_n,(W_n)__n,k^i)≤C_0 = 1/k+1·Hence, for every n∈, the partition 𝒫_n,k satisfies the hypothesis <ref>-<ref> of Lemma <ref>.Thus,the induction is complete. Letandbe as inTheorem <ref>. Let(W_n )_n∈ beasequence ofprobability-graphonsthat converges tosome W∈for .By Lemma <ref>, the sequence of probability measure (M_W_n)_n∈converges to M_W for the distance . Asinduces the weak topology on , wehavethat thesequence (M_W_n)_n∈ is tight, and thus the sequence (W_n )_n∈ is also tight (remind Definition <ref>). The sequence (W_n )_n∈ isalso uniformlyboundedas asequence in. ApplyingLemma <ref> withthe distances d= and =,which are invariant, smooth, weakly regular and regular the stepping operator, wegetsequences of partitions (_n,k)_k∈, n∈,that satisfy hypothesis <ref>-<ref> of Lemma <ref>.Wethen deduce from the last partof Lemma <ref> that any subsequence of(W_n)_n∈ hasa furthersubsequence which convergesto the same limit for bothand , this limit must then be W. 
This impliesthat the sequence (W_n )_n∈ convergesto W for .The role ofandbeing symmetric, we conclude that the distancesandinduce the same topology on .§ ACKNOWLEDGEMENT We thank Pierre-André Zitt for some helpful discussionin particular on<Ref>.Index of notation 2Measures * (, ) a topological Polish space*the Borel σ-field induced by *the set of continuous bounded real-valued functions on * measure = positive measure*the set of signed measures on *the set ofmeasures on *the set of probability measures on *the set of sub-probability measures on , measures with total mass at most 1* μ^+, μ^- the positive and negative parts of μ from its Hahn-Jordan decomposition* |μ| = μ_+ + μ_- the total variation measure of μ* μ = |μ|() the total mass of μ*a distance on either ,or *a norm on *the Prohorov distance*the Kantorovitch-Rubinstein norm*the Fortet-Mourier norm*the norm based on a convergence determining sequenceRelabelings and partitions*the set of bijective measure-preserving maps from ([0,1],λ) to itself*the set of measure-preserving maps from ([0,1],λ) to itself* || the number of sets in the finite partitionKernels and graphons spaces*the set of probability-graphons *the set of measure-valued kernels*the set of signed measure-valued kernels*the set of -valued kernels with ⊂*the set of unlabeled probability-graphons *the set of unlabeled measure-valued kernels*the set ofunlabeled signed measure-valued kernels*the set of unlabeled -valued kernelsKernels and graphons * W^+ and W^- the positive and negative part of W∈* | W| = W^+ + W^-* W(A;·) = ∫_A W(x,y;·)xy for A⊂ [0,1]^2* W[f](x,y) = W(x,y;f) forf∈* W_𝒫 the stepping of W a partition* W:=sup_x,y∈ [0, 1]W(x, y; ·)* M_W( z) = | W | ([0,1]^2;z) * W_G the probability-graphon associated to a -graph or a weighted graph G* (k,W) the -graph with k vertices sampled from W∈* (k,W) the -graph with k vertices sampled from W∈* F^g a finite graph whose edges are decorated with functions in * t(F^g,W) = M_W^F(g) the homomorphism density of F^g in W Distances/norms on graphon spaces *the cut distance associated to *the cut norm associated to *the unlabeled distance associated to an arbitrary distance d*the unlabeled cut distance associated toor *the cut norm for real-valued kernels* · the positive part of the cut norm for real-valued kernels Definitions * weakisomorphism of kernels and graphons in<Ref> on page def_weak_isomorphism* tightnessforsets of kernels or graphons in<Ref>on page defi:tight* invariant and smoothfor a distance d on graphon spaces in <Ref> on page def:inv-smooth * weakly regularandregular the steppingoperator fora distanced ongraphon spacesin <Ref> on page defi:extra-propalpha
http://arxiv.org/abs/2312.15935v1
{ "authors": [ "Romain Abraham", "Jean-François Delmas", "Julien Weibel" ], "categories": [ "cs.DM", "math.PR" ], "primary_category": "cs.DM", "published": "20231226075959", "title": "Probability-graphons: Limits of large dense weighted graphs" }
Supported by the National Key Research and Development Program (Grant Nos. 2022YFB3503600 and 2021YFA0718500) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA15360102), and National Natural Science Foundation of China (Grant Nos. 12273042 and 12075258) [Corresponding author, ]Pei-Yi Feng, Particle and Astrophysics Center, Institute of High Energy Physics, No. 19 (B), Yuquan Road, Laoshan Street, Shijingshan District, Beijing, China, 19151915020, [email protected]. Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China[Corresponding author, ]Xi-Lei Sun, Experimental Physics Center, Institute of High Energy Physics, No. 19 (B), Yuquan Road, Laoshan Street, Shijingshan District, Beijing, China, 13671137148, [email protected]. State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China[Corresponding author, ]Zheng-Hua An, Particle and Astrophysics Center, Institute of High Energy Physics, No. 19 (B), Yuquan Road, Laoshan Street, Shijingshan District, Beijing, China, 13661351124, [email protected]. Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaSchool of Nuclear Science and Technology, University of South China, Hengyang Hunan 421001, ChinaNational Engineering Research Center for Rare Earth, Grirem Advanced Materials Co., Ltd. and General Research Institute for Nonferrous Metals, Beijing 100088, ChinaState Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaSchool of Nuclear Science and Technology, University of South China, Hengyang Hunan 421001, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle 
Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, ChinaKey Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China[Corresponding author, ]Hong Lu, Particle and Astrophysics Center, Institute of High Energy Physics, No. 19 (B), Yuquan Road, Laoshan Street, Shijingshan District, Beijing, China, 13681034963, [email protected]. Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China The GECAM series of satellites utilize LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals as sensitive materials for gamma-ray detectors (GRDs). To investigate the non-linearity in the detection of low-energy gamma rays and address errors in the E-C relationship calibration, comprehensive tests and comparative studies of the non-linearity of these three crystals were conducted using Compton electrons, radioactive sources, and mono-energetic X-rays. The non-linearity test results for Compton electrons and X-rays displayed substantial differences, with all three crystals showing higher non-linearity for X/γ-rays than for Compton electrons. Despite LaBr_3(Ce) and LaBr_3(Ce,Sr) crystals having higher absolute light yields, they exhibited a noticeable non-linear decrease in light yield, especially at energies below 400 keV. The NaI(Tl) crystal demonstrated "excess" light output in the 6–200 keV range, reaching a maximum "excess" of 9.2% at 30 keV in X-ray testing and up to 15.5% at 14 keV during Compton electron testing, indicating a significant advantage in the detection of low-energy gamma rays. Furthermore, this paper explores the underlying causes of the observed non-linearity in these crystals. This study not only elucidates the detector responses of GECAM, but also marks the inaugural comprehensive investigation into the non-linearity of domestically produced lanthanum bromide and sodium iodide crystals.The Energy Response of LaBr_3(Ce), LaBr_3(Ce,Sr) and NaI(Tl) Crystals for GECAM Hong Lu January 14, 2024 =============================================================================== § INTRODUCTION Recent years have witnessed groundbreaking advancements in various branches of astrophysics, such as gravitational waves, fast radio bursts, and cosmic rays, paving the way for a new "multi-messenger, multi-wavelength" era in astronomy<cit.>. These discoveries emphasize the importance of efficient detection methods for further understanding of high-energy astronomical phenomena. Transient gamma-ray sources, including gamma ray bursts and magnetar flares, play a vital role in shaping the landscape of astronomical research<cit.>.The Gravitational wave burst high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) series, comprising satellites GECAM-A/B, GECAM-C and GECAM-D, was developed to monitor various high-energy electromagnetic events such as gamma-ray bursts and magnetar flares<cit.>. These satellites employ gamma-ray detectors (GRDs) that utilize different scintillating crystals such as LaBr_3(Ce) and LaBr_3(Ce,Sr) for GECAM-A/B, and a combination of LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) for GECAM-C to validate new detector technologies. The fourth satellite, GECAM-D, uses NaI(Tl) crystals, and is scheduled to be launched in early 2024. 
The main characteristics of the GRDs are listed in Table <ref>.GRDs serve as the primary detectors in the GECAM payload, and GECAM-A/B utilize an innovative solution employing LaBr_3 crystals coupled with silicon photomultiplier (SiPM) readout technology (Fig. <ref>)<cit.>. LaBr_3 crystals are advanced inorganic scintillators known for their high light output, excellent energy and timing resolutions, good energy linearity, and short light decay time. The SiPM, which replaces the conventional photomultiplier tube (PMT), offers advantages such as a simple and compact structure, ease of miniaturization, and efficient readout capability.For GECAM-C (Fig. <ref>), the GRDs employ both NaI(Tl) and LaBr_3 crystals, which are coupled to SiPM readout arrays<cit.>. The NaI(Tl) crystal is a high-performance, traditional inorganic scintillator with excellent luminescence properties that provides good resolution for both X-rays and gamma rays. Inorganic scintillators are widely used as the preferred choice for high-energy X/γ-ray detectors in space due to their versatility in shaping and sizing, stability, reliability, reasonable cost, inclusion of heavy elements, high density, and efficient detection capabilities for X/γ-rays.The crystals used in the GECAM satellite series were produced by the Beijing Glass Research Institute. To optimize the performance of these detectors, it is vital to understand their energy responses. Consequently, we conducted an in-depth study involving X-ray, Compton electron, and gamma-ray tests on LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals used in these satellites<cit.>. Our findings indicated that the non-linearity of the three crystals varied when exposed to distinct excitation sources. The LaBr_3(Ce,Sr) crystal exhibited the strongest linear response to Compton electrons in the low-energy range, while the NaI(Tl) crystal demonstrated the best linear response to X-rays.Consistent with previous publications on the non-linearity of iodide crystals<cit.>, domestically produced NaI(Tl) crystals exhibited an light yield "excess" phenomenon, indicating unexpected advantages in the detection of low-energy gamma rays. These insights not only contribute to a deeper understanding of the detector response of the GECAM series but also serve as invaluable information for evaluating the performance of these domestically produced scintillating crystals in the low-energy range of 3–400 keV<cit.>. Manufacturers can refer to this paper to enhance their understanding of crystal non-linearity, potentially facilitating optimization and improvement in crystal growth processes and doping ratios. Furthermore, this study discusses the issue of non-linearity in crystals for low-energy gamma-ray detection, which holds substantial significance in addressing errors in detector calibration related to the energy-channel (E-C) relationship. Based on the content of this paper, we will delve deeper into the intrinsic resolution of crystals in our future work. § EXPERIMENTAL SETUP AND TEST PROCEDURE §.§ The Wide-Angle Compton Coincidence Technology Figure <ref> shows the Wide-Angle Compton Coincidence (WACC) experimental setup, which primarily comprises a radioactive source, an HPGe detector, the scintillation detector under examination, and a subsequent data acquisition system<cit.>. LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) cylindrical samples with diameters and height of 25.4 mm were selected for this study. 
Silicone oil was used to couple the encapsulated crystals to PMTs, which were R6233-100 models produced by Hamamatsu Photonics, Japan<cit.>. According to the user manual of the BE2020 planar germanium spectrometer manufactured by Canberra, the HPGe crystal had a thickness of 20 mm and a volume of 40,000 mm^3, allowing for a detection energy range of 3 keV to 3 MeV<cit.>. Based on the experimental data, the energy resolution (represented by the full width at half maximum, i.e., FWHM) of the HPGe detector used in this study was determined to be 1.58 keV (^60Co, 1.33 MeV) and 1.15 keV (^137Cs, 662 keV). The experiment involved placing a ^137Cs radioactive source at a quarter-circle position around the center of the crystal, with a distance of 13 cm between them. Gamma photons were emitted by the radioactive source via radioactive decay and underwent Compton scattering when they struck the crystal. Consequently, Compton electrons were generated and absorbed in the crystal, whereas some scattered photons escaped from the crystal and were absorbed by the nearby HPGe detector. The distance between the tested crystal and the HPGe detector was maintained at approximately 15 cm<cit.>. Lead blocks were positioned between the ^137Cs source and the HPGe detector to provide shielding and minimize the incidence of primary gamma photons directly irradiating the HPGe detector. Coincidence events across a broad energy range could be obtained by adjusting the position of the radioactive source and varying the angle between the source, crystal, and HPGe detector. A desktop waveform acquisition device with 10 bit @ 2 GS/s (interleaved) or 1 GS/s, the DT5751 digitizer<cit.>, was utilized in this experiment to collect signals from the crystal and HPGe detectors (Fig. <ref>). The signal from the HPGe detector, operated at +3500 V, underwent shaping and filtering via an ORTEC 572A amplifier before being sent to channel-0 of the DT5751. The output signal from the PMT anode, with the PMT operated at +1300 V, was routed to channel-1 of the DT5751 after the photoelectrons were multiplied by the dynodes<cit.>. The signals from the crystal and HPGe detectors successively underwent low-threshold discrimination, delay stretching, and logical coincidence. The resulting coincidence output signal was utilized as an external trigger for the DT5751. When triggered externally, the DT5751 recorded the corresponding coincidence events and generated two data files. The secondary particles produced by Compton scattering were absorbed by the two detectors in a definite time order: for a "true coincidence event", the waveform signal corresponding to the crystal appeared before that of the HPGe detector. Figure <ref> displays the coincidence matrix that represents all collected event data. In Fig. <ref> (a), the horizontal axis represents the energy deposited in the HPGe detector, whereas the vertical axis corresponds to the energy deposited in the LaBr_3(Ce,Sr) crystal. As stated in the Compton scattering formula (Equation <ref>)<cit.>, as the scattering angle θ of the gamma photon increases, the energy of the Compton electrons in the LaBr_3(Ce,Sr) crystal also increases. E_e = E_γ - E_γ^' = E_γ/[1 + m_ec^2/(E_γ(1 - cosθ))] , where E_γ is the energy of the gamma ray radiated from the source, E_e is the energy of the Compton electron, E_γ^' is the energy of the scattered photon, θ is the Compton scattering angle, and m_ec^2 is the rest mass energy of the electron.
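As a quick numerical check of the formula above (and not part of the original analysis chain), the short Python sketch below evaluates the Compton electron energy for the 661.6 keV ^137Cs line at several scattering angles; the function name and the printed angles are illustrative choices, and the only constant assumed beyond the text is the electron rest mass energy of 511 keV.

import math

M_E_C2 = 511.0   # electron rest mass energy in keV (assumed constant)
E_GAMMA = 661.6  # 137Cs gamma-ray energy in keV, as quoted in the text

def compton_electron_energy(e_gamma_kev, theta_deg):
    # Scattered-photon energy from the Compton formula; the electron carries the remainder.
    theta = math.radians(theta_deg)
    e_scattered = e_gamma_kev / (1.0 + (e_gamma_kev / M_E_C2) * (1.0 - math.cos(theta)))
    return e_gamma_kev - e_scattered

for angle in (10, 30, 60, 90, 120, 180):
    print(f"theta = {angle:3d} deg -> E_e = {compton_electron_energy(E_GAMMA, angle):6.1f} keV")

The electron energy increases monotonically with θ, as described above, and for θ = 180° the sketch returns roughly 477 keV, the familiar ^137Cs Compton edge.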
Five scattering angles θ were chosen during the experiment to obtain data over a broad energy range. The diagonal points in Fig. <ref> denote the "true coincidence events" of interest in this study. Each point corresponds to a specific scattering angle, and the deposited energies in the crystal and the HPGe detector sum to a constant 661.6 keV. The uneven "spread" along the diagonal at different energy levels is due to the diverse energy resolutions of the crystal for Compton electrons. Furthermore, the non-linear response of the crystal determines the "linearity" of the diagonal. Coincidence matrix analysis enabled the extraction of the energy resolution and non-linear response of the crystal for Compton electrons. Figure <ref> also displays horizontal and vertical lines, which indicate accidental coincidence events detected simultaneously by both detectors. The horizontal line represents the finite resolution of the crystal, and the vertical line represents the excellent resolution of the HPGe detector. The other points on the graph denote events in which only part of the energy was deposited in the detector or in which detection occurred after scattering through the surrounding materials. The WACC method accurately measures the energy response and resolution of the crystal detector to Compton electrons. Before the Compton experiments, it was necessary to calibrate the E–C relationship of the HPGe detector, which can be obtained from the energy spectra of multiple radioactive sources or directly from the vertical lines in the coincidence matrix. The HPGe detector offers an outstanding energy resolution, making it an excellent standard detector. The energy deposited in the crystal can be calculated by subtracting the scattered photon energy in the HPGe detector from the known gamma-ray source energy. In actual data processing, the cut width on the HPGe energy axis must be determined based on the Compton scattering event statistics. Within this range, the central value is taken as the deposited energy in the HPGe detector, and Equation <ref> is used to calculate the energy deposited in the crystal. In this study, cuts were made along the HPGe energy axis and the selected events were projected onto the crystal axis. <E_scin> = E_γ - <E_HPGe> , where E_γ is the known gamma-ray source energy, <E_HPGe> is the deposited energy in the HPGe detector, and <E_scin> is the deposited energy in the crystal. To understand the effect of the cut width, or energy window width, we measured the energy resolution of the LaBr_3(Ce,Sr) crystal for 46.6 keV Compton electrons with different cut widths. The outcome indicated that the energy resolution remained reasonably stable until a cut width of 4 keV was reached (Fig. <ref>). Wider cut widths led to a broadened range of scattering angles among the accepted events and an increase in the FWHM of the Compton electron spectrum, so the resolution deteriorated. The energy resolution of the HPGe detector was within the range of 1–2 keV, which must be considered when determining a reasonable cut width. It is also essential to ensure a sufficient number of events. Therefore, a cut width of 4 keV was used when the energy deposited in the HPGe detector was less than 615 keV. When the deposited energy fell within 615–661.6 keV, a cut width of 2 keV was selected. Multiple truncations of the HPGe energy axis were performed to obtain the spectra of the crystal for various Compton electron energies.
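A rough sketch of the event selection just described is given below; it is an illustration under stated assumptions rather than a reproduction of the actual analysis code, and the array names and NumPy-based event format are hypothetical. It applies the cut-width rule (4 keV below 615 keV, 2 keV above) and the energy subtraction of the equation above, reproducing the 46.6 keV electron energy used as an example in the text.

import numpy as np

E_GAMMA = 661.6  # keV, known 137Cs gamma-ray energy

def cut_width(e_hpge_center_kev):
    # Cut-width rule used in this work: 4 keV below 615 keV, 2 keV for 615-661.6 keV.
    return 4.0 if e_hpge_center_kev < 615.0 else 2.0

def select_compton_electrons(e_hpge, adc_crystal, e_hpge_center):
    # Keep coincidence events whose HPGe energy lies inside the cut window and
    # assign them the nominal electron energy <E_scin> = E_gamma - <E_HPGe>.
    half = cut_width(e_hpge_center) / 2.0
    mask = np.abs(e_hpge - e_hpge_center) <= half
    return adc_crystal[mask], E_GAMMA - e_hpge_center

# Example with made-up per-event HPGe energies (keV) and crystal ADC values:
e_hpge = np.array([614.6, 615.2, 640.0, 615.9])
adc_crystal = np.array([351.0, 362.0, 160.0, 358.0])
spectrum, e_electron = select_compton_electrons(e_hpge, adc_crystal, 615.0)  # e_electron = 46.6 keV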
Figure <ref> illustrates an example of this approach, in which an event data range of 614–616 keV was considered at an HPGe energy of 615 keV with a cut width of 2 keV to produce the Compton electron spectrum (Fig. <ref> (b)). A Gaussian-shaped, single-energy electron peak was visible, and fitting it with a Gaussian function returned an energy resolution of 15.81 ± 0.25% for 46.6 keV Compton electrons in the LaBr_3(Ce,Sr) crystal. When incident particles deposit energy in a crystal, they excite atoms or molecules, leading to the emission of scintillation photons with wavelengths similar to those of visible light<cit.>. The light yield, defined as the number of scintillation photons per unit of energy deposited in the crystal, is described by Equation <ref>. S = ADC/(ADC_spe· E) , where S is the light yield of the crystal, ADC represents the spectrum's peak position after subtracting the baseline, E is the deposited energy in the crystal, and ADC_spe = 8.0321 channels denotes the single-photoelectron response of the Hamamatsu R6233-100 PMT at a high voltage of +1300 V. The response was calibrated using the LED-triggered charge method<cit.>.§.§ Measurements with Radioactive Sources We employed radioactive sources of ^133Ba, ^137Cs, ^241Am, ^152Eu, and ^207Bi across a range of γ-ray energies from 30.85 keV to 1063.7 keV to investigate the gamma-ray responses. The tested crystal was coupled to a Hamamatsu R6233-100 PMT via silicone oil, and the digitizer DT5751 acquired the signal waveforms in self-triggering mode. ROOT, a data analysis framework developed at the European Organization for Nuclear Research (CERN)<cit.>, was used to analyze the experimental data in this study, including baseline subtraction, fitting of the full-energy peak, and analysis of the peak position and FWHM.§.§ Single-Energy X-ray Measurements Using the Hard X-ray Calibration Facility We employed two sets of hard X-ray calibration facilities (HXCF, Fig. <ref>) established by the National Institute of Metrology (NIM) in Changping, Beijing, China<cit.> to investigate the energy responses of these three crystals to X-rays in the range of 8–120 keV. The HXCF, which plays a substantial role in the calibration of gamma-ray detectors on GECAM, CubeSats and SVOM satellites<cit.>, was first built as a calibration facility for the high-energy telescope of HXMT<cit.> and comprises four primary components: an X-ray generator, a monochromator, a collimator, and a standard detector. To shield stray light from the X-ray generator, the collimator features apertures of various sizes at the entrance and exit. A low-energy HPGe detector from Canberra Industries was used as the standard detector. Before testing, we calibrated the HPGe detector for energy linearity, energy resolution, and detection efficiency using various standard radioactive sources<cit.>. The entire set of testing equipment, including the data-acquisition system, was placed inside an X-ray testing chamber (Fig. <ref>) and remotely controlled for data retrieval from the control room. The energy and flux of the X-rays were determined by the HPGe detector, and the testing procedures are shown in Fig. <ref>. We utilized GENIE 2000, a spectroscopic data acquisition and analysis software package, to record the spectral data from the HPGe detector. The crystal detector was coupled to a PMT (Hamamatsu Model CR160) using silicone oil.
The signals from the crystal detector were collected using a digitizer (DT5751) and analyzed using computer software for the corresponding spectra.In this study, the range of X-ray testing was 8–120 keV, with fine measurement of the crystal's absorption edge at a step size of 0.1 keV. The performance of the crystal detector gradually changed with increasing X-ray energy, allowing for a reduced number of test energy points. Owing to the testing at room temperature (22–23 ℃), the detector noise was slightly higher, limiting the starting test energy points to the range of 8–10 keV. For the two LaBr_3 crystals, the PMTs coupled to them operated at -800 V, while the NaI(Tl) crystal was at -1000 V. § RESULTS AND DISCUSSION§.§ Light Yield Non-linearity to Compton Electrons The light yields of LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals were normalized to "1" at 662 keV energy. Figure <ref> depicts the light output non-linearity of the LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals to Compton electrons within the energy range of 3–400 keV. To better quantify the non-linearity of these crystals, we introduced a metric known as the "Non-linearity Standard Deviation" (NLSD), denoted by Equation <ref>, where x_i represents the relative light yield at each energy point. The NLSD values for the LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals were calculated to be 0.11, 0.03, and 0.06, respectively. The larger the NLSD value, the more significant the crystal non-linearity.NLSD = √(1/n∑_i=1^n (x_i - 1)^2)(n =1, 2, 3...) .For both types of LaBr_3 crystals, as the energy of the Compton electrons decreased, the non-linearity of the light yield gradually increased. Within the measured electron energy range, the LaBr_3(Ce,Sr) crystal exhibited better linearity than the LaBr_3(Ce) crystal, particularly at energies below 20 keV. We hypothesize that the doping of Sr^2+ may have improved the internal energy transfer mechanism within the LaBr_3(Ce,Sr) crystal, enhancing energy transfer efficiency in the low-energy region, thereby ameliorating non-linearity. Both crystals exhibited a 10% "defect" light output at approximately 5 keV and 20 keV, respectively. The minimum measurable energy point using WACC was 3.1 keV, at which the LaBr_3(Ce,Sr) crystal exhibited approximately 24% "defect", whereas the LaBr_3(Ce) crystal reached a 35% "defect". This experiment validated the electron non-linearity simulation results presented by Zheng Chao et al.<cit.>, while also affirming the accuracy and rationality of both the model and experimental work conducted by the GECAM research team. In contrast to the two LaBr_3 crystals, the NaI(Tl) crystal did not exhibit a monotonic "defect" luminescence phenomenon as the energy of Compton electrons decreased. At approximately 14 keV, the NaI(Tl) crystal reached its maximum light yield, exhibiting approximately 15.5% "excess" light output. Beyond 14 keV, the light yield gradually decreased as the energy increased. Conversely, as the energy decreased below 14 keV, the light yield decreased. The lowest test energy point was 4.1 keV, at which the NaI(Tl) crystal demonstrated a luminosity non-linearity of approximately 14% "defect". §.§ The Absolute Light Yield of Crystals The three crystals were irradiated using multiple radioactive sources to obtain the energy spectra of each crystal for different sources. 
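For reference, the absolute light yield follows directly from the light-yield equation given earlier, S = ADC/(ADC_spe· E). The minimal Python sketch below converts a full-energy-peak position into photoelectrons per keV using the single-photoelectron response quoted above (8.0321 channels at +1300 V); the peak position used in the example is a made-up number for illustration only, not a measured value from Table <ref>.

ADC_SPE = 8.0321  # single-photoelectron response (channels) quoted above for the R6233-100 at +1300 V

def light_yield_pe_per_kev(peak_adc, energy_kev, adc_spe=ADC_SPE):
    # Light-yield equation from the text: S = ADC / (ADC_spe * E),
    # with ADC the baseline-subtracted full-energy-peak position.
    return peak_adc / (adc_spe * energy_kev)

# Example with a hypothetical 661.6 keV peak position of 40000 ADC channels:
print(round(light_yield_pe_per_kev(40000.0, 661.6), 2))  # about 7.53 photoelectrons per keV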
The single-photoelectron responses of the Hamamatsu R6233-100 PMT used in the measurements were calibrated using the LED-triggered charge method at various voltages, enabling the calculation of the absolute light yields of these three crystals. The absolute light yields and energy resolutions of the tested samples at 661.6 keV are listed in Table <ref>.§.§ Energy Resolution Figure <ref> illustrates the energy resolution of the LaBr_3(Ce), LaBr_3(Ce,Sr), and NaI(Tl) crystals for Compton electrons in the 3–400 keV range. The energy resolution of the NaI(Tl) crystal was comparable to that of the LaBr_3 crystals at 16–30 keV (Fig. <ref>).The energy resolution of the crystals was expressed using the FWHM of the X-ray full-energy peak. Figure <ref> shows the energy resolution of LaBr_3(Ce,Sr), LaBr_3(Ce), and NaI(Tl) crystals for X-rays in the 8–100 keV range as measured by HXCF. The LaBr_3(Ce,Sr) crystal exhibited the best energy resolution within this energy range. At 100 keV, the resolution of the LaBr_3(Ce,Sr) crystal was 8.74 ± 0.0681%, while the LaBr_3(Ce) and NaI(Tl) crystals had resolutions of 9.41 ± 0.0976% and 10.39 ± 0.1168%, respectively. Furthermore, a slight degradation in the energy resolution of no more than 1% was observed near the binding energy of the K-shell electrons.§.§ Comparison of the X/γ-Ray and Compton Electron Responses All data in this study were standardized by setting the full-energy peak response of 662 keV gamma rays from a ^137Cs source as the normalization factor. The non-linearity of the LaBr_3(Ce,Sr) crystal's light yield for Compton electrons and gamma rays in the 3–1000 keV range is shown in Fig. <ref>. Notably, the response of the Compton electrons exhibited excellent linearity at approximately 70 keV, with a non-linearity of less than 2%. However, a "deficiency" in light output occurred when the energy of Compton electrons was below 70 keV, while substantial non-linearity was observed in the response to gamma rays below approximately 200 keV. A more detailed test was conducted on the photon response below 120 keV using HXCF. Figure <ref> presents the non-linear light yield response curve of the LaBr_3(Ce,Sr) crystal to X-rays in the energy range 8–120 keV. As the error bars are similar in size to the data point symbols, they are not visible in the figure. Ideally, the relative light yield should be "1" at all energy points. However, this was not the case and varying degrees of light-yield deficiencies were observed within the tested energy range. Below 40 keV, the LaBr_3(Ce,Sr) crystal exhibited substantial non-linearity in the relative light yield response to X-rays, with the non-linearity exceeding 10%. As the energy decreased, the slope of the curve increased, reaching a non-linearity of 36% at 8 keV. When the X-ray energy exceeded 40 keV, the non-linear curve approached the ideal state and the slope became milder, indicating that the fluctuation in the number of photons generated per unit energy absorbed by the LaBr_3(Ce,Sr) crystal was small within the energy range of 40–120 keV. The LaBr_3(Ce,Sr) crystal exhibited absorption edges at 13–15 keV and 38–40 keV, and a slight reduction in the relative light yield was observed within these two energy intervals.The NLSD values for testing the LaBr_3(Ce,Sr) crystal with X-rays and Compton electrons were 0.17 and 0.03, respectively. The light output of the LaBr_3(Ce,Sr) crystal exhibited greater non-linearity in response to X-rays than to Compton electrons. 
This can be attributed to the different mechanisms by which these particles interact with atoms in matter. For X/γ-rays ranging from a few keV to several hundred keV, there are two possible interaction processes with the crystal: (1) a direct photoelectric cascade sequence or (2) a Compton scattering followed by photoelectric cascade sequence. These processes generate several primary electrons (e.g., Compton electrons and primary photoelectrons) and multiple secondary electrons (e.g., Auger electrons and secondary photoelectrons), with the final light emission being the sum of the contributions from secondary electrons with different energies. Notably, these electrons are products of the interaction between the incident photons and matter, and their energies cannot exceed those of the incident particles. Therefore, the light output induced by the photons in the LaBr_3(Ce,Sr) crystal is always lower than that caused by Compton electrons with equivalent energies.We conducted detailed testing of the LaBr_3(Ce) crystal using the same experimental procedures and data processing methods. Figure <ref> illustrates the non-linear light yield response of the LaBr_3(Ce) crystal to Compton electrons and gamma rays across the energy range of 3–1000 keV. The non-linearity curves tend to be flat, and the results are similar for Compton electrons and gamma rays when the energy is above 200 keV, but substantial differences are observed below 200 keV. As the energy decreased, the LaBr_3(Ce) crystal exhibited a lower response to the full-energy peak of gamma-rays than to Compton electrons of the same energy. This finding is consistent with the test results for the LaBr_3(Ce,Sr) crystal, indicating that the manner in which the particles interact with matter directly affects the light output of the crystal. For gamma rays in the energy range of several hundred kiloelectronvolts, Compton scattering is most likely the initial interaction, and most gamma rays require multiple interactions for full absorption. The high-energy primary and secondary electrons resulting from these interactions exhibited good linearity in their response, reflecting the excellent linearity of the response to high-energy gamma rays. Figure <ref> shows the non-linearity curve of the LaBr_3(Ce) crystal to X-rays in the energy range of 8–120 keV. Compared to the LaBr_3(Ce,Sr) crystal, this response curve deviated more markedly from the ideal state, and almost all the measured energy points exhibited scintillation responses below 90%. The light output sharply decreased near the K-shell binding energies (13–15 keV and 38–40 keV) of Br and La, leading to a greater non-linearity of the LaBr_3(Ce) crystal response curve to X-rays. Data points below 28 keV exhibited non-linearity greater than 20%, and the light output at 8 keV was only 58% of the ideal state. As X-ray energy decreased, induced secondary electron energies in the crystal decreased, thereby resulting in more significant light "defects". The NLSD values for testing the LaBr_3(Ce) crystal with X-rays and Compton electrons were 0.22 and 0.11, respectively. LaBr_3(Ce) crystal exhibited greater non-linearity to X-rays and Compton electrons than LaBr_3(Ce,Sr) crystal, particularly at energies below 100 keV. This may be attributed to the doping process, indicating that doping with Sr^2+ ions can improve the non-linearity of the LaBr_3 crystals. 
To understand the differences in non-linearity among the different crystal types better, the NaI(Tl) crystal was chosen as the third test subject in this study (Fig. <ref>). Unlike the two LaBr_3 crystals, the NaI(Tl) crystal exhibited a pronounced "excess" response to Compton electrons in the energy range of 8–80 keV, with a non-linearity exceeding 4%. At electron energies lower than 6 keV, the crystal displayed slight "defects" in light output, while above 80 keV, the curve tended to flatten, indicating a good linear response of this sodium iodide compound to high-energy electrons.Figure <ref> also shows the non-linearity of the light yield of the NaI (Tl) crystal to X-rays in the energy range 8–120 keV. Compared to the response to Compton electrons, the X-ray test results exhibited a similar trend, with NLSD values of 0.06 for both. However, there were differences in the curve slopes. Direct photoelectric interactions with matter are most likely to occur for photons in the tens-of-keV range. Assuming this photoelectric absorption occurs with iodine K-shell electrons (with a probability of 83% when the photon energy is greater than 33.17 keV), the resulting photoelectrons have energy falling within the range with substantial "excess" light output. The total light emission induced by all the secondary electrons generated from the photons exceeded that caused by Compton electrons with equivalent energies. Therefore, when the energy was within the range of 40–70 keV, NaI(Tl) crystal exhibited a higher relative light output to X-rays, producing a greater number of photons per unit X-ray-deposited energy compared to the case of Compton electron incidence.The response of the NaI (Tl) crystal to X-rays was similar to that of Compton electrons at approximately 33 keV. This is related to the binding energy (33.17 keV) of the iodine K-shell electrons, as photons with energies lower than this energy cannot excite K-shell electrons from the iodine atoms. Almost all the photon energy was transferred to electrons, and only a small fraction of low-energy photons interacted with an iodine L-shell electron (with a binding energy of 5.19 keV) to produce lower-energy X-rays through the photoelectric effect.Within the measured X-ray energy range, the NaI(Tl) crystal exhibited varying degrees of "excess" light output, which also can be explained by the photoelectric effect cascade sequence. In Fig. <ref>, the low-energy electron response showed an "excess" and reached its maximum value at  14 keV. Therefore, when photons undergo a series of interactions to produce multiple low-energy secondary electrons, a "burst" phenomenon occurs in the light output. This also explains why the photon response reached a maximum value at approximately 30 keV instead of 14 keV. As the incident photon energy increased, the light output gradually decreased but remained above 100%. This is because of the more complex distribution of secondary electron energies, resulting in a large number of secondary electrons with energies lower than 6 keV. The electron response below 6 keV exhibited a "deficient" luminous response, which formed a so-called "compensation" effect with the "excess" phenomenon observed in the electron response in the tens of keV range. § CONCLUSION We employed the WACC technique and HXCF/radioactive sources to compare the energy responses of domestically produced LaBr_3(Ce), LaBr_3(Ce, Sr), and NaI(Tl) crystals to Compton electrons and X/γ-rays. 
The NLSD values obtained through X-ray testing for LaBr_3(Ce), LaBr_3(Ce, Sr), and NaI(Tl) crystals were 0.22, 0.17, and 0.06, respectively. In contrast, Compton electron testing resulted in NLSD values of 0.11, 0.03, and 0.06 for the same crystals. The non-linear curves of these domestic crystals exhibited different slopes (Fig. <ref>, Fig. <ref> and Fig. <ref>), indicating varying degrees of non-linearity at low energies. Based on the experimental results, the non-linearity of the three crystals to X/γ-rays exceeds that of Compton electrons, which can be attributed to the distinct interaction mechanisms between the incident particles and the material. The NLSD values for LaBr_3(Ce) were 1.29 times higher for X-rays and 3.67 times higher for Compton electrons compared to LaBr_3(Ce, Sr), indicating that the LaBr_3(Ce, Sr) crystal exhibited better linearity and suggesting that doping with Sr^2+ ions could improve non-linearity. However, the absolute light yield of the LaBr_3(Ce, Sr) crystal was slightly lower than that of LaBr_3(Ce) (Table <ref>), potentially owing to the need for further optimization of the growth process and doping ratio by domestic manufacturers. The energy resolution of our LaBr_3(Ce, Sr) crystal was inferior to that reported by its foreign counterparts<cit.>. This discrepancy may arise from inherent performance variations among different crystals, differences in measurement methods when coupled with PMT, or distinctions in growth processes and raw materials between Chinese and Saint-Gobain crystals.NaI(Tl) crystal exhibited "excess" light output of up to 9.2% when tested with X-rays and 15.5% when tested with Compton electrons. This "excess" light output positioned NaI(Tl) crystal as a distinct advantage for detecting low-energy X/γ-rays. The calibration and in-orbit performance of GECAM-C have validated that the NaI(Tl) crystals exceeded expectations<cit.>. In the mission of gamma-ray burst detection, energy resolution is not the primary concern. While NaI(Tl) crystals may not match the energy resolution and absolute light yield of LaBr_3 crystals, test results have demonstrated their satisfactory performance in the energy range of 10 keV to 1000 keV. Furthermore, NaI(Tl) crystals can be manufactured in larger sizes and are cost-effective. Consequently, GECAM-D utilized NaI(Tl) crystals as the sensitive detector materials.We conducted a study on the light yield and non-linearity of the three crystals produced by the Beijing Glass Research Institute. An important insight from this study is that different calibration standards are required for the detection of gamma-rays and electrons. While the current GECAM satellite's GRDs lack electron-gamma discrimination capabilities, the non-linearity results of Compton electrons may find application in future corrections for electron detection.§ ACKNOWLEDGMENTS This research was supported by the National Key Research and Development Program (Grant Nos. 2022YFB3503600 and 2021YFA0718500) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA15360102), and National Natural Science Foundation of China (Grant Nos. 12273042 and 12075258).
http://arxiv.org/abs/2312.16658v1
{ "authors": [ "Pei-Yi Feng", "Xi-Lei Sun", "Zheng-Hua An", "Yong Deng", "Cheng-Er Wang", "Huang Jiang", "Jun-Jie Li", "Da-Li Zhang", "Xin-Qiao Li", "Shao-Lin Xiong", "Chao Zheng", "Ke Gong", "Sheng Yang", "Xiao-Jing Liu", "Min Gao", "Xiang-Yang Wen", "Ya-Qing Liu", "Yan-Bing Xu", "Xiao-Yun Zhao", "Jia-Cong Liu", "Fan Zhang", "Hong Lu" ], "categories": [ "physics.ins-det", "astro-ph.IM", "hep-ex", "nucl-ex" ], "primary_category": "physics.ins-det", "published": "20231227180658", "title": "The Energy Response of LaBr3(Ce), LaBr3(Ce,Sr) and NaI(Tl) Crystals for GECAM" }
Does PML exponentially absorb outgoing waves scattering from a periodic surface? Wangtao Lu^1, Kuanrong Shen^1 and Ruming Zhang^2================================================================================ The rapid evolution of large language models (LLMs) necessitates effective benchmarks for evaluating their role knowledge, which is essential for establishing connections with the real world and providing more immersive interactions. This paper introduces RoleEval, a bilingual benchmark designed to assess the memorization, utilization, and reasoning capabilities of role knowledge. RoleEval comprises RoleEval-Global (including internationally recognized characters) and RoleEval-Chinese (including characters popular in China), with 6,000 Chinese-English parallel multiple-choice questions focusing on 300 influential people and fictional characters drawn from a variety of domains including celebrities, anime, comics, movies, TV series, games, and fiction. These questions cover basic knowledge and multi-hop reasoning abilities, aiming to systematically probe various aspects such as personal information, relationships, abilities, and experiences of the characters. To maintain high standards, we perform a hybrid quality check process combining automatic and human verification, ensuring that the questions are diverse, challenging, and discriminative.Our extensive evaluations of RoleEval across various open-source and proprietary large language models, under both the zero- and few-shot settings, reveal insightful findings. Notably, while GPT-4 outperforms other models on RoleEval-Global, Chinese LLMs excel on RoleEval-Chinese, highlighting significant knowledge distribution differences. We expect that RoleEval will highlight the significance of assessing role knowledge for foundation models across various languages and cultural settings.* Corresponding author.[Our dataset is available at <https://github.com/Magnetic2014/RoleEval>.] § INTRODUCTION Recent years have witnessed the huge success of large language models (LLMs), and agents based on these models present immense potential for reshaping our engagement with machines with their expansive world knowledge and remarkable predictive capabilities <cit.>. The cornerstone of this transformation lies in the development of LLM agents with a keen perception of the real world, offering users a more immersive experience than ever before and laying a solid foundation for the emergence of applications such as Character AI[<https://beta.character.ai>], AI Dungeon[<https://aidungeon.com>], and SillyTavern[<https://github.com/SillyTavern/SillyTavern>]. This necessitates that the role-playing capabilities of these models establish connections with real-world people or characters created by them.Understanding the nuances and contextual backstories of characters, whether real or fictional, enables LLMs to engage in richer and more relevant dialogues. This facet is crucial for applications ranging from personalized conversations to creative content generation. For example, in creative domains such as scriptwriting and game design, LLMs' ability to accurately reference and emulate both real-life personalities and fictional characters can serve as a valuable asset, sparking creativity and providing novel perspectives. This role-playing ability extends beyond mere replication of character traits, encompassing a deeper understanding and interaction that mirror human-like comprehension and empathy. 
Just imagine the attractiveness of an LLM-based agent that is well-versed in “Harry Potter” or “Game of Thrones” characters. Such a model can significantly enhance engagement in entertainment, education, and even creative writing. This becomes particularly relevant as users increasingly seek interactions that resonate with their interests and cultural backgrounds. Thus, considering connections between LLMs and our daily lives, the most direct and comprehensive way to evaluate the role-playing ability of large models is to examine their ability to behave like real-world people and the characters they create, and the prerequisite for achieving this is to have enough relevant role knowledge.From another perspective, a robust LLM agent is fundamentally built upon a well-pretrained foundation model. Recent research demonstrates that the majority of a model's knowledge is acquired during the pretraining phase <cit.>. Thus, a well-pretrained foundation model, encompassing a broader spectrum of role knowledge, is pivotal in underpinning the role-playing capability of an agent. In addition, even for entirely new characters without any existing footprint in our lives, acquiring knowledge about real-world personas is crucial. This knowledge helps ensure that the attributes of a character, whether real or imaginary, are coherent and interconnected. Similar to how an actor needs a deep grasp of real-world factual knowledge to effectively embody various fictional personalities, a model that learns from real-world characters can significantly enhance the authenticity and self-consistency in the portrayal of any role.However, there is a shortage of systematic evaluation of role knowledge for these pretrained foundation models. Traditional persona-based evaluation benchmarks, such as PersonaChat <cit.> and PersonalDialog <cit.>, often rely on artificially constructed personas or occupations abstracted from a group of people, which lack the complexity and real-world connection of genuine personas, thus hard to assess the models' capability to handle intricate character knowledge. Recent role-playing benchmarks like RoleBench <cit.> evaluate consistency in language style and knowledge extracted from scripts, yet the knowledge extracted from scripts is fragmented and lacks a systematic framework, making it challenging to accurately and comprehensively assess the breadth of knowledge captured by foundation models. In light of these considerations, this paper presents RoleEval (Role Knowledge Evaluation Benchmark), a bilingual test suite for structured evaluation of role knowledge in LLMs with renowned real-world and fictional characters, and the ability to reason over the knowledge. Specifically, we first carefully collect 200 characters who are influential outside their home countries from five categories: 1) celebrities, 2) anime and comics, 3) movies and TV series, 4) games, and 5) fiction to build purely human-written questions in Chinese, then translate them into English with GPT-4 and rigorous human revision to construct the bilingual parallel RoleEval-Global benchmark. Each domain has an equal number of characters, and each character has 20 questions, consisting of 17 questions about basic knowledge and 3 questions that inspect multi-hop reasoning based on basic knowledge. 
These questions aim to systematically examine the character's personal information, relationships, abilities, and experiences.In addition to these 200 characters, we also collect an additional 100 characters from the above five categories who are also influential in China and use the same question strategy to construct the RoleEval-Chinese benchmark, which aims to specialize in evaluating Chinese LLMs. Finally, we obtain 6,000 Chinese-English parallel questions (4,000/2,000 for RoleEval-Global/Chinese) for RoleEval. To further boost annotation efficiency and ensure the quality of RoleEval, we design both automatic and human quality check processes that encourage annotators to propose questions with appropriate difficulty and discrimination. To the best of our knowledge, RoleEval is the first benchmark to systematically evaluate the role knowledge for foundation models.This benchmark serves a dual purpose: * It benchmarks the current state of LLMs in understanding, utilizing, and reasoning over knowledge of a wide range of both public figures and fictional characters, which paves the way for enhancing AI's role as a companion and creative ally in our digital interactions. By delving into the depths of how LLMs perceive and portray an extensive array of characters, we aim to uncover new avenues for their application, making them not only repositories of information but also active, context-aware participants in our digital narratives.* In an era where misinformation can have far-reaching consequences, verifying the factual correctness of LLM responses is paramount. This benchmark is also useful for evaluating the factual correctness and hallucination of LLMs, ensuring the fidelity of the information they disseminate. In a nutshell, our contributions are as follows: * We propose RoleEval, a bilingual role evaluation benchmark with 6,000 Chinese-English parallel questions covering 300 diverse characters, to systematically examine the ability to memorize, understand, and reason over role knowledge for foundation models, which is an important prerequisite for successful role-playing.* To ensure quality and boost efficiency, we propose a hybrid quality check process with the combination of both automatic and human verification to ensure appropriate difficulty and discrimination ability control for questions.* We conducted extensive evaluations using RoleEval on a variety of large language models under both zero-shot and few-shot settings, encompassing models with varying parameter sizes and those designed for Chinese and English, as well as both open-source and closed-source proprietary models. § RELATED WORK §.§ Role-Playing Language Agents and Evaluation BenchmarksRecent efforts in the field of Natural Language Processing, especially LLMs, have focused on exploring the ability to act as role-playing agents <cit.>, which can be even traced back to ELIZA <cit.>, the first automated dialogue agent that conducts psychological consulting. However, their evaluation is predominantly conducted on models after supervised fine-tuning. This approach does not incorporate direct feedback from pretrained foundational models, which can offer critical insights into their intrinsic role-playing capabilities and limitations. On the other hand, existing evaluations largely rely on outputs from the ChatGPT <cit.> or humans. However, ChatGPT is not an infallible evaluator, and human evaluation lacks reproducibility. This leads to a lack of objective, accurate, and systematic knowledge assessments. 
In this case, our benchmark can serve as a robust standard for evaluating current role-playing agents, assessing whether the models possess sufficient role knowledge. Previous research in role-playing evaluation has largely focused on abstract personas <cit.> or specific professions like psychology consultants, chemists, or software engineers <cit.>. However, these methods often oversimplify real-world personas, failing to capture the complex nature of real-world human personalities and behaviors in role-playing scenarios. Despite the value of these approaches, they fall short in exploring individual character knowledge, which is critical for authentic role-playing experiences. Therefore, there is a clear need for more sophisticated and realistic persona models in role-playing research. There are also some closely related works for character-based evaluation, intending to inspect the ability to mimic real-world characters <cit.>, However, they do not have a detailed and systematic framework to evaluate role knowledge, leading to fragmented and incomplete knowledge assessment. Moreover, their evaluation relied on humans or other powerful large language models such as ChatGPT. However, as we stated above, this kind of evaluation suffers from the reproducibility and accuracy of judges. In contrast, RoleEval examines the knowledge required for role play in a detailed, objective and systematic approach for pretrained foundation models.§.§ Benchmarks for Evaluating General World Knowledge and FactsRecent advancements in LLMs have led to the development of numerous benchmarks aimed at evaluating the factual knowledge grasped by models, which is crucial for suppressing hallucination and beneficial for developing aligned LLMs <cit.>. These benchmarks play a crucial role in assessing the capabilities and limitations of LLMs in understanding and processing real-world information. However, most of these benchmarks tend to focus on the knowledge of various subjects, without enough coverage of the people and characters we are familiar with in our daily life.Our benchmark introduces a unique, role-centric approach to factual knowledge evaluation for LLMs, which can be considered as a complement to existing factual evaluation benchmarks. By structuring factual information around specific people or characters, it provides a more focused and organized framework for assessment. This role-centric method enriches the current landscape of AI evaluation, and offers another perspective in measuring AI's factual accuracy and its ability to reason and generalize based on this knowledge.§ ROLE KNOWLEDGE EVALUATION BENCHMARKWe develop RoleEval to assess how well role knowledge is memorized, utilized, and reasoned with. To do this, we gather a diverse range of characters and formulate questions aimed at systematically evaluating various fundamental aspects of role knowledge. Additionally, we create different forms of questions that demand a thorough understanding, adaptable use, and multi-level reasoning of role knowledge. §.§ Character CollectionRoleEval derives its character collection from a variety of comprehensive and specialized online encyclopedias: Wikipedia[<https://www.wikipedia.org>], Baidu Baike[<https://baike.baidu.com>], Fandom[<https://www.fandom.com>], and Moegirlpedia[<https://zh.moegirl.org.cn>]. Wikipedia (Multilingual) and Baidu Baike (Chinese) serve as general encyclopedias, providing a broad range of information. 
In contrast, Fandom (Multilingual) and Moegirlpedia (Chinese) specialize in anime, comics, and games, offering richer and more detailed information in these specific categories. For RoleEval-Global, we collect 200 diverse characters who are influential outside their home countries. These characters are from five categories: 1) celebrities, 2) anime and comics, 3) movie and TV series, 4) games, and 5) fiction. To further achieve a more balanced evaluation, we select the same number of characters for each category, and each chosen fictional character comes from a different work. This diversity ensures a broad spectrum of role knowledge and scenarios. For RoleEval-Chinese, in addition to these 200 characters, we use the same strategy to collect 100 influential characters mainly in China. The details of the collected 300 characters are listed in Appendix <ref>.Before annotation, we rigorously check the encyclopedia information of each character to ensure it is comprehensive and rich. Generally, characters with richer information and more contributors are selected, as this often correlates with the reliable and objective information provided. This process guarantees that the dataset is not only diverse but also accurate and reflective of the character's true nature and background. In addition, we also consider both the popularity of these characters on various social media platforms (indicated by metrics such as follower counts and engagement in discussions), as well as their presence on different search engines (reflected by the volume of search results). During annotation, we refer to as many online encyclopedias as possible for each character and only select the knowledge points without any conflict in these references. §.§ Question DesignOur focus is primarily on factual knowledge and aims to comprehensively assess the models' understanding and interpretation of varied character roles and contexts. In addition to basic knowledge that is directly stated in encyclopedias, we also design multi-hop questions to examine the ability to dynamically combine and reason with existing knowledge.Specifically, after collecting the 300 characters from online encyclopedias, we build RoleEval in the form of multiple-choice questions, with four options for each question, which is a common practice adopted by many existing benchmarks such as MMLU <cit.> and C-Eval <cit.>. Each character is associated with 17 and 3 unique questions for basic knowledge and multi-hop reasoning respectively, thus culminating in a total of 6,000 questions. The statistics of RoleEval are shown in Table <ref>. We also plot visualization of the questions' length in Figure <ref> for more details.In RoleEval, we consider three types of fundamental knowledge required to depict a character: * Inherent Attributes: This type of knowledge includes the fundamental characteristics intrinsic to the character, such as gender, race, personality, skills, and abilities. These attributes are typically presented in a tabular format, or directly described in online encyclopedias.* Social Relationships: This type of knowledge pertains to the relationships of the character with other individuals, which could include parents, disciples, and other significant personal or professional relationships.* Experiences: This type of knowledge details the experiences or events that the character has undergone. For real-world individuals, this usually includes significant life events or experiences in which they were direct participants. 
For fictional characters, this involves extracting key plot points or story arcs described in online encyclopedias. To enhance the comprehensive assessment and diversify the scope of multiple-choice questions, our approach extends beyond merely querying knowledge from online encyclopedias. As a supplement to direct questions, we incorporate two additional question formats, which are designed to be combined with the previously identified four types of knowledge for a more dynamic and comprehensive evaluation:* Negation Type: These questions, usually formatted as “Which of the following is NOT...”, require a comprehensive understanding of a specific knowledge point. For instance, “在《火影忍者》中,以下哪个不是漩涡鸣人的忍术? A. 万象天引B. 影分身之术 C. 色诱术 D. 螺旋丸” (“In Naruto, Which of the following is not a ninjutsu of Naruto Uzumaki? A. Universal Pull B. Shadow Clone Jutsu C. Sexy Jutsu D. Rasengan”). The correct answer is A because it is the only ninjutsu that Naruto Uzumaki cannot use among the four given options. * Non-occurrence Scenario Type: These questions test for non-occurrences, with correct answers often framed in the negative (e.g., “Did not happen...”). This format examines whether the model generates illusions or false assumptions. An example is, “在《火影忍者》中,漩涡鸣人是什么时候成为中忍的? A. 第四次忍界大战时B. 没有成为中忍C. 佩恩入侵时 D. 第七班完成任务后” (“In Naruto, When did Naruto Uzumaki become a Chūnin? A. During the Fourth Great Ninja War B. Never became a Chūnin C. During Pain's invasion D. After Team 7's mission completion.”). Actually, Naruto never became a Chūnin since he never passed his Chūnin selection exams, even when he became the Seventh Hokage, making the correct choice B. We further add three reasoning questions for each character that need multi-hop reasoning over these types of fundamental knowledge. According to the knowledge required in the intermediate reasoning steps, we classify these reasoning questions into three types, and assign one question for each available (role, reasoning type) pair:* Character Relationship Reasoning: When answering this type of question, models need to reason about the relationship between characters. For example, “海莉·比伯的丈夫在《黑衣人3》中客串的什么角色? A. 外星人B. 警官C. 医生D. 教师” (“What role did Hailey Bieber's husband make a cameo in Men in Black 3? A. Alien B. Police Officer C. Doctor D. Teacher”). Models need to first reason out that Hailey Bieber's husband is Justin Bieber, and then find out that Justin Bieber made a cameo appearance as an alien in Men in Black 3, so the correct answer is A.* Event Participant Reasoning: To solve this type of question, models need to reason out the participants of an event, and then combine it with other information in the question to locate the answer. For example, “下面哪个人既征服了波斯,又攻下了埃及?A. 亚历山大大帝 B. 腓力二世 C. 冈比西斯二世D. 阿明塔斯三世” (“Which of the following people conquered both Persia and Egypt? A. Alexander the Great B. Philip II C. Cambyses II D. Amyntas III”). Models need to know that although more than one person conquered Persia or Egypt (e.g., Cambyses II in option C used to conquer Egypt in 525 BC), Alexander the Great is the only one who conquered both Persia and Egypt among the four options.* Timeline Reasoning: Solving this type of question requires models to understand the sequence of events, infer the time of occurrence of the event in question stems and options, and then select the correct option based on the timeline. For example, “在《巴黎圣母院》中,哪件事在卡西莫多受鞭笞之刑前发生? A. 埃斯梅拉达被判绞刑B. 卡西莫多看见埃斯梅拉达被绞死C. 埃斯梅拉达给卡西莫多喝水D. 
卡西莫多把埃斯梅拉达从绞刑架上救下” (“In The Hunchback of Notre Dame, which event occurs before Quasimodo is whipped? A. Esmeralda was sentenced to hanging B. Quasimodo saw Esmeralda being hanged C. Esmeralda gave Quasimodo water D. Quasimodo saved Esmeralda from the gallows”). Among these options, only option C happened before the whipping, while A, B, and D all happened after it. These reasoning questions go beyond simply memorizing one-hop knowledge in text, intending to connect multiple related characters, events, and storylines. They require the models' compositionality to intrinsically and dynamically combine multiple knowledge points and answer the given questions, thus making RoleEval a challenging benchmark for foundation models. §.§ Quality Check To ensure quality and boost efficiency during benchmark construction, we propose a hybrid quality check process combining automatic and human verification. In the automatic checking stage, we use the GPT-4 and GPT-3.5 APIs[Unless otherwise specified, the terms “GPT-4” and “GPT-3.5” shall henceforth be used to represent and respectively.] to control the difficulty and discrimination ability. Assuming the accuracies of the GPT-4 and GPT-3.5 APIs for character c are x_c and y_c, respectively, we apply the criteria below as preliminary and instant feedback for human annotators: x_l ≤ x_c ≤ x_u, y_c ≤ y_u, x_c - y_c ≥ d, where x_l and x_u indicate the lower and upper thresholds of x_c, and y_u indicates the upper threshold of y_c; these three hyperparameters control the overall difficulty, making sure that the questions are neither too easy nor too hard for foundation models. The parameter d is the lower threshold on the difference between x_c and y_c. Since GPT-4 is a significantly more powerful model than GPT-3.5 in practice, we use this hyperparameter to ensure the discrimination ability of this benchmark for various models. In our preliminary study, we find that x_l = 0.3, x_u = 0.9, y_u = 0.8, and d = 0.15 achieve appropriate difficulty and discrimination ability control for the questions. Then, we manually check the questions and options to ensure the quality of this benchmark. To easily check the factual correctness and prevent questions from overemphasizing peripheral aspects, we ask annotators to also provide links to the referenced text in the online encyclopedias along with each question to achieve effective oversight. §.§ Translation To support the evaluation in different languages, we translate the Chinese questions in RoleEval into English with GPT-4[We use RoleEval (en) and RoleEval (zh) to indicate RoleEval in English and Chinese respectively.]. We find that compared to traditional translation engines such as Google Translate, GPT-4 is much more flexible and customizable with different prompts, making it better at generating character-related translations given the related information as background while maintaining decent translation quality. Considering the potential ambiguity of some entities related to some characters, we further ask human translators to carefully check and revise the entity translations according to the original Chinese questions and options.
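To make the automatic stage of the Quality Check subsection above concrete, the filter can be expressed in a few lines of code. The thresholds are the ones reported in the text (x_l = 0.3, x_u = 0.9, y_u = 0.8, d = 0.15); the function and variable names are illustrative and not taken from the authors' tooling.

```python
# Hedged sketch of the automatic difficulty/discrimination filter described above.
# x_c, y_c: per-character accuracies of GPT-4 and GPT-3.5 on the candidate questions.
def passes_quality_check(x_c: float, y_c: float,
                         x_l: float = 0.3, x_u: float = 0.9,
                         y_u: float = 0.8, d: float = 0.15) -> bool:
    difficulty_ok = x_l <= x_c <= x_u      # not too hard, not too easy for GPT-4
    weaker_model_ok = y_c <= y_u           # GPT-3.5 should not find the set too easy
    discriminative = (x_c - y_c) >= d      # GPT-4 must clearly outperform GPT-3.5
    return difficulty_ok and weaker_model_ok and discriminative

# Example: GPT-4 at 65% and GPT-3.5 at 40% accuracy passes; 95% vs. 40% does not.
assert passes_quality_check(0.65, 0.40)
assert not passes_quality_check(0.95, 0.40)
```

In the paper's workflow these criteria serve as instant feedback to annotators, before the manual factual check described above.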
Since RoleEval mainly focuses on factual knowledge rather than text style, it is acceptable to use machine translation and human post-editing to minimize information loss.§ EXPERIMENTSWe evaluated various English and Chinese LLMs on RoleEval, aiming to analyze the memorizing, utilizing, and reasoning capabilities of role knowledge for these LLMs.§.§ SetupWe implemented an evaluation pipeline for RoleEval with lm-evaluation-harness framework <cit.> in both zero-shot and five-shot. Considering RoleEval and MMLU <cit.> share the same multiple-choice format, we adopted a similar setup with MMLU for RoleEval. Specifically, for open-source models, we calculated the probability of subsequent tokens following the initial prompt. Among the options “A”, “B”, “C”, and “D”, we chose the one with the highest probability as the preferred choice of the model. For closed-source models such as GPT-4 <cit.>, we followed <cit.> and <cit.> to use regular expressions to extract the preferred choice of the model. All experiments in this paper were conducted using two NVIDIA A800 80G GPUs. §.§ PromptFigure <ref> illustrates the prompt we use for evaluation. For each question, we added “以下是关于[category]的单项选择题,请选出其中的正确答案。” (“The following multiple-choice questions are about [category]. Please choose the correct answer.”) before the question stem, and “答案:” (“Answer: ”) after four options, where “[category]” was chosen from 名人 (“celebrities”), 动漫角色 (“anime and comics”), 影视角色 (“movies and TV series”), 游戏角色 (“games”) and 小说人物 (“fiction”). For the five-shot setting, we added in-context examples before the actual question to answer. These examples shared the same format as the actual question, except the ground truth option was provided for each in-context example. §.§ Models We selected a wide range of publicly available LLMs for evaluation. Due to the limit of computational resources, we only evaluated models with between 1B and 80B parameters since they can produce meaningful results and be loaded in two 80GB Nvidia A800 GPUs with bf16 format. Chinese LLMs For Chinese questions in RoleEval, we evaluated popular Chinese open-source LLMs, such as ChatGLM <cit.>, Baichuan <cit.>, Qwen <cit.>, Yi[<https://github.com/01-ai/Yi>], Skywork <cit.>, Chinese-LLaMA-2 <cit.>, along with close-sourced LLMs like Minimax[<https://api.minimax.chat/examination-center/text-experience-center>]. English LLMs For English questions in RoleEval, we evaluated open-source LLMs including BLOOM <cit.>, Pythia <cit.>, LLaMA <cit.>, LLaMA2 <cit.>, Falcon[Falcon models are available at <https://huggingface.co/tiiuae>], Mistral-7B <cit.>, and two closed-source LLMs: ChatGPT <cit.> and GPT-4 <cit.>.Since Baichuan, Qwen, Yi, Chinese-LLaMA-2, BLOOM, Pythia, LLaMA, LLaMA2, and Falcon had multiple sizes of LLMs that satisfied our restriction, we evaluated various sizes of LLMs for each LLM family as shown in Table <ref>. §.§ Results and Analysis Overall Performance Table <ref> and <ref> show the few-shot experimental results. Since the zero-shot results are generally lower than the few-shot experimental results, we provide them in Appendix <ref>. We find that GPT-4 maintains a lead in RoleEval-Global, with its latest version () outperforming the earlier (). We believe this superiority is attributed to the more recent knowledge cutoff in , enhancing its performance in domains with rapidly evolving information. Nevertheless, the overall accuracy still indicates large room for improvement even for the state-of-the-art LLMs. 
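The likelihood-based option scoring described for open-source models in the Setup subsection above can be sketched as follows: the prompt (ending with “Answer:”) is fed to a causal language model, and the option letter whose continuation receives the highest probability is taken as the model's choice. The snippet below is an illustrative reimplementation rather than the authors' lm-evaluation-harness configuration, and the model name is only a small stand-in for testing.

```python
# Minimal sketch of likelihood-based multiple-choice scoring (zero-shot case).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def choose_option(prompt, model, tokenizer, options=("A", "B", "C", "D")):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                 # (1, seq_len, vocab_size)
    next_token_logprobs = logits[0, -1].log_softmax(dim=-1)
    scores = {}
    for opt in options:
        # Score the first sub-token of " A", " B", " C", " D" as the continuation.
        opt_id = tokenizer(" " + opt, add_special_tokens=False).input_ids[0]
        scores[opt] = next_token_logprobs[opt_id].item()
    return max(scores, key=scores.get)

if __name__ == "__main__":
    name = "gpt2"   # placeholder; the paper evaluates 1B-80B parameter LLMs
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name)
    demo = ("The following multiple-choice questions are about celebrities. "
            "Please choose the correct answer.\nQuestion: ...\n"
            "A. ...\nB. ...\nC. ...\nD. ...\nAnswer:")
    print(choose_option(demo, lm, tok))
```

For closed-source models, as noted in the Setup, the predicted letter is instead extracted from the generated text with regular expressions.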
In the RoleEval-Chinese (zh) dataset, certain Chinese models, such as Qwen-72B and Yi-34B, showed superior performance to GPT-4. This is likely due to their higher proportion of Chinese training corpus and abundant high-quality discussions on these models in Chinese online platforms. Notably, GPT-4 retains its edge in anime, comics, and games, where many characters are also popular in the English-speaking world. These results highlight the importance of choosing balanced training data and evaluating role knowledge across various languages and cultural settings. Difference in Dataset Languages We observe a significant improvement in GPT-4 and 3.5's performance on RoleEval (en) dataset compared to RoleEval (zh), even though these two parts of the dataset have identical semantic content. This trend is also evident in predominantly English-language open-source models, especially for models with little Chinese training data, such as LLaMA, Mistral, and Falcon. This suggests even the most powerful LLMs still lack effective cross-lingual knowledge transfer, which means these models fail to build complete bi-directional mappings between entities in different languages. Conversely, Chinese models generally underperform in English datasets, highlighting a similar language-specific limitation, though the gap is narrower since Chinese LLMs still use a large amount of English pre-training data. Among all open-source models, Yi-34B achieves almost the same performance in both Chinese and English on RoleEval-Global, indicating its balanced training corpora for global influential characters in both languages. Comparative Analysis of Open-Source Models LLaMA2-70B emerged as the best open-source model primarily trained in English, closely matching GPT-3.5. While for Chinese LLMs, Qwen-72B and Yi-34B not only surpassed GPT-3.5 but also exceeded GPT-4 in the RoleEval-Chinese (zh) dataset. However, these Chinese models still show noticeable gaps compared to GPT-4 in other scenarios. Parameter Scaling Laws We also analyze the correlation between the accuracy of role knowledge and the size and the number of training tokens of LLMs on RoleEval-Global.[To obtain meaningful results, we only choose model families with more than three models in this experiment, and we require the amount of training data and training tokens to be the same for models with different parameters within the same model family.] As shown in Figure <ref>, accuracy generally improves with model size for LLaMA and Qwen, and the trend is consistent with previously established knowledge transfer patterns: For Chinese LLMs, the rate of improvement is greater for RoleEval (zh) datasets than for RoleEval (en). In contrast, LLMs primarily trained on English corpora show opposite trends. Notably, BLOOM and Pythia did not show performance improvements across various settings, we speculate that this is due to their relatively lower token training volume (366B for BLOOM and 300B for Pythia), while most models with great performance have already been trained on more than 1TB tokens. Token Scaling Laws To further explore the scaling law on the number of tokens, we conduct experiments on publicly available intermediate checkpoints of BLOOM-7B1, Pythia-6.9B, Baichuan-7B, and Skywork-13B. For BLOOM, Baichuan, and Skywork, we select all available intermediate checkpoints, resulting in 8, 11, and 6 checkpoints respectively. 
For Pythia, we select a checkpoint every 13,000 steps and obtain 12 intermediate checkpoints for evaluation. Results from Figure <ref> indicate that, while the performance of Baichuan and Skywork at their first checkpoints (trained with 220B and 500B tokens, respectively) is near random, similar to BLOOM and Pythia, their subsequent checkpoints show steady improvement after 500B tokens, which indicates the importance of a sufficient number of training tokens for a fixed model size. However, the bottleneck of cross-lingual knowledge transfer can still be observed with increasing training tokens, which means that simply increasing the number of parameters and adding training tokens may not be the best way to break down the barriers between languages. In future research, we intend to examine other LLMs with intermediate checkpoints trained on larger-scale datasets primarily in English, aiming to substantiate our hypotheses with more robust evidence. § CONCLUSION In this paper, we have presented RoleEval, a large-scale bilingual role evaluation benchmark, featuring 6,000 Chinese-English parallel questions across 300 diverse characters (200 for RoleEval-Global and 100 for RoleEval-Chinese) from five different domains. RoleEval is specifically designed to scrutinize foundation models' capabilities in memorizing, understanding, and reasoning over role knowledge. Our hybrid quality check process, blending automatic and human verification, guarantees the questions' difficulty and discrimination aptitude, setting a new standard in benchmark design. Extensive evaluations of RoleEval on various large language models, including both zero-shot and few-shot scenarios, highlight significant differences in knowledge distribution, as evidenced by GPT-4's superior performance on RoleEval-Global and the notable excellence of Chinese LLMs on RoleEval-Chinese. These findings not only demonstrate the disparities in language model proficiencies but also illuminate the path for future enhancements in bilingual and culture-specific LLMs. Through RoleEval, we aim to provide a robust framework for future advancements in language model evaluation, particularly in role-playing scenarios, thereby enriching the landscape of language understanding and reasoning benchmarks. § LIMITATIONS In evaluating the effectiveness of real-world role evaluation benchmarks, there are two existing limitations. Firstly, the aspect of timeliness is crucial; the knowledge regarding real-world characters may change over time, making the benchmark outdated or irrelevant. To address this, we plan to explore methods for the automatic updating of benchmarks, ensuring that they remain current and reflective of ongoing changes. Secondly, the current format of benchmarks often restricts questions to having only one correct answer. This approach fails to adequately test scenarios where multiple answers could be correct, thus limiting the benchmark's ability to evaluate complex decision-making skills. A potential solution could be to incorporate a more dynamic question format that allows for the identification and acceptance of multiple correct answers, thereby enriching the assessment process by acknowledging the multi-faceted nature of real-world problems. § ETHICS STATEMENT Our benchmark is designed to enhance the model's understanding of role knowledge, which is crucial for improving persona consistency and factual accuracy while reducing hallucination. To achieve this, we have selected encyclopedic content that has been edited by multiple individuals.
This approach helps to minimize factual errors and biases, particularly in comparison to other texts sourced from the internet. Furthermore, all data collected for this project originates from publicly available materials, ensuring no concerns regarding privacy infringement. Moreover, our goal is to promote a comprehensive and accurate understanding of role knowledge. Therefore, although most of our questions and options are positive, we have not entirely excluded potential negative aspects of the selected characters from our benchmark. Users should be aware of this when using this benchmark. Our commitment is to provide a comprehensive and balanced understanding, but users should remain critical and mindful of the context in which this information is used. § APPENDIX §.§ Zero-shot Experimental Results We list zero-shot results by five categories in Tables <ref> and <ref>. Overall, the zero-shot performance of most models is slightly lower than (or similar to) the five-shot results. We find that although the GPT-4 and GPT-3.5 APIs already have strong instruction-following abilities, providing five examples containing characters that are unrelated to the one we aim to assess can still improve both the utilization and the reasoning of role knowledge. We conjecture that for powerful LLMs like GPT-4 and GPT-3.5, these examples act as cues that activate role knowledge stored in the model. By presenting scenarios where specific roles are depicted, the model can more effectively access and utilize its internal role knowledge, even if these roles are not related to the character that the question actually asks about. §.§ Results by Knowledge and Reasoning We list results by knowledge and reasoning questions for RoleEval, as shown in Tables <ref> and <ref>. We observe a clear positive correlation between the breadth of knowledge and the correctness of reasoning. Among these LLMs, the GPT-4 model shows both extensive knowledge and robust reasoning capability. However, it is noteworthy that between specific versions of the GPT-4 API, one version demonstrates a more comprehensive knowledge base, while the other is better at reasoning. This delineation suggests that while there is a general trend of knowledge and reasoning abilities advancing in tandem, individual models may specialize or excel in one aspect over the other. Further, the experimental findings on LLMs trained for different languages reveal intriguing insights. Among English LLMs, Mistral-7B has shown remarkable reasoning capabilities, closely rivaling those of the LLaMA-65B model. This performance indicates a significant advancement in reasoning, as smaller models approach the upper echelons of reasoning skills previously dominated by more advanced models. In the realm of Chinese LLMs, Qwen-72B and Yi-34B also stand out with their exceptional reasoning abilities, positioning themselves between the GPT-3.5 and GPT-4 models. This not only underscores the progress in developing LLMs in different languages but also indicates the potential for LLMs to improve both knowledge and reasoning by considering a wider range of languages when constructing training corpora. §.§ Detailed Character List We list all the collected 300 characters in Tables <ref>, <ref>, <ref>, <ref>, <ref> and <ref>, with both the name and source of each character in Chinese and English.
http://arxiv.org/abs/2312.16132v1
{ "authors": [ "Tianhao Shen", "Sun Li", "Deyi Xiong" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231226174055", "title": "RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models" }
Photoemission of spin-polarized electrons from aligned grains and chiral symmetry breaking Thiem Hoang Received ...; accepted... ==========================================================================================empty We consider the problem of synchronizing a multi-agent system (MAS) composed of several identicallinear systems connected through a directed graph.To design a suitable controller, we construct conditions based onBilinear Matrix Inequalities (BMIs) that ensure state synchronization.Since these conditions are non-convex, we propose an iterative algorithm based on a suitable relaxation that allows us to formulate Linear Matrix Inequality (LMI) conditions.As a result, the algorithm yields a common static state-feedback matrix for the controller that satisfies general linear performance constraints.Our results are achieved under the mild assumption that the graph is time-invariant and connected.§ INTRODUCTION In the last decades, the study of networks, and in particular the distributed control of networked MAS, has attracted a lot of interest in systems and control, due to the broad range of applications in many different areas <cit.>, including: power systems, biological systems, sensors network, cooperative control of unmanned aerial vehicles, quality-fair delivery of media content, formation control of mobile robots, and synchronization of oscillators.In networked MAS, the general objective is to reach an agreement on a variable of interest. We focus our attention on the synchronization problem, where the goal is to reach a common trajectory for all agents. In the literature, we can find several studies on scalar agents, but recent works also address networks of agents with finite-dimensional linear input-output dynamics <cit.>.In the case of identical agents,a common static control law inducing state synchronizationcan be designed by exploiting the information exchange among the agents, which modifies the system dynamics. This exchange is modeled by a (directed or undirected) graph. The spectrum of the Laplacian matrix of this graph plays an important role in the evolution of the associated networked system <cit.>. Necessary and sufficient conditions ensuring synchronization have been given under several different forms depending on the context<cit.>.A set of necessary and sufficient conditions for identical SISO agents over arbitrary time-invariant graphs is summarized in <cit.>. Different approachesfor control design can be foundin the literature depending on the desired objective.Most of the results are based on the solution of an algebraic Riccati equation, under the assumption that the static control law has a given infinite gain margin structure <cit.>: the state-feedback matrix K has the form K=B^⊤ P, where B is the input matrix and P is the solution of an algebraic Riccati equation. 
However, imposing an infinite gain margin potentially limits the achievable performance. As shown in <cit.>, by choosing a small enough constant, a feedback law can be designed without knowing the network topology; in practice, this constant depends on the non-zero eigenvalue of the Laplacian matrix having the smallest real part. While <cit.> studies the output feedback case, we consider the dual case, which is also discussed in <cit.>. The design procedure in <cit.> allows achieving synchronization under bounded H_∞ disturbances, thanks to an observer-based dynamic controller, expressed in terms of suitable algebraic Riccati equations, which guarantees disturbance rejection properties. A different approach based on LMIs is presented in <cit.>, where synchronization conditions are imposed by relying on strong assumptions on the structure of the Lyapunov matrices, while the problem size is independent of the number of agents. In this work, we study the design of a static state-feedback control law ensuring MAS synchronization. The agents are modeled as identical LTI subsystems and their interconnections are described by time-invariant, directed, and connected graphs. We introduce a design strategy based on LMIs, similar to the one in <cit.>, but without imposing any assumption on the controller structure or constraints on the Lyapunov matrices, thus ensuring higher degrees of freedom in the design, and potentially improved optimized stabilizers. Through a relaxation of the conditions in <cit.>, we formulate an iterative LMI-based procedure to design a static state-feedback control law. Our LMI formulation allows us to easily embed additional linear constraints in order to reach a desired performance <cit.>. Notation. ℝ and ℂ denote the sets of real and complex numbers, respectively. We denote by j the imaginary unit. Given λ = a + jb ∈ ℂ, Re(λ) = a and Im(λ) = b are its real and imaginary parts, respectively; λ^* = a - jb is its complex conjugate. I_N is the identity matrix of size N, while 1_N ∈ ℝ^N denotes the N dimensional (column) vector with all 1 entries. For any matrix A, A^⊤ denotes the transpose of A. Given two matrices A and B, A ⊗ B indicates their Kronecker product. Given a complex matrix A ∈ ℂ^n×m, A^* denotes its conjugate transpose and He(A) := A + A^*. Matrix A ∈ ℂ^n×n is Hermitian if A = A^*, namely Re(A) is symmetric (Re(A) = Re(A)^⊤) and Im(A) is skew-symmetric (Im(A) = -Im(A)^⊤). We denote the Euclidean distance of a point x from a set 𝒜 as |x|_𝒜. § PROBLEM STATEMENT Consider N identical dynamical systems ẋ_i = A x_i + B u_i, i = 1, …, N, with state vector x_i ∈ ℝ^n, input vector u_i ∈ ℝ^m, state matrix A ∈ ℝ^n×n and input matrix B ∈ ℝ^n×m. Assume that the pair (A,B) is controllable. The directed graph 𝒢 with weight matrix 𝒲 ∈ ℝ^N×N captures the communication topology among the agents; its Laplacian matrix is L := diag(𝒲 1_N) - 𝒲. Denote by 0 = λ_0, λ_1, …, λ_ν the eigenvalues of L, ordered with non-decreasing real part (complex conjugate pairs and repeated eigenvalues are only counted once). The control input u_i affecting agent i is expressed as u_i = K ∑_j=1^N 𝒲_ij (x_j - x_i) = -K ∑_j=1^N L_ij x_j, where 𝒲_ij and L_ij are the entries of the weight and Laplacian matrices, respectively, and K ∈ ℝ^m×n is the state-feedback matrix. Each agent uses only relative information with respect to the others, as typically desired in applications. By defining the aggregate state vector x := [x_1^⊤ … x_N^⊤]^⊤ ∈ ℝ^nN and input vector u := [u_1^⊤ … u_N^⊤]^⊤ ∈ ℝ^mN, we can write the interconnection (<ref>) as ẋ = (I_N ⊗ A) x + (I_N ⊗ B) u, u = -(L ⊗ K) x.
The overall closed-loop expression is ẋ = ((I_N ⊗ A) - (L ⊗ BK)) x. Our goal is to synthesize a common static control law that enforces synchronization among systems (<ref>). To this aim, we introduce the synchronization set: 𝒜 := {x : x_i - x_j = 0, ∀ i,j ∈ {1, …, N}}. We recall the definition of “μ–synchronization” from <cit.>. The attractor 𝒜 in (<ref>) is μ–UGES (uniformly globally exponentially stable with rate μ>0) for system (<ref>) if there exists M>0 such that |x(t)|_𝒜 ≤ M e^-μ t |x(0)|_𝒜 for all t ≥ 0. Some of the necessary and sufficient conditions for μ–synchronization in <cit.> are here adapted to deal with a synthesis problem: matrix C in <cit.> is replaced by the state-feedback matrix K in the closed-loop dynamics (<ref>). Moreover, we can exploit parameter μ in iterative approaches for optimization-based selections of K. Consider the system in (<ref>) and the attractor 𝒜 in (<ref>). The synchronization set 𝒜 is μ–UGES if and only if any of the following conditions holds: * [Complex condition] The complex-valued matrices A_k := A - λ_k BK, k = 1, …, ν, have spectral abscissa smaller than -μ. * [Real condition] The real-valued matrices A_e,k := [ A - Re(λ_k) BK, Im(λ_k) BK; -Im(λ_k) BK, A - Re(λ_k) BK ], k = 1, …, ν, have spectral abscissa smaller than -μ. * [Lyapunov inequality] For each k = 1, …, ν, there exist real-valued matrices P_k = P_k^⊤ ≻ 0 and Π_k^⊤ = -Π_k such that matrix P_e,k := [ P_k, -Π_k; Π_k, P_k ] ≻ 0 satisfies He(P_e,k A_e,k) ≺ -2μ P_e,k. § FEEDBACK DESIGN We aim at designing a common state-feedback matrix K so as to ensure synchronization to 𝒜, i.e., so as to satisfy the conditions in Proposition <ref>. We choose an LMI-based approach to design K, which allows us to easily embed additional linear constraints in the design process. Relevant linear constraints may be related, e.g., to the H_∞ gain, saturation handling, gain norm, and convergence rate <cit.>. §.§ Revisited synchronization conditions We can distinguish two main cases: either the Laplacian eigenvalues are all real or at least one of them is complex. In the former case, conditions (<ref>) can be framed within an LMI formulation, through a procedure similar to the one we describe next. In the latter case, we refer to expression (<ref>), where the problem is lifted to a higher space, considering A_e,k instead of A_k, so as to work with real-valued matrices. We focus on the latter case, which is more general and includes the former. Let us define the inverse of P_e,k in (<ref>), Q_e,k := P_e,k^-1 = Q_e,k^⊤ = [ Q_k, Σ_k; -Σ_k, Q_k ], k = 1, …, ν, with Q_k symmetric positive definite and Σ_k skew-symmetric. In fact, Q_k^-1 = P_k - Π_k^⊤ P_k^-1 Π_k (which is invertible, since applying the Schur complement to P_e,k yields Q_k^-1 ≻ 0) and Σ_k = P_k^-1 Π_k Q_k = Q_k Π_k P_k^-1 (where the equality holds because Π_k P_k^-1 Q_k^-1 = Π_k - Π_k P_k^-1 Π_k^⊤ P_k^-1 Π_k = (P_k - Π_k P_k^-1 Π_k^⊤) P_k^-1 Π_k = Q_k^-1 P_k^-1 Π_k). Then, we can left- and right-multiply inequality (<ref>) by Q_e,k, obtaining He(A_e,k Q_e,k) ≺ -2μ Q_e,k. To look for a common state-feedback matrix K, even when the matrices Q_e,k are different, we take advantage of the results in <cit.>. We can rewrite (<ref>) as [ I_2n; A_e,k^⊤ ]^⊤ (Φ_μ ⊗ Q_e,k) [ I_2n; A_e,k^⊤ ] ≺ 0, where Φ_μ = [ 2μ, 1; 1, 0 ] describes the stability region, which in our case is the complex half-plane with real part smaller than -μ. Then, according to <cit.>, (<ref>) is equivalent to the existence of matrices X_1,k, X_2,k ∈ ℝ^2n×2n satisfying (Φ_μ ⊗ Q_e,k) + He( [ A_e,k; -I_2n ] [ X_1,k X_2,k ] ) ≺ 0, where X_1,k and X_2,k are multipliers that add degrees of freedom by decoupling matrices Q_e,k and A_e,k.
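Before moving to the design conditions below, the following short numerical sketch illustrates condition (a) of the proposition above on a toy example: the Laplacian of a small directed graph is formed as L = diag(𝒲 1_N) - 𝒲, and a candidate gain K yields μ-synchronization exactly when every matrix A - λ_k BK associated with a non-zero Laplacian eigenvalue has spectral abscissa below -μ. The agent dynamics, graph, and gain below are illustrative choices of ours, not data from the paper.

```python
# Numerical check of the mode-wise synchronization condition (toy data).
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # double-integrator agent
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])                    # candidate state-feedback gain

W = np.array([[0, 1, 0],                      # weight matrix of a 3-agent directed cycle
              [0, 0, 1],
              [1, 0, 0]])
L = np.diag(W.sum(axis=1)) - W                # Laplacian L = diag(W 1_N) - W

nonzero_eigs = [lam for lam in np.linalg.eigvals(L) if abs(lam) > 1e-9]

def spectral_abscissa(M):
    return np.max(np.real(np.linalg.eigvals(M)))

mu_hat = -max(spectral_abscissa(A - lam * (B @ K)) for lam in nonzero_eigs)
print("estimated synchronization rate mu_hat =", mu_hat)   # > 0 means synchronization

# Cross-check with the aggregate closed loop (I_N (x) A) - (L (x) B K): for this
# diagonalizable L its spectrum is the spectrum of A (motion along the synchronization
# set, from lambda_0 = 0) together with the eigenvalues of the matrices checked above.
A_cl = np.kron(np.eye(3), A) - np.kron(L, B @ K)
print(np.sort_complex(np.round(np.linalg.eigvals(A_cl), 4)))
```

With these illustrative data the non-zero Laplacian eigenvalues are complex (1.5 ± j0.87), so the check exercises exactly the case that the real-valued lifting A_e,k is designed for.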
Conditions (<ref>) are still necessary and sufficient for μ–synchronization, because they are equivalent to (<ref>). According to the derivation in <cit.>, imposing X_2,k = α X_1,k, with α > 0, does not add conservatism as long as there are ν independent matrices X_1,k. We are now going to relax this condition by assuming that matrices X_1,k and X_2,k have the specific structure X_1,k := X_e Z_k, X_2,k := X_e W_k, where X_e := [ X, 0; 0, X ] = I_2 ⊗ X, with X ∈ ℝ^n×n, is a block-diagonal matrix common to all ν inequalities, while Z_k, W_k ∈ ℝ^2n×2n are different multipliers for every inequality. This assumption introduces conservativeness; therefore, the conditions are now only sufficient. However, we can now expand (<ref>) and obtain the bilinear formulation [ 2μ Q_e,k, Q_e,k; Q_e,k, 0 ] + He( [ Θ_k Z_k, Θ_k W_k; -X_e Z_k, -X_e W_k ] ) ≺ 0, with Θ_k = (I_2 ⊗ AX) - (Λ_k ⊗ BY), where Λ_k = [ α_k, -β_k; β_k, α_k ] is related to the k-th eigenvalue λ_k = α_k + jβ_k and Y := KX is a suitable change of variables. An expanded version of (<ref>) with the variables highlighted is provided in equation (<ref>) at the top of the next page. Constraints (<ref>) alone might lead to badly conditioned optimized selections of K, due to the fact that the joint spectral abscissa of A_e,k for all k = 1, …, ν may potentially grow unbounded for arbitrarily large values of K. Thus, as a possible sample formulation of a multi-objective optimization, we fix a maximum desired norm κ̅ for K and enforce the constraint ‖K‖ ≤ κ̅ through the following LMI formulation: [ X + X^⊤ - I, Y^⊤; Y, κ̅^2 I ] ⪰ 0, stemming from the expression K^⊤ K ⪯ κ̅^2 I after applying a Schur complement and exploiting (X - I)(X^⊤ - I) ⪰ 0. We then suggest synthesizing a state-feedback matrix K satisfying ‖K‖ ≤ κ̅ and maximizing μ by solving the bilinear optimization problem max_{X, Y, Z_1, …, Z_ν, W_1, …, W_ν, Q_e,1, …, Q_e,ν} μ, subject to: Q_e,k ≻ 0, (<ref>), k = 1, …, ν, where alternative performance-oriented linear constraints can be straightforwardly included, and then selecting K = Y X^-1. An iterative approach can be used to make the problem quasi-convex and solve it iteratively with standard LMI techniques. The most natural way to include the coefficient μ in the equations is that inspired by the techniques in <cit.>, leading to the formulation in (<ref>), where μ defines the stability region in the complex plane and the problem results in a generalized eigenvalue problem (GEVP). As an alternative, μ could be introduced as a destabilizing effect acting on the matrices A_e,k (shifting their eigenvalues to the right in the complex plane), which are still required to be Hurwitz: [ 0, Q_e,k; Q_e,k, 0 ] + He( [ A_e,k + 2μ I_2n; -I_2n ] [ X_1,k X_2,k ] ) ≺ 0. However, with this formulation, the problem is no longer a GEVP. This complicates the implementation, since the feasibility domain with respect to μ could be bounded (while in a GEVP it is right or left unbounded) and bisection is not appropriate. We tested this alternative approach in simulation and we obtained similar results to those achieved with Algorithm <ref>, with the advantage of typically reaching convergence after a significantly lower number of iterations. §.§ Iterative algorithm In order to solve the BMI optimization problem (<ref>) with convex techniques, we focus our attention on BMI (<ref>), since the other constraints are linear, and we propose an iterative approach for the problem solution, described in Algorithm <ref>.
The algorithm is composed of two steps:1) Synthesis step: for given fixed multipliers Z_k and W_k, k=1,…, ν, optimization (<ref>) is solved in the decision variables (μ, X, Y,Q_e,k), which corresponds to a generalized eigenvalue problem (easily solvable by a bisection algorithm) due to the fact that matrices Q_e,k are all positive definite; 2)Analysis step: for given fixed matrices X and Y, optimization (<ref>) is solved in the decision variables (μ, Z_k, W_k,Q_e,k), which corresponds again to a generalized eigenvalue problem due to positive definiteness of Q_e,k.Algorithm <ref> essentially comprises iterations of the two steps above, until parameter μ increases less than a specified tolerance over two steps.To the end of establishing a useful means of comparison in the simulations reported in Section <ref>,we naively initialize the algorithm by fixing the initial multipliers as scaled identity matrices (with α>0 properly tuned). More generally, we emphasize that using the Riccati construction of <cit.>, stabilizability of (A,B) is sufficient for ensuring the existence of a Riccati-based solution of (<ref>) and it is immediate to design an infinite gain margin solution where all the matrices A_e,k share a common quadratic Lyapunov function.We do not pursue this initialization here, so as to perform a fair comparison between our algorithm(initialized in a somewhat naive way)and the construction resulting from <cit.>.The following proposition establishes useful properties of the algorithm.For any selection of the tolerance, if the initial condition is feasible, thenall the iterations of Algorithm <ref> are feasible. Moreover, μ never decreases across two successive iterations. Finally, the algorithm terminates in a finite number of steps.About recursive feasibility, note that once the first step is feasible, for any pair of successive steps, the optimal solution to the previous step is structurally a feasible solution to the subsequent step. Indeed, the variables that are frozen at each iteration are selected by using their optimal values from the previous step. Since the cost maximized at each step is always μ, then μ can never decrease across subsequent steps and then recursive feasibility is guaranteed.About the algorithm terminating in a finite number of steps, note that the optimal value of μ is necessarily upper bounded by a maximum achievable μ depending on thenorm of matrices A, B, on the eigenvalue of L having minimum norm, and on the bound κ̅ imposed on the norm of the state-feedback matrix K.Since μ monotonically increases across iterations and it is upper bounded, then it must converge to a value μ^⋆ and eventually reach any tolerance limit across pairs of consecutive iterations. Computationally speaking, each iteration of Algorithm <ref> amounts to solving a GEVP because μ is multiplying a sign definite matrix Q_e,k0, and hence the conditions are monotonic with respect to μ.Therefore, we can find the optimal μ via a bisection algorithm: if the problem is feasible for μ=μ^⋆, then the problem is feasible for all μ≤μ^⋆; on the other hand, if the problem is infeasible for μ=μ^⋆, then the problem is infeasible for all μ≥μ^⋆. 
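To make the structure of each iteration more tangible, the sketch below implements, in Python with CVXPY (as a stand-in for the MATLAB/YALMIP + MOSEK toolchain used in the paper), a feasibility test of the synthesis step for a fixed μ with the naive multipliers Z_k = I_2n and W_k = α I_2n, together with the outer bisection on μ. All numerical data (agent dynamics, graph, α, κ̅) are illustrative choices of ours, and depending on α the naive initialization may be infeasible, in which case α must be retuned as noted above.

```python
# Sketch of the synthesis step (fixed multipliers, fixed mu) and bisection on mu.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, -1.0], [1.0, 0.0]])            # oscillatory agent dynamics
B = np.array([[0.0], [1.0]])
n, m = B.shape
Wg = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # 3-agent directed cycle
Lap = np.diag(Wg.sum(axis=1)) - Wg
lams = [l for l in np.linalg.eigvals(Lap) if abs(l) > 1e-9]
alpha, kappa_bar, eps = 1.0, 20.0, 1e-6

def synthesis_step(mu):
    """Return K = Y X^{-1} if the LMIs are feasible for this mu, else None."""
    X = cp.Variable((n, n))
    Y = cp.Variable((m, n))
    Z = np.eye(2 * n)                              # frozen multiplier Z_k = I
    Wm = alpha * np.eye(2 * n)                     # frozen multiplier W_k = alpha*I
    Xe = cp.bmat([[X, np.zeros((n, n))], [np.zeros((n, n)), X]])
    cons = [cp.bmat([[X + X.T - np.eye(n), Y.T],
                     [Y, kappa_bar ** 2 * np.eye(m)]]) >> 0]   # gain-norm bound
    for lam in lams:
        a, b = float(np.real(lam)), float(np.imag(lam))
        Qk = cp.Variable((n, n), symmetric=True)
        Sk = cp.Variable((n, n))
        Qe = cp.bmat([[Qk, Sk], [-Sk, Qk]])        # structured Q_e,k block
        Th = cp.bmat([[A @ X - a * (B @ Y), b * (B @ Y)],
                      [-b * (B @ Y), A @ X - a * (B @ Y)]])
        G = cp.bmat([[Th @ Z, Th @ Wm], [-Xe @ Z, -Xe @ Wm]])
        M = cp.bmat([[2 * mu * Qe, Qe], [Qe, np.zeros((2 * n, 2 * n))]]) + G + G.T
        cons += [Sk == -Sk.T,
                 Qe >> eps * np.eye(2 * n),
                 M << -eps * np.eye(4 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    return Y.value @ np.linalg.inv(X.value)

K, lo, hi = synthesis_step(0.0), 0.0, 5.0
if K is None:
    print("Infeasible even for mu = 0: retune alpha.")
else:
    while hi - lo > 1e-3:                          # bisection on the rate mu
        mid = 0.5 * (lo + hi)
        Kmid = synthesis_step(mid)
        if Kmid is None:
            hi = mid
        else:
            lo, K = mid, Kmid
    print("certified rate mu ~", lo, "\nK =", K)
```

The analysis step of the algorithm has the same flavor, with X and Y frozen and the multipliers Z_k, W_k (together with Q_e,k) acting as decision variables.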
Our objective is to find the maximum μ for which the problem is feasible (so that no larger μ leads to feasibility). § COMPARISON AND SIMULATIONS To test the effectiveness of Algorithm <ref>, we compare it with other design procedures that solve the simultaneous stabilization problem. The benchmark problem is the maximization of the rate μ with the norm of K upper bounded by κ̅, as induced by constraint (<ref>). §.§ Dynamical system and network In our simulations we consider two types of agent dynamics: one is oscillatory, (A_osc, B_osc) = ( [ 0, -1; 1, 0 ], [ 0; 1 ] ); while the second one is the unstable lateral dynamics of a forward-swept wing, the Grumman X-29A, as in <cit.>, (A_X-29,B_X-29) =(-2.059 0.997 -16.550-0.1023 -0.0679 6.7790-0.0603 -0.9928-0.1645 0.04413 1 0.0716800.0 ,1.347 0.23650.09194-0.07056 -0.0006141 0.00068660 0). We consider five communication networks: two circular graphs with N=4 and N=10 agents, two generic directed graphs with N=5 and N=10 agents characterized by complex eigenvalues, and a star graph with N=10 agents. The graph topologies and the eigenvalues of the associated Laplacian matrices are visualized in Fig. <ref>. §.§ Compared approaches The approaches that we compare are the following ones: * [Riccati], the dual case of <cit.>: the gain is structured as K = B^⊤ P, where P is the unique solution to the algebraic Riccati equation A^⊤ P + P A - 2b PBB^⊤ P + aI = 0, with b ≤ Re(λ_k) and a > 0. We solve (<ref>) by fixing b = min_k Re(λ_k) and adjusting the value of a so as to respect the bound ‖K‖ ≤ κ̅, which is easily done due to the monotonicity of ‖K‖ with respect to a. * [Listmann] from <cit.>: LMI conditions (<ref>) with Q_e,k = [ Q, 0; 0, Q ] and Y = KQ are imposed for the λ_k corresponding to the corners of a rectangular box in the complex plane that includes all non-zero eigenvalues of L (considering the eigenvalues in the first quadrant is enough, since conjugate eigenvalues lead to the same condition), while incorporating in the LMI-based design the maximum norm condition (<ref>). A fixed number of LMIs need to be solved regardless of the network size. * [A_e,k]: the method resembles that in “b”, but now conditions (<ref>) are imposed for each λ_k, k = 1, …, ν. * [Direct]: one iteration of Algorithm <ref> is executed, which amounts to solving (<ref>) with Z_k = I_2n, W_k = α I_2n and α > 0 properly tuned. Notably, matrices Q_e,k do not have a pre-defined structure. * [Algorithm <ref>]: the procedure in Algorithm <ref> is executed up to its termination, as guaranteed by Proposition <ref>. In the simulations, the convergence rate of the solutions is estimated from the spectral abscissa of the matrices A_k in (<ref>), namely, from the largest-real-part eigenvalue: μ̂ = -max(Re(eig(A_k))). §.§ Results We implement the different procedures in MATLAB, using the toolbox YALMIP <cit.> with MOSEK as an LMI solver. For the algorithm, we consider a tolerance of 10^-3 and κ̅ = 20 as the bound on the norm of K. For the different combinations of dynamics and graph topologies, Table <ref> reports a summary of all our results, along with the estimated convergence rate μ̂, the norm of K and the execution time. The time evolutions of the distances from the synchronization set 𝒜 are shown in Figs.
<ref> and <ref> for the approaches “a”, “b” and “e”.

Generally, method “a” performs worse than the considered LMI-based methods. The gain bound is reached, but the convergence rate is the slowest, most likely because the approach forces an infinite gain margin for K. Locating the eigenvalues of the A_k matrices in the complex plane shows that method “a” tends to move a few eigenvalues (the faster modes) to the left while penalizing others, so that the convergence speed is limited. Method “b” performs similarly to “c”, as expected, since the two methods simply consider different (eigen)values. In general, method “b” is more conservative than “c”, but is faster in larger networks. With L_cpx,5 and L_cpx,10, method “b” is slightly more conservative, as is reasonable, since it considers values that are not in the spectrum of the Laplacian. Method “d” generally yields better results than “b” and “c”, provided that suitable values of the parameter α are chosen. This improvement is due to the decoupling between the matrices A_e,k and Q_e,k. The best results are obtained using our proposed Algorithm <ref>, which gives the highest convergence rate and comes close to the system specifications. However, the computational load is higher, since several iterations are needed. With dynamics A_X-29, the states always converge faster; with dynamics A_osc, the performance is similar to that obtained with the other non-iterative LMI-based techniques. Algorithm <ref> outperforms the other procedures in the case with dynamics A_X-29 and graph L_∘,10: even though it requires a considerable number of iterations to converge, it provides a controller that leads to almost one-order-of-magnitude faster convergence than the others, as shown in Fig. <ref>.

§ CONCLUSIONS

We focused on the synchronization of identical linear systems in the case of full-state feedback. First, we provided necessary and sufficient conditions for synchronization. Then, we relaxed them in order to obtain a new formulation that can be solved iteratively through LMIs. This new procedure to solve the simultaneous stabilization problem, although requiring relatively large computational times, turns out to give better results in our benchmark problem, where the convergence rate is maximized under given constraints on the performance. Our results pave the way for further developments, such as the use of alternative methods (e.g., convex-concave decomposition) to deal with BMIs and the extension to the case of static output feedback control laws.

Acknowledgment: The authors would like to thank Domenico Fiore for his work in the initial stages of this research activity.
http://arxiv.org/abs/2312.15929v1
{ "authors": [ "Nicola Zaupa", "Luca Zaccarian", "Isabelle Queinnec", "Sophie Tarbouriech", "Giulia Giordano" ], "categories": [ "cs.MA", "math.DS" ], "primary_category": "cs.MA", "published": "20231226074831", "title": "Controlling identical linear multi-agent systems over directed graphs" }
]Overcoming the Coherence Time Barrier in Quantum Machine Learning on Temporal DataThese two authors contributed equally Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA These two authors contributed equally Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA Raytheon BBN, Cambridge, MA 02138, USARaytheon BBN, Cambridge, MA 02138, USA Raytheon BBN, Cambridge, MA 02138, USA Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USAThe practical implementation of many quantum algorithms known today is believed to be limited by the coherence time of the executing quantum hardware <cit.> and quantum sampling noise <cit.>. Here we present a machine learning algorithm, NISQRC, for qubit-based quantum systems that enables processing of temporal data over durations unconstrained by the finite coherence times of constituent qubits. NISQRC strikes a balance between input encoding steps and mid-circuit measurements with reset to endow the quantum system with an appropriate-length persistent temporal memory to capture the time-domain correlations in the streaming data. This enables NISQRC to overcome not only limitations imposed by finite coherence, but also information scrambling or thermalization in monitored circuits <cit.>. The latter is believed to prevent known parametric circuit learning algorithms even in systems with perfect coherence from operating beyond a finite time period on streaming data. By extending the Volterra Series analysis of dynamical systems theory <cit.> to quantum systems, we identify measurement and reset conditions necessary to endow a monitored quantum circuit with a finite memory time. To validate our approach, we consider the well-known channel equalization task to recover a test signal of N_ts symbols that is subject to a noisy and distorting channel. Through experiments on a 7-qubit quantum processor and numerical simulations we demonstrate that N_ts can be arbitrarily long not limited by the coherence time. [ Hakan E. Türeci January 14, 2024 ==================== The development of machine learning algorithms that can handle data with temporal or sequential dependencies, such as recurrent neural networks <cit.> and transformers <cit.>, has revolutionized fields like natural language processing <cit.>. Real-time processing of streaming data, also known as online inference, is an essential component of applications like edge computing, control systems <cit.>, and forecasting <cit.>. The use of physical systems whose evolution naturally entails temporal correlations appear at first sight to be ideally suited for such applications. An emerging approach to learning employs a wide variety ofphysical systems, referred to as physical neural networks (PNNs) <cit.>, to compute a trainable transformation on an input signal. A branch of PNNs that has proven well-suited to online data processing is physical reservoir computing <cit.>, distinguished by its trainable component only being a linear projector acting on the observable state of the physical system <cit.>. 
This approach has the enormous benefit of fast convex optimization through singular value decomposition routines, and already has enabled temporal learning on various hardware platforms <cit.>.Among many physical systems considered for PNNs, quantum systems are believed to offer an enormous potential for scalable, energy-efficient and faster machine learning <cit.>, due to their evolution taking place in the Hilbert space that scales exponentially with the number of nodes <cit.>. However, quantum machine learning (QML) on present-day noisy intermediate-scale quantum (NISQ) hardware has so far been restricted to training and inference on low-dimensional static data due to several difficulties. A fundamental restriction is Quantum Sampling Noise (QSN) – the unavoidable uncertainty arising from the finite sampling of a quantum system – which limits the accuracy of both QML training and inference <cit.> even on a fault-tolerant hardware. Secondly, the training of a quantum system often encounters so-called barren plateaus in the optimization landscape <cit.>, which are exponentially difficult to resolve, hindering the implementation of QML at practical scales. Two further concerns arise when considering inference on long data streams, which call into question whether quantum systems can even in principle be employed for online learning on streaming data. Firstly, without quantum error correction the operation fidelities and finite coherence times of constituent quantum nodes places a limit on the size of data on which inference can be performed <cit.>, which would appear to rule out inference on long data streams. Secondly, the nature of measurement on quantum systems imposes a fundamental constraint on continuous information extraction over long times. Backaction due to repeated measurements on quantum systems necessitated by inference on streaming data is expected to lead to rapid distribution of information between different parts of the system, a phenomenon known as information scrambling and thermalization <cit.>, making it extremely difficult to track or retrieve the information correlations in the input data. This constraint persists even in an ideal system with perfect coherence, such as one that may be realized by a fault-tolerant quantum computer. It is not known precisely what conditions need to be satisfied to avoid information scrambling. For classical dynamical systems, a strict condition known as the fading memory property <cit.> is required for a physical system to retain a persistent temporal memory that does not degrade on indefinitely long data streams. This imposes restrictions on the design of a classical reservoir and encoding of input data. Here, a mathematical framework known as Volterra Series theory <cit.> provides the basis and guidance for analyzing the necessary conditions a physical system has to satisfy for the fading memory property. Such a general theory for quantum systems has so far proved to be elusive.Here we present a Volterra series theory for quantum systems that accounts for measurement backaction, necessary for analyzing the conditions necessary for endowing a quantum system with a persistent temporal memory on streaming data. Based on this Quantum Volterra Theory we propose an algorithm, NISQ Reservoir Computing (NISQRC), that leverages recent technical advances in mid-circuit measurements to enable inference on an arbitrary long time-dependent signal, not limited by the coherence time of constituent physical qubits (see Fig. <ref>). 
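As a point of reference for the classical case, the convex readout optimization mentioned above typically amounts to a (regularized) least-squares fit of the output weights to the recorded reservoir features. A minimal sketch is given below, with X the (N × K) matrix of measured features over N time steps and Y the length-N target sequence; this is generic reservoir-computing practice, not code from any specific study.

import numpy as np

def train_readout(X, Y, reg=1e-6):
    # Ridge regression: solve (X^T X + reg*I) w = X^T Y.
    # Equivalently obtainable from the SVD of X.
    K = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(K), X.T @ Y)

def readout(X, w):
    # Trained, time-independent linear projector applied to the features.
    return X @ w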
The property that enables inference on an indefinitely-long input signal is intrinsic to the algorithm: it survives even in the presence of QSN, and does not require operating in a precisely-defined parameter subspace – and is thus unencumbered by barren plateaus.The practical viability of NISQRC is demonstrated through application to a task of immense technological importance for communications systems, namely, the equalization of a wireless communication channel. Channel equalization aims to reconstruct a message streamed through a noisy, non-linear and distorting communication channel and has been employed in benchmarking reservoir computing architectures <cit.> as well as other machine learning algorithms <cit.>. This task poses a challenge for parametric circuit learning-based algorithms <cit.> because the number of symbols in the message, N_ts, to recover in the inference stage directly determines the length of the encoding circuit, which in turn is limited by the coherence time of the system. A more critical issue is that the recovery has to be done online, as the message is streamed, which structurally is not suitable for static encoding schemes. We demonstrate, in Section <ref>, through experiments on a 7-qubit quantum processor and numerical simulations that NISQRC provides the key components so N_ts can be arbitrarily long, not limited by the coherence time. The role of the coherence time is to set the temporal memory. We show that by balancing the length of individual input encoding steps with the rate of information extraction through mid-circuit measurements, it is possible to endow the circuit with a memory that is appropriate for the ML task at hand. Interestingly, it is found that even in the limit of infinite coherence, the temporal memory is still limited by this balance. Reliable inference on a time-dependent signal of duration T_run = 117 μs is demonstrated on a 7-qubit quantum processor with qubit lifetimes in the range 63 μs – 164 μs and T_2 = 9 μs – 231 μs. In our experiments longer durations are restricted by limitations on mid-circuit buffer clearance. To leave no doubt that a persistent memory can be generated, we first compare the experimental results to numerical simulations with the same parameters, showing excellent agreement. Building on the reliability of numerical simulations in the presence of finite coherence and noise model, we demonstrate that successful inference can be made on a signal of 5000 symbols, the inference on which would require 500 lifetimes.Direct numerical sampling, required for this demonstration, is not possible for very deep circuits. We are able to do this demonstration by a numerical method we introduce (see Methods <ref>) that allows us to sample from repeated partial measurements on circuits of arbitrary depth. We further show that other seemingly reasonable-looking encoding methods adopted in previous studies lead to a sharp decline in performance. Drawing upon the Quantum Volterra Theory, we unveil the underlying cause: the absence of a persistent memory mechanism. § RESULTS The general aim of computation on temporal data is most naturally expressed in terms of functionals of a time-dependent input = {u_-∞, ⋯, u_-1, u_0, u_1, ⋯, u_∞}. A functional ℱ: ↦ maps a bounded functionto another arbitrary bounded function , where = {y_-∞, ⋯, y_-1, y_0, y_1, ⋯, y_∞}. Without loss of generality these functions can be normalized; we choose u_n ∈ [-1,1] and y_n ∈ [-1,1]. 
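Operationally, such a functional map is realized online: at every step the physical system is advanced by the current input, a feature vector is read out, and a fixed linear layer produces the output. A schematic sketch of this loop is shown below; `reservoir_step` is a placeholder for the physical evolution and measurement described next, and w is the trained, time-independent weight vector.

import numpy as np

def run_online(u, w, reservoir_step, state):
    # Process an arbitrarily long input stream one sample at a time.
    y = []
    for u_n in u:
        state, x_n = reservoir_step(state, u_n)  # evolve system, measure features
        y.append(float(np.dot(w, x_n)))          # same linear readout at every step
    return y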
Within the reservoir computing paradigm <cit.>, this processing is achieved by extracting outputs x(n), where n is a temporal index, from a physical system evolving under said time-dependent stimulus u_n ≡(n). Learning then entails finding a set of optimal time-independent weights w to best approximate a desired ℱ with a linear projector y_n ≡(n) = w·x(n).If the physical system is sufficiently complex, its temporal response x(n) to a time-dependent stimulus u is universal in that it can be used to approximate a large set of functionals ℱ[u] with an error scaling inversely in system-size and using only this simple linear output layer <cit.>. To analyze the utility of this learning framework, it proves useful to quantify the space of functionals ℱ[u] that are accessible. For classical non-linear systems, a firmly-established means of doing so is a Volterra series representation of the input-output (I/O) map <cit.>:x_j (n) = ∑_k = 0^∞∑_n_1 = 0^∞⋯∑_n_k = n_k-1^∞ h_k^(j) (n_1, ⋯, n_k) ∏_κ = 1^k u_n - n_κwhere the Volterra kernels h_k^(j) (n_1, ⋯, n_k) characterize the dependence of the systems' measured output features at time n on its past inputs u_n - n_κ. Hence the support of h_k^(j) over the the temporal domain (n_1,⋯,n_k) quantifies the notion of memory of a particular physical system, with the kernel order k being the corresponding degree of nonlinearity of the map. Most importantly, the Volterra series representation describes a time-invariant I/O map, as well as the property of fading memory, which roughly translates to the property that the reservoir forgets initial conditions and thus depends more strongly on more recent inputs [ For instance, for multi-stable dynamical systems, a global representation such as eq:Volterra may not exist. However a local representation around each steady state can be shown to exist with a finite convergence radius. ]. Such a time-invariant map is essential for a physical system to be reliably employed for inference on an input signal of arbitrary length, and thus for online time series processing.In classical physical systems, the existence of a unique information steady state and the resulting fading memory property is determined only by the input encoding dynamics – the map from input series to system state. More explicitly, the information extraction step (sometimes referred to as the “output layer") on a classical system is considered to be a passive action, so that the state can always be observed at the precision required. However for physical systems operating in the quantum regime, the role of quantum measurement theory is fundamental: in addition to the inherent uncertainty in quantum measurements as dictated by the Heisenberg uncertainty principle, the conditional dependence of the statistical system state on prior measurement outcomes – referred to as backaction – strongly determines the information that can be extracted. Recent work in circuit-based quantum computation has shown that the qualitative features of the statistical steady state of monitored circuits strongly depends on the rate of measurement <cit.>. 
In particular, generic quantum systems that alternate dynamics and measurement (input encoding and output in the present context) are known to give rise to deep thermalization of the memory subsystem <cit.>, resulting in a Haar-random state with vanishing temporal memory.The absence of a comprehensive famework in QML for analyzing and implementing an encoding-decoding system with finite temporal memory, along with characterization tools for the accessible set of input-output functionals, has hindered both a systematic study and the practical application of online learning methods. Here, we develop a general temporal learning framework suitable for qubit-based quantum processors and the associated methods of analysis based on an appropriate generalization of the Volterra Series analysis to monitored quantum systems, the Quantum Volterra Theory (QVT). Our approach incorporates the effects of backaction that results from quantum measurements in the process of information extraction. Consider the `input' component of the map given by a pipeline (encoding) that injects temporal data { u_n } to an L-qubit system through the parameterized quantum channel 𝒰(u_n)ρ̂ = e^τℒ(u_n)ρ̂, acting over a time τ, where ℒ is given by:ℒ (u) ρ̂ = - i [Ĥ (u), ρ̂] +𝒟_Tρ̂. Here the input appears in the Hamiltonian Ĥ(u), while 𝒟_T = ∑_i = 1^L γ_i 𝒟 [σ̂_i^-, z] describes dissipative processes. To enable persistent memory in the presence of quantum measurement, we separate the L-qubit system into M emory qubits and R eadout qubits (L=M+R). After evolution under any input u_n, only the R eadout qubits are (simultaneously) measured; this separation therefore allows for the concept of partial measurements of the full quantum system, which proves critical to the success of our learning framework. The measurement scheme itself can be very general, characterized by a positive operator-valued measure (POVM) 𝒪_R = {M̂_j | M̂_j = Î^⊗ M⊗Ê_j . }satisfying Ê_j ≽ 0 and ∑_j Ê_j = Î^⊗ R. A simple example is the projective measurement of a complete set of commuting observables, given by Ê_j = |b_j⟩⟨b_j| where each bit-string b_j is the R-bit binary representation of integer j ∈{0, 1, ⋯, 2^R-1} denoting the bit-wise state of the measured qubits. Then, a single evolution step for input u_n constitutes unmonitored evolution via 𝒰(u_n), followed by measurement of the eadout subsystem to obtain measured observables at time step n,x_j(n) = Tr ( M̂_j ρ̂^𝖬𝖱_n),where ρ̂^𝖬𝖱_n is the effective full L-qubit system state at time step n (see Methods <ref> for further details). While for null inputs (i.e. u_n=0 for all n) such quantum systemsare guaranteed to have a unique statistical steady state,the existence of a nontrivial memory and kernel structure is much more involved. Through QVT (see Methods <ref>), we show that these requirements place strong constraints on the encoding and measurement steps viz. the choice of (𝒰, M̂_j). This then enables us to propose an algorithm for online learning that provably provides a controllable and time-invariant temporal memory (which will be referred to as persistent memory) – enabling inference on arbitrarily long input sequences even on NISQ hardware without any error-mitigation or correction. We refer to this general algorithm as . §.§ Quantum Volterra Theory and NISQRC is distinguished by an iterative encode-measure-reset scheme; measure-reset is formally described by the POVM operators Ê_j = K̂_j^†K̂_j in eq:nisqrcPOVM, with non-diagonal Kraus operators K̂_j = |b_0⟩⟨b_j|. 
Explicitly for each step n: the system starts in the state ρ̂^𝖬_n-1⊗|0⟩⟨0|^⊗ R, the input u_n is encoded via 𝒰(u_n), and the eadout qubits are measured and reset to their ground state (irrespective of the measurement outcome).This process is iterated on the resulting state ρ̂^𝖬𝖱_n to process subsequent inputs u_m>n, as depicted in Fig. <ref>. The output y_n ≡(n) = w·x(n) is obtained from the measurement results in each step, defining the functional I/O map which we characterize next (see details in Methods <ref> and <ref>).This structure elucidates the naming of the unmeasured emory qubits: these are the only qubits that retain memory of past inputs. We note that reset operations have been used implicitly in prior work on quantum reservoir computing, where the successive inputs are encoded in the state of an `input' qubit <cit.>. Inthe purpose of partial reset operation is instead to endow the system with asymptotic time-invariance, a finite persistent memory and a nontrivial Volterra Series expansion (see Methods <ref> and Supplementary Information (SI) <ref>). Through analytical arguments based on the QVT, we show that omitting the partial reset operation renders all Volterra kernels trivial – a finding corroborated by our experimental results in Fig. <ref>.QVT also provides a way to characterize the nontrivial I/O maps enabled by thealgorithm realized by a given encoding, which in turn can aid encoding design for a given ML task, as we demonstrate later. Remarkably, we show that this can be done even in the presence of dissipation and decoherence. For concreteness, consider a specific Ising Hamiltonian encoding Ĥ(u) = Ĥ_0 + u·Ĥ_1 inspired by quantum annealing and simulation architectures(other ansätze can likewise be considered), Ĥ_0 = ∑_⟨ i, i'⟩ J_i,i'σ̂^z_i σ̂^z_i' + ∑^L_i=1 h^x_iσ̂^x_i,  Ĥ_1 = ∑^L_i=1 h^z_iσ̂^z_i.The coupling strength J_i,i', transverse x-field strength h^x_i and longitudinal z-drive strength h^z_i are randomly chosen, but then fixed for all inputs {u_n} (see SI <ref> for more details).The encoding channel is applied for duration τ, and each qubit has a finite lifetime T_1 =γ^-1. We will specify the number of emory and eset qubits of a given QRC with the notation (M+R). In Fig. <ref>(a) we plot the first two Volterra kernels h_1 and h_2 (cf. eq:Volterra) for a random (2+1)-qubit QRC using the above encoding and the reset scheme. The expression for these kernels have been derived from the QVT and are given in Methods, Eqs. (<ref>, <ref>). Importantly, we find all kernels have an essential dependence on the statistical steady state or fixed-point in the absence of any input: ρ̂^𝖬_ FP = lim_n→∞ρ̂^𝖬_n |_u_n = 0. Here ρ̂^𝖬_n |_u_n = 0= 𝒫_0^nρ̂^𝖬_0 is obtained by n applications of the null-input single-step quantum channel 𝒫_0, defined in Methods <ref>. The properties of quantum Volterra kernels, including their characteristic decay time, can be related to the spectrum of 𝒫_0, defined by 𝒫_0 ϱ̂^𝖬_α = λ_αϱ̂^𝖬_α. Here ϱ̂^𝖬_α are eigenvectors that exist in the 4^M-dimensional space of 𝖬emory subsystem states. The eigenvalues satisfy 1 = λ_1 ≥ |λ_2| ≥⋯≥ |λ_4^M| ≥ 0; examples are plotted in Fig. <ref>(b) for various values of τ. 
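Numerically, these spectral quantities are straightforward to extract once the null-input single-step channel 𝒫_0 is represented as a matrix acting on vectorized 𝖬emory-subsystem density matrices. The sketch below is illustrative only (row-major vectorization assumed): it returns the fixed point obtained from the leading eigenvector and the magnitude of the sub-leading eigenvalue, which controls how quickly memory of earlier states decays.

import numpy as np

def analyze_null_input_channel(P0):
    # P0: (4**M, 4**M) matrix representation of the null-input channel.
    evals, evecs = np.linalg.eig(P0)
    order = np.argsort(-np.abs(evals))        # sort by decreasing magnitude
    evals, evecs = evals[order], evecs[:, order]
    dim = int(round(np.sqrt(P0.shape[0])))    # Hilbert-space dimension 2**M
    rho_fp = evecs[:, 0].reshape(dim, dim)    # leading eigenvector -> fixed point
    rho_fp = rho_fp / np.trace(rho_fp)        # trace-normalize
    return rho_fp, evals, np.abs(evals[1])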
The unique eigenvector corresponding to the largest eigenvalue λ_1 = 1 is special, being the fixed-point of the 𝖬emory subsystem, ϱ̂^𝖬_1 =, reached once transients have died out.The second largest eigenvalue λ_2 determines the time over which memory of an initial state persists as this fixed point is approached, and is used to identify a memory time n_M = - 1/ln|λ_2|. Note that this quantity is dimensionless and can be converted to actual passage of time through multiplication by τ, while n_M itself non-trivially depends on τ (see Fig. <ref>(b)). The memory time describes an effective `envelope' for a system's Volterra kernels; additional nontrivial structure is also required for QRC to produce meaningful functionals of past inputs. With the spectral problem at hand, we next analyze the information-theoretical benefit of the reset operation. Firstly, the absence of the unconditional reset operation produces a unital 𝒫_0 [“Unital” refers to an operator that maps the identity matrix to itself. See J. Preskill, Lecture notes for physics 229: Quantum information and computation.] with resulting = I^⊗ M/2^M. This fully-mixed state is inexorably approached after n_M steps under any input sequence and retains no information on past inputs: all Volterra kernels therefore vanish, despite a generally-finite n_ M. Such algorithms (e.g. Refs. <cit.>) are only capable of processing input sequences of length n_M and would not retain a persistent memory necessary for inference on longer sequences of inputs. Hence such encodings would be unsuitable for online learning on streaming data. The possibility of inference through the transients have been observed and utilized before (see e.g. Ref. <cit.>) in the context of classical reservoir computing. However, the simple yet essential inclusion of the purifying reset operation avoids unitality – more generally, a common fixed point for all u-encoding channels – which we find is the key to enabling nontrivial Volterra kernels and consequent online QRC processing (see Methods <ref>).Once such an I/O map is realized, λ_α and the consequent memory properties can be meaningfully controlled by the QRC encoding parameters. As shown in Fig. <ref>(b) the characteristic decay time set by n_ M, for instance, decreases across several orders-of-magnitude with increasing τ.The partial measurement and reset protocol also resolves the unfavorable quadratic runtime scaling of prior approaches. A wide range of proposals and implementations of QRC <cit.> consider the read out of all constituent qubits at every output step, terminating the computation. Not only does this preclude inference on streaming data, it requires the entire input sequence to be re-encoded to proceed one step further in the computation, leading to an O(N^2 S) running time. As shown in schematic Fig. <ref>, incorporating partial measurement with reset indoes not require such a re-encoding; the entire input sequence can be processed in any given measurement shot , enabling online processing with an O(N S) runtime, while maintaining a controllable memory timescale.Most importantly, the nontrivial nature of Volterra kernels realized by thealgorithm is preserved under the inclusion of dissipation. For example, we explore the effect of finite qubit T_1 on n_ M in Fig. <ref>(c). If T_1/τ > n_ M^0, where n_ M^0 is the memory time of the lossless map, then n_ M→ n_ M^0 and is essentially independent of T_1, determined instead by the unitary and measurement-induced dynamics. 
This requirement, which can be met in contemporary quantum devices for n^0_ M values relevant to practical tasks, ensures that dissipation does not destroy the Volterra kernel structure. As a result, lossy QRCs can still be deployed for online processing, with a total run time T_ run that is unconstrained by (and can therefore far exceed) T_1. We will demonstrate this via simulations in Sec <ref> with T_ run≫ T_1, and via experiments in Sec. <ref> for T_ run≃ T_1; in the latter T_ run is limited only by memory buffer constraints on the classical backend. §.§ Practical machine learning using temporal data Thus far, we have assumed outputs to be expected features x_j(n), which in principle assumes an infinite number of measurements. In any practical implementation, one must instead estimate these features with S shots or repetitions of the algorithm for a given input .The resulting QSN constrains the learning performance achievable in experiments on quantum processors in a way that can be fully characterized <cit.>, and is therefore also included in numerical simulations which we present next.To demonstrate the utility of theframework, we consider a practical application of machine learning on time-dependent classical data: the channel equalization (CE) task. Suppose one wishes to transmit a message m(n) of length N, which here takes discrete values m: [N] →{-3, -1, 1, 3}, through an unknown noisy channel to a receiver. This medium generally distorts the signal, so the received version u(n) is different from the intended m(n). Channel equalization seeks to reconstruct the original message m(n) from the corrupted signal u(n) as accurately as possible,and is of fundamental importance in communication systems. Specifically, we assume the message is corrupted by nonlinear receiver saturation, inter-symbol interference (a linear kernel), and additive white noise <cit.> (additional details in SI <ref>). As shown in Fig. <ref>(a), even if one has access to the exact inverse of the resulting nonlinear filter, the signal-to-noise (SNR) of the additive noise bounds the minimum achievable error rate.We also show the error rates of simple rounding and single-step logistic regression on u(n) directly for comparison: logistic regression outperforms rounding (≈30%), which is better than random guessing (75%), but both methods are severely limited by their linear, memory-less processing. We now perform the CE task using thealgorithm on a simulated (2+4)-qubit reservoir under the ansatz of Eq. (<ref>). The ability to efficiently compute the Volterra kernels for this quantum system immediately provides guidance regarding parameter choices. In particular, we choose random parameter distributions such that the memory time n_M≈ O(10^1) is on the order of the length of the distorting linear kernel h(n). These QRCs have K=2^4=16 readout features {x_j(n)}_j ∈ [K] whose corresponding time-independent output weights w are learned by minimizing cross-entropy loss on 100 training messages of length N=100 (see SI <ref> for additional details). The resultingperformance on test messages is studied in Fig. <ref>(a), where we compare two distinct coupling maps shown in (b).In the highly-connected (lower) system the performance approaches the theoretical bound for →∞; finite sampling(here, =10^5 is in the range typically used in experiments) increases the error rate as expected. 
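For reference, the corruption applied to the transmitted symbols can be sketched as below. The filter taps and nonlinearity coefficients shown are those commonly used in the standard channel-equalization benchmark and are illustrative only; the exact channel used here is specified in SI <ref>.

import numpy as np

def corrupt(m, snr_db=20.0, seed=0):
    rng = np.random.default_rng(seed)
    # Inter-symbol interference: linear FIR kernel mixing neighbouring symbols.
    h = np.array([0.08, -0.12, 1.0, 0.18, -0.10, 0.091, -0.05, 0.04, 0.03, 0.01])
    q = np.convolve(m, h, mode="same")
    # Receiver saturation: a mild polynomial nonlinearity.
    q = q + 0.036 * q**2 - 0.011 * q**3
    # Additive white noise at the prescribed signal-to-noise ratio.
    noise_power = np.mean(q**2) * 10.0 ** (-snr_db / 10.0)
    return q + rng.normal(0.0, np.sqrt(noise_power), size=q.shape)

m = np.random.default_rng(1).choice([-3, -1, 1, 3], size=1000)  # message symbols
u = corrupt(m)                                                  # received signal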
We note that the split system (upper) performs significantly worse even without sampling noise: this is because the quantum system lives in a smaller effective Hilbert space – the product of two disconnected three-qubit systems– and is far less expressive as a result.Although in both cases the number of measured features is the same, those from the connected system span a richer and independent space of functionals.This functional independence can be quantified by the Jacobian rank R_J, which is the number of independent -gradients that can be represented by a given encoding (SI <ref>); an increased connectivity and complexity of state-description generally manifests as an increase in the Jacobian rank and consequent improved CE task performance. This observation can be viewed as a generalization of the findings in time-independent computation <cit.> to tasks over temporally-varying data, and also agrees with related recent theoretical work <cit.>.Most importantly, we demonstrate in Fig. <ref>(c) that thealgorithm enables the use of a quantum reservoir for online learning. In all cases studied here, N=100 is used for training and the length of the SNR=20dB test messages N_ts is varied. As suggested by the QVT, the performance is unaffected by N_ts even if it greatly exceeds the lifetime of individual qubits: N_ts = T_ run/τ≫ T_1/τ = 10, andcan therefore be used to perform inference on an indefinite-length signal with noisy quantum hardware.As seen in the same figure, while dissipation imposes only a small constant performance penalty, the reset operation is critical: if removed, the error rate increases to that of random guessing, as the Volterra kernels vanish and the I/O map becomes trivial.In particular, partial readout alone does not provide a persistent memory, if not accompanied by reset of system qubits in which inputs are encoded. An analysis based on the QVT shows that such encodings (e.g. as utilized in a recent article Ref. <cit.> based on a quantum non-demolition measurement proposal in Ref. <cit.>) can still result in zero persistent memory and to an amnesiac reservoir. In this scheme, the quantum circuit is coupled to ancilla qubits by using transversal CNOT gates. While each projective measurement of ancillas leads to read out of system qubits and their collapse to the ancilla state via back-action, subsequent reset of the ancillas does not reset the system qubits. This scheme therefore suffers from the same thermalization problem as any no-resetdoes, and hence has zero persistent memory. We verify this analysis in Fig. <ref>(c) by implementing the CE task with a four-ancilla-qubit circuit. The error rates are found to be very close to the no-reset- one, whose I/O map we have shown before to be trivial (see also Fig. <ref>(c)). §.§ Experimental results in quantum systems We now demonstratein action by performing the SNR=20dB CE task on an IBM Quantum superconducting processor. To highlight the generality of ourapproach, we now consider a circuit-based parametric encoding scheme inspired by a Trotterization of Eq. (<ref>), suitable for gate-based quantum computers. In particular, we use a L=7 qubit linear subgraph of the ibm_algiers device, with M=3 memory qubits and R=4 readout qubits in alternating positions, as depicted in Fig. <ref>(a). 
The encoding unitary for each time step n is also shown: Û(u_n) = (𝒲(J)ℛ_z(θ^z +θ^I u_n ) ℛ_x(θ^x) )^n_T, where ℛ_x,z are composite Pauli-rotations applied qubit-wise, and 𝒲(J) defines composite ℛ_zz gates between neighbouring qubits, all repeated n_T=3 times (for parameters θ^x,z,I, J and further details see Methods <ref>). Realizing theframework with the circuit ansatz depicted in Fig. <ref>(a) requires the state-of-the-art implementation of mid-circuit measurements and qubit reset, which has recently become possible on IBM Quantum hardware <cit.>. We plot the testing error using the indicated linear chain of the ibm_algiers device as a function of the number of shotsin solid blue Fig. <ref>(b), alongside simulations of both the ideal unitary circuit and with qubit losses in open circles. We clearly observe that performance is influenced by the number of shots available, and hence by QSN. In particular, for a sufficiently large S, the device outperforms the same logistic regression method considered previously. For the circuit runs, the average qubit coherence times over 7 qubits are T_1^av = 124 μs, T_2^av = 91μs (see SI <ref> for the ranges of all parameters, which varies over the time of runs as well), while the total circuit run time for a single message is T_ run≈ 117 μs. Even though T_ run≃ T_1^av, the CE task performance usingon ibm_algiers is essentially independent of qubit lifetimes. This is emphatically demonstrated by the excellent agreement between the experimental results and simulations assuming infinite coherence-time qubits. In fact, finite qubit decay consistent with ibm_algiers leaves simulation results practically unchanged (as plotted in dashed blue); we find that T_1 times would have to be over an order of magnitude shorter to begin to detrimentally impactperformance on this device (see SI <ref>). We further find that artificially increasing T_ run beyond T_1 by introducing controlled delays in each layer also leaves performance unchanged (see SI <ref>).Using the same device we are able to reiterate several important aspects of thealgorithm. First, we consider the same CE task with a split chain, where the connection between the qubits labelled `14' and `16' on ibm_algiers is severed by removing the R_zz gate highlighted in brown in Fig. <ref>(a). The resulting device performance using these two smaller chains is worse, consistent both with simulations of the same circuit and the analogous split Hamiltonian ansatz studied in Sec. <ref>. Next we return to the 7 qubit chain but now remove reset operations in thearchitecture, shaded in red in Fig. <ref>(a): all other gates and readout operations are unchanged. The device performance now approaches that of random guessing: the absence of the crucial reset operation leads to an amnesiac QRC with no dependence on past or present inputs. This remarkable finding reinforces that reset operations demanded by thealgorithm are therefore essential to imbue the QRC with memory and enable any non-trivial temporal data processing. We note that there is room for improvement in CE performance when compared against Hamiltonian ansatzof similar scale in Fig. <ref>. 
A key difference is the reduced number of connections in the nearest-neighbour linear chain employed on ibm_algiers; including effective ℛ_zz gates between disconnected qubits significantly increases the gate-depth of the encoding step, enhancing sensitivity to gate-fidelity increasing runtimes.The demonstrated circuit ansatz can also be optimized - using knowledge of the Volterra kernels - for better nonlinear processing capabilities demanded by the CE task, in addition to memory capacity determined by n_ M. Nevertheless, the demonstrated performance and robustness of theframework to dissipation already suggests its viability for increasingly complex time-dependent learning tasks using actual quantum hardware.§ DISCUSSION By enabling online learning in the presence of losses,paves the way to harness quantum machines for temporal data processing in far more complex applications than the CE task demonstrated here. Examples include spatiotemporal integrators, ML tasks where spatial information is temporally encoded, such as video processing. Recent results provide evidence that the most compelling applications however lie in the domain of machine learning on stochastic measurement trajectories originating from other, potentially complex quantum systems <cit.> for the purposes of quantum state analysis. In tackling such increasingly complex tasks, the scale of quantum devices required is likely to be larger than those employed here. The NISQRC framework can be applied irrespective of device size; however, its readout features at a given time live in a K=2^R dimensional space. For applications requiring a large R, the exponential growth of the feature-space dimension may give rise to concerns with under-sampling, as in practice the available number of shotsmay not be sufficiently large. In such large-R regimes, certain linear combinations of measured features can be found, known as eigentasks, that provably maximize the SNR <cit.> of the functions approximated by a given physical quantum system trained withshots. Eigentask analysis provides very effective strategies for noise mitigation. In Ref. <cit.> the Eigentask Learning methodology was proposed to enhance generalization in supervised learning. For the present work, such noise mitigation strategies were not needed as the size of the devices used were sufficiently small to efficiently sample. An interesting direction is the application of Eigentask analysis to NISQRC, which we leave to future work.The present work, and the availability of an algorithm for information processing beyond the coherence time, opens up new opportunities for mid-circuit measurement and control. While mid-circuit measurement is essential for quantum error correction <cit.>, its recent availability on cloud-based quantum computers has allowed exploration of other quantum applications on near-term noisy qubits. Local operations such as measurement followed by classical control for gate teleportation have been used to generate nonlocal entanglement <cit.>. Additionally, mid-circuit measurements have been employed to study critical phenomena such as phase transitions <cit.> and are predicted to allow nonlinear subroutines in quantum algorithms <cit.>. The present work opens up a new direction in the application space, namely the design of self-adapting circuits for inference on temporal data with slowly-changing statistics. This would require dynamic programming capabilities for mid-circuit measurements, not employed in the present work. 
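To make the per-step structure behind such measurement-conditioned processing concrete, a single encode-measure-reset time step on a linear chain can be sketched in Qiskit-style code as below, following the Trotterized circuit ansatz described earlier. The rotation angles, coupling strength and register layout are placeholders rather than the exact experimental settings of Fig. <ref>; the circuit `qc` is assumed to carry a classical register large enough to store the mid-circuit outcomes of all time steps.

import numpy as np
from qiskit import QuantumCircuit

def nisqrc_step(qc, u_n, theta_x, theta_z, theta_i, J, readout_qubits, clbit_offset, n_T=3):
    n = qc.num_qubits
    for _ in range(n_T):
        for q in range(n):
            qc.rx(theta_x[q], q)                     # transverse rotation
            qc.rz(theta_z[q] + theta_i[q] * u_n, q)  # input-dependent rotation
        for q in range(n - 1):
            qc.rzz(J, q, q + 1)                      # nearest-neighbour coupling
    for k, q in enumerate(readout_qubits):
        qc.measure(q, clbit_offset + k)              # mid-circuit measurement
        qc.reset(q)                                  # unconditional reset to |0>

# Example (illustrative layout): qc = QuantumCircuit(7, 4 * n_steps); readout_qubits = [0, 2, 4, 6]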
We show here that implementing even the relatively simple CE task challenges current capabilities for repeated measurements and control; having a means to deploy more complex quantum processors for temporal learning viacan push hardware advancements to more tightly integrate quantum and classical processing for efficient machine-basedinference.Note added. During the final stages of this work, we became aware of related work, Ref. <cit.>, and we coordinated to release our papers simultaneously. Ref. <cit.> also introduces a framework for quantum reservoir computing on continuous time domain signals. Similar to their reservoir, our framework also harnesses the capabilities provided by mid-circuit measurements. In contrast with their work, we consider the problem of online inference of time-dependent targets on streaming time-dependent data. § METHODS§.§ Generating features via conditional evolution and measurement Here we detail how an input-output functional map is obtained in theframework. The quantum system is initialized to ρ̂^𝖬𝖱_0 = ρ̂^𝖬_0 ⊗|0⟩⟨0|^⊗ R, where ρ̂^𝖬_0 is the initial state, which is usually set to be |0⟩⟨0|^⊗ M. Then, for each run or `shot' indexed by s, the process described in the following paragraph is repeated. Before executing the n-th step, the overall state can be described as ρ̂^𝖬,𝚌𝚘𝚗𝚍_n-1⊗|0⟩⟨0|^⊗ R (usually pure), where the superscript 𝚌𝚘𝚗𝚍 emphasizes that the emory subsystem state is generally conditioned on the history of all previous inputs {u_m}_m≤ n-1 and all previous stochastic measurement outcomes. The eadout subsystem state is in a specific pure state, which can be ensured by the deterministic reset operation we describe shortly. Then, the current input u_n is encoded in the quantum system via the parameterized quantum channel 𝒰(u_n), generating the state ρ̂^𝖬𝖱,𝚌𝚘𝚗𝚍_n = 𝒰(u_n) (ρ̂^𝖬,𝚌𝚘𝚗𝚍_n-1⊗|0⟩⟨0|^⊗ R).In this work, 𝒰(u_n) takes the form of continuous evolution under Eq. (<ref>) for a duration τ, or the discrete gate-sequence Û(u_n) depicted in Fig. <ref>.The R readout qubits are then measured per Eq. (<ref>), and the observed outcome is represented as an R-bit string: b^(s)(n) = (b^(s)_L+1(n), ⋯, b^(s)_L+R(n) ).Here we consider simple `computational basis' (i.e. σ̂^z) measurements, where each bit simply denotes the observed qubit state. A given outcome j occurs with conditional probability Tr ( M̂_j ρ̂^𝖬𝖱,𝚌𝚘𝚗𝚍_n) as given by the Born rule, and the quantum state collapses to the new state ρ̂^𝖬,𝚌𝚘𝚗𝚍_n ⊗|b_j⟩⟨b_j| associated with this outcome. Finally, all R readout qubits are deterministically reset to the ground state (regardless of the measurement outcome); the quantum system is therefore in state ρ̂^𝖬,𝚌𝚘𝚗𝚍_n⊗|0⟩⟨0|^⊗ R.This serves as the initial state into which the next input u_n+1 is encoded, and the above process is iterated until the entire input sequence u is processed. It is important to notice that ρ̂^𝖬_n depends on the observed outcome in step n and thus the quantum state and its dynamics for a specific shot is conditioned on the history of measurement outcomes {b_i^(s)(m)}_m ≤ n.By repeating the above process for S shots, one obtains what is effectively a histogram of measurement outcomes at each time step n as represented in Fig. <ref>.The output features are taken as the frequency of occurrence of each measurement outcome, as in Ref. 
<cit.>: X̅_j(n) = 1/S∑_s=1^S X_j^(s)(n; ), where X_j^(s)(n; ) = δ(b^(s)(n), b_j) counts the occurrence of outcome j at time step n.These features are stochastic unbiased estimators of the underlying quantum state probability amplitudes x_j(n) =X_j^(s)(n; ) = lim_S →∞X̅_j(n) <cit.>.As noted in the main text, the finaloutput is obtained by applying a set of time-independent linear weights to approximate the target functional y̅_n = w·X̅ (n). Importantly, during each shot s∈[S], we execute a circuit with depth N; the total processing time is therefore O(N S).If instead one re-encoded N_m previous inputs prior to each successive measurement the processing time is O(N_m N S): N_m=O(N) if the entire past sequence is re-encoded as is conventionally done in QRC <cit.>.§.§ The Quantum Volterra Theory (QVT) and Analysis of At any given time step n, the conditional dependence on previous measurement outcomes, presented in Methods <ref>, is usually referred to as backaction. Defining ρ̂^𝖬𝖱_n as the effective pre-measurement state of the quantum system at time step n of theframework, quantum state evolution from time step n-1 to n can be written via the maps:ρ̂_n^𝖬𝖱= 𝒰(u_n) ( Tr_𝖱 (ρ̂_n-1^𝖬𝖱) ⊗|0⟩⟨0|^⊗ R),ρ̂_n^𝖬= Tr_𝖱 ( 𝒰(u_n) ( ρ̂_n-1^𝖬⊗|0⟩⟨0|^⊗ R ) ≡𝒞(u_n) ρ̂_n-1^𝖬,which describes the reset of the post-measurement 𝖱eadout subsystem after time step n-1, followed by input encoding via 𝒰(u_n) into the full quantum system state. With an eye towards the construction of an I/O map, it proves useful to introduce the expansion of the relevant single-step maps 𝒰(u) and 𝒞(u) in the basis of input monomials u^k: 𝒰 (u) ρ̂^𝖬𝖱 = ∑_k = 0^∞ u^k ℛ_k ρ̂^𝖬𝖱 and 𝒞 (u) ρ̂^𝖬= ∑_k = 0^∞ u^k 𝒫_k ρ̂^𝖬. Then, via iterative application of eq:one-step-evol, ρ̂^𝖬𝖱_n can be written as:ρ̂^𝖬𝖱_n =∑_k_1, ⋯, k_n = 0^∞ u_1^k_1⋯ u_n^k_nℛ_k_n( 𝒫_k_n-1⋯𝒫_k_1ρ̂^𝖬_0 ⊗|0⟩⟨0|^⊗ R).The measured features x_j(n) can then be obtained via x_j(n) = Tr ( M̂_j ρ̂^𝖬𝖱_n). In the SI <ref>, we show that these x_j(n) obtained using theframework can indeed be expressed as a Volterra seriesx_j (n) = ∑_k = 0^∞∑_n_1 = 0^∞⋯∑_n_k = n_k-1^∞ h_k^(j) (n_1, ⋯, n_k) ∏_κ = 1^k u_n - n_κin the infinite-shot limit. The existence of this manifestly time-invariant form is only possible due to the existence of an information steady-state, guaranteed for a quantum mechanical system under measurement.Due to fading memory, the Volterra kernel h_k^(j) (n_1, ⋯, n_k) characterizes the dependence of the systems' output at time n on inputs at most n_k steps in the past (recall n_1 ≤⋯≤ n_k, see eq:Volterra_method). The evolution of ρ̂^𝖬𝖱_n upto step n-n_k, namely for all i < n - n_k, is thus determined entirely by the null-input superoperator 𝒫_0. Then the existence of a Volterra series simply requires the existence of an asymptotic steady state for the 𝖬emory subsystem, lim_n →∞𝒫^n_0 ρ̂^𝖬_0 = ρ̂^𝖬_FP. As shown in the SI <ref>, such a fixed point is usually ensured by the map 𝒫_0 ρ̂^𝖬 = 𝒞(0) ρ̂^𝖬 = Tr_𝖱 ( 𝒰(0) (ρ̂^𝖬⊗|0⟩⟨0|^⊗ R) ) being a CPTP map in generic quantum systems. This immediately indicates the fundamental importance of 𝒫_0, the operator that corresponds to the single-step map of the 𝖬emory subsystem under null input: it determines the ability of theframework to evolve the quantum system to a unique statistical steady state, guaranteeing the asymptotic time-invariance property, and hence the existence of the Volterra series. 
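To make the structure of the resulting series concrete, a second-order truncation of the Volterra series above can be evaluated numerically as in the sketch below (illustrative only): h0 is the constant term, h1 a length-D first-order kernel, and h2 a D×D second-order kernel with support on n_1 ≤ n_2.

import numpy as np

def volterra_order2(u, h0, h1, h2):
    D = len(h1)
    y = np.full(len(u), float(h0))
    for n in range(len(u)):
        # Window of the D most recent inputs, zero-padded before the sequence start.
        past = np.array([u[n - p] if n - p >= 0 else 0.0 for p in range(D)])
        y[n] += np.dot(h1, past)                   # first-order (linear) memory
        for p1 in range(D):
            for p2 in range(p1, D):                # n_1 <= n_2 ordering of the series
                y[n] += h2[p1, p2] * past[p1] * past[p2]
    return y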
One byproduct of computing infinite-S features { x_j (n) } is that it enables us to approximately simulate {X̅_j (n) } in a very deep N-layer circuit for finite S, without sampling individual quantum trajectories under N repeated projective measurement described in Methods <ref>. In fact, given any n, once we evaluate a probability distribution { x_j (n) ≥ 0 } satisfying ∑_j x_j (n) = 1, we can i.i.d. sample under this distribution vector for S shots and construct the frequency {X̃_j (n) } as an approximation of {X̅_j (n) }. The validity of this approximation is ensured by the additive nature of loss functions in dimension of time. More specifically, given Q input sequences {u^(q)∈ [-1, 1]^N }_q ∈ [Q], a general form of loss function is ℒ = 1/Q N∑_q ∑_n ℒ (X̅ (n ; u^(q))). As shown in Appendix C5 of Ref. <cit.>, 1/Q∑_q ℒ (X̅ (n ; u^(q))) ≈1/Q∑_q ℒ (X̃ (n ; u^(q))) in all orders of 1/S-expansion for any n ∈ [N], as long as Q is large enough. This is because the probability distribution of {X̃_j (n) } is exactly the same as the distribution (marginal in time slice) of {X̅_j (n) }. Therefore, 1/Q N∑_q ∑_n ℒ (X̃ (n ; u^(q))) is a good approximation of ℒ. In SI <ref> and <ref>, we show that without the reset operation, the fixed-point 𝖬emory subsystem density matrix is the identity, ρ̂^𝖬𝖱_FP = Î^⊗ L/2^L. While this steady state is independent of the initial state and therefore possesses a fading memory, it can be shown that the I/O map it enables is entirely independent of all past inputs as well, so that all Volterra kernels h_k^(j) = 0. This yields a trivial reservoir, unable to provide any response to its inputs u. Such single-step maps 𝒞(u) are referred to as unital maps (maps that map identity to identity), and must be avoided for thearchitecture to approximate any nontrivial functional. The inclusion of reset serves this purpose handily, although we have found certain improper encodings with reset to still result in unital maps 𝒞(u) (e.g., setting n_T=1 in the circuit ansatz depicted in Fig. <ref>). A more rigorous sufficient condition for obtaining a nontrivial functional map, referred to as fixed-point non-preserving map in the main text, is that 𝒞(u) does not share the same fixed points for all u. It is equivalently 𝒫_k ϱ̂^𝖬_FP≠ 0 for some k≥ 1, due to the identity 𝒞(u) ϱ̂^𝖬_FP = ϱ̂^𝖬_FP + ∑_k=1^∞ u^k 𝒫_k ϱ̂^𝖬_FP. We will prove the importance of this criteria in <ref> of SI. The breaking of this criteria will lead to a memoryless reservoir for all earlier input steps: if 𝒫_kρ̂^𝖬_FP = 0 for all k ≥ 1, then h^(j)_k(n_1, n_2, ⋯, n_k) ≠ 0 only if n_1=n_2=⋯=n_k=0. §.§ Spectral theory of : Memory, Measurement, and Kernel structuresRecall that we can always define the spectral problem 𝒫_0 ϱ̂^𝖬_α = λ_αϱ̂^𝖬_α where ϱ̂^𝖬_α are eigenvectors that exist in the (2^M)^2 = 4^M-dimensional space of 𝖬emory subsystem states, and whose eigenvalues satisfy 1 = λ_1 ≥ |λ_2| ≥⋯≥ |λ_4^M| ≥ 0. The importance of the spectrum of 𝒫_0 is obvious from the definition of ρ̂^𝖬_FP already. As ρ̂^𝖬_FP is the fixed point of the map defined by 𝒫_0, it must equal the eigenvector ϱ̂^𝖬_1 since λ_1=1. Then writing the initial density matrix in terms of these eigenvectors, ρ̂^𝖬_0 = ∑_α d_0αϱ̂^𝖬_α, the fixed point becomes ρ̂^𝖬_FP =lim_n→∞( ϱ̂^𝖬_1 + ∑_α≥ 2 d^0_αλ_α^n ϱ̂^𝖬_α). 
This not only reproduces the result lim_n →∞𝒫^n_0 ρ̂^𝖬_0 = ρ̂^𝖬_FP but also shows that the approach to the fixed point ρ̂^𝖬_FP = ϱ̂^𝖬_1 must be determined by the magnitude of λ_2; the smaller the magnitude, the faster terms for α≥ 2 decay and hence the shorter the memory time.To see more directly how the spectrum of 𝒫_0 influences memory of inputs, it is sufficient to analyze the Volterra kernels in eq:Volterra. Focusing on single-time contributions from u_n-p to x_j(n) at all orders of nonlinearity (multi-time contributions are exponentially suppressed, see SI <ref>), these may be expressed as ∑_k=1^∞ h_k^(j)(p^⊗ k) u^k_n - p = ∑_α = 2^4^Mν^(j)_αλ^p - 1_α F_α(u_n-p),which can be viewed as a spectral representation of Volterra kernel contributions to the jth measured feature obtained via POVM M̂_j. Here, F_α (u) = ∑_k = 1^∞ c^(k)_α 1 u^k define 4^M-1 internal features, so-called as they depend only on input encoding operators via 𝒫_k ϱ̂^𝖬_α' = ∑_α = 2^4^M c_αα'^(k)ϱ̂^𝖬_α, and are in particular independent of the measurement scheme. Nontrivial F_α (u) and c_α 1^(k) can be guaranteed if 𝒫_k ϱ̂^𝖬_FP≠ 0 for some k≥ 1. The dependence of observables on the measurement basis is via coefficients ν^(j)_α = Tr ( M̂_j ℛ_0 ( ϱ̂^𝖬_α⊗|0⟩⟨0|^⊗ R ) ). Crucially, the weighting of F_α(u_n-p) for p steps in the past is determined by eigenvalues λ_α^p-1 of 𝒫_0. For each α≥ 2, it vanishes when we take long time limit p→∞. This property is usually referred as fading memory. It also clearly defines a set of distinct, but calculable, memory fading rates {|λ_α|}_α≥ 2.Importantly, the ability to construct Volterra kernels and internal features enable us to approximately treat the infinite-dimensional function x_j(n) = ℱ_j(u_≤ n) as a function with support only over a space with effective task dimension d_eff = O(n_ M), representing d_eff time steps in the past:x_j(n) = ℱ_j(u_≤ n) ≈ℱ_j(u_n - d_eff, ⋯, u_n-1, u_n),and we can interpret the fading memory functional as a function: y(n) ≈ℱ(u_n-d_eff, ⋯, u_n-1, u_n). In other words, at any given timecan approximate nonlinear functions that live in a domain of dimension d_eff. §.§ IBMQ ImplementationWe recall that the encoding circuit Û(u_n) = (𝒲(J)ℛ_z(θ^z +θ^I u_n ) ℛ_x(θ^x) )^n_T for the experimental IBMQ implementation in Sec. <ref> describes a composite set of single and two-qubit gates repeated n_T times. Here ℛ_x,z are composite Pauli-rotations applied qubit-wise, e.g. ℛ_z = ⊗_iR̂_z(θ^z_i +θ^I_i u). 𝒲(J) defines composite two-qubit coupling gates, 𝒲(J) = ∏_⟨ i, i' ⟩𝒲_i, i'(J) = ∏_⟨ i, i' ⟩exp{- i (Jτ/n_T) σ̂^z_iσ̂^z_i'} for neighboring qubits i and i' along a linear chain in the device and some fixed J. The rotation angles θ^x,z,I are randomly drawn from a positive uniform distribution with limits [a,a+δ], where a = τ/n_Tθ_min^x,z,Iand δ = τ/n_TΔθ^x,z,I. We find that letting the number of Trotterization steps n_T=3 is sufficient to generate a well-behaved null-input CPTP map 𝒫_0. Our hyperparameter choices are further tuned to ensure a memory time n_ M commensurate with the CE task dimension. The particular hyperparameter choices for the plot in Fig. <ref> are θ^x,z,I_ min= {1.0,0.5,0.1}, Δθ^x,z,I = θ^x,z,I_ min, J=1, n_T = 3, and τ = 1. In the experiment, mid-circuit measurements and qubit resets are performed as separate operations, due to the differences in control flow paths between returning a result and the following qubit manipulation <cit.>. Related hardware complexities restrict us to a slightly shorter instance of the CE task than considered in Sec. 
<ref>, with messages m(n) of length N=20, submitted in batches of 200 jobs with 100 circuits each and 125 observations (shots) per circuit in order to prevent memory buffer overflows. Regardless, using cross-validation techniques, we ensure that our observed training and testing performance is not influenced by limitations of dataset size. We also forego the initial washout period needed to reach ρ^𝖬𝖱_ FP for similar reasons. Finally, the 𝒲_i, i'(J) rotations in the two-qubit Hilbert space that implement 𝒲(J) are generated by the native echoed cross-resonance interaction of IBM backends <cit.>, which provides higher fidelity than a digital decomposition in terms of CNOTs for Trotterized circuits <cit.>. § ACKNOWLEDGEMENT This research was developed with funding from the DARPA contract HR00112190072, AFOSR award FA9550-20-1-0177, and AFOSR MURI award FA9550-22-1-0203. The views, opinions, and findings expressed are solely the authors' and not the U.S. government's. The authors acknowledge the use of IBM Quantum services for this work.
http://arxiv.org/abs/2312.16165v1
{ "authors": [ "Fangjun Hu", "Saeed A. Khan", "Nicholas T. Bronn", "Gerasimos Angelatos", "Graham E. Rowlands", "Guilhem J. Ribeill", "Hakan E. Türeci" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226185433", "title": "Overcoming the Coherence Time Barrier in Quantum Machine Learning on Temporal Data" }
Alind Khare^1, Dhruv Garg^1, Sukrit Kalra^2, Snigdha Grandhi^3 (work done as a student at Georgia Tech), Ion Stoica^2, Alexey Tumanov^1
^1Georgia Tech  ^2UC Berkeley  ^3Adobe
{alindkhare, dgarg39, sgrandhi32, atumanov}@gatech.edu, {sukrit.kalra, istoica}@berkeley.edu

Fine-Grained Inference Serving for Unpredictable Workloads

The increasing deployment of ML models on the critical path of production applications, both in the datacenter and at the edge, requires ML inference serving systems to serve these models under unpredictable and bursty request arrival rates. Serving models under such conditions requires these systems to strike a careful balance between the latency and accuracy requirements of the application and the overall efficiency of utilization of scarce resources. State-of-the-art systems resolve this tension by either choosing a static point in the latency-accuracy tradeoff space to serve all requests or loading specific models on the critical path of request serving. In this work, we instead resolve this tension by simultaneously serving the entire range of models spanning the latency-accuracy tradeoff space. Our novel mechanism achieves this by carefully inserting specialized operators in weight-shared SuperNetworks. These operators enable it to dynamically route requests through the network to meet a latency and accuracy target. It requires up to 2.6× lower memory to serve a vastly higher number of models than the prior state-of-the-art. In addition, its near-instantaneous actuation of models unlocks the design space of fine-grained, reactive scheduling policies. We explore the design of one such extremely effective policy, and instantiate both the mechanism and the policy in a real system. The resulting system achieves 4.67% higher accuracy for the same SLO attainment and 2.85× higher SLO attainment for the same accuracy on a trace derived from the real-world Microsoft Azure Functions workload, and yields the best tradeoffs on a wide range of extremely bursty synthetic traces automatically.

§ INTRODUCTION

Recent advancements in machine-learning (ML) techniques have unlocked vast improvements in both the accuracy and efficiency of a variety of tasks such as image classification <cit.>, object detection <cit.>, text summarization <cit.>, sentiment analysis <cit.>, and next-word prediction <cit.>. As a result, ML models have been quickly deployed across a wide range of applications both in datacenters <cit.> and on the edge <cit.>, and are now subject to the stringent requirements of a production application. Notably, ML models on the critical path of these applications must now deal with unpredictable request rates that rapidly change at a sub-second granularity. For example, web applications in datacenters increasingly rely on ML models <cit.>, and are known to have extremely bursty request rates, with a peak demand that is 50× higher than the average <cit.>. Similarly, request rates in autonomous vehicles change rapidly as a function of various factors such as the terrain (city vs. freeway driving), time of the day, etc.
<cit.>.Thus, ML inference serving systems have the arduous task of striking a careful balance between three key requirements of production applications facing unpredictable request rates:R1: LatencyML models are being increasingly deployed in applications that have extremely stringent latency requirements,quantified using a Service-Level Objective (SLO) <cit.>. For example, both web serving <cit.> in datacenters andautonomous vehicles <cit.> on the edge must maximize the number of requests completedwithin the specified SLO ranging from 10s to 100s of milliseconds <cit.>.R2: AccuracyTechniques such as Neural Architecure Search (NAS) <cit.> have enabled the development of multiple ML models that offer varying accuracies for a particular task. As a result, applications demand the highest-accuracy results possible within the latency targets of their requests. For example, increased accuracy has been intricately tied to a better user experience for web applications <cit.>. Similarly, the safety of an autonomous vehicle heavily relies on the accuracy of various different ML models <cit.>.R3: Resource-EfficiencyWeb applications at Facebook process 200 trillion ML modelrequests daily <cit.>, which represents a significant fraction of Facebook's datacenterdemands <cit.>. In addition to the increasing proliferation of ML models,their growingreliance on scarce resources such as GPUsand specialized accelerators (e.g, TPUs <cit.>, AWSInferentia <cit.>) for efficient inference has lead to resource tensions across applications in both datacenters and on theedge <cit.>. Thus, inference serving systems that serve awide range of ML models must make judicial use of these scarce resources.The first-generation of inference serving systems <cit.> resolves this tension by choosing a static point in the tradeoff-space between R1-R3 and serving all requests using the same model for the entirety of the application's runtime. As a result, applications must make a one-time decision to forego meeting their SLO targets (R1) under burstyrequest rates or suffer degraded accuracy (R2) under normal conditions. More recently, state-of-the-art inference serving systems <cit.> enable applications to register multiple ML models spanning the entire pareto frontier of latency (R1) and accuracy (R2) targets, andautomatically choose the appropriate model to serve requests based on the incoming request rates. These systemsmust either keep the entire set of models in memory or rely on model switching techniques to load the required models at runtime <cit.>. As GPU memory remains the key resource bottleneck in both datacenter and edge inferenceserving <cit.>, these systems must now choose betweenR3 – effectively utilizing the available resources (by incurring the enormous latency penalties ofswitching models), or R1 – meeting SLO targets under highly unpredictable request rates.Conventional wisdom in inference serving literature touts the “non-negligible provisioning time [for ML models due to switching],which can exceed the request processing times" as a“key characteristic of ML workloads", and "rules out reactive techniques" for responding to bursty request rates <cit.>. This wisdom has been widely accepted <cit.> leading to the development of coarse-grained scheduling policies for inference serving that must account for the enormous latency penalty of switching models whenreacting to bursty request rates. 
As a result, these coarse-grained policies typically avoid or minimize switching models by design <cit.>, and are hence, unable to optimally navigate the tradeoff space between R1-R3 under rapidly-changing, unpredictable request rates.In this work, we challenge this conventional wisdom that forces a choice between R1 and R3. We describe a mechanism, , to simultaneously serve the entire range of models spanning the latency-accuracy tradeoff space (R1-R2) in a resource-efficient manner (R3). At the core of our mechanism are novel control-flow and slicing operators thatcarefully inserts into the SuperNet <cit.> neural architectures. SuperNets enable a latency-accuracy tradeoff (R1-R2) by traininga set of shared model weightsfor many neural networks,without duplication. Prior works <cit.> propose efficient mechanisms for training SuperNets for both vision and NLP tasks, but requireeach model instance to be individually extracted for inference,leading to a similar choice as before between R1 and R3 – either load all individual models or switch between them at runtime. However, 's novel operators obviate the need to extract individual models and load them dynamically at runtime. Instead,dynamically routes requests within oneSuperNet deployment with negligible overhead, enablingnear-instantaneous actuation of different models. As a result,unlocks orders of magnitude improvements in the navigation of the latency-accuracy tradeoff-space(R1-R2), while substantiallyreducing the memoryfootprint (R3)(see <ref>).In addition to being resource-efficient (R3), 's agility in navigatingthe latency-accuracy tradeoff space (R1-R2) fundamentallychanges the design space of scheduling policies. Instead of complex scheduling policies that must reason about future request rates in a bid to avoid paying the latency of switching ML models dynamically under bursts,enables the specification of simple policies thatdirectly optimize for the key success metricsR1-R3. While conventional wisdom deems such reactive policies infeasible,we explore one example point in this design space with a simple, yet effective policy that we call .is a reactive scheduling policy that exploits the near-instantaneous actuation property ofto make fine-grained decisions about how many requests to serve in a batch, and which latency/accuracy choice to select for serving in real-time.We summarize the contributions of this paper as follows: * We introduce(<ref>), a novel mechanism that enables a resource-efficient,fine-grained navigation of the latency-accuracy tradeoff space.achieves this by carefully inserting novel control-flow and slicing operators that dynamically route requeststhrough a single SuperNet.* We unlock the design space of fine-grained, reactive scheduling policies and provide a mathematical formulation of their objective (<ref>). We then propose(<ref>), a simple, yet effective greedy heuristic and show how it accurately approximates the optimal objective.* We instantiateandin a real-world system, , a real-time asynchronous model serving system with pluggable scheduling policies (<ref>).* We extensively evalutewith bothand several state-of-the-art scheduling policies (<ref>). We find thatachieves 4.67% higher accuracy for the same SLO attainment and 2.85× higher SLO attainment for the same accuracy on the real-world Microsoft Azure Functions trace.§ MOTIVATION AND BACKGROUND In <ref>, we motivate the development of a reactive, fine-grained scheduling policy that maximizes R1-R3 under unpredictable, bursty request rates. 
<ref> then provides a background on Supernets <cit.> and discusses the properties of Supernetsrelevant to R1-R2 that make them a good fit for a fine-grained exploration of the latency-accuracy tradeoff. §.§ Fine-Grained Reactive Scheduling Prior works in inference serving systems <cit.> have exhaustively analyzed both production traces from Microsoft Azure Functions (MAF) <cit.> and syntheticapplication traces with a goal of highlighting their bursty request arrival patterns. For example, Zhang et al. <cit.>underscore the high coefficient of variance in request arrivals in production traces <cit.>. Further, the authors claim that the bursty “sub-second request arrival patterns [are] nearly impossible to predict", thus frustrating the goal of meeting the stringent SLO requirements of requests in an ML-based production applications.A strawman solution to fulfilling SLOs under bursty request rates requires inference serving systems to provision for the peak. In this setting, these systems load the entire set of models spanning the latency-accuracy tradeoff space into GPU memory and switch between them as request rate fluctuates. While this reduces the actuation latency of switching models, allowing inference serving systems to rapidly degrade accuracy (R2) under bursts to meet SLO targets (R1), it wastes resources under normal request rates (R3). As a result, state-of-the-art inference serving systems <cit.> rely on model switching techniques that page models in and out when required, to make efficient use of GPU memory (R3). However, as we show in <ref>, the loading time of ML models from CPU to GPU memory is vastly more than the inference time of the biggest batch size, and the gap widens as the model sizes increase. Thus, in a bid to offset the cost of loading models on the critical path of handling requests, inference serving systems rely on predictive scheduling policiesthat make coarse-grained estimations of future request arrival patterns. Such policies are bound to be suboptimal owing to the difficulty of predicting the short bursts in request arrival rates coupled with their stringent SLO requirements <cit.>. We believe that the key to optimally serving bursty request rates instead lies in the ability to rapidly switch between ML models thus obviating the need for coarse-grained predictive scheduling policies. To validate our hypothesis, we simulate a coarse-grained policy with an actuation delay (i.e., time taken to switch to a new ML model that can handle the current request rate) of 100ms and an idealistic fine-grainedpolicy with an actuation delay of 0ms. <ref> plot the effects of these policies on a small bursty subtrace from the MAF trace. We observe that the coarse-grained policy leads to higher SLO misses (R1)under increasing request rates and wasted resources (R3) under decreasing request rates. On the other hand, the fine-grained policy is able to instantaneously adjust to the increasing and decreasing request rates leading to no missed SLOs and effective utilization of the GPU.§.§ Weight-Shared Supernets The problem of navigating the pareto-optimal frontier of the latency-accuracy tradeoff space (R1-R2) by finding highest accuracy DNNs for a specific latency target is well studied in ML literature. Conventional Neural Architecture Search (NAS) <cit.> approaches have enabled architectures tailored to a particular latency target. 
However, these approaches search for and train individual networks for a particular latency target and hence require inference serving systems to either load all of them into memory and waste resources (R3), or switch between them at runtime.Instead, recent works <cit.> have proposed first training one supernet and then extracting a subset of its layers to form subnets. As a result, once the supernet is trained, no further retraining is required for a specific subnet. Each extracted subnet targets a specific point in the latency-accuracy tradeoffspace, and partially shares its weights/layers with other subnets. In addition, this automated neural architecture search (NAS) yields subnets that correspond to vastly superior points in the latency-accuracy tradeoff space (R1-R2). For example, <ref> highlights the accuracy benefits of subnets extracted from a ResNet-based supernet when compared to the hand-tuned ResNets for an equivalent number of FLOPs. In order to find subnets that target a particular point in the latency-accuracy tradeoff space, the architecture search in supernets relies on the following parameters:-Depth (𝔻) describes the depth of a subnet,Expand Ratio (𝔼) describes layer-wise ratio of output to input channels of a convolution or fully-connected layer, andWidth Multiplier (𝕎) describes layer-wise fraction of input and output channels to be used.These parameters (𝔻, 𝔼, 𝕎) combinatorially create an architecture space, Φ (|Φ| ≈ 10^19) <cit.>, from which individual subnets are extracted statically for inference. § : INSTANTANEOUS MODEL ACTUATIONMotivated by <ref>, we seek to develop a fine-grained, reactive scheduling mechanism that enables efficient execution of ML-based production applications (R1-R3) under bursty request rates. As discussed in <ref>, prior work in supernets <cit.> enables the extraction of individual models that target a specific point in the latency-accuracy tradeoff space (R1-R2). However, these approaches yield individual models that must either be simultaneously deployed(wasting resources; R3) or paged in as request rates fluctuate (missing SLOs; R1).To resolve this fundamental tension, we make the key observation that by virtue of performing architectural search post training, a supernet subsumes the entire architectural space of subnets. As a result, instead of extracting and deploying individual subnets (as done previously), we can instead deploy a supernet and dynamically route requests to the appropriate subnet. This observation leads us to introduce , a memory-efficient model actuation mechanism (R3) that exposes fine-grained control decisions to near-instantaneously switch betweensubnets in order to pick the optimal point in the latency-accuracy tradeoff space (R1-R2).'s (<ref>) key insight lies in the introduction of the following three novel operators that enable it to dynamically route requests to the required subnet in a supernet:LayerSelect.takes as input the depth 𝔻, and dynamically executes layers of a specific subnet based on 𝔻.The depth 𝔻 is converted byto per-layer boolean values that determine which layers participate in inference.then wraps each layer in a LayerSelect operator that enforces control-flow by eitherpassing the input activation to the wrapped layer or skipping the layer and directly forwarding input to the next layer. This operator enables layer-sharing among subnets that differ in depth (Φ_𝔻⊂Φ), which reduces the GPU memory consumption of the supernet (R3). 
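To make the control flow concrete, here is a minimal plain-PyTorch sketch of a LayerSelect-style wrapper; the real operators are implemented at the TorchScript IR level (as noted later in the system description), and all identifiers here are illustrative rather than the system's actual API:

```python
import torch
import torch.nn as nn

class LayerSelect(nn.Module):
    """Wrap a layer; either run it or forward the input unchanged.

    The boolean `active` is derived from the depth setting D of the subnet
    picked by the scheduler, so switching subnets only flips flags in place;
    no weights are loaded or copied.
    """
    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer
        self.active = True

    def set_active(self, active: bool) -> None:
        self.active = active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity skip when this layer is not part of the selected subnet
        # (assumes matching input/output shapes, as in residual blocks).
        return self.layer(x) if self.active else x

if __name__ == "__main__":
    blocks = nn.ModuleList([LayerSelect(nn.Linear(16, 16)) for _ in range(4)])
    depth = 2  # depth setting D chosen by the scheduler
    for i, blk in enumerate(blocks):
        blk.set_active(i < depth)
    x = torch.randn(8, 16)
    for blk in blocks:
        x = blk(x)
    print(x.shape)  # torch.Size([8, 16])
```

Because only boolean flags change between requests, every depth choice is served by the same resident weights.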
Moreover, it enables near-instantaneous (R1) switching of the supernet's accuracy (R2) under bursty request rates.SubnetNorm.We observe that naively introducing the LayerSelect operator leads to a significant drop in subnet accuracy (as low as 10%). This is due to the incorrect tracking of the mean (μ) and variance (σ) in normalization layers such as BatchNorm <cit.>. To account for this discrepancy,introduces the SubnetNorm operator that precomputes and stores μ and σ for each possible subnet by performing forward pass inference on the training data. SubnetNorm takes as input a unique subnet ID (i) and a layer ID (j) and outputs the precomputed normalization statistics μ_i,j and σ_i,j. The layer j then uses the provided statistics to perform normalization of activations, effectively specializing j for each subnet i.Although this bookkeeping increases the memory requirements of deploying the supernet, <ref> shows that the overhead of these non-shared normalization statistics is 500× smaller than the memory requirement of the shared layers. As a result,can host thousands of subnets in memory by only keeping the statistics unique to each subnet and sharing the non-normalization weights amongst all the subnets.WeightSlice.This operator dynamically selects channels in convolution or fully-connected layers during inference. The input to WeightSlice is the expand ratio (𝔼) or width multiplier (𝕎) for each layer, which collectively determine the number of channels to be used. The operator outputs layer-specific channel indices, which are then used to select weight subsets for the forward pass inference. The operator enables partial layer-sharing among subnets ({Φ_𝔼∪Φ_𝕎}⊂Φ), thus increasing the number of available subnet architectures. As a result,is able to provide the entire set oflatency-accuracy options (R1-R2) to the scheduling policies.We note that the input to these operators (i.e., depth (𝔻), expand ratio (𝔼) and width multiplier (𝕎)) remains similar to the inputs forarchitectural search in ML literature <cit.>. Moreover, these control inputs are independent from the input to the actuated subnet (i.e., the request served by the model), and are declaratively specified by a scheduling policy(<ref>). Given the arrival rate, the scheduling policy chooses a specific subnet for a request (by specifying the control tuple 𝔻, 𝔼 and 𝕎), which is then actuated bynear-instantaneously. §.§ Discussion: Efficacy ofWe now highlight 's efficacy in achieving key application requirements (R1-R3) under bursty request rates.Reduced Memory Requirements 's novel operators enable subnets to share layers in place and dynamically route requests to the appropriate subnet based on the control tuple 𝔻, 𝔼 and 𝕎 determined by a scheduling policy. As a result,can simultaneously serve the entire range of models spanning the latency-accuracy tradeoff space while drastically reducing memory requirements. <ref> demonstrates this by comparing the memory requirement of loading four different ResNets <cit.>, six individually extracted subnets <cit.>, and that enables simultaneous actuation of 500 subnets. We observe thatcan reduce memory consumption by upto 2.6×, while vastly increasing thelatency-accuracy tradeoff points that can be actuated.Near-Instantaneous Model Actuation While switching between individual models requires loading their weights to the GPU, 's operators enable scheduling policies to actuate any subnet in place without incurring additional loading overhead. 
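In the same illustrative spirit as the LayerSelect sketch above, the SubnetNorm and WeightSlice operators can be expressed roughly as follows (shapes, the per-subnet statistics lookup, and all identifiers are simplified assumptions rather than the system's actual TorchScript-level implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubnetNorm(nn.Module):
    """Normalization whose running statistics are looked up per subnet ID.

    Only (mu, var) are duplicated per subnet; the affine weight/bias stay
    shared, keeping the per-subnet bookkeeping small relative to the layers.
    """
    def __init__(self, num_features: int, num_subnets: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # Pre-computed offline by forward passes of each subnet on training data.
        self.register_buffer("mu", torch.zeros(num_subnets, num_features))
        self.register_buffer("var", torch.ones(num_subnets, num_features))

    def forward(self, x: torch.Tensor, subnet_id: int) -> torch.Tensor:
        mu = self.mu[subnet_id].view(1, -1, 1, 1)
        var = self.var[subnet_id].view(1, -1, 1, 1)
        x = (x - mu) / torch.sqrt(var + 1e-5)
        return x * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)

def sliced_conv(x: torch.Tensor, conv: nn.Conv2d, in_ch: int, out_ch: int):
    """WeightSlice-style call: run `conv` using only the first `out_ch` output
    and `in_ch` input channels of its weight tensor, selecting a narrower
    subnet in place."""
    w = conv.weight[:out_ch, :in_ch]
    b = conv.bias[:out_ch] if conv.bias is not None else None
    return F.conv2d(x[:, :in_ch], w, b, stride=conv.stride, padding=conv.padding)

if __name__ == "__main__":
    x = torch.randn(2, 32, 8, 8)
    conv = nn.Conv2d(32, 64, 3, padding=1)
    print(sliced_conv(x, conv, in_ch=16, out_ch=48).shape)  # torch.Size([2, 48, 8, 8])
    norm = SubnetNorm(num_features=64, num_subnets=4)
    print(norm(conv(x), subnet_id=2).shape)                 # torch.Size([2, 64, 8, 8])
```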
<ref> compares the time taken to perform on-demand loading of individual subnetworks versus in-place actuation of a subnet in . We observe that 's model actuation is orders-of-magnitude faster than on-demand loading of ML models. This allows scheduling policies that useto rapidly actuate lower-accuracy models under bursty conditions (R1) and switch to higher-accuracy models under normal load (R2),without coarse-grained predictions about future request rates. Increased Throughput & Accuracy By providing instant model actuation,allows scheduling policies to scale the throughput of the system up and down rapidly, thus inducing a broad throughput range within as few as 6% of accuracy to help meet SLO guarantees (R1-R2). <ref> compares the maximum sustained ingest throughput for a point-based open-loop arrival curve for serving the largest, smallest, and a median subnetwork on 8 workers. We observe thatcan serve a wide throughput range from 2000-8000 QPS, while being able to instantaneously increase accuracy between 74% to 80%.§ FINE-GRAINED SCHEDULING POLICIES's near-instantaneous actuation of the entire latency-accuracy tradeoff space unlocks the development of extremely fine-grained scheduling policies that can directly optimize for R1-R2. We note that by virtue of aggressively sharing weights/layers and dynamically routing requests within a supernet, automatically maximizes resource efficiency (R3). In this section, we first start with the problem formulation of online serving for the new space of fine-grained policies enabled by (<ref>). We then describe our proposed policy that aims to achieve both high accuracyand latency SLO attainment (<ref>).§.§ Problem Formulationhosts a set of all possible subnets Φ, where each subnet ϕ∈Φis defined by the control tuple 𝔻, 𝔼, 𝕎 and has an accuracy Acc(ϕ) and a latency-profile l_ϕ(B) as a function of batch-size B. A query[We use the term query and request interchangeably.] q arrives to the scheduler at time a_q with an SLO d_q. A fine-grained scheduling policy focuses on selecting a subnet across supernets for queries across GPUs, along with a batch B to execute the query in <cit.>. The arrival time a(B) of B is the earliest arrival time and the deadline d(B) is the earliest deadline of all queries in B.ℬ is the set of all possible batches of queries.The policy decides if B ∈ℬ should execute at time t using subnet ϕ on GPU n, which is captured by a decision variable I(B,t,n,ϕ) ∈{0,1}. Goal The policy's goal is to maximize the number of accurate responses (to queries) within specified SLO (R1-R2).Optimal Offline ILPWe now present the ILP formulation of a scheduling policy that achieves our stated goal withan oracular knowledge about future query arrivals: maximize∑_t ∑_n ∑_ϕ∈Φ∑_B ∈𝔹Acc(ϕ)·|B| · I(B,t,n,ϕ) <ref> s.t.∑_t ∑_n ∑_ϕ∈Φ∑_{B | q ∈𝔹}I(B,t,n,ϕ) ≤ 1,∀ q ∑_B ∈𝔹∑_{t^'≤ t ≤ t^'+l_ϕ(B) }I(B,t^',n,ϕ) ≤ 1,∀ n,t,ϕ a(B)· I(B,t,n,ϕ) ≤ t,∀ n,t,B,ϕ ∑_ϕ∈Φ I(B,t,n,ϕ) ≤ 1,∀ n,t,B ∑_ϕ∈Φ (l_ϕ(B)+t) · I(B,t,n,ϕ) ≤ d(B),∀ n,t,B I(B,t,n,ϕ) ∈{ 0, 1},∀ n,t,B,ϕ The ILP maximizes the number of queries that satisfy their latency SLOs with the highest possible accuracy across all the selected query batches ∃ ϕ : I(B,t,n,ϕ) = 1 and Acc(ϕ) · |B| is maximized. 
The constraints of the ILP denote - (1a) A query q can be assigned to at-most one batch B.(1b) A GPU n can only execute a single subnet ϕ on a single batch B at a particular time t.(1c) Batch B can only execute after its arrival time a(B).(1d) Each batch B can be served with a maximum of one subnet ϕ on a GPU n at a time t. (1e) The batch should complete before deadline d(B).(1f) The choice variable I(B,t,n,ϕ) is a boolean indicator.We note that our formulation (<ref>) is a Zero-one Integer Linear Program (ZILP) and solving it is known to be NP-Hard <cit.>. Furthermore, it is impractical to expect oracular query arrival knowledge. This renders the use of the formulated ILP in the online model serving setting unrealistic. Instead, we approximate its behavior under different query traffic conditions with an online scheduling policy instead.§.§ : Online-Scheduling PolicyWe introduce —a simple yet effective online scheduling policy that aims to maximize accuracy and latency SLO (R1-R2). is a greedy heuristic that approximates the ILP-based policy in Eq. (<ref>) and makes the decision-making tractable. makes following design choices:Operates on Pareto-Optimal Subnets (Φ_pareto) To make subnet choices in reasonable time, operates on the Φ_pareto instead of Φ. Φ_pareto is a set of pareto-optimal subnets latency, accuracy obtained by using existing neural architecture search (NAS) methods<cit.>[It takes ≤ 2min to perform NAS on supernets by using latency and accuracy predictors]. The size of |Φ_pareto| ≈ 10^3 is orders of magnitude smaller than |Φ| ≈ 10^19. This contributes to rapid scheduling decisions in .Uses Monotonic Properties of Subnets in Φ_pareto leverages key properties of subnets in Φ_pareto to further reduce the control search space by performing O(log) operations. fig:lat_profile:heatmap shows latency profiles of six subnetsuniformly sampled FLOPs fromΦ_pareto. The properties of subnets in Φ_pareto are - (P1) thelatency increases monotonically with batch size, (P2) thelatency increases monotonically with accuracy,(P3) the latency difference among different batch sizes of a subnet increases with increase in subnet accuracy.Properties P1 and P2 further reduce the dimensionality of control search for to a single dimension (latency) to determine both the subnet ϕ and batch size B. Instead of searching the two-dimensional table (fig:lat_profile:heatmap),operates on one-dimensional set of evenly sized latency buckets (fig:lat_profile:num_choices).Each bucket consists of control tuples (B,ϕ) such that l_ϕ(B) remains within the range of bucket width.Buckets are constructed to exploit P3 and to further reduce the search complexity to O(1) for bucket selection. By construction and property P3, low latency buckets contain lower accuracy, higher throughput control choices, as the inference latency of smaller accuracy subnets on higher batch sizes is relatively lower. High latency buckets contain higher accuracy, lower throughput control choices.Slack-Based Decision-making 's insight is that the remaining slack of the most urgent query provides proxy to changes in the traffic. Traffic peaks lead to more queueing delays in the system and in-turn reduces the remaining slack. Under low traffic, the slack remains high. Therefore, uses remaining slack for decision making. It chooses a bucket with latency that is closest to and less than the remaining slack of the most urgent query. All control choices within the bucket satisfy query deadline. 
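The bucket construction and slack-driven bucket choice just described can be sketched as follows (the latency profile is synthetic and the data layout is illustrative; the final greedy pick inside the chosen bucket is discussed immediately after this sketch):

```python
import bisect
from collections import namedtuple

Choice = namedtuple("Choice", "batch_size subnet latency_ms accuracy")

def build_buckets(profile, bucket_width_ms):
    """Group profiled (batch_size, subnet) -> (latency_ms, accuracy) choices
    into evenly sized latency buckets; returns (upper_bounds, buckets)."""
    choices = [Choice(b, s, lat, acc) for (b, s), (lat, acc) in profile.items()]
    n = int(max(c.latency_ms for c in choices) // bucket_width_ms) + 1
    bounds = [(i + 1) * bucket_width_ms for i in range(n)]
    buckets = [[] for _ in range(n)]
    for c in choices:
        buckets[int(c.latency_ms // bucket_width_ms)].append(c)
    return bounds, buckets

def pick(bounds, buckets, slack_ms):
    """Choose the highest non-empty bucket whose upper bound fits within the
    remaining slack, then the maximum-batch-size choice inside it."""
    i = bisect.bisect_right(bounds, slack_ms) - 1
    while i >= 0 and not buckets[i]:
        i -= 1
    if i < 0:
        return None  # no feasible choice within the deadline
    return max(buckets[i], key=lambda c: c.batch_size)

if __name__ == "__main__":
    # Toy profile: latency grows with batch size and with subnet accuracy (P1, P2).
    profile = {(b, s): (0.8 * b * (1 + 0.5 * s), 73.0 + 1.5 * s)
               for b in (1, 2, 4, 8, 16) for s in range(5)}
    bounds, buckets = build_buckets(profile, bucket_width_ms=5.0)
    for slack in (7.0, 15.0, 40.0):
        print(slack, pick(bounds, buckets, slack))
```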
It picks the control choice with maximum batch size to opt for a high throughput choice especially for lower slack (traffic peaks).Behavior With slack-based decision making, can automatically adjust accuracy(R2) and throughput of the system by choosing appropriate latency bucket on variable arrival traffic to maintain high SLO (R1).A well-behaved trace (e.g., low ingest rate, variation) results in higher slack. Higher slack leads to the choice of higher latency buckets. And, higher latency bucketsare correlated strongly with the probability of choosing higher accuracy models (due to property P2).Conversely, bursty traces lead to lower latency bucket choices, as operates under reduced latency slack. Lower latency bucket choices have higher batch sizes configuration (due to property P3). Thus, opportunistically maximizes accuracy while satisfying latency SLO.§.§.§ Approximation of Optimal Offline ILP We now provide insights on how emulates behavior of the optimal offline ILP. To understand the behavior of ILP, we formulate a proxy utility function that captures the inner-term of the ILP objective function in Eq <ref>, the utility function is defined for all subnet ϕ, batch-size B and a deadline d_B (earliest deadline of queries in batch) –𝕌(ϕ,B,d_B) = Acc(ϕ) · |B|,ifl_ϕ(B) < d_B0,otherwiseThe utility in Eq. <ref> is non-zero if and only if subnet ϕ performs batch-inference on batch B within the deadline d_B, and is zero otherwise. Note that this captures the success metrics: maximizing both the number of queries processed (R1) within their deadline and the accuracy they were served with (R2).A. Offline ILP and Prefer Pareto-Optimal Subnets. 's key design choice is to operate pareto-optimal subnets latency, accuracy (Φ_pareto) (sec:pol:slackfit). We claim that offline ILP also tends to pareto-optimal subnets (latency, accuracy), as the these subnets yield higher utility.The utility of pareto-optimal subnets is higher than non-pareto-optimal subnets if they have similar inference latency for a batch of queries.𝕌(ϕ_p,B,d_B) > 𝕌(ϕ_q,B,d_B), ∀ B,d_Bs.t.ϕ_p ∈Φ_pareto, ϕ_q ∈{Φ∖Φ_pareto}, l_ϕ_p(B) ≈ l_ϕ_q(B) This validates 's design choice to operate on pareto-optimal subnets only. We defer the proof to Appendix sec:pareto:utility. B. Offline ILP and Prioritize Lower Accuracy & Higher Batch Size under High Load.We make a key observation that the utility of lower accuracy Acc(ϕ_low), higher batch sizes (B_high) configurations is higher than higher accuracy (Acc(ϕ_high)), lower batch size (B_low) configuration in pareto-subnets of .This is because the factor difference in accuracy of pareto-subnets (< 1) is less than the factor differences of batch-sizes as seen in fig:lat_profile:heatmap Acc(ϕ_high)/Acc(ϕ_low)≤|B_high|/|B_low|⇒Acc(ϕ_high) · |B_low| ≤Acc(ϕ_low) · |B_high|. Therefore, 𝕌(ϕ_low,B_high, d_q) ≥𝕌(ϕ_high,B_low,d_q) may hold true under high load, in cases where the most urgent query q in a batch of k queries (q ∈ B_k) can be served either by a) low accuracy model (ϕ_min) with batch size B_k orb) higher accuracy model(ϕ_max)on a subset of queries (say m, q ∈ B_m) with remaining queries (B_k ∖ B_m) missing the deadline due to high load. In such cases, the offline ILP will tend to option (a). also tends to lower accuracy and higher batch size options under heavy load, as described in “'s Behavior” (sec:pol:slackfit). C. Offline ILP and Prefer Higher Accuracy under Low Load. We make yet another observation from the latency profiles of sampled pareto-optimal subnets in fig:lat_profile:heatmap. 
For a batch size B, such that B = B_1 + B_2 where B_1 > B_2, the following holds true in many cases - B_1 ·Acc(ϕ_high) + B_2 ·Acc(ϕ_low) > B ·Acc(ϕ_mid). Therefore, 𝕌(ϕ_high,B_1, d_q) + 𝕌(ϕ_low,B_2, d(B_2)) ≥𝕌(ϕ_mid,B,d_q), may hold true under low load, where the most urgent query q in batch B can be served by either a) mid accuracy model (ϕ_mid) with batch size B, or b) high accuracy model (ϕ_high) with larger-size batched partition B_1 (q ∈ B_1) with rest of the queries in batch B_2 servedwith the low accuracy model (ϕ_low) and meeting deadline d(B_2). In such cases, ILP will tend to option (b) an option with higher average accuracy. also tends to higher accuracy subnets under lower load, as described in sec:pol:slackfit.§ : SYSTEM ARCHITECTUREis a system that instantiates both mechanism and policy. 's architecture is illustrated in fig:sys_arch_detailed.The client submits asynchronous RPC queries to the router with a deadline.These queries are enqueued to a global earliest-deadline-first (EDF) queue (182). As soon as any worker becomes available, 's fine-grained scheduler is invoked (183). It decides on the query-batch (B) and the subnet (ϕ)which are then dispatched to the worker (184). Upon receiving this query-batch, the worker that instantiates the supernet instantaneously actuates the chosen subnet in-place on the GPU using (185), performs inference (186), and returns predictions for the query-batch (187). The router redirects these predictions back to the client (188).Router. The router runs fine-grained scheduler and interfaces with workers via RPCs. All queries are received, enqueued, and dequeued asynchronously in the router. It maintains pending queries in a global EDF queue, ordered by timestamps which denote query deadlines.The router invokes the scheduler whenever (a) a worker signals availability and (b) the EDF queue is not empty. It then sends query-batches to workers and also passes back the predictions to the clients. Fine-grained Scheduler. The scheduler's control decision is a batch-size and subnet (ϕ = (𝔻, 𝔼, 𝕎)).The scheduler provide pluggable APIs for different policy implementations. is one such policy implemented in the scheduler.All policies in scheduler leverage two key properties to make control decisions:(a) predictability of DNN inference latency,(b) fast actuation of on the query's critical path.Worker. The DNN worker employs the mechanism to host a supernetwork (R3). 's operators are implemented inTorchScript's intermediate representation (IR) <cit.>. After receiving a query-batch and subnet (𝔻, 𝔼, 𝕎) from the router, the worker actuates the desired subnet inplace using .A forward pass on the actuated subnet produces predictions that are returned to the router. The router's scheduler gets notified about worker availability on receiving the predictions.Supernet Profiler. A supernet profiler is used when a supernet is submitted to . This profiling completes apriori, before the workload begins, and off the critical path of the queries.The profiler first employs neural architecture search (NAS) <cit.> to find pareto-optimal subnetworks from the supernetwork for each latency target (key design choice of sec:pol:slackfit). 
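The profiler's latency-measurement step could look roughly like the sketch below (the timing methodology is simplified, and `actuate` is a hypothetical stand-in for the in-place subnet actuation, not a real library call):

```python
import time
import torch

def profile_latency(model, actuate, subnet_configs, batch_sizes,
                    input_shape=(3, 224, 224), warmup=5, iters=20):
    """Median forward-pass latency (ms) for every (subnet config, batch size) pair.

    `actuate(model, cfg)` stands in for the in-place actuation of the subnet
    described by cfg (a hashable tuple of depth/expand-ratio/width settings).
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    table = {}
    with torch.no_grad():
        for cfg in subnet_configs:
            actuate(model, cfg)
            for bs in batch_sizes:
                x = torch.randn(bs, *input_shape, device=device)
                for _ in range(warmup):
                    model(x)
                if device == "cuda":
                    torch.cuda.synchronize()
                samples = []
                for _ in range(iters):
                    t0 = time.perf_counter()
                    model(x)
                    if device == "cuda":
                        torch.cuda.synchronize()
                    samples.append((time.perf_counter() - t0) * 1e3)
                table[(cfg, bs)] = sorted(samples)[len(samples) // 2]
    return table

if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.AdaptiveAvgPool2d(1))
    actuate = lambda model, cfg: None  # no-op stand-in for the demo
    print(profile_latency(net, actuate, subnet_configs=[("d2", "e4", "w1.0")],
                          batch_sizes=[1, 4], iters=5))
```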
The latency profiling is done the pareto-optimal subnets obtained by NAS.This latency is a function of batch size and target worker GPU (latency profile in fig:sys_arch_detailed on RTX2080Ti GPU), and the profiling process for the subnetworks is no different than the model profiling done for other existing models like ResNets <cit.>, Wide-ResNets <cit.>, ConvNeXt <cit.> etc. § EVALUATION We assess 's end-to-end performance its ability to maximize SLO attainment (R1) and accuracy (R2) under a variety of traffic conditions, including synthetic traces (sec:expt:eval:synthetic) and real-world derived Microsoft Azure Functions trace (sec:expt:eval:real). is resource-efficient (R3) due to the use of mechanism, already established in sec:mech:micro. We conclude with microbenchmarks (sec:expt:eval:micro) that show linearly scaling to 33,000 qps and providing transparent fault tolerance.§.§ Experimental SetupSuccess metrics. SLO attainment is defined as the fraction of the queries that complete within the latency deadline (R1). The mean serving accuracy is calculated for the queries that satisfy the SLO and is the average of models' profiled accuracy that were used to serve the queries (R2).Traces. We evaluate on three sets of traces: bursty, time-varying, and real-world. Bursty and time-varying traces are synthetic, similar to those used in InferLine <cit.>.We construct the bursty traces by starting with a base arrival with mean ingest rate λ_b (with CV^2=0) and add a variant arrival trace with mean ingest rate λ_v drawing inter-arrival times from a gamma distribution (fig:expt:syn:sys_dyn:gamma). We vary λ_b, λ_v and CV^2. Time-varying traces differ from bursty by varying the mean ingest throughput over time.We change the mean from μ=1/λ_1 to μ=1/λ_2at rate τ q/s^2 with a fixed CV^2_a. Higher ingest acceleration τ q/s^2corresponds to faster change from λ_1 to λ_2. All synthetic trace generation is seeded. Lastly, we use a MAF trace <cit.> for evaluation on a real-world workload.Baselines.We compare with the single model serving systems that don't perform accuracy trade-offs (and the models are manually selected by users, non-automated serving systems in sec:prevwork). These systems are represented as Clipper^+ baseline and include systems like Clipper <cit.>, Clockwork <cit.>, and TF-serving <cit.>. Clipper^+ is manually configured to serve six different accuracy points (subnets) that uniformly span the supernet's accuracy range and result in its six different versions. We also compare with INFaaS and note thatINFaaS is designed to “pick the most cost-efficient model that meets the [specified] accuracy constraint” <cit.>. However, in the presence of unpredictable, bursty request rates, the choice of the model accuracy to serve in order to meet the SLO requirements is unknown. Since, unlike , INFaaS does not automatically discover the accuracy of the model to serve under unpredictable request rates and instead requires queries to be hand-annotated with accuracy thresholds, we choose to run INFaaS with no accuracy thresholds provided (sec:expt:eval:synthetic:burst,sec:expt:eval:synthetic:tau). In such a scenario, INFaaS reduces to serving the most cost-efficient model (which is the model with the minimum accuracy). We confirmed this behavior with the INFaaS authors, who agree that “[our] representation of INFaaS as a baseline that always chooses the same model is correct in the absence of an accuracy threshold, or a fixed (never changing) accuracy threshold.” <cit.>. Subnet-Profiling. 
We use the supernet trained on ImageNet <cit.> dataset released by <cit.> and enable in it. We extract pareto-subnets (Φ_pareto) by running NAS (publicly released by <cit.>) on trained supernet. The pareto-subnets in the supernet span 0.9-7.5 GFLOPs range and an accuracy range of 73-80%. Pareto-subnets are profiled with varied batch sizes on NVIDIA RTX2080Ti GPU. Test bed. is implemented in C++ (17,500 lines of code). gRPC <cit.> is used for communication between a client, the router and workers.The experiments use 8 RTX2080Ti GPUs and 24 CPU cores. Each worker uses one GPU.§.§ End-to-End: Synthetic We aim to answer the following questions, whether (a) automatically serves queries using appropriate models (accuracy) for different traces (R2),(b)achieves a better trade-off the success metrics (R1-R2),(c) withstands sharp bursts while maintaining high SLO attainment (R1) and (d) instantaneously changes system capacity to serve traces where mean ingest rate changes over time. To answer these questions, we evaluate on the bursty and time-varying traces (sec:expts:setup).§.§.§ Baseline comparison with burstinessfig:expt:burst compares with the baselines over a range of traces increasingmean ingest rate λ_v across and CV^2_a down. All traces are configured with 36ms SLO.Achieving high SLO attainment (R1) and high mean serving accuracy (R2) is desirable, which implies the best trade-off is in the top-right corner of the graph.We demonstrate that no single choice of a model is sufficient for different mean arrival rates and CV^2_a.For instance, the SLO attainment of Clipper^+(76.69)decreases as the CV^2_a increases for λ_v=5550 (row 3).Similarly, the SLO attainment of Clipper^+(78.25) decreases with increase in λ_v for CV^2_a=2 (column 1).We draw the following takeaways: (1) achieves a significantly better trade-off between SLO attainment and accuracy (R1-R2) than the baselines(Clipper^+ and ). It is 4.33% more accurate than the baselines at an SLO attainment level of 0.9999 and2.06x higher SLO attainment at the same accuracy level.is consistently at the top-right corner in fig:expt:burst across all the traces.(2) automatically selects appropriate models for sustaining different traffic conditions.As λ_v increases, reduces serving accuracy while maintaining high SLO attainment (columns).Note that, across all the traces, achieves an optimal SLO attainment but with a significantly smaller mean serving accuracy (by up to 4.33%) than . policy serves the min-cost (and hence min accuracy) model for the trace without accuracy constraints.Whereas, achieves a better trade-off between the success metrics because(a) allows in place activation of different subnetworks without affecting SLO attainment (R1); (b) opportunistically selects higher accuracy models based on query's slack (R2).Also, the difference between and Clipper^+ narrows accuracy as CV^2_a increases. This is because switches to lower accuracy models more frequently with burstier traffic.This system dynamics is detailed in sec:expt:eval:synthetic:sys_dyn.§.§.§ Baseline comparison with arrival acceleration fig:expt:tau evaluates performance at different levels of arrival rate change (i.e., arrival acceleration). Traces start at λ_1 and increase to λ_2 with acceleration τ. Traces fix λ_1=2500 qps and CV^2_a = 8 but change λ_2and acceleration τ . The τ and λ_2 are chosen to demonstrate that single, pre-configured model choices are inadequate to sustain different rates of arrival (mean λ) and acceleration (τ). 
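For concreteness, the bursty and time-varying arrival processes described in the experimental setup can be generated along these lines (a sketch; the paper's actual trace-generation code and parameter choices may differ):

```python
import numpy as np

def bursty_trace(lam_b, lam_v, cv2, duration_s, seed=0):
    """Constant-rate base arrivals at lam_b qps (CV^2 = 0) plus a variant stream
    at mean rate lam_v qps with gamma-distributed inter-arrival times of squared
    coefficient of variation cv2. Returns sorted arrival timestamps (seconds)."""
    rng = np.random.default_rng(seed)
    base = np.arange(0.0, duration_s, 1.0 / lam_b)
    shape, scale = 1.0 / cv2, cv2 / lam_v        # mean 1/lam_v, CV^2 = cv2
    gaps = rng.gamma(shape, scale, size=int(2 * lam_v * duration_s))
    variant = np.cumsum(gaps)
    return np.sort(np.concatenate([base, variant[variant < duration_s]]))

def time_varying_trace(lam1, lam2, tau, cv2, seed=0):
    """Arrivals whose mean rate ramps from lam1 to lam2 qps at acceleration tau
    (q/s^2); each inter-arrival gap is drawn around the instantaneous rate."""
    rng = np.random.default_rng(seed)
    ramp_s = (lam2 - lam1) / tau
    t, arrivals = 0.0, []
    while t < ramp_s:
        lam_t = lam1 + tau * t
        t += rng.gamma(1.0 / cv2, cv2 / lam_t)
        arrivals.append(t)
    return np.array(arrivals)

if __name__ == "__main__":
    tr = bursty_trace(lam_b=1500, lam_v=5550, cv2=8, duration_s=2.0)
    tv = time_varying_trace(lam1=2500, lam2=7400, tau=5000, cv2=8)
    print(len(tr) / 2.0, "qps bursty;", len(tv), "arrivals during the ramp")
```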
Clipper^+(79.44) starts divergingas τ increases ( λ_2 is 6800 qps (row 2)).Similarly, Clipper^+(79.44) starts diverging with increase in λ_2 (τ=250 q/s^2 (column 1)).The key takeaways from this experiment are as follows: * rapidly scales system capacityand achieves a high SLO attainment (0.991-1.0) even with high values ofτ (5000 q/s^2). The experiment demonstrates two key properties of — (a) the actuation delay in is indeed negligible,(b) the lower actuation delay helps achieve higher SLO attainments for time-varying traces (R1).empirically demonstrates “agile elasticity” (sec:back), and withstands high acceleration in arrival rate (τ).* dynamically adjusts the serving accuracy over time (R2) and achieves a better trade-off between success metrics (R1-R2). When the mean ingest throughput is low (λ_1), uses higher accuracy models. It quickly switches to lower accuracy models when mean arrival rate is high (λ_2), as evident in system dynamics fig:expt:syn:sys_dyn:tau.fig:expt:tau experiments exhibit interesting trends. As the τ increases, the gap between and Clipper^+ mean serving accuracy narrows. This is because selects smaller accuracy sooner with the increase in τ.Lower τ values give enough time to to serve intermediate mean arrival rates with higher accuracy models while gradually moving to lower accuracy models as mean ingest rate increases to λ_2 qps.Whereas, continues to serve min accuracy model for all traces as its policy doesn't maximize accuracy by design.§.§ End-to-End: Real Workloads We investigate if:(a) is capable of achieving a better trade-off between SLO attainment and mean serving accuracy on real workloads (R1-R2), and(b) contributes to serve highly unpredictable workloads at high SLO attainment.We use the MAF trace <cit.> to evaluate (similar to Clockwork <cit.>). The trace is collected on Microsoft's serverless platform and serves as a reasonable workload to evaluate as serverless ML inference is an active research area <cit.>. It consists of number of invocations made for each function per minute and contains nearly 46,000 different function workloads that are bursty, periodic, and fluctuate over time.We use 32,700 function workloads from the MAF trace, resulting in a mean arrival rate of 6400 qps. The 24 hour long trace is shrunk to 120 seconds using shape-preserving transformations to match our testbed.Result fig:expt:maf:compare compares with Clipper^+ and on the real-world MAF trace.achieves an SLO attainment (R1) of 0.99999 (five '9's). Compared to Clipper^+ and , is 4.65% more accurate (R2) at the same level of SLO attainment. It also achieves 2.85x factor improvement in SLO attainment at the same mean serving accuracy.Moreover, Clipper^+(79.44, 80.16) diverges on the MAF trace.System Dynamics fig:expt:maf:sys_dyn shows the ingest throughput (qps), serving accuracy and batch size control decisions (made by ) for the MAF trace. As seen in the figure, the trace contains periodic short-interval spikes that reach upto 8750 qps, demonstrating the agility of the system.selects both smaller accuracy model and higher batch size during the load spikes to meet the deadline (R1). makes such control decisions because it uses query's slack as a signal to maximize batch size. As the query slack decreases, it selectsmaximum batch size control parameters in the lower latency buckets. Furthermore, these control decisions increase the system capacity instantly through . 
Lastly, serves higher accuracy models when the ingest rate is low and hence, achieves bettermean serving accuracy (R2).§.§ MicrobenchmarksFault Tolerance. mechanism provides an additional advantage of transparent fault tolerance. We run with 100% capacity (8 workers) with a bursty traffic trace (λ=3500 qps, CV^2_a=2) for 60 seconds and gradually kill a worker every 12 seconds to simulate faults. shows resilience to decreases in system throughput capacity to as low as 50% by maintaining SLO attainment as high as 0.999 for the unchanging traceas it leverages subnetwork activation to serve lower accuracy models automatically.Similar methodology was used in <cit.>. fig:expt:ft shows SLO attainment as a function of time (along with other system dynamics). As the faults occur (workers killed, dotted red lines),automatically transitions to lower accuracy models to maintain high SLO attainment.We attribute 's fault tolerance to(a) a wide-dynamic throughput range afforded by (fig:subnetact:benefit:th_range) that allows to serve the workload even with 50% capacity, and(b) 's low actuation delay that provides agility to rapidly increase system-capacity (during faults) without sacrificing SLO attainment (R1).Scalability.We assess if reaches high SLO attainment at scale. To show this, we scale the number of workers and observe the maximum throughput sustains to reach SLO attainment of 0.999. We serve Resnet-18 <cit.> across all the workers with clients providing a batch of 8 images[we don't perform adaptive batching for this experiment]. Scalability experiments are conducted with CV^2_a=0. fig:expt:micro:scale shows sustained ingest throughput with the increase in workers. In this experiment achieves an SLO attainment of 0.999 while reaching throughputs as high as ≈33000 qps. Policy Space Exploration.We compare different policies implemented in (fig:expt:policy_micro). We show that achieves the best tradeoff our success metrics compared to both MaxAcc (greedily maximizes accuracy) and MaxBatch (greedily maximizes throughput) as CV^2_a is varied. Details of the policies and the experiment are in sec:sched_policies:impl.§ RELATED WORKWe build on recent DNN supernet training progress, which is complementary to this work.<cit.> train supernets for image classification vision tasks <cit.>.Dynabert <cit.> supernet supports NLP tasks trained onGLUE <cit.>, including textual entailment, question answering and sentiment analysis. Model serving systems can be broadly divided into two categorizes — a) Non-Automated, and b) Automated. Non-automated serving system expect application developers to provide the prediction models and make explicit choices in the accuracy-latency trade-off space. This category includes TensorFlow serving <cit.>, Clipper <cit.>, InferLine <cit.>, SageMaker <cit.>, Triton <cit.>, Shepherd <cit.> and Clockwork <cit.>. TensorFlow Serving serves the models trained in TensorFlow framework while Clipper and Triton supportmodels trained from multiple frameworks. Triton optimizes models for GPU serving. Clockwork guarantees predictable tail latency for DNN inference by designing a predictable system bottom up and making cross-stack design decisions explicitly for worst case predictability. Inferline provides support for provisioning inference pipelines that consist of multiple models, but the models are still hard coded in the pipeline vertices.However, all the prior works in this category are complementary to . 
For instance, 's workers can be made more predictable by consolidating choices like Clockwork. Inferline's autoscaling policy can be used on top of . The max-provisioning ratio of the smallest subnetwork in can be calculated to construct traffic envelopes for Inferline's high frequency tuner. The workers can then be scaled up if the ingest throughput cannot be served by the smallest subnetwork of the supernet.In contrast, automated serving systems <cit.> provide a mechanism for switching between available latency/accuracy points and automate the navigation of the accuracy-latency trade-off space with a policy, resulting in automatic DNN selection at runtime.However, both <cit.> and <cit.> use state-of-the-art DNNs (e.g., ResNets, MobileNets) and rely on model loading mechanism instead of the supernet, which offers better pareto-optimality and orders of magnitude faster model switching enabled via proposed . Fundamentally, these state of the art mechanisms implicitly bias their policies to avoid model switching, which clearly limits the ability of the system to respond to unpredictable trace and query complexity dynamics in a agile fashion. InFaaS's DNN switching policy is biased towards selecting the least accurate DNNs that satisfy accuracy constraints, as the goal of the stated goal of the system is to satisfy constraints instead of treating accuracy as an optimization objective. Model-Switching <cit.> switches between CPU models only. Thus, it cleverly never incurs the overhead of GPU model loading faced by other systems such as InFaaS and Clockwork. supports GPU model serving via subnetwork activation, addresing the model switching overhead through its proposed . Training Supernets Supernet (as a concept) and its training was first proposed by OFA <cit.>. The subnetworks trained as a part of training the supernet are shown to offer a better accuracy-latency trade-off than existing models like EfficientNets <cit.>, ResNets <cit.>, MobileNets <cit.> and SlimmableNets <cit.>. Furthermore, there is a surge of recent works that propose improvements to the supernet training such as CompOFA <cit.>, and BigNAS <cit.>. For instance, CompOFA makes the training of Supernets faster and more accurate by training a fewer number of subnetworks simultaneously. While BigNAS trains the supernet in a one-shot fashion with a wider range of subnetworks. Dynabert <cit.> trains a supernet based on transformer neural network architecture for text datasets. It varies the depth and hidden dimensions in BERT-like models.AutoFormer <cit.> is another technique that trains supernets derived from vision transformers. The subnetworks extracted from AutoFormer's supernet surpass state-of-the-art vision transformers like ViT <cit.> and DeiT <cit.>. NasViT <cit.> trains the supernet for semantic segmentation tasks and achieves a better trade-off betweenaccuracy and latency at fewer FLOPs. provides system support for serving Supernets trained using any existing technique.We believe that Supernets are an emerging phenomenon, and system support for serving them is the need of the hour.§ CONCLUSIONWe describe a novel mechanismthat carefully inserts specialized control-flow and slicing operators into SuperNets to enable a resource-efficient, fine-grained navigation of the latency-accuracy tradeoff space.unlocks the design space of fine-grained, reactive scheduling policies. We explore the design of one such simple, yet effective greedy heuristic-based cheduling policy . 
We instantiateandin , and extensively evaluate it on real-world workloads.achieves 4.67% better accuracy at the same level of SLO attainment or 2.85x better SLO attainment at the same level of serving accuracy compared to state-of-the-art inference-serving systems. plain § APPENDIX §.§ Utility of Pareto Points is HigherThe utility of pareto-optimal subnets is higher than non-pareto-optimal subnets if they have similar inference latency for a batch of queries.𝕌(ϕ_p,B,d_B) > 𝕌(ϕ_q,B,d_B), ∀ B,d_Bs.t.ϕ_p ∈Φ_pareto, ϕ_q ∈{Φ∖Φ_pareto}, l_ϕ_p(B) ≈ l_ϕ_q(B)Proof By Contradiction. Assume a non-pareto optimal subnet (ϕ_q)such that it has higher utility than pareto optimal subnet(ϕ_p) for a batch B and l_ϕ_p(B) ≈ l_ϕ_q(B) 𝕌(ϕ_p,B,d_B) < 𝕌(ϕ_q,B,d_B).Now, due to the pareto optimal property Acc(ϕ_p) > Acc(ϕ_q), this implies Acc(ϕ_p) · |B| > Acc(ϕ_q) · |B| which implies 𝕌(ϕ_p,B,d_B) ≥𝕌(ϕ_q,B,d_B) for any delay d_B as l_ϕ_p(B) ≈ l_ϕ_q(B). This is contradiction. Hence Proved.§.§ System Dynamics: Synthetic Traces We also derive key observations from the dynamics to understand how achieves high SLO attainment and better trade-offs (R1-R2) for synthetic traces. fig:expt:syn:sys_dyn shows the system dynamics of for both bursty and time-varying traces.The mean ingest rate of the bursty traces is 7000 qps and they vary in CV^2_a = {2, 8}. Similarly, in case of the time-varying traces, the ingest rate is increased from λ_1 qps to λ_2 qps at varying accelerations τ = {250 q/s^2, 5000 q/s^2}. The control decisions made by (subnetwork (accuracy) and batch size) are shown over time. fig:expt:syn:sys_dyn:gamma shows system dynamics for the bursty traces. The trace with CV^2=8 (blue line) has higher spikes than the trace with CV^2=2 (orange line). First, note that operates at an accuracy range of 76-78% and never selects a higher accuracy subnetwork such as the subnetwork of 80.16% accuracy. This is because the subnetwork of 80.16% accuracy diverges at the mean ingest rate of 7000 qps (also seen in fig:expt:burst last row). Hence, automatically selects appropriate subnetworks for different mean ingest rates. Moreover, uses lower accuracy models more frequently with increasing CV^2_a. This is because increased jitter reduces query slack, causing to pick lower latency buckets more often. This corroborates the trend seen in fig:expt:burst where the mean serving accuracy of monotonically decreases as CV^2_a increases. Lastly, during the load spikes, usually selects control parameters with high batch size and smaller subnetwork (sec:pol:approx). This control decision allows to drain the queue faster, resulting in a high SLO attainment on the traces (R1). fig:expt:syn:sys_dyn:tau shows the system dynamics for the time-varying traces. τ=5000q/s^2 (blue line) increases the ingest rate from 2500 qps to 7400 qps faster than τ=250 q/s^2. For both the traces, dynamically changes the accuracy from ≈79.2 to ≈77.5 as mean ingest rate increases. 's ability to dynamically adjust accuracy helps it achieve a higher mean serving accuracy (R2) compared to serving a single model statistically. Moreover, for τ=5000 q/s^2, jumps to lower accuracy and higher batch size control parameters quickly. While, for for τ=250 q/s^2, uses intermediate models to serve the intermediate ingest rate during ≈60-80 seconds. A higher τ value forces query's slack to reduce drastically. Hence, rapidly switches to selecting control parameters of smaller subnetwork and higher batch size from the low latency buckets (sec:pol:slackfit) to satisfy deadlines (R1). 
Therefore, increase in τ decreases mean serving accuracy (a trend observed in fig:expt:tau across the rows).§.§ Scheduling policiesThe core functionality of the 's scheduler is to maximize(a) SLO attainment and(b) prediction accuracy for any arrival trace dynamics.The scheduler offers a pluggable policy framework to support any application sensitivity to these metrics by allowing arbitrarily different trade-offs between them. policy interface dictates that control decisions are made the batch size and subnetwork to activate. Both of these control parameters affect SLO attainment and serving accuracy.This is because the scheduler a) perpetually operates under a latency constraint andb) these control decisions have a cumulative effect over time (e.g., higher accuracy affects queue build-up later).Finding a globally optimal set of batch size and subnetwork control tuples over time is NP-hard. As our control decisions must be made on the critical path of queries' end-to-end latency,quick sub-millisecond control decision making is a key performance requirement. Thus, to meet the real time requirements, we primarily consider scheduling policies that are greedy time. The policies decide the batch size and subnetwork based on the remaining slack of the most urgent query. The slack is calculated using a fast (sub-ms) O(1) EDF queue lookup operation.§.§ Control Parameter SpaceThe control parameter space of the scheduling policies is created using 's supernetwork profiler (fig:lat_profile:heatmap) by profiling latency as a function of subnetwork accuracy and batch size. With B possible batch size choices andS different subnetworks to serve the size of the control parameter spaceis B × S. All the scheduling policies use this space to inform their trajectory through the latency/accuracy trade-off space. Insights.We draw some key insights from the control parameters' space in fig:lat_profile:heatmap. (I1) thelatency increases monotonically with batch size. (I2) thelatency increases monotonically with accuracy. (I3) the number of control choices decreases with latency increase (fig:lat_profile:num_choices)[this occursdue to choice of power of 2 batch-size (which are granular enough to observe reasonable latency differences) and availability of fewer model choices at higher latency range.]. These insights inform scheduling policy implementation in in . The larger batch size incurs sub-linearly higher latency, which helps increase system throughput.Thus, it is always beneficial to maximize batch size subject to the latency constraints. By increasing system throughput, the systems utilization increases, serving more queries in the same amount of time, which helps the SLO attainment. At the same time, a policy may also favor more accurate subnetworks subject to query latency constraints. This increases the overall accuracy of predictions rendered by the system, improving application-visible quality of service.Thus, batch size and accuracy forms two levers of control to maximize both SLO attainment and prediction accuracy. §.§ Policy Design Space MaxBatch Policy.This policy first maximizes the batch sizeand then the accuracy.It greedily finds a maximal batch size (b) for the smallest accuracy subnetwork that fits within latency slack θ. Within the chosen batch size MaxBatch finds the maximum accuracy subnetwork (s) such that the profiled latency L(b,s) < θ . It returns the control choice(b,s). 
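A compact sketch of the MaxBatch heuristic just described is given below; it assumes the monotone latency profile of insights (I1) and (I2), and all identifiers are illustrative:

```python
import bisect

def max_batch_policy(batch_sizes, subnets, lat, slack_ms):
    """MaxBatch: pick the largest batch size feasible with the least accurate
    subnet, then the most accurate subnet that still fits at that batch size.

    `batch_sizes` is sorted ascending, `subnets` is sorted by increasing
    accuracy, and lat(b, s) is the profiled latency in ms, monotone in both
    arguments (insights I1 and I2)."""
    s_min = subnets[0]
    lat_by_batch = [lat(b, s_min) for b in batch_sizes]
    i = bisect.bisect_left(lat_by_batch, slack_ms) - 1   # largest b with lat < slack
    if i < 0:
        return None            # even batch size 1 cannot meet the deadline
    b = batch_sizes[i]
    lat_by_subnet = [lat(b, s) for s in subnets]
    j = bisect.bisect_left(lat_by_subnet, slack_ms) - 1  # most accurate s that fits
    return b, subnets[j]

if __name__ == "__main__":
    lat = lambda b, s: 0.8 * b * (1 + 0.5 * s)           # toy monotone profile
    print(max_batch_policy([1, 2, 4, 8, 16], [0, 1, 2, 3, 4], lat, slack_ms=12.0))
    # -> (8, 1): a batch of 8 on the second-least-accurate subnet fits in 12 ms
```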
This policy leveragesinsights (I1) and (I2).It takes O(log(B)) operations to find bandO(log(S)) operations to find s (binary search onmonotonically increasing latency batch size and accuracy). As a result, this lightweight policy scales well with the profile table, taking only O(log(B) + log(S)) operations to make control decisions. MaxAcc Policy.MaxAcc first maximizes the accuracy and then the batch size. Mirroring MaxBatch, MaxAcc performs a binary search for the largest accuracy (s') with L(1,s') < θ first.Then, it finds the maximal batch size (b') keeping the subnetwork choice fixed to the chosen s', such that L(b',s') < θ ms. Similarly to MaxBatch policy, it leveragesinsights (I1) and (I2) and takes O(log(B) + log(S)) operations to return the control choice (b',s').The proposed Policy.This is our best performing policy. At a high level, partitions the set of feasible profiled latencies into evenly sized latency buckets. Each bucket consists of control tuples (b,s) with L(b,s) within the range of bucket width. Then the policy chooses a bucket with latency ≤θ. Finally, from the choices within the selected bucket, it picks the control choice that maximizes batch size. Intuitively, selecting control parameters closest to slack θ configures the system to operate as close to capacity as possible. In other words, choices with latency less than that either reduce the throughput capacity or the serving accuracy, eventually lowering system's SLO attainment and accuracy. This draws on the monotonicity insights (I1) and (I2). 's novelty is in insight (I3). Weobserve that dynamically detects and adapts to the runtime difficulty of the trace. A well-behaved trace (e.g., low ingest rate, variation, acceleration) results in higher θ. Higher θ leads to the choice of higher latency buckets. And higher latency bucketsare correlated strongly with fewer control tuple choices (fig:lat_profile:num_choices), maximizing the probability of choosing higher accuracy models!Conversely, mal-behaved traces (higher ingest rate, variation,acceleration) lead to lower latency bucket choices, as the scheduler is operating under much lower θ conditions. There are more control choices in lower latency buckets, which leads to control tuples within those buckets to favor higher batch sizes. This leads to processing the queue faster!Experiment Result. In fig:expt:policy_micro we show that achieves the best tradeoff our success metrics compared to bothMaxAcc – a policy that greedily maximizes accuracy and MaxBatch — a policy that greedily maximizes batches. The traces usedmeanλ=7000 qps ((λ_b=1500) + (λ_v=5550)) and CV^2_a ∈{2,4,8 }. reaches the highest SLO attainment(0.999) for all CV^2_a. MaxBatch starts under performing SLO attainment with CV^2_a increase.The and MaxBatch difference is most pronounced at the highest CV^2_a, eventually causinga significant 5% drop in the SLO attainment. Both policies maximize the batch size within latency slack θ when operating under small θ.When θ increases, however,MaxBatch continues to maximize the batch size unconditionally—a greedy choice that leads to packing larger batches.This greedy decision causes more time to be spent in a worker compared to , which adaptively shifts to higher accuracy models under larger θ conditions with compound effect on queued queries, eventually missing their SLOs. maxAcc is unable to keep up with this trace. It never switches to policy decisions that process the queue faster. 
This policy comparison shows a continuum between faster queue processing and serving higher accuracy, with automatically finding the best point in this continuum. § SERVING FIXED ACCURACY POINTSExperiment setup. The experiments were run for the MAF trace with the mean ingest rate of 4,000 queries per second. The MAF follows a poisson-like distribution with CV_a^2=1 and the latency SLO for each of these requests was set to 30ms. The scheduler at the router adaptively batches the requests, along with an appropriate model based on the scheduling policy. The query-batch and model configuration is then sent to one of the 6 workers, each of which had an NVIDIA RTX 2080Ti GPU. Results. In the plots, the red solid line represents the latency and accuracy range of all available models. Any query with remaining slack (ms) in the latency range spanned by the models can be served. It must be noted that while the system can serve queries with remaining slack higher than the right-interval red-line latency, queries with remaining slack lower than left-interval red-line latency cannot be serviced and must be dropped. The goal across policies is to manage the queue such that the remaining slack remains within the model latency range, while maximizing accuracy. For the 6 Clipper+ experiments, the scheduler is constrained to provision all request-batches with just a single model choice. This greatly affects the system throughput, queueing delays and the SLO attainment.* While the lower accuracy models (73.82, 76.69 and 77.64) have low inference latencies, they are able to serve the request batches within the SLO deadlines with low queueing delays. Hence they achieve 99%+ latency SLO attainment. This can also be seen from Fig <ref> (a-c) where the remaining slack latency is right-skewed and around 30ms, which is the per-query latency deadline.* On the other hand, higher accuracy models (78.25, 79.44 and 80.16) have significantly higher inference latencies. This leads to higher per-request serving time, higher queueing delays and lower SLO attainment. This can be seen from Fig <ref> (d-f) where the distribution begins to shift leftwards towards lower remaining slack indicating significant queueing delays and lower SLO attainment.The above results show that fixed-accuracy scheduling is insufficient to provide the best accuracy-SLO attainment tradeoff across different model choices, for any given arrival-trace. on the other hand performs dynamimc model selection throughout the models' latency range. This provides the scheduler several choices to maximize accuracy (higher accuracy models) during low loads and also leverage the lower accuracy models during high loads. This policy serves multiple accuracy points to help absorb queue build-up and mitigate queueing delays. Since Fig <ref> shows that the density function is always below the supernetwork latency range, >99% queries meet their deadline, while also maximizing serving accuracy.
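Returning to the proposed bucketing policy, the partition-and-search step it describes can be sketched roughly as follows; the bucket count, data layout, and the tie-breaking between tuples inside a bucket are assumptions made for illustration, not the authors' implementation:

# Rough sketch of the latency-bucketing policy (hypothetical structure).
from typing import Dict, List, Optional, Tuple

def build_buckets(profile: Dict[Tuple[int, int], float],
                  n_buckets: int) -> Tuple[List[List[Tuple[int, int]]], float]:
    """Partition (batch_size, subnetwork) tuples into evenly sized latency buckets."""
    width = max(profile.values()) / n_buckets
    buckets: List[List[Tuple[int, int]]] = [[] for _ in range(n_buckets)]
    for (b, s), lat in profile.items():
        buckets[min(int(lat / width), n_buckets - 1)].append((b, s))
    return buckets, width

def bucket_policy(buckets: List[List[Tuple[int, int]]], width: float,
                  slack_ms: float) -> Optional[Tuple[int, int]]:
    """Choose the highest-latency bucket below the slack, then the largest batch in it."""
    idx = min(int(slack_ms / width), len(buckets)) - 1
    while idx >= 0 and not buckets[idx]:
        idx -= 1                                    # skip empty buckets
    if idx < 0:
        return None
    return max(buckets[idx], key=lambda bs: bs[0])  # maximize batch size within the bucket

Under a generous slack this lands in a high-latency bucket with few, high-accuracy choices; under pressure it falls back to low-latency buckets, where the batch-size maximization drains the queue faster — the adaptive behaviour discussed above.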
http://arxiv.org/abs/2312.16733v1
{ "authors": [ "Alind Khare", "Dhruv Garg", "Sukrit Kalra", "Snigdha Grandhi", "Ion Stoica", "Alexey Tumanov" ], "categories": [ "cs.DC", "cs.LG" ], "primary_category": "cs.DC", "published": "20231227222411", "title": "SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads" }
Yu. GIgnat'ev[Institute of Physics, Kazan Federal University, Research Laboratory of Cosmology, 420008 Russia, Kazan, st. Kremlevskaya 18; email: [email protected]] Self-gravitating Higgs field of scalar charge The self-gravitating Higgs field of a scalar charge has been studied. It is shown that in the zero and first approximation of the smallness of the scalar charge, the gravitational field of the scalar charge is described by the Schwarzschild-de Sitter metric with a cosmological constant determined by the vacuum potential of the Higgs field. An equation for the perturbation of the vacuum potential is obtained and studied. Particular exact solutions of the field equation are given. It is shown that in the case of a naked singularity, solutions to the field equation have the character of microscopic oscillations with a Compton wavelength. Limiting asymptotic cases of the behavior of solutions are studied and their comparative analysis is carried out in relation to the Fisher solution. The averaging of microscopic oscillations of the scalar field was carried out and it was shown that they make a negative contribution to the macroscopic energy of the scalar field, reducing the observed value of the Black Hole mass. A computer simulation of a scalar field has been carried out, demonstrating various types of behavior of solutions. Keywords: scalarly charged Black hole, scalar Higgs field, asymptotic behavior, macroscopic characteristics.§ INTRODUCTION Since scalar-gravitational instability of the cosmological medium of scalarly charged fermions apparently results in the formation of scalarly charged black holes <cit.> – <cit.>, it is necessary to consider in more detail the issue of such isolated static black holes.The Lagrange function L_s of the scalar Higgs field is[Here and below, Latin letters run through the values 1,4, Greek letters – 1,3. The Planck system of units G=c=ħ=1 is used throughout.]L_s=1/16π(g^ikΦ_,iΦ_,k -2V(Φ)), where V(Φ)=-α/4(Φ^2 -m_s^2/α)^2 – potential energy of the scalar field, α – self-action constant, m_s – boson mass. The energy tensor - momentum of scalar fields relative to the Lagrange function (<ref>) is:T^i_k =1/16π(2Φ^,iΦ_,k- δ^i_kΦ_,jΦ^,j+2V(Φ)δ^i_k ),Einstein's equations look like: R^i_k-1/2δ^i_k R=8π T^i_k + δ^i_k Λ_0, where Λ_0 is the initial value of the cosmological constant, associated with its observed value Λ, obtained by removing the constant terms in the potential energy, by the relation: Λ=Λ_0-1/4m_s^4/α.In curvature coordinates (see, for example, <cit.>) ds^2=e^ν(r)dt^2-e^λ(r)dr^2-r^2 dΩ^.T^1_1=-e^-λ(r)/16πΦ'^2-α/32π( Φ^2-m^2_s/α)^2, (≡ -p_∥)T^2_2=T^3_3=T^4_4=e^-λ(r)/16πΦ'^2-α/ 32π(Φ^2-m^2_s/α)^2, (≡ -p_⊥=ε), where p_∥ is the radial pressure, p_⊥ is the pressure along the surface of the sphere, ε is the energy density of the scalar field. § MASSLESS SCALAR FIELD - FISHER'S SOLUTION§.§ Fisher solutions For the first time, the metric of a scalarly charged black hole in the case of a massless canonical scalar field was found in the work of I.Z. Fisher (1948) <cit.>. Let us briefly present the main results of this work needed here. 
In the curvature coordinates (<ref>) and in the case of a massless scalar field using the known first integral of the massless scalar field equation (the prime denotes the derivative with respect to the radial variable r): Φ'= -G/r^2e^λ-ν/2, where G is the singular scalar charge, Fisher reduced Einstein's two independent equations to the following[Since <cit.> contains some mathematical inaccuracies, in order to avoid confusion we will briefly reproduce the results of this work, rewriting these equations in our notation and slightly reformatting Fisher's solution.]: e^-λ(1+rν')-1=-Φ'^2r^2 e^-λ;e^-λ(1-rλ')-1=Φ'^2r^2 e^-λ. The sum of these equations can be represented as: (r^2e^ν-λ)'=2re^-ν. Next Fisher using the function W(r)= re^ν-λ/2 determines the solutions of the Einstein - Laplace equations system: e^ν=1/rWW'; e^λ=rW'/W; Φ'=-G/rW. Substituting (<ref>) into the equations (<ref>), (<ref>), (<ref>) leads to a second-order closed differential equation for the function W(r):, with the help of which it is easy to find the first integral, in turn, leading to a first-order equation with separable variables: rW'-W+a^2/W=C_1≡ 2km ⇒WW'/W^2+2kmW-a^2=1/r, (a^2≡ kG^2). where C_1,k are arbitrary integration constants, m is the singular mass. Thus, the problem is solved in quadratures, the study of the solution is a matter of technology. Relations (<ref>) – (<ref>) we and will be called Fisher solutions.§.§ Properties of Fisher solutions Next, we will deviate somewhat from the cited work of Fisher, introducing a new dimensionless function f(r) W(r)≡κ(f(r)-p), f(r)⩾ p≡km/√(k^2m^2+a^2) equivkm/κ, κ≡√(k^2m^2+a^2), with the help of which the solution to the equation (<ref>) can be written in the form of an algebraic equation for the function: |f^2-1|^1/2|f+1/f-1|^p=C_2 r/√(k^2m^2+ a^2), where C_2 is the integration constant.Assuming that at infinity the metric (<ref>) tends to pseudo-Euclidean, i.e., .ν(r)|_r→∞→ 0, .λ(r)|_r→∞→ 0, we get from (<ref>) .W(r)|_r→∞→ r. But then according to (<ref>).f(r)|_r→∞=r/κ→∞. Comparing this expression with the equation (<ref>) in the limit r→∞, we find C_2=1. Thus, we bring the equation (<ref>) to its final form: |f^2-1|^1/2|f+1/f-1|^p=ξ,(ξ≡r /κ; κ=√(k^2m^2+kG^2)). The equation (<ref>) defines a one-parameter family of solutions f(x;p), which, in turn, using the formulas (<ref>) completely determines the solution to the problem: W=κ(f-p);e^ν=f'_ξ/ξ(f-p); e^λ=ξ f'_ξ/f-p; Φ'_ξ=-Gf'_ξ/ξ. Thus, the metric is also determined by a one-parameter family of functions, while the scalar potential Φ depends on two parameters, κ,p, but its derivative Φ' is still determined by a one-parameter family of functions.Let us indicate exact solutions of the equation (<ref>) for particular values of the parameter p=0.1/2. p=0: ⇒ f=√(1+ξ^2),e^ν=1, e^λ= ξ^2/1+ξ^2; Φ'_ξ=κ G√(1+ξ^2)/ξ; Φ= κ G(√(1+ξ^2)-ln√(1+ξ^2)+1/√(1+ξ^ 2)-1) – in this case m=0 and the metric is generated by the massless charge G. p=1/2: ⇒ f=ξ-1, e^ν=1-3/2ξ,e^λ= (1-3/2ξ)^-1; Φ'_ξ=κ G(1-3/2ξ); Φ= κ G(ξ-3/2lnξ) – in this degenerate case m^2=G^2/3k, κ=2km, and the metric, up to renotations, coincides with the Schwarzschild metric. In both of these cases, the scalar potential has asymptotics .Φ(ξ)|_ξ→0∼lnξ;.Φ'(ξ)|_ξ→0∼1/ξ; .Φ(ξ)|_ξ→∞∼κ Gξ;.Φ'(ξ)|_ξ→∞∼κ G=Const. ?Solutions with a massless scalar field in other coordinate systems are studied in detail in the work <cit.>, (see also reviews <cit.> – <cit.>). 
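Since the left-hand side of the implicit relation above grows monotonically with f on f>1 whenever p ≤ 1/2, the curve f(ξ) can be recovered with a bracketing root-finder and checked against the two exact cases p=0 and p=1/2. A small numerical sketch (not part of the original derivation; brackets and tolerances are ad hoc):

# Numerical inversion of  sqrt(f^2 - 1) * ((f + 1)/(f - 1))^p = xi  for f > 1, 0 <= p <= 1/2.
import numpy as np
from scipy.optimize import brentq

def lhs(f: float, p: float) -> float:
    return np.sqrt(f * f - 1.0) * ((f + 1.0) / (f - 1.0)) ** p

def f_of_xi(xi: float, p: float) -> float:
    # lhs is monotonically increasing in f for p <= 1/2, and lhs(f) ~ f for large f,
    # so [1 + eps, xi + 2] brackets the root.
    return brentq(lambda f: lhs(f, p) - xi, 1.0 + 1e-12, xi + 2.0)

for xi in (3.0, 5.0, 10.0):
    assert abs(f_of_xi(xi, 0.0) - np.sqrt(1.0 + xi * xi)) < 1e-8   # p = 0 case
    assert abs(f_of_xi(xi, 0.5) - (xi - 1.0)) < 1e-8               # p = 1/2 case

With f(ξ) in hand, the metric functions and the scalar potential follow directly from the expressions for e^ν, e^λ and Φ'_ξ given above.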
§ SCALAR FIELD WITH THE HIGGS POTENTIAL OF A POINT SCALAR CHARGE IN THE PSEUDO-EUCLIDEAN METRIC Let us now study the gravitational field generated by a scalar charge with the Higgs potential in the metric (<ref>). The scalar Higgs field equation Φ(r) in this metric has the form: 1/r^2d/dr(r^2e^ν-λ/2d /drΦ)-e^ν-λ/2Φ(m^2_s-αΦ^2)=0.In the work <cit.> the equation (<ref>) was solved in the pseudo-Euclidean metric ν=λ=0 for the central point scalar charge G. In this case, with the self-action constant α=0, the equation (<ref>) reduces to the well-known Yukawa equation 1/r^2d/dr(r^2d/drΦ)-m_s^2Φ=0 and has as its solution the well-known Yukawa potential Φ=2G/re^-m_sr, where G is a scalar charge.The self-action constant factor α≢0 fundamentally changes the nature of solutions to the equation (<ref>). Now this equation has no stable solutions with zero asymptotics at infinity .Φ(r)|_r→∞→ 0. Stable solutions of the equation (<ref>) in a spatially flat metric are solutions with non-zero asymptotic behavior at infinity corresponding to special stable points of the dynamical system, – .Φ(r)|_r→∞→Φ_±=±m_s/√(α). For solutions close to stable, assuming Φ(r)=Φ_±+ϕ(r), (ϕ≪ 1), in the linear approximation we obtain the equation instead of (<ref>) 1/r^2d/dr(r^2dϕ/dr)+2m_s^2ϕ=0. Let us pay attention to the change in sign of the massive term compared to the Yukawa equation (<ref>), due to which the stable solution of the equation for the Higgs field will be <cit.> Φ(r)=±m_s/√(α)+C_1/rcos(√(2)m_sr)+C_2/r sin(√(2)m_sr)⇒±m_s/√(α)+2G/rcos(√(2)m_sr) – instead of exponential decay of the Yukawa potential (<ref>) we have a quasiperiodic potential.The presence of a fundamental scalar field with the Higgs potential fundamentally changes the physical picture. Now the vacuum state corresponds to one of the stable points of the Higgs potential (<ref>), which, in turn, corresponds to the zero energy of the scalar field.§ SELF-GRAVITATING SCALAR FIELD WITH HIGGS POTENTIAL§.§ Field equations Taking into account the above, we study the solution to the complete problem of a self-gravitating scalar Higgs field. Nontrivial combinations of Einstein's equations with a cosmological constant in the metric (<ref>)[These are combinations of the equations ^1_1, ^4_4 and the scalar field equation.] can be reduced to the form: 2rΦ'^2+(λ+ν)'=0; e^λ-1-rν'-r^2e^λ[Λ-α/2(Φ^2-m^2_s/α)^2]=0.We will look for solutions to the system of equations (<ref>), (<ref>), (<ref>) that are close to stable, assuming (<ref>). Then, in the zero approximation, due to the smallness of ϕ(r), the equation (<ref>) becomes an identity, and the equation (<ref>) gives λ=-ν. As a result, the equation (<ref>) will be reduced to a closed-loop equation for ν (or λ) rν'+1+e^-ν(1-Λ r^2)=0, solving which, we find: ν_0=-λ_0=ln(1-2m/r-Λ r^2/3), where m is the constant of integration. Thus, in the zeroth approximation we obtain the well-known Schwarzschild-de Sitter solution <cit.>: ds^2= (1-2m/r-Λ r^2/3)dt^2 -(1-2m/ r-Λ r^2/3)^-1dr^2-r^2dΩ^2. At first glance, it seems that the solution (<ref>) does not depend on the scalar field Φ(r). However, it is not. To correctly interpret this solution, we must first take into account the formula (<ref>) for unperturbed scalar field Φ_± and, secondly, a formula for renormalizing the observed cosmological constant (<ref>), putting in (<ref>) Λ=Λ_0-1/4m^2_sΦ^2_±. 
Thus, the solution (<ref>), in addition to the central mass m, is also determined by the square of the unperturbed value of the scalar potential, i.e., ultimately, by the square of the scalar charge.It turns out that in the first approximation of the smallness of ϕ the solution (<ref>) remains valid. Indeed, due to (<ref>), in the first approximation the relation (<ref>) is preserved, and therefore the equation (<ref>) is also preserved. Thus, themetric (<ref>) is preserved in the approximation linear in ϕ. Therefore, in the linear approximation, the field equation (<ref>) can be considered against the background of the Schwarzschild - de Sitter solution (<ref>). So, in the linear approximation (<ref>) we obtain the equation for the perturbation of the scalar field ϕ(x) d^2ϕ/dr^2+d/drln(r^2e^ν_0(r)) fracdϕdr+2m^2_sϕ=0. Introducing the dimensionless variable x and dimensionless non-negative parameters γ,σ x= r/2m; γ=4/3Λ m^2⩾0; σ=2√(2)mm_s⩾0, Let's rewrite the equation (<ref>) in terms of these quantities: d^2ϕ/dx^2+d/dxln(x(x-1-γ x^3))dϕ/dx+σ^2ϕ=0 ⇒ d^2ϕ/dx^2+1-2x+4γ x^3/x(1-x+γ x^3)dϕ/dx+σ^2ϕ=0.For convenience of analysis, as well as numerical integration, we will consider the second-order linear homogeneous differential equation (<ref>) also in the form of a normal system of first-order equations:dϕ/dx=z(x);dz/dx=-1-2x+4γ x^3/x(1-x+γ x^3) z-σ^2ϕ. §.§ Horizons and singularity Solutions of the field equation ϕ(x) (<ref>) are largely determined by the horizons and singularity of the Black Hole, which determine the behavior of the argument of the logarithmic function in this equationr^2e^ν_0≡ r^2(1-2m/r-Λ r^2/3)=0⇒ x (1-x+γ x^3)=0.The singularity corresponds to the zero root of the equation (<ref>) x_0=0, and the horizons, if they exist, correspond to the positive real roots of the cubic equation e^ν_0=0⇒γ x^3-x+1=0.The discriminant of the cubic equation (<ref>), Δ, is equal to: Δ=γ(4-27γ). For Δ>0 all roots of the horizon equation (<ref>) are real, for Δ<0 one root is real and two are complex conjugate, Δ=0 – all three roots are real and distinct, and, at least two of them are the same. At γ>0. and γ<4/27 Δ>0 all three roots are real, and one of them, x_0<-3, is negative and two are positive: 1<x_1<3/2, x_2>3/2. Thus, for 0<γ<4/27, the metric has two horizons: r_1 is internal and r_2 is external: 2m<r_1<3m,r_2>3m. At γ=4/27 both horizons merge into one doubly degenerate r_1=r_2. At γ>4/27 there are no horizons, and the gravitational field of the Black Hole is described by a metric with a naked singularity r=0. At γ≡0 only the Schwarzschild horizon remains. For γ<0 and Δ<0, – in this case there is also one real positive root, which corresponds to one horizon x_1<1.[Note that according to the cosmological constant renormalization formula (<ref>) we have no right to discard the case γ<0 from consideration.]Thus, depending on the value of γ, three-dimensional space is divided by horizons into 𝐑 - and 𝐓 - regions along the radial variable x [γ⩽ 0: 𝐗_1=[0,x_1],𝐗_3=(x_1,+∞), ;0<γ<4/27: 𝐗_1=[0,x_1], 𝐗_2=(x_1,x_2), 𝐗_3=( x_2,+∞),;γ>4/27:𝐗_3=[0,+∞). ;] Below Figure <ref> shows the behavior of solutions to the equation (<ref>) in the regions 𝐗_1,𝐗_2,𝐗_3 in the case of γ=0.1<4/27 , corresponding to the initial values in each of the areas 𝐗_1:ϕ(0)=±1,z(0)=0;𝐗_2:ϕ(2)=±1,z(0)=0; 𝐗_3:ϕ(3)=±1,z(0)=0. 
In this case, we assumed σ=1.§.§ Specific solutions In two special cases, the equation (<ref>) is solved in quadratures.§.§.§ Massless scalar field m_s=0 Note that, strictly speaking, we have no right to consider the case of zero mass of scalar bosons, since at m_s=0 the scalar Higgs potential (<ref>) degenerates into a parabolic potential, i.e., in this case we go beyond the scope of the study models. Stable points of the dynamic system (<ref>) Φ_± degenerate into one zero point Φ_+=Φ_-=0. Only this trivial solution Φ=0 is now stable. Therefore, within the framework of our model, we can only consider an asymptotically massless scalar field in the sense of approximation: m_s r→ 0, i.e., in the region r→0. In this case, the equation (<ref>) is immediately integratedϕ=C_1 +C_2∫dx/x(γ x^3-x+1)≡ C_1+C_2 J(x), Where J(x)= ∫dx/x(γ x^3-x+1). In particular, for Λ=0⇒γ=0 this integral is easily calculated ϕ(x)=C_1+C_2ln|x-1/x|⇒Φ=m_s/√(α)+C_2ln|1-2m/r|, (m_s→0, Λ=0) and gives the asymptotics at infinity .Φ(r)|_r→∞⋍m_s/√(α)-2m C_2/r.For γ≠0 the integral in (<ref>) is also calculated in elementary functions J(x)= ln|x|+∑_i=1^3δ_iln|x-x_i|;δ_i≡1-γ x^2_i/3γ x^2_i-1,where x_i are the roots of the horizon surface equation Thus, the solution (<ref>) in the case of non-degenerate roots x_i (<ref>) leads to logarithmic asymptotics at infinity: .Φ(r)|_r→∞⋍m_s/√(α)+C_2(1+δ_1+δ_2+δ_3)lnr/2m. In particular, the solution (<ref>) is obtained from (<ref>) – (<ref>) for γ=0, x_1=1, δ_1=-1, in In this case, only one term is retained in the sum (<ref>), corresponding to the simple horizon x=x_1=1.However, due to the condition (<ref>), for the correctness of this estimate it is necessary to satisfy the conditions 1≪ x≪1/2mm_s⇒ 2m≪1/m_s, i.e., the Compton wavelength of the scalar boson must be much larger than the horizon radius of the black hole and, in addition, r_∞≪ m^-1_s. §.§.§ Zero cosmological constant Λ≡0 In this case, the solution to the equation (<ref>) is expressed through the confluent functions Heun,H_c(2iσ,0,0,0,0,x), <cit.>: ϕ(x) = C_1 e^2iσ xH_c(2iσ,0,0,0,0 x)+ C_2 e^2i σ xH_c(2iσ,0,0,0,0,x)∫e^-2iσ xdx/x(x - 1)H^2_c(2iσ,0,0,0,0,x). In the general case, functions H_c(x) have two regular and one irregular singularities of rank 1 at points x=[0,1,∞]. In what follows, however, we will not use the exact solution (<ref>), taking into account, firstly, its particular nature, and, secondly, the fact that, unfortunately, the functions HeunC(x ), which determine its solution for Λ=0, are still very unreliably tabulated in applied mathematical packages for sufficiently large arguments x. Therefore, we will directly integrate the equation (<ref>) using numerical methods. § ASYMPTOTIC BEHAVIOR OF SOLUTIONS TO THE EQUATION (<REF>)§.§ Behavior of solutions near the singularity r=0 For x→0 the field equation (<ref>) reduces to a simple second order differential equationϕ” +ϕ'/x+σ^2ϕ=0,x→0, which has its decisions ϕ(x)= C_1 I_0(σ x)+C_2Y_0(σ x)⋍ C_1+C_22/πlnσ x ,(σ x → 0), where I_0(z) and Y_0(z) are Bessel functions of the 1st and 2nd kind, respectively. Thus, the scalar field potential diverges logarithmically near the singularity, and its derivative is equal to .Φ'|_x→0⋍C_1/x=G/r. §.§ Behavior of solutions near horizons It is obvious that the solution of the system (<ref>) near the horizons is singular, therefore the main term on the right side of the second equation (<ref>) near the horizons is the term proportional to z(x). 
Discarding the last term on the right side of this equation near the horizon x_a, we find by integrating: .z(x)|_x→ x_a⋍ C_1x/γ x^3-x+1. Considering that x_i are the roots of the horizon equation (<ref>), we write according to Vieta's theorem γ x^3-x+1=γ(x-x_a)(x-x_b)(x-x_c), where, for definiteness, x_b≠ x_a is a positive root of the equation (<ref>), and x_c is negative. Thus, near the horizon x=x_a>0 we obtain; z(x)⋍ C_1x_a/γ(x-x_a)(x_a-x_b)(x_a-x_c)). Integrating this relation, we obtain an asymptotic expression for the potential function ψ(x) near the horizon x=x_a: .ϕ(x)|_x→ x_a∼C_1ln|x-x_a|/γ(x_a-x_b)(x_a-x_c)+C_2. It is obvious that in the regions 𝐗_1 and 𝐗_2 the difference x_a-x_b has opposite signs, which explains the discontinuity of the second kind in the function ψ(x) when passing through the horizon. Behind the right horizon, the function ψ(x) has the character of damped periodic oscillations.§.§ Behavior of the solution at infinity§.§.§ Small values of γ≪ 1 For small γ in the intermediate range of values x 1≪ x≪1/√(γ), (γ≪ 1) the equation (<ref>) reduces to a simple differential equation d^2ψ/dx^2+2/xdψ/dx+σ^2ψ=0, which has its solution ϕ(x)= c_1sinσ x/x+c_2cosσ x/x=C_1/rsin√(2)m_s r+C_2/rcos√(2)m_s r, i.e., describes damped periodic oscillations with frequency ω and period τ ω=σ, τ=2π/σ⇒ T=√(2)π/m_s. Thus, in the intermediate range of values of the radial variable x, up to redesignations C_1=0,C_2=2G, the solution to the field equation coincides with the solution in flat space-time (<ref>).§.§.§ x→∞, γ x^2≫ 1 In area x→∞,γ x^2≫ 1the equation (<ref>) takes the form d^2ψ/dx^2+4/xdψ/dx+σ^2ψ=0, and has its decision ϕ(x)= C_1/x^3(σ xcosσ x-sinσ x)+C_2/x^3(cosσ x+σ x sinσ x). And in this case we get damped oscillations with a period (<ref>), however, the amplitude of the oscillations drops in proportion to 1/x^2. So, we note that in the case of small values of γ, an intermediate region can form with oscillations of the scalar field damping in proportion to 1/x, which then quickly fall: γ≪ 1, x∈(1,1/√(γ)): ϕ⋍e^i σ x/x;∀γ,x∈(Max{1/√(γ),1},+∞) : ϕ⋍e^iσ x/x^2. § AVERAGING OF SCALAR POTENTIAL OSCILLATIONS The above analysis shows the oscillatory nature of the scalar field outside the horizon region. It should be emphasized that the oscillations of the scalar field have a purely microscopic character, corresponding to oscillations with the Compton wavelengthexp(i√(2)m_s r). In this case, a macroscopic observer can measure only some average dynamic quantities corresponding to these oscillations, in particular, macroscopic energy density and pressure. In this case, the macroscopic picture corresponds to some, generally speaking, anisotropic medium with macroscopic characteristics of pressure and energy density, as well as a macroscopic equation of state. The situation here is completely analogous to microscopic oscillations of the scalar field at the late stages of the evolution of the Universe (see <cit.> – <cit.>). The difference lies only in the nature of the oscillations - in the cosmological situation these are time oscillations exp(im_s t), in our situation they are spatial.Expanding the expressions for the components of the energy-momentum tensor (<ref>) in terms of the smallness of the perturbation ϕ of the scalar potential (<ref>), we obtain in the quadratic approximation: T^4_4=ε=e^ν_0(r)/16πϕ'^2-m^2_s/8πϕ^2;-T^1_1=p_∥=e^ν_0(r)/16πϕ'^2+m^2_s/8πϕ^2. 
Expressing these quantities through the variable x and the functions ϕ(x) and z(x) that we use, we obtain dimensionless expressions for the physical quantities ε(r) and p_∥(r) 16π(2m)^2ε=e^ν_0(x)z^2-σ^2ϕ^2; 16π(2m)^2 p_∥=e^ν_0(x)z^2+σ^2ϕ^2. where e^ν_0(x) is described by the expression (<ref>). Taking into account the rapidly oscillating nature of the solutions to the system of equations (<ref>) for σ x≫ 1, let us average the values (<ref>) over a sufficiently large interval of the radial variable, using the technique of averaging cosmological fluctuations of the scalar field <cit.> - - <cit.>. Namely, let us introduce the macroscopic average of the rapidly varying function f(r): f(r)= 1/T∫_r-T/2^r+T/2f(r)dr⇒f(x)= 1/τ∫_x-τ/2^x+τ/2f(x)dx, believing τ x ≫ 1. Assuming further that the asymptotic formulas are valid in the (<ref>) approximation (see (<ref>) – (<ref>)) ϕ(x)⋍ϕ_0 e^iσ x1/x^β;z(x)⋍ iσϕ_0 e^iσ x1/x^β, we get ϕ(x)≈ 0;z(x)≈ 0; ϕ^2(x)≈|ϕ_0|^2/x^2β;z^2(x)≈σ^2|ϕ_0|^2/x^2β. Substituting these expressions into (<ref>), we obtain for macroscopic average energy densities and pressures of scalar field oscillations: 16π(2m)^2 ε(x)⋍[e^ν_0(x)-1]σ^2 |ϕ_0|^2/x^2β= -1+γ x^3/xσ^2|ϕ_0|^2/x^ 2β;16π(2m)^2 p_∥(x)⋍ -[e^ν_0(x)+1]σ ^2|ϕ_0|^2/x^2β=-1-2x+γ x^3/xσ^2|ϕ_0|^2/ x^2β. Note, firstly, that the left-hand sides of the relations (<ref>) are expressions for the dimensionless normalized macroscopic average energy density and radial pressure of the scalar field. Secondly, we note that the oscillation energy density is negative ε<0. Further, in the region ∀γ according to (<ref>) we obtain .ε |_γ x^3→∞⋍ -γσ^2|ϕ_0|^2/16π (2m)^2 x^2;ε+p_∥→ 0.Thus, in the region γ x^3≫1, microscopic oscillations of the scalar field create a macroscopic background with a negative energy density and the equation of state p=-ε, i.e., they manifest themselves as a macroscopic phantom scalar field. Mass-energy of this field M_s(r)=4π∫_r_0^r ε r^2dr ∼ -γ m_s^2|ϕ_0|^2/8π(r-r_0) increases in proportion to the radius, thereby reducing the observed mass of the Black Hole.§ NUMERICAL MODELING§.§ <<General>> numerical solutions To carry out numerical integrationSecondly, let us formulate an obvious property of the solutions of this system.General numerical solution Let Ψ_1(x0;x) and Ψ_2(x0;x) be solutions to the corresponding Cauchy problems for this system: Ψ_1(x;x_0)≡ [ψ_1(x),z_1(x)]: [ψ_1(x_0)=1,z_1(x_0)=0];Ψ_2(x;x_0)≡ [ψ_2(x),z_2(x)]: [ψ_2(x_0)=0,z_2(x_0)=1]. Then the solution to the Cauchy problem with arbitrary initial conditions Ψ(x;x_0)≡ [ψ(x),z(x)]: [ψ(x_0)=C_1,z(x_0)=C_2] There is: Ψ(x;x_0)=C_1Ψ_1(x;x_0)+C_2Ψ_2(x;x_0). For convenience, we will call the solution (<ref>) the general numerical solution of a system of linear homogeneous differential equations. ▪In what follows, we will use this property by default to study numerical models. §.§ Small values of the parameter γ<4/27 In this case, as we noted above, the metric (<ref>) has two horizons H_± (<ref>), through which it is impossible to analytically continue the solution of the field equations (<ref>). Figure <ref> shows graphs of the potential function ϕ(x) for γ=0.1<4/27⇒x_1=1.153467305, x_2= 2.423622140 in three areas 𝐗_1:=[0,r_1); 𝐗_2=(x_1,x_2);𝐗_3=(x_2,+∞). In this case, the initial conditions were chosen as follows: 𝐗_1: x(1)=±1,x'(1)=0;𝐗_2: x(1.2)=±1,x'(1.2)=0;𝐗_3: x(3)=±1,x'(3)=0. Ignatev1.eps126Behavior of solutions to the equation (<ref>) in the case of γ=0.1 in the regions 𝐗_1,𝐗_2, 𝐗_3. 
Solid lines correspond to positive initial values of ϕ, dashed lines to negative ones. Black circles on the abscissa axis mark the radius of the singularity x=0 and the radii of the horizons x_1≈ 1.153467305 and x_2≈ 2.423622140.§.§ Large values of the parameter γ>4/27 At γ>4/27 there are no horizons in the (<ref>) metric, i.e., we have a Black Hole with a bare singularity r=0. In this case, the field equations (<ref>) have analytical solutions in the entire space r⩾0. Figure <ref> and <ref> show graphs of the potential function ψ(x) and its derivative ψ'(x) at γ=0.2>4/27.Ignatev2.eps106.5Scalar potential function ϕ(x) for γ=0.2, σ=1. Ignatev3.eps106.5Function of derivative of the scalar potential ϕ'(x) for γ=0.2, σ=1.As before, in the figures <ref> - <ref>, dashed lines indicate graphs of potential functions for negative values of the starting potential, and solid lines for positive ones. Ignatev4.eps106.5Damped oscillations of the scalar potential ϕ(x) at γ=1, σ=5. Finally, Figure <ref> demonstrates damped oscillations of the scalar potential in the case of a bare singularity (γ=1). § CONCLUSION Summing up the article, we note its main results.* The well-known Fisher solution is reformatted, on the basis of which its exact partial solutions are obtained explicitly and their asymptotic properties are studied.* It is shown that in the case of the Higgs interaction potential, the zero approximation of the problem of the smallness of the scalar charge is the vacuum scalar field corresponding to its zero energy and the Schwarzschild gravitational field - de Sitter with a cosmological constant determined by the square of the vacuum potential.* In the first approximation of the smallness of the scalar charge, the scalar field is determined by the field equation against the background of the Schwarzschild - de Sitter metric, while the standard term in the field equation changes sign and doubles.* Partial exact solutions of the resulting scalar field equation for perturbations are found and their correspondence with Fisher's solutions is established.* The asymptotic behavior of solutions to the resulting field equation near the singularity and horizons, as well as at spatial infinity, has been studied.* The oscillatory nature of solutions to the field equation far from the horizons of the Black Hole has been established.* The macroscopic average energy densities and pressures of scalar oscillations are calculated and it is shown that the total macroscopic energy density of oscillations is negative, and the total equation of state corresponds to macroscopic phantom scalar field.* It is shown that the negative macroscopic oscillation energy density leads to a decrease in the observed mass of the Black Hole.* Based on numerical simulation, the behavior of the scalar field of a Black Hole is demonstrated.Note that in connection with the theory of the formation of supermassive Black holes in the early Universe constructed on the basis of the mechanism of scalar-gravitational instability, the appearance of a scalar halo with negative energy outside the horizons of the Black Hole could become an additional source of information when observing these objects.§.§ Funding This paper has been supported by the Kazan Federal University Strategic Academic Leadership Program.75 Yu_GC_23_No4 Yu. G. Ignat'ev, “Formation of Supermassive Nuclei of Black Holes in the Early Universe by the Mechanism of Scalar-Gravitational Instability. I. Local Picture”, Gravit. Cosmol. 
29:4, 327–344 (2023); arXiv:2308.03192 [gr-qc].
Yu_GC_24_No1 Yu. G. Ignat'ev, “Formation of supermassive nuclei of Black holes in the early Universe by the mechanism of scalar-gravitational instability. II. The evolution of localized spherical perturbations.” Gravit. Cosmol. 30:1, (2023) (to be published); arXiv:2311.09926 [gr-qc].
archive3 Yu. G. Ignat'ev, “Formation of supermassive nuclei of Black holes in the early Universe by the mechanism of scalar-gravitational instability. III. Large scale picture.” arXiv:2312.00607 [gr-qc].
Land_Field L. D. Landau, E. M. Lifshitz, The Classical Theory of Fields. Pergamon Press, Oxford · New York · Toronto · Sydney · Paris · Frankfurt, 1971.
Fisher I. Z. Fisher, “Scalar mesostatic field with regard for gravitational effects”, Zh. Eksp. Teor. Fiz. 18, 636 (1948); arXiv:gr-qc/9911008.
bronnik_fabris K. A. Bronnikov, J. C. Fabris, “Regular phantom black holes”, Phys. Rev. Lett. 96, 251101 (2006); arXiv:gr-qc/0511109.
bronnik_rus Kirill A. Bronnikov, Sergey G. Rubin, Lectures on Gravity and Cosmology, Tutorial. Moscow: MEPhI (2008) (in Russian).
bronnik_eng Kirill A. Bronnikov, Sergey G. Rubin, Black Holes, Cosmology and Extra Dimensions, World Scientific Publishing Co. Pte. Ltd. (2013).
Yu_Scalar Yu. G. Ignat'ev, “Scalarly charged particles and interparticle interaction with the Higgs potential”, Gravit. Cosmol. 29:3, 213 (2023); arXiv:2307.13767 [gr-qc].
Edd A. S. Eddington, Mathematical Theory of Relativity, Cambridge Univ. Press, Cambridge (1923).
Heun W. Hahn, “On linear geometric difference equations with accessory parameters”, Funkcial. Ekvac., 14, 73–78 (1971).
YuTMF_23 Yu. G. Ignat'ev, “Evolution of spherical perturbations in the cosmological environment of degenerate scalarly charged fermions with the Higgs scalar interaction”, Theoret. and Mathemat. Phys., 215:3, 862–892 (2023); arXiv:2306.17185 [gr-qc].
Yu_17 Yu. G. Ignat'ev and A. R. Samigullina, “Averaging of the equations of the standard model over rapid oscillations”, Russ. Phys. J., 60, 1173–1181 (2017).
Yu_18 Yu. G. Ignat'ev, D. Yu. Ignatyev, and A. R. Samigullina, “A Macroscopic View of the Standard Cosmological Model”, Gravit. Cosmol., 24, 148–152 (2018).
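As a supplement to the numerical-modelling section of the paper above, the first-order system for the perturbation can be integrated directly with an off-the-shelf solver. In the following sketch the parameters γ=0.2, σ=1 and the initial data φ=1, z=0 follow the text (the Figure 2 setup); the starting point just off the singularity, the integration range and the tolerances are arbitrary choices:

# Sketch: integrate  dphi/dx = z,
#                    dz/dx  = -(1 - 2x + 4*g*x**3) / (x*(1 - x + g*x**3)) * z - s**2 * phi
# for the naked-singularity case g > 4/27 and check the asymptotic oscillation period.
import numpy as np
from scipy.integrate import solve_ivp

g, s = 0.2, 1.0                        # gamma and sigma as in the gamma = 0.2 figures

def rhs(x, y):
    phi, z = y
    coeff = (1.0 - 2.0 * x + 4.0 * g * x**3) / (x * (1.0 - x + g * x**3))
    return [z, -coeff * z - s**2 * phi]

sol = solve_ivp(rhs, (1e-2, 40.0), [1.0, 0.0], rtol=1e-8, atol=1e-10, dense_output=True)

xs = np.linspace(15.0, 40.0, 2000)
phi = sol.sol(xs)[0]
zeros = xs[:-1][np.sign(phi[:-1]) != np.sign(phi[1:])]
print("mean zero spacing:", np.diff(zeros).mean(), " asymptotic value pi/sigma =", np.pi / s)

The solution shows the damped oscillations of Figures 2–4, with the spacing of the zeros approaching π/σ (i.e. the period 2π/σ of the asymptotic formulas) far from the origin.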
http://arxiv.org/abs/2312.16059v1
{ "authors": [ "Yu. G. Ignat'ev" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20231226140242", "title": "Self-gravitating Higgs field of scalar charge" }
Learning from small data sets: Patch-based regularizers in inverse problems for image reconstruction Moritz Piening^1,Fabian Altekrüger^2, Johannes Hertrich^2, Paul Hagemann^1, Andrea Walther^2, Gabriele Steidl^1 January 14, 2024 ===================================================================================================================== The tendency of people to engage in similar, matching, or synchronized behaviour when interacting is known as entrainment. Many studies examined linguistic (syntactic and lexical structures) and paralinguistic (pitch, intensity) entrainment, but less attention was given to finding the relationship between them. In this study, we utilized state-of-the-art DNN embeddings such as BERT and TRIpLet Loss network (TRILL) vectors to extract features for measuring semantic and auditory similarities of turns within dialogues in two comparable spoken corpora of two different languages. We found people's tendency to entrain on semantic features more when compared to auditory features. Additionally, we found that entrainment in semantic and auditory linguistic features are positively correlated. The findings of this study might assist in implementingthe mechanism of entrainment in human-machine interaction (HMI).Index Terms: entrainment, alignment, semantic information, DNN embeddings, TRILL vectors§ INTRODUCTION Entrainment is the tendency of a speaker to adjust some properties of a speaker’s features to match the interlocutor’s characteristics. It has been found to correlate with positive social attributes such as likeability <cit.>, task success <cit.>, and even rapport with a robot <cit.>. According to the psycholinguistics literature, entrainment affects various linguistic dimensions, such as lexical choice <cit.>, syntactic structure <cit.>, or acoustic-prosodic features <cit.>. Several studies have investigated the effects of entrainment utilizing different modalities and implemented it in Spoken Dialogue Systems (SDS) <cit.>. In SDS, speech entrainment functionality would enable machines to dynamically entrain and disentrain on various auditory features, which might result in more efficient, successful, natural, and pleasing interactions. Similarly, implementing semantic entrainment functionality would enable machines to align semantically with humans resulting in more meaningful conversations. An essential first step toward effectively implementing entrainment in SDS is understanding how entrainment works at different linguistic levels and what their relationships are. Understanding these variations will allow us to weigh them meaningfully when they are combined to develop SDS systems equipped with effective entrainment functionalities. Entrainment has previously been studied independently using linguistic-related parameters <cit.> or paralinguistic-related parameters <cit.>. Additionally, researchers have started exploring the correlation between entrainment at different linguistic levels. For instance, <cit.> explored the relationship between prosodic, lexical, semantic, and syntactic entrainment among individuals with autism spectrum disorder (ASD). The results revealed distinct patterns of prosodic and lexical entrainment. Similarly, <cit.> explored the correlation between acoustic-prosodic and syntactic entrainment within a dialogue. They reported speakers entrain on some but not all features within a linguistic level. Furthermore, <cit.> reported correlations between acoustic-prosodic and lexical entrainment in group conversations. 
On the contrary, <cit.> found that none of the acoustic-prosodic and lexical entrainment measures were meaningfully correlated, clustered, or exhibited principal components. Hence, the results of studies exploring the relationship of entrainment at different levels are inconclusive. In a recent study <cit.>, DNN embeddings were used to explore the relationship between acoustic-prosodic and semantic entrainment. The authors proposed measures of “semantic similarity” of dialogues using BERT embeddings trained on a Chinese spoken corpus. They reported an inverse relationship between them: interlocutors did not adjust prosodic features when their semantics were closer to their partners. However, these results and their wider impact on SDS applications should be interpreted with caution since there were three limitations to the given study. First, the question-response system in Chinese conversations was analyzed. The authors did not provide a cross-linguistic comparison, which would allow observing the trends and underlying patterns by comparing auditory and semantic entrainment in different languages. Second, the authors introduce convergence and synchrony as entrainment metrics. Convergence implies people become more similar over the period of time. Synchrony means people are consistently behaving in similar way. The authors did not consider proximity as an entrainment metric which is helpful in understanding if two people are getting semantically closer to each other at a given time. In a session that displays proximity, the speaker turns are more similar to the immediately adjacent turns of the interlocutor than to other random interlocutor’s turns <cit.>. Information about proximity might be valuable for turn-to-turn implementation of entrainment into automatic SDS. Last, the authors did not report if BERT embeddings were normalized or not. Usually, BERT embeddings are not normalized and utilizing Pearson's correlation can provide inconsistent results. There is a high degree of sensitivity in Pearson's r to even minor deviations from normality, where an outlier can hide an underlying association <cit.>. Using a novel approach in this study, we describe linguistic information and analyze the entrainment relationship between two different linguistic levels by utilizing different entrainment metrics (proximity, convergence, and synchrony) on two different spoken corpora using different languages. Empirical studies exploring the entrainment relationship between different linguistic levels have found variable results so far. There might be three possible reasons associated with it. First, entrainment in linguistic levels has been analyzed using different methods. For example, in <cit.>, the authors measured acoustic-prosodic entrainment using the metrics proposed in <cit.>, which measures correlations among adjacent turns. In contrast, they analyzed syntactic entrainment with generalized logit mixed-effect models (GLMM) <cit.>. Second, different toolkits are utilized for feature extraction.For example, in <cit.>, researchers used the PRAAT toolkit <cit.> for extracting 323temporal and acoustic-prosodic features, whereas <cit.> derived pitch, intensity, and rhythm-related features using the contour-based, parametric, and super positional intonation stylization (CoPaSul) toolkit that uses some different feature extraction and manipulation approaches <cit.>.Lastly, researchers measured similarity using different units of analysis. 
In <cit.>, authors measure acoustic-prosodic entrainment on the inter-pausal unit (IPU)[IPU is a pause-free unit in turn separated by at least 50 ms. of silence], whereas they measured lexical entrainment using n-gram sequences.In this study, we will extract features and measure entrainment using the same methodology in an effort to limit the mentioned sources of variability in results.Finally, empirical findings on entrainment suggest it is a complex phenomenon where people entrain/dis-entrain on different para-linguistic features <cit.>. Earlier studies on entrainment have utilized paralinguistic features that incorporate spectral, temporal, and acoustic-prosodic features. A DNN embedding can solve the problem of fragmentation in para-linguistic features. DNN embedding is a method used to represent discrete variables as continuous vectors. DNN embedding using textual modality such as transformer <cit.> is immensely popular and has broader applications in NLP applications. Similarly, DNN embedding using auditory modality has provided promising outcomes in improving the performance of automatic speech recognition and other applications. In <cit.>, the TRILL vector was proposed, which creates embeddings based on a CNN architecture that uses triplet-loss representation. This approach maps audio segments that appear nearer in time to be nearer in the embedding space. A comparison of different auditory features such as Low-level descriptors (LLD) features, spectral features, and DNN audio embeddings (x-vectors, TRILL vectors) was presented in <cit.>. In comparison to different auditory features, TRILL vectors provided greater classification accuracy in this study. Hence, we employ this method in our work to compare the acoustic and semantic entrainment. In sum, research into speech entrainment has so far been fragmented, with numerous individual features and measures of similarity being used, but no attempts have been made prior to our knowledge that measures auditory similarity using DNN embeddings. With a long-term goal to develop an effective SDS, we analyze in this study auditory and semantic entrainment in comparable corpora of conversational speech in English and Slovak. Our paper makes three main contributions. First, we measured entrainment in conversational corpora using state-of-the-art DNN embeddings on semantic and auditory levels. Second, we explore the relationship between the two levels using the same methodology. Finally, the experimental result shows that entrainment in both levels is correlated positively in both spoken corpora.§ DATA AND FEATURES In this section, we describe two task-oriented spoken language corpora we analysed in the current study, how we extracted semantic and auditory features from them, and how we calculated metrics for measuring auditory and semantic entrainment.§.§ Dataset§.§.§ Columbia Games CorpusThe Columbia Games Corpus <cit.> consists of 12 spontaneous dyadic conversations between native Standard American English (SAE) speakers. Participants included thirteen individuals (six females and seven males); eleven participated in two sessions on different days and with other partners. Each dyad played four computer games of two kinds: Cards games and Objects games involving communication and teamwork. The subjects did not have visual contact due to a curtain placed between them ensuring verbal communication only. Twelve sessions were recorded, totaling 9 hours and 13 minutes. 
The subset of the Columbia Games Corpus most closely resembling spontaneous task-related conversations, namely the Objects game, was used for the current study, which roughly comprises 4.3 hours of speech data.§.§.§ SK-Games Corpus The SK-games corpus <cit.> is identical to the Objects games of the Columbia Games Corpus for SAE, except for changes in some screen images and their locations. The corpus contains nine dyadic conversations recorded by native speakers of Slovak. Eleven speakers (five females and six males) participated in the study; seven participated in two sessions, each with a different partner. The corpus involves 6.3 hours of spoken dialogue. §.§ Feature extraction The semantic and auditory linguistic levels of entrainment are analysed in each corpus. To extract semantic features, each turn in the dialog is encoded into a fixed-length vector (embeddings). For the Columbia games corpus, we used a neural network-trained model (SBERT) <cit.>, representing 768 one-dimensional semantic features for each turn. Similarly, for the SK-Games corpus, we used the Slovak masked language model called SlovakBERT <cit.> where each turn is encoded into 768 one-dimensional semantic features. Furthermore, to extract auditory features for each turn, the TRILL vector <cit.> is used, representing 512 one-dimensional auditory features per turn. Since the TRILL vector model is language-independent, we used the same model on both the spoken corpora. §.§ Entrainment metrics In <cit.>, the authors introduced three measures of entrainment:Proximity describes the similarity of interlocutor’s speech at turn exchanges.Convergence quantifies the tendency when two speaker’s speech becomes more similar throughout the conversation.Synchrony describes the entrainment by direction where speaker’s prosodic features become correlated to his/her interlocutor. Based on the definition of the given metrics we used the same metrics for the current study. In earlier studies, absolute values were used to measure entrainment on acoustic-prosodic features. Since we are using DNN embeddings in the current study, the metrics are re-defined.Proximity is measured using paired t-tests on two sets of differences: a set of adjacent distance (Eq.<ref>) and another corresponding set of non-adjacent distance (Eq.<ref>). Adjacent distance is the cosine distance between speaker's embeddings and his/her conversational partners adjacent embeddings. On the other hand, non-adjacent distance is the cosine distance between the embeddings of a speaker and other random non-adjacent embeddings of his/her conversational partner. For ten random turns of another speaker, we measured the non-adjacent distance and calculated the mean. If the cosine distance of the adjacent distance is greater than the non-adjacent distance, we can infer that speakers are getting closer to each other. adjacent distance=cos (A,B) = A · B/|A||B|non-adjacent distance= ∑_i=1^10cos(A,B_rand)Convergence is measured by Pearson's correlation between cosine distance between adjacent turns and turn number (time). Synchrony is measured using Pearson's correlation on two sets of self-distance of speaker A and B. 
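In code, the proximity test reduces to a paired comparison of these two similarity sets (self-distance, used by the synchrony measure, is defined just below); a minimal sketch with hypothetical variable names, where each turn is represented by one embedding vector (SBERT/SlovakBERT for the semantic level, TRILL for the auditory level):

# Sketch of the proximity measure: paired t-test between adjacent and non-adjacent
# cosine similarities over a session (names and turn pairing are illustrative).
import numpy as np
from scipy.stats import ttest_rel

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def proximity(turns_a, turns_b, n_rand=10, seed=0):
    """turns_a[i] and turns_b[i] hold the embeddings of A's turn i and B's adjacent turn."""
    rng = np.random.default_rng(seed)
    adjacent, non_adjacent = [], []
    for i, a in enumerate(turns_a):
        adjacent.append(cos_sim(a, turns_b[i]))
        pool = np.delete(np.arange(len(turns_b)), i)               # B's non-adjacent turns
        picks = rng.choice(pool, size=min(n_rand, len(pool)), replace=False)
        non_adjacent.append(np.mean([cos_sim(a, turns_b[j]) for j in picks]))
    # A significantly positive t (adjacent more similar than non-adjacent) = positive proximity.
    return ttest_rel(adjacent, non_adjacent)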
Self-distance (Eq.<ref>) of a speaker is measured using cosine similarity between two consecutive turns of the same speaker.self distance=cos(A_i,A_i+1)§ RESULTS§.§ Auditory and semantic entrainment using DNN §.§.§ Columbia-Games corpusTable <ref> (a) shows the auditory and semantic entrainment results in the Columbia games corpus.Proximity: The English dataset shows little evidence of local proximity on auditory features. Only three sessions shows evidence of positive proximity. On the semantic level, in contrast, we found seven sessions that showed positive proximity. In addition, we observed that the distribution of the sessions with positive proximity in two levels is not random and that in all but one case, if people entrain on the auditory level they also entrain on the semantic level.Convergence: We found little evidence of convergence on both levels in the Columbia games corpus. In auditory features, only one session shows significant evidence of divergence, i.e., differences between partners increase over time. On the contrary, one session shows significant evidence of positive convergence in semantic features. Synchrony: The auditory features showed little evidence of synchrony. Only one session shows evidence of positive synchrony. Positive synchrony implies both the speakers are moving in the same direction, i.e., if speaker A raises his/her voice, then speaker B also raises his/her voice. On the contrary, we did not find evidence of synchrony on semantic features in the English corpus. Furthermore, before the Bonferroni correction, we found that two sessions showed negative synchrony; one session exhibited positive synchrony in semantic features, and one session exhibited positive synchrony in auditory features.§.§.§ SK-Games CorpusTable <ref> (b) shows the results of auditory and semantic entrainment with proximity, convergence, and synchrony as entrainment metrics based on the Slovak games corpus.Proximity: The Slovak data shows little evidence of proximity on the auditory level. For auditory features, only one session shows evidence of positive proximity, and only one shows negative proximity. On the contrary, four sessions show significant positive proximity for semantic features, while one shows significant negative proximity. In addition, we observed a similar pattern that we observed earlier in the English corpus, i.e., people entrain on both features when they entrain on auditory features.Convergence: We found little evidence of convergence on both levels in the Slovak data. Two sessions display evidence of positive convergence for auditory features. On the contrary, only one session showed evidence of divergence on the semantic level. Additionally, before applying the Bonferroni correction, we found that people converge on auditory features more when compared to semantic features in the SK-games corpus.Synchrony: The Slovak data shows little evidence of synchrony on semantic features: One session shows positive synchrony for semantic features. On the contrary, no session shows evidence of synchrony in auditory features. In <cit.>, the authors reported negative synchrony is evident on almost every para-linguistic (auditory) feature of the SK-games corpus. We found similar evidence to be true where 6 out of 9 sessions show negative synchrony; however, they are not statistically significant. §.§ Relationship between auditory and semantic entrainment We measured two sets of adjacent distances using (Eq. 
<ref>): a set of adjacent distances on auditory features and another set of adjacent distances on semantic features. We measured Pearson's correlation between adjacent distance on auditory and semantic embeddings to investigate the relationship between semantic and auditory features. Columbia Games Corpus: Table <ref> (left panel 1a) shows results for the entrainment relationship between auditory and semantic features using the SBERT model in English Data. We found six sessions out of 12 exhibits a slightly significant positive correlation (mean r=0.21). To explore the potential effect of the selection of language models (semantic model), we also utilized Google's Universal sentence encoder (USE) model <cit.> for extracting semantic features for each turn. Using the USE model, we measured adjacent distance on semantic features and measured Pearson's correlation between semantic and auditory features.Table <ref> (middle panel 2a) shows the entrainment relationship between auditory and semantic features for the USE model.We found ten sessions out of 12 exhibits a slightly positive correlation (mean r=0.20).SK-Games Corpus: Table <ref> (rightmost panel b) shows that Slovak data has a stronger positive correlation between entrainment in both linguistic levels than the English data where all the sessions are positively correlated with mean r = 0.40.§ DISCUSSION AND CONCLUSION We analyzed semantic and auditory entrainment using three different entrainment metrics over a total of 21 sessions of collaborative dyadic interactions in two languages. We observed the following patterns that emerged from the analysis. Firstly, proximity is more prevalent than synchrony and convergence in both semantic and auditory entrainment. In both languages, positive proximity is evident in a greater number of dialogues compared to convergence and synchrony, indicating the tendency of people to get closer to each other in both semantic and auditory space at a given point in time. Secondly, we found that semantic proximity is more prevalent than auditory proximity. In both datasets, we observed that people entrain on semantic features more when compared to auditory features. In general, when the semantics of two interlocutors become more similar, the interlocutor can understand the content of the conversation more easily. One possible reason for such a result can be traced to the type of corpora utilized for entrainment analysis. We used task-oriented corpora, where the objective was to communicate about specific items in order to reach a joint goal. Semantic entrainment is crucial in task-oriented conversations like this since the task cannot be completed successfully without it. In contrast, auditory entrainment is optional and may be used to support semantic entrainment or indicate various aspects of the negotiation in terms of social relationship between the interlocutors. The findings of our study might vary from analyzing entrainment in real-life conversational corpora where semantic and auditory entrainment might weigh differently. Thirdly, we noticed that semantic and auditory entrainment are positively correlated. A positive relationship between different linguistic levels can be conceptualized as people who entrain on one level are more likely to entrain on other levels. This finding is consistent with the Interactive Alignment Model proposed by <cit.>. This cognitive theory suggests that alignment at one level leads to alignment at other levels. 
Our findings suggest entrainment can be considered a single latent behavior or a collection of linked behaviors where people aligning on auditory features are more likely to align on semantic features. It is interesting to note the directionality in our findings: semantic entrainment implies auditory one whereas the reverse is not the case. The results of our study may also inform models dealing with the percolation of entrainment across linguistic levels.Lastly, we noted that selecting a language model is crucial in identifying the relationship between different linguistic levels. We measured the relationship between auditory and semantic linguistic levels using two different language models for extracting semantic features in the English dataset. We found variance in results where utilizing the SBERT model reported six sessions are significantly positively correlated with mean r of 0.20. In contrast, the USE model reported that ten sessions are significantly positively correlated with mean r = 0.21. The average results of correlations are almost identical (r = 0.2 and 0.21); however, the number of sessions that are significantly positively correlated is different. A language model might account for such variability in results and when considering the entire corpus, differences are smoothed out. In the Slovak dataset, we found a relatively stronger correlation between auditory and semantic entrainment with mean r = 0.40 on all sessions. It remains to be explored if this difference stems from the difference among the patterns of entrainment in Slovak and English or if, in part, it might stem from the selection of the language model as both datasets in the current study are similar. We did not find any other language models trained in Slovak due lower NLP resources compared to English. Extracting semantic features from different language models could allow us to have a more meaningful comparison and understand if such a stronger correlation is due to the language model.To conclude, in earlier studies researchers used fragmented features and different methods to measure entrainment, which might have contributed to the variation in results. We measured entrainment using the comparable methodology on different levels and in different languages, and our measures captured entrainment patterns that differ from previous studies, e.g.<cit.>. This further implies that methodology and features utilized for measuring entrainment play an important role in finding the relationship between different levels. In our future work, we plan to investigate entrainment relationships also on other linguistic levels, such as lexical and syntactic, and analyze the entrainment relationships among them. This will allow us to pursue developing SDS whose entrainment functionalities are informed by the relationship among entrainment on different linguistic levels, which could provide a more naturalistic conversational experience in future human-machine spoken interactions.§ ACKNOWLEDGEMENTSThis project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 859588 and in part from the Slovak Granting Agency grant VEGA2/0165/21 and Slovak Research and Development Agency grant APVV-21-0373.IEEEtran
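For completeness, the remaining session-level measures (convergence, synchrony) and the cross-level comparison used in the relationship analysis above all reduce to Pearson correlations over per-turn similarity series; a compact sketch with hypothetical inputs (the paper states convergence in terms of cosine distance; using similarity only flips the sign of r):

# Sketch of convergence, synchrony, and the auditory/semantic relationship measure
# (hypothetical inputs: one embedding per turn, per speaker and per level; equal turn counts).
import numpy as np
from scipy.stats import pearsonr

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def adjacent_sims(turns_a, turns_b):
    return np.array([cos_sim(a, b) for a, b in zip(turns_a, turns_b)])

def convergence(turns_a, turns_b):
    """Correlation of adjacent-turn similarity with turn number (time)."""
    sims = adjacent_sims(turns_a, turns_b)
    return pearsonr(np.arange(len(sims)), sims)

def synchrony(turns_a, turns_b):
    """Correlation between the two speakers' self-similarity series (consecutive own turns)."""
    self_a = [cos_sim(turns_a[i], turns_a[i + 1]) for i in range(len(turns_a) - 1)]
    self_b = [cos_sim(turns_b[i], turns_b[i + 1]) for i in range(len(turns_b) - 1)]
    return pearsonr(self_a, self_b)

def cross_level(audio_a, audio_b, sem_a, sem_b):
    """Correlation between auditory and semantic adjacent-turn similarities within a session."""
    return pearsonr(adjacent_sims(audio_a, audio_b), adjacent_sims(sem_a, sem_b))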
http://arxiv.org/abs/2312.16599v1
{ "authors": [ "Jay Kejriwal", "Štefan Beňuš" ], "categories": [ "cs.CL", "cs.SD", "eess.AS" ], "primary_category": "cs.CL", "published": "20231227145009", "title": "Relationship between auditory and semantic entrainment using Deep Neural Networks (DNN)" }
The globular cluster VVV CL002 falling down to the hazardous Galactic centre
Dante Minniti 1,2,3, Noriyuki Matsunaga 4,5, José G. Fernández-Trincado 6, Shogo Otsubo 5, Yuki Sarugaku 5, Tomomi Takeuchi 5, Haruki Katoh 5, Satoshi Hamano 7, Yuji Ikeda 5,8, Hideyo Kawakita 5,9, Philip W. Lucas 10, Leigh C. Smith 11, Ilaria Petralia 1, Elisa Rita Garro 1, Roberto K. Saito 3, Javier Alonso-García 12, Matías Gómez 1, María Gabriela Navarro 13

Recently, neural module networks (NMNs) have yielded ongoing success in answering compositional visual questions, especially those involving multi-hop visual and logical reasoning. NMNs decompose the complex question into several sub-tasks using instance-modules from the reasoning paths of that question and then exploit intermediate supervisions to guide answer prediction, thereby improving inference interpretability. However, their performance may be hindered due to sketchy modeling of intermediate supervisions. For instance, (1) the prior assumption that each instance-module refers to only one grounded object overlooks other potentially associated grounded objects, impeding full cross-modal alignment learning; (2) IoU-based intermediate supervisions may introduce noise signals, as the bounding box overlap issue might guide the model's focus towards irrelevant objects. To address these issues, a novel method, Detection-based Intermediate Supervision (DIS), is proposed, which adopts a generative detection framework to facilitate multiple grounding supervisions via sequence generation. As such, DIS offers more comprehensive and accurate intermediate supervisions, thereby boosting answer prediction performance. Furthermore, by considering intermediate results, DIS enhances the consistency in answering compositional questions and their sub-questions. Extensive experiments demonstrate the superiority of our proposed DIS, showcasing both improved accuracy and state-of-the-art reasoning consistency compared to prior approaches.

§ INTRODUCTION

Compositional visual question answering (VQA) <cit.> has been an emerging research topic in the multimodal domain and has received increasing attention from both the computer vision and natural language processing communities. Specifically, given input images and questions, the task is to generate answers to the questions according to the content of the images. Generally, compositional questions involve multiple visual entities or concepts (objects, attributes, and relations), and models demand a rich set of abilities (semantic understanding, object detection, visual/logical reasoning) to get the right answer. One of the challenges in compositional visual question answering lies in modeling the reasoning process. From this perspective, VQA models can be divided into two categories: holistic and modular. Holistic models <cit.> generate answers for all types of questions through a unified multimodal fusion model, and the reasoning process is implicitly performed during the encoding and fusion stages.
Despite their effectiveness, holistic models cannot reflect the intermediate reasoning process. In contrast, the modular model, i.e., the neural module network (NMN) <cit.>, has become a mainstream approach due to its explicit reasoning procedure and interpretable characteristics. Specifically, an NMN parses questions into predefined reasoning modules and composes these modules into an executive program, thus deconstructing complex questions into several easy-to-solve problems. During the process of multi-hop visual/logical reasoning and answer generation, the intermediate states and results can be explicitly observed from each module. Moreover, in order to restrict the reasoning process of NMN models, extra intermediate supervisions <cit.> have been proposed to improve answer prediction performance, which restrict models to focus on pivotal objects via an Intersection over Union (IoU) constraint between predicted bounding boxes and ground-truth ones.

Despite the significant improvement in the accuracy metric, there are several shortcomings in the IoU-based intermediate supervisions of previous methods. For example, MMN <cit.> directly exploits the ground-truth scene graph paths corresponding to questions for intermediate result generation, which ignores other potentially correct intermediate objects and hinders the model from fully learning cross-modal alignment and the reasoning process. As Figure <ref> (a) shows, MMN adopts the leftmost pillow to supervise the first step result (Select(pillow)), while our proposed DIS method takes into account all possible correct results. Such comprehensive supervisions alleviate the missing-in-the-middle problem, thus facilitating the model to generate correct answers. Besides, previous IoU-based intermediate supervisions may introduce noise signals due to the bounding-box overlap problem, which induces the model to focus on irrelevant objects. This is exemplified in Figure <ref> (b). In the first reasoning step, the model is required to focus on the napkin (probability distribution corresponding to the green lines). However, due to the overlap between the napkin and other objects (table, laptop, cup, etc.), the IoU-based supervisions prompt the model to focus on irrelevant areas (probability distribution corresponding to the red lines), thus leading to grounding ambiguity and generating wrong answers.

To this end, we propose a novel Detection-based Intermediate Supervision (DIS) method to resolve the aforementioned issues. Specifically, to obtain comprehensive intermediate results, executive programs are parsed from the questions, and then step-by-step inference is performed on the scene graphs, which generates complete intermediate results (illustrated in Figure <ref> (a)). Afterwards, a generative framework is proposed to supervise the VQA model using the intermediate results, which transforms the intermediate supervisions into sequences and constrains model states via sequence generation (illustrated in Figure <ref> (b)). Compared to previous methods, our proposed DIS provides more comprehensive supervision signals for the reasoning process and exploits a unified generative framework to constrain the intermediate states, thereby improving the answer prediction performance. Moreover, due to the consideration of intermediate results, the answering consistency among compositional questions and their sub-questions is significantly improved.
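To make the overlap problem concrete, below is a minimal sketch (our illustration, not the authors' released code) of how an IoU-based soft target distribution over candidate boxes can be computed; the box format, the toy coordinates, and the normalisation are assumptions.

```python
import torch

def iou(boxes, gt):
    """IoU between N candidate boxes and one ground-truth box, all in (x1, y1, x2, y2)."""
    x1 = torch.maximum(boxes[:, 0], gt[0])
    y1 = torch.maximum(boxes[:, 1], gt[1])
    x2 = torch.minimum(boxes[:, 2], gt[2])
    y2 = torch.minimum(boxes[:, 3], gt[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter)

def iou_target_distribution(boxes, gt_box):
    """Soft target over candidate boxes proportional to IoU with the ground truth."""
    scores = iou(boxes, gt_box)
    return scores / scores.sum().clamp(min=1e-6)

# toy example: the napkin vs. an overlapping table region
boxes = torch.tensor([[10., 10., 60., 40.],    # napkin
                      [ 0.,  0., 90., 80.],    # table (contains the napkin)
                      [70., 50., 95., 75.]])   # cup (no overlap)
gt_napkin = torch.tensor([10., 10., 60., 40.])
print(iou_target_distribution(boxes, gt_napkin))   # roughly [0.83, 0.17, 0.00]
```

Although only the napkin is the correct referent, the enclosing table box receives a noticeable share of the target mass; this is exactly the grounding ambiguity that motivates supervising discrete detection sequences instead.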
In summary, the main contributions are as follows:
* We introduce Detection-based Intermediate Supervision (DIS), a novel method that provides more comprehensive intermediate supervisions via a unified generative framework. To the best of our knowledge, this is the first attempt to use a generative framework for intermediate supervision in visual question answering.
* We propose a scene graph inference framework, which executes programs parsed from questions step by step on scene graphs to obtain intermediate results. The supervision signals are further constructed by converting the results into a unified sequential form.
* We conduct extensive experiments to evaluate the effectiveness of our proposed DIS algorithm, in which our method achieves competitive answer prediction performance (61.31% vs. 60.83%) and superior reasoning consistency (73.11% vs. 71.47%, 64.20% vs. 61.94%, and 55.28% vs. 52.80%) compared to previous methods.

§ RELATED WORK

§.§ Visual Question Answering

Compositional visual question answering is the task of generating answers to compositional questions based on the image content. Generally, compositional questions consist of multiple visual concepts (objects, attributes, and relations) and require VQA models to perform multi-hop reasoning to get the right answers. Recently, several attempts have been made to facilitate visual and logical reasoning, and these methods can be divided into two categories: holistic and modular. Holistic methods <cit.> exploit a unified multimodal fusion model to solve all types of questions, and achieve implicit visual/logical reasoning through graph structures <cit.> and relational attention mechanisms <cit.>. With the help of the scene graph structure, images are represented as graphical webs containing information about objects, attributes, and relationships among interconnected objects, which can be used for visual reasoning via graph traversal. For example, NSM <cit.> performs sequential reasoning over the probabilistic scene graph of the image and achieves multi-hop inference by shifting probability distributions. RPR <cit.> casts visual reasoning as a path routing task and adopts reinforcement learning to explore the inference path. Despite the overwhelming successes achieved, holistic models process all types of questions via a unified model, which ignores the reasoning structure implicit in the question. Therefore, modular methods <cit.> have been proposed to make up for the above-mentioned issues. Specifically, modular methods parse the question into a structured tree that reflects the reasoning process and construct a question-specific model using pre-defined modules. Due to the explicit reasoning structure, such methods have strong interpretability and controllability. In addition, extra intermediate supervisions can be provided to constrain models to reason along prescribed directions, e.g., via an IoU-based Kullback-Leibler (KL) divergence <cit.>, thereby improving answer prediction performance. However, such IoU-based supervisions suffer from two issues, namely ignorance of multiple grounded objects and grounding ambiguity, which underutilizes the intermediate supervisions for model optimization.

§.§ Object Detection

There has been a tremendous amount of work on object detection, which requires extracting objects from images.
Traditional object detection algorithms introduce explicit prior knowledge by producing a set of proposals <cit.>, anchors <cit.>, or window centers <cit.>, and then perform non-maximum suppression <cit.> to remove duplicate predictions. To avoid complex processing procedures, DETR <cit.> exploits a Transformer-based encoder-decoder framework <cit.> for object detection, which learns a set of “object queries” to directly generate bounding boxes and object labels. All of these detectors require extra modules for bounding box regression and label prediction to obtain the final predictions. To further avoid such complexities, Pix2Seq <cit.> directly predicts the raw pixel coordinates through an encoder-decoder network, which achieves competitive performance while simplifying the detection framework. Inspired by Pix2Seq and language modeling <cit.>, we propose a detection-based supervision framework, which converts the intermediate results into a unified sequential form that consists of pixels and tokens, thereby providing more comprehensive supervision signals.

§ METHODOLOGY

We propose the Detection-based Intermediate Supervision (DIS) method to facilitate constraining the intermediate reasoning states, thereby improving the answer prediction performance. The overall framework is depicted in Figure <ref>. Specifically, image features are first extracted via a convolutional neural network (CNN). Then, the question is parsed into a program tree, followed by the program execution network to get the final answer. Finally, we introduce the detection-based intermediate supervision framework to enhance the reasoning ability of the VQA model.

§.§ Image Feature Extraction

Image feature extraction is based on the pre-trained Faster R-CNN model <cit.>. In contrast to previous methods <cit.> that adopt bottom-up features, we adopt the feature map output from the C5 layer of Faster R-CNN <cit.> for image representation. Specifically, given an image I, the pre-trained CNN backbone of Faster R-CNN is utilized to extract the feature map V∈ℝ^HW×d_v, where H, W indicate the height and width of the feature map, respectively, and d_v denotes the feature dimension. To endow the image features with visual contexts and cross-modal textual information, we follow the MCAN <cit.> method, which adopts Transformer blocks to encode the question and image. Specifically, given a question Q of length T, which is embedded into the latent space E∈ℝ^T×d_h, a two-layer Transformer is adopted to encode the question as follows: Ê = LN(E+SA(E)), Ẽ = LN(Ê+FFN(Ê)), where SA, LN, FFN denote self-attention, layer normalization, and feed-forward network, respectively. Afterwards, the image feature V is enriched via a two-layer Transformer using visual contexts and the question semantics Ẽ as follows: V' = FC(V+PosEmb), V̅ = LN(V'+SA(V')), V̂ = LN(V̅+GA(V̅,Ẽ)), Ṽ = LN(V̂+FFN(V̂)), where PosEmb indicates the position embedding, FC denotes a fully-connected layer converting the feature dimension from d_v to d_h, and GA denotes guided attention, which exploits the question semantics to enhance the relevant visual features. The resulting image representation Ṽ can be used for program execution to get the final answer.
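Below is a minimal PyTorch sketch of the two-layer question and image encoders just described; the hidden size, number of heads, and the use of nn.MultiheadAttention are our assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: self-attention (SA), optional guided attention (GA), and an FFN,
    each wrapped in a residual connection followed by LayerNorm, mirroring the equations above."""
    def __init__(self, d_h=512, heads=8, guided=False):
        super().__init__()
        self.sa = nn.MultiheadAttention(d_h, heads, batch_first=True)
        self.ga = nn.MultiheadAttention(d_h, heads, batch_first=True) if guided else None
        self.ffn = nn.Sequential(nn.Linear(d_h, 4 * d_h), nn.ReLU(), nn.Linear(4 * d_h, d_h))
        self.ln1, self.ln2, self.ln3 = nn.LayerNorm(d_h), nn.LayerNorm(d_h), nn.LayerNorm(d_h)

    def forward(self, x, guide=None):
        x = self.ln1(x + self.sa(x, x, x)[0])              # X = LN(X + SA(X))
        if self.ga is not None:
            x = self.ln2(x + self.ga(x, guide, guide)[0])  # X = LN(X + GA(X, question))
        return self.ln3(x + self.ffn(x))                   # X = LN(X + FFN(X))

B, T, H, W, d_v, d_h = 4, 32, 10, 10, 2048, 512            # shapes are assumptions
fc = nn.Linear(d_v, d_h)                                   # FC: d_v -> d_h
pos_emb = nn.Parameter(torch.zeros(1, H * W, d_v))         # PosEmb
question_encoder = nn.ModuleList(EncoderLayer(d_h) for _ in range(2))
image_encoder = nn.ModuleList(EncoderLayer(d_h, guided=True) for _ in range(2))

E = torch.randn(B, T, d_h)        # embedded question tokens
V = torch.randn(B, H * W, d_v)    # flattened C5 feature map from Faster R-CNN

for layer in question_encoder:
    E = layer(E)                  # contextualised question representation
V = fc(V + pos_emb)               # V' = FC(V + PosEmb)
for layer in image_encoder:
    V = layer(V, guide=E)         # question-aware image features used for program execution
```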
§.§ Program Generation

Program generation aims to parse questions into program trees that reflect the reasoning procedures inferred by the questions, and the program trees can be further used for model construction. We follow MMN <cit.> to generate programs from questions. Specifically, the nodes of the program tree are formalized as “Function(Arg1,...ArgN)”, where “Function” can be categorized into 10 different abstract types (select, relate, exist, or, etc.), and each abstract type is further subdivided into more subtypes (relate: relate_attr, relate_name, relate_inv_name, etc.), which take a variable number of arguments as inputs. Based on the above-mentioned program types, the complete program tree can be viewed as a sequence of functional nodes and generated by an encoder-decoder network, i.e., T5 <cit.>. As illustrated in Figure <ref>, a prompt (“transform question into programs:”) is added in front of the question and fed to the T5 model for program generation. The output sequence consists of inter-dependent functions, where “[N]” denotes the dependencies. Afterwards, a structured program tree with L layers is constructed from the sequence according to the dependencies, which can be used to guide step-by-step program execution.

§.§ Program Execution

Given the image features Ṽ and the L-layer program tree, program execution performs step-by-step inference from Layer-1 to Layer-L based on the image features to get the answer, and all layers share model parameters, making the model parameter-efficient and scalable to any number of layers. Specifically, suppose the program tree contains N nodes and each node corresponds to a program text (“Select(giraffe)”, “Select(elephant)”, etc.); a state matrix S∈ℝ^N×d_h is initialized with the program semantics as follows: {e⃗_0^i,e⃗_1^i,...,e⃗_k^i} = GloVe(prog_i), S_i = FC(Concat({e⃗_0^i,e⃗_1^i,...,e⃗_k^i})), where prog_i denotes the program text of the i-th node and is truncated or padded to the fixed length k, and GloVe denotes the GloVe embedding layer <cit.>. In the process of step-by-step inference, the matrix S implicitly contains the intermediate reasoning states and results, and can be used to decode outputs for our proposed DIS algorithm.

Denoting the input state of the l-th layer as S^l-1, we exploit a Transformer framework to obtain the output state S^l. Specifically, a masked self-attention layer is first exploited to gather dependencies from the state of the last layer, and then a guided attention layer is utilized to find visual clues from the image features Ṽ, formulated as follows: Ŝ^l-1 = LN(S^l-1+MaskSA(S^l-1,M^l)), S^l = LN(Ŝ^l-1+GA(Ŝ^l-1,Ṽ)), where MaskSA and GA denote the masked self-attention and guided attention layers, respectively. MaskSA uses the weight matrix M^l to mask non-dependent nodes, formulated as follows: MaskSA(S,M) = Softmax((Q^S)(K^S)^T/√(d_h)+M)V^S, where M∈ℝ^N×N denotes the mask matrix; M_ij=0 if and only if the i-th node is the parent node of the j-th node, and otherwise M_ij=-∞. Q^S,K^S,V^S are derived from S via three fully-connected layers, formulated as follows: Q^S = FC^Q(S), K^S = FC^K(S), V^S = FC^V(S). Different from SA, which gathers information from itself, guided attention gathers features from another source (visual clues), formulated as follows: GA(S,Ṽ) = Softmax((Q^S)(K^Ṽ)^T/√(d_h))V^Ṽ, where K^Ṽ,V^Ṽ are derived from Ṽ similarly to Equation <ref>. After L iterations of program inference, the final state S^L_N-1 is used to predict the answer via a multi-layer perceptron (MLP) layer: s⃗ = MLP(S^L_N-1), p(a|I,Q;Θ) = Softmax(s⃗), where s⃗∈ℝ^|𝒜| denotes the predicted scores of the answers in 𝒜, and the answer with the highest score is chosen as the final answer. Θ denotes the model parameters.
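As a rough illustration of one program-execution step (the MaskSA and GA updates above) together with the answer head, consider the following sketch; the dimensions, the toy dependency mask, and the answer vocabulary size are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgramExecutionLayer(nn.Module):
    """One reasoning step over the program-node states S. dep_mask[i, j] = 0 where node j is a
    dependency (parent) of node i and -inf elsewhere, so each node only attends to the nodes it
    depends on; the layer is shared across all L iterations."""
    def __init__(self, d_h=512):
        super().__init__()
        self.q = nn.Linear(d_h, d_h)          # FC^Q
        self.k = nn.Linear(d_h, d_h)          # FC^K
        self.v = nn.Linear(d_h, d_h)          # FC^V
        self.k_img = nn.Linear(d_h, d_h)      # keys derived from image features
        self.v_img = nn.Linear(d_h, d_h)      # values derived from image features
        self.ln1, self.ln2 = nn.LayerNorm(d_h), nn.LayerNorm(d_h)
        self.scale = d_h ** 0.5

    def forward(self, S, V, dep_mask):
        # Masked self-attention over program states: gather information from dependent nodes
        attn = F.softmax(self.q(S) @ self.k(S).transpose(-1, -2) / self.scale + dep_mask, dim=-1)
        S = self.ln1(S + attn @ self.v(S))
        # Guided attention: collect visual clues from the image features for every node
        ga = F.softmax(self.q(S) @ self.k_img(V).transpose(-1, -2) / self.scale, dim=-1)
        return self.ln2(S + ga @ self.v_img(V))

N, d_h, L, n_answers = 9, 512, 5, 1843        # answer vocabulary size is an assumption
layer = ProgramExecutionLayer(d_h)            # parameters shared across the L iterations
answer_head = nn.Sequential(nn.Linear(d_h, d_h), nn.ReLU(), nn.Linear(d_h, n_answers))

S = torch.randn(2, N, d_h)                    # program states initialised from GloVe embeddings
V = torch.randn(2, 100, d_h)                  # question-aware image features
dep_mask = torch.full((N, N), float('-inf')).fill_diagonal_(0.)  # toy mask: self-edges only

for _ in range(L):                            # L iterations of program inference
    S = layer(S, V, dep_mask)
logits = answer_head(S[:, -1])                # answer predicted from the last node's final state
```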
Finally, a cross-entropy loss is used to optimize the model, formulated as follows: ℒ^VQA=-𝔼_𝒟[log(p(a=a^gt|I,Q;Θ))], where 𝒟 and a^gt denote the VQA dataset and the ground-truth answer, respectively.

§.§ Intermediate Supervision

On top of the above-mentioned program execution, intermediate supervisions are proposed to constrain the reasoning process and improve the answer prediction performance <cit.>. However, previous methods calculate probability distributions based on the IoU between object bounding boxes and ground-truth ones, which easily induces the model to focus on irrelevant objects (refer to Figure <ref> (b)). To this end, we propose the detection-based intermediate supervision (DIS) algorithm, which formulates the intermediate supervisions in a unified sequence form, thereby endowing the model with the ability to exploit diverse supervision types (bounding boxes, logical words (true/false), and text). As shown in Figure <ref>, the DIS algorithm consists of two steps: (1) symbolic graph reasoning designs manual rules to execute the program tree on the ground-truth scene graph[The GQA <cit.> dataset provides ground-truth scene graphs only for the train and val splits. Therefore, DIS is used to optimize the model only in the training phase and is removed during the testing phase.], resulting in intermediate results (Objects, True/False, Answer), and (2) the intermediate result decoder decodes the intermediate outputs and is optimized using an auto-regressive loss.

§.§.§ Symbolic Graph Reasoning

To obtain the intermediate supervisions, we follow MMN <cit.>, which executes the Program on the ground-truth Scene Graph (as illustrated at the top of Figure <ref>). Specifically, we manually design the symbolic reasoning rules for each function type (Select, Relate, Verify, etc.). For example, Select(x) is defined to select the nodes corresponding to x, and Relate(x, to the left of) is defined to find the nodes to the left of x. With the Program and Scene Graph, the intermediate supervisions are inferred step by step, as illustrated at the bottom of Figure <ref>. Generally, the supervisions are categorized into three types: Objects with bounding boxes, True/False, and Answers. Regarding Objects, we follow Pix2Seq <cit.> to quantize coordinates into bins, which can be regarded as discrete labels. If multiple objects exist, their bounding boxes are randomly shuffled and concatenated to form the result sequence. For True/False and Answer, we directly use the textual tokens to form the sequence. Additionally, several special tokens are added to the sequence for further generation, e.g., [BEG], [SEP], and [END].

§.§.§ Intermediate Result Decoder

With the help of the intermediate results, the VQA model can be optimized using these supervisions. Specifically, a two-layer Transformer is proposed to decode the intermediate outputs from the states S̃^L. First, the [BEG] token is initialized with the state S̃^L_i and prompts the decoder to generate the output sequence. In the training phase, we use teacher forcing and an auto-regressive loss to optimize the model, formulated as follows: {y_0, y_1, ..., y_o-1} = Decoder(S̃^L_i,Ṽ), ℒ^DIS = -1/o∑_i=0^o-1log(p(y_i|y_0,...,y_i-1,S̃^L_i,Ṽ)), where o denotes the length of the result sequence and S̃^L_i∈ℝ^d_h denotes the intermediate state of the i-th program node.

§.§.§ Model Optimization

In the training phase, the answer prediction loss (Equation <ref>) and the detection-based intermediate supervision loss (Equation <ref>) are combined to optimize the model: ℒ=ℒ^VQA+αℒ^DIS, where α denotes the loss weight of DIS.
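To illustrate how the intermediate results might be serialised in the Pix2Seq style described above, here is a small sketch; the vocabulary layout, special-token ids, and example boxes are assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

N_BINS = 256                      # coordinate quantisation bins (see the ablations)
# assumed vocabulary layout: [0, N_BINS) are coordinate bins, then special tokens
BEG, SEP, END = N_BINS, N_BINS + 1, N_BINS + 2

def quantize_box(box, img_w, img_h, n_bins=N_BINS):
    """Map (x1, y1, x2, y2) pixel coordinates to discrete bin tokens."""
    x1, y1, x2, y2 = box
    to_bin = lambda v, size: min(int(v / size * n_bins), n_bins - 1)
    return [to_bin(x1, img_w), to_bin(y1, img_h), to_bin(x2, img_w), to_bin(y2, img_h)]

def build_target_sequence(boxes, img_w, img_h, shuffle=True, max_objects=4):
    """Serialise the intermediate objects of one program node into a token sequence.
    Boxes are shuffled so that, over epochs, every correct object supervises the decoder."""
    boxes = list(boxes)[:max_objects]
    if shuffle:
        boxes = [boxes[i] for i in torch.randperm(len(boxes)).tolist()]
    seq = [BEG]
    for b in boxes:
        seq += quantize_box(b, img_w, img_h) + [SEP]
    return seq[:-1] + [END]       # replace the trailing [SEP] with [END]

# teacher-forced auto-regressive loss for one node (logits would come from the two-layer decoder)
target = torch.tensor(build_target_sequence([(12, 30, 88, 96), (140, 35, 200, 90)], 640, 480))
logits = torch.randn(len(target) - 1, N_BINS + 3)      # predict tokens 1..o-1 from 0..o-2
loss_dis = F.cross_entropy(logits, target[1:])         # L^DIS, averaged over the sequence
# total objective would then be loss = loss_vqa + alpha * loss_dis, with alpha = 0.5 per the appendix
```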
In the testing phase, the DIS module can be removed because only the final answer needs to be predicted.

§ EXPERIMENTS

§.§ Datasets

To evaluate the answer prediction performance and answering consistency, the reported results in the following sections are evaluated on the widely used GQA <cit.> dataset and its variant GQA-Sub <cit.>. GQA <cit.> is a compositional visual question-answer dataset, which features compositional questions over real-world images. It is designed to provide accurate indications of visual understanding capacities and to mitigate the language priors that exist widely in previous VQA datasets <cit.>. GQA-Sub <cit.> is derived from the well-organized GQA dataset and creates sub-questions for the train and val splits, thereby enabling quantitative evaluation of reasoning consistency. More information about GQA and GQA-Sub, as well as their respective evaluation metrics, can be found in Appendix-A.

§.§ Implementation Details

Program Generation. T5-base[<https://huggingface.co/docs/transformers/model_doc/t5>] is utilized for text transformation, where the source and target texts are limited to maximum lengths of 40 and 100, respectively. We exploit the AdamW optimizer with a learning rate of 1e^-4 and a batch size of 32 to finetune T5 for 400k steps. Visual Question Answering. Following the settings from MMN <cit.>, the questions are truncated or padded to a fixed length of 32. The number of nodes in each program tree is limited to 9, and the maximum length k of each program is set to 8. See Appendix-B for more implementation details. Baselines. Our model is compared with various state-of-the-art approaches excerpted from MMN, including BUTD <cit.>, MAC <cit.>, GRN <cit.>, LCGN <cit.>, BAN <cit.>, PVR <cit.>, LXMERT <cit.>, MCAN <cit.>, MMN <cit.>, RPR <cit.>, and RCVQA <cit.>. We did not compare with NSM <cit.> because it utilizes a well-tuned external scene graph generation model. More information about the baselines is provided in Appendix-C.

§.§ Experimental Results

In the experiments, we primarily assess the performance of our model on answering compositional questions as well as its reasoning consistency. The corresponding experimental results are presented in Table <ref> and Table <ref>, respectively. The online test results of the state-of-the-art models and our proposed DIS method on the GQA dataset are shown in Table <ref>, and these results also reflect the performance of all models on compositional questions. The required inputs represent the information necessary for the model to predict the answers, where V and L indicate vision and language, respectively, while DataAug represents data augmentation. As shown, our proposed DIS achieves the best Binary, Open, and Overall accuracies among the methods listed in Table <ref>. Specifically, with basically the same inputs and settings as MMN, the DIS method outperforms MMN by margins of +0.46% and +0.47% for Binary and Open questions, respectively. We also evaluate the performance of the proposed DIS in terms of reasoning consistency. The results regarding accuracy (Acc and Acc(Sub)) and reasoning consistency (RC(k), refer to Appendix-A for details) of our proposed DIS and the state-of-the-art methods are presented in Table <ref>. Acc and Acc(Sub) denote the accuracies on the val and val-sub splits, respectively. DA indicates the usage of augmented sub-questions for model training.
As Table <ref> shows, our proposed DIS surpasses the other state-of-the-art methods on both the accuracy and reasoning consistency metrics. Specifically, without data augmentation of sub-questions, DIS outperforms MMN by large margins of 4.66% and 5.85% on the val and val-sub splits, respectively. The reasoning consistency is also significantly improved by using our DIS algorithm, by margins of 6.19%, 9.57%, and 14.9% on RC(1), RC(2), and RC(3), respectively. Such superiority stems from the comprehensive yet noise-free supervisions of intermediate results provided by DIS, which significantly enhance the ability of the model to answer compositional questions and the corresponding sub-questions. When we train DIS with the sub-questions in train-sub as a form of data augmentation, our proposed DIS even outperforms the best-performing RCVQA model, which is tailored for enhancing reasoning consistency through the incorporation of a consistency constraint loss. Specifically, DIS surpasses RCVQA by a significant margin of 3.55% on Acc and by 1.64%, 2.26%, and 2.48% on the three reasoning consistency metrics, respectively, which indicates the effectiveness of our proposed method.

§.§ Ablation Studies

In this section, a series of ablations is conducted on the GQA dataset to investigate the effectiveness of our proposed method. All the models are trained on the train+val split and evaluated on the testdev split. The experimental settings are kept consistent throughout the ablation studies. Different object supervision formats: Table <ref> shows the results with different object supervision formats. (x1, y1), (x2, y2), and name denote the top-left coordinates, the bottom-right coordinates of the bounding box, and the object label, respectively. As Table <ref> shows, the bounding box information is sufficient for intermediate supervision, achieving the highest score (59.97%), and the extra object names decrease the performance. It is conjectured that extra supervisions (object names) force the VQA model to focus more on the intermediate results than on the final answer prediction, thus decreasing the answer prediction performance. Different loss weights of DIS and bootstrapping epochs: Table <ref> shows the results of different DIS loss weights and different numbers of bootstrapping epochs. As shown, the best accuracy (59.97%) is achieved when the DIS loss weight α=0.5, which surpasses the baseline α=0 by a significant margin of 1.22%, demonstrating the effectiveness of our proposed DIS method. In addition, it can be observed that bootstrapping for 1 epoch achieves the best accuracy (61.31%), while more bootstrapping epochs decrease the performance. It is conjectured that more training epochs on the all-split introduce biases into the VQA model, which is harmful to further fine-tuning. Additionally, the results regarding how the number of quantized bins, the maximum number of generated objects, and the shuffle mode of the object order affect the performance are shown in Appendix-D.

§.§ Visualization

We visualize several cases from the GQA testdev split, showcasing the intermediate results and predicted answers from MMN and DIS. As depicted in Figure <ref>, it is evident that MMN tends to focus on a large area of the image; e.g., the attention maps for the store, cloth, and fire truck in Figure <ref> (a), (b), and (d) are not tightly focused. One possible reason is that large bounding boxes are more likely to overlap with others.
Consequently, these bounding boxes are frequently used for training, leading models trained with IoU-based intermediate supervision to prioritize larger areas. On the contrary, our proposed DIS method is able to predict tight bounding boxes, e.g., the crosswalk and store in Figure <ref> (a), the cloth in Figure <ref> (b), and the street and fire truck in Figure <ref> (d), which helps the model learn more fine-grained cross-modal alignments and more accurate reasoning procedures. In addition, as depicted in Figure <ref> (b) and (c), there may exist multiple intermediate supervision targets. However, MMN does not precisely focus on these areas, e.g., the cloth in Figure <ref> (b). The incomplete prediction of intermediate results makes it easy for the model to infer incorrect answers. In contrast, our proposed DIS is able to predict multiple object results in one sequence and to unify the different result forms (logical true/false and objects) within one single framework.

§ CONCLUSION

We propose the DIS algorithm for compositional visual question answering. Specifically, DIS exploits a unified generative framework to provide intermediate supervisions in a sequential form, which provides more fine-grained and accurate supervisions, addressing the issue of supervision ambiguity and promoting cross-modal knowledge alignment. We conducted experiments on the GQA and GQA-Sub datasets, and the experimental results demonstrate that DIS achieves competitive answer prediction performance and superior reasoning consistency compared to previous state-of-the-art methods.

§ ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62276110 and No. 62172039, and in part by the fund of the Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL). The authors would also like to thank the anonymous reviewers for their comments on improving the quality of this paper.

This part serves as the appendix of the main paper. Section A provides detailed information about the GQA and GQA-Sub datasets, along with their respective evaluation metrics. Implementation details of the visual question answering model are presented in Section B. Additionally, Section C covers the baselines, while Section D introduces more ablation studies.

§ A. DATASETS

GQA: Specifically, GQA <cit.> is generated by leveraging Visual Genome <cit.> scene graphs to create diverse reasoning questions with less language bias. Therefore, it requires more complicated reasoning capacities to answer the questions. GQA consists of two splits (balance-split and all-split). The balance-split consists of QA pairs with a re-sampled question-answer distribution. Following common practice <cit.>, we use the all-split for bootstrapping and the balance-split for finetuning and online evaluation. The dataset is split into 70% train, 10% validation, 10% test, and 10% challenge. The metrics utilized in this paper include the accuracy on Open questions, the accuracy on Binary questions, and the overall accuracy. GQA-Sub: Specifically, GQA-Sub <cit.> transforms compositional questions into language graphs and extracts sub-graphs to construct sub-questions. To obtain the answer labels for these questions, GQA-Sub exploits ground-truth scene graphs and language graph traversal to get the answers. Additionally, to avoid language biases, GQA-Sub performs sampling three times over these generated samples, resulting in 351,271 and 45,043 sub-questions for the train and val splits, respectively.
Finally, a total of 4 splits are generated: train, train-sub, val, and val-sub, where the train/val-sub split contains the sub-questions corresponding to the train/val split. Following the common settings <cit.>, the VQA model is trained on train(-sub) and evaluated on val(-sub). To measure the reasoning consistency of VQA models, GQA-Sub designs a metric, the reasoning consistency score RC(k), computed by: RC(k)=∑_Q∈ℚ, N≥k Correct^f(Q,{Q_i}_i=1^N)/∑_Q∈ℚ, N≥k Correct(Q), where ℚ denotes the set of compositional questions, Correct(q)=1 only when q is correctly answered by the VQA model and Correct(q)=0 otherwise, and Correct^f(Q,{Q_i}_i=1^N)=1 only when the compositional question Q and all of its sub-questions {Q_i}_i=1^N are correctly answered, and Correct^f(Q,{Q_i}_i=1^N)=0 otherwise. The value of RC(k) ranges from 0 to 1 and indicates better consistency when RC(k) is higher. (A small computational sketch of this metric is given at the end of Appendix-D.)

§ B. IMPLEMENTATION DETAILS

Visual Question Answering. As for program execution, the number of reasoning layers L is set to 5. In the process of intermediate output decoding, the maximum number of generated objects is set to 4. The number of bins for quantizing coordinates is set to 256. Regarding model training, the loss weight of DIS, α, is 0.5. Following the settings from MMN <cit.>, the VQA model is first bootstrapped for 5 epochs on the all-split and then finetuned on the balance-split. In the bootstrapping stage, the model is trained using the Adam optimizer with a fixed learning rate of 1e^-4 and a batch size of 128. In the fine-tuning stage, the model is optimized using the Adam optimizer with a batch size of 128 and a base learning rate of 2e^-4. In addition, a warmup <cit.> strategy is exploited to facilitate model fine-tuning. Specifically, the learning rate increases linearly from 1e^-4 to 2e^-4 for the first 4 epochs and decays by a factor of 0.5 every 2 epochs starting from epoch 10. The model is trained for 18 epochs in total.

§ C. BASELINES

The baselines include BUTD <cit.>, MAC <cit.>, GRN <cit.>, LCGN <cit.>, BAN <cit.>, PVR <cit.>, LXMERT <cit.>, MCAN <cit.>, MMN <cit.>, RPR <cit.>, and RCVQA <cit.>. A brief introduction of these baseline methods is given as follows.
* BUTD. The classic attention-based model for visual question answering and image captioning, which exploits top-down attention to capture the question-relevant visual features for answer prediction.
* MAC. The memory-based reasoning model, which decomposes the question into a series of attention-based reasoning steps and performs control and memory operations iteratively to get the answer.
* GRN. The graph-based reasoning model, which considers the relationships between words in the question and designs intra- and inter-modal graphs to exchange information from multi-modal inputs, thereby realizing implicit multi-step reasoning.
* LCGN. The graph-based reasoning model, which constructs a fully-connected object graph and exploits a graph attention network (GAT) to perform implicit visual reasoning based on the graph structure.
* BAN. The bilinear attention network, which calculates bilinear attention distributions via low-rank bilinear pooling to facilitate interactions between multimodal inputs.
* PVR. The module-based approach that incorporates the concepts of logical and/or for logical inference, and introduces more perceptual modules for better logical generalization.
* LXMERT. The multi-modal pretrained model, which first pretrains Transformer-based models on a large-scale image-text corpus to learn task-agnostic representations, and then finetunes the models on the downstream VQA task.
* MCAN.
The Transformer-based multi-modal fusion model, which fuses intra- and inter-modal features using multi-head self-attention and guided attention, resulting in multi-modal joint representations for visual question answering.
* MMN. The module-based network, which decomposes questions into an interdependent program sequence and assembles parametric modules to construct a reasoning model for answer generation.
* RPR. The graph-based reasoning model, which formulates the reasoning process as a reinforced path routing problem and exploits reinforcement learning to optimize the reasoning process.
* RCVQA. The VQA model that focuses on the answering consistency problem and designs a consistency constraint loss to improve the answering consistency between a compositional question and its sub-questions.

§ D. ABLATION STUDIES

Different quantized bins and maximum number of generated objects: Table <ref> shows the results when quantizing coordinates into different numbers of bins and generating different numbers of intermediate results. As Table <ref> shows, the highest accuracy (59.97%) is achieved when the number of quantized bins is set to 256 and the maximum number of generated objects is set to 4. Intuitively, the number of quantized bins is limited by the size of the feature map. The reason why the accuracy does not increase when # bins > 256 might be that a feature map of size 10×10 is unable to predict such fine-grained coordinates. From Table <ref>, it can also be observed that limiting generation to 1∼3 intermediate results decreases the VQA performance, indicating that more intermediate supervisions are helpful for the model to learn visual reasoning and achieve higher performance. Shuffle mode of object order: Table <ref> shows the results of different shuffle modes when providing intermediate results. As illustrated in Figure 1(a), there might exist multiple results during program inference. Therefore, if we do not shuffle the order of the intermediate results, only part of the objects are used for supervision, hindering the model from learning the comprehensive reasoning process. As expected, using the shuffle strategy improves the VQA performance (59.97% vs. 59.57%).
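Finally, as a concrete reading of the RC(k) metric defined in Appendix-A, the following small sketch computes reasoning consistency from per-question correctness records; the record format is an assumption.

```python
def reasoning_consistency(records, k):
    """RC(k) over a list of records, one per compositional question:
    {'correct': bool, 'sub_correct': [bool, ...]} where sub_correct holds its N sub-questions.
    Denominator: compositional questions with >= k sub-questions that are themselves answered
    correctly; numerator: those whose sub-questions are additionally all answered correctly."""
    eligible = [r for r in records if len(r["sub_correct"]) >= k and r["correct"]]
    if not eligible:
        return 0.0
    fully = [r for r in eligible if all(r["sub_correct"])]
    return len(fully) / len(eligible)

records = [
    {"correct": True,  "sub_correct": [True, True]},          # counted in RC(1) and RC(2)
    {"correct": True,  "sub_correct": [True, False, True]},   # hurts RC(1)..RC(3)
    {"correct": False, "sub_correct": [True]},                # excluded: parent answered wrongly
]
print(reasoning_consistency(records, 1))   # 0.5
print(reasoning_consistency(records, 2))   # 0.5
```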
http://arxiv.org/abs/2312.16012v1
{ "authors": [ "Yuhang Liu", "Daowan Peng", "Wei Wei", "Yuanyuan Fu", "Wenfeng Xie", "Dangyang Chen" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231226114522", "title": "Detection-based Intermediate Supervision for Visual Question Answering" }